Compare commits


131 Commits

Author SHA1 Message Date
1102
74b8bfac72 fix: Ensure RevisionMetadataPanel refreshes on revision change (#19580)
Signed-off-by: nueavv <nuguni@kakao.com>
2024-08-20 11:50:12 -04:00
Kunho Lee
7a1dfc2307 feat(hydrator): handle sourceHydrator fields from webhook (#19397)
* feat(hydrator): handle sourceHydrator fields from webhook

Signed-off-by: daengdaengLee <gunho1020@gmail.com>

* fix: use GetDrySource instead of GetHydratorDrySource

Signed-off-by: daengdaengLee <gunho1020@gmail.com>

* test: test if the hydration is properly triggered in the webhook when SourceHydrator is configured

Signed-off-by: daengdaengLee <gunho1020@gmail.com>

---------

Signed-off-by: daengdaengLee <gunho1020@gmail.com>
2024-08-19 10:13:17 -04:00
Michael Crenshaw
89c22c2f95 Merge branch 'commit-server' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-14 15:10:23 -04:00
Michael Crenshaw
cb2feec273 feat(hydrator): add commit-server component
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

go mod tidy

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

one test file for both implementations

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

simplify

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix test for linux

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix git client mock

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix git client mock

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

address comments

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

unit tests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

lint

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-14 12:30:45 -04:00
Michael Crenshaw
73371f981a hacky but functional creds UI support
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-14 12:29:56 -04:00
Michael Crenshaw
79829eca98 feat(hydrator): add sourceHydrator types
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-12 17:08:02 -04:00
Michael Crenshaw
77029cbdc9 partial creds add UI
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-12 16:02:22 -04:00
Michael Crenshaw
b8601fe940 more logs
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-12 13:24:50 -04:00
Michael Crenshaw
dc675de3dd test error
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-12 12:44:31 -04:00
Michael Crenshaw
ac27d50d31 simplify
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-12 12:38:38 -04:00
Kevin
6045ea243a feat: Refactor writeManifests function for simplification and better error handling (#19447)
Signed-off-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
2024-08-12 12:23:51 -04:00
Michael Crenshaw
0789d44429 simplify, more tests
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-11 20:56:41 -04:00
Michael Crenshaw
e41b4851b0 lint
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-11 17:07:23 -04:00
Michael Crenshaw
01cd32916c ui improvements
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-11 16:57:54 -04:00
Michael Crenshaw
1e233c1949 fix enum, ui fixes
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-11 16:36:51 -04:00
Michael Crenshaw
4911a41e00 tidy
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-11 16:17:24 -04:00
Michael Crenshaw
f4a368cd44 ui improvements; new queue for app hydration; lots of bug fixes
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-11 16:09:04 -04:00
Michael Crenshaw
257c419550 revert unnecessary changes
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-10 20:44:51 -04:00
Michael Crenshaw
f541a8e9d5 remove previewer code; will wait for another PR
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-10 20:23:10 -04:00
Michael Crenshaw
4507984205 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-10 20:19:58 -04:00
Michael Crenshaw
95021e1984 more logging, don't always re-hydrate
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-10 19:01:43 -04:00
Michael Crenshaw
053d0f98f3 test adding a second app
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-10 17:03:53 -04:00
Michael Crenshaw
38ddab3c78 handle changes to sourceHydrator field
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-10 16:28:09 -04:00
Kim Minju
91e63d75f8 test: add Unit Tests for SecureMkdirAll Function (#19471)
* Add test for SecureMkdirAll with existing directory

- Verify that SecureMkdirAll correctly handles the case where the directory already exists.
- Ensure that the same path is returned when creating an existing directory.

Signed-off-by: miiiinju1 <gms07073@ynu.ac.kr>

* Add test for SecureMkdirAll with file path

- Ensure SecureMkdirAll returns an error when a file path is provided instead of a directory.
- Check that the error message indicates a failure to create a directory.

Signed-off-by: miiiinju1 <gms07073@ynu.ac.kr>

* Add test for SecureMkdirAll with dot-dot path

- Test SecureMkdirAll with a path containing `..` to ensure it doesn't traverse outside the root directory.
- Verify that the resulting path is as expected and does not use relative paths that escape the root directory.

Signed-off-by: miiiinju1 <gms07073@ynu.ac.kr>

---------

Signed-off-by: miiiinju1 <gms07073@ynu.ac.kr>
2024-08-10 13:19:32 -04:00
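
The three cases above pin down SecureMkdirAll's contract: creating an already-existing directory is idempotent, a file path is rejected, and `..` segments cannot escape the root. A minimal, hypothetical sketch of that contract; the real implementation and tests live in the Argo CD codebase and may differ, and the securejoin-based stand-in below is only illustrative:

```go
package util_test

import (
	"os"
	"path/filepath"
	"strings"
	"testing"

	securejoin "github.com/cyphar/filepath-securejoin"
	"github.com/stretchr/testify/require"
)

// secureMkdirAll stands in for the real implementation: it resolves
// unsafePath inside root so it can never escape, then creates the directory.
func secureMkdirAll(root, unsafePath string, mode os.FileMode) (string, error) {
	fullPath, err := securejoin.SecureJoin(root, unsafePath)
	if err != nil {
		return "", err
	}
	if err := os.MkdirAll(fullPath, mode); err != nil {
		return "", err
	}
	return fullPath, nil
}

func TestSecureMkdirAll(t *testing.T) {
	root := t.TempDir()

	// Existing directory: creating it twice must return the same path.
	first, err := secureMkdirAll(root, "sub", 0o755)
	require.NoError(t, err)
	second, err := secureMkdirAll(root, "sub", 0o755)
	require.NoError(t, err)
	require.Equal(t, first, second)

	// Dot-dot path: "../escape" must be confined under root.
	path, err := secureMkdirAll(root, filepath.Join("..", "escape"), 0o755)
	require.NoError(t, err)
	require.True(t, strings.HasPrefix(path, root), "path must stay under root")
}
```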
Michael Crenshaw
9ea972556b test failed sync with hydrateTo
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-09 15:22:52 -04:00
Michael Crenshaw
e227d71abf better filenames
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-09 14:53:10 -04:00
Michael Crenshaw
d82f140300 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-09 14:06:51 -04:00
Michael Crenshaw
b0693a642c Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-09 10:18:50 -04:00
Robin Lieb
8727b17067 feat(hydrator): add metrics on getting git creds user info (#19427)
* feat(hydrator): add metrics on getting git creds user info

Signed-off-by: Robin Lieb <robin.j.lieb@gmail.com>

* feat: move credential helper to private function

---------

Signed-off-by: Robin Lieb <robin.j.lieb@gmail.com>
2024-08-08 16:13:05 -04:00
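
For context on what "metrics on getting git creds user info" can look like, here is a hedged sketch using Prometheus's Go client. The metric and label names below are invented for illustration; the real ones are defined in PR #19427.

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// userInfoRequestDuration times credential user-info lookups.
// The metric name and label are hypothetical.
var userInfoRequestDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "argocd_git_creds_userinfo_request_duration_seconds",
		Help: "Time taken to fetch user info for git credentials.",
	},
	[]string{"credential_type"},
)

func init() {
	prometheus.MustRegister(userInfoRequestDuration)
}

// fetchUserInfo wraps a credential lookup with a duration observation.
func fetchUserInfo(credType string, lookup func() error) error {
	start := time.Now()
	err := lookup()
	userInfoRequestDuration.WithLabelValues(credType).Observe(time.Since(start).Seconds())
	return err
}
```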
Michael Crenshaw
06d499deb0 remove unused fields
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-08 12:02:25 -04:00
Michael Crenshaw
17cd8b756e Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 16:41:48 -04:00
Michael Crenshaw
768bc92d26 fix import
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 16:39:37 -04:00
Michael Crenshaw
8802205257 remove unused service
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 16:39:06 -04:00
Michael Crenshaw
6b8a00342d remove duplicate implementation of secure dir creation logic
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 14:54:28 -04:00
Michael Crenshaw
012887a246 fix test
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 13:24:14 -04:00
mirageoasis
d83ceab37c test: add unit test for util.go makeSecureTempDir function (#19369)
* test: added test for util.go

Signed-off-by: mirageoasis <kimhw0820@naver.com>

* refactor: fixed to require statement

Signed-off-by: mirageoasis <kimhw0820@naver.com>

* style tweaks

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: mirageoasis <kimhw0820@naver.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 12:17:21 -04:00
Michael Crenshaw
fcd240b704 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 11:58:43 -04:00
Michael Crenshaw
fa2338016b fix build error
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-04 22:00:26 -04:00
Michael Crenshaw
2819ef942f fix build error
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-04 21:50:36 -04:00
Kunho Lee
9d29353961 test: add unit test for util.git.Client.CommitAndPush (#19354)
Signed-off-by: daengdaengLee <gunho1020@gmail.com>
2024-08-04 18:00:41 -04:00
thisishwan2
c7cf4bb9e2 feat: use MkdirAll by OS(#19301) (#19323)
* feat: add mkdir provider interface and Implementation for os

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* fix: Using MkdirAll Functions by OS

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* fix: Modifying implementations for linux and non-linux

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* remove: unused interface

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* fix: remove struct & use SecureMkdirAll func

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* fix: use SecureMkdirAll func

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* feat: add unit test SecureMkdirAll func

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* tweaks

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: thisishwan2 <feel000617@gmail.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-04 17:57:37 -04:00
Michael Crenshaw
14b19dae7a rename endpoints to allow more clarity when more are added
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-01 15:29:41 -04:00
Michael Crenshaw
fbcdc65308 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-01 13:25:15 -04:00
Michael Crenshaw
7cc04b830e Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-31 13:53:55 -04:00
Kevin
1bb5763bd7 test: Add unit tests for WriteForPaths function (#19311)
* test: Add unit tests for WriteForPaths function

Signed-off-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>

* test: refactor unit tests for WriteForPaths function

Signed-off-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>

---------

Signed-off-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
2024-07-31 13:40:14 -04:00
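
WriteForPaths is the hydrator step that materializes hydrated manifests on disk, one directory per destination path. A rough, hypothetical sketch of that idea follows; the types and names below are invented, so see the actual function for the real behavior.

```go
package hydrator

import (
	"os"
	"path/filepath"

	"sigs.k8s.io/yaml"
)

// hydratedPath is an illustrative stand-in for the hydrator's per-path output.
type hydratedPath struct {
	Path      string
	Manifests []map[string]interface{}
}

// writeForPaths writes each path's manifests to <root>/<path>/manifest.yaml,
// separating resources with YAML document markers.
func writeForPaths(root string, paths []hydratedPath) error {
	for _, p := range paths {
		dir := filepath.Join(root, p.Path)
		if err := os.MkdirAll(dir, 0o755); err != nil {
			return err
		}
		out, err := os.Create(filepath.Join(dir, "manifest.yaml"))
		if err != nil {
			return err
		}
		for _, m := range p.Manifests {
			b, err := yaml.Marshal(m)
			if err != nil {
				out.Close()
				return err
			}
			if _, err := out.Write(append([]byte("---\n"), b...)); err != nil {
				out.Close()
				return err
			}
		}
		if err := out.Close(); err != nil {
			return err
		}
	}
	return nil
}
```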
Michael Crenshaw
2d8824482c Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 17:10:45 -04:00
Michael Crenshaw
c49ffae139 housekeeping
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 14:53:12 -04:00
Kevin
2315f8ae44 test: Add unit tests for HydratorHelper methods (#19308)
* test: Add unit tests for HydratorHelper methods

Signed-off-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>

* lint

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 14:15:51 -04:00
Michael Crenshaw
9221cdd780 simplify commit API, more docstrings
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 14:07:57 -04:00
Michael Crenshaw
652541c065 lint, refactor
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 12:56:28 -04:00
Michael Crenshaw
e781a08d86 graceful shutdown of commit server
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 11:19:36 -04:00
Michael Crenshaw
519fadc538 make directory for commit server output
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 10:39:24 -04:00
Michael Crenshaw
62ff4093e8 record e2e coverage
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 09:35:40 -04:00
Michael Crenshaw
31b46348d9 fix test
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-29 21:05:04 -04:00
Michael Crenshaw
e9385c8949 codegen
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-29 21:04:59 -04:00
Michael Crenshaw
d516fd82e1 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-29 20:46:45 -04:00
Michael Crenshaw
40b7280953 lint, fix old function calls
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-29 20:45:33 -04:00
Michael Crenshaw
3b7bbcefdf wait feature for CLI, more debug lines in commit server, e2e test works
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-29 20:42:03 -04:00
Kunho Lee
f7fece9ad2 test: add unit test for util.git.Client.RemoveContents (#19288)
Signed-off-by: daengdaengLee <gunho1020@gmail.com>
2024-07-29 10:09:05 -04:00
Michael Crenshaw
4d0969aa28 lint and codegen
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-27 18:54:15 -04:00
Michael Crenshaw
64c1d4d4af Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-27 18:21:59 -04:00
Kunho Lee
9554f57690 test: add unit test for util.git.Client.CheckoutOrNew (#19274)
Signed-off-by: daengdaengLee <gunho1020@gmail.com>
2024-07-27 18:20:27 -04:00
Michael Crenshaw
73b953ddb9 add app create CLI support and stub out an e2e test
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-27 18:15:56 -04:00
Michael Crenshaw
4a39e608b1 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-26 16:51:08 -04:00
Michael Crenshaw
8e6fd8ff46 more cleanup
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-26 16:50:25 -04:00
Michael Crenshaw
51bd8b11c3 clean up commit.go
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-26 15:10:52 -04:00
Michael Crenshaw
0c8242f7e4 more metrics, plus docs
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-25 17:04:30 -04:00
Michael Crenshaw
d69a78f63f add more git metrics
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-25 17:00:00 -04:00
Michael Crenshaw
11daa4e153 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-25 16:35:42 -04:00
Michael Crenshaw
6acd67e26b add metrics server
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-24 13:14:30 -04:00
Kunho Lee
449fa4e29f test: add unit test for util.git.Client.CheckoutOrOrphan (#19191)
* test: add unit test for util.git.Client.CheckoutOrOrphan

Signed-off-by: daengdaengLee <gunho1020@gmail.com>

* test,fix: set author before do git actions

Signed-off-by: daengdaengLee <gunho1020@gmail.com>

---------

Signed-off-by: daengdaengLee <gunho1020@gmail.com>
2024-07-24 11:38:22 -04:00
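
The "set author before do git actions" fix reflects a general gotcha in git-backed tests: a freshly initialized repository has no user.name or user.email, and `git commit` aborts with "Author identity unknown" on machines without a global identity. A small illustrative helper (names are hypothetical):

```go
package gittest

import (
	"fmt"
	"os/exec"
)

// initTestRepo initializes a repo in dir and makes an initial commit.
// Without the two config calls, the commit would fail on a machine
// that has no global git identity configured.
func initTestRepo(dir string) error {
	cmds := [][]string{
		{"git", "init", "."},
		{"git", "config", "user.name", "test"},
		{"git", "config", "user.email", "test@example.com"},
		{"git", "commit", "--allow-empty", "-m", "init"},
	}
	for _, args := range cmds {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Dir = dir
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %w: %s", args, err, out)
		}
	}
	return nil
}
```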
Michael Crenshaw
e79aa171fa Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-23 12:18:24 -04:00
Michael Crenshaw
40028b4078 fix nil reference error
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-23 11:39:40 -04:00
Michael Crenshaw
2cdb25ad4c support kustomize commands
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-23 11:22:24 -04:00
Michael Crenshaw
6f3ddda2d7 lint
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-23 10:29:01 -04:00
Michael Crenshaw
4b0eeb6a18 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-23 10:26:33 -04:00
Michael Crenshaw
cf90d8ce50 clean up generated swagger doc
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 16:12:06 -04:00
Michael Crenshaw
a9ba5cd3a1 better package name
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 16:00:16 -04:00
Michael Crenshaw
3910aa088a better package name
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 15:59:38 -04:00
Michael Crenshaw
f8f7baf467 fix incorrect socket path
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 15:58:39 -04:00
Michael Crenshaw
28bae8fe4d centralize user info code
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 15:57:13 -04:00
Michael Crenshaw
4eaeee320d remove accidental file
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 14:40:18 -04:00
Michael Crenshaw
a2501be80a Merge branch 'hydrator' of https://github.com/argoproj/argo-cd into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 14:34:57 -04:00
Michael Crenshaw
1edba58774 use mockery.yaml
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 14:33:37 -04:00
Kunho Lee
c3efa44a58 test: add unit test for util.git.Client.SetAuthor (#19128)
Signed-off-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 13:59:39 -04:00
Michael Crenshaw
e8c505fe3c update mocks, basic path redaction, fix tests
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 12:57:34 -04:00
Michael Crenshaw
6b9c743b42 added basic dupe destination detection; still buggy
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-19 17:24:31 -04:00
Michael Crenshaw
282702e571 use different socket paths to avoid collisions during local development
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-19 17:02:08 -04:00
Michael Crenshaw
51122b913d simplify hydrate queue processor - only process for the requested source branch
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-19 16:35:22 -04:00
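
Several commits above ("use workqueue", "new queue for app hydration", and this one) describe the standard client-go workqueue pattern, keyed so that the processor only hydrates the source branch that was actually requested. A minimal sketch under those assumptions; the key format and helper names are invented:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func hydrate(key string) error {
	fmt.Println("hydrating", key)
	return nil
}

// processNext pulls one repo|branch key off the queue and hydrates it,
// retrying with backoff on failure.
func processNext(q workqueue.RateLimitingInterface) bool {
	item, shutdown := q.Get()
	if shutdown {
		return false
	}
	defer q.Done(item)
	if err := hydrate(item.(string)); err != nil {
		q.AddRateLimited(item) // retry with backoff
		return true
	}
	q.Forget(item) // clear retry history on success
	return true
}

func main() {
	q := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	// Key by repo URL plus the requested source branch, so the processor
	// only acts on the branch that triggered the request.
	q.Add("https://github.com/example/repo.git|main")
	processNext(q)
	q.ShutDown()
}
```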
Michael Crenshaw
8c1983de60 fix field names, readability
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-19 16:15:59 -04:00
Michael Crenshaw
a4422a4745 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-19 13:49:10 -04:00
Michael Crenshaw
bbe8b0cc2e Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-15 16:10:44 -04:00
Michael Crenshaw
34a00bdc10 don't expose unnecessary public function
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-15 15:54:58 -04:00
Michael Crenshaw
c5961c9f5c lint
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-08 15:58:22 -04:00
Michael Crenshaw
8046ec330f fix retry, use main git client
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-08 15:47:21 -04:00
Michael Crenshaw
36b74225fc Merge remote-tracking branch 'origin/master' into previewer
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-05 15:53:46 -04:00
Michael Crenshaw
1dfd0eb9ff fix UI error
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-14 10:10:31 -04:00
Michael Crenshaw
d27d8c0bae lint
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-11 22:24:45 -04:00
Michael Crenshaw
bbd83207ca lint
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-11 22:14:24 -04:00
Michael Crenshaw
f4a12e8cdc lint ui
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-11 22:07:20 -04:00
Michael Crenshaw
a677d43c1b Merge remote-tracking branch 'origin/master' into previewer
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-11 22:02:53 -04:00
Michael Crenshaw
02b7258c68 more docs, some abstraction of functions
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-11 21:29:45 -04:00
Michael Crenshaw
e0fc424011 commit service as standalone deployment
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-10 21:43:05 -04:00
Michael Crenshaw
26c4235361 remove temp dir
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-10 16:27:44 -04:00
Michael Crenshaw
e784875825 Merge remote-tracking branch 'origin/master' into previewer
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-06 13:18:08 -04:00
Michael Crenshaw
ffcbb8068a Merge remote-tracking branch 'origin/master' into previewer
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-06 10:28:18 -04:00
Michael Crenshaw
d28b53f43d fix diff order and update bot name
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-03 14:29:10 -04:00
Michael Crenshaw
bc11a49e25 add merge action
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-29 13:23:19 -04:00
Michael Crenshaw
e598333528 use shared auth, better error handling
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-29 13:23:02 -04:00
Michael Crenshaw
f1c58bb7db finish previewer PoC
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-28 16:14:19 -04:00
Michael Crenshaw
68c968047c Merge remote-tracking branch 'crenshaw-dev/manifest-hydrator' into previewer
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-24 16:36:42 -04:00
Omer Azmon
b277580aff add sleep to main loop
2024-05-24 10:16:37 -07:00
Omer Azmon
044375a797 add previewer POC untested
2024-05-23 19:47:49 -07:00
Alexandre Gaudreault
62d4894d51 fix: for helm namespace (#23)
* feat: manifest hydrator

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* it's monitoring both branches now

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* push works w/ my personal creds

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* write metadata, readme, and commands

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* handle missing branches, missing manifest files, and no-op changes

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* don't set release name or namespace to values from the app CR

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* more determinism

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* handle new branches

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* show hydration progress

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* use workqueue

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* use securejoin, use log contexts, clean up temp dirs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* use app auth for github only

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* it works

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* retry failed operations

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* fix

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* codegen

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* it just works

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-23 15:42:20 -04:00
Michael Crenshaw
74d4e980f9 use securejoin, use log contexts, clean up temp dirs
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:08:31 -04:00
Michael Crenshaw
9dc94b08a6 use workqueue
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:08:31 -04:00
Michael Crenshaw
29d8937de6 show hydration progress
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:08:28 -04:00
Michael Crenshaw
0caa4d3d33 handle new branches
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:05:24 -04:00
Michael Crenshaw
207f0aba5d more determinism
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:05:21 -04:00
Michael Crenshaw
8257311db2 don't set release name or namespace to values from the app CR
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:02:33 -04:00
Michael Crenshaw
9944e2a8d1 handle missing branches, missing manifest files, and no-op changes
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:02:33 -04:00
Michael Crenshaw
87d2f3f263 write metadata, readme, and commands
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:02:28 -04:00
Michael Crenshaw
55f3fa8b53 push works w/ my personal creds
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 19:59:30 -04:00
Michael Crenshaw
ccf18147b2 it's monitoring both branches now
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 19:59:27 -04:00
Michael Crenshaw
a1e8e1f17d feat: manifest hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 19:55:05 -04:00
Argo CD Hydrator
f816ada864 hydrate 6241416c8ef1a94d3c80f38fa2f1f865608886f7
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-24 11:09:24 -04:00
Argo CD Hydrator
ebb71a0018 hydrate f0cc9868393be8abf156adb963f66b9ec8417f37
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-24 10:56:51 -04:00
Argo CD Hydrator
8bd52a7b74 hydrate 665d6fd139b3bcf2df0ff4b464f2fe42597f2455
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-24 10:54:59 -04:00
Argo CD Hydrator
2d43a8331f hydrate
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-24 10:54:56 -04:00
Michael Crenshaw
dd7952e389 it's monitoring both branches now
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-23 17:02:39 -04:00
Michael Crenshaw
037d098d7c feat: manifest hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-22 10:47:52 -04:00
664 changed files with 56415 additions and 23170 deletions

View File

@@ -31,7 +31,7 @@ jobs:
docs: ${{ steps.filter.outputs.docs_any_changed }}
steps:
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- uses: tj-actions/changed-files@e9772d140489982e0e3704fea5ee93d536f1e275 # v45.0.1
- uses: tj-actions/changed-files@c65cd883420fd2eb864698a825fc4162dd94482c # v44.5.7
id: filter
with:
# Any file which is not under docs/, ui/ or is not a markdown file is counted as a backend file
@@ -172,7 +172,7 @@ jobs:
- name: Run all unit tests
run: make test-local
- name: Generate test results artifacts
uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874 # v4.4.0
uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # v4.3.6
with:
name: test-results
path: test-results
@@ -236,7 +236,7 @@ jobs:
- name: Run all unit tests
run: make test-race-local
- name: Generate test results artifacts
uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874 # v4.4.0
uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # v4.3.6
with:
name: race-results
path: test-results/
@@ -323,8 +323,6 @@ jobs:
NODE_ENV: production
NODE_ONLINE_ENV: online
HOST_ARCH: amd64
# If we're on the master branch, set the codecov token so that we upload bundle analysis
CODECOV_TOKEN: ${{ github.ref == 'refs/heads/master' && secrets.CODECOV_TOKEN || '' }}
working-directory: ui/
- name: Run ESLint
run: yarn lint
@@ -367,11 +365,11 @@ jobs:
path: test-results
- name: combine-go-coverage
# We generate coverage reports for all Argo CD components, but only the applicationset-controller,
# app-controller, and repo-server report contain coverage data. The other components currently don't shut down
# gracefully, so no coverage data is produced. Once those components are fixed, we can add references to their
# coverage output directories.
# app-controller, repo-server, and commit-server report contain coverage data. The other components currently
# don't shut down gracefully, so no coverage data is produced. Once those components are fixed, we can add
# references to their coverage output directories.
run: |
go tool covdata percent -i=test-results,e2e-code-coverage/applicationset-controller,e2e-code-coverage/repo-server,e2e-code-coverage/app-controller -o test-results/full-coverage.out
go tool covdata percent -i=test-results,e2e-code-coverage/applicationset-controller,e2e-code-coverage/repo-server,e2e-code-coverage/app-controller,e2e-code-coverage/commit-server -o test-results/full-coverage.out
- name: Upload code coverage information to codecov.io
uses: codecov/codecov-action@e28ff129e5465c2c0dcc6f003fc735cb6ae0c673 # v4.5.0
with:
@@ -379,13 +377,6 @@ jobs:
fail_ci_if_error: true
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
- name: Upload test results to Codecov
if: github.ref == 'refs/heads/master' && github.event_name == 'push' && github.repository == 'argoproj/argo-cd'
uses: codecov/test-results-action@1b5b448b98e58ba90d1a1a1d9fcb72ca2263be46 # v1.0.0
with:
file: test-results/junit.xml
fail_ci_if_error: true
token: ${{ secrets.CODECOV_TOKEN }}
- name: Perform static code analysis using SonarCloud
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -506,13 +497,13 @@ jobs:
goreman run stop-all || echo "goreman trouble"
sleep 30
- name: Upload e2e coverage report
uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874 # v4.4.0
uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # v4.3.6
with:
name: e2e-code-coverage
path: /tmp/coverage
if: ${{ matrix.k3s.latest }}
- name: Upload e2e-server logs
uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874 # v4.4.0
uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # v4.3.6
with:
name: e2e-server-k8s${{ matrix.k3s.version }}.log
path: /tmp/e2e-server.log

View File

@@ -143,7 +143,7 @@ jobs:
- name: Build and push container image
id: image
uses: docker/build-push-action@5cd11c3a4ced054e52742c5fd54dca954e0edd85 #v6.7.0
uses: docker/build-push-action@16ebe778df0e7752d2cfcbd924afdbbd89c1a755 #v6.6.1
with:
context: .
platforms: ${{ inputs.platforms }}

View File

@@ -64,7 +64,7 @@ jobs:
git stash pop
- name: Create pull request
uses: peter-evans/create-pull-request@d121e62763d8cc35b5fb1710e887d6e69a52d3a4 # v7.0.2
uses: peter-evans/create-pull-request@c5a7806660adbe173f04e3e038b0ccdcd758773c # v6.1.0
with:
commit-message: "Bump version to ${{ inputs.TARGET_VERSION }}"
title: "Bump version to ${{ inputs.TARGET_VERSION }} on ${{ inputs.TARGET_BRANCH }} branch"

View File

@@ -295,7 +295,7 @@ jobs:
if: ${{ env.UPDATE_VERSION == 'true' }}
- name: Create PR to update VERSION on master branch
uses: peter-evans/create-pull-request@d121e62763d8cc35b5fb1710e887d6e69a52d3a4 # v7.0.2
uses: peter-evans/create-pull-request@c5a7806660adbe173f04e3e038b0ccdcd758773c # v6.1.0
with:
commit-message: Bump version in master
title: "chore: Bump version in master"

View File

@@ -54,7 +54,7 @@ jobs:
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874 # v4.4.0
uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # v4.3.6
with:
name: SARIF file
path: results.sarif

.gitpod.Dockerfile (vendored, 2 lines changed)
View File

@@ -1,4 +1,4 @@
FROM gitpod/workspace-full@sha256:230285e0b949e6d728d384b2029a4111db7b9c87c182f22f32a0be9e36b225df
FROM gitpod/workspace-full@sha256:fbff2dce4236535b96de0e94622bbe9a44fba954ca064862004c34e3e08904df
USER root

View File

@@ -1,12 +1,9 @@
issues:
exclude:
- SA1019
- SA5011
max-issues-per-linter: 0
max-same-issues: 0
exclude-rules:
- path: '(.+)_test\.go'
linters:
- unparam
linters:
enable:
- errcheck
@@ -20,7 +17,6 @@ linters:
- misspell
- staticcheck
- testifylint
- unparam
- unused
- whitespace
linters-settings:

View File

@@ -26,6 +26,13 @@ packages:
github.com/argoproj/argo-cd/v2/applicationset/utils:
interfaces:
Renderer:
github.com/argoproj/argo-cd/v2/commitserver/commit:
interfaces:
RepoClientFactory:
github.com/argoproj/argo-cd/v2/commitserver/apiclient:
interfaces:
CommitServiceClient:
Clientset:
github.com/argoproj/argo-cd/v2/controller/cache:
interfaces:
LiveStateCache:
@@ -43,7 +50,6 @@ packages:
ProjectGetter:
RbacEnforcer:
SettingsGetter:
UserGetter:
github.com/argoproj/argo-cd/v2/util/db:
interfaces:
ArgoDB:
@@ -66,4 +72,4 @@ packages:
SessionServiceClient:
github.com/argoproj/argo-cd/v2/pkg/apiclient/cluster:
interfaces:
ClusterServiceServer:
ClusterServiceServer:

View File

@@ -4,7 +4,7 @@ ARG BASE_IMAGE=docker.io/library/ubuntu:24.04@sha256:3f85b7caad41a95462cf5b787d8
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.23.1@sha256:2fe82a3f3e006b4f2a316c6a21f62b66e1330ae211d039bb8d1128e12ed57bf1 AS builder
FROM docker.io/library/golang:1.22.6@sha256:2bd56f00ff47baf33e64eae7996b65846c7cb5e0a46e0a882ef179fd89654afa AS builder
RUN echo 'deb http://archive.debian.org/debian buster-backports main' >> /etc/apt/sources.list
@@ -83,7 +83,7 @@ WORKDIR /home/argocd
####################################################################################################
# Argo CD UI stage
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/node:22.8.0@sha256:bd00c03095f7586432805dbf7989be10361d27987f93de904b1fc003949a4794 AS argocd-ui
FROM --platform=$BUILDPLATFORM docker.io/library/node:22.3.0@sha256:5e4044ff6001d06e7748e35bfa4f80c73cf5f5a7360a1b782995e038a01b0585 AS argocd-ui
WORKDIR /src
COPY ["ui/package.json", "ui/yarn.lock", "./"]
@@ -101,7 +101,7 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.23.1@sha256:2fe82a3f3e006b4f2a316c6a21f62b66e1330ae211d039bb8d1128e12ed57bf1 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.22.6@sha256:2bd56f00ff47baf33e64eae7996b65846c7cb5e0a46e0a882ef179fd89654afa AS argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd

View File

@@ -254,7 +254,7 @@ cli: test-tools-image
.PHONY: cli-local
cli-local: clean-debug
CGO_ENABLED=${CGO_FLAG} GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -gcflags="all=-N -l" $(COVERAGE_FLAG) -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${CLI_NAME} ./cmd
CGO_ENABLED=${CGO_FLAG} GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build $(COVERAGE_FLAG) -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${CLI_NAME} ./cmd
.PHONY: gen-resources-cli-local
gen-resources-cli-local: clean-debug
@@ -472,6 +472,7 @@ start-e2e-local: mod-vendor-local dep-ui-local cli-local
mkdir -p /tmp/coverage/repo-server
mkdir -p /tmp/coverage/applicationset-controller
mkdir -p /tmp/coverage/notification
mkdir -p /tmp/coverage/commit-server
# set paths for locally managed ssh known hosts and tls certs data
ARGOCD_SSH_DATA_PATH=/tmp/argo-e2e/app/config/ssh \
ARGOCD_TLS_DATA_PATH=/tmp/argo-e2e/app/config/tls \
@@ -553,7 +554,7 @@ build-docs-local:
.PHONY: build-docs
build-docs:
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install mkdocs; pip install $$(mkdocs get-deps); mkdocs build'
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install -r docs/requirements.txt; mkdocs build'
.PHONY: serve-docs-local
serve-docs-local:
@@ -561,7 +562,7 @@ serve-docs-local:
.PHONY: serve-docs
serve-docs:
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -p 8000:8000 -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install mkdocs; pip install $$(mkdocs get-deps); mkdocs serve -a $$(ip route get 1 | awk '\''{print $$7}'\''):8000'
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -p 8000:8000 -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install -r docs/requirements.txt; mkdocs serve -a $$(ip route get 1 | awk '\''{print $$7}'\''):8000'
# Verify that kubectl can connect to your K8s cluster from Docker
.PHONY: verify-kube-connect

View File

@@ -1,9 +1,11 @@
controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/app-controller} HOSTNAME=testappcontroller-1 FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-application-controller $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --server-side-diff-enabled=${ARGOCD_APPLICATION_CONTROLLER_SERVER_SIDE_DIFF:-'false'}"
controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/app-controller} HOSTNAME=testappcontroller-1 FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-application-controller $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --commit-server localhost:${ARGOCD_E2E_COMMITSERVER_PORT:-8086} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --server-side-diff-enabled=${ARGOCD_APPLICATION_CONTROLLER_SERVER_SIDE_DIFF:-'false'}"
api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/api-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''}"
dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v2/cmd gendexcfg -o `pwd`/dist/dex.yaml && (test -f dist/dex.yaml || { echo 'Failed to generate dex configuration'; exit 1; }) && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:$(grep "image: ghcr.io/dexidp/dex" manifests/base/dex/argocd-dex-server-deployment.yaml | cut -d':' -f3) dex serve /dex.yaml"
redis: bash -c "if [ \"$ARGOCD_REDIS_LOCAL\" = 'true' ]; then redis-server --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; else docker run --rm --name argocd-redis -i -p ${ARGOCD_E2E_REDIS_PORT:-6379}:${ARGOCD_E2E_REDIS_PORT:-6379} docker.io/library/redis:$(grep "image: redis" manifests/base/redis/argocd-redis-deployment.yaml | cut -d':' -f3) --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; fi"
repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
commit-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/commit-server} FORCE_LOG_COLORS=1 ARGOCD_BINARY_NAME=argocd-commit-server $COMMAND --loglevel debug --port ${ARGOCD_E2E_COMMITSERVER_PORT:-8086}"
cmp-server: [ "$ARGOCD_E2E_TEST" = 'true' ] && exit 0 || [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_BINARY_NAME=argocd-cmp-server ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} $COMMAND --config-dir-path ./test/cmp --loglevel debug --otlp-address=${ARGOCD_OTLP_ADDRESS}"
commit-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/commit-server} FORCE_LOG_COLORS=1 ARGOCD_BINARY_NAME=argocd-commit-server $COMMAND --loglevel debug --port ${ARGOCD_E2E_COMMITSERVER_PORT:-8086}"
ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
git-server: test/fixture/testrepos/start-git.sh
helm-registry: test/fixture/testrepos/start-helm-registry.sh

View File

@@ -11,7 +11,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [7shifts](https://www.7shifts.com/)
1. [Adevinta](https://www.adevinta.com/)
1. [Adfinis](https://adfinis.com)
1. [Adobe](https://www.adobe.com/)
1. [Adventure](https://jp.adventurekk.com/)
1. [Adyen](https://www.adyen.com)
1. [AirQo](https://airqo.net/)
@@ -30,7 +29,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Arctiq Inc.](https://www.arctiq.ca)
2. [Arturia](https://www.arturia.com)
1. [ARZ Allgemeines Rechenzentrum GmbH](https://www.arz.at/)
1. [Augury](https://www.augury.com/)
1. [Autodesk](https://www.autodesk.com)
1. [Axians ACSP](https://www.axians.fr)
1. [Axual B.V.](https://axual.com)
@@ -41,7 +39,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Beez Innovation Labs](https://www.beezlabs.com/)
1. [Bedag Informatik AG](https://www.bedag.ch/)
1. [Beleza Na Web](https://www.belezanaweb.com.br/)
1. [Believable Bots](https://believablebots.io)
1. [BigPanda](https://bigpanda.io)
1. [BioBox Analytics](https://biobox.io)
1. [BMW Group](https://www.bmwgroup.com/)
@@ -210,7 +207,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Moengage](https://www.moengage.com/)
1. [Money Forward](https://corp.moneyforward.com/en/)
1. [MOO Print](https://www.moo.com/)
1. [Mozilla](https://www.mozilla.org)
1. [MTN Group](https://www.mtn.com/)
1. [Municipality of The Hague](https://www.denhaag.nl/)
1. [My Job Glasses](https://myjobglasses.com)

View File

@@ -1 +1 @@
2.13.2
2.13.0

View File

@@ -18,7 +18,6 @@ import (
"context"
"fmt"
"reflect"
"sort"
"strings"
"time"
@@ -32,10 +31,11 @@ import (
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/client-go/kubernetes"
k8scache "k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/retry"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
@@ -45,11 +45,11 @@ import (
"github.com/argoproj/argo-cd/v2/applicationset/controllers/template"
"github.com/argoproj/argo-cd/v2/applicationset/generators"
"github.com/argoproj/argo-cd/v2/applicationset/metrics"
"github.com/argoproj/argo-cd/v2/applicationset/status"
"github.com/argoproj/argo-cd/v2/applicationset/utils"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/util/db"
"github.com/argoproj/argo-cd/v2/util/glob"
argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned"
@@ -90,7 +90,7 @@ type ApplicationSetReconciler struct {
SCMRootCAPath string
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
Cache cache.Cache
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -101,7 +101,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
var applicationSetInfo argov1alpha1.ApplicationSet
parametersGenerated := false
startTime := time.Now()
if err := r.Get(ctx, req.NamespacedName, &applicationSetInfo); err != nil {
if client.IgnoreNotFound(err) != nil {
logCtx.WithError(err).Infof("unable to get ApplicationSet: '%v' ", err)
@@ -109,10 +109,6 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, client.IgnoreNotFound(err)
}
defer func() {
r.Metrics.ObserveReconcile(&applicationSetInfo, time.Since(startTime))
}()
// Do not attempt to further reconcile the ApplicationSet if it is being deleted.
if applicationSetInfo.ObjectMeta.DeletionTimestamp != nil {
appsetName := applicationSetInfo.ObjectMeta.Name
@@ -246,8 +242,20 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
if r.EnableProgressiveSyncs {
// trigger appropriate application syncs if RollingSync strategy is enabled
if progressiveSyncsRollingSyncStrategyEnabled(&applicationSetInfo) {
validApps = r.syncValidApplications(logCtx, &applicationSetInfo, appSyncMap, appMap, validApps)
if progressiveSyncsStrategyEnabled(&applicationSetInfo, "RollingSync") {
validApps, err = r.syncValidApplications(logCtx, &applicationSetInfo, appSyncMap, appMap, validApps)
if err != nil {
_ = r.setApplicationSetStatusCondition(ctx,
&applicationSetInfo,
argov1alpha1.ApplicationSetCondition{
Type: argov1alpha1.ApplicationSetConditionErrorOccurred,
Message: err.Error(),
Reason: argov1alpha1.ApplicationSetReasonSyncApplicationError,
Status: argov1alpha1.ApplicationSetConditionStatusTrue,
}, parametersGenerated,
)
return ctrl.Result{}, err
}
}
}
@@ -401,21 +409,8 @@ func (r *ApplicationSetReconciler) setApplicationSetStatusCondition(ctx context.
paramtersGeneratedCondition := getParametersGeneratedCondition(paramtersGenerated, condition.Message)
resourceUpToDateCondition := getResourceUpToDateCondition(errOccurred, condition.Message, condition.Reason)
evaluatedTypes := map[argov1alpha1.ApplicationSetConditionType]bool{
argov1alpha1.ApplicationSetConditionErrorOccurred: true,
argov1alpha1.ApplicationSetConditionParametersGenerated: true,
argov1alpha1.ApplicationSetConditionResourcesUpToDate: true,
}
newConditions := []argov1alpha1.ApplicationSetCondition{errOccurredCondition, paramtersGeneratedCondition, resourceUpToDateCondition}
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
evaluatedTypes[argov1alpha1.ApplicationSetConditionRolloutProgressing] = true
if condition.Type == argov1alpha1.ApplicationSetConditionRolloutProgressing {
newConditions = append(newConditions, condition)
}
}
needToUpdateConditions := false
for _, condition := range newConditions {
// do nothing if appset already has same condition
@@ -426,32 +421,28 @@ func (r *ApplicationSetReconciler) setApplicationSetStatusCondition(ctx context.
}
}
}
evaluatedTypes := map[argov1alpha1.ApplicationSetConditionType]bool{
argov1alpha1.ApplicationSetConditionErrorOccurred: true,
argov1alpha1.ApplicationSetConditionParametersGenerated: true,
argov1alpha1.ApplicationSetConditionResourcesUpToDate: true,
}
if needToUpdateConditions || len(applicationSet.Status.Conditions) < len(newConditions) {
if needToUpdateConditions || len(applicationSet.Status.Conditions) < 3 {
// fetch updated Application Set object before updating it
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: applicationSet.Namespace, Name: applicationSet.Name}
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
namespacedName := types.NamespacedName{Namespace: applicationSet.Namespace, Name: applicationSet.Name}
if err := r.Get(ctx, namespacedName, applicationSet); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
updatedAppset.Status.SetConditions(
newConditions, evaluatedTypes,
)
applicationSet.Status.SetConditions(
newConditions, evaluatedTypes,
)
// Update the newly fetched object with new set of conditions
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(applicationSet)
return nil
})
// Update the newly fetched object with new set of conditions
err := r.Client.Status().Update(ctx, applicationSet)
if err != nil && !apierr.IsNotFound(err) {
return fmt.Errorf("unable to set application set condition: %w", err)
}
@@ -512,7 +503,7 @@ func (r *ApplicationSetReconciler) getMinRequeueAfter(applicationSetInfo *argov1
func ignoreNotAllowedNamespaces(namespaces []string) predicate.Predicate {
return predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
return utils.IsNamespaceAllowed(namespaces, e.Object.GetNamespace())
return glob.MatchStringInList(namespaces, e.Object.GetNamespace(), false)
},
}
}
@@ -555,6 +546,25 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
Complete(r)
}
func (r *ApplicationSetReconciler) updateCache(ctx context.Context, obj client.Object, logger *log.Entry) {
informer, err := r.Cache.GetInformer(ctx, obj)
if err != nil {
logger.Errorf("failed to get informer: %v", err)
return
}
// The controller runtime abstract away informers creation
// so unfortunately could not find any other way to access informer store.
k8sInformer, ok := informer.(k8scache.SharedInformer)
if !ok {
logger.Error("informer is not a kubernetes informer")
return
}
if err := k8sInformer.GetStore().Update(obj); err != nil {
logger.Errorf("failed to update cache: %v", err)
return
}
}
// createOrUpdateInCluster will create / update application resources in the cluster.
// - For new applications, it will call create
// - For existing application, it will call update
@@ -652,6 +662,7 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
}
continue
}
r.updateCache(ctx, found, appLog)
if action != controllerutil.OperationResultNone {
// Don't pollute etcd with "unchanged Application" events
@@ -818,6 +829,7 @@ func (r *ApplicationSetReconciler) removeFinalizerOnInvalidDestination(ctx conte
if err := r.Client.Patch(ctx, updated, patch); err != nil {
return fmt.Errorf("error updating finalizers: %w", err)
}
r.updateCache(ctx, updated, appLog)
// Application must have updated list of finalizers
updated.DeepCopyInto(app)
@@ -847,9 +859,12 @@ func (r *ApplicationSetReconciler) removeOwnerReferencesOnDeleteAppSet(ctx conte
}
func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context, logCtx *log.Entry, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application, appMap map[string]argov1alpha1.Application) (map[string]bool, error) {
appDependencyList, appStepMap := r.buildAppDependencyList(logCtx, appset, desiredApplications)
appDependencyList, appStepMap, err := r.buildAppDependencyList(logCtx, appset, desiredApplications)
if err != nil {
return nil, fmt.Errorf("failed to build app dependency list: %w", err)
}
_, err := r.updateApplicationSetApplicationStatus(ctx, logCtx, &appset, applications, appStepMap)
_, err = r.updateApplicationSetApplicationStatus(ctx, logCtx, &appset, applications, appStepMap)
if err != nil {
return nil, fmt.Errorf("failed to update applicationset app status: %w", err)
}
@@ -859,27 +874,34 @@ func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context,
logCtx.Infof("step %v: %+v", i+1, step)
}
appSyncMap := r.buildAppSyncMap(appset, appDependencyList, appMap)
appSyncMap, err := r.buildAppSyncMap(ctx, appset, appDependencyList, appMap)
if err != nil {
return nil, fmt.Errorf("failed to build app sync map: %w", err)
}
logCtx.Infof("Application allowed to sync before maxUpdate?: %+v", appSyncMap)
_, err = r.updateApplicationSetApplicationStatusProgress(ctx, logCtx, &appset, appSyncMap, appStepMap)
_, err = r.updateApplicationSetApplicationStatusProgress(ctx, logCtx, &appset, appSyncMap, appStepMap, appMap)
if err != nil {
return nil, fmt.Errorf("failed to update applicationset application status progress: %w", err)
}
_ = r.updateApplicationSetApplicationStatusConditions(ctx, &appset)
_, err = r.updateApplicationSetApplicationStatusConditions(ctx, &appset)
if err != nil {
return nil, fmt.Errorf("failed to update applicationset application status conditions: %w", err)
}
return appSyncMap, nil
}
// this list tracks which Applications belong to each RollingUpdate step
func (r *ApplicationSetReconciler) buildAppDependencyList(logCtx *log.Entry, applicationSet argov1alpha1.ApplicationSet, applications []argov1alpha1.Application) ([][]string, map[string]int) {
func (r *ApplicationSetReconciler) buildAppDependencyList(logCtx *log.Entry, applicationSet argov1alpha1.ApplicationSet, applications []argov1alpha1.Application) ([][]string, map[string]int, error) {
if applicationSet.Spec.Strategy == nil || applicationSet.Spec.Strategy.Type == "" || applicationSet.Spec.Strategy.Type == "AllAtOnce" {
return [][]string{}, map[string]int{}
return [][]string{}, map[string]int{}, nil
}
steps := []argov1alpha1.ApplicationSetRolloutStep{}
if progressiveSyncsRollingSyncStrategyEnabled(&applicationSet) {
if progressiveSyncsStrategyEnabled(&applicationSet, "RollingSync") {
steps = applicationSet.Spec.Strategy.RollingSync.Steps
}
@@ -920,7 +942,7 @@ func (r *ApplicationSetReconciler) buildAppDependencyList(logCtx *log.Entry, app
}
}
return appDependencyList, appStepMap
return appDependencyList, appStepMap, nil
}
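For orientation, a hedged example of the RollingSync steps this function walks. Field names follow the v1alpha1 types used throughout this diff; the env labels and the 50% cap are illustrative:
package example

import (
	"k8s.io/apimachinery/pkg/util/intstr"

	argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)

// Two RollingSync steps: dev first, then prod with at most half of that
// step's Applications updating at once.
var maxUpdate = intstr.FromString("50%")

var steps = []argov1alpha1.ApplicationSetRolloutStep{
	{
		MatchExpressions: []argov1alpha1.ApplicationMatchExpression{
			{Key: "env", Operator: "In", Values: []string{"dev"}},
		},
	},
	{
		MatchExpressions: []argov1alpha1.ApplicationMatchExpression{
			{Key: "env", Operator: "In", Values: []string{"prod"}},
		},
		MaxUpdate: &maxUpdate,
	},
}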
func labelMatchedExpression(logCtx *log.Entry, val string, matchExpression argov1alpha1.ApplicationMatchExpression) bool {
@@ -944,7 +966,7 @@ func labelMatchedExpression(logCtx *log.Entry, val string, matchExpression argov
}
// this map is used to determine which Applications are ready to be updated at each stage of the reconciler loop
func (r *ApplicationSetReconciler) buildAppSyncMap(applicationSet argov1alpha1.ApplicationSet, appDependencyList [][]string, appMap map[string]argov1alpha1.Application) map[string]bool {
func (r *ApplicationSetReconciler) buildAppSyncMap(ctx context.Context, applicationSet argov1alpha1.ApplicationSet, appDependencyList [][]string, appMap map[string]argov1alpha1.Application) (map[string]bool, error) {
appSyncMap := map[string]bool{}
syncEnabled := true
@@ -981,11 +1003,11 @@ func (r *ApplicationSetReconciler) buildAppSyncMap(applicationSet argov1alpha1.A
}
}
return appSyncMap
return appSyncMap, nil
}
func appSyncEnabledForNextStep(appset *argov1alpha1.ApplicationSet, app argov1alpha1.Application, appStatus argov1alpha1.ApplicationSetApplicationStatus) bool {
if progressiveSyncsRollingSyncStrategyEnabled(appset) {
if progressiveSyncsStrategyEnabled(appset, "RollingSync") {
// we still need to complete the current step if the Application is not yet Healthy or there are still pending Application changes
return isApplicationHealthy(app) && appStatus.Status == "Healthy"
}
@@ -993,8 +1015,16 @@ func appSyncEnabledForNextStep(appset *argov1alpha1.ApplicationSet, app argov1al
return true
}
func progressiveSyncsRollingSyncStrategyEnabled(appset *argov1alpha1.ApplicationSet) bool {
return appset.Spec.Strategy != nil && appset.Spec.Strategy.RollingSync != nil && appset.Spec.Strategy.Type == "RollingSync" && len(appset.Spec.Strategy.RollingSync.Steps) > 0
func progressiveSyncsStrategyEnabled(appset *argov1alpha1.ApplicationSet, strategyType string) bool {
if appset.Spec.Strategy == nil || appset.Spec.Strategy.Type != strategyType {
return false
}
if strategyType == "RollingSync" && appset.Spec.Strategy.RollingSync == nil {
return false
}
return true
}
func isApplicationHealthy(app argov1alpha1.Application) bool {
@@ -1017,16 +1047,6 @@ func statusStrings(app argov1alpha1.Application) (string, string, string) {
return healthStatusString, syncStatusString, operationPhaseString
}
func getAppStep(appName string, appStepMap map[string]int) int {
// if an application is not selected by any match expression, it defaults to step -1
step := -1
if appStep, ok := appStepMap[appName]; ok {
// 1-based indexing
step = appStep + 1
}
return step
}
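One behavioral nuance of dropping this helper, an observation rather than something stated in the diff: the inlined appStepMap[name] + 1 relies on Go's map zero value, so an Application absent from the map reports step 1 instead of the -1 sentinel above. A runnable illustration:
package main

import "fmt"

func main() {
	appStepMap := map[string]int{"matched-app": 2}
	fmt.Println(appStepMap["matched-app"] + 1)   // 3
	fmt.Println(appStepMap["unmatched-app"] + 1) // 1: a missing key yields the zero value, not -1
}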
// check each Application's status and promote it to the next status if needed
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
now := metav1.Now()
@@ -1046,7 +1066,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
LastTransitionTime: &now,
Message: "No Application status found, defaulting status to Waiting.",
Status: "Waiting",
Step: fmt.Sprint(getAppStep(app.Name, appStepMap)),
Step: fmt.Sprint(appStepMap[app.Name] + 1),
TargetRevisions: app.Status.GetRevisions(),
}
} else {
@@ -1056,13 +1076,13 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
// upgrade any existing AppStatus that might have been set by an older argo-cd version
// note: currentAppStatus.TargetRevisions may be set to empty list earlier during migrations,
// to prevent other usage of r.Client.Status().Update to fail before reaching here.
if len(currentAppStatus.TargetRevisions) == 0 {
if currentAppStatus.TargetRevisions == nil || len(currentAppStatus.TargetRevisions) == 0 {
currentAppStatus.TargetRevisions = app.Status.GetRevisions()
}
}
appOutdated := false
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
if progressiveSyncsStrategyEnabled(applicationSet, "RollingSync") {
appOutdated = syncStatusString == "OutOfSync"
}
@@ -1071,7 +1091,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Waiting"
currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
currentAppStatus.Step = fmt.Sprint(getAppStep(currentAppStatus.Application, appStepMap))
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
currentAppStatus.TargetRevisions = app.Status.GetRevisions()
}
@@ -1089,14 +1109,14 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing."
currentAppStatus.Step = fmt.Sprint(getAppStep(currentAppStatus.Application, appStepMap))
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
}
} else if operationPhaseString == "Running" || healthStatusString == "Progressing" {
logCtx.Infof("Application %v has entered Progressing status, updating its ApplicationSet status to Progressing", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource became Progressing, updating status from Pending to Progressing."
currentAppStatus.Step = fmt.Sprint(getAppStep(currentAppStatus.Application, appStepMap))
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
}
}
@@ -1105,7 +1125,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = healthStatusString
currentAppStatus.Message = "Application resource is already Healthy, updating status from Waiting to Healthy."
currentAppStatus.Step = fmt.Sprint(getAppStep(currentAppStatus.Application, appStepMap))
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
}
if currentAppStatus.Status == "Progressing" && isApplicationHealthy(app) {
@@ -1113,7 +1133,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = healthStatusString
currentAppStatus.Message = "Application resource became Healthy, updating status from Progressing to Healthy."
currentAppStatus.Step = fmt.Sprint(getAppStep(currentAppStatus.Application, appStepMap))
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
}
appStatuses = append(appStatuses, currentAppStatus)
@@ -1128,18 +1148,20 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
}
// check Applications that are in Waiting status and promote them to Pending if needed
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appStepMap map[string]int, appMap map[string]argov1alpha1.Application) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
now := metav1.Now()
appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applicationSet.Status.ApplicationStatus))
// if we have no RollingUpdate steps, clear out the existing ApplicationStatus entries
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
if applicationSet.Spec.Strategy != nil && applicationSet.Spec.Strategy.Type != "" && applicationSet.Spec.Strategy.Type != "AllAtOnce" {
updateCountMap := []int{}
totalCountMap := []int{}
length := len(applicationSet.Spec.Strategy.RollingSync.Steps)
length := 0
if progressiveSyncsStrategyEnabled(applicationSet, "RollingSync") {
length = len(applicationSet.Spec.Strategy.RollingSync.Steps)
}
for s := 0; s < length; s++ {
updateCountMap = append(updateCountMap, 0)
totalCountMap = append(totalCountMap, 0)
@@ -1149,15 +1171,17 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
for _, appStatus := range applicationSet.Status.ApplicationStatus {
totalCountMap[appStepMap[appStatus.Application]] += 1
if appStatus.Status == "Pending" || appStatus.Status == "Progressing" {
updateCountMap[appStepMap[appStatus.Application]] += 1
if progressiveSyncsStrategyEnabled(applicationSet, "RollingSync") {
if appStatus.Status == "Pending" || appStatus.Status == "Progressing" {
updateCountMap[appStepMap[appStatus.Application]] += 1
}
}
}
for _, appStatus := range applicationSet.Status.ApplicationStatus {
maxUpdateAllowed := true
maxUpdate := &intstr.IntOrString{}
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
if progressiveSyncsStrategyEnabled(applicationSet, "RollingSync") {
maxUpdate = applicationSet.Spec.Strategy.RollingSync.Steps[appStepMap[appStatus.Application]].MaxUpdate
}
@@ -1175,7 +1199,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
if updateCountMap[appStepMap[appStatus.Application]] >= maxUpdateVal {
maxUpdateAllowed = false
logCtx.Infof("Application %v is not allowed to update yet, %v/%v Applications already updating in step %v in AppSet %v", appStatus.Application, updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, getAppStep(appStatus.Application, appStepMap), applicationSet.Name)
logCtx.Infof("Application %v is not allowed to update yet, %v/%v Applications already updating in step %v in AppSet %v", appStatus.Application, updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, appStepMap[appStatus.Application]+1, applicationSet.Name)
}
}
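The maxUpdate handled above is an intstr.IntOrString, so it may be an absolute count or a percentage of the step's total. A sketch of the resolution semantics using the apimachinery helper; the controller's exact call site falls outside this hunk, so treat this as illustrative:
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	total := 10 // Applications assigned to this RollingSync step

	// A percentage scales against the step total (rounded down here).
	pct := intstr.FromString("30%")
	n, err := intstr.GetScaledValueFromIntOrPercent(&pct, total, false)
	fmt.Println(n, err) // 3 <nil>

	// A plain integer passes through unchanged.
	abs := intstr.FromInt(2)
	n, _ = intstr.GetScaledValueFromIntOrPercent(&abs, total, false)
	fmt.Println(n) // 2
}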
@@ -1184,7 +1208,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
appStatus.LastTransitionTime = &now
appStatus.Status = "Pending"
appStatus.Message = "Application moved to Pending status, watching for the Application resource to start Progressing."
appStatus.Step = fmt.Sprint(getAppStep(appStatus.Application, appStepMap))
appStatus.Step = fmt.Sprint(appStepMap[appStatus.Application] + 1)
updateCountMap[appStepMap[appStatus.Application]] += 1
}
@@ -1201,7 +1225,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
return appStatuses, nil
}
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusConditions(ctx context.Context, applicationSet *argov1alpha1.ApplicationSet) []argov1alpha1.ApplicationSetCondition {
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusConditions(ctx context.Context, applicationSet *argov1alpha1.ApplicationSet) ([]argov1alpha1.ApplicationSetCondition, error) {
appSetProgressing := false
for _, appStatus := range applicationSet.Status.ApplicationStatus {
if appStatus.Status != "Healthy" {
@@ -1226,7 +1250,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusConditio
Message: "ApplicationSet Rollout Rollout started",
Reason: argov1alpha1.ApplicationSetReasonApplicationSetModified,
Status: argov1alpha1.ApplicationSetConditionStatusTrue,
}, true,
}, false,
)
} else if !appSetProgressing && appSetConditionProgressing {
_ = r.setApplicationSetStatusCondition(ctx,
@@ -1236,11 +1260,11 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusConditio
Message: "ApplicationSet Rollout Rollout complete",
Reason: argov1alpha1.ApplicationSetReasonApplicationSetRolloutComplete,
Status: argov1alpha1.ApplicationSetConditionStatusFalse,
}, true,
}, false,
)
}
return applicationSet.Status.Conditions
return applicationSet.Status.Conditions, nil
}
func findApplicationStatusIndex(appStatuses []argov1alpha1.ApplicationSetApplicationStatus, application string) int {
@@ -1266,29 +1290,8 @@ func (r *ApplicationSetReconciler) migrateStatus(ctx context.Context, appset *ar
}
if update {
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
updatedAppset.Status.ApplicationStatus = appset.Status.ApplicationStatus
// Update the newly fetched object with new set of ApplicationStatus
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(appset)
return nil
})
if err != nil && !apierr.IsNotFound(err) {
return fmt.Errorf("unable to set application set condition: %w", err)
if err := r.Client.Status().Update(ctx, appset); err != nil {
return fmt.Errorf("unable to set application set status: %w", err)
}
}
return nil
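The removed block wrapped the status write in retry.RetryOnConflict, while the simplified form lets a 409 Conflict surface directly to the caller. For reference, a minimal conflict-retrying status update looks roughly like this (a sketch, not the controller's code):
package example

import (
	"context"

	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"

	argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)

// updateStatusWithRetry re-fetches and re-applies the status on a 409
// Conflict. retry.DefaultRetry allows 5 attempts with a short backoff,
// matching the comment in the removed code.
func updateStatusWithRetry(ctx context.Context, c client.Client, appset *argov1alpha1.ApplicationSet) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		fresh := &argov1alpha1.ApplicationSet{}
		if err := c.Get(ctx, client.ObjectKeyFromObject(appset), fresh); err != nil {
			return err
		}
		fresh.Status = appset.Status
		return c.Status().Update(ctx, fresh)
	})
}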
@@ -1302,35 +1305,22 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
for _, status := range statusMap {
statuses = append(statuses, status)
}
sort.Slice(statuses, func(i, j int) bool {
return statuses[i].Name < statuses[j].Name
})
appset.Status.Resources = statuses
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
updatedAppset.Status.Resources = appset.Status.Resources
// Update the newly fetched object with new status resources
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(appset)
return nil
})
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
err := r.Client.Status().Update(ctx, appset)
if err != nil {
logCtx.Errorf("unable to set application set status: %v", err)
return fmt.Errorf("unable to set application set status: %w", err)
}
if err := r.Get(ctx, namespacedName, appset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
return nil
}
@@ -1365,36 +1355,26 @@ func (r *ApplicationSetReconciler) setAppSetApplicationStatus(ctx context.Contex
for i := range applicationStatuses {
applicationSet.Status.SetApplicationStatus(applicationStatuses[i])
}
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
updatedAppset.Status.ApplicationStatus = applicationSet.Status.ApplicationStatus
// Update the newly fetched object with new set of ApplicationStatus
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(applicationSet)
return nil
})
// Update the newly fetched object with new set of ApplicationStatus
err := r.Client.Status().Update(ctx, applicationSet)
if err != nil {
logCtx.Errorf("unable to set application set status: %v", err)
return fmt.Errorf("unable to set application set status: %w", err)
}
if err := r.Get(ctx, namespacedName, applicationSet); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
}
return nil
}
func (r *ApplicationSetReconciler) syncValidApplications(logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appMap map[string]argov1alpha1.Application, validApps []argov1alpha1.Application) []argov1alpha1.Application {
func (r *ApplicationSetReconciler) syncValidApplications(logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appMap map[string]argov1alpha1.Application, validApps []argov1alpha1.Application) ([]argov1alpha1.Application, error) {
rolloutApps := []argov1alpha1.Application{}
for i := range validApps {
pruneEnabled := false
@@ -1415,15 +1395,15 @@ func (r *ApplicationSetReconciler) syncValidApplications(logCtx *log.Entry, appl
// check appSyncMap to determine which Applications are ready to be updated and which should be skipped
if appSyncMap[validApps[i].Name] && appMap[validApps[i].Name].Status.Sync.Status == "OutOfSync" && appSetStatusPending {
logCtx.Infof("triggering sync for application: %v, prune enabled: %v", validApps[i].Name, pruneEnabled)
validApps[i] = syncApplication(validApps[i], pruneEnabled)
validApps[i], _ = syncApplication(validApps[i], pruneEnabled)
}
rolloutApps = append(rolloutApps, validApps[i])
}
return rolloutApps
return rolloutApps, nil
}
// used by the RollingSync Progressive Sync strategy to trigger a sync of a particular Application resource
func syncApplication(application argov1alpha1.Application, prune bool) argov1alpha1.Application {
func syncApplication(application argov1alpha1.Application, prune bool) (argov1alpha1.Application, error) {
operation := argov1alpha1.Operation{
InitiatedBy: argov1alpha1.OperationInitiator{
Username: "applicationset-controller",
@@ -1449,7 +1429,7 @@ func syncApplication(application argov1alpha1.Application, prune bool) argov1alp
}
application.Operation = &operation
return application
return application, nil
}
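The Operation literal above is truncated by the hunk boundary. A hedged reconstruction of the shape it plausibly takes; anything beyond InitiatedBy and Prune is an assumption, not read from this diff:
package example

import (
	argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)

// buildSyncOperation sketches the elided literal: the controller marks the
// sync as automated and initiated by itself, and propagates prune.
func buildSyncOperation(prune bool) argov1alpha1.Operation {
	return argov1alpha1.Operation{
		InitiatedBy: argov1alpha1.OperationInitiator{
			Username:  "applicationset-controller",
			Automated: true,
		},
		Sync: &argov1alpha1.SyncOperation{
			Prune: prune,
		},
	}
}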
func getOwnsHandlerPredicates(enableProgressiveSyncs bool) predicate.Funcs {

File diff suppressed because it is too large

@@ -4,8 +4,6 @@ import (
"context"
"fmt"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
log "github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/types"
@@ -26,29 +24,29 @@ type clusterSecretEventHandler struct {
Client client.Client
}
func (h *clusterSecretEventHandler) Create(ctx context.Context, e event.CreateEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
func (h *clusterSecretEventHandler) Create(ctx context.Context, e event.CreateEvent, q workqueue.RateLimitingInterface) {
h.queueRelatedAppGenerators(ctx, q, e.Object)
}
func (h *clusterSecretEventHandler) Update(ctx context.Context, e event.UpdateEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
func (h *clusterSecretEventHandler) Update(ctx context.Context, e event.UpdateEvent, q workqueue.RateLimitingInterface) {
h.queueRelatedAppGenerators(ctx, q, e.ObjectNew)
}
func (h *clusterSecretEventHandler) Delete(ctx context.Context, e event.DeleteEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
func (h *clusterSecretEventHandler) Delete(ctx context.Context, e event.DeleteEvent, q workqueue.RateLimitingInterface) {
h.queueRelatedAppGenerators(ctx, q, e.Object)
}
func (h *clusterSecretEventHandler) Generic(ctx context.Context, e event.GenericEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
func (h *clusterSecretEventHandler) Generic(ctx context.Context, e event.GenericEvent, q workqueue.RateLimitingInterface) {
h.queueRelatedAppGenerators(ctx, q, e.Object)
}
// addRateLimitingInterface defines the Add method of workqueue.RateLimitingInterface, allowing us to easily mock
// it for testing purposes.
type addRateLimitingInterface[T comparable] interface {
Add(item T)
type addRateLimitingInterface interface {
Add(item interface{})
}
func (h *clusterSecretEventHandler) queueRelatedAppGenerators(ctx context.Context, q addRateLimitingInterface[reconcile.Request], object client.Object) {
func (h *clusterSecretEventHandler) queueRelatedAppGenerators(ctx context.Context, q addRateLimitingInterface, object client.Object) {
// Check for the label, look up all ApplicationSets that might match the cluster, and queue them all
if object.GetLabels()[generators.ArgoCDSecretTypeLabel] != generators.ArgoCDSecretTypeCluster {
return


@@ -551,18 +551,24 @@ func TestClusterEventHandler(t *testing.T) {
handler.queueRelatedAppGenerators(context.Background(), &mockAddRateLimitingInterface, &test.secret)
assert.False(t, mockAddRateLimitingInterface.errorOccurred)
assert.ElementsMatch(t, mockAddRateLimitingInterface.addedItems, test.expectedRequests)
})
}
}
// Add checks the type, and adds it to the internal list of received additions
func (obj *mockAddRateLimitingInterface) Add(item reconcile.Request) {
obj.addedItems = append(obj.addedItems, item)
func (obj *mockAddRateLimitingInterface) Add(item interface{}) {
if req, ok := item.(ctrl.Request); ok {
obj.addedItems = append(obj.addedItems, req)
} else {
obj.errorOccurred = true
}
}
type mockAddRateLimitingInterface struct {
addedItems []reconcile.Request
errorOccurred bool
addedItems []ctrl.Request
}
func TestNestedGeneratorHasClusterGenerator_NestedClusterGenerator(t *testing.T) {


@@ -17,7 +17,6 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"github.com/argoproj/argo-cd/v2/applicationset/generators"
appsetmetrics "github.com/argoproj/argo-cd/v2/applicationset/metrics"
"github.com/argoproj/argo-cd/v2/applicationset/services/mocks"
argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
@@ -61,7 +60,7 @@ func TestRequeueAfter(t *testing.T) {
terminalGenerators := map[string]generators.Generator{
"List": generators.NewListGenerator(),
"Clusters": generators.NewClusterGenerator(k8sClient, ctx, appClientset, "argocd"),
"Git": generators.NewGitGenerator(mockServer, "namespace"),
"Git": generators.NewGitGenerator(mockServer),
"SCMProvider": generators.NewSCMProviderGenerator(fake.NewClientBuilder().WithObjects(&corev1.Secret{}).Build(), scmConfig),
"ClusterDecisionResource": generators.NewDuckTypeGenerator(ctx, fakeDynClient, appClientset, "argocd"),
"PullRequest": generators.NewPullRequestGenerator(k8sClient, scmConfig),
@@ -90,13 +89,11 @@ func TestRequeueAfter(t *testing.T) {
}
client := fake.NewClientBuilder().WithScheme(scheme).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics(client)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(0),
Generators: topLevelGenerators,
Metrics: metrics,
}
type args struct {


@@ -218,7 +218,7 @@ func (g *DuckTypeGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.A
res = append(res, params)
}
} else {
log.Warningf("clusterDecisionResource status.%s missing", statusListKey)
log.Warningf("clusterDecisionResource status." + statusListKey + " missing")
return nil, nil
}


@@ -346,7 +346,7 @@ func getMockClusterGenerator() Generator {
func getMockGitGenerator() Generator {
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "namespace")
gitGenerator := NewGitGenerator(&argoCDServiceMock)
return gitGenerator
}


@@ -24,16 +24,13 @@ import (
var _ Generator = (*GitGenerator)(nil)
type GitGenerator struct {
repos services.Repos
namespace string
repos services.Repos
}
func NewGitGenerator(repos services.Repos, namespace string) Generator {
func NewGitGenerator(repos services.Repos) Generator {
g := &GitGenerator{
repos: repos,
namespace: namespace,
repos: repos,
}
return g
}
@@ -62,25 +59,21 @@ func (g *GitGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.Applic
noRevisionCache := appSet.RefreshRequired()
verifyCommit := false
// When the project field is templated, the contents of the git repo are required to run the git generator and resolve the templated value,
// but the git generator cannot be invoked until the commit signature has been verified.
// In this case, we skip the signature verification.
if !strings.Contains(appSet.Spec.Template.Spec.Project, "{{") {
project := appSet.Spec.Template.Spec.Project
appProject := &argoprojiov1alpha1.AppProject{}
namespace := g.namespace
if namespace == "" {
namespace = appSet.Namespace
}
if err := client.Get(context.TODO(), types.NamespacedName{Name: project, Namespace: namespace}, appProject); err != nil {
return nil, fmt.Errorf("error getting project %s: %w", project, err)
}
// we need to verify the signature on the Git revision if GPG is enabled
verifyCommit = len(appProject.Spec.SignatureKeys) > 0 && gpg.IsGPGEnabled()
var project string
if strings.Contains(appSet.Spec.Template.Spec.Project, "{{") {
project = appSetGenerator.Git.Template.Spec.Project
} else {
project = appSet.Spec.Template.Spec.Project
}
appProject := &argoprojiov1alpha1.AppProject{}
if err := client.Get(context.TODO(), types.NamespacedName{Name: appSet.Spec.Template.Spec.Project, Namespace: appSet.Namespace}, appProject); err != nil {
return nil, fmt.Errorf("error getting project %s: %w", project, err)
}
// we need to verify the signature on the Git revision if GPG is enabled
verifyCommit := appProject.Spec.SignatureKeys != nil && len(appProject.Spec.SignatureKeys) > 0 && gpg.IsGPGEnabled()
var err error
var res []map[string]interface{}
if len(appSetGenerator.Git.Directories) != 0 {


@@ -323,7 +323,7 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock)
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -624,7 +624,7 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock)
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -989,7 +989,7 @@ cluster:
argoCDServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock)
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1345,7 +1345,7 @@ cluster:
argoCDServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock)
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1383,114 +1383,3 @@ cluster:
})
}
}
func TestGitGenerator_GenerateParams(t *testing.T) {
cases := []struct {
name string
directories []argoprojiov1alpha1.GitDirectoryGeneratorItem
pathParamPrefix string
repoApps []string
repoPathsError error
repoFileContents map[string][]byte
values map[string]string
expected []map[string]interface{}
expectedError error
appset argoprojiov1alpha1.ApplicationSet
callGetDirectories bool
}{
{
name: "Signature Verification - ignores templated project field",
repoApps: []string{
"app1",
},
repoPathsError: nil,
appset: argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
Namespace: "namespace",
},
Spec: argoprojiov1alpha1.ApplicationSetSpec{
Generators: []argoprojiov1alpha1.ApplicationSetGenerator{{
Git: &argoprojiov1alpha1.GitGenerator{
RepoURL: "RepoURL",
Revision: "Revision",
Directories: []argoprojiov1alpha1.GitDirectoryGeneratorItem{{Path: "*"}},
PathParamPrefix: "",
Values: map[string]string{
"foo": "bar",
},
},
}},
Template: argoprojiov1alpha1.ApplicationSetTemplate{
Spec: argoprojiov1alpha1.ApplicationSpec{
Project: "{{.project}}",
},
},
},
},
callGetDirectories: true,
expected: []map[string]interface{}{{"path": "app1", "path.basename": "app1", "path.basenameNormalized": "app1", "path[0]": "app1", "values.foo": "bar"}},
expectedError: nil,
},
{
name: "Signature Verification - Checks for non-templated project field",
repoApps: []string{
"app1",
},
repoPathsError: nil,
appset: argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
Namespace: "namespace",
},
Spec: argoprojiov1alpha1.ApplicationSetSpec{
Generators: []argoprojiov1alpha1.ApplicationSetGenerator{{
Git: &argoprojiov1alpha1.GitGenerator{
RepoURL: "RepoURL",
Revision: "Revision",
Directories: []argoprojiov1alpha1.GitDirectoryGeneratorItem{{Path: "*"}},
PathParamPrefix: "",
Values: map[string]string{
"foo": "bar",
},
},
}},
Template: argoprojiov1alpha1.ApplicationSetTemplate{
Spec: argoprojiov1alpha1.ApplicationSpec{
Project: "project",
},
},
},
},
callGetDirectories: false,
expected: []map[string]interface{}{{"path": "app1", "path.basename": "app1", "path.basenameNormalized": "app1", "path[0]": "app1", "values.foo": "bar"}},
expectedError: fmt.Errorf("error getting project project: appprojects.argoproj.io \"project\" not found"),
},
}
for _, testCase := range cases {
argoCDServiceMock := mocks.Repos{}
if testCase.callGetDirectories {
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCase.repoApps, testCase.repoPathsError)
}
gitGenerator := NewGitGenerator(&argoCDServiceMock, "namespace")
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
appProject := argoprojiov1alpha1.AppProject{}
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&appProject).Build()
got, err := gitGenerator.GenerateParams(&testCase.appset.Spec.Generators[0], &testCase.appset, client)
if testCase.expectedError != nil {
require.EqualError(t, err, testCase.expectedError.Error())
} else {
require.NoError(t, err)
assert.Equal(t, testCase.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
}
}


@@ -1089,7 +1089,7 @@ func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
repoServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(map[string][]byte{
"some/path.json": []byte("test: content"),
}, nil)
gitGenerator := NewGitGenerator(repoServiceMock, "")
gitGenerator := NewGitGenerator(repoServiceMock)
matrixGenerator := NewMatrixGenerator(map[string]Generator{
"List": listGeneratorMock,


@@ -168,7 +168,7 @@ func (g *PullRequestGenerator) selectServiceProvider(ctx context.Context, genera
if err != nil {
return nil, fmt.Errorf("error fetching Secret Bearer token: %w", err)
}
return pullrequest.NewBitbucketServiceBearerToken(ctx, appToken, providerConfig.API, providerConfig.Project, providerConfig.Repo, g.scmRootCAPath, providerConfig.Insecure, caCerts)
return pullrequest.NewBitbucketServiceBearerToken(ctx, providerConfig.API, appToken, providerConfig.Project, providerConfig.Repo, g.scmRootCAPath, providerConfig.Insecure, caCerts)
} else if providerConfig.BasicAuth != nil {
password, err := utils.GetSecretRef(ctx, g.client, providerConfig.BasicAuth.PasswordRef, applicationSetInfo.Namespace)
if err != nil {


@@ -14,7 +14,7 @@ func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.In
terminalGenerators := map[string]Generator{
"List": NewListGenerator(),
"Clusters": NewClusterGenerator(c, ctx, k8sClient, namespace),
"Git": NewGitGenerator(argoCDService, namespace),
"Git": NewGitGenerator(argoCDService),
"SCMProvider": NewSCMProviderGenerator(c, scmConfig),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, namespace),
"PullRequest": NewPullRequestGenerator(c, scmConfig),


@@ -1,22 +0,0 @@
package metrics
import (
"github.com/prometheus/client_golang/prometheus"
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
)
// Fake implementation for testing
func NewFakeAppsetMetrics(client ctrlclient.WithWatch) *ApplicationsetMetrics {
reconcileHistogram := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "argocd_appset_reconcile",
Help: "Application reconciliation performance in seconds.",
// Buckets can be set later on after observing median time
},
[]string{"name", "namespace"},
)
return &ApplicationsetMetrics{
reconcileHistogram: reconcileHistogram,
}
}


@@ -1,131 +0,0 @@
package metrics
import (
"time"
"github.com/prometheus/client_golang/prometheus"
"k8s.io/apimachinery/pkg/labels"
"sigs.k8s.io/controller-runtime/pkg/metrics"
argoappv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
applisters "github.com/argoproj/argo-cd/v2/pkg/client/listers/application/v1alpha1"
metricsutil "github.com/argoproj/argo-cd/v2/util/metrics"
)
var (
descAppsetLabels *prometheus.Desc
descAppsetDefaultLabels = []string{"namespace", "name"}
descAppsetInfo = prometheus.NewDesc(
"argocd_appset_info",
"Information about applicationset",
append(descAppsetDefaultLabels, "resource_update_status"),
nil,
)
descAppsetGeneratedApps = prometheus.NewDesc(
"argocd_appset_owned_applications",
"Number of applications owned by the applicationset",
descAppsetDefaultLabels,
nil,
)
)
type ApplicationsetMetrics struct {
reconcileHistogram *prometheus.HistogramVec
}
type appsetCollector struct {
lister applisters.ApplicationSetLister
// appsClientSet appclientset.Interface
labels []string
filter func(appset *argoappv1.ApplicationSet) bool
}
func NewApplicationsetMetrics(appsetLister applisters.ApplicationSetLister, appsetLabels []string, appsetFilter func(appset *argoappv1.ApplicationSet) bool) ApplicationsetMetrics {
reconcileHistogram := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "argocd_appset_reconcile",
Help: "Application reconciliation performance in seconds.",
// Buckets can be set later on after observing median time
},
descAppsetDefaultLabels,
)
appsetCollector := newAppsetCollector(appsetLister, appsetLabels, appsetFilter)
// Register collectors and metrics
metrics.Registry.MustRegister(reconcileHistogram)
metrics.Registry.MustRegister(appsetCollector)
return ApplicationsetMetrics{
reconcileHistogram: reconcileHistogram,
}
}
func (m *ApplicationsetMetrics) ObserveReconcile(appset *argoappv1.ApplicationSet, duration time.Duration) {
m.reconcileHistogram.WithLabelValues(appset.Namespace, appset.Name).Observe(duration.Seconds())
}
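Callers are expected to time the reconcile themselves and pass the duration in. A usage sketch, written as if it lived in this package and reusing its imports:
// timeReconcile wraps a reconcile call and records its wall-clock duration.
func timeReconcile(m *ApplicationsetMetrics, appset *argoappv1.ApplicationSet, reconcile func() error) error {
	start := time.Now()
	err := reconcile()
	m.ObserveReconcile(appset, time.Since(start))
	return err
}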
func newAppsetCollector(lister applisters.ApplicationSetLister, labels []string, filter func(appset *argoappv1.ApplicationSet) bool) *appsetCollector {
descAppsetDefaultLabels = []string{"namespace", "name"}
if len(labels) > 0 {
descAppsetLabels = prometheus.NewDesc(
"argocd_appset_labels",
"Applicationset labels translated to Prometheus labels",
append(descAppsetDefaultLabels, metricsutil.NormalizeLabels("label", labels)...),
nil,
)
}
return &appsetCollector{
lister: lister,
labels: labels,
filter: filter,
}
}
// Describe implements the prometheus.Collector interface
func (c *appsetCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- descAppsetInfo
ch <- descAppsetGeneratedApps
if len(c.labels) > 0 {
ch <- descAppsetLabels
}
}
// Collect implements the prometheus.Collector interface
func (c *appsetCollector) Collect(ch chan<- prometheus.Metric) {
appsets, _ := c.lister.List(labels.NewSelector())
for _, appset := range appsets {
if c.filter(appset) {
collectAppset(appset, c.labels, ch)
}
}
}
func collectAppset(appset *argoappv1.ApplicationSet, labelsToCollect []string, ch chan<- prometheus.Metric) {
labelValues := make([]string, 0)
commonLabelValues := []string{appset.Namespace, appset.Name}
for _, label := range labelsToCollect {
labelValues = append(labelValues, appset.GetLabels()[label])
}
resourceUpdateStatus := "Unknown"
for _, condition := range appset.Status.Conditions {
if condition.Type == argoappv1.ApplicationSetConditionResourcesUpToDate {
resourceUpdateStatus = condition.Reason
}
}
if len(labelsToCollect) > 0 {
ch <- prometheus.MustNewConstMetric(descAppsetLabels, prometheus.GaugeValue, 1, append(commonLabelValues, labelValues...)...)
}
ch <- prometheus.MustNewConstMetric(descAppsetInfo, prometheus.GaugeValue, 1, appset.Namespace, appset.Name, resourceUpdateStatus)
ch <- prometheus.MustNewConstMetric(descAppsetGeneratedApps, prometheus.GaugeValue, float64(len(appset.Status.Resources)), appset.Namespace, appset.Name)
}


@@ -1,256 +0,0 @@
package metrics
import (
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/argoproj/argo-cd/v2/applicationset/utils"
argoappv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/runtime"
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
fake "sigs.k8s.io/controller-runtime/pkg/client/fake"
prometheus "github.com/prometheus/client_golang/prometheus"
metricsutil "github.com/argoproj/argo-cd/v2/util/metrics"
"sigs.k8s.io/controller-runtime/pkg/metrics"
"sigs.k8s.io/yaml"
)
var (
applicationsetNamespaces = []string{"argocd", "test-namespace1"}
filter = func(appset *argoappv1.ApplicationSet) bool {
return utils.IsNamespaceAllowed(applicationsetNamespaces, appset.Namespace)
}
collectedLabels = []string{"included/test"}
)
const fakeAppsetList = `
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: test1
namespace: argocd
labels:
included/test: test
not-included.label/test: test
spec:
generators:
- git:
directories:
- path: test/*
repoURL: https://github.com/test/test.git
revision: HEAD
template:
metadata:
name: '{{.path.basename}}'
spec:
destination:
namespace: '{{.path.basename}}'
server: https://kubernetes.default.svc
project: default
source:
path: '{{.path.path}}'
repoURL: https://github.com/test/test.git
targetRevision: HEAD
status:
resources:
- group: argoproj.io
health:
status: Missing
kind: Application
name: test-app1
namespace: argocd
status: OutOfSync
version: v1alpha1
- group: argoproj.io
health:
status: Missing
kind: Application
name: test-app2
namespace: argocd
status: OutOfSync
version: v1alpha1
conditions:
- lastTransitionTime: "2024-01-01T00:00:00Z"
message: Successfully generated parameters for all Applications
reason: ApplicationSetUpToDate
status: "False"
type: ErrorOccurred
- lastTransitionTime: "2024-01-01T00:00:00Z"
message: Successfully generated parameters for all Applications
reason: ParametersGenerated
status: "True"
type: ParametersGenerated
- lastTransitionTime: "2024-01-01T00:00:00Z"
message: ApplicationSet up to date
reason: ApplicationSetUpToDate
status: "True"
type: ResourcesUpToDate
---
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: test2
namespace: argocd
labels:
not-included.label/test: test
spec:
generators:
- git:
directories:
- path: test/*
repoURL: https://github.com/test/test.git
revision: HEAD
template:
metadata:
name: '{{.path.basename}}'
spec:
destination:
namespace: '{{.path.basename}}'
server: https://kubernetes.default.svc
project: default
source:
path: '{{.path.path}}'
repoURL: https://github.com/test/test.git
targetRevision: HEAD
---
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: should-be-filtered-out
namespace: not-allowed
spec:
generators:
- git:
directories:
- path: test/*
repoURL: https://github.com/test/test.git
revision: HEAD
template:
metadata:
name: '{{.path.basename}}'
spec:
destination:
namespace: '{{.path.basename}}'
server: https://kubernetes.default.svc
project: default
source:
path: '{{.path.path}}'
repoURL: https://github.com/test/test.git
targetRevision: HEAD
`
func newFakeAppsets(fakeAppsetYAML string) []argoappv1.ApplicationSet {
var results []argoappv1.ApplicationSet
appsetRawYamls := strings.Split(fakeAppsetYAML, "---")
for _, appsetRawYaml := range appsetRawYamls {
var appset argoappv1.ApplicationSet
err := yaml.Unmarshal([]byte(appsetRawYaml), &appset)
if err != nil {
panic(err)
}
results = append(results, appset)
}
return results
}
func TestApplicationsetCollector(t *testing.T) {
appsetList := newFakeAppsets(fakeAppsetList)
client := initializeClient(appsetList)
metrics.Registry = prometheus.NewRegistry()
appsetCollector := newAppsetCollector(utils.NewAppsetLister(client), collectedLabels, filter)
metrics.Registry.MustRegister(appsetCollector)
req, err := http.NewRequest("GET", "/metrics", nil)
require.NoError(t, err)
rr := httptest.NewRecorder()
handler := promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{})
handler.ServeHTTP(rr, req)
assert.Equal(t, http.StatusOK, rr.Code)
// Test correct appset_info and owned applications
assert.Contains(t, rr.Body.String(), `
argocd_appset_info{name="test1",namespace="argocd",resource_update_status="ApplicationSetUpToDate"} 1
`)
assert.Contains(t, rr.Body.String(), `
argocd_appset_owned_applications{name="test1",namespace="argocd"} 2
`)
// Test label collection - labels outside the configured list should be excluded, and labels in the list included.
assert.Contains(t, rr.Body.String(), `
argocd_appset_labels{label_included_test="test",name="test1",namespace="argocd"} 1
`)
assert.NotContains(t, rr.Body.String(), normalizeLabel("not-included.label/test"))
// If a collected label is not present on the applicationset, the value should be empty
assert.Contains(t, rr.Body.String(), `
argocd_appset_labels{label_included_test="",name="test2",namespace="argocd"} 1
`)
// If ResourcesUpToDate condition is not present on the applicationset the status should be reported as 'Unknown'
assert.Contains(t, rr.Body.String(), `
argocd_appset_info{name="test2",namespace="argocd",resource_update_status="Unknown"} 1
`)
// If there are no resources on the applicationset, the owned-applications gauge should return 0
assert.Contains(t, rr.Body.String(), `
argocd_appset_owned_applications{name="test2",namespace="argocd"} 0
`)
// Test that filter is working
assert.NotContains(t, rr.Body.String(), `name="should-be-filtered-out"`)
}
func TestObserveReconcile(t *testing.T) {
appsetList := newFakeAppsets(fakeAppsetList)
client := initializeClient(appsetList)
metrics.Registry = prometheus.NewRegistry()
appsetMetrics := NewApplicationsetMetrics(utils.NewAppsetLister(client), collectedLabels, filter)
req, err := http.NewRequest("GET", "/metrics", nil)
require.NoError(t, err)
rr := httptest.NewRecorder()
handler := promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{})
appsetMetrics.ObserveReconcile(&appsetList[0], 5*time.Second)
handler.ServeHTTP(rr, req)
assert.Contains(t, rr.Body.String(), `
argocd_appset_reconcile_sum{name="test1",namespace="argocd"} 5
`)
// A single observation should be reflected in the histogram count
assert.Contains(t, rr.Body.String(), `
argocd_appset_reconcile_count{name="test1",namespace="argocd"} 1
`)
}
func initializeClient(appsets []argoappv1.ApplicationSet) ctrlclient.WithWatch {
scheme := runtime.NewScheme()
err := argoappv1.AddToScheme(scheme)
if err != nil {
panic(err)
}
var clientObjects []ctrlclient.Object
for _, appset := range appsets {
clientObjects = append(clientObjects, appset.DeepCopy())
}
return fake.NewClientBuilder().WithScheme(scheme).WithObjects(clientObjects...).Build()
}
func normalizeLabel(label string) string {
return metricsutil.NormalizeLabels("label", []string{label})[0]
}


@@ -19,7 +19,7 @@ type BitbucketCloudPullRequest struct {
ID int `json:"id"`
Title string `json:"title"`
Source BitbucketCloudPullRequestSource `json:"source"`
Author BitbucketCloudPullRequestAuthor `json:"author"`
Author string `json:"author"`
}
type BitbucketCloudPullRequestSource struct {
@@ -35,11 +35,6 @@ type BitbucketCloudPullRequestSourceCommit struct {
Hash string `json:"hash"`
}
// Also have display_name and uuid, but don't plan to use them.
type BitbucketCloudPullRequestAuthor struct {
Nickname string `json:"nickname"`
}
type PullRequestResponse struct {
Page int32 `json:"page"`
Size int32 `json:"size"`
@@ -139,7 +134,7 @@ func (b *BitbucketCloudService) List(_ context.Context) ([]*PullRequest, error)
Title: pull.Title,
Branch: pull.Source.Branch.Name,
HeadSHA: pull.Source.Commit.Hash,
Author: pull.Author.Nickname,
Author: pull.Author,
})
}
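The change above flattens the author field from an object carrying a nickname to a plain string. A runnable sketch of the two JSON shapes side by side:
package main

import (
	"encoding/json"
	"fmt"
)

// Old shape: "author": {"nickname": "testName"}. New shape: "author": "testName".
type prNested struct {
	Author struct {
		Nickname string `json:"nickname"`
	} `json:"author"`
}

type prFlat struct {
	Author string `json:"author"`
}

func main() {
	var n prNested
	_ = json.Unmarshal([]byte(`{"author":{"nickname":"testName"}}`), &n)
	var f prFlat
	_ = json.Unmarshal([]byte(`{"author":"testName"}`), &f)
	fmt.Println(n.Author.Nickname, f.Author) // testName testName
}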


@@ -37,9 +37,7 @@ func defaultHandlerCloud(t *testing.T) func(http.ResponseWriter, *http.Request)
"hash": "1a8dd249c04a"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
}
]
}`)
@@ -156,9 +154,7 @@ func TestListPullRequestPaginationCloud(t *testing.T) {
"hash": "1a8dd249c04a"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
},
{
"id": 102,
@@ -172,9 +168,7 @@ func TestListPullRequestPaginationCloud(t *testing.T) {
"hash": "4cf807e67a6d"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
}
]
}`, r.Host))
@@ -197,9 +191,7 @@ func TestListPullRequestPaginationCloud(t *testing.T) {
"hash": "6344d9623e3b"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
}
]
}`, r.Host))
@@ -347,9 +339,7 @@ func TestListPullRequestBranchMatchCloud(t *testing.T) {
"hash": "1a8dd249c04a"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
},
{
"id": 200,
@@ -363,9 +353,7 @@ func TestListPullRequestBranchMatchCloud(t *testing.T) {
"hash": "4cf807e67a6d"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
}
]
}`, r.Host))
@@ -388,9 +376,7 @@ func TestListPullRequestBranchMatchCloud(t *testing.T) {
"hash": "6344d9623e3b"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
}
]
}`, r.Host))


@@ -46,7 +46,7 @@ func (c *ExtendedClient) GetContents(repo *Repository, path string) (bool, error
return true, nil
}
return false, fmt.Errorf("%s", resp.Status)
return false, fmt.Errorf(resp.Status)
}
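The swap above between fmt.Errorf("%s", resp.Status) and fmt.Errorf(resp.Status) is not cosmetic: a non-constant format string misbehaves whenever the input contains % verbs, which is why go vet flags the latter form. A quick demonstration:
package main

import "fmt"

func main() {
	status := "400 Bad Request: field %s missing" // dynamic input containing a % verb
	fmt.Println(fmt.Errorf(status))       // 400 Bad Request: field %!s(MISSING) missing
	fmt.Println(fmt.Errorf("%s", status)) // 400 Bad Request: field %s missing
}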
var _ SCMProviderService = &BitBucketCloudProvider{}


@@ -1,63 +0,0 @@
package utils
import (
"context"
"k8s.io/apimachinery/pkg/labels"
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
. "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
. "github.com/argoproj/argo-cd/v2/pkg/client/listers/application/v1alpha1"
)
// AppsetLister implements the ApplicationSetLister interface with a controller-runtime client
type AppsetLister struct {
Client ctrlclient.Client
}
func NewAppsetLister(client ctrlclient.Client) ApplicationSetLister {
return &AppsetLister{Client: client}
}
func (l *AppsetLister) List(selector labels.Selector) (ret []*ApplicationSet, err error) {
return clientListAppsets(l.Client, ctrlclient.ListOptions{})
}
// ApplicationSets returns an object that can list and get ApplicationSets.
func (l *AppsetLister) ApplicationSets(namespace string) ApplicationSetNamespaceLister {
return &appsetNamespaceLister{
Client: l.Client,
Namespace: namespace,
}
}
// Implements ApplicationSetNamespaceLister
type appsetNamespaceLister struct {
Client ctrlclient.Client
Namespace string
}
func (n *appsetNamespaceLister) List(selector labels.Selector) (ret []*ApplicationSet, err error) {
return clientListAppsets(n.Client, ctrlclient.ListOptions{Namespace: n.Namespace})
}
func (n *appsetNamespaceLister) Get(name string) (*ApplicationSet, error) {
appset := ApplicationSet{}
err := n.Client.Get(context.TODO(), ctrlclient.ObjectKeyFromObject(&appset), &appset)
return &appset, err
}
func clientListAppsets(client ctrlclient.Client, listOptions ctrlclient.ListOptions) (ret []*ApplicationSet, err error) {
var appsetlist ApplicationSetList
var results []*ApplicationSet
err = client.List(context.TODO(), &appsetlist, &listOptions)
if err == nil {
for _, appset := range appsetlist.Items {
results = append(results, appset.DeepCopy())
}
}
return results, err
}
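A usage sketch for the lister above, written as if in this package. Two quirks worth noting from the code as written: List ignores the selector it receives, since clientListAppsets always passes empty ListOptions, and Get builds its key from a zero-valued ApplicationSet rather than the name argument:
// listAppsets lists across all namespaces, then within one namespace.
func listAppsets(c ctrlclient.Client) ([]*ApplicationSet, error) {
	lister := NewAppsetLister(c)
	if _, err := lister.ApplicationSets("argocd").List(labels.Everything()); err != nil {
		return nil, err
	}
	return lister.List(labels.Everything())
}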


@@ -132,12 +132,9 @@ func getLocalCluster(clientset kubernetes.Interface) *appv1.Cluster {
initLocalCluster.Do(func() {
info, err := clientset.Discovery().ServerVersion()
if err == nil {
// nolint:staticcheck
localCluster.ServerVersion = fmt.Sprintf("%s.%s", info.Major, info.Minor)
// nolint:staticcheck
localCluster.ConnectionState = appv1.ConnectionState{Status: appv1.ConnectionStatusSuccessful}
} else {
// nolint:staticcheck
localCluster.ConnectionState = appv1.ConnectionState{
Status: appv1.ConnectionStatusFailed,
Message: err.Error(),
@@ -146,7 +143,6 @@ func getLocalCluster(clientset kubernetes.Interface) *appv1.Cluster {
})
cluster := localCluster.DeepCopy()
now := metav1.Now()
// nolint:staticcheck
cluster.ConnectionState.ModifiedAt = &now
return cluster
}


@@ -23,7 +23,6 @@ import (
log "github.com/sirupsen/logrus"
argoappsv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/glob"
)
var sprigFuncMap = sprig.GenericFuncMap() // a singleton for better performance
@@ -47,10 +46,6 @@ type Renderer interface {
type Render struct{}
func IsNamespaceAllowed(namespaces []string, namespace string) bool {
return glob.MatchStringInList(namespaces, namespace, glob.REGEXP)
}
func copyValueIntoUnexported(destination, value reflect.Value) {
reflect.NewAt(destination.Type(), unsafe.Pointer(destination.UnsafeAddr())).
Elem().
@@ -273,7 +268,7 @@ func (r *Render) RenderTemplateParams(tmpl *argoappsv1.Application, syncPolicy *
// b) there IS a syncPolicy, but preserveResourcesOnDeletion is set to false
// See TestRenderTemplateParamsFinalizers in util_test.go for test-based definition of behaviour
if (syncPolicy == nil || !syncPolicy.PreserveResourcesOnDeletion) &&
len(replacedTmpl.ObjectMeta.Finalizers) == 0 {
(replacedTmpl.ObjectMeta.Finalizers == nil || len(replacedTmpl.ObjectMeta.Finalizers) == 0) {
replacedTmpl.ObjectMeta.Finalizers = []string{"resources-finalizer.argocd.argoproj.io"}
}


@@ -1,186 +0,0 @@
{
"ref": "refs/heads/env/dev",
"before": "d5c1ffa8e294bc18c639bfb4e0df499251034414",
"after": "63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"created": false,
"deleted": false,
"forced": true,
"base_ref": null,
"compare": "https://github.com/org/repo/compare/d5c1ffa8e294...63738bb582c8",
"commits": [
{
"id": "63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"tree_id": "64897da445207e409ad05af93b1f349ad0a4ee19",
"distinct": true,
"message": "Add staging-argocd-demo environment",
"timestamp": "2018-05-04T15:40:02-07:00",
"url": "https://github.com/org/repo/commit/63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"author": {
"name": "Jesse Suen",
"email": "Jesse_Suen@example.com",
"username": "org"
},
"committer": {
"name": "Jesse Suen",
"email": "Jesse_Suen@example.com",
"username": "org"
},
"added": [
"ksapps/test-app/environments/staging-argocd-demo/main.jsonnet",
"ksapps/test-app/environments/staging-argocd-demo/params.libsonnet"
],
"removed": [
],
"modified": [
"ksapps/test-app/app.yaml"
]
}
],
"head_commit": {
"id": "63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"tree_id": "64897da445207e409ad05af93b1f349ad0a4ee19",
"distinct": true,
"message": "Add staging-argocd-demo environment",
"timestamp": "2018-05-04T15:40:02-07:00",
"url": "https://github.com/org/repo/commit/63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"author": {
"name": "Jesse Suen",
"email": "Jesse_Suen@example.com",
"username": "org"
},
"committer": {
"name": "Jesse Suen",
"email": "Jesse_Suen@example.com",
"username": "org"
},
"added": [
"ksapps/test-app/environments/staging-argocd-demo/main.jsonnet",
"ksapps/test-app/environments/staging-argocd-demo/params.libsonnet"
],
"removed": [
],
"modified": [
"ksapps/test-app/app.yaml"
]
},
"repository": {
"id": 123060978,
"name": "repo",
"full_name": "org/repo",
"owner": {
"name": "org",
"email": "org@users.noreply.github.com",
"login": "org",
"id": 12677113,
"avatar_url": "https://avatars0.githubusercontent.com/u/12677113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/org",
"html_url": "https://github.com/org",
"followers_url": "https://api.github.com/users/org/followers",
"following_url": "https://api.github.com/users/org/following{/other_user}",
"gists_url": "https://api.github.com/users/org/gists{/gist_id}",
"starred_url": "https://api.github.com/users/org/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/org/subscriptions",
"organizations_url": "https://api.github.com/users/org/orgs",
"repos_url": "https://api.github.com/users/org/repos",
"events_url": "https://api.github.com/users/org/events{/privacy}",
"received_events_url": "https://api.github.com/users/org/received_events",
"type": "User",
"site_admin": false
},
"private": false,
"html_url": "https://github.com/org/repo",
"description": "Test Repository",
"fork": false,
"url": "https://github.com/org/repo",
"forks_url": "https://api.github.com/repos/org/repo/forks",
"keys_url": "https://api.github.com/repos/org/repo/keys{/key_id}",
"collaborators_url": "https://api.github.com/repos/org/repo/collaborators{/collaborator}",
"teams_url": "https://api.github.com/repos/org/repo/teams",
"hooks_url": "https://api.github.com/repos/org/repo/hooks",
"issue_events_url": "https://api.github.com/repos/org/repo/issues/events{/number}",
"events_url": "https://api.github.com/repos/org/repo/events",
"assignees_url": "https://api.github.com/repos/org/repo/assignees{/user}",
"branches_url": "https://api.github.com/repos/org/repo/branches{/branch}",
"tags_url": "https://api.github.com/repos/org/repo/tags",
"blobs_url": "https://api.github.com/repos/org/repo/git/blobs{/sha}",
"git_tags_url": "https://api.github.com/repos/org/repo/git/tags{/sha}",
"git_refs_url": "https://api.github.com/repos/org/repo/git/refs{/sha}",
"trees_url": "https://api.github.com/repos/org/repo/git/trees{/sha}",
"statuses_url": "https://api.github.com/repos/org/repo/statuses/{sha}",
"languages_url": "https://api.github.com/repos/org/repo/languages",
"stargazers_url": "https://api.github.com/repos/org/repo/stargazers",
"contributors_url": "https://api.github.com/repos/org/repo/contributors",
"subscribers_url": "https://api.github.com/repos/org/repo/subscribers",
"subscription_url": "https://api.github.com/repos/org/repo/subscription",
"commits_url": "https://api.github.com/repos/org/repo/commits{/sha}",
"git_commits_url": "https://api.github.com/repos/org/repo/git/commits{/sha}",
"comments_url": "https://api.github.com/repos/org/repo/comments{/number}",
"issue_comment_url": "https://api.github.com/repos/org/repo/issues/comments{/number}",
"contents_url": "https://api.github.com/repos/org/repo/contents/{+path}",
"compare_url": "https://api.github.com/repos/org/repo/compare/{base}...{head}",
"merges_url": "https://api.github.com/repos/org/repo/merges",
"archive_url": "https://api.github.com/repos/org/repo/{archive_format}{/ref}",
"downloads_url": "https://api.github.com/repos/org/repo/downloads",
"issues_url": "https://api.github.com/repos/org/repo/issues{/number}",
"pulls_url": "https://api.github.com/repos/org/repo/pulls{/number}",
"milestones_url": "https://api.github.com/repos/org/repo/milestones{/number}",
"notifications_url": "https://api.github.com/repos/org/repo/notifications{?since,all,participating}",
"labels_url": "https://api.github.com/repos/org/repo/labels{/name}",
"releases_url": "https://api.github.com/repos/org/repo/releases{/id}",
"deployments_url": "https://api.github.com/repos/org/repo/deployments",
"created_at": 1519698615,
"updated_at": "2018-05-04T22:37:55Z",
"pushed_at": 1525473610,
"git_url": "git://github.com/org/repo.git",
"ssh_url": "git@github.com:org/repo.git",
"clone_url": "https://github.com/org/repo.git",
"svn_url": "https://github.com/org/repo",
"homepage": null,
"size": 538,
"stargazers_count": 0,
"watchers_count": 0,
"language": null,
"has_issues": true,
"has_projects": true,
"has_downloads": true,
"has_wiki": true,
"has_pages": false,
"forks_count": 1,
"mirror_url": null,
"archived": false,
"open_issues_count": 0,
"license": null,
"forks": 1,
"open_issues": 0,
"watchers": 0,
"default_branch": "master",
"stargazers": 0,
"master_branch": "master"
},
"pusher": {
"name": "org",
"email": "org@users.noreply.github.com"
},
"sender": {
"login": "org",
"id": 12677113,
"avatar_url": "https://avatars0.githubusercontent.com/u/12677113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/org",
"html_url": "https://github.com/org",
"followers_url": "https://api.github.com/users/org/followers",
"following_url": "https://api.github.com/users/org/following{/other_user}",
"gists_url": "https://api.github.com/users/org/gists{/gist_id}",
"starred_url": "https://api.github.com/users/org/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/org/subscriptions",
"organizations_url": "https://api.github.com/users/org/orgs",
"repos_url": "https://api.github.com/users/org/repos",
"events_url": "https://api.github.com/users/org/events{/privacy}",
"received_events_url": "https://api.github.com/users/org/received_events",
"type": "User",
"site_admin": false
}
}
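
The payload above is the tail of a GitHub push-event fixture used by the webhook tests. For context, here is a minimal, self-contained sketch of how such a payload is typically parsed with the go-playground/webhooks/v6 library imported in the handler diff below; the endpoint, port, and secret value are placeholders, not taken from the source:

package main

import (
    "log"
    "net/http"

    "github.com/go-playground/webhooks/v6/github"
)

func main() {
    // "my-webhook-secret" is a placeholder; use the secret configured on the repository.
    hook, err := github.New(github.Options.Secret("my-webhook-secret"))
    if err != nil {
        log.Fatal(err)
    }
    http.HandleFunc("/api/webhook", func(w http.ResponseWriter, r *http.Request) {
        payload, err := hook.Parse(r, github.PushEvent)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        if push, ok := payload.(github.PushPayload); ok {
            // Repository.HTMLURL and Ref are the fields the generator webhook below relies on.
            log.Printf("push to %s at %s", push.Repository.HTMLURL, push.Ref)
        }
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}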

View File

@@ -19,7 +19,6 @@ import (
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
argosettings "github.com/argoproj/argo-cd/v2/util/settings"
"github.com/argoproj/argo-cd/v2/util/webhook"
"github.com/go-playground/webhooks/v6/azuredevops"
"github.com/go-playground/webhooks/v6/github"
@@ -191,6 +190,11 @@ func (h *WebhookHandler) Handler(w http.ResponseWriter, r *http.Request) {
}
}
func parseRevision(ref string) string {
refParts := strings.SplitN(ref, "/", 3)
return refParts[len(refParts)-1]
}
func getGitGeneratorInfo(payload interface{}) *gitGeneratorInfo {
var (
webURL string
@@ -200,16 +204,16 @@ func getGitGeneratorInfo(payload interface{}) *gitGeneratorInfo {
switch payload := payload.(type) {
case github.PushPayload:
webURL = payload.Repository.HTMLURL
revision = webhook.ParseRevision(payload.Ref)
revision = parseRevision(payload.Ref)
touchedHead = payload.Repository.DefaultBranch == revision
case gitlab.PushEventPayload:
webURL = payload.Project.WebURL
revision = webhook.ParseRevision(payload.Ref)
revision = parseRevision(payload.Ref)
touchedHead = payload.Project.DefaultBranch == revision
case azuredevops.GitPushEvent:
// See: https://learn.microsoft.com/en-us/azure/devops/service-hooks/events?view=azure-devops#git.push
webURL = payload.Resource.Repository.RemoteURL
revision = webhook.ParseRevision(payload.Resource.RefUpdates[0].Name)
revision = parseRevision(payload.Resource.RefUpdates[0].Name)
touchedHead = payload.Resource.RefUpdates[0].Name == payload.Resource.Repository.DefaultBranch
// unfortunately, Azure DevOps doesn't provide a list of changed files
default:
@@ -369,12 +373,12 @@ func shouldRefreshPluginGenerator(gen *v1alpha1.PluginGenerator) bool {
}
func genRevisionHasChanged(gen *v1alpha1.GitGenerator, revision string, touchedHead bool) bool {
targetRev := webhook.ParseRevision(gen.Revision)
targetRev := parseRevision(gen.Revision)
if targetRev == "HEAD" || targetRev == "" { // revision is head
return touchedHead
}
return targetRev == revision || gen.Revision == revision
return targetRev == revision
}
func gitGeneratorUsesURL(gen *v1alpha1.GitGenerator, webURL string, repoRegexp *regexp.Regexp) bool {
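
To make the revision-matching logic above concrete, here is a minimal, runnable sketch of the two helpers as they appear in the diff, with GitGenerator reduced to the single field used; the example values are illustrative only:

package main

import (
    "fmt"
    "strings"
)

type GitGenerator struct{ Revision string }

// parseRevision keeps only the last segment of a ref, so
// "refs/heads/dev" and "refs/tags/v3.14.1" become "dev" and "v3.14.1".
func parseRevision(ref string) string {
    refParts := strings.SplitN(ref, "/", 3)
    return refParts[len(refParts)-1]
}

func genRevisionHasChanged(gen *GitGenerator, revision string, touchedHead bool) bool {
    targetRev := parseRevision(gen.Revision)
    if targetRev == "HEAD" || targetRev == "" { // revision is head
        return touchedHead
    }
    return targetRev == revision
}

func main() {
    fmt.Println(genRevisionHasChanged(&GitGenerator{}, "main", true))                          // true: empty revision tracks HEAD
    fmt.Println(genRevisionHasChanged(&GitGenerator{Revision: "refs/heads/dev"}, "dev", false)) // true: long ref is normalized
    fmt.Println(genRevisionHasChanged(&GitGenerator{Revision: "dev"}, "main", false))          // false: different branch
}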

View File

@@ -67,15 +67,6 @@ func TestWebhookHandler(t *testing.T) {
expectedStatusCode: http.StatusOK,
expectedRefresh: true,
},
{
desc: "WebHook from a GitHub repository via Commit shorthand",
headerKey: "X-GitHub-Event",
headerValue: "push",
payloadFile: "github-commit-event-feature-branch.json",
effectedAppSets: []string{"github-shorthand", "matrix-pull-request-github-plugin", "plugin"},
expectedStatusCode: http.StatusOK,
expectedRefresh: true,
},
{
desc: "WebHook from a GitHub repository via Commit to branch",
headerKey: "X-GitHub-Event",
@@ -201,7 +192,6 @@ func TestWebhookHandler(t *testing.T) {
fakeAppWithGitGenerator("git-github", namespace, "https://github.com/org/repo"),
fakeAppWithGitGenerator("git-gitlab", namespace, "https://gitlab/group/name"),
fakeAppWithGitGenerator("git-azure-devops", namespace, "https://dev.azure.com/fabrikam-fiber-inc/DefaultCollection/_git/Fabrikam-Fiber-Git"),
fakeAppWithGitGeneratorWithRevision("github-shorthand", namespace, "https://github.com/org/repo", "env/dev"),
fakeAppWithGithubPullRequestGenerator("pull-request-github", namespace, "CodErTOcat", "Hello-World"),
fakeAppWithGitlabPullRequestGenerator("pull-request-gitlab", namespace, "100500"),
fakeAppWithAzureDevOpsPullRequestGenerator("pull-request-azure-devops", namespace, "DefaultCollection", "Fabrikam"),
@@ -312,62 +302,14 @@ func mockGenerators() map[string]generators.Generator {
}
func TestGenRevisionHasChanged(t *testing.T) {
type args struct {
gen *v1alpha1.GitGenerator
revision string
touchedHead bool
}
tests := []struct {
name string
args args
want bool
}{
{name: "touchedHead", args: args{
gen: &v1alpha1.GitGenerator{},
revision: "main",
touchedHead: true,
}, want: true},
{name: "didntTouchHead", args: args{
gen: &v1alpha1.GitGenerator{},
revision: "main",
touchedHead: false,
}, want: false},
{name: "foundEqualShort", args: args{
gen: &v1alpha1.GitGenerator{Revision: "dev"},
revision: "dev",
touchedHead: true,
}, want: true},
{name: "foundEqualLongGen", args: args{
gen: &v1alpha1.GitGenerator{Revision: "refs/heads/dev"},
revision: "dev",
touchedHead: true,
}, want: true},
{name: "foundNotEqualLongGen", args: args{
gen: &v1alpha1.GitGenerator{Revision: "refs/heads/dev"},
revision: "main",
touchedHead: true,
}, want: false},
{name: "foundNotEqualShort", args: args{
gen: &v1alpha1.GitGenerator{Revision: "dev"},
revision: "main",
touchedHead: false,
}, want: false},
{name: "foundEqualTag", args: args{
gen: &v1alpha1.GitGenerator{Revision: "v3.14.1"},
revision: "v3.14.1",
touchedHead: false,
}, want: true},
{name: "foundEqualTagLongGen", args: args{
gen: &v1alpha1.GitGenerator{Revision: "refs/tags/v3.14.1"},
revision: "v3.14.1",
touchedHead: false,
}, want: true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
assert.Equalf(t, tt.want, genRevisionHasChanged(tt.args.gen, tt.args.revision, tt.args.touchedHead), "genRevisionHasChanged(%v, %v, %v)", tt.args.gen, tt.args.revision, tt.args.touchedHead)
})
}
assert.True(t, genRevisionHasChanged(&v1alpha1.GitGenerator{}, "master", true))
assert.False(t, genRevisionHasChanged(&v1alpha1.GitGenerator{}, "master", false))
assert.True(t, genRevisionHasChanged(&v1alpha1.GitGenerator{Revision: "dev"}, "dev", true))
assert.False(t, genRevisionHasChanged(&v1alpha1.GitGenerator{Revision: "dev"}, "master", false))
assert.True(t, genRevisionHasChanged(&v1alpha1.GitGenerator{Revision: "refs/heads/dev"}, "dev", true))
assert.False(t, genRevisionHasChanged(&v1alpha1.GitGenerator{Revision: "refs/heads/dev"}, "master", false))
}
func fakeAppWithGitGenerator(name, namespace, repo string) *v1alpha1.ApplicationSet {
@@ -389,12 +331,6 @@ func fakeAppWithGitGenerator(name, namespace, repo string) *v1alpha1.Application
}
}
func fakeAppWithGitGeneratorWithRevision(name, namespace, repo, revision string) *v1alpha1.ApplicationSet {
appSet := fakeAppWithGitGenerator(name, namespace, repo)
appSet.Spec.Generators[0].Git.Revision = revision
return appSet
}
func fakeAppWithGitlabPullRequestGenerator(name, namespace, projectId string) *v1alpha1.ApplicationSet {
return &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -775,7 +711,7 @@ func fakeAppWithMatrixAndPullRequestGeneratorWithPluginGenerator(name, namespace
func newFakeClient(ns string) *kubefake.Clientset {
s := runtime.NewScheme()
s.AddKnownTypes(v1alpha1.SchemeGroupVersion, &v1alpha1.ApplicationSet{})
return kubefake.NewClientset(&corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "argocd-cm", Namespace: ns, Labels: map[string]string{
return kubefake.NewSimpleClientset(&corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "argocd-cm", Namespace: ns, Labels: map[string]string{
"app.kubernetes.io/part-of": "argocd",
}}}, &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{

assets/swagger.json generated
View File

@@ -3290,6 +3290,12 @@
"description": "App project for query.",
"name": "appProject",
"in": "query"
},
{
"type": "string",
"description": "Type determines what kind of credential we're interacting with. It can be \"read\", \"write\", or \"both\". Default is\n\"read\".",
"name": "type",
"in": "query"
}
],
"responses": {
@@ -3334,6 +3340,12 @@
"description": "Whether to operate on credential set instead of repository.",
"name": "credsOnly",
"in": "query"
},
{
"type": "boolean",
"description": "Write determines whether the credential will be stored as a read credential or a write credential.",
"name": "write",
"in": "query"
}
],
"responses": {
@@ -3374,6 +3386,12 @@
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
},
{
"type": "boolean",
"description": "Write determines whether the credential to be updated is a read credential or a write credential.",
"name": "write",
"in": "query"
}
],
"responses": {
@@ -3418,6 +3436,12 @@
"description": "App project for query.",
"name": "appProject",
"in": "query"
},
{
"type": "string",
"description": "Type determines what kind of credential we're interacting with. It can be \"read\", \"write\", or \"both\". Default is\n\"read\".",
"name": "type",
"in": "query"
}
],
"responses": {
@@ -3460,6 +3484,12 @@
"description": "App project for query.",
"name": "appProject",
"in": "query"
},
{
"type": "string",
"description": "Type determines what kind of credential we're interacting with. It can be \"read\", \"write\", or \"both\". Default is\n\"read\".",
"name": "type",
"in": "query"
}
],
"responses": {
@@ -3550,6 +3580,12 @@
"description": "App project for query.",
"name": "appProject",
"in": "query"
},
{
"type": "string",
"description": "Type determines what kind of credential we're interacting with. It can be \"read\", \"write\", or \"both\". Default is\n\"read\".",
"name": "type",
"in": "query"
}
],
"responses": {
@@ -3593,6 +3629,12 @@
"description": "App project for query.",
"name": "appProject",
"in": "query"
},
{
"type": "string",
"description": "Type determines what kind of credential we're interacting with. It can be \"read\", \"write\", or \"both\". Default is\n\"read\".",
"name": "type",
"in": "query"
}
],
"responses": {
@@ -4489,27 +4531,6 @@
}
}
},
"applicationsetApplicationSetGenerateRequest": {
"type": "object",
"title": "ApplicationSetGetQuery is a query for applicationset resources",
"properties": {
"applicationSet": {
"$ref": "#/definitions/v1alpha1ApplicationSet"
}
}
},
"applicationsetApplicationSetGenerateResponse": {
"type": "object",
"title": "ApplicationSetGenerateResponse is a response for applicationset generate request",
"properties": {
"applications": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1Application"
}
}
}
},
"applicationsetApplicationSetResponse": {
"type": "object",
"properties": {
@@ -4535,43 +4556,6 @@
}
}
},
"applicationv1alpha1ResourceStatus": {
"type": "object",
"title": "ResourceStatus holds the current sync and health status of a resource\nTODO: describe members of this type",
"properties": {
"group": {
"type": "string"
},
"health": {
"$ref": "#/definitions/v1alpha1HealthStatus"
},
"hook": {
"type": "boolean"
},
"kind": {
"type": "string"
},
"name": {
"type": "string"
},
"namespace": {
"type": "string"
},
"requiresPruning": {
"type": "boolean"
},
"status": {
"type": "string"
},
"syncWave": {
"type": "integer",
"format": "int64"
},
"version": {
"type": "string"
}
}
},
"clusterClusterID": {
"type": "object",
"title": "ClusterID holds a cluster server URL or cluster name",
@@ -4716,12 +4700,6 @@
"help": {
"$ref": "#/definitions/clusterHelp"
},
"impersonationEnabled": {
"type": "boolean"
},
"installationID": {
"type": "string"
},
"kustomizeOptions": {
"$ref": "#/definitions/v1alpha1KustomizeOptions"
},
@@ -5528,7 +5506,7 @@
"properties": {
"matchExpressions": {
"type": "array",
"title": "matchExpressions is a list of label selector requirements. The requirements are ANDed.\n+optional\n+listType=atomic",
"title": "matchExpressions is a list of label selector requirements. The requirements are ANDed.\n+optional",
"items": {
"$ref": "#/definitions/v1LabelSelectorRequirement"
}
@@ -5556,7 +5534,7 @@
},
"values": {
"type": "array",
"title": "values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch.\n+optional\n+listType=atomic",
"title": "values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch.\n+optional",
"items": {
"type": "string"
}
@@ -5680,7 +5658,7 @@
"type": "string"
},
"kubeProxyVersion": {
"description": "Deprecated: KubeProxy Version reported by the node.",
"description": "KubeProxy Version reported by the node.",
"type": "string"
},
"kubeletVersion": {
@@ -5729,7 +5707,7 @@
},
"finalizers": {
"type": "array",
"title": "Must be empty before the object is deleted from the registry. Each entry\nis an identifier for the responsible component that will remove the entry\nfrom the list. If the deletionTimestamp of the object is non-nil, entries\nin this list can only be removed.\nFinalizers may be processed and removed in any order. Order is NOT enforced\nbecause it introduces significant risk of stuck finalizers.\nfinalizers is a shared field, any actor with permission can reorder it.\nIf the finalizer list is processed in order, then this can lead to a situation\nin which the component responsible for the first finalizer in the list is\nwaiting for a signal (field value, external system, or other) produced by a\ncomponent responsible for a finalizer later in the list, resulting in a deadlock.\nWithout enforced ordering finalizers are free to order amongst themselves and\nare not vulnerable to ordering changes in the list.\n+optional\n+patchStrategy=merge\n+listType=set",
"title": "Must be empty before the object is deleted from the registry. Each entry\nis an identifier for the responsible component that will remove the entry\nfrom the list. If the deletionTimestamp of the object is non-nil, entries\nin this list can only be removed.\nFinalizers may be processed and removed in any order. Order is NOT enforced\nbecause it introduces significant risk of stuck finalizers.\nfinalizers is a shared field, any actor with permission can reorder it.\nIf the finalizer list is processed in order, then this can lead to a situation\nin which the component responsible for the first finalizer in the list is\nwaiting for a signal (field value, external system, or other) produced by a\ncomponent responsible for a finalizer later in the list, resulting in a deadlock.\nWithout enforced ordering finalizers are free to order amongst themselves and\nare not vulnerable to ordering changes in the list.\n+optional\n+patchStrategy=merge",
"items": {
"type": "string"
}
@@ -5751,7 +5729,7 @@
}
},
"managedFields": {
"description": "ManagedFields maps workflow-id and version to the set of fields\nthat are managed by that workflow. This is mostly for internal\nhousekeeping, and users typically shouldn't need to set or\nunderstand this field. A workflow can be the user's name, a\ncontroller's name, or the name of a specific apply path like\n\"ci-cd\". The set of fields is always in the version that the\nworkflow used when modifying the object.\n\n+optional\n+listType=atomic",
"description": "ManagedFields maps workflow-id and version to the set of fields\nthat are managed by that workflow. This is mostly for internal\nhousekeeping, and users typically shouldn't need to set or\nunderstand this field. A workflow can be the user's name, a\ncontroller's name, or the name of a specific apply path like\n\"ci-cd\". The set of fields is always in the version that the\nworkflow used when modifying the object.\n\n+optional",
"type": "array",
"items": {
"$ref": "#/definitions/v1ManagedFieldsEntry"
@@ -5767,7 +5745,7 @@
},
"ownerReferences": {
"type": "array",
"title": "List of objects depended by this object. If ALL objects in the list have\nbeen deleted, this object will be garbage collected. If this object is managed by a controller,\nthen an entry in this list will point to this controller, with the controller field set to true.\nThere cannot be more than one managing controller.\n+optional\n+patchMergeKey=uid\n+patchStrategy=merge\n+listType=map\n+listMapKey=uid",
"title": "List of objects depended by this object. If ALL objects in the list have\nbeen deleted, this object will be garbage collected. If this object is managed by a controller,\nthen an entry in this list will point to this controller, with the controller field set to true.\nThere cannot be more than one managing controller.\n+optional\n+patchMergeKey=uid\n+patchStrategy=merge",
"items": {
"$ref": "#/definitions/v1OwnerReference"
}
@@ -5943,13 +5921,6 @@
"type": "string",
"title": "Description contains optional project description"
},
"destinationServiceAccounts": {
"description": "DestinationServiceAccounts holds information about the service accounts to be impersonated for the application sync operation for each destination.",
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1ApplicationDestinationServiceAccount"
}
},
"destinations": {
"type": "array",
"title": "Destinations contains list of destinations available for deployment",
@@ -6081,24 +6052,6 @@
}
}
},
"v1alpha1ApplicationDestinationServiceAccount": {
"description": "ApplicationDestinationServiceAccount holds information about the service account to be impersonated for the application sync operation.",
"type": "object",
"properties": {
"defaultServiceAccount": {
"type": "string",
"title": "DefaultServiceAccount to be used for impersonation during the sync operation"
},
"namespace": {
"description": "Namespace specifies the target namespace for the application's resources.",
"type": "string"
},
"server": {
"description": "Server specifies the URL of the target cluster's Kubernetes control plane API.",
"type": "string"
}
}
},
"v1alpha1ApplicationList": {
"type": "object",
"title": "ApplicationList is list of Application resources\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object",
@@ -6423,7 +6376,7 @@
"description": "Resources is a list of Applications resources managed by this application set.",
"type": "array",
"items": {
"$ref": "#/definitions/applicationv1alpha1ResourceStatus"
"$ref": "#/definitions/v1alpha1ResourceStatus"
}
}
}
@@ -6836,6 +6789,9 @@
"source": {
"$ref": "#/definitions/v1alpha1ApplicationSource"
},
"sourceHydrator": {
"$ref": "#/definitions/v1alpha1SourceHydrator"
},
"sources": {
"type": "array",
"title": "Sources is a reference to the location of the application's manifests or chart",
@@ -6890,9 +6846,12 @@
"type": "array",
"title": "Resources is a list of Kubernetes resources managed by this application",
"items": {
"$ref": "#/definitions/applicationv1alpha1ResourceStatus"
"$ref": "#/definitions/v1alpha1ResourceStatus"
}
},
"sourceHydrator": {
"$ref": "#/definitions/v1alpha1SourceHydratorStatus"
},
"sourceType": {
"type": "string",
"title": "SourceType specifies the type of this application"
@@ -6956,11 +6915,6 @@
"items": {
"$ref": "#/definitions/v1alpha1ResourceNode"
}
},
"shardsCount": {
"type": "integer",
"format": "int64",
"title": "ShardsCount contains total number of shards the application tree is split into"
}
}
},
@@ -7308,6 +7262,24 @@
}
}
},
"v1alpha1DrySource": {
"description": "DrySource specifies a location for dry \"don't repeat yourself\" manifest source information.",
"type": "object",
"properties": {
"path": {
"type": "string",
"title": "Path is a directory path within the Git repository where the manifests are located"
},
"repoURL": {
"type": "string",
"title": "RepoURL is the URL to the git repository that contains the application manifests"
},
"targetRevision": {
"type": "string",
"title": "TargetRevision defines the revision of the source to hydrate"
}
}
},
"v1alpha1DuckTypeGenerator": {
"description": "DuckType defines a generator to match against clusters registered with ArgoCD.",
"type": "object",
@@ -7559,6 +7531,47 @@
}
}
},
"v1alpha1HydrateOperation": {
"type": "object",
"title": "HydrateOperation contains information about the most recent hydrate operation",
"properties": {
"drySHA": {
"type": "string",
"title": "DrySHA holds the resolved revision (sha) of the dry source as of the most recent reconciliation"
},
"finishedAt": {
"$ref": "#/definitions/v1Time"
},
"hydratedSHA": {
"type": "string",
"title": "HydratedSHA holds the resolved revision (sha) of the hydrated source as of the most recent reconciliation"
},
"message": {
"type": "string",
"title": "Message contains a message describing the current status of the hydrate operation"
},
"phase": {
"type": "string",
"title": "Phase indicates the status of the hydrate operation"
},
"sourceHydrator": {
"$ref": "#/definitions/v1alpha1SourceHydrator"
},
"startedAt": {
"$ref": "#/definitions/v1Time"
}
}
},
"v1alpha1HydrateTo": {
"description": "HydrateTo specifies a location to which hydrated manifests should be pushed as a \"staging area\" before being moved to\nthe SyncSource. The RepoURL and Path are assumed based on the associated SyncSource config in the SourceHydrator.",
"type": "object",
"properties": {
"targetBranch": {
"type": "string",
"title": "TargetBranch is the branch to which hydrated manifests should be committed"
}
}
},
"v1alpha1Info": {
"type": "object",
"properties": {
@@ -8273,10 +8286,6 @@
"type": "string",
"title": "GithubAppPrivateKey specifies the private key PEM data for authentication via GitHub app"
},
"noProxy": {
"type": "string",
"title": "NoProxy specifies a list of targets where the proxy isn't used, applies only in cases where the proxy is applied"
},
"password": {
"type": "string",
"title": "Password for authenticating at the repo server"
@@ -8383,10 +8392,6 @@
"type": "string",
"title": "Name specifies a name to be used for this repo. Only used with Helm repos"
},
"noProxy": {
"type": "string",
"title": "NoProxy specifies a list of targets where the proxy isn't used, applies only in cases where the proxy is applied"
},
"password": {
"type": "string",
"title": "Password contains the password or PAT used for authenticating at the remote repository"
@@ -8784,6 +8789,43 @@
}
}
},
"v1alpha1ResourceStatus": {
"type": "object",
"title": "ResourceStatus holds the current sync and health status of a resource\nTODO: describe members of this type",
"properties": {
"group": {
"type": "string"
},
"health": {
"$ref": "#/definitions/v1alpha1HealthStatus"
},
"hook": {
"type": "boolean"
},
"kind": {
"type": "string"
},
"name": {
"type": "string"
},
"namespace": {
"type": "string"
},
"requiresPruning": {
"type": "boolean"
},
"status": {
"type": "string"
},
"syncWave": {
"type": "integer",
"format": "int64"
},
"version": {
"type": "string"
}
}
},
"v1alpha1RetryStrategy": {
"type": "object",
"title": "RetryStrategy contains information about the strategy to apply when a sync failed",
@@ -9166,15 +9208,54 @@
}
}
},
"v1alpha1SourceHydrator": {
"description": "SourceHydrator specifies a dry \"don't repeat yourself\" source for manifests, a sync source from which to sync\nhydrated manifests, and an optional hydrateTo location to act as a \"staging\" aread for hydrated manifests.",
"type": "object",
"properties": {
"drySource": {
"$ref": "#/definitions/v1alpha1DrySource"
},
"hydrateTo": {
"$ref": "#/definitions/v1alpha1HydrateTo"
},
"syncSource": {
"$ref": "#/definitions/v1alpha1SyncSource"
}
}
},
"v1alpha1SourceHydratorStatus": {
"type": "object",
"title": "SourceHydratorStatus contains information about the current state of source hydration",
"properties": {
"currentOperation": {
"$ref": "#/definitions/v1alpha1HydrateOperation"
},
"lastSuccessfulOperation": {
"$ref": "#/definitions/v1alpha1SuccessfulHydrateOperation"
}
}
},
"v1alpha1SuccessfulHydrateOperation": {
"type": "object",
"title": "SuccessfulHydrateOperation contains information about the most recent successful hydrate operation",
"properties": {
"drySHA": {
"type": "string",
"title": "DrySHA holds the resolved revision (sha) of the dry source as of the most recent reconciliation"
},
"hydratedSHA": {
"type": "string",
"title": "HydratedSHA holds the resolved revision (sha) of the hydrated source as of the most recent reconciliation"
},
"sourceHydrator": {
"$ref": "#/definitions/v1alpha1SourceHydrator"
}
}
},
"v1alpha1SyncOperation": {
"description": "SyncOperation contains details about a sync operation.",
"type": "object",
"properties": {
"autoHealAttemptsCount": {
"type": "integer",
"format": "int64",
"title": "SelfHealAttemptsCount contains the number of auto-heal attempts"
},
"dryRun": {
"type": "boolean",
"title": "DryRun specifies to perform a `kubectl apply --dry-run` without actually performing the sync"
@@ -9325,6 +9406,20 @@
}
}
},
"v1alpha1SyncSource": {
"description": "SyncSource specifies a location from which hydrated manifests may be synced. RepoURL is assumed based on the\nassociated DrySource config in the SourceHydrator.",
"type": "object",
"properties": {
"path": {
"description": "Path is a directory path within the git repository where hydrated manifests should be committed to and synced\nfrom. If hydrateTo is set, this is just the path from which hydrated manifests will be synced.",
"type": "string"
},
"targetBranch": {
"type": "string",
"title": "TargetBranch is the branch to which hydrated manifests should be committed"
}
}
},
"v1alpha1SyncStatus": {
"type": "object",
"title": "SyncStatus contains information about the currently observed live and desired states of an application",

View File

@@ -13,11 +13,11 @@ import (
"github.com/redis/go-redis/v9"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
commitclient "github.com/argoproj/argo-cd/v2/commitserver/apiclient"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/controller"
"github.com/argoproj/argo-cd/v2/controller/sharding"
@@ -25,7 +25,6 @@ import (
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/v2/pkg/ratelimiter"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/argo/normalizers"
cacheutil "github.com/argoproj/argo-cd/v2/util/cache"
appstatecache "github.com/argoproj/argo-cd/v2/util/cache/appstate"
@@ -57,17 +56,14 @@ func NewCommand() *cobra.Command {
repoErrorGracePeriod int64
repoServerAddress string
repoServerTimeoutSeconds int
commitServerAddress string
selfHealTimeoutSeconds int
selfHealBackoffTimeoutSeconds int
selfHealBackoffFactor int
selfHealBackoffCapSeconds int
statusProcessors int
operationProcessors int
glogLevel int
metricsPort int
metricsCacheExpiration time.Duration
metricsAplicationLabels []string
metricsAplicationConditions []string
kubectlParallelismLimit int64
cacheSource func() (*appstatecache.Cache, error)
redisClient *redis.Client
@@ -83,9 +79,6 @@ func NewCommand() *cobra.Command {
enableDynamicClusterDistribution bool
serverSideDiff bool
ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts
// argocd k8s event logging flag
enableK8sEvent []string
)
command := cobra.Command{
Use: cliName,
@@ -148,6 +141,8 @@ func NewCommand() *cobra.Command {
repoClientset := apiclient.NewRepoServerClientset(repoServerAddress, repoServerTimeoutSeconds, tlsConfig)
commitClientset := commitclient.NewCommitServerClientset(commitServerAddress)
cache, err := cacheSource()
errors.CheckError(err)
cache.Cache.SetClient(cacheutil.NewTwoLevelClient(cache.Cache.GetClient(), 10*time.Minute))
@@ -160,32 +155,23 @@ func NewCommand() *cobra.Command {
kubectl := kubeutil.NewKubectl()
clusterSharding, err := sharding.GetClusterSharding(kubeClient, settingsMgr, shardingAlgorithm, enableDynamicClusterDistribution)
errors.CheckError(err)
var selfHealBackoff *wait.Backoff
if selfHealBackoffTimeoutSeconds != 0 {
selfHealBackoff = &wait.Backoff{
Duration: time.Duration(selfHealBackoffTimeoutSeconds) * time.Second,
Factor: float64(selfHealBackoffFactor),
Cap: time.Duration(selfHealBackoffCapSeconds) * time.Second,
}
}
appController, err = controller.NewApplicationController(
namespace,
settingsMgr,
kubeClient,
appClient,
repoClientset,
commitClientset,
cache,
kubectl,
resyncDuration,
hardResyncDuration,
time.Duration(appResyncJitter)*time.Second,
time.Duration(selfHealTimeoutSeconds)*time.Second,
selfHealBackoff,
time.Duration(repoErrorGracePeriod)*time.Second,
metricsPort,
metricsCacheExpiration,
metricsAplicationLabels,
metricsAplicationConditions,
kubectlParallelismLimit,
persistResourceHealth,
clusterSharding,
@@ -194,7 +180,6 @@ func NewCommand() *cobra.Command {
serverSideDiff,
enableDynamicClusterDistribution,
ignoreNormalizerOpts,
enableK8sEvent,
)
errors.CheckError(err)
cacheutil.CollectMetrics(redisClient, appController.GetMetricsServer())
@@ -237,6 +222,7 @@ func NewCommand() *cobra.Command {
command.Flags().Int64Var(&repoErrorGracePeriod, "repo-error-grace-period-seconds", int64(env.ParseDurationFromEnv("ARGOCD_REPO_ERROR_GRACE_PERIOD_SECONDS", defaultAppResyncPeriod*time.Second, 0, math.MaxInt64).Seconds()), "Grace period in seconds for ignoring consecutive errors while communicating with repo server.")
command.Flags().StringVar(&repoServerAddress, "repo-server", env.StringFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER", common.DefaultRepoServerAddr), "Repo server address.")
command.Flags().IntVar(&repoServerTimeoutSeconds, "repo-server-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_TIMEOUT_SECONDS", 60, 0, math.MaxInt64), "Repo server RPC call timeout seconds.")
command.Flags().StringVar(&commitServerAddress, "commit-server", env.StringFromEnv("ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER", common.DefaultCommitServerAddr), "Commit server address.")
command.Flags().IntVar(&statusProcessors, "status-processors", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_STATUS_PROCESSORS", 20, 0, math.MaxInt32), "Number of application status processors")
command.Flags().IntVar(&operationProcessors, "operation-processors", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_OPERATION_PROCESSORS", 10, 0, math.MaxInt32), "Number of application operation processors")
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", env.StringFromEnv("ARGOCD_APPLICATION_CONTROLLER_LOGFORMAT", "text"), "Set the logging format. One of: text|json")
@@ -244,15 +230,11 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortArgoCDMetrics, "Start metrics server on given port")
command.Flags().DurationVar(&metricsCacheExpiration, "metrics-cache-expiration", env.ParseDurationFromEnv("ARGOCD_APPLICATION_CONTROLLER_METRICS_CACHE_EXPIRATION", 0*time.Second, 0, math.MaxInt64), "Prometheus metrics cache expiration (disabled by default, e.g. 24h0m0s)")
command.Flags().IntVar(&selfHealTimeoutSeconds, "self-heal-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_TIMEOUT_SECONDS", 0, 0, math.MaxInt32), "Specifies timeout between application self heal attempts")
command.Flags().IntVar(&selfHealBackoffTimeoutSeconds, "self-heal-backoff-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_TIMEOUT_SECONDS", 2, 0, math.MaxInt32), "Specifies initial timeout of exponential backoff between self heal attempts")
command.Flags().IntVar(&selfHealBackoffFactor, "self-heal-backoff-factor", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_FACTOR", 3, 0, math.MaxInt32), "Specifies factor of exponential timeout between application self heal attempts")
command.Flags().IntVar(&selfHealBackoffCapSeconds, "self-heal-backoff-cap-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_CAP_SECONDS", 300, 0, math.MaxInt32), "Specifies max timeout of exponential backoff between application self heal attempts")
command.Flags().IntVar(&selfHealTimeoutSeconds, "self-heal-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_TIMEOUT_SECONDS", 5, 0, math.MaxInt32), "Specifies timeout between application self heal attempts")
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", env.ParseInt64FromEnv("ARGOCD_APPLICATION_CONTROLLER_KUBECTL_PARALLELISM_LIMIT", 20, 0, math.MaxInt64), "Number of allowed concurrent kubectl fork/execs. Any value less than 1 means no limit.")
command.Flags().BoolVar(&repoServerPlaintext, "repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Disable TLS on connections to repo server")
command.Flags().BoolVar(&repoServerStrictTLS, "repo-server-strict-tls", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_STRICT_TLS", false), "Whether to use strict validation of the TLS cert presented by the repo server")
command.Flags().StringSliceVar(&metricsAplicationLabels, "metrics-application-labels", []string{}, "List of Application labels that will be added to the argocd_application_labels metric")
command.Flags().StringSliceVar(&metricsAplicationConditions, "metrics-application-conditions", []string{}, "List of Application conditions that will be added to the argocd_application_conditions metric")
command.Flags().StringVar(&otlpAddress, "otlp-address", env.StringFromEnv("ARGOCD_APPLICATION_CONTROLLER_OTLP_ADDRESS", ""), "OpenTelemetry collector address to send traces to")
command.Flags().BoolVar(&otlpInsecure, "otlp-insecure", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_OTLP_INSECURE", true), "OpenTelemetry collector insecure mode")
command.Flags().StringToStringVar(&otlpHeaders, "otlp-headers", env.ParseStringToStringFromEnv("ARGOCD_APPLICATION_CONTROLLER_OTLP_HEADERS", map[string]string{}, ","), "List of OpenTelemetry collector extra headers sent with traces; headers are comma-separated key-value pairs (e.g. key1=value1,key2=value2)")
@@ -272,9 +254,6 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&enableDynamicClusterDistribution, "dynamic-cluster-distribution-enabled", env.ParseBoolFromEnv(common.EnvEnableDynamicClusterDistribution, false), "Enables dynamic cluster distribution.")
command.Flags().BoolVar(&serverSideDiff, "server-side-diff-enabled", env.ParseBoolFromEnv(common.EnvServerSideDiff, false), "Feature flag to enable ServerSide diff. Default (\"false\")")
command.Flags().DurationVar(&ignoreNormalizerOpts.JQExecutionTimeout, "ignore-normalizer-jq-execution-timeout-seconds", env.ParseDurationFromEnv("ARGOCD_IGNORE_NORMALIZER_JQ_TIMEOUT", 0*time.Second, 0, math.MaxInt64), "Set ignore normalizer JQ execution timeout")
// argocd k8s event logging flag
command.Flags().StringSliceVar(&enableK8sEvent, "enable-k8s-event", env.StringsFromEnv("ARGOCD_ENABLE_K8S_EVENT", argo.DefaultEnableEventList(), ","), "Enable ArgoCD to use k8s events. To disable all events, set the value to `none` (e.g. --enable-k8s-event=none). To enable specific events, set the value to the event reasons (e.g. --enable-k8s-event=StatusRefreshed,ResourceCreated)")
cacheSource = appstatecache.AddCacheFlagsToCmd(&command, cacheutil.Options{
OnClientCreated: func(client *redis.Client) {
redisClient = client
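
The selfHealBackoff block above builds a k8s.io/apimachinery wait.Backoff from three flags whose defaults, registered further down, are an initial 2s, a factor of 3, and a 300s cap. A minimal sketch of the resulting progression; note that Steps is set explicitly here so Step() advances, which is an assumption of this sketch rather than something the diff shows:

package main

import (
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

func main() {
    backoff := wait.Backoff{
        Duration: 2 * time.Second,   // --self-heal-backoff-timeout-seconds default
        Factor:   3,                 // --self-heal-backoff-factor default
        Cap:      300 * time.Second, // --self-heal-backoff-cap-seconds default
        Steps:    8,                 // growth budget; with Steps left at 0, Step() never advances
    }
    for i := 0; i < 8; i++ {
        // Prints 2s, 6s, 18s, 54s, 162s, then holds at the 300s cap.
        fmt.Println(backoff.Step())
    }
}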

View File

@@ -35,7 +35,6 @@ import (
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
appsetmetrics "github.com/argoproj/argo-cd/v2/applicationset/metrics"
"github.com/argoproj/argo-cd/v2/applicationset/services"
appv1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned"
@@ -70,7 +69,6 @@ func NewCommand() *cobra.Command {
allowedScmProviders []string
globalPreservedAnnotations []string
globalPreservedLabels []string
metricsAplicationsetLabels []string
enableScmProviders bool
webhookParallelism int
)
@@ -169,7 +167,7 @@ func NewCommand() *cobra.Command {
tlsConfig := apiclient.TLSConfiguration{
DisableTLS: repoServerPlaintext,
StrictValidation: repoServerStrictTLS,
StrictValidation: repoServerPlaintext,
}
if !repoServerPlaintext && repoServerStrictTLS {
@@ -196,13 +194,6 @@ func NewCommand() *cobra.Command {
startWebhookServer(webhookHandler, webhookAddr)
}
metrics := appsetmetrics.NewApplicationsetMetrics(
utils.NewAppsetLister(mgr.GetClient()),
metricsAplicationsetLabels,
func(appset *appv1alpha1.ApplicationSet) bool {
return utils.IsNamespaceAllowed(applicationSetNamespaces, appset.Namespace)
})
if err = (&controllers.ApplicationSetReconciler{
Generators: topLevelGenerators,
Client: mgr.GetClient(),
@@ -220,7 +211,7 @@ func NewCommand() *cobra.Command {
SCMRootCAPath: scmRootCAPath,
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
Cache: mgr.GetCache(),
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -262,7 +253,6 @@ func NewCommand() *cobra.Command {
command.Flags().StringSliceVar(&globalPreservedAnnotations, "preserved-annotations", env.StringsFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_ANNOTATIONS", []string{}, ","), "Sets global preserved field values for annotations")
command.Flags().StringSliceVar(&globalPreservedLabels, "preserved-labels", env.StringsFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_LABELS", []string{}, ","), "Sets global preserved field values for labels")
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&metricsAplicationsetLabels, "metrics-applicationset-labels", []string{}, "List of Application labels that will be added to the argocd_applicationset_labels metric")
return &command
}
@@ -270,7 +260,7 @@ func startWebhookServer(webhookHandler *webhook.WebhookHandler, webhookAddr stri
mux := http.NewServeMux()
mux.HandleFunc("/api/webhook", webhookHandler.Handler)
go func() {
log.Infof("Starting webhook server %s", webhookAddr)
log.Info("Starting webhook server")
err := http.ListenAndServe(webhookAddr, mux)
if err != nil {
log.Error(err, "failed to start webhook server")

View File

@@ -0,0 +1,91 @@
package commands
import (
"fmt"
"net"
"net/http"
"os"
"os/signal"
"sync"
"syscall"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
"github.com/argoproj/argo-cd/v2/commitserver"
"github.com/argoproj/argo-cd/v2/commitserver/metrics"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/reposerver/askpass"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/env"
"github.com/argoproj/argo-cd/v2/util/errors"
)
// NewCommand returns a new instance of an argocd-commit-server command
func NewCommand() *cobra.Command {
var (
listenHost string
listenPort int
metricsPort int
metricsHost string
)
command := &cobra.Command{
Use: "argocd-commit-server",
Short: "Run Argo CD Commit Server",
Long: "Argo CD Commit Server is an internal service which commits and pushes hydrated manifests to git. This command runs Commit Server in the foreground.",
RunE: func(cmd *cobra.Command, args []string) error {
vers := common.GetVersion()
vers.LogStartupInfo(
"Argo CD Commit Server",
map[string]any{
"port": listenPort,
},
)
cli.SetLogFormat(cmdutil.LogFormat)
cli.SetLogLevel(cmdutil.LogLevel)
metricsServer := metrics.NewMetricsServer()
http.Handle("/metrics", metricsServer.GetHandler())
go func() { errors.CheckError(http.ListenAndServe(fmt.Sprintf("%s:%d", metricsHost, metricsPort), nil)) }()
askPassServer := askpass.NewServer(askpass.CommitServerSocketPath)
go func() { errors.CheckError(askPassServer.Run()) }()
server := commitserver.NewServer(askPassServer, metricsServer)
grpc := server.CreateGRPC()
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
errors.CheckError(err)
// Graceful shutdown code adapted from here: https://gist.github.com/embano1/e0bf49d24f1cdd07cffad93097c04f0a
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
s := <-sigCh
log.Printf("got signal %v, attempting graceful shutdown", s)
grpc.GracefulStop()
wg.Done()
}()
log.Println("starting grpc server")
err = grpc.Serve(listener)
errors.CheckError(err)
wg.Wait()
log.Println("clean shutdown")
return nil
},
}
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", env.StringFromEnv("ARGOCD_COMMIT_SERVER_LOGFORMAT", "text"), "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", env.StringFromEnv("ARGOCD_COMMIT_SERVER_LOGLEVEL", "info"), "Set the logging level. One of: debug|info|warn|error")
command.Flags().StringVar(&listenHost, "address", env.StringFromEnv("ARGOCD_COMMIT_SERVER_LISTEN_ADDRESS", common.DefaultAddressCommitServer), "Listen on given address for incoming connections")
command.Flags().IntVar(&listenPort, "port", common.DefaultPortCommitServer, "Listen on given port for incoming connections")
command.Flags().StringVar(&metricsHost, "metrics-address", env.StringFromEnv("ARGOCD_COMMIT_SERVER_METRICS_LISTEN_ADDRESS", common.DefaultAddressCommitServerMetrics), "Listen on given address for metrics")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortCommitServerMetrics, "Start metrics server on given port")
return command
}
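
The shutdown sequence above (catch the signal, call GracefulStop, wait) is a standard gRPC pattern: GracefulStop stops accepting new RPCs and blocks until in-flight ones complete, which in turn unblocks Serve. A stripped-down, self-contained sketch of the same pattern; the port is a placeholder:

package main

import (
    "log"
    "net"
    "os"
    "os/signal"
    "sync"
    "syscall"

    "google.golang.org/grpc"
)

func main() {
    listener, err := net.Listen("tcp", ":8086") // placeholder port
    if err != nil {
        log.Fatal(err)
    }
    srv := grpc.NewServer()

    sigCh := make(chan os.Signal, 1)
    signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)

    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        s := <-sigCh
        log.Printf("got signal %v, attempting graceful shutdown", s)
        srv.GracefulStop() // drains in-flight RPCs, then Serve returns
    }()

    log.Println("starting grpc server")
    if err := srv.Serve(listener); err != nil {
        log.Fatal(err)
    }
    wg.Wait()
    log.Println("clean shutdown")
}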

View File

@@ -136,8 +136,8 @@ func NewRunDexCommand() *cobra.Command {
}
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", env.StringFromEnv("ARGOCD_DEX_SERVER_LOGFORMAT", "text"), "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", env.StringFromEnv("ARGOCD_DEX_SERVER_LOGLEVEL", "info"), "Set the logging level. One of: debug|info|warn|error")
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", "text", "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().BoolVar(&disableTLS, "disable-tls", env.ParseBoolFromEnv("ARGOCD_DEX_SERVER_DISABLE_TLS", false), "Disable TLS on the HTTP endpoint")
return &command
}
@@ -204,8 +204,8 @@ func NewGenDexConfigCommand() *cobra.Command {
}
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", env.StringFromEnv("ARGOCD_DEX_SERVER_LOGFORMAT", "text"), "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", env.StringFromEnv("ARGOCD_DEX_SERVER_LOGLEVEL", "info"), "Set the logging level. One of: debug|info|warn|error")
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", "text", "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().StringVarP(&out, "out", "o", "", "Output to the specified file instead of stdout")
command.Flags().BoolVar(&disableTLS, "disable-tls", env.ParseBoolFromEnv("ARGOCD_DEX_SERVER_DISABLE_TLS", false), "Disable TLS on the HTTP endpoint")
return &command
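
The two flag registrations above differ only in whether the default is seeded from an environment variable. A dependency-free sketch of that fallback pattern; stringFromEnv here is a hypothetical stand-in for the env.StringFromEnv helper, whose exact semantics may differ:

package main

import (
    "fmt"
    "os"
)

// stringFromEnv is a hypothetical stand-in for env.StringFromEnv:
// return the variable's value if set and non-empty, else the default.
func stringFromEnv(key, def string) string {
    if v, ok := os.LookupEnv(key); ok && v != "" {
        return v
    }
    return def
}

func main() {
    logLevel := stringFromEnv("ARGOCD_DEX_SERVER_LOGLEVEL", "info")
    fmt.Println(logLevel) // "info" unless the variable is set
}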

View File

@@ -1,14 +1,10 @@
package commands
import (
"context"
"fmt"
"net/http"
"os"
"os/signal"
"strings"
"sync"
"syscall"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
@@ -66,8 +62,7 @@ func NewCommand() *cobra.Command {
Use: "controller",
Short: "Starts Argo CD Notifications controller",
RunE: func(c *cobra.Command, args []string) error {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
ctx := c.Context()
vers := common.GetVersion()
namespace, _, err := clientConfig.Namespace()
@@ -151,17 +146,6 @@ func NewCommand() *cobra.Command {
return fmt.Errorf("failed to initialize controller: %w", err)
}
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
defer wg.Done()
s := <-sigCh
log.Printf("got signal %v, attempting graceful shutdown", s)
cancel()
}()
go ctrl.Run(ctx, processorsCount)
<-ctx.Done()
return nil
@@ -175,7 +159,7 @@ func NewCommand() *cobra.Command {
command.Flags().StringVar(&logFormat, "logformat", env.StringFromEnv("ARGOCD_NOTIFICATIONS_CONTROLLER_LOGFORMAT", "text"), "Set the logging format. One of: text|json")
command.Flags().IntVar(&metricsPort, "metrics-port", defaultMetricsPort, "Metrics port")
command.Flags().StringVar(&argocdRepoServer, "argocd-repo-server", common.DefaultRepoServerAddr, "Argo CD repo server address")
command.Flags().BoolVar(&argocdRepoServerPlaintext, "argocd-repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_NOTIFICATION_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Use a plaintext client (non-TLS) to connect to repository server")
command.Flags().BoolVar(&argocdRepoServerPlaintext, "argocd-repo-server-plaintext", false, "Use a plaintext client (non-TLS) to connect to repository server")
command.Flags().BoolVar(&argocdRepoServerStrictTLS, "argocd-repo-server-strict-tls", false, "Perform strict validation of TLS certificates when connecting to repo server")
command.Flags().StringVar(&configMapName, "config-map-name", "argocd-notifications-cm", "Set notifications ConfigMap name")
command.Flags().StringVar(&secretName, "secret-name", "argocd-notifications-secret", "Set notifications Secret name")
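
Dropping the hand-rolled signal goroutine in favor of c.Context() works when the root command is executed with a signal-aware context, so that <-ctx.Done() fires on SIGINT/SIGTERM. A minimal sketch of that wiring, assuming the signal hookup happens in main; the command name is a placeholder:

package main

import (
    "context"
    "log"
    "os"
    "os/signal"
    "syscall"

    "github.com/spf13/cobra"
)

func main() {
    // Cancel the context on SIGINT/SIGTERM; cobra propagates it to RunE via c.Context().
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
    defer stop()

    cmd := &cobra.Command{
        Use: "controller", // placeholder
        RunE: func(c *cobra.Command, args []string) error {
            <-c.Context().Done() // returns when a shutdown signal arrives
            return nil
        },
    }
    if err := cmd.ExecuteContext(ctx); err != nil {
        log.Fatal(err)
    }
}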

View File

@@ -8,8 +8,6 @@ import (
"time"
"github.com/redis/go-redis/v9"
"k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"github.com/argoproj/pkg/stats"
log "github.com/sirupsen/logrus"
@@ -27,7 +25,6 @@ import (
reposervercache "github.com/argoproj/argo-cd/v2/reposerver/cache"
"github.com/argoproj/argo-cd/v2/server"
servercache "github.com/argoproj/argo-cd/v2/server/cache"
"github.com/argoproj/argo-cd/v2/util/argo"
cacheutil "github.com/argoproj/argo-cd/v2/util/cache"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/dex"
@@ -92,9 +89,6 @@ func NewCommand() *cobra.Command {
scmRootCAPath string
allowedScmProviders []string
enableScmProviders bool
// argocd k8s event logging flag
enableK8sEvent []string
)
command := &cobra.Command{
Use: cliName,
@@ -148,14 +142,9 @@ func NewCommand() *cobra.Command {
dynamicClient := dynamic.NewForConfigOrDie(config)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
_ = v1alpha1.AddToScheme(scheme)
controllerClient, err := client.New(config, client.Options{Scheme: scheme})
controllerClient, err := client.New(config, client.Options{})
errors.CheckError(err)
controllerClient = client.NewDryRunClient(controllerClient)
controllerClient = client.NewNamespacedClient(controllerClient, namespace)
// Load CA information to use for validating connections to the
// repository server, if strict TLS validation was requested.
@@ -234,7 +223,6 @@ func NewCommand() *cobra.Command {
ApplicationNamespaces: applicationNamespaces,
EnableProxyExtension: enableProxyExtension,
WebhookParallelism: webhookParallelism,
EnableK8sEvent: enableK8sEvent,
}
appsetOpts := server.ApplicationSetOpts{
@@ -309,7 +297,6 @@ func NewCommand() *cobra.Command {
command.Flags().StringSliceVar(&applicationNamespaces, "application-namespaces", env.StringsFromEnv("ARGOCD_APPLICATION_NAMESPACES", []string{}, ","), "List of additional namespaces where application resources can be managed in")
command.Flags().BoolVar(&enableProxyExtension, "enable-proxy-extension", env.ParseBoolFromEnv("ARGOCD_SERVER_ENABLE_PROXY_EXTENSION", false), "Enable Proxy Extension feature")
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_SERVER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&enableK8sEvent, "enable-k8s-event", env.StringsFromEnv("ARGOCD_ENABLE_K8S_EVENT", argo.DefaultEnableEventList(), ","), "Enable ArgoCD to use k8s event. For disabling all events, set the value as `none`. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated)")
// Flags related to the applicationSet component.
command.Flags().StringVar(&scmRootCAPath, "appset-scm-root-ca-path", env.StringFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_SCM_ROOT_CA_PATH", ""), "Provide Root CA Path for self-signed TLS Certificates")

View File

@@ -1,13 +1,10 @@
package admin
import (
"context"
"reflect"
"strings"
"github.com/spf13/cobra"
apiv1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
@@ -86,12 +83,11 @@ func newArgoCDClientsets(config *rest.Config, namespace string) *argoCDClientset
dynamicIf, err := dynamic.NewForConfig(config)
errors.CheckError(err)
return &argoCDClientsets{
configMaps: dynamicIf.Resource(configMapResource).Namespace(namespace),
secrets: dynamicIf.Resource(secretResource).Namespace(namespace),
// To support applications and applicationsets in any namespace we will watch all namespaces and filter them afterwards
applications: dynamicIf.Resource(applicationsResource),
configMaps: dynamicIf.Resource(configMapResource).Namespace(namespace),
secrets: dynamicIf.Resource(secretResource).Namespace(namespace),
applications: dynamicIf.Resource(applicationsResource).Namespace(namespace),
projects: dynamicIf.Resource(appprojectsResource).Namespace(namespace),
applicationSets: dynamicIf.Resource(appplicationSetResource),
applicationSets: dynamicIf.Resource(appplicationSetResource).Namespace(namespace),
}
}
@@ -190,11 +186,7 @@ func isArgoCDConfigMap(name string) bool {
// specsEqual returns if the spec, data, labels, annotations, and finalizers of the two
// supplied objects are equal, indicating that no update is necessary during importing
func specsEqual(left, right unstructured.Unstructured) bool {
leftAnnotation := left.GetAnnotations()
rightAnnotation := right.GetAnnotations()
delete(leftAnnotation, apiv1.LastAppliedConfigAnnotation)
delete(rightAnnotation, apiv1.LastAppliedConfigAnnotation)
if !reflect.DeepEqual(leftAnnotation, rightAnnotation) {
if !reflect.DeepEqual(left.GetAnnotations(), right.GetAnnotations()) {
return false
}
if !reflect.DeepEqual(left.GetLabels(), right.GetLabels()) {
@@ -226,52 +218,3 @@ func specsEqual(left, right unstructured.Unstructured) bool {
}
return false
}
type argocdAdditonalNamespaces struct {
applicationNamespaces []string
applicationsetNamespaces []string
}
const (
applicationsetNamespacesCmdParamsKey = "applicationsetcontroller.namespaces"
applicationNamespacesCmdParamsKey = "application.namespaces"
)
// Get additional namespaces from argocd-cmd-params
func getAdditionalNamespaces(ctx context.Context, argocdClientsets *argoCDClientsets) *argocdAdditonalNamespaces {
applicationNamespaces := make([]string, 0)
applicationsetNamespaces := make([]string, 0)
un, err := argocdClientsets.configMaps.Get(ctx, common.ArgoCDCmdParamsConfigMapName, v1.GetOptions{})
errors.CheckError(err)
var cm apiv1.ConfigMap
err = runtime.DefaultUnstructuredConverter.FromUnstructured(un.Object, &cm)
errors.CheckError(err)
namespacesListFromString := func(namespaces string) []string {
listOfNamespaces := []string{}
ss := strings.Split(namespaces, ",")
for _, namespace := range ss {
if namespace != "" {
listOfNamespaces = append(listOfNamespaces, strings.TrimSpace(namespace))
}
}
return listOfNamespaces
}
if strNamespaces, ok := cm.Data[applicationNamespacesCmdParamsKey]; ok {
applicationNamespaces = namespacesListFromString(strNamespaces)
}
if strNamespaces, ok := cm.Data[applicationsetNamespacesCmdParamsKey]; ok {
applicationsetNamespaces = namespacesListFromString(strNamespaces)
}
return &argocdAdditonalNamespaces{
applicationNamespaces: applicationNamespaces,
applicationsetNamespaces: applicationsetNamespaces,
}
}
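
For clarity, the comma-splitting helper above trims surrounding whitespace and skips entries that are empty before trimming, so " foo , bar* " yields [foo bar*]. A self-contained sketch mirroring that behavior:

package main

import (
    "fmt"
    "strings"
)

// namespacesListFromString mirrors the helper above: split on commas,
// trim surrounding whitespace, and skip empty entries.
func namespacesListFromString(namespaces string) []string {
    listOfNamespaces := []string{}
    for _, namespace := range strings.Split(namespaces, ",") {
        if namespace != "" {
            listOfNamespaces = append(listOfNamespaces, strings.TrimSpace(namespace))
        }
    }
    return listOfNamespaces
}

func main() {
    fmt.Println(namespacesListFromString(" foo , bar* ")) // [foo bar*]
    fmt.Println(namespacesListFromString(""))             // []
}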

View File

@@ -1,75 +0,0 @@
package admin
import (
"context"
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
dynfake "k8s.io/client-go/dynamic/fake"
)
func TestGetAdditionalNamespaces(t *testing.T) {
createArgoCDCmdCMWithKeys := func(data map[string]interface{}) *unstructured.Unstructured {
return &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "v1",
"kind": "ConfigMap",
"metadata": map[string]interface{}{
"name": "argocd-cmd-params-cm",
"namespace": "argocd",
},
"data": data,
},
}
}
testCases := []struct {
CmdParamsKeys map[string]interface{}
expected argocdAdditonalNamespaces
description string
}{
{
description: "empty configmap should return no additional namespaces",
CmdParamsKeys: map[string]interface{}{},
expected: argocdAdditonalNamespaces{applicationNamespaces: []string{}, applicationsetNamespaces: []string{}},
},
{
description: "empty strings in respective keys in cm should return empty namespace list",
CmdParamsKeys: map[string]interface{}{applicationsetNamespacesCmdParamsKey: "", applicationNamespacesCmdParamsKey: ""},
expected: argocdAdditonalNamespaces{applicationNamespaces: []string{}, applicationsetNamespaces: []string{}},
},
{
description: "when only one of the keys in the cm is set only correct respective list of namespaces should be returned",
CmdParamsKeys: map[string]interface{}{applicationNamespacesCmdParamsKey: "foo, bar*"},
expected: argocdAdditonalNamespaces{applicationsetNamespaces: []string{}, applicationNamespaces: []string{"foo", "bar*"}},
},
{
description: "when only one of the keys in the cm is set only correct respective list of namespaces should be returned",
CmdParamsKeys: map[string]interface{}{applicationsetNamespacesCmdParamsKey: "foo, bar*"},
expected: argocdAdditonalNamespaces{applicationNamespaces: []string{}, applicationsetNamespaces: []string{"foo", "bar*"}},
},
{
description: "whitespaces are removed for both multiple and single namespace",
CmdParamsKeys: map[string]interface{}{applicationNamespacesCmdParamsKey: " bar ", applicationsetNamespacesCmdParamsKey: " foo , bar* "},
expected: argocdAdditonalNamespaces{applicationNamespaces: []string{"bar"}, applicationsetNamespaces: []string{"foo", "bar*"}},
},
}
for _, c := range testCases {
fakeDynClient := dynfake.NewSimpleDynamicClient(runtime.NewScheme(), createArgoCDCmdCMWithKeys(c.CmdParamsKeys))
argoCDClientsets := &argoCDClientsets{
configMaps: fakeDynClient.Resource(configMapResource).Namespace("argocd"),
applications: fakeDynClient.Resource(schema.GroupVersionResource{}),
applicationSets: fakeDynClient.Resource(schema.GroupVersionResource{}),
secrets: fakeDynClient.Resource(schema.GroupVersionResource{}),
projects: fakeDynClient.Resource(schema.GroupVersionResource{}),
}
result := getAdditionalNamespaces(context.TODO(), argoCDClientsets)
assert.Equal(t, c.expected, *result)
}
}

View File

@@ -387,7 +387,7 @@ func reconcileApplications(
return true
}, func(r *http.Request) error {
return nil
}, []string{}, []string{})
}, []string{})
if err != nil {
return nil, err
}

View File

@@ -20,16 +20,13 @@ import (
"github.com/argoproj/argo-cd/v2/pkg/apis/application"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/errors"
secutil "github.com/argoproj/argo-cd/v2/util/security"
)
// NewExportCommand defines a new command for exporting Kubernetes and Argo CD resources.
func NewExportCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
out string
applicationNamespaces []string
applicationsetNamespaces []string
clientConfig clientcmd.ClientConfig
out string
)
command := cobra.Command{
Use: "export",
@@ -61,47 +58,34 @@ func NewExportCommand() *cobra.Command {
acdClients := newArgoCDClientsets(config, namespace)
acdConfigMap, err := acdClients.configMaps.Get(ctx, common.ArgoCDConfigMapName, v1.GetOptions{})
errors.CheckError(err)
export(writer, *acdConfigMap, namespace)
export(writer, *acdConfigMap)
acdRBACConfigMap, err := acdClients.configMaps.Get(ctx, common.ArgoCDRBACConfigMapName, v1.GetOptions{})
errors.CheckError(err)
export(writer, *acdRBACConfigMap, namespace)
export(writer, *acdRBACConfigMap)
acdKnownHostsConfigMap, err := acdClients.configMaps.Get(ctx, common.ArgoCDKnownHostsConfigMapName, v1.GetOptions{})
errors.CheckError(err)
export(writer, *acdKnownHostsConfigMap, namespace)
export(writer, *acdKnownHostsConfigMap)
acdTLSCertsConfigMap, err := acdClients.configMaps.Get(ctx, common.ArgoCDTLSCertsConfigMapName, v1.GetOptions{})
errors.CheckError(err)
export(writer, *acdTLSCertsConfigMap, namespace)
export(writer, *acdTLSCertsConfigMap)
referencedSecrets := getReferencedSecrets(*acdConfigMap)
secrets, err := acdClients.secrets.List(ctx, v1.ListOptions{})
errors.CheckError(err)
for _, secret := range secrets.Items {
if isArgoCDSecret(referencedSecrets, secret) {
export(writer, secret, namespace)
export(writer, secret)
}
}
projects, err := acdClients.projects.List(ctx, v1.ListOptions{})
errors.CheckError(err)
for _, proj := range projects.Items {
export(writer, proj, namespace)
export(writer, proj)
}
additionalNamespaces := getAdditionalNamespaces(ctx, acdClients)
if len(applicationNamespaces) == 0 {
applicationNamespaces = additionalNamespaces.applicationNamespaces
}
if len(applicationsetNamespaces) == 0 {
applicationsetNamespaces = additionalNamespaces.applicationsetNamespaces
}
applications, err := acdClients.applications.List(ctx, v1.ListOptions{})
errors.CheckError(err)
for _, app := range applications.Items {
// Export application only if it is in one of the enabled namespaces
if secutil.IsNamespaceEnabled(app.GetNamespace(), namespace, applicationNamespaces) {
export(writer, app, namespace)
}
export(writer, app)
}
applicationSets, err := acdClients.applicationSets.List(ctx, v1.ListOptions{})
if err != nil && !apierr.IsNotFound(err) {
@@ -113,9 +97,7 @@ func NewExportCommand() *cobra.Command {
}
if applicationSets != nil {
for _, appSet := range applicationSets.Items {
if secutil.IsNamespaceEnabled(appSet.GetNamespace(), namespace, applicationsetNamespaces) {
export(writer, appSet, namespace)
}
export(writer, appSet)
}
}
},
@@ -123,22 +105,18 @@ func NewExportCommand() *cobra.Command {
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().StringVarP(&out, "out", "o", "-", "Output to the specified file instead of stdout")
command.Flags().StringSliceVarP(&applicationNamespaces, "application-namespaces", "", []string{}, fmt.Sprintf("Comma separated list of namespace globs to export applications from. If not provided, the value from '%s' in %s will be used; if it's not defined, only applications from the Argo CD namespace will be exported", applicationNamespacesCmdParamsKey, common.ArgoCDCmdParamsConfigMapName))
command.Flags().StringSliceVarP(&applicationsetNamespaces, "applicationset-namespaces", "", []string{}, fmt.Sprintf("Comma separated list of namespace globs to export applicationsets from. If not provided, the value from '%s' in %s will be used; if it's not defined, only applicationsets from the Argo CD namespace will be exported", applicationsetNamespacesCmdParamsKey, common.ArgoCDCmdParamsConfigMapName))
return &command
}
// NewImportCommand defines a new command for importing Kubernetes and Argo CD resources.
func NewImportCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
prune bool
dryRun bool
verbose bool
stopOperation bool
ignoreTracking bool
applicationNamespaces []string
applicationsetNamespaces []string
clientConfig clientcmd.ClientConfig
prune bool
dryRun bool
verbose bool
stopOperation bool
)
command := cobra.Command{
Use: "import SOURCE",
@@ -157,8 +135,6 @@ func NewImportCommand() *cobra.Command {
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
acdClients := newArgoCDClientsets(config, namespace)
client, err := dynamic.NewForConfig(config)
errors.CheckError(err)
var input []byte
if in := args[0]; in == "-" {
@@ -172,15 +148,6 @@ func NewImportCommand() *cobra.Command {
dryRunMsg = " (dry run)"
}
additionalNamespaces := getAdditionalNamespaces(ctx, acdClients)
if len(applicationNamespaces) == 0 {
applicationNamespaces = additionalNamespaces.applicationNamespaces
}
if len(applicationsetNamespaces) == 0 {
applicationsetNamespaces = additionalNamespaces.applicationsetNamespaces
}
// pruneObjects tracks live objects and their current resource versions. Any remaining
// items in this map indicate the resource should be pruned since it no longer appears
// in the backup
@@ -192,7 +159,7 @@ func NewImportCommand() *cobra.Command {
var referencedSecrets map[string]bool
for _, cm := range configMaps.Items {
if isArgoCDConfigMap(cm.GetName()) {
pruneObjects[kube.ResourceKey{Group: "", Kind: "ConfigMap", Name: cm.GetName(), Namespace: cm.GetNamespace()}] = cm
pruneObjects[kube.ResourceKey{Group: "", Kind: "ConfigMap", Name: cm.GetName()}] = cm
}
if cm.GetName() == common.ArgoCDConfigMapName {
referencedSecrets = getReferencedSecrets(cm)
@@ -203,20 +170,18 @@ func NewImportCommand() *cobra.Command {
errors.CheckError(err)
for _, secret := range secrets.Items {
if isArgoCDSecret(referencedSecrets, secret) {
pruneObjects[kube.ResourceKey{Group: "", Kind: "Secret", Name: secret.GetName(), Namespace: secret.GetNamespace()}] = secret
pruneObjects[kube.ResourceKey{Group: "", Kind: "Secret", Name: secret.GetName()}] = secret
}
}
applications, err := acdClients.applications.List(ctx, v1.ListOptions{})
errors.CheckError(err)
for _, app := range applications.Items {
if secutil.IsNamespaceEnabled(app.GetNamespace(), namespace, applicationNamespaces) {
pruneObjects[kube.ResourceKey{Group: application.Group, Kind: application.ApplicationKind, Name: app.GetName(), Namespace: app.GetNamespace()}] = app
}
pruneObjects[kube.ResourceKey{Group: application.Group, Kind: application.ApplicationKind, Name: app.GetName()}] = app
}
projects, err := acdClients.projects.List(ctx, v1.ListOptions{})
errors.CheckError(err)
for _, proj := range projects.Items {
pruneObjects[kube.ResourceKey{Group: application.Group, Kind: application.AppProjectKind, Name: proj.GetName(), Namespace: proj.GetNamespace()}] = proj
pruneObjects[kube.ResourceKey{Group: application.Group, Kind: application.AppProjectKind, Name: proj.GetName()}] = proj
}
applicationSets, err := acdClients.applicationSets.List(ctx, v1.ListOptions{})
if apierr.IsForbidden(err) || apierr.IsNotFound(err) {
@@ -226,9 +191,7 @@ func NewImportCommand() *cobra.Command {
}
if applicationSets != nil {
for _, appSet := range applicationSets.Items {
if secutil.IsNamespaceEnabled(appSet.GetNamespace(), namespace, applicationsetNamespaces) {
pruneObjects[kube.ResourceKey{Group: application.Group, Kind: application.ApplicationSetKind, Name: appSet.GetName(), Namespace: appSet.GetNamespace()}] = appSet
}
pruneObjects[kube.ResourceKey{Group: application.Group, Kind: application.ApplicationSetKind, Name: appSet.GetName()}] = appSet
}
}
@@ -237,41 +200,22 @@ func NewImportCommand() *cobra.Command {
errors.CheckError(err)
for _, bakObj := range backupObjects {
gvk := bakObj.GroupVersionKind()
// For objects without a namespace, assume they belong in the Argo CD namespace
if bakObj.GetNamespace() == "" {
bakObj.SetNamespace(namespace)
}
key := kube.ResourceKey{Group: gvk.Group, Kind: gvk.Kind, Name: bakObj.GetName(), Namespace: bakObj.GetNamespace()}
key := kube.ResourceKey{Group: gvk.Group, Kind: gvk.Kind, Name: bakObj.GetName()}
liveObj, exists := pruneObjects[key]
delete(pruneObjects, key)
var dynClient dynamic.ResourceInterface
switch bakObj.GetKind() {
case "Secret":
dynClient = client.Resource(secretResource).Namespace(bakObj.GetNamespace())
dynClient = acdClients.secrets
case "ConfigMap":
dynClient = client.Resource(configMapResource).Namespace(bakObj.GetNamespace())
dynClient = acdClients.configMaps
case application.AppProjectKind:
dynClient = client.Resource(appprojectsResource).Namespace(bakObj.GetNamespace())
dynClient = acdClients.projects
case application.ApplicationKind:
dynClient = client.Resource(applicationsResource).Namespace(bakObj.GetNamespace())
// If the application is not in one of the allowed namespaces, do not import it
if !secutil.IsNamespaceEnabled(bakObj.GetNamespace(), namespace, applicationNamespaces) {
continue
}
dynClient = acdClients.applications
case application.ApplicationSetKind:
dynClient = client.Resource(appplicationSetResource).Namespace(bakObj.GetNamespace())
// If the applicationset is not in one of the allowed namespaces, do not import it
if !secutil.IsNamespaceEnabled(bakObj.GetNamespace(), namespace, applicationsetNamespaces) {
continue
}
dynClient = acdClients.applicationSets
}
// If there is a live object, remove the tracking annotations/label that might conflict
// when Argo CD is managed with an application.
if ignoreTracking && exists {
updateTracking(bakObj, &liveObj)
}
if !exists {
isForbidden := false
if !dryRun {
@@ -284,7 +228,7 @@ func NewImportCommand() *cobra.Command {
}
}
if !isForbidden {
fmt.Printf("%s/%s %s in namespace %s created%s\n", gvk.Group, gvk.Kind, bakObj.GetName(), bakObj.GetNamespace(), dryRunMsg)
fmt.Printf("%s/%s %s created%s\n", gvk.Group, gvk.Kind, bakObj.GetName(), dryRunMsg)
}
} else if specsEqual(*bakObj, liveObj) && checkAppHasNoNeedToStopOperation(liveObj, stopOperation) {
if verbose {
@@ -303,7 +247,7 @@ func NewImportCommand() *cobra.Command {
}
}
if !isForbidden {
fmt.Printf("%s/%s %s in namespace %s updated%s\n", gvk.Group, gvk.Kind, bakObj.GetName(), bakObj.GetNamespace(), dryRunMsg)
fmt.Printf("%s/%s %s updated%s\n", gvk.Group, gvk.Kind, bakObj.GetName(), dryRunMsg)
}
}
}
@@ -314,11 +258,11 @@ func NewImportCommand() *cobra.Command {
var dynClient dynamic.ResourceInterface
switch key.Kind {
case "Secret":
dynClient = client.Resource(secretResource).Namespace(liveObj.GetNamespace())
dynClient = acdClients.secrets
case application.AppProjectKind:
dynClient = client.Resource(appprojectsResource).Namespace(liveObj.GetNamespace())
dynClient = acdClients.projects
case application.ApplicationKind:
dynClient = client.Resource(applicationsResource).Namespace(liveObj.GetNamespace())
dynClient = acdClients.applications
if !dryRun {
if finalizers := liveObj.GetFinalizers(); len(finalizers) > 0 {
newLive := liveObj.DeepCopy()
@@ -330,7 +274,7 @@ func NewImportCommand() *cobra.Command {
}
}
case application.ApplicationSetKind:
dynClient = client.Resource(appplicationSetResource).Namespace(liveObj.GetNamespace())
dynClient = acdClients.applicationSets
default:
log.Fatalf("Unexpected kind '%s' in prune list", key.Kind)
}
@@ -357,11 +301,8 @@ func NewImportCommand() *cobra.Command {
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().BoolVar(&dryRun, "dry-run", false, "Print what will be performed")
command.Flags().BoolVar(&prune, "prune", false, "Prune secrets, applications and projects which do not appear in the backup")
command.Flags().BoolVar(&ignoreTracking, "ignore-tracking", false, "Do not update the tracking annotation if the resource is already tracked")
command.Flags().BoolVar(&verbose, "verbose", false, "Verbose output (versus only changed output)")
command.Flags().BoolVar(&stopOperation, "stop-operation", false, "Stop any existing operations")
command.Flags().StringSliceVarP(&applicationNamespaces, "application-namespaces", "", []string{}, fmt.Sprintf("Comma separated list of namespace globs to which import of applications is allowed. If not provided, the value from '%s' in %s will be used; if it's not defined, only applications without an explicit namespace will be imported to the Argo CD namespace", applicationNamespacesCmdParamsKey, common.ArgoCDCmdParamsConfigMapName))
command.Flags().StringSliceVarP(&applicationsetNamespaces, "applicationset-namespaces", "", []string{}, fmt.Sprintf("Comma separated list of namespace globs to which import of applicationsets is allowed. If not provided, the value from '%s' in %s will be used; if it's not defined, only applicationsets without an explicit namespace will be imported to the Argo CD namespace", applicationsetNamespacesCmdParamsKey, common.ArgoCDCmdParamsConfigMapName))
return &command
}
@@ -379,14 +320,13 @@ func checkAppHasNoNeedToStopOperation(liveObj unstructured.Unstructured, stopOpe
}
// export writes the unstructured object and removes extraneous cruft from output before writing
func export(w io.Writer, un unstructured.Unstructured, argocdNamespace string) {
func export(w io.Writer, un unstructured.Unstructured) {
name := un.GetName()
finalizers := un.GetFinalizers()
apiVersion := un.GetAPIVersion()
kind := un.GetKind()
labels := un.GetLabels()
annotations := un.GetAnnotations()
namespace := un.GetNamespace()
unstructured.RemoveNestedField(un.Object, "metadata")
un.SetName(name)
un.SetFinalizers(finalizers)
@@ -394,9 +334,6 @@ func export(w io.Writer, un unstructured.Unstructured, argocdNamespace string) {
un.SetKind(kind)
un.SetLabels(labels)
un.SetAnnotations(annotations)
if namespace != argocdNamespace {
un.SetNamespace(namespace)
}
data, err := yaml.Marshal(un.Object)
errors.CheckError(err)
_, err = w.Write(data)
@@ -431,32 +368,3 @@ func updateLive(bak, live *unstructured.Unstructured, stopOperation bool) *unstr
}
return newLive
}
// updateTracking will update the tracking label and annotation in the bak resources to the
// value of the live resource.
func updateTracking(bak, live *unstructured.Unstructured) {
// update the common annotation
bakAnnotations := bak.GetAnnotations()
liveAnnotations := live.GetAnnotations()
if liveAnnotations != nil && bakAnnotations != nil {
if v, ok := liveAnnotations[common.AnnotationKeyAppInstance]; ok {
if _, ok := bakAnnotations[common.AnnotationKeyAppInstance]; ok {
bakAnnotations[common.AnnotationKeyAppInstance] = v
bak.SetAnnotations(bakAnnotations)
}
}
}
// update the common label
// A custom label can be set, but it is impossible to know which instance is managing the application
bakLabels := bak.GetLabels()
liveLabels := live.GetLabels()
if liveLabels != nil && bakLabels != nil {
if v, ok := liveLabels[common.LabelKeyAppInstance]; ok {
if _, ok := bakLabels[common.LabelKeyAppInstance]; ok {
bakLabels[common.LabelKeyAppInstance] = v
bak.SetLabels(bakLabels)
}
}
}
}

View File
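
As a reading aid for the import flow above: live objects are keyed into a map, each object found in the backup is deleted from the map, and whatever is left over is the prune set. A minimal sketch of that pattern, where resourceKey is a stand-in for gitops-engine's kube.ResourceKey:

package main

import "fmt"

// resourceKey stands in for kube.ResourceKey from gitops-engine.
type resourceKey struct {
	Group, Kind, Name string
}

func main() {
	// live objects currently in the cluster, keyed by (group, kind, name)
	live := map[resourceKey]string{
		{Group: "", Kind: "ConfigMap", Name: "argocd-cm"}: "v1",
		{Group: "", Kind: "Secret", Name: "stale-secret"}: "v7",
	}
	// objects present in the backup being imported
	backup := []resourceKey{
		{Group: "", Kind: "ConfigMap", Name: "argocd-cm"},
	}
	for _, key := range backup {
		delete(live, key) // found in the backup: keep it
	}
	for key := range live {
		fmt.Printf("would prune %s/%s\n", key.Kind, key.Name) // Secret/stale-secret
	}
}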

@@ -1,87 +0,0 @@
package admin
import (
"testing"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/stretchr/testify/assert"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/v2/common"
)
func newBackupObject(trackingValue string, trackingLabel bool, trackingAnnotation bool) *unstructured.Unstructured {
cm := v1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "my-configmap",
Namespace: "namespace",
},
Data: map[string]string{
"foo": "bar",
},
}
if trackingLabel {
cm.SetLabels(map[string]string{
common.LabelKeyAppInstance: trackingValue,
})
}
if trackingAnnotation {
cm.SetAnnotations(map[string]string{
common.AnnotationKeyAppInstance: trackingValue,
})
}
return kube.MustToUnstructured(&cm)
}
func Test_updateTracking(t *testing.T) {
type args struct {
bak *unstructured.Unstructured
live *unstructured.Unstructured
}
tests := []struct {
name string
args args
expected *unstructured.Unstructured
}{
{
name: "update annotation when present in live",
args: args{
bak: newBackupObject("bak", false, true),
live: newBackupObject("live", false, true),
},
expected: newBackupObject("live", false, true),
},
{
name: "update default label when present in live",
args: args{
bak: newBackupObject("bak", true, true),
live: newBackupObject("live", true, true),
},
expected: newBackupObject("live", true, true),
},
{
name: "do not update if live object does not have tracking",
args: args{
bak: newBackupObject("bak", true, true),
live: newBackupObject("live", false, false),
},
expected: newBackupObject("bak", true, true),
},
{
name: "do not update if bak object does not have tracking",
args: args{
bak: newBackupObject("bak", false, false),
live: newBackupObject("live", true, true),
},
expected: newBackupObject("bak", false, false),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
updateTracking(tt.args.bak, tt.args.live)
assert.Equal(t, tt.expected, tt.args.bak)
})
}
}

View File

@@ -35,8 +35,7 @@ func NewNotificationsCommand() *cobra.Command {
"notifications",
"argocd admin notifications",
applications,
settings.GetFactorySettingsForCLI(&argocdService, "argocd-notifications-secret", "argocd-notifications-cm", false),
func(clientConfig clientcmd.ClientConfig) {
settings.GetFactorySettings(argocdService, "argocd-notifications-secret", "argocd-notifications-cm", false), func(clientConfig clientcmd.ClientConfig) {
k8sCfg, err := clientConfig.ClientConfig()
if err != nil {
log.Fatalf("Failed to parse k8s config: %v", err)

View File

@@ -159,7 +159,7 @@ func NewSettingsCommand() *cobra.Command {
command.AddCommand(NewValidateSettingsCommand(&opts))
command.AddCommand(NewResourceOverridesCommand(&opts))
command.AddCommand(NewRBACCommand(&opts))
command.AddCommand(NewRBACCommand())
opts.clientConfig = cli.AddKubectlFlagsToCmd(command)
command.PersistentFlags().StringVar(&opts.argocdCMPath, "argocd-cm-path", "", "Path to local argocd-cm.yaml file")

View File

@@ -18,7 +18,6 @@ import (
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/assets"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/errors"
"github.com/argoproj/argo-cd/v2/util/rbac"
)
@@ -29,7 +28,7 @@ type rbacTrait struct {
}
// Provide a mapping of short-hand resource names to their RBAC counterparts
var resourceMap = map[string]string{
var resourceMap map[string]string = map[string]string{
"account": rbacpolicy.ResourceAccounts,
"app": rbacpolicy.ResourceApplications,
"apps": rbacpolicy.ResourceApplications,
@@ -53,17 +52,8 @@ var resourceMap = map[string]string{
"repository": rbacpolicy.ResourceRepositories,
}
var projectScoped = map[string]bool{
rbacpolicy.ResourceApplications: true,
rbacpolicy.ResourceApplicationSets: true,
rbacpolicy.ResourceLogs: true,
rbacpolicy.ResourceExec: true,
rbacpolicy.ResourceClusters: true,
rbacpolicy.ResourceRepositories: true,
}
// List of allowed RBAC resources
var validRBACResourcesActions = map[string]actionTraitMap{
var validRBACResourcesActions map[string]actionTraitMap = map[string]actionTraitMap{
rbacpolicy.ResourceAccounts: accountsActions,
rbacpolicy.ResourceApplications: applicationsActions,
rbacpolicy.ResourceApplicationSets: defaultCRUDActions,
@@ -119,7 +109,7 @@ var extensionActions = actionTraitMap{
}
// NewRBACCommand is the command for 'rbac'
func NewRBACCommand(cmdCtx commandContext) *cobra.Command {
func NewRBACCommand() *cobra.Command {
command := &cobra.Command{
Use: "rbac",
Short: "Validate and test RBAC configuration",
@@ -127,13 +117,13 @@ func NewRBACCommand(cmdCtx commandContext) *cobra.Command {
c.HelpFunc()(c, args)
},
}
command.AddCommand(NewRBACCanCommand(cmdCtx))
command.AddCommand(NewRBACCanCommand())
command.AddCommand(NewRBACValidateCommand())
return command
}
// NewRBACCanCommand is the command for 'rbac can'
func NewRBACCanCommand(cmdCtx commandContext) *cobra.Command {
// NewRBACCanCommand is the command for 'rbac can-role'
func NewRBACCanCommand() *cobra.Command {
var (
policyFile string
defaultRole string
@@ -185,6 +175,11 @@ argocd admin settings rbac can someuser create application 'default/app' --defau
subResource = args[3]
}
userPolicy := ""
builtinPolicy := ""
var newDefaultRole string
namespace, nsOverride, err := clientConfig.Namespace()
if err != nil {
log.Fatalf("could not create k8s client: %v", err)
@@ -208,7 +203,6 @@ argocd admin settings rbac can someuser create application 'default/app' --defau
userPolicy, newDefaultRole, matchMode := getPolicy(ctx, policyFile, realClientset, namespace)
// Use built-in policy as augmentation if requested
builtinPolicy := ""
if useBuiltin {
builtinPolicy = assets.BuiltinPolicyCSV
}
@@ -219,30 +213,7 @@ argocd admin settings rbac can someuser create application 'default/app' --defau
defaultRole = newDefaultRole
}
// Logs RBAC will be enforced only if an internal var serverRBACLogEnforceEnable
// (representing the server.rbac.log.enforce.enable setting in argocd-cm)
// is defined and has a "true" value.
// Otherwise, no RBAC enforcement for logs will take place (meaning a 'can' request on a logs resource will result in "yes",
// even if there is no explicit RBAC allow, or if there is an explicit RBAC deny)
var isLogRbacEnforced func() bool
if nsOverride && policyFile == "" {
if resolveRBACResourceName(resource) == rbacpolicy.ResourceLogs {
isLogRbacEnforced = func() bool {
if opts, ok := cmdCtx.(*settingsOpts); ok {
opts.loadClusterSettings = true
opts.clientConfig = clientConfig
settingsMgr, err := opts.createSettingsManager(ctx)
errors.CheckError(err)
logEnforceEnable, err := settingsMgr.GetServerRBACLogEnforceEnable()
errors.CheckError(err)
return logEnforceEnable
}
return false
}
}
}
res := checkPolicy(subject, action, resource, subResource, builtinPolicy, userPolicy, defaultRole, matchMode, strict, isLogRbacEnforced)
res := checkPolicy(subject, action, resource, subResource, builtinPolicy, userPolicy, defaultRole, matchMode, strict)
if res {
if !quiet {
fmt.Println("Yes")
@@ -388,16 +359,20 @@ func getPolicyFromFile(policyFile string) (string, string, string, error) {
// Retrieve policy information from a ConfigMap
func getPolicyFromConfigMap(cm *corev1.ConfigMap) (string, string, string) {
var (
userPolicy string
defaultRole string
ok bool
)
userPolicy, ok = cm.Data[rbac.ConfigMapPolicyCSVKey]
if !ok {
userPolicy = ""
}
defaultRole, ok = cm.Data[rbac.ConfigMapPolicyDefaultKey]
if !ok {
defaultRole = ""
}
return rbac.PolicyCSV(cm.Data), defaultRole, cm.Data[rbac.ConfigMapMatchModeKey]
return userPolicy, defaultRole, cm.Data[rbac.ConfigMapMatchModeKey]
}
// getPolicyConfigMap fetches the RBAC config map from K8s cluster
@@ -411,7 +386,7 @@ func getPolicyConfigMap(ctx context.Context, client kubernetes.Interface, namesp
// checkPolicy checks whether given subject is allowed to execute specified
// action against specified resource
func checkPolicy(subject, action, resource, subResource, builtinPolicy, userPolicy, defaultRole, matchMode string, strict bool, isLogRbacEnforced func() bool) bool {
func checkPolicy(subject, action, resource, subResource, builtinPolicy, userPolicy, defaultRole, matchMode string, strict bool) bool {
enf := rbac.NewEnforcer(nil, "argocd", "argocd-rbac-cm", nil)
enf.SetDefaultRole(defaultRole)
enf.SetMatchMode(matchMode)
@@ -445,19 +420,15 @@ func checkPolicy(subject, action, resource, subResource, builtinPolicy, userPoli
}
}
// Some project scoped resources have a special notation - for simplicity's sake,
// Application resources have a special notation - for simplicity's sake,
// if the user gives no sub-resource (or specifies a simple '*'), we construct
// the required notation by setting the subresource to '*/*'.
if projectScoped[realResource] {
if realResource == rbacpolicy.ResourceApplications {
if subResource == "*" || subResource == "" {
subResource = "*/*"
}
}
if realResource == rbacpolicy.ResourceLogs {
if isLogRbacEnforced != nil && !isLogRbacEnforced() {
return true
}
}
return enf.Enforce(subject, realResource, action, subResource)
}

View File
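
A small illustration of the sub-resource normalization retained in checkPolicy above: for applications, a missing or bare '*' sub-resource is expanded to '*/*' so the project/app-name notation matches. The function below is a hedged, standalone restatement of that branch, not the actual CLI code:

package main

import "fmt"

// normalizeSubResource restates the checkPolicy branch: applications use the
// "project/app-name" notation, so "" or "*" becomes "*/*" before enforcement.
func normalizeSubResource(resource, subResource string) string {
	if resource == "applications" && (subResource == "" || subResource == "*") {
		return "*/*"
	}
	return subResource
}

func main() {
	fmt.Println(normalizeSubResource("applications", "*")) // */*
	fmt.Println(normalizeSubResource("applications", ""))  // */*
	fmt.Println(normalizeSubResource("clusters", "*"))     // *
}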

@@ -130,16 +130,6 @@ func Test_PolicyFromYAML(t *testing.T) {
require.NotEmpty(t, uPol)
require.Equal(t, "role:unknown", dRole)
require.Empty(t, matchMode)
require.True(t, checkPolicy("my-org:team-qa", "update", "project", "foo",
"", uPol, dRole, matchMode, true, nil))
}
func trueLogRbacEnforce() bool {
return true
}
func falseLogRbacEnforce() bool {
return false
}
func Test_PolicyFromK8s(t *testing.T) {
@@ -163,105 +153,43 @@ func Test_PolicyFromK8s(t *testing.T) {
require.Equal(t, "", matchMode)
t.Run("get applications", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "applications", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
ok := checkPolicy("role:user", "get", "applications", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true)
require.True(t, ok)
})
t.Run("get clusters", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "clusters", "*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
ok := checkPolicy("role:user", "get", "clusters", "*", assets.BuiltinPolicyCSV, uPol, dRole, "", true)
require.True(t, ok)
})
t.Run("get certificates", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "certificates", "*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
ok := checkPolicy("role:user", "get", "certificates", "*", assets.BuiltinPolicyCSV, uPol, dRole, "", true)
require.False(t, ok)
})
t.Run("get certificates by default role", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "certificates", "*", assets.BuiltinPolicyCSV, uPol, "role:readonly", "glob", true, nil)
ok := checkPolicy("role:user", "get", "certificates", "*", assets.BuiltinPolicyCSV, uPol, "role:readonly", "glob", true)
require.True(t, ok)
})
t.Run("get certificates by default role without builtin policy", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "certificates", "*", "", uPol, "role:readonly", "glob", true, nil)
ok := checkPolicy("role:user", "get", "certificates", "*", "", uPol, "role:readonly", "glob", true)
require.False(t, ok)
})
t.Run("use regex match mode instead of glob", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "certificates", ".*", assets.BuiltinPolicyCSV, uPol, "role:readonly", "regex", true, nil)
ok := checkPolicy("role:user", "get", "certificates", ".*", assets.BuiltinPolicyCSV, uPol, "role:readonly", "regex", true)
require.False(t, ok)
})
t.Run("get logs", func(t *testing.T) {
ok := checkPolicy("role:test", "get", "logs", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
require.True(t, ok)
})
// no function is provided to check if logs rbac is enforced or not, so the policy permissions are queried to determine if no-such-user can get logs
t.Run("no-such-user get logs", func(t *testing.T) {
ok := checkPolicy("no-such-user", "get", "logs", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
require.False(t, ok)
})
// logs rbac policy is enforced, and no-such-user is not granted logs permission in user policy, so the result should be false (cannot get logs)
t.Run("no-such-user get logs rbac enforced", func(t *testing.T) {
ok := checkPolicy("no-such-user", "get", "logs", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, trueLogRbacEnforce)
require.False(t, ok)
})
// no-such-user is not granted logs permission in user policy, but logs rbac policy is not enforced, so logs permission is open to all
t.Run("no-such-user get logs rbac not enforced", func(t *testing.T) {
ok := checkPolicy("no-such-user", "get", "logs", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, falseLogRbacEnforce)
require.True(t, ok)
})
// no function is provided to check if logs rbac is enforced or not, so the policy permissions are queried to determine if log-deny-user can get logs
t.Run("log-deny-user get logs", func(t *testing.T) {
ok := checkPolicy("log-deny-user", "get", "logs", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
require.False(t, ok)
})
// logs rbac policy is enforced, and log-deny-user is denied logs permission in user policy, so the result should be false (cannot get logs)
t.Run("log-deny-user get logs rbac enforced", func(t *testing.T) {
ok := checkPolicy("log-deny-user", "get", "logs", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, trueLogRbacEnforce)
require.False(t, ok)
})
// log-deny-user is denied logs permission in user policy, but logs rbac policy is not enforced, so logs permission is open to all
t.Run("log-deny-user get logs rbac not enforced", func(t *testing.T) {
ok := checkPolicy("log-deny-user", "get", "logs", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, falseLogRbacEnforce)
require.True(t, ok)
})
// no function is provided to check if logs rbac is enforced or not, so the policy permissions are queried to determine if log-allow-user can get logs
t.Run("log-allow-user get logs", func(t *testing.T) {
ok := checkPolicy("log-allow-user", "get", "logs", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
require.True(t, ok)
})
// logs rbac policy is enforced, and log-allow-user is granted logs permission in user policy, so the result should be true (can get logs)
t.Run("log-allow-user get logs rbac enforced", func(t *testing.T) {
ok := checkPolicy("log-allow-user", "get", "logs", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, trueLogRbacEnforce)
require.True(t, ok)
})
// log-allow-user is granted logs permission in user policy, and logs rbac policy is not enforced, so logs permission is open to all
t.Run("log-allow-user get logs rbac not enforced", func(t *testing.T) {
ok := checkPolicy("log-allow-user", "get", "logs", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, falseLogRbacEnforce)
require.True(t, ok)
})
t.Run("get logs", func(t *testing.T) {
ok := checkPolicy("role:test", "get", "logs", "*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
require.True(t, ok)
})
t.Run("get logs", func(t *testing.T) {
ok := checkPolicy("role:test", "get", "logs", "", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
ok := checkPolicy("role:test", "get", "logs", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true)
require.True(t, ok)
})
t.Run("create exec", func(t *testing.T) {
ok := checkPolicy("role:test", "create", "exec", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
ok := checkPolicy("role:test", "create", "exec", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true)
require.True(t, ok)
})
t.Run("create applicationsets", func(t *testing.T) {
ok := checkPolicy("role:user", "create", "applicationsets", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
require.True(t, ok)
})
// trueLogRbacEnforce or falseLogRbacEnforce should not affect non-logs resources
t.Run("create applicationsets with trueLogRbacEnforce", func(t *testing.T) {
ok := checkPolicy("role:user", "create", "applicationsets", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, trueLogRbacEnforce)
require.True(t, ok)
})
t.Run("create applicationsets with falseLogRbacEnforce", func(t *testing.T) {
ok := checkPolicy("role:user", "create", "applicationsets", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, trueLogRbacEnforce)
ok := checkPolicy("role:user", "create", "applicationsets", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true)
require.True(t, ok)
})
t.Run("delete applicationsets", func(t *testing.T) {
ok := checkPolicy("role:user", "delete", "applicationsets", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
ok := checkPolicy("role:user", "delete", "applicationsets", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true)
require.True(t, ok)
})
}
@@ -301,49 +229,49 @@ p, role:readonly, certificates, get, .*, allow
p, role:, certificates, get, .*, allow`
t.Run("get applications", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "applications", ".*/.*", builtInPolicy, uPol, dRole, "regex", true, nil)
ok := checkPolicy("role:user", "get", "applications", ".*/.*", builtInPolicy, uPol, dRole, "regex", true)
require.True(t, ok)
})
t.Run("get clusters", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "clusters", ".*", builtInPolicy, uPol, dRole, "regex", true, nil)
ok := checkPolicy("role:user", "get", "clusters", ".*", builtInPolicy, uPol, dRole, "regex", true)
require.True(t, ok)
})
t.Run("get certificates", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "certificates", ".*", builtInPolicy, uPol, dRole, "regex", true, nil)
ok := checkPolicy("role:user", "get", "certificates", ".*", builtInPolicy, uPol, dRole, "regex", true)
require.False(t, ok)
})
t.Run("get certificates by default role", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "certificates", ".*", builtInPolicy, uPol, "role:readonly", "regex", true, nil)
ok := checkPolicy("role:user", "get", "certificates", ".*", builtInPolicy, uPol, "role:readonly", "regex", true)
require.True(t, ok)
})
t.Run("get certificates by default role without builtin policy", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "certificates", ".*", "", uPol, "role:readonly", "regex", true, nil)
ok := checkPolicy("role:user", "get", "certificates", ".*", "", uPol, "role:readonly", "regex", true)
require.False(t, ok)
})
t.Run("use glob match mode instead of regex", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "certificates", ".+", builtInPolicy, uPol, dRole, "glob", true, nil)
ok := checkPolicy("role:user", "get", "certificates", ".+", builtInPolicy, uPol, dRole, "glob", true)
require.False(t, ok)
})
t.Run("get logs via glob match mode", func(t *testing.T) {
ok := checkPolicy("role:user", "get", "logs", ".*/.*", builtInPolicy, uPol, dRole, "glob", true, nil)
ok := checkPolicy("role:user", "get", "logs", ".*/.*", builtInPolicy, uPol, dRole, "glob", true)
require.True(t, ok)
})
t.Run("create exec", func(t *testing.T) {
ok := checkPolicy("role:user", "create", "exec", ".*/.*", builtInPolicy, uPol, dRole, "regex", true, nil)
ok := checkPolicy("role:user", "create", "exec", ".*/.*", builtInPolicy, uPol, dRole, "regex", true)
require.True(t, ok)
})
t.Run("create applicationsets", func(t *testing.T) {
ok := checkPolicy("role:user", "create", "applicationsets", ".*/.*", builtInPolicy, uPol, dRole, "regex", true, nil)
ok := checkPolicy("role:user", "create", "applicationsets", ".*/.*", builtInPolicy, uPol, dRole, "regex", true)
require.True(t, ok)
})
t.Run("delete applicationsets", func(t *testing.T) {
ok := checkPolicy("role:user", "delete", "applicationsets", ".*/.*", builtInPolicy, uPol, dRole, "regex", true, nil)
ok := checkPolicy("role:user", "delete", "applicationsets", ".*/.*", builtInPolicy, uPol, dRole, "regex", true)
require.True(t, ok)
})
}
func TestNewRBACCanCommand(t *testing.T) {
command := NewRBACCanCommand(&settingsOpts{})
command := NewRBACCanCommand()
require.NotNil(t, command)
assert.Equal(t, "can", command.Name())

View File

@@ -12,10 +12,6 @@ data:
p, role:user, applicationsets, delete, */*, allow
p, role:user, logs, get, */*, allow
g, test, role:user
policy.overlay.csv: |
p, role:tester, applications, *, */*, allow
p, role:tester, projects, *, *, allow
g, my-org:team-qa, role:tester
policy.default: role:unknown
kind: ConfigMap
metadata:

View File

@@ -10,6 +10,4 @@ p, role:user, applicationsets, delete, */*, allow
p, role:test, certificates, get, *, allow
p, role:test, logs, get, */*, allow
p, role:test, exec, create, */*, allow
p, log-allow-user, logs, get, */*, allow
p, log-deny-user, logs, get, */*, deny
g, test, role:user

View File

@@ -108,6 +108,7 @@ type watchOpts struct {
suspended bool
degraded bool
delete bool
hydrated bool
}
// NewApplicationCreateCommand returns a new instance of an `argocd app create` command
@@ -294,7 +295,7 @@ func parentChildDetails(appIf application.ApplicationServiceClient, ctx context.
return mapUidToNode, mapParentToChild, parentNode
}
func printHeader(acdClient argocdclient.Client, app *argoappv1.Application, ctx context.Context, windows *argoappv1.SyncWindows, showOperation bool, showParams bool, sourcePosition int) {
func printHeader(acdClient argocdclient.Client, app *argoappv1.Application, ctx context.Context, windows *argoappv1.SyncWindows, showOperation bool, showParams bool) {
aURL := appURL(ctx, acdClient, app.Name)
printAppSummaryTable(app, aURL, windows)
@@ -309,21 +310,20 @@ func printHeader(acdClient argocdclient.Client, app *argoappv1.Application, ctx
fmt.Println()
printOperationResult(app.Status.OperationState)
}
if showParams {
printParams(app, sourcePosition)
if !app.Spec.HasMultipleSources() && showParams {
printParams(app)
}
}
// NewApplicationGetCommand returns a new instance of an `argocd app get` command
func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
refresh bool
hardRefresh bool
output string
showParams bool
showOperation bool
appNamespace string
sourcePosition int
refresh bool
hardRefresh bool
output string
showParams bool
showOperation bool
appNamespace string
)
command := &cobra.Command{
Use: "get APPNAME",
@@ -344,9 +344,6 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
# Show application parameters and overrides
argocd app get my-app --show-params
# Show application parameters and overrides for a source at position 1 under spec.sources of app my-app
argocd app get my-app --show-params --source-position 1
# Refresh application data when retrieving
argocd app get my-app --refresh
@@ -377,17 +374,8 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
Refresh: getRefreshType(refresh, hardRefresh),
AppNamespace: &appNs,
})
errors.CheckError(err)
// check for source position if --show-params is set
if app.Spec.HasMultipleSources() && showParams {
if sourcePosition <= 0 {
errors.CheckError(fmt.Errorf("Source position should be specified and must be greater than 0 for applications with multiple sources"))
}
if len(app.Spec.GetSources()) < sourcePosition {
errors.CheckError(fmt.Errorf("Source position should be less than the number of sources in the application"))
}
}
errors.CheckError(err)
pConn, projIf := headless.NewClientOrDie(clientOpts, c).NewProjectClientOrDie()
defer argoio.Close(pConn)
@@ -401,7 +389,7 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
err := PrintResource(app, output)
errors.CheckError(err)
case "wide", "":
printHeader(acdClient, app, ctx, windows, showOperation, showParams, sourcePosition)
printHeader(acdClient, app, ctx, windows, showOperation, showParams)
if len(app.Status.Resources) > 0 {
fmt.Println()
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
@@ -409,14 +397,14 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
_ = w.Flush()
}
case "tree":
printHeader(acdClient, app, ctx, windows, showOperation, showParams, sourcePosition)
printHeader(acdClient, app, ctx, windows, showOperation, showParams)
mapUidToNode, mapParentToChild, parentNode, mapNodeNameToResourceState := resourceParentChild(ctx, acdClient, appName, appNs)
if len(mapUidToNode) > 0 {
fmt.Println()
printTreeView(mapUidToNode, mapParentToChild, parentNode, mapNodeNameToResourceState)
}
case "tree=detailed":
printHeader(acdClient, app, ctx, windows, showOperation, showParams, sourcePosition)
printHeader(acdClient, app, ctx, windows, showOperation, showParams)
mapUidToNode, mapParentToChild, parentNode, mapNodeNameToResourceState := resourceParentChild(ctx, acdClient, appName, appNs)
if len(mapUidToNode) > 0 {
fmt.Println()
@@ -433,7 +421,6 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
command.Flags().BoolVar(&refresh, "refresh", false, "Refresh application data when retrieving")
command.Flags().BoolVar(&hardRefresh, "hard-refresh", false, "Refresh application data as well as target manifests cache")
command.Flags().StringVarP(&appNamespace, "app-namespace", "N", "", "Only get application from namespace")
command.Flags().IntVar(&sourcePosition, "source-position", -1, "Position of the source from the list of sources of the app. Counting starts at 1.")
return command
}
@@ -586,8 +573,8 @@ func printAppSummaryTable(app *argoappv1.Application, appURL string, windows *ar
var status string
var allow, deny, inactiveAllows bool
if windows.HasWindows() {
active, err := windows.Active()
if err == nil && active.HasWindows() {
active := windows.Active()
if active.HasWindows() {
for _, w := range *active {
if w.Kind == "deny" {
deny = true
@@ -596,14 +583,13 @@ func printAppSummaryTable(app *argoappv1.Application, appURL string, windows *ar
}
}
}
inactiveAllowWindows, err := windows.InactiveAllows()
if err == nil && inactiveAllowWindows.HasWindows() {
if windows.InactiveAllows().HasWindows() {
inactiveAllows = true
}
s := windows.CanSync(true)
if deny || !deny && !allow && inactiveAllows {
s, err := windows.CanSync(true)
if err == nil && s {
if s {
status = "Manual Allowed"
} else {
status = "Sync Denied"
@@ -716,22 +702,9 @@ func truncateString(str string, num int) string {
}
// printParams prints parameters and overrides
func printParams(app *argoappv1.Application, sourcePosition int) {
var source *argoappv1.ApplicationSource
if app.Spec.HasMultipleSources() {
// Get the source by the sourcePosition whose params you'd like to print
source = app.Spec.GetSourcePtrByPosition(sourcePosition)
if source == nil {
source = &argoappv1.ApplicationSource{}
}
} else {
src := app.Spec.GetSource()
source = &src
}
if source.Helm != nil {
printHelmParams(source.Helm)
func printParams(app *argoappv1.Application) {
if app.Spec.GetSource().Helm != nil {
printHelmParams(app.Spec.GetSource().Helm)
}
}
@@ -821,9 +794,9 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
errors.CheckError(err)
},
}
command.Flags().IntVar(&sourcePosition, "source-position", -1, "Position of the source from the list of sources of the app. Counting starts at 1.")
cmdutil.AddAppFlags(command, &appOpts)
command.Flags().StringVarP(&appNamespace, "app-namespace", "N", "", "Set application parameters in namespace")
command.Flags().IntVar(&sourcePosition, "source-position", -1, "Position of the source from the list of sources of the app. Counting starts at 1.")
return command
}
@@ -1283,7 +1256,7 @@ func findandPrintDiff(ctx context.Context, app *argoappv1.Application, proj *arg
if diffOptions.local != "" {
localObjs := groupObjsByKey(getLocalObjects(ctx, app, proj, diffOptions.local, diffOptions.localRepoRoot, argoSettings.AppLabelKey, diffOptions.cluster.Info.ServerVersion, diffOptions.cluster.Info.APIVersions, argoSettings.KustomizeOptions, argoSettings.TrackingMethod), liveObjs, app.Spec.Destination.Namespace)
items = groupObjsForDiff(resources, localObjs, items, argoSettings, app.InstanceName(argoSettings.ControllerNamespace), app.Spec.Destination.Namespace)
} else if diffOptions.revision != "" || len(diffOptions.revisions) > 0 {
} else if diffOptions.revision != "" || (diffOptions.revisions != nil && len(diffOptions.revisions) > 0) {
var unstructureds []*unstructured.Unstructured
for _, mfst := range diffOptions.res.Manifests {
obj, err := argoappv1.UnmarshalToUnstructured(mfst)
@@ -1376,7 +1349,7 @@ func groupObjsForDiff(resources *application.ManagedResourcesResponse, objs map[
}
if local, ok := objs[key]; ok || live != nil {
if local != nil && !kube.IsCRD(local) {
err = resourceTracking.SetAppInstance(local, argoSettings.AppLabelKey, appName, namespace, argoappv1.TrackingMethod(argoSettings.GetTrackingMethod()), argoSettings.GetInstallationID())
err = resourceTracking.SetAppInstance(local, argoSettings.AppLabelKey, appName, namespace, argoappv1.TrackingMethod(argoSettings.GetTrackingMethod()))
errors.CheckError(err)
}
@@ -1790,6 +1763,7 @@ func NewApplicationWaitCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().BoolVar(&watch.suspended, "suspended", false, "Wait for suspended")
command.Flags().BoolVar(&watch.degraded, "degraded", false, "Wait for degraded")
command.Flags().BoolVar(&watch.delete, "delete", false, "Wait for delete")
command.Flags().BoolVar(&watch.hydrated, "hydrated", false, "Wait for hydration operations")
command.Flags().StringVarP(&selector, "selector", "l", "", "Wait for apps by label. Supports '=', '==', '!=', in, notin, exists & not exists. Matching apps must satisfy all of the specified label constraints.")
command.Flags().StringArrayVar(&resources, "resource", []string{}, fmt.Sprintf("Sync only specific resources as GROUP%[1]sKIND%[1]sNAME or %[2]sGROUP%[1]sKIND%[1]sNAME. Fields may be blank and '*' can be used. This option may be specified repeatedly", resourceFieldDelimiter, resourceExcludeIndicator))
command.Flags().BoolVar(&watch.operation, "operation", false, "Wait for pending operations")
@@ -1934,7 +1908,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
if len(projects) != 0 {
errMsg += fmt.Sprintf(" projects %v", projects)
}
log.Fatal(errMsg)
log.Fatalf(errMsg)
}
for _, i := range list.Items {
@@ -2326,7 +2300,7 @@ func groupResourceStates(app *argoappv1.Application, selectedResources []*argoap
}
// check if resource health, sync, and operation statuses match watch options
func checkResourceStatus(watch watchOpts, healthStatus string, syncStatus string, operationStatus *argoappv1.Operation) bool {
func checkResourceStatus(watch watchOpts, healthStatus string, syncStatus string, operationStatus *argoappv1.Operation, hydrationFinished bool) bool {
if watch.delete {
return false
}
@@ -2356,7 +2330,8 @@ func checkResourceStatus(watch watchOpts, healthStatus string, syncStatus string
synced := !watch.sync || syncStatus == string(argoappv1.SyncStatusCodeSynced)
operational := !watch.operation || operationStatus == nil
return synced && healthCheckPassed && operational
hydration := !watch.hydrated || hydrationFinished
return synced && healthCheckPassed && operational && hydration
}
// resourceParentChild gets the latest state of the app and the latest state of the app's resource tree and then
@@ -2520,13 +2495,15 @@ func waitOnApplicationStatus(ctx context.Context, acdClient argocdclient.Client,
}
}
hydrationFinished := app.Status.SourceHydrator.CurrentOperation != nil &&
	app.Status.SourceHydrator.LastSuccessfulOperation != nil &&
	app.Status.SourceHydrator.CurrentOperation.Phase == argoappv1.HydrateOperationPhaseHydrated &&
	app.Status.SourceHydrator.CurrentOperation.SourceHydrator.DeepEquals(app.Status.SourceHydrator.LastSuccessfulOperation.SourceHydrator) &&
	app.Status.SourceHydrator.CurrentOperation.DrySHA == app.Status.SourceHydrator.LastSuccessfulOperation.DrySHA
var selectedResourcesAreReady bool
// If selected resources are included, wait only on those resources, otherwise wait on the application as a whole.
if len(selectedResources) > 0 {
selectedResourcesAreReady = true
for _, state := range getResourceStates(app, selectedResources) {
resourceIsReady := checkResourceStatus(watch, state.Health, state.Status, appEvent.Application.Operation)
resourceIsReady := checkResourceStatus(watch, state.Health, state.Status, appEvent.Application.Operation, hydrationFinished)
if !resourceIsReady {
selectedResourcesAreReady = false
break
@@ -2534,7 +2511,7 @@ func waitOnApplicationStatus(ctx context.Context, acdClient argocdclient.Client,
}
} else {
// Wait on the application as a whole
selectedResourcesAreReady = checkResourceStatus(watch, string(app.Status.Health.Status), string(app.Status.Sync.Status), appEvent.Application.Operation)
selectedResourcesAreReady = checkResourceStatus(watch, string(app.Status.Health.Status), string(app.Status.Sync.Status), appEvent.Application.Operation, hydrationFinished)
}
if selectedResourcesAreReady && (!operationInProgress || !watch.operation) {
@@ -2877,7 +2854,6 @@ func NewApplicationManifestsCommand(clientOpts *argocdclient.ClientOptions) *cob
errors.CheckError(err)
proj := getProject(c, clientOpts, ctx, app.Spec.Project)
// nolint:staticcheck
unstructureds = getLocalObjects(context.Background(), app, proj.Project, local, localRepoRoot, argoSettings.AppLabelKey, cluster.ServerVersion, cluster.Info.APIVersions, argoSettings.KustomizeOptions, argoSettings.TrackingMethod)
} else if len(revisions) > 0 && len(sourcePositions) > 0 {
q := application.ApplicationManifestQuery{

View File
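
To make the new --hydrated gate above concrete: readiness only considers hydration when the flag is set, mirroring how the other watch flags work in checkResourceStatus. A minimal sketch under those assumptions (a trimmed-down watchOpts, not the full CLI struct):

package main

import "fmt"

// watchOpts is a trimmed-down stand-in for the CLI's watch options.
type watchOpts struct {
	sync      bool
	operation bool
	hydrated  bool
}

// ready mirrors the gating style in checkResourceStatus: each flag only
// constrains readiness when it is set.
func ready(w watchOpts, synced, operational, hydrationFinished bool) bool {
	s := !w.sync || synced
	o := !w.operation || operational
	h := !w.hydrated || hydrationFinished
	return s && o && h
}

func main() {
	fmt.Println(ready(watchOpts{hydrated: true}, true, true, false)) // false: hydration pending
	fmt.Println(ready(watchOpts{}, true, true, false))               // true: flag not set
}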

@@ -36,7 +36,7 @@ func TestPrintTreeViewAppResources(t *testing.T) {
buf := &bytes.Buffer{}
w := tabwriter.NewWriter(buf, 0, 0, 2, ' ', 0)
printTreeViewAppResourcesNotOrphaned(nodeMapping, mapParentToChild, parentNode, w)
printTreeViewAppResourcesNotOrphaned(nodeMapping, mapParentToChild, parentNode, false, false, w)
if err := w.Flush(); err != nil {
t.Fatal(err)
}
@@ -77,7 +77,7 @@ func TestPrintTreeViewDetailedAppResources(t *testing.T) {
buf := &bytes.Buffer{}
w := tabwriter.NewWriter(buf, 0, 0, 2, ' ', 0)
printDetailedTreeViewAppResourcesNotOrphaned(nodeMapping, mapParentToChild, parentNode, w)
printDetailedTreeViewAppResourcesNotOrphaned(nodeMapping, mapParentToChild, parentNode, false, false, w)
if err := w.Flush(); err != nil {
t.Fatal(err)
}

View File

@@ -175,25 +175,25 @@ func parentChildInfo(nodes []v1alpha1.ResourceNode) (map[string]v1alpha1.Resourc
return mapUidToNode, mapParentToChild, parentNode
}
func printDetailedTreeViewAppResourcesNotOrphaned(nodeMapping map[string]v1alpha1.ResourceNode, parentChildMapping map[string][]string, parentNodes map[string]struct{}, w *tabwriter.Writer) {
func printDetailedTreeViewAppResourcesNotOrphaned(nodeMapping map[string]v1alpha1.ResourceNode, parentChildMapping map[string][]string, parentNodes map[string]struct{}, orphaned bool, listAll bool, w *tabwriter.Writer) {
for uid := range parentNodes {
detailedTreeViewAppResourcesNotOrphaned("", nodeMapping, parentChildMapping, nodeMapping[uid], w)
}
}
func printDetailedTreeViewAppResourcesOrphaned(nodeMapping map[string]v1alpha1.ResourceNode, parentChildMapping map[string][]string, parentNodes map[string]struct{}, w *tabwriter.Writer) {
func printDetailedTreeViewAppResourcesOrphaned(nodeMapping map[string]v1alpha1.ResourceNode, parentChildMapping map[string][]string, parentNodes map[string]struct{}, orphaned bool, listAll bool, w *tabwriter.Writer) {
for uid := range parentNodes {
detailedTreeViewAppResourcesOrphaned("", nodeMapping, parentChildMapping, nodeMapping[uid], w)
}
}
func printTreeViewAppResourcesNotOrphaned(nodeMapping map[string]v1alpha1.ResourceNode, parentChildMapping map[string][]string, parentNodes map[string]struct{}, w *tabwriter.Writer) {
func printTreeViewAppResourcesNotOrphaned(nodeMapping map[string]v1alpha1.ResourceNode, parentChildMapping map[string][]string, parentNodes map[string]struct{}, orphaned bool, listAll bool, w *tabwriter.Writer) {
for uid := range parentNodes {
treeViewAppResourcesNotOrphaned("", nodeMapping, parentChildMapping, nodeMapping[uid], w)
}
}
func printTreeViewAppResourcesOrphaned(nodeMapping map[string]v1alpha1.ResourceNode, parentChildMapping map[string][]string, parentNodes map[string]struct{}, w *tabwriter.Writer) {
func printTreeViewAppResourcesOrphaned(nodeMapping map[string]v1alpha1.ResourceNode, parentChildMapping map[string][]string, parentNodes map[string]struct{}, orphaned bool, listAll bool, w *tabwriter.Writer) {
for uid := range parentNodes {
treeViewAppResourcesOrphaned("", nodeMapping, parentChildMapping, nodeMapping[uid], w)
}
@@ -206,24 +206,24 @@ func printResources(listAll bool, orphaned bool, appResourceTree *v1alpha1.Appli
if !orphaned || listAll {
mapUidToNode, mapParentToChild, parentNode := parentChildInfo(appResourceTree.Nodes)
printDetailedTreeViewAppResourcesNotOrphaned(mapUidToNode, mapParentToChild, parentNode, w)
printDetailedTreeViewAppResourcesNotOrphaned(mapUidToNode, mapParentToChild, parentNode, orphaned, listAll, w)
}
if orphaned || listAll {
mapUidToNode, mapParentToChild, parentNode := parentChildInfo(appResourceTree.OrphanedNodes)
printDetailedTreeViewAppResourcesOrphaned(mapUidToNode, mapParentToChild, parentNode, w)
printDetailedTreeViewAppResourcesOrphaned(mapUidToNode, mapParentToChild, parentNode, orphaned, listAll, w)
}
} else if output == "tree" {
fmt.Fprintf(w, "GROUP\tKIND\tNAMESPACE\tNAME\tORPHANED\n")
if !orphaned || listAll {
mapUidToNode, mapParentToChild, parentNode := parentChildInfo(appResourceTree.Nodes)
printTreeViewAppResourcesNotOrphaned(mapUidToNode, mapParentToChild, parentNode, w)
printTreeViewAppResourcesNotOrphaned(mapUidToNode, mapParentToChild, parentNode, orphaned, listAll, w)
}
if orphaned || listAll {
mapUidToNode, mapParentToChild, parentNode := parentChildInfo(appResourceTree.OrphanedNodes)
printTreeViewAppResourcesOrphaned(mapUidToNode, mapParentToChild, parentNode, w)
printTreeViewAppResourcesOrphaned(mapUidToNode, mapParentToChild, parentNode, orphaned, listAll, w)
}
} else {
headers := []interface{}{"GROUP", "KIND", "NAMESPACE", "NAME", "ORPHANED"}

View File

@@ -918,83 +918,35 @@ func TestPrintAppConditions(t *testing.T) {
}
func TestPrintParams(t *testing.T) {
testCases := []struct {
name string
app *v1alpha1.Application
sourcePosition int
expectedOutput string
}{
{
name: "Single Source application with valid helm parameters",
app: &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "name1",
Value: "value1",
},
{
Name: "name2",
Value: "value2",
},
{
Name: "name3",
Value: "value3",
},
output, _ := captureOutput(func() error {
app := &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "name1",
Value: "value1",
},
{
Name: "name2",
Value: "value2",
},
{
Name: "name3",
Value: "value3",
},
},
},
},
},
sourcePosition: -1,
expectedOutput: "\n\nNAME VALUE\nname1 value1\nname2 value2\nname3 value3\n",
},
{
name: "Multi-source application with a valid Source Position",
app: &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{
Sources: []v1alpha1.ApplicationSource{
{
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "nameA",
Value: "valueA",
},
},
},
},
{
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "nameB",
Value: "valueB",
},
},
},
},
},
},
},
sourcePosition: 1,
expectedOutput: "\n\nNAME VALUE\nnameA valueA\n",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
output, _ := captureOutput(func() error {
printParams(tc.app, tc.sourcePosition)
return nil
})
if output != tc.expectedOutput {
t.Fatalf("Incorrect print params output %q, should be %q\n", output, tc.expectedOutput)
}
})
}
printParams(app)
return nil
})
expectation := "\n\nNAME VALUE\nname1 value1\nname2 value2\nname3 value3\n"
if output != expectation {
t.Fatalf("Incorrect print params output %q, should be %q", output, expectation)
}
}
@@ -1383,14 +1335,6 @@ func TestFilterAppResources(t *testing.T) {
Namespace: "",
Exclude: true,
}
// apps:ReplicaSet:*
includeAllReplicaSetResource = v1alpha1.SyncOperationResource{
Group: "apps",
Kind: "ReplicaSet",
Name: "*",
Namespace: "",
Exclude: false,
}
// apps:ReplicaSet:replicaSet-name1
includeReplicaSet1Resource = v1alpha1.SyncOperationResource{
Group: "apps",
@@ -1463,13 +1407,13 @@ func TestFilterAppResources(t *testing.T) {
{
testName: "Include ReplicaSet replicaSet-name1 resource and exclude all service resources",
selectedResources: []*v1alpha1.SyncOperationResource{&excludeAllServiceResources, &includeReplicaSet1Resource},
expectedResult: []*v1alpha1.SyncOperationResource{&replicaSet1},
expectedResult: []*v1alpha1.SyncOperationResource{&replicaSet1, &replicaSet2, &job, &deployment},
},
// --resource !apps:ReplicaSet:replicaSet-name2 --resource !*:Service:*
{
testName: "Exclude ReplicaSet replicaSet-name2 resource and all service resources",
selectedResources: []*v1alpha1.SyncOperationResource{&excludeReplicaSet2Resource, &excludeAllServiceResources},
expectedResult: []*v1alpha1.SyncOperationResource{&replicaSet1, &job, &deployment},
expectedResult: []*v1alpha1.SyncOperationResource{&replicaSet1, &replicaSet2, &job, &service1, &service2, &deployment},
},
// --resource !apps:ReplicaSet:replicaSet-name2
{
@@ -1483,12 +1427,6 @@ func TestFilterAppResources(t *testing.T) {
selectedResources: []*v1alpha1.SyncOperationResource{&includeReplicaSet1Resource},
expectedResult: []*v1alpha1.SyncOperationResource{&replicaSet1},
},
// --resource apps:ReplicaSet:* --resource !apps:ReplicaSet:replicaSet-name2
{
testName: "Include All ReplicaSet resource and exclude replicaSet-name1 resource",
selectedResources: []*v1alpha1.SyncOperationResource{&includeAllReplicaSetResource, &excludeReplicaSet2Resource},
expectedResult: []*v1alpha1.SyncOperationResource{&replicaSet1},
},
// --resource !*:Service:*
{
testName: "Exclude Service resources",
@@ -1762,7 +1700,7 @@ func TestCheckResourceStatus(t *testing.T) {
suspended: true,
health: true,
degraded: true,
}, string(health.HealthStatusHealthy), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusHealthy), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Degraded, Suspended and health status failed", func(t *testing.T) {
@@ -1770,57 +1708,57 @@ func TestCheckResourceStatus(t *testing.T) {
suspended: true,
health: true,
degraded: true,
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.False(t, res)
})
t.Run("Suspended and health status passed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{
suspended: true,
health: true,
}, string(health.HealthStatusHealthy), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusHealthy), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Suspended and health status failed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{
suspended: true,
health: true,
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.False(t, res)
})
t.Run("Suspended passed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{
suspended: true,
health: false,
}, string(health.HealthStatusSuspended), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusSuspended), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Suspended failed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{
suspended: true,
health: false,
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.False(t, res)
})
t.Run("Health passed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{
suspended: false,
health: true,
}, string(health.HealthStatusHealthy), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusHealthy), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Health failed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{
suspended: false,
health: true,
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.False(t, res)
})
t.Run("Synced passed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
res := checkResourceStatus(watchOpts{}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Synced failed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeOutOfSync), &v1alpha1.Operation{})
res := checkResourceStatus(watchOpts{}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeOutOfSync), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Degraded passed", func(t *testing.T) {
@@ -1828,7 +1766,7 @@ func TestCheckResourceStatus(t *testing.T) {
suspended: false,
health: false,
degraded: true,
}, string(health.HealthStatusDegraded), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusDegraded), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Degraded failed", func(t *testing.T) {
@@ -1836,7 +1774,7 @@ func TestCheckResourceStatus(t *testing.T) {
suspended: false,
health: false,
degraded: true,
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.False(t, res)
})
}
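Every checkResourceStatus call in this file gains a trailing boolean argument, and the tests pass true throughout; the helper itself is not shown in this diff. A sketch of the shape of the change (the parameter name and body below are assumptions, not the repository's logic):

package main

import "fmt"

// watchOpts mirrors the option struct exercised in the tests above.
type watchOpts struct {
	suspended, health, degraded bool
}

// checkResourceStatusSketch illustrates only the signature change: a new
// trailing boolean (hypothetically, whether an operation state should be
// considered) that forces every call site to be updated at once.
func checkResourceStatusSketch(opts watchOpts, healthStatus, syncStatus string, hasOperation bool) bool {
	_ = hasOperation // the new argument; its semantics are not visible in this diff
	if opts.health {
		return healthStatus == "Healthy" || (opts.suspended && healthStatus == "Suspended")
	}
	return syncStatus == "Synced"
}

func main() {
	fmt.Println(checkResourceStatusSketch(watchOpts{health: true}, "Healthy", "Synced", true))
}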

View File

@@ -11,7 +11,6 @@ import (
"github.com/spf13/cobra"
"google.golang.org/grpc/codes"
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/admin"
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/headless"
cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
@@ -54,7 +53,6 @@ func NewAppSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
command.AddCommand(NewApplicationSetCreateCommand(clientOpts))
command.AddCommand(NewApplicationSetListCommand(clientOpts))
command.AddCommand(NewApplicationSetDeleteCommand(clientOpts))
command.AddCommand(NewApplicationSetGenerateCommand(clientOpts))
return command
}
@@ -210,75 +208,6 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
return command
}
// NewApplicationSetGenerateCommand returns a new instance of an `argocd appset generate` command
func NewApplicationSetGenerateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var output string
command := &cobra.Command{
Use: "generate",
Short: "Generate apps of ApplicationSet rendered templates",
Example: templates.Examples(`
# Generate apps of ApplicationSet rendered templates
argocd appset generate <filename or URL> (<filename or URL>...)
`),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
if len(args) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
argocdClient := headless.NewClientOrDie(clientOpts, c)
fileUrl := args[0]
appsets, err := cmdutil.ConstructApplicationSet(fileUrl)
errors.CheckError(err)
if len(appsets) != 1 {
fmt.Printf("Input file must contain one ApplicationSet")
os.Exit(1)
}
appset := appsets[0]
if appset.Name == "" {
err := fmt.Errorf("Error generating apps for ApplicationSet %s. ApplicationSet does not have Name field set", appset)
errors.CheckError(err)
}
conn, appIf := argocdClient.NewApplicationSetClientOrDie()
defer argoio.Close(conn)
req := applicationset.ApplicationSetGenerateRequest{
ApplicationSet: appset,
}
resp, err := appIf.Generate(ctx, &req)
errors.CheckError(err)
var appsList []arogappsetv1.Application
for i := range resp.Applications {
appsList = append(appsList, *resp.Applications[i])
}
switch output {
case "yaml", "json":
var resources []interface{}
for i := range appsList {
app := appsList[i]
// backfill api version and kind because k8s client always return empty values for these fields
app.APIVersion = arogappsetv1.ApplicationSchemaGroupVersionKind.GroupVersion().String()
app.Kind = arogappsetv1.ApplicationSchemaGroupVersionKind.Kind
resources = append(resources, app)
}
cobra.CheckErr(admin.PrintResources(output, os.Stdout, resources...))
case "wide", "":
printApplicationTable(appsList, &output)
default:
errors.CheckError(fmt.Errorf("unknown output format: %s", output))
}
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide")
return command
}
// NewApplicationSetListCommand returns a new instance of an `argocd appset list` command
func NewApplicationSetListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (

View File

@@ -34,10 +34,6 @@ const (
clusterFieldName = "name"
// cluster field is 'namespaces'
clusterFieldNamespaces = "namespaces"
// cluster field is 'labels'
clusterFieldLabel = "labels"
// cluster field is 'annotations'
clusterFieldAnnotation = "annotations"
// indicates managing all namespaces
allNamespaces = "*"
)
@@ -224,8 +220,6 @@ func NewClusterSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
var (
clusterOptions cmdutil.ClusterOptions
clusterName string
labels []string
annotations []string
)
command := &cobra.Command{
Use: "set NAME",
@@ -244,25 +238,17 @@ func NewClusterSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
conn, clusterIf := headless.NewClientOrDie(clientOpts, c).NewClusterClientOrDie()
defer io.Close(conn)
// checks the fields that need to be updated
updatedFields := checkFieldsToUpdate(clusterOptions, labels, annotations)
updatedFields := checkFieldsToUpdate(clusterOptions)
namespaces := clusterOptions.Namespaces
// check if all namespaces have to be considered
if len(namespaces) == 1 && strings.EqualFold(namespaces[0], allNamespaces) {
namespaces[0] = ""
}
// parse the labels you're receiving from the label flag
labelsMap, err := label.Parse(labels)
errors.CheckError(err)
// parse the annotations you're receiving from the annotation flag
annotationsMap, err := label.Parse(annotations)
errors.CheckError(err)
if updatedFields != nil {
clusterUpdateRequest := clusterpkg.ClusterUpdateRequest{
Cluster: &argoappv1.Cluster{
Name: clusterOptions.Name,
Namespaces: namespaces,
Labels: labelsMap,
Annotations: annotationsMap,
Name: clusterOptions.Name,
Namespaces: namespaces,
},
UpdatedFields: updatedFields,
Id: &clusterpkg.ClusterID{
@@ -280,13 +266,11 @@ func NewClusterSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
}
command.Flags().StringVar(&clusterOptions.Name, "name", "", "Overwrite the cluster name")
command.Flags().StringArrayVar(&clusterOptions.Namespaces, "namespace", nil, "List of namespaces which are allowed to manage. Specify '*' to manage all namespaces")
command.Flags().StringArrayVar(&labels, "label", nil, "Set metadata labels (e.g. --label key=value)")
command.Flags().StringArrayVar(&annotations, "annotation", nil, "Set metadata annotations (e.g. --annotation key=value)")
return command
}
// checkFieldsToUpdate returns the fields that need to be updated
func checkFieldsToUpdate(clusterOptions cmdutil.ClusterOptions, labels []string, annotations []string) []string {
func checkFieldsToUpdate(clusterOptions cmdutil.ClusterOptions) []string {
var updatedFields []string
if clusterOptions.Name != "" {
updatedFields = append(updatedFields, clusterFieldName)
@@ -294,12 +278,6 @@ func checkFieldsToUpdate(clusterOptions cmdutil.ClusterOptions, labels []string,
if clusterOptions.Namespaces != nil {
updatedFields = append(updatedFields, clusterFieldNamespaces)
}
if labels != nil {
updatedFields = append(updatedFields, clusterFieldLabel)
}
if annotations != nil {
updatedFields = append(updatedFields, clusterFieldAnnotation)
}
return updatedFields
}
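checkFieldsToUpdate builds the update mask sent with ClusterUpdateRequest: only the fields the user actually set become entries in UpdatedFields, so the server leaves everything else untouched. A self-contained sketch of the same pattern (names simplified from the code above):

package main

import "fmt"

type clusterOptions struct {
	Name       string
	Namespaces []string
}

// checkFieldsToUpdate mirrors the helper above: a nil/empty option means the
// flag was not provided, so the field is omitted from the update mask.
func checkFieldsToUpdate(opts clusterOptions) []string {
	var updated []string
	if opts.Name != "" {
		updated = append(updated, "name")
	}
	if opts.Namespaces != nil {
		updated = append(updated, "namespaces")
	}
	return updated
}

func main() {
	fmt.Println(checkFieldsToUpdate(clusterOptions{Name: "prod"})) // [name]
}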
@@ -363,7 +341,6 @@ func printClusterDetails(clusters []argoappv1.Cluster) {
fmt.Printf("Cluster information\n\n")
fmt.Printf(" Server URL: %s\n", cluster.Server)
fmt.Printf(" Server Name: %s\n", strWithDefault(cluster.Name, "-"))
// nolint:staticcheck
fmt.Printf(" Server Version: %s\n", cluster.ServerVersion)
fmt.Printf(" Namespaces: %s\n", formatNamespaces(cluster))
fmt.Printf("\nTLS configuration\n\n")
@@ -456,7 +433,6 @@ func printClusterTable(clusters []argoappv1.Cluster) {
if len(c.Namespaces) > 0 {
server = fmt.Sprintf("%s (%d namespaces)", c.Server, len(c.Namespaces))
}
// nolint:staticcheck
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\n", server, c.Name, c.ServerVersion, c.ConnectionState.Status, c.ConnectionState.Message, c.Project)
}
_ = w.Flush()

View File

@@ -15,8 +15,7 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/pflag"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
runtimeUtil "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
cache2 "k8s.io/client-go/tools/cache"
@@ -49,7 +48,6 @@ type forwardCacheClient struct {
err error
redisHaProxyName string
redisName string
redisPassword string
}
func (c *forwardCacheClient) doLazy(action func(client cache.CacheClient) error) error {
@@ -66,7 +64,7 @@ func (c *forwardCacheClient) doLazy(action func(client cache.CacheClient) error)
return
}
redisClient := redis.NewClient(&redis.Options{Addr: fmt.Sprintf("localhost:%d", redisPort), Password: c.redisPassword})
redisClient := redis.NewClient(&redis.Options{Addr: fmt.Sprintf("localhost:%d", redisPort)})
c.client = cache.NewRedisCache(redisClient, time.Hour, c.compression)
})
if c.err != nil {
@@ -203,7 +201,7 @@ func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOpti
}
// get rid of logging error handler
runtimeUtil.ErrorHandlers = runtimeUtil.ErrorHandlers[1:]
runtime.ErrorHandlers = runtime.ErrorHandlers[1:]
cli.SetLogLevel(log.ErrorLevel.String())
log.SetLevel(log.ErrorLevel)
os.Setenv(v1alpha1.EnvVarFakeInClusterConfig, "true")
@@ -238,14 +236,7 @@ func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOpti
return fmt.Errorf("error creating kubernetes dynamic clientset: %w", err)
}
scheme := runtime.NewScheme()
err = v1alpha1.AddToScheme(scheme)
if err != nil {
return fmt.Errorf("error adding argo resources to scheme: %w", err)
}
controllerClientset, err := client.New(restConfig, client.Options{
Scheme: scheme,
})
controllerClientset, err := client.New(restConfig, client.Options{})
if err != nil {
return fmt.Errorf("error creating kubernetes controller clientset: %w", err)
}
@@ -260,12 +251,12 @@ func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOpti
if err != nil {
return fmt.Errorf("error running miniredis: %w", err)
}
appstateCache := appstatecache.NewCache(cache.NewCache(&forwardCacheClient{namespace: namespace, context: ctxStr, compression: compression, redisHaProxyName: clientOpts.RedisHaProxyName, redisName: clientOpts.RedisName}), time.Hour)
redisOptions := &redis.Options{Addr: mr.Addr()}
if err = common.SetOptionalRedisPasswordFromKubeConfig(ctx, kubeClientset, namespace, redisOptions); err != nil {
log.Warnf("Failed to fetch & set redis password for namespace %s: %v", namespace, err)
}
appstateCache := appstatecache.NewCache(cache.NewCache(&forwardCacheClient{namespace: namespace, context: ctxStr, compression: compression, redisHaProxyName: clientOpts.RedisHaProxyName, redisName: clientOpts.RedisName, redisPassword: redisOptions.Password}), time.Hour)
srv := server.NewServer(ctx, server.ArgoCDServerOpts{
EnableGZip: false,
Namespace: namespace,

View File

@@ -6,7 +6,6 @@ import (
"fmt"
"io"
"os"
"slices"
"strings"
"text/tabwriter"
"time"
@@ -81,8 +80,6 @@ func NewProjectCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
command.AddCommand(NewProjectRemoveOrphanedIgnoreCommand(clientOpts))
command.AddCommand(NewProjectAddSourceNamespace(clientOpts))
command.AddCommand(NewProjectRemoveSourceNamespace(clientOpts))
command.AddCommand(NewProjectAddDestinationServiceAccountCommand(clientOpts))
command.AddCommand(NewProjectRemoveDestinationServiceAccountCommand(clientOpts))
return command
}
@@ -802,7 +799,7 @@ func printProjectNames(projects []v1alpha1.AppProject) {
// Print table of project info
func printProjectTable(projects []v1alpha1.AppProject) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\tSIGNATURE-KEYS\tORPHANED-RESOURCES\tDESTINATION-SERVICE-ACCOUNTS\n")
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\tSIGNATURE-KEYS\tORPHANED-RESOURCES\n")
for _, p := range projects {
printProjectLine(w, &p)
}
@@ -858,7 +855,7 @@ func formatOrphanedResources(p *v1alpha1.AppProject) string {
}
func printProjectLine(w io.Writer, p *v1alpha1.AppProject) {
var destinations, destinationServiceAccounts, sourceRepos, clusterWhitelist, namespaceBlacklist, signatureKeys string
var destinations, sourceRepos, clusterWhitelist, namespaceBlacklist, signatureKeys string
switch len(p.Spec.Destinations) {
case 0:
destinations = "<none>"
@@ -867,14 +864,6 @@ func printProjectLine(w io.Writer, p *v1alpha1.AppProject) {
default:
destinations = fmt.Sprintf("%d destinations", len(p.Spec.Destinations))
}
switch len(p.Spec.DestinationServiceAccounts) {
case 0:
destinationServiceAccounts = "<none>"
case 1:
destinationServiceAccounts = fmt.Sprintf("%s,%s,%s", p.Spec.DestinationServiceAccounts[0].Server, p.Spec.DestinationServiceAccounts[0].Namespace, p.Spec.DestinationServiceAccounts[0].DefaultServiceAccount)
default:
destinationServiceAccounts = fmt.Sprintf("%d destinationServiceAccounts", len(p.Spec.DestinationServiceAccounts))
}
switch len(p.Spec.SourceRepos) {
case 0:
sourceRepos = "<none>"
@@ -903,7 +892,7 @@ func printProjectLine(w io.Writer, p *v1alpha1.AppProject) {
default:
signatureKeys = fmt.Sprintf("%d key(s)", len(p.Spec.SignatureKeys))
}
fmt.Fprintf(w, "%s\t%s\t%v\t%v\t%v\t%v\t%v\t%v\t%v\n", p.Name, p.Spec.Description, destinations, sourceRepos, clusterWhitelist, namespaceBlacklist, signatureKeys, formatOrphanedResources(p), destinationServiceAccounts)
fmt.Fprintf(w, "%s\t%s\t%v\t%v\t%v\t%v\t%v\t%v\n", p.Name, p.Spec.Description, destinations, sourceRepos, clusterWhitelist, namespaceBlacklist, signatureKeys, formatOrphanedResources(p))
}
func printProject(p *v1alpha1.AppProject, scopedRepositories []*v1alpha1.Repository, scopedClusters []*v1alpha1.Cluster) {
@@ -1093,122 +1082,3 @@ func NewProjectEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
}
return command
}
// NewProjectAddDestinationServiceAccountCommand returns a new instance of an `argocd proj add-destination-service-account` command
func NewProjectAddDestinationServiceAccountCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var serviceAccountNamespace string
buildApplicationDestinationServiceAccount := func(destination string, namespace string, serviceAccount string, serviceAccountNamespace string) v1alpha1.ApplicationDestinationServiceAccount {
if serviceAccountNamespace != "" {
return v1alpha1.ApplicationDestinationServiceAccount{
Server: destination,
Namespace: namespace,
DefaultServiceAccount: fmt.Sprintf("%s:%s", serviceAccountNamespace, serviceAccount),
}
} else {
return v1alpha1.ApplicationDestinationServiceAccount{
Server: destination,
Namespace: namespace,
DefaultServiceAccount: serviceAccount,
}
}
}
command := &cobra.Command{
Use: "add-destination-service-account PROJECT SERVER NAMESPACE SERVICE_ACCOUNT",
Short: "Add project destination's default service account",
Example: templates.Examples(`
# Add project destination service account (SERVICE_ACCOUNT) for a server URL (SERVER) in the specified namespace (NAMESPACE) on the project with name PROJECT
argocd proj add-destination-service-account PROJECT SERVER NAMESPACE SERVICE_ACCOUNT
# Add project destination service account (SERVICE_ACCOUNT) from a different namespace
argocd proj add-destination-service-account PROJECT SERVER NAMESPACE SERVICE_ACCOUNT --service-account-namespace <service_account_namespace>
`),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
if len(args) != 4 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
server := args[1]
namespace := args[2]
serviceAccount := args[3]
if strings.Contains(serviceAccountNamespace, "*") {
log.Fatal("service-account-namespace for DestinationServiceAccount must not contain wildcards")
}
if strings.Contains(serviceAccount, "*") {
log.Fatal("ServiceAccount for DestinationServiceAccount must not contain wildcards")
}
destinationServiceAccount := buildApplicationDestinationServiceAccount(server, namespace, serviceAccount, serviceAccountNamespace)
conn, projIf := headless.NewClientOrDie(clientOpts, c).NewProjectClientOrDie()
defer argoio.Close(conn)
proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
for _, dest := range proj.Spec.DestinationServiceAccounts {
dstServerExist := destinationServiceAccount.Server != "" && dest.Server == destinationServiceAccount.Server
dstServiceAccountExist := destinationServiceAccount.DefaultServiceAccount != "" && dest.DefaultServiceAccount == destinationServiceAccount.DefaultServiceAccount
if dest.Namespace == destinationServiceAccount.Namespace && dstServerExist && dstServiceAccountExist {
log.Fatal("Specified destination service account is already defined in project")
}
}
proj.Spec.DestinationServiceAccounts = append(proj.Spec.DestinationServiceAccounts, destinationServiceAccount)
_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
command.Flags().StringVar(&serviceAccountNamespace, "service-account-namespace", "", "Use service-account-namespace as namespace where the service account is present")
return command
}
// NewProjectRemoveDestinationServiceAccountCommand returns a new instance of an `argocd proj remove-destination-service-account` command
func NewProjectRemoveDestinationServiceAccountCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
command := &cobra.Command{
Use: "remove-destination-service-account PROJECT SERVER NAMESPACE SERVICE_ACCOUNT",
Short: "Remove default destination service account from the project",
Example: templates.Examples(`
# Remove the destination service account (SERVICE_ACCOUNT) from the specified destination (SERVER and NAMESPACE combination) on the project with name PROJECT
argocd proj remove-destination-service-account PROJECT SERVER NAMESPACE SERVICE_ACCOUNT
`),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
if len(args) != 4 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
server := args[1]
namespace := args[2]
serviceAccount := args[3]
conn, projIf := headless.NewClientOrDie(clientOpts, c).NewProjectClientOrDie()
defer argoio.Close(conn)
proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
originalLength := len(proj.Spec.DestinationServiceAccounts)
proj.Spec.DestinationServiceAccounts = slices.DeleteFunc(proj.Spec.DestinationServiceAccounts,
func(destServiceAccount v1alpha1.ApplicationDestinationServiceAccount) bool {
return destServiceAccount.Namespace == namespace &&
destServiceAccount.Server == server &&
destServiceAccount.DefaultServiceAccount == serviceAccount
},
)
if originalLength != len(proj.Spec.DestinationServiceAccounts) {
_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
} else {
log.Fatal("Specified destination service account does not exist in project")
}
},
}
return command
}

View File

@@ -352,10 +352,9 @@ func printSyncWindows(proj *v1alpha1.AppProject) {
fmt.Fprintf(w, fmtStr, headers...)
if proj.Spec.SyncWindows.HasWindows() {
for i, window := range proj.Spec.SyncWindows {
isActive, _ := window.Active()
vals := []interface{}{
strconv.Itoa(i),
formatBoolOutput(isActive),
formatBoolOutput(window.Active()),
window.Kind,
window.Schedule,
window.Duration,
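This hunk switches between two call shapes for window.Active, implying that on one side of the compare it returns a bare bool and on the other a (bool, error) pair whose error is ignored at this call site. A standalone sketch of the two-value form (types simplified, purely illustrative):

package main

import "fmt"

// syncWindow is a stand-in for v1alpha1.SyncWindow (illustrative only).
type syncWindow struct{ active bool }

// Active mirrors the two-value form implied by the isActive, _ := call site:
// the window's active state plus a schedule-evaluation error, if any.
func (w syncWindow) Active() (bool, error) { return w.active, nil }

func formatBoolOutput(v bool) string {
	if v {
		return "yes"
	}
	return "no"
}

func main() {
	w := syncWindow{active: true}
	isActive, _ := w.Active()
	fmt.Println(formatBoolOutput(isActive))
}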

View File

@@ -178,7 +178,6 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
repoOpts.Repo.GithubAppInstallationId = repoOpts.GithubAppInstallationId
repoOpts.Repo.GitHubAppEnterpriseBaseURL = repoOpts.GitHubAppEnterpriseBaseURL
repoOpts.Repo.Proxy = repoOpts.Proxy
repoOpts.Repo.NoProxy = repoOpts.NoProxy
repoOpts.Repo.ForceHttpBasicAuth = repoOpts.ForceHttpBasicAuth
if repoOpts.Repo.Type == "helm" && repoOpts.Repo.Name == "" {

View File

@@ -187,7 +187,6 @@ func NewRepoCredsAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comma
command.Flags().StringVar(&repo.Type, "type", common.DefaultRepoType, "type of the repository, \"git\" or \"helm\"")
command.Flags().StringVar(&gcpServiceAccountKeyPath, "gcp-service-account-key-path", "", "service account key for the Google Cloud Platform")
command.Flags().BoolVar(&repo.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force basic auth when connecting via HTTP")
command.Flags().StringVar(&repo.Proxy, "proxy-url", "", "If provided, this URL will be used to connect via proxy")
return command
}

View File

@@ -80,7 +80,6 @@ func NewCommand() *cobra.Command {
command.PersistentFlags().StringVar(&clientOpts.PortForwardNamespace, "port-forward-namespace", config.GetFlag("port-forward-namespace", ""), "Namespace name which should be used for port forwarding")
command.PersistentFlags().IntVar(&clientOpts.HttpRetryMax, "http-retry-max", config.GetIntFlag("http-retry-max", 0), "Maximum number of retries to establish http connection to Argo CD server")
command.PersistentFlags().BoolVar(&clientOpts.Core, "core", config.GetBoolFlag("core"), "If set to true then CLI talks directly to Kubernetes instead of talking to Argo CD API server")
command.PersistentFlags().StringVar(&clientOpts.Context, "argocd-context", "", "The name of the Argo-CD server context to use")
command.PersistentFlags().StringVar(&clientOpts.ServerName, "server-name", env.StringFromEnv(common.EnvServerName, common.DefaultServerName), fmt.Sprintf("Name of the Argo CD API server; set this or the %s environment variable when the server's name label differs from the default, for example when installing via the Helm chart", common.EnvServerName))
command.PersistentFlags().StringVar(&clientOpts.AppControllerName, "controller-name", env.StringFromEnv(common.EnvAppControllerName, common.DefaultApplicationControllerName), fmt.Sprintf("Name of the Argo CD Application controller; set this or the %s environment variable when the controller's name label differs from the default, for example when installing via the Helm chart", common.EnvAppControllerName))
command.PersistentFlags().StringVar(&clientOpts.RedisHaProxyName, "redis-haproxy-name", env.StringFromEnv(common.EnvRedisHaProxyName, common.DefaultRedisHaProxyName), fmt.Sprintf("Name of the Redis HA Proxy; set this or the %s environment variable when the HA Proxy's name label differs from the default, for example when installing via the Helm chart", common.EnvRedisHaProxyName))

View File

@@ -4,13 +4,12 @@ import (
"os"
"path/filepath"
"github.com/argoproj/argo-cd/v2/cmd/util"
"github.com/spf13/cobra"
appcontroller "github.com/argoproj/argo-cd/v2/cmd/argocd-application-controller/commands"
applicationset "github.com/argoproj/argo-cd/v2/cmd/argocd-applicationset-controller/commands"
cmpserver "github.com/argoproj/argo-cd/v2/cmd/argocd-cmp-server/commands"
commitserver "github.com/argoproj/argo-cd/v2/cmd/argocd-commit-server/commands"
dex "github.com/argoproj/argo-cd/v2/cmd/argocd-dex/commands"
gitaskpass "github.com/argoproj/argo-cd/v2/cmd/argocd-git-ask-pass/commands"
k8sauth "github.com/argoproj/argo-cd/v2/cmd/argocd-k8s-auth/commands"
@@ -31,12 +30,9 @@ func main() {
if val := os.Getenv(binaryNameEnv); val != "" {
binaryName = val
}
isCLI := false
switch binaryName {
case "argocd", "argocd-linux-amd64", "argocd-darwin-amd64", "argocd-windows-amd64.exe":
command = cli.NewCommand()
isCLI = true
case "argocd-server":
command = apiserver.NewCommand()
case "argocd-application-controller":
@@ -45,24 +41,21 @@ func main() {
command = reposerver.NewCommand()
case "argocd-cmp-server":
command = cmpserver.NewCommand()
isCLI = true
case "argocd-commit-server":
command = commitserver.NewCommand()
case "argocd-dex":
command = dex.NewCommand()
case "argocd-notifications":
command = notification.NewCommand()
case "argocd-git-ask-pass":
command = gitaskpass.NewCommand()
isCLI = true
case "argocd-applicationset-controller":
command = applicationset.NewCommand()
case "argocd-k8s-auth":
command = k8sauth.NewCommand()
isCLI = true
default:
command = cli.NewCommand()
isCLI = true
}
util.SetAutoMaxProcs(isCLI)
if err := command.Execute(); err != nil {
os.Exit(1)
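main dispatches on the name the binary was invoked under, a busybox-style multi-call layout, with isCLI feeding util.SetAutoMaxProcs to control GOMAXPROCS logging. A standalone sketch of the dispatch pattern (the tool names and the environment override are illustrative, not the repository's exact values):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// One binary behaves as several tools depending on argv[0],
	// with an environment variable override for testing or symlink-free use.
	name := filepath.Base(os.Args[0])
	if v := os.Getenv("BINARY_NAME"); v != "" {
		name = v
	}
	switch name {
	case "tool-server":
		fmt.Println("starting server component")
	case "tool-controller":
		fmt.Println("starting controller component")
	default:
		fmt.Println("running interactive CLI")
	}
}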

View File

@@ -9,8 +9,6 @@ import (
"strings"
"time"
"go.uber.org/automaxprocs/maxprocs"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
@@ -88,19 +86,12 @@ type AppOptions struct {
retryBackoffMaxDuration time.Duration
retryBackoffFactor int64
ref string
}
// SetAutoMaxProcs sets the GOMAXPROCS value based on the binary name.
// It suppresses logs for CLI binaries and logs the setting for services.
func SetAutoMaxProcs(isCLI bool) {
if isCLI {
_, _ = maxprocs.Set() // Intentionally ignore errors for CLI binaries
} else {
_, err := maxprocs.Set(maxprocs.Logger(log.Infof))
if err != nil {
log.Errorf("Error setting GOMAXPROCS: %v", err)
}
}
drySourceRepo string
drySourceRevision string
drySourcePath string
syncSourceBranch string
syncSourcePath string
hydrateToBranch string
}
func AddAppFlags(command *cobra.Command, opts *AppOptions) {
@@ -109,6 +100,12 @@ func AddAppFlags(command *cobra.Command, opts *AppOptions) {
command.Flags().StringVar(&opts.chart, "helm-chart", "", "Helm Chart name")
command.Flags().StringVar(&opts.env, "env", "", "Application environment to monitor")
command.Flags().StringVar(&opts.revision, "revision", "", "The tracking source branch, tag, commit or Helm chart version the application will sync to")
command.Flags().StringVar(&opts.drySourceRepo, "dry-source-repo", "", "Repository URL of the app dry source")
command.Flags().StringVar(&opts.drySourceRevision, "dry-source-revision", "", "Revision of the app dry source")
command.Flags().StringVar(&opts.drySourcePath, "dry-source-path", "", "Path in repository to the app directory for the dry source")
command.Flags().StringVar(&opts.syncSourceBranch, "sync-source-branch", "", "The branch from which the app will sync")
command.Flags().StringVar(&opts.syncSourcePath, "sync-source-path", "", "The path in the repository from which the app will sync")
command.Flags().StringVar(&opts.hydrateToBranch, "hydrate-to-branch", "", "The branch to hydrate the app to")
command.Flags().IntVar(&opts.revisionHistoryLimit, "revision-history-limit", argoappv1.RevisionHistoryLimit, "How many items to keep in revision history")
command.Flags().StringVar(&opts.destServer, "dest-server", "", "K8s cluster URL (e.g. https://kubernetes.default.svc)")
command.Flags().StringVar(&opts.destName, "dest-name", "", "K8s cluster Name (e.g. minikube)")
@@ -169,21 +166,28 @@ func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
if flags == nil {
return visited
}
source := spec.GetSourcePtrByPosition(sourcePosition)
if source == nil {
source = &argoappv1.ApplicationSource{}
}
source, visited = ConstructSource(source, *appOpts, flags)
if spec.HasMultipleSources() {
if sourcePosition == 0 {
spec.Sources[sourcePosition] = *source
} else if sourcePosition > 0 {
spec.Sources[sourcePosition-1] = *source
} else {
spec.Sources = append(spec.Sources, *source)
}
var h *argoappv1.SourceHydrator
h, hasHydratorFlag := constructSourceHydrator(spec.SourceHydrator, *appOpts, flags)
if hasHydratorFlag {
spec.SourceHydrator = h
} else {
spec.Source = source
source := spec.GetSourcePtrByPosition(sourcePosition)
if source == nil {
source = &argoappv1.ApplicationSource{}
}
source, visited = ConstructSource(source, *appOpts, flags)
if spec.HasMultipleSources() {
if sourcePosition == 0 {
spec.Sources[sourcePosition] = *source
} else if sourcePosition > 0 {
spec.Sources[sourcePosition-1] = *source
} else {
spec.Sources = append(spec.Sources, *source)
}
} else {
spec.Source = source
}
}
flags.Visit(func(f *pflag.Flag) {
visited++
@@ -578,9 +582,7 @@ func constructAppsBaseOnName(appName string, labels, annotations, args []string,
Name: appName,
Namespace: appNs,
},
Spec: argoappv1.ApplicationSpec{
Source: &argoappv1.ApplicationSource{},
},
Spec: argoappv1.ApplicationSpec{},
}
SetAppSpecOptions(flags, &app.Spec, &appOpts, 0)
SetParameterOverrides(app, appOpts.Parameters, 0)
@@ -748,6 +750,47 @@ func ConstructSource(source *argoappv1.ApplicationSource, appOpts AppOptions, fl
return source, visited
}
// constructSourceHydrator constructs a source hydrator from the command line flags. It returns the modified source
// hydrator and a boolean indicating if any hydrator flags were set. We return instead of just modifying the source
// hydrator in place because the given hydrator `h` might be nil. In that case, we need to create a new source hydrator
// and return it.
func constructSourceHydrator(h *argoappv1.SourceHydrator, appOpts AppOptions, flags *pflag.FlagSet) (*argoappv1.SourceHydrator, bool) {
hasHydratorFlag := false
ensureNotNil := func(notEmpty bool) {
hasHydratorFlag = true
if notEmpty && h == nil {
h = &argoappv1.SourceHydrator{}
}
}
flags.Visit(func(f *pflag.Flag) {
switch f.Name {
case "dry-source-repo":
ensureNotNil(appOpts.drySourceRepo != "")
h.DrySource.RepoURL = appOpts.drySourceRepo
case "dry-source-path":
ensureNotNil(appOpts.drySourcePath != "")
h.DrySource.Path = appOpts.drySourcePath
case "dry-source-revision":
ensureNotNil(appOpts.drySourceRevision != "")
h.DrySource.TargetRevision = appOpts.drySourceRevision
case "sync-source-branch":
ensureNotNil(appOpts.syncSourceBranch != "")
h.SyncSource.TargetBranch = appOpts.syncSourceBranch
case "sync-source-path":
ensureNotNil(appOpts.syncSourcePath != "")
h.SyncSource.Path = appOpts.syncSourcePath
case "hydrate-to-branch":
ensureNotNil(appOpts.hydrateToBranch != "")
if appOpts.hydrateToBranch == "" {
h.HydrateTo = nil
} else {
h.HydrateTo = &argoappv1.HydrateTo{TargetBranch: appOpts.hydrateToBranch}
}
}
})
return h, hasHydratorFlag
}
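With AddAppFlags registering these flags and SetAppSpecOptions routing them through constructSourceHydrator, an application can be switched to hydrator-based sourcing from the CLI. An illustrative invocation (the app name, repo URL, and branch names are placeholders, and this assumes the flags are exposed on argocd app set as the code above suggests):

argocd app set my-app \
  --dry-source-repo https://github.com/argoproj/argocd-example-apps \
  --dry-source-path apps \
  --dry-source-revision HEAD \
  --sync-source-branch env/test \
  --sync-source-path apps \
  --hydrate-to-branch env/test-next

Note the hydrate-to-branch handling above: passing an empty value clears HydrateTo rather than setting an empty target branch.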
func mergeLabels(app *argoappv1.Application, labels []string) {
mapLabels, err := label.Parse(labels)
errors.CheckError(err)

View File

@@ -1,11 +1,10 @@
package util
import (
"bytes"
"log"
"os"
"testing"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -285,6 +284,28 @@ func Test_setAppSpecOptions(t *testing.T) {
require.NoError(t, f.SetFlag("helm-api-versions", "v2"))
assert.Equal(t, []string{"v1", "v2"}, f.spec.Source.Helm.APIVersions)
})
t.Run("source hydrator", func(t *testing.T) {
require.NoError(t, f.SetFlag("dry-source-repo", "https://github.com/argoproj/argocd-example-apps"))
assert.Equal(t, "https://github.com/argoproj/argocd-example-apps", f.spec.SourceHydrator.DrySource.RepoURL)
require.NoError(t, f.SetFlag("dry-source-path", "apps"))
assert.Equal(t, "apps", f.spec.SourceHydrator.DrySource.Path)
require.NoError(t, f.SetFlag("dry-source-revision", "HEAD"))
assert.Equal(t, "HEAD", f.spec.SourceHydrator.DrySource.TargetRevision)
require.NoError(t, f.SetFlag("sync-source-branch", "env/test"))
assert.Equal(t, "env/test", f.spec.SourceHydrator.SyncSource.TargetBranch)
require.NoError(t, f.SetFlag("sync-source-path", "apps"))
assert.Equal(t, "apps", f.spec.SourceHydrator.SyncSource.Path)
require.NoError(t, f.SetFlag("hydrate-to-branch", "env/test-next"))
assert.Equal(t, "env/test-next", f.spec.SourceHydrator.HydrateTo.TargetBranch)
require.NoError(t, f.SetFlag("hydrate-to-branch", ""))
assert.Nil(t, f.spec.SourceHydrator.HydrateTo)
})
}
func newMultiSourceAppOptionsFixture() *appOptionsFixture {
@@ -530,27 +551,3 @@ func TestFilterResources(t *testing.T) {
assert.Nil(t, filteredResources)
})
}
func TestSetAutoMaxProcs(t *testing.T) {
t.Run("CLI mode ignores errors", func(t *testing.T) {
logBuffer := &bytes.Buffer{}
oldLogger := log.Default()
log.SetOutput(logBuffer)
defer log.SetOutput(oldLogger.Writer())
SetAutoMaxProcs(true)
assert.Empty(t, logBuffer.String(), "Expected no log output when isCLI is true")
})
t.Run("Non-CLI mode logs error on failure", func(t *testing.T) {
logBuffer := &bytes.Buffer{}
oldLogger := log.Default()
log.SetOutput(logBuffer)
defer log.SetOutput(oldLogger.Writer())
SetAutoMaxProcs(false)
assert.NotContains(t, logBuffer.String(), "Error setting GOMAXPROCS", "Unexpected log output detected")
})
}

View File

@@ -20,12 +20,11 @@ import (
)
type ProjectOpts struct {
Description string
destinations []string
destinationServiceAccounts []string
Sources []string
SignatureKeys []string
SourceNamespaces []string
Description string
destinations []string
Sources []string
SignatureKeys []string
SourceNamespaces []string
orphanedResourcesEnabled bool
orphanedResourcesWarn bool
@@ -48,8 +47,6 @@ func AddProjFlags(command *cobra.Command, opts *ProjectOpts) {
command.Flags().StringArrayVar(&opts.allowedNamespacedResources, "allow-namespaced-resource", []string{}, "List of allowed namespaced resources")
command.Flags().StringArrayVar(&opts.deniedNamespacedResources, "deny-namespaced-resource", []string{}, "List of denied namespaced resources")
command.Flags().StringSliceVar(&opts.SourceNamespaces, "source-namespaces", []string{}, "List of source namespaces for applications")
command.Flags().StringArrayVar(&opts.destinationServiceAccounts, "dest-service-accounts", []string{},
"Destination server, namespace and target service account (e.g. https://192.168.99.100:8443,default,default-sa)")
}
func getGroupKindList(values []string) []v1.GroupKind {
@@ -96,23 +93,6 @@ func (opts *ProjectOpts) GetDestinations() []v1alpha1.ApplicationDestination {
return destinations
}
func (opts *ProjectOpts) GetDestinationServiceAccounts() []v1alpha1.ApplicationDestinationServiceAccount {
destinationServiceAccounts := make([]v1alpha1.ApplicationDestinationServiceAccount, 0)
for _, destStr := range opts.destinationServiceAccounts {
parts := strings.Split(destStr, ",")
if len(parts) != 3 {
log.Fatalf("Expected destination service account of the form: server,namespace, defaultServiceAccount. Received: %s", destStr)
} else {
destinationServiceAccounts = append(destinationServiceAccounts, v1alpha1.ApplicationDestinationServiceAccount{
Server: parts[0],
Namespace: parts[1],
DefaultServiceAccount: parts[2],
})
}
}
return destinationServiceAccounts
}
// GetSignatureKeys TODO: Get configured keys and emit warning when a key is specified that is not configured
func (opts *ProjectOpts) GetSignatureKeys() []v1alpha1.SignatureKey {
signatureKeys := make([]v1alpha1.SignatureKey, 0)
@@ -186,8 +166,6 @@ func SetProjSpecOptions(flags *pflag.FlagSet, spec *v1alpha1.AppProjectSpec, pro
spec.NamespaceResourceBlacklist = projOpts.GetDeniedNamespacedResources()
case "source-namespaces":
spec.SourceNamespaces = projOpts.GetSourceNamespaces()
case "dest-service-accounts":
spec.DestinationServiceAccounts = projOpts.GetDestinationServiceAccounts()
}
})
if flags.Changed("orphaned-resources") || flags.Changed("orphaned-resources-warn") {

View File

@@ -5,8 +5,6 @@ import (
"github.com/stretchr/testify/assert"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
func TestProjectOpts_ResourceLists(t *testing.T) {
@@ -24,27 +22,3 @@ func TestProjectOpts_ResourceLists(t *testing.T) {
[]v1.GroupKind{{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"}}, opts.GetDeniedClusterResources(),
)
}
func TestProjectOpts_GetDestinationServiceAccounts(t *testing.T) {
opts := ProjectOpts{
destinationServiceAccounts: []string{
"https://192.168.99.100:8443,test-ns,test-sa",
"https://kubernetes.default.svc.local:6443,guestbook,guestbook-sa",
},
}
assert.ElementsMatch(t,
[]v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://192.168.99.100:8443",
Namespace: "test-ns",
DefaultServiceAccount: "test-sa",
},
{
Server: "https://kubernetes.default.svc.local:6443",
Namespace: "guestbook",
DefaultServiceAccount: "guestbook-sa",
},
}, opts.GetDestinationServiceAccounts(),
)
}

View File

@@ -22,7 +22,6 @@ type RepoOptions struct {
GithubAppPrivateKeyPath string
GitHubAppEnterpriseBaseURL string
Proxy string
NoProxy string
GCPServiceAccountKeyPath string
ForceHttpBasicAuth bool
}
@@ -45,7 +44,6 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
command.Flags().StringVar(&opts.GithubAppPrivateKeyPath, "github-app-private-key-path", "", "private key of the GitHub Application")
command.Flags().StringVar(&opts.GitHubAppEnterpriseBaseURL, "github-app-enterprise-base-url", "", "base url to use when using GitHub Enterprise (e.g. https://ghe.example.com/api/v3")
command.Flags().StringVar(&opts.Proxy, "proxy", "", "use proxy to access repository")
command.Flags().StringVar(&opts.Proxy, "no-proxy", "", "don't access these targets via proxy")
command.Flags().StringVar(&opts.GCPServiceAccountKeyPath, "gcp-service-account-key-path", "", "service account key for the Google Cloud Platform")
command.Flags().BoolVar(&opts.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force use of basic auth when connecting repository via HTTP")
}

View File

@@ -0,0 +1,47 @@
package apiclient
import (
"fmt"
log "github.com/sirupsen/logrus"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
"github.com/argoproj/argo-cd/v2/util/io"
)
// Clientset represents commit server api clients
type Clientset interface {
NewCommitServerClient() (io.Closer, CommitServiceClient, error)
}
type clientSet struct {
address string
}
// NewCommitServerClient creates a new instance of the commit server client
func (c *clientSet) NewCommitServerClient() (io.Closer, CommitServiceClient, error) {
conn, err := NewConnection(c.address)
if err != nil {
return nil, nil, fmt.Errorf("failed to open a new connection to commit server: %w", err)
}
return conn, NewCommitServiceClient(conn), nil
}
// NewConnection creates a new connection to the commit server
func NewConnection(address string) (*grpc.ClientConn, error) {
var opts []grpc.DialOption
opts = append(opts, grpc.WithTransportCredentials(insecure.NewCredentials()))
conn, err := grpc.Dial(address, opts...)
if err != nil {
log.Errorf("Unable to connect to commit service with address %s", address)
return nil, err
}
return conn, nil
}
// NewCommitServerClientset creates a new instance of the commit server Clientset
func NewCommitServerClientset(address string) Clientset {
return &clientSet{address: address}
}
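The Clientset above is the dial-out entry point for the new commit-server component. A minimal usage sketch (the address is a placeholder; the real one depends on how the component is deployed):

package main

import (
	"log"

	"github.com/argoproj/argo-cd/v2/commitserver/apiclient"
	"github.com/argoproj/argo-cd/v2/util/io"
)

func main() {
	// Placeholder address; substitute the deployed commit-server service.
	clientset := apiclient.NewCommitServerClientset("argocd-commit-server:8086")
	closer, client, err := clientset.NewCommitServerClient()
	if err != nil {
		log.Fatalf("failed to connect to commit server: %v", err)
	}
	defer io.Close(closer)
	_ = client // issue CommitHydratedManifests calls here
}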

commitserver/apiclient/commit.pb.go generated Normal file (1382 additions)

File diff suppressed because it is too large

View File

@@ -0,0 +1,68 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
package mocks
import (
apiclient "github.com/argoproj/argo-cd/v2/commitserver/apiclient"
io "github.com/argoproj/argo-cd/v2/util/io"
mock "github.com/stretchr/testify/mock"
)
// Clientset is an autogenerated mock type for the Clientset type
type Clientset struct {
mock.Mock
}
// NewCommitServerClient provides a mock function with given fields:
func (_m *Clientset) NewCommitServerClient() (io.Closer, apiclient.CommitServiceClient, error) {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for NewCommitServerClient")
}
var r0 io.Closer
var r1 apiclient.CommitServiceClient
var r2 error
if rf, ok := ret.Get(0).(func() (io.Closer, apiclient.CommitServiceClient, error)); ok {
return rf()
}
if rf, ok := ret.Get(0).(func() io.Closer); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(io.Closer)
}
}
if rf, ok := ret.Get(1).(func() apiclient.CommitServiceClient); ok {
r1 = rf()
} else {
if ret.Get(1) != nil {
r1 = ret.Get(1).(apiclient.CommitServiceClient)
}
}
if rf, ok := ret.Get(2).(func() error); ok {
r2 = rf()
} else {
r2 = ret.Error(2)
}
return r0, r1, r2
}
// NewClientset creates a new instance of Clientset. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewClientset(t interface {
mock.TestingT
Cleanup(func())
}) *Clientset {
mock := &Clientset{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -0,0 +1,69 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
package mocks
import (
context "context"
apiclient "github.com/argoproj/argo-cd/v2/commitserver/apiclient"
grpc "google.golang.org/grpc"
mock "github.com/stretchr/testify/mock"
)
// CommitServiceClient is an autogenerated mock type for the CommitServiceClient type
type CommitServiceClient struct {
mock.Mock
}
// CommitHydratedManifests provides a mock function with given fields: ctx, in, opts
func (_m *CommitServiceClient) CommitHydratedManifests(ctx context.Context, in *apiclient.CommitHydratedManifestsRequest, opts ...grpc.CallOption) (*apiclient.CommitHydratedManifestsResponse, error) {
_va := make([]interface{}, len(opts))
for _i := range opts {
_va[_i] = opts[_i]
}
var _ca []interface{}
_ca = append(_ca, ctx, in)
_ca = append(_ca, _va...)
ret := _m.Called(_ca...)
if len(ret) == 0 {
panic("no return value specified for CommitHydratedManifests")
}
var r0 *apiclient.CommitHydratedManifestsResponse
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, *apiclient.CommitHydratedManifestsRequest, ...grpc.CallOption) (*apiclient.CommitHydratedManifestsResponse, error)); ok {
return rf(ctx, in, opts...)
}
if rf, ok := ret.Get(0).(func(context.Context, *apiclient.CommitHydratedManifestsRequest, ...grpc.CallOption) *apiclient.CommitHydratedManifestsResponse); ok {
r0 = rf(ctx, in, opts...)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*apiclient.CommitHydratedManifestsResponse)
}
}
if rf, ok := ret.Get(1).(func(context.Context, *apiclient.CommitHydratedManifestsRequest, ...grpc.CallOption) error); ok {
r1 = rf(ctx, in, opts...)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// NewCommitServiceClient creates a new instance of CommitServiceClient. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewCommitServiceClient(t interface {
mock.TestingT
Cleanup(func())
}) *CommitServiceClient {
mock := &CommitServiceClient{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
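A sketch of how this generated mock is used from a consumer's test, assuming the mocks package sits alongside apiclient as shown (the SHA and expectations are illustrative):

package mypkg

import (
	"context"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"

	"github.com/argoproj/argo-cd/v2/commitserver/apiclient"
	"github.com/argoproj/argo-cd/v2/commitserver/apiclient/mocks"
)

func TestUsesCommitService(t *testing.T) {
	// NewCommitServiceClient registers cleanup that asserts all expectations.
	client := mocks.NewCommitServiceClient(t)
	client.On("CommitHydratedManifests", mock.Anything, mock.Anything).
		Return(&apiclient.CommitHydratedManifestsResponse{HydratedSha: "abc123"}, nil).
		Once()

	resp, err := client.CommitHydratedManifests(context.Background(), &apiclient.CommitHydratedManifestsRequest{})
	require.NoError(t, err)
	assert.Equal(t, "abc123", resp.HydratedSha)
}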

View File

@@ -0,0 +1,218 @@
package commit
import (
"context"
"fmt"
"os"
"time"
log "github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/v2/commitserver/apiclient"
"github.com/argoproj/argo-cd/v2/commitserver/metrics"
"github.com/argoproj/argo-cd/v2/util/git"
"github.com/argoproj/argo-cd/v2/util/io/files"
)
// Service is the service that handles commit requests.
type Service struct {
gitCredsStore git.CredsStore
metricsServer *metrics.Server
repoClientFactory RepoClientFactory
}
// NewService returns a new instance of the commit service.
func NewService(gitCredsStore git.CredsStore, metricsServer *metrics.Server) *Service {
return &Service{
gitCredsStore: gitCredsStore,
metricsServer: metricsServer,
repoClientFactory: NewRepoClientFactory(gitCredsStore, metricsServer),
}
}
// CommitHydratedManifests handles a commit request. It clones the repository, checks out the sync branch, checks out
// the target branch, clears the repository contents, writes the manifests to the repository, commits the changes, and
// pushes the changes. It returns the hydrated revision SHA and an error if one occurred.
func (s *Service) CommitHydratedManifests(ctx context.Context, r *apiclient.CommitHydratedManifestsRequest) (*apiclient.CommitHydratedManifestsResponse, error) {
// This method is intentionally short. It's a wrapper around handleCommitRequest that adds metrics and logging.
// Keep logic here minimal and put most of the logic in handleCommitRequest.
startTime := time.Now()
// We validate for a nil repo in handleCommitRequest, but we need to check for a nil repo here to get the repo URL
// for metrics.
var repoURL string
if r.Repo != nil {
repoURL = r.Repo.Repo
}
s.metricsServer.IncPendingCommitRequest(repoURL)
defer s.metricsServer.DecPendingCommitRequest(repoURL)
logCtx := log.WithFields(log.Fields{"branch": r.TargetBranch, "drySHA": r.DrySha})
out, sha, err := s.handleCommitRequest(ctx, logCtx, r)
if err != nil {
logCtx.WithError(err).WithField("output", out).Error("failed to handle commit request")
s.metricsServer.IncCommitRequest(repoURL, metrics.CommitResponseTypeFailure)
s.metricsServer.ObserveCommitRequestDuration(repoURL, metrics.CommitResponseTypeFailure, time.Since(startTime))
// No need to wrap this error, sufficient context is built in handleCommitRequest.
return &apiclient.CommitHydratedManifestsResponse{}, err
}
logCtx.Info("Successfully handled commit request")
s.metricsServer.IncCommitRequest(repoURL, metrics.CommitResponseTypeSuccess)
s.metricsServer.ObserveCommitRequestDuration(repoURL, metrics.CommitResponseTypeSuccess, time.Since(startTime))
return &apiclient.CommitHydratedManifestsResponse{
HydratedSha: sha,
}, nil
}
// handleCommitRequest handles the commit request. It clones the repository, checks out the sync branch, checks out the
// target branch, clears the repository contents, writes the manifests to the repository, commits the changes, and pushes
// the changes. It returns the output of the git commands and an error if one occurred.
func (s *Service) handleCommitRequest(ctx context.Context, logCtx *log.Entry, r *apiclient.CommitHydratedManifestsRequest) (string, string, error) {
if r.Repo == nil {
return "", "", fmt.Errorf("repo is required")
}
if r.Repo.Repo == "" {
return "", "", fmt.Errorf("repo URL is required")
}
if r.TargetBranch == "" {
return "", "", fmt.Errorf("target branch is required")
}
if r.SyncBranch == "" {
return "", "", fmt.Errorf("sync branch is required")
}
logCtx = logCtx.WithField("repo", r.Repo.Repo)
logCtx.Debug("Initiating git client")
gitClient, dirPath, cleanup, err := s.initGitClient(ctx, logCtx, r)
if err != nil {
return "", "", fmt.Errorf("failed to init git client: %w", err)
}
defer cleanup()
logCtx.Debugf("Checking out sync branch %s", r.SyncBranch)
var out string
out, err = gitClient.CheckoutOrOrphan(r.SyncBranch, false)
if err != nil {
return out, "", fmt.Errorf("failed to checkout sync branch: %w", err)
}
logCtx.Debugf("Checking out target branch %s", r.TargetBranch)
out, err = gitClient.CheckoutOrNew(r.TargetBranch, r.SyncBranch, false)
if err != nil {
return out, "", fmt.Errorf("failed to checkout target branch: %w", err)
}
logCtx.Debug("Clearing repo contents")
out, err = gitClient.RemoveContents()
if err != nil {
return out, "", fmt.Errorf("failed to clear repo: %w", err)
}
logCtx.Debug("Writing manifests")
err = WriteForPaths(dirPath, r.Repo.Repo, r.DrySha, r.Paths)
if err != nil {
return "", "", fmt.Errorf("failed to write manifests: %w", err)
}
logCtx.Debug("Committing and pushing changes")
out, err = gitClient.CommitAndPush(r.TargetBranch, r.CommitMessage)
if err != nil {
return out, "", fmt.Errorf("failed to commit and push: %w", err)
}
logCtx.Debug("Getting commit SHA")
sha, err := gitClient.CommitSHA()
if err != nil {
return "", "", fmt.Errorf("failed to get commit SHA: %w", err)
}
return "", sha, nil
}
// initGitClient initializes a git client for the given repository and returns the client, the path to the directory where
// the repository is cloned, a cleanup function that should be called when the directory is no longer needed, and an error
// if one occurred.
func (s *Service) initGitClient(ctx context.Context, logCtx *log.Entry, r *apiclient.CommitHydratedManifestsRequest) (git.Client, string, func(), error) {
dirPath, err := files.CreateTempDir("/tmp/_commit-service")
if err != nil {
return nil, "", nil, fmt.Errorf("failed to create temp dir: %w", err)
}
// Call cleanupOrLog in this function if an error occurs to ensure the temp dir is cleaned up.
cleanupOrLog := func() {
err := os.RemoveAll(dirPath)
if err != nil {
logCtx.WithError(err).Error("failed to cleanup temp dir")
}
}
gitClient, err := s.repoClientFactory.NewClient(r.Repo, dirPath)
if err != nil {
cleanupOrLog()
return nil, "", nil, fmt.Errorf("failed to create git client: %w", err)
}
logCtx.Debugf("Initializing repo %s", r.Repo.Repo)
err = gitClient.Init()
if err != nil {
cleanupOrLog()
return nil, "", nil, fmt.Errorf("failed to init git client: %w", err)
}
logCtx.Debugf("Fetching repo %s", r.Repo.Repo)
err = gitClient.Fetch("")
if err != nil {
cleanupOrLog()
return nil, "", nil, fmt.Errorf("failed to clone repo: %w", err)
}
logCtx.Debugf("Getting user info for repo credentials")
gitCreds := r.Repo.GetGitCreds(s.gitCredsStore)
startTime := time.Now()
authorName, authorEmail, err := gitCreds.GetUserInfo(ctx)
s.metricsServer.ObserveUserInfoRequestDuration(r.Repo.Repo, getCredentialType(r.Repo), time.Since(startTime))
if err != nil {
cleanupOrLog()
return nil, "", nil, fmt.Errorf("failed to get github app info: %w", err)
}
if authorName == "" {
authorName = "Argo CD"
}
if authorEmail == "" {
logCtx.Warnf("Author email not available, using 'argo-cd@example.com'.")
authorEmail = "argo-cd@example.com"
}
logCtx.Debugf("Setting author %s <%s>", authorName, authorEmail)
_, err = gitClient.SetAuthor(authorName, authorEmail)
if err != nil {
cleanupOrLog()
return nil, "", nil, fmt.Errorf("failed to set author: %w", err)
}
return gitClient, dirPath, cleanupOrLog, nil
}
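initGitClient returns the cleanup function instead of deferring it because the caller, not this function, owns the directory's lifetime; every early-error path therefore cleans up itself. A standalone sketch of the same hand-off pattern:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// openWorkdir illustrates the pattern used by initGitClient: clean up on any
// early error, and only on success hand the cleanup function to the caller.
func openWorkdir() (string, func(), error) {
	dir, err := os.MkdirTemp("", "commit-")
	if err != nil {
		return "", nil, fmt.Errorf("failed to create temp dir: %w", err)
	}
	cleanup := func() { _ = os.RemoveAll(dir) }
	if err := os.WriteFile(filepath.Join(dir, "README.md"), []byte("scratch"), 0o600); err != nil {
		cleanup()
		return "", nil, fmt.Errorf("failed to prepare dir: %w", err)
	}
	return dir, cleanup, nil
}

func main() {
	dir, cleanup, err := openWorkdir()
	if err != nil {
		fmt.Println(err)
		return
	}
	defer cleanup()
	fmt.Println("working in", dir)
}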
type hydratorMetadataFile struct {
RepoURL string `json:"repoURL"`
DrySHA string `json:"drySha"`
Commands []string `json:"commands"`
}
// TODO: make this configurable via ConfigMap.
var manifestHydrationReadmeTemplate = `
# Manifest Hydration
To hydrate the manifests in this repository, run the following commands:
` + "```shell\n" + `
git clone {{ .RepoURL }}
# cd into the cloned directory
git checkout {{ .DrySHA }}
{{ range $command := .Commands -}}
{{ $command }}
{{ end -}}` + "```"
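hydratorMetadataFile doubles as the data for this README template. A self-contained sketch of rendering it (values are placeholders; the repository's actual write path for the README is not shown in this diff):

package main

import (
	"os"
	"text/template"
)

type hydratorMetadataFile struct {
	RepoURL  string   `json:"repoURL"`
	DrySHA   string   `json:"drySha"`
	Commands []string `json:"commands"`
}

const readmeTemplate = `
# Manifest Hydration

To hydrate the manifests in this repository, run the following commands:

` + "```shell\n" + `
git clone {{ .RepoURL }}
# cd into the cloned directory
git checkout {{ .DrySHA }}
{{ range $command := .Commands -}}
{{ $command }}
{{ end -}}` + "```"

func main() {
	// Render the README from the hydrator metadata, as the commit service would.
	tmpl := template.Must(template.New("readme").Parse(readmeTemplate))
	_ = tmpl.Execute(os.Stdout, hydratorMetadataFile{
		RepoURL:  "https://github.com/example/deploys.git", // placeholder
		DrySHA:   "abc1234",
		Commands: []string{"kustomize build . > manifest.yaml"},
	})
}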

View File

@@ -0,0 +1,50 @@
syntax = "proto3";
option go_package = "github.com/argoproj/argo-cd/v2/commitserver/apiclient";
import "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1/generated.proto";
// CommitHydratedManifestsRequest is the request to commit hydrated manifests to a repository.
message CommitHydratedManifestsRequest {
// Repo contains repository information including, at minimum, the URL of the repository. Generally it will contain
// repo credentials.
github.com.argoproj.argo_cd.v2.pkg.apis.application.v1alpha1.Repository repo = 1;
// SyncBranch is the branch Argo CD syncs from, i.e. the hydrated branch.
string syncBranch = 2;
// TargetBranch is the branch Argo CD is committing to, i.e. the branch that will be updated.
string targetBranch = 3;
// DrySha is the commit SHA from the dry branch, i.e. pre-rendered manifest branch.
string drySha = 4;
// CommitMessage is the commit message to use when committing changes.
string commitMessage = 5;
// Paths contains the paths to write hydrated manifests to, along with the manifests and commands to execute.
repeated PathDetails paths = 6;
}
// PathDetails holds information about hydrated manifests to be written to a particular path in the hydrated manifests
// commit.
message PathDetails {
// Path is the path to write the hydrated manifests to.
string path = 1;
// Manifests contains the manifests to write to the path.
repeated HydratedManifestDetails manifests = 2;
// Commands contains the commands executed when hydrating the manifests.
repeated string commands = 3;
}
// HydratedManifestDetails contains the hydrated manifests.
message HydratedManifestDetails {
// ManifestJSON is the hydrated manifest as JSON.
string manifestJSON = 1;
}
// CommitHydratedManifestsResponse is the response to the CommitHydratedManifestsRequest.
message CommitHydratedManifestsResponse {
// HydratedSha is the commit SHA of the hydrated manifests commit.
string hydratedSha = 1;
}
// CommitService is the service for committing hydrated manifests to a repository.
service CommitService {
// Commit commits hydrated manifests to a repository.
rpc CommitHydratedManifests (CommitHydratedManifestsRequest) returns (CommitHydratedManifestsResponse);
}
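For reference, a request matching this schema as a Go fragment, assuming standard protoc-gen-go field naming (the service and test code elsewhere in this diff already use DrySha, SyncBranch, TargetBranch, and CommitMessage; the PathDetails field names are inferred the same way, and all values are placeholders):

req := &apiclient.CommitHydratedManifestsRequest{
	Repo:          &v1alpha1.Repository{Repo: "https://github.com/example/deploys.git"},
	SyncBranch:    "env/test",
	TargetBranch:  "main",
	DrySha:        "abc1234",
	CommitMessage: "hydrate abc1234",
	Paths: []*apiclient.PathDetails{{
		Path:      "apps/guestbook",
		Commands:  []string{"kustomize build ."},
		Manifests: []*apiclient.HydratedManifestDetails{{ManifestJSON: `{"kind":"ConfigMap"}`}},
	}},
}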

View File

@@ -0,0 +1,123 @@
package commit
import (
"context"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"github.com/argoproj/argo-cd/v2/commitserver/apiclient"
"github.com/argoproj/argo-cd/v2/commitserver/commit/mocks"
"github.com/argoproj/argo-cd/v2/commitserver/metrics"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/git"
gitmocks "github.com/argoproj/argo-cd/v2/util/git/mocks"
)
func Test_CommitHydratedManifests(t *testing.T) {
t.Parallel()
validRequest := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
Repo: "https://github.com/argoproj/argocd-example-apps.git",
},
TargetBranch: "main",
SyncBranch: "env/test",
CommitMessage: "test commit message",
}
t.Run("missing repo", func(t *testing.T) {
t.Parallel()
service, _ := newServiceWithMocks(t)
request := &apiclient.CommitHydratedManifestsRequest{}
_, err := service.CommitHydratedManifests(context.Background(), request)
require.Error(t, err)
assert.ErrorContains(t, err, "repo is required")
})
t.Run("missing repo URL", func(t *testing.T) {
t.Parallel()
service, _ := newServiceWithMocks(t)
request := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{},
}
_, err := service.CommitHydratedManifests(context.Background(), request)
require.Error(t, err)
assert.ErrorContains(t, err, "repo URL is required")
})
t.Run("missing target branch", func(t *testing.T) {
t.Parallel()
service, _ := newServiceWithMocks(t)
request := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
Repo: "https://github.com/argoproj/argocd-example-apps.git",
},
}
_, err := service.CommitHydratedManifests(context.Background(), request)
require.Error(t, err)
assert.ErrorContains(t, err, "target branch is required")
})
t.Run("missing sync branch", func(t *testing.T) {
t.Parallel()
service, _ := newServiceWithMocks(t)
request := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
Repo: "https://github.com/argoproj/argocd-example-apps.git",
},
TargetBranch: "main",
}
_, err := service.CommitHydratedManifests(context.Background(), request)
require.Error(t, err)
assert.ErrorContains(t, err, "sync branch is required")
})
t.Run("failed to create git client", func(t *testing.T) {
t.Parallel()
service, mockRepoClientFactory := newServiceWithMocks(t)
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(nil, assert.AnError).Once()
_, err := service.CommitHydratedManifests(context.Background(), validRequest)
require.Error(t, err)
assert.ErrorIs(t, err, assert.AnError)
})
t.Run("happy path", func(t *testing.T) {
t.Parallel()
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("RemoveContents").Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("it-worked!", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
resp, err := service.CommitHydratedManifests(context.Background(), validRequest)
require.NoError(t, err)
require.NotNil(t, resp)
assert.Equal(t, "it-worked!", resp.HydratedSha)
})
}
func newServiceWithMocks(t *testing.T) (*Service, *mocks.RepoClientFactory) {
metricsServer := metrics.NewMetricsServer()
mockCredsStore := git.NoopCredsStore{}
service := NewService(mockCredsStore, metricsServer)
mockRepoClientFactory := mocks.NewRepoClientFactory(t)
service.repoClientFactory = mockRepoClientFactory
return service, mockRepoClientFactory
}

View File

@@ -0,0 +1,23 @@
package commit
import "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
// getCredentialType returns the type of credential used by the repository.
func getCredentialType(repo *v1alpha1.Repository) string {
if repo == nil {
return ""
}
if repo.Password != "" {
return "https"
}
if repo.SSHPrivateKey != "" {
return "ssh"
}
if repo.GithubAppPrivateKey != "" && repo.GithubAppId != 0 && repo.GithubAppInstallationId != 0 {
return "github-app"
}
if repo.GCPServiceAccountKey != "" {
return "cloud-source-repositories"
}
return ""
}
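A plausible consumer of this helper is the credential_type label on the commit server's userinfo histogram (see the metrics package below). The fragment that follows is a hypothetical sketch under that assumption; observeUserInfo and resolve are invented names, and it sits inside package commit (with time, metrics, and v1alpha1 imported) because getCredentialType is unexported:

// observeUserInfo is a hypothetical helper showing how the credential type
// could label the userinfo-request duration metric.
func observeUserInfo(m *metrics.Server, repo *v1alpha1.Repository, resolve func() error) error {
	start := time.Now()
	err := resolve() // e.g. look up the commit author using the repo's credentials
	m.ObserveUserInfoRequestDuration(repo.Repo, getCredentialType(repo), time.Since(start))
	return err
}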

View File

@@ -0,0 +1,62 @@
package commit
import (
"testing"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
func TestRepository_GetCredentialType(t *testing.T) {
tests := []struct {
name string
repo *v1alpha1.Repository
want string
}{
{
name: "Empty Repository",
repo: nil,
want: "",
},
{
name: "HTTPS Repository",
repo: &v1alpha1.Repository{
Repo: "foo",
Password: "some-password",
},
want: "https",
},
{
name: "SSH Repository",
repo: &v1alpha1.Repository{
Repo: "foo",
SSHPrivateKey: "some-key",
},
want: "ssh",
},
{
name: "GitHub App Repository",
repo: &v1alpha1.Repository{
Repo: "foo",
GithubAppPrivateKey: "some-key",
GithubAppId: 1,
GithubAppInstallationId: 1,
},
want: "github-app",
},
{
name: "Google Cloud Repository",
repo: &v1alpha1.Repository{
Repo: "foo",
GCPServiceAccountKey: "some-key",
},
want: "cloud-source-repositories",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := getCredentialType(tt.repo); got != tt.want {
t.Errorf("Repository.GetCredentialType() = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -0,0 +1,145 @@
package commit
import (
"encoding/json"
"fmt"
"os"
"path"
"text/template"
log "github.com/sirupsen/logrus"
"gopkg.in/yaml.v3"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/v2/commitserver/apiclient"
"github.com/argoproj/argo-cd/v2/util/io/files"
)
// WriteForPaths writes the manifests, hydrator.metadata, and README.md files for each path in the provided paths. It
// also writes a root-level hydrator.metadata file containing the repo URL and dry SHA.
func WriteForPaths(rootPath string, repoUrl string, drySha string, paths []*apiclient.PathDetails) error {
// Write the top-level metadata.
err := writeMetadata(rootPath, hydratorMetadataFile{DrySHA: drySha, RepoURL: repoUrl})
if err != nil {
return fmt.Errorf("failed to write top-level hydrator metadata: %w", err)
}
for _, p := range paths {
hydratePath := p.Path
if hydratePath == "." {
hydratePath = ""
}
var fullHydratePath string
fullHydratePath, err = files.SecureMkdirAll(rootPath, hydratePath, os.ModePerm)
if err != nil {
return fmt.Errorf("failed to create path: %w", err)
}
// Write the manifests
err = writeManifests(fullHydratePath, p.Manifests)
if err != nil {
return fmt.Errorf("failed to write manifests: %w", err)
}
// Write hydrator.metadata containing information about the hydration process.
hydratorMetadata := hydratorMetadataFile{
Commands: p.Commands,
DrySHA: drySha,
RepoURL: repoUrl,
}
err = writeMetadata(fullHydratePath, hydratorMetadata)
if err != nil {
return fmt.Errorf("failed to write hydrator metadata: %w", err)
}
// Write README
err = writeReadme(fullHydratePath, hydratorMetadata)
if err != nil {
return fmt.Errorf("failed to write readme: %w", err)
}
}
return nil
}
// writeMetadata writes the metadata to the hydrator.metadata file.
func writeMetadata(dirPath string, metadata hydratorMetadataFile) error {
hydratorMetadataJson, err := json.MarshalIndent(metadata, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal hydrator metadata: %w", err)
}
// No need to use SecureJoin here, as the path is already sanitized.
hydratorMetadataPath := path.Join(dirPath, "hydrator.metadata")
err = os.WriteFile(hydratorMetadataPath, hydratorMetadataJson, os.ModePerm)
if err != nil {
return fmt.Errorf("failed to write hydrator metadata: %w", err)
}
return nil
}
// writeReadme writes the readme to the README.md file.
func writeReadme(dirPath string, metadata hydratorMetadataFile) error {
readmeTemplate := template.New("readme")
readmeTemplate, err := readmeTemplate.Parse(manifestHydrationReadmeTemplate)
if err != nil {
return fmt.Errorf("failed to parse readme template: %w", err)
}
// Create the file to render the template into.
// No need to use SecureJoin here, as the path is already sanitized.
readmePath := path.Join(dirPath, "README.md")
readmeFile, err := os.Create(readmePath)
if err != nil {
return fmt.Errorf("failed to create README file: %w", err)
}
err = readmeTemplate.Execute(readmeFile, metadata)
closeErr := readmeFile.Close()
if closeErr != nil {
log.WithError(closeErr).Error("failed to close README file")
}
if err != nil {
return fmt.Errorf("failed to execute readme template: %w", err)
}
return nil
}
// writeManifests writes the manifests to the manifest.yaml file, truncating the file if it exists and appending the
// manifests in the order they are provided.
func writeManifests(dirPath string, manifests []*apiclient.HydratedManifestDetails) error {
// If the file exists, truncate it.
// No need to use SecureJoin here, as the path is already sanitized.
manifestPath := path.Join(dirPath, "manifest.yaml")
file, err := os.OpenFile(manifestPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.ModePerm)
if err != nil {
return fmt.Errorf("failed to open manifest file: %w", err)
}
defer func() {
err := file.Close()
if err != nil {
log.WithError(err).Error("failed to close file")
}
}()
enc := yaml.NewEncoder(file)
defer func() {
err := enc.Close()
if err != nil {
log.WithError(err).Error("failed to close yaml encoder")
}
}()
enc.SetIndent(2)
for _, m := range manifests {
obj := &unstructured.Unstructured{}
err = json.Unmarshal([]byte(m.ManifestJSON), obj)
if err != nil {
return fmt.Errorf("failed to unmarshal manifest: %w", err)
}
err = enc.Encode(&obj.Object)
if err != nil {
return fmt.Errorf("failed to encode manifest: %w", err)
}
}
return nil
}
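One behavior worth noting: gopkg.in/yaml.v3's Encoder writes each Encode call as its own YAML document and separates the second and later documents with ---, which is why manifest.yaml comes out as a multi-document stream. A self-contained sketch of just that behavior:

package main

import (
	"os"

	"gopkg.in/yaml.v3"
)

type manifest struct {
	Kind       string `yaml:"kind"`
	APIVersion string `yaml:"apiVersion"`
}

func main() {
	enc := yaml.NewEncoder(os.Stdout)
	enc.SetIndent(2)
	// Each Encode call appends one YAML document to the stream.
	_ = enc.Encode(manifest{Kind: "Pod", APIVersion: "v1"})
	_ = enc.Encode(manifest{Kind: "Service", APIVersion: "v1"})
	_ = enc.Close()
	// Prints:
	// kind: Pod
	// apiVersion: v1
	// ---
	// kind: Service
	// apiVersion: v1
}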

View File

@@ -0,0 +1,154 @@
package commit
import (
"encoding/json"
"os"
"path"
"testing"
securejoin "github.com/cyphar/filepath-securejoin"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/argoproj/argo-cd/v2/commitserver/apiclient"
)
func TestWriteForPaths(t *testing.T) {
dir := t.TempDir()
repoUrl := "https://github.com/example/repo"
drySha := "abc123"
paths := []*apiclient.PathDetails{
{
Path: "path1",
Manifests: []*apiclient.HydratedManifestDetails{
{ManifestJSON: `{"kind":"Pod","apiVersion":"v1"}`},
},
Commands: []string{"command1", "command2"},
},
{
Path: "path2",
Manifests: []*apiclient.HydratedManifestDetails{
{ManifestJSON: `{"kind":"Service","apiVersion":"v1"}`},
},
Commands: []string{"command3"},
},
}
err := WriteForPaths(dir, repoUrl, drySha, paths)
require.NoError(t, err)
// Check if the top-level hydrator.metadata exists and contains the repo URL and dry SHA
topMetadataPath := path.Join(dir, "hydrator.metadata")
topMetadataBytes, err := os.ReadFile(topMetadataPath)
require.NoError(t, err)
var topMetadata hydratorMetadataFile
err = json.Unmarshal(topMetadataBytes, &topMetadata)
require.NoError(t, err)
assert.Equal(t, repoUrl, topMetadata.RepoURL)
assert.Equal(t, drySha, topMetadata.DrySHA)
for _, p := range paths {
fullHydratePath, err := securejoin.SecureJoin(dir, p.Path)
require.NoError(t, err)
// Check if each path directory exists
assert.DirExists(t, fullHydratePath)
// Check that each path contains a hydrator.metadata file containing the repo URL
metadataPath := path.Join(fullHydratePath, "hydrator.metadata")
metadataBytes, err := os.ReadFile(metadataPath)
require.NoError(t, err)
var readMetadata hydratorMetadataFile
err = json.Unmarshal(metadataBytes, &readMetadata)
require.NoError(t, err)
assert.Equal(t, repoUrl, readMetadata.RepoURL)
// Check that each path contains a README.md file containing the repo URL
readmePath := path.Join(fullHydratePath, "README.md")
readmeBytes, err := os.ReadFile(readmePath)
require.NoError(t, err)
assert.Contains(t, string(readmeBytes), repoUrl)
// Check that each path contains a manifest.yaml file containing the manifest's kind field
manifestPath := path.Join(fullHydratePath, "manifest.yaml")
manifestBytes, err := os.ReadFile(manifestPath)
require.NoError(t, err)
assert.Contains(t, string(manifestBytes), "kind")
}
}
func TestWriteForPaths_invalid_yaml(t *testing.T) {
dir := t.TempDir()
repoUrl := "https://github.com/example/repo"
drySha := "abc123"
paths := []*apiclient.PathDetails{
{
Path: "path1",
Manifests: []*apiclient.HydratedManifestDetails{
{ManifestJSON: `{`}, // Invalid JSON, so unmarshalling fails before YAML encoding
},
Commands: []string{"command1", "command2"},
},
}
err := WriteForPaths(dir, repoUrl, drySha, paths)
require.Error(t, err)
}
func TestWriteMetadata(t *testing.T) {
dir := t.TempDir()
metadata := hydratorMetadataFile{
RepoURL: "https://github.com/example/repo",
DrySHA: "abc123",
}
err := writeMetadata(dir, metadata)
require.NoError(t, err)
metadataPath := path.Join(dir, "hydrator.metadata")
metadataBytes, err := os.ReadFile(metadataPath)
require.NoError(t, err)
var readMetadata hydratorMetadataFile
err = json.Unmarshal(metadataBytes, &readMetadata)
require.NoError(t, err)
assert.Equal(t, metadata, readMetadata)
}
func TestWriteReadme(t *testing.T) {
dir := t.TempDir()
metadata := hydratorMetadataFile{
RepoURL: "https://github.com/example/repo",
DrySHA: "abc123",
}
err := writeReadme(dir, metadata)
require.NoError(t, err)
readmePath := path.Join(dir, "README.md")
readmeBytes, err := os.ReadFile(readmePath)
require.NoError(t, err)
assert.Contains(t, string(readmeBytes), metadata.RepoURL)
}
func TestWriteManifests(t *testing.T) {
dir := t.TempDir()
manifests := []*apiclient.HydratedManifestDetails{
{ManifestJSON: `{"kind":"Pod","apiVersion":"v1"}`},
}
err := writeManifests(dir, manifests)
require.NoError(t, err)
manifestPath := path.Join(dir, "manifest.yaml")
manifestBytes, err := os.ReadFile(manifestPath)
require.NoError(t, err)
assert.Contains(t, string(manifestBytes), "kind")
}

View File

@@ -0,0 +1,59 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
package mocks
import (
git "github.com/argoproj/argo-cd/v2/util/git"
mock "github.com/stretchr/testify/mock"
v1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
// RepoClientFactory is an autogenerated mock type for the RepoClientFactory type
type RepoClientFactory struct {
mock.Mock
}
// NewClient provides a mock function with given fields: repo, rootPath
func (_m *RepoClientFactory) NewClient(repo *v1alpha1.Repository, rootPath string) (git.Client, error) {
ret := _m.Called(repo, rootPath)
if len(ret) == 0 {
panic("no return value specified for NewClient")
}
var r0 git.Client
var r1 error
if rf, ok := ret.Get(0).(func(*v1alpha1.Repository, string) (git.Client, error)); ok {
return rf(repo, rootPath)
}
if rf, ok := ret.Get(0).(func(*v1alpha1.Repository, string) git.Client); ok {
r0 = rf(repo, rootPath)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(git.Client)
}
}
if rf, ok := ret.Get(1).(func(*v1alpha1.Repository, string) error); ok {
r1 = rf(repo, rootPath)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// NewRepoClientFactory creates a new instance of RepoClientFactory. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewRepoClientFactory(t interface {
mock.TestingT
Cleanup(func())
}) *RepoClientFactory {
mock := &RepoClientFactory{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -0,0 +1,32 @@
package commit
import (
"github.com/argoproj/argo-cd/v2/commitserver/metrics"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/git"
)
// RepoClientFactory is a factory for creating git clients for a repository.
type RepoClientFactory interface {
NewClient(repo *v1alpha1.Repository, rootPath string) (git.Client, error)
}
type repoClientFactory struct {
gitCredsStore git.CredsStore
metricsServer *metrics.Server
}
// NewRepoClientFactory returns a new instance of the repo client factory.
func NewRepoClientFactory(gitCredsStore git.CredsStore, metricsServer *metrics.Server) RepoClientFactory {
return &repoClientFactory{
gitCredsStore: gitCredsStore,
metricsServer: metricsServer,
}
}
// NewClient creates a new git client for the repository.
func (r *repoClientFactory) NewClient(repo *v1alpha1.Repository, rootPath string) (git.Client, error) {
gitCreds := repo.GetGitCreds(r.gitCredsStore)
opts := git.WithEventHandlers(metrics.NewGitClientEventHandlers(r.metricsServer))
return git.NewClientExt(repo.Repo, rootPath, gitCreds, repo.IsInsecure(), repo.IsLFSEnabled(), repo.Proxy, opts)
}
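A minimal sketch of the factory in use, mirroring the happy-path test earlier in this diff; the temp directory and anonymous credentials are assumptions for illustration:

package main

import (
	"log"
	"os"

	"github.com/argoproj/argo-cd/v2/commitserver/commit"
	"github.com/argoproj/argo-cd/v2/commitserver/metrics"
	"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
	"github.com/argoproj/argo-cd/v2/util/git"
)

func main() {
	factory := commit.NewRepoClientFactory(git.NoopCredsStore{}, metrics.NewMetricsServer())
	dir, err := os.MkdirTemp("", "hydrate-worktree")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)
	// Anonymous access; a real caller passes the repo's configured credentials.
	repo := &v1alpha1.Repository{Repo: "https://github.com/argoproj/argocd-example-apps.git"}
	client, err := factory.NewClient(repo, dir)
	if err != nil {
		log.Fatal(err)
	}
	if err := client.Init(); err != nil {
		log.Fatal(err)
	}
	// Fetch with an empty revision, as the mocked happy path suggests.
	if err := client.Fetch(""); err != nil {
		log.Fatal(err)
	}
}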

View File

@@ -0,0 +1,22 @@
//go:build !linux
package commit
import (
"fmt"
"os"
securejoin "github.com/cyphar/filepath-securejoin"
)
func SecureMkdirAll(root, unsafePath string, mode os.FileMode) (string, error) {
fullPath, err := securejoin.SecureJoin(root, unsafePath)
if err != nil {
return "", fmt.Errorf("failed to construct secure path: %w", err)
}
err = os.MkdirAll(fullPath, mode)
if err != nil {
return "", fmt.Errorf("failed to create directory: %w", err)
}
return fullPath, nil
}

View File

@@ -0,0 +1,69 @@
//go:build !linux
package commit
import (
"os"
"path"
"path/filepath"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestSecureMkdirAllDefault(t *testing.T) {
root := t.TempDir()
unsafePath := "test/dir"
fullPath, err := SecureMkdirAll(root, unsafePath, os.ModePerm)
require.NoError(t, err)
expectedPath := path.Join(root, unsafePath)
assert.Equal(t, expectedPath, fullPath)
}
func TestSecureMkdirAllWithExistingDir(t *testing.T) {
root := t.TempDir()
unsafePath := "existing/dir"
fullPath, err := SecureMkdirAll(root, unsafePath, os.ModePerm)
require.NoError(t, err)
newPath, err := SecureMkdirAll(root, unsafePath, os.ModePerm)
require.NoError(t, err)
assert.Equal(t, fullPath, newPath)
}
func TestSecureMkdirAllWithFile(t *testing.T) {
root := t.TempDir()
unsafePath := "file.txt"
filePath := filepath.Join(root, unsafePath)
err := os.WriteFile(filePath, []byte("test"), os.ModePerm)
require.NoError(t, err)
_, err = SecureMkdirAll(root, unsafePath, os.ModePerm)
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to create directory")
}
func TestSecureMkdirAllDotDotPath(t *testing.T) {
root := t.TempDir()
unsafePath := "../outside"
fullPath, err := SecureMkdirAll(root, unsafePath, os.ModePerm)
require.NoError(t, err)
expectedPath := filepath.Join(root, "outside")
assert.Equal(t, expectedPath, fullPath)
info, err := os.Stat(fullPath)
require.NoError(t, err)
assert.True(t, info.IsDir())
relPath, err := filepath.Rel(root, fullPath)
require.NoError(t, err)
assert.False(t, strings.HasPrefix(relPath, ".."))
}

View File

@@ -0,0 +1,22 @@
//go:build linux
package commit
import (
"fmt"
"os"
securejoin "github.com/cyphar/filepath-securejoin"
)
func SecureMkdirAll(root, unsafePath string, mode os.FileMode) (string, error) {
err := securejoin.MkdirAll(root, unsafePath, int(mode))
if err != nil {
return "", fmt.Errorf("failed to make directory: %w", err)
}
fullPath, err := securejoin.SecureJoin(root, unsafePath)
if err != nil {
return "", fmt.Errorf("failed to construct secure path: %w", err)
}
return fullPath, nil
}

View File

@@ -0,0 +1,22 @@
//go:build linux
package commit
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
)
func TestSecureMkdirAllLinux(t *testing.T) {
root := t.TempDir()
unsafePath := "test/dir"
fullPath, err := SecureMkdirAll(root, unsafePath, os.ModePerm)
require.NoError(t, err)
expectedPath := filepath.Join(root, unsafePath)
require.Equal(t, expectedPath, fullPath)
}

View File

@@ -0,0 +1,34 @@
package metrics
import (
"time"
"github.com/argoproj/argo-cd/v2/util/git"
)
// NewGitClientEventHandlers creates event handlers that update Git related metrics
func NewGitClientEventHandlers(metricsServer *Server) git.EventHandlers {
return git.EventHandlers{
OnFetch: func(repo string) func() {
startTime := time.Now()
metricsServer.IncGitRequest(repo, GitRequestTypeFetch)
return func() {
metricsServer.ObserveGitRequestDuration(repo, GitRequestTypeFetch, time.Since(startTime))
}
},
OnLsRemote: func(repo string) func() {
startTime := time.Now()
metricsServer.IncGitRequest(repo, GitRequestTypeLsRemote)
return func() {
metricsServer.ObserveGitRequestDuration(repo, GitRequestTypeLsRemote, time.Since(startTime))
}
},
OnPush: func(repo string) func() {
startTime := time.Now()
metricsServer.IncGitRequest(repo, GitRequestTypePush)
return func() {
metricsServer.ObserveGitRequestDuration(repo, GitRequestTypePush, time.Since(startTime))
}
},
}
}

View File

@@ -0,0 +1,157 @@
package metrics
import (
"net/http"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/collectors"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
// Server is a prometheus server which collects application metrics.
type Server struct {
handler http.Handler
commitPendingRequestsGauge *prometheus.GaugeVec
gitRequestCounter *prometheus.CounterVec
gitRequestHistogram *prometheus.HistogramVec
commitRequestHistogram *prometheus.HistogramVec
userInfoRequestHistogram *prometheus.HistogramVec
commitRequestCounter *prometheus.CounterVec
}
// GitRequestType is the type of git request
type GitRequestType string
const (
// GitRequestTypeLsRemote is a request to list remote refs
GitRequestTypeLsRemote = "ls-remote"
// GitRequestTypeFetch is a request to fetch from remote
GitRequestTypeFetch = "fetch"
// GitRequestTypePush is a request to push to remote
GitRequestTypePush = "push"
)
// CommitResponseType is the type of response for a commit request
type CommitResponseType string
const (
// CommitResponseTypeSuccess is a successful commit request
CommitResponseTypeSuccess = "success"
// CommitResponseTypeFailure is a failed commit request
CommitResponseTypeFailure = "failure"
)
// NewMetricsServer returns a new prometheus server which collects application metrics.
func NewMetricsServer() *Server {
registry := prometheus.NewRegistry()
registry.MustRegister(collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}))
registry.MustRegister(collectors.NewGoCollector())
commitPendingRequestsGauge := prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "argocd_commitserver_commit_pending_request_total",
Help: "Number of pending commit requests",
},
[]string{"repo"},
)
registry.MustRegister(commitPendingRequestsGauge)
gitRequestCounter := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "argocd_commitserver_git_request_total",
Help: "Number of git requests performed by repo server",
},
[]string{"repo", "request_type"},
)
registry.MustRegister(gitRequestCounter)
gitRequestHistogram := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "argocd_commitserver_git_request_duration_seconds",
Help: "Git requests duration seconds.",
Buckets: []float64{0.1, 0.25, .5, 1, 2, 4, 10, 20},
},
[]string{"repo", "request_type"},
)
registry.MustRegister(gitRequestHistogram)
commitRequestHistogram := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "argocd_commitserver_commit_request_duration_seconds",
Help: "Commit request duration seconds.",
Buckets: []float64{0.1, 0.25, .5, 1, 2, 4, 10, 20},
},
[]string{"repo", "response_type"},
)
registry.MustRegister(commitRequestHistogram)
userInfoRequestHistogram := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "argocd_commitserver_userinfo_request_duration_seconds",
Help: "Userinfo request duration seconds.",
Buckets: []float64{0.1, 0.25, .5, 1, 2, 4, 10, 20},
},
[]string{"repo", "credential_type"},
)
registry.MustRegister(userInfoRequestHistogram)
commitRequestCounter := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "argocd_commitserver_commit_request_total",
Help: "Number of commit requests performed handled",
},
[]string{"repo", "response_type"},
)
registry.MustRegister(commitRequestCounter)
return &Server{
handler: promhttp.HandlerFor(registry, promhttp.HandlerOpts{}),
commitPendingRequestsGauge: commitPendingRequestsGauge,
gitRequestCounter: gitRequestCounter,
gitRequestHistogram: gitRequestHistogram,
commitRequestHistogram: commitRequestHistogram,
userInfoRequestHistogram: userInfoRequestHistogram,
commitRequestCounter: commitRequestCounter,
}
}
// GetHandler returns the http.Handler for the prometheus server
func (m *Server) GetHandler() http.Handler {
return m.handler
}
// IncPendingCommitRequest increments the pending commit requests gauge
func (m *Server) IncPendingCommitRequest(repo string) {
m.commitPendingRequestsGauge.WithLabelValues(repo).Inc()
}
// DecPendingCommitRequest decrements the pending commit requests gauge
func (m *Server) DecPendingCommitRequest(repo string) {
m.commitPendingRequestsGauge.WithLabelValues(repo).Dec()
}
// IncGitRequest increments the git requests counter
func (m *Server) IncGitRequest(repo string, requestType GitRequestType) {
m.gitRequestCounter.WithLabelValues(repo, string(requestType)).Inc()
}
// ObserveGitRequestDuration observes the duration of a git request
func (m *Server) ObserveGitRequestDuration(repo string, requestType GitRequestType, duration time.Duration) {
m.gitRequestHistogram.WithLabelValues(repo, string(requestType)).Observe(duration.Seconds())
}
// ObserveCommitRequestDuration observes the duration of a commit request
func (m *Server) ObserveCommitRequestDuration(repo string, rt CommitResponseType, duration time.Duration) {
m.commitRequestHistogram.WithLabelValues(repo, string(rt)).Observe(duration.Seconds())
}
// ObserveUserInfoRequestDuration observes the duration of a userinfo request
func (m *Server) ObserveUserInfoRequestDuration(repo string, credentialType string, duration time.Duration) {
m.userInfoRequestHistogram.WithLabelValues(repo, credentialType).Observe(duration.Seconds())
}
// IncCommitRequest increments the commit request counter
func (m *Server) IncCommitRequest(repo string, rt CommitResponseType) {
m.commitRequestCounter.WithLabelValues(repo, string(rt)).Inc()
}
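Since GetHandler returns a plain promhttp handler, exposing the metrics is an ordinary HTTP server. A minimal sketch; the port mirrors DefaultPortCommitServerMetrics, but this wiring is an assumption rather than code from this change:

package main

import (
	"log"
	"net/http"

	"github.com/argoproj/argo-cd/v2/commitserver/metrics"
)

func main() {
	m := metrics.NewMetricsServer()
	mux := http.NewServeMux()
	mux.Handle("/metrics", m.GetHandler())
	log.Fatal(http.ListenAndServe("0.0.0.0:8087", mux))
}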

commitserver/server.go Normal file
View File

@@ -0,0 +1,29 @@
package commitserver
import (
"google.golang.org/grpc"
"github.com/argoproj/argo-cd/v2/commitserver/metrics"
"github.com/argoproj/argo-cd/v2/util/git"
"github.com/argoproj/argo-cd/v2/commitserver/apiclient"
"github.com/argoproj/argo-cd/v2/commitserver/commit"
)
// ArgoCDCommitServer is the server that handles commit requests.
type ArgoCDCommitServer struct {
commitService *commit.Service
}
// NewServer returns a new instance of the commit server.
func NewServer(gitCredsStore git.CredsStore, metricsServer *metrics.Server) *ArgoCDCommitServer {
return &ArgoCDCommitServer{commitService: commit.NewService(gitCredsStore, metricsServer)}
}
// CreateGRPC creates a new gRPC server.
func (a *ArgoCDCommitServer) CreateGRPC() *grpc.Server {
server := grpc.NewServer()
apiclient.RegisterCommitServiceServer(server, a.commitService)
return server
}
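And a matching sketch of standing the commit server up; the listener address mirrors DefaultPortCommitServer, but the startup wiring is likewise an assumption:

package main

import (
	"log"
	"net"

	"github.com/argoproj/argo-cd/v2/commitserver"
	"github.com/argoproj/argo-cd/v2/commitserver/metrics"
	"github.com/argoproj/argo-cd/v2/util/git"
)

func main() {
	srv := commitserver.NewServer(git.NoopCredsStore{}, metrics.NewMetricsServer())
	lis, err := net.Listen("tcp", "0.0.0.0:8086")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(srv.CreateGRPC().Serve(lis))
}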

View File

@@ -26,6 +26,8 @@ const (
const (
// DefaultRepoServerAddr is the gRPC address of the Argo CD repo server
DefaultRepoServerAddr = "argocd-repo-server:8081"
// DefaultCommitServerAddr is the gRPC address of the Argo CD commit server
DefaultCommitServerAddr = "argocd-commit-server:8086"
// DefaultDexServerAddr is the HTTP address of the Dex OIDC server, which we run a reverse proxy against
DefaultDexServerAddr = "argocd-dex-server:5556"
// DefaultRedisAddr is the default redis address
@@ -46,7 +48,6 @@ const (
ArgoCDGPGKeysConfigMapName = "argocd-gpg-keys-cm"
// ArgoCDAppControllerShardConfigMapName contains the application controller to shard mapping
ArgoCDAppControllerShardConfigMapName = "argocd-app-controller-shard-cm"
ArgoCDCmdParamsConfigMapName = "argocd-cmd-params-cm"
)
// Some default configurables
@@ -62,15 +63,19 @@ const (
DefaultPortArgoCDMetrics = 8082
DefaultPortArgoCDAPIServerMetrics = 8083
DefaultPortRepoServerMetrics = 8084
DefaultPortCommitServer = 8086
DefaultPortCommitServerMetrics = 8087
)
// DefaultAddressAPIServer for ArgoCD components
const (
DefaultAddressAdminDashboard = "localhost"
DefaultAddressAPIServer = "0.0.0.0"
DefaultAddressAPIServerMetrics = "0.0.0.0"
DefaultAddressRepoServer = "0.0.0.0"
DefaultAddressRepoServerMetrics = "0.0.0.0"
DefaultAddressAdminDashboard = "localhost"
DefaultAddressAPIServer = "0.0.0.0"
DefaultAddressAPIServerMetrics = "0.0.0.0"
DefaultAddressRepoServer = "0.0.0.0"
DefaultAddressRepoServerMetrics = "0.0.0.0"
DefaultAddressCommitServer = "0.0.0.0"
DefaultAddressCommitServerMetrics = "0.0.0.0"
)
// Default paths on the pod's file system
@@ -175,10 +180,11 @@ const (
LabelValueSecretTypeRepository = "repository"
// LabelValueSecretTypeRepoCreds indicates a secret type of repository credentials
LabelValueSecretTypeRepoCreds = "repo-creds"
// LabelValueSecretTypeRepositoryWrite indicates a secret type of repository credentials for writing
LabelValueSecretTypeRepositoryWrite = "repository-write"
// AnnotationKeyAppInstance is the annotation carrying the Argo CD application name, used as the instance name
AnnotationKeyAppInstance = "argocd.argoproj.io/tracking-id"
AnnotationInstallationID = "argocd.argoproj.io/installation-id"
// AnnotationCompareOptions is a comma-separated list of options for comparison
AnnotationCompareOptions = "argocd.argoproj.io/compare-options"
@@ -223,7 +229,7 @@ const (
EnvGitRetryMaxDuration = "ARGOCD_GIT_RETRY_MAX_DURATION"
// EnvGitRetryDuration specifies duration of git remote operation retry
EnvGitRetryDuration = "ARGOCD_GIT_RETRY_DURATION"
// EnvGitRetryFactor specifies factor of git remote operation retry
// EnvGitRetryFactor specifies fator of git remote operation retry
EnvGitRetryFactor = "ARGOCD_GIT_RETRY_FACTOR"
// EnvGitSubmoduleEnabled overrides git submodule support, true by default
EnvGitSubmoduleEnabled = "ARGOCD_GIT_MODULES_ENABLED"

View File

@@ -42,6 +42,8 @@ import (
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
commitclient "github.com/argoproj/argo-cd/v2/commitserver/apiclient"
"github.com/argoproj/argo-cd/v2/common"
statecache "github.com/argoproj/argo-cd/v2/controller/cache"
"github.com/argoproj/argo-cd/v2/controller/metrics"
@@ -116,11 +118,13 @@ type ApplicationController struct {
applicationClientset appclientset.Interface
auditLogger *argo.AuditLogger
// queue contains app namespace/name
appRefreshQueue workqueue.TypedRateLimitingInterface[string]
appRefreshQueue workqueue.RateLimitingInterface
// queue contains app namespace/name/comparisonType and used to request app refresh with the predefined comparison type
appComparisonTypeRefreshQueue workqueue.TypedRateLimitingInterface[string]
appOperationQueue workqueue.TypedRateLimitingInterface[string]
projectRefreshQueue workqueue.TypedRateLimitingInterface[string]
appComparisonTypeRefreshQueue workqueue.RateLimitingInterface
appOperationQueue workqueue.RateLimitingInterface
projectRefreshQueue workqueue.RateLimitingInterface
appHydrateQueue workqueue.RateLimitingInterface
hydrationQueue workqueue.RateLimitingInterface
appInformer cache.SharedIndexInformer
appLister applisters.ApplicationLister
projInformer cache.SharedIndexInformer
@@ -130,8 +134,8 @@ type ApplicationController struct {
statusHardRefreshTimeout time.Duration
statusRefreshJitter time.Duration
selfHealTimeout time.Duration
selfHealBackOff *wait.Backoff
repoClientset apiclient.Clientset
commitClientset commitclient.Clientset
db db.ArgoDB
settingsMgr *settings_util.SettingsManager
refreshRequestedApps map[string]CompareWith
@@ -155,18 +159,17 @@ func NewApplicationController(
kubeClientset kubernetes.Interface,
applicationClientset appclientset.Interface,
repoClientset apiclient.Clientset,
commitClientset commitclient.Clientset,
argoCache *appstatecache.Cache,
kubectl kube.Kubectl,
appResyncPeriod time.Duration,
appHardResyncPeriod time.Duration,
appResyncJitter time.Duration,
selfHealTimeout time.Duration,
selfHealBackoff *wait.Backoff,
repoErrorGracePeriod time.Duration,
metricsPort int,
metricsCacheExpiration time.Duration,
metricsApplicationLabels []string,
metricsApplicationConditions []string,
kubectlParallelismLimit int64,
persistResourceHealth bool,
clusterSharding sharding.ClusterShardingCache,
@@ -175,7 +178,6 @@ func NewApplicationController(
serverSideDiff bool,
dynamicClusterDistributionEnabled bool,
ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts,
enableK8sEvent []string,
) (*ApplicationController, error) {
log.Infof("appResyncPeriod=%v, appHardResyncPeriod=%v, appResyncJitter=%v", appResyncPeriod, appHardResyncPeriod, appResyncJitter)
db := db.NewDB(namespace, settingsMgr, kubeClientset)
@@ -190,20 +192,22 @@ func NewApplicationController(
kubectl: kubectl,
applicationClientset: applicationClientset,
repoClientset: repoClientset,
appRefreshQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[string]{Name: "app_reconciliation_queue"}),
appOperationQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[string]{Name: "app_operation_processing_queue"}),
projectRefreshQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[string]{Name: "project_reconciliation_queue"}),
appComparisonTypeRefreshQueue: workqueue.NewTypedRateLimitingQueue(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig)),
commitClientset: commitClientset,
appRefreshQueue: workqueue.NewRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig), workqueue.RateLimitingQueueConfig{Name: "app_reconciliation_queue"}),
appOperationQueue: workqueue.NewRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig), workqueue.RateLimitingQueueConfig{Name: "app_operation_processing_queue"}),
projectRefreshQueue: workqueue.NewRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig), workqueue.RateLimitingQueueConfig{Name: "project_reconciliation_queue"}),
appComparisonTypeRefreshQueue: workqueue.NewRateLimitingQueue(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig)),
appHydrateQueue: workqueue.NewRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig), workqueue.RateLimitingQueueConfig{Name: "app_hydration_queue"}),
hydrationQueue: workqueue.NewRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig), workqueue.RateLimitingQueueConfig{Name: "manifest_hydration_queue"}),
db: db,
statusRefreshTimeout: appResyncPeriod,
statusHardRefreshTimeout: appHardResyncPeriod,
statusRefreshJitter: appResyncJitter,
refreshRequestedApps: make(map[string]CompareWith),
refreshRequestedAppsMutex: &sync.Mutex{},
auditLogger: argo.NewAuditLogger(namespace, kubeClientset, common.ApplicationController, enableK8sEvent),
auditLogger: argo.NewAuditLogger(namespace, kubeClientset, common.ApplicationController),
settingsMgr: settingsMgr,
selfHealTimeout: selfHealTimeout,
selfHealBackOff: selfHealBackoff,
clusterSharding: clusterSharding,
projByNameCache: sync.Map{},
applicationNamespaces: applicationNamespaces,
@@ -284,7 +288,7 @@ func NewApplicationController(
metricsAddr := fmt.Sprintf("0.0.0.0:%d", metricsPort)
ctrl.metricsServer, err = metrics.NewMetricsServer(metricsAddr, appLister, ctrl.canProcessApp, readinessHealthCheck, metricsApplicationLabels, metricsApplicationConditions)
ctrl.metricsServer, err = metrics.NewMetricsServer(metricsAddr, appLister, ctrl.canProcessApp, readinessHealthCheck, metricsApplicationLabels)
if err != nil {
return nil, err
}
@@ -839,6 +843,8 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
defer ctrl.appComparisonTypeRefreshQueue.ShutDown()
defer ctrl.appOperationQueue.ShutDown()
defer ctrl.projectRefreshQueue.ShutDown()
defer ctrl.appHydrateQueue.ShutDown()
defer ctrl.hydrationQueue.ShutDown()
ctrl.metricsServer.RegisterClustersInfoSource(ctx, ctrl.stateCache)
ctrl.RegisterClusterSecretUpdater(ctx)
@@ -897,6 +903,17 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
for ctrl.processProjectQueueItem() {
}
}, time.Second, ctx.Done())
go wait.Until(func() {
for ctrl.processAppHydrateQueueItem() {
}
}, time.Second, ctx.Done())
go wait.Until(func() {
for ctrl.processHydrationQueueItem() {
}
}, time.Second, ctx.Done())
<-ctx.Done()
}
@@ -945,7 +962,7 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
ctrl.appOperationQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey.(string))
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
@@ -1017,8 +1034,8 @@ func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processN
return
}
if parts := strings.Split(key, "/"); len(parts) != 3 {
log.Warnf("Unexpected key format in appComparisonTypeRefreshTypeQueue. Key should consists of namespace/name/comparisonType but got: %s", key)
if parts := strings.Split(key.(string), "/"); len(parts) != 3 {
log.Warnf("Unexpected key format in appComparisonTypeRefreshTypeQueue. Key should consists of namespace/name/comparisonType but got: %s", key.(string))
} else {
if compareWith, err := strconv.Atoi(parts[2]); err != nil {
log.Warnf("Unable to parse comparison type: %v", err)
@@ -1044,7 +1061,7 @@ func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool)
processNext = false
return
}
obj, exists, err := ctrl.projInformer.GetIndexer().GetByKey(key)
obj, exists, err := ctrl.projInformer.GetIndexer().GetByKey(key.(string))
if err != nil {
log.Errorf("Failed to get project '%s' from informer index: %+v", key, err)
return
@@ -1538,6 +1555,12 @@ func (ctrl *ApplicationController) PatchAppWithWriteBack(ctx context.Context, na
return patchedApp, err
}
// processAppRefreshQueueItem does roughly these tasks:
// 1. If we're shutting down, it quits early and returns "false" to indicate we're done processing refreshes.
// 2. Checks whether the app needs to be refreshed. If not, quit early.
// 3. If we're "comparing with nothing," just update the app resource tree in Redis and the app status in k8s.
// 4. Checks that all AppProject restrictions are being followed. If not, clears the app resource tree and managed
// resources in Redis and sets failure conditions on the app status.
func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext bool) {
patchMs := time.Duration(0) // time spent in doing patch/update calls
setOpMs := time.Duration(0) // time spent in doing Operation patch calls in autosync
@@ -1556,7 +1579,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
ctrl.appOperationQueue.AddRateLimited(appKey)
ctrl.appRefreshQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey.(string))
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
@@ -1694,9 +1717,8 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
app.Status.Summary = tree.GetSummary(app)
}
canSync, _ := project.Spec.SyncWindows.Matches(app).CanSync(false)
if canSync {
syncErrCond, opMS := ctrl.autoSync(app, compareResult.syncStatus, compareResult.resources, compareResult.revisionUpdated)
if project.Spec.SyncWindows.Matches(app).CanSync(false) {
syncErrCond, opMS := ctrl.autoSync(app, compareResult.syncStatus, compareResult.resources)
setOpMs = opMS
if syncErrCond != nil {
app.Status.SetConditions(
@@ -1748,6 +1770,329 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
return
}
func (ctrl *ApplicationController) processAppHydrateQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appHydrateQueue.Get()
if shutdown {
processNext = false
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appHydrateQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey.(string))
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
origApp = origApp.DeepCopy()
app := origApp.DeepCopy()
if app.Spec.SourceHydrator == nil {
return
}
logCtx := getAppLog(app)
logCtx.Debug("Processing app hydrate queue item")
// If we're using a source hydrator, see if the dry source has changed.
latestRevision, err := ctrl.appStateManager.ResolveDryRevision(app.Spec.SourceHydrator.DrySource.RepoURL, app.Spec.SourceHydrator.DrySource.TargetRevision)
if err != nil {
logCtx.Errorf("Failed to check whether dry source has changed, skipping: %v", err)
return
}
// TODO: don't reuse statusRefreshTimeout. Create a new timeout for hydration.
reason := appNeedsHydration(origApp, ctrl.statusRefreshTimeout, latestRevision)
if reason == "" {
return
}
if latestRevision == "" {
logCtx.Errorf("Dry source has not been resolved, skipping")
return
}
logCtx.WithField("reason", reason).Info("Hydrating app")
app.Status.SourceHydrator.CurrentOperation = &appv1.HydrateOperation{
DrySHA: latestRevision,
StartedAt: metav1.Now(),
FinishedAt: nil,
Phase: appv1.HydrateOperationPhaseHydrating,
SourceHydrator: *app.Spec.SourceHydrator,
}
ctrl.persistAppStatus(origApp, &app.Status)
origApp.Status.SourceHydrator = app.Status.SourceHydrator
ctrl.hydrationQueue.Add(getHydrationQueueKey(app))
logCtx.Debug("Successfully processed app hydrate queue item")
return
}
func getHydrationQueueKey(app *appv1.Application) hydrationQueueKey {
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
key := hydrationQueueKey{
sourceRepoURL: app.Spec.SourceHydrator.DrySource.RepoURL,
sourceTargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
destinationBranch: destinationBranch,
}
return key
}
type hydrationQueueKey struct {
sourceRepoURL string
sourceTargetRevision string
destinationBranch string
}
type uniqueHydrationDestination struct {
sourceRepoURL string
sourceTargetRevision string
destinationBranch string
destinationPath string
}
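// hydrationQueueKey deduplicates hydration work per dry repo URL, dry target
// revision, and destination branch; uniqueHydrationDestination additionally
// carries the destination path so getRelevantAppsForHydration can reject two
// apps that would write the same path on the same branch.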
func (ctrl *ApplicationController) processHydrationQueueItem() (processNext bool) {
key, shutdown := ctrl.hydrationQueue.Get()
if shutdown {
processNext = false
return
}
hydrationKey, ok := key.(hydrationQueueKey)
if !ok {
log.Errorf("Failed to cast key to hydrationQueueKey")
processNext = true
return
}
logCtx := log.WithFields(log.Fields{
"sourceRepoURL": hydrationKey.sourceRepoURL,
"sourceTargetRevision": hydrationKey.sourceTargetRevision,
"destinationBranch": hydrationKey.destinationBranch,
})
processNext = true
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.hydrationQueue.Done(key)
}()
logCtx.Debug("Processing hydration queue item")
relevantApps, drySHA, hydratedSHA, err := ctrl.hydrateAppsLatestCommit(logCtx, hydrationKey)
if err != nil {
logCtx.WithField("appCount", len(relevantApps)).WithError(err).Error("Failed to hydrate apps")
for _, app := range relevantApps {
origApp := app.DeepCopy()
app.Status.SourceHydrator.CurrentOperation.Phase = appv1.HydrateOperationPhaseFailed
failedAt := metav1.Now()
app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate revision %s: %v", drySHA, err)
ctrl.persistAppStatus(origApp, &app.Status)
logCtx.Errorf("Failed to hydrate app: %v", err)
}
return
}
logCtx.WithField("appCount", len(relevantApps)).Debug("Successfully hydrated apps")
finishedAt := metav1.Now()
for _, app := range relevantApps {
origApp := app.DeepCopy()
operation := &appv1.HydrateOperation{
StartedAt: app.Status.SourceHydrator.CurrentOperation.StartedAt,
FinishedAt: &finishedAt,
Phase: appv1.HydrateOperationPhaseHydrated,
Message: "",
DrySHA: drySHA,
HydratedSHA: hydratedSHA,
SourceHydrator: app.Status.SourceHydrator.CurrentOperation.SourceHydrator,
}
app.Status.SourceHydrator.CurrentOperation = operation
app.Status.SourceHydrator.LastSuccessfulOperation = &appv1.SuccessfulHydrateOperation{
DrySHA: drySHA,
HydratedSHA: hydratedSHA,
SourceHydrator: app.Status.SourceHydrator.CurrentOperation.SourceHydrator,
}
ctrl.persistAppStatus(origApp, &app.Status)
// Request a refresh since we pushed a new commit.
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatest.Pointer(), nil)
}
return
}
func (ctrl *ApplicationController) hydrateAppsLatestCommit(logCtx *log.Entry, hydrationKey hydrationQueueKey) ([]*appv1.Application, string, string, error) {
relevantApps, err := ctrl.getRelevantAppsForHydration(logCtx, hydrationKey)
if err != nil {
return nil, "", "", fmt.Errorf("failed to get relevant apps for hydration: %w", err)
}
dryRevision, err := ctrl.appStateManager.ResolveDryRevision(hydrationKey.sourceRepoURL, hydrationKey.sourceTargetRevision)
if err != nil {
return relevantApps, "", "", fmt.Errorf("failed to resolve dry revision: %w", err)
}
hydratedRevision, err := ctrl.hydrate(relevantApps, dryRevision)
if err != nil {
return relevantApps, dryRevision, "", fmt.Errorf("failed to hydrate apps: %w", err)
}
return relevantApps, dryRevision, hydratedRevision, nil
}
func (ctrl *ApplicationController) getRelevantAppsForHydration(logCtx *log.Entry, hydrationKey hydrationQueueKey) ([]*appv1.Application, error) {
// Get all apps
apps, err := ctrl.appLister.List(labels.Everything())
if err != nil {
return nil, fmt.Errorf("failed to list apps: %w", err)
}
var relevantApps []*appv1.Application
uniqueDestinations := make(map[uniqueHydrationDestination]bool, len(apps))
for _, app := range apps {
// TODO: test that we're actually skipping un-processable apps.
if !ctrl.canProcessApp(app) {
continue
}
if app.Spec.SourceHydrator == nil {
continue
}
if app.Spec.SourceHydrator.DrySource.RepoURL != hydrationKey.sourceRepoURL ||
app.Spec.SourceHydrator.DrySource.TargetRevision != hydrationKey.sourceTargetRevision {
continue
}
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
if destinationBranch != hydrationKey.destinationBranch {
continue
}
var proj *appv1.AppProject
proj, err = ctrl.getAppProj(app)
if err != nil {
return nil, fmt.Errorf("failed to get project %q for app %q: %w", app.Spec.Project, app.QualifiedName(), err)
}
permitted := proj.IsSourcePermitted(app.Spec.GetSource())
if !permitted {
// Log and skip. We don't want to fail the entire operation because of one app.
logCtx.Warnf("App %q is not permitted to use source %q", app.QualifiedName(), app.Spec.Source.String())
continue
}
uniqueDestinationKey := uniqueHydrationDestination{
sourceRepoURL: app.Spec.SourceHydrator.DrySource.RepoURL,
sourceTargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
destinationBranch: destinationBranch,
destinationPath: app.Spec.SourceHydrator.SyncSource.Path,
}
// TODO: test the dupe detection
if _, ok := uniqueDestinations[uniqueDestinationKey]; ok {
return nil, fmt.Errorf("multiple app hydrators use the same destination: %v", uniqueDestinationKey)
}
uniqueDestinations[uniqueDestinationKey] = true
relevantApps = append(relevantApps, app)
}
return relevantApps, nil
}
func (ctrl *ApplicationController) hydrate(apps []*appv1.Application, revision string) (string, error) {
if len(apps) == 0 {
return "", nil
}
repoURL := apps[0].Spec.SourceHydrator.DrySource.RepoURL
syncBranch := apps[0].Spec.SourceHydrator.SyncSource.TargetBranch
targetBranch := apps[0].Spec.GetHydrateToSource().TargetRevision
var paths []*commitclient.PathDetails
for _, app := range apps {
project, err := ctrl.getAppProj(app)
if err != nil {
return "", fmt.Errorf("failed to get project: %w", err)
}
drySource := appv1.ApplicationSource{
RepoURL: app.Spec.SourceHydrator.DrySource.RepoURL,
Path: app.Spec.SourceHydrator.DrySource.Path,
TargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
}
drySources := []appv1.ApplicationSource{drySource}
revisions := []string{app.Spec.SourceHydrator.DrySource.TargetRevision}
appLabelKey, err := ctrl.settingsMgr.GetAppInstanceLabelKey()
if err != nil {
return "", fmt.Errorf("failed to get app instance label key: %w", err)
}
// TODO: enable signature verification
objs, resp, err := ctrl.appStateManager.GetRepoObjs(app, drySources, appLabelKey, revisions, false, false, false, project, false, false)
if err != nil {
return "", fmt.Errorf("failed to get repo objects: %w", err)
}
// Build the hydrated manifest details for the commit request.
manifestDetails := make([]*commitclient.HydratedManifestDetails, len(objs))
for i, obj := range objs {
objJson, err := json.Marshal(obj)
if err != nil {
return "", fmt.Errorf("failed to marshal object: %w", err)
}
manifestDetails[i] = &commitclient.HydratedManifestDetails{ManifestJSON: string(objJson)}
}
paths = append(paths, &commitclient.PathDetails{
Path: app.Spec.SourceHydrator.SyncSource.Path,
Manifests: manifestDetails,
Commands: resp[0].Commands,
})
}
repo, err := ctrl.db.GetHydratorCredentials(context.Background(), repoURL)
if err != nil {
return "", fmt.Errorf("failed to get hydrator credentials: %w", err)
}
if repo == nil {
// Try without credentials.
repo = &appv1.Repository{
Repo: repoURL,
}
}
manifestsRequest := commitclient.CommitHydratedManifestsRequest{
Repo: repo,
SyncBranch: syncBranch,
TargetBranch: targetBranch,
DrySha: revision,
CommitMessage: fmt.Sprintf("[Argo CD Bot] hydrate %s", revision),
Paths: paths,
}
closer, commitService, err := ctrl.commitClientset.NewCommitServerClient()
if err != nil {
return "", fmt.Errorf("failed to create commit service: %w", err)
}
defer closer.Close()
resp, err := commitService.CommitHydratedManifests(context.Background(), &manifestsRequest)
if err != nil {
return "", fmt.Errorf("failed to commit hydrated manifests: %w", err)
}
return resp.HydratedSha, nil
}
func resourceStatusKey(res appv1.ResourceStatus) string {
return strings.Join([]string{res.Group, res.Kind, res.Namespace, res.Name}, "/")
}
@@ -1816,6 +2161,36 @@ func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application,
return false, refreshType, compareWith
}
// appNeedsHydration returns the reason the application's manifests need to be hydrated, or an empty string if no hydration is needed.
func appNeedsHydration(app *appv1.Application, statusHydrateTimeout time.Duration, latestRevision string) string {
if app.Spec.SourceHydrator == nil {
return ""
}
var hydratedAt *metav1.Time
if app.Status.SourceHydrator.CurrentOperation != nil {
hydratedAt = &app.Status.SourceHydrator.CurrentOperation.StartedAt
}
if app.IsHydrateRequested() {
return "hydrate requested"
} else if app.Status.SourceHydrator.CurrentOperation == nil {
return "no previous hydrate operation"
} else if !app.Spec.SourceHydrator.DeepEquals(app.Status.SourceHydrator.CurrentOperation.SourceHydrator) {
return "spec.sourceHydrator differs"
} else if app.Status.SourceHydrator.CurrentOperation.DrySHA != latestRevision {
return "revision differs"
} else if app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseFailed && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.FinishedAt.Time) > 2*time.Minute {
return "previous hydrate operation failed more than 2 minutes ago"
} else if hydratedAt == nil || hydratedAt.Add(statusHydrateTimeout).Before(time.Now().UTC()) {
return "hydration expired"
}
return ""
}
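// To illustrate appNeedsHydration's ordering: an app whose last operation
// hydrated dry SHA "abc123" is re-hydrated as soon as the dry branch advances
// ("revision differs"); even with no new commits it is re-hydrated once
// statusHydrateTimeout elapses ("hydration expired"); and a failed operation
// is retried only after the two-minute backoff above.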
// refreshAppConditions validates whether AppProject restrictions are being followed. If not, it adds error conditions
// to the app status.
func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application) (*appv1.AppProject, bool) {
errorConditions := make([]appv1.ApplicationCondition, 0)
proj, err := ctrl.getAppProj(app)
@@ -1919,7 +2294,7 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
}
// autoSync will initiate a sync operation for an application configured with automated sync
func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *appv1.SyncStatus, resources []appv1.ResourceStatus, revisionUpdated bool) (*appv1.ApplicationCondition, time.Duration) {
func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *appv1.SyncStatus, resources []appv1.ResourceStatus) (*appv1.ApplicationCondition, time.Duration) {
logCtx := getAppLog(app)
ts := stats.NewTimingStats()
defer func() {
@@ -1963,18 +2338,11 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
}
}
selfHeal := app.Spec.SyncPolicy.Automated.SelfHeal
// Multi-Source Apps with selfHeal disabled should not trigger an autosync if
// the last sync revision and the new sync revision is the same.
if app.Spec.HasMultipleSources() && !selfHeal && reflect.DeepEqual(app.Status.Sync.Revisions, syncStatus.Revisions) {
logCtx.Infof("Skipping auto-sync: selfHeal disabled and sync caused by object update")
return nil, 0
}
desiredCommitSHA := syncStatus.Revision
desiredCommitSHAsMS := syncStatus.Revisions
alreadyAttempted, attemptPhase := alreadyAttemptedSync(app, desiredCommitSHA, desiredCommitSHAsMS, app.Spec.HasMultipleSources(), revisionUpdated)
alreadyAttempted, attemptPhase := alreadyAttemptedSync(app, desiredCommitSHA, desiredCommitSHAsMS, app.Spec.HasMultipleSources())
ts.AddCheckpoint("already_attempted_sync_ms")
selfHeal := app.Spec.SyncPolicy.Automated.SelfHeal
op := appv1.Operation{
Sync: &appv1.SyncOperation{
Revision: desiredCommitSHA,
@@ -1985,9 +2353,6 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
InitiatedBy: appv1.OperationInitiator{Automated: true},
Retry: appv1.RetryStrategy{Limit: 5},
}
if app.Status.OperationState != nil && app.Status.OperationState.Operation.Sync != nil {
op.Sync.SelfHealAttemptsCount = app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount
}
if app.Spec.SyncPolicy.Retry != nil {
op.Retry = *app.Spec.SyncPolicy.Retry
}
@@ -2005,7 +2370,6 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
return nil, 0
} else if alreadyAttempted && selfHeal {
if shouldSelfHeal, retryAfter := ctrl.shouldSelfHeal(app); shouldSelfHeal {
op.Sync.SelfHealAttemptsCount++
for _, resource := range resources {
if resource.Status != appv1.SyncStatusCodeSynced {
op.Sync.Resources = append(op.Sync.Resources, appv1.SyncOperationResource{
@@ -2032,7 +2396,7 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
}
if bAllNeedPrune {
message := fmt.Sprintf("Skipping sync attempt to %s: auto-sync will wipe out all resources", desiredCommitSHA)
logCtx.Warn(message)
logCtx.Warnf(message)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: message}, 0
}
}
@@ -2072,26 +2436,17 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
// alreadyAttemptedSync returns whether the most recent sync was performed against the
// commitSHA and with the same app source config which are currently set in the app
func alreadyAttemptedSync(app *appv1.Application, commitSHA string, commitSHAsMS []string, hasMultipleSources bool, revisionUpdated bool) (bool, synccommon.OperationPhase) {
func alreadyAttemptedSync(app *appv1.Application, commitSHA string, commitSHAsMS []string, hasMultipleSources bool) (bool, synccommon.OperationPhase) {
if app.Status.OperationState == nil || app.Status.OperationState.Operation.Sync == nil || app.Status.OperationState.SyncResult == nil {
return false, ""
}
if hasMultipleSources {
if revisionUpdated {
if !reflect.DeepEqual(app.Status.OperationState.SyncResult.Revisions, commitSHAsMS) {
return false, ""
}
} else {
log.WithField("application", app.Name).Debugf("Skipping auto-sync: commitSHA %s has no changes", commitSHA)
if !reflect.DeepEqual(app.Status.OperationState.SyncResult.Revisions, commitSHAsMS) {
return false, ""
}
} else {
if revisionUpdated {
log.WithField("application", app.Name).Infof("Executing compare of syncResult.Revision and commitSha because manifest changed: %v", commitSHA)
if app.Status.OperationState.SyncResult.Revision != commitSHA {
return false, ""
}
} else {
log.WithField("application", app.Name).Debugf("Skipping auto-sync: commitSHA %s has no changes", commitSHA)
if app.Status.OperationState.SyncResult.Revision != commitSHA {
return false, ""
}
}
@@ -2110,7 +2465,7 @@ func alreadyAttemptedSync(app *appv1.Application, commitSHA string, commitSHAsMS
} else {
// Ignore differences in target revision, since we already just verified commitSHAs are equal,
// and we do not want to trigger auto-sync due to things like HEAD != master
specSource := app.Spec.Source.DeepCopy()
specSource := app.Spec.GetSource()
specSource.TargetRevision = ""
syncResSource := app.Status.OperationState.SyncResult.Source.DeepCopy()
syncResSource.TargetRevision = ""
@@ -2124,24 +2479,10 @@ func (ctrl *ApplicationController) shouldSelfHeal(app *appv1.Application) (bool,
}
var retryAfter time.Duration
if ctrl.selfHealBackOff == nil {
if app.Status.OperationState.FinishedAt == nil {
retryAfter = ctrl.selfHealTimeout
} else {
retryAfter = ctrl.selfHealTimeout - time.Since(app.Status.OperationState.FinishedAt.Time)
}
if app.Status.OperationState.FinishedAt == nil {
retryAfter = ctrl.selfHealTimeout
} else {
backOff := *ctrl.selfHealBackOff
backOff.Steps = int(app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount)
var delay time.Duration
for backOff.Steps > 0 {
delay = backOff.Step()
}
if app.Status.OperationState.FinishedAt == nil {
retryAfter = delay
} else {
retryAfter = delay - time.Since(app.Status.OperationState.FinishedAt.Time)
}
retryAfter = ctrl.selfHealTimeout - time.Since(app.Status.OperationState.FinishedAt.Time)
}
return retryAfter <= 0, retryAfter
}
@@ -2149,7 +2490,7 @@ func (ctrl *ApplicationController) shouldSelfHeal(app *appv1.Application) (bool,
// isAppNamespaceAllowed returns whether the application is allowed in the
// namespace it's residing in.
func (ctrl *ApplicationController) isAppNamespaceAllowed(app *appv1.Application) bool {
return app.Namespace == ctrl.namespace || glob.MatchStringInList(ctrl.applicationNamespaces, app.Namespace, glob.REGEXP)
return app.Namespace == ctrl.namespace || glob.MatchStringInList(ctrl.applicationNamespaces, app.Namespace, false)
}
func (ctrl *ApplicationController) canProcessApp(obj interface{}) bool {
@@ -2315,6 +2656,7 @@ func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.Shar
if !newOK || (delay != nil && *delay != time.Duration(0)) {
ctrl.appOperationQueue.AddRateLimited(key)
}
ctrl.appHydrateQueue.AddRateLimited(newApp.QualifiedName())
ctrl.clusterSharding.UpdateApp(newApp)
},
DeleteFunc: func(obj interface{}) {
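The new appHydrateQueue line above follows the usual controller pattern: informer callbacks only enqueue a key, and a separate worker goroutine drains the queue. A minimal sketch of that rate-limited workqueue flow with client-go (the key string is illustrative):

package main

import (
    "fmt"

    "k8s.io/client-go/util/workqueue"
)

func main() {
    // AddRateLimited applies per-item backoff when the same key is requeued
    // repeatedly, which is why the informer handlers above use it.
    queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
    queue.AddRateLimited("argocd/my-app")

    key, shutdown := queue.Get()
    if shutdown {
        return
    }
    fmt.Println("processing", key)
    queue.Forget(key) // success: reset this item's backoff
    queue.Done(key)
}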

View File

@@ -4,18 +4,16 @@ import (
"context"
"encoding/json"
"errors"
"fmt"
"testing"
"time"
clustercache "github.com/argoproj/gitops-engine/pkg/cache"
"github.com/argoproj/gitops-engine/pkg/utils/kube/kubetest"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/rest"
"k8s.io/utils/ptr"
clustercache "github.com/argoproj/gitops-engine/pkg/cache"
"github.com/argoproj/argo-cd/v2/common"
statecache "github.com/argoproj/argo-cd/v2/controller/cache"
@@ -39,21 +37,19 @@ import (
dbmocks "github.com/argoproj/argo-cd/v2/util/db/mocks"
mockcommitclient "github.com/argoproj/argo-cd/v2/commitserver/apiclient/mocks"
mockstatecache "github.com/argoproj/argo-cd/v2/controller/cache/mocks"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned/fake"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
mockrepoclient "github.com/argoproj/argo-cd/v2/reposerver/apiclient/mocks"
"github.com/argoproj/argo-cd/v2/test"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/argo/normalizers"
cacheutil "github.com/argoproj/argo-cd/v2/util/cache"
appstatecache "github.com/argoproj/argo-cd/v2/util/cache/appstate"
"github.com/argoproj/argo-cd/v2/util/settings"
)
var testEnableEventList []string = argo.DefaultEnableEventList()
type namespacedResource struct {
v1alpha1.ResourceNode
AppName string
@@ -69,7 +65,6 @@ type fakeData struct {
metricsCacheExpiration time.Duration
applicationNamespaces []string
updateRevisionForPathsResponse *apiclient.UpdateRevisionForPathsResponse
additionalObjs []runtime.Object
}
type MockKubectl struct {
@@ -119,6 +114,8 @@ func newFakeController(data *fakeData, repoErr error) *ApplicationController {
mockRepoClientset := mockrepoclient.Clientset{RepoServerServiceClient: &mockRepoClient}
mockCommitClientset := mockcommitclient.Clientset{}
secret := corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "argocd-secret",
@@ -139,9 +136,7 @@ func newFakeController(data *fakeData, repoErr error) *ApplicationController {
},
Data: data.configMapData,
}
runtimeObjs := []runtime.Object{&clust, &secret, &cm}
runtimeObjs = append(runtimeObjs, data.additionalObjs...)
kubeClient := fake.NewSimpleClientset(runtimeObjs...)
kubeClient := fake.NewSimpleClientset(&clust, &cm, &secret)
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClient, test.FakeArgoCDNamespace)
kubectl := &MockKubectl{Kubectl: &kubetest.MockKubectlCmd{}}
ctrl, err := NewApplicationController(
@@ -150,6 +145,7 @@ func newFakeController(data *fakeData, repoErr error) *ApplicationController {
kubeClient,
appclientset.NewSimpleClientset(data.apps...),
&mockRepoClientset,
&mockCommitClientset,
appstatecache.NewCache(
cacheutil.NewCache(cacheutil.NewInMemoryCache(1*time.Minute)),
1*time.Minute,
@@ -159,12 +155,10 @@ func newFakeController(data *fakeData, repoErr error) *ApplicationController {
time.Hour,
time.Second,
time.Minute,
nil,
time.Second*10,
common.DefaultPortArgoCDMetrics,
data.metricsCacheExpiration,
[]string{},
[]string{},
0,
true,
nil,
@@ -173,7 +167,6 @@ func newFakeController(data *fakeData, repoErr error) *ApplicationController {
false,
false,
normalizers.IgnoreNormalizerOpts{},
testEnableEventList,
)
db := &dbmocks.ArgoDB{}
db.On("GetApplicationControllerReplicas").Return(1)
@@ -565,7 +558,7 @@ func TestAutoSync(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}})
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -574,42 +567,6 @@ func TestAutoSync(t *testing.T) {
assert.False(t, app.Operation.Sync.Prune)
}
func TestMultiSourceSelfHeal(t *testing.T) {
// Simulate OutOfSync caused by object change in cluster
// So our Sync Revisions and SyncStatus Revisions should deep equal
t.Run("ClusterObjectChangeShouldNotTriggerAutoSync", func(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.SyncPolicy.Automated.SelfHeal = false
app.Status.Sync.Revisions = []string{"z", "x", "v"}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"z", "x", "v"},
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook-1", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
assert.Nil(t, app.Operation)
})
t.Run("NewRevisionChangeShouldTriggerAutoSync", func(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.SyncPolicy.Automated.SelfHeal = false
app.Status.Sync.Revisions = []string{"a", "b", "c"}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"z", "x", "v"},
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook-1", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
assert.NotNil(t, app.Operation)
})
}
func TestAutoSyncNotAllowEmpty(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated.Prune = true
@@ -618,7 +575,7 @@ func TestAutoSyncNotAllowEmpty(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{}, true)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{})
assert.NotNil(t, cond)
}
@@ -631,7 +588,7 @@ func TestAutoSyncAllowEmpty(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{}, true)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{})
assert.Nil(t, cond)
}
@@ -645,7 +602,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{}, true)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{})
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -660,7 +617,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: v1alpha1.SyncStatusCodeSynced,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{}, true)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{})
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -676,7 +633,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{}, true)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{})
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -693,7 +650,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{}, true)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{})
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -719,7 +676,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}})
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -735,7 +692,7 @@ func TestSkipAutoSync(t *testing.T) {
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{
{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync, RequiresPruning: true},
}, true)
})
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -771,7 +728,7 @@ func TestAutoSyncIndicateError(t *testing.T) {
Source: *app.Spec.Source.DeepCopy(),
},
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}})
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -814,7 +771,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}})
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -2182,78 +2139,87 @@ func TestAppStatusIsReplaced(t *testing.T) {
require.Nil(t, val)
}
func TestAlreadyAttemptSync(t *testing.T) {
    app := newFakeApp()

    t.Run("same manifest with sync result", func(t *testing.T) {
        attempted, _ := alreadyAttemptedSync(app, "sha", []string{}, false, false)
        assert.True(t, attempted)
    })

    t.Run("different manifest with sync result", func(t *testing.T) {
        attempted, _ := alreadyAttemptedSync(app, "sha", []string{}, false, true)
        assert.False(t, attempted)
    })
}

func assertDurationAround(t *testing.T, expected time.Duration, actual time.Duration) {
    delta := time.Second / 2
    assert.GreaterOrEqual(t, expected, actual-delta)
    assert.LessOrEqual(t, expected, actual+delta)
}

func TestSelfHealExponentialBackoff(t *testing.T) {
    ctrl := newFakeController(&fakeData{}, nil)
    ctrl.selfHealBackOff = &wait.Backoff{
        Factor:   3,
        Duration: 2 * time.Second,
        Cap:      5 * time.Minute,
    }
    app := &v1alpha1.Application{
        Status: v1alpha1.ApplicationStatus{
            OperationState: &v1alpha1.OperationState{
                Operation: v1alpha1.Operation{
                    Sync: &v1alpha1.SyncOperation{},
                },
            },
        },
    }

    testCases := []struct {
        attempts         int64
        finishedAt       *metav1.Time
        expectedDuration time.Duration
        shouldSelfHeal   bool
    }{{
        attempts:         0,
        finishedAt:       ptr.To(metav1.Now()),
        expectedDuration: 0,
        shouldSelfHeal:   true,
    }, {
        attempts:         1,
        finishedAt:       ptr.To(metav1.Now()),
        expectedDuration: 2 * time.Second,
        shouldSelfHeal:   false,
    }, {
        attempts:         2,
        finishedAt:       ptr.To(metav1.Now()),
        expectedDuration: 6 * time.Second,
        shouldSelfHeal:   false,
    }, {
        attempts:         3,
        finishedAt:       nil,
        expectedDuration: 18 * time.Second,
        shouldSelfHeal:   false,
    }}

    for i := range testCases {
        tc := testCases[i]
        t.Run(fmt.Sprintf("test case %d", i), func(t *testing.T) {
            app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount = tc.attempts
            app.Status.OperationState.FinishedAt = tc.finishedAt
            ok, duration := ctrl.shouldSelfHeal(app)
            require.Equal(t, ok, tc.shouldSelfHeal)
            assertDurationAround(t, tc.expectedDuration, duration)
        })
    }
}

func Test_appNeedsHydration(t *testing.T) {
    t.Parallel()

    now := time.Now()
    oneHourAgo := metav1.NewTime(now.Add(-1 * time.Hour))

    testCases := []struct {
        name           string
        app            *v1alpha1.Application
        timeout        time.Duration
        latestRevision string
        expected       string
    }{
        {
            name:     "source hydrator not configured",
            app:      &v1alpha1.Application{},
            expected: "source hydrator not configured",
        },
        {
            name:           "hydrate requested",
            app:            &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{v1alpha1.AnnotationKeyHydrate: "normal"}}},
            timeout:        1 * time.Hour,
            latestRevision: "abc123",
            expected:       "hydrate requested",
        },
        {
            name:           "no previous hydrate operation",
            app:            &v1alpha1.Application{},
            timeout:        1 * time.Hour,
            latestRevision: "abc123",
            expected:       "no previous hydrate operation",
        },
        {
            name: "spec.sourceHydrator differs",
            app: &v1alpha1.Application{
                Spec: v1alpha1.ApplicationSpec{
                    SourceHydrator: &v1alpha1.SourceHydrator{},
                },
                Status: v1alpha1.ApplicationStatus{SourceHydrator: v1alpha1.SourceHydratorStatus{CurrentOperation: &v1alpha1.HydrateOperation{
                    SourceHydrator: v1alpha1.SourceHydrator{DrySource: v1alpha1.DrySource{RepoURL: "something new"}},
                }}},
            },
            timeout:        1 * time.Hour,
            latestRevision: "abc123",
            expected:       "spec.sourceHydrator differs",
        },
        {
            name:           "dry SHA has changed",
            app:            &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{SourceHydrator: v1alpha1.SourceHydratorStatus{CurrentOperation: &v1alpha1.HydrateOperation{DrySHA: "xyz123"}}}},
            timeout:        1 * time.Hour,
            latestRevision: "abc123",
            expected:       "revision differs",
        },
        {
            name:           "hydration failed more than two minutes ago",
            app:            &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{SourceHydrator: v1alpha1.SourceHydratorStatus{CurrentOperation: &v1alpha1.HydrateOperation{DrySHA: "abc123", FinishedAt: &oneHourAgo, Phase: v1alpha1.HydrateOperationPhaseFailed}}}},
            timeout:        1 * time.Hour,
            latestRevision: "abc123",
            expected:       "previous hydrate operation failed more than 2 minutes ago",
        },
        {
            name:           "timeout reached",
            app:            &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{SourceHydrator: v1alpha1.SourceHydratorStatus{CurrentOperation: &v1alpha1.HydrateOperation{StartedAt: oneHourAgo}}}},
            timeout:        1 * time.Minute,
            latestRevision: "abc123",
            expected:       "hydration expired",
        },
        {
            name:           "hydrate not needed",
            app:            &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{SourceHydrator: v1alpha1.SourceHydratorStatus{CurrentOperation: &v1alpha1.HydrateOperation{DrySHA: "abc123", FinishedAt: &oneHourAgo, Phase: v1alpha1.HydrateOperationPhaseFailed}}}},
            timeout:        1 * time.Hour,
            latestRevision: "abc123",
            expected:       "",
        },
    }

    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            t.Parallel()
            result := appNeedsHydration(tc.app, tc.timeout, tc.latestRevision)
            assert.Equal(t, tc.expected, result)
        })
    }
}
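Both the removed and the added tests above are table-driven; the new one also runs its subtests in parallel. Here is a tiny self-contained sketch of that pattern, with placeholder names. Note the re-capture of the loop variable, which parallel subtests need on Go versions before 1.22 (since 1.22, loop variables are per-iteration and the re-capture is redundant but harmless).

package example

import "testing"

func double(x int) int { return x * 2 }

func TestDouble(t *testing.T) {
    t.Parallel()
    testCases := []struct {
        name string
        in   int
        want int
    }{
        {name: "zero", in: 0, want: 0},
        {name: "positive", in: 3, want: 6},
    }
    for _, tc := range testCases {
        tc := tc // re-capture so each parallel subtest sees its own case (pre-Go 1.22)
        t.Run(tc.name, func(t *testing.T) {
            t.Parallel()
            if got := double(tc.in); got != tc.want {
                t.Errorf("double(%d) = %d, want %d", tc.in, got, tc.want)
            }
        })
    }
}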

View File

@@ -197,7 +197,6 @@ type cacheSettings struct {
clusterSettings clustercache.Settings
appInstanceLabelKey string
trackingMethod appv1.TrackingMethod
installationID string
// resourceOverrides provides a list of ignored differences to ignore watched resource updates
resourceOverrides map[string]appv1.ResourceOverride
@@ -226,10 +225,6 @@ func (c *liveStateCache) loadCacheSettings() (*cacheSettings, error) {
if err != nil {
return nil, err
}
installationID, err := c.settingsMgr.GetInstallationID()
if err != nil {
return nil, err
}
resourceUpdatesOverrides, err := c.settingsMgr.GetIgnoreResourceUpdatesOverrides()
if err != nil {
return nil, err
@@ -251,7 +246,7 @@ func (c *liveStateCache) loadCacheSettings() (*cacheSettings, error) {
ResourcesFilter: resourcesFilter,
}
return &cacheSettings{clusterSettings, appInstanceLabelKey, argo.GetTrackingMethod(c.settingsMgr), installationID, resourceUpdatesOverrides, ignoreResourceUpdatesEnabled}, nil
return &cacheSettings{clusterSettings, appInstanceLabelKey, argo.GetTrackingMethod(c.settingsMgr), resourceUpdatesOverrides, ignoreResourceUpdatesEnabled}, nil
}
func asResourceNode(r *clustercache.Resource) appv1.ResourceNode {
@@ -528,7 +523,7 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
res.Health, _ = health.GetResourceHealth(un, cacheSettings.clusterSettings.ResourceHealthOverride)
appName := c.resourceTracking.GetAppName(un, cacheSettings.appInstanceLabelKey, cacheSettings.trackingMethod, cacheSettings.installationID)
appName := c.resourceTracking.GetAppName(un, cacheSettings.appInstanceLabelKey, cacheSettings.trackingMethod)
if isRoot && appName != "" {
res.AppName = appName
}
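The change above drops the installationID argument from GetAppName, which resolves the owning Application for a watched resource. As a rough illustration of the label-based flavor of that tracking (the helper is ours; "app.kubernetes.io/instance" is Argo CD's default app-instance label key):

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// appNameFromLabel sketches label-based resource tracking: read the configured
// app-instance label off a live object and treat its value as the owning
// Application's name. Real tracking also supports annotation-based methods.
func appNameFromLabel(un *unstructured.Unstructured, labelKey string) string {
    labels := un.GetLabels()
    if labels == nil {
        return ""
    }
    return labels[labelKey]
}

func main() {
    un := &unstructured.Unstructured{Object: map[string]interface{}{
        "apiVersion": "apps/v1",
        "kind":       "Deployment",
        "metadata": map[string]interface{}{
            "name":   "guestbook",
            "labels": map[string]interface{}{"app.kubernetes.io/instance": "my-app"},
        },
    }}
    fmt.Println(appNameFromLabel(un, "app.kubernetes.io/instance")) // my-app
}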

View File

@@ -278,32 +278,6 @@ func populateIstioVirtualServiceInfo(un *unstructured.Unstructured, res *Resourc
res.NetworkingInfo = &v1alpha1.ResourceNetworkingInfo{TargetRefs: targets, ExternalURLs: urls}
}
func isPodInitializedConditionTrue(status *v1.PodStatus) bool {
for _, condition := range status.Conditions {
if condition.Type != v1.PodInitialized {
continue
}
return condition.Status == v1.ConditionTrue
}
return false
}
func isRestartableInitContainer(initContainer *v1.Container) bool {
if initContainer == nil {
return false
}
if initContainer.RestartPolicy == nil {
return false
}
return *initContainer.RestartPolicy == v1.ContainerRestartPolicyAlways
}
func isPodPhaseTerminal(phase v1.PodPhase) bool {
return phase == v1.PodFailed || phase == v1.PodSucceeded
}
func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
pod := v1.Pod{}
err := runtime.DefaultUnstructuredConverter.FromUnstructured(un.Object, &pod)
@@ -314,8 +288,7 @@ func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
totalContainers := len(pod.Spec.Containers)
readyContainers := 0
podPhase := pod.Status.Phase
reason := string(podPhase)
reason := string(pod.Status.Phase)
if pod.Status.Reason != "" {
reason = pod.Status.Reason
}
@@ -333,21 +306,6 @@ func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
res.Images = append(res.Images, image)
}
// If the Pod carries {type:PodScheduled, reason:SchedulingGated}, set reason to 'SchedulingGated'.
for _, condition := range pod.Status.Conditions {
if condition.Type == v1.PodScheduled && condition.Reason == v1.PodReasonSchedulingGated {
reason = v1.PodReasonSchedulingGated
}
}
initContainers := make(map[string]*v1.Container)
for i := range pod.Spec.InitContainers {
initContainers[pod.Spec.InitContainers[i].Name] = &pod.Spec.InitContainers[i]
if isRestartableInitContainer(&pod.Spec.InitContainers[i]) {
totalContainers++
}
}
initializing := false
for i := range pod.Status.InitContainerStatuses {
container := pod.Status.InitContainerStatuses[i]
@@ -355,12 +313,6 @@ func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
switch {
case container.State.Terminated != nil && container.State.Terminated.ExitCode == 0:
continue
case isRestartableInitContainer(initContainers[container.Name]) &&
container.Started != nil && *container.Started:
if container.Ready {
readyContainers++
}
continue
case container.State.Terminated != nil:
// initialization is failed
if len(container.State.Terminated.Reason) == 0 {
@@ -382,7 +334,8 @@ func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
}
break
}
if !initializing || isPodInitializedConditionTrue(&pod.Status) {
if !initializing {
restarts = 0
hasRunning := false
for i := len(pod.Status.ContainerStatuses) - 1; i >= 0; i-- {
container := pod.Status.ContainerStatuses[i]
@@ -417,9 +370,7 @@ func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
// and https://github.com/kubernetes/kubernetes/issues/90358#issuecomment-617859364
if pod.DeletionTimestamp != nil && pod.Status.Reason == "NodeLost" {
reason = "Unknown"
// If the pod is being deleted and the pod phase is not succeeded or failed, set the reason to "Terminating".
// See https://github.com/kubernetes/kubectl/issues/1595#issuecomment-2080001023
} else if pod.DeletionTimestamp != nil && !isPodPhaseTerminal(podPhase) {
} else if pod.DeletionTimestamp != nil {
reason = "Terminating"
}
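The hunks above remove the native-sidecar handling from populatePodInfo. For reference, the deleted helper's check in isolation: an init container counts as a restartable "sidecar" when its restartPolicy is Always (a Kubernetes 1.28+ feature), and such containers must then be included in the ready/total container counts. The sketch below replays the removed check with a small standalone main.

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

// isRestartableInitContainer mirrors the helper removed in the hunk above: a
// nil container or nil restartPolicy is not a sidecar; only an explicit
// restartPolicy of Always makes an init container restartable.
func isRestartableInitContainer(c *v1.Container) bool {
    return c != nil && c.RestartPolicy != nil && *c.RestartPolicy == v1.ContainerRestartPolicyAlways
}

func main() {
    always := v1.ContainerRestartPolicyAlways
    sidecar := v1.Container{Name: "logshipper", RestartPolicy: &always}
    plainInit := v1.Container{Name: "migrate"}
    fmt.Println(isRestartableInitContainer(&sidecar))   // true
    fmt.Println(isRestartableInitContainer(&plainInit)) // false
}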

View File

@@ -285,552 +285,6 @@ func TestGetPodInfo(t *testing.T) {
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{Labels: map[string]string{"app": "guestbook"}}, info.NetworkingInfo)
}
func TestGetPodWithInitialContainerInfo(t *testing.T) {
pod := strToUnstructured(`
apiVersion: "v1"
kind: "Pod"
metadata:
labels:
app: "app-with-initial-container"
name: "app-with-initial-container-5f46976fdb-vd6rv"
namespace: "default"
ownerReferences:
- apiVersion: "apps/v1"
kind: "ReplicaSet"
name: "app-with-initial-container-5f46976fdb"
spec:
containers:
- image: "alpine:latest"
imagePullPolicy: "Always"
name: "app-with-initial-container"
initContainers:
- image: "alpine:latest"
imagePullPolicy: "Always"
name: "app-with-initial-container-logshipper"
nodeName: "minikube"
status:
containerStatuses:
- image: "alpine:latest"
name: "app-with-initial-container"
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2024-10-08T08:44:25Z"
initContainerStatuses:
- image: "alpine:latest"
name: "app-with-initial-container-logshipper"
ready: true
restartCount: 0
started: false
state:
terminated:
exitCode: 0
reason: "Completed"
phase: "Running"
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Running"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "1/1"},
}, info.Info)
}
func TestGetPodInfoWithSidecar(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
labels:
app: app-with-sidecar
name: app-with-sidecar-6664cc788c-lqlrp
namespace: default
ownerReferences:
- apiVersion: apps/v1
kind: ReplicaSet
name: app-with-sidecar-6664cc788c
spec:
containers:
- image: 'docker.m.daocloud.io/library/alpine:latest'
imagePullPolicy: Always
name: app-with-sidecar
initContainers:
- image: 'docker.m.daocloud.io/library/alpine:latest'
imagePullPolicy: Always
name: logshipper
restartPolicy: Always
nodeName: minikube
status:
containerStatuses:
- image: 'docker.m.daocloud.io/library/alpine:latest'
name: app-with-sidecar
ready: true
restartCount: 0
started: true
state:
running:
startedAt: '2024-10-08T08:39:43Z'
initContainerStatuses:
- image: 'docker.m.daocloud.io/library/alpine:latest'
name: logshipper
ready: true
restartCount: 0
started: true
state:
running:
startedAt: '2024-10-08T08:39:40Z'
phase: Running
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Running"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "2/2"},
}, info.Info)
}
func TestGetPodInfoWithInitialContainer(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
generateName: myapp-long-exist-56b7d8794d-
labels:
app: myapp-long-exist
name: myapp-long-exist-56b7d8794d-pbgrd
namespace: linghao
ownerReferences:
- apiVersion: apps/v1
kind: ReplicaSet
name: myapp-long-exist-56b7d8794d
spec:
containers:
- image: alpine:latest
imagePullPolicy: Always
name: myapp-long-exist
initContainers:
- image: alpine:latest
imagePullPolicy: Always
name: myapp-long-exist-logshipper
nodeName: minikube
status:
containerStatuses:
- image: alpine:latest
name: myapp-long-exist
ready: false
restartCount: 0
started: false
state:
waiting:
reason: PodInitializing
initContainerStatuses:
- image: alpine:latest
name: myapp-long-exist-logshipper
ready: false
restartCount: 0
started: true
state:
running:
startedAt: '2024-10-09T08:03:45Z'
phase: Pending
startTime: '2024-10-09T08:02:39Z'
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Init:0/1"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
}
// Test pod has 2 restartable init containers, the first one running but not started.
func TestGetPodInfoWithRestartableInitContainer(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test1
spec:
initContainers:
- name: restartable-init-1
restartPolicy: Always
- name: restartable-init-2
restartPolicy: Always
containers:
- name: container
nodeName: minikube
status:
phase: Pending
initContainerStatuses:
- name: restartable-init-1
ready: false
restartCount: 3
state:
running: {}
started: false
lastTerminationState:
terminated:
finishedAt: "2023-10-01T00:00:00Z" # Replace with actual time
- name: restartable-init-2
ready: false
state:
waiting: {}
started: false
containerStatuses:
- ready: false
restartCount: 0
state:
waiting: {}
conditions:
- type: ContainersReady
status: "False"
- type: Initialized
status: "False"
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Init:0/2"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/3"},
{Name: "Restart Count", Value: "3"},
}, info.Info)
}
// Test pod has 2 restartable init containers, the first one started and the second one running but not started.
func TestGetPodInfoWithPartiallyStartedInitContainers(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test1
spec:
initContainers:
- name: restartable-init-1
restartPolicy: Always
- name: restartable-init-2
restartPolicy: Always
containers:
- name: container
nodeName: minikube
status:
phase: Pending
initContainerStatuses:
- name: restartable-init-1
ready: false
restartCount: 3
state:
running: {}
started: true
lastTerminationState:
terminated:
finishedAt: "2023-10-01T00:00:00Z" # Replace with actual time
- name: restartable-init-2
ready: false
state:
running: {}
started: false
containerStatuses:
- ready: false
restartCount: 0
state:
waiting: {}
conditions:
- type: ContainersReady
status: "False"
- type: Initialized
status: "False"
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Init:1/2"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/3"},
{Name: "Restart Count", Value: "3"},
}, info.Info)
}
// Test pod has 2 restartable init containers started and 1 container running
func TestGetPodInfoWithStartedInitContainers(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test2
spec:
initContainers:
- name: restartable-init-1
restartPolicy: Always
- name: restartable-init-2
restartPolicy: Always
containers:
- name: container
nodeName: minikube
status:
phase: Running
initContainerStatuses:
- name: restartable-init-1
ready: false
restartCount: 3
state:
running: {}
started: true
lastTerminationState:
terminated:
finishedAt: "2023-10-01T00:00:00Z" # Replace with actual time
- name: restartable-init-2
ready: false
state:
running: {}
started: true
containerStatuses:
- ready: true
restartCount: 4
state:
running: {}
lastTerminationState:
terminated:
finishedAt: "2023-10-01T00:00:00Z" # Replace with actual time
conditions:
- type: ContainersReady
status: "False"
- type: Initialized
status: "True"
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Running"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "1/3"},
{Name: "Restart Count", Value: "7"},
}, info.Info)
}
// Test pod has 1 init container restarting and 1 container not running
func TestGetPodInfoWithNormalInitContainer(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test7
spec:
initContainers:
- name: init-container
containers:
- name: main-container
nodeName: minikube
status:
phase: podPhase
initContainerStatuses:
- ready: false
restartCount: 3
state:
running: {}
lastTerminationState:
terminated:
finishedAt: "2023-10-01T00:00:00Z" # Replace with the actual time
containerStatuses:
- ready: false
restartCount: 0
state:
waiting: {}
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Init:0/1"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
{Name: "Restart Count", Value: "3"},
}, info.Info)
}
// Test pod condition succeed
func TestPodConditionSucceeded(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test8
spec:
nodeName: minikube
containers:
- name: container
status:
phase: Succeeded
containerStatuses:
- ready: false
restartCount: 0
state:
terminated:
reason: Completed
exitCode: 0
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Completed"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
}
// Test pod condition failed
func TestPodConditionFailed(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test9
spec:
nodeName: minikube
containers:
- name: container
status:
phase: Failed
containerStatuses:
- ready: false
restartCount: 0
state:
terminated:
reason: Error
exitCode: 1
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Error"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
}
// Test pod condition succeed with deletion
func TestPodConditionSucceededWithDeletion(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test10
deletionTimestamp: "2023-10-01T00:00:00Z"
spec:
nodeName: minikube
containers:
- name: container
status:
phase: Succeeded
containerStatuses:
- ready: false
restartCount: 0
state:
terminated:
reason: Completed
exitCode: 0
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Completed"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
}
// Test pod condition running with deletion
func TestPodConditionRunningWithDeletion(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test11
deletionTimestamp: "2023-10-01T00:00:00Z"
spec:
nodeName: minikube
containers:
- name: container
status:
phase: Running
containerStatuses:
- ready: false
restartCount: 0
state:
running: {}
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Terminating"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
}
// Test pod condition pending with deletion
func TestPodConditionPendingWithDeletion(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test12
deletionTimestamp: "2023-10-01T00:00:00Z"
spec:
nodeName: minikube
containers:
- name: container
status:
phase: Pending
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Terminating"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
}
// Test PodScheduled condition with reason SchedulingGated
func TestPodScheduledWithSchedulingGated(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test13
spec:
nodeName: minikube
containers:
- name: container1
- name: container2
status:
phase: podPhase
conditions:
- type: PodScheduled
status: "False"
reason: SchedulingGated
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "SchedulingGated"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/2"},
}, info.Info)
}
func TestGetNodeInfo(t *testing.T) {
node := strToUnstructured(`
apiVersion: v1

View File

@@ -51,7 +51,7 @@ func (ctrl *ApplicationController) executePostDeleteHooks(app *v1alpha1.Applicat
revisions = append(revisions, src.TargetRevision)
}
targets, _, _, err := ctrl.appStateManager.GetRepoObjs(app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, false)
targets, _, err := ctrl.appStateManager.GetRepoObjs(app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, false, true)
if err != nil {
return false, err
}

Some files were not shown because too many files have changed in this diff.