Compare commits

..

131 Commits

Author SHA1 Message Date
nueavv
74b8bfac72 fix: Ensure RevisionMetadataPanel refreshes on revision change (#19580)
Signed-off-by: nueavv <nuguni@kakao.com>
2024-08-20 11:50:12 -04:00
Kunho Lee
7a1dfc2307 feat(hydrator): handle sourceHydrator fields from webhook (#19397)
* feat(hydrator): handle sourceHydrator fields from webhook

Signed-off-by: daengdaengLee <gunho1020@gmail.com>

* fix: use GetDrySource instead of GetHydratorDrySource

Signed-off-by: daengdaengLee <gunho1020@gmail.com>

* test: test if the hydration is properly triggered in the webhook when SourceHydrator is configured

Signed-off-by: daengdaengLee <gunho1020@gmail.com>

---------

Signed-off-by: daengdaengLee <gunho1020@gmail.com>
2024-08-19 10:13:17 -04:00
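The dry-source distinction in the change above is easier to see in code. A minimal sketch, using simplified hypothetical types (Argo CD's real Application spec, SourceHydrator, and GetDrySource differ), of why a webhook event should be matched against the dry source when a hydrator is configured:

```go
// Sketch only: hypothetical, simplified types for illustration.
package main

import "fmt"

type Source struct{ RepoURL, TargetRevision string }

type App struct {
	Source    Source  // plain spec.source
	DrySource *Source // set when a sourceHydrator is configured
}

// effectiveSource returns the source a webhook push event should be matched
// against: the dry (pre-hydration) source if one is configured, otherwise
// the plain source.
func (a App) effectiveSource() Source {
	if a.DrySource != nil {
		return *a.DrySource
	}
	return a.Source
}

func main() {
	app := App{
		Source:    Source{RepoURL: "https://example.com/deploy.git", TargetRevision: "env/prod"},
		DrySource: &Source{RepoURL: "https://example.com/app.git", TargetRevision: "main"},
	}
	// A push to app.git on main should trigger hydration for this app.
	fmt.Println(app.effectiveSource().RepoURL)
}
```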
Michael Crenshaw
89c22c2f95 Merge branch 'commit-server' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-14 15:10:23 -04:00
Michael Crenshaw
cb2feec273 feat(hydrator): add commit-server component
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

go mod tidy

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

one test file for both implementations

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

simplify

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix test for linux

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix git client mock

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix git client mock

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

address comments

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

unit tests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

lint

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-14 12:30:45 -04:00
Michael Crenshaw
73371f981a hacky but functional creds UI support
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-14 12:29:56 -04:00
Michael Crenshaw
79829eca98 feat(hydrator): add sourceHydrator types
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-12 17:08:02 -04:00
Michael Crenshaw
77029cbdc9 partial creds add UI
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-12 16:02:22 -04:00
Michael Crenshaw
b8601fe940 more logs
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-12 13:24:50 -04:00
Michael Crenshaw
dc675de3dd test error
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-12 12:44:31 -04:00
Michael Crenshaw
ac27d50d31 simplify
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-12 12:38:38 -04:00
Kevin
6045ea243a feat: Refactor writeManifests function for simplification and better error handling (#19447)
Signed-off-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
2024-08-12 12:23:51 -04:00
Michael Crenshaw
0789d44429 simplify, more tests
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-11 20:56:41 -04:00
Michael Crenshaw
e41b4851b0 lint
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-11 17:07:23 -04:00
Michael Crenshaw
01cd32916c ui improvements
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-11 16:57:54 -04:00
Michael Crenshaw
1e233c1949 fix enum, ui fixes
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-11 16:36:51 -04:00
Michael Crenshaw
4911a41e00 tidy
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-11 16:17:24 -04:00
Michael Crenshaw
f4a368cd44 ui improvements; new queue for app hydration; lots of bug fixes
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-11 16:09:04 -04:00
Michael Crenshaw
257c419550 revert unnecessary changes
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-10 20:44:51 -04:00
Michael Crenshaw
f541a8e9d5 remove previewer code; will wait for another PR
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-10 20:23:10 -04:00
Michael Crenshaw
4507984205 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-10 20:19:58 -04:00
Michael Crenshaw
95021e1984 more logging, don't always re-hydrate
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-10 19:01:43 -04:00
Michael Crenshaw
053d0f98f3 test adding a second app
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-10 17:03:53 -04:00
Michael Crenshaw
38ddab3c78 handle changes to sourceHydrator field
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-10 16:28:09 -04:00
Kim Minju
91e63d75f8 test: add Unit Tests for SecureMkdirAll Function (#19471)
* Add test for SecureMkdirAll with existing directory

- Verify that SecureMkdirAll correctly handles the case where the directory already exists.
- Ensure that the same path is returned when creating an existing directory.

Signed-off-by: miiiinju1 <gms07073@ynu.ac.kr>

* Add test for SecureMkdirAll with file path

- Ensure SecureMkdirAll returns an error when a file path is provided instead of a directory.
- Check that the error message indicates a failure to create a directory.

Signed-off-by: miiiinju1 <gms07073@ynu.ac.kr>

* Add test for SecureMkdirAll with dot-dot path

- Test SecureMkdirAll with a path containing ".." to ensure it doesn't traverse outside the root directory.
- Verify that the resulting path is as expected and does not use relative paths that escape the root directory.

Signed-off-by: miiiinju1 <gms07073@ynu.ac.kr>

---------

Signed-off-by: miiiinju1 <gms07073@ynu.ac.kr>
2024-08-10 13:19:32 -04:00
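The three cases described above translate directly into small Go tests. A minimal sketch, assuming the helper lives in a `files` package with a signature like `SecureMkdirAll(root, unsafePath string, perm os.FileMode) (string, error)` and securejoin-style sanitization of ".." segments (both assumptions, not the repository's actual API):

```go
package files

import (
	"os"
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestSecureMkdirAll_ExistingDir(t *testing.T) {
	root := t.TempDir()
	first, err := SecureMkdirAll(root, "sub/dir", 0o755)
	require.NoError(t, err)
	// Creating the same directory again should succeed and return the same path.
	second, err := SecureMkdirAll(root, "sub/dir", 0o755)
	require.NoError(t, err)
	require.Equal(t, first, second)
}

func TestSecureMkdirAll_FilePath(t *testing.T) {
	root := t.TempDir()
	// A regular file already occupies the target path, so directory creation must fail.
	require.NoError(t, os.WriteFile(filepath.Join(root, "occupied"), []byte("x"), 0o600))
	_, err := SecureMkdirAll(root, "occupied", 0o755)
	require.Error(t, err)
}

func TestSecureMkdirAll_DotDotPath(t *testing.T) {
	root := t.TempDir()
	// ".." segments are sanitized rather than allowed to escape the root.
	out, err := SecureMkdirAll(root, "../outside", 0o755)
	require.NoError(t, err)
	require.Equal(t, filepath.Join(root, "outside"), out)
}
```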
Michael Crenshaw
9ea972556b test failed sync with hydrateTo
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-09 15:22:52 -04:00
Michael Crenshaw
e227d71abf better filenames
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-09 14:53:10 -04:00
Michael Crenshaw
d82f140300 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-09 14:06:51 -04:00
Michael Crenshaw
b0693a642c Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-09 10:18:50 -04:00
Robin Lieb
8727b17067 feat(hydrator): add metrics on getting git creds user info (#19427)
* feat(hydrator): add metrics on getting git creds user info

Signed-off-by: Robin Lieb <robin.j.lieb@gmail.com>

* feat: move credential helper to private function

---------

Signed-off-by: Robin Lieb <robin.j.lieb@gmail.com>
2024-08-08 16:13:05 -04:00
Michael Crenshaw
06d499deb0 remove unused fields
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-08 12:02:25 -04:00
Michael Crenshaw
17cd8b756e Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 16:41:48 -04:00
Michael Crenshaw
768bc92d26 fix import
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 16:39:37 -04:00
Michael Crenshaw
8802205257 remove unused service
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 16:39:06 -04:00
Michael Crenshaw
6b8a00342d remove duplicate implementation of secure dir creation logic
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 14:54:28 -04:00
Michael Crenshaw
012887a246 fix test
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 13:24:14 -04:00
mirageoasis
d83ceab37c test: add unit test for util.go makeSecureTempDir function (#19369)
* test: added test for util.go

Signed-off-by: mirageoasis <kimhw0820@naver.com>

* refactor: fixed to require statement

Signed-off-by: mirageoasis <kimhw0820@naver.com>

* style tweaks

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: mirageoasis <kimhw0820@naver.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 12:17:21 -04:00
Michael Crenshaw
fcd240b704 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-06 11:58:43 -04:00
Michael Crenshaw
fa2338016b fix build error
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-04 22:00:26 -04:00
Michael Crenshaw
2819ef942f fix build error
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-04 21:50:36 -04:00
Kunho Lee
9d29353961 test: add unit test for util.git.Client.CommitAndPush (#19354)
Signed-off-by: daengdaengLee <gunho1020@gmail.com>
2024-08-04 18:00:41 -04:00
thisishwan2
c7cf4bb9e2 feat: use MkdirAll by OS (#19301) (#19323)
* feat: add mkdir provider interface and implementation for OS

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* fix: Using MkdirAll Functions by OS

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* fix: Modifying implementations for linux and non-linux

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* remove: unused interface

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* fix: remove struct & use SecureMkdirAll func

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* fix: use SecureMkdirAll func

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* feat: add unit test SecureMkdirAll func

Signed-off-by: thisishwan2 <feel000617@gmail.com>

* tweaks

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: thisishwan2 <feel000617@gmail.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-04 17:57:37 -04:00
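The per-OS split described in the earlier commits of this change is the standard Go build-constraint pattern. A hedged sketch (file names, the `files` package, and the Linux behavior are illustrative, not the repository's actual implementation):

```go
// file: mkdir_other.go — portable fallback for non-Linux platforms.
//go:build !linux

package files

import "os"

func secureMkdirAll(path string, perm os.FileMode) error {
	return os.MkdirAll(path, perm)
}
```

```go
// file: mkdir_linux.go — Linux-only build; this is where a hardened variant
// (e.g. one guarding against symlink traversal) would live. Shown as a plain
// os.MkdirAll call purely to illustrate the per-OS split.
//go:build linux

package files

import "os"

func secureMkdirAll(path string, perm os.FileMode) error {
	return os.MkdirAll(path, perm)
}
```

As the later commits in the same change note, the split was ultimately dropped in favor of a single SecureMkdirAll helper.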
Michael Crenshaw
14b19dae7a rename endpoints to allow more clarity when more are added
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-01 15:29:41 -04:00
Michael Crenshaw
fbcdc65308 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-01 13:25:15 -04:00
Michael Crenshaw
7cc04b830e Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-31 13:53:55 -04:00
Kevin
1bb5763bd7 test: Add unit tests for WriteForPaths function (#19311)
* test: Add unit tests for WriteForPaths function

Signed-off-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>

* test: refactor unit tests for WriteForPaths function

Signed-off-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>

---------

Signed-off-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
2024-07-31 13:40:14 -04:00
Michael Crenshaw
2d8824482c Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 17:10:45 -04:00
Michael Crenshaw
c49ffae139 housekeeping
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 14:53:12 -04:00
Kevin
2315f8ae44 test: Add unit tests for HydratorHelper methods (#19308)
* test: Add unit tests for HydratorHelper methods

Signed-off-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>

* lint

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 14:15:51 -04:00
Michael Crenshaw
9221cdd780 simplify commit API, more docstrings
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 14:07:57 -04:00
Michael Crenshaw
652541c065 lint, refactor
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 12:56:28 -04:00
Michael Crenshaw
e781a08d86 graceful shutdown of commit server
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 11:19:36 -04:00
Michael Crenshaw
519fadc538 make directory for commit server output
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 10:39:24 -04:00
Michael Crenshaw
62ff4093e8 record e2e coverage
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-30 09:35:40 -04:00
Michael Crenshaw
31b46348d9 fix test
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-29 21:05:04 -04:00
Michael Crenshaw
e9385c8949 codegen
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-29 21:04:59 -04:00
Michael Crenshaw
d516fd82e1 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-29 20:46:45 -04:00
Michael Crenshaw
40b7280953 lint, fix old function calls
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-29 20:45:33 -04:00
Michael Crenshaw
3b7bbcefdf wait feature for CLI, more debug lines in commit server, e2e test works
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-29 20:42:03 -04:00
Kunho Lee
f7fece9ad2 test: add unit test for util.git.Client.RemoveContents (#19288)
Signed-off-by: daengdaengLee <gunho1020@gmail.com>
2024-07-29 10:09:05 -04:00
Michael Crenshaw
4d0969aa28 lint and codegen
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-27 18:54:15 -04:00
Michael Crenshaw
64c1d4d4af Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-27 18:21:59 -04:00
Kunho Lee
9554f57690 test: add unit test for util.git.Client.CheckoutOrNew (#19274)
Signed-off-by: daengdaengLee <gunho1020@gmail.com>
2024-07-27 18:20:27 -04:00
Michael Crenshaw
73b953ddb9 add app create CLI support and stub out an e2e test
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-27 18:15:56 -04:00
Michael Crenshaw
4a39e608b1 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-26 16:51:08 -04:00
Michael Crenshaw
8e6fd8ff46 more cleanup
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-26 16:50:25 -04:00
Michael Crenshaw
51bd8b11c3 clean up commit.go
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-26 15:10:52 -04:00
Michael Crenshaw
0c8242f7e4 more metrics, plus docs
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-25 17:04:30 -04:00
Michael Crenshaw
d69a78f63f add more git metrics
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-25 17:00:00 -04:00
Michael Crenshaw
11daa4e153 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-25 16:35:42 -04:00
Michael Crenshaw
6acd67e26b add metrics server
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-24 13:14:30 -04:00
Kunho Lee
449fa4e29f test: add unit test for util.git.Client.CheckoutOrOrphan (#19191)
* test: add unit test for util.git.Client.CheckoutOrOrphan

Signed-off-by: daengdaengLee <gunho1020@gmail.com>

* test,fix: set author before doing git actions

Signed-off-by: daengdaengLee <gunho1020@gmail.com>

---------

Signed-off-by: daengdaengLee <gunho1020@gmail.com>
2024-07-24 11:38:22 -04:00
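The "set author before doing git actions" fix reflects a general git requirement: commits cannot be created until user.name and user.email are configured, so test repositories must set them up front. A minimal sketch of that kind of setup (helper name and package are illustrative):

```go
package git_test

import (
	"os/exec"
	"testing"

	"github.com/stretchr/testify/require"
)

// initTestRepo creates an empty repository and configures a commit author,
// which git requires before any commit can be created in the test.
func initTestRepo(t *testing.T) string {
	t.Helper()
	dir := t.TempDir()
	for _, args := range [][]string{
		{"init", "."},
		{"config", "user.name", "test"},
		{"config", "user.email", "test@example.com"},
	} {
		cmd := exec.Command("git", args...)
		cmd.Dir = dir
		out, err := cmd.CombinedOutput()
		require.NoError(t, err, string(out))
	}
	return dir
}
```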
Michael Crenshaw
e79aa171fa Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-23 12:18:24 -04:00
Michael Crenshaw
40028b4078 fix nil reference error
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-23 11:39:40 -04:00
Michael Crenshaw
2cdb25ad4c support kustomize commands
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-23 11:22:24 -04:00
Michael Crenshaw
6f3ddda2d7 lint
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-23 10:29:01 -04:00
Michael Crenshaw
4b0eeb6a18 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-23 10:26:33 -04:00
Michael Crenshaw
cf90d8ce50 clean up generated swagger doc
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 16:12:06 -04:00
Michael Crenshaw
a9ba5cd3a1 better package name
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 16:00:16 -04:00
Michael Crenshaw
3910aa088a better package name
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 15:59:38 -04:00
Michael Crenshaw
f8f7baf467 fix incorrect socket path
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 15:58:39 -04:00
Michael Crenshaw
28bae8fe4d centralize user info code
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 15:57:13 -04:00
Michael Crenshaw
4eaeee320d remove accidental file
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 14:40:18 -04:00
Michael Crenshaw
a2501be80a Merge branch 'hydrator' of https://github.com/argoproj/argo-cd into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 14:34:57 -04:00
Michael Crenshaw
1edba58774 use mockery.yaml
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 14:33:37 -04:00
Kunho Lee
c3efa44a58 test: add unit test for util.git.Client.SetAuthor (#19128)
Signed-off-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 13:59:39 -04:00
Michael Crenshaw
e8c505fe3c update mocks, basic path redaction, fix tests
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-22 12:57:34 -04:00
Michael Crenshaw
6b9c743b42 added basic dupe destination detection; still buggy
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-19 17:24:31 -04:00
Michael Crenshaw
282702e571 use different socket paths to avoid collisions during local development
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-19 17:02:08 -04:00
Michael Crenshaw
51122b913d simplify hydrate queue processor - only process for the requested source branch
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-19 16:35:22 -04:00
Michael Crenshaw
8c1983de60 fix field names, readability
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-19 16:15:59 -04:00
Michael Crenshaw
a4422a4745 Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-19 13:49:10 -04:00
Michael Crenshaw
bbe8b0cc2e Merge remote-tracking branch 'origin/master' into hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-15 16:10:44 -04:00
Michael Crenshaw
34a00bdc10 don't expose unnecessary public function
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-15 15:54:58 -04:00
Michael Crenshaw
c5961c9f5c lint
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-08 15:58:22 -04:00
Michael Crenshaw
8046ec330f fix retry, use main git client
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-08 15:47:21 -04:00
Michael Crenshaw
36b74225fc Merge remote-tracking branch 'origin/master' into previewer
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-05 15:53:46 -04:00
Michael Crenshaw
1dfd0eb9ff fix UI error
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-14 10:10:31 -04:00
Michael Crenshaw
d27d8c0bae lint
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-11 22:24:45 -04:00
Michael Crenshaw
bbd83207ca lint
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-11 22:14:24 -04:00
Michael Crenshaw
f4a12e8cdc lint ui
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-11 22:07:20 -04:00
Michael Crenshaw
a677d43c1b Merge remote-tracking branch 'origin/master' into previewer
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-11 22:02:53 -04:00
Michael Crenshaw
02b7258c68 more docs, some abstraction of functions
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-11 21:29:45 -04:00
Michael Crenshaw
e0fc424011 commit service as standalone deployment
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-10 21:43:05 -04:00
Michael Crenshaw
26c4235361 remove temp dir
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-10 16:27:44 -04:00
Michael Crenshaw
e784875825 Merge remote-tracking branch 'origin/master' into previewer
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-06 13:18:08 -04:00
Michael Crenshaw
ffcbb8068a Merge remote-tracking branch 'origin/master' into previewer
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-06 10:28:18 -04:00
Michael Crenshaw
d28b53f43d fix diff order and update bot name
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-03 14:29:10 -04:00
Michael Crenshaw
bc11a49e25 add merge action
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-29 13:23:19 -04:00
Michael Crenshaw
e598333528 use shared auth, better error handling
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-29 13:23:02 -04:00
Michael Crenshaw
f1c58bb7db finish previewer PoC
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-28 16:14:19 -04:00
Michael Crenshaw
68c968047c Merge remote-tracking branch 'crenshaw-dev/manifest-hydrator' into previewer
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-24 16:36:42 -04:00
Omer Azmon
b277580aff add sleep to main loop
2024-05-24 10:16:37 -07:00
Omer Azmon
044375a797 add previewer POC untested
2024-05-23 19:47:49 -07:00
Alexandre Gaudreault
62d4894d51 fix: for helm namespace (#23)
* feat: manifest hydrator

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* it's monitoring both branches now

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* push works w/ my personal creds

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* write metadata, readme, and commands

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* handle missing branches, missing manifest files, and no-op changes

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* don't set release name or namespace to values from the app CR

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* more determinism

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* handle new branches

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* show hydration progress

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* use workqueue

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* use securejoin, use log contexts, clean up temp dirs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* use app auth for github only

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* it works

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* retry failed operations

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* fix

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* codegen

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* it just works

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-23 15:42:20 -04:00
Michael Crenshaw
74d4e980f9 use securejoin, use log contexts, clean up temp dirs
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:08:31 -04:00
Michael Crenshaw
9dc94b08a6 use workqueue
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:08:31 -04:00
Michael Crenshaw
29d8937de6 show hydration progress
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:08:28 -04:00
Michael Crenshaw
0caa4d3d33 handle new branches
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:05:24 -04:00
Michael Crenshaw
207f0aba5d more determinism
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:05:21 -04:00
Michael Crenshaw
8257311db2 don't set release name or namespace to values from the app CR
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:02:33 -04:00
Michael Crenshaw
9944e2a8d1 handle missing branches, missing manifest files, and no-op changes
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:02:33 -04:00
Michael Crenshaw
87d2f3f263 write metadata, readme, and commands
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 20:02:28 -04:00
Michael Crenshaw
55f3fa8b53 push works w/ my personal creds
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 19:59:30 -04:00
Michael Crenshaw
ccf18147b2 it's monitoring both branches now
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 19:59:27 -04:00
Michael Crenshaw
a1e8e1f17d feat: manifest hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-05-21 19:55:05 -04:00
Argo CD Hydrator
f816ada864 hydrate 6241416c8ef1a94d3c80f38fa2f1f865608886f7
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-24 11:09:24 -04:00
Argo CD Hydrator
ebb71a0018 hydrate f0cc9868393be8abf156adb963f66b9ec8417f37
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-24 10:56:51 -04:00
Argo CD Hydrator
8bd52a7b74 hydrate 665d6fd139b3bcf2df0ff4b464f2fe42597f2455
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-24 10:54:59 -04:00
Argo CD Hydrator
2d43a8331f hydrate
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-24 10:54:56 -04:00
Michael Crenshaw
dd7952e389 it's monitoring both branches now
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-23 17:02:39 -04:00
Michael Crenshaw
037d098d7c feat: manifest hydrator
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-22 10:47:52 -04:00
1316 changed files with 63808 additions and 172897 deletions

View File

@@ -31,11 +31,6 @@ updates:
directory: "/"
schedule:
interval: "daily"
ignore:
# We use consistent go and node versions across a lot of different files, and updating via dependabot would cause
# drift among those files, instead we let renovate bot handle them.
- dependency-name: "library/golang"
- dependency-name: "library/node"
- package-ecosystem: "docker"
directory: "/test/container/"

View File

@@ -13,8 +13,7 @@ on:
env:
# Golang version to use across CI steps
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.24.6'
GOLANG_VERSION: '1.22'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -32,7 +31,7 @@ jobs:
docs: ${{ steps.filter.outputs.docs_any_changed }}
steps:
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- uses: tj-actions/changed-files@bab30c2299617f6615ec02a68b9a40d10bd21366 # v45.0.5
- uses: tj-actions/changed-files@c65cd883420fd2eb864698a825fc4162dd94482c # v44.5.7
id: filter
with:
# Any file which is not under docs/, ui/ or is not a markdown file is counted as a backend file
@@ -57,7 +56,7 @@ jobs:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@3041bf56c941b39c61721a86cd11f3bb1338122a # v5.2.0
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Download all Go modules
@@ -78,11 +77,11 @@ jobs:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@3041bf56c941b39c61721a86cd11f3bb1338122a # v5.2.0
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Restore go build cache
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -105,14 +104,13 @@ jobs:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@3041bf56c941b39c61721a86cd11f3bb1338122a # v5.2.0
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Run golangci-lint
uses: golangci/golangci-lint-action@4afd733a84b1f43292c63897423277bb7f4313a9 # v8.0.0
uses: golangci/golangci-lint-action@aaa42aa0628b4ae2578232a66b541047968fac86 # v6.1.0
with:
# renovate: datasource=go packageName=github.com/golangci/golangci-lint versioning=regex:^v(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?$
version: v2.1.6
version: v1.58.2
args: --verbose
test-go:
@@ -133,7 +131,7 @@ jobs:
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
uses: actions/setup-go@3041bf56c941b39c61721a86cd11f3bb1338122a # v5.2.0
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Install required packages
@@ -153,7 +151,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -174,7 +172,7 @@ jobs:
- name: Run all unit tests
run: make test-local
- name: Generate test results artifacts
uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # v4.3.6
with:
name: test-results
path: test-results
@@ -197,7 +195,7 @@ jobs:
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
uses: actions/setup-go@3041bf56c941b39c61721a86cd11f3bb1338122a # v5.2.0
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Install required packages
@@ -217,7 +215,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -238,7 +236,7 @@ jobs:
- name: Run all unit tests
run: make test-race-local
- name: Generate test results artifacts
uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # v4.3.6
with:
name: race-results
path: test-results/
@@ -253,7 +251,7 @@ jobs:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@3041bf56c941b39c61721a86cd11f3bb1338122a # v5.2.0
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Create symlink in GOPATH
@@ -305,13 +303,12 @@ jobs:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup NodeJS
uses: actions/setup-node@39370e3970a6d050c480ffad4ff0ed4d3fdee5af # v4.1.0
uses: actions/setup-node@1e60f620b9541d16bece96c5465dc8ee9832be0b # v4.0.3
with:
# renovate: datasource=node-version packageName=node versioning=node
node-version: '22.9.0'
node-version: '21.6.1'
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -326,8 +323,6 @@ jobs:
NODE_ENV: production
NODE_ONLINE_ENV: online
HOST_ARCH: amd64
# If we're on the master branch, set the codecov token so that we upload bundle analysis
CODECOV_TOKEN: ${{ github.ref == 'refs/heads/master' && secrets.CODECOV_TOKEN || '' }}
working-directory: ui/
- name: Run ESLint
run: yarn lint
@@ -351,7 +346,7 @@ jobs:
fetch-depth: 0
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -376,24 +371,17 @@ jobs:
run: |
go tool covdata percent -i=test-results,e2e-code-coverage/applicationset-controller,e2e-code-coverage/repo-server,e2e-code-coverage/app-controller,e2e-code-coverage/commit-server -o test-results/full-coverage.out
- name: Upload code coverage information to codecov.io
uses: codecov/codecov-action@b9fd7d16f6d7d1b5d2bec1a2887e65ceed900238 # v4.6.0
uses: codecov/codecov-action@e28ff129e5465c2c0dcc6f003fc735cb6ae0c673 # v4.5.0
with:
file: test-results/full-coverage.out
fail_ci_if_error: true
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
- name: Upload test results to Codecov
if: github.ref == 'refs/heads/master' && github.event_name == 'push' && github.repository == 'argoproj/argo-cd'
uses: codecov/test-results-action@9739113ad922ea0a9abb4b2c0f8bf6a4aa8ef820 # v1.0.1
with:
file: test-results/junit.xml
fail_ci_if_error: true
token: ${{ secrets.CODECOV_TOKEN }}
- name: Perform static code analysis using SonarCloud
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
uses: SonarSource/sonarqube-scan-action@bfd4e558cda28cda6b5defafb9232d191be8c203 # v4.2.1
uses: SonarSource/sonarqube-scan-action@aecaf43ae57e412bd97d70ef9ce6076e672fe0a9 # v2.2
if: env.sonar_secret != ''
test-e2e:
name: Run end-to-end tests
@@ -403,14 +391,14 @@ jobs:
fail-fast: false
matrix:
k3s:
- version: v1.31.0
- version: v1.30.2
# We designate the latest version because we only collect code coverage for that version.
latest: true
- version: v1.30.4
- version: v1.29.6
latest: false
- version: v1.29.8
- version: v1.28.11
latest: false
- version: v1.28.13
- version: v1.27.15
latest: false
needs:
- build-go
@@ -429,17 +417,10 @@ jobs:
GITHUB_TOKEN: ${{ secrets.E2E_TEST_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
steps:
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be
with:
large-packages: false
docker-images: false
swap-storage: false
tool-cache: false
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@3041bf56c941b39c61721a86cd11f3bb1338122a # v5.2.0
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: GH actions workaround - Kill XSP4 process
@@ -458,7 +439,7 @@ jobs:
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build cache
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -516,13 +497,13 @@ jobs:
goreman run stop-all || echo "goreman trouble"
sleep 30
- name: Upload e2e coverage report
uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # v4.3.6
with:
name: e2e-code-coverage
path: /tmp/coverage
if: ${{ matrix.k3s.latest }}
- name: Upload e2e-server logs
uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # v4.3.6
with:
name: e2e-server-k8s${{ matrix.k3s.version }}.log
path: /tmp/e2e-server.log
@@ -549,4 +530,4 @@ jobs:
exit 0
else
exit 1
fi
fi

View File

@@ -23,7 +23,7 @@ jobs:
actions: read # for github/codeql-action/init to get workflow details
contents: read # for actions/checkout to fetch code
security-events: write # for github/codeql-action/autobuild to send a status report
if: github.repository == 'argoproj/argo-cd' || vars.enable_codeql
if: github.repository == 'argoproj/argo-cd'
# CodeQL runs on ubuntu-latest and windows-latest
runs-on: ubuntu-22.04
@@ -33,7 +33,7 @@ jobs:
# Use correct go version. https://github.com/github/codeql-action/issues/1842#issuecomment-1704398087
- name: Setup Golang
uses: actions/setup-go@3041bf56c941b39c61721a86cd11f3bb1338122a # v5.2.0
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
with:
go-version-file: go.mod

View File

@@ -17,9 +17,11 @@ on:
platforms:
required: true
type: string
default: linux/amd64
push:
required: true
type: boolean
default: false
target:
required: false
type: string
@@ -67,15 +69,15 @@ jobs:
if: ${{ github.ref_type != 'tag'}}
- name: Setup Golang
uses: actions/setup-go@3041bf56c941b39c61721a86cd11f3bb1338122a # v5.2.0
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
with:
go-version: ${{ inputs.go-version }}
- name: Install cosign
uses: sigstore/cosign-installer@dc72c7d5c4d10cd6bcb8cf6e3fd625a9e5e537da # v3.7.0
uses: sigstore/cosign-installer@4959ce089c160fddf62f7b42464195ba1a56d382 # v3.6.0
- uses: docker/setup-qemu-action@49b3bc8e6bdd4a60e6116a5414239cba5943d3cf # v3.2.0
- uses: docker/setup-buildx-action@c47758b77c9736f4b2ef4073d4d51994fabfe349 # v3.7.1
- uses: docker/setup-buildx-action@988b5a0280414f521da01fcc63a27aeeb4b104db # v3.6.1
- name: Setup tags for container image as a CSV type
run: |
@@ -141,7 +143,7 @@ jobs:
- name: Build and push container image
id: image
uses: docker/build-push-action@48aba3b46d1b1fec4febb7c5d0c644b249a11355 #v6.10.0
uses: docker/build-push-action@16ebe778df0e7752d2cfcbd924afdbbd89c1a755 #v6.6.1
with:
context: .
platforms: ${{ inputs.platforms }}

View File

@@ -52,8 +52,7 @@ jobs:
uses: ./.github/workflows/image-reuse.yaml
with:
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.24.6
go-version: 1.22
platforms: ${{ needs.set-vars.outputs.platforms }}
push: false
@@ -69,8 +68,7 @@ jobs:
quay_image_name: quay.io/argoproj/argocd:latest
ghcr_image_name: ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.24.6
go-version: 1.22
platforms: ${{ needs.set-vars.outputs.platforms }}
push: true
secrets:

View File

@@ -64,7 +64,7 @@ jobs:
git stash pop
- name: Create pull request
uses: peter-evans/create-pull-request@5e914681df9dc83aa4e4905692ca88beb2f9e91f # v7.0.5
uses: peter-evans/create-pull-request@c5a7806660adbe173f04e3e038b0ccdcd758773c # v6.1.0
with:
commit-message: "Bump version to ${{ inputs.TARGET_VERSION }}"
title: "Bump version to ${{ inputs.TARGET_VERSION }} on ${{ inputs.TARGET_BRANCH }} branch"

View File

@@ -23,7 +23,7 @@ jobs:
name: Validate PR Title
runs-on: ubuntu-latest
steps:
- uses: thehanimo/pr-title-checker@7fbfe05602bdd86f926d3fb3bccb6f3aed43bc70 # v1.4.3
- uses: thehanimo/pr-title-checker@1d8cd483a2b73118406a187f54dca8a9415f1375 # v1.4.2
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
configuration_path: ".github/pr-title-checker-config.json"

View File

@@ -10,8 +10,7 @@ on:
permissions: {}
env:
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.24.6' # Note: go-version must also be set in job argocd-image.with.go-version
GOLANG_VERSION: '1.22' # Note: go-version must also be set in job argocd-image.with.go-version
jobs:
argocd-image:
@@ -24,8 +23,7 @@ jobs:
with:
quay_image_name: quay.io/argoproj/argocd:${{ github.ref_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.24.6
go-version: 1.22
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
secrets:
@@ -69,15 +67,19 @@ jobs:
- name: Fetch all tags
run: git fetch --force --tags
- name: Setup Golang
uses: actions/setup-go@3041bf56c941b39c61721a86cd11f3bb1338122a # v5.2.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Set GORELEASER_PREVIOUS_TAG # Workaround, GoReleaser uses 'git-describe' to determine a previous tag. Our tags are created in release branches.
- name: Set GORELEASER_PREVIOUS_TAG # Workaround, GoReleaser uses 'git-describe' to determine a previous tag. Our tags are created in realease branches.
run: |
set -xue
echo "GORELEASER_PREVIOUS_TAG=$(go run hack/get-previous-release/get-previous-version-for-release-notes.go ${{ github.ref_name }})" >> $GITHUB_ENV
if echo ${{ github.ref_name }} | grep -E -- '-rc1+$';then
echo "GORELEASER_PREVIOUS_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n 2 | head -n 1)" >> $GITHUB_ENV
else
echo "This is not the first release on the branch, Using GoReleaser defaults"
fi
- name: Setup Golang
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Set environment variables for ldflags
id: set_ldflag
@@ -94,14 +96,14 @@ jobs:
tool-cache: false
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@9ed2f89a662bf1735a48bc8557fd212fa902bebf # v6.1.0
uses: goreleaser/goreleaser-action@286f3b13b1b49da4ac219696163fb8c1c93e1200 # v6.0.0
id: run-goreleaser
with:
version: latest
args: release --clean --timeout 55m
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
KUBECTL_VERSION: ${{ env.KUBECTL_VERSION }}
KUBECTL_VERSION: ${{ env.KUBECTL_VERSION }}
GIT_TREE_STATE: ${{ env.GIT_TREE_STATE }}
- name: Generate subject for provenance
@@ -151,7 +153,7 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup Golang
uses: actions/setup-go@3041bf56c941b39c61721a86cd11f3bb1338122a # v5.2.0
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
with:
go-version: ${{ env.GOLANG_VERSION }}
@@ -184,7 +186,7 @@ jobs:
fi
cd /tmp && tar -zcf sbom.tar.gz *.spdx
- name: Generate SBOM hash
shell: bash
id: sbom-hash
@@ -193,15 +195,15 @@ jobs:
# base64 -w0 encodes to base64 and outputs on a single line.
# sha256sum /tmp/sbom.tar.gz ... | base64 -w0
echo "hashes=$(sha256sum /tmp/sbom.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"
- name: Upload SBOM
uses: softprops/action-gh-release@c95fe1489396fe8a9eb87c0abf8aa5b2ef267fda # v2.2.1
uses: softprops/action-gh-release@c062e08bd532815e2082a85e87e3ef29c3e6d191 # v2.0.8
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
files: |
/tmp/sbom.tar.gz
sbom-provenance:
needs: [generate-sbom]
permissions:
@@ -209,13 +211,13 @@ jobs:
id-token: write # Needed for provenance signing and ID
contents: write # Needed for release uploads
if: github.repository == 'argoproj/argo-cd'
# Must be referenced by a tag. https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#referencing-the-slsa-generator
# Must be refernced by a tag. https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#referencing-the-slsa-generator
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.0.0
with:
base64-subjects: "${{ needs.generate-sbom.outputs.hashes }}"
provenance-name: "argocd-sbom.intoto.jsonl"
upload-assets: true
post-release:
needs:
- argocd-image
@@ -293,7 +295,7 @@ jobs:
if: ${{ env.UPDATE_VERSION == 'true' }}
- name: Create PR to update VERSION on master branch
uses: peter-evans/create-pull-request@5e914681df9dc83aa4e4905692ca88beb2f9e91f # v7.0.5
uses: peter-evans/create-pull-request@c5a7806660adbe173f04e3e038b0ccdcd758773c # v6.1.0
with:
commit-message: Bump version in master
title: "chore: Bump version in master"

View File

@@ -54,7 +54,7 @@ jobs:
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # v4.3.6
with:
name: SARIF file
path: results.sarif

.gitignore vendored
View File

@@ -1,7 +1,6 @@
.vscode/
.idea/
.DS_Store
.run/
vendor/
dist/*
ui/dist/app/*

.gitpod.Dockerfile vendored
View File

@@ -1,4 +1,4 @@
FROM gitpod/workspace-full@sha256:230285e0b949e6d728d384b2029a4111db7b9c87c182f22f32a0be9e36b225df
FROM gitpod/workspace-full@sha256:fbff2dce4236535b96de0e94622bbe9a44fba954ca064862004c34e3e08904df
USER root

View File

@@ -1,101 +1,41 @@
version: "2"
run:
timeout: 50m
issues:
exclude:
- SA1019
- SA5011
max-issues-per-linter: 0
max-same-issues: 0
linters:
default: none
enable:
- errcheck
- errorlint
- gocritic
- gofumpt
- goimports
- gosimple
- govet
- ineffassign
- misspell
- perfsprint
- staticcheck
- testifylint
- thelper
- unparam
- unused
- usestdlibvars
- whitespace
settings:
gocritic:
disabled-checks:
- appendAssign
- assignOp
- badCond
- commentFormatting
- exitAfterDefer
- ifElseChain
- mapKey
- singleCaseSwitch
- typeSwitchVar
perfsprint:
int-conversion: true
err-error: false
errorf: false
sprintf1: true
strconcat: false
staticcheck:
checks:
- "all"
- "-ST1001" # dot imports are discouraged
- "-ST1003" # poorly chosen identifier
- "-ST1005" # incorrectly formatted error string
- "-ST1011" # poorly chosen name for variable of type time.Duration
- "-ST1012" # poorly chosen name for an error variable
- "-ST1016" # use consistent method receiver names
- "-ST1017" # don't use Yoda conditions
- "-ST1019" # importing the same package multiple times
- "-ST1023" # redundant type in variable declaration
- "-QF1001" # apply De Morgans law
- "-QF1003" # convert if/else-if chain to tagged switch
- "-QF1006" # lift if+break into loop condition
- "-QF1007" # merge conditional assignment into variable declaration
- "-QF1008" # omit embedded fields from selector expression
- "-QF1009" # use time.Time.Equal instead of == operator
- "-QF1011" # omit redundant type from variable declaration
- "-QF1012" # use fmt.Fprintf(x, ...) instead of x.Write(fmt.Sprintf(...))
testifylint:
enable-all: true
disable:
- go-require
- equal-values
- empty
- len
- expected-actual
- formatter
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
rules:
- linters:
- unparam
path: (.+)_test\.go
- path: (.+)\.go$
text: SA5011
paths:
- third_party$
- builtin$
- examples$
issues:
max-issues-per-linter: 0
max-same-issues: 0
formatters:
enable:
- gofumpt
- goimports
settings:
goimports:
local-prefixes:
- github.com/argoproj/argo-cd/v2
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$
- whitespace
linters-settings:
gocritic:
disabled-checks:
- appendAssign
- assignOp # Keep it disabled for readability
- badCond
- commentFormatting
- exitAfterDefer
- ifElseChain
- mapKey
- singleCaseSwitch
- typeSwitchVar
goimports:
local-prefixes: github.com/argoproj/argo-cd/v2
testifylint:
enable-all: true
disable:
- go-require
run:
timeout: 50m

View File

@@ -50,7 +50,6 @@ packages:
ProjectGetter:
RbacEnforcer:
SettingsGetter:
UserGetter:
github.com/argoproj/argo-cd/v2/util/db:
interfaces:
ArgoDB:
@@ -73,7 +72,4 @@ packages:
SessionServiceClient:
github.com/argoproj/argo-cd/v2/pkg/apiclient/cluster:
interfaces:
ClusterServiceServer:
github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned/typed/application/v1alpha1:
interfaces:
AppProjectInterface:
ClusterServiceServer:

View File

@@ -2,11 +2,10 @@ version: 2
formats: all
mkdocs:
fail_on_warning: false
configuration: mkdocs.yml
python:
install:
- requirements: docs/requirements.txt
build:
os: "ubuntu-22.04"
tools:
python: "3.12"
python: "3.7"

View File

@@ -4,7 +4,7 @@ ARG BASE_IMAGE=docker.io/library/ubuntu:24.04@sha256:3f85b7caad41a95462cf5b787d8
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.24.6@sha256:2c89c41fb9efc3807029b59af69645867cfe978d2b877d475be0d72f6c6ce6f6 AS builder
FROM docker.io/library/golang:1.22.6@sha256:2bd56f00ff47baf33e64eae7996b65846c7cb5e0a46e0a882ef179fd89654afa AS builder
RUN echo 'deb http://archive.debian.org/debian buster-backports main' >> /etc/apt/sources.list
@@ -83,7 +83,7 @@ WORKDIR /home/argocd
####################################################################################################
# Argo CD UI stage
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/node:23.0.0@sha256:e643c0b70dca9704dff42e12b17f5b719dbe4f95e6392fc2dfa0c5f02ea8044d AS argocd-ui
FROM --platform=$BUILDPLATFORM docker.io/library/node:22.3.0@sha256:5e4044ff6001d06e7748e35bfa4f80c73cf5f5a7360a1b782995e038a01b0585 AS argocd-ui
WORKDIR /src
COPY ["ui/package.json", "ui/yarn.lock", "./"]
@@ -101,7 +101,7 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.24.6@sha256:2c89c41fb9efc3807029b59af69645867cfe978d2b877d475be0d72f6c6ce6f6 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.22.6@sha256:2bd56f00ff47baf33e64eae7996b65846c7cb5e0a46e0a882ef179fd89654afa AS argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd
@@ -140,8 +140,7 @@ RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-server && \
ln -s /usr/local/bin/argocd /usr/local/bin/argocd-dex && \
ln -s /usr/local/bin/argocd /usr/local/bin/argocd-notifications && \
ln -s /usr/local/bin/argocd /usr/local/bin/argocd-applicationset-controller && \
ln -s /usr/local/bin/argocd /usr/local/bin/argocd-k8s-auth && \
ln -s /usr/local/bin/argocd /usr/local/bin/argocd-commit-server
ln -s /usr/local/bin/argocd /usr/local/bin/argocd-k8s-auth
USER $ARGOCD_USER_ID
ENTRYPOINT ["/usr/bin/tini", "--"]

View File

@@ -254,7 +254,7 @@ cli: test-tools-image
.PHONY: cli-local
cli-local: clean-debug
CGO_ENABLED=${CGO_FLAG} GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -gcflags="all=-N -l" $(COVERAGE_FLAG) -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${CLI_NAME} ./cmd
CGO_ENABLED=${CGO_FLAG} GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build $(COVERAGE_FLAG) -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${CLI_NAME} ./cmd
.PHONY: gen-resources-cli-local
gen-resources-cli-local: clean-debug
@@ -487,10 +487,8 @@ start-e2e-local: mod-vendor-local dep-ui-local cli-local
BIN_MODE=$(ARGOCD_BIN_MODE) \
ARGOCD_APPLICATION_NAMESPACES=argocd-e2e-external,argocd-e2e-external-2 \
ARGOCD_APPLICATIONSET_CONTROLLER_NAMESPACES=argocd-e2e-external,argocd-e2e-external-2 \
ARGOCD_APPLICATIONSET_CONTROLLER_TOKENREF_STRICT_MODE=true \
ARGOCD_APPLICATIONSET_CONTROLLER_ALLOWED_SCM_PROVIDERS=http://127.0.0.1:8341,http://127.0.0.1:8342,http://127.0.0.1:8343,http://127.0.0.1:8344 \
ARGOCD_E2E_TEST=true \
ARGOCD_HYDRATOR_ENABLED=true \
goreman -f $(ARGOCD_PROCFILE) start ${ARGOCD_START}
ls -lrt /tmp/coverage
@@ -556,7 +554,7 @@ build-docs-local:
.PHONY: build-docs
build-docs:
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install mkdocs; pip install $$(mkdocs get-deps); mkdocs build'
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install -r docs/requirements.txt; mkdocs build'
.PHONY: serve-docs-local
serve-docs-local:
@@ -564,7 +562,7 @@ serve-docs-local:
.PHONY: serve-docs
serve-docs:
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -p 8000:8000 -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install mkdocs; pip install $$(mkdocs get-deps); mkdocs serve -a $$(ip route get 1 | awk '\''{print $$7}'\''):8000'
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -p 8000:8000 -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install -r docs/requirements.txt; mkdocs serve -a $$(ip route get 1 | awk '\''{print $$7}'\''):8000'
# Verify that kubectl can connect to your K8s cluster from Docker
.PHONY: verify-kube-connect

View File

@@ -1,8 +1,9 @@
controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/app-controller} HOSTNAME=testappcontroller-1 FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-application-controller $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --commit-server localhost:${ARGOCD_E2E_COMMITSERVER_PORT:-8086} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --server-side-diff-enabled=${ARGOCD_APPLICATION_CONTROLLER_SERVER_SIDE_DIFF:-'false'} --hydrator-enabled=${ARGOCD_HYDRATOR_ENABLED:='false'}"
api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/api-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --hydrator-enabled=${ARGOCD_HYDRATOR_ENABLED:='false'}"
controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/app-controller} HOSTNAME=testappcontroller-1 FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-application-controller $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --commit-server localhost:${ARGOCD_E2E_COMMITSERVER_PORT:-8086} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --server-side-diff-enabled=${ARGOCD_APPLICATION_CONTROLLER_SERVER_SIDE_DIFF:-'false'}"
api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/api-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''}"
dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v2/cmd gendexcfg -o `pwd`/dist/dex.yaml && (test -f dist/dex.yaml || { echo 'Failed to generate dex configuration'; exit 1; }) && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:$(grep "image: ghcr.io/dexidp/dex" manifests/base/dex/argocd-dex-server-deployment.yaml | cut -d':' -f3) dex serve /dex.yaml"
redis: hack/start-redis-with-password.sh
redis: bash -c "if [ \"$ARGOCD_REDIS_LOCAL\" = 'true' ]; then redis-server --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; else docker run --rm --name argocd-redis -i -p ${ARGOCD_E2E_REDIS_PORT:-6379}:${ARGOCD_E2E_REDIS_PORT:-6379} docker.io/library/redis:$(grep "image: redis" manifests/base/redis/argocd-redis-deployment.yaml | cut -d':' -f3) --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; fi"
repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
commit-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/commit-server} FORCE_LOG_COLORS=1 ARGOCD_BINARY_NAME=argocd-commit-server $COMMAND --loglevel debug --port ${ARGOCD_E2E_COMMITSERVER_PORT:-8086}"
cmp-server: [ "$ARGOCD_E2E_TEST" = 'true' ] && exit 0 || [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_BINARY_NAME=argocd-cmp-server ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} $COMMAND --config-dir-path ./test/cmp --loglevel debug --otlp-address=${ARGOCD_OTLP_ADDRESS}"
commit-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/commit-server} FORCE_LOG_COLORS=1 ARGOCD_BINARY_NAME=argocd-commit-server $COMMAND --loglevel debug --port ${ARGOCD_E2E_COMMITSERVER_PORT:-8086}"
ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
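
These Procfile entries rely throughout on the shell's ${VAR:-default} expansion to pick ports and addresses; for example the commit server listens on ${ARGOCD_E2E_COMMITSERVER_PORT:-8086} and the controller dials it via --commit-server. A small Go sketch of the same fallback behaviour; the helper name getenvDefault is purely illustrative:

```go
package main

import (
	"fmt"
	"os"
)

// getenvDefault mirrors the shell's ${VAR:-default} expansion: return the
// variable's value if it is set and non-empty, otherwise the default.
func getenvDefault(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

func main() {
	// Hypothetical use: compute the address a controller would dial when
	// started with --commit-server localhost:<port>.
	port := getenvDefault("ARGOCD_E2E_COMMITSERVER_PORT", "8086")
	fmt.Println("commit server address:", "localhost:"+port)
}
```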

View File

@@ -8,6 +8,7 @@
[![codecov](https://codecov.io/gh/argoproj/argo-cd/branch/master/graph/badge.svg)](https://codecov.io/gh/argoproj/argo-cd)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4486/badge)](https://bestpractices.coreinfrastructure.org/projects/4486)
[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/argoproj/argo-cd/badge)](https://scorecard.dev/viewer/?uri=github.com/argoproj/argo-cd)
[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fargoproj%2Fargo-cd.svg?type=shield)](https://app.fossa.com/projects/git%2Bgithub.com%2Fargoproj%2Fargo-cd?ref=badge_shield)
**Social:**
[![Twitter Follow](https://img.shields.io/twitter/follow/argoproj?style=social)](https://twitter.com/argoproj)
@@ -56,7 +57,7 @@ Participation in the Argo CD project is governed by the [CNCF Code of Conduct](h
### Blogs and Presentations
1. [Awesome-Argo: A Curated List of Awesome Projects and Resources Related to Argo](https://github.com/terrytangyuan/awesome-argo)
1. [Unveil the Secret Ingredients of Continuous Delivery at Enterprise Scale with Argo CD](https://akuity.io/blog/secret-ingredients-of-continuous-delivery-at-enterprise-scale-with-argocd/)
1. [Unveil the Secret Ingredients of Continuous Delivery at Enterprise Scale with Argo CD](https://akuity.io/blog/unveil-the-secret-ingredients-of-continuous-delivery-at-enterprise-scale-with-argocd-kubecon-china-2021/)
1. [GitOps Without Pipelines With ArgoCD Image Updater](https://youtu.be/avPUQin9kzU)
1. [Combining Argo CD (GitOps), Crossplane (Control Plane), And KubeVela (OAM)](https://youtu.be/eEcgn_gU3SM)
1. [How to Apply GitOps to Everything - Combining Argo CD and Crossplane](https://youtu.be/yrj4lmScKHQ)

View File

@@ -3,9 +3,9 @@ header:
expiration-date: '2024-10-31T00:00:00.000Z' # One year from initial release.
last-updated: '2023-10-27'
last-reviewed: '2023-10-27'
commit-hash: 74a367d10e7110209610ba3ec225539ebe5f7522
commit-hash: fe606708859574b9b6102a505e260fac5d3fb14e
project-url: https://github.com/argoproj/argo-cd
project-release: v2.14.0
project-release: v2.13.0
changelog: https://github.com/argoproj/argo-cd/releases
license: https://github.com/argoproj/argo-cd/blob/master/LICENSE
project-lifecycle:

View File

@@ -11,13 +11,10 @@ Currently, the following organizations are **officially** using Argo CD:
1. [7shifts](https://www.7shifts.com/)
1. [Adevinta](https://www.adevinta.com/)
1. [Adfinis](https://adfinis.com)
1. [Adobe](https://www.adobe.com/)
1. [Adventure](https://jp.adventurekk.com/)
1. [Adyen](https://www.adyen.com)
1. [AirQo](https://airqo.net/)
1. [Akuity](https://akuity.io/)
1. [Alarm.com](https://alarm.com/)
1. [Alauda](https://alauda.io/)
1. [Albert Heijn](https://ah.nl/)
1. [Alibaba Group](https://www.alibabagroup.com/)
1. [Allianz Direct](https://www.allianzdirect.de/)
@@ -32,19 +29,16 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Arctiq Inc.](https://www.arctiq.ca)
2. [Arturia](https://www.arturia.com)
1. [ARZ Allgemeines Rechenzentrum GmbH](https://www.arz.at/)
1. [Augury](https://www.augury.com/)
1. [Autodesk](https://www.autodesk.com)
1. [Axians ACSP](https://www.axians.fr)
1. [Axual B.V.](https://axual.com)
1. [Back Market](https://www.backmarket.com)
1. [Bajaj Finserv Health Ltd.](https://www.bajajfinservhealth.in)
1. [Baloise](https://www.baloise.com)
1. [BCDevExchange DevOps Platform](https://bcdevexchange.org/DevOpsPlatform)
1. [Beat](https://thebeat.co/en/)
1. [Beez Innovation Labs](https://www.beezlabs.com/)
1. [Bedag Informatik AG](https://www.bedag.ch/)
1. [Beleza Na Web](https://www.belezanaweb.com.br/)
1. [Believable Bots](https://believablebots.io)
1. [BigPanda](https://bigpanda.io)
1. [BioBox Analytics](https://biobox.io)
1. [BMW Group](https://www.bmwgroup.com/)
@@ -79,7 +73,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Codility](https://www.codility.com/)
1. [Cognizant](https://www.cognizant.com/)
1. [Commonbond](https://commonbond.co/)
1. [Compatio.AI](https://compatio.ai/)
1. [Contlo](https://contlo.com/)
1. [Coralogix](https://coralogix.com/)
1. [Crédit Agricole CIB](https://www.ca-cib.com)
@@ -89,7 +82,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [D2iQ](https://www.d2iq.com)
1. [DaoCloud](https://daocloud.io/)
1. [Datarisk](https://www.datarisk.io/)
1. [Daydream](https://daydream.ing)
1. [Deloitte](https://www.deloitte.com/)
1. [Deutsche Telekom AG](https://telekom.com)
1. [Devopsi - Poland Software/DevOps Consulting](https://devopsi.pl/)
@@ -120,7 +112,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [freee](https://corp.freee.co.jp/en/company/)
1. [Freshop, Inc](https://www.freshop.com/)
1. [Future PLC](https://www.futureplc.com/)
1. [Flagler Health](https://www.flaglerhealth.io/)
1. [G DATA CyberDefense AG](https://www.gdata-software.com/)
1. [G-Research](https://www.gresearch.com/teams/open-source-software/)
1. [Garner](https://www.garnercorp.com)
@@ -216,7 +207,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Moengage](https://www.moengage.com/)
1. [Money Forward](https://corp.moneyforward.com/en/)
1. [MOO Print](https://www.moo.com/)
1. [Mozilla](https://www.mozilla.org)
1. [MTN Group](https://www.mtn.com/)
1. [Municipality of The Hague](https://www.denhaag.nl/)
1. [My Job Glasses](https://myjobglasses.com)
@@ -249,7 +239,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Optoro](https://www.optoro.com/)
1. [Orbital Insight](https://orbitalinsight.com/)
1. [Oscar Health Insurance](https://hioscar.com/)
1. [Outpost24](https://outpost24.com/)
1. [p3r](https://www.p3r.one/)
1. [Packlink](https://www.packlink.com/)
1. [PagerDuty](https://www.pagerduty.com/)
@@ -278,7 +267,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [PT Boer Technology (Btech)](https://btech.id/)
1. [PUBG](https://www.pubg.com)
1. [Puzzle ITC](https://www.puzzle.ch/)
1. [Pvotal Technologies](https://pvotal.tech/)
1. [Qonto](https://qonto.com)
1. [QuintoAndar](https://quintoandar.com.br)
1. [Quipper](https://www.quipper.com/)
@@ -315,7 +303,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Skyscanner](https://www.skyscanner.net/)
1. [Smart Pension](https://www.smartpension.co.uk/)
1. [Smilee.io](https://smilee.io)
1. [Smilegate Stove](https://www.onstove.com/)
1. [Smood.ch](https://www.smood.ch/)
1. [Snapp](https://snapp.ir/)
1. [Snyk](https://snyk.io/)
@@ -335,17 +322,14 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Swisscom](https://www.swisscom.ch)
1. [Swissquote](https://github.com/swissquote)
1. [Syncier](https://syncier.com/)
1. [Synergy](https://synergy.net.au)
1. [Syself](https://syself.com)
1. [TableCheck](https://tablecheck.com/)
1. [Tailor Brands](https://www.tailorbrands.com)
1. [Tamkeen Technologies](https://tamkeentech.sa/)
1. [TBC Bank](https://tbcbank.ge/)
1. [Techcombank](https://www.techcombank.com.vn/trang-chu)
1. [Technacy](https://www.technacy.it/)
1. [Telavita](https://www.telavita.com.br/)
1. [Tesla](https://tesla.com/)
1. [TextNow](https://www.textnow.com/)
1. [The Scale Factory](https://www.scalefactory.com/)
1. [ThousandEyes](https://www.thousandeyes.com/)
1. [Ticketmaster](https://ticketmaster.com)
@@ -393,5 +377,4 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Yubo](https://www.yubo.live/)
1. [ZDF](https://www.zdf.de/)
1. [Zimpler](https://www.zimpler.com/)
1. [ZipRecruiter](https://www.ziprecruiter.com/)
1. [ZOZO](https://corp.zozo.com/)

View File

@@ -1 +1 @@
2.14.18
2.13.0

View File

@@ -18,9 +18,6 @@ import (
"context"
"fmt"
"reflect"
"runtime/debug"
"sort"
"strconv"
"strings"
"time"
@@ -34,10 +31,11 @@ import (
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/client-go/kubernetes"
k8scache "k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/retry"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
@@ -47,13 +45,14 @@ import (
"github.com/argoproj/argo-cd/v2/applicationset/controllers/template"
"github.com/argoproj/argo-cd/v2/applicationset/generators"
"github.com/argoproj/argo-cd/v2/applicationset/metrics"
"github.com/argoproj/argo-cd/v2/applicationset/status"
"github.com/argoproj/argo-cd/v2/applicationset/utils"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/util/db"
"github.com/argoproj/argo-cd/v2/util/glob"
argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned"
argoutil "github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/argo/normalizers"
@@ -80,6 +79,7 @@ type ApplicationSetReconciler struct {
Recorder record.EventRecorder
Generators map[string]generators.Generator
ArgoDB db.ArgoDB
ArgoAppClientset appclientset.Interface
KubeClientset kubernetes.Interface
Policy argov1alpha1.ApplicationsSyncPolicy
EnablePolicyOverride bool
@@ -90,31 +90,18 @@ type ApplicationSetReconciler struct {
SCMRootCAPath string
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
Cache cache.Cache
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets/status,verbs=get;update;patch
func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (result ctrl.Result, err error) {
startReconcile := time.Now()
func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
logCtx := log.WithField("applicationset", req.NamespacedName)
defer func() {
if rec := recover(); rec != nil {
logCtx.Errorf("Recovered from panic: %+v\n%s", rec, debug.Stack())
result = ctrl.Result{}
var ok bool
err, ok = rec.(error)
if !ok {
err = fmt.Errorf("%v", rec)
}
}
}()
var applicationSetInfo argov1alpha1.ApplicationSet
parametersGenerated := false
startTime := time.Now()
if err := r.Get(ctx, req.NamespacedName, &applicationSetInfo); err != nil {
if client.IgnoreNotFound(err) != nil {
logCtx.WithError(err).Infof("unable to get ApplicationSet: '%v' ", err)
@@ -122,13 +109,9 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, client.IgnoreNotFound(err)
}
defer func() {
r.Metrics.ObserveReconcile(&applicationSetInfo, time.Since(startTime))
}()
// Do not attempt to further reconcile the ApplicationSet if it is being deleted.
if applicationSetInfo.DeletionTimestamp != nil {
appsetName := applicationSetInfo.Name
if applicationSetInfo.ObjectMeta.DeletionTimestamp != nil {
appsetName := applicationSetInfo.ObjectMeta.Name
logCtx.Debugf("DeletionTimestamp is set on %s", appsetName)
deleteAllowed := utils.DefaultPolicy(applicationSetInfo.Spec.SyncPolicy, r.Policy, r.EnablePolicyOverride).AllowDelete()
if !deleteAllowed {
@@ -155,7 +138,6 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
// desiredApplications is the main list of all expected Applications from all generators in this appset.
desiredApplications, applicationSetReason, err := template.GenerateApplications(logCtx, applicationSetInfo, r.Generators, r.Renderer, r.Client)
if err != nil {
logCtx.Errorf("unable to generate applications: %v", err)
_ = r.setApplicationSetStatusCondition(ctx,
&applicationSetInfo,
argov1alpha1.ApplicationSetCondition{
@@ -165,8 +147,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
Status: argov1alpha1.ApplicationSetConditionStatusTrue,
}, parametersGenerated,
)
// In order for the controller SDK to respect RequeueAfter, the error must be nil
return ctrl.Result{RequeueAfter: ReconcileRequeueOnValidationError}, nil
return ctrl.Result{RequeueAfter: ReconcileRequeueOnValidationError}, err
}
parametersGenerated = true
@@ -210,16 +191,16 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
appSyncMap := map[string]bool{}
if r.EnableProgressiveSyncs {
if !isRollingSyncStrategy(&applicationSetInfo) && len(applicationSetInfo.Status.ApplicationStatus) > 0 {
// If an appset was previously syncing with a `RollingSync` strategy but it has switched to the default strategy, clean up the progressive sync application statuses
if applicationSetInfo.Spec.Strategy == nil && len(applicationSetInfo.Status.ApplicationStatus) > 0 {
// If appset used progressive sync but stopped, clean up the progressive sync application statuses
logCtx.Infof("Removing %v unnecessary AppStatus entries from ApplicationSet %v", len(applicationSetInfo.Status.ApplicationStatus), applicationSetInfo.Name)
err := r.setAppSetApplicationStatus(ctx, logCtx, &applicationSetInfo, []argov1alpha1.ApplicationSetApplicationStatus{})
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to clear previous AppSet application statuses for %v: %w", applicationSetInfo.Name, err)
}
} else if isRollingSyncStrategy(&applicationSetInfo) {
// The appset uses progressive sync with `RollingSync` strategy
} else if applicationSetInfo.Spec.Strategy != nil {
// appset uses progressive sync
for _, app := range currentApplications {
appMap[app.Name] = app
}
@@ -261,8 +242,20 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
if r.EnableProgressiveSyncs {
// trigger appropriate application syncs if RollingSync strategy is enabled
if progressiveSyncsRollingSyncStrategyEnabled(&applicationSetInfo) {
validApps = r.syncValidApplications(logCtx, &applicationSetInfo, appSyncMap, appMap, validApps)
if progressiveSyncsStrategyEnabled(&applicationSetInfo, "RollingSync") {
validApps, err = r.syncValidApplications(logCtx, &applicationSetInfo, appSyncMap, appMap, validApps)
if err != nil {
_ = r.setApplicationSetStatusCondition(ctx,
&applicationSetInfo,
argov1alpha1.ApplicationSetCondition{
Type: argov1alpha1.ApplicationSetConditionErrorOccurred,
Message: err.Error(),
Reason: argov1alpha1.ApplicationSetReasonSyncApplicationError,
Status: argov1alpha1.ApplicationSetConditionStatusTrue,
}, parametersGenerated,
)
return ctrl.Result{}, err
}
}
}
@@ -314,7 +307,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
if applicationSetInfo.RefreshRequired() {
delete(applicationSetInfo.Annotations, common.AnnotationApplicationSetRefresh)
err := r.Update(ctx, &applicationSetInfo)
err := r.Client.Update(ctx, &applicationSetInfo)
if err != nil {
logCtx.Warnf("error occurred while updating ApplicationSet: %v", err)
_ = r.setApplicationSetStatusCondition(ctx,
@@ -349,7 +342,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
requeueAfter = ReconcileRequeueOnValidationError
}
logCtx.WithField("requeueAfter", requeueAfter).Info("end reconcile in ", time.Since(startReconcile))
logCtx.WithField("requeueAfter", requeueAfter).Info("end reconcile")
return ctrl.Result{
RequeueAfter: requeueAfter,
@@ -416,21 +409,8 @@ func (r *ApplicationSetReconciler) setApplicationSetStatusCondition(ctx context.
paramtersGeneratedCondition := getParametersGeneratedCondition(paramtersGenerated, condition.Message)
resourceUpToDateCondition := getResourceUpToDateCondition(errOccurred, condition.Message, condition.Reason)
evaluatedTypes := map[argov1alpha1.ApplicationSetConditionType]bool{
argov1alpha1.ApplicationSetConditionErrorOccurred: true,
argov1alpha1.ApplicationSetConditionParametersGenerated: true,
argov1alpha1.ApplicationSetConditionResourcesUpToDate: true,
}
newConditions := []argov1alpha1.ApplicationSetCondition{errOccurredCondition, paramtersGeneratedCondition, resourceUpToDateCondition}
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
evaluatedTypes[argov1alpha1.ApplicationSetConditionRolloutProgressing] = true
if condition.Type == argov1alpha1.ApplicationSetConditionRolloutProgressing {
newConditions = append(newConditions, condition)
}
}
needToUpdateConditions := false
for _, condition := range newConditions {
// do nothing if appset already has same condition
@@ -441,32 +421,28 @@ func (r *ApplicationSetReconciler) setApplicationSetStatusCondition(ctx context.
}
}
}
evaluatedTypes := map[argov1alpha1.ApplicationSetConditionType]bool{
argov1alpha1.ApplicationSetConditionErrorOccurred: true,
argov1alpha1.ApplicationSetConditionParametersGenerated: true,
argov1alpha1.ApplicationSetConditionResourcesUpToDate: true,
}
if needToUpdateConditions || len(applicationSet.Status.Conditions) < len(newConditions) {
if needToUpdateConditions || len(applicationSet.Status.Conditions) < 3 {
// fetch updated Application Set object before updating it
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: applicationSet.Namespace, Name: applicationSet.Name}
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
namespacedName := types.NamespacedName{Namespace: applicationSet.Namespace, Name: applicationSet.Name}
if err := r.Get(ctx, namespacedName, applicationSet); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
updatedAppset.Status.SetConditions(
newConditions, evaluatedTypes,
)
applicationSet.Status.SetConditions(
newConditions, evaluatedTypes,
)
// Update the newly fetched object with new set of conditions
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(applicationSet)
return nil
})
// Update the newly fetched object with new set of conditions
err := r.Client.Status().Update(ctx, applicationSet)
if err != nil && !apierr.IsNotFound(err) {
return fmt.Errorf("unable to set application set condition: %w", err)
}
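
The status update above is wrapped in client-go's retry.RetryOnConflict, so a stale resourceVersion produces a refetch-and-retry rather than a hard failure. A self-contained sketch of that pattern, simulating the conflict error instead of talking to a real API server:

```go
package main

import (
	"errors"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/util/retry"
)

func main() {
	attempts := 0

	// retry.DefaultRetry retries on conflict a handful of times with a short
	// backoff; each attempt would normally re-Get the object before updating it.
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		attempts++
		if attempts < 3 {
			// Simulate the optimistic-concurrency failure an Update returns
			// when the cached resourceVersion is stale.
			return apierrors.NewConflict(
				schema.GroupResource{Group: "argoproj.io", Resource: "applicationsets"},
				"example", errors.New("object was modified"))
		}
		// The third attempt "succeeds" once we are working from a fresh copy.
		return nil
	})

	fmt.Printf("attempts=%d err=%v\n", attempts, err)
}
```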
@@ -487,9 +463,7 @@ func (r *ApplicationSetReconciler) validateGeneratedApplications(ctx context.Con
errorsByIndex[i] = fmt.Errorf("ApplicationSet %s contains applications with duplicate name: %s", applicationSetInfo.Name, app.Name)
continue
}
appProject := &argov1alpha1.AppProject{}
err := r.Get(ctx, types.NamespacedName{Name: app.Spec.Project, Namespace: r.ArgoCDNamespace}, appProject)
_, err := r.ArgoAppClientset.ArgoprojV1alpha1().AppProjects(r.ArgoCDNamespace).Get(ctx, app.Spec.GetProject(), metav1.GetOptions{})
if err != nil {
if apierr.IsNotFound(err) {
errorsByIndex[i] = fmt.Errorf("application references project %s which does not exist", app.Spec.Project)
@@ -527,9 +501,11 @@ func (r *ApplicationSetReconciler) getMinRequeueAfter(applicationSetInfo *argov1
}
func ignoreNotAllowedNamespaces(namespaces []string) predicate.Predicate {
return predicate.NewPredicateFuncs(func(object client.Object) bool {
return utils.IsNamespaceAllowed(namespaces, object.GetNamespace())
})
return predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
return glob.MatchStringInList(namespaces, e.Object.GetNamespace(), false)
},
}
}
func appControllerIndexer(rawObj client.Object) []string {
@@ -553,13 +529,12 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
return fmt.Errorf("error setting up with manager: %w", err)
}
appOwnsHandler := getApplicationOwnsHandler(enableProgressiveSyncs)
appSetOwnsHandler := getApplicationSetOwnsHandler(enableProgressiveSyncs)
ownsHandler := getOwnsHandlerPredicates(enableProgressiveSyncs)
return ctrl.NewControllerManagedBy(mgr).WithOptions(controller.Options{
MaxConcurrentReconciles: maxConcurrentReconciliations,
}).For(&argov1alpha1.ApplicationSet{}, builder.WithPredicates(appSetOwnsHandler)).
Owns(&argov1alpha1.Application{}, builder.WithPredicates(appOwnsHandler)).
}).For(&argov1alpha1.ApplicationSet{}).
Owns(&argov1alpha1.Application{}, builder.WithPredicates(ownsHandler)).
WithEventFilter(ignoreNotAllowedNamespaces(r.ApplicationSetNamespaces)).
Watches(
&corev1.Secret{},
@@ -567,9 +542,29 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
Client: mgr.GetClient(),
Log: log.WithField("type", "createSecretEventHandler"),
}).
// TODO: also watch Applications and respond on changes if we own them.
Complete(r)
}
func (r *ApplicationSetReconciler) updateCache(ctx context.Context, obj client.Object, logger *log.Entry) {
informer, err := r.Cache.GetInformer(ctx, obj)
if err != nil {
logger.Errorf("failed to get informer: %v", err)
return
}
// The controller runtime abstracts away informer creation,
// so unfortunately we could not find any other way to access the informer store.
k8sInformer, ok := informer.(k8scache.SharedInformer)
if !ok {
logger.Error("informer is not a kubernetes informer")
return
}
if err := k8sInformer.GetStore().Update(obj); err != nil {
logger.Errorf("failed to update cache: %v", err)
return
}
}
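
updateCache reaches through the controller-runtime informer to its underlying client-go store so that an object the controller has just written is visible to cached reads immediately, without waiting for the watch event. A minimal sketch of the store operations involved; a standalone cache.NewStore stands in here for the SharedInformer's store:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// A SharedInformer exposes the same Store interface via GetStore();
	// here we build one directly so the example runs without a cluster.
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Namespace: "argocd", Name: "example"},
		Data:       map[string]string{"revision": "old"},
	}
	_ = store.Add(cm)

	// Pretend we just updated the object through the API server: writing the
	// fresh copy into the store keeps cached reads from returning stale data.
	updated := cm.DeepCopy()
	updated.Data["revision"] = "new"
	if err := store.Update(updated); err != nil {
		fmt.Println("failed to update cache:", err)
		return
	}

	obj, _, _ := store.GetByKey("argocd/example")
	fmt.Println("cached revision:", obj.(*corev1.ConfigMap).Data["revision"])
}
```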
// createOrUpdateInCluster will create / update application resources in the cluster.
// - For new applications, it will call create
// - For existing application, it will call update
@@ -625,7 +620,7 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
preservedAnnotations = append(preservedAnnotations, defaultPreservedAnnotations...)
for _, key := range preservedAnnotations {
if state, exists := found.Annotations[key]; exists {
if state, exists := found.ObjectMeta.Annotations[key]; exists {
if generatedApp.Annotations == nil {
generatedApp.Annotations = map[string]string{}
}
@@ -634,7 +629,7 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
}
for _, key := range preservedLabels {
if state, exists := found.Labels[key]; exists {
if state, exists := found.ObjectMeta.Labels[key]; exists {
if generatedApp.Labels == nil {
generatedApp.Labels = map[string]string{}
}
@@ -644,7 +639,7 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
// Preserve post-delete finalizers:
// https://github.com/argoproj/argo-cd/issues/17181
for _, finalizer := range found.Finalizers {
for _, finalizer := range found.ObjectMeta.Finalizers {
if strings.HasPrefix(finalizer, argov1alpha1.PostDeleteFinalizerName) {
if generatedApp.Finalizers == nil {
generatedApp.Finalizers = []string{}
@@ -653,10 +648,10 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
}
}
found.Annotations = generatedApp.Annotations
found.ObjectMeta.Annotations = generatedApp.Annotations
found.Finalizers = generatedApp.Finalizers
found.Labels = generatedApp.Labels
found.ObjectMeta.Finalizers = generatedApp.Finalizers
found.ObjectMeta.Labels = generatedApp.Labels
return controllerutil.SetControllerReference(&applicationSet, found, r.Scheme)
})
@@ -667,6 +662,7 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
}
continue
}
r.updateCache(ctx, found, appLog)
if action != controllerutil.OperationResultNone {
// Don't pollute etcd with "unchanged Application" events
@@ -710,7 +706,7 @@ func (r *ApplicationSetReconciler) createInCluster(ctx context.Context, logCtx *
func (r *ApplicationSetReconciler) getCurrentApplications(ctx context.Context, applicationSet argov1alpha1.ApplicationSet) ([]argov1alpha1.Application, error) {
var current argov1alpha1.ApplicationList
err := r.List(ctx, &current, client.MatchingFields{".metadata.controller": applicationSet.Name}, client.InNamespace(applicationSet.Namespace))
err := r.Client.List(ctx, &current, client.MatchingFields{".metadata.controller": applicationSet.Name}, client.InNamespace(applicationSet.Namespace))
if err != nil {
return nil, fmt.Errorf("error retrieving applications: %w", err)
}
@@ -721,6 +717,9 @@ func (r *ApplicationSetReconciler) getCurrentApplications(ctx context.Context, a
// deleteInCluster will delete Applications that are currently on the cluster, but not in appList.
// The function must be called after all generators have been called and have generated applications
func (r *ApplicationSetReconciler) deleteInCluster(ctx context.Context, logCtx *log.Entry, applicationSet argov1alpha1.ApplicationSet, desiredApplications []argov1alpha1.Application) error {
// settingsMgr := settings.NewSettingsManager(context.TODO(), r.KubeClientset, applicationSet.Namespace)
// argoDB := db.NewDB(applicationSet.Namespace, settingsMgr, r.KubeClientset)
// clusterList, err := argoDB.ListClusters(ctx)
clusterList, err := utils.ListClusters(ctx, r.KubeClientset, r.ArgoCDNamespace)
if err != nil {
return fmt.Errorf("error listing clusters: %w", err)
@@ -755,7 +754,7 @@ func (r *ApplicationSetReconciler) deleteInCluster(ctx context.Context, logCtx *
continue
}
err = r.Delete(ctx, &app)
err = r.Client.Delete(ctx, &app)
if err != nil {
logCtx.WithError(err).Error("failed to delete Application")
if firstError != nil {
@@ -827,9 +826,10 @@ func (r *ApplicationSetReconciler) removeFinalizerOnInvalidDestination(ctx conte
if log.IsLevelEnabled(log.DebugLevel) {
utils.LogPatch(appLog, patch, updated)
}
if err := r.Patch(ctx, updated, patch); err != nil {
if err := r.Client.Patch(ctx, updated, patch); err != nil {
return fmt.Errorf("error updating finalizers: %w", err)
}
r.updateCache(ctx, updated, appLog)
// Application must have updated list of finalizers
updated.DeepCopyInto(app)
@@ -849,7 +849,7 @@ func (r *ApplicationSetReconciler) removeOwnerReferencesOnDeleteAppSet(ctx conte
for _, app := range applications {
app.SetOwnerReferences([]metav1.OwnerReference{})
err := r.Update(ctx, &app)
err := r.Client.Update(ctx, &app)
if err != nil {
return fmt.Errorf("error updating application: %w", err)
}
@@ -859,9 +859,12 @@ func (r *ApplicationSetReconciler) removeOwnerReferencesOnDeleteAppSet(ctx conte
}
func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context, logCtx *log.Entry, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application, appMap map[string]argov1alpha1.Application) (map[string]bool, error) {
appDependencyList, appStepMap := r.buildAppDependencyList(logCtx, appset, desiredApplications)
appDependencyList, appStepMap, err := r.buildAppDependencyList(logCtx, appset, desiredApplications)
if err != nil {
return nil, fmt.Errorf("failed to build app dependency list: %w", err)
}
_, err := r.updateApplicationSetApplicationStatus(ctx, logCtx, &appset, applications, appStepMap)
_, err = r.updateApplicationSetApplicationStatus(ctx, logCtx, &appset, applications, appStepMap)
if err != nil {
return nil, fmt.Errorf("failed to update applicationset app status: %w", err)
}
@@ -871,27 +874,34 @@ func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context,
logCtx.Infof("step %v: %+v", i+1, step)
}
appSyncMap := r.buildAppSyncMap(appset, appDependencyList, appMap)
appSyncMap, err := r.buildAppSyncMap(ctx, appset, appDependencyList, appMap)
if err != nil {
return nil, fmt.Errorf("failed to build app sync map: %w", err)
}
logCtx.Infof("Application allowed to sync before maxUpdate?: %+v", appSyncMap)
_, err = r.updateApplicationSetApplicationStatusProgress(ctx, logCtx, &appset, appSyncMap, appStepMap)
_, err = r.updateApplicationSetApplicationStatusProgress(ctx, logCtx, &appset, appSyncMap, appStepMap, appMap)
if err != nil {
return nil, fmt.Errorf("failed to update applicationset application status progress: %w", err)
}
_ = r.updateApplicationSetApplicationStatusConditions(ctx, &appset)
_, err = r.updateApplicationSetApplicationStatusConditions(ctx, &appset)
if err != nil {
return nil, fmt.Errorf("failed to update applicationset application status conditions: %w", err)
}
return appSyncMap, nil
}
// this list tracks which Applications belong to each RollingUpdate step
func (r *ApplicationSetReconciler) buildAppDependencyList(logCtx *log.Entry, applicationSet argov1alpha1.ApplicationSet, applications []argov1alpha1.Application) ([][]string, map[string]int) {
func (r *ApplicationSetReconciler) buildAppDependencyList(logCtx *log.Entry, applicationSet argov1alpha1.ApplicationSet, applications []argov1alpha1.Application) ([][]string, map[string]int, error) {
if applicationSet.Spec.Strategy == nil || applicationSet.Spec.Strategy.Type == "" || applicationSet.Spec.Strategy.Type == "AllAtOnce" {
return [][]string{}, map[string]int{}
return [][]string{}, map[string]int{}, nil
}
steps := []argov1alpha1.ApplicationSetRolloutStep{}
if progressiveSyncsRollingSyncStrategyEnabled(&applicationSet) {
if progressiveSyncsStrategyEnabled(&applicationSet, "RollingSync") {
steps = applicationSet.Spec.Strategy.RollingSync.Steps
}
@@ -932,7 +942,7 @@ func (r *ApplicationSetReconciler) buildAppDependencyList(logCtx *log.Entry, app
}
}
return appDependencyList, appStepMap
return appDependencyList, appStepMap, nil
}
func labelMatchedExpression(logCtx *log.Entry, val string, matchExpression argov1alpha1.ApplicationMatchExpression) bool {
@@ -956,7 +966,7 @@ func labelMatchedExpression(logCtx *log.Entry, val string, matchExpression argov
}
// this map is used to determine which stage of Applications are ready to be updated in the reconciler loop
func (r *ApplicationSetReconciler) buildAppSyncMap(applicationSet argov1alpha1.ApplicationSet, appDependencyList [][]string, appMap map[string]argov1alpha1.Application) map[string]bool {
func (r *ApplicationSetReconciler) buildAppSyncMap(ctx context.Context, applicationSet argov1alpha1.ApplicationSet, appDependencyList [][]string, appMap map[string]argov1alpha1.Application) (map[string]bool, error) {
appSyncMap := map[string]bool{}
syncEnabled := true
@@ -993,11 +1003,11 @@ func (r *ApplicationSetReconciler) buildAppSyncMap(applicationSet argov1alpha1.A
}
}
return appSyncMap
return appSyncMap, nil
}
func appSyncEnabledForNextStep(appset *argov1alpha1.ApplicationSet, app argov1alpha1.Application, appStatus argov1alpha1.ApplicationSetApplicationStatus) bool {
if progressiveSyncsRollingSyncStrategyEnabled(appset) {
if progressiveSyncsStrategyEnabled(appset, "RollingSync") {
// we still need to complete the current step if the Application is not yet Healthy or there are still pending Application changes
return isApplicationHealthy(app) && appStatus.Status == "Healthy"
}
@@ -1005,14 +1015,16 @@ func appSyncEnabledForNextStep(appset *argov1alpha1.ApplicationSet, app argov1al
return true
}
func isRollingSyncStrategy(appset *argov1alpha1.ApplicationSet) bool {
// It's only RollingSync if the type specifically sets it
return appset.Spec.Strategy != nil && appset.Spec.Strategy.Type == "RollingSync" && appset.Spec.Strategy.RollingSync != nil
}
func progressiveSyncsStrategyEnabled(appset *argov1alpha1.ApplicationSet, strategyType string) bool {
if appset.Spec.Strategy == nil || appset.Spec.Strategy.Type != strategyType {
return false
}
func progressiveSyncsRollingSyncStrategyEnabled(appset *argov1alpha1.ApplicationSet) bool {
// ProgressiveSync is enabled if the strategy is set to `RollingSync` + steps slice is not empty
return isRollingSyncStrategy(appset) && len(appset.Spec.Strategy.RollingSync.Steps) > 0
if strategyType == "RollingSync" && appset.Spec.Strategy.RollingSync == nil {
return false
}
return true
}
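
Both variants of the strategy check only treat an ApplicationSet as rolling-sync enabled when the strategy type is explicitly RollingSync and the RollingSync block is present (the newer helper also requires at least one step). A self-contained sketch of that predicate using simplified stand-in types rather than the real argov1alpha1 API:

```go
package main

import "fmt"

// Simplified stand-ins for the ApplicationSet strategy types; the real
// definitions live in pkg/apis/application/v1alpha1.
type rolloutStep struct{ MaxUpdate int }

type rollingSync struct{ Steps []rolloutStep }

type strategy struct {
	Type        string
	RollingSync *rollingSync
}

type appSet struct{ Strategy *strategy }

// isRollingSync mirrors the check: the type must explicitly be RollingSync
// and the RollingSync block must be present.
func isRollingSync(a *appSet) bool {
	return a.Strategy != nil && a.Strategy.Type == "RollingSync" && a.Strategy.RollingSync != nil
}

// rollingSyncEnabled additionally requires at least one step, matching the
// newer progressiveSyncsRollingSyncStrategyEnabled behaviour.
func rollingSyncEnabled(a *appSet) bool {
	return isRollingSync(a) && len(a.Strategy.RollingSync.Steps) > 0
}

func main() {
	allAtOnce := &appSet{Strategy: &strategy{Type: "AllAtOnce"}}
	empty := &appSet{Strategy: &strategy{Type: "RollingSync", RollingSync: &rollingSync{}}}
	stepped := &appSet{Strategy: &strategy{Type: "RollingSync", RollingSync: &rollingSync{Steps: []rolloutStep{{MaxUpdate: 1}}}}}

	fmt.Println(rollingSyncEnabled(allAtOnce)) // false: wrong strategy type
	fmt.Println(rollingSyncEnabled(empty))     // false: no steps defined
	fmt.Println(rollingSyncEnabled(stepped))   // true
}
```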
func isApplicationHealthy(app argov1alpha1.Application) bool {
@@ -1035,16 +1047,6 @@ func statusStrings(app argov1alpha1.Application) (string, string, string) {
return healthStatusString, syncStatusString, operationPhaseString
}
func getAppStep(appName string, appStepMap map[string]int) int {
// if an application is not selected by any match expression, it defaults to step -1
step := -1
if appStep, ok := appStepMap[appName]; ok {
// 1-based indexing
step = appStep + 1
}
return step
}
// check the status of each Application's status and promote Applications to the next status if needed
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
now := metav1.Now()
@@ -1064,24 +1066,23 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
LastTransitionTime: &now,
Message: "No Application status found, defaulting status to Waiting.",
Status: "Waiting",
Step: strconv.Itoa(getAppStep(app.Name, appStepMap)),
Step: fmt.Sprint(appStepMap[app.Name] + 1),
TargetRevisions: app.Status.GetRevisions(),
}
} else {
// we have an existing AppStatus
currentAppStatus = applicationSet.Status.ApplicationStatus[idx]
if !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions()) {
currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
// upgrade any existing AppStatus that might have been set by an older argo-cd version
// note: currentAppStatus.TargetRevisions may have been set to an empty list earlier during migrations,
// to prevent other usages of r.Client.Status().Update from failing before reaching here.
if currentAppStatus.TargetRevisions == nil || len(currentAppStatus.TargetRevisions) == 0 {
currentAppStatus.TargetRevisions = app.Status.GetRevisions()
}
}
if !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions()) {
currentAppStatus.TargetRevisions = app.Status.GetRevisions()
currentAppStatus.Status = "Waiting"
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
}
appOutdated := false
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
if progressiveSyncsStrategyEnabled(applicationSet, "RollingSync") {
appOutdated = syncStatusString == "OutOfSync"
}
@@ -1090,22 +1091,32 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Waiting"
currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
currentAppStatus.TargetRevisions = app.Status.GetRevisions()
}
if currentAppStatus.Status == "Pending" {
if !appOutdated && operationPhaseString == "Succeeded" {
logCtx.Infof("Application %v has completed a sync successfully, updating its ApplicationSet status to Progressing", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
if operationPhaseString == "Succeeded" {
revisions := []string{}
if len(app.Status.OperationState.SyncResult.Revisions) > 0 {
revisions = app.Status.OperationState.SyncResult.Revisions
} else if app.Status.OperationState.SyncResult.Revision != "" {
revisions = append(revisions, app.Status.OperationState.SyncResult.Revision)
}
if reflect.DeepEqual(currentAppStatus.TargetRevisions, revisions) {
logCtx.Infof("Application %v has completed a sync successfully, updating its ApplicationSet status to Progressing", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing."
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
}
} else if operationPhaseString == "Running" || healthStatusString == "Progressing" {
logCtx.Infof("Application %v has entered Progressing status, updating its ApplicationSet status to Progressing", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource became Progressing, updating status from Pending to Progressing."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
}
}
@@ -1114,7 +1125,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = healthStatusString
currentAppStatus.Message = "Application resource is already Healthy, updating status from Waiting to Healthy."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
}
if currentAppStatus.Status == "Progressing" && isApplicationHealthy(app) {
@@ -1122,7 +1133,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = healthStatusString
currentAppStatus.Message = "Application resource became Healthy, updating status from Progressing to Healthy."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
}
appStatuses = append(appStatuses, currentAppStatus)
@@ -1137,18 +1148,20 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
}
// check Applications that are in Waiting status and promote them to Pending if needed
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appStepMap map[string]int, appMap map[string]argov1alpha1.Application) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
now := metav1.Now()
appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applicationSet.Status.ApplicationStatus))
// if we have no RollingUpdate steps, clear out the existing ApplicationStatus entries
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
if applicationSet.Spec.Strategy != nil && applicationSet.Spec.Strategy.Type != "" && applicationSet.Spec.Strategy.Type != "AllAtOnce" {
updateCountMap := []int{}
totalCountMap := []int{}
length := len(applicationSet.Spec.Strategy.RollingSync.Steps)
length := 0
if progressiveSyncsStrategyEnabled(applicationSet, "RollingSync") {
length = len(applicationSet.Spec.Strategy.RollingSync.Steps)
}
for s := 0; s < length; s++ {
updateCountMap = append(updateCountMap, 0)
totalCountMap = append(totalCountMap, 0)
@@ -1158,15 +1171,17 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
for _, appStatus := range applicationSet.Status.ApplicationStatus {
totalCountMap[appStepMap[appStatus.Application]] += 1
if appStatus.Status == "Pending" || appStatus.Status == "Progressing" {
updateCountMap[appStepMap[appStatus.Application]] += 1
if progressiveSyncsStrategyEnabled(applicationSet, "RollingSync") {
if appStatus.Status == "Pending" || appStatus.Status == "Progressing" {
updateCountMap[appStepMap[appStatus.Application]] += 1
}
}
}
for _, appStatus := range applicationSet.Status.ApplicationStatus {
maxUpdateAllowed := true
maxUpdate := &intstr.IntOrString{}
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
if progressiveSyncsStrategyEnabled(applicationSet, "RollingSync") {
maxUpdate = applicationSet.Spec.Strategy.RollingSync.Steps[appStepMap[appStatus.Application]].MaxUpdate
}
@@ -1184,7 +1199,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
if updateCountMap[appStepMap[appStatus.Application]] >= maxUpdateVal {
maxUpdateAllowed = false
logCtx.Infof("Application %v is not allowed to update yet, %v/%v Applications already updating in step %v in AppSet %v", appStatus.Application, updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, getAppStep(appStatus.Application, appStepMap), applicationSet.Name)
logCtx.Infof("Application %v is not allowed to update yet, %v/%v Applications already updating in step %v in AppSet %v", appStatus.Application, updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, appStepMap[appStatus.Application]+1, applicationSet.Name)
}
}
@@ -1193,7 +1208,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
appStatus.LastTransitionTime = &now
appStatus.Status = "Pending"
appStatus.Message = "Application moved to Pending status, watching for the Application resource to start Progressing."
appStatus.Step = strconv.Itoa(getAppStep(appStatus.Application, appStepMap))
appStatus.Step = fmt.Sprint(appStepMap[appStatus.Application] + 1)
updateCountMap[appStepMap[appStatus.Application]] += 1
}
@@ -1210,7 +1225,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
return appStatuses, nil
}
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusConditions(ctx context.Context, applicationSet *argov1alpha1.ApplicationSet) []argov1alpha1.ApplicationSetCondition {
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusConditions(ctx context.Context, applicationSet *argov1alpha1.ApplicationSet) ([]argov1alpha1.ApplicationSetCondition, error) {
appSetProgressing := false
for _, appStatus := range applicationSet.Status.ApplicationStatus {
if appStatus.Status != "Healthy" {
@@ -1235,7 +1250,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusConditio
Message: "ApplicationSet Rollout Rollout started",
Reason: argov1alpha1.ApplicationSetReasonApplicationSetModified,
Status: argov1alpha1.ApplicationSetConditionStatusTrue,
}, true,
}, false,
)
} else if !appSetProgressing && appSetConditionProgressing {
_ = r.setApplicationSetStatusCondition(ctx,
@@ -1245,11 +1260,11 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusConditio
Message: "ApplicationSet Rollout Rollout complete",
Reason: argov1alpha1.ApplicationSetReasonApplicationSetRolloutComplete,
Status: argov1alpha1.ApplicationSetConditionStatusFalse,
}, true,
}, false,
)
}
return applicationSet.Status.Conditions
return applicationSet.Status.Conditions, nil
}
func findApplicationStatusIndex(appStatuses []argov1alpha1.ApplicationSetApplicationStatus, application string) int {
@@ -1275,29 +1290,8 @@ func (r *ApplicationSetReconciler) migrateStatus(ctx context.Context, appset *ar
}
if update {
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
updatedAppset.Status.ApplicationStatus = appset.Status.ApplicationStatus
// Update the newly fetched object with new set of ApplicationStatus
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(appset)
return nil
})
if err != nil && !apierr.IsNotFound(err) {
return fmt.Errorf("unable to set application set condition: %w", err)
if err := r.Client.Status().Update(ctx, appset); err != nil {
return fmt.Errorf("unable to set application set status: %w", err)
}
}
return nil
@@ -1311,35 +1305,22 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
for _, status := range statusMap {
statuses = append(statuses, status)
}
sort.Slice(statuses, func(i, j int) bool {
return statuses[i].Name < statuses[j].Name
})
appset.Status.Resources = statuses
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
updatedAppset.Status.Resources = appset.Status.Resources
// Update the newly fetched object with new status resources
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(appset)
return nil
})
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
err := r.Client.Status().Update(ctx, appset)
if err != nil {
logCtx.Errorf("unable to set application set status: %v", err)
return fmt.Errorf("unable to set application set status: %w", err)
}
if err := r.Get(ctx, namespacedName, appset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
return nil
}
@@ -1374,36 +1355,26 @@ func (r *ApplicationSetReconciler) setAppSetApplicationStatus(ctx context.Contex
for i := range applicationStatuses {
applicationSet.Status.SetApplicationStatus(applicationStatuses[i])
}
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
updatedAppset.Status.ApplicationStatus = applicationSet.Status.ApplicationStatus
// Update the newly fetched object with new set of ApplicationStatus
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(applicationSet)
return nil
})
// Update the newly fetched object with new set of ApplicationStatus
err := r.Client.Status().Update(ctx, applicationSet)
if err != nil {
logCtx.Errorf("unable to set application set status: %v", err)
return fmt.Errorf("unable to set application set status: %w", err)
}
if err := r.Get(ctx, namespacedName, applicationSet); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
}
return nil
}
func (r *ApplicationSetReconciler) syncValidApplications(logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appMap map[string]argov1alpha1.Application, validApps []argov1alpha1.Application) []argov1alpha1.Application {
func (r *ApplicationSetReconciler) syncValidApplications(logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appMap map[string]argov1alpha1.Application, validApps []argov1alpha1.Application) ([]argov1alpha1.Application, error) {
rolloutApps := []argov1alpha1.Application{}
for i := range validApps {
pruneEnabled := false
@@ -1424,15 +1395,15 @@ func (r *ApplicationSetReconciler) syncValidApplications(logCtx *log.Entry, appl
// check appSyncMap to determine which Applications are ready to be updated and which should be skipped
if appSyncMap[validApps[i].Name] && appMap[validApps[i].Name].Status.Sync.Status == "OutOfSync" && appSetStatusPending {
logCtx.Infof("triggering sync for application: %v, prune enabled: %v", validApps[i].Name, pruneEnabled)
validApps[i] = syncApplication(validApps[i], pruneEnabled)
validApps[i], _ = syncApplication(validApps[i], pruneEnabled)
}
rolloutApps = append(rolloutApps, validApps[i])
}
return rolloutApps
return rolloutApps, nil
}
// used by the RollingSync Progressive Sync strategy to trigger a sync of a particular Application resource
func syncApplication(application argov1alpha1.Application, prune bool) argov1alpha1.Application {
func syncApplication(application argov1alpha1.Application, prune bool) (argov1alpha1.Application, error) {
operation := argov1alpha1.Operation{
InitiatedBy: argov1alpha1.OperationInitiator{
Username: "applicationset-controller",
@@ -1458,10 +1429,10 @@ func syncApplication(application argov1alpha1.Application, prune bool) argov1alp
}
application.Operation = &operation
return application
return application, nil
}
func getApplicationOwnsHandler(enableProgressiveSyncs bool) predicate.Funcs {
func getOwnsHandlerPredicates(enableProgressiveSyncs bool) predicate.Funcs {
return predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
// if we are the owner and there is a create event, we most likely created it and do not need to
@@ -1498,8 +1469,8 @@ func getApplicationOwnsHandler(enableProgressiveSyncs bool) predicate.Funcs {
if !isApp {
return false
}
requeue := shouldRequeueForApplication(appOld, appNew, enableProgressiveSyncs)
logCtx.WithField("requeue", requeue).Debugf("requeue caused by application %s", appNew.Name)
requeue := shouldRequeueApplicationSet(appOld, appNew, enableProgressiveSyncs)
logCtx.WithField("requeue", requeue).Debugf("requeue: %t caused by application %s\n", requeue, appNew.Name)
return requeue
},
GenericFunc: func(e event.GenericEvent) bool {
@@ -1516,13 +1487,13 @@ func getApplicationOwnsHandler(enableProgressiveSyncs bool) predicate.Funcs {
}
}
// shouldRequeueForApplication determines when we want to requeue an ApplicationSet for reconciling based on an owned
// shouldRequeueApplicationSet determines when we want to requeue an ApplicationSet for reconciling based on an owned
// application change
// The applicationset controller owns a subset of the Application CR.
// We do not need to re-reconcile if parts of the application change outside the applicationset's control.
// An example being, Application.ApplicationStatus.ReconciledAt which gets updated by the application controller.
// Additionally, Application.ObjectMeta.ResourceVersion and Application.ObjectMeta.Generation which are set by K8s.
func shouldRequeueForApplication(appOld *argov1alpha1.Application, appNew *argov1alpha1.Application, enableProgressiveSyncs bool) bool {
func shouldRequeueApplicationSet(appOld *argov1alpha1.Application, appNew *argov1alpha1.Application, enableProgressiveSyncs bool) bool {
if appOld == nil || appNew == nil {
return false
}
@@ -1532,9 +1503,9 @@ func shouldRequeueForApplication(appOld *argov1alpha1.Application, appNew *argov
// https://pkg.go.dev/reflect#DeepEqual
// ApplicationDestination has an unexported field so we can just use the == for comparison
if !cmp.Equal(appOld.Spec, appNew.Spec, cmpopts.EquateEmpty(), cmpopts.EquateComparable(argov1alpha1.ApplicationDestination{})) ||
!cmp.Equal(appOld.GetAnnotations(), appNew.GetAnnotations(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appOld.GetLabels(), appNew.GetLabels(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appOld.GetFinalizers(), appNew.GetFinalizers(), cmpopts.EquateEmpty()) {
!cmp.Equal(appOld.ObjectMeta.GetAnnotations(), appNew.ObjectMeta.GetAnnotations(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appOld.ObjectMeta.GetLabels(), appNew.ObjectMeta.GetLabels(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appOld.ObjectMeta.GetFinalizers(), appNew.ObjectMeta.GetFinalizers(), cmpopts.EquateEmpty()) {
return true
}
@@ -1555,90 +1526,4 @@ func shouldRequeueForApplication(appOld *argov1alpha1.Application, appNew *argov
return false
}
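For orientation, a minimal sketch (assumed, not part of this compare) of how these predicates and requeue helpers are typically wired into the controller via controller-runtime's builder. The function names are taken from the hunks in this section; note that the two sides of the compare name the Application predicate differently (getApplicationOwnsHandler vs. getOwnsHandlerPredicates).

// Assumed imports: ctrl "sigs.k8s.io/controller-runtime", "sigs.k8s.io/controller-runtime/pkg/builder".
func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProgressiveSyncs bool) error {
	return ctrl.NewControllerManagedBy(mgr).
		// Reconcile ApplicationSets; update events are filtered by shouldRequeueForApplicationSet.
		For(&argov1alpha1.ApplicationSet{}, builder.WithPredicates(getApplicationSetOwnsHandler(enableProgressiveSyncs))).
		// Watch owned Applications; shouldRequeueForApplication decides which changes requeue the parent.
		Owns(&argov1alpha1.Application{}, builder.WithPredicates(getApplicationOwnsHandler(enableProgressiveSyncs))).
		Complete(r)
}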
func getApplicationSetOwnsHandler(enableProgressiveSyncs bool) predicate.Funcs {
return predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
appSet, isApp := e.Object.(*argov1alpha1.ApplicationSet)
if !isApp {
return false
}
log.WithField("applicationset", appSet.QualifiedName()).Debugln("received create event")
// Always queue a new applicationset
return true
},
DeleteFunc: func(e event.DeleteEvent) bool {
appSet, isApp := e.Object.(*argov1alpha1.ApplicationSet)
if !isApp {
return false
}
log.WithField("applicationset", appSet.QualifiedName()).Debugln("received delete event")
// Always queue for the deletion of an applicationset
return true
},
UpdateFunc: func(e event.UpdateEvent) bool {
appSetOld, isAppSet := e.ObjectOld.(*argov1alpha1.ApplicationSet)
if !isAppSet {
return false
}
appSetNew, isAppSet := e.ObjectNew.(*argov1alpha1.ApplicationSet)
if !isAppSet {
return false
}
requeue := shouldRequeueForApplicationSet(appSetOld, appSetNew, enableProgressiveSyncs)
log.WithField("applicationset", appSetNew.QualifiedName()).
WithField("requeue", requeue).Debugln("received update event")
return requeue
},
GenericFunc: func(e event.GenericEvent) bool {
appSet, isApp := e.Object.(*argov1alpha1.ApplicationSet)
if !isApp {
return false
}
log.WithField("applicationset", appSet.QualifiedName()).Debugln("received generic event")
// Always queue for a generic event of an applicationset
return true
},
}
}
// shouldRequeueForApplicationSet determines when we need to requeue an applicationset
func shouldRequeueForApplicationSet(appSetOld, appSetNew *argov1alpha1.ApplicationSet, enableProgressiveSyncs bool) bool {
if appSetOld == nil || appSetNew == nil {
return false
}
// Requeue if any ApplicationStatus.Status changed for Progressive sync strategy
if enableProgressiveSyncs {
if !cmp.Equal(appSetOld.Status.ApplicationStatus, appSetNew.Status.ApplicationStatus, cmpopts.EquateEmpty()) {
return true
}
}
// only compare the applicationset spec, annotations, labels, finalizers, and deletionTimestamp, specifically avoiding
// the status field. status is owned by the applicationset controller,
// and we do not need to requeue when it does bookkeeping
// NB: the ApplicationDestination comes from the ApplicationSpec being embedded
// in the ApplicationSetTemplate from the generators
if !cmp.Equal(appSetOld.Spec, appSetNew.Spec, cmpopts.EquateEmpty(), cmpopts.EquateComparable(argov1alpha1.ApplicationDestination{})) ||
!cmp.Equal(appSetOld.GetLabels(), appSetNew.GetLabels(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appSetOld.GetFinalizers(), appSetNew.GetFinalizers(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appSetOld.DeletionTimestamp, appSetNew.DeletionTimestamp, cmpopts.EquateEmpty()) {
return true
}
// Requeue only when the refresh annotation is newly added to the ApplicationSet.
// Changes to other annotations made simultaneously might be missed, but such cases are rare.
if !cmp.Equal(appSetOld.GetAnnotations(), appSetNew.GetAnnotations(), cmpopts.EquateEmpty()) {
_, oldHasRefreshAnnotation := appSetOld.Annotations[common.AnnotationApplicationSetRefresh]
_, newHasRefreshAnnotation := appSetNew.Annotations[common.AnnotationApplicationSetRefresh]
if oldHasRefreshAnnotation && !newHasRefreshAnnotation {
return false
}
return true
}
return false
}
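As an illustrative usage sketch of the refresh-annotation rule above (assumed, not part of the compare; only the presence of the annotation key matters, its value is arbitrary):

// Assumed imports: fmt, metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
func exampleRefreshAnnotationRequeue() {
	oldSet := &argov1alpha1.ApplicationSet{ObjectMeta: metav1.ObjectMeta{Name: "my-set", Namespace: "argocd"}}
	newSet := oldSet.DeepCopy()
	// Newly adding the refresh annotation requeues the ApplicationSet.
	newSet.Annotations = map[string]string{common.AnnotationApplicationSetRefresh: "true"}
	fmt.Println(shouldRequeueForApplicationSet(oldSet, newSet, false)) // true
	// Removing it again (the controller's own cleanup) does not.
	fmt.Println(shouldRequeueForApplicationSet(newSet, oldSet, false)) // false
}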
var _ handler.EventHandler = &clusterSecretEventHandler{}

File diff suppressed because it is too large

View File

@@ -4,8 +4,6 @@ import (
"context"
"fmt"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
log "github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/types"
@@ -14,7 +12,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/applicationset/generators"
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
@@ -26,31 +24,31 @@ type clusterSecretEventHandler struct {
Client client.Client
}
func (h *clusterSecretEventHandler) Create(ctx context.Context, e event.CreateEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
func (h *clusterSecretEventHandler) Create(ctx context.Context, e event.CreateEvent, q workqueue.RateLimitingInterface) {
h.queueRelatedAppGenerators(ctx, q, e.Object)
}
func (h *clusterSecretEventHandler) Update(ctx context.Context, e event.UpdateEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
func (h *clusterSecretEventHandler) Update(ctx context.Context, e event.UpdateEvent, q workqueue.RateLimitingInterface) {
h.queueRelatedAppGenerators(ctx, q, e.ObjectNew)
}
func (h *clusterSecretEventHandler) Delete(ctx context.Context, e event.DeleteEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
func (h *clusterSecretEventHandler) Delete(ctx context.Context, e event.DeleteEvent, q workqueue.RateLimitingInterface) {
h.queueRelatedAppGenerators(ctx, q, e.Object)
}
func (h *clusterSecretEventHandler) Generic(ctx context.Context, e event.GenericEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
func (h *clusterSecretEventHandler) Generic(ctx context.Context, e event.GenericEvent, q workqueue.RateLimitingInterface) {
h.queueRelatedAppGenerators(ctx, q, e.Object)
}
// addRateLimitingInterface defines the Add method of workqueue.RateLimitingInterface, allowing us to easily mock
// it for testing purposes.
type addRateLimitingInterface[T comparable] interface {
Add(item T)
type addRateLimitingInterface interface {
Add(item interface{})
}
func (h *clusterSecretEventHandler) queueRelatedAppGenerators(ctx context.Context, q addRateLimitingInterface[reconcile.Request], object client.Object) {
func (h *clusterSecretEventHandler) queueRelatedAppGenerators(ctx context.Context, q addRateLimitingInterface, object client.Object) {
// Check for label, lookup all ApplicationSets that might match the cluster, queue them all
if object.GetLabels()[common.LabelKeySecretType] != common.LabelValueSecretTypeCluster {
if object.GetLabels()[generators.ArgoCDSecretTypeLabel] != generators.ArgoCDSecretTypeCluster {
return
}

View File

@@ -4,8 +4,6 @@ import (
"context"
"testing"
argocommon "github.com/argoproj/argo-cd/v2/common"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -18,6 +16,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"github.com/argoproj/argo-cd/v2/applicationset/generators"
argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
@@ -43,7 +42,7 @@ func TestClusterEventHandler(t *testing.T) {
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
},
},
},
@@ -71,7 +70,7 @@ func TestClusterEventHandler(t *testing.T) {
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
},
},
},
@@ -114,7 +113,7 @@ func TestClusterEventHandler(t *testing.T) {
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
},
},
},
@@ -158,7 +157,7 @@ func TestClusterEventHandler(t *testing.T) {
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
},
},
},
@@ -219,7 +218,7 @@ func TestClusterEventHandler(t *testing.T) {
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
},
},
},
@@ -255,7 +254,7 @@ func TestClusterEventHandler(t *testing.T) {
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
},
},
},
@@ -305,7 +304,7 @@ func TestClusterEventHandler(t *testing.T) {
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
},
},
},
@@ -356,7 +355,7 @@ func TestClusterEventHandler(t *testing.T) {
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
},
},
},
@@ -390,7 +389,7 @@ func TestClusterEventHandler(t *testing.T) {
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
},
},
},
@@ -426,7 +425,7 @@ func TestClusterEventHandler(t *testing.T) {
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
},
},
},
@@ -476,7 +475,7 @@ func TestClusterEventHandler(t *testing.T) {
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
},
},
},
@@ -527,7 +526,7 @@ func TestClusterEventHandler(t *testing.T) {
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
},
},
},
@@ -552,18 +551,24 @@ func TestClusterEventHandler(t *testing.T) {
handler.queueRelatedAppGenerators(context.Background(), &mockAddRateLimitingInterface, &test.secret)
assert.False(t, mockAddRateLimitingInterface.errorOccurred)
assert.ElementsMatch(t, mockAddRateLimitingInterface.addedItems, test.expectedRequests)
})
}
}
// Add checks the type, and adds it to the internal list of received additions
func (obj *mockAddRateLimitingInterface) Add(item reconcile.Request) {
obj.addedItems = append(obj.addedItems, item)
func (obj *mockAddRateLimitingInterface) Add(item interface{}) {
if req, ok := item.(ctrl.Request); ok {
obj.addedItems = append(obj.addedItems, req)
} else {
obj.errorOccurred = true
}
}
type mockAddRateLimitingInterface struct {
addedItems []reconcile.Request
errorOccurred bool
addedItems []ctrl.Request
}
func TestNestedGeneratorHasClusterGenerator_NestedClusterGenerator(t *testing.T) {

View File

@@ -17,7 +17,6 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"github.com/argoproj/argo-cd/v2/applicationset/generators"
appsetmetrics "github.com/argoproj/argo-cd/v2/applicationset/metrics"
"github.com/argoproj/argo-cd/v2/applicationset/services/mocks"
argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
@@ -57,11 +56,11 @@ func TestRequeueAfter(t *testing.T) {
},
}
fakeDynClient := dynfake.NewSimpleDynamicClientWithCustomListKinds(runtime.NewScheme(), gvrToListKind, duckType)
scmConfig := generators.NewSCMConfig("", []string{""}, true, nil, true)
scmConfig := generators.NewSCMConfig("", []string{""}, true, nil)
terminalGenerators := map[string]generators.Generator{
"List": generators.NewListGenerator(),
"Clusters": generators.NewClusterGenerator(k8sClient, ctx, appClientset, "argocd"),
"Git": generators.NewGitGenerator(mockServer, "namespace"),
"Git": generators.NewGitGenerator(mockServer),
"SCMProvider": generators.NewSCMProviderGenerator(fake.NewClientBuilder().WithObjects(&corev1.Secret{}).Build(), scmConfig),
"ClusterDecisionResource": generators.NewDuckTypeGenerator(ctx, fakeDynClient, appClientset, "argocd"),
"PullRequest": generators.NewPullRequestGenerator(k8sClient, scmConfig),
@@ -90,18 +89,15 @@ func TestRequeueAfter(t *testing.T) {
}
client := fake.NewClientBuilder().WithScheme(scheme).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics(client)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(0),
Generators: topLevelGenerators,
Metrics: metrics,
}
type args struct {
appset *argov1alpha1.ApplicationSet
requeueAfterOverride string
appset *argov1alpha1.ApplicationSet
}
tests := []struct {
name string
@@ -109,13 +105,11 @@ func TestRequeueAfter(t *testing.T) {
want time.Duration
wantErr assert.ErrorAssertionFunc
}{
{name: "Cluster", args: args{
appset: &argov1alpha1.ApplicationSet{
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{{Clusters: &argov1alpha1.ClusterGenerator{}}},
},
}, requeueAfterOverride: "",
}, want: generators.NoRequeueAfter, wantErr: assert.NoError},
{name: "Cluster", args: args{appset: &argov1alpha1.ApplicationSet{
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{{Clusters: &argov1alpha1.ClusterGenerator{}}},
},
}}, want: generators.NoRequeueAfter, wantErr: assert.NoError},
{name: "ClusterMergeNested", args: args{&argov1alpha1.ApplicationSet{
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{
@@ -130,7 +124,7 @@ func TestRequeueAfter(t *testing.T) {
}},
},
},
}, ""}, want: generators.DefaultRequeueAfterSeconds, wantErr: assert.NoError},
}}, want: generators.DefaultRequeueAfterSeconds, wantErr: assert.NoError},
{name: "ClusterMatrixNested", args: args{&argov1alpha1.ApplicationSet{
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{
@@ -145,65 +139,15 @@ func TestRequeueAfter(t *testing.T) {
}},
},
},
}, ""}, want: generators.DefaultRequeueAfterSeconds, wantErr: assert.NoError},
}}, want: generators.DefaultRequeueAfterSeconds, wantErr: assert.NoError},
{name: "ListGenerator", args: args{appset: &argov1alpha1.ApplicationSet{
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{{List: &argov1alpha1.ListGenerator{}}},
},
}}, want: generators.NoRequeueAfter, wantErr: assert.NoError},
{name: "DuckGenerator", args: args{appset: &argov1alpha1.ApplicationSet{
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{{ClusterDecisionResource: &argov1alpha1.DuckTypeGenerator{}}},
},
}}, want: generators.DefaultRequeueAfterSeconds, wantErr: assert.NoError},
{name: "OverrideRequeueDuck", args: args{
appset: &argov1alpha1.ApplicationSet{
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{{ClusterDecisionResource: &argov1alpha1.DuckTypeGenerator{}}},
},
}, requeueAfterOverride: "1h",
}, want: 1 * time.Hour, wantErr: assert.NoError},
{name: "OverrideRequeueGit", args: args{&argov1alpha1.ApplicationSet{
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{
{Git: &argov1alpha1.GitGenerator{}},
},
},
}, "1h"}, want: 1 * time.Hour, wantErr: assert.NoError},
{name: "OverrideRequeueMatrix", args: args{&argov1alpha1.ApplicationSet{
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{
{Clusters: &argov1alpha1.ClusterGenerator{}},
{Merge: &argov1alpha1.MergeGenerator{
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
{
Clusters: &argov1alpha1.ClusterGenerator{},
Git: &argov1alpha1.GitGenerator{},
},
},
}},
},
},
}, "5m"}, want: 5 * time.Minute, wantErr: assert.NoError},
{name: "OverrideRequeueMerge", args: args{&argov1alpha1.ApplicationSet{
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{
{Clusters: &argov1alpha1.ClusterGenerator{}},
{Merge: &argov1alpha1.MergeGenerator{
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
{
Clusters: &argov1alpha1.ClusterGenerator{},
Git: &argov1alpha1.GitGenerator{},
},
},
}},
},
},
}, "12s"}, want: 12 * time.Second, wantErr: assert.NoError},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Setenv("ARGOCD_APPLICATIONSET_CONTROLLER_REQUEUE_AFTER", tt.args.requeueAfterOverride)
assert.Equalf(t, tt.want, r.getMinRequeueAfter(tt.args.appset), "getMinRequeueAfter(%v)", tt.args.appset)
})
}

View File

@@ -69,11 +69,9 @@ func GenerateApplications(logCtx *log.Entry, applicationSetInfo argov1alpha1.App
res = append(res, *app)
}
}
if log.IsLevelEnabled(log.DebugLevel) {
logCtx.WithField("generator", requestedGenerator).Debugf("apps from generator: %+v", res)
} else {
logCtx.Infof("generated %d applications", len(res))
}
logCtx.WithField("generator", requestedGenerator).Infof("generated %d applications", len(res))
logCtx.WithField("generator", requestedGenerator).Debugf("apps from generator: %+v", res)
}
return res, applicationSetReason, firstError

View File

@@ -2,7 +2,6 @@ package template
import (
"fmt"
"maps"
"testing"
"github.com/stretchr/testify/mock"
@@ -19,6 +18,7 @@ import (
rendmock "github.com/argoproj/argo-cd/v2/applicationset/utils/mocks"
"github.com/argoproj/argo-cd/v2/pkg/apis/application"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/collections"
)
func TestGenerateApplications(t *testing.T) {
@@ -344,7 +344,7 @@ func TestGenerateAppsUsingPullRequestGenerator(t *testing.T) {
assert.EqualValues(t, cases.expectedApp[0].ObjectMeta.Name, gotApp[0].ObjectMeta.Name)
assert.EqualValues(t, cases.expectedApp[0].Spec.Source.TargetRevision, gotApp[0].Spec.Source.TargetRevision)
assert.EqualValues(t, cases.expectedApp[0].Spec.Destination.Namespace, gotApp[0].Spec.Destination.Namespace)
assert.True(t, maps.Equal(cases.expectedApp[0].ObjectMeta.Labels, gotApp[0].ObjectMeta.Labels))
assert.True(t, collections.StringMapsEqual(cases.expectedApp[0].ObjectMeta.Labels, gotApp[0].ObjectMeta.Labels))
})
}
}

View File

@@ -15,10 +15,14 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/argoproj/argo-cd/v2/applicationset/utils"
"github.com/argoproj/argo-cd/v2/common"
argoappsetv1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
const (
ArgoCDSecretTypeLabel = "argocd.argoproj.io/secret-type"
ArgoCDSecretTypeCluster = "cluster"
)
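As a hedged example (assumed, not from the compare), a cluster Secret shaped the way this generator and the test fixtures elsewhere in this compare expect, carrying the secret-type label declared above:

// Assumed imports: corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
secret := &corev1.Secret{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "staging-01",
		Namespace: "argocd",
		// Same label either way it is spelled in this compare:
		// generators.ArgoCDSecretTypeLabel or common.LabelKeySecretType.
		Labels: map[string]string{"argocd.argoproj.io/secret-type": "cluster"},
	},
	Data: map[string][]byte{
		"name":   []byte("staging-01"),
		"server": []byte("https://staging-01.example.com"),
		"config": []byte("{}"),
	},
}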
var _ Generator = (*ClusterGenerator)(nil)
// ClusterGenerator generates Applications for some or all clusters registered with ArgoCD.
@@ -48,7 +52,7 @@ func NewClusterGenerator(c client.Client, ctx context.Context, clientset kuberne
// GetRequeueAfter never requeues for the cluster generator because the `clusterSecretEventHandler` will requeue the appsets
// when the cluster secrets change
func (g *ClusterGenerator) GetRequeueAfter(_ *argoappsetv1alpha1.ApplicationSetGenerator) time.Duration {
func (g *ClusterGenerator) GetRequeueAfter(appSetGenerator *argoappsetv1alpha1.ApplicationSetGenerator) time.Duration {
return NoRequeueAfter
}
@@ -57,7 +61,6 @@ func (g *ClusterGenerator) GetTemplate(appSetGenerator *argoappsetv1alpha1.Appli
}
func (g *ClusterGenerator) GenerateParams(appSetGenerator *argoappsetv1alpha1.ApplicationSetGenerator, appSet *argoappsetv1alpha1.ApplicationSet, _ client.Client) ([]map[string]interface{}, error) {
logCtx := log.WithField("applicationset", appSet.GetName()).WithField("namespace", appSet.GetNamespace())
if appSetGenerator == nil {
return nil, EmptyAppSetGeneratorError
}
@@ -80,7 +83,7 @@ func (g *ClusterGenerator) GenerateParams(appSetGenerator *argoappsetv1alpha1.Ap
return nil, nil
}
clusterSecrets, err := g.getSecretsByClusterName(logCtx, appSetGenerator)
clusterSecrets, err := g.getSecretsByClusterName(appSetGenerator)
if err != nil {
return nil, fmt.Errorf("error getting cluster secrets: %w", err)
}
@@ -89,10 +92,6 @@ func (g *ClusterGenerator) GenerateParams(appSetGenerator *argoappsetv1alpha1.Ap
secretsFound := []corev1.Secret{}
isFlatMode := appSetGenerator.Clusters.FlatList
logCtx.Debugf("Using flat mode = %t for cluster generator", isFlatMode)
clustersParams := make([]map[string]interface{}, 0)
for _, cluster := range clustersFromArgoCD.Items {
// If there is a secret for this cluster, then it's a non-local cluster, so it will be
// handled by the next step.
@@ -104,20 +103,15 @@ func (g *ClusterGenerator) GenerateParams(appSetGenerator *argoappsetv1alpha1.Ap
params["name"] = cluster.Name
params["nameNormalized"] = cluster.Name
params["server"] = cluster.Server
params["project"] = ""
err = appendTemplatedValues(appSetGenerator.Clusters.Values, params, appSet.Spec.GoTemplate, appSet.Spec.GoTemplateOptions)
if err != nil {
return nil, fmt.Errorf("error appending templated values for local cluster: %w", err)
}
if isFlatMode {
clustersParams = append(clustersParams, params)
} else {
res = append(res, params)
}
res = append(res, params)
logCtx.WithField("cluster", "local cluster").Info("matched local cluster")
log.WithField("cluster", "local cluster").Info("matched local cluster")
}
}
@@ -129,13 +123,6 @@ func (g *ClusterGenerator) GenerateParams(appSetGenerator *argoappsetv1alpha1.Ap
params["nameNormalized"] = utils.SanitizeName(string(cluster.Data["name"]))
params["server"] = string(cluster.Data["server"])
project, ok := cluster.Data["project"]
if ok {
params["project"] = string(project)
} else {
params["project"] = ""
}
if appSet.Spec.GoTemplate {
meta := map[string]interface{}{}
@@ -162,27 +149,19 @@ func (g *ClusterGenerator) GenerateParams(appSetGenerator *argoappsetv1alpha1.Ap
return nil, fmt.Errorf("error appending templated values for cluster: %w", err)
}
if isFlatMode {
clustersParams = append(clustersParams, params)
} else {
res = append(res, params)
}
res = append(res, params)
logCtx.WithField("cluster", cluster.Name).Debug("matched cluster secret")
log.WithField("cluster", cluster.Name).Info("matched cluster secret")
}
if isFlatMode {
res = append(res, map[string]interface{}{
"clusters": clustersParams,
})
}
return res, nil
}
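To make the FlatList handling in this hunk concrete, a sketch (assumption, not from the compare) of the two parameter shapes GenerateParams can return:

// FlatList = false: one parameter map per matched cluster, so the template is
// rendered once per cluster.
//   []map[string]interface{}{
//       {"name": "staging-01", "nameNormalized": "staging-01", "server": "https://staging-01.example.com", ...},
//       {"name": "production_01/west", "nameNormalized": "production-01-west", "server": "https://production-01.example.com", ...},
//   }
//
// FlatList = true: a single parameter map whose "clusters" key carries every
// matched cluster, so the template is rendered exactly once over the whole list.
//   []map[string]interface{}{
//       {"clusters": []map[string]interface{}{ /* all matched cluster maps */ }},
//   }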
func (g *ClusterGenerator) getSecretsByClusterName(log *log.Entry, appSetGenerator *argoappsetv1alpha1.ApplicationSetGenerator) (map[string]corev1.Secret, error) {
func (g *ClusterGenerator) getSecretsByClusterName(appSetGenerator *argoappsetv1alpha1.ApplicationSetGenerator) (map[string]corev1.Secret, error) {
// List all Clusters:
clusterSecretList := &corev1.SecretList{}
selector := metav1.AddLabelToSelector(&appSetGenerator.Clusters.Selector, common.LabelKeySecretType, common.LabelValueSecretTypeCluster)
selector := metav1.AddLabelToSelector(&appSetGenerator.Clusters.Selector, ArgoCDSecretTypeLabel, ArgoCDSecretTypeCluster)
secretSelector, err := metav1.LabelSelectorAsSelector(selector)
if err != nil {
return nil, fmt.Errorf("error converting label selector: %w", err)
@@ -191,7 +170,7 @@ func (g *ClusterGenerator) getSecretsByClusterName(log *log.Entry, appSetGenerat
if err := g.Client.List(context.Background(), clusterSecretList, client.MatchingLabelsSelector{Selector: secretSelector}); err != nil {
return nil, err
}
log.Debugf("clusters matching labels: %d", len(clusterSecretList.Items))
log.Debug("clusters matching labels", "count", len(clusterSecretList.Items))
res := map[string]corev1.Secret{}

View File

@@ -76,20 +76,18 @@ func TestGenerateParams(t *testing.T) {
},
},
Data: map[string][]byte{
"config": []byte("{}"),
"name": []byte("production_01/west"),
"server": []byte("https://production-01.example.com"),
"project": []byte("prod-project"),
"config": []byte("{}"),
"name": []byte("production_01/west"),
"server": []byte("https://production-01.example.com"),
},
Type: corev1.SecretType("Opaque"),
},
}
testCases := []struct {
name string
selector metav1.LabelSelector
isFlatMode bool
values map[string]string
expected []map[string]interface{}
name string
selector metav1.LabelSelector
values map[string]string
expected []map[string]interface{}
// clientError is true if a k8s client error should be simulated
clientError bool
expectedError error
@@ -107,16 +105,17 @@ func TestGenerateParams(t *testing.T) {
"aaa": "{{ server }}",
"no-op": "{{ this-does-not-exist }}",
}, expected: []map[string]interface{}{
{"values.lol1": "lol", "values.lol2": "{{values.lol1}}{{values.lol1}}", "values.lol3": "{{values.lol2}}{{values.lol2}}{{values.lol2}}", "values.foo": "bar", "values.bar": "{{ metadata.annotations.foo.argoproj.io }}", "values.no-op": "{{ this-does-not-exist }}", "values.bat": "{{ metadata.labels.environment }}", "values.aaa": "https://kubernetes.default.svc", "nameNormalized": "in-cluster", "name": "in-cluster", "server": "https://kubernetes.default.svc", "project": ""},
{
"values.lol1": "lol", "values.lol2": "{{values.lol1}}{{values.lol1}}", "values.lol3": "{{values.lol2}}{{values.lol2}}{{values.lol2}}", "values.foo": "bar", "values.bar": "production", "values.no-op": "{{ this-does-not-exist }}", "values.bat": "production", "values.aaa": "https://production-01.example.com", "name": "production_01/west", "nameNormalized": "production-01-west", "server": "https://production-01.example.com", "metadata.labels.environment": "production", "metadata.labels.org": "bar",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "production", "project": "prod-project",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "production",
},
{
"values.lol1": "lol", "values.lol2": "{{values.lol1}}{{values.lol1}}", "values.lol3": "{{values.lol2}}{{values.lol2}}{{values.lol2}}", "values.foo": "bar", "values.bar": "staging", "values.no-op": "{{ this-does-not-exist }}", "values.bat": "staging", "values.aaa": "https://staging-01.example.com", "name": "staging-01", "nameNormalized": "staging-01", "server": "https://staging-01.example.com", "metadata.labels.environment": "staging", "metadata.labels.org": "foo",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "staging", "project": "",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "staging",
},
{"values.lol1": "lol", "values.lol2": "{{values.lol1}}{{values.lol1}}", "values.lol3": "{{values.lol2}}{{values.lol2}}{{values.lol2}}", "values.foo": "bar", "values.bar": "{{ metadata.annotations.foo.argoproj.io }}", "values.no-op": "{{ this-does-not-exist }}", "values.bat": "{{ metadata.labels.environment }}", "values.aaa": "https://kubernetes.default.svc", "nameNormalized": "in-cluster", "name": "in-cluster", "server": "https://kubernetes.default.svc"},
},
clientError: false,
expectedError: nil,
@@ -132,12 +131,12 @@ func TestGenerateParams(t *testing.T) {
expected: []map[string]interface{}{
{
"name": "production_01/west", "nameNormalized": "production-01-west", "server": "https://production-01.example.com", "metadata.labels.environment": "production", "metadata.labels.org": "bar",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "production", "project": "prod-project",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "production",
},
{
"name": "staging-01", "nameNormalized": "staging-01", "server": "https://staging-01.example.com", "metadata.labels.environment": "staging", "metadata.labels.org": "foo",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "staging", "project": "",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "staging",
},
},
clientError: false,
@@ -156,7 +155,7 @@ func TestGenerateParams(t *testing.T) {
expected: []map[string]interface{}{
{
"values.foo": "bar", "name": "production_01/west", "nameNormalized": "production-01-west", "server": "https://production-01.example.com", "metadata.labels.environment": "production", "metadata.labels.org": "bar",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "production", "project": "prod-project",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "production",
},
},
clientError: false,
@@ -182,11 +181,11 @@ func TestGenerateParams(t *testing.T) {
expected: []map[string]interface{}{
{
"values.foo": "bar", "name": "staging-01", "nameNormalized": "staging-01", "server": "https://staging-01.example.com", "metadata.labels.environment": "staging", "metadata.labels.org": "foo",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "staging", "project": "",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "staging",
},
{
"values.foo": "bar", "name": "production_01/west", "nameNormalized": "production-01-west", "server": "https://production-01.example.com", "metadata.labels.environment": "production", "metadata.labels.org": "bar",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "production", "project": "prod-project",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "production",
},
},
clientError: false,
@@ -215,7 +214,7 @@ func TestGenerateParams(t *testing.T) {
expected: []map[string]interface{}{
{
"values.name": "baz", "name": "staging-01", "nameNormalized": "staging-01", "server": "https://staging-01.example.com", "metadata.labels.environment": "staging", "metadata.labels.org": "foo",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "staging", "project": "",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "staging",
},
},
clientError: false,
@@ -229,74 +228,6 @@ func TestGenerateParams(t *testing.T) {
clientError: true,
expectedError: fmt.Errorf("error getting cluster secrets: could not list Secrets"),
},
{
name: "flat mode without selectors",
selector: metav1.LabelSelector{},
values: map[string]string{
"lol1": "lol",
"lol2": "{{values.lol1}}{{values.lol1}}",
"lol3": "{{values.lol2}}{{values.lol2}}{{values.lol2}}",
"foo": "bar",
"bar": "{{ metadata.annotations.foo.argoproj.io }}",
"bat": "{{ metadata.labels.environment }}",
"aaa": "{{ server }}",
"no-op": "{{ this-does-not-exist }}",
},
expected: []map[string]interface{}{
{
"clusters": []map[string]interface{}{
{"values.lol1": "lol", "values.lol2": "{{values.lol1}}{{values.lol1}}", "values.lol3": "{{values.lol2}}{{values.lol2}}{{values.lol2}}", "values.foo": "bar", "values.bar": "{{ metadata.annotations.foo.argoproj.io }}", "values.no-op": "{{ this-does-not-exist }}", "values.bat": "{{ metadata.labels.environment }}", "values.aaa": "https://kubernetes.default.svc", "nameNormalized": "in-cluster", "name": "in-cluster", "server": "https://kubernetes.default.svc", "project": ""},
{
"values.lol1": "lol", "values.lol2": "{{values.lol1}}{{values.lol1}}", "values.lol3": "{{values.lol2}}{{values.lol2}}{{values.lol2}}", "values.foo": "bar", "values.bar": "production", "values.no-op": "{{ this-does-not-exist }}", "values.bat": "production", "values.aaa": "https://production-01.example.com", "name": "production_01/west", "nameNormalized": "production-01-west", "server": "https://production-01.example.com", "metadata.labels.environment": "production", "metadata.labels.org": "bar",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "production", "project": "prod-project",
},
{
"values.lol1": "lol", "values.lol2": "{{values.lol1}}{{values.lol1}}", "values.lol3": "{{values.lol2}}{{values.lol2}}{{values.lol2}}", "values.foo": "bar", "values.bar": "staging", "values.no-op": "{{ this-does-not-exist }}", "values.bat": "staging", "values.aaa": "https://staging-01.example.com", "name": "staging-01", "nameNormalized": "staging-01", "server": "https://staging-01.example.com", "metadata.labels.environment": "staging", "metadata.labels.org": "foo",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "staging", "project": "",
},
},
},
},
isFlatMode: true,
clientError: false,
expectedError: nil,
},
{
name: "production or staging with flat mode",
selector: metav1.LabelSelector{
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "environment",
Operator: "In",
Values: []string{
"production",
"staging",
},
},
},
},
isFlatMode: true,
values: map[string]string{
"foo": "bar",
},
expected: []map[string]interface{}{
{
"clusters": []map[string]interface{}{
{
"values.foo": "bar", "name": "production_01/west", "nameNormalized": "production-01-west", "server": "https://production-01.example.com", "metadata.labels.environment": "production", "metadata.labels.org": "bar",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "production", "project": "prod-project",
},
{
"values.foo": "bar", "name": "staging-01", "nameNormalized": "staging-01", "server": "https://staging-01.example.com", "metadata.labels.environment": "staging", "metadata.labels.org": "foo",
"metadata.labels.argocd.argoproj.io/secret-type": "cluster", "metadata.annotations.foo.argoproj.io": "staging", "project": "",
},
},
},
},
clientError: false,
expectedError: nil,
},
}
// convert []client.Object to []runtime.Object, for use by kubefake package
@@ -328,7 +259,6 @@ func TestGenerateParams(t *testing.T) {
Clusters: &argoprojiov1alpha1.ClusterGenerator{
Selector: testCase.selector,
Values: testCase.values,
FlatList: testCase.isFlatMode,
},
}, &applicationSetInfo, nil)
@@ -394,11 +324,10 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
},
}
testCases := []struct {
name string
selector metav1.LabelSelector
values map[string]string
isFlatMode bool
expected []map[string]interface{}
name string
selector metav1.LabelSelector
values map[string]string
expected []map[string]interface{}
// clientError is true if a k8s client error should be simulated
clientError bool
expectedError error
@@ -420,7 +349,6 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
"name": "production_01/west",
"nameNormalized": "production-01-west",
"server": "https://production-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"argocd.argoproj.io/secret-type": "cluster",
@@ -446,7 +374,6 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
"name": "staging-01",
"nameNormalized": "staging-01",
"server": "https://staging-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"argocd.argoproj.io/secret-type": "cluster",
@@ -472,7 +399,6 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
"nameNormalized": "in-cluster",
"name": "in-cluster",
"server": "https://kubernetes.default.svc",
"project": "",
"values": map[string]string{
"lol1": "lol",
"lol2": "<no value><no value>",
@@ -501,7 +427,6 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
"name": "production_01/west",
"nameNormalized": "production-01-west",
"server": "https://production-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"argocd.argoproj.io/secret-type": "cluster",
@@ -517,7 +442,6 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
"name": "staging-01",
"nameNormalized": "staging-01",
"server": "https://staging-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"argocd.argoproj.io/secret-type": "cluster",
@@ -548,7 +472,6 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
"name": "production_01/west",
"nameNormalized": "production-01-west",
"server": "https://production-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"argocd.argoproj.io/secret-type": "cluster",
@@ -589,7 +512,6 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
"name": "production_01/west",
"nameNormalized": "production-01-west",
"server": "https://production-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"argocd.argoproj.io/secret-type": "cluster",
@@ -608,7 +530,6 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
"name": "staging-01",
"nameNormalized": "staging-01",
"server": "https://staging-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"argocd.argoproj.io/secret-type": "cluster",
@@ -652,7 +573,6 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
"name": "staging-01",
"nameNormalized": "staging-01",
"server": "https://staging-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"argocd.argoproj.io/secret-type": "cluster",
@@ -679,162 +599,6 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
clientError: true,
expectedError: fmt.Errorf("error getting cluster secrets: could not list Secrets"),
},
{
name: "Clusters with flat list mode and no selector",
selector: metav1.LabelSelector{},
isFlatMode: true,
values: map[string]string{
"lol1": "lol",
"lol2": "{{ .values.lol1 }}{{ .values.lol1 }}",
"lol3": "{{ .values.lol2 }}{{ .values.lol2 }}{{ .values.lol2 }}",
"foo": "bar",
"bar": "{{ if not (empty .metadata) }}{{index .metadata.annotations \"foo.argoproj.io\" }}{{ end }}",
"bat": "{{ if not (empty .metadata) }}{{.metadata.labels.environment}}{{ end }}",
"aaa": "{{ .server }}",
"no-op": "{{ .thisDoesNotExist }}",
},
expected: []map[string]interface{}{
{
"clusters": []map[string]interface{}{
{
"nameNormalized": "in-cluster",
"name": "in-cluster",
"server": "https://kubernetes.default.svc",
"project": "",
"values": map[string]string{
"lol1": "lol",
"lol2": "<no value><no value>",
"lol3": "<no value><no value><no value>",
"foo": "bar",
"bar": "",
"bat": "",
"aaa": "https://kubernetes.default.svc",
"no-op": "<no value>",
},
},
{
"name": "production_01/west",
"nameNormalized": "production-01-west",
"server": "https://production-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"argocd.argoproj.io/secret-type": "cluster",
"environment": "production",
"org": "bar",
},
"annotations": map[string]string{
"foo.argoproj.io": "production",
},
},
"values": map[string]string{
"lol1": "lol",
"lol2": "<no value><no value>",
"lol3": "<no value><no value><no value>",
"foo": "bar",
"bar": "production",
"bat": "production",
"aaa": "https://production-01.example.com",
"no-op": "<no value>",
},
},
{
"name": "staging-01",
"nameNormalized": "staging-01",
"server": "https://staging-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"argocd.argoproj.io/secret-type": "cluster",
"environment": "staging",
"org": "foo",
},
"annotations": map[string]string{
"foo.argoproj.io": "staging",
},
},
"values": map[string]string{
"lol1": "lol",
"lol2": "<no value><no value>",
"lol3": "<no value><no value><no value>",
"foo": "bar",
"bar": "staging",
"bat": "staging",
"aaa": "https://staging-01.example.com",
"no-op": "<no value>",
},
},
},
},
},
clientError: false,
expectedError: nil,
},
{
name: "production or staging with flat mode",
selector: metav1.LabelSelector{
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "environment",
Operator: "In",
Values: []string{
"production",
"staging",
},
},
},
},
isFlatMode: true,
values: map[string]string{
"foo": "bar",
},
expected: []map[string]interface{}{
{
"clusters": []map[string]interface{}{
{
"name": "production_01/west",
"nameNormalized": "production-01-west",
"server": "https://production-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"argocd.argoproj.io/secret-type": "cluster",
"environment": "production",
"org": "bar",
},
"annotations": map[string]string{
"foo.argoproj.io": "production",
},
},
"values": map[string]string{
"foo": "bar",
},
},
{
"name": "staging-01",
"nameNormalized": "staging-01",
"server": "https://staging-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"argocd.argoproj.io/secret-type": "cluster",
"environment": "staging",
"org": "foo",
},
"annotations": map[string]string{
"foo.argoproj.io": "staging",
},
},
"values": map[string]string{
"foo": "bar",
},
},
},
},
},
clientError: false,
expectedError: nil,
},
}
// convert []client.Object to []runtime.Object, for use by kubefake package
@@ -868,7 +632,6 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
Clusters: &argoprojiov1alpha1.ClusterGenerator{
Selector: testCase.selector,
Values: testCase.values,
FlatList: testCase.isFlatMode,
},
}, &applicationSetInfo, nil)

View File

@@ -52,7 +52,7 @@ func (g *DuckTypeGenerator) GetRequeueAfter(appSetGenerator *argoprojiov1alpha1.
return time.Duration(*appSetGenerator.ClusterDecisionResource.RequeueAfterSeconds) * time.Second
}
return getDefaultRequeueAfter()
return DefaultRequeueAfterSeconds
}
func (g *DuckTypeGenerator) GetTemplate(appSetGenerator *argoprojiov1alpha1.ApplicationSetGenerator) *argoprojiov1alpha1.ApplicationSetTemplate {
@@ -218,7 +218,7 @@ func (g *DuckTypeGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.A
res = append(res, params)
}
} else {
log.Warningf("clusterDecisionResource status.%s missing", statusListKey)
log.Warningf("clusterDecisionResource status." + statusListKey + " missing")
return nil, nil
}

View File

@@ -199,7 +199,6 @@ func TestTransForm(t *testing.T) {
"name": "production_01/west",
"nameNormalized": "production-01-west",
"server": "https://production-01.example.com",
"project": "",
}},
},
{
@@ -215,7 +214,6 @@ func TestTransForm(t *testing.T) {
"name": "some-really-long-server-url",
"nameNormalized": "some-really-long-server-url",
"server": "https://some-really-long-url-that-will-exceed-63-characters.com",
"project": "",
}},
},
}
@@ -348,7 +346,7 @@ func getMockClusterGenerator() Generator {
func getMockGitGenerator() Generator {
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "namespace")
gitGenerator := NewGitGenerator(&argoCDServiceMock)
return gitGenerator
}

View File

@@ -24,17 +24,13 @@ import (
var _ Generator = (*GitGenerator)(nil)
type GitGenerator struct {
repos services.Repos
namespace string
repos services.Repos
}
// NewGitGenerator creates a new instance of Git Generator
func NewGitGenerator(repos services.Repos, controllerNamespace string) Generator {
func NewGitGenerator(repos services.Repos) Generator {
g := &GitGenerator{
repos: repos,
namespace: controllerNamespace,
repos: repos,
}
return g
}
@@ -49,7 +45,7 @@ func (g *GitGenerator) GetRequeueAfter(appSetGenerator *argoprojiov1alpha1.Appli
return time.Duration(*appSetGenerator.Git.RequeueAfterSeconds) * time.Second
}
return getDefaultRequeueAfter()
return DefaultRequeueAfterSeconds
}
func (g *GitGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.ApplicationSetGenerator, appSet *argoprojiov1alpha1.ApplicationSet, client client.Client) ([]map[string]interface{}, error) {
@@ -63,25 +59,21 @@ func (g *GitGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.Applic
noRevisionCache := appSet.RefreshRequired()
verifyCommit := false
// When the project field is templated, the contents of the git repo are required to run the git generator and get the templated value,
// but the git generator cannot be called without verifying the commit signature.
// In this case, we skip the signature verification.
if !strings.Contains(appSet.Spec.Template.Spec.Project, "{{") {
project := appSet.Spec.Template.Spec.Project
appProject := &argoprojiov1alpha1.AppProject{}
controllerNamespace := g.namespace
if controllerNamespace == "" {
controllerNamespace = appSet.Namespace
}
if err := client.Get(context.TODO(), types.NamespacedName{Name: project, Namespace: controllerNamespace}, appProject); err != nil {
return nil, fmt.Errorf("error getting project %s: %w", project, err)
}
// we need to verify the signature on the Git revision if GPG is enabled
verifyCommit = len(appProject.Spec.SignatureKeys) > 0 && gpg.IsGPGEnabled()
var project string
if strings.Contains(appSet.Spec.Template.Spec.Project, "{{") {
project = appSetGenerator.Git.Template.Spec.Project
} else {
project = appSet.Spec.Template.Spec.Project
}
appProject := &argoprojiov1alpha1.AppProject{}
if err := client.Get(context.TODO(), types.NamespacedName{Name: appSet.Spec.Template.Spec.Project, Namespace: appSet.Namespace}, appProject); err != nil {
return nil, fmt.Errorf("error getting project %s: %w", project, err)
}
// we need to verify the signature on the Git revision if GPG is enabled
verifyCommit := appProject.Spec.SignatureKeys != nil && len(appProject.Spec.SignatureKeys) > 0 && gpg.IsGPGEnabled()
var err error
var res []map[string]interface{}
if len(appSetGenerator.Git.Directories) != 0 {

View File

@@ -323,7 +323,7 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock)
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -624,7 +624,7 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock)
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -989,7 +989,7 @@ cluster:
argoCDServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock)
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1345,7 +1345,7 @@ cluster:
argoCDServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock)
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1383,114 +1383,3 @@ cluster:
})
}
}
func TestGitGenerator_GenerateParams(t *testing.T) {
cases := []struct {
name string
directories []argoprojiov1alpha1.GitDirectoryGeneratorItem
pathParamPrefix string
repoApps []string
repoPathsError error
repoFileContents map[string][]byte
values map[string]string
expected []map[string]interface{}
expectedError error
appset argoprojiov1alpha1.ApplicationSet
callGetDirectories bool
}{
{
name: "Signature Verification - ignores templated project field",
repoApps: []string{
"app1",
},
repoPathsError: nil,
appset: argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
Namespace: "namespace",
},
Spec: argoprojiov1alpha1.ApplicationSetSpec{
Generators: []argoprojiov1alpha1.ApplicationSetGenerator{{
Git: &argoprojiov1alpha1.GitGenerator{
RepoURL: "RepoURL",
Revision: "Revision",
Directories: []argoprojiov1alpha1.GitDirectoryGeneratorItem{{Path: "*"}},
PathParamPrefix: "",
Values: map[string]string{
"foo": "bar",
},
},
}},
Template: argoprojiov1alpha1.ApplicationSetTemplate{
Spec: argoprojiov1alpha1.ApplicationSpec{
Project: "{{.project}}",
},
},
},
},
callGetDirectories: true,
expected: []map[string]interface{}{{"path": "app1", "path.basename": "app1", "path.basenameNormalized": "app1", "path[0]": "app1", "values.foo": "bar"}},
expectedError: nil,
},
{
name: "Signature Verification - Checks for non-templated project field",
repoApps: []string{
"app1",
},
repoPathsError: nil,
appset: argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
Namespace: "namespace",
},
Spec: argoprojiov1alpha1.ApplicationSetSpec{
Generators: []argoprojiov1alpha1.ApplicationSetGenerator{{
Git: &argoprojiov1alpha1.GitGenerator{
RepoURL: "RepoURL",
Revision: "Revision",
Directories: []argoprojiov1alpha1.GitDirectoryGeneratorItem{{Path: "*"}},
PathParamPrefix: "",
Values: map[string]string{
"foo": "bar",
},
},
}},
Template: argoprojiov1alpha1.ApplicationSetTemplate{
Spec: argoprojiov1alpha1.ApplicationSpec{
Project: "project",
},
},
},
},
callGetDirectories: false,
expected: []map[string]interface{}{{"path": "app1", "path.basename": "app1", "path.basenameNormalized": "app1", "path[0]": "app1", "values.foo": "bar"}},
expectedError: fmt.Errorf("error getting project project: appprojects.argoproj.io \"project\" not found"),
},
}
for _, testCase := range cases {
argoCDServiceMock := mocks.Repos{}
if testCase.callGetDirectories {
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCase.repoApps, testCase.repoPathsError)
}
gitGenerator := NewGitGenerator(&argoCDServiceMock, "namespace")
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
appProject := argoprojiov1alpha1.AppProject{}
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&appProject).Build()
got, err := gitGenerator.GenerateParams(&testCase.appset.Spec.Generators[0], &testCase.appset, client)
if testCase.expectedError != nil {
require.EqualError(t, err, testCase.expectedError.Error())
} else {
require.NoError(t, err)
assert.Equal(t, testCase.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
}
}

View File

@@ -7,7 +7,6 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/env"
)
// Generator defines the interface implemented by all ApplicationSet generators.
@@ -31,11 +30,7 @@ var (
NoRequeueAfter time.Duration
)
// DefaultRequeueAfterSeconds is used when GetRequeueAfter is not specified; it is the default time to wait before the next reconcile loop
const (
DefaultRequeueAfterSeconds = 3 * time.Minute
)
func getDefaultRequeueAfter() time.Duration {
// Default is 3 minutes, min is 1 second, max is 1 year
return env.ParseDurationFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_REQUEUE_AFTER", DefaultRequeueAfterSeconds, 1*time.Second, 8760*time.Hour)
}

View File

@@ -1,29 +0,0 @@
package generators
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func Test_getDefaultRequeueAfter(t *testing.T) {
tests := []struct {
name string
requeueAfterEnv string
want time.Duration
}{
{name: "Default", requeueAfterEnv: "", want: DefaultRequeueAfterSeconds},
{name: "Min", requeueAfterEnv: "1s", want: 1 * time.Second},
{name: "Max", requeueAfterEnv: "8760h", want: 8760 * time.Hour},
{name: "Override", requeueAfterEnv: "10m", want: 10 * time.Minute},
{name: "LessThanMin", requeueAfterEnv: "1ms", want: DefaultRequeueAfterSeconds},
{name: "MoreThanMax", requeueAfterEnv: "8761h", want: DefaultRequeueAfterSeconds},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Setenv("ARGOCD_APPLICATIONSET_CONTROLLER_REQUEUE_AFTER", tt.requeueAfterEnv)
assert.Equalf(t, tt.want, getDefaultRequeueAfter(), "getDefaultRequeueAfter()")
})
}
}

View File

@@ -578,8 +578,8 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
},
},
expected: []map[string]interface{}{
{"path": "examples/git-generator-files-discovery/cluster-config/dev/config.json", "path.basename": "dev", "path.basenameNormalized": "dev", "name": "dev-01", "nameNormalized": "dev-01", "server": "https://dev-01.example.com", "metadata.labels.environment": "dev", "metadata.labels.argocd.argoproj.io/secret-type": "cluster", "project": ""},
{"path": "examples/git-generator-files-discovery/cluster-config/prod/config.json", "path.basename": "prod", "path.basenameNormalized": "prod", "name": "prod-01", "nameNormalized": "prod-01", "server": "https://prod-01.example.com", "metadata.labels.environment": "prod", "metadata.labels.argocd.argoproj.io/secret-type": "cluster", "project": ""},
{"path": "examples/git-generator-files-discovery/cluster-config/dev/config.json", "path.basename": "dev", "path.basenameNormalized": "dev", "name": "dev-01", "nameNormalized": "dev-01", "server": "https://dev-01.example.com", "metadata.labels.environment": "dev", "metadata.labels.argocd.argoproj.io/secret-type": "cluster"},
{"path": "examples/git-generator-files-discovery/cluster-config/prod/config.json", "path.basename": "prod", "path.basenameNormalized": "prod", "name": "prod-01", "nameNormalized": "prod-01", "server": "https://prod-01.example.com", "metadata.labels.environment": "prod", "metadata.labels.argocd.argoproj.io/secret-type": "cluster"},
},
clientError: false,
},
@@ -734,7 +734,6 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
"name": "dev-01",
"nameNormalized": "dev-01",
"server": "https://dev-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"environment": "dev",
@@ -751,7 +750,6 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
"name": "prod-01",
"nameNormalized": "prod-01",
"server": "https://prod-01.example.com",
"project": "",
"metadata": map[string]interface{}{
"labels": map[string]string{
"environment": "prod",
@@ -1091,7 +1089,7 @@ func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
repoServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(map[string][]byte{
"some/path.json": []byte("test: content"),
}, nil)
gitGenerator := NewGitGenerator(repoServiceMock, "")
gitGenerator := NewGitGenerator(repoServiceMock)
matrixGenerator := NewMatrixGenerator(map[string]Generator{
"List": listGeneratorMock,

View File

@@ -197,7 +197,6 @@ func TestMergeGenerate(t *testing.T) {
}
func toAPIExtensionsJSON(t *testing.T, g interface{}) *apiextensionsv1.JSON {
t.Helper()
resVal, err := json.Marshal(g)
if err != nil {
t.Error("unable to unmarshal json", g)

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.53.4. DO NOT EDIT.
// Code generated by mockery v2.43.2. DO NOT EDIT.
package mocks

View File

@@ -694,7 +694,7 @@ func TestPluginGenerateParams(t *testing.T) {
require.NoError(t, err)
gotJson, err := json.Marshal(got)
require.NoError(t, err)
assert.JSONEq(t, string(expectedJson), string(gotJson))
assert.Equal(t, string(expectedJson), string(gotJson))
}
})
}

View File

@@ -139,7 +139,7 @@ func (g *PullRequestGenerator) selectServiceProvider(ctx context.Context, genera
return nil, fmt.Errorf("error fetching CA certificates from ConfigMap: %w", prErr)
}
}
token, err := utils.GetSecretRef(ctx, g.client, providerConfig.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
token, err := utils.GetSecretRef(ctx, g.client, providerConfig.TokenRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Secret token: %w", err)
}
@@ -147,7 +147,7 @@ func (g *PullRequestGenerator) selectServiceProvider(ctx context.Context, genera
}
if generatorConfig.Gitea != nil {
providerConfig := generatorConfig.Gitea
token, err := utils.GetSecretRef(ctx, g.client, providerConfig.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
token, err := utils.GetSecretRef(ctx, g.client, providerConfig.TokenRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Secret token: %w", err)
}
@@ -164,13 +164,13 @@ func (g *PullRequestGenerator) selectServiceProvider(ctx context.Context, genera
}
}
if providerConfig.BearerToken != nil {
appToken, err := utils.GetSecretRef(ctx, g.client, providerConfig.BearerToken.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
appToken, err := utils.GetSecretRef(ctx, g.client, providerConfig.BearerToken.TokenRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Secret Bearer token: %w", err)
}
return pullrequest.NewBitbucketServiceBearerToken(ctx, appToken, providerConfig.API, providerConfig.Project, providerConfig.Repo, g.scmRootCAPath, providerConfig.Insecure, caCerts)
return pullrequest.NewBitbucketServiceBearerToken(ctx, providerConfig.API, appToken, providerConfig.Project, providerConfig.Repo, g.scmRootCAPath, providerConfig.Insecure, caCerts)
} else if providerConfig.BasicAuth != nil {
password, err := utils.GetSecretRef(ctx, g.client, providerConfig.BasicAuth.PasswordRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
password, err := utils.GetSecretRef(ctx, g.client, providerConfig.BasicAuth.PasswordRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Secret token: %w", err)
}
@@ -182,13 +182,13 @@ func (g *PullRequestGenerator) selectServiceProvider(ctx context.Context, genera
if generatorConfig.Bitbucket != nil {
providerConfig := generatorConfig.Bitbucket
if providerConfig.BearerToken != nil {
appToken, err := utils.GetSecretRef(ctx, g.client, providerConfig.BearerToken.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
appToken, err := utils.GetSecretRef(ctx, g.client, providerConfig.BearerToken.TokenRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Secret Bearer token: %w", err)
}
return pullrequest.NewBitbucketCloudServiceBearerToken(providerConfig.API, appToken, providerConfig.Owner, providerConfig.Repo)
} else if providerConfig.BasicAuth != nil {
password, err := utils.GetSecretRef(ctx, g.client, providerConfig.BasicAuth.PasswordRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
password, err := utils.GetSecretRef(ctx, g.client, providerConfig.BasicAuth.PasswordRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Secret token: %w", err)
}
@@ -199,7 +199,7 @@ func (g *PullRequestGenerator) selectServiceProvider(ctx context.Context, genera
}
if generatorConfig.AzureDevOps != nil {
providerConfig := generatorConfig.AzureDevOps
token, err := utils.GetSecretRef(ctx, g.client, providerConfig.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
token, err := utils.GetSecretRef(ctx, g.client, providerConfig.TokenRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Secret token: %w", err)
}
@@ -219,7 +219,7 @@ func (g *PullRequestGenerator) github(ctx context.Context, cfg *argoprojiov1alph
}
// always default to token, even if not set (public access)
token, err := utils.GetSecretRef(ctx, g.client, cfg.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
token, err := utils.GetSecretRef(ctx, g.client, cfg.TokenRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Secret token: %w", err)
}

View File

@@ -283,7 +283,7 @@ func TestAllowedSCMProviderPullRequest(t *testing.T) {
"gitea.myorg.com",
"bitbucket.myorg.com",
"azuredevops.myorg.com",
}, true, nil, true))
}, true, nil))
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -306,7 +306,7 @@ func TestAllowedSCMProviderPullRequest(t *testing.T) {
}
func TestSCMProviderDisabled_PRGenerator(t *testing.T) {
generator := NewPullRequestGenerator(nil, NewSCMConfig("", []string{}, false, nil, true))
generator := NewPullRequestGenerator(nil, NewSCMConfig("", []string{}, false, nil))
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{

View File

@@ -35,16 +35,14 @@ type SCMConfig struct {
allowedSCMProviders []string
enableSCMProviders bool
GitHubApps github_app_auth.Credentials
tokenRefStrictMode bool
}
func NewSCMConfig(scmRootCAPath string, allowedSCMProviders []string, enableSCMProviders bool, gitHubApps github_app_auth.Credentials, tokenRefStrictMode bool) SCMConfig {
func NewSCMConfig(scmRootCAPath string, allowedSCMProviders []string, enableSCMProviders bool, gitHubApps github_app_auth.Credentials) SCMConfig {
return SCMConfig{
scmRootCAPath: scmRootCAPath,
allowedSCMProviders: allowedSCMProviders,
enableSCMProviders: enableSCMProviders,
GitHubApps: gitHubApps,
tokenRefStrictMode: tokenRefStrictMode,
}
}
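For reference, a minimal sketch of constructing this config with the five-argument signature shown above (the trailing bool is tokenRefStrictMode). The host name and the empty github_app_auth.Credentials value are illustrative assumptions, and the snippet assumes it lives in the same generators package.

func exampleSCMConfig() SCMConfig {
	// Illustrative values only; mirrors NewSCMConfig as declared above.
	return NewSCMConfig(
		"",                            // scmRootCAPath: no custom CA bundle (assumption)
		[]string{"github.myorg.com"},  // allowedSCMProviders (assumption)
		true,                          // enableSCMProviders
		github_app_auth.Credentials{}, // GitHubApps: none configured (assumption)
		true,                          // tokenRefStrictMode: only read labeled SCM-creds secrets
	)
}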
@@ -156,7 +154,7 @@ func (g *SCMProviderGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha
return nil, fmt.Errorf("error fetching CA certificates from ConfigMap: %w", scmError)
}
}
token, err := utils.GetSecretRef(ctx, g.client, providerConfig.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
token, err := utils.GetSecretRef(ctx, g.client, providerConfig.TokenRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Gitlab token: %w", err)
}
@@ -165,7 +163,7 @@ func (g *SCMProviderGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha
return nil, fmt.Errorf("error initializing Gitlab service: %w", err)
}
} else if providerConfig.Gitea != nil {
token, err := utils.GetSecretRef(ctx, g.client, providerConfig.Gitea.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
token, err := utils.GetSecretRef(ctx, g.client, providerConfig.Gitea.TokenRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Gitea token: %w", err)
}
@@ -184,13 +182,13 @@ func (g *SCMProviderGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha
}
}
if providerConfig.BearerToken != nil {
appToken, err := utils.GetSecretRef(ctx, g.client, providerConfig.BearerToken.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
appToken, err := utils.GetSecretRef(ctx, g.client, providerConfig.BearerToken.TokenRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Secret Bearer token: %w", err)
}
provider, scmError = scm_provider.NewBitbucketServerProviderBearerToken(ctx, appToken, providerConfig.API, providerConfig.Project, providerConfig.AllBranches, g.scmRootCAPath, providerConfig.Insecure, caCerts)
} else if providerConfig.BasicAuth != nil {
password, err := utils.GetSecretRef(ctx, g.client, providerConfig.BasicAuth.PasswordRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
password, err := utils.GetSecretRef(ctx, g.client, providerConfig.BasicAuth.PasswordRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Secret token: %w", err)
}
@@ -202,7 +200,7 @@ func (g *SCMProviderGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha
return nil, fmt.Errorf("error initializing Bitbucket Server service: %w", scmError)
}
} else if providerConfig.AzureDevOps != nil {
token, err := utils.GetSecretRef(ctx, g.client, providerConfig.AzureDevOps.AccessTokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
token, err := utils.GetSecretRef(ctx, g.client, providerConfig.AzureDevOps.AccessTokenRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Azure Devops access token: %w", err)
}
@@ -211,7 +209,7 @@ func (g *SCMProviderGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha
return nil, fmt.Errorf("error initializing Azure Devops service: %w", err)
}
} else if providerConfig.Bitbucket != nil {
appPassword, err := utils.GetSecretRef(ctx, g.client, providerConfig.Bitbucket.AppPasswordRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
appPassword, err := utils.GetSecretRef(ctx, g.client, providerConfig.Bitbucket.AppPasswordRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Bitbucket cloud appPassword: %w", err)
}
@@ -285,7 +283,7 @@ func (g *SCMProviderGenerator) githubProvider(ctx context.Context, github *argop
)
}
token, err := utils.GetSecretRef(ctx, g.client, github.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
token, err := utils.GetSecretRef(ctx, g.client, github.TokenRef, applicationSetInfo.Namespace)
if err != nil {
return nil, fmt.Errorf("error fetching Github token: %w", err)
}

View File

@@ -10,15 +10,15 @@ import (
"github.com/argoproj/argo-cd/v2/applicationset/services"
)
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, controllerNamespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, namespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
terminalGenerators := map[string]Generator{
"List": NewListGenerator(),
"Clusters": NewClusterGenerator(c, ctx, k8sClient, controllerNamespace),
"Git": NewGitGenerator(argoCDService, controllerNamespace),
"Clusters": NewClusterGenerator(c, ctx, k8sClient, namespace),
"Git": NewGitGenerator(argoCDService),
"SCMProvider": NewSCMProviderGenerator(c, scmConfig),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, controllerNamespace),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, namespace),
"PullRequest": NewPullRequestGenerator(c, scmConfig),
"Plugin": NewPluginGenerator(c, ctx, k8sClient, controllerNamespace),
"Plugin": NewPluginGenerator(c, ctx, k8sClient, namespace),
}
nestedGenerators := map[string]Generator{

View File

@@ -1,22 +0,0 @@
package metrics
import (
"github.com/prometheus/client_golang/prometheus"
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
)
// Fake implementation for testing
func NewFakeAppsetMetrics(client ctrlclient.WithWatch) *ApplicationsetMetrics {
reconcileHistogram := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "argocd_appset_reconcile",
Help: "Application reconciliation performance in seconds.",
// Buckets can be set later on after observing median time
},
[]string{"name", "namespace"},
)
return &ApplicationsetMetrics{
reconcileHistogram: reconcileHistogram,
}
}

View File

@@ -1,131 +0,0 @@
package metrics
import (
"time"
"github.com/prometheus/client_golang/prometheus"
"k8s.io/apimachinery/pkg/labels"
"sigs.k8s.io/controller-runtime/pkg/metrics"
argoappv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
applisters "github.com/argoproj/argo-cd/v2/pkg/client/listers/application/v1alpha1"
metricsutil "github.com/argoproj/argo-cd/v2/util/metrics"
)
var (
descAppsetLabels *prometheus.Desc
descAppsetDefaultLabels = []string{"namespace", "name"}
descAppsetInfo = prometheus.NewDesc(
"argocd_appset_info",
"Information about applicationset",
append(descAppsetDefaultLabels, "resource_update_status"),
nil,
)
descAppsetGeneratedApps = prometheus.NewDesc(
"argocd_appset_owned_applications",
"Number of applications owned by the applicationset",
descAppsetDefaultLabels,
nil,
)
)
type ApplicationsetMetrics struct {
reconcileHistogram *prometheus.HistogramVec
}
type appsetCollector struct {
lister applisters.ApplicationSetLister
// appsClientSet appclientset.Interface
labels []string
filter func(appset *argoappv1.ApplicationSet) bool
}
func NewApplicationsetMetrics(appsetLister applisters.ApplicationSetLister, appsetLabels []string, appsetFilter func(appset *argoappv1.ApplicationSet) bool) ApplicationsetMetrics {
reconcileHistogram := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "argocd_appset_reconcile",
Help: "Application reconciliation performance in seconds.",
// Buckets can be set later on after observing median time
},
descAppsetDefaultLabels,
)
appsetCollector := newAppsetCollector(appsetLister, appsetLabels, appsetFilter)
// Register collectors and metrics
metrics.Registry.MustRegister(reconcileHistogram)
metrics.Registry.MustRegister(appsetCollector)
return ApplicationsetMetrics{
reconcileHistogram: reconcileHistogram,
}
}
func (m *ApplicationsetMetrics) ObserveReconcile(appset *argoappv1.ApplicationSet, duration time.Duration) {
m.reconcileHistogram.WithLabelValues(appset.Namespace, appset.Name).Observe(duration.Seconds())
}
func newAppsetCollector(lister applisters.ApplicationSetLister, labels []string, filter func(appset *argoappv1.ApplicationSet) bool) *appsetCollector {
descAppsetDefaultLabels = []string{"namespace", "name"}
if len(labels) > 0 {
descAppsetLabels = prometheus.NewDesc(
"argocd_appset_labels",
"Applicationset labels translated to Prometheus labels",
append(descAppsetDefaultLabels, metricsutil.NormalizeLabels("label", labels)...),
nil,
)
}
return &appsetCollector{
lister: lister,
labels: labels,
filter: filter,
}
}
// Describe implements the prometheus.Collector interface
func (c *appsetCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- descAppsetInfo
ch <- descAppsetGeneratedApps
if len(c.labels) > 0 {
ch <- descAppsetLabels
}
}
// Collect implements the prometheus.Collector interface
func (c *appsetCollector) Collect(ch chan<- prometheus.Metric) {
appsets, _ := c.lister.List(labels.NewSelector())
for _, appset := range appsets {
if c.filter(appset) {
collectAppset(appset, c.labels, ch)
}
}
}
func collectAppset(appset *argoappv1.ApplicationSet, labelsToCollect []string, ch chan<- prometheus.Metric) {
labelValues := make([]string, 0)
commonLabelValues := []string{appset.Namespace, appset.Name}
for _, label := range labelsToCollect {
labelValues = append(labelValues, appset.GetLabels()[label])
}
resourceUpdateStatus := "Unknown"
for _, condition := range appset.Status.Conditions {
if condition.Type == argoappv1.ApplicationSetConditionResourcesUpToDate {
resourceUpdateStatus = condition.Reason
}
}
if len(labelsToCollect) > 0 {
ch <- prometheus.MustNewConstMetric(descAppsetLabels, prometheus.GaugeValue, 1, append(commonLabelValues, labelValues...)...)
}
ch <- prometheus.MustNewConstMetric(descAppsetInfo, prometheus.GaugeValue, 1, appset.Namespace, appset.Name, resourceUpdateStatus)
ch <- prometheus.MustNewConstMetric(descAppsetGeneratedApps, prometheus.GaugeValue, float64(len(appset.Status.Resources)), appset.Namespace, appset.Name)
}

View File

@@ -1,256 +0,0 @@
package metrics
import (
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/argoproj/argo-cd/v2/applicationset/utils"
argoappv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/runtime"
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
fake "sigs.k8s.io/controller-runtime/pkg/client/fake"
prometheus "github.com/prometheus/client_golang/prometheus"
metricsutil "github.com/argoproj/argo-cd/v2/util/metrics"
"sigs.k8s.io/controller-runtime/pkg/metrics"
"sigs.k8s.io/yaml"
)
var (
applicationsetNamespaces = []string{"argocd", "test-namespace1"}
filter = func(appset *argoappv1.ApplicationSet) bool {
return utils.IsNamespaceAllowed(applicationsetNamespaces, appset.Namespace)
}
collectedLabels = []string{"included/test"}
)
const fakeAppsetList = `
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: test1
namespace: argocd
labels:
included/test: test
not-included.label/test: test
spec:
generators:
- git:
directories:
- path: test/*
repoURL: https://github.com/test/test.git
revision: HEAD
template:
metadata:
name: '{{.path.basename}}'
spec:
destination:
namespace: '{{.path.basename}}'
server: https://kubernetes.default.svc
project: default
source:
path: '{{.path.path}}'
repoURL: https://github.com/test/test.git
targetRevision: HEAD
status:
resources:
- group: argoproj.io
health:
status: Missing
kind: Application
name: test-app1
namespace: argocd
status: OutOfSync
version: v1alpha1
- group: argoproj.io
health:
status: Missing
kind: Application
name: test-app2
namespace: argocd
status: OutOfSync
version: v1alpha1
conditions:
- lastTransitionTime: "2024-01-01T00:00:00Z"
message: Successfully generated parameters for all Applications
reason: ApplicationSetUpToDate
status: "False"
type: ErrorOccurred
- lastTransitionTime: "2024-01-01T00:00:00Z"
message: Successfully generated parameters for all Applications
reason: ParametersGenerated
status: "True"
type: ParametersGenerated
- lastTransitionTime: "2024-01-01T00:00:00Z"
message: ApplicationSet up to date
reason: ApplicationSetUpToDate
status: "True"
type: ResourcesUpToDate
---
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: test2
namespace: argocd
labels:
not-included.label/test: test
spec:
generators:
- git:
directories:
- path: test/*
repoURL: https://github.com/test/test.git
revision: HEAD
template:
metadata:
name: '{{.path.basename}}'
spec:
destination:
namespace: '{{.path.basename}}'
server: https://kubernetes.default.svc
project: default
source:
path: '{{.path.path}}'
repoURL: https://github.com/test/test.git
targetRevision: HEAD
---
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: should-be-filtered-out
namespace: not-allowed
spec:
generators:
- git:
directories:
- path: test/*
repoURL: https://github.com/test/test.git
revision: HEAD
template:
metadata:
name: '{{.path.basename}}'
spec:
destination:
namespace: '{{.path.basename}}'
server: https://kubernetes.default.svc
project: default
source:
path: '{{.path.path}}'
repoURL: https://github.com/test/test.git
targetRevision: HEAD
`
func newFakeAppsets(fakeAppsetYAML string) []argoappv1.ApplicationSet {
var results []argoappv1.ApplicationSet
appsetRawYamls := strings.Split(fakeAppsetYAML, "---")
for _, appsetRawYaml := range appsetRawYamls {
var appset argoappv1.ApplicationSet
err := yaml.Unmarshal([]byte(appsetRawYaml), &appset)
if err != nil {
panic(err)
}
results = append(results, appset)
}
return results
}
func TestApplicationsetCollector(t *testing.T) {
appsetList := newFakeAppsets(fakeAppsetList)
client := initializeClient(appsetList)
metrics.Registry = prometheus.NewRegistry()
appsetCollector := newAppsetCollector(utils.NewAppsetLister(client), collectedLabels, filter)
metrics.Registry.MustRegister(appsetCollector)
req, err := http.NewRequest(http.MethodGet, "/metrics", nil)
require.NoError(t, err)
rr := httptest.NewRecorder()
handler := promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{})
handler.ServeHTTP(rr, req)
assert.Equal(t, http.StatusOK, rr.Code)
// Test correct appset_info and owned applications
assert.Contains(t, rr.Body.String(), `
argocd_appset_info{name="test1",namespace="argocd",resource_update_status="ApplicationSetUpToDate"} 1
`)
assert.Contains(t, rr.Body.String(), `
argocd_appset_owned_applications{name="test1",namespace="argocd"} 2
`)
// Test labels collection - labels not in the list of collected labels should be excluded, and labels that are in the list should be included.
assert.Contains(t, rr.Body.String(), `
argocd_appset_labels{label_included_test="test",name="test1",namespace="argocd"} 1
`)
assert.NotContains(t, rr.Body.String(), normalizeLabel("not-included.label/test"))
// If collected label is not present on the applicationset the value should be empty
assert.Contains(t, rr.Body.String(), `
argocd_appset_labels{label_included_test="",name="test2",namespace="argocd"} 1
`)
// If ResourcesUpToDate condition is not present on the applicationset the status should be reported as 'Unknown'
assert.Contains(t, rr.Body.String(), `
argocd_appset_info{name="test2",namespace="argocd",resource_update_status="Unknown"} 1
`)
// If there are no resources on the applicationset the owned application gauge should return 0
assert.Contains(t, rr.Body.String(), `
argocd_appset_owned_applications{name="test2",namespace="argocd"} 0
`)
// Test that filter is working
assert.NotContains(t, rr.Body.String(), `name="should-be-filtered-out"`)
}
func TestObserveReconcile(t *testing.T) {
appsetList := newFakeAppsets(fakeAppsetList)
client := initializeClient(appsetList)
metrics.Registry = prometheus.NewRegistry()
appsetMetrics := NewApplicationsetMetrics(utils.NewAppsetLister(client), collectedLabels, filter)
req, err := http.NewRequest(http.MethodGet, "/metrics", nil)
require.NoError(t, err)
rr := httptest.NewRecorder()
handler := promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{})
appsetMetrics.ObserveReconcile(&appsetList[0], 5*time.Second)
handler.ServeHTTP(rr, req)
assert.Contains(t, rr.Body.String(), `
argocd_appset_reconcile_sum{name="test1",namespace="argocd"} 5
`)
// The reconcile histogram should record exactly one observation for the applicationset
assert.Contains(t, rr.Body.String(), `
argocd_appset_reconcile_count{name="test1",namespace="argocd"} 1
`)
}
func initializeClient(appsets []argoappv1.ApplicationSet) ctrlclient.WithWatch {
scheme := runtime.NewScheme()
err := argoappv1.AddToScheme(scheme)
if err != nil {
panic(err)
}
var clientObjects []ctrlclient.Object
for _, appset := range appsets {
clientObjects = append(clientObjects, appset.DeepCopy())
}
return fake.NewClientBuilder().WithScheme(scheme).WithObjects(clientObjects...).Build()
}
func normalizeLabel(label string) string {
return metricsutil.NormalizeLabels("label", []string{label})[0]
}
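As a side note, a standalone sketch of the label normalization the assertions above depend on: a Kubernetes label key such as "included/test" becomes the Prometheus-safe name "label_included_test". The character-replacement rule below is an assumption for illustration; the test itself delegates to metricsutil.NormalizeLabels.

package main

import (
	"fmt"
	"strings"
)

// sketchNormalizeLabel is a hypothetical stand-in for the normalization above:
// characters outside [a-zA-Z0-9_] become '_' and the prefix is prepended.
func sketchNormalizeLabel(prefix, label string) string {
	sanitized := strings.Map(func(r rune) rune {
		if (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9') || r == '_' {
			return r
		}
		return '_'
	}, label)
	return prefix + "_" + sanitized
}

func main() {
	fmt.Println(sketchNormalizeLabel("label", "included/test"))           // label_included_test
	fmt.Println(sketchNormalizeLabel("label", "not-included.label/test")) // label_not_included_label_test
}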

View File

@@ -5,7 +5,7 @@ import (
"net/http"
"github.com/bradleyfalzon/ghinstallation/v2"
"github.com/google/go-github/v66/github"
"github.com/google/go-github/v63/github"
"github.com/argoproj/argo-cd/v2/applicationset/services/github_app_auth"
)

View File

@@ -134,7 +134,7 @@ func (c *Client) Do(ctx context.Context, req *http.Request, v interface{}) (*htt
// CheckResponse checks the API response for errors, and returns them if present.
func CheckResponse(resp *http.Response) error {
if c := resp.StatusCode; http.StatusOK <= c && c < http.StatusMultipleChoices {
if c := resp.StatusCode; 200 <= c && c <= 299 {
return nil
}
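Both guards above accept exactly the 2xx range; a small standalone check, for illustration:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// For every integer status code the two forms agree: 200 <= c < 300 is the same as 200 <= c <= 299.
	for _, c := range []int{199, 200, 204, 299, 300} {
		withConstants := http.StatusOK <= c && c < http.StatusMultipleChoices
		withLiterals := 200 <= c && c <= 299
		fmt.Printf("%d: constants=%v literals=%v\n", c, withConstants, withLiterals)
	}
}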

View File

@@ -10,7 +10,6 @@ import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestClient(t *testing.T) {
@@ -25,7 +24,9 @@ func TestClient(t *testing.T) {
var clientOptionFns []ClientOptionFunc
_, err := NewClient(server.URL, clientOptionFns...)
require.NoError(t, err, "Failed to create client")
if err != nil {
t.Fatalf("Failed to create client: %v", err)
}
}
func TestClientDo(t *testing.T) {
@@ -76,7 +77,7 @@ func TestClientDo(t *testing.T) {
"key3": float64(123),
},
},
expectedCode: http.StatusOK,
expectedCode: 200,
expectedError: nil,
},
{
@@ -108,7 +109,7 @@ func TestClientDo(t *testing.T) {
})),
clientOptionFns: nil,
expected: []map[string]interface{}(nil),
expectedCode: http.StatusUnauthorized,
expectedCode: 401,
expectedError: fmt.Errorf("API error with status code 401: "),
},
} {
@@ -117,10 +118,14 @@ func TestClientDo(t *testing.T) {
defer cc.fakeServer.Close()
client, err := NewClient(cc.fakeServer.URL, cc.clientOptionFns...)
require.NoError(t, err, "NewClient returned unexpected error")
if err != nil {
t.Fatalf("NewClient returned unexpected error: %v", err)
}
req, err := client.NewRequest("POST", "", cc.params, nil)
require.NoError(t, err, "NewRequest returned unexpected error")
if err != nil {
t.Fatalf("NewRequest returned unexpected error: %v", err)
}
var data []map[string]interface{}
@@ -144,5 +149,12 @@ func TestCheckResponse(t *testing.T) {
}
err := CheckResponse(resp)
require.EqualError(t, err, "API error with status code 400: invalid_request")
if err == nil {
t.Error("Expected an error, got nil")
}
expected := "API error with status code 400: invalid_request"
if err.Error() != expected {
t.Errorf("Expected error '%s', got '%s'", expected, err.Error())
}
}

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.53.4. DO NOT EDIT.
// Code generated by mockery v2.43.2. DO NOT EDIT.
package mocks

View File

@@ -9,7 +9,6 @@ import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestPlugin(t *testing.T) {
@@ -32,13 +31,19 @@ func TestPlugin(t *testing.T) {
defer ts.Close()
client, err := NewPluginService(context.Background(), "plugin-test", ts.URL, token, 0)
require.NoError(t, err)
if err != nil {
t.Errorf("unexpected error: %v", err)
}
data, err := client.List(context.Background(), nil)
require.NoError(t, err)
if err != nil {
t.Errorf("unexpected error: %v", err)
}
var expectedData ServiceResponse
err = json.Unmarshal([]byte(expectedJSON), &expectedData)
require.NoError(t, err)
if err != nil {
t.Fatal(err)
}
assert.Equal(t, &expectedData, data)
}

View File

@@ -82,7 +82,6 @@ func (a *AzureDevOpsService) List(ctx context.Context) ([]*PullRequest, error) {
pr.Repository.Name == nil ||
pr.PullRequestId == nil ||
pr.SourceRefName == nil ||
pr.TargetRefName == nil ||
pr.LastMergeSourceCommit == nil ||
pr.LastMergeSourceCommit.CommitId == nil {
continue
@@ -95,13 +94,12 @@ func (a *AzureDevOpsService) List(ctx context.Context) ([]*PullRequest, error) {
if *pr.Repository.Name == a.repo {
pullRequests = append(pullRequests, &PullRequest{
Number: *pr.PullRequestId,
Title: *pr.Title,
Branch: strings.Replace(*pr.SourceRefName, "refs/heads/", "", 1),
TargetBranch: strings.Replace(*pr.TargetRefName, "refs/heads/", "", 1),
HeadSHA: *pr.LastMergeSourceCommit.CommitId,
Labels: azureDevOpsLabels,
Author: strings.Split(*pr.CreatedBy.UniqueName, "@")[0], // Get the part before the @ in the email-address
Number: *pr.PullRequestId,
Title: *pr.Title,
Branch: strings.Replace(*pr.SourceRefName, "refs/heads/", "", 1),
HeadSHA: *pr.LastMergeSourceCommit.CommitId,
Labels: azureDevOpsLabels,
Author: strings.Split(*pr.CreatedBy.UniqueName, "@")[0], // Get the part before the @ in the email-address
})
}
}
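For illustration, a standalone sketch of the two small transformations used in the mapping above: stripping the refs/heads/ prefix from the ref names and taking the part of CreatedBy.UniqueName before the @. The input values are made up.

package main

import (
	"fmt"
	"strings"
)

func main() {
	sourceRef := "refs/heads/feature-branch"
	targetRef := "refs/heads/main"
	uniqueName := "jane.doe@example.com"

	branch := strings.Replace(sourceRef, "refs/heads/", "", 1)       // "feature-branch"
	targetBranch := strings.Replace(targetRef, "refs/heads/", "", 1) // "main"
	author := strings.Split(uniqueName, "@")[0]                      // "jane.doe"

	fmt.Println(branch, targetBranch, author)
}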

View File

@@ -72,7 +72,6 @@ func TestListPullRequest(t *testing.T) {
PullRequestId: createIntPtr(pr_id),
Title: createStringPtr(pr_title),
SourceRefName: createStringPtr("refs/heads/feature-branch"),
TargetRefName: createStringPtr("refs/heads/main"),
LastMergeSourceCommit: &git.GitCommitRef{
CommitId: createStringPtr(pr_head_sha),
},
@@ -107,7 +106,6 @@ func TestListPullRequest(t *testing.T) {
require.NoError(t, err)
assert.Len(t, list, 1)
assert.Equal(t, "feature-branch", list[0].Branch)
assert.Equal(t, "main", list[0].TargetBranch)
assert.Equal(t, pr_head_sha, list[0].HeadSHA)
assert.Equal(t, "feat(123)", list[0].Title)
assert.Equal(t, pr_id, list[0].Number)

View File

@@ -19,7 +19,7 @@ type BitbucketCloudPullRequest struct {
ID int `json:"id"`
Title string `json:"title"`
Source BitbucketCloudPullRequestSource `json:"source"`
Author BitbucketCloudPullRequestAuthor `json:"author"`
Author string `json:"author"`
}
type BitbucketCloudPullRequestSource struct {
@@ -35,11 +35,6 @@ type BitbucketCloudPullRequestSourceCommit struct {
Hash string `json:"hash"`
}
// Also have display_name and uuid, but don't plan to use them.
type BitbucketCloudPullRequestAuthor struct {
Nickname string `json:"nickname"`
}
type PullRequestResponse struct {
Page int32 `json:"page"`
Size int32 `json:"size"`
@@ -139,7 +134,7 @@ func (b *BitbucketCloudService) List(_ context.Context) ([]*PullRequest, error)
Title: pull.Title,
Branch: pull.Source.Branch.Name,
HeadSHA: pull.Source.Commit.Hash,
Author: pull.Author.Nickname,
Author: pull.Author,
})
}
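For illustration, a standalone sketch of decoding the nested-author payload shape that one side of this diff expects; the struct names and JSON values below are stand-ins, not the repository's own types.

package main

import (
	"encoding/json"
	"fmt"
)

// These shapes mirror the nested-author variant shown above.
type author struct {
	Nickname string `json:"nickname"`
}

type pullRequest struct {
	ID     int    `json:"id"`
	Title  string `json:"title"`
	Author author `json:"author"`
}

func main() {
	payload := []byte(`{"id":101,"title":"feat: add feature-1","author":{"nickname":"testName"}}`)

	var pr pullRequest
	if err := json.Unmarshal(payload, &pr); err != nil {
		panic(err)
	}
	fmt.Printf("#%d %q by %s\n", pr.ID, pr.Title, pr.Author.Nickname)
}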

View File

@@ -15,7 +15,6 @@ import (
)
func defaultHandlerCloud(t *testing.T) func(http.ResponseWriter, *http.Request) {
t.Helper()
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
var err error
@@ -38,9 +37,7 @@ func defaultHandlerCloud(t *testing.T) func(http.ResponseWriter, *http.Request)
"hash": "1a8dd249c04a"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
}
]
}`)
@@ -157,9 +154,7 @@ func TestListPullRequestPaginationCloud(t *testing.T) {
"hash": "1a8dd249c04a"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
},
{
"id": 102,
@@ -173,9 +168,7 @@ func TestListPullRequestPaginationCloud(t *testing.T) {
"hash": "4cf807e67a6d"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
}
]
}`, r.Host))
@@ -198,9 +191,7 @@ func TestListPullRequestPaginationCloud(t *testing.T) {
"hash": "6344d9623e3b"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
}
]
}`, r.Host))
@@ -242,7 +233,7 @@ func TestListPullRequestPaginationCloud(t *testing.T) {
func TestListResponseErrorCloud(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusInternalServerError)
w.WriteHeader(500)
}))
defer ts.Close()
svc, _ := NewBitbucketCloudServiceNoAuth(ts.URL, "OWNER", "REPO")
@@ -348,9 +339,7 @@ func TestListPullRequestBranchMatchCloud(t *testing.T) {
"hash": "1a8dd249c04a"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
},
{
"id": 200,
@@ -364,9 +353,7 @@ func TestListPullRequestBranchMatchCloud(t *testing.T) {
"hash": "4cf807e67a6d"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
}
]
}`, r.Host))
@@ -389,9 +376,7 @@ func TestListPullRequestBranchMatchCloud(t *testing.T) {
"hash": "6344d9623e3b"
}
},
"author": {
"nickname": "testName"
}
"author": "testName"
}
]
}`, r.Host))

View File

@@ -16,7 +16,6 @@ import (
)
func defaultHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
t.Helper()
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
var err error

View File

@@ -14,7 +14,6 @@ import (
)
func giteaMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
t.Helper()
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
fmt.Println(r.RequestURI)

View File

@@ -5,7 +5,7 @@ import (
"fmt"
"os"
"github.com/google/go-github/v66/github"
"github.com/google/go-github/v63/github"
"golang.org/x/oauth2"
)

View File

@@ -3,8 +3,8 @@ package pull_request
import (
"testing"
"github.com/google/go-github/v66/github"
"github.com/stretchr/testify/require"
"github.com/google/go-github/v63/github"
"github.com/stretchr/testify/assert"
)
func toPtr(s string) *string {
@@ -52,8 +52,9 @@ func TestContainLabels(t *testing.T) {
for _, c := range cases {
t.Run(c.Name, func(t *testing.T) {
got := containLabels(c.Labels, c.PullLabels)
require.Equal(t, got, c.Expect)
if got := containLabels(c.Labels, c.PullLabels); got != c.Expect {
t.Errorf("expect: %v, got: %v", c.Expect, got)
}
})
}
}
@@ -82,7 +83,7 @@ func TestGetGitHubPRLabelNames(t *testing.T) {
for _, test := range Tests {
t.Run(test.Name, func(t *testing.T) {
labels := getGithubPRLabelNames(test.PullLabels)
require.Equal(t, test.ExpectedResult, labels)
assert.Equal(t, test.ExpectedResult, labels)
})
}
}

View File

@@ -15,12 +15,14 @@ import (
)
func writeMRListResponse(t *testing.T, w io.Writer) {
t.Helper()
f, err := os.Open("fixtures/gitlab_mr_list_response.json")
require.NoErrorf(t, err, "error opening fixture file: %v", err)
if err != nil {
t.Fatalf("error opening fixture file: %v", err)
}
_, err = io.Copy(w, f)
require.NoErrorf(t, err, "error writing response: %v", err)
if _, err = io.Copy(w, f); err != nil {
t.Fatalf("error writing response: %v", err)
}
}
func TestGitLabServiceCustomBaseURL(t *testing.T) {

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.53.4. DO NOT EDIT.
// Code generated by mockery v2.43.2. DO NOT EDIT.
package mocks

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.53.4. DO NOT EDIT.
// Code generated by mockery v2.43.2. DO NOT EDIT.
package mocks

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.53.4. DO NOT EDIT.
// Code generated by mockery v2.43.2. DO NOT EDIT.
package mocks

View File

@@ -46,7 +46,7 @@ func (c *ExtendedClient) GetContents(repo *Repository, path string) (bool, error
return true, nil
}
return false, fmt.Errorf("%s", resp.Status)
return false, fmt.Errorf(resp.Status)
}
var _ SCMProviderService = &BitBucketCloudProvider{}

View File

@@ -129,7 +129,7 @@ func (b *BitbucketServerProvider) RepoHasPath(_ context.Context, repo *Repositor
}
// No need to query for all pages here
response, err := b.client.DefaultApi.GetContent_0(repo.Organization, repo.Repository, path, opts)
if response != nil && response.StatusCode == http.StatusNotFound {
if response != nil && response.StatusCode == 404 {
// File/directory not found
return false, nil
}
@@ -203,7 +203,7 @@ func (b *BitbucketServerProvider) getDefaultBranch(org string, repo string) (*bi
response, err := b.client.DefaultApi.GetDefaultBranch(org, repo)
// The API will return 404 if a default branch is set but doesn't exist. In case the repo is empty and default branch is unset,
// we will get an EOF and a nil response.
if (response != nil && response.StatusCode == http.StatusNotFound) || (response == nil && err != nil && errors.Is(err, io.EOF)) {
if (response != nil && response.StatusCode == 404) || (response == nil && err != nil && errors.Is(err, io.EOF)) {
return nil, nil
}
if err != nil {

View File

@@ -14,7 +14,6 @@ import (
)
func defaultHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
t.Helper()
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
var err error
@@ -83,7 +82,6 @@ func defaultHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
}
func verifyDefaultRepo(t *testing.T, err error, repos []*Repository) {
t.Helper()
require.NoError(t, err)
assert.Len(t, repos, 1)
assert.Equal(t, Repository{

View File

@@ -128,7 +128,7 @@ func (g *GiteaProvider) ListRepos(ctx context.Context, cloneProtocol string) ([]
func (g *GiteaProvider) RepoHasPath(ctx context.Context, repo *Repository, path string) (bool, error) {
_, resp, err := g.client.GetContents(repo.Organization, repo.Repository, repo.Branch, path)
if resp != nil && resp.StatusCode == http.StatusNotFound {
if resp != nil && resp.StatusCode == 404 {
return false, nil
}
if fmt.Sprint(err) == "expect file, got directory" {

View File

@@ -15,7 +15,6 @@ import (
)
func giteaMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
t.Helper()
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
switch r.RequestURI {

View File

@@ -6,7 +6,7 @@ import (
"net/http"
"os"
"github.com/google/go-github/v66/github"
"github.com/google/go-github/v63/github"
"golang.org/x/oauth2"
)
@@ -107,7 +107,7 @@ func (g *GithubProvider) RepoHasPath(ctx context.Context, repo *Repository, path
Ref: repo.Branch,
})
// 404s are not an error here, just a normal false.
if resp != nil && resp.StatusCode == http.StatusNotFound {
if resp != nil && resp.StatusCode == 404 {
return false, nil
}
if err != nil {

View File

@@ -14,7 +14,6 @@ import (
)
func githubMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
t.Helper()
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
switch r.RequestURI {

View File

@@ -2,10 +2,10 @@ package scm_provider
import (
"context"
"errors"
"fmt"
"net/http"
"os"
pathpkg "path"
"github.com/hashicorp/go-retryablehttp"
"github.com/xanzy/go-gitlab"
@@ -129,31 +129,40 @@ func (g *GitlabProvider) ListRepos(ctx context.Context, cloneProtocol string) ([
func (g *GitlabProvider) RepoHasPath(_ context.Context, repo *Repository, path string) (bool, error) {
p, _, err := g.client.Projects.GetProject(repo.Organization+"/"+repo.Repository, nil)
if err != nil {
return false, fmt.Errorf("error getting Project Info: %w", err)
}
// search if the path is a file and exists in the repo
fileOptions := gitlab.GetFileOptions{Ref: &repo.Branch}
_, _, err = g.client.RepositoryFiles.GetFile(p.ID, path, &fileOptions)
if err != nil {
if errors.Is(err, gitlab.ErrNotFound) {
// no file found, check for a directory
options := gitlab.ListTreeOptions{
Path: &path,
Ref: &repo.Branch,
}
_, _, err := g.client.Repositories.ListTree(p.ID, &options)
if err != nil {
if errors.Is(err, gitlab.ErrNotFound) {
return false, nil // no file or directory found
}
return false, err
}
return true, nil // directory found
}
return false, err
}
return true, nil // file found
directories := []string{
path,
pathpkg.Dir(path),
}
for _, directory := range directories {
options := gitlab.ListTreeOptions{
Path: &directory,
Ref: &repo.Branch,
}
for {
treeNode, resp, err := g.client.Repositories.ListTree(p.ID, &options)
if err != nil {
return false, err
}
if path == directory {
if resp.TotalItems > 0 {
return true, nil
}
}
for i := range treeNode {
if treeNode[i].Path == path {
return true, nil
}
}
if resp.NextPage == 0 {
// no future pages
break
}
options.Page = resp.NextPage
}
}
return false, nil
}
func (g *GitlabProvider) listBranches(_ context.Context, repo *Repository) ([]gitlab.Branch, error) {

View File

@@ -17,10 +17,8 @@ import (
)
func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
t.Helper()
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
fmt.Println(r.RequestURI)
switch r.RequestURI {
case "/api/v4":
fmt.Println("here1")
@@ -1041,32 +1039,6 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
if err != nil {
t.Fail()
}
// Recent versions of the GitLab API (v17.7+) return 404 from listTree not only when a path doesn't exist, but also
// when the path points to a file instead of a directory. The code was refactored to explicitly search for the file
// first and then for the directory, treating 404 errors as "not found".
case "/api/v4/projects/27084533/repository/files/argocd?ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/files/argocd%2Finstall%2Eyaml?ref=master":
_, err := io.WriteString(w, `{"file_name":"install.yaml","file_path":"argocd/install.yaml","size":0,"encoding":"base64","content_sha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","ref":"main","blob_id":"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391","commit_id":"6d4c0f9d34534ccc73aa3f3180b25e2aebe630eb","last_commit_id":"b50eb63f9c0e09bfdb070db26fd32c7210291f52","execute_filemode":false,"content":""}`)
if err != nil {
t.Fail()
}
case "/api/v4/projects/27084533/repository/files/notathing?ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/tree?path=notathing&ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/files/argocd%2Fnotathing%2Eyaml?ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/tree?path=argocd%2Fnotathing.yaml&ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/files/notathing%2Fnotathing%2Eyaml?ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/tree?path=notathing%2Fnotathing.yaml&ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/files/notathing%2Fnotathing%2Fnotathing%2Eyaml?ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/tree?path=notathing%2Fnotathing%2Fnotathing.yaml&ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/branches/foo":
w.WriteHeader(http.StatusNotFound)
default:
@@ -1221,16 +1193,6 @@ func TestGitlabHasPath(t *testing.T) {
path: "argocd/notathing.yaml",
exists: false,
},
{
name: "nonexistent file in nonexistent directory",
path: "notathing/notathing.yaml",
exists: false,
},
{
name: "nonexistent file in nested nonexistent directory",
path: "notathing/notathing/notathing.yaml",
exists: false,
},
}
for _, c := range cases {

View File

@@ -1,63 +0,0 @@
package utils
import (
"context"
"k8s.io/apimachinery/pkg/labels"
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
. "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
. "github.com/argoproj/argo-cd/v2/pkg/client/listers/application/v1alpha1"
)
// AppsetLister implements the ApplicationSetLister interface using a controller-runtime client
type AppsetLister struct {
Client ctrlclient.Client
}
func NewAppsetLister(client ctrlclient.Client) ApplicationSetLister {
return &AppsetLister{Client: client}
}
func (l *AppsetLister) List(selector labels.Selector) (ret []*ApplicationSet, err error) {
return clientListAppsets(l.Client, ctrlclient.ListOptions{})
}
// ApplicationSets returns an object that can list and get ApplicationSets.
func (l *AppsetLister) ApplicationSets(namespace string) ApplicationSetNamespaceLister {
return &appsetNamespaceLister{
Client: l.Client,
Namespace: namespace,
}
}
// appsetNamespaceLister implements the ApplicationSetNamespaceLister interface
type appsetNamespaceLister struct {
Client ctrlclient.Client
Namespace string
}
func (n *appsetNamespaceLister) List(selector labels.Selector) (ret []*ApplicationSet, err error) {
return clientListAppsets(n.Client, ctrlclient.ListOptions{Namespace: n.Namespace})
}
func (n *appsetNamespaceLister) Get(name string) (*ApplicationSet, error) {
appset := ApplicationSet{}
err := n.Client.Get(context.TODO(), ctrlclient.ObjectKeyFromObject(&appset), &appset)
return &appset, err
}
func clientListAppsets(client ctrlclient.Client, listOptions ctrlclient.ListOptions) (ret []*ApplicationSet, err error) {
var appsetlist ApplicationSetList
var results []*ApplicationSet
err = client.List(context.TODO(), &appsetlist, &listOptions)
if err == nil {
for _, appset := range appsetlist.Items {
results = append(results, appset.DeepCopy())
}
}
return results, err
}

View File

@@ -2,15 +2,22 @@ package utils
import (
"context"
"encoding/json"
"fmt"
"strconv"
"strings"
"sync"
"time"
log "github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v2/common"
appv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/db"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/utils/ptr"
)
// The contents of this file are from
@@ -44,12 +51,9 @@ const (
// if we used destination name we infer the server url
// if we used both name and server then we return an invalid spec error
func ValidateDestination(ctx context.Context, dest *appv1.ApplicationDestination, clientset kubernetes.Interface, argoCDNamespace string) error {
if dest.IsServerInferred() && dest.IsNameInferred() {
return fmt.Errorf("application destination can't have both name and server inferred: %s %s", dest.Name, dest.Server)
}
if dest.Name != "" {
if dest.Server == "" {
server, err := getDestinationBy(ctx, dest.Name, clientset, argoCDNamespace, true)
server, err := getDestinationServer(ctx, dest.Name, clientset, argoCDNamespace)
if err != nil {
return fmt.Errorf("unable to find destination server: %w", err)
}
@@ -57,25 +61,14 @@ func ValidateDestination(ctx context.Context, dest *appv1.ApplicationDestination
return fmt.Errorf("application references destination cluster %s which does not exist", dest.Name)
}
dest.SetInferredServer(server)
} else if !dest.IsServerInferred() && !dest.IsNameInferred() {
} else if !dest.IsServerInferred() {
return fmt.Errorf("application destination can't have both name and server defined: %s %s", dest.Name, dest.Server)
}
} else if dest.Server != "" {
if dest.Name == "" {
serverName, err := getDestinationBy(ctx, dest.Server, clientset, argoCDNamespace, false)
if err != nil {
return fmt.Errorf("unable to find destination server: %w", err)
}
if serverName == "" {
return fmt.Errorf("application references destination cluster %s which does not exist", dest.Server)
}
dest.SetInferredName(serverName)
}
}
return nil
}
func getDestinationBy(ctx context.Context, cluster string, clientset kubernetes.Interface, argoCDNamespace string, byName bool) (string, error) {
func getDestinationServer(ctx context.Context, clusterName string, clientset kubernetes.Interface, argoCDNamespace string) (string, error) {
// settingsMgr := settings.NewSettingsManager(context.TODO(), clientset, namespace)
// argoDB := db.NewDB(namespace, settingsMgr, clientset)
// clusterList, err := argoDB.ListClusters(ctx)
@@ -85,17 +78,14 @@ func getDestinationBy(ctx context.Context, cluster string, clientset kubernetes.
}
var servers []string
for _, c := range clusterList.Items {
if byName && c.Name == cluster {
if c.Name == clusterName {
servers = append(servers, c.Server)
}
if !byName && c.Server == cluster {
servers = append(servers, c.Name)
}
}
if len(servers) > 1 {
return "", fmt.Errorf("there are %d clusters with the same name: %v", len(servers), servers)
} else if len(servers) == 0 {
return "", fmt.Errorf("there are no clusters with this name: %s", cluster)
return "", fmt.Errorf("there are no clusters with this name: %s", clusterName)
}
return servers[0], nil
}
@@ -119,15 +109,11 @@ func ListClusters(ctx context.Context, clientset kubernetes.Interface, namespace
hasInClusterCredentials := false
for i, clusterSecret := range clusterSecrets {
// This line has changed from the original Argo CD code: now receives an error, and handles it
cluster, err := db.SecretToCluster(&clusterSecret)
cluster, err := secretToCluster(&clusterSecret)
if err != nil || cluster == nil {
return nil, fmt.Errorf("unable to convert cluster secret to cluster object '%s': %w", clusterSecret.Name, err)
}
// db.SecretToCluster populates these, but they're not meant to be available to the caller.
cluster.Labels = nil
cluster.Annotations = nil
clusterList.Items[i] = *cluster
if cluster.Server == appv1.KubernetesInternalAPIServerAddr {
hasInClusterCredentials = true
@@ -146,12 +132,9 @@ func getLocalCluster(clientset kubernetes.Interface) *appv1.Cluster {
initLocalCluster.Do(func() {
info, err := clientset.Discovery().ServerVersion()
if err == nil {
// nolint:staticcheck
localCluster.ServerVersion = fmt.Sprintf("%s.%s", info.Major, info.Minor)
// nolint:staticcheck
localCluster.ConnectionState = appv1.ConnectionState{Status: appv1.ConnectionStatusSuccessful}
} else {
// nolint:staticcheck
localCluster.ConnectionState = appv1.ConnectionState{
Status: appv1.ConnectionStatusFailed,
Message: err.Error(),
@@ -160,7 +143,51 @@ func getLocalCluster(clientset kubernetes.Interface) *appv1.Cluster {
})
cluster := localCluster.DeepCopy()
now := metav1.Now()
// nolint:staticcheck
cluster.ConnectionState.ModifiedAt = &now
return cluster
}
// secretToCluster converts a secret into a Cluster object
func secretToCluster(s *corev1.Secret) (*appv1.Cluster, error) {
var config appv1.ClusterConfig
if len(s.Data["config"]) > 0 {
if err := json.Unmarshal(s.Data["config"], &config); err != nil {
// This line has changed from the original Argo CD: now returns an error rather than panicking.
return nil, err
}
}
var namespaces []string
for _, ns := range strings.Split(string(s.Data["namespaces"]), ",") {
if ns = strings.TrimSpace(ns); ns != "" {
namespaces = append(namespaces, ns)
}
}
var refreshRequestedAt *metav1.Time
if v, found := s.Annotations[appv1.AnnotationKeyRefresh]; found {
requestedAt, err := time.Parse(time.RFC3339, v)
if err != nil {
log.Warnf("Error while parsing date in cluster secret '%s': %v", s.Name, err)
} else {
refreshRequestedAt = &metav1.Time{Time: requestedAt}
}
}
var shard *int64
if shardStr := s.Data["shard"]; shardStr != nil {
if val, err := strconv.Atoi(string(shardStr)); err != nil {
log.Warnf("Error while parsing shard in cluster secret '%s': %v", s.Name, err)
} else {
shard = ptr.To(int64(val))
}
}
cluster := appv1.Cluster{
ID: string(s.UID),
Server: strings.TrimRight(string(s.Data["server"]), "/"),
Name: string(s.Data["name"]),
Namespaces: namespaces,
Config: config,
RefreshRequestedAt: refreshRequestedAt,
Shard: shard,
}
return &cluster, nil
}

View File

@@ -20,6 +20,50 @@ const (
fakeNamespace = "fake-ns"
)
// From Argo CD util/db/cluster_test.go
func Test_secretToCluster(t *testing.T) {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "mycluster",
Namespace: fakeNamespace,
},
Data: map[string][]byte{
"name": []byte("test"),
"server": []byte("http://mycluster"),
"config": []byte("{\"username\":\"foo\"}"),
},
}
cluster, err := secretToCluster(secret)
require.NoError(t, err)
assert.Equal(t, argoappv1.Cluster{
Name: "test",
Server: "http://mycluster",
Config: argoappv1.ClusterConfig{
Username: "foo",
},
}, *cluster)
}
// From Argo CD util/db/cluster_test.go
func Test_secretToCluster_NoConfig(t *testing.T) {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "mycluster",
Namespace: fakeNamespace,
},
Data: map[string][]byte{
"name": []byte("test"),
"server": []byte("http://mycluster"),
},
}
cluster, err := secretToCluster(secret)
require.NoError(t, err)
assert.Equal(t, argoappv1.Cluster{
Name: "test",
Server: "http://mycluster",
}, *cluster)
}
func createClusterSecret(secretName string, clusterName string, clusterServer string) *corev1.Secret {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
@@ -48,12 +92,7 @@ func TestValidateDestination(t *testing.T) {
Namespace: "default",
}
secret := createClusterSecret("my-secret", "minikube", "https://127.0.0.1:6443")
objects := []runtime.Object{}
objects = append(objects, secret)
kubeclientset := fake.NewSimpleClientset(objects...)
appCond := ValidateDestination(context.Background(), &dest, kubeclientset, fakeNamespace)
appCond := ValidateDestination(context.Background(), &dest, nil, fakeNamespace)
require.NoError(t, appCond)
assert.False(t, dest.IsServerInferred())
})

View File

@@ -229,7 +229,7 @@ spec:
require.NoError(t, err)
yamlExpected, err := yaml.Marshal(tc.expectedApp)
require.NoError(t, err)
assert.YAMLEq(t, string(yamlExpected), string(yamlFound))
assert.Equal(t, string(yamlExpected), string(yamlFound))
})
}
}

View File

@@ -4,18 +4,14 @@ import (
"context"
"fmt"
"github.com/argoproj/argo-cd/v2/common"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
var ErrDisallowedSecretAccess = fmt.Errorf("secret must have label %q=%q", common.LabelKeySecretType, common.LabelValueSecretTypeSCMCreds)
// getSecretRef gets the value of the key for the specified Secret resource.
func GetSecretRef(ctx context.Context, k8sClient client.Client, ref *argoprojiov1alpha1.SecretRef, namespace string, tokenRefStrictMode bool) (string, error) {
func GetSecretRef(ctx context.Context, k8sClient client.Client, ref *argoprojiov1alpha1.SecretRef, namespace string) (string, error) {
if ref == nil {
return "", nil
}
@@ -31,11 +27,6 @@ func GetSecretRef(ctx context.Context, k8sClient client.Client, ref *argoprojiov
if err != nil {
return "", fmt.Errorf("error fetching secret %s/%s: %w", namespace, ref.SecretName, err)
}
if tokenRefStrictMode && secret.GetLabels()[common.LabelKeySecretType] != common.LabelValueSecretTypeSCMCreds {
return "", fmt.Errorf("secret %s/%s is not a valid SCM creds secret: %w", namespace, ref.SecretName, ErrDisallowedSecretAccess)
}
tokenBytes, ok := secret.Data[ref.Key]
if !ok {
return "", fmt.Errorf("key %q in secret %s/%s not found", ref.Key, namespace, ref.SecretName)

View File

@@ -67,7 +67,7 @@ func TestGetSecretRef(t *testing.T) {
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
token, err := GetSecretRef(ctx, client, c.ref, c.namespace, false)
token, err := GetSecretRef(ctx, client, c.ref, c.namespace)
if c.hasError {
require.Error(t, err)
} else {

View File

@@ -8,7 +8,10 @@ func ConvertToMapStringString(mapStringInterface map[string]interface{}) map[str
mapStringString := make(map[string]string, len(mapStringInterface))
for key, value := range mapStringInterface {
mapStringString[key] = fmt.Sprintf("%v", value)
strKey := fmt.Sprintf("%v", key)
strValue := fmt.Sprintf("%v", value)
mapStringString[strKey] = strValue
}
return mapStringString
}
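A standalone sketch of the conversion above: every value is rendered with fmt.Sprintf("%v"), so numbers and booleans become their string forms. The input map is illustrative.

package main

import "fmt"

// convertToMapStringString mirrors the helper shown above.
func convertToMapStringString(in map[string]interface{}) map[string]string {
	out := make(map[string]string, len(in))
	for key, value := range in {
		out[key] = fmt.Sprintf("%v", value)
	}
	return out
}

func main() {
	params := map[string]interface{}{"replicas": 3, "enabled": true, "name": "prod-01"}
	fmt.Println(convertToMapStringString(params)) // map[enabled:true name:prod-01 replicas:3]
}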

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.53.4. DO NOT EDIT.
// Code generated by mockery v2.43.2. DO NOT EDIT.
package mocks

View File

@@ -23,7 +23,6 @@ import (
log "github.com/sirupsen/logrus"
argoappsv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/glob"
)
var sprigFuncMap = sprig.GenericFuncMap() // a singleton for better performance
@@ -47,10 +46,6 @@ type Renderer interface {
type Render struct{}
func IsNamespaceAllowed(namespaces []string, namespace string) bool {
return glob.MatchStringInList(namespaces, namespace, glob.REGEXP)
}
func copyValueIntoUnexported(destination, value reflect.Value) {
reflect.NewAt(destination.Type(), unsafe.Pointer(destination.UnsafeAddr())).
Elem().
@@ -273,7 +268,7 @@ func (r *Render) RenderTemplateParams(tmpl *argoappsv1.Application, syncPolicy *
// b) there IS a syncPolicy, but preserveResourcesOnDeletion is set to false
// See TestRenderTemplateParamsFinalizers in util_test.go for test-based definition of behaviour
if (syncPolicy == nil || !syncPolicy.PreserveResourcesOnDeletion) &&
len(replacedTmpl.ObjectMeta.Finalizers) == 0 {
(replacedTmpl.ObjectMeta.Finalizers == nil || len(replacedTmpl.ObjectMeta.Finalizers) == 0) {
replacedTmpl.ObjectMeta.Finalizers = []string{"resources-finalizer.argocd.argoproj.io"}
}

View File

@@ -1,186 +0,0 @@
{
"ref": "refs/heads/env/dev",
"before": "d5c1ffa8e294bc18c639bfb4e0df499251034414",
"after": "63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"created": false,
"deleted": false,
"forced": true,
"base_ref": null,
"compare": "https://github.com/org/repo/compare/d5c1ffa8e294...63738bb582c8",
"commits": [
{
"id": "63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"tree_id": "64897da445207e409ad05af93b1f349ad0a4ee19",
"distinct": true,
"message": "Add staging-argocd-demo environment",
"timestamp": "2018-05-04T15:40:02-07:00",
"url": "https://github.com/org/repo/commit/63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"author": {
"name": "Jesse Suen",
"email": "Jesse_Suen@example.com",
"username": "org"
},
"committer": {
"name": "Jesse Suen",
"email": "Jesse_Suen@example.com",
"username": "org"
},
"added": [
"ksapps/test-app/environments/staging-argocd-demo/main.jsonnet",
"ksapps/test-app/environments/staging-argocd-demo/params.libsonnet"
],
"removed": [
],
"modified": [
"ksapps/test-app/app.yaml"
]
}
],
"head_commit": {
"id": "63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"tree_id": "64897da445207e409ad05af93b1f349ad0a4ee19",
"distinct": true,
"message": "Add staging-argocd-demo environment",
"timestamp": "2018-05-04T15:40:02-07:00",
"url": "https://github.com/org/repo/commit/63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"author": {
"name": "Jesse Suen",
"email": "Jesse_Suen@example.com",
"username": "org"
},
"committer": {
"name": "Jesse Suen",
"email": "Jesse_Suen@example.com",
"username": "org"
},
"added": [
"ksapps/test-app/environments/staging-argocd-demo/main.jsonnet",
"ksapps/test-app/environments/staging-argocd-demo/params.libsonnet"
],
"removed": [
],
"modified": [
"ksapps/test-app/app.yaml"
]
},
"repository": {
"id": 123060978,
"name": "repo",
"full_name": "org/repo",
"owner": {
"name": "org",
"email": "org@users.noreply.github.com",
"login": "org",
"id": 12677113,
"avatar_url": "https://avatars0.githubusercontent.com/u/12677113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/org",
"html_url": "https://github.com/org",
"followers_url": "https://api.github.com/users/org/followers",
"following_url": "https://api.github.com/users/org/following{/other_user}",
"gists_url": "https://api.github.com/users/org/gists{/gist_id}",
"starred_url": "https://api.github.com/users/org/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/org/subscriptions",
"organizations_url": "https://api.github.com/users/org/orgs",
"repos_url": "https://api.github.com/users/org/repos",
"events_url": "https://api.github.com/users/org/events{/privacy}",
"received_events_url": "https://api.github.com/users/org/received_events",
"type": "User",
"site_admin": false
},
"private": false,
"html_url": "https://github.com/org/repo",
"description": "Test Repository",
"fork": false,
"url": "https://github.com/org/repo",
"forks_url": "https://api.github.com/repos/org/repo/forks",
"keys_url": "https://api.github.com/repos/org/repo/keys{/key_id}",
"collaborators_url": "https://api.github.com/repos/org/repo/collaborators{/collaborator}",
"teams_url": "https://api.github.com/repos/org/repo/teams",
"hooks_url": "https://api.github.com/repos/org/repo/hooks",
"issue_events_url": "https://api.github.com/repos/org/repo/issues/events{/number}",
"events_url": "https://api.github.com/repos/org/repo/events",
"assignees_url": "https://api.github.com/repos/org/repo/assignees{/user}",
"branches_url": "https://api.github.com/repos/org/repo/branches{/branch}",
"tags_url": "https://api.github.com/repos/org/repo/tags",
"blobs_url": "https://api.github.com/repos/org/repo/git/blobs{/sha}",
"git_tags_url": "https://api.github.com/repos/org/repo/git/tags{/sha}",
"git_refs_url": "https://api.github.com/repos/org/repo/git/refs{/sha}",
"trees_url": "https://api.github.com/repos/org/repo/git/trees{/sha}",
"statuses_url": "https://api.github.com/repos/org/repo/statuses/{sha}",
"languages_url": "https://api.github.com/repos/org/repo/languages",
"stargazers_url": "https://api.github.com/repos/org/repo/stargazers",
"contributors_url": "https://api.github.com/repos/org/repo/contributors",
"subscribers_url": "https://api.github.com/repos/org/repo/subscribers",
"subscription_url": "https://api.github.com/repos/org/repo/subscription",
"commits_url": "https://api.github.com/repos/org/repo/commits{/sha}",
"git_commits_url": "https://api.github.com/repos/org/repo/git/commits{/sha}",
"comments_url": "https://api.github.com/repos/org/repo/comments{/number}",
"issue_comment_url": "https://api.github.com/repos/org/repo/issues/comments{/number}",
"contents_url": "https://api.github.com/repos/org/repo/contents/{+path}",
"compare_url": "https://api.github.com/repos/org/repo/compare/{base}...{head}",
"merges_url": "https://api.github.com/repos/org/repo/merges",
"archive_url": "https://api.github.com/repos/org/repo/{archive_format}{/ref}",
"downloads_url": "https://api.github.com/repos/org/repo/downloads",
"issues_url": "https://api.github.com/repos/org/repo/issues{/number}",
"pulls_url": "https://api.github.com/repos/org/repo/pulls{/number}",
"milestones_url": "https://api.github.com/repos/org/repo/milestones{/number}",
"notifications_url": "https://api.github.com/repos/org/repo/notifications{?since,all,participating}",
"labels_url": "https://api.github.com/repos/org/repo/labels{/name}",
"releases_url": "https://api.github.com/repos/org/repo/releases{/id}",
"deployments_url": "https://api.github.com/repos/org/repo/deployments",
"created_at": 1519698615,
"updated_at": "2018-05-04T22:37:55Z",
"pushed_at": 1525473610,
"git_url": "git://github.com/org/repo.git",
"ssh_url": "git@github.com:org/repo.git",
"clone_url": "https://github.com/org/repo.git",
"svn_url": "https://github.com/org/repo",
"homepage": null,
"size": 538,
"stargazers_count": 0,
"watchers_count": 0,
"language": null,
"has_issues": true,
"has_projects": true,
"has_downloads": true,
"has_wiki": true,
"has_pages": false,
"forks_count": 1,
"mirror_url": null,
"archived": false,
"open_issues_count": 0,
"license": null,
"forks": 1,
"open_issues": 0,
"watchers": 0,
"default_branch": "master",
"stargazers": 0,
"master_branch": "master"
},
"pusher": {
"name": "org",
"email": "org@users.noreply.github.com"
},
"sender": {
"login": "org",
"id": 12677113,
"avatar_url": "https://avatars0.githubusercontent.com/u/12677113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/org",
"html_url": "https://github.com/org",
"followers_url": "https://api.github.com/users/org/followers",
"following_url": "https://api.github.com/users/org/following{/other_user}",
"gists_url": "https://api.github.com/users/org/gists{/gist_id}",
"starred_url": "https://api.github.com/users/org/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/org/subscriptions",
"organizations_url": "https://api.github.com/users/org/orgs",
"repos_url": "https://api.github.com/users/org/repos",
"events_url": "https://api.github.com/users/org/events{/privacy}",
"received_events_url": "https://api.github.com/users/org/received_events",
"type": "User",
"site_admin": false
}
}

View File

@@ -19,7 +19,6 @@ import (
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
argosettings "github.com/argoproj/argo-cd/v2/util/settings"
"github.com/argoproj/argo-cd/v2/util/webhook"
"github.com/go-playground/webhooks/v6/azuredevops"
"github.com/go-playground/webhooks/v6/github"
@@ -191,6 +190,11 @@ func (h *WebhookHandler) Handler(w http.ResponseWriter, r *http.Request) {
}
}
func parseRevision(ref string) string {
refParts := strings.SplitN(ref, "/", 3)
return refParts[len(refParts)-1]
}
func getGitGeneratorInfo(payload interface{}) *gitGeneratorInfo {
var (
webURL string
@@ -200,16 +204,16 @@ func getGitGeneratorInfo(payload interface{}) *gitGeneratorInfo {
switch payload := payload.(type) {
case github.PushPayload:
webURL = payload.Repository.HTMLURL
revision = webhook.ParseRevision(payload.Ref)
revision = parseRevision(payload.Ref)
touchedHead = payload.Repository.DefaultBranch == revision
case gitlab.PushEventPayload:
webURL = payload.Project.WebURL
revision = webhook.ParseRevision(payload.Ref)
revision = parseRevision(payload.Ref)
touchedHead = payload.Project.DefaultBranch == revision
case azuredevops.GitPushEvent:
// See: https://learn.microsoft.com/en-us/azure/devops/service-hooks/events?view=azure-devops#git.push
webURL = payload.Resource.Repository.RemoteURL
revision = webhook.ParseRevision(payload.Resource.RefUpdates[0].Name)
revision = parseRevision(payload.Resource.RefUpdates[0].Name)
touchedHead = payload.Resource.RefUpdates[0].Name == payload.Resource.Repository.DefaultBranch
// unfortunately, Azure DevOps doesn't provide a list of changed files
default:
@@ -222,7 +226,7 @@ func getGitGeneratorInfo(payload interface{}) *gitGeneratorInfo {
log.Errorf("Failed to parse repoURL '%s'", webURL)
return nil
}
regexpStr := `(?i)(http://|https://|\w+@|ssh://(\w+@)?)` + urlObj.Hostname() + "(:[0-9]+|)[:/]" + urlObj.Path[1:] + "(\\.git)?$"
regexpStr := `(?i)(http://|https://|\w+@|ssh://(\w+@)?)` + urlObj.Hostname() + "(:[0-9]+|)[:/]" + urlObj.Path[1:] + "(\\.git)?"
repoRegexp, err := regexp.Compile(regexpStr)
if err != nil {
log.Errorf("Failed to compile regexp for repoURL '%s'", webURL)
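One side of the hunk above drops the trailing $ anchor from the generated repoRegexp, which changes how strictly repo URLs are matched. A minimal sketch of that difference, reusing the illustrative org/repo URL from the fixtures (the list of candidate URLs is made up for illustration):

package main

import (
	"fmt"
	"net/url"
	"regexp"
)

func main() {
	// Mirrors the regexpStr construction in getGitGeneratorInfo, without the trailing $.
	webURL := "https://github.com/org/repo"
	urlObj, err := url.Parse(webURL)
	if err != nil {
		panic(err)
	}
	regexpStr := `(?i)(http://|https://|\w+@|ssh://(\w+@)?)` + urlObj.Hostname() + "(:[0-9]+|)[:/]" + urlObj.Path[1:] + "(\\.git)?"
	repoRegexp := regexp.MustCompile(regexpStr)

	for _, candidate := range []string{
		"https://github.com/org/repo",
		"git@github.com:org/repo.git",
		"ssh://git@github.com:22/org/repo",
		"https://github.com/org/repo-copy", // matches only when the $ anchor is absent
	} {
		fmt.Printf("%-40s %v\n", candidate, repoRegexp.MatchString(candidate))
	}
}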
@@ -369,12 +373,12 @@ func shouldRefreshPluginGenerator(gen *v1alpha1.PluginGenerator) bool {
}
func genRevisionHasChanged(gen *v1alpha1.GitGenerator, revision string, touchedHead bool) bool {
targetRev := webhook.ParseRevision(gen.Revision)
targetRev := parseRevision(gen.Revision)
if targetRev == "HEAD" || targetRev == "" { // revision is head
return touchedHead
}
return targetRev == revision || gen.Revision == revision
return targetRev == revision
}
func gitGeneratorUsesURL(gen *v1alpha1.GitGenerator, webURL string, repoRegexp *regexp.Regexp) bool {

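The local parseRevision above replaces the shared webhook helper: it keeps only the final path segment of a ref, and genRevisionHasChanged compares that short form against the revision extracted from the webhook payload. A minimal, self-contained sketch of the parsing behavior (the sample refs are illustrative):

package main

import (
	"fmt"
	"strings"
)

// Same shape as the parseRevision shown in the diff above.
func parseRevision(ref string) string {
	refParts := strings.SplitN(ref, "/", 3)
	return refParts[len(refParts)-1]
}

func main() {
	for _, ref := range []string{"refs/heads/dev", "refs/tags/v3.14.1", "refs/heads/env/dev", "main"} {
		// "refs/heads/dev" -> "dev", "refs/tags/v3.14.1" -> "v3.14.1",
		// "refs/heads/env/dev" -> "env/dev" (SplitN keeps the remainder), "main" -> "main"
		fmt.Printf("%-22s -> %s\n", ref, parseRevision(ref))
	}
}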
View File

@@ -67,15 +67,6 @@ func TestWebhookHandler(t *testing.T) {
expectedStatusCode: http.StatusOK,
expectedRefresh: true,
},
{
desc: "WebHook from a GitHub repository via Commit shorthand",
headerKey: "X-GitHub-Event",
headerValue: "push",
payloadFile: "github-commit-event-feature-branch.json",
effectedAppSets: []string{"github-shorthand", "matrix-pull-request-github-plugin", "plugin"},
expectedStatusCode: http.StatusOK,
expectedRefresh: true,
},
{
desc: "WebHook from a GitHub repository via Commit to branch",
headerKey: "X-GitHub-Event",
@@ -199,10 +190,8 @@ func TestWebhookHandler(t *testing.T) {
t.Run(test.desc, func(t *testing.T) {
fc := fake.NewClientBuilder().WithScheme(scheme).WithObjects(
fakeAppWithGitGenerator("git-github", namespace, "https://github.com/org/repo"),
fakeAppWithGitGenerator("git-github-copy", namespace, "https://github.com/org/repo-copy"),
fakeAppWithGitGenerator("git-gitlab", namespace, "https://gitlab/group/name"),
fakeAppWithGitGenerator("git-azure-devops", namespace, "https://dev.azure.com/fabrikam-fiber-inc/DefaultCollection/_git/Fabrikam-Fiber-Git"),
fakeAppWithGitGeneratorWithRevision("github-shorthand", namespace, "https://github.com/org/repo", "env/dev"),
fakeAppWithGithubPullRequestGenerator("pull-request-github", namespace, "CodErTOcat", "Hello-World"),
fakeAppWithGitlabPullRequestGenerator("pull-request-gitlab", namespace, "100500"),
fakeAppWithAzureDevOpsPullRequestGenerator("pull-request-azure-devops", namespace, "DefaultCollection", "Fabrikam"),
@@ -243,8 +232,9 @@ func TestWebhookHandler(t *testing.T) {
for i := range list.Items {
gotAppSet := &list.Items[i]
if _, isEffected := effectedAppSetsAsExpected[gotAppSet.Name]; isEffected {
expected, got := test.expectedRefresh, gotAppSet.RefreshRequired()
require.Equalf(t, expected, got, "unexpected RefreshRequired() for appset '%s' expect: %v got: %v", gotAppSet.Name, expected, got)
if expected, got := test.expectedRefresh, gotAppSet.RefreshRequired(); expected != got {
t.Errorf("unexpected RefreshRequired() for appset '%s' expect: %v got: %v", gotAppSet.Name, expected, got)
}
effectedAppSetsAsExpected[gotAppSet.Name] = true
} else {
assert.False(t, gotAppSet.RefreshRequired())
@@ -312,62 +302,14 @@ func mockGenerators() map[string]generators.Generator {
}
func TestGenRevisionHasChanged(t *testing.T) {
type args struct {
gen *v1alpha1.GitGenerator
revision string
touchedHead bool
}
tests := []struct {
name string
args args
want bool
}{
{name: "touchedHead", args: args{
gen: &v1alpha1.GitGenerator{},
revision: "main",
touchedHead: true,
}, want: true},
{name: "didntTouchHead", args: args{
gen: &v1alpha1.GitGenerator{},
revision: "main",
touchedHead: false,
}, want: false},
{name: "foundEqualShort", args: args{
gen: &v1alpha1.GitGenerator{Revision: "dev"},
revision: "dev",
touchedHead: true,
}, want: true},
{name: "foundEqualLongGen", args: args{
gen: &v1alpha1.GitGenerator{Revision: "refs/heads/dev"},
revision: "dev",
touchedHead: true,
}, want: true},
{name: "foundNotEqualLongGen", args: args{
gen: &v1alpha1.GitGenerator{Revision: "refs/heads/dev"},
revision: "main",
touchedHead: true,
}, want: false},
{name: "foundNotEqualShort", args: args{
gen: &v1alpha1.GitGenerator{Revision: "dev"},
revision: "main",
touchedHead: false,
}, want: false},
{name: "foundEqualTag", args: args{
gen: &v1alpha1.GitGenerator{Revision: "v3.14.1"},
revision: "v3.14.1",
touchedHead: false,
}, want: true},
{name: "foundEqualTagLongGen", args: args{
gen: &v1alpha1.GitGenerator{Revision: "refs/tags/v3.14.1"},
revision: "v3.14.1",
touchedHead: false,
}, want: true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
assert.Equalf(t, tt.want, genRevisionHasChanged(tt.args.gen, tt.args.revision, tt.args.touchedHead), "genRevisionHasChanged(%v, %v, %v)", tt.args.gen, tt.args.revision, tt.args.touchedHead)
})
}
assert.True(t, genRevisionHasChanged(&v1alpha1.GitGenerator{}, "master", true))
assert.False(t, genRevisionHasChanged(&v1alpha1.GitGenerator{}, "master", false))
assert.True(t, genRevisionHasChanged(&v1alpha1.GitGenerator{Revision: "dev"}, "dev", true))
assert.False(t, genRevisionHasChanged(&v1alpha1.GitGenerator{Revision: "dev"}, "master", false))
assert.True(t, genRevisionHasChanged(&v1alpha1.GitGenerator{Revision: "refs/heads/dev"}, "dev", true))
assert.False(t, genRevisionHasChanged(&v1alpha1.GitGenerator{Revision: "refs/heads/dev"}, "master", false))
}
func fakeAppWithGitGenerator(name, namespace, repo string) *v1alpha1.ApplicationSet {
@@ -389,12 +331,6 @@ func fakeAppWithGitGenerator(name, namespace, repo string) *v1alpha1.Application
}
}
func fakeAppWithGitGeneratorWithRevision(name, namespace, repo, revision string) *v1alpha1.ApplicationSet {
appSet := fakeAppWithGitGenerator(name, namespace, repo)
appSet.Spec.Generators[0].Git.Revision = revision
return appSet
}
func fakeAppWithGitlabPullRequestGenerator(name, namespace, projectId string) *v1alpha1.ApplicationSet {
return &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -775,7 +711,7 @@ func fakeAppWithMatrixAndPullRequestGeneratorWithPluginGenerator(name, namespace
func newFakeClient(ns string) *kubefake.Clientset {
s := runtime.NewScheme()
s.AddKnownTypes(v1alpha1.SchemeGroupVersion, &v1alpha1.ApplicationSet{})
return kubefake.NewClientset(&corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "argocd-cm", Namespace: ns, Labels: map[string]string{
return kubefake.NewSimpleClientset(&corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "argocd-cm", Namespace: ns, Labels: map[string]string{
"app.kubernetes.io/part-of": "argocd",
}}}, &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{

View File

@@ -10,7 +10,6 @@ p, role:readonly, applications, get, */*, allow
p, role:readonly, certificates, get, *, allow
p, role:readonly, clusters, get, *, allow
p, role:readonly, repositories, get, *, allow
p, role:readonly, write-repositories, get, *, allow
p, role:readonly, projects, get, *, allow
p, role:readonly, accounts, get, *, allow
p, role:readonly, gpgkeys, get, *, allow
@@ -18,9 +17,7 @@ p, role:readonly, logs, get, */*, allow
p, role:admin, applications, create, */*, allow
p, role:admin, applications, update, */*, allow
p, role:admin, applications, update/*, */*, allow
p, role:admin, applications, delete, */*, allow
p, role:admin, applications, delete/*, */*, allow
p, role:admin, applications, sync, */*, allow
p, role:admin, applications, override, */*, allow
p, role:admin, applications, action/*, */*, allow
@@ -37,9 +34,6 @@ p, role:admin, clusters, delete, *, allow
p, role:admin, repositories, create, *, allow
p, role:admin, repositories, update, *, allow
p, role:admin, repositories, delete, *, allow
p, role:admin, write-repositories, create, *, allow
p, role:admin, write-repositories, update, *, allow
p, role:admin, write-repositories, delete, *, allow
p, role:admin, projects, create, *, allow
p, role:admin, projects, update, *, allow
p, role:admin, projects, delete, *, allow
@@ -49,4 +43,4 @@ p, role:admin, gpgkeys, delete, *, allow
p, role:admin, exec, create, */*, allow
g, role:admin, role:readonly
g, admin, role:admin
g, admin, role:admin

assets/swagger.json generated (772 changes)
View File

@@ -1990,39 +1990,6 @@
}
}
},
"/api/v1/applicationsets/generate": {
"post": {
"tags": [
"ApplicationSetService"
],
"summary": "Generate generates",
"operationId": "ApplicationSetService_Generate",
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/applicationsetApplicationSetGenerateRequest"
}
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/applicationsetApplicationSetGenerateResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/applicationsets/{name}": {
"get": {
"tags": [
@@ -3323,6 +3290,12 @@
"description": "App project for query.",
"name": "appProject",
"in": "query"
},
{
"type": "string",
"description": "Type determines what kind of credential we're interacting with. It can be \"read\", \"write\", or \"both\". Default is\n\"read\".",
"name": "type",
"in": "query"
}
],
"responses": {
@@ -3367,6 +3340,12 @@
"description": "Whether to operate on credential set instead of repository.",
"name": "credsOnly",
"in": "query"
},
{
"type": "boolean",
"description": "Write determines whether the credential will be stored as a read credential or a write credential.",
"name": "write",
"in": "query"
}
],
"responses": {
@@ -3407,6 +3386,12 @@
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
},
{
"type": "boolean",
"description": "Write determines whether the credential to be updated is a read credential or a write credential.",
"name": "write",
"in": "query"
}
],
"responses": {
@@ -3451,6 +3436,12 @@
"description": "App project for query.",
"name": "appProject",
"in": "query"
},
{
"type": "string",
"description": "Type determines what kind of credential we're interacting with. It can be \"read\", \"write\", or \"both\". Default is\n\"read\".",
"name": "type",
"in": "query"
}
],
"responses": {
@@ -3493,6 +3484,12 @@
"description": "App project for query.",
"name": "appProject",
"in": "query"
},
{
"type": "string",
"description": "Type determines what kind of credential we're interacting with. It can be \"read\", \"write\", or \"both\". Default is\n\"read\".",
"name": "type",
"in": "query"
}
],
"responses": {
@@ -3583,6 +3580,12 @@
"description": "App project for query.",
"name": "appProject",
"in": "query"
},
{
"type": "string",
"description": "Type determines what kind of credential we're interacting with. It can be \"read\", \"write\", or \"both\". Default is\n\"read\".",
"name": "type",
"in": "query"
}
],
"responses": {
@@ -3626,6 +3629,12 @@
"description": "App project for query.",
"name": "appProject",
"in": "query"
},
{
"type": "string",
"description": "Type determines what kind of credential we're interacting with. It can be \"read\", \"write\", or \"both\". Default is\n\"read\".",
"name": "type",
"in": "query"
}
],
"responses": {
@@ -4117,504 +4126,6 @@
}
}
},
"/api/v1/write-repocreds": {
"get": {
"tags": [
"RepoCredsService"
],
"summary": "ListWriteRepositoryCredentials gets a list of all configured repository credential sets that have write access",
"operationId": "RepoCredsService_ListWriteRepositoryCredentials",
"parameters": [
{
"type": "string",
"description": "Repo URL for query.",
"name": "url",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1RepoCredsList"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
},
"post": {
"tags": [
"RepoCredsService"
],
"summary": "CreateWriteRepositoryCredentials creates a new repository credential set with write access",
"operationId": "RepoCredsService_CreateWriteRepositoryCredentials",
"parameters": [
{
"description": "Repository definition",
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1alpha1RepoCreds"
}
},
{
"type": "boolean",
"description": "Whether to create in upsert mode.",
"name": "upsert",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1RepoCreds"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/write-repocreds/{creds.url}": {
"put": {
"tags": [
"RepoCredsService"
],
"summary": "UpdateWriteRepositoryCredentials updates a repository credential set with write access",
"operationId": "RepoCredsService_UpdateWriteRepositoryCredentials",
"parameters": [
{
"type": "string",
"description": "URL is the URL to which these credentials match",
"name": "creds.url",
"in": "path",
"required": true
},
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1alpha1RepoCreds"
}
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1RepoCreds"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/write-repocreds/{url}": {
"delete": {
"tags": [
"RepoCredsService"
],
"summary": "DeleteWriteRepositoryCredentials deletes a repository credential set with write access from the configuration",
"operationId": "RepoCredsService_DeleteWriteRepositoryCredentials",
"parameters": [
{
"type": "string",
"name": "url",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/repocredsRepoCredsResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/write-repositories": {
"get": {
"tags": [
"RepositoryService"
],
"summary": "ListWriteRepositories gets a list of all configured write repositories",
"operationId": "RepositoryService_ListWriteRepositories",
"parameters": [
{
"type": "string",
"description": "Repo URL for query.",
"name": "repo",
"in": "query"
},
{
"type": "boolean",
"description": "Whether to force a cache refresh on repo's connection state.",
"name": "forceRefresh",
"in": "query"
},
{
"type": "string",
"description": "App project for query.",
"name": "appProject",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1RepositoryList"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
},
"post": {
"tags": [
"RepositoryService"
],
"summary": "CreateWriteRepository creates a new write repository configuration",
"operationId": "RepositoryService_CreateWriteRepository",
"parameters": [
{
"description": "Repository definition",
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
},
{
"type": "boolean",
"description": "Whether to create in upsert mode.",
"name": "upsert",
"in": "query"
},
{
"type": "boolean",
"description": "Whether to operate on credential set instead of repository.",
"name": "credsOnly",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/write-repositories/{repo.repo}": {
"put": {
"tags": [
"RepositoryService"
],
"summary": "UpdateWriteRepository updates a write repository configuration",
"operationId": "RepositoryService_UpdateWriteRepository",
"parameters": [
{
"type": "string",
"description": "Repo contains the URL to the remote repository",
"name": "repo.repo",
"in": "path",
"required": true
},
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/write-repositories/{repo}": {
"get": {
"tags": [
"RepositoryService"
],
"summary": "GetWrite returns a repository or its write credentials",
"operationId": "RepositoryService_GetWrite",
"parameters": [
{
"type": "string",
"description": "Repo URL for query",
"name": "repo",
"in": "path",
"required": true
},
{
"type": "boolean",
"description": "Whether to force a cache refresh on repo's connection state.",
"name": "forceRefresh",
"in": "query"
},
{
"type": "string",
"description": "App project for query.",
"name": "appProject",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
},
"delete": {
"tags": [
"RepositoryService"
],
"summary": "DeleteWriteRepository deletes a write repository from the configuration",
"operationId": "RepositoryService_DeleteWriteRepository",
"parameters": [
{
"type": "string",
"description": "Repo URL for query",
"name": "repo",
"in": "path",
"required": true
},
{
"type": "boolean",
"description": "Whether to force a cache refresh on repo's connection state.",
"name": "forceRefresh",
"in": "query"
},
{
"type": "string",
"description": "App project for query.",
"name": "appProject",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/repositoryRepoResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/write-repositories/{repo}/validate": {
"post": {
"tags": [
"RepositoryService"
],
"summary": "ValidateWriteAccess validates write access to a repository with given parameters",
"operationId": "RepositoryService_ValidateWriteAccess",
"parameters": [
{
"type": "string",
"description": "The URL to the repo",
"name": "repo",
"in": "path",
"required": true
},
{
"description": "The URL to the repo",
"name": "body",
"in": "body",
"required": true,
"schema": {
"type": "string"
}
},
{
"type": "string",
"description": "Username for accessing repo.",
"name": "username",
"in": "query"
},
{
"type": "string",
"description": "Password for accessing repo.",
"name": "password",
"in": "query"
},
{
"type": "string",
"description": "Private key data for accessing SSH repository.",
"name": "sshPrivateKey",
"in": "query"
},
{
"type": "boolean",
"description": "Whether to skip certificate or host key validation.",
"name": "insecure",
"in": "query"
},
{
"type": "string",
"description": "TLS client cert data for accessing HTTPS repository.",
"name": "tlsClientCertData",
"in": "query"
},
{
"type": "string",
"description": "TLS client cert key for accessing HTTPS repository.",
"name": "tlsClientCertKey",
"in": "query"
},
{
"type": "string",
"description": "The type of the repo.",
"name": "type",
"in": "query"
},
{
"type": "string",
"description": "The name of the repo.",
"name": "name",
"in": "query"
},
{
"type": "boolean",
"description": "Whether helm-oci support should be enabled for this repo.",
"name": "enableOci",
"in": "query"
},
{
"type": "string",
"description": "Github App Private Key PEM data.",
"name": "githubAppPrivateKey",
"in": "query"
},
{
"type": "string",
"format": "int64",
"description": "Github App ID of the app used to access the repo.",
"name": "githubAppID",
"in": "query"
},
{
"type": "string",
"format": "int64",
"description": "Github App Installation ID of the installed GitHub App.",
"name": "githubAppInstallationID",
"in": "query"
},
{
"type": "string",
"description": "Github App Enterprise base url if empty will default to https://api.github.com.",
"name": "githubAppEnterpriseBaseUrl",
"in": "query"
},
{
"type": "string",
"description": "HTTP/HTTPS proxy to access the repository.",
"name": "proxy",
"in": "query"
},
{
"type": "string",
"description": "Reference between project and repository that allow you automatically to be added as item inside SourceRepos project entity.",
"name": "project",
"in": "query"
},
{
"type": "string",
"description": "Google Cloud Platform service account key.",
"name": "gcpServiceAccountKey",
"in": "query"
},
{
"type": "boolean",
"description": "Whether to force HTTP basic auth.",
"name": "forceHttpBasicAuth",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/repositoryRepoResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/version": {
"get": {
"tags": [
@@ -5020,27 +4531,6 @@
}
}
},
"applicationsetApplicationSetGenerateRequest": {
"type": "object",
"title": "ApplicationSetGetQuery is a query for applicationset resources",
"properties": {
"applicationSet": {
"$ref": "#/definitions/v1alpha1ApplicationSet"
}
}
},
"applicationsetApplicationSetGenerateResponse": {
"type": "object",
"title": "ApplicationSetGenerateResponse is a response for applicationset generate request",
"properties": {
"applications": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1Application"
}
}
}
},
"applicationsetApplicationSetResponse": {
"type": "object",
"properties": {
@@ -5066,46 +4556,6 @@
}
}
},
"applicationv1alpha1ResourceStatus": {
"type": "object",
"title": "ResourceStatus holds the current sync and health status of a resource\nTODO: describe members of this type",
"properties": {
"group": {
"type": "string"
},
"health": {
"$ref": "#/definitions/v1alpha1HealthStatus"
},
"hook": {
"type": "boolean"
},
"kind": {
"type": "string"
},
"name": {
"type": "string"
},
"namespace": {
"type": "string"
},
"requiresDeletionConfirmation": {
"type": "boolean"
},
"requiresPruning": {
"type": "boolean"
},
"status": {
"type": "string"
},
"syncWave": {
"type": "integer",
"format": "int64"
},
"version": {
"type": "string"
}
}
},
"clusterClusterID": {
"type": "object",
"title": "ClusterID holds a cluster server URL or cluster name",
@@ -5222,12 +4672,6 @@
"clusterSettings": {
"type": "object",
"properties": {
"additionalUrls": {
"type": "array",
"items": {
"type": "string"
}
},
"appLabelKey": {
"type": "string"
},
@@ -5256,15 +4700,6 @@
"help": {
"$ref": "#/definitions/clusterHelp"
},
"hydratorEnabled": {
"type": "boolean"
},
"impersonationEnabled": {
"type": "boolean"
},
"installationID": {
"type": "string"
},
"kustomizeOptions": {
"$ref": "#/definitions/v1alpha1KustomizeOptions"
},
@@ -6071,7 +5506,7 @@
"properties": {
"matchExpressions": {
"type": "array",
"title": "matchExpressions is a list of label selector requirements. The requirements are ANDed.\n+optional\n+listType=atomic",
"title": "matchExpressions is a list of label selector requirements. The requirements are ANDed.\n+optional",
"items": {
"$ref": "#/definitions/v1LabelSelectorRequirement"
}
@@ -6099,7 +5534,7 @@
},
"values": {
"type": "array",
"title": "values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch.\n+optional\n+listType=atomic",
"title": "values is an array of string values. If the operator is In or NotIn,\nthe values array must be non-empty. If the operator is Exists or DoesNotExist,\nthe values array must be empty. This array is replaced during a strategic\nmerge patch.\n+optional",
"items": {
"type": "string"
}
@@ -6223,7 +5658,7 @@
"type": "string"
},
"kubeProxyVersion": {
"description": "Deprecated: KubeProxy Version reported by the node.",
"description": "KubeProxy Version reported by the node.",
"type": "string"
},
"kubeletVersion": {
@@ -6272,7 +5707,7 @@
},
"finalizers": {
"type": "array",
"title": "Must be empty before the object is deleted from the registry. Each entry\nis an identifier for the responsible component that will remove the entry\nfrom the list. If the deletionTimestamp of the object is non-nil, entries\nin this list can only be removed.\nFinalizers may be processed and removed in any order. Order is NOT enforced\nbecause it introduces significant risk of stuck finalizers.\nfinalizers is a shared field, any actor with permission can reorder it.\nIf the finalizer list is processed in order, then this can lead to a situation\nin which the component responsible for the first finalizer in the list is\nwaiting for a signal (field value, external system, or other) produced by a\ncomponent responsible for a finalizer later in the list, resulting in a deadlock.\nWithout enforced ordering finalizers are free to order amongst themselves and\nare not vulnerable to ordering changes in the list.\n+optional\n+patchStrategy=merge\n+listType=set",
"title": "Must be empty before the object is deleted from the registry. Each entry\nis an identifier for the responsible component that will remove the entry\nfrom the list. If the deletionTimestamp of the object is non-nil, entries\nin this list can only be removed.\nFinalizers may be processed and removed in any order. Order is NOT enforced\nbecause it introduces significant risk of stuck finalizers.\nfinalizers is a shared field, any actor with permission can reorder it.\nIf the finalizer list is processed in order, then this can lead to a situation\nin which the component responsible for the first finalizer in the list is\nwaiting for a signal (field value, external system, or other) produced by a\ncomponent responsible for a finalizer later in the list, resulting in a deadlock.\nWithout enforced ordering finalizers are free to order amongst themselves and\nare not vulnerable to ordering changes in the list.\n+optional\n+patchStrategy=merge",
"items": {
"type": "string"
}
@@ -6294,7 +5729,7 @@
}
},
"managedFields": {
"description": "ManagedFields maps workflow-id and version to the set of fields\nthat are managed by that workflow. This is mostly for internal\nhousekeeping, and users typically shouldn't need to set or\nunderstand this field. A workflow can be the user's name, a\ncontroller's name, or the name of a specific apply path like\n\"ci-cd\". The set of fields is always in the version that the\nworkflow used when modifying the object.\n\n+optional\n+listType=atomic",
"description": "ManagedFields maps workflow-id and version to the set of fields\nthat are managed by that workflow. This is mostly for internal\nhousekeeping, and users typically shouldn't need to set or\nunderstand this field. A workflow can be the user's name, a\ncontroller's name, or the name of a specific apply path like\n\"ci-cd\". The set of fields is always in the version that the\nworkflow used when modifying the object.\n\n+optional",
"type": "array",
"items": {
"$ref": "#/definitions/v1ManagedFieldsEntry"
@@ -6310,7 +5745,7 @@
},
"ownerReferences": {
"type": "array",
"title": "List of objects depended by this object. If ALL objects in the list have\nbeen deleted, this object will be garbage collected. If this object is managed by a controller,\nthen an entry in this list will point to this controller, with the controller field set to true.\nThere cannot be more than one managing controller.\n+optional\n+patchMergeKey=uid\n+patchStrategy=merge\n+listType=map\n+listMapKey=uid",
"title": "List of objects depended by this object. If ALL objects in the list have\nbeen deleted, this object will be garbage collected. If this object is managed by a controller,\nthen an entry in this list will point to this controller, with the controller field set to true.\nThere cannot be more than one managing controller.\n+optional\n+patchMergeKey=uid\n+patchStrategy=merge",
"items": {
"$ref": "#/definitions/v1OwnerReference"
}
@@ -6486,13 +5921,6 @@
"type": "string",
"title": "Description contains optional project description"
},
"destinationServiceAccounts": {
"description": "DestinationServiceAccounts holds information about the service accounts to be impersonated for the application sync operation for each destination.",
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1ApplicationDestinationServiceAccount"
}
},
"destinations": {
"type": "array",
"title": "Destinations contains list of destinations available for deployment",
@@ -6624,24 +6052,6 @@
}
}
},
"v1alpha1ApplicationDestinationServiceAccount": {
"description": "ApplicationDestinationServiceAccount holds information about the service account to be impersonated for the application sync operation.",
"type": "object",
"properties": {
"defaultServiceAccount": {
"type": "string",
"title": "DefaultServiceAccount to be used for impersonation during the sync operation"
},
"namespace": {
"description": "Namespace specifies the target namespace for the application's resources.",
"type": "string"
},
"server": {
"description": "Server specifies the URL of the target cluster's Kubernetes control plane API.",
"type": "string"
}
}
},
"v1alpha1ApplicationList": {
"type": "object",
"title": "ApplicationList is list of Application resources\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object",
@@ -6966,7 +6376,7 @@
"description": "Resources is a list of Applications resources managed by this application set.",
"type": "array",
"items": {
"$ref": "#/definitions/applicationv1alpha1ResourceStatus"
"$ref": "#/definitions/v1alpha1ResourceStatus"
}
}
}
@@ -7069,10 +6479,6 @@
"kustomize": {
"$ref": "#/definitions/v1alpha1ApplicationSourceKustomize"
},
"name": {
"description": "Name is used to refer to a source and is displayed in the UI. It is used in multi-source Applications.",
"type": "string"
},
"path": {
"description": "Path is a directory path within the Git repository, and is only valid for applications sourced from Git.",
"type": "string"
@@ -7164,14 +6570,6 @@
"type": "boolean",
"title": "SkipCrds skips custom resource definition installation step (Helm's --skip-crds)"
},
"skipSchemaValidation": {
"type": "boolean",
"title": "SkipSchemaValidation skips JSON schema validation (Helm's --skip-schema-validation)"
},
"skipTests": {
"description": "SkipTests skips test manifest installation step (Helm's --skip-tests).",
"type": "boolean"
},
"valueFiles": {
"type": "array",
"title": "ValuesFiles is a list of Helm value files to use when generating a template",
@@ -7448,7 +6846,7 @@
"type": "array",
"title": "Resources is a list of Kubernetes resources managed by this application",
"items": {
"$ref": "#/definitions/applicationv1alpha1ResourceStatus"
"$ref": "#/definitions/v1alpha1ResourceStatus"
}
},
"sourceHydrator": {
@@ -7517,11 +6915,6 @@
"items": {
"$ref": "#/definitions/v1alpha1ResourceNode"
}
},
"shardsCount": {
"type": "integer",
"format": "int64",
"title": "ShardsCount contains total number of shards the application tree is split into"
}
}
},
@@ -7701,20 +7094,12 @@
"description": "Server requires Bearer authentication. This client will not attempt to use\nrefresh tokens for an OAuth2 flow.\nTODO: demonstrate an OAuth2 compatible client.",
"type": "string"
},
"disableCompression": {
"description": "DisableCompression bypasses automatic GZip compression requests to the server.",
"type": "boolean"
},
"execProviderConfig": {
"$ref": "#/definitions/v1alpha1ExecProviderConfig"
},
"password": {
"type": "string"
},
"proxyUrl": {
"type": "string",
"title": "ProxyURL is the URL to the proxy to be used for all requests send to the server"
},
"tlsClientConfig": {
"$ref": "#/definitions/v1alpha1TLSClientConfig"
},
@@ -7728,10 +7113,6 @@
"description": "ClusterGenerator defines a generator to match against clusters registered with ArgoCD.",
"type": "object",
"properties": {
"flatList": {
"type": "boolean",
"title": "returns the clusters a single 'clusters' value in the template"
},
"selector": {
"$ref": "#/definitions/v1LabelSelector"
},
@@ -8069,9 +7450,6 @@
"type": "object",
"title": "HealthStatus contains information about the currently observed health state of an application or resource",
"properties": {
"lastTransitionTime": {
"$ref": "#/definitions/v1Time"
},
"message": {
"type": "string",
"title": "Message is a human-readable informational message describing the health status"
@@ -8908,10 +8286,6 @@
"type": "string",
"title": "GithubAppPrivateKey specifies the private key PEM data for authentication via GitHub app"
},
"noProxy": {
"type": "string",
"title": "NoProxy specifies a list of targets where the proxy isn't used, applies only in cases where the proxy is applied"
},
"password": {
"type": "string",
"title": "Password for authenticating at the repo server"
@@ -9018,10 +8392,6 @@
"type": "string",
"title": "Name specifies a name to be used for this repo. Only used with Helm repos"
},
"noProxy": {
"type": "string",
"title": "NoProxy specifies a list of targets where the proxy isn't used, applies only in cases where the proxy is applied"
},
"password": {
"type": "string",
"title": "Password contains the password or PAT used for authenticating at the remote repository"
@@ -9419,6 +8789,43 @@
}
}
},
"v1alpha1ResourceStatus": {
"type": "object",
"title": "ResourceStatus holds the current sync and health status of a resource\nTODO: describe members of this type",
"properties": {
"group": {
"type": "string"
},
"health": {
"$ref": "#/definitions/v1alpha1HealthStatus"
},
"hook": {
"type": "boolean"
},
"kind": {
"type": "string"
},
"name": {
"type": "string"
},
"namespace": {
"type": "string"
},
"requiresPruning": {
"type": "boolean"
},
"status": {
"type": "string"
},
"syncWave": {
"type": "integer",
"format": "int64"
},
"version": {
"type": "string"
}
}
},
"v1alpha1RetryStrategy": {
"type": "object",
"title": "RetryStrategy contains information about the strategy to apply when a sync failed",
@@ -9849,11 +9256,6 @@
"description": "SyncOperation contains details about a sync operation.",
"type": "object",
"properties": {
"autoHealAttemptsCount": {
"type": "integer",
"format": "int64",
"title": "SelfHealAttemptsCount contains the number of auto-heal attempts"
},
"dryRun": {
"type": "boolean",
"title": "DryRun specifies to perform a `kubectl apply --dry-run` without actually performing the sync"

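The swagger excerpt above includes dedicated write-credential routes under /api/v1/write-repositories and /api/v1/write-repocreds on the side of the comparison that carries them. A minimal sketch of calling the ListWriteRepositories operation over plain HTTP; the server address and bearer token are placeholders, not values from this diff:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical API server and bearer token.
	const server = "https://argocd.example.com"
	const token = "<auth-token>"

	// GET /api/v1/write-repositories, per the RepositoryService_ListWriteRepositories operation above.
	req, err := http.NewRequest(http.MethodGet, server+"/api/v1/write-repositories", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // a v1alpha1RepositoryList on success
}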
View File

@@ -6,7 +6,6 @@ import (
"math"
"os"
"os/signal"
"runtime/debug"
"syscall"
"time"
@@ -14,7 +13,6 @@ import (
"github.com/redis/go-redis/v9"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
@@ -27,7 +25,6 @@ import (
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/v2/pkg/ratelimiter"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/argo/normalizers"
cacheutil "github.com/argoproj/argo-cd/v2/util/cache"
appstatecache "github.com/argoproj/argo-cd/v2/util/cache/appstate"
@@ -61,18 +58,12 @@ func NewCommand() *cobra.Command {
repoServerTimeoutSeconds int
commitServerAddress string
selfHealTimeoutSeconds int
selfHealBackoffTimeoutSeconds int
selfHealBackoffFactor int
selfHealBackoffCapSeconds int
selfHealBackoffCooldownSeconds int
syncTimeout int
statusProcessors int
operationProcessors int
glogLevel int
metricsPort int
metricsCacheExpiration time.Duration
metricsAplicationLabels []string
metricsAplicationConditions []string
kubectlParallelismLimit int64
cacheSource func() (*appstatecache.Cache, error)
redisClient *redis.Client
@@ -88,10 +79,6 @@ func NewCommand() *cobra.Command {
enableDynamicClusterDistribution bool
serverSideDiff bool
ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts
// argocd k8s event logging flag
enableK8sEvent []string
hydratorEnabled bool
)
command := cobra.Command{
Use: cliName,
@@ -116,13 +103,6 @@ func NewCommand() *cobra.Command {
cli.SetLogLevel(cmdutil.LogLevel)
cli.SetGLogLevel(glogLevel)
// Recover from panic and log the error using the configured logger instead of the default.
defer func() {
if r := recover(); r != nil {
log.WithField("trace", string(debug.Stack())).Fatal("Recovered from panic: ", r)
}
}()
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
errors.CheckError(v1alpha1.SetK8SConfigDefaults(config))
@@ -175,14 +155,6 @@ func NewCommand() *cobra.Command {
kubectl := kubeutil.NewKubectl()
clusterSharding, err := sharding.GetClusterSharding(kubeClient, settingsMgr, shardingAlgorithm, enableDynamicClusterDistribution)
errors.CheckError(err)
var selfHealBackoff *wait.Backoff
if selfHealBackoffTimeoutSeconds != 0 {
selfHealBackoff = &wait.Backoff{
Duration: time.Duration(selfHealBackoffTimeoutSeconds) * time.Second,
Factor: float64(selfHealBackoffFactor),
Cap: time.Duration(selfHealBackoffCapSeconds) * time.Second,
}
}
appController, err = controller.NewApplicationController(
namespace,
settingsMgr,
@@ -196,14 +168,10 @@ func NewCommand() *cobra.Command {
hardResyncDuration,
time.Duration(appResyncJitter)*time.Second,
time.Duration(selfHealTimeoutSeconds)*time.Second,
selfHealBackoff,
time.Duration(selfHealBackoffCooldownSeconds)*time.Second,
time.Duration(syncTimeout)*time.Second,
time.Duration(repoErrorGracePeriod)*time.Second,
metricsPort,
metricsCacheExpiration,
metricsAplicationLabels,
metricsAplicationConditions,
kubectlParallelismLimit,
persistResourceHealth,
clusterSharding,
@@ -212,11 +180,9 @@ func NewCommand() *cobra.Command {
serverSideDiff,
enableDynamicClusterDistribution,
ignoreNormalizerOpts,
enableK8sEvent,
hydratorEnabled,
)
errors.CheckError(err)
cacheutil.CollectMetrics(redisClient, appController.GetMetricsServer(), nil)
cacheutil.CollectMetrics(redisClient, appController.GetMetricsServer())
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
@@ -264,17 +230,11 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortArgoCDMetrics, "Start metrics server on given port")
command.Flags().DurationVar(&metricsCacheExpiration, "metrics-cache-expiration", env.ParseDurationFromEnv("ARGOCD_APPLICATION_CONTROLLER_METRICS_CACHE_EXPIRATION", 0*time.Second, 0, math.MaxInt64), "Prometheus metrics cache expiration (disabled by default. e.g. 24h0m0s)")
command.Flags().IntVar(&selfHealTimeoutSeconds, "self-heal-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_TIMEOUT_SECONDS", 0, 0, math.MaxInt32), "Specifies timeout between application self heal attempts")
command.Flags().IntVar(&selfHealBackoffTimeoutSeconds, "self-heal-backoff-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_TIMEOUT_SECONDS", 2, 0, math.MaxInt32), "Specifies initial timeout of exponential backoff between self heal attempts")
command.Flags().IntVar(&selfHealBackoffFactor, "self-heal-backoff-factor", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_FACTOR", 3, 0, math.MaxInt32), "Specifies factor of exponential timeout between application self heal attempts")
command.Flags().IntVar(&selfHealBackoffCapSeconds, "self-heal-backoff-cap-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_CAP_SECONDS", 300, 0, math.MaxInt32), "Specifies max timeout of exponential backoff between application self heal attempts")
command.Flags().IntVar(&selfHealBackoffCooldownSeconds, "self-heal-backoff-cooldown-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS", 330, 0, math.MaxInt32), "Specifies period of time the app needs to stay synced before the self heal backoff can reset")
command.Flags().IntVar(&syncTimeout, "sync-timeout", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT", 0, 0, math.MaxInt32), "Specifies the timeout after which a sync would be terminated. 0 means no timeout (default 0).")
command.Flags().IntVar(&selfHealTimeoutSeconds, "self-heal-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_TIMEOUT_SECONDS", 5, 0, math.MaxInt32), "Specifies timeout between application self heal attempts")
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", env.ParseInt64FromEnv("ARGOCD_APPLICATION_CONTROLLER_KUBECTL_PARALLELISM_LIMIT", 20, 0, math.MaxInt64), "Number of allowed concurrent kubectl fork/execs. Any value less than 1 means no limit.")
command.Flags().BoolVar(&repoServerPlaintext, "repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Disable TLS on connections to repo server")
command.Flags().BoolVar(&repoServerStrictTLS, "repo-server-strict-tls", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_STRICT_TLS", false), "Whether to use strict validation of the TLS cert presented by the repo server")
command.Flags().StringSliceVar(&metricsAplicationLabels, "metrics-application-labels", []string{}, "List of Application labels that will be added to the argocd_application_labels metric")
command.Flags().StringSliceVar(&metricsAplicationConditions, "metrics-application-conditions", []string{}, "List of Application conditions that will be added to the argocd_application_conditions metric")
command.Flags().StringVar(&otlpAddress, "otlp-address", env.StringFromEnv("ARGOCD_APPLICATION_CONTROLLER_OTLP_ADDRESS", ""), "OpenTelemetry collector address to send traces to")
command.Flags().BoolVar(&otlpInsecure, "otlp-insecure", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_OTLP_INSECURE", true), "OpenTelemetry collector insecure mode")
command.Flags().StringToStringVar(&otlpHeaders, "otlp-headers", env.ParseStringToStringFromEnv("ARGOCD_APPLICATION_CONTROLLER_OTLP_HEADERS", map[string]string{}, ","), "List of OpenTelemetry collector extra headers sent with traces, headers are comma-separated key-value pairs(e.g. key1=value1,key2=value2)")
@@ -294,9 +254,6 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&enableDynamicClusterDistribution, "dynamic-cluster-distribution-enabled", env.ParseBoolFromEnv(common.EnvEnableDynamicClusterDistribution, false), "Enables dynamic cluster distribution.")
command.Flags().BoolVar(&serverSideDiff, "server-side-diff-enabled", env.ParseBoolFromEnv(common.EnvServerSideDiff, false), "Feature flag to enable ServerSide diff. Default (\"false\")")
command.Flags().DurationVar(&ignoreNormalizerOpts.JQExecutionTimeout, "ignore-normalizer-jq-execution-timeout-seconds", env.ParseDurationFromEnv("ARGOCD_IGNORE_NORMALIZER_JQ_TIMEOUT", 0*time.Second, 0, math.MaxInt64), "Set ignore normalizer JQ execution timeout")
// argocd k8s event logging flag
command.Flags().StringSliceVar(&enableK8sEvent, "enable-k8s-event", env.StringsFromEnv("ARGOCD_ENABLE_K8S_EVENT", argo.DefaultEnableEventList(), ","), "Enable ArgoCD to use k8s event. For disabling all events, set the value as `none`. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated)")
command.Flags().BoolVar(&hydratorEnabled, "hydrator-enabled", env.ParseBoolFromEnv("ARGOCD_HYDRATOR_ENABLED", false), "Feature flag to enable Hydrator. Default (\"false\")")
cacheSource = appstatecache.AddCacheFlagsToCmd(&command, cacheutil.Options{
OnClientCreated: func(client *redis.Client) {
redisClient = client

View File

@@ -5,7 +5,6 @@ import (
"math"
"net/http"
"os"
"runtime/debug"
"time"
"github.com/argoproj/pkg/stats"
@@ -36,9 +35,9 @@ import (
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
appsetmetrics "github.com/argoproj/argo-cd/v2/applicationset/metrics"
"github.com/argoproj/argo-cd/v2/applicationset/services"
appv1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/db"
"github.com/argoproj/argo-cd/v2/util/errors"
@@ -70,10 +69,8 @@ func NewCommand() *cobra.Command {
allowedScmProviders []string
globalPreservedAnnotations []string
globalPreservedLabels []string
metricsAplicationsetLabels []string
enableScmProviders bool
webhookParallelism int
tokenRefStrictMode bool
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@@ -101,13 +98,6 @@ func NewCommand() *cobra.Command {
ctrl.SetLogger(logutils.NewLogrusLogger(logutils.NewWithCurrentConfig()))
// Recover from panic and log the error using the configured logger instead of the default.
defer func() {
if r := recover(); r != nil {
log.WithField("trace", string(debug.Stack())).Fatal("Recovered from panic: ", r)
}
}()
restConfig, err := clientConfig.ClientConfig()
errors.CheckError(err)
@@ -170,13 +160,14 @@ func NewCommand() *cobra.Command {
errors.CheckError(err)
argoSettingsMgr := argosettings.NewSettingsManager(ctx, k8sClient, namespace)
appSetConfig := appclientset.NewForConfigOrDie(mgr.GetConfig())
argoCDDB := db.NewDB(namespace, argoSettingsMgr, k8sClient)
scmConfig := generators.NewSCMConfig(scmRootCAPath, allowedScmProviders, enableScmProviders, github_app.NewAuthCredentials(argoCDDB.(db.RepoCredsDB)), tokenRefStrictMode)
scmConfig := generators.NewSCMConfig(scmRootCAPath, allowedScmProviders, enableScmProviders, github_app.NewAuthCredentials(argoCDDB.(db.RepoCredsDB)))
tlsConfig := apiclient.TLSConfiguration{
DisableTLS: repoServerPlaintext,
StrictValidation: repoServerStrictTLS,
StrictValidation: repoServerPlaintext,
}
if !repoServerPlaintext && repoServerStrictTLS {
@@ -203,13 +194,6 @@ func NewCommand() *cobra.Command {
startWebhookServer(webhookHandler, webhookAddr)
}
metrics := appsetmetrics.NewApplicationsetMetrics(
utils.NewAppsetLister(mgr.GetClient()),
metricsAplicationsetLabels,
func(appset *appv1alpha1.ApplicationSet) bool {
return utils.IsNamespaceAllowed(applicationSetNamespaces, appset.Namespace)
})
if err = (&controllers.ApplicationSetReconciler{
Generators: topLevelGenerators,
Client: mgr.GetClient(),
@@ -218,6 +202,7 @@ func NewCommand() *cobra.Command {
Renderer: &utils.Render{},
Policy: policyObj,
EnablePolicyOverride: enablePolicyOverride,
ArgoAppClientset: appSetConfig,
KubeClientset: k8sClient,
ArgoDB: argoCDDB,
ArgoCDNamespace: namespace,
@@ -226,7 +211,7 @@ func NewCommand() *cobra.Command {
SCMRootCAPath: scmRootCAPath,
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
Cache: mgr.GetCache(),
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -258,7 +243,6 @@ func NewCommand() *cobra.Command {
command.Flags().StringSliceVar(&allowedScmProviders, "allowed-scm-providers", env.StringsFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ALLOWED_SCM_PROVIDERS", []string{}, ","), "The list of allowed custom SCM provider API URLs. This restriction does not apply to SCM or PR generators which do not accept a custom API URL. (Default: Empty = all)")
command.Flags().BoolVar(&enableScmProviders, "enable-scm-providers", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_SCM_PROVIDERS", true), "Enable retrieving information from SCM providers, used by the SCM and PR generators (Default: true)")
command.Flags().BoolVar(&dryRun, "dry-run", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_DRY_RUN", false), "Enable dry run mode")
command.Flags().BoolVar(&tokenRefStrictMode, "token-ref-strict-mode", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_TOKENREF_STRICT_MODE", false), fmt.Sprintf("Set to true to require secrets referenced by SCM providers to have the %s=%s label set (Default: false)", common.LabelKeySecretType, common.LabelValueSecretTypeSCMCreds))
command.Flags().BoolVar(&enableProgressiveSyncs, "enable-progressive-syncs", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS", false), "Enable use of the experimental progressive syncs feature.")
command.Flags().BoolVar(&enableNewGitFileGlobbing, "enable-new-git-file-globbing", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_NEW_GIT_FILE_GLOBBING", false), "Enable new globbing in Git files generator.")
command.Flags().BoolVar(&repoServerPlaintext, "repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Disable TLS on connections to repo server")
@@ -269,7 +253,6 @@ func NewCommand() *cobra.Command {
command.Flags().StringSliceVar(&globalPreservedAnnotations, "preserved-annotations", env.StringsFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_ANNOTATIONS", []string{}, ","), "Sets global preserved field values for annotations")
command.Flags().StringSliceVar(&globalPreservedLabels, "preserved-labels", env.StringsFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_LABELS", []string{}, ","), "Sets global preserved field values for labels")
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&metricsAplicationsetLabels, "metrics-applicationset-labels", []string{}, "List of Application labels that will be added to the argocd_applicationset_labels metric")
return &command
}
@@ -277,7 +260,7 @@ func startWebhookServer(webhookHandler *webhook.WebhookHandler, webhookAddr stri
mux := http.NewServeMux()
mux.HandleFunc("/api/webhook", webhookHandler.Handler)
go func() {
log.Infof("Starting webhook server %s", webhookAddr)
log.Info("Starting webhook server")
err := http.ListenAndServe(webhookAddr, mux)
if err != nil {
log.Error(err, "failed to start webhook server")

View File

@@ -1,7 +1,6 @@
package commands
import (
"runtime/debug"
"time"
"github.com/argoproj/pkg/stats"
@@ -45,13 +44,6 @@ func NewCommand() *cobra.Command {
cli.SetLogFormat(cmdutil.LogFormat)
cli.SetLogLevel(cmdutil.LogLevel)
// Recover from panic and log the error using the configured logger instead of the default.
defer func() {
if r := recover(); r != nil {
log.WithField("trace", string(debug.Stack())).Fatal("Recovered from panic: ", r)
}
}()
config, err := plugin.ReadPluginConfig(configFilePath)
errors.CheckError(err)

View File

@@ -10,20 +10,17 @@ import (
"syscall"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"google.golang.org/grpc/health/grpc_health_v1"
cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
"github.com/argoproj/argo-cd/v2/commitserver"
"github.com/argoproj/argo-cd/v2/commitserver/apiclient"
"github.com/argoproj/argo-cd/v2/commitserver/metrics"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/util/askpass"
"github.com/argoproj/argo-cd/v2/reposerver/askpass"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/env"
"github.com/argoproj/argo-cd/v2/util/errors"
"github.com/argoproj/argo-cd/v2/util/healthz"
ioutil "github.com/argoproj/argo-cd/v2/util/io"
)
// NewCommand returns a new instance of an argocd-commit-server command
@@ -63,28 +60,6 @@ func NewCommand() *cobra.Command {
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
errors.CheckError(err)
healthz.ServeHealthCheck(http.DefaultServeMux, func(r *http.Request) error {
if val, ok := r.URL.Query()["full"]; ok && len(val) > 0 && val[0] == "true" {
// connect to itself to make sure commit server is able to serve connection
// used by liveness probe to auto restart commit server
conn, err := apiclient.NewConnection(fmt.Sprintf("localhost:%d", listenPort))
if err != nil {
return err
}
defer ioutil.Close(conn)
client := grpc_health_v1.NewHealthClient(conn)
res, err := client.Check(r.Context(), &grpc_health_v1.HealthCheckRequest{})
if err != nil {
return err
}
if res.Status != grpc_health_v1.HealthCheckResponse_SERVING {
return fmt.Errorf("grpc health check status is '%v'", res.Status)
}
return nil
}
return nil
})
// Graceful shutdown code adapted from here: https://gist.github.com/embano1/e0bf49d24f1cdd07cffad93097c04f0a
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
@@ -112,6 +87,5 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&listenPort, "port", common.DefaultPortCommitServer, "Listen on given port for incoming connections")
command.Flags().StringVar(&metricsHost, "metrics-address", env.StringFromEnv("ARGOCD_COMMIT_SERVER_METRICS_LISTEN_ADDRESS", common.DefaultAddressCommitServerMetrics), "Listen on given address for metrics")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortCommitServerMetrics, "Start metrics server on given port")
return command
}
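The health handler above answers a plain liveness probe immediately, but when the probe adds ?full=true it dials the commit server's own gRPC port and issues a standard grpc_health_v1 Check, so the probe fails (and the pod is restarted) if the server can no longer serve connections. A standalone sketch of that full check, assuming a recent grpc-go that provides grpc.NewClient and assuming the server listens on localhost:8086; the real code reuses apiclient.NewConnection and the configured port:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/health/grpc_health_v1"
)

func fullHealthCheck(ctx context.Context, addr string) error {
	// Dial the server's own gRPC endpoint; plaintext is acceptable for localhost.
	conn, err := grpc.NewClient(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return err
	}
	defer conn.Close()

	res, err := grpc_health_v1.NewHealthClient(conn).Check(ctx, &grpc_health_v1.HealthCheckRequest{})
	if err != nil {
		return err
	}
	if res.Status != grpc_health_v1.HealthCheckResponse_SERVING {
		return fmt.Errorf("grpc health check status is '%v'", res.Status)
	}
	return nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	fmt.Println(fullHealthCheck(ctx, "localhost:8086")) // assumed port, not taken from the source
}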

View File

@@ -4,7 +4,6 @@ import (
"fmt"
"os"
"os/exec"
"runtime/debug"
"syscall"
"github.com/argoproj/argo-cd/v2/common"
@@ -67,14 +66,6 @@ func NewRunDexCommand() *cobra.Command {
cli.SetLogFormat(cmdutil.LogFormat)
cli.SetLogLevel(cmdutil.LogLevel)
// Recover from panic and log the error using the configured logger instead of the default.
defer func() {
if r := recover(); r != nil {
log.WithField("trace", string(debug.Stack())).Fatal("Recovered from panic: ", r)
}
}()
_, err = exec.LookPath("dex")
errors.CheckError(err)
config, err := clientConfig.ClientConfig()
@@ -145,8 +136,8 @@ func NewRunDexCommand() *cobra.Command {
}
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", env.StringFromEnv("ARGOCD_DEX_SERVER_LOGFORMAT", "text"), "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", env.StringFromEnv("ARGOCD_DEX_SERVER_LOGLEVEL", "info"), "Set the logging level. One of: debug|info|warn|error")
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", "text", "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().BoolVar(&disableTLS, "disable-tls", env.ParseBoolFromEnv("ARGOCD_DEX_SERVER_DISABLE_TLS", false), "Disable TLS on the HTTP endpoint")
return &command
}
@@ -213,8 +204,8 @@ func NewGenDexConfigCommand() *cobra.Command {
}
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", env.StringFromEnv("ARGOCD_DEX_SERVER_LOGFORMAT", "text"), "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", env.StringFromEnv("ARGOCD_DEX_SERVER_LOGLEVEL", "info"), "Set the logging level. One of: debug|info|warn|error")
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", "text", "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().StringVarP(&out, "out", "o", "", "Output to the specified file instead of stdout")
command.Flags().BoolVar(&disableTLS, "disable-tls", env.ParseBoolFromEnv("ARGOCD_DEX_SERVER_DISABLE_TLS", false), "Disable TLS on the HTTP endpoint")
return &command

View File

@@ -9,7 +9,7 @@ import (
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
"github.com/argoproj/argo-cd/v2/util/askpass"
"github.com/argoproj/argo-cd/v2/reposerver/askpass"
"github.com/argoproj/argo-cd/v2/util/errors"
grpc_util "github.com/argoproj/argo-cd/v2/util/grpc"
"github.com/argoproj/argo-cd/v2/util/io"

View File

@@ -1,7 +1,6 @@
package commands
import (
"fmt"
"os"
"github.com/Azure/kubelogin/pkg/token"
@@ -20,26 +19,24 @@ const (
)
func newAzureCommand() *cobra.Command {
o := token.NewOptions()
// we'll use default of WorkloadIdentityLogin for the login flow
o.LoginMethod = token.WorkloadIdentityLogin
o.ServerID = DEFAULT_AAD_SERVER_APPLICATION_ID
command := &cobra.Command{
Use: "azure",
Run: func(c *cobra.Command, args []string) {
o := token.OptionsWithEnv()
if o.LoginMethod == "" { // no environment variable overrides
// we'll use default of WorkloadIdentityLogin for the login flow
o.LoginMethod = token.WorkloadIdentityLogin
}
o.ServerID = DEFAULT_AAD_SERVER_APPLICATION_ID
o.UpdateFromEnv()
if v, ok := os.LookupEnv(envServerApplicationID); ok {
o.ServerID = v
}
if v, ok := os.LookupEnv(envEnvironmentName); ok {
o.Environment = v
}
tp, err := token.GetTokenProvider(o)
plugin, err := token.New(&o)
errors.CheckError(err)
tok, err := tp.GetAccessToken(c.Context())
err = plugin.Do()
errors.CheckError(err)
_, _ = fmt.Fprint(os.Stdout, formatJSON(tok.Token, tok.ExpiresOn))
},
}
return command

View File

@@ -1,15 +1,10 @@
package commands
import (
"context"
"fmt"
"net/http"
"os"
"os/signal"
"runtime/debug"
"strings"
"sync"
"syscall"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
@@ -67,8 +62,7 @@ func NewCommand() *cobra.Command {
Use: "controller",
Short: "Starts Argo CD Notifications controller",
RunE: func(c *cobra.Command, args []string) error {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
ctx := c.Context()
vers := common.GetVersion()
namespace, _, err := clientConfig.Namespace()
@@ -116,13 +110,6 @@ func NewCommand() *cobra.Command {
return fmt.Errorf("unknown log format '%s'", logFormat)
}
// Recover from panic and log the error using the configured logger instead of the default.
defer func() {
if r := recover(); r != nil {
log.WithField("trace", string(debug.Stack())).Fatal("Recovered from panic: ", r)
}
}()
tlsConfig := apiclient.TLSConfiguration{
DisableTLS: argocdRepoServerPlaintext,
StrictValidation: argocdRepoServerStrictTLS,
@@ -159,17 +146,6 @@ func NewCommand() *cobra.Command {
return fmt.Errorf("failed to initialize controller: %w", err)
}
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
defer wg.Done()
s := <-sigCh
log.Printf("got signal %v, attempting graceful shutdown", s)
cancel()
}()
go ctrl.Run(ctx, processorsCount)
<-ctx.Done()
return nil
@@ -183,7 +159,7 @@ func NewCommand() *cobra.Command {
command.Flags().StringVar(&logFormat, "logformat", env.StringFromEnv("ARGOCD_NOTIFICATIONS_CONTROLLER_LOGFORMAT", "text"), "Set the logging format. One of: text|json")
command.Flags().IntVar(&metricsPort, "metrics-port", defaultMetricsPort, "Metrics port")
command.Flags().StringVar(&argocdRepoServer, "argocd-repo-server", common.DefaultRepoServerAddr, "Argo CD repo server address")
command.Flags().BoolVar(&argocdRepoServerPlaintext, "argocd-repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_NOTIFICATION_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Use a plaintext client (non-TLS) to connect to repository server")
command.Flags().BoolVar(&argocdRepoServerPlaintext, "argocd-repo-server-plaintext", false, "Use a plaintext client (non-TLS) to connect to repository server")
command.Flags().BoolVar(&argocdRepoServerStrictTLS, "argocd-repo-server-strict-tls", false, "Perform strict validation of TLS certificates when connecting to repo server")
command.Flags().StringVar(&configMapName, "config-map-name", "argocd-notifications-cm", "Set notifications ConfigMap name")
command.Flags().StringVar(&secretName, "secret-name", "argocd-notifications-secret", "Set notifications Secret name")
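The controller above no longer wires up its own signal.Notify/WaitGroup shutdown; it simply runs until the command's context (c.Context()) is cancelled. A minimal sketch of the same idea using the standard library's signal.NotifyContext, which is one common way to obtain a context that is cancelled on SIGINT/SIGTERM; illustrative only, cobra supplies the context in the real command:

package main

import (
	"context"
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func run(ctx context.Context) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			log.Println("context cancelled, shutting down")
			return
		case <-ticker.C:
			log.Println("controller tick")
		}
	}
}

func main() {
	// The context is cancelled when SIGINT or SIGTERM is received, so the
	// run loop exits without hand-rolled signal channels and wait groups.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()
	run(ctx)
}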

View File

@@ -7,7 +7,6 @@ import (
"net/http"
"os"
"os/signal"
"runtime/debug"
"sync"
"syscall"
"time"
@@ -23,10 +22,10 @@ import (
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/reposerver"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
"github.com/argoproj/argo-cd/v2/reposerver/askpass"
reposervercache "github.com/argoproj/argo-cd/v2/reposerver/cache"
"github.com/argoproj/argo-cd/v2/reposerver/metrics"
"github.com/argoproj/argo-cd/v2/reposerver/repository"
"github.com/argoproj/argo-cd/v2/util/askpass"
cacheutil "github.com/argoproj/argo-cd/v2/util/cache"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/env"
@@ -76,7 +75,6 @@ func NewCommand() *cobra.Command {
helmRegistryMaxIndexSize string
disableManifestMaxExtractedSize bool
includeHiddenDirectories bool
cmpUseManifestGeneratePaths bool
)
command := cobra.Command{
Use: cliName,
@@ -97,13 +95,6 @@ func NewCommand() *cobra.Command {
cli.SetLogFormat(cmdutil.LogFormat)
cli.SetLogLevel(cmdutil.LogLevel)
// Recover from panic and log the error using the configured logger instead of the default.
defer func() {
if r := recover(); r != nil {
log.WithField("trace", string(debug.Stack())).Fatal("Recovered from panic: ", r)
}
}()
if !disableTLS {
var err error
tlsConfigCustomizer, err = tlsConfigCustomizerSrc()
@@ -130,7 +121,7 @@ func NewCommand() *cobra.Command {
askPassServer := askpass.NewServer(askpass.SocketPath)
metricsServer := metrics.NewMetricsServer()
cacheutil.CollectMetrics(redisClient, metricsServer, nil)
cacheutil.CollectMetrics(redisClient, metricsServer)
server, err := reposerver.NewServer(metricsServer, cache, tlsConfigCustomizer, repository.RepoServerInitConstants{
ParallelismLimit: parallelismLimit,
PauseGenerationAfterFailedGenerationAttempts: pauseGenerationAfterFailedGenerationAttempts,
@@ -145,7 +136,6 @@ func NewCommand() *cobra.Command {
HelmManifestMaxExtractedSize: helmManifestMaxExtractedSizeQuantity.ToDec().Value(),
HelmRegistryMaxIndexSize: helmRegistryMaxIndexSizeQuantity.ToDec().Value(),
IncludeHiddenDirectories: includeHiddenDirectories,
CMPUseManifestGeneratePaths: cmpUseManifestGeneratePaths,
}, askPassServer)
errors.CheckError(err)
@@ -251,7 +241,6 @@ func NewCommand() *cobra.Command {
command.Flags().StringVar(&helmRegistryMaxIndexSize, "helm-registry-max-index-size", env.StringFromEnv("ARGOCD_REPO_SERVER_HELM_MANIFEST_MAX_INDEX_SIZE", "1G"), "Maximum size of registry index file")
command.Flags().BoolVar(&disableManifestMaxExtractedSize, "disable-helm-manifest-max-extracted-size", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_DISABLE_HELM_MANIFEST_MAX_EXTRACTED_SIZE", false), "Disable maximum size of helm manifest archives when extracted")
command.Flags().BoolVar(&includeHiddenDirectories, "include-hidden-directories", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_INCLUDE_HIDDEN_DIRECTORIES", false), "Include hidden directories from Git")
command.Flags().BoolVar(&cmpUseManifestGeneratePaths, "plugin-use-manifest-generate-paths", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_PLUGIN_USE_MANIFEST_GENERATE_PATHS", false), "Pass the resources described in argocd.argoproj.io/manifest-generate-paths value to the cmpserver to generate the application manifests.")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
cacheSrc = reposervercache.AddCacheFlagsToCmd(&command, cacheutil.Options{
OnClientCreated: func(client *redis.Client) {

View File

@@ -4,13 +4,10 @@ import (
"context"
"fmt"
"math"
"runtime/debug"
"strings"
"time"
"github.com/redis/go-redis/v9"
"k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"github.com/argoproj/pkg/stats"
log "github.com/sirupsen/logrus"
@@ -28,7 +25,6 @@ import (
reposervercache "github.com/argoproj/argo-cd/v2/reposerver/cache"
"github.com/argoproj/argo-cd/v2/server"
servercache "github.com/argoproj/argo-cd/v2/server/cache"
"github.com/argoproj/argo-cd/v2/util/argo"
cacheutil "github.com/argoproj/argo-cd/v2/util/cache"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/dex"
@@ -87,16 +83,12 @@ func NewCommand() *cobra.Command {
applicationNamespaces []string
enableProxyExtension bool
webhookParallelism int
hydratorEnabled bool
// ApplicationSet
enableNewGitFileGlobbing bool
scmRootCAPath string
allowedScmProviders []string
enableScmProviders bool
// argocd k8s event logging flag
enableK8sEvent []string
)
command := &cobra.Command{
Use: cliName,
@@ -121,13 +113,6 @@ func NewCommand() *cobra.Command {
cli.SetLogLevel(cmdutil.LogLevel)
cli.SetGLogLevel(glogLevel)
// Recover from panic and log the error using the configured logger instead of the default.
defer func() {
if r := recover(); r != nil {
log.WithField("trace", string(debug.Stack())).Fatal("Recovered from panic: ", r)
}
}()
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
errors.CheckError(v1alpha1.SetK8SConfigDefaults(config))
@@ -157,14 +142,9 @@ func NewCommand() *cobra.Command {
dynamicClient := dynamic.NewForConfigOrDie(config)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
_ = v1alpha1.AddToScheme(scheme)
controllerClient, err := client.New(config, client.Options{Scheme: scheme})
controllerClient, err := client.New(config, client.Options{})
errors.CheckError(err)
controllerClient = client.NewDryRunClient(controllerClient)
controllerClient = client.NewNamespacedClient(controllerClient, namespace)
// Load CA information to use for validating connections to the
// repository server, if strict TLS validation was requested.
@@ -243,8 +223,6 @@ func NewCommand() *cobra.Command {
ApplicationNamespaces: applicationNamespaces,
EnableProxyExtension: enableProxyExtension,
WebhookParallelism: webhookParallelism,
EnableK8sEvent: enableK8sEvent,
HydratorEnabled: hydratorEnabled,
}
appsetOpts := server.ApplicationSetOpts{
@@ -260,25 +238,22 @@ func NewCommand() *cobra.Command {
stats.RegisterHeapDumper("memprofile")
argocd := server.NewServer(ctx, argoCDOpts, appsetOpts)
argocd.Init(ctx)
lns, err := argocd.Listen()
errors.CheckError(err)
for {
var closer func()
serverCtx, cancel := context.WithCancel(ctx)
lns, err := argocd.Listen()
errors.CheckError(err)
ctx, cancel := context.WithCancel(ctx)
if otlpAddress != "" {
closer, err = traceutil.InitTracer(serverCtx, "argocd-server", otlpAddress, otlpInsecure, otlpHeaders, otlpAttrs)
closer, err = traceutil.InitTracer(ctx, "argocd-server", otlpAddress, otlpInsecure, otlpHeaders, otlpAttrs)
if err != nil {
log.Fatalf("failed to initialize tracing: %v", err)
}
}
argocd.Run(serverCtx, lns)
argocd.Run(ctx, lns)
cancel()
if closer != nil {
closer()
}
cancel()
if argocd.TerminateRequested() {
break
}
}
},
Example: templates.Examples(`
@@ -322,8 +297,6 @@ func NewCommand() *cobra.Command {
command.Flags().StringSliceVar(&applicationNamespaces, "application-namespaces", env.StringsFromEnv("ARGOCD_APPLICATION_NAMESPACES", []string{}, ","), "List of additional namespaces where application resources can be managed in")
command.Flags().BoolVar(&enableProxyExtension, "enable-proxy-extension", env.ParseBoolFromEnv("ARGOCD_SERVER_ENABLE_PROXY_EXTENSION", false), "Enable Proxy Extension feature")
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_SERVER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&enableK8sEvent, "enable-k8s-event", env.StringsFromEnv("ARGOCD_ENABLE_K8S_EVENT", argo.DefaultEnableEventList(), ","), "Enable ArgoCD to use k8s event. For disabling all events, set the value as `none`. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated)")
command.Flags().BoolVar(&hydratorEnabled, "hydrator-enabled", env.ParseBoolFromEnv("ARGOCD_HYDRATOR_ENABLED", false), "Feature flag to enable Hydrator. Default (\"false\")")
// Flags related to the applicationSet component.
command.Flags().StringVar(&scmRootCAPath, "appset-scm-root-ca-path", env.StringFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_SCM_ROOT_CA_PATH", ""), "Provide Root CA Path for self-signed TLS Certificates")

View File

@@ -16,13 +16,11 @@ import (
"golang.org/x/term"
"sigs.k8s.io/yaml"
"github.com/argoproj/argo-cd/v2/util/rbac"
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/headless"
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/utils"
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
accountpkg "github.com/argoproj/argo-cd/v2/pkg/apiclient/account"
"github.com/argoproj/argo-cd/v2/pkg/apiclient/session"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/errors"
"github.com/argoproj/argo-cd/v2/util/io"
@@ -219,7 +217,7 @@ argocd account can-i create clusters '*'
Actions: %v
Resources: %v
`, rbac.Actions, rbac.Resources),
`, rbacpolicy.Actions, rbacpolicy.Resources),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
@@ -263,7 +261,7 @@ func NewAccountListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
Use: "list",
Short: "List accounts",
Example: "argocd account list",
Run: func(c *cobra.Command, _ []string) {
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
conn, client := headless.NewClientOrDie(clientOpts, c).NewAccountClientOrDie()
@@ -310,7 +308,7 @@ argocd account get
# Get details for an account by name
argocd account get --account <account-name>`,
Run: func(c *cobra.Command, _ []string) {
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
clientset := headless.NewClientOrDie(clientOpts, c)
@@ -359,7 +357,7 @@ func printAccountDetails(acc *accountpkg.Account) {
expiresAt := time.Unix(t.ExpiresAt, 0)
expiresAtFormatted = expiresAt.Format(time.RFC3339)
if expiresAt.Before(time.Now()) {
expiresAtFormatted = expiresAtFormatted + " (expired)"
expiresAtFormatted = fmt.Sprintf("%s (expired)", expiresAtFormatted)
}
}
@@ -383,7 +381,7 @@ argocd account generate-token
# Generate token for the account with the specified name
argocd account generate-token --account <account-name>`,
Run: func(c *cobra.Command, _ []string) {
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
clientset := headless.NewClientOrDie(clientOpts, c)
@@ -434,14 +432,8 @@ argocd account delete-token --account <account-name> ID`,
if account == "" {
account = getCurrentAccount(ctx, clientset).Username
}
promptUtil := utils.NewPrompt(clientOpts.PromptsEnabled)
canDelete := promptUtil.Confirm(fmt.Sprintf("Are you sure you want to delete '%s' token? [y/n]", id))
if canDelete {
_, err := client.DeleteToken(ctx, &accountpkg.DeleteTokenRequest{Name: account, Id: id})
errors.CheckError(err)
} else {
fmt.Printf("The command to delete '%s' was cancelled.\n", id)
}
_, err := client.DeleteToken(ctx, &accountpkg.DeleteTokenRequest{Name: account, Id: id})
errors.CheckError(err)
},
}
cmd.Flags().StringVarP(&account, "account", "a", "", "Account name. Defaults to the current account.")

View File

@@ -1,13 +1,10 @@
package admin
import (
"context"
"reflect"
"strings"
"github.com/spf13/cobra"
apiv1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
@@ -86,12 +83,11 @@ func newArgoCDClientsets(config *rest.Config, namespace string) *argoCDClientset
dynamicIf, err := dynamic.NewForConfig(config)
errors.CheckError(err)
return &argoCDClientsets{
configMaps: dynamicIf.Resource(configMapResource).Namespace(namespace),
secrets: dynamicIf.Resource(secretResource).Namespace(namespace),
// To support applications and applicationsets in any namespace we will watch all namespaces and filter them afterwards
applications: dynamicIf.Resource(applicationsResource),
configMaps: dynamicIf.Resource(configMapResource).Namespace(namespace),
secrets: dynamicIf.Resource(secretResource).Namespace(namespace),
applications: dynamicIf.Resource(applicationsResource).Namespace(namespace),
projects: dynamicIf.Resource(appprojectsResource).Namespace(namespace),
applicationSets: dynamicIf.Resource(appplicationSetResource),
applicationSets: dynamicIf.Resource(appplicationSetResource).Namespace(namespace),
}
}
@@ -190,11 +186,7 @@ func isArgoCDConfigMap(name string) bool {
// specsEqual returns if the spec, data, labels, annotations, and finalizers of the two
// supplied objects are equal, indicating that no update is necessary during importing
func specsEqual(left, right unstructured.Unstructured) bool {
leftAnnotation := left.GetAnnotations()
rightAnnotation := right.GetAnnotations()
delete(leftAnnotation, apiv1.LastAppliedConfigAnnotation)
delete(rightAnnotation, apiv1.LastAppliedConfigAnnotation)
if !reflect.DeepEqual(leftAnnotation, rightAnnotation) {
if !reflect.DeepEqual(left.GetAnnotations(), right.GetAnnotations()) {
return false
}
if !reflect.DeepEqual(left.GetLabels(), right.GetLabels()) {
@@ -226,52 +218,3 @@ func specsEqual(left, right unstructured.Unstructured) bool {
}
return false
}
type argocdAdditonalNamespaces struct {
applicationNamespaces []string
applicationsetNamespaces []string
}
const (
applicationsetNamespacesCmdParamsKey = "applicationsetcontroller.namespaces"
applicationNamespacesCmdParamsKey = "application.namespaces"
)
// Get additional namespaces from argocd-cmd-params
func getAdditionalNamespaces(ctx context.Context, argocdClientsets *argoCDClientsets) *argocdAdditonalNamespaces {
applicationNamespaces := make([]string, 0)
applicationsetNamespaces := make([]string, 0)
un, err := argocdClientsets.configMaps.Get(ctx, common.ArgoCDCmdParamsConfigMapName, v1.GetOptions{})
errors.CheckError(err)
var cm apiv1.ConfigMap
err = runtime.DefaultUnstructuredConverter.FromUnstructured(un.Object, &cm)
errors.CheckError(err)
namespacesListFromString := func(namespaces string) []string {
listOfNamespaces := []string{}
ss := strings.Split(namespaces, ",")
for _, namespace := range ss {
if namespace != "" {
listOfNamespaces = append(listOfNamespaces, strings.TrimSpace(namespace))
}
}
return listOfNamespaces
}
if strNamespaces, ok := cm.Data[applicationNamespacesCmdParamsKey]; ok {
applicationNamespaces = namespacesListFromString(strNamespaces)
}
if strNamespaces, ok := cm.Data[applicationsetNamespacesCmdParamsKey]; ok {
applicationsetNamespaces = namespacesListFromString(strNamespaces)
}
return &argocdAdditonalNamespaces{
applicationNamespaces: applicationNamespaces,
applicationsetNamespaces: applicationsetNamespaces,
}
}
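getAdditionalNamespaces above reads application.namespaces and applicationsetcontroller.namespaces from the argocd-cmd-params ConfigMap and splits each comma-separated value into a trimmed list, skipping empty entries. A standalone sketch of that parsing step; the helper name mirrors the inline namespacesListFromString closure shown above:

package main

import (
	"fmt"
	"strings"
)

// namespacesListFromString splits a comma-separated namespace value,
// trimming whitespace around each entry and skipping empty ones,
// e.g. " foo , bar* " -> [foo bar*].
func namespacesListFromString(namespaces string) []string {
	listOfNamespaces := []string{}
	for _, namespace := range strings.Split(namespaces, ",") {
		if namespace != "" {
			listOfNamespaces = append(listOfNamespaces, strings.TrimSpace(namespace))
		}
	}
	return listOfNamespaces
}

func main() {
	fmt.Println(namespacesListFromString(" foo , bar* ")) // [foo bar*]
	fmt.Println(namespacesListFromString(""))             // []
}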

View File

@@ -1,75 +0,0 @@
package admin
import (
"context"
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
dynfake "k8s.io/client-go/dynamic/fake"
)
func TestGetAdditionalNamespaces(t *testing.T) {
createArgoCDCmdCMWithKeys := func(data map[string]interface{}) *unstructured.Unstructured {
return &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "v1",
"kind": "ConfigMap",
"metadata": map[string]interface{}{
"name": "argocd-cmd-params-cm",
"namespace": "argocd",
},
"data": data,
},
}
}
testCases := []struct {
CmdParamsKeys map[string]interface{}
expected argocdAdditonalNamespaces
description string
}{
{
description: "empty configmap should return no additional namespaces",
CmdParamsKeys: map[string]interface{}{},
expected: argocdAdditonalNamespaces{applicationNamespaces: []string{}, applicationsetNamespaces: []string{}},
},
{
description: "empty strings in respective keys in cm shoud return empty namespace list",
CmdParamsKeys: map[string]interface{}{applicationsetNamespacesCmdParamsKey: "", applicationNamespacesCmdParamsKey: ""},
expected: argocdAdditonalNamespaces{applicationNamespaces: []string{}, applicationsetNamespaces: []string{}},
},
{
description: "when only one of the keys in the cm is set only correct respective list of namespaces should be returned",
CmdParamsKeys: map[string]interface{}{applicationNamespacesCmdParamsKey: "foo, bar*"},
expected: argocdAdditonalNamespaces{applicationsetNamespaces: []string{}, applicationNamespaces: []string{"foo", "bar*"}},
},
{
description: "when only one of the keys in the cm is set only correct respective list of namespaces should be returned",
CmdParamsKeys: map[string]interface{}{applicationsetNamespacesCmdParamsKey: "foo, bar*"},
expected: argocdAdditonalNamespaces{applicationNamespaces: []string{}, applicationsetNamespaces: []string{"foo", "bar*"}},
},
{
description: "whitespaces are removed for both multiple and single namespace",
CmdParamsKeys: map[string]interface{}{applicationNamespacesCmdParamsKey: " bar ", applicationsetNamespacesCmdParamsKey: " foo , bar* "},
expected: argocdAdditonalNamespaces{applicationNamespaces: []string{"bar"}, applicationsetNamespaces: []string{"foo", "bar*"}},
},
}
for _, c := range testCases {
fakeDynClient := dynfake.NewSimpleDynamicClient(runtime.NewScheme(), createArgoCDCmdCMWithKeys(c.CmdParamsKeys))
argoCDClientsets := &argoCDClientsets{
configMaps: fakeDynClient.Resource(configMapResource).Namespace("argocd"),
applications: fakeDynClient.Resource(schema.GroupVersionResource{}),
applicationSets: fakeDynClient.Resource(schema.GroupVersionResource{}),
secrets: fakeDynClient.Resource(schema.GroupVersionResource{}),
projects: fakeDynClient.Resource(schema.GroupVersionResource{}),
}
result := getAdditionalNamespaces(context.TODO(), argoCDClientsets)
assert.Equal(t, c.expected, *result)
}
}

Some files were not shown because too many files have changed in this diff.