Mirror of https://github.com/argoproj/argo-cd.git (synced 2026-02-20 01:28:45 +01:00)

Compare commits: v3.2.1...dependabot (586 commits)
80  .github/ISSUE_TEMPLATE/release.md (vendored)

@@ -9,18 +9,76 @@ assignees: ''
 Target RC1 date: ___. __, ____
 Target GA date: ___. __, ____
 
-- [ ] 1wk before feature freeze post in #argo-contributors that PRs must be merged by DD-MM-YYYY to be included in the release - ask approvers to drop items from milestone they can’t merge
+## RC1 Release Checklist
+
+- [ ] 1wk before feature freeze post in #argo-contributors that PRs must be merged by DD-MM-YYYY to be included in the release - ask approvers to drop items from milestone they can't merge
 - [ ] At least two days before RC1 date, draft RC blog post and submit it for review (or delegate this task)
-- [ ] Cut RC1 (or delegate this task to an Approver and coordinate timing)
-- [ ] Create new release branch
+- [ ] Create new release branch (or delegate this task to an Approver)
 - [ ] Add the release branch to ReadTheDocs
-- [ ] Confirm that tweet and blog post are ready
-- [ ] Trigger the release
-- [ ] After the release is finished, publish tweet and blog post
-- [ ] Post in #argo-cd and #argo-announcements with lots of emojis announcing the release and requesting help testing
-- [ ] Monitor support channels for issues, cherry-picking bugfixes and docs fixes as appropriate (or delegate this task to an Approver and coordinate timing)
-- [ ] At release date, evaluate if any bugs justify delaying the release. If not, cut the release (or delegate this task to an Approver and coordinate timing)
-- [ ] If unreleased changes are on the release branch for {current minor version minus 3}, cut a final patch release for that series (or delegate this task to an Approver and coordinate timing)
-- [ ] After the release, post in #argo-cd that the {current minor version minus 3} has reached EOL (example: https://cloud-native.slack.com/archives/C01TSERG0KZ/p1667336234059729)
+- [ ] Cut RC1 (or delegate this task to an Approver and coordinate timing)
+    - [ ] Run the [Init ArgoCD Release workflow](https://github.com/argoproj/argo-cd/actions/workflows/init-release.yaml) from the release branch
+    - [ ] Review and merge the generated version bump PR
+    - [ ] Run `./hack/trigger-release.sh` to push the release tag
+    - [ ] Monitor the [Publish ArgoCD Release workflow](https://github.com/argoproj/argo-cd/actions/workflows/release.yaml)
+- [ ] Verify the release on [GitHub releases](https://github.com/argoproj/argo-cd/releases)
+- [ ] Verify the container image on [Quay.io](https://quay.io/repository/argoproj/argocd?tab=tags)
+- [ ] Confirm the new version appears in [Read the Docs](https://argo-cd.readthedocs.io/)
+- [ ] Announce RC1 release
+    - [ ] Confirm that tweet and blog post are ready
+    - [ ] Publish tweet and blog post
+    - [ ] Post in #argo-cd and #argo-announcements requesting help testing:
+    ```
+    :mega: Argo CD v{MAJOR}.{MINOR}.{PATCH}-rc{RC_NUMBER} is OUT NOW! :argocd::tada:
+
+    Please go through the following resources to know more about the release:
+
+    Release notes: https://github.com/argoproj/argo-cd/releases/tag/v{VERSION}
+    Blog: {BLOG_POST_URL}
+
+    We'd love your help testing this release candidate! Please try it out in your environments and report any issues you find. This helps us ensure a stable GA release.
+
+    Thanks to all the folks who spent their time contributing to this release in any way possible!
+    ```
+- [ ] Monitor support channels for issues, cherry-picking bugfixes and docs fixes as appropriate during the RC period (or delegate this task to an Approver and coordinate timing)
+
+## GA Release Checklist
+
+- [ ] At GA release date, evaluate if any bugs justify delaying the release
+- [ ] Prepare for EOL version (version that is 3 releases old)
+    - [ ] If unreleased changes are on the release branch for {current minor version minus 3}, cut a final patch release for that series (or delegate this task to an Approver and coordinate timing)
+    - [ ] Edit the final patch release on GitHub and add the following notice at the top:
+    ```markdown
+    > [!IMPORTANT]
+    > **END OF LIFE NOTICE**
+    >
+    > This is the final release of the {EOL_SERIES} release series. As of {GA_DATE}, this version has reached end of life and will no longer receive bug fixes or security updates.
+    >
+    > **Action Required**: Please upgrade to a [supported version](https://argo-cd.readthedocs.io/en/stable/operator-manual/upgrading/overview/) (v{SUPPORTED_VERSION_1}, v{SUPPORTED_VERSION_2}, or v{NEW_VERSION}).
+    ```
+- [ ] Cut GA release (or delegate this task to an Approver and coordinate timing)
+    - [ ] Run the [Init ArgoCD Release workflow](https://github.com/argoproj/argo-cd/actions/workflows/init-release.yaml) from the release branch
+    - [ ] Review and merge the generated version bump PR
+    - [ ] Run `./hack/trigger-release.sh` to push the release tag
+    - [ ] Monitor the [Publish ArgoCD Release workflow](https://github.com/argoproj/argo-cd/actions/workflows/release.yaml)
+- [ ] Verify the release on [GitHub releases](https://github.com/argoproj/argo-cd/releases)
+- [ ] Verify the container image on [Quay.io](https://quay.io/repository/argoproj/argocd?tab=tags)
+- [ ] Verify the `stable` tag has been updated
+- [ ] Confirm the new version appears in [Read the Docs](https://argo-cd.readthedocs.io/)
+- [ ] Announce GA release with EOL notice
+    - [ ] Confirm that tweet and blog post are ready
+    - [ ] Publish tweet and blog post
+    - [ ] Post in #argo-cd and #argo-announcements announcing the release and EOL:
+    ```
+    :mega: Argo CD v{MAJOR}.{MINOR} is OUT NOW! :argocd::tada:
+
+    Please go through the following resources to know more about the release:
+
+    Upgrade instructions: https://argo-cd.readthedocs.io/en/latest/operator-manual/upgrading/{PREV_MINOR}-{MAJOR}.{MINOR}/
+    Blog: {BLOG_POST_URL}
+
+    :warning: IMPORTANT: With the release of Argo CD v{MAJOR}.{MINOR}, support for Argo CD v{EOL_VERSION} has officially reached End of Life (EOL).
+
+    Thanks to all the folks who spent their time contributing to this release in any way possible!
+    ```
+- [ ] (For the next release champion) Review the [items scheduled for the next release](https://github.com/orgs/argoproj/projects/25). If any item does not have an assignee who can commit to finish the feature, move it to the next release.
+- [ ] (For the next release champion) Schedule a time mid-way through the release cycle to review items again.
2  .github/workflows/bump-major-version.yaml (vendored)

@@ -13,7 +13,7 @@ jobs:
     runs-on: ubuntu-22.04
     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
           token: ${{ secrets.GITHUB_TOKEN }}
21  .github/workflows/cherry-pick-single.yml (vendored)

@@ -38,7 +38,7 @@ jobs:
           private-key: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}

       - name: Checkout repository
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
           token: ${{ steps.generate-token.outputs.token }}
@@ -91,11 +91,17 @@ jobs:
       - name: Create Pull Request
         run: |
           # Create cherry-pick PR
-          gh pr create \
-            --title "${{ inputs.pr_title }} (cherry-pick #${{ inputs.pr_number }} for ${{ inputs.version_number }})" \
-            --body "Cherry-picked ${{ inputs.pr_title }} (#${{ inputs.pr_number }})
+          TITLE="${PR_TITLE} (cherry-pick #${{ inputs.pr_number }} for ${{ inputs.version_number }})"
+          BODY=$(cat <<EOF
+          Cherry-picked ${PR_TITLE} (#${{ inputs.pr_number }})

-          ${{ steps.cherry-pick.outputs.signoff }}" \
+          ${{ steps.cherry-pick.outputs.signoff }}
+          EOF
+          )
+
+          gh pr create \
+            --title "$TITLE" \
+            --body "$BODY" \
             --base "${{ steps.cherry-pick.outputs.target_branch }}" \
             --head "${{ steps.cherry-pick.outputs.branch_name }}"
@@ -103,12 +109,13 @@ jobs:
           gh pr comment ${{ inputs.pr_number }} \
             --body "🍒 Cherry-pick PR created for ${{ inputs.version_number }}: #$(gh pr list --head ${{ steps.cherry-pick.outputs.branch_name }} --json number --jq '.[0].number')"
         env:
+          PR_TITLE: ${{ inputs.pr_title }}
           GH_TOKEN: ${{ steps.generate-token.outputs.token }}

       - name: Comment on failure
         if: failure()
         run: |
           gh pr comment ${{ inputs.pr_number }} \
-            --body "❌ Cherry-pick failed for ${{ inputs.version_number }}. Please check the workflow logs for details."
+            --body "❌ Cherry-pick failed for ${{ inputs.version_number }}. Please check the [workflow logs](https://github.com/argoproj/argo-cd/actions/runs/${{ github.run_id }}) for details."
         env:
           GH_TOKEN: ${{ steps.generate-token.outputs.token }}
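The `Create Pull Request` rewrite above is a quoting fix: the old step spliced `${{ inputs.pr_title }}` directly into the shell script, so GitHub Actions substituted the title into the script text before the shell parsed it, and a title containing quotes or `$(...)` could break or inject into the command. Routing the title through the `PR_TITLE` environment variable keeps it out of the script entirely. A minimal sketch of the two patterns (the title, PR number 1234, and version 3.2 are invented for illustration):

```sh
# Assume this is attacker-influenced text, e.g. a PR title:
PR_TITLE='fix: handle "quotes" and $(dangerous) titles'

# Old pattern (sketched): the title is pasted into the script *before*
# the shell parses it, so the embedded quotes would terminate the
# argument and $(dangerous) would be executed:
#   gh pr create --title "fix: handle "quotes" and $(dangerous) titles (...)"

# New pattern: the title only ever appears as a variable expansion,
# which the shell treats as data, never as code.
TITLE="${PR_TITLE} (cherry-pick #1234 for 3.2)"
BODY=$(cat <<EOF
Cherry-picked ${PR_TITLE} (#1234)
EOF
)
printf '%s\n%s\n' "$TITLE" "$BODY"
```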
67  .github/workflows/ci-build.yaml (vendored)

@@ -14,7 +14,7 @@ on:
 env:
   # Golang version to use across CI steps
   # renovate: datasource=golang-version packageName=golang
-  GOLANG_VERSION: '1.25.0'
+  GOLANG_VERSION: '1.25.3'

 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
@@ -31,7 +31,7 @@ jobs:
       frontend: ${{ steps.filter.outputs.frontend_any_changed }}
       docs: ${{ steps.filter.outputs.docs_any_changed }}
     steps:
-      - uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
         id: filter
         with:
@@ -55,7 +55,7 @@ jobs:
       - changes
     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - name: Setup Golang
         uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
@@ -67,7 +67,6 @@ jobs:
         run: |
           go mod tidy
           git diff --exit-code -- .
-
   build-go:
     name: Build & cache Go code
     if: ${{ needs.changes.outputs.backend == 'true' }}
@@ -76,13 +75,13 @@ jobs:
       - changes
     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - name: Setup Golang
         uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Restore go build cache
-        uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
+        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
         with:
           path: ~/.cache/go-build
           key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -103,7 +102,7 @@ jobs:
       - changes
     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - name: Setup Golang
         uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
@@ -112,7 +111,7 @@ jobs:
         uses: golangci/golangci-lint-action@4afd733a84b1f43292c63897423277bb7f4313a9 # v8.0.0
         with:
           # renovate: datasource=go packageName=github.com/golangci/golangci-lint versioning=regex:^v(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?$
-          version: v2.4.0
+          version: v2.5.0
           args: --verbose

   test-go:
@@ -129,7 +128,7 @@ jobs:
       - name: Create checkout directory
         run: mkdir -p ~/go/src/github.com/argoproj
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - name: Create symlink in GOPATH
         run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
       - name: Setup Golang
@@ -153,7 +152,7 @@ jobs:
         run: |
           echo "/usr/local/bin" >> $GITHUB_PATH
       - name: Restore go build cache
-        uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
+        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
         with:
           path: ~/.cache/go-build
           key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -174,7 +173,7 @@ jobs:
       - name: Run all unit tests
         run: make test-local
       - name: Generate test results artifacts
-        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: test-results
           path: test-results
@@ -193,7 +192,7 @@ jobs:
       - name: Create checkout directory
         run: mkdir -p ~/go/src/github.com/argoproj
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - name: Create symlink in GOPATH
         run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
       - name: Setup Golang
@@ -217,7 +216,7 @@ jobs:
         run: |
           echo "/usr/local/bin" >> $GITHUB_PATH
       - name: Restore go build cache
-        uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
+        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
         with:
           path: ~/.cache/go-build
           key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -238,7 +237,7 @@ jobs:
       - name: Run all unit tests
         run: make test-race-local
       - name: Generate test results artifacts
-        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: race-results
           path: test-results/
@@ -251,7 +250,7 @@ jobs:
       - changes
     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - name: Setup Golang
         uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
@@ -303,15 +302,15 @@ jobs:
       - changes
     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - name: Setup NodeJS
-        uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
+        uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
         with:
           # renovate: datasource=node-version packageName=node versioning=node
           node-version: '22.9.0'
       - name: Restore node dependency cache
         id: cache-dependencies
-        uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
+        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
         with:
           path: ui/node_modules
           key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -336,7 +335,7 @@ jobs:
   shellcheck:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - run: |
           sudo apt-get install shellcheck
           shellcheck -e SC2059 -e SC2154 -e SC2034 -e SC2016 -e SC1091 $(find . -type f -name '*.sh' | grep -v './ui/node_modules') | tee sc.log
@@ -355,12 +354,12 @@ jobs:
       sonar_secret: ${{ secrets.SONAR_TOKEN }}
     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
       - name: Restore node dependency cache
         id: cache-dependencies
-        uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
+        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
         with:
           path: ui/node_modules
           key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -368,12 +367,12 @@ jobs:
         run: |
           rm -rf ui/node_modules/argo-ui/node_modules
       - name: Get e2e code coverage
-        uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
         with:
           name: e2e-code-coverage
           path: e2e-code-coverage
       - name: Get unit test code coverage
-        uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
         with:
           name: test-results
           path: test-results
@@ -407,7 +406,7 @@ jobs:
   test-e2e:
     name: Run end-to-end tests
     if: ${{ needs.changes.outputs.backend == 'true' }}
-    runs-on: ubuntu-latest-16-cores
+    runs-on: oracle-vm-16cpu-64gb-x86-64
     strategy:
       fail-fast: false
       matrix:
@@ -426,7 +425,7 @@ jobs:
       - build-go
       - changes
     env:
-      GOPATH: /home/runner/go
+      GOPATH: /home/ubuntu/go
       ARGOCD_FAKE_IN_CLUSTER: 'true'
       ARGOCD_SSH_DATA_PATH: '/tmp/argo-e2e/app/config/ssh'
       ARGOCD_TLS_DATA_PATH: '/tmp/argo-e2e/app/config/tls'
@@ -447,7 +446,7 @@ jobs:
           swap-storage: false
           tool-cache: false
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - name: Setup Golang
         uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
@@ -462,19 +461,19 @@ jobs:
           set -x
           curl -sfL https://get.k3s.io | sh -
           sudo chmod -R a+rw /etc/rancher/k3s
-          sudo mkdir -p $HOME/.kube && sudo chown -R runner $HOME/.kube
+          sudo mkdir -p $HOME/.kube && sudo chown -R ubuntu $HOME/.kube
           sudo k3s kubectl config view --raw > $HOME/.kube/config
-          sudo chown runner $HOME/.kube/config
+          sudo chown ubuntu $HOME/.kube/config
           sudo chmod go-r $HOME/.kube/config
           kubectl version
       - name: Restore go build cache
-        uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
+        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
         with:
           path: ~/.cache/go-build
           key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
       - name: Add ~/go/bin to PATH
         run: |
-          echo "/home/runner/go/bin" >> $GITHUB_PATH
+          echo "/home/ubuntu/go/bin" >> $GITHUB_PATH
       - name: Add /usr/local/bin to PATH
         run: |
           echo "/usr/local/bin" >> $GITHUB_PATH
@@ -496,11 +495,11 @@ jobs:
         run: |
           docker pull ghcr.io/dexidp/dex:v2.43.0
           docker pull argoproj/argo-cd-ci-builder:v1.0.0
-          docker pull redis:7.2.7-alpine
+          docker pull redis:8.2.1-alpine
       - name: Create target directory for binaries in the build-process
         run: |
           mkdir -p dist
-          chown runner dist
+          chown ubuntu dist
       - name: Run E2E server and wait for it being available
         timeout-minutes: 30
         run: |
@@ -526,13 +525,13 @@ jobs:
           goreman run stop-all || echo "goreman trouble"
           sleep 30
       - name: Upload e2e coverage report
-        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: e2e-code-coverage
           path: /tmp/coverage
         if: ${{ matrix.k3s.latest }}
       - name: Upload e2e-server logs
-        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: e2e-server-k8s${{ matrix.k3s.version }}.log
           path: /tmp/e2e-server.log
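A recurring theme in the e2e hunks above is the move from GitHub-hosted runners (user `runner`, home `/home/runner`) to self-hosted Oracle VMs (user `ubuntu`, home `/home/ubuntu`), which forces every hardcoded user name and path to change in lockstep. One way to sidestep that class of churn, sketched here as a suggestion rather than what the workflow actually does, is to derive the user and home directory at runtime:

```sh
# Hypothetical portable alternative to hardcoding `runner` or `ubuntu`:
RUN_USER="$(id -un)"   # whoever the job is executing as
sudo mkdir -p "$HOME/.kube" && sudo chown -R "$RUN_USER" "$HOME/.kube"
sudo k3s kubectl config view --raw > "$HOME/.kube/config"
sudo chown "$RUN_USER" "$HOME/.kube/config"
echo "$HOME/go/bin" >> "$GITHUB_PATH"   # instead of /home/runner/go/bin
```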
2  .github/workflows/codeql.yml (vendored)

@@ -29,7 +29,7 @@ jobs:
     runs-on: ubuntu-22.04
     steps:
      - name: Checkout repository
-       uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+       uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0

     # Use correct go version. https://github.com/github/codeql-action/issues/1842#issuecomment-1704398087
     - name: Setup Golang
12  .github/workflows/image-reuse.yaml (vendored)

@@ -56,14 +56,14 @@ jobs:
       image-digest: ${{ steps.image.outputs.digest }}
     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
           token: ${{ secrets.GITHUB_TOKEN }}
         if: ${{ github.ref_type == 'tag'}}

       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         if: ${{ github.ref_type != 'tag'}}

       - name: Setup Golang
@@ -73,7 +73,7 @@ jobs:
           cache: false

       - name: Install cosign
-        uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
+        uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0

       - uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
       - uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
@@ -103,7 +103,7 @@ jobs:
           echo 'EOF' >> $GITHUB_ENV

       - name: Login to Quay.io
-        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
+        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
         with:
           registry: quay.io
           username: ${{ secrets.quay_username }}
@@ -111,7 +111,7 @@ jobs:
         if: ${{ inputs.quay_image_name && inputs.push }}

       - name: Login to GitHub Container Registry
-        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
+        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
         with:
           registry: ghcr.io
           username: ${{ secrets.ghcr_username }}
@@ -119,7 +119,7 @@ jobs:
         if: ${{ inputs.ghcr_image_name && inputs.push }}

       - name: Login to dockerhub Container Registry
-        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
+        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
         with:
           username: ${{ secrets.docker_username }}
           password: ${{ secrets.docker_password }}
8  .github/workflows/image.yaml (vendored)

@@ -25,7 +25,7 @@ jobs:
       image-tag: ${{ steps.image.outputs.tag}}
       platforms: ${{ steps.platforms.outputs.platforms }}
     steps:
-      - uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0

       - name: Set image tag for ghcr
         run: echo "tag=$(cat ./VERSION)-${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
@@ -53,7 +53,7 @@ jobs:
     with:
       # Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
       # renovate: datasource=golang-version packageName=golang
-      go-version: 1.25.0
+      go-version: 1.25.3
       platforms: ${{ needs.set-vars.outputs.platforms }}
       push: false
@@ -70,7 +70,7 @@ jobs:
       ghcr_image_name: ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }}
       # Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
       # renovate: datasource=golang-version packageName=golang
-      go-version: 1.25.0
+      go-version: 1.25.3
       platforms: ${{ needs.set-vars.outputs.platforms }}
       push: true
     secrets:
@@ -106,7 +106,7 @@ jobs:
     if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name == 'push' }}
     runs-on: ubuntu-22.04
     steps:
-      - uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
      - run: git clone "https://$TOKEN@github.com/argoproj/argoproj-deployments"
        env:
          TOKEN: ${{ secrets.TOKEN }}
2  .github/workflows/init-release.yaml (vendored)

@@ -23,7 +23,7 @@ jobs:
     runs-on: ubuntu-22.04
     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
           token: ${{ secrets.GITHUB_TOKEN }}
78  .github/workflows/release.yaml (vendored)

@@ -11,7 +11,7 @@ permissions: {}

 env:
   # renovate: datasource=golang-version packageName=golang
-  GOLANG_VERSION: '1.25.0' # Note: go-version must also be set in job argocd-image.with.go-version
+  GOLANG_VERSION: '1.25.3' # Note: go-version must also be set in job argocd-image.with.go-version

 jobs:
   argocd-image:
@@ -25,13 +25,49 @@ jobs:
       quay_image_name: quay.io/argoproj/argocd:${{ github.ref_name }}
       # Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
       # renovate: datasource=golang-version packageName=golang
-      go-version: 1.25.0
+      go-version: 1.25.3
       platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
       push: true
     secrets:
       quay_username: ${{ secrets.RELEASE_QUAY_USERNAME }}
       quay_password: ${{ secrets.RELEASE_QUAY_TOKEN }}

+  setup-variables:
+    name: Setup Release Variables
+    if: github.repository == 'argoproj/argo-cd'
+    runs-on: ubuntu-22.04
+    outputs:
+      is_pre_release: ${{ steps.var.outputs.is_pre_release }}
+      is_latest_release: ${{ steps.var.outputs.is_latest_release }}
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        with:
+          fetch-depth: 0
+          token: ${{ secrets.GITHUB_TOKEN }}
+      - name: Setup variables
+        id: var
+        run: |
+          set -xue
+          # Fetch all tag information
+          git fetch --prune --tags --force
+
+          LATEST_RELEASE_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | grep -v '-' | tail -n1)
+
+          PRE_RELEASE=false
+          # Check if latest tag is a pre-release
+          if echo ${{ github.ref_name }} | grep -E -- '-rc[0-9]+$';then
+            PRE_RELEASE=true
+          fi
+
+          IS_LATEST=false
+          # Ensure latest release tag matches github.ref_name
+          if [[ $LATEST_RELEASE_TAG == ${{ github.ref_name }} ]];then
+            IS_LATEST=true
+          fi
+          echo "is_pre_release=$PRE_RELEASE" >> $GITHUB_OUTPUT
+          echo "is_latest_release=$IS_LATEST" >> $GITHUB_OUTPUT
+
   argocd-image-provenance:
     needs: [argocd-image]
     permissions:
@@ -50,18 +86,20 @@ jobs:

   goreleaser:
     needs:
+      - setup-variables
       - argocd-image
       - argocd-image-provenance
     permissions:
       contents: write # used for uploading assets
     if: github.repository == 'argoproj/argo-cd'
     runs-on: ubuntu-22.04
+    env:
+      GORELEASER_MAKE_LATEST: ${{ needs.setup-variables.outputs.is_latest_release }}
     outputs:
       hashes: ${{ steps.hash.outputs.hashes }}

     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
           token: ${{ secrets.GITHUB_TOKEN }}
@@ -142,12 +180,12 @@ jobs:
     permissions:
       contents: write # Needed for release uploads
     outputs:
-      hashes: ${{ steps.sbom-hash.outputs.hashes}}
+      hashes: ${{ steps.sbom-hash.outputs.hashes }}
     if: github.repository == 'argoproj/argo-cd'
     runs-on: ubuntu-22.04
     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
           token: ${{ secrets.GITHUB_TOKEN }}
@@ -198,7 +236,7 @@ jobs:
           echo "hashes=$(sha256sum /tmp/sbom.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"

       - name: Upload SBOM
-        uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836 # v2.3.3
+        uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090 # v2.4.1
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         with:
@@ -221,6 +259,7 @@ jobs:

   post-release:
     needs:
+      - setup-variables
       - argocd-image
       - goreleaser
       - generate-sbom
@@ -229,9 +268,11 @@ jobs:
       pull-requests: write # Needed to create PR for VERSION update.
     if: github.repository == 'argoproj/argo-cd'
     runs-on: ubuntu-22.04
+    env:
+      TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
           token: ${{ secrets.GITHUB_TOKEN }}
@@ -242,27 +283,6 @@ jobs:
           git config --global user.email 'ci@argoproj.com'
           git config --global user.name 'CI'

-      - name: Check if tag is the latest version and not a pre-release
-        run: |
-          set -xue
-          # Fetch all tag information
-          git fetch --prune --tags --force
-
-          LATEST_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n1)
-
-          PRE_RELEASE=false
-          # Check if latest tag is a pre-release
-          if echo $LATEST_TAG | grep -E -- '-rc[0-9]+$';then
-            PRE_RELEASE=true
-          fi
-
-          # Ensure latest tag matches github.ref_name & not a pre-release
-          if [[ $LATEST_TAG == ${{ github.ref_name }} ]] && [[ $PRE_RELEASE != 'true' ]];then
-            echo "TAG_STABLE=true" >> $GITHUB_ENV
-          else
-            echo "TAG_STABLE=false" >> $GITHUB_ENV
-          fi
-
       - name: Update stable tag to latest version
         run: |
           git tag -f stable ${{ github.ref_name }}
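The tag-selection one-liner in the new `setup-variables` job leans on Git's `versionsort.suffix` so that release candidates sort before the final release they precede. A rough sketch of the behaviour, runnable in a scratch repository (the tag names are made up for illustration):

```sh
# Sketch only: demonstrates why `-c versionsort.suffix=-rc` matters.
# Without it, v2.0.0-rc1 would sort *after* v2.0.0 in version order.
git init -q /tmp/versionsort-demo && cd /tmp/versionsort-demo
git commit --allow-empty -qm init
git tag v1.9.0 && git tag v2.0.0-rc1 && git tag v2.0.0

# With the suffix configured, RCs sort below their GA tag:
git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname
# v1.9.0
# v2.0.0-rc1
# v2.0.0

# `grep -v '-'` then drops pre-releases, and `tail -n1` picks the
# newest stable tag (v2.0.0 here), which is what the workflow compares
# against github.ref_name to decide is_latest_release.
git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | grep -v '-' | tail -n1
```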
12  .github/workflows/renovate.yaml (vendored)

@@ -10,6 +10,7 @@ permissions:
 jobs:
   renovate:
     runs-on: ubuntu-latest
+    if: github.repository == 'argoproj/argo-cd'
     steps:
       - name: Get token
         id: get_token
@@ -19,10 +20,17 @@ jobs:
           private-key: ${{ secrets.RENOVATE_APP_PRIVATE_KEY }}

       - name: Checkout
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # 4.2.2
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # 5.0.0

+      # Some codegen commands require Go to be setup
+      - name: Setup Golang
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
+        with:
+          # renovate: datasource=golang-version packageName=golang
+          go-version: 1.25.3
+
       - name: Self-hosted Renovate
-        uses: renovatebot/github-action@f8af9272cd94a4637c29f60dea8731afd3134473 #43.0.12
+        uses: renovatebot/github-action@ea850436a5fe75c0925d583c7a02c60a5865461d #43.0.20
         with:
           configurationFile: .github/configs/renovate-config.js
           token: '${{ steps.get_token.outputs.token }}'
6  .github/workflows/scorecard.yaml (vendored)

@@ -30,12 +30,12 @@ jobs:

     steps:
       - name: "Checkout code"
-        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           persist-credentials: false

       - name: "Run analysis"
-        uses: ossf/scorecard-action@05b42c624433fc40578a4040d5cf5e36ddca8cde # v2.4.2
+        uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a # v2.4.3
         with:
           results_file: results.sarif
           results_format: sarif
@@ -54,7 +54,7 @@ jobs:
       # Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
       # format to the repository Actions tab.
       - name: "Upload artifact"
-        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: SARIF file
           path: results.sarif
2  .github/workflows/update-snyk.yaml (vendored)

@@ -17,7 +17,7 @@ jobs:
     runs-on: ubuntu-22.04
     steps:
       - name: Checkout code
-        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           token: ${{ secrets.GITHUB_TOKEN }}
       - name: Build reports
1  .gitignore (vendored)

@@ -20,6 +20,7 @@ node_modules/
 .kube/
 ./test/cmp/*.sock
 .envrc.remote
+.mirrord/
 .*.swp
 rerunreport.txt
@@ -22,6 +22,7 @@ linters:
   - govet
   - importas
   - misspell
+  - noctx
   - perfsprint
   - revive
   - staticcheck
@@ -49,13 +49,14 @@ archives:
       - argocd-cli
     name_template: |-
       {{ .ProjectName }}-{{ .Os }}-{{ .Arch }}
-    formats: [ binary ]
+    formats: [binary]

 checksum:
   name_template: 'cli_checksums.txt'
   algorithm: sha256

 release:
+  make_latest: '{{ .Env.GORELEASER_MAKE_LATEST }}'
   prerelease: auto
   draft: false
   header: |
@@ -1,11 +1,5 @@
 dir: '{{.InterfaceDir}}/mocks'
-structname: '{{.InterfaceName}}'
 filename: '{{.InterfaceName}}.go'
-pkgname: mocks
-
-template-data:
-  unroll-variadic: true
-
 packages:
   github.com/argoproj/argo-cd/v3/applicationset/generators:
     interfaces:
@@ -14,17 +8,15 @@ packages:
     interfaces:
       Repos: {}
   github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider:
     config:
       dir: applicationset/services/scm_provider/aws_codecommit/mocks
     interfaces:
       AWSCodeCommitClient: {}
       AWSTaggingClient: {}
       AzureDevOpsClientFactory: {}
   github.com/argoproj/argo-cd/v3/applicationset/utils:
     interfaces:
       Renderer: {}
   github.com/argoproj/argo-cd/v3/commitserver/apiclient:
     interfaces:
       Clientset: {}
       CommitServiceClient: {}
   github.com/argoproj/argo-cd/v3/commitserver/commit:
     interfaces:
@@ -35,6 +27,7 @@ packages:
   github.com/argoproj/argo-cd/v3/controller/hydrator:
     interfaces:
       Dependencies: {}
+      RepoGetter: {}
   github.com/argoproj/argo-cd/v3/pkg/apiclient/cluster:
     interfaces:
       ClusterServiceServer: {}
@@ -47,8 +40,8 @@ packages:
       AppProjectInterface: {}
   github.com/argoproj/argo-cd/v3/reposerver/apiclient:
     interfaces:
-      RepoServerService_GenerateManifestWithFilesClient: {}
       RepoServerServiceClient: {}
+      RepoServerService_GenerateManifestWithFilesClient: {}
   github.com/argoproj/argo-cd/v3/server/application:
     interfaces:
       Broadcaster: {}
@@ -63,26 +56,37 @@ packages:
   github.com/argoproj/argo-cd/v3/util/db:
     interfaces:
       ArgoDB: {}
+      RepoCredsDB: {}
   github.com/argoproj/argo-cd/v3/util/git:
     interfaces:
       Client: {}
   github.com/argoproj/argo-cd/v3/util/helm:
     interfaces:
       Client: {}
-  github.com/argoproj/argo-cd/v3/util/oci:
-    interfaces:
-      Client: {}
   github.com/argoproj/argo-cd/v3/util/io:
     interfaces:
       TempPaths: {}
   github.com/argoproj/argo-cd/v3/util/notification/argocd:
     interfaces:
       Service: {}
+  github.com/argoproj/argo-cd/v3/util/oci:
+    interfaces:
+      Client: {}
   github.com/argoproj/argo-cd/v3/util/workloadidentity:
     interfaces:
       TokenProvider: {}
   github.com/argoproj/gitops-engine/pkg/cache:
     interfaces:
       ClusterCache: {}
   github.com/argoproj/gitops-engine/pkg/diff:
     interfaces:
       ServerSideDryRunner: {}
   github.com/microsoft/azure-devops-go-api/azuredevops/v7/git:
     config:
       dir: applicationset/services/scm_provider/azure_devops/git/mocks
     interfaces:
       Client: {}
+pkgname: mocks
+structname: '{{.InterfaceName}}'
+template-data:
+  unroll-variadic: true
@@ -16,4 +16,5 @@
 # CLI
 /cmd/argocd/** @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
 /cmd/main.go @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
-/docs/operator-manual/ @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
+# Also include @argoproj/argocd-approvers-docs to avoid requiring CLI approvers for docs-only PRs.
+/docs/operator-manual/ @argoproj/argocd-approvers @argoproj/argocd-approvers-docs @argoproj/argocd-approvers-cli
@@ -1,10 +1,10 @@
-ARG BASE_IMAGE=docker.io/library/ubuntu:25.04@sha256:10bb10bb062de665d4dc3e0ea36715270ead632cfcb74d08ca2273712a0dfb42
+ARG BASE_IMAGE=docker.io/library/ubuntu:25.04@sha256:27771fb7b40a58237c98e8d3e6b9ecdd9289cec69a857fccfb85ff36294dac20
 ####################################################################################################
 # Builder image
 # Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
 # Also used as the image in CI jobs so needs all dependencies
 ####################################################################################################
-FROM docker.io/library/golang:1.25.0@sha256:9e56f0d0f043a68bb8c47c819e47dc29f6e8f5129b8885bed9d43f058f7f3ed6 AS builder
+FROM docker.io/library/golang:1.25.3@sha256:6d4e5e74f47db00f7f24da5f53c1b4198ae46862a47395e30477365458347bf2 AS builder

 WORKDIR /tmp

@@ -103,11 +103,13 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
 ####################################################################################################
 # Argo CD Build stage which performs the actual build of Argo CD binaries
 ####################################################################################################
-FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.0@sha256:9e56f0d0f043a68bb8c47c819e47dc29f6e8f5129b8885bed9d43f058f7f3ed6 AS argocd-build
+FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.3@sha256:6d4e5e74f47db00f7f24da5f53c1b4198ae46862a47395e30477365458347bf2 AS argocd-build

 WORKDIR /go/src/github.com/argoproj/argo-cd

 COPY go.* ./
+RUN mkdir -p gitops-engine
+COPY gitops-engine/go.* ./gitops-engine
 RUN go mod download

 # Perform the build
@@ -1,4 +1,4 @@
-FROM docker.io/library/golang:1.25.0@sha256:9e56f0d0f043a68bb8c47c819e47dc29f6e8f5129b8885bed9d43f058f7f3ed6
+FROM docker.io/library/golang:1.25.3@sha256:6d4e5e74f47db00f7f24da5f53c1b4198ae46862a47395e30477365458347bf2

 ENV DEBIAN_FRONTEND=noninteractive
@@ -1,4 +1,4 @@
-FROM node:20
+FROM node:25

 WORKDIR /app/ui
6  Makefile

@@ -261,8 +261,12 @@ clidocsgen:
 actionsdocsgen:
 	hack/generate-actions-list.sh

+.PHONY: resourceiconsgen
+resourceiconsgen:
+	hack/generate-icons-typescript.sh
+
 .PHONY: codegen-local
-codegen-local: mod-vendor-local mockgen gogen protogen clientgen openapigen clidocsgen actionsdocsgen manifests-local notification-docs notification-catalog
+codegen-local: mod-vendor-local mockgen gogen protogen clientgen openapigen clidocsgen actionsdocsgen resourceiconsgen manifests-local notification-docs notification-catalog
 	rm -rf vendor/

 .PHONY: codegen-local-fast
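The new `resourceiconsgen` target slots into the `codegen-local` pipeline, so icon regeneration now happens automatically with full codegen; it can also be run on its own. Typical usage, assuming a local checkout with the `hack/` scripts present:

```sh
# Regenerate only the resource-icons TypeScript:
make resourceiconsgen

# Or run the full local codegen pipeline, which now includes it:
make codegen-local
```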
2  Procfile

@@ -9,6 +9,6 @@ ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
 git-server: test/fixture/testrepos/start-git.sh
 helm-registry: test/fixture/testrepos/start-helm-registry.sh
 oci-registry: test/fixture/testrepos/start-authenticated-helm-registry.sh
-dev-mounter: [[ "$ARGOCD_E2E_TEST" != "true" ]] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
+dev-mounter: [ "$ARGOCD_E2E_TEST" != "true" ] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
 applicationset-controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/applicationset-controller} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
 notification: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/notification} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_BINARY_NAME=argocd-notifications $COMMAND --loglevel debug --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --self-service-notification-enabled=${ARGOCD_NOTIFICATION_CONTROLLER_SELF_SERVICE_NOTIFICATION_ENABLED:-'false'}"
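The `dev-mounter` change swaps bash's `[[ ... ]]` test for the POSIX `[ ... ]` builtin, matching the other Procfile entries. Procfile commands may be executed by a plain `sh`, where `[[` is not guaranteed to exist, so the single-bracket form is the portable choice. A small sketch of the difference (the variable value is illustrative):

```sh
#!/bin/sh
# `[[ ... ]]` is a bash/ksh/zsh extension; under dash or another strict
# POSIX sh it fails with "[[: not found". `[ ... ]` works everywhere,
# provided the variable is quoted so an empty value still parses.
ARGOCD_E2E_TEST=""
[ "$ARGOCD_E2E_TEST" != "true" ] && echo "not an e2e run, starting dev-mounter"
```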
@@ -3,9 +3,9 @@ header:
 expiration-date: '2024-10-31T00:00:00.000Z' # One year from initial release.
 last-updated: '2023-10-27'
 last-reviewed: '2023-10-27'
-commit-hash: 320f46f06beaf75f9c406e3a47e2e09d36e2047a
+commit-hash: 06ef059f9fc7cf9da2dfaef2a505ee1e3c693485
 project-url: https://github.com/argoproj/argo-cd
-project-release: v3.2.0
+project-release: v3.3.0
 changelog: https://github.com/argoproj/argo-cd/releases
 license: https://github.com/argoproj/argo-cd/blob/master/LICENSE
 project-lifecycle:
Tiltfile (23 changed lines)
@@ -123,6 +123,7 @@ k8s_resource(
 		'9345:2345',
 		'8083:8083'
 	],
 	resource_deps=['build']
 )
 
 # track crds
@@ -148,6 +149,7 @@ k8s_resource(
 		'9346:2345',
 		'8084:8084'
 	],
 	resource_deps=['build']
 )
 
 # track argocd-redis resources and port forward
@@ -162,6 +164,7 @@ k8s_resource(
 	port_forwards=[
 		'6379:6379',
 	],
 	resource_deps=['build']
 )
 
 # track argocd-applicationset-controller resources
@@ -180,6 +183,7 @@ k8s_resource(
 		'8085:8080',
 		'7000:7000'
 	],
 	resource_deps=['build']
 )
 
 # track argocd-application-controller resources
@@ -197,6 +201,7 @@ k8s_resource(
 		'9348:2345',
 		'8086:8082',
 	],
 	resource_deps=['build']
 )
 
 # track argocd-notifications-controller resources
@@ -214,6 +219,7 @@ k8s_resource(
 		'9349:2345',
 		'8087:9001',
 	],
 	resource_deps=['build']
 )
 
 # track argocd-dex-server resources
@@ -225,6 +231,7 @@ k8s_resource(
 		'argocd-dex-server:role',
 		'argocd-dex-server:rolebinding',
 	],
 	resource_deps=['build']
 )
 
 # track argocd-commit-server resources
@@ -239,6 +246,19 @@ k8s_resource(
 		'8088:8087',
 		'8089:8086',
 	],
 	resource_deps=['build']
 )
 
+# ui dependencies
+local_resource(
+	'node-modules',
+	'yarn',
+	dir='ui',
+	deps = [
+		'ui/package.json',
+		'ui/yarn.lock',
+	],
+	allow_parallel=True,
+)
+
 # docker for ui
@@ -260,6 +280,7 @@ k8s_resource(
 	port_forwards=[
 		'4000:4000',
 	],
 	resource_deps=['node-modules'],
 )
 
 # linting
@@ -278,6 +299,7 @@ local_resource(
 		'ui',
 	],
 	allow_parallel=True,
 	resource_deps=['node-modules'],
 )
 
 local_resource(
@@ -287,5 +309,6 @@ local_resource(
 		'go.mod',
 		'go.sum',
 	],
 	allow_parallel=True,
 )
USERS.md (3 changed lines)
@@ -63,6 +63,7 @@ Currently, the following organizations are **officially** using Argo CD:
 1. [Camptocamp](https://camptocamp.com)
 1. [Candis](https://www.candis.io)
 1. [Capital One](https://www.capitalone.com)
+1. [Capptain LTD](https://capptain.co/)
 1. [CARFAX Europe](https://www.carfax.eu)
 1. [CARFAX](https://www.carfax.com)
 1. [Carrefour Group](https://www.carrefour.com)
@@ -309,7 +310,7 @@ Currently, the following organizations are **officially** using Argo CD:
 1. [Relex Solutions](https://www.relexsolutions.com/)
 1. [RightRev](https://rightrev.com/)
 1. [Rijkswaterstaat](https://www.rijkswaterstaat.nl/en)
-1. [Rise](https://www.risecard.eu/)
+1. Rise
 1. [Riskified](https://www.riskified.com/)
 1. [Robotinfra](https://www.robotinfra.com)
 1. [Rocket.Chat](https://rocket.chat)
@@ -37,6 +37,7 @@ import (
 	"k8s.io/client-go/kubernetes"
 	"k8s.io/client-go/tools/record"
 	"k8s.io/client-go/util/retry"
+	"k8s.io/utils/ptr"
 	ctrl "sigs.k8s.io/controller-runtime"
 	"sigs.k8s.io/controller-runtime/pkg/builder"
 	"sigs.k8s.io/controller-runtime/pkg/client"
@@ -46,6 +47,8 @@ import (
 	"sigs.k8s.io/controller-runtime/pkg/handler"
 	"sigs.k8s.io/controller-runtime/pkg/predicate"
 
+	"github.com/argoproj/gitops-engine/pkg/health"
+
 	"github.com/argoproj/argo-cd/v3/applicationset/controllers/template"
 	"github.com/argoproj/argo-cd/v3/applicationset/generators"
 	"github.com/argoproj/argo-cd/v3/applicationset/metrics"
@@ -100,6 +103,7 @@ type ApplicationSetReconciler struct {
 	GlobalPreservedAnnotations []string
 	GlobalPreservedLabels      []string
 	Metrics                    *metrics.ApplicationsetMetrics
+	MaxResourcesStatusCount    int
 }
 
 // +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -226,8 +230,6 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
 		return ctrl.Result{}, fmt.Errorf("failed to get update resources status for application set: %w", err)
 	}
 
-	// appMap is a name->app collection of Applications in this ApplicationSet.
-	appMap := map[string]argov1alpha1.Application{}
 	// appSyncMap tracks which apps will be synced during this reconciliation.
 	appSyncMap := map[string]bool{}
 
@@ -241,22 +243,20 @@
 				return ctrl.Result{}, fmt.Errorf("failed to clear previous AppSet application statuses for %v: %w", applicationSetInfo.Name, err)
 			}
 		} else if isRollingSyncStrategy(&applicationSetInfo) {
 			// The appset uses progressive sync with `RollingSync` strategy
-			for _, app := range currentApplications {
-				appMap[app.Name] = app
-			}
-
-			appSyncMap, err = r.performProgressiveSyncs(ctx, logCtx, applicationSetInfo, currentApplications, generatedApplications, appMap)
+			appSyncMap, err = r.performProgressiveSyncs(ctx, logCtx, applicationSetInfo, currentApplications, generatedApplications)
 			if err != nil {
 				return ctrl.Result{}, fmt.Errorf("failed to perform progressive sync reconciliation for application set: %w", err)
 			}
 		}
 	}
 } else {
 	// Progressive Sync is disabled, clear any existing applicationStatus to prevent stale data
 	if len(applicationSetInfo.Status.ApplicationStatus) > 0 {
 		logCtx.Infof("Progressive Sync disabled, removing %v AppStatus entries from ApplicationSet %v", len(applicationSetInfo.Status.ApplicationStatus), applicationSetInfo.Name)
-		var validApps []argov1alpha1.Application
-		for i := range generatedApplications {
-			if validateErrors[generatedApplications[i].QualifiedName()] == nil {
-				validApps = append(validApps, generatedApplications[i])
+		err := r.setAppSetApplicationStatus(ctx, logCtx, &applicationSetInfo, []argov1alpha1.ApplicationSetApplicationStatus{})
+		if err != nil {
+			return ctrl.Result{}, fmt.Errorf("failed to clear AppSet application statuses when Progressive Sync is disabled for %v: %w", applicationSetInfo.Name, err)
 		}
 	}
 }
@@ -287,13 +287,25 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
 		)
 	}
 
+	var validApps []argov1alpha1.Application
+	for i := range generatedApplications {
+		if validateErrors[generatedApplications[i].QualifiedName()] == nil {
+			validApps = append(validApps, generatedApplications[i])
+		}
+	}
+
 	if r.EnableProgressiveSyncs {
 		// trigger appropriate application syncs if RollingSync strategy is enabled
 		if progressiveSyncsRollingSyncStrategyEnabled(&applicationSetInfo) {
-			validApps = r.syncValidApplications(logCtx, &applicationSetInfo, appSyncMap, appMap, validApps)
+			validApps = r.syncDesiredApplications(logCtx, &applicationSetInfo, appSyncMap, validApps)
 		}
 	}
 
+	// Sort apps by name so they are updated/created in the same order, and condition errors are the same
+	sort.Slice(validApps, func(i, j int) bool {
+		return validApps[i].Name < validApps[j].Name
+	})
+
 	if utils.DefaultPolicy(applicationSetInfo.Spec.SyncPolicy, r.Policy, r.EnablePolicyOverride).AllowUpdate() {
 		err = r.createOrUpdateInCluster(ctx, logCtx, applicationSetInfo, validApps)
 		if err != nil {
@@ -325,6 +337,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
 	}
 
 	if utils.DefaultPolicy(applicationSetInfo.Spec.SyncPolicy, r.Policy, r.EnablePolicyOverride).AllowDelete() {
+		// Delete the generatedApplications instead of the validApps because we want to be able to delete applications in error/invalid state
 		err = r.deleteInCluster(ctx, logCtx, applicationSetInfo, generatedApplications)
 		if err != nil {
 			_ = r.setApplicationSetStatusCondition(ctx,
@@ -931,7 +944,7 @@ func (r *ApplicationSetReconciler) removeOwnerReferencesOnDeleteAppSet(ctx conte
 	return nil
 }
 
-func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context, logCtx *log.Entry, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application, appMap map[string]argov1alpha1.Application) (map[string]bool, error) {
+func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context, logCtx *log.Entry, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application) (map[string]bool, error) {
 	appDependencyList, appStepMap := r.buildAppDependencyList(logCtx, appset, desiredApplications)
 
 	_, err := r.updateApplicationSetApplicationStatus(ctx, logCtx, &appset, applications, appStepMap)
@@ -940,21 +953,21 @@ func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context,
 	}
 
 	logCtx.Infof("ApplicationSet %v step list:", appset.Name)
-	for i, step := range appDependencyList {
-		logCtx.Infof("step %v: %+v", i+1, step)
+	for stepIndex, applicationNames := range appDependencyList {
+		logCtx.Infof("step %v: %+v", stepIndex+1, applicationNames)
 	}
 
-	appSyncMap := r.buildAppSyncMap(appset, appDependencyList, appMap)
-	logCtx.Infof("Application allowed to sync before maxUpdate?: %+v", appSyncMap)
+	appsToSync := r.getAppsToSync(appset, appDependencyList, applications)
+	logCtx.Infof("Application allowed to sync before maxUpdate?: %+v", appsToSync)
 
-	_, err = r.updateApplicationSetApplicationStatusProgress(ctx, logCtx, &appset, appSyncMap, appStepMap)
+	_, err = r.updateApplicationSetApplicationStatusProgress(ctx, logCtx, &appset, appsToSync, appStepMap)
 	if err != nil {
 		return nil, fmt.Errorf("failed to update applicationset application status progress: %w", err)
 	}
 
 	_ = r.updateApplicationSetApplicationStatusConditions(ctx, &appset)
 
-	return appSyncMap, nil
+	return appsToSync, nil
 }
 
 // this list tracks which Applications belong to each RollingUpdate step
@@ -1028,55 +1041,53 @@ func labelMatchedExpression(logCtx *log.Entry, val string, matchExpression argov
 	return valueMatched
 }
 
-// this map is used to determine which stage of Applications are ready to be updated in the reconciler loop
-func (r *ApplicationSetReconciler) buildAppSyncMap(applicationSet argov1alpha1.ApplicationSet, appDependencyList [][]string, appMap map[string]argov1alpha1.Application) map[string]bool {
+// getAppsToSync returns a Map of Applications that should be synced in this progressive sync wave
+func (r *ApplicationSetReconciler) getAppsToSync(applicationSet argov1alpha1.ApplicationSet, appDependencyList [][]string, currentApplications []argov1alpha1.Application) map[string]bool {
 	appSyncMap := map[string]bool{}
-	syncEnabled := true
+	currentAppsMap := map[string]bool{}
 
-	// healthy stages and the first non-healthy stage should have sync enabled
-	// every stage after should have sync disabled
+	for _, app := range currentApplications {
+		currentAppsMap[app.Name] = true
+	}
 
-	for i := range appDependencyList {
+	for stepIndex := range appDependencyList {
 		// set the syncEnabled boolean for every Application in the current step
-		for _, appName := range appDependencyList[i] {
-			appSyncMap[appName] = syncEnabled
+		for _, appName := range appDependencyList[stepIndex] {
+			appSyncMap[appName] = true
 		}
 
-		// detect if we need to halt before progressing to the next step
-		for _, appName := range appDependencyList[i] {
+		// evaluate if we need to sync next waves
+		syncNextWave := true
+		for _, appName := range appDependencyList[stepIndex] {
+			// Check if application is created and managed by this AppSet, if it is not created yet, we cannot progress
+			if _, ok := currentAppsMap[appName]; !ok {
+				syncNextWave = false
+				break
+			}
 
 			idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, appName)
 			if idx == -1 {
-				// no Application status found, likely because the Application is being newly created
-				syncEnabled = false
+				// No Application status found, likely because the Application is being newly created
+				// This means this wave is not yet completed
+				syncNextWave = false
 				break
 			}
 
 			appStatus := applicationSet.Status.ApplicationStatus[idx]
-			app, ok := appMap[appName]
-			if !ok {
-				// application name not found in the list of applications managed by this ApplicationSet, maybe because it's being deleted
-				syncEnabled = false
-				break
-			}
-			syncEnabled = appSyncEnabledForNextStep(&applicationSet, app, appStatus)
-			if !syncEnabled {
+			if appStatus.Status != argov1alpha1.ProgressiveSyncHealthy {
+				// At least one application in this wave is not yet healthy. We cannot proceed to the next wave
+				syncNextWave = false
 				break
 			}
 		}
+		if !syncNextWave {
+			break
+		}
 	}
 
 	return appSyncMap
 }
 
-func appSyncEnabledForNextStep(appset *argov1alpha1.ApplicationSet, app argov1alpha1.Application, appStatus argov1alpha1.ApplicationSetApplicationStatus) bool {
-	if progressiveSyncsRollingSyncStrategyEnabled(appset) {
-		// we still need to complete the current step if the Application is not yet Healthy or there are still pending Application changes
-		return isApplicationHealthy(app) && appStatus.Status == "Healthy"
-	}
-
-	return true
-}
-
 func isRollingSyncStrategy(appset *argov1alpha1.ApplicationSet) bool {
 	// It's only RollingSync if the type specifically sets it
 	return appset.Spec.Strategy != nil && appset.Spec.Strategy.Type == "RollingSync" && appset.Spec.Strategy.RollingSync != nil
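To make the gating rewrite above concrete, here is a hypothetical walk-through (illustration only, not repository code; step and app names are invented):

// Input:
//   appDependencyList: [["dev-a", "dev-b"], ["staging"], ["prod"]]
//   statuses:          dev-a = Healthy, dev-b = Progressing
//
// Step 0: dev-a and dev-b enter the map (appSyncMap[name] = true).
//         dev-b is not ProgressiveSyncHealthy, so syncNextWave = false and
//         the outer loop breaks before reaching step 1.
// Result: {"dev-a": true, "dev-b": true}; staging and prod never enter the
//         map, so updateApplicationSetApplicationStatusProgress will not
//         promote them from Waiting to Pending in this reconciliation.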
@@ -1087,29 +1098,21 @@ func progressiveSyncsRollingSyncStrategyEnabled(appset *argov1alpha1.Application
 	return isRollingSyncStrategy(appset) && len(appset.Spec.Strategy.RollingSync.Steps) > 0
 }
 
-func isProgressiveSyncDeletionOrderReversed(appset *argov1alpha1.ApplicationSet) bool {
-	// When progressive sync is enabled + deletionOrder is set to Reverse (case-insensitive)
-	return progressiveSyncsRollingSyncStrategyEnabled(appset) && strings.EqualFold(appset.Spec.Strategy.DeletionOrder, ReverseDeletionOrder)
-}
-
-func isApplicationHealthy(app argov1alpha1.Application) bool {
-	healthStatusString, syncStatusString, operationPhaseString := statusStrings(app)
-
-	if healthStatusString == "Healthy" && syncStatusString != "OutOfSync" && (operationPhaseString == "Succeeded" || operationPhaseString == "") {
-		return true
+func isApplicationWithError(app argov1alpha1.Application) bool {
+	for _, condition := range app.Status.Conditions {
+		if condition.Type == argov1alpha1.ApplicationConditionInvalidSpecError {
+			return true
+		}
+		if condition.Type == argov1alpha1.ApplicationConditionUnknownError {
+			return true
+		}
 	}
 	return false
 }
 
-func statusStrings(app argov1alpha1.Application) (string, string, string) {
-	healthStatusString := string(app.Status.Health.Status)
-	syncStatusString := string(app.Status.Sync.Status)
-	operationPhaseString := ""
-	if app.Status.OperationState != nil {
-		operationPhaseString = string(app.Status.OperationState.Phase)
-	}
-
-	return healthStatusString, syncStatusString, operationPhaseString
-}
+func isProgressiveSyncDeletionOrderReversed(appset *argov1alpha1.ApplicationSet) bool {
+	// When progressive sync is enabled + deletionOrder is set to Reverse (case-insensitive)
+	return progressiveSyncsRollingSyncStrategyEnabled(appset) && strings.EqualFold(appset.Spec.Strategy.DeletionOrder, ReverseDeletionOrder)
+}
 
 func getAppStep(appName string, appStepMap map[string]int) int {
@@ -1128,81 +1131,112 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
 	appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applications))
 
 	for _, app := range applications {
-		healthStatusString, syncStatusString, operationPhaseString := statusStrings(app)
-
-		idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, app.Name)
+		appHealthStatus := app.Status.Health.Status
+		appSyncStatus := app.Status.Sync.Status
 
 		currentAppStatus := argov1alpha1.ApplicationSetApplicationStatus{}
 
+		idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, app.Name)
 		if idx == -1 {
 			// AppStatus not found, set default status of "Waiting"
 			currentAppStatus = argov1alpha1.ApplicationSetApplicationStatus{
 				Application:        app.Name,
 				TargetRevisions:    app.Status.GetRevisions(),
 				LastTransitionTime: &now,
-				Message:            "No Application status found, defaulting status to Waiting.",
-				Status:             "Waiting",
+				Message:            "No Application status found, defaulting status to Waiting",
+				Status:             argov1alpha1.ProgressiveSyncWaiting,
 				Step:               strconv.Itoa(getAppStep(app.Name, appStepMap)),
 			}
 		} else {
 			// we have an existing AppStatus
 			currentAppStatus = applicationSet.Status.ApplicationStatus[idx]
-			if !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions()) {
-				currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
-			}
 		}
 
+		statusLogCtx := logCtx.WithFields(log.Fields{
+			"app.name":               currentAppStatus.Application,
+			"app.health":             appHealthStatus,
+			"app.sync":               appSyncStatus,
+			"status.status":          currentAppStatus.Status,
+			"status.message":         currentAppStatus.Message,
+			"status.step":            currentAppStatus.Step,
+			"status.targetRevisions": strings.Join(currentAppStatus.TargetRevisions, ","),
+		})
+
+		newAppStatus := currentAppStatus.DeepCopy()
+		newAppStatus.Step = strconv.Itoa(getAppStep(newAppStatus.Application, appStepMap))
+
 		if !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions()) {
-			currentAppStatus.TargetRevisions = app.Status.GetRevisions()
-			currentAppStatus.Status = "Waiting"
-			currentAppStatus.LastTransitionTime = &now
-			currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
+			// A new version is available in the application and we need to re-sync the application
+			newAppStatus.TargetRevisions = app.Status.GetRevisions()
+			newAppStatus.Message = "Application has pending changes, setting status to Waiting"
+			newAppStatus.Status = argov1alpha1.ProgressiveSyncWaiting
+			newAppStatus.LastTransitionTime = &now
 		}
 
-		appOutdated := false
-		if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
-			appOutdated = syncStatusString == "OutOfSync"
-		}
+		if newAppStatus.Status == argov1alpha1.ProgressiveSyncWaiting {
+			// App has changed to waiting because the TargetRevisions changed or it is a new selected app
+			// This does not mean we should always sync the app. The app may not be OutOfSync
+			// and may not require a sync if it does not have differences.
+			if appSyncStatus == argov1alpha1.SyncStatusCodeSynced {
+				if app.Status.Health.Status == health.HealthStatusHealthy {
+					newAppStatus.LastTransitionTime = &now
+					newAppStatus.Status = argov1alpha1.ProgressiveSyncHealthy
+					newAppStatus.Message = "Application resource has synced, updating status to Healthy"
+				} else {
+					newAppStatus.LastTransitionTime = &now
+					newAppStatus.Status = argov1alpha1.ProgressiveSyncProgressing
+					newAppStatus.Message = "Application resource has synced, updating status to Progressing"
+				}
+			}
+		} else {
+			// The target revision is the same, so we need to evaluate the current revision progress
+			if currentAppStatus.Status == argov1alpha1.ProgressiveSyncPending {
+				// No need to evaluate status health further if the application did not change since our last transition
+				if app.Status.ReconciledAt == nil || (newAppStatus.LastTransitionTime != nil && app.Status.ReconciledAt.After(newAppStatus.LastTransitionTime.Time)) {
+					// Validate that at least one sync was triggered after the pending transition time
+					if app.Status.OperationState != nil && app.Status.OperationState.StartedAt.After(currentAppStatus.LastTransitionTime.Time) {
+						statusLogCtx = statusLogCtx.WithField("app.operation", app.Status.OperationState.Phase)
+						newAppStatus.LastTransitionTime = &now
+						newAppStatus.Status = argov1alpha1.ProgressiveSyncProgressing
 
-		if appOutdated && currentAppStatus.Status != "Waiting" && currentAppStatus.Status != "Pending" {
-			logCtx.Infof("Application %v is outdated, updating its ApplicationSet status to Waiting", app.Name)
-			currentAppStatus.LastTransitionTime = &now
-			currentAppStatus.Status = "Waiting"
-			currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
-			currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
-		}
+						switch {
+						case app.Status.OperationState.Phase.Successful():
+							newAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing"
+						case app.Status.OperationState.Phase.Completed():
+							newAppStatus.Message = "Application resource completed a sync, updating status from Pending to Progressing"
+						default:
+							// If a sync fails or has errors, the Application should be configured with retry. It is not the appset's job to retry failed syncs
+							newAppStatus.Message = "Application resource became Progressing, updating status from Pending to Progressing"
+						}
+					} else if isApplicationWithError(app) {
+						// Validate if the application has errors preventing it to be reconciled and perform syncs
+						// If it does, we move it to progressing.
+						newAppStatus.LastTransitionTime = &now
+						newAppStatus.Status = argov1alpha1.ProgressiveSyncProgressing
+						newAppStatus.Message = "Application resource has error and cannot sync, updating status to Progressing"
+					}
+				}
+			}
 
-		if currentAppStatus.Status == "Pending" {
-			if !appOutdated && operationPhaseString == "Succeeded" {
-				logCtx.Infof("Application %v has completed a sync successfully, updating its ApplicationSet status to Progressing", app.Name)
-				currentAppStatus.LastTransitionTime = &now
-				currentAppStatus.Status = "Progressing"
-				currentAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing."
-				currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
-			} else if operationPhaseString == "Running" || healthStatusString == "Progressing" {
-				logCtx.Infof("Application %v has entered Progressing status, updating its ApplicationSet status to Progressing", app.Name)
-				currentAppStatus.LastTransitionTime = &now
-				currentAppStatus.Status = "Progressing"
-				currentAppStatus.Message = "Application resource became Progressing, updating status from Pending to Progressing."
-				currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
+			if currentAppStatus.Status == argov1alpha1.ProgressiveSyncProgressing {
+				// If the status has reached progressing, we know a sync has been triggered. No matter the result of that operation,
+				// we want the app to reach the Healthy state for the current revision.
+				if appHealthStatus == health.HealthStatusHealthy && appSyncStatus == argov1alpha1.SyncStatusCodeSynced {
+					newAppStatus.LastTransitionTime = &now
+					newAppStatus.Status = argov1alpha1.ProgressiveSyncHealthy
+					newAppStatus.Message = "Application resource became Healthy, updating status from Progressing to Healthy"
+				}
 			}
 		}
 
-		if currentAppStatus.Status == "Waiting" && isApplicationHealthy(app) {
-			logCtx.Infof("Application %v is already synced and healthy, updating its ApplicationSet status to Healthy", app.Name)
-			currentAppStatus.LastTransitionTime = &now
-			currentAppStatus.Status = healthStatusString
-			currentAppStatus.Message = "Application resource is already Healthy, updating status from Waiting to Healthy."
-			currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
-		}
+		if newAppStatus.LastTransitionTime == &now {
+			statusLogCtx.WithFields(log.Fields{
+				"new_status.status":          newAppStatus.Status,
+				"new_status.message":         newAppStatus.Message,
+				"new_status.step":            newAppStatus.Step,
+				"new_status.targetRevisions": strings.Join(newAppStatus.TargetRevisions, ","),
+			}).Info("Progressive sync application changed status")
+		}
 
-		if currentAppStatus.Status == "Progressing" && isApplicationHealthy(app) {
-			logCtx.Infof("Application %v has completed Progressing status, updating its ApplicationSet status to Healthy", app.Name)
-			currentAppStatus.LastTransitionTime = &now
-			currentAppStatus.Status = healthStatusString
-			currentAppStatus.Message = "Application resource became Healthy, updating status from Progressing to Healthy."
-			currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
-		}
-
-		appStatuses = append(appStatuses, currentAppStatus)
+		appStatuses = append(appStatuses, *newAppStatus)
 	}
 
 	err := r.setAppSetApplicationStatus(ctx, logCtx, applicationSet, appStatuses)
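For orientation, a hedged summary (a paraphrase, not source text) of the per-application state machine that the rewritten status functions implement:

// Waiting     - app newly selected, or its TargetRevisions changed
// Pending     - promoted by updateApplicationSetApplicationStatusProgress once
//               the app's wave may sync and the step's maxUpdate budget allows
// Progressing - a sync operation started after the Pending transition, or the
//               app has an InvalidSpecError/UnknownError condition blocking sync
// Healthy     - the app reports Synced and Healthy for the current revision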
@@ -1214,7 +1248,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
 }
 
 // check Applications that are in Waiting status and promote them to Pending if needed
-func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
+func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appsToSync map[string]bool, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
 	now := metav1.Now()
 
 	appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applicationSet.Status.ApplicationStatus))
@@ -1230,12 +1264,20 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
 	for _, appStatus := range applicationSet.Status.ApplicationStatus {
 		totalCountMap[appStepMap[appStatus.Application]]++
 
-		if appStatus.Status == "Pending" || appStatus.Status == "Progressing" {
+		if appStatus.Status == argov1alpha1.ProgressiveSyncPending || appStatus.Status == argov1alpha1.ProgressiveSyncProgressing {
 			updateCountMap[appStepMap[appStatus.Application]]++
 		}
 	}
 
 	for _, appStatus := range applicationSet.Status.ApplicationStatus {
+		statusLogCtx := logCtx.WithFields(log.Fields{
+			"app.name":               appStatus.Application,
+			"status.status":          appStatus.Status,
+			"status.message":         appStatus.Message,
+			"status.step":            appStatus.Step,
+			"status.targetRevisions": strings.Join(appStatus.TargetRevisions, ","),
+		})
+
 		maxUpdateAllowed := true
 		maxUpdate := &intstr.IntOrString{}
 		if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
@@ -1246,7 +1288,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
 		if maxUpdate != nil {
 			maxUpdateVal, err := intstr.GetScaledValueFromIntOrPercent(maxUpdate, totalCountMap[appStepMap[appStatus.Application]], false)
 			if err != nil {
-				logCtx.Warnf("AppSet '%v' has an invalid maxUpdate value '%+v', ignoring maxUpdate logic for this step: %v", applicationSet.Name, maxUpdate, err)
+				statusLogCtx.Warnf("AppSet has an invalid maxUpdate value '%+v', ignoring maxUpdate logic for this step: %v", maxUpdate, err)
 			}
 
 			// ensure that percentage values greater than 0% always result in at least 1 Application being selected
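A minimal, self-contained sketch (not repository code) of how a percentage-based maxUpdate resolves to an application count; the truncating rounding is the reason for the at-least-1 adjustment the comment above describes. The final bump to 1 is a paraphrase of the controller's behaviour, not a copy of its code:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUpdate := intstr.FromString("30%")
	total := 4 // hypothetical number of Applications in the step
	// roundUp=false truncates: 30% of 4 = 1.2 -> 1; "10%" of 4 would yield 0.
	val, err := intstr.GetScaledValueFromIntOrPercent(&maxUpdate, total, false)
	if err != nil {
		panic(err)
	}
	// Bump a 0 result up to 1 for any percentage above 0%, so a small step
	// is never blocked outright.
	if val == 0 && maxUpdate.Type == intstr.String && maxUpdate.StrVal != "0%" {
		val = 1
	}
	fmt.Println(val) // 1
}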
@@ -1256,16 +1298,21 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
 
 			if updateCountMap[appStepMap[appStatus.Application]] >= maxUpdateVal {
 				maxUpdateAllowed = false
-				logCtx.Infof("Application %v is not allowed to update yet, %v/%v Applications already updating in step %v in AppSet %v", appStatus.Application, updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, getAppStep(appStatus.Application, appStepMap), applicationSet.Name)
+				statusLogCtx.Infof("Application is not allowed to update yet, %v/%v Applications already updating in step %v", updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, getAppStep(appStatus.Application, appStepMap))
 			}
 		}
 
-		if appStatus.Status == "Waiting" && appSyncMap[appStatus.Application] && maxUpdateAllowed {
-			logCtx.Infof("Application %v moved to Pending status, watching for the Application to start Progressing", appStatus.Application)
+		if appStatus.Status == argov1alpha1.ProgressiveSyncWaiting && appsToSync[appStatus.Application] && maxUpdateAllowed {
 			appStatus.LastTransitionTime = &now
-			appStatus.Status = "Pending"
-			appStatus.Message = "Application moved to Pending status, watching for the Application resource to start Progressing."
-			appStatus.Step = strconv.Itoa(getAppStep(appStatus.Application, appStepMap))
+			appStatus.Status = argov1alpha1.ProgressiveSyncPending
+			appStatus.Message = "Application moved to Pending status, watching for the Application resource to start Progressing"
+
+			statusLogCtx.WithFields(log.Fields{
+				"new_status.status":          appStatus.Status,
+				"new_status.message":         appStatus.Message,
+				"new_status.step":            appStatus.Step,
+				"new_status.targetRevisions": strings.Join(appStatus.TargetRevisions, ","),
+			}).Info("Progressive sync application changed status")
 
 			updateCountMap[appStepMap[appStatus.Application]]++
 		}
@@ -1290,9 +1337,9 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusConditio
 	completedWaves := map[string]bool{}
 	for _, appStatus := range applicationSet.Status.ApplicationStatus {
 		if v, ok := completedWaves[appStatus.Step]; !ok {
-			completedWaves[appStatus.Step] = appStatus.Status == "Healthy"
+			completedWaves[appStatus.Step] = appStatus.Status == argov1alpha1.ProgressiveSyncHealthy
 		} else {
-			completedWaves[appStatus.Step] = v && appStatus.Status == "Healthy"
+			completedWaves[appStatus.Step] = v && appStatus.Status == argov1alpha1.ProgressiveSyncHealthy
 		}
 	}
 
@@ -1398,7 +1445,13 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
 	sort.Slice(statuses, func(i, j int) bool {
 		return statuses[i].Name < statuses[j].Name
 	})
+	resourcesCount := int64(len(statuses))
+	if r.MaxResourcesStatusCount > 0 && len(statuses) > r.MaxResourcesStatusCount {
+		logCtx.Warnf("Truncating ApplicationSet %s resource status from %d to max allowed %d entries", appset.Name, len(statuses), r.MaxResourcesStatusCount)
+		statuses = statuses[:r.MaxResourcesStatusCount]
+	}
 	appset.Status.Resources = statuses
+	appset.Status.ResourcesCount = resourcesCount
 	// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
 	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
 		namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
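The status update above uses client-go's standard optimistic-concurrency helper. A hedged sketch of the pattern as a fragment (identifiers such as r.Client, namespacedName, and resourcesCount follow the diff above and are assumed to be in scope):

// Re-fetch the ApplicationSet and re-apply the status change on every attempt,
// so a stale resourceVersion costs one retry instead of a failed update.
// retry.DefaultRetry matches the comment above: 5 steps, 10ms duration,
// factor 1.0, jitter 0.1.
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
	updated := &argov1alpha1.ApplicationSet{}
	if err := r.Client.Get(ctx, namespacedName, updated); err != nil {
		return err
	}
	updated.Status.Resources = appset.Status.Resources
	updated.Status.ResourcesCount = resourcesCount
	return r.Client.Status().Update(ctx, updated)
})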
@@ -1411,6 +1464,7 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
 		}
 
 		updatedAppset.Status.Resources = appset.Status.Resources
+		updatedAppset.Status.ResourcesCount = resourcesCount
 
 		// Update the newly fetched object with new status resources
 		err := r.Client.Status().Update(ctx, updatedAppset)
@@ -1507,30 +1561,31 @@ func (r *ApplicationSetReconciler) setAppSetApplicationStatus(ctx context.Contex
 	return nil
 }
 
-func (r *ApplicationSetReconciler) syncValidApplications(logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appMap map[string]argov1alpha1.Application, validApps []argov1alpha1.Application) []argov1alpha1.Application {
+func (r *ApplicationSetReconciler) syncDesiredApplications(logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appsToSync map[string]bool, desiredApplications []argov1alpha1.Application) []argov1alpha1.Application {
 	rolloutApps := []argov1alpha1.Application{}
-	for i := range validApps {
+	for i := range desiredApplications {
 		pruneEnabled := false
 
 		// ensure that Applications generated with RollingSync do not have an automated sync policy, since the AppSet controller will handle triggering the sync operation instead
-		if validApps[i].Spec.SyncPolicy != nil && validApps[i].Spec.SyncPolicy.IsAutomatedSyncEnabled() {
-			pruneEnabled = validApps[i].Spec.SyncPolicy.Automated.Prune
-			validApps[i].Spec.SyncPolicy.Automated = nil
+		if desiredApplications[i].Spec.SyncPolicy != nil && desiredApplications[i].Spec.SyncPolicy.IsAutomatedSyncEnabled() {
+			pruneEnabled = desiredApplications[i].Spec.SyncPolicy.Automated.Prune
+			desiredApplications[i].Spec.SyncPolicy.Automated.Enabled = ptr.To(false)
 		}
 
 		appSetStatusPending := false
-		idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, validApps[i].Name)
-		if idx > -1 && applicationSet.Status.ApplicationStatus[idx].Status == "Pending" {
+		idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, desiredApplications[i].Name)
+		if idx > -1 && applicationSet.Status.ApplicationStatus[idx].Status == argov1alpha1.ProgressiveSyncPending {
 			// only trigger a sync for Applications that are in Pending status, since this is governed by maxUpdate
 			appSetStatusPending = true
 		}
 
-		// check appSyncMap to determine which Applications are ready to be updated and which should be skipped
-		if appSyncMap[validApps[i].Name] && appMap[validApps[i].Name].Status.Sync.Status == "OutOfSync" && appSetStatusPending {
-			logCtx.Infof("triggering sync for application: %v, prune enabled: %v", validApps[i].Name, pruneEnabled)
-			validApps[i] = syncApplication(validApps[i], pruneEnabled)
+		// check appsToSync to determine which Applications are ready to be updated and which should be skipped
+		if appsToSync[desiredApplications[i].Name] && appSetStatusPending {
+			logCtx.Infof("triggering sync for application: %v, prune enabled: %v", desiredApplications[i].Name, pruneEnabled)
+			desiredApplications[i] = syncApplication(desiredApplications[i], pruneEnabled)
 		}
-		rolloutApps = append(rolloutApps, validApps[i])
+
+		rolloutApps = append(rolloutApps, desiredApplications[i])
 	}
 	return rolloutApps
 }
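One behavioural nuance worth calling out in the hunk above: the old code discarded the automated sync policy outright, while the new code keeps the policy object and only flips its Enabled flag, so settings like Prune remain visible on the generated Application. A hedged before/after sketch (the app variable is hypothetical):

// Before: the whole automated policy was dropped from the generated Application.
app.Spec.SyncPolicy.Automated = nil

// After: automation is switched off via k8s.io/utils/ptr, but the policy
// struct and its fields (e.g. Prune) are preserved.
app.Spec.SyncPolicy.Automated.Enabled = ptr.To(false)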
File diff suppressed because it is too large
@@ -86,28 +86,28 @@ func TestGenerateApplications(t *testing.T) {
 		}
 
 		t.Run(cc.name, func(t *testing.T) {
-			generatorMock := genmock.Generator{}
+			generatorMock := &genmock.Generator{}
 			generator := v1alpha1.ApplicationSetGenerator{
 				List: &v1alpha1.ListGenerator{},
 			}
 
-			generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
+			generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
 				Return(cc.params, cc.generateParamsError)
 
-			generatorMock.On("GetTemplate", &generator).
+			generatorMock.EXPECT().GetTemplate(&generator).
 				Return(&v1alpha1.ApplicationSetTemplate{})
 
-			rendererMock := rendmock.Renderer{}
+			rendererMock := &rendmock.Renderer{}
 
 			var expectedApps []v1alpha1.Application
 
 			if cc.generateParamsError == nil {
 				for _, p := range cc.params {
 					if cc.rendererError != nil {
-						rendererMock.On("RenderTemplateParams", GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
+						rendererMock.EXPECT().RenderTemplateParams(GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
 							Return(nil, cc.rendererError)
 					} else {
-						rendererMock.On("RenderTemplateParams", GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
+						rendererMock.EXPECT().RenderTemplateParams(GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
 							Return(&app, nil)
 						expectedApps = append(expectedApps, app)
 					}
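The test changes from here on migrate from testify's string-based .On(...) API to mockery's generated expecter. A hedged sketch of the difference, assuming the test file's existing imports (mocks, mock, testing); illustrative only, since the expectations are never exercised here and a real test would also call the mocked methods:

func TestExpecterStyleSketch(t *testing.T) {
	dirs := []string{"app1", "app2"}

	// Old style: the method name is a string, so a typo or wrong argument
	// count only fails at runtime, and expectations are asserted by hand.
	legacy := mocks.Repos{}
	legacy.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(dirs, nil)
	defer legacy.AssertExpectations(t)

	// New style: mocks.NewRepos(t) registers AssertExpectations via t.Cleanup,
	// which is why the explicit AssertExpectations calls disappear throughout
	// this diff, and EXPECT() returns generated, compile-time-checked methods.
	repos := mocks.NewRepos(t)
	repos.EXPECT().
		GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
		Return(dirs, nil)
}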
@@ -115,9 +115,9 @@ func TestGenerateApplications(t *testing.T) {
 			}
 
 			generators := map[string]generators.Generator{
-				"List": &generatorMock,
+				"List": generatorMock,
 			}
-			renderer := &rendererMock
+			renderer := rendererMock
 
 			got, reason, err := GenerateApplications(log.NewEntry(log.StandardLogger()), v1alpha1.ApplicationSet{
 				ObjectMeta: metav1.ObjectMeta{
@@ -200,26 +200,26 @@ func TestMergeTemplateApplications(t *testing.T) {
 		cc := c
 
 		t.Run(cc.name, func(t *testing.T) {
-			generatorMock := genmock.Generator{}
+			generatorMock := &genmock.Generator{}
 			generator := v1alpha1.ApplicationSetGenerator{
 				List: &v1alpha1.ListGenerator{},
 			}
 
-			generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
+			generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
 				Return(cc.params, nil)
 
-			generatorMock.On("GetTemplate", &generator).
+			generatorMock.EXPECT().GetTemplate(&generator).
 				Return(&cc.overrideTemplate)
 
-			rendererMock := rendmock.Renderer{}
+			rendererMock := &rendmock.Renderer{}
 
-			rendererMock.On("RenderTemplateParams", GetTempApplication(cc.expectedMerged), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), cc.params[0], false, []string(nil)).
+			rendererMock.EXPECT().RenderTemplateParams(GetTempApplication(cc.expectedMerged), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), cc.params[0], false, []string(nil)).
 				Return(&cc.expectedApps[0], nil)
 
 			generators := map[string]generators.Generator{
-				"List": &generatorMock,
+				"List": generatorMock,
 			}
-			renderer := &rendererMock
+			renderer := rendererMock
 
 			got, _, _ := GenerateApplications(log.NewEntry(log.StandardLogger()), v1alpha1.ApplicationSet{
 				ObjectMeta: metav1.ObjectMeta{
@@ -312,19 +312,19 @@ func TestGenerateAppsUsingPullRequestGenerator(t *testing.T) {
 		},
 	} {
 		t.Run(cases.name, func(t *testing.T) {
-			generatorMock := genmock.Generator{}
+			generatorMock := &genmock.Generator{}
 			generator := v1alpha1.ApplicationSetGenerator{
 				PullRequest: &v1alpha1.PullRequestGenerator{},
 			}
 
-			generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
+			generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
 				Return(cases.params, nil)
 
-			generatorMock.On("GetTemplate", &generator).
-				Return(&cases.template, nil)
+			generatorMock.EXPECT().GetTemplate(&generator).
+				Return(&cases.template)
 
 			generators := map[string]generators.Generator{
-				"PullRequest": &generatorMock,
+				"PullRequest": generatorMock,
 			}
 			renderer := &utils.Render{}
@@ -223,7 +223,7 @@ func TestTransForm(t *testing.T) {
 	for _, testCase := range testCases {
 		t.Run(testCase.name, func(t *testing.T) {
 			testGenerators := map[string]Generator{
-				"Clusters": getMockClusterGenerator(),
+				"Clusters": getMockClusterGenerator(t.Context()),
 			}
 
 			applicationSetInfo := argov1alpha1.ApplicationSet{
@@ -260,7 +260,7 @@ func emptyTemplate() argov1alpha1.ApplicationSetTemplate {
 	}
 }
 
-func getMockClusterGenerator() Generator {
+func getMockClusterGenerator(ctx context.Context) Generator {
 	clusters := []crtclient.Object{
 		&corev1.Secret{
 			TypeMeta: metav1.TypeMeta{
@@ -342,19 +342,19 @@ func getMockClusterGenerator() Generator {
 	appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
 
 	fakeClient := fake.NewClientBuilder().WithObjects(clusters...).Build()
-	return NewClusterGenerator(context.Background(), fakeClient, appClientset, "namespace")
+	return NewClusterGenerator(ctx, fakeClient, appClientset, "namespace")
 }
 
 func getMockGitGenerator() Generator {
-	argoCDServiceMock := mocks.Repos{}
-	argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
-	gitGenerator := NewGitGenerator(&argoCDServiceMock, "namespace")
+	argoCDServiceMock := &mocks.Repos{}
+	argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
+	gitGenerator := NewGitGenerator(argoCDServiceMock, "namespace")
 	return gitGenerator
 }
 
 func TestGetRelevantGenerators(t *testing.T) {
 	testGenerators := map[string]Generator{
-		"Clusters": getMockClusterGenerator(),
+		"Clusters": getMockClusterGenerator(t.Context()),
 		"Git":      getMockGitGenerator(),
 	}
 
@@ -320,11 +320,11 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
 		t.Run(testCaseCopy.name, func(t *testing.T) {
 			t.Parallel()
 
-			argoCDServiceMock := mocks.Repos{}
+			argoCDServiceMock := mocks.NewRepos(t)
 
-			argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
+			argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
 
-			gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
+			gitGenerator := NewGitGenerator(argoCDServiceMock, "")
 			applicationSetInfo := v1alpha1.ApplicationSet{
 				ObjectMeta: metav1.ObjectMeta{
 					Name: "set",
@@ -357,8 +357,6 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
 				require.NoError(t, err)
 				assert.Equal(t, testCaseCopy.expected, got)
 			}
-
-			argoCDServiceMock.AssertExpectations(t)
 		})
 	}
 }
@@ -623,11 +621,11 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
 		t.Run(testCaseCopy.name, func(t *testing.T) {
 			t.Parallel()
 
-			argoCDServiceMock := mocks.Repos{}
+			argoCDServiceMock := mocks.NewRepos(t)
 
-			argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
+			argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
 
-			gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
+			gitGenerator := NewGitGenerator(argoCDServiceMock, "")
 			applicationSetInfo := v1alpha1.ApplicationSet{
 				ObjectMeta: metav1.ObjectMeta{
 					Name: "set",
@@ -660,8 +658,6 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
 				require.NoError(t, err)
 				assert.Equal(t, testCaseCopy.expected, got)
 			}
-
-			argoCDServiceMock.AssertExpectations(t)
 		})
 	}
 }
@@ -1000,11 +996,11 @@ cluster:
 		t.Run(testCaseCopy.name, func(t *testing.T) {
 			t.Parallel()
 
-			argoCDServiceMock := mocks.Repos{}
-			argoCDServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+			argoCDServiceMock := mocks.NewRepos(t)
+			argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
 				Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
 
-			gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
+			gitGenerator := NewGitGenerator(argoCDServiceMock, "")
 			applicationSetInfo := v1alpha1.ApplicationSet{
 				ObjectMeta: metav1.ObjectMeta{
 					Name: "set",
@@ -1036,8 +1032,6 @@ cluster:
 				require.NoError(t, err)
 				assert.ElementsMatch(t, testCaseCopy.expected, got)
 			}
-
-			argoCDServiceMock.AssertExpectations(t)
 		})
 	}
 }
@@ -1331,7 +1325,7 @@ env: testing
 		t.Run(testCaseCopy.name, func(t *testing.T) {
 			t.Parallel()
 
-			argoCDServiceMock := mocks.Repos{}
+			argoCDServiceMock := mocks.NewRepos(t)
 
 			// IMPORTANT: we try to get the files from the repo server that matches the patterns
 			// If we find those files also satisfy the exclude pattern, we remove them from map
@@ -1339,18 +1333,16 @@ env: testing
 			// With the below mock setup, we make sure that if the GetFiles() function gets called
 			// for an include or exclude pattern, it should always return the includeFiles or excludeFiles.
 			for _, pattern := range testCaseCopy.excludePattern {
-				argoCDServiceMock.
-					On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
+				argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
 					Return(testCaseCopy.excludeFiles, testCaseCopy.repoPathsError)
 			}
 
 			for _, pattern := range testCaseCopy.includePattern {
-				argoCDServiceMock.
-					On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
+				argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
 					Return(testCaseCopy.includeFiles, testCaseCopy.repoPathsError)
 			}
 
-			gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
+			gitGenerator := NewGitGenerator(argoCDServiceMock, "")
 			applicationSetInfo := v1alpha1.ApplicationSet{
 				ObjectMeta: metav1.ObjectMeta{
 					Name: "set",
@@ -1382,8 +1374,6 @@ env: testing
 				require.NoError(t, err)
 				assert.ElementsMatch(t, testCaseCopy.expected, got)
 			}
-
-			argoCDServiceMock.AssertExpectations(t)
 		})
 	}
 }
@@ -1672,7 +1662,7 @@ env: testing
 		t.Run(testCaseCopy.name, func(t *testing.T) {
 			t.Parallel()
 
-			argoCDServiceMock := mocks.Repos{}
+			argoCDServiceMock := mocks.NewRepos(t)
 
 			// IMPORTANT: we try to get the files from the repo server that matches the patterns
 			// If we find those files also satisfy the exclude pattern, we remove them from map
@@ -1680,18 +1670,16 @@ env: testing
 			// With the below mock setup, we make sure that if the GetFiles() function gets called
 			// for an include or exclude pattern, it should always return the includeFiles or excludeFiles.
 			for _, pattern := range testCaseCopy.excludePattern {
-				argoCDServiceMock.
-					On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
+				argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
 					Return(testCaseCopy.excludeFiles, testCaseCopy.repoPathsError)
 			}
 
 			for _, pattern := range testCaseCopy.includePattern {
-				argoCDServiceMock.
-					On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
+				argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
 					Return(testCaseCopy.includeFiles, testCaseCopy.repoPathsError)
 			}
 
-			gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
+			gitGenerator := NewGitGenerator(argoCDServiceMock, "")
 			applicationSetInfo := v1alpha1.ApplicationSet{
 				ObjectMeta: metav1.ObjectMeta{
 					Name: "set",
@@ -1723,8 +1711,6 @@ env: testing
 				require.NoError(t, err)
 				assert.ElementsMatch(t, testCaseCopy.expected, got)
 			}
-
-			argoCDServiceMock.AssertExpectations(t)
 		})
 	}
 }
@@ -1908,25 +1894,23 @@ func TestGitGeneratorParamsFromFilesWithExcludeOptionGoTemplate(t *testing.T) {
 		t.Run(testCaseCopy.name, func(t *testing.T) {
 			t.Parallel()
 
-			argoCDServiceMock := mocks.Repos{}
+			argoCDServiceMock := mocks.NewRepos(t)
 			// IMPORTANT: we try to get the files from the repo server that matches the patterns
 			// If we find those files also satisfy the exclude pattern, we remove them from map
 			// This is generally done by the g.repos.GetFiles() function.
 			// With the below mock setup, we make sure that if the GetFiles() function gets called
 			// for an include or exclude pattern, it should always return the includeFiles or excludeFiles.
 			for _, pattern := range testCaseCopy.excludePattern {
-				argoCDServiceMock.
-					On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
+				argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
 					Return(testCaseCopy.excludeFiles, testCaseCopy.repoPathsError)
 			}
 
 			for _, pattern := range testCaseCopy.includePattern {
-				argoCDServiceMock.
-					On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
+				argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
 					Return(testCaseCopy.includeFiles, testCaseCopy.repoPathsError)
 			}
 
-			gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
+			gitGenerator := NewGitGenerator(argoCDServiceMock, "")
 			applicationSetInfo := v1alpha1.ApplicationSet{
 				ObjectMeta: metav1.ObjectMeta{
 					Name: "set",
@@ -1958,8 +1942,6 @@ func TestGitGeneratorParamsFromFilesWithExcludeOptionGoTemplate(t *testing.T) {
 				require.NoError(t, err)
 				assert.ElementsMatch(t, testCaseCopy.expected, got)
 			}
-
-			argoCDServiceMock.AssertExpectations(t)
 		})
 	}
 }
@@ -2279,11 +2261,11 @@ cluster:
 		t.Run(testCaseCopy.name, func(t *testing.T) {
 			t.Parallel()
 
-			argoCDServiceMock := mocks.Repos{}
-			argoCDServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+			argoCDServiceMock := mocks.NewRepos(t)
+			argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
 				Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
 
-			gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
+			gitGenerator := NewGitGenerator(argoCDServiceMock, "")
 			applicationSetInfo := v1alpha1.ApplicationSet{
 				ObjectMeta: metav1.ObjectMeta{
 					Name: "set",
@@ -2315,8 +2297,6 @@ cluster:
 				require.NoError(t, err)
 				assert.ElementsMatch(t, testCaseCopy.expected, got)
 			}
-
-			argoCDServiceMock.AssertExpectations(t)
 		})
 	}
 }
|
||||
},
|
||||
}
|
||||
for _, testCase := range cases {
|
||||
argoCDServiceMock := mocks.Repos{}
|
||||
argoCDServiceMock := mocks.NewRepos(t)
|
||||
|
||||
if testCase.callGetDirectories {
|
||||
var project any
|
||||
@@ -2492,9 +2472,9 @@ func TestGitGenerator_GenerateParams(t *testing.T) {
|
||||
project = mock.Anything
|
||||
}
|
||||
|
||||
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, project, mock.Anything, mock.Anything).Return(testCase.repoApps, testCase.repoPathsError)
|
||||
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, project, mock.Anything, mock.Anything).Return(testCase.repoApps, testCase.repoPathsError)
|
||||
}
|
||||
gitGenerator := NewGitGenerator(&argoCDServiceMock, "argocd")
|
||||
gitGenerator := NewGitGenerator(argoCDServiceMock, "argocd")
|
||||
|
||||
scheme := runtime.NewScheme()
|
||||
err := v1alpha1.AddToScheme(scheme)
|
||||
@@ -2510,7 +2490,5 @@ func TestGitGenerator_GenerateParams(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, testCase.expected, got)
|
||||
}
|
||||
|
||||
argoCDServiceMock.AssertExpectations(t)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -12,12 +12,12 @@ import (
 	"sigs.k8s.io/controller-runtime/pkg/client"
 	"sigs.k8s.io/controller-runtime/pkg/client/fake"
 
-	"github.com/argoproj/argo-cd/v3/applicationset/services/mocks"
-
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/mock"
 	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
 
+	generatorsMock "github.com/argoproj/argo-cd/v3/applicationset/generators/mocks"
+	servicesMocks "github.com/argoproj/argo-cd/v3/applicationset/services/mocks"
 	"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
 )
@@ -136,7 +136,7 @@ func TestMatrixGenerate(t *testing.T) {
 		testCaseCopy := testCase // Since tests may run in parallel
 
 		t.Run(testCaseCopy.name, func(t *testing.T) {
-			genMock := &generatorMock{}
+			genMock := &generatorsMock.Generator{}
 			appSet := &v1alpha1.ApplicationSet{
 				ObjectMeta: metav1.ObjectMeta{
 					Name: "set",
@@ -149,7 +149,7 @@ func TestMatrixGenerate(t *testing.T) {
 				Git:  g.Git,
 				List: g.List,
 			}
-			genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
+			genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
 				{
 					"path":          "app1",
 					"path.basename": "app1",
@@ -162,7 +162,7 @@ func TestMatrixGenerate(t *testing.T) {
 				},
 			}, nil)
 
-			genMock.On("GetTemplate", &gitGeneratorSpec).
+			genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
 				Return(&v1alpha1.ApplicationSetTemplate{})
 		}
@@ -343,7 +343,7 @@ func TestMatrixGenerateGoTemplate(t *testing.T) {
|
||||
testCaseCopy := testCase // Since tests may run in parallel
|
||||
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
genMock := &generatorMock{}
|
||||
genMock := &generatorsMock.Generator{}
|
||||
appSet := &v1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "set",
|
||||
@@ -358,7 +358,7 @@ func TestMatrixGenerateGoTemplate(t *testing.T) {
|
||||
Git: g.Git,
|
||||
List: g.List,
|
||||
}
|
||||
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
|
||||
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
|
||||
{
|
||||
"path": map[string]string{
|
||||
"path": "app1",
|
||||
@@ -375,7 +375,7 @@ func TestMatrixGenerateGoTemplate(t *testing.T) {
|
||||
},
|
||||
}, nil)
|
||||
|
||||
genMock.On("GetTemplate", &gitGeneratorSpec).
|
||||
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
|
||||
Return(&v1alpha1.ApplicationSetTemplate{})
|
||||
}
|
||||
|
||||
@@ -507,7 +507,7 @@ func TestMatrixGetRequeueAfter(t *testing.T) {
|
||||
testCaseCopy := testCase // Since tests may run in parallel
|
||||
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
mock := &generatorMock{}
|
||||
mock := &generatorsMock.Generator{}
|
||||
|
||||
for _, g := range testCaseCopy.baseGenerators {
|
||||
gitGeneratorSpec := v1alpha1.ApplicationSetGenerator{
|
||||
@@ -517,7 +517,7 @@ func TestMatrixGetRequeueAfter(t *testing.T) {
|
||||
SCMProvider: g.SCMProvider,
|
||||
ClusterDecisionResource: g.ClusterDecisionResource,
|
||||
}
|
||||
mock.On("GetRequeueAfter", &gitGeneratorSpec).Return(testCaseCopy.gitGetRequeueAfter, nil)
|
||||
mock.EXPECT().GetRequeueAfter(&gitGeneratorSpec).Return(testCaseCopy.gitGetRequeueAfter)
|
||||
}
|
||||
|
||||
matrixGenerator := NewMatrixGenerator(
|
||||
@@ -634,7 +634,7 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
|
||||
testCaseCopy := testCase // Since tests may run in parallel
|
||||
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
genMock := &generatorMock{}
|
||||
genMock := &generatorsMock.Generator{}
|
||||
appSet := &v1alpha1.ApplicationSet{}
|
||||
|
||||
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
|
||||
@@ -650,7 +650,7 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
|
||||
Git: g.Git,
|
||||
Clusters: g.Clusters,
|
||||
}
|
||||
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{
|
||||
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
|
||||
{
|
||||
"path": "examples/git-generator-files-discovery/cluster-config/dev/config.json",
|
||||
"path.basename": "dev",
|
||||
@@ -662,7 +662,7 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
|
||||
"path.basenameNormalized": "prod",
|
||||
},
|
||||
}, nil)
|
||||
genMock.On("GetTemplate", &gitGeneratorSpec).
|
||||
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
|
||||
Return(&v1alpha1.ApplicationSetTemplate{})
|
||||
}
|
||||
matrixGenerator := NewMatrixGenerator(
|
||||
@@ -813,7 +813,7 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
|
||||
testCaseCopy := testCase // Since tests may run in parallel
|
||||
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
genMock := &generatorMock{}
|
||||
genMock := &generatorsMock.Generator{}
|
||||
appSet := &v1alpha1.ApplicationSet{
|
||||
Spec: v1alpha1.ApplicationSetSpec{
|
||||
GoTemplate: true,
|
||||
@@ -833,7 +833,7 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
|
||||
Git: g.Git,
|
||||
Clusters: g.Clusters,
|
||||
}
|
||||
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{
|
||||
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
|
||||
{
|
||||
"path": map[string]string{
|
||||
"path": "examples/git-generator-files-discovery/cluster-config/dev/config.json",
|
||||
@@ -849,7 +849,7 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
|
||||
},
|
||||
},
|
||||
}, nil)
|
||||
genMock.On("GetTemplate", &gitGeneratorSpec).
|
||||
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
|
||||
Return(&v1alpha1.ApplicationSetTemplate{})
|
||||
}
|
||||
matrixGenerator := NewMatrixGenerator(
|
||||
@@ -969,7 +969,7 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
|
||||
testCaseCopy := testCase // Since tests may run in parallel
|
||||
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
genMock := &generatorMock{}
|
||||
genMock := &generatorsMock.Generator{}
|
||||
appSet := &v1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "set",
|
||||
@@ -984,7 +984,7 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
|
||||
Git: g.Git,
|
||||
List: g.List,
|
||||
}
|
||||
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{{
|
||||
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{{
|
||||
"foo": map[string]any{
|
||||
"bar": []any{
|
||||
map[string]any{
|
||||
@@ -1009,7 +1009,7 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
|
||||
},
|
||||
},
|
||||
}}, nil)
|
||||
genMock.On("GetTemplate", &gitGeneratorSpec).
|
||||
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
|
||||
Return(&v1alpha1.ApplicationSetTemplate{})
|
||||
}
|
||||
|
||||
@@ -1037,28 +1037,6 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
type generatorMock struct {
|
||||
mock.Mock
|
||||
}
|
||||
|
||||
func (g *generatorMock) GetTemplate(appSetGenerator *v1alpha1.ApplicationSetGenerator) *v1alpha1.ApplicationSetTemplate {
|
||||
args := g.Called(appSetGenerator)
|
||||
|
||||
return args.Get(0).(*v1alpha1.ApplicationSetTemplate)
|
||||
}
|
||||
|
||||
func (g *generatorMock) GenerateParams(appSetGenerator *v1alpha1.ApplicationSetGenerator, appSet *v1alpha1.ApplicationSet, _ client.Client) ([]map[string]any, error) {
|
||||
args := g.Called(appSetGenerator, appSet)
|
||||
|
||||
return args.Get(0).([]map[string]any), args.Error(1)
|
||||
}
|
||||
|
||||
func (g *generatorMock) GetRequeueAfter(appSetGenerator *v1alpha1.ApplicationSetGenerator) time.Duration {
|
||||
args := g.Called(appSetGenerator)
|
||||
|
||||
return args.Get(0).(time.Duration)
|
||||
}
|
||||
|
||||
func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
|
||||
// Given a matrix generator over a list generator and a git files generator, the nested git files generator should
|
||||
// be treated as a files generator, and it should produce parameters.
|
||||
@@ -1072,11 +1050,11 @@ func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
|
||||
// Now instead of checking for nil, we check whether the field is a non-empty slice. This test prevents a regression
|
||||
// of that bug.
|
||||
|
||||
listGeneratorMock := &generatorMock{}
|
||||
listGeneratorMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).Return([]map[string]any{
|
||||
listGeneratorMock := &generatorsMock.Generator{}
|
||||
listGeneratorMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).Return([]map[string]any{
|
||||
{"some": "value"},
|
||||
}, nil)
|
||||
listGeneratorMock.On("GetTemplate", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator")).Return(&v1alpha1.ApplicationSetTemplate{})
|
||||
listGeneratorMock.EXPECT().GetTemplate(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator")).Return(&v1alpha1.ApplicationSetTemplate{})
|
||||
|
||||
gitGeneratorSpec := &v1alpha1.GitGenerator{
|
||||
RepoURL: "https://git.example.com",
|
||||
@@ -1085,10 +1063,10 @@ func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
repoServiceMock := &mocks.Repos{}
|
||||
repoServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(map[string][]byte{
|
||||
repoServiceMock := &servicesMocks.Repos{}
|
||||
repoServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(map[string][]byte{
|
||||
"some/path.json": []byte("test: content"),
|
||||
}, nil)
|
||||
}, nil).Maybe()
|
||||
gitGenerator := NewGitGenerator(repoServiceMock, "")
|
||||
|
||||
matrixGenerator := NewMatrixGenerator(map[string]Generator{
|
||||
|
||||
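Note: the hand-rolled generatorMock removed above called g.Called(appSetGenerator, appSet) and silently ignored its client.Client argument, so its expectations listed only two arguments. The generated generatorsMock.Generator records all three, which is why the migrated expectations add mock.Anything for the client. A short sketch, assuming the generated mock:

genMock := &generatorsMock.Generator{}
// All three GenerateParams arguments are matched now, including the client.
genMock.EXPECT().
	GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).
	Return([]map[string]any{{"path": "app1"}}, nil)
genMock.EXPECT().
	GetTemplate(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator")).
	Return(&v1alpha1.ApplicationSetTemplate{})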
@@ -174,7 +174,7 @@ func TestApplicationsetCollector(t *testing.T) {
appsetCollector := newAppsetCollector(utils.NewAppsetLister(client), collectedLabels, filter)

metrics.Registry.MustRegister(appsetCollector)
-req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
+req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
handler := promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{})
@@ -216,7 +216,7 @@ func TestObserveReconcile(t *testing.T) {

appsetMetrics := NewApplicationsetMetrics(utils.NewAppsetLister(client), collectedLabels, filter)

-req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
+req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
handler := promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{})
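Note: the request changes above and below tie test HTTP calls to the test's lifetime. A self-contained sketch of the idiom using only the standard library (t.Context() was added in Go 1.24 and is canceled just before the test finishes, so a request built with it cannot leak past the test the way one built on context.Background() can):

package metrics_test

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestScrapeSketch(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		_, _ = w.Write([]byte("ok")) // stand-in for a /metrics handler
	}))
	defer srv.Close()

	req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, srv.URL, http.NoBody)
	if err != nil {
		t.Fatalf("failed to create request: %v", err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()
}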
@@ -97,7 +97,9 @@ func TestGitHubMetrics_CollectorApproach_Success(t *testing.T) {
),
}

-req, _ := http.NewRequest(http.MethodGet, ts.URL+URL, http.NoBody)
+ctx := t.Context()
+
+req, _ := http.NewRequestWithContext(ctx, http.MethodGet, ts.URL+URL, http.NoBody)
resp, err := client.Do(req)
if err != nil {
t.Fatalf("unexpected error: %v", err)
@@ -109,7 +111,11 @@ func TestGitHubMetrics_CollectorApproach_Success(t *testing.T) {
server := httptest.NewServer(handler)
defer server.Close()

-resp, err = http.Get(server.URL)
+req, err = http.NewRequestWithContext(ctx, http.MethodGet, server.URL, http.NoBody)
+if err != nil {
+t.Fatalf("failed to create request: %v", err)
+}
+resp, err = http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("failed to scrape metrics: %v", err)
}
@@ -151,15 +157,23 @@ func TestGitHubMetrics_CollectorApproach_NoRateLimitMetricsOnNilResponse(t *test
metrics: metrics,
},
}
+ctx := t.Context()

-req, _ := http.NewRequest(http.MethodGet, URL, http.NoBody)
+req, err := http.NewRequestWithContext(ctx, http.MethodGet, URL, http.NoBody)
+if err != nil {
+t.Fatalf("failed to create request: %v", err)
+}
_, _ = client.Do(req)

handler := promhttp.HandlerFor(reg, promhttp.HandlerOpts{})
server := httptest.NewServer(handler)
defer server.Close()

-resp, err := http.Get(server.URL)
+req, err = http.NewRequestWithContext(ctx, http.MethodGet, server.URL, http.NoBody)
+if err != nil {
+t.Fatalf("failed to create request: %v", err)
+}
+resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("failed to scrape metrics: %v", err)
}
@@ -1,7 +1,6 @@
package pull_request

import (
-"context"
"errors"
"testing"

@@ -13,6 +12,7 @@ import (
"github.com/stretchr/testify/require"

azureMock "github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/azure_devops/git/mocks"
+"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
)

func createBoolPtr(x bool) *bool {
@@ -35,29 +35,6 @@ func createUniqueNamePtr(x string) *string {
return &x
}

-type AzureClientFactoryMock struct {
-mock *mock.Mock
-}
-
-func (m *AzureClientFactoryMock) GetClient(ctx context.Context) (git.Client, error) {
-args := m.mock.Called(ctx)
-
-var client git.Client
-c := args.Get(0)
-if c != nil {
-client = c.(git.Client)
-}
-
-var err error
-if len(args) > 1 {
-if e, ok := args.Get(1).(error); ok {
-err = e
-}
-}
-
-return client, err
-}
-
func TestListPullRequest(t *testing.T) {
teamProject := "myorg_project"
repoName := "myorg_project_repo"
@@ -91,10 +68,10 @@ func TestListPullRequest(t *testing.T) {
SearchCriteria: &git.GitPullRequestSearchCriteria{},
}

-gitClientMock := azureMock.Client{}
-clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
-clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
-gitClientMock.On("GetPullRequestsByProject", ctx, args).Return(&pullRequestMock, nil)
+gitClientMock := &azureMock.Client{}
+clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
+clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
+gitClientMock.EXPECT().GetPullRequestsByProject(mock.Anything, args).Return(&pullRequestMock, nil)

provider := AzureDevOpsService{
clientFactory: clientFactoryMock,
@@ -245,12 +222,12 @@ func TestAzureDevOpsListReturnsRepositoryNotFoundError(t *testing.T) {

pullRequestMock := []git.GitPullRequest{}

-gitClientMock := azureMock.Client{}
-clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
-clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
+gitClientMock := &azureMock.Client{}
+clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
+clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)

// Mock the GetPullRequestsByProject to return an error containing "404"
-gitClientMock.On("GetPullRequestsByProject", t.Context(), args).Return(&pullRequestMock,
+gitClientMock.EXPECT().GetPullRequestsByProject(mock.Anything, args).Return(&pullRequestMock,
errors.New("The following project does not exist:"))

provider := AzureDevOpsService{
@@ -12,7 +12,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"

-"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/aws_codecommit/mocks"
+"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)

@@ -177,9 +177,8 @@ func TestAWSCodeCommitListRepos(t *testing.T) {
if repo.getRepositoryNilMetadata {
repoMetadata = nil
}
-codeCommitClient.
-On("GetRepositoryWithContext", ctx, &codecommit.GetRepositoryInput{RepositoryName: aws.String(repo.name)}).
-Return(&codecommit.GetRepositoryOutput{RepositoryMetadata: repoMetadata}, repo.getRepositoryError)
+codeCommitClient.EXPECT().GetRepositoryWithContext(mock.Anything, &codecommit.GetRepositoryInput{RepositoryName: aws.String(repo.name)}).
+Return(&codecommit.GetRepositoryOutput{RepositoryMetadata: repoMetadata}, repo.getRepositoryError).Maybe()
codecommitRepoNameIdPairs = append(codecommitRepoNameIdPairs, &codecommit.RepositoryNameIdPair{
RepositoryId: aws.String(repo.id),
RepositoryName: aws.String(repo.name),
@@ -193,20 +192,18 @@ func TestAWSCodeCommitListRepos(t *testing.T) {
}

if testCase.expectListAtCodeCommit {
-codeCommitClient.
-On("ListRepositoriesWithContext", ctx, &codecommit.ListRepositoriesInput{}).
+codeCommitClient.EXPECT().ListRepositoriesWithContext(mock.Anything, &codecommit.ListRepositoriesInput{}).
Return(&codecommit.ListRepositoriesOutput{
Repositories: codecommitRepoNameIdPairs,
-}, testCase.listRepositoryError)
+}, testCase.listRepositoryError).Maybe()
} else {
-taggingClient.
-On("GetResourcesWithContext", ctx, mock.MatchedBy(equalIgnoringTagFilterOrder(&resourcegroupstaggingapi.GetResourcesInput{
-TagFilters: testCase.expectTagFilters,
-ResourceTypeFilters: aws.StringSlice([]string{resourceTypeCodeCommitRepository}),
-}))).
+taggingClient.EXPECT().GetResourcesWithContext(mock.Anything, mock.MatchedBy(equalIgnoringTagFilterOrder(&resourcegroupstaggingapi.GetResourcesInput{
+TagFilters: testCase.expectTagFilters,
+ResourceTypeFilters: aws.StringSlice([]string{resourceTypeCodeCommitRepository}),
+}))).
Return(&resourcegroupstaggingapi.GetResourcesOutput{
ResourceTagMappingList: resourceTaggings,
-}, testCase.listRepositoryError)
+}, testCase.listRepositoryError).Maybe()
}

provider := &AWSCodeCommitProvider{
@@ -350,13 +347,12 @@ func TestAWSCodeCommitRepoHasPath(t *testing.T) {
taggingClient := mocks.NewAWSTaggingClient(t)
ctx := t.Context()
if testCase.expectedGetFolderPath != "" {
-codeCommitClient.
-On("GetFolderWithContext", ctx, &codecommit.GetFolderInput{
-CommitSpecifier: aws.String(branch),
-FolderPath: aws.String(testCase.expectedGetFolderPath),
-RepositoryName: aws.String(repoName),
-}).
-Return(testCase.getFolderOutput, testCase.getFolderError)
+codeCommitClient.EXPECT().GetFolderWithContext(mock.Anything, &codecommit.GetFolderInput{
+CommitSpecifier: aws.String(branch),
+FolderPath: aws.String(testCase.expectedGetFolderPath),
+RepositoryName: aws.String(repoName),
+}).
+Return(testCase.getFolderOutput, testCase.getFolderError).Maybe()
}
provider := &AWSCodeCommitProvider{
codeCommitClient: codeCommitClient,
@@ -423,18 +419,16 @@ func TestAWSCodeCommitGetBranches(t *testing.T) {
taggingClient := mocks.NewAWSTaggingClient(t)
ctx := t.Context()
if testCase.allBranches {
-codeCommitClient.
-On("ListBranchesWithContext", ctx, &codecommit.ListBranchesInput{
-RepositoryName: aws.String(name),
-}).
-Return(&codecommit.ListBranchesOutput{Branches: aws.StringSlice(testCase.branches)}, testCase.apiError)
+codeCommitClient.EXPECT().ListBranchesWithContext(mock.Anything, &codecommit.ListBranchesInput{
+RepositoryName: aws.String(name),
+}).
+Return(&codecommit.ListBranchesOutput{Branches: aws.StringSlice(testCase.branches)}, testCase.apiError).Maybe()
} else {
-codeCommitClient.
-On("GetRepositoryWithContext", ctx, &codecommit.GetRepositoryInput{RepositoryName: aws.String(name)}).
+codeCommitClient.EXPECT().GetRepositoryWithContext(mock.Anything, &codecommit.GetRepositoryInput{RepositoryName: aws.String(name)}).
Return(&codecommit.GetRepositoryOutput{RepositoryMetadata: &codecommit.RepositoryMetadata{
AccountId: aws.String(organization),
DefaultBranch: aws.String(defaultBranch),
-}}, testCase.apiError)
+}}, testCase.apiError).Maybe()
}
provider := &AWSCodeCommitProvider{
codeCommitClient: codeCommitClient,
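Note: several expectations above gain .Maybe(). Mocks built with a NewXxx(t) constructor assert all expectations at cleanup, so an expectation that a given table-driven case never exercises would otherwise fail the test; .Maybe() marks that one expectation optional. A sketch (the codeCommitClient constructor name here is a guess; only NewAWSTaggingClient appears in this diff):

codeCommitClient := mocks.NewAWSCodeCommitClient(t) // hypothetical constructor name
codeCommitClient.EXPECT().
	ListRepositoriesWithContext(mock.Anything, &codecommit.ListRepositoriesInput{}).
	Return(&codecommit.ListRepositoriesOutput{}, nil).
	Maybe() // optional: no failure at cleanup if this test case never lists repositories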
@@ -1,7 +1,6 @@
package scm_provider

import (
-"context"
"errors"
"fmt"
"testing"
@@ -16,6 +15,7 @@ import (
azureGit "github.com/microsoft/azure-devops-go-api/azuredevops/v7/git"

azureMock "github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/azure_devops/git/mocks"
+"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
)

func s(input string) *string {
@@ -78,13 +78,13 @@ func TestAzureDevopsRepoHasPath(t *testing.T) {

for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
-gitClientMock := azureMock.Client{}
+gitClientMock := &azureMock.Client{}

-clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
-clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, testCase.clientError)
+clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
+clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, testCase.clientError)

repoId := &uuid
-gitClientMock.On("GetItem", ctx, azureGit.GetItemArgs{Project: &teamProject, Path: &path, VersionDescriptor: &azureGit.GitVersionDescriptor{Version: &branchName}, RepositoryId: repoId}).Return(nil, testCase.azureDevopsError)
+gitClientMock.EXPECT().GetItem(mock.Anything, azureGit.GetItemArgs{Project: &teamProject, Path: &path, VersionDescriptor: &azureGit.GitVersionDescriptor{Version: &branchName}, RepositoryId: repoId}).Return(nil, testCase.azureDevopsError)

provider := AzureDevOpsProvider{organization: organization, teamProject: teamProject, clientFactory: clientFactoryMock}

@@ -143,12 +143,12 @@ func TestGetDefaultBranchOnDisabledRepo(t *testing.T) {
t.Run(testCase.name, func(t *testing.T) {
uuid := uuid.New().String()

-gitClientMock := azureMock.Client{}
+gitClientMock := azureMock.NewClient(t)

-clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
-clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
+clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
+clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)

-gitClientMock.On("GetBranch", ctx, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(nil, testCase.azureDevOpsError)
+gitClientMock.EXPECT().GetBranch(mock.Anything, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(nil, testCase.azureDevOpsError)

repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}

@@ -162,8 +162,6 @@ func TestGetDefaultBranchOnDisabledRepo(t *testing.T) {
}

assert.Empty(t, branches)
-
-gitClientMock.AssertExpectations(t)
})
}
}
@@ -202,12 +200,12 @@ func TestGetAllBranchesOnDisabledRepo(t *testing.T) {
t.Run(testCase.name, func(t *testing.T) {
uuid := uuid.New().String()

-gitClientMock := azureMock.Client{}
+gitClientMock := azureMock.NewClient(t)

-clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
-clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
+clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
+clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)

-gitClientMock.On("GetBranches", ctx, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(nil, testCase.azureDevOpsError)
+gitClientMock.EXPECT().GetBranches(mock.Anything, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(nil, testCase.azureDevOpsError)

repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}

@@ -221,8 +219,6 @@ func TestGetAllBranchesOnDisabledRepo(t *testing.T) {
}

assert.Empty(t, branches)
-
-gitClientMock.AssertExpectations(t)
})
}
}
@@ -241,12 +237,12 @@ func TestAzureDevOpsGetDefaultBranchStripsRefsName(t *testing.T) {
branchReturn := &azureGit.GitBranchStats{Name: &strippedBranchName, Commit: &azureGit.GitCommitRef{CommitId: s("abc123233223")}}
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}

-gitClientMock := azureMock.Client{}
+gitClientMock := &azureMock.Client{}

-clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
-clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
+clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
+clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)

-gitClientMock.On("GetBranch", ctx, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &strippedBranchName}).Return(branchReturn, nil)
+gitClientMock.EXPECT().GetBranch(mock.Anything, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &strippedBranchName}).Return(branchReturn, nil).Maybe()

provider := AzureDevOpsProvider{organization: organization, teamProject: teamProject, clientFactory: clientFactoryMock, allBranches: false}
branches, err := provider.GetBranches(ctx, repo)
@@ -295,12 +291,12 @@ func TestAzureDevOpsGetBranchesDefultBranchOnly(t *testing.T) {

for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
-gitClientMock := azureMock.Client{}
+gitClientMock := &azureMock.Client{}

-clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
-clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, testCase.clientError)
+clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
+clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, testCase.clientError)

-gitClientMock.On("GetBranch", ctx, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(testCase.expectedBranch, testCase.getBranchesAPIError)
+gitClientMock.EXPECT().GetBranch(mock.Anything, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(testCase.expectedBranch, testCase.getBranchesAPIError)

repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}

@@ -379,12 +375,12 @@ func TestAzureDevopsGetBranches(t *testing.T) {

for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
-gitClientMock := azureMock.Client{}
+gitClientMock := &azureMock.Client{}

-clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
-clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, testCase.clientError)
+clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
+clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, testCase.clientError)

-gitClientMock.On("GetBranches", ctx, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(testCase.expectedBranches, testCase.getBranchesAPIError)
+gitClientMock.EXPECT().GetBranches(mock.Anything, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(testCase.expectedBranches, testCase.getBranchesAPIError)

repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid}

@@ -427,7 +423,6 @@ func TestGetAzureDevopsRepositories(t *testing.T) {
teamProject := "myorg_project"

uuid := uuid.New()
-ctx := t.Context()

repoId := &uuid

@@ -477,15 +472,15 @@ func TestGetAzureDevopsRepositories(t *testing.T) {

for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
-gitClientMock := azureMock.Client{}
-gitClientMock.On("GetRepositories", ctx, azureGit.GetRepositoriesArgs{Project: s(teamProject)}).Return(&testCase.repositories, testCase.getRepositoriesError)
+gitClientMock := azureMock.NewClient(t)
+gitClientMock.EXPECT().GetRepositories(mock.Anything, azureGit.GetRepositoriesArgs{Project: s(teamProject)}).Return(&testCase.repositories, testCase.getRepositoriesError)

-clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
-clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock)
+clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
+clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)

provider := AzureDevOpsProvider{organization: organization, teamProject: teamProject, clientFactory: clientFactoryMock}

-repositories, err := provider.ListRepos(ctx, "https")
+repositories, err := provider.ListRepos(t.Context(), "https")

if testCase.getRepositoriesError != nil {
require.Error(t, err, "Expected an error from test case %v", testCase.name)
@@ -497,31 +492,6 @@ func TestGetAzureDevopsRepositories(t *testing.T) {
assert.NotEmpty(t, repositories)
assert.Len(t, repositories, testCase.expectedNumberOfRepos)
}
-
-gitClientMock.AssertExpectations(t)
})
}
}
-
-type AzureClientFactoryMock struct {
-mock *mock.Mock
-}
-
-func (m *AzureClientFactoryMock) GetClient(ctx context.Context) (azureGit.Client, error) {
-args := m.mock.Called(ctx)
-
-var client azureGit.Client
-c := args.Get(0)
-if c != nil {
-client = c.(azureGit.Client)
-}
-
-var err error
-if len(args) > 1 {
-if e, ok := args.Get(1).(error); ok {
-err = e
-}
-}
-
-return client, err
-}
@@ -30,7 +30,7 @@ func (c *ExtendedClient) GetContents(repo *Repository, path string) (bool, error
urlStr += fmt.Sprintf("/repositories/%s/%s/src/%s/%s?format=meta", c.owner, repo.Repository, repo.SHA, path)
body := strings.NewReader("")

-req, err := http.NewRequest(http.MethodGet, urlStr, body)
+req, err := http.NewRequestWithContext(context.Background(), http.MethodGet, urlStr, body)
if err != nil {
return false, err
}
applicationset/services/scm_provider/mocks/AzureDevOpsClientFactory.go (generated, new file)
@@ -0,0 +1,101 @@
+// Code generated by mockery; DO NOT EDIT.
+// github.com/vektra/mockery
+// template: testify
+
+package mocks
+
+import (
+"context"
+
+"github.com/microsoft/azure-devops-go-api/azuredevops/v7/git"
+mock "github.com/stretchr/testify/mock"
+)
+
+// NewAzureDevOpsClientFactory creates a new instance of AzureDevOpsClientFactory. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
+// The first argument is typically a *testing.T value.
+func NewAzureDevOpsClientFactory(t interface {
+mock.TestingT
+Cleanup(func())
+}) *AzureDevOpsClientFactory {
+mock := &AzureDevOpsClientFactory{}
+mock.Mock.Test(t)
+
+t.Cleanup(func() { mock.AssertExpectations(t) })
+
+return mock
+}
+
+// AzureDevOpsClientFactory is an autogenerated mock type for the AzureDevOpsClientFactory type
+type AzureDevOpsClientFactory struct {
+mock.Mock
+}
+
+type AzureDevOpsClientFactory_Expecter struct {
+mock *mock.Mock
+}
+
+func (_m *AzureDevOpsClientFactory) EXPECT() *AzureDevOpsClientFactory_Expecter {
+return &AzureDevOpsClientFactory_Expecter{mock: &_m.Mock}
+}
+
+// GetClient provides a mock function for the type AzureDevOpsClientFactory
+func (_mock *AzureDevOpsClientFactory) GetClient(ctx context.Context) (git.Client, error) {
+ret := _mock.Called(ctx)
+
+if len(ret) == 0 {
+panic("no return value specified for GetClient")
+}
+
+var r0 git.Client
+var r1 error
+if returnFunc, ok := ret.Get(0).(func(context.Context) (git.Client, error)); ok {
+return returnFunc(ctx)
+}
+if returnFunc, ok := ret.Get(0).(func(context.Context) git.Client); ok {
+r0 = returnFunc(ctx)
+} else {
+if ret.Get(0) != nil {
+r0 = ret.Get(0).(git.Client)
+}
+}
+if returnFunc, ok := ret.Get(1).(func(context.Context) error); ok {
+r1 = returnFunc(ctx)
+} else {
+r1 = ret.Error(1)
+}
+return r0, r1
+}
+
+// AzureDevOpsClientFactory_GetClient_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetClient'
+type AzureDevOpsClientFactory_GetClient_Call struct {
+*mock.Call
+}
+
+// GetClient is a helper method to define mock.On call
+//   - ctx context.Context
+func (_e *AzureDevOpsClientFactory_Expecter) GetClient(ctx interface{}) *AzureDevOpsClientFactory_GetClient_Call {
+return &AzureDevOpsClientFactory_GetClient_Call{Call: _e.mock.On("GetClient", ctx)}
+}
+
+func (_c *AzureDevOpsClientFactory_GetClient_Call) Run(run func(ctx context.Context)) *AzureDevOpsClientFactory_GetClient_Call {
+_c.Call.Run(func(args mock.Arguments) {
+var arg0 context.Context
+if args[0] != nil {
+arg0 = args[0].(context.Context)
+}
+run(
+arg0,
+)
+})
+return _c
+}
+
+func (_c *AzureDevOpsClientFactory_GetClient_Call) Return(client git.Client, err error) *AzureDevOpsClientFactory_GetClient_Call {
+_c.Call.Return(client, err)
+return _c
+}
+
+func (_c *AzureDevOpsClientFactory_GetClient_Call) RunAndReturn(run func(ctx context.Context) (git.Client, error)) *AzureDevOpsClientFactory_GetClient_Call {
+_c.Call.Return(run)
+return _c
+}
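Note: a usage sketch for the generated factory above, using only names defined in this file and in the tests earlier in the diff:

factory := mocks.NewAzureDevOpsClientFactory(t) // asserts expectations via t.Cleanup
gitClient := &azureMock.Client{}                // generated git.Client mock, as in the tests above
factory.EXPECT().GetClient(mock.Anything).Return(gitClient, nil)

c, err := factory.GetClient(context.Background())
// c == gitClient and err == nil; the recorded expectation is now satisfied.
_, _ = c, err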
@@ -1,7 +1,6 @@
package services

import (
-"context"
"crypto/tls"
"net/http"
"testing"
@@ -12,7 +11,7 @@ import (
)

func TestSetupBitbucketClient(t *testing.T) {
-ctx := context.Background()
+ctx := t.Context()
cfg := &bitbucketv1.Configuration{}

// Act
@@ -26,10 +26,14 @@ import (
"github.com/go-playground/webhooks/v6/github"
"github.com/go-playground/webhooks/v6/gitlab"
log "github.com/sirupsen/logrus"
+
+"github.com/argoproj/argo-cd/v3/util/guard"
)

const payloadQueueSize = 50000

+const panicMsgAppSet = "panic while processing applicationset-controller webhook event"
+
type WebhookHandler struct {
sync.WaitGroup // for testing
github *github.Webhook
@@ -102,6 +106,7 @@ func NewWebhookHandler(webhookParallelism int, argocdSettingsMgr *argosettings.S
}

func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
+compLog := log.WithField("component", "applicationset-webhook")
for i := 0; i < webhookParallelism; i++ {
h.Add(1)
go func() {
@@ -111,7 +116,7 @@ func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
if !ok {
return
}
-h.HandleEvent(payload)
+guard.RecoverAndLog(func() { h.HandleEvent(payload) }, compLog, panicMsgAppSet)
}
}()
}
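Note: the worker-pool change above wraps each webhook dispatch in a panic guard so one bad payload cannot kill a worker goroutine. util/guard is not shown in this diff; a plausible sketch matching the call shape guard.RecoverAndLog(func() {...}, compLog, panicMsgAppSet):

// Assumed shape only; the real util/guard implementation may differ.
func RecoverAndLog(fn func(), log *logrus.Entry, panicMsg string) {
	defer func() {
		if r := recover(); r != nil {
			// Log and swallow the panic so the worker loop keeps draining the queue.
			log.WithField("recovered", r).Error(panicMsg)
		}
	}()
	fn()
}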
assets/swagger.json (generated)
@@ -5619,6 +5619,9 @@
"statusBadgeRootUrl": {
"type": "string"
},
+"syncWithReplaceAllowed": {
+"type": "boolean"
+},
"trackingMethod": {
"type": "string"
},
@@ -7077,7 +7080,7 @@
},
"status": {
"type": "string",
-"title": "Status contains the AppSet's perceived status of the managed Application resource: (Waiting, Pending, Progressing, Healthy)"
+"title": "Status contains the AppSet's perceived status of the managed Application resource"
},
"step": {
"type": "string",
@@ -7322,6 +7325,11 @@
"items": {
"$ref": "#/definitions/applicationv1alpha1ResourceStatus"
}
},
+"resourcesCount": {
+"description": "ResourcesCount is the total number of resources managed by this application set. The count may be higher than actual number of items in the Resources field when\nthe number of managed resources exceeds the limit imposed by the controller (to avoid making the status field too large).",
+"type": "integer",
+"format": "int64"
+}
}
},
@@ -8249,7 +8257,7 @@
}
},
"v1alpha1ConfigMapKeyRef": {
-"description": "Utility struct for a reference to a configmap key.",
+"description": "ConfigMapKeyRef struct for a reference to a configmap key.",
"type": "object",
"properties": {
"configMapName": {
@@ -9331,7 +9339,7 @@
}
},
"v1alpha1PullRequestGeneratorGithub": {
-"description": "PullRequestGenerator defines connection info specific to GitHub.",
+"description": "PullRequestGeneratorGithub defines connection info specific to GitHub.",
"type": "object",
"properties": {
"api": {
@@ -9472,6 +9480,11 @@
"connectionState": {
"$ref": "#/definitions/v1alpha1ConnectionState"
},
+"depth": {
+"description": "Depth specifies the depth for shallow clones. A value of 0 or omitting the field indicates a full clone.",
+"type": "integer",
+"format": "int64"
+},
"enableLfs": {
"description": "EnableLFS specifies whether git-lfs support should be enabled for this repo. Only valid for Git repositories.",
"type": "boolean"
@@ -10335,7 +10348,7 @@
}
},
"v1alpha1SecretRef": {
-"description": "Utility struct for a reference to a secret key.",
+"description": "SecretRef struct for a reference to a secret key.",
"type": "object",
"properties": {
"key": {
@@ -10572,8 +10585,8 @@
"type": "string"
},
"targetBranch": {
-"type": "string",
-"title": "TargetBranch is the branch to which hydrated manifests should be committed"
+"description": "TargetBranch is the branch from which hydrated manifests will be synced.\nIf HydrateTo is not set, this is also the branch to which hydrated manifests are committed.",
+"type": "string"
}
}
},
@@ -14,6 +14,7 @@ import (

"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
logutils "github.com/argoproj/argo-cd/v3/util/log"
+"github.com/argoproj/argo-cd/v3/util/profile"
"github.com/argoproj/argo-cd/v3/util/tls"

"github.com/argoproj/argo-cd/v3/applicationset/controllers"
@@ -79,6 +80,7 @@ func NewCommand() *cobra.Command {
enableScmProviders bool
webhookParallelism int
tokenRefStrictMode bool
+maxResourcesStatusCount int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@@ -169,6 +171,15 @@ func NewCommand() *cobra.Command {
log.Error(err, "unable to start manager")
os.Exit(1)
}
+
+pprofMux := http.NewServeMux()
+profile.RegisterProfiler(pprofMux)
+// This looks a little strange. Eg, not using ctrl.Options PprofBindAddress and then adding the pprof mux
+// to the metrics server. However, it allows for the controller to dynamically expose the pprof endpoints
+// and use the existing metrics server, the same pattern that the application controller and api-server follow.
+if err = mgr.AddMetricsServerExtraHandler("/debug/pprof/", pprofMux); err != nil {
+log.Error(err, "failed to register pprof handlers")
+}
dynamicClient, err := dynamic.NewForConfig(mgr.GetConfig())
errors.CheckError(err)
k8sClient, err := kubernetes.NewForConfig(mgr.GetConfig())
@@ -231,6 +242,7 @@ func NewCommand() *cobra.Command {
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
+MaxResourcesStatusCount: maxResourcesStatusCount,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -275,6 +287,7 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&metricsAplicationsetLabels, "metrics-applicationset-labels", []string{}, "List of Application labels that will be added to the argocd_applicationset_labels metric")
command.Flags().BoolVar(&enableGitHubAPIMetrics, "enable-github-api-metrics", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_GITHUB_API_METRICS", false), "Enable GitHub API metrics for generators that use the GitHub API")
+command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 0, 0, math.MaxInt), "Max number of resources stored in appset status.")

return &command
}
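Note: profile.RegisterProfiler is not shown in this diff; presumably it mounts the standard net/http/pprof handlers on the supplied mux, which AddMetricsServerExtraHandler then exposes on the existing metrics listener. A sketch of that assumption:

package profile // sketch, not the actual util/profile source

import (
	"net/http"
	"net/http/pprof"
)

func RegisterProfiler(mux *http.ServeMux) {
	// Standard library pprof handlers; hitting /debug/pprof/ lists the rest.
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
}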
@@ -38,7 +38,7 @@ func NewCommand() *cobra.Command {
Use: "argocd-commit-server",
Short: "Run Argo CD Commit Server",
Long: "Argo CD Commit Server is an internal service which commits and pushes hydrated manifests to git. This command runs Commit Server in the foreground.",
-RunE: func(_ *cobra.Command, _ []string) error {
+RunE: func(cmd *cobra.Command, _ []string) error {
vers := common.GetVersion()
vers.LogStartupInfo(
"Argo CD Commit Server",
@@ -59,8 +59,10 @@ func NewCommand() *cobra.Command {

server := commitserver.NewServer(askPassServer, metricsServer)
grpc := server.CreateGRPC()
+ctx := cmd.Context()

-listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
+lc := &net.ListenConfig{}
+listener, err := lc.Listen(ctx, "tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
errors.CheckError(err)

healthz.ServeHealthCheck(http.DefaultServeMux, func(r *http.Request) error {
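Note: net.ListenConfig.Listen is the context-aware equivalent of net.Listen, and cmd.Context() is the cobra command's context, so a shutdown signal can cancel socket setup instead of it blocking. Standard-library sketch of the idiom:

lc := &net.ListenConfig{}
listener, err := lc.Listen(ctx, "tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
if err != nil {
	return err // with ctx canceled, Listen returns promptly instead of hanging
}
defer listener.Close()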
@@ -115,7 +115,7 @@ func NewRunDexCommand() *cobra.Command {
err = os.WriteFile("/tmp/dex.yaml", dexCfgBytes, 0o644)
errors.CheckError(err)
log.Debug(redactor(string(dexCfgBytes)))
-cmd = exec.Command("dex", "serve", "/tmp/dex.yaml")
+cmd = exec.CommandContext(ctx, "dex", "serve", "/tmp/dex.yaml")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
err = cmd.Start()

@@ -169,7 +169,8 @@ func NewCommand() *cobra.Command {
}

grpc := server.CreateGRPC()
-listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
+lc := &net.ListenConfig{}
+listener, err := lc.Listen(ctx, "tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
errors.CheckError(err)

healthz.ServeHealthCheck(http.DefaultServeMux, func(r *http.Request) error {
@@ -105,26 +105,26 @@ func TestGetReconcileResults_Refresh(t *testing.T) {
appClientset := appfake.NewSimpleClientset(app, proj)
deployment := test.NewDeployment()
kubeClientset := kubefake.NewClientset(deployment, argoCM, argoCDSecret)
-clusterCache := clustermocks.ClusterCache{}
-clusterCache.On("IsNamespaced", mock.Anything).Return(true, nil)
-clusterCache.On("GetGVKParser", mock.Anything).Return(nil)
-repoServerClient := mocks.RepoServerServiceClient{}
-repoServerClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(&argocdclient.ManifestResponse{
+clusterCache := &clustermocks.ClusterCache{}
+clusterCache.EXPECT().IsNamespaced(mock.Anything).Return(true, nil)
+clusterCache.EXPECT().GetGVKParser().Return(nil)
+repoServerClient := &mocks.RepoServerServiceClient{}
+repoServerClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(&argocdclient.ManifestResponse{
Manifests: []string{test.DeploymentManifest},
}, nil)
-repoServerClientset := mocks.Clientset{RepoServerServiceClient: &repoServerClient}
-liveStateCache := cachemocks.LiveStateCache{}
-liveStateCache.On("GetManagedLiveObjs", mock.Anything, mock.Anything, mock.Anything).Return(map[kube.ResourceKey]*unstructured.Unstructured{
+repoServerClientset := &mocks.Clientset{RepoServerServiceClient: repoServerClient}
+liveStateCache := &cachemocks.LiveStateCache{}
+liveStateCache.EXPECT().GetManagedLiveObjs(mock.Anything, mock.Anything, mock.Anything).Return(map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(deployment): deployment,
}, nil)
-liveStateCache.On("GetVersionsInfo", mock.Anything).Return("v1.2.3", nil, nil)
-liveStateCache.On("Init").Return(nil, nil)
-liveStateCache.On("GetClusterCache", mock.Anything).Return(&clusterCache, nil)
-liveStateCache.On("IsNamespaced", mock.Anything, mock.Anything).Return(true, nil)
+liveStateCache.EXPECT().GetVersionsInfo(mock.Anything).Return("v1.2.3", nil, nil)
+liveStateCache.EXPECT().Init().Return(nil)
+liveStateCache.EXPECT().GetClusterCache(mock.Anything).Return(clusterCache, nil)
+liveStateCache.EXPECT().IsNamespaced(mock.Anything, mock.Anything).Return(true, nil)

-result, err := reconcileApplications(ctx, kubeClientset, appClientset, "default", &repoServerClientset, "",
+result, err := reconcileApplications(ctx, kubeClientset, appClientset, "default", repoServerClientset, "",
func(_ db.ArgoDB, _ cache.SharedIndexInformer, _ *settings.SettingsManager, _ *metrics.MetricsServer) statecache.LiveStateCache {
-return &liveStateCache
+return liveStateCache
},
false,
normalizers.IgnoreNormalizerOpts{},
@@ -30,11 +30,12 @@ func NewNotificationsCommand() *cobra.Command {
)

var argocdService service.Service
+
toolsCommand := cmd.NewToolsCommand(
"notifications",
"argocd admin notifications",
applications,
-settings.GetFactorySettingsForCLI(argocdService, "argocd-notifications-secret", "argocd-notifications-cm", false),
+settings.GetFactorySettingsForCLI(func() service.Service { return argocdService }, "argocd-notifications-secret", "argocd-notifications-cm", false),
func(clientConfig clientcmd.ClientConfig) {
k8sCfg, err := clientConfig.ClientConfig()
if err != nil {
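Note: passing func() service.Service { return argocdService } instead of the variable itself defers the read to call time: the factory sees whatever argocdService holds when it is invoked, not the nil value captured when the command tree was built. A minimal sketch of the closure pattern:

var svc service.Service // still nil here; assigned once the k8s config is known

getSvc := func() service.Service { return svc } // captures the variable, not its value

// later, after initialization (constructor name illustrative only):
// svc = service.NewArgoCDService(...)
// getSvc() now returns the initialized service instead of the nil captured earlier.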
@@ -40,9 +40,7 @@ func captureStdout(callback func()) (string, error) {
return string(data), err
}

-func newSettingsManager(data map[string]string) *settings.SettingsManager {
-ctx := context.Background()
-
+func newSettingsManager(ctx context.Context, data map[string]string) *settings.SettingsManager {
clientset := fake.NewClientset(&corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
@@ -69,8 +67,8 @@ type fakeCmdContext struct {
mgr *settings.SettingsManager
}

-func newCmdContext(data map[string]string) *fakeCmdContext {
-return &fakeCmdContext{mgr: newSettingsManager(data)}
+func newCmdContext(ctx context.Context, data map[string]string) *fakeCmdContext {
+return &fakeCmdContext{mgr: newSettingsManager(ctx, data)}
}

func (ctx *fakeCmdContext) createSettingsManager(context.Context) (*settings.SettingsManager, error) {
@@ -182,7 +180,7 @@ admissionregistration.k8s.io/MutatingWebhookConfiguration:
if !assert.True(t, ok) {
return
}
-summary, err := validator(newSettingsManager(tc.data))
+summary, err := validator(newSettingsManager(t.Context(), tc.data))
if tc.containsSummary != "" {
require.NoError(t, err)
assert.Contains(t, summary, tc.containsSummary)
@@ -249,7 +247,7 @@ func tempFile(content string) (string, io.Closer, error) {
}

func TestValidateSettingsCommand_NoErrors(t *testing.T) {
-cmd := NewValidateSettingsCommand(newCmdContext(map[string]string{}))
+cmd := NewValidateSettingsCommand(newCmdContext(t.Context(), map[string]string{}))
out, err := captureStdout(func() {
err := cmd.Execute()
require.NoError(t, err)
@@ -267,7 +265,7 @@ func TestResourceOverrideIgnoreDifferences(t *testing.T) {
defer utilio.Close(closer)

t.Run("NoOverridesConfigured", func(t *testing.T) {
-cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{}))
+cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{}))
out, err := captureStdout(func() {
cmd.SetArgs([]string{"ignore-differences", f})
err := cmd.Execute()
@@ -278,7 +276,7 @@ func TestResourceOverrideIgnoreDifferences(t *testing.T) {
})

t.Run("DataIgnored", func(t *testing.T) {
-cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
+cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `apps/Deployment:
ignoreDifferences: |
jsonPointers:
@@ -300,7 +298,7 @@ func TestResourceOverrideHealth(t *testing.T) {
defer utilio.Close(closer)

t.Run("NoHealthAssessment", func(t *testing.T) {
-cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
+cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `example.com/ExampleResource: {}`,
}))
out, err := captureStdout(func() {
@@ -313,7 +311,7 @@ func TestResourceOverrideHealth(t *testing.T) {
})

t.Run("HealthAssessmentConfigured", func(t *testing.T) {
-cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
+cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `example.com/ExampleResource:
health.lua: |
return { status = "Progressing" }
@@ -329,7 +327,7 @@ func TestResourceOverrideHealth(t *testing.T) {
})

t.Run("HealthAssessmentConfiguredWildcard", func(t *testing.T) {
-cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
+cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `example.com/*:
health.lua: |
return { status = "Progressing" }
@@ -355,7 +353,7 @@ func TestResourceOverrideAction(t *testing.T) {
defer utilio.Close(closer)

t.Run("NoActions", func(t *testing.T) {
-cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
+cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `apps/Deployment: {}`,
}))
out, err := captureStdout(func() {
@@ -368,7 +366,7 @@ func TestResourceOverrideAction(t *testing.T) {
})

t.Run("OldStyleActionConfigured", func(t *testing.T) {
-cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
+cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `apps/Deployment:
actions: |
discovery.lua: |
@@ -404,7 +402,7 @@ resume false
})

t.Run("NewStyleActionConfigured", func(t *testing.T) {
-cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
+cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `batch/CronJob:
actions: |
discovery.lua: |
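Note: the helper changes above push context creation out to callers so tests can pass t.Context(). A sketch of the resulting shape (the NewSettingsManager call is illustrative of the pattern, not a verified signature):

func newSettingsManagerSketch(ctx context.Context, data map[string]string) *settings.SettingsManager {
	clientset := fake.NewClientset( /* seed the argocd-cm ConfigMap with data, as above */ )
	return settings.NewSettingsManager(ctx, clientset, "default") // assumed signature
}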
@@ -1794,6 +1794,7 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
repo string
appNamespace string
cluster string
+path string
)
command := &cobra.Command{
Use: "list",
@@ -1829,6 +1830,9 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
if cluster != "" {
appList = argo.FilterByCluster(appList, cluster)
}
+if path != "" {
+appList = argo.FilterByPath(appList, path)
+}

switch output {
case "yaml", "json":
@@ -1849,6 +1853,7 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().StringVarP(&repo, "repo", "r", "", "List apps by source repo URL")
command.Flags().StringVarP(&appNamespace, "app-namespace", "N", "", "Only list applications in namespace")
command.Flags().StringVarP(&cluster, "cluster", "c", "", "List apps by cluster name or url")
+command.Flags().StringVarP(&path, "path", "P", "", "List apps by path")
return command
}
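Note: argo.FilterByPath is not shown in this diff. A guess at its behavior, by analogy with FilterByCluster above: keep applications whose source path matches the flag (the real helper may also consider multiple sources):

func filterByPath(apps []v1alpha1.Application, path string) []v1alpha1.Application {
	items := make([]v1alpha1.Application, 0)
	for _, app := range apps {
		// Spec.Source is a pointer in v1alpha1; guard against multi-source apps.
		if src := app.Spec.Source; src != nil && src.Path == path {
			items = append(items, app)
		}
	}
	return items
}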
@@ -57,7 +57,7 @@ func NewApplicationGetResourceCommand(clientOpts *argocdclient.ClientOptions) *c
# Get a specific resource with managed fields, Pod my-app-pod, in 'my-app' by name in wide format
argocd app get-resource my-app --kind Pod --resource-name my-app-pod --show-managed-fields

-# Get the the details of a specific field in a resource in 'my-app' in the wide format
+# Get the details of a specific field in a resource in 'my-app' in the wide format
argocd app get-resource my-app --kind Pod --filter-fields status.podIP

# Get the details of multiple specific fields in a specific resource in 'my-app' in the wide format
103 cmd/argocd/commands/gpg_test.go Normal file
@@ -0,0 +1,103 @@
+package commands
+
+import (
+	"regexp"
+	"strings"
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+
+	appsv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
+)
+
+// splitColumns splits a line produced by tabwriter using runs of 2 or more spaces
+// as delimiters to obtain logical columns regardless of alignment padding.
+func splitColumns(line string) []string {
+	re := regexp.MustCompile(`\s{2,}`)
+	return re.Split(strings.TrimSpace(line), -1)
+}
+
+func Test_printKeyTable_Empty(t *testing.T) {
+	out, err := captureOutput(func() error {
+		printKeyTable([]appsv1.GnuPGPublicKey{})
+		return nil
+	})
+	require.NoError(t, err)
+
+	lines := strings.Split(strings.TrimRight(out, "\n"), "\n")
+	require.Len(t, lines, 1)
+
+	headerCols := splitColumns(lines[0])
+	assert.Equal(t, []string{"KEYID", "TYPE", "IDENTITY"}, headerCols)
+}
+
+func Test_printKeyTable_Single(t *testing.T) {
+	keys := []appsv1.GnuPGPublicKey{
+		{
+			KeyID:   "ABCDEF1234567890",
+			SubType: "rsa4096",
+			Owner:   "Alice <alice@example.com>",
+		},
+	}
+
+	out, err := captureOutput(func() error {
+		printKeyTable(keys)
+		return nil
+	})
+	require.NoError(t, err)
+
+	lines := strings.Split(strings.TrimRight(out, "\n"), "\n")
+	require.Len(t, lines, 2)
+
+	// Header
+	assert.Equal(t, []string{"KEYID", "TYPE", "IDENTITY"}, splitColumns(lines[0]))
+
+	// Row
+	row := splitColumns(lines[1])
+	require.Len(t, row, 3)
+	assert.Equal(t, "ABCDEF1234567890", row[0])
+	assert.Equal(t, "RSA4096", row[1]) // subtype upper-cased
+	assert.Equal(t, "Alice <alice@example.com>", row[2])
+}
+
+func Test_printKeyTable_Multiple(t *testing.T) {
+	keys := []appsv1.GnuPGPublicKey{
+		{
+			KeyID:   "ABCD",
+			SubType: "ed25519",
+			Owner:   "User One <one@example.com>",
+		},
+		{
+			KeyID:   "0123456789ABCDEF",
+			SubType: "rsa2048",
+			Owner:   "Second User <second@example.com>",
+		},
+	}
+
+	out, err := captureOutput(func() error {
+		printKeyTable(keys)
+		return nil
+	})
+	require.NoError(t, err)
+
+	lines := strings.Split(strings.TrimRight(out, "\n"), "\n")
+	require.Len(t, lines, 3)
+
+	// Header
+	assert.Equal(t, []string{"KEYID", "TYPE", "IDENTITY"}, splitColumns(lines[0]))
+
+	// First row
+	row1 := splitColumns(lines[1])
+	require.Len(t, row1, 3)
+	assert.Equal(t, "ABCD", row1[0])
+	assert.Equal(t, "ED25519", row1[1])
+	assert.Equal(t, "User One <one@example.com>", row1[2])
+
+	// Second row
+	row2 := splitColumns(lines[2])
+	require.Len(t, row2, 3)
+	assert.Equal(t, "0123456789ABCDEF", row2[0])
+	assert.Equal(t, "RSA2048", row2[1])
+	assert.Equal(t, "Second User <second@example.com>", row2[2])
+}

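An aside on the splitColumns trick above: text/tabwriter pads cells with however many spaces alignment requires, so the only reliable column delimiter in its output is a run of two or more spaces. A self-contained sketch (illustrative only, not part of the diff):

    package main

    import (
    	"bytes"
    	"fmt"
    	"regexp"
    	"strings"
    	"text/tabwriter"
    )

    func main() {
    	var buf bytes.Buffer
    	// padding=2 guarantees at least two spaces between columns.
    	w := tabwriter.NewWriter(&buf, 0, 0, 2, ' ', 0)
    	fmt.Fprintln(w, "KEYID\tTYPE\tIDENTITY")
    	fmt.Fprintln(w, "ABCD\tED25519\tUser One <one@example.com>")
    	_ = w.Flush()

    	re := regexp.MustCompile(`\s{2,}`)
    	for _, line := range strings.Split(strings.TrimRight(buf.String(), "\n"), "\n") {
    		// Splitting on >=2 spaces recovers the logical columns per row.
    		fmt.Printf("%q\n", re.Split(strings.TrimSpace(line), -1))
    	}
    }
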
@@ -213,7 +213,8 @@ func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions) {
 	}
 	if port == nil || *port == 0 {
 		addr := *address + ":0"
-		ln, err := net.Listen("tcp", addr)
+		lc := &net.ListenConfig{}
+		ln, err := lc.Listen(ctx, "tcp", addr)
 		if err != nil {
 			return nil, fmt.Errorf("failed to listen on %q: %w", addr, err)
 		}

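For context on this change: net.ListenConfig.Listen is the context-aware counterpart of the package-level net.Listen, so socket setup can respect caller cancellation. A minimal sketch, assuming nothing beyond the standard library:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"net"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	lc := &net.ListenConfig{}
    	// ":0" asks the kernel for any free port; the ctx governs setup.
    	ln, err := lc.Listen(ctx, "tcp", "127.0.0.1:0")
    	if err != nil {
    		log.Fatalf("listen: %v", err)
    	}
    	defer ln.Close()
    	fmt.Println("listening on", ln.Addr())
    }
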
@@ -3,9 +3,11 @@ package commands
 import (
 	"errors"
 	"fmt"
+	"maps"
 	"os"
 	"os/exec"
 	"path/filepath"
+	"slices"
 	"strings"

 	"github.com/argoproj/argo-cd/v3/util/cli"
@@ -13,18 +15,17 @@ import (
 	log "github.com/sirupsen/logrus"
 )

-// DefaultPluginHandler implements the PluginHandler interface
+const prefix = "argocd"
+
 type DefaultPluginHandler struct {
-	ValidPrefixes []string
-	lookPath      func(file string) (string, error)
-	run           func(cmd *exec.Cmd) error
+	lookPath func(file string) (string, error)
+	run      func(cmd *exec.Cmd) error
 }

-// NewDefaultPluginHandler instantiates the DefaultPluginHandler
-func NewDefaultPluginHandler(validPrefixes []string) *DefaultPluginHandler {
+// NewDefaultPluginHandler instantiates a DefaultPluginHandler
+func NewDefaultPluginHandler() *DefaultPluginHandler {
 	return &DefaultPluginHandler{
-		ValidPrefixes: validPrefixes,
-		lookPath:      exec.LookPath,
+		lookPath: exec.LookPath,
 		run: func(cmd *exec.Cmd) error {
 			return cmd.Run()
 		},
@@ -32,8 +33,8 @@ func NewDefaultPluginHandler(validPrefixes []string) *DefaultPluginHandler {
 }

 // HandleCommandExecutionError processes the error returned from executing the command.
-// It handles both standard Argo CD commands and plugin commands. We don't require to return
-// error but we are doing it to cover various test scenarios.
+// It handles both standard Argo CD commands and plugin commands. We don't require returning
+// an error, but we are doing it to cover various test scenarios.
 func (h *DefaultPluginHandler) HandleCommandExecutionError(err error, isArgocdCLI bool, args []string) error {
 	// the log level needs to be setup manually here since the initConfig()
 	// set by the cobra.OnInitialize() was never executed because cmd.Execute()
@@ -85,27 +86,24 @@ func (h *DefaultPluginHandler) handlePluginCommand(cmdArgs []string) (string, error) {

 // lookForPlugin looks for a plugin in the PATH that starts with argocd prefix
 func (h *DefaultPluginHandler) lookForPlugin(filename string) (string, bool) {
-	for _, prefix := range h.ValidPrefixes {
-		pluginName := fmt.Sprintf("%s-%s", prefix, filename)
-		path, err := h.lookPath(pluginName)
-		if err != nil {
-			// error if a plugin is found in a relative path
-			if errors.Is(err, exec.ErrDot) {
-				log.Errorf("Plugin '%s' found in relative path: %v", pluginName, err)
-			} else {
-				log.Warnf("error looking for plugin '%s': %v", pluginName, err)
-			}
-			continue
+	pluginName := fmt.Sprintf("%s-%s", prefix, filename)
+	path, err := h.lookPath(pluginName)
+	if err != nil {
+		// error if a plugin is found in a relative path
+		if errors.Is(err, exec.ErrDot) {
+			log.Errorf("Plugin '%s' found in relative path: %v", pluginName, err)
+		} else {
+			log.Warnf("error looking for plugin '%s': %v", pluginName, err)
 		}

-		if path == "" {
-			return "", false
-		}
-
-		return path, true
+		return "", false
 	}

-	return "", false
+	if path == "" {
+		return "", false
+	}
+
+	return path, true
 }

 // executePlugin implements PluginHandler and executes a plugin found
@@ -141,3 +139,56 @@ func (h *DefaultPluginHandler) command(name string, arg ...string) *exec.Cmd {
 	}
 	return cmd
 }
+
+// ListAvailablePlugins returns a list of plugin names that are available in the user's PATH
+// for tab completion. It searches for executables matching the ValidPrefixes pattern.
+func (h *DefaultPluginHandler) ListAvailablePlugins() []string {
+	// Track seen plugin names to avoid duplicates
+	seenPlugins := make(map[string]bool)
+
+	// Search through each directory in PATH
+	for _, dir := range filepath.SplitList(os.Getenv("PATH")) {
+		// Skip empty directories
+		if dir == "" {
+			continue
+		}
+
+		// Read directory contents
+		entries, err := os.ReadDir(dir)
+		if err != nil {
+			continue
+		}
+
+		// Check each file in the directory
+		for _, entry := range entries {
+			// Skip directories and non-executable files
+			if entry.IsDir() {
+				continue
+			}
+
+			name := entry.Name()
+
+			// Check if the file is a valid argocd plugin
+			pluginPrefix := prefix + "-"
+			if strings.HasPrefix(name, pluginPrefix) {
+				// Extract the plugin command name (everything after the prefix)
+				pluginName := strings.TrimPrefix(name, pluginPrefix)
+
+				// Skip empty plugin names or names with path separators
+				if pluginName == "" || strings.Contains(pluginName, "/") || strings.Contains(pluginName, "\\") {
+					continue
+				}
+
+				// Check if the file is executable
+				if info, err := entry.Info(); err == nil {
+					// On Unix-like systems, check executable bit
+					if info.Mode()&0o111 != 0 {
+						seenPlugins[pluginName] = true
+					}
+				}
+			}
+		}
+	}
+
+	return slices.Sorted(maps.Keys(seenPlugins))
+}

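The discovery walk above ends with slices.Sorted(maps.Keys(seenPlugins)), which drains the map-key iterator into a freshly sorted slice so completion output is deterministic (stdlib maps/slices iterator support, Go 1.23+). A tiny standalone sketch with made-up plugin names:

    package main

    import (
    	"fmt"
    	"maps"
    	"slices"
    )

    func main() {
    	seen := map[string]bool{"status-code-plugin": true, "demo_plugin": true, "foo": true}
    	// maps.Keys yields an iter.Seq[string]; slices.Sorted collects and sorts it.
    	fmt.Println(slices.Sorted(maps.Keys(seen))) // [demo_plugin foo status-code-plugin]
    }
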
@@ -28,7 +28,7 @@ func setupPluginPath(t *testing.T) {
 func TestNormalCommandWithPlugin(t *testing.T) {
 	setupPluginPath(t)

-	_ = NewDefaultPluginHandler([]string{"argocd"})
+	_ = NewDefaultPluginHandler()
 	args := []string{"argocd", "version", "--short", "--client"}
 	buf := new(bytes.Buffer)
 	cmd := NewVersionCmd(&argocdclient.ClientOptions{}, nil)
@@ -47,7 +47,7 @@ func TestNormalCommandWithPlugin(t *testing.T) {
 func TestPluginExecution(t *testing.T) {
 	setupPluginPath(t)

-	pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
+	pluginHandler := NewDefaultPluginHandler()
 	cmd := NewCommand()
 	cmd.SilenceErrors = true
 	cmd.SilenceUsage = true
@@ -101,7 +101,7 @@ func TestPluginExecution(t *testing.T) {
 func TestNormalCommandError(t *testing.T) {
 	setupPluginPath(t)

-	pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
+	pluginHandler := NewDefaultPluginHandler()
 	args := []string{"argocd", "version", "--non-existent-flag"}
 	cmd := NewVersionCmd(&argocdclient.ClientOptions{}, nil)
 	cmd.SetArgs(args[1:])
@@ -118,7 +118,7 @@ func TestNormalCommandError(t *testing.T) {
 // TestUnknownCommandNoPlugin tests the scenario when the command is neither a normal ArgoCD command
 // nor exists as a plugin
 func TestUnknownCommandNoPlugin(t *testing.T) {
-	pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
+	pluginHandler := NewDefaultPluginHandler()
 	cmd := NewCommand()
 	cmd.SilenceErrors = true
 	cmd.SilenceUsage = true
@@ -137,7 +137,7 @@ func TestUnknownCommandNoPlugin(t *testing.T) {
 func TestPluginNoExecutePermission(t *testing.T) {
 	setupPluginPath(t)

-	pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
+	pluginHandler := NewDefaultPluginHandler()
 	cmd := NewCommand()
 	cmd.SilenceErrors = true
 	cmd.SilenceUsage = true
@@ -156,7 +156,7 @@ func TestPluginNoExecutePermission(t *testing.T) {
 func TestPluginExecutionError(t *testing.T) {
 	setupPluginPath(t)

-	pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
+	pluginHandler := NewDefaultPluginHandler()
 	cmd := NewCommand()
 	cmd.SilenceErrors = true
 	cmd.SilenceUsage = true
@@ -187,7 +187,7 @@ func TestPluginInRelativePathIgnored(t *testing.T) {

 	t.Setenv("PATH", os.Getenv("PATH")+string(os.PathListSeparator)+relativePath)

-	pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
+	pluginHandler := NewDefaultPluginHandler()
 	cmd := NewCommand()
 	cmd.SilenceErrors = true
 	cmd.SilenceUsage = true
@@ -206,7 +206,7 @@ func TestPluginInRelativePathIgnored(t *testing.T) {
 func TestPluginFlagParsing(t *testing.T) {
 	setupPluginPath(t)

-	pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
+	pluginHandler := NewDefaultPluginHandler()

 	tests := []struct {
 		name string
@@ -255,7 +255,7 @@ func TestPluginFlagParsing(t *testing.T) {
 func TestPluginStatusCode(t *testing.T) {
 	setupPluginPath(t)

-	pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
+	pluginHandler := NewDefaultPluginHandler()

 	tests := []struct {
 		name string
@@ -309,3 +309,76 @@ func TestPluginStatusCode(t *testing.T) {
 		})
 	}
 }
+
+// TestListAvailablePlugins tests the plugin discovery functionality for tab completion
+func TestListAvailablePlugins(t *testing.T) {
+	setupPluginPath(t)
+
+	tests := []struct {
+		name        string
+		validPrefix []string
+		expected    []string
+	}{
+		{
+			name:     "Standard argocd prefix finds plugins",
+			expected: []string{"demo_plugin", "error", "foo", "status-code-plugin", "test-plugin", "version"},
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			pluginHandler := NewDefaultPluginHandler()
+			plugins := pluginHandler.ListAvailablePlugins()
+
+			assert.Equal(t, tt.expected, plugins)
+		})
+	}
+}
+
+// TestListAvailablePluginsEmptyPath tests plugin discovery when PATH is empty
+func TestListAvailablePluginsEmptyPath(t *testing.T) {
+	// Set empty PATH
+	t.Setenv("PATH", "")
+
+	pluginHandler := NewDefaultPluginHandler()
+	plugins := pluginHandler.ListAvailablePlugins()
+
+	assert.Empty(t, plugins, "Should return empty list when PATH is empty")
+}
+
+// TestListAvailablePluginsNonExecutableFiles tests that non-executable files are ignored
+func TestListAvailablePluginsNonExecutableFiles(t *testing.T) {
+	setupPluginPath(t)
+
+	pluginHandler := NewDefaultPluginHandler()
+	plugins := pluginHandler.ListAvailablePlugins()
+
+	// Should not include 'no-permission' since it's not executable
+	assert.NotContains(t, plugins, "no-permission")
+}
+
+// TestListAvailablePluginsDeduplication tests that duplicate plugins from different PATH dirs are handled
+func TestListAvailablePluginsDeduplication(t *testing.T) {
+	// Create two temporary directories with the same plugin
+	dir1 := t.TempDir()
+	dir2 := t.TempDir()
+
+	// Create the same plugin in both directories
+	plugin1 := filepath.Join(dir1, "argocd-duplicate")
+	plugin2 := filepath.Join(dir2, "argocd-duplicate")
+
+	err := os.WriteFile(plugin1, []byte("#!/bin/bash\necho 'plugin1'\n"), 0o755)
+	require.NoError(t, err)
+
+	err = os.WriteFile(plugin2, []byte("#!/bin/bash\necho 'plugin2'\n"), 0o755)
+	require.NoError(t, err)
+
+	// Set PATH to include both directories
+	testPath := dir1 + string(os.PathListSeparator) + dir2
+	t.Setenv("PATH", testPath)
+
+	pluginHandler := NewDefaultPluginHandler()
+	plugins := pluginHandler.ListAvailablePlugins()

+	assert.Equal(t, []string{"duplicate"}, plugins)
+}

@@ -191,6 +191,7 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
 			repoOpts.Repo.NoProxy = repoOpts.NoProxy
 			repoOpts.Repo.ForceHttpBasicAuth = repoOpts.ForceHttpBasicAuth
 			repoOpts.Repo.UseAzureWorkloadIdentity = repoOpts.UseAzureWorkloadIdentity
+			repoOpts.Repo.Depth = repoOpts.Depth

 			if repoOpts.Repo.Type == "helm" && repoOpts.Repo.Name == "" {
 				errors.Fatal(errors.ErrorGeneric, "Must specify --name for repos of type 'helm'")

@@ -44,6 +44,11 @@ func NewCommand() *cobra.Command {
 		},
 		DisableAutoGenTag: true,
 		SilenceUsage:      true,
+		ValidArgsFunction: func(_ *cobra.Command, _ []string, _ string) ([]string, cobra.ShellCompDirective) {
+			// Return available plugin commands for tab completion
+			plugins := NewDefaultPluginHandler().ListAvailablePlugins()
+			return plugins, cobra.ShellCompDirectiveNoFileComp
+		},
 	}

 	command.AddCommand(NewCompletionCommand())

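For readers unfamiliar with cobra's dynamic completion: ValidArgsFunction is invoked by the generated shell-completion script to produce positional-argument candidates at completion time, which is what lets plugin names show up under tab completion here. A minimal sketch with placeholder candidates:

    package main

    import "github.com/spf13/cobra"

    func main() {
    	root := &cobra.Command{
    		Use: "demo",
    		// Called via the hidden `demo __complete ...` machinery that the
    		// generated bash/zsh/fish completion scripts invoke.
    		ValidArgsFunction: func(_ *cobra.Command, _ []string, _ string) ([]string, cobra.ShellCompDirective) {
    			return []string{"alpha", "beta"}, cobra.ShellCompDirectiveNoFileComp
    		},
    		Run: func(_ *cobra.Command, _ []string) {},
    	}
    	_ = root.Execute()
    }
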
@@ -24,7 +24,7 @@ func extractHealthStatusAndReason(node v1alpha1.ResourceNode) (healthStatus health.HealthStatusCode, reason string) {
 		healthStatus = node.Health.Status
 		reason = node.Health.Message
 	}
-	return
+	return healthStatus, reason
 }

 func treeViewAppGet(prefix string, uidToNodeMap map[string]v1alpha1.ResourceNode, parentToChildMap map[string][]string, parent v1alpha1.ResourceNode, mapNodeNameToResourceState map[string]*resourceState, w *tabwriter.Writer) {

@@ -83,12 +83,11 @@ func main() {
 	}

 	err := command.Execute()
-	// if the err is non-nil, try to look for various scenarios
+	// if an error is present, try to look for various scenarios
 	// such as if the error is from the execution of a normal argocd command,
 	// unknown command error or any other.
 	if err != nil {
-		pluginHandler := cli.NewDefaultPluginHandler([]string{"argocd"})
-		pluginErr := pluginHandler.HandleCommandExecutionError(err, isArgocdCLI, os.Args)
+		pluginErr := cli.NewDefaultPluginHandler().HandleCommandExecutionError(err, isArgocdCLI, os.Args)
 		if pluginErr != nil {
 			var exitErr *exec.ExitError
 			if errors.As(pluginErr, &exitErr) {

@@ -136,9 +136,9 @@ func AddAppFlags(command *cobra.Command, opts *AppOptions) {
 	command.Flags().StringVar(&opts.project, "project", "", "Application project name")
 	command.Flags().StringVar(&opts.syncPolicy, "sync-policy", "", "Set the sync policy (one of: manual (aliases of manual: none), automated (aliases of automated: auto, automatic))")
 	command.Flags().StringArrayVar(&opts.syncOptions, "sync-option", []string{}, "Add or remove a sync option, e.g add `Prune=false`. Remove using `!` prefix, e.g. `!Prune=false`")
-	command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning when sync is automated")
-	command.Flags().BoolVar(&opts.selfHeal, "self-heal", false, "Set self healing when sync is automated")
-	command.Flags().BoolVar(&opts.allowEmpty, "allow-empty", false, "Set allow zero live resources when sync is automated")
+	command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning for automated sync policy")
+	command.Flags().BoolVar(&opts.selfHeal, "self-heal", false, "Set self healing for automated sync policy")
+	command.Flags().BoolVar(&opts.allowEmpty, "allow-empty", false, "Set allow zero live resources for automated sync policy")
 	command.Flags().StringVar(&opts.namePrefix, "nameprefix", "", "Kustomize nameprefix")
 	command.Flags().StringVar(&opts.nameSuffix, "namesuffix", "", "Kustomize namesuffix")
 	command.Flags().StringVar(&opts.kustomizeVersion, "kustomize-version", "", "Kustomize version")
@@ -284,25 +284,26 @@ func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, appOpts *AppOptions) {
 			spec.SyncPolicy.Retry.Refresh = appOpts.retryRefresh
 		}
 	})
-	if flags.Changed("auto-prune") {
-		if spec.SyncPolicy == nil || !spec.SyncPolicy.IsAutomatedSyncEnabled() {
-			log.Fatal("Cannot set --auto-prune: application not configured with automatic sync")
-		}
-		spec.SyncPolicy.Automated.Prune = appOpts.autoPrune
-	}
-	if flags.Changed("self-heal") {
-		if spec.SyncPolicy == nil || !spec.SyncPolicy.IsAutomatedSyncEnabled() {
-			log.Fatal("Cannot set --self-heal: application not configured with automatic sync")
-		}
-		spec.SyncPolicy.Automated.SelfHeal = appOpts.selfHeal
-	}
-	if flags.Changed("allow-empty") {
-		if spec.SyncPolicy == nil || !spec.SyncPolicy.IsAutomatedSyncEnabled() {
-			log.Fatal("Cannot set --allow-empty: application not configured with automatic sync")
-		}
-		spec.SyncPolicy.Automated.AllowEmpty = appOpts.allowEmpty
-	}
+
+	if flags.Changed("auto-prune") || flags.Changed("self-heal") || flags.Changed("allow-empty") {
+		if spec.SyncPolicy == nil {
+			spec.SyncPolicy = &argoappv1.SyncPolicy{}
+		}
+		if spec.SyncPolicy.Automated == nil {
+			disabled := false
+			spec.SyncPolicy.Automated = &argoappv1.SyncPolicyAutomated{Enabled: &disabled}
+		}
+
+		if flags.Changed("auto-prune") {
+			spec.SyncPolicy.Automated.Prune = appOpts.autoPrune
+		}
+		if flags.Changed("self-heal") {
+			spec.SyncPolicy.Automated.SelfHeal = appOpts.selfHeal
+		}
+		if flags.Changed("allow-empty") {
+			spec.SyncPolicy.Automated.AllowEmpty = appOpts.allowEmpty
+		}
+	}
 	return visited
 }

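The behavioural shift above: previously, setting --auto-prune, --self-heal, or --allow-empty on an application without automated sync was fatal; now the flag value is recorded on an Automated block whose Enabled field points at false, so the preference is kept without turning automation on. A sketch with local stand-in types (the real ones live in v1alpha1; field set here is trimmed for illustration):

    package main

    import "fmt"

    // Stand-ins for the v1alpha1 types, to show the *bool tri-state
    // (nil = unset, &false = explicitly disabled, &true = enabled).
    type SyncPolicyAutomated struct {
    	Enabled    *bool
    	Prune      bool
    	SelfHeal   bool
    	AllowEmpty bool
    }

    type SyncPolicy struct {
    	Automated *SyncPolicyAutomated
    }

    func main() {
    	// What --auto-prune now produces on an app with no sync policy:
    	disabled := false
    	p := &SyncPolicy{Automated: &SyncPolicyAutomated{Enabled: &disabled, Prune: true}}
    	fmt.Printf("enabled=%v prune=%v\n", *p.Automated.Enabled, p.Automated.Prune) // enabled=false prune=true
    }
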
@@ -267,6 +267,47 @@ func Test_setAppSpecOptions(t *testing.T) {
 		require.NoError(t, f.SetFlag("sync-option", "!a=1"))
 		assert.Nil(t, f.spec.SyncPolicy)
 	})
+	t.Run("AutoPruneFlag", func(t *testing.T) {
+		f := newAppOptionsFixture()
+
+		// syncPolicy is nil (automated.enabled = false)
+		require.NoError(t, f.SetFlag("auto-prune", "true"))
+		require.NotNil(t, f.spec.SyncPolicy.Automated.Enabled)
+		assert.False(t, *f.spec.SyncPolicy.Automated.Enabled)
+		assert.True(t, f.spec.SyncPolicy.Automated.Prune)
+
+		// automated.enabled = true
+		*f.spec.SyncPolicy.Automated.Enabled = true
+		require.NoError(t, f.SetFlag("auto-prune", "false"))
+		assert.True(t, *f.spec.SyncPolicy.Automated.Enabled)
+		assert.False(t, f.spec.SyncPolicy.Automated.Prune)
+	})
+	t.Run("SelfHealFlag", func(t *testing.T) {
+		f := newAppOptionsFixture()
+
+		require.NoError(t, f.SetFlag("self-heal", "true"))
+		require.NotNil(t, f.spec.SyncPolicy.Automated.Enabled)
+		assert.False(t, *f.spec.SyncPolicy.Automated.Enabled)
+		assert.True(t, f.spec.SyncPolicy.Automated.SelfHeal)
+
+		*f.spec.SyncPolicy.Automated.Enabled = true
+		require.NoError(t, f.SetFlag("self-heal", "false"))
+		assert.True(t, *f.spec.SyncPolicy.Automated.Enabled)
+		assert.False(t, f.spec.SyncPolicy.Automated.SelfHeal)
+	})
+	t.Run("AllowEmptyFlag", func(t *testing.T) {
+		f := newAppOptionsFixture()
+
+		require.NoError(t, f.SetFlag("allow-empty", "true"))
+		require.NotNil(t, f.spec.SyncPolicy.Automated.Enabled)
+		assert.False(t, *f.spec.SyncPolicy.Automated.Enabled)
+		assert.True(t, f.spec.SyncPolicy.Automated.AllowEmpty)
+
+		*f.spec.SyncPolicy.Automated.Enabled = true
+		require.NoError(t, f.SetFlag("allow-empty", "false"))
+		assert.True(t, *f.spec.SyncPolicy.Automated.Enabled)
+		assert.False(t, f.spec.SyncPolicy.Automated.AllowEmpty)
+	})
 	t.Run("RetryLimit", func(t *testing.T) {
 		require.NoError(t, f.SetFlag("sync-retry-limit", "5"))
 		assert.Equal(t, int64(5), f.spec.SyncPolicy.Retry.Limit)

@@ -27,6 +27,7 @@ type RepoOptions struct {
 	GCPServiceAccountKeyPath string
 	ForceHttpBasicAuth       bool //nolint:revive //FIXME(var-naming)
 	UseAzureWorkloadIdentity bool
+	Depth                    int64
 }

 func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
@@ -53,4 +54,5 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
 	command.Flags().BoolVar(&opts.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force use of basic auth when connecting repository via HTTP")
 	command.Flags().BoolVar(&opts.UseAzureWorkloadIdentity, "use-azure-workload-identity", false, "whether to use azure workload identity for authentication")
 	command.Flags().BoolVar(&opts.InsecureOCIForceHTTP, "insecure-oci-force-http", false, "Use http when accessing an OCI repository")
+	command.Flags().Int64Var(&opts.Depth, "depth", 0, "Specify a custom depth for git clone operations. Unless specified, a full clone is performed using the depth of 0")
 }

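Usage-wise, the new flag enables shallow clones, for example `argocd repo add https://github.com/org/example.git --depth 1` (the repository URL here is hypothetical); leaving --depth at its default of 0 keeps the existing full-clone behaviour.
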
@@ -1,6 +1,7 @@
 package cmpserver

 import (
+	"context"
 	"fmt"
 	"net"
 	"os"
@@ -85,7 +86,8 @@ func (a *ArgoCDCMPServer) Run() {

 	// Listen on the socket address
 	_ = os.Remove(config.Address())
-	listener, err := net.Listen("unix", config.Address())
+	lc := &net.ListenConfig{}
+	listener, err := lc.Listen(context.Background(), "unix", config.Address())
 	errors.CheckError(err)
 	log.Infof("argocd-cmp-server %s serving on %s", common.GetVersion(), listener.Addr())

95 commitserver/apiclient/mocks/Clientset.go generated
@@ -1,101 +1,14 @@
-// Code generated by mockery; DO NOT EDIT.
-// github.com/vektra/mockery
-// template: testify
-
 package mocks

 import (
 	"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
-	"github.com/argoproj/argo-cd/v3/util/io"
-	mock "github.com/stretchr/testify/mock"
+	utilio "github.com/argoproj/argo-cd/v3/util/io"
 )

-// NewClientset creates a new instance of Clientset. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
-// The first argument is typically a *testing.T value.
-func NewClientset(t interface {
-	mock.TestingT
-	Cleanup(func())
-}) *Clientset {
-	mock := &Clientset{}
-	mock.Mock.Test(t)
-
-	t.Cleanup(func() { mock.AssertExpectations(t) })
-
-	return mock
-}
-
-// Clientset is an autogenerated mock type for the Clientset type
 type Clientset struct {
-	mock.Mock
+	CommitServiceClient apiclient.CommitServiceClient
 }

-type Clientset_Expecter struct {
-	mock *mock.Mock
-}
-
-func (_m *Clientset) EXPECT() *Clientset_Expecter {
-	return &Clientset_Expecter{mock: &_m.Mock}
-}
-
-// NewCommitServerClient provides a mock function for the type Clientset
-func (_mock *Clientset) NewCommitServerClient() (io.Closer, apiclient.CommitServiceClient, error) {
-	ret := _mock.Called()
-
-	if len(ret) == 0 {
-		panic("no return value specified for NewCommitServerClient")
-	}
-
-	var r0 io.Closer
-	var r1 apiclient.CommitServiceClient
-	var r2 error
-	if returnFunc, ok := ret.Get(0).(func() (io.Closer, apiclient.CommitServiceClient, error)); ok {
-		return returnFunc()
-	}
-	if returnFunc, ok := ret.Get(0).(func() io.Closer); ok {
-		r0 = returnFunc()
-	} else {
-		if ret.Get(0) != nil {
-			r0 = ret.Get(0).(io.Closer)
-		}
-	}
-	if returnFunc, ok := ret.Get(1).(func() apiclient.CommitServiceClient); ok {
-		r1 = returnFunc()
-	} else {
-		if ret.Get(1) != nil {
-			r1 = ret.Get(1).(apiclient.CommitServiceClient)
-		}
-	}
-	if returnFunc, ok := ret.Get(2).(func() error); ok {
-		r2 = returnFunc()
-	} else {
-		r2 = ret.Error(2)
-	}
-	return r0, r1, r2
-}
-
-// Clientset_NewCommitServerClient_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'NewCommitServerClient'
-type Clientset_NewCommitServerClient_Call struct {
-	*mock.Call
-}
-
-// NewCommitServerClient is a helper method to define mock.On call
-func (_e *Clientset_Expecter) NewCommitServerClient() *Clientset_NewCommitServerClient_Call {
-	return &Clientset_NewCommitServerClient_Call{Call: _e.mock.On("NewCommitServerClient")}
-}
-
-func (_c *Clientset_NewCommitServerClient_Call) Run(run func()) *Clientset_NewCommitServerClient_Call {
-	_c.Call.Run(func(args mock.Arguments) {
-		run()
-	})
-	return _c
-}
-
-func (_c *Clientset_NewCommitServerClient_Call) Return(closer io.Closer, commitServiceClient apiclient.CommitServiceClient, err error) *Clientset_NewCommitServerClient_Call {
-	_c.Call.Return(closer, commitServiceClient, err)
-	return _c
-}
-
-func (_c *Clientset_NewCommitServerClient_Call) RunAndReturn(run func() (io.Closer, apiclient.CommitServiceClient, error)) *Clientset_NewCommitServerClient_Call {
-	_c.Call.Return(run)
-	return _c
+func (c *Clientset) NewCommitServerClient() (utilio.Closer, apiclient.CommitServiceClient, error) {
+	return utilio.NopCloser, c.CommitServiceClient, nil
 }

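The regenerated mockery file is gone; in its place is a hand-written stub whose NewCommitServerClient always succeeds, returning utilio.NopCloser and whatever client the test plants on the struct. Hypothetical usage in a test, sketched (myFakeClient is assumed to satisfy apiclient.CommitServiceClient):

    cs := &mocks.Clientset{CommitServiceClient: myFakeClient}
    closer, client, err := cs.NewCommitServerClient() // err is always nil
    defer closer.Close()
    _ = client
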
@@ -229,7 +229,7 @@ func (s *Service) initGitClient(logCtx *log.Entry, r *apiclient.CommitHydratedManifestsRequest)
 	}

 	logCtx.Debugf("Fetching repo %s", r.Repo.Repo)
-	err = gitClient.Fetch("")
+	err = gitClient.Fetch("", 0)
 	if err != nil {
 		cleanupOrLog()
 		return nil, "", nil, fmt.Errorf("failed to clone repo: %w", err)

@@ -82,7 +82,7 @@ func Test_CommitHydratedManifests(t *testing.T) {
 		t.Parallel()

 		service, mockRepoClientFactory := newServiceWithMocks(t)
-		mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(nil, assert.AnError).Once()
+		mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(nil, assert.AnError).Once()

 		_, err := service.CommitHydratedManifests(t.Context(), validRequest)
 		require.Error(t, err)
@@ -94,14 +94,14 @@ func Test_CommitHydratedManifests(t *testing.T) {

 		service, mockRepoClientFactory := newServiceWithMocks(t)
 		mockGitClient := gitmocks.NewClient(t)
-		mockGitClient.On("Init").Return(nil).Once()
-		mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
-		mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
-		mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
-		mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
-		mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
-		mockGitClient.On("CommitSHA").Return("it-worked!", nil).Once()
-		mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
+		mockGitClient.EXPECT().Init().Return(nil).Once()
+		mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
+		mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
+		mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
+		mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
+		mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
+		mockGitClient.EXPECT().CommitSHA().Return("it-worked!", nil).Once()
+		mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()

 		resp, err := service.CommitHydratedManifests(t.Context(), validRequest)
 		require.NoError(t, err)
@@ -114,14 +114,14 @@ func Test_CommitHydratedManifests(t *testing.T) {

 		service, mockRepoClientFactory := newServiceWithMocks(t)
 		mockGitClient := gitmocks.NewClient(t)
-		mockGitClient.On("Init").Return(nil).Once()
-		mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
-		mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
-		mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
-		mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
-		mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
-		mockGitClient.On("CommitSHA").Return("root-and-blank-sha", nil).Once()
-		mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
+		mockGitClient.EXPECT().Init().Return(nil).Once()
+		mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
+		mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
+		mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
+		mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
+		mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
+		mockGitClient.EXPECT().CommitSHA().Return("root-and-blank-sha", nil).Once()
+		mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()

 		requestWithRootAndBlank := &apiclient.CommitHydratedManifestsRequest{
 			Repo: &v1alpha1.Repository{
@@ -161,15 +161,15 @@ func Test_CommitHydratedManifests(t *testing.T) {

 		service, mockRepoClientFactory := newServiceWithMocks(t)
 		mockGitClient := gitmocks.NewClient(t)
-		mockGitClient.On("Init").Return(nil).Once()
-		mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
-		mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
-		mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
-		mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
-		mockGitClient.On("RemoveContents", []string{"apps/staging"}).Return("", nil).Once()
-		mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
-		mockGitClient.On("CommitSHA").Return("subdir-path-sha", nil).Once()
-		mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
+		mockGitClient.EXPECT().Init().Return(nil).Once()
+		mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
+		mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
+		mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
+		mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
+		mockGitClient.EXPECT().RemoveContents([]string{"apps/staging"}).Return("", nil).Once()
+		mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
+		mockGitClient.EXPECT().CommitSHA().Return("subdir-path-sha", nil).Once()
+		mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()

 		requestWithSubdirPath := &apiclient.CommitHydratedManifestsRequest{
 			Repo: &v1alpha1.Repository{
@@ -201,15 +201,15 @@ func Test_CommitHydratedManifests(t *testing.T) {

 		service, mockRepoClientFactory := newServiceWithMocks(t)
 		mockGitClient := gitmocks.NewClient(t)
-		mockGitClient.On("Init").Return(nil).Once()
-		mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
-		mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
-		mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
-		mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
-		mockGitClient.On("RemoveContents", []string{"apps/production", "apps/staging"}).Return("", nil).Once()
-		mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
-		mockGitClient.On("CommitSHA").Return("mixed-paths-sha", nil).Once()
-		mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
+		mockGitClient.EXPECT().Init().Return(nil).Once()
+		mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
+		mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
+		mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
+		mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
+		mockGitClient.EXPECT().RemoveContents([]string{"apps/production", "apps/staging"}).Return("", nil).Once()
+		mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
+		mockGitClient.EXPECT().CommitSHA().Return("mixed-paths-sha", nil).Once()
+		mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()

 		requestWithMixedPaths := &apiclient.CommitHydratedManifestsRequest{
 			Repo: &v1alpha1.Repository{
@@ -257,14 +257,14 @@ func Test_CommitHydratedManifests(t *testing.T) {

 		service, mockRepoClientFactory := newServiceWithMocks(t)
 		mockGitClient := gitmocks.NewClient(t)
-		mockGitClient.On("Init").Return(nil).Once()
-		mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
-		mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
-		mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
-		mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
-		mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
-		mockGitClient.On("CommitSHA").Return("it-worked!", nil).Once()
-		mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
+		mockGitClient.EXPECT().Init().Return(nil).Once()
+		mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
+		mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
+		mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
+		mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
+		mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
+		mockGitClient.EXPECT().CommitSHA().Return("it-worked!", nil).Once()
+		mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()

 		requestWithEmptyPaths := &apiclient.CommitHydratedManifestsRequest{
 			Repo: &v1alpha1.Repository{

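On the test churn above: mockery's stringly-typed `.On("Name", args...)` matches methods by name and untyped arguments, while the generated `.EXPECT()` expecter produces typed helpers, so a renamed method or a changed signature (such as Fetch gaining a depth parameter) fails at compile time rather than at run time. Schematically, assuming an expecter-enabled mock:

    mockGitClient.On("Fetch", mock.Anything, mock.Anything).Return(nil).Once()    // matched by string at run time
    mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once() // checked by the compiler
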
@@ -229,6 +229,7 @@ const (
 	// AnnotationKeyAppSkipReconcile tells the Application to skip the Application controller reconcile.
 	// Skip reconcile when the value is "true" or any other string values that can be strconv.ParseBool() to be true.
 	AnnotationKeyAppSkipReconcile = "argocd.argoproj.io/skip-reconcile"
+
 	// LabelKeyComponentRepoServer is the label key to identify the component as repo-server
 	LabelKeyComponentRepoServer = "app.kubernetes.io/component"
 	// LabelValueComponentRepoServer is the label value for the repo-server component

@@ -39,6 +39,7 @@ import (
 	"k8s.io/client-go/informers"
+	informerv1 "k8s.io/client-go/informers/apps/v1"
 	"k8s.io/client-go/kubernetes"
 	"k8s.io/client-go/rest"
 	"k8s.io/client-go/tools/cache"
 	"k8s.io/client-go/util/workqueue"
 	"k8s.io/utils/ptr"
@@ -466,7 +467,7 @@ func (ctrl *ApplicationController) handleObjectUpdated(managedByApp map[string]bool
 		// Enforce application's permission for the source namespace
 		_, err = ctrl.getAppProj(app)
 		if err != nil {
-			logCtx.Errorf("Unable to determine project for app '%s': %v", app.QualifiedName(), err)
+			logCtx.WithError(err).Errorf("Unable to determine project for app")
 			continue
 		}

@@ -902,12 +903,12 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int

 	clusters, err := ctrl.db.ListClusters(ctx)
 	if err != nil {
-		log.Warnf("Cannot init sharding. Error while querying clusters list from database: %v", err)
+		log.WithError(err).Warn("Cannot init sharding. Error while querying clusters list from database")
 	} else {
 		appItems, err := ctrl.getAppList(metav1.ListOptions{})

 		if err != nil {
-			log.Warnf("Cannot init sharding. Error while querying application list from database: %v", err)
+			log.WithError(err).Warn("Cannot init sharding. Error while querying application list from database")
 		} else {
 			ctrl.clusterSharding.Init(clusters, appItems)
 		}
@@ -1000,29 +1001,29 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext bool) {
 	appKey, shutdown := ctrl.appOperationQueue.Get()
 	if shutdown {
 		processNext = false
-		return
+		return processNext
 	}
 	processNext = true
 	defer func() {
 		if r := recover(); r != nil {
-			log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
+			log.WithField("appkey", appKey).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
 		}
 		ctrl.appOperationQueue.Done(appKey)
 	}()

 	obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
 	if err != nil {
-		log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
-		return
+		log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
+		return processNext
 	}
 	if !exists {
 		// This happens after app was deleted, but the work queue still had an entry for it.
-		return
+		return processNext
 	}
 	origApp, ok := obj.(*appv1.Application)
 	if !ok {
-		log.Warnf("Key '%s' in index is not an application", appKey)
-		return
+		log.WithField("appkey", appKey).Warn("Key in index is not an application")
+		return processNext
 	}
 	app := origApp.DeepCopy()
 	logCtx := log.WithFields(applog.GetAppLogFields(app))
@@ -1041,8 +1042,8 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext bool) {
 		// We cannot rely on informer since applications might be updated by both application controller and api server.
 		freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
 		if err != nil {
-			logCtx.Errorf("Failed to retrieve latest application state: %v", err)
-			return
+			logCtx.WithError(err).Error("Failed to retrieve latest application state")
+			return processNext
 		}
 		app = freshApp
 	}
@@ -1064,7 +1065,7 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext bool) {
 		}
 		ts.AddCheckpoint("finalize_application_deletion_ms")
 	}
-	return
+	return processNext
 }

 func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processNext bool) {

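The recurring `return` to `return processNext` edits in this file trade Go's naked returns for explicit ones: with a named result, a bare `return` silently yields whatever `processNext` currently holds, which is easy to misread in long functions. A compact illustration:

    package main

    import "fmt"

    // With a named result, `return` and `return processNext` are equivalent,
    // but the explicit form makes the returned value visible at the call site.
    func process(shutdown bool) (processNext bool) {
    	if shutdown {
    		processNext = false
    		return processNext // previously just `return`
    	}
    	processNext = true
    	return processNext
    }

    func main() {
    	fmt.Println(process(true), process(false)) // false true
    }
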
@@ -1073,26 +1074,26 @@ func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processNext bool) {

 	defer func() {
 		if r := recover(); r != nil {
-			log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
+			log.WithField("appkey", key).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
 		}
 		ctrl.appComparisonTypeRefreshQueue.Done(key)
 	}()
 	if shutdown {
 		processNext = false
-		return
+		return processNext
 	}

 	if parts := strings.Split(key, "/"); len(parts) != 3 {
-		log.Warnf("Unexpected key format in appComparisonTypeRefreshTypeQueue. Key should consists of namespace/name/comparisonType but got: %s", key)
+		log.WithField("appkey", key).Warn("Unexpected key format in appComparisonTypeRefreshTypeQueue. Key should consist of namespace/name/comparisonType")
 	} else {
 		compareWith, err := strconv.Atoi(parts[2])
 		if err != nil {
-			log.Warnf("Unable to parse comparison type: %v", err)
-			return
+			log.WithField("appkey", key).WithError(err).Warn("Unable to parse comparison type")
+			return processNext
 		}
 		ctrl.requestAppRefresh(ctrl.toAppQualifiedName(parts[1], parts[0]), CompareWith(compareWith).Pointer(), nil)
 	}
-	return
+	return processNext
 }

 func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool) {
@@ -1101,35 +1102,35 @@ func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool) {

 	defer func() {
 		if r := recover(); r != nil {
-			log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
+			log.WithField("key", key).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
 		}
 		ctrl.projectRefreshQueue.Done(key)
 	}()
 	if shutdown {
 		processNext = false
-		return
+		return processNext
 	}
 	obj, exists, err := ctrl.projInformer.GetIndexer().GetByKey(key)
 	if err != nil {
-		log.Errorf("Failed to get project '%s' from informer index: %+v", key, err)
-		return
+		log.WithField("key", key).WithError(err).Error("Failed to get project from informer index")
+		return processNext
 	}
 	if !exists {
 		// This happens after appproj was deleted, but the work queue still had an entry for it.
-		return
+		return processNext
 	}
 	origProj, ok := obj.(*appv1.AppProject)
 	if !ok {
-		log.Warnf("Key '%s' in index is not an appproject", key)
-		return
+		log.WithField("key", key).Warnf("Key in index is not an appproject")
+		return processNext
 	}

 	if origProj.DeletionTimestamp != nil && origProj.HasFinalizer() {
 		if err := ctrl.finalizeProjectDeletion(origProj.DeepCopy()); err != nil {
-			log.Warnf("Failed to finalize project deletion: %v", err)
+			log.WithError(err).Warn("Failed to finalize project deletion")
 		}
 	}
-	return
+	return processNext
 }

 func (ctrl *ApplicationController) finalizeProjectDeletion(proj *appv1.AppProject) error {

@@ -1194,7 +1195,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Application
 	app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
 	if err != nil {
 		if !apierrors.IsNotFound(err) {
-			logCtx.Errorf("Unable to get refreshed application info prior deleting resources: %v", err)
+			logCtx.WithError(err).Error("Unable to get refreshed application info prior deleting resources")
 		}
 		return nil
 	}
@@ -1204,7 +1205,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Application
 	}
 	destCluster, err := argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
 	if err != nil {
-		logCtx.Warnf("Unable to get destination cluster: %v", err)
+		logCtx.WithError(err).Warn("Unable to get destination cluster")
 		app.UnSetCascadedDeletion()
 		app.UnSetPostDeleteFinalizerAll()
 		if err := ctrl.updateFinalizers(app); err != nil {
@@ -1219,6 +1220,11 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Application
 	}
 	config := metrics.AddMetricsTransportWrapper(ctrl.metricsServer, app, clusterRESTConfig)

+	// Apply impersonation config if necessary
+	if err := ctrl.applyImpersonationConfig(config, proj, app, destCluster); err != nil {
+		return fmt.Errorf("cannot apply impersonation: %w", err)
+	}
+
 	if app.CascadedDeletion() {
 		deletionApproved := app.IsDeletionConfirmed(app.DeletionTimestamp.Time)

@@ -1367,7 +1373,7 @@ func (ctrl *ApplicationController) setAppCondition(app *appv1.Application, condition appv1.ApplicationCondition) {
 		_, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Patch(context.Background(), app.Name, types.MergePatchType, patch, metav1.PatchOptions{})
 	}
 	if err != nil {
-		logCtx.Errorf("Unable to set application condition: %v", err)
+		logCtx.WithError(err).Error("Unable to set application condition")
 	}
 }

@@ -1511,7 +1517,7 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Application) {
 			// force app refresh with using CompareWithLatest comparison type and trigger app reconciliation loop
 			ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatestForceResolve.Pointer(), nil)
 		} else {
-			logCtx.Warnf("Fails to requeue application: %v", err)
+			logCtx.WithError(err).Warn("Fails to requeue application")
 		}
 	}
 	ts.AddCheckpoint("request_app_refresh_ms")

@@ -1543,13 +1549,13 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, state *appv1.OperationState) {
 	}
 	patchJSON, err := json.Marshal(patch)
 	if err != nil {
-		logCtx.Errorf("error marshaling json: %v", err)
+		logCtx.WithError(err).Error("error marshaling json")
 		return
 	}
 	if app.Status.OperationState != nil && app.Status.OperationState.FinishedAt != nil && state.FinishedAt == nil {
 		patchJSON, err = jsonpatch.MergeMergePatches(patchJSON, []byte(`{"status": {"operationState": {"finishedAt": null}}}`))
 		if err != nil {
-			logCtx.Errorf("error merging operation state patch: %v", err)
+			logCtx.WithError(err).Error("error merging operation state patch")
 			return
 		}
 	}
@@ -1563,7 +1569,7 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, state *appv1.OperationState) {
 		}
 		// kube.RetryUntilSucceed logs failed attempts at "debug" level, but we want to know if this fails. Log a
 		// warning.
-		logCtx.Warnf("error patching application with operation state: %v", err)
+		logCtx.WithError(err).Warn("error patching application with operation state")
 		return fmt.Errorf("error patching application with operation state: %w", err)
 	}
 	return nil
@@ -1592,7 +1598,7 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, state *appv1.OperationState) {

 	destCluster, err := argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
 	if err != nil {
-		logCtx.Warnf("Unable to get destination cluster, setting dest_server label to empty string in sync metric: %v", err)
+		logCtx.WithError(err).Warn("Unable to get destination cluster, setting dest_server label to empty string in sync metric")
 	}
 	destServer := ""
 	if destCluster != nil {
@@ -1609,7 +1615,7 @@ func (ctrl *ApplicationController) writeBackToInformer(app *appv1.Application) {
 	logCtx := log.WithFields(applog.GetAppLogFields(app)).WithField("informer-writeBack", true)
 	err := ctrl.appInformer.GetStore().Update(app)
 	if err != nil {
-		logCtx.Errorf("failed to update informer store: %v", err)
+		logCtx.WithError(err).Error("failed to update informer store")
 		return
 	}
 }
@@ -1630,12 +1636,12 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext bool) {
 	appKey, shutdown := ctrl.appRefreshQueue.Get()
 	if shutdown {
 		processNext = false
-		return
+		return processNext
 	}
 	processNext = true
 	defer func() {
 		if r := recover(); r != nil {
-			log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
+			log.WithField("appkey", appKey).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
 		}
 		// We want to have app operation update happen after the sync, so there's no race condition
 		// and app updates not proceeding. See https://github.com/argoproj/argo-cd/issues/18500.
@@ -1644,23 +1650,23 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext bool) {
 	}()
 	obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
 	if err != nil {
-		log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
-		return
+		log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
+		return processNext
 	}
 	if !exists {
 		// This happens after app was deleted, but the work queue still had an entry for it.
-		return
+		return processNext
 	}
 	origApp, ok := obj.(*appv1.Application)
 	if !ok {
-		log.Warnf("Key '%s' in index is not an application", appKey)
-		return
+		log.WithField("appkey", appKey).Warn("Key in index is not an application")
+		return processNext
 	}
 	origApp = origApp.DeepCopy()
 	needRefresh, refreshType, comparisonLevel := ctrl.needRefreshAppStatus(origApp, ctrl.statusRefreshTimeout, ctrl.statusHardRefreshTimeout)

 	if !needRefresh {
-		return
+		return processNext
 	}
 	app := origApp.DeepCopy()
 	logCtx := log.WithFields(applog.GetAppLogFields(app)).WithFields(log.Fields{
@@ -1702,13 +1708,13 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext bool) {
 			if tree, err = ctrl.getResourceTree(destCluster, app, managedResources); err == nil {
 				app.Status.Summary = tree.GetSummary(app)
 				if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), tree); err != nil {
-					logCtx.Errorf("Failed to cache resources tree: %v", err)
-					return
+					logCtx.WithError(err).Error("Failed to cache resources tree")
+					return processNext
 				}
 			}

 			patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
-			return
+			return processNext
 		}
 		logCtx.Warnf("Failed to get cached managed resources for tree reconciliation, fall back to full reconciliation")
 	}
@@ -1724,20 +1730,20 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext bool) {
 		patchDuration = ctrl.persistAppStatus(origApp, &app.Status)

 		if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), &appv1.ApplicationTree{}); err != nil {
-			logCtx.Warnf("failed to set app resource tree: %v", err)
+			logCtx.WithError(err).Warn("failed to set app resource tree")
 		}
 		if err := ctrl.cache.SetAppManagedResources(app.InstanceName(ctrl.namespace), nil); err != nil {
-			logCtx.Warnf("failed to set app managed resources tree: %v", err)
+			logCtx.WithError(err).Warn("failed to set app managed resources tree")
 		}
 		ts.AddCheckpoint("process_refresh_app_conditions_errors_ms")
-		return
+		return processNext
 	}

 	destCluster, err = argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
 	if err != nil {
-		logCtx.Errorf("Failed to get destination cluster: %v", err)
+		logCtx.WithError(err).Error("Failed to get destination cluster")
 		// exit the reconciliation. ctrl.refreshAppConditions should have caught the error
-		return
+		return processNext
 	}

 	var localManifests []string
@@ -1777,8 +1783,8 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext bool) {
 	ts.AddCheckpoint("compare_app_state_ms")

 	if stderrors.Is(err, ErrCompareStateRepo) {
-		logCtx.Warnf("Ignoring temporary failed attempt to compare app state against repo: %v", err)
-		return // short circuit if git error is encountered
+		logCtx.WithError(err).Warn("Ignoring temporary failed attempt to compare app state against repo")
+		return processNext // short circuit if git error is encountered
 	}

 	for k, v := range compareResult.timings {
@@ -1791,7 +1797,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext bool) {
 	tree, err := ctrl.setAppManagedResources(destCluster, app, compareResult)
 	ts.AddCheckpoint("set_app_managed_resources_ms")
 	if err != nil {
-		logCtx.Errorf("Failed to cache app resources: %v", err)
+		logCtx.WithError(err).Error("Failed to cache app resources")
 	} else {
 		app.Status.Summary = tree.GetSummary(app)
 	}
@@ -1843,60 +1849,54 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}

if err := ctrl.updateFinalizers(app); err != nil {
- logCtx.Errorf("Failed to update finalizers: %v", err)
+ logCtx.WithError(err).Error("Failed to update finalizers")
}
}
ts.AddCheckpoint("process_finalizers_ms")
- return
+ return processNext
}

func (ctrl *ApplicationController) processAppHydrateQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appHydrateQueue.Get()
if shutdown {
processNext = false
- return
+ return processNext
}
processNext = true
defer func() {
if r := recover(); r != nil {
- log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
+ log.WithField("appkey", appKey).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appHydrateQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
- log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
- return
+ log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
+ return processNext
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
- return
+ return processNext
}
origApp, ok := obj.(*appv1.Application)
if !ok {
- log.Warnf("Key '%s' in index is not an application", appKey)
- return
+ log.WithField("appkey", appKey).Warn("Key in index is not an application")
+ return processNext
}

- ctrl.hydrator.ProcessAppHydrateQueueItem(origApp)
+ ctrl.hydrator.ProcessAppHydrateQueueItem(origApp.DeepCopy())

log.WithFields(applog.GetAppLogFields(origApp)).Debug("Successfully processed app hydrate queue item")
- return
+ return processNext
}

func (ctrl *ApplicationController) processHydrationQueueItem() (processNext bool) {
hydrationKey, shutdown := ctrl.hydrationQueue.Get()
if shutdown {
processNext = false
- return
+ return processNext
}
processNext = true
- defer func() {
- if r := recover(); r != nil {
- log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
- }
- ctrl.hydrationQueue.Done(hydrationKey)
- }()

logCtx := log.WithFields(log.Fields{
"sourceRepoURL": hydrationKey.SourceRepoURL,
@@ -1904,12 +1904,19 @@ func (ctrl *ApplicationController) processHydrationQueueItem() (processNext bool
"destinationBranch": hydrationKey.DestinationBranch,
})

+ defer func() {
+ if r := recover(); r != nil {
+ logCtx.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
+ }
+ ctrl.hydrationQueue.Done(hydrationKey)
+ }()

+ logCtx.Debug("Processing hydration queue item")

ctrl.hydrator.ProcessHydrationQueueItem(hydrationKey)

logCtx.Debug("Successfully processed hydration queue item")
- return
+ return processNext
}

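Every bare `return` in these worker functions becomes an explicit `return processNext`. With a named result the two forms compile identically, but the explicit form makes it obvious at each exit point whether the queue loop should keep running. A small self-contained sketch of the same shutdown-aware worker shape (the channel-based queue is a stand-in, not the workqueue API the controller actually uses):

```go
package main

import "fmt"

// processItem mirrors the controller's worker shape: the named result
// processNext tells the caller whether to keep draining the queue.
func processItem(items chan string) (processNext bool) {
	item, ok := <-items
	if !ok {
		// Queue was shut down; stop the loop.
		processNext = false
		return processNext // explicit return beats a bare "return" here
	}
	processNext = true
	defer fmt.Println("done:", item)
	// ... handle item ...
	return processNext
}

func main() {
	items := make(chan string, 1)
	items <- "app-1"
	close(items)
	for processItem(items) {
	}
}
```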
func resourceStatusKey(res appv1.ResourceStatus) string {
@@ -2012,11 +2019,11 @@ func (ctrl *ApplicationController) normalizeApplication(orig, app *appv1.Applica
patch, modified, err := diff.CreateTwoWayMergePatch(orig, app, appv1.Application{})

if err != nil {
- logCtx.Errorf("error constructing app spec patch: %v", err)
+ logCtx.WithError(err).Error("error constructing app spec patch")
} else if modified {
_, err := ctrl.PatchAppWithWriteBack(context.Background(), app.Name, app.Namespace, types.MergePatchType, patch, metav1.PatchOptions{})
if err != nil {
- logCtx.Errorf("Error persisting normalized application spec: %v", err)
+ logCtx.WithError(err).Error("Error persisting normalized application spec")
} else {
logCtx.Infof("Normalized app spec: %s", string(patch))
}
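`diff.CreateTwoWayMergePatch` produces a merge patch from the original and normalized application and reports whether anything changed, so the controller only issues a PATCH when `modified` is true. A rough standalone illustration of the same idea built directly on apimachinery's strategic-merge helper (the Argo CD wrapper's exact signature differs; the `modified` check here is an assumption about how it is derived):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	orig := corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "demo"}}
	norm := orig
	norm.Labels = map[string]string{"normalized": "true"}

	origJSON, _ := json.Marshal(orig)
	normJSON, _ := json.Marshal(norm)

	// The patch contains only the delta between the two objects.
	patch, err := strategicpatch.CreateTwoWayMergePatch(origJSON, normJSON, corev1.Pod{})
	if err != nil {
		panic(err)
	}
	// "modified" in the controller corresponds to the patch being non-trivial.
	modified := string(patch) != "{}"
	fmt.Println(modified, string(patch))
}
```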
@@ -2071,12 +2078,12 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: orig.GetAnnotations()}, Status: orig.Status},
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: newAnnotations}, Status: *newStatus})
if err != nil {
- logCtx.Errorf("Error constructing app status patch: %v", err)
- return
+ logCtx.WithError(err).Error("Error constructing app status patch")
+ return patchDuration
}
if !modified {
logCtx.Infof("No status changes. Skipping patch")
- return
+ return patchDuration
}
// calculate time for path call
start := time.Now()
@@ -2085,7 +2092,7 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
}()
_, err = ctrl.PatchAppWithWriteBack(context.Background(), orig.Name, orig.Namespace, types.MergePatchType, patch, metav1.PatchOptions{})
if err != nil {
- logCtx.Warnf("Error updating application: %v", err)
+ logCtx.WithError(err).Warn("Error updating application")
} else {
logCtx.Infof("Update successful")
}
@@ -2230,11 +2237,11 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
if stderrors.Is(err, argo.ErrAnotherOperationInProgress) {
// skipping auto-sync because another operation is in progress and was not noticed due to stale data in informer
// it is safe to skip auto-sync because it is already running
- logCtx.Warnf("Failed to initiate auto-sync to %s: %v", desiredRevisions, err)
+ logCtx.WithError(err).Warnf("Failed to initiate auto-sync to %s", desiredRevisions)
return nil, 0
}

- logCtx.Errorf("Failed to initiate auto-sync to %s: %v", desiredRevisions, err)
+ logCtx.WithError(err).Errorf("Failed to initiate auto-sync to %s", desiredRevisions)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: err.Error()}, setOpTime
}
ctrl.writeBackToInformer(updatedApp)
@@ -2360,7 +2367,7 @@ func (ctrl *ApplicationController) canProcessApp(obj any) bool {
return false
}
} else {
- logCtx.Debugf("Unable to determine if Application should skip reconcile based on annotation %s: %v", common.AnnotationKeyAppSkipReconcile, err)
+ logCtx.WithError(err).Debugf("Unable to determine if Application should skip reconcile based on annotation %s", common.AnnotationKeyAppSkipReconcile)
}
}
}
@@ -2615,4 +2622,22 @@ func (ctrl *ApplicationController) logAppEvent(ctx context.Context, a *appv1.App
ctrl.auditLogger.LogAppEvent(a, eventInfo, message, "", eventLabels)
}

+ func (ctrl *ApplicationController) applyImpersonationConfig(config *rest.Config, proj *appv1.AppProject, app *appv1.Application, destCluster *appv1.Cluster) error {
+ impersonationEnabled, err := ctrl.settingsMgr.IsImpersonationEnabled()
+ if err != nil {
+ return fmt.Errorf("error getting impersonation setting: %w", err)
+ }
+ if !impersonationEnabled {
+ return nil
+ }
+ user, err := deriveServiceAccountToImpersonate(proj, app, destCluster)
+ if err != nil {
+ return fmt.Errorf("error deriving service account to impersonate: %w", err)
+ }
+ config.Impersonate = rest.ImpersonationConfig{
+ UserName: user,
+ }
+ return nil
+ }

type ClusterFilterFunction func(c *appv1.Cluster, distributionFunction sharding.DistributionFunction) bool
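The new `applyImpersonationConfig` mutates a client-go `rest.Config` so that requests the controller sends are executed as the derived service account. client-go's impersonation support is just a field on the config; a minimal sketch (the service-account user string below is an assumption based on the usual Kubernetes convention, not taken from this diff):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	config := &rest.Config{Host: "https://localhost:6443"}

	// Kubernetes encodes service accounts as users with this prefix;
	// deriveServiceAccountToImpersonate presumably yields such a string.
	config.Impersonate = rest.ImpersonationConfig{
		UserName: "system:serviceaccount:guestbook:deployer",
	}

	fmt.Println("requests will be sent as:", config.Impersonate.UserName)
}
```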
controller/appcontroller_test.go
@@ -5,6 +5,7 @@ import (
"encoding/json"
"errors"
"fmt"
+ "strconv"
"testing"
"time"

@@ -94,11 +95,11 @@ func (m *MockKubectl) DeleteResource(ctx context.Context, config *rest.Config, g
return m.Kubectl.DeleteResource(ctx, config, gvk, name, namespace, deleteOptions)
}

- func newFakeController(data *fakeData, repoErr error) *ApplicationController {
- return newFakeControllerWithResync(data, time.Minute, repoErr, nil)
+ func newFakeController(ctx context.Context, data *fakeData, repoErr error) *ApplicationController {
+ return newFakeControllerWithResync(ctx, data, time.Minute, repoErr, nil)
}

- func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration, repoErr, revisionPathsErr error) *ApplicationController {
+ func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncPeriod time.Duration, repoErr, revisionPathsErr error) *ApplicationController {
var clust corev1.Secret
err := yaml.Unmarshal([]byte(fakeCluster), &clust)
if err != nil {
@@ -106,33 +107,33 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
}

// Mock out call to GenerateManifest
- mockRepoClient := mockrepoclient.RepoServerServiceClient{}
+ mockRepoClient := &mockrepoclient.RepoServerServiceClient{}

if len(data.manifestResponses) > 0 {
for _, response := range data.manifestResponses {
if repoErr != nil {
- mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(response, repoErr).Once()
+ mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(response, repoErr).Once()
} else {
- mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(response, nil).Once()
+ mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(response, nil).Once()
}
}
} else {
if repoErr != nil {
- mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(data.manifestResponse, repoErr).Once()
+ mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(data.manifestResponse, repoErr).Once()
} else {
- mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(data.manifestResponse, nil).Once()
+ mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(data.manifestResponse, nil).Once()
}
}

if revisionPathsErr != nil {
- mockRepoClient.On("UpdateRevisionForPaths", mock.Anything, mock.Anything).Return(nil, revisionPathsErr)
+ mockRepoClient.EXPECT().UpdateRevisionForPaths(mock.Anything, mock.Anything).Return(nil, revisionPathsErr)
} else {
- mockRepoClient.On("UpdateRevisionForPaths", mock.Anything, mock.Anything).Return(data.updateRevisionForPathsResponse, nil)
+ mockRepoClient.EXPECT().UpdateRevisionForPaths(mock.Anything, mock.Anything).Return(data.updateRevisionForPathsResponse, nil)
}

- mockRepoClientset := mockrepoclient.Clientset{RepoServerServiceClient: &mockRepoClient}
+ mockRepoClientset := &mockrepoclient.Clientset{RepoServerServiceClient: mockRepoClient}

- mockCommitClientset := mockcommitclient.Clientset{}
+ mockCommitClientset := &mockcommitclient.Clientset{}

secret := corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
@@ -157,15 +158,15 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
runtimeObjs := []runtime.Object{&clust, &secret, &cm}
runtimeObjs = append(runtimeObjs, data.additionalObjs...)
kubeClient := fake.NewClientset(runtimeObjs...)
- settingsMgr := settings.NewSettingsManager(context.Background(), kubeClient, test.FakeArgoCDNamespace)
+ settingsMgr := settings.NewSettingsManager(ctx, kubeClient, test.FakeArgoCDNamespace)
kubectl := &MockKubectl{Kubectl: &kubetest.MockKubectlCmd{}}
ctrl, err := NewApplicationController(
test.FakeArgoCDNamespace,
settingsMgr,
kubeClient,
appclientset.NewSimpleClientset(data.apps...),
- &mockRepoClientset,
- &mockCommitClientset,
+ mockRepoClientset,
+ mockCommitClientset,
appstatecache.NewCache(
cacheutil.NewCache(cacheutil.NewInMemoryCache(1*time.Minute)),
1*time.Minute,
@@ -196,7 +197,7 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
false,
)
db := &dbmocks.ArgoDB{}
- db.On("GetApplicationControllerReplicas").Return(1)
+ db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
// Setting a default sharding algorithm for the tests where we cannot set it.
ctrl.clusterSharding = sharding.NewClusterSharding(db, 0, 1, common.DefaultShardingAlgorithm)
if err != nil {
@@ -206,27 +207,25 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
defer cancelProj()
cancelApp := test.StartInformer(ctrl.appInformer)
defer cancelApp()
- clusterCacheMock := mocks.ClusterCache{}
- clusterCacheMock.On("IsNamespaced", mock.Anything).Return(true, nil)
- clusterCacheMock.On("GetOpenAPISchema").Return(nil, nil)
- clusterCacheMock.On("GetGVKParser").Return(nil)
+ clusterCacheMock := &mocks.ClusterCache{}
+ clusterCacheMock.EXPECT().IsNamespaced(mock.Anything).Return(true, nil)
+ clusterCacheMock.EXPECT().GetOpenAPISchema().Return(nil)
+ clusterCacheMock.EXPECT().GetGVKParser().Return(nil)

- mockStateCache := mockstatecache.LiveStateCache{}
- ctrl.appStateManager.(*appStateManager).liveStateCache = &mockStateCache
- ctrl.stateCache = &mockStateCache
- mockStateCache.On("IsNamespaced", mock.Anything, mock.Anything).Return(true, nil)
- mockStateCache.On("GetManagedLiveObjs", mock.Anything, mock.Anything, mock.Anything).Return(data.managedLiveObjs, nil)
- mockStateCache.On("GetVersionsInfo", mock.Anything).Return("v1.2.3", nil, nil)
+ mockStateCache := &mockstatecache.LiveStateCache{}
+ ctrl.appStateManager.(*appStateManager).liveStateCache = mockStateCache
+ ctrl.stateCache = mockStateCache
+ mockStateCache.EXPECT().IsNamespaced(mock.Anything, mock.Anything).Return(true, nil)
+ mockStateCache.EXPECT().GetManagedLiveObjs(mock.Anything, mock.Anything, mock.Anything).Return(data.managedLiveObjs, nil)
+ mockStateCache.EXPECT().GetVersionsInfo(mock.Anything).Return("v1.2.3", nil, nil)
response := make(map[kube.ResourceKey]v1alpha1.ResourceNode)
for k, v := range data.namespacedResources {
response[k] = v.ResourceNode
}
- mockStateCache.On("GetNamespaceTopLevelResources", mock.Anything, mock.Anything).Return(response, nil)
- mockStateCache.On("IterateResources", mock.Anything, mock.Anything).Return(nil)
- mockStateCache.On("GetClusterCache", mock.Anything).Return(&clusterCacheMock, nil)
- mockStateCache.On("IterateHierarchyV2", mock.Anything, mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
- keys := args[1].([]kube.ResourceKey)
- action := args[2].(func(child v1alpha1.ResourceNode, appName string) bool)
+ mockStateCache.EXPECT().GetNamespaceTopLevelResources(mock.Anything, mock.Anything).Return(response, nil)
+ mockStateCache.EXPECT().IterateResources(mock.Anything, mock.Anything).Return(nil)
+ mockStateCache.EXPECT().GetClusterCache(mock.Anything).Return(clusterCacheMock, nil)
+ mockStateCache.EXPECT().IterateHierarchyV2(mock.Anything, mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Cluster, keys []kube.ResourceKey, action func(_ v1alpha1.ResourceNode, _ string) bool) {
for _, key := range keys {
appName := ""
if res, ok := data.namespacedResources[key]; ok {
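The bulk of this file's churn replaces testify's stringly-typed `mock.On("Method", ...)` with mockery's generated, type-checked `EXPECT()` builders: a typo in a method name or a wrong argument count now fails at compile time instead of silently never matching. A toy illustration of the two styles, with a hand-written mock standing in for mockery's generated code:

```go
package mocks

import "github.com/stretchr/testify/mock"

// Greeter is a hand-rolled stand-in for a mockery-generated mock.
type Greeter struct{ mock.Mock }

// Greet is the mocked method itself.
func (g *Greeter) Greet(name string) string {
	return g.Called(name).String(0)
}

// GreeterExpecter mimics the typed EXPECT() layer mockery generates.
type GreeterExpecter struct{ m *Greeter }

func (g *Greeter) EXPECT() *GreeterExpecter { return &GreeterExpecter{m: g} }

// Greet on the expecter is compile-time checked: renaming the real method
// breaks this call site, whereas a stale string in On("Greet", ...) would not.
func (e *GreeterExpecter) Greet(name any) *mock.Call {
	return e.m.On("Greet", name)
}
```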
@@ -596,7 +595,7 @@ func newFakeServiceAccount() map[string]any {

func TestAutoSync(t *testing.T) {
app := newFakeApp()
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
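Every fixture call now threads `t.Context()` (added in Go 1.24) instead of `context.Background()`: the returned context is canceled automatically when the test and its cleanups finish, so informers and watches started by the fake controller cannot outlive the test. A minimal sketch:

```go
package demo

import (
	"testing"
	"time"
)

func TestContextEndsWithTest(t *testing.T) {
	ctx := t.Context() // canceled when the test completes

	go func() {
		// Background work tied to the test's lifetime.
		select {
		case <-ctx.Done():
			// Test finished; the goroutine exits instead of leaking.
		case <-time.After(time.Minute):
		}
	}()
}
```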
@@ -614,7 +613,7 @@ func TestAutoSyncEnabledSetToTrue(t *testing.T) {
app := newFakeApp()
enable := true
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: &enable}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -635,7 +634,7 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.SyncPolicy.Automated.SelfHeal = false
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"z", "x", "v"},
@@ -650,7 +649,7 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.SyncPolicy.Automated.SelfHeal = false
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"a", "b", "c"},
@@ -666,7 +665,7 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
func TestAutoSyncNotAllowEmpty(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated.Prune = true
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -679,7 +678,7 @@ func TestAutoSyncAllowEmpty(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated.Prune = true
app.Spec.SyncPolicy.Automated.AllowEmpty = true
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -693,7 +692,7 @@ func TestSkipAutoSync(t *testing.T) {
// Set current to 'aaaaa', desired to 'aaaa' and mark system OutOfSync
t.Run("PreviouslySyncedToRevision", func(t *testing.T) {
app := newFakeApp()
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
@@ -708,7 +707,7 @@ func TestSkipAutoSync(t *testing.T) {
// Verify we skip when we are already Synced (even if revision is different)
t.Run("AlreadyInSyncedState", func(t *testing.T) {
app := newFakeApp()
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -724,7 +723,7 @@ func TestSkipAutoSync(t *testing.T) {
t.Run("AutoSyncIsDisabled", func(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy = nil
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -741,7 +740,7 @@ func TestSkipAutoSync(t *testing.T) {
app := newFakeApp()
enable := false
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: &enable}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -758,7 +757,7 @@ func TestSkipAutoSync(t *testing.T) {
app := newFakeApp()
now := metav1.Now()
app.DeletionTimestamp = &now
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -784,7 +783,7 @@ func TestSkipAutoSync(t *testing.T) {
Source: *app.Spec.Source.DeepCopy(),
},
}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -808,7 +807,7 @@ func TestSkipAutoSync(t *testing.T) {
Source: *app.Spec.Source.DeepCopy(),
},
}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -822,7 +821,7 @@ func TestSkipAutoSync(t *testing.T) {

t.Run("NeedsToPruneResourcesOnlyButAutomatedPruneDisabled", func(t *testing.T) {
app := newFakeApp()
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -848,7 +847,7 @@ func TestAutoSyncIndicateError(t *testing.T) {
},
},
}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
@@ -908,7 +907,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(t.Context(), "my-app", metav1.GetOptions{})
@@ -926,7 +925,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
},
},
}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
app.Status.OperationState.SyncResult.Sources[0].Helm = &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
@@ -973,7 +972,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
app.SetCascadedDeletion(v1alpha1.ResourcesFinalizerName)
app.DeletionTimestamp = &now
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
patched := false
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
defaultReactor := fakeAppCs.ReactionChain[0]
@@ -1018,7 +1017,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
appObj := kube.MustToUnstructured(&app)
cm := newFakeCM()
strayObj := kube.MustToUnstructured(&cm)
- ctrl := newFakeController(&fakeData{
+ ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj, &restrictedProj},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(appObj): appObj,
@@ -1059,7 +1058,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
app := newFakeAppWithDestName()
app.SetCascadedDeletion(v1alpha1.ResourcesFinalizerName)
app.DeletionTimestamp = &now
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
patched := false
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
defaultReactor := fakeAppCs.ReactionChain[0]
@@ -1085,7 +1084,7 @@ func TestFinalizeAppDeletion(t *testing.T) {

testShouldDelete := func(app *v1alpha1.Application) {
appObj := kube.MustToUnstructured(&app)
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(appObj): appObj,
}}, nil)

@@ -1119,7 +1118,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
app := newFakeApp()
app.SetPostDeleteFinalizer()
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
- ctrl := newFakeController(&fakeData{
+ ctrl := newFakeController(t.Context(), &fakeData{
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{fakePostDeleteHook},
}},
@@ -1161,7 +1160,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
},
}
require.NoError(t, unstructured.SetNestedField(liveHook.Object, conditions, "status", "conditions"))
- ctrl := newFakeController(&fakeData{
+ ctrl := newFakeController(t.Context(), &fakeData{
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{fakePostDeleteHook},
}},
@@ -1205,7 +1204,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
},
}
require.NoError(t, unstructured.SetNestedField(liveHook.Object, conditions, "status", "conditions"))
- ctrl := newFakeController(&fakeData{
+ ctrl := newFakeController(t.Context(), &fakeData{
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{fakeRoleBinding, fakeRole, fakeServiceAccount, fakePostDeleteHook},
}},
@@ -1246,6 +1245,117 @@ func TestFinalizeAppDeletion(t *testing.T) {
})
}

+ func TestFinalizeAppDeletionWithImpersonation(t *testing.T) {
+ type fixture struct {
+ application *v1alpha1.Application
+ controller *ApplicationController
+ }
+
+ setup := func(destinationNamespace, serviceAccountName string) *fixture {
+ app := newFakeApp()
+ app.Status.OperationState = nil
+ app.Status.History = nil
+ now := metav1.Now()
+ app.DeletionTimestamp = &now
+
+ project := &v1alpha1.AppProject{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: test.FakeArgoCDNamespace,
+ Name: "default",
+ },
+ Spec: v1alpha1.AppProjectSpec{
+ SourceRepos: []string{"*"},
+ Destinations: []v1alpha1.ApplicationDestination{
+ {
+ Server: "*",
+ Namespace: "*",
+ },
+ },
+ DestinationServiceAccounts: []v1alpha1.ApplicationDestinationServiceAccount{
+ {
+ Server: "https://localhost:6443",
+ Namespace: destinationNamespace,
+ DefaultServiceAccount: serviceAccountName,
+ },
+ },
+ },
+ }
+
+ additionalObjs := []runtime.Object{}
+ if serviceAccountName != "" {
+ syncServiceAccount := &corev1.ServiceAccount{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: serviceAccountName,
+ Namespace: test.FakeDestNamespace,
+ },
+ }
+ additionalObjs = append(additionalObjs, syncServiceAccount)
+ }
+
+ data := fakeData{
+ apps: []runtime.Object{app, project},
+ manifestResponse: &apiclient.ManifestResponse{
+ Manifests: []string{},
+ Namespace: test.FakeDestNamespace,
+ Server: "https://localhost:6443",
+ Revision: "abc123",
+ },
+ managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{},
+ configMapData: map[string]string{
+ "application.sync.impersonation.enabled": strconv.FormatBool(true),
+ },
+ additionalObjs: additionalObjs,
+ }
+ ctrl := newFakeController(t.Context(), &data, nil)
+ return &fixture{
+ application: app,
+ controller: ctrl,
+ }
+ }
+
+ t.Run("no matching impersonation service account is configured", func(t *testing.T) {
+ // given impersonation is enabled but no matching service account exists
+ f := setup(test.FakeDestNamespace, "")
+
+ // when
+ err := f.controller.finalizeApplicationDeletion(f.application, func(_ string) ([]*v1alpha1.Cluster, error) {
+ return []*v1alpha1.Cluster{}, nil
+ })
+
+ // then deletion should fail due to impersonation error
+ require.Error(t, err)
+ assert.Contains(t, err.Error(), "error deriving service account to impersonate")
+ })
+
+ t.Run("valid impersonation service account is configured", func(t *testing.T) {
+ // given impersonation is enabled with valid service account
+ f := setup(test.FakeDestNamespace, "test-sa")
+
+ // when
+ err := f.controller.finalizeApplicationDeletion(f.application, func(_ string) ([]*v1alpha1.Cluster, error) {
+ return []*v1alpha1.Cluster{}, nil
+ })
+
+ // then deletion should succeed
+ require.NoError(t, err)
+ })
+
+ t.Run("invalid application destination cluster", func(t *testing.T) {
+ // given impersonation is enabled but destination cluster does not exist
+ f := setup(test.FakeDestNamespace, "test-sa")
+ f.application.Spec.Destination.Server = "https://invalid-cluster:6443"
+ f.application.Spec.Destination.Name = "invalid"
+
+ // when
+ err := f.controller.finalizeApplicationDeletion(f.application, func(_ string) ([]*v1alpha1.Cluster, error) {
+ return []*v1alpha1.Cluster{}, nil
+ })
+
+ // then deletion should still succeed by removing finalizers
+ require.NoError(t, err)
+ })
+ }

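The new test exercises `deriveServiceAccountToImpersonate` indirectly through the project's `DestinationServiceAccounts` list. The actual matching rules live outside this diff; the following is only a plausible sketch, under the assumption that entries are matched on server and namespace and rendered as a Kubernetes service-account user, not Argo CD's implementation:

```go
package demo

import "fmt"

type DestinationServiceAccount struct {
	Server, Namespace, DefaultServiceAccount string
}

// deriveUser is an illustrative stand-in: pick the first entry matching the
// app's destination and format it as a Kubernetes service-account user name.
func deriveUser(entries []DestinationServiceAccount, server, namespace string) (string, error) {
	for _, e := range entries {
		if e.Server == server && (e.Namespace == namespace || e.Namespace == "*") {
			return fmt.Sprintf("system:serviceaccount:%s:%s", namespace, e.DefaultServiceAccount), nil
		}
	}
	return "", fmt.Errorf("error deriving service account to impersonate: no match for %s/%s", server, namespace)
}
```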
// TestNormalizeApplication verifies we normalize an application during reconciliation
func TestNormalizeApplication(t *testing.T) {
defaultProj := v1alpha1.AppProject{
@@ -1279,7 +1389,7 @@ func TestNormalizeApplication(t *testing.T) {

{
// Verify we normalize the app because project is missing
- ctrl := newFakeController(&data, nil)
+ ctrl := newFakeController(t.Context(), &data, nil)
key, _ := cache.MetaNamespaceKeyFunc(app)
ctrl.appRefreshQueue.AddRateLimited(key)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
@@ -1301,7 +1411,7 @@ func TestNormalizeApplication(t *testing.T) {
// Verify we don't unnecessarily normalize app when project is set
app.Spec.Project = "default"
data.apps[0] = app
- ctrl := newFakeController(&data, nil)
+ ctrl := newFakeController(t.Context(), &data, nil)
key, _ := cache.MetaNamespaceKeyFunc(app)
ctrl.appRefreshQueue.AddRateLimited(key)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
@@ -1326,7 +1436,7 @@ func TestHandleAppUpdated(t *testing.T) {
app.Spec.Destination.Server = v1alpha1.KubernetesInternalAPIServerAddr
proj := defaultProj.DeepCopy()
proj.Spec.SourceNamespaces = []string{test.FakeArgoCDNamespace}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, proj}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, proj}}, nil)

ctrl.handleObjectUpdated(map[string]bool{app.InstanceName(ctrl.namespace): true}, kube.GetObjectRef(kube.MustToUnstructured(app)))
isRequested, level := ctrl.isRefreshRequested(app.QualifiedName())
@@ -1353,7 +1463,7 @@ func TestHandleOrphanedResourceUpdated(t *testing.T) {
proj := defaultProj.DeepCopy()
proj.Spec.OrphanedResources = &v1alpha1.OrphanedResourcesMonitorSettings{}

- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app1, app2, proj}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app1, app2, proj}}, nil)

ctrl.handleObjectUpdated(map[string]bool{}, corev1.ObjectReference{UID: "test", Kind: kube.DeploymentKind, Name: "test", Namespace: test.FakeArgoCDNamespace})

@@ -1384,7 +1494,7 @@ func TestGetResourceTree_HasOrphanedResources(t *testing.T) {
ResourceRef: v1alpha1.ResourceRef{Group: "apps", Kind: "Deployment", Namespace: "default", Name: "deploy2"},
}

- ctrl := newFakeController(&fakeData{
+ ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", "Deployment", "default", "nginx-deployment"): {ResourceNode: managedDeploy},
@@ -1407,7 +1517,7 @@ func TestGetResourceTree_HasOrphanedResources(t *testing.T) {
}

func TestSetOperationStateOnDeletedApp(t *testing.T) {
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
patched := false
@@ -1425,7 +1535,7 @@ func TestSetOperationStateLogRetries(t *testing.T) {
t.Cleanup(func() {
logrus.StandardLogger().ReplaceHooks(logrus.LevelHooks{})
})
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
patched := false
@@ -1438,7 +1548,12 @@ func TestSetOperationStateLogRetries(t *testing.T) {
})
ctrl.setOperationState(newFakeApp(), &v1alpha1.OperationState{Phase: synccommon.OperationSucceeded})
assert.True(t, patched)
- assert.Contains(t, hook.Entries[0].Message, "fake error")
+ require.GreaterOrEqual(t, len(hook.Entries), 1)
+ entry := hook.Entries[0]
+ require.Contains(t, entry.Data, "error")
+ errorVal, ok := entry.Data["error"].(error)
+ require.True(t, ok, "error field should be of type error")
+ assert.Contains(t, errorVal.Error(), "fake error")
}

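The tightened assertion checks the structured `error` field on the captured log entry rather than substring-matching the rendered message, which is exactly what the `WithError` migration makes possible. A small sketch of capturing entries with logrus's test hook (the hook package ships with logrus):

```go
package demo

import (
	"errors"
	"testing"

	"github.com/sirupsen/logrus"
	"github.com/sirupsen/logrus/hooks/test"
)

func TestErrorFieldIsStructured(t *testing.T) {
	logger, hook := test.NewNullLogger()
	logger.WithError(errors.New("fake error")).Error("operation failed")

	entry := hook.LastEntry()
	if entry == nil {
		t.Fatal("expected a log entry")
	}
	// The error is a first-class field, not part of the message text.
	errVal, ok := entry.Data[logrus.ErrorKey].(error)
	if !ok || errVal.Error() != "fake error" {
		t.Fatalf("unexpected error field: %#v", entry.Data)
	}
}
```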
func TestNeedRefreshAppStatus(t *testing.T) {
@@ -1476,7 +1591,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
app.Status.Sync.ComparedTo.Source = app.Spec.GetSource()
}

- ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)

t.Run("no need to refresh just reconciled application", func(t *testing.T) {
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
@@ -1488,7 +1603,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
assert.False(t, needRefresh)

// use a one-off controller so other tests don't have a manual refresh request
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)

// refresh app using the 'deepest' requested comparison level
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
@@ -1505,7 +1620,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
assert.False(t, needRefresh)

// use a one-off controller so other tests don't have a manual refresh request
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)

// refresh app with a non-nil delay
// use zero-second delay to test the add later logic without waiting in the test
@@ -1535,7 +1650,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
app := app.DeepCopy()

// use a one-off controller so other tests don't have a manual refresh request
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)

needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.False(t, needRefresh)
@@ -1565,7 +1680,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
}

// use a one-off controller so other tests don't have a manual refresh request
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)

needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.False(t, needRefresh)
@@ -1645,7 +1760,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
}

func TestUpdatedManagedNamespaceMetadata(t *testing.T) {
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
app := newFakeApp()
app.Spec.SyncPolicy.ManagedNamespaceMetadata = &v1alpha1.ManagedNamespaceMetadata{
Labels: map[string]string{
@@ -1669,7 +1784,7 @@ func TestUpdatedManagedNamespaceMetadata(t *testing.T) {
}

func TestUnchangedManagedNamespaceMetadata(t *testing.T) {
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
app := newFakeApp()
app.Spec.SyncPolicy.ManagedNamespaceMetadata = &v1alpha1.ManagedNamespaceMetadata{
Labels: map[string]string{
@@ -1712,7 +1827,7 @@ func TestRefreshAppConditions(t *testing.T) {

t.Run("NoErrorConditions", func(t *testing.T) {
app := newFakeApp()
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)

_, hasErrors := ctrl.refreshAppConditions(app)
assert.False(t, hasErrors)
@@ -1723,7 +1838,7 @@ func TestRefreshAppConditions(t *testing.T) {
app := newFakeApp()
app.Status.SetConditions([]v1alpha1.ApplicationCondition{{Type: v1alpha1.ApplicationConditionExcludedResourceWarning}}, nil)

- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)

_, hasErrors := ctrl.refreshAppConditions(app)
assert.False(t, hasErrors)
@@ -1736,7 +1851,7 @@ func TestRefreshAppConditions(t *testing.T) {
app.Spec.Project = "wrong project"
app.Status.SetConditions([]v1alpha1.ApplicationCondition{{Type: v1alpha1.ApplicationConditionInvalidSpecError, Message: "old message"}}, nil)

- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)

_, hasErrors := ctrl.refreshAppConditions(app)
assert.True(t, hasErrors)
@@ -1751,7 +1866,7 @@ func TestUpdateReconciledAt(t *testing.T) {
reconciledAt := metav1.NewTime(time.Now().Add(-1 * time.Second))
app.Status = v1alpha1.ApplicationStatus{ReconciledAt: &reconciledAt}
app.Status.Sync = v1alpha1.SyncStatus{ComparedTo: v1alpha1.ComparedTo{Source: app.Spec.GetSource(), Destination: app.Spec.Destination, IgnoreDifferences: app.Spec.IgnoreDifferences}}
- ctrl := newFakeController(&fakeData{
+ ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -1882,7 +1997,7 @@ apps/Deployment:

for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
- ctrl := newFakeController(&fakeData{
+ ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{tc.app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -1946,7 +2061,7 @@ apps/Deployment:

return hs`,
}
- ctrl := newFakeControllerWithResync(&fakeData{
+ ctrl := newFakeControllerWithResync(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -2013,7 +2128,7 @@ apps/Deployment:
func TestProjectErrorToCondition(t *testing.T) {
app := newFakeApp()
app.Spec.Project = "wrong project"
- ctrl := newFakeController(&fakeData{
+ ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -2041,7 +2156,7 @@ func TestProjectErrorToCondition(t *testing.T) {
func TestFinalizeProjectDeletion_HasApplications(t *testing.T) {
app := newFakeApp()
proj := &v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace}}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, proj}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, proj}}, nil)

fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
patched := false
@@ -2057,7 +2172,7 @@ func TestFinalizeProjectDeletion_HasApplications(t *testing.T) {

func TestFinalizeProjectDeletion_DoesNotHaveApplications(t *testing.T) {
proj := &v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace}}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{&defaultProj}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{&defaultProj}}, nil)

fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
@@ -2083,7 +2198,7 @@ func TestProcessRequestedAppOperation_FailedNoRetries(t *testing.T) {
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2110,7 +2225,7 @@ func TestProcessRequestedAppOperation_InvalidDestination(t *testing.T) {
proj := defaultProj
proj.Name = "test-project"
proj.Spec.SourceNamespaces = []string{test.FakeArgoCDNamespace}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &proj}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &proj}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
func() {
@@ -2139,7 +2254,7 @@ func TestProcessRequestedAppOperation_FailedHasRetries(t *testing.T) {
Sync: &v1alpha1.SyncOperation{},
Retry: v1alpha1.RetryStrategy{Limit: 1},
}
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2186,7 +2301,7 @@ func TestProcessRequestedAppOperation_RunningPreviouslyFailed(t *testing.T) {
Revision: "abc123",
},
}
- ctrl := newFakeController(data, nil)
+ ctrl := newFakeController(t.Context(), data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2243,7 +2358,7 @@ func TestProcessRequestedAppOperation_RunningPreviouslyFailedBackoff(t *testing.
Revision: "abc123",
},
}
- ctrl := newFakeController(data, nil)
+ ctrl := newFakeController(t.Context(), data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.PrependReactor("patch", "*", func(_ kubetesting.Action) (handled bool, ret runtime.Object, err error) {
require.FailNow(t, "A patch should not have been called if the backoff has not passed")
@@ -2271,7 +2386,7 @@ func TestProcessRequestedAppOperation_HasRetriesTerminated(t *testing.T) {
Revision: "abc123",
},
}
- ctrl := newFakeController(data, nil)
+ ctrl := newFakeController(t.Context(), data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2295,7 +2410,7 @@ func TestProcessRequestedAppOperation_Successful(t *testing.T) {
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
}
- ctrl := newFakeController(&fakeData{
+ ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
@@ -2370,7 +2485,7 @@ func TestProcessRequestedAppOperation_SyncTimeout(t *testing.T) {
Revision: "HEAD",
},
}
- ctrl := newFakeController(&fakeData{
+ ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
@@ -2412,9 +2527,9 @@ func TestGetAppHosts(t *testing.T) {
"application.allowedNodeLabels": "label1,label2",
},
}
- ctrl := newFakeController(data, nil)
+ ctrl := newFakeController(t.Context(), data, nil)
mockStateCache := &mockstatecache.LiveStateCache{}
- mockStateCache.On("IterateResources", mock.Anything, mock.MatchedBy(func(callback func(res *clustercache.Resource, info *statecache.ResourceInfo)) bool {
+ mockStateCache.EXPECT().IterateResources(mock.Anything, mock.MatchedBy(func(callback func(res *clustercache.Resource, info *statecache.ResourceInfo)) bool {
// node resource
callback(&clustercache.Resource{
Ref: corev1.ObjectReference{Name: "minikube", Kind: "Node", APIVersion: "v1"},
@@ -2440,7 +2555,7 @@ func TestGetAppHosts(t *testing.T) {
ResourceRequests: map[corev1.ResourceName]resource.Quantity{corev1.ResourceCPU: resource.MustParse("2")},
}})
return true
- })).Return(nil)
+ })).Return(nil).Maybe()
ctrl.stateCache = mockStateCache

hosts, err := ctrl.getAppHosts(&v1alpha1.Cluster{Server: "test", Name: "test"}, app, []v1alpha1.ResourceNode{{
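`mock.MatchedBy` lets an expectation match on a predicate instead of an exact value; in `TestGetAppHosts` it is used both to match and to drive the `IterateResources` callback, feeding fake node and pod resources into it. A reduced sketch of the pattern:

```go
package demo

import (
	"testing"

	"github.com/stretchr/testify/mock"
)

type Cache struct{ mock.Mock }

func (c *Cache) Iterate(fn func(name string)) error {
	return c.Called(fn).Error(0)
}

func TestMatchedByDrivesCallback(t *testing.T) {
	c := &Cache{}
	// The matcher receives the callback argument itself, so the test can
	// invoke it with canned data while declaring the expectation.
	c.On("Iterate", mock.MatchedBy(func(fn func(name string)) bool {
		fn("minikube") // feed a fake resource into the production callback
		return true
	})).Return(nil)

	seen := ""
	if err := c.Iterate(func(name string) { seen = name }); err != nil {
		t.Fatal(err)
	}
	if seen != "minikube" {
		t.Fatalf("callback not driven, got %q", seen)
	}
}
```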
@@ -2467,15 +2582,15 @@ func TestGetAppHosts(t *testing.T) {
func TestMetricsExpiration(t *testing.T) {
app := newFakeApp()
// Check expiration is disabled by default
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
assert.False(t, ctrl.metricsServer.HasExpiration())
// Check expiration is enabled if set
- ctrl = newFakeController(&fakeData{apps: []runtime.Object{app}, metricsCacheExpiration: 10 * time.Second}, nil)
+ ctrl = newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}, metricsCacheExpiration: 10 * time.Second}, nil)
assert.True(t, ctrl.metricsServer.HasExpiration())
}

func TestToAppKey(t *testing.T) {
- ctrl := newFakeController(&fakeData{}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{}, nil)
tests := []struct {
name string
input string
@@ -2495,7 +2610,7 @@ func TestToAppKey(t *testing.T) {

func Test_canProcessApp(t *testing.T) {
app := newFakeApp()
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl.applicationNamespaces = []string{"good"}
t.Run("without cluster filter, good namespace", func(t *testing.T) {
app.Namespace = "good"
@@ -2526,7 +2641,7 @@ func Test_canProcessAppSkipReconcileAnnotation(t *testing.T) {
appSkipReconcileFalse.Annotations = map[string]string{common.AnnotationKeyAppSkipReconcile: "false"}
appSkipReconcileTrue := newFakeApp()
appSkipReconcileTrue.Annotations = map[string]string{common.AnnotationKeyAppSkipReconcile: "true"}
- ctrl := newFakeController(&fakeData{}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{}, nil)
tests := []struct {
name string
input any
@@ -2547,7 +2662,7 @@ func Test_canProcessAppSkipReconcileAnnotation(t *testing.T) {

func Test_syncDeleteOption(t *testing.T) {
app := newFakeApp()
- ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
cm := newFakeCM()
t.Run("without delete option object is deleted", func(t *testing.T) {
cmObj := kube.MustToUnstructured(&cm)
@@ -2568,7 +2683,7 @@ func Test_syncDeleteOption(t *testing.T) {
func TestAddControllerNamespace(t *testing.T) {
t.Run("set controllerNamespace when the app is in the controller namespace", func(t *testing.T) {
app := newFakeApp()
- ctrl := newFakeController(&fakeData{
+ ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{},
}, nil)
@@ -2586,7 +2701,7 @@ func TestAddControllerNamespace(t *testing.T) {
app.Namespace = appNamespace
proj := defaultProj
proj.Spec.SourceNamespaces = []string{appNamespace}
- ctrl := newFakeController(&fakeData{
+ ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &proj},
manifestResponse: &apiclient.ManifestResponse{},
applicationNamespaces: []string{appNamespace},
@@ -2843,7 +2958,7 @@ func assertDurationAround(t *testing.T, expected time.Duration, actual time.Dura
}

func TestSelfHealRemainingBackoff(t *testing.T) {
- ctrl := newFakeController(&fakeData{}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{}, nil)
ctrl.selfHealBackoff = &wait.Backoff{
Factor: 3,
Duration: 2 * time.Second,
@@ -2925,7 +3040,7 @@ func TestSelfHealRemainingBackoff(t *testing.T) {

func TestSelfHealBackoffCooldownElapsed(t *testing.T) {
cooldown := time.Second * 30
- ctrl := newFakeController(&fakeData{}, nil)
+ ctrl := newFakeController(t.Context(), &fakeData{}, nil)
ctrl.selfHealBackoffCooldown = cooldown

app := &v1alpha1.Application{

controller/cache/cache_test.go (39 changed lines, vendored)
@@ -40,7 +40,7 @@ func (n netError) Error() string { return string(n) }
func (n netError) Timeout() bool { return false }
func (n netError) Temporary() bool { return false }

- func fixtures(data map[string]string, opts ...func(secret *corev1.Secret)) (*fake.Clientset, *argosettings.SettingsManager) {
+ func fixtures(ctx context.Context, data map[string]string, opts ...func(secret *corev1.Secret)) (*fake.Clientset, *argosettings.SettingsManager) {
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDConfigMapName,
@@ -65,17 +65,17 @@ func fixtures(data map[string]string, opts ...func(secret *corev1.Secret)) (*fak
opts[i](secret)
}
kubeClient := fake.NewClientset(cm, secret)
- settingsManager := argosettings.NewSettingsManager(context.Background(), kubeClient, "default")
+ settingsManager := argosettings.NewSettingsManager(ctx, kubeClient, "default")

return kubeClient, settingsManager
}

func TestHandleModEvent_HasChanges(_ *testing.T) {
clusterCache := &mocks.ClusterCache{}
- clusterCache.On("Invalidate", mock.Anything, mock.Anything).Return(nil).Once()
- clusterCache.On("EnsureSynced").Return(nil).Once()
+ clusterCache.EXPECT().Invalidate(mock.Anything, mock.Anything).Return().Once()
+ clusterCache.EXPECT().EnsureSynced().Return(nil).Once()
db := &dbmocks.ArgoDB{}
- db.On("GetApplicationControllerReplicas").Return(1)
+ db.EXPECT().GetApplicationControllerReplicas().Return(1)
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{
"https://mycluster": clusterCache,
@@ -95,10 +95,10 @@ func TestHandleModEvent_HasChanges(_ *testing.T) {

func TestHandleModEvent_ClusterExcluded(t *testing.T) {
clusterCache := &mocks.ClusterCache{}
- clusterCache.On("Invalidate", mock.Anything, mock.Anything).Return(nil).Once()
- clusterCache.On("EnsureSynced").Return(nil).Once()
+ clusterCache.EXPECT().Invalidate(mock.Anything, mock.Anything).Return().Once()
+ clusterCache.EXPECT().EnsureSynced().Return(nil).Once()
db := &dbmocks.ArgoDB{}
- db.On("GetApplicationControllerReplicas").Return(1)
+ db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
clustersCache := liveStateCache{
db: nil,
appInformer: nil,
@@ -128,10 +128,10 @@ func TestHandleModEvent_ClusterExcluded(t *testing.T) {

func TestHandleModEvent_NoChanges(_ *testing.T) {
clusterCache := &mocks.ClusterCache{}
- clusterCache.On("Invalidate", mock.Anything).Panic("should not invalidate")
- clusterCache.On("EnsureSynced").Return(nil).Panic("should not re-sync")
+ clusterCache.EXPECT().Invalidate(mock.Anything).Panic("should not invalidate").Maybe()
+ clusterCache.EXPECT().EnsureSynced().Return(nil).Panic("should not re-sync").Maybe()
db := &dbmocks.ArgoDB{}
- db.On("GetApplicationControllerReplicas").Return(1)
+ db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{
"https://mycluster": clusterCache,
@@ -150,7 +150,7 @@ func TestHandleModEvent_NoChanges(_ *testing.T) {

func TestHandleAddEvent_ClusterExcluded(t *testing.T) {
db := &dbmocks.ArgoDB{}
- db.On("GetApplicationControllerReplicas").Return(1)
+ db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{},
clusterSharding: sharding.NewClusterSharding(db, 0, 2, common.DefaultShardingAlgorithm),
@@ -169,10 +169,9 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
Config: appv1.ClusterConfig{Username: "bar"},
}
db := &dbmocks.ArgoDB{}
- db.On("GetApplicationControllerReplicas").Return(1)
+ db.EXPECT().GetApplicationControllerReplicas().Return(1)
fakeClient := fake.NewClientset()
settingsMgr := argosettings.NewSettingsManager(t.Context(), fakeClient, "argocd")
- liveStateCacheLock := sync.RWMutex{}
gitopsEngineClusterCache := &mocks.ClusterCache{}
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{
@@ -180,9 +179,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
},
clusterSharding: sharding.NewClusterSharding(db, 0, 1, common.DefaultShardingAlgorithm),
settingsMgr: settingsMgr,
- // Set the lock here so we can reference it later
- //nolint:govet // We need to overwrite here to have access to the lock
- lock: liveStateCacheLock,
+ lock: sync.RWMutex{},
}
channel := make(chan string)
// Mocked lock held by the gitops-engine cluster cache
@@ -203,7 +200,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
handleDeleteWasCalled.Lock()
engineHoldsEngineLock.Lock()

- gitopsEngineClusterCache.On("EnsureSynced").Run(func(_ mock.Arguments) {
+ gitopsEngineClusterCache.EXPECT().EnsureSynced().Run(func() {
gitopsEngineClusterCacheLock.Lock()
t.Log("EnsureSynced: Engine has engine lock")
engineHoldsEngineLock.Unlock()
@@ -217,7 +214,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
ensureSyncedCompleted.Unlock()
}).Return(nil).Once()

- gitopsEngineClusterCache.On("Invalidate").Run(func(_ mock.Arguments) {
+ gitopsEngineClusterCache.EXPECT().Invalidate().Run(func(_ ...cache.UpdateSettingsFunc) {
// Allow EnsureSynced to continue now that we're in the deadlock condition
handleDeleteWasCalled.Unlock()
// Wait until gitops engine holds the gitops lock
@@ -230,7 +227,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
t.Log("Invalidate: Invalidate has engine lock")
gitopsEngineClusterCacheLock.Unlock()
invalidateCompleted.Unlock()
- }).Return()
+ }).Return().Maybe()
go func() {
// Start the gitops-engine lock holds
go func() {
@@ -778,7 +775,7 @@ func Test_GetVersionsInfo_error_redacted(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestLoadCacheSettings(t *testing.T) {
|
||||
_, settingsManager := fixtures(map[string]string{
|
||||
_, settingsManager := fixtures(t.Context(), map[string]string{
|
||||
"application.instanceLabelKey": "testLabel",
|
||||
"application.resourceTrackingMethod": string(appv1.TrackingMethodLabel),
|
||||
"installationID": "123456789",
|
||||
|
||||
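The test changes above all follow one pattern: testify's stringly-typed `.On("Method", ...)` stubs are replaced by mockery's generated, type-safe `EXPECT()` expecter API, with `.Maybe()` marking stubs the test is allowed to never hit. Below is a minimal hand-written sketch of the mechanics; the type names are hypothetical stand-ins for what mockery generates automatically.

```go
package mocksketch

import (
	mock "github.com/stretchr/testify/mock"
)

// ReplicaDB is a stand-in mock embedding testify's mock.Mock.
type ReplicaDB struct {
	mock.Mock
}

func (m *ReplicaDB) GetApplicationControllerReplicas() int {
	return m.Called().Int(0)
}

// ReplicaDBExpecter mirrors the struct mockery generates for EXPECT().
type ReplicaDBExpecter struct {
	mock *mock.Mock
}

func (m *ReplicaDB) EXPECT() *ReplicaDBExpecter {
	return &ReplicaDBExpecter{mock: &m.Mock}
}

// GetApplicationControllerReplicas writes the method-name string exactly once,
// inside generated code, so call sites are compile-time checked:
//
//	db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
//
// .Maybe() matters because mocks built with a constructor like NewArgoDB(t)
// assert all registered expectations at test cleanup.
func (e *ReplicaDBExpecter) GetApplicationControllerReplicas() *mock.Call {
	return e.mock.On("GetApplicationControllerReplicas")
}
```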
controller/cache/info.go: 11 changes (vendored)
@@ -68,6 +68,10 @@ func populateNodeInfo(un *unstructured.Unstructured, res *ResourceInfo, customLa
        case "ServiceEntry":
            populateIstioServiceEntryInfo(un, res)
        }
+   case "argoproj.io":
+       if gvk.Kind == "Application" {
+           populateApplicationInfo(un, res)
+       }
    }
}

@@ -488,6 +492,13 @@ func populateHostNodeInfo(un *unstructured.Unstructured, res *ResourceInfo) {
    }
}

+func populateApplicationInfo(un *unstructured.Unstructured, res *ResourceInfo) {
+   // Add managed-by-url annotation to info if present
+   if managedByURL, ok := un.GetAnnotations()[v1alpha1.AnnotationKeyManagedByURL]; ok {
+       res.Info = append(res.Info, v1alpha1.InfoItem{Name: "managed-by-url", Value: managedByURL})
+   }
+}
+
func generateManifestHash(un *unstructured.Unstructured, ignores []v1alpha1.ResourceIgnoreDifferences, overrides map[string]v1alpha1.ResourceOverride, opts normalizers.IgnoreNormalizerOpts) (string, error) {
    normalizer, err := normalizers.NewIgnoreNormalizer(ignores, overrides, opts)
    if err != nil {
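For context, a self-contained sketch of what the populateApplicationInfo addition does: read an annotation off the unstructured Application and surface it as an info item on the resource tree node. The literal annotation key below is an assumption; the real code reads the v1alpha1.AnnotationKeyManagedByURL constant.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	un := &unstructured.Unstructured{Object: map[string]any{
		"metadata": map[string]any{
			"annotations": map[string]any{
				"argocd.argoproj.io/managed-by-url": "https://argocd.example.com", // assumed key
			},
		},
	}}
	if managedByURL, ok := un.GetAnnotations()["argocd.argoproj.io/managed-by-url"]; ok {
		// The real code appends v1alpha1.InfoItem{Name: "managed-by-url", Value: managedByURL}.
		fmt.Println("managed-by-url:", managedByURL)
	}
}
```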
@@ -4,7 +4,9 @@ import (
    "context"
    "encoding/json"
    "fmt"
+   "maps"
    "path/filepath"
+   "slices"
    "sync"
    "time"

@@ -99,47 +101,41 @@ func NewHydrator(dependencies Dependencies, statusRefreshTimeout time.Duration,
// It's likely that multiple applications will trigger hydration at the same time. The hydration queue key is meant to
// dedupe these requests.
func (h *Hydrator) ProcessAppHydrateQueueItem(origApp *appv1.Application) {
-   origApp = origApp.DeepCopy()
    app := origApp.DeepCopy()

    if app.Spec.SourceHydrator == nil {
        return
    }

    logCtx := log.WithFields(applog.GetAppLogFields(app))

    logCtx.Debug("Processing app hydrate queue item")

-   // TODO: don't reuse statusRefreshTimeout. Create a new timeout for hydration.
-   needsHydration, reason := appNeedsHydration(origApp, h.statusRefreshTimeout)
-   if !needsHydration {
-       return
+   needsHydration, reason := appNeedsHydration(app)
+   if needsHydration {
+       app.Status.SourceHydrator.CurrentOperation = &appv1.HydrateOperation{
+           StartedAt:      metav1.Now(),
+           FinishedAt:     nil,
+           Phase:          appv1.HydrateOperationPhaseHydrating,
+           SourceHydrator: *app.Spec.SourceHydrator,
+       }
+       h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
    }

-   logCtx.WithField("reason", reason).Info("Hydrating app")
-
-   app.Status.SourceHydrator.CurrentOperation = &appv1.HydrateOperation{
-       StartedAt:      metav1.Now(),
-       FinishedAt:     nil,
-       Phase:          appv1.HydrateOperationPhaseHydrating,
-       SourceHydrator: *app.Spec.SourceHydrator,
+   needsRefresh := app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.StartedAt.Time) > h.statusRefreshTimeout
+   if needsHydration || needsRefresh {
+       logCtx.WithField("reason", reason).Info("Hydrating app")
+       h.dependencies.AddHydrationQueueItem(getHydrationQueueKey(app))
+   } else {
+       logCtx.WithField("reason", reason).Debug("Skipping hydration")
    }
-   h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
-   origApp.Status.SourceHydrator = app.Status.SourceHydrator
-   h.dependencies.AddHydrationQueueItem(getHydrationQueueKey(app))

    logCtx.Debug("Successfully processed app hydrate queue item")
}

func getHydrationQueueKey(app *appv1.Application) types.HydrationQueueKey {
-   destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
-   if app.Spec.SourceHydrator.HydrateTo != nil {
-       destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
-   }
    key := types.HydrationQueueKey{
        SourceRepoURL:        git.NormalizeGitURLAllowInvalid(app.Spec.SourceHydrator.DrySource.RepoURL),
        SourceTargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
-       DestinationBranch:    destinationBranch,
+       DestinationBranch:    app.Spec.GetHydrateToSource().TargetRevision,
    }
    return key
}
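The dedupe idea behind the queue key is worth spelling out: apps that share the same dry source repo, revision, and destination branch collapse to one comparable struct value, so a queue keyed on it coalesces their requests. A minimal sketch with made-up values:

```go
package main

import "fmt"

type hydrationQueueKey struct {
	SourceRepoURL        string
	SourceTargetRevision string
	DestinationBranch    string
}

func main() {
	queue := map[hydrationQueueKey]int{}
	for _, req := range []hydrationQueueKey{
		{"https://example.com/org/repo", "main", "env/prod"},
		{"https://example.com/org/repo", "main", "env/prod"}, // duplicate collapses
		{"https://example.com/org/repo", "main", "env/staging"},
	} {
		queue[req]++
	}
	fmt.Println(len(queue)) // 2: two distinct keys despite three requests
}
```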
@@ -148,43 +144,92 @@ func getHydrationQueueKey(app *appv1.Application) types.HydrationQueueKey {
// hydration key, hydrates their latest commit, and updates their status accordingly. If the hydration fails, it marks
// the operation as failed and logs the error. If successful, it updates the operation to indicate that hydration was
// successful and requests a refresh of the applications to pick up the new hydrated commit.
-func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKey) (processNext bool) {
+func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKey) {
    logCtx := log.WithFields(log.Fields{
        "sourceRepoURL":        hydrationKey.SourceRepoURL,
        "sourceTargetRevision": hydrationKey.SourceTargetRevision,
        "destinationBranch":    hydrationKey.DestinationBranch,
    })

-   relevantApps, drySHA, hydratedSHA, err := h.hydrateAppsLatestCommit(logCtx, hydrationKey)
-   if len(relevantApps) == 0 {
-       // return early if there are no relevant apps found to hydrate,
-       // otherwise you'll be stuck in hydrating
-       logCtx.Info("Skipping hydration since there are no relevant apps found to hydrate")
+   // Get all applications sharing the same hydration key
+   apps, err := h.getAppsForHydrationKey(hydrationKey)
+   if err != nil {
+       // If we get an error here, we cannot proceed with hydration and we do not know
+       // which apps to update with the failure. The best we can do is log an error in
+       // the controller and wait for statusRefreshTimeout to retry
+       logCtx.WithError(err).Error("failed to get apps for hydration")
        return
    }
+   logCtx.WithField("appCount", len(apps))

+   // FIXME: we might end up in a race condition here where a HydrationQueueItem is processed
+   // before all applications had their CurrentOperation set by ProcessAppHydrateQueueItem.
+   // This would cause this method to update an "old" CurrentOperation.
+   // It should only start hydration if all apps are in the HydrateOperationPhaseHydrating phase.
+   raceDetected := false
+   for _, app := range apps {
+       if app.Status.SourceHydrator.CurrentOperation == nil || app.Status.SourceHydrator.CurrentOperation.Phase != appv1.HydrateOperationPhaseHydrating {
+           raceDetected = true
+           break
+       }
+   }
+   if raceDetected {
+       logCtx.Warn("race condition detected: not all apps are in HydrateOperationPhaseHydrating phase")
+   }

+   // Validate all the applications to make sure they are all correctly configured.
+   // All applications sharing the same hydration key must succeed for the hydration to be processed.
+   projects, validationErrors := h.validateApplications(apps)
+   if len(validationErrors) > 0 {
+       // For the applications that have an error, set the specific error in their status.
+       // Applications without an error will still fail with a generic error, since the hydration cannot be partial.
+       genericError := genericHydrationError(validationErrors)
+       for _, app := range apps {
+           if err, ok := validationErrors[app.QualifiedName()]; ok {
+               logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
+               logCtx.Errorf("failed to validate hydration app: %v", err)
+               h.setAppHydratorError(app, err)
+           } else {
+               h.setAppHydratorError(app, genericError)
+           }
+       }
+       return
+   }

+   // Hydrate all the apps
+   drySHA, hydratedSHA, appErrors, err := h.hydrate(logCtx, apps, projects)
+   if err != nil {
+       // If there is a single error, it affects every application
+       for i := range apps {
+           appErrors[apps[i].QualifiedName()] = err
+       }
+   }
    if drySHA != "" {
        logCtx = logCtx.WithField("drySHA", drySHA)
    }
-   if err != nil {
-       logCtx.WithField("appCount", len(relevantApps)).WithError(err).Error("Failed to hydrate apps")
-       for _, app := range relevantApps {
-           origApp := app.DeepCopy()
-           app.Status.SourceHydrator.CurrentOperation.Phase = appv1.HydrateOperationPhaseFailed
-           failedAt := metav1.Now()
-           app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
-           app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate revision %q: %v", drySHA, err.Error())
-           // We may or may not have gotten far enough in the hydration process to get a non-empty SHA, but set it just
-           // in case we did.
-           app.Status.SourceHydrator.CurrentOperation.DrySHA = drySHA
-           h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
-           logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
-           logCtx.Errorf("Failed to hydrate app: %v", err)
+   if len(appErrors) > 0 {
+       // For the applications that have an error, set the specific error in their status.
+       // Applications without an error will still fail with a generic error, since the hydration cannot be partial.
+       genericError := genericHydrationError(appErrors)
+       for _, app := range apps {
+           if drySHA != "" {
+               // If we have a drySHA, we can set it on the app status
+               app.Status.SourceHydrator.CurrentOperation.DrySHA = drySHA
+           }
+           if err, ok := appErrors[app.QualifiedName()]; ok {
+               logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
+               logCtx.Errorf("failed to hydrate app: %v", err)
+               h.setAppHydratorError(app, err)
+           } else {
+               h.setAppHydratorError(app, genericError)
+           }
        }
        return
    }
-   logCtx.WithField("appCount", len(relevantApps)).Debug("Successfully hydrated apps")

+   logCtx.Debug("Successfully hydrated apps")
    finishedAt := metav1.Now()
-   for _, app := range relevantApps {
+   for _, app := range apps {
        origApp := app.DeepCopy()
        operation := &appv1.HydrateOperation{
            StartedAt: app.Status.SourceHydrator.CurrentOperation.StartedAt,

@@ -202,118 +247,123 @@ func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKe
            SourceHydrator: app.Status.SourceHydrator.CurrentOperation.SourceHydrator,
        }
        h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)

        // Request a refresh since we pushed a new commit.
        err := h.dependencies.RequestAppRefresh(app.Name, app.Namespace)
        if err != nil {
-           logCtx.WithField("app", app.QualifiedName()).WithError(err).Error("Failed to request app refresh after hydration")
+           logCtx.WithFields(applog.GetAppLogFields(app)).WithError(err).Error("Failed to request app refresh after hydration")
        }
    }
    return
}
-func (h *Hydrator) hydrateAppsLatestCommit(logCtx *log.Entry, hydrationKey types.HydrationQueueKey) ([]*appv1.Application, string, string, error) {
-   relevantApps, projects, err := h.getRelevantAppsAndProjectsForHydration(logCtx, hydrationKey)
-   if err != nil {
-       return nil, "", "", fmt.Errorf("failed to get relevant apps for hydration: %w", err)
+// setAppHydratorError updates the CurrentOperation with the error information.
+func (h *Hydrator) setAppHydratorError(app *appv1.Application, err error) {
+   // If the operation is not in progress, we do not update the status.
+   if app.Status.SourceHydrator.CurrentOperation.Phase != appv1.HydrateOperationPhaseHydrating {
+       return
    }

-   dryRevision, hydratedRevision, err := h.hydrate(logCtx, relevantApps, projects)
-   if err != nil {
-       return relevantApps, dryRevision, "", fmt.Errorf("failed to hydrate apps: %w", err)
-   }
-
-   return relevantApps, dryRevision, hydratedRevision, nil
+   origApp := app.DeepCopy()
+   app.Status.SourceHydrator.CurrentOperation.Phase = appv1.HydrateOperationPhaseFailed
+   failedAt := metav1.Now()
+   app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
+   app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate: %v", err.Error())
+   h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
}

-func (h *Hydrator) getRelevantAppsAndProjectsForHydration(logCtx *log.Entry, hydrationKey types.HydrationQueueKey) ([]*appv1.Application, map[string]*appv1.AppProject, error) {
+// getAppsForHydrationKey returns the applications matching the hydration key.
+func (h *Hydrator) getAppsForHydrationKey(hydrationKey types.HydrationQueueKey) ([]*appv1.Application, error) {
    // Get all apps
    apps, err := h.dependencies.GetProcessableApps()
    if err != nil {
-       return nil, nil, fmt.Errorf("failed to list apps: %w", err)
+       return nil, fmt.Errorf("failed to list apps: %w", err)
    }

    var relevantApps []*appv1.Application
-   projects := make(map[string]*appv1.AppProject)
-   uniquePaths := make(map[string]bool, len(apps.Items))
    for _, app := range apps.Items {
        if app.Spec.SourceHydrator == nil {
            continue
        }

-       if !git.SameURL(app.Spec.SourceHydrator.DrySource.RepoURL, hydrationKey.SourceRepoURL) ||
-           app.Spec.SourceHydrator.DrySource.TargetRevision != hydrationKey.SourceTargetRevision {
-           continue
-       }
-       destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
-       if app.Spec.SourceHydrator.HydrateTo != nil {
-           destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
-       }
-       if destinationBranch != hydrationKey.DestinationBranch {
+       appKey := getHydrationQueueKey(&app)
+       if appKey != hydrationKey {
            continue
        }
+       relevantApps = append(relevantApps, &app)
    }
+   return relevantApps, nil
+}

-       path := app.Spec.SourceHydrator.SyncSource.Path
-       // ensure that the path is always set to a path that doesn't resolve to the root of the repo
-       if IsRootPath(path) {
-           return nil, nil, fmt.Errorf("app %q has path %q which resolves to repository root", app.QualifiedName(), path)
-       }
+// validateApplications checks that all applications are valid for hydration.
+func (h *Hydrator) validateApplications(apps []*appv1.Application) (map[string]*appv1.AppProject, map[string]error) {
+   projects := make(map[string]*appv1.AppProject)
+   errors := make(map[string]error)
+   uniquePaths := make(map[string]string, len(apps))

-   var proj *appv1.AppProject
+   for _, app := range apps {
        // Get the project for the app and validate if the app is allowed to use the source.
        // We can't short-circuit this even if we have seen this project before, because we need to verify that this
-       // particular app is allowed to use this project. That logic is in GetProcessableAppProj.
-       proj, err = h.dependencies.GetProcessableAppProj(&app)
+       // particular app is allowed to use this project.
+       proj, err := h.dependencies.GetProcessableAppProj(app)
        if err != nil {
-           return nil, nil, fmt.Errorf("failed to get project %q for app %q: %w", app.Spec.Project, app.QualifiedName(), err)
+           errors[app.QualifiedName()] = fmt.Errorf("failed to get project %q: %w", app.Spec.Project, err)
+           continue
        }
        permitted := proj.IsSourcePermitted(app.Spec.GetSource())
        if !permitted {
-           // Log and skip. We don't want to fail the entire operation because of one app.
-           logCtx.Warnf("App %q is not permitted to use source %q", app.QualifiedName(), app.Spec.Source.String())
+           errors[app.QualifiedName()] = fmt.Errorf("application repo %s is not permitted in project '%s'", app.Spec.GetSource().RepoURL, proj.Name)
            continue
        }
        projects[app.Spec.Project] = proj

+       // Disallow hydrating to the repository root.
+       // Hydrating to root would overwrite or delete files at the top level of the repo,
+       // which can break other applications or shared configuration.
+       // Every hydrated app must write into a subdirectory instead.
+       destPath := app.Spec.SourceHydrator.SyncSource.Path
+       if IsRootPath(destPath) {
+           errors[app.QualifiedName()] = fmt.Errorf("app is configured to hydrate to the repository root (branch %q, path %q) which is not allowed", app.Spec.GetHydrateToSource().TargetRevision, destPath)
+           continue
+       }

        // TODO: test the dupe detection
        // TODO: normalize the path to avoid "path/.." from being treated as different from "."
-       if _, ok := uniquePaths[path]; ok {
-           return nil, nil, fmt.Errorf("multiple app hydrators use the same destination: %v", app.Spec.SourceHydrator.SyncSource.Path)
+       if appName, ok := uniquePaths[destPath]; ok {
+           errors[app.QualifiedName()] = fmt.Errorf("app %s hydrator uses the same destination: %v", appName, app.Spec.SourceHydrator.SyncSource.Path)
+           errors[appName] = fmt.Errorf("app %s hydrator uses the same destination: %v", app.QualifiedName(), app.Spec.SourceHydrator.SyncSource.Path)
            continue
        }
-       uniquePaths[path] = true
-
-       relevantApps = append(relevantApps, &app)
+       uniquePaths[destPath] = app.QualifiedName()
    }
-   return relevantApps, projects, nil

+   // If there are any errors, return nil for projects to avoid possible partial processing.
+   if len(errors) > 0 {
+       projects = nil
+   }

+   return projects, errors
}
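A small sketch of the destination-path dupe check added in validateApplications, including the normalization gap the TODO above calls out. The app names and paths here are made up for illustration.

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	uniquePaths := map[string]string{} // destination path -> app that claimed it
	apps := []struct{ name, path string }{
		{"argocd/app-a", "env/prod"},
		{"argocd/app-b", "env/prod"}, // collides with app-a
	}
	errs := map[string]error{}
	for _, app := range apps {
		if other, ok := uniquePaths[app.path]; ok {
			// Both sides of the collision are recorded, matching the diff above.
			errs[app.name] = fmt.Errorf("app %s hydrator uses the same destination: %v", other, app.path)
			errs[other] = fmt.Errorf("app %s hydrator uses the same destination: %v", app.name, app.path)
			continue
		}
		uniquePaths[app.path] = app.name
	}
	fmt.Println(errs)

	// The TODO: without cleaning, equivalent paths slip past the raw string comparison.
	fmt.Println("path/.." == ".")                                 // false
	fmt.Println(filepath.Clean("path/..") == filepath.Clean(".")) // true once cleaned
}
```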
-func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, projects map[string]*appv1.AppProject) (string, string, error) {
+func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, projects map[string]*appv1.AppProject) (string, string, map[string]error, error) {
+   errors := make(map[string]error)
    if len(apps) == 0 {
-       return "", "", nil
+       return "", "", nil, nil
    }

    // These values are the same for all apps being hydrated together, so just get them from the first app.
-   repoURL := apps[0].Spec.SourceHydrator.DrySource.RepoURL
-   syncBranch := apps[0].Spec.SourceHydrator.SyncSource.TargetBranch
+   repoURL := apps[0].Spec.GetHydrateToSource().RepoURL
    targetBranch := apps[0].Spec.GetHydrateToSource().TargetRevision

-   // Disallow hydrating to the repository root.
-   // Hydrating to root would overwrite or delete files at the top level of the repo,
-   // which can break other applications or shared configuration.
-   // Every hydrated app must write into a subdirectory instead.
-   for _, app := range apps {
-       destPath := app.Spec.SourceHydrator.SyncSource.Path
-       if IsRootPath(destPath) {
-           return "", "", fmt.Errorf(
-               "app %q is configured to hydrate to the repository root (branch %q, path %q) which is not allowed",
-               app.QualifiedName(), targetBranch, destPath,
-           )
-       }
-   }
+   // FIXME: As a convenience, the commit server will create the syncBranch if it does not exist. If the
+   // targetBranch does not exist, it will create it based on the syncBranch. On the next line, we take
+   // the `syncBranch` from the first app and assume that they're all configured the same. Instead, if any
+   // app has a different syncBranch, we should send the commit server an empty string and allow it to
+   // create the targetBranch as an orphan since we can't reliably determine a reasonable base.
+   syncBranch := apps[0].Spec.SourceHydrator.SyncSource.TargetBranch

    // Get a static SHA revision from the first app so that all apps are hydrated from the same revision.
    targetRevision, pathDetails, err := h.getManifests(context.Background(), apps[0], "", projects[apps[0].Spec.Project])
    if err != nil {
-       return "", "", fmt.Errorf("failed to get manifests for app %q: %w", apps[0].QualifiedName(), err)
+       errors[apps[0].QualifiedName()] = fmt.Errorf("failed to get manifests: %w", err)
+       return "", "", errors, nil
    }
    paths := []*commitclient.PathDetails{pathDetails}
@@ -324,18 +374,18 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
        app := app
        eg.Go(func() error {
            _, pathDetails, err = h.getManifests(ctx, app, targetRevision, projects[app.Spec.Project])
-           if err != nil {
-               return fmt.Errorf("failed to get manifests for app %q: %w", app.QualifiedName(), err)
-           }
            mu.Lock()
+           defer mu.Unlock()
+           if err != nil {
+               errors[app.QualifiedName()] = fmt.Errorf("failed to get manifests: %w", err)
+               return errors[app.QualifiedName()]
+           }
            paths = append(paths, pathDetails)
-           mu.Unlock()
            return nil
        })
    }
-   err = eg.Wait()
-   if err != nil {
-       return "", "", fmt.Errorf("failed to get manifests for apps: %w", err)
+   if err := eg.Wait(); err != nil {
+       return targetRevision, "", errors, nil
    }

    // If all the apps are under the same project, use that project. Otherwise, use an empty string to indicate that we
@@ -344,18 +394,19 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
    if len(projects) == 1 {
        for p := range projects {
            project = p
            break
        }
    }

    // Get the commit metadata for the target revision.
    revisionMetadata, err := h.getRevisionMetadata(context.Background(), repoURL, project, targetRevision)
    if err != nil {
-       return "", "", fmt.Errorf("failed to get revision metadata for %q: %w", targetRevision, err)
+       return targetRevision, "", errors, fmt.Errorf("failed to get revision metadata for %q: %w", targetRevision, err)
    }

    repo, err := h.dependencies.GetWriteCredentials(context.Background(), repoURL, project)
    if err != nil {
-       return "", "", fmt.Errorf("failed to get hydrator credentials: %w", err)
+       return targetRevision, "", errors, fmt.Errorf("failed to get hydrator credentials: %w", err)
    }
    if repo == nil {
        // Try without credentials.

@@ -367,11 +418,11 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
    // get the commit message template
    commitMessageTemplate, err := h.dependencies.GetHydratorCommitMessageTemplate()
    if err != nil {
-       return "", "", fmt.Errorf("failed to get hydrated commit message template: %w", err)
+       return targetRevision, "", errors, fmt.Errorf("failed to get hydrated commit message template: %w", err)
    }
    commitMessage, errMsg := getTemplatedCommitMessage(repoURL, targetRevision, commitMessageTemplate, revisionMetadata)
    if errMsg != nil {
-       return "", "", fmt.Errorf("failed to get hydrator commit templated message: %w", errMsg)
+       return targetRevision, "", errors, fmt.Errorf("failed to get hydrator commit templated message: %w", errMsg)
    }

    manifestsRequest := commitclient.CommitHydratedManifestsRequest{

@@ -386,14 +437,14 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project

    closer, commitService, err := h.commitClientset.NewCommitServerClient()
    if err != nil {
-       return targetRevision, "", fmt.Errorf("failed to create commit service: %w", err)
+       return targetRevision, "", errors, fmt.Errorf("failed to create commit service: %w", err)
    }
    defer utilio.Close(closer)
    resp, err := commitService.CommitHydratedManifests(context.Background(), &manifestsRequest)
    if err != nil {
-       return targetRevision, "", fmt.Errorf("failed to commit hydrated manifests: %w", err)
+       return targetRevision, "", errors, fmt.Errorf("failed to commit hydrated manifests: %w", err)
    }
-   return targetRevision, resp.HydratedSha, nil
+   return targetRevision, resp.HydratedSha, errors, nil
}

// getManifests gets the manifests for the given application and target revision. It returns the resolved revision
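The fan-out in the middle of hydrate() is the standard errgroup pattern: one goroutine per app, with a mutex guarding the shared results slice and the per-app error map. A self-contained sketch under those assumptions (the getManifests call is replaced by a stand-in):

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

func main() {
	apps := []string{"app-a", "app-b", "app-c"}
	var (
		mu    sync.Mutex
		paths []string
		errs  = map[string]error{}
	)
	var eg errgroup.Group
	for _, app := range apps {
		app := app // not needed on Go 1.22+, kept to mirror the diff
		eg.Go(func() error {
			// Stand-in for h.getManifests(ctx, app, targetRevision, project).
			result, err := fmt.Sprintf("manifests/%s", app), error(nil)
			mu.Lock()
			defer mu.Unlock()
			if err != nil {
				errs[app] = err
				return err
			}
			paths = append(paths, result)
			return nil
		})
	}
	if err := eg.Wait(); err != nil {
		fmt.Println("at least one app failed:", err)
		return
	}
	fmt.Println(paths)
}
```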
@@ -456,34 +507,27 @@ func (h *Hydrator) getRevisionMetadata(ctx context.Context, repoURL, project, re
}

// appNeedsHydration answers if application needs manifests hydrated.
-func appNeedsHydration(app *appv1.Application, statusHydrateTimeout time.Duration) (needsHydration bool, reason string) {
-   if app.Spec.SourceHydrator == nil {
-       return false, "source hydrator not configured"
-   }
-
-   var hydratedAt *metav1.Time
-   if app.Status.SourceHydrator.CurrentOperation != nil {
-       hydratedAt = &app.Status.SourceHydrator.CurrentOperation.StartedAt
-   }
-
+func appNeedsHydration(app *appv1.Application) (needsHydration bool, reason string) {
    switch {
+   case app.IsHydrateRequested():
+       return true, "hydrate requested"
+   case app.Spec.SourceHydrator == nil:
+       return false, "source hydrator not configured"
    case app.Status.SourceHydrator.CurrentOperation == nil:
        return true, "no previous hydrate operation"
+   case app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating:
+       return false, "hydration operation already in progress"
-   case app.IsHydrateRequested():
-       return true, "hydrate requested"
    case !app.Spec.SourceHydrator.DeepEquals(app.Status.SourceHydrator.CurrentOperation.SourceHydrator):
        return true, "spec.sourceHydrator differs"
+   case app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseFailed && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.FinishedAt.Time) > 2*time.Minute:
+       return true, "previous hydrate operation failed more than 2 minutes ago"
-   case hydratedAt == nil || hydratedAt.Add(statusHydrateTimeout).Before(time.Now().UTC()):
-       return true, "hydration expired"
    }

-   return false, ""
+   return false, "hydration not needed"
}
-// Gets the multi-line commit message based on the template defined in the configmap. It is a two step process:
-// 1. Get the metadata template engine would use to render the template
+// getTemplatedCommitMessage gets the multi-line commit message based on the template defined in the configmap. It is a two step process:
+// 1. Get the metadata template engine would use to render the template
// 2. Pass the output of Step 1 and Step 2 to template Render
func getTemplatedCommitMessage(repoURL, revision, commitMessageTemplate string, dryCommitMetadata *appv1.RevisionMetadata) (string, error) {
    hydratorCommitMetadata, err := hydrator.GetCommitMetadata(repoURL, revision, dryCommitMetadata)

@@ -497,6 +541,20 @@ func getTemplatedCommitMessage(repoURL, revision, commitMessageTemplate string,
    return templatedCommitMsg, nil
}

+// genericHydrationError returns an error that summarizes the hydration errors for all applications.
+func genericHydrationError(validationErrors map[string]error) error {
+   if len(validationErrors) == 0 {
+       return nil
+   }
+
+   keys := slices.Sorted(maps.Keys(validationErrors))
+   remainder := "has an error"
+   if len(keys) > 1 {
+       remainder = fmt.Sprintf("and %d more have errors", len(keys)-1)
+   }
+   return fmt.Errorf("cannot hydrate because application %s %s", keys[0], remainder)
+}
+
// IsRootPath returns whether the path references a root path
func IsRootPath(path string) bool {
    clean := filepath.Clean(path)
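To make the summarized error concrete, here is genericHydrationError lifted into a runnable example (the app names are made up): it sorts the failing app names for deterministic output, names the first, and counts the rest.

```go
package main

import (
	"errors"
	"fmt"
	"maps"
	"slices"
)

func genericHydrationError(validationErrors map[string]error) error {
	if len(validationErrors) == 0 {
		return nil
	}
	keys := slices.Sorted(maps.Keys(validationErrors))
	remainder := "has an error"
	if len(keys) > 1 {
		remainder = fmt.Sprintf("and %d more have errors", len(keys)-1)
	}
	return fmt.Errorf("cannot hydrate because application %s %s", keys[0], remainder)
}

func main() {
	errs := map[string]error{
		"argocd/app-b": errors.New("bad path"),
		"argocd/app-a": errors.New("repo not permitted"),
	}
	fmt.Println(genericHydrationError(errs))
	// cannot hydrate because application argocd/app-a and 1 more have errors
}
```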
File diff suppressed because it is too large

controller/hydrator/mocks/RepoGetter.go: 113 lines (generated, new file)
@@ -0,0 +1,113 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify

package mocks

import (
    "context"

    "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
    mock "github.com/stretchr/testify/mock"
)

// NewRepoGetter creates a new instance of RepoGetter. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewRepoGetter(t interface {
    mock.TestingT
    Cleanup(func())
}) *RepoGetter {
    mock := &RepoGetter{}
    mock.Mock.Test(t)

    t.Cleanup(func() { mock.AssertExpectations(t) })

    return mock
}

// RepoGetter is an autogenerated mock type for the RepoGetter type
type RepoGetter struct {
    mock.Mock
}

type RepoGetter_Expecter struct {
    mock *mock.Mock
}

func (_m *RepoGetter) EXPECT() *RepoGetter_Expecter {
    return &RepoGetter_Expecter{mock: &_m.Mock}
}

// GetRepository provides a mock function for the type RepoGetter
func (_mock *RepoGetter) GetRepository(ctx context.Context, repoURL string, project string) (*v1alpha1.Repository, error) {
    ret := _mock.Called(ctx, repoURL, project)

    if len(ret) == 0 {
        panic("no return value specified for GetRepository")
    }

    var r0 *v1alpha1.Repository
    var r1 error
    if returnFunc, ok := ret.Get(0).(func(context.Context, string, string) (*v1alpha1.Repository, error)); ok {
        return returnFunc(ctx, repoURL, project)
    }
    if returnFunc, ok := ret.Get(0).(func(context.Context, string, string) *v1alpha1.Repository); ok {
        r0 = returnFunc(ctx, repoURL, project)
    } else {
        if ret.Get(0) != nil {
            r0 = ret.Get(0).(*v1alpha1.Repository)
        }
    }
    if returnFunc, ok := ret.Get(1).(func(context.Context, string, string) error); ok {
        r1 = returnFunc(ctx, repoURL, project)
    } else {
        r1 = ret.Error(1)
    }
    return r0, r1
}

// RepoGetter_GetRepository_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetRepository'
type RepoGetter_GetRepository_Call struct {
    *mock.Call
}

// GetRepository is a helper method to define mock.On call
//   - ctx context.Context
//   - repoURL string
//   - project string
func (_e *RepoGetter_Expecter) GetRepository(ctx interface{}, repoURL interface{}, project interface{}) *RepoGetter_GetRepository_Call {
    return &RepoGetter_GetRepository_Call{Call: _e.mock.On("GetRepository", ctx, repoURL, project)}
}

func (_c *RepoGetter_GetRepository_Call) Run(run func(ctx context.Context, repoURL string, project string)) *RepoGetter_GetRepository_Call {
    _c.Call.Run(func(args mock.Arguments) {
        var arg0 context.Context
        if args[0] != nil {
            arg0 = args[0].(context.Context)
        }
        var arg1 string
        if args[1] != nil {
            arg1 = args[1].(string)
        }
        var arg2 string
        if args[2] != nil {
            arg2 = args[2].(string)
        }
        run(
            arg0,
            arg1,
            arg2,
        )
    })
    return _c
}

func (_c *RepoGetter_GetRepository_Call) Return(repository *v1alpha1.Repository, err error) *RepoGetter_GetRepository_Call {
    _c.Call.Return(repository, err)
    return _c
}

func (_c *RepoGetter_GetRepository_Call) RunAndReturn(run func(ctx context.Context, repoURL string, project string) (*v1alpha1.Repository, error)) *RepoGetter_GetRepository_Call {
    _c.Call.Return(run)
    return _c
}
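A plausible use of the generated mock above, to show how the pieces fit together (the test name and return values are illustrative): NewRepoGetter registers cleanup-time assertion of expectations, and EXPECT().GetRepository gives a typed call object.

```go
package mocks_test

import (
	"testing"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"

	"github.com/argoproj/argo-cd/v3/controller/hydrator/mocks"
	"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)

func TestRepoGetterMockSketch(t *testing.T) {
	m := mocks.NewRepoGetter(t) // expectations asserted automatically at cleanup
	m.EXPECT().
		GetRepository(mock.Anything, "https://example.com/org/repo", "default").
		Return(&v1alpha1.Repository{Repo: "https://example.com/org/repo"}, nil).
		Once()

	repo, err := m.GetRepository(t.Context(), "https://example.com/org/repo", "default")
	require.NoError(t, err)
	require.Equal(t, "https://example.com/org/repo", repo.Repo)
}
```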
@@ -43,7 +43,7 @@ func TestGetRepoObjs(t *testing.T) {
        },
    }

-   ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
+   ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
    source := app.Spec.GetSource()
    source.RepoURL = "oci://example.com/argo/argo-cd"

@@ -92,7 +92,7 @@ func TestGetHydratorCommitMessageTemplate_WhenTemplateisNotDefined_FallbackToDef
        },
    }

-   ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
+   ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))

    tmpl, err := ctrl.GetHydratorCommitMessageTemplate()
    require.NoError(t, err)

@@ -115,7 +115,7 @@ func TestGetHydratorCommitMessageTemplate(t *testing.T) {
        configMapData: cm.Data,
    }

-   ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
+   ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))

    tmpl, err := ctrl.GetHydratorCommitMessageTemplate()
    require.NoError(t, err)
@@ -13,12 +13,12 @@ import (
)

func TestMetricClusterConnectivity(t *testing.T) {
-   db := dbmocks.ArgoDB{}
+   db := &dbmocks.ArgoDB{}
    cluster1 := v1alpha1.Cluster{Name: "cluster1", Server: "server1", Labels: map[string]string{"env": "dev", "team": "team1"}}
    cluster2 := v1alpha1.Cluster{Name: "cluster2", Server: "server2", Labels: map[string]string{"env": "staging", "team": "team2"}}
    cluster3 := v1alpha1.Cluster{Name: "cluster3", Server: "server3", Labels: map[string]string{"env": "production", "team": "team3"}}
    clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{cluster1, cluster2, cluster3}}
-   db.On("ListClusters", mock.Anything).Return(clusterList, nil)
+   db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)

    type testCases struct {
        testCombination
@@ -263,8 +263,8 @@ func newFakeApp(fakeAppYAML string) *argoappv1.Application {
    return &app
}

-func newFakeLister(fakeAppYAMLs ...string) (context.CancelFunc, applister.ApplicationLister) {
-   ctx, cancel := context.WithCancel(context.Background())
+func newFakeLister(ctx context.Context, fakeAppYAMLs ...string) (context.CancelFunc, applister.ApplicationLister) {
+   ctx, cancel := context.WithCancel(ctx)
    defer cancel()
    var fakeApps []runtime.Object
    for _, appYAML := range fakeAppYAMLs {
@@ -319,11 +319,11 @@ func testMetricServer(t *testing.T, fakeAppYAMLs []string, expectedResponse stri

func runTest(t *testing.T, cfg TestMetricServerConfig) {
    t.Helper()
-   cancel, appLister := newFakeLister(cfg.FakeAppYAMLs...)
+   cancel, appLister := newFakeLister(t.Context(), cfg.FakeAppYAMLs...)
    defer cancel()
    mockDB := mocks.NewArgoDB(t)
-   mockDB.On("GetClusterServersByName", mock.Anything, "cluster1").Return([]string{"https://localhost:6443"}, nil)
-   mockDB.On("GetCluster", mock.Anything, "https://localhost:6443").Return(&argoappv1.Cluster{Name: "cluster1", Server: "https://localhost:6443"}, nil)
+   mockDB.EXPECT().GetClusterServersByName(mock.Anything, "cluster1").Return([]string{"https://localhost:6443"}, nil).Maybe()
+   mockDB.EXPECT().GetCluster(mock.Anything, "https://localhost:6443").Return(&argoappv1.Cluster{Name: "cluster1", Server: "https://localhost:6443"}, nil).Maybe()
    metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, cfg.AppLabels, cfg.AppConditions, mockDB)
    require.NoError(t, err)

@@ -333,7 +333,7 @@ func runTest(t *testing.T, cfg TestMetricServerConfig) {
        metricsServ.registry.MustRegister(collector)
    }

-   req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
+   req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
    require.NoError(t, err)
    rr := httptest.NewRecorder()
    metricsServ.Handler.ServeHTTP(rr, req)
@@ -472,7 +472,7 @@ argocd_app_condition{condition="ExcludedResourceWarning",name="my-app-4",namespa
}

func TestMetricsSyncCounter(t *testing.T) {
-   cancel, appLister := newFakeLister()
+   cancel, appLister := newFakeLister(t.Context())
    defer cancel()
    mockDB := mocks.NewArgoDB(t)
    metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)

@@ -493,7 +493,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
    metricsServ.IncSync(fakeApp, "https://localhost:6443", &argoappv1.OperationState{Phase: common.OperationSucceeded})
    metricsServ.IncSync(fakeApp, "https://localhost:6443", &argoappv1.OperationState{Phase: common.OperationSucceeded})

-   req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
+   req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
    require.NoError(t, err)
    rr := httptest.NewRecorder()
    metricsServ.Handler.ServeHTTP(rr, req)

@@ -526,7 +526,7 @@ func assertMetricsNotPrinted(t *testing.T, expectedLines, body string) {
}

func TestMetricsSyncDuration(t *testing.T) {
-   cancel, appLister := newFakeLister()
+   cancel, appLister := newFakeLister(t.Context())
    defer cancel()
    mockDB := mocks.NewArgoDB(t)
    metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)

@@ -536,7 +536,7 @@ func TestMetricsSyncDuration(t *testing.T) {
    fakeAppOperationRunning := newFakeApp(fakeAppOperationRunning)
    metricsServ.IncAppSyncDuration(fakeAppOperationRunning, "https://localhost:6443", fakeAppOperationRunning.Status.OperationState)

-   req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
+   req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
    require.NoError(t, err)
    rr := httptest.NewRecorder()
    metricsServ.Handler.ServeHTTP(rr, req)

@@ -550,7 +550,7 @@ func TestMetricsSyncDuration(t *testing.T) {
    fakeAppOperationFinished := newFakeApp(fakeAppOperationFinished)
    metricsServ.IncAppSyncDuration(fakeAppOperationFinished, "https://localhost:6443", fakeAppOperationFinished.Status.OperationState)

-   req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
+   req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
    require.NoError(t, err)
    rr := httptest.NewRecorder()
    metricsServ.Handler.ServeHTTP(rr, req)

@@ -567,7 +567,7 @@ argocd_app_sync_duration_seconds_total{dest_server="https://localhost:6443",name
}

func TestReconcileMetrics(t *testing.T) {
-   cancel, appLister := newFakeLister()
+   cancel, appLister := newFakeLister(t.Context())
    defer cancel()
    mockDB := mocks.NewArgoDB(t)
    metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)

@@ -590,7 +590,7 @@ argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argoc
    fakeApp := newFakeApp(fakeApp)
    metricsServ.IncReconcile(fakeApp, "https://localhost:6443", 5*time.Second)

-   req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
+   req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
    require.NoError(t, err)
    rr := httptest.NewRecorder()
    metricsServ.Handler.ServeHTTP(rr, req)

@@ -601,7 +601,7 @@ argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argoc
}

func TestOrphanedResourcesMetric(t *testing.T) {
-   cancel, appLister := newFakeLister()
+   cancel, appLister := newFakeLister(t.Context())
    defer cancel()
    mockDB := mocks.NewArgoDB(t)
    metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)

@@ -616,7 +616,7 @@ argocd_app_orphaned_resources_count{name="my-app-4",namespace="argocd",project="
    numOrphanedResources := 1
    metricsServ.SetOrphanedResourcesMetric(app, numOrphanedResources)

-   req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
+   req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
    require.NoError(t, err)
    rr := httptest.NewRecorder()
    metricsServ.Handler.ServeHTTP(rr, req)

@@ -627,7 +627,7 @@ argocd_app_orphaned_resources_count{name="my-app-4",namespace="argocd",project="
}

func TestMetricsReset(t *testing.T) {
-   cancel, appLister := newFakeLister()
+   cancel, appLister := newFakeLister(t.Context())
    defer cancel()
    mockDB := mocks.NewArgoDB(t)
    metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)

@@ -641,7 +641,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name="my-app",namespace="argocd",phase="Succeeded",project="important-project"} 2
`

-   req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
+   req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
    require.NoError(t, err)
    rr := httptest.NewRecorder()
    metricsServ.Handler.ServeHTTP(rr, req)

@@ -652,7 +652,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
    err = metricsServ.SetExpiration(time.Second)
    require.NoError(t, err)
    time.Sleep(2 * time.Second)
-   req, err = http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
+   req, err = http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
    require.NoError(t, err)
    rr = httptest.NewRecorder()
    metricsServ.Handler.ServeHTTP(rr, req)

@@ -665,7 +665,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
}

func TestWorkqueueMetrics(t *testing.T) {
-   cancel, appLister := newFakeLister()
+   cancel, appLister := newFakeLister(t.Context())
    defer cancel()
    mockDB := mocks.NewArgoDB(t)
    metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)

@@ -685,7 +685,7 @@ workqueue_unfinished_work_seconds{controller="test",name="test"}
`
    workqueue.NewNamed("test")

-   req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
+   req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
    require.NoError(t, err)
    rr := httptest.NewRecorder()
    metricsServ.Handler.ServeHTTP(rr, req)

@@ -696,7 +696,7 @@ workqueue_unfinished_work_seconds{controller="test",name="test"}
}

func TestGoMetrics(t *testing.T) {
-   cancel, appLister := newFakeLister()
+   cancel, appLister := newFakeLister(t.Context())
    defer cancel()
    mockDB := mocks.NewArgoDB(t)
    metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)

@@ -718,7 +718,7 @@ go_memstats_sys_bytes
go_threads
`

-   req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
+   req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
    require.NoError(t, err)
    rr := httptest.NewRecorder()
    metricsServ.Handler.ServeHTTP(rr, req)
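The request-construction change repeated throughout these metrics tests is the same everywhere: bind the request to the test's context so in-flight handler work is canceled when the test ends. A minimal sketch, with http.NotFoundHandler standing in for the real metricsServ.Handler (t.Context requires Go 1.24+):

```go
package metrics_test

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestMetricsRequestSketch(t *testing.T) {
	// t.Context() is canceled automatically when the test finishes.
	req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
	require.NoError(t, err)
	rr := httptest.NewRecorder()
	http.NotFoundHandler().ServeHTTP(rr, req) // stand-in handler
	require.Equal(t, http.StatusNotFound, rr.Code)
}
```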
@@ -28,7 +28,7 @@ import (
func TestGetShardByID_NotEmptyID(t *testing.T) {
    db := &dbmocks.ArgoDB{}
    replicasCount := 1
-   db.On("GetApplicationControllerReplicas").Return(replicasCount)
+   db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
    assert.Equal(t, 0, LegacyDistributionFunction(replicasCount)(&v1alpha1.Cluster{ID: "1"}))
    assert.Equal(t, 0, LegacyDistributionFunction(replicasCount)(&v1alpha1.Cluster{ID: "2"}))
    assert.Equal(t, 0, LegacyDistributionFunction(replicasCount)(&v1alpha1.Cluster{ID: "3"}))

@@ -38,7 +38,7 @@ func TestGetShardByID_NotEmptyID(t *testing.T) {
func TestGetShardByID_EmptyID(t *testing.T) {
    db := &dbmocks.ArgoDB{}
    replicasCount := 1
-   db.On("GetApplicationControllerReplicas").Return(replicasCount)
+   db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
    distributionFunction := LegacyDistributionFunction
    shard := distributionFunction(replicasCount)(&v1alpha1.Cluster{})
    assert.Equal(t, 0, shard)

@@ -46,7 +46,7 @@ func TestGetShardByID_EmptyID(t *testing.T) {

func TestGetShardByID_NoReplicas(t *testing.T) {
    db := &dbmocks.ArgoDB{}
-   db.On("GetApplicationControllerReplicas").Return(0)
+   db.EXPECT().GetApplicationControllerReplicas().Return(0).Maybe()
    distributionFunction := LegacyDistributionFunction
    shard := distributionFunction(0)(&v1alpha1.Cluster{})
    assert.Equal(t, -1, shard)

@@ -54,7 +54,7 @@ func TestGetShardByID_NoReplicas(t *testing.T) {

func TestGetShardByID_NoReplicasUsingHashDistributionFunction(t *testing.T) {
    db := &dbmocks.ArgoDB{}
-   db.On("GetApplicationControllerReplicas").Return(0)
+   db.EXPECT().GetApplicationControllerReplicas().Return(0).Maybe()
    distributionFunction := LegacyDistributionFunction
    shard := distributionFunction(0)(&v1alpha1.Cluster{})
    assert.Equal(t, -1, shard)

@@ -63,7 +63,7 @@ func TestGetShardByID_NoReplicasUsingHashDistributionFunction(t *testing.T) {
func TestGetShardByID_NoReplicasUsingHashDistributionFunctionWithClusters(t *testing.T) {
    clusters, db, cluster1, cluster2, cluster3, cluster4, cluster5 := createTestClusters()
    // Test with replicas set to 0
-   db.On("GetApplicationControllerReplicas").Return(0)
+   db.EXPECT().GetApplicationControllerReplicas().Return(0).Maybe()
    t.Setenv(common.EnvControllerShardingAlgorithm, common.RoundRobinShardingAlgorithm)
    distributionFunction := RoundRobinDistributionFunction(clusters, 0)
    assert.Equal(t, -1, distributionFunction(nil))

@@ -91,7 +91,7 @@ func TestGetClusterFilterLegacy(t *testing.T) {
    // shardIndex := 1 // ensuring that a shard with index 1 will process all the clusters with an "even" id (2,4,6,...)
    clusterAccessor, db, cluster1, cluster2, cluster3, cluster4, _ := createTestClusters()
    replicasCount := 2
-   db.On("GetApplicationControllerReplicas").Return(replicasCount)
+   db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
    t.Setenv(common.EnvControllerShardingAlgorithm, common.LegacyShardingAlgorithm)
    distributionFunction := RoundRobinDistributionFunction(clusterAccessor, replicasCount)
    assert.Equal(t, 0, distributionFunction(nil))

@@ -109,7 +109,7 @@ func TestGetClusterFilterUnknown(t *testing.T) {
    os.Unsetenv(common.EnvControllerShardingAlgorithm)
    t.Setenv(common.EnvControllerShardingAlgorithm, "unknown")
    replicasCount := 2
-   db.On("GetApplicationControllerReplicas").Return(replicasCount)
+   db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
    distributionFunction := GetDistributionFunction(clusterAccessor, appAccessor, "unknown", replicasCount)
    assert.Equal(t, 0, distributionFunction(nil))
    assert.Equal(t, 0, distributionFunction(&cluster1))

@@ -124,7 +124,7 @@ func TestLegacyGetClusterFilterWithFixedShard(t *testing.T) {
    clusterAccessor, db, cluster1, cluster2, cluster3, cluster4, _ := createTestClusters()
    appAccessor, _, _, _, _, _ := createTestApps()
    replicasCount := 5
-   db.On("GetApplicationControllerReplicas").Return(replicasCount)
+   db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
    filter := GetDistributionFunction(clusterAccessor, appAccessor, common.DefaultShardingAlgorithm, replicasCount)
    assert.Equal(t, 0, filter(nil))
    assert.Equal(t, 4, filter(&cluster1))

@@ -151,7 +151,7 @@ func TestRoundRobinGetClusterFilterWithFixedShard(t *testing.T) {
    clusterAccessor, db, cluster1, cluster2, cluster3, cluster4, _ := createTestClusters()
    appAccessor, _, _, _, _, _ := createTestApps()
    replicasCount := 4
-   db.On("GetApplicationControllerReplicas").Return(replicasCount)
+   db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()

    filter := GetDistributionFunction(clusterAccessor, appAccessor, common.RoundRobinShardingAlgorithm, replicasCount)
    assert.Equal(t, 0, filter(nil))

@@ -182,7 +182,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunction2(t *testing.T) {

    t.Run("replicas set to 1", func(t *testing.T) {
        replicasCount := 1
-       db.On("GetApplicationControllerReplicas").Return(replicasCount).Once()
+       db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Once()
        distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
        assert.Equal(t, 0, distributionFunction(nil))
        assert.Equal(t, 0, distributionFunction(&cluster1))

@@ -194,7 +194,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunction2(t *testing.T) {

    t.Run("replicas set to 2", func(t *testing.T) {
        replicasCount := 2
-       db.On("GetApplicationControllerReplicas").Return(replicasCount).Once()
+       db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Once()
        distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
        assert.Equal(t, 0, distributionFunction(nil))
        assert.Equal(t, 0, distributionFunction(&cluster1))

@@ -206,7 +206,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunction2(t *testing.T) {

    t.Run("replicas set to 3", func(t *testing.T) {
        replicasCount := 3
-       db.On("GetApplicationControllerReplicas").Return(replicasCount).Once()
+       db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Once()
        distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
        assert.Equal(t, 0, distributionFunction(nil))
        assert.Equal(t, 0, distributionFunction(&cluster1))

@@ -232,7 +232,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterNumber
    t.Setenv(common.EnvControllerReplicas, strconv.Itoa(replicasCount))
    _, db, _, _, _, _, _ := createTestClusters()
    clusterAccessor := func() []*v1alpha1.Cluster { return clusterPointers }
-   db.On("GetApplicationControllerReplicas").Return(replicasCount)
+   db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
    distributionFunction := RoundRobinDistributionFunction(clusterAccessor, replicasCount)
    for i, c := range clusterPointers {
        assert.Equal(t, i%2, distributionFunction(c))

@@ -240,7 +240,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterNumber
}

func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterIsAddedAndRemoved(t *testing.T) {
-   db := dbmocks.ArgoDB{}
+   db := &dbmocks.ArgoDB{}
    cluster1 := createCluster("cluster1", "1")
    cluster2 := createCluster("cluster2", "2")
    cluster3 := createCluster("cluster3", "3")

@@ -252,10 +252,10 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterIsAdde
    clusterAccessor := getClusterAccessor(clusters)

    clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{cluster1, cluster2, cluster3, cluster4, cluster5}}
-   db.On("ListClusters", mock.Anything).Return(clusterList, nil)
+   db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
    // Test with replicas set to 2
    replicasCount := 2
-   db.On("GetApplicationControllerReplicas").Return(replicasCount)
+   db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
    distributionFunction := RoundRobinDistributionFunction(clusterAccessor, replicasCount)
    assert.Equal(t, 0, distributionFunction(nil))
    assert.Equal(t, 0, distributionFunction(&cluster1))

@@ -277,7 +277,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterIsAdde
}

func TestConsistentHashingWhenClusterIsAddedAndRemoved(t *testing.T) {
-   db := dbmocks.ArgoDB{}
+   db := &dbmocks.ArgoDB{}
    clusterCount := 133
    prefix := "cluster"

@@ -290,10 +290,10 @@ func TestConsistentHashingWhenClusterIsAddedAndRemoved(t *testing.T) {
    clusterAccessor := getClusterAccessor(clusters)
    appAccessor, _, _, _, _, _ := createTestApps()
    clusterList := &v1alpha1.ClusterList{Items: clusters}
-   db.On("ListClusters", mock.Anything).Return(clusterList, nil)
+   db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
    // Test with replicas set to 3
    replicasCount := 3
-   db.On("GetApplicationControllerReplicas").Return(replicasCount)
+   db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
    distributionFunction := ConsistentHashingWithBoundedLoadsDistributionFunction(clusterAccessor, appAccessor, replicasCount)
    assert.Equal(t, 0, distributionFunction(nil))
    distributionMap := map[int]int{}

@@ -347,32 +347,32 @@ func TestConsistentHashingWhenClusterIsAddedAndRemoved(t *testing.T) {
}

func TestConsistentHashingWhenClusterWithZeroReplicas(t *testing.T) {
-   db := dbmocks.ArgoDB{}
+   db := &dbmocks.ArgoDB{}
    clusters := []v1alpha1.Cluster{createCluster("cluster-01", "01")}
    clusterAccessor := getClusterAccessor(clusters)
    clusterList := &v1alpha1.ClusterList{Items: clusters}
-   db.On("ListClusters", mock.Anything).Return(clusterList, nil)
+   db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
    appAccessor, _, _, _, _, _ := createTestApps()
    // Test with replicas set to 0
    replicasCount := 0
-   db.On("GetApplicationControllerReplicas").Return(replicasCount)
+   db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
    distributionFunction := ConsistentHashingWithBoundedLoadsDistributionFunction(clusterAccessor, appAccessor, replicasCount)
    assert.Equal(t, -1, distributionFunction(nil))
}

func TestConsistentHashingWhenClusterWithFixedShard(t *testing.T) {
-   db := dbmocks.ArgoDB{}
+   db := &dbmocks.ArgoDB{}
    var fixedShard int64 = 1
    cluster := &v1alpha1.Cluster{ID: "1", Shard: &fixedShard}
    clusters := []v1alpha1.Cluster{*cluster}

    clusterAccessor := getClusterAccessor(clusters)
    clusterList := &v1alpha1.ClusterList{Items: clusters}
-   db.On("ListClusters", mock.Anything).Return(clusterList, nil)
+   db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)

    // Test with replicas set to 5
    replicasCount := 5
-   db.On("GetApplicationControllerReplicas").Return(replicasCount)
+   db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
    appAccessor, _, _, _, _, _ := createTestApps()
    distributionFunction := ConsistentHashingWithBoundedLoadsDistributionFunction(clusterAccessor, appAccessor, replicasCount)
    assert.Equal(t, fixedShard, int64(distributionFunction(cluster)))

@@ -381,7 +381,7 @@ func TestConsistentHashingWhenClusterWithFixedShard(t *testing.T) {
func TestGetShardByIndexModuloReplicasCountDistributionFunction(t *testing.T) {
    clusters, db, cluster1, cluster2, _, _, _ := createTestClusters()
    replicasCount := 2
-   db.On("GetApplicationControllerReplicas").Return(replicasCount)
+   db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
    distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)

    // Test that the function returns the correct shard for cluster1 and cluster2

@@ -419,7 +419,7 @@ func TestInferShard(t *testing.T) {
}

func createTestClusters() (clusterAccessor, *dbmocks.ArgoDB, v1alpha1.Cluster, v1alpha1.Cluster, v1alpha1.Cluster, v1alpha1.Cluster, v1alpha1.Cluster) {
-   db := dbmocks.ArgoDB{}
+   db := &dbmocks.ArgoDB{}
    cluster1 := createCluster("cluster1", "1")
    cluster2 := createCluster("cluster2", "2")
    cluster3 := createCluster("cluster3", "3")

@@ -428,10 +428,10 @@ func createTestClusters() (clusterAccessor, *dbmocks.ArgoDB, v1alpha1.Cluster, v

    clusters := []v1alpha1.Cluster{cluster1, cluster2, cluster3, cluster4, cluster5}

-   db.On("ListClusters", mock.Anything).Return(&v1alpha1.ClusterList{Items: []v1alpha1.Cluster{
+   db.EXPECT().ListClusters(mock.Anything).Return(&v1alpha1.ClusterList{Items: []v1alpha1.Cluster{
        cluster1, cluster2, cluster3, cluster4, cluster5,
    }}, nil)
-   return getClusterAccessor(clusters), &db, cluster1, cluster2, cluster3, cluster4, cluster5
+   return getClusterAccessor(clusters), db, cluster1, cluster2, cluster3, cluster4, cluster5
|
||||
}
|
||||
|
||||
func getClusterAccessor(clusters []v1alpha1.Cluster) clusterAccessor {
|
||||
|
||||
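A recurring change in the hunks above is a migration from mockery's string-based `On(...)` stubbing to the generated, type-checked `EXPECT()` expecter API, with mocks now constructed as pointers. A minimal sketch of the pattern, assuming a mockery-generated `ArgoDB` mock with the expecter enabled (import paths and values are illustrative):

```go
package sharding_test

import (
	"testing"

	"github.com/stretchr/testify/mock"

	"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
	dbmocks "github.com/argoproj/argo-cd/v3/util/db/mocks"
)

func TestExpecterStyleSketch(t *testing.T) {
	db := &dbmocks.ArgoDB{} // pointer, matching the new style

	// Before: db.On("GetApplicationControllerReplicas").Return(2)
	// After: a typo in the method name is a compile error, not a silently
	// unmatched string expectation.
	db.EXPECT().GetApplicationControllerReplicas().Return(2).Maybe() // zero or more calls allowed
	db.EXPECT().ListClusters(mock.Anything).Return(&v1alpha1.ClusterList{}, nil)
}
```

`.Once()` and `.Maybe()` carry over unchanged; they only constrain how often the stub may be matched.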
@@ -16,14 +16,14 @@ import (

func TestLargeShuffle(t *testing.T) {
t.Skip()
db := dbmocks.ArgoDB{}
db := &dbmocks.ArgoDB{}
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{}}
for i := 0; i < math.MaxInt/4096; i += 256 {
// fmt.Fprintf(os.Stdout, "%d", i)
cluster := createCluster(fmt.Sprintf("cluster-%d", i), strconv.Itoa(i))
clusterList.Items = append(clusterList.Items, cluster)
}
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
clusterAccessor := getClusterAccessor(clusterList.Items)
// Test with replicas set to 256
replicasCount := 256
@@ -36,7 +36,7 @@ func TestLargeShuffle(t *testing.T) {

func TestShuffle(t *testing.T) {
t.Skip()
db := dbmocks.ArgoDB{}
db := &dbmocks.ArgoDB{}
cluster1 := createCluster("cluster1", "10")
cluster2 := createCluster("cluster2", "20")
cluster3 := createCluster("cluster3", "30")
@@ -46,7 +46,7 @@ func TestShuffle(t *testing.T) {
cluster25 := createCluster("cluster6", "25")

clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{cluster1, cluster2, cluster3, cluster4, cluster5, cluster6}}
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
clusterAccessor := getClusterAccessor(clusterList.Items)
// Test with replicas set to 3
t.Setenv(common.EnvControllerReplicas, "3")

@@ -41,13 +41,18 @@ import (
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
appstatecache "github.com/argoproj/argo-cd/v3/util/cache/appstate"
"github.com/argoproj/argo-cd/v3/util/db"
"github.com/argoproj/argo-cd/v3/util/env"
"github.com/argoproj/argo-cd/v3/util/gpg"
utilio "github.com/argoproj/argo-cd/v3/util/io"
"github.com/argoproj/argo-cd/v3/util/settings"
"github.com/argoproj/argo-cd/v3/util/stats"
)

var ErrCompareStateRepo = errors.New("failed to get repo objects")
var (
ErrCompareStateRepo = errors.New("failed to get repo objects")

processManifestGeneratePathsEnabled = env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_PROCESS_MANIFEST_GENERATE_PATHS", true)
)

type resourceInfoProviderStub struct{}

@@ -70,7 +75,7 @@ type managedResource struct {

// AppStateManager defines methods which allow to compare application spec and actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, state *v1alpha1.OperationState)
GetRepoObjs(ctx context.Context, app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
}
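The signature changes in this hunk also apply a small Go idiom: consecutive parameters of the same type can share one type declaration. The two forms below are equivalent; the hunks switch to the second (names are illustrative):

```go
func compareVerbose(noCache bool, noRevisionCache bool) {} // before
func compareCompact(noCache, noRevisionCache bool)      {} // after, identical signature
```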
@@ -253,7 +258,14 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
appNamespace = ""
}

if !source.IsHelm() && !source.IsOCI() && syncedRevision != "" && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
updateRevisions := processManifestGeneratePathsEnabled &&
// updating revisions result is not required if automated sync is not enabled
app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.Automated != nil &&
// using updating revisions gains performance only if manifest generation is required.
// just reading pre-generated manifests is comparable to updating revisions time-wise
app.Status.SourceType != v1alpha1.ApplicationSourceTypeDirectory

if updateRevisions && repo.Depth == 0 && !source.IsHelm() && !source.IsOCI() && syncedRevision != "" && syncedRevision != revision && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
// Validate the manifest-generate-path annotation to avoid generating manifests if it has not changed.
updateRevisionResult, err := repoClient.UpdateRevisionForPaths(ctx, &apiclient.UpdateRevisionForPathsRequest{
Repo: repo,
@@ -350,7 +362,7 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
}

// ResolveGitRevision will resolve the given revision to a full commit SHA. Only works for git.
func (m *appStateManager) ResolveGitRevision(repoURL string, revision string) (string, error) {
func (m *appStateManager) ResolveGitRevision(repoURL, revision string) (string, error) {
conn, repoClient, err := m.repoClientset.NewRepoServerClient()
if err != nil {
return "", fmt.Errorf("failed to connect to repo server: %w", err)
@@ -529,7 +541,7 @@ func isManagedNamespace(ns *unstructured.Unstructured, app *v1alpha1.Application
// CompareAppState compares application git state to the live app state, using the specified
// revision and supplied source. If revision or overrides are empty, then compares against
// revision and overrides in the app spec.
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
ts := stats.NewTimingStats()
logCtx := log.WithFields(applog.GetAppLogFields(app))


@@ -48,7 +48,7 @@ func TestCompareAppStateEmpty(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
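The second migration repeated through the test hunks below is plumbing a context into the fake controller. `t.Context()` (introduced in Go 1.24) returns a context that is canceled shortly before the test finishes, so background goroutines started by the controller stop with the test. A standalone sketch, with a hypothetical constructor standing in for the repo's `newFakeController`:

```go
package main_test

import (
	"context"
	"testing"
)

// fakeController stands in for the repository's test controller type.
type fakeController struct{ ctx context.Context }

// The constructor now receives its context instead of creating one itself.
func newFakeController(ctx context.Context) *fakeController {
	return &fakeController{ctx: ctx}
}

func TestContextPlumbingSketch(t *testing.T) {
	ctrl := newFakeController(t.Context()) // canceled automatically at test teardown
	if err := ctrl.ctx.Err(); err != nil {
		t.Fatalf("context should still be live while the test runs: %v", err)
	}
}
```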
@@ -66,7 +66,7 @@ func TestCompareAppStateEmpty(t *testing.T) {
// TestCompareAppStateRepoError tests the case when CompareAppState notices a repo error
func TestCompareAppStateRepoError(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{manifestResponses: make([]*apiclient.ManifestResponse, 3)}, errors.New("test repo error"))
ctrl := newFakeController(t.Context(), &fakeData{manifestResponses: make([]*apiclient.ManifestResponse, 3)}, errors.New("test repo error"))
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -112,7 +112,7 @@ func TestCompareAppStateNamespaceMetadataDiffers(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -161,7 +161,7 @@ func TestCompareAppStateNamespaceMetadataDiffersToManifest(t *testing.T) {
kube.GetResourceKey(ns): ns,
},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -219,7 +219,7 @@ func TestCompareAppStateNamespaceMetadata(t *testing.T) {
kube.GetResourceKey(ns): ns,
},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -278,7 +278,7 @@ func TestCompareAppStateNamespaceMetadataIsTheSame(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -306,7 +306,7 @@ func TestCompareAppStateMissing(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -338,7 +338,7 @@ func TestCompareAppStateExtra(t *testing.T) {
key: pod,
},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -369,7 +369,7 @@ func TestCompareAppStateHook(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -401,7 +401,7 @@ func TestCompareAppStateSkipHook(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -431,7 +431,7 @@ func TestCompareAppStateCompareOptionIgnoreExtraneous(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)

sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
@@ -465,7 +465,7 @@ func TestCompareAppStateExtraHook(t *testing.T) {
key: pod,
},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -494,7 +494,7 @@ func TestAppRevisionsSingleSource(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)

app := newFakeApp()
revisions := make([]string, 0)
@@ -534,7 +534,7 @@ func TestAppRevisionsMultiSource(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)

app := newFakeMultiSourceApp()
revisions := make([]string, 0)
@@ -583,7 +583,7 @@ func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
kube.GetResourceKey(obj3): obj3,
},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -624,7 +624,7 @@ func TestCompareAppStateManagedNamespaceMetadataWithLiveNsDoesNotGetPruned(t *te
kube.GetResourceKey(ns): ns,
},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, []string{}, app.Spec.Sources, false, false, nil, false)
require.NoError(t, err)

@@ -676,7 +676,7 @@ func TestCompareAppStateWithManifestGeneratePath(t *testing.T) {
updateRevisionForPathsResponse: &apiclient.UpdateRevisionForPathsResponse{},
}

ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
revisions := make([]string, 0)
revisions = append(revisions, "abc123")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, app.Spec.GetSources(), false, false, nil, false)
@@ -698,7 +698,7 @@ func TestSetHealth(t *testing.T) {
Namespace: "default",
},
})
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -734,7 +734,7 @@ func TestPreserveStatusTimestamp(t *testing.T) {
Namespace: "default",
},
})
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -770,7 +770,7 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
Namespace: "default",
},
})
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -799,7 +799,7 @@ func TestSetManagedResourcesWithOrphanedResources(t *testing.T) {
proj.Spec.OrphanedResources = &v1alpha1.OrphanedResourcesMonitorSettings{}

app := newFakeApp()
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app.Namespace, "guestbook"): {
@@ -828,7 +828,7 @@ func TestSetManagedResourcesWithResourcesOfAnotherApp(t *testing.T) {
app2 := newFakeApp()
app2.Name = "app2"

ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app1, app2, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app2.Namespace, "guestbook"): {
@@ -852,7 +852,7 @@ func TestReturnUnknownComparisonStateOnSettingLoadError(t *testing.T) {

app := newFakeApp()

ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, proj},
configMapData: map[string]string{
"resource.customizations": "invalid setting",
@@ -878,7 +878,7 @@ func TestSetManagedResourcesKnownOrphanedResourceExceptions(t *testing.T) {
app := newFakeApp()
app.Namespace = "default"

ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app.Namespace, "guestbook"): {
@@ -902,7 +902,7 @@ func TestSetManagedResourcesKnownOrphanedResourceExceptions(t *testing.T) {

func Test_appStateManager_persistRevisionHistory(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app},
}, nil)
manager := ctrl.appStateManager.(*appStateManager)
@@ -1007,7 +1007,7 @@ func TestSignedResponseNoSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1034,7 +1034,7 @@ func TestSignedResponseNoSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1066,7 +1066,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1093,7 +1093,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1120,7 +1120,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1147,7 +1147,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1175,7 +1175,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
testProj := signedProj
testProj.Spec.SignatureKeys[0].KeyID = "4AEE18F83AFDEB24"
sources := make([]v1alpha1.ApplicationSource, 0)
@@ -1207,7 +1207,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
}
// it doesn't matter for our test whether local manifests are valid
localManifests := []string{"foobar"}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1237,7 +1237,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1267,7 +1267,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
}
// it doesn't matter for our test whether local manifests are valid
localManifests := []string{""}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1395,7 +1395,7 @@ func TestIsLiveResourceManaged(t *testing.T) {
},
},
})
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -1765,7 +1765,7 @@ func TestCompareAppStateDefaultRevisionUpdated(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1788,7 +1788,7 @@ func TestCompareAppStateRevisionUpdatedWithHelmSource(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1807,10 +1807,10 @@ func Test_normalizeClusterScopeTracking(t *testing.T) {
Namespace: "test",
},
})
c := cachemocks.ClusterCache{}
c.On("IsNamespaced", mock.Anything).Return(false, nil)
c := &cachemocks.ClusterCache{}
c.EXPECT().IsNamespaced(mock.Anything).Return(false, nil)
var called bool
err := normalizeClusterScopeTracking([]*unstructured.Unstructured{obj}, &c, func(u *unstructured.Unstructured) error {
err := normalizeClusterScopeTracking([]*unstructured.Unstructured{obj}, c, func(u *unstructured.Unstructured) error {
// We expect that the normalization function will call this callback with an obj that has had the namespace set
// to empty.
called = true
@@ -1838,7 +1838,7 @@ func TestCompareAppState_DoesNotCallUpdateRevisionForPaths_ForOCI(t *testing.T)
Revision: "abc123",
},
}
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))

source := app.Spec.GetSource()
source.RepoURL = "oci://example.com/argo/argo-cd"

@@ -75,7 +75,7 @@ func TestPersistRevisionHistory(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)

// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -121,7 +121,7 @@ func TestPersistManagedNamespaceMetadataState(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)

// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -152,7 +152,7 @@ func TestPersistRevisionHistoryRollback(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)

// Sync with source specified
source := v1alpha1.ApplicationSource{
@@ -206,7 +206,7 @@ func TestSyncComparisonError(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)

// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -263,7 +263,7 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
},
managedLiveObjs: liveObjects,
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)

return &fixture{
application: app,
@@ -350,7 +350,7 @@ func TestSyncWindowDeniesSync(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)

return &fixture{
application: app,
@@ -1620,7 +1620,7 @@ func TestSyncWithImpersonate(t *testing.T) {
},
additionalObjs: additionalObjs,
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
return &fixture{
application: app,
project: project,
@@ -1780,7 +1780,7 @@ func TestClientSideApplyMigration(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)

return &fixture{
application: app,

BIN docs/assets/application-view-extension.png (new binary file, 228 KiB, not shown)
@@ -68,9 +68,9 @@ make builder-image IMAGE_NAMESPACE=argoproj IMAGE_TAG=v1.0.0
Every commit to master is built and published to `ghcr.io/argoproj/argo-cd/argocd:<version>-<short-sha>`. The list of images is available at
[https://github.com/argoproj/argo-cd/packages](https://github.com/argoproj/argo-cd/packages).

!!! note
    GitHub docker registry [requires](https://github.community/t5/GitHub-Actions/docker-pull-from-public-GitHub-Package-Registry-fail-with-quot/m-p/32888#M1294) authentication to read
    even publicly available packages. Follow the steps from Kubernetes [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry)
    to configure image pull secret if you want to use `ghcr.io/argoproj/argo-cd/argocd` image.
> [!NOTE]
> GitHub docker registry [requires](https://github.community/t5/GitHub-Actions/docker-pull-from-public-GitHub-Package-Registry-fail-with-quot/m-p/32888#M1294) authentication to read
> even publicly available packages. Follow the steps from Kubernetes [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry)
> to configure image pull secret if you want to use `ghcr.io/argoproj/argo-cd/argocd` image.

The image is automatically deployed to the dev Argo CD instance: [https://cd.apps.argoproj.io/](https://cd.apps.argoproj.io/)

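For illustration, the pull-secret setup that the note above links to boils down to something like the following; the secret name and namespace are assumptions, and `<user>`/`<token>` must be replaced with real GitHub credentials:

```shell
kubectl create secret docker-registry ghcr-pull-secret \
  --docker-server=ghcr.io \
  --docker-username=<user> \
  --docker-password=<token> \
  --namespace argocd
```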
docs/developer-guide/custom-resource-icons.md (new file, 18 lines)
@@ -0,0 +1,18 @@
The Argo CD UI displays icons for various Kubernetes resource types to help users quickly identify them. Argo CD
includes a set of built-in icons for common resource types.

You can contribute additional icons for custom resource types by following these steps:

1. Ensure the license is compatible with Apache 2.0.
2. Add the icon file to the `ui/src/assets/images/resources/<group>/icon.svg` path in the Argo CD repository.
3. Modify the SVG to use the correct color, `#8fa4b1`.
4. Run `make resourceiconsgen` to update the generated typescript file that lists all available icons.
5. Create a pull request to the Argo CD repository with your changes.

`<group>` is the API group of the custom resource. For example, if you are adding an icon for a custom resource with the
API group `example.com`, you would place the icon at `ui/src/assets/images/resources/example.com/icon.svg`.

If you want the same icon to apply to resources in multiple API groups with the same suffix, you can create a directory
prefixed with an underscore. The underscore will be interpreted as a wildcard. For example, to apply the same icon to
resources in the `example.com` and `another.example.com` API groups, you would place the icon at
`ui/src/assets/images/resources/_.example.com/icon.svg`.
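A hypothetical walk-through of the steps above for a custom resource in the `example.com` API group (the icon file name is a placeholder):

```shell
mkdir -p ui/src/assets/images/resources/example.com
cp my-crd-icon.svg ui/src/assets/images/resources/example.com/icon.svg
# edit the SVG so it uses the fill color #8fa4b1, then regenerate the icon list:
make resourceiconsgen
```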
@@ -26,8 +26,8 @@ api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run
```
This configuration example will be used as the basis for the next steps.

!!! note
    The Procfile for a component may change with time. Please go through the Procfile and make sure you use the latest configuration for debugging.
> [!NOTE]
> The Procfile for a component may change with time. Please go through the Procfile and make sure you use the latest configuration for debugging.

### Configure component env variables
The component that you will run in your IDE for debugging (`api-server` in our case) will need env variables. Copy the env variables from `Procfile`, located in the `argo-cd` root folder of your development branch. The env variables are located before the `$COMMAND` section in the `sh -c` section of the component run command.
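For orientation, a component's Procfile entry roughly follows the shape sketched below; the variable names are placeholders, so always copy the current values from the real `Procfile`:

```shell
# the NAME=value pairs before $COMMAND are the env variables to replicate in the IDE
api-server: sh -c 'SOME_ENV=value OTHER_ENV=value $COMMAND'
```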
@@ -112,8 +112,8 @@ Example for an `api-server` launch configuration snippet, based on our above exa
</component>
```

!!! note
    As an alternative to importing the above file to Goland, you can create a Run/Debug Configuration using the official [Goland docs](https://www.jetbrains.com/help/go/go-build.html) and just copy the `parameters`, `directory` and `PATH` sections from the example above (specifying `Run kind` as `Directory` in the Run/Debug Configurations wizard)
> [!NOTE]
> As an alternative to importing the above file to Goland, you can create a Run/Debug Configuration using the official [Goland docs](https://www.jetbrains.com/help/go/go-build.html) and just copy the `parameters`, `directory` and `PATH` sections from the example above (specifying `Run kind` as `Directory` in the Run/Debug Configurations wizard)

## Run Argo CD without the debugged component
Next, we need to run all Argo CD components, except for the debugged component (because we will run this component separately in the IDE).
@@ -143,4 +143,4 @@ To debug the `api-server`, run:
Finally, run the component you wish to debug from your IDE and make sure it does not have any errors.

## Important
When running Argo CD components separately, ensure components aren't creating conflicts - each component needs to be up exactly once, be it running locally with the local toolchain or running from your IDE. Otherwise you may get errors about ports not available or even debugging a process that does not contain your code changes.
When running Argo CD components separately, ensure components aren't creating conflicts - each component needs to be up exactly once, be it running locally with the local toolchain or running from your IDE. Otherwise you may get errors about ports not available or even debugging a process that does not contain your code changes.

@@ -21,7 +21,7 @@ curl -sSfL https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/i
Connect to one of the services, for example, to debug the main ArgoCD server run:
```shell
kubectl config set-context --current --namespace argocd
telepresence helm install --set agent.securityContext={} # Installs telepresence into your cluster
telepresence helm install --set-json agent.securityContext={} # Installs telepresence into your cluster
telepresence connect # Starts the connection to your cluster (bound to the current namespace)
telepresence intercept argocd-server --port 8080:http --env-file .envrc.remote # Starts the interception
```
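The `--env-file` flag writes the intercepted pod's environment to `.envrc.remote`; one minimal way to reuse it when starting the component locally (illustrative POSIX shell):

```shell
set -a              # auto-export every variable sourced below
. ./.envrc.remote   # environment captured from the intercepted pod
set +a
```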
@@ -37,7 +37,7 @@ telepresence status

Stop the intercept using:
```shell
telepresence leave argocd-server-argocd
telepresence leave argocd-server
```

And uninstall telepresence from your cluster:

@@ -1,38 +1,5 @@
# Managing Dependencies

## GitOps Engine (`github.com/argoproj/gitops-engine`)

### Repository

https://github.com/argoproj/gitops-engine

### Pulling changes from `gitops-engine`

After your GitOps Engine PR has been merged, ArgoCD needs to be updated to pull in the version of the GitOps engine that contains your change. Here are the steps:

- Retrieve the SHA hash for your commit. You will use this in the next step.
- From the `argo-cd` folder, run the following command

  `go get github.com/argoproj/gitops-engine@<git-commit-sha>`

  If you get an error message `invalid version: unknown revision` then you got the wrong SHA hash

- Run:

  `go mod tidy`

- The following files are changed:

  - `go.mod`
  - `go.sum`

- Create an ArgoCD PR with a `refactor:` type in its title for the two file changes.

### Tips:

- See https://github.com/argoproj/argo-cd/pull/4434 as an example
- The PR might require additional, dependent changes in ArgoCD that are directly impacted by the changes made in the engine.

## Notifications Engine (`github.com/argoproj/notifications-engine`)

### Repository

@@ -7,7 +7,7 @@
## Preface
When you have developed and possibly manually tested the code you want to contribute, you should ensure that everything builds correctly. Commit your changes locally and perform the following steps; for each step, the commands for both the local and virtualized toolchain are listed.

### Docker priviliges for virtualized toolchain users
### Docker privileges for virtualized toolchain users
[These instructions](toolchain-guide.md#docker-privileges) are relevant for most of the steps below

### Using Podman for virtualized toolchain users
@@ -29,11 +29,6 @@ As build dependencies change over time, you have to synchronize your development

* `make dep-ui` or `make dep-ui-local`

Argo CD recently migrated to Go modules. Usually, dependencies will be downloaded at build time, but the Makefile provides two targets to download and vendor all dependencies:

* `make mod-download` or `make mod-download-local` will download all required Go modules and
* `make mod-vendor` or `make mod-vendor-local` will vendor those dependencies into the Argo CD source tree

### Generate API glue code and other assets

Argo CD relies on Google's [Protocol Buffers](https://developers.google.com/protocol-buffers) for its API, and this makes heavy use of auto-generated glue code and stubs. Whenever you touch parts of the API code, you must re-generate the auto-generated code.
@@ -42,8 +37,8 @@ Argo CD relies on Google's [Protocol Buffers](https://developers.google.com/prot
* Check if something has changed by running `git status` or `git diff`
* Commit any possible changes to your local Git branch, an appropriate commit message would be `Changes from codegen`, for example.

!!!note
    There are a few non-obvious assets that are auto-generated. You should not change the autogenerated assets, as they will be overwritten by a subsequent run of `make codegen`. Instead, change their source files. Prominent examples of non-obvious auto-generated code are `swagger.json` or the installation manifest YAMLs.
> [!NOTE]
> There are a few non-obvious assets that are auto-generated. You should not change the autogenerated assets, as they will be overwritten by a subsequent run of `make codegen`. Instead, change their source files. Prominent examples of non-obvious auto-generated code are `swagger.json` or the installation manifest YAMLs.

### Build your code and run unit tests

@@ -38,7 +38,7 @@ If you want to build and test the site directly on your local machine without th

## Analytics

!!! tip
    Don't forget to disable your ad-blocker when testing.
> [!TIP]
> Don't forget to disable your ad-blocker when testing.

We collect [Google Analytics](https://analytics.google.com/analytics/web/#/report-home/a105170809w198079555p192782995).

@@ -1,9 +1,10 @@
# Proxy Extensions

!!! warning "Beta Feature (Since 2.7.0)"

    This feature is in the [Beta](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#beta) stage.
    It is generally considered stable, but there may be unhandled edge cases.
> [!WARNING]
> **Beta Feature (Since 2.7.0)**
>
> This feature is in the [Beta](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#beta) stage.
> It is generally considered stable, but there may be unhandled edge cases.

## Overview

@@ -129,33 +129,30 @@ It is also possible to add an optional flyout widget to your extension. It can b

Below is an example of an extension using the flyout widget:

```javascript
((window) => {
const component = (props: {
openFlyout: () => any
}) => {
const component = (props: { openFlyout: () => any }) => {
return React.createElement(
"div",
{
style: { padding: "10px" },
onClick: () => props.openFlyout()
},
"Hello World"
"div",
{
style: { padding: "10px" },
onClick: () => props.openFlyout(),
},
"Hello World"
);
};
const flyout = () => {
return React.createElement(
"div",
{ style: { padding: "10px" } },
"This is a flyout"
"div",
{ style: { padding: "10px" } },
"This is a flyout"
);
};
window.extensionsAPI.registerStatusPanelExtension(
component,
"My Extension",
"my_extension",
flyout
component,
"My Extension",
"my_extension",
flyout
);
})(window);
```
@@ -183,7 +180,9 @@ The callback function `shouldDisplay` should return true if the extension should

```typescript
const shouldDisplay = (app: Application) => {
return application.metadata?.labels?.['application.environmentLabelKey'] === "prd";
return (
application.metadata?.labels?.["application.environmentLabelKey"] === "prd"
);
};
```

@@ -196,28 +195,68 @@ Below is an example of a simple extension with a flyout widget:
};
const flyout = () => {
return React.createElement(
"div",
{ style: { padding: "10px" } },
"This is a flyout"
"div",
{ style: { padding: "10px" } },
"This is a flyout"
);
};
const component = () => {
return React.createElement(
"div",
{
onClick: () => flyout()
},
"Toolbar Extension Test"
"div",
{
onClick: () => flyout(),
},
"Toolbar Extension Test"
);
};
window.extensionsAPI.registerTopBarActionMenuExt(
component,
"Toolbar Extension Test",
"Toolbar_Extension_Test",
flyout,
shouldDisplay,
'',
true
component,
"Toolbar Extension Test",
"Toolbar_Extension_Test",
flyout,
shouldDisplay,
"",
true
);
})(window);
```
```

## App View Extensions

App View extensions allow you to create a new Application Details View for an application. This view would be selectable alongside the other views like the Node Tree, Pod, and Network views. When the extension's icon is clicked, the extension's component is rendered as the main content of the application view.

Register this extension through the `extensionsAPI.registerAppViewExtension` method.

```typescript
registerAppViewExtension(
  component: ExtensionComponent, // the component to be rendered
  title: string, // the title of the page once the component is rendered
  icon: string, // the favicon classname for the icon tab
  shouldDisplay?: (app: Application): boolean // returns true if the view should be available
)
```

Below is an example of a simple extension:

```javascript
((window) => {
  const component = () => {
    return React.createElement(
      "div",
      { style: { padding: "10px" } },
      "Hello World"
    );
  };
  window.extensionsAPI.registerAppViewExtension(
    component,
    "My Extension",
    "fa-question-circle",
    (app) =>
      application.metadata?.labels?.["application.environmentLabelKey"] ===
      "prd"
  );
})(window);
```

Example rendered extension:


@@ -6,8 +6,8 @@

Sure thing! You can either open an Enhancement Proposal in our GitHub issue tracker or you can [join us on Slack](https://argoproj.github.io/community/join-slack) in channel #argo-contributors to discuss your ideas and get guidance for submitting a PR.

!!! note
    Regular [contributor meetings](https://argo-cd.readthedocs.io/en/latest/developer-guide/code-contributions/#regular-contributor-meeting) are held weekly. Please follow the link for more details.
> [!NOTE]
> Regular [contributor meetings](https://argo-cd.readthedocs.io/en/latest/developer-guide/code-contributions/#regular-contributor-meeting) are held weekly. Please follow the link for more details.

### No one has looked at my PR yet. Why?

@@ -1,10 +1,12 @@
# Overview

!!! warning "As an Argo CD user, you probably don't want to be reading this section of the docs."
    This part of the manual is aimed at helping people contribute to Argo CD, documentation, or to develop third-party applications that interact with Argo CD, e.g.

    * A chat bot
    * A Slack integration
> [!WARNING]
> **As an Argo CD user, you probably don't want to be reading this section of the docs.**
>
> This part of the manual is aimed at helping people contribute to Argo CD, documentation, or to develop third-party applications that interact with Argo CD, e.g.
>
> * A chat bot
> * A Slack integration

## Preface
#### Understand the [Code Contribution Guide](code-contributions.md)
@@ -26,7 +28,7 @@ For backend and frontend contributions, that require a full building-testing-run
## Contributing to Argo CD Notifications documentation

This guide will help you get started quickly with contributing documentation changes, performing the minimum setup you'll need.
The notificaions docs are located in [notifications-engine](https://github.com/argoproj/notifications-engine) Git repository and require 2 pull requests: one for the `notifications-engine` repo and one for the `argo-cd` repo.
The notifications docs are located in [notifications-engine](https://github.com/argoproj/notifications-engine) Git repository and require 2 pull requests: one for the `notifications-engine` repo and one for the `argo-cd` repo.
For backend and frontend contributions that require a full building-testing-running-locally cycle, please refer to [Contributing to Argo CD backend and frontend](index.md#contributing-to-argo-cd-backend-and-frontend)

### Fork and clone Argo CD repository
@@ -100,4 +102,4 @@ Need help? Start with the [Contributors FAQ](faq/)
* [Config Management Plugins](../operator-manual/config-management-plugins/)

## Contributing to Argo Website
The Argo website is maintained in the [argo-site](https://github.com/argoproj/argo-site) repository.
The Argo website is maintained in the [argo-site](https://github.com/argoproj/argo-site) repository.

@@ -71,17 +71,17 @@ Example:
./hack/trigger-release.sh v2.7.2 upstream
```

!!! tip
    The tag must be in one of the following formats to trigger the GH workflow:<br>
    * GA: `v<MAJOR>.<MINOR>.<PATCH>`<br>
    * Pre-release: `v<MAJOR>.<MINOR>.<PATCH>-rc<RC#>`
> [!TIP]
> The tag must be in one of the following formats to trigger the GH workflow:<br>
> * GA: `v<MAJOR>.<MINOR>.<PATCH>`<br>
> * Pre-release: `v<MAJOR>.<MINOR>.<PATCH>-rc<RC#>`
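For illustration, both invocations below would match the accepted tag formats (the version numbers are hypothetical):

```shell
./hack/trigger-release.sh v2.7.2 upstream      # GA
./hack/trigger-release.sh v2.8.0-rc1 upstream  # pre-release candidate
```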

Once the script is executed successfully, a GitHub workflow will start
execution. You can follow its progress under the [Actions](https://github.com/argoproj/argo-cd/actions/workflows/release.yaml) tab, the name of the action is `Publish ArgoCD Release`.

!!! warning
    You cannot perform more than one release on the same release branch at the
    same time.
> [!WARNING]
> You cannot perform more than one release on the same release branch at the
> same time.

### Verifying automated release

@@ -230,8 +230,8 @@ make manifests-local

(depending on your toolchain) to build a new set of installation manifests which include your specific image reference.

!!!note
    Do not commit these manifests to your repository. If you want to revert the changes, the easiest way is to unset `IMAGE_NAMESPACE` and `IMAGE_TAG` from your environment and run `make manifests` again. This will re-create the default manifests.
> [!NOTE]
> Do not commit these manifests to your repository. If you want to revert the changes, the easiest way is to unset `IMAGE_NAMESPACE` and `IMAGE_TAG` from your environment and run `make manifests` again. This will re-create the default manifests.
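A hypothetical invocation matching the note above; substitute the registry namespace and tag you actually pushed:

```shell
IMAGE_NAMESPACE=ghcr.io/someuser IMAGE_TAG=test make manifests-local
# to revert, unset both variables and regenerate the default manifests:
unset IMAGE_NAMESPACE IMAGE_TAG && make manifests
```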

#### Configure your cluster with custom manifests

@@ -7,11 +7,13 @@

## Preface

!!!note "Before you start"
    The Argo CD project continuously grows, both in terms of features and community size. It gets adopted by more and more organizations which entrust Argo CD to handle their critical production workloads. Thus, we need to take great care with any changes that affect compatibility, performance, scalability, stability and security of Argo CD. For this reason, every new feature or larger enhancement must be properly designed and discussed before it gets accepted into the code base.

    We do welcome and encourage everyone to participate in the Argo CD project, but please understand that we can't accept each and every contribution from the community, for various reasons. If you want to submit code for a great new feature or enhancement, we kindly ask you to take a look at the
    [code contribution guide](code-contributions.md#) before you start to write code or submit a PR.
> [!NOTE]
> **Before you start**
>
> The Argo CD project continuously grows, both in terms of features and community size. It gets adopted by more and more organizations which entrust Argo CD to handle their critical production workloads. Thus, we need to take great care with any changes that affect compatibility, performance, scalability, stability and security of Argo CD. For this reason, every new feature or larger enhancement must be properly designed and discussed before it gets accepted into the code base.
>
> We do welcome and encourage everyone to participate in the Argo CD project, but please understand that we can't accept each and every contribution from the community, for various reasons. If you want to submit code for a great new feature or enhancement, we kindly ask you to take a look at the
> [code contribution guide](code-contributions.md#) before you start to write code or submit a PR.

If you want to submit a PR, please read this document carefully, as it contains important information guiding you through our PR quality gates.

@@ -34,8 +36,8 @@ make pre-commit-local

When you submit a PR against Argo CD's GitHub repository, a couple of CI checks will be run automatically to ensure your changes will build fine and meet certain quality standards. Your contribution needs to pass those checks in order to be merged into the repository.

!!!note
    Please make sure that you always create PRs from a branch that is up-to-date with the latest changes from Argo CD's master branch. Depending on how long it takes for the maintainers to review and merge your PR, it might be necessary to pull in latest changes into your branch again.
> [!NOTE]
> Please make sure that you always create PRs from a branch that is up-to-date with the latest changes from Argo CD's master branch. Depending on how long it takes for the maintainers to review and merge your PR, it might be necessary to pull in latest changes into your branch again.

Please understand that we, as an Open Source project, have limited capacities for reviewing and merging PRs to Argo CD. We will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.

@@ -6,16 +6,18 @@ namespace `argocd-e2e***` is created prior to the execution of the tests. The th
The [/test/e2e/testdata](https://github.com/argoproj/argo-cd/tree/master/test/e2e/testdata) directory contains various Argo CD applications. Before test execution, the directory is copied into `/tmp/argo-e2e***` temp directory and used in tests as a
Git repository via file url: `file:///tmp/argo-e2e***`.

!!! note "Rancher Desktop Volume Sharing"
    The e2e git server runs in a container. If you are using Rancher Desktop, you will need to enable volume sharing for
    the e2e container to access the testdata directory. To do this, add the following to
    `~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml` and restart Rancher Desktop:

    ```yaml
    mounts:
      - location: /private/tmp
        writable: true
    ```
> [!NOTE]
> **Rancher Desktop Volume Sharing**
>
> The e2e git server runs in a container. If you are using Rancher Desktop, you will need to enable volume sharing for
> the e2e container to access the testdata directory. To do this, add the following to
> `~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml` and restart Rancher Desktop:
>
> ```yaml
> mounts:
>   - location: /private/tmp
>     writable: true
> ```

## Running Tests Locally

Some files were not shown because too many files have changed in this diff.