mirror of
https://github.com/argoproj/argo-cd.git
synced 2026-02-20 09:38:49 +01:00
Compare commits
279 Commits
source-hyd
...
v2.6.15
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
2f7922be9c | ||
|
|
f30a20a116 | ||
|
|
3b50226e76 | ||
|
|
0c6cfa306d | ||
|
|
1fb3d3fb11 | ||
|
|
d9ce38f357 | ||
|
|
d16bc2610b | ||
|
|
a443ab06e0 | ||
|
|
d5ba03b5f2 | ||
|
|
71d7bb67a4 | ||
|
|
6f0b7cb1a4 | ||
|
|
2b15397479 | ||
|
|
999132d9e8 | ||
|
|
eb34149e7a | ||
|
|
970329f99b | ||
|
|
fbf49f72e7 | ||
|
|
cc56e70718 | ||
|
|
8d758733ad | ||
|
|
0432c40a85 | ||
|
|
09d1a1356d | ||
|
|
343cc02339 | ||
|
|
84f935a615 | ||
|
|
3aa3b5c778 | ||
|
|
201513d1c9 | ||
|
|
cea6f80266 | ||
|
|
deddacf6cc | ||
|
|
bd7bd4b18c | ||
|
|
95587118a1 | ||
|
|
0d93b20796 | ||
|
|
1f6174eebb | ||
|
|
4261d08b29 | ||
|
|
966919792f | ||
|
|
2bb1587333 | ||
|
|
f685ee61e1 | ||
|
|
2883620bb0 | ||
|
|
261e665bd9 | ||
|
|
b05386d8b7 | ||
|
|
ce87bf951e | ||
|
|
6ecb081ac8 | ||
|
|
4c4fa201a3 | ||
|
|
3c94817fb2 | ||
|
|
a05a3eef80 | ||
|
|
1974b8a2bf | ||
|
|
a117187138 | ||
|
|
5456570f8d | ||
|
|
aa7a849859 | ||
|
|
969c68c2b3 | ||
|
|
c508244cf8 | ||
|
|
f3d159f2a8 | ||
|
|
690524e93c | ||
|
|
52497ce0fb | ||
|
|
ce5de62fc5 | ||
|
|
77be41caba | ||
|
|
bba517dda0 | ||
|
|
c53bfedacd | ||
|
|
500504849c | ||
|
|
69c22df5e2 | ||
|
|
4761611390 | ||
|
|
86c831a042 | ||
|
|
697fd7c9a3 | ||
|
|
d194eaa6a2 | ||
|
|
b7fedfa497 | ||
|
|
34094a2d04 | ||
|
|
df200b044b | ||
|
|
7449f26e8b | ||
|
|
9ce19c6c80 | ||
|
|
9aaade6981 | ||
|
|
9adc9159e6 | ||
|
|
75eeb6dbb6 | ||
|
|
89051845c1 | ||
|
|
c145657347 | ||
|
|
0ee904746a | ||
|
|
3f1e7d401e | ||
|
|
85838126fc | ||
|
|
822788f0f9 | ||
|
|
8bc460fe28 | ||
|
|
814b2367c8 | ||
|
|
92ced542b8 | ||
|
|
0d579a0ec1 | ||
|
|
9c685cf021 | ||
|
|
37e24408a8 | ||
|
|
fdcbfade20 | ||
|
|
cf02b10d01 | ||
|
|
46dc70b6c6 | ||
|
|
d7e4ada9af | ||
|
|
292e69f97f | ||
|
|
618bd14c09 | ||
|
|
a047d72688 | ||
|
|
472d4eb16d | ||
|
|
674cf8d6e2 | ||
|
|
c1dba1c764 | ||
|
|
d383379b0c | ||
|
|
adbb1f50c8 | ||
|
|
419165a296 | ||
|
|
19f5d43235 | ||
|
|
2a433f168a | ||
|
|
494e9eeb51 | ||
|
|
cc75f42a10 | ||
|
|
876ff3035e | ||
|
|
4d63d279d6 | ||
|
|
62ae79ab5e | ||
|
|
6f5eaff91f | ||
|
|
c8b4707c55 | ||
|
|
09143f26a4 | ||
|
|
f08375665b | ||
|
|
0901195cd9 | ||
|
|
463e62ff80 | ||
|
|
71e523e776 | ||
|
|
61d0ef7614 | ||
|
|
d54d931f3e | ||
|
|
0c31efd158 | ||
|
|
a325c35320 | ||
|
|
9d56d4fa26 | ||
|
|
707342d707 | ||
|
|
de600c0222 | ||
|
|
937e88a164 | ||
|
|
4f3e5080a5 | ||
|
|
cf74364052 | ||
|
|
5bcd846fa1 | ||
|
|
7af7aaa08f | ||
|
|
ccb64f1c7e | ||
|
|
6d4de2ec5d | ||
|
|
bc13533afa | ||
|
|
3260ecc729 | ||
|
|
d3f81decf3 | ||
|
|
25823b88d9 | ||
|
|
56a8ce5ff2 | ||
|
|
153bf967e9 | ||
|
|
2c10c033db | ||
|
|
e883e7498f | ||
|
|
0c3c7e2fa9 | ||
|
|
de63eb4e52 | ||
|
|
a119a5cc93 | ||
|
|
dc744bb21d | ||
|
|
4d5f9bdb5d | ||
|
|
60104aca6f | ||
|
|
3ea15f05f2 | ||
|
|
d557447214 | ||
|
|
67379d881b | ||
|
|
7d335432cd | ||
|
|
f7b6b82a04 | ||
|
|
e81b22bc61 | ||
|
|
65d43364ec | ||
|
|
7be094f38d | ||
|
|
db2869c866 | ||
|
|
2b6d55bfe5 | ||
|
|
e81ddb0855 | ||
|
|
8dcdbb588d | ||
|
|
1e7aab19aa | ||
|
|
ec6e05afca | ||
|
|
09ea76364c | ||
|
|
b795fcad3d | ||
|
|
896b143866 | ||
|
|
10051833a5 | ||
|
|
705ca3c95a | ||
|
|
fca7f58a93 | ||
|
|
85d1c0fac7 | ||
|
|
bb7ec0ff32 | ||
|
|
57f6703d08 | ||
|
|
f016977b5d | ||
|
|
e05298b9c6 | ||
|
|
57c30b5e04 | ||
|
|
8e544cbb97 | ||
|
|
a5fb6ffe6b | ||
|
|
691f20461e | ||
|
|
7756a816ab | ||
|
|
fd4c5f694a | ||
|
|
f148fa42a1 | ||
|
|
c3e96be67f | ||
|
|
d17aaf23c9 | ||
|
|
21cdcc124d | ||
|
|
d195568c90 | ||
|
|
860503fc07 | ||
|
|
27c6913bfe | ||
|
|
9c0ca76d2e | ||
|
|
0d0570de53 | ||
|
|
a731a38026 | ||
|
|
03d1c052f6 | ||
|
|
312683216e | ||
|
|
4412a770d5 | ||
|
|
7bbedff9da | ||
|
|
6e02f8b232 | ||
|
|
e2b280c3dd | ||
|
|
74726cf11e | ||
|
|
d4a0c252b8 | ||
|
|
3f143c9307 | ||
|
|
7fadddcba9 | ||
|
|
cac327cf64 | ||
|
|
acc554f3d9 | ||
|
|
f91b9f45ab | ||
|
|
893569aab4 | ||
|
|
17e0875ed7 | ||
|
|
1c343837e9 | ||
|
|
0ce5183857 | ||
|
|
70669be1cc | ||
|
|
212b3948dd | ||
|
|
8074e6f9cc | ||
|
|
a951ccacf0 | ||
|
|
119116258b | ||
|
|
278fd1e175 | ||
|
|
9a35eb63a5 | ||
|
|
95b52df1ff | ||
|
|
97761cc557 | ||
|
|
edb95e891c | ||
|
|
1569dba5ce | ||
|
|
278a29c875 | ||
|
|
d0862cd275 | ||
|
|
f31e1aaf4c | ||
|
|
4175d894da | ||
|
|
4c7a024875 | ||
|
|
e6ffce7e34 | ||
|
|
d44b353a4e | ||
|
|
0ac6d0ca5a | ||
|
|
70a7b9bd00 | ||
|
|
52365f88a0 | ||
|
|
a0b7fc5cab | ||
|
|
e6df2fcfca | ||
|
|
719b0aae82 | ||
|
|
9a27a20c5d | ||
|
|
01ef2e0c32 | ||
|
|
063dce443f | ||
|
|
5094ee988f | ||
|
|
671f3f493d | ||
|
|
10caed890c | ||
|
|
bc88f98f6d | ||
|
|
e790028e5c | ||
|
|
6ba9975245 | ||
|
|
79baabc837 | ||
|
|
9b6815f06f | ||
|
|
16f2de0b29 | ||
|
|
d744465cdf | ||
|
|
240ffc30ef | ||
|
|
22662559fe | ||
|
|
590ea32083 | ||
|
|
f8483d2be4 | ||
|
|
25d1d7aca2 | ||
|
|
678d773a6a | ||
|
|
e51d0b3224 | ||
|
|
0f5f41ebb0 | ||
|
|
ee4b3cacc9 | ||
|
|
62a521ccf6 | ||
|
|
eb6474c524 | ||
|
|
826507897d | ||
|
|
5fcebcc799 | ||
|
|
7f00420b3d | ||
|
|
6ebca0bd01 | ||
|
|
93fa2a46a5 | ||
|
|
72f92b6f2a | ||
|
|
7d54482d42 | ||
|
|
fe8049fc50 | ||
|
|
8065748cca | ||
|
|
57560b32f6 | ||
|
|
d5eb10c24d | ||
|
|
cab9b5769f | ||
|
|
bb8ef6dfa3 | ||
|
|
6a9f37ca7d | ||
|
|
b357fd61c0 | ||
|
|
f8d275c50d | ||
|
|
053cfaf378 | ||
|
|
f869cc4feb | ||
|
|
fab4a3cb92 | ||
|
|
7dab9b23bf | ||
|
|
c8d010ceb0 | ||
|
|
3fa9a9197b | ||
|
|
af00900049 | ||
|
|
e67f4b151e | ||
|
|
3a8802f083 | ||
|
|
cdaf2b2c73 | ||
|
|
222cdf4711 | ||
|
|
c58d3843d5 | ||
|
|
383a65fe71 | ||
|
|
acfdc3d3be | ||
|
|
80f4ab9d7b | ||
|
|
44d13a73c9 | ||
|
|
a6469140b9 | ||
|
|
9a4179b1b6 | ||
|
|
0cd4854ffa | ||
|
|
81e40d53fe | ||
|
|
8532cfec4a |
32
.github/ISSUE_TEMPLATE/release.md
vendored
Normal file
32
.github/ISSUE_TEMPLATE/release.md
vendored
Normal file
@@ -0,0 +1,32 @@
|
||||
---
|
||||
name: Argo CD Release
|
||||
about: Used by our Release Champion to track progress of a minor release
|
||||
title: 'Argo CD Release vX.X'
|
||||
labels: 'release'
|
||||
assignees: ''
|
||||
---
|
||||
|
||||
Target RC1 date: ___. __, ____
|
||||
Target GA date: ___. __, ____
|
||||
|
||||
- [ ] Create new section in the [Release Planning doc](https://docs.google.com/document/d/1trJIomcgXcfvLw0aYnERrFWfPjQOfYMDJOCh1S8nMBc/edit?usp=sharing)
|
||||
- [ ] Schedule a Release Planning meeting roughly two weeks before the scheduled Release freeze date by adding it to the community calendar (or delegate this task to someone with write access to the community calendar)
|
||||
- [ ] Include Zoom link in the invite
|
||||
- [ ] Post in #argo-cd and #argo-contributors one week before the meeting
|
||||
- [ ] Post again one hour before the meeting
|
||||
- [ ] At the meeting, remove issues/PRs from the project's column for that release which have not been “claimed” by at least one Approver (add it to the next column if Approver requests that)
|
||||
- [ ] 1wk before feature freeze post in #argo-contributors that PRs must be merged by DD-MM-YYYY to be included in the release - ask approvers to drop items from milestone they can’t merge
|
||||
- [ ] At least two days before RC1 date, draft RC blog post and submit it for review (or delegate this task)
|
||||
- [ ] Cut RC1 (or delegate this task to an Approver and coordinate timing)
|
||||
- [ ] Create new release branch
|
||||
- [ ] Add the release branch to ReadTheDocs
|
||||
- [ ] Confirm that tweet and blog post are ready
|
||||
- [ ] Trigger the release
|
||||
- [ ] After the release is finished, publish tweet and blog post
|
||||
- [ ] Post in #argo-cd and #argo-announcements with lots of emojis announcing the release and requesting help testing
|
||||
- [ ] Monitor support channels for issues, cherry-picking bugfixes and docs fixes as appropriate (or delegate this task to an Approver and coordinate timing)
|
||||
- [ ] At release date, evaluate if any bugs justify delaying the release. If not, cut the release (or delegate this task to an Approver and coordinate timing)
|
||||
- [ ] If unreleased changes are on the release branch for {current minor version minus 3}, cut a final patch release for that series (or delegate this task to an Approver and coordinate timing)
|
||||
- [ ] After the release, post in #argo-cd that the {current minor version minus 3} has reached EOL (example: https://cloud-native.slack.com/archives/C01TSERG0KZ/p1667336234059729)
|
||||
- [ ] (For the next release champion) Review the [items scheduled for the next release](https://github.com/orgs/argoproj/projects/25). If any item does not have an assignee who can commit to finish the feature, move it to the next release.
|
||||
- [ ] (For the next release champion) Schedule a time mid-way through the release cycle to review items again.
|
||||
85
.github/workflows/ci-build.yaml
vendored
85
.github/workflows/ci-build.yaml
vendored
@@ -9,10 +9,11 @@ on:
|
||||
pull_request:
|
||||
branches:
|
||||
- 'master'
|
||||
- 'release-*'
|
||||
|
||||
env:
|
||||
# Golang version to use across CI steps
|
||||
GOLANG_VERSION: '1.18'
|
||||
GOLANG_VERSION: '1.19.10'
|
||||
|
||||
concurrency:
|
||||
group: ${{ github.workflow }}-${{ github.ref }}
|
||||
@@ -27,9 +28,9 @@ jobs:
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # v3.4.0
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: Download all Go modules
|
||||
@@ -45,13 +46,13 @@ jobs:
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # v3.4.0
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: Restore go build cache
|
||||
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # v3.0.11
|
||||
uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1
|
||||
with:
|
||||
path: ~/.cache/go-build
|
||||
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
|
||||
@@ -69,9 +70,9 @@ jobs:
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # v3.4.0
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: Run golangci-lint
|
||||
@@ -92,11 +93,11 @@ jobs:
|
||||
- name: Create checkout directory
|
||||
run: mkdir -p ~/go/src/github.com/argoproj
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
- name: Create symlink in GOPATH
|
||||
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # v3.4.0
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: Install required packages
|
||||
@@ -116,13 +117,17 @@ jobs:
|
||||
run: |
|
||||
echo "/usr/local/bin" >> $GITHUB_PATH
|
||||
- name: Restore go build cache
|
||||
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # v3.0.11
|
||||
uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1
|
||||
with:
|
||||
path: ~/.cache/go-build
|
||||
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
|
||||
- name: Install all tools required for building & testing
|
||||
run: |
|
||||
make install-test-tools-local
|
||||
# We install kustomize in the dist directory
|
||||
- name: Add dist to PATH
|
||||
run: |
|
||||
echo "/home/runner/work/argo-cd/argo-cd/dist" >> $GITHUB_PATH
|
||||
- name: Setup git username and email
|
||||
run: |
|
||||
git config --global user.name "John Doe"
|
||||
@@ -133,12 +138,12 @@ jobs:
|
||||
- name: Run all unit tests
|
||||
run: make test-local
|
||||
- name: Generate code coverage artifacts
|
||||
uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb # v3.1.1
|
||||
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2
|
||||
with:
|
||||
name: code-coverage
|
||||
path: coverage.out
|
||||
- name: Generate test results artifacts
|
||||
uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb # v3.1.1
|
||||
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2
|
||||
with:
|
||||
name: test-results
|
||||
path: test-results/
|
||||
@@ -155,11 +160,11 @@ jobs:
|
||||
- name: Create checkout directory
|
||||
run: mkdir -p ~/go/src/github.com/argoproj
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
- name: Create symlink in GOPATH
|
||||
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # v3.4.0
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: Install required packages
|
||||
@@ -179,13 +184,17 @@ jobs:
|
||||
run: |
|
||||
echo "/usr/local/bin" >> $GITHUB_PATH
|
||||
- name: Restore go build cache
|
||||
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # v3.0.11
|
||||
uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1
|
||||
with:
|
||||
path: ~/.cache/go-build
|
||||
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
|
||||
- name: Install all tools required for building & testing
|
||||
run: |
|
||||
make install-test-tools-local
|
||||
# We install kustomize in the dist directory
|
||||
- name: Add dist to PATH
|
||||
run: |
|
||||
echo "/home/runner/work/argo-cd/argo-cd/dist" >> $GITHUB_PATH
|
||||
- name: Setup git username and email
|
||||
run: |
|
||||
git config --global user.name "John Doe"
|
||||
@@ -196,7 +205,7 @@ jobs:
|
||||
- name: Run all unit tests
|
||||
run: make test-race-local
|
||||
- name: Generate test results artifacts
|
||||
uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb # v3.1.1
|
||||
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2
|
||||
with:
|
||||
name: race-results
|
||||
path: test-results/
|
||||
@@ -206,9 +215,9 @@ jobs:
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # v3.4.0
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: Create symlink in GOPATH
|
||||
@@ -232,6 +241,10 @@ jobs:
|
||||
make install-codegen-tools-local
|
||||
make install-go-tools-local
|
||||
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
|
||||
# We install kustomize in the dist directory
|
||||
- name: Add dist to PATH
|
||||
run: |
|
||||
echo "/home/runner/work/argo-cd/argo-cd/dist" >> $GITHUB_PATH
|
||||
- name: Run codegen
|
||||
run: |
|
||||
set -x
|
||||
@@ -250,14 +263,14 @@ jobs:
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
- name: Setup NodeJS
|
||||
uses: actions/setup-node@8c91899e586c5b171469028077307d293428b516 # v3.5.1
|
||||
uses: actions/setup-node@64ed1c7eab4cce3362f8c340dee64e5eaeef8f7c # v3.6.0
|
||||
with:
|
||||
node-version: '12.18.4'
|
||||
- name: Restore node dependency cache
|
||||
id: cache-dependencies
|
||||
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # v3.0.11
|
||||
uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1
|
||||
with:
|
||||
path: ui/node_modules
|
||||
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
|
||||
@@ -287,12 +300,12 @@ jobs:
|
||||
sonar_secret: ${{ secrets.SONAR_TOKEN }}
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- name: Restore node dependency cache
|
||||
id: cache-dependencies
|
||||
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # v3.0.11
|
||||
uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1
|
||||
with:
|
||||
path: ui/node_modules
|
||||
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
|
||||
@@ -303,11 +316,11 @@ jobs:
|
||||
run: |
|
||||
mkdir -p test-results
|
||||
- name: Get code coverage artifiact
|
||||
uses: actions/download-artifact@9782bd6a9848b53b110e712e20e42d89988822b7 # v3.0.1
|
||||
uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # v3.0.2
|
||||
with:
|
||||
name: code-coverage
|
||||
- name: Get test result artifact
|
||||
uses: actions/download-artifact@9782bd6a9848b53b110e712e20e42d89988822b7 # v3.0.1
|
||||
uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # v3.0.2
|
||||
with:
|
||||
name: test-results
|
||||
path: test-results
|
||||
@@ -366,22 +379,14 @@ jobs:
|
||||
GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # v3.4.0
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: GH actions workaround - Kill XSP4 process
|
||||
run: |
|
||||
sudo pkill mono || true
|
||||
# ubuntu-22.04 comes with kubectl, but the version is not pinned. The version as of 2022-12-05 is 1.26.0 which
|
||||
# breaks the `TestNamespacedResourceDiffing` e2e test. So we'll pin to 1.25 and then fix the underlying issue.
|
||||
- name: Install kubectl
|
||||
run: |
|
||||
rm /usr/local/bin/kubectl
|
||||
curl -LO https://dl.k8s.io/release/v1.25.4/bin/linux/amd64/kubectl
|
||||
mv kubectl /usr/local/bin/kubectl
|
||||
chmod +x /usr/local/bin/kubectl
|
||||
- name: Install K3S
|
||||
env:
|
||||
INSTALL_K3S_VERSION: ${{ matrix.k3s-version }}+k3s1
|
||||
@@ -394,7 +399,7 @@ jobs:
|
||||
sudo chown runner $HOME/.kube/config
|
||||
kubectl version
|
||||
- name: Restore go build cache
|
||||
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # v3.0.11
|
||||
uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1
|
||||
with:
|
||||
path: ~/.cache/go-build
|
||||
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
|
||||
@@ -420,9 +425,9 @@ jobs:
|
||||
git config --global user.email "john.doe@example.com"
|
||||
- name: Pull Docker image required for tests
|
||||
run: |
|
||||
docker pull ghcr.io/dexidp/dex:v2.35.3
|
||||
docker pull ghcr.io/dexidp/dex:v2.37.0
|
||||
docker pull argoproj/argo-cd-ci-builder:v1.0.0
|
||||
docker pull redis:7.0.5-alpine
|
||||
docker pull redis:7.0.11-alpine
|
||||
- name: Create target directory for binaries in the build-process
|
||||
run: |
|
||||
mkdir -p dist
|
||||
@@ -450,7 +455,7 @@ jobs:
|
||||
set -x
|
||||
make test-e2e-local
|
||||
- name: Upload e2e-server logs
|
||||
uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb # v3.1.1
|
||||
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2
|
||||
with:
|
||||
name: e2e-server-k8s${{ matrix.k3s-version }}.log
|
||||
path: /tmp/e2e-server.log
|
||||
|
||||
3
.github/workflows/codeql.yml
vendored
3
.github/workflows/codeql.yml
vendored
@@ -5,6 +5,7 @@ on:
|
||||
# Secrets aren't available for dependabot on push. https://docs.github.com/en/enterprise-cloud@latest/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/troubleshooting-the-codeql-workflow#error-403-resource-not-accessible-by-integration-when-using-dependabot
|
||||
branches-ignore:
|
||||
- 'dependabot/**'
|
||||
- 'cherry-pick-*'
|
||||
pull_request:
|
||||
schedule:
|
||||
- cron: '0 19 * * 0'
|
||||
@@ -29,7 +30,7 @@ jobs:
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
|
||||
# Initializes the CodeQL tools for scanning.
|
||||
- name: Initialize CodeQL
|
||||
|
||||
23
.github/workflows/image.yaml
vendored
23
.github/workflows/image.yaml
vendored
@@ -10,7 +10,7 @@ on:
|
||||
types: [ labeled, unlabeled, opened, synchronize, reopened ]
|
||||
|
||||
env:
|
||||
GOLANG_VERSION: '1.18'
|
||||
GOLANG_VERSION: '1.19.10'
|
||||
|
||||
concurrency:
|
||||
group: ${{ github.workflow }}-${{ github.ref }}
|
||||
@@ -29,10 +29,10 @@ jobs:
|
||||
env:
|
||||
GOPATH: /home/runner/work/argo-cd/argo-cd
|
||||
steps:
|
||||
- uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # v3.4.0
|
||||
- uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
with:
|
||||
path: src/github.com/argoproj/argo-cd
|
||||
|
||||
@@ -54,7 +54,7 @@ jobs:
|
||||
|
||||
# build
|
||||
- uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # v2.1.0
|
||||
- uses: docker/setup-buildx-action@8c0edbc76e98fa90f69d9a2c020dcb50019dc325 # v2.2.1
|
||||
- uses: docker/setup-buildx-action@f03ac48505955848960e80bbb68046aa35c7b9e7 # v2.4.1
|
||||
- run: |
|
||||
IMAGE_PLATFORMS=linux/amd64
|
||||
if [[ "${{ github.event_name }}" == "push" || "${{ contains(github.event.pull_request.labels.*.name, 'test-arm-image') }}" == "true" ]]
|
||||
@@ -62,20 +62,27 @@ jobs:
|
||||
IMAGE_PLATFORMS=linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
|
||||
fi
|
||||
echo "Building image for platforms: $IMAGE_PLATFORMS"
|
||||
docker buildx build --platform $IMAGE_PLATFORMS --push="${{ github.event_name == 'push' }}" \
|
||||
docker buildx build --platform $IMAGE_PLATFORMS --sbom=false --provenance=false --push="${{ github.event_name == 'push' }}" \
|
||||
-t ghcr.io/argoproj/argo-cd/argocd:${{ steps.image.outputs.tag }} \
|
||||
-t quay.io/argoproj/argocd:latest .
|
||||
working-directory: ./src/github.com/argoproj/argo-cd
|
||||
|
||||
# sign container images
|
||||
- name: Install cosign
|
||||
uses: sigstore/cosign-installer@9becc617647dfa20ae7b1151972e9b3a2c338a2b # v2.8.1
|
||||
uses: sigstore/cosign-installer@c3667d99424e7e6047999fb6246c0da843953c65 # v3.0.1
|
||||
with:
|
||||
cosign-release: 'v1.13.0'
|
||||
cosign-release: 'v1.13.1'
|
||||
|
||||
- name: Install crane to get digest of image
|
||||
uses: imjasonh/setup-crane@00c9e93efa4e1138c9a7a5c594acd6c75a2fbf0c
|
||||
|
||||
- name: Get digest of image
|
||||
run: |
|
||||
echo "IMAGE_DIGEST=$(crane digest quay.io/argoproj/argocd:latest)" >> $GITHUB_ENV
|
||||
|
||||
- name: Sign Argo CD latest image
|
||||
run: |
|
||||
cosign sign --key env://COSIGN_PRIVATE_KEY quay.io/argoproj/argocd:latest
|
||||
cosign sign --key env://COSIGN_PRIVATE_KEY quay.io/argoproj/argocd@${{ env.IMAGE_DIGEST }}
|
||||
# Displays the public key to share.
|
||||
cosign public-key --key env://COSIGN_PRIVATE_KEY
|
||||
env:
|
||||
|
||||
37
.github/workflows/release.yaml
vendored
37
.github/workflows/release.yaml
vendored
@@ -12,7 +12,7 @@ on:
|
||||
- "!release-v0*"
|
||||
|
||||
env:
|
||||
GOLANG_VERSION: '1.18'
|
||||
GOLANG_VERSION: '1.19.10'
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
@@ -43,7 +43,7 @@ jobs:
|
||||
GIT_EMAIL: argoproj@gmail.com
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
with:
|
||||
fetch-depth: 0
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
@@ -147,7 +147,7 @@ jobs:
|
||||
echo "RELEASE_NOTES=${RELEASE_NOTES}" >> $GITHUB_ENV
|
||||
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # v3.4.0
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
|
||||
@@ -177,6 +177,10 @@ jobs:
|
||||
run: |
|
||||
set -ue
|
||||
make install-codegen-tools-local
|
||||
|
||||
# We install kustomize in the dist directory
|
||||
echo "/home/runner/work/argo-cd/argo-cd/dist" >> $GITHUB_PATH
|
||||
|
||||
make manifests-local VERSION=${TARGET_VERSION}
|
||||
git diff
|
||||
git commit manifests/ -m "Bump version to ${TARGET_VERSION}"
|
||||
@@ -201,13 +205,13 @@ jobs:
|
||||
if: ${{ env.DRY_RUN != 'true' }}
|
||||
|
||||
- uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # v2.1.0
|
||||
- uses: docker/setup-buildx-action@8c0edbc76e98fa90f69d9a2c020dcb50019dc325 # v2.2.1
|
||||
- uses: docker/setup-buildx-action@f03ac48505955848960e80bbb68046aa35c7b9e7 # v2.4.1
|
||||
- name: Build and push Docker image for release
|
||||
run: |
|
||||
set -ue
|
||||
git clean -fd
|
||||
mkdir -p dist/
|
||||
docker buildx build --platform linux/amd64,linux/arm64,linux/s390x,linux/ppc64le --push -t ${IMAGE_NAMESPACE}/argocd:v${TARGET_VERSION} -t argoproj/argocd:v${TARGET_VERSION} .
|
||||
docker buildx build --platform linux/amd64,linux/arm64,linux/s390x,linux/ppc64le --sbom=false --provenance=false --push -t ${IMAGE_NAMESPACE}/argocd:v${TARGET_VERSION} -t argoproj/argocd:v${TARGET_VERSION} .
|
||||
make release-cli
|
||||
make checksums
|
||||
chmod +x ./dist/argocd-linux-amd64
|
||||
@@ -215,13 +219,20 @@ jobs:
|
||||
if: ${{ env.DRY_RUN != 'true' }}
|
||||
|
||||
- name: Install cosign
|
||||
uses: sigstore/cosign-installer@9becc617647dfa20ae7b1151972e9b3a2c338a2b # v2.8.1
|
||||
uses: sigstore/cosign-installer@c3667d99424e7e6047999fb6246c0da843953c65 # v3.0.1
|
||||
with:
|
||||
cosign-release: 'v1.13.0'
|
||||
cosign-release: 'v1.13.1'
|
||||
|
||||
- name: Install crane to get digest of image
|
||||
uses: imjasonh/setup-crane@00c9e93efa4e1138c9a7a5c594acd6c75a2fbf0c
|
||||
|
||||
- name: Get digest of image
|
||||
run: |
|
||||
echo "IMAGE_DIGEST=$(crane digest quay.io/argoproj/argocd:v${TARGET_VERSION})" >> $GITHUB_ENV
|
||||
|
||||
- name: Sign Argo CD container images and assets
|
||||
run: |
|
||||
cosign sign --key env://COSIGN_PRIVATE_KEY ${IMAGE_NAMESPACE}/argocd:v${TARGET_VERSION}
|
||||
cosign sign --key env://COSIGN_PRIVATE_KEY ${IMAGE_NAMESPACE}/argocd@${{ env.IMAGE_DIGEST }}
|
||||
cosign sign-blob --key env://COSIGN_PRIVATE_KEY ./dist/argocd-${TARGET_VERSION}-checksums.txt > ./dist/argocd-${TARGET_VERSION}-checksums.sig
|
||||
# Retrieves the public key to release as an asset
|
||||
cosign public-key --key env://COSIGN_PRIVATE_KEY > ./dist/argocd-cosign.pub
|
||||
@@ -255,6 +266,14 @@ jobs:
|
||||
body: ${{ steps.release-notes.outputs.content }}
|
||||
if: ${{ env.DRY_RUN == 'true' }}
|
||||
|
||||
# Based on this suggestion: https://github.com/actions/runner-images/issues/2840#issuecomment-1284059930
|
||||
- name: Free Up Disk Space
|
||||
id: free-up-disk-space
|
||||
run: |
|
||||
df -h
|
||||
sudo rm -rf /usr/share/dotnet
|
||||
df -h
|
||||
|
||||
- name: Generate SBOM (spdx)
|
||||
id: spdx-builder
|
||||
env:
|
||||
@@ -264,7 +283,7 @@ jobs:
|
||||
SIGS_BOM_VERSION: v0.2.1
|
||||
# comma delimited list of project relative folders to inspect for package
|
||||
# managers (gomod, yarn, npm).
|
||||
PROJECT_FOLDERS: ".,./ui"
|
||||
PROJECT_FOLDERS: ".,./ui"
|
||||
# full qualified name of the docker image to be inspected
|
||||
DOCKER_IMAGE: ${{env.IMAGE_NAMESPACE}}/argocd:v${{env.TARGET_VERSION}}
|
||||
run: |
|
||||
|
||||
2
.github/workflows/update-snyk.yaml
vendored
2
.github/workflows/update-snyk.yaml
vendored
@@ -17,7 +17,7 @@ jobs:
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
with:
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Build reports
|
||||
|
||||
@@ -4,7 +4,7 @@ ARG BASE_IMAGE=docker.io/library/ubuntu:22.04
|
||||
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
|
||||
# Also used as the image in CI jobs so needs all dependencies
|
||||
####################################################################################################
|
||||
FROM docker.io/library/golang:1.18 AS builder
|
||||
FROM docker.io/library/golang:1.19.10@sha256:83f9f840072d05ad4d90ce4ac7cb2427632d6b89d5ffc558f18f9577ec8188c0 AS builder
|
||||
|
||||
RUN echo 'deb http://deb.debian.org/debian buster-backports main' >> /etc/apt/sources.list
|
||||
|
||||
@@ -99,7 +99,7 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
|
||||
####################################################################################################
|
||||
# Argo CD Build stage which performs the actual build of Argo CD binaries
|
||||
####################################################################################################
|
||||
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.18 AS argocd-build
|
||||
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.19.10@sha256:83f9f840072d05ad4d90ce4ac7cb2427632d6b89d5ffc558f18f9577ec8188c0 AS argocd-build
|
||||
|
||||
WORKDIR /go/src/github.com/argoproj/argo-cd
|
||||
|
||||
|
||||
@@ -35,9 +35,7 @@ impact on Argo CD before opening an issue at least roughly.
|
||||
|
||||
## Supported Versions
|
||||
|
||||
We currently support the most recent release (`N`, e.g. `1.8`) and the release
|
||||
previous to the most recent one (`N-1`, e.g. `1.7`). With the release of
|
||||
`N+1`, `N-1` drops out of support and `N` becomes `N-1`.
|
||||
We currently support the last 3 minor versions of Argo CD with security and bug fixes.
|
||||
|
||||
We regularly perform patch releases (e.g. `1.8.5` and `1.7.12`) for the
|
||||
supported versions, which will contain fixes for security vulnerabilities and
|
||||
|
||||
7
USERS.md
7
USERS.md
@@ -23,6 +23,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Arctiq Inc.](https://www.arctiq.ca)
|
||||
1. [ARZ Allgemeines Rechenzentrum GmbH](https://www.arz.at/)
|
||||
1. [Axual B.V.](https://axual.com)
|
||||
1. [Back Market](https://www.backmarket.com)
|
||||
1. [Baloise](https://www.baloise.com)
|
||||
1. [BCDevExchange DevOps Platform](https://bcdevexchange.org/DevOpsPlatform)
|
||||
1. [Beat](https://thebeat.co/en/)
|
||||
@@ -89,6 +90,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [gloat](https://gloat.com/)
|
||||
1. [GLOBIS](https://globis.com)
|
||||
1. [Glovo](https://www.glovoapp.com)
|
||||
1. [GlueOps](https://glueops.dev)
|
||||
1. [GMETRI](https://gmetri.com/)
|
||||
1. [Gojek](https://www.gojek.io/)
|
||||
1. [Greenpass](https://www.greenpass.com.br/)
|
||||
@@ -107,6 +109,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [imaware](https://imaware.health)
|
||||
1. [Indeed](https://indeed.com)
|
||||
1. [Index Exchange](https://www.indexexchange.com/)
|
||||
1. [Info Support](https://www.infosupport.com/)
|
||||
1. [InsideBoard](https://www.insideboard.com)
|
||||
1. [Intuit](https://www.intuit.com/)
|
||||
1. [Joblift](https://joblift.com/)
|
||||
@@ -169,6 +172,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Packlink](https://www.packlink.com/)
|
||||
1. [Pandosearch](https://www.pandosearch.com/en/home)
|
||||
1. [PagerDuty](https://www.pagerduty.com/)
|
||||
1. [Patreon](https://www.patreon.com/)
|
||||
1. [PayPay](https://paypay.ne.jp/)
|
||||
1. [Peloton Interactive](https://www.onepeloton.com/)
|
||||
1. [Pigment](https://www.gopigment.com/)
|
||||
@@ -186,6 +190,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [RapidAPI](https://www.rapidapi.com/)
|
||||
1. [Recreation.gov](https://www.recreation.gov/)
|
||||
1. [Red Hat](https://www.redhat.com/)
|
||||
1. [Redpill Linpro](https://www.redpill-linpro.com/)
|
||||
1. [reev.com](https://www.reev.com/)
|
||||
1. [RightRev](https://rightrev.com/)
|
||||
1. [Rise](https://www.risecard.eu/)
|
||||
@@ -199,6 +204,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [SI Analytics](https://si-analytics.ai)
|
||||
1. [Skit](https://skit.ai/)
|
||||
1. [Skyscanner](https://www.skyscanner.net/)
|
||||
1. [Smart Pension](https://www.smartpension.co.uk/)
|
||||
1. [Smilee.io](https://smilee.io)
|
||||
1. [Snapp](https://snapp.ir/)
|
||||
1. [Snyk](https://snyk.io/)
|
||||
@@ -236,6 +242,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [ungleich.ch](https://ungleich.ch/)
|
||||
1. [Unifonic Inc](https://www.unifonic.com/)
|
||||
1. [Universidad Mesoamericana](https://www.umes.edu.gt/)
|
||||
1. [Urbantz](https://urbantz.com/)
|
||||
1. [Viaduct](https://www.viaduct.ai/)
|
||||
1. [Vinted](https://vinted.com/)
|
||||
1. [Virtuo](https://www.govirtuo.com/)
|
||||
|
||||
@@ -71,7 +71,7 @@ type ApplicationSetReconciler struct {
|
||||
utils.Policy
|
||||
utils.Renderer
|
||||
|
||||
EnableProgressiveRollouts bool
|
||||
EnableProgressiveSyncs bool
|
||||
}
|
||||
|
||||
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
|
||||
@@ -142,7 +142,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
|
||||
// appSyncMap tracks which apps will be synced during this reconciliation.
|
||||
appSyncMap := map[string]bool{}
|
||||
|
||||
if r.EnableProgressiveRollouts && applicationSetInfo.Spec.Strategy != nil {
|
||||
if r.EnableProgressiveSyncs && applicationSetInfo.Spec.Strategy != nil {
|
||||
applications, err := r.getCurrentApplications(ctx, applicationSetInfo)
|
||||
if err != nil {
|
||||
return ctrl.Result{}, fmt.Errorf("failed to get current applications for application set: %w", err)
|
||||
@@ -152,9 +152,9 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
|
||||
appMap[app.Name] = app
|
||||
}
|
||||
|
||||
appSyncMap, err = r.performProgressiveRollouts(ctx, applicationSetInfo, applications, desiredApplications, appMap)
|
||||
appSyncMap, err = r.performProgressiveSyncs(ctx, applicationSetInfo, applications, desiredApplications, appMap)
|
||||
if err != nil {
|
||||
return ctrl.Result{}, fmt.Errorf("failed to perform progressive rollouts reconciliation for application set: %w", err)
|
||||
return ctrl.Result{}, fmt.Errorf("failed to perform progressive sync reconciliation for application set: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -186,9 +186,9 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
|
||||
)
|
||||
}
|
||||
|
||||
if r.EnableProgressiveRollouts {
|
||||
if r.EnableProgressiveSyncs {
|
||||
// trigger appropriate application syncs if RollingSync strategy is enabled
|
||||
if progressiveRolloutStrategyEnabled(&applicationSetInfo, "RollingSync") {
|
||||
if progressiveSyncsStrategyEnabled(&applicationSetInfo, "RollingSync") {
|
||||
validApps, err = r.syncValidApplications(ctx, &applicationSetInfo, appSyncMap, appMap, validApps)
|
||||
|
||||
if err != nil {
|
||||
@@ -775,24 +775,29 @@ func (r *ApplicationSetReconciler) removeFinalizerOnInvalidDestination(ctx conte
|
||||
return nil
|
||||
}
|
||||
|
||||
func (r *ApplicationSetReconciler) performProgressiveRollouts(ctx context.Context, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application, appMap map[string]argov1alpha1.Application) (map[string]bool, error) {
|
||||
|
||||
_, err := r.updateApplicationSetApplicationStatus(ctx, &appset, applications)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to update applicationset app status: %w", err)
|
||||
}
|
||||
func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application, appMap map[string]argov1alpha1.Application) (map[string]bool, error) {
|
||||
|
||||
appDependencyList, appStepMap, err := r.buildAppDependencyList(ctx, appset, desiredApplications)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to build app dependency list: %w", err)
|
||||
}
|
||||
|
||||
_, err = r.updateApplicationSetApplicationStatus(ctx, &appset, applications, appStepMap)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to update applicationset app status: %w", err)
|
||||
}
|
||||
|
||||
log.Infof("ApplicationSet %v step list:", appset.Name)
|
||||
for i, step := range appDependencyList {
|
||||
log.Infof("step %v: %+v", i+1, step)
|
||||
}
|
||||
|
||||
appSyncMap, err := r.buildAppSyncMap(ctx, appset, appDependencyList, appMap)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to build app sync map: %w", err)
|
||||
}
|
||||
|
||||
log.Infof("appSyncMap: %+v", appSyncMap)
|
||||
log.Infof("Application allowed to sync before maxUpdate?: %+v", appSyncMap)
|
||||
|
||||
_, err = r.updateApplicationSetApplicationStatusProgress(ctx, &appset, appSyncMap, appStepMap, appMap)
|
||||
if err != nil {
|
||||
@@ -815,7 +820,7 @@ func (r *ApplicationSetReconciler) buildAppDependencyList(ctx context.Context, a
|
||||
}
|
||||
|
||||
steps := []argov1alpha1.ApplicationSetRolloutStep{}
|
||||
if progressiveRolloutStrategyEnabled(&applicationSet, "RollingSync") {
|
||||
if progressiveSyncsStrategyEnabled(&applicationSet, "RollingSync") {
|
||||
steps = applicationSet.Spec.Strategy.RollingSync.Steps
|
||||
}
|
||||
|
||||
@@ -941,7 +946,7 @@ func (r *ApplicationSetReconciler) buildAppSyncMap(ctx context.Context, applicat
|
||||
|
||||
func appSyncEnabledForNextStep(appset *argov1alpha1.ApplicationSet, app argov1alpha1.Application, appStatus argov1alpha1.ApplicationSetApplicationStatus) bool {
|
||||
|
||||
if progressiveRolloutStrategyEnabled(appset, "RollingSync") {
|
||||
if progressiveSyncsStrategyEnabled(appset, "RollingSync") {
|
||||
// we still need to complete the current step if the Application is not yet Healthy or there are still pending Application changes
|
||||
return isApplicationHealthy(app) && appStatus.Status == "Healthy"
|
||||
}
|
||||
@@ -949,7 +954,7 @@ func appSyncEnabledForNextStep(appset *argov1alpha1.ApplicationSet, app argov1al
|
||||
return true
|
||||
}
|
||||
|
||||
func progressiveRolloutStrategyEnabled(appset *argov1alpha1.ApplicationSet, strategyType string) bool {
|
||||
func progressiveSyncsStrategyEnabled(appset *argov1alpha1.ApplicationSet, strategyType string) bool {
|
||||
if appset.Spec.Strategy == nil || appset.Spec.Strategy.Type != strategyType {
|
||||
return false
|
||||
}
|
||||
@@ -982,7 +987,7 @@ func statusStrings(app argov1alpha1.Application) (string, string, string) {
|
||||
}
|
||||
|
||||
// check the status of each Application's status and promote Applications to the next status if needed
|
||||
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx context.Context, applicationSet *argov1alpha1.ApplicationSet, applications []argov1alpha1.Application) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
|
||||
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx context.Context, applicationSet *argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
|
||||
|
||||
now := metav1.Now()
|
||||
appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applications))
|
||||
@@ -993,22 +998,24 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
|
||||
|
||||
idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, app.Name)
|
||||
|
||||
currentAppStatus := argov1alpha1.ApplicationSetApplicationStatus{}
|
||||
|
||||
if idx == -1 {
|
||||
// AppStatus not found, set default status of "Waiting"
|
||||
appStatuses = append(appStatuses, argov1alpha1.ApplicationSetApplicationStatus{
|
||||
currentAppStatus = argov1alpha1.ApplicationSetApplicationStatus{
|
||||
Application: app.Name,
|
||||
LastTransitionTime: &now,
|
||||
Message: "No Application status found, defaulting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
})
|
||||
break
|
||||
Step: fmt.Sprint(appStepMap[app.Name] + 1),
|
||||
}
|
||||
} else {
|
||||
// we have an existing AppStatus
|
||||
currentAppStatus = applicationSet.Status.ApplicationStatus[idx]
|
||||
}
|
||||
|
||||
// we have an existing AppStatus
|
||||
currentAppStatus := applicationSet.Status.ApplicationStatus[idx]
|
||||
|
||||
appOutdated := false
|
||||
if progressiveRolloutStrategyEnabled(applicationSet, "RollingSync") {
|
||||
if progressiveSyncsStrategyEnabled(applicationSet, "RollingSync") {
|
||||
appOutdated = syncStatusString == "OutOfSync"
|
||||
}
|
||||
|
||||
@@ -1017,14 +1024,27 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
|
||||
currentAppStatus.LastTransitionTime = &now
|
||||
currentAppStatus.Status = "Waiting"
|
||||
currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
|
||||
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
|
||||
}
|
||||
|
||||
if currentAppStatus.Status == "Pending" {
|
||||
if healthStatusString == "Progressing" || operationPhaseString == "Running" {
|
||||
// check for successful syncs started less than 10s before the Application transitioned to Pending
|
||||
// this covers race conditions where syncs initiated by RollingSync miraculously have a sync time before the transition to Pending state occurred (could be a few seconds)
|
||||
if operationPhaseString == "Succeeded" && app.Status.OperationState.StartedAt.Add(time.Duration(10)*time.Second).After(currentAppStatus.LastTransitionTime.Time) {
|
||||
if !app.Status.OperationState.StartedAt.After(currentAppStatus.LastTransitionTime.Time) {
|
||||
log.Warnf("Application %v was synced less than 10s prior to entering Pending status, we'll assume the AppSet controller triggered this sync and update its status to Progressing", app.Name)
|
||||
}
|
||||
log.Infof("Application %v has completed a sync successfully, updating its ApplicationSet status to Progressing", app.Name)
|
||||
currentAppStatus.LastTransitionTime = &now
|
||||
currentAppStatus.Status = "Progressing"
|
||||
currentAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing."
|
||||
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
|
||||
} else if operationPhaseString == "Running" || healthStatusString == "Progressing" {
|
||||
log.Infof("Application %v has entered Progressing status, updating its ApplicationSet status to Progressing", app.Name)
|
||||
currentAppStatus.LastTransitionTime = &now
|
||||
currentAppStatus.Status = "Progressing"
|
||||
currentAppStatus.Message = "Application resource became Progressing, updating status from Pending to Progressing."
|
||||
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1033,6 +1053,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
|
||||
currentAppStatus.LastTransitionTime = &now
|
||||
currentAppStatus.Status = healthStatusString
|
||||
currentAppStatus.Message = "Application resource is already Healthy, updating status from Waiting to Healthy."
|
||||
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
|
||||
}
|
||||
|
||||
if currentAppStatus.Status == "Progressing" && isApplicationHealthy(app) {
|
||||
@@ -1040,6 +1061,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
|
||||
currentAppStatus.LastTransitionTime = &now
|
||||
currentAppStatus.Status = healthStatusString
|
||||
currentAppStatus.Message = "Application resource became Healthy, updating status from Progressing to Healthy."
|
||||
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
|
||||
}
|
||||
|
||||
appStatuses = append(appStatuses, currentAppStatus)
|
||||
@@ -1065,7 +1087,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
|
||||
totalCountMap := []int{}
|
||||
|
||||
length := 0
|
||||
if progressiveRolloutStrategyEnabled(applicationSet, "RollingSync") {
|
||||
if progressiveSyncsStrategyEnabled(applicationSet, "RollingSync") {
|
||||
length = len(applicationSet.Spec.Strategy.RollingSync.Steps)
|
||||
}
|
||||
for s := 0; s < length; s++ {
|
||||
@@ -1077,7 +1099,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
|
||||
for _, appStatus := range applicationSet.Status.ApplicationStatus {
|
||||
totalCountMap[appStepMap[appStatus.Application]] += 1
|
||||
|
||||
if progressiveRolloutStrategyEnabled(applicationSet, "RollingSync") {
|
||||
if progressiveSyncsStrategyEnabled(applicationSet, "RollingSync") {
|
||||
if appStatus.Status == "Pending" || appStatus.Status == "Progressing" {
|
||||
updateCountMap[appStepMap[appStatus.Application]] += 1
|
||||
}
|
||||
@@ -1088,7 +1110,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
|
||||
|
||||
maxUpdateAllowed := true
|
||||
maxUpdate := &intstr.IntOrString{}
|
||||
if progressiveRolloutStrategyEnabled(applicationSet, "RollingSync") {
|
||||
if progressiveSyncsStrategyEnabled(applicationSet, "RollingSync") {
|
||||
maxUpdate = applicationSet.Spec.Strategy.RollingSync.Steps[appStepMap[appStatus.Application]].MaxUpdate
|
||||
}
|
||||
|
||||
@@ -1116,6 +1138,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
|
||||
appStatus.LastTransitionTime = &now
|
||||
appStatus.Status = "Pending"
|
||||
appStatus.Message = "Application moved to Pending status, watching for the Application resource to start Progressing."
|
||||
appStatus.Step = fmt.Sprint(appStepMap[appStatus.Application] + 1)
|
||||
|
||||
updateCountMap[appStepMap[appStatus.Application]] += 1
|
||||
}
|
||||
@@ -1263,7 +1286,7 @@ func (r *ApplicationSetReconciler) syncValidApplications(ctx context.Context, ap
|
||||
return rolloutApps, nil
|
||||
}
|
||||
|
||||
// used by the RollingSync Progressive Rollout strategy to trigger a sync of a particular Application resource
|
||||
// used by the RollingSync Progressive Sync strategy to trigger a sync of a particular Application resource
|
||||
func syncApplication(application argov1alpha1.Application, prune bool) (argov1alpha1.Application, error) {
|
||||
|
||||
operation := argov1alpha1.Operation{
|
||||
|
||||
@@ -3548,6 +3548,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
name string
|
||||
appSet argov1alpha1.ApplicationSet
|
||||
apps []argov1alpha1.Application
|
||||
appStepMap map[string]int
|
||||
expectedAppStatus []argov1alpha1.ApplicationSetApplicationStatus
|
||||
}{
|
||||
{
|
||||
@@ -3602,8 +3603,9 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
expectedAppStatus: []argov1alpha1.ApplicationSetApplicationStatus{
|
||||
{
|
||||
Application: "app1",
|
||||
Message: "No Application status found, defaulting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Message: "Application resource is already Healthy, updating status from Waiting to Healthy.",
|
||||
Status: "Healthy",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -3643,8 +3645,9 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
expectedAppStatus: []argov1alpha1.ApplicationSetApplicationStatus{
|
||||
{
|
||||
Application: "app1",
|
||||
Message: "No Application status found, defaulting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Message: "Application resource is already Healthy, updating status from Waiting to Healthy.",
|
||||
Status: "Healthy",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -3667,6 +3670,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "",
|
||||
Status: "Healthy",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -3688,6 +3692,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "Application has pending changes, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -3710,6 +3715,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -3731,6 +3737,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "Application resource became Progressing, updating status from Pending to Progressing.",
|
||||
Status: "Progressing",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -3753,6 +3760,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -3780,6 +3788,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "Application resource became Progressing, updating status from Pending to Progressing.",
|
||||
Status: "Progressing",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -3802,6 +3811,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "",
|
||||
Status: "Progressing",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -3829,6 +3839,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "Application resource became Healthy, updating status from Progressing to Healthy.",
|
||||
Status: "Healthy",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -3851,6 +3862,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -3878,6 +3890,223 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "Application resource is already Healthy, updating status from Waiting to Healthy.",
|
||||
Status: "Healthy",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "progresses a new outofsync application in a later step to waiting",
|
||||
appSet: argov1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "name",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Strategy: &argov1alpha1.ApplicationSetStrategy{
|
||||
Type: "RollingSync",
|
||||
RollingSync: &argov1alpha1.ApplicationSetRolloutStrategy{},
|
||||
},
|
||||
},
|
||||
},
|
||||
apps: []argov1alpha1.Application{
|
||||
{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "app1",
|
||||
},
|
||||
Status: argov1alpha1.ApplicationStatus{
|
||||
Health: argov1alpha1.HealthStatus{
|
||||
Status: health.HealthStatusHealthy,
|
||||
},
|
||||
OperationState: &argov1alpha1.OperationState{
|
||||
Phase: common.OperationSucceeded,
|
||||
},
|
||||
Sync: argov1alpha1.SyncStatus{
|
||||
Status: argov1alpha1.SyncStatusCodeOutOfSync,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
appStepMap: map[string]int{
|
||||
"app1": 1,
|
||||
"app2": 0,
|
||||
},
|
||||
expectedAppStatus: []argov1alpha1.ApplicationSetApplicationStatus{
|
||||
{
|
||||
Application: "app1",
|
||||
Message: "No Application status found, defaulting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "2",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "progresses a pending application with a successful sync to progressing",
|
||||
appSet: argov1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "name",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Strategy: &argov1alpha1.ApplicationSetStrategy{
|
||||
Type: "RollingSync",
|
||||
RollingSync: &argov1alpha1.ApplicationSetRolloutStrategy{},
|
||||
},
|
||||
},
|
||||
Status: argov1alpha1.ApplicationSetStatus{
|
||||
ApplicationStatus: []argov1alpha1.ApplicationSetApplicationStatus{
|
||||
{
|
||||
Application: "app1",
|
||||
LastTransitionTime: &metav1.Time{
|
||||
Time: time.Now().Add(time.Duration(-1) * time.Minute),
|
||||
},
|
||||
Message: "",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
apps: []argov1alpha1.Application{
|
||||
{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "app1",
|
||||
},
|
||||
Status: argov1alpha1.ApplicationStatus{
|
||||
Health: argov1alpha1.HealthStatus{
|
||||
Status: health.HealthStatusDegraded,
|
||||
},
|
||||
OperationState: &argov1alpha1.OperationState{
|
||||
Phase: common.OperationSucceeded,
|
||||
StartedAt: metav1.Time{
|
||||
Time: time.Now(),
|
||||
},
|
||||
},
|
||||
Sync: argov1alpha1.SyncStatus{
|
||||
Status: argov1alpha1.SyncStatusCodeSynced,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedAppStatus: []argov1alpha1.ApplicationSetApplicationStatus{
|
||||
{
|
||||
Application: "app1",
|
||||
Message: "Application resource completed a sync successfully, updating status from Pending to Progressing.",
|
||||
Status: "Progressing",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "progresses a pending application with a successful sync <1s ago to progressing",
|
||||
appSet: v1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "name",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: v1alpha1.ApplicationSetSpec{
|
||||
Strategy: &v1alpha1.ApplicationSetStrategy{
|
||||
Type: "RollingSync",
|
||||
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{},
|
||||
},
|
||||
},
|
||||
Status: v1alpha1.ApplicationSetStatus{
|
||||
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
|
||||
{
|
||||
Application: "app1",
|
||||
LastTransitionTime: &metav1.Time{
|
||||
Time: time.Now(),
|
||||
},
|
||||
Message: "",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
apps: []v1alpha1.Application{
|
||||
{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "app1",
|
||||
},
|
||||
Status: v1alpha1.ApplicationStatus{
|
||||
Health: v1alpha1.HealthStatus{
|
||||
Status: health.HealthStatusDegraded,
|
||||
},
|
||||
OperationState: &v1alpha1.OperationState{
|
||||
Phase: common.OperationSucceeded,
|
||||
StartedAt: metav1.Time{
|
||||
Time: time.Now().Add(time.Duration(-1) * time.Second),
|
||||
},
|
||||
},
|
||||
Sync: v1alpha1.SyncStatus{
|
||||
Status: v1alpha1.SyncStatusCodeSynced,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
|
||||
{
|
||||
Application: "app1",
|
||||
Message: "Application resource completed a sync successfully, updating status from Pending to Progressing.",
|
||||
Status: "Progressing",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "does not progresses a pending application with an old successful sync to progressing",
|
||||
appSet: argov1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "name",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Strategy: &argov1alpha1.ApplicationSetStrategy{
|
||||
Type: "RollingSync",
|
||||
RollingSync: &argov1alpha1.ApplicationSetRolloutStrategy{},
|
||||
},
|
||||
},
|
||||
Status: argov1alpha1.ApplicationSetStatus{
|
||||
ApplicationStatus: []argov1alpha1.ApplicationSetApplicationStatus{
|
||||
{
|
||||
Application: "app1",
|
||||
LastTransitionTime: &metav1.Time{
|
||||
Time: time.Now(),
|
||||
},
|
||||
Message: "Application moved to Pending status, watching for the Application resource to start Progressing.",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
apps: []argov1alpha1.Application{
|
||||
{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "app1",
|
||||
},
|
||||
Status: argov1alpha1.ApplicationStatus{
|
||||
Health: argov1alpha1.HealthStatus{
|
||||
Status: health.HealthStatusDegraded,
|
||||
},
|
||||
OperationState: &argov1alpha1.OperationState{
|
||||
Phase: common.OperationSucceeded,
|
||||
StartedAt: metav1.Time{
|
||||
Time: time.Now().Add(time.Duration(-11) * time.Second),
|
||||
},
|
||||
},
|
||||
Sync: argov1alpha1.SyncStatus{
|
||||
Status: argov1alpha1.SyncStatusCodeSynced,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedAppStatus: []argov1alpha1.ApplicationSetApplicationStatus{
|
||||
{
|
||||
Application: "app1",
|
||||
Message: "Application moved to Pending status, watching for the Application resource to start Progressing.",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
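Taken together, the three cases above pin down the timing rule for the Pending-to-Progressing transition under RollingSync: a successful sync counts if it started after the status moved to Pending, or at most roughly ten seconds before it (one minute after: progresses; one second before: progresses; eleven seconds before: stays Pending). A hedged sketch of that guard, with illustrative names and the ~10s window inferred from the -1s/-11s fixtures:

// recentlySynced is illustrative only; it is not part of the diff.
// It assumes the controller compares the operation's StartedAt against the
// LastTransitionTime of the Pending status with a small grace window.
func recentlySynced(opStartedAt, pendingSince time.Time) bool {
	// A sync that began up to ~10s before the Pending transition still counts.
	return opStartedAt.After(pendingSince.Add(-10 * time.Second))
}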
@@ -3901,7 +4130,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
|
||||
KubeClientset: kubeclientset,
|
||||
}
|
||||
|
||||
appStatuses, err := r.updateApplicationSetApplicationStatus(context.TODO(), &cc.appSet, cc.apps)
|
||||
appStatuses, err := r.updateApplicationSetApplicationStatus(context.TODO(), &cc.appSet, cc.apps, cc.appStepMap)
|
||||
|
||||
// opt out of testing the LastTransitionTime is accurate
|
||||
for i := range appStatuses {
|
||||
@@ -4060,6 +4289,7 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application moved to Pending status, watching for the Application resource to start Progressing.",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4091,6 +4321,7 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4107,6 +4338,7 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4138,6 +4370,7 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "Application Pending status timed out while waiting to become Progressing, reset status to Healthy.",
|
||||
Status: "Healthy",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4154,6 +4387,7 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application Pending status timed out while waiting to become Progressing, reset status to Healthy.",
|
||||
Status: "Healthy",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4189,21 +4423,25 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "Application resource became Progressing, updating status from Pending to Progressing.",
|
||||
Status: "Progressing",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app2",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app3",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app4",
|
||||
Message: "Application moved to Pending status, watching for the Application resource to start Progressing.",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4268,24 +4506,28 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application resource became Progressing, updating status from Pending to Progressing.",
|
||||
Status: "Progressing",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app2",
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application moved to Pending status, watching for the Application resource to start Progressing.",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app3",
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app4",
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application moved to Pending status, watching for the Application resource to start Progressing.",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4321,16 +4563,19 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app2",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app3",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4351,18 +4596,21 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application moved to Pending status, watching for the Application resource to start Progressing.",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app2",
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app3",
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4398,16 +4646,19 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app2",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app3",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4428,18 +4679,21 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app2",
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app3",
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4475,16 +4729,19 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app2",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app3",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4505,18 +4762,21 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application moved to Pending status, watching for the Application resource to start Progressing.",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app2",
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application moved to Pending status, watching for the Application resource to start Progressing.",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app3",
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application moved to Pending status, watching for the Application resource to start Progressing.",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4552,16 +4812,19 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
Application: "app1",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app2",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app3",
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
@@ -4582,18 +4845,21 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application moved to Pending status, watching for the Application resource to start Progressing.",
|
||||
Status: "Pending",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app2",
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
{
|
||||
Application: "app3",
|
||||
LastTransitionTime: nil,
|
||||
Message: "Application is out of date with the current AppSet generation, setting status to Waiting.",
|
||||
Status: "Waiting",
|
||||
Step: "1",
|
||||
},
|
||||
},
|
||||
},
|
||||
|
||||
@@ -2,6 +2,7 @@ package controllers
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
|
||||
log "github.com/sirupsen/logrus"
|
||||
|
||||
@@ -46,7 +47,6 @@ type addRateLimitingInterface interface {
|
||||
}
|
||||
|
||||
func (h *clusterSecretEventHandler) queueRelatedAppGenerators(q addRateLimitingInterface, object client.Object) {
|
||||
|
||||
// Check for label, lookup all ApplicationSets that might match the cluster, queue them all
|
||||
if object.GetLabels()[generators.ArgoCDSecretTypeLabel] != generators.ArgoCDSecretTypeCluster {
|
||||
return
|
||||
@@ -73,6 +73,40 @@ func (h *clusterSecretEventHandler) queueRelatedAppGenerators(q addRateLimitingI
|
||||
foundClusterGenerator = true
|
||||
break
|
||||
}
|
||||
|
||||
if generator.Matrix != nil {
|
||||
ok, err := nestedGeneratorsHaveClusterGenerator(generator.Matrix.Generators)
|
||||
if err != nil {
|
||||
h.Log.
|
||||
WithFields(log.Fields{
|
||||
"namespace": appSet.GetNamespace(),
|
||||
"name": appSet.GetName(),
|
||||
}).
|
||||
WithError(err).
|
||||
Error("Unable to check if ApplicationSet matrix generators have cluster generator")
|
||||
}
|
||||
if ok {
|
||||
foundClusterGenerator = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if generator.Merge != nil {
|
||||
ok, err := nestedGeneratorsHaveClusterGenerator(generator.Merge.Generators)
|
||||
if err != nil {
|
||||
h.Log.
|
||||
WithFields(log.Fields{
|
||||
"namespace": appSet.GetNamespace(),
|
||||
"name": appSet.GetName(),
|
||||
}).
|
||||
WithError(err).
|
||||
Error("Unable to check if ApplicationSet merge generators have cluster generator")
|
||||
}
|
||||
if ok {
|
||||
foundClusterGenerator = true
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
if foundClusterGenerator {
|
||||
|
||||
@@ -82,3 +116,42 @@ func (h *clusterSecretEventHandler) queueRelatedAppGenerators(q addRateLimitingI
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// nestedGeneratorsHaveClusterGenerator iterates over the provided nested generators to check whether any of them is a cluster generator.
|
||||
func nestedGeneratorsHaveClusterGenerator(generators []argoprojiov1alpha1.ApplicationSetNestedGenerator) (bool, error) {
|
||||
for _, generator := range generators {
|
||||
if ok, err := nestedGeneratorHasClusterGenerator(generator); ok || err != nil {
|
||||
return ok, err
|
||||
}
|
||||
}
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// nestedGeneratorHasClusterGenerator checks if the provided generator has a cluster generator.
|
||||
func nestedGeneratorHasClusterGenerator(nested argoprojiov1alpha1.ApplicationSetNestedGenerator) (bool, error) {
|
||||
if nested.Clusters != nil {
|
||||
return true, nil
|
||||
}
|
||||
|
||||
if nested.Matrix != nil {
|
||||
nestedMatrix, err := argoprojiov1alpha1.ToNestedMatrixGenerator(nested.Matrix)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("unable to get nested matrix generator: %w", err)
|
||||
}
|
||||
if nestedMatrix != nil {
|
||||
return nestedGeneratorsHaveClusterGenerator(nestedMatrix.ToMatrixGenerator().Generators)
|
||||
}
|
||||
}
|
||||
|
||||
if nested.Merge != nil {
|
||||
nestedMerge, err := argoprojiov1alpha1.ToNestedMergeGenerator(nested.Merge)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("unable to get nested merge generator: %w", err)
|
||||
}
|
||||
if nestedMerge != nil {
|
||||
return nestedGeneratorsHaveClusterGenerator(nestedMerge.ToMergeGenerator().Generators)
|
||||
}
|
||||
}
|
||||
|
||||
return false, nil
|
||||
}
|
||||
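A small, illustrative call site for the helpers above: because nested matrix/merge generators are stored as raw apiextensionsv1.JSON in the CRD, the helper has to convert them back to typed generators before it can recurse. The JSON literal below is an assumption that mirrors the matrix test fixtures further down; the wrapper function is not part of the diff.

// Sketch only: a nested generator whose inner matrix contains a cluster generator.
func exampleNestedClusterCheck() (bool, error) {
	nested := argoprojiov1alpha1.ApplicationSetNestedGenerator{
		Matrix: &apiextensionsv1.JSON{Raw: []byte(`{"generators":[{"clusters":{}}]}`)},
	}
	// Expected under the logic above: true, nil.
	return nestedGeneratorHasClusterGenerator(nested)
}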
|
||||
@@ -6,6 +6,7 @@ import (
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/stretchr/testify/assert"
|
||||
corev1 "k8s.io/api/core/v1"
|
||||
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
|
||||
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
"k8s.io/apimachinery/pkg/types"
|
||||
@@ -163,7 +164,6 @@ func TestClusterEventHandler(t *testing.T) {
|
||||
{NamespacedName: types.NamespacedName{Namespace: "another-namespace", Name: "my-app-set"}},
|
||||
},
|
||||
},
|
||||
|
||||
{
|
||||
name: "non-argo cd secret should not match",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
@@ -189,6 +189,348 @@ func TestClusterEventHandler(t *testing.T) {
|
||||
},
|
||||
expectedRequests: []reconcile.Request{},
|
||||
},
|
||||
{
|
||||
name: "a matrix generator with a cluster generator should produce a request",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Matrix: &argov1alpha1.MatrixGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Clusters: &argov1alpha1.ClusterGenerator{},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{{
|
||||
NamespacedName: types.NamespacedName{Namespace: "argocd", Name: "my-app-set"},
|
||||
}},
|
||||
},
|
||||
{
|
||||
name: "a matrix generator with non cluster generator should not match",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Matrix: &argov1alpha1.MatrixGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
List: &argov1alpha1.ListGenerator{},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{},
|
||||
},
|
||||
{
|
||||
name: "a matrix generator with a nested matrix generator containing a cluster generator should produce a request",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Matrix: &argov1alpha1.MatrixGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Matrix: &apiextensionsv1.JSON{
|
||||
Raw: []byte(
|
||||
`{
|
||||
"generators": [
|
||||
{
|
||||
"clusters": {
|
||||
"selector": {
|
||||
"matchLabels": {
|
||||
"argocd.argoproj.io/secret-type": "cluster"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}`,
|
||||
),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{{
|
||||
NamespacedName: types.NamespacedName{Namespace: "argocd", Name: "my-app-set"},
|
||||
}},
|
||||
},
|
||||
{
|
||||
name: "a matrix generator with a nested matrix generator containing non cluster generator should not match",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Matrix: &argov1alpha1.MatrixGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Matrix: &apiextensionsv1.JSON{
|
||||
Raw: []byte(
|
||||
`{
|
||||
"generators": [
|
||||
{
|
||||
"list": {
|
||||
"elements": [
|
||||
"a",
|
||||
"b"
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
}`,
|
||||
),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{},
|
||||
},
|
||||
{
|
||||
name: "a merge generator with a cluster generator should produce a request",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Merge: &argov1alpha1.MergeGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Clusters: &argov1alpha1.ClusterGenerator{},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{{
|
||||
NamespacedName: types.NamespacedName{Namespace: "argocd", Name: "my-app-set"},
|
||||
}},
|
||||
},
|
||||
{
|
||||
name: "a matrix generator with non cluster generator should not match",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Merge: &argov1alpha1.MergeGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
List: &argov1alpha1.ListGenerator{},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{},
|
||||
},
|
||||
{
|
||||
name: "a merge generator with a nested merge generator containing a cluster generator should produce a request",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Merge: &argov1alpha1.MergeGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Merge: &apiextensionsv1.JSON{
|
||||
Raw: []byte(
|
||||
`{
|
||||
"generators": [
|
||||
{
|
||||
"clusters": {
|
||||
"selector": {
|
||||
"matchLabels": {
|
||||
"argocd.argoproj.io/secret-type": "cluster"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}`,
|
||||
),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{{
|
||||
NamespacedName: types.NamespacedName{Namespace: "argocd", Name: "my-app-set"},
|
||||
}},
|
||||
},
|
||||
{
|
||||
name: "a merge generator with a nested merge generator containing non cluster generator should not match",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Merge: &argov1alpha1.MergeGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Merge: &apiextensionsv1.JSON{
|
||||
Raw: []byte(
|
||||
`{
|
||||
"generators": [
|
||||
{
|
||||
"list": {
|
||||
"elements": [
|
||||
"a",
|
||||
"b"
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
}`,
|
||||
),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{},
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range tests {
|
||||
|
||||
applicationset/controllers/requeue_after_test.go (new file, 179 lines)
@@ -0,0 +1,179 @@
|
||||
package controllers
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/applicationset/generators"
|
||||
argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/mock"
|
||||
corev1 "k8s.io/api/core/v1"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
dynfake "k8s.io/client-go/dynamic/fake"
|
||||
kubefake "k8s.io/client-go/kubernetes/fake"
|
||||
"k8s.io/client-go/tools/record"
|
||||
"sigs.k8s.io/controller-runtime/pkg/client/fake"
|
||||
)
|
||||
|
||||
func TestRequeueAfter(t *testing.T) {
|
||||
mockServer := argoCDServiceMock{}
|
||||
ctx := context.Background()
|
||||
scheme := runtime.NewScheme()
|
||||
err := argov1alpha1.AddToScheme(scheme)
|
||||
assert.Nil(t, err)
|
||||
gvrToListKind := map[schema.GroupVersionResource]string{{
|
||||
Group: "mallard.io",
|
||||
Version: "v1",
|
||||
Resource: "ducks",
|
||||
}: "DuckList"}
|
||||
appClientset := kubefake.NewSimpleClientset()
|
||||
k8sClient := fake.NewClientBuilder().Build()
|
||||
duckType := &unstructured.Unstructured{
|
||||
Object: map[string]interface{}{
|
||||
"apiVersion": "v2quack",
|
||||
"kind": "Duck",
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "mightyduck",
|
||||
"namespace": "namespace",
|
||||
"labels": map[string]interface{}{"duck": "all-species"},
|
||||
},
|
||||
"status": map[string]interface{}{
|
||||
"decisions": []interface{}{
|
||||
map[string]interface{}{
|
||||
"clusterName": "staging-01",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"clusterName": "production-01",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
fakeDynClient := dynfake.NewSimpleDynamicClientWithCustomListKinds(runtime.NewScheme(), gvrToListKind, duckType)
|
||||
|
||||
terminalGenerators := map[string]generators.Generator{
|
||||
"List": generators.NewListGenerator(),
|
||||
"Clusters": generators.NewClusterGenerator(k8sClient, ctx, appClientset, "argocd"),
|
||||
"Git": generators.NewGitGenerator(mockServer),
|
||||
"SCMProvider": generators.NewSCMProviderGenerator(fake.NewClientBuilder().WithObjects(&corev1.Secret{}).Build(), generators.SCMAuthProviders{}),
|
||||
"ClusterDecisionResource": generators.NewDuckTypeGenerator(ctx, fakeDynClient, appClientset, "argocd"),
|
||||
"PullRequest": generators.NewPullRequestGenerator(k8sClient, generators.SCMAuthProviders{}),
|
||||
}
|
||||
|
||||
nestedGenerators := map[string]generators.Generator{
|
||||
"List": terminalGenerators["List"],
|
||||
"Clusters": terminalGenerators["Clusters"],
|
||||
"Git": terminalGenerators["Git"],
|
||||
"SCMProvider": terminalGenerators["SCMProvider"],
|
||||
"ClusterDecisionResource": terminalGenerators["ClusterDecisionResource"],
|
||||
"PullRequest": terminalGenerators["PullRequest"],
|
||||
"Matrix": generators.NewMatrixGenerator(terminalGenerators),
|
||||
"Merge": generators.NewMergeGenerator(terminalGenerators),
|
||||
}
|
||||
|
||||
topLevelGenerators := map[string]generators.Generator{
|
||||
"List": terminalGenerators["List"],
|
||||
"Clusters": terminalGenerators["Clusters"],
|
||||
"Git": terminalGenerators["Git"],
|
||||
"SCMProvider": terminalGenerators["SCMProvider"],
|
||||
"ClusterDecisionResource": terminalGenerators["ClusterDecisionResource"],
|
||||
"PullRequest": terminalGenerators["PullRequest"],
|
||||
"Matrix": generators.NewMatrixGenerator(nestedGenerators),
|
||||
"Merge": generators.NewMergeGenerator(nestedGenerators),
|
||||
}
|
||||
|
||||
client := fake.NewClientBuilder().WithScheme(scheme).Build()
|
||||
r := ApplicationSetReconciler{
|
||||
Client: client,
|
||||
Scheme: scheme,
|
||||
Recorder: record.NewFakeRecorder(0),
|
||||
Generators: topLevelGenerators,
|
||||
}
|
||||
|
||||
type args struct {
|
||||
appset *argov1alpha1.ApplicationSet
|
||||
}
|
||||
tests := []struct {
|
||||
name string
|
||||
args args
|
||||
want time.Duration
|
||||
wantErr assert.ErrorAssertionFunc
|
||||
}{
|
||||
{name: "Cluster", args: args{appset: &argov1alpha1.ApplicationSet{
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{{Clusters: &argov1alpha1.ClusterGenerator{}}},
|
||||
},
|
||||
}}, want: generators.NoRequeueAfter, wantErr: assert.NoError},
|
||||
{name: "ClusterMergeNested", args: args{&argov1alpha1.ApplicationSet{
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{Clusters: &argov1alpha1.ClusterGenerator{}},
|
||||
{Merge: &argov1alpha1.MergeGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Clusters: &argov1alpha1.ClusterGenerator{},
|
||||
Git: &argov1alpha1.GitGenerator{},
|
||||
},
|
||||
},
|
||||
}},
|
||||
},
|
||||
},
|
||||
}}, want: generators.DefaultRequeueAfterSeconds, wantErr: assert.NoError},
|
||||
{name: "ClusterMatrixNested", args: args{&argov1alpha1.ApplicationSet{
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{Clusters: &argov1alpha1.ClusterGenerator{}},
|
||||
{Matrix: &argov1alpha1.MatrixGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Clusters: &argov1alpha1.ClusterGenerator{},
|
||||
Git: &argov1alpha1.GitGenerator{},
|
||||
},
|
||||
},
|
||||
}},
|
||||
},
|
||||
},
|
||||
}}, want: generators.DefaultRequeueAfterSeconds, wantErr: assert.NoError},
|
||||
{name: "ListGenerator", args: args{appset: &argov1alpha1.ApplicationSet{
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{{List: &argov1alpha1.ListGenerator{}}},
|
||||
},
|
||||
}}, want: generators.NoRequeueAfter, wantErr: assert.NoError},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
assert.Equalf(t, tt.want, r.getMinRequeueAfter(tt.args.appset), "getMinRequeueAfter(%v)", tt.args.appset)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
type argoCDServiceMock struct {
|
||||
mock *mock.Mock
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetApps(ctx context.Context, repoURL string, revision string) ([]string, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision)
|
||||
|
||||
return args.Get(0).([]string), args.Error(1)
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetFiles(ctx context.Context, repoURL string, revision string, pattern string) (map[string][]byte, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision, pattern)
|
||||
|
||||
return args.Get(0).(map[string][]byte), args.Error(1)
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetFileContent(ctx context.Context, repoURL string, revision string, path string) ([]byte, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision, path)
|
||||
|
||||
return args.Get(0).([]byte), args.Error(1)
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetDirectories(ctx context.Context, repoURL string, revision string) ([]string, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision)
|
||||
return args.Get(0).([]string), args.Error(1)
|
||||
}
|
||||
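The RequeueAfter cases above boil down to a min-over-generators rule: a cluster or list generator alone yields NoRequeueAfter, while any nested Git generator (inside Merge or Matrix) pulls the result down to DefaultRequeueAfterSeconds. A hedged sketch of that selection, assuming getMinRequeueAfter reduces per-generator values roughly this way (constant names are from the generators package, the function itself is illustrative):

// minRequeue is a sketch, not the real implementation: the smallest
// positive interval across all generators wins; all-NoRequeueAfter means no requeue.
func minRequeue(requeueAfters []time.Duration) time.Duration {
	res := generators.NoRequeueAfter
	for _, d := range requeueAfters {
		if d > generators.NoRequeueAfter && (res == generators.NoRequeueAfter || d < res) {
			res = d
		}
	}
	return res
}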
@@ -51,6 +51,8 @@ func NewClusterGenerator(c client.Client, ctx context.Context, clientset kuberne
|
||||
return g
|
||||
}
|
||||
|
||||
// GetRequeueAfter never requeues the cluster generator because the `clusterSecretEventHandler` will requeue the appsets
|
||||
// when the cluster secrets change
|
||||
func (g *ClusterGenerator) GetRequeueAfter(appSetGenerator *argoappsetv1alpha1.ApplicationSetGenerator) time.Duration {
|
||||
return NoRequeueAfter
|
||||
}
|
||||
|
||||
@@ -2,12 +2,11 @@ package generators
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"encoding/json"
|
||||
"reflect"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/applicationset/utils"
|
||||
"github.com/jeremywohl/flatten"
|
||||
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/labels"
|
||||
|
||||
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
@@ -25,9 +24,12 @@ type TransformResult struct {
|
||||
Template argoprojiov1alpha1.ApplicationSetTemplate
|
||||
}
|
||||
|
||||
//Transform a spec generator to list of paramSets and a template
|
||||
// Transform a spec generator to list of paramSets and a template
|
||||
func Transform(requestedGenerator argoprojiov1alpha1.ApplicationSetGenerator, allGenerators map[string]Generator, baseTemplate argoprojiov1alpha1.ApplicationSetTemplate, appSet *argoprojiov1alpha1.ApplicationSet, genParams map[string]interface{}) ([]TransformResult, error) {
|
||||
selector, err := metav1.LabelSelectorAsSelector(requestedGenerator.Selector)
|
||||
// This is a custom version of the `LabelSelectorAsSelector` that is in k8s.io/apimachinery. This has been copied
|
||||
// verbatim from that package, with the difference that we do not have any restrictions on label values. This is done
|
||||
// so that, among other things, we can match on cluster urls.
|
||||
selector, err := utils.LabelSelectorAsSelector(requestedGenerator.Selector)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error parsing label selector: %w", err)
|
||||
}
|
||||
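The swap from metav1.LabelSelectorAsSelector to the relaxed utils.LabelSelectorAsSelector is what makes the server-URL filters in TestTransForm further down possible; a hedged comparison, assuming the custom helper keeps the upstream signature (the wrapper function is illustrative):

func exampleRelaxedSelector() {
	sel := &metav1.LabelSelector{MatchLabels: map[string]string{
		"server": "https://some-really-long-url-that-will-exceed-63-characters.com",
	}}
	_, strictErr := metav1.LabelSelectorAsSelector(sel) // non-nil: a URL is not a valid label value
	relaxed, relaxedErr := utils.LabelSelectorAsSelector(sel)
	// Expected: relaxedErr == nil; relaxed can match parameter sets keyed by cluster URL.
	_, _, _ = strictErr, relaxed, relaxedErr
}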
@@ -72,8 +74,17 @@ func Transform(requestedGenerator argoprojiov1alpha1.ApplicationSetGenerator, al
|
||||
}
|
||||
var filterParams []map[string]interface{}
|
||||
for _, param := range params {
|
||||
flatParam, err := flattenParameters(param)
|
||||
if err != nil {
|
||||
log.WithError(err).WithField("generator", g).
|
||||
Error("error flattening params")
|
||||
if firstError == nil {
|
||||
firstError = err
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
if requestedGenerator.Selector != nil && !selector.Matches(labels.Set(keepOnlyStringValues(param))) {
|
||||
if requestedGenerator.Selector != nil && !selector.Matches(labels.Set(flatParam)) {
|
||||
continue
|
||||
}
|
||||
filterParams = append(filterParams, param)
|
||||
@@ -88,18 +99,6 @@ func Transform(requestedGenerator argoprojiov1alpha1.ApplicationSetGenerator, al
|
||||
return res, firstError
|
||||
}
|
||||
|
||||
func keepOnlyStringValues(in map[string]interface{}) map[string]string {
|
||||
var out map[string]string = map[string]string{}
|
||||
|
||||
for key, value := range in {
|
||||
if _, ok := value.(string); ok {
|
||||
out[key] = value.(string)
|
||||
}
|
||||
}
|
||||
|
||||
return out
|
||||
}
|
||||
|
||||
func GetRelevantGenerators(requestedGenerator *argoprojiov1alpha1.ApplicationSetGenerator, generators map[string]Generator) []Generator {
|
||||
var res []Generator
|
||||
|
||||
@@ -122,6 +121,20 @@ func GetRelevantGenerators(requestedGenerator *argoprojiov1alpha1.ApplicationSet
|
||||
return res
|
||||
}
|
||||
|
||||
func flattenParameters(in map[string]interface{}) (map[string]string, error) {
|
||||
flat, err := flatten.Flatten(in, "", flatten.DotStyle)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
out := make(map[string]string, len(flat))
|
||||
for k, v := range flat {
|
||||
out[k] = fmt.Sprintf("%v", v)
|
||||
}
|
||||
|
||||
return out, nil
|
||||
}
|
||||
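flattenParameters is what replaces keepOnlyStringValues: instead of dropping non-string values, nested parameters are flattened into dot-notation string keys so that label selectors can address them. A short usage sketch; the inputs are assumptions that mirror the "values.foo" cases in the go-template test below, and the wrapper function is not part of the diff:

func exampleFlatten() {
	params := map[string]interface{}{
		"cluster": "cluster",
		"values":  map[string]interface{}{"foo": "bar"},
	}
	flat, err := flattenParameters(params)
	// Expected: err == nil, flat["cluster"] == "cluster", flat["values.foo"] == "bar",
	// so MatchLabels{"values.foo": "bar"} can now select this parameter set.
	_, _ = flat, err
}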
|
||||
func mergeGeneratorTemplate(g Generator, requestedGenerator *argoprojiov1alpha1.ApplicationSetGenerator, applicationSetTemplate argoprojiov1alpha1.ApplicationSetTemplate) (argoprojiov1alpha1.ApplicationSetTemplate, error) {
|
||||
// Make a copy of the value from `GetTemplate()` before merge, rather than copying directly into
|
||||
// the provided parameter (which will touch the original resource object returned by client-go)
|
||||
@@ -132,27 +145,15 @@ func mergeGeneratorTemplate(g Generator, requestedGenerator *argoprojiov1alpha1.
|
||||
return *dest, err
|
||||
}
|
||||
|
||||
// Currently for Matrix Generator. Allows interpolating the matrix's 2nd child generator with values from the 1st child generator
|
||||
// InterpolateGenerator allows interpolating the matrix's 2nd child generator with values from the 1st child generator
|
||||
// "params" parameter is an array, where each index corresponds to a generator. Each index contains a map w/ that generator's parameters.
|
||||
func InterpolateGenerator(requestedGenerator *argoprojiov1alpha1.ApplicationSetGenerator, params map[string]interface{}, useGoTemplate bool) (argoprojiov1alpha1.ApplicationSetGenerator, error) {
|
||||
interpolatedGenerator := requestedGenerator.DeepCopy()
|
||||
tmplBytes, err := json.Marshal(interpolatedGenerator)
|
||||
if err != nil {
|
||||
log.WithError(err).WithField("requestedGenerator", interpolatedGenerator).Error("error marshalling requested generator for interpolation")
|
||||
return *interpolatedGenerator, err
|
||||
}
|
||||
|
||||
render := utils.Render{}
|
||||
replacedTmplStr, err := render.Replace(string(tmplBytes), params, useGoTemplate)
|
||||
interpolatedGenerator, err := render.RenderGeneratorParams(requestedGenerator, params, useGoTemplate)
|
||||
if err != nil {
|
||||
log.WithError(err).WithField("interpolatedGeneratorString", replacedTmplStr).Error("error interpolating generator with other generator's parameter")
|
||||
log.WithError(err).WithField("interpolatedGenerator", interpolatedGenerator).Error("error interpolating generator with other generator's parameter")
|
||||
return *interpolatedGenerator, err
|
||||
}
|
||||
|
||||
err = json.Unmarshal([]byte(replacedTmplStr), interpolatedGenerator)
|
||||
if err != nil {
|
||||
log.WithError(err).WithField("requestedGenerator", interpolatedGenerator).Error("error unmarshalling requested generator for interpolation")
|
||||
return *interpolatedGenerator, err
|
||||
}
|
||||
return *interpolatedGenerator, nil
|
||||
}
|
||||
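For reference, the interpolation contract that the switch to RenderGeneratorParams is expected to preserve, sketched with the same fixtures TestInterpolateGenerator uses below (the wrapper function is illustrative):

func exampleInterpolate() (argoprojiov1alpha1.ApplicationSetGenerator, error) {
	gen := &argoprojiov1alpha1.ApplicationSetGenerator{
		Git: &argoprojiov1alpha1.GitGenerator{
			Files: []argoprojiov1alpha1.GitFileGeneratorItem{{Path: "{{server}}"}},
		},
	}
	// Expected: Git.Files[0].Path == "https://production-01.example.com", err == nil.
	return InterpolateGenerator(gen, map[string]interface{}{
		"server": "https://production-01.example.com",
	}, false)
}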
|
||||
@@ -6,9 +6,11 @@ import (
|
||||
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
|
||||
testutils "github.com/argoproj/argo-cd/v2/applicationset/utils/test"
|
||||
argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
|
||||
"github.com/stretchr/testify/mock"
|
||||
@@ -92,8 +94,160 @@ func TestMatchValues(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func emptyTemplate() argoprojiov1alpha1.ApplicationSetTemplate {
|
||||
return argoprojiov1alpha1.ApplicationSetTemplate{
|
||||
func TestMatchValuesGoTemplate(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
elements []apiextensionsv1.JSON
|
||||
selector *metav1.LabelSelector
|
||||
expected []map[string]interface{}
|
||||
}{
|
||||
{
|
||||
name: "no filter",
|
||||
elements: []apiextensionsv1.JSON{{Raw: []byte(`{"cluster": "cluster","url": "url"}`)}},
|
||||
selector: &metav1.LabelSelector{},
|
||||
expected: []map[string]interface{}{{"cluster": "cluster", "url": "url"}},
|
||||
},
|
||||
{
|
||||
name: "nil",
|
||||
elements: []apiextensionsv1.JSON{{Raw: []byte(`{"cluster": "cluster","url": "url"}`)}},
|
||||
selector: nil,
|
||||
expected: []map[string]interface{}{{"cluster": "cluster", "url": "url"}},
|
||||
},
|
||||
{
|
||||
name: "values.foo should be foo but is ignore element",
|
||||
elements: []apiextensionsv1.JSON{{Raw: []byte(`{"cluster": "cluster","url": "url","values":{"foo":"bar"}}`)}},
|
||||
selector: &metav1.LabelSelector{
|
||||
MatchLabels: map[string]string{
|
||||
"values.foo": "foo",
|
||||
},
|
||||
},
|
||||
expected: []map[string]interface{}{},
|
||||
},
|
||||
{
|
||||
name: "values.foo should be bar",
|
||||
elements: []apiextensionsv1.JSON{{Raw: []byte(`{"cluster": "cluster","url": "url","values":{"foo":"bar"}}`)}},
|
||||
selector: &metav1.LabelSelector{
|
||||
MatchLabels: map[string]string{
|
||||
"values.foo": "bar",
|
||||
},
|
||||
},
|
||||
expected: []map[string]interface{}{{"cluster": "cluster", "url": "url", "values": map[string]interface{}{"foo": "bar"}}},
|
||||
},
|
||||
{
|
||||
name: "values.0 should be bar",
|
||||
elements: []apiextensionsv1.JSON{{Raw: []byte(`{"cluster": "cluster","url": "url","values":["bar"]}`)}},
|
||||
selector: &metav1.LabelSelector{
|
||||
MatchLabels: map[string]string{
|
||||
"values.0": "bar",
|
||||
},
|
||||
},
|
||||
expected: []map[string]interface{}{{"cluster": "cluster", "url": "url", "values": []interface{}{"bar"}}},
|
||||
},
|
||||
}
|
||||
|
||||
for _, testCase := range testCases {
|
||||
t.Run(testCase.name, func(t *testing.T) {
|
||||
var listGenerator = NewListGenerator()
|
||||
var data = map[string]Generator{
|
||||
"List": listGenerator,
|
||||
}
|
||||
|
||||
applicationSetInfo := argov1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "set",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
GoTemplate: true,
|
||||
},
|
||||
}
|
||||
|
||||
results, err := Transform(argov1alpha1.ApplicationSetGenerator{
|
||||
Selector: testCase.selector,
|
||||
List: &argov1alpha1.ListGenerator{
|
||||
Elements: testCase.elements,
|
||||
Template: emptyTemplate(),
|
||||
}},
|
||||
data,
|
||||
emptyTemplate(),
|
||||
&applicationSetInfo, nil)
|
||||
|
||||
assert.NoError(t, err)
|
||||
assert.ElementsMatch(t, testCase.expected, results[0].Params)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestTransForm(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
selector *metav1.LabelSelector
|
||||
expected []map[string]interface{}
|
||||
}{
|
||||
{
|
||||
name: "server filter",
|
||||
selector: &metav1.LabelSelector{
|
||||
MatchLabels: map[string]string{"server": "https://production-01.example.com"},
|
||||
},
|
||||
expected: []map[string]interface{}{{
|
||||
"metadata.annotations.foo.argoproj.io": "production",
|
||||
"metadata.labels.argocd.argoproj.io/secret-type": "cluster",
|
||||
"metadata.labels.environment": "production",
|
||||
"metadata.labels.org": "bar",
|
||||
"name": "production_01/west",
|
||||
"nameNormalized": "production-01-west",
|
||||
"server": "https://production-01.example.com",
|
||||
}},
|
||||
},
|
||||
{
|
||||
name: "server filter with long url",
|
||||
selector: &metav1.LabelSelector{
|
||||
MatchLabels: map[string]string{"server": "https://some-really-long-url-that-will-exceed-63-characters.com"},
|
||||
},
|
||||
expected: []map[string]interface{}{{
|
||||
"metadata.annotations.foo.argoproj.io": "production",
|
||||
"metadata.labels.argocd.argoproj.io/secret-type": "cluster",
|
||||
"metadata.labels.environment": "production",
|
||||
"metadata.labels.org": "bar",
|
||||
"name": "some-really-long-server-url",
|
||||
"nameNormalized": "some-really-long-server-url",
|
||||
"server": "https://some-really-long-url-that-will-exceed-63-characters.com",
|
||||
}},
|
||||
},
|
||||
}
|
||||
|
||||
for _, testCase := range testCases {
|
||||
t.Run(testCase.name, func(t *testing.T) {
|
||||
testGenerators := map[string]Generator{
|
||||
"Clusters": getMockClusterGenerator(),
|
||||
}
|
||||
|
||||
applicationSetInfo := argov1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "set",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{},
|
||||
}
|
||||
|
||||
results, err := Transform(
|
||||
argov1alpha1.ApplicationSetGenerator{
|
||||
Selector: testCase.selector,
|
||||
Clusters: &argov1alpha1.ClusterGenerator{
|
||||
Selector: metav1.LabelSelector{},
|
||||
Template: argov1alpha1.ApplicationSetTemplate{},
|
||||
Values: nil,
|
||||
}},
|
||||
testGenerators,
|
||||
emptyTemplate(),
|
||||
&applicationSetInfo, nil)
|
||||
|
||||
assert.NoError(t, err)
|
||||
assert.ElementsMatch(t, testCase.expected, results[0].Params)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func emptyTemplate() argov1alpha1.ApplicationSetTemplate {
|
||||
return argov1alpha1.ApplicationSetTemplate{
|
||||
Spec: argov1alpha1.ApplicationSpec{
|
||||
Project: "project",
|
||||
},
|
||||
@@ -150,8 +304,35 @@ func getMockClusterGenerator() Generator {
|
||||
},
|
||||
Type: corev1.SecretType("Opaque"),
|
||||
},
|
||||
&corev1.Secret{
|
||||
TypeMeta: metav1.TypeMeta{
|
||||
Kind: "Secret",
|
||||
APIVersion: "v1",
|
||||
},
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "some-really-long-server-url",
|
||||
Namespace: "namespace",
|
||||
Labels: map[string]string{
|
||||
"argocd.argoproj.io/secret-type": "cluster",
|
||||
"environment": "production",
|
||||
"org": "bar",
|
||||
},
|
||||
Annotations: map[string]string{
|
||||
"foo.argoproj.io": "production",
|
||||
},
|
||||
},
|
||||
Data: map[string][]byte{
|
||||
"config": []byte("{}"),
|
||||
"name": []byte("some-really-long-server-url"),
|
||||
"server": []byte("https://some-really-long-url-that-will-exceed-63-characters.com"),
|
||||
},
|
||||
Type: corev1.SecretType("Opaque"),
|
||||
},
|
||||
}
|
||||
runtimeClusters := []runtime.Object{}
|
||||
for _, clientCluster := range clusters {
|
||||
runtimeClusters = append(runtimeClusters, clientCluster)
|
||||
}
|
||||
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
|
||||
|
||||
fakeClient := fake.NewClientBuilder().WithObjects(clusters...).Build()
|
||||
@@ -159,8 +340,8 @@ func getMockClusterGenerator() Generator {
|
||||
}
|
||||
|
||||
func getMockGitGenerator() Generator {
|
||||
argoCDServiceMock := argoCDServiceMock{mock: &mock.Mock{}}
|
||||
argoCDServiceMock.mock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
|
||||
argoCDServiceMock := testutils.ArgoCDServiceMock{Mock: &mock.Mock{}}
|
||||
argoCDServiceMock.Mock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
|
||||
var gitGenerator = NewGitGenerator(argoCDServiceMock)
|
||||
return gitGenerator
|
||||
}
|
||||
@@ -248,6 +429,60 @@ func TestInterpolateGenerator(t *testing.T) {
|
||||
Path: "{{server}}",
|
||||
}
|
||||
|
||||
requestedGenerator = &argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
Git: &argoprojiov1alpha1.GitGenerator{
|
||||
Files: append([]argoprojiov1alpha1.GitFileGeneratorItem{}, fileNamePath, fileServerPath),
|
||||
Template: argoprojiov1alpha1.ApplicationSetTemplate{},
|
||||
},
|
||||
}
|
||||
clusterGeneratorParams := map[string]interface{}{
|
||||
"name": "production_01/west", "server": "https://production-01.example.com",
|
||||
}
|
||||
interpolatedGenerator, err = InterpolateGenerator(requestedGenerator, clusterGeneratorParams, false)
|
||||
if err != nil {
|
||||
log.WithError(err).WithField("requestedGenerator", requestedGenerator).Error("error interpolating Generator")
|
||||
return
|
||||
}
|
||||
assert.Equal(t, "production_01/west", interpolatedGenerator.Git.Files[0].Path)
|
||||
assert.Equal(t, "https://production-01.example.com", interpolatedGenerator.Git.Files[1].Path)
|
||||
}
|
||||
|
||||
func TestInterpolateGenerator_go(t *testing.T) {
|
||||
requestedGenerator := &argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
Clusters: &argoprojiov1alpha1.ClusterGenerator{
|
||||
Selector: metav1.LabelSelector{
|
||||
MatchLabels: map[string]string{
|
||||
"argocd.argoproj.io/secret-type": "cluster",
|
||||
"path-basename": "{{base .path.path}}",
|
||||
"path-zero": "{{index .path.segments 0}}",
|
||||
"path-full": "{{.path.path}}",
|
||||
"kubernetes.io/environment": `{{default "foo" .my_label}}`,
|
||||
}},
|
||||
},
|
||||
}
|
||||
gitGeneratorParams := map[string]interface{}{
|
||||
"path": map[string]interface{}{
|
||||
"path": "p1/p2/app3",
|
||||
"segments": []string{"p1", "p2", "app3"},
|
||||
},
|
||||
}
|
||||
interpolatedGenerator, err := InterpolateGenerator(requestedGenerator, gitGeneratorParams, true)
|
||||
require.NoError(t, err)
|
||||
if err != nil {
|
||||
log.WithError(err).WithField("requestedGenerator", requestedGenerator).Error("error interpolating Generator")
|
||||
return
|
||||
}
|
||||
assert.Equal(t, "app3", interpolatedGenerator.Clusters.Selector.MatchLabels["path-basename"])
|
||||
assert.Equal(t, "p1", interpolatedGenerator.Clusters.Selector.MatchLabels["path-zero"])
|
||||
assert.Equal(t, "p1/p2/app3", interpolatedGenerator.Clusters.Selector.MatchLabels["path-full"])
|
||||
|
||||
fileNamePath := argoprojiov1alpha1.GitFileGeneratorItem{
|
||||
Path: "{{.name}}",
|
||||
}
|
||||
fileServerPath := argoprojiov1alpha1.GitFileGeneratorItem{
|
||||
Path: "{{.server}}",
|
||||
}
|
||||
|
||||
requestedGenerator = &argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
Git: &argoprojiov1alpha1.GitGenerator{
|
||||
Files: append([]argoprojiov1alpha1.GitFileGeneratorItem{}, fileNamePath, fileServerPath),
|
||||
|
||||
@@ -58,9 +58,9 @@ func (g *GitGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.Applic
|
||||
|
||||
var err error
|
||||
var res []map[string]interface{}
|
||||
if appSetGenerator.Git.Directories != nil {
|
||||
if len(appSetGenerator.Git.Directories) != 0 {
|
||||
res, err = g.generateParamsForGitDirectories(appSetGenerator, appSet.Spec.GoTemplate)
|
||||
} else if appSetGenerator.Git.Files != nil {
|
||||
} else if len(appSetGenerator.Git.Files) != 0 {
|
||||
res, err = g.generateParamsForGitFiles(appSetGenerator, appSet.Spec.GoTemplate)
|
||||
} else {
|
||||
return nil, EmptyAppSetGeneratorError
|
||||
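The nil-to-len change above guards against an empty but non-nil Directories or Files slice silently selecting the wrong branch; a minimal illustration, with hypothetical variable names and a wrapper that is not part of the diff:

func exampleEmptyVsNil() {
	var nilDirs []argoprojiov1alpha1.GitDirectoryGeneratorItem    // nil, len 0
	emptyDirs := []argoprojiov1alpha1.GitDirectoryGeneratorItem{} // non-nil, len 0
	// Old check (`Directories != nil`): emptyDirs selected the directories branch and produced nothing.
	// New check (`len(Directories) != 0`): both cases fall through to the Files check, ending in
	// EmptyAppSetGeneratorError when Files is empty too.
	_, _ = nilDirs, emptyDirs
}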
|
||||
@@ -1,7 +1,6 @@
|
||||
package generators
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
@@ -9,6 +8,7 @@ import (
|
||||
"github.com/stretchr/testify/mock"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
|
||||
testutils "github.com/argoproj/argo-cd/v2/applicationset/utils/test"
|
||||
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
)
|
||||
|
||||
@@ -20,33 +20,6 @@ import (
|
||||
// return io.NewCloser(func() error { return nil }), c.RepoServerServiceClient, nil
|
||||
// }
|
||||
|
||||
type argoCDServiceMock struct {
|
||||
mock *mock.Mock
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetApps(ctx context.Context, repoURL string, revision string) ([]string, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision)
|
||||
|
||||
return args.Get(0).([]string), args.Error(1)
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetFiles(ctx context.Context, repoURL string, revision string, pattern string) (map[string][]byte, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision, pattern)
|
||||
|
||||
return args.Get(0).(map[string][]byte), args.Error(1)
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetFileContent(ctx context.Context, repoURL string, revision string, path string) ([]byte, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision, path)
|
||||
|
||||
return args.Get(0).([]byte), args.Error(1)
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetDirectories(ctx context.Context, repoURL string, revision string) ([]string, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision)
|
||||
return args.Get(0).([]string), args.Error(1)
|
||||
}
|
||||
|
||||
func Test_generateParamsFromGitFile(t *testing.T) {
|
||||
params, err := (*GitGenerator)(nil).generateParamsFromGitFile("path/dir/file_name.yaml", []byte(`
|
||||
foo:
|
||||
@@ -271,9 +244,9 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
argoCDServiceMock := argoCDServiceMock{mock: &mock.Mock{}}
|
||||
argoCDServiceMock := testutils.ArgoCDServiceMock{Mock: &mock.Mock{}}
|
||||
|
||||
argoCDServiceMock.mock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
|
||||
argoCDServiceMock.Mock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
|
||||
|
||||
var gitGenerator = NewGitGenerator(argoCDServiceMock)
|
||||
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
|
||||
@@ -301,7 +274,7 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
|
||||
assert.Equal(t, testCaseCopy.expected, got)
|
||||
}
|
||||
|
||||
argoCDServiceMock.mock.AssertExpectations(t)
|
||||
argoCDServiceMock.Mock.AssertExpectations(t)
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -566,9 +539,9 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
argoCDServiceMock := argoCDServiceMock{mock: &mock.Mock{}}
|
||||
argoCDServiceMock := testutils.ArgoCDServiceMock{Mock: &mock.Mock{}}
|
||||
|
||||
argoCDServiceMock.mock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
|
||||
argoCDServiceMock.Mock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
|
||||
|
||||
var gitGenerator = NewGitGenerator(argoCDServiceMock)
|
||||
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
|
||||
@@ -597,7 +570,7 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
|
||||
assert.Equal(t, testCaseCopy.expected, got)
|
||||
}
|
||||
|
||||
argoCDServiceMock.mock.AssertExpectations(t)
|
||||
argoCDServiceMock.Mock.AssertExpectations(t)
|
||||
})
|
||||
}
|
||||
|
||||
@@ -857,8 +830,8 @@ cluster:
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
argoCDServiceMock := argoCDServiceMock{mock: &mock.Mock{}}
|
||||
argoCDServiceMock.mock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything).
|
||||
argoCDServiceMock := testutils.ArgoCDServiceMock{Mock: &mock.Mock{}}
|
||||
argoCDServiceMock.Mock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything).
|
||||
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
|
||||
|
||||
var gitGenerator = NewGitGenerator(argoCDServiceMock)
|
||||
@@ -887,7 +860,7 @@ cluster:
|
||||
assert.ElementsMatch(t, testCaseCopy.expected, got)
|
||||
}
|
||||
|
||||
argoCDServiceMock.mock.AssertExpectations(t)
|
||||
argoCDServiceMock.Mock.AssertExpectations(t)
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -1206,8 +1179,8 @@ cluster:
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
argoCDServiceMock := argoCDServiceMock{mock: &mock.Mock{}}
|
||||
argoCDServiceMock.mock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything).
|
||||
argoCDServiceMock := testutils.ArgoCDServiceMock{Mock: &mock.Mock{}}
|
||||
argoCDServiceMock.Mock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything).
|
||||
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
|
||||
|
||||
var gitGenerator = NewGitGenerator(argoCDServiceMock)
|
||||
@@ -1237,7 +1210,7 @@ cluster:
|
||||
assert.ElementsMatch(t, testCaseCopy.expected, got)
|
||||
}
|
||||
|
||||
argoCDServiceMock.mock.AssertExpectations(t)
|
||||
argoCDServiceMock.Mock.AssertExpectations(t)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -80,28 +80,13 @@ func (m *MatrixGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.App
|
||||
}
|
||||
|
||||
func (m *MatrixGenerator) getParams(appSetBaseGenerator argoprojiov1alpha1.ApplicationSetNestedGenerator, appSet *argoprojiov1alpha1.ApplicationSet, params map[string]interface{}) ([]map[string]interface{}, error) {
|
||||
var matrix *argoprojiov1alpha1.MatrixGenerator
|
||||
if appSetBaseGenerator.Matrix != nil {
|
||||
// Since nested matrix generator is represented as a JSON object in the CRD, we unmarshall it back to a Go struct here.
|
||||
nestedMatrix, err := argoprojiov1alpha1.ToNestedMatrixGenerator(appSetBaseGenerator.Matrix)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("unable to unmarshall nested matrix generator: %v", err)
|
||||
}
|
||||
if nestedMatrix != nil {
|
||||
matrix = nestedMatrix.ToMatrixGenerator()
|
||||
}
|
||||
matrixGen, err := getMatrixGenerator(appSetBaseGenerator)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var mergeGenerator *argoprojiov1alpha1.MergeGenerator
|
||||
if appSetBaseGenerator.Merge != nil {
|
||||
// Since nested merge generator is represented as a JSON object in the CRD, we unmarshall it back to a Go struct here.
|
||||
nestedMerge, err := argoprojiov1alpha1.ToNestedMergeGenerator(appSetBaseGenerator.Merge)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("unable to unmarshall nested merge generator: %v", err)
|
||||
}
|
||||
if nestedMerge != nil {
|
||||
mergeGenerator = nestedMerge.ToMergeGenerator()
|
||||
}
|
||||
mergeGen, err := getMergeGenerator(appSetBaseGenerator)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
t, err := Transform(
|
||||
@@ -112,8 +97,8 @@ func (m *MatrixGenerator) getParams(appSetBaseGenerator argoprojiov1alpha1.Appli
|
||||
SCMProvider: appSetBaseGenerator.SCMProvider,
|
||||
ClusterDecisionResource: appSetBaseGenerator.ClusterDecisionResource,
|
||||
PullRequest: appSetBaseGenerator.PullRequest,
|
||||
Matrix: matrix,
|
||||
Merge: mergeGenerator,
|
||||
Matrix: matrixGen,
|
||||
Merge: mergeGen,
|
||||
Selector: appSetBaseGenerator.Selector,
|
||||
},
|
||||
m.supportedGenerators,
|
||||
@@ -143,10 +128,15 @@ func (m *MatrixGenerator) GetRequeueAfter(appSetGenerator *argoprojiov1alpha1.Ap
|
||||
var found bool
|
||||
|
||||
for _, r := range appSetGenerator.Matrix.Generators {
|
||||
matrixGen, _ := getMatrixGenerator(r)
|
||||
mergeGen, _ := getMergeGenerator(r)
|
||||
base := &argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
List: r.List,
|
||||
Clusters: r.Clusters,
|
||||
Git: r.Git,
|
||||
List: r.List,
|
||||
Clusters: r.Clusters,
|
||||
Git: r.Git,
|
||||
PullRequest: r.PullRequest,
|
||||
Matrix: matrixGen,
|
||||
Merge: mergeGen,
|
||||
}
|
||||
generators := GetRelevantGenerators(base, m.supportedGenerators)
|
||||
|
||||
@@ -167,6 +157,17 @@ func (m *MatrixGenerator) GetRequeueAfter(appSetGenerator *argoprojiov1alpha1.Ap
|
||||
|
||||
}
|
||||
|
||||
func getMatrixGenerator(r argoprojiov1alpha1.ApplicationSetNestedGenerator) (*argoprojiov1alpha1.MatrixGenerator, error) {
|
||||
if r.Matrix == nil {
|
||||
return nil, nil
|
||||
}
|
||||
matrix, err := argoprojiov1alpha1.ToNestedMatrixGenerator(r.Matrix)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return matrix.ToMatrixGenerator(), nil
|
||||
}
|
||||
|
||||
func (m *MatrixGenerator) GetTemplate(appSetGenerator *argoprojiov1alpha1.ApplicationSetGenerator) *argoprojiov1alpha1.ApplicationSetTemplate {
|
||||
return &appSetGenerator.Matrix.Template
|
||||
}
|
||||
|
||||
@@ -5,6 +5,7 @@ import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
corev1 "k8s.io/api/core/v1"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
@@ -16,6 +17,7 @@ import (
|
||||
"github.com/stretchr/testify/mock"
|
||||
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
|
||||
|
||||
testutils "github.com/argoproj/argo-cd/v2/applicationset/utils/test"
|
||||
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
)
|
||||
|
||||
@@ -28,7 +30,7 @@ func TestMatrixGenerate(t *testing.T) {
|
||||
}
|
||||
|
||||
listGenerator := &argoprojiov1alpha1.ListGenerator{
|
||||
Elements: []apiextensionsv1.JSON{{Raw: []byte(`{"cluster": "Cluster","url": "Url"}`)}},
|
||||
Elements: []apiextensionsv1.JSON{{Raw: []byte(`{"cluster": "Cluster","url": "Url", "templated": "test-{{path.basenameNormalized}}"}`)}},
|
||||
}
|
||||
|
||||
testCases := []struct {
|
||||
@@ -48,8 +50,8 @@ func TestMatrixGenerate(t *testing.T) {
|
||||
},
|
||||
},
|
||||
expected: []map[string]interface{}{
|
||||
{"path": "app1", "path.basename": "app1", "path.basenameNormalized": "app1", "cluster": "Cluster", "url": "Url"},
|
||||
{"path": "app2", "path.basename": "app2", "path.basenameNormalized": "app2", "cluster": "Cluster", "url": "Url"},
|
||||
{"path": "app1", "path.basename": "app1", "path.basenameNormalized": "app1", "cluster": "Cluster", "url": "Url", "templated": "test-app1"},
|
||||
{"path": "app2", "path.basename": "app2", "path.basenameNormalized": "app2", "cluster": "Cluster", "url": "Url", "templated": "test-app2"},
|
||||
},
|
||||
},
|
||||
{
|
||||
@@ -399,6 +401,8 @@ func TestMatrixGetRequeueAfter(t *testing.T) {
|
||||
Elements: []apiextensionsv1.JSON{{Raw: []byte(`{"cluster": "Cluster","url": "Url"}`)}},
|
||||
}
|
||||
|
||||
pullRequestGenerator := &argoprojiov1alpha1.PullRequestGenerator{}
|
||||
|
||||
testCases := []struct {
|
||||
name string
|
||||
baseGenerators []argoprojiov1alpha1.ApplicationSetNestedGenerator
|
||||
@@ -431,6 +435,31 @@ func TestMatrixGetRequeueAfter(t *testing.T) {
|
||||
gitGetRequeueAfter: time.Duration(1),
|
||||
expected: time.Duration(1),
|
||||
},
|
||||
{
|
||||
name: "returns the minimal time for pull request",
|
||||
baseGenerators: []argoprojiov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Git: gitGenerator,
|
||||
},
|
||||
{
|
||||
PullRequest: pullRequestGenerator,
|
||||
},
|
||||
},
|
||||
gitGetRequeueAfter: time.Duration(15 * time.Second),
|
||||
expected: time.Duration(15 * time.Second),
|
||||
},
|
||||
{
|
||||
name: "returns the default time if no requeueAfterSeconds is provided",
|
||||
baseGenerators: []argoprojiov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Git: gitGenerator,
|
||||
},
|
||||
{
|
||||
PullRequest: pullRequestGenerator,
|
||||
},
|
||||
},
|
||||
expected: time.Duration(30 * time.Minute),
|
||||
},
|
||||
}
|
||||
|
||||
for _, testCase := range testCases {
|
||||
@@ -441,16 +470,18 @@ func TestMatrixGetRequeueAfter(t *testing.T) {
|
||||
|
||||
for _, g := range testCaseCopy.baseGenerators {
|
||||
gitGeneratorSpec := argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
Git: g.Git,
|
||||
List: g.List,
|
||||
Git: g.Git,
|
||||
List: g.List,
|
||||
PullRequest: g.PullRequest,
|
||||
}
|
||||
mock.On("GetRequeueAfter", &gitGeneratorSpec).Return(testCaseCopy.gitGetRequeueAfter, nil)
|
||||
}
|
||||
|
||||
var matrixGenerator = NewMatrixGenerator(
|
||||
map[string]Generator{
|
||||
"Git": mock,
|
||||
"List": &ListGenerator{},
|
||||
"Git": mock,
|
||||
"List": &ListGenerator{},
|
||||
"PullRequest": &PullRequestGenerator{},
|
||||
},
|
||||
)
|
||||
|
||||
@@ -828,3 +859,72 @@ func (g *generatorMock) GetRequeueAfter(appSetGenerator *argoprojiov1alpha1.Appl
|
||||
return args.Get(0).(time.Duration)
|
||||
|
||||
}
|
||||
|
||||
func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
|
||||
// Given a matrix generator over a list generator and a git files generator, the nested git files generator should
|
||||
// be treated as a files generator, and it should produce parameters.
|
||||
|
||||
// This tests for a specific bug where a nested git files generator was being treated as a directory generator. This
|
||||
// happened because, when the matrix generator was being processed, the nested git files generator was being
|
||||
// interpolated by the deeplyReplace function. That function cannot differentiate between a nil slice and an empty
|
||||
// slice. So it was replacing the `Directories` field with an empty slice, which the ApplicationSet controller
|
||||
// interpreted as meaning this was a directory generator, not a files generator.
|
||||
|
||||
// Now instead of checking for nil, we check whether the field is a non-empty slice. This test prevents a regression
|
||||
// of that bug.
|
||||
|
||||
listGeneratorMock := &generatorMock{}
|
||||
listGeneratorMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), mock.AnythingOfType("*v1alpha1.ApplicationSet")).Return([]map[string]interface{}{
|
||||
{"some": "value"},
|
||||
}, nil)
|
||||
listGeneratorMock.On("GetTemplate", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator")).Return(&argoprojiov1alpha1.ApplicationSetTemplate{})
|
||||
|
||||
gitGeneratorSpec := &argoprojiov1alpha1.GitGenerator{
|
||||
RepoURL: "https://git.example.com",
|
||||
Files: []argoprojiov1alpha1.GitFileGeneratorItem{
|
||||
{Path: "some/path.json"},
|
||||
},
|
||||
}
|
||||
|
||||
repoServiceMock := testutils.ArgoCDServiceMock{Mock: &mock.Mock{}}
|
||||
repoServiceMock.Mock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(map[string][]byte{
|
||||
"some/path.json": []byte("test: content"),
|
||||
}, nil)
|
||||
gitGenerator := NewGitGenerator(repoServiceMock)
|
||||
|
||||
matrixGenerator := NewMatrixGenerator(map[string]Generator{
|
||||
"List": listGeneratorMock,
|
||||
"Git": gitGenerator,
|
||||
})
|
||||
|
||||
matrixGeneratorSpec := &argoprojiov1alpha1.MatrixGenerator{
|
||||
Generators: []argoprojiov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
List: &argoprojiov1alpha1.ListGenerator{
|
||||
Elements: []apiextensionsv1.JSON{
|
||||
{
|
||||
Raw: []byte(`{"some": "value"}`),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Git: gitGeneratorSpec,
|
||||
},
|
||||
},
|
||||
}
|
||||
params, err := matrixGenerator.GenerateParams(&argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
Matrix: matrixGeneratorSpec,
|
||||
}, &argoprojiov1alpha1.ApplicationSet{})
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, []map[string]interface{}{{
|
||||
"path": "some",
|
||||
"path.basename": "some",
|
||||
"path.basenameNormalized": "some",
|
||||
"path.filename": "path.json",
|
||||
"path.filenameNormalized": "path.json",
|
||||
"path[0]": "some",
|
||||
"some": "value",
|
||||
"test": "content",
|
||||
}}, params)
|
||||
}
|
||||
|
||||
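The comment in the test above describes the regression being guarded against: a reflect-based deep copy (such as `deeplyReplace`) cannot tell a nil slice apart from an empty one. The following standalone sketch is not part of the diff — the `gitGen` struct is made up for illustration — but it reproduces the behaviour with nothing beyond the standard library:

```go
package main

import (
	"fmt"
	"reflect"
)

type gitGen struct {
	Files       []string
	Directories []string
}

func main() {
	orig := gitGen{Files: []string{"some/path.json"}} // Directories stays nil

	origVal := reflect.ValueOf(orig)
	copyVal := reflect.New(origVal.Type()).Elem()
	for i := 0; i < origVal.NumField(); i++ {
		f := origVal.Field(i)
		// A naive reflect-based deep copy rebuilds every slice, even a nil one,
		// because reflect offers no way to carry "nil-ness" through MakeSlice.
		s := reflect.MakeSlice(f.Type(), f.Len(), f.Len())
		reflect.Copy(s, f)
		copyVal.Field(i).Set(s)
	}

	out := copyVal.Interface().(gitGen)
	fmt.Println(orig.Directories == nil) // true
	fmt.Println(out.Directories == nil)  // false: nil became an empty slice
	// Code that only asks "Directories != nil?" now wrongly sees a directory generator.
}
```

This is why, as the comment says, the fix checks for a non-empty slice rather than for nil.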
@@ -137,27 +137,13 @@ func getParamSetsByMergeKey(mergeKeys []string, paramSets []map[string]interface
|
||||
|
||||
// getParams get the parameters generated by this generator.
|
||||
func (m *MergeGenerator) getParams(appSetBaseGenerator argoprojiov1alpha1.ApplicationSetNestedGenerator, appSet *argoprojiov1alpha1.ApplicationSet) ([]map[string]interface{}, error) {
|
||||
|
||||
var matrix *argoprojiov1alpha1.MatrixGenerator
|
||||
if appSetBaseGenerator.Matrix != nil {
|
||||
nestedMatrix, err := argoprojiov1alpha1.ToNestedMatrixGenerator(appSetBaseGenerator.Matrix)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if nestedMatrix != nil {
|
||||
matrix = nestedMatrix.ToMatrixGenerator()
|
||||
}
|
||||
matrixGen, err := getMatrixGenerator(appSetBaseGenerator)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var mergeGenerator *argoprojiov1alpha1.MergeGenerator
|
||||
if appSetBaseGenerator.Merge != nil {
|
||||
nestedMerge, err := argoprojiov1alpha1.ToNestedMergeGenerator(appSetBaseGenerator.Merge)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if nestedMerge != nil {
|
||||
mergeGenerator = nestedMerge.ToMergeGenerator()
|
||||
}
|
||||
mergeGen, err := getMergeGenerator(appSetBaseGenerator)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
t, err := Transform(
|
||||
@@ -168,8 +154,8 @@ func (m *MergeGenerator) getParams(appSetBaseGenerator argoprojiov1alpha1.Applic
|
||||
SCMProvider: appSetBaseGenerator.SCMProvider,
|
||||
ClusterDecisionResource: appSetBaseGenerator.ClusterDecisionResource,
|
||||
PullRequest: appSetBaseGenerator.PullRequest,
|
||||
Matrix: matrix,
|
||||
Merge: mergeGenerator,
|
||||
Matrix: matrixGen,
|
||||
Merge: mergeGen,
|
||||
Selector: appSetBaseGenerator.Selector,
|
||||
},
|
||||
m.supportedGenerators,
|
||||
@@ -197,10 +183,15 @@ func (m *MergeGenerator) GetRequeueAfter(appSetGenerator *argoprojiov1alpha1.App
|
||||
var found bool
|
||||
|
||||
for _, r := range appSetGenerator.Merge.Generators {
|
||||
matrixGen, _ := getMatrixGenerator(r)
|
||||
mergeGen, _ := getMergeGenerator(r)
|
||||
base := &argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
List: r.List,
|
||||
Clusters: r.Clusters,
|
||||
Git: r.Git,
|
||||
List: r.List,
|
||||
Clusters: r.Clusters,
|
||||
Git: r.Git,
|
||||
PullRequest: r.PullRequest,
|
||||
Matrix: matrixGen,
|
||||
Merge: mergeGen,
|
||||
}
|
||||
generators := GetRelevantGenerators(base, m.supportedGenerators)
|
||||
|
||||
@@ -221,6 +212,17 @@ func (m *MergeGenerator) GetRequeueAfter(appSetGenerator *argoprojiov1alpha1.App
|
||||
|
||||
}
|
||||
|
||||
func getMergeGenerator(r argoprojiov1alpha1.ApplicationSetNestedGenerator) (*argoprojiov1alpha1.MergeGenerator, error) {
|
||||
if r.Merge == nil {
|
||||
return nil, nil
|
||||
}
|
||||
merge, err := argoprojiov1alpha1.ToNestedMergeGenerator(r.Merge)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return merge.ToMergeGenerator(), nil
|
||||
}
|
||||
|
||||
// GetTemplate gets the Template field for the MergeGenerator.
|
||||
func (m *MergeGenerator) GetTemplate(appSetGenerator *argoprojiov1alpha1.ApplicationSetGenerator) *argoprojiov1alpha1.ApplicationSetTemplate {
|
||||
return &appSetGenerator.Merge.Template
|
||||
|
||||
@@ -4,6 +4,7 @@ import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"net/http"
|
||||
pathpkg "path"
|
||||
|
||||
gitlab "github.com/xanzy/go-gitlab"
|
||||
@@ -144,7 +145,11 @@ func (g *GitlabProvider) listBranches(_ context.Context, repo *Repository) ([]gi
|
||||
branches := []gitlab.Branch{}
|
||||
// If we don't specifically want to query for all branches, just use the default branch and call it a day.
|
||||
if !g.allBranches {
|
||||
gitlabBranch, _, err := g.client.Branches.GetBranch(repo.RepositoryId, repo.Branch, nil)
|
||||
gitlabBranch, resp, err := g.client.Branches.GetBranch(repo.RepositoryId, repo.Branch, nil)
|
||||
// 404s are not an error here, just a normal false.
|
||||
if resp != nil && resp.StatusCode == http.StatusNotFound {
|
||||
return []gitlab.Branch{}, nil
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -157,6 +162,10 @@ func (g *GitlabProvider) listBranches(_ context.Context, repo *Repository) ([]gi
|
||||
}
|
||||
for {
|
||||
gitlabBranches, resp, err := g.client.Branches.ListBranches(repo.RepositoryId, opt)
|
||||
// 404s are not an error here, just a normal false.
|
||||
if resp != nil && resp.StatusCode == http.StatusNotFound {
|
||||
return []gitlab.Branch{}, nil
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -274,6 +274,8 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
|
||||
if err != nil {
|
||||
t.Fail()
|
||||
}
|
||||
case "/api/v4/projects/27084533/repository/branches/foo":
|
||||
w.WriteHeader(http.StatusNotFound)
|
||||
default:
|
||||
_, err := io.WriteString(w, `[]`)
|
||||
if err != nil {
|
||||
@@ -391,3 +393,29 @@ func TestGitlabHasPath(t *testing.T) {
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestGitlabGetBranches(t *testing.T) {
|
||||
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
gitlabMockHandler(t)(w, r)
|
||||
}))
|
||||
host, _ := NewGitlabProvider(context.Background(), "test-argocd-proton", "", ts.URL, false, true)
|
||||
|
||||
repo := &Repository{
|
||||
RepositoryId: 27084533,
|
||||
Branch: "master",
|
||||
}
|
||||
t.Run("branch exists", func(t *testing.T) {
|
||||
repos, err := host.GetBranches(context.Background(), repo)
|
||||
assert.Nil(t, err)
|
||||
assert.Equal(t, repos[0].Branch, "master")
|
||||
})
|
||||
|
||||
repo2 := &Repository{
|
||||
RepositoryId: 27084533,
|
||||
Branch: "foo",
|
||||
}
|
||||
t.Run("unknown branch", func(t *testing.T) {
|
||||
_, err := host.GetBranches(context.Background(), repo2)
|
||||
assert.NoError(t, err)
|
||||
})
|
||||
}
|
||||
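The `listBranches` change above treats a 404 from GitLab as "no branches" rather than an error, and the new `TestGitlabGetBranches` exercises exactly that. A rough standalone sketch of the same pattern against the `xanzy/go-gitlab` client follows; the helper name, token and base URL here are illustrative, not taken from the diff:

```go
package main

import (
	"fmt"
	"net/http"
	"os"

	gitlab "github.com/xanzy/go-gitlab"
)

// defaultBranchOrNone mirrors the pattern from listBranches: a 404 on GetBranch
// means the branch simply does not exist, which is not an error condition.
func defaultBranchOrNone(client *gitlab.Client, projectID int, branch string) ([]gitlab.Branch, error) {
	b, resp, err := client.Branches.GetBranch(projectID, branch)
	if resp != nil && resp.StatusCode == http.StatusNotFound {
		return []gitlab.Branch{}, nil // unknown branch: empty result, not an error
	}
	if err != nil {
		return nil, err
	}
	return []gitlab.Branch{*b}, nil
}

func main() {
	// Needs a reachable GitLab instance; credentials come from the environment.
	client, err := gitlab.NewClient(os.Getenv("GITLAB_TOKEN"), gitlab.WithBaseURL(os.Getenv("GITLAB_BASE_URL")))
	if err != nil {
		panic(err)
	}
	branches, err := defaultBranchOrNone(client, 27084533, "foo")
	fmt.Println(len(branches), err)
}
```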
|
||||
261
applicationset/utils/selector.go
Normal file
@@ -0,0 +1,261 @@
|
||||
package utils
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/labels"
|
||||
"k8s.io/apimachinery/pkg/selection"
|
||||
"k8s.io/apimachinery/pkg/util/validation"
|
||||
"k8s.io/apimachinery/pkg/util/validation/field"
|
||||
"k8s.io/klog/v2"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
var (
|
||||
unaryOperators = []string{
|
||||
string(selection.Exists), string(selection.DoesNotExist),
|
||||
}
|
||||
binaryOperators = []string{
|
||||
string(selection.In), string(selection.NotIn),
|
||||
string(selection.Equals), string(selection.DoubleEquals), string(selection.NotEquals),
|
||||
string(selection.GreaterThan), string(selection.LessThan),
|
||||
}
|
||||
validRequirementOperators = append(binaryOperators, unaryOperators...)
|
||||
)
|
||||
|
||||
// Selector represents a label selector.
|
||||
type Selector interface {
|
||||
// Matches returns true if this selector matches the given set of labels.
|
||||
Matches(labels.Labels) bool
|
||||
|
||||
// Add adds requirements to the Selector
|
||||
Add(r ...Requirement) Selector
|
||||
}
|
||||
|
||||
type internalSelector []Requirement
|
||||
|
||||
// ByKey sorts requirements by key to obtain deterministic parser
|
||||
type ByKey []Requirement
|
||||
|
||||
func (a ByKey) Len() int { return len(a) }
|
||||
|
||||
func (a ByKey) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
|
||||
|
||||
func (a ByKey) Less(i, j int) bool { return a[i].key < a[j].key }
|
||||
|
||||
// Matches for a internalSelector returns true if all
|
||||
// its Requirements match the input Labels. If any
|
||||
// Requirement does not match, false is returned.
|
||||
func (s internalSelector) Matches(l labels.Labels) bool {
|
||||
for ix := range s {
|
||||
if matches := s[ix].Matches(l); !matches {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// Add adds requirements to the selector. It copies the current selector returning a new one
|
||||
func (s internalSelector) Add(reqs ...Requirement) Selector {
|
||||
ret := make(internalSelector, 0, len(s)+len(reqs))
|
||||
ret = append(ret, s...)
|
||||
ret = append(ret, reqs...)
|
||||
sort.Sort(ByKey(ret))
|
||||
return ret
|
||||
}
|
||||
|
||||
type nothingSelector struct{}
|
||||
|
||||
func (n nothingSelector) Matches(l labels.Labels) bool {
|
||||
return false
|
||||
}
|
||||
|
||||
func (n nothingSelector) Add(r ...Requirement) Selector {
|
||||
return n
|
||||
}
|
||||
|
||||
// Nothing returns a selector that matches no labels
|
||||
func nothing() Selector {
|
||||
return nothingSelector{}
|
||||
}
|
||||
|
||||
// Everything returns a selector that matches all labels.
|
||||
func everything() Selector {
|
||||
return internalSelector{}
|
||||
}
|
||||
|
||||
// LabelSelectorAsSelector converts the LabelSelector api type into a struct that implements
|
||||
// labels.Selector
|
||||
// Note: This function should be kept in sync with the selector methods in pkg/labels/selector.go
|
||||
func LabelSelectorAsSelector(ps *v1.LabelSelector) (Selector, error) {
|
||||
if ps == nil {
|
||||
return nothing(), nil
|
||||
}
|
||||
if len(ps.MatchLabels)+len(ps.MatchExpressions) == 0 {
|
||||
return everything(), nil
|
||||
}
|
||||
requirements := make([]Requirement, 0, len(ps.MatchLabels)+len(ps.MatchExpressions))
|
||||
for k, v := range ps.MatchLabels {
|
||||
r, err := newRequirement(k, selection.Equals, []string{v})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
requirements = append(requirements, *r)
|
||||
}
|
||||
for _, expr := range ps.MatchExpressions {
|
||||
var op selection.Operator
|
||||
switch expr.Operator {
|
||||
case v1.LabelSelectorOpIn:
|
||||
op = selection.In
|
||||
case v1.LabelSelectorOpNotIn:
|
||||
op = selection.NotIn
|
||||
case v1.LabelSelectorOpExists:
|
||||
op = selection.Exists
|
||||
case v1.LabelSelectorOpDoesNotExist:
|
||||
op = selection.DoesNotExist
|
||||
default:
|
||||
return nil, fmt.Errorf("%q is not a valid pod selector operator", expr.Operator)
|
||||
}
|
||||
r, err := newRequirement(expr.Key, op, append([]string(nil), expr.Values...))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
requirements = append(requirements, *r)
|
||||
}
|
||||
selector := newSelector()
|
||||
selector = selector.Add(requirements...)
|
||||
return selector, nil
|
||||
}
|
||||
|
||||
// NewSelector returns a nil selector
|
||||
func newSelector() Selector {
|
||||
return internalSelector(nil)
|
||||
}
|
||||
|
||||
func validateLabelKey(k string, path *field.Path) *field.Error {
|
||||
if errs := validation.IsQualifiedName(k); len(errs) != 0 {
|
||||
return field.Invalid(path, k, strings.Join(errs, "; "))
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// NewRequirement is the constructor for a Requirement.
|
||||
// If any of these rules is violated, an error is returned:
|
||||
// (1) The operator can only be In, NotIn, Equals, DoubleEquals, Gt, Lt, NotEquals, Exists, or DoesNotExist.
|
||||
// (2) If the operator is In or NotIn, the values set must be non-empty.
|
||||
// (3) If the operator is Equals, DoubleEquals, or NotEquals, the values set must contain one value.
|
||||
// (4) If the operator is Exists or DoesNotExist, the value set must be empty.
|
||||
// (5) If the operator is Gt or Lt, the values set must contain only one value, which will be interpreted as an integer.
|
||||
// (6) The key is invalid due to its length, or sequence
|
||||
//
|
||||
// of characters. See validateLabelKey for more details.
|
||||
//
|
||||
// The empty string is a valid value in the input values set.
|
||||
// Returned error, if not nil, is guaranteed to be an aggregated field.ErrorList
|
||||
func newRequirement(key string, op selection.Operator, vals []string, opts ...field.PathOption) (*Requirement, error) {
|
||||
var allErrs field.ErrorList
|
||||
path := field.ToPath(opts...)
|
||||
if err := validateLabelKey(key, path.Child("key")); err != nil {
|
||||
allErrs = append(allErrs, err)
|
||||
}
|
||||
|
||||
valuePath := path.Child("values")
|
||||
switch op {
|
||||
case selection.In, selection.NotIn:
|
||||
if len(vals) == 0 {
|
||||
allErrs = append(allErrs, field.Invalid(valuePath, vals, "for 'in', 'notin' operators, values set can't be empty"))
|
||||
}
|
||||
case selection.Equals, selection.DoubleEquals, selection.NotEquals:
|
||||
if len(vals) != 1 {
|
||||
allErrs = append(allErrs, field.Invalid(valuePath, vals, "exact-match compatibility requires one single value"))
|
||||
}
|
||||
case selection.Exists, selection.DoesNotExist:
|
||||
if len(vals) != 0 {
|
||||
allErrs = append(allErrs, field.Invalid(valuePath, vals, "values set must be empty for exists and does not exist"))
|
||||
}
|
||||
case selection.GreaterThan, selection.LessThan:
|
||||
if len(vals) != 1 {
|
||||
allErrs = append(allErrs, field.Invalid(valuePath, vals, "for 'Gt', 'Lt' operators, exactly one value is required"))
|
||||
}
|
||||
for i := range vals {
|
||||
if _, err := strconv.ParseInt(vals[i], 10, 64); err != nil {
|
||||
allErrs = append(allErrs, field.Invalid(valuePath.Index(i), vals[i], "for 'Gt', 'Lt' operators, the value must be an integer"))
|
||||
}
|
||||
}
|
||||
default:
|
||||
allErrs = append(allErrs, field.NotSupported(path.Child("operator"), op, validRequirementOperators))
|
||||
}
|
||||
|
||||
return &Requirement{key: key, operator: op, strValues: vals}, allErrs.ToAggregate()
|
||||
}
|
||||
|
||||
// Requirement contains values, a key, and an operator that relates the key and values.
|
||||
// The zero value of Requirement is invalid.
|
||||
// Requirement implements both set based match and exact match
|
||||
// Requirement should be initialized via NewRequirement constructor for creating a valid Requirement.
|
||||
// +k8s:deepcopy-gen=true
|
||||
type Requirement struct {
|
||||
key string
|
||||
operator selection.Operator
|
||||
// In the majority of cases we have at most one value here.
|
||||
// It is generally faster to operate on a single-element slice
|
||||
// than on a single-element map, so we have a slice here.
|
||||
strValues []string
|
||||
}
|
||||
|
||||
func (r *Requirement) hasValue(value string) bool {
|
||||
for i := range r.strValues {
|
||||
if r.strValues[i] == value {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (r *Requirement) Matches(ls labels.Labels) bool {
|
||||
switch r.operator {
|
||||
case selection.In, selection.Equals, selection.DoubleEquals:
|
||||
if !ls.Has(r.key) {
|
||||
return false
|
||||
}
|
||||
return r.hasValue(ls.Get(r.key))
|
||||
case selection.NotIn, selection.NotEquals:
|
||||
if !ls.Has(r.key) {
|
||||
return true
|
||||
}
|
||||
return !r.hasValue(ls.Get(r.key))
|
||||
case selection.Exists:
|
||||
return ls.Has(r.key)
|
||||
case selection.DoesNotExist:
|
||||
return !ls.Has(r.key)
|
||||
case selection.GreaterThan, selection.LessThan:
|
||||
if !ls.Has(r.key) {
|
||||
return false
|
||||
}
|
||||
lsValue, err := strconv.ParseInt(ls.Get(r.key), 10, 64)
|
||||
if err != nil {
|
||||
klog.V(10).Infof("ParseInt failed for value %+v in label %+v, %+v", ls.Get(r.key), ls, err)
|
||||
return false
|
||||
}
|
||||
|
||||
// There should be only one strValue in r.strValues, and can be converted to an integer.
|
||||
if len(r.strValues) != 1 {
|
||||
klog.V(10).Infof("Invalid values count %+v of requirement %#v, for 'Gt', 'Lt' operators, exactly one value is required", len(r.strValues), r)
|
||||
return false
|
||||
}
|
||||
|
||||
var rValue int64
|
||||
for i := range r.strValues {
|
||||
rValue, err = strconv.ParseInt(r.strValues[i], 10, 64)
|
||||
if err != nil {
|
||||
klog.V(10).Infof("ParseInt failed for value %+v in requirement %#v, for 'Gt', 'Lt' operators, the value must be an integer", r.strValues[i], r)
|
||||
return false
|
||||
}
|
||||
}
|
||||
return (r.operator == selection.GreaterThan && lsValue > rValue) || (r.operator == selection.LessThan && lsValue < rValue)
|
||||
default:
|
||||
return false
|
||||
}
|
||||
}
|
||||
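The new `applicationset/utils/selector.go` above is a trimmed copy of the upstream Kubernetes label-selector logic and exposes `LabelSelectorAsSelector` for matching a parameter set's labels against a `metav1.LabelSelector`. A small usage sketch, assuming the import path `github.com/argoproj/argo-cd/v2/applicationset/utils` (not stated explicitly in the diff):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"

	"github.com/argoproj/argo-cd/v2/applicationset/utils"
)

func main() {
	ls := &metav1.LabelSelector{
		MatchLabels: map[string]string{"env": "staging"},
		MatchExpressions: []metav1.LabelSelectorRequirement{
			{Key: "region", Operator: metav1.LabelSelectorOpIn, Values: []string{"us-east-1", "eu-west-1"}},
		},
	}

	sel, err := utils.LabelSelectorAsSelector(ls)
	if err != nil {
		panic(err)
	}

	// labels.Set satisfies the labels.Labels interface used by Matches.
	fmt.Println(sel.Matches(labels.Set{"env": "staging", "region": "us-east-1"})) // true
	fmt.Println(sel.Matches(labels.Set{"env": "staging", "region": "ap-south-1"})) // false
	fmt.Println(sel.Matches(labels.Set{"env": "prod"}))                            // false
}
```

Note that, unlike the upstream helper, a nil `LabelSelector` here yields a selector that matches nothing.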
34
applicationset/utils/test/testutils.go
Normal file
@@ -0,0 +1,34 @@
package test

import (
	"context"

	"github.com/stretchr/testify/mock"
)

type ArgoCDServiceMock struct {
	Mock *mock.Mock
}

func (a ArgoCDServiceMock) GetApps(ctx context.Context, repoURL string, revision string) ([]string, error) {
	args := a.Mock.Called(ctx, repoURL, revision)

	return args.Get(0).([]string), args.Error(1)
}

func (a ArgoCDServiceMock) GetFiles(ctx context.Context, repoURL string, revision string, pattern string) (map[string][]byte, error) {
	args := a.Mock.Called(ctx, repoURL, revision, pattern)

	return args.Get(0).(map[string][]byte), args.Error(1)
}

func (a ArgoCDServiceMock) GetFileContent(ctx context.Context, repoURL string, revision string, path string) ([]byte, error) {
	args := a.Mock.Called(ctx, repoURL, revision, path)

	return args.Get(0).([]byte), args.Error(1)
}

func (a ArgoCDServiceMock) GetDirectories(ctx context.Context, repoURL string, revision string) ([]string, error) {
	args := a.Mock.Called(ctx, repoURL, revision)
	return args.Get(0).([]string), args.Error(1)
}
|
||||
@@ -96,6 +96,25 @@ func (r *Render) deeplyReplace(copy, original reflect.Value, replaceMap map[stri
|
||||
// specific case time
|
||||
if currentType == "time.Time" {
|
||||
copy.Field(i).Set(original.Field(i))
|
||||
} else if currentType == "Raw.k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1" {
|
||||
var unmarshaled interface{}
|
||||
originalBytes := original.Field(i).Bytes()
|
||||
err := json.Unmarshal(originalBytes, &unmarshaled)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to unmarshal JSON field: %w", err)
|
||||
}
|
||||
jsonOriginal := reflect.ValueOf(&unmarshaled)
|
||||
jsonCopy := reflect.New(jsonOriginal.Type()).Elem()
|
||||
err = r.deeplyReplace(jsonCopy, jsonOriginal, replaceMap, useGoTemplate)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to deeply replace JSON field contents: %w", err)
|
||||
}
|
||||
jsonCopyInterface := jsonCopy.Interface().(*interface{})
|
||||
data, err := json.Marshal(jsonCopyInterface)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to marshal templated JSON field: %w", err)
|
||||
}
|
||||
copy.Field(i).Set(reflect.ValueOf(data))
|
||||
} else if err := r.deeplyReplace(copy.Field(i), original.Field(i), replaceMap, useGoTemplate); err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -174,7 +193,7 @@ func (r *Render) deeplyReplace(copy, original reflect.Value, replaceMap map[stri
|
||||
|
||||
func (r *Render) RenderTemplateParams(tmpl *argoappsv1.Application, syncPolicy *argoappsv1.ApplicationSetSyncPolicy, params map[string]interface{}, useGoTemplate bool) (*argoappsv1.Application, error) {
|
||||
if tmpl == nil {
|
||||
return nil, fmt.Errorf("application template is empty ")
|
||||
return nil, fmt.Errorf("application template is empty")
|
||||
}
|
||||
|
||||
if len(params) == 0 {
|
||||
@@ -204,6 +223,27 @@ func (r *Render) RenderTemplateParams(tmpl *argoappsv1.Application, syncPolicy *
|
||||
return replacedTmpl, nil
|
||||
}
|
||||
|
||||
func (r *Render) RenderGeneratorParams(gen *argoappsv1.ApplicationSetGenerator, params map[string]interface{}, useGoTemplate bool) (*argoappsv1.ApplicationSetGenerator, error) {
|
||||
if gen == nil {
|
||||
return nil, fmt.Errorf("generator is empty")
|
||||
}
|
||||
|
||||
if len(params) == 0 {
|
||||
return gen, nil
|
||||
}
|
||||
|
||||
original := reflect.ValueOf(gen)
|
||||
copy := reflect.New(original.Type()).Elem()
|
||||
|
||||
if err := r.deeplyReplace(copy, original, params, useGoTemplate); err != nil {
|
||||
return nil, fmt.Errorf("failed to replace parameters in generator: %w", err)
|
||||
}
|
||||
|
||||
replacedGen := copy.Interface().(*argoappsv1.ApplicationSetGenerator)
|
||||
|
||||
return replacedGen, nil
|
||||
}
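Two of the additions above work together: `deeplyReplace` now descends into apiextensions raw-JSON fields, and the new `RenderGeneratorParams` applies that replacement to a whole generator. A hedged end-to-end sketch follows; the import path for the `utils` package is assumed from the rest of the diff, and the expected output mirrors the `templated: test-app1` case in the matrix-generator test:

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"

	"github.com/argoproj/argo-cd/v2/applicationset/utils"
	argoprojiov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)

func main() {
	gen := &argoprojiov1alpha1.ApplicationSetGenerator{
		List: &argoprojiov1alpha1.ListGenerator{
			Elements: []apiextensionsv1.JSON{
				{Raw: []byte(`{"cluster": "Cluster", "templated": "test-{{path.basenameNormalized}}"}`)},
			},
		},
	}

	params := map[string]interface{}{"path.basenameNormalized": "app1"}

	r := &utils.Render{}
	rendered, err := r.RenderGeneratorParams(gen, params, false) // false = fasttemplate-style {{...}} substitution
	if err != nil {
		panic(err)
	}

	// Should now read roughly {"cluster":"Cluster","templated":"test-app1"}.
	fmt.Println(string(rendered.List.Elements[0].Raw))
}
```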
|
||||
|
||||
var isTemplatedRegex = regexp.MustCompile(".*{{.*}}.*")
|
||||
|
||||
// Replace executes basic string substitution of a template with replacement values.
|
||||
@@ -227,7 +267,10 @@ func (r *Render) Replace(tmpl string, replaceMap map[string]interface{}, useGoTe
|
||||
return tmpl, nil
|
||||
}
|
||||
|
||||
fstTmpl := fasttemplate.New(tmpl, "{{", "}}")
|
||||
fstTmpl, err := fasttemplate.NewTemplate(tmpl, "{{", "}}")
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("invalid template: %w", err)
|
||||
}
|
||||
replacedTmpl := fstTmpl.ExecuteFuncString(func(w io.Writer, tag string) (int, error) {
|
||||
trimmedTag := strings.TrimSpace(tag)
|
||||
replacement, ok := replaceMap[trimmedTag].(string)
|
||||
|
||||
@@ -464,6 +464,14 @@ func TestRenderTemplateParamsGoTemplate(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func Test_Render_Replace_no_panic_on_missing_closing_brace(t *testing.T) {
|
||||
r := &Render{}
|
||||
assert.NotPanics(t, func() {
|
||||
_, err := r.Replace("{{properly.closed}} {{improperly.closed}", nil, false)
|
||||
assert.Error(t, err)
|
||||
})
|
||||
}
|
||||
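`Render.Replace` now builds its template with `fasttemplate.NewTemplate`, which reports a malformed tag as an error, instead of `fasttemplate.New`, which panics — that is what the regression test above pins down. A minimal standalone sketch of the difference (not taken from the diff):

```go
package main

import (
	"fmt"

	"github.com/valyala/fasttemplate"
)

func main() {
	const tmpl = "{{properly.closed}} {{improperly.closed"

	// NewTemplate surfaces the unterminated tag as an error the caller can handle.
	if _, err := fasttemplate.NewTemplate(tmpl, "{{", "}}"); err != nil {
		fmt.Println("NewTemplate returned an error:", err)
	}

	// New panics on the same input, which is what Replace previously risked.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("New panicked:", r)
		}
	}()
	_ = fasttemplate.New(tmpl, "{{", "}}")
}
```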
|
||||
func TestRenderTemplateKeys(t *testing.T) {
|
||||
t.Run("fasttemplate", func(t *testing.T) {
|
||||
application := &argoappsv1.Application{
|
||||
|
||||
@@ -271,6 +271,16 @@
|
||||
"description": "the application's namespace.",
|
||||
"name": "appNamespace",
|
||||
"in": "query"
|
||||
},
|
||||
{
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "string"
|
||||
},
|
||||
"collectionFormat": "multi",
|
||||
"description": "the project names to restrict returned list applications (legacy name for backwards-compatibility).",
|
||||
"name": "project",
|
||||
"in": "query"
|
||||
}
|
||||
],
|
||||
"responses": {
|
||||
@@ -585,6 +595,16 @@
|
||||
"description": "the application's namespace.",
|
||||
"name": "appNamespace",
|
||||
"in": "query"
|
||||
},
|
||||
{
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "string"
|
||||
},
|
||||
"collectionFormat": "multi",
|
||||
"description": "the project names to restrict returned list applications (legacy name for backwards-compatibility).",
|
||||
"name": "project",
|
||||
"in": "query"
|
||||
}
|
||||
],
|
||||
"responses": {
|
||||
@@ -3430,6 +3450,12 @@
|
||||
"description": "Google Cloud Platform service account key.",
|
||||
"name": "gcpServiceAccountKey",
|
||||
"in": "query"
|
||||
},
|
||||
{
|
||||
"type": "boolean",
|
||||
"description": "Whether to force HTTP basic auth.",
|
||||
"name": "forceHttpBasicAuth",
|
||||
"in": "query"
|
||||
}
|
||||
],
|
||||
"responses": {
|
||||
@@ -3588,6 +3614,29 @@
|
||||
}
|
||||
}
|
||||
},
|
||||
"/api/v1/settings/plugins": {
|
||||
"get": {
|
||||
"tags": [
|
||||
"SettingsService"
|
||||
],
|
||||
"summary": "Get returns Argo CD plugins",
|
||||
"operationId": "SettingsService_GetPlugins",
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "A successful response.",
|
||||
"schema": {
|
||||
"$ref": "#/definitions/clusterSettingsPluginsResponse"
|
||||
}
|
||||
},
|
||||
"default": {
|
||||
"description": "An unexpected error response.",
|
||||
"schema": {
|
||||
"$ref": "#/definitions/runtimeError"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"/api/v1/stream/applications": {
|
||||
"get": {
|
||||
"tags": [
|
||||
@@ -3641,6 +3690,16 @@
|
||||
"description": "the application's namespace.",
|
||||
"name": "appNamespace",
|
||||
"in": "query"
|
||||
},
|
||||
{
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "string"
|
||||
},
|
||||
"collectionFormat": "multi",
|
||||
"description": "the project names to restrict returned list applications (legacy name for backwards-compatibility).",
|
||||
"name": "project",
|
||||
"in": "query"
|
||||
}
|
||||
],
|
||||
"responses": {
|
||||
@@ -4342,6 +4401,17 @@
|
||||
}
|
||||
}
|
||||
},
|
||||
"clusterSettingsPluginsResponse": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"plugins": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"$ref": "#/definitions/clusterPlugin"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"gpgkeyGnuPGPublicKeyCreateResponse": {
|
||||
"type": "object",
|
||||
"title": "Response to a public key creation request",
|
||||
@@ -5668,6 +5738,10 @@
|
||||
"status": {
|
||||
"type": "string",
|
||||
"title": "Status contains the AppSet's perceived status of the managed Application resource: (Waiting, Pending, Progressing, Healthy)"
|
||||
},
|
||||
"step": {
|
||||
"type": "string",
|
||||
"title": "Step tracks which step this Application should be updated in"
|
||||
}
|
||||
}
|
||||
},
|
||||
@@ -7304,6 +7378,10 @@
|
||||
"type": "boolean",
|
||||
"title": "EnableOCI specifies whether helm-oci support should be enabled for this repo"
|
||||
},
|
||||
"forceHttpBasicAuth": {
|
||||
"type": "boolean",
|
||||
"title": "ForceHttpBasicAuth specifies whether Argo CD should attempt to force basic auth for HTTP connections"
|
||||
},
|
||||
"gcpServiceAccountKey": {
|
||||
"type": "string",
|
||||
"title": "GCPServiceAccountKey specifies the service account key in JSON format to be used for getting credentials to Google Cloud Source repos"
|
||||
@@ -7390,6 +7468,10 @@
|
||||
"type": "boolean",
|
||||
"title": "EnableOCI specifies whether helm-oci support should be enabled for this repo"
|
||||
},
|
||||
"forceHttpBasicAuth": {
|
||||
"type": "boolean",
|
||||
"title": "ForceHttpBasicAuth specifies whether Argo CD should attempt to force basic auth for HTTP connections"
|
||||
},
|
||||
"gcpServiceAccountKey": {
|
||||
"type": "string",
|
||||
"title": "GCPServiceAccountKey specifies the service account key in JSON format to be used for getting credentials to Google Cloud Source repos"
|
||||
|
||||
@@ -46,17 +46,17 @@ func getSubmoduleEnabled() bool {
|
||||
|
||||
func NewCommand() *cobra.Command {
|
||||
var (
|
||||
clientConfig clientcmd.ClientConfig
|
||||
metricsAddr string
|
||||
probeBindAddr string
|
||||
webhookAddr string
|
||||
enableLeaderElection bool
|
||||
namespace string
|
||||
argocdRepoServer string
|
||||
policy string
|
||||
debugLog bool
|
||||
dryRun bool
|
||||
enableProgressiveRollouts bool
|
||||
clientConfig clientcmd.ClientConfig
|
||||
metricsAddr string
|
||||
probeBindAddr string
|
||||
webhookAddr string
|
||||
enableLeaderElection bool
|
||||
namespace string
|
||||
argocdRepoServer string
|
||||
policy string
|
||||
debugLog bool
|
||||
dryRun bool
|
||||
enableProgressiveSyncs bool
|
||||
)
|
||||
scheme := runtime.NewScheme()
|
||||
_ = clientgoscheme.AddToScheme(scheme)
|
||||
@@ -169,16 +169,16 @@ func NewCommand() *cobra.Command {
|
||||
|
||||
go func() { errors.CheckError(askPassServer.Run(askpass.SocketPath)) }()
|
||||
if err = (&controllers.ApplicationSetReconciler{
|
||||
Generators: topLevelGenerators,
|
||||
Client: mgr.GetClient(),
|
||||
Scheme: mgr.GetScheme(),
|
||||
Recorder: mgr.GetEventRecorderFor("applicationset-controller"),
|
||||
Renderer: &utils.Render{},
|
||||
Policy: policyObj,
|
||||
ArgoAppClientset: appSetConfig,
|
||||
KubeClientset: k8sClient,
|
||||
ArgoDB: argoCDDB,
|
||||
EnableProgressiveRollouts: enableProgressiveRollouts,
|
||||
Generators: topLevelGenerators,
|
||||
Client: mgr.GetClient(),
|
||||
Scheme: mgr.GetScheme(),
|
||||
Recorder: mgr.GetEventRecorderFor("applicationset-controller"),
|
||||
Renderer: &utils.Render{},
|
||||
Policy: policyObj,
|
||||
ArgoAppClientset: appSetConfig,
|
||||
KubeClientset: k8sClient,
|
||||
ArgoDB: argoCDDB,
|
||||
EnableProgressiveSyncs: enableProgressiveSyncs,
|
||||
}).SetupWithManager(mgr); err != nil {
|
||||
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
|
||||
os.Exit(1)
|
||||
@@ -207,7 +207,7 @@ func NewCommand() *cobra.Command {
|
||||
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", env.StringFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_LOGFORMAT", "text"), "Set the logging format. One of: text|json")
|
||||
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", env.StringFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_LOGLEVEL", "info"), "Set the logging level. One of: debug|info|warn|error")
|
||||
command.Flags().BoolVar(&dryRun, "dry-run", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_DRY_RUN", false), "Enable dry run mode")
|
||||
command.Flags().BoolVar(&enableProgressiveRollouts, "enable-progressive-rollouts", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_ENABLE_PROGRESSIVE_ROLLOUTS", false), "Enable use of the experimental progressive rollouts feature.")
|
||||
command.Flags().BoolVar(&enableProgressiveSyncs, "enable-progressive-syncs", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS", false), "Enable use of the experimental progressive syncs feature.")
|
||||
return &command
|
||||
}
|
||||
|
||||
|
||||
@@ -84,6 +84,8 @@ func NewCommand() *cobra.Command {
|
||||
allowOutOfBoundsSymlinks bool
|
||||
streamedManifestMaxTarSize string
|
||||
streamedManifestMaxExtractedSize string
|
||||
helmManifestMaxExtractedSize string
|
||||
disableManifestMaxExtractedSize bool
|
||||
)
|
||||
var command = cobra.Command{
|
||||
Use: cliName,
|
||||
@@ -122,6 +124,9 @@ func NewCommand() *cobra.Command {
|
||||
streamedManifestMaxExtractedSizeQuantity, err := resource.ParseQuantity(streamedManifestMaxExtractedSize)
|
||||
errors.CheckError(err)
|
||||
|
||||
helmManifestMaxExtractedSizeQuantity, err := resource.ParseQuantity(helmManifestMaxExtractedSize)
|
||||
errors.CheckError(err)
|
||||
|
||||
askPassServer := askpass.NewServer()
|
||||
metricsServer := metrics.NewMetricsServer()
|
||||
cacheutil.CollectMetrics(redisClient, metricsServer)
|
||||
@@ -136,6 +141,7 @@ func NewCommand() *cobra.Command {
|
||||
AllowOutOfBoundsSymlinks: allowOutOfBoundsSymlinks,
|
||||
StreamedManifestMaxExtractedSize: streamedManifestMaxExtractedSizeQuantity.ToDec().Value(),
|
||||
StreamedManifestMaxTarSize: streamedManifestMaxTarSizeQuantity.ToDec().Value(),
|
||||
HelmManifestMaxExtractedSize: helmManifestMaxExtractedSizeQuantity.ToDec().Value(),
|
||||
}, askPassServer)
|
||||
errors.CheckError(err)
|
||||
|
||||
@@ -216,6 +222,8 @@ func NewCommand() *cobra.Command {
|
||||
command.Flags().BoolVar(&allowOutOfBoundsSymlinks, "allow-oob-symlinks", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_ALLOW_OUT_OF_BOUNDS_SYMLINKS", false), "Allow out-of-bounds symlinks in repositories (not recommended)")
|
||||
command.Flags().StringVar(&streamedManifestMaxTarSize, "streamed-manifest-max-tar-size", env.StringFromEnv("ARGOCD_REPO_SERVER_STREAMED_MANIFEST_MAX_TAR_SIZE", "100M"), "Maximum size of streamed manifest archives")
|
||||
command.Flags().StringVar(&streamedManifestMaxExtractedSize, "streamed-manifest-max-extracted-size", env.StringFromEnv("ARGOCD_REPO_SERVER_STREAMED_MANIFEST_MAX_EXTRACTED_SIZE", "1G"), "Maximum size of streamed manifest archives when extracted")
|
||||
command.Flags().StringVar(&helmManifestMaxExtractedSize, "helm-manifest-max-extracted-size", env.StringFromEnv("ARGOCD_REPO_SERVER_HELM_MANIFEST_MAX_EXTRACTED_SIZE", "1G"), "Maximum size of helm manifest archives when extracted")
|
||||
command.Flags().BoolVar(&disableManifestMaxExtractedSize, "disable-helm-manifest-max-extracted-size", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_DISABLE_HELM_MANIFEST_MAX_EXTRACTED_SIZE", false), "Disable maximum size of helm manifest archives when extracted")
|
||||
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
|
||||
cacheSrc = reposervercache.AddCacheFlagsToCmd(&command, func(client *redis.Client) {
|
||||
redisClient = client
|
||||
|
||||
@@ -9,13 +9,16 @@ import (
|
||||
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/initialize"
|
||||
"github.com/argoproj/argo-cd/v2/common"
|
||||
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
|
||||
"github.com/argoproj/argo-cd/v2/util/cache"
|
||||
"github.com/argoproj/argo-cd/v2/util/env"
|
||||
"github.com/argoproj/argo-cd/v2/util/errors"
|
||||
)
|
||||
|
||||
func NewDashboardCommand() *cobra.Command {
|
||||
var (
|
||||
port int
|
||||
address string
|
||||
port int
|
||||
address string
|
||||
compressionStr string
|
||||
)
|
||||
cmd := &cobra.Command{
|
||||
Use: "dashboard",
|
||||
@@ -23,7 +26,9 @@ func NewDashboardCommand() *cobra.Command {
|
||||
Run: func(cmd *cobra.Command, args []string) {
|
||||
ctx := cmd.Context()
|
||||
|
||||
errors.CheckError(headless.StartLocalServer(ctx, &argocdclient.ClientOptions{Core: true}, initialize.RetrieveContextIfChanged(cmd.Flag("context")), &port, &address))
|
||||
compression, err := cache.CompressionTypeFromString(compressionStr)
|
||||
errors.CheckError(err)
|
||||
errors.CheckError(headless.StartLocalServer(ctx, &argocdclient.ClientOptions{Core: true}, initialize.RetrieveContextIfChanged(cmd.Flag("context")), &port, &address, compression))
|
||||
println(fmt.Sprintf("Argo CD UI is available at http://%s:%d", address, port))
|
||||
<-ctx.Done()
|
||||
},
|
||||
@@ -31,5 +36,6 @@ func NewDashboardCommand() *cobra.Command {
|
||||
initialize.InitCommand(cmd)
|
||||
cmd.Flags().IntVar(&port, "port", common.DefaultPortAPIServer, "Listen on given port")
|
||||
cmd.Flags().StringVar(&address, "address", common.DefaultAddressAPIServer, "Listen on given address")
|
||||
cmd.Flags().StringVar(&compressionStr, "redis-compress", env.StringFromEnv("REDIS_COMPRESSION", string(cache.RedisCompressionNone)), "Enable this if the application controller is configured with redis compression enabled. (possible values: none, gzip)")
|
||||
return cmd
|
||||
}
|
||||
|
||||
@@ -206,7 +206,7 @@ var validatorsByGroup = map[string]settingValidator{
|
||||
}
|
||||
ssoProvider = "Dex"
|
||||
} else if general.OIDCConfigRAW != "" {
|
||||
if _, err := settings.UnmarshalOIDCConfig(general.OIDCConfigRAW); err != nil {
|
||||
if err := settings.ValidateOIDCConfig(general.OIDCConfigRAW); err != nil {
|
||||
return "", fmt.Errorf("invalid oidc.config: %v", err)
|
||||
}
|
||||
ssoProvider = "OIDC"
|
||||
|
||||
@@ -4,6 +4,7 @@ import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"strings"
|
||||
|
||||
"github.com/ghodss/yaml"
|
||||
log "github.com/sirupsen/logrus"
|
||||
@@ -373,6 +374,9 @@ func resolveRBACResourceName(name string) string {
|
||||
|
||||
// isValidRBACAction checks whether a given action is a valid RBAC action
|
||||
func isValidRBACAction(action string) bool {
|
||||
if strings.HasPrefix(action, rbacpolicy.ActionAction+"/") {
|
||||
return true
|
||||
}
|
||||
_, ok := validRBACActions[action]
|
||||
return ok
|
||||
}
|
||||
|
||||
@@ -27,6 +27,11 @@ func Test_isValidRBACAction(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
func Test_isValidRBACAction_ActionAction(t *testing.T) {
|
||||
ok := isValidRBACAction("action/apps/Deployment/restart")
|
||||
assert.True(t, ok)
|
||||
}
|
||||
|
||||
func Test_isValidRBACResource(t *testing.T) {
|
||||
for k := range validRBACResources {
|
||||
t.Run(k, func(t *testing.T) {
|
||||
|
||||
@@ -33,7 +33,6 @@ import (
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/headless"
|
||||
cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
|
||||
argocommon "github.com/argoproj/argo-cd/v2/common"
|
||||
"github.com/argoproj/argo-cd/v2/controller"
|
||||
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
|
||||
"github.com/argoproj/argo-cd/v2/pkg/apiclient/application"
|
||||
@@ -150,9 +149,6 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
|
||||
c.HelpFunc()(c, args)
|
||||
os.Exit(1)
|
||||
}
|
||||
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
|
||||
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
|
||||
}
|
||||
if appNamespace != "" {
|
||||
app.Namespace = appNamespace
|
||||
}
|
||||
@@ -169,7 +165,9 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
|
||||
|
||||
// Get app before creating to see if it is being updated or no change
|
||||
existing, err := appIf.Get(ctx, &applicationpkg.ApplicationQuery{Name: &app.Name})
|
||||
if grpc.UnwrapGRPCStatus(err).Code() != codes.NotFound {
|
||||
unwrappedError := grpc.UnwrapGRPCStatus(err).Code()
|
||||
// As part of the fix for CVE-2022-41354, the API will return Permission Denied when an app does not exist.
|
||||
if unwrappedError != codes.NotFound && unwrappedError != codes.PermissionDenied {
|
||||
errors.CheckError(err)
|
||||
}
|
||||
|
||||
@@ -294,10 +292,6 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
|
||||
})
|
||||
errors.CheckError(err)
|
||||
|
||||
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
|
||||
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
|
||||
}
|
||||
|
||||
pConn, projIf := headless.NewClientOrDie(clientOpts, c).NewProjectClientOrDie()
|
||||
defer argoio.Close(pConn)
|
||||
proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: app.Spec.Project})
|
||||
@@ -625,10 +619,6 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
|
||||
app, err := appIf.Get(ctx, &applicationpkg.ApplicationQuery{Name: &appName, AppNamespace: &appNs})
|
||||
errors.CheckError(err)
|
||||
|
||||
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
|
||||
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
|
||||
}
|
||||
|
||||
visited := cmdutil.SetAppSpecOptions(c.Flags(), &app.Spec, &appOpts)
|
||||
if visited == 0 {
|
||||
log.Error("Please set at least one option to update")
|
||||
@@ -693,10 +683,6 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
|
||||
app, err := appIf.Get(ctx, &applicationpkg.ApplicationQuery{Name: &appName, AppNamespace: &appNs})
|
||||
errors.CheckError(err)
|
||||
|
||||
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
|
||||
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
|
||||
}
|
||||
|
||||
source := app.Spec.GetSource()
|
||||
updated, nothingToUnset := unset(&source, opts)
|
||||
if nothingToUnset {
|
||||
@@ -933,10 +919,6 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
})
|
||||
errors.CheckError(err)
|
||||
|
||||
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
|
||||
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
|
||||
}
|
||||
|
||||
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName, AppNamespace: &appNs})
|
||||
errors.CheckError(err)
|
||||
conn, settingsIf := clientset.NewSettingsClientOrDie()
|
||||
@@ -967,7 +949,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
|
||||
diffOption.serversideRes = res
|
||||
} else {
|
||||
fmt.Fprintf(os.Stderr, "Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.6.")
|
||||
fmt.Fprintf(os.Stderr, "Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.")
|
||||
conn, clusterIf := clientset.NewClusterClientOrDie()
|
||||
defer argoio.Close(conn)
|
||||
cluster, err := clusterIf.Get(ctx, &clusterpkg.ClusterQuery{Name: app.Spec.Destination.Name, Server: app.Spec.Destination.Server})
|
||||
@@ -1021,7 +1003,7 @@ func findandPrintDiff(ctx context.Context, app *argoappv1.Application, resources
|
||||
unstructureds = append(unstructureds, obj)
|
||||
}
|
||||
groupedObjs := groupObjsByKey(unstructureds, liveObjs, app.Spec.Destination.Namespace)
|
||||
items = groupObjsForDiff(resources, groupedObjs, items, argoSettings, app.Name)
|
||||
items = groupObjsForDiff(resources, groupedObjs, items, argoSettings, app.InstanceName(argoSettings.ControllerNamespace))
|
||||
} else if diffOptions.serversideRes != nil {
|
||||
var unstructureds []*unstructured.Unstructured
|
||||
for _, mfst := range diffOptions.serversideRes.Manifests {
|
||||
@@ -1030,7 +1012,7 @@ func findandPrintDiff(ctx context.Context, app *argoappv1.Application, resources
|
||||
unstructureds = append(unstructureds, obj)
|
||||
}
|
||||
groupedObjs := groupObjsByKey(unstructureds, liveObjs, app.Spec.Destination.Namespace)
|
||||
items = groupObjsForDiff(resources, groupedObjs, items, argoSettings, app.Name)
|
||||
items = groupObjsForDiff(resources, groupedObjs, items, argoSettings, app.InstanceName(argoSettings.ControllerNamespace))
|
||||
} else {
|
||||
for i := range resources.Items {
|
||||
res := resources.Items[i]
|
||||
@@ -1301,17 +1283,6 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
if cluster != "" {
|
||||
appList = argo.FilterByCluster(appList, cluster)
|
||||
}
|
||||
var appsWithDeprecatedPlugins []string
|
||||
for _, app := range appList {
|
||||
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
|
||||
appsWithDeprecatedPlugins = append(appsWithDeprecatedPlugins, app.Name)
|
||||
}
|
||||
}
|
||||
|
||||
if len(appsWithDeprecatedPlugins) > 0 {
|
||||
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
|
||||
log.Warnf("The following Applications use deprecated plugins: %s", strings.Join(appsWithDeprecatedPlugins, ", "))
|
||||
}
|
||||
|
||||
switch output {
|
||||
case "yaml", "json":
|
||||
@@ -1475,7 +1446,7 @@ func NewApplicationWaitCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
}
|
||||
}
|
||||
for _, appName := range appNames {
|
||||
_, err := waitOnApplicationStatus(ctx, acdClient, appName, timeout, watch, selectedResources)
|
||||
_, _, err := waitOnApplicationStatus(ctx, acdClient, appName, timeout, watch, selectedResources)
|
||||
errors.CheckError(err)
|
||||
}
|
||||
},
|
||||
@@ -1636,15 +1607,18 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
errors.CheckError(err)
|
||||
|
||||
if app.Spec.HasMultipleSources() {
|
||||
log.Fatal("argocd cli does not work on multi-source app")
|
||||
return
|
||||
if revision != "" {
|
||||
log.Fatal("argocd cli does not work on multi-source app with --revision flag")
|
||||
return
|
||||
}
|
||||
|
||||
if local != "" {
|
||||
log.Fatal("argocd cli does not work on multi-source app with --local flag")
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
if local != "" {
|
||||
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
|
||||
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
|
||||
}
|
||||
|
||||
if app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.Automated != nil && !dryRun {
|
||||
log.Fatal("Cannot use local sync when Automatic Sync Policy is enabled except with --dry-run")
|
||||
}
|
||||
@@ -1718,10 +1692,6 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
}
|
||||
}
|
||||
if diffChanges {
|
||||
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
|
||||
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
|
||||
}
|
||||
|
||||
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{
|
||||
ApplicationName: &appName,
|
||||
AppNamespace: &appNs,
|
||||
@@ -1749,15 +1719,15 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
errors.CheckError(err)
|
||||
|
||||
if !async {
|
||||
app, err := waitOnApplicationStatus(ctx, acdClient, appQualifiedName, timeout, watchOpts{operation: true}, selectedResources)
|
||||
app, opState, err := waitOnApplicationStatus(ctx, acdClient, appQualifiedName, timeout, watchOpts{operation: true}, selectedResources)
|
||||
errors.CheckError(err)
|
||||
|
||||
if !dryRun {
|
||||
if !app.Status.OperationState.Phase.Successful() {
|
||||
log.Fatalf("Operation has completed with phase: %s", app.Status.OperationState.Phase)
|
||||
if !opState.Phase.Successful() {
|
||||
log.Fatalf("Operation has completed with phase: %s", opState.Phase)
|
||||
} else if len(selectedResources) == 0 && app.Status.Sync.Status != argoappv1.SyncStatusCodeSynced {
|
||||
// Only get resources to be pruned if sync was application-wide and final status is not synced
|
||||
pruningRequired := app.Status.OperationState.SyncResult.Resources.PruningRequired()
|
||||
pruningRequired := opState.SyncResult.Resources.PruningRequired()
|
||||
if pruningRequired > 0 {
|
||||
log.Fatalf("%d resources require pruning", pruningRequired)
|
||||
}
|
||||
@@ -1957,7 +1927,10 @@ func checkResourceStatus(watch watchOpts, healthStatus string, syncStatus string
|
||||
|
||||
const waitFormatString = "%s\t%5s\t%10s\t%10s\t%20s\t%8s\t%7s\t%10s\t%s\n"
|
||||
|
||||
func waitOnApplicationStatus(ctx context.Context, acdClient argocdclient.Client, appName string, timeout uint, watch watchOpts, selectedResources []*argoappv1.SyncOperationResource) (*argoappv1.Application, error) {
|
||||
// waitOnApplicationStatus watches an application and blocks until either the desired watch conditions
|
||||
// are fulfilled or we reach the timeout. Returns the app once the desired conditions have been fulfilled.
|
||||
// Additionally returns the operationState at the time of fulfillment (which may differ from that of the returned app).
|
||||
func waitOnApplicationStatus(ctx context.Context, acdClient argocdclient.Client, appName string, timeout uint, watch watchOpts, selectedResources []*argoappv1.SyncOperationResource) (*argoappv1.Application, *argoappv1.OperationState, error) {
|
||||
ctx, cancel := context.WithCancel(ctx)
|
||||
defer cancel()
|
||||
|
||||
@@ -2015,10 +1988,20 @@ func waitOnApplicationStatus(ctx context.Context, acdClient argocdclient.Client,
|
||||
AppNamespace: &appNs,
|
||||
})
|
||||
errors.CheckError(err)
|
||||
|
||||
// printFinalStatus() will refresh and update the app object, potentially causing the app's
|
||||
// status.operationState to be different than the version when we break out of the event loop.
|
||||
// This means the app.status is unreliable for determining the final state of the operation.
|
||||
// finalOperationState captures the operationState as it was seen when we met the conditions of
|
||||
// the wait, so the caller can rely on it to determine the outcome of the operation.
|
||||
// See: https://github.com/argoproj/argo-cd/issues/5592
|
||||
finalOperationState := app.Status.OperationState
|
||||
|
||||
appEventCh := acdClient.WatchApplicationWithRetry(ctx, appName, app.ResourceVersion)
|
||||
for appEvent := range appEventCh {
|
||||
app = &appEvent.Application
|
||||
|
||||
finalOperationState = app.Status.OperationState
|
||||
operationInProgress := false
|
||||
// consider the operation is in progress
|
||||
if app.Operation != nil {
|
||||
@@ -2056,7 +2039,7 @@ func waitOnApplicationStatus(ctx context.Context, acdClient argocdclient.Client,
|
||||
|
||||
if selectedResourcesAreReady && (!operationInProgress || !watch.operation) {
|
||||
app = printFinalStatus(app)
|
||||
return app, nil
|
||||
return app, finalOperationState, nil
|
||||
}
|
||||
|
||||
newStates := groupResourceStates(app, selectedResources)
|
||||
@@ -2066,7 +2049,7 @@ func waitOnApplicationStatus(ctx context.Context, acdClient argocdclient.Client,
|
||||
if prevState, found := prevStates[stateKey]; found {
|
||||
if watch.health && prevState.Health != string(health.HealthStatusUnknown) && prevState.Health != string(health.HealthStatusDegraded) && newState.Health == string(health.HealthStatusDegraded) {
|
||||
_ = printFinalStatus(app)
|
||||
return nil, fmt.Errorf("application '%s' health state has transitioned from %s to %s", appName, prevState.Health, newState.Health)
|
||||
return nil, finalOperationState, fmt.Errorf("application '%s' health state has transitioned from %s to %s", appName, prevState.Health, newState.Health)
|
||||
}
|
||||
doPrint = prevState.Merge(newState)
|
||||
} else {
|
||||
@@ -2080,7 +2063,7 @@ func waitOnApplicationStatus(ctx context.Context, acdClient argocdclient.Client,
|
||||
_ = w.Flush()
|
||||
}
|
||||
_ = printFinalStatus(app)
|
||||
return nil, fmt.Errorf("timed out (%ds) waiting for app %q match desired state", timeout, appName)
|
||||
return nil, finalOperationState, fmt.Errorf("timed out (%ds) waiting for app %q match desired state", timeout, appName)
|
||||
}
|
||||
|
||||
// setParameterOverrides updates an existing or appends a new parameter override in the application
|
||||
@@ -2165,10 +2148,6 @@ func NewApplicationHistoryCommand(clientOpts *argocdclient.ClientOptions) *cobra
|
||||
})
|
||||
errors.CheckError(err)
|
||||
|
||||
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
|
||||
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
|
||||
}
|
||||
|
||||
if output == "id" {
|
||||
printApplicationHistoryIds(app.Status.History)
|
||||
} else {
|
||||
@@ -2229,10 +2208,6 @@ func NewApplicationRollbackCommand(clientOpts *argocdclient.ClientOptions) *cobr
|
||||
})
|
||||
errors.CheckError(err)
|
||||
|
||||
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
|
||||
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
|
||||
}
|
||||
|
||||
depInfo, err := findRevisionHistory(app, int64(depID))
|
||||
errors.CheckError(err)
|
||||
|
||||
@@ -2244,7 +2219,7 @@ func NewApplicationRollbackCommand(clientOpts *argocdclient.ClientOptions) *cobr
|
||||
})
|
||||
errors.CheckError(err)
|
||||
|
||||
_, err = waitOnApplicationStatus(ctx, acdClient, app.QualifiedName(), timeout, watchOpts{
|
||||
_, _, err = waitOnApplicationStatus(ctx, acdClient, app.QualifiedName(), timeout, watchOpts{
|
||||
operation: true,
|
||||
}, nil)
|
||||
errors.CheckError(err)
|
||||
@@ -2316,10 +2291,6 @@ func NewApplicationManifestsCommand(clientOpts *argocdclient.ClientOptions) *cob
|
||||
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName})
|
||||
errors.CheckError(err)
|
||||
|
||||
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
|
||||
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
|
||||
}
|
||||
|
||||
settingsConn, settingsIf := clientset.NewSettingsClientOrDie()
|
||||
defer argoio.Close(settingsConn)
|
||||
argoSettings, err := settingsIf.Get(context.Background(), &settingspkg.SettingsQuery{})
|
||||
@@ -2419,10 +2390,6 @@ func NewApplicationEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
})
|
||||
errors.CheckError(err)
|
||||
|
||||
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
|
||||
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
|
||||
}
|
||||
|
||||
appData, err := json.Marshal(app.Spec)
|
||||
errors.CheckError(err)
|
||||
appData, err = yaml.JSONToYAML(appData)
|
||||
|
||||
@@ -343,16 +343,19 @@ func printAppSetSummaryTable(appSet *arogappsetv1.ApplicationSet) {
|
||||
fmt.Printf(printOpFmtStr, "Path:", source.Path)
|
||||
printAppSourceDetails(&source)
|
||||
|
||||
var syncPolicy string
|
||||
if appSet.Spec.SyncPolicy != nil && appSet.Spec.Template.Spec.SyncPolicy.Automated != nil {
|
||||
syncPolicy = "Automated"
|
||||
if appSet.Spec.Template.Spec.SyncPolicy.Automated.Prune {
|
||||
syncPolicy += " (Prune)"
|
||||
var (
|
||||
syncPolicyStr string
|
||||
syncPolicy = appSet.Spec.Template.Spec.SyncPolicy
|
||||
)
|
||||
if syncPolicy != nil && syncPolicy.Automated != nil {
|
||||
syncPolicyStr = "Automated"
|
||||
if syncPolicy.Automated.Prune {
|
||||
syncPolicyStr += " (Prune)"
|
||||
}
|
||||
} else {
|
||||
syncPolicy = "<none>"
|
||||
syncPolicyStr = "<none>"
|
||||
}
|
||||
fmt.Printf(printOpFmtStr, "SyncPolicy:", syncPolicy)
|
||||
fmt.Printf(printOpFmtStr, "SyncPolicy:", syncPolicyStr)
|
||||
|
||||
}
|
||||
|
||||
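A compact restatement of the refactored logic above, pulled into a hypothetical helper for readability; the arogappsetv1 alias and field names are those used in the diff.

// appSetSyncPolicyString is a hypothetical helper mirroring the new logic:
// the displayed policy is derived from the template's sync policy only.
func appSetSyncPolicyString(appSet *arogappsetv1.ApplicationSet) string {
	syncPolicy := appSet.Spec.Template.Spec.SyncPolicy
	if syncPolicy == nil || syncPolicy.Automated == nil {
		return "<none>"
	}
	s := "Automated"
	if syncPolicy.Automated.Prune {
		s += " (Prune)"
	}
	return s
}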
|
||||
@@ -1,6 +1,8 @@
|
||||
package commands
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
@@ -68,3 +70,124 @@ func TestPrintApplicationSetTable(t *testing.T) {
|
||||
expectation := "NAME NAMESPACE PROJECT SYNCPOLICY CONDITIONS\napp-name default nil [{ResourcesUpToDate <nil> True }]\napp-name default nil [{ResourcesUpToDate <nil> True }]\n"
|
||||
assert.Equal(t, expectation, output)
|
||||
}
|
||||
|
||||
func TestPrintAppSetSummaryTable(t *testing.T) {
|
||||
baseAppSet := &arogappsetv1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "app-name",
|
||||
},
|
||||
Spec: arogappsetv1.ApplicationSetSpec{
|
||||
Generators: []arogappsetv1.ApplicationSetGenerator{
|
||||
arogappsetv1.ApplicationSetGenerator{
|
||||
Git: &arogappsetv1.GitGenerator{
|
||||
RepoURL: "https://github.com/argoproj/argo-cd.git",
|
||||
Revision: "head",
|
||||
Directories: []arogappsetv1.GitDirectoryGeneratorItem{
|
||||
arogappsetv1.GitDirectoryGeneratorItem{
|
||||
Path: "applicationset/examples/git-generator-directory/cluster-addons/*",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
Template: arogappsetv1.ApplicationSetTemplate{
|
||||
Spec: v1alpha1.ApplicationSpec{
|
||||
Project: "default",
|
||||
},
|
||||
},
|
||||
},
|
||||
Status: arogappsetv1.ApplicationSetStatus{
|
||||
Conditions: []arogappsetv1.ApplicationSetCondition{
|
||||
arogappsetv1.ApplicationSetCondition{
|
||||
Status: v1alpha1.ApplicationSetConditionStatusTrue,
|
||||
Type: arogappsetv1.ApplicationSetConditionResourcesUpToDate,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
appsetSpecSyncPolicy := baseAppSet.DeepCopy()
|
||||
appsetSpecSyncPolicy.Spec.SyncPolicy = &arogappsetv1.ApplicationSetSyncPolicy{
|
||||
PreserveResourcesOnDeletion: true,
|
||||
}
|
||||
|
||||
appSetTemplateSpecSyncPolicy := baseAppSet.DeepCopy()
|
||||
appSetTemplateSpecSyncPolicy.Spec.Template.Spec.SyncPolicy = &arogappsetv1.SyncPolicy{
|
||||
Automated: &arogappsetv1.SyncPolicyAutomated{
|
||||
SelfHeal: true,
|
||||
},
|
||||
}
|
||||
|
||||
appSetBothSyncPolicies := baseAppSet.DeepCopy()
|
||||
appSetBothSyncPolicies.Spec.SyncPolicy = &arogappsetv1.ApplicationSetSyncPolicy{
|
||||
PreserveResourcesOnDeletion: true,
|
||||
}
|
||||
appSetBothSyncPolicies.Spec.Template.Spec.SyncPolicy = &arogappsetv1.SyncPolicy{
|
||||
Automated: &arogappsetv1.SyncPolicyAutomated{
|
||||
SelfHeal: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
appSet *arogappsetv1.ApplicationSet
|
||||
expectedOutput string
|
||||
}{
|
||||
{
|
||||
name: "appset with only spec.syncPolicy set",
|
||||
appSet: appsetSpecSyncPolicy,
|
||||
expectedOutput: `Name: app-name
|
||||
Project: default
|
||||
Server:
|
||||
Namespace:
|
||||
Repo:
|
||||
Target:
|
||||
Path:
|
||||
SyncPolicy: <none>
|
||||
`,
|
||||
},
|
||||
{
|
||||
name: "appset with only spec.template.spec.syncPolicy set",
|
||||
appSet: appSetTemplateSpecSyncPolicy,
|
||||
expectedOutput: `Name: app-name
|
||||
Project: default
|
||||
Server:
|
||||
Namespace:
|
||||
Repo:
|
||||
Target:
|
||||
Path:
|
||||
SyncPolicy: Automated
|
||||
`,
|
||||
},
|
||||
{
|
||||
name: "appset with both spec.SyncPolicy and spec.template.spec.syncPolicy set",
|
||||
appSet: appSetBothSyncPolicies,
|
||||
expectedOutput: `Name: app-name
|
||||
Project: default
|
||||
Server:
|
||||
Namespace:
|
||||
Repo:
|
||||
Target:
|
||||
Path:
|
||||
SyncPolicy: Automated
|
||||
`,
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
oldStdout := os.Stdout
|
||||
defer func() {
|
||||
os.Stdout = oldStdout
|
||||
}()
|
||||
|
||||
r, w, _ := os.Pipe()
|
||||
os.Stdout = w
|
||||
|
||||
printAppSetSummaryTable(tt.appSet)
|
||||
w.Close()
|
||||
|
||||
out, err := ioutil.ReadAll(r)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, tt.expectedOutput, string(out))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -38,11 +38,12 @@ import (
|
||||
)
|
||||
|
||||
type forwardCacheClient struct {
|
||||
namespace string
|
||||
context string
|
||||
init sync.Once
|
||||
client cache.CacheClient
|
||||
err error
|
||||
namespace string
|
||||
context string
|
||||
init sync.Once
|
||||
client cache.CacheClient
|
||||
compression cache.RedisCompressionType
|
||||
err error
|
||||
}
|
||||
|
||||
func (c *forwardCacheClient) doLazy(action func(client cache.CacheClient) error) error {
|
||||
@@ -58,7 +59,7 @@ func (c *forwardCacheClient) doLazy(action func(client cache.CacheClient) error)
|
||||
}
|
||||
|
||||
redisClient := redis.NewClient(&redis.Options{Addr: fmt.Sprintf("localhost:%d", redisPort)})
|
||||
c.client = cache.NewRedisCache(redisClient, time.Hour, cache.RedisCompressionNone)
|
||||
c.client = cache.NewRedisCache(redisClient, time.Hour, c.compression)
|
||||
})
|
||||
if c.err != nil {
|
||||
return c.err
|
||||
@@ -139,7 +140,7 @@ func testAPI(ctx context.Context, clientOpts *apiclient.ClientOptions) error {
|
||||
|
||||
// StartLocalServer allows executing commands in headless mode: it starts the Argo CD API server on the fly and
|
||||
// changes the provided client options to use the started API server's port
|
||||
func StartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions, ctxStr string, port *int, address *string) error {
|
||||
func StartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions, ctxStr string, port *int, address *string, compression cache.RedisCompressionType) error {
|
||||
flags := pflag.NewFlagSet("tmp", pflag.ContinueOnError)
|
||||
clientConfig := cli.AddKubectlFlagsToSet(flags)
|
||||
startInProcessAPI := clientOpts.Core
|
||||
@@ -200,7 +201,7 @@ func StartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions,
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
appstateCache := appstatecache.NewCache(cache.NewCache(&forwardCacheClient{namespace: namespace, context: ctxStr}), time.Hour)
|
||||
appstateCache := appstatecache.NewCache(cache.NewCache(&forwardCacheClient{namespace: namespace, context: ctxStr, compression: compression}), time.Hour)
|
||||
srv := server.NewServer(ctx, server.ArgoCDServerOpts{
|
||||
EnableGZip: false,
|
||||
Namespace: namespace,
|
||||
@@ -243,7 +244,7 @@ func NewClientOrDie(opts *apiclient.ClientOptions, c *cobra.Command) apiclient.C
|
||||
ctx := c.Context()
|
||||
|
||||
ctxStr := initialize.RetrieveContextIfChanged(c.Flag("context"))
|
||||
err := StartLocalServer(ctx, opts, ctxStr, nil, nil)
|
||||
err := StartLocalServer(ctx, opts, ctxStr, nil, nil, cache.RedisCompressionNone)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
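A minimal sketch of how a caller could thread a configured compression setting into the headless server instead of the hard-coded cache.RedisCompressionNone shown above. The startHeadlessWithCompression name is hypothetical, and cache.RedisCompressionGZip is assumed to be the gzip variant of cache.RedisCompressionType exposed by util/cache.

// startHeadlessWithCompression is a hypothetical caller that enables gzip
// compression for the in-process server's Redis cache entries.
func startHeadlessWithCompression(ctx context.Context, opts *apiclient.ClientOptions, ctxStr string) error {
	// cache.RedisCompressionGZip is assumed here; substitute whichever
	// cache.RedisCompressionType the CLI has been configured with.
	return StartLocalServer(ctx, opts, ctxStr, nil, nil, cache.RedisCompressionGZip)
}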
@@ -3,6 +3,7 @@ package commands
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"strconv"
|
||||
"text/tabwriter"
|
||||
|
||||
log "github.com/sirupsen/logrus"
|
||||
@@ -160,6 +161,7 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
|
||||
repoOpts.Repo.GithubAppInstallationId = repoOpts.GithubAppInstallationId
|
||||
repoOpts.Repo.GitHubAppEnterpriseBaseURL = repoOpts.GitHubAppEnterpriseBaseURL
|
||||
repoOpts.Repo.Proxy = repoOpts.Proxy
|
||||
repoOpts.Repo.ForceHttpBasicAuth = repoOpts.ForceHttpBasicAuth
|
||||
|
||||
if repoOpts.Repo.Type == "helm" && repoOpts.Repo.Name == "" {
|
||||
errors.CheckError(fmt.Errorf("Must specify --name for repos of type 'helm'"))
|
||||
@@ -199,6 +201,7 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
|
||||
Proxy: repoOpts.Proxy,
|
||||
Project: repoOpts.Repo.Project,
|
||||
GcpServiceAccountKey: repoOpts.Repo.GCPServiceAccountKey,
|
||||
ForceHttpBasicAuth: repoOpts.Repo.ForceHttpBasicAuth,
|
||||
}
|
||||
_, err := repoIf.ValidateAccess(ctx, &repoAccessReq)
|
||||
errors.CheckError(err)
|
||||
@@ -248,15 +251,12 @@ func printRepoTable(repos appsv1.Repositories) {
|
||||
_, _ = fmt.Fprintf(w, "TYPE\tNAME\tREPO\tINSECURE\tOCI\tLFS\tCREDS\tSTATUS\tMESSAGE\tPROJECT\n")
|
||||
for _, r := range repos {
|
||||
var hasCreds string
|
||||
if !r.HasCredentials() {
|
||||
hasCreds = "false"
|
||||
if r.InheritedCreds {
|
||||
hasCreds = "inherited"
|
||||
} else {
|
||||
if r.InheritedCreds {
|
||||
hasCreds = "inherited"
|
||||
} else {
|
||||
hasCreds = "true"
|
||||
}
|
||||
hasCreds = strconv.FormatBool(r.HasCredentials())
|
||||
}
|
||||
|
||||
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%v\t%v\t%v\t%s\t%s\t%s\t%s\n", r.Type, r.Name, r.Repo, r.IsInsecure(), r.EnableOCI, r.EnableLFS, hasCreds, r.ConnectionState.Status, r.ConnectionState.Message, r.Project)
|
||||
}
|
||||
_ = w.Flush()
|
||||
|
||||
@@ -175,6 +175,7 @@ func NewRepoCredsAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comma
|
||||
command.Flags().BoolVar(&repo.EnableOCI, "enable-oci", false, "Specifies whether helm-oci support should be enabled for this repo")
|
||||
command.Flags().StringVar(&repo.Type, "type", common.DefaultRepoType, "type of the repository, \"git\" or \"helm\"")
|
||||
command.Flags().StringVar(&gcpServiceAccountKeyPath, "gcp-service-account-key-path", "", "service account key for the Google Cloud Platform")
|
||||
command.Flags().BoolVar(&repo.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force basic auth when connecting via HTTP")
|
||||
return command
|
||||
}
|
||||
|
||||
|
||||
@@ -61,6 +61,6 @@ func readAppset(yml []byte, appsets *[]*argoprojiov1alpha1.ApplicationSet) error
|
||||
*appsets = append(*appsets, &appset)
|
||||
|
||||
}
|
||||
|
||||
return fmt.Errorf("error reading app set: %w", err)
|
||||
// we reach here if there is no error found while reading the Application Set
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -138,7 +138,10 @@ func readProjFromURI(fileURL string, proj *v1alpha1.AppProject) error {
|
||||
} else {
|
||||
err = config.UnmarshalRemoteFile(fileURL, &proj)
|
||||
}
|
||||
return fmt.Errorf("error reading proj from uri: %w", err)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error reading proj from uri: %w", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
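Both fixes above follow the same shape: wrap the error only when one actually occurred, and return nil explicitly on success. A generic sketch of that shape (the readOrFail name and the unmarshal callback are illustrative only):

// readOrFail shows the corrected error-handling shape used in readAppset
// and readProjFromURI: wrap only a non-nil error, otherwise return nil.
func readOrFail(unmarshal func() error) error {
	if err := unmarshal(); err != nil {
		return fmt.Errorf("error reading file: %w", err)
	}
	return nil
}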
func SetProjSpecOptions(flags *pflag.FlagSet, spec *v1alpha1.AppProjectSpec, projOpts *ProjectOpts) int {
|
||||
|
||||
@@ -23,6 +23,7 @@ type RepoOptions struct {
|
||||
GitHubAppEnterpriseBaseURL string
|
||||
Proxy string
|
||||
GCPServiceAccountKeyPath string
|
||||
ForceHttpBasicAuth bool
|
||||
}
|
||||
|
||||
func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
|
||||
@@ -44,4 +45,5 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
|
||||
command.Flags().StringVar(&opts.GitHubAppEnterpriseBaseURL, "github-app-enterprise-base-url", "", "base url to use when using GitHub Enterprise (e.g. https://ghe.example.com/api/v3)")
|
||||
command.Flags().StringVar(&opts.Proxy, "proxy", "", "use proxy to access repository")
|
||||
command.Flags().StringVar(&opts.GCPServiceAccountKeyPath, "gcp-service-account-key-path", "", "service account key for the Google Cloud Platform")
|
||||
command.Flags().BoolVar(&opts.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force use of basic auth when connecting repository via HTTP")
|
||||
}
|
||||
|
||||
@@ -318,6 +318,7 @@ func (m *ManifestResponse) GetSourceType() string {
|
||||
|
||||
type RepositoryResponse struct {
|
||||
IsSupported bool `protobuf:"varint,1,opt,name=isSupported,proto3" json:"isSupported,omitempty"`
|
||||
IsDiscoveryEnabled bool `protobuf:"varint,2,opt,name=isDiscoveryEnabled,proto3" json:"isDiscoveryEnabled,omitempty"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
@@ -363,6 +364,13 @@ func (m *RepositoryResponse) GetIsSupported() bool {
|
||||
return false
|
||||
}
|
||||
|
||||
func (m *RepositoryResponse) GetIsDiscoveryEnabled() bool {
|
||||
if m != nil {
|
||||
return m.IsDiscoveryEnabled
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// ParametersAnnouncementResponse contains a list of announcements. This list represents all the parameters which a CMP
|
||||
// is able to accept.
|
||||
type ParametersAnnouncementResponse struct {
|
||||
@@ -472,42 +480,43 @@ func init() {
|
||||
func init() { proto.RegisterFile("cmpserver/plugin/plugin.proto", fileDescriptor_b21875a7079a06ed) }
|
||||
|
||||
var fileDescriptor_b21875a7079a06ed = []byte{
|
||||
// 558 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x54, 0xc1, 0x6e, 0xd3, 0x4c,
|
||||
0x10, 0xae, 0x9b, 0xb4, 0x4d, 0x26, 0x95, 0xfe, 0x68, 0xf5, 0x0b, 0x4c, 0xd4, 0x86, 0xe0, 0x03,
|
||||
0xca, 0x85, 0x44, 0x32, 0x88, 0x1b, 0x12, 0x2d, 0x2a, 0xad, 0x40, 0x41, 0xd1, 0x96, 0x0b, 0xdc,
|
||||
0xb6, 0xce, 0x24, 0x59, 0x6a, 0xef, 0x2e, 0xeb, 0xb5, 0xa5, 0xc0, 0x85, 0xf7, 0xe0, 0x01, 0x78,
|
||||
0x15, 0x8e, 0x3c, 0x02, 0xca, 0x93, 0x20, 0xaf, 0xed, 0xd8, 0xa2, 0x6d, 0x38, 0x79, 0xe6, 0x9b,
|
||||
0x99, 0x6f, 0xbf, 0x9d, 0x99, 0x35, 0x1c, 0x07, 0x91, 0x8a, 0x51, 0xa7, 0xa8, 0xc7, 0x2a, 0x4c,
|
||||
0x16, 0x5c, 0x14, 0x9f, 0x91, 0xd2, 0xd2, 0x48, 0xb2, 0x9f, 0x7b, 0xbd, 0xb3, 0x05, 0x37, 0xcb,
|
||||
0xe4, 0x6a, 0x14, 0xc8, 0x68, 0xcc, 0xf4, 0x42, 0x2a, 0x2d, 0x3f, 0x59, 0xe3, 0x49, 0x30, 0x1b,
|
||||
0xa7, 0xfe, 0x58, 0xa3, 0x92, 0x05, 0x8d, 0x35, 0xb9, 0x91, 0x7a, 0x55, 0x33, 0x73, 0x3a, 0xef,
|
||||
0x9b, 0x03, 0xdd, 0x13, 0xa5, 0x2e, 0x8d, 0x46, 0x16, 0x51, 0xfc, 0x9c, 0x60, 0x6c, 0xc8, 0x0b,
|
||||
0x68, 0x45, 0x68, 0xd8, 0x8c, 0x19, 0xe6, 0x3a, 0x03, 0x67, 0xd8, 0xf1, 0x1f, 0x8e, 0x0a, 0x11,
|
||||
0x13, 0x26, 0xf8, 0x1c, 0x63, 0x53, 0xa4, 0x4e, 0x8a, 0xb4, 0x8b, 0x1d, 0xba, 0x29, 0x21, 0x1e,
|
||||
0x34, 0xe7, 0x3c, 0x44, 0x77, 0xd7, 0x96, 0x1e, 0x96, 0xa5, 0xaf, 0x79, 0x88, 0x17, 0x3b, 0xd4,
|
||||
0xc6, 0x4e, 0xdb, 0x70, 0xa0, 0x73, 0x0a, 0xef, 0x87, 0x03, 0xf7, 0xef, 0xa0, 0x25, 0x2e, 0x1c,
|
||||
0x30, 0xa5, 0xde, 0xb1, 0x08, 0xad, 0x90, 0x36, 0x2d, 0x5d, 0xd2, 0x07, 0x60, 0x4a, 0x51, 0x0c,
|
||||
0xa7, 0xcc, 0x2c, 0xed, 0x51, 0x6d, 0x5a, 0x43, 0x48, 0x0f, 0x5a, 0xc1, 0x12, 0x83, 0xeb, 0x38,
|
||||
0x89, 0xdc, 0x86, 0x8d, 0x6e, 0x7c, 0x42, 0xa0, 0x19, 0xf3, 0x2f, 0xe8, 0x36, 0x07, 0xce, 0xb0,
|
||||
0x41, 0xad, 0x4d, 0x3c, 0x68, 0xa0, 0x48, 0xdd, 0xbd, 0x41, 0x63, 0xd8, 0xf1, 0xbb, 0xa5, 0xe6,
|
||||
0x33, 0x91, 0x9e, 0x09, 0xa3, 0x57, 0x34, 0x0b, 0x7a, 0xcf, 0xa0, 0x55, 0x02, 0x19, 0x87, 0xa8,
|
||||
0x64, 0x59, 0x9b, 0xfc, 0x0f, 0x7b, 0x29, 0x0b, 0x13, 0x2c, 0xe4, 0xe4, 0x8e, 0x37, 0x85, 0x6e,
|
||||
0x75, 0xbd, 0x58, 0x49, 0x11, 0x23, 0x39, 0x82, 0x76, 0x54, 0x60, 0xb1, 0xeb, 0x0c, 0x1a, 0xc3,
|
||||
0x36, 0xad, 0x80, 0xec, 0x6e, 0xb1, 0x4c, 0x74, 0x80, 0xef, 0x57, 0xaa, 0x24, 0xab, 0x21, 0xde,
|
||||
0x73, 0x20, 0x74, 0x33, 0xc8, 0x0d, 0xe7, 0x00, 0x3a, 0x3c, 0xbe, 0x4c, 0x94, 0x92, 0xda, 0xe0,
|
||||
0xcc, 0x0a, 0x6b, 0xd1, 0x3a, 0xe4, 0x7d, 0x85, 0xfe, 0x94, 0x69, 0x16, 0xa1, 0x41, 0x1d, 0x9f,
|
||||
0x08, 0x21, 0x13, 0x11, 0x60, 0x84, 0xa2, 0xd2, 0xf5, 0x01, 0xee, 0xa9, 0x32, 0xa3, 0x9e, 0x90,
|
||||
0x8b, 0xec, 0xf8, 0x8f, 0x46, 0xb5, 0x0d, 0x9a, 0xde, 0x96, 0x49, 0xef, 0x20, 0xf0, 0x8e, 0xa0,
|
||||
0x99, 0x6d, 0x40, 0xd6, 0xa4, 0x60, 0x99, 0x88, 0x6b, 0x2b, 0xf0, 0x90, 0xe6, 0x8e, 0xff, 0x7d,
|
||||
0x17, 0x8e, 0x5f, 0x49, 0x31, 0xe7, 0x8b, 0x09, 0x13, 0x6c, 0x61, 0x6b, 0xa6, 0x76, 0x06, 0x97,
|
||||
0xa8, 0x53, 0x1e, 0x20, 0x79, 0x03, 0xdd, 0x73, 0x14, 0xa8, 0x99, 0xc1, 0xb2, 0x9d, 0xc4, 0x2d,
|
||||
0xe7, 0xf4, 0xf7, 0x0a, 0xf7, 0xdc, 0x9b, 0x0b, 0x9b, 0x5f, 0xd1, 0xdb, 0x19, 0x3a, 0xe4, 0x2d,
|
||||
0xfc, 0x37, 0x61, 0x26, 0x58, 0x56, 0x5d, 0xdc, 0x42, 0xd5, 0x2b, 0x23, 0x37, 0x7b, 0x6e, 0xc9,
|
||||
0x18, 0x3c, 0x38, 0x47, 0x73, 0x7b, 0x63, 0xb7, 0xd0, 0x3e, 0x2e, 0x23, 0xdb, 0x47, 0x92, 0x1d,
|
||||
0x71, 0xfa, 0xf2, 0xe7, 0xba, 0xef, 0xfc, 0x5a, 0xf7, 0x9d, 0xdf, 0xeb, 0xbe, 0xf3, 0xd1, 0xff,
|
||||
0xc7, 0xd3, 0xaf, 0x7e, 0x20, 0x4c, 0xf1, 0x20, 0xe4, 0x28, 0xcc, 0xd5, 0xbe, 0x7d, 0xee, 0x4f,
|
||||
0xff, 0x04, 0x00, 0x00, 0xff, 0xff, 0x33, 0x34, 0xb3, 0x95, 0x5e, 0x04, 0x00, 0x00,
|
||||
// 576 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x94, 0xdd, 0x6e, 0x12, 0x4f,
|
||||
0x14, 0xc0, 0xbb, 0x85, 0xb6, 0x70, 0x68, 0xf2, 0x27, 0x93, 0x7f, 0x74, 0x25, 0x2d, 0xe2, 0x5e,
|
||||
0x18, 0x6e, 0x84, 0x04, 0xbd, 0x35, 0xb1, 0x55, 0x6c, 0xa3, 0xc1, 0x90, 0xa9, 0x37, 0x7a, 0x37,
|
||||
0x1d, 0x0e, 0x30, 0x76, 0x77, 0x66, 0x9c, 0x99, 0xdd, 0x04, 0xbd, 0xf1, 0x3d, 0x7c, 0x00, 0x5f,
|
||||
0xc5, 0x4b, 0x1f, 0xc1, 0xf4, 0x49, 0x0c, 0xb3, 0xbb, 0x40, 0x6c, 0x8b, 0x57, 0x7b, 0x3e, 0x7f,
|
||||
0x7b, 0xbe, 0x32, 0x70, 0xcc, 0x13, 0x6d, 0xd1, 0x64, 0x68, 0xfa, 0x3a, 0x4e, 0x67, 0x42, 0x16,
|
||||
0x9f, 0x9e, 0x36, 0xca, 0x29, 0xb2, 0x9f, 0x6b, 0xad, 0xe1, 0x4c, 0xb8, 0x79, 0x7a, 0xd9, 0xe3,
|
||||
0x2a, 0xe9, 0x33, 0x33, 0x53, 0xda, 0xa8, 0x4f, 0x5e, 0x78, 0xc2, 0x27, 0xfd, 0x6c, 0xd0, 0x37,
|
||||
0xa8, 0x55, 0x81, 0xf1, 0xa2, 0x70, 0xca, 0x2c, 0x36, 0xc4, 0x1c, 0x17, 0x7d, 0x0b, 0xa0, 0x79,
|
||||
0xa2, 0xf5, 0x85, 0x33, 0xc8, 0x12, 0x8a, 0x9f, 0x53, 0xb4, 0x8e, 0x3c, 0x87, 0x5a, 0x82, 0x8e,
|
||||
0x4d, 0x98, 0x63, 0x61, 0xd0, 0x09, 0xba, 0x8d, 0xc1, 0xc3, 0x5e, 0x51, 0xc4, 0x88, 0x49, 0x31,
|
||||
0x45, 0xeb, 0x8a, 0xd0, 0x51, 0x11, 0x76, 0xbe, 0x43, 0x57, 0x29, 0x24, 0x82, 0xea, 0x54, 0xc4,
|
||||
0x18, 0xee, 0xfa, 0xd4, 0xc3, 0x32, 0xf5, 0xb5, 0x88, 0xf1, 0x7c, 0x87, 0x7a, 0xdf, 0x69, 0x1d,
|
||||
0x0e, 0x4c, 0x8e, 0x88, 0x7e, 0x04, 0x70, 0xff, 0x0e, 0x2c, 0x09, 0xe1, 0x80, 0x69, 0xfd, 0x8e,
|
||||
0x25, 0xe8, 0x0b, 0xa9, 0xd3, 0x52, 0x25, 0x6d, 0x00, 0xa6, 0x35, 0xc5, 0x78, 0xcc, 0xdc, 0xdc,
|
||||
0xff, 0xaa, 0x4e, 0x37, 0x2c, 0xa4, 0x05, 0x35, 0x3e, 0x47, 0x7e, 0x65, 0xd3, 0x24, 0xac, 0x78,
|
||||
0xef, 0x4a, 0x27, 0x04, 0xaa, 0x56, 0x7c, 0xc1, 0xb0, 0xda, 0x09, 0xba, 0x15, 0xea, 0x65, 0x12,
|
||||
0x41, 0x05, 0x65, 0x16, 0xee, 0x75, 0x2a, 0xdd, 0xc6, 0xa0, 0x59, 0xd6, 0x3c, 0x94, 0xd9, 0x50,
|
||||
0x3a, 0xb3, 0xa0, 0x4b, 0x67, 0xf4, 0x0c, 0x6a, 0xa5, 0x61, 0xc9, 0x90, 0xeb, 0xb2, 0xbc, 0x4c,
|
||||
0xfe, 0x87, 0xbd, 0x8c, 0xc5, 0x29, 0x16, 0xe5, 0xe4, 0x4a, 0x34, 0x86, 0xe6, 0xba, 0x3d, 0xab,
|
||||
0x95, 0xb4, 0x48, 0x8e, 0xa0, 0x9e, 0x14, 0x36, 0x1b, 0x06, 0x9d, 0x4a, 0xb7, 0x4e, 0xd7, 0x86,
|
||||
0x65, 0x6f, 0x56, 0xa5, 0x86, 0xe3, 0xfb, 0x85, 0x2e, 0x61, 0x1b, 0x96, 0x68, 0x0a, 0x84, 0xae,
|
||||
0x16, 0xb9, 0x62, 0x76, 0xa0, 0x21, 0xec, 0x45, 0xaa, 0xb5, 0x32, 0x0e, 0x27, 0xbe, 0xb0, 0x1a,
|
||||
0xdd, 0x34, 0x91, 0x1e, 0x10, 0x61, 0x5f, 0x09, 0xcb, 0x55, 0x86, 0x66, 0x31, 0x94, 0xec, 0x32,
|
||||
0xc6, 0x89, 0xe7, 0xd7, 0xe8, 0x2d, 0x9e, 0xe8, 0x2b, 0xb4, 0xc7, 0xcc, 0xb0, 0x04, 0x1d, 0x1a,
|
||||
0x7b, 0x22, 0xa5, 0x4a, 0x25, 0xc7, 0x04, 0xe5, 0xba, 0x8f, 0x0f, 0x70, 0x4f, 0x97, 0x11, 0x9b,
|
||||
0x01, 0x79, 0x53, 0x8d, 0xc1, 0xa3, 0xde, 0xc6, 0xc5, 0x8d, 0x6f, 0x8b, 0xa4, 0x77, 0x00, 0xa2,
|
||||
0x23, 0xa8, 0x2e, 0x2f, 0x66, 0x39, 0x54, 0x3e, 0x4f, 0xe5, 0x95, 0x6f, 0xe8, 0x90, 0xe6, 0xca,
|
||||
0xe0, 0xfb, 0x2e, 0x1c, 0xbf, 0x54, 0x72, 0x2a, 0x66, 0x23, 0x26, 0xd9, 0xcc, 0xe7, 0x8c, 0xfd,
|
||||
0xce, 0x2e, 0xd0, 0x64, 0x82, 0x23, 0x79, 0x03, 0xcd, 0x33, 0x94, 0x68, 0x98, 0xc3, 0x72, 0xfc,
|
||||
0x24, 0x2c, 0xf7, 0xfa, 0xf7, 0xc9, 0xb7, 0xc2, 0x9b, 0x07, 0x9e, 0xb7, 0x18, 0xed, 0x74, 0x03,
|
||||
0xf2, 0x16, 0xfe, 0x1b, 0x31, 0xc7, 0xe7, 0xeb, 0xa9, 0x6f, 0x41, 0xb5, 0x4a, 0xcf, 0xcd, 0x1d,
|
||||
0x79, 0x18, 0x83, 0x07, 0x67, 0xe8, 0x6e, 0x1f, 0xec, 0x16, 0xec, 0xe3, 0xd2, 0xb3, 0x7d, 0x25,
|
||||
0xcb, 0x5f, 0x9c, 0xbe, 0xf8, 0x79, 0xdd, 0x0e, 0x7e, 0x5d, 0xb7, 0x83, 0xdf, 0xd7, 0xed, 0xe0,
|
||||
0xe3, 0xe0, 0x1f, 0x4f, 0xc5, 0xfa, 0xc1, 0x61, 0x5a, 0xf0, 0x58, 0xa0, 0x74, 0x97, 0xfb, 0xfe,
|
||||
0x79, 0x78, 0xfa, 0x27, 0x00, 0x00, 0xff, 0xff, 0x23, 0x88, 0x8e, 0xd3, 0x8e, 0x04, 0x00, 0x00,
|
||||
}
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
@@ -1025,6 +1034,16 @@ func (m *RepositoryResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
if m.IsDiscoveryEnabled {
|
||||
i--
|
||||
if m.IsDiscoveryEnabled {
|
||||
dAtA[i] = 1
|
||||
} else {
|
||||
dAtA[i] = 0
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x10
|
||||
}
|
||||
if m.IsSupported {
|
||||
i--
|
||||
if m.IsSupported {
|
||||
@@ -1247,6 +1266,9 @@ func (m *RepositoryResponse) Size() (n int) {
|
||||
if m.IsSupported {
|
||||
n += 2
|
||||
}
|
||||
if m.IsDiscoveryEnabled {
|
||||
n += 2
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
@@ -1893,6 +1915,26 @@ func (m *RepositoryResponse) Unmarshal(dAtA []byte) error {
|
||||
}
|
||||
}
|
||||
m.IsSupported = bool(v != 0)
|
||||
case 2:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field IsDiscoveryEnabled", wireType)
|
||||
}
|
||||
var v int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowPlugin
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
v |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
m.IsDiscoveryEnabled = bool(v != 0)
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipPlugin(dAtA[iNdEx:])
|
||||
|
||||
@@ -22,11 +22,11 @@ type PluginConfig struct {
|
||||
}
|
||||
|
||||
type PluginConfigSpec struct {
|
||||
Version string `json:"version"`
|
||||
Init Command `json:"init,omitempty"`
|
||||
Generate Command `json:"generate"`
|
||||
Discover Discover `json:"discover"`
|
||||
Parameters Parameters `yaml:"parameters"`
|
||||
Version string `json:"version"`
|
||||
Init Command `json:"init,omitempty"`
|
||||
Generate Command `json:"generate"`
|
||||
Discover Discover `json:"discover"`
|
||||
Parameters Parameters `yaml:"parameters"`
|
||||
}
|
||||
|
||||
// Discover holds find and fileName
|
||||
@@ -84,9 +84,7 @@ func ValidatePluginConfig(config PluginConfig) error {
|
||||
if len(config.Spec.Generate.Command) == 0 {
|
||||
return fmt.Errorf("invalid plugin configuration file. spec.generate command should be non-empty")
|
||||
}
|
||||
if config.Spec.Discover.Find.Glob == "" && len(config.Spec.Discover.Find.Command.Command) == 0 && config.Spec.Discover.FileName == "" {
|
||||
return fmt.Errorf("invalid plugin configuration file. atleast one of discover.find.command or discover.find.glob or discover.fineName should be non-empty")
|
||||
}
|
||||
// discovery field is optional as apps can now specify plugin names directly
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
@@ -9,8 +9,10 @@ import (
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
"unicode"
|
||||
|
||||
"github.com/argoproj/pkg/rand"
|
||||
|
||||
@@ -22,6 +24,7 @@ import (
|
||||
"github.com/argoproj/argo-cd/v2/util/io/files"
|
||||
|
||||
"github.com/argoproj/gitops-engine/pkg/utils/kube"
|
||||
"github.com/cyphar/filepath-securejoin"
|
||||
"github.com/mattn/go-zglob"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
@@ -73,9 +76,8 @@ func runCommand(ctx context.Context, command Command, path string, env []string)
|
||||
}
|
||||
logCtx := log.WithFields(log.Fields{"execID": execId})
|
||||
|
||||
// log in a way we can copy-and-paste into a terminal
|
||||
args := strings.Join(cmd.Args, " ")
|
||||
logCtx.WithFields(log.Fields{"dir": cmd.Dir}).Info(args)
|
||||
argsToLog := getCommandArgsToLog(cmd)
|
||||
logCtx.WithFields(log.Fields{"dir": cmd.Dir}).Info(argsToLog)
|
||||
|
||||
var stdout bytes.Buffer
|
||||
var stderr bytes.Buffer
|
||||
@@ -95,6 +97,14 @@ func runCommand(ctx context.Context, command Command, path string, env []string)
|
||||
<-ctx.Done()
|
||||
// Kill by group ID to make sure child processes are killed. The - tells `kill` that it's a group ID.
|
||||
// Since we didn't set Pgid in SysProcAttr, the group ID is the same as the process ID. https://pkg.go.dev/syscall#SysProcAttr
|
||||
|
||||
// Sending a TERM signal first to allow any potential cleanup if needed, and then sending a KILL signal
|
||||
_ = sysCallTerm(-cmd.Process.Pid)
|
||||
|
||||
// give the process a cleanup grace period before sending the KILL signal
|
||||
cleanupTimeout := 5 * time.Second
|
||||
time.Sleep(cleanupTimeout)
|
||||
|
||||
_ = sysCallKill(-cmd.Process.Pid)
|
||||
}()
|
||||
|
||||
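The termination goroutine above implements a TERM-then-KILL sequence against the whole process group. Below is a stand-alone sketch of the same pattern, assuming a Unix build and the os/exec, syscall and time imports; the terminateGroup name is hypothetical.

// terminateGroup signals the command's process group with SIGTERM, waits
// for a grace period so cleanup handlers can run, then sends SIGKILL.
// The negative PID targets the process group; since Setpgid is set without
// an explicit Pgid, the group ID equals the child's PID, so children the
// command spawned are reached as well.
func terminateGroup(cmd *exec.Cmd, grace time.Duration) {
	_ = syscall.Kill(-cmd.Process.Pid, syscall.SIGTERM)
	time.Sleep(grace)
	_ = syscall.Kill(-cmd.Process.Pid, syscall.SIGKILL)
}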
@@ -106,7 +116,7 @@ func runCommand(ctx context.Context, command Command, path string, env []string)
|
||||
logCtx.WithFields(log.Fields{"duration": duration}).Debug(output)
|
||||
|
||||
if err != nil {
|
||||
err := newCmdError(args, errors.New(err.Error()), strings.TrimSpace(stderr.String()))
|
||||
err := newCmdError(argsToLog, errors.New(err.Error()), strings.TrimSpace(stderr.String()))
|
||||
logCtx.Error(err.Error())
|
||||
return strings.TrimSuffix(output, "\n"), err
|
||||
}
|
||||
@@ -114,6 +124,28 @@ func runCommand(ctx context.Context, command Command, path string, env []string)
|
||||
return strings.TrimSuffix(output, "\n"), nil
|
||||
}
|
||||
|
||||
// getCommandArgsToLog represents the given command in a way that we can copy-and-paste into a terminal
|
||||
func getCommandArgsToLog(cmd *exec.Cmd) string {
|
||||
var argsToLog []string
|
||||
for _, arg := range cmd.Args {
|
||||
containsSpace := false
|
||||
for _, r := range arg {
|
||||
if unicode.IsSpace(r) {
|
||||
containsSpace = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if containsSpace {
|
||||
// add quotes and escape any internal quotes
|
||||
argsToLog = append(argsToLog, strconv.Quote(arg))
|
||||
} else {
|
||||
argsToLog = append(argsToLog, arg)
|
||||
}
|
||||
}
|
||||
args := strings.Join(argsToLog, " ")
|
||||
return args
|
||||
}
|
||||
|
||||
type CmdError struct {
|
||||
Args string
|
||||
Stderr string
|
||||
@@ -153,7 +185,7 @@ func getTempDirMustCleanup(baseDir string) (workDir string, cleanup func(), err
|
||||
if err := os.RemoveAll(workDir); err != nil {
|
||||
log.WithFields(map[string]interface{}{
|
||||
common.SecurityField: common.SecurityHigh,
|
||||
common.SecurityCWEField: 459,
|
||||
common.SecurityCWEField: common.SecurityCWEIncompleteCleanup,
|
||||
}).Errorf("Failed to clean up temp directory: %s", err)
|
||||
}
|
||||
}
|
||||
@@ -273,11 +305,11 @@ func (s *Service) matchRepositoryGeneric(stream MatchRepositoryStream) error {
|
||||
return fmt.Errorf("match repository error receiving stream: %w", err)
|
||||
}
|
||||
|
||||
isSupported, err := s.matchRepository(bufferedCtx, workDir, metadata.GetEnv())
|
||||
isSupported, isDiscoveryEnabled, err := s.matchRepository(bufferedCtx, workDir, metadata.GetEnv(), metadata.GetAppRelPath())
|
||||
if err != nil {
|
||||
return fmt.Errorf("match repository error: %w", err)
|
||||
}
|
||||
repoResponse := &apiclient.RepositoryResponse{IsSupported: isSupported}
|
||||
repoResponse := &apiclient.RepositoryResponse{IsSupported: isSupported, IsDiscoveryEnabled: isDiscoveryEnabled}
|
||||
|
||||
err = stream.SendAndClose(repoResponse)
|
||||
if err != nil {
|
||||
@@ -286,50 +318,55 @@ func (s *Service) matchRepositoryGeneric(stream MatchRepositoryStream) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *Service) matchRepository(ctx context.Context, workdir string, envEntries []*apiclient.EnvEntry) (bool, error) {
|
||||
func (s *Service) matchRepository(ctx context.Context, workdir string, envEntries []*apiclient.EnvEntry, appRelPath string) (isSupported bool, isDiscoveryEnabled bool, err error) {
|
||||
config := s.initConstants.PluginConfig
|
||||
|
||||
appPath, err := securejoin.SecureJoin(workdir, appRelPath)
|
||||
if err != nil {
|
||||
log.WithFields(map[string]interface{}{
|
||||
common.SecurityField: common.SecurityHigh,
|
||||
common.SecurityCWEField: common.SecurityCWEIncompleteCleanup,
|
||||
}).Errorf("error joining workdir %q and appRelPath %q: %v", workdir, appRelPath, err)
|
||||
}
|
||||
|
||||
if config.Spec.Discover.FileName != "" {
|
||||
log.Debugf("config.Spec.Discover.FileName is provided")
|
||||
pattern := filepath.Join(workdir, config.Spec.Discover.FileName)
|
||||
pattern := filepath.Join(appPath, config.Spec.Discover.FileName)
|
||||
matches, err := filepath.Glob(pattern)
|
||||
if err != nil {
|
||||
e := fmt.Errorf("error finding filename match for pattern %q: %w", pattern, err)
|
||||
log.Debug(e)
|
||||
return false, e
|
||||
return false, true, e
|
||||
}
|
||||
return len(matches) > 0, nil
|
||||
return len(matches) > 0, true, nil
|
||||
}
|
||||
|
||||
if config.Spec.Discover.Find.Glob != "" {
|
||||
log.Debugf("config.Spec.Discover.Find.Glob is provided")
|
||||
pattern := filepath.Join(workdir, config.Spec.Discover.Find.Glob)
|
||||
pattern := filepath.Join(appPath, config.Spec.Discover.Find.Glob)
|
||||
// filepath.Glob doesn't have '**' support hence selecting third-party lib
|
||||
// https://github.com/golang/go/issues/11862
|
||||
matches, err := zglob.Glob(pattern)
|
||||
if err != nil {
|
||||
e := fmt.Errorf("error finding glob match for pattern %q: %w", pattern, err)
|
||||
log.Debug(e)
|
||||
return false, e
|
||||
return false, true, e
|
||||
}
|
||||
|
||||
if len(matches) > 0 {
|
||||
return true, nil
|
||||
return len(matches) > 0, true, nil
|
||||
}
|
||||
|
||||
if len(config.Spec.Discover.Find.Command.Command) > 0 {
|
||||
log.Debugf("Going to try runCommand.")
|
||||
env := append(os.Environ(), environ(envEntries)...)
|
||||
find, err := runCommand(ctx, config.Spec.Discover.Find.Command, appPath, env)
|
||||
if err != nil {
|
||||
return false, true, fmt.Errorf("error running find command: %w", err)
|
||||
}
|
||||
return false, nil
|
||||
return find != "", true, nil
|
||||
}
|
||||
|
||||
log.Debugf("Going to try runCommand.")
|
||||
env := append(os.Environ(), environ(envEntries)...)
|
||||
|
||||
find, err := runCommand(ctx, config.Spec.Discover.Find.Command, workdir, env)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("error running find command: %w", err)
|
||||
}
|
||||
|
||||
if find != "" {
|
||||
return true, nil
|
||||
}
|
||||
return false, nil
|
||||
return false, false, nil
|
||||
}
|
||||
|
||||
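For clarity, a sketch of how a caller distinguishes the two new return values: a repository can be unsupported either because discovery ran and found nothing, or because no discovery rules are configured at all, in which case applications must reference the plugin by name. The handleMatch wrapper is hypothetical; matchRepository and the apiclient types are those in the diff.

// handleMatch is a hypothetical caller illustrating the semantics of the
// (isSupported, isDiscoveryEnabled) pair returned by matchRepository.
func (s *Service) handleMatch(ctx context.Context, workDir, appRelPath string, env []*apiclient.EnvEntry) (*apiclient.RepositoryResponse, error) {
	isSupported, isDiscoveryEnabled, err := s.matchRepository(ctx, workDir, env, appRelPath)
	if err != nil {
		return nil, fmt.Errorf("match repository error: %w", err)
	}
	if !isDiscoveryEnabled {
		// No discovery rules configured: isSupported is always false here,
		// and apps are expected to name the plugin explicitly.
		return &apiclient.RepositoryResponse{IsSupported: false, IsDiscoveryEnabled: false}, nil
	}
	return &apiclient.RepositoryResponse{IsSupported: isSupported, IsDiscoveryEnabled: true}, nil
}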
// ParametersAnnouncementStream defines an interface able to send/receive a stream of parameter announcements.
|
||||
@@ -358,7 +395,7 @@ func (s *Service) GetParametersAnnouncement(stream apiclient.ConfigManagementPlu
|
||||
return fmt.Errorf("illegal appPath: out of workDir bound")
|
||||
}
|
||||
|
||||
repoResponse, err := getParametersAnnouncement(bufferedCtx, appPath, s.initConstants.PluginConfig.Spec.Parameters.Static, s.initConstants.PluginConfig.Spec.Parameters.Dynamic)
|
||||
repoResponse, err := getParametersAnnouncement(bufferedCtx, appPath, s.initConstants.PluginConfig.Spec.Parameters.Static, s.initConstants.PluginConfig.Spec.Parameters.Dynamic, metadata.GetEnv())
|
||||
if err != nil {
|
||||
return fmt.Errorf("get parameters announcement error: %w", err)
|
||||
}
|
||||
@@ -370,11 +407,12 @@ func (s *Service) GetParametersAnnouncement(stream apiclient.ConfigManagementPlu
|
||||
return nil
|
||||
}
|
||||
|
||||
func getParametersAnnouncement(ctx context.Context, appDir string, announcements []*repoclient.ParameterAnnouncement, command Command) (*apiclient.ParametersAnnouncementResponse, error) {
|
||||
func getParametersAnnouncement(ctx context.Context, appDir string, announcements []*repoclient.ParameterAnnouncement, command Command, envEntries []*apiclient.EnvEntry) (*apiclient.ParametersAnnouncementResponse, error) {
|
||||
augmentedAnnouncements := announcements
|
||||
|
||||
if len(command.Command) > 0 {
|
||||
stdout, err := runCommand(ctx, command, appDir, os.Environ())
|
||||
env := append(os.Environ(), environ(envEntries)...)
|
||||
stdout, err := runCommand(ctx, command, appDir, env)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error executing dynamic parameter output command: %w", err)
|
||||
}
|
||||
|
||||
@@ -44,6 +44,7 @@ message ManifestResponse {
|
||||
|
||||
message RepositoryResponse {
|
||||
bool isSupported = 1;
|
||||
bool isDiscoveryEnabled = 2;
|
||||
}
|
||||
|
||||
// ParametersAnnouncementResponse contains a list of announcements. This list represents all the parameters which a CMP
|
||||
|
||||
@@ -6,6 +6,7 @@ import (
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
@@ -99,11 +100,12 @@ func TestMatchRepository(t *testing.T) {
|
||||
f := setup(t, withDiscover(d))
|
||||
|
||||
// when
|
||||
match, err := f.service.matchRepository(context.Background(), f.path, f.env)
|
||||
match, discovery, err := f.service.matchRepository(context.Background(), f.path, f.env, ".")
|
||||
|
||||
// then
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, match)
|
||||
assert.True(t, discovery)
|
||||
})
|
||||
t.Run("will not match plugin by filename if file not found", func(t *testing.T) {
|
||||
// given
|
||||
@@ -113,11 +115,12 @@ func TestMatchRepository(t *testing.T) {
|
||||
f := setup(t, withDiscover(d))
|
||||
|
||||
// when
|
||||
match, err := f.service.matchRepository(context.Background(), f.path, f.env)
|
||||
match, discovery, err := f.service.matchRepository(context.Background(), f.path, f.env, ".")
|
||||
|
||||
// then
|
||||
assert.NoError(t, err)
|
||||
assert.False(t, match)
|
||||
assert.True(t, discovery)
|
||||
})
|
||||
t.Run("will not match a pattern with a syntax error", func(t *testing.T) {
|
||||
// given
|
||||
@@ -127,7 +130,7 @@ func TestMatchRepository(t *testing.T) {
|
||||
f := setup(t, withDiscover(d))
|
||||
|
||||
// when
|
||||
_, err := f.service.matchRepository(context.Background(), f.path, f.env)
|
||||
_, _, err := f.service.matchRepository(context.Background(), f.path, f.env, ".")
|
||||
|
||||
// then
|
||||
assert.ErrorContains(t, err, "syntax error")
|
||||
@@ -142,11 +145,12 @@ func TestMatchRepository(t *testing.T) {
|
||||
f := setup(t, withDiscover(d))
|
||||
|
||||
// when
|
||||
match, err := f.service.matchRepository(context.Background(), f.path, f.env)
|
||||
match, discovery, err := f.service.matchRepository(context.Background(), f.path, f.env, ".")
|
||||
|
||||
// then
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, match)
|
||||
assert.True(t, discovery)
|
||||
})
|
||||
t.Run("will not match plugin by glob if not found", func(t *testing.T) {
|
||||
// given
|
||||
@@ -158,11 +162,12 @@ func TestMatchRepository(t *testing.T) {
|
||||
f := setup(t, withDiscover(d))
|
||||
|
||||
// when
|
||||
match, err := f.service.matchRepository(context.Background(), f.path, f.env)
|
||||
match, discovery, err := f.service.matchRepository(context.Background(), f.path, f.env, ".")
|
||||
|
||||
// then
|
||||
assert.NoError(t, err)
|
||||
assert.False(t, match)
|
||||
assert.True(t, discovery)
|
||||
})
|
||||
t.Run("will throw an error for a bad pattern", func(t *testing.T) {
|
||||
// given
|
||||
@@ -174,7 +179,7 @@ func TestMatchRepository(t *testing.T) {
|
||||
f := setup(t, withDiscover(d))
|
||||
|
||||
// when
|
||||
_, err := f.service.matchRepository(context.Background(), f.path, f.env)
|
||||
_, _, err := f.service.matchRepository(context.Background(), f.path, f.env, ".")
|
||||
|
||||
// then
|
||||
assert.ErrorContains(t, err, "error finding glob match for pattern")
|
||||
@@ -191,11 +196,12 @@ func TestMatchRepository(t *testing.T) {
|
||||
f := setup(t, withDiscover(d))
|
||||
|
||||
// when
|
||||
match, err := f.service.matchRepository(context.Background(), f.path, f.env)
|
||||
match, discovery, err := f.service.matchRepository(context.Background(), f.path, f.env, ".")
|
||||
|
||||
// then
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, match)
|
||||
assert.True(t, discovery)
|
||||
})
|
||||
t.Run("will not match plugin by command when returns no output", func(t *testing.T) {
|
||||
// given
|
||||
@@ -209,11 +215,11 @@ func TestMatchRepository(t *testing.T) {
|
||||
f := setup(t, withDiscover(d))
|
||||
|
||||
// when
|
||||
match, err := f.service.matchRepository(context.Background(), f.path, f.env)
|
||||
|
||||
match, discovery, err := f.service.matchRepository(context.Background(), f.path, f.env, ".")
|
||||
// then
|
||||
assert.NoError(t, err)
|
||||
assert.False(t, match)
|
||||
assert.True(t, discovery)
|
||||
})
|
||||
t.Run("will match plugin because env var defined", func(t *testing.T) {
|
||||
// given
|
||||
@@ -227,11 +233,12 @@ func TestMatchRepository(t *testing.T) {
|
||||
f := setup(t, withDiscover(d))
|
||||
|
||||
// when
|
||||
match, err := f.service.matchRepository(context.Background(), f.path, f.env)
|
||||
match, discovery, err := f.service.matchRepository(context.Background(), f.path, f.env, ".")
|
||||
|
||||
// then
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, match)
|
||||
assert.True(t, discovery)
|
||||
})
|
||||
t.Run("will not match plugin because no env var defined", func(t *testing.T) {
|
||||
// given
|
||||
@@ -246,11 +253,12 @@ func TestMatchRepository(t *testing.T) {
|
||||
f := setup(t, withDiscover(d))
|
||||
|
||||
// when
|
||||
match, err := f.service.matchRepository(context.Background(), f.path, f.env)
|
||||
match, discovery, err := f.service.matchRepository(context.Background(), f.path, f.env, ".")
|
||||
|
||||
// then
|
||||
assert.NoError(t, err)
|
||||
assert.False(t, match)
|
||||
assert.True(t, discovery)
|
||||
})
|
||||
t.Run("will not match plugin by command when command fails", func(t *testing.T) {
|
||||
// given
|
||||
@@ -264,11 +272,25 @@ func TestMatchRepository(t *testing.T) {
|
||||
f := setup(t, withDiscover(d))
|
||||
|
||||
// when
|
||||
match, err := f.service.matchRepository(context.Background(), f.path, f.env)
|
||||
match, discovery, err := f.service.matchRepository(context.Background(), f.path, f.env, ".")
|
||||
|
||||
// then
|
||||
assert.Error(t, err)
|
||||
assert.False(t, match)
|
||||
assert.True(t, discovery)
|
||||
})
|
||||
t.Run("will not match plugin as discovery is not set", func(t *testing.T) {
|
||||
// given
|
||||
d := Discover{}
|
||||
f := setup(t, withDiscover(d))
|
||||
|
||||
// when
|
||||
match, discovery, err := f.service.matchRepository(context.Background(), f.path, f.env, ".")
|
||||
|
||||
// then
|
||||
assert.NoError(t, err)
|
||||
assert.False(t, match)
|
||||
assert.False(t, discovery)
|
||||
})
|
||||
}
|
||||
|
||||
@@ -347,6 +369,28 @@ func TestRunCommandEmptyCommand(t *testing.T) {
|
||||
assert.ErrorContains(t, err, "Command is empty")
|
||||
}
|
||||
|
||||
// TestRunCommandContextTimeoutWithCleanup makes sure that the process is given enough time to clean up before SIGKILL is sent.
|
||||
func TestRunCommandContextTimeoutWithCleanup(t *testing.T) {
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 900*time.Millisecond)
|
||||
defer cancel()
|
||||
|
||||
// Use a subshell so there's a child command.
|
||||
// This command sleeps for 4 seconds, which is currently less than the 5-second delay between the SIGTERM and SIGKILL signals, and then exits successfully.
|
||||
command := Command{
|
||||
Command: []string{"sh", "-c"},
|
||||
Args: []string{`(trap 'echo "cleanup completed"; exit' TERM; sleep 4)`},
|
||||
}
|
||||
|
||||
before := time.Now()
|
||||
output, err := runCommand(ctx, command, "", []string{})
|
||||
after := time.Now()
|
||||
|
||||
assert.Error(t, err) // The command should time out, causing an error.
|
||||
assert.Less(t, after.Sub(before), 1*time.Second)
|
||||
// The command should still have completed the cleanup after termination.
|
||||
assert.Contains(t, output, "cleanup completed")
|
||||
}
|
||||
|
||||
func Test_getParametersAnnouncement_empty_command(t *testing.T) {
|
||||
staticYAML := `
|
||||
- name: static-a
|
||||
@@ -359,7 +403,7 @@ func Test_getParametersAnnouncement_empty_command(t *testing.T) {
|
||||
Command: []string{"echo"},
|
||||
Args: []string{`[]`},
|
||||
}
|
||||
res, err := getParametersAnnouncement(context.Background(), "", *static, command)
|
||||
res, err := getParametersAnnouncement(context.Background(), "", *static, command, []*apiclient.EnvEntry{})
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, []*repoclient.ParameterAnnouncement{{Name: "static-a"}, {Name: "static-b"}}, res.ParameterAnnouncements)
|
||||
}
|
||||
@@ -373,7 +417,7 @@ func Test_getParametersAnnouncement_no_command(t *testing.T) {
|
||||
err := yaml.Unmarshal([]byte(staticYAML), static)
|
||||
require.NoError(t, err)
|
||||
command := Command{}
|
||||
res, err := getParametersAnnouncement(context.Background(), "", *static, command)
|
||||
res, err := getParametersAnnouncement(context.Background(), "", *static, command, []*apiclient.EnvEntry{})
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, []*repoclient.ParameterAnnouncement{{Name: "static-a"}, {Name: "static-b"}}, res.ParameterAnnouncements)
|
||||
}
|
||||
@@ -390,7 +434,7 @@ func Test_getParametersAnnouncement_static_and_dynamic(t *testing.T) {
|
||||
Command: []string{"echo"},
|
||||
Args: []string{`[{"name": "dynamic-a"}, {"name": "dynamic-b"}]`},
|
||||
}
|
||||
res, err := getParametersAnnouncement(context.Background(), "", *static, command)
|
||||
res, err := getParametersAnnouncement(context.Background(), "", *static, command, []*apiclient.EnvEntry{})
|
||||
require.NoError(t, err)
|
||||
expected := []*repoclient.ParameterAnnouncement{
|
||||
{Name: "dynamic-a"},
|
||||
@@ -406,7 +450,7 @@ func Test_getParametersAnnouncement_invalid_json(t *testing.T) {
|
||||
Command: []string{"echo"},
|
||||
Args: []string{`[`},
|
||||
}
|
||||
_, err := getParametersAnnouncement(context.Background(), "", []*repoclient.ParameterAnnouncement{}, command)
|
||||
_, err := getParametersAnnouncement(context.Background(), "", []*repoclient.ParameterAnnouncement{}, command, []*apiclient.EnvEntry{})
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "unexpected end of JSON input")
|
||||
}
|
||||
@@ -416,7 +460,7 @@ func Test_getParametersAnnouncement_bad_command(t *testing.T) {
|
||||
Command: []string{"exit"},
|
||||
Args: []string{"1"},
|
||||
}
|
||||
_, err := getParametersAnnouncement(context.Background(), "", []*repoclient.ParameterAnnouncement{}, command)
|
||||
_, err := getParametersAnnouncement(context.Background(), "", []*repoclient.ParameterAnnouncement{}, command, []*apiclient.EnvEntry{})
|
||||
assert.Error(t, err)
|
||||
assert.Contains(t, err.Error(), "error executing dynamic parameter output command")
|
||||
}
|
||||
@@ -708,16 +752,17 @@ func TestService_GetParametersAnnouncement(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
|
||||
t.Run("successful response", func(t *testing.T) {
|
||||
s, err := NewMockParametersAnnouncementStream("./testdata/kustomize", "./testdata/kustomize", nil)
|
||||
s, err := NewMockParametersAnnouncementStream("./testdata/kustomize", "./testdata/kustomize", []string{"MUST_BE_SET=yep"})
|
||||
require.NoError(t, err)
|
||||
err = service.GetParametersAnnouncement(s)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, s.response)
|
||||
require.Len(t, s.response.ParameterAnnouncements, 1)
|
||||
assert.Equal(t, repoclient.ParameterAnnouncement{Name: "test-param", String_: "test-value"}, *s.response.ParameterAnnouncements[0])
|
||||
require.Len(t, s.response.ParameterAnnouncements, 2)
|
||||
assert.Equal(t, repoclient.ParameterAnnouncement{Name: "dynamic-test-param", String_: "yep"}, *s.response.ParameterAnnouncements[0])
|
||||
assert.Equal(t, repoclient.ParameterAnnouncement{Name: "test-param", String_: "test-value"}, *s.response.ParameterAnnouncements[1])
|
||||
})
|
||||
t.Run("out of bounds app", func(t *testing.T) {
|
||||
s, err := NewMockParametersAnnouncementStream("./testdata/kustomize", "./testdata/kustomize", nil)
|
||||
s, err := NewMockParametersAnnouncementStream("./testdata/kustomize", "./testdata/kustomize", []string{"MUST_BE_SET=yep"})
|
||||
require.NoError(t, err)
|
||||
// set a malicious app path on the metadata
|
||||
s.metadataRequest.Request.(*apiclient.AppStreamRequest_Metadata).Metadata.AppRelPath = "../out-of-bounds"
|
||||
@@ -725,4 +770,38 @@ func TestService_GetParametersAnnouncement(t *testing.T) {
|
||||
require.ErrorContains(t, err, "illegal appPath")
|
||||
require.Nil(t, s.response)
|
||||
})
|
||||
t.Run("fails when script fails", func(t *testing.T) {
|
||||
s, err := NewMockParametersAnnouncementStream("./testdata/kustomize", "./testdata/kustomize", []string{"WRONG_ENV_VAR=oops"})
|
||||
require.NoError(t, err)
|
||||
err = service.GetParametersAnnouncement(s)
|
||||
require.ErrorContains(t, err, "error executing dynamic parameter output command")
|
||||
require.Nil(t, s.response)
|
||||
})
|
||||
}
|
||||
|
||||
func Test_getCommandArgsToLog(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
args []string
|
||||
expected string
|
||||
}{
|
||||
{
|
||||
name: "no spaces",
|
||||
args: []string{"sh", "-c", "cat"},
|
||||
expected: "sh -c cat",
|
||||
},
|
||||
{
|
||||
name: "spaces",
|
||||
args: []string{"sh", "-c", `echo "hello world"`},
|
||||
expected: `sh -c "echo \"hello world\""`,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range testCases {
|
||||
tcc := tc
|
||||
t.Run(tcc.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
assert.Equal(t, tcc.expected, getCommandArgsToLog(exec.Command(tcc.args[0], tcc.args[1:]...)))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -14,3 +14,7 @@ func newSysProcAttr(setpgid bool) *syscall.SysProcAttr {
|
||||
func sysCallKill(pid int) error {
|
||||
return syscall.Kill(pid, syscall.SIGKILL)
|
||||
}
|
||||
|
||||
func sysCallTerm(pid int) error {
|
||||
return syscall.Kill(pid, syscall.SIGTERM)
|
||||
}
|
||||
|
||||
@@ -14,3 +14,7 @@ func newSysProcAttr(setpgid bool) *syscall.SysProcAttr {
|
||||
func sysCallKill(pid int) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func sysCallTerm(pid int) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -5,9 +5,15 @@ metadata:
|
||||
spec:
|
||||
version: v1.0
|
||||
init:
|
||||
command: [kustomize, version]
|
||||
command: [sh, -c]
|
||||
args:
|
||||
- |
|
||||
kustomize version
|
||||
generate:
|
||||
command: [sh, -c, "kustomize build"]
|
||||
command: [sh, -c]
|
||||
args:
|
||||
- |
|
||||
kustomize build
|
||||
discover:
|
||||
find:
|
||||
command: [sh, -c, find . -name kustomization.yaml]
|
||||
@@ -16,3 +22,12 @@ spec:
|
||||
static:
|
||||
- name: test-param
|
||||
string: test-value
|
||||
dynamic:
|
||||
command: [sh, -c]
|
||||
args:
|
||||
- |
|
||||
# Make sure env vars are making it to the plugin.
|
||||
if [ -z "$MUST_BE_SET" ]; then
|
||||
exit 1
|
||||
fi
|
||||
echo "[{\"name\": \"dynamic-test-param\", \"string\": \"$MUST_BE_SET\"}]"
|
||||
|
||||
@@ -1,12 +1,15 @@
|
||||
package common
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
"time"
|
||||
|
||||
"github.com/sirupsen/logrus"
|
||||
"google.golang.org/grpc/codes"
|
||||
"google.golang.org/grpc/status"
|
||||
)
|
||||
|
||||
// Default service addresses and URLS of Argo CD internal services
|
||||
@@ -222,13 +225,7 @@ const (
|
||||
// DefaultCMPWorkDirName defines the work directory name used by the cmp-server
|
||||
DefaultCMPWorkDirName = "_cmp_server"
|
||||
|
||||
ConfigMapPluginDeprecationWarning = "argocd-cm plugins are deprecated, and support will be removed in v2.6. Upgrade your plugin to be installed via sidecar. https://argo-cd.readthedocs.io/en/stable/user-guide/config-management-plugins/"
|
||||
|
||||
ConfigMapPluginCLIDeprecationWarning = "spec.plugin.name is set, which means this Application uses a plugin installed in the " +
|
||||
"argocd-cm ConfigMap. Installing plugins via that ConfigMap is deprecated in Argo CD v2.5. " +
|
||||
"Starting in Argo CD v2.6, this Application will fail to sync. Contact your Argo CD admin " +
|
||||
"to make sure an upgrade plan is in place. More info: " +
|
||||
"https://argo-cd.readthedocs.io/en/latest/operator-manual/upgrading/2.4-2.5/"
|
||||
ConfigMapPluginDeprecationWarning = "argocd-cm plugins are deprecated, and support will be removed in v2.7. Upgrade your plugin to be installed via sidecar. https://argo-cd.readthedocs.io/en/stable/user-guide/config-management-plugins/"
|
||||
)
|
||||
|
||||
const (
|
||||
@@ -308,11 +305,21 @@ const (
|
||||
|
||||
// Security severity logging
|
||||
const (
|
||||
SecurityField = "security"
|
||||
SecurityCWEField = "CWE"
|
||||
SecurityEmergency = 5 // Indicates unmistakably malicious events that should NEVER occur accidentally and indicates an active attack (i.e. brute forcing, DoS)
|
||||
SecurityCritical = 4 // Indicates any malicious or exploitable event that had a side effect (i.e. secrets being left behind on the filesystem)
|
||||
SecurityHigh = 3 // Indicates likely malicious events but one that had no side effects or was blocked (i.e. out of bounds symlinks in repos)
|
||||
SecurityMedium = 2 // Could indicate malicious events, but has a high likelihood of being user/system error (i.e. access denied)
|
||||
SecurityLow = 1 // Unexceptional entries (i.e. successful access logs)
|
||||
SecurityField = "security"
|
||||
// SecurityCWEField is the logs field for the CWE associated with a log line. CWE stands for Common Weakness Enumeration. See https://cwe.mitre.org/
|
||||
SecurityCWEField = "CWE"
|
||||
SecurityCWEIncompleteCleanup = 459
|
||||
SecurityCWEMissingReleaseOfFileDescriptor = 775
|
||||
SecurityEmergency = 5 // Indicates unmistakably malicious events that should NEVER occur accidentally and indicates an active attack (i.e. brute forcing, DoS)
|
||||
SecurityCritical = 4 // Indicates any malicious or exploitable event that had a side effect (i.e. secrets being left behind on the filesystem)
|
||||
SecurityHigh = 3 // Indicates likely malicious events but one that had no side effects or was blocked (i.e. out of bounds symlinks in repos)
|
||||
SecurityMedium = 2 // Could indicate malicious events, but has a high likelihood of being user/system error (i.e. access denied)
|
||||
SecurityLow = 1 // Unexceptional entries (i.e. successful access logs)
|
||||
)
|
||||
|
||||
// Common error messages
|
||||
const TokenVerificationError = "failed to verify the token"
|
||||
|
||||
var TokenVerificationErr = errors.New(TokenVerificationError)
|
||||
|
||||
var PermissionDeniedAPIError = status.Error(codes.PermissionDenied, "permission denied")
|
||||
|
||||
@@ -335,7 +335,7 @@ func (ctrl *ApplicationController) handleObjectUpdated(managedByApp map[string]b
|
||||
}
|
||||
|
||||
if !ctrl.canProcessApp(obj) {
|
||||
// Don't force refresh app if app belongs to a different controller shard
|
||||
// Don't force refresh app if app belongs to a different controller shard or is outside the allowed namespaces.
|
||||
continue
|
||||
}
|
||||
|
||||
@@ -1469,6 +1469,13 @@ func resourceStatusKey(res appv1.ResourceStatus) string {
|
||||
return strings.Join([]string{res.Group, res.Kind, res.Namespace, res.Name}, "/")
|
||||
}
|
||||
|
||||
func currentSourceEqualsSyncedSource(app *appv1.Application) bool {
|
||||
if app.Spec.HasMultipleSources() {
|
||||
return app.Spec.Sources.Equals(app.Status.Sync.ComparedTo.Sources)
|
||||
}
|
||||
return app.Spec.Source.Equals(app.Status.Sync.ComparedTo.Source)
|
||||
}
|
||||
|
||||
// needRefreshAppStatus answers if application status needs to be refreshed.
|
||||
// Returns true if application never been compared, has changed or comparison result has expired.
|
||||
// Additionally returns whether full refresh was requested or not.
|
||||
@@ -1487,14 +1494,12 @@ func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application,
|
||||
refreshType = requestedType
|
||||
reason = fmt.Sprintf("%s refresh requested", refreshType)
|
||||
} else {
|
||||
if app.Spec.HasMultipleSources() {
|
||||
if (len(app.Spec.Sources) != len(app.Status.Sync.ComparedTo.Sources)) || !reflect.DeepEqual(app.Spec.Sources, app.Status.Sync.ComparedTo.Sources) {
|
||||
reason = "atleast one of the spec.sources differs"
|
||||
compareWith = CompareWithLatestForceResolve
|
||||
}
|
||||
} else if !app.Spec.Source.Equals(app.Status.Sync.ComparedTo.Source) {
|
||||
if !currentSourceEqualsSyncedSource(app) {
|
||||
reason = "spec.source differs"
|
||||
compareWith = CompareWithLatestForceResolve
|
||||
if app.Spec.HasMultipleSources() {
|
||||
reason = "at least one of the spec.sources differs"
|
||||
}
|
||||
} else if hardExpired || softExpired {
|
||||
// The commented line below mysteriously crashes if app.Status.ReconciledAt is nil
|
||||
// reason = fmt.Sprintf("comparison expired. reconciledAt: %v, expiry: %v", app.Status.ReconciledAt, statusRefreshTimeout)
|
||||
@@ -1773,6 +1778,13 @@ func (ctrl *ApplicationController) canProcessApp(obj interface{}) bool {
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
|
||||
// Only process given app if it exists in a watched namespace, or in the
|
||||
// control plane's namespace.
|
||||
if app.Namespace != ctrl.namespace && !glob.MatchStringInList(ctrl.applicationNamespaces, app.Namespace, false) {
|
||||
return false
|
||||
}
|
||||
|
||||
if ctrl.clusterFilter != nil {
|
||||
cluster, err := ctrl.db.GetCluster(context.Background(), app.Spec.Destination.Server)
|
||||
if err != nil {
|
||||
@@ -1781,12 +1793,6 @@ func (ctrl *ApplicationController) canProcessApp(obj interface{}) bool {
|
||||
return ctrl.clusterFilter(cluster)
|
||||
}
|
||||
|
||||
// Only process given app if it exists in a watched namespace, or in the
|
||||
// control plane's namespace.
|
||||
if app.Namespace != ctrl.namespace && !glob.MatchStringInList(ctrl.applicationNamespaces, app.Namespace, false) {
|
||||
return false
|
||||
}
|
||||
|
||||
return true
|
||||
}
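As a rough illustration of the namespace gate above (not the real glob helper), the sketch below approximates `glob.MatchStringInList` with the standard library's `path.Match`: an app is processed only when it is in the controller's own namespace or matches one of the configured application-namespace patterns.

```go
package main

import (
	"fmt"
	"path"
)

// canProcess is a simplified stand-in for the check in canProcessApp above.
func canProcess(appNamespace, controllerNamespace string, allowedPatterns []string) bool {
	if appNamespace == controllerNamespace {
		return true
	}
	for _, p := range allowedPatterns {
		if ok, _ := path.Match(p, appNamespace); ok {
			return true
		}
	}
	return false
}

func main() {
	patterns := []string{"team-*"}
	fmt.Println(canProcess("team-a", "argocd", patterns))  // true
	fmt.Println(canProcess("default", "argocd", patterns)) // false
}
```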
|
||||
|
||||
|
||||
@@ -50,6 +50,7 @@ type namespacedResource struct {
|
||||
type fakeData struct {
|
||||
apps []runtime.Object
|
||||
manifestResponse *apiclient.ManifestResponse
|
||||
manifestResponses []*apiclient.ManifestResponse
|
||||
managedLiveObjs map[kube.ResourceKey]*unstructured.Unstructured
|
||||
namespacedResources map[kube.ResourceKey]namespacedResource
|
||||
configMapData map[string]string
|
||||
@@ -65,7 +66,15 @@ func newFakeController(data *fakeData) *ApplicationController {
|
||||
|
||||
// Mock out call to GenerateManifest
|
||||
mockRepoClient := mockrepoclient.RepoServerServiceClient{}
|
||||
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(data.manifestResponse, nil)
|
||||
|
||||
if len(data.manifestResponses) > 0 {
|
||||
for _, response := range data.manifestResponses {
|
||||
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(response, nil).Once()
|
||||
}
|
||||
} else {
|
||||
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(data.manifestResponse, nil)
|
||||
}
|
||||
|
||||
mockRepoClientset := mockrepoclient.Clientset{RepoServerServiceClient: &mockRepoClient}
|
||||
|
||||
secret := corev1.Secret{
|
||||
@@ -209,6 +218,64 @@ status:
|
||||
repoURL: https://github.com/argoproj/argocd-example-apps.git
|
||||
`
|
||||
|
||||
var fakeMultiSourceApp = `
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
kind: Application
|
||||
metadata:
|
||||
uid: "123"
|
||||
name: my-app
|
||||
namespace: ` + test.FakeArgoCDNamespace + `
|
||||
spec:
|
||||
destination:
|
||||
namespace: ` + test.FakeDestNamespace + `
|
||||
server: https://localhost:6443
|
||||
project: default
|
||||
sources:
|
||||
- path: some/path
|
||||
helm:
|
||||
valueFiles:
|
||||
- $values_test/values.yaml
|
||||
repoURL: https://github.com/argoproj/argocd-example-apps.git
|
||||
- path: some/other/path
|
||||
repoURL: https://github.com/argoproj/argocd-example-apps-fake.git
|
||||
- ref: values_test
|
||||
repoURL: https://github.com/argoproj/argocd-example-apps-fake-ref.git
|
||||
syncPolicy:
|
||||
automated: {}
|
||||
status:
|
||||
operationState:
|
||||
finishedAt: 2018-09-21T23:50:29Z
|
||||
message: successfully synced
|
||||
operation:
|
||||
sync:
|
||||
revisions:
|
||||
- HEAD
|
||||
- HEAD
|
||||
- HEAD
|
||||
phase: Succeeded
|
||||
startedAt: 2018-09-21T23:50:25Z
|
||||
syncResult:
|
||||
resources:
|
||||
- kind: RoleBinding
|
||||
message: |-
|
||||
rolebinding.rbac.authorization.k8s.io/always-outofsync reconciled
|
||||
rolebinding.rbac.authorization.k8s.io/always-outofsync configured
|
||||
name: always-outofsync
|
||||
namespace: default
|
||||
status: Synced
|
||||
revisions:
|
||||
- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
|
||||
- bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
|
||||
- cccccccccccccccccccccccccccccccccccccccc
|
||||
sources:
|
||||
- path: some/path
|
||||
repoURL: https://github.com/argoproj/argocd-example-apps.git
|
||||
- path: some/other/path
|
||||
repoURL: https://github.com/argoproj/argocd-example-apps-fake.git
|
||||
- path: some/other/path
|
||||
repoURL: https://github.com/argoproj/argocd-example-apps-fake-ref.git
|
||||
`
|
||||
|
||||
var fakeAppWithDestName = `
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
kind: Application
|
||||
@@ -263,6 +330,10 @@ func newFakeApp() *argoappv1.Application {
|
||||
return createFakeApp(fakeApp)
|
||||
}
|
||||
|
||||
func newFakeMultiSourceApp() *argoappv1.Application {
|
||||
return createFakeApp(fakeMultiSourceApp)
|
||||
}
|
||||
|
||||
func newFakeAppWithDestMismatch() *argoappv1.Application {
|
||||
return createFakeApp(fakeAppWithDestMismatch)
|
||||
}
|
||||
@@ -857,101 +928,133 @@ func TestSetOperationStateOnDeletedApp(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestNeedRefreshAppStatus(t *testing.T) {
|
||||
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}})
|
||||
|
||||
app := newFakeApp()
|
||||
now := metav1.Now()
|
||||
app.Status.ReconciledAt = &now
|
||||
app.Status.Sync = argoappv1.SyncStatus{
|
||||
Status: argoappv1.SyncStatusCodeSynced,
|
||||
ComparedTo: argoappv1.ComparedTo{
|
||||
Source: app.Spec.GetSource(),
|
||||
Destination: app.Spec.Destination,
|
||||
testCases := []struct {
|
||||
name string
|
||||
app *argoappv1.Application
|
||||
}{
|
||||
{
|
||||
name: "single-source app",
|
||||
app: newFakeApp(),
|
||||
},
|
||||
{
|
||||
name: "multi-source app",
|
||||
app: newFakeMultiSourceApp(),
|
||||
},
|
||||
}
|
||||
|
||||
// no need to refresh just reconciled application
|
||||
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
|
||||
assert.False(t, needRefresh)
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}})
|
||||
app := tc.app
|
||||
now := metav1.Now()
|
||||
app.Status.ReconciledAt = &now
|
||||
|
||||
// refresh app using the 'deepest' requested comparison level
|
||||
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
|
||||
ctrl.requestAppRefresh(app.Name, ComparisonWithNothing.Pointer(), nil)
|
||||
app.Status.Sync = argoappv1.SyncStatus{
|
||||
Status: argoappv1.SyncStatusCodeSynced,
|
||||
ComparedTo: argoappv1.ComparedTo{
|
||||
Destination: app.Spec.Destination,
|
||||
},
|
||||
}
|
||||
|
||||
needRefresh, refreshType, compareWith := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
|
||||
assert.True(t, needRefresh)
|
||||
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
|
||||
assert.Equal(t, CompareWithRecent, compareWith)
|
||||
if app.Spec.HasMultipleSources() {
|
||||
app.Status.Sync.ComparedTo.Sources = app.Spec.Sources
|
||||
} else {
|
||||
app.Status.Sync.ComparedTo.Source = app.Spec.GetSource()
|
||||
}
|
||||
|
||||
// refresh application which status is not reconciled using latest commit
|
||||
app.Status.Sync = argoappv1.SyncStatus{Status: argoappv1.SyncStatusCodeUnknown}
|
||||
// no need to refresh just reconciled application
|
||||
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
|
||||
assert.False(t, needRefresh)
|
||||
|
||||
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
|
||||
assert.True(t, needRefresh)
|
||||
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
|
||||
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
|
||||
// refresh app using the 'deepest' requested comparison level
|
||||
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
|
||||
ctrl.requestAppRefresh(app.Name, ComparisonWithNothing.Pointer(), nil)
|
||||
|
||||
{
|
||||
// refresh app using the 'latest' level if comparison expired
|
||||
app := app.DeepCopy()
|
||||
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
|
||||
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
|
||||
app.Status.ReconciledAt = &reconciledAt
|
||||
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Minute, 2*time.Hour)
|
||||
assert.True(t, needRefresh)
|
||||
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
|
||||
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
|
||||
}
|
||||
needRefresh, refreshType, compareWith := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
|
||||
assert.True(t, needRefresh)
|
||||
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
|
||||
assert.Equal(t, CompareWithRecent, compareWith)
|
||||
|
||||
{
|
||||
// refresh app using the 'latest' level if comparison expired for hard refresh
|
||||
app := app.DeepCopy()
|
||||
app.Status.Sync = argoappv1.SyncStatus{
|
||||
Status: argoappv1.SyncStatusCodeSynced,
|
||||
ComparedTo: argoappv1.ComparedTo{
|
||||
Source: app.Spec.GetSource(),
|
||||
Destination: app.Spec.Destination,
|
||||
},
|
||||
}
|
||||
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
|
||||
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
|
||||
app.Status.ReconciledAt = &reconciledAt
|
||||
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 2*time.Hour, 1*time.Minute)
|
||||
assert.True(t, needRefresh)
|
||||
assert.Equal(t, argoappv1.RefreshTypeHard, refreshType)
|
||||
assert.Equal(t, CompareWithLatest, compareWith)
|
||||
}
|
||||
// refresh application which status is not reconciled using latest commit
|
||||
app.Status.Sync = argoappv1.SyncStatus{Status: argoappv1.SyncStatusCodeUnknown}
|
||||
|
||||
{
|
||||
app := app.DeepCopy()
|
||||
// execute hard refresh if app has refresh annotation
|
||||
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
|
||||
app.Status.ReconciledAt = &reconciledAt
|
||||
app.Annotations = map[string]string{
|
||||
v1alpha1.AnnotationKeyRefresh: string(argoappv1.RefreshTypeHard),
|
||||
}
|
||||
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
|
||||
assert.True(t, needRefresh)
|
||||
assert.Equal(t, argoappv1.RefreshTypeHard, refreshType)
|
||||
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
|
||||
}
|
||||
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
|
||||
assert.True(t, needRefresh)
|
||||
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
|
||||
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
|
||||
|
||||
{
|
||||
app := app.DeepCopy()
|
||||
// ensure that CompareWithLatest level is used if application source has changed
|
||||
ctrl.requestAppRefresh(app.Name, ComparisonWithNothing.Pointer(), nil)
|
||||
// sample app source change
|
||||
app.Spec.Source.Helm = &argoappv1.ApplicationSourceHelm{
|
||||
Parameters: []argoappv1.HelmParameter{{
|
||||
Name: "foo",
|
||||
Value: "bar",
|
||||
}},
|
||||
}
|
||||
t.Run("refresh app using the 'latest' level if comparison expired", func(t *testing.T) {
|
||||
app := app.DeepCopy()
|
||||
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
|
||||
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
|
||||
app.Status.ReconciledAt = &reconciledAt
|
||||
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Minute, 2*time.Hour)
|
||||
assert.True(t, needRefresh)
|
||||
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
|
||||
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
|
||||
})
|
||||
|
||||
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
|
||||
assert.True(t, needRefresh)
|
||||
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
|
||||
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
|
||||
t.Run("refresh app using the 'latest' level if comparison expired for hard refresh", func(t *testing.T) {
|
||||
app := app.DeepCopy()
|
||||
app.Status.Sync = argoappv1.SyncStatus{
|
||||
Status: argoappv1.SyncStatusCodeSynced,
|
||||
ComparedTo: argoappv1.ComparedTo{
|
||||
Destination: app.Spec.Destination,
|
||||
},
|
||||
}
|
||||
if app.Spec.HasMultipleSources() {
|
||||
app.Status.Sync.ComparedTo.Sources = app.Spec.Sources
|
||||
} else {
|
||||
app.Status.Sync.ComparedTo.Source = app.Spec.GetSource()
|
||||
}
|
||||
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
|
||||
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
|
||||
app.Status.ReconciledAt = &reconciledAt
|
||||
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 2*time.Hour, 1*time.Minute)
|
||||
assert.True(t, needRefresh)
|
||||
assert.Equal(t, argoappv1.RefreshTypeHard, refreshType)
|
||||
assert.Equal(t, CompareWithLatest, compareWith)
|
||||
})
|
||||
|
||||
t.Run("execute hard refresh if app has refresh annotation", func(t *testing.T) {
|
||||
app := app.DeepCopy()
|
||||
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
|
||||
app.Status.ReconciledAt = &reconciledAt
|
||||
app.Annotations = map[string]string{
|
||||
v1alpha1.AnnotationKeyRefresh: string(argoappv1.RefreshTypeHard),
|
||||
}
|
||||
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
|
||||
assert.True(t, needRefresh)
|
||||
assert.Equal(t, argoappv1.RefreshTypeHard, refreshType)
|
||||
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
|
||||
})
|
||||
|
||||
t.Run("ensure that CompareWithLatest level is used if application source has changed", func(t *testing.T) {
|
||||
app := app.DeepCopy()
|
||||
ctrl.requestAppRefresh(app.Name, ComparisonWithNothing.Pointer(), nil)
|
||||
// sample app source change
|
||||
if app.Spec.HasMultipleSources() {
|
||||
app.Spec.Sources[0].Helm = &argoappv1.ApplicationSourceHelm{
|
||||
Parameters: []argoappv1.HelmParameter{{
|
||||
Name: "foo",
|
||||
Value: "bar",
|
||||
}},
|
||||
}
|
||||
} else {
|
||||
app.Spec.Source.Helm = &argoappv1.ApplicationSourceHelm{
|
||||
Parameters: []argoappv1.HelmParameter{{
|
||||
Name: "foo",
|
||||
Value: "bar",
|
||||
}},
|
||||
}
|
||||
}
|
||||
|
||||
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
|
||||
assert.True(t, needRefresh)
|
||||
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
|
||||
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
|
||||
})
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1373,3 +1476,31 @@ func TestToAppKey(t *testing.T) {
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_canProcessApp(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
|
||||
ctrl.applicationNamespaces = []string{"good"}
|
||||
t.Run("without cluster filter, good namespace", func(t *testing.T) {
|
||||
app.Namespace = "good"
|
||||
canProcess := ctrl.canProcessApp(app)
|
||||
assert.True(t, canProcess)
|
||||
})
|
||||
t.Run("without cluster filter, bad namespace", func(t *testing.T) {
|
||||
app.Namespace = "bad"
|
||||
canProcess := ctrl.canProcessApp(app)
|
||||
assert.False(t, canProcess)
|
||||
})
|
||||
t.Run("with cluster filter, good namespace", func(t *testing.T) {
|
||||
app.Namespace = "good"
|
||||
ctrl.clusterFilter = func(_ *argoappv1.Cluster) bool { return true }
|
||||
canProcess := ctrl.canProcessApp(app)
|
||||
assert.True(t, canProcess)
|
||||
})
|
||||
t.Run("with cluster filter, bad namespace", func(t *testing.T) {
|
||||
app.Namespace = "bad"
|
||||
ctrl.clusterFilter = func(_ *argoappv1.Cluster) bool { return true }
|
||||
canProcess := ctrl.canProcessApp(app)
|
||||
assert.False(t, canProcess)
|
||||
})
|
||||
}
|
||||
|
||||
controller/cache/cache.go (vendored, 30 changed lines)
@@ -25,6 +25,7 @@ import (
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
"k8s.io/client-go/rest"
|
||||
"k8s.io/client-go/tools/cache"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/controller/metrics"
|
||||
@@ -394,6 +395,20 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
|
||||
return nil, fmt.Errorf("error getting custom label: %w", err)
|
||||
}
|
||||
|
||||
clusterCacheConfig := cluster.RESTConfig()
|
||||
// Controller dynamically fetches all resource types available on the cluster
|
||||
// using a discovery API that may contain deprecated APIs.
|
||||
// This causes log flooding when managing a large number of clusters.
|
||||
// https://github.com/argoproj/argo-cd/issues/11973
|
||||
// However, we can safely suppress deprecation warnings
|
||||
// because we do not rely on resources with a particular API group or version.
|
||||
// https://kubernetes.io/blog/2020/09/03/warnings/#customize-client-handling
|
||||
//
|
||||
// Completely suppress warning logs only for log levels that are less than Debug.
|
||||
if log.GetLevel() < log.DebugLevel {
|
||||
clusterCacheConfig.WarningHandler = rest.NoWarnings{}
|
||||
}
|
||||
|
||||
clusterCacheOpts := []clustercache.UpdateSettingsFunc{
|
||||
clustercache.SetListSemaphore(semaphore.NewWeighted(clusterCacheListSemaphoreSize)),
|
||||
clustercache.SetListPageSize(clusterCacheListPageSize),
|
||||
@@ -425,7 +440,7 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
|
||||
clustercache.SetRetryOptions(clusterCacheAttemptLimit, clusterCacheRetryUseBackoff, isRetryableError),
|
||||
}
|
||||
|
||||
clusterCache = clustercache.NewClusterCache(cluster.RESTConfig(), clusterCacheOpts...)
|
||||
clusterCache = clustercache.NewClusterCache(clusterCacheConfig, clusterCacheOpts...)
|
||||
|
||||
_ = clusterCache.OnResourceUpdated(func(newRes *clustercache.Resource, oldRes *clustercache.Resource, namespaceResources map[kube.ResourceKey]*clustercache.Resource) {
|
||||
toNotify := make(map[string]bool)
|
||||
@@ -473,10 +488,11 @@ func (c *liveStateCache) getSyncedCluster(server string) (clustercache.ClusterCa
|
||||
func (c *liveStateCache) invalidate(cacheSettings cacheSettings) {
|
||||
log.Info("invalidating live state cache")
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
|
||||
c.cacheSettings = cacheSettings
|
||||
for _, clust := range c.clusters {
|
||||
clusters := c.clusters
|
||||
c.lock.Unlock()
|
||||
|
||||
for _, clust := range clusters {
|
||||
clust.Invalidate(clustercache.SetSettings(cacheSettings.clusterSettings))
|
||||
}
|
||||
log.Info("live state cache invalidated")
|
||||
@@ -686,12 +702,14 @@ func (c *liveStateCache) handleModEvent(oldCluster *appv1.Cluster, newCluster *a
|
||||
}
|
||||
|
||||
func (c *liveStateCache) handleDeleteEvent(clusterServer string) {
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
c.lock.RLock()
|
||||
cluster, ok := c.clusters[clusterServer]
|
||||
c.lock.RUnlock()
|
||||
if ok {
|
||||
cluster.Invalidate()
|
||||
c.lock.Lock()
|
||||
delete(c.clusters, clusterServer)
|
||||
c.lock.Unlock()
|
||||
}
|
||||
}
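The two changes above narrow the scope of the cache lock so that per-cluster `Invalidate` calls never run while the `liveStateCache` mutex is held. A self-contained sketch of the same pattern (all names hypothetical) is shown below: hold the registry lock only while reading or mutating the map, and call into the per-cluster cache only after releasing it.

```go
package main

import (
	"fmt"
	"sync"
)

type clusterCache interface{ Invalidate() }

type registry struct {
	lock     sync.RWMutex
	clusters map[string]clusterCache
}

func (r *registry) handleDelete(server string) {
	r.lock.RLock()
	c, ok := r.clusters[server]
	r.lock.RUnlock()
	if !ok {
		return
	}
	// Invalidate outside the registry lock so callbacks that re-enter the
	// registry cannot deadlock against this goroutine.
	c.Invalidate()
	r.lock.Lock()
	delete(r.clusters, server)
	r.lock.Unlock()
}

type fakeCache struct{ name string }

func (f *fakeCache) Invalidate() { fmt.Println("invalidated", f.name) }

func main() {
	r := &registry{clusters: map[string]clusterCache{
		"https://mycluster": &fakeCache{name: "mycluster"},
	}}
	r.handleDelete("https://mycluster")
}
```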
|
||||
|
||||
|
||||
controller/cache/cache_test.go (vendored, 97 changed lines)
@@ -1,10 +1,13 @@
|
||||
package cache
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"net"
|
||||
"net/url"
|
||||
"sync"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
apierr "k8s.io/apimachinery/pkg/api/errors"
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
@@ -14,8 +17,10 @@ import (
|
||||
"github.com/argoproj/gitops-engine/pkg/cache"
|
||||
"github.com/argoproj/gitops-engine/pkg/cache/mocks"
|
||||
"github.com/stretchr/testify/mock"
|
||||
"k8s.io/client-go/kubernetes/fake"
|
||||
|
||||
appv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
argosettings "github.com/argoproj/argo-cd/v2/util/settings"
|
||||
)
|
||||
|
||||
type netError string
|
||||
@@ -106,6 +111,98 @@ func TestHandleAddEvent_ClusterExcluded(t *testing.T) {
|
||||
assert.Len(t, clustersCache.clusters, 0)
|
||||
}
|
||||
|
||||
func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
|
||||
testCluster := &appv1.Cluster{
|
||||
Server: "https://mycluster",
|
||||
Config: appv1.ClusterConfig{Username: "bar"},
|
||||
}
|
||||
fakeClient := fake.NewSimpleClientset()
|
||||
settingsMgr := argosettings.NewSettingsManager(context.TODO(), fakeClient, "argocd")
|
||||
externalLockRef := sync.RWMutex{}
|
||||
gitopsEngineClusterCache := &mocks.ClusterCache{}
|
||||
clustersCache := liveStateCache{
|
||||
clusters: map[string]cache.ClusterCache{
|
||||
testCluster.Server: gitopsEngineClusterCache,
|
||||
},
|
||||
clusterFilter: func(cluster *appv1.Cluster) bool {
|
||||
return true
|
||||
},
|
||||
settingsMgr: settingsMgr,
|
||||
// Set the lock here so we can reference it later
|
||||
// nolint We need to overwrite here to have access to the lock
|
||||
lock: externalLockRef,
|
||||
}
|
||||
channel := make(chan string)
|
||||
// Mocked lock held by the gitops-engine cluster cache
|
||||
mockMutex := sync.RWMutex{}
|
||||
// Locks to force trigger condition during test
|
||||
// Condition order:
|
||||
// EnsuredSynced -> Locks gitops-engine
|
||||
// handleDeleteEvent -> Locks liveStateCache
|
||||
// EnsureSynced via sync, newResource, populateResourceInfoHandler -> attempts to Lock liveStateCache
|
||||
// handleDeleteEvent via cluster.Invalidate -> attempts to Lock gitops-engine
|
||||
handleDeleteWasCalled := sync.Mutex{}
|
||||
engineHoldsLock := sync.Mutex{}
|
||||
handleDeleteWasCalled.Lock()
|
||||
engineHoldsLock.Lock()
|
||||
gitopsEngineClusterCache.On("EnsureSynced").Run(func(args mock.Arguments) {
|
||||
// Held by EnsureSync calling into sync and watchEvents
|
||||
mockMutex.Lock()
|
||||
defer mockMutex.Unlock()
|
||||
// Continue Execution of timer func
|
||||
engineHoldsLock.Unlock()
|
||||
// Wait for handleDeleteEvent to be called triggering the lock
|
||||
// on the liveStateCache
|
||||
handleDeleteWasCalled.Lock()
|
||||
t.Logf("handleDelete was called, EnsureSynced continuing...")
|
||||
handleDeleteWasCalled.Unlock()
|
||||
// Try and obtain the lock on the liveStateCache
|
||||
alreadyFailed := !externalLockRef.TryLock()
|
||||
if alreadyFailed {
|
||||
channel <- "DEADLOCKED -- EnsureSynced could not obtain lock on liveStateCache"
|
||||
return
|
||||
}
|
||||
externalLockRef.Lock()
|
||||
t.Logf("EnsureSynce was able to lock liveStateCache")
|
||||
externalLockRef.Unlock()
|
||||
}).Return(nil).Once()
|
||||
gitopsEngineClusterCache.On("Invalidate").Run(func(args mock.Arguments) {
|
||||
// If deadlock is fixed should be able to acquire lock here
|
||||
alreadyFailed := !mockMutex.TryLock()
|
||||
if alreadyFailed {
|
||||
channel <- "DEADLOCKED -- Invalidate could not obtain lock on gitops-engine"
|
||||
return
|
||||
}
|
||||
mockMutex.Lock()
|
||||
t.Logf("Invalidate was able to lock gitops-engine cache")
|
||||
mockMutex.Unlock()
|
||||
}).Return()
|
||||
go func() {
|
||||
// Start the gitops-engine lock holds
|
||||
go func() {
|
||||
err := gitopsEngineClusterCache.EnsureSynced()
|
||||
if err != nil {
|
||||
assert.Fail(t, err.Error())
|
||||
}
|
||||
}()
|
||||
// Wait for EnsureSynced to grab the lock for gitops-engine
|
||||
engineHoldsLock.Lock()
|
||||
t.Log("EnsureSynced has obtained lock on gitops-engine")
|
||||
engineHoldsLock.Unlock()
|
||||
// Run in background
|
||||
go clustersCache.handleDeleteEvent(testCluster.Server)
|
||||
// Allow execution to continue on clusters cache call to trigger lock
|
||||
handleDeleteWasCalled.Unlock()
|
||||
channel <- "PASSED"
|
||||
}()
|
||||
select {
|
||||
case str := <-channel:
|
||||
assert.Equal(t, "PASSED", str, str)
|
||||
case <-time.After(5 * time.Second):
|
||||
assert.Fail(t, "Ended up in deadlock")
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsRetryableError(t *testing.T) {
|
||||
var (
|
||||
tlsHandshakeTimeoutErr net.Error = netError("net/http: TLS handshake timeout")
|
||||
|
||||
@@ -107,7 +107,7 @@ type appStateManager struct {
|
||||
persistResourceHealth bool
|
||||
}
|
||||
|
||||
func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject) ([]*unstructured.Unstructured, map[*v1alpha1.ApplicationSource]*apiclient.ManifestResponse, error) {
|
||||
func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, error) {
|
||||
|
||||
ts := stats.NewTimingStats()
|
||||
helmRepos, err := m.db.ListHelmRepositories(context.Background())
|
||||
@@ -164,7 +164,7 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, sources []v1alp
|
||||
}
|
||||
defer io.Close(conn)
|
||||
|
||||
manifestInfoMap := make(map[*v1alpha1.ApplicationSource]*apiclient.ManifestResponse)
|
||||
manifestInfos := make([]*apiclient.ManifestResponse, 0)
|
||||
targetObjs := make([]*unstructured.Unstructured, 0)
|
||||
|
||||
// Store the map of all sources having ref field into a map for applications with sources field
|
||||
@@ -215,20 +215,14 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, sources []v1alp
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
// GenerateManifest can return empty ManifestResponse without error if app has multiple sources
|
||||
// and if any of the source does not have path and chart field not specified.
|
||||
// In that scenario, we continue to the next source
|
||||
if app.Spec.HasMultipleSources() && len(manifestInfo.Manifests) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
targetObj, err := unmarshalManifests(manifestInfo.Manifests)
|
||||
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
targetObjs = append(targetObjs, targetObj...)
|
||||
manifestInfoMap[&source] = manifestInfo
|
||||
|
||||
manifestInfos = append(manifestInfos, manifestInfo)
|
||||
}
|
||||
|
||||
ts.AddCheckpoint("unmarshal_ms")
|
||||
@@ -238,7 +232,7 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, sources []v1alp
|
||||
}
|
||||
logCtx = logCtx.WithField("time_ms", time.Since(ts.StartTime).Milliseconds())
|
||||
logCtx.Info("getRepoObjs stats")
|
||||
return targetObjs, manifestInfoMap, nil
|
||||
return targetObjs, manifestInfos, nil
|
||||
}
|
||||
|
||||
func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, error) {
|
||||
@@ -399,7 +393,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
|
||||
var targetObjs []*unstructured.Unstructured
|
||||
now := metav1.Now()
|
||||
|
||||
var manifestInfoMap map[*v1alpha1.ApplicationSource]*apiclient.ManifestResponse
|
||||
var manifestInfos []*apiclient.ManifestResponse
|
||||
|
||||
if len(localManifests) == 0 {
|
||||
// If the length of revisions is not same as the length of sources,
|
||||
@@ -411,7 +405,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
|
||||
}
|
||||
}
|
||||
|
||||
targetObjs, manifestInfoMap, err = m.getRepoObjs(app, sources, appLabelKey, revisions, noCache, noRevisionCache, verifySignature, project)
|
||||
targetObjs, manifestInfos, err = m.getRepoObjs(app, sources, appLabelKey, revisions, noCache, noRevisionCache, verifySignature, project)
|
||||
if err != nil {
|
||||
targetObjs = make([]*unstructured.Unstructured, 0)
|
||||
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error(), LastTransitionTime: &now})
|
||||
@@ -434,9 +428,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
|
||||
}
|
||||
}
|
||||
// empty out manifestInfoMap
|
||||
for as := range manifestInfoMap {
|
||||
delete(manifestInfoMap, as)
|
||||
}
|
||||
manifestInfos = make([]*apiclient.ManifestResponse, 0)
|
||||
}
|
||||
ts.AddCheckpoint("git_ms")
|
||||
|
||||
@@ -471,7 +463,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
|
||||
failedToLoadObjs = true
|
||||
}
|
||||
|
||||
logCtx.Debugf("Retrieved lived manifests")
|
||||
logCtx.Debugf("Retrieved live manifests")
|
||||
|
||||
// filter out all resources which are not permitted in the application project
|
||||
for k, v := range liveObjByKey {
|
||||
@@ -516,12 +508,12 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
|
||||
}
|
||||
manifestRevisions := make([]string, 0)
|
||||
|
||||
for _, manifestInfo := range manifestInfoMap {
|
||||
for _, manifestInfo := range manifestInfos {
|
||||
manifestRevisions = append(manifestRevisions, manifestInfo.Revision)
|
||||
}
|
||||
|
||||
// restore comparison using cached diff result if previous comparison was performed for the same revision
|
||||
revisionChanged := len(manifestInfoMap) != len(sources) || !reflect.DeepEqual(app.Status.Sync.Revisions, manifestRevisions)
|
||||
revisionChanged := len(manifestInfos) != len(sources) || !reflect.DeepEqual(app.Status.Sync.Revisions, manifestRevisions)
|
||||
specChanged := !reflect.DeepEqual(app.Status.Sync.ComparedTo, appv1.ComparedTo{Source: app.Spec.GetSource(), Destination: app.Spec.Destination, Sources: sources})
|
||||
|
||||
_, refreshRequested := app.IsRefreshRequested()
|
||||
@@ -688,7 +680,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
|
||||
// Git has already performed the signature verification via its GPG interface, and the result is available
|
||||
// in the manifest info received from the repository server. We now need to form our opinion about the result
|
||||
// and stop processing if we do not agree about the outcome.
|
||||
for _, manifestInfo := range manifestInfoMap {
|
||||
for _, manifestInfo := range manifestInfos {
|
||||
if gpg.IsGPGEnabled() && verifySignature && manifestInfo != nil {
|
||||
conditions = append(conditions, verifyGnuPGSignature(manifestInfo.Revision, project, manifestInfo)...)
|
||||
}
|
||||
@@ -705,11 +697,11 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
|
||||
}
|
||||
|
||||
if hasMultipleSources {
|
||||
for _, manifestInfo := range manifestInfoMap {
|
||||
for _, manifestInfo := range manifestInfos {
|
||||
compRes.appSourceTypes = append(compRes.appSourceTypes, appv1.ApplicationSourceType(manifestInfo.SourceType))
|
||||
}
|
||||
} else {
|
||||
for _, manifestInfo := range manifestInfoMap {
|
||||
for _, manifestInfo := range manifestInfos {
|
||||
compRes.appSourceType = v1alpha1.ApplicationSourceType(manifestInfo.SourceType)
|
||||
break
|
||||
}
|
||||
|
||||
@@ -233,6 +233,74 @@ func TestCompareAppStateExtraHook(t *testing.T) {
|
||||
assert.Equal(t, 0, len(app.Status.Conditions))
|
||||
}
|
||||
|
||||
// TestAppRevisionsSingleSource tests that revisions are properly propagated for a single-source app
|
||||
func TestAppRevisionsSingleSource(t *testing.T) {
|
||||
obj1 := NewPod()
|
||||
obj1.SetNamespace(test.FakeDestNamespace)
|
||||
data := fakeData{
|
||||
manifestResponse: &apiclient.ManifestResponse{
|
||||
Manifests: []string{toJSON(t, obj1)},
|
||||
Namespace: test.FakeDestNamespace,
|
||||
Server: test.FakeClusterURL,
|
||||
Revision: "abc123",
|
||||
},
|
||||
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
|
||||
}
|
||||
ctrl := newFakeController(&data)
|
||||
|
||||
app := newFakeApp()
|
||||
revisions := make([]string, 0)
|
||||
revisions = append(revisions, "")
|
||||
compRes := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, app.Spec.GetSources(), false, false, nil, app.Spec.HasMultipleSources())
|
||||
assert.NotNil(t, compRes)
|
||||
assert.NotNil(t, compRes.syncStatus)
|
||||
assert.NotEmpty(t, compRes.syncStatus.Revision)
|
||||
assert.Len(t, compRes.syncStatus.Revisions, 0)
|
||||
|
||||
}
|
||||
|
||||
// TestAppRevisionsMultiSource tests that revisions are properly propagated for a multi-source app
|
||||
func TestAppRevisionsMultiSource(t *testing.T) {
|
||||
obj1 := NewPod()
|
||||
obj1.SetNamespace(test.FakeDestNamespace)
|
||||
data := fakeData{
|
||||
manifestResponses: []*apiclient.ManifestResponse{
|
||||
{
|
||||
Manifests: []string{toJSON(t, obj1)},
|
||||
Namespace: test.FakeDestNamespace,
|
||||
Server: test.FakeClusterURL,
|
||||
Revision: "abc123",
|
||||
},
|
||||
{
|
||||
Manifests: []string{toJSON(t, obj1)},
|
||||
Namespace: test.FakeDestNamespace,
|
||||
Server: test.FakeClusterURL,
|
||||
Revision: "def456",
|
||||
},
|
||||
{
|
||||
Manifests: []string{},
|
||||
Namespace: test.FakeDestNamespace,
|
||||
Server: test.FakeClusterURL,
|
||||
Revision: "ghi789",
|
||||
},
|
||||
},
|
||||
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
|
||||
}
|
||||
ctrl := newFakeController(&data)
|
||||
|
||||
app := newFakeMultiSourceApp()
|
||||
revisions := make([]string, 0)
|
||||
revisions = append(revisions, "")
|
||||
compRes := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, app.Spec.GetSources(), false, false, nil, app.Spec.HasMultipleSources())
|
||||
assert.NotNil(t, compRes)
|
||||
assert.NotNil(t, compRes.syncStatus)
|
||||
assert.Empty(t, compRes.syncStatus.Revision)
|
||||
assert.Len(t, compRes.syncStatus.Revisions, 3)
|
||||
assert.Equal(t, "abc123", compRes.syncStatus.Revisions[0])
|
||||
assert.Equal(t, "def456", compRes.syncStatus.Revisions[1])
|
||||
assert.Equal(t, "ghi789", compRes.syncStatus.Revisions[2])
|
||||
}
|
||||
|
||||
func toJSON(t *testing.T, obj *unstructured.Unstructured) string {
|
||||
data, err := json.Marshal(obj)
|
||||
assert.NoError(t, err)
|
||||
|
||||
docs/assets/extra_info-1.png (new binary file, 32 KiB, not shown)
docs/assets/extra_info-2.png (new binary file, 8.3 KiB, not shown)
docs/assets/extra_info.png (new binary file, 132 KiB, not shown)
@@ -9,16 +9,6 @@ setTimeout(function() {
|
||||
caret.innerHTML = "<i class='fa fa-caret-down dropdown-caret'></i>"
|
||||
caret.classList.add('dropdown-caret')
|
||||
div.querySelector('.rst-current-version').appendChild(caret);
|
||||
div.querySelector('.rst-current-version').addEventListener('click', function() {
|
||||
const classes = container.className.split(' ');
|
||||
const index = classes.indexOf('shift-up');
|
||||
if (index === -1) {
|
||||
classes.push('shift-up');
|
||||
} else {
|
||||
classes.splice(index, 1);
|
||||
}
|
||||
container.className = classes.join(' ');
|
||||
});
|
||||
}
|
||||
|
||||
var CSSLink = document.createElement('link');
|
||||
|
||||
@@ -28,7 +28,7 @@ telepresence intercept argocd-server --port 8080:http --env-file .envrc.remote #
|
||||
* `--port` forwards traffic of remote port http to 8080 locally (use `--port 8080:https` if argocd-server terminates TLS)
|
||||
* `--env-file` writes all the environment variables of the remote pod into a local file; the variables are also set on the subprocess of the `--run` command
|
||||
|
||||
With this, any traffic that hits your argocd-server service in the cluster (e.g through a LB / ingress) will be forwarded to your laptop on port 8080. So that you can now start argocd-server locally to debug or test new code. If you launch argocd-server using the environment variables in `.envrc.remote`, he is able to fetch all the configmaps, secrets and so one from the cluster and transparently connect to the other microservices so that no further configuration should be neccesary and he behaves exactly the same as in the cluster.
|
||||
With this, any traffic that hits your argocd-server service in the cluster (e.g. through a LB / ingress) will be forwarded to your laptop on port 8080. So that you can now start argocd-server locally to debug or test new code. If you launch argocd-server using the environment variables in `.envrc.remote`, it is able to fetch all the configmaps, secrets and so on from the cluster and transparently connect to the other microservices so that no further configuration should be necessary, and it behaves exactly the same as in the cluster.
|
||||
|
||||
List current status of Telepresence using:
|
||||
```shell
|
||||
|
||||
@@ -1,49 +1,78 @@
|
||||
# Release Process And Cadence
|
||||
|
||||
Argo CD is being developed using the following process:
|
||||
## Release Cycle
|
||||
|
||||
* Maintainers commit to work on set of features and enhancements and create GitHub milestone to track the work.
|
||||
* We are trying to avoid delaying release and prefer moving the feature into the next release if we cannot complete it on time.
|
||||
* The new release is published every **3 months**.
|
||||
* Critical bug-fixes are cherry-picked into the release branch and delivered using patch releases as frequently as needed.
|
||||
### Schedule
|
||||
|
||||
## Release Planning
|
||||
These are the upcoming release dates:
|
||||
|
||||
We are using GitHub milestones to perform release planning and tracking. Each release milestone includes two type of issues:
|
||||
| Release | Release Planning Meeting | Release Candidate 1 | General Availability | Release Champion | Checklist |
|
||||
|---------|--------------------------|-----------------------|----------------------|-------------------------------------------------------|---------------------------------------------------------------|
|
||||
| v2.6 | Monday, Dec. 12, 2022 | Monday, Dec. 19, 2022 | Monday, Feb. 6, 2023 | [William Tam](https://github.com/wtam2018) | [checklist](https://github.com/argoproj/argo-cd/issues/11563) |
|
||||
| v2.7 | Monday, Mar. 6, 2023 | Monday, Mar. 20, 2023 | Monday, May 1, 2023 | [Pavel Kostohrys](https://github.com/pasha-codefresh) | |
| v2.8 | Monday, Jun. 5, 2023 | Monday, Jun. 19, 2023 | Monday, Aug. 7, 2023 | [Keith Chong](https://github.com/keithchong) | |
| v2.9 | Monday, Sep. 4, 2023 | Monday, Sep. 18, 2023 | Monday, Nov. 6, 2023 | | |
|
||||
|
||||
* Issues that maintainers committed to working on. Maintainers decide which features they are committing to work on during the next release based on
|
||||
their availability. Typically issues added offline by each maintainer and finalized during the contributors' meeting. Each such issue should be
|
||||
assigned to maintainer who plans to implement and test it.
|
||||
* Nice to have improvements contributed by community contributors. Nice to have issues are typically not critical, smallish enhancements that could
|
||||
be contributed by community contributors. Maintainers are not committing to implement them but committing to review PR from the community.
|
||||
Actual release dates might differ from the plan by a few days.
|
||||
|
||||
The milestone should have a clear description of the most important features as well as the expected end date. This should provide clarity to end-users
|
||||
about what to expect from the next release and when.
|
||||
### Release Process
|
||||
|
||||
In addition to the next milestone, we need to maintain a draft of the upcoming release milestone.
|
||||
#### Minor Releases (e.g. 2.x.0)
|
||||
|
||||
## Community Contributions
|
||||
A minor Argo CD release occurs four times a year, once every three months. Each General Availability (GA) release is
|
||||
preceded by several Release Candidates (RCs). The first RC is released three weeks before the scheduled GA date. This
|
||||
effectively means that there is a three-week feature freeze.
|
||||
|
||||
We receive a lot of contributions from our awesome community, and we're very grateful for that fact. However, reviewing and testing PRs is a lot of (unplanned) work and therefore, we cannot guarantee that contributions (especially large or complex ones) made by the community receive a timely review within a release's time frame. Maintainers may decide on their own to put work on a PR together with the contributor and in this case, the maintainer will self-assigned the PR and thereby committing to review, eventually merge and later test it on the release scope.
|
||||
These are the approximate release dates:
|
||||
|
||||
## Release Testing
|
||||
* The first Monday of February
|
||||
* The first Monday of May
|
||||
* The first Monday of August
|
||||
* The first Monday of November
|
||||
|
||||
We need to make sure that each change, both from maintainers and community contributors, is tested well and have someone who is going to fix last-minute
|
||||
bugs. In order to ensure it, each merged pull request must have an assigned maintainer before it gets merged. The assigned maintainer will be working on
|
||||
testing the introduced changes and fixing of any introduced bugs.
|
||||
Dates may be shifted slightly to accommodate holidays. Those shifts should be minimal.
|
||||
|
||||
We have a code freeze period two weeks before the release until the release branch is created. During code freeze no feature PR should be merged and it is ok
|
||||
to merge bug fixes.
|
||||
#### Patch Releases (e.g. 2.5.x)
|
||||
|
||||
Maintainers assigned to a PR that's been merged should drive testing and work on fixing last-minute issues. For tracking purposes after verifying PR the assigned
|
||||
the maintainer should label it with a `verified` label.
|
||||
Argo CD patch releases occur on an as-needed basis. Only the three most recent minor versions are eligible for patch
|
||||
releases. Versions older than the three most recent minor versions are considered EOL and will not receive bug fixes or
|
||||
security updates.
|
||||
|
||||
## Releasing
|
||||
#### Minor Release Planning Meeting
|
||||
|
||||
The releasing procedure is described in [releasing](./releasing.md) document. Before closing the release milestone following should be verified:
|
||||
Roughly two weeks before the RC date, there will be a meeting to discuss which features are planned for the RC. This meeting is
|
||||
for contributors to advocate for certain features. Features which have at least one approver (besides the contributor)
|
||||
who can assure they will review/merge by the RC date will be included in the release milestone. All other features will
|
||||
be dropped from the milestone (and potentially shifted to the next one).
|
||||
|
||||
- [ ] All merged PRs and verified (verify and remove `needs-verification` label):
|
||||
- [ ] Triage issues reported by `yarn audit` and ensure there are no exploitable security issues.
|
||||
- [ ] Roadmap is updated based one current release changes
|
||||
- [ ] Next release milestone is created
|
||||
- [ ] Upcoming release milestone is updated
|
||||
Since not everyone will be able to attend the meeting, there will be a meeting doc. Contributors can add their feature
|
||||
to a table, and Approvers can add their name to the table. Features with a corresponding approver will remain in the
|
||||
release milestone.
|
||||
|
||||
#### Release Champion
|
||||
|
||||
To help manage all the steps involved in a release, we will have a Release Champion. The Release Champion will be
|
||||
responsible for a checklist of items for their release. The checklist is an issue template in the Argo CD repository.
|
||||
|
||||
The Release Champion can be anyone in the Argo CD community. Some tasks (like cherry-picking bug fixes and cutting
|
||||
releases) require [Approver](https://github.com/argoproj/argoproj/blob/master/community/membership.md#community-membership)
|
||||
membership. The Release Champion can delegate tasks when necessary and will be responsible for coordinating with the
|
||||
Approver.
|
||||
|
||||
### Feature Acceptance Criteria
|
||||
|
||||
To be eligible for inclusion in a minor release, a new feature must meet the following criteria before the release’s RC
|
||||
date.
|
||||
|
||||
If it is a large feature that involves significant design decisions, that feature must be described in a Proposal, and
|
||||
that Proposal must be reviewed and merged.
|
||||
|
||||
The feature PR must include:
|
||||
|
||||
* Tests (passing)
|
||||
* Documentation
|
||||
* If necessary, a note in the Upgrading docs for the planned minor release
|
||||
* The PR must be reviewed, approved, and merged by an Approver.
|
||||
|
||||
If these criteria are not met by the RC date, the feature will be ineligible for inclusion in the RC series or GA for
|
||||
that minor release. It will have to wait for the next minor release.
|
||||
|
||||
@@ -9,12 +9,7 @@ To test:
|
||||
```bash
|
||||
make serve-docs
|
||||
```
|
||||
|
||||
Check for broken external links:
|
||||
|
||||
```bash
|
||||
make lint-docs
|
||||
```
|
||||
Once running, you can view your locally built documentation at [http://0.0.0.0:8000/](http://0.0.0.0:8000/).
|
||||
|
||||
## Deploying
|
||||
|
||||
|
||||
docs/faq.md (44 changed lines)
@@ -122,9 +122,9 @@ To terminate the sync, click on the "synchronisation" then "terminate":
|
||||
|
||||
 
|
||||
|
||||
## Why Is My App Out Of Sync Even After Syncing?
|
||||
## Why Is My App `Out Of Sync` Even After Syncing?
|
||||
|
||||
Is some cases, the tool you use may conflict with Argo CD by adding the `app.kubernetes.io/instance` label. E.g. using
|
||||
In some cases, the tool you use may conflict with Argo CD by adding the `app.kubernetes.io/instance` label. E.g. using
|
||||
Kustomize common labels feature.
|
||||
|
||||
Argo CD automatically sets the `app.kubernetes.io/instance` label and uses it to determine which resources form the app.
|
||||
@@ -142,7 +142,7 @@ The default polling interval is 3 minutes (180 seconds).
|
||||
You can change the setting by updating the `timeout.reconciliation` value in the [argocd-cm](https://github.com/argoproj/argo-cd/blob/2d6ce088acd4fb29271ffb6f6023dbb27594d59b/docs/operator-manual/argocd-cm.yaml#L279-L282) config map. If there are any Git changes, ArgoCD will only update applications with the [auto-sync setting](user-guide/auto_sync.md) enabled. If you set it to `0` then Argo CD will stop polling Git repositories automatically and you can only use alternative methods such as [webhooks](operator-manual/webhook.md) and/or manual syncs for deploying applications.
|
||||
|
||||
|
||||
## Why Are My Resource Limits Out Of Sync?
|
||||
## Why Are My Resource Limits `Out Of Sync`?
|
||||
|
||||
Kubernetes normalizes your resource limits when they are applied, and Argo CD then compares the version in
your generated manifests to the normalized one in Kubernetes - they won't match.
|
||||
@@ -157,7 +157,7 @@ E.g.
|
||||
To fix this use diffing
|
||||
customizations [settings](./user-guide/diffing.md#known-kubernetes-types-in-crds-resource-limits-volume-mounts-etc).
|
||||
|
||||
## How Do I Fix "invalid cookie, longer than max length 4093"?
|
||||
## How Do I Fix `invalid cookie, longer than max length 4093`?
|
||||
|
||||
Argo CD uses a JWT as the auth token. You likely are part of many groups and have gone over the 4KB limit which is set
|
||||
for cookies. You can get the list of groups by opening "developer tools -> network"
|
||||
@@ -224,4 +224,38 @@ resource.customizations.health.bitnami.com_SealedSecret: |
|
||||
hs.status = "Healthy"
|
||||
hs.message = "Controller doesn't report resource status"
|
||||
return hs
|
||||
```
|
||||
```
|
||||
|
||||
## How do I fix `The order in patch list … doesn't match $setElementOrder list: …`?
|
||||
|
||||
An application may trigger a sync error labeled a `ComparisonError` with a message like:
|
||||
|
||||
> The order in patch list: [map[name:**KEY_BC** value:150] map[name:**KEY_BC** value:500] map[name:**KEY_BD** value:250] map[name:**KEY_BD** value:500] map[name:KEY_BI value:something]] doesn't match $setElementOrder list: [map[name:KEY_AA] map[name:KEY_AB] map[name:KEY_AC] map[name:KEY_AD] map[name:KEY_AE] map[name:KEY_AF] map[name:KEY_AG] map[name:KEY_AH] map[name:KEY_AI] map[name:KEY_AJ] map[name:KEY_AK] map[name:KEY_AL] map[name:KEY_AM] map[name:KEY_AN] map[name:KEY_AO] map[name:KEY_AP] map[name:KEY_AQ] map[name:KEY_AR] map[name:KEY_AS] map[name:KEY_AT] map[name:KEY_AU] map[name:KEY_AV] map[name:KEY_AW] map[name:KEY_AX] map[name:KEY_AY] map[name:KEY_AZ] map[name:KEY_BA] map[name:KEY_BB] map[name:**KEY_BC**] map[name:**KEY_BD**] map[name:KEY_BE] map[name:KEY_BF] map[name:KEY_BG] map[name:KEY_BH] map[name:KEY_BI] map[name:**KEY_BC**] map[name:**KEY_BD**]]
|
||||
|
||||
|
||||
There are two parts to the message:
|
||||
|
||||
1. `The order in patch list: [`
|
||||
|
||||
This identifies values for items, especially items that appear multiple times:
|
||||
|
||||
> map[name:**KEY_BC** value:150] map[name:**KEY_BC** value:500] map[name:**KEY_BD** value:250] map[name:**KEY_BD** value:500] map[name:KEY_BI value:something]
|
||||
|
||||
You'll want to identify the keys that are duplicated -- you can focus on the first part, as each duplicated key appears once for each of its values in the first list. The second list is really just
|
||||
|
||||
`]`
|
||||
|
||||
2. `doesn't match $setElementOrder list: [`
|
||||
|
||||
This includes all of the keys. It's included for debugging purposes -- you don't need to pay much attention to it. It will give you a hint about the precise location in the list for the duplicated keys:
|
||||
|
||||
> map[name:KEY_AA] map[name:KEY_AB] map[name:KEY_AC] map[name:KEY_AD] map[name:KEY_AE] map[name:KEY_AF] map[name:KEY_AG] map[name:KEY_AH] map[name:KEY_AI] map[name:KEY_AJ] map[name:KEY_AK] map[name:KEY_AL] map[name:KEY_AM] map[name:KEY_AN] map[name:KEY_AO] map[name:KEY_AP] map[name:KEY_AQ] map[name:KEY_AR] map[name:KEY_AS] map[name:KEY_AT] map[name:KEY_AU] map[name:KEY_AV] map[name:KEY_AW] map[name:KEY_AX] map[name:KEY_AY] map[name:KEY_AZ] map[name:KEY_BA] map[name:KEY_BB] map[name:**KEY_BC**] map[name:**KEY_BD**] map[name:KEY_BE] map[name:KEY_BF] map[name:KEY_BG] map[name:KEY_BH] map[name:KEY_BI] map[name:**KEY_BC**] map[name:**KEY_BD**]
|
||||
|
||||
`]`
|
||||
|
||||
In this case, the duplicated keys have been **emphasized** to help you identify the problematic keys. Many editors have the ability to highlight all instances of a string; using such an editor can help with such problems.
|
||||
|
||||
The most common instance of this error is with `env:` fields for `containers`.
|
||||
|
||||
!!! note "Dynamic applications"
|
||||
It's possible that your application is being generated by a tool in which case the duplication might not be evident within the scope of a single file. If you have trouble debugging this problem, consider filing a ticket to the owner of the generator tool asking them to improve its validation and error reporting.
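If the list is long, a few lines of throwaway code can find the duplicated keys for you. The snippet below is purely illustrative (it is not part of Argo CD): feed it the `name` values extracted from the offending list.

```go
package main

import "fmt"

// duplicatedKeys reports which names appear more than once in a list.
func duplicatedKeys(names []string) []string {
	seen := map[string]int{}
	for _, n := range names {
		seen[n]++
	}
	var dups []string
	for n, count := range seen {
		if count > 1 {
			dups = append(dups, n)
		}
	}
	return dups
}

func main() {
	env := []string{"KEY_AA", "KEY_BC", "KEY_BD", "KEY_BC", "KEY_BD", "KEY_BI"}
	fmt.Println(duplicatedKeys(env)) // e.g. [KEY_BC KEY_BD] (map order may vary)
}
```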
|
||||
|
||||
@@ -81,7 +81,7 @@ in your Argo CD installation namespace. You can simply retrieve this password
|
||||
using the `argocd` CLI:
|
||||
|
||||
```bash
|
||||
argocd admin initial-password
|
||||
argocd admin initial-password -n argocd
|
||||
```
|
||||
|
||||
!!! warning
|
||||
|
||||
@@ -24,7 +24,7 @@ This feature can only be enabled and used when your Argo CD is installed as a cl
|
||||
|
||||
### Switch resource tracking method
|
||||
|
||||
Also, while technically not necessary, it is strongly suggested that you switch the application tracking method from the default `label` setting to either `annotation` or `annotation+label`. The reasonsing for this is, that application names will be a composite of the namespace's name and the name of the `Application`, and this can easily exceed the 63 characters length limit imposed on label values. Annotations have a notably greater length limit.
|
||||
Also, while technically not necessary, it is strongly suggested that you switch the application tracking method from the default `label` setting to either `annotation` or `annotation+label`. The reasoning for this is, that application names will be a composite of the namespace's name and the name of the `Application`, and this can easily exceed the 63 characters length limit imposed on label values. Annotations have a notably greater length limit.
|
||||
|
||||
To enable annotation based resource tracking, refer to the documentation about [resource tracking methods](../../user-guide/resource_tracking/)
|
||||
|
||||
|
||||
@@ -6,7 +6,10 @@ metadata:
|
||||
namespace: argocd
|
||||
# Add this finalizer ONLY if you want these to cascade delete.
|
||||
finalizers:
|
||||
# The default behaviour is foreground cascading deletion
|
||||
- resources-finalizer.argocd.argoproj.io
|
||||
# Alternatively, you can use background cascading deletion
|
||||
# - resources-finalizer.argocd.argoproj.io/background
|
||||
# Add labels to your application object.
|
||||
labels:
|
||||
name: guestbook
|
||||
@@ -114,9 +117,7 @@ spec:
|
||||
|
||||
# plugin specific config
|
||||
plugin:
|
||||
# NOTE: this field is deprecated in v2.5 and must be removed to use sidecar-based plugins.
|
||||
# Only set the plugin name if the plugin is defined in argocd-cm.
|
||||
# If the plugin is defined as a sidecar, omit the name. The plugin will be automatically matched with the
|
||||
# If the plugin is defined as a sidecar and name is not passed, the plugin will be automatically matched with the
|
||||
# Application according to the plugin's discovery rules.
|
||||
name: mypluginname
|
||||
# environment variables passed to the plugin
|
||||
@@ -142,10 +143,18 @@ spec:
|
||||
|
||||
# Destination cluster and namespace to deploy the application
|
||||
destination:
|
||||
# cluster API URL
|
||||
server: https://kubernetes.default.svc
|
||||
# or cluster name
|
||||
# name: in-cluster
|
||||
# The namespace will only be set for namespace-scoped resources that have not set a value for .metadata.namespace
|
||||
namespace: guestbook
|
||||
|
||||
|
||||
# Extra information to show in the Argo CD Application details tab
|
||||
info:
|
||||
- name: 'Example:'
|
||||
value: 'https://example.com'
|
||||
|
||||
# Sync policy
|
||||
syncPolicy:
|
||||
automated: # automated sync by default retries failed attempts 5 times with the following delays between attempts (5s, 10s, 20s, 40s, 80s); retry controlled using the `retry` field.
|
||||
@@ -157,6 +166,7 @@ spec:
|
||||
- CreateNamespace=true # Namespace Auto-Creation ensures that namespace specified as the application destination exists in the destination cluster.
|
||||
- PrunePropagationPolicy=foreground # Supported policies are background, foreground and orphan.
|
||||
- PruneLast=true # Allow the ability for resource pruning to happen as a final, implicit wave of a sync operation
|
||||
- RespectIgnoreDifferences=true # When syncing changes, respect fields ignored by the ignoreDifferences configuration
|
||||
managedNamespaceMetadata: # Sets the metadata for the application namespace. Only valid if CreateNamespace=true (see above), otherwise it's a no-op.
|
||||
labels: # The labels to set on the application namespace
|
||||
any: label
|
||||
@@ -175,18 +185,24 @@ spec:
|
||||
maxDuration: 3m # the maximum amount of time allowed for the backoff strategy
|
||||
|
||||
# Will ignore differences between live and desired states during the diff. Note that these configurations are not
|
||||
# used during the sync process.
|
||||
# used during the sync process unless the `RespectIgnoreDifferences=true` sync option is enabled.
|
||||
ignoreDifferences:
|
||||
# for the specified json pointers
|
||||
- group: apps
|
||||
kind: Deployment
|
||||
jsonPointers:
|
||||
- /spec/replicas
|
||||
- kind: ConfigMap
|
||||
jqPathExpressions:
|
||||
- '.data["config.yaml"].auth'
|
||||
# for the specified managedFields managers
|
||||
- group: "*"
|
||||
kind: "*"
|
||||
managedFieldsManagers:
|
||||
- kube-controller-manager
|
||||
# Name and namespace are optional. If specified, they must match exactly, these are not glob patterns.
|
||||
name: my-deployment
|
||||
namespace: my-namespace
|
||||
|
||||
# RevisionHistoryLimit limits the number of items kept in the application's revision history, which is used for
|
||||
# informational purposes as well as for rollbacks to previous versions. This should only be changed in exceptional
|
||||
|
||||
@@ -110,7 +110,7 @@ spec:
|
||||
server: https://kubernetes.default.svc
|
||||
namespace: '{{path.basename}}'
|
||||
```
|
||||
(*The full example can be found [here](https://github.com/argoproj/argo-cd/tree/master/examples/applicationset/git-generator-directory/excludes).*)
|
||||
(*The full example can be found [here](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/git-generator-directory/excludes).*)
|
||||
|
||||
This example excludes the `exclude-helm-guestbook` directory from the list of directories scanned for this `ApplicationSet` resource.
|
||||
|
||||
@@ -153,7 +153,7 @@ Or, a shorter way (using [path.Match](https://golang.org/pkg/path/#Match) syntax
|
||||
|
||||
```yaml
|
||||
- path: /d/*
|
||||
- path: /d/[f|g]
|
||||
- path: /d/[fg]
|
||||
exclude: true
|
||||
```
|
||||
|
||||
|
||||
@@ -0,0 +1,43 @@
|
||||
# Post Selector all generators
|
||||
|
||||
The Selector allows you to post-filter the generated values using the common Kubernetes labelSelector format. In the example below, the list generator produces a set of two applications, which are then filtered by key/value so that only the element with `env` set to `staging` is selected:
|
||||
|
||||
## Example: List generator + Post Selector
|
||||
```yaml
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
kind: ApplicationSet
|
||||
metadata:
|
||||
name: guestbook
|
||||
spec:
|
||||
generators:
|
||||
- list:
|
||||
elements:
|
||||
- cluster: engineering-dev
|
||||
url: https://kubernetes.default.svc
|
||||
env: staging
|
||||
- cluster: engineering-prod
|
||||
url: https://kubernetes.default.svc
|
||||
env: prod
|
||||
selector:
|
||||
matchLabels:
|
||||
env: staging
|
||||
template:
|
||||
metadata:
|
||||
name: '{{cluster}}-guestbook'
|
||||
spec:
|
||||
project: default
|
||||
source:
|
||||
repoURL: https://github.com/argoproj-labs/applicationset.git
|
||||
targetRevision: HEAD
|
||||
path: examples/list-generator/guestbook/{{cluster}}
|
||||
destination:
|
||||
server: '{{url}}'
|
||||
namespace: guestbook
|
||||
```
|
||||
|
||||
The List generator + Post Selector generates a single set of parameters:
|
||||
```yaml
|
||||
- cluster: engineering-dev
|
||||
url: https://kubernetes.default.svc
|
||||
env: staging
|
||||
```
|
||||
@@ -60,7 +60,7 @@ spec:
|
||||
* `repo`: Required name of the GitHub repository.
|
||||
* `api`: If using GitHub Enterprise, the URL to access it. (Optional)
|
||||
* `tokenRef`: A `Secret` name and key containing the GitHub access token to use for requests. If not specified, anonymous requests will be made; these have a lower rate limit and can only see public repositories. (Optional; a minimal example is sketched below.)
|
||||
* `labels`: Labels is used to filter the PRs that you want to target. (Optional)
|
||||
* `labels`: Filter the PRs to those containing **all** of the labels listed. (Optional)
|
||||
* `appSecretName`: A `Secret` name containing a GitHub App secret in [repo-creds format][repo-creds].
|
||||
|
||||
[repo-creds]: ../declarative-setup.md#repository-credentials
|
||||
|
||||
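For illustration, here is a minimal sketch of how `tokenRef` is typically wired up (the Secret name, key, owner, repo and template paths are placeholders, not values from this document):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-token          # hypothetical Secret holding a GitHub access token
  namespace: argocd
stringData:
  token: ghp_xxxxxxxxxxxxxxxx
---
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: pull-request-example
spec:
  generators:
  - pullRequest:
      github:
        owner: my-org          # placeholder organization
        repo: my-repo          # placeholder repository
        tokenRef:
          secretName: github-token
          key: token
        labels:
        - preview              # only PRs carrying all listed labels are selected
  template:
    metadata:
      name: 'my-repo-{{branch}}-{{number}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/my-org/my-repo.git
        targetRevision: '{{head_sha}}'
        path: manifests
      destination:
        server: https://kubernetes.default.svc
        namespace: 'pr-{{number}}'
```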
@@ -6,7 +6,7 @@ Generators are primarily based on the data source that they use to generate the
|
||||
|
||||
As of this writing there are eight generators:
|
||||
|
||||
- [List generator](Generators-List.md): The List generator allows you to target Argo CD Applications to clusters based on a fixed list of cluster name/URL values.
|
||||
- [List generator](Generators-List.md): The List generator allows you to target Argo CD Applications to clusters based on a fixed list of any chosen key/value element pairs.
|
||||
- [Cluster generator](Generators-Cluster.md): The Cluster generator allows you to target Argo CD Applications to clusters, based on the list of clusters defined within (and managed by) Argo CD (which includes automatically responding to cluster addition/removal events from Argo CD).
|
||||
- [Git generator](Generators-Git.md): The Git generator allows you to create Applications based on files within a Git repository, or based on the directory structure of a Git repository.
|
||||
- [Matrix generator](Generators-Matrix.md): The Matrix generator may be used to combine the generated parameters of two separate generators.
|
||||
@@ -15,4 +15,6 @@ As of this writing there are eight generators:
|
||||
- [Pull Request generator](Generators-Pull-Request.md): The Pull Request generator uses the API of an SCMaaS provider (e.g. GitHub) to automatically discover open pull requests within a repository.
|
||||
- [Cluster Decision Resource generator](Generators-Cluster-Decision-Resource.md): The Cluster Decision Resource generator is used to interface with Kubernetes custom resources that use custom resource-specific logic to decide which set of Argo CD clusters to deploy to.
|
||||
|
||||
All generators can be filtered using the [Post Selector](Generators-Post-Selector.md).
|
||||
|
||||
If you are new to generators, begin with the **List** and **Cluster** generators. For more advanced use cases, see the documentation for the remaining generators above.
|
||||
|
||||
@@ -87,6 +87,10 @@ By activating Go Templating, `{{ .path }}` becomes an object. Therefore, some ch
|
||||
generators' templating:
|
||||
|
||||
- `{{ path }}` becomes `{{ .path.path }}`
|
||||
- `{{ path.basename }}` becomes `{{ .path.basename }}`
|
||||
- `{{ path.basenameNormalized }}` becomes `{{ .path.basenameNormalized }}`
|
||||
- `{{ path.filename }}` becomes `{{ .path.filename }}`
|
||||
- `{{ path.filenameNormalized }}` becomes `{{ .path.filenameNormalized }}`
|
||||
- `{{ path[n] }}` becomes `{{ index .path.segments n }}`
|
||||
|
||||
Here is an example:
|
||||
|
||||
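A minimal sketch of such an example (the repository URL and paths are illustrative), showing a Git directory generator with `goTemplate: true` and the dotted `.path` fields:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
spec:
  goTemplate: true
  generators:
  - git:
      repoURL: https://github.com/argoproj/argo-cd.git   # illustrative repository
      revision: HEAD
      directories:
      - path: applicationset/examples/git-generator-directory/cluster-addons/*
  template:
    metadata:
      # with goTemplate enabled, {{path.basename}} becomes {{ .path.basename }}
      name: '{{ .path.basename }}'
    spec:
      project: default
      source:
        repoURL: https://github.com/argoproj/argo-cd.git
        targetRevision: HEAD
        path: '{{ .path.path }}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{ .path.basename }}'
```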
@@ -1,20 +1,21 @@
|
||||
# Progressive Rollouts
|
||||
# Progressive Syncs
|
||||
|
||||
!!! warning "Alpha Feature"
|
||||
This is an experimental, alpha-quality feature that allows you to control the order in which the ApplicationSet controller will create or update the Applications owned by an ApplicationSet resource. It may be removed in future releases or modified in backwards-incompatible ways.
|
||||
|
||||
## Use Cases
|
||||
The Progressive Rollouts feature set is intended to be light and flexible. The feature only interacts with the health of managed Applications. It is not intended to support direct integrations with other Rollout controllers (such as the native ReplicaSet controller or Argo Rollouts).
|
||||
The Progressive Syncs feature set is intended to be light and flexible. The feature only interacts with the health of managed Applications. It is not intended to support direct integrations with other Rollout controllers (such as the native ReplicaSet controller or Argo Rollouts).
|
||||
|
||||
* Progressive Rollouts watch for the managed Application resources to become "Healthy" before proceeding to the next stage.
|
||||
* Progressive Syncs watch for the managed Application resources to become "Healthy" before proceeding to the next stage.
|
||||
* Deployments, DaemonSets, StatefulSets, and [Argo Rollouts](https://argoproj.github.io/argo-rollouts/) are all supported, because the Application enters a "Progressing" state while pods are being rolled out. In fact, any resource with a health check that can report a "Progressing" status is supported.
|
||||
* [Argo CD Resource Hooks](../../user-guide/resource_hooks.md) are supported. We recommend this approach for users that need advanced functionality when an Argo Rollout cannot be used, such as smoke testing after a DaemonSet change.
|
||||
|
||||
## Enabling Progressive Rollouts
|
||||
As an experimental feature, progressive rollouts must be explicitly enabled, in one of these ways.
|
||||
1. Pass `--enable-progressive-rollouts` to the ApplicationSet controller args.
|
||||
1. Set `ARGOCD_APPLICATIONSET_ENABLE_PROGRESSIVE_ROLLOUTS=true` in the ApplicationSet controller environment variables.
|
||||
1. Set `applicationsetcontroller.enable.progressive.rollouts: true` in the ArgoCD ConfigMap.
|
||||
## Enabling Progressive Syncs
|
||||
As an experimental feature, progressive syncs must be explicitly enabled in one of these ways (a ConfigMap sketch for the third option follows the list).
|
||||
|
||||
1. Pass `--enable-progressive-syncs` to the ApplicationSet controller args.
|
||||
1. Set `ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS=true` in the ApplicationSet controller environment variables.
|
||||
1. Set `applicationsetcontroller.enable.progressive.syncs: true` in the Argo CD `argocd-cmd-params-cm` ConfigMap.
|
||||
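For instance, option 3 above corresponds to a ConfigMap entry along these lines (a minimal sketch, assuming the standard `argocd` namespace):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Enables the Progressive Syncs capability of the ApplicationSet controller
  applicationsetcontroller.enable.progressive.syncs: "true"
```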
|
||||
## Strategies
|
||||
|
||||
@@ -30,10 +31,10 @@ All Applications managed by the ApplicationSet resource are updated simultaneous
|
||||
This update strategy allows you to group Applications by labels present on the generated Application resources.
|
||||
When the ApplicationSet changes, the changes will be applied to each group of Application resources sequentially.
|
||||
|
||||
* Application groups are selected by `matchExpressions`.
|
||||
* Application groups are selected using their labels and `matchExpressions`.
|
||||
* All `matchExpressions` must be true for an Application to be selected (multiple expressions match with AND behavior).
|
||||
* The `In` and `NotIn` operators must match at least one value to be considered true (OR behavior).
|
||||
* The `NotIn` operatorn has priority in the event that both a `NotIn` and `In` operator produce a match.
|
||||
* The `NotIn` operator has priority in the event that both a `NotIn` and `In` operator produce a match.
|
||||
* All Applications in each group must become Healthy before the ApplicationSet controller will proceed to update the next group of Applications.
|
||||
* The number of simultaneous Application updates in a group will not exceed its `maxUpdate` parameter (default is 100%, unbounded).
|
||||
* RollingSync will capture external changes outside the ApplicationSet resource, since it relies on watching the OutOfSync status of the managed Applications.
|
||||
@@ -43,9 +44,10 @@ When the ApplicationSet changes, the changes will be applied to each group of Ap
|
||||
* If an Application is considered "Pending" for `applicationsetcontroller.default.application.progressing.timeout` seconds, the Application is automatically moved to Healthy status (default 300).
|
||||
|
||||
#### Example
|
||||
The following example illustrates how to stage a progressive rollout over Applications with explicitly configured environment labels.
|
||||
The following example illustrates how to stage a progressive sync over Applications with explicitly configured environment labels.
|
||||
|
||||
Once a change is pushed, the following will happen in order.
|
||||
|
||||
* All `env-dev` Applications will be updated simultaneously.
|
||||
* The rollout will wait for all `env-qa` Applications to be manually synced via the `argocd` CLI or by clicking the Sync button in the UI.
|
||||
* 10% of all `env-prod` Applications will be updated at a time until all `env-prod` Applications have been updated.
|
||||
@@ -74,19 +76,19 @@ spec:
|
||||
rollingSync:
|
||||
steps:
|
||||
- matchExpressions:
|
||||
- key: env
|
||||
- key: envLabel
|
||||
operator: In
|
||||
values:
|
||||
- env-dev
|
||||
#maxUpdate: 100% # if undefined, all applications matched are updated together (default is 100%)
|
||||
- matchExpressions:
|
||||
- key: env
|
||||
- key: envLabel
|
||||
operator: In
|
||||
values:
|
||||
- env-qa
|
||||
maxUpdate: 0 # if 0, no matched applications will be updated
|
||||
- matchExpressions:
|
||||
- key: env
|
||||
- key: envLabel
|
||||
operator: In
|
||||
values:
|
||||
- env-prod
|
||||
@@ -96,7 +98,7 @@ spec:
|
||||
metadata:
|
||||
name: '{{.cluster}}-guestbook'
|
||||
labels:
|
||||
env: '{{.env}}'
|
||||
envLabel: '{{.env}}'
|
||||
spec:
|
||||
project: my-project
|
||||
source:
|
||||
@@ -47,7 +47,7 @@ data:
|
||||
help.download.windows-amd64: "path-or-url-to-download"
|
||||
|
||||
# A dex connector configuration (optional). See SSO configuration documentation:
|
||||
# https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/sso
|
||||
# https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/user-management/index.md#sso
|
||||
# https://dexidp.io/docs/connectors/
|
||||
dex.config: |
|
||||
connectors:
|
||||
@@ -330,4 +330,4 @@ data:
|
||||
resource.links: |
|
||||
- url: https://mycompany.splunk.com?search={{.metadata.namespace}}
|
||||
title: Splunk
|
||||
if: kind == "Pod" || kind == "Deployment"
|
||||
if: kind == "Pod" || kind == "Deployment"
|
||||
|
||||
@@ -164,5 +164,5 @@ data:
|
||||
applicationsetcontroller.dryrun: "false"
|
||||
# Enable git submodule support
|
||||
applicationsetcontroller.enable.git.submodule: "true"
|
||||
# Enables use of the Progressive Rollouts capability
|
||||
applicationsetcontroller.enable.progressive.rollouts: "false"
|
||||
# Enables use of the Progressive Syncs capability
|
||||
applicationsetcontroller.enable.progressive.syncs: "false"
|
||||
|
||||
@@ -4,18 +4,33 @@
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: my-private-repo
|
||||
name: my-private-https-repo
|
||||
namespace: argocd
|
||||
labels:
|
||||
argocd.argoproj.io/secret-type: repository
|
||||
stringData:
|
||||
url: https://github.com/argoproj/my-private-repository
|
||||
url: https://github.com/argoproj/argocd-example-apps
|
||||
password: my-password
|
||||
username: my-username
|
||||
insecure: "true" # Ignore validity of server's TLS certificate. Defaults to "false"
|
||||
forceHttpBasicAuth: "true" # Skip auth method negotiation and force usage of HTTP basic auth. Defaults to "false"
|
||||
enableLfs: "true" # Enable git-lfs for this repository. Defaults to "false"
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: my-private-ssh-repo
|
||||
namespace: argocd
|
||||
labels:
|
||||
argocd.argoproj.io/secret-type: repository
|
||||
stringData:
|
||||
url: ssh://git@github.com/argoproj/argocd-example-apps
|
||||
sshPrivateKey: |
|
||||
-----BEGIN OPENSSH PRIVATE KEY-----
|
||||
...
|
||||
-----END OPENSSH PRIVATE KEY-----
|
||||
insecure: "true" # Do not perform a host key check for the server. Defaults to "false"
|
||||
enableLfs: "true" # Enable git-lfs for this repository. Defaults to "false"
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
|
||||
@@ -7,7 +7,7 @@ metadata:
|
||||
name: argocd-ssh-known-hosts-cm
|
||||
data:
|
||||
ssh_known_hosts: |
|
||||
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==
|
||||
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr/6mrui/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL/BvbZ/iRNhItLqNyieoQj/uh/7Iv4uyH/cV/0b4WDSd3DptigWq84lJubb9t/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf/97P5zauIhxcjX+xHv4M=
|
||||
github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
|
||||
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
|
||||
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
|
||||
|
||||
@@ -20,51 +20,21 @@ The following sections will describe how to create, install, and use plugins. Ch
|
||||
|
||||
There are two ways to install a Config Management Plugin:
|
||||
|
||||
1. Add the plugin config to the Argo CD ConfigMap (**this method is deprecated and will be removed in a future
|
||||
version**). The repo-server container will run your plugin's commands. This is a good option for a simple plugin that
|
||||
requires only a few lines of code that fit nicely in the Argo CD ConfigMap.
|
||||
2. Add the plugin as a sidecar to the repo-server Pod.
|
||||
This is a good option for a more complex plugin that would clutter the Argo CD ConfigMap. A copy of the repository is
|
||||
sent to the sidecar container as a tarball and processed individually per application, which makes it a good option
|
||||
for [concurrent processing of monorepos](../operator-manual/high_availability.md#enable-concurrent-processing).
|
||||
* **Sidecar plugin**
|
||||
|
||||
This is a good option for a more complex plugin that would clutter the Argo CD ConfigMap. A copy of the repository is
|
||||
sent to the sidecar container as a tarball and processed individually per application.
|
||||
|
||||
### Option 1: Configure plugins via Argo CD configmap (deprecated)
|
||||
* **ConfigMap plugin** (**this method is deprecated and will be removed in a future
|
||||
version**)
|
||||
|
||||
The repo-server container will run your plugin's commands.
|
||||
|
||||
The following changes are required to configure a new plugin:
|
||||
|
||||
1. Make sure required binaries are available in `argocd-repo-server` pod. The binaries can be added via volume mounts or
|
||||
using a custom image (see [custom_tools](../operator-manual/custom_tools.md) for examples of both).
|
||||
2. Register a new plugin in `argocd-cm` ConfigMap:
|
||||
|
||||
:::yaml
|
||||
data:
|
||||
configManagementPlugins: |
|
||||
- name: pluginName
|
||||
init: # Optional command to initialize application source directory
|
||||
command: ["sample command"]
|
||||
args: ["sample args"]
|
||||
generate: # Command to generate manifests YAML
|
||||
command: ["sample command"]
|
||||
args: ["sample args"]
|
||||
lockRepo: true # Defaults to false. See below.
|
||||
|
||||
The `generate` command must print a valid YAML or JSON stream to stdout. Both `init` and `generate` commands are executed inside the application source directory or in `path` when specified for the app.
|
||||
|
||||
3. [Create an Application which uses your new CMP](#using-a-cmp).
|
||||
|
||||
More CMP examples are available in [argocd-example-apps](https://github.com/argoproj/argocd-example-apps/tree/master/plugins).
|
||||
|
||||
!!!note "Repository locking"
|
||||
If your plugin makes use of `git` (e.g. `git crypt`), it is advised to set
|
||||
`lockRepo` to `true` so that your plugin will have exclusive access to the
|
||||
repository at the time it is executed. Otherwise, two applications synced
|
||||
at the same time may result in a race condition and sync failure.
|
||||
|
||||
### Option 2: Configure plugin via sidecar
|
||||
### Sidecar plugin
|
||||
|
||||
An operator can configure a plugin tool via a sidecar to repo-server. The following changes are required to configure a new plugin:
|
||||
|
||||
#### 1. Write the plugin configuration file
|
||||
#### Write the plugin configuration file
|
||||
|
||||
Plugins will be configured via a ConfigManagementPlugin manifest located inside the plugin container.
|
||||
|
||||
@@ -84,7 +54,7 @@ spec:
|
||||
command: [sh]
|
||||
args: [-c, 'echo "Initializing..."']
|
||||
# The generate command runs in the Application source directory each time manifests are generated. Standard output
|
||||
# must be ONLY valid YAML manifests. A non-zero exit code will fail manifest generation.
|
||||
# must be ONLY valid Kubernetes Objects in either YAML or JSON. A non-zero exit code will fail manifest generation.
|
||||
# Error output will be sent to the UI, so avoid printing sensitive information (such as secrets).
|
||||
generate:
|
||||
command: [sh, -c]
|
||||
@@ -92,12 +62,13 @@ spec:
|
||||
- |
|
||||
echo "{\"kind\": \"ConfigMap\", \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"$ARGOCD_APP_NAME\", \"namespace\": \"$ARGOCD_APP_NAMESPACE\", \"annotations\": {\"Foo\": \"$ARGOCD_ENV_FOO\", \"KubeVersion\": \"$KUBE_VERSION\", \"KubeApiVersion\": \"$KUBE_API_VERSIONS\",\"Bar\": \"baz\"}}}"
|
||||
# The discovery config is applied to a repository. If every configured discovery tool matches, then the plugin may be
|
||||
# used to generate manifests for Applications using the repository.
|
||||
# used to generate manifests for Applications using the repository. If the discovery config is omitted then the plugin
|
||||
# will not match any application but can still be invoked explicitly by specifying the plugin name in the app spec.
|
||||
# Only one of fileName, find.glob, or find.command should be specified. If multiple are specified then only the
|
||||
# first (in that order) is evaluated.
|
||||
discover:
|
||||
# fileName is a glob pattern (https://pkg.go.dev/path/filepath#Glob) that is applied to the repository's root
|
||||
# directory (not the Application source directory). If there is a match, this plugin may be used for the repository.
|
||||
# fileName is a glob pattern (https://pkg.go.dev/path/filepath#Glob) that is applied to the Application's source
|
||||
# directory. If there is a match, this plugin may be used for the Application.
|
||||
fileName: "./subdir/s*.yaml"
|
||||
find:
|
||||
# This does the same thing as fileName, but it supports double-star (`**`, nested directory) glob patterns.
|
||||
@@ -152,7 +123,7 @@ spec:
|
||||
While the ConfigManagementPlugin _looks like_ a Kubernetes object, it is not actually a custom resource.
|
||||
It only follows kubernetes-style spec conventions.
|
||||
|
||||
The `generate` command must print a valid YAML stream to stdout. Both `init` and `generate` commands are executed inside the application source directory.
|
||||
The `generate` command must print a valid Kubernetes YAML or JSON object stream to stdout. Both `init` and `generate` commands are executed inside the application source directory.
|
||||
|
||||
The `discover.fileName` is used as [glob](https://pkg.go.dev/path/filepath#Glob) pattern to determine whether an
|
||||
application repository is supported by the plugin or not.
|
||||
@@ -167,7 +138,7 @@ If `discover.fileName` is not provided, the `discover.find.command` is executed
|
||||
application repository is supported by the plugin or not. The `find` command should return a non-error exit code
|
||||
and produce output to stdout when the application source type is supported.
|
||||
|
||||
#### 2. Place the plugin configuration file in the sidecar
|
||||
#### Place the plugin configuration file in the sidecar
|
||||
|
||||
Argo CD expects the plugin configuration file to be located at `/home/argocd/cmp-server/config/plugin.yaml` in the sidecar.
|
||||
|
||||
@@ -202,7 +173,7 @@ data:
|
||||
fileName: "./subdir/s*.yaml"
|
||||
```
|
||||
|
||||
#### 3. Register the plugin sidecar
|
||||
#### Register the plugin sidecar
|
||||
|
||||
To install a plugin, patch argocd-repo-server to run the plugin container as a sidecar, with argocd-cmp-server as its
|
||||
entrypoint. You can use either off-the-shelf or custom-built plugin image as sidecar image. For example:
|
||||
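A minimal sketch of such a patch (the plugin name, image, and the `my-plugin-config` ConfigMap are placeholders; check the exact volume names and layout against your installation manifests):

```yaml
containers:
- name: my-plugin
  command: [/var/run/argocd/argocd-cmp-server]   # the sidecar entrypoint provided by Argo CD
  image: busybox                                 # or your custom plugin image
  securityContext:
    runAsNonRoot: true
    runAsUser: 999                               # the sidecar must run as user 999
  volumeMounts:
  - mountPath: /var/run/argocd
    name: var-files
  - mountPath: /home/argocd/cmp-server/plugins
    name: plugins
  # plugin.yaml mounted from a ConfigMap (could also be baked into the image)
  - mountPath: /home/argocd/cmp-server/config/plugin.yaml
    subPath: plugin.yaml
    name: my-plugin-config
  - mountPath: /tmp
    name: cmp-tmp
```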
@@ -241,90 +212,115 @@ volumes:
|
||||
2. Make sure that sidecar container is running as user 999.
|
||||
3. Make sure that plugin configuration file is present at `/home/argocd/cmp-server/config/plugin.yaml`. It can either be volume mapped via configmap or baked into image.
|
||||
|
||||
### ConfigMap plugin
|
||||
|
||||
!!! warning "Deprecated"
|
||||
ConfigMap plugins are deprecated and will no longer be supported in 2.7.
|
||||
|
||||
The following changes are required to configure a new plugin:
|
||||
|
||||
1. Make sure required binaries are available in `argocd-repo-server` pod. The binaries can be added via volume mounts or
|
||||
using a custom image (see [custom_tools](../operator-manual/custom_tools.md) for examples of both).
|
||||
2. Register a new plugin in `argocd-cm` ConfigMap:
|
||||
|
||||
data:
|
||||
configManagementPlugins: |
|
||||
- name: pluginName
|
||||
init: # Optional command to initialize application source directory
|
||||
command: ["sample command"]
|
||||
args: ["sample args"]
|
||||
generate: # Command to generate manifests YAML
|
||||
command: ["sample command"]
|
||||
args: ["sample args"]
|
||||
lockRepo: true # Defaults to false. See below.
|
||||
|
||||
The `generate` command must print a valid YAML or JSON stream to stdout. Both `init` and `generate` commands are executed inside the application source directory or in `path` when specified for the app.
|
||||
|
||||
3. [Create an Application which uses your new CMP](#using-a-cmp).
|
||||
|
||||
More CMP examples are available in [argocd-example-apps](https://github.com/argoproj/argocd-example-apps/tree/master/plugins).
|
||||
|
||||
!!!note "Repository locking"
|
||||
If your plugin makes use of `git` (e.g. `git crypt`), it is advised to set
|
||||
`lockRepo` to `true` so that your plugin will have exclusive access to the
|
||||
repository at the time it is executed. Otherwise, two applications synced
|
||||
at the same time may result in a race condition and sync failure.
|
||||
|
||||
### Using environment variables in your plugin
|
||||
|
||||
Plugin commands have access to
|
||||
|
||||
1. The system environment variables (of the repo-server container for argocd-cm plugins or of the sidecar for sidecar plugins)
|
||||
2. [Standard build environment variables](build-environment.md)
|
||||
2. [Standard build environment variables](../user-guide/build-environment.md)
|
||||
3. Variables in the Application spec (References to system and build variables will get interpolated in the variables' values):
|
||||
|
||||
```yaml
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
kind: Application
|
||||
spec:
|
||||
source:
|
||||
plugin:
|
||||
env:
|
||||
- name: FOO
|
||||
value: bar
|
||||
- name: REV
|
||||
value: test-$ARGOCD_APP_REVISION
|
||||
```
|
||||
|
||||
!!! note
|
||||
The `discover.command` command only has access to the above environment starting with v2.4.
|
||||
|
||||
Before reaching the `init.command`, `generate.command`, and `discover.command` commands, Argo CD prefixes all
|
||||
user-supplied environment variables (#3 above) with `ARGOCD_ENV_`. This prevents users from directly setting
|
||||
potentially-sensitive environment variables.
|
||||
|
||||
If your plugin was written before 2.4 and depends on user-supplied environment variables, then you will need to update
|
||||
your plugin's behavior to work with 2.4. If you use a third-party plugin, make sure they explicitly advertise support
|
||||
for 2.4.
|
||||
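As an illustration (a hypothetical plugin snippet, not taken from this document), a `FOO` variable set under `spec.source.plugin.env` reaches the plugin as `ARGOCD_ENV_FOO`:

```yaml
generate:
  command: [sh, -c]
  args:
  - |
    # FOO from the Application spec arrives prefixed as ARGOCD_ENV_FOO
    echo "{\"kind\": \"ConfigMap\", \"apiVersion\": \"v1\", \"metadata\": {\"name\": \"$ARGOCD_APP_NAME\", \"annotations\": {\"foo\": \"$ARGOCD_ENV_FOO\"}}}"
```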
|
||||
4. (Starting in v2.4) Parameters in the Application spec:
|
||||
|
||||
```yaml
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
kind: Application
|
||||
spec:
|
||||
source:
|
||||
plugin:
|
||||
parameters:
|
||||
- name: values-files
|
||||
array: [values-dev.yaml]
|
||||
- name: helm-parameters
|
||||
map:
|
||||
image.tag: v1.2.3
|
||||
```
|
||||
|
||||
The parameters are available as JSON in the `ARGOCD_APP_PARAMETERS` environment variable. The example above would
|
||||
produce this JSON:
|
||||
|
||||
```json
|
||||
[{"name": "values-files", "array": ["values-dev.yaml"]}, {"name": "helm-parameters", "map": {"image.tag": "v1.2.3"}}]
|
||||
```
|
||||
|
||||
!!! note
|
||||
Parameter announcements, even if they specify defaults, are _not_ sent to the plugin in `ARGOCD_APP_PARAMETERS`.
|
||||
Only parameters explicitly set in the Application spec are sent to the plugin. It is up to the plugin to apply
|
||||
the same defaults as the ones announced to the UI.
|
||||
|
||||
The same parameters are also available as individual environment variables. The names of the environment variables
|
||||
follows this convention:
|
||||
|
||||
```yaml
|
||||
- name: some-string-param
|
||||
string: some-string-value
|
||||
# PARAM_SOME_STRING_PARAM=some-string-value
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
kind: Application
|
||||
spec:
|
||||
source:
|
||||
plugin:
|
||||
env:
|
||||
- name: FOO
|
||||
value: bar
|
||||
- name: REV
|
||||
value: test-$ARGOCD_APP_REVISION
|
||||
|
||||
- name: some-array-param
|
||||
value: [item1, item2]
|
||||
# PARAM_SOME_ARRAY_PARAM_0=item1
|
||||
# PARAM_SOME_ARRAY_PARAM_1=item2
|
||||
|
||||
- name: some-map-param
|
||||
map:
|
||||
image.tag: v1.2.3
|
||||
# PARAM_SOME_MAP_PARAM_IMAGE_TAG=v1.2.3
|
||||
```
|
||||
!!! note
|
||||
The `discover.find.command` command only has access to the above environment starting with v2.4.
|
||||
|
||||
Before reaching the `init.command`, `generate.command`, and `discover.find.command` commands, Argo CD prefixes all
|
||||
user-supplied environment variables (#3 above) with `ARGOCD_ENV_`. This prevents users from directly setting
|
||||
potentially-sensitive environment variables.
|
||||
|
||||
If your plugin was written before 2.4 and depends on user-supplied environment variables, then you will need to update
|
||||
your plugin's behavior to work with 2.4. If you use a third-party plugin, make sure they explicitly advertise support
|
||||
for 2.4.
|
||||
|
||||
!!! warning
|
||||
Sanitize/escape user input. As part of Argo CD's manifest generation system, config management plugins are treated with a level of trust. Be
|
||||
4. (Starting in v2.6) Parameters in the Application spec:
|
||||
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
kind: Application
|
||||
spec:
|
||||
source:
|
||||
plugin:
|
||||
parameters:
|
||||
- name: values-files
|
||||
array: [values-dev.yaml]
|
||||
- name: helm-parameters
|
||||
map:
|
||||
image.tag: v1.2.3
|
||||
|
||||
The parameters are available as JSON in the `ARGOCD_APP_PARAMETERS` environment variable. The example above would
|
||||
produce this JSON:
|
||||
|
||||
[{"name": "values-files", "array": ["values-dev.yaml"]}, {"name": "helm-parameters", "map": {"image.tag": "v1.2.3"}}]
|
||||
|
||||
!!! note
|
||||
Parameter announcements, even if they specify defaults, are _not_ sent to the plugin in `ARGOCD_APP_PARAMETERS`.
|
||||
Only parameters explicitly set in the Application spec are sent to the plugin. It is up to the plugin to apply
|
||||
the same defaults as the ones announced to the UI.
|
||||
|
||||
The same parameters are also available as individual environment variables. The names of the environment variables
|
||||
follows this convention:
|
||||
|
||||
- name: some-string-param
|
||||
string: some-string-value
|
||||
# PARAM_SOME_STRING_PARAM=some-string-value
|
||||
|
||||
- name: some-array-param
|
||||
value: [item1, item2]
|
||||
# PARAM_SOME_ARRAY_PARAM_0=item1
|
||||
# PARAM_SOME_ARRAY_PARAM_1=item2
|
||||
|
||||
- name: some-map-param
|
||||
map:
|
||||
image.tag: v1.2.3
|
||||
# PARAM_SOME_MAP_PARAM_IMAGE_TAG=v1.2.3
|
||||
|
||||
!!! warning "Sanitize/escape user input"
|
||||
As part of Argo CD's manifest generation system, config management plugins are treated with a level of trust. Be
|
||||
sure to escape user input in your plugin to prevent malicious input from causing unwanted behavior.
|
||||
|
||||
|
||||
## Using a config management plugin with an Application
|
||||
|
||||
If your CMP is defined in the `argocd-cm` ConfigMap, you can create a new Application using the CLI. Replace
|
||||
@@ -337,7 +333,7 @@ argocd app create <appName> --config-management-plugin <pluginName>
|
||||
If your CMP is defined as a sidecar, you must manually define the Application manifest. You may leave the `name` field
|
||||
empty in the `plugin` section for the plugin to be automatically matched with the Application based on its discovery rules. If you do mention the name, make sure
|
||||
it is either `<metadata.name>-<spec.version>` if version is mentioned in the `ConfigManagementPlugin` spec or else just `<metadata.name>`. When name is explicitly
|
||||
specified only that particular plugin will be used iff it's discovery pattern/command matches the provided application repo.
|
||||
specified only that particular plugin will be used iff its discovery pattern/command matches the provided application repo.
|
||||
|
||||
```yaml
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
@@ -413,7 +409,7 @@ a future release.
|
||||
|
||||
The following will show how to convert an argocd-cm plugin to a sidecar plugin.
|
||||
|
||||
### 1. Convert the ConfigMap entry into a config file
|
||||
### Convert the ConfigMap entry into a config file
|
||||
|
||||
First, copy the plugin's configuration into its own YAML file. Take for example the following ConfigMap entry:
|
||||
|
||||
@@ -424,7 +420,7 @@ data:
|
||||
init: # Optional command to initialize application source directory
|
||||
command: ["sample command"]
|
||||
args: ["sample args"]
|
||||
generate: # Command to generate manifests YAML
|
||||
generate: # Command to generate Kubernetes Objects in either YAML or JSON
|
||||
command: ["sample command"]
|
||||
args: ["sample args"]
|
||||
lockRepo: true # Defaults to false. See below.
|
||||
@@ -441,7 +437,7 @@ spec:
|
||||
init: # Optional command to initialize application source directory
|
||||
command: ["sample command"]
|
||||
args: ["sample args"]
|
||||
generate: # Command to generate manifests YAML
|
||||
generate: # Command to generate Kubernetes Objects in either YAML or JSON
|
||||
command: ["sample command"]
|
||||
args: ["sample args"]
|
||||
```
|
||||
@@ -450,17 +446,17 @@ spec:
|
||||
The `lockRepo` key is not relevant for sidecar plugins, because sidecar plugins do not share a single source repo
|
||||
directory when generating manifests.
|
||||
|
||||
### 2. Write discovery rules for your plugin
|
||||
### Write discovery rules for your plugin
|
||||
|
||||
Sidecar plugins use discovery rules instead of a plugin name to match Applications to plugins.
|
||||
Sidecar plugins can use either discovery rules or a plugin name to match Applications to plugins. If the discovery rule is omitted
|
||||
then you have to explicitly specify the plugin by name in the app spec or else that particular plugin will not match any app.
|
||||
|
||||
Write rules applicable to your plugin [using the instructions above](#1-write-the-plugin-configuration-file) and add
|
||||
them to your configuration file.
|
||||
If you want to use discovery instead of the plugin name to match applications to your plugin, write rules applicable to
|
||||
your plugin [using the instructions above](#1-write-the-plugin-configuration-file) and add them to your configuration
|
||||
file.
|
||||
|
||||
!!! note
|
||||
After installing your sidecar plugin, you may remove the `name` field from the plugin config in your
|
||||
Application specs for auto-discovery or update the name to `<metadata.name>-<spec.version>`
|
||||
if version was mentioned in the `ConfigManagementPlugin` spec or else just use `<metadata.name>`. For example:
|
||||
To use the name instead of discovery, update the name in your application manifest to `<metadata.name>-<spec.version>`
|
||||
if version was mentioned in the `ConfigManagementPlugin` spec or else just use `<metadata.name>`. For example:
|
||||
|
||||
```yaml
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
@@ -473,7 +469,7 @@ spec:
|
||||
name: pluginName # Delete this for auto-discovery (and set `plugin: {}` if `name` was the only value) or use proper sidecar plugin name
|
||||
```
|
||||
|
||||
### 3. Make sure the plugin has access to the tools it needs
|
||||
### Make sure the plugin has access to the tools it needs
|
||||
|
||||
Plugins configured with argocd-cm ran on the Argo CD image. This gave them access to all the tools installed on that
|
||||
image by default (see the [Dockerfile](https://github.com/argoproj/argo-cd/blob/master/Dockerfile) for base image and
|
||||
@@ -482,7 +478,7 @@ installed tools).
|
||||
You can either use a stock image (like busybox) or design your own base image with the tools your plugin needs. For
|
||||
security, avoid using an image with more binaries installed than your plugin actually needs.
|
||||
|
||||
### 4. Test the plugin
|
||||
### Test the plugin
|
||||
|
||||
After installing the plugin as a sidecar [according to the directions above](#installing-a-config-management-plugin),
|
||||
test it out on a few Applications before migrating all of them to the sidecar plugin.
|
||||
@@ -7,7 +7,7 @@ other than what Argo CD bundles. Some reasons to do this might be:
|
||||
* To upgrade/downgrade to a specific version of a tool due to bugs or bug fixes.
|
||||
* To install additional dependencies to be used by kustomize's configmap/secret generators.
|
||||
(e.g. curl, vault, gpg, AWS CLI)
|
||||
* To install a [config management plugin](../user-guide/config-management-plugins.md).
|
||||
* To install a [config management plugin](config-management-plugins.md).
|
||||
|
||||
As the Argo CD repo-server is the single service responsible for generating Kubernetes manifests, it
|
||||
can be customized to use alternative toolchain required by your environment.
|
||||
@@ -51,7 +51,7 @@ following example builds an entirely customized repo-server from a Dockerfile, i
|
||||
dependencies that may be needed for generating manifests.
|
||||
|
||||
```Dockerfile
|
||||
FROM argoproj/argocd:latest
|
||||
FROM argoproj/argocd:v2.5.4 # Replace tag with the appropriate argo version
|
||||
|
||||
# Switch to root for the ability to perform install
|
||||
USER root
|
||||
|
||||
@@ -77,7 +77,7 @@ spec:
|
||||
```
|
||||
|
||||
!!! warning
|
||||
By default, deleting an application will not perform a cascade delete, which would delete its resources. You must add the finalizer if you want this behaviour - which you may well not want.
|
||||
Without the `resources-finalizer.argocd.argoproj.io` finalizer, deleting an application will not delete the resources it manages. To perform a cascading delete, you must add the finalizer. See [App Deletion](../user-guide/app_deletion.md#about-the-deletion-finalizer).
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
@@ -416,9 +416,25 @@ data:
|
||||
|
||||
### SSH known host public keys
|
||||
|
||||
If you are connecting repositories via SSH, Argo CD will need to know the SSH known hosts public key of the repository servers. You can manage the SSH known hosts data in the ConfigMap named `argocd-ssh-known-hosts-cm`. This ConfigMap contains a single key/value pair, with `ssh_known_hosts` as the key and the actual public keys of the SSH servers as data. As opposed to TLS configuration, the public key(s) of each single repository server Argo CD will connect via SSH must be configured, otherwise the connections to the repository will fail. There is no fallback. The data can be copied from any existing `ssh_known_hosts` file, or from the output of the `ssh-keyscan` utility. The basic format is `<servername> <keydata>`, one entry per line.
|
||||
If you are configuring repositories to use SSH, Argo CD will need to know their SSH public keys. In order for Argo CD to connect via SSH the public key(s) for each repository server must be pre-configured in Argo CD (unlike TLS configuration), otherwise the connections to the repository will fail.
|
||||
|
||||
An example ConfigMap object:
|
||||
You can manage the SSH known hosts data in the `argocd-ssh-known-hosts-cm` ConfigMap. This ConfigMap contains a single entry, `ssh_known_hosts`, with the public keys of the SSH servers as its value. The value can be filled in from any existing `ssh_known_hosts` file, or from the output of the `ssh-keyscan` utility (which is part of OpenSSH's client package). The basic format is `<server_name> <keytype> <base64-encoded_key>`, one entry per line.
|
||||
|
||||
Here is an example of running `ssh-keyscan`:
|
||||
```bash
|
||||
$ for host in bitbucket.org github.com gitlab.com ssh.dev.azure.com vs-ssh.visualstudio.com ; do ssh-keyscan $host 2> /dev/null ; done
|
||||
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr/6mrui/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL/BvbZ/iRNhItLqNyieoQj/uh/7Iv4uyH/cV/0b4WDSd3DptigWq84lJubb9t/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf/97P5zauIhxcjX+xHv4M=
|
||||
github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
|
||||
github.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr/C56SJMy/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1/wsjk=
|
||||
github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
|
||||
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
|
||||
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
|
||||
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
|
||||
ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
|
||||
vs-ssh.visualstudio.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
|
||||
```
|
||||
|
||||
Here is an example `ConfigMap` object using the output from `ssh-keyscan` above:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
@@ -431,7 +447,7 @@ metadata:
|
||||
app.kubernetes.io/part-of: argocd
|
||||
data:
|
||||
ssh_known_hosts: |
|
||||
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==
|
||||
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr/6mrui/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL/BvbZ/iRNhItLqNyieoQj/uh/7Iv4uyH/cV/0b4WDSd3DptigWq84lJubb9t/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf/97P5zauIhxcjX+xHv4M=
|
||||
github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
|
||||
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
|
||||
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
|
||||
|
||||
docs/operator-manual/deep_links.md (new file, 63 lines)
@@ -0,0 +1,63 @@
|
||||
# Deep Links
|
||||
|
||||
Deep links allow users to quickly redirect to third-party systems, such as Splunk, Datadog, etc. from the Argo CD
|
||||
user interface.
|
||||
|
||||
An Argo CD administrator can configure links to third-party systems by providing deep link templates in `argocd-cm`. The templates can be conditionally rendered and can reference different types of resources relating to where the links show up, including projects, applications, or individual resources (pods, services, etc.).
|
||||
|
||||
## Configuring Deep Links
|
||||
|
||||
The configuration for Deep Links is present in `argocd-cm` as `<location>.links` fields where
|
||||
`<location>` determines where it will be displayed. The possible values for `<location>` are:
- `project`: all links under this field will show up in the project tab in the Argo CD UI
- `application`: all links under this field will show up in the application summary tab
- `resource`: all links under this field will show up in the resource (deployments, pods, services, etc.) summary tab
|
||||
|
||||
Each link in the list has five subfields:
1. `title`: title/tag that will be displayed in the UI corresponding to that link
2. `url`: the actual URL where the deep link will redirect to; this field can be templated to use data from the corresponding application, project or resource objects (depending on where it is located). This uses the [text/template](https://pkg.go.dev/text/template) package for templating
3. `description` (optional): a description of what the deep link is about
4. `icon.class` (optional): a font-awesome icon class to be used when displaying the links in dropdown menus
5. `if` (optional): a conditional statement that results in either `true` or `false`; it also has access to the same data as the `url` field. If the condition resolves to `true`, the deep link will be displayed, otherwise it will be hidden. If the field is omitted, the deep links will be displayed by default. This uses [antonmedv/expr](https://github.com/antonmedv/expr/tree/master/docs) for evaluating conditions
|
||||
|
||||
!!!note
|
||||
For resources of kind Secret the data fields are redacted but other fields are accessible for templating the deep links.
|
||||
|
||||
!!!warning
|
||||
Make sure to validate the url templates and inputs to prevent data leaks or possible generation of any malicious links.
|
||||
|
||||
|
||||
An example `argocd-cm.yaml` file with deep links and their variations:
|
||||
|
||||
```yaml
|
||||
# sample project level links
|
||||
project.links: |
|
||||
- url: https://myaudit-system.com?project={{.metadata.name}}
|
||||
title: Audit
|
||||
description: system audit logs
|
||||
icon.class: "fa-book"
|
||||
# sample application level links
|
||||
application.links: |
|
||||
# pkg.go.dev/text/template is used for evaluating url templates
|
||||
- url: https://mycompany.splunk.com?search={{.spec.destination.namespace}}
|
||||
title: Splunk
|
||||
# conditionally show link e.g. for specific project
|
||||
# github.com/antonmedv/expr is used for evaluation of conditions
|
||||
- url: https://mycompany.splunk.com?search={{.spec.destination.namespace}}
|
||||
title: Splunk
|
||||
if: spec.project == "default"
|
||||
- url: https://{{.metadata.annotations.splunkhost}}?search={{.spec.destination.namespace}}
|
||||
title: Splunk
|
||||
if: metadata.annotations.splunkhost
|
||||
# sample resource level links
|
||||
resource.links: |
|
||||
- url: https://mycompany.splunk.com?search={{.metadata.namespace}}
|
||||
title: Splunk
|
||||
if: kind == "Pod" || kind == "Deployment"
|
||||
```
|
||||
@@ -15,13 +15,13 @@ export VERSION=v1.0.1
|
||||
Export to a backup:
|
||||
|
||||
```bash
|
||||
docker run -v ~/.kube:/home/argocd/.kube --rm argoproj/argocd:$VERSION argocd admin export > backup.yaml
|
||||
docker run -v ~/.kube:/home/argocd/.kube --rm quay.io/argoproj/argocd:$VERSION argocd admin export > backup.yaml
|
||||
```
|
||||
|
||||
Import from a backup:
|
||||
|
||||
```bash
|
||||
docker run -i -v ~/.kube:/home/argocd/.kube --rm argoproj/argocd:$VERSION argocd admin import - < backup.yaml
|
||||
docker run -i -v ~/.kube:/home/argocd/.kube --rm quay.io/argoproj/argocd:$VERSION argocd admin import - < backup.yaml
|
||||
```
|
||||
|
||||
!!! note
|
||||
|
||||
@@ -5,7 +5,7 @@ Argo CD provides built-in health assessment for several standard Kubernetes type
|
||||
surfaced to the overall Application health status as a whole. The following checks are made for
|
||||
specific types of kubernetes resources:
|
||||
|
||||
### Deployment, ReplicaSet, StatefulSet DaemonSet
|
||||
### Deployment, ReplicaSet, StatefulSet, DaemonSet
|
||||
* Observed generation is equal to desired generation.
|
||||
* Number of **updated** replicas equals the number of desired replicas.
|
||||
|
||||
@@ -16,6 +16,8 @@ with at least one value for `hostname` or `IP`.
|
||||
### Ingress
|
||||
* The `status.loadBalancer.ingress` list is non-empty, with at least one value for `hostname` or `IP`.
|
||||
|
||||
### Job
|
||||
* If job `.spec.suspended` is set to 'true', then the job and app health will be marked as suspended.
|
||||
### PersistentVolumeClaim
|
||||
* The `status.phase` is `Bound`
|
||||
|
||||
@@ -38,7 +40,7 @@ metadata:
|
||||
data:
|
||||
resource.customizations: |
|
||||
argoproj.io/Application:
|
||||
health.lua: |
|
||||
health.lua: |
|
||||
hs = {}
|
||||
hs.status = "Progressing"
|
||||
hs.message = ""
|
||||
@@ -64,11 +66,11 @@ There are two ways to configure a custom health check. The next two sections des
|
||||
|
||||
### Way 1. Define a Custom Health Check in `argocd-cm` ConfigMap
|
||||
|
||||
Custom health checks can be defined in
|
||||
Custom health checks can be defined in
|
||||
```yaml
|
||||
resource.customizations: |
|
||||
<group/kind>:
|
||||
health.lua: |
|
||||
health.lua: |
|
||||
```
|
||||
field of `argocd-cm`. If you are using argocd-operator, this is overridden by [the argocd-operator resourceCustomizations](https://argocd-operator.readthedocs.io/en/latest/reference/argocd/#resource-customizations).
|
||||
|
||||
@@ -101,15 +103,25 @@ data:
|
||||
hs.message = "Waiting for certificate"
|
||||
return hs
|
||||
```
|
||||
In order to prevent duplication of the same custom health check for potentially multiple resources, it is also possible to specify a wildcard in the resource kind, like this:
|
||||
In order to prevent duplication of the custom health check for potentially multiple resources, it is also possible to specify a wildcard in the resource kind, and anywhere in the resource group, like this:
|
||||
|
||||
```yaml
|
||||
resource.customizations: |
|
||||
ec2.aws.crossplane.io/*:
|
||||
health.lua: |
|
||||
...
|
||||
```
|
||||
|
||||
```yaml
|
||||
resource.customizations: |
|
||||
"*.aws.crossplane.io/*":
|
||||
health.lua: |
|
||||
...
|
||||
```
|
||||
|
||||
!!!important
|
||||
Please note the required quotes in the resource customization health section, if the wildcard starts with `*`.
|
||||
|
||||
The `obj` is a global variable which contains the resource. The script must return an object with status and optional message field.
|
||||
The custom health check might return one of the following health statuses:
|
||||
|
||||
|
||||
@@ -1,13 +1,11 @@
|
||||
# High Availability
|
||||
|
||||
Argo CD is largely stateless, all data is persisted as Kubernetes objects, which in turn is stored in Kubernetes' etcd. Redis is only used as a throw-away cache and can be lost. When lost, it will be rebuilt without loss of service.
|
||||
Argo CD is largely stateless. All data is persisted as Kubernetes objects, which in turn is stored in Kubernetes' etcd. Redis is only used as a throw-away cache and can be lost. When lost, it will be rebuilt without loss of service.
|
||||
|
||||
A set of HA manifests are provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.
|
||||
A set of [HA manifests](https://github.com/argoproj/argo-cd/tree/master/manifests/ha) are provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.
|
||||
|
||||
[Manifests ⧉](https://github.com/argoproj/argo-cd/tree/master/manifests)
|
||||
|
||||
!!! note
|
||||
The HA installation will require at least three different nodes due to pod anti-affinity roles in the specs.
|
||||
> **NOTE:** The HA installation will require at least three different nodes due to pod anti-affinity roles in the
|
||||
> specs. Additionally, IPv6 only clusters are not supported.
|
||||
|
||||
## Scaling Up
|
||||
|
||||
@@ -17,11 +15,11 @@ A set of HA manifests are provided for users who wish to run Argo CD in a highly
|
||||
|
||||
The `argocd-repo-server` is responsible for cloning the Git repository, keeping it up to date, and generating manifests using the appropriate tool.
|
||||
|
||||
* `argocd-repo-server` fork/exec config management tool to generate manifests. The fork can fail due to lack of memory and limit on the number of OS threads.
|
||||
The `--parallelismlimit` flag controls how many manifests generations are running concurrently and allows avoiding OOM kills.
|
||||
* `argocd-repo-server` fork/exec config management tool to generate manifests. The fork can fail due to lack of memory or limit on the number of OS threads.
|
||||
The `--parallelismlimit` flag controls how many manifests generations are running concurrently and helps avoid OOM kills.
|
||||
|
||||
* the `argocd-repo-server` ensures that the repository is in a clean state during manifest generation using config management tools such as Kustomize, Helm
|
||||
or custom plugin. As a result Git repositories with multiple applications might be affect repository server performance.
|
||||
or custom plugin. As a result Git repositories with multiple applications might affect repository server performance.
|
||||
Read [Monorepo Scaling Considerations](#monorepo-scaling-considerations) for more information.
|
||||
|
||||
* `argocd-repo-server` clones the repository into `/tmp` (or the path specified in the `TMPDIR` env variable). The Pod might run out of disk space if it has too many repositories
|
||||
@@ -30,7 +28,7 @@ or repositories has a lot of files. To avoid this problem mount persistent volum
|
||||
* `argocd-repo-server` uses `git ls-remote` to resolve ambiguous revisions such as `HEAD`, a branch or a tag name. This operation happens fairly frequently
|
||||
and might fail. To avoid failed syncs, use the `ARGOCD_GIT_ATTEMPTS_COUNT` environment variable to retry failed requests.
|
||||
|
||||
* `argocd-repo-server` Every 3m (by default) Argo CD checks for changes to the app manifests. Argo CD assumes by default that manifests only change when the repo changes, so it caches generated manifests (for 24h by default). With Kustomize remote bases, or Helm patch releases, the manifests can change even though the repo has not changed. By reducing the cache time, you can get the changes without waiting for 24h. Use `--repo-cache-expiration duration`, and we'd suggest in low volume environments you try '1h'. Bear in mind this will negate the benefit of caching if set too low.
|
||||
* `argocd-repo-server` Every 3m (by default) Argo CD checks for changes to the app manifests. Argo CD assumes by default that manifests only change when the repo changes, so it caches the generated manifests (for 24h by default). With Kustomize remote bases, or Helm patch releases, the manifests can change even though the repo has not changed. By reducing the cache time, you can get the changes without waiting for 24h. Use `--repo-cache-expiration duration`, and we'd suggest in low volume environments you try '1h'. Bear in mind that this will negate the benefits of caching if set too low.
|
||||
|
||||
* `argocd-repo-server` forks/execs config management tools such as `helm` or `kustomize` and enforces a 90 second timeout. The timeout can be increased using the `ARGOCD_EXEC_TIMEOUT` env variable. The value should be in Go time duration string format, for example, `2m30s`. A tuning sketch follows this list.
|
||||
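A minimal sketch of where these knobs live (the flag and variable names come from the bullets above; the values and the surrounding Deployment fields are illustrative only):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  template:
    spec:
      containers:
      - name: argocd-repo-server
        # Flags such as --parallelismlimit and --repo-cache-expiration are appended
        # to the container's existing command/args in your install manifests.
        env:
        - name: ARGOCD_GIT_ATTEMPTS_COUNT   # retry failed git ls-remote requests
          value: "3"
        - name: ARGOCD_EXEC_TIMEOUT         # timeout for helm/kustomize invocations
          value: "2m30s"
```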
|
||||
|
||||
@@ -538,15 +538,15 @@ spec:
  - secretName: secret-yourdomain-com
  rules:
  - host: argocd.yourdomain.com
    http:
      paths:
      - pathType: ImplementationSpecific
        path: "/*" # "*" is needed. Without this, the UI Javascript and CSS will not load properly
        backend:
          service:
            name: argocd-server
            port:
              number: 80
```

If you are using version `1.21.3-gke.1600` or later, you should use the following Ingress resource:
@@ -563,15 +563,15 @@ spec:
  - secretName: secret-yourdomain-com
  rules:
  - host: argocd.yourdomain.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: argocd-server
            port:
              number: 80
```

As you may already know, it can take a few minutes for the load balancer to be deployed and become ready to accept connections. Once it is ready, get the public IP address of your load balancer, go to your DNS provider (Google or a third party) and point your domain or subdomain (e.g. argocd.yourdomain.com) to that IP address.
@@ -74,7 +74,7 @@ kind: Kustomization

namespace: argocd
resources:
- github.com/argoproj/argo-cd/manifests/ha/base?ref=v2.6.2
```

## Helm
@@ -17,8 +17,9 @@ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/st

* Add the email username and password token to the `argocd-notifications-secret` Secret:

```bash
EMAIL_USER=<your-username>
PASSWORD=<your-password>

kubectl apply -n argocd -f - << EOF
apiVersion: v1
kind: Secret
@@ -58,15 +58,19 @@ metadata:

![](https://user-images.githubusercontent.com/18019529/139921229-e8fd7502-1e82-4b4b-9ce2-c542afe9580e.png)

```yaml
template.app-deployed: |
  message: |
    Application {{.app.metadata.name}} is now running new version of deployments manifests.
  github:
    repoURLPath: "{{.app.spec.source.repoURL}}"
    revisionPath: "{{.app.status.operationState.syncResult.revision}}"
    status:
      state: success
      label: "continuous-delivery/{{.app.metadata.name}}"
      targetURL: "{{.context.argocdUrl}}/applications/{{.app.metadata.name}}?operation=true"
```

**Notes**:

- If the message is set to 140 characters or more, it will be truncated.
- If `github.repoURLPath` and `github.revisionPath` are the same as above, they can be omitted.

@@ -15,9 +15,11 @@ spec:
  - '*'

  # Only permit applications to deploy to the guestbook namespace in the same cluster
  # Destination clusters can be identified by 'server', 'name', or both.
  destinations:
  - namespace: guestbook
    server: https://kubernetes.default.svc
    name: in-cluster

  # Deny all cluster-scoped resources from being created, except for Namespace
  clusterResourceWhitelist:
@@ -84,3 +86,13 @@ spec:
    clusters:
    - in-cluster
    - cluster1

  # By default, apps may sync to any cluster specified under the `destinations` field, even if they are not
  # scoped to this project. Set the following field to `true` to restrict apps in this project to only clusters
  # scoped to this project.
  permitOnlyProjectScopedClusters: false

  # When using Applications-in-any-namespace, this field determines which namespaces this AppProject permits
  # Applications to reside in. Details: https://argo-cd.readthedocs.io/en/stable/operator-manual/app-any-namespace/
  sourceNamespaces:
  - "argocd-apps-*"
@@ -64,7 +64,7 @@ See [Web-based Terminal](web_based_terminal.md) for more info.
#### The `applicationsets` resource

[ApplicationSets](applicationset/index.md) provide a declarative way to automatically create/update/delete Applications.

Granting `applicationsets, create` effectively grants the ability to create Applications. While it doesn't allow the
user to create Applications directly, they can create Applications via an ApplicationSet.
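
To illustrate the weight of that permission, here is a sketch of an RBAC policy grant (the role and group names are made up for this example) that allows ApplicationSet creation and therefore, indirectly, Application creation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    # members of this role can create ApplicationSets in any project,
    # and with them any Applications those ApplicationSets generate
    p, role:appset-author, applicationsets, create, */*, allow
    g, my-sso-group, role:appset-author
```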
@@ -9,9 +9,8 @@ Operators can add actions to custom resources in form of a Lua script and expand

Argo CD supports custom resource actions written in [Lua](https://www.lua.org/). This is useful if you:

* Have a custom resource for which Argo CD does not provide any built-in actions.
* Have a commonly performed manual task that might be error prone if executed by users via `kubectl`.

You can define your own custom resource actions in the `argocd-cm` ConfigMap.
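
To give a sense of the shape of such a definition, here is a sketch modeled on the commonly cited "restart a Deployment" action; the action name and resource kind are only examples, so check the resource actions documentation for the exact keys supported by your version:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations.actions.apps_Deployment: |
    # discovery.lua decides which actions are offered for a given resource
    discovery.lua: |
      actions = {}
      actions["restart"] = {}
      return actions
    definitions:
    - name: restart
      # action.lua mutates the object; bumping this annotation rolls the
      # Deployment's pods, similar to `kubectl rollout restart`
      action.lua: |
        local os = require("os")
        if obj.spec.template.metadata == nil then
            obj.spec.template.metadata = {}
        end
        if obj.spec.template.metadata.annotations == nil then
            obj.spec.template.metadata.annotations = {}
        end
        obj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] = os.date("!%Y-%m-%dT%H:%M:%SZ")
        return obj
```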
@@ -1,6 +1,11 @@
# Secret Management

Argo CD is un-opinionated about how secrets are managed. There are many ways to do it, and there's no one-size-fits-all solution.

Many solutions use plugins to inject secrets into the application manifests. See [Mitigating Risks of Secret-Injection Plugins](#mitigating-risks-of-secret-injection-plugins)
below to make sure you use those plugins securely.

Here are some ways people are doing GitOps secrets:

* [Bitnami Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets)
* [External Secrets Operator](https://github.com/external-secrets/external-secrets)
@@ -15,3 +20,17 @@ Argo CD is un-opinionated about how secrets are managed. There's many ways to do
* [Kubernetes Secrets Store CSI Driver](https://github.com/kubernetes-sigs/secrets-store-csi-driver)

For discussion, see [#1364](https://github.com/argoproj/argo-cd/issues/1364)

## Mitigating Risks of Secret-Injection Plugins

Argo CD caches the manifests generated by plugins, along with the injected secrets, in its Redis instance. Those
manifests are also available via the repo-server API (a gRPC service). This means that the secrets are available to
anyone who has access to the Redis instance or to the repo-server.

Consider these steps to mitigate the risks of secret-injection plugins:

1. Set up network policies to prevent direct access to Argo CD components (Redis and the repo-server). Make sure your
cluster supports those network policies and can actually enforce them (a sketch of such a policy follows this list).
2. Consider running Argo CD on its own cluster, with no other applications running on it.
3. [Enable password authentication on the Redis instance](https://github.com/argoproj/argo-cd/issues/3130) (currently
only supported for non-HA Argo CD installations).
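
As a sketch of the first step, a NetworkPolicy along these lines only admits Redis traffic from the Argo CD components that need it. The selector labels and namespace follow the default installation's conventions, but verify them against your own manifests:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: argocd-redis-restrict-ingress
  namespace: argocd
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: argocd-redis
  policyTypes:
  - Ingress
  ingress:
  - from:
    # only the Argo CD components that legitimately talk to Redis
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: argocd-server
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: argocd-repo-server
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: argocd-application-controller
    ports:
    - protocol: TCP
      port: 6379
```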
@@ -15,7 +15,9 @@ argocd-repo-server [flags]

```
      --allow-oob-symlinks                          Allow out-of-bounds symlinks in repositories (not recommended)
      --default-cache-expiration duration           Cache expiration default (default 24h0m0s)
      --disable-helm-manifest-max-extracted-size    Disable maximum size of helm manifest archives when extracted
      --disable-tls                                 Disable TLS on the gRPC endpoint
      --helm-manifest-max-extracted-size string     Maximum size of helm manifest archives when extracted (default "1G")
  -h, --help                                        help for argocd-repo-server
      --logformat string                            Set the logging format. One of: text|json (default "text")
      --loglevel string                             Set the logging level. One of: debug|info|warn|error (default "info")
@@ -1,4 +1,4 @@

# v1.8 to 2.0

## Redis Upgraded to v6.2.1

@@ -36,7 +36,7 @@ data:

## Removed Python from the base image

If you are using a [Config Management Plugin](../config-management-plugins.md) that relies on Python, you
will need to build a custom image on the Argo CD base to install Python.

## Upgraded Kustomize Version

@@ -1,5 +1,26 @@

# v2.3 to 2.4

## Known Issues

### Broken `project` filter before 2.4.27

Argo CD 2.4.0 introduced a breaking API change, renaming the `project` filter to `projects`.

#### Impact to API clients

A similar issue applies to other API clients which communicate with the Argo CD API server via its REST API. If the
client uses the `project` field to filter projects, the filter will not be applied. **The failing project filter could
have detrimental consequences if, for example, you rely on it to list Applications to be deleted.**

#### Impact to CLI clients

CLI clients older than v2.4.0 rely on client-side filtering and are not impacted by this bug.

#### How to fix the problem

Upgrade to Argo CD >=2.4.27, >=2.5.15, or >=2.6.6. These versions of Argo CD accept both `project` and `projects` as
valid filters.

## KSonnet support is removed

Ksonnet was deprecated in [2019](https://github.com/ksonnet/ksonnet/pull/914/files) and is no longer maintained.
@@ -176,7 +197,7 @@ that uses the Service Account for auth), be sure to test before deploying the 2.
### Remove the shared volume from any sidecar plugins

As a security enhancement, [sidecar plugins](../config-management-plugins.md#option-2-configure-plugin-via-sidecar)
no longer share the /tmp directory with the repo-server.
If you have one or more sidecar plugins enabled, replace the /tmp volume mount for each sidecar to use a volume specific to that plugin.
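
A minimal sketch of that layout (the plugin container and volume names are illustrative), giving the sidecar its own `emptyDir` for `/tmp` instead of a volume shared with the repo-server:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
  namespace: argocd
spec:
  template:
    spec:
      containers:
      - name: my-cmp-plugin                           # the sidecar plugin container
        command: [/var/run/argocd/argocd-cmp-server]
        volumeMounts:
        - mountPath: /tmp
          name: cmp-tmp                               # scratch space private to this plugin
      volumes:
      - name: cmp-tmp
        emptyDir: {}
```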