Mirror of https://github.com/argoproj/argo-cd.git, synced 2026-02-20 01:28:45 +01:00.

Compare commits: 239 commits (dependabot...stable).
The commit table lists the 239 compared commits by abbreviated SHA1 only; the Author, Date, and message columns are empty in this view.
.github/ISSUE_TEMPLATE/release.md (vendored, 4 changed lines)

@@ -23,6 +23,7 @@ Target GA date: ___. __, ____
- [ ] Verify the release on [GitHub releases](https://github.com/argoproj/argo-cd/releases)
- [ ] Verify the container image on [Quay.io](https://quay.io/repository/argoproj/argocd?tab=tags)
- [ ] Confirm the new version appears in [Read the Docs](https://argo-cd.readthedocs.io/)
- [ ] Verify the docs release build in https://app.readthedocs.org/projects/argo-cd/ succeeded and retry if failed (requires an Approver with admin creds to readthedocs)
- [ ] Announce RC1 release
- [ ] Confirm that tweet and blog post are ready
- [ ] Publish tweet and blog post
@@ -64,6 +65,7 @@ Target GA date: ___. __, ____
- [ ] Verify the container image on [Quay.io](https://quay.io/repository/argoproj/argocd?tab=tags)
- [ ] Verify the `stable` tag has been updated
- [ ] Confirm the new version appears in [Read the Docs](https://argo-cd.readthedocs.io/)
- [ ] Verify the docs release build in https://app.readthedocs.org/projects/argo-cd/ succeeded and retry if failed (requires an Approver with admin creds to readthedocs)
- [ ] Announce GA release with EOL notice
- [ ] Confirm that tweet and blog post are ready
- [ ] Publish tweet and blog post
@@ -81,4 +83,4 @@ Target GA date: ___. __, ____
Thanks to all the folks who spent their time contributing to this release in any way possible!
```
- [ ] (For the next release champion) Review the [items scheduled for the next release](https://github.com/orgs/argoproj/projects/25). If any item does not have an assignee who can commit to finish the feature, move it to the next release.
- [ ] (For the next release champion) Schedule a time mid-way through the release cycle to review items again.
- [ ] (For the next release champion) Schedule a time mid-way through the release cycle to review items again.
.github/workflows/bump-major-version.yaml (vendored, 6 changed lines)

@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -37,7 +37,7 @@ jobs:
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd

- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Add ~/go/bin to PATH
@@ -74,7 +74,7 @@ jobs:
rsync -a --exclude=.git /home/runner/go/src/github.com/argoproj/argo-cd/ ../argo-cd

- name: Create pull request
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725 # v8.0.0
with:
commit-message: "Bump major version to ${{ steps.get-target-version.outputs.TARGET_VERSION }}"
title: "Bump major version to ${{ steps.get-target-version.outputs.TARGET_VERSION }}"
.github/workflows/cherry-pick-single.yml (vendored, 2 changed lines)

@@ -38,7 +38,7 @@ jobs:
private-key: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}

- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
token: ${{ steps.generate-token.outputs.token }}
.github/workflows/ci-build.yaml (vendored, 100 changed lines)

@@ -14,7 +14,7 @@ on:
env:
# Golang version to use across CI steps
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.25.3'
GOLANG_VERSION: '1.25.5'

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -31,8 +31,8 @@ jobs:
frontend: ${{ steps.filter.outputs.frontend_any_changed }}
docs: ${{ steps.filter.outputs.docs_any_changed }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
id: filter
with:
# Any file which is not under docs/, ui/ or is not a markdown file is counted as a backend file
@@ -55,9 +55,9 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Download all Go modules
@@ -75,13 +75,13 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Restore go build cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -102,13 +102,13 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Run golangci-lint
uses: golangci/golangci-lint-action@4afd733a84b1f43292c63897423277bb7f4313a9 # v8.0.0
uses: golangci/golangci-lint-action@1e7e51e771db61008b38414a730f564565cf7c20 # v9.2.0
with:
# renovate: datasource=go packageName=github.com/golangci/golangci-lint versioning=regex:^v(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?$
version: v2.5.0
@@ -128,11 +128,11 @@ jobs:
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Install required packages
@@ -152,7 +152,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -192,11 +192,11 @@ jobs:
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Install required packages
@@ -216,7 +216,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -250,15 +250,16 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Create symlink in GOPATH
# generalizing repo name for forks: ${{ github.event.repository.name }}
run: |
mkdir -p ~/go/src/github.com/argoproj
cp -a ../argo-cd ~/go/src/github.com/argoproj
cp -a ../${{ github.event.repository.name }} ~/go/src/github.com/argoproj
- name: Add ~/go/bin to PATH
run: |
echo "/home/runner/go/bin" >> $GITHUB_PATH
@@ -270,12 +271,14 @@ jobs:
# We need to vendor go modules for codegen yet
go mod download
go mod vendor -v
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
# generalizing repo name for forks: ${{ github.event.repository.name }}
working-directory: /home/runner/go/src/github.com/argoproj/${{ github.event.repository.name }}
- name: Install toolchain for codegen
run: |
make install-codegen-tools-local
make install-go-tools-local
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
# generalizing repo name for forks: ${{ github.event.repository.name }}
working-directory: /home/runner/go/src/github.com/argoproj/${{ github.event.repository.name }}
# We install kustomize in the dist directory
- name: Add dist to PATH
run: |
@@ -286,12 +289,14 @@ jobs:
export GOPATH=$(go env GOPATH)
git checkout -- go.mod go.sum
make codegen-local
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
# generalizing repo name for forks: ${{ github.event.repository.name }}
working-directory: /home/runner/go/src/github.com/argoproj/${{ github.event.repository.name }}
- name: Check nothing has changed
run: |
set -xo pipefail
git diff --exit-code -- . ':!go.sum' ':!go.mod' ':!assets/swagger.json' | tee codegen.patch
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
# generalizing repo name for forks: ${{ github.event.repository.name }}
working-directory: /home/runner/go/src/github.com/argoproj/${{ github.event.repository.name }}

build-ui:
name: Build, test & lint UI code
@@ -302,15 +307,15 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup NodeJS
uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
# renovate: datasource=node-version packageName=node versioning=node
node-version: '22.9.0'
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -335,7 +340,7 @@ jobs:
shellcheck:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- run: |
sudo apt-get install shellcheck
shellcheck -e SC2059 -e SC2154 -e SC2034 -e SC2016 -e SC1091 $(find . -type f -name '*.sh' | grep -v './ui/node_modules') | tee sc.log
@@ -354,12 +359,12 @@ jobs:
sonar_secret: ${{ secrets.SONAR_TOKEN }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -384,7 +389,7 @@ jobs:
run: |
go tool covdata percent -i=test-results,e2e-code-coverage/applicationset-controller,e2e-code-coverage/repo-server,e2e-code-coverage/app-controller,e2e-code-coverage/commit-server -o test-results/full-coverage.out
- name: Upload code coverage information to codecov.io
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
with:
files: test-results/full-coverage.out
fail_ci_if_error: true
@@ -401,35 +406,31 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
uses: SonarSource/sonarqube-scan-action@1a6d90ebcb0e6a6b1d87e37ba693fe453195ae25 # v5.3.1
uses: SonarSource/sonarqube-scan-action@a31c9398be7ace6bbfaf30c0bd5d415f843d45e9 # v7.0.0
if: env.sonar_secret != ''
test-e2e:
name: Run end-to-end tests
if: ${{ needs.changes.outputs.backend == 'true' }}
runs-on: oracle-vm-16cpu-64gb-x86-64
runs-on: ${{ github.repository == 'argoproj/argo-cd' && 'oracle-vm-16cpu-64gb-x86-64' || 'ubuntu-22.04' }}
strategy:
fail-fast: false
matrix:
# latest: true means that this version mush upload the coverage report to codecov.io
# We designate the latest version because we only collect code coverage for that version.
k3s:
- version: v1.33.1
- version: v1.34.2
latest: true
- version: v1.33.1
latest: false
- version: v1.32.1
latest: false
- version: v1.31.0
latest: false
- version: v1.30.4
latest: false
needs:
- build-go
- changes
env:
GOPATH: /home/ubuntu/go
ARGOCD_FAKE_IN_CLUSTER: 'true'
ARGOCD_SSH_DATA_PATH: '/tmp/argo-e2e/app/config/ssh'
ARGOCD_TLS_DATA_PATH: '/tmp/argo-e2e/app/config/tls'
ARGOCD_E2E_SSH_KNOWN_HOSTS: '../fixture/certs/ssh_known_hosts'
ARGOCD_E2E_K3S: 'true'
ARGOCD_IN_CI: 'true'
ARGOCD_E2E_APISERVER_PORT: '8088'
@@ -446,11 +447,14 @@ jobs:
swap-storage: false
tool-cache: false
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Set GOPATH
run: |
echo "GOPATH=$HOME/go" >> $GITHUB_ENV
- name: GH actions workaround - Kill XSP4 process
run: |
sudo pkill mono || true
@@ -461,19 +465,19 @@ jobs:
set -x
curl -sfL https://get.k3s.io | sh -
sudo chmod -R a+rw /etc/rancher/k3s
sudo mkdir -p $HOME/.kube && sudo chown -R ubuntu $HOME/.kube
sudo mkdir -p $HOME/.kube && sudo chown -R $(whoami) $HOME/.kube
sudo k3s kubectl config view --raw > $HOME/.kube/config
sudo chown ubuntu $HOME/.kube/config
sudo chown $(whoami) $HOME/.kube/config
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Add ~/go/bin to PATH
run: |
echo "/home/ubuntu/go/bin" >> $GITHUB_PATH
echo "$HOME/go/bin" >> $GITHUB_PATH
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
@@ -495,11 +499,11 @@ jobs:
run: |
docker pull ghcr.io/dexidp/dex:v2.43.0
docker pull argoproj/argo-cd-ci-builder:v1.0.0
docker pull redis:8.2.1-alpine
docker pull redis:8.2.3-alpine
- name: Create target directory for binaries in the build-process
run: |
mkdir -p dist
chown ubuntu dist
chown $(whoami) dist
- name: Run E2E server and wait for it being available
timeout-minutes: 30
run: |
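The E2E hunks above drop the hardcoded `ubuntu` runner user and the `/home/ubuntu` paths in favour of `$(whoami)` and `$HOME`, so the same steps work both on the self-hosted Oracle VM runners (user `ubuntu`) and on GitHub-hosted runners (typically user `runner`). A minimal sketch of the resulting user-agnostic k3s/kubeconfig bootstrap, assuming sudo access on the runner:

```shell
# Sketch of the user-agnostic k3s + kubeconfig setup used by the E2E job.
# Mirrors the workflow steps above; assumes sudo and network access.
set -x
curl -sfL https://get.k3s.io | sh -
sudo chmod -R a+rw /etc/rancher/k3s
sudo mkdir -p "$HOME/.kube" && sudo chown -R "$(whoami)" "$HOME/.kube"
sudo k3s kubectl config view --raw > "$HOME/.kube/config"
sudo chown "$(whoami)" "$HOME/.kube/config"
sudo chmod go-r "$HOME/.kube/config"
kubectl version
```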
.github/workflows/codeql.yml (vendored, 4 changed lines)

@@ -29,11 +29,11 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1

# Use correct go version. https://github.com/github/codeql-action/issues/1842#issuecomment-1704398087
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version-file: go.mod
.github/workflows/image-reuse.yaml (vendored, 6 changed lines)

@@ -56,18 +56,18 @@ jobs:
image-digest: ${{ steps.image.outputs.digest }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
if: ${{ github.ref_type == 'tag'}}

- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
if: ${{ github.ref_type != 'tag'}}

- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ inputs.go-version }}
cache: false
.github/workflows/image.yaml (vendored, 60 changed lines)

@@ -19,16 +19,49 @@ jobs:
set-vars:
permissions:
contents: read
if: github.repository == 'argoproj/argo-cd'
# Always run to calculate variables - other jobs check outputs
runs-on: ubuntu-22.04
outputs:
image-tag: ${{ steps.image.outputs.tag}}
platforms: ${{ steps.platforms.outputs.platforms }}
image_namespace: ${{ steps.image.outputs.image_namespace }}
image_repository: ${{ steps.image.outputs.image_repository }}
quay_image_name: ${{ steps.image.outputs.quay_image_name }}
ghcr_image_name: ${{ steps.image.outputs.ghcr_image_name }}
ghcr_provenance_image: ${{ steps.image.outputs.ghcr_provenance_image }}
allow_ghcr_publish: ${{ steps.image.outputs.allow_ghcr_publish }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1

- name: Set image tag for ghcr
run: echo "tag=$(cat ./VERSION)-${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
- name: Set image tag and names
run: |
# Calculate image tag
TAG="$(cat ./VERSION)-${GITHUB_SHA::8}"
echo "tag=$TAG" >> $GITHUB_OUTPUT

# Calculate image names with defaults
IMAGE_NAMESPACE="${{ vars.IMAGE_NAMESPACE || 'argoproj' }}"
IMAGE_REPOSITORY="${{ vars.IMAGE_REPOSITORY || 'argocd' }}"
GHCR_NAMESPACE="${{ vars.GHCR_NAMESPACE || github.repository }}"
GHCR_REPOSITORY="${{ vars.GHCR_REPOSITORY || 'argocd' }}"

echo "image_namespace=$IMAGE_NAMESPACE" >> $GITHUB_OUTPUT
echo "image_repository=$IMAGE_REPOSITORY" >> $GITHUB_OUTPUT

# Construct image name
echo "quay_image_name=quay.io/$IMAGE_NAMESPACE/$IMAGE_REPOSITORY:latest" >> $GITHUB_OUTPUT

ALLOW_GHCR_PUBLISH=false
if [[ "${{ github.repository }}" == "argoproj/argo-cd" || "$GHCR_NAMESPACE" != argoproj/* ]]; then
ALLOW_GHCR_PUBLISH=true
echo "ghcr_image_name=ghcr.io/$GHCR_NAMESPACE/$GHCR_REPOSITORY:$TAG" >> $GITHUB_OUTPUT
echo "ghcr_provenance_image=ghcr.io/$GHCR_NAMESPACE/$GHCR_REPOSITORY" >> $GITHUB_OUTPUT
else
echo "GhCR publish skipped: refusing to push to namespace '$GHCR_NAMESPACE'. Please override GHCR_* for forks." >&2
echo "ghcr_image_name=" >> $GITHUB_OUTPUT
echo "ghcr_provenance_image=" >> $GITHUB_OUTPUT
fi
echo "allow_ghcr_publish=$ALLOW_GHCR_PUBLISH" >> $GITHUB_OUTPUT
id: image

- name: Determine image platforms to use
@@ -48,12 +81,12 @@ jobs:
contents: read
packages: write # for pushing packages to GHCR, which is used by cd.apps.argoproj.io to avoid polluting Quay with tags
id-token: write # for creating OIDC tokens for signing.
if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name != 'push' }}
if: ${{ (github.repository == 'argoproj/argo-cd' || needs.set-vars.outputs.image_namespace != 'argoproj') && github.event_name != 'push' }}
uses: ./.github/workflows/image-reuse.yaml
with:
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
go-version: 1.25.5
platforms: ${{ needs.set-vars.outputs.platforms }}
push: false

@@ -63,14 +96,14 @@ jobs:
contents: read
packages: write # for pushing packages to GHCR, which is used by cd.apps.argoproj.io to avoid polluting Quay with tags
id-token: write # for creating OIDC tokens for signing.
if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name == 'push' }}
if: ${{ (github.repository == 'argoproj/argo-cd' || needs.set-vars.outputs.image_namespace != 'argoproj') && github.event_name == 'push' }}
uses: ./.github/workflows/image-reuse.yaml
with:
quay_image_name: quay.io/argoproj/argocd:latest
ghcr_image_name: ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }}
quay_image_name: ${{ needs.set-vars.outputs.quay_image_name }}
ghcr_image_name: ${{ needs.set-vars.outputs.ghcr_image_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
go-version: 1.25.5
platforms: ${{ needs.set-vars.outputs.platforms }}
push: true
secrets:
@@ -81,16 +114,17 @@ jobs:

build-and-publish-provenance: # Push attestations to GHCR, latest image is polluting quay.io
needs:
- set-vars
- build-and-publish
permissions:
actions: read # for detecting the Github Actions environment.
id-token: write # for creating OIDC tokens for signing.
packages: write # for uploading attestations. (https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#known-issues)
if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name == 'push' }}
if: ${{ (github.repository == 'argoproj/argo-cd' || needs.set-vars.outputs.image_namespace != 'argoproj') && github.event_name == 'push' && needs.set-vars.outputs.allow_ghcr_publish == 'true'}}
# Must be refernced by a tag. https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#referencing-the-slsa-generator
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v2.1.0
with:
image: ghcr.io/argoproj/argo-cd/argocd
image: ${{ needs.set-vars.outputs.ghcr_provenance_image }}
digest: ${{ needs.build-and-publish.outputs.image-digest }}
registry-username: ${{ github.actor }}
secrets:
@@ -106,7 +140,7 @@ jobs:
if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name == 'push' }}
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- run: git clone "https://$TOKEN@github.com/argoproj/argoproj-deployments"
env:
TOKEN: ${{ secrets.TOKEN }}
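The new `set-vars` step gates GHCR publishing so that a fork keeping the default `GHCR_NAMESPACE` (its own `owner/repo`) may publish to its own namespace, while anything that would push into `argoproj/*` from outside the upstream repository is refused. A standalone sketch of that gate, using hypothetical fork values:

```shell
# Standalone sketch of the GHCR gating logic from the set-vars step.
# GITHUB_REPOSITORY, GHCR_REPOSITORY, and TAG are hypothetical fork values.
GITHUB_REPOSITORY="my-org/argo-cd"                       # a fork
GHCR_NAMESPACE="${GHCR_NAMESPACE:-$GITHUB_REPOSITORY}"   # default: the fork itself
GHCR_REPOSITORY="argocd"
TAG="9.9.9-deadbeef"

ALLOW_GHCR_PUBLISH=false
if [[ "$GITHUB_REPOSITORY" == "argoproj/argo-cd" || "$GHCR_NAMESPACE" != argoproj/* ]]; then
  ALLOW_GHCR_PUBLISH=true
  echo "would publish ghcr.io/$GHCR_NAMESPACE/$GHCR_REPOSITORY:$TAG"
else
  echo "GHCR publish skipped: refusing to push to namespace '$GHCR_NAMESPACE'" >&2
fi
echo "allow_ghcr_publish=$ALLOW_GHCR_PUBLISH"
```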
.github/workflows/init-release.yaml (vendored, 10 changed lines)

@@ -21,9 +21,15 @@ jobs:
pull-requests: write # for peter-evans/create-pull-request to create a PR
name: Automatically generate version and manifests on ${{ inputs.TARGET_BRANCH }}
runs-on: ubuntu-22.04
env:
# Calculate image names with defaults, this will be used in the make manifests-local command
# to generate the correct image name in the manifests
IMAGE_REGISTRY: ${{ vars.IMAGE_REGISTRY || 'quay.io' }}
IMAGE_NAMESPACE: ${{ vars.IMAGE_NAMESPACE || 'argoproj' }}
IMAGE_REPOSITORY: ${{ vars.IMAGE_REPOSITORY || 'argocd' }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -64,7 +70,7 @@ jobs:
git stash pop

- name: Create pull request
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725 # v8.0.0
with:
commit-message: "Bump version to ${{ inputs.TARGET_VERSION }}"
title: "Bump version to ${{ inputs.TARGET_VERSION }} on ${{ inputs.TARGET_BRANCH }} branch"
.github/workflows/release.yaml (vendored, 76 changed lines)

@@ -11,21 +11,22 @@ permissions: {}

env:
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.25.3' # Note: go-version must also be set in job argocd-image.with.go-version
GOLANG_VERSION: '1.25.5' # Note: go-version must also be set in job argocd-image.with.go-version

jobs:
argocd-image:
needs: [setup-variables]
permissions:
contents: read
id-token: write # for creating OIDC tokens for signing.
packages: write # used to push images to `ghcr.io` if used.
if: github.repository == 'argoproj/argo-cd'
if: github.repository == 'argoproj/argo-cd' || needs.setup-variables.outputs.allow_fork_release == 'true'
uses: ./.github/workflows/image-reuse.yaml
with:
quay_image_name: quay.io/argoproj/argocd:${{ github.ref_name }}
quay_image_name: ${{ needs.setup-variables.outputs.quay_image_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
go-version: 1.25.5
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
secrets:
@@ -34,14 +35,20 @@ jobs:

setup-variables:
name: Setup Release Variables
if: github.repository == 'argoproj/argo-cd'
if: github.repository == 'argoproj/argo-cd' || (github.repository_owner != 'argoproj' && vars.ENABLE_FORK_RELEASES == 'true' && vars.IMAGE_NAMESPACE && vars.IMAGE_NAMESPACE != 'argoproj')
runs-on: ubuntu-22.04
outputs:
is_pre_release: ${{ steps.var.outputs.is_pre_release }}
is_latest_release: ${{ steps.var.outputs.is_latest_release }}
enable_fork_releases: ${{ steps.var.outputs.enable_fork_releases }}
image_namespace: ${{ steps.var.outputs.image_namespace }}
image_repository: ${{ steps.var.outputs.image_repository }}
quay_image_name: ${{ steps.var.outputs.quay_image_name }}
provenance_image: ${{ steps.var.outputs.provenance_image }}
allow_fork_release: ${{ steps.var.outputs.allow_fork_release }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -67,18 +74,36 @@ jobs:
fi
echo "is_pre_release=$PRE_RELEASE" >> $GITHUB_OUTPUT
echo "is_latest_release=$IS_LATEST" >> $GITHUB_OUTPUT

# Calculate configuration with defaults
ENABLE_FORK_RELEASES="${{ vars.ENABLE_FORK_RELEASES || 'false' }}"
IMAGE_NAMESPACE="${{ vars.IMAGE_NAMESPACE || 'argoproj' }}"
IMAGE_REPOSITORY="${{ vars.IMAGE_REPOSITORY || 'argocd' }}"

echo "enable_fork_releases=$ENABLE_FORK_RELEASES" >> $GITHUB_OUTPUT

echo "image_namespace=$IMAGE_NAMESPACE" >> $GITHUB_OUTPUT
echo "image_repository=$IMAGE_REPOSITORY" >> $GITHUB_OUTPUT
echo "quay_image_name=quay.io/$IMAGE_NAMESPACE/$IMAGE_REPOSITORY:${{ github.ref_name }}" >> $GITHUB_OUTPUT
echo "provenance_image=quay.io/$IMAGE_NAMESPACE/$IMAGE_REPOSITORY" >> $GITHUB_OUTPUT

ALLOW_FORK_RELEASE=false
if [[ "${{ github.repository_owner }}" != "argoproj" && "$ENABLE_FORK_RELEASES" == "true" && "$IMAGE_NAMESPACE" != "argoproj" && "${{ github.ref }}" == refs/tags/* ]]; then
ALLOW_FORK_RELEASE=true
fi
echo "allow_fork_release=$ALLOW_FORK_RELEASE" >> $GITHUB_OUTPUT

argocd-image-provenance:
needs: [argocd-image]
needs: [setup-variables, argocd-image]
permissions:
actions: read # for detecting the Github Actions environment.
id-token: write # for creating OIDC tokens for signing.
packages: write # for uploading attestations. (https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#known-issues)
# Must be refernced by a tag. https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#referencing-the-slsa-generator
if: github.repository == 'argoproj/argo-cd'
if: github.repository == 'argoproj/argo-cd' || needs.setup-variables.outputs.allow_fork_release == 'true'
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v2.1.0
with:
image: quay.io/argoproj/argocd
image: ${{ needs.setup-variables.outputs.provenance_image }}
digest: ${{ needs.argocd-image.outputs.image-digest }}
secrets:
registry-username: ${{ secrets.RELEASE_QUAY_USERNAME }}
@@ -91,7 +116,7 @@ jobs:
- argocd-image-provenance
permissions:
contents: write # used for uploading assets
if: github.repository == 'argoproj/argo-cd'
if: github.repository == 'argoproj/argo-cd' || needs.setup-variables.outputs.allow_fork_release == 'true'
runs-on: ubuntu-22.04
env:
GORELEASER_MAKE_LATEST: ${{ needs.setup-variables.outputs.is_latest_release }}
@@ -99,7 +124,7 @@ jobs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -108,7 +133,7 @@ jobs:
run: git fetch --force --tags

- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
cache: false
@@ -143,6 +168,8 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
KUBECTL_VERSION: ${{ env.KUBECTL_VERSION }}
GIT_TREE_STATE: ${{ env.GIT_TREE_STATE }}
# Used to determine the current repository in the goreleaser config to display correct manifest links
GORELEASER_CURRENT_REPOSITORY: ${{ github.repository }}

- name: Generate subject for provenance
id: hash
@@ -159,12 +186,12 @@ jobs:
echo "hashes=$hashes" >> $GITHUB_OUTPUT

goreleaser-provenance:
needs: [goreleaser]
needs: [goreleaser, setup-variables]
permissions:
actions: read # for detecting the Github Actions environment
id-token: write # Needed for provenance signing and ID
contents: write # Needed for release uploads
if: github.repository == 'argoproj/argo-cd'
if: github.repository == 'argoproj/argo-cd' || needs.setup-variables.outputs.allow_fork_release == 'true'
# Must be refernced by a tag. https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#referencing-the-slsa-generator
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
with:
@@ -177,21 +204,22 @@ jobs:
needs:
- argocd-image
- goreleaser
- setup-variables
permissions:
contents: write # Needed for release uploads
outputs:
hashes: ${{ steps.sbom-hash.outputs.hashes }}
if: github.repository == 'argoproj/argo-cd'
if: github.repository == 'argoproj/argo-cd' || needs.setup-variables.outputs.allow_fork_release == 'true'
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}

- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
cache: false
@@ -207,7 +235,7 @@ jobs:
# managers (gomod, yarn, npm).
PROJECT_FOLDERS: '.,./ui'
# full qualified name of the docker image to be inspected
DOCKER_IMAGE: quay.io/argoproj/argocd:${{ github.ref_name }}
DOCKER_IMAGE: ${{ needs.setup-variables.outputs.quay_image_name }}
run: |
yarn install --cwd ./ui
go install github.com/spdx/spdx-sbom-generator/cmd/generator@$SPDX_GEN_VERSION
@@ -236,7 +264,7 @@ jobs:
echo "hashes=$(sha256sum /tmp/sbom.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"

- name: Upload SBOM
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090 # v2.4.1
uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b # v2.5.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
@@ -244,12 +272,12 @@ jobs:
/tmp/sbom.tar.gz

sbom-provenance:
needs: [generate-sbom]
needs: [generate-sbom, setup-variables]
permissions:
actions: read # for detecting the Github Actions environment
id-token: write # Needed for provenance signing and ID
contents: write # Needed for release uploads
if: github.repository == 'argoproj/argo-cd'
if: github.repository == 'argoproj/argo-cd' || needs.setup-variables.outputs.allow_fork_release == 'true'
# Must be referenced by a tag. https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#referencing-the-slsa-generator
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
with:
@@ -266,13 +294,13 @@ jobs:
permissions:
contents: write # Needed to push commit to update stable tag
pull-requests: write # Needed to create PR for VERSION update.
if: github.repository == 'argoproj/argo-cd'
if: github.repository == 'argoproj/argo-cd' || needs.setup-variables.outputs.allow_fork_release == 'true'
runs-on: ubuntu-22.04
env:
TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -316,7 +344,7 @@ jobs:
if: ${{ env.UPDATE_VERSION == 'true' }}

- name: Create PR to update VERSION on master branch
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725 # v8.0.0
with:
commit-message: Bump version in master
title: 'chore: Bump version in master'
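The new `setup-variables` step only allows a fork release when every repository-level condition holds: the owner is not `argoproj`, `ENABLE_FORK_RELEASES` is enabled, the image namespace is not `argoproj`, and the workflow runs on a tag ref. A sketch of the same check evaluated locally with hypothetical fork values:

```shell
# Sketch of the allow_fork_release calculation; all values are hypothetical.
REPOSITORY_OWNER="my-org"           # not 'argoproj'
ENABLE_FORK_RELEASES="true"         # repository variable a fork would set
IMAGE_NAMESPACE="my-org"            # must not be 'argoproj'
GITHUB_REF="refs/tags/v9.9.9-test"  # fork releases only run for tag refs

ALLOW_FORK_RELEASE=false
if [[ "$REPOSITORY_OWNER" != "argoproj" && "$ENABLE_FORK_RELEASES" == "true" \
      && "$IMAGE_NAMESPACE" != "argoproj" && "$GITHUB_REF" == refs/tags/* ]]; then
  ALLOW_FORK_RELEASE=true
fi
echo "allow_fork_release=$ALLOW_FORK_RELEASE"   # prints "true" for these values
```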
.github/workflows/renovate.yaml (vendored, 8 changed lines)

@@ -20,17 +20,17 @@ jobs:
private-key: ${{ secrets.RENOVATE_APP_PRIVATE_KEY }}

- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # 5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # 6.0.1

# Some codegen commands require Go to be setup
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
go-version: 1.25.5

- name: Self-hosted Renovate
uses: renovatebot/github-action@ea850436a5fe75c0925d583c7a02c60a5865461d #43.0.20
uses: renovatebot/github-action@5712c6a41dea6cdf32c72d92a763bd417e6606aa #44.0.5
with:
configurationFile: .github/configs/renovate-config.js
token: '${{ steps.get_token.outputs.token }}'
.github/workflows/scorecard.yaml (vendored, 2 changed lines)

@@ -30,7 +30,7 @@ jobs:

steps:
- name: "Checkout code"
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
.github/workflows/update-snyk.yaml (vendored, 2 changed lines)

@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
token: ${{ secrets.GITHUB_TOKEN }}
- name: Build reports
@@ -66,14 +66,14 @@ release:

```shell
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/{{.Tag}}/manifests/install.yaml
kubectl apply -n argocd --server-side --force-conflicts -f https://raw.githubusercontent.com/{{ .Env.GORELEASER_CURRENT_REPOSITORY }}/{{.Tag}}/manifests/install.yaml
```

### HA:

```shell
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/{{.Tag}}/manifests/ha/install.yaml
kubectl apply -n argocd --server-side --force-conflicts -f https://raw.githubusercontent.com/{{ .Env.GORELEASER_CURRENT_REPOSITORY }}/{{.Tag}}/manifests/ha/install.yaml
```

## Release Signatures and Provenance
@@ -87,7 +87,7 @@ release:

If upgrading from a different minor version, be sure to read the [upgrading](https://argo-cd.readthedocs.io/en/stable/operator-manual/upgrading/overview/) documentation.
footer: |
**Full Changelog**: https://github.com/argoproj/argo-cd/compare/{{ .PreviousTag }}...{{ .Tag }}
**Full Changelog**: https://github.com/{{ .Env.GORELEASER_CURRENT_REPOSITORY }}/compare/{{ .PreviousTag }}...{{ .Tag }}

<a href="https://argoproj.github.io/cd/"><img src="https://raw.githubusercontent.com/argoproj/argo-site/master/content/pages/cd/gitops-cd.png" width="25%" ></a>
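With `GORELEASER_CURRENT_REPOSITORY` now injected by the release workflow, the templated install snippet above renders against whichever repository produced the release. For a hypothetical fork `my-org/argo-cd` and tag `v9.9.9-test`, the rendered commands would read roughly:

```shell
# Example rendering of the templated release-notes snippet
# (my-org/argo-cd and v9.9.9-test are hypothetical placeholder values).
kubectl create namespace argocd
kubectl apply -n argocd --server-side --force-conflicts \
  -f https://raw.githubusercontent.com/my-org/argo-cd/v9.9.9-test/manifests/install.yaml
```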
@@ -1,5 +1,6 @@
dir: '{{.InterfaceDir}}/mocks'
filename: '{{.InterfaceName}}.go'
include-auto-generated: true # Needed since mockery 3.6.1
packages:
github.com/argoproj/argo-cd/v3/applicationset/generators:
interfaces:
@@ -31,6 +32,9 @@ packages:
github.com/argoproj/argo-cd/v3/pkg/apiclient/cluster:
interfaces:
ClusterServiceServer: {}
github.com/argoproj/argo-cd/v3/pkg/apiclient/project:
interfaces:
ProjectServiceClient: {}
github.com/argoproj/argo-cd/v3/pkg/apiclient/session:
interfaces:
SessionServiceClient: {}
@@ -1,10 +1,10 @@
ARG BASE_IMAGE=docker.io/library/ubuntu:25.04@sha256:27771fb7b40a58237c98e8d3e6b9ecdd9289cec69a857fccfb85ff36294dac20
ARG BASE_IMAGE=docker.io/library/ubuntu:25.10@sha256:5922638447b1e3ba114332c896a2c7288c876bb94adec923d70d58a17d2fec5e
####################################################################################################
# Builder image
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.25.3@sha256:6d4e5e74f47db00f7f24da5f53c1b4198ae46862a47395e30477365458347bf2 AS builder
FROM docker.io/library/golang:1.25.5@sha256:31c1e53dfc1cc2d269deec9c83f58729fa3c53dc9a576f6426109d1e319e9e9a AS builder

WORKDIR /tmp

@@ -85,7 +85,7 @@ WORKDIR /home/argocd
####################################################################################################
# Argo CD UI stage
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/node:23.0.0@sha256:e643c0b70dca9704dff42e12b17f5b719dbe4f95e6392fc2dfa0c5f02ea8044d AS argocd-ui
FROM --platform=$BUILDPLATFORM docker.io/library/node:23.0.0@sha256:9d09fa506f5b8465c5221cbd6f980e29ae0ce9a3119e2b9bc0842e6a3f37bb59 AS argocd-ui

WORKDIR /src
COPY ["ui/package.json", "ui/yarn.lock", "./"]
@@ -103,7 +103,7 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.3@sha256:6d4e5e74f47db00f7f24da5f53c1b4198ae46862a47395e30477365458347bf2 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.5@sha256:31c1e53dfc1cc2d269deec9c83f58729fa3c53dc9a576f6426109d1e319e9e9a AS argocd-build

WORKDIR /go/src/github.com/argoproj/argo-cd
@@ -1,4 +1,4 @@
FROM docker.io/library/golang:1.25.3@sha256:6d4e5e74f47db00f7f24da5f53c1b4198ae46862a47395e30477365458347bf2
FROM docker.io/library/golang:1.25.5@sha256:31c1e53dfc1cc2d269deec9c83f58729fa3c53dc9a576f6426109d1e319e9e9a

ENV DEBIAN_FRONTEND=noninteractive
47
Makefile
47
Makefile
@@ -56,8 +56,8 @@ endif
|
||||
|
||||
ARGOCD_PROCFILE?=Procfile
|
||||
|
||||
# pointing to python 3.7 to match https://github.com/argoproj/argo-cd/blob/master/.readthedocs.yml
|
||||
MKDOCS_DOCKER_IMAGE?=python:3.7-alpine
|
||||
# pointing to python 3.12 to match https://github.com/argoproj/argo-cd/blob/master/.readthedocs.yaml
|
||||
MKDOCS_DOCKER_IMAGE?=python:3.12-alpine
|
||||
MKDOCS_RUN_ARGS?=
|
||||
|
||||
# Configuration for building argocd-test-tools image
|
||||
@@ -76,8 +76,10 @@ ARGOCD_E2E_REDIS_PORT?=6379
|
||||
ARGOCD_E2E_DEX_PORT?=5556
|
||||
ARGOCD_E2E_YARN_HOST?=localhost
|
||||
ARGOCD_E2E_DISABLE_AUTH?=
|
||||
ARGOCD_E2E_DIR?=/tmp/argo-e2e
|
||||
|
||||
ARGOCD_E2E_TEST_TIMEOUT?=90m
|
||||
ARGOCD_E2E_RERUN_FAILS?=5
|
||||
|
||||
ARGOCD_IN_CI?=false
|
||||
ARGOCD_TEST_E2E?=true
|
||||
@@ -198,7 +200,7 @@ endif
|
||||
|
||||
ifneq (${GIT_TAG},)
|
||||
IMAGE_TAG=${GIT_TAG}
|
||||
LDFLAGS += -X ${PACKAGE}.gitTag=${GIT_TAG}
|
||||
override LDFLAGS += -X ${PACKAGE}.gitTag=${GIT_TAG}
|
||||
else
|
||||
IMAGE_TAG?=latest
|
||||
endif
|
||||
@@ -213,6 +215,10 @@ ifdef IMAGE_NAMESPACE
|
||||
IMAGE_PREFIX=${IMAGE_NAMESPACE}/
|
||||
endif
|
||||
|
||||
ifndef IMAGE_REGISTRY
|
||||
IMAGE_REGISTRY="quay.io"
|
||||
endif
|
||||
|
||||
.PHONY: all
|
||||
all: cli image
|
||||
|
||||
@@ -308,12 +314,11 @@ endif
|
||||
.PHONY: manifests-local
|
||||
manifests-local:
|
||||
./hack/update-manifests.sh
|
||||
|
||||
.PHONY: manifests
|
||||
manifests: test-tools-image
|
||||
$(call run-in-test-client,make manifests-local IMAGE_NAMESPACE='${IMAGE_NAMESPACE}' IMAGE_TAG='${IMAGE_TAG}')
|
||||
|
||||
$(call run-in-test-client,make manifests-local IMAGE_REGISTRY='${IMAGE_REGISTRY}' IMAGE_NAMESPACE='${IMAGE_NAMESPACE}' IMAGE_REPOSITORY='${IMAGE_REPOSITORY}' IMAGE_TAG='${IMAGE_TAG}')
|
||||
# consolidated binary for cli, util, server, repo-server, controller
|
||||
|
||||
.PHONY: argocd-all
|
||||
argocd-all: clean-debug
|
||||
CGO_ENABLED=${CGO_FLAG} GOOS=${GOOS} GOARCH=${GOARCH} GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${BIN_NAME} ./cmd
|
||||
@@ -458,7 +463,7 @@ test-e2e:
|
||||
test-e2e-local: cli-local
|
||||
# NO_PROXY ensures all tests don't go out through a proxy if one is configured on the test system
|
||||
export GO111MODULE=off
|
||||
DIST_DIR=${DIST_DIR} RERUN_FAILS=5 PACKAGES="./test/e2e" ARGOCD_E2E_RECORD=${ARGOCD_E2E_RECORD} ARGOCD_CONFIG_DIR=$(HOME)/.config/argocd-e2e ARGOCD_GPG_ENABLED=true NO_PROXY=* ./hack/test.sh -timeout $(ARGOCD_E2E_TEST_TIMEOUT) -v -args -test.gocoverdir="$(PWD)/test-results"
|
||||
DIST_DIR=${DIST_DIR} RERUN_FAILS=$(ARGOCD_E2E_RERUN_FAILS) PACKAGES="./test/e2e" ARGOCD_E2E_RECORD=${ARGOCD_E2E_RECORD} ARGOCD_CONFIG_DIR=$(HOME)/.config/argocd-e2e ARGOCD_GPG_ENABLED=true NO_PROXY=* ./hack/test.sh -timeout $(ARGOCD_E2E_TEST_TIMEOUT) -v -args -test.gocoverdir="$(PWD)/test-results"
|
||||
|
||||
# Spawns a shell in the test server container for debugging purposes
|
||||
debug-test-server: test-tools-image
|
||||
@@ -482,13 +487,13 @@ start-e2e-local: mod-vendor-local dep-ui-local cli-local
|
||||
kubectl create ns argocd-e2e-external || true
|
||||
kubectl create ns argocd-e2e-external-2 || true
|
||||
kubectl config set-context --current --namespace=argocd-e2e
|
||||
kustomize build test/manifests/base | kubectl apply -f -
|
||||
kustomize build test/manifests/base | kubectl apply --server-side --force-conflicts -f -
|
||||
kubectl apply -f https://raw.githubusercontent.com/open-cluster-management/api/a6845f2ebcb186ec26b832f60c988537a58f3859/cluster/v1alpha1/0000_04_clusters.open-cluster-management.io_placementdecisions.crd.yaml
|
||||
# Create GPG keys and source directories
|
||||
if test -d /tmp/argo-e2e/app/config/gpg; then rm -rf /tmp/argo-e2e/app/config/gpg/*; fi
|
||||
mkdir -p /tmp/argo-e2e/app/config/gpg/keys && chmod 0700 /tmp/argo-e2e/app/config/gpg/keys
|
||||
mkdir -p /tmp/argo-e2e/app/config/gpg/source && chmod 0700 /tmp/argo-e2e/app/config/gpg/source
|
||||
mkdir -p /tmp/argo-e2e/app/config/plugin && chmod 0700 /tmp/argo-e2e/app/config/plugin
|
||||
if test -d $(ARGOCD_E2E_DIR)/app/config/gpg; then rm -rf $(ARGOCD_E2E_DIR)/app/config/gpg/*; fi
|
||||
mkdir -p $(ARGOCD_E2E_DIR)/app/config/gpg/keys && chmod 0700 $(ARGOCD_E2E_DIR)/app/config/gpg/keys
|
||||
mkdir -p $(ARGOCD_E2E_DIR)/app/config/gpg/source && chmod 0700 $(ARGOCD_E2E_DIR)/app/config/gpg/source
|
||||
mkdir -p $(ARGOCD_E2E_DIR)/app/config/plugin && chmod 0700 $(ARGOCD_E2E_DIR)/app/config/plugin
|
||||
# create folders to hold go coverage results for each component
|
||||
mkdir -p /tmp/coverage/app-controller
|
||||
mkdir -p /tmp/coverage/api-server
|
||||
@@ -497,13 +502,15 @@ start-e2e-local: mod-vendor-local dep-ui-local cli-local
|
||||
mkdir -p /tmp/coverage/notification
|
||||
mkdir -p /tmp/coverage/commit-server
|
||||
# set paths for locally managed ssh known hosts and tls certs data
|
||||
ARGOCD_SSH_DATA_PATH=/tmp/argo-e2e/app/config/ssh \
|
||||
ARGOCD_TLS_DATA_PATH=/tmp/argo-e2e/app/config/tls \
|
||||
ARGOCD_GPG_DATA_PATH=/tmp/argo-e2e/app/config/gpg/source \
|
||||
ARGOCD_GNUPGHOME=/tmp/argo-e2e/app/config/gpg/keys \
|
||||
ARGOCD_E2E_DIR=$(ARGOCD_E2E_DIR) \
|
||||
ARGOCD_SSH_DATA_PATH=$(ARGOCD_E2E_DIR)/app/config/ssh \
|
||||
ARGOCD_TLS_DATA_PATH=$(ARGOCD_E2E_DIR)/app/config/tls \
|
||||
ARGOCD_GPG_DATA_PATH=$(ARGOCD_E2E_DIR)/app/config/gpg/source \
|
||||
ARGOCD_GNUPGHOME=$(ARGOCD_E2E_DIR)/app/config/gpg/keys \
|
||||
ARGOCD_GPG_ENABLED=$(ARGOCD_GPG_ENABLED) \
|
||||
ARGOCD_PLUGINCONFIGFILEPATH=/tmp/argo-e2e/app/config/plugin \
|
||||
ARGOCD_PLUGINSOCKFILEPATH=/tmp/argo-e2e/app/config/plugin \
|
||||
ARGOCD_PLUGINCONFIGFILEPATH=$(ARGOCD_E2E_DIR)/app/config/plugin \
|
||||
ARGOCD_PLUGINSOCKFILEPATH=$(ARGOCD_E2E_DIR)/app/config/plugin \
|
||||
ARGOCD_GIT_CONFIG=$(PWD)/test/e2e/fixture/gitconfig \
|
||||
ARGOCD_E2E_DISABLE_AUTH=false \
|
||||
ARGOCD_ZJWT_FEATURE_FLAG=always \
|
||||
ARGOCD_IN_CI=$(ARGOCD_IN_CI) \
|
||||
@@ -580,7 +587,7 @@ build-docs-local:

.PHONY: build-docs
build-docs:
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install mkdocs; pip install $$(mkdocs get-deps); mkdocs build'
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install -r docs/requirements.txt; mkdocs build'

.PHONY: serve-docs-local
serve-docs-local:
@@ -588,7 +595,7 @@ serve-docs-local:

.PHONY: serve-docs
serve-docs:
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -p 8000:8000 -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install mkdocs; pip install $$(mkdocs get-deps); mkdocs serve -a $$(ip route get 1 | awk '\''{print $$7}'\''):8000'
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -p 8000:8000 -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install -r docs/requirements.txt; mkdocs serve -a $$(ip route get 1 | awk '\''{print $$7}'\''):8000'

# Verify that kubectl can connect to your K8s cluster from Docker
.PHONY: verify-kube-connect

Procfile (4 changes)
@@ -2,7 +2,7 @@ controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run
|
||||
api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/api-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --hydrator-enabled=${ARGOCD_HYDRATOR_ENABLED:='false'}"
|
||||
dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v3/cmd gendexcfg -o `pwd`/dist/dex.yaml && (test -f dist/dex.yaml || { echo 'Failed to generate dex configuration'; exit 1; }) && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:$(grep "image: ghcr.io/dexidp/dex" manifests/base/dex/argocd-dex-server-deployment.yaml | cut -d':' -f3) dex serve /dex.yaml"
|
||||
redis: hack/start-redis-with-password.sh
|
||||
repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
|
||||
repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "export PATH=./dist:\$PATH && [ -n \"\$ARGOCD_GIT_CONFIG\" ] && export GIT_CONFIG_GLOBAL=\$ARGOCD_GIT_CONFIG && export GIT_CONFIG_NOSYSTEM=1; GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
|
||||
cmp-server: [ "$ARGOCD_E2E_TEST" = 'true' ] && exit 0 || [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_BINARY_NAME=argocd-cmp-server ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} $COMMAND --config-dir-path ./test/cmp --loglevel debug --otlp-address=${ARGOCD_OTLP_ADDRESS}"
|
||||
commit-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/commit-server} FORCE_LOG_COLORS=1 ARGOCD_BINARY_NAME=argocd-commit-server $COMMAND --loglevel debug --port ${ARGOCD_E2E_COMMITSERVER_PORT:-8086}"
|
||||
ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
|
||||
@@ -11,4 +11,4 @@ helm-registry: test/fixture/testrepos/start-helm-registry.sh
|
||||
oci-registry: test/fixture/testrepos/start-authenticated-helm-registry.sh
|
||||
dev-mounter: [ "$ARGOCD_E2E_TEST" != "true" ] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
|
||||
applicationset-controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/applicationset-controller} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
|
||||
notification: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/notification} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_BINARY_NAME=argocd-notifications $COMMAND --loglevel debug --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --self-service-notification-enabled=${ARGOCD_NOTIFICATION_CONTROLLER_SELF_SERVICE_NOTIFICATION_ENABLED:-'false'}"
|
||||
notification: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/notification} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_BINARY_NAME=argocd-notifications $COMMAND --loglevel debug --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --self-service-notification-enabled=${ARGOCD_NOTIFICATION_CONTROLLER_SELF_SERVICE_NOTIFICATION_ENABLED:-'false'}"
|
||||
|
||||
USERS.md (9 changes)
@@ -31,6 +31,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [ANSTO - Australian Synchrotron](https://www.synchrotron.org.au/)
|
||||
1. [Ant Group](https://www.antgroup.com/)
|
||||
1. [AppDirect](https://www.appdirect.com)
|
||||
1. [Arcadia](https://www.arcadia.io)
|
||||
1. [Arctiq Inc.](https://www.arctiq.ca)
|
||||
1. [Artemis Health by Nomi Health](https://www.artemishealth.com/)
|
||||
1. [Arturia](https://www.arturia.com)
|
||||
@@ -86,6 +87,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Codefresh](https://www.codefresh.io/)
|
||||
1. [Codility](https://www.codility.com/)
|
||||
1. [Cognizant](https://www.cognizant.com/)
|
||||
1. [Collins Aerospace](https://www.collinsaerospace.com/)
|
||||
1. [Commonbond](https://commonbond.co/)
|
||||
1. [Compatio.AI](https://compatio.ai/)
|
||||
1. [Contlo](https://contlo.com/)
|
||||
@@ -99,6 +101,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Datarisk](https://www.datarisk.io/)
|
||||
1. [Daydream](https://daydream.ing)
|
||||
1. [Deloitte](https://www.deloitte.com/)
|
||||
1. [Dematic](https://www.dematic.com)
|
||||
1. [Deutsche Telekom AG](https://telekom.com)
|
||||
1. [Deutsche Bank AG](https://www.deutsche-bank.de/)
|
||||
1. [Devopsi - Poland Software/DevOps Consulting](https://devopsi.pl/)
|
||||
@@ -107,6 +110,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [DigitalOcean](https://www.digitalocean.com)
|
||||
1. [Divar](https://divar.ir)
|
||||
1. [Divistant](https://divistant.com)
|
||||
2. [DocNetwork](https://docnetwork.org/)
|
||||
1. [Dott](https://ridedott.com)
|
||||
1. [Doubble](https://www.doubble.app)
|
||||
1. [Doximity](https://www.doximity.com/)
|
||||
@@ -121,6 +125,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [enigmo](https://enigmo.co.jp/)
|
||||
1. [Envoy](https://envoy.com/)
|
||||
1. [eSave](https://esave.es/)
|
||||
1. [Expedia](https://www.expedia.com)
|
||||
1. [Factorial](https://factorialhr.com/)
|
||||
1. [Farfetch](https://www.farfetch.com)
|
||||
1. [Faro](https://www.faro.com/)
|
||||
@@ -181,6 +186,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Instruqt](https://www.instruqt.com)
|
||||
1. [Intel](https://www.intel.com)
|
||||
1. [Intuit](https://www.intuit.com/)
|
||||
1. [IQVIA](https://www.iqvia.com/)
|
||||
1. [Jellysmack](https://www.jellysmack.com)
|
||||
1. [Joblift](https://joblift.com/)
|
||||
1. [JovianX](https://www.jovianx.com/)
|
||||
@@ -232,6 +238,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [mixi Group](https://mixi.co.jp/)
|
||||
1. [Moengage](https://www.moengage.com/)
|
||||
1. [Money Forward](https://corp.moneyforward.com/en/)
|
||||
1. [MongoDB](https://www.mongodb.com/)
|
||||
1. [MOO Print](https://www.moo.com/)
|
||||
1. [Mozilla](https://www.mozilla.org)
|
||||
1. [MTN Group](https://www.mtn.com/)
|
||||
@@ -311,6 +318,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [RightRev](https://rightrev.com/)
|
||||
1. [Rijkswaterstaat](https://www.rijkswaterstaat.nl/en)
|
||||
1. Rise
|
||||
1. [RISK IDENT](https://riskident.com/)
|
||||
1. [Riskified](https://www.riskified.com/)
|
||||
1. [Robotinfra](https://www.robotinfra.com)
|
||||
1. [Rocket.Chat](https://rocket.chat)
|
||||
@@ -377,6 +385,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Ticketmaster](https://ticketmaster.com)
|
||||
1. [Tiger Analytics](https://www.tigeranalytics.com/)
|
||||
1. [Tigera](https://www.tigera.io/)
|
||||
1. [Topicus.Education](https://topicus.nl/en/sectors/education)
|
||||
1. [Toss](https://toss.im/en)
|
||||
1. [Trendyol](https://www.trendyol.com/)
|
||||
1. [tru.ID](https://tru.id)
|
||||
|
||||
@@ -75,9 +75,15 @@ const (
AllAtOnceDeletionOrder = "AllAtOnce"
)

var defaultPreservedFinalizers = []string{
argov1alpha1.PreDeleteFinalizerName,
argov1alpha1.PostDeleteFinalizerName,
}

var defaultPreservedAnnotations = []string{
NotifiedAnnotationKey,
argov1alpha1.AnnotationKeyRefresh,
argov1alpha1.AnnotationKeyHydrate,
}

type deleteInOrder struct {
@@ -176,6 +182,16 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, err
}

// ensure finalizer exists if deletionOrder is set as Reverse
if r.EnableProgressiveSyncs && isProgressiveSyncDeletionOrderReversed(&applicationSetInfo) {
if !controllerutil.ContainsFinalizer(&applicationSetInfo, argov1alpha1.ResourcesFinalizerName) {
controllerutil.AddFinalizer(&applicationSetInfo, argov1alpha1.ResourcesFinalizerName)
if err := r.Update(ctx, &applicationSetInfo); err != nil {
return ctrl.Result{}, err
}
}
}

// Log a warning if there are unrecognized generators
_ = utils.CheckInvalidGenerators(&applicationSetInfo)
// desiredApplications is the main list of all expected Applications from all generators in this appset.
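Editor's note: the helper isProgressiveSyncDeletionOrderReversed is referenced above but its body is not part of this hunk. A minimal, self-contained sketch of what such a check could look like, inferred from the test cases later in this diff (the Strategy/ApplicationSet types and the "Reverse" constant here are simplified stand-ins, not the real API types):

    package main

    import "fmt"

    // Simplified stand-ins for the ApplicationSet strategy fields used by the check.
    type Strategy struct {
        Type          string
        DeletionOrder string
    }

    type ApplicationSet struct {
        Strategy *Strategy
    }

    const ReverseDeletionOrder = "Reverse"

    // isDeletionOrderReversed mirrors what the referenced helper presumably checks:
    // a progressive-sync strategy whose DeletionOrder is set to "Reverse".
    func isDeletionOrderReversed(appset ApplicationSet) bool {
        return appset.Strategy != nil && appset.Strategy.DeletionOrder == ReverseDeletionOrder
    }

    func main() {
        fmt.Println(isDeletionOrderReversed(ApplicationSet{Strategy: &Strategy{Type: "RollingSync", DeletionOrder: "Reverse"}})) // true
        fmt.Println(isDeletionOrderReversed(ApplicationSet{}))                                                                   // false
    }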
@@ -653,8 +669,9 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
|
||||
Watches(
|
||||
&corev1.Secret{},
|
||||
&clusterSecretEventHandler{
|
||||
Client: mgr.GetClient(),
|
||||
Log: log.WithField("type", "createSecretEventHandler"),
|
||||
Client: mgr.GetClient(),
|
||||
Log: log.WithField("type", "createSecretEventHandler"),
|
||||
ApplicationSetNamespaces: r.ApplicationSetNamespaces,
|
||||
}).
|
||||
Complete(r)
|
||||
}
|
||||
@@ -731,21 +748,19 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
}
}

// Preserve post-delete finalizers:
// https://github.com/argoproj/argo-cd/issues/17181
for _, finalizer := range found.Finalizers {
if strings.HasPrefix(finalizer, argov1alpha1.PostDeleteFinalizerName) {
if generatedApp.Finalizers == nil {
generatedApp.Finalizers = []string{}
// Preserve deleting finalizers and avoid diff conflicts
for _, finalizer := range defaultPreservedFinalizers {
for _, f := range found.Finalizers {
// For finalizers, use prefix matching in case it contains "/" stages
if strings.HasPrefix(f, finalizer) {
generatedApp.Finalizers = append(generatedApp.Finalizers, f)
}
generatedApp.Finalizers = append(generatedApp.Finalizers, finalizer)
}
}

found.Annotations = generatedApp.Annotations

found.Finalizers = generatedApp.Finalizers
found.Labels = generatedApp.Labels
found.Finalizers = generatedApp.Finalizers

return controllerutil.SetControllerReference(&applicationSet, found, r.Scheme)
})
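Editor's note: pulled out of diff context, the new preservation logic is a nested prefix-match loop over the live object's finalizers. A minimal self-contained sketch of that pattern (preserveFinalizers and the example prefixes are illustrative names, not from the source):

    package main

    import (
        "fmt"
        "strings"
    )

    // preserveFinalizers copies any finalizer on the live object whose name starts
    // with one of the preserved prefixes; prefix matching tolerates "/<stage>" suffixes.
    func preserveFinalizers(preservedPrefixes, live, generated []string) []string {
        for _, prefix := range preservedPrefixes {
            for _, f := range live {
                if strings.HasPrefix(f, prefix) {
                    generated = append(generated, f)
                }
            }
        }
        return generated
    }

    func main() {
        preserved := []string{"example.io/pre-delete", "example.io/post-delete"} // placeholder prefixes
        live := []string{"other-finalizer", "example.io/post-delete/stage2"}
        fmt.Println(preserveFinalizers(preserved, live, nil))
        // Output: [example.io/post-delete/stage2]
    }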
@@ -876,16 +891,14 @@ func (r *ApplicationSetReconciler) removeFinalizerOnInvalidDestination(ctx conte
// Detect if the destination's server field does not match an existing cluster
matchingCluster := false
for _, cluster := range clusterList {
if destCluster.Server != cluster.Server {
continue
// A cluster matches if either the server matches OR the name matches
// This handles cases where:
// 1. The cluster is the in-cluster (server=https://kubernetes.default.svc, name=in-cluster)
// 2. A custom cluster has the same server as in-cluster but a different name
if destCluster.Server == cluster.Server || (destCluster.Name != "" && cluster.Name != "" && destCluster.Name == cluster.Name) {
matchingCluster = true
break
}

if destCluster.Name != cluster.Name {
continue
}

matchingCluster = true
break
}

if !matchingCluster {
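Editor's note: the behavioral change above is from requiring both server and name to match to accepting either. A minimal sketch of the new predicate as a standalone function (clusterMatches and the Cluster struct are simplified illustrations, not the Argo CD types):

    package main

    import "fmt"

    // Cluster is a simplified stand-in for the Argo CD cluster record.
    type Cluster struct {
        Server string
        Name   string
    }

    // clusterMatches reports whether a destination matches a known cluster:
    // a match on the server URL is enough, and so is a match on a non-empty name.
    func clusterMatches(dest, known Cluster) bool {
        if dest.Server == known.Server {
            return true
        }
        return dest.Name != "" && known.Name != "" && dest.Name == known.Name
    }

    func main() {
        inCluster := Cluster{Server: "https://kubernetes.default.svc", Name: "in-cluster"}
        fmt.Println(clusterMatches(Cluster{Server: "https://kubernetes.default.svc"}, inCluster)) // true: server match
        fmt.Println(clusterMatches(Cluster{Name: "in-cluster"}, inCluster))                        // true: name match
        fmt.Println(clusterMatches(Cluster{Name: "other"}, inCluster))                             // false
    }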
@@ -588,6 +588,72 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Ensure that hydrate annotation is preserved from an existing app",
|
||||
appSet: v1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "name",
|
||||
Namespace: "namespace",
|
||||
},
|
||||
Spec: v1alpha1.ApplicationSetSpec{
|
||||
Template: v1alpha1.ApplicationSetTemplate{
|
||||
Spec: v1alpha1.ApplicationSpec{
|
||||
Project: "project",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
existingApps: []v1alpha1.Application{
|
||||
{
|
||||
TypeMeta: metav1.TypeMeta{
|
||||
Kind: application.ApplicationKind,
|
||||
APIVersion: "argoproj.io/v1alpha1",
|
||||
},
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "app1",
|
||||
Namespace: "namespace",
|
||||
ResourceVersion: "2",
|
||||
Annotations: map[string]string{
|
||||
"annot-key": "annot-value",
|
||||
v1alpha1.AnnotationKeyHydrate: string(v1alpha1.RefreshTypeNormal),
|
||||
},
|
||||
},
|
||||
Spec: v1alpha1.ApplicationSpec{
|
||||
Project: "project",
|
||||
},
|
||||
},
|
||||
},
|
||||
desiredApps: []v1alpha1.Application{
|
||||
{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "app1",
|
||||
Namespace: "namespace",
|
||||
},
|
||||
Spec: v1alpha1.ApplicationSpec{
|
||||
Project: "project",
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: []v1alpha1.Application{
|
||||
{
|
||||
TypeMeta: metav1.TypeMeta{
|
||||
Kind: application.ApplicationKind,
|
||||
APIVersion: "argoproj.io/v1alpha1",
|
||||
},
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "app1",
|
||||
Namespace: "namespace",
|
||||
ResourceVersion: "3",
|
||||
Annotations: map[string]string{
|
||||
v1alpha1.AnnotationKeyHydrate: string(v1alpha1.RefreshTypeNormal),
|
||||
},
|
||||
},
|
||||
Spec: v1alpha1.ApplicationSpec{
|
||||
Project: "project",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Ensure that configured preserved annotations are preserved from an existing app",
|
||||
appSet: v1alpha1.ApplicationSet{
|
||||
@@ -1010,7 +1076,7 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Ensure that argocd post-delete finalizers are preserved from an existing app",
|
||||
name: "Ensure that argocd pre-delete and post-delete finalizers are preserved from an existing app",
|
||||
appSet: v1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "name",
|
||||
@@ -1035,8 +1101,11 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
|
||||
Namespace: "namespace",
|
||||
ResourceVersion: "2",
|
||||
Finalizers: []string{
|
||||
"non-argo-finalizer",
|
||||
v1alpha1.PreDeleteFinalizerName,
|
||||
v1alpha1.PreDeleteFinalizerName + "/stage1",
|
||||
v1alpha1.PostDeleteFinalizerName,
|
||||
v1alpha1.PostDeleteFinalizerName + "/mystage",
|
||||
v1alpha1.PostDeleteFinalizerName + "/stage2",
|
||||
},
|
||||
},
|
||||
Spec: v1alpha1.ApplicationSpec{
|
||||
@@ -1064,10 +1133,12 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "app1",
|
||||
Namespace: "namespace",
|
||||
ResourceVersion: "2",
|
||||
ResourceVersion: "3",
|
||||
Finalizers: []string{
|
||||
v1alpha1.PreDeleteFinalizerName,
|
||||
v1alpha1.PreDeleteFinalizerName + "/stage1",
|
||||
v1alpha1.PostDeleteFinalizerName,
|
||||
v1alpha1.PostDeleteFinalizerName + "/mystage",
|
||||
v1alpha1.PostDeleteFinalizerName + "/stage2",
|
||||
},
|
||||
},
|
||||
Spec: v1alpha1.ApplicationSpec{
|
||||
@@ -1197,7 +1268,10 @@ func TestRemoveFinalizerOnInvalidDestination_FinalizerTypes(t *testing.T) {
|
||||
kubeclientset := kubefake.NewSimpleClientset(objects...)
|
||||
metrics := appsetmetrics.NewFakeAppsetMetrics()
|
||||
|
||||
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
|
||||
settingsMgr := settings.NewSettingsManager(t.Context(), kubeclientset, "argocd")
|
||||
// Initialize the settings manager to ensure cluster cache is ready
|
||||
_ = settingsMgr.ResyncInformers()
|
||||
argodb := db.NewDB("argocd", settingsMgr, kubeclientset)
|
||||
|
||||
r := ApplicationSetReconciler{
|
||||
Client: client,
|
||||
@@ -1352,7 +1426,10 @@ func TestRemoveFinalizerOnInvalidDestination_DestinationTypes(t *testing.T) {
|
||||
kubeclientset := getDefaultTestClientSet(secret)
|
||||
metrics := appsetmetrics.NewFakeAppsetMetrics()
|
||||
|
||||
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
|
||||
settingsMgr := settings.NewSettingsManager(t.Context(), kubeclientset, "argocd")
|
||||
// Initialize the settings manager to ensure cluster cache is ready
|
||||
_ = settingsMgr.ResyncInformers()
|
||||
argodb := db.NewDB("argocd", settingsMgr, kubeclientset)
|
||||
|
||||
r := ApplicationSetReconciler{
|
||||
Client: client,
|
||||
@@ -1937,7 +2014,7 @@ func TestValidateGeneratedApplications(t *testing.T) {
|
||||
Server: "*",
|
||||
},
|
||||
},
|
||||
ClusterResourceWhitelist: []metav1.GroupKind{
|
||||
ClusterResourceWhitelist: []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{
|
||||
Group: "*",
|
||||
Kind: "*",
|
||||
@@ -7277,6 +7354,223 @@ func TestIsRollingSyncDeletionReversed(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestReconcileAddsFinalizer_WhenDeletionOrderReverse(t *testing.T) {
|
||||
scheme := runtime.NewScheme()
|
||||
err := v1alpha1.AddToScheme(scheme)
|
||||
require.NoError(t, err)
|
||||
|
||||
kubeclientset := kubefake.NewClientset([]runtime.Object{}...)
|
||||
|
||||
for _, cc := range []struct {
|
||||
name string
|
||||
appSet v1alpha1.ApplicationSet
|
||||
progressiveSyncEnabled bool
|
||||
expectedFinalizers []string
|
||||
}{
|
||||
{
|
||||
name: "adds finalizer when DeletionOrder is Reverse",
|
||||
appSet: v1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "test-appset",
|
||||
Namespace: "argocd",
|
||||
// No finalizers initially
|
||||
},
|
||||
Spec: v1alpha1.ApplicationSetSpec{
|
||||
Strategy: &v1alpha1.ApplicationSetStrategy{
|
||||
Type: "RollingSync",
|
||||
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
|
||||
Steps: []v1alpha1.ApplicationSetRolloutStep{
|
||||
{
|
||||
MatchExpressions: []v1alpha1.ApplicationMatchExpression{
|
||||
{
|
||||
Key: "env",
|
||||
Operator: "In",
|
||||
Values: []string{"dev"},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
DeletionOrder: ReverseDeletionOrder,
|
||||
},
|
||||
Template: v1alpha1.ApplicationSetTemplate{},
|
||||
},
|
||||
},
|
||||
progressiveSyncEnabled: true,
|
||||
expectedFinalizers: []string{v1alpha1.ResourcesFinalizerName},
|
||||
},
|
||||
{
|
||||
name: "does not add finalizer when already exists and DeletionOrder is Reverse",
|
||||
appSet: v1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "test-appset",
|
||||
Namespace: "argocd",
|
||||
Finalizers: []string{
|
||||
v1alpha1.ResourcesFinalizerName,
|
||||
},
|
||||
},
|
||||
Spec: v1alpha1.ApplicationSetSpec{
|
||||
Strategy: &v1alpha1.ApplicationSetStrategy{
|
||||
Type: "RollingSync",
|
||||
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
|
||||
Steps: []v1alpha1.ApplicationSetRolloutStep{
|
||||
{
|
||||
MatchExpressions: []v1alpha1.ApplicationMatchExpression{
|
||||
{
|
||||
Key: "env",
|
||||
Operator: "In",
|
||||
Values: []string{"dev"},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
DeletionOrder: ReverseDeletionOrder,
|
||||
},
|
||||
Template: v1alpha1.ApplicationSetTemplate{},
|
||||
},
|
||||
},
|
||||
progressiveSyncEnabled: true,
|
||||
expectedFinalizers: []string{v1alpha1.ResourcesFinalizerName},
|
||||
},
|
||||
{
|
||||
name: "does not add finalizer when DeletionOrder is AllAtOnce",
|
||||
appSet: v1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "test-appset",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: v1alpha1.ApplicationSetSpec{
|
||||
Strategy: &v1alpha1.ApplicationSetStrategy{
|
||||
Type: "RollingSync",
|
||||
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
|
||||
Steps: []v1alpha1.ApplicationSetRolloutStep{
|
||||
{
|
||||
MatchExpressions: []v1alpha1.ApplicationMatchExpression{
|
||||
{
|
||||
Key: "env",
|
||||
Operator: "In",
|
||||
Values: []string{"dev"},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
DeletionOrder: AllAtOnceDeletionOrder,
|
||||
},
|
||||
Template: v1alpha1.ApplicationSetTemplate{},
|
||||
},
|
||||
},
|
||||
progressiveSyncEnabled: true,
|
||||
expectedFinalizers: nil,
|
||||
},
|
||||
{
|
||||
name: "does not add finalizer when DeletionOrder is not set",
|
||||
appSet: v1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "test-appset",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: v1alpha1.ApplicationSetSpec{
|
||||
Strategy: &v1alpha1.ApplicationSetStrategy{
|
||||
Type: "RollingSync",
|
||||
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
|
||||
Steps: []v1alpha1.ApplicationSetRolloutStep{
|
||||
{
|
||||
MatchExpressions: []v1alpha1.ApplicationMatchExpression{
|
||||
{
|
||||
Key: "env",
|
||||
Operator: "In",
|
||||
Values: []string{"dev"},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
Template: v1alpha1.ApplicationSetTemplate{},
|
||||
},
|
||||
},
|
||||
progressiveSyncEnabled: true,
|
||||
expectedFinalizers: nil,
|
||||
},
|
||||
{
|
||||
name: "does not add finalizer when progressive sync not enabled",
|
||||
appSet: v1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "test-appset",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: v1alpha1.ApplicationSetSpec{
|
||||
Strategy: &v1alpha1.ApplicationSetStrategy{
|
||||
Type: "RollingSync",
|
||||
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
|
||||
Steps: []v1alpha1.ApplicationSetRolloutStep{
|
||||
{
|
||||
MatchExpressions: []v1alpha1.ApplicationMatchExpression{
|
||||
{
|
||||
Key: "env",
|
||||
Operator: "In",
|
||||
Values: []string{"dev"},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
DeletionOrder: ReverseDeletionOrder,
|
||||
},
|
||||
Template: v1alpha1.ApplicationSetTemplate{},
|
||||
},
|
||||
},
|
||||
progressiveSyncEnabled: false,
|
||||
expectedFinalizers: nil,
|
||||
},
|
||||
} {
|
||||
t.Run(cc.name, func(t *testing.T) {
|
||||
client := fake.NewClientBuilder().
|
||||
WithScheme(scheme).
|
||||
WithObjects(&cc.appSet).
|
||||
WithStatusSubresource(&cc.appSet).
|
||||
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
|
||||
Build()
|
||||
metrics := appsetmetrics.NewFakeAppsetMetrics()
|
||||
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
|
||||
|
||||
r := ApplicationSetReconciler{
|
||||
Client: client,
|
||||
Scheme: scheme,
|
||||
Renderer: &utils.Render{},
|
||||
Recorder: record.NewFakeRecorder(1),
|
||||
Generators: map[string]generators.Generator{},
|
||||
ArgoDB: argodb,
|
||||
KubeClientset: kubeclientset,
|
||||
Metrics: metrics,
|
||||
EnableProgressiveSyncs: cc.progressiveSyncEnabled,
|
||||
}
|
||||
|
||||
req := ctrl.Request{
|
||||
NamespacedName: types.NamespacedName{
|
||||
Namespace: cc.appSet.Namespace,
|
||||
Name: cc.appSet.Name,
|
||||
},
|
||||
}
|
||||
|
||||
// Run reconciliation
|
||||
_, err = r.Reconcile(t.Context(), req)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Fetch the updated ApplicationSet
|
||||
var updatedAppSet v1alpha1.ApplicationSet
|
||||
err = r.Get(t.Context(), req.NamespacedName, &updatedAppSet)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Verify the finalizers
|
||||
assert.Equal(t, cc.expectedFinalizers, updatedAppSet.Finalizers,
|
||||
"finalizers should match expected value")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestReconcileProgressiveSyncDisabled(t *testing.T) {
|
||||
scheme := runtime.NewScheme()
|
||||
err := v1alpha1.AddToScheme(scheme)
|
||||
|
||||
@@ -14,6 +14,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"

"github.com/argoproj/argo-cd/v3/applicationset/utils"
"github.com/argoproj/argo-cd/v3/common"
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
@@ -22,8 +23,9 @@ import (
// requeue any related ApplicationSets.
type clusterSecretEventHandler struct {
// handler.EnqueueRequestForOwner
Log log.FieldLogger
Client client.Client
Log log.FieldLogger
Client client.Client
ApplicationSetNamespaces []string
}

func (h *clusterSecretEventHandler) Create(ctx context.Context, e event.CreateEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
@@ -68,6 +70,10 @@ func (h *clusterSecretEventHandler) queueRelatedAppGenerators(ctx context.Contex

h.Log.WithField("count", len(appSetList.Items)).Info("listed ApplicationSets")
for _, appSet := range appSetList.Items {
if !utils.IsNamespaceAllowed(h.ApplicationSetNamespaces, appSet.GetNamespace()) {
// Ignore it as not part of the allowed list of namespaces in which to watch Appsets
continue
}
foundClusterGenerator := false
for _, generator := range appSet.Spec.Generators {
if generator.Clusters != nil {
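Editor's note: the handler now drops events for ApplicationSets outside the allowed namespaces via utils.IsNamespaceAllowed. A simplified sketch of what such a check could look like (exact-match only; the real helper may also support glob patterns, and treating an empty allow-list as "allow everything" is an assumption here):

    package main

    import "fmt"

    // isNamespaceAllowed reports whether ns is in the allow-list.
    // An empty allow-list is treated here as "allow everything".
    func isNamespaceAllowed(allowed []string, ns string) bool {
        if len(allowed) == 0 {
            return true
        }
        for _, a := range allowed {
            if a == ns {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(isNamespaceAllowed([]string{"argocd"}, "argocd"))            // true
        fmt.Println(isNamespaceAllowed([]string{"argocd"}, "another-namespace")) // false
    }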
@@ -137,7 +137,7 @@ func TestClusterEventHandler(t *testing.T) {
|
||||
{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "another-namespace",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
@@ -171,9 +171,37 @@ func TestClusterEventHandler(t *testing.T) {
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{
|
||||
{NamespacedName: types.NamespacedName{Namespace: "another-namespace", Name: "my-app-set"}},
|
||||
{NamespacedName: types.NamespacedName{Namespace: "argocd", Name: "my-app-set"}},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "cluster generators in other namespaces should not match",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "my-namespace-not-allowed",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Clusters: &argov1alpha1.ClusterGenerator{},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{},
|
||||
},
|
||||
{
|
||||
name: "non-argo cd secret should not match",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
@@ -552,8 +580,9 @@ func TestClusterEventHandler(t *testing.T) {
|
||||
fakeClient := fake.NewClientBuilder().WithScheme(scheme).WithLists(&appSetList).Build()
|
||||
|
||||
handler := &clusterSecretEventHandler{
|
||||
Client: fakeClient,
|
||||
Log: log.WithField("type", "createSecretEventHandler"),
|
||||
Client: fakeClient,
|
||||
Log: log.WithField("type", "createSecretEventHandler"),
|
||||
ApplicationSetNamespaces: []string{"argocd"},
|
||||
}
|
||||
|
||||
mockAddRateLimitingInterface := mockAddRateLimitingInterface{}
|
||||
|
||||
@@ -551,7 +551,7 @@ func TestInterpolateGeneratorError(t *testing.T) {
|
||||
},
|
||||
useGoTemplate: true,
|
||||
goTemplateOptions: []string{},
|
||||
}, want: argov1alpha1.ApplicationSetGenerator{}, expectedErrStr: "failed to replace parameters in generator: failed to execute go template {{ index .rmap (default .override .test) }}: template: :1:3: executing \"\" at <index .rmap (default .override .test)>: error calling index: index of untyped nil"},
|
||||
}, want: argov1alpha1.ApplicationSetGenerator{}, expectedErrStr: "failed to replace parameters in generator: failed to execute go template {{ index .rmap (default .override .test) }}: template: base:1:3: executing \"base\" at <index .rmap (default .override .test)>: error calling index: index of untyped nil"},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
|
||||
@@ -107,7 +107,7 @@ func (g *PullRequestGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha
|
||||
}
|
||||
|
||||
paramMap := map[string]any{
|
||||
"number": strconv.Itoa(pull.Number),
|
||||
"number": strconv.FormatInt(pull.Number, 10),
|
||||
"title": pull.Title,
|
||||
"branch": pull.Branch,
|
||||
"branch_slug": slug.Make(pull.Branch),
|
||||
@@ -243,9 +243,9 @@ func (g *PullRequestGenerator) github(ctx context.Context, cfg *argoprojiov1alph
|
||||
}
|
||||
|
||||
if g.enableGitHubAPIMetrics {
|
||||
return pullrequest.NewGithubAppService(*auth, cfg.API, cfg.Owner, cfg.Repo, cfg.Labels, httpClient)
|
||||
return pullrequest.NewGithubAppService(ctx, *auth, cfg.API, cfg.Owner, cfg.Repo, cfg.Labels, httpClient)
|
||||
}
|
||||
return pullrequest.NewGithubAppService(*auth, cfg.API, cfg.Owner, cfg.Repo, cfg.Labels)
|
||||
return pullrequest.NewGithubAppService(ctx, *auth, cfg.API, cfg.Owner, cfg.Repo, cfg.Labels)
|
||||
}
|
||||
|
||||
// always default to token, even if not set (public access)
|
||||
|
||||
@@ -296,9 +296,9 @@ func (g *SCMProviderGenerator) githubProvider(ctx context.Context, github *argop
|
||||
}
|
||||
|
||||
if g.enableGitHubAPIMetrics {
|
||||
return scm_provider.NewGithubAppProviderFor(*auth, github.Organization, github.API, github.AllBranches, httpClient)
|
||||
return scm_provider.NewGithubAppProviderFor(ctx, *auth, github.Organization, github.API, github.AllBranches, httpClient)
|
||||
}
|
||||
return scm_provider.NewGithubAppProviderFor(*auth, github.Organization, github.API, github.AllBranches)
|
||||
return scm_provider.NewGithubAppProviderFor(ctx, *auth, github.Organization, github.API, github.AllBranches)
|
||||
}
|
||||
|
||||
token, err := utils.GetSecretRef(ctx, g.client, github.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
|
||||
|
||||
@@ -1,6 +1,8 @@
|
||||
package github_app
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"net/http"
|
||||
|
||||
@@ -8,40 +10,65 @@ import (
|
||||
"github.com/google/go-github/v69/github"
|
||||
|
||||
"github.com/argoproj/argo-cd/v3/applicationset/services/github_app_auth"
|
||||
appsetutils "github.com/argoproj/argo-cd/v3/applicationset/utils"
|
||||
"github.com/argoproj/argo-cd/v3/util/git"
|
||||
)
|
||||
|
||||
func getOptionalHTTPClientAndTransport(optionalHTTPClient ...*http.Client) (*http.Client, http.RoundTripper) {
|
||||
httpClient := appsetutils.GetOptionalHTTPClient(optionalHTTPClient...)
|
||||
if len(optionalHTTPClient) > 0 && optionalHTTPClient[0] != nil && optionalHTTPClient[0].Transport != nil {
|
||||
// will either use the provided custom httpClient and it's transport
|
||||
return httpClient, optionalHTTPClient[0].Transport
|
||||
// getInstallationClient creates a new GitHub client with the specified installation ID.
|
||||
// It also returns a ghinstallation.Transport, which can be used for git requests.
|
||||
func getInstallationClient(g github_app_auth.Authentication, url string, httpClient ...*http.Client) (*github.Client, error) {
|
||||
if g.InstallationId <= 0 {
|
||||
return nil, errors.New("installation ID is required for github")
|
||||
}
|
||||
// or the default httpClient and transport
|
||||
return httpClient, http.DefaultTransport
|
||||
}
|
||||
|
||||
// Client builds a github client for the given app authentication.
|
||||
func Client(g github_app_auth.Authentication, url string, optionalHTTPClient ...*http.Client) (*github.Client, error) {
|
||||
httpClient, transport := getOptionalHTTPClientAndTransport(optionalHTTPClient...)
|
||||
// Use provided HTTP client's transport or default
|
||||
var transport http.RoundTripper
|
||||
if len(httpClient) > 0 && httpClient[0] != nil && httpClient[0].Transport != nil {
|
||||
transport = httpClient[0].Transport
|
||||
} else {
|
||||
transport = http.DefaultTransport
|
||||
}
|
||||
|
||||
rt, err := ghinstallation.New(transport, g.Id, g.InstallationId, []byte(g.PrivateKey))
|
||||
itr, err := ghinstallation.New(transport, g.Id, g.InstallationId, []byte(g.PrivateKey))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create github app install: %w", err)
|
||||
return nil, fmt.Errorf("failed to create GitHub installation transport: %w", err)
|
||||
}
|
||||
|
||||
if url == "" {
|
||||
url = g.EnterpriseBaseURL
|
||||
}
|
||||
|
||||
var client *github.Client
|
||||
httpClient.Transport = rt
|
||||
if url == "" {
|
||||
client = github.NewClient(httpClient)
|
||||
} else {
|
||||
rt.BaseURL = url
|
||||
client, err = github.NewClient(httpClient).WithEnterpriseURLs(url, url)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create github enterprise client: %w", err)
|
||||
}
|
||||
client = github.NewClient(&http.Client{Transport: itr})
|
||||
return client, nil
|
||||
}
|
||||
|
||||
itr.BaseURL = url
|
||||
client, err = github.NewClient(&http.Client{Transport: itr}).WithEnterpriseURLs(url, url)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create GitHub enterprise client: %w", err)
|
||||
}
|
||||
return client, nil
|
||||
}
|
||||
|
||||
// Client builds a github client for the given app authentication.
|
||||
func Client(ctx context.Context, g github_app_auth.Authentication, url, org string, optionalHTTPClient ...*http.Client) (*github.Client, error) {
|
||||
if url == "" {
|
||||
url = g.EnterpriseBaseURL
|
||||
}
|
||||
|
||||
// If an installation ID is already provided, use it directly.
|
||||
if g.InstallationId != 0 {
|
||||
return getInstallationClient(g, url, optionalHTTPClient...)
|
||||
}
|
||||
|
||||
// Auto-discover installation ID using shared utility
|
||||
// Pass optional HTTP client for metrics tracking
|
||||
installationId, err := git.DiscoverGitHubAppInstallationID(ctx, g.Id, g.PrivateKey, url, org, optionalHTTPClient...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
g.InstallationId = installationId
|
||||
return getInstallationClient(g, url, optionalHTTPClient...)
|
||||
}
|
||||
|
||||
@@ -107,7 +107,7 @@ func (a *AzureDevOpsService) List(ctx context.Context) ([]*PullRequest, error) {
|
||||
|
||||
if *pr.Repository.Name == a.repo {
|
||||
pullRequests = append(pullRequests, &PullRequest{
|
||||
Number: *pr.PullRequestId,
|
||||
Number: int64(*pr.PullRequestId),
|
||||
Title: *pr.Title,
|
||||
Branch: strings.Replace(*pr.SourceRefName, "refs/heads/", "", 1),
|
||||
TargetBranch: strings.Replace(*pr.TargetRefName, "refs/heads/", "", 1),
|
||||
|
||||
@@ -87,7 +87,7 @@ func TestListPullRequest(t *testing.T) {
|
||||
assert.Equal(t, "main", list[0].TargetBranch)
|
||||
assert.Equal(t, prHeadSha, list[0].HeadSHA)
|
||||
assert.Equal(t, "feat(123)", list[0].Title)
|
||||
assert.Equal(t, prID, list[0].Number)
|
||||
assert.Equal(t, int64(prID), list[0].Number)
|
||||
assert.Equal(t, uniqueName, list[0].Author)
|
||||
}
|
||||
|
||||
|
||||
@@ -81,7 +81,10 @@ func NewBitbucketCloudServiceBasicAuth(baseURL, username, password, owner, repos
|
||||
return nil, fmt.Errorf("error parsing base url of %s for %s/%s: %w", baseURL, owner, repositorySlug, err)
|
||||
}
|
||||
|
||||
bitbucketClient := bitbucket.NewBasicAuth(username, password)
|
||||
bitbucketClient, err := bitbucket.NewBasicAuth(username, password)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error creating BitBucket Cloud client with basic auth: %w", err)
|
||||
}
|
||||
bitbucketClient.SetApiBaseURL(*url)
|
||||
|
||||
return &BitbucketCloudService{
|
||||
@@ -97,14 +100,13 @@ func NewBitbucketCloudServiceBearerToken(baseURL, bearerToken, owner, repository
|
||||
return nil, fmt.Errorf("error parsing base url of %s for %s/%s: %w", baseURL, owner, repositorySlug, err)
|
||||
}
|
||||
|
||||
bitbucketClient := bitbucket.NewOAuthbearerToken(bearerToken)
|
||||
bitbucketClient, err := bitbucket.NewOAuthbearerToken(bearerToken)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error creating BitBucket Cloud client with oauth bearer token: %w", err)
|
||||
}
|
||||
bitbucketClient.SetApiBaseURL(*url)
|
||||
|
||||
return &BitbucketCloudService{
|
||||
client: bitbucketClient,
|
||||
owner: owner,
|
||||
repositorySlug: repositorySlug,
|
||||
}, nil
|
||||
return &BitbucketCloudService{client: bitbucketClient, owner: owner, repositorySlug: repositorySlug}, nil
|
||||
}
|
||||
|
||||
func NewBitbucketCloudServiceNoAuth(baseURL, owner, repositorySlug string) (PullRequestService, error) {
|
||||
@@ -154,7 +156,7 @@ func (b *BitbucketCloudService) List(_ context.Context) ([]*PullRequest, error)
|
||||
|
||||
for _, pull := range pulls {
|
||||
pullRequests = append(pullRequests, &PullRequest{
|
||||
Number: pull.ID,
|
||||
Number: int64(pull.ID),
|
||||
Title: pull.Title,
|
||||
Branch: pull.Source.Branch.Name,
|
||||
TargetBranch: pull.Destination.Branch.Name,
|
||||
|
||||
@@ -89,7 +89,7 @@ func TestListPullRequestBearerTokenCloud(t *testing.T) {
|
||||
pullRequests, err := ListPullRequests(t.Context(), svc, []v1alpha1.PullRequestGeneratorFilter{})
|
||||
require.NoError(t, err)
|
||||
assert.Len(t, pullRequests, 1)
|
||||
assert.Equal(t, 101, pullRequests[0].Number)
|
||||
assert.Equal(t, int64(101), pullRequests[0].Number)
|
||||
assert.Equal(t, "feat(foo-bar)", pullRequests[0].Title)
|
||||
assert.Equal(t, "feature/foo-bar", pullRequests[0].Branch)
|
||||
assert.Equal(t, "1a8dd249c04a", pullRequests[0].HeadSHA)
|
||||
@@ -107,7 +107,7 @@ func TestListPullRequestNoAuthCloud(t *testing.T) {
|
||||
pullRequests, err := ListPullRequests(t.Context(), svc, []v1alpha1.PullRequestGeneratorFilter{})
|
||||
require.NoError(t, err)
|
||||
assert.Len(t, pullRequests, 1)
|
||||
assert.Equal(t, 101, pullRequests[0].Number)
|
||||
assert.Equal(t, int64(101), pullRequests[0].Number)
|
||||
assert.Equal(t, "feat(foo-bar)", pullRequests[0].Title)
|
||||
assert.Equal(t, "feature/foo-bar", pullRequests[0].Branch)
|
||||
assert.Equal(t, "1a8dd249c04a", pullRequests[0].HeadSHA)
|
||||
@@ -125,7 +125,7 @@ func TestListPullRequestBasicAuthCloud(t *testing.T) {
|
||||
pullRequests, err := ListPullRequests(t.Context(), svc, []v1alpha1.PullRequestGeneratorFilter{})
|
||||
require.NoError(t, err)
|
||||
assert.Len(t, pullRequests, 1)
|
||||
assert.Equal(t, 101, pullRequests[0].Number)
|
||||
assert.Equal(t, int64(101), pullRequests[0].Number)
|
||||
assert.Equal(t, "feat(foo-bar)", pullRequests[0].Title)
|
||||
assert.Equal(t, "feature/foo-bar", pullRequests[0].Branch)
|
||||
assert.Equal(t, "1a8dd249c04a", pullRequests[0].HeadSHA)
|
||||
|
||||
@@ -82,7 +82,7 @@ func (b *BitbucketService) List(_ context.Context) ([]*PullRequest, error) {
|
||||
|
||||
for _, pull := range pulls {
|
||||
pullRequests = append(pullRequests, &PullRequest{
|
||||
Number: pull.ID,
|
||||
Number: int64(pull.ID),
|
||||
Title: pull.Title,
|
||||
Branch: pull.FromRef.DisplayID, // ID: refs/heads/main DisplayID: main
|
||||
TargetBranch: pull.ToRef.DisplayID,
|
||||
|
||||
@@ -68,7 +68,7 @@ func TestListPullRequestNoAuth(t *testing.T) {
|
||||
pullRequests, err := ListPullRequests(t.Context(), svc, []v1alpha1.PullRequestGeneratorFilter{})
|
||||
require.NoError(t, err)
|
||||
assert.Len(t, pullRequests, 1)
|
||||
assert.Equal(t, 101, pullRequests[0].Number)
|
||||
assert.Equal(t, int64(101), pullRequests[0].Number)
|
||||
assert.Equal(t, "feat(ABC) : 123", pullRequests[0].Title)
|
||||
assert.Equal(t, "feature-ABC-123", pullRequests[0].Branch)
|
||||
assert.Equal(t, "master", pullRequests[0].TargetBranch)
|
||||
@@ -211,7 +211,7 @@ func TestListPullRequestBasicAuth(t *testing.T) {
|
||||
pullRequests, err := ListPullRequests(t.Context(), svc, []v1alpha1.PullRequestGeneratorFilter{})
|
||||
require.NoError(t, err)
|
||||
assert.Len(t, pullRequests, 1)
|
||||
assert.Equal(t, 101, pullRequests[0].Number)
|
||||
assert.Equal(t, int64(101), pullRequests[0].Number)
|
||||
assert.Equal(t, "feature-ABC-123", pullRequests[0].Branch)
|
||||
assert.Equal(t, "cb3cf2e4d1517c83e720d2585b9402dbef71f992", pullRequests[0].HeadSHA)
|
||||
}
|
||||
@@ -228,7 +228,7 @@ func TestListPullRequestBearerAuth(t *testing.T) {
|
||||
pullRequests, err := ListPullRequests(t.Context(), svc, []v1alpha1.PullRequestGeneratorFilter{})
|
||||
require.NoError(t, err)
|
||||
assert.Len(t, pullRequests, 1)
|
||||
assert.Equal(t, 101, pullRequests[0].Number)
|
||||
assert.Equal(t, int64(101), pullRequests[0].Number)
|
||||
assert.Equal(t, "feat(ABC) : 123", pullRequests[0].Title)
|
||||
assert.Equal(t, "feature-ABC-123", pullRequests[0].Branch)
|
||||
assert.Equal(t, "cb3cf2e4d1517c83e720d2585b9402dbef71f992", pullRequests[0].HeadSHA)
|
||||
|
||||
@@ -68,7 +68,7 @@ func (g *GiteaService) List(ctx context.Context) ([]*PullRequest, error) {
|
||||
continue
|
||||
}
|
||||
list = append(list, &PullRequest{
|
||||
Number: int(pr.Index),
|
||||
Number: int64(pr.Index),
|
||||
Title: pr.Title,
|
||||
Branch: pr.Head.Ref,
|
||||
TargetBranch: pr.Base.Ref,
|
||||
|
||||
@@ -303,7 +303,7 @@ func TestGiteaList(t *testing.T) {
|
||||
prs, err := host.List(t.Context())
|
||||
require.NoError(t, err)
|
||||
assert.Len(t, prs, 1)
|
||||
assert.Equal(t, 1, prs[0].Number)
|
||||
assert.Equal(t, int64(1), prs[0].Number)
|
||||
assert.Equal(t, "add an empty file", prs[0].Title)
|
||||
assert.Equal(t, "test", prs[0].Branch)
|
||||
assert.Equal(t, "main", prs[0].TargetBranch)
|
||||
|
||||
@@ -76,7 +76,7 @@ func (g *GithubService) List(ctx context.Context) ([]*PullRequest, error) {
|
||||
continue
|
||||
}
|
||||
pullRequests = append(pullRequests, &PullRequest{
|
||||
Number: *pull.Number,
|
||||
Number: int64(*pull.Number),
|
||||
Title: *pull.Title,
|
||||
Branch: *pull.Head.Ref,
|
||||
TargetBranch: *pull.Base.Ref,
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
package pull_request
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net/http"
|
||||
|
||||
"github.com/argoproj/argo-cd/v3/applicationset/services/github_app_auth"
|
||||
@@ -8,9 +9,9 @@ import (
|
||||
appsetutils "github.com/argoproj/argo-cd/v3/applicationset/utils"
|
||||
)
|
||||
|
||||
func NewGithubAppService(g github_app_auth.Authentication, url, owner, repo string, labels []string, optionalHTTPClient ...*http.Client) (PullRequestService, error) {
|
||||
func NewGithubAppService(ctx context.Context, g github_app_auth.Authentication, url, owner, repo string, labels []string, optionalHTTPClient ...*http.Client) (PullRequestService, error) {
|
||||
httpClient := appsetutils.GetOptionalHTTPClient(optionalHTTPClient...)
|
||||
client, err := github_app.Client(g, url, httpClient)
|
||||
client, err := github_app.Client(ctx, g, url, owner, httpClient)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -61,11 +61,15 @@ func (g *GitLabService) List(ctx context.Context) ([]*PullRequest, error) {
|
||||
var labelsList gitlab.LabelOptions = g.labels
|
||||
labels = &labelsList
|
||||
}
|
||||
opts := &gitlab.ListProjectMergeRequestsOptions{
|
||||
|
||||
snippetsListOptions := gitlab.ExploreSnippetsOptions{
|
||||
ListOptions: gitlab.ListOptions{
|
||||
PerPage: 100,
|
||||
},
|
||||
Labels: labels,
|
||||
}
|
||||
opts := &gitlab.ListProjectMergeRequestsOptions{
|
||||
ListOptions: snippetsListOptions.ListOptions,
|
||||
Labels: labels,
|
||||
}
|
||||
|
||||
if g.pullRequestState != "" {
|
||||
|
||||
@@ -78,7 +78,7 @@ func TestList(t *testing.T) {
|
||||
prs, err := svc.List(t.Context())
|
||||
require.NoError(t, err)
|
||||
assert.Len(t, prs, 1)
|
||||
assert.Equal(t, 15442, prs[0].Number)
|
||||
assert.Equal(t, int64(15442), prs[0].Number)
|
||||
assert.Equal(t, "Draft: Use structured logging for DB load balancer", prs[0].Title)
|
||||
assert.Equal(t, "use-structured-logging-for-db-load-balancer", prs[0].Branch)
|
||||
assert.Equal(t, "master", prs[0].TargetBranch)
|
||||
|
||||
@@ -7,7 +7,8 @@ import (
|
||||
|
||||
type PullRequest struct {
|
||||
// Number is a number that will be the ID of the pull request.
|
||||
Number int
|
||||
// Gitlab uses int64 for the pull request number.
|
||||
Number int64
|
||||
// Title of the pull request.
|
||||
Title string
|
||||
// Branch is the name of the branch from which the pull request originated.
|
||||
|
||||
@@ -53,8 +53,12 @@ func (c *ExtendedClient) GetContents(repo *Repository, path string) (bool, error
|
||||
var _ SCMProviderService = &BitBucketCloudProvider{}
|
||||
|
||||
func NewBitBucketCloudProvider(owner string, user string, password string, allBranches bool) (*BitBucketCloudProvider, error) {
|
||||
bitbucketClient, err := bitbucket.NewBasicAuth(user, password)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error creating BitBucket Cloud client with basic auth: %w", err)
|
||||
}
|
||||
client := &ExtendedClient{
|
||||
bitbucket.NewBasicAuth(user, password),
|
||||
bitbucketClient,
|
||||
user,
|
||||
password,
|
||||
owner,
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
package scm_provider
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net/http"
|
||||
|
||||
"github.com/argoproj/argo-cd/v3/applicationset/services/github_app_auth"
|
||||
@@ -8,9 +9,9 @@ import (
|
||||
appsetutils "github.com/argoproj/argo-cd/v3/applicationset/utils"
|
||||
)
|
||||
|
||||
func NewGithubAppProviderFor(g github_app_auth.Authentication, organization string, url string, allBranches bool, optionalHTTPClient ...*http.Client) (*GithubProvider, error) {
|
||||
func NewGithubAppProviderFor(ctx context.Context, g github_app_auth.Authentication, organization string, url string, allBranches bool, optionalHTTPClient ...*http.Client) (*GithubProvider, error) {
|
||||
httpClient := appsetutils.GetOptionalHTTPClient(optionalHTTPClient...)
|
||||
client, err := github_app.Client(g, url, httpClient)
|
||||
client, err := github_app.Client(ctx, g, url, organization, httpClient)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -76,8 +76,13 @@ func (g *GitlabProvider) GetBranches(ctx context.Context, repo *Repository) ([]*
|
||||
}
|
||||
|
||||
func (g *GitlabProvider) ListRepos(_ context.Context, cloneProtocol string) ([]*Repository, error) {
|
||||
snippetsListOptions := gitlab.ExploreSnippetsOptions{
|
||||
ListOptions: gitlab.ListOptions{
|
||||
PerPage: 100,
|
||||
},
|
||||
}
|
||||
opt := &gitlab.ListGroupProjectsOptions{
|
||||
ListOptions: gitlab.ListOptions{PerPage: 100},
|
||||
ListOptions: snippetsListOptions.ListOptions,
|
||||
IncludeSubGroups: &g.includeSubgroups,
|
||||
WithShared: &g.includeSharedProjects,
|
||||
Topic: &g.topic,
|
||||
@@ -173,8 +178,13 @@ func (g *GitlabProvider) listBranches(_ context.Context, repo *Repository) ([]gi
|
||||
return branches, nil
|
||||
}
|
||||
// Otherwise, scrape the ListBranches API.
|
||||
snippetsListOptions := gitlab.ExploreSnippetsOptions{
|
||||
ListOptions: gitlab.ListOptions{
|
||||
PerPage: 100,
|
||||
},
|
||||
}
|
||||
opt := &gitlab.ListBranchesOptions{
|
||||
ListOptions: gitlab.ListOptions{PerPage: 100},
|
||||
ListOptions: snippetsListOptions.ListOptions,
|
||||
}
|
||||
for {
|
||||
gitlabBranches, resp, err := g.client.Branches.ListBranches(repo.RepositoryId, opt)
|
||||
|
||||
@@ -30,6 +30,10 @@ import (

var sprigFuncMap = sprig.GenericFuncMap() // a singleton for better performance

// baseTemplate is a pre-initialized template with all sprig functions loaded.
// Cloning this is much faster than calling Funcs() on a new template each time.
var baseTemplate *template.Template

func init() {
// Avoid allowing the user to learn things about the environment.
delete(sprigFuncMap, "env")
@@ -40,6 +44,10 @@ func init() {
sprigFuncMap["toYaml"] = toYAML
sprigFuncMap["fromYaml"] = fromYAML
sprigFuncMap["fromYamlArray"] = fromYAMLArray

// Initialize the base template with sprig functions once at startup.
// This must be done after modifying sprigFuncMap above.
baseTemplate = template.New("base").Funcs(sprigFuncMap)
}

type Renderer interface {
@@ -309,16 +317,21 @@ var isTemplatedRegex = regexp.MustCompile(".*{{.*}}.*")
// remaining in the substituted template.
func (r *Render) Replace(tmpl string, replaceMap map[string]any, useGoTemplate bool, goTemplateOptions []string) (string, error) {
if useGoTemplate {
template, err := template.New("").Funcs(sprigFuncMap).Parse(tmpl)
// Clone the base template which has sprig funcs pre-loaded
cloned, err := baseTemplate.Clone()
if err != nil {
return "", fmt.Errorf("failed to clone base template: %w", err)
}
for _, option := range goTemplateOptions {
cloned = cloned.Option(option)
}
parsed, err := cloned.Parse(tmpl)
if err != nil {
return "", fmt.Errorf("failed to parse template %s: %w", tmpl, err)
}
for _, option := range goTemplateOptions {
template = template.Option(option)
}

var replacedTmplBuffer bytes.Buffer
if err = template.Execute(&replacedTmplBuffer, replaceMap); err != nil {
if err = parsed.Execute(&replacedTmplBuffer, replaceMap); err != nil {
return "", fmt.Errorf("failed to execute go template %s: %w", tmpl, err)
}

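Editor's note: the change above swaps a per-call template.New(...).Funcs(...) for cloning a template that already carries the function map. A minimal standalone sketch of that pattern using sprig directly (the real code also wires in custom toYaml/fromYaml helpers and strips the env function):

    package main

    import (
        "bytes"
        "fmt"
        "text/template"

        "github.com/Masterminds/sprig/v3"
    )

    // baseTmpl carries the (relatively expensive) function map exactly once.
    var baseTmpl = template.New("base").Funcs(sprig.TxtFuncMap())

    // render clones the base template, parses the user template into the clone,
    // and executes it, so the Funcs() cost is not paid on every call.
    func render(tmpl string, data any, options ...string) (string, error) {
        cloned, err := baseTmpl.Clone()
        if err != nil {
            return "", fmt.Errorf("clone base template: %w", err)
        }
        for _, opt := range options {
            cloned = cloned.Option(opt)
        }
        parsed, err := cloned.Parse(tmpl)
        if err != nil {
            return "", fmt.Errorf("parse template: %w", err)
        }
        var buf bytes.Buffer
        if err := parsed.Execute(&buf, data); err != nil {
            return "", fmt.Errorf("execute template: %w", err)
        }
        return buf.String(), nil
    }

    func main() {
        out, err := render(`{{ .name | upper }}`, map[string]any{"name": "argo"})
        if err != nil {
            panic(err)
        }
        fmt.Println(out) // ARGO
    }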
@@ -514,7 +514,7 @@ func TestRenderTemplateParamsGoTemplate(t *testing.T) {
|
||||
params: map[string]any{
|
||||
"data": `a data string`,
|
||||
},
|
||||
errorMessage: `failed to parse template {{functiondoesnotexist}}: template: :1: function "functiondoesnotexist" not defined`,
|
||||
errorMessage: `failed to parse template {{functiondoesnotexist}}: template: base:1: function "functiondoesnotexist" not defined`,
|
||||
},
|
||||
{
|
||||
name: "Test template error",
|
||||
@@ -523,7 +523,7 @@ func TestRenderTemplateParamsGoTemplate(t *testing.T) {
|
||||
params: map[string]any{
|
||||
"data": `a data string`,
|
||||
},
|
||||
errorMessage: `failed to execute go template {{.data.test}}: template: :1:7: executing "" at <.data.test>: can't evaluate field test in type interface {}`,
|
||||
errorMessage: `failed to execute go template {{.data.test}}: template: base:1:7: executing "base" at <.data.test>: can't evaluate field test in type interface {}`,
|
||||
},
|
||||
{
|
||||
name: "lookup missing value with missingkey=default",
|
||||
@@ -543,7 +543,7 @@ func TestRenderTemplateParamsGoTemplate(t *testing.T) {
|
||||
"unused": "this is not used",
|
||||
},
|
||||
templateOptions: []string{"missingkey=error"},
|
||||
errorMessage: `failed to execute go template --> {{.doesnotexist}} <--: template: :1:6: executing "" at <.doesnotexist>: map has no entry for key "doesnotexist"`,
|
||||
errorMessage: `failed to execute go template --> {{.doesnotexist}} <--: template: base:1:6: executing "base" at <.doesnotexist>: map has no entry for key "doesnotexist"`,
|
||||
},
|
||||
{
|
||||
name: "toYaml",
|
||||
@@ -563,7 +563,7 @@ func TestRenderTemplateParamsGoTemplate(t *testing.T) {
|
||||
name: "toYaml Error",
|
||||
fieldVal: `{{ toYaml . | indent 2 }}`,
|
||||
expectedVal: " foo:\n bar:\n bool: true\n number: 2\n str: Hello world",
|
||||
errorMessage: "failed to execute go template {{ toYaml . | indent 2 }}: template: :1:3: executing \"\" at <toYaml .>: error calling toYaml: error marshaling into JSON: json: unsupported type: func(*string)",
|
||||
errorMessage: "failed to execute go template {{ toYaml . | indent 2 }}: template: base:1:3: executing \"base\" at <toYaml .>: error calling toYaml: error marshaling into JSON: json: unsupported type: func(*string)",
|
||||
params: map[string]any{
|
||||
"foo": func(_ *string) {
|
||||
},
|
||||
@@ -581,7 +581,7 @@ func TestRenderTemplateParamsGoTemplate(t *testing.T) {
|
||||
name: "fromYaml error",
|
||||
fieldVal: `{{ get (fromYaml .value) "hello" }}`,
|
||||
expectedVal: "world",
|
||||
errorMessage: "failed to execute go template {{ get (fromYaml .value) \"hello\" }}: template: :1:8: executing \"\" at <fromYaml .value>: error calling fromYaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {}",
|
||||
errorMessage: "failed to execute go template {{ get (fromYaml .value) \"hello\" }}: template: base:1:8: executing \"base\" at <fromYaml .value>: error calling fromYaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {}",
|
||||
params: map[string]any{
|
||||
"value": "non\n compliant\n yaml",
|
||||
},
|
||||
@@ -598,7 +598,7 @@ func TestRenderTemplateParamsGoTemplate(t *testing.T) {
|
||||
name: "fromYamlArray error",
|
||||
fieldVal: `{{ fromYamlArray .value | last }}`,
|
||||
expectedVal: "bonjour tout le monde",
|
||||
errorMessage: "failed to execute go template {{ fromYamlArray .value | last }}: template: :1:3: executing \"\" at <fromYamlArray .value>: error calling fromYamlArray: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type []interface {}",
|
||||
errorMessage: "failed to execute go template {{ fromYamlArray .value | last }}: template: base:1:3: executing \"base\" at <fromYamlArray .value>: error calling fromYamlArray: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type []interface {}",
|
||||
params: map[string]any{
|
||||
"value": "non\n compliant\n yaml",
|
||||
},
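The updated expectations above come from the parsed template now being named "base" instead of being left unnamed; text/template embeds that name in its error strings. A minimal standalone sketch of the behaviour (nothing here is taken from the test file itself):

package main

import (
	"fmt"
	"text/template"
)

func main() {
	// With an unnamed template the parse error reads `template: :1: ...`;
	// naming it "base" produces `template: base:1: ...` as in the expectations above.
	_, err := template.New("base").Parse("{{functiondoesnotexist}}")
	fmt.Println(err)
}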
|
||||
|
||||
36  assets/swagger.json  generated
@@ -6829,14 +6829,14 @@
|
||||
"type": "array",
|
||||
"title": "ClusterResourceBlacklist contains list of blacklisted cluster level resources",
|
||||
"items": {
|
||||
"$ref": "#/definitions/v1GroupKind"
|
||||
"$ref": "#/definitions/v1alpha1ClusterResourceRestrictionItem"
|
||||
}
|
||||
},
|
||||
"clusterResourceWhitelist": {
|
||||
"type": "array",
|
||||
"title": "ClusterResourceWhitelist contains list of whitelisted cluster level resources",
|
||||
"items": {
|
||||
"$ref": "#/definitions/v1GroupKind"
|
||||
"$ref": "#/definitions/v1alpha1ClusterResourceRestrictionItem"
|
||||
}
|
||||
},
|
||||
"description": {
|
||||
@@ -7050,7 +7050,7 @@
|
||||
},
|
||||
"v1alpha1ApplicationSet": {
|
||||
"type": "object",
|
||||
"title": "ApplicationSet is a set of Application resources\n+genclient\n+genclient:noStatus\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n+kubebuilder:resource:path=applicationsets,shortName=appset;appsets\n+kubebuilder:subresource:status",
|
||||
"title": "ApplicationSet is a set of Application resources.\n+genclient\n+genclient:noStatus\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n+kubebuilder:resource:path=applicationsets,shortName=appset;appsets\n+kubebuilder:subresource:status",
|
||||
"properties": {
|
||||
"metadata": {
|
||||
"$ref": "#/definitions/v1ObjectMeta"
|
||||
@@ -8164,6 +8164,22 @@
|
||||
}
|
||||
}
|
||||
},
|
||||
"v1alpha1ClusterResourceRestrictionItem": {
|
||||
"type": "object",
|
||||
"title": "ClusterResourceRestrictionItem is a cluster resource that is restricted by the project's whitelist or blacklist",
|
||||
"properties": {
|
||||
"group": {
|
||||
"type": "string"
|
||||
},
|
||||
"kind": {
|
||||
"type": "string"
|
||||
},
|
||||
"name": {
|
||||
"description": "Name is the name of the restricted resource. Glob patterns using Go's filepath.Match syntax are supported.\nUnlike the group and kind fields, if no name is specified, all resources of the specified group/kind are matched.",
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
},
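The name field above is matched using Go's filepath.Match glob syntax. A small hedged sketch of that matching behaviour; the pattern and resource names are invented for illustration:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// "team1-*" matches any name with the "team1-" prefix, so only the first
	// of these two hypothetical cluster-scoped resource names is matched.
	for _, name := range []string{"team1-quota", "team2-quota"} {
		ok, err := filepath.Match("team1-*", name)
		fmt.Printf("%s matched=%v err=%v\n", name, ok, err)
	}
}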
|
||||
"v1alpha1Command": {
|
||||
"type": "object",
|
||||
"title": "Command holds binary path and arguments list",
|
||||
@@ -8289,10 +8305,22 @@
|
||||
"description": "DrySource specifies a location for dry \"don't repeat yourself\" manifest source information.",
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"directory": {
|
||||
"$ref": "#/definitions/v1alpha1ApplicationSourceDirectory"
|
||||
},
|
||||
"helm": {
|
||||
"$ref": "#/definitions/v1alpha1ApplicationSourceHelm"
|
||||
},
|
||||
"kustomize": {
|
||||
"$ref": "#/definitions/v1alpha1ApplicationSourceKustomize"
|
||||
},
|
||||
"path": {
|
||||
"type": "string",
|
||||
"title": "Path is a directory path within the Git repository where the manifests are located"
|
||||
},
|
||||
"plugin": {
|
||||
"$ref": "#/definitions/v1alpha1ApplicationSourcePlugin"
|
||||
},
|
||||
"repoURL": {
|
||||
"type": "string",
|
||||
"title": "RepoURL is the URL to the git repository that contains the application manifests"
|
||||
@@ -9437,7 +9465,7 @@
|
||||
"title": "TLSClientCertKey specifies the TLS client cert key for authenticating at the repo server"
|
||||
},
|
||||
"type": {
|
||||
"description": "Type specifies the type of the repoCreds. Can be either \"git\" or \"helm. \"git\" is assumed if empty or absent.",
|
||||
"description": "Type specifies the type of the repoCreds. Can be either \"git\", \"helm\" or \"oci\". \"git\" is assumed if empty or absent.",
|
||||
"type": "string"
|
||||
},
|
||||
"url": {
|
||||
|
||||
@@ -202,7 +202,6 @@ func NewCommand() *cobra.Command {
|
||||
time.Duration(appResyncJitter)*time.Second,
|
||||
time.Duration(selfHealTimeoutSeconds)*time.Second,
|
||||
selfHealBackoff,
|
||||
time.Duration(selfHealBackoffCooldownSeconds)*time.Second,
|
||||
time.Duration(syncTimeout)*time.Second,
|
||||
time.Duration(repoErrorGracePeriod)*time.Second,
|
||||
metricsPort,
|
||||
@@ -275,6 +274,7 @@ func NewCommand() *cobra.Command {
|
||||
command.Flags().IntVar(&selfHealBackoffFactor, "self-heal-backoff-factor", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_FACTOR", 3, 0, math.MaxInt32), "Specifies factor of exponential timeout between application self heal attempts")
|
||||
command.Flags().IntVar(&selfHealBackoffCapSeconds, "self-heal-backoff-cap-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_CAP_SECONDS", 300, 0, math.MaxInt32), "Specifies max timeout of exponential backoff between application self heal attempts")
|
||||
command.Flags().IntVar(&selfHealBackoffCooldownSeconds, "self-heal-backoff-cooldown-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS", 330, 0, math.MaxInt32), "Specifies period of time the app needs to stay synced before the self heal backoff can reset")
|
||||
errors.CheckError(command.Flags().MarkDeprecated("self-heal-backoff-cooldown-seconds", "This flag is deprecated and has no effect."))
|
||||
command.Flags().IntVar(&syncTimeout, "sync-timeout", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT", 0, 0, math.MaxInt32), "Specifies the timeout after which a sync would be terminated. 0 means no timeout (default 0).")
|
||||
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", env.ParseInt64FromEnv("ARGOCD_APPLICATION_CONTROLLER_KUBECTL_PARALLELISM_LIMIT", 20, 0, math.MaxInt64), "Number of allowed concurrent kubectl fork/execs. Any value less than 1 means no limit.")
|
||||
command.Flags().BoolVar(&repoServerPlaintext, "repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Disable TLS on connections to repo server")
|
||||
|
||||
@@ -105,7 +105,12 @@ func NewCommand() *cobra.Command {
|
||||
)
|
||||
|
||||
cli.SetLogFormat(cmdutil.LogFormat)
|
||||
cli.SetLogLevel(cmdutil.LogLevel)
|
||||
|
||||
if debugLog {
|
||||
cli.SetLogLevel("debug")
|
||||
} else {
|
||||
cli.SetLogLevel(cmdutil.LogLevel)
|
||||
}
|
||||
|
||||
ctrl.SetLogger(logutils.NewLogrusLogger(logutils.NewWithCurrentConfig()))
|
||||
|
||||
|
||||
@@ -80,6 +80,7 @@ func NewCommand() *cobra.Command {
|
||||
includeHiddenDirectories bool
|
||||
cmpUseManifestGeneratePaths bool
|
||||
ociMediaTypes []string
|
||||
enableBuiltinGitConfig bool
|
||||
)
|
||||
command := cobra.Command{
|
||||
Use: cliName,
|
||||
@@ -155,6 +156,7 @@ func NewCommand() *cobra.Command {
|
||||
IncludeHiddenDirectories: includeHiddenDirectories,
|
||||
CMPUseManifestGeneratePaths: cmpUseManifestGeneratePaths,
|
||||
OCIMediaTypes: ociMediaTypes,
|
||||
EnableBuiltinGitConfig: enableBuiltinGitConfig,
|
||||
}, askPassServer)
|
||||
errors.CheckError(err)
|
||||
|
||||
@@ -265,6 +267,7 @@ func NewCommand() *cobra.Command {
|
||||
command.Flags().BoolVar(&includeHiddenDirectories, "include-hidden-directories", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_INCLUDE_HIDDEN_DIRECTORIES", false), "Include hidden directories from Git")
|
||||
command.Flags().BoolVar(&cmpUseManifestGeneratePaths, "plugin-use-manifest-generate-paths", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_PLUGIN_USE_MANIFEST_GENERATE_PATHS", false), "Pass the resources described in argocd.argoproj.io/manifest-generate-paths value to the cmpserver to generate the application manifests.")
|
||||
command.Flags().StringSliceVar(&ociMediaTypes, "oci-layer-media-types", env.StringsFromEnv("ARGOCD_REPO_SERVER_OCI_LAYER_MEDIA_TYPES", []string{"application/vnd.oci.image.layer.v1.tar", "application/vnd.oci.image.layer.v1.tar+gzip", "application/vnd.cncf.helm.chart.content.v1.tar+gzip"}, ","), "Comma separated list of allowed media types for OCI media types. This only accounts for media types within layers.")
|
||||
command.Flags().BoolVar(&enableBuiltinGitConfig, "enable-builtin-git-config", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_ENABLE_BUILTIN_GIT_CONFIG", true), "Enable builtin git configuration options that are required for correct argocd-repo-server operation.")
|
||||
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
|
||||
cacheSrc = reposervercache.AddCacheFlagsToCmd(&command, cacheutil.Options{
|
||||
OnClientCreated: func(client *redis.Client) {
|
||||
|
||||
@@ -84,7 +84,7 @@ func newAppProject() *unstructured.Unstructured {
|
||||
Server: "*",
|
||||
},
|
||||
},
|
||||
ClusterResourceWhitelist: []metav1.GroupKind{
|
||||
ClusterResourceWhitelist: []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{
|
||||
Group: "*",
|
||||
Kind: "*",
|
||||
|
||||
@@ -77,6 +77,15 @@ func NewGenRepoSpecCommand() *cobra.Command {
|
||||
|
||||
# Add a private HTTP OCI repository named 'stable'
|
||||
argocd admin repo generate-spec oci://helm-oci-registry.cn-zhangjiakou.cr.aliyuncs.com --type oci --name stable --username test --password test --insecure-oci-force-http
|
||||
|
||||
# Add a private Git repository on GitHub.com via GitHub App. github-app-installation-id is optional; if not provided, the installation id will be fetched from the GitHub API.
|
||||
argocd admin repo generate-spec https://git.example.com/repos/repo --github-app-id 1 --github-app-installation-id 2 --github-app-private-key-path test.private-key.pem
|
||||
|
||||
# Add a private Git repository on GitHub Enterprise via GitHub App. github-app-installation-id is optional; if not provided, the installation id will be fetched from the GitHub API.
|
||||
argocd admin repo generate-spec https://ghe.example.com/repos/repo --github-app-id 1 --github-app-installation-id 2 --github-app-private-key-path test.private-key.pem --github-app-enterprise-base-url https://ghe.example.com/api/v3
|
||||
|
||||
# Add a private Git repository on Google Cloud Sources via GCP service account credentials
|
||||
argocd admin repo generate-spec https://source.developers.google.com/p/my-google-cloud-project/r/my-repo --gcp-service-account-key-path service-account-key.json
|
||||
`
|
||||
|
||||
command := &cobra.Command{
|
||||
|
||||
@@ -33,6 +33,7 @@ func NewApplicationGetResourceCommand(clientOpts *argocdclient.ClientOptions) *c
|
||||
var (
|
||||
resourceName string
|
||||
kind string
|
||||
group string
|
||||
project string
|
||||
filteredFields []string
|
||||
showManagedFields bool
|
||||
@@ -88,7 +89,7 @@ func NewApplicationGetResourceCommand(clientOpts *argocdclient.ClientOptions) *c
|
||||
var resources []unstructured.Unstructured
|
||||
var fetchedStr string
|
||||
for _, r := range tree.Nodes {
|
||||
if (resourceName != "" && r.Name != resourceName) || r.Kind != kind {
|
||||
if (resourceName != "" && r.Name != resourceName) || (group != "" && r.Group != group) || r.Kind != kind {
|
||||
continue
|
||||
}
|
||||
resource, err := appIf.GetResource(ctx, &applicationpkg.ApplicationResourceRequest{
|
||||
@@ -131,6 +132,7 @@ func NewApplicationGetResourceCommand(clientOpts *argocdclient.ClientOptions) *c
|
||||
command.Flags().StringVar(&kind, "kind", "", "Kind of resource [REQUIRED]")
|
||||
err := command.MarkFlagRequired("kind")
|
||||
errors.CheckError(err)
|
||||
command.Flags().StringVar(&group, "group", "", "Group")
|
||||
command.Flags().StringVar(&project, "project", "", "Project of resource")
|
||||
command.Flags().StringSliceVar(&filteredFields, "filter-fields", nil, "A comma separated list of fields to display, if not provided will output the entire manifest")
|
||||
command.Flags().BoolVar(&showManagedFields, "show-managed-fields", false, "Show managed fields in the output manifest")
|
||||
|
||||
@@ -223,6 +223,19 @@ $ source _argocd
|
||||
$ argocd completion fish > ~/.config/fish/completions/argocd.fish
|
||||
$ source ~/.config/fish/completions/argocd.fish
|
||||
|
||||
# For powershell
|
||||
$ mkdir -Force "$HOME\Documents\PowerShell" | Out-Null
|
||||
$ argocd completion powershell > $HOME\Documents\PowerShell\argocd_completion.ps1
|
||||
|
||||
Add the following lines to your powershell profile
|
||||
|
||||
$ # ArgoCD tab completion
|
||||
if (Test-Path "$HOME\Documents\PowerShell\argocd_completion.ps1") {
|
||||
. "$HOME\Documents\PowerShell\argocd_completion.ps1"
|
||||
}
|
||||
|
||||
Then reload your profile
|
||||
$ . $PROFILE
|
||||
`,
|
||||
Run: func(cmd *cobra.Command, args []string) {
|
||||
if len(args) != 1 {
|
||||
@@ -233,9 +246,10 @@ $ source ~/.config/fish/completions/argocd.fish
|
||||
rootCommand := NewCommand()
|
||||
rootCommand.BashCompletionFunction = bashCompletionFunc
|
||||
availableCompletions := map[string]func(out io.Writer, cmd *cobra.Command) error{
|
||||
"bash": runCompletionBash,
|
||||
"zsh": runCompletionZsh,
|
||||
"fish": runCompletionFish,
|
||||
"bash": runCompletionBash,
|
||||
"zsh": runCompletionZsh,
|
||||
"fish": runCompletionFish,
|
||||
"powershell": runCompletionPowershell,
|
||||
}
|
||||
completion, ok := availableCompletions[shell]
|
||||
if !ok {
|
||||
@@ -262,3 +276,7 @@ func runCompletionZsh(out io.Writer, cmd *cobra.Command) error {
|
||||
func runCompletionFish(out io.Writer, cmd *cobra.Command) error {
|
||||
return cmd.GenFishCompletion(out, true)
|
||||
}
|
||||
|
||||
func runCompletionPowershell(out io.Writer, cmd *cobra.Command) error {
|
||||
return cmd.GenPowerShellCompletionWithDesc(out)
|
||||
}
|
||||
|
||||
@@ -34,6 +34,10 @@ argocd context cd.argoproj.io --delete`,
|
||||
Run: func(c *cobra.Command, args []string) {
|
||||
localCfg, err := localconfig.ReadLocalConfig(clientOpts.ConfigPath)
|
||||
errors.CheckError(err)
|
||||
if localCfg == nil {
|
||||
fmt.Println("No local configuration found")
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
if deletion {
|
||||
if len(args) == 0 {
|
||||
|
||||
@@ -41,8 +41,9 @@ type policyOpts struct {
|
||||
// NewProjectCommand returns a new instance of an `argocd proj` command
|
||||
func NewProjectCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
|
||||
command := &cobra.Command{
|
||||
Use: "proj",
|
||||
Short: "Manage projects",
|
||||
Use: "proj",
|
||||
Short: "Manage projects",
|
||||
Aliases: []string{"project"},
|
||||
Example: templates.Examples(`
|
||||
# List all available projects
|
||||
argocd proj list
|
||||
@@ -590,17 +591,15 @@ func NewProjectRemoveSourceNamespace(clientOpts *argocdclient.ClientOptions) *co
|
||||
return command
|
||||
}
|
||||
|
||||
func modifyResourcesList(list *[]metav1.GroupKind, add bool, listDesc string, group string, kind string) bool {
|
||||
func modifyNamespacedResourcesList(list *[]metav1.GroupKind, add bool, listAction string, group string, kind string) (bool, string) {
|
||||
if add {
|
||||
for _, item := range *list {
|
||||
if item.Group == group && item.Kind == kind {
|
||||
fmt.Printf("Group '%s' and kind '%s' already present in %s resources\n", group, kind, listDesc)
|
||||
return false
|
||||
return false, fmt.Sprintf("Group '%s' and kind '%s' already present in %s namespaced resources", group, kind, listAction)
|
||||
}
|
||||
}
|
||||
fmt.Printf("Group '%s' and kind '%s' is added to %s resources\n", group, kind, listDesc)
|
||||
*list = append(*list, metav1.GroupKind{Group: group, Kind: kind})
|
||||
return true
|
||||
return true, fmt.Sprintf("Group '%s' and kind '%s' is added to %s namespaced resources", group, kind, listAction)
|
||||
}
|
||||
index := -1
|
||||
for i, item := range *list {
|
||||
@@ -610,15 +609,37 @@ func modifyResourcesList(list *[]metav1.GroupKind, add bool, listDesc string, gr
|
||||
}
|
||||
}
|
||||
if index == -1 {
|
||||
fmt.Printf("Group '%s' and kind '%s' not in %s resources\n", group, kind, listDesc)
|
||||
return false
|
||||
return false, fmt.Sprintf("Group '%s' and kind '%s' not in %s namespaced resources", group, kind, listAction)
|
||||
}
|
||||
*list = append((*list)[:index], (*list)[index+1:]...)
|
||||
fmt.Printf("Group '%s' and kind '%s' is removed from %s resources\n", group, kind, listDesc)
|
||||
return true
|
||||
return true, fmt.Sprintf("Group '%s' and kind '%s' is removed from %s namespaced resources", group, kind, listAction)
|
||||
}
|
||||
|
||||
func modifyResourceListCmd(cmdUse, cmdDesc, examples string, clientOpts *argocdclient.ClientOptions, allow bool, namespacedList bool) *cobra.Command {
|
||||
func modifyClusterResourcesList(list *[]v1alpha1.ClusterResourceRestrictionItem, add bool, listAction string, group string, kind string, name string) (bool, string) {
|
||||
if add {
|
||||
for _, item := range *list {
|
||||
if item.Group == group && item.Kind == kind && item.Name == name {
|
||||
return false, fmt.Sprintf("Group '%s', kind '%s', and name '%s' is already present in %s cluster resources", group, kind, name, listAction)
|
||||
}
|
||||
}
|
||||
*list = append(*list, v1alpha1.ClusterResourceRestrictionItem{Group: group, Kind: kind, Name: name})
|
||||
return true, fmt.Sprintf("Group '%s', kind '%s', and name '%s' is added to %s cluster resources", group, kind, name, listAction)
|
||||
}
|
||||
index := -1
|
||||
for i, item := range *list {
|
||||
if item.Group == group && item.Kind == kind && item.Name == name {
|
||||
index = i
|
||||
break
|
||||
}
|
||||
}
|
||||
if index == -1 {
|
||||
return false, fmt.Sprintf("Group '%s', kind '%s', and name '%s' not in %s cluster resources", group, kind, name, listAction)
|
||||
}
|
||||
*list = append((*list)[:index], (*list)[index+1:]...)
|
||||
return true, fmt.Sprintf("Group '%s', kind '%s', and name '%s' is removed from %s cluster resources", group, kind, name, listAction)
|
||||
}
|
||||
|
||||
func modifyResourceListCmd(getProjIf func(*cobra.Command) (io.Closer, projectpkg.ProjectServiceClient), cmdUse, cmdDesc, examples string, allow bool, namespacedList bool) *cobra.Command {
|
||||
var (
|
||||
listType string
|
||||
defaultList string
|
||||
@@ -635,38 +656,61 @@ func modifyResourceListCmd(cmdUse, cmdDesc, examples string, clientOpts *argocdc
|
||||
Run: func(c *cobra.Command, args []string) {
|
||||
ctx := c.Context()
|
||||
|
||||
if len(args) != 3 {
|
||||
if namespacedList && len(args) != 3 {
|
||||
c.HelpFunc()(c, args)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
if !namespacedList && (len(args) < 3 || len(args) > 4) {
|
||||
// Cluster-scoped resource command can have an optional NAME argument.
|
||||
c.HelpFunc()(c, args)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
projName, group, kind := args[0], args[1], args[2]
|
||||
conn, projIf := headless.NewClientOrDie(clientOpts, c).NewProjectClientOrDie()
|
||||
var name string
|
||||
if !namespacedList && len(args) > 3 {
|
||||
name = args[3]
|
||||
}
|
||||
conn, projIf := getProjIf(c)
|
||||
defer utilio.Close(conn)
|
||||
|
||||
proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: projName})
|
||||
errors.CheckError(err)
|
||||
var list, allowList, denyList *[]metav1.GroupKind
|
||||
var listAction, listDesc string
|
||||
var clusterList *[]v1alpha1.ClusterResourceRestrictionItem
|
||||
var clusterAllowList, clusterDenyList *[]v1alpha1.ClusterResourceRestrictionItem
|
||||
var listAction string
|
||||
var add bool
|
||||
if namespacedList {
|
||||
allowList, denyList = &proj.Spec.NamespaceResourceWhitelist, &proj.Spec.NamespaceResourceBlacklist
|
||||
listDesc = "namespaced"
|
||||
} else {
|
||||
allowList, denyList = &proj.Spec.ClusterResourceWhitelist, &proj.Spec.ClusterResourceBlacklist
|
||||
listDesc = "cluster"
|
||||
clusterAllowList, clusterDenyList = &proj.Spec.ClusterResourceWhitelist, &proj.Spec.ClusterResourceBlacklist
|
||||
}
|
||||
|
||||
if (listType == "allow") || (listType == "white") {
|
||||
list = allowList
|
||||
clusterList = clusterAllowList
|
||||
listAction = "allowed"
|
||||
add = allow
|
||||
} else {
|
||||
list = denyList
|
||||
clusterList = clusterDenyList
|
||||
listAction = "denied"
|
||||
add = !allow
|
||||
}
|
||||
|
||||
if modifyResourcesList(list, add, listAction+" "+listDesc, group, kind) {
|
||||
if !namespacedList {
|
||||
if ok, msg := modifyClusterResourcesList(clusterList, add, listAction, group, kind, name); ok {
|
||||
c.Println(msg)
|
||||
_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
|
||||
errors.CheckError(err)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
if ok, msg := modifyNamespacedResourcesList(list, add, listAction, group, kind); ok {
|
||||
c.Println(msg)
|
||||
_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
|
||||
errors.CheckError(err)
|
||||
}
|
||||
@@ -684,7 +728,10 @@ func NewProjectAllowNamespaceResourceCommand(clientOpts *argocdclient.ClientOpti
|
||||
# Removes a namespaced API resource with specified GROUP and KIND from the deny list or add a namespaced API resource to the allow list for project PROJECT
|
||||
argocd proj allow-namespace-resource PROJECT GROUP KIND
|
||||
`
|
||||
return modifyResourceListCmd(use, desc, examples, clientOpts, true, true)
|
||||
getProjIf := func(cmd *cobra.Command) (io.Closer, projectpkg.ProjectServiceClient) {
|
||||
return headless.NewClientOrDie(clientOpts, cmd).NewProjectClientOrDie()
|
||||
}
|
||||
return modifyResourceListCmd(getProjIf, use, desc, examples, true, true)
|
||||
}
|
||||
|
||||
// NewProjectDenyNamespaceResourceCommand returns a new instance of an `argocd proj deny-namespace-resource` command
|
||||
@@ -695,7 +742,10 @@ func NewProjectDenyNamespaceResourceCommand(clientOpts *argocdclient.ClientOptio
|
||||
# Adds a namespaced API resource with specified GROUP and KIND from the deny list or removes a namespaced API resource from the allow list for project PROJECT
|
||||
argocd proj deny-namespace-resource PROJECT GROUP KIND
|
||||
`
|
||||
return modifyResourceListCmd(use, desc, examples, clientOpts, false, true)
|
||||
getProjIf := func(cmd *cobra.Command) (io.Closer, projectpkg.ProjectServiceClient) {
|
||||
return headless.NewClientOrDie(clientOpts, cmd).NewProjectClientOrDie()
|
||||
}
|
||||
return modifyResourceListCmd(getProjIf, use, desc, examples, false, true)
|
||||
}
|
||||
|
||||
// NewProjectDenyClusterResourceCommand returns a new instance of an `deny-cluster-resource` command
|
||||
@@ -706,18 +756,27 @@ func NewProjectDenyClusterResourceCommand(clientOpts *argocdclient.ClientOptions
|
||||
# Removes a cluster-scoped API resource with specified GROUP and KIND from the allow list and adds it to deny list for project PROJECT
|
||||
argocd proj deny-cluster-resource PROJECT GROUP KIND
|
||||
`
|
||||
return modifyResourceListCmd(use, desc, examples, clientOpts, false, false)
|
||||
getProjIf := func(cmd *cobra.Command) (io.Closer, projectpkg.ProjectServiceClient) {
|
||||
return headless.NewClientOrDie(clientOpts, cmd).NewProjectClientOrDie()
|
||||
}
|
||||
return modifyResourceListCmd(getProjIf, use, desc, examples, false, false)
|
||||
}
|
||||
|
||||
// NewProjectAllowClusterResourceCommand returns a new instance of an `argocd proj allow-cluster-resource` command
|
||||
func NewProjectAllowClusterResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
|
||||
use := "allow-cluster-resource PROJECT GROUP KIND"
|
||||
use := "allow-cluster-resource PROJECT GROUP KIND [NAME]"
|
||||
desc := "Adds a cluster-scoped API resource to the allow list and removes it from deny list"
|
||||
examples := `
|
||||
# Adds a cluster-scoped API resource with specified GROUP and KIND to the allow list and removes it from deny list for project PROJECT
|
||||
argocd proj allow-cluster-resource PROJECT GROUP KIND
|
||||
|
||||
# Adds a cluster-scoped API resource with specified GROUP, KIND and NAME pattern to the allow list and removes it from deny list for project PROJECT
|
||||
argocd proj allow-cluster-resource PROJECT GROUP KIND NAME
|
||||
`
|
||||
return modifyResourceListCmd(use, desc, examples, clientOpts, true, false)
|
||||
getProjIf := func(cmd *cobra.Command) (io.Closer, projectpkg.ProjectServiceClient) {
|
||||
return headless.NewClientOrDie(clientOpts, cmd).NewProjectClientOrDie()
|
||||
}
|
||||
return modifyResourceListCmd(getProjIf, use, desc, examples, true, false)
|
||||
}
|
||||
|
||||
// NewProjectRemoveSourceCommand returns a new instance of an `argocd proj remove-src` command
|
||||
@@ -826,7 +885,7 @@ func NewProjectListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
|
||||
# List all available projects
|
||||
argocd proj list
|
||||
|
||||
# List all available projects in yaml format
|
||||
# List all available projects in yaml format (other options are "json" and "name")
|
||||
argocd proj list -o yaml
|
||||
`),
|
||||
Run: func(c *cobra.Command, _ []string) {
|
||||
|
||||
256  cmd/argocd/commands/project_test.go  Normal file
@@ -0,0 +1,256 @@
|
||||
package commands
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io"
|
||||
"testing"
|
||||
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/mock"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
projectpkg "github.com/argoproj/argo-cd/v3/pkg/apiclient/project"
|
||||
projectmocks "github.com/argoproj/argo-cd/v3/pkg/apiclient/project/mocks"
|
||||
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
|
||||
)
|
||||
|
||||
func TestModifyResourceListCmd_AddClusterAllowItemWithName(t *testing.T) {
|
||||
// Create a mock project client
|
||||
mockProjClient := projectmocks.NewProjectServiceClient(t)
|
||||
|
||||
// Mock project data
|
||||
projectName := "test-project"
|
||||
mockProject := &v1alpha1.AppProject{
|
||||
Spec: v1alpha1.AppProjectSpec{
|
||||
ClusterResourceWhitelist: []v1alpha1.ClusterResourceRestrictionItem{},
|
||||
},
|
||||
}
|
||||
|
||||
// Mock Get and Update calls
|
||||
mockProjClient.On("Get", mock.Anything, mock.Anything).Return(mockProject, nil)
|
||||
mockProjClient.On("Update", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
|
||||
req := args.Get(1).(*projectpkg.ProjectUpdateRequest)
|
||||
mockProject.Spec.ClusterResourceWhitelist = req.Project.Spec.ClusterResourceWhitelist
|
||||
}).Return(mockProject, nil)
|
||||
|
||||
getProjIf := func(_ *cobra.Command) (io.Closer, projectpkg.ProjectServiceClient) {
|
||||
return io.NopCloser(bytes.NewBufferString("")), mockProjClient
|
||||
}
|
||||
// Create the command
|
||||
cmd := modifyResourceListCmd(
|
||||
getProjIf,
|
||||
"allow-cluster-resource",
|
||||
"Test command",
|
||||
"Example usage",
|
||||
true,
|
||||
false,
|
||||
)
|
||||
|
||||
// Set up the command arguments
|
||||
args := []string{projectName, "apps", "Deployment", "example-deployment"}
|
||||
cmd.SetArgs(args)
|
||||
|
||||
// Capture the output
|
||||
var output bytes.Buffer
|
||||
cmd.SetOut(&output)
|
||||
|
||||
// Execute the command
|
||||
err := cmd.ExecuteContext(t.Context())
|
||||
require.NoError(t, err)
|
||||
|
||||
// Verify the project was updated correctly
|
||||
expected := []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{Group: "apps", Kind: "Deployment", Name: "example-deployment"},
|
||||
}
|
||||
assert.Equal(t, expected, mockProject.Spec.ClusterResourceWhitelist)
|
||||
|
||||
// Verify the output
|
||||
assert.Contains(t, output.String(), "Group 'apps', kind 'Deployment', and name 'example-deployment' is added to allowed cluster resources")
|
||||
}
|
||||
|
||||
func Test_modifyNamespacedResourceList(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
initialList []metav1.GroupKind
|
||||
add bool
|
||||
group string
|
||||
kind string
|
||||
expectedList []metav1.GroupKind
|
||||
expectedResult bool
|
||||
}{
|
||||
{
|
||||
name: "Add new item to empty list",
|
||||
initialList: []metav1.GroupKind{},
|
||||
add: true,
|
||||
group: "apps",
|
||||
kind: "Deployment",
|
||||
expectedList: []metav1.GroupKind{
|
||||
{Group: "apps", Kind: "Deployment"},
|
||||
},
|
||||
expectedResult: true,
|
||||
},
|
||||
{
|
||||
name: "Add duplicate item",
|
||||
initialList: []metav1.GroupKind{
|
||||
{Group: "apps", Kind: "Deployment"},
|
||||
},
|
||||
add: true,
|
||||
group: "apps",
|
||||
kind: "Deployment",
|
||||
expectedList: []metav1.GroupKind{
|
||||
{Group: "apps", Kind: "Deployment"},
|
||||
},
|
||||
expectedResult: false,
|
||||
},
|
||||
{
|
||||
name: "Remove existing item",
|
||||
initialList: []metav1.GroupKind{
|
||||
{Group: "apps", Kind: "Deployment"},
|
||||
},
|
||||
add: false,
|
||||
group: "apps",
|
||||
kind: "Deployment",
|
||||
expectedList: []metav1.GroupKind{},
|
||||
expectedResult: true,
|
||||
},
|
||||
{
|
||||
name: "Remove non-existent item",
|
||||
initialList: []metav1.GroupKind{
|
||||
{Group: "apps", Kind: "Deployment"},
|
||||
},
|
||||
add: false,
|
||||
group: "apps",
|
||||
kind: "StatefulSet",
|
||||
expectedList: []metav1.GroupKind{
|
||||
{Group: "apps", Kind: "Deployment"},
|
||||
},
|
||||
expectedResult: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
list := tt.initialList
|
||||
result, _ := modifyNamespacedResourcesList(&list, tt.add, "", tt.group, tt.kind)
|
||||
assert.Equal(t, tt.expectedResult, result)
|
||||
assert.Equal(t, tt.expectedList, list)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_modifyAllowClusterResourceList(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
initialList []v1alpha1.ClusterResourceRestrictionItem
|
||||
add bool
|
||||
group string
|
||||
kind string
|
||||
resourceName string
|
||||
expectedList []v1alpha1.ClusterResourceRestrictionItem
|
||||
expectedResult bool
|
||||
}{
|
||||
{
|
||||
name: "Add new item to empty list",
|
||||
initialList: []v1alpha1.ClusterResourceRestrictionItem{},
|
||||
add: true,
|
||||
group: "apps",
|
||||
kind: "Deployment",
|
||||
resourceName: "",
|
||||
expectedList: []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{Group: "apps", Kind: "Deployment", Name: ""},
|
||||
},
|
||||
expectedResult: true,
|
||||
},
|
||||
{
|
||||
name: "Add duplicate item",
|
||||
initialList: []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{Group: "apps", Kind: "Deployment", Name: ""},
|
||||
},
|
||||
add: true,
|
||||
group: "apps",
|
||||
kind: "Deployment",
|
||||
resourceName: "",
|
||||
expectedList: []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{Group: "apps", Kind: "Deployment", Name: ""},
|
||||
},
|
||||
expectedResult: false,
|
||||
},
|
||||
{
|
||||
name: "Remove existing item",
|
||||
initialList: []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{Group: "apps", Kind: "Deployment", Name: ""},
|
||||
},
|
||||
add: false,
|
||||
group: "apps",
|
||||
kind: "Deployment",
|
||||
resourceName: "",
|
||||
expectedList: []v1alpha1.ClusterResourceRestrictionItem{},
|
||||
expectedResult: true,
|
||||
},
|
||||
{
|
||||
name: "Remove non-existent item",
|
||||
initialList: []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{Group: "apps", Kind: "Deployment", Name: ""},
|
||||
},
|
||||
add: false,
|
||||
group: "apps",
|
||||
kind: "StatefulSet",
|
||||
resourceName: "",
|
||||
expectedList: []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{Group: "apps", Kind: "Deployment", Name: ""},
|
||||
},
|
||||
expectedResult: false,
|
||||
},
|
||||
{
|
||||
name: "Add item with name",
|
||||
initialList: []v1alpha1.ClusterResourceRestrictionItem{},
|
||||
add: true,
|
||||
group: "apps",
|
||||
kind: "Deployment",
|
||||
resourceName: "example-deployment",
|
||||
expectedList: []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{Group: "apps", Kind: "Deployment", Name: "example-deployment"},
|
||||
},
|
||||
expectedResult: true,
|
||||
},
|
||||
{
|
||||
name: "Remove item with name",
|
||||
initialList: []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{Group: "apps", Kind: "Deployment", Name: "example-deployment"},
|
||||
},
|
||||
add: false,
|
||||
group: "apps",
|
||||
kind: "Deployment",
|
||||
resourceName: "example-deployment",
|
||||
expectedList: []v1alpha1.ClusterResourceRestrictionItem{},
|
||||
expectedResult: true,
|
||||
},
|
||||
{
|
||||
name: "Attempt to remove item with name but only group and kind exist",
|
||||
initialList: []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{Group: "apps", Kind: "Deployment", Name: ""},
|
||||
},
|
||||
add: false,
|
||||
group: "apps",
|
||||
kind: "Deployment",
|
||||
resourceName: "example-deployment",
|
||||
expectedList: []v1alpha1.ClusterResourceRestrictionItem{
|
||||
{Group: "apps", Kind: "Deployment", Name: ""},
|
||||
},
|
||||
expectedResult: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
list := tt.initialList
|
||||
|
||||
result, _ := modifyClusterResourcesList(&list, tt.add, "", tt.group, tt.kind, tt.resourceName)
|
||||
assert.Equal(t, tt.expectedResult, result)
|
||||
assert.Equal(t, tt.expectedList, list)
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -94,10 +94,10 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
|
||||
# Add a private HTTP OCI repository named 'stable'
|
||||
argocd repo add oci://helm-oci-registry.cn-zhangjiakou.cr.aliyuncs.com --type oci --name stable --username test --password test --insecure-oci-force-http
|
||||
|
||||
# Add a private Git repository on GitHub.com via GitHub App
|
||||
# Add a private Git repository on GitHub.com via GitHub App. github-app-installation-id is optional; if not provided, the installation id will be fetched from the GitHub API.
|
||||
argocd repo add https://git.example.com/repos/repo --github-app-id 1 --github-app-installation-id 2 --github-app-private-key-path test.private-key.pem
|
||||
|
||||
# Add a private Git repository on GitHub Enterprise via GitHub App
|
||||
# Add a private Git repository on GitHub Enterprise via GitHub App. github-app-installation-id is optional; if not provided, the installation id will be fetched from the GitHub API.
|
||||
argocd repo add https://ghe.example.com/repos/repo --github-app-id 1 --github-app-installation-id 2 --github-app-private-key-path test.private-key.pem --github-app-enterprise-base-url https://ghe.example.com/api/v3
|
||||
|
||||
# Add a private Git repository on Google Cloud Sources via GCP service account credentials
|
||||
|
||||
@@ -72,10 +72,10 @@ func NewRepoCredsAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comma
|
||||
# Add credentials with SSH private key authentication to use for all repositories under ssh://git@git.example.com/repos
|
||||
argocd repocreds add ssh://git@git.example.com/repos/ --ssh-private-key-path ~/.ssh/id_rsa
|
||||
|
||||
# Add credentials with GitHub App authentication to use for all repositories under https://github.com/repos
|
||||
# Add credentials with GitHub App authentication to use for all repositories under https://github.com/repos. github-app-installation-id is optional; if not provided, the installation id will be fetched from the GitHub API.
|
||||
argocd repocreds add https://github.com/repos/ --github-app-id 1 --github-app-installation-id 2 --github-app-private-key-path test.private-key.pem
|
||||
|
||||
# Add credentials with GitHub App authentication to use for all repositories under https://ghe.example.com/repos
|
||||
# Add credentials with GitHub App authentication to use for all repositories under https://ghe.example.com/repos. github-app-installation-id is optional; if not provided, the installation id will be fetched from the GitHub API.
|
||||
argocd repocreds add https://ghe.example.com/repos/ --github-app-id 1 --github-app-installation-id 2 --github-app-private-key-path test.private-key.pem --github-app-enterprise-base-url https://ghe.example.com/api/v3
|
||||
|
||||
# Add credentials with helm oci registry so that these oci registry urls do not need to be added as repos individually.
|
||||
@@ -191,7 +191,7 @@ func NewRepoCredsAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comma
|
||||
command.Flags().StringVar(&tlsClientCertPath, "tls-client-cert-path", "", "path to the TLS client cert (must be PEM format)")
|
||||
command.Flags().StringVar(&tlsClientCertKeyPath, "tls-client-cert-key-path", "", "path to the TLS client cert's key (must be PEM format)")
|
||||
command.Flags().Int64Var(&repo.GithubAppId, "github-app-id", 0, "id of the GitHub Application")
|
||||
command.Flags().Int64Var(&repo.GithubAppInstallationId, "github-app-installation-id", 0, "installation id of the GitHub Application")
|
||||
command.Flags().Int64Var(&repo.GithubAppInstallationId, "github-app-installation-id", 0, "installation id of the GitHub Application (optional, will be auto-discovered if not provided)")
|
||||
command.Flags().StringVar(&githubAppPrivateKeyPath, "github-app-private-key-path", "", "private key of the GitHub Application")
|
||||
command.Flags().StringVar(&repo.GitHubAppEnterpriseBaseURL, "github-app-enterprise-base-url", "", "base url to use when using GitHub Enterprise (e.g. https://ghe.example.com/api/v3")
|
||||
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing repository with the same name even if the spec differs")
|
||||
|
||||
@@ -43,8 +43,8 @@ func AddProjFlags(command *cobra.Command, opts *ProjectOpts) {
|
||||
command.Flags().StringSliceVar(&opts.SignatureKeys, "signature-keys", []string{}, "GnuPG public key IDs for commit signature verification")
|
||||
command.Flags().BoolVar(&opts.orphanedResourcesEnabled, "orphaned-resources", false, "Enables orphaned resources monitoring")
|
||||
command.Flags().BoolVar(&opts.orphanedResourcesWarn, "orphaned-resources-warn", false, "Specifies if applications should have a warning condition when orphaned resources detected")
|
||||
command.Flags().StringArrayVar(&opts.allowedClusterResources, "allow-cluster-resource", []string{}, "List of allowed cluster level resources")
|
||||
command.Flags().StringArrayVar(&opts.deniedClusterResources, "deny-cluster-resource", []string{}, "List of denied cluster level resources")
|
||||
command.Flags().StringArrayVar(&opts.allowedClusterResources, "allow-cluster-resource", []string{}, "List of allowed cluster level resources, optionally with group and name (e.g. ClusterRole, apiextensions.k8s.io/CustomResourceDefinition, /Namespace/team1-*)")
|
||||
command.Flags().StringArrayVar(&opts.deniedClusterResources, "deny-cluster-resource", []string{}, "List of denied cluster level resources, optionally with group and name (e.g. ClusterRole, apiextensions.k8s.io/CustomResourceDefinition, /Namespace/kube-*)")
|
||||
command.Flags().StringArrayVar(&opts.allowedNamespacedResources, "allow-namespaced-resource", []string{}, "List of allowed namespaced resources")
|
||||
command.Flags().StringArrayVar(&opts.deniedNamespacedResources, "deny-namespaced-resource", []string{}, "List of denied namespaced resources")
|
||||
command.Flags().StringSliceVar(&opts.SourceNamespaces, "source-namespaces", []string{}, "List of source namespaces for applications")
|
||||
@@ -64,12 +64,26 @@ func getGroupKindList(values []string) []metav1.GroupKind {
|
||||
return res
|
||||
}
|
||||
|
||||
func (opts *ProjectOpts) GetAllowedClusterResources() []metav1.GroupKind {
|
||||
return getGroupKindList(opts.allowedClusterResources)
|
||||
func getClusterResourceRestrictionItemList(values []string) []v1alpha1.ClusterResourceRestrictionItem {
|
||||
var res []v1alpha1.ClusterResourceRestrictionItem
|
||||
for _, val := range values {
|
||||
if parts := strings.Split(val, "/"); len(parts) == 3 {
|
||||
res = append(res, v1alpha1.ClusterResourceRestrictionItem{Group: parts[0], Kind: parts[1], Name: parts[2]})
|
||||
} else if parts = strings.Split(val, "/"); len(parts) == 2 {
|
||||
res = append(res, v1alpha1.ClusterResourceRestrictionItem{Group: parts[0], Kind: parts[1]})
|
||||
} else if len(parts) == 1 {
|
||||
res = append(res, v1alpha1.ClusterResourceRestrictionItem{Kind: parts[0]})
|
||||
}
|
||||
}
|
||||
return res
|
||||
}
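For illustration, the three flag formats accepted above and how the split maps them onto restriction items; a local struct stands in for v1alpha1.ClusterResourceRestrictionItem and the example values are invented:

package main

import (
	"fmt"
	"strings"
)

// item mirrors the group/kind/name shape of a cluster resource restriction.
type item struct{ Group, Kind, Name string }

func parse(val string) item {
	parts := strings.Split(val, "/")
	switch len(parts) {
	case 3: // GROUP/KIND/NAME, e.g. "/Namespace/kube-*" (empty group)
		return item{Group: parts[0], Kind: parts[1], Name: parts[2]}
	case 2: // GROUP/KIND
		return item{Group: parts[0], Kind: parts[1]}
	default: // KIND only
		return item{Kind: parts[0]}
	}
}

func main() {
	for _, v := range []string{"ClusterRole", "apiextensions.k8s.io/CustomResourceDefinition", "/Namespace/kube-*"} {
		fmt.Printf("%q -> %+v\n", v, parse(v))
	}
}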
|
||||
|
||||
func (opts *ProjectOpts) GetDeniedClusterResources() []metav1.GroupKind {
|
||||
return getGroupKindList(opts.deniedClusterResources)
|
||||
func (opts *ProjectOpts) GetAllowedClusterResources() []v1alpha1.ClusterResourceRestrictionItem {
|
||||
return getClusterResourceRestrictionItemList(opts.allowedClusterResources)
|
||||
}
|
||||
|
||||
func (opts *ProjectOpts) GetDeniedClusterResources() []v1alpha1.ClusterResourceRestrictionItem {
|
||||
return getClusterResourceRestrictionItemList(opts.deniedClusterResources)
|
||||
}
|
||||
|
||||
func (opts *ProjectOpts) GetAllowedNamespacedResources() []metav1.GroupKind {
|
||||
|
||||
@@ -19,8 +19,8 @@ func TestProjectOpts_ResourceLists(t *testing.T) {
|
||||
|
||||
assert.ElementsMatch(t, []metav1.GroupKind{{Kind: "ConfigMap"}}, opts.GetAllowedNamespacedResources())
|
||||
assert.ElementsMatch(t, []metav1.GroupKind{{Group: "apps", Kind: "DaemonSet"}}, opts.GetDeniedNamespacedResources())
|
||||
assert.ElementsMatch(t, []metav1.GroupKind{{Group: "apiextensions.k8s.io", Kind: "CustomResourceDefinition"}}, opts.GetAllowedClusterResources())
|
||||
assert.ElementsMatch(t, []metav1.GroupKind{{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"}}, opts.GetDeniedClusterResources())
|
||||
assert.ElementsMatch(t, []v1alpha1.ClusterResourceRestrictionItem{{Group: "apiextensions.k8s.io", Kind: "CustomResourceDefinition"}}, opts.GetAllowedClusterResources())
|
||||
assert.ElementsMatch(t, []v1alpha1.ClusterResourceRestrictionItem{{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"}}, opts.GetDeniedClusterResources())
|
||||
}
|
||||
|
||||
func TestProjectOpts_GetDestinationServiceAccounts(t *testing.T) {
|
||||
|
||||
@@ -45,7 +45,7 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
|
||||
command.Flags().BoolVar(&opts.EnableLfs, "enable-lfs", false, "enable git-lfs (Large File Support) on this repository")
|
||||
command.Flags().BoolVar(&opts.EnableOci, "enable-oci", false, "enable helm-oci (Helm OCI-Based Repository) (only valid for helm type repositories)")
|
||||
command.Flags().Int64Var(&opts.GithubAppId, "github-app-id", 0, "id of the GitHub Application")
|
||||
command.Flags().Int64Var(&opts.GithubAppInstallationId, "github-app-installation-id", 0, "installation id of the GitHub Application")
|
||||
command.Flags().Int64Var(&opts.GithubAppInstallationId, "github-app-installation-id", 0, "installation id of the GitHub Application (optional, will be auto-discovered if not provided)")
|
||||
command.Flags().StringVar(&opts.GithubAppPrivateKeyPath, "github-app-private-key-path", "", "private key of the GitHub Application")
|
||||
command.Flags().StringVar(&opts.GitHubAppEnterpriseBaseURL, "github-app-enterprise-base-url", "", "base url to use when using GitHub Enterprise (e.g. https://ghe.example.com/api/v3")
|
||||
command.Flags().StringVar(&opts.Proxy, "proxy", "", "use proxy to access repository")
|
||||
|
||||
209  commitserver/commit/addnote_race_test.go  Normal file
@@ -0,0 +1,209 @@
|
||||
package commit
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"sync"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/argoproj/argo-cd/v3/util/git"
|
||||
)
|
||||
|
||||
// TestAddNoteConcurrentStaggered tests that when multiple AddNote operations run
|
||||
// with slightly staggered timing, all notes persist correctly.
|
||||
// Each operation gets its own git clone, simulating multiple concurrent hydration requests.
|
||||
func TestAddNoteConcurrentStaggered(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
remotePath, localPath := setupRepoWithRemote(t)
|
||||
|
||||
// Create 3 branches with commits (simulating different hydration targets)
|
||||
branches := []string{"env/dev", "env/staging", "env/prod"}
|
||||
commitSHAs := make([]string, 3)
|
||||
|
||||
for i, branch := range branches {
|
||||
commitSHAs[i] = commitAndPushBranch(t, localPath, branch)
|
||||
}
|
||||
|
||||
// Create separate clones for concurrent operations
|
||||
cloneClients := make([]git.Client, 3)
|
||||
for i := 0; i < 3; i++ {
|
||||
cloneClients[i] = getClientForClone(t, remotePath)
|
||||
}
|
||||
|
||||
// Add notes concurrently with slight stagger
|
||||
var wg sync.WaitGroup
|
||||
errors := make([]error, 3)
|
||||
|
||||
for i := 0; i < 3; i++ {
|
||||
wg.Add(1)
|
||||
go func(idx int) {
|
||||
defer wg.Done()
|
||||
time.Sleep(time.Duration(idx*50) * time.Millisecond)
|
||||
errors[idx] = AddNote(cloneClients[idx], fmt.Sprintf("dry-sha-%d", idx), commitSHAs[idx])
|
||||
}(i)
|
||||
}
|
||||
wg.Wait()
|
||||
|
||||
// Verify all notes persisted
|
||||
verifyClient := getClientForClone(t, remotePath)
|
||||
|
||||
for i, commitSHA := range commitSHAs {
|
||||
note, err := verifyClient.GetCommitNote(commitSHA, NoteNamespace)
|
||||
require.NoError(t, err, "Note should exist for commit %d", i)
|
||||
assert.Contains(t, note, fmt.Sprintf("dry-sha-%d", i))
|
||||
}
|
||||
}
|
||||
|
||||
// TestAddNoteConcurrentSimultaneous tests that when multiple AddNote operations run
|
||||
// simultaneously (without delays), all notes persist correctly.
|
||||
// Each operation gets its own git clone, simulating multiple concurrent hydration requests.
|
||||
func TestAddNoteConcurrentSimultaneous(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
remotePath, localPath := setupRepoWithRemote(t)
|
||||
|
||||
// Create 3 branches with commits (simulating different hydration targets)
|
||||
branches := []string{"env/dev", "env/staging", "env/prod"}
|
||||
commitSHAs := make([]string, 3)
|
||||
|
||||
for i, branch := range branches {
|
||||
commitSHAs[i] = commitAndPushBranch(t, localPath, branch)
|
||||
}
|
||||
|
||||
// Create separate clones for concurrent operations
|
||||
cloneClients := make([]git.Client, 3)
|
||||
for i := 0; i < 3; i++ {
|
||||
cloneClients[i] = getClientForClone(t, remotePath)
|
||||
}
|
||||
|
||||
// Add notes concurrently without delays
|
||||
var wg sync.WaitGroup
|
||||
startChan := make(chan struct{})
|
||||
|
||||
for i := 0; i < 3; i++ {
|
||||
wg.Add(1)
|
||||
go func(idx int) {
|
||||
defer wg.Done()
|
||||
<-startChan
|
||||
_ = AddNote(cloneClients[idx], fmt.Sprintf("dry-sha-%d", idx), commitSHAs[idx])
|
||||
}(i)
|
||||
}
|
||||
|
||||
close(startChan)
|
||||
wg.Wait()
|
||||
|
||||
// Verify all notes persisted
|
||||
verifyClient := getClientForClone(t, remotePath)
|
||||
|
||||
for i, commitSHA := range commitSHAs {
|
||||
note, err := verifyClient.GetCommitNote(commitSHA, NoteNamespace)
|
||||
require.NoError(t, err, "Note should exist for commit %d", i)
|
||||
assert.Contains(t, note, fmt.Sprintf("dry-sha-%d", i))
|
||||
}
|
||||
}
|
||||
|
||||
// setupRepoWithRemote creates a bare remote repo and a local repo configured to push to it.
|
||||
// Returns the remote path and local path.
|
||||
func setupRepoWithRemote(t *testing.T) (remotePath, localPath string) {
|
||||
t.Helper()
|
||||
ctx := t.Context()
|
||||
|
||||
// Create bare remote repository
|
||||
remoteDir := t.TempDir()
|
||||
remotePath = filepath.Join(remoteDir, "remote.git")
|
||||
err := os.MkdirAll(remotePath, 0o755)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = runGitCmd(ctx, remotePath, "init", "--bare")
|
||||
require.NoError(t, err)
|
||||
|
||||
// Create local repository
|
||||
localDir := t.TempDir()
|
||||
localPath = filepath.Join(localDir, "local")
|
||||
err = os.MkdirAll(localPath, 0o755)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = runGitCmd(ctx, localPath, "init")
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = runGitCmd(ctx, localPath, "config", "user.name", "Test User")
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = runGitCmd(ctx, localPath, "config", "user.email", "test@example.com")
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = runGitCmd(ctx, localPath, "remote", "add", "origin", remotePath)
|
||||
require.NoError(t, err)
|
||||
|
||||
return remotePath, localPath
|
||||
}
|
||||
|
||||
// commitAndPushBranch writes a file, commits it, creates a branch, and pushes to remote.
|
||||
// Returns the commit SHA.
|
||||
func commitAndPushBranch(t *testing.T, localPath, branch string) string {
|
||||
t.Helper()
|
||||
ctx := t.Context()
|
||||
|
||||
testFile := filepath.Join(localPath, "test.txt")
|
||||
err := os.WriteFile(testFile, []byte("content for "+branch), 0o644)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = runGitCmd(ctx, localPath, "add", ".")
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = runGitCmd(ctx, localPath, "commit", "-m", "commit "+branch)
|
||||
require.NoError(t, err)
|
||||
|
||||
sha, err := runGitCmd(ctx, localPath, "rev-parse", "HEAD")
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = runGitCmd(ctx, localPath, "branch", branch)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = runGitCmd(ctx, localPath, "push", "origin", branch)
|
||||
require.NoError(t, err)
|
||||
|
||||
return sha
|
||||
}
|
||||
|
||||
// getClientForClone creates a git client with a fresh clone of the remote repo.
|
||||
func getClientForClone(t *testing.T, remotePath string) git.Client {
|
||||
t.Helper()
|
||||
ctx := t.Context()
|
||||
|
||||
workDir := t.TempDir()
|
||||
|
||||
client, err := git.NewClientExt(remotePath, workDir, &git.NopCreds{}, false, false, "", "")
|
||||
require.NoError(t, err)
|
||||
|
||||
err = client.Init()
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = runGitCmd(ctx, workDir, "config", "user.name", "Test User")
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = runGitCmd(ctx, workDir, "config", "user.email", "test@example.com")
|
||||
require.NoError(t, err)
|
||||
|
||||
err = client.Fetch("", 0)
|
||||
require.NoError(t, err)
|
||||
|
||||
return client
|
||||
}
|
||||
|
||||
// runGitCmd is a helper function to run git commands
|
||||
func runGitCmd(ctx context.Context, dir string, args ...string) (string, error) {
|
||||
cmd := exec.CommandContext(ctx, "git", args...)
|
||||
cmd.Dir = dir
|
||||
output, err := cmd.CombinedOutput()
|
||||
return strings.TrimSpace(string(output)), err
|
||||
}
|
||||
@@ -7,8 +7,6 @@ import (
|
||||
"os"
|
||||
"time"
|
||||
|
||||
"github.com/argoproj/argo-cd/v3/controller/hydrator"
|
||||
|
||||
log "github.com/sirupsen/logrus"
|
||||
|
||||
"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
|
||||
@@ -19,6 +17,11 @@ import (
|
||||
"github.com/argoproj/argo-cd/v3/util/io/files"
|
||||
)
|
||||
|
||||
const (
|
||||
NoteNamespace = "hydrator.metadata" // NoteNamespace is the custom git notes namespace used by the hydrator to store and retrieve commit-related metadata.
|
||||
ManifestYaml = "manifest.yaml" // ManifestYaml constant for the manifest yaml
|
||||
)
|
||||
|
||||
// Service is the service that handles commit requests.
|
||||
type Service struct {
|
||||
metricsServer *metrics.Server
|
||||
@@ -47,6 +50,13 @@ type hydratorMetadataFile struct {
|
||||
References []v1alpha1.RevisionReference `json:"references,omitempty"`
|
||||
}
|
||||
|
||||
// CommitNote represents the structure of the git note associated with a hydrated commit.
|
||||
// This struct is used to serialize/deserialize commit metadata (such as the dry run SHA)
|
||||
// stored in the custom note namespace by the hydrator.
|
||||
type CommitNote struct {
|
||||
DrySHA string `json:"drySha"` // SHA of the original commit that triggered the hydrator
|
||||
}
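A short hedged sketch of the note payload: per the comment above, CommitNote is serialized to JSON and stored under the hydrator.metadata notes namespace; the SHA value here is made up.

package main

import (
	"encoding/json"
	"fmt"
)

// CommitNote mirrors the struct above; only the dry (source) commit SHA is stored.
type CommitNote struct {
	DrySHA string `json:"drySha"`
}

func main() {
	b, err := json.Marshal(CommitNote{DrySHA: "0123abcd"})
	fmt.Println(string(b), err) // {"drySha":"0123abcd"} <nil>
}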
|
||||
|
||||
// TODO: make this configurable via ConfigMap.
|
||||
var manifestHydrationReadmeTemplate = `# Manifest Hydration
|
||||
|
||||
@@ -157,33 +167,45 @@ func (s *Service) handleCommitRequest(logCtx *log.Entry, r *apiclient.CommitHydr
|
||||
return out, "", fmt.Errorf("failed to checkout target branch: %w", err)
|
||||
}
|
||||
|
||||
logCtx.Debug("Clearing and preparing paths")
|
||||
var pathsToClear []string
|
||||
// range over the paths configured and skip those application
|
||||
// paths that are referencing to root path
|
||||
for _, p := range r.Paths {
|
||||
if hydrator.IsRootPath(p.Path) {
|
||||
// skip adding paths that are referencing root directory
|
||||
logCtx.Debugf("Path %s is referencing root directory, ignoring the path", p.Path)
|
||||
continue
|
||||
}
|
||||
pathsToClear = append(pathsToClear, p.Path)
|
||||
hydratedSha, err := gitClient.CommitSHA()
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("failed to get commit SHA: %w", err)
|
||||
}
|
||||
|
||||
if len(pathsToClear) > 0 {
|
||||
logCtx.Debugf("Clearing paths: %v", pathsToClear)
|
||||
out, err := gitClient.RemoveContents(pathsToClear)
|
||||
if err != nil {
|
||||
return out, "", fmt.Errorf("failed to clear paths %v: %w", pathsToClear, err)
|
||||
}
|
||||
/* git note changes
1. Get the git note.
2. If a note is found, short-circuit: log a warning and return.
3. If no note is found, get the last manifest from git for every path and compare it with the hydrated manifest:
   3a. If the manifest has no changes, continue; there is nothing to commit.
   3b. Otherwise, hydrate the manifest.
   3c. Push the updated note.
*/
|
||||
isHydrated, err := IsHydrated(gitClient, r.DrySha, hydratedSha)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("failed to get notes from git %w", err)
|
||||
}
|
||||
// short-circuit if already hydrated
|
||||
if isHydrated {
|
||||
logCtx.Debugf("this dry sha %s is already hydrated", r.DrySha)
|
||||
return "", hydratedSha, nil
|
||||
}
|
||||
|
||||
logCtx.Debug("Writing manifests")
|
||||
err = WriteForPaths(root, r.Repo.Repo, r.DrySha, r.DryCommitMetadata, r.Paths)
|
||||
shouldCommit, err := WriteForPaths(root, r.Repo.Repo, r.DrySha, r.DryCommitMetadata, r.Paths, gitClient)
|
||||
// When there are no new manifests to commit, err is nil and shouldCommit is false. Any other failure returns a non-nil err.
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("failed to write manifests: %w", err)
|
||||
}
|
||||
|
||||
if !shouldCommit {
|
||||
// Manifests did not change, so we don't need to create a new commit.
|
||||
// Add a git note to track that this dry SHA has been processed, and return the existing hydrated SHA.
|
||||
logCtx.Debug("Adding commit note")
|
||||
err = AddNote(gitClient, r.DrySha, hydratedSha)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("failed to add commit note: %w", err)
|
||||
}
|
||||
return "", hydratedSha, nil
|
||||
}
|
||||
logCtx.Debug("Committing and pushing changes")
|
||||
out, err = gitClient.CommitAndPush(r.TargetBranch, r.CommitMessage)
|
||||
if err != nil {
|
||||
@@ -195,7 +217,12 @@ func (s *Service) handleCommitRequest(logCtx *log.Entry, r *apiclient.CommitHydr
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("failed to get commit SHA: %w", err)
|
||||
}
|
||||
|
||||
// add the commit note
|
||||
logCtx.Debug("Adding commit note")
|
||||
err = AddNote(gitClient, r.DrySha, sha)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("failed to add commit note: %w", err)
|
||||
}
|
||||
return "", sha, nil
|
||||
}
|
||||
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
package commit
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
@@ -99,14 +100,15 @@ func Test_CommitHydratedManifests(t *testing.T) {
|
||||
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
|
||||
mockGitClient.EXPECT().GetCommitNote(mock.Anything, mock.Anything).Return("", fmt.Errorf("test %w", git.ErrNoNoteFound)).Once()
|
||||
mockGitClient.EXPECT().AddAndPushNote(mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
|
||||
mockGitClient.EXPECT().CommitSHA().Return("it-worked!", nil).Once()
|
||||
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
|
||||
|
||||
resp, err := service.CommitHydratedManifests(t.Context(), validRequest)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, resp)
|
||||
assert.Equal(t, "it-worked!", resp.HydratedSha)
|
||||
assert.Equal(t, "it-worked!", resp.HydratedSha, "Should return existing hydrated SHA for no-op")
|
||||
})
|
||||
|
||||
t.Run("root path with dot and blank - no directory removal", func(t *testing.T) {
|
||||
@@ -119,8 +121,11 @@ func Test_CommitHydratedManifests(t *testing.T) {
|
||||
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().GetCommitNote(mock.Anything, mock.Anything).Return("", fmt.Errorf("test %w", git.ErrNoNoteFound)).Once()
|
||||
mockGitClient.EXPECT().HasFileChanged(mock.Anything).Return(true, nil).Twice()
|
||||
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CommitSHA().Return("root-and-blank-sha", nil).Once()
|
||||
mockGitClient.EXPECT().AddAndPushNote(mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
|
||||
mockGitClient.EXPECT().CommitSHA().Return("root-and-blank-sha", nil).Twice()
|
||||
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
|
||||
|
||||
requestWithRootAndBlank := &apiclient.CommitHydratedManifestsRequest{
|
||||
@@ -158,7 +163,6 @@ func Test_CommitHydratedManifests(t *testing.T) {
|
||||
|
||||
t.Run("subdirectory path - triggers directory removal", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
service, mockRepoClientFactory := newServiceWithMocks(t)
|
||||
mockGitClient := gitmocks.NewClient(t)
|
||||
mockGitClient.EXPECT().Init().Return(nil).Once()
|
||||
@@ -166,17 +170,20 @@ func Test_CommitHydratedManifests(t *testing.T) {
|
||||
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().RemoveContents([]string{"apps/staging"}).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().GetCommitNote(mock.Anything, mock.Anything).Return("", fmt.Errorf("test %w", git.ErrNoNoteFound)).Once()
|
||||
mockGitClient.EXPECT().HasFileChanged(mock.Anything).Return(true, nil).Once()
|
||||
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CommitSHA().Return("subdir-path-sha", nil).Once()
|
||||
mockGitClient.EXPECT().AddAndPushNote(mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
|
||||
mockGitClient.EXPECT().CommitSHA().Return("subdir-path-sha", nil).Twice()
|
||||
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
|
||||
|
||||
requestWithSubdirPath := &apiclient.CommitHydratedManifestsRequest{
|
||||
Repo: &v1alpha1.Repository{
|
||||
Repo: "https://github.com/argoproj/argocd-example-apps.git",
|
||||
},
|
||||
TargetBranch: "main",
|
||||
SyncBranch: "env/test",
|
||||
TargetBranch: "main",
|
||||
SyncBranch: "env/test",
|
||||
|
||||
CommitMessage: "test commit message",
|
||||
Paths: []*apiclient.PathDetails{
|
||||
{
|
||||
@@ -206,9 +213,11 @@ func Test_CommitHydratedManifests(t *testing.T) {
|
||||
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().RemoveContents([]string{"apps/production", "apps/staging"}).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().GetCommitNote(mock.Anything, mock.Anything).Return("", fmt.Errorf("test %w", git.ErrNoNoteFound)).Once()
|
||||
mockGitClient.EXPECT().HasFileChanged(mock.Anything).Return(true, nil).Times(3)
|
||||
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CommitSHA().Return("mixed-paths-sha", nil).Once()
|
||||
mockGitClient.EXPECT().AddAndPushNote(mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
|
||||
mockGitClient.EXPECT().CommitSHA().Return("mixed-paths-sha", nil).Twice()
|
||||
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
|
||||
|
||||
requestWithMixedPaths := &apiclient.CommitHydratedManifestsRequest{
|
||||
@@ -262,8 +271,9 @@ func Test_CommitHydratedManifests(t *testing.T) {
|
||||
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CommitSHA().Return("it-worked!", nil).Once()
|
||||
mockGitClient.EXPECT().GetCommitNote(mock.Anything, mock.Anything).Return("", fmt.Errorf("test %w", git.ErrNoNoteFound)).Once()
|
||||
mockGitClient.EXPECT().AddAndPushNote(mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
|
||||
mockGitClient.EXPECT().CommitSHA().Return("empty-paths-sha", nil).Once()
|
||||
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
|
||||
|
||||
requestWithEmptyPaths := &apiclient.CommitHydratedManifestsRequest{
|
||||
@@ -273,12 +283,97 @@ func Test_CommitHydratedManifests(t *testing.T) {
|
||||
TargetBranch: "main",
|
||||
SyncBranch: "env/test",
|
||||
CommitMessage: "test commit message",
|
||||
DrySha: "dry-sha-456",
|
||||
}
|
||||
|
||||
resp, err := service.CommitHydratedManifests(t.Context(), requestWithEmptyPaths)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, resp)
|
||||
assert.Equal(t, "it-worked!", resp.HydratedSha)
|
||||
assert.Equal(t, "empty-paths-sha", resp.HydratedSha, "Should return existing hydrated SHA for no-op")
|
||||
})
|
||||
|
||||
t.Run("duplicate request already hydrated", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
strnote := "{\"drySha\":\"abc123\"}"
|
||||
service, mockRepoClientFactory := newServiceWithMocks(t)
|
||||
mockGitClient := gitmocks.NewClient(t)
|
||||
mockGitClient.EXPECT().Init().Return(nil).Once()
|
||||
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
|
||||
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().GetCommitNote(mock.Anything, mock.Anything).Return(strnote, nil).Once()
|
||||
mockGitClient.EXPECT().CommitSHA().Return("dupe-test-sha", nil).Once()
|
||||
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
|
||||
|
||||
request := &apiclient.CommitHydratedManifestsRequest{
|
||||
Repo: &v1alpha1.Repository{
|
||||
Repo: "https://github.com/argoproj/argocd-example-apps.git",
|
||||
},
|
||||
TargetBranch: "main",
|
||||
SyncBranch: "env/test",
|
||||
DrySha: "abc123",
|
||||
CommitMessage: "test commit message",
|
||||
Paths: []*apiclient.PathDetails{
|
||||
{
|
||||
Path: ".",
|
||||
Manifests: []*apiclient.HydratedManifestDetails{
|
||||
{
|
||||
ManifestJSON: `{"apiVersion":"v1","kind":"Deployment","metadata":{"name":"test-app"}}`,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
resp, err := service.CommitHydratedManifests(t.Context(), request)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, resp)
|
||||
assert.Equal(t, "dupe-test-sha", resp.HydratedSha, "Should return existing hydrated SHA when already hydrated")
|
||||
})
|
||||
|
||||
t.Run("root path with dot - no changes to manifest - should commit note only", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
service, mockRepoClientFactory := newServiceWithMocks(t)
|
||||
mockGitClient := gitmocks.NewClient(t)
|
||||
mockGitClient.EXPECT().Init().Return(nil).Once()
|
||||
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
|
||||
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
|
||||
mockGitClient.EXPECT().GetCommitNote(mock.Anything, mock.Anything).Return("", fmt.Errorf("test %w", git.ErrNoNoteFound)).Once()
|
||||
mockGitClient.EXPECT().HasFileChanged(mock.Anything).Return(false, nil).Once()
|
||||
mockGitClient.EXPECT().AddAndPushNote(mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
|
||||
mockGitClient.EXPECT().CommitSHA().Return("root-and-blank-sha", nil).Once()
|
||||
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
|
||||
|
||||
requestWithRootAndBlank := &apiclient.CommitHydratedManifestsRequest{
|
||||
Repo: &v1alpha1.Repository{
|
||||
Repo: "https://github.com/argoproj/argocd-example-apps.git",
|
||||
},
|
||||
TargetBranch: "main",
|
||||
SyncBranch: "env/test",
|
||||
CommitMessage: "test commit message",
|
||||
DrySha: "dry-sha-123",
|
||||
Paths: []*apiclient.PathDetails{
|
||||
{
|
||||
Path: ".",
|
||||
Manifests: []*apiclient.HydratedManifestDetails{
|
||||
{
|
||||
ManifestJSON: `{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"test-dot"}}`,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
resp, err := service.CommitHydratedManifests(t.Context(), requestWithRootAndBlank)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, resp)
|
||||
// BUG FIX: When manifests don't change (no-op), the existing hydrated SHA should be returned.
|
||||
assert.Equal(t, "root-and-blank-sha", resp.HydratedSha, "Should return existing hydrated SHA for no-op")
|
||||
})
|
||||
}
|
||||
|
||||
|
||||
@@ -13,7 +13,7 @@ func getCredentialType(repo *v1alpha1.Repository) string {
|
||||
if repo.SSHPrivateKey != "" {
|
||||
return "ssh"
|
||||
}
|
||||
if repo.GithubAppPrivateKey != "" && repo.GithubAppId != 0 && repo.GithubAppInstallationId != 0 {
|
||||
if repo.GithubAppPrivateKey != "" && repo.GithubAppId != 0 { // Promoter MVP: remove github-app-installation-id check since it is no longer a required field
|
||||
return "github-app"
|
||||
}
|
||||
if repo.GCPServiceAccountKey != "" {
|
||||
|
||||
@@ -2,6 +2,7 @@ package commit
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
@@ -15,14 +16,15 @@ import (
|
||||
"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
|
||||
"github.com/argoproj/argo-cd/v3/common"
|
||||
appv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
|
||||
"github.com/argoproj/argo-cd/v3/util/git"
|
||||
"github.com/argoproj/argo-cd/v3/util/hydrator"
|
||||
"github.com/argoproj/argo-cd/v3/util/io"
|
||||
)
|
||||
|
||||
var sprigFuncMap = sprig.GenericFuncMap() // a singleton for better performance
|
||||
|
||||
const gitAttributesContents = `*/README.md linguist-generated=true
|
||||
*/hydrator.metadata linguist-generated=true`
|
||||
const gitAttributesContents = `**/README.md linguist-generated=true
|
||||
**/hydrator.metadata linguist-generated=true`
|
||||
|
||||
func init() {
|
||||
// Avoid allowing the user to learn things about the environment.
|
||||
@@ -33,24 +35,24 @@ func init() {
|
||||
|
||||
// WriteForPaths writes the manifests, hydrator.metadata, and README.md files for each path in the provided paths. It
|
||||
// also writes a root-level hydrator.metadata file containing the repo URL and dry SHA.
|
||||
func WriteForPaths(root *os.Root, repoUrl, drySha string, dryCommitMetadata *appv1.RevisionMetadata, paths []*apiclient.PathDetails) error { //nolint:revive //FIXME(var-naming)
|
||||
func WriteForPaths(root *os.Root, repoUrl, drySha string, dryCommitMetadata *appv1.RevisionMetadata, paths []*apiclient.PathDetails, gitClient git.Client) (bool, error) { //nolint:revive //FIXME(var-naming)
|
||||
hydratorMetadata, err := hydrator.GetCommitMetadata(repoUrl, drySha, dryCommitMetadata)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to retrieve hydrator metadata: %w", err)
|
||||
return false, fmt.Errorf("failed to retrieve hydrator metadata: %w", err)
|
||||
}
|
||||
|
||||
// Write the top-level readme.
|
||||
err = writeMetadata(root, "", hydratorMetadata)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to write top-level hydrator metadata: %w", err)
|
||||
return false, fmt.Errorf("failed to write top-level hydrator metadata: %w", err)
|
||||
}
|
||||
|
||||
// Write .gitattributes
|
||||
err = writeGitAttributes(root)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to write git attributes: %w", err)
|
||||
return false, fmt.Errorf("failed to write git attributes: %w", err)
|
||||
}
|
||||
|
||||
var atleastOneManifestChanged bool
|
||||
for _, p := range paths {
|
||||
hydratePath := p.Path
|
||||
if hydratePath == "." {
|
||||
@@ -61,15 +63,26 @@ func WriteForPaths(root *os.Root, repoUrl, drySha string, dryCommitMetadata *app
|
||||
if hydratePath != "" {
|
||||
err = root.MkdirAll(hydratePath, 0o755)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create path: %w", err)
|
||||
return false, fmt.Errorf("failed to create path: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Write the manifests
|
||||
err = writeManifests(root, hydratePath, p.Manifests)
|
||||
err := writeManifests(root, hydratePath, p.Manifests)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to write manifests: %w", err)
|
||||
return false, fmt.Errorf("failed to write manifests: %w", err)
|
||||
}
|
||||
// Check if the manifest file has been modified compared to the git index
|
||||
changed, err := gitClient.HasFileChanged(filepath.Join(hydratePath, ManifestYaml))
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("failed to check if anything changed on the manifest: %w", err)
|
||||
}
|
||||
|
||||
if !changed {
|
||||
continue
|
||||
}
|
||||
// If any manifest has changed, signal that a commit should occur. If none have changed, skip committing.
|
||||
atleastOneManifestChanged = changed
|
||||
|
||||
// Write hydrator.metadata containing information about the hydration process.
|
||||
hydratorMetadata := hydrator.HydratorCommitMetadata{
|
||||
@@ -79,16 +92,20 @@ func WriteForPaths(root *os.Root, repoUrl, drySha string, dryCommitMetadata *app
|
||||
}
|
||||
err = writeMetadata(root, hydratePath, hydratorMetadata)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to write hydrator metadata: %w", err)
|
||||
return false, fmt.Errorf("failed to write hydrator metadata: %w", err)
|
||||
}
|
||||
|
||||
// Write README
|
||||
err = writeReadme(root, hydratePath, hydratorMetadata)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to write readme: %w", err)
|
||||
return false, fmt.Errorf("failed to write readme: %w", err)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
// If no manifests changed, skip the commit.
|
||||
if !atleastOneManifestChanged {
|
||||
return false, nil
|
||||
}
|
||||
return atleastOneManifestChanged, nil
|
||||
}
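The HasFileChanged check introduced above is what lets a hydration become a no-op: the manifests are rewritten to the working tree, but a commit only happens if at least one of them differs from what git already has. The actual git.Client implementation is not part of this diff; as a rough, hypothetical sketch (the function and parameter names below are invented for illustration), a CLI-backed equivalent could look like this:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// hasFileChanged is a hypothetical stand-in for git.Client.HasFileChanged.
// `git status --porcelain -- <path>` prints one line when the path is
// modified, added, or untracked, and prints nothing when the working tree
// matches the committed state.
func hasFileChanged(repoPath, relPath string) (bool, error) {
	out, err := exec.Command("git", "-C", repoPath, "status", "--porcelain", "--", relPath).Output()
	if err != nil {
		return false, fmt.Errorf("git status failed: %w", err)
	}
	return len(bytes.TrimSpace(out)) > 0, nil
}

func main() {
	changed, err := hasFileChanged(".", "guestbook/manifest.yaml")
	fmt.Println(changed, err)
}
```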
|
||||
|
||||
// writeMetadata writes the metadata to the hydrator.metadata file.
|
||||
@@ -163,7 +180,7 @@ func writeGitAttributes(root *os.Root) error {
|
||||
func writeManifests(root *os.Root, dirPath string, manifests []*apiclient.HydratedManifestDetails) error {
|
||||
// If the file exists, truncate it.
|
||||
// No need to use SecureJoin here, as the path is already sanitized.
|
||||
manifestPath := filepath.Join(dirPath, "manifest.yaml")
|
||||
manifestPath := filepath.Join(dirPath, ManifestYaml)
|
||||
|
||||
file, err := root.OpenFile(manifestPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.ModePerm)
|
||||
if err != nil {
|
||||
@@ -196,6 +213,43 @@ func writeManifests(root *os.Root, dirPath string, manifests []*apiclient.Hydrat
|
||||
return fmt.Errorf("failed to encode manifest: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// IsHydrated checks whether the given commit (commitSha) has already been hydrated with the specified Dry SHA (drySha).
|
||||
// It does this by retrieving the commit note in the NoteNamespace and examining the DrySHA value.
|
||||
// Returns true if the stored DrySHA matches the provided drySha, false if not or if no note exists.
|
||||
// Gracefully handles missing notes as a normal outcome (not an error), but returns an error on retrieval or parse failures.
|
||||
func IsHydrated(gitClient git.Client, drySha, commitSha string) (bool, error) {
|
||||
note, err := gitClient.GetCommitNote(commitSha, NoteNamespace)
|
||||
if err != nil {
|
||||
// A missing note is a valid and expected outcome in this context, so return false with no error and let the hydration continue
|
||||
unwrappedError := errors.Unwrap(err)
|
||||
if unwrappedError != nil && errors.Is(unwrappedError, git.ErrNoNoteFound) {
|
||||
return false, nil
|
||||
}
|
||||
return false, err
|
||||
}
|
||||
var commitNote CommitNote
|
||||
err = json.Unmarshal([]byte(note), &commitNote)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("json unmarshal failed %w", err)
|
||||
}
|
||||
return commitNote.DrySHA == drySha, nil
|
||||
}
|
||||
|
||||
// AddNote attaches a commit note containing the specified dry SHA (`drySha`) to the given commit (`commitSha`)
|
||||
// in the configured note namespace. The note is marshaled as JSON and pushed to the remote repository using
|
||||
// the provided gitClient. Returns an error if marshalling or note addition fails.
|
||||
func AddNote(gitClient git.Client, drySha, commitSha string) error {
|
||||
note := CommitNote{DrySHA: drySha}
|
||||
jsonBytes, err := json.Marshal(note)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to marshal commit note: %w", err)
|
||||
}
|
||||
err = gitClient.AddAndPushNote(commitSha, NoteNamespace, string(jsonBytes))
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to add commit note: %w", err)
|
||||
}
|
||||
return nil
|
||||
}
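For reference, the note payload that AddNote writes and IsHydrated reads back is plain JSON carrying only the dry SHA. A minimal, self-contained sketch of that round trip (the `drySha` JSON tag is inferred from the `{"drySha":"abc123"}` fixture used in the tests; the real CommitNote type lives elsewhere in the package):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// commitNote mirrors the CommitNote struct used by AddNote and IsHydrated;
// the json tag is inferred from the {"drySha":"abc123"} test fixture.
type commitNote struct {
	DrySHA string `json:"drySha"`
}

func main() {
	// What AddNote marshals and pushes to the notes namespace of the hydrated commit.
	raw, _ := json.Marshal(commitNote{DrySHA: "abc123"})
	fmt.Println(string(raw)) // {"drySha":"abc123"}

	// What IsHydrated does with the note it reads back: compare the stored
	// dry SHA against the dry SHA of the incoming request.
	var parsed commitNote
	_ = json.Unmarshal(raw, &parsed)
	fmt.Println(parsed.DrySHA == "abc123") // true => already hydrated
}
```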
|
||||
|
||||
@@ -5,19 +5,25 @@ import (
|
||||
"crypto/sha256"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/mock"
|
||||
"github.com/stretchr/testify/require"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
|
||||
"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
|
||||
appsv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
|
||||
"github.com/argoproj/argo-cd/v3/util/git"
|
||||
gitmocks "github.com/argoproj/argo-cd/v3/util/git/mocks"
|
||||
"github.com/argoproj/argo-cd/v3/util/hydrator"
|
||||
)
|
||||
|
||||
@@ -92,9 +98,12 @@ Argocd-reference-commit-sha: abc123
|
||||
},
|
||||
},
|
||||
}
|
||||
mockGitClient := gitmocks.NewClient(t)
|
||||
mockGitClient.On("HasFileChanged", mock.Anything).Return(true, nil).Times(len(paths))
|
||||
|
||||
err := WriteForPaths(root, repoURL, drySha, metadata, paths)
|
||||
shouldCommit, err := WriteForPaths(root, repoURL, drySha, metadata, paths, mockGitClient)
|
||||
require.NoError(t, err)
|
||||
require.True(t, shouldCommit)
|
||||
|
||||
// Check if the top-level hydrator.metadata exists and contains the repo URL and dry SHA
|
||||
topMetadataPath := filepath.Join(root.Name(), "hydrator.metadata")
|
||||
@@ -142,6 +151,117 @@ Argocd-reference-commit-sha: abc123
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriteForPaths_WithOneManifestMatchesExisting(t *testing.T) {
|
||||
root := tempRoot(t)
|
||||
|
||||
repoURL := "https://github.com/example/repo"
|
||||
drySha := "abc123"
|
||||
paths := []*apiclient.PathDetails{
|
||||
{
|
||||
Path: "path1",
|
||||
Manifests: []*apiclient.HydratedManifestDetails{
|
||||
{ManifestJSON: `{"kind":"Pod","apiVersion":"v1"}`},
|
||||
},
|
||||
Commands: []string{"command1", "command2"},
|
||||
},
|
||||
{
|
||||
Path: "path2",
|
||||
Manifests: []*apiclient.HydratedManifestDetails{
|
||||
{ManifestJSON: `{"kind":"Service","apiVersion":"v1"}`},
|
||||
},
|
||||
Commands: []string{"command3"},
|
||||
},
|
||||
{
|
||||
Path: "path3/nested",
|
||||
Manifests: []*apiclient.HydratedManifestDetails{
|
||||
{ManifestJSON: `{"kind":"Deployment","apiVersion":"apps/v1"}`},
|
||||
},
|
||||
Commands: []string{"command4"},
|
||||
},
|
||||
}
|
||||
|
||||
now := metav1.NewTime(time.Now())
|
||||
metadata := &appsv1.RevisionMetadata{
|
||||
Author: "test-author",
|
||||
Date: &now,
|
||||
Message: `test-message
|
||||
|
||||
Signed-off-by: Test User <test@example.com>
|
||||
Argocd-reference-commit-sha: abc123
|
||||
`,
|
||||
References: []appsv1.RevisionReference{
|
||||
{
|
||||
Commit: &appsv1.CommitMetadata{
|
||||
Author: "test-code-author <test-email-author@example.com>",
|
||||
Date: now.Format(time.RFC3339),
|
||||
Subject: "test-code-subject",
|
||||
SHA: "test-code-sha",
|
||||
RepoURL: "https://example.com/test/repo.git",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
mockGitClient := gitmocks.NewClient(t)
|
||||
mockGitClient.On("HasFileChanged", "path1/manifest.yaml").Return(true, nil).Once()
|
||||
mockGitClient.On("HasFileChanged", "path2/manifest.yaml").Return(true, nil).Once()
|
||||
mockGitClient.On("HasFileChanged", "path3/nested/manifest.yaml").Return(false, nil).Once()
|
||||
|
||||
shouldCommit, err := WriteForPaths(root, repoURL, drySha, metadata, paths, mockGitClient)
|
||||
require.NoError(t, err)
|
||||
require.True(t, shouldCommit)
|
||||
|
||||
// Check if the top-level hydrator.metadata exists and contains the repo URL and dry SHA
|
||||
topMetadataPath := filepath.Join(root.Name(), "hydrator.metadata")
|
||||
topMetadataBytes, err := os.ReadFile(topMetadataPath)
|
||||
require.NoError(t, err)
|
||||
|
||||
var topMetadata hydratorMetadataFile
|
||||
err = json.Unmarshal(topMetadataBytes, &topMetadata)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, repoURL, topMetadata.RepoURL)
|
||||
assert.Equal(t, drySha, topMetadata.DrySHA)
|
||||
assert.Equal(t, metadata.Author, topMetadata.Author)
|
||||
assert.Equal(t, "test-message", topMetadata.Subject)
|
||||
// The body should exclude the Argocd- trailers.
|
||||
assert.Equal(t, "Signed-off-by: Test User <test@example.com>\n", topMetadata.Body)
|
||||
assert.Equal(t, metadata.Date.Format(time.RFC3339), topMetadata.Date)
|
||||
assert.Equal(t, metadata.References, topMetadata.References)
|
||||
|
||||
for _, p := range paths {
|
||||
fullHydratePath := filepath.Join(root.Name(), p.Path)
|
||||
if p.Path == "path3/nested" {
|
||||
assert.DirExists(t, fullHydratePath)
|
||||
manifestPath := path.Join(fullHydratePath, "manifest.yaml")
|
||||
_, err := os.ReadFile(manifestPath)
|
||||
require.NoError(t, err)
|
||||
continue
|
||||
}
|
||||
// Check if each path directory exists
|
||||
assert.DirExists(t, fullHydratePath)
|
||||
|
||||
// Check if each path contains a hydrator.metadata file and contains the repo URL
|
||||
metadataPath := path.Join(fullHydratePath, "hydrator.metadata")
|
||||
metadataBytes, err := os.ReadFile(metadataPath)
|
||||
require.NoError(t, err)
|
||||
|
||||
var readMetadata hydratorMetadataFile
|
||||
err = json.Unmarshal(metadataBytes, &readMetadata)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, repoURL, readMetadata.RepoURL)
|
||||
// Check if each path contains a README.md file and contains the repo URL
|
||||
readmePath := path.Join(fullHydratePath, "README.md")
|
||||
readmeBytes, err := os.ReadFile(readmePath)
|
||||
require.NoError(t, err)
|
||||
assert.Contains(t, string(readmeBytes), repoURL)
|
||||
|
||||
// Check if each path contains a manifest.yaml file and contains the word kind
|
||||
manifestPath := path.Join(fullHydratePath, "manifest.yaml")
|
||||
manifestBytes, err := os.ReadFile(manifestPath)
|
||||
require.NoError(t, err)
|
||||
assert.Contains(t, string(manifestBytes), "kind")
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriteMetadata(t *testing.T) {
|
||||
root := tempRoot(t)
|
||||
|
||||
@@ -234,6 +354,186 @@ func TestWriteGitAttributes(t *testing.T) {
|
||||
gitAttributesPath := filepath.Join(root.Name(), ".gitattributes")
|
||||
gitAttributesBytes, err := os.ReadFile(gitAttributesPath)
|
||||
require.NoError(t, err)
|
||||
assert.Contains(t, string(gitAttributesBytes), "*/README.md linguist-generated=true")
|
||||
assert.Contains(t, string(gitAttributesBytes), "*/hydrator.metadata linguist-generated=true")
|
||||
assert.Contains(t, string(gitAttributesBytes), "README.md linguist-generated=true")
|
||||
assert.Contains(t, string(gitAttributesBytes), "hydrator.metadata linguist-generated=true")
|
||||
}
|
||||
|
||||
func TestWriteGitAttributes_MatchesAllDepths(t *testing.T) {
|
||||
root := tempRoot(t)
|
||||
|
||||
err := writeGitAttributes(root)
|
||||
require.NoError(t, err)
|
||||
|
||||
// The gitattributes pattern needs to match files at all depths:
|
||||
// - hydrator.metadata (root level)
|
||||
// - path1/hydrator.metadata (one level deep)
|
||||
// - path1/nested/deep/hydrator.metadata (multiple levels deep)
|
||||
// Same for README.md files
|
||||
//
|
||||
// The pattern "**/hydrator.metadata" matches at any depth including root
|
||||
// The pattern "*/hydrator.metadata" only matches exactly one directory level deep
|
||||
|
||||
// Test actual Git behavior using git check-attr
|
||||
// Initialize a git repo
|
||||
ctx := t.Context()
|
||||
repoPath := root.Name()
|
||||
cmd := exec.CommandContext(ctx, "git", "init")
|
||||
cmd.Dir = repoPath
|
||||
output, err := cmd.CombinedOutput()
|
||||
require.NoError(t, err, "Failed to init git repo: %s", string(output))
|
||||
|
||||
// Test files at different depths
|
||||
testCases := []struct {
|
||||
path string
|
||||
shouldMatch bool
|
||||
description string
|
||||
}{
|
||||
{"hydrator.metadata", true, "root level hydrator.metadata"},
|
||||
{"README.md", true, "root level README.md"},
|
||||
{"path1/hydrator.metadata", true, "one level deep hydrator.metadata"},
|
||||
{"path1/README.md", true, "one level deep README.md"},
|
||||
{"path1/nested/hydrator.metadata", true, "two levels deep hydrator.metadata"},
|
||||
{"path1/nested/README.md", true, "two levels deep README.md"},
|
||||
{"path1/nested/deep/hydrator.metadata", true, "three levels deep hydrator.metadata"},
|
||||
{"path1/nested/deep/README.md", true, "three levels deep README.md"},
|
||||
{"manifest.yaml", false, "manifest.yaml should not match"},
|
||||
{"path1/manifest.yaml", false, "nested manifest.yaml should not match"},
|
||||
}
|
||||
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.description, func(t *testing.T) {
|
||||
// Use git check-attr to verify if linguist-generated attribute is set
|
||||
cmd := exec.CommandContext(ctx, "git", "check-attr", "linguist-generated", tc.path)
|
||||
cmd.Dir = repoPath
|
||||
output, err := cmd.CombinedOutput()
|
||||
require.NoError(t, err, "Failed to run git check-attr: %s", string(output))
|
||||
|
||||
// Output format: <path>: <attribute>: <value>
|
||||
// Example: "hydrator.metadata: linguist-generated: true"
|
||||
outputStr := strings.TrimSpace(string(output))
|
||||
|
||||
if tc.shouldMatch {
|
||||
expectedOutput := tc.path + ": linguist-generated: true"
|
||||
assert.Equal(t, expectedOutput, outputStr,
|
||||
"File %s should have linguist-generated=true attribute", tc.path)
|
||||
} else {
|
||||
// Attribute should be unspecified
|
||||
expectedOutput := tc.path + ": linguist-generated: unspecified"
|
||||
assert.Equal(t, expectedOutput, outputStr,
|
||||
"File %s should not have linguist-generated=true attribute", tc.path)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsHydrated(t *testing.T) {
|
||||
mockGitClient := gitmocks.NewClient(t)
|
||||
drySha := "abc123"
|
||||
commitSha := "fff456"
|
||||
commitShaNoNoteFoundErr := "abc456"
|
||||
commitShaErr := "abc999"
|
||||
strnote := "{\"drySha\":\"abc123\"}"
|
||||
mockGitClient.On("GetCommitNote", commitSha, mock.Anything).Return(strnote, nil).Once()
|
||||
mockGitClient.On("GetCommitNote", commitShaNoNoteFoundErr, mock.Anything).Return("", fmt.Errorf("wrapped error %w", git.ErrNoNoteFound)).Once()
|
||||
// an existing note
|
||||
isHydrated, err := IsHydrated(mockGitClient, drySha, commitSha)
|
||||
require.NoError(t, err)
|
||||
assert.True(t, isHydrated)
|
||||
|
||||
// A missing note is treated as success; no error is returned
|
||||
isHydrated, err = IsHydrated(mockGitClient, drySha, commitShaNoNoteFoundErr)
|
||||
require.NoError(t, err)
|
||||
assert.False(t, isHydrated)
|
||||
|
||||
// Test that non-ErrNoNoteFound errors are propagated: when GetCommitNote fails with
|
||||
// an error other than "no note found", IsHydrated should return that error to the caller
|
||||
err = errors.New("some other error")
|
||||
mockGitClient.On("GetCommitNote", commitShaErr, mock.Anything).Return("", fmt.Errorf("wrapped error %w", err)).Once()
|
||||
isHydrated, err = IsHydrated(mockGitClient, drySha, commitShaErr)
|
||||
require.Error(t, err)
|
||||
assert.False(t, isHydrated)
|
||||
}
|
||||
|
||||
func TestAddNote(t *testing.T) {
|
||||
mockGitClient := gitmocks.NewClient(t)
|
||||
drySha := "abc123"
|
||||
commitSha := "fff456"
|
||||
commitShaErr := "abc456"
|
||||
err := errors.New("test error")
|
||||
mockGitClient.On("AddAndPushNote", commitSha, mock.Anything, mock.Anything).Return(nil).Once()
|
||||
mockGitClient.On("AddAndPushNote", commitShaErr, mock.Anything, mock.Anything).Return(err).Once()
|
||||
|
||||
// success
|
||||
err = AddNote(mockGitClient, drySha, commitSha)
|
||||
require.NoError(t, err)
|
||||
|
||||
// failure
|
||||
err = AddNote(mockGitClient, drySha, commitShaErr)
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
// TestWriteForPaths_NoOpScenario tests that when manifests don't change between two hydrations,
|
||||
// shouldCommit returns false. This covers the bug where a new dry commit that does not affect the
|
||||
// manifests would incorrectly produce a new hydrated commit.
|
||||
func TestWriteForPaths_NoOpScenario(t *testing.T) {
|
||||
root := tempRoot(t)
|
||||
|
||||
repoURL := "https://github.com/example/repo"
|
||||
drySha1 := "abc123"
|
||||
drySha2 := "def456" // Different dry SHA
|
||||
paths := []*apiclient.PathDetails{
|
||||
{
|
||||
Path: "guestbook",
|
||||
Manifests: []*apiclient.HydratedManifestDetails{
|
||||
{ManifestJSON: `{"apiVersion":"v1","kind":"Service","metadata":{"name":"guestbook-ui"}}`},
|
||||
{ManifestJSON: `{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"name":"guestbook-ui"}}`},
|
||||
},
|
||||
Commands: []string{"kustomize build ."},
|
||||
},
|
||||
}
|
||||
|
||||
now1 := metav1.NewTime(time.Now())
|
||||
metadata1 := &appsv1.RevisionMetadata{
|
||||
Author: "test-author",
|
||||
Date: &now1,
|
||||
Message: "Initial commit",
|
||||
}
|
||||
|
||||
// First hydration - manifests are new, so HasFileChanged should return true
|
||||
mockGitClient1 := gitmocks.NewClient(t)
|
||||
mockGitClient1.On("HasFileChanged", "guestbook/manifest.yaml").Return(true, nil).Once()
|
||||
|
||||
shouldCommit1, err := WriteForPaths(root, repoURL, drySha1, metadata1, paths, mockGitClient1)
|
||||
require.NoError(t, err)
|
||||
require.True(t, shouldCommit1, "First hydration should commit because manifests are new")
|
||||
|
||||
// Second hydration - same manifest content but different dry SHA and metadata
|
||||
// Simulate adding a README.md to the dry source (which doesn't affect manifests)
|
||||
now2 := metav1.NewTime(time.Now().Add(1 * time.Hour)) // Different timestamp
|
||||
metadata2 := &appsv1.RevisionMetadata{
|
||||
Author: "test-author",
|
||||
Date: &now2,
|
||||
Message: "Add README.md", // Different commit message
|
||||
}
|
||||
|
||||
// The manifests are identical, so HasFileChanged should return false
|
||||
mockGitClient2 := gitmocks.NewClient(t)
|
||||
mockGitClient2.On("HasFileChanged", "guestbook/manifest.yaml").Return(false, nil).Once()
|
||||
|
||||
shouldCommit2, err := WriteForPaths(root, repoURL, drySha2, metadata2, paths, mockGitClient2)
|
||||
require.NoError(t, err)
|
||||
require.False(t, shouldCommit2, "Second hydration should NOT commit because manifests didn't change")
|
||||
|
||||
// Verify that the root-level metadata WAS updated (even though we're not committing)
|
||||
// The files get written to the working directory, but since shouldCommit is false, they won't be committed
|
||||
topMetadataPath := filepath.Join(root.Name(), "hydrator.metadata")
|
||||
topMetadataBytes, err := os.ReadFile(topMetadataPath)
|
||||
require.NoError(t, err)
|
||||
|
||||
var topMetadata hydratorMetadataFile
|
||||
err = json.Unmarshal(topMetadataBytes, &topMetadata)
|
||||
require.NoError(t, err)
|
||||
// The top-level metadata should have the NEW dry SHA (files are written, just not committed)
|
||||
assert.Equal(t, drySha2, topMetadata.DrySHA)
|
||||
assert.Equal(t, metadata2.Date.Format(time.RFC3339), topMetadata.Date)
|
||||
}
|
||||
|
||||
@@ -225,6 +225,10 @@ const (
|
||||
// Ex: "http://grafana.example.com/d/yu5UH4MMz/deployments"
|
||||
// Ex: "Go to Dashboard|http://grafana.example.com/d/yu5UH4MMz/deployments"
|
||||
AnnotationKeyLinkPrefix = "link.argocd.argoproj.io/"
|
||||
// AnnotationKeyIgnoreDefaultLinks tells the Application to not add autogenerated links from this object into its externalURLs
|
||||
// This applies to ingress objects and takes effect if set to "true"
|
||||
// This only disables the default behavior of generating links based on the ingress spec, and does not disable AnnotationKeyLinkPrefix
|
||||
AnnotationKeyIgnoreDefaultLinks = "argocd.argoproj.io/ignore-default-links"
|
||||
|
||||
// AnnotationKeyAppSkipReconcile tells the Application to skip the Application controller reconcile.
|
||||
// Skip reconcile when the value is "true" or any other string values that can be strconv.ParseBool() to be true.
|
||||
|
||||
@@ -128,7 +128,6 @@ type ApplicationController struct {
|
||||
statusRefreshJitter time.Duration
|
||||
selfHealTimeout time.Duration
|
||||
selfHealBackoff *wait.Backoff
|
||||
selfHealBackoffCooldown time.Duration
|
||||
syncTimeout time.Duration
|
||||
db db.ArgoDB
|
||||
settingsMgr *settings_util.SettingsManager
|
||||
@@ -164,7 +163,6 @@ func NewApplicationController(
|
||||
appResyncJitter time.Duration,
|
||||
selfHealTimeout time.Duration,
|
||||
selfHealBackoff *wait.Backoff,
|
||||
selfHealBackoffCooldown time.Duration,
|
||||
syncTimeout time.Duration,
|
||||
repoErrorGracePeriod time.Duration,
|
||||
metricsPort int,
|
||||
@@ -211,7 +209,6 @@ func NewApplicationController(
|
||||
settingsMgr: settingsMgr,
|
||||
selfHealTimeout: selfHealTimeout,
|
||||
selfHealBackoff: selfHealBackoff,
|
||||
selfHealBackoffCooldown: selfHealBackoffCooldown,
|
||||
syncTimeout: syncTimeout,
|
||||
clusterSharding: clusterSharding,
|
||||
projByNameCache: sync.Map{},
|
||||
@@ -612,8 +609,10 @@ func (ctrl *ApplicationController) getResourceTree(destCluster *appv1.Cluster, a
|
||||
managedResourcesKeys = append(managedResourcesKeys, kube.GetResourceKey(live))
|
||||
}
|
||||
}
|
||||
// Process managed resources and their children, including cross-namespace relationships
|
||||
// from cluster-scoped parents (e.g., Crossplane CompositeResourceDefinitions)
|
||||
err = ctrl.stateCache.IterateHierarchyV2(destCluster, managedResourcesKeys, func(child appv1.ResourceNode, _ string) bool {
|
||||
permitted, _ := proj.IsResourcePermitted(schema.GroupKind{Group: child.Group, Kind: child.Kind}, child.Namespace, destCluster, func(project string) ([]*appv1.Cluster, error) {
|
||||
permitted, _ := proj.IsResourcePermitted(schema.GroupKind{Group: child.Group, Kind: child.Kind}, child.Name, child.Namespace, destCluster, func(project string) ([]*appv1.Cluster, error) {
|
||||
clusters, err := ctrl.db.GetProjectClusters(context.TODO(), project)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get project clusters: %w", err)
|
||||
@@ -633,10 +632,11 @@ func (ctrl *ApplicationController) getResourceTree(destCluster *appv1.Cluster, a
|
||||
orphanedNodes := make([]appv1.ResourceNode, 0)
|
||||
orphanedNodesKeys := make([]kube.ResourceKey, 0)
|
||||
for k := range orphanedNodesMap {
|
||||
if k.Namespace != "" && proj.IsGroupKindPermitted(k.GroupKind(), true) && !isKnownOrphanedResourceExclusion(k, proj) {
|
||||
if k.Namespace != "" && proj.IsGroupKindNamePermitted(k.GroupKind(), k.Name, true) && !isKnownOrphanedResourceExclusion(k, proj) {
|
||||
orphanedNodesKeys = append(orphanedNodesKeys, k)
|
||||
}
|
||||
}
|
||||
// Process orphaned resources
|
||||
err = ctrl.stateCache.IterateHierarchyV2(destCluster, orphanedNodesKeys, func(child appv1.ResourceNode, appName string) bool {
|
||||
belongToAnotherApp := false
|
||||
if appName != "" {
|
||||
@@ -650,7 +650,7 @@ func (ctrl *ApplicationController) getResourceTree(destCluster *appv1.Cluster, a
|
||||
return false
|
||||
}
|
||||
|
||||
permitted, _ := proj.IsResourcePermitted(schema.GroupKind{Group: child.Group, Kind: child.Kind}, child.Namespace, destCluster, func(project string) ([]*appv1.Cluster, error) {
|
||||
permitted, _ := proj.IsResourcePermitted(schema.GroupKind{Group: child.Group, Kind: child.Kind}, child.Name, child.Namespace, destCluster, func(project string) ([]*appv1.Cluster, error) {
|
||||
return ctrl.db.GetProjectClusters(context.TODO(), project)
|
||||
})
|
||||
|
||||
@@ -1062,6 +1062,9 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
|
||||
})
|
||||
message := fmt.Sprintf("Unable to delete application resources: %v", err.Error())
|
||||
ctrl.logAppEvent(context.TODO(), app, argo.EventInfo{Reason: argo.EventReasonStatusRefreshed, Type: corev1.EventTypeWarning}, message)
|
||||
} else {
|
||||
// Clear DeletionError condition if deletion is progressing successfully
|
||||
app.Status.SetConditions([]appv1.ApplicationCondition{}, map[appv1.ApplicationConditionType]bool{appv1.ApplicationConditionDeletionError: true})
|
||||
}
|
||||
ts.AddCheckpoint("finalize_application_deletion_ms")
|
||||
}
|
||||
@@ -1134,13 +1137,13 @@ func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool)
|
||||
}
|
||||
|
||||
func (ctrl *ApplicationController) finalizeProjectDeletion(proj *appv1.AppProject) error {
|
||||
apps, err := ctrl.appLister.Applications(ctrl.namespace).List(labels.Everything())
|
||||
apps, err := ctrl.appLister.List(labels.Everything())
|
||||
if err != nil {
|
||||
return fmt.Errorf("error listing applications: %w", err)
|
||||
}
|
||||
appsCount := 0
|
||||
for i := range apps {
|
||||
if apps[i].Spec.GetProject() == proj.Name {
|
||||
if apps[i].Spec.GetProject() == proj.Name && ctrl.isAppNamespaceAllowed(apps[i]) && proj.IsAppNamespacePermitted(apps[i], ctrl.namespace) {
|
||||
appsCount++
|
||||
}
|
||||
}
|
||||
@@ -1203,17 +1206,21 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Get destination cluster
|
||||
destCluster, err := argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
|
||||
if err != nil {
|
||||
logCtx.WithError(err).Warn("Unable to get destination cluster")
|
||||
app.UnSetCascadedDeletion()
|
||||
app.UnSetPostDeleteFinalizerAll()
|
||||
app.UnSetPreDeleteFinalizerAll()
|
||||
if err := ctrl.updateFinalizers(app); err != nil {
|
||||
return err
|
||||
}
|
||||
logCtx.Infof("Resource entries removed from undefined cluster")
|
||||
return nil
|
||||
}
|
||||
|
||||
clusterRESTConfig, err := destCluster.RESTConfig()
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -1225,9 +1232,30 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
|
||||
return fmt.Errorf("cannot apply impersonation: %w", err)
|
||||
}
|
||||
|
||||
// Handle PreDelete hooks - run them before any deletion occurs
|
||||
if app.HasPreDeleteFinalizer() {
|
||||
objsMap, err := ctrl.getPermittedAppLiveObjects(destCluster, app, proj, projectClusters)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error getting permitted app live objects: %w", err)
|
||||
}
|
||||
|
||||
done, err := ctrl.executePreDeleteHooks(app, proj, objsMap, config, logCtx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error executing pre-delete hooks: %w", err)
|
||||
}
|
||||
if !done {
|
||||
// PreDelete hooks are still running - wait for them to complete
|
||||
return nil
|
||||
}
|
||||
// PreDelete hooks are done - remove the finalizer so we can continue with deletion
|
||||
app.UnSetPreDeleteFinalizer()
|
||||
if err := ctrl.updateFinalizers(app); err != nil {
|
||||
return fmt.Errorf("error updating pre-delete finalizers: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
if app.CascadedDeletion() {
|
||||
deletionApproved := app.IsDeletionConfirmed(app.DeletionTimestamp.Time)
|
||||
|
||||
logCtx.Infof("Deleting resources")
|
||||
// ApplicationDestination points to a valid cluster, so we may clean up the live objects
|
||||
objs := make([]*unstructured.Unstructured, 0)
|
||||
@@ -1304,6 +1332,23 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
|
||||
return ctrl.updateFinalizers(app)
|
||||
}
|
||||
|
||||
if app.HasPreDeleteFinalizer("cleanup") {
|
||||
objsMap, err := ctrl.getPermittedAppLiveObjects(destCluster, app, proj, projectClusters)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error getting permitted app live objects for pre-delete cleanup: %w", err)
|
||||
}
|
||||
|
||||
done, err := ctrl.cleanupPreDeleteHooks(objsMap, config, logCtx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error cleaning up pre-delete hooks: %w", err)
|
||||
}
|
||||
if !done {
|
||||
return nil
|
||||
}
|
||||
app.UnSetPreDeleteFinalizer("cleanup")
|
||||
return ctrl.updateFinalizers(app)
|
||||
}
|
||||
|
||||
if app.HasPostDeleteFinalizer("cleanup") {
|
||||
objsMap, err := ctrl.getPermittedAppLiveObjects(destCluster, app, proj, projectClusters)
|
||||
if err != nil {
|
||||
@@ -1321,7 +1366,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
|
||||
return ctrl.updateFinalizers(app)
|
||||
}
|
||||
|
||||
if !app.CascadedDeletion() && !app.HasPostDeleteFinalizer() {
|
||||
if !app.CascadedDeletion() && !app.HasPostDeleteFinalizer() && !app.HasPreDeleteFinalizer() {
|
||||
if err := ctrl.cache.SetAppManagedResources(app.Name, nil); err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -1514,8 +1559,18 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
|
||||
// if we just completed an operation, force a refresh so that UI will report up-to-date
|
||||
// sync/health information
|
||||
if _, err := cache.MetaNamespaceKeyFunc(app); err == nil {
|
||||
// force app refresh with using CompareWithLatest comparison type and trigger app reconciliation loop
|
||||
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatestForceResolve.Pointer(), nil)
|
||||
var compareWith CompareWith
|
||||
if state.Operation.InitiatedBy.Automated {
|
||||
// Do not force revision resolution on automated operations because
|
||||
// this would cause excessive ls-remote requests on monorepo commits
|
||||
compareWith = CompareWithLatest
|
||||
} else {
|
||||
// Force an app refresh using the most recently resolved revision after sync,
|
||||
// so UI won't show a just synced application being out of sync if it was
|
||||
// synced after the commit but before the app refresh (see #18153)
|
||||
compareWith = CompareWithLatestForceResolve
|
||||
}
|
||||
ctrl.requestAppRefresh(app.QualifiedName(), compareWith.Pointer(), nil)
|
||||
} else {
|
||||
logCtx.WithError(err).Warn("Fails to requeue application")
|
||||
}
|
||||
@@ -1835,10 +1890,25 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
|
||||
app.Status.SourceTypes = compareResult.appSourceTypes
|
||||
app.Status.ControllerNamespace = ctrl.namespace
|
||||
ts.AddCheckpoint("app_status_update_ms")
|
||||
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
|
||||
// This partly duplicates patch_ms, but is more descriptive and allows measuring the next step.
|
||||
ts.AddCheckpoint("persist_app_status_ms")
|
||||
if (compareResult.hasPostDeleteHooks != app.HasPostDeleteFinalizer() || compareResult.hasPostDeleteHooks != app.HasPostDeleteFinalizer("cleanup")) &&
|
||||
// Update finalizers BEFORE persisting status to avoid race condition where app shows "Synced"
|
||||
// but doesn't have finalizers yet, which would allow deletion without running pre-delete hooks
|
||||
if (compareResult.hasPreDeleteHooks != app.HasPreDeleteFinalizer() ||
|
||||
compareResult.hasPreDeleteHooks != app.HasPreDeleteFinalizer("cleanup")) &&
|
||||
app.GetDeletionTimestamp() == nil {
|
||||
if compareResult.hasPreDeleteHooks {
|
||||
app.SetPreDeleteFinalizer()
|
||||
app.SetPreDeleteFinalizer("cleanup")
|
||||
} else {
|
||||
app.UnSetPreDeleteFinalizer()
|
||||
app.UnSetPreDeleteFinalizer("cleanup")
|
||||
}
|
||||
|
||||
if err := ctrl.updateFinalizers(app); err != nil {
|
||||
logCtx.Errorf("Failed to update pre-delete finalizers: %v", err)
|
||||
}
|
||||
}
|
||||
if (compareResult.hasPostDeleteHooks != app.HasPostDeleteFinalizer() ||
|
||||
compareResult.hasPostDeleteHooks != app.HasPostDeleteFinalizer("cleanup")) &&
|
||||
app.GetDeletionTimestamp() == nil {
|
||||
if compareResult.hasPostDeleteHooks {
|
||||
app.SetPostDeleteFinalizer()
|
||||
@@ -1849,10 +1919,13 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
|
||||
}
|
||||
|
||||
if err := ctrl.updateFinalizers(app); err != nil {
|
||||
logCtx.WithError(err).Error("Failed to update finalizers")
|
||||
logCtx.WithError(err).Error("Failed to update post-delete finalizers")
|
||||
}
|
||||
}
|
||||
ts.AddCheckpoint("process_finalizers_ms")
|
||||
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
|
||||
// This partly duplicates patch_ms, but is more descriptive and allows measuring the next step.
|
||||
ts.AddCheckpoint("persist_app_status_ms")
|
||||
return processNext
|
||||
}
|
||||
|
||||
@@ -2186,12 +2259,8 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
|
||||
// Self heal will trigger a new sync operation when the desired state changes and cause the application to
|
||||
// be OutOfSync when it was previously synced Successfully. This means SelfHeal should only ever be attempted
|
||||
// when the revisions have not changed, and where the previous sync to these revision was successful
|
||||
|
||||
// Only carry over SelfHealAttemptsCount (so it can keep increasing) when the selfHealBackoffCooldown has not yet elapsed
|
||||
if !ctrl.selfHealBackoffCooldownElapsed(app) {
|
||||
if app.Status.OperationState != nil && app.Status.OperationState.Operation.Sync != nil {
|
||||
op.Sync.SelfHealAttemptsCount = app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount
|
||||
}
|
||||
if app.Status.OperationState != nil && app.Status.OperationState.Operation.Sync != nil {
|
||||
op.Sync.SelfHealAttemptsCount = app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount
|
||||
}
|
||||
|
||||
if remainingTime := ctrl.selfHealRemainingBackoff(app, int(op.Sync.SelfHealAttemptsCount)); remainingTime > 0 {
|
||||
@@ -2327,19 +2396,6 @@ func (ctrl *ApplicationController) selfHealRemainingBackoff(app *appv1.Applicati
|
||||
return retryAfter
|
||||
}
|
||||
|
||||
// selfHealBackoffCooldownElapsed returns true when the last successful sync finished longer ago
|
||||
// than the self-heal cooldown. This means that the application has been in sync for long enough to
|
||||
// reset the self-heal backoff to its initial state
|
||||
func (ctrl *ApplicationController) selfHealBackoffCooldownElapsed(app *appv1.Application) bool {
|
||||
if app.Status.OperationState == nil || app.Status.OperationState.FinishedAt == nil {
|
||||
// Something is in progress, or about to be. In that case, selfHeal attempt should be zero anyway
|
||||
return true
|
||||
}
|
||||
|
||||
timeSinceLastOperation := time.Since(app.Status.OperationState.FinishedAt.Time)
|
||||
return timeSinceLastOperation >= ctrl.selfHealBackoffCooldown && app.Status.OperationState.Phase.Successful()
|
||||
}
|
||||
|
||||
// isAppNamespaceAllowed returns whether the application is allowed in the
|
||||
// namespace it's residing in.
|
||||
func (ctrl *ApplicationController) isAppNamespaceAllowed(app *appv1.Application) bool {
|
||||
|
||||
@@ -159,6 +159,8 @@ func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncP
|
||||
runtimeObjs = append(runtimeObjs, data.additionalObjs...)
|
||||
kubeClient := fake.NewClientset(runtimeObjs...)
|
||||
settingsMgr := settings.NewSettingsManager(ctx, kubeClient, test.FakeArgoCDNamespace)
|
||||
// Initialize the settings manager to ensure cluster cache is ready
|
||||
_ = settingsMgr.ResyncInformers()
|
||||
kubectl := &MockKubectl{Kubectl: &kubetest.MockKubectlCmd{}}
|
||||
ctrl, err := NewApplicationController(
|
||||
test.FakeArgoCDNamespace,
|
||||
@@ -177,7 +179,6 @@ func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncP
|
||||
time.Second,
|
||||
time.Minute,
|
||||
nil,
|
||||
time.Minute,
|
||||
0,
|
||||
time.Second*10,
|
||||
common.DefaultPortArgoCDMetrics,
|
||||
@@ -407,6 +408,37 @@ metadata:
|
||||
data:
|
||||
`
|
||||
|
||||
var fakePreDeleteHook = `
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Pod",
|
||||
"metadata": {
|
||||
"name": "pre-delete-hook",
|
||||
"namespace": "default",
|
||||
"labels": {
|
||||
"app.kubernetes.io/instance": "my-app"
|
||||
},
|
||||
"annotations": {
|
||||
"argocd.argoproj.io/hook": "PreDelete"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"containers": [
|
||||
{
|
||||
"name": "pre-delete-hook",
|
||||
"image": "busybox",
|
||||
"restartPolicy": "Never",
|
||||
"command": [
|
||||
"/bin/sh",
|
||||
"-c",
|
||||
"sleep 5 && echo hello from the pre-delete-hook pod"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
`
|
||||
|
||||
var fakePostDeleteHook = `
|
||||
{
|
||||
"apiVersion": "batch/v1",
|
||||
@@ -557,6 +589,15 @@ func newFakeCM() map[string]any {
|
||||
return cm
|
||||
}
|
||||
|
||||
func newFakePreDeleteHook() map[string]any {
|
||||
var cm map[string]any
|
||||
err := yaml.Unmarshal([]byte(fakePreDeleteHook), &cm)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return cm
|
||||
}
|
||||
|
||||
func newFakePostDeleteHook() map[string]any {
|
||||
var hook map[string]any
|
||||
err := yaml.Unmarshal([]byte(fakePostDeleteHook), &hook)
|
||||
@@ -1114,6 +1155,40 @@ func TestFinalizeAppDeletion(t *testing.T) {
|
||||
testShouldDelete(app3)
|
||||
})
|
||||
|
||||
t.Run("PreDelete_HookIsCreated", func(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
app.SetPreDeleteFinalizer()
|
||||
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
|
||||
ctrl := newFakeController(context.Background(), &fakeData{
|
||||
manifestResponses: []*apiclient.ManifestResponse{{
|
||||
Manifests: []string{fakePreDeleteHook},
|
||||
}},
|
||||
apps: []runtime.Object{app, &defaultProj},
|
||||
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{},
|
||||
}, nil)
|
||||
|
||||
patched := false
|
||||
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
|
||||
defaultReactor := fakeAppCs.ReactionChain[0]
|
||||
fakeAppCs.ReactionChain = nil
|
||||
fakeAppCs.AddReactor("get", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
return defaultReactor.React(action)
|
||||
})
|
||||
fakeAppCs.AddReactor("patch", "*", func(_ kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
patched = true
|
||||
return true, &v1alpha1.Application{}, nil
|
||||
})
|
||||
err := ctrl.finalizeApplicationDeletion(app, func(_ string) ([]*v1alpha1.Cluster, error) {
|
||||
return []*v1alpha1.Cluster{}, nil
|
||||
})
|
||||
require.NoError(t, err)
|
||||
// finalizer is not deleted
|
||||
assert.False(t, patched)
|
||||
// pre-delete hook is created
|
||||
require.Len(t, ctrl.kubectl.(*MockKubectl).CreatedResources, 1)
|
||||
require.Equal(t, "pre-delete-hook", ctrl.kubectl.(*MockKubectl).CreatedResources[0].GetName())
|
||||
})
|
||||
|
||||
t.Run("PostDelete_HookIsCreated", func(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
app.SetPostDeleteFinalizer()
|
||||
@@ -1148,6 +1223,41 @@ func TestFinalizeAppDeletion(t *testing.T) {
|
||||
require.Equal(t, "post-delete-hook", ctrl.kubectl.(*MockKubectl).CreatedResources[0].GetName())
|
||||
})
|
||||
|
||||
t.Run("PreDelete_HookIsExecuted", func(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
app.SetPreDeleteFinalizer()
|
||||
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
|
||||
liveHook := &unstructured.Unstructured{Object: newFakePreDeleteHook()}
|
||||
require.NoError(t, unstructured.SetNestedField(liveHook.Object, "Succeeded", "status", "phase"))
|
||||
ctrl := newFakeController(context.Background(), &fakeData{
|
||||
manifestResponses: []*apiclient.ManifestResponse{{
|
||||
Manifests: []string{fakePreDeleteHook},
|
||||
}},
|
||||
apps: []runtime.Object{app, &defaultProj},
|
||||
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
|
||||
kube.GetResourceKey(liveHook): liveHook,
|
||||
},
|
||||
}, nil)
|
||||
|
||||
patched := false
|
||||
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
|
||||
defaultReactor := fakeAppCs.ReactionChain[0]
|
||||
fakeAppCs.ReactionChain = nil
|
||||
fakeAppCs.AddReactor("get", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
return defaultReactor.React(action)
|
||||
})
|
||||
fakeAppCs.AddReactor("patch", "*", func(_ kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
patched = true
|
||||
return true, &v1alpha1.Application{}, nil
|
||||
})
|
||||
err := ctrl.finalizeApplicationDeletion(app, func(_ string) ([]*v1alpha1.Cluster, error) {
|
||||
return []*v1alpha1.Cluster{}, nil
|
||||
})
|
||||
require.NoError(t, err)
|
||||
// finalizer is removed
|
||||
assert.True(t, patched)
|
||||
})
|
||||
|
||||
t.Run("PostDelete_HookIsExecuted", func(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
app.SetPostDeleteFinalizer()
|
||||
@@ -2192,6 +2302,93 @@ func TestFinalizeProjectDeletion_DoesNotHaveApplications(t *testing.T) {
|
||||
}, receivedPatch)
|
||||
}
|
||||
|
||||
func TestFinalizeProjectDeletion_HasApplicationInOtherNamespace(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
app.Namespace = "team-a"
|
||||
proj := &v1alpha1.AppProject{
|
||||
ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace},
|
||||
Spec: v1alpha1.AppProjectSpec{
|
||||
SourceNamespaces: []string{"team-a"},
|
||||
},
|
||||
}
|
||||
ctrl := newFakeController(t.Context(), &fakeData{
|
||||
apps: []runtime.Object{app, proj},
|
||||
applicationNamespaces: []string{"team-a"},
|
||||
}, nil)
|
||||
|
||||
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
|
||||
patched := false
|
||||
fakeAppCs.PrependReactor("patch", "*", func(_ kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
patched = true
|
||||
return true, &v1alpha1.AppProject{}, nil
|
||||
})
|
||||
|
||||
err := ctrl.finalizeProjectDeletion(proj)
|
||||
require.NoError(t, err)
|
||||
assert.False(t, patched)
|
||||
}
|
||||
|
||||
func TestFinalizeProjectDeletion_IgnoresAppsInUnmonitoredNamespace(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
app.Namespace = "team-b"
|
||||
proj := &v1alpha1.AppProject{
|
||||
ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace},
|
||||
}
|
||||
ctrl := newFakeController(t.Context(), &fakeData{
|
||||
apps: []runtime.Object{app, proj},
|
||||
applicationNamespaces: []string{"team-a"},
|
||||
}, nil)
|
||||
|
||||
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
|
||||
receivedPatch := map[string]any{}
|
||||
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
if patchAction, ok := action.(kubetesting.PatchAction); ok {
|
||||
require.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
|
||||
}
|
||||
return true, &v1alpha1.AppProject{}, nil
|
||||
})
|
||||
|
||||
err := ctrl.finalizeProjectDeletion(proj)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, map[string]any{
|
||||
"metadata": map[string]any{
|
||||
"finalizers": nil,
|
||||
},
|
||||
}, receivedPatch)
|
||||
}
|
||||
|
||||
func TestFinalizeProjectDeletion_IgnoresAppsNotPermittedByProject(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
app.Namespace = "team-b"
|
||||
proj := &v1alpha1.AppProject{
|
||||
ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace},
|
||||
Spec: v1alpha1.AppProjectSpec{
|
||||
SourceNamespaces: []string{"team-a"},
|
||||
},
|
||||
}
|
||||
ctrl := newFakeController(t.Context(), &fakeData{
|
||||
apps: []runtime.Object{app, proj},
|
||||
applicationNamespaces: []string{"team-a", "team-b"},
|
||||
}, nil)
|
||||
|
||||
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
|
||||
receivedPatch := map[string]any{}
|
||||
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
if patchAction, ok := action.(kubetesting.PatchAction); ok {
|
||||
require.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
|
||||
}
|
||||
return true, &v1alpha1.AppProject{}, nil
|
||||
})
|
||||
|
||||
err := ctrl.finalizeProjectDeletion(proj)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, map[string]any{
|
||||
"metadata": map[string]any{
|
||||
"finalizers": nil,
|
||||
},
|
||||
}, receivedPatch)
|
||||
}
|
||||
|
||||
func TestProcessRequestedAppOperation_FailedNoRetries(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
app.Spec.Project = "default"
|
||||
@@ -2436,6 +2633,41 @@ func TestProcessRequestedAppOperation_Successful(t *testing.T) {
|
||||
assert.Equal(t, CompareWithLatestForceResolve, level)
|
||||
}
|
||||
|
||||
func TestProcessRequestedAppAutomatedOperation_Successful(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
app.Spec.Project = "default"
|
||||
app.Operation = &v1alpha1.Operation{
|
||||
Sync: &v1alpha1.SyncOperation{},
|
||||
InitiatedBy: v1alpha1.OperationInitiator{
|
||||
Automated: true,
|
||||
},
|
||||
}
|
||||
ctrl := newFakeController(t.Context(), &fakeData{
|
||||
apps: []runtime.Object{app, &defaultProj},
|
||||
manifestResponses: []*apiclient.ManifestResponse{{
|
||||
Manifests: []string{},
|
||||
}},
|
||||
}, nil)
|
||||
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
|
||||
receivedPatch := map[string]any{}
|
||||
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
if patchAction, ok := action.(kubetesting.PatchAction); ok {
|
||||
require.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
|
||||
}
|
||||
return true, &v1alpha1.Application{}, nil
|
||||
})
|
||||
|
||||
ctrl.processRequestedAppOperation(app)
|
||||
|
||||
phase, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "phase")
|
||||
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
|
||||
assert.Equal(t, string(synccommon.OperationSucceeded), phase)
|
||||
assert.Equal(t, "successfully synced (no more tasks)", message)
|
||||
ok, level := ctrl.isRefreshRequested(ctrl.toAppKey(app.Name))
|
||||
assert.True(t, ok)
|
||||
assert.Equal(t, CompareWithLatest, level)
|
||||
}
|
||||
|
||||
func TestProcessRequestedAppOperation_SyncTimeout(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
@@ -3037,46 +3269,3 @@ func TestSelfHealRemainingBackoff(t *testing.T) {
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSelfHealBackoffCooldownElapsed(t *testing.T) {
	cooldown := time.Second * 30
	ctrl := newFakeController(t.Context(), &fakeData{}, nil)
	ctrl.selfHealBackoffCooldown = cooldown

	app := &v1alpha1.Application{
		Status: v1alpha1.ApplicationStatus{
			OperationState: &v1alpha1.OperationState{
				Phase: synccommon.OperationSucceeded,
			},
		},
	}

	t.Run("operation not completed", func(t *testing.T) {
		app := app.DeepCopy()
		app.Status.OperationState.FinishedAt = nil
		elapsed := ctrl.selfHealBackoffCooldownElapsed(app)
		assert.True(t, elapsed)
	})

	t.Run("successful operation finished after cooldown", func(t *testing.T) {
		app := app.DeepCopy()
		app.Status.OperationState.FinishedAt = &metav1.Time{Time: time.Now().Add(-cooldown)}
		elapsed := ctrl.selfHealBackoffCooldownElapsed(app)
		assert.True(t, elapsed)
	})

	t.Run("unsuccessful operation finished after cooldown", func(t *testing.T) {
		app := app.DeepCopy()
		app.Status.OperationState.Phase = synccommon.OperationFailed
		app.Status.OperationState.FinishedAt = &metav1.Time{Time: time.Now().Add(-cooldown)}
		elapsed := ctrl.selfHealBackoffCooldownElapsed(app)
		assert.False(t, elapsed)
	})

	t.Run("successful operation finished before cooldown", func(t *testing.T) {
		app := app.DeepCopy()
		app.Status.OperationState.FinishedAt = &metav1.Time{Time: time.Now()}
		elapsed := ctrl.selfHealBackoffCooldownElapsed(app)
		assert.False(t, elapsed)
	})
}
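
Taken together, these subtests pin down the cooldown rule: a new self-heal attempt is allowed when the previous operation has no recorded finish time, or when it succeeded and finished at least selfHealBackoffCooldown ago; a failed or too-recent operation blocks it. A minimal, self-contained sketch of that predicate (illustrative only, with stand-in types rather than the controller's actual Application/OperationState structs):

package main

import (
	"fmt"
	"time"
)

type opState struct {
	succeeded  bool
	finishedAt *time.Time // nil while the operation is still running
}

// cooldownElapsed mirrors the behaviour asserted above: no finish time means allowed,
// a successful operation older than the cooldown means allowed, anything else blocks.
func cooldownElapsed(op opState, cooldown time.Duration) bool {
	if op.finishedAt == nil {
		return true
	}
	if !op.succeeded {
		return false
	}
	return time.Since(*op.finishedAt) >= cooldown
}

func main() {
	cooldown := 30 * time.Second
	old := time.Now().Add(-cooldown)
	recent := time.Now()

	fmt.Println(cooldownElapsed(opState{succeeded: true}, cooldown))                      // true: operation not completed
	fmt.Println(cooldownElapsed(opState{succeeded: true, finishedAt: &old}, cooldown))    // true: succeeded, cooldown elapsed
	fmt.Println(cooldownElapsed(opState{succeeded: false, finishedAt: &old}, cooldown))   // false: last operation failed
	fmt.Println(cooldownElapsed(opState{succeeded: true, finishedAt: &recent}, cooldown)) // false: finished too recently
}
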
controller/cache/cache.go (vendored, 42 changed lines)
@@ -270,7 +270,7 @@ func (c *liveStateCache) loadCacheSettings() (*cacheSettings, error) {
|
||||
return &cacheSettings{clusterSettings, appInstanceLabelKey, appv1.TrackingMethod(trackingMethod), installationID, resourceUpdatesOverrides, ignoreResourceUpdatesEnabled}, nil
|
||||
}
|
||||
|
||||
func asResourceNode(r *clustercache.Resource) appv1.ResourceNode {
|
||||
func asResourceNode(r *clustercache.Resource, namespaceResources map[kube.ResourceKey]*clustercache.Resource) appv1.ResourceNode {
|
||||
gv, err := schema.ParseGroupVersion(r.Ref.APIVersion)
|
||||
if err != nil {
|
||||
gv = schema.GroupVersion{}
|
||||
@@ -278,14 +278,30 @@ func asResourceNode(r *clustercache.Resource) appv1.ResourceNode {
|
||||
parentRefs := make([]appv1.ResourceRef, len(r.OwnerRefs))
|
||||
for i, ownerRef := range r.OwnerRefs {
|
||||
ownerGvk := schema.FromAPIVersionAndKind(ownerRef.APIVersion, ownerRef.Kind)
|
||||
parentRefs[i] = appv1.ResourceRef{
|
||||
Group: ownerGvk.Group,
|
||||
Kind: ownerGvk.Kind,
|
||||
Version: ownerGvk.Version,
|
||||
Namespace: r.Ref.Namespace,
|
||||
Name: ownerRef.Name,
|
||||
UID: string(ownerRef.UID),
|
||||
parentRef := appv1.ResourceRef{
|
||||
Group: ownerGvk.Group,
|
||||
Kind: ownerGvk.Kind,
|
||||
Version: ownerGvk.Version,
|
||||
Name: ownerRef.Name,
|
||||
UID: string(ownerRef.UID),
|
||||
}
|
||||
|
||||
// Look up the parent in namespace resources
|
||||
// If found, it's namespaced and we use its namespace
|
||||
// If not found, it must be cluster-scoped (namespace = "")
|
||||
parentKey := kube.NewResourceKey(ownerGvk.Group, ownerGvk.Kind, r.Ref.Namespace, ownerRef.Name)
|
||||
if parent, ok := namespaceResources[parentKey]; ok {
|
||||
parentRef.Namespace = parent.Ref.Namespace
|
||||
} else {
|
||||
// Not in namespace => must be cluster-scoped
|
||||
parentRef.Namespace = ""
|
||||
// Debug logging for cross-namespace relationships
|
||||
if r.Ref.Namespace != "" {
|
||||
log.Debugf("Cross-namespace ref: %s/%s in namespace %s has parent %s/%s (cluster-scoped)",
|
||||
r.Ref.Kind, r.Ref.Name, r.Ref.Namespace, ownerGvk.Kind, ownerRef.Name)
|
||||
}
|
||||
}
|
||||
parentRefs[i] = parentRef
|
||||
}
|
||||
var resHealth *appv1.HealthStatus
|
||||
resourceInfo := resInfo(r)
|
||||
@@ -673,7 +689,7 @@ func (c *liveStateCache) IterateHierarchyV2(server *appv1.Cluster, keys []kube.R
|
||||
return err
|
||||
}
|
||||
clusterInfo.IterateHierarchyV2(keys, func(resource *clustercache.Resource, namespaceResources map[kube.ResourceKey]*clustercache.Resource) bool {
|
||||
return action(asResourceNode(resource), getApp(resource, namespaceResources))
|
||||
return action(asResourceNode(resource, namespaceResources), getApp(resource, namespaceResources))
|
||||
})
|
||||
return nil
|
||||
}
|
||||
@@ -698,9 +714,15 @@ func (c *liveStateCache) GetNamespaceTopLevelResources(server *appv1.Cluster, na
|
||||
return nil, err
|
||||
}
|
||||
resources := clusterInfo.FindResources(namespace, clustercache.TopLevelResource)
|
||||
|
||||
// Get all namespace resources for parent lookups
|
||||
namespaceResources := clusterInfo.FindResources(namespace, func(_ *clustercache.Resource) bool {
|
||||
return true
|
||||
})
|
||||
|
||||
res := make(map[kube.ResourceKey]appv1.ResourceNode)
|
||||
for k, r := range resources {
|
||||
res[k] = asResourceNode(r)
|
||||
res[k] = asResourceNode(r, namespaceResources)
|
||||
}
|
||||
return res, nil
|
||||
}
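
The parent-namespace rule introduced in this hunk can be restated as a small standalone sketch; the key type and map below are simplified stand-ins for kube.ResourceKey and the clustercache resource map, so this is illustrative rather than the actual cache code:

package main

import "fmt"

// resourceKey is a simplified stand-in for kube.ResourceKey.
type resourceKey struct{ group, kind, namespace, name string }

// parentNamespace mirrors the lookup in asResourceNode: an owner found among the
// child's namespace-local resources keeps that namespace; an owner that is not
// there is assumed to be cluster-scoped and gets an empty namespace.
func parentNamespace(childNS, ownerGroup, ownerKind, ownerName string, nsResources map[resourceKey]bool) string {
	if nsResources[resourceKey{ownerGroup, ownerKind, childNS, ownerName}] {
		return childNS
	}
	return ""
}

func main() {
	nsResources := map[resourceKey]bool{
		{"apps", "Deployment", "my-namespace", "my-deployment"}: true,
	}
	// Namespaced parent found in the same namespace keeps it.
	fmt.Println(parentNamespace("my-namespace", "apps", "Deployment", "my-deployment", nsResources)) // my-namespace
	// A ClusterRole is not in the namespace map, so it is treated as cluster-scoped.
	fmt.Println(parentNamespace("my-namespace", "rbac.authorization.k8s.io", "ClusterRole", "my-cluster-role", nsResources) == "") // true
}
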
controller/cache/cache_test.go (vendored, 112 changed lines)
@@ -323,7 +323,7 @@ func Test_asResourceNode_owner_refs(t *testing.T) {
|
||||
CreationTimestamp: nil,
|
||||
Info: nil,
|
||||
Resource: nil,
|
||||
})
|
||||
}, nil)
|
||||
expected := appv1.ResourceNode{
|
||||
ResourceRef: appv1.ResourceRef{
|
||||
Version: "v1",
|
||||
@@ -842,3 +842,113 @@ func Test_ownerRefGV(t *testing.T) {
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_asResourceNode_cross_namespace_parent(t *testing.T) {
|
||||
// Test that a namespaced resource with a cluster-scoped parent
|
||||
// correctly sets the parent namespace to empty string
|
||||
|
||||
// Create a Role (namespaced) with an owner reference to a ClusterRole (cluster-scoped)
|
||||
roleResource := &cache.Resource{
|
||||
Ref: corev1.ObjectReference{
|
||||
APIVersion: "rbac.authorization.k8s.io/v1",
|
||||
Kind: "Role",
|
||||
Namespace: "my-namespace",
|
||||
Name: "my-role",
|
||||
},
|
||||
OwnerRefs: []metav1.OwnerReference{
|
||||
{
|
||||
APIVersion: "rbac.authorization.k8s.io/v1",
|
||||
Kind: "ClusterRole",
|
||||
Name: "my-cluster-role",
|
||||
UID: "cluster-role-uid",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
// Create namespace resources map (ClusterRole won't be in here since it's cluster-scoped)
|
||||
namespaceResources := map[kube.ResourceKey]*cache.Resource{
|
||||
// Add some other namespace resources but not the ClusterRole
|
||||
{
|
||||
Group: "rbac.authorization.k8s.io",
|
||||
Kind: "Role",
|
||||
Namespace: "my-namespace",
|
||||
Name: "other-role",
|
||||
}: {
|
||||
Ref: corev1.ObjectReference{
|
||||
APIVersion: "rbac.authorization.k8s.io/v1",
|
||||
Kind: "Role",
|
||||
Namespace: "my-namespace",
|
||||
Name: "other-role",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
resNode := asResourceNode(roleResource, namespaceResources)
|
||||
|
||||
// The parent reference should have empty namespace since ClusterRole is cluster-scoped
|
||||
assert.Len(t, resNode.ParentRefs, 1)
|
||||
assert.Equal(t, "ClusterRole", resNode.ParentRefs[0].Kind)
|
||||
assert.Equal(t, "my-cluster-role", resNode.ParentRefs[0].Name)
|
||||
assert.Empty(t, resNode.ParentRefs[0].Namespace, "ClusterRole parent should have empty namespace")
|
||||
}
|
||||
|
||||
func Test_asResourceNode_same_namespace_parent(t *testing.T) {
|
||||
// Test that a namespaced resource with a namespaced parent in the same namespace
|
||||
// correctly sets the parent namespace
|
||||
|
||||
// Create a ReplicaSet with an owner reference to a Deployment (both namespaced)
|
||||
rsResource := &cache.Resource{
|
||||
Ref: corev1.ObjectReference{
|
||||
APIVersion: "apps/v1",
|
||||
Kind: "ReplicaSet",
|
||||
Namespace: "my-namespace",
|
||||
Name: "my-rs",
|
||||
},
|
||||
OwnerRefs: []metav1.OwnerReference{
|
||||
{
|
||||
APIVersion: "apps/v1",
|
||||
Kind: "Deployment",
|
||||
Name: "my-deployment",
|
||||
UID: "deployment-uid",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
// Create namespace resources map with the Deployment
|
||||
deploymentKey := kube.ResourceKey{
|
||||
Group: "apps",
|
||||
Kind: "Deployment",
|
||||
Namespace: "my-namespace",
|
||||
Name: "my-deployment",
|
||||
}
|
||||
namespaceResources := map[kube.ResourceKey]*cache.Resource{
|
||||
deploymentKey: {
|
||||
Ref: corev1.ObjectReference{
|
||||
APIVersion: "apps/v1",
|
||||
Kind: "Deployment",
|
||||
Namespace: "my-namespace",
|
||||
Name: "my-deployment",
|
||||
UID: "deployment-uid",
|
||||
},
|
||||
Resource: &unstructured.Unstructured{
|
||||
Object: map[string]any{
|
||||
"apiVersion": "apps/v1",
|
||||
"kind": "Deployment",
|
||||
"metadata": map[string]any{
|
||||
"name": "my-deployment",
|
||||
"namespace": "my-namespace",
|
||||
"uid": "deployment-uid",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
resNode := asResourceNode(rsResource, namespaceResources)
|
||||
|
||||
// The parent reference should have the same namespace
|
||||
assert.Len(t, resNode.ParentRefs, 1)
|
||||
assert.Equal(t, "Deployment", resNode.ParentRefs[0].Kind)
|
||||
assert.Equal(t, "my-deployment", resNode.ParentRefs[0].Name)
|
||||
assert.Equal(t, "my-namespace", resNode.ParentRefs[0].Namespace, "Deployment parent should have same namespace")
|
||||
}
|
||||
|
||||
controller/cache/info.go (vendored, 14 changed lines)
@@ -225,9 +225,19 @@ func populateIngressInfo(un *unstructured.Unstructured, res *ResourceInfo) {
|
||||
if res.NetworkingInfo != nil {
|
||||
urls = res.NetworkingInfo.ExternalURLs
|
||||
}
|
||||
for url := range urlsSet {
|
||||
urls = append(urls, url)
|
||||
|
||||
enableDefaultExternalURLs := true
|
||||
if ignoreVal, ok := un.GetAnnotations()[common.AnnotationKeyIgnoreDefaultLinks]; ok {
|
||||
if ignoreDefaultLinks, err := strconv.ParseBool(ignoreVal); err == nil {
|
||||
enableDefaultExternalURLs = !ignoreDefaultLinks
|
||||
}
|
||||
}
|
||||
if enableDefaultExternalURLs {
|
||||
for url := range urlsSet {
|
||||
urls = append(urls, url)
|
||||
}
|
||||
}
|
||||
|
||||
res.NetworkingInfo = &v1alpha1.ResourceNetworkingInfo{TargetRefs: targets, Ingress: ingress, ExternalURLs: urls}
|
||||
}
|
||||
|
||||
|
||||
controller/cache/info_test.go (vendored, 58 changed lines)
@@ -126,6 +126,40 @@ var (
|
||||
ingress:
|
||||
- ip: 107.178.210.11`)
|
||||
|
||||
testIgnoreDefaultLinksIngress = strToUnstructured(`
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: helm-guestbook
|
||||
namespace: default
|
||||
uid: "4"
|
||||
annotations:
|
||||
link.argocd.argoproj.io/external-link: http://my-grafana.example.com/ingress-link
|
||||
argocd.argoproj.io/ignore-default-links: "true"
|
||||
spec:
|
||||
backend:
|
||||
serviceName: not-found-service
|
||||
servicePort: 443
|
||||
rules:
|
||||
- host: helm-guestbook.example.com
|
||||
http:
|
||||
paths:
|
||||
- backend:
|
||||
serviceName: helm-guestbook
|
||||
servicePort: 443
|
||||
path: /
|
||||
- backend:
|
||||
serviceName: helm-guestbook
|
||||
servicePort: https
|
||||
path: /
|
||||
tls:
|
||||
- host: helm-guestbook.example.com
|
||||
secretName: my-tls-secret
|
||||
status:
|
||||
loadBalancer:
|
||||
ingress:
|
||||
- ip: 107.178.210.11`)
|
||||
|
||||
testIngressWildCardPath = strToUnstructured(`
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
@@ -1200,6 +1234,30 @@ func TestGetLinkAnnotatedIngressInfo(t *testing.T) {
|
||||
}, info.NetworkingInfo)
|
||||
}
|
||||
|
||||
func TestGetIgnoreDefaultLinksIngressInfo(t *testing.T) {
|
||||
info := &ResourceInfo{}
|
||||
populateNodeInfo(testIgnoreDefaultLinksIngress, info, []string{})
|
||||
assert.Empty(t, info.Info)
|
||||
sort.Slice(info.NetworkingInfo.TargetRefs, func(i, j int) bool {
|
||||
return info.NetworkingInfo.TargetRefs[i].Name < info.NetworkingInfo.TargetRefs[j].Name
|
||||
})
|
||||
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{
|
||||
Ingress: []corev1.LoadBalancerIngress{{IP: "107.178.210.11"}},
|
||||
TargetRefs: []v1alpha1.ResourceRef{{
|
||||
Namespace: "default",
|
||||
Group: "",
|
||||
Kind: kube.ServiceKind,
|
||||
Name: "helm-guestbook",
|
||||
}, {
|
||||
Namespace: "default",
|
||||
Group: "",
|
||||
Kind: kube.ServiceKind,
|
||||
Name: "not-found-service",
|
||||
}},
|
||||
ExternalURLs: []string{"http://my-grafana.example.com/ingress-link"},
|
||||
}, info.NetworkingInfo)
|
||||
}
|
||||
|
||||
func TestGetIngressInfoWildCardPath(t *testing.T) {
|
||||
info := &ResourceInfo{}
|
||||
populateNodeInfo(testIngressWildCardPath, info, []string{})
|
||||
|
||||
@@ -2,6 +2,8 @@ package controller
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/argoproj/gitops-engine/pkg/health"
|
||||
"github.com/argoproj/gitops-engine/pkg/sync/common"
|
||||
@@ -14,26 +16,33 @@ import (
|
||||
|
||||
"github.com/argoproj/argo-cd/v3/util/lua"
|
||||
|
||||
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
|
||||
appv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
|
||||
)
|
||||
|
||||
var (
|
||||
postDeleteHook = "PostDelete"
|
||||
postDeleteHooks = map[string]string{
|
||||
"argocd.argoproj.io/hook": postDeleteHook,
|
||||
type HookType string
|
||||
|
||||
const (
|
||||
PreDeleteHookType HookType = "PreDelete"
|
||||
PostDeleteHookType HookType = "PostDelete"
|
||||
)
|
||||
|
||||
var hookTypeAnnotations = map[HookType]map[string]string{
|
||||
PreDeleteHookType: {
|
||||
"argocd.argoproj.io/hook": string(PreDeleteHookType),
|
||||
"helm.sh/hook": "pre-delete",
|
||||
},
|
||||
PostDeleteHookType: {
|
||||
"argocd.argoproj.io/hook": string(PostDeleteHookType),
|
||||
"helm.sh/hook": "post-delete",
|
||||
}
|
||||
)
|
||||
|
||||
func isHook(obj *unstructured.Unstructured) bool {
|
||||
return hook.IsHook(obj) || isPostDeleteHook(obj)
|
||||
},
|
||||
}
|
||||
|
||||
func isPostDeleteHook(obj *unstructured.Unstructured) bool {
|
||||
func isHookOfType(obj *unstructured.Unstructured, hookType HookType) bool {
|
||||
if obj == nil || obj.GetAnnotations() == nil {
|
||||
return false
|
||||
}
|
||||
for k, v := range postDeleteHooks {
|
||||
|
||||
for k, v := range hookTypeAnnotations[hookType] {
|
||||
if val, ok := obj.GetAnnotations()[k]; ok && val == v {
|
||||
return true
|
||||
}
|
||||
@@ -41,11 +50,34 @@ func isPostDeleteHook(obj *unstructured.Unstructured) bool {
|
||||
return false
|
||||
}
|
||||
|
||||
func (ctrl *ApplicationController) executePostDeleteHooks(app *v1alpha1.Application, proj *v1alpha1.AppProject, liveObjs map[kube.ResourceKey]*unstructured.Unstructured, config *rest.Config, logCtx *log.Entry) (bool, error) {
|
||||
func isHook(obj *unstructured.Unstructured) bool {
|
||||
if hook.IsHook(obj) {
|
||||
return true
|
||||
}
|
||||
|
||||
for hookType := range hookTypeAnnotations {
|
||||
if isHookOfType(obj, hookType) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func isPreDeleteHook(obj *unstructured.Unstructured) bool {
|
||||
return isHookOfType(obj, PreDeleteHookType)
|
||||
}
|
||||
|
||||
func isPostDeleteHook(obj *unstructured.Unstructured) bool {
|
||||
return isHookOfType(obj, PostDeleteHookType)
|
||||
}
|
||||
|
||||
// executeHooks is a generic function to execute hooks of a specified type
|
||||
func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Application, proj *appv1.AppProject, liveObjs map[kube.ResourceKey]*unstructured.Unstructured, config *rest.Config, logCtx *log.Entry) (bool, error) {
|
||||
appLabelKey, err := ctrl.settingsMgr.GetAppInstanceLabelKey()
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
var revisions []string
|
||||
for _, src := range app.Spec.GetSources() {
|
||||
revisions = append(revisions, src.TargetRevision)
|
||||
@@ -55,44 +87,62 @@ func (ctrl *ApplicationController) executePostDeleteHooks(app *v1alpha1.Applicat
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
// Find existing hooks of the specified type
|
||||
runningHooks := map[kube.ResourceKey]*unstructured.Unstructured{}
|
||||
for key, obj := range liveObjs {
|
||||
if isPostDeleteHook(obj) {
|
||||
if isHookOfType(obj, hookType) {
|
||||
runningHooks[key] = obj
|
||||
}
|
||||
}
|
||||
|
||||
// Find expected hooks that need to be created
|
||||
expectedHook := map[kube.ResourceKey]*unstructured.Unstructured{}
|
||||
for _, obj := range targets {
|
||||
if obj.GetNamespace() == "" {
|
||||
obj.SetNamespace(app.Spec.Destination.Namespace)
|
||||
}
|
||||
if !isPostDeleteHook(obj) {
|
||||
if !isHookOfType(obj, hookType) {
|
||||
continue
|
||||
}
|
||||
if runningHook := runningHooks[kube.GetResourceKey(obj)]; runningHook == nil {
|
||||
expectedHook[kube.GetResourceKey(obj)] = obj
|
||||
}
|
||||
}
|
||||
|
||||
// Create hooks that don't exist yet
|
||||
createdCnt := 0
|
||||
for _, obj := range expectedHook {
|
||||
// Add app instance label so the hook can be tracked and cleaned up
|
||||
labels := obj.GetLabels()
|
||||
if labels == nil {
|
||||
labels = make(map[string]string)
|
||||
}
|
||||
labels[appLabelKey] = app.InstanceName(ctrl.namespace)
|
||||
obj.SetLabels(labels)
|
||||
|
||||
_, err = ctrl.kubectl.CreateResource(context.Background(), config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), obj, metav1.CreateOptions{})
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
createdCnt++
|
||||
}
|
||||
|
||||
if createdCnt > 0 {
|
||||
logCtx.Infof("Created %d post-delete hooks", createdCnt)
|
||||
logCtx.Infof("Created %d %s hooks", createdCnt, hookType)
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// Check health of running hooks
|
||||
resourceOverrides, err := ctrl.settingsMgr.GetResourceOverrides()
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
healthOverrides := lua.ResourceHealthOverrides(resourceOverrides)
|
||||
|
||||
progressingHooksCnt := 0
|
||||
progressingHooksCount := 0
|
||||
var failedHooks []string
|
||||
var failedHookObjects []*unstructured.Unstructured
|
||||
for _, obj := range runningHooks {
|
||||
hookHealth, err := health.GetResourceHealth(obj, healthOverrides)
|
||||
if err != nil {
|
||||
@@ -110,19 +160,37 @@ func (ctrl *ApplicationController) executePostDeleteHooks(app *v1alpha1.Applicat
|
||||
Status: health.HealthStatusHealthy,
|
||||
}
|
||||
}
|
||||
if hookHealth.Status == health.HealthStatusProgressing {
|
||||
progressingHooksCnt++
|
||||
switch hookHealth.Status {
|
||||
case health.HealthStatusProgressing:
|
||||
progressingHooksCount++
|
||||
case health.HealthStatusDegraded:
|
||||
failedHooks = append(failedHooks, fmt.Sprintf("%s/%s", obj.GetNamespace(), obj.GetName()))
|
||||
failedHookObjects = append(failedHookObjects, obj)
|
||||
}
|
||||
}
|
||||
if progressingHooksCnt > 0 {
|
||||
logCtx.Infof("Waiting for %d post-delete hooks to complete", progressingHooksCnt)
|
||||
|
||||
if len(failedHooks) > 0 {
|
||||
// Delete failed hooks to allow retry with potentially fixed hook definitions
|
||||
logCtx.Infof("Deleting %d failed %s hook(s) to allow retry", len(failedHookObjects), hookType)
|
||||
for _, obj := range failedHookObjects {
|
||||
err = ctrl.kubectl.DeleteResource(context.Background(), config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), metav1.DeleteOptions{})
|
||||
if err != nil {
|
||||
logCtx.WithError(err).Warnf("Failed to delete failed hook %s/%s", obj.GetNamespace(), obj.GetName())
|
||||
}
|
||||
}
|
||||
return false, fmt.Errorf("%s hook(s) failed: %s", hookType, strings.Join(failedHooks, ", "))
|
||||
}
|
||||
|
||||
if progressingHooksCount > 0 {
|
||||
logCtx.Infof("Waiting for %d %s hooks to complete", progressingHooksCount, hookType)
|
||||
return false, nil
|
||||
}
|
||||
|
||||
return true, nil
|
||||
}
|
||||
|
||||
func (ctrl *ApplicationController) cleanupPostDeleteHooks(liveObjs map[kube.ResourceKey]*unstructured.Unstructured, config *rest.Config, logCtx *log.Entry) (bool, error) {
|
||||
// cleanupHooks is a generic function to clean up hooks of a specified type
|
||||
func (ctrl *ApplicationController) cleanupHooks(hookType HookType, liveObjs map[kube.ResourceKey]*unstructured.Unstructured, config *rest.Config, logCtx *log.Entry) (bool, error) {
|
||||
resourceOverrides, err := ctrl.settingsMgr.GetResourceOverrides()
|
||||
if err != nil {
|
||||
return false, err
|
||||
@@ -132,8 +200,10 @@ func (ctrl *ApplicationController) cleanupPostDeleteHooks(liveObjs map[kube.Reso
|
||||
pendingDeletionCount := 0
|
||||
aggregatedHealth := health.HealthStatusHealthy
|
||||
var hooks []*unstructured.Unstructured
|
||||
|
||||
// Collect hooks and determine overall health
|
||||
for _, obj := range liveObjs {
|
||||
if !isPostDeleteHook(obj) {
|
||||
if !isHookOfType(obj, hookType) {
|
||||
continue
|
||||
}
|
||||
hookHealth, err := health.GetResourceHealth(obj, healthOverrides)
|
||||
@@ -151,25 +221,60 @@ func (ctrl *ApplicationController) cleanupPostDeleteHooks(liveObjs map[kube.Reso
|
||||
hooks = append(hooks, obj)
|
||||
}
|
||||
|
||||
// Process hooks for deletion
|
||||
for _, obj := range hooks {
|
||||
for _, policy := range hook.DeletePolicies(obj) {
|
||||
if (policy != common.HookDeletePolicyHookFailed || aggregatedHealth != health.HealthStatusDegraded) && (policy != common.HookDeletePolicyHookSucceeded || aggregatedHealth != health.HealthStatusHealthy) {
|
||||
continue
|
||||
deletePolicies := hook.DeletePolicies(obj)
|
||||
shouldDelete := false
|
||||
|
||||
if len(deletePolicies) == 0 {
|
||||
// If no delete policy is specified, always delete hooks during cleanup phase
|
||||
shouldDelete = true
|
||||
} else {
|
||||
// Check if any delete policy matches the current hook state
|
||||
for _, policy := range deletePolicies {
|
||||
if (policy == common.HookDeletePolicyHookFailed && aggregatedHealth == health.HealthStatusDegraded) ||
|
||||
(policy == common.HookDeletePolicyHookSucceeded && aggregatedHealth == health.HealthStatusHealthy) {
|
||||
shouldDelete = true
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if shouldDelete {
|
||||
pendingDeletionCount++
|
||||
if obj.GetDeletionTimestamp() != nil {
|
||||
continue
|
||||
}
|
||||
logCtx.Infof("Deleting post-delete hook %s/%s", obj.GetNamespace(), obj.GetName())
|
||||
logCtx.Infof("Deleting %s hook %s/%s", hookType, obj.GetNamespace(), obj.GetName())
|
||||
err = ctrl.kubectl.DeleteResource(context.Background(), config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), metav1.DeleteOptions{})
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if pendingDeletionCount > 0 {
|
||||
logCtx.Infof("Waiting for %d post-delete hooks to be deleted", pendingDeletionCount)
|
||||
logCtx.Infof("Waiting for %d %s hooks to be deleted", pendingDeletionCount, hookType)
|
||||
return false, nil
|
||||
}
|
||||
|
||||
return true, nil
|
||||
}
|
||||
|
||||
// Execute and cleanup hooks for pre-delete and post-delete operations
|
||||
|
||||
func (ctrl *ApplicationController) executePreDeleteHooks(app *appv1.Application, proj *appv1.AppProject, liveObjs map[kube.ResourceKey]*unstructured.Unstructured, config *rest.Config, logCtx *log.Entry) (bool, error) {
|
||||
return ctrl.executeHooks(PreDeleteHookType, app, proj, liveObjs, config, logCtx)
|
||||
}
|
||||
|
||||
func (ctrl *ApplicationController) cleanupPreDeleteHooks(liveObjs map[kube.ResourceKey]*unstructured.Unstructured, config *rest.Config, logCtx *log.Entry) (bool, error) {
|
||||
return ctrl.cleanupHooks(PreDeleteHookType, liveObjs, config, logCtx)
|
||||
}
|
||||
|
||||
func (ctrl *ApplicationController) executePostDeleteHooks(app *appv1.Application, proj *appv1.AppProject, liveObjs map[kube.ResourceKey]*unstructured.Unstructured, config *rest.Config, logCtx *log.Entry) (bool, error) {
|
||||
return ctrl.executeHooks(PostDeleteHookType, app, proj, liveObjs, config, logCtx)
|
||||
}
|
||||
|
||||
func (ctrl *ApplicationController) cleanupPostDeleteHooks(liveObjs map[kube.ResourceKey]*unstructured.Unstructured, config *rest.Config, logCtx *log.Entry) (bool, error) {
|
||||
return ctrl.cleanupHooks(PostDeleteHookType, liveObjs, config, logCtx)
|
||||
}
|
||||
|
||||
controller/hook_test.go (new file, 173 lines)
@@ -0,0 +1,173 @@
|
||||
package controller
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
)
|
||||
|
||||
func TestIsHookOfType(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
hookType HookType
|
||||
annot map[string]string
|
||||
expected bool
|
||||
}{
|
||||
{
|
||||
name: "ArgoCD PreDelete hook",
|
||||
hookType: PreDeleteHookType,
|
||||
annot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "Helm PreDelete hook",
|
||||
hookType: PreDeleteHookType,
|
||||
annot: map[string]string{"helm.sh/hook": "pre-delete"},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "ArgoCD PostDelete hook",
|
||||
hookType: PostDeleteHookType,
|
||||
annot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "Helm PostDelete hook",
|
||||
hookType: PostDeleteHookType,
|
||||
annot: map[string]string{"helm.sh/hook": "post-delete"},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "Not a hook",
|
||||
hookType: PreDeleteHookType,
|
||||
annot: map[string]string{"some-other": "annotation"},
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
name: "Wrong hook type",
|
||||
hookType: PreDeleteHookType,
|
||||
annot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
name: "Nil annotations",
|
||||
hookType: PreDeleteHookType,
|
||||
annot: nil,
|
||||
expected: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
obj := &unstructured.Unstructured{}
|
||||
obj.SetAnnotations(tt.annot)
|
||||
result := isHookOfType(obj, tt.hookType)
|
||||
assert.Equal(t, tt.expected, result)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsHook(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
annot map[string]string
|
||||
expected bool
|
||||
}{
|
||||
{
|
||||
name: "ArgoCD PreDelete hook",
|
||||
annot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "ArgoCD PostDelete hook",
|
||||
annot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "ArgoCD PreSync hook",
|
||||
annot: map[string]string{"argocd.argoproj.io/hook": "PreSync"},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "Not a hook",
|
||||
annot: map[string]string{"some-other": "annotation"},
|
||||
expected: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
obj := &unstructured.Unstructured{}
|
||||
obj.SetAnnotations(tt.annot)
|
||||
result := isHook(obj)
|
||||
assert.Equal(t, tt.expected, result)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsPreDeleteHook(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
annot map[string]string
|
||||
expected bool
|
||||
}{
|
||||
{
|
||||
name: "ArgoCD PreDelete hook",
|
||||
annot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "Helm PreDelete hook",
|
||||
annot: map[string]string{"helm.sh/hook": "pre-delete"},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "ArgoCD PostDelete hook",
|
||||
annot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
|
||||
expected: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
obj := &unstructured.Unstructured{}
|
||||
obj.SetAnnotations(tt.annot)
|
||||
result := isPreDeleteHook(obj)
|
||||
assert.Equal(t, tt.expected, result)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsPostDeleteHook(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
annot map[string]string
|
||||
expected bool
|
||||
}{
|
||||
{
|
||||
name: "ArgoCD PostDelete hook",
|
||||
annot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "Helm PostDelete hook",
|
||||
annot: map[string]string{"helm.sh/hook": "post-delete"},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "ArgoCD PreDelete hook",
|
||||
annot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
|
||||
expected: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
obj := &unstructured.Unstructured{}
|
||||
obj.SetAnnotations(tt.annot)
|
||||
result := isPostDeleteHook(obj)
|
||||
assert.Equal(t, tt.expected, result)
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -366,6 +366,13 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
|
||||
return "", "", errors, nil
|
||||
}
|
||||
paths := []*commitclient.PathDetails{pathDetails}
|
||||
logCtx = logCtx.WithFields(log.Fields{"drySha": targetRevision})
|
||||
// De-dupe: if the drySha was already hydrated, log a debug message and return the data from the last successful hydration run.
|
||||
// We only inspect one app. If apps have been added/removed, that will be handled on the next DRY commit.
|
||||
if apps[0].Status.SourceHydrator.LastSuccessfulOperation != nil && targetRevision == apps[0].Status.SourceHydrator.LastSuccessfulOperation.DrySHA {
|
||||
logCtx.Debug("Skipping hydration since the DRY commit was already hydrated")
|
||||
return targetRevision, apps[0].Status.SourceHydrator.LastSuccessfulOperation.HydratedSHA, nil, nil
|
||||
}
|
||||
|
||||
eg, ctx := errgroup.WithContext(context.Background())
|
||||
var mu sync.Mutex
|
||||
@@ -456,6 +463,10 @@ func (h *Hydrator) getManifests(ctx context.Context, app *appv1.Application, tar
|
||||
RepoURL: app.Spec.SourceHydrator.DrySource.RepoURL,
|
||||
Path: app.Spec.SourceHydrator.DrySource.Path,
|
||||
TargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
|
||||
Helm: app.Spec.SourceHydrator.DrySource.Helm,
|
||||
Kustomize: app.Spec.SourceHydrator.DrySource.Kustomize,
|
||||
Directory: app.Spec.SourceHydrator.DrySource.Directory,
|
||||
Plugin: app.Spec.SourceHydrator.DrySource.Plugin,
|
||||
}
|
||||
if targetRevision == "" {
|
||||
targetRevision = app.Spec.SourceHydrator.DrySource.TargetRevision
|
||||
|
||||
@@ -1094,3 +1094,36 @@ func TestHydrator_getManifests_GetRepoObjsError(t *testing.T) {
|
||||
assert.Empty(t, rev)
|
||||
assert.Nil(t, pathDetails)
|
||||
}
|
||||
|
||||
func TestHydrator_hydrate_DeDupe_Success(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
d := mocks.NewDependencies(t)
|
||||
h := &Hydrator{dependencies: d}
|
||||
|
||||
app1 := newTestApp("app1")
|
||||
app2 := newTestApp("app2")
|
||||
lastSuccessfulOperation := &v1alpha1.SuccessfulHydrateOperation{
|
||||
DrySHA: "sha123",
|
||||
HydratedSHA: "hydrated123",
|
||||
}
|
||||
app1.Status.SourceHydrator = v1alpha1.SourceHydratorStatus{
|
||||
LastSuccessfulOperation: lastSuccessfulOperation,
|
||||
}
|
||||
|
||||
apps := []*v1alpha1.Application{app1, app2}
|
||||
proj := newTestProject()
|
||||
projects := map[string]*v1alpha1.AppProject{app1.Spec.Project: proj}
|
||||
|
||||
// Asserting .Once() confirms that we only make one call to repo-server to get the last hydrated DRY
|
||||
// sha, and then we quit early.
|
||||
d.On("GetRepoObjs", mock.Anything, app1, app1.Spec.SourceHydrator.GetDrySource(), "main", proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil).Once()
|
||||
logCtx := log.NewEntry(log.StandardLogger())
|
||||
|
||||
sha, hydratedSha, errs, err := h.hydrate(logCtx, apps, projects)
|
||||
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "sha123", sha)
|
||||
assert.Equal(t, "hydrated123", hydratedSha)
|
||||
assert.Empty(t, errs)
|
||||
}
|
||||
|
||||
@@ -95,6 +95,7 @@ type comparisonResult struct {
|
||||
timings map[string]time.Duration
|
||||
diffResultList *diff.DiffResultList
|
||||
hasPostDeleteHooks bool
|
||||
hasPreDeleteHooks bool
|
||||
// revisionsMayHaveChanges indicates if there are any possibilities that the revisions contain changes
|
||||
revisionsMayHaveChanges bool
|
||||
}
|
||||
@@ -254,9 +255,6 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
|
||||
|
||||
appNamespace := app.Spec.Destination.Namespace
|
||||
apiVersions := argo.APIResourcesToStrings(apiResources, true)
|
||||
if !sendRuntimeState {
|
||||
appNamespace = ""
|
||||
}
|
||||
|
||||
updateRevisions := processManifestGeneratePathsEnabled &&
|
||||
// updating revisions result is not required if automated sync is not enabled
|
||||
@@ -272,7 +270,7 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
|
||||
Revision: revision,
|
||||
SyncedRevision: syncedRevision,
|
||||
NoRevisionCache: noRevisionCache,
|
||||
Paths: path.GetAppRefreshPaths(app),
|
||||
Paths: path.GetSourceRefreshPaths(app, source),
|
||||
AppLabelKey: appLabelKey,
|
||||
AppName: app.InstanceName(m.namespace),
|
||||
Namespace: appNamespace,
|
||||
@@ -765,14 +763,26 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
|
||||
}
|
||||
}
|
||||
}
|
||||
hasPreDeleteHooks := false
|
||||
hasPostDeleteHooks := false
|
||||
// Filter out PreDelete and PostDelete hooks from targetObjs since they should not be synced
|
||||
// as regular resources. They are only executed during deletion.
|
||||
var targetObjsForSync []*unstructured.Unstructured
|
||||
for _, obj := range targetObjs {
|
||||
if isPreDeleteHook(obj) {
|
||||
hasPreDeleteHooks = true
|
||||
// Skip PreDelete hooks - they are not synced, only executed during deletion
|
||||
continue
|
||||
}
|
||||
if isPostDeleteHook(obj) {
|
||||
hasPostDeleteHooks = true
|
||||
// Skip PostDelete hooks - they are not synced, only executed after deletion
|
||||
continue
|
||||
}
|
||||
targetObjsForSync = append(targetObjsForSync, obj)
|
||||
}
|
||||
|
||||
reconciliation := sync.Reconcile(targetObjs, liveObjByKey, app.Spec.Destination.Namespace, infoProvider)
|
||||
reconciliation := sync.Reconcile(targetObjsForSync, liveObjByKey, app.Spec.Destination.Namespace, infoProvider)
|
||||
ts.AddCheckpoint("live_ms")
|
||||
|
||||
compareOptions, err := m.settingsMgr.GetResourceCompareOptions()
|
||||
@@ -917,7 +927,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
|
||||
}
|
||||
// set unknown status to all resource that are not permitted in the app project
|
||||
isNamespaced, err := m.liveStateCache.IsNamespaced(destCluster, gvk.GroupKind())
|
||||
if !project.IsGroupKindPermitted(gvk.GroupKind(), isNamespaced && err == nil) {
|
||||
if !project.IsGroupKindNamePermitted(gvk.GroupKind(), obj.GetName(), isNamespaced && err == nil) {
|
||||
resState.Status = v1alpha1.SyncStatusCodeUnknown
|
||||
}
|
||||
|
||||
@@ -989,6 +999,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
|
||||
diffConfig: diffConfig,
|
||||
diffResultList: diffResults,
|
||||
hasPostDeleteHooks: hasPostDeleteHooks,
|
||||
hasPreDeleteHooks: hasPreDeleteHooks,
|
||||
revisionsMayHaveChanges: revisionsMayHaveChanges,
|
||||
}
|
||||
|
||||
|
||||
@@ -2,7 +2,6 @@ package controller
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
stderrors "errors"
|
||||
"fmt"
|
||||
"os"
|
||||
@@ -263,7 +262,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
|
||||
// resources which in this case applies the live values in the configured
|
||||
// ignore differences fields.
|
||||
if syncOp.SyncOptions.HasOption("RespectIgnoreDifferences=true") {
|
||||
patchedTargets, err := normalizeTargetResources(openAPISchema, compareResult)
|
||||
patchedTargets, err := normalizeTargetResources(compareResult)
|
||||
if err != nil {
|
||||
state.Phase = common.OperationError
|
||||
state.Message = fmt.Sprintf("Failed to normalize target resources: %s", err)
|
||||
@@ -309,7 +308,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
|
||||
sync.WithLogr(logutils.NewLogrusLogger(logEntry)),
|
||||
sync.WithHealthOverride(lua.ResourceHealthOverrides(resourceOverrides)),
|
||||
sync.WithPermissionValidator(func(un *unstructured.Unstructured, res *metav1.APIResource) error {
|
||||
if !project.IsGroupKindPermitted(un.GroupVersionKind().GroupKind(), res.Namespaced) {
|
||||
if !project.IsGroupKindNamePermitted(un.GroupVersionKind().GroupKind(), un.GetName(), res.Namespaced) {
|
||||
return fmt.Errorf("resource %s:%s is not permitted in project %s", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, project.Name)
|
||||
}
|
||||
if res.Namespaced {
|
||||
@@ -331,6 +330,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
|
||||
sync.WithResourcesFilter(func(key kube.ResourceKey, target *unstructured.Unstructured, live *unstructured.Unstructured) bool {
|
||||
return (len(syncOp.Resources) == 0 ||
|
||||
isPostDeleteHook(target) ||
|
||||
isPreDeleteHook(target) ||
|
||||
argo.ContainsSyncResource(key.Name, key.Namespace, schema.GroupVersionKind{Kind: key.Kind, Group: key.Group}, syncOp.Resources)) &&
|
||||
m.isSelfReferencedObj(live, target, app.GetName(), v1alpha1.TrackingMethod(trackingMethod), installationID)
|
||||
}),
|
||||
@@ -435,65 +435,53 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
|
||||
// - applies normalization to the target resources based on the live resources
|
||||
// - copies ignored fields from the matching live resources: apply normalizer to the live resource,
|
||||
// calculates the patch performed by normalizer and applies the patch to the target resource
|
||||
func normalizeTargetResources(openAPISchema openapi.Resources, cr *comparisonResult) ([]*unstructured.Unstructured, error) {
|
||||
// Normalize live and target resources (cleaning or aligning them)
|
||||
func normalizeTargetResources(cr *comparisonResult) ([]*unstructured.Unstructured, error) {
|
||||
// normalize live and target resources
|
||||
normalized, err := diff.Normalize(cr.reconciliationResult.Live, cr.reconciliationResult.Target, cr.diffConfig)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
patchedTargets := []*unstructured.Unstructured{}
|
||||
|
||||
for idx, live := range cr.reconciliationResult.Live {
|
||||
normalizedTarget := normalized.Targets[idx]
|
||||
if normalizedTarget == nil {
|
||||
patchedTargets = append(patchedTargets, nil)
|
||||
continue
|
||||
}
|
||||
gvk := normalizedTarget.GroupVersionKind()
|
||||
|
||||
originalTarget := cr.reconciliationResult.Target[idx]
|
||||
if live == nil {
|
||||
// No live resource, just use target
|
||||
patchedTargets = append(patchedTargets, originalTarget)
|
||||
continue
|
||||
}
|
||||
|
||||
var (
|
||||
lookupPatchMeta strategicpatch.LookupPatchMeta
|
||||
versionedObject any
|
||||
)
|
||||
|
||||
// Load patch meta struct or OpenAPI schema for CRDs
|
||||
if versionedObject, err = scheme.Scheme.New(gvk); err == nil {
|
||||
if lookupPatchMeta, err = strategicpatch.NewPatchMetaFromStruct(versionedObject); err != nil {
|
||||
var lookupPatchMeta *strategicpatch.PatchMetaFromStruct
|
||||
versionedObject, err := scheme.Scheme.New(normalizedTarget.GroupVersionKind())
|
||||
if err == nil {
|
||||
meta, err := strategicpatch.NewPatchMetaFromStruct(versionedObject)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
} else if crdSchema := openAPISchema.LookupResource(gvk); crdSchema != nil {
|
||||
lookupPatchMeta = strategicpatch.NewPatchMetaFromOpenAPI(crdSchema)
|
||||
lookupPatchMeta = &meta
|
||||
}
|
||||
|
||||
// Calculate live patch
|
||||
livePatch, err := getMergePatch(normalized.Lives[idx], live, lookupPatchMeta)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Apply the patch to the normalized target
|
||||
// This ensures ignored fields in live are restored into the target before syncing
|
||||
normalizedTarget, err = applyMergePatch(normalizedTarget, livePatch, versionedObject, lookupPatchMeta)
|
||||
normalizedTarget, err = applyMergePatch(normalizedTarget, livePatch, versionedObject)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
patchedTargets = append(patchedTargets, normalizedTarget)
|
||||
}
|
||||
|
||||
return patchedTargets, nil
|
||||
}
|
||||
|
||||
// getMergePatch calculates and returns the patch between the original and the
|
||||
// modified unstructures.
|
||||
func getMergePatch(original, modified *unstructured.Unstructured, lookupPatchMeta strategicpatch.LookupPatchMeta) ([]byte, error) {
|
||||
func getMergePatch(original, modified *unstructured.Unstructured, lookupPatchMeta *strategicpatch.PatchMetaFromStruct) ([]byte, error) {
|
||||
originalJSON, err := original.MarshalJSON()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@@ -509,35 +497,18 @@ func getMergePatch(original, modified *unstructured.Unstructured, lookupPatchMet
|
||||
return jsonpatch.CreateMergePatch(originalJSON, modifiedJSON)
|
||||
}
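
As a rough, standalone illustration of the merge-patch round trip these two helpers perform (getMergePatch diffs the normalized live object against the raw live object; applyMergePatch below then applies that patch to the target so ignored live fields are restored), here is a minimal sketch using the same evanphx json-patch calls that appear in this diff; the JSON payloads and import path are illustrative and may differ from the version pinned in the repo:

package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// Stand-ins: "normalizedLive" has the ignored field stripped, "live" still carries it.
	normalizedLive := []byte(`{"spec":{"replicas":1}}`)
	live := []byte(`{"spec":{"replicas":1,"ignoredField":"live-value"}}`)

	// The merge patch captures only what normalization removed or changed.
	patch, err := jsonpatch.CreateMergePatch(normalizedLive, live)
	if err != nil {
		panic(err)
	}
	fmt.Printf("patch: %s\n", patch) // roughly {"spec":{"ignoredField":"live-value"}}

	// Applying that patch to the (normalized) target restores the ignored live value,
	// which is what lets RespectIgnoreDifferences keep those fields out of the sync.
	target := []byte(`{"spec":{"replicas":3}}`)
	patched, err := jsonpatch.MergePatch(target, patch)
	if err != nil {
		panic(err)
	}
	fmt.Printf("patched target: %s\n", patched) // roughly {"spec":{"ignoredField":"live-value","replicas":3}}
}
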
|
||||
|
||||
// applyMergePatch will apply the given patch in the obj and return the patched unstructure.
|
||||
func applyMergePatch(obj *unstructured.Unstructured, patch []byte, versionedObject any, meta strategicpatch.LookupPatchMeta) (*unstructured.Unstructured, error) {
|
||||
// applyMergePatch will apply the given patch in the obj and return the patched
|
||||
// unstructure.
|
||||
func applyMergePatch(obj *unstructured.Unstructured, patch []byte, versionedObject any) (*unstructured.Unstructured, error) {
|
||||
originalJSON, err := obj.MarshalJSON()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var patchedJSON []byte
|
||||
switch {
|
||||
case versionedObject != nil:
|
||||
patchedJSON, err = strategicpatch.StrategicMergePatch(originalJSON, patch, versionedObject)
|
||||
case meta != nil:
|
||||
var originalMap, patchMap map[string]any
|
||||
if err := json.Unmarshal(originalJSON, &originalMap); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := json.Unmarshal(patch, &patchMap); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
patchedMap, err := strategicpatch.StrategicMergeMapPatchUsingLookupPatchMeta(originalMap, patchMap, meta)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
patchedJSON, err = json.Marshal(patchedMap)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
default:
|
||||
if versionedObject == nil {
|
||||
patchedJSON, err = jsonpatch.MergePatch(originalJSON, patch)
|
||||
} else {
|
||||
patchedJSON, err = strategicpatch.StrategicMergePatch(originalJSON, patch, versionedObject)
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
|
||||
@@ -1,17 +1,9 @@
|
||||
package controller
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"strconv"
|
||||
"testing"
|
||||
|
||||
openapi_v2 "github.com/google/gnostic-models/openapiv2"
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
"k8s.io/kubectl/pkg/util/openapi"
|
||||
|
||||
"sigs.k8s.io/yaml"
|
||||
|
||||
"github.com/argoproj/gitops-engine/pkg/sync"
|
||||
synccommon "github.com/argoproj/gitops-engine/pkg/sync/common"
|
||||
"github.com/argoproj/gitops-engine/pkg/utils/kube"
|
||||
@@ -31,29 +23,6 @@ import (
|
||||
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
|
||||
)
|
||||
|
||||
type fakeDiscovery struct {
|
||||
schema *openapi_v2.Document
|
||||
}
|
||||
|
||||
func (f *fakeDiscovery) OpenAPISchema() (*openapi_v2.Document, error) {
|
||||
return f.schema, nil
|
||||
}
|
||||
|
||||
func loadCRDSchema(t *testing.T, path string) *openapi_v2.Document {
|
||||
t.Helper()
|
||||
|
||||
data, err := os.ReadFile(path)
|
||||
require.NoError(t, err)
|
||||
|
||||
jsonData, err := yaml.YAMLToJSON(data)
|
||||
require.NoError(t, err)
|
||||
|
||||
doc, err := openapi_v2.ParseDocument(jsonData)
|
||||
require.NoError(t, err)
|
||||
|
||||
return doc
|
||||
}
|
||||
|
||||
func TestPersistRevisionHistory(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
app.Status.OperationState = nil
|
||||
@@ -416,7 +385,7 @@ func TestNormalizeTargetResources(t *testing.T) {
|
||||
f := setup(t, ignores)
|
||||
|
||||
// when
|
||||
targets, err := normalizeTargetResources(nil, f.comparisonResult)
|
||||
targets, err := normalizeTargetResources(f.comparisonResult)
|
||||
|
||||
// then
|
||||
require.NoError(t, err)
|
||||
@@ -429,7 +398,7 @@ func TestNormalizeTargetResources(t *testing.T) {
|
||||
f := setup(t, []v1alpha1.ResourceIgnoreDifferences{})
|
||||
|
||||
// when
|
||||
targets, err := normalizeTargetResources(nil, f.comparisonResult)
|
||||
targets, err := normalizeTargetResources(f.comparisonResult)
|
||||
|
||||
// then
|
||||
require.NoError(t, err)
|
||||
@@ -449,7 +418,7 @@ func TestNormalizeTargetResources(t *testing.T) {
|
||||
unstructured.RemoveNestedField(live.Object, "metadata", "annotations", "iksm-version")
|
||||
|
||||
// when
|
||||
targets, err := normalizeTargetResources(nil, f.comparisonResult)
|
||||
targets, err := normalizeTargetResources(f.comparisonResult)
|
||||
|
||||
// then
|
||||
require.NoError(t, err)
|
||||
@@ -474,7 +443,7 @@ func TestNormalizeTargetResources(t *testing.T) {
|
||||
f := setup(t, ignores)
|
||||
|
||||
// when
|
||||
targets, err := normalizeTargetResources(nil, f.comparisonResult)
|
||||
targets, err := normalizeTargetResources(f.comparisonResult)
|
||||
|
||||
// then
|
||||
require.NoError(t, err)
|
||||
@@ -489,6 +458,7 @@ func TestNormalizeTargetResources(t *testing.T) {
|
||||
assert.Equal(t, int64(4), replicas)
|
||||
})
|
||||
t.Run("will keep new array entries not found in live state if not ignored", func(t *testing.T) {
|
||||
t.Skip("limitation in the current implementation")
|
||||
// given
|
||||
ignores := []v1alpha1.ResourceIgnoreDifferences{
|
||||
{
|
||||
@@ -502,7 +472,7 @@ func TestNormalizeTargetResources(t *testing.T) {
|
||||
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
|
||||
|
||||
// when
|
||||
targets, err := normalizeTargetResources(nil, f.comparisonResult)
|
||||
targets, err := normalizeTargetResources(f.comparisonResult)
|
||||
|
||||
// then
|
||||
require.NoError(t, err)
|
||||
@@ -539,11 +509,6 @@ func TestNormalizeTargetResourcesWithList(t *testing.T) {
|
||||
}
|
||||
|
||||
t.Run("will properly ignore nested fields within arrays", func(t *testing.T) {
|
||||
doc := loadCRDSchema(t, "testdata/schemas/httpproxy_openapi_v2.yaml")
|
||||
disco := &fakeDiscovery{schema: doc}
|
||||
oapiGetter := openapi.NewOpenAPIGetter(disco)
|
||||
oapiResources, err := openapi.NewOpenAPIParser(oapiGetter).Parse()
|
||||
require.NoError(t, err)
|
||||
// given
|
||||
ignores := []v1alpha1.ResourceIgnoreDifferences{
|
||||
{
|
||||
@@ -557,11 +522,8 @@ func TestNormalizeTargetResourcesWithList(t *testing.T) {
|
||||
target := test.YamlToUnstructured(testdata.TargetHTTPProxy)
|
||||
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
|
||||
|
||||
gvk := schema.GroupVersionKind{Group: "projectcontour.io", Version: "v1", Kind: "HTTPProxy"}
|
||||
fmt.Printf("LookupResource result: %+v\n", oapiResources.LookupResource(gvk))
|
||||
|
||||
// when
|
||||
patchedTargets, err := normalizeTargetResources(oapiResources, f.comparisonResult)
|
||||
patchedTargets, err := normalizeTargetResources(f.comparisonResult)
|
||||
|
||||
// then
|
||||
require.NoError(t, err)
|
||||
@@ -600,7 +562,7 @@ func TestNormalizeTargetResourcesWithList(t *testing.T) {
|
||||
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
|
||||
|
||||
// when
|
||||
targets, err := normalizeTargetResources(nil, f.comparisonResult)
|
||||
targets, err := normalizeTargetResources(f.comparisonResult)
|
||||
|
||||
// then
|
||||
require.NoError(t, err)
|
||||
@@ -652,7 +614,7 @@ func TestNormalizeTargetResourcesWithList(t *testing.T) {
|
||||
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
|
||||
|
||||
// when
|
||||
targets, err := normalizeTargetResources(nil, f.comparisonResult)
|
||||
targets, err := normalizeTargetResources(f.comparisonResult)
|
||||
|
||||
// then
|
||||
require.NoError(t, err)
|
||||
@@ -706,175 +668,6 @@ func TestNormalizeTargetResourcesWithList(t *testing.T) {
|
||||
assert.Equal(t, "EV", env0["name"])
|
||||
assert.Equal(t, "here", env0["value"])
|
||||
})
|
||||
|
||||
t.Run("patches ignored differences in individual array elements of HTTPProxy CRD", func(t *testing.T) {
|
||||
doc := loadCRDSchema(t, "testdata/schemas/httpproxy_openapi_v2.yaml")
|
||||
disco := &fakeDiscovery{schema: doc}
|
||||
oapiGetter := openapi.NewOpenAPIGetter(disco)
|
||||
oapiResources, err := openapi.NewOpenAPIParser(oapiGetter).Parse()
|
||||
require.NoError(t, err)
|
||||
|
||||
ignores := []v1alpha1.ResourceIgnoreDifferences{
|
||||
{
|
||||
Group: "projectcontour.io",
|
||||
Kind: "HTTPProxy",
|
||||
JQPathExpressions: []string{".spec.routes[].rateLimitPolicy.global.descriptors[].entries[]"},
|
||||
},
|
||||
}
|
||||
|
||||
f := setupHTTPProxy(t, ignores)
|
||||
|
||||
target := test.YamlToUnstructured(testdata.TargetHTTPProxy)
|
||||
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
|
||||
|
||||
live := test.YamlToUnstructured(testdata.LiveHTTPProxy)
|
||||
f.comparisonResult.reconciliationResult.Live = []*unstructured.Unstructured{live}
|
||||
|
||||
patchedTargets, err := normalizeTargetResources(oapiResources, f.comparisonResult)
|
||||
require.NoError(t, err)
|
||||
require.Len(t, patchedTargets, 1)
|
||||
patched := patchedTargets[0]
|
||||
|
||||
// verify descriptors array in patched target
|
||||
descriptors := dig(patched.Object, "spec", "routes", 0, "rateLimitPolicy", "global", "descriptors").([]any)
|
||||
require.Len(t, descriptors, 1) // Only the descriptors with ignored entries should remain
|
||||
|
||||
// verify individual entries array inside the descriptor
|
||||
entriesArr := dig(patched.Object, "spec", "routes", 0, "rateLimitPolicy", "global", "descriptors", 0, "entries").([]any)
|
||||
require.Len(t, entriesArr, 1) // Only the ignored entry should be patched
|
||||
|
||||
// verify the content of the entry is preserved correctly
|
||||
entry := entriesArr[0].(map[string]any)
|
||||
requestHeader := entry["requestHeader"].(map[string]any)
|
||||
assert.Equal(t, "sample-header", requestHeader["headerName"])
|
||||
assert.Equal(t, "sample-key", requestHeader["descriptorKey"])
|
||||
})
|
||||
}
|
||||
|
||||
func TestNormalizeTargetResourcesCRDs(t *testing.T) {
    type fixture struct {
        comparisonResult *comparisonResult
    }
    setupHTTPProxy := func(t *testing.T, ignores []v1alpha1.ResourceIgnoreDifferences) *fixture {
        t.Helper()
        dc, err := diff.NewDiffConfigBuilder().
            WithDiffSettings(ignores, nil, true, normalizers.IgnoreNormalizerOpts{}).
            WithNoCache().
            Build()
        require.NoError(t, err)
        live := test.YamlToUnstructured(testdata.SimpleAppLiveYaml)
        target := test.YamlToUnstructured(testdata.SimpleAppTargetYaml)
        return &fixture{
            &comparisonResult{
                reconciliationResult: sync.ReconciliationResult{
                    Live:   []*unstructured.Unstructured{live},
                    Target: []*unstructured.Unstructured{target},
                },
                diffConfig: dc,
            },
        }
    }

    t.Run("sample-app", func(t *testing.T) {
        doc := loadCRDSchema(t, "testdata/schemas/simple-app.yaml")
        disco := &fakeDiscovery{schema: doc}
        oapiGetter := openapi.NewOpenAPIGetter(disco)
        oapiResources, err := openapi.NewOpenAPIParser(oapiGetter).Parse()
        require.NoError(t, err)

        ignores := []v1alpha1.ResourceIgnoreDifferences{
            {
                Group:             "example.com",
                Kind:              "SimpleApp",
                JQPathExpressions: []string{".spec.servers[1].enabled", ".spec.servers[0].port"},
            },
        }

        f := setupHTTPProxy(t, ignores)

        target := test.YamlToUnstructured(testdata.SimpleAppTargetYaml)
        f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}

        live := test.YamlToUnstructured(testdata.SimpleAppLiveYaml)
        f.comparisonResult.reconciliationResult.Live = []*unstructured.Unstructured{live}

        patchedTargets, err := normalizeTargetResources(oapiResources, f.comparisonResult)
        require.NoError(t, err)
        require.Len(t, patchedTargets, 1)

        patched := patchedTargets[0]
        require.NotNil(t, patched)

        // 'spec.servers' array has length 2
        servers := dig(patched.Object, "spec", "servers").([]any)
        require.Len(t, servers, 2)

        // first server's 'name' is 'server1'
        name1 := dig(patched.Object, "spec", "servers", 0, "name").(string)
        assert.Equal(t, "server1", name1)

        assert.Equal(t, int64(8081), dig(patched.Object, "spec", "servers", 0, "port").(int64))
        assert.Equal(t, int64(9090), dig(patched.Object, "spec", "servers", 1, "port").(int64))

        // first server's 'enabled' should be true
        enabled1 := dig(patched.Object, "spec", "servers", 0, "enabled").(bool)
        assert.True(t, enabled1)

        // second server's 'name' should be 'server2'
        name2 := dig(patched.Object, "spec", "servers", 1, "name").(string)
        assert.Equal(t, "server2", name2)

        // second server's 'enabled' should be true (respected from live due to ignoreDifferences)
        enabled2 := dig(patched.Object, "spec", "servers", 1, "enabled").(bool)
        assert.True(t, enabled2)
    })
    t.Run("rollout-obj", func(t *testing.T) {
        // Load Rollout CRD schema like SimpleApp
        doc := loadCRDSchema(t, "testdata/schemas/rollout-schema.yaml")
        disco := &fakeDiscovery{schema: doc}
        oapiGetter := openapi.NewOpenAPIGetter(disco)
        oapiResources, err := openapi.NewOpenAPIParser(oapiGetter).Parse()
        require.NoError(t, err)

        ignores := []v1alpha1.ResourceIgnoreDifferences{
            {
                Group:             "argoproj.io",
                Kind:              "Rollout",
                JQPathExpressions: []string{`.spec.template.spec.containers[] | select(.name == "init") | .image`},
            },
        }

        f := setupHTTPProxy(t, ignores)

        live := test.YamlToUnstructured(testdata.LiveRolloutYaml)
        target := test.YamlToUnstructured(testdata.TargetRolloutYaml)
        f.comparisonResult.reconciliationResult.Live = []*unstructured.Unstructured{live}
        f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}

        targets, err := normalizeTargetResources(oapiResources, f.comparisonResult)
        require.NoError(t, err)
        require.Len(t, targets, 1)

        patched := targets[0]
        require.NotNil(t, patched)

        containers := dig(patched.Object, "spec", "template", "spec", "containers").([]any)
        require.Len(t, containers, 2)

        initContainer := containers[0].(map[string]any)
        mainContainer := containers[1].(map[string]any)

        // Assert init container image is preserved (ignoreDifferences works)
        initImage := dig(initContainer, "image").(string)
        assert.Equal(t, "init-container:v1", initImage)

        // Assert main container fields as expected
        mainName := dig(mainContainer, "name").(string)
        assert.Equal(t, "main", mainName)

        mainImage := dig(mainContainer, "image").(string)
        assert.Equal(t, "main-container:v1", mainImage)
    })
}
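
The `loadCRDSchema` and `fakeDiscovery` helpers used above are likewise defined outside this hunk. Assuming the files under `controller/testdata/schemas` are plain Swagger 2.0 documents and that `openapi.NewOpenAPIGetter` only needs something exposing an `OpenAPISchema()` method, a rough sketch (import paths assumed) could be:

// loadCRDSchema reads a Swagger 2.0 YAML document from testdata and parses it
// with gnostic's openapi_v2.ParseDocument (sketch only; assumes os, testing,
// require, and the gnostic openapi_v2 package are imported).
func loadCRDSchema(t *testing.T, path string) *openapi_v2.Document {
    t.Helper()
    data, err := os.ReadFile(path)
    require.NoError(t, err)
    doc, err := openapi_v2.ParseDocument(data)
    require.NoError(t, err)
    return doc
}

// fakeDiscovery hands the parsed document to the OpenAPI getter.
type fakeDiscovery struct {
    schema *openapi_v2.Document
}

func (f *fakeDiscovery) OpenAPISchema() (*openapi_v2.Document, error) {
    return f.schema, nil
}
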
func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {

controller/testdata/data.go (vendored, 12 changes)
@@ -32,16 +32,4 @@ var (

    //go:embed additional-image-replicas-deployment.yaml
    AdditionalImageReplicaDeploymentYaml string

    //go:embed simple-app-live.yaml
    SimpleAppLiveYaml string

    //go:embed simple-app-target.yaml
    SimpleAppTargetYaml string

    //go:embed target-rollout.yaml
    TargetRolloutYaml string

    //go:embed live-rollout.yaml
    LiveRolloutYaml string
)

controller/testdata/live-rollout.yaml (vendored, 25 changes)
@@ -1,25 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-sample
spec:
  replicas: 2
  strategy:
    canary:
      steps:
      - setWeight: 20
  selector:
    matchLabels:
      app: rollout-sample
  template:
    metadata:
      labels:
        app: rollout-sample
    spec:
      containers:
      - name: init
        image: init-container:v1
        livenessProbe:
          initialDelaySeconds: 10
      - name: main
        image: main-container:v1

@@ -1,62 +0,0 @@
swagger: "2.0"
info:
  title: HTTPProxy
  version: "v1"
paths: {}
definitions:
  io.projectcontour.v1.HTTPProxy:
    type: object
    x-kubernetes-group-version-kind:
      - group: projectcontour.io
        version: v1
        kind: HTTPProxy
    properties:
      spec:
        type: object
        properties:
          routes:
            type: array
            items:
              type: object
              properties:
                rateLimitPolicy:
                  type: object
                  properties:
                    global:
                      type: object
                      properties:
                        descriptors:
                          type: array
                          x-kubernetes-list-map-keys:
                            - entries
                          items:
                            type: object
                            properties:
                              entries:
                                type: array
                                x-kubernetes-list-map-keys:
                                  - headerName
                                items:
                                  type: object
                                  properties:
                                    requestHeader:
                                      type: object
                                      properties:
                                        descriptorKey:
                                          type: string
                                        headerName:
                                          type: string
                                    requestHeaderValueMatch:
                                      type: object
                                      properties:
                                        headers:
                                          type: array
                                          items:
                                            type: object
                                            properties:
                                              name:
                                                type: string
                                              contains:
                                                type: string
                                              value:
                                                type: string

controller/testdata/schemas/rollout-schema.yaml (vendored, 67 changes)
@@ -1,67 +0,0 @@
swagger: "2.0"
info:
  title: Rollout
  version: "v1alpha1"
paths: {}
definitions:
  argoproj.io.v1alpha1.Rollout:
    type: object
    x-kubernetes-group-version-kind:
      - group: argoproj.io
        version: v1alpha1
        kind: Rollout
    properties:
      spec:
        type: object
        properties:
          replicas:
            type: integer
          strategy:
            type: object
            properties:
              canary:
                type: object
                properties:
                  steps:
                    type: array
                    items:
                      type: object
                      properties:
                        setWeight:
                          type: integer
          selector:
            type: object
            properties:
              matchLabels:
                type: object
                additionalProperties:
                  type: string
          template:
            type: object
            properties:
              metadata:
                type: object
                properties:
                  labels:
                    type: object
                    additionalProperties:
                      type: string
              spec:
                type: object
                properties:
                  containers:
                    type: array
                    x-kubernetes-list-map-keys:
                      - name
                    items:
                      type: object
                      properties:
                        name:
                          type: string
                        image:
                          type: string
                        livenessProbe:
                          type: object
                          properties:
                            initialDelaySeconds:
                              type: integer

controller/testdata/schemas/simple-app.yaml (vendored, 29 changes)
@@ -1,29 +0,0 @@
swagger: "2.0"
info:
  title: SimpleApp
  version: "v1"
paths: {}
definitions:
  example.com.v1.SimpleApp:
    type: object
    x-kubernetes-group-version-kind:
      - group: example.com
        version: v1
        kind: SimpleApp
    properties:
      spec:
        type: object
        properties:
          servers:
            type: array
            x-kubernetes-list-map-keys:
              - name
            items:
              type: object
              properties:
                name:
                  type: string
                port:
                  type: integer
                enabled:
                  type: boolean

controller/testdata/simple-app-live.yaml (vendored, 12 changes)
@@ -1,12 +0,0 @@
apiVersion: example.com/v1
kind: SimpleApp
metadata:
  name: simpleapp-sample
spec:
  servers:
  - name: server1
    port: 8081 # port changed in live from 8080
    enabled: true
  - name: server2
    port: 9090
    enabled: true # enabled changed in live from false

controller/testdata/simple-app-target.yaml (vendored, 12 changes)
@@ -1,12 +0,0 @@
apiVersion: example.com/v1
kind: SimpleApp
metadata:
  name: simpleapp-sample
spec:
  servers:
  - name: server1
    port: 8080
    enabled: true
  - name: server2
    port: 9090
    enabled: false

controller/testdata/target-rollout.yaml (vendored, 25 changes)
@@ -1,25 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-sample
spec:
  replicas: 2
  strategy:
    canary:
      steps:
      - setWeight: 20
  selector:
    matchLabels:
      app: rollout-sample
  template:
    metadata:
      labels:
        app: rollout-sample
    spec:
      containers:
      - name: init
        image: init-container:v1
        livenessProbe:
          initialDelaySeconds: 15
      - name: main
        image: main-container:v1

docs/assets/confirm-prune.png (new binary file, 31 KiB, not shown)

@@ -127,6 +127,8 @@ Below are the different options.
So for the case of debugging the `api-server`, run:
`make start-local ARGOCD_START="notification applicationset-controller repo-server redis dex controller ui"`

+> [!NOTE]
+> By default, the api-server in this configuration runs with auth disabled. If you need to test argo cd auth-related functionality, run `export ARGOCD_E2E_DISABLE_AUTH='false' && make start-local`
#### Run with "make run"
`make run` runs all the components by default, but it is also possible to run it with a blacklist of components, enabling the separation we need.

@@ -72,7 +72,10 @@ The final step is running the End-to-End testsuite, which ensures that your Kube
* First, start the End-to-End server: `make start-e2e` or `make start-e2e-local`. This will spawn a number of processes and services on your system.
* When all components have started, run `make test-e2e` or `make test-e2e-local` to run the end-to-end tests against your local services.

-To run a single test with a local toolchain, you can use `TEST_FLAGS="-run TestName" make test-e2e-local`.
+Below you can find a few examples of how to run specific tests.
+- To run a single test with a local toolchain, you can use `TEST_FLAGS="-run TestName" make test-e2e-local`.
+- To run a specific package, you can use `make TEST_MODULE=./test/e2e/<TEST_FILE>.go test-e2e-local`
+- Finally, you can also try `make TEST_FLAGS="-run <TEST_METHOD_NAME_REGEXP>" test-e2e-local` if you want a more fine-grained control.

For more information about End-to-End tests, refer to the [End-to-End test documentation](test-e2e.md).

@@ -109,7 +109,7 @@ make install-codegen-tools-local

```shell
kubectl create namespace argocd &&
-kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/install.yaml
+kubectl apply -n argocd --server-side --force-conflicts -f https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/install.yaml
```

Set kubectl config to avoid specifying the namespace in every kubectl command.

@@ -19,19 +19,15 @@ make build-docs

If you want to build and test the site directly on your local machine without the use of docker container, follow the below steps:

-1. Install the `mkdocs` using the `pip` command
+1. Install the dependencies from the root of this repository using the `pip` command
```bash
-pip install mkdocs
+pip install -r docs/requirements.txt
```
-2. Install the required dependencies using the below command
-```bash
-pip install $(mkdocs get-deps)
-```
-3. Build the docs site locally from the root
+2. Build the docs site locally from the root
```bash
make build-docs-local
```
-4. Start the docs site locally
+3. Start the docs site locally
```bash
make serve-docs-local
```

@@ -93,7 +93,6 @@ Need help? Start with the [Contributors FAQ](faq/)

## Contributing to Argo CD dependencies
- [Contributing to argo-ui](dependencies.md#argo-ui-components-githubcomargoprojargo-ui)
- [Contributing to gitops-engine](dependencies.md#gitops-engine-githubcomargoprojgitops-engine)
- [Contributing to notifications-engine](dependencies.md#notifications-engine-githubcomargoprojnotifications-engine)

## Extensions and Third-Party Applications

docs/developer-guide/maintaining-internal-argo-cd-forks.md (new file, 49 lines)
@@ -0,0 +1,49 @@
# Maintaining Internal Argo CD Forks

Most Argo CD contributors don't need this section to contribute to Argo CD. In most cases, the [Regular Developer Guide](index.md) is sufficient.

This section will help companies that need to publish custom Argo CD images or publish custom Argo CD releases from their forks.
Such companies need the below documentation in addition to the [Regular Developer Guide](index.md).
This section will also help Argo CD maintainers to test the release process in a test environment.

## Understanding where and which upstream images are published

Official upstream release tags (`vX.Y.Z*`) publish their multi-platform images and the corresponding provenance attestations to `quay.io/argoproj/argocd` (or whatever registry a fork configures via `IMAGE_*` variables).
Upstream master builds continue to refresh the `latest` tag in the same primary registry, while also pushing commit-tagged images (and their provenance) to `ghcr.io/argoproj/argo-cd/argocd` so `cd.apps.argoproj.io` can pin exact SHAs.
Forks inherit the same behavior but target their customized registries/namespaces and do not deploy to `cd.apps.argoproj.io`.

## Publishing custom images from forked master branches

Fork builds can publish their own containers once workflow variables point at your registry/namespace instead of `argoproj`.

### Configuring GitHub Actions variables
Adjust the variables below to match your setup (overriding `IMAGE_NAMESPACE` is required, because it flips the workflows out of “upstream” mode):

- `IMAGE_NAMESPACE` – defaults to `argoproj` (overriding required)
- `IMAGE_REPOSITORY` – defaults to `argocd` (may need overriding)
- `GHCR_NAMESPACE` – defaults to `${{ github.repository }}`, which translates to `<YOUR_GITHUB_USERNAME>/<YOUR_FORK_REPO>` (rarely needs overriding)
- `GHCR_REPOSITORY` – defaults to `argocd` (may need overriding)

These values produce the final image names:

- `quay.io/$IMAGE_NAMESPACE/$IMAGE_REPOSITORY`
- `ghcr.io/$GHCR_NAMESPACE/$GHCR_REPOSITORY`

Example: if your GitHub account is `my-user`, your fork is `my-argo-cd-fork`, and you want to push release images to `quay.io/my-quay-user/argocd`, configure:

- `IMAGE_NAMESPACE = my-quay-user`
Your master build images will then be published to `quay.io/my-quay-user/argocd:latest`, and the commit-tagged images along with the attestations will be published under the Packages (GHCR) of your GitHub fork repo.

### Configuring GitHub Actions secrets
Supply credentials for your primary registry so the workflow can push:

- `RELEASE_QUAY_USERNAME`
- `RELEASE_QUAY_TOKEN`

## Enabling fork releases

Forks can run the full release workflow by setting `ENABLE_FORK_RELEASES: true`, ensuring all upstream tags are fetched (the release tooling needs previous tags for changelog diffs), and reusing the same image variables/secrets listed above so release images push to your custom registry. After that, follow the standard [Release Process](releasing.md) with one critical adjustment:

> [!WARNING]
> When invoking `hack/trigger-release.sh`, point it at your fork remote (usually `origin`) rather than `upstream`, otherwise the script may try to push official tags.
> Example: `./hack/trigger-release.sh v2.7.2 origin`
@@ -17,10 +17,12 @@ These are the upcoming releases dates:
| v2.12 | Monday, Jun. 17, 2024 | Monday, Aug. 5, 2024 | [Ishita Sequeira](https://github.com/ishitasequeira) | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [checklist](https://github.com/argoproj/argo-cd/issues/19063) |
| v2.13 | Monday, Sep. 16, 2024 | Monday, Nov. 4, 2024 | [Regina Voloshin](https://github.com/reggie-k) | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [checklist](https://github.com/argoproj/argo-cd/issues/19513) |
| v2.14 | Monday, Dec. 16, 2024 | Monday, Feb. 3, 2025 | [Ryan Umstead](https://github.com/rumstead) | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [checklist](https://github.com/argoproj/argo-cd/issues/20869) |
-| v3.0 | Monday, Mar. 17, 2025 | Tuesday, May 6, 2025 | [Regina Voloshin](https://github.com/reggie-k) | | [checklist](https://github.com/argoproj/argo-cd/issues/21735) |
-| v3.1 | Monday, Jun. 16, 2025 | Monday, Aug. 4, 2025 | [Christian Hernandez](https://github.com/christianh814) | [Alexandre Gaudreault](https://github.com/agaudreault) | [checklist](#) |
-| v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | [Nitish Kumar](https://github.com/nitishfy) | | [checklist](#) |
-| v3.3 | Monday, Dec. 15, 2025 | Monday, Feb. 2, 2026 | | |
+| v3.0 | Monday, Mar. 17, 2025 | Tuesday, May 6, 2025 | [Regina Voloshin](https://github.com/reggie-k) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/21735) |
+| v3.1 | Monday, Jun. 16, 2025 | Monday, Aug. 4, 2025 | [Christian Hernandez](https://github.com/christianh814) | [Alexandre Gaudreault](https://github.com/agaudreault) | [checklist](https://github.com/argoproj/argo-cd/issues/23347) |
+| v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | [Nitish Kumar](https://github.com/nitishfy) | [Michael Crenshaw](https://github.com/crenshaw-dev) | [checklist](https://github.com/argoproj/argo-cd/issues/24539) |
+| v3.3 | Monday, Dec. 15, 2025 | Monday, Feb. 2, 2026 | [Peter Jiang](https://github.com/pjiang-dev) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/25211) |
+| v3.4 | Monday, Mar. 16, 2026 | Monday, May. 4, 2026 | | |
+| v3.5 | Monday, Jun. 15, 2026 | Monday, Aug. 3, 2026 | | |

Actual release dates might differ from the plan by a few days.

@@ -71,6 +71,8 @@ Example:
./hack/trigger-release.sh v2.7.2 upstream
```

+The script will ask for confirmation, type `y` to proceed. If no confirmation is received within 30 seconds, the script will abort.

> [!TIP]
> The tag must be in one of the following formats to trigger the GH workflow:<br>
> * GA: `v<MAJOR>.<MINOR>.<PATCH>`<br>

@@ -21,7 +21,7 @@ First push the installation manifest into argocd namespace:

```shell
kubectl create namespace argocd
-kubectl apply -n argocd --force -f manifests/install.yaml
+kubectl apply -n argocd --server-side --force-conflicts -f manifests/install.yaml
```

The services you will start later assume you are running in the namespace where Argo CD is installed. You can set the current context default namespace as follows:
@@ -199,6 +199,7 @@ docker login
You will need to push the built images to your own Docker namespace:

```bash
+export IMAGE_REGISTRY=docker.io
export IMAGE_NAMESPACE=youraccount
```

@@ -208,6 +209,13 @@ If you don't set `IMAGE_TAG` in your environment, the default of `:latest` will
export IMAGE_TAG=1.5.0-myrc
```

+> [!NOTE]
+> The image will be built for `linux/amd64` platform by default. If you are running on Mac with Apple chip (ARM),
+> you need to specify the correct build platform by running:
+> ```bash
+> export TARGET_ARCH=linux/arm64
+> ```

Then you can build & push the image in one step:

```bash
@@ -216,7 +224,7 @@ DOCKER_PUSH=true make image

#### Configure manifests for your image

-With `IMAGE_NAMESPACE` and `IMAGE_TAG` still set, run:
+With `IMAGE_REGISTRY`, `IMAGE_NAMESPACE` and `IMAGE_TAG` still set, run:

```bash
make manifests
@@ -231,12 +239,12 @@ make manifests-local
(depending on your toolchain) to build a new set of installation manifests which include your specific image reference.

> [!NOTE]
-> Do not commit these manifests to your repository. If you want to revert the changes, the easiest way is to unset `IMAGE_NAMESPACE` and `IMAGE_TAG` from your environment and run `make manifests` again. This will re-create the default manifests.
+> Do not commit these manifests to your repository. If you want to revert the changes, the easiest way is to unset `IMAGE_REGISTRY`, `IMAGE_NAMESPACE` and `IMAGE_TAG` from your environment and run `make manifests` again. This will re-create the default manifests.

#### Configure your cluster with custom manifests

The final step is to push the manifests to your cluster, so it will pull and run your image:

```bash
-kubectl apply -n argocd --force -f manifests/install.yaml
+kubectl apply -n argocd --server-side --force-conflicts -f manifests/install.yaml
```

Some files were not shown because too many files have changed in this diff.