Mirror of https://github.com/argoproj/argo-cd.git, synced 2026-03-10 18:38:46 +01:00

Compare commits: 199 commits
.github/ISSUE_TEMPLATE/bug_report.md | 8
@@ -2,7 +2,7 @@
 name: Bug report
 about: Create a report to help us improve
 title: ''
-labels: 'bug'
+labels: ['bug', 'triage/pending']
 assignees: ''
 ---
 
@@ -10,9 +10,9 @@ assignees: ''
 
 Checklist:
 
-* [ ] I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
-* [ ] I've included steps to reproduce the bug.
-* [ ] I've pasted the output of `argocd version`.
+- [ ] I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
+- [ ] I've included steps to reproduce the bug.
+- [ ] I've pasted the output of `argocd version`.
 
 **Describe the bug**
 
@@ -2,9 +2,10 @@
 name: Enhancement proposal
 about: Propose an enhancement for this project
 title: ''
-labels: 'enhancement'
+labels: ['enhancement', 'triage/pending']
 assignees: ''
 ---
 
 # Summary
 
 What change you think needs making.
@@ -15,4 +16,4 @@ Please give examples of your use case, e.g. when would you use this.
 
 # Proposal
 
-How do you think this should be implemented?
+How do you think this should be implemented?
.github/ISSUE_TEMPLATE/new_dev_tool.md | 14
@@ -2,17 +2,17 @@
 name: New Dev Tool Request
 about: This is a request for adding a new tool for setting up a dev environment.
 title: ''
-labels: ''
+labels: ['component:dev-env', 'triage/pending']
 assignees: ''
 ---
 
 Checklist:
 
-* [ ] I am willing to maintain this tool, or have another Argo CD maintainer who is.
-* [ ] I have another Argo CD maintainer who is willing to help maintain this tool (there needs to be at least two maintainers willing to maintain this tool)
-* [ ] I have a lead sponsor who is a core Argo CD maintainer
-* [ ] There is a PR which adds said tool - this is so that the maintainers can assess the impact of having this in the tree
-* [ ] I have given a motivation why this should be added
+- [ ] I am willing to maintain this tool, or have another Argo CD maintainer who is.
+- [ ] I have another Argo CD maintainer who is willing to help maintain this tool (there needs to be at least two maintainers willing to maintain this tool)
+- [ ] I have a lead sponsor who is a core Argo CD maintainer
+- [ ] There is a PR which adds said tool - this is so that the maintainers can assess the impact of having this in the tree
+- [ ] I have given a motivation why this should be added
 
 ### The proposer
 
@@ -24,7 +24,7 @@ Checklist:
 
 ### Motivation
 
-<!-- Why this tool would be useful to have in the tree. -->
+<!-- Why this tool would be useful to have in the tree. -->
 
 ### Link to PR (Optional)
 
.github/ISSUE_TEMPLATE/security_logs.md | 8
@@ -1,10 +1,11 @@
 ---
 name: Security log
 about: Propose adding security-related logs or tagging existing logs with security fields
-title: "seclog: [Event Description]"
-labels: security-log
-assignees: notfromstatefarm
+title: 'seclog: [Event Description]'
+labels: ['security', 'triage/pending']
+assignees: ''
 ---
 
 # Event to be logged
 
 Specify the event that needs to be logged or existing logs that need to be tagged.
@@ -16,4 +17,3 @@ What security level should these events be logged under? Refer to https://argo-c
 # Common Weakness Enumeration
 
 Is there an associated [CWE](https://cwe.mitre.org/) that could be tagged as well?
-
.github/cherry-pick-bot.yml | 3 (deleted)
@@ -1,3 +0,0 @@
-enabled: true
-preservePullRequestTitle: true
-
.github/workflows/bump-major-version.yaml | 2
@@ -37,7 +37,7 @@ jobs:
         working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
 
       - name: Setup Golang
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
        with:
          go-version: ${{ env.GOLANG_VERSION }}
       - name: Add ~/go/bin to PATH
.github/workflows/cherry-pick-single.yml | 114 (new file)

name: Cherry Pick Single

on:
  workflow_call:
    inputs:
      merge_commit_sha:
        required: true
        type: string
        description: "The merge commit SHA to cherry-pick"
      version_number:
        required: true
        type: string
        description: "The version number (from cherry-pick/ label)"
      pr_number:
        required: true
        type: string
        description: "The original PR number"
      pr_title:
        required: true
        type: string
        description: "The original PR title"
    secrets:
      CHERRYPICK_APP_ID:
        required: true
      CHERRYPICK_APP_PRIVATE_KEY:
        required: true

jobs:
  cherry-pick:
    name: Cherry Pick to ${{ inputs.version_number }}
    runs-on: ubuntu-latest
    steps:
      - name: Generate a token
        id: generate-token
        uses: actions/create-github-app-token@a8d616148505b5069dccd32f177bb87d7f39123b # v2.1.1
        with:
          app-id: ${{ secrets.CHERRYPICK_APP_ID }}
          private-key: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}

      - name: Checkout repository
        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
        with:
          fetch-depth: 0
          token: ${{ steps.generate-token.outputs.token }}

      - name: Configure Git
        run: |
          git config --global user.name "github-actions[bot]"
          git config --global user.email "github-actions[bot]@users.noreply.github.com"

      - name: Cherry pick commit
        id: cherry-pick
        run: |
          set -e

          MERGE_COMMIT="${{ inputs.merge_commit_sha }}"
          TARGET_BRANCH="release-${{ inputs.version_number }}"

          echo "🍒 Cherry-picking commit $MERGE_COMMIT to branch $TARGET_BRANCH"

          # Check if target branch exists
          if ! git show-ref --verify --quiet "refs/remotes/origin/$TARGET_BRANCH"; then
            echo "❌ Target branch '$TARGET_BRANCH' does not exist"
            exit 1
          fi

          # Create new branch for cherry-pick
          CHERRY_PICK_BRANCH="cherry-pick-${{ inputs.pr_number }}-to-${TARGET_BRANCH}"
          git checkout -b "$CHERRY_PICK_BRANCH" "origin/$TARGET_BRANCH"

          # Perform cherry-pick
          if git cherry-pick -m 1 "$MERGE_COMMIT"; then
            echo "✅ Cherry-pick successful"

            # Extract Signed-off-by from the cherry-pick commit
            SIGNOFF=$(git log -1 --pretty=format:"%B" | grep -E '^Signed-off-by:' || echo "")

            # Push the new branch
            git push origin "$CHERRY_PICK_BRANCH"

            # Save data for PR creation
            echo "branch_name=$CHERRY_PICK_BRANCH" >> "$GITHUB_OUTPUT"
            echo "signoff=$SIGNOFF" >> "$GITHUB_OUTPUT"
            echo "target_branch=$TARGET_BRANCH" >> "$GITHUB_OUTPUT"
          else
            echo "❌ Cherry-pick failed due to conflicts"
            git cherry-pick --abort
            exit 1
          fi

      - name: Create Pull Request
        run: |
          # Create cherry-pick PR
          gh pr create \
            --title "${{ inputs.pr_title }} (cherry-pick #${{ inputs.pr_number }} for ${{ inputs.version_number }})" \
            --body "Cherry-picked ${{ inputs.pr_title }} (#${{ inputs.pr_number }})

          ${{ steps.cherry-pick.outputs.signoff }}" \
            --base "${{ steps.cherry-pick.outputs.target_branch }}" \
            --head "${{ steps.cherry-pick.outputs.branch_name }}"

          # Comment on original PR
          gh pr comment ${{ inputs.pr_number }} \
            --body "🍒 Cherry-pick PR created for ${{ inputs.version_number }}: #$(gh pr list --head ${{ steps.cherry-pick.outputs.branch_name }} --json number --jq '.[0].number')"
        env:
          GH_TOKEN: ${{ steps.generate-token.outputs.token }}

      - name: Comment on failure
        if: failure()
        run: |
          gh pr comment ${{ inputs.pr_number }} \
            --body "❌ Cherry-pick failed for ${{ inputs.version_number }}. Please check the workflow logs for details."
        env:
          GH_TOKEN: ${{ steps.generate-token.outputs.token }}
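The `Signed-off-by` extraction in the cherry-pick step above can be exercised locally. This is a minimal sketch; the commit message below is invented for illustration and is piped in directly instead of coming from `git log`:

```shell
# Simulate extracting the Signed-off-by trailer, as the workflow does
# with `git log -1 --pretty=format:"%B"`. Sample message, not a real commit.
MSG='fix: handle nil pointer

Signed-off-by: Jane Doe <jane@example.com>'

# grep keeps only trailer lines; `|| echo ""` mirrors the workflow's
# fallback when no trailer exists, so `set -e` scripts do not abort.
SIGNOFF=$(printf '%s\n' "$MSG" | grep -E '^Signed-off-by:' || echo "")
echo "$SIGNOFF"
```

Preserving the trailer matters because the cherry-pick PR body carries it forward to satisfy DCO checks on the new PR.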
.github/workflows/cherry-pick.yml | 53 (new file)

name: Cherry Pick

on:
  pull_request_target:
    branches:
      - master
    types: ["labeled", "closed"]

jobs:
  find-labels:
    name: Find Cherry Pick Labels
    if: |
      github.event.pull_request.merged == true && (
        (github.event.action == 'labeled' && startsWith(github.event.label.name, 'cherry-pick/')) ||
        (github.event.action == 'closed' && contains(toJSON(github.event.pull_request.labels.*.name), 'cherry-pick/'))
      )
    runs-on: ubuntu-latest
    outputs:
      labels: ${{ steps.extract-labels.outputs.labels }}
    steps:
      - name: Extract cherry-pick labels
        id: extract-labels
        run: |
          if [[ "${{ github.event.action }}" == "labeled" ]]; then
            # Label was just added - use it directly
            LABEL_NAME="${{ github.event.label.name }}"
            VERSION="${LABEL_NAME#cherry-pick/}"
            CHERRY_PICK_DATA='[{"label":"'$LABEL_NAME'","version":"'$VERSION'"}]'
          else
            # PR was closed - find all cherry-pick labels
            CHERRY_PICK_DATA=$(echo '${{ toJSON(github.event.pull_request.labels) }}' | jq -c '[.[] | select(.name | startswith("cherry-pick/")) | {label: .name, version: (.name | sub("cherry-pick/"; ""))}]')
          fi

          echo "labels=$CHERRY_PICK_DATA" >> "$GITHUB_OUTPUT"
          echo "Found cherry-pick data: $CHERRY_PICK_DATA"

  cherry-pick:
    name: Cherry Pick
    needs: find-labels
    if: needs.find-labels.outputs.labels != '[]'
    strategy:
      matrix:
        include: ${{ fromJSON(needs.find-labels.outputs.labels) }}
      fail-fast: false
    uses: ./.github/workflows/cherry-pick-single.yml
    with:
      merge_commit_sha: ${{ github.event.pull_request.merge_commit_sha }}
      version_number: ${{ matrix.version }}
      pr_number: ${{ github.event.pull_request.number }}
      pr_title: ${{ github.event.pull_request.title }}
    secrets:
      CHERRYPICK_APP_ID: ${{ vars.CHERRYPICK_APP_ID }}
      CHERRYPICK_APP_PRIVATE_KEY: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}
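Both label-extraction paths in the `find-labels` job above can be run outside Actions. The sketch below assumes `jq` is installed; the label names and payload are sample values, not taken from a real PR:

```shell
# "labeled" path: strip the cherry-pick/ prefix with parameter expansion.
LABEL_NAME="cherry-pick/3.0"
VERSION="${LABEL_NAME#cherry-pick/}"
echo "$VERSION"

# "closed" path: select every cherry-pick/ label from a labels payload
# (shape matches github.event.pull_request.labels: objects with a .name key).
LABELS_JSON='[{"name":"bug"},{"name":"cherry-pick/3.0"},{"name":"cherry-pick/2.14"}]'
CHERRY_PICK_DATA=$(echo "$LABELS_JSON" | jq -c \
  '[.[] | select(.name | startswith("cherry-pick/")) | {label: .name, version: (.name | sub("cherry-pick/"; ""))}]')
echo "$CHERRY_PICK_DATA"
```

The resulting compact JSON array is what feeds `fromJSON(...)` in the matrix `include`, producing one reusable-workflow call per target release branch.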
.github/workflows/ci-build.yaml | 44
@@ -14,7 +14,7 @@ on:
 env:
   # Golang version to use across CI steps
   # renovate: datasource=golang-version packageName=golang
-  GOLANG_VERSION: '1.25.0'
+  GOLANG_VERSION: '1.25.6'
 
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
@@ -32,7 +32,7 @@ jobs:
       docs: ${{ steps.filter.outputs.docs_any_changed }}
     steps:
       - uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
-      - uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
+      - uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
         id: filter
         with:
           # Any file which is not under docs/, ui/ or is not a markdown file is counted as a backend file
@@ -57,7 +57,7 @@ jobs:
       - name: Checkout code
         uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
       - name: Setup Golang
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Download all Go modules
@@ -78,7 +78,7 @@ jobs:
       - name: Checkout code
         uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
       - name: Setup Golang
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Restore go build cache
@@ -105,7 +105,7 @@ jobs:
       - name: Checkout code
         uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
       - name: Setup Golang
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Run golangci-lint
@@ -133,7 +133,7 @@ jobs:
       - name: Create symlink in GOPATH
         run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
       - name: Setup Golang
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Install required packages
@@ -197,7 +197,7 @@ jobs:
       - name: Create symlink in GOPATH
         run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
       - name: Setup Golang
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Install required packages
@@ -253,7 +253,7 @@ jobs:
       - name: Checkout code
         uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
       - name: Setup Golang
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Create symlink in GOPATH
@@ -305,7 +305,7 @@ jobs:
       - name: Checkout code
         uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
       - name: Setup NodeJS
-        uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
+        uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
         with:
           # renovate: datasource=node-version packageName=node versioning=node
           node-version: '22.9.0'
@@ -385,7 +385,7 @@ jobs:
         run: |
           go tool covdata percent -i=test-results,e2e-code-coverage/applicationset-controller,e2e-code-coverage/repo-server,e2e-code-coverage/app-controller,e2e-code-coverage/commit-server -o test-results/full-coverage.out
       - name: Upload code coverage information to codecov.io
-        uses: codecov/codecov-action@fdcc8476540edceab3de004e990f80d881c6cc00 # v5.5.0
+        uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
         with:
           files: test-results/full-coverage.out
           fail_ci_if_error: true
@@ -402,31 +402,31 @@ jobs:
       env:
         GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
-      uses: SonarSource/sonarqube-scan-action@8c71dc039c2dd71d3821e89a2b58ecc7fee6ced9 # v5.3.0
+      uses: SonarSource/sonarqube-scan-action@1a6d90ebcb0e6a6b1d87e37ba693fe453195ae25 # v5.3.1
       if: env.sonar_secret != ''
   test-e2e:
     name: Run end-to-end tests
     if: ${{ needs.changes.outputs.backend == 'true' }}
-    runs-on: ubuntu-22.04
+    runs-on: oracle-vm-16cpu-64gb-x86-64
     strategy:
       fail-fast: false
       matrix:
-        # latest: true means that this version mush upload the coverage report to codecov.io
+        # We designate the latest version because we only collect code coverage for that version.
         k3s:
-          - version: v1.33.1
+          - version: v1.34.2
             latest: true
+          - version: v1.33.1
+            latest: false
           - version: v1.32.1
             latest: false
           - version: v1.31.0
             latest: false
           - version: v1.30.4
             latest: false
     needs:
       - build-go
       - changes
     env:
-      GOPATH: /home/runner/go
+      GOPATH: /home/ubuntu/go
       ARGOCD_FAKE_IN_CLUSTER: 'true'
       ARGOCD_SSH_DATA_PATH: '/tmp/argo-e2e/app/config/ssh'
       ARGOCD_TLS_DATA_PATH: '/tmp/argo-e2e/app/config/tls'
@@ -449,7 +449,7 @@ jobs:
       - name: Checkout code
         uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
       - name: Setup Golang
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: GH actions workaround - Kill XSP4 process
@@ -462,9 +462,9 @@ jobs:
           set -x
           curl -sfL https://get.k3s.io | sh -
           sudo chmod -R a+rw /etc/rancher/k3s
-          sudo mkdir -p $HOME/.kube && sudo chown -R runner $HOME/.kube
+          sudo mkdir -p $HOME/.kube && sudo chown -R ubuntu $HOME/.kube
           sudo k3s kubectl config view --raw > $HOME/.kube/config
-          sudo chown runner $HOME/.kube/config
+          sudo chown ubuntu $HOME/.kube/config
           sudo chmod go-r $HOME/.kube/config
           kubectl version
       - name: Restore go build cache
@@ -474,7 +474,7 @@ jobs:
           key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
       - name: Add ~/go/bin to PATH
         run: |
-          echo "/home/runner/go/bin" >> $GITHUB_PATH
+          echo "/home/ubuntu/go/bin" >> $GITHUB_PATH
       - name: Add /usr/local/bin to PATH
         run: |
           echo "/usr/local/bin" >> $GITHUB_PATH
@@ -496,11 +496,11 @@ jobs:
         run: |
           docker pull ghcr.io/dexidp/dex:v2.43.0
           docker pull argoproj/argo-cd-ci-builder:v1.0.0
-          docker pull redis:7.2.7-alpine
+          docker pull redis:8.2.2-alpine
       - name: Create target directory for binaries in the build-process
         run: |
           mkdir -p dist
-          chown runner dist
+          chown ubuntu dist
       - name: Run E2E server and wait for it being available
         timeout-minutes: 30
         run: |
.github/workflows/codeql.yml | 2
@@ -33,7 +33,7 @@ jobs:
 
       # Use correct go version. https://github.com/github/codeql-action/issues/1842#issuecomment-1704398087
       - name: Setup Golang
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           go-version-file: go.mod
.github/workflows/image-reuse.yaml | 4
@@ -67,13 +67,13 @@ jobs:
         if: ${{ github.ref_type != 'tag'}}
 
       - name: Setup Golang
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           go-version: ${{ inputs.go-version }}
           cache: false
 
       - name: Install cosign
-        uses: sigstore/cosign-installer@d58896d6a1865668819e1d91763c7751a165e159 # v3.9.2
+        uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
 
       - uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
       - uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
.github/workflows/image.yaml | 4
@@ -53,7 +53,7 @@ jobs:
     with:
       # Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
       # renovate: datasource=golang-version packageName=golang
-      go-version: 1.25.0
+      go-version: 1.25.6
       platforms: ${{ needs.set-vars.outputs.platforms }}
       push: false
@@ -70,7 +70,7 @@ jobs:
       ghcr_image_name: ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }}
       # Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
       # renovate: datasource=golang-version packageName=golang
-      go-version: 1.25.0
+      go-version: 1.25.6
       platforms: ${{ needs.set-vars.outputs.platforms }}
       push: true
     secrets:
.github/workflows/release.yaml | 76
@@ -11,7 +11,7 @@ permissions: {}
 
 env:
   # renovate: datasource=golang-version packageName=golang
-  GOLANG_VERSION: '1.25.0' # Note: go-version must also be set in job argocd-image.with.go-version
+  GOLANG_VERSION: '1.25.6' # Note: go-version must also be set in job argocd-image.with.go-version
 
 jobs:
   argocd-image:
@@ -25,13 +25,49 @@ jobs:
       quay_image_name: quay.io/argoproj/argocd:${{ github.ref_name }}
       # Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
       # renovate: datasource=golang-version packageName=golang
-      go-version: 1.25.0
+      go-version: 1.25.6
       platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
       push: true
     secrets:
       quay_username: ${{ secrets.RELEASE_QUAY_USERNAME }}
       quay_password: ${{ secrets.RELEASE_QUAY_TOKEN }}
 
+  setup-variables:
+    name: Setup Release Variables
+    if: github.repository == 'argoproj/argo-cd'
+    runs-on: ubuntu-22.04
+    outputs:
+      is_pre_release: ${{ steps.var.outputs.is_pre_release }}
+      is_latest_release: ${{ steps.var.outputs.is_latest_release }}
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
+        with:
+          fetch-depth: 0
+          token: ${{ secrets.GITHUB_TOKEN }}
+      - name: Setup variables
+        id: var
+        run: |
+          set -xue
+          # Fetch all tag information
+          git fetch --prune --tags --force
+
+          LATEST_RELEASE_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | grep -v '-' | tail -n1)
+
+          PRE_RELEASE=false
+          # Check if latest tag is a pre-release
+          if echo ${{ github.ref_name }} | grep -E -- '-rc[0-9]+$';then
+            PRE_RELEASE=true
+          fi
+
+          IS_LATEST=false
+          # Ensure latest release tag matches github.ref_name
+          if [[ $LATEST_RELEASE_TAG == ${{ github.ref_name }} ]];then
+            IS_LATEST=true
+          fi
+          echo "is_pre_release=$PRE_RELEASE" >> $GITHUB_OUTPUT
+          echo "is_latest_release=$IS_LATEST" >> $GITHUB_OUTPUT
+
 argocd-image-provenance:
@@ -50,15 +86,17 @@ jobs:
 
   goreleaser:
     needs:
+      - setup-variables
       - argocd-image
       - argocd-image-provenance
     permissions:
       contents: write # used for uploading assets
     if: github.repository == 'argoproj/argo-cd'
     runs-on: ubuntu-22.04
+    env:
+      GORELEASER_MAKE_LATEST: ${{ needs.setup-variables.outputs.is_latest_release }}
     outputs:
       hashes: ${{ steps.hash.outputs.hashes }}
 
     steps:
       - name: Checkout code
         uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -70,7 +108,7 @@ jobs:
         run: git fetch --force --tags
 
       - name: Setup Golang
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
           cache: false
@@ -142,7 +180,7 @@ jobs:
     permissions:
       contents: write # Needed for release uploads
     outputs:
-      hashes: ${{ steps.sbom-hash.outputs.hashes}}
+      hashes: ${{ steps.sbom-hash.outputs.hashes }}
     if: github.repository == 'argoproj/argo-cd'
     runs-on: ubuntu-22.04
     steps:
@@ -153,7 +191,7 @@ jobs:
           token: ${{ secrets.GITHUB_TOKEN }}
 
       - name: Setup Golang
-        uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+        uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
           cache: false
@@ -198,7 +236,7 @@ jobs:
           echo "hashes=$(sha256sum /tmp/sbom.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"
 
       - name: Upload SBOM
-        uses: softprops/action-gh-release@72f2c25fcb47643c292f7107632f7a47c1df5cd8 # v2.3.2
+        uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836 # v2.3.3
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         with:
@@ -221,6 +259,7 @@ jobs:
 
   post-release:
     needs:
+      - setup-variables
       - argocd-image
       - goreleaser
       - generate-sbom
@@ -229,6 +268,8 @@ jobs:
       pull-requests: write # Needed to create PR for VERSION update.
     if: github.repository == 'argoproj/argo-cd'
     runs-on: ubuntu-22.04
+    env:
+      TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
     steps:
       - name: Checkout code
         uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -242,27 +283,6 @@ jobs:
           git config --global user.email 'ci@argoproj.com'
           git config --global user.name 'CI'
 
-      - name: Check if tag is the latest version and not a pre-release
-        run: |
-          set -xue
-          # Fetch all tag information
-          git fetch --prune --tags --force
-
-          LATEST_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n1)
-
-          PRE_RELEASE=false
-          # Check if latest tag is a pre-release
-          if echo $LATEST_TAG | grep -E -- '-rc[0-9]+$';then
-            PRE_RELEASE=true
-          fi
-
-          # Ensure latest tag matches github.ref_name & not a pre-release
-          if [[ $LATEST_TAG == ${{ github.ref_name }} ]] && [[ $PRE_RELEASE != 'true' ]];then
-            echo "TAG_STABLE=true" >> $GITHUB_ENV
-          else
-            echo "TAG_STABLE=false" >> $GITHUB_ENV
-          fi
-
       - name: Update stable tag to latest version
         run: |
           git tag -f stable ${{ github.ref_name }}
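The pre-release classification that the new `setup-variables` job centralizes can be checked in isolation. This sketch uses sample tag names (not real repository tags); it only exercises the `-rc[0-9]+$` regex, not the `versionsort.suffix=-rc` tag sorting, which needs an actual git repository:

```shell
# Classify a tag name the same way the workflow's PRE_RELEASE check does:
# anything ending in -rc<digits> counts as a pre-release candidate.
is_pre_release() {
  # `--` stops grep from treating a leading '-' in the pattern as an option
  if echo "$1" | grep -qE -- '-rc[0-9]+$'; then
    echo true
  else
    echo false
  fi
}

echo "$(is_pre_release v3.2.0-rc1)"
echo "$(is_pre_release v3.2.0)"
```

Splitting this into its own job lets both `goreleaser` (via `GORELEASER_MAKE_LATEST`) and `post-release` (via `TAG_STABLE`) consume the same answer, replacing the duplicated inline check that the diff removes from `post-release`.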
.github/workflows/renovate.yaml | 11
@@ -19,10 +19,17 @@ jobs:
           private-key: ${{ secrets.RENOVATE_APP_PRIVATE_KEY }}
 
       - name: Checkout
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # 4.2.2
+        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # 6.0.1
+
+      # Some codegen commands require Go to be setup
+      - name: Setup Golang
+        uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
+        with:
+          # renovate: datasource=golang-version packageName=golang
+          go-version: 1.25.6
 
       - name: Self-hosted Renovate
-        uses: renovatebot/github-action@b11417b9eaac3145fe9a8544cee66503724e32b6 #43.0.8
+        uses: renovatebot/github-action@f8af9272cd94a4637c29f60dea8731afd3134473 #43.0.12
         with:
           configurationFile: .github/configs/renovate-config.js
           token: '${{ steps.get_token.outputs.token }}'
1
.gitignore
vendored
@@ -20,6 +20,7 @@ node_modules/
.kube/
./test/cmp/*.sock
.envrc.remote
.mirrord/
.*.swp
rerunreport.txt
@@ -49,13 +49,14 @@ archives:
- argocd-cli
name_template: |-
{{ .ProjectName }}-{{ .Os }}-{{ .Arch }}
formats: [ binary ]
formats: [binary]

checksum:
name_template: 'cli_checksums.txt'
algorithm: sha256

release:
make_latest: '{{ .Env.GORELEASER_MAKE_LATEST }}'
prerelease: auto
draft: false
header: |
@@ -24,7 +24,6 @@ packages:
Renderer: {}
github.com/argoproj/argo-cd/v3/commitserver/apiclient:
interfaces:
Clientset: {}
CommitServiceClient: {}
github.com/argoproj/argo-cd/v3/commitserver/commit:
interfaces:
@@ -35,6 +34,7 @@ packages:
github.com/argoproj/argo-cd/v3/controller/hydrator:
interfaces:
Dependencies: {}
RepoGetter: {}
github.com/argoproj/argo-cd/v3/pkg/apiclient/cluster:
interfaces:
ClusterServiceServer: {}
@@ -12,3 +12,8 @@
/.github/** @argoproj/argocd-approvers @argoproj/argocd-approvers-ci
/.goreleaser.yaml @argoproj/argocd-approvers @argoproj/argocd-approvers-ci
/sonar-project.properties @argoproj/argocd-approvers @argoproj/argocd-approvers-ci

# CLI
/cmd/argocd/** @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
/cmd/main.go @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
/docs/operator-manual/ @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
@@ -4,7 +4,7 @@ ARG BASE_IMAGE=docker.io/library/ubuntu:25.04@sha256:10bb10bb062de665d4dc3e0ea36
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.0@sha256:9e56f0d0f043a68bb8c47c819e47dc29f6e8f5129b8885bed9d43f058f7f3ed6 AS builder
FROM docker.io/library/golang:1.25.6@sha256:fc24d3881a021e7b968a4610fc024fba749f98fe5c07d4f28e6cfa14dc65a84c AS builder

WORKDIR /tmp

@@ -103,7 +103,7 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.0@sha256:9e56f0d0f043a68bb8c47c819e47dc29f6e8f5129b8885bed9d43f058f7f3ed6 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.6@sha256:fc24d3881a021e7b968a4610fc024fba749f98fe5c07d4f28e6cfa14dc65a84c AS argocd-build

WORKDIR /go/src/github.com/argoproj/argo-cd

@@ -1,4 +1,4 @@
FROM docker.io/library/golang:1.25.0@sha256:9e56f0d0f043a68bb8c47c819e47dc29f6e8f5129b8885bed9d43f058f7f3ed6
FROM docker.io/library/golang:1.25.6@sha256:fc24d3881a021e7b968a4610fc024fba749f98fe5c07d4f28e6cfa14dc65a84c

ENV DEBIAN_FRONTEND=noninteractive
13
Makefile
@@ -43,6 +43,17 @@ endif
DOCKER_SRCDIR?=$(GOPATH)/src
DOCKER_WORKDIR?=/go/src/github.com/argoproj/argo-cd

# Allows you to control which Docker network the test-util containers attach to.
# This is particularly useful if you are running Kubernetes in Docker (e.g., k3d)
# and want the test containers to reach the Kubernetes API via an already-existing Docker network.
DOCKER_NETWORK ?= default

ifneq ($(DOCKER_NETWORK),default)
DOCKER_NETWORK_ARG := --network $(DOCKER_NETWORK)
else
DOCKER_NETWORK_ARG :=
endif

ARGOCD_PROCFILE?=Procfile

# pointing to python 3.7 to match https://github.com/argoproj/argo-cd/blob/master/.readthedocs.yml
@@ -117,6 +128,7 @@ define run-in-test-server
-p ${ARGOCD_E2E_APISERVER_PORT}:8080 \
-p 4000:4000 \
-p 5000:5000 \
$(DOCKER_NETWORK_ARG)\
$(PODMAN_ARGS) \
$(TEST_TOOLS_PREFIX)$(TEST_TOOLS_IMAGE):$(TEST_TOOLS_TAG) \
bash -c "$(1)"
@@ -138,6 +150,7 @@ define run-in-test-client
-v ${GOCACHE}:/tmp/go-build-cache${VOLUME_MOUNT} \
-v ${HOME}/.kube:/home/user/.kube${VOLUME_MOUNT} \
-w ${DOCKER_WORKDIR} \
$(DOCKER_NETWORK_ARG)\
$(PODMAN_ARGS) \
$(TEST_TOOLS_PREFIX)$(TEST_TOOLS_IMAGE):$(TEST_TOOLS_TAG) \
bash -c "$(1)"
8
Tiltfile
@@ -10,6 +10,14 @@ cmd_button(
text='make codegen-local',
)

cmd_button(
'make test-local',
argv=['sh', '-c', 'make test-local'],
location=location.NAV,
icon_name='science',
text='make test-local',
)

# add ui button in web ui to run make codegen-local (top nav)
cmd_button(
'make cli-local',
2
USERS.md
@@ -326,8 +326,10 @@ Currently, the following organizations are **officially** using Argo CD:
1. [SEEK](https://seek.com.au)
1. [SEKAI](https://www.sekai.io/)
1. [Semgrep](https://semgrep.com)
1. [Seznam.cz](https://o-seznam.cz/)
1. [Shield](https://shield.com)
1. [Shipfox](https://www.shipfox.io)
1. [Shock Media](https://www.shockmedia.nl)
1. [SI Analytics](https://si-analytics.ai)
1. [Sidewalk Entertainment](https://sidewalkplay.com/)
1. [Skit](https://skit.ai/)
@@ -75,6 +75,7 @@ const (
var defaultPreservedAnnotations = []string{
NotifiedAnnotationKey,
argov1alpha1.AnnotationKeyRefresh,
argov1alpha1.AnnotationKeyHydrate,
}

type deleteInOrder struct {
@@ -100,6 +101,7 @@ type ApplicationSetReconciler struct {
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
}

// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -251,6 +253,16 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, fmt.Errorf("failed to perform progressive sync reconciliation for application set: %w", err)
}
}
} else {
// Progressive Sync is disabled, clear any existing applicationStatus to prevent stale data
if len(applicationSetInfo.Status.ApplicationStatus) > 0 {
logCtx.Infof("Progressive Sync disabled, removing %v AppStatus entries from ApplicationSet %v", len(applicationSetInfo.Status.ApplicationStatus), applicationSetInfo.Name)

err := r.setAppSetApplicationStatus(ctx, logCtx, &applicationSetInfo, []argov1alpha1.ApplicationSetApplicationStatus{})
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to clear AppSet application statuses when Progressive Sync is disabled for %v: %w", applicationSetInfo.Name, err)
}
}
}

var validApps []argov1alpha1.Application
@@ -640,8 +652,9 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
Watches(
&corev1.Secret{},
&clusterSecretEventHandler{
Client: mgr.GetClient(),
Log: log.WithField("type", "createSecretEventHandler"),
Client: mgr.GetClient(),
Log: log.WithField("type", "createSecretEventHandler"),
ApplicationSetNamespaces: r.ApplicationSetNamespaces,
}).
Complete(r)
}
@@ -857,7 +870,7 @@ func (r *ApplicationSetReconciler) removeFinalizerOnInvalidDestination(ctx conte

// Detect if the destination is invalid (name doesn't correspond to a matching cluster)
if destCluster, err := argoutil.GetDestinationCluster(ctx, app.Spec.Destination, r.ArgoDB); err != nil {
appLog.Warnf("The destination cluster for %s couldn't be found: %v", app.Name, err)
appLog.Warnf("The destination cluster for %s could not be found: %v", app.Name, err)
validDestination = false
} else {
// Detect if the destination's server field does not match an existing cluster
@@ -876,7 +889,7 @@ func (r *ApplicationSetReconciler) removeFinalizerOnInvalidDestination(ctx conte
}

if !matchingCluster {
appLog.Warnf("A match for the destination cluster for %s, by server url, couldn't be found", app.Name)
appLog.Warnf("A match for the destination cluster for %s, by server url, could not be found", app.Name)
}

validDestination = matchingCluster
@@ -1398,7 +1411,13 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
sort.Slice(statuses, func(i, j int) bool {
return statuses[i].Name < statuses[j].Name
})
resourcesCount := int64(len(statuses))
if r.MaxResourcesStatusCount > 0 && len(statuses) > r.MaxResourcesStatusCount {
logCtx.Warnf("Truncating ApplicationSet %s resource status from %d to max allowed %d entries", appset.Name, len(statuses), r.MaxResourcesStatusCount)
statuses = statuses[:r.MaxResourcesStatusCount]
}
appset.Status.Resources = statuses
appset.Status.ResourcesCount = resourcesCount
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
@@ -1411,6 +1430,7 @@
}

updatedAppset.Status.Resources = appset.Status.Resources
updatedAppset.Status.ResourcesCount = resourcesCount

// Update the newly fetched object with new status resources
err := r.Client.Status().Update(ctx, updatedAppset)
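The hunks above cap the size of `status.resources` while still reporting the full total via the new `ResourcesCount` field. The truncate-but-count pattern can be sketched with simplified types (not the controller's actual code):

```go
package main

import "fmt"

// resourceStatus stands in for v1alpha1.ResourceStatus.
type resourceStatus struct{ Name string }

// truncateStatuses keeps at most max entries (max <= 0 means unlimited)
// but returns the pre-truncation total, analogous to what the controller
// now records in ResourcesCount so the count stays accurate even when
// the Resources list is clipped to keep the status object small.
func truncateStatuses(statuses []resourceStatus, max int) ([]resourceStatus, int64) {
	total := int64(len(statuses))
	if max > 0 && len(statuses) > max {
		statuses = statuses[:max]
	}
	return statuses, total
}

func main() {
	s := []resourceStatus{{"app1"}, {"app2"}, {"app3"}}
	kept, total := truncateStatuses(s, 1)
	fmt.Println(len(kept), total) // 1 3
}
```

Note the slice is truncated only after sorting by name, so the retained entries are deterministic across reconciliations.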
@@ -589,6 +589,72 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
},
},
},
{
name: "Ensure that hydrate annotation is preserved from an existing app",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSetSpec{
Template: v1alpha1.ApplicationSetTemplate{
Spec: v1alpha1.ApplicationSpec{
Project: "project",
},
},
},
},
existingApps: []v1alpha1.Application{
{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "2",
Annotations: map[string]string{
"annot-key": "annot-value",
v1alpha1.AnnotationKeyHydrate: string(v1alpha1.RefreshTypeNormal),
},
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
},
},
},
desiredApps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
},
},
},
expected: []v1alpha1.Application{
{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "3",
Annotations: map[string]string{
v1alpha1.AnnotationKeyHydrate: string(v1alpha1.RefreshTypeNormal),
},
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
},
},
},
},
{
name: "Ensure that configured preserved annotations are preserved from an existing app",
appSet: v1alpha1.ApplicationSet{
@@ -6410,10 +6476,11 @@ func TestUpdateResourceStatus(t *testing.T) {
require.NoError(t, err)

for _, cc := range []struct {
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
maxResourcesStatusCount int
}{
{
name: "handles an empty application list",
@@ -6577,6 +6644,73 @@
apps: []v1alpha1.Application{},
expectedResources: nil,
},
{
name: "truncates resources status list to",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "argocd",
},
Status: v1alpha1.ApplicationSetStatus{
Resources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
{
Name: "app2",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
},
},
},
apps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.AppHealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Name: "app2",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.AppHealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
},
expectedResources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeSynced,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
maxResourcesStatusCount: 1,
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
@@ -6587,13 +6721,14 @@
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)

r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
MaxResourcesStatusCount: cc.maxResourcesStatusCount,
}

err := r.updateResourcesStatus(t.Context(), log.NewEntry(log.StandardLogger()), &cc.appSet, cc.apps)
@@ -7544,109 +7679,81 @@ func TestSyncApplication(t *testing.T) {
}
}

func TestIsRollingSyncDeletionReversed(t *testing.T) {
tests := []struct {
name string
appset *v1alpha1.ApplicationSet
expected bool
func TestReconcileProgressiveSyncDisabled(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)

kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)

for _, cc := range []struct {
name string
appSet v1alpha1.ApplicationSet
enableProgressiveSyncs bool
expectedAppStatuses []v1alpha1.ApplicationSetApplicationStatus
}{
{
name: "Deletion Order on strategy is set as Reverse",
appset: &v1alpha1.ApplicationSet{
name: "clears applicationStatus when Progressive Sync is disabled",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "test-appset",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{
{
MatchExpressions: []v1alpha1.ApplicationMatchExpression{
{
Key: "environment",
Operator: "In",
Values: []string{
"dev",
},
},
},
},
{
MatchExpressions: []v1alpha1.ApplicationMatchExpression{
{
Key: "environment",
Operator: "In",
Values: []string{
"staging",
},
},
},
},
},
Generators: []v1alpha1.ApplicationSetGenerator{},
Template: v1alpha1.ApplicationSetTemplate{},
},
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "test-appset-guestbook",
Message: "Application resource became Healthy, updating status from Progressing to Healthy.",
Status: "Healthy",
Step: "1",
},
DeletionOrder: ReverseDeletionOrder,
},
},
},
expected: true,
enableProgressiveSyncs: false,
expectedAppStatuses: nil,
},
{
name: "Deletion Order on strategy is set as AllAtOnce",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{},
},
DeletionOrder: AllAtOnceDeletionOrder,
},
} {
t.Run(cc.name, func(t *testing.T) {
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&cc.appSet).WithStatusSubresource(&cc.appSet).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()

argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)

r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Renderer: &utils.Render{},
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
EnableProgressiveSyncs: cc.enableProgressiveSyncs,
}

req := ctrl.Request{
NamespacedName: types.NamespacedName{
Namespace: cc.appSet.Namespace,
Name: cc.appSet.Name,
},
},
expected: false,
},
{
name: "Deletion Order on strategy is set as Reverse but no steps in RollingSync",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{},
},
DeletionOrder: ReverseDeletionOrder,
},
},
},
expected: false,
},
{
name: "Deletion Order on strategy is set as Reverse, but AllAtOnce is explicitly set",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "AllAtOnce",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{},
},
DeletionOrder: ReverseDeletionOrder,
},
},
},
expected: false,
},
{
name: "Strategy is Nil",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{},
},
},
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := isProgressiveSyncDeletionOrderReversed(tt.appset)
assert.Equal(t, tt.expected, result)
}

// Run reconciliation
_, err = r.Reconcile(t.Context(), req)
require.NoError(t, err)

// Fetch the updated ApplicationSet
var updatedAppSet v1alpha1.ApplicationSet
err = r.Get(t.Context(), req.NamespacedName, &updatedAppSet)
require.NoError(t, err)

// Verify the applicationStatus field
assert.Equal(t, cc.expectedAppStatuses, updatedAppSet.Status.ApplicationStatus, "applicationStatus should match expected value")
})
}
}
@@ -14,6 +14,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"

"github.com/argoproj/argo-cd/v3/applicationset/utils"
"github.com/argoproj/argo-cd/v3/common"
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
@@ -22,8 +23,9 @@ import (
// requeue any related ApplicationSets.
type clusterSecretEventHandler struct {
// handler.EnqueueRequestForOwner
Log log.FieldLogger
Client client.Client
Log log.FieldLogger
Client client.Client
ApplicationSetNamespaces []string
}

func (h *clusterSecretEventHandler) Create(ctx context.Context, e event.CreateEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
@@ -68,6 +70,10 @@ func (h *clusterSecretEventHandler) queueRelatedAppGenerators(ctx context.Contex

h.Log.WithField("count", len(appSetList.Items)).Info("listed ApplicationSets")
for _, appSet := range appSetList.Items {
if !utils.IsNamespaceAllowed(h.ApplicationSetNamespaces, appSet.GetNamespace()) {
// Ignore it as not part of the allowed list of namespaces in which to watch Appsets
continue
}
foundClusterGenerator := false
for _, generator := range appSet.Spec.Generators {
if generator.Clusters != nil {
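The new `ApplicationSetNamespaces` field above lets the cluster-secret handler skip ApplicationSets that live outside the allowed namespaces before enqueuing them. A toy version of such an allow-list filter (exact-match only; the repository's `utils.IsNamespaceAllowed` is the real implementation and may behave differently, e.g. support patterns):

```go
package main

import "fmt"

// isNamespaceAllowed is a simplified stand-in for a namespace allow-list
// check: the ApplicationSet's namespace must appear in the configured list.
func isNamespaceAllowed(allowed []string, ns string) bool {
	for _, a := range allowed {
		if a == ns {
			return true
		}
	}
	return false
}

func main() {
	allowed := []string{"argocd"}
	fmt.Println(isNamespaceAllowed(allowed, "argocd"))
	fmt.Println(isNamespaceAllowed(allowed, "my-namespace-not-allowed"))
}
```

This mirrors the new test case later in the diff, where an ApplicationSet in `my-namespace-not-allowed` produces no reconcile requests.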
@@ -137,7 +137,7 @@ func TestClusterEventHandler(t *testing.T) {
{
ObjectMeta: metav1.ObjectMeta{
Name: "my-app-set",
Namespace: "another-namespace",
Namespace: "argocd",
},
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{
@@ -171,9 +171,37 @@
},
},
expectedRequests: []reconcile.Request{
{NamespacedName: types.NamespacedName{Namespace: "another-namespace", Name: "my-app-set"}},
{NamespacedName: types.NamespacedName{Namespace: "argocd", Name: "my-app-set"}},
},
},
{
name: "cluster generators in other namespaces should not match",
items: []argov1alpha1.ApplicationSet{
{
ObjectMeta: metav1.ObjectMeta{
Name: "my-app-set",
Namespace: "my-namespace-not-allowed",
},
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{
{
Clusters: &argov1alpha1.ClusterGenerator{},
},
},
},
},
},
secret: corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
},
},
},
expectedRequests: []reconcile.Request{},
},
{
name: "non-argo cd secret should not match",
items: []argov1alpha1.ApplicationSet{
@@ -552,8 +580,9 @@
fakeClient := fake.NewClientBuilder().WithScheme(scheme).WithLists(&appSetList).Build()

handler := &clusterSecretEventHandler{
Client: fakeClient,
Log: log.WithField("type", "createSecretEventHandler"),
Client: fakeClient,
Log: log.WithField("type", "createSecretEventHandler"),
ApplicationSetNamespaces: []string{"argocd"},
}

mockAddRateLimitingInterface := mockAddRateLimitingInterface{}
@@ -29,10 +29,10 @@ type GitGenerator struct {
}

// NewGitGenerator creates a new instance of Git Generator
func NewGitGenerator(repos services.Repos, namespace string) Generator {
func NewGitGenerator(repos services.Repos, controllerNamespace string) Generator {
g := &GitGenerator{
repos: repos,
namespace: namespace,
namespace: controllerNamespace,
}

return g
@@ -78,11 +78,11 @@ func (g *GitGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.Applic
if !strings.Contains(appSet.Spec.Template.Spec.Project, "{{") {
project := appSet.Spec.Template.Spec.Project
appProject := &argoprojiov1alpha1.AppProject{}
namespace := g.namespace
if namespace == "" {
namespace = appSet.Namespace
controllerNamespace := g.namespace
if controllerNamespace == "" {
controllerNamespace = appSet.Namespace
}
if err := client.Get(context.TODO(), types.NamespacedName{Name: project, Namespace: namespace}, appProject); err != nil {
if err := client.Get(context.TODO(), types.NamespacedName{Name: project, Namespace: controllerNamespace}, appProject); err != nil {
return nil, fmt.Errorf("error getting project %s: %w", project, err)
}
// we need to verify the signature on the Git revision if GPG is enabled
@@ -119,15 +119,15 @@ func (g *PullRequestGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha
"author": pull.Author,
}

err := appendTemplatedValues(appSetGenerator.PullRequest.Values, paramMap, applicationSetInfo.Spec.GoTemplate, applicationSetInfo.Spec.GoTemplateOptions)
if err != nil {
return nil, fmt.Errorf("failed to append templated values: %w", err)
}

// PR lables will only be supported for Go Template appsets, since fasttemplate will be deprecated.
if applicationSetInfo != nil && applicationSetInfo.Spec.GoTemplate {
paramMap["labels"] = pull.Labels
}

err := appendTemplatedValues(appSetGenerator.PullRequest.Values, paramMap, applicationSetInfo.Spec.GoTemplate, applicationSetInfo.Spec.GoTemplateOptions)
if err != nil {
return nil, fmt.Errorf("failed to append templated values: %w", err)
}
params = append(params, paramMap)
}
return params, nil
@@ -277,6 +277,51 @@ func TestPullRequestGithubGenerateParams(t *testing.T) {
},
},
},
{
selectFunc: func(context.Context, *argoprojiov1alpha1.PullRequestGenerator, *argoprojiov1alpha1.ApplicationSet) (pullrequest.PullRequestService, error) {
return pullrequest.NewFakeService(
ctx,
[]*pullrequest.PullRequest{
{
Number: 1,
Title: "title1",
Branch: "my_branch",
TargetBranch: "master",
HeadSHA: "abcd",
Author: "testName",
Labels: []string{"preview", "preview:team1"},
},
},
nil,
)
},
values: map[string]string{
"preview_env": "{{ regexFind \"(team1|team2)\" (.labels | join \",\") }}",
},
expected: []map[string]any{
{
"number": "1",
"title": "title1",
"branch": "my_branch",
"branch_slug": "my-branch",
"target_branch": "master",
"target_branch_slug": "master",
"head_sha": "abcd",
"head_short_sha": "abcd",
"head_short_sha_7": "abcd",
"author": "testName",
"labels": []string{"preview", "preview:team1"},
"values": map[string]string{"preview_env": "team1"},
},
},
expectedErr: nil,
applicationSet: argoprojiov1alpha1.ApplicationSet{
Spec: argoprojiov1alpha1.ApplicationSetSpec{
// Application set is using fasttemplate.
GoTemplate: true,
},
},
},
}

for _, c := range cases {
@@ -10,15 +10,15 @@ import (
"github.com/argoproj/argo-cd/v3/applicationset/services"
)

func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, namespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, controllerNamespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
terminalGenerators := map[string]Generator{
"List": NewListGenerator(),
"Clusters": NewClusterGenerator(ctx, c, k8sClient, namespace),
"Git": NewGitGenerator(argoCDService, namespace),
"Clusters": NewClusterGenerator(ctx, c, k8sClient, controllerNamespace),
"Git": NewGitGenerator(argoCDService, controllerNamespace),
"SCMProvider": NewSCMProviderGenerator(c, scmConfig),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, namespace),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, controllerNamespace),
"PullRequest": NewPullRequestGenerator(c, scmConfig),
"Plugin": NewPluginGenerator(c, namespace),
"Plugin": NewPluginGenerator(c, controllerNamespace),
}

nestedGenerators := map[string]Generator{
@@ -399,19 +399,19 @@ func addInvalidGeneratorNames(names map[string]bool, applicationSetInfo *argoapp
 	var values map[string]any
 	err := json.Unmarshal([]byte(config), &values)
 	if err != nil {
-		log.Warnf("couldn't unmarshal kubectl.kubernetes.io/last-applied-configuration: %+v", config)
+		log.Warnf("could not unmarshal kubectl.kubernetes.io/last-applied-configuration: %+v", config)
 		return
 	}

 	spec, ok := values["spec"].(map[string]any)
 	if !ok {
-		log.Warn("coundn't get spec from kubectl.kubernetes.io/last-applied-configuration annotation")
+		log.Warn("could not get spec from kubectl.kubernetes.io/last-applied-configuration annotation")
 		return
 	}

 	generators, ok := spec["generators"].([]any)
 	if !ok {
-		log.Warn("coundn't get generators from kubectl.kubernetes.io/last-applied-configuration annotation")
+		log.Warn("could not get generators from kubectl.kubernetes.io/last-applied-configuration annotation")
 		return
 	}

@@ -422,7 +422,7 @@ func addInvalidGeneratorNames(names map[string]bool, applicationSetInfo *argoapp
 	generator, ok := generators[index].(map[string]any)
 	if !ok {
-		log.Warn("coundn't get generator from kubectl.kubernetes.io/last-applied-configuration annotation")
+		log.Warn("could not get generator from kubectl.kubernetes.io/last-applied-configuration annotation")
 		return
 	}
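The hunk above navigates the untyped JSON of the `kubectl.kubernetes.io/last-applied-configuration` annotation with a checked type assertion at every step. The pattern can be sketched in isolation; `generatorsFromLastApplied` is an illustrative helper, not a function from the argo-cd codebase:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// generatorsFromLastApplied mirrors the pattern above: unmarshal the
// last-applied-configuration annotation into untyped maps and walk down
// to spec.generators with a checked type assertion at every step.
func generatorsFromLastApplied(config string) ([]any, bool) {
	var values map[string]any
	if err := json.Unmarshal([]byte(config), &values); err != nil {
		return nil, false
	}
	spec, ok := values["spec"].(map[string]any)
	if !ok {
		return nil, false
	}
	generators, ok := spec["generators"].([]any)
	if !ok {
		return nil, false
	}
	return generators, true
}

func main() {
	annotation := `{"spec":{"generators":[{"list":{}},{"git":{}}]}}`
	gens, ok := generatorsFromLastApplied(annotation)
	fmt.Println(ok, len(gens)) // true 2
}
```

Each failed assertion returns early instead of panicking, which is why the original code logs a warning and returns at each level.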
@@ -26,10 +26,14 @@ import (
 	"github.com/go-playground/webhooks/v6/github"
 	"github.com/go-playground/webhooks/v6/gitlab"
 	log "github.com/sirupsen/logrus"
+
+	"github.com/argoproj/argo-cd/v3/util/guard"
 )

 const payloadQueueSize = 50000

+const panicMsgAppSet = "panic while processing applicationset-controller webhook event"
+
 type WebhookHandler struct {
 	sync.WaitGroup // for testing
 	github *github.Webhook
@@ -102,6 +106,7 @@ func NewWebhookHandler(webhookParallelism int, argocdSettingsMgr *argosettings.S
 }

 func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
+	compLog := log.WithField("component", "applicationset-webhook")
 	for i := 0; i < webhookParallelism; i++ {
 		h.Add(1)
 		go func() {
@@ -111,7 +116,7 @@ func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
 			if !ok {
 				return
 			}
-			h.HandleEvent(payload)
+			guard.RecoverAndLog(func() { h.HandleEvent(payload) }, compLog, panicMsgAppSet)
 		}
 	}()
 }
22	assets/swagger.json	generated
@@ -1049,6 +1049,11 @@
 	"collectionFormat": "multi",
 	"name": "revisions",
 	"in": "query"
 },
+{
+	"type": "boolean",
+	"name": "noCache",
+	"in": "query"
+}
 ],
 "responses": {
@@ -7317,6 +7322,11 @@
 	"items": {
 		"$ref": "#/definitions/applicationv1alpha1ResourceStatus"
 	}
 },
+"resourcesCount": {
+	"description": "ResourcesCount is the total number of resources managed by this application set. The count may be higher than actual number of items in the Resources field when\nthe number of managed resources exceeds the limit imposed by the controller (to avoid making the status field too large).",
+	"type": "integer",
+	"format": "int64"
+}
 }
 },
@@ -9960,6 +9970,10 @@
 	"description": "Limit is the maximum number of attempts for retrying a failed sync. If set to 0, no retries will be performed.",
 	"type": "integer",
 	"format": "int64"
 },
+"refresh": {
+	"type": "boolean",
+	"title": "Refresh indicates if the latest revision should be used on retry instead of the initial one (default: false)"
+}
 }
 },
@@ -10540,7 +10554,7 @@
 	"type": "boolean",
 	"title": "AllowEmpty allows apps have zero live resources (default: false)"
 },
-"enable": {
+"enabled": {
 	"type": "boolean",
 	"title": "Enable allows apps to explicitly control automated sync"
 },
@@ -10559,12 +10573,12 @@
 	"type": "object",
 	"properties": {
 		"path": {
-			"description": "Path is a directory path within the git repository where hydrated manifests should be committed to and synced\nfrom. If hydrateTo is set, this is just the path from which hydrated manifests will be synced.",
+			"description": "Path is a directory path within the git repository where hydrated manifests should be committed to and synced\nfrom. The Path should never point to the root of the repo. If hydrateTo is set, this is just the path from which\nhydrated manifests will be synced.\n\n+kubebuilder:validation:Required\n+kubebuilder:validation:MinLength=1\n+kubebuilder:validation:Pattern=`^.{2,}|[^./]$`",
 			"type": "string"
 		},
 		"targetBranch": {
-			"type": "string",
-			"title": "TargetBranch is the branch to which hydrated manifests should be committed"
+			"description": "TargetBranch is the branch from which hydrated manifests will be synced.\nIf HydrateTo is not set, this is also the branch to which hydrated manifests are committed.",
+			"type": "string"
 		}
 	}
 },
@@ -14,6 +14,7 @@ import (
 	"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
 	logutils "github.com/argoproj/argo-cd/v3/util/log"
+	"github.com/argoproj/argo-cd/v3/util/profile"
 	"github.com/argoproj/argo-cd/v3/util/tls"

 	"github.com/argoproj/argo-cd/v3/applicationset/controllers"
@@ -79,6 +80,7 @@ func NewCommand() *cobra.Command {
 	enableScmProviders bool
 	webhookParallelism int
 	tokenRefStrictMode bool
+	maxResourcesStatusCount int
 	)
 	scheme := runtime.NewScheme()
 	_ = clientgoscheme.AddToScheme(scheme)
@@ -169,6 +171,15 @@ func NewCommand() *cobra.Command {
 	log.Error(err, "unable to start manager")
 	os.Exit(1)
 }

+	pprofMux := http.NewServeMux()
+	profile.RegisterProfiler(pprofMux)
+	// This looks a little strange. Eg, not using ctrl.Options PprofBindAddress and then adding the pprof mux
+	// to the metrics server. However, it allows for the controller to dynamically expose the pprof endpoints
+	// and use the existing metrics server, the same pattern that the application controller and api-server follow.
+	if err = mgr.AddMetricsServerExtraHandler("/debug/pprof/", pprofMux); err != nil {
+		log.Error(err, "failed to register pprof handlers")
+	}
 	dynamicClient, err := dynamic.NewForConfig(mgr.GetConfig())
 	errors.CheckError(err)
 	k8sClient, err := kubernetes.NewForConfig(mgr.GetConfig())
@@ -231,6 +242,7 @@ func NewCommand() *cobra.Command {
 	GlobalPreservedAnnotations: globalPreservedAnnotations,
 	GlobalPreservedLabels: globalPreservedLabels,
 	Metrics: &metrics,
+	MaxResourcesStatusCount: maxResourcesStatusCount,
 	}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
 	log.Error(err, "unable to create controller", "controller", "ApplicationSet")
 	os.Exit(1)
@@ -275,6 +287,7 @@ func NewCommand() *cobra.Command {
 	command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
 	command.Flags().StringSliceVar(&metricsAplicationsetLabels, "metrics-applicationset-labels", []string{}, "List of Application labels that will be added to the argocd_applicationset_labels metric")
 	command.Flags().BoolVar(&enableGitHubAPIMetrics, "enable-github-api-metrics", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_GITHUB_API_METRICS", false), "Enable GitHub API metrics for generators that use the GitHub API")
+	command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 0, 0, math.MaxInt), "Max number of resources stored in appset status.")

 	return &command
 }
@@ -35,7 +35,7 @@ func NewCommand() *cobra.Command {
 	if nonce == "" {
 		errors.CheckError(fmt.Errorf("%s is not set", askpass.ASKPASS_NONCE_ENV))
 	}
-	conn, err := grpc_util.BlockingDial(ctx, "unix", askpass.SocketPath, nil, grpc.WithTransportCredentials(insecure.NewCredentials()))
+	conn, err := grpc_util.BlockingNewClient(ctx, "unix", askpass.SocketPath, nil, grpc.WithTransportCredentials(insecure.NewCredentials()))
 	errors.CheckError(err)
 	defer utilio.Close(conn)
 	client := askpass.NewAskPassServiceClient(conn)
@@ -11,16 +11,6 @@ import (
 	"sync"
 	"syscall"

-	"github.com/argoproj/argo-cd/v3/common"
-	"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
-
-	"github.com/argoproj/argo-cd/v3/util/env"
-	"github.com/argoproj/argo-cd/v3/util/errors"
-	service "github.com/argoproj/argo-cd/v3/util/notification/argocd"
-	"github.com/argoproj/argo-cd/v3/util/tls"
-
-	notificationscontroller "github.com/argoproj/argo-cd/v3/notification_controller/controller"
-
 	"github.com/argoproj/notifications-engine/pkg/controller"
 	"github.com/prometheus/client_golang/prometheus"
 	"github.com/prometheus/client_golang/prometheus/promhttp"
@@ -30,22 +20,21 @@ import (
 	"k8s.io/client-go/kubernetes"
 	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
 	"k8s.io/client-go/tools/clientcmd"
+
+	"github.com/argoproj/argo-cd/v3/common"
+	notificationscontroller "github.com/argoproj/argo-cd/v3/notification_controller/controller"
+	"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
+	"github.com/argoproj/argo-cd/v3/util/cli"
+	"github.com/argoproj/argo-cd/v3/util/env"
+	"github.com/argoproj/argo-cd/v3/util/errors"
+	service "github.com/argoproj/argo-cd/v3/util/notification/argocd"
+	"github.com/argoproj/argo-cd/v3/util/tls"
 )

 const (
 	defaultMetricsPort = 9001
 )

-func addK8SFlagsToCmd(cmd *cobra.Command) clientcmd.ClientConfig {
-	loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
-	loadingRules.DefaultClientConfig = &clientcmd.DefaultClientConfig
-	overrides := clientcmd.ConfigOverrides{}
-	kflags := clientcmd.RecommendedConfigOverrideFlags("")
-	cmd.PersistentFlags().StringVar(&loadingRules.ExplicitPath, "kubeconfig", "", "Path to a kube config. Only required if out-of-cluster")
-	clientcmd.BindOverrideFlags(&overrides, cmd.PersistentFlags(), kflags)
-	return clientcmd.NewInteractiveDeferredLoadingClientConfig(loadingRules, &overrides, os.Stdin)
-}
-
 func NewCommand() *cobra.Command {
 	var (
 		clientConfig clientcmd.ClientConfig
@@ -174,7 +163,7 @@ func NewCommand() *cobra.Command {
 			return nil
 		},
 	}
-	clientConfig = addK8SFlagsToCmd(&command)
+	clientConfig = cli.AddKubectlFlagsToCmd(&command)
 	command.Flags().IntVar(&processorsCount, "processors-count", 1, "Processors count.")
 	command.Flags().StringVar(&appLabelSelector, "app-label-selector", "", "App label selector.")
 	command.Flags().StringVar(&logLevel, "loglevel", env.StringFromEnv("ARGOCD_NOTIFICATIONS_CONTROLLER_LOGLEVEL", "info"), "Set the logging level. One of: debug|info|warn|error")
@@ -80,6 +80,7 @@ func NewCommand() *cobra.Command {
 	includeHiddenDirectories bool
 	cmpUseManifestGeneratePaths bool
 	ociMediaTypes []string
+	enableBuiltinGitConfig bool
 	)
 	command := cobra.Command{
 		Use: cliName,
@@ -155,6 +156,7 @@ func NewCommand() *cobra.Command {
 	IncludeHiddenDirectories: includeHiddenDirectories,
 	CMPUseManifestGeneratePaths: cmpUseManifestGeneratePaths,
 	OCIMediaTypes: ociMediaTypes,
+	EnableBuiltinGitConfig: enableBuiltinGitConfig,
 	}, askPassServer)
 	errors.CheckError(err)

@@ -264,6 +266,7 @@ func NewCommand() *cobra.Command {
 	command.Flags().BoolVar(&includeHiddenDirectories, "include-hidden-directories", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_INCLUDE_HIDDEN_DIRECTORIES", false), "Include hidden directories from Git")
 	command.Flags().BoolVar(&cmpUseManifestGeneratePaths, "plugin-use-manifest-generate-paths", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_PLUGIN_USE_MANIFEST_GENERATE_PATHS", false), "Pass the resources described in argocd.argoproj.io/manifest-generate-paths value to the cmpserver to generate the application manifests.")
 	command.Flags().StringSliceVar(&ociMediaTypes, "oci-layer-media-types", env.StringsFromEnv("ARGOCD_REPO_SERVER_OCI_LAYER_MEDIA_TYPES", []string{"application/vnd.oci.image.layer.v1.tar", "application/vnd.oci.image.layer.v1.tar+gzip", "application/vnd.cncf.helm.chart.content.v1.tar+gzip"}, ","), "Comma separated list of allowed media types for OCI media types. This only accounts for media types within layers.")
+	command.Flags().BoolVar(&enableBuiltinGitConfig, "enable-builtin-git-config", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_ENABLE_BUILTIN_GIT_CONFIG", true), "Enable builtin git configuration options that are required for correct argocd-repo-server operation.")
 	tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
 	cacheSrc = reposervercache.AddCacheFlagsToCmd(&command, cacheutil.Options{
 		OnClientCreated: func(client *redis.Client) {
@@ -30,11 +30,12 @@ func NewNotificationsCommand() *cobra.Command {
 	)

 	var argocdService service.Service

 	toolsCommand := cmd.NewToolsCommand(
 		"notifications",
 		"argocd admin notifications",
 		applications,
-		settings.GetFactorySettingsForCLI(argocdService, "argocd-notifications-secret", "argocd-notifications-cm", false),
+		settings.GetFactorySettingsForCLI(func() service.Service { return argocdService }, "argocd-notifications-secret", "argocd-notifications-cm", false),
 		func(clientConfig clientcmd.ClientConfig) {
 			k8sCfg, err := clientConfig.ClientConfig()
 			if err != nil {
@@ -353,7 +353,7 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
 	command := &cobra.Command{
 		Use:   "get APPNAME",
 		Short: "Get application details",
 		Example: templates.Examples(`
 	# Get basic details about the application "my-app" in wide format
 	argocd app get my-app -o wide

@@ -383,7 +383,7 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
 	# Get application details and display them in a tree format
 	argocd app get my-app --output tree

 	# Get application details and display them in a detailed tree format
 	argocd app get my-app --output tree=detailed
 	`),

@@ -541,7 +541,7 @@ func NewApplicationLogsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
 	command := &cobra.Command{
 		Use:   "logs APPNAME",
 		Short: "Get logs of application pods",
 		Example: templates.Examples(`
 	# Get logs of pods associated with the application "my-app"
 	argocd app logs my-app
@@ -855,7 +855,7 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
 	command := &cobra.Command{
 		Use:   "set APPNAME",
 		Short: "Set application parameters",
 		Example: templates.Examples(`
 	# Set application parameters for the application "my-app"
 	argocd app set my-app --parameter key1=value1 --parameter key2=value2
@@ -1379,6 +1379,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
 	AppNamespace:    &appNs,
 	Revisions:       revisions,
 	SourcePositions: sourcePositions,
+	NoCache:         &hardRefresh,
 	}
 	res, err := appIf.GetManifests(ctx, &q)
 	errors.CheckError(err)
@@ -1390,6 +1391,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
 	Name:         &appName,
 	Revision:     &revision,
 	AppNamespace: &appNs,
+	NoCache:      &hardRefresh,
 	}
 	res, err := appIf.GetManifests(ctx, &q)
 	errors.CheckError(err)
@@ -2085,6 +2087,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
 	applyOutOfSyncOnly bool
 	async bool
 	retryLimit int64
+	retryRefresh bool
 	retryBackoffDuration time.Duration
 	retryBackoffMaxDuration time.Duration
 	retryBackoffFactor int64
@@ -2356,9 +2359,10 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
 	default:
 		log.Fatalf("Unknown sync strategy: '%s'", strategy)
 	}
-	if retryLimit > 0 {
+	if retryLimit != 0 {
 		syncReq.RetryStrategy = &argoappv1.RetryStrategy{
-			Limit: retryLimit,
+			Limit:   retryLimit,
+			Refresh: retryRefresh,
 			Backoff: &argoappv1.Backoff{
 				Duration:    retryBackoffDuration.String(),
 				MaxDuration: retryBackoffMaxDuration.String(),
@@ -2427,6 +2431,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
 	command.Flags().StringArrayVar(&labels, "label", []string{}, "Sync only specific resources with a label. This option may be specified repeatedly.")
 	command.Flags().UintVar(&timeout, "timeout", defaultCheckTimeoutSeconds, "Time out after this many seconds")
 	command.Flags().Int64Var(&retryLimit, "retry-limit", 0, "Max number of allowed sync retries")
+	command.Flags().BoolVar(&retryRefresh, "retry-refresh", false, "Indicates if the latest revision should be used on retry instead of the initial one")
 	command.Flags().DurationVar(&retryBackoffDuration, "retry-backoff-duration", argoappv1.DefaultSyncRetryDuration, "Retry backoff base duration. Input needs to be a duration (e.g. 2m, 1h)")
 	command.Flags().DurationVar(&retryBackoffMaxDuration, "retry-backoff-max-duration", argoappv1.DefaultSyncRetryMaxDuration, "Max retry backoff duration. Input needs to be a duration (e.g. 2m, 1h)")
 	command.Flags().Int64Var(&retryBackoffFactor, "retry-backoff-factor", argoappv1.DefaultSyncRetryFactor, "Factor multiplies the base duration after each failed retry")
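The sync hunk above changes the guard from `retryLimit > 0` to `retryLimit != 0`, so a negative limit also produces a retry strategy. The effect of the new condition can be sketched with a simplified struct (`SimpleRetry` is a stand-in for the real `argoappv1.RetryStrategy`):

```go
package main

import "fmt"

// SimpleRetry is a simplified stand-in for argoappv1.RetryStrategy.
type SimpleRetry struct {
	Limit   int64
	Refresh bool
}

// retryStrategy mirrors the CLI logic: any non-zero limit yields a strategy;
// zero disables retries entirely. With the old `> 0` check a negative limit
// was silently ignored; with `!= 0` it is passed through to the server.
func retryStrategy(limit int64, refresh bool) *SimpleRetry {
	if limit != 0 {
		return &SimpleRetry{Limit: limit, Refresh: refresh}
	}
	return nil
}

func main() {
	fmt.Println(retryStrategy(0, false) == nil) // true: no retries requested
	fmt.Println(retryStrategy(-1, true) != nil) // true: negative limit now produces a strategy
}
```

The new `Refresh` field travels alongside `Limit`, matching the `--retry-refresh` flag added in the same hunk.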
@@ -3484,7 +3489,7 @@ func NewApplicationRemoveSourceCommand(clientOpts *argocdclient.ClientOptions) *
 	Short: "Remove a source from multiple sources application.",
 	Example: `  # Remove the source at position 1 from application's sources. Counting starts at 1.
 	argocd app remove-source myapplication --source-position 1

 	# Remove the source named "test" from application's sources.
 	argocd app remove-source myapplication --source-name test`,
 	Run: func(c *cobra.Command, args []string) {
@@ -42,6 +42,7 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
 	username string
 	password string
 	sso bool
+	callback string
 	ssoPort int
 	skipTestTLS bool
 	ssoLaunchBrowser bool
@@ -138,7 +139,7 @@ argocd login cd.argoproj.io --core`,
 	errors.CheckError(err)
 	oauth2conf, provider, err := acdClient.OIDCConfig(ctx, acdSet)
 	errors.CheckError(err)
-	tokenString, refreshToken = oauth2Login(ctx, ssoPort, acdSet.GetOIDCConfig(), oauth2conf, provider, ssoLaunchBrowser)
+	tokenString, refreshToken = oauth2Login(ctx, callback, ssoPort, acdSet.GetOIDCConfig(), oauth2conf, provider, ssoLaunchBrowser)
 	}
 	parser := jwt.NewParser(jwt.WithoutClaimsValidation())
 	claims := jwt.MapClaims{}
@@ -185,6 +186,7 @@ argocd login cd.argoproj.io --core`,
 	command.Flags().StringVar(&password, "password", "", "The password of an account to authenticate")
 	command.Flags().BoolVar(&sso, "sso", false, "Perform SSO login")
 	command.Flags().IntVar(&ssoPort, "sso-port", DefaultSSOLocalPort, "Port to run local OAuth2 login application")
+	command.Flags().StringVar(&callback, "callback", "", "Scheme, Host and Port for the callback URL")
 	command.Flags().BoolVar(&skipTestTLS, "skip-test-tls", false, "Skip testing whether the server is configured with TLS (this can help when the command hangs for no apparent reason)")
 	command.Flags().BoolVar(&ssoLaunchBrowser, "sso-launch-browser", true, "Automatically launch the system default browser when performing SSO login")
 	return command
@@ -204,13 +206,19 @@ func userDisplayName(claims jwt.MapClaims) string {
 // returns the JWT token and a refresh token (if supported)
 func oauth2Login(
 	ctx context.Context,
+	callback string,
 	port int,
 	oidcSettings *settingspkg.OIDCConfig,
 	oauth2conf *oauth2.Config,
 	provider *oidc.Provider,
 	ssoLaunchBrowser bool,
 ) (string, string) {
-	oauth2conf.RedirectURL = fmt.Sprintf("http://localhost:%d/auth/callback", port)
+	redirectBase := callback
+	if redirectBase == "" {
+		redirectBase = "http://localhost:" + strconv.Itoa(port)
+	}
+
+	oauth2conf.RedirectURL = redirectBase + "/auth/callback"
 	oidcConf, err := oidcutil.ParseConfig(provider)
 	errors.CheckError(err)
 	log.Debug("OIDC Configuration:")
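The new `--callback` flag lets the SSO redirect URL point somewhere other than `http://localhost:<port>`, falling back to the local listener when the flag is empty. The fallback can be sketched in isolation (`redirectURL` is an illustrative helper, not the real CLI function):

```go
package main

import (
	"fmt"
	"strconv"
)

// redirectURL reproduces the fallback from oauth2Login: an explicit callback
// base wins; otherwise the local OAuth2 listener address is used.
func redirectURL(callback string, port int) string {
	base := callback
	if base == "" {
		base = "http://localhost:" + strconv.Itoa(port)
	}
	return base + "/auth/callback"
}

func main() {
	fmt.Println(redirectURL("", 8085))                     // http://localhost:8085/auth/callback
	fmt.Println(redirectURL("https://vpn.example.com", 0)) // https://vpn.example.com/auth/callback
}
```

This is useful when the CLI runs behind port forwarding or a remote shell, where the browser cannot reach `localhost` on the machine running `argocd login`.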
@@ -605,8 +605,17 @@ ID  ISSUED-AT  EXPIRES-AT
 	fmt.Printf(printRoleFmtStr, "Description:", role.Description)
 	fmt.Printf("Policies:\n")
 	fmt.Printf("%s\n", proj.ProjectPoliciesString())
+	fmt.Printf("Groups:\n")
+	// if the group exists in the role
+	// range over each group and print it
+	if v1alpha1.RoleGroupExists(role) {
+		for _, group := range role.Groups {
+			fmt.Printf("  - %s\n", group)
+		}
+	} else {
+		fmt.Println("<none>")
+	}
 	fmt.Printf("JWT Tokens:\n")
-	// TODO(jessesuen): print groups
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
 	fmt.Fprintf(w, "ID\tISSUED-AT\tEXPIRES-AT\n")
 	for _, token := range proj.Status.JWTTokensByRole[roleName].Items {
@@ -21,6 +21,7 @@ import (
 func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Command {
 	var (
 		password string
+		callback string
 		ssoPort int
 		ssoLaunchBrowser bool
 	)
@@ -73,7 +74,7 @@ func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comm
 	errors.CheckError(err)
 	oauth2conf, provider, err := acdClient.OIDCConfig(ctx, acdSet)
 	errors.CheckError(err)
-	tokenString, refreshToken = oauth2Login(ctx, ssoPort, acdSet.GetOIDCConfig(), oauth2conf, provider, ssoLaunchBrowser)
+	tokenString, refreshToken = oauth2Login(ctx, callback, ssoPort, acdSet.GetOIDCConfig(), oauth2conf, provider, ssoLaunchBrowser)
 	}

 	localCfg.UpsertUser(localconfig.User{
@@ -100,6 +101,7 @@ argocd login cd.argoproj.io --core
 	}
 	command.Flags().StringVar(&password, "password", "", "The password of an account to authenticate")
 	command.Flags().IntVar(&ssoPort, "sso-port", DefaultSSOLocalPort, "Port to run local OAuth2 login application")
+	command.Flags().StringVar(&callback, "callback", "", "Host and Port for the callback URL")
 	command.Flags().BoolVar(&ssoLaunchBrowser, "sso-launch-browser", true, "Automatically launch the default browser when performing SSO login")
 	return command
 }
@@ -368,16 +368,19 @@ func NewRepoListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
 	conn, repoIf := headless.NewClientOrDie(clientOpts, c).NewRepoClientOrDie()
 	defer utilio.Close(conn)
 	forceRefresh := false
+
 	switch refresh {
 	case "":
 	case "hard":
 		forceRefresh = true
 	default:
-		err := stderrors.New("--refresh must be one of: 'hard'")
+		err := fmt.Errorf("unknown refresh value: %s. Supported values: hard", refresh)
 		errors.CheckError(err)
 	}
+
 	repos, err := repoIf.ListRepositories(ctx, &repositorypkg.RepoQuery{ForceRefresh: forceRefresh})
 	errors.CheckError(err)

 	switch output {
 	case "yaml", "json":
 		err := PrintResourceList(repos.Items, output, false)
@@ -388,12 +391,12 @@ func NewRepoListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
 	case "wide", "":
 		printRepoTable(repos.Items)
 	default:
-		errors.CheckError(fmt.Errorf("unknown output format: %s", output))
+		errors.CheckError(fmt.Errorf("unknown output format: %s. Supported formats: yaml|json|url|wide", output))
 	}
 	},
 	}
-	command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide|url")
-	command.Flags().StringVar(&refresh, "refresh", "", "Force a cache refresh on connection status , must be one of: 'hard'")
+	command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. Supported formats: yaml|json|url|wide")
+	command.Flags().StringVar(&refresh, "refresh", "", "Force a cache refresh on connection status. Supported values: hard")
 	return command
 }

@@ -442,11 +445,12 @@ func NewRepoGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
 	case "hard":
 		forceRefresh = true
 	default:
-		err := stderrors.New("--refresh must be one of: 'hard'")
+		err := fmt.Errorf("unknown refresh value: %s. Supported values: hard", refresh)
 		errors.CheckError(err)
 	}
 	repo, err := repoIf.Get(ctx, &repositorypkg.RepoQuery{Repo: repoURL, ForceRefresh: forceRefresh, AppProject: project})
 	errors.CheckError(err)

 	switch output {
 	case "yaml", "json":
 		err := PrintResource(repo, output)
@@ -457,13 +461,13 @@ func NewRepoGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
 	case "wide", "":
 		printRepoTable(appsv1.Repositories{repo})
 	default:
-		errors.CheckError(fmt.Errorf("unknown output format: %s", output))
+		errors.CheckError(fmt.Errorf("unknown output format: %s. Supported formats: yaml|json|url|wide", output))
 	}
 	},
 	}

 	command.Flags().StringVar(&project, "project", "", "project of the repository")
 	command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide|url")
-	command.Flags().StringVar(&refresh, "refresh", "", "Force a cache refresh on connection status , must be one of: 'hard'")
+	command.Flags().StringVar(&refresh, "refresh", "", "Force a cache refresh on connection status. Supported values: hard")
 	return command
 }
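Both repo commands validate `--refresh` with the same switch, and the new error message echoes the rejected value. A minimal sketch of that validation, returning an error instead of calling `errors.CheckError` (the helper name here is illustrative):

```go
package main

import "fmt"

// parseRefresh maps the --refresh flag to a force-refresh bool, rejecting
// anything other than "" or "hard" with a message that names the bad value.
func parseRefresh(refresh string) (bool, error) {
	switch refresh {
	case "":
		return false, nil
	case "hard":
		return true, nil
	default:
		return false, fmt.Errorf("unknown refresh value: %s. Supported values: hard", refresh)
	}
}

func main() {
	force, err := parseRefresh("hard")
	fmt.Println(force, err) // true <nil>
	_, err = parseRefresh("soft")
	fmt.Println(err) // unknown refresh value: soft. Supported values: hard
}
```

Echoing the offending value in the error is the point of the change: the old `--refresh must be one of: 'hard'` did not tell the user what they actually typed.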
@@ -90,6 +90,7 @@ type AppOptions struct {
 	retryBackoffDuration time.Duration
 	retryBackoffMaxDuration time.Duration
 	retryBackoffFactor int64
+	retryRefresh bool
 	ref string
 	SourceName string
 	drySourceRepo string
@@ -168,6 +169,7 @@ func AddAppFlags(command *cobra.Command, opts *AppOptions) {
 	command.Flags().DurationVar(&opts.retryBackoffDuration, "sync-retry-backoff-duration", argoappv1.DefaultSyncRetryDuration, "Sync retry backoff base duration. Input needs to be a duration (e.g. 2m, 1h)")
 	command.Flags().DurationVar(&opts.retryBackoffMaxDuration, "sync-retry-backoff-max-duration", argoappv1.DefaultSyncRetryMaxDuration, "Max sync retry backoff duration. Input needs to be a duration (e.g. 2m, 1h)")
 	command.Flags().Int64Var(&opts.retryBackoffFactor, "sync-retry-backoff-factor", argoappv1.DefaultSyncRetryFactor, "Factor multiplies the base duration after each failed sync retry")
+	command.Flags().BoolVar(&opts.retryRefresh, "sync-retry-refresh", false, "Indicates if the latest revision should be used on retry instead of the initial one")
 	command.Flags().StringVar(&opts.ref, "ref", "", "Ref is reference to another source within sources field")
 	command.Flags().StringVar(&opts.SourceName, "source-name", "", "Name of the source from the list of sources of the app.")
 }
@@ -261,6 +263,7 @@ func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
 	MaxDuration: appOpts.retryBackoffMaxDuration.String(),
 	Factor:      ptr.To(appOpts.retryBackoffFactor),
 	},
+	Refresh: appOpts.retryRefresh,
 	}
 	case appOpts.retryLimit == 0:
 		if spec.SyncPolicy.IsZero() {
@@ -271,6 +274,14 @@ func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
 	default:
 		log.Fatalf("Invalid sync-retry-limit [%d]", appOpts.retryLimit)
 	}
+	case "sync-retry-refresh":
+		if spec.SyncPolicy == nil {
+			spec.SyncPolicy = &argoappv1.SyncPolicy{}
+		}
+		if spec.SyncPolicy.Retry == nil {
+			spec.SyncPolicy.Retry = &argoappv1.RetryStrategy{}
+		}
+		spec.SyncPolicy.Retry.Refresh = appOpts.retryRefresh
 	}
 	})
 	if flags.Changed("auto-prune") {
@@ -274,6 +274,13 @@ func Test_setAppSpecOptions(t *testing.T) {
 	require.NoError(t, f.SetFlag("sync-retry-limit", "0"))
 	assert.Nil(t, f.spec.SyncPolicy.Retry)
 	})
+	t.Run("RetryRefresh", func(t *testing.T) {
+		require.NoError(t, f.SetFlag("sync-retry-refresh", "true"))
+		assert.True(t, f.spec.SyncPolicy.Retry.Refresh)
+
+		require.NoError(t, f.SetFlag("sync-retry-refresh", "false"))
+		assert.False(t, f.spec.SyncPolicy.Retry.Refresh)
+	})
 	t.Run("Kustomize", func(t *testing.T) {
 		require.NoError(t, f.SetFlag("kustomize-replica", "my-deployment=2"))
 		require.NoError(t, f.SetFlag("kustomize-replica", "my-statefulset=4"))
@@ -52,7 +52,7 @@ func NewConnection(address string) (*grpc.ClientConn, error) {
 	}

 	dialOpts = append(dialOpts, grpc.WithTransportCredentials(insecure.NewCredentials()))
-	conn, err := grpc_util.BlockingDial(context.Background(), "unix", address, nil, dialOpts...)
+	conn, err := grpc_util.BlockingNewClient(context.Background(), "unix", address, nil, dialOpts...)
 	if err != nil {
 		log.Errorf("Unable to connect to config management plugin service with address %s", address)
 		return nil, err

@@ -40,9 +40,7 @@ func NewConnection(address string) (*grpc.ClientConn, error) {
 	var opts []grpc.DialOption
 	opts = append(opts, grpc.WithTransportCredentials(insecure.NewCredentials()))

-	// TODO: switch to grpc.NewClient.
-	//nolint:staticcheck
-	conn, err := grpc.Dial(address, opts...)
+	conn, err := grpc.NewClient(address, opts...)
 	if err != nil {
 		log.Errorf("Unable to connect to commit service with address %s", address)
 		return nil, err
95	commitserver/apiclient/mocks/Clientset.go	generated
@@ -1,101 +1,14 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify

package mocks

import (
"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
"github.com/argoproj/argo-cd/v3/util/io"
mock "github.com/stretchr/testify/mock"
utilio "github.com/argoproj/argo-cd/v3/util/io"
)

// NewClientset creates a new instance of Clientset. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewClientset(t interface {
mock.TestingT
Cleanup(func())
}) *Clientset {
mock := &Clientset{}
mock.Mock.Test(t)

t.Cleanup(func() { mock.AssertExpectations(t) })

return mock
}

// Clientset is an autogenerated mock type for the Clientset type
type Clientset struct {
mock.Mock
CommitServiceClient apiclient.CommitServiceClient
}

type Clientset_Expecter struct {
mock *mock.Mock
}

func (_m *Clientset) EXPECT() *Clientset_Expecter {
return &Clientset_Expecter{mock: &_m.Mock}
}

// NewCommitServerClient provides a mock function for the type Clientset
func (_mock *Clientset) NewCommitServerClient() (io.Closer, apiclient.CommitServiceClient, error) {
ret := _mock.Called()

if len(ret) == 0 {
panic("no return value specified for NewCommitServerClient")
}

var r0 io.Closer
var r1 apiclient.CommitServiceClient
var r2 error
if returnFunc, ok := ret.Get(0).(func() (io.Closer, apiclient.CommitServiceClient, error)); ok {
return returnFunc()
}
if returnFunc, ok := ret.Get(0).(func() io.Closer); ok {
r0 = returnFunc()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(io.Closer)
}
}
if returnFunc, ok := ret.Get(1).(func() apiclient.CommitServiceClient); ok {
r1 = returnFunc()
} else {
if ret.Get(1) != nil {
r1 = ret.Get(1).(apiclient.CommitServiceClient)
}
}
if returnFunc, ok := ret.Get(2).(func() error); ok {
r2 = returnFunc()
} else {
r2 = ret.Error(2)
}
return r0, r1, r2
}

// Clientset_NewCommitServerClient_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'NewCommitServerClient'
type Clientset_NewCommitServerClient_Call struct {
*mock.Call
}

// NewCommitServerClient is a helper method to define mock.On call
func (_e *Clientset_Expecter) NewCommitServerClient() *Clientset_NewCommitServerClient_Call {
return &Clientset_NewCommitServerClient_Call{Call: _e.mock.On("NewCommitServerClient")}
}

func (_c *Clientset_NewCommitServerClient_Call) Run(run func()) *Clientset_NewCommitServerClient_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}

func (_c *Clientset_NewCommitServerClient_Call) Return(closer io.Closer, commitServiceClient apiclient.CommitServiceClient, err error) *Clientset_NewCommitServerClient_Call {
_c.Call.Return(closer, commitServiceClient, err)
return _c
}

func (_c *Clientset_NewCommitServerClient_Call) RunAndReturn(run func() (io.Closer, apiclient.CommitServiceClient, error)) *Clientset_NewCommitServerClient_Call {
_c.Call.Return(run)
return _c
}

func (c *Clientset) NewCommitServerClient() (utilio.Closer, apiclient.CommitServiceClient, error) {
return utilio.NopCloser, c.CommitServiceClient, nil
}

@@ -7,6 +7,8 @@ import (
"os"
"time"

"github.com/argoproj/argo-cd/v3/controller/hydrator"

log "github.com/sirupsen/logrus"

"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
@@ -31,6 +33,43 @@ func NewService(gitCredsStore git.CredsStore, metricsServer *metrics.Server) *Se
}
}

type hydratorMetadataFile struct {
RepoURL string `json:"repoURL,omitempty"`
DrySHA string `json:"drySha,omitempty"`
Commands []string `json:"commands,omitempty"`
Author string `json:"author,omitempty"`
Date string `json:"date,omitempty"`
// Subject is the subject line of the DRY commit message, i.e. `git show --format=%s`.
Subject string `json:"subject,omitempty"`
// Body is the body of the DRY commit message, excluding the subject line, i.e. `git show --format=%b`.
// Known Argocd- trailers with valid values are removed, but all other trailers are kept.
Body string `json:"body,omitempty"`
References []v1alpha1.RevisionReference `json:"references,omitempty"`
}

// TODO: make this configurable via ConfigMap.
var manifestHydrationReadmeTemplate = `# Manifest Hydration

To hydrate the manifests in this repository, run the following commands:

` + "```shell" + `
git clone {{ .RepoURL }}
# cd into the cloned directory
git checkout {{ .DrySHA }}
{{ range $command := .Commands -}}
{{ $command }}
{{ end -}}` + "```" + `
{{ if .References -}}

## References

{{ range $ref := .References -}}
{{ if $ref.Commit -}}
* [{{ $ref.Commit.SHA | mustRegexFind "[0-9a-f]+" | trunc 7 }}]({{ $ref.Commit.RepoURL }}): {{ $ref.Commit.Subject }} ({{ $ref.Commit.Author }})
{{ end -}}
{{ end -}}
{{ end -}}`

// CommitHydratedManifests handles a commit request. It clones the repository, checks out the sync branch, checks out
// the target branch, clears the repository contents, writes the manifests to the repository, commits the changes, and
// pushes the changes. It returns the hydrated revision SHA and an error if one occurred.
@@ -120,13 +159,17 @@ func (s *Service) handleCommitRequest(logCtx *log.Entry, r *apiclient.CommitHydr

logCtx.Debug("Clearing and preparing paths")
var pathsToClear []string
// Range over the configured paths and skip application
// paths that reference the root path.
for _, p := range r.Paths {
if p.Path == "" || p.Path == "." {
logCtx.Debug("Using root directory for manifests, no directory removal needed")
} else {
pathsToClear = append(pathsToClear, p.Path)
if hydrator.IsRootPath(p.Path) {
// Skip paths that reference the root directory.
logCtx.Debugf("Path %s is referencing root directory, ignoring the path", p.Path)
continue
}
pathsToClear = append(pathsToClear, p.Path)
}

if len(pathsToClear) > 0 {
logCtx.Debugf("Clearing paths: %v", pathsToClear)
out, err := gitClient.RemoveContents(pathsToClear)
@@ -221,40 +264,3 @@ func (s *Service) initGitClient(logCtx *log.Entry, r *apiclient.CommitHydratedMa

return gitClient, dirPath, cleanupOrLog, nil
}

type hydratorMetadataFile struct {
RepoURL string `json:"repoURL,omitempty"`
DrySHA string `json:"drySha,omitempty"`
Commands []string `json:"commands,omitempty"`
Author string `json:"author,omitempty"`
Date string `json:"date,omitempty"`
// Subject is the subject line of the DRY commit message, i.e. `git show --format=%s`.
Subject string `json:"subject,omitempty"`
// Body is the body of the DRY commit message, excluding the subject line, i.e. `git show --format=%b`.
// Known Argocd- trailers with valid values are removed, but all other trailers are kept.
Body string `json:"body,omitempty"`
References []v1alpha1.RevisionReference `json:"references,omitempty"`
}

// TODO: make this configurable via ConfigMap.
var manifestHydrationReadmeTemplate = `# Manifest Hydration

To hydrate the manifests in this repository, run the following commands:

` + "```shell" + `
git clone {{ .RepoURL }}
# cd into the cloned directory
git checkout {{ .DrySHA }}
{{ range $command := .Commands -}}
{{ $command }}
{{ end -}}` + "```" + `
{{ if .References -}}

## References

{{ range $ref := .References -}}
{{ if $ref.Commit -}}
* [{{ $ref.Commit.SHA | mustRegexFind "[0-9a-f]+" | trunc 7 }}]({{ $ref.Commit.RepoURL }}): {{ $ref.Commit.Subject }} ({{ $ref.Commit.Author }})
{{ end -}}
{{ end -}}
{{ end -}}`

@@ -5,9 +5,7 @@ import (
"fmt"
"os"
"path/filepath"
"strings"
"text/template"
"time"

"github.com/Masterminds/sprig/v3"
log "github.com/sirupsen/logrus"
@@ -17,14 +15,14 @@ import (
"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
"github.com/argoproj/argo-cd/v3/common"
appv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/util/git"
"github.com/argoproj/argo-cd/v3/util/hydrator"
"github.com/argoproj/argo-cd/v3/util/io"
)

var sprigFuncMap = sprig.GenericFuncMap() // a singleton for better performance

const gitAttributesContents = `*/README.md linguist-generated=true
*/hydrator.metadata linguist-generated=true`
const gitAttributesContents = `**/README.md linguist-generated=true
**/hydrator.metadata linguist-generated=true`

func init() {
// Avoid allowing the user to learn things about the environment.
@@ -36,25 +34,13 @@ func init() {
// WriteForPaths writes the manifests, hydrator.metadata, and README.md files for each path in the provided paths. It
// also writes a root-level hydrator.metadata file containing the repo URL and dry SHA.
func WriteForPaths(root *os.Root, repoUrl, drySha string, dryCommitMetadata *appv1.RevisionMetadata, paths []*apiclient.PathDetails) error { //nolint:revive //FIXME(var-naming)
author := ""
message := ""
date := ""
var references []appv1.RevisionReference
if dryCommitMetadata != nil {
author = dryCommitMetadata.Author
message = dryCommitMetadata.Message
if dryCommitMetadata.Date != nil {
date = dryCommitMetadata.Date.Format(time.RFC3339)
}
references = dryCommitMetadata.References
hydratorMetadata, err := hydrator.GetCommitMetadata(repoUrl, drySha, dryCommitMetadata)
if err != nil {
return fmt.Errorf("failed to retrieve hydrator metadata: %w", err)
}

subject, body, _ := strings.Cut(message, "\n\n")

_, bodyMinusTrailers := git.GetReferences(log.WithFields(log.Fields{"repo": repoUrl, "revision": drySha}), body)

// Write the top-level readme.
err := writeMetadata(root, "", hydratorMetadataFile{DrySHA: drySha, RepoURL: repoUrl, Author: author, Subject: subject, Body: bodyMinusTrailers, Date: date, References: references})
err = writeMetadata(root, "", hydratorMetadata)
if err != nil {
return fmt.Errorf("failed to write top-level hydrator metadata: %w", err)
}
@@ -71,9 +57,12 @@ func WriteForPaths(root *os.Root, repoUrl, drySha string, dryCommitMetadata *app
hydratePath = ""
}

err = root.MkdirAll(hydratePath, 0o755)
if err != nil {
return fmt.Errorf("failed to create path: %w", err)
// Only create directory if path is not empty (root directory case)
if hydratePath != "" {
err = root.MkdirAll(hydratePath, 0o755)
if err != nil {
return fmt.Errorf("failed to create path: %w", err)
}
}

// Write the manifests
@@ -83,7 +72,7 @@ func WriteForPaths(root *os.Root, repoUrl, drySha string, dryCommitMetadata *app
}

// Write hydrator.metadata containing information about the hydration process.
hydratorMetadata := hydratorMetadataFile{
hydratorMetadata := hydrator.HydratorCommitMetadata{
Commands: p.Commands,
DrySHA: drySha,
RepoURL: repoUrl,
@@ -103,7 +92,7 @@ func WriteForPaths(root *os.Root, repoUrl, drySha string, dryCommitMetadata *app
}

// writeMetadata writes the metadata to the hydrator.metadata file.
func writeMetadata(root *os.Root, dirPath string, metadata hydratorMetadataFile) error {
func writeMetadata(root *os.Root, dirPath string, metadata hydrator.HydratorCommitMetadata) error {
hydratorMetadataPath := filepath.Join(dirPath, "hydrator.metadata")
f, err := root.Create(hydratorMetadataPath)
if err != nil {
@@ -122,7 +111,7 @@ func writeMetadata(root *os.Root, dirPath string, metadata hydratorMetadataFile)
}

// writeReadme writes the readme to the README.md file.
func writeReadme(root *os.Root, dirPath string, metadata hydratorMetadataFile) error {
func writeReadme(root *os.Root, dirPath string, metadata hydrator.HydratorCommitMetadata) error {
readmeTemplate, err := template.New("readme").Funcs(sprigFuncMap).Parse(manifestHydrationReadmeTemplate)
if err != nil {
return fmt.Errorf("failed to parse readme template: %w", err)

@@ -7,8 +7,10 @@ import (
"encoding/json"
"fmt"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"testing"
"time"

@@ -18,6 +20,7 @@ import (

"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
appsv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/util/hydrator"
)

// tempRoot creates a temporary directory and returns an os.Root object for it.
@@ -144,7 +147,7 @@ Argocd-reference-commit-sha: abc123
func TestWriteMetadata(t *testing.T) {
root := tempRoot(t)

metadata := hydratorMetadataFile{
metadata := hydrator.HydratorCommitMetadata{
RepoURL: "https://github.com/example/repo",
DrySHA: "abc123",
}
@@ -156,7 +159,7 @@ func TestWriteMetadata(t *testing.T) {
metadataBytes, err := os.ReadFile(metadataPath)
require.NoError(t, err)

var readMetadata hydratorMetadataFile
var readMetadata hydrator.HydratorCommitMetadata
err = json.Unmarshal(metadataBytes, &readMetadata)
require.NoError(t, err)
assert.Equal(t, metadata, readMetadata)
@@ -171,7 +174,7 @@ func TestWriteReadme(t *testing.T) {
hash := sha256.Sum256(randomData)
sha := hex.EncodeToString(hash[:])

metadata := hydratorMetadataFile{
metadata := hydrator.HydratorCommitMetadata{
RepoURL: "https://github.com/example/repo",
DrySHA: "abc123",
References: []appsv1.RevisionReference{
@@ -233,6 +236,74 @@ func TestWriteGitAttributes(t *testing.T) {
gitAttributesPath := filepath.Join(root.Name(), ".gitattributes")
gitAttributesBytes, err := os.ReadFile(gitAttributesPath)
require.NoError(t, err)
assert.Contains(t, string(gitAttributesBytes), "*/README.md linguist-generated=true")
assert.Contains(t, string(gitAttributesBytes), "*/hydrator.metadata linguist-generated=true")
assert.Contains(t, string(gitAttributesBytes), "README.md linguist-generated=true")
assert.Contains(t, string(gitAttributesBytes), "hydrator.metadata linguist-generated=true")
}

func TestWriteGitAttributes_MatchesAllDepths(t *testing.T) {
root := tempRoot(t)

err := writeGitAttributes(root)
require.NoError(t, err)

// The gitattributes pattern needs to match files at all depths:
// - hydrator.metadata (root level)
// - path1/hydrator.metadata (one level deep)
// - path1/nested/deep/hydrator.metadata (multiple levels deep)
// Same for README.md files
//
// The pattern "**/hydrator.metadata" matches at any depth including root
// The pattern "*/hydrator.metadata" only matches exactly one directory level deep

// Test actual Git behavior using git check-attr
// Initialize a git repo
ctx := t.Context()
repoPath := root.Name()
cmd := exec.CommandContext(ctx, "git", "init")
cmd.Dir = repoPath
output, err := cmd.CombinedOutput()
require.NoError(t, err, "Failed to init git repo: %s", string(output))

// Test files at different depths
testCases := []struct {
path string
shouldMatch bool
description string
}{
{"hydrator.metadata", true, "root level hydrator.metadata"},
{"README.md", true, "root level README.md"},
{"path1/hydrator.metadata", true, "one level deep hydrator.metadata"},
{"path1/README.md", true, "one level deep README.md"},
{"path1/nested/hydrator.metadata", true, "two levels deep hydrator.metadata"},
{"path1/nested/README.md", true, "two levels deep README.md"},
{"path1/nested/deep/hydrator.metadata", true, "three levels deep hydrator.metadata"},
{"path1/nested/deep/README.md", true, "three levels deep README.md"},
{"manifest.yaml", false, "manifest.yaml should not match"},
{"path1/manifest.yaml", false, "nested manifest.yaml should not match"},
}

for _, tc := range testCases {
t.Run(tc.description, func(t *testing.T) {
// Use git check-attr to verify if linguist-generated attribute is set
cmd := exec.CommandContext(ctx, "git", "check-attr", "linguist-generated", tc.path)
cmd.Dir = repoPath
output, err := cmd.CombinedOutput()
require.NoError(t, err, "Failed to run git check-attr: %s", string(output))

// Output format: <path>: <attribute>: <value>
// Example: "hydrator.metadata: linguist-generated: true"
outputStr := strings.TrimSpace(string(output))

if tc.shouldMatch {
expectedOutput := tc.path + ": linguist-generated: true"
assert.Equal(t, expectedOutput, outputStr,
"File %s should have linguist-generated=true attribute", tc.path)
} else {
// Attribute should be unspecified
expectedOutput := tc.path + ": linguist-generated: unspecified"
assert.Equal(t, expectedOutput, outputStr,
"File %s should not have linguist-generated=true attribute", tc.path)
}
})
}
}
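The behavior this test pins down — `**/` matching at every depth, including the root, while `*/` matches exactly one directory level — follows gitattributes pattern rules, which are the same as gitignore's. A throwaway-repo sketch with `git check-attr` (paths need not exist to be checked; assumes git is installed):

```shell
# Sketch: show that "**/hydrator.metadata" matches at any depth, root included.
# Uses a temporary scratch repository.
set -eu
repo="$(mktemp -d)"
cd "$repo"
git init -q
printf '**/hydrator.metadata linguist-generated=true\n' > .gitattributes
git check-attr linguist-generated \
    hydrator.metadata \
    path1/hydrator.metadata \
    path1/nested/deep/hydrator.metadata
# Each path reports "linguist-generated: true"; with "*/hydrator.metadata"
# instead, only path1/hydrator.metadata would match.
```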

@@ -1206,7 +1206,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
if err != nil {
logCtx.Warnf("Unable to get destination cluster: %v", err)
app.UnSetCascadedDeletion()
app.UnSetPostDeleteFinalizer()
app.UnSetPostDeleteFinalizerAll()
if err := ctrl.updateFinalizers(app); err != nil {
return err
}
@@ -1417,18 +1417,28 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
return
}
retryAfter := time.Until(retryAt)

if retryAfter > 0 {
logCtx.Infof("Skipping retrying in-progress operation. Attempting again at: %s", retryAt.Format(time.RFC3339))
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatest.Pointer(), &retryAfter)
return
}

// Remove the desired revisions if the sync failed and we are retrying. The latest revision from the source will be used.
extraMsg := ""
if state.Operation.Retry.Refresh {
extraMsg += " with latest revisions"
state.Operation.Sync.Revision = ""
state.Operation.Sync.Revisions = nil
}

// Get rid of sync results and null out previous operation completion time
// This will start the retry attempt
state.Message = fmt.Sprintf("Retrying operation. Attempt #%d", state.RetryCount)
state.Message = fmt.Sprintf("Retrying operation%s. Attempt #%d", extraMsg, state.RetryCount)
state.FinishedAt = nil
state.SyncResult = nil
ctrl.setOperationState(app, state)
logCtx.Infof("Retrying operation. Attempt #%d", state.RetryCount)
logCtx.Infof("Retrying operation%s. Attempt #%d", extraMsg, state.RetryCount)
default:
logCtx.Infof("Resuming in-progress operation. phase: %s, message: %s", state.Phase, state.Message)
}
@@ -1498,8 +1508,18 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
// if we just completed an operation, force a refresh so that UI will report up-to-date
// sync/health information
if _, err := cache.MetaNamespaceKeyFunc(app); err == nil {
// Force app refresh using the CompareWithLatest comparison type and trigger the app reconciliation loop
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatestForceResolve.Pointer(), nil)
var compareWith CompareWith
if state.Operation.InitiatedBy.Automated {
// Do not force revision resolution on automated operations because
// this would cause excessive Ls-Remote requests on monorepo commits
compareWith = CompareWithLatest
} else {
// Force app refresh using the most recent resolved revision after sync,
// so UI won't show a just-synced application being out of sync if it was
// synced after commit but before app refresh (see #18153)
compareWith = CompareWithLatestForceResolve
}
ctrl.requestAppRefresh(app.QualifiedName(), compareWith.Pointer(), nil)
} else {
logCtx.Warnf("Fails to requeue application: %v", err)
}
@@ -1868,7 +1888,7 @@ func (ctrl *ApplicationController) processAppHydrateQueueItem() (processNext boo
return
}

ctrl.hydrator.ProcessAppHydrateQueueItem(origApp)
ctrl.hydrator.ProcessAppHydrateQueueItem(origApp.DeepCopy())

log.WithFields(applog.GetAppLogFields(origApp)).Debug("Successfully processed app hydrate queue item")
return
@@ -2127,16 +2147,20 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
}
}

source := ptr.To(app.Spec.GetSource())
desiredRevisions := []string{syncStatus.Revision}
if app.Spec.HasMultipleSources() {
source = nil
desiredRevisions = syncStatus.Revisions
}

op := appv1.Operation{
Sync: &appv1.SyncOperation{
Source: source,
Revision: syncStatus.Revision,
Prune: app.Spec.SyncPolicy.Automated.Prune,
SyncOptions: app.Spec.SyncPolicy.SyncOptions,
Sources: app.Spec.Sources,
Revisions: syncStatus.Revisions,
},
InitiatedBy: appv1.OperationInitiator{Automated: true},

@@ -2321,6 +2321,41 @@ func TestProcessRequestedAppOperation_Successful(t *testing.T) {
assert.Equal(t, CompareWithLatestForceResolve, level)
}

func TestProcessRequestedAppAutomatedOperation_Successful(t *testing.T) {
app := newFakeApp()
app.Spec.Project = "default"
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
InitiatedBy: v1alpha1.OperationInitiator{
Automated: true,
},
}
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
}},
}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
if patchAction, ok := action.(kubetesting.PatchAction); ok {
require.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
}
return true, &v1alpha1.Application{}, nil
})

ctrl.processRequestedAppOperation(app)

phase, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "phase")
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
assert.Equal(t, string(synccommon.OperationSucceeded), phase)
assert.Equal(t, "successfully synced (no more tasks)", message)
ok, level := ctrl.isRefreshRequested(ctrl.toAppKey(app.Name))
assert.True(t, ok)
assert.Equal(t, CompareWithLatest, level)
}

func TestProcessRequestedAppOperation_SyncTimeout(t *testing.T) {
testCases := []struct {
name string

@@ -51,7 +51,7 @@ func (ctrl *ApplicationController) executePostDeleteHooks(app *v1alpha1.Applicat
revisions = append(revisions, src.TargetRevision)
}

targets, _, _, err := ctrl.appStateManager.GetRepoObjs(app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, true)
targets, _, _, err := ctrl.appStateManager.GetRepoObjs(context.Background(), app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, true)
if err != nil {
return false, err
}

@@ -4,8 +4,14 @@ import (
"context"
"encoding/json"
"fmt"
"maps"
"path/filepath"
"slices"
"sync"
"time"

"golang.org/x/sync/errgroup"

log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -16,6 +22,7 @@ import (
"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
applog "github.com/argoproj/argo-cd/v3/util/app/log"
"github.com/argoproj/argo-cd/v3/util/git"
"github.com/argoproj/argo-cd/v3/util/hydrator"
utilio "github.com/argoproj/argo-cd/v3/util/io"
)

@@ -43,7 +50,7 @@ type Dependencies interface {

// GetRepoObjs returns the repository objects for the given application, source, and revision. It calls the repo-
// server and gets the manifests (objects).
GetRepoObjs(app *appv1.Application, source appv1.ApplicationSource, revision string, project *appv1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error)
GetRepoObjs(ctx context.Context, app *appv1.Application, source appv1.ApplicationSource, revision string, project *appv1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error)

// GetWriteCredentials returns the repository credentials for the given repository URL and project. These are to be
// sent to the commit server to write the hydrated manifests.
@@ -59,6 +66,9 @@ type Dependencies interface {
// AddHydrationQueueItem adds a hydration queue item to the queue. This is used to trigger the hydration process for
// a group of applications which are hydrating to the same repo and target branch.
AddHydrationQueueItem(key types.HydrationQueueKey)

// GetHydratorCommitMessageTemplate gets the configured template for rendering commit messages.
GetHydratorCommitMessageTemplate() (string, error)
}

// Hydrator is the main struct that implements the hydration logic. It uses the Dependencies interface to access the
@@ -91,47 +101,41 @@ func NewHydrator(dependencies Dependencies, statusRefreshTimeout time.Duration,
// It's likely that multiple applications will trigger hydration at the same time. The hydration queue key is meant to
// dedupe these requests.
func (h *Hydrator) ProcessAppHydrateQueueItem(origApp *appv1.Application) {
origApp = origApp.DeepCopy()
app := origApp.DeepCopy()

if app.Spec.SourceHydrator == nil {
return
}

logCtx := log.WithFields(applog.GetAppLogFields(app))

logCtx.Debug("Processing app hydrate queue item")

// TODO: don't reuse statusRefreshTimeout. Create a new timeout for hydration.
needsHydration, reason := appNeedsHydration(origApp, h.statusRefreshTimeout)
if !needsHydration {
return
needsHydration, reason := appNeedsHydration(app)
if needsHydration {
app.Status.SourceHydrator.CurrentOperation = &appv1.HydrateOperation{
StartedAt: metav1.Now(),
FinishedAt: nil,
Phase: appv1.HydrateOperationPhaseHydrating,
SourceHydrator: *app.Spec.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
}

logCtx.WithField("reason", reason).Info("Hydrating app")

app.Status.SourceHydrator.CurrentOperation = &appv1.HydrateOperation{
StartedAt: metav1.Now(),
FinishedAt: nil,
Phase: appv1.HydrateOperationPhaseHydrating,
SourceHydrator: *app.Spec.SourceHydrator,
needsRefresh := app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.StartedAt.Time) > h.statusRefreshTimeout
if needsHydration || needsRefresh {
logCtx.WithField("reason", reason).Info("Hydrating app")
h.dependencies.AddHydrationQueueItem(getHydrationQueueKey(app))
} else {
logCtx.WithField("reason", reason).Debug("Skipping hydration")
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
origApp.Status.SourceHydrator = app.Status.SourceHydrator
h.dependencies.AddHydrationQueueItem(getHydrationQueueKey(app))

logCtx.Debug("Successfully processed app hydrate queue item")
}

func getHydrationQueueKey(app *appv1.Application) types.HydrationQueueKey {
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
key := types.HydrationQueueKey{
SourceRepoURL: git.NormalizeGitURLAllowInvalid(app.Spec.SourceHydrator.DrySource.RepoURL),
SourceTargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
DestinationBranch: destinationBranch,
DestinationBranch: app.Spec.GetHydrateToSource().TargetRevision,
}
return key
}
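The dedupe the comments above describe — many app-level triggers collapsing into one hydration work item via a comparable struct key — can be sketched independently of the controller (types here are hypothetical, simplified from `types.HydrationQueueKey`):

```go
package main

import "fmt"

// hydrationKey is a simplified, comparable stand-in for the hydration
// queue key: apps hydrating the same repo/revision/branch share a key.
type hydrationKey struct {
	SourceRepoURL        string
	SourceTargetRevision string
	DestinationBranch    string
}

// dedupe collapses duplicate triggers into one queue item per key,
// preserving first-seen order.
func dedupe(triggers []hydrationKey) []hydrationKey {
	seen := make(map[hydrationKey]bool)
	var queue []hydrationKey
	for _, k := range triggers {
		if !seen[k] {
			seen[k] = true
			queue = append(queue, k)
		}
	}
	return queue
}

func main() {
	k := hydrationKey{"https://github.com/example/repo", "main", "env/test"}
	// Three apps hydrating to the same repo/branch produce one queue item.
	fmt.Println(len(dedupe([]hydrationKey{k, k, k}))) // prints 1
}
```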
@@ -140,37 +144,92 @@ func getHydrationQueueKey(app *appv1.Application) types.HydrationQueueKey {
// hydration key, hydrates their latest commit, and updates their status accordingly. If the hydration fails, it marks
// the operation as failed and logs the error. If successful, it updates the operation to indicate that hydration was
// successful and requests a refresh of the applications to pick up the new hydrated commit.
func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKey) (processNext bool) {
func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKey) {
logCtx := log.WithFields(log.Fields{
"sourceRepoURL": hydrationKey.SourceRepoURL,
"sourceTargetRevision": hydrationKey.SourceTargetRevision,
"destinationBranch": hydrationKey.DestinationBranch,
})

relevantApps, drySHA, hydratedSHA, err := h.hydrateAppsLatestCommit(logCtx, hydrationKey)
if drySHA != "" {
logCtx = logCtx.WithField("drySHA", drySHA)
}
// Get all applications sharing the same hydration key
apps, err := h.getAppsForHydrationKey(hydrationKey)
if err != nil {
logCtx.WithField("appCount", len(relevantApps)).WithError(err).Error("Failed to hydrate apps")
for _, app := range relevantApps {
origApp := app.DeepCopy()
app.Status.SourceHydrator.CurrentOperation.Phase = appv1.HydrateOperationPhaseFailed
failedAt := metav1.Now()
app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate revision %q: %v", drySHA, err.Error())
// We may or may not have gotten far enough in the hydration process to get a non-empty SHA, but set it just
// in case we did.
app.Status.SourceHydrator.CurrentOperation.DrySHA = drySHA
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
logCtx.Errorf("Failed to hydrate app: %v", err)
// If we get an error here, we cannot proceed with hydration and we do not know
// which apps to update with the failure. The best we can do is log an error in
// the controller and wait for statusRefreshTimeout to retry
logCtx.WithError(err).Error("failed to get apps for hydration")
return
}
logCtx.WithField("appCount", len(apps))

// FIXME: we might end up in a race condition here where a HydrationQueueItem is processed
// before all applications had their CurrentOperation set by ProcessAppHydrateQueueItem.
// This would cause this method to update "old" CurrentOperation.
// It should only start hydration if all apps are in the HydrateOperationPhaseHydrating phase.
raceDetected := false
for _, app := range apps {
if app.Status.SourceHydrator.CurrentOperation == nil || app.Status.SourceHydrator.CurrentOperation.Phase != appv1.HydrateOperationPhaseHydrating {
raceDetected = true
break
}
}
if raceDetected {
logCtx.Warn("race condition detected: not all apps are in HydrateOperationPhaseHydrating phase")
}

// Validate all the applications to make sure they are all correctly configured.
// All applications sharing the same hydration key must succeed for the hydration to be processed.
projects, validationErrors := h.validateApplications(apps)
if len(validationErrors) > 0 {
// For the applications that have an error, set the specific error in their status.
// Applications without error will still fail with a generic error since the hydration cannot be partial
genericError := genericHydrationError(validationErrors)
for _, app := range apps {
if err, ok := validationErrors[app.QualifiedName()]; ok {
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
logCtx.Errorf("failed to validate hydration app: %v", err)
h.setAppHydratorError(app, err)
} else {
h.setAppHydratorError(app, genericError)
}
}
return
}
logCtx.WithField("appCount", len(relevantApps)).Debug("Successfully hydrated apps")

// Hydrate all the apps
drySHA, hydratedSHA, appErrors, err := h.hydrate(logCtx, apps, projects)
if err != nil {
// If there is a single error, it affects every application
for i := range apps {
appErrors[apps[i].QualifiedName()] = err
}
}
if drySHA != "" {
logCtx = logCtx.WithField("drySHA", drySHA)
}
if len(appErrors) > 0 {
// For the applications that have an error, set the specific error in their status.
// Applications without error will still fail with a generic error since the hydration cannot be partial
|
||||
genericError := genericHydrationError(appErrors)
|
||||
for _, app := range apps {
|
||||
if drySHA != "" {
|
||||
// If we have a drySHA, we can set it on the app status
|
||||
app.Status.SourceHydrator.CurrentOperation.DrySHA = drySHA
|
||||
}
|
||||
if err, ok := appErrors[app.QualifiedName()]; ok {
|
||||
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
|
||||
logCtx.Errorf("failed to hydrate app: %v", err)
|
||||
h.setAppHydratorError(app, err)
|
||||
} else {
|
||||
h.setAppHydratorError(app, genericError)
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
logCtx.Debug("Successfully hydrated apps")
|
||||
finishedAt := metav1.Now()
|
||||
for _, app := range relevantApps {
|
||||
for _, app := range apps {
|
||||
origApp := app.DeepCopy()
|
||||
operation := &appv1.HydrateOperation{
|
||||
StartedAt: app.Status.SourceHydrator.CurrentOperation.StartedAt,
|
||||
@@ -188,30 +247,32 @@ func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKe
|
||||
SourceHydrator: app.Status.SourceHydrator.CurrentOperation.SourceHydrator,
|
||||
}
|
||||
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
|
||||
|
||||
// Request a refresh since we pushed a new commit.
|
||||
err := h.dependencies.RequestAppRefresh(app.Name, app.Namespace)
|
||||
if err != nil {
|
||||
logCtx.WithField("app", app.QualifiedName()).WithError(err).Error("Failed to request app refresh after hydration")
|
||||
logCtx.WithFields(applog.GetAppLogFields(app)).WithError(err).Error("Failed to request app refresh after hydration")
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
func (h *Hydrator) hydrateAppsLatestCommit(logCtx *log.Entry, hydrationKey types.HydrationQueueKey) ([]*appv1.Application, string, string, error) {
relevantApps, err := h.getRelevantAppsForHydration(logCtx, hydrationKey)
if err != nil {
return nil, "", "", fmt.Errorf("failed to get relevant apps for hydration: %w", err)
// setAppHydratorError updates the CurrentOperation with the error information.
func (h *Hydrator) setAppHydratorError(app *appv1.Application, err error) {
// if the operation is not in progress, we do not update the status
if app.Status.SourceHydrator.CurrentOperation.Phase != appv1.HydrateOperationPhaseHydrating {
return
}

dryRevision, hydratedRevision, err := h.hydrate(logCtx, relevantApps)
if err != nil {
return relevantApps, dryRevision, "", fmt.Errorf("failed to hydrate apps: %w", err)
}

return relevantApps, dryRevision, hydratedRevision, nil
origApp := app.DeepCopy()
app.Status.SourceHydrator.CurrentOperation.Phase = appv1.HydrateOperationPhaseFailed
failedAt := metav1.Now()
app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate: %v", err.Error())
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
}
func (h *Hydrator) getRelevantAppsForHydration(logCtx *log.Entry, hydrationKey types.HydrationQueueKey) ([]*appv1.Application, error) {
// getAppsForHydrationKey returns the applications matching the hydration key.
func (h *Hydrator) getAppsForHydrationKey(hydrationKey types.HydrationQueueKey) ([]*appv1.Application, error) {
// Get all apps
apps, err := h.dependencies.GetProcessableApps()
if err != nil {
@@ -219,100 +280,113 @@ func (h *Hydrator) getRelevantAppsForHydration(logCtx *log.Entry, hydrationKey t
}

var relevantApps []*appv1.Application
uniquePaths := make(map[string]bool, len(apps.Items))
for _, app := range apps.Items {
if app.Spec.SourceHydrator == nil {
continue
}

if !git.SameURL(app.Spec.SourceHydrator.DrySource.RepoURL, hydrationKey.SourceRepoURL) ||
app.Spec.SourceHydrator.DrySource.TargetRevision != hydrationKey.SourceTargetRevision {
appKey := getHydrationQueueKey(&app)
if appKey != hydrationKey {
continue
}
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
if destinationBranch != hydrationKey.DestinationBranch {
continue
}

var proj *appv1.AppProject
proj, err = h.dependencies.GetProcessableAppProj(&app)
if err != nil {
return nil, fmt.Errorf("failed to get project %q for app %q: %w", app.Spec.Project, app.QualifiedName(), err)
}
permitted := proj.IsSourcePermitted(app.Spec.GetSource())
if !permitted {
// Log and skip. We don't want to fail the entire operation because of one app.
logCtx.Warnf("App %q is not permitted to use source %q", app.QualifiedName(), app.Spec.Source.String())
continue
}

// TODO: test the dupe detection
// TODO: normalize the path to avoid "path/.." from being treated as different from "."
if _, ok := uniquePaths[app.Spec.SourceHydrator.SyncSource.Path]; ok {
return nil, fmt.Errorf("multiple app hydrators use the same destination: %v", app.Spec.SourceHydrator.SyncSource.Path)
}
uniquePaths[app.Spec.SourceHydrator.SyncSource.Path] = true

relevantApps = append(relevantApps, &app)
}
return relevantApps, nil
}
func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application) (string, string, error) {
if len(apps) == 0 {
return "", "", nil
}
repoURL := apps[0].Spec.SourceHydrator.DrySource.RepoURL
syncBranch := apps[0].Spec.SourceHydrator.SyncSource.TargetBranch
targetBranch := apps[0].Spec.GetHydrateToSource().TargetRevision
var paths []*commitclient.PathDetails
projects := make(map[string]bool, len(apps))
var targetRevision string
// TODO: parallelize this loop
// validateApplications checks that all applications are valid for hydration.
func (h *Hydrator) validateApplications(apps []*appv1.Application) (map[string]*appv1.AppProject, map[string]error) {
projects := make(map[string]*appv1.AppProject)
errors := make(map[string]error)
uniquePaths := make(map[string]string, len(apps))

for _, app := range apps {
project, err := h.dependencies.GetProcessableAppProj(app)
// Get the project for the app and validate if the app is allowed to use the source.
// We can't short-circuit this even if we have seen this project before, because we need to verify that this
// particular app is allowed to use this project.
proj, err := h.dependencies.GetProcessableAppProj(app)
if err != nil {
return "", "", fmt.Errorf("failed to get project: %w", err)
errors[app.QualifiedName()] = fmt.Errorf("failed to get project %q: %w", app.Spec.Project, err)
continue
}
projects[project.Name] = true
drySource := appv1.ApplicationSource{
RepoURL: app.Spec.SourceHydrator.DrySource.RepoURL,
Path: app.Spec.SourceHydrator.DrySource.Path,
TargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
permitted := proj.IsSourcePermitted(app.Spec.GetSource())
if !permitted {
errors[app.QualifiedName()] = fmt.Errorf("application repo %s is not permitted in project '%s'", app.Spec.GetSource().RepoURL, proj.Name)
continue
}
if targetRevision == "" {
targetRevision = app.Spec.SourceHydrator.DrySource.TargetRevision
projects[app.Spec.Project] = proj

// Disallow hydrating to the repository root.
// Hydrating to root would overwrite or delete files at the top level of the repo,
// which can break other applications or shared configuration.
// Every hydrated app must write into a subdirectory instead.
destPath := app.Spec.SourceHydrator.SyncSource.Path
if IsRootPath(destPath) {
errors[app.QualifiedName()] = fmt.Errorf("app is configured to hydrate to the repository root (branch %q, path %q) which is not allowed", app.Spec.GetHydrateToSource().TargetRevision, destPath)
continue
}

// TODO: enable signature verification
objs, resp, err := h.dependencies.GetRepoObjs(app, drySource, targetRevision, project)
if err != nil {
return "", "", fmt.Errorf("failed to get repo objects for app %q: %w", app.QualifiedName(), err)
// TODO: test the dupe detection
// TODO: normalize the path to avoid "path/.." from being treated as different from "."
if appName, ok := uniquePaths[destPath]; ok {
errors[app.QualifiedName()] = fmt.Errorf("app %s hydrator use the same destination: %v", appName, app.Spec.SourceHydrator.SyncSource.Path)
errors[appName] = fmt.Errorf("app %s hydrator use the same destination: %v", app.QualifiedName(), app.Spec.SourceHydrator.SyncSource.Path)
continue
}
uniquePaths[destPath] = app.QualifiedName()
}

// This should be the DRY SHA. We set it here so that after processing the first app, all apps are hydrated
// using the same SHA.
targetRevision = resp.Revision
// If there are any errors, return nil for projects to avoid possible partial processing.
if len(errors) > 0 {
projects = nil
}

// Set up a ManifestsRequest
manifestDetails := make([]*commitclient.HydratedManifestDetails, len(objs))
for i, obj := range objs {
objJSON, err := json.Marshal(obj)
return projects, errors
}
func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, projects map[string]*appv1.AppProject) (string, string, map[string]error, error) {
errors := make(map[string]error)
if len(apps) == 0 {
return "", "", nil, nil
}

// These values are the same for all apps being hydrated together, so just get them from the first app.
repoURL := apps[0].Spec.GetHydrateToSource().RepoURL
targetBranch := apps[0].Spec.GetHydrateToSource().TargetRevision
// FIXME: As a convenience, the commit server will create the syncBranch if it does not exist. If the
// targetBranch does not exist, it will create it based on the syncBranch. On the next line, we take
// the `syncBranch` from the first app and assume that they're all configured the same. Instead, if any
// app has a different syncBranch, we should send the commit server an empty string and allow it to
// create the targetBranch as an orphan since we can't reliable determine a reasonable base.
syncBranch := apps[0].Spec.SourceHydrator.SyncSource.TargetBranch

// Get a static SHA revision from the first app so that all apps are hydrated from the same revision.
targetRevision, pathDetails, err := h.getManifests(context.Background(), apps[0], "", projects[apps[0].Spec.Project])
if err != nil {
errors[apps[0].QualifiedName()] = fmt.Errorf("failed to get manifests: %w", err)
return "", "", errors, nil
}
paths := []*commitclient.PathDetails{pathDetails}

eg, ctx := errgroup.WithContext(context.Background())
var mu sync.Mutex

for _, app := range apps[1:] {
app := app
eg.Go(func() error {
_, pathDetails, err = h.getManifests(ctx, app, targetRevision, projects[app.Spec.Project])
mu.Lock()
defer mu.Unlock()
if err != nil {
return "", "", fmt.Errorf("failed to marshal object: %w", err)
errors[app.QualifiedName()] = fmt.Errorf("failed to get manifests: %w", err)
return errors[app.QualifiedName()]
}
manifestDetails[i] = &commitclient.HydratedManifestDetails{ManifestJSON: string(objJSON)}
}
paths = append(paths, &commitclient.PathDetails{
Path: app.Spec.SourceHydrator.SyncSource.Path,
Manifests: manifestDetails,
Commands: resp.Commands,
paths = append(paths, pathDetails)
return nil
})
}
if err := eg.Wait(); err != nil {
return targetRevision, "", errors, nil
}

// If all the apps are under the same project, use that project. Otherwise, use an empty string to indicate that we
// need global creds.
@@ -320,18 +394,19 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application) (string
if len(projects) == 1 {
for p := range projects {
project = p
break
}
}

// Get the commit metadata for the target revision.
revisionMetadata, err := h.getRevisionMetadata(context.Background(), repoURL, project, targetRevision)
if err != nil {
return "", "", fmt.Errorf("failed to get revision metadata for %q: %w", targetRevision, err)
return targetRevision, "", errors, fmt.Errorf("failed to get revision metadata for %q: %w", targetRevision, err)
}

repo, err := h.dependencies.GetWriteCredentials(context.Background(), repoURL, project)
if err != nil {
return "", "", fmt.Errorf("failed to get hydrator credentials: %w", err)
return targetRevision, "", errors, fmt.Errorf("failed to get hydrator credentials: %w", err)
}
if repo == nil {
// Try without credentials.
@@ -340,27 +415,73 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application) (string
}
logCtx.Warn("no credentials found for repo, continuing without credentials")
}
// get the commit message template
commitMessageTemplate, err := h.dependencies.GetHydratorCommitMessageTemplate()
if err != nil {
return targetRevision, "", errors, fmt.Errorf("failed to get hydrated commit message template: %w", err)
}
commitMessage, errMsg := getTemplatedCommitMessage(repoURL, targetRevision, commitMessageTemplate, revisionMetadata)
if errMsg != nil {
return targetRevision, "", errors, fmt.Errorf("failed to get hydrator commit templated message: %w", errMsg)
}

manifestsRequest := commitclient.CommitHydratedManifestsRequest{
Repo: repo,
SyncBranch: syncBranch,
TargetBranch: targetBranch,
DrySha: targetRevision,
CommitMessage: "[Argo CD Bot] hydrate " + targetRevision,
CommitMessage: commitMessage,
Paths: paths,
DryCommitMetadata: revisionMetadata,
}

closer, commitService, err := h.commitClientset.NewCommitServerClient()
if err != nil {
return targetRevision, "", fmt.Errorf("failed to create commit service: %w", err)
return targetRevision, "", errors, fmt.Errorf("failed to create commit service: %w", err)
}
defer utilio.Close(closer)
resp, err := commitService.CommitHydratedManifests(context.Background(), &manifestsRequest)
if err != nil {
return targetRevision, "", fmt.Errorf("failed to commit hydrated manifests: %w", err)
return targetRevision, "", errors, fmt.Errorf("failed to commit hydrated manifests: %w", err)
}
return targetRevision, resp.HydratedSha, nil
return targetRevision, resp.HydratedSha, errors, nil
}
// getManifests gets the manifests for the given application and target revision. It returns the resolved revision
// (a git SHA), and path details for the commit server.
//
// If the given target revision is empty, it uses the target revision from the app dry source spec.
func (h *Hydrator) getManifests(ctx context.Context, app *appv1.Application, targetRevision string, project *appv1.AppProject) (revision string, pathDetails *commitclient.PathDetails, err error) {
drySource := appv1.ApplicationSource{
RepoURL: app.Spec.SourceHydrator.DrySource.RepoURL,
Path: app.Spec.SourceHydrator.DrySource.Path,
TargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
}
if targetRevision == "" {
targetRevision = app.Spec.SourceHydrator.DrySource.TargetRevision
}

// TODO: enable signature verification
objs, resp, err := h.dependencies.GetRepoObjs(ctx, app, drySource, targetRevision, project)
if err != nil {
return "", nil, fmt.Errorf("failed to get repo objects for app %q: %w", app.QualifiedName(), err)
}

// Set up a ManifestsRequest
manifestDetails := make([]*commitclient.HydratedManifestDetails, len(objs))
for i, obj := range objs {
objJSON, err := json.Marshal(obj)
if err != nil {
return "", nil, fmt.Errorf("failed to marshal object: %w", err)
}
manifestDetails[i] = &commitclient.HydratedManifestDetails{ManifestJSON: string(objJSON)}
}

return resp.Revision, &commitclient.PathDetails{
Path: app.Spec.SourceHydrator.SyncSource.Path,
Manifests: manifestDetails,
Commands: resp.Commands,
}, nil
}
func (h *Hydrator) getRevisionMetadata(ctx context.Context, repoURL, project, revision string) (*appv1.RevisionMetadata, error) {
@@ -386,28 +507,56 @@ func (h *Hydrator) getRevisionMetadata(ctx context.Context, repoURL, project, re
}

// appNeedsHydration answers if application needs manifests hydrated.
func appNeedsHydration(app *appv1.Application, statusHydrateTimeout time.Duration) (needsHydration bool, reason string) {
if app.Spec.SourceHydrator == nil {
return false, "source hydrator not configured"
}

var hydratedAt *metav1.Time
if app.Status.SourceHydrator.CurrentOperation != nil {
hydratedAt = &app.Status.SourceHydrator.CurrentOperation.StartedAt
}

func appNeedsHydration(app *appv1.Application) (needsHydration bool, reason string) {
switch {
case app.IsHydrateRequested():
return true, "hydrate requested"
case app.Spec.SourceHydrator == nil:
return false, "source hydrator not configured"
case app.Status.SourceHydrator.CurrentOperation == nil:
return true, "no previous hydrate operation"
case app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating:
return false, "hydration operation already in progress"
case app.IsHydrateRequested():
return true, "hydrate requested"
case !app.Spec.SourceHydrator.DeepEquals(app.Status.SourceHydrator.CurrentOperation.SourceHydrator):
return true, "spec.sourceHydrator differs"
case app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseFailed && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.FinishedAt.Time) > 2*time.Minute:
return true, "previous hydrate operation failed more than 2 minutes ago"
case hydratedAt == nil || hydratedAt.Add(statusHydrateTimeout).Before(time.Now().UTC()):
return true, "hydration expired"
}

return false, ""
return false, "hydration not needed"
}
// getTemplatedCommitMessage gets the multi-line commit message based on the template defined in the configmap. It is a two step process:
// 1. Get the metadata template engine would use to render the template
// 2. Pass the output of Step 1 and Step 2 to template Render
func getTemplatedCommitMessage(repoURL, revision, commitMessageTemplate string, dryCommitMetadata *appv1.RevisionMetadata) (string, error) {
hydratorCommitMetadata, err := hydrator.GetCommitMetadata(repoURL, revision, dryCommitMetadata)
if err != nil {
return "", fmt.Errorf("failed to get hydrated commit message: %w", err)
}
templatedCommitMsg, err := hydrator.Render(commitMessageTemplate, hydratorCommitMetadata)
if err != nil {
return "", fmt.Errorf("failed to parse template %s: %w", commitMessageTemplate, err)
}
return templatedCommitMsg, nil
}
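getTemplatedCommitMessage delegates to hydrator.GetCommitMetadata and hydrator.Render, which are not shown in this diff. Assuming the rendering is backed by Go's text/template (an assumption, not confirmed here), the second step looks roughly like this sketch; `commitMetadata` and `render` are illustrative names only:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// commitMetadata is a hypothetical stand-in for the metadata produced by
// hydrator.GetCommitMetadata; the real field set is defined in that package.
type commitMetadata struct {
	RepoURL string
	DrySHA  string
}

// render parses and executes a commit-message template against the metadata,
// mirroring the parse-error wrapping seen in getTemplatedCommitMessage above.
func render(tmpl string, data commitMetadata) (string, error) {
	t, err := template.New("commitMessage").Parse(tmpl)
	if err != nil {
		return "", fmt.Errorf("failed to parse template %s: %w", tmpl, err)
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	msg, err := render("[Argo CD Bot] hydrate {{.DrySHA}} from {{.RepoURL}}",
		commitMetadata{RepoURL: "https://example.com/repo.git", DrySHA: "abc123"})
	if err != nil {
		panic(err)
	}
	fmt.Println(msg) // prints: [Argo CD Bot] hydrate abc123 from https://example.com/repo.git
}
```

This also shows why the old hard-coded message "[Argo CD Bot] hydrate " + targetRevision could be replaced by a configmap-driven template without changing the call site.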
// genericHydrationError returns an error that summarizes the hydration errors for all applications.
func genericHydrationError(validationErrors map[string]error) error {
if len(validationErrors) == 0 {
return nil
}

keys := slices.Sorted(maps.Keys(validationErrors))
remainder := "has an error"
if len(keys) > 1 {
remainder = fmt.Sprintf("and %d more have errors", len(keys)-1)
}
return fmt.Errorf("cannot hydrate because application %s %s", keys[0], remainder)
}
// IsRootPath returns whether the path references a root path
func IsRootPath(path string) bool {
clean := filepath.Clean(path)
return clean == "" || clean == "." || clean == string(filepath.Separator)
}
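IsRootPath leans entirely on filepath.Clean, which collapses "path/.." segments and trailing separators, so every spelling of the repository root reduces to "." or "/". A standalone copy shows the behavior:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// IsRootPath is copied verbatim from the hydrator: filepath.Clean normalizes
// the path first, so "manifests/.." and "./" both count as the root.
func IsRootPath(path string) bool {
	clean := filepath.Clean(path)
	return clean == "" || clean == "." || clean == string(filepath.Separator)
}

func main() {
	for _, p := range []string{".", "/", "manifests/..", "manifests/prod"} {
		fmt.Printf("%q root=%v\n", p, IsRootPath(p))
	}
}
```

Note that filepath.Clean("") returns ".", so the `clean == ""` branch never actually fires; it is kept here to match the source.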
(File diff suppressed because it is too large.)
controller/hydrator/mocks/Dependencies.go (generated, 103 lines changed)
@@ -81,6 +81,59 @@ func (_c *Dependencies_AddHydrationQueueItem_Call) RunAndReturn(run func(key typ
return _c
}

// GetHydratorCommitMessageTemplate provides a mock function for the type Dependencies
func (_mock *Dependencies) GetHydratorCommitMessageTemplate() (string, error) {
ret := _mock.Called()

if len(ret) == 0 {
panic("no return value specified for GetHydratorCommitMessageTemplate")
}

var r0 string
var r1 error
if returnFunc, ok := ret.Get(0).(func() (string, error)); ok {
return returnFunc()
}
if returnFunc, ok := ret.Get(0).(func() string); ok {
r0 = returnFunc()
} else {
r0 = ret.Get(0).(string)
}
if returnFunc, ok := ret.Get(1).(func() error); ok {
r1 = returnFunc()
} else {
r1 = ret.Error(1)
}
return r0, r1
}

// Dependencies_GetHydratorCommitMessageTemplate_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetHydratorCommitMessageTemplate'
type Dependencies_GetHydratorCommitMessageTemplate_Call struct {
*mock.Call
}

// GetHydratorCommitMessageTemplate is a helper method to define mock.On call
func (_e *Dependencies_Expecter) GetHydratorCommitMessageTemplate() *Dependencies_GetHydratorCommitMessageTemplate_Call {
return &Dependencies_GetHydratorCommitMessageTemplate_Call{Call: _e.mock.On("GetHydratorCommitMessageTemplate")}
}

func (_c *Dependencies_GetHydratorCommitMessageTemplate_Call) Run(run func()) *Dependencies_GetHydratorCommitMessageTemplate_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}

func (_c *Dependencies_GetHydratorCommitMessageTemplate_Call) Return(s string, err error) *Dependencies_GetHydratorCommitMessageTemplate_Call {
_c.Call.Return(s, err)
return _c
}

func (_c *Dependencies_GetHydratorCommitMessageTemplate_Call) RunAndReturn(run func() (string, error)) *Dependencies_GetHydratorCommitMessageTemplate_Call {
_c.Call.Return(run)
return _c
}

// GetProcessableAppProj provides a mock function for the type Dependencies
func (_mock *Dependencies) GetProcessableAppProj(app *v1alpha1.Application) (*v1alpha1.AppProject, error) {
ret := _mock.Called(app)
@@ -199,8 +252,8 @@ func (_c *Dependencies_GetProcessableApps_Call) RunAndReturn(run func() (*v1alph
}

// GetRepoObjs provides a mock function for the type Dependencies
func (_mock *Dependencies) GetRepoObjs(app *v1alpha1.Application, source v1alpha1.ApplicationSource, revision string, project *v1alpha1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error) {
ret := _mock.Called(app, source, revision, project)
func (_mock *Dependencies) GetRepoObjs(ctx context.Context, app *v1alpha1.Application, source v1alpha1.ApplicationSource, revision string, project *v1alpha1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error) {
ret := _mock.Called(ctx, app, source, revision, project)

if len(ret) == 0 {
panic("no return value specified for GetRepoObjs")
@@ -209,25 +262,25 @@ func (_mock *Dependencies) GetRepoObjs(app *v1alpha1.Application, source v1alpha
var r0 []*unstructured.Unstructured
var r1 *apiclient.ManifestResponse
var r2 error
if returnFunc, ok := ret.Get(0).(func(*v1alpha1.Application, v1alpha1.ApplicationSource, string, *v1alpha1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error)); ok {
return returnFunc(app, source, revision, project)
if returnFunc, ok := ret.Get(0).(func(context.Context, *v1alpha1.Application, v1alpha1.ApplicationSource, string, *v1alpha1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error)); ok {
return returnFunc(ctx, app, source, revision, project)
}
if returnFunc, ok := ret.Get(0).(func(*v1alpha1.Application, v1alpha1.ApplicationSource, string, *v1alpha1.AppProject) []*unstructured.Unstructured); ok {
r0 = returnFunc(app, source, revision, project)
if returnFunc, ok := ret.Get(0).(func(context.Context, *v1alpha1.Application, v1alpha1.ApplicationSource, string, *v1alpha1.AppProject) []*unstructured.Unstructured); ok {
r0 = returnFunc(ctx, app, source, revision, project)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]*unstructured.Unstructured)
}
}
if returnFunc, ok := ret.Get(1).(func(*v1alpha1.Application, v1alpha1.ApplicationSource, string, *v1alpha1.AppProject) *apiclient.ManifestResponse); ok {
r1 = returnFunc(app, source, revision, project)
if returnFunc, ok := ret.Get(1).(func(context.Context, *v1alpha1.Application, v1alpha1.ApplicationSource, string, *v1alpha1.AppProject) *apiclient.ManifestResponse); ok {
r1 = returnFunc(ctx, app, source, revision, project)
} else {
if ret.Get(1) != nil {
r1 = ret.Get(1).(*apiclient.ManifestResponse)
}
}
if returnFunc, ok := ret.Get(2).(func(*v1alpha1.Application, v1alpha1.ApplicationSource, string, *v1alpha1.AppProject) error); ok {
r2 = returnFunc(app, source, revision, project)
if returnFunc, ok := ret.Get(2).(func(context.Context, *v1alpha1.Application, v1alpha1.ApplicationSource, string, *v1alpha1.AppProject) error); ok {
r2 = returnFunc(ctx, app, source, revision, project)
} else {
r2 = ret.Error(2)
}
@@ -240,37 +293,43 @@ type Dependencies_GetRepoObjs_Call struct {
}

// GetRepoObjs is a helper method to define mock.On call
// - ctx context.Context
// - app *v1alpha1.Application
// - source v1alpha1.ApplicationSource
// - revision string
// - project *v1alpha1.AppProject
func (_e *Dependencies_Expecter) GetRepoObjs(app interface{}, source interface{}, revision interface{}, project interface{}) *Dependencies_GetRepoObjs_Call {
return &Dependencies_GetRepoObjs_Call{Call: _e.mock.On("GetRepoObjs", app, source, revision, project)}
func (_e *Dependencies_Expecter) GetRepoObjs(ctx interface{}, app interface{}, source interface{}, revision interface{}, project interface{}) *Dependencies_GetRepoObjs_Call {
return &Dependencies_GetRepoObjs_Call{Call: _e.mock.On("GetRepoObjs", ctx, app, source, revision, project)}
}

func (_c *Dependencies_GetRepoObjs_Call) Run(run func(app *v1alpha1.Application, source v1alpha1.ApplicationSource, revision string, project *v1alpha1.AppProject)) *Dependencies_GetRepoObjs_Call {
func (_c *Dependencies_GetRepoObjs_Call) Run(run func(ctx context.Context, app *v1alpha1.Application, source v1alpha1.ApplicationSource, revision string, project *v1alpha1.AppProject)) *Dependencies_GetRepoObjs_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 *v1alpha1.Application
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(*v1alpha1.Application)
arg0 = args[0].(context.Context)
}
var arg1 v1alpha1.ApplicationSource
var arg1 *v1alpha1.Application
if args[1] != nil {
arg1 = args[1].(v1alpha1.ApplicationSource)
arg1 = args[1].(*v1alpha1.Application)
}
var arg2 string
var arg2 v1alpha1.ApplicationSource
if args[2] != nil {
arg2 = args[2].(string)
arg2 = args[2].(v1alpha1.ApplicationSource)
}
var arg3 *v1alpha1.AppProject
var arg3 string
if args[3] != nil {
arg3 = args[3].(*v1alpha1.AppProject)
arg3 = args[3].(string)
}
var arg4 *v1alpha1.AppProject
if args[4] != nil {
arg4 = args[4].(*v1alpha1.AppProject)
}
run(
arg0,
arg1,
arg2,
arg3,
arg4,
)
})
return _c
@@ -281,7 +340,7 @@ func (_c *Dependencies_GetRepoObjs_Call) Return(unstructureds []*unstructured.Un
return _c
}

func (_c *Dependencies_GetRepoObjs_Call) RunAndReturn(run func(app *v1alpha1.Application, source v1alpha1.ApplicationSource, revision string, project *v1alpha1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error)) *Dependencies_GetRepoObjs_Call {
func (_c *Dependencies_GetRepoObjs_Call) RunAndReturn(run func(ctx context.Context, app *v1alpha1.Application, source v1alpha1.ApplicationSource, revision string, project *v1alpha1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error)) *Dependencies_GetRepoObjs_Call {
_c.Call.Return(run)
return _c
}
controller/hydrator/mocks/RepoGetter.go (generated, new file, 113 lines)
@@ -0,0 +1,113 @@
+// Code generated by mockery; DO NOT EDIT.
+// github.com/vektra/mockery
+// template: testify
+
+package mocks
+
+import (
+	"context"
+
+	"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
+	mock "github.com/stretchr/testify/mock"
+)
+
+// NewRepoGetter creates a new instance of RepoGetter. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
+// The first argument is typically a *testing.T value.
+func NewRepoGetter(t interface {
+	mock.TestingT
+	Cleanup(func())
+}) *RepoGetter {
+	mock := &RepoGetter{}
+	mock.Mock.Test(t)
+
+	t.Cleanup(func() { mock.AssertExpectations(t) })
+
+	return mock
+}
+
+// RepoGetter is an autogenerated mock type for the RepoGetter type
+type RepoGetter struct {
+	mock.Mock
+}
+
+type RepoGetter_Expecter struct {
+	mock *mock.Mock
+}
+
+func (_m *RepoGetter) EXPECT() *RepoGetter_Expecter {
+	return &RepoGetter_Expecter{mock: &_m.Mock}
+}
+
+// GetRepository provides a mock function for the type RepoGetter
+func (_mock *RepoGetter) GetRepository(ctx context.Context, repoURL string, project string) (*v1alpha1.Repository, error) {
+	ret := _mock.Called(ctx, repoURL, project)
+
+	if len(ret) == 0 {
+		panic("no return value specified for GetRepository")
+	}
+
+	var r0 *v1alpha1.Repository
+	var r1 error
+	if returnFunc, ok := ret.Get(0).(func(context.Context, string, string) (*v1alpha1.Repository, error)); ok {
+		return returnFunc(ctx, repoURL, project)
+	}
+	if returnFunc, ok := ret.Get(0).(func(context.Context, string, string) *v1alpha1.Repository); ok {
+		r0 = returnFunc(ctx, repoURL, project)
+	} else {
+		if ret.Get(0) != nil {
+			r0 = ret.Get(0).(*v1alpha1.Repository)
+		}
+	}
+	if returnFunc, ok := ret.Get(1).(func(context.Context, string, string) error); ok {
+		r1 = returnFunc(ctx, repoURL, project)
+	} else {
+		r1 = ret.Error(1)
+	}
+	return r0, r1
+}
+
+// RepoGetter_GetRepository_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetRepository'
+type RepoGetter_GetRepository_Call struct {
+	*mock.Call
+}
+
+// GetRepository is a helper method to define mock.On call
+//   - ctx context.Context
+//   - repoURL string
+//   - project string
+func (_e *RepoGetter_Expecter) GetRepository(ctx interface{}, repoURL interface{}, project interface{}) *RepoGetter_GetRepository_Call {
+	return &RepoGetter_GetRepository_Call{Call: _e.mock.On("GetRepository", ctx, repoURL, project)}
+}
+
+func (_c *RepoGetter_GetRepository_Call) Run(run func(ctx context.Context, repoURL string, project string)) *RepoGetter_GetRepository_Call {
+	_c.Call.Run(func(args mock.Arguments) {
+		var arg0 context.Context
+		if args[0] != nil {
+			arg0 = args[0].(context.Context)
+		}
+		var arg1 string
+		if args[1] != nil {
+			arg1 = args[1].(string)
+		}
+		var arg2 string
+		if args[2] != nil {
+			arg2 = args[2].(string)
+		}
+		run(
+			arg0,
+			arg1,
+			arg2,
+		)
+	})
+	return _c
+}
+
+func (_c *RepoGetter_GetRepository_Call) Return(repository *v1alpha1.Repository, err error) *RepoGetter_GetRepository_Call {
+	_c.Call.Return(repository, err)
+	return _c
+}
+
+func (_c *RepoGetter_GetRepository_Call) RunAndReturn(run func(ctx context.Context, repoURL string, project string) (*v1alpha1.Repository, error)) *RepoGetter_GetRepository_Call {
+	_c.Call.Return(run)
+	return _c
+}
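The generated `GetRepository` mock above dispatches on what was stored as the return value: the slot may hold either a plain value (from `Return`) or a function that computes the result from the call's arguments (from `RunAndReturn`). The sketch below strips that dispatch down to its core with illustrative names; it is not the real mockery API, just a minimal stdlib-only analogue of the type-assertion logic.

```go
package main

import (
	"errors"
	"fmt"
)

// call mimics the generated mock's storage: one slice of return slots,
// where slot 0 may be a concrete value or a function over the arguments.
type call struct{ returns []any }

func (c *call) get(url string) (string, error) {
	// Prefer a function-valued return, like the generated code's type assertions.
	if fn, ok := c.returns[0].(func(string) (string, error)); ok {
		return fn(url)
	}
	// Otherwise fall back to the stored concrete values.
	var err error
	if e, ok := c.returns[1].(error); ok {
		err = e
	}
	return c.returns[0].(string), err
}

func main() {
	// Static return, as set up by Return(...).
	static := &call{returns: []any{"repo-a", nil}}
	v, _ := static.get("ignored")
	fmt.Println(v) // repo-a

	// Computed return, as set up by RunAndReturn(...).
	dynamic := &call{returns: []any{func(url string) (string, error) {
		if url == "" {
			return "", errors.New("empty URL")
		}
		return "repo:" + url, nil
	}, nil}}
	v, _ = dynamic.get("git.example.com")
	fmt.Println(v) // repo:git.example.com
}
```

This is why `RunAndReturn` and `Return` can share one storage slot in the generated code: the first type assertion that succeeds wins.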
@@ -31,7 +31,7 @@ func (ctrl *ApplicationController) GetProcessableApps() (*appv1.ApplicationList,
 	return ctrl.getAppList(metav1.ListOptions{})
 }
 
-func (ctrl *ApplicationController) GetRepoObjs(origApp *appv1.Application, drySource appv1.ApplicationSource, revision string, project *appv1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error) {
+func (ctrl *ApplicationController) GetRepoObjs(ctx context.Context, origApp *appv1.Application, drySource appv1.ApplicationSource, revision string, project *appv1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error) {
 	drySources := []appv1.ApplicationSource{drySource}
 	dryRevisions := []string{revision}
 
@@ -50,7 +50,7 @@ func (ctrl *ApplicationController) GetRepoObjs(origApp *appv1.Application, drySo
 	delete(app.Annotations, appv1.AnnotationKeyManifestGeneratePaths)
 
 	// FIXME: use cache and revision cache
-	objs, resp, _, err := ctrl.appStateManager.GetRepoObjs(app, drySources, appLabelKey, dryRevisions, true, true, false, project, false)
+	objs, resp, _, err := ctrl.appStateManager.GetRepoObjs(ctx, app, drySources, appLabelKey, dryRevisions, true, true, false, project, false)
 	if err != nil {
 		return nil, nil, fmt.Errorf("failed to get repo objects: %w", err)
 	}
@@ -97,3 +97,12 @@ func (ctrl *ApplicationController) PersistAppHydratorStatus(orig *appv1.Applicat
 func (ctrl *ApplicationController) AddHydrationQueueItem(key types.HydrationQueueKey) {
 	ctrl.hydrationQueue.AddRateLimited(key)
 }
+
+func (ctrl *ApplicationController) GetHydratorCommitMessageTemplate() (string, error) {
+	sourceHydratorCommitMessageKey, err := ctrl.settingsMgr.GetSourceHydratorCommitMessageTemplate()
+	if err != nil {
+		return "", fmt.Errorf("failed to get sourceHydrator commit message template key: %w", err)
+	}
+
+	return sourceHydratorCommitMessageKey, nil
+}
@@ -14,6 +14,7 @@ import (
 	"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
 	"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
 	"github.com/argoproj/argo-cd/v3/test"
+	"github.com/argoproj/argo-cd/v3/util/settings"
 )
 
 func TestGetRepoObjs(t *testing.T) {
@@ -46,7 +47,7 @@ func TestGetRepoObjs(t *testing.T) {
 	source := app.Spec.GetSource()
 	source.RepoURL = "oci://example.com/argo/argo-cd"
 
-	objs, resp, err := ctrl.GetRepoObjs(app, source, "abc123", &v1alpha1.AppProject{
+	objs, resp, err := ctrl.GetRepoObjs(t.Context(), app, source, "abc123", &v1alpha1.AppProject{
 		ObjectMeta: metav1.ObjectMeta{
 			Name:      "default",
 			Namespace: test.FakeArgoCDNamespace,
@@ -77,3 +78,46 @@ func TestGetRepoObjs(t *testing.T) {
 
 	assert.Equal(t, "ConfigMap", objs[0].GetKind())
 }
+
+func TestGetHydratorCommitMessageTemplate_WhenTemplateisNotDefined_FallbackToDefault(t *testing.T) {
+	cm := test.NewConfigMap()
+	cmBytes, _ := json.Marshal(cm)
+
+	data := fakeData{
+		manifestResponse: &apiclient.ManifestResponse{
+			Manifests: []string{string(cmBytes)},
+			Namespace: test.FakeDestNamespace,
+			Server:    test.FakeClusterURL,
+			Revision:  "abc123",
+		},
+	}
+
+	ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
+
+	tmpl, err := ctrl.GetHydratorCommitMessageTemplate()
+	require.NoError(t, err)
+	assert.NotEmpty(t, tmpl) // should fallback to default
+	assert.Equal(t, settings.CommitMessageTemplate, tmpl)
+}
+
+func TestGetHydratorCommitMessageTemplate(t *testing.T) {
+	cm := test.NewFakeConfigMap()
+	cm.Data["sourceHydrator.commitMessageTemplate"] = settings.CommitMessageTemplate
+	cmBytes, _ := json.Marshal(cm)
+
+	data := fakeData{
+		manifestResponse: &apiclient.ManifestResponse{
+			Manifests: []string{string(cmBytes)},
+			Namespace: test.FakeDestNamespace,
+			Server:    test.FakeClusterURL,
+			Revision:  "abc123",
+		},
+		configMapData: cm.Data,
+	}
+
+	ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
+
+	tmpl, err := ctrl.GetHydratorCommitMessageTemplate()
+	require.NoError(t, err)
+	assert.NotEmpty(t, tmpl)
+}
@@ -72,7 +72,7 @@ type managedResource struct {
 type AppStateManager interface {
 	CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
 	SyncAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, state *v1alpha1.OperationState)
-	GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
+	GetRepoObjs(ctx context.Context, app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
 }
 
 // comparisonResult holds the state of an application after the reconciliation
@@ -127,9 +127,9 @@ type appStateManager struct {
 // task to the repo-server. It returns the list of generated manifests as unstructured
 // objects. It also returns the full response from all calls to the repo server as the
 // second argument.
-func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error) {
+func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error) {
 	ts := stats.NewTimingStats()
-	helmRepos, err := m.db.ListHelmRepositories(context.Background())
+	helmRepos, err := m.db.ListHelmRepositories(ctx)
 	if err != nil {
 		return nil, nil, false, fmt.Errorf("failed to list Helm repositories: %w", err)
 	}
@@ -138,7 +138,7 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
 		return nil, nil, false, fmt.Errorf("failed to get permitted Helm repositories for project %q: %w", proj.Name, err)
 	}
 
-	ociRepos, err := m.db.ListOCIRepositories(context.Background())
+	ociRepos, err := m.db.ListOCIRepositories(ctx)
 	if err != nil {
 		return nil, nil, false, fmt.Errorf("failed to list OCI repositories: %w", err)
 	}
@@ -148,7 +148,7 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
 	}
 
 	ts.AddCheckpoint("repo_ms")
-	helmRepositoryCredentials, err := m.db.GetAllHelmRepositoryCredentials(context.Background())
+	helmRepositoryCredentials, err := m.db.GetAllHelmRepositoryCredentials(ctx)
 	if err != nil {
 		return nil, nil, false, fmt.Errorf("failed to get Helm credentials: %w", err)
 	}
@@ -157,7 +157,7 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
 		return nil, nil, false, fmt.Errorf("failed to get permitted Helm credentials for project %q: %w", proj.Name, err)
 	}
 
-	ociRepositoryCredentials, err := m.db.GetAllOCIRepositoryCredentials(context.Background())
+	ociRepositoryCredentials, err := m.db.GetAllOCIRepositoryCredentials(ctx)
 	if err != nil {
 		return nil, nil, false, fmt.Errorf("failed to get OCI credentials: %w", err)
 	}
@@ -192,7 +192,7 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
 		return nil, nil, false, fmt.Errorf("failed to get installation ID: %w", err)
 	}
 
-	destCluster, err := argo.GetDestinationCluster(context.Background(), app.Spec.Destination, m.db)
+	destCluster, err := argo.GetDestinationCluster(ctx, app.Spec.Destination, m.db)
 	if err != nil {
 		return nil, nil, false, fmt.Errorf("failed to get destination cluster: %w", err)
 	}
@@ -218,7 +218,7 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
 	// Store the map of all sources having ref field into a map for applications with sources field
 	// If it's for a rollback process, the refSources[*].targetRevision fields are the desired
 	// revisions for the rollback
-	refSources, err := argo.GetRefSources(context.Background(), sources, app.Spec.Project, m.db.GetRepository, revisions)
+	refSources, err := argo.GetRefSources(ctx, sources, app.Spec.Project, m.db.GetRepository, revisions)
 	if err != nil {
 		return nil, nil, false, fmt.Errorf("failed to get ref sources: %w", err)
 	}
@@ -231,7 +231,7 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
 		if len(revisions) < len(sources) || revisions[i] == "" {
 			revisions[i] = source.TargetRevision
 		}
-		repo, err := m.db.GetRepository(context.Background(), source.RepoURL, proj.Name)
+		repo, err := m.db.GetRepository(ctx, source.RepoURL, proj.Name)
 		if err != nil {
 			return nil, nil, false, fmt.Errorf("failed to get repo %q: %w", source.RepoURL, err)
 		}
@@ -255,12 +255,12 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
 
 		if !source.IsHelm() && !source.IsOCI() && syncedRevision != "" && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
 			// Validate the manifest-generate-path annotation to avoid generating manifests if it has not changed.
-			updateRevisionResult, err := repoClient.UpdateRevisionForPaths(context.Background(), &apiclient.UpdateRevisionForPathsRequest{
+			updateRevisionResult, err := repoClient.UpdateRevisionForPaths(ctx, &apiclient.UpdateRevisionForPathsRequest{
 				Repo:            repo,
 				Revision:        revision,
 				SyncedRevision:  syncedRevision,
 				NoRevisionCache: noRevisionCache,
-				Paths:           path.GetAppRefreshPaths(app),
+				Paths:           path.GetSourceRefreshPaths(app, source),
 				AppLabelKey:     appLabelKey,
 				AppName:         app.InstanceName(m.namespace),
 				Namespace:       appNamespace,
@@ -301,7 +301,7 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
 		}
 
 		log.Debugf("Generating Manifest for source %s revision %s", source, revision)
-		manifestInfo, err := repoClient.GenerateManifest(context.Background(), &apiclient.ManifestRequest{
+		manifestInfo, err := repoClient.GenerateManifest(ctx, &apiclient.ManifestRequest{
 			Repo:     repo,
 			Repos:    repos,
 			Revision: revision,
@@ -558,7 +558,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
 	appLabelKey, resourceOverrides, resFilter, installationID, trackingMethod, err := m.getComparisonSettings()
 	ts.AddCheckpoint("settings_ms")
 	if err != nil {
 		// return unknown comparison result if basic comparison settings cannot be loaded
 		log.Infof("Basic comparison settings cannot be loaded, using unknown comparison: %s", err.Error())
 		return &comparisonResult{syncStatus: syncStatus, healthStatus: health.HealthStatusUnknown}, nil
 	}
@@ -594,7 +594,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
 		}
 	}
 
-	targetObjs, manifestInfos, revisionsMayHaveChanges, err = m.GetRepoObjs(app, sources, appLabelKey, revisions, noCache, noRevisionCache, verifySignature, project, true)
+	targetObjs, manifestInfos, revisionsMayHaveChanges, err = m.GetRepoObjs(context.Background(), app, sources, appLabelKey, revisions, noCache, noRevisionCache, verifySignature, project, true)
 	if err != nil {
 		targetObjs = make([]*unstructured.Unstructured, 0)
 		msg := "Failed to load target state: " + err.Error()
 
@@ -1845,6 +1845,6 @@ func TestCompareAppState_DoesNotCallUpdateRevisionForPaths_ForOCI(t *testing.T)
 	sources := make([]v1alpha1.ApplicationSource, 0)
 	sources = append(sources, source)
 
-	_, _, _, err := ctrl.appStateManager.GetRepoObjs(app, sources, "abc123", []string{"123456"}, false, false, false, &defaultProj, false)
+	_, _, _, err := ctrl.appStateManager.GetRepoObjs(t.Context(), app, sources, "abc123", []string{"123456"}, false, false, false, &defaultProj, false)
 	require.NoError(t, err)
 }
@@ -62,7 +62,7 @@ applied to all new issues in our tracker.
 
 If it is not yet possible to classify the bug, i.e. because more information is
 required to correctly classify the bug, you should always set the label
-`bug/in-triage` to make it clear that triage process has started but could not
+`triage/pending` to make it clear that triage process has started but could not
 yet be completed.
 
 ### Issue type
@@ -1,7 +1,8 @@
 # Proxy Extensions
 
 !!! warning "Beta Feature (Since 2.7.0)"
     This feature is in the [Beta](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#beta) stage.
+    It is generally considered stable, but there may be unhandled edge cases.
 
 ## Overview
@@ -29,7 +30,7 @@ metadata:
   name: argocd-cmd-params-cm
   namespace: argocd
 data:
-  server.enable.proxy.extension: "true"
+  server.enable.proxy.extension: 'true'
 ```
 
 Once the proxy extension is enabled, it can be configured in the main
@@ -102,11 +103,12 @@ respect the new configuration.
 
 Every configuration entry is explained below:
 
-#### `extensions` (*list*)
+#### `extensions` (_list_)
 
 Defines configurations for all extensions enabled.
 
-#### `extensions.name` (*string*)
+#### `extensions.name` (_string_)
 
 (mandatory)
 
 Defines the endpoint that will be used to register the extension
@@ -116,54 +118,61 @@ following url:
 
     <argocd-host>/extensions/my-extension
 
-#### `extensions.backend.connectionTimeout` (*duration string*)
+#### `extensions.backend.connectionTimeout` (_duration string_)
 
 (optional. Default: 2s)
 
 Is the maximum amount of time a dial to the extension server will wait
 for a connect to complete.
 
-#### `extensions.backend.keepAlive` (*duration string*)
+#### `extensions.backend.keepAlive` (_duration string_)
 
 (optional. Default: 15s)
 
 Specifies the interval between keep-alive probes for an active network
 connection between the API server and the extension server.
 
-#### `extensions.backend.idleConnectionTimeout` (*duration string*)
+#### `extensions.backend.idleConnectionTimeout` (_duration string_)
 
 (optional. Default: 60s)
 
 Is the maximum amount of time an idle (keep-alive) connection between
 the API server and the extension server will remain idle before
 closing itself.
 
-#### `extensions.backend.maxIdleConnections` (*int*)
+#### `extensions.backend.maxIdleConnections` (_int_)
 
 (optional. Default: 30)
 
 Controls the maximum number of idle (keep-alive) connections between
 the API server and the extension server.
 
-#### `extensions.backend.services` (*list*)
+#### `extensions.backend.services` (_list_)
 
 Defines a list with backend url by cluster.
 
-#### `extensions.backend.services.url` (*string*)
+#### `extensions.backend.services.url` (_string_)
 
 (mandatory)
 
 Is the address where the extension backend must be available.
 
-#### `extensions.backend.services.headers` (*list*)
+#### `extensions.backend.services.headers` (_list_)
 
 If provided, the headers list will be added on all outgoing requests
 for this service config. Existing headers in the incoming request with
 the same name will be overridden by the one in this list. Reserved header
 names will be ignored (see the [headers](#incoming-request-headers) below).
 
-#### `extensions.backend.services.headers.name` (*string*)
+#### `extensions.backend.services.headers.name` (_string_)
 
 (mandatory)
 
 Defines the name of the header. It is a mandatory field if a header is
 provided.
 
-#### `extensions.backend.services.headers.value` (*string*)
+#### `extensions.backend.services.headers.value` (_string_)
 
 (mandatory)
 
 Defines the value of the header. It is a mandatory field if a header is
@@ -178,7 +187,8 @@ Example:
 In the example above, the value will be replaced with the one from
 the argocd-secret with key 'some.argocd.secret.key'.
 
-#### `extensions.backend.services.cluster` (*object*)
+#### `extensions.backend.services.cluster` (_object_)
 
 (optional)
 
 If provided, and multiple services are configured, will have to match
@@ -190,17 +200,19 @@ send requests to the proper backend service. If only one backend
 service is configured, this field is ignored, and all requests are
 forwarded to the configured one.
 
-#### `extensions.backend.services.cluster.name` (*string*)
+#### `extensions.backend.services.cluster.name` (_string_)
 
 (optional)
 
 It will be matched with the value from
 `Application.Spec.Destination.Name`
 
-#### `extensions.backend.services.cluster.server` (*string*)
+#### `extensions.backend.services.cluster.server` (_string_)
 
 (optional)
 
 It will be matched with the value from
 `Application.Spec.Destination.Server`.
 
 ## Usage
 
@@ -245,7 +257,7 @@ Argo CD UI keeps the authentication token stored in a cookie
 (`argocd.token`). This value needs to be sent in the `Cookie` header
 so the API server can validate its authenticity.
 
 Example:
 
     Cookie: argocd.token=eyJhbGciOiJIUzI1Ni...
 
@@ -299,11 +311,16 @@ section for more details.
 
 #### `Argocd-Username`
 
-Will be populated with the username logged in Argo CD.
+Will be populated with the username logged in Argo CD. This is primarily useful for display purposes.
+To identify a user for programmatic needs, `Argocd-User-Id` is probably a better choice.
+
+#### `Argocd-User-Id`
+
+Will be populated with the internal user id, most often defined by the `sub` claim, logged in Argo CD.
 
 #### `Argocd-User-Groups`
 
-Will be populated with the 'groups' claim from the user logged in Argo CD.
+Will be populated with the configured RBAC scopes, most often the `groups` claim, from the user logged in Argo CD.
 
 ### Multi Backend Use-Case
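The routing rule described for `extensions.backend.services.cluster` can be sketched as follows. This is an illustrative model of the documented behavior, not Argo CD's actual implementation: with several services configured, the request's destination cluster is matched against each service's `cluster.name` or `cluster.server`; with a single service, the cluster block is ignored.

```go
package main

import "fmt"

// clusterCfg and serviceCfg mirror the shape of the documented
// extensions.backend.services entries (names are illustrative).
type clusterCfg struct{ Name, Server string }

type serviceCfg struct {
	URL     string
	Cluster *clusterCfg
}

// pickService applies the documented matching rule and returns the
// backend URL for the given destination, or false if nothing matches.
func pickService(services []serviceCfg, destName, destServer string) (string, bool) {
	if len(services) == 1 {
		return services[0].URL, true // single service: cluster config is ignored
	}
	for _, s := range services {
		if s.Cluster == nil {
			continue
		}
		if (s.Cluster.Name != "" && s.Cluster.Name == destName) ||
			(s.Cluster.Server != "" && s.Cluster.Server == destServer) {
			return s.URL, true
		}
	}
	return "", false
}

func main() {
	services := []serviceCfg{
		{URL: "http://backend-1:8080", Cluster: &clusterCfg{Name: "kubernetes-1"}},
		{URL: "http://backend-2:8080", Cluster: &clusterCfg{Server: "https://cluster-2"}},
	}
	url, _ := pickService(services, "", "https://cluster-2")
	fmt.Println(url) // http://backend-2:8080
}
```

In practice the destination values come from `Application.Spec.Destination.Name` and `Application.Spec.Destination.Server`, as the sections above explain.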
@@ -18,9 +18,11 @@ These are the upcoming releases dates:
 | v2.13 | Monday, Sep. 16, 2024 | Monday, Nov. 4, 2024 | [Regina Voloshin](https://github.com/reggie-k) | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [checklist](https://github.com/argoproj/argo-cd/issues/19513) |
 | v2.14 | Monday, Dec. 16, 2024 | Monday, Feb. 3, 2025 | [Ryan Umstead](https://github.com/rumstead) | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [checklist](https://github.com/argoproj/argo-cd/issues/20869) |
 | v3.0 | Monday, Mar. 17, 2025 | Tuesday, May 6, 2025 | [Regina Voloshin](https://github.com/reggie-k) | | [checklist](https://github.com/argoproj/argo-cd/issues/21735) |
-| v3.1 | Monday, Jun. 16, 2025 | Monday, Aug. 4, 2025 | [Christian Hernandez](https://github.com/christianh814) | [Alexandre Gaudreault](https://github.com/agaudreault) | [checklist](#) |
-| v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | [Nitish Kumar](https://github.com/nitishfy) | | [checklist](#) |
-| v3.3 | Monday, Dec. 15, 2025 | Monday, Feb. 2, 2026 | | |
+| v3.1 | Monday, Jun. 16, 2025 | Monday, Aug. 4, 2025 | [Christian Hernandez](https://github.com/christianh814) | [Alexandre Gaudreault](https://github.com/agaudreault) | [checklist](https://github.com/argoproj/argo-cd/issues/23347) |
+| v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | [Nitish Kumar](https://github.com/nitishfy) | [Michael Crenshaw](https://github.com/crenshaw-dev) | [checklist](https://github.com/argoproj/argo-cd/issues/24539) |
+| v3.3 | Monday, Dec. 15, 2025 | Monday, Feb. 2, 2026 | [Peter Jiang](https://github.com/pjiang-dev) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/25211) |
+| v3.4 | Monday, Mar. 16, 2026 | Monday, May. 4, 2026 | | |
+| v3.5 | Monday, Jun. 15, 2026 | Monday, Aug. 3, 2026 | | |
 
 Actual release dates might differ from the plan by a few days.
@@ -84,3 +84,13 @@ If you want to see how much coverage just a specific module (i.e. your new one)
 ...
 ok  	github.com/argoproj/argo-cd/server/cache	0.029s	coverage: 89.3% of statements
 ```
+
+## Cherry-picking fixes
+
+If your PR contains a bug fix, and you want to have that fix backported to a previous release branch, please label your
+PR with `cherry-pick/x.y` (example: `cherry-pick/3.1`). If you do not have access to add labels, ask a maintainer to add
+them for you.
+
+If you add labels before the PR is merged, the cherry-pick bot will open the backport PRs when your PR is merged.
+
+Adding a label after the PR is merged will also cause the bot to open the backport PR.
@@ -9,7 +9,7 @@ As of version 2.5, Argo CD supports managing `Application` resources in namespac
 
 Argo CD administrators can define a certain set of namespaces where `Application` resources may be created, updated and reconciled in. However, applications in these additional namespaces will only be allowed to use certain `AppProjects`, as configured by the Argo CD administrators. This allows ordinary Argo CD users (e.g. application teams) to use patterns like declarative management of `Application` resources, implementing app-of-apps and others without the risk of a privilege escalation through usage of other `AppProjects` that would exceed the permissions granted to the application teams.
 
 Some manual steps will need to be performed by the Argo CD administrator in order to enable this feature.
 
 One additional advantage of adopting applications in any namespace is to allow end-users to configure notifications for their Argo CD application in the namespace where Argo CD application is running in. See notifications [namespace based configuration](notifications/index.md#namespace-based-configuration) page for more information.
 
@@ -124,9 +124,13 @@ The `.spec.sourceNamespaces` field of the `AppProject` is a list that can contai
 !!! note
     For backwards compatibility, Applications in the Argo CD control plane's namespace (`argocd`) are allowed to set their `.spec.project` field to reference any AppProject, regardless of the restrictions placed by the AppProject's `.spec.sourceNamespaces` field.
 
+!!! note
+    Currently it's not possible to have a applicationset in one namespace and have the application
+    be generated in another. See [#11104](https://github.com/argoproj/argo-cd/issues/11104) for more info.
+
 ### Application names
 
 For the CLI and UI, applications are now referred to and displayed as in the format `<namespace>/<name>`.
 
 For backwards compatibility, if the namespace of the Application is the control plane's namespace (i.e. `argocd`), the `<namespace>` can be omitted from the application name when referring to it. For example, the application names `argocd/someapp` and `someapp` are semantically the same and refer to the same application in the CLI and the UI.
@@ -226,7 +226,7 @@ spec:
   # Sync policy
   syncPolicy:
     automated: # automated sync by default retries failed attempts 5 times with following delays between attempts ( 5s, 10s, 20s, 40s, 80s ); retry controlled using `retry` field.
-      enable: true # Enables automated syncing of the application ( true by default ).
+      enabled: true # Enables automated syncing of the application ( true by default ).
       prune: true # Specifies if resources should be pruned during auto-syncing ( false by default ).
       selfHeal: true # Specifies if partial app sync should be executed when resources are changed only in target Kubernetes cluster and no git change detected ( false by default ).
       allowEmpty: false # Allows deleting all application resources during automatic syncing ( false by default ).
@@ -266,7 +266,8 @@ spec:
     - /spec/replicas
   - kind: ConfigMap
     jqPathExpressions:
-    - '.data["config.yaml"].auth'
+    # Example: Ignore changes to a specific key inside a ConfigMap
+    - '.data["config.yaml"]'
   # for the specified managedFields managers
   - group: "*"
     kind: "*"
@@ -439,3 +439,22 @@ data:
 
   # application.sync.impersonation.enabled enables application sync to use a custom service account, via impersonation. This allows decoupling sync from control-plane service account.
   application.sync.impersonation.enabled: "false"
+
+  ### SourceHydrator commit message template.
+  # This template iterates through the fields in the `.metadata` object,
+  # and formats them based on their type (map, array, or primitive values).
+  # This is the default template and targets specific metadata properties
+  sourceHydrator.commitMessageTemplate: |
+    {{.metadata.drySha | trunc 7}}: {{ .metadata.subject }}
+    {{- if .metadata.body }}
+
+    {{ .metadata.body }}
+    {{- end }}
+    {{ range $ref := .metadata.references }}
+    {{- if and $ref.commit $ref.commit.author }}
+    Co-authored-by: {{ $ref.commit.author }}
+    {{- end }}
+    {{- end }}
+    {{- if .metadata.author }}
+    Co-authored-by: {{ .metadata.author }}
+    {{- end }}
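The commit message template added here is a Go template; the first line truncates the dry SHA to 7 characters and appends the subject. A minimal sketch of how that first line renders, assuming a sprig-style `trunc` (Go's `text/template` has no built-in `trunc`, so a stand-in with the same piped-argument order is registered; the data shape is an illustrative guess at the hydrator's `.metadata` object):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderCommitMessage renders a cut-down version of the default
// sourceHydrator.commitMessageTemplate's first line.
func renderCommitMessage(data map[string]any) (string, error) {
	funcs := template.FuncMap{
		// sprig-like trunc: when piped, the string arrives as the last argument.
		"trunc": func(n int, s string) string {
			if len(s) > n {
				return s[:n]
			}
			return s
		},
	}
	tmpl, err := template.New("msg").Funcs(funcs).Parse(
		"{{.metadata.drySha | trunc 7}}: {{ .metadata.subject }}")
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	msg, err := renderCommitMessage(map[string]any{
		"metadata": map[string]any{
			"drySha":  "abc1234def5678",
			"subject": "fix: handle nil project",
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(msg) // abc1234: fix: handle nil project
}
```

The remaining template lines (body, references, Co-authored-by trailers) follow the same pattern, using `{{- ... }}` to trim whitespace around optional sections.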
@@ -3,5 +3,5 @@
 An example of an argocd-cmd-params-cm.yaml file:
 
 ```yaml
-{!docs/operator-manual/argocd-cmd-params-cm.yaml!}
+{ !docs/operator-manual/argocd-cmd-params-cm.yaml! }
 ```
@@ -66,6 +66,8 @@ data:
  controller.self.heal.backoff.cap.seconds: "300"
  # Specifies a sync timeout for applications. "0" means no timeout (default "0")
  controller.sync.timeout.seconds: "0"
  # Specifies the delay in seconds between each sync wave to give other controllers a chance to react to spec changes. (default "2")
  controller.sync.wave.delay.seconds: "2"

  # Cache expiration for app state (default 1h0m0s)
  controller.app.state.cache.expiration: "1h0m0s"
@@ -217,6 +219,8 @@ data:
  reposerver.git.lsremote.parallelism.limit: "0"
  # Git requests timeout.
  reposerver.git.request.timeout: "15s"
  # Enable builtin git configuration options that are required for correct argocd-repo-server operation (default "true")
  reposerver.enable.builtin.git.config: "true"
  # Include hidden directories from Git
  reposerver.include.hidden.directories: "false"

@@ -290,6 +294,10 @@ data:
  applicationsetcontroller.global.preserved.labels: "acme.com/label1,acme.com/label2"
  # Enable GitHub API metrics for generators that use GitHub API
  applicationsetcontroller.enable.github.api.metrics: "false"
  # The maximum number of resources stored in the status of an ApplicationSet. This is a safeguard to prevent the status from growing too large.
  applicationsetcontroller.status.max.resources.count: "5000"
  # Enables profile endpoint on the internal metrics port
  applicationsetcontroller.profile.enabled: "false"

  ## Argo CD Notifications Controller Properties
  # Set the logging level. One of: debug|info|warn|error (default "info")

@@ -121,6 +121,18 @@ spec:
...
```

### Deleting child applications

When working with the App of Apps pattern, you may need to delete individual child applications. Starting in 3.2, Argo CD provides consistent deletion behaviour whether you delete from the Applications List or from the parent application's Resource Tree.

For detailed information about deletion options and behaviour, including:

- Consistent deletion across UI views
- Non-cascading (orphan) deletion to preserve managed resources
- Child application detection and improved dialog messages
- Best practices and example scenarios

See [Deleting Applications in the UI](../user-guide/app_deletion.md#deleting-applications-in-the-ui).

### Ignoring differences in child applications

To allow changes in child apps, such as temporary modifications while debugging, without triggering an out-of-sync status, the App of Apps pattern works with [diff customization](../user-guide/diffing/). The example below shows how to ignore changes to syncPolicy and other common values.
@@ -136,7 +148,6 @@ spec:
  ignoreDifferences:
  - group: "*"
    kind: "Application"
    namespace: "*"
    jsonPointers:
    # Allow manually disabling auto sync for apps, useful for debugging.
    - /spec/syncPolicy/automated

43
docs/operator-manual/git_configuration.md
Normal file
@@ -0,0 +1,43 @@

# Git Configuration

## System Configuration

Argo CD uses the Git installation from its base image (Ubuntu), which includes a standard system configuration file located at `/etc/gitconfig`. This file is minimal, just defining filters necessary for Git LFS functionality.

You can customize Git's system configuration by mounting a file from a ConfigMap or by creating a custom Argo CD image.

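As an illustration of the ConfigMap approach, the sketch below mounts a custom system config over `/etc/gitconfig` in the repo-server. The ConfigMap name, settings, and mount wiring are all hypothetical, and the volume/volumeMount must be added to the `argocd-repo-server` Deployment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-gitconfig
data:
  gitconfig: |
    [http]
        lowSpeedLimit = 1000
        lowSpeedTime = 30
# In the argocd-repo-server Deployment spec (sketch):
#   volumes:
#   - name: custom-gitconfig
#     configMap:
#       name: custom-gitconfig
#   volumeMounts:
#   - name: custom-gitconfig
#     mountPath: /etc/gitconfig
#     subPath: gitconfig
```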
## Global Configuration

Argo CD runs Git with the `HOME` environment variable set to `/dev/null`. As a result, global Git configuration is not supported.

## Built-in Configuration

The `argocd-repo-server` adds specific configuration parameters to the Git environment to ensure proper Argo CD operation. These built-in settings override any conflicting values from the system Git configuration.

Currently, the following built-in configuration options are set:

- `maintenance.autoDetach=false`
- `gc.autoDetach=false`

These settings force Git's repository maintenance tasks to run in the foreground. This prevents Git from running detached background processes that could modify the repository and interfere with subsequent Git invocations from `argocd-repo-server`.

You can disable these built-in settings by setting the `argocd-cmd-params-cm` value `reposerver.enable.builtin.git.config` to `"false"`, for example to experiment with background processing, or if you are certain that concurrency issues will not occur in your environment.

> [!NOTE]
> Disabling this is not recommended and is not supported!
@@ -2,7 +2,7 @@

Argo CD is largely stateless. All data is persisted as Kubernetes objects, which in turn is stored in Kubernetes' etcd. Redis is only used as a throw-away cache and can be lost. When lost, it will be rebuilt without loss of service.

A set of [HA manifests](https://github.com/argoproj/argo-cd/tree/master/manifests/ha) are provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.
A set of [HA manifests](https://github.com/argoproj/argo-cd/tree/stable/manifests/ha) are provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.

!!! note

@@ -63,7 +63,7 @@ performance. For performance reasons the controller monitors and caches only the
preferred version into a version of the resource stored in Git. If `kubectl convert` fails because the conversion is not supported then the controller falls back to Kubernetes API query which slows down
reconciliation. In this case, we advise to use the preferred resource version in Git.

* The controller polls Git every 3m by default. You can change this duration using the `timeout.reconciliation` and `timeout.reconciliation.jitter` setting in the `argocd-cm` ConfigMap. The value of the fields is a duration string e.g `60s`, `1m`, `1h` or `1d`.
* The controller polls Git every 3m by default. You can change this duration using the `timeout.reconciliation` and `timeout.reconciliation.jitter` setting in the `argocd-cm` ConfigMap. The value of the fields is a [duration string](https://pkg.go.dev/time#ParseDuration) e.g `60s`, `1m` or `1h`.

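As a sketch, the polling interval and jitter could be tuned in `argocd-cm` like this (the values shown are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  # Poll Git every 5 minutes instead of the 3m default
  timeout.reconciliation: 300s
  # Add up to 60s of random jitter to spread polling load across applications
  timeout.reconciliation.jitter: 60s
```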
* If the controller is managing too many clusters and uses too much memory then you can shard clusters across multiple
controller replicas. To enable sharding, increase the number of replicas in `argocd-application-controller` `StatefulSet`
@@ -205,7 +205,9 @@ If the manifest generation has no side effects then requests are processed in pa

### Manifest Paths Annotation

Argo CD aggressively caches generated manifests and uses the repository commit SHA as a cache key. A new commit to the Git repository invalidates the cache for all applications configured in the repository.
This can negatively affect repositories with multiple applications. You can use [webhooks](https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/webhook.md) and the `argocd.argoproj.io/manifest-generate-paths` Application CRD annotation to solve this problem and improve performance.
This can negatively affect repositories with multiple applications. You can use the `argocd.argoproj.io/manifest-generate-paths` Application CRD annotation to solve this problem and improve performance.

Note: The `argocd.argoproj.io/manifest-generate-paths` annotation is available for use with webhooks. Since Argo CD v2.11, this annotation can also be used **without configuring any webhooks**. Webhooks are not a pre-condition for this feature. You can rely on the annotation alone to optimize manifest generation for all applications.

The `argocd.argoproj.io/manifest-generate-paths` annotation contains a semicolon-separated list of paths within the Git repository that are used during manifest generation. It will use the paths specified in the annotation to compare the last cached revision to the latest commit. If no modified files match the paths specified in `argocd.argoproj.io/manifest-generate-paths`, then it will not trigger application reconciliation and the existing cache will be considered valid for the new commit.

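As an illustration, the annotation is set on the Application itself; the application name and paths below are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  annotations:
    # Semicolon-separated list; only commits touching these paths
    # invalidate this application's manifest cache
    argocd.argoproj.io/manifest-generate-paths: .;../shared
```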
@@ -396,13 +398,16 @@ Not all HTTP responses are eligible for retries. The following conditions will n

## CPU/Memory Profiling

Argo CD optionally exposes a profiling endpoint that can be used to profile the CPU and memory usage of the Argo CD component.
The profiling endpoint is available on metrics port of each component. See [metrics](./metrics.md) for more information about the port.
For security reasons the profiling endpoint is disabled by default. The endpoint can be enabled by setting the `server.profile.enabled`
or `controller.profile.enabled` key of [argocd-cmd-params-cm](argocd-cmd-params-cm.yaml) ConfigMap to `true`.
Once the endpoint is enabled you can use go profile tool to collect the CPU and memory profiles. Example:
Argo CD optionally exposes a profiling endpoint that can be used to profile the CPU and memory usage of the Argo CD component.
The profiling endpoint is available on the metrics port of each component. See [metrics](./metrics.md) for more information about the port.
For security reasons, the profiling endpoint is disabled by default. The endpoint can be enabled by setting the `server.profile.enabled`, `applicationsetcontroller.profile.enabled`, or `controller.profile.enabled` key of [argocd-cmd-params-cm](argocd-cmd-params-cm.yaml) ConfigMap to `true`.
Once the endpoint is enabled, you can use the Go `pprof` tool to collect the CPU and memory profiles. Example:

```bash
$ kubectl port-forward svc/argocd-metrics 8082:8082
$ go tool pprof http://localhost:8082/debug/pprof/heap
```
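For instance, enabling the endpoint for the API server and application controller via the keys named above could look like this sketch of `argocd-cmd-params-cm`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
data:
  # Expose /debug/pprof on the respective component's metrics port
  server.profile.enabled: "true"
  controller.profile.enabled: "true"
```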
@@ -434,6 +434,8 @@ spec:

Once we create this service, we can configure the Ingress to conditionally route all `application/grpc` traffic to the new HTTP2 backend, using the `alb.ingress.kubernetes.io/conditions` annotation, as seen below. Note: The value after the `.` in the condition annotation _must_ be the same name as the service that you want traffic to route to - and will be applied on any path with a matching serviceName.

Also note that we can configure the health check to return the gRPC health status code `OK - 0` from the argocd-server by setting the health check path to `/grpc.health.v1.Health/Check`. By default, the ALB health check for gRPC returns the status code `UNIMPLEMENTED - 12` on health check path `/AWS.ALB/healthcheck`.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
@@ -444,6 +446,9 @@ Once we create this service, we can configure the Ingress to conditionally route
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    # Use this annotation to receive OK - 0 instead of UNIMPLEMENTED - 12 for gRPC health check.
    alb.ingress.kubernetes.io/healthcheck-path: /grpc.health.v1.Health/Check
    alb.ingress.kubernetes.io/success-codes: '0'
  name: argocd
  namespace: argocd
spec:

@@ -11,6 +11,10 @@ The notification service is used to push events to [Alertmanager](https://github

* `basicAuth` - optional, server auth
* `bearerToken` - optional, server auth
* `timeout` - optional, the timeout in seconds used when sending alerts, default is "3 seconds"
* `maxIdleConns` - optional, maximum number of idle (keep-alive) connections across all hosts.
* `maxIdleConnsPerHost` - optional, maximum number of idle (keep-alive) connections per host.
* `maxConnsPerHost` - optional, maximum total connections per host.
* `idleConnTimeout` - optional, maximum amount of time an idle (keep-alive) connection will remain open before closing.

Either `basicAuth` or `bearerToken` can be used for authentication; choose one. If both are set at the same time, `basicAuth` takes precedence over `bearerToken`.

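A hedged sketch of a service configuration using `basicAuth`; the target host, secret variable names, and timeout value are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.alertmanager: |
    targets:
    - alertmanager.example.com:9093
    basicAuth:
      username: $alertmanager-username
      password: $alertmanager-password
    # Wait up to 10 seconds when sending alerts
    timeout: 10
```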
@@ -12,6 +12,19 @@ The Email notification service sends email notifications using SMTP protocol and

* `html` - optional bool, true or false
* `insecure_skip_verify` - optional bool, true or false

### Using Gmail

When configuring Gmail as the SMTP service:

* `username` - Must be your Gmail address.
* `password` - Use an App Password, not your regular Gmail password.

To generate an App Password, see https://myaccount.google.com/apppasswords.

!!! note
    This applies to personal Gmail accounts (non-Google Workspace). For Google Workspace users, SMTP settings and authentication methods may differ.

## Example

The following snippet contains sample Gmail service configuration:
@@ -23,11 +36,11 @@ metadata:
  name: argocd-notifications-cm
data:
  service.email.gmail: |
    username: $email-username
    password: $email-password
    username: $username
    password: $password
    host: smtp.gmail.com
    port: 465
    from: $email-username
    from: $email-address
```

Without authentication:

@@ -41,7 +54,7 @@ data:
  service.email.example: |
    host: smtp.example.com
    port: 587
    from: $email-username
    from: $email-address
```

## Template

@@ -8,6 +8,10 @@ The GitHub notification service changes commit status using [GitHub Apps](https:

- `installationID` - the app installation id
- `privateKey` - the app private key
- `enterpriseBaseURL` - optional URL, e.g. https://git.example.com/api/v3
- `maxIdleConns` - optional, maximum number of idle (keep-alive) connections across all hosts.
- `maxIdleConnsPerHost` - optional, maximum number of idle (keep-alive) connections per host.
- `maxConnsPerHost` - optional, maximum total connections per host.
- `idleConnTimeout` - optional, maximum amount of time an idle (keep-alive) connection will remain open before closing.

> ⚠️ _NOTE:_ Specifying `/api/v3` in the `enterpriseBaseURL` is required until [argoproj/notifications-engine#205](https://github.com/argoproj/notifications-engine/issues/205) is resolved.

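A sketch of the corresponding service configuration; the `appID` field, the ID values, and the secret variable name are assumptions not shown in the parameter list above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.github: |
    appID: 123456
    installationID: 7891011
    privateKey: $github-privateKey
    # Only needed for GitHub Enterprise; include /api/v3 as noted above
    # enterpriseBaseURL: https://git.example.com/api/v3
```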
@@ -9,6 +9,10 @@ Available parameters :

* `apiURL` - the server url, e.g. https://grafana.example.com
* `apiKey` - the API key for the serviceaccount
* `insecureSkipVerify` - optional bool, true or false
* `maxIdleConns` - optional, maximum number of idle (keep-alive) connections across all hosts.
* `maxIdleConnsPerHost` - optional, maximum number of idle (keep-alive) connections per host.
* `maxConnsPerHost` - optional, maximum total connections per host.
* `idleConnTimeout` - optional, maximum amount of time an idle (keep-alive) connection will remain open before closing.

1. Login to your Grafana instance as `admin`
2. On the left menu, go to Configuration / API Keys

@@ -5,6 +5,10 @@

* `apiURL` - the server url, e.g. https://mattermost.example.com
* `token` - the bot token
* `insecureSkipVerify` - optional bool, true or false
* `maxIdleConns` - optional, maximum number of idle (keep-alive) connections across all hosts.
* `maxIdleConnsPerHost` - optional, maximum number of idle (keep-alive) connections per host.
* `maxConnsPerHost` - optional, maximum total connections per host.
* `idleConnTimeout` - optional, maximum amount of time an idle (keep-alive) connection will remain open before closing, e.g. '90s'.

## Configuration

@@ -4,6 +4,10 @@

* `apiURL` - the api server url, e.g. https://api.newrelic.com
* `apiKey` - a [NewRelic ApiKey](https://docs.newrelic.com/docs/apis/rest-api-v2/get-started/introduction-new-relic-rest-api-v2/#api_key)
* `maxIdleConns` - optional, maximum number of idle (keep-alive) connections across all hosts.
* `maxIdleConnsPerHost` - optional, maximum number of idle (keep-alive) connections per host.
* `maxConnsPerHost` - optional, maximum total connections per host.
* `idleConnTimeout` - optional, maximum amount of time an idle (keep-alive) connection will remain open before closing, e.g. '90s'.

## Configuration

@@ -47,7 +47,8 @@ metadata:

* [Grafana](./grafana.md)
* [Webhook](./webhook.md)
* [Telegram](./telegram.md)
* [Teams](./teams.md)
* [Teams (Office 365 Connectors)](./teams.md) - Legacy service (deprecated, retires March 31, 2026)
* [Teams Workflows](./teams-workflows.md) - Recommended replacement for Office 365 Connectors
* [Google Chat](./googlechat.md)
* [Rocket.Chat](./rocketchat.md)
* [Pushover](./pushover.md)

@@ -62,6 +62,8 @@ The parameters for the PagerDuty configuration in the template generally match w

* `group` - Logical grouping of components of a service.
* `class` - The class/type of the event.
* `url` - The URL that should be used for the link "View in ArgoCD" in PagerDuty.
* `dedupKey` - A string used by PagerDuty to deduplicate and correlate events. Events with the same `dedupKey` will be grouped into the same incident. If omitted, PagerDuty will create a new incident for each event.

The `timestamp` and `custom_details` parameters are not currently supported.

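As a sketch of how `dedupKey` might be used in a notification template; the service key name (`pagerdutyv2`) and the other field values are assumptions, not confirmed by the excerpt above:

```yaml
template.app-sync-failed: |
  pagerdutyv2:
    summary: "Sync failed for {{.app.metadata.name}}"
    severity: "error"
    source: "{{.app.metadata.name}}"
    # Repeated failures for the same app correlate into a single incident
    dedupKey: "argocd-sync-{{.app.metadata.name}}"
```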
@@ -16,6 +16,11 @@ The Slack notification service configuration includes following settings:

| `token` | **True** | `string` | The app's OAuth access token. | `xoxb-1234567890-1234567890123-5n38u5ed63fgzqlvuyxvxcx6` |
| `username` | False | `string` | The app username. | `argocd` |
| `disableUnfurl` | False | `bool` | Disable slack unfurling links in messages | `true` |
| `maxIdleConns` | False | `int` | Maximum number of idle (keep-alive) connections across all hosts. | — |
| `maxIdleConnsPerHost` | False | `int` | Maximum number of idle (keep-alive) connections per host. | — |
| `maxConnsPerHost` | False | `int` | Maximum total connections per host. | — |
| `idleConnTimeout` | False | `string` | Maximum amount of time an idle (keep-alive) connection will remain open before closing (e.g., `90s`). | — |

## Configuration

370
docs/operator-manual/notifications/services/teams-workflows.md
Executable file
@@ -0,0 +1,370 @@
# Teams Workflows

## Overview

The Teams Workflows notification service sends message notifications using Microsoft Teams Workflows (Power Automate). This is the recommended replacement for the legacy Office 365 Connectors service, which will be retired on March 31, 2026.

## Parameters

The Teams Workflows notification service requires specifying the following settings:

* `recipientUrls` - the webhook url map, e.g. `channelName: https://api.powerautomate.com/webhook/...`

## Supported Webhook URL Formats

The service supports the following Microsoft Teams Workflows webhook URL patterns:

- `https://api.powerautomate.com/...`
- `https://api.powerplatform.com/...`
- `https://flow.microsoft.com/...`
- URLs containing `/powerautomate/` in the path

## Configuration

1. Open `Teams` and go to the channel you wish to set notifications for
2. Click on the 3 dots next to the channel name
3. Select `Workflows`
4. Click on `Manage`
5. Click `New flow`
6. Write `Send webhook alerts to a channel` in the search bar or select it from the template list
7. Choose your team and channel
8. Configure the webhook name and settings
9. Copy the webhook URL (it will be from `api.powerautomate.com`, `api.powerplatform.com`, or `flow.microsoft.com`)
10. Store it in `argocd-notifications-secret` and define it in `argocd-notifications-cm`

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.teams-workflows: |
    recipientUrls:
      channelName: $channel-workflows-url
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <secret-name>
stringData:
  channel-workflows-url: https://api.powerautomate.com/webhook/your-webhook-id
```

11. Create subscription for your Teams Workflows integration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.teams-workflows: channelName
```

## Channel Support

- ✅ Standard Teams channels
- ✅ Shared channels (as of December 2025)
- ✅ Private channels (as of December 2025)

Teams Workflows provides enhanced channel support compared to Office 365 Connectors, allowing you to post to shared and private channels in addition to standard channels.

## Adaptive Card Format

The Teams Workflows service uses **Adaptive Cards** exclusively, which is the modern, flexible card format for Microsoft Teams. All notifications are automatically converted to Adaptive Card format and wrapped in the required message envelope.

### Option 1: Using Template Fields (Recommended)

The service automatically converts template fields to Adaptive Card format. This is the simplest and most maintainable approach:

```yaml
template.app-sync-succeeded: |
  teams-workflows:
    # ThemeColor supports Adaptive Card semantic colors: "Good", "Warning", "Attention", "Accent"
    # or hex colors like "#000080"
    themeColor: "Good"
    title: Application {{.app.metadata.name}} has been successfully synced
    text: Application {{.app.metadata.name}} has been successfully synced at {{.app.status.operationState.finishedAt}}.
    summary: "{{.app.metadata.name}} sync succeeded"
    facts: |
      [{
        "name": "Sync Status",
        "value": "{{.app.status.sync.status}}"
      }, {
        "name": "Repository",
        "value": "{{.app.spec.source.repoURL}}"
      }]
    sections: |
      [{
        "facts": [
          {
            "name": "Namespace",
            "value": "{{.app.metadata.namespace}}"
          },
          {
            "name": "Cluster",
            "value": "{{.app.spec.destination.server}}"
          }
        ]
      }]
    potentialAction: |-
      [{
        "@type": "OpenUri",
        "name": "View in Argo CD",
        "targets": [{
          "os": "default",
          "uri": "{{.context.argocdUrl}}/applications/{{.app.metadata.name}}"
        }]
      }]
```

**How it works:**

- `title` → Converted to a large, bold TextBlock
- `text` → Converted to a regular TextBlock
- `facts` → Converted to a FactSet element
- `sections` → Facts within sections are extracted and converted to FactSet elements
- `potentialAction` → OpenUri actions are converted to Action.OpenUrl
- `themeColor` → Applied to the title TextBlock (supports semantic colors like "Good", "Warning", "Attention", "Accent" or hex colors)

### Option 2: Custom Adaptive Card JSON

For full control and advanced features, you can provide a complete Adaptive Card JSON template:

```yaml
template.app-sync-succeeded: |
  teams-workflows:
    adaptiveCard: |
      {
        "type": "AdaptiveCard",
        "version": "1.4",
        "body": [
          {
            "type": "TextBlock",
            "text": "Application {{.app.metadata.name}} synced successfully",
            "size": "Large",
            "weight": "Bolder",
            "color": "Good"
          },
          {
            "type": "TextBlock",
            "text": "Application {{.app.metadata.name}} has been successfully synced at {{.app.status.operationState.finishedAt}}.",
            "wrap": true
          },
          {
            "type": "FactSet",
            "facts": [
              {
                "title": "Sync Status",
                "value": "{{.app.status.sync.status}}"
              },
              {
                "title": "Repository",
                "value": "{{.app.spec.source.repoURL}}"
              }
            ]
          }
        ],
        "actions": [
          {
            "type": "Action.OpenUrl",
            "title": "View in Argo CD",
            "url": "{{.context.argocdUrl}}/applications/{{.app.metadata.name}}"
          }
        ]
      }
```

**Note:** When using `adaptiveCard`, you only need to provide the AdaptiveCard JSON structure (not the full message envelope). The service automatically wraps it in the required `message` + `attachments` format for Teams Workflows.

**Important:** If you provide `adaptiveCard`, it takes precedence over all other template fields (`title`, `text`, `facts`, etc.).

## Template Fields

The Teams Workflows service supports the following template fields, which are automatically converted to Adaptive Card format:

### Standard Fields

- `title` - Message title (converted to large, bold TextBlock)
- `text` - Message text content (converted to TextBlock)
- `summary` - Summary text (currently not used in Adaptive Cards, but preserved for compatibility)
- `themeColor` - Color for the title. Supports:
  - Semantic colors: `"Good"` (green), `"Warning"` (yellow), `"Attention"` (red), `"Accent"` (blue)
  - Hex colors: `"#000080"`, `"#FF0000"`, etc.
- `facts` - JSON array of fact key-value pairs (converted to FactSet)
  ```yaml
  facts: |
    [{
      "name": "Status",
      "value": "{{.app.status.sync.status}}"
    }]
  ```
- `sections` - JSON array of sections containing facts (facts are extracted and converted to FactSet)
  ```yaml
  sections: |
    [{
      "facts": [{
        "name": "Namespace",
        "value": "{{.app.metadata.namespace}}"
      }]
    }]
  ```
- `potentialAction` - JSON array of action buttons (OpenUri actions converted to Action.OpenUrl)
  ```yaml
  potentialAction: |-
    [{
      "@type": "OpenUri",
      "name": "View Details",
      "targets": [{
        "os": "default",
        "uri": "{{.context.argocdUrl}}/applications/{{.app.metadata.name}}"
      }]
    }]
  ```

### Advanced Fields

- `adaptiveCard` - Complete Adaptive Card JSON template (takes precedence over all other fields)
  - Only provide the AdaptiveCard structure, not the message envelope
  - Supports full Adaptive Card 1.4 specification
  - Allows access to all Adaptive Card features (containers, columns, images, etc.)

- `template` - Raw JSON template (legacy, use `adaptiveCard` instead)

### Field Conversion Details

| Template Field | Adaptive Card Element | Notes |
|---------------|----------------------|-------|
| `title` | `TextBlock` with `size: "Large"`, `weight: "Bolder"` | ThemeColor applied to this element |
| `text` | `TextBlock` with `wrap: true` | Uses `n.Message` if `text` is empty |
| `facts` | `FactSet` | Each fact becomes a `title`/`value` pair |
| `sections[].facts` | `FactSet` | Facts extracted from sections |
| `potentialAction[OpenUri]` | `Action.OpenUrl` | Only OpenUri actions are converted |
| `themeColor` | Applied to title `TextBlock.color` | Supports semantic and hex colors |

## Migration from Office 365 Connectors

If you're currently using the `teams` service with Office 365 Connectors, follow these steps to migrate:

1. **Create a new Workflows webhook** using the configuration steps above

2. **Update your service configuration:**
   - Change from `service.teams` to `service.teams-workflows`
   - Update the webhook URL to your new Workflows webhook URL

3. **Update your templates:**
   - Change `teams:` to `teams-workflows:` in your templates
   - Your existing template fields (`title`, `text`, `facts`, `sections`, `potentialAction`) will automatically be converted to Adaptive Card format
   - No changes needed to your template structure - the conversion is automatic

4. **Update your subscriptions:**
   ```yaml
   # Old
   notifications.argoproj.io/subscribe.on-sync-succeeded.teams: channelName

   # New
   notifications.argoproj.io/subscribe.on-sync-succeeded.teams-workflows: channelName
   ```

5. **Test and verify:**
   - Send a test notification to verify it works correctly
   - Once verified, you can remove the old Office 365 Connector configuration

**Note:** Your existing templates will work without modification. The service automatically converts your template fields to Adaptive Card format, so you get the benefits of modern cards without changing your templates.

## Differences from Office 365 Connectors

| Feature | Office 365 Connectors | Teams Workflows |
|---------|----------------------|-----------------|
| Service Name | `teams` | `teams-workflows` |
| Standard Channels | ✅ | ✅ |
| Shared Channels | ❌ | ✅ (Dec 2025+) |
| Private Channels | ❌ | ✅ (Dec 2025+) |
| Card Format | messageCard (legacy) | Adaptive Cards (modern) |
| Template Conversion | N/A | Automatic conversion from template fields |
| Retirement Date | March 31, 2026 | Active |

## Adaptive Card Features

The Teams Workflows service leverages Adaptive Cards, which provide:

- **Rich Content**: Support for text, images, fact sets, and more
- **Flexible Layout**: Containers, columns, and adaptive layouts
- **Interactive Elements**: Action buttons, input fields, and more
- **Semantic Colors**: Built-in color schemes (Good, Warning, Attention, Accent)
- **Cross-Platform**: Works across Teams, Outlook, and other Microsoft 365 apps

### Example: Advanced Adaptive Card Template

For complex notifications, you can use the full Adaptive Card specification:

```yaml
template.app-sync-succeeded-advanced: |
  teams-workflows:
    adaptiveCard: |
      {
        "type": "AdaptiveCard",
        "version": "1.4",
        "body": [
          {
            "type": "Container",
            "items": [
              {
                "type": "ColumnSet",
                "columns": [
                  {
                    "type": "Column",
                    "width": "auto",
                    "items": [
                      {
                        "type": "Image",
                        "url": "https://example.com/success-icon.png",
                        "size": "Small"
                      }
                    ]
                  },
                  {
                    "type": "Column",
                    "width": "stretch",
                    "items": [
                      {
                        "type": "TextBlock",
                        "text": "Application {{.app.metadata.name}}",
                        "weight": "Bolder",
                        "size": "Large"
                      },
                      {
                        "type": "TextBlock",
                        "text": "Successfully synced",
                        "spacing": "None",
                        "isSubtle": true
                      }
                    ]
                  }
                ]
              },
              {
                "type": "FactSet",
                "facts": [
                  {
                    "title": "Status",
                    "value": "{{.app.status.sync.status}}"
                  },
                  {
                    "title": "Repository",
                    "value": "{{.app.spec.source.repoURL}}"
                  }
                ]
              }
            ]
          }
        ],
        "actions": [
          {
            "type": "Action.OpenUrl",
            "title": "View in Argo CD",
            "url": "{{.context.argocdUrl}}/applications/{{.app.metadata.name}}"
          }
        ]
      }
```
# Teams (Office 365 Connectors)

## ⚠️ Deprecation Notice

**Office 365 Connectors are being retired by Microsoft.**

Microsoft is retiring the Office 365 Connectors service in Teams. The service will be fully retired by **March 31, 2026** (extended from the original timeline of December 2025).

### What this means:

- **Old Office 365 Connectors** (webhook URLs from `webhook.office.com`) will stop working after the retirement date
- **New Power Automate Workflows** (webhook URLs from `api.powerautomate.com`, `api.powerplatform.com`, or `flow.microsoft.com`) are the recommended replacement

### Migration Required:

If you are currently using Office 365 Connectors (Incoming Webhook), you should migrate to Power Automate Workflows before the retirement date. The notifications-engine automatically detects the webhook type and handles both formats, but you should plan your migration.

**Migration Resources:**

- [Microsoft Deprecation Notice](https://devblogs.microsoft.com/microsoft365dev/retirement-of-office-365-connectors-within-microsoft-teams/)
- [Create incoming webhooks with Workflows for Microsoft Teams](https://support.microsoft.com/en-us/office/create-incoming-webhooks-with-workflows-for-microsoft-teams-4b3b0b0e-0b5a-4b5a-9b5a-0b5a-4b5a-9b5a)

---

## Parameters

The Teams notification service sends message notifications using Office 365 Connectors and requires specifying the following settings:

* `recipientUrls` - the webhook url map, e.g. `channelName: https://outlook.office.com/webhook/...`

> **⚠️ Deprecation Notice:** Office 365 Connectors will be retired by Microsoft on **March 31, 2026**. We recommend migrating to the [Teams Workflows service](./teams-workflows.md) for continued support and enhanced features.

## Configuration

> **💡 For Power Automate Workflows (Recommended):** See the [Teams Workflows documentation](./teams-workflows.md) for detailed configuration instructions.

### Office 365 Connectors (Deprecated - Retiring March 31, 2026)

> **⚠️ Warning:** This method is deprecated and will stop working after March 31, 2026. Please migrate to Power Automate Workflows.

1. Open `Teams` and go to `Apps`
2. Find the `Incoming Webhook` Microsoft app and click on it
3. Press `Add to a team` -> select team and channel -> press `Set up a connector`
4. Enter a webhook name and upload an image (optional)
5. Press `Create`, then copy the webhook url (it will be from `webhook.office.com`)
6. Store it in `argocd-notifications-secret` and define it in `argocd-notifications-cm`

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <secret-name>
stringData:
  channel-teams-url: https://webhook.office.com/webhook/your-webhook-id # Office 365 Connector (deprecated)
```

> **Note:** For Power Automate Workflows webhooks, use the [Teams Workflows service](./teams-workflows.md) instead.

### Webhook Type Detection

The `teams` service supports Office 365 Connectors (deprecated):

- **Office 365 Connectors**: URLs from `webhook.office.com` (deprecated)
    - Requires the response body to be exactly `"1"` for success
    - Will stop working after March 31, 2026
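A sketch of the URL-based detection described above. The host lists come from this page; the function itself is illustrative and is not the notifications-engine implementation:

```python
# Illustrative classifier for Teams webhook URLs, based on the host lists
# mentioned in this document. Not the actual notifications-engine code.
from urllib.parse import urlparse

LEGACY_HOST = "webhook.office.com"
WORKFLOW_HOSTS = ("powerautomate.com", "powerplatform.com", "flow.microsoft.com")


def webhook_kind(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host == LEGACY_HOST or host.endswith("." + LEGACY_HOST):
        return "office365-connector"  # deprecated, retiring March 31, 2026
    if any(host == h or host.endswith("." + h) for h in WORKFLOW_HOSTS):
        return "workflows"
    return "unknown"


def legacy_send_ok(response_body: str) -> bool:
    # Office 365 Connectors signal success with a response body of exactly "1"
    return response_body == "1"
```

For example, a `webhook.office.com` URL classifies as the deprecated connector type, while an `api.powerautomate.com` URL classifies as a workflow.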
7. Create a subscription for your Teams integration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.teams: channelName
```
## Channel Support

- ✅ Standard Teams channels only

> **Note:** Office 365 Connectors only support standard Teams channels. For shared channels or private channels, use the [Teams Workflows service](./teams-workflows.md).

## Templates

![](https://user-images.githubusercontent.com/18019529/114271500-9d2b8880-9a4c-11eb-85c1-f6935f0431d5.png)

[Notification templates](../templates.md) can be customized to leverage Teams message sections, facts, themeColor, summary and potentialAction [features](https://docs.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/connectors-using).

The Teams service uses the **messageCard** format (MessageCard schema), which is compatible with Office 365 Connectors.

```yaml
template.app-sync-succeeded: |
  teams:
    summary: "Sync Succeeded"
```

## Migration to Teams Workflows

If you're currently using Office 365 Connectors, see the [Teams Workflows documentation](./teams-workflows.md) for migration instructions and enhanced features.
The Webhook notification service configuration includes the following settings:

- `retryWaitMin` - Optional, the minimum wait time between retries. Default value: 1s.
- `retryWaitMax` - Optional, the maximum wait time between retries. Default value: 5s.
- `retryMax` - Optional, the maximum number of retries. Default value: 3.
- `maxIdleConns` - optional, maximum number of idle (keep-alive) connections across all hosts.
- `maxIdleConnsPerHost` - optional, maximum number of idle (keep-alive) connections per host.
- `maxConnsPerHost` - optional, maximum total connections per host.
- `idleConnTimeout` - optional, maximum amount of time an idle (keep-alive) connection will remain open before closing, e.g. '90s'.

## Retry Behavior

```yaml
metadata:
  annotations:
    notifications.argoproj.io/subscribe.<trigger-name>.<webhook-name>: ""
```
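The retry settings above bound how long the service waits between attempts. The sketch below assumes an exponential doubling curve capped between `retryWaitMin` and `retryWaitMax`; the engine's exact backoff curve may differ:

```python
# Sketch of a bounded exponential backoff driven by the retryWaitMin,
# retryWaitMax, and retryMax settings (defaults: 1s, 5s, 3 retries).
# The doubling curve is an assumption, not the engine's documented formula.
def backoff_schedule(retry_wait_min: float = 1.0,
                     retry_wait_max: float = 5.0,
                     retry_max: int = 3) -> list[float]:
    waits = []
    for attempt in range(retry_max):
        wait = retry_wait_min * (2 ** attempt)  # 1s, 2s, 4s, ...
        waits.append(min(wait, retry_wait_max))  # never exceed retryWaitMax
    return waits
```

With the defaults this yields three waits of 1s, 2s, and 4s; raising `retryWaitMin` shifts the whole schedule up until it hits the `retryWaitMax` cap.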

4. TLS configuration (optional)

If your webhook server uses a custom TLS certificate, you can configure the notification service to trust it by adding the certificate to the `argocd-tls-certs-cm` ConfigMap as shown below:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-tls-certs-cm
data:
  <hostname>: |
    -----BEGIN CERTIFICATE-----
    <TLS DATA>
    -----END CERTIFICATE-----
```

*NOTE:*
*If the custom certificate is not trusted, you may encounter errors such as:*

```
Put \"https://...\": x509: certificate signed by unknown authority
```

*Adding the server's certificate to `argocd-tls-certs-cm` resolves this issue.*

## Examples

### Set GitHub commit status
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  trigger.sync-operation-change: |
    - when: app.status?.operationState.phase in ['Succeeded']
      send: [github-commit-status]
    - when: app.status?.operationState.phase in ['Running']
      send: [github-commit-status]
    - when: app.status?.operationState.phase in ['Error', 'Failed']
      send: [app-sync-failed, github-commit-status]
```
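The trigger above is just a mapping from the operation phase to a list of templates to send. A toy evaluation of the same `when`/`send` pairs:

```python
# Toy evaluation of the trigger.sync-operation-change ConfigMap entry above:
# each 'when' condition tests the phase, and the matching 'send' list names
# the notification templates to deliver.
def sync_operation_change(phase: str) -> list[str]:
    if phase in ['Succeeded']:
        return ['github-commit-status']
    if phase in ['Running']:
        return ['github-commit-status']
    if phase in ['Error', 'Failed']:
        return ['app-sync-failed', 'github-commit-status']
    return []  # no condition matched: nothing is sent
```

Note that a `Failed` phase fires two templates, matching the last `send` list in the ConfigMap.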
## Accessing Optional Manifest Sections and Fields

Note that in the trigger example above, the `?.` (optional chaining) operator is used to access the Application's `status.operationState` section. This section is optional; it is not present when an operation has been initiated but has not yet been started by the Application Controller.

If the `?.` operator were not used, `status.operationState` would resolve to `nil` and the evaluation of the `app.status.operationState.phase` expression would fail. The `app.status?.operationState.phase` expression is equivalent to `app.status.operationState != nil ? app.status.operationState.phase : nil`.

## Avoid Sending Same Notification Too Often

In some cases, the trigger condition might be "flapping". The example below illustrates the problem.
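The `?.` semantics can be emulated in plain Python over the Application's dict representation, which makes the short-circuit behavior concrete:

```python
# Emulates the optional-chaining semantics described above: the expression
# evaluates to None instead of raising when the optional operationState
# section is missing from app.status.
def optional_phase(app: dict):
    status = app.get("status")
    operation_state = status.get("operationState") if status is not None else None
    # mirrors: operationState != nil ? operationState.phase : nil
    return operation_state.get("phase") if operation_state is not None else None
```

When no operation has started yet, the function returns `None` rather than failing partway through the chain.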
```yaml
data:
  # Optional 'oncePer' property ensures that the notification is sent only once per specified field value
  # E.g. the following is triggered once per sync revision
  trigger.on-deployed: |
    when: app.status?.operationState.phase in ['Succeeded'] and app.status.health.status == 'Healthy'
    oncePer: app.status.sync.revision
    send: [app-sync-succeeded]
```

**Mono Repo Usage**

When one repo is used to sync multiple applications, the `oncePer: app.status.sync.revision` field will trigger a notification for each commit. For mono repos, the better approach is to use the `oncePer: app.status?.operationState.syncResult.revision` statement. This way a notification will be sent only for a particular Application's revision.
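The `oncePer` behavior boils down to deduplicating by the value of the configured expression. A minimal sketch of that idea:

```python
# Sketch of oncePer deduplication: a notification fires at most once per
# distinct value of the oncePer expression (here, the synced revision).
class OncePerNotifier:
    def __init__(self):
        self.seen: set[str] = set()

    def notify(self, revision: str) -> bool:
        """Return True if a notification should be sent for this revision."""
        if revision in self.seen:
            return False  # already notified for this revision
        self.seen.add(revision)
        return True
```

Repeated reconciliations of the same revision then produce a single notification, which is exactly what prevents the "flapping" described above.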
### oncePer

Triggers have access to the set of built-in functions.

Example:

```yaml
when: time.Now().Sub(time.Parse(app.status?.operationState.startedAt)).Minutes() >= 5
```

{!docs/operator-manual/notifications/functions.md!}
If omitted, it defaults to `'[groups]'`. The scope value can be a string, or a list of strings.

For more information on `scopes` please review the [User Management Documentation](user-management/index.md).

The following example shows targeting `email` as well as `groups` from your OIDC provider, and also demonstrates explicit role assignments and role-to-role inheritance:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.csv: |
    p, my-org:team-alpha, applications, sync, my-project/*, allow
    g, my-org:team-beta, role:admin
    g, user@example.org, role:admin
    g, admin, role:admin
    g, role:admin, role:readonly
  policy.default: role:readonly
  scopes: '[groups, email]'
```

Here:

1. `g, admin, role:admin` explicitly binds the built-in admin user to the admin role.
2. `g, role:admin, role:readonly` shows role inheritance, so anyone granted `role:admin` also automatically has all the permissions of `role:readonly`.
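The `g` rows compose transitively: a subject inherits a role, and a role can inherit another role. A toy resolver over the example's `g` rows makes the closure explicit (the resolver is illustrative, not Casbin's actual algorithm):

```python
# Toy transitive closure over RBAC 'g' rows: a subject gains a role, and a
# role can itself be granted another role. Illustrative only; Argo CD uses
# Casbin for the real evaluation.
def effective_roles(subject: str, g_rows: list[tuple[str, str]]) -> set[str]:
    roles: set[str] = set()
    frontier = {subject}
    while frontier:
        nxt = set()
        for member, role in g_rows:
            if member in frontier and role not in roles:
                roles.add(role)
                nxt.add(role)  # follow role-to-role grants transitively
        frontier = nxt
    return roles

# The g rows from the example ConfigMap above
G = [
    ("my-org:team-beta", "role:admin"),
    ("user@example.org", "role:admin"),
    ("admin", "role:admin"),
    ("role:admin", "role:readonly"),
]
```

Resolving `admin` against these rows yields both `role:admin` and, via inheritance, `role:readonly`.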
This approach can be combined with AppProjects to associate users' emails and groups directly at the project level:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
```
`argocd-applicationset-controller` flags:

```
      --kubeconfig string                       Path to a kube config. Only required if out-of-cluster
      --logformat string                        Set the logging format. One of: json|text (default "json")
      --loglevel string                         Set the logging level. One of: debug|info|warn|error (default "info")
      --max-resources-status-count int          Max number of resources stored in appset status.
      --metrics-addr string                     The address the metric endpoint binds to. (default ":8080")
      --metrics-applicationset-labels strings   List of Application labels that will be added to the argocd_applicationset_labels metric
  -n, --namespace string                        If present, the namespace scope for this CLI request
```
`argocd-repo-server` flags:

```
      --disable-helm-manifest-max-extracted-size   Disable maximum size of helm manifest archives when extracted
      --disable-oci-manifest-max-extracted-size    Disable maximum size of oci manifest archives when extracted
      --disable-tls                                Disable TLS on the gRPC endpoint
      --enable-builtin-git-config                  Enable builtin git configuration options that are required for correct argocd-repo-server operation. (default true)
      --helm-manifest-max-extracted-size string    Maximum size of helm manifest archives when extracted (default "1G")
      --helm-registry-max-index-size string        Maximum size of registry index file (default "1G")
  -h, --help                                       help for argocd-repo-server
```
This page is populated for released Argo CD versions. Use the version selector to view this table for a specific version.

| Argo CD version | Kubernetes versions |
|-----------------|---------------------|
| 3.2 | v1.34, v1.33, v1.32, v1.31 |
| 3.1 | v1.34, v1.33, v1.32, v1.31 |
| 3.0 | v1.32, v1.31, v1.30, v1.29 |
most users will want to explicitly configure the certificates for these TLS endpoints, possibly using automated means such as `cert-manager` or using their own dedicated Certificate Authority.

## TLS Configuration Quick Reference

### Certificate Configuration Overview

| Component | Secret Name | Hot Reload | Default Cert | Required SAN Entries |
|-----------|-------------|------------|--------------|----------------------|
| `argocd-server` | `argocd-server-tls` | ✅ Yes | Self-signed | External hostname (e.g., `argocd.example.com`) |
| `argocd-repo-server` | `argocd-repo-server-tls` | ❌ Restart required | Self-signed | `DNS:argocd-repo-server`, `DNS:argocd-repo-server.argocd.svc` |
| `argocd-dex-server` | `argocd-dex-server-tls` | ❌ Restart required | Self-signed | `DNS:argocd-dex-server`, `DNS:argocd-dex-server.argocd.svc` |

### Inter-Component TLS

| Connection | Strict TLS Parameter | Plain Text Parameter | Default Behavior |
|------------|----------------------|----------------------|------------------|
| `argocd-server` → `argocd-repo-server` | `--repo-server-strict-tls` | `--repo-server-plaintext` | Non-validating TLS |
| `argocd-application-controller` → `argocd-repo-server` | `--repo-server-strict-tls` | `--repo-server-plaintext` | Non-validating TLS |
| `argocd-server` → `argocd-dex-server` | `--dex-server-strict-tls` | `--dex-server-plaintext` | Non-validating TLS |

### Certificate Priority (argocd-server only)

1. `argocd-server-tls` secret (recommended)
2. `argocd-secret` secret (deprecated)
3. Auto-generated self-signed certificate
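Argo CD's components are written in Go, but the difference between the "non-validating TLS" default and strict TLS in the table above can be illustrated with Python's `ssl` module: non-validating mode still encrypts the connection, it just skips certificate-chain and hostname verification.

```python
# Illustration only (Argo CD itself is Go): contrast a strict, verifying TLS
# client context with a "non-validating" one that encrypts but does not
# verify the server certificate or hostname.
import ssl


def make_client_context(strict: bool) -> ssl.SSLContext:
    ctx = ssl.create_default_context()  # strict by default: verifies chain + hostname
    if not strict:
        # check_hostname must be disabled before relaxing verify_mode
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

This mirrors why the `--*-strict-tls` flags exist: the default inter-component behavior accepts any certificate, so strict mode must be opted into once proper certificates are in place.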
## Configuring TLS for argocd-server

### Inbound TLS options for argocd-server
For example, flag name `load_restrictor` is changed in Kustomize v4+.

The `--app-resync` flag allows controlling how frequently the Argo CD application controller resolves the target revision of each application. In order to allow caching the resolved revision per repository as opposed to per application, the `--app-resync` flag has been deprecated. Please use the `timeout.reconciliation` setting in the `argocd-cm` ConfigMap instead. The value of `timeout.reconciliation` is a [duration string](https://pkg.go.dev/time#ParseDuration), e.g. `60s`, `1m` or `1h`. See example in [argocd-cm.yaml](../argocd-cm.yaml).
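A minimal parser for the duration strings `timeout.reconciliation` accepts (a simplified subset of Go's `time.ParseDuration`; the real parser also handles fractional and compound values like `1h30m`):

```python
# Minimal parser for simple Go-style duration strings such as '60s', '1m',
# '1h'. A deliberately simplified subset of time.ParseDuration: no fractions
# or compound values.
import re

UNITS = {"s": 1, "m": 60, "h": 3600}


def parse_duration(value: str) -> int:
    """Return the duration in seconds, e.g. '60s' -> 60, '1h' -> 3600."""
    match = re.fullmatch(r"(\d+)([smh])", value)
    if not match:
        raise ValueError(f"unsupported duration: {value!r}")
    return int(match.group(1)) * UNITS[match.group(2)]
```

So `60s` and `1m` are equivalent settings, both reconciling once per minute.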
From here on you can follow the [regular upgrade process](./overview.md).
```diff
+ protocol: UDP
+ - port: 53
+ protocol: TCP
```
## Added Healthchecks

* [route53.aws.crossplane.io/ResourceRecordSet](https://github.com/argoproj/argo-cd/commit/666499f6108124ef7bfa0c6cc616770c6dc4f42c)
* [cloudfront.aws.crossplane.io/Distribution](https://github.com/argoproj/argo-cd/commit/21c384f42354ada2b94c18773104527eb27f86bc)
* [beat.k8s.elastic.co/Beat](https://github.com/argoproj/argo-cd/commit/5100726fd61617a0001a27233cfe8ac4354bdbed)
* [apps.kruise.io/AdvancedCronjob](https://github.com/argoproj/argo-cd/commit/d6da9f2a15fba708d70531c5b3f2797663fb3c08)
* [apps.kruise.io/BroadcastJob](https://github.com/argoproj/argo-cd/commit/d6da9f2a15fba708d70531c5b3f2797663fb3c08)
* [apps.kruise.io/CloneSet](https://github.com/argoproj/argo-cd/commit/d6da9f2a15fba708d70531c5b3f2797663fb3c08)
* [apps.kruise.io/DaemonSet](https://github.com/argoproj/argo-cd/commit/d6da9f2a15fba708d70531c5b3f2797663fb3c08)
* [apps.kruise.io/StatefulSet](https://github.com/argoproj/argo-cd/commit/d6da9f2a15fba708d70531c5b3f2797663fb3c08)
* [rollouts.kruise.io/Rollout](https://github.com/argoproj/argo-cd/commit/d6da9f2a15fba708d70531c5b3f2797663fb3c08)
The affected ApplicationSet fields are the following (jq selector syntax):

* `.spec.generators[].clusterDecisionResource.labelSelector`
* `.spec.generators[].matrix.generators[].selector`
* `.spec.generators[].merge.generators[].selector`

## Added Healthchecks

* [core.humio.com/HumioAction](https://github.com/argoproj/argo-cd/commit/1cd6fcac4f38edf3cd3b5409fa1b6d4aa4ad2694)
* [core.humio.com/HumioAlert](https://github.com/argoproj/argo-cd/commit/1cd6fcac4f38edf3cd3b5409fa1b6d4aa4ad2694)
* [core.humio.com/HumioCluster](https://github.com/argoproj/argo-cd/commit/1cd6fcac4f38edf3cd3b5409fa1b6d4aa4ad2694)
* [core.humio.com/HumioIngestToken](https://github.com/argoproj/argo-cd/commit/1cd6fcac4f38edf3cd3b5409fa1b6d4aa4ad2694)
* [core.humio.com/HumioParser](https://github.com/argoproj/argo-cd/commit/1cd6fcac4f38edf3cd3b5409fa1b6d4aa4ad2694)
* [core.humio.com/HumioRepository](https://github.com/argoproj/argo-cd/commit/1cd6fcac4f38edf3cd3b5409fa1b6d4aa4ad2694)
* [core.humio.com/HumioView](https://github.com/argoproj/argo-cd/commit/1cd6fcac4f38edf3cd3b5409fa1b6d4aa4ad2694)
* [k8s.mariadb.com/Backup](https://github.com/argoproj/argo-cd/commit/440fbac12b7469fd3ed4a6e1f6ace5cf7eacaf39)
* [k8s.mariadb.com/Database](https://github.com/argoproj/argo-cd/commit/440fbac12b7469fd3ed4a6e1f6ace5cf7eacaf39)
* [k8s.mariadb.com/Grant](https://github.com/argoproj/argo-cd/commit/440fbac12b7469fd3ed4a6e1f6ace5cf7eacaf39)
* [k8s.mariadb.com/MariaDB](https://github.com/argoproj/argo-cd/commit/440fbac12b7469fd3ed4a6e1f6ace5cf7eacaf39)
* [k8s.mariadb.com/SqlJob](https://github.com/argoproj/argo-cd/commit/440fbac12b7469fd3ed4a6e1f6ace5cf7eacaf39)
* [k8s.mariadb.com/User](https://github.com/argoproj/argo-cd/commit/440fbac12b7469fd3ed4a6e1f6ace5cf7eacaf39)
* [kafka.strimzi.io/KafkaBridge](https://github.com/argoproj/argo-cd/commit/f13861740c17be1ab261f986532706cdda638b24)
* [kafka.strimzi.io/KafkaConnector](https://github.com/argoproj/argo-cd/commit/f13861740c17be1ab261f986532706cdda638b24)
* [keda.sh/ScaledObject](https://github.com/argoproj/argo-cd/commit/9bc9ff9c7a3573742a767c38679cbefb4f07c1c0)
* [openfaas.com/Function](https://github.com/argoproj/argo-cd/commit/2a05ae02ab90ae06fefa97ed6b9310590d317783)
* [camel.apache.org/Integration](https://github.com/argoproj/argo-cd/commit/1e2f5987d25307581cd56b8fe9d329633e0f704f)
The default extension for log files generated by Argo CD when using the "Download Logs" feature has been changed from `.txt` to `.log` for:

- Consistency with standard log file conventions.

If you have any custom scripts or tools that depend on the `.txt` extension, please update them accordingly.

## Added proxy to kustomize

Proxy config set on repository credentials / repository templates is now passed down to the `kustomize build` command.

## Added Healthchecks

* [controlplane.cluster.x-k8s.io/AWSManagedControlPlane](https://github.com/argoproj/argo-cd/commit/f1105705126153674c79f69b5d9c9647360d16f5)
* [policy.open-cluster-management.io/CertificatePolicy](https://github.com/argoproj/argo-cd/commit/d2231577c7f667d86bd0aa9505f871ecf1fde2bb)
* [policy.open-cluster-management.io/ConfigurationPolicy](https://github.com/argoproj/argo-cd/commit/d2231577c7f667d86bd0aa9505f871ecf1fde2bb)
* [policy.open-cluster-management.io/OperatorPolicy](https://github.com/argoproj/argo-cd/commit/d2231577c7f667d86bd0aa9505f871ecf1fde2bb)
* [policy.open-cluster-management.io/Policy](https://github.com/argoproj/argo-cd/commit/d2231577c7f667d86bd0aa9505f871ecf1fde2bb)
* [PodDisruptionBudget](https://github.com/argoproj/argo-cd/commit/e86258d8a5049260b841abc0ef1fd7f7a4b7cd45)
* [cluster.x-k8s.io/MachinePool](https://github.com/argoproj/argo-cd/commit/59e00911304288b4f96889bf669b6ed2aecdf31b)
* [lifecycle.keptn.sh/KeptnWorkloadVersion](https://github.com/argoproj/argo-cd/commit/ddc0b0fd3fa7e0b53170582846b20be23c301185)
* [numaplane.numaproj.io/ISBServiceRollout](https://github.com/argoproj/argo-cd/commit/d6bc02b1956a375f853e9d5c37d97ee6963154df)
* [numaplane.numaproj.io/NumaflowControllerRollout](https://github.com/argoproj/argo-cd/commit/d6bc02b1956a375f853e9d5c37d97ee6963154df)
* [numaplane.numaproj.io/PipelineRollout](https://github.com/argoproj/argo-cd/commit/d6bc02b1956a375f853e9d5c37d97ee6963154df)
* [rds.aws.crossplane.io/DBCluster](https://github.com/argoproj/argo-cd/commit/f26b76a7aa81637474cfb7992629ea1007124606)
* [rds.aws.crossplane.io/DBInstance](https://github.com/argoproj/argo-cd/commit/f26b76a7aa81637474cfb7992629ea1007124606)
* [iam.aws.crossplane.io/Policy](https://github.com/argoproj/argo-cd/commit/7f338e910f11929d172b39f5c2b395948529f7e8)
* [iam.aws.crossplane.io/RolePolicyAttachment](https://github.com/argoproj/argo-cd/commit/7f338e910f11929d172b39f5c2b395948529f7e8)
* [iam.aws.crossplane.io/Role](https://github.com/argoproj/argo-cd/commit/7f338e910f11929d172b39f5c2b395948529f7e8)
* [s3.aws.crossplane.io/Bucket](https://github.com/argoproj/argo-cd/commit/7f338e910f11929d172b39f5c2b395948529f7e8)
* [metrics.keptn.sh/KeptnMetric](https://github.com/argoproj/argo-cd/commit/326cc4a06b2cb5ac99797d3f04c2d4c48b8692e2)
* [metrics.keptn.sh/Analysis](https://github.com/argoproj/argo-cd/commit/e26c105e527ed262cc5dc838a793841017ba316a)
* [numaplane.numaproj.io/MonoVertexRollout](https://github.com/argoproj/argo-cd/commit/32ee00f1f494f69cc84d1881dda70ce514e1f737)
* [helm.toolkit.fluxcd.io/HelmRelease](https://github.com/argoproj/argo-cd/commit/824d0dced73196bf3c148f92a164145cc115c5ea)
* [image.toolkit.fluxcd.io/ImagePolicy](https://github.com/argoproj/argo-cd/commit/824d0dced73196bf3c148f92a164145cc115c5ea)
* [image.toolkit.fluxcd.io/ImageRepository](https://github.com/argoproj/argo-cd/commit/824d0dced73196bf3c148f92a164145cc115c5ea)
* [image.toolkit.fluxcd.io/ImageUpdateAutomation](https://github.com/argoproj/argo-cd/commit/824d0dced73196bf3c148f92a164145cc115c5ea)
* [kustomize.toolkit.fluxcd.io/Kustomization](https://github.com/argoproj/argo-cd/commit/824d0dced73196bf3c148f92a164145cc115c5ea)
* [notification.toolkit.fluxcd.io/Receiver](https://github.com/argoproj/argo-cd/commit/824d0dced73196bf3c148f92a164145cc115c5ea)
* [source.toolkit.fluxcd.io/Bucket](https://github.com/argoproj/argo-cd/commit/824d0dced73196bf3c148f92a164145cc115c5ea)
* [source.toolkit.fluxcd.io/GitRepository](https://github.com/argoproj/argo-cd/commit/824d0dced73196bf3c148f92a164145cc115c5ea)
* [source.toolkit.fluxcd.io/HelmChart](https://github.com/argoproj/argo-cd/commit/824d0dced73196bf3c148f92a164145cc115c5ea)
* [source.toolkit.fluxcd.io/HelmRepository](https://github.com/argoproj/argo-cd/commit/824d0dced73196bf3c148f92a164145cc115c5ea)
* [source.toolkit.fluxcd.io/OCIRepository](https://github.com/argoproj/argo-cd/commit/824d0dced73196bf3c148f92a164145cc115c5ea)
Eg, `https://github.com/argoproj/argo-cd/manifests/ha/cluster-install?ref=v2.14.`

## Upgraded Helm Version

Helm was upgraded to 3.16.2 and the skipSchemaValidation flag was added to the [CLI and Application CR](https://argo-cd.readthedocs.io/en/latest/user-guide/helm/#helm-skip-schema-validation).

## Breaking Changes

## Sanitized project API response

Due to security reasons ([GHSA-786q-9hcg-v9ff](https://github.com/argoproj/argo-cd/security/advisories/GHSA-786q-9hcg-v9ff)), the project API response was sanitized to remove sensitive information. This includes credentials of project-scoped repositories and clusters.

## Added Healthchecks

* [platform.confluent.io/Connector](https://github.com/argoproj/argo-cd/commit/99efafb55a553a9ab962d56c20dab54ba65b7ae0)
* [addons.cluster.x-k8s.io/ClusterResourceSet](https://github.com/argoproj/argo-cd/commit/fdf539dc6a027ef975fde23bf734f880570ccdc3)
* [numaflow.numaproj.io/InterStepBufferService](https://github.com/argoproj/argo-cd/commit/82484ce758aa80334ecf66bfda28b9d5c41a8c30)
* [numaflow.numaproj.io/MonoVertex](https://github.com/argoproj/argo-cd/commit/82484ce758aa80334ecf66bfda28b9d5c41a8c30)
* [numaflow.numaproj.io/Pipeline](https://github.com/argoproj/argo-cd/commit/82484ce758aa80334ecf66bfda28b9d5c41a8c30)
* [numaflow.numaproj.io/Vertex](https://github.com/argoproj/argo-cd/commit/82484ce758aa80334ecf66bfda28b9d5c41a8c30)
* [acid.zalan.do/Postgresql](https://github.com/argoproj/argo-cd/commit/19d85aa9fbb40dca452f0a0f2f9ab462e02c851d)
* [grafana.integreatly.org/Grafana](https://github.com/argoproj/argo-cd/commit/19d85aa9fbb40dca452f0a0f2f9ab462e02c851d)
* [grafana.integreatly.org/GrafanaDatasource](https://github.com/argoproj/argo-cd/commit/19d85aa9fbb40dca452f0a0f2f9ab462e02c851d)
* [k8s.keycloak.org/Keycloak](https://github.com/argoproj/argo-cd/commit/19d85aa9fbb40dca452f0a0f2f9ab462e02c851d)
* [solr.apache.org/SolrCloud](https://github.com/argoproj/argo-cd/commit/19d85aa9fbb40dca452f0a0f2f9ab462e02c851d)
* [gateway.solo.io/Gateway](https://github.com/argoproj/argo-cd/commit/2a199bc7ae70ce8b933cda81f8558916621750d5)
* [gateway.solo.io/MatchableHttpGateway](https://github.com/argoproj/argo-cd/commit/2a199bc7ae70ce8b933cda81f8558916621750d5)
* [gateway.solo.io/RouteOption](https://github.com/argoproj/argo-cd/commit/2a199bc7ae70ce8b933cda81f8558916621750d5)
* [gateway.solo.io/RouteTable](https://github.com/argoproj/argo-cd/commit/2a199bc7ae70ce8b933cda81f8558916621750d5)
* [gateway.solo.io/VirtualHostOption](https://github.com/argoproj/argo-cd/commit/2a199bc7ae70ce8b933cda81f8558916621750d5)
* [gateway.solo.io/VirtualService](https://github.com/argoproj/argo-cd/commit/2a199bc7ae70ce8b933cda81f8558916621750d5)
* [gloo.solo.io/Proxy](https://github.com/argoproj/argo-cd/commit/2a199bc7ae70ce8b933cda81f8558916621750d5)
* [gloo.solo.io/Settings](https://github.com/argoproj/argo-cd/commit/2a199bc7ae70ce8b933cda81f8558916621750d5)
* [gloo.solo.io/Upstream](https://github.com/argoproj/argo-cd/commit/2a199bc7ae70ce8b933cda81f8558916621750d5)
* [gloo.solo.io/UpstreamGroup](https://github.com/argoproj/argo-cd/commit/2a199bc7ae70ce8b933cda81f8558916621750d5)