Compare commits


36 Commits

Author SHA1 Message Date
dependabot[bot]
24c3abd8dd chore(deps): bump library/ubuntu from 5798086 to 91832dc in /test/container (#26930)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-20 09:57:04 -04:00
Linghao Su
91d83d37c4 fix(server): fix find container logic for terminal (#26858)
Signed-off-by: linghaoSu <linghao.su@daocloud.io>
2026-03-19 23:37:39 -10:00
dependabot[bot]
aabe8524ba chore(deps): bump library/redis from a019c00 to 315270d in /test/container (#26902)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-19 12:56:40 -04:00
Papapetrou Patroklos
fe30b2c60a fix: trigger app sync on app-set spec change (#26811)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
2026-03-19 10:31:07 +00:00
dependabot[bot]
148c86ad42 chore(deps): bump actions/cache from 5.0.3 to 5.0.4 (#26901)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-03-19 07:31:08 +00:00
dependabot[bot]
30db355197 chore(deps): bump codecov/codecov-action from 5.5.2 to 5.5.3 (#26900)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-19 08:31:09 +02:00
Andy Lo-A-Foe
442aed496f fix: prevent panic on nil APIResource in permission validator (#26610)
Signed-off-by: Andy Lo-A-Foe <andy.loafoe@gmail.com>
2026-03-18 14:27:24 -04:00
Blake Pettersson
87ccebc51a chore(ci): remove cherry-pick branch if already present (#26881)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-18 14:23:20 -04:00
dependabot[bot]
20439902eb chore(deps): bump google.golang.org/grpc from 1.79.2 to 1.79.3 (#26886)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-18 17:43:25 +00:00
Michael Crenshaw
559da44135 chore(deps): bump Helm to 3.20.1 (#26896)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-03-18 13:42:58 -04:00
Blake Pettersson
a87aab146e chore(ci): attempt to make test less flaky (#26890)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-18 13:02:42 -04:00
Andrea Matera
d34e83f60c chore: add Mollie to USERS.md (#26895)
Signed-off-by: Andrea Matera <andrea.matera@mollie.com>
2026-03-18 15:13:44 +00:00
Michael Crenshaw
566c172058 feat(ui): add GitOps Promoter resource icon (#26894)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-03-18 10:52:20 -04:00
dependabot[bot]
d80a122502 chore(deps): bump library/ubuntu from fed6ddb to 5798086 in /test/container (#26887)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 14:38:29 +00:00
Ekamveer Walia
539c35b295 docs: fix incorrect wording for ApplicationSets in other namespaces (#26893)
2026-03-18 13:44:10 +00:00
Blake Pettersson
45a84dfa38 fix(ci): add .gitkeep to images dir (#26892)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-18 09:37:18 -04:00
Mangaal Meetei
d011b7b508 fix: Bitbucket webhook diffstat does not work with upper case repo slug (#26594)
Signed-off-by: Mangaal <angommeeteimangaal@gmail.com>
2026-03-18 07:50:32 -04:00
Huynh Duc Tran
f1b922765d chore: add Techcom Securities to USERS.md (#26889)
Signed-off-by: Tran Huynh Duc <duchuynhtran12a1@gmail.com>
Signed-off-by: Duck <duchuynhtran12a1@gmail.com>
2026-03-18 15:22:26 +05:30
Jaewoo Choi
4b4bbc8bb2 fix(ui): include _-prefixed dirs in embedded assets (#26589)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2026-03-17 16:55:20 -06:00
Atif Ali
c5d1c914bb fix(UI): show RollingSync step clearly when labels match no step (#26877)
Signed-off-by: Atif Ali <atali@redhat.com>
2026-03-17 23:05:29 +01:00
dependabot[bot]
59aea0476a chore(deps): bump github.com/aws/aws-sdk-go-v2/credentials from 1.19.11 to 1.19.12 (#26840)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-17 23:01:40 +01:00
Nitish Kumar
4cdc650a58 feat(helm): support wildcard glob patterns for valueFiles (#26768)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2026-03-17 21:37:43 +00:00
Blake Pettersson
2b6489828b chore: allow multiple signoff lines (#26875)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-17 21:06:28 +00:00
Alexander Matyushentsev
92c3ef2559 fix: avoid scanning symlinks in whole repo on each app manifest operation (#26718)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2026-03-17 13:40:16 -07:00
Alexandre Gaudreault
4070b6feea docs: add warning in orphan resource doc (#26874)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-03-17 12:54:01 -04:00
Jonathan Ogilvie
67db597810 fix: stack overflow when processing circular ownerrefs in resource graph (#26783) (#26790)
Signed-off-by: Jonathan Ogilvie <jonathan.ogilvie@sumologic.com>
Signed-off-by: Jonathan Ogilvie <679297+jcogilvie@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-17 12:03:23 -04:00
rumstead
5b3073986f feat(appset): add concurrency when managing applications (#26642)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-03-17 15:04:11 +00:00
Kit Dallege
5ceb8354e6 docs: add orphaned resources FAQ entry (#26833)
Signed-off-by: kovan <xaum.io@gmail.com>
2026-03-17 10:53:24 -04:00
S Kevin Joe Harris
79922c06d6 ci: Improve Go build timing with effective caching (#26628)
Signed-off-by: Kevin Joe Harris <kevinjoeharris1@gmail.com>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2026-03-17 20:12:09 +05:30
Sinhyeok Seo
382c507beb fix(server): Cache glob patterns to improve RBAC evaluation performance (#25759)
Signed-off-by: Sinhyeok Seo <sinhyeok@gmail.com>
Signed-off-by: Sinhyeok Seo <44961659+Sinhyeok@users.noreply.github.com>
2026-03-17 10:22:23 -04:00
dependabot[bot]
8142920ab8 chore(deps): bump library/redis from 1c054d5 to a019c00 in /test/container (#26865)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-17 07:08:27 -04:00
Blake Pettersson
47a0746851 chore(renovate): group aws-sdk-v2-updates (#26848)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-17 11:39:00 +02:00
Regina Voloshin
13cd517470 docs: move releases to Tuesdays (#26859)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
2026-03-16 18:27:47 +02:00
Christopher Coco
63a009effa fix(test): make fail message better for TestAuthReconcileWithMissingNamespace (#26856)
Signed-off-by: Christopher Coco <ccoco@redhat.com>
2026-03-16 03:13:40 -10:00
github-actions[bot]
5a6c83229b chore: Bump version in master (#26855)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-03-16 14:44:45 +02:00
dependabot[bot]
f409135f17 chore(deps): bump softprops/action-gh-release from 2.5.0 to 2.6.1 (#26838)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 13:00:58 +02:00
66 changed files with 1926 additions and 388 deletions


@@ -11,6 +11,7 @@ module.exports = {
"github>argoproj/argo-cd//renovate-presets/custom-managers/yaml.json5",
"github>argoproj/argo-cd//renovate-presets/fix/disable-all-updates.json5",
"github>argoproj/argo-cd//renovate-presets/devtool.json5",
"github>argoproj/argo-cd//renovate-presets/docs.json5"
"github>argoproj/argo-cd//renovate-presets/docs.json5",
"group:aws-sdk-go-v2Monorepo"
]
}


@@ -66,6 +66,7 @@ jobs:
# Create new branch for cherry-pick
CHERRY_PICK_BRANCH="cherry-pick-${{ inputs.pr_number }}-to-${TARGET_BRANCH}"
git checkout -b "$CHERRY_PICK_BRANCH" "origin/$TARGET_BRANCH"
# Perform cherry-pick
@@ -75,12 +76,17 @@ jobs:
# Extract Signed-off-by from the cherry-pick commit
SIGNOFF=$(git log -1 --pretty=format:"%B" | grep -E '^Signed-off-by:' || echo "")
# Push the new branch
git push origin "$CHERRY_PICK_BRANCH"
# Push the new branch. Force push so that, if the original cherry-pick branch is stale,
# $CHERRY_PICK_BRANCH still reflects the current state of $TARGET_BRANCH plus the cherry-pick.
git push origin -f "$CHERRY_PICK_BRANCH"
# Save data for PR creation
echo "branch_name=$CHERRY_PICK_BRANCH" >> "$GITHUB_OUTPUT"
echo "signoff=$SIGNOFF" >> "$GITHUB_OUTPUT"
{
echo "signoff<<EOF"
echo "$SIGNOFF"
echo "EOF"
} >> "$GITHUB_OUTPUT"
echo "target_branch=$TARGET_BRANCH" >> "$GITHUB_OUTPUT"
else
echo "❌ Cherry-pick failed due to conflicts"


@@ -80,12 +80,16 @@ jobs:
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Restore go build cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
- name: Restore go build and module cache
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Download all Go modules
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-build-v1-
- name: Download Go modules
run: |
go mod download
- name: Compile all packages
@@ -151,11 +155,15 @@ jobs:
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
- name: Restore go build and module cache
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-build-v1-
- name: Install all tools required for building & testing
run: |
make install-test-tools-local
@@ -167,7 +175,7 @@ jobs:
run: |
git config --global user.name "John Doe"
git config --global user.email "john.doe@example.com"
- name: Download and vendor all required packages
- name: Download Go modules
run: |
go mod download
- name: Run all unit tests
@@ -215,11 +223,15 @@ jobs:
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
- name: Restore go build and module cache
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-build-v1-
- name: Install all tools required for building & testing
run: |
make install-test-tools-local
@@ -231,7 +243,7 @@ jobs:
run: |
git config --global user.name "John Doe"
git config --global user.email "john.doe@example.com"
- name: Download and vendor all required packages
- name: Download Go modules
run: |
go mod download
- name: Run all unit tests
@@ -315,7 +327,7 @@ jobs:
node-version: '22.9.0'
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -365,7 +377,7 @@ jobs:
fetch-depth: 0
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -392,7 +404,7 @@ jobs:
- name: Upload code coverage information to codecov.io
# Only run when the workflow is for upstream (PR target or push is in argoproj/argo-cd).
if: github.repository == 'argoproj/argo-cd'
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
uses: codecov/codecov-action@1af58845a975a7985b0beb0cbe6fbbb71a41dbad # v5.5.3
with:
files: test-results/full-coverage.out
fail_ci_if_error: true
@@ -401,7 +413,7 @@ jobs:
- name: Upload test results to Codecov
# Codecov uploads test results to Codecov.io on upstream master branch.
if: github.repository == 'argoproj/argo-cd' && github.ref == 'refs/heads/master' && github.event_name == 'push'
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
uses: codecov/codecov-action@1af58845a975a7985b0beb0cbe6fbbb71a41dbad # v5.5.3
with:
files: test-results/junit.xml
fail_ci_if_error: true
@@ -475,11 +487,15 @@ jobs:
sudo chown $(whoami) $HOME/.kube/config
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
- name: Restore go build and module cache
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-build-v1-
- name: Add ~/go/bin to PATH
run: |
echo "$HOME/go/bin" >> $GITHUB_PATH
@@ -489,9 +505,11 @@ jobs:
- name: Add ./dist to PATH
run: |
echo "$(pwd)/dist" >> $GITHUB_PATH
- name: Download Go dependencies
- name: Download Go modules
run: |
go mod download
- name: Install goreman
run: |
go install github.com/mattn/goreman@latest
- name: Install all tools required for building & testing
run: |


@@ -264,7 +264,7 @@ jobs:
echo "hashes=$(sha256sum /tmp/sbom.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"
- name: Upload SBOM
uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b # v2.5.0
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2.6.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:


@@ -2,7 +2,7 @@ controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run
api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/api-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --hydrator-enabled=${ARGOCD_HYDRATOR_ENABLED:='false'}"
dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v3/cmd gendexcfg -o `pwd`/dist/dex.yaml && (test -f dist/dex.yaml || { echo 'Failed to generate dex configuration'; exit 1; }) && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:$(grep "image: ghcr.io/dexidp/dex:v2.45.0" manifests/base/dex/argocd-dex-server-deployment.yaml | cut -d':' -f3) dex serve /dex.yaml"
redis: hack/start-redis-with-password.sh
repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "export PATH=./dist:\$PATH && [ -n \"\$ARGOCD_GIT_CONFIG\" ] && export GIT_CONFIG_GLOBAL=\$ARGOCD_GIT_CONFIG && export GIT_CONFIG_NOSYSTEM=1; GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "export PATH=\$(pwd)/dist:\$PATH && [ -n \"\$ARGOCD_GIT_CONFIG\" ] && export GIT_CONFIG_GLOBAL=\$ARGOCD_GIT_CONFIG && export GIT_CONFIG_NOSYSTEM=1; GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
cmp-server: [ "$ARGOCD_E2E_TEST" = 'true' ] && exit 0 || [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_BINARY_NAME=argocd-cmp-server ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} $COMMAND --config-dir-path ./test/cmp --loglevel debug --otlp-address=${ARGOCD_OTLP_ADDRESS}"
commit-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/commit-server} FORCE_LOG_COLORS=1 ARGOCD_BINARY_NAME=argocd-commit-server $COMMAND --loglevel debug --port ${ARGOCD_E2E_COMMITSERVER_PORT:-8086}"
ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'


@@ -3,9 +3,9 @@ header:
expiration-date: '2024-10-31T00:00:00.000Z' # One year from initial release.
last-updated: '2023-10-27'
last-reviewed: '2023-10-27'
commit-hash: 814db444c36503851dc3d45cf9c44394821ca1a4
commit-hash: d91a2ab3bf1b1143fb273fa06f54073fc78f41f1
project-url: https://github.com/argoproj/argo-cd
project-release: v3.4.0
project-release: v3.5.0
changelog: https://github.com/argoproj/argo-cd/releases
license: https://github.com/argoproj/argo-cd/blob/master/LICENSE
project-lifecycle:


@@ -240,6 +240,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Mission Lane](https://missionlane.com)
1. [mixi Group](https://mixi.co.jp/)
1. [Moengage](https://www.moengage.com/)
1. [Mollie](https://www.mollie.com/)
1. [Money Forward](https://corp.moneyforward.com/en/)
1. [MongoDB](https://www.mongodb.com/)
1. [MOO Print](https://www.moo.com/)
@@ -380,6 +381,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Tailor Brands](https://www.tailorbrands.com)
1. [Tamkeen Technologies](https://tamkeentech.sa/)
1. [TBC Bank](https://tbcbank.ge/)
1. [Techcom Securities](https://www.tcbs.com.vn/)
1. [Techcombank](https://www.techcombank.com.vn/trang-chu)
1. [Technacy](https://www.technacy.it/)
1. [Telavita](https://www.telavita.com.br/)


@@ -1 +1 @@
3.4.0-rc2
3.5.0


@@ -24,11 +24,13 @@ import (
"sort"
"strconv"
"strings"
"sync"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
log "github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -74,6 +76,9 @@ const (
ReconcileRequeueOnValidationError = time.Minute * 3
ReverseDeletionOrder = "Reverse"
AllAtOnceDeletionOrder = "AllAtOnce"
revisionAndSpecChangedMsg = "Application has pending changes (revision and spec differ), setting status to Waiting"
revisionChangedMsg = "Application has pending changes, setting status to Waiting"
specChangedMsg = "Application has pending changes (spec differs), setting status to Waiting"
)
var defaultPreservedFinalizers = []string{
@@ -103,15 +108,16 @@ type ApplicationSetReconciler struct {
Policy argov1alpha1.ApplicationsSyncPolicy
EnablePolicyOverride bool
utils.Renderer
ArgoCDNamespace string
ApplicationSetNamespaces []string
EnableProgressiveSyncs bool
SCMRootCAPath string
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
ClusterInformer *settings.ClusterInformer
ArgoCDNamespace string
ApplicationSetNamespaces []string
EnableProgressiveSyncs bool
SCMRootCAPath string
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
ClusterInformer *settings.ClusterInformer
ConcurrentApplicationUpdates int
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -688,108 +694,133 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
// - For existing application, it will call update
// The function also adds owner reference to all applications, and uses it to delete them.
func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context, logCtx *log.Entry, applicationSet argov1alpha1.ApplicationSet, desiredApplications []argov1alpha1.Application) error {
var firstError error
// Creates or updates the application in appList
for _, generatedApp := range desiredApplications {
appLog := logCtx.WithFields(applog.GetAppLogFields(&generatedApp))
// Build the diff config once per reconcile.
// Diff config is per applicationset, so generate it once for all applications
diffConfig, err := utils.BuildIgnoreDiffConfig(applicationSet.Spec.IgnoreApplicationDifferences, normalizers.IgnoreNormalizerOpts{})
if err != nil {
return fmt.Errorf("failed to build ignore diff config: %w", err)
}
g, ctx := errgroup.WithContext(ctx)
concurrency := r.concurrency()
g.SetLimit(concurrency)
var appErrorsMu sync.Mutex
appErrors := map[string]error{}
for _, generatedApp := range desiredApplications {
// Normalize to avoid fighting with the application controller.
generatedApp.Spec = *argoutil.NormalizeApplicationSpec(&generatedApp.Spec)
g.Go(func() error {
appLog := logCtx.WithFields(applog.GetAppLogFields(&generatedApp))
found := &argov1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: generatedApp.Name,
Namespace: generatedApp.Namespace,
},
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
}
action, err := utils.CreateOrUpdate(ctx, appLog, r.Client, applicationSet.Spec.IgnoreApplicationDifferences, normalizers.IgnoreNormalizerOpts{}, found, func() error {
// Copy only the Application/ObjectMeta fields that are significant, from the generatedApp
found.Spec = generatedApp.Spec
// allow setting the Operation field to trigger a sync operation on an Application
if generatedApp.Operation != nil {
found.Operation = generatedApp.Operation
found := &argov1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: generatedApp.Name,
Namespace: generatedApp.Namespace,
},
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
}
preservedAnnotations := make([]string, 0)
preservedLabels := make([]string, 0)
action, err := utils.CreateOrUpdate(ctx, appLog, r.Client, diffConfig, found, func() error {
// Copy only the Application/ObjectMeta fields that are significant, from the generatedApp
found.Spec = generatedApp.Spec
if applicationSet.Spec.PreservedFields != nil {
preservedAnnotations = append(preservedAnnotations, applicationSet.Spec.PreservedFields.Annotations...)
preservedLabels = append(preservedLabels, applicationSet.Spec.PreservedFields.Labels...)
}
if len(r.GlobalPreservedAnnotations) > 0 {
preservedAnnotations = append(preservedAnnotations, r.GlobalPreservedAnnotations...)
}
if len(r.GlobalPreservedLabels) > 0 {
preservedLabels = append(preservedLabels, r.GlobalPreservedLabels...)
}
// Preserve specially treated argo cd annotations:
// * https://github.com/argoproj/applicationset/issues/180
// * https://github.com/argoproj/argo-cd/issues/10500
preservedAnnotations = append(preservedAnnotations, defaultPreservedAnnotations...)
for _, key := range preservedAnnotations {
if state, exists := found.Annotations[key]; exists {
if generatedApp.Annotations == nil {
generatedApp.Annotations = map[string]string{}
}
generatedApp.Annotations[key] = state
// allow setting the Operation field to trigger a sync operation on an Application
if generatedApp.Operation != nil {
found.Operation = generatedApp.Operation
}
}
for _, key := range preservedLabels {
if state, exists := found.Labels[key]; exists {
if generatedApp.Labels == nil {
generatedApp.Labels = map[string]string{}
}
generatedApp.Labels[key] = state
preservedAnnotations := make([]string, 0)
preservedLabels := make([]string, 0)
if applicationSet.Spec.PreservedFields != nil {
preservedAnnotations = append(preservedAnnotations, applicationSet.Spec.PreservedFields.Annotations...)
preservedLabels = append(preservedLabels, applicationSet.Spec.PreservedFields.Labels...)
}
}
// Preserve deleting finalizers and avoid diff conflicts
for _, finalizer := range defaultPreservedFinalizers {
for _, f := range found.Finalizers {
// For finalizers, use prefix matching in case it contains "/" stages
if strings.HasPrefix(f, finalizer) {
generatedApp.Finalizers = append(generatedApp.Finalizers, f)
if len(r.GlobalPreservedAnnotations) > 0 {
preservedAnnotations = append(preservedAnnotations, r.GlobalPreservedAnnotations...)
}
if len(r.GlobalPreservedLabels) > 0 {
preservedLabels = append(preservedLabels, r.GlobalPreservedLabels...)
}
// Preserve specially treated argo cd annotations:
// * https://github.com/argoproj/applicationset/issues/180
// * https://github.com/argoproj/argo-cd/issues/10500
preservedAnnotations = append(preservedAnnotations, defaultPreservedAnnotations...)
for _, key := range preservedAnnotations {
if state, exists := found.Annotations[key]; exists {
if generatedApp.Annotations == nil {
generatedApp.Annotations = map[string]string{}
}
generatedApp.Annotations[key] = state
}
}
for _, key := range preservedLabels {
if state, exists := found.Labels[key]; exists {
if generatedApp.Labels == nil {
generatedApp.Labels = map[string]string{}
}
generatedApp.Labels[key] = state
}
}
// Preserve deleting finalizers and avoid diff conflicts
for _, finalizer := range defaultPreservedFinalizers {
for _, f := range found.Finalizers {
// For finalizers, use prefix matching in case it contains "/" stages
if strings.HasPrefix(f, finalizer) {
generatedApp.Finalizers = append(generatedApp.Finalizers, f)
}
}
}
found.Annotations = generatedApp.Annotations
found.Labels = generatedApp.Labels
found.Finalizers = generatedApp.Finalizers
return controllerutil.SetControllerReference(&applicationSet, found, r.Scheme)
})
if err != nil {
appLog.WithError(err).WithField("action", action).Errorf("failed to %s Application", action)
// If the context was canceled or its deadline exceeded, return the error so it propagates through g.Wait().
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
// For backwards compatibility with sequential behavior: continue processing other applications
// but record the error keyed by app name so we can deterministically return the error from
// the lexicographically first failing app, regardless of goroutine scheduling order.
appErrorsMu.Lock()
appErrors[generatedApp.Name] = err
appErrorsMu.Unlock()
return nil
}
found.Annotations = generatedApp.Annotations
found.Labels = generatedApp.Labels
found.Finalizers = generatedApp.Finalizers
return controllerutil.SetControllerReference(&applicationSet, found, r.Scheme)
if action != controllerutil.OperationResultNone {
// Don't pollute etcd with "unchanged Application" events
r.Recorder.Eventf(&applicationSet, corev1.EventTypeNormal, fmt.Sprint(action), "%s Application %q", action, generatedApp.Name)
appLog.Logf(log.InfoLevel, "%s Application", action)
} else {
// "unchanged Application" can be inferred by Reconcile Complete with no action being listed
// Or enable debug logging
appLog.Logf(log.DebugLevel, "%s Application", action)
}
return nil
})
if err != nil {
appLog.WithError(err).WithField("action", action).Errorf("failed to %s Application", action)
if firstError == nil {
firstError = err
}
continue
}
if action != controllerutil.OperationResultNone {
// Don't pollute etcd with "unchanged Application" events
r.Recorder.Eventf(&applicationSet, corev1.EventTypeNormal, fmt.Sprint(action), "%s Application %q", action, generatedApp.Name)
appLog.Logf(log.InfoLevel, "%s Application", action)
} else {
// "unchanged Application" can be inferred by Reconcile Complete with no action being listed
// Or enable debug logging
appLog.Logf(log.DebugLevel, "%s Application", action)
}
}
return firstError
if err := g.Wait(); errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
return firstAppError(appErrors)
}
// createInCluster will filter from the desiredApplications only the application that needs to be created
@@ -849,36 +880,84 @@ func (r *ApplicationSetReconciler) deleteInCluster(ctx context.Context, logCtx *
m[app.Name] = true
}
// Delete apps that are not in m[string]bool
var firstError error
for _, app := range current {
logCtx = logCtx.WithFields(applog.GetAppLogFields(&app))
_, exists := m[app.Name]
g, ctx := errgroup.WithContext(ctx)
concurrency := r.concurrency()
g.SetLimit(concurrency)
if !exists {
var appErrorsMu sync.Mutex
appErrors := map[string]error{}
// Delete apps that are not in m[string]bool
for _, app := range current {
_, exists := m[app.Name]
if exists {
continue
}
appLogCtx := logCtx.WithFields(applog.GetAppLogFields(&app))
g.Go(func() error {
// Removes the Argo CD resources finalizer if the application contains an invalid target (eg missing cluster)
err := r.removeFinalizerOnInvalidDestination(ctx, applicationSet, &app, clusterList, logCtx)
err := r.removeFinalizerOnInvalidDestination(ctx, applicationSet, &app, clusterList, appLogCtx)
if err != nil {
logCtx.WithError(err).Error("failed to update Application")
if firstError != nil {
firstError = err
appLogCtx.WithError(err).Error("failed to update Application")
// If the context was canceled or its deadline exceeded, return the error so it propagates through g.Wait().
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
continue
// For backwards compatibility with sequential behavior: continue processing other applications
// but record the error keyed by app name so we can deterministically return the error from
// the lexicographically first failing app, regardless of goroutine scheduling order.
appErrorsMu.Lock()
appErrors[app.Name] = err
appErrorsMu.Unlock()
return nil
}
err = r.Delete(ctx, &app)
if err != nil {
logCtx.WithError(err).Error("failed to delete Application")
if firstError != nil {
firstError = err
appLogCtx.WithError(err).Error("failed to delete Application")
// If the context was canceled or its deadline exceeded, return the error so it propagates through g.Wait().
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
continue
appErrorsMu.Lock()
appErrors[app.Name] = err
appErrorsMu.Unlock()
return nil
}
r.Recorder.Eventf(&applicationSet, corev1.EventTypeNormal, "Deleted", "Deleted Application %q", app.Name)
logCtx.Log(log.InfoLevel, "Deleted application")
}
appLogCtx.Log(log.InfoLevel, "Deleted application")
return nil
})
}
return firstError
if err := g.Wait(); errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
return firstAppError(appErrors)
}
// concurrency returns the configured number of concurrent application updates, defaulting to 1.
func (r *ApplicationSetReconciler) concurrency() int {
if r.ConcurrentApplicationUpdates <= 0 {
return 1
}
return r.ConcurrentApplicationUpdates
}
// firstAppError returns the error associated with the lexicographically smallest application name
// in the provided map. This gives a deterministic result when multiple goroutines may have
// recorded errors concurrently, matching the behavior of the original sequential loop where the
// first application in iteration order would determine the returned error.
func firstAppError(appErrors map[string]error) error {
if len(appErrors) == 0 {
return nil
}
names := make([]string, 0, len(appErrors))
for name := range appErrors {
names = append(names, name)
}
sort.Strings(names)
return appErrors[names[0]]
}
// removeFinalizerOnInvalidDestination removes the Argo CD resources finalizer if the application contains an invalid target (eg missing cluster)
@@ -967,7 +1046,7 @@ func (r *ApplicationSetReconciler) removeOwnerReferencesOnDeleteAppSet(ctx conte
func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context, logCtx *log.Entry, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application) (map[string]bool, error) {
appDependencyList, appStepMap := r.buildAppDependencyList(logCtx, appset, desiredApplications)
_, err := r.updateApplicationSetApplicationStatus(ctx, logCtx, &appset, applications, appStepMap)
_, err := r.updateApplicationSetApplicationStatus(ctx, logCtx, &appset, applications, desiredApplications, appStepMap)
if err != nil {
return nil, fmt.Errorf("failed to update applicationset app status: %w", err)
}
@@ -1144,10 +1223,16 @@ func getAppStep(appName string, appStepMap map[string]int) int {
}
// check the status of each Application's status and promote Applications to the next status if needed
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
now := metav1.Now()
appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applications))
// Build a map of desired applications for quick lookup
desiredAppsMap := make(map[string]*argov1alpha1.Application)
for i := range desiredApplications {
desiredAppsMap[desiredApplications[i].Name] = &desiredApplications[i]
}
for _, app := range applications {
appHealthStatus := app.Status.Health.Status
appSyncStatus := app.Status.Sync.Status
@@ -1182,10 +1267,27 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
newAppStatus := currentAppStatus.DeepCopy()
newAppStatus.Step = strconv.Itoa(getAppStep(newAppStatus.Application, appStepMap))
if !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions()) {
// A new version is available in the application and we need to re-sync the application
revisionsChanged := !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions())
// Check if the desired Application spec differs from the current Application spec
specChanged := false
if desiredApp, ok := desiredAppsMap[app.Name]; ok {
// Compare the desired spec with the current spec to detect non-Git changes
// This will catch changes to generator parameters like image tags, helm values, etc.
specChanged = !cmp.Equal(desiredApp.Spec, app.Spec, cmpopts.EquateEmpty(), cmpopts.EquateComparable(argov1alpha1.ApplicationDestination{}))
}
if revisionsChanged || specChanged {
newAppStatus.TargetRevisions = app.Status.GetRevisions()
newAppStatus.Message = "Application has pending changes, setting status to Waiting"
switch {
case revisionsChanged && specChanged:
newAppStatus.Message = revisionAndSpecChangedMsg
case revisionsChanged:
newAppStatus.Message = revisionChangedMsg
default:
newAppStatus.Message = specChangedMsg
}
newAppStatus.Status = argov1alpha1.ProgressiveSyncWaiting
newAppStatus.LastTransitionTime = &now
}
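The `cmpopts.EquateEmpty()` option in the spec comparison above matters because plain deep equality distinguishes nil from empty collections, which would flag a spec change when a generator renders `[]` and the cluster decodes the same field as nil. A stdlib-only illustration of that pitfall and the normalization that suppresses it (function names here are hypothetical):

```go
package main

import (
	"fmt"
	"reflect"
)

// specChanged uses plain DeepEqual, which reports a spurious diff
// between a nil slice and an empty one — exactly the class of
// difference cmpopts.EquateEmpty suppresses.
func specChanged(current, desired []string) bool {
	return !reflect.DeepEqual(current, desired)
}

// specChangedNormalized canonicalizes both sides to non-nil before
// comparing, so nil vs empty is no longer a "change".
func specChangedNormalized(current, desired []string) bool {
	if current == nil {
		current = []string{}
	}
	if desired == nil {
		desired = []string{}
	}
	return !reflect.DeepEqual(current, desired)
}

func main() {
	var live []string     // decoded from the cluster as nil
	desired := []string{} // rendered by the generator as empty
	fmt.Println(specChanged(live, desired))           // true: spurious change
	fmt.Println(specChangedNormalized(live, desired)) // false: no real change
}
```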


@@ -25,6 +25,7 @@ import (
ctrl "sigs.k8s.io/controller-runtime"
crtclient "sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/controller-runtime/pkg/client/interceptor"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
@@ -1077,6 +1078,70 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
},
},
},
{
name: "Ensure that unnormalized live spec does not cause a spurious patch",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSetSpec{
Template: v1alpha1.ApplicationSetTemplate{
Spec: v1alpha1.ApplicationSpec{
Project: "project",
},
},
},
},
existingApps: []v1alpha1.Application{
{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "2",
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
// Without normalizing the live object, the equality check
// sees &SyncPolicy{} vs nil and issues an unnecessary patch.
SyncPolicy: &v1alpha1.SyncPolicy{},
},
},
},
desiredApps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
SyncPolicy: nil,
},
},
},
expected: []v1alpha1.Application{
{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "2",
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
SyncPolicy: &v1alpha1.SyncPolicy{},
},
},
},
},
{
name: "Ensure that argocd pre-delete and post-delete finalizers are preserved from an existing app",
appSet: v1alpha1.ApplicationSet{
@@ -1186,6 +1251,374 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
}
}
func TestCreateOrUpdateInCluster_Concurrent(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
appSet := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
}
t.Run("all apps are created correctly with concurrency > 1", func(t *testing.T) {
desiredApps := make([]v1alpha1.Application, 5)
for i := range desiredApps {
desiredApps[i] = v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("app%d", i),
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{Project: "project"},
}
}
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(&appSet).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
ConcurrentApplicationUpdates: 5,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, desiredApps)
require.NoError(t, err)
for _, desired := range desiredApps {
got := &v1alpha1.Application{}
require.NoError(t, fakeClient.Get(t.Context(), crtclient.ObjectKey{Namespace: desired.Namespace, Name: desired.Name}, got))
assert.Equal(t, desired.Spec.Project, got.Spec.Project)
}
})
t.Run("non-context errors from concurrent goroutines are collected and one is returned", func(t *testing.T) {
existingApps := make([]v1alpha1.Application, 5)
initObjs := []crtclient.Object{&appSet}
for i := range existingApps {
existingApps[i] = v1alpha1.Application{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("app%d", i),
Namespace: "namespace",
ResourceVersion: "1",
},
Spec: v1alpha1.ApplicationSpec{Project: "old"},
}
app := existingApps[i].DeepCopy()
require.NoError(t, controllerutil.SetControllerReference(&appSet, app, scheme))
initObjs = append(initObjs, app)
}
desiredApps := make([]v1alpha1.Application, 5)
for i := range desiredApps {
desiredApps[i] = v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("app%d", i),
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{Project: "new"},
}
}
patchErr := errors.New("some patch error")
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
return patchErr
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
ConcurrentApplicationUpdates: 5,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, desiredApps)
require.ErrorIs(t, err, patchErr)
})
}
func TestCreateOrUpdateInCluster_ContextCancellation(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
appSet := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
}
existingApp := v1alpha1.Application{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "1",
},
Spec: v1alpha1.ApplicationSpec{Project: "old"},
}
desiredApp := v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{Project: "new"},
}
t.Run("context canceled on patch is returned directly", func(t *testing.T) {
initObjs := []crtclient.Object{&appSet}
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
initObjs = append(initObjs, app)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
return context.Canceled
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{desiredApp})
require.ErrorIs(t, err, context.Canceled)
})
t.Run("context deadline exceeded on patch is returned directly", func(t *testing.T) {
initObjs := []crtclient.Object{&appSet}
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
initObjs = append(initObjs, app)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
return context.DeadlineExceeded
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{desiredApp})
require.ErrorIs(t, err, context.DeadlineExceeded)
})
t.Run("non-context error is collected and returned after all goroutines finish", func(t *testing.T) {
initObjs := []crtclient.Object{&appSet}
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
initObjs = append(initObjs, app)
patchErr := errors.New("some patch error")
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
return patchErr
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{desiredApp})
require.ErrorIs(t, err, patchErr)
})
t.Run("context canceled on create is returned directly", func(t *testing.T) {
initObjs := []crtclient.Object{&appSet}
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Create: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.CreateOption) error {
return context.Canceled
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
}
newApp := v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{Name: "newapp", Namespace: "namespace"},
Spec: v1alpha1.ApplicationSpec{Project: "default"},
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{newApp})
require.ErrorIs(t, err, context.Canceled)
})
}
func TestDeleteInCluster_ContextCancellation(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
err = corev1.AddToScheme(scheme)
require.NoError(t, err)
appSet := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
}
existingApp := v1alpha1.Application{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "delete-me",
Namespace: "namespace",
ResourceVersion: "1",
},
Spec: v1alpha1.ApplicationSpec{Project: "project"},
}
makeReconciler := func(t *testing.T, fakeClient crtclient.Client) ApplicationSetReconciler {
t.Helper()
kubeclientset := kubefake.NewClientset()
clusterInformer, err := settings.NewClusterInformer(kubeclientset, "namespace")
require.NoError(t, err)
cancel := startAndSyncInformer(t, clusterInformer)
t.Cleanup(cancel)
return ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
KubeClientset: kubeclientset,
Metrics: appsetmetrics.NewFakeAppsetMetrics(),
ClusterInformer: clusterInformer,
}
}
t.Run("context canceled on delete is returned directly", func(t *testing.T) {
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(&appSet, app).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Delete: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.DeleteOption) error {
return context.Canceled
},
}).
Build()
r := makeReconciler(t, fakeClient)
err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{})
require.ErrorIs(t, err, context.Canceled)
})
t.Run("context deadline exceeded on delete is returned directly", func(t *testing.T) {
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(&appSet, app).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Delete: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.DeleteOption) error {
return context.DeadlineExceeded
},
}).
Build()
r := makeReconciler(t, fakeClient)
err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{})
require.ErrorIs(t, err, context.DeadlineExceeded)
})
t.Run("non-context delete error is collected and returned", func(t *testing.T) {
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
deleteErr := errors.New("delete failed")
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(&appSet, app).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Delete: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.DeleteOption) error {
return deleteErr
},
}).
Build()
r := makeReconciler(t, fakeClient)
err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{})
require.ErrorIs(t, err, deleteErr)
})
}
func TestRemoveFinalizerOnInvalidDestination_FinalizerTypes(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
@@ -4799,6 +5232,12 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
}
}
newAppWithSpec := func(name string, health health.HealthStatusCode, sync v1alpha1.SyncStatusCode, revision string, opState *v1alpha1.OperationState, spec v1alpha1.ApplicationSpec) v1alpha1.Application {
app := newApp(name, health, sync, revision, opState)
app.Spec = spec
return app
}
newOperationState := func(phase common.OperationPhase) *v1alpha1.OperationState {
finishedAt := &metav1.Time{Time: time.Now().Add(-1 * time.Second)}
if !phase.Completed() {
@@ -4815,6 +5254,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
desiredApps []v1alpha1.Application
appStepMap map[string]int
expectedAppStatus []v1alpha1.ApplicationSetApplicationStatus
}{
@@ -4968,14 +5408,14 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: "Application has pending changes, setting status to Waiting",
Message: revisionChangedMsg,
Status: v1alpha1.ProgressiveSyncWaiting,
Step: "1",
TargetRevisions: []string{"next"},
},
{
Application: "app2-multisource",
Message: "Application has pending changes, setting status to Waiting",
Message: revisionChangedMsg,
Status: v1alpha1.ProgressiveSyncWaiting,
Step: "1",
TargetRevisions: []string{"next"},
@@ -5415,6 +5855,191 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
},
},
},
{
name: "detects spec changes when image tag changes in generator (same Git revision)",
appSet: newDefaultAppSet(2, []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: "",
Status: v1alpha1.ProgressiveSyncHealthy,
Step: "1",
TargetRevisions: []string{"abc123"},
},
}),
apps: []v1alpha1.Application{
newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeOutOfSync, "abc123", nil, // Changed to OutOfSync
v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "master",
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{Name: "image.tag", Value: "v1.0.0"},
},
},
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "default",
},
}),
},
desiredApps: []v1alpha1.Application{
newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeOutOfSync, "abc123", nil, // Changed to OutOfSync
v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "master",
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{Name: "image.tag", Value: "v2.0.0"}, // Different value
},
},
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "default",
},
}),
},
appStepMap: map[string]int{
"app1": 0,
},
expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: specChangedMsg,
Status: v1alpha1.ProgressiveSyncWaiting,
Step: "1",
TargetRevisions: []string{"abc123"},
},
},
},
{
name: "does not detect changes when spec is identical (same Git revision)",
appSet: newDefaultAppSet(2, []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: "",
Status: v1alpha1.ProgressiveSyncHealthy,
Step: "1",
TargetRevisions: []string{"abc123"},
},
}),
apps: []v1alpha1.Application{
newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeSynced, "abc123", nil,
v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "master",
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{Name: "image.tag", Value: "v1.0.0"},
},
},
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "default",
},
}),
},
appStepMap: map[string]int{
"app1": 0,
},
// Desired apps have identical spec
desiredApps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
},
Spec: v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "master",
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{Name: "image.tag", Value: "v1.0.0"}, // Same value
},
},
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "default",
},
},
},
},
expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: "",
Status: v1alpha1.ProgressiveSyncHealthy,
Step: "1",
TargetRevisions: []string{"abc123"},
},
},
},
{
name: "detects both spec and revision changes",
appSet: newDefaultAppSet(2, []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: "",
Status: v1alpha1.ProgressiveSyncHealthy,
Step: "1",
TargetRevisions: []string{"abc123"}, // OLD revision in status
},
}),
apps: []v1alpha1.Application{
newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeOutOfSync, "def456", nil, // NEW revision, but OutOfSync
v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "master",
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{Name: "image.tag", Value: "v1.0.0"},
},
},
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "default",
},
}),
},
desiredApps: []v1alpha1.Application{
newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeOutOfSync, "def456", nil,
v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "master",
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{Name: "image.tag", Value: "v2.0.0"}, // Changed value
},
},
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "default",
},
}),
},
appStepMap: map[string]int{
"app1": 0,
},
expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: revisionAndSpecChangedMsg,
Status: v1alpha1.ProgressiveSyncWaiting,
Step: "1",
TargetRevisions: []string{"def456"},
},
},
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewClientset([]runtime.Object{}...)
@@ -5434,7 +6059,11 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
Metrics: metrics,
}
appStatuses, err := r.updateApplicationSetApplicationStatus(t.Context(), log.NewEntry(log.StandardLogger()), &cc.appSet, cc.apps, cc.appStepMap)
desiredApps := cc.desiredApps
if desiredApps == nil {
desiredApps = cc.apps
}
appStatuses, err := r.updateApplicationSetApplicationStatus(t.Context(), log.NewEntry(log.StandardLogger()), &cc.appSet, cc.apps, desiredApps, cc.appStepMap)
// opt out of testing the LastTransitionTime is accurate
for i := range appStatuses {
@@ -7321,6 +7950,40 @@ func TestIsRollingSyncStrategy(t *testing.T) {
}
}
func TestFirstAppError(t *testing.T) {
errA := errors.New("error from app-a")
errB := errors.New("error from app-b")
errC := errors.New("error from app-c")
t.Run("returns nil for empty map", func(t *testing.T) {
assert.NoError(t, firstAppError(map[string]error{}))
})
t.Run("returns the single error", func(t *testing.T) {
assert.ErrorIs(t, firstAppError(map[string]error{"app-a": errA}), errA)
})
t.Run("returns error from lexicographically first app name", func(t *testing.T) {
appErrors := map[string]error{
"app-c": errC,
"app-a": errA,
"app-b": errB,
}
assert.ErrorIs(t, firstAppError(appErrors), errA)
})
t.Run("result is stable across multiple calls with same input", func(t *testing.T) {
appErrors := map[string]error{
"app-c": errC,
"app-a": errA,
"app-b": errB,
}
for range 10 {
assert.ErrorIs(t, firstAppError(appErrors), errA, "firstAppError must return the same error on every call")
}
})
}
func TestSyncApplication(t *testing.T) {
tests := []struct {
name string


@@ -24,6 +24,43 @@ import (
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
)
var appEquality = conversion.EqualitiesOrDie(
func(a, b resource.Quantity) bool {
// Ignore formatting, only care that numeric value stayed the same.
// TODO: if we decide it's important, it should be safe to start comparing the format.
//
// Uninitialized quantities are equivalent to 0 quantities.
return a.Cmp(b) == 0
},
func(a, b metav1.MicroTime) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b metav1.Time) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b labels.Selector) bool {
return a.String() == b.String()
},
func(a, b fields.Selector) bool {
return a.String() == b.String()
},
func(a, b argov1alpha1.ApplicationDestination) bool {
return a.Namespace == b.Namespace && a.Name == b.Name && a.Server == b.Server
},
)
// BuildIgnoreDiffConfig constructs a DiffConfig from the ApplicationSet's ignoreDifferences rules.
// Returns nil when ignoreDifferences is empty.
func BuildIgnoreDiffConfig(ignoreDifferences argov1alpha1.ApplicationSetIgnoreDifferences, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts) (argodiff.DiffConfig, error) {
if len(ignoreDifferences) == 0 {
return nil, nil
}
return argodiff.NewDiffConfigBuilder().
WithDiffSettings(ignoreDifferences.ToApplicationIgnoreDifferences(), nil, false, ignoreNormalizerOpts).
WithNoCache().
Build()
}
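The refactor above hoists the diff-config construction out of the per-application path: `BuildIgnoreDiffConfig` is built once per reconcile, returns nil when there are no rules, and `applyIgnoreDifferences` treats a nil config as a no-op. A minimal sketch of that build-once/nil-is-noop contract, with stand-in types (`diffConfig`, `buildConfig`, `applyIgnores` are hypothetical names, not the argodiff API):

```go
package main

import (
	"fmt"
	"strings"
)

// diffConfig stands in for argodiff.DiffConfig: comparatively expensive
// to construct, immutable afterwards, safe to reuse across applications.
type diffConfig struct{ ignoredPrefixes []string }

// buildConfig mirrors BuildIgnoreDiffConfig: nil when there are no
// rules, so callers can pass it straight through.
func buildConfig(rules []string) *diffConfig {
	if len(rules) == 0 {
		return nil
	}
	return &diffConfig{ignoredPrefixes: rules}
}

// applyIgnores drops ignored fields; a nil config is a no-op, matching
// the nil-check in applyIgnoreDifferences above.
func applyIgnores(cfg *diffConfig, fields map[string]string) {
	if cfg == nil {
		return
	}
	for k := range fields {
		for _, p := range cfg.ignoredPrefixes {
			if strings.HasPrefix(k, p) {
				delete(fields, k)
			}
		}
	}
}

func main() {
	// Built once per reconcile, reused for every application.
	cfg := buildConfig([]string{"metadata.annotations"})
	for _, app := range []map[string]string{
		{"spec.project": "a", "metadata.annotations/x": "1"},
		{"spec.project": "b", "metadata.annotations/y": "2"},
	} {
		applyIgnores(cfg, app)
		fmt.Println(len(app)) // ignored field removed from each app
	}
}
```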
// CreateOrUpdate overrides "sigs.k8s.io/controller-runtime" function
// in sigs.k8s.io/controller-runtime/pkg/controller/controllerutil/controllerutil.go
// to add equality for argov1alpha1.ApplicationDestination
@@ -34,10 +71,15 @@ import (
// cluster. The object's desired state must be reconciled with the existing
// state inside the passed in callback MutateFn.
//
// diffConfig must be built once per reconcile cycle via BuildIgnoreDiffConfig and may be nil
// when there are no ignoreDifferences rules. obj.Spec must already be normalized by the caller
// via NormalizeApplicationSpec before this function is called; the live object fetched from the
// cluster is normalized internally.
//
// The MutateFn is called regardless of creating or updating an object.
//
// It returns the executed operation and an error.
func CreateOrUpdate(ctx context.Context, logCtx *log.Entry, c client.Client, ignoreAppDifferences argov1alpha1.ApplicationSetIgnoreDifferences, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts, obj *argov1alpha1.Application, f controllerutil.MutateFn) (controllerutil.OperationResult, error) {
func CreateOrUpdate(ctx context.Context, logCtx *log.Entry, c client.Client, diffConfig argodiff.DiffConfig, obj *argov1alpha1.Application, f controllerutil.MutateFn) (controllerutil.OperationResult, error) {
key := client.ObjectKeyFromObject(obj)
if err := c.Get(ctx, key, obj); err != nil {
if !errors.IsNotFound(err) {
@@ -59,43 +101,18 @@ func CreateOrUpdate(ctx context.Context, logCtx *log.Entry, c client.Client, ign
return controllerutil.OperationResultNone, err
}
// Normalize the live spec to avoid spurious diffs from unimportant differences (e.g. nil vs
// empty SyncPolicy). obj.Spec is already normalized by the caller; only the live side needs it.
normalizedLive.Spec = *argo.NormalizeApplicationSpec(&normalizedLive.Spec)
// Apply ignoreApplicationDifferences rules to remove ignored fields from both the live and the desired state. This
// prevents those differences from appearing in the diff and therefore in the patch.
err := applyIgnoreDifferences(ignoreAppDifferences, normalizedLive, obj, ignoreNormalizerOpts)
err := applyIgnoreDifferences(diffConfig, normalizedLive, obj)
if err != nil {
return controllerutil.OperationResultNone, fmt.Errorf("failed to apply ignore differences: %w", err)
}
// Normalize to avoid diffing on unimportant differences.
normalizedLive.Spec = *argo.NormalizeApplicationSpec(&normalizedLive.Spec)
obj.Spec = *argo.NormalizeApplicationSpec(&obj.Spec)
equality := conversion.EqualitiesOrDie(
func(a, b resource.Quantity) bool {
// Ignore formatting, only care that numeric value stayed the same.
// TODO: if we decide it's important, it should be safe to start comparing the format.
//
// Uninitialized quantities are equivalent to 0 quantities.
return a.Cmp(b) == 0
},
func(a, b metav1.MicroTime) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b metav1.Time) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b labels.Selector) bool {
return a.String() == b.String()
},
func(a, b fields.Selector) bool {
return a.String() == b.String()
},
func(a, b argov1alpha1.ApplicationDestination) bool {
return a.Namespace == b.Namespace && a.Name == b.Name && a.Server == b.Server
},
)
if equality.DeepEqual(normalizedLive, obj) {
if appEquality.DeepEqual(normalizedLive, obj) {
return controllerutil.OperationResultNone, nil
}
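The hoisted `appEquality` above registers per-type semantic comparisons (e.g. `resource.Quantity` compared by numeric value, not formatting) instead of rebuilding them on every call. The idea can be sketched with a stdlib-only type-to-comparator registry (the `quantity` type and `semanticEqual` helper are illustrative; `conversion.EqualitiesOrDie` also walks nested fields, which this sketch does not):

```go
package main

import (
	"fmt"
	"reflect"
)

// quantity stands in for resource.Quantity: the same numeric value can
// carry different formatting, so plain field equality over-reports diffs.
type quantity struct {
	milli  int64
	format string
}

// equalities maps a type to a semantic comparison, in the spirit of the
// comparator functions passed to conversion.EqualitiesOrDie above.
var equalities = map[reflect.Type]func(a, b any) bool{
	reflect.TypeOf(quantity{}): func(a, b any) bool {
		// Ignore formatting; only the numeric value matters.
		return a.(quantity).milli == b.(quantity).milli
	},
}

// semanticEqual consults the registry for the value's exact type and
// falls back to reflect.DeepEqual otherwise.
func semanticEqual(a, b any) bool {
	if ta, tb := reflect.TypeOf(a), reflect.TypeOf(b); ta == tb {
		if eq, ok := equalities[ta]; ok {
			return eq(a, b)
		}
	}
	return reflect.DeepEqual(a, b)
}

func main() {
	one := quantity{milli: 1000, format: "decimal"}
	alsoOne := quantity{milli: 1000, format: "milli"}
	fmt.Println(reflect.DeepEqual(one, alsoOne)) // false: format differs
	fmt.Println(semanticEqual(one, alsoOne))     // true: same numeric value
}
```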
@@ -135,19 +152,13 @@ func mutate(f controllerutil.MutateFn, key client.ObjectKey, obj client.Object)
}
// applyIgnoreDifferences applies the ignore differences rules to the found application. It modifies the applications in place.
func applyIgnoreDifferences(applicationSetIgnoreDifferences argov1alpha1.ApplicationSetIgnoreDifferences, found *argov1alpha1.Application, generatedApp *argov1alpha1.Application, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts) error {
if len(applicationSetIgnoreDifferences) == 0 {
// diffConfig may be nil, in which case this is a no-op.
func applyIgnoreDifferences(diffConfig argodiff.DiffConfig, found *argov1alpha1.Application, generatedApp *argov1alpha1.Application) error {
if diffConfig == nil {
return nil
}
generatedAppCopy := generatedApp.DeepCopy()
diffConfig, err := argodiff.NewDiffConfigBuilder().
WithDiffSettings(applicationSetIgnoreDifferences.ToApplicationIgnoreDifferences(), nil, false, ignoreNormalizerOpts).
WithNoCache().
Build()
if err != nil {
return fmt.Errorf("failed to build diff config: %w", err)
}
unstructuredFound, err := appToUnstructured(found)
if err != nil {
return fmt.Errorf("failed to convert found application to unstructured: %w", err)
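The quantity comparator registered above treats two quantities as equal whenever their numeric values match, regardless of formatting (e.g. `1000m` vs `1`), and an uninitialized quantity as equal to `0`. A stdlib-only sketch of that semantics (not the apimachinery `resource.Quantity` implementation, which supports far more suffixes):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseQuantity converts a small subset of Kubernetes quantity strings
// (plain numbers and the "m" milli suffix) to a float64 value.
func parseQuantity(s string) (float64, error) {
	if s == "" {
		return 0, nil // uninitialized quantities are equivalent to 0
	}
	if strings.HasSuffix(s, "m") {
		v, err := strconv.ParseFloat(strings.TrimSuffix(s, "m"), 64)
		return v / 1000, err
	}
	return strconv.ParseFloat(s, 64)
}

// quantitiesEqual mirrors the comparator's intent: ignore formatting,
// only care that the numeric value stayed the same.
func quantitiesEqual(a, b string) bool {
	av, errA := parseQuantity(a)
	bv, errB := parseQuantity(b)
	return errA == nil && errB == nil && av == bv
}

func main() {
	fmt.Println(quantitiesEqual("1000m", "1")) // same numeric value, different format
	fmt.Println(quantitiesEqual("", "0"))      // uninitialized == 0
	fmt.Println(quantitiesEqual("500m", "1"))
}
```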


@@ -224,7 +224,9 @@ spec:
generatedApp := v1alpha1.Application{TypeMeta: appMeta}
err = yaml.Unmarshal([]byte(tc.generatedApp), &generatedApp)
require.NoError(t, err, tc.generatedApp)
err = applyIgnoreDifferences(tc.ignoreDifferences, &foundApp, &generatedApp, normalizers.IgnoreNormalizerOpts{})
diffConfig, err := BuildIgnoreDiffConfig(tc.ignoreDifferences, normalizers.IgnoreNormalizerOpts{})
require.NoError(t, err)
err = applyIgnoreDifferences(diffConfig, &foundApp, &generatedApp)
require.NoError(t, err)
yamlFound, err := yaml.Marshal(tc.foundApp)
require.NoError(t, err)


@@ -79,6 +79,7 @@ func NewCommand() *cobra.Command {
tokenRefStrictMode bool
maxResourcesStatusCount int
cacheSyncPeriod time.Duration
concurrentApplicationUpdates int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@@ -239,24 +240,25 @@ func NewCommand() *cobra.Command {
})
if err = (&controllers.ApplicationSetReconciler{
Generators: topLevelGenerators,
Client: utils.NewCacheSyncingClient(mgr.GetClient(), mgr.GetCache()),
Scheme: mgr.GetScheme(),
Recorder: mgr.GetEventRecorderFor("applicationset-controller"),
Renderer: &utils.Render{},
Policy: policyObj,
EnablePolicyOverride: enablePolicyOverride,
KubeClientset: k8sClient,
ArgoDB: argoCDDB,
ArgoCDNamespace: namespace,
ApplicationSetNamespaces: applicationSetNamespaces,
EnableProgressiveSyncs: enableProgressiveSyncs,
SCMRootCAPath: scmRootCAPath,
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
ClusterInformer: clusterInformer,
Generators: topLevelGenerators,
Client: utils.NewCacheSyncingClient(mgr.GetClient(), mgr.GetCache()),
Scheme: mgr.GetScheme(),
Recorder: mgr.GetEventRecorderFor("applicationset-controller"),
Renderer: &utils.Render{},
Policy: policyObj,
EnablePolicyOverride: enablePolicyOverride,
KubeClientset: k8sClient,
ArgoDB: argoCDDB,
ArgoCDNamespace: namespace,
ApplicationSetNamespaces: applicationSetNamespaces,
EnableProgressiveSyncs: enableProgressiveSyncs,
SCMRootCAPath: scmRootCAPath,
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
ClusterInformer: clusterInformer,
ConcurrentApplicationUpdates: concurrentApplicationUpdates,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -303,6 +305,7 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&enableGitHubAPIMetrics, "enable-github-api-metrics", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_GITHUB_API_METRICS", false), "Enable GitHub API metrics for generators that use the GitHub API")
command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 5000, 0, math.MaxInt), "Max number of resources stored in appset status.")
command.Flags().DurationVar(&cacheSyncPeriod, "cache-sync-period", env.ParseDurationFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_CACHE_SYNC_PERIOD", time.Hour*10, 0, time.Hour*24), "Period at which the manager client cache is forcefully resynced with the Kubernetes API server. 0 disables periodic resync.")
command.Flags().IntVar(&concurrentApplicationUpdates, "concurrent-application-updates", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_CONCURRENT_APPLICATION_UPDATES", 1, 1, 200), "Number of concurrent Application create/update/delete operations per ApplicationSet reconcile.")
return &command
}



@@ -0,0 +1,28 @@
package command
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewCommand_ConcurrentApplicationUpdatesFlag(t *testing.T) {
cmd := NewCommand()
flag := cmd.Flags().Lookup("concurrent-application-updates")
require.NotNil(t, flag, "expected --concurrent-application-updates flag to be registered")
assert.Equal(t, "int", flag.Value.Type())
assert.Equal(t, "1", flag.DefValue, "default should be 1")
}
func TestNewCommand_ConcurrentApplicationUpdatesFlagValue(t *testing.T) {
cmd := NewCommand()
err := cmd.Flags().Set("concurrent-application-updates", "5")
require.NoError(t, err)
val, err := cmd.Flags().GetInt("concurrent-application-updates")
require.NoError(t, err)
assert.Equal(t, 5, val)
}


@@ -34,6 +34,7 @@ import (
"github.com/argoproj/argo-cd/v3/util/dex"
"github.com/argoproj/argo-cd/v3/util/env"
"github.com/argoproj/argo-cd/v3/util/errors"
utilglob "github.com/argoproj/argo-cd/v3/util/glob"
"github.com/argoproj/argo-cd/v3/util/kube"
"github.com/argoproj/argo-cd/v3/util/templates"
"github.com/argoproj/argo-cd/v3/util/tls"
@@ -87,6 +88,7 @@ func NewCommand() *cobra.Command {
applicationNamespaces []string
enableProxyExtension bool
webhookParallelism int
globCacheSize int
hydratorEnabled bool
syncWithReplaceAllowed bool
@@ -122,6 +124,7 @@ func NewCommand() *cobra.Command {
cli.SetLogFormat(cmdutil.LogFormat)
cli.SetLogLevel(cmdutil.LogLevel)
cli.SetGLogLevel(glogLevel)
utilglob.SetCacheSize(globCacheSize)
// Recover from panic and log the error using the configured logger instead of the default.
defer func() {
@@ -326,6 +329,7 @@ func NewCommand() *cobra.Command {
command.Flags().StringSliceVar(&applicationNamespaces, "application-namespaces", env.StringsFromEnv("ARGOCD_APPLICATION_NAMESPACES", []string{}, ","), "List of additional namespaces where application resources can be managed in")
command.Flags().BoolVar(&enableProxyExtension, "enable-proxy-extension", env.ParseBoolFromEnv("ARGOCD_SERVER_ENABLE_PROXY_EXTENSION", false), "Enable Proxy Extension feature")
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_SERVER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().IntVar(&globCacheSize, "glob-cache-size", env.ParseNumFromEnv("ARGOCD_SERVER_GLOB_CACHE_SIZE", utilglob.DefaultGlobCacheSize, 1, math.MaxInt32), "Maximum number of compiled glob patterns to cache for RBAC evaluation")
command.Flags().StringSliceVar(&enableK8sEvent, "enable-k8s-event", env.StringsFromEnv("ARGOCD_ENABLE_K8S_EVENT", argo.DefaultEnableEventList(), ","), "Enable ArgoCD to use k8s event. For disabling all events, set the value as `none`. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated)")
command.Flags().BoolVar(&hydratorEnabled, "hydrator-enabled", env.ParseBoolFromEnv("ARGOCD_HYDRATOR_ENABLED", false), "Feature flag to enable Hydrator. Default (\"false\")")
command.Flags().BoolVar(&syncWithReplaceAllowed, "sync-with-replace-allowed", env.ParseBoolFromEnv("ARGOCD_SYNC_WITH_REPLACE_ALLOWED", true), "Whether to allow users to select replace for syncs from UI/CLI")


@@ -308,22 +308,9 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
sync.WithLogr(logutils.NewLogrusLogger(logEntry)),
sync.WithHealthOverride(lua.ResourceHealthOverrides(resourceOverrides)),
sync.WithPermissionValidator(func(un *unstructured.Unstructured, res *metav1.APIResource) error {
if !project.IsGroupKindNamePermitted(un.GroupVersionKind().GroupKind(), un.GetName(), res.Namespaced) {
return fmt.Errorf("resource %s:%s is not permitted in project %s", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, project.Name)
}
if res.Namespaced {
permitted, err := project.IsDestinationPermitted(destCluster, un.GetNamespace(), func(project string) ([]*v1alpha1.Cluster, error) {
return m.db.GetProjectClusters(context.TODO(), project)
})
if err != nil {
return err
}
if !permitted {
return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), project.Name)
}
}
return nil
return validateSyncPermissions(project, destCluster, func(proj string) ([]*v1alpha1.Cluster, error) {
return m.db.GetProjectClusters(context.TODO(), proj)
}, un, res)
}),
sync.WithOperationSettings(syncOp.DryRun, syncOp.Prune, syncOp.SyncStrategy.Force(), syncOp.IsApplyStrategy() || len(syncOp.Resources) > 0),
sync.WithInitialState(state.Phase, state.Message, initialResourcesRes, state.StartedAt),
@@ -605,3 +592,33 @@ func deriveServiceAccountToImpersonate(project *v1alpha1.AppProject, application
// if there is no match found in the AppProject.Spec.DestinationServiceAccounts, use the default service account of the destination namespace.
return "", fmt.Errorf("no matching service account found for destination server %s and namespace %s", application.Spec.Destination.Server, serviceAccountNamespace)
}
// validateSyncPermissions checks whether the given resource is permitted by the project's
// allow/deny lists and destination rules. It returns an error if the API resource info is nil
// (preventing a nil-pointer panic), if the resource's group/kind is not permitted, or if
// the resource's namespace is not an allowed destination.
func validateSyncPermissions(
project *v1alpha1.AppProject,
destCluster *v1alpha1.Cluster,
getProjectClusters func(string) ([]*v1alpha1.Cluster, error),
un *unstructured.Unstructured,
res *metav1.APIResource,
) error {
if res == nil {
return fmt.Errorf("failed to get API resource info for %s/%s: unable to verify permissions", un.GroupVersionKind().Group, un.GroupVersionKind().Kind)
}
if !project.IsGroupKindNamePermitted(un.GroupVersionKind().GroupKind(), un.GetName(), res.Namespaced) {
return fmt.Errorf("resource %s:%s is not permitted in project %s", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, project.Name)
}
if res.Namespaced {
permitted, err := project.IsDestinationPermitted(destCluster, un.GetNamespace(), getProjectClusters)
if err != nil {
return err
}
if !permitted {
return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), project.Name)
}
}
return nil
}


@@ -13,6 +13,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/controller/testdata"
@@ -1653,3 +1654,116 @@ func dig(obj any, path ...any) any {
return i
}
func TestValidateSyncPermissions(t *testing.T) {
t.Parallel()
newResource := func(group, kind, name, namespace string) *unstructured.Unstructured {
obj := &unstructured.Unstructured{}
obj.SetGroupVersionKind(schema.GroupVersionKind{Group: group, Version: "v1", Kind: kind})
obj.SetName(name)
obj.SetNamespace(namespace)
return obj
}
project := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "test-project",
Namespace: "argocd",
},
Spec: v1alpha1.AppProjectSpec{
Destinations: []v1alpha1.ApplicationDestination{
{Namespace: "default", Server: "*"},
},
},
}
destCluster := &v1alpha1.Cluster{
Server: "https://kubernetes.default.svc",
}
noopGetClusters := func(_ string) ([]*v1alpha1.Cluster, error) {
return nil, nil
}
t.Run("nil APIResource returns error", func(t *testing.T) {
t.Parallel()
un := newResource("apps", "Deployment", "my-deploy", "default")
err := validateSyncPermissions(project, destCluster, noopGetClusters, un, nil)
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to get API resource info for apps/Deployment")
assert.Contains(t, err.Error(), "unable to verify permissions")
})
t.Run("permitted namespaced resource returns no error", func(t *testing.T) {
t.Parallel()
un := newResource("", "ConfigMap", "my-cm", "default")
res := &metav1.APIResource{Name: "configmaps", Namespaced: true}
err := validateSyncPermissions(project, destCluster, noopGetClusters, un, res)
assert.NoError(t, err)
})
t.Run("group kind not permitted returns error", func(t *testing.T) {
t.Parallel()
projectWithDenyList := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "restricted-project",
Namespace: "argocd",
},
Spec: v1alpha1.AppProjectSpec{
Destinations: []v1alpha1.ApplicationDestination{
{Namespace: "*", Server: "*"},
},
ClusterResourceBlacklist: []v1alpha1.ClusterResourceRestrictionItem{
{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"},
},
},
}
un := newResource("rbac.authorization.k8s.io", "ClusterRole", "my-role", "")
res := &metav1.APIResource{Name: "clusterroles", Namespaced: false}
err := validateSyncPermissions(projectWithDenyList, destCluster, noopGetClusters, un, res)
require.Error(t, err)
assert.Contains(t, err.Error(), "is not permitted in project")
})
t.Run("namespace not permitted returns error", func(t *testing.T) {
t.Parallel()
un := newResource("", "ConfigMap", "my-cm", "kube-system")
res := &metav1.APIResource{Name: "configmaps", Namespaced: true}
err := validateSyncPermissions(project, destCluster, noopGetClusters, un, res)
require.Error(t, err)
assert.Contains(t, err.Error(), "namespace kube-system is not permitted in project")
})
t.Run("cluster-scoped resource skips namespace check", func(t *testing.T) {
t.Parallel()
projectWithClusterResources := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "test-project",
Namespace: "argocd",
},
Spec: v1alpha1.AppProjectSpec{
Destinations: []v1alpha1.ApplicationDestination{
{Namespace: "default", Server: "*"},
},
ClusterResourceWhitelist: []v1alpha1.ClusterResourceRestrictionItem{
{Group: "*", Kind: "*"},
},
},
}
un := newResource("", "Namespace", "my-ns", "")
res := &metav1.APIResource{Name: "namespaces", Namespaced: false}
err := validateSyncPermissions(projectWithClusterResources, destCluster, noopGetClusters, un, res)
assert.NoError(t, err)
})
}


@@ -21,8 +21,8 @@ These are the upcoming releases dates:
| v3.1 | Monday, Jun. 16, 2025 | Monday, Aug. 4, 2025 | [Christian Hernandez](https://github.com/christianh814) | [Alexandre Gaudreault](https://github.com/agaudreault) | [checklist](https://github.com/argoproj/argo-cd/issues/23347) |
| v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | [Nitish Kumar](https://github.com/nitishfy) | [Michael Crenshaw](https://github.com/crenshaw-dev) | [checklist](https://github.com/argoproj/argo-cd/issues/24539) |
| v3.3 | Monday, Dec. 15, 2025 | Monday, Feb. 2, 2026 | [Peter Jiang](https://github.com/pjiang-dev) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/25211) |
| v3.4 | Monday, Mar. 16, 2026 | Monday, May. 4, 2026 | [Codey Jenkins](https://github.com/FourFifthsCode) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26527) |
| v3.5 | Monday, Jun. 15, 2026 | Monday, Aug. 3, 2026 | [Patroklos Papapetrou](https://github.com/ppapapetrou76) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26746) |
| v3.4 | Monday, Mar. 16, 2026 | Tuesday, May. 5, 2026 | [Codey Jenkins](https://github.com/FourFifthsCode) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26527) |
| v3.5 | Tuesday, Jun. 16, 2026 | Tuesday, Aug. 4, 2026 | [Patroklos Papapetrou](https://github.com/ppapapetrou76) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26746) |
Actual release dates might differ from the plan by a few days.
@@ -36,10 +36,10 @@ effectively means that there is a seven-week feature freeze.
These are the approximate release dates:
* The first Monday of February
* The first Monday of May
* The first Monday of August
* The first Monday of November
* The first Tuesday of February
* The first Tuesday of May
* The first Tuesday of August
* The first Tuesday of November
Dates may be shifted slightly to accommodate holidays. Those shifts should be minimal.


@@ -83,6 +83,26 @@ or a randomly generated password stored in a secret (Argo CD 1.9 and later).
Add `admin.enabled: "false"` to the `argocd-cm` ConfigMap
(see [user management](./operator-manual/user-management/index.md)).
## How to view orphaned resources?
Orphaned Kubernetes resources are top-level namespaced resources that do not belong to any Argo CD Application. For more information, see [Orphaned Resources Monitoring](./user-guide/orphaned-resources.md).
!!! warning
Enabling orphaned resource monitoring has performance implications. If an AppProject monitors a namespace containing many resources not managed by Argo CD (e.g. `kube-system`), it can significantly impact your Argo CD instance. Enable this feature only on projects with well-scoped namespaces.
To view orphaned resources in the Argo CD UI:
1. Click on **Settings** in the sidebar.
2. Click on **Projects**.
3. Select the desired project.
4. Scroll down to the **RESOURCE MONITORING** section.
5. Click **Edit** and enable the monitoring feature.
6. Check **Enable application warning conditions?** to enable warnings.
7. Click **Save**.
8. Navigate back to **Applications** and select an application under the configured project.
9. In the **Sync Panel**, under **APP CONDITIONS**, you will see the orphaned resources warning.
10. Click **Show Orphaned** below the **HEALTH STATUS** filters to display orphaned resources.
## Argo CD cannot deploy Helm Chart based applications without internet access, how can I solve it?
Argo CD might fail to generate Helm chart manifests if the chart has dependencies located in external repositories. To


@@ -230,7 +230,7 @@ p, somerole, applicationsets, get, foo/bar/*, allow
### Using the CLI
You can use all existing Argo CD CLI commands for managing applications in other namespaces, exactly as you would use the CLI to manage applications in the control plane's namespace.
You can use all existing Argo CD CLI commands for managing ApplicationSets in other namespaces, exactly as you would use the CLI to manage ApplicationSets in the control plane's namespace.
For example, to retrieve the `ApplicationSet` named `foo` in the namespace `bar`, you can use the following CLI command:


@@ -150,6 +150,8 @@ data:
server.api.content.types: "application/json"
# Number of webhook requests processed concurrently (default 50)
server.webhook.parallelism.limit: "50"
# Maximum number of compiled glob patterns to cache for RBAC evaluation (default 10000)
server.glob.cache.size: "10000"
# Whether to allow sync with replace checked to go through. A resource-level replace annotation overrides this setting, i.e. it's only enforced at the API server level.
server.sync.replace.allowed: "true"


@@ -253,6 +253,11 @@ spec:
megabytes.
The default value is 200. You might need to increase this for an Argo CD instance that manages 3000+ applications.
* The `server.glob.cache.size` config key in `argocd-cmd-params-cm` (or the `--glob-cache-size` server flag) controls
the maximum number of compiled glob patterns cached for RBAC policy evaluation. Glob pattern compilation is expensive,
and caching significantly improves RBAC performance when many applications are managed. The default value is 10000.
See [RBAC Glob Matching](rbac.md#glob-matching) for more details.
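Since `server.glob.cache.size` caps how many compiled patterns are retained, a bounded cache with some eviction policy is implied. A minimal sketch of such a cache, using `regexp` as a stand-in for the actual glob library and a simple LRU eviction that may differ from Argo CD's:

```go
package main

import (
	"container/list"
	"fmt"
	"regexp"
	"strings"
)

// compile translates a glob where `*` matches any characters (including `/`,
// since RBAC matching does not treat `/` as a separator) into a regexp.
func compile(pattern string) *regexp.Regexp {
	quoted := regexp.QuoteMeta(pattern)
	return regexp.MustCompile("^" + strings.ReplaceAll(quoted, `\*`, ".*") + "$")
}

// lruCache caches compiled patterns, evicting the least recently used
// entry once `size` patterns are stored.
type lruCache struct {
	size  int
	order *list.List               // front = most recently used pattern
	items map[string]*list.Element // pattern -> list element holding it
	regex map[string]*regexp.Regexp
}

func newLRUCache(size int) *lruCache {
	return &lruCache{size: size, order: list.New(),
		items: map[string]*list.Element{}, regex: map[string]*regexp.Regexp{}}
}

// get returns the compiled pattern, compiling and caching it on a miss.
func (c *lruCache) get(pattern string) *regexp.Regexp {
	if el, ok := c.items[pattern]; ok {
		c.order.MoveToFront(el)
		return c.regex[pattern]
	}
	if c.order.Len() >= c.size {
		oldest := c.order.Back()
		evicted := oldest.Value.(string)
		c.order.Remove(oldest)
		delete(c.items, evicted)
		delete(c.regex, evicted)
	}
	c.items[pattern] = c.order.PushFront(pattern)
	c.regex[pattern] = compile(pattern)
	return c.regex[pattern]
}

func main() {
	cache := newLRUCache(2)
	fmt.Println(cache.get("action/extensions/*").MatchString("action/extensions/DaemonSet/test"))
	fmt.Println(cache.get("default/*").MatchString("default/my-app"))
}
```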
### argocd-dex-server, argocd-redis
The `argocd-dex-server` uses an in-memory database, and two or more instances may have inconsistent data.


@@ -321,6 +321,10 @@ When the `example-user` executes the `extensions/DaemonSet/test` action, the fol
3. The value `action/extensions/DaemonSet/test` matches `action/extensions/*`. Note that `/` is not treated as a separator and the use of `**` is not necessary.
4. The value `default/my-app` matches `default/*`.
> [!TIP]
> For performance tuning of glob pattern matching, see the `server.glob.cache.size` config key in
> [High Availability - argocd-server](high_availability.md#argocd-server).
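The matching rules above, where a single `*` crosses `/` because `/` is not treated as a separator, can be sketched with a stdlib-only stand-in for the glob library:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// globMatch reports whether value matches pattern, with `*` matching any
// run of characters, including `/` (no separator handling, as in RBAC).
func globMatch(pattern, value string) bool {
	quoted := regexp.QuoteMeta(pattern)
	re := regexp.MustCompile("^" + strings.ReplaceAll(quoted, `\*`, ".*") + "$")
	return re.MatchString(value)
}

func main() {
	// `*` matches across `/`, so `**` is not necessary.
	fmt.Println(globMatch("action/extensions/*", "action/extensions/DaemonSet/test"))
	fmt.Println(globMatch("default/*", "default/my-app"))
}
```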
## Using SSO Users/Groups
The `scopes` field controls which OIDC scopes to examine during RBAC enforcement (in addition to `sub` scope).


@@ -22,6 +22,7 @@ argocd-applicationset-controller [flags]
--client-certificate string Path to a client certificate file for TLS
--client-key string Path to a client key file for TLS
--cluster string The name of the kubeconfig cluster to use
--concurrent-application-updates int Number of concurrent Application create/update/delete operations per ApplicationSet reconcile. (default 1)
--concurrent-reconciliations int Max concurrent reconciliations limit for the controller (default 10)
--context string The name of the kubeconfig context to use
--debug Print debug logs. Takes precedence over loglevel


@@ -54,6 +54,7 @@ argocd-server [flags]
--enable-gzip Enable GZIP compression (default true)
--enable-k8s-event none Enable ArgoCD to use k8s event. For disabling all events, set the value as none. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated) (default [all])
--enable-proxy-extension Enable Proxy Extension feature
--glob-cache-size int Maximum number of compiled glob patterns to cache for RBAC evaluation (default 10000)
--gloglevel int Set the glog logging level
-h, --help help for argocd-server
--hydrator-enabled Feature flag to enable Hydrator. Default ("false")


@@ -1,5 +1,2 @@
| Argo CD version | Kubernetes versions |
|-----------------|---------------------|
| 3.4 | v1.35, v1.34, v1.33, v1.32 |
| 3.3 | v1.34, v1.33, v1.32, v1.31 |
| 3.2 | v1.34, v1.33, v1.32, v1.31 |
This page is populated for released Argo CD versions. Use the version selector to view this table for a specific
version.


@@ -1,5 +1,9 @@
# Orphaned Resources Monitoring
!!! warning
Enabling orphaned resource monitoring has performance implications. If an AppProject monitors a namespace containing many resources not managed by Argo CD (e.g. `kube-system`), it can significantly impact your Argo CD instance. Enable this feature only on projects with well-scoped namespaces.
An [orphaned Kubernetes resource](https://kubernetes.io/docs/concepts/architecture/garbage-collection/#orphaned-dependents) is a top-level namespaced resource that does not belong to any Argo CD Application. The Orphaned Resources Monitoring feature allows detecting
orphaned resources, inspecting/removing resources using the Argo CD UI, and generating a warning.
@@ -38,10 +42,10 @@ Not every resource in the Kubernetes cluster is controlled by the end user and m
The following resources are never considered orphaned:
* Namespaced resources denied in the project. Usually, such resources are managed by cluster administrators and are not supposed to be modified by a namespace user.
* `ServiceAccount` with the name `default` (and the corresponding auto-generated `ServiceAccountToken`).
* `Service` with the name `kubernetes` in the `default` namespace.
* `ConfigMap` with the name `kube-root-ca.crt` in all namespaces.
- Namespaced resources denied in the project. Usually, such resources are managed by cluster administrators and are not supposed to be modified by a namespace user.
- `ServiceAccount` with the name `default` (and the corresponding auto-generated `ServiceAccountToken`).
- `Service` with the name `kubernetes` in the `default` namespace.
- `ConfigMap` with the name `kube-root-ca.crt` in all namespaces.
You can prevent resources from being declared orphaned by providing a list of ignore rules, each defining a Group, Kind, and Name.
@@ -49,8 +53,8 @@ You can prevent resources from being declared orphaned by providing a list of ig
spec:
orphanedResources:
ignore:
- kind: ConfigMap
name: orphaned-but-ignored-configmap
- kind: ConfigMap
name: orphaned-but-ignored-configmap
```
The `name` can be a [glob pattern](https://github.com/gobwas/glob), e.g.:


@@ -563,7 +563,7 @@ func (_c *ClusterCache_IsNamespaced_Call) RunAndReturn(run func(gk schema.GroupK
return _c
}
// IterateHierarchyV2 provides a mock function with given fields: keys, action, orphanedResourceNamespace
// IterateHierarchyV2 provides a mock function for the type ClusterCache
func (_mock *ClusterCache) IterateHierarchyV2(keys []kube.ResourceKey, action func(resource *cache.Resource, namespaceResources map[kube.ResourceKey]*cache.Resource) bool) {
_mock.Called(keys, action)
return


@@ -57,14 +57,14 @@ func TestAuthReconcileWithMissingNamespace(t *testing.T) {
_, err := k.authReconcile(context.Background(), role, "/dev/null", cmdutil.DryRunNone)
assert.Error(t, err)
assert.True(t, errors.IsNotFound(err), "returned error wasn't not found")
assert.True(t, errors.IsNotFound(err), "returned error should be resource not found")
roleBinding := testingutils.NewRoleBinding()
roleBinding.SetNamespace(namespace)
_, err = k.authReconcile(context.Background(), roleBinding, "/dev/null", cmdutil.DryRunNone)
assert.Error(t, err)
assert.True(t, errors.IsNotFound(err), "returned error wasn't not found")
assert.True(t, errors.IsNotFound(err), "returned error should be resource not found")
clusterRole := testingutils.NewClusterRole()
clusterRole.SetNamespace(namespace)

go.mod

@@ -45,6 +45,7 @@ require (
github.com/gogits/go-gogs-client v0.0.0-20210131175652-1d7215cd8d85
github.com/gogo/protobuf v1.3.2
github.com/golang-jwt/jwt/v5 v5.3.1
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8
github.com/golang/protobuf v1.5.4
github.com/google/btree v1.1.3
github.com/google/gnostic-models v0.7.0 // indirect
@@ -102,7 +103,7 @@ require (
golang.org/x/term v0.41.0
golang.org/x/time v0.15.0
google.golang.org/genproto/googleapis/api v0.0.0-20260209200024-4cfbd4190f57
google.golang.org/grpc v1.79.2
google.golang.org/grpc v1.79.3
google.golang.org/protobuf v1.36.11
gopkg.in/yaml.v2 v2.4.0
gopkg.in/yaml.v3 v3.0.1
@@ -148,18 +149,18 @@ require (
github.com/RocketChat/Rocket.Chat.Go.SDK v0.0.0-20240116134246-a8cbe886bab0 // indirect
github.com/aws/aws-sdk-go-v2 v1.41.4
github.com/aws/aws-sdk-go-v2/config v1.32.11
github.com/aws/aws-sdk-go-v2/credentials v1.19.11
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.19 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.19.12
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.20 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.20 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.20 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.5 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.6 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.19 // indirect
github.com/aws/aws-sdk-go-v2/service/signin v1.0.7 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.7 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.20 // indirect
github.com/aws/aws-sdk-go-v2/service/signin v1.0.8 // indirect
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.1 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.30.12 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.16 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.41.8
github.com/aws/aws-sdk-go-v2/service/sso v1.30.13 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.17 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.41.9
github.com/aws/smithy-go v1.24.2
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
@@ -208,7 +209,6 @@ require (
github.com/go-viper/mapstructure/v2 v2.5.0 // indirect
github.com/golang-jwt/jwt/v4 v4.5.2 // indirect
github.com/golang/glog v1.2.5 // indirect
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect
github.com/google/go-querystring v1.2.0 // indirect
github.com/google/s2a-go v0.1.9 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.4 // indirect

go.sum

@@ -128,10 +128,10 @@ github.com/aws/aws-sdk-go-v2 v1.41.4 h1:10f50G7WyU02T56ox1wWXq+zTX9I1zxG46HYuG1h
github.com/aws/aws-sdk-go-v2 v1.41.4/go.mod h1:mwsPRE8ceUUpiTgF7QmQIJ7lgsKUPQOUl3o72QBrE1o=
github.com/aws/aws-sdk-go-v2/config v1.32.11 h1:ftxI5sgz8jZkckuUHXfC/wMUc8u3fG1vQS0plr2F2Zs=
github.com/aws/aws-sdk-go-v2/config v1.32.11/go.mod h1:twF11+6ps9aNRKEDimksp923o44w/Thk9+8YIlzWMmo=
github.com/aws/aws-sdk-go-v2/credentials v1.19.11 h1:NdV8cwCcAXrCWyxArt58BrvZJ9pZ9Fhf9w6Uh5W3Uyc=
github.com/aws/aws-sdk-go-v2/credentials v1.19.11/go.mod h1:30yY2zqkMPdrvxBqzI9xQCM+WrlrZKSOpSJEsylVU+8=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.19 h1:INUvJxmhdEbVulJYHI061k4TVuS3jzzthNvjqvVvTKM=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.19/go.mod h1:FpZN2QISLdEBWkayloda+sZjVJL+e9Gl0k1SyTgcswU=
github.com/aws/aws-sdk-go-v2/credentials v1.19.12 h1:oqtA6v+y5fZg//tcTWahyN9PEn5eDU/Wpvc2+kJ4aY8=
github.com/aws/aws-sdk-go-v2/credentials v1.19.12/go.mod h1:U3R1RtSHx6NB0DvEQFGyf/0sbrpJrluENHdPy1j/3TE=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.20 h1:zOgq3uezl5nznfoK3ODuqbhVg1JzAGDUhXOsU0IDCAo=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.20/go.mod h1:z/MVwUARehy6GAg/yQ1GO2IMl0k++cu1ohP9zo887wE=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.20 h1:CNXO7mvgThFGqOFgbNAP2nol2qAWBOGfqR/7tQlvLmc=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.20/go.mod h1:oydPDJKcfMhgfcgBUZaG+toBbwy8yPWubJXBVERtI4o=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.20 h1:tN6W/hg+pkM+tf9XDkWUbDEjGLb+raoBMFsTodcoYKw=
@@ -140,22 +140,22 @@ github.com/aws/aws-sdk-go-v2/internal/ini v1.8.5 h1:clHU5fm//kWS1C2HgtgWxfQbFbx4
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.5/go.mod h1:O3h0IK87yXci+kg6flUKzJnWeziQUKciKrLjcatSNcY=
github.com/aws/aws-sdk-go-v2/service/codecommit v1.33.11 h1:R3S5odXTsflG7xUp9S2AsewSXtQi1LBd+stJ5OpCIog=
github.com/aws/aws-sdk-go-v2/service/codecommit v1.33.11/go.mod h1:OekzWXyZi3ptl+YoKmm+G5ODIa4BDEArvZv8gHrQb5s=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.6 h1:XAq62tBTJP/85lFD5oqOOe7YYgWxY9LvWq8plyDvDVg=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.6/go.mod h1:x0nZssQ3qZSnIcePWLvcoFisRXJzcTVvYpAAdYX8+GI=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.19 h1:X1Tow7suZk9UCJHE1Iw9GMZJJl0dAnKXXP1NaSDHwmw=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.19/go.mod h1:/rARO8psX+4sfjUQXp5LLifjUt8DuATZ31WptNJTyQA=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.7 h1:5EniKhLZe4xzL7a+fU3C2tfUN4nWIqlLesfrjkuPFTY=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.7/go.mod h1:x0nZssQ3qZSnIcePWLvcoFisRXJzcTVvYpAAdYX8+GI=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.20 h1:2HvVAIq+YqgGotK6EkMf+KIEqTISmTYh5zLpYyeTo1Y=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.20/go.mod h1:V4X406Y666khGa8ghKmphma/7C0DAtEQYhkq9z4vpbk=
github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.31.8 h1:mGgiunl7ZwOwhpJwJNF4JfsZFYJp08wjyS3NqFQe3ws=
github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.31.8/go.mod h1:KdM2EhXeHfeBQz5keOvv/FM7kbesjCWm7HEEyJe3frs=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.7 h1:Y2cAXlClHsXkkOvWZFXATr34b0hxxloeQu/pAZz2row=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.7/go.mod h1:idzZ7gmDeqeNrSPkdbtMp9qWMgcBwykA7P7Rzh5DXVU=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.8 h1:0GFOLzEbOyZABS3PhYfBIx2rNBACYcKty+XGkTgw1ow=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.8/go.mod h1:LXypKvk85AROkKhOG6/YEcHFPoX+prKTowKnVdcaIxE=
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.1 h1:ZtgZeMPJH8+/vNs9vJFFLI0QEzYbcN0p7x1/FFwyROc=
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.1/go.mod h1:Bar4MrRxeqdn6XIh8JGfiXuFRmyrrsZNTJotxEJmWW0=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.12 h1:iSsvB9EtQ09YrsmIc44Heqlx5ByGErqhPK1ZQLppias=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.12/go.mod h1:fEWYKTRGoZNl8tZ77i61/ccwOMJdGxwOhWCkp6TXAr0=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.16 h1:EnUdUqRP1CNzt2DkV67tJx6XDN4xlfBFm+bzeNOQVb0=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.16/go.mod h1:Jic/xv0Rq/pFNCh3WwpH4BEqdbSAl+IyHro8LbibHD8=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.8 h1:XQTQTF75vnug2TXS8m7CVJfC2nniYPZnO1D4Np761Oo=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.8/go.mod h1:Xgx+PR1NUOjNmQY+tRMnouRp83JRM8pRMw/vCaVhPkI=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.13 h1:kiIDLZ005EcKomYYITtfsjn7dtOwHDOFy7IbPXKek2o=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.13/go.mod h1:2h/xGEowcW/g38g06g3KpRWDlT+OTfxxI0o1KqayAB8=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.17 h1:jzKAXIlhZhJbnYwHbvUQZEB8KfgAEuG0dc08Bkda7NU=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.17/go.mod h1:Al9fFsXjv4KfbzQHGe6V4NZSZQXecFcvaIF4e70FoRA=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.9 h1:Cng+OOwCHmFljXIxpEVXAGMnBia8MSU6Ch5i9PgBkcU=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.9/go.mod h1:LrlIndBDdjA/EeXeyNBle+gyCwTlizzW5ycgWnvIxkk=
github.com/aws/smithy-go v1.24.2 h1:FzA3bu/nt/vDvmnkg+R8Xl46gmzEDam6mZ1hzmwXFng=
github.com/aws/smithy-go v1.24.2/go.mod h1:YE2RhdIuDbA5E5bTdciG9KrW3+TiEONeUWCqxX9i1Fc=
github.com/beevik/ntp v0.2.0/go.mod h1:hIHWr+l3+/clUnF44zdK+CWW7fO8dR5cIylAQ76NRpg=
@@ -1404,8 +1404,8 @@ google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.32.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.79.2 h1:fRMD94s2tITpyJGtBBn7MkMseNpOZU8ZxgC3MMBaXRU=
google.golang.org/grpc v1.79.2/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
google.golang.org/grpc v1.79.3 h1:sybAEdRIEtvcD68Gx7dmnwjZKlyfuc61Dyo9pGXXkKE=
google.golang.org/grpc v1.79.3/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=

View File

@@ -0,0 +1 @@
580515b544d5c966edc6f782c9ae88e21a9e10c786a7d6c5fd4b52613f321076 helm-v3.20.1-darwin-amd64.tar.gz

View File

@@ -0,0 +1 @@
75cc96ac3fe8b8b9928eb051e55698e98d1e026967b6bffe4f0f3c538a551b65 helm-v3.20.1-darwin-arm64.tar.gz

View File

@@ -0,0 +1 @@
0165ee4a2db012cc657381001e593e981f42aa5707acdd50658326790c9d0dc3 helm-v3.20.1-linux-amd64.tar.gz

View File

@@ -0,0 +1 @@
56b9d1b0e0efbb739be6e68a37860ace8ec9c7d3e6424e3b55d4c459bc3a0401 helm-v3.20.1-linux-arm64.tar.gz

View File

@@ -0,0 +1 @@
77b7d9bc62b209c044b873bc773055c5c0d17ef055e54c683f33209ebbe8883c helm-v3.20.1-linux-ppc64le.tar.gz

View File

@@ -0,0 +1 @@
3c43d45149a425c7bf15ba3653ddee13e7b1a4dd6d4534397b6f317f83c51b58 helm-v3.20.1-linux-s390x.tar.gz

View File

@@ -11,7 +11,7 @@
# Use ./hack/installers/checksums/add-helm-checksums.sh and
# add-kustomize-checksums.sh to help download checksums.
###############################################################################
helm3_version=3.19.4
helm3_version=3.20.1
kustomize5_version=5.8.1
protoc_version=29.3
oras_version=1.2.0
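
The `.sha256` files added above pin each pinned Helm release tarball to a known digest, and the `hack/installers` scripts verify downloads against them. A minimal sketch of that verification step, in Go rather than the repo's shell scripts; the digest below is the well-known SHA-256 of empty input, used only as a deterministic example, not a value from this PR:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// matchesChecksum reports whether data hashes to the expected hex digest,
// mirroring what `sha256sum -c` does against the stored .sha256 files.
func matchesChecksum(data []byte, expectedHex string) bool {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:]) == expectedHex
}

func main() {
	// SHA-256 of the empty input, a deterministic stand-in for a tarball.
	const emptyDigest = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
	fmt.Println(matchesChecksum(nil, emptyDigest)) // true
}
```

In the actual install flow, a mismatch aborts the installer so a tampered or truncated `helm-v3.20.1-*.tar.gz` is never unpacked.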

View File

@@ -12,4 +12,4 @@ resources:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: v3.4.0-rc2
newTag: latest

View File

@@ -5,7 +5,7 @@ kind: Kustomization
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: v3.4.0-rc2
newTag: latest
resources:
- ./application-controller
- ./dex

View File

@@ -316,6 +316,12 @@ spec:
name: argocd-cmd-params-cm
key: server.webhook.parallelism.limit
optional: true
- name: ARGOCD_SERVER_GLOB_CACHE_SIZE
valueFrom:
configMapKeyRef:
name: argocd-cmd-params-cm
key: server.glob.cache.size
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_NEW_GIT_FILE_GLOBBING
valueFrom:
configMapKeyRef:

View File

@@ -31332,7 +31332,7 @@ spec:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -31473,7 +31473,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -31601,7 +31601,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -31910,7 +31910,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -31963,7 +31963,7 @@ spec:
command:
- sh
- -c
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -32366,7 +32366,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -31300,7 +31300,7 @@ spec:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -31429,7 +31429,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -31738,7 +31738,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -31791,7 +31791,7 @@ spec:
command:
- sh
- -c
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -32194,7 +32194,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -12,4 +12,4 @@ resources:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: v3.4.0-rc2
newTag: latest

View File

@@ -12,7 +12,7 @@ patches:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: v3.4.0-rc2
newTag: latest
resources:
- ../../base/application-controller
- ../../base/applicationset-controller

View File

@@ -32758,7 +32758,7 @@ spec:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -32899,7 +32899,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -33057,7 +33057,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -33159,7 +33159,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -33283,7 +33283,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -33618,7 +33618,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -33671,7 +33671,7 @@ spec:
command:
- sh
- -c
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -34058,6 +34058,12 @@ spec:
key: server.webhook.parallelism.limit
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_GLOB_CACHE_SIZE
valueFrom:
configMapKeyRef:
key: server.glob.cache.size
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_NEW_GIT_FILE_GLOBBING
valueFrom:
configMapKeyRef:
@@ -34100,7 +34106,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -34532,7 +34538,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -32728,7 +32728,7 @@ spec:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -32887,7 +32887,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -32989,7 +32989,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -33113,7 +33113,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -33448,7 +33448,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -33501,7 +33501,7 @@ spec:
command:
- sh
- -c
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -33888,6 +33888,12 @@ spec:
key: server.webhook.parallelism.limit
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_GLOB_CACHE_SIZE
valueFrom:
configMapKeyRef:
key: server.glob.cache.size
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_NEW_GIT_FILE_GLOBBING
valueFrom:
configMapKeyRef:
@@ -33930,7 +33936,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -34362,7 +34368,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -2005,7 +2005,7 @@ spec:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -2146,7 +2146,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2304,7 +2304,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -2406,7 +2406,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -2530,7 +2530,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -2865,7 +2865,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2918,7 +2918,7 @@ spec:
command:
- sh
- -c
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -3305,6 +3305,12 @@ spec:
key: server.webhook.parallelism.limit
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_GLOB_CACHE_SIZE
valueFrom:
configMapKeyRef:
key: server.glob.cache.size
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_NEW_GIT_FILE_GLOBBING
valueFrom:
configMapKeyRef:
@@ -3347,7 +3353,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -3779,7 +3785,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -1975,7 +1975,7 @@ spec:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -2134,7 +2134,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -2236,7 +2236,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -2360,7 +2360,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -2695,7 +2695,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2748,7 +2748,7 @@ spec:
command:
- sh
- -c
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -3135,6 +3135,12 @@ spec:
key: server.webhook.parallelism.limit
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_GLOB_CACHE_SIZE
valueFrom:
configMapKeyRef:
key: server.glob.cache.size
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_NEW_GIT_FILE_GLOBBING
valueFrom:
configMapKeyRef:
@@ -3177,7 +3183,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -3609,7 +3615,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -31776,7 +31776,7 @@ spec:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -31917,7 +31917,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -32075,7 +32075,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -32177,7 +32177,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -32279,7 +32279,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -32588,7 +32588,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -32641,7 +32641,7 @@ spec:
command:
- sh
- -c
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -33026,6 +33026,12 @@ spec:
key: server.webhook.parallelism.limit
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_GLOB_CACHE_SIZE
valueFrom:
configMapKeyRef:
key: server.glob.cache.size
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_NEW_GIT_FILE_GLOBBING
valueFrom:
configMapKeyRef:
@@ -33068,7 +33074,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -33500,7 +33506,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-application-controller
ports:

manifests/install.yaml generated
View File

@@ -31744,7 +31744,7 @@ spec:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -31903,7 +31903,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -32005,7 +32005,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -32107,7 +32107,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -32416,7 +32416,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -32469,7 +32469,7 @@ spec:
command:
- sh
- -c
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -32854,6 +32854,12 @@ spec:
key: server.webhook.parallelism.limit
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_GLOB_CACHE_SIZE
valueFrom:
configMapKeyRef:
key: server.glob.cache.size
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_NEW_GIT_FILE_GLOBBING
valueFrom:
configMapKeyRef:
@@ -32896,7 +32902,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -33328,7 +33334,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -1023,7 +1023,7 @@ spec:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1164,7 +1164,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1322,7 +1322,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1424,7 +1424,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1526,7 +1526,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -1835,7 +1835,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1888,7 +1888,7 @@ spec:
command:
- sh
- -c
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2273,6 +2273,12 @@ spec:
key: server.webhook.parallelism.limit
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_GLOB_CACHE_SIZE
valueFrom:
configMapKeyRef:
key: server.glob.cache.size
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_NEW_GIT_FILE_GLOBBING
valueFrom:
configMapKeyRef:
@@ -2315,7 +2321,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2747,7 +2753,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -991,7 +991,7 @@ spec:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1150,7 +1150,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1252,7 +1252,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1354,7 +1354,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -1663,7 +1663,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1716,7 +1716,7 @@ spec:
command:
- sh
- -c
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2101,6 +2101,12 @@ spec:
key: server.webhook.parallelism.limit
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_GLOB_CACHE_SIZE
valueFrom:
configMapKeyRef:
key: server.glob.cache.size
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_NEW_GIT_FILE_GLOBBING
valueFrom:
configMapKeyRef:
@@ -2143,7 +2149,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2575,7 +2581,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.4.0-rc2
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -15,11 +15,13 @@ import (
"regexp"
"sort"
"strings"
gosync "sync"
"time"
"github.com/TomOnTime/utfutil"
"github.com/bmatcuk/doublestar/v4"
imagev1 "github.com/opencontainers/image-spec/specs-go/v1"
gocache "github.com/patrickmn/go-cache"
"sigs.k8s.io/yaml"
"github.com/argoproj/argo-cd/v3/util/oci"
@@ -96,6 +98,8 @@ type Service struct {
newGitClient func(rawRepoURL string, root string, creds git.Creds, insecure bool, enableLfs bool, proxy string, noProxy string, opts ...git.ClientOpts) (git.Client, error)
newHelmClient func(repoURL string, creds helm.Creds, enableOci bool, proxy string, noProxy string, opts ...helm.ClientOpts) helm.Client
initConstants RepoServerInitConstants
// stores cached symlink validation results
symlinksState *gocache.Cache
// now is usually just time.Now, but may be replaced by unit tests for testing purposes
now func() time.Time
}
@@ -157,6 +161,7 @@ func NewService(metricsServer *metrics.MetricsServer, cache *cache.Cache, initCo
ociPaths: ociRandomizedPaths,
gitRepoInitializer: directoryPermissionInitializer,
rootDir: rootDir,
symlinksState: gocache.New(12*time.Hour, time.Hour),
}
}
@@ -396,7 +401,7 @@ func (s *Service) runRepoOperation(
defer utilio.Close(closer)
if !s.initConstants.AllowOutOfBoundsSymlinks {
err := apppathutil.CheckOutOfBoundsSymlinks(ociPath)
err := s.checkOutOfBoundsSymlinks(ociPath, revision, settings.noCache)
if err != nil {
oobError := &apppathutil.OutOfBoundsSymlinkError{}
if errors.As(err, &oobError) {
@@ -437,7 +442,7 @@ func (s *Service) runRepoOperation(
}
defer utilio.Close(closer)
if !s.initConstants.AllowOutOfBoundsSymlinks {
err := apppathutil.CheckOutOfBoundsSymlinks(chartPath)
err := s.checkOutOfBoundsSymlinks(chartPath, revision, settings.noCache)
if err != nil {
oobError := &apppathutil.OutOfBoundsSymlinkError{}
if errors.As(err, &oobError) {
@@ -466,7 +471,7 @@ func (s *Service) runRepoOperation(
defer utilio.Close(closer)
if !s.initConstants.AllowOutOfBoundsSymlinks {
err := apppathutil.CheckOutOfBoundsSymlinks(gitClient.Root())
err := s.checkOutOfBoundsSymlinks(gitClient.Root(), revision, settings.noCache, ".git")
if err != nil {
oobError := &apppathutil.OutOfBoundsSymlinkError{}
if errors.As(err, &oobError) {
@@ -590,6 +595,25 @@ func resolveReferencedSources(hasMultipleSources bool, source *v1alpha1.Applicat
return repoRefs, nil
}
// checkOutOfBoundsSymlinks validates symlinks and caches validation result in memory
func (s *Service) checkOutOfBoundsSymlinks(rootPath string, version string, noCache bool, skipPaths ...string) error {
key := rootPath + "/" + version + "/" + strings.Join(skipPaths, ",")
ok := false
var checker any
if !noCache {
checker, ok = s.symlinksState.Get(key)
}
if !ok {
checker = gosync.OnceValue(func() error {
return apppathutil.CheckOutOfBoundsSymlinks(rootPath, skipPaths...)
})
s.symlinksState.Set(key, checker, gocache.DefaultExpiration)
}
return checker.(func() error)()
}
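
A minimal sketch of the memoization pattern the new `checkOutOfBoundsSymlinks` uses: the first caller for a given (path, revision) key runs the expensive symlink walk via `sync.OnceValue`, and later callers reuse the stored result while concurrent callers block until the first run finishes. A plain map with a mutex stands in here for the `go-cache` store and TTL used by the real service, and `expensiveCheck` is a stand-in for `apppathutil.CheckOutOfBoundsSymlinks`:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu     sync.Mutex
	checks = map[string]func() error{} // key -> memoized validation
	calls  int                        // counts how often the real check runs
)

// expensiveCheck is a stand-in for the real out-of-bounds symlink walk.
func expensiveCheck(path string) error {
	calls++
	return nil // pretend the tree contains no out-of-bounds symlinks
}

func cachedCheck(path, revision string) error {
	key := path + "/" + revision
	mu.Lock()
	once, ok := checks[key]
	if !ok {
		once = sync.OnceValue(func() error { return expensiveCheck(path) })
		checks[key] = once
	}
	mu.Unlock()
	return once() // concurrent callers block until the first run completes
}

func main() {
	for i := 0; i < 3; i++ {
		if err := cachedCheck("/repos/app", "abc123"); err != nil {
			panic(err)
		}
	}
	fmt.Println("validation runs:", calls) // the check executed only once
}
```

Keying by revision means a force-pushed or newly fetched commit gets a fresh walk, while repeated manifest generations against the same checkout skip it, which is why the diff threads `revision`/`commitSHA` and `noCache` into every call site.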
func (s *Service) GenerateManifest(ctx context.Context, q *apiclient.ManifestRequest) (*apiclient.ManifestResponse, error) {
var res *apiclient.ManifestResponse
var err error
@@ -865,7 +889,7 @@ func (s *Service) runManifestGenAsync(ctx context.Context, repoRoot, commitSHA,
// Symlink check must happen after acquiring lock.
if !s.initConstants.AllowOutOfBoundsSymlinks {
err := apppathutil.CheckOutOfBoundsSymlinks(gitClient.Root())
err := s.checkOutOfBoundsSymlinks(gitClient.Root(), commitSHA, q.NoCache, ".git")
if err != nil {
oobError := &apppathutil.OutOfBoundsSymlinkError{}
if errors.As(err, &oobError) {

View File

@@ -1,4 +1,4 @@
FROM docker.io/library/redis:8.6.1@sha256:1c054d54ecd1597bba52f4304bca5afbc5565ebe614c5b3d7dc5b7f8a0cd768d AS redis
FROM docker.io/library/redis:8.6.1@sha256:315270d166080f537bbdf1b489b603aaaa213cb55a544acfa51feb7481abb1c0 AS redis
# There are libraries we will want to copy from here in the final stage of the
# build, but the COPY directive does not have a way to determine system
@@ -14,7 +14,7 @@ FROM docker.io/library/registry:3.0@sha256:6c5666b861f3505b116bb9aa9b25175e71210
FROM docker.io/bitnamilegacy/kubectl:1.32@sha256:9524faf8e3cefb47fa28244a5d15f95ec21a73d963273798e593e61f80712333 AS kubectl
FROM docker.io/library/ubuntu:26.04@sha256:fed6ddb82c61194e1814e93b59cfcb6759e5aa33c4e41bb3782313c2386ed6df
FROM docker.io/library/ubuntu:26.04@sha256:91832dcd7bc5e44c098ecefc0a251a5c5d596dae494b33fb248e01b6840f8ce0
ENV DEBIAN_FRONTEND=noninteractive

View File

@@ -21,6 +21,7 @@ export const resourceIconGroups = {
'kyverno.io': true,
'opentelemetry.io': true,
'projectcontour.io': true,
'promoter.argoproj.io': true,
'work.karmada.io': true,
'zookeeper.pravega.io': true,
};

View File

@@ -16,7 +16,8 @@ jest.mock('./resource-customizations', () => ({
resourceIconGroups: {
'*.crossplane.io': true,
'*.fluxcd.io': true,
'cert-manager.io': true
'cert-manager.io': true,
'promoter.argoproj.io': true
}
}));
@@ -71,6 +72,14 @@ describe('ResourceIcon', () => {
expect(imgs.length).toBeGreaterThan(0);
expect(imgs[0].props.src).toBe('assets/images/resources/_.fluxcd.io/icon.svg');
});
it('should show group-based icon for promoter.argoproj.io', () => {
const testRenderer = renderer.create(<ResourceIcon group='promoter.argoproj.io' kind='PromotionStrategy' />);
const testInstance = testRenderer.root;
const imgs = testInstance.findAllByType('img');
expect(imgs.length).toBeGreaterThan(0);
expect(imgs[0].props.src).toBe('assets/images/resources/promoter.argoproj.io/icon.svg');
});
});
describe('fallback to kind-based icons (with non-matching group) - THIS IS THE BUG FIX', () => {

View File

@@ -0,0 +1,27 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Derived from GitOps Promoter plain icon (primary layer, no white outline):
https://github.com/argoproj-labs/gitops-promoter/blob/087fd273bbdad9c9669b93f520d6e4d1054d628f/docs/assets/logo/icon/primary.svg
Licensed under Apache License 2.0.
Single fill #8fa4b1 to match other Argo CD resource icons.
-->
<svg id="Layer_2" data-name="Layer 2" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 557 556.93">
<defs>
<style>
.cls-1 {
fill: #8fa4b1;
}
</style>
</defs>
<g id="Graphics">
<g>
<ellipse class="cls-1" cx="131.66" cy="425.27" rx="31.79" ry="97.5" transform="translate(-262.15 217.66) rotate(-45)"/>
<ellipse class="cls-1" cx="78.3" cy="478.69" rx="23.36" ry="71.65" transform="translate(-315.55 195.58) rotate(-45)"/>
<ellipse class="cls-1" cx="35.47" cy="521.46" rx="15.55" ry="47.68" transform="translate(-358.33 177.81) rotate(-45)"/>
<ellipse class="cls-1" cx="410.41" cy="146.59" rx="23.4" ry="94.92" transform="translate(16.55 333.14) rotate(-45)"/>
<path class="cls-1" d="M142.99,424.54c-2.71,0-5.43-1.04-7.5-3.11-4.14-4.14-4.13-10.85.01-14.99l36.23-36.17c6.49,6.49,11.18,11.18,14.99,14.99l-36.24,36.18c-2.07,2.07-4.78,3.1-7.49,3.1Z"/>
<path class="cls-1" d="M528.29,28.8h0s-.03-.03-.05-.05c-.02-.02-.03-.03-.04-.05h0c-48.94-48.78-172.32-45.51-298.42,90.14-5.89,6.34-11.91,9.55-17.92,9.56-4.51,0-8.89-.07-13.17-.14-27.96-.47-54.36-.9-85.68,20.63-15.31,10.53-26.97,21.26-36.69,33.75l-45.01,68.39c-.77,1.48-1.01,2.99-.91,4.48l.1,1.08c1.49,8.26,13.83,15.91,19,16.67l65.4,8.3c9.42,1.36,11.81,2.27,16.61,9.14,2.43,8.19,2.71,9.87,2.76,18.76-.76,16.95,14.48,36.33,24.37,47.73l21.55,21.55-8.48-8.48,7.69-7.69c.06-.06.09-.13.15-.19l100.84-100.73c4.14-4.14,10.85-4.13,14.99,0,4.14,4.14,4.13,10.85,0,14.99l-87,86.91.02.02-21.69,21.69,13.09,13.09c11.4,9.88,30.78,25.13,47.73,24.37,8.89.05,10.57.33,18.76,2.76,6.87,4.8,7.77,7.2,9.14,16.61l8.3,65.41c.76,5.17,8.4,17.51,16.67,19,0,.01,1.08.11,1.08.1,1.5.09,3-.14,4.48-.91l68.39-45.01c12.5-9.72,23.22-21.38,33.75-36.69,21.54-31.32,21.1-57.72,20.63-85.68-.07-4.28-.15-8.66-.14-13.17,0-6.01,3.22-12.04,9.56-17.92,135.65-126.1,138.91-249.48,90.13-298.42ZM379.05,177.95c-34.12-34.44-62.15-77.59-61.65-107.87C386.6,18.94,468.63,3.78,511.18,43.34c.42.39.79.8,1.19,1.2l.05.05.05.05c.4.4.81.77,1.2,1.19,39.56,42.56,24.39,124.58-26.75,193.79-30.28.5-73.43-27.53-107.87-61.65Z"/>
<path class="cls-1" d="M382.59,69.55c-3.96,0-7.75-2.23-9.57-6.03-2.52-5.28-.28-11.61,5-14.13,30.34-14.48,63.95-19.59,94.66-14.38,5.77.98,9.66,6.45,8.68,12.22-.98,5.77-6.45,9.67-12.22,8.68-26.43-4.48-55.55,0-81.98,12.61-1.47.7-3.03,1.04-4.56,1.04Z"/>
</g>
</g>
</svg>


View File

@@ -45,20 +45,28 @@ func (e *OutOfBoundsSymlinkError) Error() string {
// CheckOutOfBoundsSymlinks determines if basePath contains any symlinks that
// are absolute or point to a path outside of the basePath. If found, an
// OutOfBoundsSymlinkError is returned.
-func CheckOutOfBoundsSymlinks(basePath string) error {
+func CheckOutOfBoundsSymlinks(basePath string, skipPaths ...string) error {
absBasePath, err := filepath.Abs(basePath)
if err != nil {
return fmt.Errorf("failed to get absolute path: %w", err)
}
skipPathsSet := map[string]bool{}
for _, p := range skipPaths {
skipPathsSet[filepath.Join(absBasePath, p)] = true
}
return filepath.Walk(absBasePath, func(path string, info os.FileInfo, err error) error {
if err != nil {
-// Ignore "no such file or directory" errors than can happen with
+// Ignore "no such file or directory" errors that can happen with
// temporary files such as .git/*.lock
if errors.Is(err, os.ErrNotExist) {
return nil
}
return fmt.Errorf("failed to walk for symlinks in %s: %w", absBasePath, err)
}
if skipPathsSet[path] {
return filepath.SkipDir
}
if files.IsSymlink(info) {
// We don't use filepath.EvalSymlinks because it fails without returning a path
// if the target doesn't exist.

View File

@@ -83,6 +83,11 @@ func TestBadSymlinks3(t *testing.T) {
assert.Equal(t, "badlink", oobError.File)
}
func TestBadSymlinksExcluded(t *testing.T) {
err := CheckOutOfBoundsSymlinks("./testdata/badlink", "badlink")
assert.NoError(t, err)
}
// No absolute symlinks allowed
func TestAbsSymlink(t *testing.T) {
const testDir = "./testdata/abslink"

View File

@@ -360,7 +360,8 @@ func TestVerifyCommitSignature(t *testing.T) {
err = client.Init()
require.NoError(t, err)
-err = client.Fetch("", 0)
+// Use shallow fetch to avoid timeout fetching the entire repo
+err = client.Fetch("", 1)
require.NoError(t, err)
commitSHA, err := client.LsRemote("HEAD")
@@ -369,10 +370,18 @@ func TestVerifyCommitSignature(t *testing.T) {
_, err = client.Checkout(commitSHA, true, true)
require.NoError(t, err)
// Fetch the specific commits needed for signature verification
signedCommit := "28027897aad1262662096745f2ce2d4c74d02b7f"
unsignedCommit := "85d660f0b967960becce3d49bd51c678ba2a5d24"
err = client.Fetch(signedCommit, 1)
require.NoError(t, err)
err = client.Fetch(unsignedCommit, 1)
require.NoError(t, err)
// 28027897aad1262662096745f2ce2d4c74d02b7f is a commit that is signed in the repo
// It doesn't matter whether we know the key or not at this stage
{
-out, err := client.VerifyCommitSignature("28027897aad1262662096745f2ce2d4c74d02b7f")
+out, err := client.VerifyCommitSignature(signedCommit)
require.NoError(t, err)
assert.NotEmpty(t, out)
assert.Contains(t, out, "gpg: Signature made")
@@ -380,7 +389,7 @@ func TestVerifyCommitSignature(t *testing.T) {
// 85d660f0b967960becce3d49bd51c678ba2a5d24 is a commit that is not signed
{
-out, err := client.VerifyCommitSignature("85d660f0b967960becce3d49bd51c678ba2a5d24")
+out, err := client.VerifyCommitSignature(unsignedCommit)
require.NoError(t, err)
assert.Empty(t, out)
}

View File

@@ -1,26 +1,108 @@
package glob
import (
"sync"
"github.com/gobwas/glob"
"github.com/golang/groupcache/lru"
log "github.com/sirupsen/logrus"
"golang.org/x/sync/singleflight"
)
const (
// DefaultGlobCacheSize is the default maximum number of compiled glob patterns to cache.
// This limit prevents memory exhaustion from untrusted RBAC patterns.
// 10,000 patterns should be sufficient for most deployments while limiting
// memory usage to roughly 10MB (assuming ~1KB per compiled pattern).
DefaultGlobCacheSize = 10000
)
type compileFn func(pattern string, separators ...rune) (glob.Glob, error)
var (
// globCache stores compiled glob patterns using an LRU cache with bounded size.
// This prevents memory exhaustion from potentially untrusted RBAC patterns
// while still providing significant performance benefits.
globCache *lru.Cache
globCacheLock sync.Mutex
compileGroup singleflight.Group
compileGlob compileFn = glob.Compile
)
func init() {
globCache = lru.New(DefaultGlobCacheSize)
}
// SetCacheSize reinitializes the glob cache with the given maximum number of entries.
// This should be called early during process startup, before concurrent access begins.
func SetCacheSize(maxEntries int) {
globCacheLock.Lock()
defer globCacheLock.Unlock()
globCache = lru.New(maxEntries)
}
// globCacheKey uniquely identifies a compiled glob pattern.
// The same pattern compiled with different separators produces different globs,
// so both fields are needed.
type globCacheKey struct {
Pattern string
Separators string
}
func cacheKey(pattern string, separators ...rune) globCacheKey {
return globCacheKey{Pattern: pattern, Separators: string(separators)}
}
// getOrCompile returns a cached compiled glob pattern, compiling and caching it if necessary.
// Cache hits are a brief lock + map lookup. On cache miss, singleflight ensures each
// unique pattern is compiled exactly once even under concurrent access, while unrelated
// patterns compile in parallel.
// lru.Cache.Get() promotes entries (mutating), so a Mutex is used rather than RWMutex.
func getOrCompile(pattern string, compiler compileFn, separators ...rune) (glob.Glob, error) {
key := cacheKey(pattern, separators...)
globCacheLock.Lock()
if cached, ok := globCache.Get(key); ok {
globCacheLock.Unlock()
return cached.(glob.Glob), nil
}
globCacheLock.Unlock()
sfKey := key.Pattern + "\x00" + key.Separators
v, err, _ := compileGroup.Do(sfKey, func() (any, error) {
compiled, err := compiler(pattern, separators...)
if err != nil {
return nil, err
}
globCacheLock.Lock()
globCache.Add(key, compiled)
globCacheLock.Unlock()
return compiled, nil
})
if err != nil {
return nil, err
}
return v.(glob.Glob), nil
}
// Match tries to match a text with a given glob pattern.
// Compiled glob patterns are cached for performance.
func Match(pattern, text string, separators ...rune) bool {
-compiledGlob, err := glob.Compile(pattern, separators...)
+compiled, err := getOrCompile(pattern, compileGlob, separators...)
if err != nil {
log.Warnf("failed to compile pattern %s due to error %v", pattern, err)
return false
}
-return compiledGlob.Match(text)
+return compiled.Match(text)
}
// MatchWithError tries to match a text with a given glob pattern.
-// returns error if the glob pattern fails to compile.
+// Returns error if the glob pattern fails to compile.
// Compiled glob patterns are cached for performance.
func MatchWithError(pattern, text string, separators ...rune) (bool, error) {
-compiledGlob, err := glob.Compile(pattern, separators...)
+compiled, err := getOrCompile(pattern, compileGlob, separators...)
if err != nil {
return false, err
}
-return compiledGlob.Match(text), nil
+return compiled.Match(text), nil
}

View File

@@ -1,11 +1,57 @@
package glob
import (
"errors"
"fmt"
"sync"
"sync/atomic"
"testing"
extglob "github.com/gobwas/glob"
"github.com/stretchr/testify/require"
)
// Test helpers - these access internal variables for testing purposes
// resetGlobCacheForTest clears the cached glob patterns for testing.
func resetGlobCacheForTest() {
globCacheLock.Lock()
defer globCacheLock.Unlock()
globCache.Clear()
}
// isPatternCached returns true if the pattern (with optional separators) is cached.
func isPatternCached(pattern string, separators ...rune) bool {
globCacheLock.Lock()
defer globCacheLock.Unlock()
_, ok := globCache.Get(cacheKey(pattern, separators...))
return ok
}
// globCacheLen returns the number of cached patterns.
func globCacheLen() int {
globCacheLock.Lock()
defer globCacheLock.Unlock()
return globCache.Len()
}
func matchWithCompiler(pattern, text string, compiler compileFn, separators ...rune) bool {
compiled, err := getOrCompile(pattern, compiler, separators...)
if err != nil {
return false
}
return compiled.Match(text)
}
func countingCompiler() (compileFn, *int32) {
var compileCount int32
compiler := func(pattern string, separators ...rune) (extglob.Glob, error) {
atomic.AddInt32(&compileCount, 1)
return extglob.Compile(pattern, separators...)
}
return compiler, &compileCount
}
func Test_Match(t *testing.T) {
tests := []struct {
name string
@@ -86,3 +132,209 @@ func Test_MatchWithError(t *testing.T) {
})
}
}
func Test_GlobCaching(t *testing.T) {
// Clear cache before test
resetGlobCacheForTest()
compiler, compileCount := countingCompiler()
pattern := "test*pattern"
text := "testABCpattern"
// First call should compile and cache
result1 := matchWithCompiler(pattern, text, compiler)
require.True(t, result1)
// Verify pattern is cached
require.True(t, isPatternCached(pattern), "pattern should be cached after first Match call")
// Second call should use cached value
result2 := matchWithCompiler(pattern, text, compiler)
require.True(t, result2)
// Results should be consistent
require.Equal(t, result1, result2)
require.Equal(t, int32(1), atomic.LoadInt32(compileCount), "glob should compile once for the cached pattern")
}
func Test_GlobCachingConcurrent(t *testing.T) {
// Clear cache before test
resetGlobCacheForTest()
compiler, compileCount := countingCompiler()
pattern := "concurrent*test"
text := "concurrentABCtest"
var wg sync.WaitGroup
numGoroutines := 100
errChan := make(chan error, numGoroutines)
for range numGoroutines {
wg.Go(func() {
result := matchWithCompiler(pattern, text, compiler)
if !result {
errChan <- errors.New("expected match to return true")
}
})
}
wg.Wait()
close(errChan)
// Check for any errors from goroutines
for err := range errChan {
t.Error(err)
}
// Verify pattern is cached
require.True(t, isPatternCached(pattern))
require.Equal(t, 1, globCacheLen(), "should only have one cached entry for the pattern")
require.Equal(t, int32(1), atomic.LoadInt32(compileCount), "glob should compile once for the cached pattern")
}
func Test_GlobCacheLRUEviction(t *testing.T) {
// Clear cache before test
resetGlobCacheForTest()
// Fill cache beyond DefaultGlobCacheSize
for i := range DefaultGlobCacheSize + 100 {
pattern := fmt.Sprintf("pattern-%d-*", i)
Match(pattern, "pattern-0-test")
}
// Cache size should be limited to DefaultGlobCacheSize
require.Equal(t, DefaultGlobCacheSize, globCacheLen(), "cache size should be limited to DefaultGlobCacheSize")
// The oldest patterns should be evicted
oldest := fmt.Sprintf("pattern-%d-*", 0)
require.False(t, isPatternCached(oldest), "oldest pattern should be evicted")
// The most recently used patterns should still be cached
require.True(t, isPatternCached(fmt.Sprintf("pattern-%d-*", DefaultGlobCacheSize+99)), "most recent pattern should be cached")
}
func Test_GlobCacheKeyIncludesSeparators(t *testing.T) {
resetGlobCacheForTest()
compiler, compileCount := countingCompiler()
pattern := "a*b"
textWithSlash := "a/b"
// Without separators, '*' matches '/' so "a/b" matches "a*b"
require.True(t, matchWithCompiler(pattern, textWithSlash, compiler))
require.Equal(t, int32(1), atomic.LoadInt32(compileCount))
// With separator '/', '*' does NOT match '/' so "a/b" should NOT match "a*b"
require.False(t, matchWithCompiler(pattern, textWithSlash, compiler, '/'))
require.Equal(t, int32(2), atomic.LoadInt32(compileCount), "same pattern with different separators must compile separately")
// Both entries should be independently cached
require.True(t, isPatternCached(pattern))
require.True(t, isPatternCached(pattern, '/'))
require.Equal(t, 2, globCacheLen())
// Subsequent calls should use cache (no additional compiles)
matchWithCompiler(pattern, textWithSlash, compiler)
matchWithCompiler(pattern, textWithSlash, compiler, '/')
require.Equal(t, int32(2), atomic.LoadInt32(compileCount), "cached patterns should not recompile")
}
func Test_InvalidGlobNotCached(t *testing.T) {
// Clear cache before test
resetGlobCacheForTest()
invalidPattern := "e[[a*"
text := "test"
// Match should return false for invalid pattern
result := Match(invalidPattern, text)
require.False(t, result)
// Invalid patterns should NOT be cached
require.False(t, isPatternCached(invalidPattern), "invalid pattern should not be cached")
// Also test with MatchWithError
_, err := MatchWithError(invalidPattern, text)
require.Error(t, err)
// Still should not be cached after MatchWithError
require.False(t, isPatternCached(invalidPattern), "invalid pattern should not be cached after MatchWithError")
}
func Test_SetCacheSize(t *testing.T) {
resetGlobCacheForTest()
customSize := 5
SetCacheSize(customSize)
defer SetCacheSize(DefaultGlobCacheSize)
for i := range customSize + 3 {
Match(fmt.Sprintf("setsize-%d-*", i), "setsize-0-test")
}
require.Equal(t, customSize, globCacheLen(), "cache size should respect the custom size set via SetCacheSize")
require.False(t, isPatternCached("setsize-0-*"), "oldest pattern should be evicted with custom cache size")
require.True(t, isPatternCached(fmt.Sprintf("setsize-%d-*", customSize+2)), "most recent pattern should be cached")
}
// BenchmarkMatch_WithCache benchmarks Match with caching (cache hit)
func BenchmarkMatch_WithCache(b *testing.B) {
pattern := "proj:*/app-*"
text := "proj:myproject/app-frontend"
// Warm up the cache
Match(pattern, text)
b.ResetTimer()
for i := 0; i < b.N; i++ {
Match(pattern, text)
}
}
// BenchmarkMatch_WithoutCache simulates the OLD behavior (compile every time)
// by calling glob.Compile + Match directly, bypassing the cache entirely.
func BenchmarkMatch_WithoutCache(b *testing.B) {
pattern := "proj:*/app-*"
text := "proj:myproject/app-frontend"
b.ResetTimer()
for i := 0; i < b.N; i++ {
compiled, err := extglob.Compile(pattern)
if err != nil {
b.Fatal(err)
}
compiled.Match(text)
}
}
// BenchmarkGlobCompile measures raw glob.Compile cost
func BenchmarkGlobCompile(b *testing.B) {
pattern := "proj:*/app-*"
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, _ = extglob.Compile(pattern)
}
}
// BenchmarkMatch_RBACSimulation simulates real RBAC evaluation scenario
// 50 policies × 1 app = what happens per application in List
func BenchmarkMatch_RBACSimulation(b *testing.B) {
patterns := make([]string, 50)
for i := range 50 {
patterns[i] = fmt.Sprintf("proj:team-%d/*", i)
}
text := "proj:team-25/my-app"
// With caching: patterns are compiled once
b.ResetTimer()
for i := 0; i < b.N; i++ {
for _, pattern := range patterns {
Match(pattern, text)
}
}
}

View File

@@ -734,10 +734,13 @@ func TestClusterInformer_SecretDeletion(t *testing.T) {
err = clientset.CoreV1().Secrets("argocd").Delete(t.Context(), "cluster1", metav1.DeleteOptions{})
require.NoError(t, err)
time.Sleep(100 * time.Millisecond)
require.Eventually(t, func() bool {
_, err := informer.GetClusterByURL("https://cluster1.example.com")
return err != nil
}, 5*time.Second, 10*time.Millisecond, "expected cluster1 to be removed from cache after secret deletion")
_, err = informer.GetClusterByURL("https://cluster1.example.com")
-assert.Error(t, err)
+require.Error(t, err)
assert.Contains(t, err.Error(), "not found")
cluster2, err := informer.GetClusterByURL("https://cluster2.example.com")

View File

@@ -238,14 +238,18 @@ func (a *ArgoCDWebhookHandler) affectedRevisionInfo(payloadIf any) (webURLs []st
break
}
log.Debugf("created bitbucket client with base URL '%s'", apiBaseURL)
-owner := strings.ReplaceAll(payload.Repository.FullName, "/"+payload.Repository.Name, "")
+owner, repoSlug, ok := strings.Cut(payload.Repository.FullName, "/")
if !ok || owner == "" || repoSlug == "" {
log.Warnf("error parsing bitbucket repository full name %q", payload.Repository.FullName)
break
}
spec := change.shaBefore + ".." + change.shaAfter
-diffStatChangedFiles, err := fetchDiffStatFromBitbucket(ctx, bbClient, owner, payload.Repository.Name, spec)
+diffStatChangedFiles, err := fetchDiffStatFromBitbucket(ctx, bbClient, owner, repoSlug, spec)
if err != nil {
log.Warnf("error fetching changed files using bitbucket diffstat api: %v", err)
}
changedFiles = append(changedFiles, diffStatChangedFiles...)
-touchedHead, err = isHeadTouched(ctx, bbClient, owner, payload.Repository.Name, revision)
+touchedHead, err = isHeadTouched(ctx, bbClient, owner, repoSlug, revision)
if err != nil {
log.Warnf("error fetching bitbucket repo details: %v", err)
// To be safe, we just return true and let the controller check for himself.

View File

@@ -1232,7 +1232,7 @@ func Test_affectedRevisionInfo_bitbucket_changed_files(t *testing.T) {
"repository":{
"type": "repository",
"full_name": "{{.owner}}/{{.repo}}",
-"name": "{{.repo}}",
+"name": "{{.name}}",
"scm": "git",
"links": {
"self": {"href": "https://api.bitbucket.org/2.0/repositories/{{.owner}}/{{.repo}}"},
@@ -1245,7 +1245,7 @@ func Test_affectedRevisionInfo_bitbucket_changed_files(t *testing.T) {
panic(err)
}
-bitbucketPushPayload := func(branchName, owner, repo string) bitbucket.RepoPushPayload {
+bitbucketPushPayload := func(branchName, owner, repo, name string) bitbucket.RepoPushPayload {
// The payload's "push.changes[0].new.name" member seems to only have the branch name (based on the example payload).
// https://support.atlassian.com/bitbucket-cloud/docs/event-payloads/#EventPayloads-Push
var pl bitbucket.RepoPushPayload
@@ -1254,6 +1254,7 @@ func Test_affectedRevisionInfo_bitbucket_changed_files(t *testing.T) {
"branch": branchName,
"owner": owner,
"repo": repo,
"name": name,
"oldHash": "abcdef",
"newHash": "ghijkl",
})
@@ -1276,7 +1277,7 @@ func Test_affectedRevisionInfo_bitbucket_changed_files(t *testing.T) {
"bitbucket branch name containing 'refs/heads/'",
false,
"release-0.0",
-bitbucketPushPayload("release-0.0", "test-owner", "test-repo"),
+bitbucketPushPayload("release-0.0", "test-owner", "test-repo", "test-repo"),
false,
[]string{"guestbook/guestbook-ui-deployment.yaml"},
changeInfo{
@@ -1288,7 +1289,55 @@ func Test_affectedRevisionInfo_bitbucket_changed_files(t *testing.T) {
"bitbucket branch name containing 'main'",
false,
"main",
-bitbucketPushPayload("main", "test-owner", "test-repo"),
+bitbucketPushPayload("main", "test-owner", "test-repo", "test-repo"),
true,
[]string{"guestbook/guestbook-ui-deployment.yaml"},
changeInfo{
shaBefore: "abcdef",
shaAfter: "ghijkl",
},
},
{
"bitbucket display name is mixed case, differs from repo slug",
false,
"main",
bitbucketPushPayload("main", "test-owner", "test-repo", "Test Repo"),
true,
[]string{"guestbook/guestbook-ui-deployment.yaml"},
changeInfo{
shaBefore: "abcdef",
shaAfter: "ghijkl",
},
},
{
"bitbucket display name is all uppercase, differs from repo slug",
false,
"main",
bitbucketPushPayload("main", "test-owner", "test-repo", "TESTREPO"),
true,
[]string{"guestbook/guestbook-ui-deployment.yaml"},
changeInfo{
shaBefore: "abcdef",
shaAfter: "ghijkl",
},
},
{
"bitbucket display name is all lowercase, differs from repo slug",
false,
"main",
bitbucketPushPayload("main", "test-owner", "test-repo", "testrepo"),
true,
[]string{"guestbook/guestbook-ui-deployment.yaml"},
changeInfo{
shaBefore: "abcdef",
shaAfter: "ghijkl",
},
},
{
"bitbucket display name is all uppercase with spaces, differs from repo slug",
false,
"main",
bitbucketPushPayload("main", "test-owner", "test-repo", "TEST REPO"),
true,
[]string{"guestbook/guestbook-ui-deployment.yaml"},
changeInfo{