Compare commits

1 Commit

Author: argoproj-renovate[bot]
SHA1: f04ca4a967
Message: chore(deps): update group node
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Date: 2025-09-17 18:05:40 +00:00
820 changed files with 13439 additions and 155230 deletions

View File

@@ -9,76 +9,18 @@ assignees: ''
Target RC1 date: ___. __, ____
Target GA date: ___. __, ____
## RC1 Release Checklist
- [ ] 1wk before feature freeze post in #argo-contributors that PRs must be merged by DD-MM-YYYY to be included in the release - ask approvers to drop items from milestone they can't merge
- [ ] At least two days before RC1 date, draft RC blog post and submit it for review (or delegate this task)
- [ ] Create new release branch (or delegate this task to an Approver)
- [ ] Add the release branch to ReadTheDocs
- [ ] Cut RC1 (or delegate this task to an Approver and coordinate timing)
- [ ] Run the [Init ArgoCD Release workflow](https://github.com/argoproj/argo-cd/actions/workflows/init-release.yaml) from the release branch
- [ ] Review and merge the generated version bump PR
- [ ] Run `./hack/trigger-release.sh` to push the release tag
- [ ] Monitor the [Publish ArgoCD Release workflow](https://github.com/argoproj/argo-cd/actions/workflows/release.yaml)
- [ ] Verify the release on [GitHub releases](https://github.com/argoproj/argo-cd/releases)
- [ ] Verify the container image on [Quay.io](https://quay.io/repository/argoproj/argocd?tab=tags)
- [ ] Confirm the new version appears in [Read the Docs](https://argo-cd.readthedocs.io/)
- [ ] Announce RC1 release
- [ ] Confirm that tweet and blog post are ready
- [ ] Publish tweet and blog post
- [ ] Post in #argo-cd and #argo-announcements requesting help testing:
```
:mega: Argo CD v{MAJOR}.{MINOR}.{PATCH}-rc{RC_NUMBER} is OUT NOW! :argocd::tada:
Please go through the following resources to know more about the release:
Release notes: https://github.com/argoproj/argo-cd/releases/tag/v{VERSION}
Blog: {BLOG_POST_URL}
We'd love your help testing this release candidate! Please try it out in your environments and report any issues you find. This helps us ensure a stable GA release.
Thanks to all the folks who spent their time contributing to this release in any way possible!
```
- [ ] Monitor support channels for issues, cherry-picking bugfixes and docs fixes as appropriate during the RC period (or delegate this task to an Approver and coordinate timing)
## GA Release Checklist
- [ ] At GA release date, evaluate if any bugs justify delaying the release
- [ ] Prepare for EOL version (version that is 3 releases old)
- [ ] If unreleased changes are on the release branch for {current minor version minus 3}, cut a final patch release for that series (or delegate this task to an Approver and coordinate timing)
- [ ] Edit the final patch release on GitHub and add the following notice at the top:
```markdown
> [!IMPORTANT]
> **END OF LIFE NOTICE**
>
> This is the final release of the {EOL_SERIES} release series. As of {GA_DATE}, this version has reached end of life and will no longer receive bug fixes or security updates.
>
> **Action Required**: Please upgrade to a [supported version](https://argo-cd.readthedocs.io/en/stable/operator-manual/upgrading/overview/) (v{SUPPORTED_VERSION_1}, v{SUPPORTED_VERSION_2}, or v{NEW_VERSION}).
```
- [ ] Cut GA release (or delegate this task to an Approver and coordinate timing)
- [ ] Run the [Init ArgoCD Release workflow](https://github.com/argoproj/argo-cd/actions/workflows/init-release.yaml) from the release branch
- [ ] Review and merge the generated version bump PR
- [ ] Run `./hack/trigger-release.sh` to push the release tag
- [ ] Monitor the [Publish ArgoCD Release workflow](https://github.com/argoproj/argo-cd/actions/workflows/release.yaml)
- [ ] Verify the release on [GitHub releases](https://github.com/argoproj/argo-cd/releases)
- [ ] Verify the container image on [Quay.io](https://quay.io/repository/argoproj/argocd?tab=tags)
- [ ] Verify the `stable` tag has been updated
- [ ] Confirm the new version appears in [Read the Docs](https://argo-cd.readthedocs.io/)
- [ ] Announce GA release with EOL notice
- [ ] Confirm that tweet and blog post are ready
- [ ] Publish tweet and blog post
- [ ] Post in #argo-cd and #argo-announcements announcing the release and EOL:
```
:mega: Argo CD v{MAJOR}.{MINOR} is OUT NOW! :argocd::tada:
Please go through the following resources to know more about the release:
Upgrade instructions: https://argo-cd.readthedocs.io/en/latest/operator-manual/upgrading/{PREV_MINOR}-{MAJOR}.{MINOR}/
Blog: {BLOG_POST_URL}
:warning: IMPORTANT: With the release of Argo CD v{MAJOR}.{MINOR}, support for Argo CD v{EOL_VERSION} has officially reached End of Life (EOL).
Thanks to all the folks who spent their time contributing to this release in any way possible!
```
- [ ] Create new release branch
- [ ] Add the release branch to ReadTheDocs
- [ ] Confirm that tweet and blog post are ready
- [ ] Trigger the release
- [ ] After the release is finished, publish tweet and blog post
- [ ] Post in #argo-cd and #argo-announcements with lots of emojis announcing the release and requesting help testing
- [ ] Monitor support channels for issues, cherry-picking bugfixes and docs fixes as appropriate (or delegate this task to an Approver and coordinate timing)
- [ ] At release date, evaluate if any bugs justify delaying the release. If not, cut the release (or delegate this task to an Approver and coordinate timing)
- [ ] If unreleased changes are on the release branch for {current minor version minus 3}, cut a final patch release for that series (or delegate this task to an Approver and coordinate timing)
- [ ] After the release, post in #argo-cd that the {current minor version minus 3} has reached EOL (example: https://cloud-native.slack.com/archives/C01TSERG0KZ/p1667336234059729)
- [ ] (For the next release champion) Review the [items scheduled for the next release](https://github.com/orgs/argoproj/projects/25). If any item does not have an assignee who can commit to finish the feature, move it to the next release.
- [ ] (For the next release champion) Schedule a time mid-way through the release cycle to review items again.
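
The checklist invokes `./hack/trigger-release.sh`, but this diff doesn't show the script itself, so the following is a hypothetical sketch of what a tag-push trigger script of this kind typically does (the `VERSION` file does exist at the repo root, per the diff below; everything else here is an assumption):

```sh
#!/bin/sh
# Hypothetical sketch of a release-trigger script (not the actual
# hack/trigger-release.sh): tag the release-branch HEAD and push the
# tag so the tag-triggered Publish workflow (release.yaml) fires.
set -eu

VERSION="$(cat VERSION)"   # e.g. 3.3.0, read from the repo's VERSION file
TAG="v${VERSION}"

git tag -a "$TAG" -m "Release $TAG"
git push origin "$TAG"
```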

View File

@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
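
Throughout these workflows, actions are pinned to full commit SHAs with the tag recorded in a trailing comment (e.g. `# v4.0.0`). If you need to verify or refresh such a pin by hand, resolving the tag to its commit is enough; a minimal sketch:

```sh
# Resolve an action's tag to the commit SHA used in a "uses:" pin.
# For annotated tags, the "^{}" entry is the peeled commit SHA --
# that is the value that belongs after the "@" in the workflow.
git ls-remote https://github.com/actions/checkout "refs/tags/v4.0.0*"
```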

View File

@@ -38,7 +38,7 @@ jobs:
private-key: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ steps.generate-token.outputs.token }}
@@ -91,17 +91,11 @@ jobs:
- name: Create Pull Request
run: |
# Create cherry-pick PR
TITLE="${PR_TITLE} (cherry-pick #${{ inputs.pr_number }} for ${{ inputs.version_number }})"
BODY=$(cat <<EOF
Cherry-picked ${PR_TITLE} (#${{ inputs.pr_number }})
${{ steps.cherry-pick.outputs.signoff }}
EOF
)
gh pr create \
--title "$TITLE" \
--body "$BODY" \
--title "${{ inputs.pr_title }} (cherry-pick #${{ inputs.pr_number }} for ${{ inputs.version_number }})" \
--body "Cherry-picked ${{ inputs.pr_title }} (#${{ inputs.pr_number }})
${{ steps.cherry-pick.outputs.signoff }}" \
--base "${{ steps.cherry-pick.outputs.target_branch }}" \
--head "${{ steps.cherry-pick.outputs.branch_name }}"
@@ -109,13 +103,12 @@ jobs:
gh pr comment ${{ inputs.pr_number }} \
--body "🍒 Cherry-pick PR created for ${{ inputs.version_number }}: #$(gh pr list --head ${{ steps.cherry-pick.outputs.branch_name }} --json number --jq '.[0].number')"
env:
PR_TITLE: ${{ inputs.pr_title }}
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
- name: Comment on failure
if: failure()
run: |
gh pr comment ${{ inputs.pr_number }} \
--body "❌ Cherry-pick failed for ${{ inputs.version_number }}. Please check the [workflow logs](https://github.com/argoproj/argo-cd/actions/runs/${{ github.run_id }}) for details."
--body "❌ Cherry-pick failed for ${{ inputs.version_number }}. Please check the workflow logs for details."
env:
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
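
One side of this hunk builds the PR title from a `PR_TITLE` environment variable (set in the step's `env:` block) instead of splicing `${{ inputs.pr_title }}` straight into the script. The distinction matters for untrusted input: `${{ }}` expressions are substituted into the script text before the shell runs, so a malicious title could inject commands, while an environment variable is only ever expanded as data. A minimal sketch of the safer pattern (this workflow exports only the title this way; `PR_NUMBER` and `VERSION_NUMBER` below are assumed to be exported likewise):

```sh
# PR_TITLE arrives via the environment, so shell metacharacters in an
# untrusted title ("$(...)", backticks, quotes) cannot alter the script.
# PR_NUMBER and VERSION_NUMBER are assumed to be exported the same way.
TITLE="${PR_TITLE} (cherry-pick #${PR_NUMBER} for ${VERSION_NUMBER})"
gh pr create \
  --title "$TITLE" \
  --body "Cherry-picked ${PR_TITLE} (#${PR_NUMBER})"
```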

View File

@@ -14,7 +14,7 @@ on:
env:
# Golang version to use across CI steps
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.25.3'
GOLANG_VERSION: '1.25.1'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -31,7 +31,7 @@ jobs:
frontend: ${{ steps.filter.outputs.frontend_any_changed }}
docs: ${{ steps.filter.outputs.docs_any_changed }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
id: filter
with:
@@ -55,7 +55,7 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
@@ -67,6 +67,7 @@ jobs:
run: |
go mod tidy
git diff --exit-code -- .
build-go:
name: Build & cache Go code
if: ${{ needs.changes.outputs.backend == 'true' }}
@@ -75,13 +76,13 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Restore go build cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -102,7 +103,7 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
@@ -111,7 +112,7 @@ jobs:
uses: golangci/golangci-lint-action@4afd733a84b1f43292c63897423277bb7f4313a9 # v8.0.0
with:
# renovate: datasource=go packageName=github.com/golangci/golangci-lint versioning=regex:^v(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?$
version: v2.5.0
version: v2.4.0
args: --verbose
test-go:
@@ -128,7 +129,7 @@ jobs:
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
@@ -152,7 +153,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -173,7 +174,7 @@ jobs:
- name: Run all unit tests
run: make test-local
- name: Generate test results artifacts
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: test-results
path: test-results
@@ -192,7 +193,7 @@ jobs:
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
@@ -216,7 +217,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -237,7 +238,7 @@ jobs:
- name: Run all unit tests
run: make test-race-local
- name: Generate test results artifacts
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: race-results
path: test-results/
@@ -250,7 +251,7 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
@@ -302,15 +303,15 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup NodeJS
uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
with:
# renovate: datasource=node-version packageName=node versioning=node
node-version: '22.9.0'
node-version: '22.19.0'
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -335,7 +336,7 @@ jobs:
shellcheck:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- run: |
sudo apt-get install shellcheck
shellcheck -e SC2059 -e SC2154 -e SC2034 -e SC2016 -e SC1091 $(find . -type f -name '*.sh' | grep -v './ui/node_modules') | tee sc.log
@@ -354,12 +355,12 @@ jobs:
sonar_secret: ${{ secrets.SONAR_TOKEN }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -367,12 +368,12 @@ jobs:
run: |
rm -rf ui/node_modules/argo-ui/node_modules
- name: Get e2e code coverage
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
name: e2e-code-coverage
path: e2e-code-coverage
- name: Get unit test code coverage
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
name: test-results
path: test-results
@@ -406,7 +407,7 @@ jobs:
test-e2e:
name: Run end-to-end tests
if: ${{ needs.changes.outputs.backend == 'true' }}
runs-on: oracle-vm-16cpu-64gb-x86-64
runs-on: ubuntu-latest-16-cores
strategy:
fail-fast: false
matrix:
@@ -425,7 +426,7 @@ jobs:
- build-go
- changes
env:
GOPATH: /home/ubuntu/go
GOPATH: /home/runner/go
ARGOCD_FAKE_IN_CLUSTER: 'true'
ARGOCD_SSH_DATA_PATH: '/tmp/argo-e2e/app/config/ssh'
ARGOCD_TLS_DATA_PATH: '/tmp/argo-e2e/app/config/tls'
@@ -446,7 +447,7 @@ jobs:
swap-storage: false
tool-cache: false
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
@@ -461,19 +462,19 @@ jobs:
set -x
curl -sfL https://get.k3s.io | sh -
sudo chmod -R a+rw /etc/rancher/k3s
sudo mkdir -p $HOME/.kube && sudo chown -R ubuntu $HOME/.kube
sudo mkdir -p $HOME/.kube && sudo chown -R runner $HOME/.kube
sudo k3s kubectl config view --raw > $HOME/.kube/config
sudo chown ubuntu $HOME/.kube/config
sudo chown runner $HOME/.kube/config
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Add ~/go/bin to PATH
run: |
echo "/home/ubuntu/go/bin" >> $GITHUB_PATH
echo "/home/runner/go/bin" >> $GITHUB_PATH
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
@@ -499,7 +500,7 @@ jobs:
- name: Create target directory for binaries in the build-process
run: |
mkdir -p dist
chown ubuntu dist
chown runner dist
- name: Run E2E server and wait for it being available
timeout-minutes: 30
run: |
@@ -525,13 +526,13 @@ jobs:
goreman run stop-all || echo "goreman trouble"
sleep 30
- name: Upload e2e coverage report
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: e2e-code-coverage
path: /tmp/coverage
if: ${{ matrix.k3s.latest }}
- name: Upload e2e-server logs
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: e2e-server-k8s${{ matrix.k3s.version }}.log
path: /tmp/e2e-server.log
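
This hunk also moves the e2e job between runners (`oracle-vm-16cpu-64gb-x86-64` on one side, `ubuntu-latest-16-cores` on the other), and because the runner's Unix user differs (`ubuntu` vs `runner`), the GOPATH, `chown` targets, and PATH entries are all hardcoded per runner. A runner-agnostic sketch that would avoid touching these lines on every migration (an alternative, not what either side of the workflow does):

```sh
# Derive user- and home-dependent values instead of hardcoding them:
echo "$HOME/go/bin" >> "$GITHUB_PATH"      # replaces /home/ubuntu|runner/go/bin
sudo chown -R "$(id -un)" "$HOME/.kube"    # replaces chown -R ubuntu|runner
sudo chown "$(id -un)" dist                # replaces chown ubuntu|runner dist
```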

View File

@@ -29,7 +29,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
# Use correct go version. https://github.com/github/codeql-action/issues/1842#issuecomment-1704398087
- name: Setup Golang

View File

@@ -56,14 +56,14 @@ jobs:
image-digest: ${{ steps.image.outputs.digest }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
if: ${{ github.ref_type == 'tag'}}
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
if: ${{ github.ref_type != 'tag'}}
- name: Setup Golang
@@ -73,7 +73,7 @@ jobs:
cache: false
- name: Install cosign
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
- uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
- uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
@@ -103,7 +103,7 @@ jobs:
echo 'EOF' >> $GITHUB_ENV
- name: Login to Quay.io
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: quay.io
username: ${{ secrets.quay_username }}
@@ -111,7 +111,7 @@ jobs:
if: ${{ inputs.quay_image_name && inputs.push }}
- name: Login to GitHub Container Registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ secrets.ghcr_username }}
@@ -119,7 +119,7 @@ jobs:
if: ${{ inputs.ghcr_image_name && inputs.push }}
- name: Login to dockerhub Container Registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
username: ${{ secrets.docker_username }}
password: ${{ secrets.docker_password }}

View File

@@ -25,7 +25,7 @@ jobs:
image-tag: ${{ steps.image.outputs.tag}}
platforms: ${{ steps.platforms.outputs.platforms }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Set image tag for ghcr
run: echo "tag=$(cat ./VERSION)-${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
@@ -53,7 +53,7 @@ jobs:
with:
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
go-version: 1.25.1
platforms: ${{ needs.set-vars.outputs.platforms }}
push: false
@@ -70,7 +70,7 @@ jobs:
ghcr_image_name: ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
go-version: 1.25.1
platforms: ${{ needs.set-vars.outputs.platforms }}
push: true
secrets:
@@ -106,7 +106,7 @@ jobs:
if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name == 'push' }}
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- run: git clone "https://$TOKEN@github.com/argoproj/argoproj-deployments"
env:
TOKEN: ${{ secrets.TOKEN }}

View File

@@ -23,7 +23,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -11,7 +11,7 @@ permissions: {}
env:
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.25.3' # Note: go-version must also be set in job argocd-image.with.go-version
GOLANG_VERSION: '1.25.1' # Note: go-version must also be set in job argocd-image.with.go-version
jobs:
argocd-image:
@@ -25,49 +25,13 @@ jobs:
quay_image_name: quay.io/argoproj/argocd:${{ github.ref_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
go-version: 1.25.1
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
secrets:
quay_username: ${{ secrets.RELEASE_QUAY_USERNAME }}
quay_password: ${{ secrets.RELEASE_QUAY_TOKEN }}
setup-variables:
name: Setup Release Variables
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
outputs:
is_pre_release: ${{ steps.var.outputs.is_pre_release }}
is_latest_release: ${{ steps.var.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup variables
id: var
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_RELEASE_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | grep -v '-' | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo ${{ github.ref_name }} | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
IS_LATEST=false
# Ensure latest release tag matches github.ref_name
if [[ $LATEST_RELEASE_TAG == ${{ github.ref_name }} ]];then
IS_LATEST=true
fi
echo "is_pre_release=$PRE_RELEASE" >> $GITHUB_OUTPUT
echo "is_latest_release=$IS_LATEST" >> $GITHUB_OUTPUT
argocd-image-provenance:
needs: [argocd-image]
permissions:
@@ -86,20 +50,18 @@ jobs:
goreleaser:
needs:
- setup-variables
- argocd-image
- argocd-image-provenance
permissions:
contents: write # used for uploading assets
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
GORELEASER_MAKE_LATEST: ${{ needs.setup-variables.outputs.is_latest_release }}
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -180,12 +142,12 @@ jobs:
permissions:
contents: write # Needed for release uploads
outputs:
hashes: ${{ steps.sbom-hash.outputs.hashes }}
hashes: ${{ steps.sbom-hash.outputs.hashes}}
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -236,7 +198,7 @@ jobs:
echo "hashes=$(sha256sum /tmp/sbom.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"
- name: Upload SBOM
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090 # v2.4.1
uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836 # v2.3.3
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
@@ -259,7 +221,6 @@ jobs:
post-release:
needs:
- setup-variables
- argocd-image
- goreleaser
- generate-sbom
@@ -268,11 +229,9 @@ jobs:
pull-requests: write # Needed to create PR for VERSION update.
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -283,6 +242,27 @@ jobs:
git config --global user.email 'ci@argoproj.com'
git config --global user.name 'CI'
- name: Check if tag is the latest version and not a pre-release
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo $LATEST_TAG | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
# Ensure latest tag matches github.ref_name & not a pre-release
if [[ $LATEST_TAG == ${{ github.ref_name }} ]] && [[ $PRE_RELEASE != 'true' ]];then
echo "TAG_STABLE=true" >> $GITHUB_ENV
else
echo "TAG_STABLE=false" >> $GITHUB_ENV
fi
- name: Update stable tag to latest version
run: |
git tag -f stable ${{ github.ref_name }}
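
Both sides of this workflow lean on two git tricks: `-c versionsort.suffix=-rc` makes `version:refname` sorting place `-rcN` tags before the final release they precede, and the `-rc[0-9]+$` regex flags pre-release tags. A quick demonstration of the sort behavior (tag names illustrative):

```sh
# With versionsort.suffix=-rc, rc tags sort below their final release,
# so `tail -n1` reliably picks the newest tag and `grep -v '-'` keeps
# only stable releases.
git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname
# v3.0.0-rc1
# v3.0.0-rc2
# v3.0.0
# v3.1.0-rc1
echo 'v3.1.0-rc1' | grep -E -- '-rc[0-9]+$' && echo 'pre-release'
```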

View File

@@ -20,17 +20,10 @@ jobs:
private-key: ${{ secrets.RENOVATE_APP_PRIVATE_KEY }}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # 5.0.0
# Some codegen commands require Go to be setup
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # 4.2.2
- name: Self-hosted Renovate
uses: renovatebot/github-action@ea850436a5fe75c0925d583c7a02c60a5865461d #43.0.20
uses: renovatebot/github-action@f8af9272cd94a4637c29f60dea8731afd3134473 #43.0.12
with:
configurationFile: .github/configs/renovate-config.js
token: '${{ steps.get_token.outputs.token }}'

View File

@@ -30,12 +30,12 @@ jobs:
steps:
- name: "Checkout code"
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a # v2.4.3
uses: ossf/scorecard-action@05b42c624433fc40578a4040d5cf5e36ddca8cde # v2.4.2
with:
results_file: results.sarif
results_format: sarif
@@ -54,7 +54,7 @@ jobs:
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: SARIF file
path: results.sarif

View File

@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
- name: Build reports

.gitignore
View File

@@ -20,7 +20,6 @@ node_modules/
.kube/
./test/cmp/*.sock
.envrc.remote
.mirrord/
.*.swp
rerunreport.txt

View File

@@ -22,7 +22,6 @@ linters:
- govet
- importas
- misspell
- noctx
- perfsprint
- revive
- staticcheck

View File

@@ -49,14 +49,13 @@ archives:
- argocd-cli
name_template: |-
{{ .ProjectName }}-{{ .Os }}-{{ .Arch }}
formats: [binary]
formats: [ binary ]
checksum:
name_template: 'cli_checksums.txt'
algorithm: sha256
release:
make_latest: '{{ .Env.GORELEASER_MAKE_LATEST }}'
prerelease: auto
draft: false
header: |

View File

@@ -1,5 +1,11 @@
dir: '{{.InterfaceDir}}/mocks'
structname: '{{.InterfaceName}}'
filename: '{{.InterfaceName}}.go'
pkgname: mocks
template-data:
unroll-variadic: true
packages:
github.com/argoproj/argo-cd/v3/applicationset/generators:
interfaces:
@@ -8,15 +14,17 @@ packages:
interfaces:
Repos: {}
github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider:
config:
dir: applicationset/services/scm_provider/aws_codecommit/mocks
interfaces:
AWSCodeCommitClient: {}
AWSTaggingClient: {}
AzureDevOpsClientFactory: {}
github.com/argoproj/argo-cd/v3/applicationset/utils:
interfaces:
Renderer: {}
github.com/argoproj/argo-cd/v3/commitserver/apiclient:
interfaces:
Clientset: {}
CommitServiceClient: {}
github.com/argoproj/argo-cd/v3/commitserver/commit:
interfaces:
@@ -27,7 +35,6 @@ packages:
github.com/argoproj/argo-cd/v3/controller/hydrator:
interfaces:
Dependencies: {}
RepoGetter: {}
github.com/argoproj/argo-cd/v3/pkg/apiclient/cluster:
interfaces:
ClusterServiceServer: {}
@@ -40,8 +47,8 @@ packages:
AppProjectInterface: {}
github.com/argoproj/argo-cd/v3/reposerver/apiclient:
interfaces:
RepoServerServiceClient: {}
RepoServerService_GenerateManifestWithFilesClient: {}
RepoServerServiceClient: {}
github.com/argoproj/argo-cd/v3/server/application:
interfaces:
Broadcaster: {}
@@ -56,37 +63,26 @@ packages:
github.com/argoproj/argo-cd/v3/util/db:
interfaces:
ArgoDB: {}
RepoCredsDB: {}
github.com/argoproj/argo-cd/v3/util/git:
interfaces:
Client: {}
github.com/argoproj/argo-cd/v3/util/helm:
interfaces:
Client: {}
github.com/argoproj/argo-cd/v3/util/oci:
interfaces:
Client: {}
github.com/argoproj/argo-cd/v3/util/io:
interfaces:
TempPaths: {}
github.com/argoproj/argo-cd/v3/util/notification/argocd:
interfaces:
Service: {}
github.com/argoproj/argo-cd/v3/util/oci:
interfaces:
Client: {}
github.com/argoproj/argo-cd/v3/util/workloadidentity:
interfaces:
TokenProvider: {}
github.com/argoproj/gitops-engine/pkg/cache:
interfaces:
ClusterCache: {}
github.com/argoproj/gitops-engine/pkg/diff:
interfaces:
ServerSideDryRunner: {}
github.com/microsoft/azure-devops-go-api/azuredevops/v7/git:
config:
dir: applicationset/services/scm_provider/azure_devops/git/mocks
interfaces:
Client: {}
pkgname: mocks
structname: '{{.InterfaceName}}'
template-data:
unroll-variadic: true

View File

@@ -16,5 +16,4 @@
# CLI
/cmd/argocd/** @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
/cmd/main.go @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
# Also include @argoproj/argocd-approvers-docs to avoid requiring CLI approvers for docs-only PRs.
/docs/operator-manual/ @argoproj/argocd-approvers @argoproj/argocd-approvers-docs @argoproj/argocd-approvers-cli
/docs/operator-manual/ @argoproj/argocd-approvers @argoproj/argocd-approvers-cli

View File

@@ -1,10 +1,10 @@
ARG BASE_IMAGE=docker.io/library/ubuntu:25.04@sha256:27771fb7b40a58237c98e8d3e6b9ecdd9289cec69a857fccfb85ff36294dac20
ARG BASE_IMAGE=docker.io/library/ubuntu:25.04@sha256:10bb10bb062de665d4dc3e0ea36715270ead632cfcb74d08ca2273712a0dfb42
####################################################################################################
# Builder image
# Initial stage which pulls and prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.25.3@sha256:6d4e5e74f47db00f7f24da5f53c1b4198ae46862a47395e30477365458347bf2 AS builder
FROM docker.io/library/golang:1.25.1@sha256:8305f5fa8ea63c7b5bc85bd223ccc62941f852318ebfbd22f53bbd0b358c07e1 AS builder
WORKDIR /tmp
@@ -85,7 +85,7 @@ WORKDIR /home/argocd
####################################################################################################
# Argo CD UI stage
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/node:23.0.0@sha256:e643c0b70dca9704dff42e12b17f5b719dbe4f95e6392fc2dfa0c5f02ea8044d AS argocd-ui
FROM --platform=$BUILDPLATFORM docker.io/library/node:23.11.1@sha256:9a25b5a6f9a90218b73a62205f111e71de5e4289aee952b4dd7e86f7498f2544 AS argocd-ui
WORKDIR /src
COPY ["ui/package.json", "ui/yarn.lock", "./"]
@@ -103,13 +103,11 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.3@sha256:6d4e5e74f47db00f7f24da5f53c1b4198ae46862a47395e30477365458347bf2 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.1@sha256:8305f5fa8ea63c7b5bc85bd223ccc62941f852318ebfbd22f53bbd0b358c07e1 AS argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd
COPY go.* ./
RUN mkdir -p gitops-engine
COPY gitops-engine/go.* ./gitops-engine
RUN go mod download
# Perform the build
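
The base images in this Dockerfile are pinned by digest (`image:tag@sha256:...`), so a tag bump always comes with a new digest. If you need to find the digest for a given tag yourself, a minimal sketch:

```sh
# Print the manifest-list digest for a tag; this is the sha256 that
# follows the "@" in the FROM lines above.
docker buildx imagetools inspect docker.io/library/golang:1.25.1
```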

View File

@@ -1,4 +1,4 @@
FROM docker.io/library/golang:1.25.3@sha256:6d4e5e74f47db00f7f24da5f53c1b4198ae46862a47395e30477365458347bf2
FROM docker.io/library/golang:1.25.1@sha256:8305f5fa8ea63c7b5bc85bd223ccc62941f852318ebfbd22f53bbd0b358c07e1
ENV DEBIAN_FRONTEND=noninteractive

View File

@@ -1,4 +1,4 @@
FROM node:25
FROM node:20
WORKDIR /app/ui

View File

@@ -261,12 +261,8 @@ clidocsgen:
actionsdocsgen:
hack/generate-actions-list.sh
.PHONY: resourceiconsgen
resourceiconsgen:
hack/generate-icons-typescript.sh
.PHONY: codegen-local
codegen-local: mod-vendor-local mockgen gogen protogen clientgen openapigen clidocsgen actionsdocsgen resourceiconsgen manifests-local notification-docs notification-catalog
codegen-local: mod-vendor-local mockgen gogen protogen clientgen openapigen clidocsgen actionsdocsgen manifests-local notification-docs notification-catalog
rm -rf vendor/
.PHONY: codegen-local-fast

View File

@@ -9,6 +9,6 @@ ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
git-server: test/fixture/testrepos/start-git.sh
helm-registry: test/fixture/testrepos/start-helm-registry.sh
oci-registry: test/fixture/testrepos/start-authenticated-helm-registry.sh
dev-mounter: [ "$ARGOCD_E2E_TEST" != "true" ] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
dev-mounter: [[ "$ARGOCD_E2E_TEST" != "true" ]] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
applicationset-controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/applicationset-controller} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
notification: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/notification} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_BINARY_NAME=argocd-notifications $COMMAND --loglevel debug --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --self-service-notification-enabled=${ARGOCD_NOTIFICATION_CONTROLLER_SELF_SERVICE_NOTIFICATION_ENABLED:-'false'}"
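
The `dev-mounter` entry differs only in its test syntax: `[ ... ]` is the POSIX `test` builtin, while `[[ ... ]]` is a bash/zsh extension. Since the other entries in this Procfile explicitly run under `sh -c`, the POSIX form is the portable choice; a quick illustration of the difference:

```sh
# POSIX sh guarantees only `[` (test); `[[` is a bash extension.
sh -c '[ "$ARGOCD_E2E_TEST" != "true" ] && echo portable'
sh -c '[[ "$ARGOCD_E2E_TEST" != "true" ]] && echo bash-only'
# On a strictly POSIX sh (e.g. dash) the second line fails: "[[: not found"
```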

View File

@@ -3,9 +3,9 @@ header:
expiration-date: '2024-10-31T00:00:00.000Z' # One year from initial release.
last-updated: '2023-10-27'
last-reviewed: '2023-10-27'
commit-hash: 06ef059f9fc7cf9da2dfaef2a505ee1e3c693485
commit-hash: 320f46f06beaf75f9c406e3a47e2e09d36e2047a
project-url: https://github.com/argoproj/argo-cd
project-release: v3.3.0
project-release: v3.2.0
changelog: https://github.com/argoproj/argo-cd/releases
license: https://github.com/argoproj/argo-cd/blob/master/LICENSE
project-lifecycle:

View File

@@ -123,7 +123,6 @@ k8s_resource(
'9345:2345',
'8083:8083'
],
resource_deps=['build']
)
# track crds
@@ -149,7 +148,6 @@ k8s_resource(
'9346:2345',
'8084:8084'
],
resource_deps=['build']
)
# track argocd-redis resources and port forward
@@ -164,7 +162,6 @@ k8s_resource(
port_forwards=[
'6379:6379',
],
resource_deps=['build']
)
# track argocd-applicationset-controller resources
@@ -183,7 +180,6 @@ k8s_resource(
'8085:8080',
'7000:7000'
],
resource_deps=['build']
)
# track argocd-application-controller resources
@@ -201,7 +197,6 @@ k8s_resource(
'9348:2345',
'8086:8082',
],
resource_deps=['build']
)
# track argocd-notifications-controller resources
@@ -219,7 +214,6 @@ k8s_resource(
'9349:2345',
'8087:9001',
],
resource_deps=['build']
)
# track argocd-dex-server resources
@@ -231,7 +225,6 @@ k8s_resource(
'argocd-dex-server:role',
'argocd-dex-server:rolebinding',
],
resource_deps=['build']
)
# track argocd-commit-server resources
@@ -246,19 +239,6 @@ k8s_resource(
'8088:8087',
'8089:8086',
],
resource_deps=['build']
)
# ui dependencies
local_resource(
'node-modules',
'yarn',
dir='ui',
deps = [
'ui/package.json',
'ui/yarn.lock',
],
allow_parallel=True,
)
# docker for ui
@@ -280,7 +260,6 @@ k8s_resource(
port_forwards=[
'4000:4000',
],
resource_deps=['node-modules'],
)
# linting
@@ -299,7 +278,6 @@ local_resource(
'ui',
],
allow_parallel=True,
resource_deps=['node-modules'],
)
local_resource(
@@ -309,6 +287,5 @@ local_resource(
'go.mod',
'go.sum',
],
allow_parallel=True,
)

View File

@@ -63,7 +63,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Camptocamp](https://camptocamp.com)
1. [Candis](https://www.candis.io)
1. [Capital One](https://www.capitalone.com)
1. [Capptain LTD](https://capptain.co/)
1. [CARFAX Europe](https://www.carfax.eu)
1. [CARFAX](https://www.carfax.com)
1. [Carrefour Group](https://www.carrefour.com)
@@ -310,7 +309,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Relex Solutions](https://www.relexsolutions.com/)
1. [RightRev](https://rightrev.com/)
1. [Rijkswaterstaat](https://www.rijkswaterstaat.nl/en)
1. Rise
1. [Rise](https://www.risecard.eu/)
1. [Riskified](https://www.riskified.com/)
1. [Robotinfra](https://www.robotinfra.com)
1. [Rocket.Chat](https://rocket.chat)

View File

@@ -1 +1 @@
3.3.0
3.2.0

View File

@@ -103,7 +103,6 @@ type ApplicationSetReconciler struct {
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -248,16 +247,6 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, fmt.Errorf("failed to perform progressive sync reconciliation for application set: %w", err)
}
}
} else {
// Progressive Sync is disabled, clear any existing applicationStatus to prevent stale data
if len(applicationSetInfo.Status.ApplicationStatus) > 0 {
logCtx.Infof("Progressive Sync disabled, removing %v AppStatus entries from ApplicationSet %v", len(applicationSetInfo.Status.ApplicationStatus), applicationSetInfo.Name)
err := r.setAppSetApplicationStatus(ctx, logCtx, &applicationSetInfo, []argov1alpha1.ApplicationSetApplicationStatus{})
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to clear AppSet application statuses when Progressive Sync is disabled for %v: %w", applicationSetInfo.Name, err)
}
}
}
if len(validateErrors) > 0 {
@@ -1445,13 +1434,7 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
sort.Slice(statuses, func(i, j int) bool {
return statuses[i].Name < statuses[j].Name
})
resourcesCount := int64(len(statuses))
if r.MaxResourcesStatusCount > 0 && len(statuses) > r.MaxResourcesStatusCount {
logCtx.Warnf("Truncating ApplicationSet %s resource status from %d to max allowed %d entries", appset.Name, len(statuses), r.MaxResourcesStatusCount)
statuses = statuses[:r.MaxResourcesStatusCount]
}
appset.Status.Resources = statuses
appset.Status.ResourcesCount = resourcesCount
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
@@ -1464,7 +1447,6 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
}
updatedAppset.Status.Resources = appset.Status.Resources
updatedAppset.Status.ResourcesCount = resourcesCount
// Update the newly fetched object with new status resources
err := r.Client.Status().Update(ctx, updatedAppset)

View File

@@ -1832,16 +1832,16 @@ func TestGetMinRequeueAfter(t *testing.T) {
Clusters: &v1alpha1.ClusterGenerator{},
}
generatorMock0 := &mocks.Generator{}
generatorMock0.EXPECT().GetRequeueAfter(&generator).
generatorMock0 := mocks.Generator{}
generatorMock0.On("GetRequeueAfter", &generator).
Return(generators.NoRequeueAfter)
generatorMock1 := &mocks.Generator{}
generatorMock1.EXPECT().GetRequeueAfter(&generator).
generatorMock1 := mocks.Generator{}
generatorMock1.On("GetRequeueAfter", &generator).
Return(time.Duration(1) * time.Second)
generatorMock10 := &mocks.Generator{}
generatorMock10.EXPECT().GetRequeueAfter(&generator).
generatorMock10 := mocks.Generator{}
generatorMock10.On("GetRequeueAfter", &generator).
Return(time.Duration(10) * time.Second)
r := ApplicationSetReconciler{
@@ -1850,9 +1850,9 @@ func TestGetMinRequeueAfter(t *testing.T) {
Recorder: record.NewFakeRecorder(0),
Metrics: metrics,
Generators: map[string]generators.Generator{
"List": generatorMock10,
"Git": generatorMock1,
"Clusters": generatorMock1,
"List": &generatorMock10,
"Git": &generatorMock1,
"Clusters": &generatorMock1,
},
}
@@ -1889,10 +1889,10 @@ func TestRequeueGeneratorFails(t *testing.T) {
PullRequest: &v1alpha1.PullRequestGenerator{},
}
generatorMock := &mocks.Generator{}
generatorMock.EXPECT().GetTemplate(&generator).
generatorMock := mocks.Generator{}
generatorMock.On("GetTemplate", &generator).
Return(&v1alpha1.ApplicationSetTemplate{})
generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
Return([]map[string]any{}, errors.New("Simulated error generating params that could be related to an external service/API call"))
metrics := appsetmetrics.NewFakeAppsetMetrics()
@@ -1902,7 +1902,7 @@ func TestRequeueGeneratorFails(t *testing.T) {
Scheme: scheme,
Recorder: record.NewFakeRecorder(0),
Generators: map[string]generators.Generator{
"PullRequest": generatorMock,
"PullRequest": &generatorMock,
},
Metrics: metrics,
}
@@ -5967,11 +5967,10 @@ func TestUpdateResourceStatus(t *testing.T) {
require.NoError(t, err)
for _, cc := range []struct {
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
maxResourcesStatusCount int
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
}{
{
name: "handles an empty application list",
@@ -6135,73 +6134,6 @@ func TestUpdateResourceStatus(t *testing.T) {
apps: []v1alpha1.Application{},
expectedResources: nil,
},
{
name: "truncates resources status list to",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "argocd",
},
Status: v1alpha1.ApplicationSetStatus{
Resources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
{
Name: "app2",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
},
},
},
apps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.AppHealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Name: "app2",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.AppHealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
},
expectedResources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeSynced,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
maxResourcesStatusCount: 1,
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
@@ -6212,14 +6144,13 @@ func TestUpdateResourceStatus(t *testing.T) {
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
MaxResourcesStatusCount: cc.maxResourcesStatusCount,
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
}
err := r.updateResourcesStatus(t.Context(), log.NewEntry(log.StandardLogger()), &cc.appSet, cc.apps)
@@ -7276,82 +7207,3 @@ func TestIsRollingSyncDeletionReversed(t *testing.T) {
})
}
}
func TestReconcileProgressiveSyncDisabled(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
for _, cc := range []struct {
name string
appSet v1alpha1.ApplicationSet
enableProgressiveSyncs bool
expectedAppStatuses []v1alpha1.ApplicationSetApplicationStatus
}{
{
name: "clears applicationStatus when Progressive Sync is disabled",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "test-appset",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSetSpec{
Generators: []v1alpha1.ApplicationSetGenerator{},
Template: v1alpha1.ApplicationSetTemplate{},
},
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "test-appset-guestbook",
Message: "Application resource became Healthy, updating status from Progressing to Healthy.",
Status: "Healthy",
Step: "1",
},
},
},
},
enableProgressiveSyncs: false,
expectedAppStatuses: nil,
},
} {
t.Run(cc.name, func(t *testing.T) {
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&cc.appSet).WithStatusSubresource(&cc.appSet).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Renderer: &utils.Render{},
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
EnableProgressiveSyncs: cc.enableProgressiveSyncs,
}
req := ctrl.Request{
NamespacedName: types.NamespacedName{
Namespace: cc.appSet.Namespace,
Name: cc.appSet.Name,
},
}
// Run reconciliation
_, err = r.Reconcile(t.Context(), req)
require.NoError(t, err)
// Fetch the updated ApplicationSet
var updatedAppSet v1alpha1.ApplicationSet
err = r.Get(t.Context(), req.NamespacedName, &updatedAppSet)
require.NoError(t, err)
// Verify the applicationStatus field
assert.Equal(t, cc.expectedAppStatuses, updatedAppSet.Status.ApplicationStatus, "applicationStatus should match expected value")
})
}
}

View File

@@ -86,28 +86,28 @@ func TestGenerateApplications(t *testing.T) {
}
t.Run(cc.name, func(t *testing.T) {
generatorMock := &genmock.Generator{}
generatorMock := genmock.Generator{}
generator := v1alpha1.ApplicationSetGenerator{
List: &v1alpha1.ListGenerator{},
}
generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
Return(cc.params, cc.generateParamsError)
generatorMock.EXPECT().GetTemplate(&generator).
generatorMock.On("GetTemplate", &generator).
Return(&v1alpha1.ApplicationSetTemplate{})
rendererMock := &rendmock.Renderer{}
rendererMock := rendmock.Renderer{}
var expectedApps []v1alpha1.Application
if cc.generateParamsError == nil {
for _, p := range cc.params {
if cc.rendererError != nil {
rendererMock.EXPECT().RenderTemplateParams(GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
rendererMock.On("RenderTemplateParams", GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
Return(nil, cc.rendererError)
} else {
rendererMock.EXPECT().RenderTemplateParams(GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
rendererMock.On("RenderTemplateParams", GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
Return(&app, nil)
expectedApps = append(expectedApps, app)
}
@@ -115,9 +115,9 @@ func TestGenerateApplications(t *testing.T) {
}
generators := map[string]generators.Generator{
"List": generatorMock,
"List": &generatorMock,
}
renderer := rendererMock
renderer := &rendererMock
got, reason, err := GenerateApplications(log.NewEntry(log.StandardLogger()), v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -200,26 +200,26 @@ func TestMergeTemplateApplications(t *testing.T) {
cc := c
t.Run(cc.name, func(t *testing.T) {
generatorMock := &genmock.Generator{}
generatorMock := genmock.Generator{}
generator := v1alpha1.ApplicationSetGenerator{
List: &v1alpha1.ListGenerator{},
}
generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
Return(cc.params, nil)
generatorMock.EXPECT().GetTemplate(&generator).
generatorMock.On("GetTemplate", &generator).
Return(&cc.overrideTemplate)
rendererMock := &rendmock.Renderer{}
rendererMock := rendmock.Renderer{}
rendererMock.EXPECT().RenderTemplateParams(GetTempApplication(cc.expectedMerged), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), cc.params[0], false, []string(nil)).
rendererMock.On("RenderTemplateParams", GetTempApplication(cc.expectedMerged), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), cc.params[0], false, []string(nil)).
Return(&cc.expectedApps[0], nil)
generators := map[string]generators.Generator{
"List": generatorMock,
"List": &generatorMock,
}
renderer := rendererMock
renderer := &rendererMock
got, _, _ := GenerateApplications(log.NewEntry(log.StandardLogger()), v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -312,19 +312,19 @@ func TestGenerateAppsUsingPullRequestGenerator(t *testing.T) {
},
} {
t.Run(cases.name, func(t *testing.T) {
generatorMock := &genmock.Generator{}
generatorMock := genmock.Generator{}
generator := v1alpha1.ApplicationSetGenerator{
PullRequest: &v1alpha1.PullRequestGenerator{},
}
generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
Return(cases.params, nil)
generatorMock.EXPECT().GetTemplate(&generator).
Return(&cases.template)
generatorMock.On("GetTemplate", &generator).
Return(&cases.template, nil)
generators := map[string]generators.Generator{
"PullRequest": generatorMock,
"PullRequest": &generatorMock,
}
renderer := &utils.Render{}

View File

@@ -223,7 +223,7 @@ func TestTransForm(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
testGenerators := map[string]Generator{
"Clusters": getMockClusterGenerator(t.Context()),
"Clusters": getMockClusterGenerator(),
}
applicationSetInfo := argov1alpha1.ApplicationSet{
@@ -260,7 +260,7 @@ func emptyTemplate() argov1alpha1.ApplicationSetTemplate {
}
}
func getMockClusterGenerator(ctx context.Context) Generator {
func getMockClusterGenerator() Generator {
clusters := []crtclient.Object{
&corev1.Secret{
TypeMeta: metav1.TypeMeta{
@@ -342,19 +342,19 @@ func getMockClusterGenerator(ctx context.Context) Generator {
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
fakeClient := fake.NewClientBuilder().WithObjects(clusters...).Build()
return NewClusterGenerator(ctx, fakeClient, appClientset, "namespace")
return NewClusterGenerator(context.Background(), fakeClient, appClientset, "namespace")
}
func getMockGitGenerator() Generator {
argoCDServiceMock := &mocks.Repos{}
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
gitGenerator := NewGitGenerator(argoCDServiceMock, "namespace")
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "namespace")
return gitGenerator
}
func TestGetRelevantGenerators(t *testing.T) {
testGenerators := map[string]Generator{
"Clusters": getMockClusterGenerator(t.Context()),
"Clusters": getMockClusterGenerator(),
"Git": getMockGitGenerator(),
}

View File

@@ -320,11 +320,11 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -357,6 +357,8 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -621,11 +623,11 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -658,6 +660,8 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -996,11 +1000,11 @@ cluster:
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1032,6 +1036,8 @@ cluster:
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -1325,7 +1331,7 @@ env: testing
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock := mocks.Repos{}
// IMPORTANT: we try to get the files from the repo server that match the patterns.
// If we find that those files also satisfy the exclude pattern, we remove them from the map.
@@ -1333,16 +1339,18 @@ env: testing
// With the below mock setup, we make sure that if the GetFiles() function gets called
// for an include or exclude pattern, it always returns the includeFiles or excludeFiles.
for _, pattern := range testCaseCopy.excludePattern {
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.excludeFiles, testCaseCopy.repoPathsError)
}
for _, pattern := range testCaseCopy.includePattern {
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.includeFiles, testCaseCopy.repoPathsError)
}
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1374,6 +1382,8 @@ env: testing
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -1662,7 +1672,7 @@ env: testing
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock := mocks.Repos{}
// IMPORTANT: we try to get the files from the repo server that match the patterns.
// If we find that those files also satisfy the exclude pattern, we remove them from the map.
@@ -1670,16 +1680,18 @@ env: testing
// With the below mock setup, we make sure that if the GetFiles() function gets called
// for an include or exclude pattern, it always returns the includeFiles or excludeFiles.
for _, pattern := range testCaseCopy.excludePattern {
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.excludeFiles, testCaseCopy.repoPathsError)
}
for _, pattern := range testCaseCopy.includePattern {
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.includeFiles, testCaseCopy.repoPathsError)
}
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1711,6 +1723,8 @@ env: testing
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -1894,23 +1908,25 @@ func TestGitGeneratorParamsFromFilesWithExcludeOptionGoTemplate(t *testing.T) {
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock := mocks.Repos{}
// IMPORTANT: we try to get the files from the repo server that match the patterns.
// If we find that those files also satisfy the exclude pattern, we remove them from the map.
// This is generally done by the g.repos.GetFiles() function.
// With the below mock setup, we make sure that if the GetFiles() function gets called
// for an include or exclude pattern, it always returns the includeFiles or excludeFiles.
for _, pattern := range testCaseCopy.excludePattern {
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.excludeFiles, testCaseCopy.repoPathsError)
}
for _, pattern := range testCaseCopy.includePattern {
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.includeFiles, testCaseCopy.repoPathsError)
}
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1942,6 +1958,8 @@ func TestGitGeneratorParamsFromFilesWithExcludeOptionGoTemplate(t *testing.T) {
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
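
The per-pattern loops above work because testify matches each call against registered expectations by argument value: `mock.Anything` matches anything, while the concrete `pattern` string pins an expectation to one include or exclude pattern. A reduced sketch; the four-argument `GetFiles` below is illustrative, the real interface takes more parameters:

```go
package generators_test

import (
	"context"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
)

// fileRepos is an illustrative stand-in; the real GetFiles takes more parameters.
type fileRepos struct{ mock.Mock }

func (r *fileRepos) GetFiles(ctx context.Context, repoURL, revision, pattern string) (map[string][]byte, error) {
	args := r.Called(ctx, repoURL, revision, pattern)
	return args.Get(0).(map[string][]byte), args.Error(1)
}

func TestPerPatternDispatch(t *testing.T) {
	m := &fileRepos{}
	// One expectation per pattern: the concrete final argument decides which
	// fixture a given GetFiles call returns; mock.Anything matches the rest.
	m.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, "include/*.yaml").
		Return(map[string][]byte{"include/a.yaml": []byte("a: 1")}, nil)
	m.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, "exclude/*.yaml").
		Return(map[string][]byte{"exclude/b.yaml": []byte("b: 2")}, nil)

	inc, _ := m.GetFiles(context.Background(), "repo", "HEAD", "include/*.yaml")
	exc, _ := m.GetFiles(context.Background(), "repo", "HEAD", "exclude/*.yaml")
	assert.Contains(t, inc, "include/a.yaml")
	assert.Contains(t, exc, "exclude/b.yaml")
}
```
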
@@ -2261,11 +2279,11 @@ cluster:
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -2297,6 +2315,8 @@ cluster:
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -2462,7 +2482,7 @@ func TestGitGenerator_GenerateParams(t *testing.T) {
},
}
for _, testCase := range cases {
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock := mocks.Repos{}
if testCase.callGetDirectories {
var project any
@@ -2472,9 +2492,9 @@ func TestGitGenerator_GenerateParams(t *testing.T) {
project = mock.Anything
}
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, project, mock.Anything, mock.Anything).Return(testCase.repoApps, testCase.repoPathsError)
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, project, mock.Anything, mock.Anything).Return(testCase.repoApps, testCase.repoPathsError)
}
gitGenerator := NewGitGenerator(argoCDServiceMock, "argocd")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "argocd")
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
@@ -2490,5 +2510,7 @@ func TestGitGenerator_GenerateParams(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, testCase.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
}
}


@@ -12,12 +12,12 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"github.com/argoproj/argo-cd/v3/applicationset/services/mocks"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
generatorsMock "github.com/argoproj/argo-cd/v3/applicationset/generators/mocks"
servicesMocks "github.com/argoproj/argo-cd/v3/applicationset/services/mocks"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
@@ -136,7 +136,7 @@ func TestMatrixGenerate(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorsMock.Generator{}
genMock := &generatorMock{}
appSet := &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -149,7 +149,7 @@ func TestMatrixGenerate(t *testing.T) {
Git: g.Git,
List: g.List,
}
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
{
"path": "app1",
"path.basename": "app1",
@@ -162,7 +162,7 @@ func TestMatrixGenerate(t *testing.T) {
},
}, nil)
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
genMock.On("GetTemplate", &gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
@@ -343,7 +343,7 @@ func TestMatrixGenerateGoTemplate(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorsMock.Generator{}
genMock := &generatorMock{}
appSet := &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -358,7 +358,7 @@ func TestMatrixGenerateGoTemplate(t *testing.T) {
Git: g.Git,
List: g.List,
}
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
{
"path": map[string]string{
"path": "app1",
@@ -375,7 +375,7 @@ func TestMatrixGenerateGoTemplate(t *testing.T) {
},
}, nil)
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
genMock.On("GetTemplate", &gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
@@ -507,7 +507,7 @@ func TestMatrixGetRequeueAfter(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
mock := &generatorsMock.Generator{}
mock := &generatorMock{}
for _, g := range testCaseCopy.baseGenerators {
gitGeneratorSpec := v1alpha1.ApplicationSetGenerator{
@@ -517,7 +517,7 @@ func TestMatrixGetRequeueAfter(t *testing.T) {
SCMProvider: g.SCMProvider,
ClusterDecisionResource: g.ClusterDecisionResource,
}
mock.EXPECT().GetRequeueAfter(&gitGeneratorSpec).Return(testCaseCopy.gitGetRequeueAfter)
mock.On("GetRequeueAfter", &gitGeneratorSpec).Return(testCaseCopy.gitGetRequeueAfter, nil)
}
matrixGenerator := NewMatrixGenerator(
@@ -634,7 +634,7 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorsMock.Generator{}
genMock := &generatorMock{}
appSet := &v1alpha1.ApplicationSet{}
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
@@ -650,7 +650,7 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
Git: g.Git,
Clusters: g.Clusters,
}
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{
{
"path": "examples/git-generator-files-discovery/cluster-config/dev/config.json",
"path.basename": "dev",
@@ -662,7 +662,7 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
"path.basenameNormalized": "prod",
},
}, nil)
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
genMock.On("GetTemplate", &gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
matrixGenerator := NewMatrixGenerator(
@@ -813,7 +813,7 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorsMock.Generator{}
genMock := &generatorMock{}
appSet := &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
GoTemplate: true,
@@ -833,7 +833,7 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
Git: g.Git,
Clusters: g.Clusters,
}
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{
{
"path": map[string]string{
"path": "examples/git-generator-files-discovery/cluster-config/dev/config.json",
@@ -849,7 +849,7 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
},
},
}, nil)
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
genMock.On("GetTemplate", &gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
matrixGenerator := NewMatrixGenerator(
@@ -969,7 +969,7 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorsMock.Generator{}
genMock := &generatorMock{}
appSet := &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -984,7 +984,7 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
Git: g.Git,
List: g.List,
}
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{{
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{{
"foo": map[string]any{
"bar": []any{
map[string]any{
@@ -1009,7 +1009,7 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
},
},
}}, nil)
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
genMock.On("GetTemplate", &gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
@@ -1037,6 +1037,28 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
}
}
type generatorMock struct {
mock.Mock
}
func (g *generatorMock) GetTemplate(appSetGenerator *v1alpha1.ApplicationSetGenerator) *v1alpha1.ApplicationSetTemplate {
args := g.Called(appSetGenerator)
return args.Get(0).(*v1alpha1.ApplicationSetTemplate)
}
func (g *generatorMock) GenerateParams(appSetGenerator *v1alpha1.ApplicationSetGenerator, appSet *v1alpha1.ApplicationSet, _ client.Client) ([]map[string]any, error) {
args := g.Called(appSetGenerator, appSet)
return args.Get(0).([]map[string]any), args.Error(1)
}
func (g *generatorMock) GetRequeueAfter(appSetGenerator *v1alpha1.ApplicationSetGenerator) time.Duration {
args := g.Called(appSetGenerator)
return args.Get(0).(time.Duration)
}
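
One subtlety of this hand-rolled mock: `GenerateParams` forwards only two of its three arguments to `Called`, so testify sees two-argument calls, and an expectation that lists a third matcher cannot match. A usage sketch consistent with the definition above, assuming the surrounding file's imports:

```go
package generators

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)

func TestGeneratorMockUsage(t *testing.T) {
	genMock := &generatorMock{}
	gen := &v1alpha1.ApplicationSetGenerator{}
	appSet := &v1alpha1.ApplicationSet{}

	// Two matchers, because Called(appSetGenerator, appSet) records exactly
	// two arguments; the client.Client parameter never reaches the mock.
	genMock.On("GenerateParams", gen, appSet).
		Return([]map[string]any{{"some": "value"}}, nil)
	genMock.On("GetTemplate", gen).Return(&v1alpha1.ApplicationSetTemplate{})

	params, err := genMock.GenerateParams(gen, appSet, nil)
	require.NoError(t, err)
	assert.Len(t, params, 1)
	_ = genMock.GetTemplate(gen)
	genMock.AssertExpectations(t)
}
```
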
func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
// Given a matrix generator over a list generator and a git files generator, the nested git files generator should
// be treated as a files generator, and it should produce parameters.
@@ -1050,11 +1072,11 @@ func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
// Now instead of checking for nil, we check whether the field is a non-empty slice. This test prevents a regression
// of that bug.
listGeneratorMock := &generatorsMock.Generator{}
listGeneratorMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).Return([]map[string]any{
listGeneratorMock := &generatorMock{}
listGeneratorMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).Return([]map[string]any{
{"some": "value"},
}, nil)
listGeneratorMock.EXPECT().GetTemplate(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator")).Return(&v1alpha1.ApplicationSetTemplate{})
listGeneratorMock.On("GetTemplate", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator")).Return(&v1alpha1.ApplicationSetTemplate{})
gitGeneratorSpec := &v1alpha1.GitGenerator{
RepoURL: "https://git.example.com",
@@ -1063,10 +1085,10 @@ func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
},
}
repoServiceMock := &servicesMocks.Repos{}
repoServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(map[string][]byte{
repoServiceMock := &mocks.Repos{}
repoServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(map[string][]byte{
"some/path.json": []byte("test: content"),
}, nil).Maybe()
}, nil)
gitGenerator := NewGitGenerator(repoServiceMock, "")
matrixGenerator := NewMatrixGenerator(map[string]Generator{


@@ -174,7 +174,7 @@ func TestApplicationsetCollector(t *testing.T) {
appsetCollector := newAppsetCollector(utils.NewAppsetLister(client), collectedLabels, filter)
metrics.Registry.MustRegister(appsetCollector)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
handler := promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{})
@@ -216,7 +216,7 @@ func TestObserveReconcile(t *testing.T) {
appsetMetrics := NewApplicationsetMetrics(utils.NewAppsetLister(client), collectedLabels, filter)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
handler := promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{})
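
These metrics tests swap `http.NewRequestWithContext(t.Context(), ...)` for plain `http.NewRequest`. The two are equivalent except for the context the request carries: `NewRequest` is shorthand for `NewRequestWithContext(context.Background(), ...)`. A small sketch with an illustrative helper name:

```go
package metrics

import (
	"context"
	"net/http"
)

// newMetricsRequest builds the scrape request. http.NewRequest is shorthand
// for http.NewRequestWithContext(context.Background(), ...); passing
// t.Context() instead ties the request's lifetime to the test.
func newMetricsRequest(ctx context.Context) (*http.Request, error) {
	if ctx == nil {
		return http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
	}
	return http.NewRequestWithContext(ctx, http.MethodGet, "/metrics", http.NoBody)
}
```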


@@ -97,9 +97,7 @@ func TestGitHubMetrics_CollectorApproach_Success(t *testing.T) {
),
}
ctx := t.Context()
req, _ := http.NewRequestWithContext(ctx, http.MethodGet, ts.URL+URL, http.NoBody)
req, _ := http.NewRequest(http.MethodGet, ts.URL+URL, http.NoBody)
resp, err := client.Do(req)
if err != nil {
t.Fatalf("unexpected error: %v", err)
@@ -111,11 +109,7 @@ func TestGitHubMetrics_CollectorApproach_Success(t *testing.T) {
server := httptest.NewServer(handler)
defer server.Close()
req, err = http.NewRequestWithContext(ctx, http.MethodGet, server.URL, http.NoBody)
if err != nil {
t.Fatalf("failed to create request: %v", err)
}
resp, err = http.DefaultClient.Do(req)
resp, err = http.Get(server.URL)
if err != nil {
t.Fatalf("failed to scrape metrics: %v", err)
}
@@ -157,23 +151,15 @@ func TestGitHubMetrics_CollectorApproach_NoRateLimitMetricsOnNilResponse(t *test
metrics: metrics,
},
}
ctx := t.Context()
req, err := http.NewRequestWithContext(ctx, http.MethodGet, URL, http.NoBody)
if err != nil {
t.Fatalf("failed to create request: %v", err)
}
req, _ := http.NewRequest(http.MethodGet, URL, http.NoBody)
_, _ = client.Do(req)
handler := promhttp.HandlerFor(reg, promhttp.HandlerOpts{})
server := httptest.NewServer(handler)
defer server.Close()
req, err = http.NewRequestWithContext(ctx, http.MethodGet, server.URL, http.NoBody)
if err != nil {
t.Fatalf("failed to create request: %v", err)
}
resp, err := http.DefaultClient.Do(req)
resp, err := http.Get(server.URL)
if err != nil {
t.Fatalf("failed to scrape metrics: %v", err)
}


@@ -1,6 +1,7 @@
package pull_request
import (
"context"
"errors"
"testing"
@@ -12,7 +13,6 @@ import (
"github.com/stretchr/testify/require"
azureMock "github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/azure_devops/git/mocks"
"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
)
func createBoolPtr(x bool) *bool {
@@ -35,6 +35,29 @@ func createUniqueNamePtr(x string) *string {
return &x
}
type AzureClientFactoryMock struct {
mock *mock.Mock
}
func (m *AzureClientFactoryMock) GetClient(ctx context.Context) (git.Client, error) {
args := m.mock.Called(ctx)
var client git.Client
c := args.Get(0)
if c != nil {
client = c.(git.Client)
}
var err error
if len(args) > 1 {
if e, ok := args.Get(1).(error); ok {
err = e
}
}
return client, err
}
func TestListPullRequest(t *testing.T) {
teamProject := "myorg_project"
repoName := "myorg_project_repo"
@@ -68,10 +91,10 @@ func TestListPullRequest(t *testing.T) {
SearchCriteria: &git.GitPullRequestSearchCriteria{},
}
gitClientMock := &azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
gitClientMock.EXPECT().GetPullRequestsByProject(mock.Anything, args).Return(&pullRequestMock, nil)
gitClientMock := azureMock.Client{}
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
gitClientMock.On("GetPullRequestsByProject", ctx, args).Return(&pullRequestMock, nil)
provider := AzureDevOpsService{
clientFactory: clientFactoryMock,
@@ -222,12 +245,12 @@ func TestAzureDevOpsListReturnsRepositoryNotFoundError(t *testing.T) {
pullRequestMock := []git.GitPullRequest{}
gitClientMock := &azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
gitClientMock := azureMock.Client{}
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
// Mock the GetPullRequestsByProject to return an error containing "404"
gitClientMock.EXPECT().GetPullRequestsByProject(mock.Anything, args).Return(&pullRequestMock,
gitClientMock.On("GetPullRequestsByProject", t.Context(), args).Return(&pullRequestMock,
errors.New("The following project does not exist:"))
provider := AzureDevOpsService{


@@ -12,7 +12,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/aws_codecommit/mocks"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
@@ -177,8 +177,9 @@ func TestAWSCodeCommitListRepos(t *testing.T) {
if repo.getRepositoryNilMetadata {
repoMetadata = nil
}
codeCommitClient.EXPECT().GetRepositoryWithContext(mock.Anything, &codecommit.GetRepositoryInput{RepositoryName: aws.String(repo.name)}).
Return(&codecommit.GetRepositoryOutput{RepositoryMetadata: repoMetadata}, repo.getRepositoryError).Maybe()
codeCommitClient.
On("GetRepositoryWithContext", ctx, &codecommit.GetRepositoryInput{RepositoryName: aws.String(repo.name)}).
Return(&codecommit.GetRepositoryOutput{RepositoryMetadata: repoMetadata}, repo.getRepositoryError)
codecommitRepoNameIdPairs = append(codecommitRepoNameIdPairs, &codecommit.RepositoryNameIdPair{
RepositoryId: aws.String(repo.id),
RepositoryName: aws.String(repo.name),
@@ -192,18 +193,20 @@ func TestAWSCodeCommitListRepos(t *testing.T) {
}
if testCase.expectListAtCodeCommit {
codeCommitClient.EXPECT().ListRepositoriesWithContext(mock.Anything, &codecommit.ListRepositoriesInput{}).
codeCommitClient.
On("ListRepositoriesWithContext", ctx, &codecommit.ListRepositoriesInput{}).
Return(&codecommit.ListRepositoriesOutput{
Repositories: codecommitRepoNameIdPairs,
}, testCase.listRepositoryError).Maybe()
}, testCase.listRepositoryError)
} else {
taggingClient.EXPECT().GetResourcesWithContext(mock.Anything, mock.MatchedBy(equalIgnoringTagFilterOrder(&resourcegroupstaggingapi.GetResourcesInput{
TagFilters: testCase.expectTagFilters,
ResourceTypeFilters: aws.StringSlice([]string{resourceTypeCodeCommitRepository}),
}))).
taggingClient.
On("GetResourcesWithContext", ctx, mock.MatchedBy(equalIgnoringTagFilterOrder(&resourcegroupstaggingapi.GetResourcesInput{
TagFilters: testCase.expectTagFilters,
ResourceTypeFilters: aws.StringSlice([]string{resourceTypeCodeCommitRepository}),
}))).
Return(&resourcegroupstaggingapi.GetResourcesOutput{
ResourceTagMappingList: resourceTaggings,
}, testCase.listRepositoryError).Maybe()
}, testCase.listRepositoryError)
}
provider := &AWSCodeCommitProvider{
@@ -347,12 +350,13 @@ func TestAWSCodeCommitRepoHasPath(t *testing.T) {
taggingClient := mocks.NewAWSTaggingClient(t)
ctx := t.Context()
if testCase.expectedGetFolderPath != "" {
codeCommitClient.EXPECT().GetFolderWithContext(mock.Anything, &codecommit.GetFolderInput{
CommitSpecifier: aws.String(branch),
FolderPath: aws.String(testCase.expectedGetFolderPath),
RepositoryName: aws.String(repoName),
}).
Return(testCase.getFolderOutput, testCase.getFolderError).Maybe()
codeCommitClient.
On("GetFolderWithContext", ctx, &codecommit.GetFolderInput{
CommitSpecifier: aws.String(branch),
FolderPath: aws.String(testCase.expectedGetFolderPath),
RepositoryName: aws.String(repoName),
}).
Return(testCase.getFolderOutput, testCase.getFolderError)
}
provider := &AWSCodeCommitProvider{
codeCommitClient: codeCommitClient,
@@ -419,16 +423,18 @@ func TestAWSCodeCommitGetBranches(t *testing.T) {
taggingClient := mocks.NewAWSTaggingClient(t)
ctx := t.Context()
if testCase.allBranches {
codeCommitClient.EXPECT().ListBranchesWithContext(mock.Anything, &codecommit.ListBranchesInput{
RepositoryName: aws.String(name),
}).
Return(&codecommit.ListBranchesOutput{Branches: aws.StringSlice(testCase.branches)}, testCase.apiError).Maybe()
codeCommitClient.
On("ListBranchesWithContext", ctx, &codecommit.ListBranchesInput{
RepositoryName: aws.String(name),
}).
Return(&codecommit.ListBranchesOutput{Branches: aws.StringSlice(testCase.branches)}, testCase.apiError)
} else {
codeCommitClient.EXPECT().GetRepositoryWithContext(mock.Anything, &codecommit.GetRepositoryInput{RepositoryName: aws.String(name)}).
codeCommitClient.
On("GetRepositoryWithContext", ctx, &codecommit.GetRepositoryInput{RepositoryName: aws.String(name)}).
Return(&codecommit.GetRepositoryOutput{RepositoryMetadata: &codecommit.RepositoryMetadata{
AccountId: aws.String(organization),
DefaultBranch: aws.String(defaultBranch),
}}, testCase.apiError).Maybe()
}}, testCase.apiError)
}
provider := &AWSCodeCommitProvider{
codeCommitClient: codeCommitClient,


@@ -1,6 +1,7 @@
package scm_provider
import (
"context"
"errors"
"fmt"
"testing"
@@ -15,7 +16,6 @@ import (
azureGit "github.com/microsoft/azure-devops-go-api/azuredevops/v7/git"
azureMock "github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/azure_devops/git/mocks"
"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
)
func s(input string) *string {
@@ -78,13 +78,13 @@ func TestAzureDevopsRepoHasPath(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
gitClientMock := &azureMock.Client{}
gitClientMock := azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, testCase.clientError)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, testCase.clientError)
repoId := &uuid
gitClientMock.EXPECT().GetItem(mock.Anything, azureGit.GetItemArgs{Project: &teamProject, Path: &path, VersionDescriptor: &azureGit.GitVersionDescriptor{Version: &branchName}, RepositoryId: repoId}).Return(nil, testCase.azureDevopsError)
gitClientMock.On("GetItem", ctx, azureGit.GetItemArgs{Project: &teamProject, Path: &path, VersionDescriptor: &azureGit.GitVersionDescriptor{Version: &branchName}, RepositoryId: repoId}).Return(nil, testCase.azureDevopsError)
provider := AzureDevOpsProvider{organization: organization, teamProject: teamProject, clientFactory: clientFactoryMock}
@@ -143,12 +143,12 @@ func TestGetDefaultBranchOnDisabledRepo(t *testing.T) {
t.Run(testCase.name, func(t *testing.T) {
uuid := uuid.New().String()
gitClientMock := azureMock.NewClient(t)
gitClientMock := azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
gitClientMock.EXPECT().GetBranch(mock.Anything, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(nil, testCase.azureDevOpsError)
gitClientMock.On("GetBranch", ctx, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(nil, testCase.azureDevOpsError)
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}
@@ -162,6 +162,8 @@ func TestGetDefaultBranchOnDisabledRepo(t *testing.T) {
}
assert.Empty(t, branches)
gitClientMock.AssertExpectations(t)
})
}
}
@@ -200,12 +202,12 @@ func TestGetAllBranchesOnDisabledRepo(t *testing.T) {
t.Run(testCase.name, func(t *testing.T) {
uuid := uuid.New().String()
gitClientMock := azureMock.NewClient(t)
gitClientMock := azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
gitClientMock.EXPECT().GetBranches(mock.Anything, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(nil, testCase.azureDevOpsError)
gitClientMock.On("GetBranches", ctx, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(nil, testCase.azureDevOpsError)
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}
@@ -219,6 +221,8 @@ func TestGetAllBranchesOnDisabledRepo(t *testing.T) {
}
assert.Empty(t, branches)
gitClientMock.AssertExpectations(t)
})
}
}
@@ -237,12 +241,12 @@ func TestAzureDevOpsGetDefaultBranchStripsRefsName(t *testing.T) {
branchReturn := &azureGit.GitBranchStats{Name: &strippedBranchName, Commit: &azureGit.GitCommitRef{CommitId: s("abc123233223")}}
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}
gitClientMock := &azureMock.Client{}
gitClientMock := azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
gitClientMock.EXPECT().GetBranch(mock.Anything, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &strippedBranchName}).Return(branchReturn, nil).Maybe()
gitClientMock.On("GetBranch", ctx, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &strippedBranchName}).Return(branchReturn, nil)
provider := AzureDevOpsProvider{organization: organization, teamProject: teamProject, clientFactory: clientFactoryMock, allBranches: false}
branches, err := provider.GetBranches(ctx, repo)
@@ -291,12 +295,12 @@ func TestAzureDevOpsGetBranchesDefultBranchOnly(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
gitClientMock := &azureMock.Client{}
gitClientMock := azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, testCase.clientError)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, testCase.clientError)
gitClientMock.EXPECT().GetBranch(mock.Anything, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(testCase.expectedBranch, testCase.getBranchesAPIError)
gitClientMock.On("GetBranch", ctx, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(testCase.expectedBranch, testCase.getBranchesAPIError)
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}
@@ -375,12 +379,12 @@ func TestAzureDevopsGetBranches(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
gitClientMock := &azureMock.Client{}
gitClientMock := azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, testCase.clientError)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, testCase.clientError)
gitClientMock.EXPECT().GetBranches(mock.Anything, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(testCase.expectedBranches, testCase.getBranchesAPIError)
gitClientMock.On("GetBranches", ctx, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(testCase.expectedBranches, testCase.getBranchesAPIError)
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid}
@@ -423,6 +427,7 @@ func TestGetAzureDevopsRepositories(t *testing.T) {
teamProject := "myorg_project"
uuid := uuid.New()
ctx := t.Context()
repoId := &uuid
@@ -472,15 +477,15 @@ func TestGetAzureDevopsRepositories(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
gitClientMock := azureMock.NewClient(t)
gitClientMock.EXPECT().GetRepositories(mock.Anything, azureGit.GetRepositoriesArgs{Project: s(teamProject)}).Return(&testCase.repositories, testCase.getRepositoriesError)
gitClientMock := azureMock.Client{}
gitClientMock.On("GetRepositories", ctx, azureGit.GetRepositoriesArgs{Project: s(teamProject)}).Return(&testCase.repositories, testCase.getRepositoriesError)
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock)
provider := AzureDevOpsProvider{organization: organization, teamProject: teamProject, clientFactory: clientFactoryMock}
repositories, err := provider.ListRepos(t.Context(), "https")
repositories, err := provider.ListRepos(ctx, "https")
if testCase.getRepositoriesError != nil {
require.Error(t, err, "Expected an error from test case %v", testCase.name)
@@ -492,6 +497,31 @@ func TestGetAzureDevopsRepositories(t *testing.T) {
assert.NotEmpty(t, repositories)
assert.Len(t, repositories, testCase.expectedNumberOfRepos)
}
gitClientMock.AssertExpectations(t)
})
}
}
type AzureClientFactoryMock struct {
mock *mock.Mock
}
func (m *AzureClientFactoryMock) GetClient(ctx context.Context) (azureGit.Client, error) {
args := m.mock.Called(ctx)
var client azureGit.Client
c := args.Get(0)
if c != nil {
client = c.(azureGit.Client)
}
var err error
if len(args) > 1 {
if e, ok := args.Get(1).(error); ok {
err = e
}
}
return client, err
}


@@ -30,7 +30,7 @@ func (c *ExtendedClient) GetContents(repo *Repository, path string) (bool, error
urlStr += fmt.Sprintf("/repositories/%s/%s/src/%s/%s?format=meta", c.owner, repo.Repository, repo.SHA, path)
body := strings.NewReader("")
req, err := http.NewRequestWithContext(context.Background(), http.MethodGet, urlStr, body)
req, err := http.NewRequest(http.MethodGet, urlStr, body)
if err != nil {
return false, err
}


@@ -1,101 +0,0 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
package mocks
import (
"context"
"github.com/microsoft/azure-devops-go-api/azuredevops/v7/git"
mock "github.com/stretchr/testify/mock"
)
// NewAzureDevOpsClientFactory creates a new instance of AzureDevOpsClientFactory. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewAzureDevOpsClientFactory(t interface {
mock.TestingT
Cleanup(func())
}) *AzureDevOpsClientFactory {
mock := &AzureDevOpsClientFactory{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
// AzureDevOpsClientFactory is an autogenerated mock type for the AzureDevOpsClientFactory type
type AzureDevOpsClientFactory struct {
mock.Mock
}
type AzureDevOpsClientFactory_Expecter struct {
mock *mock.Mock
}
func (_m *AzureDevOpsClientFactory) EXPECT() *AzureDevOpsClientFactory_Expecter {
return &AzureDevOpsClientFactory_Expecter{mock: &_m.Mock}
}
// GetClient provides a mock function for the type AzureDevOpsClientFactory
func (_mock *AzureDevOpsClientFactory) GetClient(ctx context.Context) (git.Client, error) {
ret := _mock.Called(ctx)
if len(ret) == 0 {
panic("no return value specified for GetClient")
}
var r0 git.Client
var r1 error
if returnFunc, ok := ret.Get(0).(func(context.Context) (git.Client, error)); ok {
return returnFunc(ctx)
}
if returnFunc, ok := ret.Get(0).(func(context.Context) git.Client); ok {
r0 = returnFunc(ctx)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(git.Client)
}
}
if returnFunc, ok := ret.Get(1).(func(context.Context) error); ok {
r1 = returnFunc(ctx)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// AzureDevOpsClientFactory_GetClient_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetClient'
type AzureDevOpsClientFactory_GetClient_Call struct {
*mock.Call
}
// GetClient is a helper method to define mock.On call
// - ctx context.Context
func (_e *AzureDevOpsClientFactory_Expecter) GetClient(ctx interface{}) *AzureDevOpsClientFactory_GetClient_Call {
return &AzureDevOpsClientFactory_GetClient_Call{Call: _e.mock.On("GetClient", ctx)}
}
func (_c *AzureDevOpsClientFactory_GetClient_Call) Run(run func(ctx context.Context)) *AzureDevOpsClientFactory_GetClient_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
run(
arg0,
)
})
return _c
}
func (_c *AzureDevOpsClientFactory_GetClient_Call) Return(client git.Client, err error) *AzureDevOpsClientFactory_GetClient_Call {
_c.Call.Return(client, err)
return _c
}
func (_c *AzureDevOpsClientFactory_GetClient_Call) RunAndReturn(run func(ctx context.Context) (git.Client, error)) *AzureDevOpsClientFactory_GetClient_Call {
_c.Call.Return(run)
return _c
}
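
The file deleted above is the mockery-generated expecter for `AzureDevOpsClientFactory`; its typed `_Expecter`/`_Call` wrappers are what made call sites like `clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)` possible. A usage sketch of the API being removed, assuming the generated `NewClient` constructor for the git client mock shown elsewhere in this diff:

```go
package scm_provider

import (
	"context"
	"testing"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"

	azureMock "github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/azure_devops/git/mocks"
	"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
)

func TestFactoryExpecterSketch(t *testing.T) {
	// Both generated constructors register t.Cleanup assertions for their mocks.
	factory := mocks.NewAzureDevOpsClientFactory(t)
	gitClientMock := azureMock.NewClient(t)

	// The typed wrapper forwards to mock.On("GetClient", ctx) underneath, but
	// fixes the argument count and return types at compile time.
	factory.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)

	c, err := factory.GetClient(context.Background())
	require.NoError(t, err)
	require.Equal(t, gitClientMock, c)
}
```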


@@ -1,6 +1,7 @@
package services
import (
"context"
"crypto/tls"
"net/http"
"testing"
@@ -11,7 +12,7 @@ import (
)
func TestSetupBitbucketClient(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
cfg := &bitbucketv1.Configuration{}
// Act


@@ -26,14 +26,10 @@ import (
"github.com/go-playground/webhooks/v6/github"
"github.com/go-playground/webhooks/v6/gitlab"
log "github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/v3/util/guard"
)
const payloadQueueSize = 50000
const panicMsgAppSet = "panic while processing applicationset-controller webhook event"
type WebhookHandler struct {
sync.WaitGroup // for testing
github *github.Webhook
@@ -106,7 +102,6 @@ func NewWebhookHandler(webhookParallelism int, argocdSettingsMgr *argosettings.S
}
func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
compLog := log.WithField("component", "applicationset-webhook")
for i := 0; i < webhookParallelism; i++ {
h.Add(1)
go func() {
@@ -116,7 +111,7 @@ func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
if !ok {
return
}
guard.RecoverAndLog(func() { h.HandleEvent(payload) }, compLog, panicMsgAppSet)
h.HandleEvent(payload)
}
}()
}
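
The worker-pool change above removes the `guard.RecoverAndLog` wrapper around `HandleEvent`, so a panic in one webhook event now unwinds the worker goroutine. A minimal stand-in for the removed guard, with the semantics its call site implies (run the function, log any panic instead of crashing); this is a sketch, not Argo CD's actual util/guard implementation:

```go
package webhook

import (
	log "github.com/sirupsen/logrus"
)

// recoverAndLog sketches the removed util/guard.RecoverAndLog: it runs fn
// and converts a panic into an error log entry so the worker goroutine
// keeps draining the payload queue instead of crashing.
func recoverAndLog(fn func(), logEntry *log.Entry, msg string) {
	defer func() {
		if r := recover(); r != nil {
			logEntry.WithField("recovered", r).Error(msg)
		}
	}()
	fn()
}
```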

assets/swagger.json (generated)

@@ -5619,9 +5619,6 @@
"statusBadgeRootUrl": {
"type": "string"
},
"syncWithReplaceAllowed": {
"type": "boolean"
},
"trackingMethod": {
"type": "string"
},
@@ -7325,11 +7322,6 @@
"items": {
"$ref": "#/definitions/applicationv1alpha1ResourceStatus"
}
},
"resourcesCount": {
"description": "ResourcesCount is the total number of resources managed by this application set. The count may be higher than actual number of items in the Resources field when\nthe number of managed resources exceeds the limit imposed by the controller (to avoid making the status field too large).",
"type": "integer",
"format": "int64"
}
}
},
@@ -8257,7 +8249,7 @@
}
},
"v1alpha1ConfigMapKeyRef": {
"description": "ConfigMapKeyRef struct for a reference to a configmap key.",
"description": "Utility struct for a reference to a configmap key.",
"type": "object",
"properties": {
"configMapName": {
@@ -9339,7 +9331,7 @@
}
},
"v1alpha1PullRequestGeneratorGithub": {
"description": "PullRequestGeneratorGithub defines connection info specific to GitHub.",
"description": "PullRequestGenerator defines connection info specific to GitHub.",
"type": "object",
"properties": {
"api": {
@@ -9480,11 +9472,6 @@
"connectionState": {
"$ref": "#/definitions/v1alpha1ConnectionState"
},
"depth": {
"description": "Depth specifies the depth for shallow clones. A value of 0 or omitting the field indicates a full clone.",
"type": "integer",
"format": "int64"
},
"enableLfs": {
"description": "EnableLFS specifies whether git-lfs support should be enabled for this repo. Only valid for Git repositories.",
"type": "boolean"
@@ -10348,7 +10335,7 @@
}
},
"v1alpha1SecretRef": {
"description": "SecretRef struct for a reference to a secret key.",
"description": "Utility struct for a reference to a secret key.",
"type": "object",
"properties": {
"key": {
@@ -10585,8 +10572,8 @@
"type": "string"
},
"targetBranch": {
"description": "TargetBranch is the branch from which hydrated manifests will be synced.\nIf HydrateTo is not set, this is also the branch to which hydrated manifests are committed.",
"type": "string"
"type": "string",
"title": "TargetBranch is the branch to which hydrated manifests should be committed"
}
}
},


@@ -14,7 +14,6 @@ import (
"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
logutils "github.com/argoproj/argo-cd/v3/util/log"
"github.com/argoproj/argo-cd/v3/util/profile"
"github.com/argoproj/argo-cd/v3/util/tls"
"github.com/argoproj/argo-cd/v3/applicationset/controllers"
@@ -80,7 +79,6 @@ func NewCommand() *cobra.Command {
enableScmProviders bool
webhookParallelism int
tokenRefStrictMode bool
maxResourcesStatusCount int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@@ -171,15 +169,6 @@ func NewCommand() *cobra.Command {
log.Error(err, "unable to start manager")
os.Exit(1)
}
pprofMux := http.NewServeMux()
profile.RegisterProfiler(pprofMux)
// This looks a little strange. E.g., not using ctrl.Options PprofBindAddress and then adding the pprof mux
// to the metrics server. However, it allows for the controller to dynamically expose the pprof endpoints
// and use the existing metrics server, the same pattern that the application controller and api-server follow.
if err = mgr.AddMetricsServerExtraHandler("/debug/pprof/", pprofMux); err != nil {
log.Error(err, "failed to register pprof handlers")
}
dynamicClient, err := dynamic.NewForConfig(mgr.GetConfig())
errors.CheckError(err)
k8sClient, err := kubernetes.NewForConfig(mgr.GetConfig())
@@ -242,7 +231,6 @@ func NewCommand() *cobra.Command {
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -287,7 +275,6 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&metricsAplicationsetLabels, "metrics-applicationset-labels", []string{}, "List of Application labels that will be added to the argocd_applicationset_labels metric")
command.Flags().BoolVar(&enableGitHubAPIMetrics, "enable-github-api-metrics", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_GITHUB_API_METRICS", false), "Enable GitHub API metrics for generators that use the GitHub API")
command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 0, 0, math.MaxInt), "Max number of resources stored in appset status.")
return &command
}
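
The deleted pprof block above (see the comment about reusing the metrics server) boils down to mounting the standard `net/http/pprof` handlers on a mux and handing it to the manager. A stdlib-only sketch of what `profile.RegisterProfiler` is assumed to wire up:

```go
package commands

import (
	"net/http"
	"net/http/pprof"
)

// newPprofMux mounts the standard profiler endpoints; passing the result to
// mgr.AddMetricsServerExtraHandler("/debug/pprof/", mux) exposes them on the
// controller's existing metrics listener.
func newPprofMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
	return mux
}
```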


@@ -38,7 +38,7 @@ func NewCommand() *cobra.Command {
Use: "argocd-commit-server",
Short: "Run Argo CD Commit Server",
Long: "Argo CD Commit Server is an internal service which commits and pushes hydrated manifests to git. This command runs Commit Server in the foreground.",
RunE: func(cmd *cobra.Command, _ []string) error {
RunE: func(_ *cobra.Command, _ []string) error {
vers := common.GetVersion()
vers.LogStartupInfo(
"Argo CD Commit Server",
@@ -59,10 +59,8 @@ func NewCommand() *cobra.Command {
server := commitserver.NewServer(askPassServer, metricsServer)
grpc := server.CreateGRPC()
ctx := cmd.Context()
lc := &net.ListenConfig{}
listener, err := lc.Listen(ctx, "tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
errors.CheckError(err)
healthz.ServeHealthCheck(http.DefaultServeMux, func(r *http.Request) error {


@@ -115,7 +115,7 @@ func NewRunDexCommand() *cobra.Command {
err = os.WriteFile("/tmp/dex.yaml", dexCfgBytes, 0o644)
errors.CheckError(err)
log.Debug(redactor(string(dexCfgBytes)))
cmd = exec.CommandContext(ctx, "dex", "serve", "/tmp/dex.yaml")
cmd = exec.Command("dex", "serve", "/tmp/dex.yaml")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
err = cmd.Start()


@@ -169,8 +169,7 @@ func NewCommand() *cobra.Command {
}
grpc := server.CreateGRPC()
lc := &net.ListenConfig{}
listener, err := lc.Listen(ctx, "tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
errors.CheckError(err)
healthz.ServeHealthCheck(http.DefaultServeMux, func(r *http.Request) error {
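
Both server commands above make the same swap: `(&net.ListenConfig{}).Listen(ctx, ...)` back to `net.Listen`. The results are identical except that the `ListenConfig` form threads a context through listener creation, so startup can be canceled. A sketch with an illustrative helper name:

```go
package commands

import (
	"context"
	"fmt"
	"net"
)

// listen binds the server port. net.Listen behaves exactly like
// ListenConfig.Listen with context.Background(); threading the command's
// context through instead lets listener setup be canceled during shutdown.
func listen(ctx context.Context, host string, port int) (net.Listener, error) {
	lc := &net.ListenConfig{}
	return lc.Listen(ctx, "tcp", fmt.Sprintf("%s:%d", host, port))
}
```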


@@ -105,26 +105,26 @@ func TestGetReconcileResults_Refresh(t *testing.T) {
appClientset := appfake.NewSimpleClientset(app, proj)
deployment := test.NewDeployment()
kubeClientset := kubefake.NewClientset(deployment, argoCM, argoCDSecret)
clusterCache := &clustermocks.ClusterCache{}
clusterCache.EXPECT().IsNamespaced(mock.Anything).Return(true, nil)
clusterCache.EXPECT().GetGVKParser().Return(nil)
repoServerClient := &mocks.RepoServerServiceClient{}
repoServerClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(&argocdclient.ManifestResponse{
clusterCache := clustermocks.ClusterCache{}
clusterCache.On("IsNamespaced", mock.Anything).Return(true, nil)
clusterCache.On("GetGVKParser", mock.Anything).Return(nil)
repoServerClient := mocks.RepoServerServiceClient{}
repoServerClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(&argocdclient.ManifestResponse{
Manifests: []string{test.DeploymentManifest},
}, nil)
repoServerClientset := &mocks.Clientset{RepoServerServiceClient: repoServerClient}
liveStateCache := &cachemocks.LiveStateCache{}
liveStateCache.EXPECT().GetManagedLiveObjs(mock.Anything, mock.Anything, mock.Anything).Return(map[kube.ResourceKey]*unstructured.Unstructured{
repoServerClientset := mocks.Clientset{RepoServerServiceClient: &repoServerClient}
liveStateCache := cachemocks.LiveStateCache{}
liveStateCache.On("GetManagedLiveObjs", mock.Anything, mock.Anything, mock.Anything).Return(map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(deployment): deployment,
}, nil)
liveStateCache.EXPECT().GetVersionsInfo(mock.Anything).Return("v1.2.3", nil, nil)
liveStateCache.EXPECT().Init().Return(nil)
liveStateCache.EXPECT().GetClusterCache(mock.Anything).Return(clusterCache, nil)
liveStateCache.EXPECT().IsNamespaced(mock.Anything, mock.Anything).Return(true, nil)
liveStateCache.On("GetVersionsInfo", mock.Anything).Return("v1.2.3", nil, nil)
liveStateCache.On("Init").Return(nil, nil)
liveStateCache.On("GetClusterCache", mock.Anything).Return(&clusterCache, nil)
liveStateCache.On("IsNamespaced", mock.Anything, mock.Anything).Return(true, nil)
result, err := reconcileApplications(ctx, kubeClientset, appClientset, "default", repoServerClientset, "",
result, err := reconcileApplications(ctx, kubeClientset, appClientset, "default", &repoServerClientset, "",
func(_ db.ArgoDB, _ cache.SharedIndexInformer, _ *settings.SettingsManager, _ *metrics.MetricsServer) statecache.LiveStateCache {
return liveStateCache
return &liveStateCache
},
false,
normalizers.IgnoreNormalizerOpts{},


@@ -30,12 +30,11 @@ func NewNotificationsCommand() *cobra.Command {
)
var argocdService service.Service
toolsCommand := cmd.NewToolsCommand(
"notifications",
"argocd admin notifications",
applications,
settings.GetFactorySettingsForCLI(func() service.Service { return argocdService }, "argocd-notifications-secret", "argocd-notifications-cm", false),
settings.GetFactorySettingsForCLI(argocdService, "argocd-notifications-secret", "argocd-notifications-cm", false),
func(clientConfig clientcmd.ClientConfig) {
k8sCfg, err := clientConfig.ClientConfig()
if err != nil {


@@ -40,7 +40,9 @@ func captureStdout(callback func()) (string, error) {
return string(data), err
}
func newSettingsManager(ctx context.Context, data map[string]string) *settings.SettingsManager {
func newSettingsManager(data map[string]string) *settings.SettingsManager {
ctx := context.Background()
clientset := fake.NewClientset(&corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
@@ -67,8 +69,8 @@ type fakeCmdContext struct {
mgr *settings.SettingsManager
}
func newCmdContext(ctx context.Context, data map[string]string) *fakeCmdContext {
return &fakeCmdContext{mgr: newSettingsManager(ctx, data)}
func newCmdContext(data map[string]string) *fakeCmdContext {
return &fakeCmdContext{mgr: newSettingsManager(data)}
}
func (ctx *fakeCmdContext) createSettingsManager(context.Context) (*settings.SettingsManager, error) {
@@ -180,7 +182,7 @@ admissionregistration.k8s.io/MutatingWebhookConfiguration:
if !assert.True(t, ok) {
return
}
summary, err := validator(newSettingsManager(t.Context(), tc.data))
summary, err := validator(newSettingsManager(tc.data))
if tc.containsSummary != "" {
require.NoError(t, err)
assert.Contains(t, summary, tc.containsSummary)
@@ -247,7 +249,7 @@ func tempFile(content string) (string, io.Closer, error) {
}
func TestValidateSettingsCommand_NoErrors(t *testing.T) {
cmd := NewValidateSettingsCommand(newCmdContext(t.Context(), map[string]string{}))
cmd := NewValidateSettingsCommand(newCmdContext(map[string]string{}))
out, err := captureStdout(func() {
err := cmd.Execute()
require.NoError(t, err)
@@ -265,7 +267,7 @@ func TestResourceOverrideIgnoreDifferences(t *testing.T) {
defer utilio.Close(closer)
t.Run("NoOverridesConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{}))
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{}))
out, err := captureStdout(func() {
cmd.SetArgs([]string{"ignore-differences", f})
err := cmd.Execute()
@@ -276,7 +278,7 @@ func TestResourceOverrideIgnoreDifferences(t *testing.T) {
})
t.Run("DataIgnored", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `apps/Deployment:
ignoreDifferences: |
jsonPointers:
@@ -298,7 +300,7 @@ func TestResourceOverrideHealth(t *testing.T) {
defer utilio.Close(closer)
t.Run("NoHealthAssessment", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `example.com/ExampleResource: {}`,
}))
out, err := captureStdout(func() {
@@ -311,7 +313,7 @@ func TestResourceOverrideHealth(t *testing.T) {
})
t.Run("HealthAssessmentConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `example.com/ExampleResource:
health.lua: |
return { status = "Progressing" }
@@ -327,7 +329,7 @@ func TestResourceOverrideHealth(t *testing.T) {
})
t.Run("HealthAssessmentConfiguredWildcard", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `example.com/*:
health.lua: |
return { status = "Progressing" }
@@ -353,7 +355,7 @@ func TestResourceOverrideAction(t *testing.T) {
defer utilio.Close(closer)
t.Run("NoActions", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `apps/Deployment: {}`,
}))
out, err := captureStdout(func() {
@@ -366,7 +368,7 @@ func TestResourceOverrideAction(t *testing.T) {
})
t.Run("OldStyleActionConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `apps/Deployment:
actions: |
discovery.lua: |
@@ -402,7 +404,7 @@ resume false
})
t.Run("NewStyleActionConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `batch/CronJob:
actions: |
discovery.lua: |


@@ -1794,7 +1794,6 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
repo string
appNamespace string
cluster string
path string
)
command := &cobra.Command{
Use: "list",
@@ -1830,9 +1829,6 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
if cluster != "" {
appList = argo.FilterByCluster(appList, cluster)
}
if path != "" {
appList = argo.FilterByPath(appList, path)
}
switch output {
case "yaml", "json":
@@ -1853,7 +1849,6 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().StringVarP(&repo, "repo", "r", "", "List apps by source repo URL")
command.Flags().StringVarP(&appNamespace, "app-namespace", "N", "", "Only list applications in namespace")
command.Flags().StringVarP(&cluster, "cluster", "c", "", "List apps by cluster name or url")
command.Flags().StringVarP(&path, "path", "P", "", "List apps by path")
return command
}


@@ -57,7 +57,7 @@ func NewApplicationGetResourceCommand(clientOpts *argocdclient.ClientOptions) *c
# Get a specific resource with managed fields, Pod my-app-pod, in 'my-app' by name in wide format
argocd app get-resource my-app --kind Pod --resource-name my-app-pod --show-managed-fields
# Get the details of a specific field in a resource in 'my-app' in the wide format
# Get the the details of a specific field in a resource in 'my-app' in the wide format
argocd app get-resource my-app --kind Pod --filter-fields status.podIP
# Get the details of multiple specific fields in a specific resource in 'my-app' in the wide format


@@ -1,103 +0,0 @@
package commands
import (
"regexp"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
appsv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
// splitColumns splits a line produced by tabwriter using runs of 2 or more spaces
// as delimiters to obtain logical columns regardless of alignment padding.
func splitColumns(line string) []string {
re := regexp.MustCompile(`\s{2,}`)
return re.Split(strings.TrimSpace(line), -1)
}
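
For reference, a minimal standalone sketch of what `splitColumns` does to a tabwriter-padded row (the sample line and values are illustrative, not from the test fixtures):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// A row as tabwriter would pad it: logical columns separated by 2+ spaces,
	// while single spaces inside a column (the identity) are preserved.
	line := "ABCDEF1234567890    RSA4096    Alice <alice@example.com>"
	re := regexp.MustCompile(`\s{2,}`)
	fmt.Printf("%q\n", re.Split(strings.TrimSpace(line), -1))
	// ["ABCDEF1234567890" "RSA4096" "Alice <alice@example.com>"]
}
```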
func Test_printKeyTable_Empty(t *testing.T) {
out, err := captureOutput(func() error {
printKeyTable([]appsv1.GnuPGPublicKey{})
return nil
})
require.NoError(t, err)
lines := strings.Split(strings.TrimRight(out, "\n"), "\n")
require.Len(t, lines, 1)
headerCols := splitColumns(lines[0])
assert.Equal(t, []string{"KEYID", "TYPE", "IDENTITY"}, headerCols)
}
func Test_printKeyTable_Single(t *testing.T) {
keys := []appsv1.GnuPGPublicKey{
{
KeyID: "ABCDEF1234567890",
SubType: "rsa4096",
Owner: "Alice <alice@example.com>",
},
}
out, err := captureOutput(func() error {
printKeyTable(keys)
return nil
})
require.NoError(t, err)
lines := strings.Split(strings.TrimRight(out, "\n"), "\n")
require.Len(t, lines, 2)
// Header
assert.Equal(t, []string{"KEYID", "TYPE", "IDENTITY"}, splitColumns(lines[0]))
// Row
row := splitColumns(lines[1])
require.Len(t, row, 3)
assert.Equal(t, "ABCDEF1234567890", row[0])
assert.Equal(t, "RSA4096", row[1]) // subtype upper-cased
assert.Equal(t, "Alice <alice@example.com>", row[2])
}
func Test_printKeyTable_Multiple(t *testing.T) {
keys := []appsv1.GnuPGPublicKey{
{
KeyID: "ABCD",
SubType: "ed25519",
Owner: "User One <one@example.com>",
},
{
KeyID: "0123456789ABCDEF",
SubType: "rsa2048",
Owner: "Second User <second@example.com>",
},
}
out, err := captureOutput(func() error {
printKeyTable(keys)
return nil
})
require.NoError(t, err)
lines := strings.Split(strings.TrimRight(out, "\n"), "\n")
require.Len(t, lines, 3)
// Header
assert.Equal(t, []string{"KEYID", "TYPE", "IDENTITY"}, splitColumns(lines[0]))
// First row
row1 := splitColumns(lines[1])
require.Len(t, row1, 3)
assert.Equal(t, "ABCD", row1[0])
assert.Equal(t, "ED25519", row1[1])
assert.Equal(t, "User One <one@example.com>", row1[2])
// Second row
row2 := splitColumns(lines[2])
require.Len(t, row2, 3)
assert.Equal(t, "0123456789ABCDEF", row2[0])
assert.Equal(t, "RSA2048", row2[1])
assert.Equal(t, "Second User <second@example.com>", row2[2])
}

View File

@@ -213,8 +213,7 @@ func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOpti
}
if port == nil || *port == 0 {
addr := *address + ":0"
lc := &net.ListenConfig{}
ln, err := lc.Listen(ctx, "tcp", addr)
ln, err := net.Listen("tcp", addr)
if err != nil {
return nil, fmt.Errorf("failed to listen on %q: %w", addr, err)
}
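
The change above swaps the context-aware listener for the plain one. A minimal sketch of the two variants (addresses are illustrative): `net.ListenConfig.Listen` accepts a context so a blocked listen can be aborted, while `net.Listen` cannot.

```go
package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	ctx := context.Background()

	// Context-aware variant: cancelling ctx aborts the Listen call.
	lc := &net.ListenConfig{}
	ln, err := lc.Listen(ctx, "tcp", "127.0.0.1:0") // ":0" picks a free port
	if err != nil {
		panic(err)
	}
	fmt.Println("context-aware listener on", ln.Addr())
	_ = ln.Close()

	// Plain variant, equivalent when cancellation is not needed.
	ln2, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	fmt.Println("plain listener on", ln2.Addr())
	_ = ln2.Close()
}
```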

View File

@@ -3,11 +3,9 @@ package commands
import (
"errors"
"fmt"
"maps"
"os"
"os/exec"
"path/filepath"
"slices"
"strings"
"github.com/argoproj/argo-cd/v3/util/cli"
@@ -15,17 +13,18 @@ import (
log "github.com/sirupsen/logrus"
)
const prefix = "argocd"
// DefaultPluginHandler implements the PluginHandler interface
type DefaultPluginHandler struct {
lookPath func(file string) (string, error)
run func(cmd *exec.Cmd) error
ValidPrefixes []string
lookPath func(file string) (string, error)
run func(cmd *exec.Cmd) error
}
// NewDefaultPluginHandler instantiates a DefaultPluginHandler
func NewDefaultPluginHandler() *DefaultPluginHandler {
// NewDefaultPluginHandler instantiates the DefaultPluginHandler
func NewDefaultPluginHandler(validPrefixes []string) *DefaultPluginHandler {
return &DefaultPluginHandler{
lookPath: exec.LookPath,
ValidPrefixes: validPrefixes,
lookPath: exec.LookPath,
run: func(cmd *exec.Cmd) error {
return cmd.Run()
},
@@ -33,8 +32,8 @@ func NewDefaultPluginHandler() *DefaultPluginHandler {
}
// HandleCommandExecutionError processes the error returned from executing the command.
// It handles both standard Argo CD commands and plugin commands. We don't require returning
// an error, but we are doing it to cover various test scenarios.
// It handles both standard Argo CD commands and plugin commands. We don't require returning
// an error, but we do so to cover various test scenarios.
func (h *DefaultPluginHandler) HandleCommandExecutionError(err error, isArgocdCLI bool, args []string) error {
// the log level needs to be setup manually here since the initConfig()
// set by the cobra.OnInitialize() was never executed because cmd.Execute()
@@ -86,24 +85,27 @@ func (h *DefaultPluginHandler) handlePluginCommand(cmdArgs []string) (string, er
// lookForPlugin looks for a plugin in the PATH that starts with argocd prefix
func (h *DefaultPluginHandler) lookForPlugin(filename string) (string, bool) {
pluginName := fmt.Sprintf("%s-%s", prefix, filename)
path, err := h.lookPath(pluginName)
if err != nil {
// error if a plugin is found in a relative path
if errors.Is(err, exec.ErrDot) {
log.Errorf("Plugin '%s' found in relative path: %v", pluginName, err)
} else {
log.Warnf("error looking for plugin '%s': %v", pluginName, err)
for _, prefix := range h.ValidPrefixes {
pluginName := fmt.Sprintf("%s-%s", prefix, filename)
path, err := h.lookPath(pluginName)
if err != nil {
// error if a plugin is found in a relative path
if errors.Is(err, exec.ErrDot) {
log.Errorf("Plugin '%s' found in relative path: %v", pluginName, err)
} else {
log.Warnf("error looking for plugin '%s': %v", pluginName, err)
}
continue
}
return "", false
if path == "" {
return "", false
}
return path, true
}
if path == "" {
return "", false
}
return path, true
return "", false
}
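
Both versions of `lookForPlugin` reduce to the same core `exec.LookPath` pattern; here is a self-contained sketch of the single-prefix variant (the `argocd-` prefix matches the diff, the rest is illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// lookForPlugin resolves "argocd-<name>" on PATH. Matches that would only
// execute from a relative directory surface as exec.ErrDot and are rejected.
func lookForPlugin(name string) (string, bool) {
	path, err := exec.LookPath("argocd-" + name)
	if err != nil {
		if errors.Is(err, exec.ErrDot) {
			fmt.Printf("plugin %q found in a relative path, ignoring\n", name)
		}
		return "", false
	}
	return path, true
}

func main() {
	if path, ok := lookForPlugin("demo"); ok {
		fmt.Println("would execute", path)
	} else {
		fmt.Println("no plugin found")
	}
}
```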
// executePlugin implements PluginHandler and executes a plugin found
@@ -139,56 +141,3 @@ func (h *DefaultPluginHandler) command(name string, arg ...string) *exec.Cmd {
}
return cmd
}
// ListAvailablePlugins returns a list of plugin names that are available in the user's PATH
// for tab completion. It searches for executables matching the ValidPrefixes pattern.
func (h *DefaultPluginHandler) ListAvailablePlugins() []string {
// Track seen plugin names to avoid duplicates
seenPlugins := make(map[string]bool)
// Search through each directory in PATH
for _, dir := range filepath.SplitList(os.Getenv("PATH")) {
// Skip empty directories
if dir == "" {
continue
}
// Read directory contents
entries, err := os.ReadDir(dir)
if err != nil {
continue
}
// Check each file in the directory
for _, entry := range entries {
// Skip directories and non-executable files
if entry.IsDir() {
continue
}
name := entry.Name()
// Check if the file is a valid argocd plugin
pluginPrefix := prefix + "-"
if strings.HasPrefix(name, pluginPrefix) {
// Extract the plugin command name (everything after the prefix)
pluginName := strings.TrimPrefix(name, pluginPrefix)
// Skip empty plugin names or names with path separators
if pluginName == "" || strings.Contains(pluginName, "/") || strings.Contains(pluginName, "\\") {
continue
}
// Check if the file is executable
if info, err := entry.Info(); err == nil {
// On Unix-like systems, check executable bit
if info.Mode()&0o111 != 0 {
seenPlugins[pluginName] = true
}
}
}
}
}
return slices.Sorted(maps.Keys(seenPlugins))
}

View File

@@ -28,7 +28,7 @@ func setupPluginPath(t *testing.T) {
func TestNormalCommandWithPlugin(t *testing.T) {
setupPluginPath(t)
_ = NewDefaultPluginHandler()
_ = NewDefaultPluginHandler([]string{"argocd"})
args := []string{"argocd", "version", "--short", "--client"}
buf := new(bytes.Buffer)
cmd := NewVersionCmd(&argocdclient.ClientOptions{}, nil)
@@ -47,7 +47,7 @@ func TestNormalCommandWithPlugin(t *testing.T) {
func TestPluginExecution(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -101,7 +101,7 @@ func TestPluginExecution(t *testing.T) {
func TestNormalCommandError(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
args := []string{"argocd", "version", "--non-existent-flag"}
cmd := NewVersionCmd(&argocdclient.ClientOptions{}, nil)
cmd.SetArgs(args[1:])
@@ -118,7 +118,7 @@ func TestNormalCommandError(t *testing.T) {
// TestUnknownCommandNoPlugin tests the scenario where the command is neither a normal ArgoCD command
// nor available as a plugin
func TestUnknownCommandNoPlugin(t *testing.T) {
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -137,7 +137,7 @@ func TestUnknownCommandNoPlugin(t *testing.T) {
func TestPluginNoExecutePermission(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -156,7 +156,7 @@ func TestPluginNoExecutePermission(t *testing.T) {
func TestPluginExecutionError(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -187,7 +187,7 @@ func TestPluginInRelativePathIgnored(t *testing.T) {
t.Setenv("PATH", os.Getenv("PATH")+string(os.PathListSeparator)+relativePath)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -206,7 +206,7 @@ func TestPluginInRelativePathIgnored(t *testing.T) {
func TestPluginFlagParsing(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
tests := []struct {
name string
@@ -255,7 +255,7 @@ func TestPluginFlagParsing(t *testing.T) {
func TestPluginStatusCode(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
tests := []struct {
name string
@@ -309,76 +309,3 @@ func TestPluginStatusCode(t *testing.T) {
})
}
}
// TestListAvailablePlugins tests the plugin discovery functionality for tab completion
func TestListAvailablePlugins(t *testing.T) {
setupPluginPath(t)
tests := []struct {
name string
validPrefix []string
expected []string
}{
{
name: "Standard argocd prefix finds plugins",
expected: []string{"demo_plugin", "error", "foo", "status-code-plugin", "test-plugin", "version"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
pluginHandler := NewDefaultPluginHandler()
plugins := pluginHandler.ListAvailablePlugins()
assert.Equal(t, tt.expected, plugins)
})
}
}
// TestListAvailablePluginsEmptyPath tests plugin discovery when PATH is empty
func TestListAvailablePluginsEmptyPath(t *testing.T) {
// Set empty PATH
t.Setenv("PATH", "")
pluginHandler := NewDefaultPluginHandler()
plugins := pluginHandler.ListAvailablePlugins()
assert.Empty(t, plugins, "Should return empty list when PATH is empty")
}
// TestListAvailablePluginsNonExecutableFiles tests that non-executable files are ignored
func TestListAvailablePluginsNonExecutableFiles(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
plugins := pluginHandler.ListAvailablePlugins()
// Should not include 'no-permission' since it's not executable
assert.NotContains(t, plugins, "no-permission")
}
// TestListAvailablePluginsDeduplication tests that duplicate plugins from different PATH dirs are handled
func TestListAvailablePluginsDeduplication(t *testing.T) {
// Create two temporary directories with the same plugin
dir1 := t.TempDir()
dir2 := t.TempDir()
// Create the same plugin in both directories
plugin1 := filepath.Join(dir1, "argocd-duplicate")
plugin2 := filepath.Join(dir2, "argocd-duplicate")
err := os.WriteFile(plugin1, []byte("#!/bin/bash\necho 'plugin1'\n"), 0o755)
require.NoError(t, err)
err = os.WriteFile(plugin2, []byte("#!/bin/bash\necho 'plugin2'\n"), 0o755)
require.NoError(t, err)
// Set PATH to include both directories
testPath := dir1 + string(os.PathListSeparator) + dir2
t.Setenv("PATH", testPath)
pluginHandler := NewDefaultPluginHandler()
plugins := pluginHandler.ListAvailablePlugins()
assert.Equal(t, []string{"duplicate"}, plugins)
}

View File

@@ -191,7 +191,6 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
repoOpts.Repo.NoProxy = repoOpts.NoProxy
repoOpts.Repo.ForceHttpBasicAuth = repoOpts.ForceHttpBasicAuth
repoOpts.Repo.UseAzureWorkloadIdentity = repoOpts.UseAzureWorkloadIdentity
repoOpts.Repo.Depth = repoOpts.Depth
if repoOpts.Repo.Type == "helm" && repoOpts.Repo.Name == "" {
errors.Fatal(errors.ErrorGeneric, "Must specify --name for repos of type 'helm'")

View File

@@ -44,11 +44,6 @@ func NewCommand() *cobra.Command {
},
DisableAutoGenTag: true,
SilenceUsage: true,
ValidArgsFunction: func(_ *cobra.Command, _ []string, _ string) ([]string, cobra.ShellCompDirective) {
// Return available plugin commands for tab completion
plugins := NewDefaultPluginHandler().ListAvailablePlugins()
return plugins, cobra.ShellCompDirectiveNoFileComp
},
}
command.AddCommand(NewCompletionCommand())
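
The removed `ValidArgsFunction` is what surfaces plugins during shell completion. A hedged sketch of the same wiring with a stubbed plugin list (the stub stands in for `ListAvailablePlugins`):

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// availablePlugins stands in for DefaultPluginHandler.ListAvailablePlugins,
// which scans PATH for executables named "argocd-*".
func availablePlugins() []string {
	return []string{"demo_plugin", "foo"}
}

func main() {
	root := &cobra.Command{
		Use: "argocd",
		// Offer plugin names when completing a bare `argocd <TAB>`.
		ValidArgsFunction: func(_ *cobra.Command, _ []string, _ string) ([]string, cobra.ShellCompDirective) {
			return availablePlugins(), cobra.ShellCompDirectiveNoFileComp
		},
	}
	if err := root.Execute(); err != nil {
		fmt.Println(err)
	}
}
```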

View File

@@ -24,7 +24,7 @@ func extractHealthStatusAndReason(node v1alpha1.ResourceNode) (healthStatus heal
healthStatus = node.Health.Status
reason = node.Health.Message
}
return healthStatus, reason
return
}
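
The `return healthStatus, reason` vs bare `return` change is purely stylistic: with named results, a bare `return` yields the current values of those results. A minimal sketch:

```go
package main

import "fmt"

// With named results, `return` and `return processNext` are equivalent;
// the bare form returns whatever the named variable currently holds.
func process(shutdown bool) (processNext bool) {
	if shutdown {
		processNext = false
		return // returns false
	}
	processNext = true
	return // returns true
}

func main() {
	fmt.Println(process(false), process(true)) // true false
}
```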
func treeViewAppGet(prefix string, uidToNodeMap map[string]v1alpha1.ResourceNode, parentToChildMap map[string][]string, parent v1alpha1.ResourceNode, mapNodeNameToResourceState map[string]*resourceState, w *tabwriter.Writer) {

View File

@@ -83,11 +83,12 @@ func main() {
}
err := command.Execute()
// if an error is present, try to look for various scenarios
// if the err is non-nil, try to look for various scenarios
// such as if the error is from the execution of a normal argocd command,
// unknown command error or any other.
if err != nil {
pluginErr := cli.NewDefaultPluginHandler().HandleCommandExecutionError(err, isArgocdCLI, os.Args)
pluginHandler := cli.NewDefaultPluginHandler([]string{"argocd"})
pluginErr := pluginHandler.HandleCommandExecutionError(err, isArgocdCLI, os.Args)
if pluginErr != nil {
var exitErr *exec.ExitError
if errors.As(pluginErr, &exitErr) {
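
The `errors.As` check above recovers the plugin's exit status so the CLI can propagate it. A runnable sketch of the pattern (the `sh -c 'exit 3'` command is illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Run a command that exits non-zero, then recover its status code.
	err := exec.Command("sh", "-c", "exit 3").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("plugin exited with code", exitErr.ExitCode()) // 3
		os.Exit(exitErr.ExitCode())
	}
}
```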

View File

@@ -136,9 +136,9 @@ func AddAppFlags(command *cobra.Command, opts *AppOptions) {
command.Flags().StringVar(&opts.project, "project", "", "Application project name")
command.Flags().StringVar(&opts.syncPolicy, "sync-policy", "", "Set the sync policy (one of: manual (aliases of manual: none), automated (aliases of automated: auto, automatic))")
command.Flags().StringArrayVar(&opts.syncOptions, "sync-option", []string{}, "Add or remove a sync option, e.g. add `Prune=false`. Remove using `!` prefix, e.g. `!Prune=false`")
command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning for automated sync policy")
command.Flags().BoolVar(&opts.selfHeal, "self-heal", false, "Set self healing for automated sync policy")
command.Flags().BoolVar(&opts.allowEmpty, "allow-empty", false, "Set allow zero live resources for automated sync policy")
command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning when sync is automated")
command.Flags().BoolVar(&opts.selfHeal, "self-heal", false, "Set self healing when sync is automated")
command.Flags().BoolVar(&opts.allowEmpty, "allow-empty", false, "Set allow zero live resources when sync is automated")
command.Flags().StringVar(&opts.namePrefix, "nameprefix", "", "Kustomize nameprefix")
command.Flags().StringVar(&opts.nameSuffix, "namesuffix", "", "Kustomize namesuffix")
command.Flags().StringVar(&opts.kustomizeVersion, "kustomize-version", "", "Kustomize version")
@@ -284,26 +284,25 @@ func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
spec.SyncPolicy.Retry.Refresh = appOpts.retryRefresh
}
})
if flags.Changed("auto-prune") || flags.Changed("self-heal") || flags.Changed("allow-empty") {
if spec.SyncPolicy == nil {
spec.SyncPolicy = &argoappv1.SyncPolicy{}
}
if spec.SyncPolicy.Automated == nil {
disabled := false
spec.SyncPolicy.Automated = &argoappv1.SyncPolicyAutomated{Enabled: &disabled}
}
if flags.Changed("auto-prune") {
spec.SyncPolicy.Automated.Prune = appOpts.autoPrune
}
if flags.Changed("self-heal") {
spec.SyncPolicy.Automated.SelfHeal = appOpts.selfHeal
}
if flags.Changed("allow-empty") {
spec.SyncPolicy.Automated.AllowEmpty = appOpts.allowEmpty
if flags.Changed("auto-prune") {
if spec.SyncPolicy == nil || !spec.SyncPolicy.IsAutomatedSyncEnabled() {
log.Fatal("Cannot set --auto-prune: application not configured with automatic sync")
}
spec.SyncPolicy.Automated.Prune = appOpts.autoPrune
}
if flags.Changed("self-heal") {
if spec.SyncPolicy == nil || !spec.SyncPolicy.IsAutomatedSyncEnabled() {
log.Fatal("Cannot set --self-heal: application not configured with automatic sync")
}
spec.SyncPolicy.Automated.SelfHeal = appOpts.selfHeal
}
if flags.Changed("allow-empty") {
if spec.SyncPolicy == nil || !spec.SyncPolicy.IsAutomatedSyncEnabled() {
log.Fatal("Cannot set --allow-empty: application not configured with automatic sync")
}
spec.SyncPolicy.Automated.AllowEmpty = appOpts.allowEmpty
}
return visited
}
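
Both versions key off `flags.Changed`, which distinguishes a flag left at its default from one explicitly set to the default value. A small sketch with pflag (the flag name matches the diff; the rest is illustrative):

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	flags := pflag.NewFlagSet("app", pflag.ContinueOnError)
	autoPrune := flags.Bool("auto-prune", false, "prune on automated sync")

	// Explicitly passing the default still counts as "changed".
	_ = flags.Parse([]string{"--auto-prune=false"})

	if flags.Changed("auto-prune") {
		fmt.Println("user explicitly set auto-prune to", *autoPrune)
	} else {
		fmt.Println("auto-prune left at its default")
	}
}
```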

View File

@@ -267,47 +267,6 @@ func Test_setAppSpecOptions(t *testing.T) {
require.NoError(t, f.SetFlag("sync-option", "!a=1"))
assert.Nil(t, f.spec.SyncPolicy)
})
t.Run("AutoPruneFlag", func(t *testing.T) {
f := newAppOptionsFixture()
// syncPolicy is nil (automated.enabled = false)
require.NoError(t, f.SetFlag("auto-prune", "true"))
require.NotNil(t, f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.True(t, f.spec.SyncPolicy.Automated.Prune)
// automated.enabled = true
*f.spec.SyncPolicy.Automated.Enabled = true
require.NoError(t, f.SetFlag("auto-prune", "false"))
assert.True(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, f.spec.SyncPolicy.Automated.Prune)
})
t.Run("SelfHealFlag", func(t *testing.T) {
f := newAppOptionsFixture()
require.NoError(t, f.SetFlag("self-heal", "true"))
require.NotNil(t, f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.True(t, f.spec.SyncPolicy.Automated.SelfHeal)
*f.spec.SyncPolicy.Automated.Enabled = true
require.NoError(t, f.SetFlag("self-heal", "false"))
assert.True(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, f.spec.SyncPolicy.Automated.SelfHeal)
})
t.Run("AllowEmptyFlag", func(t *testing.T) {
f := newAppOptionsFixture()
require.NoError(t, f.SetFlag("allow-empty", "true"))
require.NotNil(t, f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.True(t, f.spec.SyncPolicy.Automated.AllowEmpty)
*f.spec.SyncPolicy.Automated.Enabled = true
require.NoError(t, f.SetFlag("allow-empty", "false"))
assert.True(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, f.spec.SyncPolicy.Automated.AllowEmpty)
})
t.Run("RetryLimit", func(t *testing.T) {
require.NoError(t, f.SetFlag("sync-retry-limit", "5"))
assert.Equal(t, int64(5), f.spec.SyncPolicy.Retry.Limit)

View File

@@ -27,7 +27,6 @@ type RepoOptions struct {
GCPServiceAccountKeyPath string
ForceHttpBasicAuth bool //nolint:revive //FIXME(var-naming)
UseAzureWorkloadIdentity bool
Depth int64
}
func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
@@ -54,5 +53,4 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
command.Flags().BoolVar(&opts.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force use of basic auth when connecting repository via HTTP")
command.Flags().BoolVar(&opts.UseAzureWorkloadIdentity, "use-azure-workload-identity", false, "whether to use azure workload identity for authentication")
command.Flags().BoolVar(&opts.InsecureOCIForceHTTP, "insecure-oci-force-http", false, "Use http when accessing an OCI repository")
command.Flags().Int64Var(&opts.Depth, "depth", 0, "Specify a custom depth for git clone operations. Unless specified, a full clone is performed using a depth of 0")
}
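
The removed `--depth` flag follows git's shallow-clone convention, where depth 0 means no `--depth` argument at all. A hypothetical sketch of how such a value might map onto git fetch arguments (the `fetchArgs` helper is invented for illustration, not the repo's actual git client):

```go
package main

import "fmt"

// fetchArgs is a hypothetical helper: depth 0 is treated as "full clone"
// (no --depth flag), any positive depth becomes a shallow fetch.
func fetchArgs(depth int64) []string {
	args := []string{"fetch", "origin"}
	if depth > 0 {
		args = append(args, fmt.Sprintf("--depth=%d", depth))
	}
	return args
}

func main() {
	fmt.Println(fetchArgs(0)) // [fetch origin]
	fmt.Println(fetchArgs(1)) // [fetch origin --depth=1]
}
```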

View File

@@ -1,7 +1,6 @@
package cmpserver
import (
"context"
"fmt"
"net"
"os"
@@ -86,8 +85,7 @@ func (a *ArgoCDCMPServer) Run() {
// Listen on the socket address
_ = os.Remove(config.Address())
lc := &net.ListenConfig{}
listener, err := lc.Listen(context.Background(), "unix", config.Address())
listener, err := net.Listen("unix", config.Address())
errors.CheckError(err)
log.Infof("argocd-cmp-server %s serving on %s", common.GetVersion(), listener.Addr())

View File

@@ -1,14 +1,101 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
package mocks
import (
"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
utilio "github.com/argoproj/argo-cd/v3/util/io"
"github.com/argoproj/argo-cd/v3/util/io"
mock "github.com/stretchr/testify/mock"
)
type Clientset struct {
CommitServiceClient apiclient.CommitServiceClient
// NewClientset creates a new instance of Clientset. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewClientset(t interface {
mock.TestingT
Cleanup(func())
}) *Clientset {
mock := &Clientset{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
func (c *Clientset) NewCommitServerClient() (utilio.Closer, apiclient.CommitServiceClient, error) {
return utilio.NopCloser, c.CommitServiceClient, nil
// Clientset is an autogenerated mock type for the Clientset type
type Clientset struct {
mock.Mock
}
type Clientset_Expecter struct {
mock *mock.Mock
}
func (_m *Clientset) EXPECT() *Clientset_Expecter {
return &Clientset_Expecter{mock: &_m.Mock}
}
// NewCommitServerClient provides a mock function for the type Clientset
func (_mock *Clientset) NewCommitServerClient() (io.Closer, apiclient.CommitServiceClient, error) {
ret := _mock.Called()
if len(ret) == 0 {
panic("no return value specified for NewCommitServerClient")
}
var r0 io.Closer
var r1 apiclient.CommitServiceClient
var r2 error
if returnFunc, ok := ret.Get(0).(func() (io.Closer, apiclient.CommitServiceClient, error)); ok {
return returnFunc()
}
if returnFunc, ok := ret.Get(0).(func() io.Closer); ok {
r0 = returnFunc()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(io.Closer)
}
}
if returnFunc, ok := ret.Get(1).(func() apiclient.CommitServiceClient); ok {
r1 = returnFunc()
} else {
if ret.Get(1) != nil {
r1 = ret.Get(1).(apiclient.CommitServiceClient)
}
}
if returnFunc, ok := ret.Get(2).(func() error); ok {
r2 = returnFunc()
} else {
r2 = ret.Error(2)
}
return r0, r1, r2
}
// Clientset_NewCommitServerClient_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'NewCommitServerClient'
type Clientset_NewCommitServerClient_Call struct {
*mock.Call
}
// NewCommitServerClient is a helper method to define mock.On call
func (_e *Clientset_Expecter) NewCommitServerClient() *Clientset_NewCommitServerClient_Call {
return &Clientset_NewCommitServerClient_Call{Call: _e.mock.On("NewCommitServerClient")}
}
func (_c *Clientset_NewCommitServerClient_Call) Run(run func()) *Clientset_NewCommitServerClient_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *Clientset_NewCommitServerClient_Call) Return(closer io.Closer, commitServiceClient apiclient.CommitServiceClient, err error) *Clientset_NewCommitServerClient_Call {
_c.Call.Return(closer, commitServiceClient, err)
return _c
}
func (_c *Clientset_NewCommitServerClient_Call) RunAndReturn(run func() (io.Closer, apiclient.CommitServiceClient, error)) *Clientset_NewCommitServerClient_Call {
_c.Call.Return(run)
return _c
}
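
A hedged sketch of how the generated expecter API is typically consumed in a test, assuming the generated `NewClientset` constructor exists alongside the code above and the mocks package lives at `commitserver/apiclient/mocks`:

```go
package mocks_test

import (
	"testing"

	"github.com/stretchr/testify/require"

	"github.com/argoproj/argo-cd/v3/commitserver/apiclient/mocks"
)

func Test_NewCommitServerClient_sketch(t *testing.T) {
	// NewClientset registers a cleanup that asserts all expectations were met.
	cs := mocks.NewClientset(t)
	cs.EXPECT().NewCommitServerClient().Return(nil, nil, nil)

	closer, client, err := cs.NewCommitServerClient()
	require.NoError(t, err)
	require.Nil(t, closer)
	require.Nil(t, client)
}
```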

View File

@@ -229,7 +229,7 @@ func (s *Service) initGitClient(logCtx *log.Entry, r *apiclient.CommitHydratedMa
}
logCtx.Debugf("Fetching repo %s", r.Repo.Repo)
err = gitClient.Fetch("", 0)
err = gitClient.Fetch("")
if err != nil {
cleanupOrLog()
return nil, "", nil, fmt.Errorf("failed to clone repo: %w", err)

View File

@@ -82,7 +82,7 @@ func Test_CommitHydratedManifests(t *testing.T) {
t.Parallel()
service, mockRepoClientFactory := newServiceWithMocks(t)
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(nil, assert.AnError).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(nil, assert.AnError).Once()
_, err := service.CommitHydratedManifests(t.Context(), validRequest)
require.Error(t, err)
@@ -94,14 +94,14 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("it-worked!", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("it-worked!", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
resp, err := service.CommitHydratedManifests(t.Context(), validRequest)
require.NoError(t, err)
@@ -114,14 +114,14 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("root-and-blank-sha", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("root-and-blank-sha", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithRootAndBlank := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
@@ -161,15 +161,15 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().RemoveContents([]string{"apps/staging"}).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("subdir-path-sha", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("RemoveContents", []string{"apps/staging"}).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("subdir-path-sha", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithSubdirPath := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
@@ -201,15 +201,15 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().RemoveContents([]string{"apps/production", "apps/staging"}).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("mixed-paths-sha", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("RemoveContents", []string{"apps/production", "apps/staging"}).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("mixed-paths-sha", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithMixedPaths := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
@@ -257,14 +257,14 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("it-worked!", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("it-worked!", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithEmptyPaths := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{

View File

@@ -229,7 +229,6 @@ const (
// AnnotationKeyAppSkipReconcile tells the Application to skip the Application controller reconcile.
// Skip reconcile when the value is "true" or any other string values that can be strconv.ParseBool() to be true.
AnnotationKeyAppSkipReconcile = "argocd.argoproj.io/skip-reconcile"
// LabelKeyComponentRepoServer is the label key to identify the component as repo-server
LabelKeyComponentRepoServer = "app.kubernetes.io/component"
// LabelValueComponentRepoServer is the label value for the repo-server component
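
As the comment on `AnnotationKeyAppSkipReconcile` notes, any value that `strconv.ParseBool` reads as true triggers the skip. A small sketch of that check (the helper is illustrative, not the controller's actual code):

```go
package main

import (
	"fmt"
	"strconv"
)

const annotationKeyAppSkipReconcile = "argocd.argoproj.io/skip-reconcile"

// shouldSkip is an illustrative helper: "true", "1", "T", etc. all parse
// as true and therefore skip reconciliation; unparsable values do not.
func shouldSkip(annotations map[string]string) bool {
	v, ok := annotations[annotationKeyAppSkipReconcile]
	if !ok {
		return false
	}
	skip, err := strconv.ParseBool(v)
	return err == nil && skip
}

func main() {
	fmt.Println(shouldSkip(map[string]string{annotationKeyAppSkipReconcile: "1"}))    // true
	fmt.Println(shouldSkip(map[string]string{annotationKeyAppSkipReconcile: "nope"})) // false
}
```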

View File

@@ -39,7 +39,6 @@ import (
"k8s.io/client-go/informers"
informerv1 "k8s.io/client-go/informers/apps/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"k8s.io/utils/ptr"
@@ -467,7 +466,7 @@ func (ctrl *ApplicationController) handleObjectUpdated(managedByApp map[string]b
// Enforce application's permission for the source namespace
_, err = ctrl.getAppProj(app)
if err != nil {
logCtx.WithError(err).Errorf("Unable to determine project for app")
logCtx.Errorf("Unable to determine project for app '%s': %v", app.QualifiedName(), err)
continue
}
@@ -903,12 +902,12 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
clusters, err := ctrl.db.ListClusters(ctx)
if err != nil {
log.WithError(err).Warn("Cannot init sharding. Error while querying clusters list from database")
log.Warnf("Cannot init sharding. Error while querying clusters list from database: %v", err)
} else {
appItems, err := ctrl.getAppList(metav1.ListOptions{})
if err != nil {
log.WithError(err).Warn("Cannot init sharding. Error while querying application list from database")
log.Warnf("Cannot init sharding. Error while querying application list from database: %v", err)
} else {
ctrl.clusterSharding.Init(clusters, appItems)
}
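
The recurring log changes in this file swap logrus's structured style for the formatted style. Both emit at the same level; the difference is whether the error lands in a discrete field or inside the message string. A minimal sketch:

```go
package main

import (
	"errors"

	log "github.com/sirupsen/logrus"
)

func main() {
	err := errors.New("connection refused")

	// Structured: the error is attached as an "error" field.
	log.WithError(err).Warn("Cannot init sharding. Error while querying clusters list from database")

	// Formatted: the error is interpolated into the message itself.
	log.Warnf("Cannot init sharding. Error while querying clusters list from database: %v", err)
}
```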
@@ -1001,29 +1000,29 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
appKey, shutdown := ctrl.appOperationQueue.Get()
if shutdown {
processNext = false
return processNext
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.WithField("appkey", appKey).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appOperationQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
return processNext
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return processNext
return
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
app := origApp.DeepCopy()
logCtx := log.WithFields(applog.GetAppLogFields(app))
@@ -1042,8 +1041,8 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
// We cannot rely on informer since applications might be updated by both application controller and api server.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
if err != nil {
logCtx.WithError(err).Error("Failed to retrieve latest application state")
return processNext
logCtx.Errorf("Failed to retrieve latest application state: %v", err)
return
}
app = freshApp
}
@@ -1065,7 +1064,7 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
}
ts.AddCheckpoint("finalize_application_deletion_ms")
}
return processNext
return
}
func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processNext bool) {
@@ -1074,26 +1073,26 @@ func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processN
defer func() {
if r := recover(); r != nil {
log.WithField("appkey", key).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appComparisonTypeRefreshQueue.Done(key)
}()
if shutdown {
processNext = false
return processNext
return
}
if parts := strings.Split(key, "/"); len(parts) != 3 {
log.WithField("appkey", key).Warn("Unexpected key format in appComparisonTypeRefreshTypeQueue. Key should consist of namespace/name/comparisonType")
log.Warnf("Unexpected key format in appComparisonTypeRefreshTypeQueue. Key should consists of namespace/name/comparisonType but got: %s", key)
} else {
compareWith, err := strconv.Atoi(parts[2])
if err != nil {
log.WithField("appkey", key).WithError(err).Warn("Unable to parse comparison type")
return processNext
log.Warnf("Unable to parse comparison type: %v", err)
return
}
ctrl.requestAppRefresh(ctrl.toAppQualifiedName(parts[1], parts[0]), CompareWith(compareWith).Pointer(), nil)
}
return processNext
return
}
func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool) {
@@ -1102,35 +1101,35 @@ func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool)
defer func() {
if r := recover(); r != nil {
log.WithField("key", key).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.projectRefreshQueue.Done(key)
}()
if shutdown {
processNext = false
return processNext
return
}
obj, exists, err := ctrl.projInformer.GetIndexer().GetByKey(key)
if err != nil {
log.WithField("key", key).WithError(err).Error("Failed to get project from informer index")
return processNext
log.Errorf("Failed to get project '%s' from informer index: %+v", key, err)
return
}
if !exists {
// This happens after appproj was deleted, but the work queue still had an entry for it.
return processNext
return
}
origProj, ok := obj.(*appv1.AppProject)
if !ok {
log.WithField("key", key).Warnf("Key in index is not an appproject")
return processNext
log.Warnf("Key '%s' in index is not an appproject", key)
return
}
if origProj.DeletionTimestamp != nil && origProj.HasFinalizer() {
if err := ctrl.finalizeProjectDeletion(origProj.DeepCopy()); err != nil {
log.WithError(err).Warn("Failed to finalize project deletion")
log.Warnf("Failed to finalize project deletion: %v", err)
}
}
return processNext
return
}
func (ctrl *ApplicationController) finalizeProjectDeletion(proj *appv1.AppProject) error {
@@ -1195,7 +1194,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
if err != nil {
if !apierrors.IsNotFound(err) {
logCtx.WithError(err).Error("Unable to get refreshed application info prior deleting resources")
logCtx.Errorf("Unable to get refreshed application info prior deleting resources: %v", err)
}
return nil
}
@@ -1205,7 +1204,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
}
destCluster, err := argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
if err != nil {
logCtx.WithError(err).Warn("Unable to get destination cluster")
logCtx.Warnf("Unable to get destination cluster: %v", err)
app.UnSetCascadedDeletion()
app.UnSetPostDeleteFinalizerAll()
if err := ctrl.updateFinalizers(app); err != nil {
@@ -1220,11 +1219,6 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
}
config := metrics.AddMetricsTransportWrapper(ctrl.metricsServer, app, clusterRESTConfig)
// Apply impersonation config if necessary
if err := ctrl.applyImpersonationConfig(config, proj, app, destCluster); err != nil {
return fmt.Errorf("cannot apply impersonation: %w", err)
}
if app.CascadedDeletion() {
deletionApproved := app.IsDeletionConfirmed(app.DeletionTimestamp.Time)
@@ -1373,7 +1367,7 @@ func (ctrl *ApplicationController) setAppCondition(app *appv1.Application, condi
_, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Patch(context.Background(), app.Name, types.MergePatchType, patch, metav1.PatchOptions{})
}
if err != nil {
logCtx.WithError(err).Error("Unable to set application condition")
logCtx.Errorf("Unable to set application condition: %v", err)
}
}
@@ -1517,7 +1511,7 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
// force app refresh using the CompareWithLatest comparison type and trigger app reconciliation loop
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatestForceResolve.Pointer(), nil)
} else {
logCtx.WithError(err).Warn("Fails to requeue application")
logCtx.Warnf("Fails to requeue application: %v", err)
}
}
ts.AddCheckpoint("request_app_refresh_ms")
@@ -1549,13 +1543,13 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
}
patchJSON, err := json.Marshal(patch)
if err != nil {
logCtx.WithError(err).Error("error marshaling json")
logCtx.Errorf("error marshaling json: %v", err)
return
}
if app.Status.OperationState != nil && app.Status.OperationState.FinishedAt != nil && state.FinishedAt == nil {
patchJSON, err = jsonpatch.MergeMergePatches(patchJSON, []byte(`{"status": {"operationState": {"finishedAt": null}}}`))
if err != nil {
logCtx.WithError(err).Error("error merging operation state patch")
logCtx.Errorf("error merging operation state patch: %v", err)
return
}
}
@@ -1569,7 +1563,7 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
}
// kube.RetryUntilSucceed logs failed attempts at "debug" level, but we want to know if this fails. Log a
// warning.
logCtx.WithError(err).Warn("error patching application with operation state")
logCtx.Warnf("error patching application with operation state: %v", err)
return fmt.Errorf("error patching application with operation state: %w", err)
}
return nil
@@ -1598,7 +1592,7 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
destCluster, err := argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
if err != nil {
logCtx.WithError(err).Warn("Unable to get destination cluster, setting dest_server label to empty string in sync metric")
logCtx.Warnf("Unable to get destination cluster, setting dest_server label to empty string in sync metric: %v", err)
}
destServer := ""
if destCluster != nil {
@@ -1615,7 +1609,7 @@ func (ctrl *ApplicationController) writeBackToInformer(app *appv1.Application) {
logCtx := log.WithFields(applog.GetAppLogFields(app)).WithField("informer-writeBack", true)
err := ctrl.appInformer.GetStore().Update(app)
if err != nil {
logCtx.WithError(err).Error("failed to update informer store")
logCtx.Errorf("failed to update informer store: %v", err)
return
}
}
@@ -1636,12 +1630,12 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
appKey, shutdown := ctrl.appRefreshQueue.Get()
if shutdown {
processNext = false
return processNext
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.WithField("appkey", appKey).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
// We want the app operation update to happen after the sync, so there's no race condition
// that leaves app updates not proceeding. See https://github.com/argoproj/argo-cd/issues/18500.
@@ -1650,23 +1644,23 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
return processNext
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return processNext
return
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
origApp = origApp.DeepCopy()
needRefresh, refreshType, comparisonLevel := ctrl.needRefreshAppStatus(origApp, ctrl.statusRefreshTimeout, ctrl.statusHardRefreshTimeout)
if !needRefresh {
return processNext
return
}
app := origApp.DeepCopy()
logCtx := log.WithFields(applog.GetAppLogFields(app)).WithFields(log.Fields{
@@ -1708,13 +1702,13 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
if tree, err = ctrl.getResourceTree(destCluster, app, managedResources); err == nil {
app.Status.Summary = tree.GetSummary(app)
if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), tree); err != nil {
logCtx.WithError(err).Error("Failed to cache resources tree")
return processNext
logCtx.Errorf("Failed to cache resources tree: %v", err)
return
}
}
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
return processNext
return
}
logCtx.Warnf("Failed to get cached managed resources for tree reconciliation, fall back to full reconciliation")
}
@@ -1730,20 +1724,20 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), &appv1.ApplicationTree{}); err != nil {
logCtx.WithError(err).Warn("failed to set app resource tree")
logCtx.Warnf("failed to set app resource tree: %v", err)
}
if err := ctrl.cache.SetAppManagedResources(app.InstanceName(ctrl.namespace), nil); err != nil {
logCtx.WithError(err).Warn("failed to set app managed resources tree")
logCtx.Warnf("failed to set app managed resources tree: %v", err)
}
ts.AddCheckpoint("process_refresh_app_conditions_errors_ms")
return processNext
return
}
destCluster, err = argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
if err != nil {
logCtx.WithError(err).Error("Failed to get destination cluster")
logCtx.Errorf("Failed to get destination cluster: %v", err)
// exit the reconciliation. ctrl.refreshAppConditions should have caught the error
return processNext
return
}
var localManifests []string
@@ -1783,8 +1777,8 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
ts.AddCheckpoint("compare_app_state_ms")
if stderrors.Is(err, ErrCompareStateRepo) {
logCtx.WithError(err).Warn("Ignoring temporary failed attempt to compare app state against repo")
return processNext // short circuit if git error is encountered
logCtx.Warnf("Ignoring temporary failed attempt to compare app state against repo: %v", err)
return // short circuit if git error is encountered
}
for k, v := range compareResult.timings {
@@ -1797,7 +1791,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
tree, err := ctrl.setAppManagedResources(destCluster, app, compareResult)
ts.AddCheckpoint("set_app_managed_resources_ms")
if err != nil {
logCtx.WithError(err).Error("Failed to cache app resources")
logCtx.Errorf("Failed to cache app resources: %v", err)
} else {
app.Status.Summary = tree.GetSummary(app)
}
@@ -1849,54 +1843,60 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}
if err := ctrl.updateFinalizers(app); err != nil {
logCtx.WithError(err).Error("Failed to update finalizers")
logCtx.Errorf("Failed to update finalizers: %v", err)
}
}
ts.AddCheckpoint("process_finalizers_ms")
return processNext
return
}
func (ctrl *ApplicationController) processAppHydrateQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appHydrateQueue.Get()
if shutdown {
processNext = false
return processNext
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.WithField("appkey", appKey).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appHydrateQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
return processNext
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return processNext
return
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
ctrl.hydrator.ProcessAppHydrateQueueItem(origApp.DeepCopy())
ctrl.hydrator.ProcessAppHydrateQueueItem(origApp)
log.WithFields(applog.GetAppLogFields(origApp)).Debug("Successfully processed app hydrate queue item")
return processNext
return
}
func (ctrl *ApplicationController) processHydrationQueueItem() (processNext bool) {
hydrationKey, shutdown := ctrl.hydrationQueue.Get()
if shutdown {
processNext = false
return processNext
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.hydrationQueue.Done(hydrationKey)
}()
logCtx := log.WithFields(log.Fields{
"sourceRepoURL": hydrationKey.SourceRepoURL,
@@ -1904,19 +1904,12 @@ func (ctrl *ApplicationController) processHydrationQueueItem() (processNext bool
"destinationBranch": hydrationKey.DestinationBranch,
})
defer func() {
if r := recover(); r != nil {
logCtx.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.hydrationQueue.Done(hydrationKey)
}()
logCtx.Debug("Processing hydration queue item")
ctrl.hydrator.ProcessHydrationQueueItem(hydrationKey)
logCtx.Debug("Successfully processed hydration queue item")
return processNext
return
}
func resourceStatusKey(res appv1.ResourceStatus) string {
@@ -2019,11 +2012,11 @@ func (ctrl *ApplicationController) normalizeApplication(orig, app *appv1.Applica
patch, modified, err := diff.CreateTwoWayMergePatch(orig, app, appv1.Application{})
if err != nil {
logCtx.WithError(err).Error("error constructing app spec patch")
logCtx.Errorf("error constructing app spec patch: %v", err)
} else if modified {
_, err := ctrl.PatchAppWithWriteBack(context.Background(), app.Name, app.Namespace, types.MergePatchType, patch, metav1.PatchOptions{})
if err != nil {
logCtx.WithError(err).Error("Error persisting normalized application spec")
logCtx.Errorf("Error persisting normalized application spec: %v", err)
} else {
logCtx.Infof("Normalized app spec: %s", string(patch))
}
@@ -2078,12 +2071,12 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: orig.GetAnnotations()}, Status: orig.Status},
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: newAnnotations}, Status: *newStatus})
if err != nil {
logCtx.WithError(err).Error("Error constructing app status patch")
return patchDuration
logCtx.Errorf("Error constructing app status patch: %v", err)
return
}
if !modified {
logCtx.Infof("No status changes. Skipping patch")
return patchDuration
return
}
// calculate time for patch call
start := time.Now()
@@ -2092,7 +2085,7 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
}()
_, err = ctrl.PatchAppWithWriteBack(context.Background(), orig.Name, orig.Namespace, types.MergePatchType, patch, metav1.PatchOptions{})
if err != nil {
logCtx.WithError(err).Warn("Error updating application")
logCtx.Warnf("Error updating application: %v", err)
} else {
logCtx.Infof("Update successful")
}
@@ -2237,11 +2230,11 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
if stderrors.Is(err, argo.ErrAnotherOperationInProgress) {
// skipping auto-sync because another operation is in progress and was not noticed due to stale data in informer
// it is safe to skip auto-sync because it is already running
logCtx.WithError(err).Warnf("Failed to initiate auto-sync to %s", desiredRevisions)
logCtx.Warnf("Failed to initiate auto-sync to %s: %v", desiredRevisions, err)
return nil, 0
}
logCtx.WithError(err).Errorf("Failed to initiate auto-sync to %s", desiredRevisions)
logCtx.Errorf("Failed to initiate auto-sync to %s: %v", desiredRevisions, err)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: err.Error()}, setOpTime
}
ctrl.writeBackToInformer(updatedApp)
@@ -2367,7 +2360,7 @@ func (ctrl *ApplicationController) canProcessApp(obj any) bool {
return false
}
} else {
logCtx.WithError(err).Debugf("Unable to determine if Application should skip reconcile based on annotation %s", common.AnnotationKeyAppSkipReconcile)
logCtx.Debugf("Unable to determine if Application should skip reconcile based on annotation %s: %v", common.AnnotationKeyAppSkipReconcile, err)
}
}
}
@@ -2622,22 +2615,4 @@ func (ctrl *ApplicationController) logAppEvent(ctx context.Context, a *appv1.App
ctrl.auditLogger.LogAppEvent(a, eventInfo, message, "", eventLabels)
}
func (ctrl *ApplicationController) applyImpersonationConfig(config *rest.Config, proj *appv1.AppProject, app *appv1.Application, destCluster *appv1.Cluster) error {
impersonationEnabled, err := ctrl.settingsMgr.IsImpersonationEnabled()
if err != nil {
return fmt.Errorf("error getting impersonation setting: %w", err)
}
if !impersonationEnabled {
return nil
}
user, err := deriveServiceAccountToImpersonate(proj, app, destCluster)
if err != nil {
return fmt.Errorf("error deriving service account to impersonate: %w", err)
}
config.Impersonate = rest.ImpersonationConfig{
UserName: user,
}
return nil
}
type ClusterFilterFunction func(c *appv1.Cluster, distributionFunction sharding.DistributionFunction) bool

View File

@@ -5,7 +5,6 @@ import (
"encoding/json"
"errors"
"fmt"
"strconv"
"testing"
"time"
@@ -95,11 +94,11 @@ func (m *MockKubectl) DeleteResource(ctx context.Context, config *rest.Config, g
return m.Kubectl.DeleteResource(ctx, config, gvk, name, namespace, deleteOptions)
}
func newFakeController(ctx context.Context, data *fakeData, repoErr error) *ApplicationController {
return newFakeControllerWithResync(ctx, data, time.Minute, repoErr, nil)
func newFakeController(data *fakeData, repoErr error) *ApplicationController {
return newFakeControllerWithResync(data, time.Minute, repoErr, nil)
}
func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncPeriod time.Duration, repoErr, revisionPathsErr error) *ApplicationController {
func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration, repoErr, revisionPathsErr error) *ApplicationController {
var clust corev1.Secret
err := yaml.Unmarshal([]byte(fakeCluster), &clust)
if err != nil {
@@ -107,33 +106,33 @@ func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncP
}
// Mock out call to GenerateManifest
mockRepoClient := &mockrepoclient.RepoServerServiceClient{}
mockRepoClient := mockrepoclient.RepoServerServiceClient{}
if len(data.manifestResponses) > 0 {
for _, response := range data.manifestResponses {
if repoErr != nil {
mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(response, repoErr).Once()
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(response, repoErr).Once()
} else {
mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(response, nil).Once()
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(response, nil).Once()
}
}
} else {
if repoErr != nil {
mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(data.manifestResponse, repoErr).Once()
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(data.manifestResponse, repoErr).Once()
} else {
mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(data.manifestResponse, nil).Once()
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(data.manifestResponse, nil).Once()
}
}
if revisionPathsErr != nil {
mockRepoClient.EXPECT().UpdateRevisionForPaths(mock.Anything, mock.Anything).Return(nil, revisionPathsErr)
mockRepoClient.On("UpdateRevisionForPaths", mock.Anything, mock.Anything).Return(nil, revisionPathsErr)
} else {
mockRepoClient.EXPECT().UpdateRevisionForPaths(mock.Anything, mock.Anything).Return(data.updateRevisionForPathsResponse, nil)
mockRepoClient.On("UpdateRevisionForPaths", mock.Anything, mock.Anything).Return(data.updateRevisionForPathsResponse, nil)
}
mockRepoClientset := &mockrepoclient.Clientset{RepoServerServiceClient: mockRepoClient}
mockRepoClientset := mockrepoclient.Clientset{RepoServerServiceClient: &mockRepoClient}
mockCommitClientset := &mockcommitclient.Clientset{}
mockCommitClientset := mockcommitclient.Clientset{}
secret := corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
@@ -158,15 +157,15 @@ func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncP
runtimeObjs := []runtime.Object{&clust, &secret, &cm}
runtimeObjs = append(runtimeObjs, data.additionalObjs...)
kubeClient := fake.NewClientset(runtimeObjs...)
settingsMgr := settings.NewSettingsManager(ctx, kubeClient, test.FakeArgoCDNamespace)
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClient, test.FakeArgoCDNamespace)
kubectl := &MockKubectl{Kubectl: &kubetest.MockKubectlCmd{}}
ctrl, err := NewApplicationController(
test.FakeArgoCDNamespace,
settingsMgr,
kubeClient,
appclientset.NewSimpleClientset(data.apps...),
mockRepoClientset,
mockCommitClientset,
&mockRepoClientset,
&mockCommitClientset,
appstatecache.NewCache(
cacheutil.NewCache(cacheutil.NewInMemoryCache(1*time.Minute)),
1*time.Minute,
@@ -197,7 +196,7 @@ func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncP
false,
)
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
db.On("GetApplicationControllerReplicas").Return(1)
// Setting a default sharding algorithm for the tests where we cannot set it.
ctrl.clusterSharding = sharding.NewClusterSharding(db, 0, 1, common.DefaultShardingAlgorithm)
if err != nil {
@@ -207,25 +206,27 @@ func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncP
defer cancelProj()
cancelApp := test.StartInformer(ctrl.appInformer)
defer cancelApp()
clusterCacheMock := &mocks.ClusterCache{}
clusterCacheMock.EXPECT().IsNamespaced(mock.Anything).Return(true, nil)
clusterCacheMock.EXPECT().GetOpenAPISchema().Return(nil)
clusterCacheMock.EXPECT().GetGVKParser().Return(nil)
clusterCacheMock := mocks.ClusterCache{}
clusterCacheMock.On("IsNamespaced", mock.Anything).Return(true, nil)
clusterCacheMock.On("GetOpenAPISchema").Return(nil, nil)
clusterCacheMock.On("GetGVKParser").Return(nil)
mockStateCache := &mockstatecache.LiveStateCache{}
ctrl.appStateManager.(*appStateManager).liveStateCache = mockStateCache
ctrl.stateCache = mockStateCache
mockStateCache.EXPECT().IsNamespaced(mock.Anything, mock.Anything).Return(true, nil)
mockStateCache.EXPECT().GetManagedLiveObjs(mock.Anything, mock.Anything, mock.Anything).Return(data.managedLiveObjs, nil)
mockStateCache.EXPECT().GetVersionsInfo(mock.Anything).Return("v1.2.3", nil, nil)
mockStateCache := mockstatecache.LiveStateCache{}
ctrl.appStateManager.(*appStateManager).liveStateCache = &mockStateCache
ctrl.stateCache = &mockStateCache
mockStateCache.On("IsNamespaced", mock.Anything, mock.Anything).Return(true, nil)
mockStateCache.On("GetManagedLiveObjs", mock.Anything, mock.Anything, mock.Anything).Return(data.managedLiveObjs, nil)
mockStateCache.On("GetVersionsInfo", mock.Anything).Return("v1.2.3", nil, nil)
response := make(map[kube.ResourceKey]v1alpha1.ResourceNode)
for k, v := range data.namespacedResources {
response[k] = v.ResourceNode
}
mockStateCache.EXPECT().GetNamespaceTopLevelResources(mock.Anything, mock.Anything).Return(response, nil)
mockStateCache.EXPECT().IterateResources(mock.Anything, mock.Anything).Return(nil)
mockStateCache.EXPECT().GetClusterCache(mock.Anything).Return(clusterCacheMock, nil)
mockStateCache.EXPECT().IterateHierarchyV2(mock.Anything, mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Cluster, keys []kube.ResourceKey, action func(_ v1alpha1.ResourceNode, _ string) bool) {
mockStateCache.On("GetNamespaceTopLevelResources", mock.Anything, mock.Anything).Return(response, nil)
mockStateCache.On("IterateResources", mock.Anything, mock.Anything).Return(nil)
mockStateCache.On("GetClusterCache", mock.Anything).Return(&clusterCacheMock, nil)
mockStateCache.On("IterateHierarchyV2", mock.Anything, mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
keys := args[1].([]kube.ResourceKey)
action := args[2].(func(child v1alpha1.ResourceNode, appName string) bool)
for _, key := range keys {
appName := ""
if res, ok := data.namespacedResources[key]; ok {
@@ -595,7 +596,7 @@ func newFakeServiceAccount() map[string]any {
func TestAutoSync(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -613,7 +614,7 @@ func TestAutoSyncEnabledSetToTrue(t *testing.T) {
app := newFakeApp()
enable := true
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: &enable}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -634,7 +635,7 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.SyncPolicy.Automated.SelfHeal = false
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"z", "x", "v"},
@@ -649,7 +650,7 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.SyncPolicy.Automated.SelfHeal = false
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"a", "b", "c"},
@@ -665,7 +666,7 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
func TestAutoSyncNotAllowEmpty(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated.Prune = true
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -678,7 +679,7 @@ func TestAutoSyncAllowEmpty(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated.Prune = true
app.Spec.SyncPolicy.Automated.AllowEmpty = true
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -692,7 +693,7 @@ func TestSkipAutoSync(t *testing.T) {
// Set current to 'aaaaa', desired to 'aaaa' and mark system OutOfSync
t.Run("PreviouslySyncedToRevision", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
@@ -707,7 +708,7 @@ func TestSkipAutoSync(t *testing.T) {
// Verify we skip when we are already Synced (even if revision is different)
t.Run("AlreadyInSyncedState", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -723,7 +724,7 @@ func TestSkipAutoSync(t *testing.T) {
t.Run("AutoSyncIsDisabled", func(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy = nil
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -740,7 +741,7 @@ func TestSkipAutoSync(t *testing.T) {
app := newFakeApp()
enable := false
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: &enable}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -757,7 +758,7 @@ func TestSkipAutoSync(t *testing.T) {
app := newFakeApp()
now := metav1.Now()
app.DeletionTimestamp = &now
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -783,7 +784,7 @@ func TestSkipAutoSync(t *testing.T) {
Source: *app.Spec.Source.DeepCopy(),
},
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -807,7 +808,7 @@ func TestSkipAutoSync(t *testing.T) {
Source: *app.Spec.Source.DeepCopy(),
},
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -821,7 +822,7 @@ func TestSkipAutoSync(t *testing.T) {
t.Run("NeedsToPruneResourcesOnlyButAutomatedPruneDisabled", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -847,7 +848,7 @@ func TestAutoSyncIndicateError(t *testing.T) {
},
},
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
@@ -907,7 +908,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(t.Context(), "my-app", metav1.GetOptions{})
@@ -925,7 +926,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
},
},
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
app.Status.OperationState.SyncResult.Sources[0].Helm = &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
@@ -972,7 +973,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
app.SetCascadedDeletion(v1alpha1.ResourcesFinalizerName)
app.DeletionTimestamp = &now
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
patched := false
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
defaultReactor := fakeAppCs.ReactionChain[0]
@@ -1017,7 +1018,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
appObj := kube.MustToUnstructured(&app)
cm := newFakeCM()
strayObj := kube.MustToUnstructured(&cm)
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj, &restrictedProj},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(appObj): appObj,
@@ -1058,7 +1059,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
app := newFakeAppWithDestName()
app.SetCascadedDeletion(v1alpha1.ResourcesFinalizerName)
app.DeletionTimestamp = &now
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
patched := false
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
defaultReactor := fakeAppCs.ReactionChain[0]
@@ -1084,7 +1085,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
testShouldDelete := func(app *v1alpha1.Application) {
appObj := kube.MustToUnstructured(&app)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(appObj): appObj,
}}, nil)
@@ -1118,7 +1119,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
app := newFakeApp()
app.SetPostDeleteFinalizer()
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{fakePostDeleteHook},
}},
@@ -1160,7 +1161,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
},
}
require.NoError(t, unstructured.SetNestedField(liveHook.Object, conditions, "status", "conditions"))
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{fakePostDeleteHook},
}},
@@ -1204,7 +1205,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
},
}
require.NoError(t, unstructured.SetNestedField(liveHook.Object, conditions, "status", "conditions"))
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{fakeRoleBinding, fakeRole, fakeServiceAccount, fakePostDeleteHook},
}},
@@ -1245,117 +1246,6 @@ func TestFinalizeAppDeletion(t *testing.T) {
})
}
func TestFinalizeAppDeletionWithImpersonation(t *testing.T) {
type fixture struct {
application *v1alpha1.Application
controller *ApplicationController
}
setup := func(destinationNamespace, serviceAccountName string) *fixture {
app := newFakeApp()
app.Status.OperationState = nil
app.Status.History = nil
now := metav1.Now()
app.DeletionTimestamp = &now
project := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Namespace: test.FakeArgoCDNamespace,
Name: "default",
},
Spec: v1alpha1.AppProjectSpec{
SourceRepos: []string{"*"},
Destinations: []v1alpha1.ApplicationDestination{
{
Server: "*",
Namespace: "*",
},
},
DestinationServiceAccounts: []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://localhost:6443",
Namespace: destinationNamespace,
DefaultServiceAccount: serviceAccountName,
},
},
},
}
additionalObjs := []runtime.Object{}
if serviceAccountName != "" {
syncServiceAccount := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: serviceAccountName,
Namespace: test.FakeDestNamespace,
},
}
additionalObjs = append(additionalObjs, syncServiceAccount)
}
data := fakeData{
apps: []runtime.Object{app, project},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: "https://localhost:6443",
Revision: "abc123",
},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{},
configMapData: map[string]string{
"application.sync.impersonation.enabled": strconv.FormatBool(true),
},
additionalObjs: additionalObjs,
}
ctrl := newFakeController(t.Context(), &data, nil)
return &fixture{
application: app,
controller: ctrl,
}
}
t.Run("no matching impersonation service account is configured", func(t *testing.T) {
// given impersonation is enabled but no matching service account exists
f := setup(test.FakeDestNamespace, "")
// when
err := f.controller.finalizeApplicationDeletion(f.application, func(_ string) ([]*v1alpha1.Cluster, error) {
return []*v1alpha1.Cluster{}, nil
})
// then deletion should fail due to impersonation error
require.Error(t, err)
assert.Contains(t, err.Error(), "error deriving service account to impersonate")
})
t.Run("valid impersonation service account is configured", func(t *testing.T) {
// given impersonation is enabled with valid service account
f := setup(test.FakeDestNamespace, "test-sa")
// when
err := f.controller.finalizeApplicationDeletion(f.application, func(_ string) ([]*v1alpha1.Cluster, error) {
return []*v1alpha1.Cluster{}, nil
})
// then deletion should succeed
require.NoError(t, err)
})
t.Run("invalid application destination cluster", func(t *testing.T) {
// given impersonation is enabled but destination cluster does not exist
f := setup(test.FakeDestNamespace, "test-sa")
f.application.Spec.Destination.Server = "https://invalid-cluster:6443"
f.application.Spec.Destination.Name = "invalid"
// when
err := f.controller.finalizeApplicationDeletion(f.application, func(_ string) ([]*v1alpha1.Cluster, error) {
return []*v1alpha1.Cluster{}, nil
})
// then deletion should still succeed by removing finalizers
require.NoError(t, err)
})
}
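
The removed test above exercises `deriveServiceAccountToImpersonate` against the project's `DestinationServiceAccounts` list. A hedged sketch of the matching rule the test implies — first entry whose server and namespace match the app's destination wins — with simplified local types; wildcard and glob handling are omitted, so this is not the exact Argo CD logic.

```go
package example

import "fmt"

// destinationServiceAccount mirrors the project fields used in the test above.
type destinationServiceAccount struct {
	Server                string
	Namespace             string
	DefaultServiceAccount string
}

// deriveServiceAccount picks the first entry matching the app's destination
// and renders the standard Kubernetes service-account username.
func deriveServiceAccount(entries []destinationServiceAccount, server, namespace string) (string, error) {
	for _, e := range entries {
		if e.Server == server && e.Namespace == namespace && e.DefaultServiceAccount != "" {
			return fmt.Sprintf("system:serviceaccount:%s:%s", namespace, e.DefaultServiceAccount), nil
		}
	}
	return "", fmt.Errorf("no matching service account for %s/%s", server, namespace)
}
```
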
// TestNormalizeApplication verifies we normalize an application during reconciliation
func TestNormalizeApplication(t *testing.T) {
defaultProj := v1alpha1.AppProject{
@@ -1389,7 +1279,7 @@ func TestNormalizeApplication(t *testing.T) {
{
// Verify we normalize the app because project is missing
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
key, _ := cache.MetaNamespaceKeyFunc(app)
ctrl.appRefreshQueue.AddRateLimited(key)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
@@ -1411,7 +1301,7 @@ func TestNormalizeApplication(t *testing.T) {
// Verify we don't unnecessarily normalize app when project is set
app.Spec.Project = "default"
data.apps[0] = app
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
key, _ := cache.MetaNamespaceKeyFunc(app)
ctrl.appRefreshQueue.AddRateLimited(key)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
@@ -1436,7 +1326,7 @@ func TestHandleAppUpdated(t *testing.T) {
app.Spec.Destination.Server = v1alpha1.KubernetesInternalAPIServerAddr
proj := defaultProj.DeepCopy()
proj.Spec.SourceNamespaces = []string{test.FakeArgoCDNamespace}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, proj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, proj}}, nil)
ctrl.handleObjectUpdated(map[string]bool{app.InstanceName(ctrl.namespace): true}, kube.GetObjectRef(kube.MustToUnstructured(app)))
isRequested, level := ctrl.isRefreshRequested(app.QualifiedName())
@@ -1463,7 +1353,7 @@ func TestHandleOrphanedResourceUpdated(t *testing.T) {
proj := defaultProj.DeepCopy()
proj.Spec.OrphanedResources = &v1alpha1.OrphanedResourcesMonitorSettings{}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app1, app2, proj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app1, app2, proj}}, nil)
ctrl.handleObjectUpdated(map[string]bool{}, corev1.ObjectReference{UID: "test", Kind: kube.DeploymentKind, Name: "test", Namespace: test.FakeArgoCDNamespace})
@@ -1494,7 +1384,7 @@ func TestGetResourceTree_HasOrphanedResources(t *testing.T) {
ResourceRef: v1alpha1.ResourceRef{Group: "apps", Kind: "Deployment", Namespace: "default", Name: "deploy2"},
}
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", "Deployment", "default", "nginx-deployment"): {ResourceNode: managedDeploy},
@@ -1517,7 +1407,7 @@ func TestGetResourceTree_HasOrphanedResources(t *testing.T) {
}
func TestSetOperationStateOnDeletedApp(t *testing.T) {
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
patched := false
@@ -1535,7 +1425,7 @@ func TestSetOperationStateLogRetries(t *testing.T) {
t.Cleanup(func() {
logrus.StandardLogger().ReplaceHooks(logrus.LevelHooks{})
})
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
patched := false
@@ -1548,12 +1438,7 @@ func TestSetOperationStateLogRetries(t *testing.T) {
})
ctrl.setOperationState(newFakeApp(), &v1alpha1.OperationState{Phase: synccommon.OperationSucceeded})
assert.True(t, patched)
require.GreaterOrEqual(t, len(hook.Entries), 1)
entry := hook.Entries[0]
require.Contains(t, entry.Data, "error")
errorVal, ok := entry.Data["error"].(error)
require.True(t, ok, "error field should be of type error")
assert.Contains(t, errorVal.Error(), "fake error")
assert.Contains(t, hook.Entries[0].Message, "fake error")
}
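
The assertion change above inspects logrus entries captured by a test hook, checking either the structured `error` field or the message. A self-contained sketch of that pattern, assuming the standard logrus test hook rather than any Argo CD helper:

```go
package example

import (
	"errors"
	"testing"

	logtest "github.com/sirupsen/logrus/hooks/test"
)

func TestErrorFieldLogged(t *testing.T) {
	logger, hook := logtest.NewNullLogger()
	logger.WithError(errors.New("fake error")).Warn("retrying operation")

	if len(hook.Entries) == 0 {
		t.Fatal("expected at least one log entry")
	}
	// logrus stores the error passed to WithError under the "error" data key.
	got, ok := hook.Entries[0].Data["error"].(error)
	if !ok || got.Error() != "fake error" {
		t.Fatalf("unexpected error field: %v", hook.Entries[0].Data["error"])
	}
}
```
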
func TestNeedRefreshAppStatus(t *testing.T) {
@@ -1591,7 +1476,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
app.Status.Sync.ComparedTo.Source = app.Spec.GetSource()
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
t.Run("no need to refresh just reconciled application", func(t *testing.T) {
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
@@ -1603,7 +1488,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
assert.False(t, needRefresh)
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
// refresh app using the 'deepest' requested comparison level
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
@@ -1620,7 +1505,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
assert.False(t, needRefresh)
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
// refresh app with a non-nil delay
// use zero-second delay to test the add later logic without waiting in the test
@@ -1650,7 +1535,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
app := app.DeepCopy()
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.False(t, needRefresh)
@@ -1680,7 +1565,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
}
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.False(t, needRefresh)
@@ -1760,7 +1645,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
}
func TestUpdatedManagedNamespaceMetadata(t *testing.T) {
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
app := newFakeApp()
app.Spec.SyncPolicy.ManagedNamespaceMetadata = &v1alpha1.ManagedNamespaceMetadata{
Labels: map[string]string{
@@ -1784,7 +1669,7 @@ func TestUpdatedManagedNamespaceMetadata(t *testing.T) {
}
func TestUnchangedManagedNamespaceMetadata(t *testing.T) {
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
app := newFakeApp()
app.Spec.SyncPolicy.ManagedNamespaceMetadata = &v1alpha1.ManagedNamespaceMetadata{
Labels: map[string]string{
@@ -1827,7 +1712,7 @@ func TestRefreshAppConditions(t *testing.T) {
t.Run("NoErrorConditions", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
_, hasErrors := ctrl.refreshAppConditions(app)
assert.False(t, hasErrors)
@@ -1838,7 +1723,7 @@ func TestRefreshAppConditions(t *testing.T) {
app := newFakeApp()
app.Status.SetConditions([]v1alpha1.ApplicationCondition{{Type: v1alpha1.ApplicationConditionExcludedResourceWarning}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
_, hasErrors := ctrl.refreshAppConditions(app)
assert.False(t, hasErrors)
@@ -1851,7 +1736,7 @@ func TestRefreshAppConditions(t *testing.T) {
app.Spec.Project = "wrong project"
app.Status.SetConditions([]v1alpha1.ApplicationCondition{{Type: v1alpha1.ApplicationConditionInvalidSpecError, Message: "old message"}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
_, hasErrors := ctrl.refreshAppConditions(app)
assert.True(t, hasErrors)
@@ -1866,7 +1751,7 @@ func TestUpdateReconciledAt(t *testing.T) {
reconciledAt := metav1.NewTime(time.Now().Add(-1 * time.Second))
app.Status = v1alpha1.ApplicationStatus{ReconciledAt: &reconciledAt}
app.Status.Sync = v1alpha1.SyncStatus{ComparedTo: v1alpha1.ComparedTo{Source: app.Spec.GetSource(), Destination: app.Spec.Destination, IgnoreDifferences: app.Spec.IgnoreDifferences}}
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -1997,7 +1882,7 @@ apps/Deployment:
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{tc.app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -2061,7 +1946,7 @@ apps/Deployment:
return hs`,
}
ctrl := newFakeControllerWithResync(t.Context(), &fakeData{
ctrl := newFakeControllerWithResync(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -2128,7 +2013,7 @@ apps/Deployment:
func TestProjectErrorToCondition(t *testing.T) {
app := newFakeApp()
app.Spec.Project = "wrong project"
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -2156,7 +2041,7 @@ func TestProjectErrorToCondition(t *testing.T) {
func TestFinalizeProjectDeletion_HasApplications(t *testing.T) {
app := newFakeApp()
proj := &v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace}}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, proj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, proj}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
patched := false
@@ -2172,7 +2057,7 @@ func TestFinalizeProjectDeletion_HasApplications(t *testing.T) {
func TestFinalizeProjectDeletion_DoesNotHaveApplications(t *testing.T) {
proj := &v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace}}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{&defaultProj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{&defaultProj}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
@@ -2198,7 +2083,7 @@ func TestProcessRequestedAppOperation_FailedNoRetries(t *testing.T) {
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2225,7 +2110,7 @@ func TestProcessRequestedAppOperation_InvalidDestination(t *testing.T) {
proj := defaultProj
proj.Name = "test-project"
proj.Spec.SourceNamespaces = []string{test.FakeArgoCDNamespace}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &proj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &proj}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
func() {
@@ -2254,7 +2139,7 @@ func TestProcessRequestedAppOperation_FailedHasRetries(t *testing.T) {
Sync: &v1alpha1.SyncOperation{},
Retry: v1alpha1.RetryStrategy{Limit: 1},
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2301,7 +2186,7 @@ func TestProcessRequestedAppOperation_RunningPreviouslyFailed(t *testing.T) {
Revision: "abc123",
},
}
ctrl := newFakeController(t.Context(), data, nil)
ctrl := newFakeController(data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2358,7 +2243,7 @@ func TestProcessRequestedAppOperation_RunningPreviouslyFailedBackoff(t *testing.
Revision: "abc123",
},
}
ctrl := newFakeController(t.Context(), data, nil)
ctrl := newFakeController(data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.PrependReactor("patch", "*", func(_ kubetesting.Action) (handled bool, ret runtime.Object, err error) {
require.FailNow(t, "A patch should not have been called if the backoff has not passed")
@@ -2386,7 +2271,7 @@ func TestProcessRequestedAppOperation_HasRetriesTerminated(t *testing.T) {
Revision: "abc123",
},
}
ctrl := newFakeController(t.Context(), data, nil)
ctrl := newFakeController(data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2410,7 +2295,7 @@ func TestProcessRequestedAppOperation_Successful(t *testing.T) {
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
}
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
@@ -2485,7 +2370,7 @@ func TestProcessRequestedAppOperation_SyncTimeout(t *testing.T) {
Revision: "HEAD",
},
}
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
@@ -2527,9 +2412,9 @@ func TestGetAppHosts(t *testing.T) {
"application.allowedNodeLabels": "label1,label2",
},
}
ctrl := newFakeController(t.Context(), data, nil)
ctrl := newFakeController(data, nil)
mockStateCache := &mockstatecache.LiveStateCache{}
mockStateCache.EXPECT().IterateResources(mock.Anything, mock.MatchedBy(func(callback func(res *clustercache.Resource, info *statecache.ResourceInfo)) bool {
mockStateCache.On("IterateResources", mock.Anything, mock.MatchedBy(func(callback func(res *clustercache.Resource, info *statecache.ResourceInfo)) bool {
// node resource
callback(&clustercache.Resource{
Ref: corev1.ObjectReference{Name: "minikube", Kind: "Node", APIVersion: "v1"},
@@ -2555,7 +2440,7 @@ func TestGetAppHosts(t *testing.T) {
ResourceRequests: map[corev1.ResourceName]resource.Quantity{corev1.ResourceCPU: resource.MustParse("2")},
}})
return true
})).Return(nil).Maybe()
})).Return(nil)
ctrl.stateCache = mockStateCache
hosts, err := ctrl.getAppHosts(&v1alpha1.Cluster{Server: "test", Name: "test"}, app, []v1alpha1.ResourceNode{{
@@ -2582,15 +2467,15 @@ func TestGetAppHosts(t *testing.T) {
func TestMetricsExpiration(t *testing.T) {
app := newFakeApp()
// Check expiration is disabled by default
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
assert.False(t, ctrl.metricsServer.HasExpiration())
// Check expiration is enabled if set
ctrl = newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}, metricsCacheExpiration: 10 * time.Second}, nil)
ctrl = newFakeController(&fakeData{apps: []runtime.Object{app}, metricsCacheExpiration: 10 * time.Second}, nil)
assert.True(t, ctrl.metricsServer.HasExpiration())
}
func TestToAppKey(t *testing.T) {
ctrl := newFakeController(t.Context(), &fakeData{}, nil)
ctrl := newFakeController(&fakeData{}, nil)
tests := []struct {
name string
input string
@@ -2610,7 +2495,7 @@ func TestToAppKey(t *testing.T) {
func Test_canProcessApp(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl.applicationNamespaces = []string{"good"}
t.Run("without cluster filter, good namespace", func(t *testing.T) {
app.Namespace = "good"
@@ -2641,7 +2526,7 @@ func Test_canProcessAppSkipReconcileAnnotation(t *testing.T) {
appSkipReconcileFalse.Annotations = map[string]string{common.AnnotationKeyAppSkipReconcile: "false"}
appSkipReconcileTrue := newFakeApp()
appSkipReconcileTrue.Annotations = map[string]string{common.AnnotationKeyAppSkipReconcile: "true"}
ctrl := newFakeController(t.Context(), &fakeData{}, nil)
ctrl := newFakeController(&fakeData{}, nil)
tests := []struct {
name string
input any
@@ -2662,7 +2547,7 @@ func Test_canProcessAppSkipReconcileAnnotation(t *testing.T) {
func Test_syncDeleteOption(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
cm := newFakeCM()
t.Run("without delete option object is deleted", func(t *testing.T) {
cmObj := kube.MustToUnstructured(&cm)
@@ -2683,7 +2568,7 @@ func Test_syncDeleteOption(t *testing.T) {
func TestAddControllerNamespace(t *testing.T) {
t.Run("set controllerNamespace when the app is in the controller namespace", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{},
}, nil)
@@ -2701,7 +2586,7 @@ func TestAddControllerNamespace(t *testing.T) {
app.Namespace = appNamespace
proj := defaultProj
proj.Spec.SourceNamespaces = []string{appNamespace}
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &proj},
manifestResponse: &apiclient.ManifestResponse{},
applicationNamespaces: []string{appNamespace},
@@ -2958,7 +2843,7 @@ func assertDurationAround(t *testing.T, expected time.Duration, actual time.Dura
}
func TestSelfHealRemainingBackoff(t *testing.T) {
ctrl := newFakeController(t.Context(), &fakeData{}, nil)
ctrl := newFakeController(&fakeData{}, nil)
ctrl.selfHealBackoff = &wait.Backoff{
Factor: 3,
Duration: 2 * time.Second,
@@ -3040,7 +2925,7 @@ func TestSelfHealRemainingBackoff(t *testing.T) {
func TestSelfHealBackoffCooldownElapsed(t *testing.T) {
cooldown := time.Second * 30
ctrl := newFakeController(t.Context(), &fakeData{}, nil)
ctrl := newFakeController(&fakeData{}, nil)
ctrl.selfHealBackoffCooldown = cooldown
app := &v1alpha1.Application{

View File

@@ -40,7 +40,7 @@ func (n netError) Error() string { return string(n) }
func (n netError) Timeout() bool { return false }
func (n netError) Temporary() bool { return false }
func fixtures(ctx context.Context, data map[string]string, opts ...func(secret *corev1.Secret)) (*fake.Clientset, *argosettings.SettingsManager) {
func fixtures(data map[string]string, opts ...func(secret *corev1.Secret)) (*fake.Clientset, *argosettings.SettingsManager) {
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDConfigMapName,
@@ -65,17 +65,17 @@ func fixtures(ctx context.Context, data map[string]string, opts ...func(secret *
opts[i](secret)
}
kubeClient := fake.NewClientset(cm, secret)
settingsManager := argosettings.NewSettingsManager(ctx, kubeClient, "default")
settingsManager := argosettings.NewSettingsManager(context.Background(), kubeClient, "default")
return kubeClient, settingsManager
}
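
The hunks below convert between two mock expectation styles: mockery's generated type-safe `EXPECT()` builder and testify's string-based `On(...)`. A self-contained illustration of the string-based form against a hypothetical interface (`Greeter` is not an Argo CD type):

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/mock"
)

// Greeter is a hypothetical interface standing in for the mocked services below.
type Greeter interface {
	Greet(name string) (string, error)
}

// MockGreeter is roughly what mockery would generate for Greeter.
type MockGreeter struct{ mock.Mock }

func (m *MockGreeter) Greet(name string) (string, error) {
	args := m.Called(name)
	return args.String(0), args.Error(1)
}

func TestGreet(t *testing.T) {
	g := &MockGreeter{}
	// String-based expectation: method name, argument matchers, canned
	// return values, and an expected call count.
	g.On("Greet", mock.Anything).Return("hello", nil).Once()

	out, err := g.Greet("world")
	if err != nil || out != "hello" {
		t.Fatalf("unexpected result: %q, %v", out, err)
	}
	g.AssertExpectations(t)
}
```
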
func TestHandleModEvent_HasChanges(_ *testing.T) {
clusterCache := &mocks.ClusterCache{}
clusterCache.EXPECT().Invalidate(mock.Anything, mock.Anything).Return().Once()
clusterCache.EXPECT().EnsureSynced().Return(nil).Once()
clusterCache.On("Invalidate", mock.Anything, mock.Anything).Return(nil).Once()
clusterCache.On("EnsureSynced").Return(nil).Once()
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(1)
db.On("GetApplicationControllerReplicas").Return(1)
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{
"https://mycluster": clusterCache,
@@ -95,10 +95,10 @@ func TestHandleModEvent_HasChanges(_ *testing.T) {
func TestHandleModEvent_ClusterExcluded(t *testing.T) {
clusterCache := &mocks.ClusterCache{}
clusterCache.EXPECT().Invalidate(mock.Anything, mock.Anything).Return().Once()
clusterCache.EXPECT().EnsureSynced().Return(nil).Once()
clusterCache.On("Invalidate", mock.Anything, mock.Anything).Return(nil).Once()
clusterCache.On("EnsureSynced").Return(nil).Once()
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
db.On("GetApplicationControllerReplicas").Return(1)
clustersCache := liveStateCache{
db: nil,
appInformer: nil,
@@ -128,10 +128,10 @@ func TestHandleModEvent_ClusterExcluded(t *testing.T) {
func TestHandleModEvent_NoChanges(_ *testing.T) {
clusterCache := &mocks.ClusterCache{}
clusterCache.EXPECT().Invalidate(mock.Anything).Panic("should not invalidate").Maybe()
clusterCache.EXPECT().EnsureSynced().Return(nil).Panic("should not re-sync").Maybe()
clusterCache.On("Invalidate", mock.Anything).Panic("should not invalidate")
clusterCache.On("EnsureSynced").Return(nil).Panic("should not re-sync")
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
db.On("GetApplicationControllerReplicas").Return(1)
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{
"https://mycluster": clusterCache,
@@ -150,7 +150,7 @@ func TestHandleModEvent_NoChanges(_ *testing.T) {
func TestHandleAddEvent_ClusterExcluded(t *testing.T) {
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
db.On("GetApplicationControllerReplicas").Return(1)
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{},
clusterSharding: sharding.NewClusterSharding(db, 0, 2, common.DefaultShardingAlgorithm),
@@ -169,9 +169,10 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
Config: appv1.ClusterConfig{Username: "bar"},
}
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(1)
db.On("GetApplicationControllerReplicas").Return(1)
fakeClient := fake.NewClientset()
settingsMgr := argosettings.NewSettingsManager(t.Context(), fakeClient, "argocd")
liveStateCacheLock := sync.RWMutex{}
gitopsEngineClusterCache := &mocks.ClusterCache{}
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{
@@ -179,7 +180,9 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
},
clusterSharding: sharding.NewClusterSharding(db, 0, 1, common.DefaultShardingAlgorithm),
settingsMgr: settingsMgr,
lock: sync.RWMutex{},
// Set the lock here so we can reference it later
//nolint:govet // We need to overwrite here to have access to the lock
lock: liveStateCacheLock,
}
channel := make(chan string)
// Mocked lock held by the gitops-engine cluster cache
@@ -200,7 +203,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
handleDeleteWasCalled.Lock()
engineHoldsEngineLock.Lock()
gitopsEngineClusterCache.EXPECT().EnsureSynced().Run(func() {
gitopsEngineClusterCache.On("EnsureSynced").Run(func(_ mock.Arguments) {
gitopsEngineClusterCacheLock.Lock()
t.Log("EnsureSynced: Engine has engine lock")
engineHoldsEngineLock.Unlock()
@@ -214,7 +217,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
ensureSyncedCompleted.Unlock()
}).Return(nil).Once()
gitopsEngineClusterCache.EXPECT().Invalidate().Run(func(_ ...cache.UpdateSettingsFunc) {
gitopsEngineClusterCache.On("Invalidate").Run(func(_ mock.Arguments) {
// Allow EnsureSynced to continue now that we're in the deadlock condition
handleDeleteWasCalled.Unlock()
// Wait until gitops engine holds the gitops lock
@@ -227,7 +230,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
t.Log("Invalidate: Invalidate has engine lock")
gitopsEngineClusterCacheLock.Unlock()
invalidateCompleted.Unlock()
}).Return().Maybe()
}).Return()
go func() {
// Start the gitops-engine lock holds
go func() {
@@ -775,7 +778,7 @@ func Test_GetVersionsInfo_error_redacted(t *testing.T) {
}
func TestLoadCacheSettings(t *testing.T) {
_, settingsManager := fixtures(t.Context(), map[string]string{
_, settingsManager := fixtures(map[string]string{
"application.instanceLabelKey": "testLabel",
"application.resourceTrackingMethod": string(appv1.TrackingMethodLabel),
"installationID": "123456789",

View File

@@ -68,10 +68,6 @@ func populateNodeInfo(un *unstructured.Unstructured, res *ResourceInfo, customLa
case "ServiceEntry":
populateIstioServiceEntryInfo(un, res)
}
case "argoproj.io":
if gvk.Kind == "Application" {
populateApplicationInfo(un, res)
}
}
}
@@ -492,13 +488,6 @@ func populateHostNodeInfo(un *unstructured.Unstructured, res *ResourceInfo) {
}
}
func populateApplicationInfo(un *unstructured.Unstructured, res *ResourceInfo) {
// Add managed-by-url annotation to info if present
if managedByURL, ok := un.GetAnnotations()[v1alpha1.AnnotationKeyManagedByURL]; ok {
res.Info = append(res.Info, v1alpha1.InfoItem{Name: "managed-by-url", Value: managedByURL})
}
}
func generateManifestHash(un *unstructured.Unstructured, ignores []v1alpha1.ResourceIgnoreDifferences, overrides map[string]v1alpha1.ResourceOverride, opts normalizers.IgnoreNormalizerOpts) (string, error) {
normalizer, err := normalizers.NewIgnoreNormalizer(ignores, overrides, opts)
if err != nil {

View File

@@ -4,9 +4,7 @@ import (
"context"
"encoding/json"
"fmt"
"maps"
"path/filepath"
"slices"
"sync"
"time"
@@ -101,41 +99,47 @@ func NewHydrator(dependencies Dependencies, statusRefreshTimeout time.Duration,
// It's likely that multiple applications will trigger hydration at the same time. The hydration queue key is meant to
// dedupe these requests.
func (h *Hydrator) ProcessAppHydrateQueueItem(origApp *appv1.Application) {
origApp = origApp.DeepCopy()
app := origApp.DeepCopy()
if app.Spec.SourceHydrator == nil {
return
}
logCtx := log.WithFields(applog.GetAppLogFields(app))
logCtx.Debug("Processing app hydrate queue item")
needsHydration, reason := appNeedsHydration(app)
if needsHydration {
app.Status.SourceHydrator.CurrentOperation = &appv1.HydrateOperation{
StartedAt: metav1.Now(),
FinishedAt: nil,
Phase: appv1.HydrateOperationPhaseHydrating,
SourceHydrator: *app.Spec.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
// TODO: don't reuse statusRefreshTimeout. Create a new timeout for hydration.
needsHydration, reason := appNeedsHydration(origApp, h.statusRefreshTimeout)
if !needsHydration {
return
}
needsRefresh := app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.StartedAt.Time) > h.statusRefreshTimeout
if needsHydration || needsRefresh {
logCtx.WithField("reason", reason).Info("Hydrating app")
h.dependencies.AddHydrationQueueItem(getHydrationQueueKey(app))
} else {
logCtx.WithField("reason", reason).Debug("Skipping hydration")
logCtx.WithField("reason", reason).Info("Hydrating app")
app.Status.SourceHydrator.CurrentOperation = &appv1.HydrateOperation{
StartedAt: metav1.Now(),
FinishedAt: nil,
Phase: appv1.HydrateOperationPhaseHydrating,
SourceHydrator: *app.Spec.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
origApp.Status.SourceHydrator = app.Status.SourceHydrator
h.dependencies.AddHydrationQueueItem(getHydrationQueueKey(app))
logCtx.Debug("Successfully processed app hydrate queue item")
}
func getHydrationQueueKey(app *appv1.Application) types.HydrationQueueKey {
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
key := types.HydrationQueueKey{
SourceRepoURL: git.NormalizeGitURLAllowInvalid(app.Spec.SourceHydrator.DrySource.RepoURL),
SourceTargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
DestinationBranch: app.Spec.GetHydrateToSource().TargetRevision,
DestinationBranch: destinationBranch,
}
return key
}
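
As the comment above notes, the hydration queue key exists to dedupe concurrent hydration requests. A minimal sketch of that idea, assuming a plain map in place of the controller's workqueue:

```go
package example

// hydrationQueueKey mirrors the fields used above; because it is a comparable
// struct, it can key a map or workqueue directly.
type hydrationQueueKey struct {
	SourceRepoURL        string
	SourceTargetRevision string
	DestinationBranch    string
}

// enqueue records a pending hydration. Apps that share repo URL, dry target
// revision, and destination branch collapse into a single entry, so one
// hydration pass serves all of them.
func enqueue(pending map[hydrationQueueKey]struct{}, key hydrationQueueKey) {
	pending[key] = struct{}{}
}
```
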
@@ -144,92 +148,43 @@ func getHydrationQueueKey(app *appv1.Application) types.HydrationQueueKey {
// hydration key, hydrates their latest commit, and updates their status accordingly. If the hydration fails, it marks
// the operation as failed and logs the error. If successful, it updates the operation to indicate that hydration was
// successful and requests a refresh of the applications to pick up the new hydrated commit.
func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKey) {
func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKey) (processNext bool) {
logCtx := log.WithFields(log.Fields{
"sourceRepoURL": hydrationKey.SourceRepoURL,
"sourceTargetRevision": hydrationKey.SourceTargetRevision,
"destinationBranch": hydrationKey.DestinationBranch,
})
// Get all applications sharing the same hydration key
apps, err := h.getAppsForHydrationKey(hydrationKey)
if err != nil {
// If we get an error here, we cannot proceed with hydration and we do not know
// which apps to update with the failure. The best we can do is log an error in
// the controller and wait for statusRefreshTimeout to retry
logCtx.WithError(err).Error("failed to get apps for hydration")
relevantApps, drySHA, hydratedSHA, err := h.hydrateAppsLatestCommit(logCtx, hydrationKey)
if len(relevantApps) == 0 {
// Return early if there are no relevant apps to hydrate;
// otherwise the operation would stay stuck in the hydrating phase.
logCtx.Info("Skipping hydration since there are no relevant apps found to hydrate")
return
}
logCtx = logCtx.WithField("appCount", len(apps))
// FIXME: we might end up in a race condition here where an HydrationQueueItem is processed
// before all applications had their CurrentOperation set by ProcessAppHydrateQueueItem.
// This would cause this method to update "old" CurrentOperation.
// It should only start hydration if all apps are in the HydrateOperationPhaseHydrating phase.
raceDetected := false
for _, app := range apps {
if app.Status.SourceHydrator.CurrentOperation == nil || app.Status.SourceHydrator.CurrentOperation.Phase != appv1.HydrateOperationPhaseHydrating {
raceDetected = true
break
}
}
if raceDetected {
logCtx.Warn("race condition detected: not all apps are in HydrateOperationPhaseHydrating phase")
}
// validate all the applications to make sure they are all correctly configured.
// All applications sharing the same hydration key must succeed for the hydration to be processed.
projects, validationErrors := h.validateApplications(apps)
if len(validationErrors) > 0 {
// For the applications that have an error, set the specific error in their status.
// Applications without error will still fail with a generic error since the hydration cannot be partial
genericError := genericHydrationError(validationErrors)
for _, app := range apps {
if err, ok := validationErrors[app.QualifiedName()]; ok {
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
logCtx.Errorf("failed to validate hydration app: %v", err)
h.setAppHydratorError(app, err)
} else {
h.setAppHydratorError(app, genericError)
}
}
return
}
// Hydrate all the apps
drySHA, hydratedSHA, appErrors, err := h.hydrate(logCtx, apps, projects)
if err != nil {
// If there is a single error, it affects every application
for i := range apps {
appErrors[apps[i].QualifiedName()] = err
}
}
if drySHA != "" {
logCtx = logCtx.WithField("drySHA", drySHA)
}
if len(appErrors) > 0 {
// For the applications that have an error, set the specific error in their status.
// Applications without error will still fail with a generic error since the hydration cannot be partial
genericError := genericHydrationError(appErrors)
for _, app := range apps {
if drySHA != "" {
// If we have a drySHA, we can set it on the app status
app.Status.SourceHydrator.CurrentOperation.DrySHA = drySHA
}
if err, ok := appErrors[app.QualifiedName()]; ok {
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
logCtx.Errorf("failed to hydrate app: %v", err)
h.setAppHydratorError(app, err)
} else {
h.setAppHydratorError(app, genericError)
}
if err != nil {
logCtx.WithField("appCount", len(relevantApps)).WithError(err).Error("Failed to hydrate apps")
for _, app := range relevantApps {
origApp := app.DeepCopy()
app.Status.SourceHydrator.CurrentOperation.Phase = appv1.HydrateOperationPhaseFailed
failedAt := metav1.Now()
app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate revision %q: %v", drySHA, err.Error())
// We may or may not have gotten far enough in the hydration process to get a non-empty SHA, but set it just
// in case we did.
app.Status.SourceHydrator.CurrentOperation.DrySHA = drySHA
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
logCtx.Errorf("Failed to hydrate app: %v", err)
}
return
}
logCtx.Debug("Successfully hydrated apps")
logCtx.WithField("appCount", len(relevantApps)).Debug("Successfully hydrated apps")
finishedAt := metav1.Now()
for _, app := range apps {
for _, app := range relevantApps {
origApp := app.DeepCopy()
operation := &appv1.HydrateOperation{
StartedAt: app.Status.SourceHydrator.CurrentOperation.StartedAt,
@@ -247,123 +202,118 @@ func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKe
SourceHydrator: app.Status.SourceHydrator.CurrentOperation.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
// Request a refresh since we pushed a new commit.
err := h.dependencies.RequestAppRefresh(app.Name, app.Namespace)
if err != nil {
logCtx.WithFields(applog.GetAppLogFields(app)).WithError(err).Error("Failed to request app refresh after hydration")
logCtx.WithField("app", app.QualifiedName()).WithError(err).Error("Failed to request app refresh after hydration")
}
}
return
}
// setAppHydratorError updates the CurrentOperation with the error information.
func (h *Hydrator) setAppHydratorError(app *appv1.Application, err error) {
// if the operation is not in progress, we do not update the status
if app.Status.SourceHydrator.CurrentOperation.Phase != appv1.HydrateOperationPhaseHydrating {
return
func (h *Hydrator) hydrateAppsLatestCommit(logCtx *log.Entry, hydrationKey types.HydrationQueueKey) ([]*appv1.Application, string, string, error) {
relevantApps, projects, err := h.getRelevantAppsAndProjectsForHydration(logCtx, hydrationKey)
if err != nil {
return nil, "", "", fmt.Errorf("failed to get relevant apps for hydration: %w", err)
}
origApp := app.DeepCopy()
app.Status.SourceHydrator.CurrentOperation.Phase = appv1.HydrateOperationPhaseFailed
failedAt := metav1.Now()
app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate: %v", err.Error())
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
dryRevision, hydratedRevision, err := h.hydrate(logCtx, relevantApps, projects)
if err != nil {
return relevantApps, dryRevision, "", fmt.Errorf("failed to hydrate apps: %w", err)
}
return relevantApps, dryRevision, hydratedRevision, nil
}
// getAppsForHydrationKey returns the applications matching the hydration key.
func (h *Hydrator) getAppsForHydrationKey(hydrationKey types.HydrationQueueKey) ([]*appv1.Application, error) {
func (h *Hydrator) getRelevantAppsAndProjectsForHydration(logCtx *log.Entry, hydrationKey types.HydrationQueueKey) ([]*appv1.Application, map[string]*appv1.AppProject, error) {
// Get all apps
apps, err := h.dependencies.GetProcessableApps()
if err != nil {
return nil, fmt.Errorf("failed to list apps: %w", err)
return nil, nil, fmt.Errorf("failed to list apps: %w", err)
}
var relevantApps []*appv1.Application
projects := make(map[string]*appv1.AppProject)
uniquePaths := make(map[string]bool, len(apps.Items))
for _, app := range apps.Items {
if app.Spec.SourceHydrator == nil {
continue
}
appKey := getHydrationQueueKey(&app)
if appKey != hydrationKey {
if !git.SameURL(app.Spec.SourceHydrator.DrySource.RepoURL, hydrationKey.SourceRepoURL) ||
app.Spec.SourceHydrator.DrySource.TargetRevision != hydrationKey.SourceTargetRevision {
continue
}
relevantApps = append(relevantApps, &app)
}
return relevantApps, nil
}
// validateApplications checks that all applications are valid for hydration.
func (h *Hydrator) validateApplications(apps []*appv1.Application) (map[string]*appv1.AppProject, map[string]error) {
projects := make(map[string]*appv1.AppProject)
errors := make(map[string]error)
uniquePaths := make(map[string]string, len(apps))
for _, app := range apps {
// Get the project for the app and validate if the app is allowed to use the source.
// We can't short-circuit this even if we have seen this project before, because we need to verify that this
// particular app is allowed to use this project.
proj, err := h.dependencies.GetProcessableAppProj(app)
if err != nil {
errors[app.QualifiedName()] = fmt.Errorf("failed to get project %q: %w", app.Spec.Project, err)
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
if destinationBranch != hydrationKey.DestinationBranch {
continue
}
path := app.Spec.SourceHydrator.SyncSource.Path
// ensure that the path never resolves to the root of the repo
if IsRootPath(path) {
return nil, nil, fmt.Errorf("app %q has path %q which resolves to repository root", app.QualifiedName(), path)
}
var proj *appv1.AppProject
// We can't short-circuit this even if we have seen this project before, because we need to verify that this
// particular app is allowed to use this project. That logic is in GetProcessableAppProj.
proj, err = h.dependencies.GetProcessableAppProj(&app)
if err != nil {
return nil, nil, fmt.Errorf("failed to get project %q for app %q: %w", app.Spec.Project, app.QualifiedName(), err)
}
permitted := proj.IsSourcePermitted(app.Spec.GetSource())
if !permitted {
errors[app.QualifiedName()] = fmt.Errorf("application repo %s is not permitted in project '%s'", app.Spec.GetSource().RepoURL, proj.Name)
// Log and skip. We don't want to fail the entire operation because of one app.
logCtx.Warnf("App %q is not permitted to use source %q", app.QualifiedName(), app.Spec.Source.String())
continue
}
projects[app.Spec.Project] = proj
// Disallow hydrating to the repository root.
// Hydrating to root would overwrite or delete files at the top level of the repo,
// which can break other applications or shared configuration.
// Every hydrated app must write into a subdirectory instead.
destPath := app.Spec.SourceHydrator.SyncSource.Path
if IsRootPath(destPath) {
errors[app.QualifiedName()] = fmt.Errorf("app is configured to hydrate to the repository root (branch %q, path %q) which is not allowed", app.Spec.GetHydrateToSource().TargetRevision, destPath)
continue
}
// TODO: test the dupe detection
// TODO: normalize the path to avoid "path/.." from being treated as different from "."
if appName, ok := uniquePaths[destPath]; ok {
errors[app.QualifiedName()] = fmt.Errorf("app %s hydrator use the same destination: %v", appName, app.Spec.SourceHydrator.SyncSource.Path)
errors[appName] = fmt.Errorf("app %s hydrator use the same destination: %v", app.QualifiedName(), app.Spec.SourceHydrator.SyncSource.Path)
continue
if _, ok := uniquePaths[path]; ok {
return nil, nil, fmt.Errorf("multiple app hydrators use the same destination: %v", app.Spec.SourceHydrator.SyncSource.Path)
}
uniquePaths[destPath] = app.QualifiedName()
}
uniquePaths[path] = true
// If there are any errors, return nil for projects to avoid possible partial processing.
if len(errors) > 0 {
projects = nil
relevantApps = append(relevantApps, &app)
}
return projects, errors
return relevantApps, projects, nil
}
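The destination-branch logic above is easy to miss: `spec.sourceHydrator.syncSource.targetBranch` is the default, and `spec.sourceHydrator.hydrateTo.targetBranch` overrides it when set. A minimal sketch of that resolution order, with simplified stand-in types (the real ones are in the v1alpha1 API package):

```go
package main

import "fmt"

// Simplified stand-ins for the v1alpha1 source hydrator types.
type HydrateTo struct{ TargetBranch string }
type SyncSource struct{ TargetBranch string }
type SourceHydrator struct {
	SyncSource SyncSource
	HydrateTo  *HydrateTo // optional override of the destination branch
}

// destinationBranch mirrors the resolution used in the diff above:
// HydrateTo wins when present, otherwise the sync source's branch is used.
func destinationBranch(sh SourceHydrator) string {
	if sh.HydrateTo != nil {
		return sh.HydrateTo.TargetBranch
	}
	return sh.SyncSource.TargetBranch
}

func main() {
	plain := SourceHydrator{SyncSource: SyncSource{TargetBranch: "env/prod"}}
	staged := SourceHydrator{
		SyncSource: SyncSource{TargetBranch: "env/prod"},
		HydrateTo:  &HydrateTo{TargetBranch: "env/prod-next"},
	}
	fmt.Println(destinationBranch(plain))  // env/prod
	fmt.Println(destinationBranch(staged)) // env/prod-next
}
```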
func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, projects map[string]*appv1.AppProject) (string, string, map[string]error, error) {
	errors := make(map[string]error)
	if len(apps) == 0 {
		return "", "", nil, nil
	}
	// These values are the same for all apps being hydrated together, so just get them from the first app.
	repoURL := apps[0].Spec.GetHydrateToSource().RepoURL
	targetBranch := apps[0].Spec.GetHydrateToSource().TargetRevision
	// Get a static SHA revision from the first app so that all apps are hydrated from the same revision.
	targetRevision, pathDetails, err := h.getManifests(context.Background(), apps[0], "", projects[apps[0].Spec.Project])
	if err != nil {
		errors[apps[0].QualifiedName()] = fmt.Errorf("failed to get manifests: %w", err)
		return "", "", errors, nil
	}
	paths := []*commitclient.PathDetails{pathDetails}

func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, projects map[string]*appv1.AppProject) (string, string, error) {
	if len(apps) == 0 {
		return "", "", nil
	}
	// FIXME: As a convenience, the commit server will create the syncBranch if it does not exist. If the
	// targetBranch does not exist, it will create it based on the syncBranch. On the next line, we take
	// the `syncBranch` from the first app and assume that they're all configured the same. Instead, if any
	// app has a different syncBranch, we should send the commit server an empty string and allow it to
	// create the targetBranch as an orphan, since we can't reliably determine a reasonable base.
	repoURL := apps[0].Spec.SourceHydrator.DrySource.RepoURL
	syncBranch := apps[0].Spec.SourceHydrator.SyncSource.TargetBranch
	targetBranch := apps[0].Spec.GetHydrateToSource().TargetRevision
	// Disallow hydrating to the repository root.
	// Hydrating to root would overwrite or delete files at the top level of the repo,
	// which can break other applications or shared configuration.
	// Every hydrated app must write into a subdirectory instead.
	for _, app := range apps {
		destPath := app.Spec.SourceHydrator.SyncSource.Path
		if IsRootPath(destPath) {
			return "", "", fmt.Errorf(
				"app %q is configured to hydrate to the repository root (branch %q, path %q) which is not allowed",
				app.QualifiedName(), targetBranch, destPath,
			)
		}
	}
	// Get a static SHA revision from the first app so that all apps are hydrated from the same revision.
	targetRevision, pathDetails, err := h.getManifests(context.Background(), apps[0], "", projects[apps[0].Spec.Project])
	if err != nil {
		return "", "", fmt.Errorf("failed to get manifests for app %q: %w", apps[0].QualifiedName(), err)
	}
	paths := []*commitclient.PathDetails{pathDetails}
@@ -374,18 +324,18 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
		app := app
		eg.Go(func() error {
			_, pathDetails, err = h.getManifests(ctx, app, targetRevision, projects[app.Spec.Project])
			mu.Lock()
			defer mu.Unlock()
			if err != nil {
				errors[app.QualifiedName()] = fmt.Errorf("failed to get manifests: %w", err)
				return errors[app.QualifiedName()]
			}
			paths = append(paths, pathDetails)
			return nil
		})
	}
	if err := eg.Wait(); err != nil {
		return targetRevision, "", errors, nil
	}

		app := app
		eg.Go(func() error {
			_, pathDetails, err = h.getManifests(ctx, app, targetRevision, projects[app.Spec.Project])
			if err != nil {
				return fmt.Errorf("failed to get manifests for app %q: %w", app.QualifiedName(), err)
			}
			mu.Lock()
			paths = append(paths, pathDetails)
			mu.Unlock()
			return nil
		})
	}
	err = eg.Wait()
	if err != nil {
		return "", "", fmt.Errorf("failed to get manifests for apps: %w", err)
	}
// If all the apps are under the same project, use that project. Otherwise, use an empty string to indicate that we
@@ -394,19 +344,18 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
if len(projects) == 1 {
for p := range projects {
project = p
break
}
}
// Get the commit metadata for the target revision.
revisionMetadata, err := h.getRevisionMetadata(context.Background(), repoURL, project, targetRevision)
if err != nil {
return targetRevision, "", errors, fmt.Errorf("failed to get revision metadata for %q: %w", targetRevision, err)
return "", "", fmt.Errorf("failed to get revision metadata for %q: %w", targetRevision, err)
}
repo, err := h.dependencies.GetWriteCredentials(context.Background(), repoURL, project)
if err != nil {
return targetRevision, "", errors, fmt.Errorf("failed to get hydrator credentials: %w", err)
return "", "", fmt.Errorf("failed to get hydrator credentials: %w", err)
}
if repo == nil {
// Try without credentials.
@@ -418,11 +367,11 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
// get the commit message template
commitMessageTemplate, err := h.dependencies.GetHydratorCommitMessageTemplate()
if err != nil {
return targetRevision, "", errors, fmt.Errorf("failed to get hydrated commit message template: %w", err)
return "", "", fmt.Errorf("failed to get hydrated commit message template: %w", err)
}
commitMessage, errMsg := getTemplatedCommitMessage(repoURL, targetRevision, commitMessageTemplate, revisionMetadata)
if errMsg != nil {
return targetRevision, "", errors, fmt.Errorf("failed to get hydrator commit templated message: %w", errMsg)
return "", "", fmt.Errorf("failed to get hydrator commit templated message: %w", errMsg)
}
manifestsRequest := commitclient.CommitHydratedManifestsRequest{
@@ -437,14 +386,14 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
closer, commitService, err := h.commitClientset.NewCommitServerClient()
if err != nil {
return targetRevision, "", errors, fmt.Errorf("failed to create commit service: %w", err)
return targetRevision, "", fmt.Errorf("failed to create commit service: %w", err)
}
defer utilio.Close(closer)
resp, err := commitService.CommitHydratedManifests(context.Background(), &manifestsRequest)
if err != nil {
return targetRevision, "", errors, fmt.Errorf("failed to commit hydrated manifests: %w", err)
return targetRevision, "", fmt.Errorf("failed to commit hydrated manifests: %w", err)
}
return targetRevision, resp.HydratedSha, errors, nil
return targetRevision, resp.HydratedSha, nil
}
// getManifests gets the manifests for the given application and target revision. It returns the resolved revision
@@ -507,27 +456,34 @@ func (h *Hydrator) getRevisionMetadata(ctx context.Context, repoURL, project, re
}
// appNeedsHydration answers if application needs manifests hydrated.
func appNeedsHydration(app *appv1.Application) (needsHydration bool, reason string) {
	switch {
	case app.Spec.SourceHydrator == nil:
		return false, "source hydrator not configured"
	case app.Status.SourceHydrator.CurrentOperation == nil:
		return true, "no previous hydrate operation"
	case app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating:
		return false, "hydration operation already in progress"
	case app.IsHydrateRequested():
		return true, "hydrate requested"
	case !app.Spec.SourceHydrator.DeepEquals(app.Status.SourceHydrator.CurrentOperation.SourceHydrator):
		return true, "spec.sourceHydrator differs"
	case app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseFailed && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.FinishedAt.Time) > 2*time.Minute:
		return true, "previous hydrate operation failed more than 2 minutes ago"
	}
	return false, "hydration not needed"
}

// appNeedsHydration answers if application needs manifests hydrated.
func appNeedsHydration(app *appv1.Application, statusHydrateTimeout time.Duration) (needsHydration bool, reason string) {
	if app.Spec.SourceHydrator == nil {
		return false, "source hydrator not configured"
	}
	var hydratedAt *metav1.Time
	if app.Status.SourceHydrator.CurrentOperation != nil {
		hydratedAt = &app.Status.SourceHydrator.CurrentOperation.StartedAt
	}
	switch {
	case app.IsHydrateRequested():
		return true, "hydrate requested"
	case app.Status.SourceHydrator.CurrentOperation == nil:
		return true, "no previous hydrate operation"
	case !app.Spec.SourceHydrator.DeepEquals(app.Status.SourceHydrator.CurrentOperation.SourceHydrator):
		return true, "spec.sourceHydrator differs"
	case hydratedAt == nil || hydratedAt.Add(statusHydrateTimeout).Before(time.Now().UTC()):
		return true, "hydration expired"
	}
	return false, ""
}
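The two variants above differ mainly in how they treat staleness: the first retries a failed operation after a fixed two minutes, while the second re-hydrates whenever the last operation started more than `statusHydrateTimeout` ago. A minimal sketch of that expiry arithmetic (illustrative helper, not part of the codebase):

```go
package main

import (
	"fmt"
	"time"
)

// Illustrates the "hydration expired" rule from the timeout-based variant:
// an operation that started more than statusHydrateTimeout ago is stale.
func hydrationExpired(startedAt time.Time, statusHydrateTimeout time.Duration) bool {
	return startedAt.Add(statusHydrateTimeout).Before(time.Now().UTC())
}

func main() {
	timeout := 5 * time.Minute
	fmt.Println(hydrationExpired(time.Now().UTC().Add(-10*time.Minute), timeout)) // true: stale
	fmt.Println(hydrationExpired(time.Now().UTC().Add(-1*time.Minute), timeout))  // false: fresh
}
```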
// getTemplatedCommitMessage gets the multi-line commit message based on the template defined in the configmap. It is a two-step process:
// Gets the multi-line commit message based on the template defined in the configmap. It is a two-step process:
//  1. Get the metadata that the template engine will use to render the template
//  2. Pass the output of Step 1, along with the template itself, to the renderer
func getTemplatedCommitMessage(repoURL, revision, commitMessageTemplate string, dryCommitMetadata *appv1.RevisionMetadata) (string, error) {
hydratorCommitMetadata, err := hydrator.GetCommitMetadata(repoURL, revision, dryCommitMetadata)
@@ -541,20 +497,6 @@ func getTemplatedCommitMessage(repoURL, revision, commitMessageTemplate string,
return templatedCommitMsg, nil
}
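The two-step flow above first assembles metadata from the dry commit and then feeds it to a template renderer. A rough sketch of the idea using Go's standard text/template; the metadata fields and template here are illustrative, not Argo CD's actual schema:

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

// Illustrative metadata shape; the real hydrator metadata is built by
// hydrator.GetCommitMetadata from the dry commit's RevisionMetadata.
type commitMetadata struct {
	RepoURL     string
	DryRevision string
	Author      string
}

func main() {
	// Illustrative template resembling a configurable commit message.
	const tmpl = "hydrate: {{.RepoURL}}@{{.DryRevision}}\n\nDry commit by {{.Author}}\n"
	t := template.Must(template.New("commit-message").Parse(tmpl))
	md := commitMetadata{"https://example.com/org/repo", "abc123", "Jane Doe"}
	if err := t.Execute(os.Stdout, md); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```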
// genericHydrationError returns an error that summarizes the hydration errors for all applications.
func genericHydrationError(validationErrors map[string]error) error {
if len(validationErrors) == 0 {
return nil
}
keys := slices.Sorted(maps.Keys(validationErrors))
remainder := "has an error"
if len(keys) > 1 {
remainder = fmt.Sprintf("and %d more have errors", len(keys)-1)
}
return fmt.Errorf("cannot hydrate because application %s %s", keys[0], remainder)
}
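Since genericHydrationError is shown in full, a quick usage example may help: given per-app validation errors, it names the alphabetically first app and summarizes the rest. The logic below is copied from the function above so the example runs standalone:

```go
package main

import (
	"errors"
	"fmt"
	"maps"
	"slices"
)

// Copy of the summarizing logic above, inlined for a self-contained example.
func genericHydrationError(validationErrors map[string]error) error {
	if len(validationErrors) == 0 {
		return nil
	}
	keys := slices.Sorted(maps.Keys(validationErrors))
	remainder := "has an error"
	if len(keys) > 1 {
		remainder = fmt.Sprintf("and %d more have errors", len(keys)-1)
	}
	return fmt.Errorf("cannot hydrate because application %s %s", keys[0], remainder)
}

func main() {
	errs := map[string]error{
		"argocd/app-b": errors.New("path not permitted"),
		"argocd/app-a": errors.New("duplicate destination"),
	}
	// Keys are sorted, so the message deterministically names app-a first.
	fmt.Println(genericHydrationError(errs))
	// Output: cannot hydrate because application argocd/app-a and 1 more have errors
}
```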
// IsRootPath reports whether the given path resolves to the repository root.
func IsRootPath(path string) bool {
clean := filepath.Clean(path)
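The rest of this function is cut off by the diff viewer. For orientation, a plausible completion under the stated intent (reject any path that cleans to the repository root); this body is an assumption, not the verbatim source:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Assumed completion of IsRootPath (the diff is truncated here): after
// filepath.Clean, any root-resolving path collapses to "." or a bare
// separator, so those are the values to reject.
func IsRootPath(path string) bool {
	clean := filepath.Clean(path)
	return clean == "." || clean == string(filepath.Separator)
}

func main() {
	fmt.Println(IsRootPath("guestbook/..")) // true: cleans to "."
	fmt.Println(IsRootPath("/"))            // true
	fmt.Println(IsRootPath("envs/prod"))    // false
}
```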

File diff suppressed because it is too large.


@@ -1,113 +0,0 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
package mocks
import (
"context"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
mock "github.com/stretchr/testify/mock"
)
// NewRepoGetter creates a new instance of RepoGetter. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
// The first argument is typically a *testing.T value.
func NewRepoGetter(t interface {
mock.TestingT
Cleanup(func())
}) *RepoGetter {
mock := &RepoGetter{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
// RepoGetter is an autogenerated mock type for the RepoGetter type
type RepoGetter struct {
mock.Mock
}
type RepoGetter_Expecter struct {
mock *mock.Mock
}
func (_m *RepoGetter) EXPECT() *RepoGetter_Expecter {
return &RepoGetter_Expecter{mock: &_m.Mock}
}
// GetRepository provides a mock function for the type RepoGetter
func (_mock *RepoGetter) GetRepository(ctx context.Context, repoURL string, project string) (*v1alpha1.Repository, error) {
ret := _mock.Called(ctx, repoURL, project)
if len(ret) == 0 {
panic("no return value specified for GetRepository")
}
var r0 *v1alpha1.Repository
var r1 error
if returnFunc, ok := ret.Get(0).(func(context.Context, string, string) (*v1alpha1.Repository, error)); ok {
return returnFunc(ctx, repoURL, project)
}
if returnFunc, ok := ret.Get(0).(func(context.Context, string, string) *v1alpha1.Repository); ok {
r0 = returnFunc(ctx, repoURL, project)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*v1alpha1.Repository)
}
}
if returnFunc, ok := ret.Get(1).(func(context.Context, string, string) error); ok {
r1 = returnFunc(ctx, repoURL, project)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// RepoGetter_GetRepository_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetRepository'
type RepoGetter_GetRepository_Call struct {
*mock.Call
}
// GetRepository is a helper method to define mock.On call
// - ctx context.Context
// - repoURL string
// - project string
func (_e *RepoGetter_Expecter) GetRepository(ctx interface{}, repoURL interface{}, project interface{}) *RepoGetter_GetRepository_Call {
return &RepoGetter_GetRepository_Call{Call: _e.mock.On("GetRepository", ctx, repoURL, project)}
}
func (_c *RepoGetter_GetRepository_Call) Run(run func(ctx context.Context, repoURL string, project string)) *RepoGetter_GetRepository_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 string
if args[1] != nil {
arg1 = args[1].(string)
}
var arg2 string
if args[2] != nil {
arg2 = args[2].(string)
}
run(
arg0,
arg1,
arg2,
)
})
return _c
}
func (_c *RepoGetter_GetRepository_Call) Return(repository *v1alpha1.Repository, err error) *RepoGetter_GetRepository_Call {
_c.Call.Return(repository, err)
return _c
}
func (_c *RepoGetter_GetRepository_Call) RunAndReturn(run func(ctx context.Context, repoURL string, project string) (*v1alpha1.Repository, error)) *RepoGetter_GetRepository_Call {
_c.Call.Return(run)
return _c
}
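The generated mock above is the whole file being deleted in this compare. For context, this is how such a mockery-generated mock is typically wired into a test; the test body and the mocks import path are illustrative, only the EXPECT/Return API shown in the file above is assumed:

```go
package mocks_test

import (
	"context"
	"testing"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"

	"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
	// Assumed location of the generated mocks package shown above.
	"github.com/argoproj/argo-cd/v3/controller/hydrator/mocks"
)

func TestUsesRepoGetter(t *testing.T) {
	// NewRepoGetter registers a cleanup hook, so expectations are asserted
	// automatically when the test finishes.
	m := mocks.NewRepoGetter(t)
	m.EXPECT().
		GetRepository(mock.Anything, "https://example.com/org/repo", "default").
		Return(&v1alpha1.Repository{Repo: "https://example.com/org/repo"}, nil)

	repo, err := m.GetRepository(context.Background(), "https://example.com/org/repo", "default")
	require.NoError(t, err)
	require.Equal(t, "https://example.com/org/repo", repo.Repo)
}
```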


@@ -43,7 +43,7 @@ func TestGetRepoObjs(t *testing.T) {
},
}
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
source := app.Spec.GetSource()
source.RepoURL = "oci://example.com/argo/argo-cd"
@@ -92,7 +92,7 @@ func TestGetHydratorCommitMessageTemplate_WhenTemplateisNotDefined_FallbackToDef
},
}
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
tmpl, err := ctrl.GetHydratorCommitMessageTemplate()
require.NoError(t, err)
@@ -115,7 +115,7 @@ func TestGetHydratorCommitMessageTemplate(t *testing.T) {
configMapData: cm.Data,
}
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
tmpl, err := ctrl.GetHydratorCommitMessageTemplate()
require.NoError(t, err)


@@ -13,12 +13,12 @@ import (
)
func TestMetricClusterConnectivity(t *testing.T) {
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
cluster1 := v1alpha1.Cluster{Name: "cluster1", Server: "server1", Labels: map[string]string{"env": "dev", "team": "team1"}}
cluster2 := v1alpha1.Cluster{Name: "cluster2", Server: "server2", Labels: map[string]string{"env": "staging", "team": "team2"}}
cluster3 := v1alpha1.Cluster{Name: "cluster3", Server: "server3", Labels: map[string]string{"env": "production", "team": "team3"}}
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{cluster1, cluster2, cluster3}}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
type testCases struct {
testCombination


@@ -263,8 +263,8 @@ func newFakeApp(fakeAppYAML string) *argoappv1.Application {
return &app
}
func newFakeLister(ctx context.Context, fakeAppYAMLs ...string) (context.CancelFunc, applister.ApplicationLister) {
ctx, cancel := context.WithCancel(ctx)
func newFakeLister(fakeAppYAMLs ...string) (context.CancelFunc, applister.ApplicationLister) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
var fakeApps []runtime.Object
for _, appYAML := range fakeAppYAMLs {
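A recurring change throughout these test files is the move between `t.Context()` and hand-rolled `context.Background()` plumbing. `t.Context()` (added in Go 1.24) returns a context that the testing framework cancels automatically just before the test finishes; the manual pattern needs an explicit cancel. A minimal side-by-side sketch:

```go
package example

import (
	"context"
	"testing"
)

// With Go 1.24+, the testing framework owns the context's lifetime.
func TestWithTContext(t *testing.T) {
	ctx := t.Context() // canceled automatically when the test ends
	_ = ctx
}

// The older pattern: derive from Background and cancel explicitly.
func TestWithManualContext(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // required, or the context (and goroutines watching it) leak
	_ = ctx
}
```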
@@ -319,11 +319,11 @@ func testMetricServer(t *testing.T, fakeAppYAMLs []string, expectedResponse stri
func runTest(t *testing.T, cfg TestMetricServerConfig) {
t.Helper()
cancel, appLister := newFakeLister(t.Context(), cfg.FakeAppYAMLs...)
cancel, appLister := newFakeLister(cfg.FakeAppYAMLs...)
defer cancel()
mockDB := mocks.NewArgoDB(t)
mockDB.EXPECT().GetClusterServersByName(mock.Anything, "cluster1").Return([]string{"https://localhost:6443"}, nil).Maybe()
mockDB.EXPECT().GetCluster(mock.Anything, "https://localhost:6443").Return(&argoappv1.Cluster{Name: "cluster1", Server: "https://localhost:6443"}, nil).Maybe()
mockDB.On("GetClusterServersByName", mock.Anything, "cluster1").Return([]string{"https://localhost:6443"}, nil)
mockDB.On("GetCluster", mock.Anything, "https://localhost:6443").Return(&argoappv1.Cluster{Name: "cluster1", Server: "https://localhost:6443"}, nil)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, cfg.AppLabels, cfg.AppConditions, mockDB)
require.NoError(t, err)
@@ -333,7 +333,7 @@ func runTest(t *testing.T, cfg TestMetricServerConfig) {
metricsServ.registry.MustRegister(collector)
}
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -472,7 +472,7 @@ argocd_app_condition{condition="ExcludedResourceWarning",name="my-app-4",namespa
}
func TestMetricsSyncCounter(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -493,7 +493,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
metricsServ.IncSync(fakeApp, "https://localhost:6443", &argoappv1.OperationState{Phase: common.OperationSucceeded})
metricsServ.IncSync(fakeApp, "https://localhost:6443", &argoappv1.OperationState{Phase: common.OperationSucceeded})
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -526,7 +526,7 @@ func assertMetricsNotPrinted(t *testing.T, expectedLines, body string) {
}
func TestMetricsSyncDuration(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -536,7 +536,7 @@ func TestMetricsSyncDuration(t *testing.T) {
fakeAppOperationRunning := newFakeApp(fakeAppOperationRunning)
metricsServ.IncAppSyncDuration(fakeAppOperationRunning, "https://localhost:6443", fakeAppOperationRunning.Status.OperationState)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -550,7 +550,7 @@ func TestMetricsSyncDuration(t *testing.T) {
fakeAppOperationFinished := newFakeApp(fakeAppOperationFinished)
metricsServ.IncAppSyncDuration(fakeAppOperationFinished, "https://localhost:6443", fakeAppOperationFinished.Status.OperationState)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -567,7 +567,7 @@ argocd_app_sync_duration_seconds_total{dest_server="https://localhost:6443",name
}
func TestReconcileMetrics(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -590,7 +590,7 @@ argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argoc
fakeApp := newFakeApp(fakeApp)
metricsServ.IncReconcile(fakeApp, "https://localhost:6443", 5*time.Second)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -601,7 +601,7 @@ argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argoc
}
func TestOrphanedResourcesMetric(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -616,7 +616,7 @@ argocd_app_orphaned_resources_count{name="my-app-4",namespace="argocd",project="
numOrphanedResources := 1
metricsServ.SetOrphanedResourcesMetric(app, numOrphanedResources)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -627,7 +627,7 @@ argocd_app_orphaned_resources_count{name="my-app-4",namespace="argocd",project="
}
func TestMetricsReset(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -641,7 +641,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name="my-app",namespace="argocd",phase="Succeeded",project="important-project"} 2
`
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -652,7 +652,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
err = metricsServ.SetExpiration(time.Second)
require.NoError(t, err)
time.Sleep(2 * time.Second)
req, err = http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err = http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr = httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -665,7 +665,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
}
func TestWorkqueueMetrics(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -685,7 +685,7 @@ workqueue_unfinished_work_seconds{controller="test",name="test"}
`
workqueue.NewNamed("test")
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -696,7 +696,7 @@ workqueue_unfinished_work_seconds{controller="test",name="test"}
}
func TestGoMetrics(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -718,7 +718,7 @@ go_memstats_sys_bytes
go_threads
`
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)


@@ -28,7 +28,7 @@ import (
func TestGetShardByID_NotEmptyID(t *testing.T) {
db := &dbmocks.ArgoDB{}
replicasCount := 1
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
assert.Equal(t, 0, LegacyDistributionFunction(replicasCount)(&v1alpha1.Cluster{ID: "1"}))
assert.Equal(t, 0, LegacyDistributionFunction(replicasCount)(&v1alpha1.Cluster{ID: "2"}))
assert.Equal(t, 0, LegacyDistributionFunction(replicasCount)(&v1alpha1.Cluster{ID: "3"}))
@@ -38,7 +38,7 @@ func TestGetShardByID_NotEmptyID(t *testing.T) {
func TestGetShardByID_EmptyID(t *testing.T) {
db := &dbmocks.ArgoDB{}
replicasCount := 1
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := LegacyDistributionFunction
shard := distributionFunction(replicasCount)(&v1alpha1.Cluster{})
assert.Equal(t, 0, shard)
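These tests exercise shard assignment: with one replica every cluster maps to shard 0, and with zero replicas the function signals "no shard" with -1. A sketch in the spirit of the legacy algorithm, a stable hash of the cluster ID reduced modulo the replica count (illustrative; the exact upstream hash and edge cases may differ):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardForCluster sketches modulo-style shard assignment: a stable hash of
// the cluster ID, reduced by the replica count.
func shardForCluster(clusterID string, replicas int) int {
	if replicas <= 0 {
		return -1 // no controllers available, matching the tests' expectation
	}
	h := fnv.New32a()
	_, _ = h.Write([]byte(clusterID))
	return int(h.Sum32() % uint32(replicas))
}

func main() {
	// With a single replica, every cluster lands on shard 0, as asserted above.
	for _, id := range []string{"1", "2", "3"} {
		fmt.Println(shardForCluster(id, 1)) // 0
	}
	fmt.Println(shardForCluster("3", 0)) // -1
}
```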
@@ -46,7 +46,7 @@ func TestGetShardByID_EmptyID(t *testing.T) {
func TestGetShardByID_NoReplicas(t *testing.T) {
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(0).Maybe()
db.On("GetApplicationControllerReplicas").Return(0)
distributionFunction := LegacyDistributionFunction
shard := distributionFunction(0)(&v1alpha1.Cluster{})
assert.Equal(t, -1, shard)
@@ -54,7 +54,7 @@ func TestGetShardByID_NoReplicas(t *testing.T) {
func TestGetShardByID_NoReplicasUsingHashDistributionFunction(t *testing.T) {
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(0).Maybe()
db.On("GetApplicationControllerReplicas").Return(0)
distributionFunction := LegacyDistributionFunction
shard := distributionFunction(0)(&v1alpha1.Cluster{})
assert.Equal(t, -1, shard)
@@ -63,7 +63,7 @@ func TestGetShardByID_NoReplicasUsingHashDistributionFunction(t *testing.T) {
func TestGetShardByID_NoReplicasUsingHashDistributionFunctionWithClusters(t *testing.T) {
clusters, db, cluster1, cluster2, cluster3, cluster4, cluster5 := createTestClusters()
// Test with replicas set to 0
db.EXPECT().GetApplicationControllerReplicas().Return(0).Maybe()
db.On("GetApplicationControllerReplicas").Return(0)
t.Setenv(common.EnvControllerShardingAlgorithm, common.RoundRobinShardingAlgorithm)
distributionFunction := RoundRobinDistributionFunction(clusters, 0)
assert.Equal(t, -1, distributionFunction(nil))
@@ -91,7 +91,7 @@ func TestGetClusterFilterLegacy(t *testing.T) {
// shardIndex := 1 // ensuring that a shard with index 1 will process all the clusters with an "even" id (2,4,6,...)
clusterAccessor, db, cluster1, cluster2, cluster3, cluster4, _ := createTestClusters()
replicasCount := 2
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
t.Setenv(common.EnvControllerShardingAlgorithm, common.LegacyShardingAlgorithm)
distributionFunction := RoundRobinDistributionFunction(clusterAccessor, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
@@ -109,7 +109,7 @@ func TestGetClusterFilterUnknown(t *testing.T) {
os.Unsetenv(common.EnvControllerShardingAlgorithm)
t.Setenv(common.EnvControllerShardingAlgorithm, "unknown")
replicasCount := 2
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := GetDistributionFunction(clusterAccessor, appAccessor, "unknown", replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -124,7 +124,7 @@ func TestLegacyGetClusterFilterWithFixedShard(t *testing.T) {
clusterAccessor, db, cluster1, cluster2, cluster3, cluster4, _ := createTestClusters()
appAccessor, _, _, _, _, _ := createTestApps()
replicasCount := 5
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
filter := GetDistributionFunction(clusterAccessor, appAccessor, common.DefaultShardingAlgorithm, replicasCount)
assert.Equal(t, 0, filter(nil))
assert.Equal(t, 4, filter(&cluster1))
@@ -151,7 +151,7 @@ func TestRoundRobinGetClusterFilterWithFixedShard(t *testing.T) {
clusterAccessor, db, cluster1, cluster2, cluster3, cluster4, _ := createTestClusters()
appAccessor, _, _, _, _, _ := createTestApps()
replicasCount := 4
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
filter := GetDistributionFunction(clusterAccessor, appAccessor, common.RoundRobinShardingAlgorithm, replicasCount)
assert.Equal(t, 0, filter(nil))
@@ -182,7 +182,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunction2(t *testing.T) {
t.Run("replicas set to 1", func(t *testing.T) {
replicasCount := 1
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Once()
db.On("GetApplicationControllerReplicas").Return(replicasCount).Once()
distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -194,7 +194,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunction2(t *testing.T) {
t.Run("replicas set to 2", func(t *testing.T) {
replicasCount := 2
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Once()
db.On("GetApplicationControllerReplicas").Return(replicasCount).Once()
distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -206,7 +206,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunction2(t *testing.T) {
t.Run("replicas set to 3", func(t *testing.T) {
replicasCount := 3
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Once()
db.On("GetApplicationControllerReplicas").Return(replicasCount).Once()
distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -232,7 +232,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterNumber
t.Setenv(common.EnvControllerReplicas, strconv.Itoa(replicasCount))
_, db, _, _, _, _, _ := createTestClusters()
clusterAccessor := func() []*v1alpha1.Cluster { return clusterPointers }
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := RoundRobinDistributionFunction(clusterAccessor, replicasCount)
for i, c := range clusterPointers {
assert.Equal(t, i%2, distributionFunction(c))
@@ -240,7 +240,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterNumber
}
func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterIsAddedAndRemoved(t *testing.T) {
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
cluster1 := createCluster("cluster1", "1")
cluster2 := createCluster("cluster2", "2")
cluster3 := createCluster("cluster3", "3")
@@ -252,10 +252,10 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterIsAdde
clusterAccessor := getClusterAccessor(clusters)
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{cluster1, cluster2, cluster3, cluster4, cluster5}}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
// Test with replicas set to 2
replicasCount := 2
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := RoundRobinDistributionFunction(clusterAccessor, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -277,7 +277,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterIsAdde
}
func TestConsistentHashingWhenClusterIsAddedAndRemoved(t *testing.T) {
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
clusterCount := 133
prefix := "cluster"
@@ -290,10 +290,10 @@ func TestConsistentHashingWhenClusterIsAddedAndRemoved(t *testing.T) {
clusterAccessor := getClusterAccessor(clusters)
appAccessor, _, _, _, _, _ := createTestApps()
clusterList := &v1alpha1.ClusterList{Items: clusters}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
// Test with replicas set to 3
replicasCount := 3
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := ConsistentHashingWithBoundedLoadsDistributionFunction(clusterAccessor, appAccessor, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
distributionMap := map[int]int{}
@@ -347,32 +347,32 @@ func TestConsistentHashingWhenClusterIsAddedAndRemoved(t *testing.T) {
}
func TestConsistentHashingWhenClusterWithZeroReplicas(t *testing.T) {
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
clusters := []v1alpha1.Cluster{createCluster("cluster-01", "01")}
clusterAccessor := getClusterAccessor(clusters)
clusterList := &v1alpha1.ClusterList{Items: clusters}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
appAccessor, _, _, _, _, _ := createTestApps()
// Test with replicas set to 0
replicasCount := 0
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := ConsistentHashingWithBoundedLoadsDistributionFunction(clusterAccessor, appAccessor, replicasCount)
assert.Equal(t, -1, distributionFunction(nil))
}
func TestConsistentHashingWhenClusterWithFixedShard(t *testing.T) {
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
var fixedShard int64 = 1
cluster := &v1alpha1.Cluster{ID: "1", Shard: &fixedShard}
clusters := []v1alpha1.Cluster{*cluster}
clusterAccessor := getClusterAccessor(clusters)
clusterList := &v1alpha1.ClusterList{Items: clusters}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
// Test with replicas set to 5
replicasCount := 5
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
db.On("GetApplicationControllerReplicas").Return(replicasCount)
appAccessor, _, _, _, _, _ := createTestApps()
distributionFunction := ConsistentHashingWithBoundedLoadsDistributionFunction(clusterAccessor, appAccessor, replicasCount)
assert.Equal(t, fixedShard, int64(distributionFunction(cluster)))
@@ -381,7 +381,7 @@ func TestConsistentHashingWhenClusterWithFixedShard(t *testing.T) {
func TestGetShardByIndexModuloReplicasCountDistributionFunction(t *testing.T) {
clusters, db, cluster1, cluster2, _, _, _ := createTestClusters()
replicasCount := 2
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
// Test that the function returns the correct shard for cluster1 and cluster2
@@ -419,7 +419,7 @@ func TestInferShard(t *testing.T) {
}
func createTestClusters() (clusterAccessor, *dbmocks.ArgoDB, v1alpha1.Cluster, v1alpha1.Cluster, v1alpha1.Cluster, v1alpha1.Cluster, v1alpha1.Cluster) {
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
cluster1 := createCluster("cluster1", "1")
cluster2 := createCluster("cluster2", "2")
cluster3 := createCluster("cluster3", "3")
@@ -428,10 +428,10 @@ func createTestClusters() (clusterAccessor, *dbmocks.ArgoDB, v1alpha1.Cluster, v
clusters := []v1alpha1.Cluster{cluster1, cluster2, cluster3, cluster4, cluster5}
db.EXPECT().ListClusters(mock.Anything).Return(&v1alpha1.ClusterList{Items: []v1alpha1.Cluster{
db.On("ListClusters", mock.Anything).Return(&v1alpha1.ClusterList{Items: []v1alpha1.Cluster{
cluster1, cluster2, cluster3, cluster4, cluster5,
}}, nil)
return getClusterAccessor(clusters), db, cluster1, cluster2, cluster3, cluster4, cluster5
return getClusterAccessor(clusters), &db, cluster1, cluster2, cluster3, cluster4, cluster5
}
func getClusterAccessor(clusters []v1alpha1.Cluster) clusterAccessor {


@@ -16,14 +16,14 @@ import (
func TestLargeShuffle(t *testing.T) {
t.Skip()
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{}}
for i := 0; i < math.MaxInt/4096; i += 256 {
// fmt.Fprintf(os.Stdout, "%d", i)
cluster := createCluster(fmt.Sprintf("cluster-%d", i), strconv.Itoa(i))
clusterList.Items = append(clusterList.Items, cluster)
}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
clusterAccessor := getClusterAccessor(clusterList.Items)
// Test with replicas set to 256
replicasCount := 256
@@ -36,7 +36,7 @@ func TestLargeShuffle(t *testing.T) {
func TestShuffle(t *testing.T) {
t.Skip()
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
cluster1 := createCluster("cluster1", "10")
cluster2 := createCluster("cluster2", "20")
cluster3 := createCluster("cluster3", "30")
@@ -46,7 +46,7 @@ func TestShuffle(t *testing.T) {
cluster25 := createCluster("cluster6", "25")
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{cluster1, cluster2, cluster3, cluster4, cluster5, cluster6}}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
clusterAccessor := getClusterAccessor(clusterList.Items)
// Test with replicas set to 3
t.Setenv(common.EnvControllerReplicas, "3")


@@ -41,18 +41,13 @@ import (
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
appstatecache "github.com/argoproj/argo-cd/v3/util/cache/appstate"
"github.com/argoproj/argo-cd/v3/util/db"
"github.com/argoproj/argo-cd/v3/util/env"
"github.com/argoproj/argo-cd/v3/util/gpg"
utilio "github.com/argoproj/argo-cd/v3/util/io"
"github.com/argoproj/argo-cd/v3/util/settings"
"github.com/argoproj/argo-cd/v3/util/stats"
)
var (
ErrCompareStateRepo = errors.New("failed to get repo objects")
processManifestGeneratePathsEnabled = env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_PROCESS_MANIFEST_GENERATE_PATHS", true)
)
var ErrCompareStateRepo = errors.New("failed to get repo objects")
type resourceInfoProviderStub struct{}
@@ -75,7 +70,7 @@ type managedResource struct {
// AppStateManager defines methods which allow to compare application spec and actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, state *v1alpha1.OperationState)
GetRepoObjs(ctx context.Context, app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
}
@@ -258,14 +253,7 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
appNamespace = ""
}
updateRevisions := processManifestGeneratePathsEnabled &&
// updating revisions result is not required if automated sync is not enabled
app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.Automated != nil &&
// using updating revisions gains performance only if manifest generation is required.
// just reading pre-generated manifests is comparable to updating revisions time-wise
app.Status.SourceType != v1alpha1.ApplicationSourceTypeDirectory
if updateRevisions && repo.Depth == 0 && !source.IsHelm() && !source.IsOCI() && syncedRevision != "" && syncedRevision != revision && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
if !source.IsHelm() && !source.IsOCI() && syncedRevision != "" && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
// Validate the manifest-generate-path annotation to avoid generating manifests if it has not changed.
updateRevisionResult, err := repoClient.UpdateRevisionForPaths(ctx, &apiclient.UpdateRevisionForPathsRequest{
Repo: repo,
@@ -362,7 +350,7 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
}
// ResolveGitRevision will resolve the given revision to a full commit SHA. Only works for git.
func (m *appStateManager) ResolveGitRevision(repoURL, revision string) (string, error) {
func (m *appStateManager) ResolveGitRevision(repoURL string, revision string) (string, error) {
conn, repoClient, err := m.repoClientset.NewRepoServerClient()
if err != nil {
return "", fmt.Errorf("failed to connect to repo server: %w", err)
@@ -541,7 +529,7 @@ func isManagedNamespace(ns *unstructured.Unstructured, app *v1alpha1.Application
// CompareAppState compares application git state to the live app state, using the specified
// revision and supplied source. If revision or overrides are empty, then compares against
// revision and overrides in the app spec.
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
ts := stats.NewTimingStats()
logCtx := log.WithFields(applog.GetAppLogFields(app))


@@ -48,7 +48,7 @@ func TestCompareAppStateEmpty(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -66,7 +66,7 @@ func TestCompareAppStateEmpty(t *testing.T) {
// TestCompareAppStateRepoError tests the case when CompareAppState notices a repo error
func TestCompareAppStateRepoError(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{manifestResponses: make([]*apiclient.ManifestResponse, 3)}, errors.New("test repo error"))
ctrl := newFakeController(&fakeData{manifestResponses: make([]*apiclient.ManifestResponse, 3)}, errors.New("test repo error"))
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -112,7 +112,7 @@ func TestCompareAppStateNamespaceMetadataDiffers(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -161,7 +161,7 @@ func TestCompareAppStateNamespaceMetadataDiffersToManifest(t *testing.T) {
kube.GetResourceKey(ns): ns,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -219,7 +219,7 @@ func TestCompareAppStateNamespaceMetadata(t *testing.T) {
kube.GetResourceKey(ns): ns,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -278,7 +278,7 @@ func TestCompareAppStateNamespaceMetadataIsTheSame(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -306,7 +306,7 @@ func TestCompareAppStateMissing(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -338,7 +338,7 @@ func TestCompareAppStateExtra(t *testing.T) {
key: pod,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -369,7 +369,7 @@ func TestCompareAppStateHook(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -401,7 +401,7 @@ func TestCompareAppStateSkipHook(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -431,7 +431,7 @@ func TestCompareAppStateCompareOptionIgnoreExtraneous(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
@@ -465,7 +465,7 @@ func TestCompareAppStateExtraHook(t *testing.T) {
key: pod,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -494,7 +494,7 @@ func TestAppRevisionsSingleSource(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
app := newFakeApp()
revisions := make([]string, 0)
@@ -534,7 +534,7 @@ func TestAppRevisionsMultiSource(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
app := newFakeMultiSourceApp()
revisions := make([]string, 0)
@@ -583,7 +583,7 @@ func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
kube.GetResourceKey(obj3): obj3,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -624,7 +624,7 @@ func TestCompareAppStateManagedNamespaceMetadataWithLiveNsDoesNotGetPruned(t *te
kube.GetResourceKey(ns): ns,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, []string{}, app.Spec.Sources, false, false, nil, false)
require.NoError(t, err)
@@ -676,7 +676,7 @@ func TestCompareAppStateWithManifestGeneratePath(t *testing.T) {
updateRevisionForPathsResponse: &apiclient.UpdateRevisionForPathsResponse{},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
revisions := make([]string, 0)
revisions = append(revisions, "abc123")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, app.Spec.GetSources(), false, false, nil, false)
@@ -698,7 +698,7 @@ func TestSetHealth(t *testing.T) {
Namespace: "default",
},
})
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -734,7 +734,7 @@ func TestPreserveStatusTimestamp(t *testing.T) {
Namespace: "default",
},
})
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -770,7 +770,7 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
Namespace: "default",
},
})
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -799,7 +799,7 @@ func TestSetManagedResourcesWithOrphanedResources(t *testing.T) {
proj.Spec.OrphanedResources = &v1alpha1.OrphanedResourcesMonitorSettings{}
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app.Namespace, "guestbook"): {
@@ -828,7 +828,7 @@ func TestSetManagedResourcesWithResourcesOfAnotherApp(t *testing.T) {
app2 := newFakeApp()
app2.Name = "app2"
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app1, app2, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app2.Namespace, "guestbook"): {
@@ -852,7 +852,7 @@ func TestReturnUnknownComparisonStateOnSettingLoadError(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, proj},
configMapData: map[string]string{
"resource.customizations": "invalid setting",
@@ -878,7 +878,7 @@ func TestSetManagedResourcesKnownOrphanedResourceExceptions(t *testing.T) {
app := newFakeApp()
app.Namespace = "default"
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app.Namespace, "guestbook"): {
@@ -902,7 +902,7 @@ func TestSetManagedResourcesKnownOrphanedResourceExceptions(t *testing.T) {
func Test_appStateManager_persistRevisionHistory(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app},
}, nil)
manager := ctrl.appStateManager.(*appStateManager)
@@ -1007,7 +1007,7 @@ func TestSignedResponseNoSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1034,7 +1034,7 @@ func TestSignedResponseNoSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1066,7 +1066,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1093,7 +1093,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1120,7 +1120,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1147,7 +1147,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1175,7 +1175,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
testProj := signedProj
testProj.Spec.SignatureKeys[0].KeyID = "4AEE18F83AFDEB24"
sources := make([]v1alpha1.ApplicationSource, 0)
@@ -1207,7 +1207,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
}
// it doesn't matter for our test whether local manifests are valid
localManifests := []string{"foobar"}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1237,7 +1237,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1267,7 +1267,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
}
// it doesn't matter for our test whether local manifests are valid
localManifests := []string{""}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1395,7 +1395,7 @@ func TestIsLiveResourceManaged(t *testing.T) {
},
},
})
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -1765,7 +1765,7 @@ func TestCompareAppStateDefaultRevisionUpdated(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1788,7 +1788,7 @@ func TestCompareAppStateRevisionUpdatedWithHelmSource(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1807,10 +1807,10 @@ func Test_normalizeClusterScopeTracking(t *testing.T) {
Namespace: "test",
},
})
c := &cachemocks.ClusterCache{}
c.EXPECT().IsNamespaced(mock.Anything).Return(false, nil)
c := cachemocks.ClusterCache{}
c.On("IsNamespaced", mock.Anything).Return(false, nil)
var called bool
err := normalizeClusterScopeTracking([]*unstructured.Unstructured{obj}, c, func(u *unstructured.Unstructured) error {
err := normalizeClusterScopeTracking([]*unstructured.Unstructured{obj}, &c, func(u *unstructured.Unstructured) error {
// We expect that the normalization function will call this callback with an obj that has had the namespace set
// to empty.
called = true
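The pair of changes above toggles between mockery's generated expecter style (`c.EXPECT().IsNamespaced(...)`) and classic testify string-based expectations (`c.On("IsNamespaced", ...)`); both register the same stubbed call, the expecter form being a type-safe wrapper that mockery generates around `.On`. A minimal, self-contained sketch of the string-based style, using a hypothetical hand-rolled mock in place of the generated `cachemocks.ClusterCache`:
```go
package main

import (
	"fmt"

	"github.com/stretchr/testify/mock"
)

// ClusterCacheMock is a hypothetical stand-in for the generated
// cachemocks.ClusterCache mock referenced in the hunk above.
type ClusterCacheMock struct {
	mock.Mock
}

// IsNamespaced records the call and returns whatever the test stubbed in.
func (m *ClusterCacheMock) IsNamespaced(gk any) (bool, error) {
	args := m.Called(gk)
	return args.Bool(0), args.Error(1)
}

func main() {
	c := &ClusterCacheMock{}
	// String-based expectation: any argument yields (false, nil), i.e. the
	// resource is treated as cluster-scoped.
	c.On("IsNamespaced", mock.Anything).Return(false, nil)

	namespaced, err := c.IsNamespaced("ClusterRole")
	fmt.Println(namespaced, err) // false <nil>
}
```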
@@ -1838,7 +1838,7 @@ func TestCompareAppState_DoesNotCallUpdateRevisionForPaths_ForOCI(t *testing.T)
Revision: "abc123",
},
}
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
source := app.Spec.GetSource()
source.RepoURL = "oci://example.com/argo/argo-cd"


@@ -75,7 +75,7 @@ func TestPersistRevisionHistory(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -121,7 +121,7 @@ func TestPersistManagedNamespaceMetadataState(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -152,7 +152,7 @@ func TestPersistRevisionHistoryRollback(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
// Sync with source specified
source := v1alpha1.ApplicationSource{
@@ -206,7 +206,7 @@ func TestSyncComparisonError(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -263,7 +263,7 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
},
managedLiveObjs: liveObjects,
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
return &fixture{
application: app,
@@ -350,7 +350,7 @@ func TestSyncWindowDeniesSync(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
return &fixture{
application: app,
@@ -1620,7 +1620,7 @@ func TestSyncWithImpersonate(t *testing.T) {
},
additionalObjs: additionalObjs,
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
return &fixture{
application: app,
project: project,
@@ -1780,7 +1780,7 @@ func TestClientSideApplyMigration(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
return &fixture{
application: app,

Binary image file not shown (before: 228 KiB).


@@ -68,9 +68,9 @@ make builder-image IMAGE_NAMESPACE=argoproj IMAGE_TAG=v1.0.0
Every commit to master is built and published to `ghcr.io/argoproj/argo-cd/argocd:<version>-<short-sha>`. The list of images is available at
[https://github.com/argoproj/argo-cd/packages](https://github.com/argoproj/argo-cd/packages).
> [!NOTE]
> GitHub docker registry [requires](https://github.community/t5/GitHub-Actions/docker-pull-from-public-GitHub-Package-Registry-fail-with-quot/m-p/32888#M1294) authentication to read
> even publicly available packages. Follow the steps from Kubernetes [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry)
> to configure image pull secret if you want to use `ghcr.io/argoproj/argo-cd/argocd` image.
!!! note
GitHub docker registry [requires](https://github.community/t5/GitHub-Actions/docker-pull-from-public-GitHub-Package-Registry-fail-with-quot/m-p/32888#M1294) authentication to read
even publicly available packages. Follow the steps from Kubernetes [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry)
to configure image pull secret if you want to use `ghcr.io/argoproj/argo-cd/argocd` image.
The image is automatically deployed to the dev Argo CD instance: [https://cd.apps.argoproj.io/](https://cd.apps.argoproj.io/)


@@ -1,18 +0,0 @@
The Argo CD UI displays icons for various Kubernetes resource types to help users quickly identify them. Argo CD
includes a set of built-in icons for common resource types.
You can contribute additional icons for custom resource types by following these steps:
1. Ensure the license is compatible with Apache 2.0.
2. Add the icon file to the `ui/src/assets/images/resources/<group>/icon.svg` path in the Argo CD repository.
3. Modify the SVG to use the correct color, `#8fa4b1`.
4. Run `make resourceiconsgen` to update the generated typescript file that lists all available icons.
5. Create a pull request to the Argo CD repository with your changes.
`<group>` is the API group of the custom resource. For example, if you are adding an icon for a custom resource with the
API group `example.com`, you would place the icon at `ui/src/assets/images/resources/example.com/icon.svg`.
If you want the same icon to apply to resources in multiple API groups with the same suffix, you can create a directory
prefixed with an underscore. The underscore will be interpreted as a wildcard. For example, to apply the same icon to
resources in the `example.com` and `another.example.com` API groups, you would place the icon at
`ui/src/assets/images/resources/_.example.com/icon.svg`.
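The underscore-as-wildcard rule above amounts to suffix matching on the API group. A sketch of that matching logic, in Go for illustration only (the function is hypothetical; the real lookup lives in the Argo CD UI's TypeScript code):
```go
package main

import (
	"fmt"
	"strings"
)

// iconDirMatches is a hypothetical illustration of the wildcard rule: a
// directory named "_.example.com" matches "example.com" itself and any API
// group ending in ".example.com"; other directories must match exactly.
func iconDirMatches(dir, apiGroup string) bool {
	if strings.HasPrefix(dir, "_") {
		suffix := strings.TrimPrefix(dir, "_") // e.g. ".example.com"
		return apiGroup == strings.TrimPrefix(suffix, ".") ||
			strings.HasSuffix(apiGroup, suffix)
	}
	return dir == apiGroup
}

func main() {
	fmt.Println(iconDirMatches("_.example.com", "another.example.com")) // true
	fmt.Println(iconDirMatches("_.example.com", "example.com"))         // true
	fmt.Println(iconDirMatches("example.com", "another.example.com"))   // false
}
```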


@@ -26,8 +26,8 @@ api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run
```
This configuration example will be used as the basis for the next steps.
> [!NOTE]
> The Procfile for a component may change with time. Please go through the Procfile and make sure you use the latest configuration for debugging.
!!! note
The Procfile for a component may change with time. Please go through the Procfile and make sure you use the latest configuration for debugging.
### Configure component env variables
The component that you will run in your IDE for debugging (`api-server` in our case) will need env variables. Copy the env variables from `Procfile`, located in the `argo-cd` root folder of your development branch. The env variables are located before the `$COMMAND` section in the `sh -c` section of the component run command.
@@ -112,8 +112,8 @@ Example for an `api-server` launch configuration snippet, based on our above exa
</component>
```
> [!NOTE]
> As an alternative to importing the above file to Goland, you can create a Run/Debug Configuration using the official [Goland docs](https://www.jetbrains.com/help/go/go-build.html) and just copy the `parameters`, `directory` and `PATH` sections from the example above (specifying `Run kind` as `Directory` in the Run/Debug Configurations wizard)
!!! note
As an alternative to importing the above file to Goland, you can create a Run/Debug Configuration using the official [Goland docs](https://www.jetbrains.com/help/go/go-build.html) and just copy the `parameters`, `directory` and `PATH` sections from the example above (specifying `Run kind` as `Directory` in the Run/Debug Configurations wizard)
## Run Argo CD without the debugged component
Next, we need to run all Argo CD components, except for the debugged component (because we will run that component separately in the IDE).
@@ -143,4 +143,4 @@ To debug the `api-server`, run:
Finally, run the component you wish to debug from your IDE and make sure it does not have any errors.
## Important
When running Argo CD components separately, ensure components aren't creating conflicts - each component needs to be up exactly once, be it running locally with the local toolchain or running from your IDE. Otherwise you may get errors about ports not available or even debugging a process that does not contain your code changes.
When running Argo CD components separately, ensure components aren't creating conflicts - each component needs to be up exactly once, be it running locally with the local toolchain or running from your IDE. Otherwise you may get errors about ports not available or even debugging a process that does not contain your code changes.


@@ -21,7 +21,7 @@ curl -sSfL https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/i
Connect to one of the services. For example, to debug the main ArgoCD server, run:
```shell
kubectl config set-context --current --namespace argocd
telepresence helm install --set-json agent.securityContext={} # Installs telepresence into your cluster
telepresence helm install --set agent.securityContext={} # Installs telepresence into your cluster
telepresence connect # Starts the connection to your cluster (bound to the current namespace)
telepresence intercept argocd-server --port 8080:http --env-file .envrc.remote # Starts the interception
```
@@ -37,7 +37,7 @@ telepresence status
Stop the intercept using:
```shell
telepresence leave argocd-server
telepresence leave argocd-server-argocd
```
And uninstall telepresence from your cluster:


@@ -1,5 +1,38 @@
# Managing Dependencies
## GitOps Engine (`github.com/argoproj/gitops-engine`)
### Repository
https://github.com/argoproj/gitops-engine
### Pulling changes from `gitops-engine`
After your GitOps Engine PR has been merged, ArgoCD needs to be updated to pull in the version of the GitOps engine that contains your change. Here are the steps:
- Retrieve the SHA hash for your commit. You will use this in the next step.
- From the `argo-cd` folder, run the following command
`go get github.com/argoproj/gitops-engine@<git-commit-sha>`
If you get an error message `invalid version: unknown revision`, then you used the wrong SHA hash.
- Run:
`go mod tidy`
- The following files are changed:
- `go.mod`
- `go.sum`
- Create an ArgoCD PR with a `refactor:` type in its title for the two file changes.
### Tips:
- See https://github.com/argoproj/argo-cd/pull/4434 as an example
- The PR might require additional, dependent changes in ArgoCD that are directly impacted by the changes made in the engine.
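For orientation, a commit-level bump like the one described above shows up in `go.mod` as a pseudo-version encoding the base tag, commit timestamp, and a 12-character SHA prefix. A sketch with a purely hypothetical module path and version strings:
```go
// go.mod (sketch - every version below is hypothetical)
module github.com/example/argo-cd

go 1.24

require (
	// `go get github.com/argoproj/gitops-engine@<git-commit-sha>` rewrites
	// this line to a pseudo-version derived from that commit.
	github.com/argoproj/gitops-engine v0.7.1-0.20250101120000-abcdef123456
)
```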
## Notifications Engine (`github.com/argoproj/notifications-engine`)
### Repository


@@ -7,7 +7,7 @@
## Preface
When you have developed and possibly manually tested the code you want to contribute, you should ensure that everything builds correctly. Commit your changes locally and perform the following steps; for each step, the commands for both the local and virtualized toolchains are listed.
### Docker privileges for virtualized toolchain users
### Docker priviliges for virtualized toolchain users
[These instructions](toolchain-guide.md#docker-privileges) are relevant for most of the steps below
### Using Podman for virtualized toolchain users
@@ -29,6 +29,11 @@ As build dependencies change over time, you have to synchronize your development
* `make dep-ui` or `make dep-ui-local`
Argo CD recently migrated to Go modules. Usually, dependencies will be downloaded at build time, but the Makefile provides two targets to download and vendor all dependencies:
* `make mod-download` or `make mod-download-local` will download all required Go modules and
* `make mod-vendor` or `make mod-vendor-local` will vendor those dependencies into the Argo CD source tree
### Generate API glue code and other assets
Argo CD relies on Google's [Protocol Buffers](https://developers.google.com/protocol-buffers) for its API, which makes heavy use of auto-generated glue code and stubs. Whenever you touch parts of the API code, you must re-generate the auto-generated code.
@@ -37,8 +42,8 @@ Argo CD relies on Google's [Protocol Buffers](https://developers.google.com/prot
* Check if something has changed by running `git status` or `git diff`
* Commit any possible changes to your local Git branch, an appropriate commit message would be `Changes from codegen`, for example.
> [!NOTE]
> There are a few non-obvious assets that are auto-generated. You should not change the autogenerated assets, as they will be overwritten by a subsequent run of `make codegen`. Instead, change their source files. Prominent examples of non-obvious auto-generated code are `swagger.json` or the installation manifest YAMLs.
!!!note
There are a few non-obvious assets that are auto-generated. You should not change the autogenerated assets, as they will be overwritten by a subsequent run of `make codegen`. Instead, change their source files. Prominent examples of non-obvious auto-generated code are `swagger.json` or the installation manifest YAMLs.
### Build your code and run unit tests


@@ -38,7 +38,7 @@ If you want to build and test the site directly on your local machine without th
## Analytics
> [!TIP]
> Don't forget to disable your ad-blocker when testing.
!!! tip
Don't forget to disable your ad-blocker when testing.
We collect [Google Analytics](https://analytics.google.com/analytics/web/#/report-home/a105170809w198079555p192782995).


@@ -1,10 +1,9 @@
# Proxy Extensions
> [!WARNING]
> **Beta Feature (Since 2.7.0)**
>
> This feature is in the [Beta](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#beta) stage.
> It is generally considered stable, but there may be unhandled edge cases.
!!! warning "Beta Feature (Since 2.7.0)"
This feature is in the [Beta](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#beta) stage.
It is generally considered stable, but there may be unhandled edge cases.
## Overview


@@ -129,30 +129,33 @@ It is also possible to add an optional flyout widget to your extension. It can b
Below is an example of an extension using the flyout widget:
```javascript
((window) => {
const component = (props: { openFlyout: () => any }) => {
const component = (props: {
openFlyout: () => any
}) => {
return React.createElement(
"div",
{
style: { padding: "10px" },
onClick: () => props.openFlyout(),
},
"Hello World"
"div",
{
style: { padding: "10px" },
onClick: () => props.openFlyout()
},
"Hello World"
);
};
const flyout = () => {
return React.createElement(
"div",
{ style: { padding: "10px" } },
"This is a flyout"
"div",
{ style: { padding: "10px" } },
"This is a flyout"
);
};
window.extensionsAPI.registerStatusPanelExtension(
component,
"My Extension",
"my_extension",
flyout
component,
"My Extension",
"my_extension",
flyout
);
})(window);
```
@@ -180,9 +183,7 @@ The callback function `shouldDisplay` should return true if the extension should
```typescript
const shouldDisplay = (app: Application) => {
return (
application.metadata?.labels?.["application.environmentLabelKey"] === "prd"
);
return application.metadata?.labels?.['application.environmentLabelKey'] === "prd";
};
```
@@ -195,68 +196,28 @@ Below is an example of a simple extension with a flyout widget:
};
const flyout = () => {
return React.createElement(
"div",
{ style: { padding: "10px" } },
"This is a flyout"
"div",
{ style: { padding: "10px" } },
"This is a flyout"
);
};
const component = () => {
return React.createElement(
"div",
{
onClick: () => flyout(),
},
"Toolbar Extension Test"
"div",
{
onClick: () => flyout()
},
"Toolbar Extension Test"
);
};
window.extensionsAPI.registerTopBarActionMenuExt(
component,
"Toolbar Extension Test",
"Toolbar_Extension_Test",
flyout,
shouldDisplay,
"",
true
component,
"Toolbar Extension Test",
"Toolbar_Extension_Test",
flyout,
shouldDisplay,
'',
true
);
})(window);
```
## App View Extensions
App View extensions allow you to create a new Application Details View for an application. This view would be selectable alongside the other views like the Node Tree, Pod, and Network views. When the extension's icon is clicked, the extension's component is rendered as the main content of the application view.
Register this extension through the `extensionsAPI.registerAppViewExtension` method.
```typescript
registerAppViewExtension(
component: ExtensionComponent, // the component to be rendered
title: string, // the title of the page once the component is rendered
icon: string, // the favicon classname for the icon tab
shouldDisplay?: (app: Application): boolean // returns true if the view should be available
)
```
Below is an example of a simple extension:
```javascript
((window) => {
const component = () => {
return React.createElement(
"div",
{ style: { padding: "10px" } },
"Hello World"
);
};
window.extensionsAPI.registerAppViewExtension(
component,
"My Extension",
"fa-question-circle",
(app) =>
application.metadata?.labels?.["application.environmentLabelKey"] ===
"prd"
);
})(window);
```
Example rendered extension:
![destination](../../assets/application-view-extension.png)


@@ -6,8 +6,8 @@
Sure thing! You can either open an Enhancement Proposal in our GitHub issue tracker or you can [join us on Slack](https://argoproj.github.io/community/join-slack) in channel #argo-contributors to discuss your ideas and get guidance for submitting a PR.
> [!NOTE]
> Regular [contributor meetings](https://argo-cd.readthedocs.io/en/latest/developer-guide/code-contributions/#regular-contributor-meeting) are held weekly. Please follow the link for more details.
!!! note
Regular [contributor meetings](https://argo-cd.readthedocs.io/en/latest/developer-guide/code-contributions/#regular-contributor-meeting) are held weekly. Please follow the link for more details.
### No one has looked at my PR yet. Why?


@@ -1,12 +1,10 @@
# Overview
> [!WARNING]
> **As an Argo CD user, you probably don't want to be reading this section of the docs.**
>
> This part of the manual is aimed at helping people contribute to Argo CD, documentation, or to develop third-party applications that interact with Argo CD, e.g.
>
> * A chat bot
> * A Slack integration
!!! warning "As an Argo CD user, you probably don't want to be reading this section of the docs."
This part of the manual is aimed at helping people contribute to Argo CD, documentation, or to develop third-party applications that interact with Argo CD, e.g.
* A chat bot
* A Slack integration
## Preface
#### Understand the [Code Contribution Guide](code-contributions.md)
@@ -28,7 +26,7 @@ For backend and frontend contributions, that require a full building-testing-run
## Contributing to Argo CD Notifications documentation
This guide will help you get started quickly with contributing documentation changes, performing the minimum setup you'll need.
The notifications docs are located in [notifications-engine](https://github.com/argoproj/notifications-engine) Git repository and require 2 pull requests: one for the `notifications-engine` repo and one for the `argo-cd` repo.
The notificaions docs are located in [notifications-engine](https://github.com/argoproj/notifications-engine) Git repository and require 2 pull requests: one for the `notifications-engine` repo and one for the `argo-cd` repo.
For backend and frontend contributions that require a full building-testing-running-locally cycle, please refer to [Contributing to Argo CD backend and frontend](index.md#contributing-to-argo-cd-backend-and-frontend)
### Fork and clone Argo CD repository
@@ -102,4 +100,4 @@ Need help? Start with the [Contributors FAQ](faq/)
* [Config Management Plugins](../operator-manual/config-management-plugins/)
## Contributing to Argo Website
The Argo website is maintained in the [argo-site](https://github.com/argoproj/argo-site) repository.
The Argo website is maintained in the [argo-site](https://github.com/argoproj/argo-site) repository.


@@ -71,17 +71,17 @@ Example:
./hack/trigger-release.sh v2.7.2 upstream
```
> [!TIP]
> The tag must be in one of the following formats to trigger the GH workflow:<br>
> * GA: `v<MAJOR>.<MINOR>.<PATCH>`<br>
> * Pre-release: `v<MAJOR>.<MINOR>.<PATCH>-rc<RC#>`
!!! tip
The tag must be in one of the following formats to trigger the GH workflow:<br>
* GA: `v<MAJOR>.<MINOR>.<PATCH>`<br>
* Pre-release: `v<MAJOR>.<MINOR>.<PATCH>-rc<RC#>`
Once the script is executed successfully, a GitHub workflow will start execution. You can follow its progress under the [Actions](https://github.com/argoproj/argo-cd/actions/workflows/release.yaml) tab; the name of the action is `Publish ArgoCD Release`.
> [!WARNING]
> You cannot perform more than one release on the same release branch at the
> same time.
!!! warning
You cannot perform more than one release on the same release branch at the
same time.
### Verifying automated release


@@ -230,8 +230,8 @@ make manifests-local
(depending on your toolchain) to build a new set of installation manifests which include your specific image reference.
> [!NOTE]
> Do not commit these manifests to your repository. If you want to revert the changes, the easiest way is to unset `IMAGE_NAMESPACE` and `IMAGE_TAG` from your environment and run `make manifests` again. This will re-create the default manifests.
!!!note
Do not commit these manifests to your repository. If you want to revert the changes, the easiest way is to unset `IMAGE_NAMESPACE` and `IMAGE_TAG` from your environment and run `make manifests` again. This will re-create the default manifests.
#### Configure your cluster with custom manifests


@@ -7,13 +7,11 @@
## Preface
> [!NOTE]
> **Before you start**
>
> The Argo CD project continuously grows, both in terms of features and community size. It gets adopted by more and more organizations which entrust Argo CD to handle their critical production workloads. Thus, we need to take great care with any changes that affect compatibility, performance, scalability, stability and security of Argo CD. For this reason, every new feature or larger enhancement must be properly designed and discussed before it gets accepted into the code base.
>
> We do welcome and encourage everyone to participate in the Argo CD project, but please understand that we can't accept each and every contribution from the community, for various reasons. If you want to submit code for a great new feature or enhancement, we kindly ask you to take a look at the
> [code contribution guide](code-contributions.md#) before you start to write code or submit a PR.
!!!note "Before you start"
The Argo CD project continuously grows, both in terms of features and community size. It gets adopted by more and more organizations which entrust Argo CD to handle their critical production workloads. Thus, we need to take great care with any changes that affect compatibility, performance, scalability, stability and security of Argo CD. For this reason, every new feature or larger enhancement must be properly designed and discussed before it gets accepted into the code base.
We do welcome and encourage everyone to participate in the Argo CD project, but please understand that we can't accept each and every contribution from the community, for various reasons. If you want to submit code for a great new feature or enhancement, we kindly ask you to take a look at the
[code contribution guide](code-contributions.md#) before you start to write code or submit a PR.
If you want to submit a PR, please read this document carefully, as it contains important information guiding you through our PR quality gates.
@@ -36,8 +34,8 @@ make pre-commit-local
When you submit a PR against Argo CD's GitHub repository, a couple of CI checks will be run automatically to ensure your changes will build fine and meet certain quality standards. Your contribution needs to pass those checks in order to be merged into the repository.
> [!NOTE]
> Please make sure that you always create PRs from a branch that is up-to-date with the latest changes from Argo CD's master branch. Depending on how long it takes for the maintainers to review and merge your PR, it might be necessary to pull in latest changes into your branch again.
!!!note
Please make sure that you always create PRs from a branch that is up-to-date with the latest changes from Argo CD's master branch. Depending on how long it takes for the maintainers to review and merge your PR, it might be necessary to pull in latest changes into your branch again.
Please understand that we, as an Open Source project, have limited capacity for reviewing and merging PRs to Argo CD. We will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.


@@ -6,18 +6,16 @@ namespace `argocd-e2e***` is created prior to the execution of the tests. The th
The [/test/e2e/testdata](https://github.com/argoproj/argo-cd/tree/master/test/e2e/testdata) directory contains various Argo CD applications. Before test execution, the directory is copied into a `/tmp/argo-e2e***` temp directory and used in tests as a Git repository via a file URL: `file:///tmp/argo-e2e***`.
> [!NOTE]
> **Rancher Desktop Volume Sharing**
>
> The e2e git server runs in a container. If you are using Rancher Desktop, you will need to enable volume sharing for
> the e2e container to access the testdata directory. To do this, add the following to
> `~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml` and restart Rancher Desktop:
>
> ```yaml
> mounts:
> - location: /private/tmp
> writable: true
> ```
!!! note "Rancher Desktop Volume Sharing"
The e2e git server runs in a container. If you are using Rancher Desktop, you will need to enable volume sharing for
the e2e container to access the testdata directory. To do this, add the following to
`~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml` and restart Rancher Desktop:
```yaml
mounts:
- location: /private/tmp
writable: true
```
## Running Tests Locally

Some files were not shown because too many files have changed in this diff.