mirror of
https://github.com/argoproj/argo-cd.git
synced 2026-02-21 01:58:46 +01:00
Compare commits
1 Commits
v2.7.0
...
security-s
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
c7a0dff47e |
@@ -11,17 +11,3 @@ cmd/**/debug
|
||||
debug.test
|
||||
coverage.out
|
||||
ui/node_modules/
|
||||
test-results/
|
||||
test/
|
||||
manifests/
|
||||
hack/
|
||||
docs/
|
||||
examples/
|
||||
.github/
|
||||
!test/fixture
|
||||
!test/container
|
||||
!hack/installers
|
||||
!hack/gpg-wrapper.sh
|
||||
!hack/git-verify-wrapper.sh
|
||||
!hack/tool-versions.sh
|
||||
!hack/install.sh
|
||||
3
.github/cherry-pick-bot.yml
vendored
3
.github/cherry-pick-bot.yml
vendored
@@ -1,3 +0,0 @@
|
||||
enabled: true
|
||||
preservePullRequestTitle: true
|
||||
|
||||
25
.github/dependabot.yml
vendored
25
.github/dependabot.yml
vendored
@@ -16,28 +16,3 @@ updates:
|
||||
directory: "/ui/"
|
||||
schedule:
|
||||
interval: "daily"
|
||||
|
||||
- package-ecosystem: "docker"
|
||||
directory: "/"
|
||||
schedule:
|
||||
interval: "daily"
|
||||
|
||||
- package-ecosystem: "docker"
|
||||
directory: "/test/container/"
|
||||
schedule:
|
||||
interval: "daily"
|
||||
|
||||
- package-ecosystem: "docker"
|
||||
directory: "/test/e2e/multiarch-container/"
|
||||
schedule:
|
||||
interval: "daily"
|
||||
|
||||
- package-ecosystem: "docker"
|
||||
directory: "/test/remote/"
|
||||
schedule:
|
||||
interval: "daily"
|
||||
|
||||
- package-ecosystem: "docker"
|
||||
directory: "/ui-test/"
|
||||
schedule:
|
||||
interval: "daily"
|
||||
|
||||
5
.github/pull_request_template.md
vendored
5
.github/pull_request_template.md
vendored
@@ -11,10 +11,7 @@ Checklist:
|
||||
* [ ] Does this PR require documentation updates?
|
||||
* [ ] I've updated documentation as required by this PR.
|
||||
* [ ] Optional. My organization is added to USERS.md.
|
||||
* [ ] I have signed off all my commits as required by [DCO](https://github.com/argoproj/argoproj/blob/master/community/CONTRIBUTING.md#legal)
|
||||
* [ ] I have signed off all my commits as required by [DCO](https://github.com/argoproj/argoproj/tree/master/community#contributing-to-argo)
|
||||
* [ ] I have written unit and/or e2e tests for my change. PRs without these are unlikely to be merged.
|
||||
* [ ] My build is green ([troubleshooting builds](https://argo-cd.readthedocs.io/en/latest/developer-guide/ci/)).
|
||||
* [ ] My new feature complies with the [feature status](https://github.com/argoproj/argoproj/blob/master/community/feature-status.md) guidelines.
|
||||
* [ ] I have added a brief description of why this PR is necessary and/or what this PR solves.
|
||||
|
||||
Please see [Contribution FAQs](https://argo-cd.readthedocs.io/en/latest/developer-guide/faq/) if you have questions about your pull-request.
|
||||
|
||||
38
.github/workflows/README.md
vendored
38
.github/workflows/README.md
vendored
@@ -1,38 +0,0 @@
|
||||
# Workflows
|
||||
|
||||
| Workflow | Description |
|
||||
|--------------------|----------------------------------------------------------------|
|
||||
| ci-build.yaml | Build, lint, test, codegen, build-ui, analyze, e2e-test |
|
||||
| codeql.yaml | CodeQL analysis |
|
||||
| image-reuse.yaml | Build, push, and Sign container images |
|
||||
| image.yaml | Build container image for PR's & publish for push events |
|
||||
| pr-title-check.yaml| Lint PR for semantic information |
|
||||
| init-release.yaml | Build manifests and version then create a PR for release branch|
|
||||
| release.yaml | Build images, cli-binaries, provenances, and post actions |
|
||||
| update-snyk.yaml | Scheduled snyk reports |
|
||||
|
||||
# Reusable workflows
|
||||
|
||||
## image-reuse.yaml
|
||||
|
||||
- The resuable workflow can be used to publish or build images with multiple container registries(Quay,GHCR, dockerhub), and then sign them with cosign when an image is published.
|
||||
- A GO version `must` be specified e.g. 1.19
|
||||
- The image name for each registry *must* contain the tag. Note: multiple tags are allowed for each registry using a CSV type.
|
||||
- Multiple platforms can be specified e.g. linux/amd64,linux/arm64
|
||||
- Images are not published by default. A boolean value must be set to `true` to push images.
|
||||
- An optional target can be specified.
|
||||
|
||||
| Inputs | Description | Type | Required | Defaults |
|
||||
|-------------------|-------------------------------------|-------------|----------|-----------------|
|
||||
| go-version | Version of Go to be used | string | true | none |
|
||||
| quay_image_name | Full image name and tag | CSV, string | false | none |
|
||||
| ghcr_image_name | Full image name and tag | CSV, string | false | none |
|
||||
| docker_image_name | Full image name and tag | CSV, string | false | none |
|
||||
| platforms | Platforms to build (linux/amd64) | CSV, string | false | linux/amd64 |
|
||||
| push | Whether to push image/s to registry | boolean | false | false |
|
||||
| target | Target build stage | string | false | none |
|
||||
|
||||
| Outputs | Description | Type |
|
||||
|-------------|------------------------------------------|-------|
|
||||
|image-digest | Image digest of image container created | string|
|
||||
|
||||
67
.github/workflows/ci-build.yaml
vendored
67
.github/workflows/ci-build.yaml
vendored
@@ -9,11 +9,10 @@ on:
|
||||
pull_request:
|
||||
branches:
|
||||
- 'master'
|
||||
- 'release-*'
|
||||
|
||||
env:
|
||||
# Golang version to use across CI steps
|
||||
GOLANG_VERSION: '1.19'
|
||||
GOLANG_VERSION: '1.18'
|
||||
|
||||
concurrency:
|
||||
group: ${{ github.workflow }}-${{ github.ref }}
|
||||
@@ -28,9 +27,9 @@ jobs:
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3.5.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: Download all Go modules
|
||||
@@ -46,13 +45,13 @@ jobs:
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3.5.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: Restore go build cache
|
||||
uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1
|
||||
uses: actions/cache@627f0f41f6904a5b1efbaed9f96d9eb58e92e920 # v3.2.4
|
||||
with:
|
||||
path: ~/.cache/go-build
|
||||
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
|
||||
@@ -70,15 +69,15 @@ jobs:
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3.5.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: Run golangci-lint
|
||||
uses: golangci/golangci-lint-action@0ad9a0988b3973e851ab0a07adf248ec2e100376 # v3.3.1
|
||||
with:
|
||||
version: v1.51.0
|
||||
version: v1.46.2
|
||||
args: --timeout 10m --exclude SA5011 --verbose
|
||||
|
||||
test-go:
|
||||
@@ -93,11 +92,11 @@ jobs:
|
||||
- name: Create checkout directory
|
||||
run: mkdir -p ~/go/src/github.com/argoproj
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
- name: Create symlink in GOPATH
|
||||
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3.5.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: Install required packages
|
||||
@@ -117,17 +116,13 @@ jobs:
|
||||
run: |
|
||||
echo "/usr/local/bin" >> $GITHUB_PATH
|
||||
- name: Restore go build cache
|
||||
uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1
|
||||
uses: actions/cache@627f0f41f6904a5b1efbaed9f96d9eb58e92e920 # v3.2.4
|
||||
with:
|
||||
path: ~/.cache/go-build
|
||||
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
|
||||
- name: Install all tools required for building & testing
|
||||
run: |
|
||||
make install-test-tools-local
|
||||
# We install kustomize in the dist directory
|
||||
- name: Add dist to PATH
|
||||
run: |
|
||||
echo "/home/runner/work/argo-cd/argo-cd/dist" >> $GITHUB_PATH
|
||||
- name: Setup git username and email
|
||||
run: |
|
||||
git config --global user.name "John Doe"
|
||||
@@ -160,11 +155,11 @@ jobs:
|
||||
- name: Create checkout directory
|
||||
run: mkdir -p ~/go/src/github.com/argoproj
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
- name: Create symlink in GOPATH
|
||||
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3.5.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: Install required packages
|
||||
@@ -184,17 +179,13 @@ jobs:
|
||||
run: |
|
||||
echo "/usr/local/bin" >> $GITHUB_PATH
|
||||
- name: Restore go build cache
|
||||
uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1
|
||||
uses: actions/cache@627f0f41f6904a5b1efbaed9f96d9eb58e92e920 # v3.2.4
|
||||
with:
|
||||
path: ~/.cache/go-build
|
||||
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
|
||||
- name: Install all tools required for building & testing
|
||||
run: |
|
||||
make install-test-tools-local
|
||||
# We install kustomize in the dist directory
|
||||
- name: Add dist to PATH
|
||||
run: |
|
||||
echo "/home/runner/work/argo-cd/argo-cd/dist" >> $GITHUB_PATH
|
||||
- name: Setup git username and email
|
||||
run: |
|
||||
git config --global user.name "John Doe"
|
||||
@@ -215,9 +206,9 @@ jobs:
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3.5.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: Create symlink in GOPATH
|
||||
@@ -241,10 +232,6 @@ jobs:
|
||||
make install-codegen-tools-local
|
||||
make install-go-tools-local
|
||||
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
|
||||
# We install kustomize in the dist directory
|
||||
- name: Add dist to PATH
|
||||
run: |
|
||||
echo "/home/runner/work/argo-cd/argo-cd/dist" >> $GITHUB_PATH
|
||||
- name: Run codegen
|
||||
run: |
|
||||
set -x
|
||||
@@ -263,14 +250,14 @@ jobs:
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
- name: Setup NodeJS
|
||||
uses: actions/setup-node@64ed1c7eab4cce3362f8c340dee64e5eaeef8f7c # v3.6.0
|
||||
with:
|
||||
node-version: '18.15.0'
|
||||
node-version: '12.18.4'
|
||||
- name: Restore node dependency cache
|
||||
id: cache-dependencies
|
||||
uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1
|
||||
uses: actions/cache@627f0f41f6904a5b1efbaed9f96d9eb58e92e920 # v3.2.4
|
||||
with:
|
||||
path: ui/node_modules
|
||||
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
|
||||
@@ -300,12 +287,12 @@ jobs:
|
||||
sonar_secret: ${{ secrets.SONAR_TOKEN }}
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- name: Restore node dependency cache
|
||||
id: cache-dependencies
|
||||
uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1
|
||||
uses: actions/cache@627f0f41f6904a5b1efbaed9f96d9eb58e92e920 # v3.2.4
|
||||
with:
|
||||
path: ui/node_modules
|
||||
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
|
||||
@@ -379,9 +366,9 @@ jobs:
|
||||
GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3.5.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- name: GH actions workaround - Kill XSP4 process
|
||||
@@ -399,7 +386,7 @@ jobs:
|
||||
sudo chown runner $HOME/.kube/config
|
||||
kubectl version
|
||||
- name: Restore go build cache
|
||||
uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1
|
||||
uses: actions/cache@627f0f41f6904a5b1efbaed9f96d9eb58e92e920 # v3.2.4
|
||||
with:
|
||||
path: ~/.cache/go-build
|
||||
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
|
||||
@@ -425,9 +412,9 @@ jobs:
|
||||
git config --global user.email "john.doe@example.com"
|
||||
- name: Pull Docker image required for tests
|
||||
run: |
|
||||
docker pull ghcr.io/dexidp/dex:v2.36.0
|
||||
docker pull ghcr.io/dexidp/dex:v2.35.3
|
||||
docker pull argoproj/argo-cd-ci-builder:v1.0.0
|
||||
docker pull redis:7.0.11-alpine
|
||||
docker pull redis:7.0.7-alpine
|
||||
- name: Create target directory for binaries in the build-process
|
||||
run: |
|
||||
mkdir -p dist
|
||||
|
||||
3
.github/workflows/codeql.yml
vendored
3
.github/workflows/codeql.yml
vendored
@@ -5,7 +5,6 @@ on:
|
||||
# Secrets aren't available for dependabot on push. https://docs.github.com/en/enterprise-cloud@latest/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/troubleshooting-the-codeql-workflow#error-403-resource-not-accessible-by-integration-when-using-dependabot
|
||||
branches-ignore:
|
||||
- 'dependabot/**'
|
||||
- 'cherry-pick-*'
|
||||
pull_request:
|
||||
schedule:
|
||||
- cron: '0 19 * * 0'
|
||||
@@ -30,7 +29,7 @@ jobs:
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
|
||||
# Initializes the CodeQL tools for scanning.
|
||||
- name: Initialize CodeQL
|
||||
|
||||
153
.github/workflows/image-reuse.yaml
vendored
153
.github/workflows/image-reuse.yaml
vendored
@@ -1,153 +0,0 @@
|
||||
name: Publish and Sign Container Image
|
||||
on:
|
||||
workflow_call:
|
||||
inputs:
|
||||
go-version:
|
||||
required: true
|
||||
type: string
|
||||
quay_image_name:
|
||||
required: false
|
||||
type: string
|
||||
ghcr_image_name:
|
||||
required: false
|
||||
type: string
|
||||
docker_image_name:
|
||||
required: false
|
||||
type: string
|
||||
platforms:
|
||||
required: true
|
||||
type: string
|
||||
default: linux/amd64
|
||||
push:
|
||||
required: true
|
||||
type: boolean
|
||||
default: false
|
||||
target:
|
||||
required: false
|
||||
type: string
|
||||
|
||||
secrets:
|
||||
quay_username:
|
||||
required: false
|
||||
quay_password:
|
||||
required: false
|
||||
ghcr_username:
|
||||
required: false
|
||||
ghcr_password:
|
||||
required: false
|
||||
docker_username:
|
||||
required: false
|
||||
docker_password:
|
||||
required: false
|
||||
|
||||
outputs:
|
||||
image-digest:
|
||||
description: "sha256 digest of container image"
|
||||
value: ${{ jobs.publish.outputs.image-digest }}
|
||||
|
||||
permissions: {}
|
||||
|
||||
jobs:
|
||||
publish:
|
||||
permissions:
|
||||
contents: read
|
||||
packages: write # Used to push images to `ghcr.io` if used.
|
||||
id-token: write # Needed to create an OIDC token for keyless signing
|
||||
runs-on: ubuntu-22.04
|
||||
outputs:
|
||||
image-digest: ${{ steps.image.outputs.digest }}
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.3.0
|
||||
with:
|
||||
fetch-depth: 0
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
if: ${{ github.ref_type == 'tag'}}
|
||||
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
if: ${{ github.ref_type != 'tag'}}
|
||||
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3.5.0
|
||||
with:
|
||||
go-version: ${{ inputs.go-version }}
|
||||
|
||||
- name: Install cosign
|
||||
uses: sigstore/cosign-installer@c3667d99424e7e6047999fb6246c0da843953c65 # v3.0.1
|
||||
with:
|
||||
cosign-release: 'v2.0.0'
|
||||
|
||||
- uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # v2.1.0
|
||||
- uses: docker/setup-buildx-action@4b4e9c3e2d4531116a6f8ba8e71fc6e2cb6e6c8c # v2.5.0
|
||||
|
||||
- name: Setup tags for container image as a CSV type
|
||||
run: |
|
||||
IMAGE_TAGS=$(for str in \
|
||||
${{ inputs.quay_image_name }} \
|
||||
${{ inputs.ghcr_image_name }} \
|
||||
${{ inputs.docker_image_name}}; do
|
||||
echo -n "${str}",;done | sed 's/,$//')
|
||||
|
||||
echo $IMAGE_TAGS
|
||||
echo "TAGS=$IMAGE_TAGS" >> $GITHUB_ENV
|
||||
|
||||
- name: Setup image namespace for signing, strip off the tag
|
||||
run: |
|
||||
TAGS=$(for tag in \
|
||||
${{ inputs.quay_image_name }} \
|
||||
${{ inputs.ghcr_image_name }} \
|
||||
${{ inputs.docker_image_name}}; do
|
||||
echo -n "${tag}" | awk -F ":" '{print $1}' -;done)
|
||||
|
||||
echo $TAGS
|
||||
echo 'SIGNING_TAGS<<EOF' >> $GITHUB_ENV
|
||||
echo $TAGS >> $GITHUB_ENV
|
||||
echo 'EOF' >> $GITHUB_ENV
|
||||
|
||||
- name: Login to Quay.io
|
||||
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
|
||||
with:
|
||||
registry: quay.io
|
||||
username: ${{ secrets.quay_username }}
|
||||
password: ${{ secrets.quay_password }}
|
||||
if: ${{ inputs.quay_image_name && inputs.push }}
|
||||
|
||||
- name: Login to GitHub Container Registry
|
||||
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
|
||||
with:
|
||||
registry: ghcr.io
|
||||
username: ${{ secrets.ghcr_username }}
|
||||
password: ${{ secrets.ghcr_password }}
|
||||
if: ${{ inputs.ghcr_image_name && inputs.push }}
|
||||
|
||||
- name: Login to dockerhub Container Registry
|
||||
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
|
||||
with:
|
||||
username: ${{ secrets.docker_username }}
|
||||
password: ${{ secrets.docker_password }}
|
||||
if: ${{ inputs.docker_image_name && inputs.push }}
|
||||
|
||||
- name: Build and push container image
|
||||
id: image
|
||||
uses: docker/build-push-action@3b5e8027fcad23fda98b2e3ac259d8d67585f671 #v4.0.0
|
||||
with:
|
||||
context: .
|
||||
platforms: ${{ inputs.platforms }}
|
||||
push: ${{ inputs.push }}
|
||||
tags: ${{ env.TAGS }}
|
||||
target: ${{ inputs.target }}
|
||||
provenance: false
|
||||
sbom: false
|
||||
|
||||
- name: Sign container images
|
||||
run: |
|
||||
for signing_tag in $SIGNING_TAGS; do
|
||||
cosign sign \
|
||||
-a "repo=${{ github.repository }}" \
|
||||
-a "workflow=${{ github.workflow }}" \
|
||||
-a "sha=${{ github.sha }}" \
|
||||
-y \
|
||||
"$signing_tag"@${{ steps.image.outputs.digest }}
|
||||
done
|
||||
if: ${{ inputs.push }}
|
||||
141
.github/workflows/image.yaml
vendored
141
.github/workflows/image.yaml
vendored
@@ -9,108 +9,97 @@ on:
|
||||
- master
|
||||
types: [ labeled, unlabeled, opened, synchronize, reopened ]
|
||||
|
||||
env:
|
||||
GOLANG_VERSION: '1.18'
|
||||
|
||||
concurrency:
|
||||
group: ${{ github.workflow }}-${{ github.ref }}
|
||||
cancel-in-progress: true
|
||||
|
||||
permissions: {}
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
jobs:
|
||||
set-vars:
|
||||
publish:
|
||||
permissions:
|
||||
contents: read
|
||||
contents: write # for git to push upgrade commit if not already deployed
|
||||
packages: write # for pushing packages to GHCR, which is used by cd.apps.argoproj.io to avoid polluting Quay with tags
|
||||
if: github.repository == 'argoproj/argo-cd'
|
||||
runs-on: ubuntu-22.04
|
||||
outputs:
|
||||
image-tag: ${{ steps.image.outputs.tag}}
|
||||
platforms: ${{ steps.platforms.outputs.platforms }}
|
||||
env:
|
||||
GOPATH: /home/runner/work/argo-cd/argo-cd
|
||||
steps:
|
||||
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
- uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3.5.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
with:
|
||||
path: src/github.com/argoproj/argo-cd
|
||||
|
||||
- name: Set image tag for ghcr
|
||||
run: echo "tag=$(cat ./VERSION)-${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
|
||||
# get image tag
|
||||
- run: echo "tag=$(cat ./VERSION)-${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
|
||||
working-directory: ./src/github.com/argoproj/argo-cd
|
||||
id: image
|
||||
|
||||
- name: Determine image platforms to use
|
||||
id: platforms
|
||||
run: |
|
||||
# login
|
||||
- run: |
|
||||
docker login ghcr.io --username $USERNAME --password-stdin <<< "$PASSWORD"
|
||||
docker login quay.io --username "$DOCKER_USERNAME" --password-stdin <<< "$DOCKER_TOKEN"
|
||||
if: github.event_name == 'push'
|
||||
env:
|
||||
USERNAME: ${{ github.actor }}
|
||||
PASSWORD: ${{ secrets.GITHUB_TOKEN }}
|
||||
DOCKER_USERNAME: ${{ secrets.RELEASE_QUAY_USERNAME }}
|
||||
DOCKER_TOKEN: ${{ secrets.RELEASE_QUAY_TOKEN }}
|
||||
|
||||
# build
|
||||
- uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # v2.1.0
|
||||
- uses: docker/setup-buildx-action@15c905b16b06416d2086efa066dd8e3a35cc7f98 # v2.4.0
|
||||
- run: |
|
||||
IMAGE_PLATFORMS=linux/amd64
|
||||
if [[ "${{ github.event_name }}" == "push" || "${{ contains(github.event.pull_request.labels.*.name, 'test-multi-image') }}" == "true" ]]
|
||||
if [[ "${{ github.event_name }}" == "push" || "${{ contains(github.event.pull_request.labels.*.name, 'test-arm-image') }}" == "true" ]]
|
||||
then
|
||||
IMAGE_PLATFORMS=linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
|
||||
fi
|
||||
echo "Building image for platforms: $IMAGE_PLATFORMS"
|
||||
echo "platforms=$IMAGE_PLATFORMS" >> $GITHUB_OUTPUT
|
||||
docker buildx build --platform $IMAGE_PLATFORMS --sbom=false --provenance=false --push="${{ github.event_name == 'push' }}" \
|
||||
-t ghcr.io/argoproj/argo-cd/argocd:${{ steps.image.outputs.tag }} \
|
||||
-t quay.io/argoproj/argocd:latest .
|
||||
working-directory: ./src/github.com/argoproj/argo-cd
|
||||
|
||||
build-only:
|
||||
needs: [set-vars]
|
||||
permissions:
|
||||
contents: read
|
||||
packages: write # for pushing packages to GHCR, which is used by cd.apps.argoproj.io to avoid polluting Quay with tags
|
||||
id-token: write # for creating OIDC tokens for signing.
|
||||
if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name != 'push' }}
|
||||
uses: ./.github/workflows/image-reuse.yaml
|
||||
with:
|
||||
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
|
||||
go-version: 1.19
|
||||
platforms: ${{ needs.set-vars.outputs.platforms }}
|
||||
push: false
|
||||
# sign container images
|
||||
- name: Install cosign
|
||||
uses: sigstore/cosign-installer@9becc617647dfa20ae7b1151972e9b3a2c338a2b # v2.8.1
|
||||
with:
|
||||
cosign-release: 'v1.13.1'
|
||||
|
||||
build-and-publish:
|
||||
needs: [set-vars]
|
||||
permissions:
|
||||
contents: read
|
||||
packages: write # for pushing packages to GHCR, which is used by cd.apps.argoproj.io to avoid polluting Quay with tags
|
||||
id-token: write # for creating OIDC tokens for signing.
|
||||
if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name == 'push' }}
|
||||
uses: ./.github/workflows/image-reuse.yaml
|
||||
with:
|
||||
quay_image_name: quay.io/argoproj/argocd:latest
|
||||
ghcr_image_name: ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }}
|
||||
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
|
||||
go-version: 1.19
|
||||
platforms: ${{ needs.set-vars.outputs.platforms }}
|
||||
push: true
|
||||
secrets:
|
||||
quay_username: ${{ secrets.RELEASE_QUAY_USERNAME }}
|
||||
quay_password: ${{ secrets.RELEASE_QUAY_TOKEN }}
|
||||
ghcr_username: ${{ github.actor }}
|
||||
ghcr_password: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Install crane to get digest of image
|
||||
uses: imjasonh/setup-crane@e82f1b9a8007d399333baba4d75915558e9fb6a4
|
||||
|
||||
build-and-publish-provenance:
|
||||
needs: [build-and-publish]
|
||||
permissions:
|
||||
actions: read # for detecting the Github Actions environment.
|
||||
id-token: write # for creating OIDC tokens for signing.
|
||||
packages: write # for uploading attestations. (https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#known-issues)
|
||||
if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name == 'push' }}
|
||||
# Must be refernced by a tag. https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#referencing-the-slsa-generator
|
||||
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v1.5.0
|
||||
with:
|
||||
image: quay.io/argoproj/argocd
|
||||
digest: ${{ needs.build-and-publish.outputs.image-digest }}
|
||||
secrets:
|
||||
registry-username: ${{ secrets.RELEASE_QUAY_USERNAME }}
|
||||
registry-password: ${{ secrets.RELEASE_QUAY_TOKEN }}
|
||||
- name: Get digest of image
|
||||
run: |
|
||||
echo "IMAGE_DIGEST=$(crane digest quay.io/argoproj/argocd:latest)" >> $GITHUB_ENV
|
||||
|
||||
Deploy:
|
||||
needs:
|
||||
- build-and-publish
|
||||
- set-vars
|
||||
permissions:
|
||||
contents: write # for git to push upgrade commit if not already deployed
|
||||
packages: write # for pushing packages to GHCR, which is used by cd.apps.argoproj.io to avoid polluting Quay with tags
|
||||
if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name == 'push' }}
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
- name: Sign Argo CD latest image
|
||||
run: |
|
||||
cosign sign --key env://COSIGN_PRIVATE_KEY quay.io/argoproj/argocd@${{ env.IMAGE_DIGEST }}
|
||||
# Displays the public key to share.
|
||||
cosign public-key --key env://COSIGN_PRIVATE_KEY
|
||||
env:
|
||||
COSIGN_PRIVATE_KEY: ${{secrets.COSIGN_PRIVATE_KEY}}
|
||||
COSIGN_PASSWORD: ${{secrets.COSIGN_PASSWORD}}
|
||||
if: ${{ github.event_name == 'push' }}
|
||||
|
||||
# deploy
|
||||
- run: git clone "https://$TOKEN@github.com/argoproj/argoproj-deployments"
|
||||
if: github.event_name == 'push'
|
||||
env:
|
||||
TOKEN: ${{ secrets.TOKEN }}
|
||||
- run: |
|
||||
docker run -u $(id -u):$(id -g) -v $(pwd):/src -w /src --rm -t ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }} kustomize edit set image quay.io/argoproj/argocd=ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }}
|
||||
docker run -u $(id -u):$(id -g) -v $(pwd):/src -w /src --rm -t ghcr.io/argoproj/argo-cd/argocd:${{ steps.image.outputs.tag }} kustomize edit set image quay.io/argoproj/argocd=ghcr.io/argoproj/argo-cd/argocd:${{ steps.image.outputs.tag }}
|
||||
git config --global user.email 'ci@argoproj.com'
|
||||
git config --global user.name 'CI'
|
||||
git diff --exit-code && echo 'Already deployed' || (git commit -am 'Upgrade argocd to ${{ needs.set-vars.outputs.image-tag }}' && git push)
|
||||
git diff --exit-code && echo 'Already deployed' || (git commit -am 'Upgrade argocd to ${{ steps.image.outputs.tag }}' && git push)
|
||||
if: github.event_name == 'push'
|
||||
working-directory: argoproj-deployments/argocd
|
||||
|
||||
# TODO: clean up old images once github supports it: https://github.community/t5/How-to-use-Git-and-GitHub/Deleting-images-from-GitHub-Package-Registry/m-p/41202/thread-id/9811
|
||||
|
||||
70
.github/workflows/init-release.yaml
vendored
70
.github/workflows/init-release.yaml
vendored
@@ -1,70 +0,0 @@
|
||||
name: Init ArgoCD Release
|
||||
on:
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
TARGET_BRANCH:
|
||||
description: 'TARGET_BRANCH to checkout (e.g. release-2.5)'
|
||||
required: true
|
||||
type: string
|
||||
|
||||
TARGET_VERSION:
|
||||
description: 'TARGET_VERSION to build manifests (e.g. 2.5.0-rc1) Note: the `v` prefix is not used'
|
||||
required: true
|
||||
type: string
|
||||
|
||||
permissions: {}
|
||||
|
||||
jobs:
|
||||
prepare-release:
|
||||
permissions:
|
||||
contents: write # for peter-evans/create-pull-request to create branch
|
||||
pull-requests: write # for peter-evans/create-pull-request to create a PR
|
||||
name: Automatically generate version and manifests on ${{ inputs.TARGET_BRANCH }}
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
with:
|
||||
fetch-depth: 0
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
ref: ${{ inputs.TARGET_BRANCH }}
|
||||
|
||||
- name: Check if TARGET_VERSION is well formed.
|
||||
run: |
|
||||
set -xue
|
||||
# Target version must not contain 'v' prefix
|
||||
if echo "${{ inputs.TARGET_VERSION }}" | grep -e '^v'; then
|
||||
echo "::error::Target version '${{ inputs.TARGET_VERSION }}' should not begin with a 'v' prefix, refusing to continue." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Create VERSION information
|
||||
run: |
|
||||
set -ue
|
||||
echo "Bumping version from $(cat VERSION) to ${{ inputs.TARGET_VERSION }}"
|
||||
echo "${{ inputs.TARGET_VERSION }}" > VERSION
|
||||
|
||||
# We install kustomize in the dist directory
|
||||
- name: Add dist to PATH
|
||||
run: |
|
||||
echo "/home/runner/work/argo-cd/argo-cd/dist" >> $GITHUB_PATH
|
||||
|
||||
- name: Generate new set of manifests
|
||||
run: |
|
||||
set -ue
|
||||
make install-codegen-tools-local
|
||||
make manifests-local VERSION=${{ inputs.TARGET_VERSION }}
|
||||
git diff
|
||||
|
||||
- name: Create pull request
|
||||
uses: peter-evans/create-pull-request@38e0b6e68b4c852a5500a94740f0e535e0d7ba54 # v4.2.4
|
||||
with:
|
||||
commit-message: "Bump version to ${{ inputs.TARGET_VERSION }}"
|
||||
title: "Bump version to ${{ inputs.TARGET_VERSION }} on ${{ inputs.TARGET_BRANCH }} branch"
|
||||
body: Updating VERSION and manifests to ${{ inputs.TARGET_VERSION }}
|
||||
branch: update-version
|
||||
branch-suffix: random
|
||||
signoff: true
|
||||
labels: release
|
||||
|
||||
|
||||
2
.github/workflows/pr-title-check.yml
vendored
2
.github/workflows/pr-title-check.yml
vendored
@@ -27,7 +27,7 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
# IMPORTANT: Carefully review changes when updating this action. Using the pull_request_target event requires caution.
|
||||
- uses: amannn/action-semantic-pull-request@b6bca70dcd3e56e896605356ce09b76f7e1e0d39 # v5.1.0
|
||||
- uses: amannn/action-semantic-pull-request@01d5fd8a8ebb9aafe902c40c53f0f4744f7381eb # v5.0.2
|
||||
with:
|
||||
types: |
|
||||
feat
|
||||
|
||||
465
.github/workflows/release.yaml
vendored
465
.github/workflows/release.yaml
vendored
@@ -1,157 +1,266 @@
|
||||
name: Publish ArgoCD Release
|
||||
name: Create ArgoCD release
|
||||
on:
|
||||
push:
|
||||
tags:
|
||||
- 'v*'
|
||||
- '!v2.4*'
|
||||
- '!v2.5*'
|
||||
- '!v2.6*'
|
||||
|
||||
permissions: {}
|
||||
- "release-v*"
|
||||
- "!release-v1.5*"
|
||||
- "!release-v1.4*"
|
||||
- "!release-v1.3*"
|
||||
- "!release-v1.2*"
|
||||
- "!release-v1.1*"
|
||||
- "!release-v1.0*"
|
||||
- "!release-v0*"
|
||||
|
||||
env:
|
||||
GOLANG_VERSION: '1.19' # Note: go-version must also be set in job argocd-image.with.go-version
|
||||
GOLANG_VERSION: '1.18'
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
jobs:
|
||||
argocd-image:
|
||||
prepare-release:
|
||||
permissions:
|
||||
contents: read
|
||||
id-token: write # for creating OIDC tokens for signing.
|
||||
packages: write # used to push images to `ghcr.io` if used.
|
||||
if: github.repository == 'argoproj/argo-cd'
|
||||
uses: ./.github/workflows/image-reuse.yaml
|
||||
with:
|
||||
quay_image_name: quay.io/argoproj/argocd:${{ github.ref_name }}
|
||||
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
|
||||
go-version: 1.19
|
||||
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
|
||||
push: true
|
||||
secrets:
|
||||
quay_username: ${{ secrets.RELEASE_QUAY_USERNAME }}
|
||||
quay_password: ${{ secrets.RELEASE_QUAY_TOKEN }}
|
||||
|
||||
argocd-image-provenance:
|
||||
needs: [argocd-image]
|
||||
permissions:
|
||||
actions: read # for detecting the Github Actions environment.
|
||||
id-token: write # for creating OIDC tokens for signing.
|
||||
packages: write # for uploading attestations. (https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#known-issues)
|
||||
# Must be refernced by a tag. https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#referencing-the-slsa-generator
|
||||
if: github.repository == 'argoproj/argo-cd'
|
||||
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v1.5.0
|
||||
with:
|
||||
image: quay.io/argoproj/argocd
|
||||
digest: ${{ needs.argocd-image.outputs.image-digest }}
|
||||
secrets:
|
||||
registry-username: ${{ secrets.RELEASE_QUAY_USERNAME }}
|
||||
registry-password: ${{ secrets.RELEASE_QUAY_TOKEN }}
|
||||
|
||||
goreleaser:
|
||||
needs:
|
||||
- argocd-image
|
||||
- argocd-image-provenance
|
||||
permissions:
|
||||
contents: write # used for uploading assets
|
||||
contents: write # To push changes to release branch
|
||||
name: Perform automatic release on trigger ${{ github.ref }}
|
||||
if: github.repository == 'argoproj/argo-cd'
|
||||
runs-on: ubuntu-22.04
|
||||
outputs:
|
||||
hashes: ${{ steps.hash.outputs.hashes }}
|
||||
|
||||
env:
|
||||
# The name of the tag as supplied by the GitHub event
|
||||
SOURCE_TAG: ${{ github.ref }}
|
||||
# The image namespace where Docker image will be published to
|
||||
IMAGE_NAMESPACE: quay.io/argoproj
|
||||
# Whether to create & push image and release assets
|
||||
DRY_RUN: false
|
||||
# Whether a draft release should be created, instead of public one
|
||||
DRAFT_RELEASE: false
|
||||
# Whether to update homebrew with this release as well
|
||||
# Set RELEASE_HOMEBREW_TOKEN secret in repository for this to work - needs
|
||||
# access to public repositories
|
||||
UPDATE_HOMEBREW: false
|
||||
# Name of the GitHub user for Git config
|
||||
GIT_USERNAME: argo-bot
|
||||
# E-Mail of the GitHub user for Git config
|
||||
GIT_EMAIL: argoproj@gmail.com
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
|
||||
with:
|
||||
fetch-depth: 0
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Fetch all tags
|
||||
run: git fetch --force --tags
|
||||
|
||||
- name: Set GORELEASER_PREVIOUS_TAG # Workaround, GoReleaser uses 'git-describe' to determine a previous tag. Our tags are created in realease branches.
|
||||
- name: Check if the published tag is well formed and setup vars
|
||||
run: |
|
||||
set -xue
|
||||
if echo ${{ github.ref_name }} | grep -E -- '-rc1+$';then
|
||||
echo "GORELEASER_PREVIOUS_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n 2 | head -n 1)" >> $GITHUB_ENV
|
||||
else
|
||||
echo "This is not the first release on the branch, Using GoReleaser defaults"
|
||||
# Target version must match major.minor.patch and optional -rcX suffix
|
||||
# where X must be a number.
|
||||
TARGET_VERSION=${SOURCE_TAG#*release-v}
|
||||
if ! echo "${TARGET_VERSION}" | egrep '^[0-9]+\.[0-9]+\.[0-9]+(-rc[0-9]+)*$'; then
|
||||
echo "::error::Target version '${TARGET_VERSION}' is malformed, refusing to continue." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 # v4.0.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
# Target branch is the release branch we're going to operate on
|
||||
# Its name is 'release-<major>.<minor>'
|
||||
TARGET_BRANCH="release-${TARGET_VERSION%\.[0-9]*}"
|
||||
|
||||
- name: Set environment variables for ldflags
|
||||
id: set_ldflag
|
||||
run: |
|
||||
echo "KUBECTL_VERSION=$(go list -m k8s.io/client-go | head -n 1 | rev | cut -d' ' -f1 | rev)" >> $GITHUB_ENV
|
||||
echo "GIT_TREE_STATE=$(if [ -z "`git status --porcelain`" ]; then echo "clean" ; else echo "dirty"; fi)" >> $GITHUB_ENV
|
||||
# The release tag is the source tag, minus the release- prefix
|
||||
RELEASE_TAG="${SOURCE_TAG#*release-}"
|
||||
|
||||
- name: Run GoReleaser
|
||||
uses: goreleaser/goreleaser-action@f82d6c1c344bcacabba2c841718984797f664a6b # v4.2.0
|
||||
id: run-goreleaser
|
||||
with:
|
||||
version: latest
|
||||
args: release --clean --timeout 55m
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
KUBECTL_VERSION: ${{ env.KUBECTL_VERSION }}
|
||||
GIT_TREE_STATE: ${{ env.GIT_TREE_STATE }}
|
||||
|
||||
- name: Generate subject for provenance
|
||||
id: hash
|
||||
env:
|
||||
ARTIFACTS: "${{ steps.run-goreleaser.outputs.artifacts }}"
|
||||
run: |
|
||||
set -euo pipefail
|
||||
|
||||
hashes=$(echo $ARTIFACTS | jq --raw-output '.[] | {name, "digest": (.extra.Digest // .extra.Checksum)} | select(.digest) | {digest} + {name} | join(" ") | sub("^sha256:";"")' | base64 -w0)
|
||||
if test "$hashes" = ""; then # goreleaser < v1.13.0
|
||||
checksum_file=$(echo "$ARTIFACTS" | jq -r '.[] | select (.type=="Checksum") | .path')
|
||||
hashes=$(cat $checksum_file | base64 -w0)
|
||||
# Whether this is a pre-release (indicated by -rc suffix)
|
||||
PRE_RELEASE=false
|
||||
if echo "${RELEASE_TAG}" | egrep -- '-rc[0-9]+$'; then
|
||||
PRE_RELEASE=true
|
||||
fi
|
||||
echo "hashes=$hashes" >> $GITHUB_OUTPUT
|
||||
|
||||
goreleaser-provenance:
|
||||
needs: [goreleaser]
|
||||
permissions:
|
||||
actions: read # for detecting the Github Actions environment
|
||||
id-token: write # Needed for provenance signing and ID
|
||||
contents: write # Needed for release uploads
|
||||
if: github.repository == 'argoproj/argo-cd'
|
||||
# Must be refernced by a tag. https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#referencing-the-slsa-generator
|
||||
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v1.5.0
|
||||
with:
|
||||
base64-subjects: "${{ needs.goreleaser.outputs.hashes }}"
|
||||
provenance-name: "argocd-cli.intoto.jsonl"
|
||||
upload-assets: true
|
||||
# We must not have a release trigger within the same release branch,
|
||||
# because that means a release for this branch is already running.
|
||||
if git tag -l | grep "release-v${TARGET_VERSION%\.[0-9]*}" | grep -v "release-v${TARGET_VERSION}"; then
|
||||
echo "::error::Another release for branch ${TARGET_BRANCH} is currently in progress."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
generate-sbom:
|
||||
name: Create Sbom and sign assets
|
||||
needs:
|
||||
- argocd-image
|
||||
- goreleaser
|
||||
permissions:
|
||||
contents: write # Needed for release uploads
|
||||
id-token: write # Needed for signing Sbom
|
||||
if: github.repository == 'argoproj/argo-cd'
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
with:
|
||||
fetch-depth: 0
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
# Ensure that release do not yet exist
|
||||
if git rev-parse ${RELEASE_TAG}; then
|
||||
echo "::error::Release tag ${RELEASE_TAG} already exists in repository. Refusing to continue."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Make the variables available in follow-up steps
|
||||
echo "TARGET_VERSION=${TARGET_VERSION}" >> $GITHUB_ENV
|
||||
echo "TARGET_BRANCH=${TARGET_BRANCH}" >> $GITHUB_ENV
|
||||
echo "RELEASE_TAG=${RELEASE_TAG}" >> $GITHUB_ENV
|
||||
echo "PRE_RELEASE=${PRE_RELEASE}" >> $GITHUB_ENV
|
||||
|
||||
- name: Check if our release tag has a correct annotation
|
||||
run: |
|
||||
set -ue
|
||||
# Fetch all tag information as well
|
||||
git fetch --prune --tags --force
|
||||
|
||||
echo "=========== BEGIN COMMIT MESSAGE ============="
|
||||
git show ${SOURCE_TAG}
|
||||
echo "============ END COMMIT MESSAGE =============="
|
||||
|
||||
# Quite dirty hack to get the release notes from the annotated tag
|
||||
# into a temporary file.
|
||||
RELEASE_NOTES=$(mktemp -p /tmp release-notes.XXXXXX)
|
||||
|
||||
prefix=true
|
||||
begin=false
|
||||
git show ${SOURCE_TAG} | while read line; do
|
||||
# Whatever is in commit history for the tag, we only want that
|
||||
# annotation from our tag. We discard everything else.
|
||||
if test "$begin" = "false"; then
|
||||
if echo "$line" | grep -q "tag ${SOURCE_TAG#refs/tags/}"; then begin="true"; fi
|
||||
continue
|
||||
fi
|
||||
if test "$prefix" = "true"; then
|
||||
if test -z "$line"; then prefix=false; fi
|
||||
else
|
||||
if echo "$line" | egrep -q '^commit [0-9a-f]+'; then
|
||||
break
|
||||
fi
|
||||
echo "$line" >> ${RELEASE_NOTES}
|
||||
fi
|
||||
done
|
||||
|
||||
# For debug purposes
|
||||
echo "============BEGIN RELEASE NOTES================="
|
||||
cat ${RELEASE_NOTES}
|
||||
echo "=============END RELEASE NOTES=================="
|
||||
|
||||
# Too short release notes are suspicious. We need at least 100 bytes.
|
||||
relNoteLen=$(stat -c '%s' $RELEASE_NOTES)
|
||||
if test $relNoteLen -lt 100; then
|
||||
echo "::error::No release notes provided in tag annotation (or tag is not annotated)"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check for magic string '## Quick Start' in head of release notes
|
||||
if ! head -2 ${RELEASE_NOTES} | grep -iq '## Quick Start'; then
|
||||
echo "::error::Release notes seem invalid, quick start section not found."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# We store path to temporary release notes file for later reading, we
|
||||
# need it when creating release.
|
||||
echo "RELEASE_NOTES=${RELEASE_NOTES}" >> $GITHUB_ENV
|
||||
|
||||
- name: Setup Golang
|
||||
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3.5.0
|
||||
with:
|
||||
go-version: ${{ env.GOLANG_VERSION }}
|
||||
|
||||
- name: Setup Git author information
|
||||
run: |
|
||||
set -ue
|
||||
git config --global user.email "${GIT_EMAIL}"
|
||||
git config --global user.name "${GIT_USERNAME}"
|
||||
|
||||
- name: Checkout corresponding release branch
|
||||
run: |
|
||||
set -ue
|
||||
echo "Switching to release branch '${TARGET_BRANCH}'"
|
||||
if ! git checkout ${TARGET_BRANCH}; then
|
||||
echo "::error::Checking out release branch '${TARGET_BRANCH}' for target version '${TARGET_VERSION}' (tagged '${RELEASE_TAG}') failed. Does it exist in repo?"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Create VERSION information
|
||||
run: |
|
||||
set -ue
|
||||
echo "Bumping version from $(cat VERSION) to ${TARGET_VERSION}"
|
||||
echo "${TARGET_VERSION}" > VERSION
|
||||
git commit -m "Bump version to ${TARGET_VERSION}" VERSION
|
||||
|
||||
- name: Generate new set of manifests
|
||||
run: |
|
||||
set -ue
|
||||
make install-codegen-tools-local
|
||||
make manifests-local VERSION=${TARGET_VERSION}
|
||||
git diff
|
||||
git commit manifests/ -m "Bump version to ${TARGET_VERSION}"
|
||||
|
||||
- name: Create the release tag
|
||||
run: |
|
||||
set -ue
|
||||
echo "Creating release ${RELEASE_TAG}"
|
||||
git tag ${RELEASE_TAG}
|
||||
|
||||
- name: Login to docker repositories
|
||||
env:
|
||||
DOCKER_USERNAME: ${{ secrets.RELEASE_DOCKERHUB_USERNAME }}
|
||||
DOCKER_TOKEN: ${{ secrets.RELEASE_DOCKERHUB_TOKEN }}
|
||||
QUAY_USERNAME: ${{ secrets.RELEASE_QUAY_USERNAME }}
|
||||
QUAY_TOKEN: ${{ secrets.RELEASE_QUAY_TOKEN }}
|
||||
run: |
|
||||
set -ue
|
||||
docker login quay.io --username "${QUAY_USERNAME}" --password-stdin <<< "${QUAY_TOKEN}"
|
||||
# Remove the following when Docker Hub is gone
|
||||
docker login --username "${DOCKER_USERNAME}" --password-stdin <<< "${DOCKER_TOKEN}"
|
||||
if: ${{ env.DRY_RUN != 'true' }}
|
||||
|
||||
- uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # v2.1.0
|
||||
- uses: docker/setup-buildx-action@15c905b16b06416d2086efa066dd8e3a35cc7f98 # v2.4.0
|
||||
- name: Build and push Docker image for release
|
||||
run: |
|
||||
set -ue
|
||||
git clean -fd
|
||||
mkdir -p dist/
|
||||
docker buildx build --platform linux/amd64,linux/arm64,linux/s390x,linux/ppc64le --sbom=false --provenance=false --push -t ${IMAGE_NAMESPACE}/argocd:v${TARGET_VERSION} -t argoproj/argocd:v${TARGET_VERSION} .
|
||||
make release-cli
|
||||
make checksums
|
||||
chmod +x ./dist/argocd-linux-amd64
|
||||
./dist/argocd-linux-amd64 version --client
|
||||
if: ${{ env.DRY_RUN != 'true' }}
|
||||
|
||||
- name: Install cosign
|
||||
uses: sigstore/cosign-installer@c3667d99424e7e6047999fb6246c0da843953c65 # v3.0.1
|
||||
uses: sigstore/cosign-installer@9becc617647dfa20ae7b1151972e9b3a2c338a2b # v2.8.1
|
||||
with:
|
||||
cosign-release: 'v2.0.0'
|
||||
cosign-release: 'v1.13.1'
|
||||
|
||||
- name: Install crane to get digest of image
|
||||
uses: imjasonh/setup-crane@e82f1b9a8007d399333baba4d75915558e9fb6a4
|
||||
|
||||
- name: Get digest of image
|
||||
run: |
|
||||
echo "IMAGE_DIGEST=$(crane digest quay.io/argoproj/argocd:v${TARGET_VERSION})" >> $GITHUB_ENV
|
||||
|
||||
- name: Sign Argo CD container images and assets
|
||||
run: |
|
||||
cosign sign --key env://COSIGN_PRIVATE_KEY ${IMAGE_NAMESPACE}/argocd@${{ env.IMAGE_DIGEST }}
|
||||
cosign sign-blob --key env://COSIGN_PRIVATE_KEY ./dist/argocd-${TARGET_VERSION}-checksums.txt > ./dist/argocd-${TARGET_VERSION}-checksums.sig
|
||||
# Retrieves the public key to release as an asset
|
||||
cosign public-key --key env://COSIGN_PRIVATE_KEY > ./dist/argocd-cosign.pub
|
||||
env:
|
||||
COSIGN_PRIVATE_KEY: ${{secrets.COSIGN_PRIVATE_KEY}}
|
||||
COSIGN_PASSWORD: ${{secrets.COSIGN_PASSWORD}}
|
||||
if: ${{ env.DRY_RUN != 'true' }}
|
||||
|
||||
- name: Read release notes file
|
||||
id: release-notes
|
||||
uses: juliangruber/read-file-action@02bbba9876a8f870efd4ad64e3b9088d3fb94d4b # v1.1.6
|
||||
with:
|
||||
path: ${{ env.RELEASE_NOTES }}
|
||||
|
||||
- name: Push changes to release branch
|
||||
run: |
|
||||
set -ue
|
||||
git push origin ${TARGET_BRANCH}
|
||||
git push origin ${RELEASE_TAG}
|
||||
|
||||
- name: Dry run GitHub release
|
||||
uses: actions/create-release@0cb9c9b65d5d1901c1f53e5e66eaf4afd303e70e # v1.1.4
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
id: create_release
|
||||
with:
|
||||
tag_name: ${{ env.RELEASE_TAG }}
|
||||
release_name: ${{ env.RELEASE_TAG }}
|
||||
draft: ${{ env.DRAFT_RELEASE }}
|
||||
prerelease: ${{ env.PRE_RELEASE }}
|
||||
body: ${{ steps.release-notes.outputs.content }}
|
||||
if: ${{ env.DRY_RUN == 'true' }}
|
||||
|
||||
- name: Generate SBOM (spdx)
|
||||
id: spdx-builder
|
||||
@@ -164,7 +273,7 @@ jobs:
|
||||
# managers (gomod, yarn, npm).
|
||||
PROJECT_FOLDERS: ".,./ui"
|
||||
# full qualified name of the docker image to be inspected
|
||||
DOCKER_IMAGE: quay.io/argoproj/argocd:${{ github.ref_name }}
|
||||
DOCKER_IMAGE: ${{env.IMAGE_NAMESPACE}}/argocd:v${{env.TARGET_VERSION}}
|
||||
run: |
|
||||
yarn install --cwd ./ui
|
||||
go install github.com/spdx/spdx-sbom-generator/cmd/generator@$SPDX_GEN_VERSION
|
||||
@@ -182,101 +291,43 @@ jobs:
|
||||
fi
|
||||
|
||||
cd /tmp && tar -zcf sbom.tar.gz *.spdx
|
||||
if: ${{ env.DRY_RUN != 'true' }}
|
||||
|
||||
- name: Sign SBOM
|
||||
- name: Sign sbom
|
||||
run: |
|
||||
cosign sign-blob \
|
||||
--output-certificate=/tmp/sbom.tar.gz.pem \
|
||||
--output-signature=/tmp/sbom.tar.gz.sig \
|
||||
-y \
|
||||
/tmp/sbom.tar.gz
|
||||
cosign sign-blob --key env://COSIGN_PRIVATE_KEY /tmp/sbom.tar.gz > /tmp/sbom.tar.gz.sig
|
||||
env:
|
||||
COSIGN_PRIVATE_KEY: ${{secrets.COSIGN_PRIVATE_KEY}}
|
||||
COSIGN_PASSWORD: ${{secrets.COSIGN_PASSWORD}}
|
||||
if: ${{ env.DRY_RUN != 'true' }}
|
||||
|
||||
- name: Upload SBOM and signature assets
|
||||
- name: Create GitHub release
|
||||
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
with:
|
||||
name: ${{ env.RELEASE_TAG }}
|
||||
tag_name: ${{ env.RELEASE_TAG }}
|
||||
draft: ${{ env.DRAFT_RELEASE }}
|
||||
prerelease: ${{ env.PRE_RELEASE }}
|
||||
body: ${{ steps.release-notes.outputs.content }} # Pre-pended to the generated notes
|
||||
files: |
|
||||
/tmp/sbom.tar.*
|
||||
dist/argocd-*
|
||||
/tmp/sbom.tar.gz
|
||||
/tmp/sbom.tar.gz.sig
|
||||
if: ${{ env.DRY_RUN != 'true' }}
|
||||
|
||||
post-release:
|
||||
needs:
|
||||
- argocd-image
|
||||
- goreleaser
|
||||
- generate-sbom
|
||||
permissions:
|
||||
contents: write # Needed to push commit to update stable tag
|
||||
pull-requests: write # Needed to create PR for VERSION update.
|
||||
if: github.repository == 'argoproj/argo-cd'
|
||||
runs-on: ubuntu-22.04
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # v3.2.0
|
||||
- name: Update homebrew formula
|
||||
env:
|
||||
HOMEBREW_TOKEN: ${{ secrets.RELEASE_HOMEBREW_TOKEN }}
|
||||
uses: dawidd6/action-homebrew-bump-formula@02e79d9da43d79efa846d73695b6052cbbdbf48a # v3.8.3
|
||||
with:
|
||||
fetch-depth: 0
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
token: ${{env.HOMEBREW_TOKEN}}
|
||||
formula: argocd
|
||||
if: ${{ env.HOMEBREW_TOKEN != '' && env.UPDATE_HOMEBREW == 'true' && env.PRE_RELEASE != 'true' }}
|
||||
|
||||
- name: Setup Git author information
|
||||
- name: Delete original request tag from repository
|
||||
run: |
|
||||
set -ue
|
||||
git config --global user.email 'ci@argoproj.com'
|
||||
git config --global user.name 'CI'
|
||||
|
||||
- name: Check if tag is the latest version and not a pre-release
|
||||
run: |
|
||||
set -xue
|
||||
# Fetch all tag information
|
||||
git fetch --prune --tags --force
|
||||
|
||||
LATEST_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n1)
|
||||
|
||||
PRE_RELEASE=false
|
||||
# Check if latest tag is a pre-release
|
||||
if echo $LATEST_TAG | grep -E -- '-rc[0-9]+$';then
|
||||
PRE_RELEASE=true
|
||||
fi
|
||||
|
||||
# Ensure latest tag matches github.ref_name & not a pre-release
|
||||
if [[ $LATEST_TAG == ${{ github.ref_name }} ]] && [[ $PRE_RELEASE != 'true' ]];then
|
||||
echo "TAG_STABLE=true" >> $GITHUB_ENV
|
||||
else
|
||||
echo "TAG_STABLE=false" >> $GITHUB_ENV
|
||||
fi
|
||||
|
||||
- name: Update stable tag to latest version
|
||||
run: |
|
||||
git tag -f stable ${{ github.ref_name }}
|
||||
git push -f origin stable
|
||||
if: ${{ env.TAG_STABLE == 'true' }}
|
||||
|
||||
- name: Check to see if VERSION should be updated on master branch
|
||||
run: |
|
||||
set -xue
|
||||
SOURCE_TAG=${{ github.ref_name }}
|
||||
VERSION_REF="${SOURCE_TAG#*v}"
|
||||
if echo "$VERSION_REF" | grep -E -- '^[0-9]+\.[0-9]+\.0$';then
|
||||
VERSION=$(awk 'BEGIN {FS=OFS="."} {$2++; print}' <<< "${VERSION_REF}")
|
||||
echo "Updating VERSION to: $VERSION"
|
||||
echo "UPDATE_VERSION=true" >> $GITHUB_ENV
|
||||
echo "NEW_VERSION=$VERSION" >> $GITHUB_ENV
|
||||
else
|
||||
echo "Not updating VERSION"
|
||||
echo "UPDATE_VERSION=false" >> $GITHUB_ENV
|
||||
fi
|
||||
|
||||
- name: Update VERSION on master branch
|
||||
run: |
|
||||
echo ${{ env.NEW_VERSION }} > VERSION
|
||||
if: ${{ env.UPDATE_VERSION == 'true' }}
|
||||
|
||||
- name: Create PR to update VERSION on master branch
|
||||
uses: peter-evans/create-pull-request@38e0b6e68b4c852a5500a94740f0e535e0d7ba54 # v4.2.4
|
||||
with:
|
||||
commit-message: Bump version in master
|
||||
title: "chore: Bump version in master"
|
||||
body: All images built from master should indicate which version we are on track for.
|
||||
signoff: true
|
||||
branch: update-version
|
||||
branch-suffix: random
|
||||
base: master
|
||||
if: ${{ env.UPDATE_VERSION == 'true' }}
|
||||
git push --delete origin ${SOURCE_TAG}
|
||||
if: ${{ always() }}
|
||||
|
||||
67
.github/workflows/scorecard.yaml
vendored
67
.github/workflows/scorecard.yaml
vendored
@@ -1,67 +0,0 @@
|
||||
name: Scorecards supply-chain security
|
||||
on:
|
||||
# Only the default branch is supported.
|
||||
branch_protection_rule:
|
||||
schedule:
|
||||
- cron: "39 9 * * 2"
|
||||
push:
|
||||
branches: ["master"]
|
||||
|
||||
concurrency:
|
||||
group: ${{ github.workflow }}-${{ github.ref }}
|
||||
cancel-in-progress: true
|
||||
|
||||
# Declare default permissions as read only.
|
||||
permissions: read-all
|
||||
|
||||
jobs:
|
||||
analysis:
|
||||
name: Scorecards analysis
|
||||
runs-on: ubuntu-22.04
|
||||
permissions:
|
||||
# Needed to upload the results to code-scanning dashboard.
|
||||
security-events: write
|
||||
# Used to receive a badge. (Upcoming feature)
|
||||
id-token: write
|
||||
# Needs for private repositories.
|
||||
contents: read
|
||||
actions: read
|
||||
if: github.repository == 'argoproj/argo-cd'
|
||||
|
||||
steps:
|
||||
- name: "Checkout code"
|
||||
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
|
||||
with:
|
||||
persist-credentials: false
|
||||
|
||||
- name: "Run analysis"
|
||||
uses: ossf/scorecard-action@e38b1902ae4f44df626f11ba0734b14fb91f8f86 # v2.1.2
|
||||
with:
|
||||
results_file: results.sarif
|
||||
results_format: sarif
|
||||
# (Optional) Read-only PAT token. Uncomment the `repo_token` line below if:
|
||||
# - you want to enable the Branch-Protection check on a *public* repository, or
|
||||
# - you are installing Scorecards on a *private* repository
|
||||
# To create the PAT, follow the steps in https://github.com/ossf/scorecard-action#authentication-with-pat.
|
||||
# repo_token: ${{ secrets.SCORECARD_READ_TOKEN }}
|
||||
|
||||
# Publish the results for public repositories to enable scorecard badges. For more details, see
|
||||
# https://github.com/ossf/scorecard-action#publishing-results.
|
||||
# For private repositories, `publish_results` will automatically be set to `false`, regardless
|
||||
# of the value entered here.
|
||||
publish_results: true
|
||||
|
||||
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
|
||||
# format to the repository Actions tab.
|
||||
- name: "Upload artifact"
|
||||
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2
|
||||
with:
|
||||
name: SARIF file
|
||||
path: results.sarif
|
||||
retention-days: 5
|
||||
|
||||
# Upload the results to GitHub's code scanning dashboard.
|
||||
- name: "Upload to code-scanning"
|
||||
uses: github/codeql-action/upload-sarif@3ebbd71c74ef574dbc558c82f70e52732c8b44fe # v2.2.1
|
||||
with:
|
||||
sarif_file: results.sarif
|
||||
2
.github/workflows/update-snyk.yaml
vendored
2
.github/workflows/update-snyk.yaml
vendored
@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # v3.4.0
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
- name: Build reports
2
.gitpod.Dockerfile
vendored
@@ -1,4 +1,4 @@
FROM gitpod/workspace-full@sha256:d5787229cd062aceae91109f1690013d3f25062916492fb7f444d13de3186178
FROM gitpod/workspace-full

USER root

120
.goreleaser.yaml
@@ -1,120 +0,0 @@
project_name: argocd

before:
  hooks:
    - go mod download
    - make build-ui

builds:
  - id: argocd-cli
    main: ./cmd
    binary: argocd-{{ .Os}}-{{ .Arch}}
    env:
      - CGO_ENABLED=0
    flags:
      - -v
    ldflags:
      - -X github.com/argoproj/argo-cd/v2/common.version={{ .Version }}
      - -X github.com/argoproj/argo-cd/v2/common.buildDate={{ .Date }}
      - -X github.com/argoproj/argo-cd/v2/common.gitCommit={{ .FullCommit }}
      - -X github.com/argoproj/argo-cd/v2/common.gitTreeState={{ .Env.GIT_TREE_STATE }}
      - -X github.com/argoproj/argo-cd/v2/common.kubectlVersion={{ .Env.KUBECTL_VERSION }}
      - -extldflags="-static"
    goos:
      - linux
      - darwin
      - windows
    goarch:
      - amd64
      - arm64
      - s390x
      - ppc64le
    ignore:
      - goos: darwin
        goarch: s390x
      - goos: darwin
        goarch: ppc64le
      - goos: windows
        goarch: s390x
      - goos: windows
        goarch: ppc64le
      - goos: windows
        goarch: arm64

archives:
  - id: argocd-archive
    builds:
      - argocd-cli
    name_template: |-
      {{ .ProjectName }}-{{ .Os }}-{{ .Arch }}
    format: binary

checksum:
  name_template: 'cli_checksums.txt'
  algorithm: sha256

release:
  prerelease: auto
  draft: false
  header: |
    ## Quick Start

    ### Non-HA:

    ```shell
    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/{{.Tag}}/manifests/install.yaml
    ```

    ### HA:

    ```shell
    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/{{.Tag}}/manifests/ha/install.yaml
    ```

    ## Release Signatures and Provenance

    All Argo CD container images are signed by cosign. A Provenance is generated for container images and CLI binaries which meet the SLSA Level 3 specifications. See the [documentation](https://argo-cd.readthedocs.io/en/stable/operator-manual/signed-release-assets) on how to verify.


    ## Upgrading

    If upgrading from a different minor version, be sure to read the [upgrading](https://argo-cd.readthedocs.io/en/stable/operator-manual/upgrading/overview/) documentation.
  footer: |
    **Full Changelog**: https://github.com/argoproj/argo-cd/compare/{{ .PreviousTag }}...{{ .Tag }}

    <a href="https://argoproj.github.io/cd/"><img src="https://raw.githubusercontent.com/argoproj/argo-site/master/content/pages/cd/gitops-cd.png" width="25%" ></a>


snapshot: #### To be removed for PR
  name_template: "2.6.0"

changelog:
  use:
    github
  sort: asc
  abbrev: 0
  groups: # Regex use RE2 syntax as defined here: https://github.com/google/re2/wiki/Syntax.
    - title: 'Features'
      regexp: '^.*?feat(\([[:word:]]+\))??!?:.+$'
      order: 100
    - title: 'Bug fixes'
      regexp: '^.*?fix(\([[:word:]]+\))??!?:.+$'
      order: 200
    - title: 'Documentation'
      regexp: '^.*?docs(\([[:word:]]+\))??!?:.+$'
      order: 300
    - title: 'Dependency updates'
      regexp: '^.*?(feat|fix|chore)\(deps?.+\)!?:.+$'
      order: 400
    - title: 'Other work'
      order: 999
  filters:
    exclude:
      - '^test:'
      - '^.*?Bump(\([[:word:]]+\))?.+$'


# yaml-language-server: $schema=https://goreleaser.com/static/schema.json
@@ -4,8 +4,4 @@ mkdocs:
fail_on_warning: false
python:
install:
- requirements: docs/requirements.txt
build:
os: "ubuntu-22.04"
tools:
python: "3.7"
- requirements: docs/requirements.txt
@@ -1,10 +1,10 @@
ARG BASE_IMAGE=docker.io/library/ubuntu:22.04@sha256:9a0bdde4188b896a372804be2384015e90e3f84906b750c1a53539b585fbbe7f
ARG BASE_IMAGE=docker.io/library/ubuntu:22.04
####################################################################################################
# Builder image
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.19.6@sha256:7ce31d15a3a4dbf20446cccffa4020d3a2974ad2287d96123f55caf22c7adb71 AS builder
FROM docker.io/library/golang:1.18 AS builder

RUN echo 'deb http://deb.debian.org/debian buster-backports main' >> /etc/apt/sources.list

@@ -83,7 +83,7 @@ WORKDIR /home/argocd
####################################################################################################
# Argo CD UI stage
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/node:18.15.0@sha256:8d9a875ee427897ef245302e31e2319385b092f1c3368b497e89790f240368f5 AS argocd-ui
FROM --platform=$BUILDPLATFORM docker.io/library/node:12.18.4 AS argocd-ui

WORKDIR /src
COPY ["ui/package.json", "ui/yarn.lock", "./"]
@@ -101,7 +101,7 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.19.6@sha256:7ce31d15a3a4dbf20446cccffa4020d3a2974ad2287d96123f55caf22c7adb71 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.18 AS argocd-build

WORKDIR /go/src/github.com/argoproj/argo-cd

@@ -132,4 +132,3 @@ RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-server && \
ln -s /usr/local/bin/argocd /usr/local/bin/argocd-k8s-auth

USER $ARGOCD_USER_ID
ENTRYPOINT ["/usr/bin/tini", "--"]
72
Makefile
@@ -219,7 +219,7 @@ clidocsgen: ensure-gopath
|
||||
|
||||
|
||||
.PHONY: codegen-local
|
||||
codegen-local: ensure-gopath mod-vendor-local gogen protogen clientgen openapigen clidocsgen manifests-local notification-docs notification-catalog
|
||||
codegen-local: ensure-gopath mod-vendor-local notification-docs notification-catalog gogen protogen clientgen openapigen clidocsgen manifests-local
|
||||
rm -rf vendor/
|
||||
|
||||
.PHONY: codegen
|
||||
@@ -232,11 +232,11 @@ cli: test-tools-image
|
||||
|
||||
.PHONY: cli-local
|
||||
cli-local: clean-debug
|
||||
CGO_ENABLED=0 GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${CLI_NAME} ./cmd
|
||||
CGO_ENABLED=0 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${CLI_NAME} ./cmd
|
||||
|
||||
.PHONY: gen-resources-cli-local
|
||||
gen-resources-cli-local: clean-debug
|
||||
CGO_ENABLED=0 GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${GEN_RESOURCES_CLI_NAME} ./hack/gen-resources/cmd
|
||||
CGO_ENABLED=0 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${GEN_RESOURCES_CLI_NAME} ./hack/gen-resources/cmd
|
||||
|
||||
.PHONY: release-cli
|
||||
release-cli: clean-debug build-ui
|
||||
@@ -266,19 +266,19 @@ manifests: test-tools-image
|
||||
# consolidated binary for cli, util, server, repo-server, controller
|
||||
.PHONY: argocd-all
|
||||
argocd-all: clean-debug
|
||||
CGO_ENABLED=0 GOOS=${GOOS} GOARCH=${GOARCH} GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${BIN_NAME} ./cmd
|
||||
CGO_ENABLED=0 GOOS=${GOOS} GOARCH=${GOARCH} go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${BIN_NAME} ./cmd
|
||||
|
||||
.PHONY: server
|
||||
server: clean-debug
|
||||
CGO_ENABLED=0 GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd
|
||||
CGO_ENABLED=0 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd
|
||||
|
||||
.PHONY: repo-server
|
||||
repo-server:
|
||||
CGO_ENABLED=0 GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-repo-server ./cmd
|
||||
CGO_ENABLED=0 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-repo-server ./cmd
|
||||
|
||||
.PHONY: controller
|
||||
controller:
|
||||
CGO_ENABLED=0 GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-application-controller ./cmd
|
||||
CGO_ENABLED=0 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-application-controller ./cmd
|
||||
|
||||
.PHONY: build-ui
|
||||
build-ui:
|
||||
@@ -294,7 +294,7 @@ ifeq ($(DEV_IMAGE), true)
|
||||
IMAGE_TAG="dev-$(shell git describe --always --dirty)"
|
||||
image: build-ui
|
||||
DOCKER_BUILDKIT=1 docker build --platform=linux/amd64 -t argocd-base --target argocd-base .
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd ./cmd
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd ./cmd
|
||||
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-server
|
||||
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-application-controller
|
||||
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-repo-server
|
||||
@@ -368,7 +368,7 @@ build: test-tools-image
|
||||
# Build all Go code (local version)
|
||||
.PHONY: build-local
|
||||
build-local:
|
||||
GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -v `go list ./... | grep -v 'resource_customizations\|test/e2e'`
|
||||
go build -v `go list ./... | grep -v 'resource_customizations\|test/e2e'`
|
||||
|
||||
# Run all unit tests
|
||||
#
|
||||
@@ -579,7 +579,7 @@ list:
|
||||
|
||||
.PHONY: applicationset-controller
|
||||
applicationset-controller:
|
||||
GODEBUG="tarinsecurepath=0,zipinsecurepath=0" CGO_ENABLED=0 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-applicationset-controller ./cmd
|
||||
CGO_ENABLED=0 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-applicationset-controller ./cmd
|
||||
|
||||
.PHONY: checksums
|
||||
checksums:
|
||||
@@ -596,55 +596,3 @@ snyk-non-container-tests:
|
||||
.PHONY: snyk-report
|
||||
snyk-report:
|
||||
./hack/snyk-report.sh $(target_branch)
|
||||
|
||||
.PHONY: help
|
||||
help:
|
||||
@echo 'Note: Generally an item w/ (-local) will run inside docker unless you use the -local variant'
|
||||
@echo
|
||||
@echo 'Common targets'
|
||||
@echo
|
||||
@echo 'all -- make cli and image'
|
||||
@echo
|
||||
@echo 'components:'
|
||||
@echo ' applicationset-controller -- applicationset controller'
|
||||
@echo ' cli(-local) -- argocd cli program'
|
||||
@echo ' controller -- controller (orchestrator)'
|
||||
@echo ' repo-server -- repo server (manage repository instances)'
|
||||
@echo ' server -- argocd web application'
|
||||
@echo
|
||||
@echo 'build:'
|
||||
@echo ' image -- make image of the following items'
|
||||
@echo ' build(-local) -- compile go'
|
||||
@echo ' build-docs(-local) -- build docs'
|
||||
@echo ' build-ui -- compile typescript'
|
||||
@echo
|
||||
@echo 'run:'
|
||||
@echo ' run -- run the components locally'
|
||||
@echo ' serve-docs(-local) -- expose the documents for viewing in a browser'
|
||||
@echo
|
||||
@echo 'release:'
|
||||
@echo ' release-cli'
|
||||
@echo ' release-precheck'
|
||||
@echo ' checksums'
|
||||
@echo
|
||||
@echo 'docs:'
|
||||
@echo ' build-docs(-local)'
|
||||
@echo ' serve-docs(-local)'
|
||||
@echo ' notification-docs'
|
||||
@echo ' clidocsgen'
|
||||
@echo
|
||||
@echo 'testing:'
|
||||
@echo ' test(-local)'
|
||||
@echo ' start-e2e(-local)'
|
||||
@echo ' test-e2e(-local)'
|
||||
@echo ' test-race(-local)'
|
||||
@echo
|
||||
@echo 'debug:'
|
||||
@echo ' list -- list all make targets'
|
||||
@echo ' install-tools-local -- install all the tools below'
|
||||
@echo ' install-lint-tools(-local)'
|
||||
@echo
|
||||
@echo 'codegen:'
|
||||
@echo ' codegen(-local) -- if using -local, run the following targets first'
|
||||
@echo ' install-codegen-tools-local -- run this to install the codegen tools'
|
||||
@echo ' install-go-tools-local -- run this to install go libraries for codegen'
|
||||
14
Procfile
@@ -1,12 +1,12 @@
|
||||
controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-application-controller $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''}"
|
||||
api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''}"
|
||||
controller: [ "$BIN_MODE" == 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-application-controller $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''}"
|
||||
api-server: [ "$BIN_MODE" == 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''}"
|
||||
dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v2/cmd gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:$(grep "image: ghcr.io/dexidp/dex" manifests/base/dex/argocd-dex-server-deployment.yaml | cut -d':' -f3) dex serve /dex.yaml"
|
||||
redis: bash -c "if [ \"$ARGOCD_REDIS_LOCAL\" = 'true' ]; then redis-server --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; else docker run --rm --name argocd-redis -i -p ${ARGOCD_E2E_REDIS_PORT:-6379}:${ARGOCD_E2E_REDIS_PORT:-6379} redis:$(grep "image: redis" manifests/base/redis/argocd-redis-deployment.yaml | cut -d':' -f3) --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; fi"
|
||||
repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
|
||||
cmp-server: [ "$ARGOCD_E2E_TEST" = 'true' ] && exit 0 || [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_BINARY_NAME=argocd-cmp-server ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} $COMMAND --config-dir-path ./test/cmp --loglevel debug --otlp-address=${ARGOCD_OTLP_ADDRESS}"
|
||||
redis: bash -c "if [ \"$ARGOCD_REDIS_LOCAL\" == 'true' ]; then redis-server --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; else docker run --rm --name argocd-redis -i -p ${ARGOCD_E2E_REDIS_PORT:-6379}:${ARGOCD_E2E_REDIS_PORT:-6379} redis:$(grep "image: redis" manifests/base/redis/argocd-redis-deployment.yaml | cut -d':' -f3) --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; fi"
|
||||
repo-server: [ "$BIN_MODE" == 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
|
||||
cmp-server: [ "$ARGOCD_E2E_TEST" == 'true' ] && exit 0 || [ "$BIN_MODE" == 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_BINARY_NAME=argocd-cmp-server ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} $COMMAND --config-dir-path ./test/cmp --loglevel debug --otlp-address=${ARGOCD_OTLP_ADDRESS}"
|
||||
ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
|
||||
git-server: test/fixture/testrepos/start-git.sh
|
||||
helm-registry: test/fixture/testrepos/start-helm-registry.sh
|
||||
dev-mounter: [[ "$ARGOCD_E2E_TEST" != "true" ]] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
|
||||
applicationset-controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_ASK_PASS_SOCK=/tmp/applicationset-ask-pass.sock ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
|
||||
notification: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_BINARY_NAME=argocd-notifications $COMMAND --loglevel debug"
|
||||
applicationset-controller: [ "$BIN_MODE" == 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_ASK_PASS_SOCK=/tmp/applicationset-ask-pass.sock ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
|
||||
notification: [ "$BIN_MODE" == 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_BINARY_NAME=argocd-notifications $COMMAND --loglevel debug"
|
||||
|
||||
@@ -6,7 +6,6 @@
|
||||
[](https://github.com/argoproj/argo-cd/actions?query=workflow%3A%22Integration+tests%22)
|
||||
[](https://codecov.io/gh/argoproj/argo-cd)
|
||||
[](https://bestpractices.coreinfrastructure.org/projects/4486)
|
||||
[](https://api.securityscorecards.dev/projects/github.com/argoproj/argo-cd)
|
||||
[](https://app.fossa.com/projects/git%2Bgithub.com%2Fargoproj%2Fargo-cd?ref=badge_shield)
|
||||
|
||||
**Social:**
|
||||
@@ -81,7 +80,7 @@ Participation in the Argo CD project is governed by the [CNCF Code of Conduct](h
|
||||
1. [Applied GitOps with Argo CD](https://thenewstack.io/applied-gitops-with-argocd/)
|
||||
1. [Solving configuration drift using GitOps with Argo CD](https://www.cncf.io/blog/2020/12/17/solving-configuration-drift-using-gitops-with-argo-cd/)
|
||||
1. [Decentralized GitOps over environments](https://blogs.sap.com/2021/05/06/decentralized-gitops-over-environments/)
|
||||
1. [How GitOps and Operators mark the rise of Infrastructure-As-Software](https://paytmlabs.com/blog/2021/10/how-to-improve-operational-work-with-operators-and-gitops/)
|
||||
1. [Getting Started with ArgoCD for GitOps Deployments](https://youtu.be/AvLuplh1skA)
|
||||
1. [Using Argo CD & Datree for Stable Kubernetes CI/CD Deployments](https://youtu.be/17894DTru2Y)
|
||||
1. [How to create Argo CD Applications Automatically using ApplicationSet? "Automation of GitOps"](https://amralaayassen.medium.com/how-to-create-argocd-applications-automatically-using-applicationset-automation-of-the-gitops-59455eaf4f72)
|
||||
|
||||
|
||||
22
SECURITY.md
@@ -1,6 +1,6 @@
|
||||
# Security Policy for Argo CD
|
||||
|
||||
Version: **v1.5 (2023-03-06)**
|
||||
Version: **v1.4 (2022-01-23)**
|
||||
|
||||
## Preface
|
||||
|
||||
@@ -41,7 +41,7 @@ previous to the most recent one (`N-1`, e.g. `1.7`). With the release of
|
||||
|
||||
We regularly perform patch releases (e.g. `1.8.5` and `1.7.12`) for the
|
||||
supported versions, which will contain fixes for security vulnerabilities and
|
||||
important bugs. Prior releases might receive critical security fixes on best
|
||||
important bugs. Prior releases might receive critical security fixes on a best
|
||||
effort basis, however, it cannot be guaranteed that security fixes get
|
||||
back-ported to these unsupported versions.
|
||||
|
||||
@@ -61,28 +61,14 @@ and disclosure with you. Sometimes, it might take a little longer for us to
|
||||
react (e.g. out of office conditions), so please bear with us in these cases.
|
||||
|
||||
We will publish security advisories using the
|
||||
[GitHub Security Advisories](https://github.com/argoproj/argo-cd/security/advisories)
|
||||
feature to keep our community well-informed, and will credit you for your
|
||||
[Git Hub Security Advisories](https://github.com/argoproj/argo-cd/security/advisories)
|
||||
feature to keep our community well informed, and will credit you for your
|
||||
findings (unless you prefer to stay anonymous, of course).
|
||||
|
||||
Please report vulnerabilities by e-mail to the following address:
|
||||
|
||||
* cncf-argo-security@lists.cncf.io
|
||||
|
||||
## Internet Bug Bounty collaboration
|
||||
|
||||
We're happy to announce that the Argo project is collaborating with the great
|
||||
folks over at
|
||||
[Hacker One](https://hackerone.com/) and their
|
||||
[Internet Bug Bounty program](https://hackerone.com/ibb)
|
||||
to reward the awesome people who find security vulnerabilities in the four
|
||||
main Argo projects (CD, Events, Rollouts and Workflows) and then work with
|
||||
us to fix and disclose them in a responsible manner.
|
||||
|
||||
If you report a vulnerability to us as outlined in this security policy, we
|
||||
will work together with you to find out whether your finding is eligible for
|
||||
claiming a bounty, and also on how to claim it.
|
||||
|
||||
## Securing your Argo CD Instance
|
||||
|
||||
See the [operator manual security page](docs/operator-manual/security.md) for
|
||||
|
||||
13
USERS.md
@@ -11,7 +11,6 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Adevinta](https://www.adevinta.com/)
|
||||
1. [Adfinis](https://adfinis.com)
|
||||
1. [Adventure](https://jp.adventurekk.com/)
|
||||
1. [Adyen](https://www.adyen.com)
|
||||
1. [AirQo](https://airqo.net/)
|
||||
1. [Akuity](https://akuity.io/)
|
||||
1. [Alibaba Group](https://www.alibabagroup.com/)
|
||||
@@ -46,9 +45,7 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Chargetrip](https://chargetrip.com)
|
||||
1. [Chime](https://www.chime.com)
|
||||
1. [Cisco ET&I](https://eti.cisco.com/)
|
||||
1. [Cloud Posse](https://www.cloudposse.com/)
|
||||
1. [Cloud Scale](https://cloudscaleinc.com/)
|
||||
1. [Cloudmate](https://cloudmt.co.kr/)
|
||||
1. [Cobalt](https://www.cobalt.io/)
|
||||
1. [Codefresh](https://www.codefresh.io/)
|
||||
1. [Codility](https://www.codility.com/)
|
||||
@@ -64,7 +61,6 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Deutsche Telekom AG](https://telekom.com)
|
||||
1. [Devopsi - Poland Software/DevOps Consulting](https://devopsi.pl/)
|
||||
1. [Devtron Labs](https://github.com/devtron-labs/devtron)
|
||||
1. [DigitalOcean](https://www.digitalocean.com)
|
||||
1. [Divistant](https://divistant.com)
|
||||
1. [Doximity](https://www.doximity.com/)
|
||||
1. [EDF Renewables](https://www.edf-re.com/)
|
||||
@@ -77,7 +73,6 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Energisme](https://energisme.com/)
|
||||
1. [enigmo](https://enigmo.co.jp/)
|
||||
1. [Envoy](https://envoy.com/)
|
||||
1. [Farfetch](https://www.farfetch.com)
|
||||
1. [Faro](https://www.faro.com/)
|
||||
1. [Fave](https://myfave.com)
|
||||
1. [Flip](https://flip.id)
|
||||
@@ -99,7 +94,6 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Gojek](https://www.gojek.io/)
|
||||
1. [Greenpass](https://www.greenpass.com.br/)
|
||||
1. [Gridfuse](https://gridfuse.com/)
|
||||
1. [Groww](https://groww.in)
|
||||
1. [Grupo MasMovil](https://grupomasmovil.com/en/)
|
||||
1. [Handelsbanken](https://www.handelsbanken.se)
|
||||
1. [Healy](https://www.healyworld.net)
|
||||
@@ -114,7 +108,6 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [imaware](https://imaware.health)
|
||||
1. [Indeed](https://indeed.com)
|
||||
1. [Index Exchange](https://www.indexexchange.com/)
|
||||
1. [Info Support](https://www.infosupport.com/)
|
||||
1. [InsideBoard](https://www.insideboard.com)
|
||||
1. [Intuit](https://www.intuit.com/)
|
||||
1. [Joblift](https://joblift.com/)
|
||||
@@ -136,7 +129,6 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Liatrio](https://www.liatrio.com)
|
||||
1. [Lightricks](https://www.lightricks.com/)
|
||||
1. [LINE](https://linecorp.com/en/)
|
||||
1. [Loom](https://www.loom.com/)
|
||||
1. [Lytt](https://www.lytt.co/)
|
||||
1. [Magic Leap](https://www.magicleap.com/)
|
||||
1. [Majid Al Futtaim](https://www.majidalfuttaim.com/)
|
||||
@@ -162,7 +154,6 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Nextdoor](https://nextdoor.com/)
|
||||
1. [Nikkei](https://www.nikkei.co.jp/nikkeiinfo/en/)
|
||||
1. [Nitro](https://gonitro.com)
|
||||
1. [NYCU, CS IT Center](https://it.cs.nycu.edu.tw)
|
||||
1. [Objective](https://www.objective.com.br/)
|
||||
1. [OCCMundial](https://occ.com.mx)
|
||||
1. [Octadesk](https://octadesk.com)
|
||||
@@ -170,7 +161,6 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [Omni](https://omni.se/)
|
||||
1. [openEuler](https://openeuler.org)
|
||||
1. [openGauss](https://opengauss.org/)
|
||||
1. [OpenGov](https://opengov.com)
|
||||
1. [openLooKeng](https://openlookeng.io)
|
||||
1. [OpenSaaS Studio](https://opensaas.studio)
|
||||
1. [Opensurvey](https://www.opensurvey.co.kr/)
|
||||
@@ -201,8 +191,6 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [RapidAPI](https://www.rapidapi.com/)
|
||||
1. [Recreation.gov](https://www.recreation.gov/)
|
||||
1. [Red Hat](https://www.redhat.com/)
|
||||
1. [Redpill Linpro](https://www.redpill-linpro.com/)
|
||||
1. [Reenigne Cloud](https://reenigne.ca)
|
||||
1. [reev.com](https://www.reev.com/)
|
||||
1. [RightRev](https://rightrev.com/)
|
||||
1. [Rise](https://www.risecard.eu/)
|
||||
@@ -255,7 +243,6 @@ Currently, the following organizations are **officially** using Argo CD:
|
||||
1. [ungleich.ch](https://ungleich.ch/)
|
||||
1. [Unifonic Inc](https://www.unifonic.com/)
|
||||
1. [Universidad Mesoamericana](https://www.umes.edu.gt/)
|
||||
1. [Vectra](https://www.vectra.ai)
|
||||
1. [Viaduct](https://www.viaduct.ai/)
|
||||
1. [Vinted](https://vinted.com/)
|
||||
1. [Virtuo](https://www.govirtuo.com/)
|
||||
|
||||
@@ -17,7 +17,6 @@ package controllers
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"time"
|
||||
|
||||
log "github.com/sirupsen/logrus"
|
||||
@@ -30,12 +29,9 @@ import (
|
||||
"k8s.io/client-go/kubernetes"
|
||||
"k8s.io/client-go/tools/record"
|
||||
ctrl "sigs.k8s.io/controller-runtime"
|
||||
"sigs.k8s.io/controller-runtime/pkg/builder"
|
||||
"sigs.k8s.io/controller-runtime/pkg/client"
|
||||
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
|
||||
"sigs.k8s.io/controller-runtime/pkg/event"
|
||||
"sigs.k8s.io/controller-runtime/pkg/handler"
|
||||
"sigs.k8s.io/controller-runtime/pkg/predicate"
|
||||
"sigs.k8s.io/controller-runtime/pkg/source"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/applicationset/generators"
|
||||
@@ -46,8 +42,6 @@ import (
|
||||
argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned"
|
||||
argoutil "github.com/argoproj/argo-cd/v2/util/argo"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/pkg/apis/application"
|
||||
)
|
||||
|
||||
const (
|
||||
@@ -59,7 +53,7 @@ const (
|
||||
)
|
||||
|
||||
var (
|
||||
defaultPreservedAnnotations = []string{
|
||||
preservedAnnotations = []string{
|
||||
NotifiedAnnotationKey,
|
||||
argov1alpha1.AnnotationKeyRefresh,
|
||||
}
|
||||
@@ -518,7 +512,7 @@ func (r *ApplicationSetReconciler) generateApplications(applicationSetInfo argov
|
||||
return res, applicationSetReason, firstError
|
||||
}
|
||||
|
||||
func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProgressiveSyncs bool) error {
|
||||
func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager) error {
|
||||
if err := mgr.GetFieldIndexer().IndexField(context.TODO(), &argov1alpha1.Application{}, ".metadata.controller", func(rawObj client.Object) []string {
|
||||
// grab the job object, extract the owner...
|
||||
app := rawObj.(*argov1alpha1.Application)
|
||||
@@ -537,11 +531,9 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
|
||||
return fmt.Errorf("error setting up with manager: %w", err)
|
||||
}
|
||||
|
||||
ownsHandler := getOwnsHandlerPredicates(enableProgressiveSyncs)
|
||||
|
||||
return ctrl.NewControllerManagedBy(mgr).
|
||||
For(&argov1alpha1.ApplicationSet{}).
|
||||
Owns(&argov1alpha1.Application{}, builder.WithPredicates(ownsHandler)).
|
||||
Owns(&argov1alpha1.Application{}).
|
||||
Watches(
|
||||
&source.Kind{Type: &corev1.Secret{}},
|
||||
&clusterSecretEventHandler{
|
||||
@@ -571,7 +563,7 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
|
||||
Namespace: generatedApp.Namespace,
|
||||
},
|
||||
TypeMeta: metav1.TypeMeta{
|
||||
Kind: application.ApplicationKind,
|
||||
Kind: "Application",
|
||||
APIVersion: "argoproj.io/v1alpha1",
|
||||
},
|
||||
}
|
||||
@@ -585,15 +577,9 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
|
||||
found.Operation = generatedApp.Operation
|
||||
}
|
||||
|
||||
preservedAnnotations := make([]string, 0)
|
||||
if applicationSet.Spec.PreservedFields != nil {
|
||||
preservedAnnotations = append(preservedAnnotations, applicationSet.Spec.PreservedFields.Annotations...)
|
||||
}
|
||||
// Preserve specially treated argo cd annotations:
|
||||
// * https://github.com/argoproj/applicationset/issues/180
|
||||
// * https://github.com/argoproj/argo-cd/issues/10500
|
||||
preservedAnnotations = append(preservedAnnotations, defaultPreservedAnnotations...)
|
||||
|
||||
for _, key := range preservedAnnotations {
|
||||
if state, exists := found.ObjectMeta.Annotations[key]; exists {
|
||||
if generatedApp.Annotations == nil {
|
||||
@@ -1326,73 +1312,4 @@ func syncApplication(application argov1alpha1.Application, prune bool) (argov1al
|
||||
return application, nil
|
||||
}
|
||||
|
||||
func getOwnsHandlerPredicates(enableProgressiveSyncs bool) predicate.Funcs {
|
||||
return predicate.Funcs{
|
||||
CreateFunc: func(e event.CreateEvent) bool {
|
||||
// if we are the owner and there is a create event, we most likely created it and do not need to
|
||||
// re-reconcile
|
||||
log.Debugln("received create event from owning an application")
|
||||
return false
|
||||
},
|
||||
DeleteFunc: func(e event.DeleteEvent) bool {
|
||||
log.Debugln("received delete event from owning an application")
|
||||
return true
|
||||
},
|
||||
UpdateFunc: func(e event.UpdateEvent) bool {
|
||||
log.Debugln("received update event from owning an application")
|
||||
appOld, isApp := e.ObjectOld.(*argov1alpha1.Application)
|
||||
if !isApp {
|
||||
return false
|
||||
}
|
||||
appNew, isApp := e.ObjectNew.(*argov1alpha1.Application)
|
||||
if !isApp {
|
||||
return false
|
||||
}
|
||||
requeue := shouldRequeueApplicationSet(appOld, appNew, enableProgressiveSyncs)
|
||||
log.Debugf("requeue: %t caused by application %s\n", requeue, appNew.Name)
|
||||
return requeue
|
||||
},
|
||||
GenericFunc: func(e event.GenericEvent) bool {
|
||||
log.Debugln("received generic event from owning an application")
|
||||
return true
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// shouldRequeueApplicationSet determines when we want to requeue an ApplicationSet for reconciling based on an owned
|
||||
// application change
|
||||
// The applicationset controller owns a subset of the Application CR.
|
||||
// We do not need to re-reconcile if parts of the application change outside the applicationset's control.
|
||||
// An example being, Application.ApplicationStatus.ReconciledAt which gets updated by the application controller.
|
||||
// Additionally, Application.ObjectMeta.ResourceVersion and Application.ObjectMeta.Generation which are set by K8s.
|
||||
func shouldRequeueApplicationSet(appOld *argov1alpha1.Application, appNew *argov1alpha1.Application, enableProgressiveSyncs bool) bool {
|
||||
if appOld == nil || appNew == nil {
|
||||
return false
|
||||
}
|
||||
|
||||
// the applicationset controller owns the application spec, labels, annotations, and finalizers on the applications
|
||||
if !reflect.DeepEqual(appOld.Spec, appNew.Spec) ||
|
||||
!reflect.DeepEqual(appOld.ObjectMeta.GetAnnotations(), appNew.ObjectMeta.GetAnnotations()) ||
|
||||
!reflect.DeepEqual(appOld.ObjectMeta.GetLabels(), appNew.ObjectMeta.GetLabels()) ||
|
||||
!reflect.DeepEqual(appOld.ObjectMeta.GetFinalizers(), appNew.ObjectMeta.GetFinalizers()) {
|
||||
return true
|
||||
}
|
||||
|
||||
// progressive syncs use the application status for updates. if they differ, requeue to trigger the next progression
|
||||
if enableProgressiveSyncs {
|
||||
if appOld.Status.Health.Status != appNew.Status.Health.Status || appOld.Status.Sync.Status != appNew.Status.Sync.Status {
|
||||
return true
|
||||
}
|
||||
|
||||
if appOld.Status.OperationState != nil && appNew.Status.OperationState != nil {
|
||||
if appOld.Status.OperationState.Phase != appNew.Status.OperationState.Phase ||
|
||||
appOld.Status.OperationState.StartedAt != appNew.Status.OperationState.StartedAt {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
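To make the predicate above concrete, here is a minimal, hypothetical sketch (not part of the diff) of how `shouldRequeueApplicationSet` responds to owned-field changes versus status-only changes, assuming the `argov1alpha1` types imported in this file:

```go
package controllers

import (
	"testing"

	"github.com/stretchr/testify/assert"

	argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)

// Hypothetical illustration of the requeue predicate; names follow the code above.
func TestShouldRequeueApplicationSetSketch(t *testing.T) {
	base := &argov1alpha1.Application{}

	specChanged := base.DeepCopy()
	specChanged.Spec.Project = "other-project" // an owned field changed

	statusChanged := base.DeepCopy()
	statusChanged.Status.Health.Status = "Degraded" // a status-only change

	// Owned fields (spec, labels, annotations, finalizers) always trigger a requeue.
	assert.True(t, shouldRequeueApplicationSet(base, specChanged, false))

	// Status-only changes are ignored unless progressive syncs are enabled.
	assert.False(t, shouldRequeueApplicationSet(base, statusChanged, false))
	assert.True(t, shouldRequeueApplicationSet(base, statusChanged, true))
}
```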
var _ handler.EventHandler = &clusterSecretEventHandler{}
|
||||
|
||||
File diff suppressed because it is too large
@@ -2,7 +2,6 @@ package controllers
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
|
||||
log "github.com/sirupsen/logrus"
|
||||
|
||||
@@ -47,6 +46,7 @@ type addRateLimitingInterface interface {
|
||||
}
|
||||
|
||||
func (h *clusterSecretEventHandler) queueRelatedAppGenerators(q addRateLimitingInterface, object client.Object) {
|
||||
|
||||
// Check for label, lookup all ApplicationSets that might match the cluster, queue them all
|
||||
if object.GetLabels()[generators.ArgoCDSecretTypeLabel] != generators.ArgoCDSecretTypeCluster {
|
||||
return
|
||||
@@ -73,40 +73,6 @@ func (h *clusterSecretEventHandler) queueRelatedAppGenerators(q addRateLimitingI
|
||||
foundClusterGenerator = true
|
||||
break
|
||||
}
|
||||
|
||||
if generator.Matrix != nil {
|
||||
ok, err := nestedGeneratorsHaveClusterGenerator(generator.Matrix.Generators)
|
||||
if err != nil {
|
||||
h.Log.
|
||||
WithFields(log.Fields{
|
||||
"namespace": appSet.GetNamespace(),
|
||||
"name": appSet.GetName(),
|
||||
}).
|
||||
WithError(err).
|
||||
Error("Unable to check if ApplicationSet matrix generators have cluster generator")
|
||||
}
|
||||
if ok {
|
||||
foundClusterGenerator = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if generator.Merge != nil {
|
||||
ok, err := nestedGeneratorsHaveClusterGenerator(generator.Merge.Generators)
|
||||
if err != nil {
|
||||
h.Log.
|
||||
WithFields(log.Fields{
|
||||
"namespace": appSet.GetNamespace(),
|
||||
"name": appSet.GetName(),
|
||||
}).
|
||||
WithError(err).
|
||||
Error("Unable to check if ApplicationSet merge generators have cluster generator")
|
||||
}
|
||||
if ok {
|
||||
foundClusterGenerator = true
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
if foundClusterGenerator {
|
||||
|
||||
@@ -116,42 +82,3 @@ func (h *clusterSecretEventHandler) queueRelatedAppGenerators(q addRateLimitingI
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// nestedGeneratorsHaveClusterGenerator iterate over provided nested generators to check if they have a cluster generator.
|
||||
func nestedGeneratorsHaveClusterGenerator(generators []argoprojiov1alpha1.ApplicationSetNestedGenerator) (bool, error) {
|
||||
for _, generator := range generators {
|
||||
if ok, err := nestedGeneratorHasClusterGenerator(generator); ok || err != nil {
|
||||
return ok, err
|
||||
}
|
||||
}
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// nestedGeneratorHasClusterGenerator checks if the provided generator has a cluster generator.
|
||||
func nestedGeneratorHasClusterGenerator(nested argoprojiov1alpha1.ApplicationSetNestedGenerator) (bool, error) {
|
||||
if nested.Clusters != nil {
|
||||
return true, nil
|
||||
}
|
||||
|
||||
if nested.Matrix != nil {
|
||||
nestedMatrix, err := argoprojiov1alpha1.ToNestedMatrixGenerator(nested.Matrix)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("unable to get nested matrix generator: %w", err)
|
||||
}
|
||||
if nestedMatrix != nil {
|
||||
return nestedGeneratorsHaveClusterGenerator(nestedMatrix.ToMatrixGenerator().Generators)
|
||||
}
|
||||
}
|
||||
|
||||
if nested.Merge != nil {
|
||||
nestedMerge, err := argoprojiov1alpha1.ToNestedMergeGenerator(nested.Merge)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("unable to get nested merge generator: %w", err)
|
||||
}
|
||||
if nestedMerge != nil {
|
||||
return nestedGeneratorsHaveClusterGenerator(nestedMerge.ToMergeGenerator().Generators)
|
||||
}
|
||||
}
|
||||
|
||||
return false, nil
|
||||
}
|
||||
|
||||
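As a small, hedged usage sketch (hypothetical, not part of the diff): the helpers above report whether any nested generator is, or eventually wraps, a cluster generator, which is what decides whether a cluster-secret event needs to requeue the owning ApplicationSet.

```go
// Hypothetical illustration; uses the argoprojiov1alpha1 alias from this file.
func exampleClusterDetection() {
	nested := []argoprojiov1alpha1.ApplicationSetNestedGenerator{
		{List: &argoprojiov1alpha1.ListGenerator{}},        // not a cluster generator
		{Clusters: &argoprojiov1alpha1.ClusterGenerator{}}, // cluster generator -> true
	}
	found, err := nestedGeneratorsHaveClusterGenerator(nested)
	_ = err   // only non-nil when a nested matrix/merge generator cannot be decoded
	_ = found // true for the slice above
}
```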
@@ -6,7 +6,6 @@ import (
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/stretchr/testify/assert"
|
||||
corev1 "k8s.io/api/core/v1"
|
||||
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
|
||||
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
"k8s.io/apimachinery/pkg/types"
|
||||
@@ -164,6 +163,7 @@ func TestClusterEventHandler(t *testing.T) {
|
||||
{NamespacedName: types.NamespacedName{Namespace: "another-namespace", Name: "my-app-set"}},
|
||||
},
|
||||
},
|
||||
|
||||
{
|
||||
name: "non-argo cd secret should not match",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
@@ -189,348 +189,6 @@ func TestClusterEventHandler(t *testing.T) {
|
||||
},
|
||||
expectedRequests: []reconcile.Request{},
|
||||
},
|
||||
{
|
||||
name: "a matrix generator with a cluster generator should produce a request",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Matrix: &argov1alpha1.MatrixGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Clusters: &argov1alpha1.ClusterGenerator{},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{{
|
||||
NamespacedName: types.NamespacedName{Namespace: "argocd", Name: "my-app-set"},
|
||||
}},
|
||||
},
|
||||
{
|
||||
name: "a matrix generator with non cluster generator should not match",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Matrix: &argov1alpha1.MatrixGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
List: &argov1alpha1.ListGenerator{},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{},
|
||||
},
|
||||
{
|
||||
name: "a matrix generator with a nested matrix generator containing a cluster generator should produce a request",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Matrix: &argov1alpha1.MatrixGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Matrix: &apiextensionsv1.JSON{
|
||||
Raw: []byte(
|
||||
`{
|
||||
"generators": [
|
||||
{
|
||||
"clusters": {
|
||||
"selector": {
|
||||
"matchLabels": {
|
||||
"argocd.argoproj.io/secret-type": "cluster"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}`,
|
||||
),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{{
|
||||
NamespacedName: types.NamespacedName{Namespace: "argocd", Name: "my-app-set"},
|
||||
}},
|
||||
},
|
||||
{
|
||||
name: "a matrix generator with a nested matrix generator containing non cluster generator should not match",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Matrix: &argov1alpha1.MatrixGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Matrix: &apiextensionsv1.JSON{
|
||||
Raw: []byte(
|
||||
`{
|
||||
"generators": [
|
||||
{
|
||||
"list": {
|
||||
"elements": [
|
||||
"a",
|
||||
"b"
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
}`,
|
||||
),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{},
|
||||
},
|
||||
{
|
||||
name: "a merge generator with a cluster generator should produce a request",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Merge: &argov1alpha1.MergeGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Clusters: &argov1alpha1.ClusterGenerator{},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{{
|
||||
NamespacedName: types.NamespacedName{Namespace: "argocd", Name: "my-app-set"},
|
||||
}},
|
||||
},
|
||||
{
|
||||
name: "a matrix generator with non cluster generator should not match",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Merge: &argov1alpha1.MergeGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
List: &argov1alpha1.ListGenerator{},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{},
|
||||
},
|
||||
{
|
||||
name: "a merge generator with a nested merge generator containing a cluster generator should produce a request",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Merge: &argov1alpha1.MergeGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Merge: &apiextensionsv1.JSON{
|
||||
Raw: []byte(
|
||||
`{
|
||||
"generators": [
|
||||
{
|
||||
"clusters": {
|
||||
"selector": {
|
||||
"matchLabels": {
|
||||
"argocd.argoproj.io/secret-type": "cluster"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}`,
|
||||
),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{{
|
||||
NamespacedName: types.NamespacedName{Namespace: "argocd", Name: "my-app-set"},
|
||||
}},
|
||||
},
|
||||
{
|
||||
name: "a merge generator with a nested merge generator containing non cluster generator should not match",
|
||||
items: []argov1alpha1.ApplicationSet{
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "my-app-set",
|
||||
Namespace: "argocd",
|
||||
},
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{
|
||||
Merge: &argov1alpha1.MergeGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Merge: &apiextensionsv1.JSON{
|
||||
Raw: []byte(
|
||||
`{
|
||||
"generators": [
|
||||
{
|
||||
"list": {
|
||||
"elements": [
|
||||
"a",
|
||||
"b"
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
}`,
|
||||
),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
secret: corev1.Secret{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "argocd",
|
||||
Name: "my-secret",
|
||||
Labels: map[string]string{
|
||||
generators.ArgoCDSecretTypeLabel: generators.ArgoCDSecretTypeCluster,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedRequests: []reconcile.Request{},
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range tests {
|
||||
|
||||
@@ -1,179 +0,0 @@
|
||||
package controllers
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/applicationset/generators"
|
||||
argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/mock"
|
||||
corev1 "k8s.io/api/core/v1"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
dynfake "k8s.io/client-go/dynamic/fake"
|
||||
kubefake "k8s.io/client-go/kubernetes/fake"
|
||||
"k8s.io/client-go/tools/record"
|
||||
"sigs.k8s.io/controller-runtime/pkg/client/fake"
|
||||
)
|
||||
|
||||
func TestRequeueAfter(t *testing.T) {
|
||||
mockServer := argoCDServiceMock{}
|
||||
ctx := context.Background()
|
||||
scheme := runtime.NewScheme()
|
||||
err := argov1alpha1.AddToScheme(scheme)
|
||||
assert.Nil(t, err)
|
||||
gvrToListKind := map[schema.GroupVersionResource]string{{
|
||||
Group: "mallard.io",
|
||||
Version: "v1",
|
||||
Resource: "ducks",
|
||||
}: "DuckList"}
|
||||
appClientset := kubefake.NewSimpleClientset()
|
||||
k8sClient := fake.NewClientBuilder().Build()
|
||||
duckType := &unstructured.Unstructured{
|
||||
Object: map[string]interface{}{
|
||||
"apiVersion": "v2quack",
|
||||
"kind": "Duck",
|
||||
"metadata": map[string]interface{}{
|
||||
"name": "mightyduck",
|
||||
"namespace": "namespace",
|
||||
"labels": map[string]interface{}{"duck": "all-species"},
|
||||
},
|
||||
"status": map[string]interface{}{
|
||||
"decisions": []interface{}{
|
||||
map[string]interface{}{
|
||||
"clusterName": "staging-01",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"clusterName": "production-01",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
fakeDynClient := dynfake.NewSimpleDynamicClientWithCustomListKinds(runtime.NewScheme(), gvrToListKind, duckType)
|
||||
|
||||
terminalGenerators := map[string]generators.Generator{
|
||||
"List": generators.NewListGenerator(),
|
||||
"Clusters": generators.NewClusterGenerator(k8sClient, ctx, appClientset, "argocd"),
|
||||
"Git": generators.NewGitGenerator(mockServer),
|
||||
"SCMProvider": generators.NewSCMProviderGenerator(fake.NewClientBuilder().WithObjects(&corev1.Secret{}).Build(), generators.SCMAuthProviders{}),
|
||||
"ClusterDecisionResource": generators.NewDuckTypeGenerator(ctx, fakeDynClient, appClientset, "argocd"),
|
||||
"PullRequest": generators.NewPullRequestGenerator(k8sClient, generators.SCMAuthProviders{}),
|
||||
}
|
||||
|
||||
nestedGenerators := map[string]generators.Generator{
|
||||
"List": terminalGenerators["List"],
|
||||
"Clusters": terminalGenerators["Clusters"],
|
||||
"Git": terminalGenerators["Git"],
|
||||
"SCMProvider": terminalGenerators["SCMProvider"],
|
||||
"ClusterDecisionResource": terminalGenerators["ClusterDecisionResource"],
|
||||
"PullRequest": terminalGenerators["PullRequest"],
|
||||
"Matrix": generators.NewMatrixGenerator(terminalGenerators),
|
||||
"Merge": generators.NewMergeGenerator(terminalGenerators),
|
||||
}
|
||||
|
||||
topLevelGenerators := map[string]generators.Generator{
|
||||
"List": terminalGenerators["List"],
|
||||
"Clusters": terminalGenerators["Clusters"],
|
||||
"Git": terminalGenerators["Git"],
|
||||
"SCMProvider": terminalGenerators["SCMProvider"],
|
||||
"ClusterDecisionResource": terminalGenerators["ClusterDecisionResource"],
|
||||
"PullRequest": terminalGenerators["PullRequest"],
|
||||
"Matrix": generators.NewMatrixGenerator(nestedGenerators),
|
||||
"Merge": generators.NewMergeGenerator(nestedGenerators),
|
||||
}
|
||||
|
||||
client := fake.NewClientBuilder().WithScheme(scheme).Build()
|
||||
r := ApplicationSetReconciler{
|
||||
Client: client,
|
||||
Scheme: scheme,
|
||||
Recorder: record.NewFakeRecorder(0),
|
||||
Generators: topLevelGenerators,
|
||||
}
|
||||
|
||||
type args struct {
|
||||
appset *argov1alpha1.ApplicationSet
|
||||
}
|
||||
tests := []struct {
|
||||
name string
|
||||
args args
|
||||
want time.Duration
|
||||
wantErr assert.ErrorAssertionFunc
|
||||
}{
|
||||
{name: "Cluster", args: args{appset: &argov1alpha1.ApplicationSet{
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{{Clusters: &argov1alpha1.ClusterGenerator{}}},
|
||||
},
|
||||
}}, want: generators.NoRequeueAfter, wantErr: assert.NoError},
|
||||
{name: "ClusterMergeNested", args: args{&argov1alpha1.ApplicationSet{
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{Clusters: &argov1alpha1.ClusterGenerator{}},
|
||||
{Merge: &argov1alpha1.MergeGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Clusters: &argov1alpha1.ClusterGenerator{},
|
||||
Git: &argov1alpha1.GitGenerator{},
|
||||
},
|
||||
},
|
||||
}},
|
||||
},
|
||||
},
|
||||
}}, want: generators.DefaultRequeueAfterSeconds, wantErr: assert.NoError},
|
||||
{name: "ClusterMatrixNested", args: args{&argov1alpha1.ApplicationSet{
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{
|
||||
{Clusters: &argov1alpha1.ClusterGenerator{}},
|
||||
{Matrix: &argov1alpha1.MatrixGenerator{
|
||||
Generators: []argov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Clusters: &argov1alpha1.ClusterGenerator{},
|
||||
Git: &argov1alpha1.GitGenerator{},
|
||||
},
|
||||
},
|
||||
}},
|
||||
},
|
||||
},
|
||||
}}, want: generators.DefaultRequeueAfterSeconds, wantErr: assert.NoError},
|
||||
{name: "ListGenerator", args: args{appset: &argov1alpha1.ApplicationSet{
|
||||
Spec: argov1alpha1.ApplicationSetSpec{
|
||||
Generators: []argov1alpha1.ApplicationSetGenerator{{List: &argov1alpha1.ListGenerator{}}},
|
||||
},
|
||||
}}, want: generators.NoRequeueAfter, wantErr: assert.NoError},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
assert.Equalf(t, tt.want, r.getMinRequeueAfter(tt.args.appset), "getMinRequeueAfter(%v)", tt.args.appset)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
type argoCDServiceMock struct {
|
||||
mock *mock.Mock
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetApps(ctx context.Context, repoURL string, revision string) ([]string, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision)
|
||||
|
||||
return args.Get(0).([]string), args.Error(1)
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetFiles(ctx context.Context, repoURL string, revision string, pattern string) (map[string][]byte, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision, pattern)
|
||||
|
||||
return args.Get(0).(map[string][]byte), args.Error(1)
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetFileContent(ctx context.Context, repoURL string, revision string, path string) ([]byte, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision, path)
|
||||
|
||||
return args.Get(0).([]byte), args.Error(1)
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetDirectories(ctx context.Context, repoURL string, revision string) ([]string, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision)
|
||||
return args.Get(0).([]string), args.Error(1)
|
||||
}
|
||||
@@ -1,14 +0,0 @@
|
||||
key:
|
||||
components:
|
||||
- name: component1
|
||||
chart: podinfo
|
||||
version: "6.3.2"
|
||||
releaseName: component1
|
||||
repoUrl: "https://stefanprodan.github.io/podinfo"
|
||||
namespace: component1
|
||||
- name: component2
|
||||
chart: podinfo
|
||||
version: "6.3.3"
|
||||
releaseName: component2
|
||||
repoUrl: "ghcr.io/stefanprodan/charts"
|
||||
namespace: component2
|
||||
@@ -51,8 +51,6 @@ func NewClusterGenerator(c client.Client, ctx context.Context, clientset kuberne
|
||||
return g
|
||||
}
|
||||
|
||||
// GetRequeueAfter never requeue the cluster generator because the `clusterSecretEventHandler` will requeue the appsets
|
||||
// when the cluster secrets change
|
||||
func (g *ClusterGenerator) GetRequeueAfter(appSetGenerator *argoappsetv1alpha1.ApplicationSetGenerator) time.Duration {
|
||||
return NoRequeueAfter
|
||||
}
|
||||
|
||||
@@ -2,6 +2,7 @@ package generators
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"encoding/json"
|
||||
"reflect"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/applicationset/utils"
|
||||
@@ -24,7 +25,7 @@ type TransformResult struct {
|
||||
Template argoprojiov1alpha1.ApplicationSetTemplate
|
||||
}
|
||||
|
||||
// Transform a spec generator to list of paramSets and a template
|
||||
//Transform a spec generator to list of paramSets and a template
|
||||
func Transform(requestedGenerator argoprojiov1alpha1.ApplicationSetGenerator, allGenerators map[string]Generator, baseTemplate argoprojiov1alpha1.ApplicationSetTemplate, appSet *argoprojiov1alpha1.ApplicationSet, genParams map[string]interface{}) ([]TransformResult, error) {
|
||||
selector, err := metav1.LabelSelectorAsSelector(requestedGenerator.Selector)
|
||||
if err != nil {
|
||||
@@ -131,15 +132,27 @@ func mergeGeneratorTemplate(g Generator, requestedGenerator *argoprojiov1alpha1.
|
||||
return *dest, err
|
||||
}
|
||||
|
||||
// InterpolateGenerator allows interpolating the matrix's 2nd child generator with values from the 1st child generator
|
||||
// Currently for Matrix Generator. Allows interpolating the matrix's 2nd child generator with values from the 1st child generator
|
||||
// "params" parameter is an array, where each index corresponds to a generator. Each index contains a map w/ that generator's parameters.
|
||||
func InterpolateGenerator(requestedGenerator *argoprojiov1alpha1.ApplicationSetGenerator, params map[string]interface{}, useGoTemplate bool) (argoprojiov1alpha1.ApplicationSetGenerator, error) {
|
||||
render := utils.Render{}
|
||||
interpolatedGenerator, err := render.RenderGeneratorParams(requestedGenerator, params, useGoTemplate)
|
||||
interpolatedGenerator := requestedGenerator.DeepCopy()
|
||||
tmplBytes, err := json.Marshal(interpolatedGenerator)
|
||||
if err != nil {
|
||||
log.WithError(err).WithField("interpolatedGenerator", interpolatedGenerator).Error("error interpolating generator with other generator's parameter")
|
||||
log.WithError(err).WithField("requestedGenerator", interpolatedGenerator).Error("error marshalling requested generator for interpolation")
|
||||
return *interpolatedGenerator, err
|
||||
}
|
||||
|
||||
render := utils.Render{}
|
||||
replacedTmplStr, err := render.Replace(string(tmplBytes), params, useGoTemplate)
|
||||
if err != nil {
|
||||
log.WithError(err).WithField("interpolatedGeneratorString", replacedTmplStr).Error("error interpolating generator with other generator's parameter")
|
||||
return *interpolatedGenerator, err
|
||||
}
|
||||
|
||||
err = json.Unmarshal([]byte(replacedTmplStr), interpolatedGenerator)
|
||||
if err != nil {
|
||||
log.WithError(err).WithField("requestedGenerator", interpolatedGenerator).Error("error unmarshalling requested generator for interpolation")
|
||||
return *interpolatedGenerator, err
|
||||
}
|
||||
return *interpolatedGenerator, nil
|
||||
}
|
||||
|
||||
@@ -6,11 +6,9 @@ import (
|
||||
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
|
||||
testutils "github.com/argoproj/argo-cd/v2/applicationset/utils/test"
|
||||
argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
|
||||
"github.com/stretchr/testify/mock"
|
||||
@@ -161,8 +159,8 @@ func getMockClusterGenerator() Generator {
|
||||
}
|
||||
|
||||
func getMockGitGenerator() Generator {
|
||||
argoCDServiceMock := testutils.ArgoCDServiceMock{Mock: &mock.Mock{}}
|
||||
argoCDServiceMock.Mock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
|
||||
argoCDServiceMock := argoCDServiceMock{mock: &mock.Mock{}}
|
||||
argoCDServiceMock.mock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
|
||||
var gitGenerator = NewGitGenerator(argoCDServiceMock)
|
||||
return gitGenerator
|
||||
}
|
||||
@@ -250,60 +248,6 @@ func TestInterpolateGenerator(t *testing.T) {
|
||||
Path: "{{server}}",
|
||||
}
|
||||
|
||||
requestedGenerator = &argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
Git: &argoprojiov1alpha1.GitGenerator{
|
||||
Files: append([]argoprojiov1alpha1.GitFileGeneratorItem{}, fileNamePath, fileServerPath),
|
||||
Template: argoprojiov1alpha1.ApplicationSetTemplate{},
|
||||
},
|
||||
}
|
||||
clusterGeneratorParams := map[string]interface{}{
|
||||
"name": "production_01/west", "server": "https://production-01.example.com",
|
||||
}
|
||||
interpolatedGenerator, err = InterpolateGenerator(requestedGenerator, clusterGeneratorParams, false)
|
||||
if err != nil {
|
||||
log.WithError(err).WithField("requestedGenerator", requestedGenerator).Error("error interpolating Generator")
|
||||
return
|
||||
}
|
||||
assert.Equal(t, "production_01/west", interpolatedGenerator.Git.Files[0].Path)
|
||||
assert.Equal(t, "https://production-01.example.com", interpolatedGenerator.Git.Files[1].Path)
|
||||
}
|
||||
|
||||
func TestInterpolateGenerator_go(t *testing.T) {
|
||||
requestedGenerator := &argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
Clusters: &argoprojiov1alpha1.ClusterGenerator{
|
||||
Selector: metav1.LabelSelector{
|
||||
MatchLabels: map[string]string{
|
||||
"argocd.argoproj.io/secret-type": "cluster",
|
||||
"path-basename": "{{base .path.path}}",
|
||||
"path-zero": "{{index .path.segments 0}}",
|
||||
"path-full": "{{.path.path}}",
|
||||
"kubernetes.io/environment": `{{default "foo" .my_label}}`,
|
||||
}},
|
||||
},
|
||||
}
|
||||
gitGeneratorParams := map[string]interface{}{
|
||||
"path": map[string]interface{}{
|
||||
"path": "p1/p2/app3",
|
||||
"segments": []string{"p1", "p2", "app3"},
|
||||
},
|
||||
}
|
||||
interpolatedGenerator, err := InterpolateGenerator(requestedGenerator, gitGeneratorParams, true)
|
||||
require.NoError(t, err)
|
||||
if err != nil {
|
||||
log.WithError(err).WithField("requestedGenerator", requestedGenerator).Error("error interpolating Generator")
|
||||
return
|
||||
}
|
||||
assert.Equal(t, "app3", interpolatedGenerator.Clusters.Selector.MatchLabels["path-basename"])
|
||||
assert.Equal(t, "p1", interpolatedGenerator.Clusters.Selector.MatchLabels["path-zero"])
|
||||
assert.Equal(t, "p1/p2/app3", interpolatedGenerator.Clusters.Selector.MatchLabels["path-full"])
|
||||
|
||||
fileNamePath := argoprojiov1alpha1.GitFileGeneratorItem{
|
||||
Path: "{{.name}}",
|
||||
}
|
||||
fileServerPath := argoprojiov1alpha1.GitFileGeneratorItem{
|
||||
Path: "{{.server}}",
|
||||
}
|
||||
|
||||
requestedGenerator = &argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
Git: &argoprojiov1alpha1.GitGenerator{
|
||||
Files: append([]argoprojiov1alpha1.GitFileGeneratorItem{}, fileNamePath, fileServerPath),
|
||||
|
||||
@@ -58,9 +58,9 @@ func (g *GitGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.Applic
|
||||
|
||||
var err error
|
||||
var res []map[string]interface{}
|
||||
if len(appSetGenerator.Git.Directories) != 0 {
|
||||
if appSetGenerator.Git.Directories != nil {
|
||||
res, err = g.generateParamsForGitDirectories(appSetGenerator, appSet.Spec.GoTemplate)
|
||||
} else if len(appSetGenerator.Git.Files) != 0 {
|
||||
} else if appSetGenerator.Git.Files != nil {
|
||||
res, err = g.generateParamsForGitFiles(appSetGenerator, appSet.Spec.GoTemplate)
|
||||
} else {
|
||||
return nil, EmptyAppSetGeneratorError
|
||||
@@ -81,10 +81,10 @@ func (g *GitGenerator) generateParamsForGitDirectories(appSetGenerator *argoproj
|
||||
}
|
||||
|
||||
log.WithFields(log.Fields{
|
||||
"allPaths": allPaths,
|
||||
"total": len(allPaths),
|
||||
"repoURL": appSetGenerator.Git.RepoURL,
|
||||
"revision": appSetGenerator.Git.Revision,
|
||||
"allPaths": allPaths,
|
||||
"total": len(allPaths),
|
||||
"repoURL": appSetGenerator.Git.RepoURL,
|
||||
"revision": appSetGenerator.Git.Revision,
|
||||
"pathParamPrefix": appSetGenerator.Git.PathParamPrefix,
|
||||
}).Info("applications result from the repo service")
|
||||
|
||||
@@ -127,7 +127,9 @@ func (g *GitGenerator) generateParamsForGitFiles(appSetGenerator *argoprojiov1al
|
||||
return nil, fmt.Errorf("unable to process file '%s': %v", path, err)
|
||||
}
|
||||
|
||||
res = append(res, paramsArray...)
|
||||
for index := range paramsArray {
|
||||
res = append(res, paramsArray[index])
|
||||
}
|
||||
}
|
||||
return res, nil
|
||||
}
|
||||
@@ -181,7 +183,7 @@ func (g *GitGenerator) generateParamsFromGitFile(filePath string, fileContent []
|
||||
}
|
||||
pathParamName := "path"
|
||||
if pathParamPrefix != "" {
|
||||
pathParamName = pathParamPrefix + "." + pathParamName
|
||||
pathParamName = pathParamPrefix+"."+pathParamName
|
||||
}
|
||||
params[pathParamName] = path.Dir(filePath)
|
||||
params[pathParamName+".basename"] = path.Base(params[pathParamName].(string))
|
||||
@@ -249,7 +251,7 @@ func (g *GitGenerator) generateParamsFromApps(requestedApps []string, appSetGene
|
||||
} else {
|
||||
pathParamName := "path"
|
||||
if appSetGenerator.Git.PathParamPrefix != "" {
|
||||
pathParamName = appSetGenerator.Git.PathParamPrefix + "." + pathParamName
|
||||
pathParamName = appSetGenerator.Git.PathParamPrefix+"."+pathParamName
|
||||
}
|
||||
params[pathParamName] = a
|
||||
params[pathParamName+".basename"] = path.Base(a)
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
package generators
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
@@ -8,7 +9,6 @@ import (
|
||||
"github.com/stretchr/testify/mock"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
|
||||
testutils "github.com/argoproj/argo-cd/v2/applicationset/utils/test"
|
||||
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
)
|
||||
|
||||
@@ -20,6 +20,33 @@ import (
|
||||
// return io.NewCloser(func() error { return nil }), c.RepoServerServiceClient, nil
|
||||
// }
|
||||
|
||||
type argoCDServiceMock struct {
|
||||
mock *mock.Mock
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetApps(ctx context.Context, repoURL string, revision string) ([]string, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision)
|
||||
|
||||
return args.Get(0).([]string), args.Error(1)
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetFiles(ctx context.Context, repoURL string, revision string, pattern string) (map[string][]byte, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision, pattern)
|
||||
|
||||
return args.Get(0).(map[string][]byte), args.Error(1)
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetFileContent(ctx context.Context, repoURL string, revision string, path string) ([]byte, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision, path)
|
||||
|
||||
return args.Get(0).([]byte), args.Error(1)
|
||||
}
|
||||
|
||||
func (a argoCDServiceMock) GetDirectories(ctx context.Context, repoURL string, revision string) ([]string, error) {
|
||||
args := a.mock.Called(ctx, repoURL, revision)
|
||||
return args.Get(0).([]string), args.Error(1)
|
||||
}
|
||||
|
||||
func Test_generateParamsFromGitFile(t *testing.T) {
|
||||
params, err := (*GitGenerator)(nil).generateParamsFromGitFile("path/dir/file_name.yaml", []byte(`
|
||||
foo:
|
||||
@@ -244,9 +271,9 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
argoCDServiceMock := testutils.ArgoCDServiceMock{Mock: &mock.Mock{}}
|
||||
argoCDServiceMock := argoCDServiceMock{mock: &mock.Mock{}}
|
||||
|
||||
argoCDServiceMock.Mock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
|
||||
argoCDServiceMock.mock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
|
||||
|
||||
var gitGenerator = NewGitGenerator(argoCDServiceMock)
|
||||
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
|
||||
@@ -274,7 +301,7 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
|
||||
assert.Equal(t, testCaseCopy.expected, got)
|
||||
}
|
||||
|
||||
argoCDServiceMock.Mock.AssertExpectations(t)
|
||||
argoCDServiceMock.mock.AssertExpectations(t)
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -539,9 +566,9 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
argoCDServiceMock := testutils.ArgoCDServiceMock{Mock: &mock.Mock{}}
|
||||
argoCDServiceMock := argoCDServiceMock{mock: &mock.Mock{}}
|
||||
|
||||
argoCDServiceMock.Mock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
|
||||
argoCDServiceMock.mock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
|
||||
|
||||
var gitGenerator = NewGitGenerator(argoCDServiceMock)
|
||||
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
|
||||
@@ -570,7 +597,7 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
|
||||
assert.Equal(t, testCaseCopy.expected, got)
|
||||
}
|
||||
|
||||
argoCDServiceMock.Mock.AssertExpectations(t)
|
||||
argoCDServiceMock.mock.AssertExpectations(t)
|
||||
})
|
||||
}
|
||||
|
||||
@@ -830,8 +857,8 @@ cluster:
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
argoCDServiceMock := testutils.ArgoCDServiceMock{Mock: &mock.Mock{}}
|
||||
argoCDServiceMock.Mock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything).
|
||||
argoCDServiceMock := argoCDServiceMock{mock: &mock.Mock{}}
|
||||
argoCDServiceMock.mock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything).
|
||||
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
|
||||
|
||||
var gitGenerator = NewGitGenerator(argoCDServiceMock)
|
||||
@@ -860,7 +887,7 @@ cluster:
|
||||
assert.ElementsMatch(t, testCaseCopy.expected, got)
|
||||
}
|
||||
|
||||
argoCDServiceMock.Mock.AssertExpectations(t)
|
||||
argoCDServiceMock.mock.AssertExpectations(t)
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -1179,8 +1206,8 @@ cluster:
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
argoCDServiceMock := testutils.ArgoCDServiceMock{Mock: &mock.Mock{}}
|
||||
argoCDServiceMock.Mock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything).
|
||||
argoCDServiceMock := argoCDServiceMock{mock: &mock.Mock{}}
|
||||
argoCDServiceMock.mock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything).
|
||||
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
|
||||
|
||||
var gitGenerator = NewGitGenerator(argoCDServiceMock)
|
||||
@@ -1210,7 +1237,7 @@ cluster:
|
||||
assert.ElementsMatch(t, testCaseCopy.expected, got)
|
||||
}
|
||||
|
||||
argoCDServiceMock.Mock.AssertExpectations(t)
|
||||
argoCDServiceMock.mock.AssertExpectations(t)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -6,7 +6,6 @@ import (
|
||||
"time"
|
||||
|
||||
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
"sigs.k8s.io/yaml"
|
||||
)
|
||||
|
||||
var _ Generator = (*ListGenerator)(nil)
|
||||
@@ -74,16 +73,5 @@ func (g *ListGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.Appli
|
||||
}
|
||||
}
|
||||
|
||||
// Append elements from ElementsYaml to the response
|
||||
if len(appSetGenerator.List.ElementsYaml) > 0 {
|
||||
|
||||
var yamlElements []map[string]interface{}
|
||||
err := yaml.Unmarshal([]byte(appSetGenerator.List.ElementsYaml), &yamlElements)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error unmarshling decoded ElementsYaml %v", err)
|
||||
}
|
||||
res = append(res, yamlElements...)
|
||||
}
|
||||
|
||||
return res, nil
|
||||
}
|
||||
|
||||
@@ -80,13 +80,28 @@ func (m *MatrixGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.App
|
||||
}
|
||||
|
||||
func (m *MatrixGenerator) getParams(appSetBaseGenerator argoprojiov1alpha1.ApplicationSetNestedGenerator, appSet *argoprojiov1alpha1.ApplicationSet, params map[string]interface{}) ([]map[string]interface{}, error) {
|
||||
matrixGen, err := getMatrixGenerator(appSetBaseGenerator)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
var matrix *argoprojiov1alpha1.MatrixGenerator
|
||||
if appSetBaseGenerator.Matrix != nil {
|
||||
// Since nested matrix generator is represented as a JSON object in the CRD, we unmarshall it back to a Go struct here.
|
||||
nestedMatrix, err := argoprojiov1alpha1.ToNestedMatrixGenerator(appSetBaseGenerator.Matrix)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("unable to unmarshall nested matrix generator: %v", err)
|
||||
}
|
||||
if nestedMatrix != nil {
|
||||
matrix = nestedMatrix.ToMatrixGenerator()
|
||||
}
|
||||
}
|
||||
mergeGen, err := getMergeGenerator(appSetBaseGenerator)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
|
||||
var mergeGenerator *argoprojiov1alpha1.MergeGenerator
|
||||
if appSetBaseGenerator.Merge != nil {
|
||||
// Since nested merge generator is represented as a JSON object in the CRD, we unmarshall it back to a Go struct here.
|
||||
nestedMerge, err := argoprojiov1alpha1.ToNestedMergeGenerator(appSetBaseGenerator.Merge)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("unable to unmarshall nested merge generator: %v", err)
|
||||
}
|
||||
if nestedMerge != nil {
|
||||
mergeGenerator = nestedMerge.ToMergeGenerator()
|
||||
}
|
||||
}
|
||||
|
||||
t, err := Transform(
|
||||
@@ -97,8 +112,8 @@ func (m *MatrixGenerator) getParams(appSetBaseGenerator argoprojiov1alpha1.Appli
|
||||
SCMProvider: appSetBaseGenerator.SCMProvider,
|
||||
ClusterDecisionResource: appSetBaseGenerator.ClusterDecisionResource,
|
||||
PullRequest: appSetBaseGenerator.PullRequest,
|
||||
Matrix: matrixGen,
|
||||
Merge: mergeGen,
|
||||
Matrix: matrix,
|
||||
Merge: mergeGenerator,
|
||||
Selector: appSetBaseGenerator.Selector,
|
||||
},
|
||||
m.supportedGenerators,
|
||||
@@ -128,15 +143,11 @@ func (m *MatrixGenerator) GetRequeueAfter(appSetGenerator *argoprojiov1alpha1.Ap
|
||||
var found bool
|
||||
|
||||
for _, r := range appSetGenerator.Matrix.Generators {
|
||||
matrixGen, _ := getMatrixGenerator(r)
|
||||
mergeGen, _ := getMergeGenerator(r)
|
||||
base := &argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
List: r.List,
|
||||
Clusters: r.Clusters,
|
||||
Git: r.Git,
|
||||
PullRequest: r.PullRequest,
|
||||
Matrix: matrixGen,
|
||||
Merge: mergeGen,
|
||||
}
|
||||
generators := GetRelevantGenerators(base, m.supportedGenerators)
|
||||
|
||||
@@ -157,17 +168,6 @@ func (m *MatrixGenerator) GetRequeueAfter(appSetGenerator *argoprojiov1alpha1.Ap
|
||||
|
||||
}
|
||||
|
||||
func getMatrixGenerator(r argoprojiov1alpha1.ApplicationSetNestedGenerator) (*argoprojiov1alpha1.MatrixGenerator, error) {
|
||||
if r.Matrix == nil {
|
||||
return nil, nil
|
||||
}
|
||||
matrix, err := argoprojiov1alpha1.ToNestedMatrixGenerator(r.Matrix)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return matrix.ToMatrixGenerator(), nil
|
||||
}
|
||||
|
||||
func (m *MatrixGenerator) GetTemplate(appSetGenerator *argoprojiov1alpha1.ApplicationSetGenerator) *argoprojiov1alpha1.ApplicationSetTemplate {
|
||||
return &appSetGenerator.Matrix.Template
|
||||
}
|
||||
|
||||
@@ -5,7 +5,6 @@ import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
corev1 "k8s.io/api/core/v1"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
@@ -17,7 +16,6 @@ import (
|
||||
"github.com/stretchr/testify/mock"
|
||||
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
|
||||
|
||||
testutils "github.com/argoproj/argo-cd/v2/applicationset/utils/test"
|
||||
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
)
|
||||
|
||||
@@ -837,174 +835,6 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestMatrixGenerateListElementsYaml(t *testing.T) {
|
||||
|
||||
gitGenerator := &argoprojiov1alpha1.GitGenerator{
|
||||
RepoURL: "RepoURL",
|
||||
Revision: "Revision",
|
||||
Files: []argoprojiov1alpha1.GitFileGeneratorItem{
|
||||
{Path: "config.yaml"},
|
||||
},
|
||||
}
|
||||
|
||||
listGenerator := &argoprojiov1alpha1.ListGenerator{
|
||||
Elements: []apiextensionsv1.JSON{},
|
||||
ElementsYaml: "{{ .foo.bar | toJson }}",
|
||||
}
|
||||
|
||||
testCases := []struct {
|
||||
name string
|
||||
baseGenerators []argoprojiov1alpha1.ApplicationSetNestedGenerator
|
||||
expectedErr error
|
||||
expected []map[string]interface{}
|
||||
}{
|
||||
{
|
||||
name: "happy flow - generate params",
|
||||
baseGenerators: []argoprojiov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
Git: gitGenerator,
|
||||
},
|
||||
{
|
||||
List: listGenerator,
|
||||
},
|
||||
},
|
||||
expected: []map[string]interface{}{
|
||||
{
|
||||
"chart": "a",
|
||||
"version": "1",
|
||||
"foo": map[string]interface{}{
|
||||
"bar": []interface{}{
|
||||
map[string]interface{}{
|
||||
"chart": "a",
|
||||
"version": "1",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"chart": "b",
|
||||
"version": "2",
|
||||
},
|
||||
},
|
||||
},
|
||||
"path": map[string]interface{}{
|
||||
"basename": "dir",
|
||||
"basenameNormalized": "dir",
|
||||
"filename": "file_name.yaml",
|
||||
"filenameNormalized": "file-name.yaml",
|
||||
"path": "path/dir",
|
||||
"segments": []string {
|
||||
"path",
|
||||
"dir",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
"chart": "b",
|
||||
"version": "2",
|
||||
"foo": map[string]interface{}{
|
||||
"bar": []interface{}{
|
||||
map[string]interface{}{
|
||||
"chart": "a",
|
||||
"version": "1",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"chart": "b",
|
||||
"version": "2",
|
||||
},
|
||||
},
|
||||
},
|
||||
"path": map[string]interface{}{
|
||||
"basename": "dir",
|
||||
"basenameNormalized": "dir",
|
||||
"filename": "file_name.yaml",
|
||||
"filenameNormalized": "file-name.yaml",
|
||||
"path": "path/dir",
|
||||
"segments": []string {
|
||||
"path",
|
||||
"dir",
|
||||
},
|
||||
},
|
||||
},
|
||||
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, testCase := range testCases {
|
||||
testCaseCopy := testCase // Since tests may run in parallel
|
||||
|
||||
t.Run(testCaseCopy.name, func(t *testing.T) {
|
||||
genMock := &generatorMock{}
|
||||
appSet := &argoprojiov1alpha1.ApplicationSet{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "set",
|
||||
},
|
||||
Spec: argoprojiov1alpha1.ApplicationSetSpec{
|
||||
GoTemplate: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, g := range testCaseCopy.baseGenerators {
|
||||
|
||||
gitGeneratorSpec := argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
Git: g.Git,
|
||||
List: g.List,
|
||||
}
|
||||
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{{
|
||||
"foo": map[string]interface{}{
|
||||
"bar": []interface{}{
|
||||
map[string]interface{}{
|
||||
"chart": "a",
|
||||
"version": "1",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"chart": "b",
|
||||
"version": "2",
|
||||
},
|
||||
},
|
||||
},
|
||||
"path": map[string]interface{}{
|
||||
"basename": "dir",
|
||||
"basenameNormalized": "dir",
|
||||
"filename": "file_name.yaml",
|
||||
"filenameNormalized": "file-name.yaml",
|
||||
"path": "path/dir",
|
||||
"segments": []string {
|
||||
"path",
|
||||
"dir",
|
||||
},
|
||||
},
|
||||
|
||||
}}, nil)
|
||||
genMock.On("GetTemplate", &gitGeneratorSpec).
|
||||
Return(&argoprojiov1alpha1.ApplicationSetTemplate{})
|
||||
|
||||
}
|
||||
|
||||
var matrixGenerator = NewMatrixGenerator(
|
||||
map[string]Generator{
|
||||
"Git": genMock,
|
||||
"List": &ListGenerator{},
|
||||
},
|
||||
)
|
||||
|
||||
got, err := matrixGenerator.GenerateParams(&argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
Matrix: &argoprojiov1alpha1.MatrixGenerator{
|
||||
Generators: testCaseCopy.baseGenerators,
|
||||
Template: argoprojiov1alpha1.ApplicationSetTemplate{},
|
||||
},
|
||||
}, appSet)
|
||||
|
||||
if testCaseCopy.expectedErr != nil {
|
||||
assert.ErrorIs(t, err, testCaseCopy.expectedErr)
|
||||
} else {
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, testCaseCopy.expected, got)
|
||||
}
|
||||
|
||||
})
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
type generatorMock struct {
|
||||
mock.Mock
|
||||
}
|
||||
@@ -1027,72 +857,3 @@ func (g *generatorMock) GetRequeueAfter(appSetGenerator *argoprojiov1alpha1.Appl
|
||||
return args.Get(0).(time.Duration)
|
||||
|
||||
}
|
||||
|
||||
func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
|
||||
// Given a matrix generator over a list generator and a git files generator, the nested git files generator should
|
||||
// be treated as a files generator, and it should produce parameters.
|
||||
|
||||
// This tests for a specific bug where a nested git files generator was being treated as a directory generator. This
|
||||
// happened because, when the matrix generator was being processed, the nested git files generator was being
|
||||
// interpolated by the deeplyReplace function. That function cannot differentiate between a nil slice and an empty
|
||||
// slice. So it was replacing the `Directories` field with an empty slice, which the ApplicationSet controller
|
||||
// interpreted as meaning this was a directory generator, not a files generator.
|
||||
|
||||
// Now instead of checking for nil, we check whether the field is a non-empty slice. This test prevents a regression
|
||||
// of that bug.
|
||||
|
||||
listGeneratorMock := &generatorMock{}
|
||||
listGeneratorMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), mock.AnythingOfType("*v1alpha1.ApplicationSet")).Return([]map[string]interface{}{
|
||||
{"some": "value"},
|
||||
}, nil)
|
||||
listGeneratorMock.On("GetTemplate", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator")).Return(&argoprojiov1alpha1.ApplicationSetTemplate{})
|
||||
|
||||
gitGeneratorSpec := &argoprojiov1alpha1.GitGenerator{
|
||||
RepoURL: "https://git.example.com",
|
||||
Files: []argoprojiov1alpha1.GitFileGeneratorItem{
|
||||
{Path: "some/path.json"},
|
||||
},
|
||||
}
|
||||
|
||||
repoServiceMock := testutils.ArgoCDServiceMock{Mock: &mock.Mock{}}
|
||||
repoServiceMock.Mock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(map[string][]byte{
|
||||
"some/path.json": []byte("test: content"),
|
||||
}, nil)
|
||||
gitGenerator := NewGitGenerator(repoServiceMock)
|
||||
|
||||
matrixGenerator := NewMatrixGenerator(map[string]Generator{
|
||||
"List": listGeneratorMock,
|
||||
"Git": gitGenerator,
|
||||
})
|
||||
|
||||
matrixGeneratorSpec := &argoprojiov1alpha1.MatrixGenerator{
|
||||
Generators: []argoprojiov1alpha1.ApplicationSetNestedGenerator{
|
||||
{
|
||||
List: &argoprojiov1alpha1.ListGenerator{
|
||||
Elements: []apiextensionsv1.JSON{
|
||||
{
|
||||
Raw: []byte(`{"some": "value"}`),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Git: gitGeneratorSpec,
|
||||
},
|
||||
},
|
||||
}
|
||||
params, err := matrixGenerator.GenerateParams(&argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
Matrix: matrixGeneratorSpec,
|
||||
}, &argoprojiov1alpha1.ApplicationSet{})
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, []map[string]interface{}{{
|
||||
"path": "some",
|
||||
"path.basename": "some",
|
||||
"path.basenameNormalized": "some",
|
||||
"path.filename": "path.json",
|
||||
"path.filenameNormalized": "path.json",
|
||||
"path[0]": "some",
|
||||
"some": "value",
|
||||
"test": "content",
|
||||
}}, params)
|
||||
}
|
||||
|
||||
@@ -137,13 +137,27 @@ func getParamSetsByMergeKey(mergeKeys []string, paramSets []map[string]interface
|
||||
|
||||
// getParams get the parameters generated by this generator.
|
||||
func (m *MergeGenerator) getParams(appSetBaseGenerator argoprojiov1alpha1.ApplicationSetNestedGenerator, appSet *argoprojiov1alpha1.ApplicationSet) ([]map[string]interface{}, error) {
|
||||
matrixGen, err := getMatrixGenerator(appSetBaseGenerator)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
|
||||
var matrix *argoprojiov1alpha1.MatrixGenerator
|
||||
if appSetBaseGenerator.Matrix != nil {
|
||||
nestedMatrix, err := argoprojiov1alpha1.ToNestedMatrixGenerator(appSetBaseGenerator.Matrix)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if nestedMatrix != nil {
|
||||
matrix = nestedMatrix.ToMatrixGenerator()
|
||||
}
|
||||
}
|
||||
mergeGen, err := getMergeGenerator(appSetBaseGenerator)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
|
||||
var mergeGenerator *argoprojiov1alpha1.MergeGenerator
|
||||
if appSetBaseGenerator.Merge != nil {
|
||||
nestedMerge, err := argoprojiov1alpha1.ToNestedMergeGenerator(appSetBaseGenerator.Merge)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if nestedMerge != nil {
|
||||
mergeGenerator = nestedMerge.ToMergeGenerator()
|
||||
}
|
||||
}
|
||||
|
||||
t, err := Transform(
|
||||
@@ -154,8 +168,8 @@ func (m *MergeGenerator) getParams(appSetBaseGenerator argoprojiov1alpha1.Applic
|
||||
SCMProvider: appSetBaseGenerator.SCMProvider,
|
||||
ClusterDecisionResource: appSetBaseGenerator.ClusterDecisionResource,
|
||||
PullRequest: appSetBaseGenerator.PullRequest,
|
||||
Matrix: matrixGen,
|
||||
Merge: mergeGen,
|
||||
Matrix: matrix,
|
||||
Merge: mergeGenerator,
|
||||
Selector: appSetBaseGenerator.Selector,
|
||||
},
|
||||
m.supportedGenerators,
|
||||
@@ -183,15 +197,10 @@ func (m *MergeGenerator) GetRequeueAfter(appSetGenerator *argoprojiov1alpha1.App
|
||||
var found bool
|
||||
|
||||
for _, r := range appSetGenerator.Merge.Generators {
|
||||
matrixGen, _ := getMatrixGenerator(r)
|
||||
mergeGen, _ := getMergeGenerator(r)
|
||||
base := &argoprojiov1alpha1.ApplicationSetGenerator{
|
||||
List: r.List,
|
||||
Clusters: r.Clusters,
|
||||
Git: r.Git,
|
||||
PullRequest: r.PullRequest,
|
||||
Matrix: matrixGen,
|
||||
Merge: mergeGen,
|
||||
List: r.List,
|
||||
Clusters: r.Clusters,
|
||||
Git: r.Git,
|
||||
}
|
||||
generators := GetRelevantGenerators(base, m.supportedGenerators)
|
||||
|
||||
@@ -212,17 +221,6 @@ func (m *MergeGenerator) GetRequeueAfter(appSetGenerator *argoprojiov1alpha1.App
|
||||
|
||||
}
|
||||
|
||||
func getMergeGenerator(r argoprojiov1alpha1.ApplicationSetNestedGenerator) (*argoprojiov1alpha1.MergeGenerator, error) {
|
||||
if r.Merge == nil {
|
||||
return nil, nil
|
||||
}
|
||||
merge, err := argoprojiov1alpha1.ToNestedMergeGenerator(r.Merge)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return merge.ToMergeGenerator(), nil
|
||||
}
|
||||
|
||||
// GetTemplate gets the Template field for the MergeGenerator.
|
||||
func (m *MergeGenerator) GetTemplate(appSetGenerator *argoprojiov1alpha1.ApplicationSetGenerator) *argoprojiov1alpha1.ApplicationSetTemplate {
|
||||
return &appSetGenerator.Merge.Template
|
||||
|
||||
@@ -158,7 +158,7 @@ func TestListPullRequestBasicAuth(t *testing.T) {
|
||||
|
||||
func TestListResponseError(t *testing.T) {
|
||||
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
w.WriteHeader(http.StatusInternalServerError)
|
||||
w.WriteHeader(500)
|
||||
}))
|
||||
defer ts.Close()
|
||||
svc, _ := NewBitbucketServiceNoAuth(context.Background(), ts.URL, "PROJECT", "REPO")
|
||||
|
||||
@@ -29,7 +29,7 @@ func (c *ExtendedClient) GetContents(repo *Repository, path string) (bool, error
|
||||
urlStr += fmt.Sprintf("/repositories/%s/%s/src/%s/%s?format=meta", c.owner, repo.Repository, repo.SHA, path)
|
||||
body := strings.NewReader("")
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, urlStr, body)
|
||||
req, err := http.NewRequest("GET", urlStr, body)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
@@ -347,7 +347,7 @@ func TestGetBranchesMissingDefault(t *testing.T) {
|
||||
assert.Empty(t, r.Header.Get("Authorization"))
|
||||
switch r.RequestURI {
|
||||
case "/rest/api/1.0/projects/PROJECT/repos/REPO/branches/default":
|
||||
http.Error(w, "Not found", http.StatusNotFound)
|
||||
http.Error(w, "Not found", 404)
|
||||
}
|
||||
defaultHandler(t)(w, r)
|
||||
}))
|
||||
@@ -370,7 +370,7 @@ func TestGetBranchesErrorDefaultBranch(t *testing.T) {
|
||||
assert.Empty(t, r.Header.Get("Authorization"))
|
||||
switch r.RequestURI {
|
||||
case "/rest/api/1.0/projects/PROJECT/repos/REPO/branches/default":
|
||||
http.Error(w, "Internal server error", http.StatusInternalServerError)
|
||||
http.Error(w, "Internal server error", 500)
|
||||
}
|
||||
defaultHandler(t)(w, r)
|
||||
}))
|
||||
@@ -442,7 +442,7 @@ func TestListReposMissingDefaultBranch(t *testing.T) {
|
||||
assert.Empty(t, r.Header.Get("Authorization"))
|
||||
switch r.RequestURI {
|
||||
case "/rest/api/1.0/projects/PROJECT/repos/REPO/branches/default":
|
||||
http.Error(w, "Not found", http.StatusNotFound)
|
||||
http.Error(w, "Not found", 404)
|
||||
}
|
||||
defaultHandler(t)(w, r)
|
||||
}))
|
||||
@@ -459,7 +459,7 @@ func TestListReposErrorDefaultBranch(t *testing.T) {
|
||||
assert.Empty(t, r.Header.Get("Authorization"))
|
||||
switch r.RequestURI {
|
||||
case "/rest/api/1.0/projects/PROJECT/repos/REPO/branches/default":
|
||||
http.Error(w, "Internal server error", http.StatusInternalServerError)
|
||||
http.Error(w, "Internal server error", 500)
|
||||
}
|
||||
defaultHandler(t)(w, r)
|
||||
}))
|
||||
@@ -516,17 +516,17 @@ func TestBitbucketServerHasPath(t *testing.T) {
|
||||
_, err = io.WriteString(w, `{"type":"FILE"}`)
|
||||
|
||||
case "/rest/api/1.0/projects/PROJECT/repos/REPO/browse/anotherpkg/missing.txt?at=main&limit=100&type=true":
|
||||
http.Error(w, "The path \"anotherpkg/missing.txt\" does not exist at revision \"main\"", http.StatusNotFound)
|
||||
http.Error(w, "The path \"anotherpkg/missing.txt\" does not exist at revision \"main\"", 404)
|
||||
case "/rest/api/1.0/projects/PROJECT/repos/REPO/browse/notathing?at=main&limit=100&type=true":
|
||||
http.Error(w, "The path \"notathing\" does not exist at revision \"main\"", http.StatusNotFound)
|
||||
http.Error(w, "The path \"notathing\" does not exist at revision \"main\"", 404)
|
||||
|
||||
case "/rest/api/1.0/projects/PROJECT/repos/REPO/browse/return-redirect?at=main&limit=100&type=true":
|
||||
http.Redirect(w, r, "http://"+r.Host+"/rest/api/1.0/projects/PROJECT/repos/REPO/browse/redirected?at=main&limit=100&type=true", http.StatusMovedPermanently)
|
||||
http.Redirect(w, r, "http://"+r.Host+"/rest/api/1.0/projects/PROJECT/repos/REPO/browse/redirected?at=main&limit=100&type=true", 301)
|
||||
case "/rest/api/1.0/projects/PROJECT/repos/REPO/browse/redirected?at=main&limit=100&type=true":
|
||||
_, err = io.WriteString(w, `{"type":"DIRECTORY"}`)
|
||||
|
||||
case "/rest/api/1.0/projects/PROJECT/repos/REPO/browse/unauthorized-response?at=main&limit=100&type=true":
|
||||
http.Error(w, "Authentication failed", http.StatusUnauthorized)
|
||||
http.Error(w, "Authentication failed", 401)
|
||||
|
||||
default:
|
||||
t.Fail()
|
||||
|
||||
@@ -47,7 +47,7 @@ func NewGiteaProvider(ctx context.Context, owner, token, url string, allBranches
|
||||
func (g *GiteaProvider) GetBranches(ctx context.Context, repo *Repository) ([]*Repository, error) {
|
||||
if !g.allBranches {
|
||||
branch, status, err := g.client.GetRepoBranch(g.owner, repo.Repository, repo.Branch)
|
||||
if status.StatusCode == http.StatusNotFound {
|
||||
if status.StatusCode == 404 {
|
||||
return nil, fmt.Errorf("got 404 while getting default branch %q for repo %q - check your repo config: %w", repo.Branch, repo.Repository, err)
|
||||
}
|
||||
if err != nil {
|
||||
|
||||
@@ -247,7 +247,7 @@ func giteaMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
|
||||
_, err := io.WriteString(w, testdata.ReposGiteaGoSdkContentsGiteaResponse)
|
||||
require.NoError(t, err)
|
||||
case "/api/v1/repos/gitea/go-sdk/contents/notathing?ref=master":
|
||||
w.WriteHeader(http.StatusNotFound)
|
||||
w.WriteHeader(404)
|
||||
_, err := io.WriteString(w, `{"errors":["object does not exist [id: , rel_path: notathing]"],"message":"GetContentsOrList","url":"https://gitea.com/api/swagger"}`)
|
||||
require.NoError(t, err)
|
||||
default:
|
||||
|
||||
@@ -4,7 +4,6 @@ import (
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"os"
|
||||
|
||||
"github.com/google/go-github/v35/github"
|
||||
@@ -124,7 +123,7 @@ func (g *GithubProvider) listBranches(ctx context.Context, repo *Repository) ([]
|
||||
if err != nil {
|
||||
var githubErrorResponse *github.ErrorResponse
|
||||
if errors.As(err, &githubErrorResponse) {
|
||||
if githubErrorResponse.Response.StatusCode == http.StatusNotFound {
|
||||
if githubErrorResponse.Response.StatusCode == 404 {
|
||||
// Default branch doesn't exist, so the repo is empty.
|
||||
return []github.Branch{}, nil
|
||||
}
|
||||
|
||||
@@ -196,7 +196,7 @@ func githubMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
|
||||
t.Fail()
|
||||
}
|
||||
default:
|
||||
w.WriteHeader(http.StatusNotFound)
|
||||
w.WriteHeader(404)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -4,7 +4,6 @@ import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"net/http"
|
||||
pathpkg "path"
|
||||
|
||||
gitlab "github.com/xanzy/go-gitlab"
|
||||
@@ -145,11 +144,7 @@ func (g *GitlabProvider) listBranches(_ context.Context, repo *Repository) ([]gi
|
||||
branches := []gitlab.Branch{}
|
||||
// If we don't specifically want to query for all branches, just use the default branch and call it a day.
|
||||
if !g.allBranches {
|
||||
gitlabBranch, resp, err := g.client.Branches.GetBranch(repo.RepositoryId, repo.Branch, nil)
|
||||
// 404s are not an error here, just a normal false.
|
||||
if resp != nil && resp.StatusCode == http.StatusNotFound {
|
||||
return []gitlab.Branch{}, nil
|
||||
}
|
||||
gitlabBranch, _, err := g.client.Branches.GetBranch(repo.RepositoryId, repo.Branch, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -162,10 +157,6 @@ func (g *GitlabProvider) listBranches(_ context.Context, repo *Repository) ([]gi
|
||||
}
|
||||
for {
|
||||
gitlabBranches, resp, err := g.client.Branches.ListBranches(repo.RepositoryId, opt)
|
||||
// 404s are not an error here, just a normal false.
|
||||
if resp != nil && resp.StatusCode == http.StatusNotFound {
|
||||
return []gitlab.Branch{}, nil
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -274,8 +274,6 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
|
||||
if err != nil {
|
||||
t.Fail()
|
||||
}
|
||||
case "/api/v4/projects/27084533/repository/branches/foo":
|
||||
w.WriteHeader(http.StatusNotFound)
|
||||
default:
|
||||
_, err := io.WriteString(w, `[]`)
|
||||
if err != nil {
|
||||
@@ -393,29 +391,3 @@ func TestGitlabHasPath(t *testing.T) {
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestGitlabGetBranches(t *testing.T) {
|
||||
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
gitlabMockHandler(t)(w, r)
|
||||
}))
|
||||
host, _ := NewGitlabProvider(context.Background(), "test-argocd-proton", "", ts.URL, false, true)
|
||||
|
||||
repo := &Repository{
|
||||
RepositoryId: 27084533,
|
||||
Branch: "master",
|
||||
}
|
||||
t.Run("branch exists", func(t *testing.T) {
|
||||
repos, err := host.GetBranches(context.Background(), repo)
|
||||
assert.Nil(t, err)
|
||||
assert.Equal(t, repos[0].Branch, "master")
|
||||
})
|
||||
|
||||
repo2 := &Repository{
|
||||
RepositoryId: 27084533,
|
||||
Branch: "foo",
|
||||
}
|
||||
t.Run("unknown branch", func(t *testing.T) {
|
||||
_, err := host.GetBranches(context.Background(), repo2)
|
||||
assert.NoError(t, err)
|
||||
})
|
||||
}
|
||||
|
||||
@@ -1,34 +0,0 @@
|
||||
package test
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/stretchr/testify/mock"
|
||||
)
|
||||
|
||||
type ArgoCDServiceMock struct {
|
||||
Mock *mock.Mock
|
||||
}
|
||||
|
||||
func (a ArgoCDServiceMock) GetApps(ctx context.Context, repoURL string, revision string) ([]string, error) {
|
||||
args := a.Mock.Called(ctx, repoURL, revision)
|
||||
|
||||
return args.Get(0).([]string), args.Error(1)
|
||||
}
|
||||
|
||||
func (a ArgoCDServiceMock) GetFiles(ctx context.Context, repoURL string, revision string, pattern string) (map[string][]byte, error) {
|
||||
args := a.Mock.Called(ctx, repoURL, revision, pattern)
|
||||
|
||||
return args.Get(0).(map[string][]byte), args.Error(1)
|
||||
}
|
||||
|
||||
func (a ArgoCDServiceMock) GetFileContent(ctx context.Context, repoURL string, revision string, path string) ([]byte, error) {
|
||||
args := a.Mock.Called(ctx, repoURL, revision, path)
|
||||
|
||||
return args.Get(0).([]byte), args.Error(1)
|
||||
}
|
||||
|
||||
func (a ArgoCDServiceMock) GetDirectories(ctx context.Context, repoURL string, revision string) ([]string, error) {
|
||||
args := a.Mock.Called(ctx, repoURL, revision)
|
||||
return args.Get(0).([]string), args.Error(1)
|
||||
}
|
||||
@@ -174,7 +174,7 @@ func (r *Render) deeplyReplace(copy, original reflect.Value, replaceMap map[stri
|
||||
|
||||
func (r *Render) RenderTemplateParams(tmpl *argoappsv1.Application, syncPolicy *argoappsv1.ApplicationSetSyncPolicy, params map[string]interface{}, useGoTemplate bool) (*argoappsv1.Application, error) {
|
||||
if tmpl == nil {
|
||||
return nil, fmt.Errorf("application template is empty")
|
||||
return nil, fmt.Errorf("application template is empty ")
|
||||
}
|
||||
|
||||
if len(params) == 0 {
|
||||
@@ -204,27 +204,6 @@ func (r *Render) RenderTemplateParams(tmpl *argoappsv1.Application, syncPolicy *
|
||||
return replacedTmpl, nil
|
||||
}
|
||||
|
||||
func (r *Render) RenderGeneratorParams(gen *argoappsv1.ApplicationSetGenerator, params map[string]interface{}, useGoTemplate bool) (*argoappsv1.ApplicationSetGenerator, error) {
|
||||
if gen == nil {
|
||||
return nil, fmt.Errorf("generator is empty")
|
||||
}
|
||||
|
||||
if len(params) == 0 {
|
||||
return gen, nil
|
||||
}
|
||||
|
||||
original := reflect.ValueOf(gen)
|
||||
copy := reflect.New(original.Type()).Elem()
|
||||
|
||||
if err := r.deeplyReplace(copy, original, params, useGoTemplate); err != nil {
|
||||
return nil, fmt.Errorf("failed to replace parameters in generator: %w", err)
|
||||
}
|
||||
|
||||
replacedGen := copy.Interface().(*argoappsv1.ApplicationSetGenerator)
|
||||
|
||||
return replacedGen, nil
|
||||
}
|
||||
|
||||
var isTemplatedRegex = regexp.MustCompile(".*{{.*}}.*")
|
||||
|
||||
// Replace executes basic string substitution of a template with replacement values.
|
||||
|
||||
@@ -133,7 +133,7 @@ func (h *WebhookHandler) Handler(w http.ResponseWriter, r *http.Request) {
|
||||
if err != nil {
|
||||
log.Infof("Webhook processing failed: %s", err)
|
||||
status := http.StatusBadRequest
|
||||
if r.Method != http.MethodPost {
|
||||
if r.Method != "POST" {
|
||||
status = http.StatusMethodNotAllowed
|
||||
}
|
||||
http.Error(w, fmt.Sprintf("Webhook processing failed: %s", html.EscapeString(err.Error())), status)
|
||||
|
||||
@@ -177,7 +177,7 @@ func TestWebhookHandler(t *testing.T) {
|
||||
h, err := NewWebhookHandler(namespace, set, fc, mockGenerators())
|
||||
assert.Nil(t, err)
|
||||
|
||||
req := httptest.NewRequest(http.MethodPost, "/api/webhook", nil)
|
||||
req := httptest.NewRequest("POST", "/api/webhook", nil)
|
||||
req.Header.Set(test.headerKey, test.headerValue)
|
||||
eventJSON, err := os.ReadFile(filepath.Join("testdata", test.payloadFile))
|
||||
assert.NoError(t, err)
|
||||
|
||||
@@ -271,16 +271,6 @@
|
||||
"description": "the application's namespace.",
|
||||
"name": "appNamespace",
|
||||
"in": "query"
|
||||
},
|
||||
{
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "string"
|
||||
},
|
||||
"collectionFormat": "multi",
|
||||
"description": "the project names to restrict returned list applications (legacy name for backwards-compatibility).",
|
||||
"name": "project",
|
||||
"in": "query"
|
||||
}
|
||||
],
|
||||
"responses": {
|
||||
@@ -595,16 +585,6 @@
|
||||
"description": "the application's namespace.",
|
||||
"name": "appNamespace",
|
||||
"in": "query"
|
||||
},
|
||||
{
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "string"
|
||||
},
|
||||
"collectionFormat": "multi",
|
||||
"description": "the project names to restrict returned list applications (legacy name for backwards-compatibility).",
|
||||
"name": "project",
|
||||
"in": "query"
|
||||
}
|
||||
],
|
||||
"responses": {
|
||||
@@ -3690,16 +3670,6 @@
|
||||
"description": "the application's namespace.",
|
||||
"name": "appNamespace",
|
||||
"in": "query"
|
||||
},
|
||||
{
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "string"
|
||||
},
|
||||
"collectionFormat": "multi",
|
||||
"description": "the project names to restrict returned list applications (legacy name for backwards-compatibility).",
|
||||
"name": "project",
|
||||
"in": "query"
|
||||
}
|
||||
],
|
||||
"responses": {
|
||||
@@ -5705,17 +5675,6 @@
|
||||
}
|
||||
}
|
||||
},
|
||||
"v1alpha1ApplicationPreservedFields": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"annotations": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"v1alpha1ApplicationSet": {
|
||||
"type": "object",
|
||||
"title": "ApplicationSet is a set of Application resources\n+genclient\n+genclient:noStatus\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n+kubebuilder:resource:path=applicationsets,shortName=appset;appsets\n+kubebuilder:subresource:status",
|
||||
@@ -5900,9 +5859,6 @@
|
||||
"goTemplate": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"preservedFields": {
|
||||
"$ref": "#/definitions/v1alpha1ApplicationPreservedFields"
|
||||
},
|
||||
"strategy": {
|
||||
"$ref": "#/definitions/v1alpha1ApplicationSetStrategy"
|
||||
},
|
||||
@@ -6145,10 +6101,6 @@
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
"commonAnnotationsEnvsubst": {
|
||||
"type": "boolean",
|
||||
"title": "CommonAnnotationsEnvsubst specifies whether to apply env variables substitution for annotation values"
|
||||
},
|
||||
"commonLabels": {
|
||||
"type": "object",
|
||||
"title": "CommonLabels is a list of additional labels to add to rendered manifests",
|
||||
@@ -6179,17 +6131,6 @@
|
||||
"type": "string",
|
||||
"title": "NameSuffix is a suffix appended to resources for Kustomize apps"
|
||||
},
|
||||
"namespace": {
|
||||
"type": "string",
|
||||
"title": "Namespace sets the namespace that Kustomize adds to all resources"
|
||||
},
|
||||
"replicas": {
|
||||
"type": "array",
|
||||
"title": "Replicas is a list of Kustomize Replicas override specifications",
|
||||
"items": {
|
||||
"$ref": "#/definitions/v1alpha1KustomizeReplica"
|
||||
}
|
||||
},
|
||||
"version": {
|
||||
"type": "string",
|
||||
"title": "Version controls which version of Kustomize to use for rendering manifests"
|
||||
@@ -7022,18 +6963,6 @@
|
||||
}
|
||||
}
|
||||
},
|
||||
"v1alpha1KustomizeReplica": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"count": {
|
||||
"$ref": "#/definitions/intstrIntOrString"
|
||||
},
|
||||
"name": {
|
||||
"type": "string",
|
||||
"title": "Name of Deployment or StatefulSet"
|
||||
}
|
||||
}
|
||||
},
|
||||
"v1alpha1ListGenerator": {
|
||||
"type": "object",
|
||||
"title": "ListGenerator include items info",
|
||||
@@ -7044,9 +6973,6 @@
|
||||
"$ref": "#/definitions/v1JSON"
|
||||
}
|
||||
},
|
||||
"elementsYaml": {
|
||||
"type": "string"
|
||||
},
|
||||
"template": {
|
||||
"$ref": "#/definitions/v1alpha1ApplicationSetTemplate"
|
||||
}
|
||||
|
||||
@@ -7,7 +7,7 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/argoproj/pkg/stats"
|
||||
"github.com/redis/go-redis/v9"
|
||||
"github.com/go-redis/redis/v8"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/spf13/cobra"
|
||||
"k8s.io/client-go/kubernetes"
|
||||
@@ -188,7 +188,7 @@ func NewCommand() *cobra.Command {
|
||||
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortArgoCDMetrics, "Start metrics server on given port")
|
||||
command.Flags().DurationVar(&metricsCacheExpiration, "metrics-cache-expiration", env.ParseDurationFromEnv("ARGOCD_APPLICATION_CONTROLLER_METRICS_CACHE_EXPIRATION", 0*time.Second, 0, math.MaxInt64), "Prometheus metrics cache expiration (disabled by default. e.g. 24h0m0s)")
|
||||
command.Flags().IntVar(&selfHealTimeoutSeconds, "self-heal-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_TIMEOUT_SECONDS", 5, 0, math.MaxInt32), "Specifies timeout between application self heal attempts")
|
||||
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", env.ParseInt64FromEnv("ARGOCD_APPLICATION_CONTROLLER_KUBECTL_PARALLELISM_LIMIT", 20, 0, math.MaxInt64), "Number of allowed concurrent kubectl fork/execs. Any value less than 1 means no limit.")
|
||||
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", 20, "Number of allowed concurrent kubectl fork/execs. Any value less the 1 means no limit.")
|
||||
command.Flags().BoolVar(&repoServerPlaintext, "repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Disable TLS on connections to repo server")
|
||||
command.Flags().BoolVar(&repoServerStrictTLS, "repo-server-strict-tls", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_STRICT_TLS", false), "Whether to use strict validation of the TLS cert presented by the repo server")
|
||||
command.Flags().StringSliceVar(&metricsAplicationLabels, "metrics-application-labels", []string{}, "List of Application labels that will be added to the argocd_application_labels metric")
|
||||
|
||||
@@ -30,6 +30,7 @@ import (
|
||||
"k8s.io/client-go/tools/clientcmd"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/applicationset/services"
|
||||
appsetv1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
appv1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned"
|
||||
"github.com/argoproj/argo-cd/v2/util/cli"
|
||||
@@ -59,6 +60,7 @@ func NewCommand() *cobra.Command {
|
||||
)
|
||||
scheme := runtime.NewScheme()
|
||||
_ = clientgoscheme.AddToScheme(scheme)
|
||||
_ = appsetv1alpha1.AddToScheme(scheme)
|
||||
_ = appv1alpha1.AddToScheme(scheme)
|
||||
var command = cobra.Command{
|
||||
Use: "controller",
|
||||
@@ -177,7 +179,7 @@ func NewCommand() *cobra.Command {
|
||||
KubeClientset: k8sClient,
|
||||
ArgoDB: argoCDDB,
|
||||
EnableProgressiveSyncs: enableProgressiveSyncs,
|
||||
}).SetupWithManager(mgr, enableProgressiveSyncs); err != nil {
|
||||
}).SetupWithManager(mgr); err != nil {
|
||||
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
@@ -44,14 +44,6 @@ func NewCommand() *cobra.Command {
|
||||
config, err := plugin.ReadPluginConfig(configFilePath)
|
||||
errors.CheckError(err)
|
||||
|
||||
if !config.Spec.Discover.IsDefined() {
|
||||
name := config.Metadata.Name
|
||||
if config.Spec.Version != "" {
|
||||
name = name + "-" + config.Spec.Version
|
||||
}
|
||||
log.Infof("No discovery configuration is defined for plugin %s. To use this plugin, specify %q in the Application's spec.source.plugin.name field.", config.Metadata.Name, name)
|
||||
}
|
||||
|
||||
if otlpAddress != "" {
|
||||
var closer func()
|
||||
var err error
|
||||
|
||||
@@ -9,7 +9,7 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/argoproj/pkg/stats"
|
||||
"github.com/redis/go-redis/v9"
|
||||
"github.com/go-redis/redis/v8"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/spf13/cobra"
|
||||
"google.golang.org/grpc/health/grpc_health_v1"
|
||||
|
||||
@@ -7,7 +7,7 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/argoproj/pkg/stats"
|
||||
"github.com/redis/go-redis/v9"
|
||||
"github.com/go-redis/redis/v8"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/spf13/cobra"
|
||||
"k8s.io/client-go/kubernetes"
|
||||
|
||||
@@ -17,8 +17,6 @@ import (
|
||||
"github.com/argoproj/argo-cd/v2/common"
|
||||
"github.com/argoproj/argo-cd/v2/util/errors"
|
||||
"github.com/argoproj/argo-cd/v2/util/settings"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/pkg/apis/application"
|
||||
)
|
||||
|
||||
const (
|
||||
@@ -29,9 +27,9 @@ const (
|
||||
var (
|
||||
configMapResource = schema.GroupVersionResource{Group: "", Version: "v1", Resource: "configmaps"}
|
||||
secretResource = schema.GroupVersionResource{Group: "", Version: "v1", Resource: "secrets"}
|
||||
applicationsResource = schema.GroupVersionResource{Group: application.Group, Version: "v1alpha1", Resource: application.ApplicationPlural}
|
||||
appprojectsResource = schema.GroupVersionResource{Group: application.Group, Version: "v1alpha1", Resource: application.AppProjectPlural}
|
||||
appplicationSetResource = schema.GroupVersionResource{Group: application.Group, Version: "v1alpha1", Resource: application.ApplicationSetPlural}
|
||||
applicationsResource = schema.GroupVersionResource{Group: "argoproj.io", Version: "v1alpha1", Resource: "applications"}
|
||||
appprojectsResource = schema.GroupVersionResource{Group: "argoproj.io", Version: "v1alpha1", Resource: "appprojects"}
|
||||
appplicationSetResource = schema.GroupVersionResource{Group: "argoproj.io", Version: "v1alpha1", Resource: "applicationsets"}
|
||||
)
|
||||
|
||||
// NewAdminCommand returns a new instance of an argocd command
@@ -195,11 +193,11 @@ func specsEqual(left, right unstructured.Unstructured) bool {
leftData, _, _ := unstructured.NestedMap(left.Object, "data")
rightData, _, _ := unstructured.NestedMap(right.Object, "data")
return reflect.DeepEqual(leftData, rightData)
case application.AppProjectKind:
case "AppProject":
leftSpec, _, _ := unstructured.NestedMap(left.Object, "spec")
rightSpec, _, _ := unstructured.NestedMap(right.Object, "spec")
return reflect.DeepEqual(leftSpec, rightSpec)
case application.ApplicationKind:
case "Application":
leftSpec, _, _ := unstructured.NestedMap(left.Object, "spec")
rightSpec, _, _ := unstructured.NestedMap(right.Object, "spec")
leftStatus, _, _ := unstructured.NestedMap(left.Object, "status")

@@ -17,7 +17,6 @@ import (
"k8s.io/client-go/tools/clientcmd"

"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/pkg/apis/application"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/errors"
)
@@ -177,12 +176,12 @@ func NewImportCommand() *cobra.Command {
applications, err := acdClients.applications.List(ctx, v1.ListOptions{})
errors.CheckError(err)
for _, app := range applications.Items {
pruneObjects[kube.ResourceKey{Group: application.Group, Kind: application.ApplicationKind, Name: app.GetName()}] = app
pruneObjects[kube.ResourceKey{Group: "argoproj.io", Kind: "Application", Name: app.GetName()}] = app
}
projects, err := acdClients.projects.List(ctx, v1.ListOptions{})
errors.CheckError(err)
for _, proj := range projects.Items {
pruneObjects[kube.ResourceKey{Group: application.Group, Kind: application.AppProjectKind, Name: proj.GetName()}] = proj
pruneObjects[kube.ResourceKey{Group: "argoproj.io", Kind: "AppProject", Name: proj.GetName()}] = proj
}
applicationSets, err := acdClients.applicationSets.List(ctx, v1.ListOptions{})
if apierr.IsForbidden(err) || apierr.IsNotFound(err) {
@@ -192,7 +191,7 @@ func NewImportCommand() *cobra.Command {
}
if applicationSets != nil {
for _, appSet := range applicationSets.Items {
pruneObjects[kube.ResourceKey{Group: application.Group, Kind: application.ApplicationSetKind, Name: appSet.GetName()}] = appSet
pruneObjects[kube.ResourceKey{Group: "argoproj.io", Kind: "ApplicationSet", Name: appSet.GetName()}] = appSet
}
}

@@ -210,11 +209,11 @@ func NewImportCommand() *cobra.Command {
dynClient = acdClients.secrets
case "ConfigMap":
dynClient = acdClients.configMaps
case application.AppProjectKind:
case "AppProject":
dynClient = acdClients.projects
case application.ApplicationKind:
case "Application":
dynClient = acdClients.applications
case application.ApplicationSetKind:
case "ApplicationSet":
dynClient = acdClients.applicationSets
}
if !exists {
@@ -261,9 +260,9 @@ func NewImportCommand() *cobra.Command {
switch key.Kind {
case "Secret":
dynClient = acdClients.secrets
case application.AppProjectKind:
case "AppProject":
dynClient = acdClients.projects
case application.ApplicationKind:
case "Application":
dynClient = acdClients.applications
if !dryRun {
if finalizers := liveObj.GetFinalizers(); len(finalizers) > 0 {
@@ -275,7 +274,7 @@ func NewImportCommand() *cobra.Command {
}
}
}
case application.ApplicationSetKind:
case "ApplicationSet":
dynClient = acdClients.applicationSets
default:
log.Fatalf("Unexpected kind '%s' in prune list", key.Kind)
@@ -315,7 +314,7 @@ func checkAppHasNoNeedToStopOperation(liveObj unstructured.Unstructured, stopOpe
return true
}
switch liveObj.GetKind() {
case application.ApplicationKind:
case "Application":
return liveObj.Object["operation"] == nil
}
return true
@@ -354,9 +353,9 @@ func updateLive(bak, live *unstructured.Unstructured, stopOperation bool) *unstr
switch live.GetKind() {
case "Secret", "ConfigMap":
newLive.Object["data"] = bak.Object["data"]
case application.AppProjectKind:
case "AppProject":
newLive.Object["spec"] = bak.Object["spec"]
case application.ApplicationKind:
case "Application":
newLive.Object["spec"] = bak.Object["spec"]
if _, ok := bak.Object["status"]; ok {
newLive.Object["status"] = bak.Object["status"]

@@ -11,7 +11,7 @@ import (
"time"

"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/redis/go-redis/v9"
"github.com/go-redis/redis/v8"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -541,11 +541,6 @@ func NewGenClusterConfigCommand(pathOpts *clientcmd.PathOptions) *cobra.Command
return
}

if clusterOpts.InCluster && clusterOpts.ClusterEndpoint != "" {
log.Fatal("Can only use one of --in-cluster or --cluster-endpoint")
return
}

overrides := clientcmd.ConfigOverrides{
Context: *clstContext,
}
@@ -585,13 +580,9 @@ func NewGenClusterConfigCommand(pathOpts *clientcmd.PathOptions) *cobra.Command
errors.CheckError(err)

clst := cmdutil.NewCluster(contextName, clusterOpts.Namespaces, clusterOpts.ClusterResources, conf, bearerToken, awsAuthConf, execProviderConf, labelsMap, annotationsMap)
if clusterOpts.InClusterEndpoint() {
if clusterOpts.InCluster {
clst.Server = argoappv1.KubernetesInternalAPIServerAddr
}
if clusterOpts.ClusterEndpoint == string(cmdutil.KubePublicEndpoint) {
// Ignore `kube-public` cluster endpoints, since this command is intended to run without invoking any network connections.
log.Warn("kube-public cluster endpoints are not supported. Falling back to the endpoint listed in the kubconfig context.")
}
if clusterOpts.Shard >= 0 {
clst.Shard = &clusterOpts.Shard
}

@@ -9,16 +9,13 @@ import (
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/initialize"
"github.com/argoproj/argo-cd/v2/common"
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
"github.com/argoproj/argo-cd/v2/util/cache"
"github.com/argoproj/argo-cd/v2/util/env"
"github.com/argoproj/argo-cd/v2/util/errors"
)

func NewDashboardCommand() *cobra.Command {
var (
port int
address string
compressionStr string
port int
address string
)
cmd := &cobra.Command{
Use: "dashboard",
@@ -26,9 +23,7 @@ func NewDashboardCommand() *cobra.Command {
Run: func(cmd *cobra.Command, args []string) {
ctx := cmd.Context()

compression, err := cache.CompressionTypeFromString(compressionStr)
errors.CheckError(err)
errors.CheckError(headless.StartLocalServer(ctx, &argocdclient.ClientOptions{Core: true}, initialize.RetrieveContextIfChanged(cmd.Flag("context")), &port, &address, compression))
errors.CheckError(headless.StartLocalServer(ctx, &argocdclient.ClientOptions{Core: true}, initialize.RetrieveContextIfChanged(cmd.Flag("context")), &port, &address))
println(fmt.Sprintf("Argo CD UI is available at http://%s:%d", address, port))
<-ctx.Done()
},
@@ -36,6 +31,5 @@ func NewDashboardCommand() *cobra.Command {
initialize.InitCommand(cmd)
cmd.Flags().IntVar(&port, "port", common.DefaultPortAPIServer, "Listen on given port")
cmd.Flags().StringVar(&address, "address", common.DefaultAddressAPIServer, "Listen on given address")
cmd.Flags().StringVar(&compressionStr, "redis-compress", env.StringFromEnv("REDIS_COMPRESSION", string(cache.RedisCompressionNone)), "Enable this if the application controller is configured with redis compression enabled. (possible values: none, gzip)")
return cmd
}

@@ -15,13 +15,12 @@ import (
settings "github.com/argoproj/argo-cd/v2/util/notification/settings"
"github.com/argoproj/argo-cd/v2/util/tls"

"github.com/argoproj/argo-cd/v2/pkg/apis/application"
"github.com/argoproj/notifications-engine/pkg/cmd"
"github.com/spf13/cobra"
)

var (
applications = schema.GroupVersionResource{Group: application.Group, Version: "v1alpha1", Resource: application.ApplicationPlural}
applications = schema.GroupVersionResource{Group: "argoproj.io", Version: "v1alpha1", Resource: "applications"}
)

func NewNotificationsCommand() *cobra.Command {

@@ -28,8 +28,6 @@ import (
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
// load the azure plugin (required to authenticate with AKS clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/azure"

"github.com/argoproj/argo-cd/v2/pkg/apis/application"
)

// NewProjectAllowListGenCommand generates a project from clusterRole
@@ -153,7 +151,7 @@ func generateProjectAllowList(serverResources []*metav1.APIResourceList, cluster
}
globalProj := v1alpha1.AppProject{
TypeMeta: metav1.TypeMeta{
Kind: application.AppProjectKind,
Kind: "AppProject",
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{Name: projName},

@@ -33,6 +33,7 @@ import (

"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/headless"
cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
argocommon "github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/controller"
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
"github.com/argoproj/argo-cd/v2/pkg/apiclient/application"
@@ -149,6 +150,9 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
c.HelpFunc()(c, args)
os.Exit(1)
}
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
}
if appNamespace != "" {
app.Namespace = appNamespace
}
@@ -165,9 +169,7 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.

// Get app before creating to see if it is being updated or no change
existing, err := appIf.Get(ctx, &applicationpkg.ApplicationQuery{Name: &app.Name})
unwrappedError := grpc.UnwrapGRPCStatus(err).Code()
// As part of the fix for CVE-2022-41354, the API will return Permission Denied when an app does not exist.
if unwrappedError != codes.NotFound && unwrappedError != codes.PermissionDenied {
if grpc.UnwrapGRPCStatus(err).Code() != codes.NotFound {
errors.CheckError(err)
}

@@ -292,6 +294,10 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
})
errors.CheckError(err)

if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
}

pConn, projIf := headless.NewClientOrDie(clientOpts, c).NewProjectClientOrDie()
defer argoio.Close(pConn)
proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: app.Spec.Project})
@@ -619,6 +625,10 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
app, err := appIf.Get(ctx, &applicationpkg.ApplicationQuery{Name: &appName, AppNamespace: &appNs})
errors.CheckError(err)

if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
}

visited := cmdutil.SetAppSpecOptions(c.Flags(), &app.Spec, &appOpts)
if visited == 0 {
log.Error("Please set at least one option to update")
@@ -645,9 +655,7 @@ type unsetOpts struct {
namePrefix bool
nameSuffix bool
kustomizeVersion bool
kustomizeNamespace bool
kustomizeImages []string
kustomizeReplicas []string
parameters []string
valuesFiles []string
valuesLiteral bool
@@ -656,17 +664,6 @@ type unsetOpts struct {
passCredentials bool
}

// IsZero returns true when the Application options for kustomize are considered empty
func (o *unsetOpts) KustomizeIsZero() bool {
return o == nil ||
!o.namePrefix &&
!o.nameSuffix &&
!o.kustomizeVersion &&
!o.kustomizeNamespace &&
len(o.kustomizeImages) == 0 &&
len(o.kustomizeReplicas) == 0
}

// NewApplicationUnsetCommand returns a new instance of an `argocd app unset` command
func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
appOpts := cmdutil.AppOptions{}
@@ -696,6 +693,10 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
app, err := appIf.Get(ctx, &applicationpkg.ApplicationQuery{Name: &appName, AppNamespace: &appNs})
errors.CheckError(err)

if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
}

source := app.Spec.GetSource()
updated, nothingToUnset := unset(&source, opts)
if nothingToUnset {
@@ -723,9 +724,7 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
command.Flags().BoolVar(&opts.nameSuffix, "namesuffix", false, "Kustomize namesuffix")
command.Flags().BoolVar(&opts.namePrefix, "nameprefix", false, "Kustomize nameprefix")
command.Flags().BoolVar(&opts.kustomizeVersion, "kustomize-version", false, "Kustomize version")
command.Flags().BoolVar(&opts.kustomizeNamespace, "kustomize-namespace", false, "Kustomize namespace")
command.Flags().StringArrayVar(&opts.kustomizeImages, "kustomize-image", []string{}, "Kustomize images name (e.g. --kustomize-image node --kustomize-image mysql)")
command.Flags().StringArrayVar(&opts.kustomizeReplicas, "kustomize-replica", []string{}, "Kustomize replicas name (e.g. --kustomize-replica my-deployment --kustomize-replica my-statefulset)")
command.Flags().StringArrayVar(&opts.pluginEnvs, "plugin-env", []string{}, "Unset plugin env variables (e.g --plugin-env name)")
command.Flags().BoolVar(&opts.passCredentials, "pass-credentials", false, "Unset passCredentials")
return command
@@ -733,7 +732,7 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C

func unset(source *argoappv1.ApplicationSource, opts unsetOpts) (updated bool, nothingToUnset bool) {
if source.Kustomize != nil {
if opts.KustomizeIsZero() {
if !opts.namePrefix && !opts.nameSuffix && !opts.kustomizeVersion && len(opts.kustomizeImages) == 0 {
return false, true
}

@@ -752,11 +751,6 @@ func unset(source *argoappv1.ApplicationSource, opts unsetOpts) (updated bool, n
source.Kustomize.Version = ""
}

if opts.kustomizeNamespace && source.Kustomize.Namespace != "" {
updated = true
source.Kustomize.Namespace = ""
}

for _, kustomizeImage := range opts.kustomizeImages {
for i, item := range source.Kustomize.Images {
if argoappv1.KustomizeImage(kustomizeImage).Match(item) {
@@ -770,17 +764,6 @@ func unset(source *argoappv1.ApplicationSource, opts unsetOpts) (updated bool, n
}
}
}

for _, kustomizeReplica := range opts.kustomizeReplicas {
kustomizeReplicas := source.Kustomize.Replicas
for i, item := range kustomizeReplicas {
if kustomizeReplica == item.Name {
source.Kustomize.Replicas = append(kustomizeReplicas[0:i], kustomizeReplicas[i+1:]...)
updated = true
break
}
}
}
}
if source.Helm != nil {
if len(opts.parameters) == 0 && len(opts.valuesFiles) == 0 && !opts.valuesLiteral && !opts.ignoreMissingValueFiles && !opts.passCredentials {
@@ -950,6 +933,10 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
})
errors.CheckError(err)

if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
}

resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName, AppNamespace: &appNs})
errors.CheckError(err)
conn, settingsIf := clientset.NewSettingsClientOrDie()
@@ -1314,6 +1301,17 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
if cluster != "" {
appList = argo.FilterByCluster(appList, cluster)
}
var appsWithDeprecatedPlugins []string
for _, app := range appList {
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
appsWithDeprecatedPlugins = append(appsWithDeprecatedPlugins, app.Name)
}
}

if len(appsWithDeprecatedPlugins) > 0 {
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
log.Warnf("The following Applications use deprecated plugins: %s", strings.Join(appsWithDeprecatedPlugins, ", "))
}

switch output {
case "yaml", "json":
@@ -1672,6 +1670,10 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}

if local != "" {
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
}

if app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.Automated != nil && !dryRun {
log.Fatal("Cannot use local sync when Automatic Sync Policy is enabled except with --dry-run")
}
@@ -1745,6 +1747,10 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
}
if diffChanges {
if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
}

resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{
ApplicationName: &appName,
AppNamespace: &appNs,
@@ -2202,6 +2208,10 @@ func NewApplicationHistoryCommand(clientOpts *argocdclient.ClientOptions) *cobra
})
errors.CheckError(err)

if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
}

if output == "id" {
printApplicationHistoryIds(app.Status.History)
} else {
@@ -2262,6 +2272,10 @@ func NewApplicationRollbackCommand(clientOpts *argocdclient.ClientOptions) *cobr
})
errors.CheckError(err)

if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
}

depInfo, err := findRevisionHistory(app, int64(depID))
errors.CheckError(err)

@@ -2345,6 +2359,10 @@ func NewApplicationManifestsCommand(clientOpts *argocdclient.ClientOptions) *cob
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName})
errors.CheckError(err)

if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
}

settingsConn, settingsIf := clientset.NewSettingsClientOrDie()
defer argoio.Close(settingsConn)
argoSettings, err := settingsIf.Get(context.Background(), &settingspkg.SettingsQuery{})
@@ -2444,6 +2462,10 @@ func NewApplicationEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
})
errors.CheckError(err)

if app.Spec.GetSource().Plugin != nil && app.Spec.GetSource().Plugin.Name != "" {
log.Warnf(argocommon.ConfigMapPluginCLIDeprecationWarning)
}

appData, err := json.Marshal(app.Spec)
errors.CheckError(err)
appData, err = yaml.JSONToYAML(appData)
@@ -2485,11 +2507,12 @@ func NewApplicationPatchCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
command := cobra.Command{
Use: "patch APPNAME",
Short: "Patch application",
Example: ` # Update an application's source path using json patch
argocd app patch myapplication --patch='[{"op": "replace", "path": "/spec/source/path", "value": "newPath"}]' --type json
Long: `Examples:
# Update an application's source path using json patch
argocd app patch myapplication --patch='[{"op": "replace", "path": "/spec/source/path", "value": "newPath"}]' --type json

# Update an application's repository target revision using merge patch
argocd app patch myapplication --patch '{"spec": { "source": { "targetRevision": "master" } }}' --type merge`,
# Update an application's repository target revision using merge patch
argocd app patch myapplication --patch '{"spec": { "source": { "targetRevision": "master" } }}' --type merge`,
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()

@@ -18,7 +18,6 @@ import (
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/headless"
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
applicationpkg "github.com/argoproj/argo-cd/v2/pkg/apiclient/application"
"github.com/argoproj/argo-cd/v2/pkg/apis/application"
v1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/errors"
@@ -203,7 +202,7 @@ func getActionableResourcesForApplication(appIf applicationpkg.ApplicationServic
if err != nil {
return nil, err
}
app.Kind = application.ApplicationKind
app.Kind = "Application"
app.APIVersion = "argoproj.io/v1alpha1"
appManifest, err := json.Marshal(app)
if err != nil {

@@ -7,7 +7,6 @@ import (
"time"

argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
"github.com/argoproj/argo-cd/v2/pkg/apis/application"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
argoappv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/gitops-engine/pkg/health"
@@ -17,7 +16,6 @@ import (
"github.com/stretchr/testify/assert"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/util/intstr"
)

func Test_getInfos(t *testing.T) {
@@ -752,16 +750,6 @@ func Test_unset(t *testing.T) {
"old1=new:tag",
"old2=new:tag",
},
Replicas: []v1alpha1.KustomizeReplica{
{
Name: "my-deployment",
Count: intstr.FromInt(2),
},
{
Name: "my-statefulset",
Count: intstr.FromInt(4),
},
},
},
}

@@ -838,15 +826,6 @@ func Test_unset(t *testing.T) {
assert.False(t, updated)
assert.False(t, nothingToUnset)

assert.Equal(t, 2, len(kustomizeSource.Kustomize.Replicas))
updated, nothingToUnset = unset(kustomizeSource, unsetOpts{kustomizeReplicas: []string{"my-deployment"}})
assert.Equal(t, 1, len(kustomizeSource.Kustomize.Replicas))
assert.True(t, updated)
assert.False(t, nothingToUnset)
updated, nothingToUnset = unset(kustomizeSource, unsetOpts{kustomizeReplicas: []string{"my-deployment"}})
assert.False(t, updated)
assert.False(t, nothingToUnset)

assert.Equal(t, 2, len(helmSource.Helm.Parameters))
updated, nothingToUnset = unset(helmSource, unsetOpts{parameters: []string{"name-1"}})
assert.Equal(t, 1, len(helmSource.Helm.Parameters))
@@ -1142,13 +1121,13 @@ func TestParseSelectedResources(t *testing.T) {
assert.Equal(t, *operationResources[0], v1alpha1.SyncOperationResource{
Namespace: "",
Name: "test",
Kind: application.ApplicationKind,
Kind: "Application",
Group: "v1alpha",
})
assert.Equal(t, *operationResources[1], v1alpha1.SyncOperationResource{
Namespace: "namespace",
Name: "test",
Kind: application.ApplicationKind,
Kind: "Application",
Group: "v1alpha",
})
assert.Equal(t, *operationResources[2], v1alpha1.SyncOperationResource{

@@ -343,19 +343,16 @@ func printAppSetSummaryTable(appSet *arogappsetv1.ApplicationSet) {
fmt.Printf(printOpFmtStr, "Path:", source.Path)
printAppSourceDetails(&source)

var (
syncPolicyStr string
syncPolicy = appSet.Spec.Template.Spec.SyncPolicy
)
if syncPolicy != nil && syncPolicy.Automated != nil {
syncPolicyStr = "Automated"
if syncPolicy.Automated.Prune {
syncPolicyStr += " (Prune)"
var syncPolicy string
if appSet.Spec.SyncPolicy != nil && appSet.Spec.Template.Spec.SyncPolicy.Automated != nil {
syncPolicy = "Automated"
if appSet.Spec.Template.Spec.SyncPolicy.Automated.Prune {
syncPolicy += " (Prune)"
}
} else {
syncPolicyStr = "<none>"
syncPolicy = "<none>"
}
fmt.Printf(printOpFmtStr, "SyncPolicy:", syncPolicyStr)
fmt.Printf(printOpFmtStr, "SyncPolicy:", syncPolicy)

}

@@ -1,8 +1,6 @@
package commands

import (
"io"
"os"
"testing"

"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
@@ -70,124 +68,3 @@ func TestPrintApplicationSetTable(t *testing.T) {
expectation := "NAME NAMESPACE PROJECT SYNCPOLICY CONDITIONS\napp-name default nil [{ResourcesUpToDate <nil> True }]\napp-name default nil [{ResourcesUpToDate <nil> True }]\n"
assert.Equal(t, expectation, output)
}

func TestPrintAppSetSummaryTable(t *testing.T) {
baseAppSet := &arogappsetv1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "app-name",
},
Spec: arogappsetv1.ApplicationSetSpec{
Generators: []arogappsetv1.ApplicationSetGenerator{
arogappsetv1.ApplicationSetGenerator{
Git: &arogappsetv1.GitGenerator{
RepoURL: "https://github.com/argoproj/argo-cd.git",
Revision: "head",
Directories: []arogappsetv1.GitDirectoryGeneratorItem{
arogappsetv1.GitDirectoryGeneratorItem{
Path: "applicationset/examples/git-generator-directory/cluster-addons/*",
},
},
},
},
},
Template: arogappsetv1.ApplicationSetTemplate{
Spec: v1alpha1.ApplicationSpec{
Project: "default",
},
},
},
Status: arogappsetv1.ApplicationSetStatus{
Conditions: []arogappsetv1.ApplicationSetCondition{
arogappsetv1.ApplicationSetCondition{
Status: v1alpha1.ApplicationSetConditionStatusTrue,
Type: arogappsetv1.ApplicationSetConditionResourcesUpToDate,
},
},
},
}

appsetSpecSyncPolicy := baseAppSet.DeepCopy()
appsetSpecSyncPolicy.Spec.SyncPolicy = &arogappsetv1.ApplicationSetSyncPolicy{
PreserveResourcesOnDeletion: true,
}

appSetTemplateSpecSyncPolicy := baseAppSet.DeepCopy()
appSetTemplateSpecSyncPolicy.Spec.Template.Spec.SyncPolicy = &arogappsetv1.SyncPolicy{
Automated: &arogappsetv1.SyncPolicyAutomated{
SelfHeal: true,
},
}

appSetBothSyncPolicies := baseAppSet.DeepCopy()
appSetBothSyncPolicies.Spec.SyncPolicy = &arogappsetv1.ApplicationSetSyncPolicy{
PreserveResourcesOnDeletion: true,
}
appSetBothSyncPolicies.Spec.Template.Spec.SyncPolicy = &arogappsetv1.SyncPolicy{
Automated: &arogappsetv1.SyncPolicyAutomated{
SelfHeal: true,
},
}

for _, tt := range []struct {
name string
appSet *arogappsetv1.ApplicationSet
expectedOutput string
}{
{
name: "appset with only spec.syncPolicy set",
appSet: appsetSpecSyncPolicy,
expectedOutput: `Name: app-name
Project: default
Server:
Namespace:
Repo:
Target:
Path:
SyncPolicy: <none>
`,
},
{
name: "appset with only spec.template.spec.syncPolicy set",
appSet: appSetTemplateSpecSyncPolicy,
expectedOutput: `Name: app-name
Project: default
Server:
Namespace:
Repo:
Target:
Path:
SyncPolicy: Automated
`,
},
{
name: "appset with both spec.SyncPolicy and spec.template.spec.syncPolicy set",
appSet: appSetBothSyncPolicies,
expectedOutput: `Name: app-name
Project: default
Server:
Namespace:
Repo:
Target:
Path:
SyncPolicy: Automated
`,
},
} {
t.Run(tt.name, func(t *testing.T) {
oldStdout := os.Stdout
defer func() {
os.Stdout = oldStdout
}()

r, w, _ := os.Pipe()
os.Stdout = w

printAppSetSummaryTable(tt.appSet)
w.Close()

out, err := io.ReadAll(r)
assert.NoError(t, err)
assert.Equal(t, tt.expectedOutput, string(out))
})
}
}

@@ -93,17 +93,9 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
cmdutil.PrintKubeContexts(configAccess)
os.Exit(1)
}

if clusterOpts.InCluster && clusterOpts.ClusterEndpoint != "" {
log.Fatal("Can only use one of --in-cluster or --cluster-endpoint")
return
}

contextName := args[0]
conf, err := getRestConfig(pathOpts, contextName)
errors.CheckError(err)
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
managerBearerToken := ""
var awsAuthConf *argoappv1.AWSAuthConfig
var execProviderConf *argoappv1.ExecProviderConfig
@@ -122,10 +114,13 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
}
} else {
// Install RBAC resources for managing the cluster
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
if clusterOpts.ServiceAccount != "" {
managerBearerToken, err = clusterauth.GetServiceAccountBearerToken(clientset, clusterOpts.SystemNamespace, clusterOpts.ServiceAccount, common.BearerTokenTimeout)
} else {
isTerminal := isatty.IsTerminal(os.Stdout.Fd()) || isatty.IsCygwinTerminal(os.Stdout.Fd())

if isTerminal && !skipConfirmation {
accessLevel := "cluster"
if len(clusterOpts.Namespaces) > 0 {
@@ -152,18 +147,9 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
contextName = clusterOpts.Name
}
clst := cmdutil.NewCluster(contextName, clusterOpts.Namespaces, clusterOpts.ClusterResources, conf, managerBearerToken, awsAuthConf, execProviderConf, labelsMap, annotationsMap)
if clusterOpts.InClusterEndpoint() {
if clusterOpts.InCluster {
clst.Server = argoappv1.KubernetesInternalAPIServerAddr
} else if clusterOpts.ClusterEndpoint == string(cmdutil.KubePublicEndpoint) {
endpoint, err := cmdutil.GetKubePublicEndpoint(clientset)
if err != nil || len(endpoint) == 0 {
log.Warnf("Failed to find the cluster endpoint from kube-public data: %v", err)
log.Infof("Falling back to the endpoint '%s' as listed in the kubeconfig context", clst.Server)
endpoint = clst.Server
}
clst.Server = endpoint
}

if clusterOpts.Shard >= 0 {
clst.Shard = &clusterOpts.Shard
}

@@ -204,13 +204,8 @@ To access completions in your current shell, run
$ source <(argocd completion bash)
Alternatively, write it to a file and source in .bash_profile

For zsh, add the following to your ~/.zshrc file:
source <(argocd completion zsh)
compdef _argocd argocd

Optionally, also add the following, in case you are getting errors involving compdef & compinit such as command not found: compdef:
autoload -Uz compinit
compinit
For zsh, output to a file in a directory referenced by the $fpath shell
variable.
`,
Run: func(cmd *cobra.Command, args []string) {
if len(args) != 1 {

@@ -13,8 +13,8 @@ import (
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/initialize"

"github.com/alicebob/miniredis/v2"
"github.com/go-redis/redis/v8"
"github.com/golang/protobuf/ptypes/empty"
"github.com/redis/go-redis/v9"
log "github.com/sirupsen/logrus"
"github.com/spf13/pflag"
"k8s.io/apimachinery/pkg/util/runtime"
@@ -38,12 +38,11 @@ import (
)

type forwardCacheClient struct {
namespace string
context string
init sync.Once
client cache.CacheClient
compression cache.RedisCompressionType
err error
namespace string
context string
init sync.Once
client cache.CacheClient
err error
}

func (c *forwardCacheClient) doLazy(action func(client cache.CacheClient) error) error {
@@ -59,7 +58,7 @@ func (c *forwardCacheClient) doLazy(action func(client cache.CacheClient) error)
}

redisClient := redis.NewClient(&redis.Options{Addr: fmt.Sprintf("localhost:%d", redisPort)})
c.client = cache.NewRedisCache(redisClient, time.Hour, c.compression)
c.client = cache.NewRedisCache(redisClient, time.Hour, cache.RedisCompressionNone)
})
if c.err != nil {
return c.err
@@ -140,7 +139,7 @@ func testAPI(ctx context.Context, clientOpts *apiclient.ClientOptions) error {

// StartLocalServer allows executing command in a headless mode: on the fly starts Argo CD API server and
// changes provided client options to use started API server port
func StartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions, ctxStr string, port *int, address *string, compression cache.RedisCompressionType) error {
func StartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions, ctxStr string, port *int, address *string) error {
flags := pflag.NewFlagSet("tmp", pflag.ContinueOnError)
clientConfig := cli.AddKubectlFlagsToSet(flags)
startInProcessAPI := clientOpts.Core
@@ -201,7 +200,7 @@ func StartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions,
if err != nil {
return err
}
appstateCache := appstatecache.NewCache(cache.NewCache(&forwardCacheClient{namespace: namespace, context: ctxStr, compression: compression}), time.Hour)
appstateCache := appstatecache.NewCache(cache.NewCache(&forwardCacheClient{namespace: namespace, context: ctxStr}), time.Hour)
srv := server.NewServer(ctx, server.ArgoCDServerOpts{
EnableGZip: false,
Namespace: namespace,
@@ -244,7 +243,7 @@ func NewClientOrDie(opts *apiclient.ClientOptions, c *cobra.Command) apiclient.C
ctx := c.Context()

ctxStr := initialize.RetrieveContextIfChanged(c.Flag("context"))
err := StartLocalServer(ctx, opts, ctxStr, nil, nil, cache.RedisCompressionNone)
err := StartLocalServer(ctx, opts, ctxStr, nil, nil)
if err != nil {
log.Fatal(err)
}

@@ -64,13 +64,11 @@ type AppOptions struct {
jsonnetExtVarCode []string
jsonnetLibs []string
kustomizeImages []string
kustomizeReplicas []string
kustomizeVersion string
kustomizeCommonLabels []string
kustomizeCommonAnnotations []string
kustomizeForceCommonLabels bool
kustomizeForceCommonAnnotations bool
kustomizeNamespace string
pluginEnvs []string
Validate bool
directoryExclude string
@@ -119,14 +117,12 @@ func AddAppFlags(command *cobra.Command, opts *AppOptions) {
command.Flags().StringArrayVar(&opts.jsonnetExtVarCode, "jsonnet-ext-var-code", []string{}, "Jsonnet ext var")
command.Flags().StringArrayVar(&opts.jsonnetLibs, "jsonnet-libs", []string{}, "Additional jsonnet libs (prefixed by repoRoot)")
command.Flags().StringArrayVar(&opts.kustomizeImages, "kustomize-image", []string{}, "Kustomize images (e.g. --kustomize-image node:8.15.0 --kustomize-image mysql=mariadb,alpine@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d)")
command.Flags().StringArrayVar(&opts.kustomizeReplicas, "kustomize-replica", []string{}, "Kustomize replicas (e.g. --kustomize-replica my-development=2 --kustomize-replica my-statefulset=4)")
command.Flags().StringArrayVar(&opts.pluginEnvs, "plugin-env", []string{}, "Additional plugin envs")
command.Flags().BoolVar(&opts.Validate, "validate", true, "Validation of repo and cluster")
command.Flags().StringArrayVar(&opts.kustomizeCommonLabels, "kustomize-common-label", []string{}, "Set common labels in Kustomize")
command.Flags().StringArrayVar(&opts.kustomizeCommonAnnotations, "kustomize-common-annotation", []string{}, "Set common labels in Kustomize")
command.Flags().BoolVar(&opts.kustomizeForceCommonLabels, "kustomize-force-common-label", false, "Force common labels in Kustomize")
command.Flags().BoolVar(&opts.kustomizeForceCommonAnnotations, "kustomize-force-common-annotation", false, "Force common annotations in Kustomize")
command.Flags().StringVar(&opts.kustomizeNamespace, "kustomize-namespace", "", "Kustomize namespace")
command.Flags().StringVar(&opts.directoryExclude, "directory-exclude", "", "Set glob expression used to exclude files from application source path")
command.Flags().StringVar(&opts.directoryInclude, "directory-include", "", "Set glob expression used to include files from application source path")
command.Flags().Int64Var(&opts.retryLimit, "sync-retry-limit", 0, "Max number of allowed sync retries")
@@ -222,12 +218,8 @@ func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
setKustomizeOpt(source, kustomizeOpts{nameSuffix: appOpts.nameSuffix})
case "kustomize-image":
setKustomizeOpt(source, kustomizeOpts{images: appOpts.kustomizeImages})
case "kustomize-replica":
setKustomizeOpt(source, kustomizeOpts{replicas: appOpts.kustomizeReplicas})
case "kustomize-version":
setKustomizeOpt(source, kustomizeOpts{version: appOpts.kustomizeVersion})
case "kustomize-namespace":
setKustomizeOpt(source, kustomizeOpts{namespace: appOpts.kustomizeNamespace})
case "kustomize-common-label":
parsedLabels, err := label.Parse(appOpts.kustomizeCommonLabels)
errors.CheckError(err)
@@ -336,13 +328,11 @@ type kustomizeOpts struct {
namePrefix string
nameSuffix string
images []string
replicas []string
version string
commonLabels map[string]string
commonAnnotations map[string]string
forceCommonLabels bool
forceCommonAnnotations bool
namespace string
}

func setKustomizeOpt(src *argoappv1.ApplicationSource, opts kustomizeOpts) {
@@ -358,9 +348,6 @@ func setKustomizeOpt(src *argoappv1.ApplicationSource, opts kustomizeOpts) {
if opts.nameSuffix != "" {
src.Kustomize.NameSuffix = opts.nameSuffix
}
if opts.namespace != "" {
src.Kustomize.Namespace = opts.namespace
}
if opts.commonLabels != nil {
src.Kustomize.CommonLabels = opts.commonLabels
}
@@ -376,14 +363,6 @@ func setKustomizeOpt(src *argoappv1.ApplicationSource, opts kustomizeOpts) {
for _, image := range opts.images {
src.Kustomize.MergeImage(argoappv1.KustomizeImage(image))
}
for _, replica := range opts.replicas {
r, err := argoappv1.NewKustomizeReplica(replica)
if err != nil {
log.Fatal(err)
}
src.Kustomize.MergeReplica(*r)
}

if src.Kustomize.IsZero() {
src.Kustomize = nil
}

@@ -10,8 +10,6 @@ import (

"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
argoappv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"

"k8s.io/apimachinery/pkg/util/intstr"
)

func Test_setHelmOpt(t *testing.T) {
@@ -88,32 +86,11 @@ func Test_setKustomizeOpt(t *testing.T) {
setKustomizeOpt(&src, kustomizeOpts{images: []string{"org/image:v1", "org/image:v2"}})
assert.Equal(t, &v1alpha1.ApplicationSourceKustomize{Images: v1alpha1.KustomizeImages{v1alpha1.KustomizeImage("org/image:v2")}}, src.Kustomize)
})
t.Run("Replicas", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
testReplicasString := []string{"my-deployment=2", "my-statefulset=4"}
testReplicas := v1alpha1.KustomizeReplicas{
{
Name: "my-deployment",
Count: intstr.FromInt(2),
},
{
Name: "my-statefulset",
Count: intstr.FromInt(4),
},
}
setKustomizeOpt(&src, kustomizeOpts{replicas: testReplicasString})
assert.Equal(t, &v1alpha1.ApplicationSourceKustomize{Replicas: testReplicas}, src.Kustomize)
})
t.Run("Version", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
setKustomizeOpt(&src, kustomizeOpts{version: "v0.1"})
assert.Equal(t, &v1alpha1.ApplicationSourceKustomize{Version: "v0.1"}, src.Kustomize)
})
t.Run("Namespace", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
setKustomizeOpt(&src, kustomizeOpts{namespace: "custom-namespace"})
assert.Equal(t, &v1alpha1.ApplicationSourceKustomize{Namespace: "custom-namespace"}, src.Kustomize)
})
t.Run("Common labels", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
setKustomizeOpt(&src, kustomizeOpts{commonLabels: map[string]string{"foo1": "bar1", "foo2": "bar2"}})
@@ -214,11 +191,6 @@ func Test_setAppSpecOptions(t *testing.T) {
assert.NoError(t, f.SetFlag("sync-retry-limit", "0"))
assert.Nil(t, f.spec.SyncPolicy.Retry)
})
t.Run("Kustomize", func(t *testing.T) {
assert.NoError(t, f.SetFlag("kustomize-replica", "my-deployment=2"))
assert.NoError(t, f.SetFlag("kustomize-replica", "my-statefulset=4"))
assert.Equal(t, f.spec.Source.Kustomize.Replicas, argoappv1.KustomizeReplicas{{Name: "my-deployment", Count: intstr.FromInt(2)}, {Name: "my-statefulset", Count: intstr.FromInt(4)}})
})
}

func Test_setAnnotations(t *testing.T) {

@@ -1,33 +1,20 @@
package util

import (
"context"
"fmt"
"os"
"sort"
"strings"
"text/tabwriter"

"github.com/ghodss/yaml"
"github.com/spf13/cobra"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
clientcmdapiv1 "k8s.io/client-go/tools/clientcmd/api/v1"

argoappv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/errors"
)

type ClusterEndpoint string

const (
KubeConfigEndpoint ClusterEndpoint = "kubeconfig"
KubePublicEndpoint ClusterEndpoint = "kube-public"
KubeInternalEndpoint ClusterEndpoint = "internal"
)

func PrintKubeContexts(ca clientcmd.ConfigAccess) {
config, err := ca.GetStartingConfig()
errors.CheckError(err)
@@ -115,30 +102,6 @@ func NewCluster(name string, namespaces []string, clusterResources bool, conf *r
return &clst
}

// GetKubePublicEndpoint returns the kubernetes apiserver endpoint as published
// in the kube-public.
func GetKubePublicEndpoint(client kubernetes.Interface) (string, error) {
clusterInfo, err := client.CoreV1().ConfigMaps("kube-public").Get(context.TODO(), "cluster-info", metav1.GetOptions{})
if err != nil {
return "", err
}
kubeconfig, ok := clusterInfo.Data["kubeconfig"]
if !ok {
return "", fmt.Errorf("cluster-info does not contain a public kubeconfig")
}
// Parse Kubeconfig and get server address
config := &clientcmdapiv1.Config{}
err = yaml.Unmarshal([]byte(kubeconfig), config)
if err != nil {
return "", fmt.Errorf("failed to parse cluster-info kubeconfig: %v", err)
}
if len(config.Clusters) == 0 {
return "", fmt.Errorf("cluster-info kubeconfig does not have any clusters")
}

return config.Clusters[0].Cluster.Server, nil
}

type ClusterOptions struct {
InCluster bool
Upsert bool
@@ -156,13 +119,6 @@ type ClusterOptions struct {
ExecProviderEnv map[string]string
ExecProviderAPIVersion string
ExecProviderInstallHint string
ClusterEndpoint string
}

// InClusterEndpoint returns true if ArgoCD should reference the in-cluster
// endpoint when registering the target cluster.
func (o ClusterOptions) InClusterEndpoint() bool {
return o.InCluster || o.ClusterEndpoint == string(KubeInternalEndpoint)
}

func AddClusterFlags(command *cobra.Command, opts *ClusterOptions) {
@@ -179,5 +135,4 @@ func AddClusterFlags(command *cobra.Command, opts *ClusterOptions) {
command.Flags().StringToStringVar(&opts.ExecProviderEnv, "exec-command-env", nil, "Environment vars to set when running the --exec-command executable")
command.Flags().StringVar(&opts.ExecProviderAPIVersion, "exec-command-api-version", "", "Preferred input version of the ExecInfo for the --exec-command executable")
command.Flags().StringVar(&opts.ExecProviderInstallHint, "exec-command-install-hint", "", "Text shown to the user when the --exec-command executable doesn't seem to be present")
command.Flags().StringVar(&opts.ClusterEndpoint, "cluster-endpoint", "", "Cluster endpoint to use. Can be one of the following: 'kubeconfig', 'kube-public', or 'internal'.")
}

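Note (not part of the diff): on the left-hand side of the hunks above, --in-cluster and the newer --cluster-endpoint flag both funnel into the InClusterEndpoint() predicate. A small standalone sketch of that logic, with the type mirrored locally purely for illustration (these are not the real cmd/util definitions):

package main

import "fmt"

// Local mirror of the fields used by InClusterEndpoint; illustrative only.
type ClusterOptions struct {
	InCluster       bool
	ClusterEndpoint string
}

// InClusterEndpoint reports whether the in-cluster API server address should be recorded.
func (o ClusterOptions) InClusterEndpoint() bool {
	return o.InCluster || o.ClusterEndpoint == "internal"
}

func main() {
	fmt.Println(ClusterOptions{InCluster: true}.InClusterEndpoint())                // true
	fmt.Println(ClusterOptions{ClusterEndpoint: "internal"}.InClusterEndpoint())    // true
	fmt.Println(ClusterOptions{ClusterEndpoint: "kube-public"}.InClusterEndpoint()) // false
}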
@@ -4,14 +4,8 @@ import (
"strings"
"testing"

"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/rest"
clientcmdapiv1 "k8s.io/client-go/tools/clientcmd/api/v1"

"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
@@ -75,109 +69,3 @@ func Test_newCluster(t *testing.T) {
assert.Nil(t, clusterWithBearerToken.Labels)
assert.Nil(t, clusterWithBearerToken.Annotations)
}

func TestGetKubePublicEndpoint(t *testing.T) {
cases := []struct {
name string
clusterInfo *corev1.ConfigMap
expectedEndpoint string
expectError bool
}{
{
name: "has public endpoint",
clusterInfo: &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: "kube-public",
Name: "cluster-info",
},
Data: map[string]string{
"kubeconfig": kubeconfigFixture("https://test-cluster:6443"),
},
},
expectedEndpoint: "https://test-cluster:6443",
},
{
name: "no cluster-info",
expectError: true,
},
{
name: "no kubeconfig in cluster-info",
clusterInfo: &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: "kube-public",
Name: "cluster-info",
},
Data: map[string]string{
"argo": "the project, not the movie",
},
},
expectError: true,
},
{
name: "no clusters in cluster-info kubeconfig",
clusterInfo: &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: "kube-public",
Name: "cluster-info",
},
Data: map[string]string{
"kubeconfig": kubeconfigFixture(""),
},
},
expectError: true,
},
{
name: "can't parse kubeconfig",
clusterInfo: &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: "kube-public",
Name: "cluster-info",
},
Data: map[string]string{
"kubeconfig": "this is not valid YAML",
},
},
expectError: true,
},
}

for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
objects := []runtime.Object{}
if tc.clusterInfo != nil {
objects = append(objects, tc.clusterInfo)
}
clientset := fake.NewSimpleClientset(objects...)
endpoint, err := GetKubePublicEndpoint(clientset)
if err != nil && !tc.expectError {
t.Fatalf("unexpected error: %v", err)
}
if err == nil && tc.expectError {
t.Error("expected error to be returned, received none")
}
if endpoint != tc.expectedEndpoint {
t.Errorf("expected endpoint %s, got %s", tc.expectedEndpoint, endpoint)
}
})
}

}

func kubeconfigFixture(endpoint string) string {
kubeconfig := &clientcmdapiv1.Config{}
if len(endpoint) > 0 {
kubeconfig.Clusters = []clientcmdapiv1.NamedCluster{
{
Name: "test-kube",
Cluster: clientcmdapiv1.Cluster{
Server: endpoint,
},
},
}
}
configYAML, err := yaml.Marshal(kubeconfig)
if err != nil {
return ""
}
return string(configYAML)
}

@@ -138,10 +138,7 @@ func readProjFromURI(fileURL string, proj *v1alpha1.AppProject) error {
} else {
err = config.UnmarshalRemoteFile(fileURL, &proj)
}
if err != nil {
return fmt.Errorf("error reading proj from uri: %w", err)
}
return nil
return fmt.Errorf("error reading proj from uri: %w", err)
}

func SetProjSpecOptions(flags *pflag.FlagSet, spec *v1alpha1.AppProjectSpec, projOpts *ProjectOpts) int {

@@ -22,24 +22,19 @@ type PluginConfig struct {
}

type PluginConfigSpec struct {
Version string `json:"version"`
Init Command `json:"init,omitempty"`
Generate Command `json:"generate"`
Discover Discover `json:"discover"`
Parameters Parameters `yaml:"parameters"`
PreserveFileMode bool `json:"preserveFileMode,omitempty"`
Version string `json:"version"`
Init Command `json:"init,omitempty"`
Generate Command `json:"generate"`
Discover Discover `json:"discover"`
Parameters Parameters `yaml:"parameters"`
}

// Discover holds find and fileName
//Discover holds find and fileName
type Discover struct {
Find Find `json:"find"`
FileName string `json:"fileName"`
}

func (d Discover) IsDefined() bool {
return d.FileName != "" || d.Find.Glob != "" || len(d.Find.Command.Command) > 0
}

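Note (not part of the diff): IsDefined is the check that gates the "No discovery configuration is defined for plugin ..." warning emitted by the CMP server entrypoint earlier in this changeset. A standalone Go sketch of the same semantics, with the config types mirrored locally for illustration only (they are not imported from the real plugin package):

package main

import "fmt"

// Local mirrors of the plugin config types shown in the hunk above; illustrative only.
type Command struct{ Command []string }
type Find struct {
	Glob    string
	Command Command
}
type Discover struct {
	Find     Find
	FileName string
}

// IsDefined reports whether any discovery rule (fileName, glob, or command) is configured.
func (d Discover) IsDefined() bool {
	return d.FileName != "" || d.Find.Glob != "" || len(d.Find.Command.Command) > 0
}

func main() {
	fmt.Println(Discover{}.IsDefined())                                  // false: the plugin must be named explicitly in the Application spec
	fmt.Println(Discover{FileName: "plugin.yaml"}.IsDefined())           // true
	fmt.Println(Discover{Find: Find{Glob: "**/Chart.yaml"}}.IsDefined()) // true
}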
// Command holds binary path and arguments list
type Command struct {
Command []string `json:"command,omitempty"`

@@ -9,10 +9,8 @@ import (
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"time"
"unicode"

"github.com/argoproj/pkg/rand"

@@ -75,8 +73,9 @@ func runCommand(ctx context.Context, command Command, path string, env []string)
}
logCtx := log.WithFields(log.Fields{"execID": execId})

argsToLog := getCommandArgsToLog(cmd)
logCtx.WithFields(log.Fields{"dir": cmd.Dir}).Info(argsToLog)
// log in a way we can copy-and-paste into a terminal
args := strings.Join(cmd.Args, " ")
logCtx.WithFields(log.Fields{"dir": cmd.Dir}).Info(args)

var stdout bytes.Buffer
var stderr bytes.Buffer
@@ -107,42 +106,14 @@ func runCommand(ctx context.Context, command Command, path string, env []string)
logCtx.WithFields(log.Fields{"duration": duration}).Debug(output)

if err != nil {
err := newCmdError(argsToLog, errors.New(err.Error()), strings.TrimSpace(stderr.String()))
err := newCmdError(args, errors.New(err.Error()), strings.TrimSpace(stderr.String()))
logCtx.Error(err.Error())
return strings.TrimSuffix(output, "\n"), err
}
if len(output) == 0 {
log.WithFields(log.Fields{
"stderr": stderr.String(),
"command": command,
}).Warn("Plugin command returned zero output")
}

return strings.TrimSuffix(output, "\n"), nil
}

// getCommandArgsToLog represents the given command in a way that we can copy-and-paste into a terminal
func getCommandArgsToLog(cmd *exec.Cmd) string {
var argsToLog []string
for _, arg := range cmd.Args {
containsSpace := false
for _, r := range arg {
if unicode.IsSpace(r) {
containsSpace = true
break
}
}
if containsSpace {
// add quotes and escape any internal quotes
argsToLog = append(argsToLog, strconv.Quote(arg))
} else {
argsToLog = append(argsToLog, arg)
}
}
args := strings.Join(argsToLog, " ")
return args
}

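Note (not part of the diff): the helper above exists so that the logged command line can be pasted straight into a terminal; arguments containing whitespace are wrapped with strconv.Quote. A tiny standalone illustration of the same idea follows; the sample argv is invented for the example.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"unicode"
)

// quoteArgs joins an argv into a single shell-friendly line,
// quoting (and escaping) any argument that contains whitespace.
func quoteArgs(args []string) string {
	out := make([]string, 0, len(args))
	for _, arg := range args {
		if strings.IndexFunc(arg, unicode.IsSpace) >= 0 {
			out = append(out, strconv.Quote(arg))
		} else {
			out = append(out, arg)
		}
	}
	return strings.Join(out, " ")
}

func main() {
	fmt.Println(quoteArgs([]string{"helm", "template", "--set", "image.tag=v1 beta"}))
	// Output: helm template --set "image.tag=v1 beta"
}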
type CmdError struct {
|
||||
Args string
|
||||
Stderr string
|
||||
@@ -213,7 +184,7 @@ func (s *Service) generateManifestGeneric(stream GenerateManifestStream) error {
}
defer cleanup()

metadata, err := cmp.ReceiveRepoStream(ctx, stream, workDir, s.initConstants.PluginConfig.Spec.PreserveFileMode)
metadata, err := cmp.ReceiveRepoStream(ctx, stream, workDir)
if err != nil {
return fmt.Errorf("generate manifest error receiving stream: %w", err)
}
@@ -297,7 +268,7 @@ func (s *Service) matchRepositoryGeneric(stream MatchRepositoryStream) error {
}
defer cleanup()

metadata, err := cmp.ReceiveRepoStream(bufferedCtx, stream, workDir, s.initConstants.PluginConfig.Spec.PreserveFileMode)
metadata, err := cmp.ReceiveRepoStream(bufferedCtx, stream, workDir)
if err != nil {
return fmt.Errorf("match repository error receiving stream: %w", err)
}
@@ -375,7 +346,7 @@ func (s *Service) GetParametersAnnouncement(stream apiclient.ConfigManagementPlu
}
defer cleanup()

metadata, err := cmp.ReceiveRepoStream(bufferedCtx, stream, workDir, s.initConstants.PluginConfig.Spec.PreserveFileMode)
metadata, err := cmp.ReceiveRepoStream(bufferedCtx, stream, workDir)
if err != nil {
return fmt.Errorf("parameters announcement error receiving stream: %w", err)
}
@@ -384,7 +355,7 @@ func (s *Service) GetParametersAnnouncement(stream apiclient.ConfigManagementPlu
return fmt.Errorf("illegal appPath: out of workDir bound")
}

repoResponse, err := getParametersAnnouncement(bufferedCtx, appPath, s.initConstants.PluginConfig.Spec.Parameters.Static, s.initConstants.PluginConfig.Spec.Parameters.Dynamic, metadata.GetEnv())
repoResponse, err := getParametersAnnouncement(bufferedCtx, appPath, s.initConstants.PluginConfig.Spec.Parameters.Static, s.initConstants.PluginConfig.Spec.Parameters.Dynamic)
if err != nil {
return fmt.Errorf("get parameters announcement error: %w", err)
}
@@ -396,12 +367,11 @@ func (s *Service) GetParametersAnnouncement(stream apiclient.ConfigManagementPlu
return nil
}

func getParametersAnnouncement(ctx context.Context, appDir string, announcements []*repoclient.ParameterAnnouncement, command Command, envEntries []*apiclient.EnvEntry) (*apiclient.ParametersAnnouncementResponse, error) {
func getParametersAnnouncement(ctx context.Context, appDir string, announcements []*repoclient.ParameterAnnouncement, command Command) (*apiclient.ParametersAnnouncementResponse, error) {
augmentedAnnouncements := announcements

if len(command.Command) > 0 {
env := append(os.Environ(), environ(envEntries)...)
stdout, err := runCommand(ctx, command, appDir, env)
stdout, err := runCommand(ctx, command, appDir, os.Environ())
if err != nil {
return nil, fmt.Errorf("error executing dynamic parameter output command: %w", err)
}

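The `metadata.GetEnv()` / `environ(envEntries)` plumbing removed in this hunk is what layered application-provided variables on top of the process environment before running the dynamic parameters command. A minimal, self-contained sketch of that merge follows; `EnvEntry` here is a simplified stand-in for the plugin API type, not the real struct:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// EnvEntry mirrors the name/value shape of the plugin API type; this is a sketch, not the real struct.
type EnvEntry struct {
	Name  string
	Value string
}

// environ flattens entries into KEY=VALUE strings, as the removed helper call suggests.
func environ(entries []*EnvEntry) []string {
	var out []string
	for _, e := range entries {
		out = append(out, fmt.Sprintf("%s=%s", e.Name, e.Value))
	}
	return out
}

func main() {
	entries := []*EnvEntry{{Name: "MUST_BE_SET", Value: "yep"}}
	cmd := exec.Command("sh", "-c", `echo "$MUST_BE_SET"`)
	// For duplicate keys, os/exec uses the last value in the slice, so app-provided entries win.
	cmd.Env = append(os.Environ(), environ(entries)...)
	out, _ := cmd.Output()
	fmt.Print(string(out)) // prints "yep"
}
```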
@@ -6,7 +6,6 @@ import (
"fmt"
"io"
"os"
"os/exec"
"path"
"path/filepath"
"testing"
@@ -381,7 +380,7 @@ func Test_getParametersAnnouncement_empty_command(t *testing.T) {
Command: []string{"echo"},
Args: []string{`[]`},
}
res, err := getParametersAnnouncement(context.Background(), "", *static, command, []*apiclient.EnvEntry{})
res, err := getParametersAnnouncement(context.Background(), "", *static, command)
require.NoError(t, err)
assert.Equal(t, []*repoclient.ParameterAnnouncement{{Name: "static-a"}, {Name: "static-b"}}, res.ParameterAnnouncements)
}
@@ -395,7 +394,7 @@ func Test_getParametersAnnouncement_no_command(t *testing.T) {
err := yaml.Unmarshal([]byte(staticYAML), static)
require.NoError(t, err)
command := Command{}
res, err := getParametersAnnouncement(context.Background(), "", *static, command, []*apiclient.EnvEntry{})
res, err := getParametersAnnouncement(context.Background(), "", *static, command)
require.NoError(t, err)
assert.Equal(t, []*repoclient.ParameterAnnouncement{{Name: "static-a"}, {Name: "static-b"}}, res.ParameterAnnouncements)
}
@@ -412,7 +411,7 @@ func Test_getParametersAnnouncement_static_and_dynamic(t *testing.T) {
Command: []string{"echo"},
Args: []string{`[{"name": "dynamic-a"}, {"name": "dynamic-b"}]`},
}
res, err := getParametersAnnouncement(context.Background(), "", *static, command, []*apiclient.EnvEntry{})
res, err := getParametersAnnouncement(context.Background(), "", *static, command)
require.NoError(t, err)
expected := []*repoclient.ParameterAnnouncement{
{Name: "dynamic-a"},
@@ -428,7 +427,7 @@ func Test_getParametersAnnouncement_invalid_json(t *testing.T) {
Command: []string{"echo"},
Args: []string{`[`},
}
_, err := getParametersAnnouncement(context.Background(), "", []*repoclient.ParameterAnnouncement{}, command, []*apiclient.EnvEntry{})
_, err := getParametersAnnouncement(context.Background(), "", []*repoclient.ParameterAnnouncement{}, command)
assert.Error(t, err)
assert.Contains(t, err.Error(), "unexpected end of JSON input")
}
@@ -438,7 +437,7 @@ func Test_getParametersAnnouncement_bad_command(t *testing.T) {
Command: []string{"exit"},
Args: []string{"1"},
}
_, err := getParametersAnnouncement(context.Background(), "", []*repoclient.ParameterAnnouncement{}, command, []*apiclient.EnvEntry{})
_, err := getParametersAnnouncement(context.Background(), "", []*repoclient.ParameterAnnouncement{}, command)
assert.Error(t, err)
assert.Contains(t, err.Error(), "error executing dynamic parameter output command")
}
@@ -730,17 +729,16 @@ func TestService_GetParametersAnnouncement(t *testing.T) {
require.NoError(t, err)

t.Run("successful response", func(t *testing.T) {
s, err := NewMockParametersAnnouncementStream("./testdata/kustomize", "./testdata/kustomize", []string{"MUST_BE_SET=yep"})
s, err := NewMockParametersAnnouncementStream("./testdata/kustomize", "./testdata/kustomize", nil)
require.NoError(t, err)
err = service.GetParametersAnnouncement(s)
require.NoError(t, err)
require.NotNil(t, s.response)
require.Len(t, s.response.ParameterAnnouncements, 2)
assert.Equal(t, repoclient.ParameterAnnouncement{Name: "dynamic-test-param", String_: "yep"}, *s.response.ParameterAnnouncements[0])
assert.Equal(t, repoclient.ParameterAnnouncement{Name: "test-param", String_: "test-value"}, *s.response.ParameterAnnouncements[1])
require.Len(t, s.response.ParameterAnnouncements, 1)
assert.Equal(t, repoclient.ParameterAnnouncement{Name: "test-param", String_: "test-value"}, *s.response.ParameterAnnouncements[0])
})
t.Run("out of bounds app", func(t *testing.T) {
s, err := NewMockParametersAnnouncementStream("./testdata/kustomize", "./testdata/kustomize", []string{"MUST_BE_SET=yep"})
s, err := NewMockParametersAnnouncementStream("./testdata/kustomize", "./testdata/kustomize", nil)
require.NoError(t, err)
// set a malicious app path on the metadata
s.metadataRequest.Request.(*apiclient.AppStreamRequest_Metadata).Metadata.AppRelPath = "../out-of-bounds"
@@ -748,38 +746,4 @@ func TestService_GetParametersAnnouncement(t *testing.T) {
require.ErrorContains(t, err, "illegal appPath")
require.Nil(t, s.response)
})
t.Run("fails when script fails", func(t *testing.T) {
s, err := NewMockParametersAnnouncementStream("./testdata/kustomize", "./testdata/kustomize", []string{"WRONG_ENV_VAR=oops"})
require.NoError(t, err)
err = service.GetParametersAnnouncement(s)
require.ErrorContains(t, err, "error executing dynamic parameter output command")
require.Nil(t, s.response)
})
}

func Test_getCommandArgsToLog(t *testing.T) {
testCases := []struct {
name string
args []string
expected string
}{
{
name: "no spaces",
args: []string{"sh", "-c", "cat"},
expected: "sh -c cat",
},
{
name: "spaces",
args: []string{"sh", "-c", `echo "hello world"`},
expected: `sh -c "echo \"hello world\""`,
},
}

for _, tc := range testCases {
tcc := tc
t.Run(tcc.name, func(t *testing.T) {
t.Parallel()
assert.Equal(t, tcc.expected, getCommandArgsToLog(exec.Command(tcc.args[0], tcc.args[1:]...)))
})
}
}

@@ -5,15 +5,9 @@ metadata:
spec:
version: v1.0
init:
command: [sh, -c]
args:
- |
kustomize version
command: [kustomize, version]
generate:
command: [sh, -c]
args:
- |
kustomize build
command: [sh, -c, "kustomize build"]
discover:
find:
command: [sh, -c, find . -name kustomization.yaml]
@@ -22,12 +16,3 @@ spec:
static:
- name: test-param
string: test-value
dynamic:
command: [sh, -c]
args:
- |
# Make sure env vars are making it to the plugin.
if [ -z "$MUST_BE_SET" ]; then
exit 1
fi
echo "[{\"name\": \"dynamic-test-param\", \"string\": \"$MUST_BE_SET\"}]"

@@ -8,8 +8,6 @@ import (
"time"

"github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)

// Default service addresses and URLS of Argo CD internal services
@@ -159,10 +157,6 @@ const (
// Ex: "http://grafana.example.com/d/yu5UH4MMz/deployments"
// Ex: "Go to Dashboard|http://grafana.example.com/d/yu5UH4MMz/deployments"
AnnotationKeyLinkPrefix = "link.argocd.argoproj.io/"

// AnnotationKeyAppSkipReconcile tells the Application to skip the Application controller reconcile.
// Skip reconcile when the value is "true" or any other string values that can be strconv.ParseBool() to be true.
AnnotationKeyAppSkipReconcile = "argocd.argoproj.io/skip-reconcile"
)

// Environment variables for tuning and debugging Argo CD
@@ -229,7 +223,13 @@ const (
// DefaultCMPWorkDirName defines the work directory name used by the cmp-server
DefaultCMPWorkDirName = "_cmp_server"

ConfigMapPluginDeprecationWarning = "argocd-cm plugins are deprecated, and support will be removed in v2.7. Upgrade your plugin to be installed via sidecar. https://argo-cd.readthedocs.io/en/stable/user-guide/config-management-plugins/"
ConfigMapPluginDeprecationWarning = "argocd-cm plugins are deprecated, and support will be removed in v2.6. Upgrade your plugin to be installed via sidecar. https://argo-cd.readthedocs.io/en/stable/user-guide/config-management-plugins/"

ConfigMapPluginCLIDeprecationWarning = "spec.plugin.name is set, which means this Application uses a plugin installed in the " +
"argocd-cm ConfigMap. Installing plugins via that ConfigMap is deprecated in Argo CD v2.5. " +
"Starting in Argo CD v2.6, this Application will fail to sync. Contact your Argo CD admin " +
"to make sure an upgrade plan is in place. More info: " +
"https://argo-cd.readthedocs.io/en/latest/operator-manual/upgrading/2.4-2.5/"
)

const (
@@ -322,5 +322,3 @@ const (
const TokenVerificationError = "failed to verify the token"

var TokenVerificationErr = errors.New(TokenVerificationError)

var PermissionDeniedAPIError = status.Error(codes.PermissionDenied, "permission denied")

@@ -18,7 +18,6 @@ import (
"github.com/argoproj/gitops-engine/pkg/diff"
"github.com/argoproj/gitops-engine/pkg/health"
synccommon "github.com/argoproj/gitops-engine/pkg/sync/common"
resourceutil "github.com/argoproj/gitops-engine/pkg/sync/resource"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
jsonpatch "github.com/evanphx/json-patch"
log "github.com/sirupsen/logrus"
@@ -38,7 +37,6 @@ import (
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"

"github.com/argoproj/argo-cd/v2/common"
statecache "github.com/argoproj/argo-cd/v2/controller/cache"
"github.com/argoproj/argo-cd/v2/controller/metrics"
"github.com/argoproj/argo-cd/v2/pkg/apis/application"
@@ -53,7 +51,6 @@ import (
"github.com/argoproj/argo-cd/v2/util/db"
"github.com/argoproj/argo-cd/v2/util/errors"
"github.com/argoproj/argo-cd/v2/util/glob"
"github.com/argoproj/argo-cd/v2/util/helm"
logutils "github.com/argoproj/argo-cd/v2/util/log"
settings_util "github.com/argoproj/argo-cd/v2/util/settings"
)
@@ -840,7 +837,7 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
Message: err.Error(),
})
message := fmt.Sprintf("Unable to delete application resources: %v", err.Error())
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonStatusRefreshed, Type: v1.EventTypeWarning}, message, "")
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonStatusRefreshed, Type: v1.EventTypeWarning}, message)
}
}
return
@@ -944,9 +941,7 @@ func (ctrl *ApplicationController) removeProjectFinalizer(proj *appv1.AppProject

// shouldBeDeleted returns whether a given resource obj should be deleted on cascade delete of application app
func (ctrl *ApplicationController) shouldBeDeleted(app *appv1.Application, obj *unstructured.Unstructured) bool {
return !kube.IsCRD(obj) && !isSelfReferencedApp(app, kube.GetObjectRef(obj)) &&
!resourceutil.HasAnnotationOption(obj, synccommon.AnnotationSyncOptions, synccommon.SyncOptionDisableDeletion) &&
!resourceutil.HasAnnotationOption(obj, helm.ResourcePolicyAnnotation, helm.ResourcePolicyKeep)
return !kube.IsCRD(obj) && !isSelfReferencedApp(app, kube.GetObjectRef(obj))
}

func (ctrl *ApplicationController) getPermittedAppLiveObjects(app *appv1.Application, proj *appv1.AppProject, projectClusters func(project string) ([]*appv1.Cluster, error)) (map[kube.ResourceKey]*unstructured.Unstructured, error) {
@@ -1302,7 +1297,7 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
eventInfo.Type = v1.EventTypeWarning
messages = append(messages, "failed:", state.Message)
}
ctrl.auditLogger.LogAppEvent(app, eventInfo, strings.Join(messages, " "), "")
ctrl.auditLogger.LogAppEvent(app, eventInfo, strings.Join(messages, " "))
ctrl.metricsServer.IncSync(app, state)
}
return nil
@@ -1478,13 +1473,6 @@ func resourceStatusKey(res appv1.ResourceStatus) string {
return strings.Join([]string{res.Group, res.Kind, res.Namespace, res.Name}, "/")
}

func currentSourceEqualsSyncedSource(app *appv1.Application) bool {
if app.Spec.HasMultipleSources() {
return app.Spec.Sources.Equals(app.Status.Sync.ComparedTo.Sources)
}
return app.Spec.Source.Equals(&app.Status.Sync.ComparedTo.Source)
}

// needRefreshAppStatus answers if application status needs to be refreshed.
// Returns true if application never been compared, has changed or comparison result has expired.
// Additionally returns whether full refresh was requested or not.
@@ -1503,12 +1491,14 @@ func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application,
refreshType = requestedType
reason = fmt.Sprintf("%s refresh requested", refreshType)
} else {
if !currentSourceEqualsSyncedSource(app) {
if app.Spec.HasMultipleSources() {
if (len(app.Spec.Sources) != len(app.Status.Sync.ComparedTo.Sources)) || !reflect.DeepEqual(app.Spec.Sources, app.Status.Sync.ComparedTo.Sources) {
reason = "atleast one of the spec.sources differs"
compareWith = CompareWithLatestForceResolve
}
} else if !app.Spec.Source.Equals(app.Status.Sync.ComparedTo.Source) {
reason = "spec.source differs"
compareWith = CompareWithLatestForceResolve
if app.Spec.HasMultipleSources() {
reason = "at least one of the spec.sources differs"
}
} else if hardExpired || softExpired {
// The commented line below mysteriously crashes if app.Status.ReconciledAt is nil
// reason = fmt.Sprintf("comparison expired. reconciledAt: %v, expiry: %v", app.Status.ReconciledAt, statusRefreshTimeout)
@@ -1585,11 +1575,11 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
logCtx := log.WithFields(log.Fields{"application": orig.QualifiedName()})
if orig.Status.Sync.Status != newStatus.Sync.Status {
message := fmt.Sprintf("Updated sync status: %s -> %s", orig.Status.Sync.Status, newStatus.Sync.Status)
ctrl.auditLogger.LogAppEvent(orig, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message, "")
ctrl.auditLogger.LogAppEvent(orig, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message)
}
if orig.Status.Health.Status != newStatus.Health.Status {
message := fmt.Sprintf("Updated health status: %s -> %s", orig.Status.Health.Status, newStatus.Health.Status)
ctrl.auditLogger.LogAppEvent(orig, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message, "")
ctrl.auditLogger.LogAppEvent(orig, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message)
}
var newAnnotations map[string]string
if orig.GetAnnotations() != nil {
@@ -1724,7 +1714,7 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: err.Error()}
}
message := fmt.Sprintf("Initiated automated sync to '%s'", desiredCommitSHA)
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonOperationStarted, Type: v1.EventTypeNormal}, message, "")
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonOperationStarted, Type: v1.EventTypeNormal}, message)
logCtx.Info(message)
return nil
}
@@ -1794,20 +1784,6 @@ func (ctrl *ApplicationController) canProcessApp(obj interface{}) bool {
return false
}

if annotations := app.GetAnnotations(); annotations != nil {
if skipVal, ok := annotations[common.AnnotationKeyAppSkipReconcile]; ok {
logCtx := log.WithFields(log.Fields{"application": app.QualifiedName()})
if skipReconcile, err := strconv.ParseBool(skipVal); err == nil {
if skipReconcile {
logCtx.Debugf("Skipping Application reconcile based on annotation %s", common.AnnotationKeyAppSkipReconcile)
return false
}
} else {
logCtx.Debugf("Unable to determine if Application should skip reconcile based on annotation %s: %v", common.AnnotationKeyAppSkipReconcile, err)
}
}
}

if ctrl.clusterFilter != nil {
cluster, err := ctrl.db.GetCluster(context.Background(), app.Spec.Destination.Server)
if err != nil {

@@ -209,55 +209,6 @@ status:
repoURL: https://github.com/argoproj/argocd-example-apps.git
`

var fakeMultiSourceApp = `
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
uid: "123"
name: my-app
namespace: ` + test.FakeArgoCDNamespace + `
spec:
destination:
namespace: ` + test.FakeDestNamespace + `
server: https://localhost:6443
project: default
sources:
- path: some/path
repoURL: https://github.com/argoproj/argocd-example-apps.git
- path: some/other/path
repoURL: https://github.com/argoproj/argocd-example-apps-fake.git
syncPolicy:
automated: {}
status:
operationState:
finishedAt: 2018-09-21T23:50:29Z
message: successfully synced
operation:
sync:
revisions:
- HEAD
- HEAD
phase: Succeeded
startedAt: 2018-09-21T23:50:25Z
syncResult:
resources:
- kind: RoleBinding
message: |-
rolebinding.rbac.authorization.k8s.io/always-outofsync reconciled
rolebinding.rbac.authorization.k8s.io/always-outofsync configured
name: always-outofsync
namespace: default
status: Synced
revisions:
- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
sources:
- path: some/path
repoURL: https://github.com/argoproj/argocd-example-apps.git
- path: some/other/path
repoURL: https://github.com/argoproj/argocd-example-apps-fake.git
`

var fakeAppWithDestName = `
apiVersion: argoproj.io/v1alpha1
kind: Application
@@ -312,10 +263,6 @@ func newFakeApp() *argoappv1.Application {
return createFakeApp(fakeApp)
}

func newFakeMultiSourceApp() *argoappv1.Application {
return createFakeApp(fakeMultiSourceApp)
}

func newFakeAppWithDestMismatch() *argoappv1.Application {
return createFakeApp(fakeAppWithDestMismatch)
}
@@ -910,133 +857,101 @@ func TestSetOperationStateOnDeletedApp(t *testing.T) {
}

func TestNeedRefreshAppStatus(t *testing.T) {
testCases := []struct {
name string
app *argoappv1.Application
}{
{
name: "single-source app",
app: newFakeApp(),
},
{
name: "multi-source app",
app: newFakeMultiSourceApp(),
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}})

app := newFakeApp()
now := metav1.Now()
app.Status.ReconciledAt = &now
app.Status.Sync = argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeSynced,
ComparedTo: argoappv1.ComparedTo{
Source: app.Spec.GetSource(),
Destination: app.Spec.Destination,
},
}

for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}})
app := tc.app
now := metav1.Now()
app.Status.ReconciledAt = &now
// no need to refresh just reconciled application
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.False(t, needRefresh)

app.Status.Sync = argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeSynced,
ComparedTo: argoappv1.ComparedTo{
Destination: app.Spec.Destination,
},
}
// refresh app using the 'deepest' requested comparison level
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
ctrl.requestAppRefresh(app.Name, ComparisonWithNothing.Pointer(), nil)

if app.Spec.HasMultipleSources() {
app.Status.Sync.ComparedTo.Sources = app.Spec.Sources
} else {
app.Status.Sync.ComparedTo.Source = app.Spec.GetSource()
}
needRefresh, refreshType, compareWith := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithRecent, compareWith)

// no need to refresh just reconciled application
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.False(t, needRefresh)
// refresh application which status is not reconciled using latest commit
app.Status.Sync = argoappv1.SyncStatus{Status: argoappv1.SyncStatusCodeUnknown}

// refresh app using the 'deepest' requested comparison level
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
ctrl.requestAppRefresh(app.Name, ComparisonWithNothing.Pointer(), nil)
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatestForceResolve, compareWith)

needRefresh, refreshType, compareWith := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithRecent, compareWith)
{
// refresh app using the 'latest' level if comparison expired
app := app.DeepCopy()
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
app.Status.ReconciledAt = &reconciledAt
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Minute, 2*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
}

// refresh application which status is not reconciled using latest commit
app.Status.Sync = argoappv1.SyncStatus{Status: argoappv1.SyncStatusCodeUnknown}
{
// refresh app using the 'latest' level if comparison expired for hard refresh
app := app.DeepCopy()
app.Status.Sync = argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeSynced,
ComparedTo: argoappv1.ComparedTo{
Source: app.Spec.GetSource(),
Destination: app.Spec.Destination,
},
}
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
app.Status.ReconciledAt = &reconciledAt
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 2*time.Hour, 1*time.Minute)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeHard, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
}

needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
{
app := app.DeepCopy()
// execute hard refresh if app has refresh annotation
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
app.Status.ReconciledAt = &reconciledAt
app.Annotations = map[string]string{
v1alpha1.AnnotationKeyRefresh: string(argoappv1.RefreshTypeHard),
}
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeHard, refreshType)
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
}

t.Run("refresh app using the 'latest' level if comparison expired", func(t *testing.T) {
app := app.DeepCopy()
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
app.Status.ReconciledAt = &reconciledAt
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Minute, 2*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
})
{
app := app.DeepCopy()
// ensure that CompareWithLatest level is used if application source has changed
ctrl.requestAppRefresh(app.Name, ComparisonWithNothing.Pointer(), nil)
// sample app source change
app.Spec.Source.Helm = &argoappv1.ApplicationSourceHelm{
Parameters: []argoappv1.HelmParameter{{
Name: "foo",
Value: "bar",
}},
}

t.Run("refresh app using the 'latest' level if comparison expired for hard refresh", func(t *testing.T) {
app := app.DeepCopy()
app.Status.Sync = argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeSynced,
ComparedTo: argoappv1.ComparedTo{
Destination: app.Spec.Destination,
},
}
if app.Spec.HasMultipleSources() {
app.Status.Sync.ComparedTo.Sources = app.Spec.Sources
} else {
app.Status.Sync.ComparedTo.Source = app.Spec.GetSource()
}
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
app.Status.ReconciledAt = &reconciledAt
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 2*time.Hour, 1*time.Minute)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeHard, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
})

t.Run("execute hard refresh if app has refresh annotation", func(t *testing.T) {
app := app.DeepCopy()
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
app.Status.ReconciledAt = &reconciledAt
app.Annotations = map[string]string{
v1alpha1.AnnotationKeyRefresh: string(argoappv1.RefreshTypeHard),
}
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeHard, refreshType)
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
})

t.Run("ensure that CompareWithLatest level is used if application source has changed", func(t *testing.T) {
app := app.DeepCopy()
ctrl.requestAppRefresh(app.Name, ComparisonWithNothing.Pointer(), nil)
// sample app source change
if app.Spec.HasMultipleSources() {
app.Spec.Sources[0].Helm = &argoappv1.ApplicationSourceHelm{
Parameters: []argoappv1.HelmParameter{{
Name: "foo",
Value: "bar",
}},
}
} else {
app.Spec.Source.Helm = &argoappv1.ApplicationSourceHelm{
Parameters: []argoappv1.HelmParameter{{
Name: "foo",
Value: "bar",
}},
}
}

needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
})
})
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatestForceResolve, compareWith)
}
}

@@ -1486,52 +1401,3 @@ func Test_canProcessApp(t *testing.T) {
assert.False(t, canProcess)
})
}

func Test_canProcessAppSkipReconcileAnnotation(t *testing.T) {
appSkipReconcileInvalid := newFakeApp()
appSkipReconcileInvalid.Annotations = map[string]string{common.AnnotationKeyAppSkipReconcile: "invalid-value"}
appSkipReconcileFalse := newFakeApp()
appSkipReconcileFalse.Annotations = map[string]string{common.AnnotationKeyAppSkipReconcile: "false"}
appSkipReconcileTrue := newFakeApp()
appSkipReconcileTrue.Annotations = map[string]string{common.AnnotationKeyAppSkipReconcile: "true"}
ctrl := newFakeController(&fakeData{})
tests := []struct {
name string
input interface{}
expected bool
}{
{"No skip reconcile annotation", newFakeApp(), true},
{"Contains skip reconcile annotation ", appSkipReconcileInvalid, true},
{"Contains skip reconcile annotation value false", appSkipReconcileFalse, true},
{"Contains skip reconcile annotation value true", appSkipReconcileTrue, false},
}

for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
assert.Equal(t, tt.expected, ctrl.canProcessApp(tt.input))
})
}
}

func Test_syncDeleteOption(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
cm := newFakeCM()
t.Run("without delete option object is deleted", func(t *testing.T) {
cmObj := kube.MustToUnstructured(&cm)
delete := ctrl.shouldBeDeleted(app, cmObj)
assert.True(t, delete)
})
t.Run("with delete set to false object is retained", func(t *testing.T) {
cmObj := kube.MustToUnstructured(&cm)
cmObj.SetAnnotations(map[string]string{"argocd.argoproj.io/sync-options": "Delete=false"})
delete := ctrl.shouldBeDeleted(app, cmObj)
assert.False(t, delete)
})
t.Run("with delete set to false object is retained", func(t *testing.T) {
cmObj := kube.MustToUnstructured(&cm)
cmObj.SetAnnotations(map[string]string{"helm.sh/resource-policy": "keep"})
delete := ctrl.shouldBeDeleted(app, cmObj)
assert.False(t, delete)
})
}

17
controller/cache/cache.go
vendored
17
controller/cache/cache.go
vendored
@@ -25,7 +25,6 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/cache"

"github.com/argoproj/argo-cd/v2/controller/metrics"
@@ -395,20 +394,6 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
return nil, fmt.Errorf("error getting custom label: %w", err)
}

clusterCacheConfig := cluster.RESTConfig()
// Controller dynamically fetches all resource types available on the cluster
// using a discovery API that may contain deprecated APIs.
// This causes log flooding when managing a large number of clusters.
// https://github.com/argoproj/argo-cd/issues/11973
// However, we can safely suppress deprecation warnings
// because we do not rely on resources with a particular API group or version.
// https://kubernetes.io/blog/2020/09/03/warnings/#customize-client-handling
//
// Completely suppress warning logs only for log levels that are less than Debug.
if log.GetLevel() < log.DebugLevel {
clusterCacheConfig.WarningHandler = rest.NoWarnings{}
}

clusterCacheOpts := []clustercache.UpdateSettingsFunc{
clustercache.SetListSemaphore(semaphore.NewWeighted(clusterCacheListSemaphoreSize)),
clustercache.SetListPageSize(clusterCacheListPageSize),
@@ -440,7 +425,7 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
clustercache.SetRetryOptions(clusterCacheAttemptLimit, clusterCacheRetryUseBackoff, isRetryableError),
}

clusterCache = clustercache.NewClusterCache(clusterCacheConfig, clusterCacheOpts...)
clusterCache = clustercache.NewClusterCache(cluster.RESTConfig(), clusterCacheOpts...)

_ = clusterCache.OnResourceUpdated(func(newRes *clustercache.Resource, oldRes *clustercache.Resource, namespaceResources map[kube.ResourceKey]*clustercache.Resource) {
toNotify := make(map[string]bool)

@@ -2,13 +2,13 @@ package controller

import (
"context"
"time"
"fmt"
"github.com/argoproj/gitops-engine/pkg/cache"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"time"

"github.com/argoproj/argo-cd/v2/controller/metrics"
appv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"

@@ -1,13 +1,10 @@
package controller

import (
"fmt"

"github.com/argoproj/gitops-engine/pkg/health"
hookutil "github.com/argoproj/gitops-engine/pkg/sync/hook"
"github.com/argoproj/gitops-engine/pkg/sync/ignore"
kubeutil "github.com/argoproj/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/runtime/schema"

"github.com/argoproj/argo-cd/v2/pkg/apis/application"
@@ -18,7 +15,6 @@ import (
// setApplicationHealth updates the health statuses of all resources performed in the comparison
func setApplicationHealth(resources []managedResource, statuses []appv1.ResourceStatus, resourceOverrides map[string]appv1.ResourceOverride, app *appv1.Application, persistResourceHealth bool) (*appv1.HealthStatus, error) {
var savedErr error
var errCount uint
appHealth := appv1.HealthStatus{Status: health.HealthStatusHealthy}
for i, res := range resources {
if res.Target != nil && hookutil.Skip(res.Target) {
@@ -42,10 +38,7 @@ func setApplicationHealth(resources []managedResource, statuses []appv1.Resource
}
healthStatus, err = health.GetResourceHealth(res.Live, healthOverrides)
if err != nil && savedErr == nil {
errCount++
savedErr = fmt.Errorf("failed to get resource health for %q with name %q in namespace %q: %w", res.Live.GetKind(), res.Live.GetName(), res.Live.GetNamespace(), err)
// also log so we don't lose the message
log.WithField("application", app.QualifiedName()).Warn(savedErr)
savedErr = err
}
}

@@ -79,8 +72,5 @@ func setApplicationHealth(resources []managedResource, statuses []appv1.Resource
} else {
app.Status.ResourceHealthSource = appv1.ResourceHealthLocationAppTree
}
if savedErr != nil && errCount > 1 {
savedErr = fmt.Errorf("see applicaton-controller logs for %d other errors; most recent error was: %w", errCount-1, savedErr)
}
return &appHealth, savedErr
}

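The aggregated error built in the hunk above wraps the most recent failure with `%w`, so the root cause stays inspectable after the summary is added. A small standalone sketch of that wrapping pattern (the error value and count below are illustrative only):

```go
package main

import (
	"errors"
	"fmt"
)

var errHealthCheck = errors.New("failed to get resource health")

func main() {
	errCount := 3
	savedErr := error(errHealthCheck)
	// Summarize additional failures while preserving the original error in the chain via %w.
	savedErr = fmt.Errorf("see application-controller logs for %d other errors; most recent error was: %w", errCount-1, savedErr)

	fmt.Println(savedErr)
	// errors.Is still matches the wrapped error, so callers can branch on the root cause.
	fmt.Println(errors.Is(savedErr, errHealthCheck)) // true
}
```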
@@ -124,7 +124,7 @@ func newAppLiveObj(status health.HealthStatusCode) *unstructured.Unstructured {
},
TypeMeta: metav1.TypeMeta{
APIVersion: "argoproj.io/v1alpha1",
Kind: application.ApplicationKind,
Kind: "Application",
},
Status: appv1.ApplicationStatus{
Health: appv1.HealthStatus{

@@ -207,7 +207,7 @@ func runTest(t *testing.T, cfg TestMetricServerConfig) {
metricsServ.registry.MustRegister(collector)
}

req, err := http.NewRequest(http.MethodGet, "/metrics", nil)
req, err := http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -337,7 +337,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",name="my-app",namespa
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: common.OperationSucceeded})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: common.OperationSucceeded})

req, err := http.NewRequest(http.MethodGet, "/metrics", nil)
req, err := http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -391,7 +391,7 @@ argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argoc
fakeApp := newFakeApp(fakeApp)
metricsServ.IncReconcile(fakeApp, 5*time.Second)

req, err := http.NewRequest(http.MethodGet, "/metrics", nil)
req, err := http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -415,7 +415,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",name="my-app",namespa
argocd_app_sync_total{dest_server="https://localhost:6443",name="my-app",namespace="argocd",phase="Succeeded",project="important-project"} 2
`

req, err := http.NewRequest(http.MethodGet, "/metrics", nil)
req, err := http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -426,7 +426,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",name="my-app",namespa
err = metricsServ.SetExpiration(time.Second)
assert.NoError(t, err)
time.Sleep(2 * time.Second)
req, err = http.NewRequest(http.MethodGet, "/metrics", nil)
req, err = http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)
rr = httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)

@@ -471,7 +471,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
failedToLoadObjs = true
}

logCtx.Debugf("Retrieved live manifests")
logCtx.Debugf("Retrieved lived manifests")

// filter out all resources which are not permitted in the application project
for k, v := range liveObjByKey {
@@ -682,7 +682,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap

healthStatus, err := setApplicationHealth(managedResources, resourceSummaries, resourceOverrides, app, m.persistResourceHealth)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: fmt.Sprintf("error setting app health: %s", err.Error()), LastTransitionTime: &now})
conditions = append(conditions, appv1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error(), LastTransitionTime: &now})
}

// Git has already performed the signature verification via its GPG interface, and the result is available

@@ -322,25 +322,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
var resState []common.ResourceSyncResult
state.Phase, state.Message, resState = syncCtx.GetState()
state.SyncResult.Resources = nil

var apiVersion []kube.APIResourceInfo
for _, res := range resState {
augmentedMsg, err := argo.AugmentSyncMsg(res, func() ([]kube.APIResourceInfo, error) {
if apiVersion == nil {
_, apiVersion, err = m.liveStateCache.GetVersionsInfo(app.Spec.Destination.Server)
if err != nil {
return nil, fmt.Errorf("failed to get version info from the target cluster %q", app.Spec.Destination.Server)
}
}
return apiVersion, nil
})

if err != nil {
log.Errorf("using the original message since: %v", err)
} else {
res.Message = augmentedMsg
}

state.SyncResult.Resources = append(state.SyncResult.Resources, &v1alpha1.ResourceResult{
HookType: res.HookType,
Group: res.ResourceKey.Group,

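The closure passed to `argo.AugmentSyncMsg` above lazily fetches the cluster's API resource list and memoizes it, so the lookup happens at most once per sync regardless of how many resource results need augmenting. A self-contained sketch of that pattern, with a stand-in fetch function rather than the Argo CD client:

```go
package main

import "fmt"

// lazyAPIVersions returns a provider that fetches the version list on first use
// and reuses the cached result afterwards.
func lazyAPIVersions(fetch func() ([]string, error)) func() ([]string, error) {
	var cached []string
	return func() ([]string, error) {
		if cached == nil {
			v, err := fetch()
			if err != nil {
				return nil, err
			}
			cached = v
		}
		return cached, nil
	}
}

func main() {
	calls := 0
	provider := lazyAPIVersions(func() ([]string, error) {
		calls++ // the expensive cluster call happens here in the real code
		return []string{"apps/v1", "batch/v1"}, nil
	})

	for i := 0; i < 3; i++ {
		versions, _ := provider()
		fmt.Println(versions)
	}
	fmt.Println("fetches:", calls) // prints 1: the lookup was memoized
}
```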
@@ -1,5 +0,0 @@
# 2.7 to 2.8

## Tini as entrypoint

With the 2.8 release `entrypoint.sh` will be removed from the containers, because starting with 2.7, the implicit entrypoint is set to `tini` in the `Dockerfile` explicitly, and the kubernetes manifests has been updated to use it. Simply updating the containers without updating the deployment manifests will result in pod startup failures, as the old manifests are relying on `entrypoint.sh` instead of `tini`. Please make sure the manifests are updated properly before moving to 2.8.
Binary file not shown. (Before: 32 KiB)
Binary file not shown. (Before: 8.3 KiB)
Binary file not shown. (Before: 132 KiB)
Binary file not shown. (Before: 41 KiB)
@@ -9,6 +9,16 @@ setTimeout(function() {
caret.innerHTML = "<i class='fa fa-caret-down dropdown-caret'></i>"
caret.classList.add('dropdown-caret')
div.querySelector('.rst-current-version').appendChild(caret);
div.querySelector('.rst-current-version').addEventListener('click', function() {
const classes = container.className.split(' ');
const index = classes.indexOf('shift-up');
if (index === -1) {
classes.push('shift-up');
} else {
classes.splice(index, 1);
}
container.className = classes.join(' ');
});
}

var CSSLink = document.createElement('link');

@@ -23,13 +23,13 @@ git push origin <yourbranch>

First, make sure the failing build step succeeds on your machine. Remember the containerized build toolchain is available, too.

If the build is failing at the `Ensure Go modules synchronicity` step, you need to first download all Go dependent modules locally via `go mod download` and then run `go mod tidy` to make sure the dependent Go modules are tidied up. Finally, commit and push your changes to `go.mod` and `go.sum` to your branch.
If the build is failing at the `Ensure Go modules synchronicity` step, you need to first download all Go dependent modules locally via `go mod download` and then run `go mod tidy` to make sure the dependent Go modules are tidied up. Finally commit and push your changes to `go.mod` and `go.sum` to your branch.

If the build is failing at the `Build & cache Go code`, you need to make sure `make build-local` runs successfully on your local machine.

### Why does the codegen step fail?

If the codegen step fails with "Check nothing has changed...", chances are high that you did not run `make codegen`, or did not commit the changes it made. You should double-check by running `make codegen` followed by `git status` in the local working copy of your branch. Commit any changes and push them to your GH branch to have the CI check it again.
If the codegen step fails with "Check nothing has changed...", chances are high that you did not run `make codegen`, or did not commit the changes it made. You should double check by running `make codegen` followed by `git status` in the local working copy of your branch. Commit any changes and push them to your GH branch to have the CI check it again.

A second common case for this is, when you modified any of the auto generated assets, as these will be overwritten upon `make codegen`.

@@ -109,4 +109,4 @@ The current cadence of our meetings is weekly, every Thursday at 4:15pm UTC (8:1

If you want to discuss something, we kindly ask you to put your item on the
[agenda](https://docs.google.com/document/d/1xkoFkVviB70YBzSEa4bDnu-rUZ1sIFtwKKG1Uw8XsY8)
for one of the upcoming meetings so that we can plan in the time for discussing it.
for one of the upcoming meetings so that we can plan in the time for discussing about it.
Some files were not shown because too many files have changed in this diff.