Compare commits


62 Commits

Author SHA1 Message Date
Alexander Matyushentsev
567fcc3314 Update manifests to v1.1.2 2019-07-30 10:38:35 -07:00
Alexander Matyushentsev
26ba5022aa Issue #2049 - 'argocd app wait' should print correct sync status (#2050) 2019-07-30 10:31:55 -07:00
Devon Mizelle
14d5c49f85 Check that TLS is enabled when registering DEX Handlers (#1963)
This commit makes it so that `registerDexHandlers` in `server/server.go`
only attempts to modify `a.TLSConfig` if TLS is enabled.

Without this, deployments of ArgoCD that don't have a certificate
enabled (in the case where a LB/Ingress Controller is handling SSL
connections as a reverse proxy) end up having a nil pointer reference
panic on start.
2019-07-30 09:26:09 -07:00
Alex Collins
c961f7417b Do not ignore Argo hooks when there is a Helm hook. Closes #1952 (#1973) 2019-07-29 15:14:32 -07:00
Alexander Matyushentsev
8544bef56b Update manifests to v1.1.1 2019-07-24 10:27:08 -07:00
Alexander Matyushentsev
b9090df4fa Issue #1984 - Support 'override' action in UI/API (#1985) 2019-07-23 14:30:51 -07:00
Alexander Matyushentsev
0ac0591fdb Issue #1982 - Fix argocd app wait message (#1983) 2019-07-23 14:30:46 -07:00
Alexander Matyushentsev
c48712d988 Update manifests to v1.1.0 2019-07-22 15:06:19 -07:00
Alexander Matyushentsev
e39f96999e Fix merge issues 2019-07-19 14:48:37 -07:00
Alexander Matyushentsev
08d7b6492b Update manifests to v1.1.0-rc8 2019-07-19 14:36:31 -07:00
Alexander Matyushentsev
24f0efd791 Fix argocd app sync/get cli (#1959) 2019-07-19 14:33:44 -07:00
Alexander Matyushentsev
39a2e5097a Issue #1935 - argocd app sync hangs when cluster is not configured #1935 (#1962) 2019-07-19 14:33:40 -07:00
Alexander Matyushentsev
de881f398a Remove unnecessary details from sync errors (#1951) 2019-07-19 14:33:37 -07:00
Alexander Matyushentsev
70a7855da0 Issue #1919 - Eliminate unnecessary git interactions for top-level resource changes (#1929)
* Issue #1919 - Eliminate unnecessary git interactions for top-level resource changes

* Apply reviewer notes
2019-07-19 14:33:33 -07:00
Jesse Suen
6d8a592509 Pin k8s.io/kube-openapi to a specific version to stop updating 2019-07-19 13:41:19 -07:00
Jesse Suen
5581a85bff Do not allow app-of-app child app's Missing status to affect parent (#1954) 2019-07-19 13:28:22 -07:00
Alex Collins
156b3de4c5 Improve sync result messages. Closes #1486 (#1768) 2019-07-19 09:33:00 -07:00
Alexander Matyushentsev
257991b69c Update manifests to v1.1.0-rc7 2019-07-17 16:12:00 -07:00
Alexander Matyushentsev
c26c4729a1 Change git prometheus counter name (#1949) 2019-07-17 16:05:38 -07:00
Alexander Matyushentsev
2a79017c5f Update manifests to v1.1.0-rc6 2019-07-15 17:46:44 -07:00
Jesse Suen
7fe92adaed Update k8s libraries to v1.14 (#1806) 2019-07-11 11:25:18 -07:00
Alexander Matyushentsev
59d564017e Fix merging issue 2019-07-11 11:23:27 -07:00
Alexander Matyushentsev
6d8d805f92 Issue #897 - Secret data not redacted in last-applied-configuration (#1920) 2019-07-11 10:58:15 -07:00
Alexander Matyushentsev
f5ab4d55c3 Issue #1912 - Add Prometheus metrics for git repo interactions (#1914) 2019-07-10 17:14:39 -07:00
Alexander Matyushentsev
8cfb628d24 Issue #1909 - App controller should log additional information during app syncing (#1910) 2019-07-10 13:34:48 -07:00
Alexander Matyushentsev
3c95a4a3c4 Upgrade argo ui version to pull dropdown fix (#1906) 2019-07-10 13:34:44 -07:00
Alexander Matyushentsev
c924350adf Update manifests to v1.1.0-rc5 2019-07-09 15:17:03 -07:00
Alexander Matyushentsev
4633eb6db8 Update manifests to v1.0.0-rc5 2019-07-09 14:23:05 -07:00
Alexander Matyushentsev
7089e6b0f9 Upgrade argo ui version to pull dropdown fix (#1899) 2019-07-09 14:17:26 -07:00
Alex Collins
513f773ae8 Log more error information. See #1887 (#1891) 2019-07-08 18:03:15 -07:00
Alex Collins
75db0b6c8c Update manifests to v1.1.0-rc4 2019-07-03 14:07:53 -07:00
Alex Collins
0860b032ec Merged from master 2019-07-03 13:52:12 -07:00
Alexander Matyushentsev
ba267f627a Issue #1874 - validate app spec before verifying app permissions (#1875) 2019-07-03 13:10:23 -07:00
Alex Collins
3000146574 Redacts Helm username and password. Closes #1868 (#1871) 2019-07-03 13:02:48 -07:00
Alexander Matyushentsev
686ec75e0a Issue #1867 - Fix JS error on project role edit panel (#1869) 2019-07-03 10:38:46 -07:00
Alexander Matyushentsev
d08534f068 Upgrade argo-ui version to fix dropdown position calculation (#1847) 2019-07-01 10:48:11 -07:00
Alex Collins
7c0fd908a0 Update manifests to v1.1.0-rc3 2019-06-28 13:29:27 -07:00
Alex Collins
e530a4780e Removes logging that appears when using the CLI (#1842) 2019-06-28 13:19:37 -07:00
Alex Collins
8c76771a05 Adds file missing for tests 2019-06-28 13:06:53 -07:00
Alex Collins
65f430c395 Removes merge issue 2019-06-28 12:55:39 -07:00
Simon Behar
d409014da7 Added local path syncing (#1578) 2019-06-28 12:36:02 -07:00
Simon Behar
cfddf23275 Added local sync to docs (#1771) 2019-06-28 12:32:02 -07:00
Alexander Matyushentsev
5698d1b6b1 Issue #1820 - Make sure api server to repo server grpc calls have timeout (#1832) 2019-06-28 12:11:31 -07:00
Alex Collins
0e6895472b Adds a timeout to all external commands. Closes #1821 (#1823) 2019-06-28 12:07:32 -07:00
Jesse Suen
b7e0f91478 Running application actions should require override privileges not get (#1828) 2019-06-27 11:29:46 -07:00
Jesse Suen
78ab336c86 Parameterize Argo UI base image 2019-06-24 16:21:39 -07:00
Jesse Suen
a86e074b1f Switch to user root when building argo-cd base 2019-06-24 14:57:29 -07:00
Alex Collins
b742c66b14 update argo-ui URL from git:// to https:// 2019-06-24 14:15:18 -07:00
Alex Collins
e2756210d9 dep ensure 2019-06-21 16:08:54 -07:00
Alex Collins
97565c0895 Merged from master 2019-06-21 16:02:22 -07:00
Alex Collins
9a6f9ff824 Update manifests to v1.1.0-rc2 2019-06-21 16:00:06 -07:00
dthomson25
290cefaedd Use correct healthcheck for Rollout with empty steps list (#1776) 2019-06-21 15:04:19 -07:00
Jesse Suen
69e49d708f Move remarshaling to happen only during comparison, instead of manifest generation (#1788) 2019-06-21 15:01:07 -07:00
Jesse Suen
e27be81947 Server side rotation of cluster bearer tokens (#1744) 2019-06-21 13:41:14 -07:00
Alexander Matyushentsev
4febc66a64 Add health check to the controller deployment (#1785) 2019-06-21 11:02:43 -07:00
Alexander Matyushentsev
a984af76f1 Make status fields as optional fields (#1779) 2019-06-21 11:02:37 -07:00
Alexander Matyushentsev
be6a0fc21f Sync status button should be hidden if there is no sync operation (#1770) 2019-06-21 11:02:31 -07:00
Alexander Matyushentsev
122729e2a4 UI should allow editing repo URL (#1763) 2019-06-21 11:02:25 -07:00
Alex Collins
84635d4dbe Fixes a bug where cluster objs could leave app is running op state. C… (#1796) 2019-06-21 10:52:43 -07:00
Alex Collins
46550e009b Update manifests to v1.1.0-rc1 2019-06-14 11:16:08 -07:00
Alex Collins
683b9072b8 codegen 2019-06-14 11:15:09 -07:00
Alex Collins
33e66dcf5e Update manifests to v1.1.0-rc1 2019-06-14 11:13:23 -07:00
613 changed files with 15102 additions and 47318 deletions

.argo-ci/ci.yaml

@@ -0,0 +1,163 @@
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: argo-cd-ci-
spec:
entrypoint: argo-cd-ci
arguments:
parameters:
- name: revision
value: master
- name: repo
value: https://github.com/argoproj/argo-cd.git
volumes:
- name: k3setc
emptyDir: {}
- name: k3svar
emptyDir: {}
- name: tmp
emptyDir: {}
templates:
- name: argo-cd-ci
steps:
- - name: build-e2e
template: build-e2e
- name: test
template: ci-builder
arguments:
parameters:
- name: cmd
value: "dep ensure && make lint test && bash <(curl -s https://codecov.io/bash) -f coverage.out"
# This step builds the Argo CD image, deploys the Argo CD components into a throw-away Kubernetes cluster provisioned using k3s, and runs e2e tests against it.
- name: build-e2e
inputs:
artifacts:
- name: code
path: /go/src/github.com/argoproj/argo-cd
git:
repo: "{{workflow.parameters.repo}}"
revision: "{{workflow.parameters.revision}}"
container:
image: argoproj/argo-cd-ci-builder:v0.13.1
imagePullPolicy: Always
command: [sh, -c]
# The main container builds the argocd image. The image is saved into the k3s agent images directory so it can be preloaded by the k3s cluster.
args: ["
dep ensure && until docker ps; do sleep 3; done && \
make image DEV_IMAGE=true && mkdir -p /var/lib/rancher/k3s/agent/images && \
docker save argocd:latest > /var/lib/rancher/k3s/agent/images/argocd.tar && \
touch /var/lib/rancher/k3s/ready && until ls /etc/rancher/k3s/k3s.yaml; do sleep 3; done && \
kubectl create ns argocd-e2e && kustomize build ./test/manifests/ci | kubectl apply -n argocd-e2e -f - && \
kubectl rollout status deployment -n argocd-e2e argocd-application-controller && kubectl rollout status deployment -n argocd-e2e argocd-server && \
git config --global user.email \"test@example.com\" && \
export ARGOCD_SERVER=$(kubectl get service argocd-server -o=jsonpath={.spec.clusterIP} -n argocd-e2e):443 && make test-e2e"
]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: USER
value: argocd
- name: DOCKER_HOST
value: 127.0.0.1
- name: DOCKER_BUILDKIT
value: "1"
- name: KUBECONFIG
value: /etc/rancher/k3s/k3s.yaml
volumeMounts:
- name: tmp
mountPath: /tmp
- name: k3setc
mountPath: /etc/rancher/k3s
- name: k3svar
mountPath: /var/lib/rancher/k3s
sidecars:
- name: dind
image: docker:18.09-dind
securityContext:
privileged: true
resources:
requests:
memory: 2048Mi
cpu: 500m
mirrorVolumeMounts: true
# The sidecar waits for the file /var/lib/rancher/k3s/ready, which indicates that all required images are ready, then starts the cluster.
- name: k3s
image: rancher/k3s:v0.3.0-rc1
imagePullPolicy: Always
command: [sh, -c]
args: ["until ls /var/lib/rancher/k3s/ready; do sleep 3; done && k3s server || true"]
securityContext:
privileged: true
volumeMounts:
- name: tmp
mountPath: /tmp
- name: k3setc
mountPath: /etc/rancher/k3s
- name: k3svar
mountPath: /var/lib/rancher/k3s
- name: ci-builder
inputs:
parameters:
- name: cmd
artifacts:
- name: code
path: /go/src/github.com/argoproj/argo-cd
git:
repo: "{{workflow.parameters.repo}}"
revision: "{{workflow.parameters.revision}}"
container:
image: argoproj/argo-cd-ci-builder:v0.13.1
imagePullPolicy: Always
command: [bash, -c]
args: ["{{inputs.parameters.cmd}}"]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: CODECOV_TOKEN
valueFrom:
secretKeyRef:
name: codecov-token
key: codecov-token
resources:
requests:
memory: 1024Mi
cpu: 200m
archiveLocation:
archiveLogs: true
- name: ci-dind
inputs:
parameters:
- name: cmd
artifacts:
- name: code
path: /go/src/github.com/argoproj/argo-cd
git:
repo: "{{workflow.parameters.repo}}"
revision: "{{workflow.parameters.revision}}"
container:
image: argoproj/argo-cd-ci-builder:v0.13.1
imagePullPolicy: Always
command: [sh, -c]
args: ["until docker ps; do sleep 3; done && {{inputs.parameters.cmd}}"]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: DOCKER_HOST
value: 127.0.0.1
- name: DOCKER_BUILDKIT
value: "1"
resources:
requests:
memory: 1024Mi
cpu: 200m
sidecars:
- name: dind
image: docker:18.09-dind
securityContext:
privileged: true
mirrorVolumeMounts: true
archiveLocation:
archiveLogs: true

.circleci/config.yml

@@ -1,5 +1,14 @@
version: 2.1
commands:
before:
steps:
- restore_go_cache
- install_golang
- install_tools
- clean_checkout
- configure_git
- install_go_deps
- dep_ensure
configure_git:
steps:
- run:
@@ -11,6 +20,36 @@ commands:
git config --global user.name "Your Name"
echo "export PATH=/home/circleci/.go_workspace/src/github.com/argoproj/argo-cd/hack:\$PATH" | tee -a $BASH_ENV
echo "export GIT_ASKPASS=git-ask-pass.sh" | tee -a $BASH_ENV
- run:
name: Make sure we can clone out the test private repo
command: |
set -x
export GIT_USERNAME=blah
export GIT_PASSWORD=B5sBDeoqAVUouoHkrovy
git-ask-pass.sh Username
git-ask-pass.sh Password
git clone https://gitlab.com/argo-cd-test/test-apps.git /tmp/test-apps
clean_checkout:
steps:
- run:
name: Remove checked out code
command: rm -Rf /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
- checkout
install_go_deps:
steps:
- run:
name: Install Go deps
command: |
set -x
go get github.com/gobuffalo/packr/packr
go get github.com/gogo/protobuf/gogoproto
go get github.com/golang/protobuf/protoc-gen-go
go get github.com/golangci/golangci-lint/cmd/golangci-lint
go get github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
go get github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
go get github.com/jstemmer/go-junit-report
go get github.com/mattn/goreman
go get golang.org/x/tools/cmd/goimports
dep_ensure:
steps:
- restore_cache:
@@ -26,45 +65,154 @@ commands:
install_golang:
steps:
- run:
name: Install Golang v1.12.6
name: Install Golang v1.11.4
command: |
go get golang.org/dl/go1.12.6
[ -e /home/circleci/sdk/go1.12.6 ] || go1.12.6 download
go get golang.org/dl/go1.11.4
[ -e /home/circleci/sdk/go1.11.4 ] || go1.11.4 download
echo "export GOPATH=/home/circleci/.go_workspace" | tee -a $BASH_ENV
echo "export PATH=/home/circleci/sdk/go1.12.6/bin:\$PATH" | tee -a $BASH_ENV
echo "export PATH=/home/circleci/sdk/go1.11.4/bin:\$PATH" | tee -a $BASH_ENV
- run:
name: Golang diagnostics
command: |
env
which go
go version
go env
install_tools:
steps:
- run:
name: Create downloads dir
command: mkdir -p /tmp/dl
- restore_cache:
keys:
- dl-v4
- dl-v3
- run:
name: Install JQ v1.6
command: |
set -x
[ -e /tmp/dl/jq ] || curl -sLf -C - -o /tmp/dl/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
sudo cp /tmp/dl/jq /usr/local/bin/jq
sudo chmod +x /usr/local/bin/jq
jq --version
- run:
name: Install Kubectx v0.6.3
command: |
set -x
[ -e /tmp/dl/kubectx.zip ] || curl -sLf -C - -o /tmp/dl/kubectx.zip https://github.com/ahmetb/kubectx/archive/v0.6.3.zip
sudo unzip /tmp/dl/kubectx.zip kubectx-0.6.3/kubectx
sudo unzip /tmp/dl/kubectx.zip kubectx-0.6.3/kubens
sudo mv kubectx-0.6.3/kubectx /usr/local/bin/
sudo mv kubectx-0.6.3/kubens /usr/local/bin/
sudo chmod +x /usr/local/bin/kubectx
sudo chmod +x /usr/local/bin/kubens
- run:
name: Install Dep v0.5.3
command: |
set -x
[ -e /tmp/dl/dep ] || curl -sLf -C - -o /tmp/dl/dep https://github.com/golang/dep/releases/download/v0.5.3/dep-linux-amd64
sudo cp /tmp/dl/dep /usr/local/go/bin/dep
sudo chmod +x /usr/local/go/bin/dep
dep version
- run:
name: Install Go Swagger v0.19.0
command: |
set -x
[ -e /tmp/dl/swagger ] || curl -sLf -C - -o /tmp/dl/swagger https://github.com/go-swagger/go-swagger/releases/download/v0.19.0/swagger_linux_amd64
sudo cp /tmp/dl/swagger /usr/local/bin/swagger
sudo chmod +x /usr/local/bin/swagger
swagger version
- run:
name: Install Ksonnet v0.13.1
command: |
set -x
[ -e /tmp/dl/ks.tar.gz ] || curl -sLf -C - -o /tmp/dl/ks.tar.gz https://github.com/ksonnet/ksonnet/releases/download/v0.13.1/ks_0.13.1_linux_amd64.tar.gz
tar -C /tmp -xf /tmp/dl/ks.tar.gz
sudo cp /tmp/ks_0.13.1_linux_amd64/ks /usr/local/go/bin/ks
sudo chmod +x /usr/local/go/bin/ks
ks version
- run:
name: Install Helm v2.13.1
command: |
set -x
[ -e /tmp/dl/helm.tar.gz ] || curl -sLf -C - -o /tmp/dl/helm.tar.gz https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
tar -C /tmp/ -xf /tmp/dl/helm.tar.gz
sudo cp /tmp/linux-amd64/helm /usr/local/go/bin/helm
helm version --client
helm init --client-only
- run:
name: Install Kustomize v1.0.11
command: |
set -x
[ -e /tmp/dl/kustomize1 ] || curl -sLf -C - -o /tmp/dl/kustomize1 https://github.com/kubernetes-sigs/kustomize/releases/download/v1.0.11/kustomize_1.0.11_linux_amd64
sudo cp /tmp/dl/kustomize1 /usr/local/go/bin/
sudo chmod +x /usr/local/go/bin/kustomize1
kustomize1 version
- run:
name: Install Kustomize v2.0.3
command: |
set -x
[ -e /tmp/dl/kustomize ] || curl -sLf -C - -o /tmp/dl/kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v2.0.3/kustomize_2.0.3_linux_amd64
sudo cp /tmp/dl/kustomize /usr/local/go/bin/
sudo chmod +x /usr/local/go/bin/kustomize
kustomize version
- run:
name: Install Protobuf compiler v3.7.1
command: |
set -x
[ -e /tmp/dl/protoc.zip ] || curl -sLf -C - -o /tmp/dl/protoc.zip https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protoc-3.7.1-linux-x86_64.zip
sudo unzip /tmp/dl/protoc.zip bin/protoc -d /usr/local/
sudo chmod +x /usr/local/bin/protoc
sudo unzip /tmp/dl/protoc.zip include/* -d /usr/local/
protoc --version
- save_cache:
key: dl-v4
paths:
- /tmp/dl
save_go_cache:
steps:
- save_cache:
key: go-v18-{{ .Branch }}
key: go-v15-{{ .Branch }}
paths:
- /home/circleci/.go_workspace
- /home/circleci/.cache/go-build
- /home/circleci/sdk/go1.12.6
- /home/circleci/sdk/go1.11.4
restore_go_cache:
steps:
- restore_cache:
keys:
- go-v18-{{ .Branch }}
- go-v18-master
- go-v17-{{ .Branch }}
- go-v17-master
- go-v15-{{ .Branch }}
- go-v15-master
jobs:
codegen:
docker:
- image: circleci/golang:1.12
working_directory: /go/src/github.com/argoproj/argo-cd
build:
working_directory: /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
machine:
image: circleci/classic:201808-01
steps:
- checkout
- restore_cache:
keys: [codegen-v2]
- run: ./hack/install.sh codegen-go-tools
- run: sudo ./hack/install.sh codegen-tools
- run: dep ensure
- save_cache:
key: codegen-v2
paths: [vendor, /tmp/dl, /go/pkg]
- run: helm init --client-only
- run: make codegen-local
- before
- run:
name: Run unit tests
command: |
set -x
mkdir -p /tmp/test-results
trap "go-junit-report </tmp/test-results/go-test.out > /tmp/test-results/go-test-report.xml" EXIT
make test | tee /tmp/test-results/go-test.out
- save_go_cache
- run:
name: Uploading code coverage
command: bash <(curl -s https://codecov.io/bash) -f coverage.out
# This takes 2m, let's background it.
background: true
- store_test_results:
path: /tmp/test-results
- run:
name: Generate code
command: make codegen
- run:
name: Lint code
# use GOGC to limit memory usage in exchange for CPU usage, https://github.com/golangci/golangci-lint#memory-usage-of-golangci-lint
# we have 8GB RAM, 2CPUs https://circleci.com/docs/2.0/executor-types/#using-machine
command: LINT_GOGC=50 LINT_CONCURRENCY=2 make lint
- run:
name: Check nothing has changed
command: |
@@ -73,45 +221,14 @@ jobs:
# We exclude the Swagger resources; CircleCI doesn't generate them correctly.
# When this fails, it will create a patch file you can apply locally to fix it.
# To troubleshoot builds: https://argoproj.github.io/argo-cd/developer-guide/ci/
git diff --exit-code -- . ':!Gopkg.lock' ':!assets/swagger.json' | tee codegen.patch
git diff --exit-code -- . ':!Gopkg.lock' ':!assets/swagger.json' ':!pkg/apis/api-rules/violation_exceptions.list' ':!pkg/apis/application/v1alpha1/openapi_generated.go' | tee codegen.patch
- store_artifacts:
path: codegen.patch
destination: .
test:
working_directory: /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
machine:
image: circleci/classic:201808-01
steps:
- restore_go_cache
- install_golang
- checkout
- restore_cache:
key: test-dl-v1
- run: sudo ./hack/install.sh kubectl-linux kubectx-linux dep-linux ksonnet-linux helm-linux kustomize-linux
- save_cache:
key: test-dl-v1
paths: [/tmp/dl]
- configure_git
- run: go get github.com/jstemmer/go-junit-report
- dep_ensure
- save_go_cache
- run: make test
- run:
name: Uploading code coverage
command: bash <(curl -s https://codecov.io/bash) -f coverage.out
- store_test_results:
path: test-results
- store_artifacts:
path: test-results
destination: .
when: always
e2e:
working_directory: /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
machine:
image: circleci/classic:201808-01
environment:
ARGOCD_FAKE_IN_CLUSTER: "true"
ARGOCD_SSH_DATA_PATH: "/tmp/argo-e2e/app/config/ssh"
ARGOCD_TLS_DATA_PATH: "/tmp/argo-e2e/app/config/tls"
steps:
- run:
name: Install and start K3S v0.5.0
@@ -123,19 +240,15 @@ jobs:
environment:
INSTALL_K3S_EXEC: --docker
INSTALL_K3S_VERSION: v0.5.0
- restore_go_cache
- install_golang
- checkout
- restore_cache:
keys: [e2e-dl-v1]
- run: sudo ./hack/install.sh kubectx-linux dep-linux ksonnet-linux helm-linux kustomize-linux
- run: go get github.com/jstemmer/go-junit-report
- save_cache:
key: e2e-dl-v10
paths: [/tmp/dl]
- dep_ensure
- configure_git
- run: make cli
- before
- run:
# do this before we build everything else in the background, as they tend to explode
name: Make CLI
command: |
set -x
make cli
# must be added to path for tests
echo export PATH="`pwd`/dist:\$PATH" | tee -a $BASH_ENV
- run:
name: Create namespace
command: |
@@ -159,56 +272,85 @@ jobs:
name: Start repo server
command: go run ./cmd/argocd-repo-server/main.go --loglevel debug --redis localhost:6379
background: true
environment:
# pft. if you do not quote "true", CircleCI turns it into "1", stoopid
ARGOCD_FAKE_IN_CLUSTER: "true"
- run:
name: Start API server
command: go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:6379 --insecure --dex-server http://localhost:5556 --repo-server localhost:8081 --staticassets ../argo-cd-ui/dist/app
background: true
environment:
ARGOCD_FAKE_IN_CLUSTER: "true"
- run:
name: Start Test Git
name: Wait for API server
command: |
test/fixture/testrepos/start-git.sh
background: true
- run: until curl -v http://localhost:8080/healthz; do sleep 10; done
set -x
until curl -v http://localhost:8080/healthz; do sleep 3; done
- run:
name: Start controller
command: go run ./cmd/argocd-application-controller/main.go --loglevel debug --redis localhost:6379 --repo-server localhost:8081 --kubeconfig ~/.kube/config
background: true
environment:
ARGOCD_FAKE_IN_CLUSTER: "true"
- run:
command: PATH=dist:$PATH make test-e2e
name: Smoke test
command: |
set -x
argocd login localhost:8080 --plaintext --username admin --password password
argocd app create guestbook --dest-namespace default --dest-server https://kubernetes.default.svc --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook
argocd app sync guestbook
argocd app delete guestbook
- run:
name: Run e2e tests
command: |
set -x
mkdir -p /tmp/test-results
trap "go-junit-report </tmp/test-results/go-e2e.out > /tmp/test-results/go-e2e-report.xml" EXIT
make test-e2e | tee /tmp/test-results/go-e2e.out
environment:
ARGOCD_OPTS: "--server localhost:8080 --plaintext"
ARGOCD_E2E_EXPECT_TIMEOUT: "30"
ARGOCD_E2E_K3S: "true"
- store_test_results:
path: test-results
- store_artifacts:
path: test-results
destination: .
path: /tmp/test-results
ui:
# note that we check out the code in ~/argo-cd/, but then work in ~/argo-cd/ui
working_directory: ~/argo-cd/ui
docker:
- image: node:11.15.0
working_directory: ~/argo-cd/ui
steps:
- checkout:
path: ~/argo-cd/
- restore_cache:
name: Restore Yarn Package Cache
keys:
- yarn-packages-v4-{{ checksum "yarn.lock" }}
- run: yarn install --frozen-lockfile --ignore-optional --non-interactive
- yarn-packages-v3-{{ checksum "yarn.lock" }}
- run:
name: Install
command:
yarn install --frozen-lockfile --ignore-optional --non-interactive
- save_cache:
key: yarn-packages-v4-{{ checksum "yarn.lock" }}
paths: [~/.cache/yarn, node_modules]
- run: yarn test
- run: yarn build
- run: yarn lint
name: Save Yarn Package Cache
key: yarn-packages-v3-{{ checksum "yarn.lock" }}
paths:
- ~/.cache/yarn
- node_modules
- run:
name: Test
command: yarn test
# This does not appear to work, and I don't want to spend time on it.
- store_test_results:
path: junit.xml
- run:
name: Lint
command: yarn lint
workflows:
version: 2
workflow:
jobs:
- test
- codegen:
requires:
- test
- build
- e2e
- ui:
# this isn't strictly true, we just put it in here so that we use 2/4 executors rather than 3/4
requires:
- codegen
- e2e
- build

.codecov.yml

@@ -1,17 +1,16 @@
ignore:
- "**/*.pb.go"
- "**/*.pb.gw.go"
- "**/*generated.go"
- "**/*generated.deepcopy.go"
- "**/*_test.go"
- "pkg/apis/client/.*"
- "pkg/apis/.*"
- "pkg/client/.*"
- "vendor/.*"
- "test/.*"
coverage:
status:
# we've found this not to be useful
patch: off
# allow test coverage to drop by 0.1%, assume that it's typically due to CI problems
patch:
default:
threshold: 0.1
project:
default:
# allow test coverage to drop by 1%, assume that it's typically due to CI problems
threshold: 1
threshold: 0.1

.github/ISSUE_TEMPLATE/bug_report.md

@@ -4,26 +4,23 @@ about: Create a report to help us improve
title: ''
labels: 'bug'
assignees: ''
---
Checklist:
* [ ] I've included steps to reproduce the bug.
* [ ] I've pasted the output of `argocd version`.
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
A list of the steps required to reproduce the issue. Best of all, give us the URL to a repository that exhibits this issue.
If we cannot reproduce, we cannot fix! Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Version**
@@ -37,3 +34,10 @@ Paste the output from `argocd version` here.
```
Paste any relevant application logs here.
```
**Have you thought about contributing a fix yourself?**
Open Source software thrives with your contribution. It not only gives you skills you might not be able to get in your day job, it also looks amazing on your resume.
If you want to get involved, check out the
[contributing guide](https://github.com/argoproj/argo-cd/blob/master/docs/CONTRIBUTING.md), then reach out to us on [Slack](https://argoproj.github.io/community/join-slack) so we can see how to get you started.


@@ -1,18 +0,0 @@
---
name: Enhancement proposal
about: Propose an enhancement for this project
title: ''
labels: 'enhancement'
assignees: ''
---
# Summary
What change you think needs making.
# Motivation
Please give examples of your use case, e.g. when would you use this.
# Proposal
How do you think this should be implemented?

.github/ISSUE_TEMPLATE/feature_request.md

@@ -0,0 +1,21 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: 'enhancement'
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Have you thought about contributing yourself?**
Open Source software thrives with your contribution. It not only gives you skills you might not be able to get in your day job, it also looks amazing on your resume.
If you want to get involved, check out the
[contributing guide](https://github.com/argoproj/argo-cd/blob/master/docs/CONTRIBUTING.md), then reach out to us on [Slack](https://argoproj.github.io/community/join-slack) so we can see how to get you started.

.github/PULL_REQUEST_TEMPLATE.md

@@ -1,5 +1,7 @@
Checklist:
<!--
Thank you for submitting your PR!
* [ ] I've created an [enhancement proposal](https://github.com/argoproj/argo-cd/issues/new/choose) and I feel I've gotten a green light from the community.
* [ ] My build is green ([troubleshooting builds](https://argoproj.github.io/argo-cd/developer-guide/ci/)).
* [ ] Optional. My organisation is added to the README.
We'd love your organisation to be listed in the [README](https://github.com/argoproj/argo-cd). Don't forget to add it if you can!
To troubleshoot builds: https://argoproj.github.io/argo-cd/developer-guide/ci/
-->

.github/stale.yml

@@ -1,4 +1 @@
# See https://github.com/probot/stale
# See https://github.com/probot/stale
exemptLabels:
- backlog

.gitignore

@@ -9,4 +9,3 @@ site/
cmd/**/debug
debug.test
coverage.out
test-results

.golangci.yml

@@ -3,20 +3,19 @@ run:
skip-files:
- ".*\\.pb\\.go"
skip-dirs:
- pkg/client/
- vendor/
- pkg/client
- vendor
linter-settings:
goimports:
local-prefixes: github.com/argoproj/argo-cd
linters:
enable:
- vet
- deadcode
- gofmt
- goimports
- deadcode
- varcheck
- structcheck
- ineffassign
- unconvert
- unparam
linters-settings:
goimports:
local-prefixes: github.com/argoproj/argo-cd
service:
golangci-lint-version: 1.21.0
- misspell

CHANGELOG.md

@@ -1,219 +1,5 @@
# Changelog
## v1.2.3 (2019-10-1)
* Make argo-cd docker images openshift friendly (#2362) (@duboisf)
* Add dest-server and dest-namespace field to reconciliation logs (#2354)
- Stop logging /repository.RepositoryService/ValidateAccess parameters (#2386)
## v1.2.2 (2019-09-26)
+ Resource action equivalent to `kubectl rollout restart` (#2177)
- Badge response does not contain cache-control header (#2317) (@greenstatic)
- Make sure the controller uses the latest git version if app reconciliation result expired (#2339)
## v1.2.1 (2019-09-12)
+ Support limiting number of concurrent kubectl fork/execs (#2022)
+ Add --self-heal flag to argocd cli (#2296)
- Fix degraded proxy support for http(s) git repository (#2243)
- Fix nil pointer dereference in application controller (#2290)
## v1.2.0 (2019-09-05)
### New Features
#### Server Certificate And Known Hosts Management
The Server Certificate and Known Hosts Management feature makes it much easier to connect private Git repositories to Argo CD. Argo CD now provides a UI and CLI for
managing the certificates and known hosts used to access Git repositories. Both hosts and certificates can also be configured declaratively using the
[argocd-ssh-known-hosts-cm](https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-ssh-known-hosts-cm.yaml) and
[argocd-tls-certs-cm.yaml](https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-tls-certs-cm.yaml) config maps.
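For reference, a minimal sketch of the declarative form, assuming the `ssh_known_hosts` data key shown in the linked example manifest (the key material is a trimmed placeholder):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-ssh-known-hosts-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  ssh_known_hosts: |
    # standard known_hosts format: <host> <key-type> <base64-key>
    github.com ssh-rsa AAAA...placeholder...
```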
#### Self-Healing
The existing Automatic Sync feature automatically applies any new changes in Git to the target Kubernetes cluster. However, Automatic Sync does not cover the case where the
application is out of sync due to an unexpected change in the target cluster. The Self-Healing feature fills this gap: with Self-Healing enabled, Argo CD automatically pushes the desired state from Git into the cluster whenever a state deviation is detected.
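A sketch of enabling it on an Application, assuming the `syncPolicy.automated.selfHeal` field that backs this feature; the guestbook source is the example app used elsewhere in this diff:
```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    path: guestbook
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      selfHeal: true   # push the desired state from Git whenever the cluster drifts
```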
**Anonymous access** - enable read-only access without authentication to anyone in your organization.
**Support for Git LFS enabled repositories** - now you can store Helm charts as tar files and enable Git LFS in your repository.
**Compact diff view** - compact diff summary of the whole application in a single view.
**Badge for application status** - add a badge with the health and sync status of your application to the README.md of your deployment repo.
**Allow configuring google analytics tracking** - use Google Analytics to check how many users are visiting the UI of your Argo CD instance.
#### Backward Incompatible Changes
- Kustomize v1 support is removed. All kustomize charts are built using the same Kustomize version
- Kustomize v2.0.3 upgraded to v3.1.0. We've noticed one backward-incompatible change: https://github.com/kubernetes-sigs/kustomize/issues/42. Starting with v2.1.0, the namespace prefix feature works with CRDs (which might cause renaming of generated resource definitions).
- Argo CD config maps must be labeled with the `app.kubernetes.io/part-of: argocd` label. Make sure to apply the updated `install.yaml` manifest in addition to changing the image version.
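To make the configuration surface above concrete, a minimal argocd-cm sketch, assuming the `users.anonymous.enabled` and `ga.trackingid` keys from the operator manual of this release (the tracking id is a hypothetical placeholder), carrying the now-required `app.kubernetes.io/part-of: argocd` label:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd   # required starting with v1.2
data:
  users.anonymous.enabled: "true"       # read-only access without authentication
  ga.trackingid: "UA-00000000-1"        # hypothetical Google Analytics tracking id
```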
#### Enhancements
+ Adds a floating action button with help and chat links to every page.… (#2124)
+ Enhances cookie warning with actual length to help users fix their co… (#2134)
+ Added 'SyncFail' to possible HookTypes in UI (#2147)
+ Support for Git LFS enabled repositories (#1853)
+ Server certificate and known hosts management (#1514)
+ Client HTTPS certificates for private git repositories (#1945)
+ Badge for application status (#1435)
+ Make the health check for APIService a built in (#1841)
+ Bitbucket Server and Gogs webhook providers (#1269)
+ Jsonnet TLA arguments in ArgoCD CLI (#1626)
+ Self Healing (#1736)
+ Compact diff view (#1831)
+ Allow Helm parameters to force ambiguously-typed values to be strings (#1846)
+ Support anonymous argocd access (#1620)
+ Allow configuring google analytics tracking (#738)
+ Bash autocompletion for argocd (#1798)
+ Additional commit metadata (#1219)
+ Displays targetRevision in app dashboards. (#1239)
+ Local path syncing (#839)
+ System level `kustomize build` options (#1789)
+ Adds support for `argocd app set` for Kustomize. (#1843)
+ Allow users to create tokens for projects where they have any role. (#1977)
+ Add Refresh button to applications table and card view (#1606)
+ Adds CLI support for adding and removing groups from project roles. (#1851)
+ Support dry run and hook vs. apply strategy during sync (#798)
+ UI should remember most recent selected tab on resource info panel (#2007)
+ Adds link to the project from the app summary page. (#1911)
+ Different icon for resources which require pruning (#1159)
#### Bug Fixes
- Do not panic if the type is not api.Status (an error scenario) (#2105)
- Make sure endpoint is shown as a child of service (#2060)
- Word-wraps app info in the table and list views. (#2004)
- Project source/destination removal should consider wildcards (#1780)
- Repo whitelisting in UI does not support wildcards (#2000)
- Wait for CRD creation during sync process (#1940)
- Added a button to select out of sync items in the sync panel (#1902)
- Proper handling of an excluded resource in an application (#1621)
- Stop repeating logs on stopped container (#1614)
- Fix git repo url parsing on application list view (#2174)
- Fix nil pointer dereference error during app reconciliation (#2146)
- Fix history api fallback implementation to support app names with dots (#2114)
- Fixes some code issues related to Kustomize build options. (#2146)
- Adds checks around valid paths for apps (#2133)
- Endpoint incorrectly considered top level managed resource (#2060)
- Allow adding certs for hostnames ending on a dot (#2116)
#### Other
* Upgrade kustomize to v3.1.0 (#2068)
* Remove support for Kustomize 1. (#1573)
#### Contributors
* [alexec](https://github.com/alexec)
* [alexmt](https://github.com/alexmt)
* [dmizelle](https://github.com/dmizelle)
* [lcostea](https://github.com/lcostea)
* [jutley](https://github.com/jutley)
* [masa213f](https://github.com/masa213f)
* [Rayyis](https://github.com/Rayyis)
* [simster7](https://github.com/simster7)
* [dthomson25](https://github.com/dthomson25)
* [jannfis](https://github.com/jannfis)
* [naynasiddharth](https://github.com/naynasiddharth)
* [stgarf](https://github.com/stgarf)
## v1.1.2 (2019-07-30)
- 'argocd app wait' should print correct sync status (#2049)
- Check that TLS is enabled when registering DEX Handlers (#2047)
- Do not ignore Argo hooks when there is a Helm hook. (#1952)
## v1.1.1 (2019-07-25)
+ Support 'override' action in UI/API (#1984)
- Fix argocd app wait message (#1982)
## v1.1.0 (2019-07-24)
### New Features
#### Sync Waves
The Sync Waves feature allows executing a sync operation in a number of steps, or waves. Within each synchronization phase (pre-sync, sync, post-sync) you can have one or more waves,
which allows you to ensure certain resources are healthy before subsequent resources are synced.
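For example, a resource is assigned to a wave with the `argocd.argoproj.io/sync-wave` annotation; the Job below is a hypothetical pre-sync migration hook (lower waves must be healthy before higher waves are synced):
```
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration                      # hypothetical migration job
  annotations:
    argocd.argoproj.io/hook: PreSync      # run during the pre-sync phase
    argocd.argoproj.io/sync-wave: "1"     # waves within a phase run in ascending order
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: myorg/db-migrate:latest    # hypothetical image
```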
#### Optimized Interaction With Git
Argo CD needs to execute the `git fetch` operation to access application manifests and `git ls-remote` to resolve ambiguous git revisions. `git ls-remote` is executed very frequently,
and although the operation is lightweight, it adds unnecessary load on the Git server and might cause performance issues. In the v1.1 release, the application reconciliation process was
optimized, which significantly reduced the number of Git requests. With v1.1, Argo CD should send 3x-5x fewer Git requests.
#### User Defined Application Metadata
User-defined Application metadata enables the user to define a list of useful URLs for their specific application and expose those links in the UI
(e.g. a reference to a CI pipeline or an application-specific management tool). These links provide helpful shortcuts that make it easier to integrate Argo CD into existing
systems by making other components inside and outside Argo CD easier to find.
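A sketch of such links, assuming the `spec.info` name/value list that backs this feature (the URLs are hypothetical):
```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    path: guestbook
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  info:
  - name: CI Pipeline                     # rendered as a link in the UI
    value: https://ci.example.com/guestbook
  - name: Monitoring
    value: https://grafana.example.com/d/guestbook
```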
### Deprecation Notice
* Kustomize v1.0 is deprecated and support will be removed in the Argo CD v1.2 release.
#### Enhancements
- Sync waves [#1544](https://github.com/argoproj/argo-cd/issues/1544)
- Adds Prune=false and IgnoreExtraneous options [#1629](https://github.com/argoproj/argo-cd/issues/1629)
- Forward Git credentials to config management plugins [#1628](https://github.com/argoproj/argo-cd/issues/1628)
- Improve Kustomize 2 parameters UI [#1609](https://github.com/argoproj/argo-cd/issues/1609)
- Adds `argocd logout` [#1210](https://github.com/argoproj/argo-cd/issues/1210)
- Make it possible to set Helm release name different from Argo CD app name. [#1066](https://github.com/argoproj/argo-cd/issues/1066)
- Add ability to specify system namespace during cluster add operation [#1661](https://github.com/argoproj/argo-cd/pull/1661)
- Make listener and metrics ports configurable [#1647](https://github.com/argoproj/argo-cd/pull/1647)
- Using SSH keys to authenticate kustomize bases from git [#827](https://github.com/argoproj/argo-cd/issues/827)
- Adds `argocd app sync APPNAME --async` [#1728](https://github.com/argoproj/argo-cd/issues/1728)
- Allow users to define app specific urls to expose in the UI [#1677](https://github.com/argoproj/argo-cd/issues/1677)
- Error view instead of blank page in UI [#1375](https://github.com/argoproj/argo-cd/issues/1375)
- Project Editor: Whitelisted Cluster Resources doesn't strip whitespace [#1693](https://github.com/argoproj/argo-cd/issues/1693)
- Eliminate unnecessary git interactions for top-level resource changes (#1919)
- Ability to rotate the bearer token used to manage external clusters (#1084)
#### Bug Fixes
- Project Editor: Whitelisted Cluster Resources doesn't strip whitespace [#1693](https://github.com/argoproj/argo-cd/issues/1693)
- \[ui small bug\] menu position outside block [#1711](https://github.com/argoproj/argo-cd/issues/1711)
- UI will crash when create application without destination namespace [#1701](https://github.com/argoproj/argo-cd/issues/1701)
- ArgoCD synchronization failed due to internal error [#1697](https://github.com/argoproj/argo-cd/issues/1697)
- Replicasets ordering is not stable on app tree view [#1668](https://github.com/argoproj/argo-cd/issues/1668)
- Stuck processor on App Controller after deleting application with incomplete operation [#1665](https://github.com/argoproj/argo-cd/issues/1665)
- Role edit page fails with JS error [#1662](https://github.com/argoproj/argo-cd/issues/1662)
- failed parsing on parameters with comma [#1660](https://github.com/argoproj/argo-cd/issues/1660)
- Handle nil obj when processing custom actions [#1700](https://github.com/argoproj/argo-cd/pull/1700)
- Account for missing fields in Rollout HealthStatus [#1699](https://github.com/argoproj/argo-cd/pull/1699)
- Sync operation unnecessary waits for a healthy state of all resources [#1715](https://github.com/argoproj/argo-cd/issues/1715)
- argocd app sync hangs when cluster is not configured (#1935)
- Do not allow app-of-app child app's Missing status to affect parent (#1954)
- Argo CD doesn't handle k8s objects whose size exceeds 1MB well (#1685)
- Secret data not redacted in last-applied-configuration (#897)
- Running app actions requires only read privileges (#1827)
- UI should allow editing repo URL (#1763)
- Make status fields as optional fields (#1779)
- Use correct healthcheck for Rollout with empty steps list (#1776)
#### Other
- Add Prometheus metrics for git repo interactions (#1912)
- App controller should log additional information during app syncing (#1909)
- Make sure api server to repo server grpc calls have timeout (#1820)
- Forked tool processes should timeout (#1821)
- Add health check to the controller deployment (#1785)
#### Contributors
* [Aditya Gupta](https://github.com/AdityaGupta1)
* [Alex Collins](https://github.com/alexec)
* [Alex Matyushentsev](https://github.com/alexmt)
* [Danny Thomson](https://github.com/dthomson25)
* [jannfis](https://github.com/jannfis)
* [Jesse Suen](https://github.com/jessesuen)
* [Liviu Costea](https://github.com/lcostea)
* [narg95](https://github.com/narg95)
* [Simon Behar](https://github.com/simster7)
See also [milestone v1.1](https://github.com/argoproj/argo-cd/milestone/13)
## v1.0.0 (2019-05-16)
### New Features

Dockerfile

@@ -1,19 +1,13 @@
ARG BASE_IMAGE=debian:10-slim
ARG BASE_IMAGE=debian:9.5-slim
####################################################################################################
# Builder image
# Initial stage which pulls and prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM golang:1.12.6 as builder
RUN echo 'deb http://deb.debian.org/debian buster-backports main' >> /etc/apt/sources.list
FROM golang:1.11.4 as builder
RUN apt-get update && apt-get install -y \
openssh-server \
nginx \
fcgiwrap \
git \
git-lfs \
make \
wget \
gcc \
@@ -23,16 +17,79 @@ RUN apt-get update && apt-get install -y \
WORKDIR /tmp
ADD hack/install.sh .
ADD hack/installers installers
# Install docker
ENV DOCKER_CHANNEL stable
ENV DOCKER_VERSION 18.09.1
RUN wget -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz" && \
tar --extract --file docker.tgz --strip-components 1 --directory /usr/local/bin/ && \
rm docker.tgz
RUN ./install.sh dep-linux
RUN ./install.sh packr-linux
RUN ./install.sh kubectl-linux
RUN ./install.sh ksonnet-linux
RUN ./install.sh helm-linux
RUN ./install.sh kustomize-linux
RUN ./install.sh aws-iam-authenticator-linux
# Install dep
ENV DEP_VERSION=0.5.0
RUN wget https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 -O /usr/local/bin/dep && \
chmod +x /usr/local/bin/dep
# Install gometalinter
ENV GOMETALINTER_VERSION=2.0.12
RUN curl -sLo- https://github.com/alecthomas/gometalinter/releases/download/v${GOMETALINTER_VERSION}/gometalinter-${GOMETALINTER_VERSION}-linux-amd64.tar.gz | \
tar -xzC "$GOPATH/bin" --exclude COPYING --exclude README.md --strip-components 1 -f- && \
ln -s $GOPATH/bin/gometalinter $GOPATH/bin/gometalinter.v2
# Install packr
ENV PACKR_VERSION=1.21.9
RUN wget https://github.com/gobuffalo/packr/releases/download/v${PACKR_VERSION}/packr_${PACKR_VERSION}_linux_amd64.tar.gz && \
tar -vxf packr*.tar.gz -C /tmp/ && \
mv /tmp/packr /usr/local/bin/packr
# Install kubectl
# NOTE: keep the version synced with https://storage.googleapis.com/kubernetes-release/release/stable.txt
ENV KUBECTL_VERSION=1.14.0
RUN curl -L -o /usr/local/bin/kubectl -LO https://storage.googleapis.com/kubernetes-release/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl && \
chmod +x /usr/local/bin/kubectl && \
kubectl version --client
# Install ksonnet
ENV KSONNET_VERSION=0.13.1
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /usr/local/bin/ks && \
ks version
# Install helm
ENV HELM_VERSION=2.12.1
RUN wget https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
tar -C /tmp/ -xf helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
mv /tmp/linux-amd64/helm /usr/local/bin/helm && \
helm version --client
# Install kustomize
ENV KUSTOMIZE1_VERSION=1.0.11
RUN curl -L -o /usr/local/bin/kustomize1 https://github.com/kubernetes-sigs/kustomize/releases/download/v${KUSTOMIZE1_VERSION}/kustomize_${KUSTOMIZE1_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/kustomize1 && \
kustomize1 version
ENV KUSTOMIZE_VERSION=2.0.3
RUN curl -L -o /usr/local/bin/kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/kustomize && \
kustomize version
# Install AWS IAM Authenticator
ENV AWS_IAM_AUTHENTICATOR_VERSION=0.4.0-alpha.1
RUN curl -L -o /usr/local/bin/aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/${AWS_IAM_AUTHENTICATOR_VERSION}/aws-iam-authenticator_${AWS_IAM_AUTHENTICATOR_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/aws-iam-authenticator
# Install golangci-lint
RUN wget https://install.goreleaser.com/github.com/golangci/golangci-lint.sh && \
chmod +x ./golangci-lint.sh && \
./golangci-lint.sh -b $GOPATH/bin && \
golangci-lint linters
COPY .golangci.yml ${GOPATH}/src/dummy/.golangci.yml
RUN cd ${GOPATH}/src/dummy && \
touch dummy.go \
golangci-lint run
####################################################################################################
# Argo CD Base - used as the base for both the release and dev argocd images
@@ -41,35 +98,23 @@ FROM $BASE_IMAGE as argocd-base
USER root
RUN echo 'deb http://deb.debian.org/debian buster-backports main' >> /etc/apt/sources.list
RUN groupadd -g 999 argocd && \
useradd -r -u 999 -g argocd argocd && \
mkdir -p /home/argocd && \
chown argocd:0 /home/argocd && \
chmod g=u /home/argocd && \
chmod g=u /etc/passwd && \
chown argocd:argocd /home/argocd && \
apt-get update && \
apt-get install -y git git-lfs && \
apt-get install -y git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY hack/ssh_known_hosts /etc/ssh/ssh_known_hosts
COPY hack/git-ask-pass.sh /usr/local/bin/git-ask-pass.sh
COPY --from=builder /usr/local/bin/ks /usr/local/bin/ks
COPY --from=builder /usr/local/bin/helm /usr/local/bin/helm
COPY --from=builder /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY --from=builder /usr/local/bin/kustomize1 /usr/local/bin/kustomize1
COPY --from=builder /usr/local/bin/kustomize /usr/local/bin/kustomize
COPY --from=builder /usr/local/bin/aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
# script to add current (possibly arbitrary) user to /etc/passwd at runtime
# (if it's not already there, to be openshift friendly)
COPY uid_entrypoint.sh /usr/local/bin/uid_entrypoint.sh
# support for mounting configuration from a configmap
RUN mkdir -p /app/config/ssh && \
touch /app/config/ssh/ssh_known_hosts && \
ln -s /app/config/ssh/ssh_known_hosts /etc/ssh/ssh_known_hosts
RUN mkdir -p /app/config/tls
# workaround ksonnet issue https://github.com/ksonnet/ksonnet/issues/298
ENV USER=argocd
@@ -77,26 +122,11 @@ ENV USER=argocd
USER argocd
WORKDIR /home/argocd
####################################################################################################
# Argo CD UI stage
####################################################################################################
FROM node:11.15.0 as argocd-ui
WORKDIR /src
ADD ["ui/package.json", "ui/yarn.lock", "./"]
RUN yarn install
ADD ["ui/", "."]
ARG ARGO_VERSION=latest
ENV ARGO_VERSION=$ARGO_VERSION
RUN NODE_ENV='production' yarn build
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM golang:1.12.6 as argocd-build
FROM golang:1.11.4 as argocd-build
COPY --from=builder /usr/local/bin/dep /usr/local/bin/dep
COPY --from=builder /usr/local/bin/packr /usr/local/bin/packr
@@ -123,5 +153,3 @@ RUN make cli server controller repo-server argocd-util && \
####################################################################################################
FROM argocd-base
COPY --from=argocd-build /go/src/github.com/argoproj/argo-cd/dist/argocd* /usr/local/bin/
COPY --from=argocd-ui ./src/dist/app /shared/app

Dockerfile.dev

@@ -3,4 +3,3 @@
####################################################################################################
FROM argocd-base
COPY argocd* /usr/local/bin/
COPY --from=argocd-ui ./src/dist/app /shared/app

Gopkg.lock

@@ -1,14 +1,6 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
digest = "1:6d5a057da97a9dbdb10e7beedd2f43452b6bf7691001c0c8886e8dacf5610349"
name = "bou.ke/monkey"
packages = ["."]
pruneopts = ""
revision = "bdf6dea004c6fd1cdf4b25da8ad45a606c09409a"
version = "v1.0.1"
[[projects]]
digest = "1:9702dc153c9bb6ee7ee0587c248b7024700e89e4a7be284faaeeab9da32e1c6b"
name = "cloud.google.com/go"
@@ -56,11 +48,23 @@
pruneopts = ""
revision = "09c41003ee1d5015b75f331e52215512e7145b8d"
[[projects]]
branch = "master"
digest = "1:0cac9c70f3308d54ed601878aa66423eb95c374616fdd7d2ea4e2d18b045ae62"
name = "github.com/ant31/crd-validation"
packages = ["pkg"]
pruneopts = ""
revision = "38f6a293f140402953f884b015014e0cd519bbb3"
[[projects]]
branch = "master"
digest = "1:0caf9208419fa5db5a0ca7112affaa9550c54291dda8e2abac0c0e76181c959e"
name = "github.com/argoproj/argo"
packages = ["util"]
packages = [
"pkg/apis/workflow",
"pkg/apis/workflow/v1alpha1",
"util",
]
pruneopts = ""
revision = "7ef1cea68c94f7f0e1e2f8bd75bedc5a7df8af90"
@@ -69,7 +73,6 @@
digest = "1:4f6afcf4ebe041b3d4aa7926d09344b48d2f588e1f957526bbbe54f9cbb366a1"
name = "github.com/argoproj/pkg"
packages = [
"errors",
"exec",
"rand",
"time",
@@ -347,14 +350,6 @@
revision = "5ccd90ef52e1e632236f7326478d4faa74f99438"
version = "v0.2.3"
[[projects]]
branch = "master"
digest = "1:9a06e7365c6039daf4db9bbf79650e2933a2880982cbab8106cb74a36617f40d"
name = "github.com/gogits/go-gogs-client"
packages = ["."]
pruneopts = ""
revision = "5a05380e4bc2440e0ec12f54f6f45648dbdd5e55"
[[projects]]
digest = "1:6e73003ecd35f4487a5e88270d3ca0a81bc80dc88053ac7e4dcfec5fba30d918"
name = "github.com/gogo/protobuf"
@@ -738,14 +733,6 @@
pruneopts = ""
revision = "05ee40e3a273f7245e8777337fc7b46e533a9a92"
[[projects]]
digest = "1:6bb048133650d1fb7fbff9fb3c35bd5c7e8653fc95c3bae6df94cd17d1580278"
name = "github.com/robfig/cron"
packages = ["."]
pruneopts = ""
revision = "45fbe1491cdd47d74d1bf1396286d67faee8b8b5"
version = "v3.0.0"
[[projects]]
digest = "1:5f47c69f85311c4dc292be6cc995a0a3fe8337a6ce38ef4f71e5b7efd5ad42e0"
name = "github.com/rs/cors"
@@ -765,10 +752,7 @@
[[projects]]
digest = "1:01d968ff6535945510c944983eee024e81f1c949043e9bbfe5ab206ebc3588a4"
name = "github.com/sirupsen/logrus"
packages = [
".",
"hooks/test",
]
packages = ["."]
pruneopts = ""
revision = "a67f783a3814b8729bd2dac5780b5f78f8dbd64d"
version = "v1.1.0"
@@ -790,19 +774,19 @@
version = "v0.1.4"
[[projects]]
digest = "1:9ba49264cef4386aded205f9cb5b1f2d30f983d7dc37a21c780d9db3edfac9a7"
digest = "1:2208a80fc3259291e43b30f42f844d18f4218036dff510f42c653ec9890d460a"
name = "github.com/spf13/cobra"
packages = ["."]
pruneopts = ""
revision = "fe5e611709b0c57fa4a89136deaa8e1d4004d053"
revision = "7b2c5ac9fc04fc5efafb60700713d4fa609b777b"
version = "v0.0.1"
[[projects]]
digest = "1:8e243c568f36b09031ec18dff5f7d2769dcf5ca4d624ea511c8e3197dc3d352d"
digest = "1:261bc565833ef4f02121450d74eb88d5ae4bd74bfe5d0e862cddb8550ec35000"
name = "github.com/spf13/pflag"
packages = ["."]
pruneopts = ""
revision = "583c0c0531f06d5278b7d917446061adc344b5cd"
version = "v1.0.1"
revision = "e57e3eeb33f795204c1ca35f56c44f83227c6e66"
[[projects]]
digest = "1:b1861b9a1aa0801b0b62945ed7477c1ab61a4bd03b55dfbc27f6d4f378110c8c"
@@ -1117,18 +1101,17 @@
version = "v1.15.0"
[[projects]]
digest = "1:adf5b0ae3467c3182757ecb86fbfe819939473bb870a42789dc1a3e7729397cd"
name = "gopkg.in/go-playground/webhooks.v5"
digest = "1:bf7444e1e6a36e633f4f1624a67b9e4734cfb879c27ac0a2082ac16aff8462ac"
name = "gopkg.in/go-playground/webhooks.v3"
packages = [
".",
"bitbucket",
"bitbucket-server",
"github",
"gitlab",
"gogs",
]
pruneopts = ""
revision = "175186584584a83966dc9a7b8ec6c3d3a4ce6110"
version = "v5.11.0"
revision = "5580947e3ec83427ef5f6f2392eddca8dde5d99a"
version = "v3.11.0"
[[projects]]
digest = "1:e5d1fb981765b6f7513f793a3fcaac7158408cca77f75f7311ac82cc88e9c445"
@@ -1294,9 +1277,6 @@
packages = [
"pkg/apis/apiextensions",
"pkg/apis/apiextensions/v1beta1",
"pkg/client/clientset/clientset",
"pkg/client/clientset/clientset/scheme",
"pkg/client/clientset/clientset/typed/apiextensions/v1beta1",
]
pruneopts = ""
revision = "7f7d2b94eca3a7a1c49840e119a8bc03c3afb1e3"
@@ -1516,18 +1496,6 @@
revision = "d98d8acdac006fb39831f1b25640813fef9c314f"
version = "v0.3.3"
[[projects]]
branch = "master"
digest = "1:0d737d598e9db0a38d6ef6cba514c358b9fe7e1bc6b1128d02b2622700c75f2a"
name = "k8s.io/kube-aggregator"
packages = [
"pkg/apis/apiregistration",
"pkg/apis/apiregistration/v1",
"pkg/apis/apiregistration/v1beta1",
]
pruneopts = ""
revision = "e80910364765199a4baebd4dec54c885fe52b680"
[[projects]]
digest = "1:42ea993b351fdd39b9aad3c9ebe71f2fdb5d1f8d12eed24e71c3dff1a31b2a43"
name = "k8s.io/kube-openapi"
@@ -1568,7 +1536,6 @@
packages = [
"buffer",
"integer",
"pointer",
"trace",
]
pruneopts = ""
@@ -1594,11 +1561,11 @@
analyzer-name = "dep"
analyzer-version = 1
input-imports = [
"bou.ke/monkey",
"github.com/Masterminds/semver",
"github.com/TomOnTime/utfutil",
"github.com/ant31/crd-validation/pkg",
"github.com/argoproj/argo/pkg/apis/workflow/v1alpha1",
"github.com/argoproj/argo/util",
"github.com/argoproj/pkg/errors",
"github.com/argoproj/pkg/exec",
"github.com/argoproj/pkg/time",
"github.com/casbin/casbin",
@@ -1616,7 +1583,6 @@
"github.com/go-redis/redis",
"github.com/gobuffalo/packr",
"github.com/gobwas/glob",
"github.com/gogits/go-gogs-client",
"github.com/gogo/protobuf/gogoproto",
"github.com/gogo/protobuf/proto",
"github.com/gogo/protobuf/protoc-gen-gofast",
@@ -1644,9 +1610,7 @@
"github.com/pkg/errors",
"github.com/prometheus/client_golang/prometheus",
"github.com/prometheus/client_golang/prometheus/promhttp",
"github.com/robfig/cron",
"github.com/sirupsen/logrus",
"github.com/sirupsen/logrus/hooks/test",
"github.com/skratchdot/open-golang/open",
"github.com/soheilhy/cmux",
"github.com/spf13/cobra",
@@ -1659,7 +1623,6 @@
"github.com/yuin/gopher-lua",
"golang.org/x/crypto/bcrypt",
"golang.org/x/crypto/ssh",
"golang.org/x/crypto/ssh/knownhosts",
"golang.org/x/crypto/ssh/terminal",
"golang.org/x/net/context",
"golang.org/x/oauth2",
@@ -1673,20 +1636,17 @@
"google.golang.org/grpc/metadata",
"google.golang.org/grpc/reflection",
"google.golang.org/grpc/status",
"gopkg.in/go-playground/webhooks.v5/bitbucket",
"gopkg.in/go-playground/webhooks.v5/bitbucket-server",
"gopkg.in/go-playground/webhooks.v5/github",
"gopkg.in/go-playground/webhooks.v5/gitlab",
"gopkg.in/go-playground/webhooks.v5/gogs",
"gopkg.in/go-playground/webhooks.v3",
"gopkg.in/go-playground/webhooks.v3/bitbucket",
"gopkg.in/go-playground/webhooks.v3/github",
"gopkg.in/go-playground/webhooks.v3/gitlab",
"gopkg.in/src-d/go-git.v4",
"gopkg.in/src-d/go-git.v4/config",
"gopkg.in/src-d/go-git.v4/plumbing",
"gopkg.in/src-d/go-git.v4/plumbing/transport",
"gopkg.in/src-d/go-git.v4/plumbing/transport/client",
"gopkg.in/src-d/go-git.v4/plumbing/transport/http",
"gopkg.in/src-d/go-git.v4/plumbing/transport/ssh",
"gopkg.in/src-d/go-git.v4/storage/memory",
"gopkg.in/src-d/go-git.v4/utils/ioutil",
"gopkg.in/yaml.v2",
"k8s.io/api/apps/v1",
"k8s.io/api/batch/v1",
@@ -1694,7 +1654,6 @@
"k8s.io/api/extensions/v1beta1",
"k8s.io/api/rbac/v1",
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1",
"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset",
"k8s.io/apimachinery/pkg/api/equality",
"k8s.io/apimachinery/pkg/api/errors",
"k8s.io/apimachinery/pkg/apis/meta/v1",
@@ -1729,8 +1688,6 @@
"k8s.io/client-go/util/workqueue",
"k8s.io/code-generator/cmd/go-to-protobuf",
"k8s.io/klog",
"k8s.io/kube-aggregator/pkg/apis/apiregistration/v1",
"k8s.io/kube-aggregator/pkg/apis/apiregistration/v1beta1",
"k8s.io/kube-openapi/cmd/openapi-gen",
"k8s.io/kube-openapi/pkg/common",
"k8s.io/kubernetes/pkg/api/v1/pod",
@@ -1740,7 +1697,6 @@
"k8s.io/kubernetes/pkg/kubectl/scheme",
"k8s.io/kubernetes/pkg/kubectl/util/term",
"k8s.io/kubernetes/pkg/util/node",
"k8s.io/utils/pointer",
"layeh.com/gopher-json",
]
solver-name = "gps-cdcl"

Gopkg.toml

@@ -71,10 +71,6 @@ required = [
branch = "master"
name = "github.com/yudai/gojsondiff"
[[constraint]]
name = "github.com/spf13/cobra"
revision = "fe5e611709b0c57fa4a89136deaa8e1d4004d053"
# TODO: move off of k8s.io/kube-openapi and use controller-tools for CRD spec generation
# (override argoproj/argo contraint on master)
[[override]]

Makefile

@@ -9,21 +9,19 @@ GIT_COMMIT=$(shell git rev-parse HEAD)
GIT_TAG=$(shell if [ -z "`git status --porcelain`" ]; then git describe --exact-match --tags HEAD 2>/dev/null; fi)
GIT_TREE_STATE=$(shell if [ -z "`git status --porcelain`" ]; then echo "clean" ; else echo "dirty"; fi)
PACKR_CMD=$(shell if [ "`which packr`" ]; then echo "packr"; else echo "go run vendor/github.com/gobuffalo/packr/packr/main.go"; fi)
VOLUME_MOUNT=$(shell [[ $(go env GOOS)=="darwin" ]] && echo ":delegated" || echo "")
define run-in-dev-tool
docker run --rm -it -u $(shell id -u) -e HOME=/home/user -v ${CURRENT_DIR}:/go/src/github.com/argoproj/argo-cd${VOLUME_MOUNT} -w /go/src/github.com/argoproj/argo-cd argocd-dev-tools bash -c "GOPATH=/go $(1)"
endef
PATH:=$(PATH):$(PWD)/hack
# docker image publishing options
DOCKER_PUSH?=false
IMAGE_NAMESPACE?=
IMAGE_TAG?=latest
# perform static compilation
STATIC_BUILD?=true
# build development images
DEV_IMAGE?=false
# lint is memory and CPU intensive, so we limit it on CI to mitigate OOM
LINT_GOGC?=off
LINT_CONCURRENCY?=8
override LDFLAGS += \
-X ${PACKAGE}.version=${VERSION} \
@@ -65,12 +63,8 @@ openapigen:
clientgen:
./hack/update-codegen.sh
.PHONY: codegen-local
codegen-local: protogen clientgen openapigen manifests-local
.PHONY: codegen
codegen: dev-tools-image
$(call run-in-dev-tool,make codegen-local)
codegen: protogen clientgen openapigen manifests
.PHONY: cli
cli: clean-debug
@@ -86,20 +80,11 @@ release-cli: clean-debug image
.PHONY: argocd-util
argocd-util: clean-debug
# Build argocd-util as a statically linked binary, so it could run within the alpine-based dex container (argoproj/argo-cd#844)
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-util ./cmd/argocd-util
.PHONY: dev-tools-image
dev-tools-image:
cd hack && docker build -t argocd-dev-tools . -f Dockerfile.dev-tools
.PHONY: manifests-local
manifests-local:
./hack/update-manifests.sh
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-util ./cmd/argocd-util
.PHONY: manifests
manifests:
$(call run-in-dev-tool,make manifests-local IMAGE_TAG='${IMAGE_TAG}')
./hack/update-manifests.sh
# NOTE: we use packr to do the build instead of go, since we embed swagger files and policy.csv
# files into the go binary
@@ -109,7 +94,7 @@ server: clean-debug
.PHONY: repo-server
repo-server:
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-repo-server ./cmd/argocd-repo-server
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-repo-server ./cmd/argocd-repo-server
.PHONY: controller
controller:
@@ -124,10 +109,8 @@ ifeq ($(DEV_IMAGE), true)
# The "dev" image builds the binaries from the users desktop environment (instead of in Docker)
# which speeds up builds. Dockerfile.dev needs to be copied into dist to perform the build, since
# the dist directory is under .dockerignore.
IMAGE_TAG="dev-$(shell git describe --always --dirty)"
image: packr
docker build -t argocd-base --target argocd-base .
docker build -t argocd-ui --target argocd-ui .
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 dist/packr build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd/argocd-server
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 dist/packr build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-application-controller ./cmd/argocd-application-controller
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 dist/packr build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-repo-server ./cmd/argocd-repo-server
@@ -145,24 +128,17 @@ endif
.PHONY: builder-image
builder-image:
docker build -t $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) --target builder .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) ; fi
.PHONY: dep
dep:
dep ensure -v
docker push $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG)
.PHONY: dep-ensure
dep-ensure:
dep ensure -no-vendor
.PHONY: install-lint-tools
install-lint-tools:
./hack/install.sh lint-tools
.PHONY: lint
lint:
golangci-lint --version
golangci-lint run --fix --verbose
# golangci-lint does not do a good job of formatting imports
goimports -local github.com/argoproj/argo-cd -w `find . ! -path './vendor/*' ! -path './pkg/client/*' -type f -name '*.go'`
GOGC=$(LINT_GOGC) golangci-lint run --fix --verbose --concurrency $(LINT_CONCURRENCY)
.PHONY: build
build:
@@ -170,26 +146,23 @@ build:
.PHONY: test
test:
./hack/test.sh -coverprofile=coverage.out `go list ./... | grep -v 'test/e2e'`
go test -v -covermode=count -coverprofile=coverage.out `go list ./... | grep -v "test/e2e"`
.PHONY: cover
cover:
go tool cover -html=coverage.out
.PHONY: test-e2e
test-e2e:
./hack/test.sh -timeout 15m ./test/e2e
test-e2e: cli
go test -v -timeout 10m ./test/e2e
.PHONY: start-e2e
start-e2e: cli
killall goreman || true
# check we can connect to Docker to start Redis
docker version
kubectl create ns argocd-e2e || true
kubectl config set-context --current --namespace=argocd-e2e
kubens argocd-e2e
kustomize build test/manifests/base | kubectl apply -f -
# set paths for locally managed ssh known hosts and tls certs data
ARGOCD_SSH_DATA_PATH=/tmp/argo-e2e/app/config/ssh \
ARGOCD_TLS_DATA_PATH=/tmp/argo-e2e/app/config/tls \
ARGOCD_E2E_DISABLE_AUTH=false \
ARGOCD_ZJWT_FEATURE_FLAG=always \
goreman start
goreman start
# Cleans VSCode debug.test files from sub-dirs to prevent them from being included in packr boxes
.PHONY: clean-debug
@@ -203,12 +176,8 @@ clean: clean-debug
.PHONY: start
start:
killall goreman || true
# check we can connect to Docker to start Redis
docker version
kubectl create ns argocd || true
kubens argocd
ARGOCD_ZJWT_FEATURE_FLAG=always \
goreman start
goreman start
.PHONY: pre-commit
pre-commit: dep-ensure codegen build lint test
@@ -220,4 +189,4 @@ release-precheck: manifests
@if [ "$(GIT_TAG)" != "v`cat VERSION`" ]; then echo 'VERSION does not match git tag'; exit 1; fi
.PHONY: release
release: pre-commit release-precheck image release-cli
release: release-precheck pre-commit image release-cli
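
Note: the Makefile above builds the server binaries with packr rather than plain go build, per the NOTE in the server target, so that swagger files and policy.csv are embedded into the Go binary. A minimal sketch of the packr v1 pattern; the box path and file name below are illustrative, not the repository's actual layout:

package main

import (
	"fmt"

	"github.com/gobuffalo/packr"
)

func main() {
	// packr.NewBox marks ./assets for embedding; the packr CLI rewrites
	// this call so the directory contents ship inside the binary.
	box := packr.NewBox("./assets")
	// String returns the embedded file's contents ("" if it is missing).
	fmt.Println(box.String("builtin-policy.csv"))
}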

OWNERS
View File

@@ -1,12 +1,8 @@
owners:
- alexec
- alexmt
- jessesuen
reviewers:
- jannfis
approvers:
- alexec
- alexmt
- jessesuen
- merenbach

View File

@@ -1,7 +1,6 @@
controller: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-application-controller/main.go --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
api-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --staticassets ui/dist/app"
dex: sh -c "go run ./cmd/argocd-util/main.go gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/dexidp/dex:v2.14.0 serve /dex.yaml"
redis: docker run --rm --name argocd-redis -i -p ${ARGOCD_E2E_REDIS_PORT:-6379}:${ARGOCD_E2E_REDIS_PORT:-6379} redis:5.0.3-alpine --save "" --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}
repo-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-repo-server/main.go --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379}"
ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
git-server: test/fixture/testrepos/start-git.sh
controller: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-application-controller/main.go --loglevel debug --redis localhost:6379 --repo-server localhost:8081"
api-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:6379 --disable-auth --insecure --dex-server http://localhost:5556 --repo-server localhost:8081 --staticassets ui/dist/app"
dex: sh -c "go run ./cmd/argocd-util/main.go gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p 5556:5556 -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/dexidp/dex:v2.14.0 serve /dex.yaml"
redis: docker run --rm --name argocd-redis -i -p 6379:6379 redis:5.0.3-alpine --save "" --appendonly no
repo-server: sh -c "FORCE_LOG_COLORS=1 go run ./cmd/argocd-repo-server/main.go --loglevel debug --redis localhost:6379"
ui: sh -c 'cd ui && yarn start'

View File

@@ -1,6 +1,5 @@
[![slack](https://img.shields.io/badge/slack-argoproj-brightgreen.svg?logo=slack)](https://argoproj.github.io/community/join-slack)
[![codecov](https://codecov.io/gh/argoproj/argo-cd/branch/master/graph/badge.svg)](https://codecov.io/gh/argoproj/argo-cd)
[![Release Version](https://img.shields.io/github/v/release/argoproj/argo-cd?label=argo-cd)](https://github.com/argoproj/argo-cd/releases/latest)
# Argo CD - Declarative Continuous Delivery for Kubernetes
@@ -13,7 +12,6 @@ Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
## Why Argo CD?
Application definitions, configurations, and environments should be declarative and version controlled.
Application deployment and lifecycle management should be automated, auditable, and easy to understand.
@@ -21,41 +19,24 @@ Application deployment and lifecycle management should be automated, auditable,
Organizations below are **officially** using Argo CD. Please send a PR with your organization name if you are using Argo CD.
1. [ANSTO - Australian Synchrotron](https://www.synchrotron.org.au/)
1. [Codility](https://www.codility.com/)
1. [Commonbond](https://commonbond.co/)
1. [CyberAgent](https://www.cyberagent.co.jp/en/)
1. [END.](https://www.endclothing.com/)
1. [Future PLC](https://www.futureplc.com/)
1. [GMETRI](https://gmetri.com/)
1. [Intuit](https://www.intuit.com/)
1. [KintoHub](https://www.kintohub.com/)
1. [KompiTech GmbH](https://www.kompitech.com/)
1. [Lytt](https://www.lytt.co/)
1. [Mambu](https://www.mambu.com/)
1. [Mirantis](https://mirantis.com/)
1. [OpenSaaS Studio](https://opensaas.studio)
1. [Optoro](https://www.optoro.com/)
1. [Riskified](https://www.riskified.com/)
1. [Saildrone](https://www.saildrone.com/)
1. [Tesla](https://tesla.com/)
1. [tZERO](https://www.tzero.com/)
1. [Ticketmaster](https://ticketmaster.com)
1. [Yieldlab](https://www.yieldlab.de/)
1. [UBIO](https://ub.io/)
1. [Volvo Cars](https://www.volvocars.com/)
## Documentation
To learn more about Argo CD [go to the complete documentation](https://argoproj.github.io/argo-cd/).
## Community Blogs and Presentations
1. [Comparison of Argo CD, Spinnaker, Jenkins X, and Tekton](https://www.inovex.de/blog/spinnaker-vs-argo-cd-vs-tekton-vs-jenkins-x/)
1. [Simplify and Automate Deployments Using GitOps with IBM Multicloud Manager 3.1.2](https://medium.com/ibm-cloud/simplify-and-automate-deployments-using-gitops-with-ibm-multicloud-manager-3-1-2-4395af317359)
1. [GitOps for Kubeflow using Argo CD](https://www.kubeflow.org/docs/use-cases/gitops-for-kubeflow/)
1. [GitOps Toolsets on Kubernetes with CircleCI and Argo CD](https://www.digitalocean.com/community/tutorials/webinar-series-gitops-tool-sets-on-kubernetes-with-circleci-and-argo-cd)
1. [Simplify and Automate Deployments Using GitOps with IBM Multicloud Manager](https://www.ibm.com/blogs/bluemix/2019/02/simplify-and-automate-deployments-using-gitops-with-ibm-multicloud-manager-3-1-2/)
1. [CI/CD in Light Speed with K8s and Argo CD](https://www.youtube.com/watch?v=OdzH82VpMwI&feature=youtu.be)
1. [Machine Learning as Code](https://www.youtube.com/watch?v=VXrGp5er1ZE&t=0s&index=135&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU). Among other things, describes how Kubeflow uses Argo CD to implement GitOps for ML
1. [Argo CD - GitOps Continuous Delivery for Kubernetes](https://www.youtube.com/watch?v=aWDIQMbp1cc&feature=youtu.be&t=1m4s)

View File

@@ -1 +1 @@
1.3.2
1.1.2

View File

@@ -1,22 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="131" height="20">
<linearGradient id="b" x2="0" y2="100%">
<stop offset="0" stop-color="#bbb" stop-opacity=".1"/>
<stop offset="1" stop-opacity=".1"/>
</linearGradient>
<clipPath id="a">
<rect width="131" height="20" rx="3" fill="#fff"/>
</clipPath>
<g clip-path="url(#a)">
<path id="leftPath" fill="#555" d="M0 0h74v20H0z"/>
<path id="rightPath" fill="#4c1" d="M74 0h57v20H74z"/>
<path fill="url(#b)" d="M0 0h131v20H0z"/>
</g>
<g fill="#fff" text-anchor="middle" font-family="DejaVu Sans,Verdana,Geneva,sans-serif" font-size="90">
<image x="5" y="3" width="14" height="14" xlink:href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAB8AAAAeCAYAAADU8sWcAAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAACXBIWXMAAABPAAAATwFjiv3XAAACC2lUWHRYTUw6Y29tLmFkb2JlLnhtcAAAAAAAPHg6eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1QIENvcmUgNS40LjAiPgogICA8cmRmOlJERiB4bWxuczpyZGY9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkvMDIvMjItcmRmLXN5bnRheC1ucyMiPgogICAgICA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgICAgICAgICB4bWxuczp0aWZmPSJodHRwOi8vbnMuYWRvYmUuY29tL3RpZmYvMS4wLyI+CiAgICAgICAgIDx0aWZmOlJlc29sdXRpb25Vbml0PjI8L3RpZmY6UmVzb2x1dGlvblVuaXQ+CiAgICAgICAgIDx0aWZmOkNvbXByZXNzaW9uPjE8L3RpZmY6Q29tcHJlc3Npb24+CiAgICAgICAgIDx0aWZmOk9yaWVudGF0aW9uPjE8L3RpZmY6T3JpZW50YXRpb24+CiAgICAgICAgIDx0aWZmOlBob3RvbWV0cmljSW50ZXJwcmV0YXRpb24+MjwvdGlmZjpQaG90b21ldHJpY0ludGVycHJldGF0aW9uPgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KD0UqkwAACpZJREFUSA11VnmQFcUZ//qY4527by92WWTXwHKqEJDDg0MQUioKSWopK4oaS9RAlMofUSPGbFlBSaViKomgQSNmIdECkUS8ophdozEJArKoKCsgIFlY2Pu9N29mero7X78FxCR2Vb8309PTv+/4fb9vCHz1II2Nm+jmzYul2bJsdlMyO7LyPGlZtYqQck1ZHCgwLZWiROeJgB7bDzpYb/7o0y/emzXv4Pts8+ZGBUC0uf/vQf57wdw3NTVRnOYFfXvj6pJcadkUmbDGRoxVSEItSYAQQrUmRBP81VoRpkFTJUMuZA/3g/28q3dn89b7u885D4348vgf8NPAxY033LJmSlDizlSOU94f6UhoHaZdC/3QCHWON2gDYBhAYyRAWgyD4QSi1815f29++vv/+CoD2Lm2nAFGl8mndzyxMMgk5/iW6xzPi/xVk+oyE8+vLNt14FR/0uYW14ox0IwUJ4ZAaUpAaaBEYByiyLbikWWNnnThvPIxu+L717auVeb81tbWsyk4C34u8I3LnlhcSCameMzykg7TwzOJGIYWQj+M+voLYcSo/hxY1E65FECUq4G4RFP0nSjMCMVAIIKUnOHkw90L6qq/vXvbh02trV8yoBh2E0NMY9GiG+58fIGXTF6epyw7JOnamZQbz/tCnDrVVzgqqdwTT0ZTdWDN0wPpMh3ZPZqHLSw98C7Y4RwtnAodsTwGAg0ppkcxrlkYJFID+Z0bn/zelsFImzQRXfScNJFiOG689dcT/FT6GzlqedVJTHRpPH6yz/PyfTlx2IrLCpvSdWrv5OXkX9fOJB/Mnaz3XzItap+yMDoyZgEE7F1SceIghryeRNwnTBkEqhRIxiLF6bCpDXNybW2v/ruxcTzbt2+zZmfCvWT+zxNhbek3cxaPlyYsXVWSSHT25rxC3o+OWK6aDn7sZ3r79ZNY28w45Esxuhw5RjmVvEQPlIwgR8bOiE7VdrPaA2+TeGEEhFbhtAEmG5pzk5WqqbWTPv7jC8sLpgxNSRWZrUeWTAgor7EY9ytTiURvzg8GskGU5xxKkc33yDcX1yc7x3ijLxF+zZgoEljgCrSiXOWIG/WGrjifHxl1V9iyeKwW1lHNo5SKmDmcYinivwhjVpkor55sQj9+/D4MO9bpLW8RN6zJzMvbPDUkHaecE368N1+IUUJfIm5hPf1w2kTedlm/LgntmqEWEQGT3ScJYUh2NIxiiWupSUFasor1VgzXvPATfv6BC3Roq0E9MLqAXKQcAe1rqi/dv+q3DwQUw6djY6tiAaG1jLEw7nAn64WhcStPqZqpA6chap8aage077GwbQcJDx8Awq3T3DHlbeijUHWwytHFuuizCTergdQ+YocxLRHVPNcESRAJxqo60nbSvExvvvWx23MCVvucDyRt5hitwDRHHG09pZiaT7MlSUdUhtQGZttYRdwA4WnFQjFnDA4LX7UdIlgMYpaouBgGMnuBKVN/ZhgBRCMVynNAHOfRm777q4UcrW3EZxcEnD/kclYlpVaFKFIZrIRtluvdls/G9InnLZ9fpLXoJjwZAzZkDJ5mjkIj8HTCKETH9oD0eoikMSDQ5cTrLrUg4QoaFlxUH5MdYy6yAFAaYBHed6ODkMPOkMcFZggZKa0SkaL/jMeDpq5D1Vez9693V/wNUg1jQRY8yLX8BcSba8AaOgyBOdqtQBz7DNj8lZCeNY9SN6aiD3bTOc89s2iliDVvKa3uGxf6doQAxYE2oBbm0HQPG5IxqKhOg9ZpSSQj+pDU+jv5t7913spV5c70GcqLJwgMGw6ZJUuBLbgHPW0D4rgQdewDvnAllN58B0D918AvzVBn3jWi/p4Hq2/s+et1URgBRlVRMGVvBtITfwyuyYPAoIRmWSIrEqjZW1Ml/qN7dtRWzptfBzXDoaeri6TicWhes8Zsg9iMOaAyU0D3fw4qdSEkrryquP77xx+HZCwG3T09HEZfCBUXTxh125H9VZ9SHp1OfZF4aEWI9MM1AhmiVCmaIwV6a2F5ge2q8s7OBB8ytPiOZduw6pFHYOS4cUUQGsNWXj4MVLYdaGU9UAQ0o2H8eHh49WpAITReKqdmGC0PsvF2JB4Gu+i5RqKg5JSi82mOnu9GYGEhwbFjSOXadJII7Pa6Ed3hwU8igAU8lU7D/ffdZ1pFEUT19YI+sR9oZgLIjo8h6usDu2oIzJpxOc4ZphKQtoqFxw6LzxIV/RMxkabSUPAJRz3A5oc50x9T7Lf3xkR0R0xGSS+IRIFQOUUUkqvGXdR5tGXnLmjbYXI0OLCJykIBvFe2ABVHgCQqgMpjkH9xE9IY7cSgGd0w+8mOVjiy68DOh+sauusiwSM8F11nlhCm39/SvP7ux4qNZVfba/0TJ183Is9opWtxEXe5E/cC+Y5bf+CilmczqaC/JigEOjjYTrwtzwB8tAn4kJFGMIEmy0B98gb4B7tAYimHncd19ObLrP25rW0/rb76pQqsoKTEkKJgoT64biCObVx35xvGGz67qYW3Nl0RWYH/ftxmo7pyfpSoKFFjEiLxhE73jXaueHXlGw+PZ68rS8pQ01QlodUjTqcAfcQo8qGjQB36EwQfbFAuF8wjZbmNtSt
efZ4nC0uibKIHez92AeZEUln5wocGuLGxyUbg2cVEFp5a9pFz11MH/ZCP6M0V8gmbxWZL35K2paFmUmjDgCXAMQkjqv8wgFMGBFVSCw8g1wWsrB5IBrTDAvB0eaAkhVHCeGzSQDUm3bE9/1hs/d7dBhy5aSqAaNNWN6Mvdn9+e0KGfl8hZCd6vBwqABmISKSkCgkqr/a6QMWrwLpsKaYWhfrwi1ikfWBNvQmUhXItAlznICMlcopFFNXcEA25wBw/EIms2L4O1onZs5u46abFUsILDWhAc/OKo7H+/OtuEFqaUcsmIE8CR47zEMsSuJvRQcceODVmOqR/uAHKftAC6R+9AKcmzgVxci9qexwLTIDZ34+MR6FVErXECXw7kc+3/G7j8gPG0dbWJmQnum1+zDj3U2rJ0rWzvFRynu/YskMTb5vetmwI7xyeg7R0g072ynEGwa0PwZTRDbDn0GGI1j0Ii2pCKPCyqJTl+eei9tOl5Kr1eU2seuG5ZMBv2fjUIMkQymAWC6jouQE3jdlYZa43PLnsLbcn++eYVwi6FCQktTCEmO3QAx2rhOllAl6bOwsahg2FDTMvhRklWVSUMkxcWEyHJFbUqyCW9AtRSV//K18AF4XmbOWeBTegCF78ujTXf3hm+XvTeo49G8/67Sg+A0a0OGc6CDzIlFbDL3+8GPYuuxLWP7AYysprIQx9YNjdzAglyVkD3qGvd3dsWvv0infM2qBj53zr49rZsJsNZwa2WeTTFxtP3L1oe1VCzh3AWqOM2FJJsFBwbMuCUAgQyAqGrVVJ7Zdwyz2RZS/X/GbrguJ5eNYgyhfnncH5v+DmYcft18Y1T5S7fl9jPMN+4aM6IpeQPmgxThNA5AnuJFhIeI2tHafiNqEUW1XQL+8NrfRznENP1drNuTOA5/5/KezmAZ4zaJBSdVaUvQk5PsNTeqfrMsKQOui5LGKibBAjmKgSRk/RIMlc8BiWCLbI95Dx05jybrMkfsh+xfgPf+hH0AC4OlsAAAAASUVORK5CYII="/>
<text id="leftText1" x="435" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="470"></text>
<text id="leftText2" x="435" y="140" transform="scale(.1)" textLength="470"></text>
<text id="rightText1" x="995" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="470"></text>
<text id="rightText1" x="995" y="140" transform="scale(.1)" textLength="470"></text></g>
</svg>


View File

@@ -7,7 +7,6 @@
# p, <user/group>, <resource>, <action>, <object>
p, role:readonly, applications, get, */*, allow
p, role:readonly, certificates, get, *, allow
p, role:readonly, clusters, get, *, allow
p, role:readonly, repositories, get, *, allow
p, role:readonly, projects, get, *, allow
@@ -17,10 +16,6 @@ p, role:admin, applications, update, */*, allow
p, role:admin, applications, delete, */*, allow
p, role:admin, applications, sync, */*, allow
p, role:admin, applications, override, */*, allow
p, role:admin, applications, action/*, */*, allow
p, role:admin, certificates, create, *, allow
p, role:admin, certificates, update, *, allow
p, role:admin, certificates, delete, *, allow
p, role:admin, clusters, create, *, allow
p, role:admin, clusters, update, *, allow
p, role:admin, clusters, delete, *, allow

File diff suppressed because it is too large

View File

@@ -20,10 +20,9 @@ import (
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/settings"
"github.com/argoproj/argo-cd/util/stats"
)
@@ -41,13 +40,11 @@ func newCommand() *cobra.Command {
appResyncPeriod int64
repoServerAddress string
repoServerTimeoutSeconds int
selfHealTimeoutSeconds int
statusProcessors int
operationProcessors int
logLevel string
glogLevel int
metricsPort int
kubectlParallelismLimit int64
cacheSrc func() (*cache.Cache, error)
)
var command = cobra.Command{
@@ -69,7 +66,7 @@ func newCommand() *cobra.Command {
errors.CheckError(err)
resyncDuration := time.Duration(appResyncPeriod) * time.Second
repoClientset := apiclient.NewRepoServerClientset(repoServerAddress, repoServerTimeoutSeconds)
repoClientset := reposerver.NewRepoServerClientset(repoServerAddress, repoServerTimeoutSeconds)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -77,7 +74,6 @@ func newCommand() *cobra.Command {
errors.CheckError(err)
settingsMgr := settings.NewSettingsManager(ctx, kubeClient, namespace)
kubectl := &kube.KubectlCmd{}
appController, err := controller.NewApplicationController(
namespace,
settingsMgr,
@@ -85,11 +81,8 @@ func newCommand() *cobra.Command {
appClient,
repoClientset,
cache,
kubectl,
resyncDuration,
time.Duration(selfHealTimeoutSeconds)*time.Second,
metricsPort,
kubectlParallelismLimit)
metricsPort)
errors.CheckError(err)
log.Infof("Application Controller (version: %s) starting (namespace: %s)", common.GetVersion(), namespace)
@@ -113,9 +106,6 @@ func newCommand() *cobra.Command {
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortArgoCDMetrics, "Start metrics server on given port")
command.Flags().IntVar(&selfHealTimeoutSeconds, "self-heal-timeout-seconds", 5, "Specifies timeout between application self heal attempts")
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", 20, "Number of allowed concurrent kubectl fork/execs. Any value less than 1 means no limit.")
cacheSrc = cache.AddCacheFlagsToCmd(&command)
return &command
}
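
The controller diff above adds a --kubectl-parallelism-limit flag ("Any value less than 1 means no limit"). A buffered-channel semaphore is one minimal way to enforce such a bound; the sketch below is illustrative only, not the controller's actual implementation:

package main

import "fmt"

// semaphore bounds concurrent work; a nil channel means unlimited.
type semaphore chan struct{}

func newSemaphore(limit int64) semaphore {
	if limit < 1 {
		return nil // mirrors "any value less than 1 means no limit"
	}
	return make(semaphore, limit)
}

func (s semaphore) acquire() {
	if s != nil {
		s <- struct{}{}
	}
}

func (s semaphore) release() {
	if s != nil {
		<-s
	}
}

func main() {
	sem := newSemaphore(20) // the flag's default value
	sem.acquire()
	fmt.Println("a kubectl fork/exec would run here")
	sem.release()
}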

View File

@@ -7,15 +7,17 @@ import (
"os"
"time"
"github.com/argoproj/argo-cd/reposerver/metrics"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/reposerver/metrics"
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/stats"
"github.com/argoproj/argo-cd/util/tls"
)
@@ -46,7 +48,7 @@ func newCommand() *cobra.Command {
cache, err := cacheSrc()
errors.CheckError(err)
metricsServer := metrics.NewMetricsServer()
metricsServer := metrics.NewMetricsServer(git.NewFactory())
server, err := reposerver.NewServer(metricsServer, cache, tlsConfigCustomizer, parallelismLimit)
errors.CheckError(err)
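
The repo-server diff wires a git client factory into NewMetricsServer, matching the "Add Prometheus metrics for git repo interactions" commit in this compare. A minimal sketch of registering and incrementing such a counter with the prometheus client; the metric and label names are assumptions for illustration, not necessarily what Argo CD exports:

package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Hypothetical counter tracking git operations by repo and request type.
	gitRequests := prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "argocd_git_request_total",
		Help: "Number of git requests performed by the repo server",
	}, []string{"repo", "request_type"})
	prometheus.MustRegister(gitRequests)

	// Each fetch or ls-remote against a repo would bump the counter.
	gitRequests.WithLabelValues("https://github.com/argoproj/argo-cd", "fetch").Inc()

	http.Handle("/metrics", promhttp.Handler())
	_ = http.ListenAndServe(":8084", nil)
}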

View File

@@ -11,7 +11,7 @@ import (
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/server"
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/cli"
@@ -60,7 +60,7 @@ func NewCommand() *cobra.Command {
kubeclientset := kubernetes.NewForConfigOrDie(config)
appclientset := appclientset.NewForConfigOrDie(config)
repoclientset := apiclient.NewRepoServerClientset(repoServerAddress, repoServerTimeoutSeconds)
repoclientset := reposerver.NewRepoServerClientset(repoServerAddress, repoServerTimeoutSeconds)
argoCDOpts := server.ArgoCDServerOpts{
Insecure: insecure,

View File

@@ -8,7 +8,6 @@ import (
"io/ioutil"
"os"
"os/exec"
"regexp"
"syscall"
"github.com/ghodss/yaml"
@@ -109,7 +108,7 @@ func NewRunDexCommand() *cobra.Command {
} else {
err = ioutil.WriteFile("/tmp/dex.yaml", dexCfgBytes, 0644)
errors.CheckError(err)
log.Info(redactor(string(dexCfgBytes)))
log.Info(string(dexCfgBytes))
cmd = exec.Command("dex", "serve", "/tmp/dex.yaml")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
@@ -387,12 +386,6 @@ func NewExportCommand() *cobra.Command {
acdRBACConfigMap, err := acdClients.configMaps.Get(common.ArgoCDRBACConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
export(writer, *acdRBACConfigMap)
acdKnownHostsConfigMap, err := acdClients.configMaps.Get(common.ArgoCDKnownHostsConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
export(writer, *acdKnownHostsConfigMap)
acdTLSCertsConfigMap, err := acdClients.configMaps.Get(common.ArgoCDTLSCertsConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
export(writer, *acdTLSCertsConfigMap)
referencedSecrets := getReferencedSecrets(*acdConfigMap)
secrets, err := acdClients.secrets.List(metav1.ListOptions{})
@@ -442,11 +435,27 @@ func getReferencedSecrets(un unstructured.Unstructured) map[string]bool {
if cred.UsernameSecret != nil {
referencedSecrets[cred.UsernameSecret.Name] = true
}
if cred.TLSClientCertDataSecret != nil {
referencedSecrets[cred.TLSClientCertDataSecret.Name] = true
}
}
if helmReposRAW, ok := cm.Data["helm.repositories"]; ok {
helmRepoCreds := make([]settings.HelmRepoCredentials, 0)
err := yaml.Unmarshal([]byte(helmReposRAW), &helmRepoCreds)
errors.CheckError(err)
for _, cred := range helmRepoCreds {
if cred.CASecret != nil {
referencedSecrets[cred.CASecret.Name] = true
}
if cred.TLSClientCertKeySecret != nil {
referencedSecrets[cred.TLSClientCertKeySecret.Name] = true
if cred.CertSecret != nil {
referencedSecrets[cred.CertSecret.Name] = true
}
if cred.KeySecret != nil {
referencedSecrets[cred.KeySecret.Name] = true
}
if cred.UsernameSecret != nil {
referencedSecrets[cred.UsernameSecret.Name] = true
}
if cred.PasswordSecret != nil {
referencedSecrets[cred.PasswordSecret.Name] = true
}
}
}
@@ -533,11 +542,6 @@ func NewClusterConfig() *cobra.Command {
return command
}
func redactor(dirtyString string) string {
dirtyString = regexp.MustCompile("(clientSecret: )[^ \n]*").ReplaceAllString(dirtyString, "$1********")
return regexp.MustCompile("(secret: )[^ \n]*").ReplaceAllString(dirtyString, "$1********")
}
func main() {
if err := NewCommand().Execute(); err != nil {
fmt.Println(err)
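
The redactor shown in this diff masks clientSecret and secret values before the generated dex config is logged; the test file that follows exercises it end to end. Its core is just two regexp substitutions, restated here as a self-contained sketch:

package main

import (
	"fmt"
	"regexp"
)

// redact mirrors the redactor above: everything after "clientSecret: "
// or "secret: " up to the next space or newline becomes ********.
func redact(s string) string {
	s = regexp.MustCompile("(clientSecret: )[^ \n]*").ReplaceAllString(s, "$1********")
	return regexp.MustCompile("(secret: )[^ \n]*").ReplaceAllString(s, "$1********")
}

func main() {
	fmt.Println(redact("clientSecret: $dex.github.clientSecret"))
	fmt.Println(redact("secret: Dis9M-GA11oTwZVQQWdDklPQw-sWXZkWJFyyEhMs"))
	// prints:
	// clientSecret: ********
	// secret: ********
}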

View File

@@ -1,73 +0,0 @@
package main
import (
"testing"
"github.com/stretchr/testify/assert"
)
var textToRedact = `
- config:
clientID: aabbccddeeff00112233
clientSecret: $dex.github.clientSecret
orgs:
- name: your-github-org
redirectURI: https://argocd.example.com/api/dex/callback
id: github
name: GitHub
type: github
grpc:
addr: 0.0.0.0:5557
issuer: https://argocd.example.com/api/dex
oauth2:
skipApprovalScreen: true
staticClients:
- id: argo-cd
name: Argo CD
redirectURIs:
- https://argocd.example.com/auth/callback
secret: Dis9M-GA11oTwZVQQWdDklPQw-sWXZkWJFyyEhMs
- id: argo-cd-cli
name: Argo CD CLI
public: true
redirectURIs:
- http://localhost
storage:
type: memory
web:
http: 0.0.0.0:5556`
var expectedRedaction = `
- config:
clientID: aabbccddeeff00112233
clientSecret: ********
orgs:
- name: your-github-org
redirectURI: https://argocd.example.com/api/dex/callback
id: github
name: GitHub
type: github
grpc:
addr: 0.0.0.0:5557
issuer: https://argocd.example.com/api/dex
oauth2:
skipApprovalScreen: true
staticClients:
- id: argo-cd
name: Argo CD
redirectURIs:
- https://argocd.example.com/auth/callback
secret: ********
- id: argo-cd-cli
name: Argo CD CLI
public: true
redirectURIs:
- http://localhost
storage:
type: memory
web:
http: 0.0.0.0:5556`
func TestSecretsRedactor(t *testing.T) {
assert.Equal(t, expectedRedaction, redactor(textToRedact))
}

View File

@@ -2,21 +2,16 @@ package commands
import (
"context"
"encoding/json"
"fmt"
"os"
"strings"
"syscall"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"golang.org/x/crypto/ssh/terminal"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
accountpkg "github.com/argoproj/argo-cd/pkg/apiclient/account"
"github.com/argoproj/argo-cd/pkg/apiclient/session"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/localconfig"
@@ -32,7 +27,6 @@ func NewAccountCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
},
}
command.AddCommand(NewAccountUpdatePasswordCommand(clientOpts))
command.AddCommand(NewAccountGetUserInfoCommand(clientOpts))
return command
}
@@ -99,48 +93,3 @@ func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *co
command.Flags().StringVar(&newPassword, "new-password", "", "new password you want to update to")
return command
}
func NewAccountGetUserInfoCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
)
var command = &cobra.Command{
Use: "get-user-info",
Short: "Get user info",
Run: func(c *cobra.Command, args []string) {
if len(args) != 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
conn, client := argocdclient.NewClientOrDie(clientOpts).NewSessionClientOrDie()
defer util.Close(conn)
ctx := context.Background()
response, err := client.GetUserInfo(ctx, &session.GetUserInfoRequest{})
errors.CheckError(err)
switch output {
case "yaml":
yamlBytes, err := yaml.Marshal(response)
errors.CheckError(err)
fmt.Println(string(yamlBytes))
case "json":
jsonBytes, err := json.MarshalIndent(response, "", " ")
errors.CheckError(err)
fmt.Println(string(jsonBytes))
case "":
fmt.Printf("Logged In: %v\n", response.LoggedIn)
if response.LoggedIn {
fmt.Printf("Username: %s\n", response.Username)
fmt.Printf("Issuer: %s\n", response.Iss)
fmt.Printf("Groups: %v\n", strings.Join(response.Groups, ","))
}
default:
log.Fatalf("Unknown output format: %s", output)
}
},
}
command.Flags().StringVarP(&output, "output", "o", "", "Output format. One of: yaml, json")
return command
}

View File

@@ -1,14 +1,17 @@
package commands
import (
"bufio"
"context"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/url"
"os"
"os/exec"
"path"
"reflect"
"regexp"
"sort"
"strconv"
"strings"
@@ -16,6 +19,7 @@ import (
"time"
"github.com/ghodss/yaml"
"github.com/google/shlex"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
@@ -30,11 +34,8 @@ import (
"github.com/argoproj/argo-cd/pkg/apiclient"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
applicationpkg "github.com/argoproj/argo-cd/pkg/apiclient/application"
clusterpkg "github.com/argoproj/argo-cd/pkg/apiclient/cluster"
projectpkg "github.com/argoproj/argo-cd/pkg/apiclient/project"
settingspkg "github.com/argoproj/argo-cd/pkg/apiclient/settings"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
repoapiclient "github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
@@ -44,7 +45,7 @@ import (
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/hook"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/resource/ignore"
"github.com/argoproj/argo-cd/util/resource"
"github.com/argoproj/argo-cd/util/templates"
)
@@ -101,19 +102,11 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
)
var command = &cobra.Command{
Use: "create APPNAME",
Short: "Create an application",
Short: "Create an application from a git location",
Run: func(c *cobra.Command, args []string) {
var app argoappv1.Application
argocdClient := argocdclient.NewClientOrDie(clientOpts)
if fileURL == "-" {
// read stdin
reader := bufio.NewReader(os.Stdin)
err := config.UnmarshalReader(reader, &app)
if err != nil {
log.Fatalf("unable to read manifest from stdin: %v", err)
}
} else if fileURL != "" {
// read uri
if fileURL != "" {
parsedURL, err := url.ParseRequestURI(fileURL)
if err != nil || !(parsedURL.Scheme == "http" || parsedURL.Scheme == "https") {
err = config.UnmarshalLocalFile(fileURL, &app)
@@ -128,7 +121,6 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
log.Fatalf("--name argument '%s' does not match app spec metadata.name '%s'", appName, app.Name)
}
} else {
// read arguments
if len(args) == 1 {
if appName != "" && appName != args[0] {
log.Fatalf("--name argument '%s' does not match app name %s", appName, args[0])
@@ -158,14 +150,9 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
fmt.Printf("application '%s' created\n", created.ObjectMeta.Name)
},
}
command.Flags().StringVarP(&fileURL, "file", "f", "", "Filename or URL to Kubernetes manifests for the app")
command.Flags().StringVar(&appName, "name", "", "A name for the app, ignored if a file is set (DEPRECATED)")
command.Flags().BoolVar(&upsert, "upsert", false, "Allows overriding an application with the same name even if the supplied application spec differs from the existing spec")
command.Flags().StringVarP(&fileURL, "file", "f", "", "Filename or URL to Kubernetes manifests for the app")
// Only complete files with appropriate extension.
err := command.Flags().SetAnnotation("file", cobra.BashCompFilenameExt, []string{"json", "yaml", "yml"})
if err != nil {
log.Fatal(err)
}
addAppFlags(command, &appOpts)
return command
}
@@ -207,14 +194,6 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
appName := args[0]
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName, Refresh: getRefreshType(refresh, hardRefresh)})
errors.CheckError(err)
pConn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(pConn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: app.Spec.Project})
errors.CheckError(err)
windows := proj.Spec.SyncWindows.Matches(app)
switch output {
case "yaml":
yamlBytes, err := yaml.Marshal(app)
@@ -226,7 +205,7 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
fmt.Println(string(jsonBytes))
case "":
aURL := appURL(acdClient, app.Name)
printAppSummaryTable(app, aURL, windows)
printAppSummaryTable(app, aURL)
if len(app.Status.Conditions) > 0 {
fmt.Println()
@@ -240,7 +219,7 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
printOperationResult(app.Status.OperationState)
}
if showParams {
printParams(app)
printParams(app, appIf)
}
if len(app.Status.Resources) > 0 {
fmt.Println()
@@ -261,7 +240,7 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
return command
}
func printAppSummaryTable(app *argoappv1.Application, appURL string, windows *argoappv1.SyncWindows) {
func printAppSummaryTable(app *argoappv1.Application, appURL string) {
fmt.Printf(printOpFmtStr, "Name:", app.Name)
fmt.Printf(printOpFmtStr, "Project:", app.Spec.GetProject())
fmt.Printf(printOpFmtStr, "Server:", app.Spec.Destination.Server)
@@ -271,47 +250,6 @@ func printAppSummaryTable(app *argoappv1.Application, appURL string, windows *ar
fmt.Printf(printOpFmtStr, "Target:", app.Spec.Source.TargetRevision)
fmt.Printf(printOpFmtStr, "Path:", app.Spec.Source.Path)
printAppSourceDetails(&app.Spec.Source)
var wds []string
var status string
var allow, deny, inactiveAllows bool
if windows.HasWindows() {
active := windows.Active()
if active.HasWindows() {
for _, w := range *active {
if w.Kind == "deny" {
deny = true
} else {
allow = true
}
}
}
if windows.InactiveAllows().HasWindows() {
inactiveAllows = true
}
s := windows.CanSync(true)
if deny || !deny && !allow && inactiveAllows {
if s {
status = "Manual Allowed"
} else {
status = "Sync Denied"
}
} else {
status = "Sync Allowed"
}
for _, w := range *windows {
s := w.Kind + ":" + w.Schedule + ":" + w.Duration
wds = append(wds, s)
}
} else {
status = "Sync Allowed"
}
fmt.Printf(printOpFmtStr, "SyncWindow:", status)
if len(wds) > 0 {
fmt.Printf(printOpFmtStr, "Assigned Windows:", strings.Join(wds, ","))
}
var syncPolicy string
if app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.Automated != nil {
syncPolicy = "Automated"
@@ -387,7 +325,7 @@ func truncateString(str string, num int) string {
}
// printParams prints parameters and overrides
func printParams(app *argoappv1.Application) {
func printParams(app *argoappv1.Application, appIf applicationpkg.ApplicationServiceClient) {
paramLenLimit := 80
fmt.Println()
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
@@ -454,20 +392,14 @@ func setAppOptions(flags *pflag.FlagSet, app *argoappv1.Application, appOpts *ap
app.Spec.Source.RepoURL = appOpts.repoURL
case "path":
app.Spec.Source.Path = appOpts.appPath
case "helm-chart":
app.Spec.Source.Chart = appOpts.chart
case "env":
setKsonnetOpt(&app.Spec.Source, &appOpts.env)
case "revision":
app.Spec.Source.TargetRevision = appOpts.revision
case "values":
setHelmOpt(&app.Spec.Source, helmOpts{valueFiles: appOpts.valuesFiles})
setHelmOpt(&app.Spec.Source, appOpts.valuesFiles, nil)
case "release-name":
setHelmOpt(&app.Spec.Source, helmOpts{releaseName: appOpts.releaseName})
case "helm-set":
setHelmOpt(&app.Spec.Source, helmOpts{helmSets: appOpts.helmSets})
case "helm-set-string":
setHelmOpt(&app.Spec.Source, helmOpts{helmSetStrings: appOpts.helmSetStrings})
setHelmOpt(&app.Spec.Source, nil, &appOpts.releaseName)
case "directory-recurse":
app.Spec.Source.Directory = &argoappv1.ApplicationSourceDirectory{Recurse: appOpts.directoryRecurse}
case "config-management-plugin":
@@ -480,12 +412,6 @@ func setAppOptions(flags *pflag.FlagSet, app *argoappv1.Application, appOpts *ap
app.Spec.Project = appOpts.project
case "nameprefix":
setKustomizeOpt(&app.Spec.Source, &appOpts.namePrefix)
case "kustomize-image":
setKustomizeImages(&app.Spec.Source, appOpts.kustomizeImages)
case "jsonnet-tla-str":
setJsonnetOpt(&app.Spec.Source, appOpts.jsonnetTlaStr, false)
case "jsonnet-tla-code":
setJsonnetOpt(&app.Spec.Source, appOpts.jsonnetTlaCode, true)
case "sync-policy":
switch appOpts.syncPolicy {
case "automated":
@@ -505,12 +431,6 @@ func setAppOptions(flags *pflag.FlagSet, app *argoappv1.Application, appOpts *ap
}
app.Spec.SyncPolicy.Automated.Prune = appOpts.autoPrune
}
if flags.Changed("self-heal") {
if app.Spec.SyncPolicy == nil || app.Spec.SyncPolicy.Automated == nil {
log.Fatal("Cannot set --self-helf: application not configured with automatic sync")
}
app.Spec.SyncPolicy.Automated.SelfHeal = appOpts.selfHeal
}
return visited
}
@@ -538,91 +458,25 @@ func setKustomizeOpt(src *argoappv1.ApplicationSource, namePrefix *string) {
src.Kustomize = nil
}
}
func setKustomizeImages(src *argoappv1.ApplicationSource, images []string) {
if src.Kustomize == nil {
src.Kustomize = &argoappv1.ApplicationSourceKustomize{}
}
for _, image := range images {
src.Kustomize.MergeImage(argoappv1.KustomizeImage(image))
}
if src.Kustomize.IsZero() {
src.Kustomize = nil
}
}
type helmOpts struct {
valueFiles []string
releaseName string
helmSets []string
helmSetStrings []string
}
func setHelmOpt(src *argoappv1.ApplicationSource, opts helmOpts) {
func setHelmOpt(src *argoappv1.ApplicationSource, valueFiles []string, releaseName *string) {
if src.Helm == nil {
src.Helm = &argoappv1.ApplicationSourceHelm{}
}
if len(opts.valueFiles) > 0 {
src.Helm.ValueFiles = opts.valueFiles
if valueFiles != nil {
src.Helm.ValueFiles = valueFiles
}
if opts.releaseName != "" {
src.Helm.ReleaseName = opts.releaseName
}
for _, text := range opts.helmSets {
p, err := argoappv1.NewHelmParameter(text, false)
if err != nil {
log.Fatal(err)
}
src.Helm.AddParameter(*p)
}
for _, text := range opts.helmSetStrings {
p, err := argoappv1.NewHelmParameter(text, true)
if err != nil {
log.Fatal(err)
}
src.Helm.AddParameter(*p)
if releaseName != nil {
src.Helm.ReleaseName = *releaseName
}
if src.Helm.IsZero() {
src.Helm = nil
}
}
func setJsonnetOpt(src *argoappv1.ApplicationSource, tlaParameters []string, code bool) {
if src.Directory == nil {
src.Directory = &argoappv1.ApplicationSourceDirectory{}
}
if len(tlaParameters) != 0 {
tlas := make([]argoappv1.JsonnetVar, len(tlaParameters))
for index, paramStr := range tlaParameters {
parts := strings.SplitN(paramStr, "=", 2)
if len(parts) != 2 {
log.Fatalf("Expected parameter of the form: param=value. Received: %s", paramStr)
break
}
tlas[index] = argoappv1.JsonnetVar{
Name: parts[0],
Value: parts[1],
Code: code}
}
existingTLAs := []argoappv1.JsonnetVar{}
for i := range src.Directory.Jsonnet.TLAs {
if src.Directory.Jsonnet.TLAs[i].Code != code {
existingTLAs = append(existingTLAs, src.Directory.Jsonnet.TLAs[i])
}
}
src.Directory.Jsonnet.TLAs = append(existingTLAs, tlas...)
}
if src.Directory.IsZero() {
src.Directory = nil
}
}
type appOptions struct {
repoURL string
appPath string
chart string
env string
revision string
destServer string
@@ -630,43 +484,30 @@ type appOptions struct {
parameters []string
valuesFiles []string
releaseName string
helmSets []string
helmSetStrings []string
project string
syncPolicy string
autoPrune bool
selfHeal bool
namePrefix string
directoryRecurse bool
configManagementPlugin string
jsonnetTlaStr []string
jsonnetTlaCode []string
kustomizeImages []string
}
func addAppFlags(command *cobra.Command, opts *appOptions) {
command.Flags().StringVar(&opts.repoURL, "repo", "", "Repository URL, ignored if a file is set")
command.Flags().StringVar(&opts.appPath, "path", "", "Path in repository to the app directory, ignored if a file is set")
command.Flags().StringVar(&opts.chart, "helm-chart", "", "Helm Chart name")
command.Flags().StringVar(&opts.appPath, "path", "", "Path in repository to the ksonnet app directory, ignored if a file is set")
command.Flags().StringVar(&opts.env, "env", "", "Application environment to monitor")
command.Flags().StringVar(&opts.revision, "revision", "", "The tracking source branch, tag, or commit the application will sync to")
command.Flags().StringVar(&opts.revision, "revision", "HEAD", "The tracking source branch, tag, or commit the application will sync to")
command.Flags().StringVar(&opts.destServer, "dest-server", "", "K8s cluster URL (overrides the server URL specified in the ksonnet app.yaml)")
command.Flags().StringVar(&opts.destNamespace, "dest-namespace", "", "K8s target namespace (overrides the namespace specified in the ksonnet app.yaml)")
command.Flags().StringArrayVarP(&opts.parameters, "parameter", "p", []string{}, "set a parameter override (e.g. -p guestbook=image=example/guestbook:latest)")
command.Flags().StringArrayVar(&opts.valuesFiles, "values", []string{}, "Helm values file(s) to use")
command.Flags().StringVar(&opts.releaseName, "release-name", "", "Helm release-name")
command.Flags().StringArrayVar(&opts.helmSets, "helm-set", []string{}, "Helm set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)")
command.Flags().StringArrayVar(&opts.helmSetStrings, "helm-set-string", []string{}, "Helm set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)")
command.Flags().StringVar(&opts.project, "project", "", "Application project name")
command.Flags().StringVar(&opts.syncPolicy, "sync-policy", "", "Set the sync policy (one of: automated, none)")
command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning when sync is automated")
command.Flags().BoolVar(&opts.selfHeal, "self-heal", false, "Set self healing when sync is automated")
command.Flags().StringVar(&opts.namePrefix, "nameprefix", "", "Kustomize nameprefix")
command.Flags().BoolVar(&opts.directoryRecurse, "directory-recurse", false, "Recurse directory")
command.Flags().StringVar(&opts.configManagementPlugin, "config-management-plugin", "", "Config management plugin name")
command.Flags().StringArrayVar(&opts.jsonnetTlaStr, "jsonnet-tla-str", []string{}, "Jsonnet top level string arguments")
command.Flags().StringArrayVar(&opts.jsonnetTlaCode, "jsonnet-tla-code", []string{}, "Jsonnet top level code arguments")
command.Flags().StringArrayVar(&opts.kustomizeImages, "kustomize-image", []string{}, "Kustomize images (e.g. --kustomize-image node:8.15.0 --kustomize-image mysql=mariadb,alpine@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d)")
}
// NewApplicationUnsetCommand returns a new instance of an `argocd app unset` command
@@ -727,7 +568,7 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
}
}
}
setHelmOpt(&app.Spec.Source, helmOpts{valueFiles: specValueFiles})
setHelmOpt(&app.Spec.Source, specValueFiles, nil)
if !updated {
return
}
@@ -771,8 +612,8 @@ func liveObjects(resources []*argoappv1.ResourceDiff) ([]*unstructured.Unstructu
return objs, nil
}
func getLocalObjects(app *argoappv1.Application, local, appLabelKey, kubeVersion string) []*unstructured.Unstructured {
manifestStrings := getLocalObjectsString(app, local, appLabelKey, kubeVersion, nil)
func getLocalObjects(app *argoappv1.Application, local string, appLabelKey string) []*unstructured.Unstructured {
manifestStrings := getLocalObjectsString(app, local, appLabelKey)
objs := make([]*unstructured.Unstructured, len(manifestStrings))
for i := range manifestStrings {
obj := unstructured.Unstructured{}
@@ -783,14 +624,12 @@ func getLocalObjects(app *argoappv1.Application, local, appLabelKey, kubeVersion
return objs
}
func getLocalObjectsString(app *argoappv1.Application, local, appLabelKey, kubeVersion string, kustomizeOptions *argoappv1.KustomizeOptions) []string {
res, err := repository.GenerateManifests(local, &repoapiclient.ManifestRequest{
func getLocalObjectsString(app *argoappv1.Application, local string, appLabelKey string) []string {
res, err := repository.GenerateManifests(local, &repository.ManifestRequest{
ApplicationSource: &app.Spec.Source,
AppLabelKey: appLabelKey,
AppLabelValue: app.Name,
Namespace: app.Spec.Destination.Namespace,
KustomizeOptions: kustomizeOptions,
KubeVersion: kubeVersion,
})
errors.CheckError(err)
@@ -803,8 +642,9 @@ type resourceInfoProvider struct {
// Infer if obj is namespaced or not from corresponding live objects list. If corresponding live object has namespace then target object is also namespaced.
// If live object is missing then it does not matter if target is namespaced or not.
func (p *resourceInfoProvider) IsNamespaced(server string, gk schema.GroupKind) (bool, error) {
return p.namespacedByGk[gk], nil
func (p *resourceInfoProvider) IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error) {
key := kube.GetResourceKey(obj)
return p.namespacedByGk[key.GroupKind()], nil
}
func groupLocalObjs(localObs []*unstructured.Unstructured, liveObjs []*unstructured.Unstructured, appNamespace string) map[kube.ResourceKey]*unstructured.Unstructured {
@@ -820,7 +660,7 @@ func groupLocalObjs(localObs []*unstructured.Unstructured, liveObjs []*unstructu
objByKey := make(map[kube.ResourceKey]*unstructured.Unstructured)
for i := range localObs {
obj := localObs[i]
if !(hook.IsHook(obj) || ignore.Ignore(obj)) {
if !(hook.IsHook(obj) || resource.Ignore(obj)) {
objByKey[kube.GetResourceKey(obj)] = obj
}
}
@@ -840,7 +680,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
Short: shortDesc,
Long: shortDesc + "\nUses 'diff' to render the difference. KUBECTL_EXTERNAL_DIFF environment variable can be used to select your own diff tool.\nReturns the following exit codes: 2 on general errors, 1 when a diff is found, and 0 when no diff is found",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
if len(args) == 0 {
c.HelpFunc()(c, args)
os.Exit(2)
}
@@ -867,12 +707,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
errors.CheckError(err)
if local != "" {
conn, clusterIf := clientset.NewClusterClientOrDie()
defer util.Close(conn)
cluster, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Server: app.Spec.Destination.Server})
errors.CheckError(err)
util.Close(conn)
localObjs := groupLocalObjs(getLocalObjects(app, local, argoSettings.AppLabelKey, cluster.ServerVersion), liveObjs, app.Spec.Destination.Namespace)
localObjs := groupLocalObjs(getLocalObjects(app, local, argoSettings.AppLabelKey), liveObjs, app.Spec.Destination.Namespace)
for _, res := range resources.Items {
var live = &unstructured.Unstructured{}
err := json.Unmarshal([]byte(res.LiveState), &live)
@@ -945,10 +780,8 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
foundDiffs := false
for _, item := range items {
if item.target != nil && hook.IsHook(item.target) || item.live != nil && hook.IsHook(item.live) {
continue
}
for i := range items {
item := items[i]
overrides := make(map[string]argoappv1.ResourceOverride)
for k := range argoSettings.ResourceOverrides {
val := argoSettings.ResourceOverrides[k]
@@ -972,7 +805,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
foundDiffs = true
_ = diff.PrintDiff(item.key.Name, target, live)
printDiff(item.key.Name, target, live)
}
}
if foundDiffs {
@@ -987,6 +820,43 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
return command
}
func printDiff(name string, live *unstructured.Unstructured, target *unstructured.Unstructured) {
tempDir, err := ioutil.TempDir("", "argocd-diff")
errors.CheckError(err)
targetFile := path.Join(tempDir, name)
targetData := []byte("")
if target != nil {
targetData, err = yaml.Marshal(target)
errors.CheckError(err)
}
err = ioutil.WriteFile(targetFile, targetData, 0644)
errors.CheckError(err)
liveFile := path.Join(tempDir, fmt.Sprintf("%s-live.yaml", name))
liveData := []byte("")
if live != nil {
liveData, err = yaml.Marshal(live)
errors.CheckError(err)
}
err = ioutil.WriteFile(liveFile, liveData, 0644)
errors.CheckError(err)
cmdBinary := "diff"
var args []string
if envDiff := os.Getenv("KUBECTL_EXTERNAL_DIFF"); envDiff != "" {
parts, err := shlex.Split(envDiff)
errors.CheckError(err)
cmdBinary = parts[0]
args = parts[1:]
}
cmd := exec.Command(cmdBinary, append(args, liveFile, targetFile)...)
cmd.Stderr = os.Stderr
cmd.Stdout = os.Stdout
_ = cmd.Run()
}
// NewApplicationDeleteCommand returns a new instance of an `argocd app delete` command
func NewApplicationDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
@@ -1018,49 +888,10 @@ func NewApplicationDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.
return command
}
// Print simple list of application names
func printApplicationNames(apps []argoappv1.Application) {
for _, app := range apps {
fmt.Println(app.Name)
}
}
// Print table of application data
func printApplicationTable(apps []argoappv1.Application, output *string) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
var fmtStr string
headers := []interface{}{"NAME", "CLUSTER", "NAMESPACE", "PROJECT", "STATUS", "HEALTH", "SYNCPOLICY", "CONDITIONS"}
if *output == "wide" {
fmtStr = "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n"
headers = append(headers, "REPO", "PATH", "TARGET")
} else {
fmtStr = "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n"
}
fmt.Fprintf(w, fmtStr, headers...)
for _, app := range apps {
vals := []interface{}{
app.Name,
app.Spec.Destination.Server,
app.Spec.Destination.Namespace,
app.Spec.GetProject(),
app.Status.Sync.Status,
app.Status.Health.Status,
formatSyncPolicy(app),
formatConditionsSummary(app),
}
if *output == "wide" {
vals = append(vals, app.Spec.Source.RepoURL, app.Spec.Source.Path, app.Spec.Source.TargetRevision)
}
fmt.Fprintf(w, fmtStr, vals...)
}
_ = w.Flush()
}
// NewApplicationListCommand returns a new instance of an `argocd app list` command
func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
projects []string
output string
)
var command = &cobra.Command{
Use: "list",
@@ -1070,19 +901,36 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
defer util.Close(conn)
apps, err := appIf.List(context.Background(), &applicationpkg.ApplicationQuery{})
errors.CheckError(err)
appList := apps.Items
if len(projects) != 0 {
appList = argo.FilterByProjects(appList, projects)
}
if output == "name" {
printApplicationNames(appList)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
var fmtStr string
headers := []interface{}{"NAME", "CLUSTER", "NAMESPACE", "PROJECT", "STATUS", "HEALTH", "SYNCPOLICY", "CONDITIONS"}
if output == "wide" {
fmtStr = "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n"
headers = append(headers, "REPO", "PATH", "TARGET")
} else {
printApplicationTable(appList, &output)
fmtStr = "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n"
}
fmt.Fprintf(w, fmtStr, headers...)
for _, app := range apps.Items {
vals := []interface{}{
app.Name,
app.Spec.Destination.Server,
app.Spec.Destination.Namespace,
app.Spec.GetProject(),
app.Status.Sync.Status,
app.Status.Health.Status,
formatSyncPolicy(app),
formatConditionsSummary(app),
}
if output == "wide" {
vals = append(vals, app.Spec.Source.RepoURL, app.Spec.Source.Path, app.Spec.Source.TargetRevision)
}
fmt.Fprintf(w, fmtStr, vals...)
}
_ = w.Flush()
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: wide|name")
command.Flags().StringArrayVarP(&projects, "project", "p", []string{}, "Filter by project name")
command.Flags().StringVarP(&output, "output", "o", "", "Output format. One of: wide")
return command
}
@@ -1247,6 +1095,10 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
if len(selectedLabels) > 0 {
ctx := context.Background()
if revision == "" {
revision = "HEAD"
}
q := applicationpkg.ApplicationManifestQuery{
Name: &appName,
Revision: revision,
@@ -1281,7 +1133,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
var localObjsStrings []string
if local != "" {
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName})
errors.CheckError(err)
if app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.Automated != nil {
log.Fatal("Cannot use local sync when Automatic Sync Policy is enabled")
}
@@ -1292,12 +1144,8 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
errors.CheckError(err)
util.Close(conn)
conn, clusterIf := acdClient.NewClusterClientOrDie()
defer util.Close(conn)
cluster, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Server: app.Spec.Destination.Server})
errors.CheckError(err)
util.Close(conn)
localObjsStrings = getLocalObjectsString(app, local, argoSettings.AppLabelKey, cluster.ServerVersion, argoSettings.KustomizeOptions)
localObjsStrings = getLocalObjectsString(app, local, argoSettings.AppLabelKey)
}
syncReq := applicationpkg.ApplicationSyncRequest{
@@ -1408,7 +1256,7 @@ func getResourceStates(app *argoappv1.Application, selectedResources []argoappv1
health := string(res.Status)
key := kube.NewResourceKey(res.Group, res.Kind, res.Namespace, res.Name)
if resource, ok := resourceByKey[key]; ok && res.HookType == "" {
health = ""
health = argoappv1.HealthStatusUnknown
if resource.Health != nil {
health = resource.Health.Status
}
@@ -1429,7 +1277,7 @@ func getResourceStates(app *argoappv1.Application, selectedResources []argoappv1
// print rest of resources which were not part of most recent operation
for _, resKey := range resKeys {
res := resourceByKey[resKey]
health := ""
health := argoappv1.HealthStatusUnknown
if res.Health != nil {
health = res.Health.Status
}
@@ -1499,7 +1347,7 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
}
fmt.Println()
printAppSummaryTable(app, appURL(acdClient, appName), nil)
printAppSummaryTable(app, appURL(acdClient, appName))
fmt.Println()
if watchOperation {
printOperationResult(app.Status.OperationState)
@@ -1638,40 +1486,33 @@ func setParameterOverrides(app *argoappv1.Application, parameters []string) {
if app.Spec.Source.Helm == nil {
app.Spec.Source.Helm = &argoappv1.ApplicationSourceHelm{}
}
for _, p := range parameters {
newParam, err := argoappv1.NewHelmParameter(p, false)
if err != nil {
log.Error(err)
continue
re := regexp.MustCompile(`([^\\]),`)
for _, paramStr := range parameters {
parts := strings.SplitN(paramStr, "=", 2)
if len(parts) != 2 {
log.Fatalf("Expected helm parameter of the form: param=value. Received: %s", paramStr)
}
newParam := argoappv1.HelmParameter{
Name: parts[0],
Value: re.ReplaceAllString(parts[1], `$1\,`),
}
found := false
for i, cp := range app.Spec.Source.Helm.Parameters {
if cp.Name == newParam.Name {
found = true
app.Spec.Source.Helm.Parameters[i] = newParam
break
}
}
if !found {
app.Spec.Source.Helm.Parameters = append(app.Spec.Source.Helm.Parameters, newParam)
}
app.Spec.Source.Helm.AddParameter(*newParam)
}
default:
log.Fatalf("Parameters can only be set against Ksonnet or Helm applications")
}
}
// Print list of history ID's for an application.
func printApplicationHistoryIds(revHistory []argoappv1.RevisionHistory) {
for _, depInfo := range revHistory {
fmt.Println(depInfo.ID)
}
}
// Print a history table for an application.
func printApplicationHistoryTable(revHistory []argoappv1.RevisionHistory) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ID\tDATE\tREVISION\n")
for _, depInfo := range revHistory {
rev := depInfo.Source.TargetRevision
if len(depInfo.Revision) >= 7 {
rev = fmt.Sprintf("%s (%s)", rev, depInfo.Revision[0:7])
}
fmt.Fprintf(w, "%d\t%s\t%s\n", depInfo.ID, depInfo.DeployedAt, rev)
}
_ = w.Flush()
}
// NewApplicationHistoryCommand returns a new instance of an `argocd app history` command
func NewApplicationHistoryCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
@@ -1690,14 +1531,19 @@ func NewApplicationHistoryCommand(clientOpts *argocdclient.ClientOptions) *cobra
appName := args[0]
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName})
errors.CheckError(err)
if output == "id" {
printApplicationHistoryIds(app.Status.History)
} else {
printApplicationHistoryTable(app.Status.History)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ID\tDATE\tREVISION\n")
for _, depInfo := range app.Status.History {
rev := depInfo.Source.TargetRevision
if len(depInfo.Revision) >= 7 {
rev = fmt.Sprintf("%s (%s)", rev, depInfo.Revision[0:7])
}
fmt.Fprintf(w, "%d\t%s\t%s\n", depInfo.ID, depInfo.DeployedAt, rev)
}
_ = w.Flush()
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: wide|id")
command.Flags().StringVarP(&output, "output", "o", "", "Output format. One of: wide")
return command
}
@@ -1903,17 +1749,10 @@ func NewApplicationEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
func NewApplicationPatchCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var patch string
var patchType string
command := cobra.Command{
Use: "patch APPNAME",
Short: "Patch application",
Long: `Examples:
# Update an application's source path using json patch
argocd app patch myapplication --patch='[{"op": "replace", "path": "/spec/source/path", "value": "newPath"}]' --type json
# Update an application's repository target revision using merge patch
argocd app patch myapplication --patch '{"spec": { "source": { "targetRevision": "master" } }}' --type merge`,
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
@@ -1924,9 +1763,8 @@ func NewApplicationPatchCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
defer util.Close(conn)
patchedApp, err := appIf.Patch(context.Background(), &applicationpkg.ApplicationPatchRequest{
Name: &appName,
Patch: patch,
PatchType: patchType,
Name: &appName,
Patch: patch,
})
errors.CheckError(err)
@@ -1937,8 +1775,7 @@ func NewApplicationPatchCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
},
}
command.Flags().StringVar(&patch, "patch", "", "Patch body")
command.Flags().StringVar(&patchType, "type", "json", "The type of patch being provided; one of [json merge]")
command.Flags().StringVar(&patch, "patch", "", "Patch")
return &command
}
@@ -1961,11 +1798,10 @@ func filterResources(command *cobra.Command, resources []*argoappv1.ResourceDiff
if resourceName != "" && resourceName != obj.GetName() {
continue
}
if kind != gvk.Kind {
continue
if kind == gvk.Kind {
copy := obj.DeepCopy()
filteredObjects = append(filteredObjects, copy)
}
copy := obj.DeepCopy()
filteredObjects = append(filteredObjects, copy)
}
if len(filteredObjects) == 0 {
log.Fatal("No matching resource found")
@@ -1973,6 +1809,13 @@ func filterResources(command *cobra.Command, resources []*argoappv1.ResourceDiff
if len(filteredObjects) > 1 && !all {
log.Fatal("Multiple resources match inputs. Use the --all flag to patch multiple resources")
}
firstGroup := filteredObjects[0].GroupVersionKind().Group
for i := range filteredObjects {
obj := filteredObjects[i]
if obj.GroupVersionKind().Group != firstGroup {
log.Fatal("Multiple groups found in objects to patch. Specify which group to patch with --group flag")
}
}
return filteredObjects
}
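The group check above guards against patching across API groups: when the filtered objects span more than one group, the user must disambiguate with --group. A small sketch of the same check with illustrative GVKs:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	gvks := []schema.GroupVersionKind{
		{Group: "apps", Version: "v1", Kind: "Deployment"},
		{Group: "batch", Version: "v1beta1", Kind: "CronJob"},
	}
	first := gvks[0].Group
	for _, gvk := range gvks {
		if gvk.Group != first {
			fmt.Println("multiple groups matched; pass --group to disambiguate")
			return
		}
	}
	fmt.Println("single group:", first)
}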

View File

@@ -2,30 +2,20 @@ package commands
import (
"context"
"encoding/json"
"fmt"
"log"
"os"
"strconv"
"sort"
"text/tabwriter"
"github.com/ghodss/yaml"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
applicationpkg "github.com/argoproj/argo-cd/pkg/apiclient/application"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
)
type DisplayedAction struct {
Group string
Kind string
Name string
Action string
Disabled bool
}
// NewApplicationResourceActionsCommand returns a new instance of an `argocd app actions` command
func NewApplicationResourceActionsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
@@ -47,7 +37,7 @@ func NewApplicationResourceActionsListCommand(clientOpts *argocdclient.ClientOpt
var kind string
var group string
var resourceName string
var output string
var all bool
var command = &cobra.Command{
Use: "list APPNAME",
Short: "Lists available actions on a resource",
@@ -63,8 +53,8 @@ func NewApplicationResourceActionsListCommand(clientOpts *argocdclient.ClientOpt
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
filteredObjects := filterResources(command, resources.Items, group, kind, namespace, resourceName, true)
var availableActions []DisplayedAction
filteredObjects := filterResources(command, resources.Items, group, kind, namespace, resourceName, all)
availableActions := make(map[string][]argoappv1.ResourceAction)
for i := range filteredObjects {
obj := filteredObjects[i]
gvk := obj.GroupVersionKind()
@@ -76,42 +66,34 @@ func NewApplicationResourceActionsListCommand(clientOpts *argocdclient.ClientOpt
Kind: gvk.Kind,
})
errors.CheckError(err)
for _, action := range availActionsForResource.Actions {
displayAction := DisplayedAction{
Group: gvk.Group,
Kind: gvk.Kind,
Name: obj.GetName(),
Action: action.Name,
Disabled: action.Disabled,
}
availableActions = append(availableActions, displayAction)
}
availableActions[obj.GetName()] = availActionsForResource.Actions
}
switch output {
case "yaml":
yamlBytes, err := yaml.Marshal(availableActions)
errors.CheckError(err)
fmt.Println(string(yamlBytes))
case "json":
jsonBytes, err := json.MarshalIndent(availableActions, "", " ")
errors.CheckError(err)
fmt.Println(string(jsonBytes))
case "":
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "GROUP\tKIND\tNAME\tACTION\tDISABLED\n")
fmt.Println()
for _, action := range availableActions {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\n", action.Group, action.Kind, action.Name, action.Action, strconv.FormatBool(action.Disabled))
}
_ = w.Flush()
var keys []string
for key := range availableActions {
keys = append(keys, key)
}
sort.Strings(keys)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "RESOURCE\tACTION\n")
fmt.Println()
for _, key := range keys {
for i := range availableActions[key] {
action := availableActions[key][i]
fmt.Fprintf(w, "%s\t%s\n", key, action.Name)
}
}
_ = w.Flush()
}
command.Flags().StringVar(&resourceName, "resource-name", "", "Name of resource")
command.Flags().StringVar(&kind, "kind", "", "Kind")
err := command.MarkFlagRequired("kind")
errors.CheckError(err)
command.Flags().StringVar(&group, "group", "", "Group")
command.Flags().StringVar(&namespace, "namespace", "", "Namespace")
command.Flags().StringVarP(&output, "out", "o", "", "Output format. One of: yaml, json")
command.Flags().BoolVar(&all, "all", false, "Indicates whether to list actions on multiple matching resources")
return command
}
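The -o yaml / -o json handling above is plain marshaling of the actions map. A self-contained sketch with illustrative data (ghodss/yaml is the YAML library this file already imports):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/ghodss/yaml"
)

func main() {
	actions := map[string][]string{"my-deployment": {"restart"}} // illustrative
	yamlBytes, err := yaml.Marshal(actions)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(yamlBytes))
	jsonBytes, err := json.MarshalIndent(actions, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(jsonBytes))
}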
@@ -119,9 +101,9 @@ func NewApplicationResourceActionsListCommand(clientOpts *argocdclient.ClientOpt
// NewApplicationResourceActionsRunCommand returns a new instance of an `argocd app actions run` command
func NewApplicationResourceActionsRunCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var namespace string
var resourceName string
var kind string
var group string
var resourceName string
var all bool
var command = &cobra.Command{
Use: "run APPNAME ACTION",
@@ -129,10 +111,11 @@ func NewApplicationResourceActionsRunCommand(clientOpts *argocdclient.ClientOpti
}
command.Flags().StringVar(&resourceName, "resource-name", "", "Name of resource")
command.Flags().StringVar(&namespace, "namespace", "", "Namespace")
command.Flags().StringVar(&kind, "kind", "", "Kind")
err := command.MarkFlagRequired("kind")
errors.CheckError(err)
command.Flags().StringVar(&group, "group", "", "Group")
errors.CheckError(command.MarkFlagRequired("kind"))
command.Flags().StringVar(&namespace, "namespace", "", "Namespace")
command.Flags().BoolVar(&all, "all", false, "Indicates whether to run the action on multiple matching resources")
command.Run = func(c *cobra.Command, args []string) {
@@ -142,20 +125,12 @@ func NewApplicationResourceActionsRunCommand(clientOpts *argocdclient.ClientOpti
}
appName := args[0]
actionName := args[1]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
filteredObjects := filterResources(command, resources.Items, group, kind, namespace, resourceName, all)
var resGroup = filteredObjects[0].GroupVersionKind().Group
for i := range filteredObjects[1:] {
if filteredObjects[i].GroupVersionKind().Group != resGroup {
log.Fatal("Ambiguous resource group. Use flag --group to specify resource group explicitly.")
}
}
for i := range filteredObjects {
obj := filteredObjects[i]
gvk := obj.GroupVersionKind()

View File

@@ -4,8 +4,6 @@ import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
)
func TestParseLabels(t *testing.T) {
@@ -25,31 +23,3 @@ func TestParseLabels(t *testing.T) {
assert.Len(t, result, 0)
}
func Test_setHelmOpt(t *testing.T) {
t.Run("Zero", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
setHelmOpt(&src, helmOpts{})
assert.Nil(t, src.Helm)
})
t.Run("ValueFiles", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
setHelmOpt(&src, helmOpts{valueFiles: []string{"foo"}})
assert.Equal(t, []string{"foo"}, src.Helm.ValueFiles)
})
t.Run("ReleaseName", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
setHelmOpt(&src, helmOpts{releaseName: "foo"})
assert.Equal(t, "foo", src.Helm.ReleaseName)
})
t.Run("HelmSets", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
setHelmOpt(&src, helmOpts{helmSets: []string{"foo=bar"}})
assert.Equal(t, []v1alpha1.HelmParameter{{Name: "foo", Value: "bar"}}, src.Helm.Parameters)
})
t.Run("HelmSetStrings", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
setHelmOpt(&src, helmOpts{helmSetStrings: []string{"foo=bar"}})
assert.Equal(t, []v1alpha1.HelmParameter{{Name: "foo", Value: "bar", ForceString: true}}, src.Helm.Parameters)
})
}
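The setHelmOpt implementation itself is not part of this diff; inferred from the tests above, a plausible shape looks like this (a sketch only, not the actual Argo CD code; assumes the file's existing imports plus "strings"):

func setHelmOpt(src *v1alpha1.ApplicationSource, opts helmOpts) {
	if src.Helm == nil {
		src.Helm = &v1alpha1.ApplicationSourceHelm{}
	}
	if len(opts.valueFiles) > 0 {
		src.Helm.ValueFiles = opts.valueFiles
	}
	if opts.releaseName != "" {
		src.Helm.ReleaseName = opts.releaseName
	}
	// Error handling for malformed name=value pairs is omitted in this sketch.
	for _, set := range opts.helmSets {
		parts := strings.SplitN(set, "=", 2)
		src.Helm.Parameters = append(src.Helm.Parameters, v1alpha1.HelmParameter{Name: parts[0], Value: parts[1]})
	}
	for _, set := range opts.helmSetStrings {
		parts := strings.SplitN(set, "=", 2)
		src.Helm.Parameters = append(src.Helm.Parameters, v1alpha1.HelmParameter{Name: parts[0], Value: parts[1], ForceString: true})
	}
	// The "Zero" test expects Helm to stay nil when nothing was set.
	if src.Helm.ReleaseName == "" && len(src.Helm.ValueFiles) == 0 && len(src.Helm.Parameters) == 0 {
		src.Helm = nil
	}
}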

View File

@@ -1,290 +0,0 @@
package commands
import (
"context"
"fmt"
"os"
"sort"
"strings"
"text/tabwriter"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
certificatepkg "github.com/argoproj/argo-cd/pkg/apiclient/certificate"
appsv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
certutil "github.com/argoproj/argo-cd/util/cert"
"crypto/x509"
)
// NewCertCommand returns a new instance of an `argocd cert` command
func NewCertCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "cert",
Short: "Manage repository certificates and SSH known hosts entries",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
},
}
command.AddCommand(NewCertAddSSHCommand(clientOpts))
command.AddCommand(NewCertAddTLSCommand(clientOpts))
command.AddCommand(NewCertListCommand(clientOpts))
command.AddCommand(NewCertRemoveCommand(clientOpts))
return command
}
func NewCertAddTLSCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
fromFile string
upsert bool
)
var command = &cobra.Command{
Use: "add-tls SERVERNAME",
Short: "Add TLS certificate data for connecting to repository server SERVERNAME",
Run: func(c *cobra.Command, args []string) {
conn, certIf := argocdclient.NewClientOrDie(clientOpts).NewCertClientOrDie()
defer util.Close(conn)
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
var certificateArray []string
var err error
if fromFile != "" {
fmt.Printf("Reading TLS certificate data in PEM format from '%s'\n", fromFile)
certificateArray, err = certutil.ParseTLSCertificatesFromPath(fromFile)
} else {
fmt.Println("Enter TLS certificate data in PEM format. Press CTRL-D when finished.")
certificateArray, err = certutil.ParseTLSCertificatesFromStream(os.Stdin)
}
errors.CheckError(err)
certificateList := make([]appsv1.RepositoryCertificate, 0)
subjectMap := make(map[string]*x509.Certificate)
for _, entry := range certificateArray {
// We want to make sure to only send valid certificate data to the
// server, so we decode the certificate into X509 structure before
// further processing it.
x509cert, err := certutil.DecodePEMCertificateToX509(entry)
errors.CheckError(err)
// TODO: We need a better way to detect duplicates sent in the stream,
// maybe by using fingerprints? For now, no two certs with the same
// subject may be sent.
if subjectMap[x509cert.Subject.String()] != nil {
fmt.Printf("ERROR: Cert with subject '%s' already seen in the input stream.\n", x509cert.Subject.String())
continue
} else {
subjectMap[x509cert.Subject.String()] = x509cert
}
}
serverName := args[0]
if len(certificateArray) > 0 {
certificateList = append(certificateList, appsv1.RepositoryCertificate{
ServerName: serverName,
CertType: "https",
CertData: []byte(strings.Join(certificateArray, "\n")),
})
certificates, err := certIf.CreateCertificate(context.Background(), &certificatepkg.RepositoryCertificateCreateRequest{
Certificates: &appsv1.RepositoryCertificateList{
Items: certificateList,
},
Upsert: upsert,
})
errors.CheckError(err)
fmt.Printf("Created entry with %d PEM certificates for repository server %s\n", len(certificates.Items), serverName)
} else {
fmt.Printf("No valid certificates have been detected in the stream.\n")
}
},
}
command.Flags().StringVar(&fromFile, "from", "", "read TLS certificate data from file (default is to read from stdin)")
command.Flags().BoolVar(&upsert, "upsert", false, "Replace existing TLS certificate if certificate is different in input")
return command
}
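The duplicate-subject check above builds on standard library primitives. A standalone sketch of the same idea using encoding/pem and crypto/x509 directly, in place of the certutil helpers (input path is illustrative, and the input is assumed to contain only CERTIFICATE blocks):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"io/ioutil"
	"log"
)

func main() {
	data, err := ioutil.ReadFile("certs.pem") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	seen := map[string]bool{}
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		subject := cert.Subject.String()
		if seen[subject] {
			fmt.Printf("duplicate subject %q skipped\n", subject)
			continue
		}
		seen[subject] = true
		fmt.Printf("accepted certificate for %q\n", subject)
	}
}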
// NewCertAddSSHCommand returns a new instance of an `argocd cert add-ssh` command
func NewCertAddSSHCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
fromFile string
batchProcess bool
upsert bool
certificates []appsv1.RepositoryCertificate
)
var command = &cobra.Command{
Use: "add-ssh --batch",
Short: "Add SSH known host entries for repository servers",
Run: func(c *cobra.Command, args []string) {
conn, certIf := argocdclient.NewClientOrDie(clientOpts).NewCertClientOrDie()
defer util.Close(conn)
var sshKnownHostsLists []string
var err error
// --batch is a flag, but it is mandatory for now.
if batchProcess {
if fromFile != "" {
fmt.Printf("Reading SSH known hosts entries from file '%s'\n", fromFile)
sshKnownHostsLists, err = certutil.ParseSSHKnownHostsFromPath(fromFile)
} else {
fmt.Println("Enter SSH known hosts entries, one per line. Press CTRL-D when finished.")
sshKnownHostsLists, err = certutil.ParseSSHKnownHostsFromStream(os.Stdin)
}
} else {
err = fmt.Errorf("You need to specify --batch; see --help for usage instructions")
}
errors.CheckError(err)
if len(sshKnownHostsLists) == 0 {
errors.CheckError(fmt.Errorf("No valid SSH known hosts data found."))
}
for _, knownHostsEntry := range sshKnownHostsLists {
hostname, certSubType, certData, err := certutil.TokenizeSSHKnownHostsEntry(knownHostsEntry)
errors.CheckError(err)
_, _, err = certutil.KnownHostsLineToPublicKey(knownHostsEntry)
errors.CheckError(err)
certificate := appsv1.RepositoryCertificate{
ServerName: hostname,
CertType: "ssh",
CertSubType: certSubType,
CertData: certData,
}
certificates = append(certificates, certificate)
}
certList := &appsv1.RepositoryCertificateList{Items: certificates}
response, err := certIf.CreateCertificate(context.Background(), &certificatepkg.RepositoryCertificateCreateRequest{
Certificates: certList,
Upsert: upsert,
})
errors.CheckError(err)
fmt.Printf("Successfully created %d SSH known host entries\n", len(response.Items))
},
}
command.Flags().StringVar(&fromFile, "from", "", "Read SSH known hosts data from file (default is to read from stdin)")
command.Flags().BoolVar(&batchProcess, "batch", false, "Perform batch processing by reading in SSH known hosts data (mandatory flag)")
command.Flags().BoolVar(&upsert, "upsert", false, "Replace existing SSH server public host keys if key is different in input")
return command
}
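TokenizeSSHKnownHostsEntry and KnownHostsLineToPublicKey live in util/cert and are not shown in this diff; under the hood, golang.org/x/crypto/ssh can parse such entries directly. A minimal sketch (file path is illustrative; ParseKnownHosts reads the first entry of its input):

package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Path is illustrative; any OpenSSH known_hosts file works.
	data, err := ioutil.ReadFile("/etc/ssh/ssh_known_hosts")
	if err != nil {
		log.Fatal(err)
	}
	// ParseKnownHosts returns marker, hostnames, public key, comment, and the rest of the input.
	_, hosts, pubKey, _, _, err := ssh.ParseKnownHosts(data)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host(s) %v use a %s key\n", hosts, pubKey.Type())
}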
// NewCertRemoveCommand returns a new instance of an `argocd cert rm` command
func NewCertRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
certType string
certSubType string
certQuery certificatepkg.RepositoryCertificateQuery
)
var command = &cobra.Command{
Use: "rm REPOSERVER",
Short: "Remove certificate of TYPE for REPOSERVER",
Run: func(c *cobra.Command, args []string) {
if len(args) < 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
conn, certIf := argocdclient.NewClientOrDie(clientOpts).NewCertClientOrDie()
defer util.Close(conn)
hostNamePattern := args[0]
// Prevent the user from specifying a wildcard as hostname, as a
// precautionary measure -- the user could still use "?*" or any other pattern to
// remove all certificates, but it's less likely that it happens by
// accident.
if hostNamePattern == "*" {
err := fmt.Errorf("A single wildcard is not allowed as REPOSERVER name.")
errors.CheckError(err)
}
certQuery = certificatepkg.RepositoryCertificateQuery{
HostNamePattern: hostNamePattern,
CertType: certType,
CertSubType: certSubType,
}
removed, err := certIf.DeleteCertificate(context.Background(), &certQuery)
errors.CheckError(err)
if len(removed.Items) > 0 {
for _, cert := range removed.Items {
fmt.Printf("Removed cert for '%s' of type '%s' (subtype '%s')\n", cert.ServerName, cert.CertType, cert.CertSubType)
}
} else {
fmt.Println("No certificates were removed (none matched the given patterns)")
}
},
}
command.Flags().StringVar(&certType, "cert-type", "", "Only remove certs of given type (ssh, https)")
command.Flags().StringVar(&certSubType, "cert-sub-type", "", "Only remove certs of given sub-type (only for ssh)")
return command
}
// NewCertListCommand returns a new instance of an `argocd cert list` command
func NewCertListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
certType string
hostNamePattern string
sortOrder string
)
var command = &cobra.Command{
Use: "list",
Short: "List configured certificates",
Run: func(c *cobra.Command, args []string) {
if certType != "" {
switch certType {
case "ssh":
case "https":
default:
fmt.Println("cert-type must be either ssh or https")
os.Exit(1)
}
}
conn, certIf := argocdclient.NewClientOrDie(clientOpts).NewCertClientOrDie()
defer util.Close(conn)
certificates, err := certIf.ListCertificates(context.Background(), &certificatepkg.RepositoryCertificateQuery{HostNamePattern: hostNamePattern, CertType: certType})
errors.CheckError(err)
printCertTable(certificates.Items, sortOrder)
},
}
command.Flags().StringVar(&sortOrder, "sort", "", "set display sort order, valid: 'hostname', 'type'")
command.Flags().StringVar(&certType, "cert-type", "", "only list certificates of given type, valid: 'ssh','https'")
command.Flags().StringVar(&hostNamePattern, "hostname-pattern", "", "only list certificates for hosts matching given glob-pattern")
return command
}
// Print table of certificate info
func printCertTable(certs []appsv1.RepositoryCertificate, sortOrder string) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "HOSTNAME\tTYPE\tSUBTYPE\tINFO\n")
if sortOrder == "hostname" || sortOrder == "" {
sort.Slice(certs, func(i, j int) bool {
return certs[i].ServerName < certs[j].ServerName
})
} else if sortOrder == "type" {
sort.Slice(certs, func(i, j int) bool {
return certs[i].CertType < certs[j].CertType
})
}
for _, c := range certs {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", c.ServerName, c.CertType, c.CertSubType, c.CertInfo)
}
_ = w.Flush()
}

View File

@@ -229,28 +229,8 @@ func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
return command
}
// Print table of cluster information
func printClusterTable(clusters []argoappv1.Cluster) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "SERVER\tNAME\tVERSION\tSTATUS\tMESSAGE\n")
for _, c := range clusters {
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\n", c.Server, c.Name, c.ServerVersion, c.ConnectionState.Status, c.ConnectionState.Message)
}
_ = w.Flush()
}
// Print list of cluster servers
func printClusterServers(clusters []argoappv1.Cluster) {
for _, c := range clusters {
fmt.Println(c.Server)
}
}
// NewClusterListCommand returns a new instance of an `argocd cluster list` command
func NewClusterListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
)
var command = &cobra.Command{
Use: "list",
Short: "List configured clusters",
@@ -259,14 +239,14 @@ func NewClusterListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
defer util.Close(conn)
clusters, err := clusterIf.List(context.Background(), &clusterpkg.ClusterQuery{})
errors.CheckError(err)
if output == "server" {
printClusterServers(clusters.Items)
} else {
printClusterTable(clusters.Items)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "SERVER\tNAME\tSTATUS\tMESSAGE\n")
for _, c := range clusters.Items {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", c.Server, c.Name, c.ConnectionState.Status, c.ConnectionState.Message)
}
_ = w.Flush()
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: wide|server")
return command
}

View File

@@ -1,31 +0,0 @@
package commands
import (
"testing"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
)
func Test_printClusterTable(t *testing.T) {
printClusterTable([]v1alpha1.Cluster{
{
Server: "my-server",
Name: "my-name",
Config: v1alpha1.ClusterConfig{
Username: "my-username",
Password: "my-password",
BearerToken: "my-bearer-token",
TLSClientConfig: v1alpha1.TLSClientConfig{},
AWSAuthConfig: nil,
},
ConnectionState: v1alpha1.ConnectionState{
Status: "my-status",
Message: "my-message",
ModifiedAt: &metav1.Time{},
},
ServerVersion: "my-version",
},
})
}

View File

@@ -1,233 +0,0 @@
package commands
import (
"fmt"
"io"
"log"
"os"
"github.com/spf13/cobra"
)
const (
bashCompletionFunc = `
__argocd_list_apps() {
local -a argocd_out
if argocd_out=($(argocd app list --output name 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_list_app_history() {
local app=$1
local -a argocd_out
if argocd_out=($(argocd app history $app --output id 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_app_rollback() {
local -a command
for comp_word in "${COMP_WORDS[@]}"; do
if [[ $comp_word =~ ^-.*$ ]]; then
continue
fi
command+=($comp_word)
done
# fourth arg is app (if present), e.g. argocd app rollback guestbook
local app=${command[3]}
local id=${command[4]}
if [[ -z $app || $app == $cur ]]; then
__argocd_list_apps
elif [[ -z $id || $id == $cur ]]; then
__argocd_list_app_history $app
fi
}
__argocd_list_servers() {
local -a argocd_out
if argocd_out=($(argocd cluster list --output server 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_list_repos() {
local -a argocd_out
if argocd_out=($(argocd repo list --output url 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_list_projects() {
local -a argocd_out
if argocd_out=($(argocd proj list --output name 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_list_namespaces() {
local -a argocd_out
if argocd_out=($(kubectl get namespaces --no-headers 2>/dev/null | cut -f1 -d' ' 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_proj_server_namespace() {
local -a command
for comp_word in "${COMP_WORDS[@]}"; do
if [[ $comp_word =~ ^-.*$ ]]; then
continue
fi
command+=($comp_word)
done
# expect something like this: argocd proj add-destination PROJECT SERVER NAMESPACE
local project=${command[3]}
local server=${command[4]}
local namespace=${command[5]}
if [[ -z $project || $project == $cur ]]; then
__argocd_list_projects
elif [[ -z $server || $server == $cur ]]; then
__argocd_list_servers
elif [[ -z $namespace || $namespace == $cur ]]; then
__argocd_list_namespaces
fi
}
__argocd_list_project_role() {
local project="$1"
local -a argocd_out
if argocd_out=($(argocd proj role list "$project" --output=name 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_proj_role(){
local -a command
for comp_word in "${COMP_WORDS[@]}"; do
if [[ $comp_word =~ ^-.*$ ]]; then
continue
fi
command+=($comp_word)
done
# expect something like this: argocd proj role add-policy PROJECT ROLE-NAME
local project=${command[4]}
local role=${command[5]}
if [[ -z $project || $project == $cur ]]; then
__argocd_list_projects
elif [[ -z $role || $role == $cur ]]; then
__argocd_list_project_role $project
fi
}
__argocd_custom_func() {
case ${last_command} in
argocd_app_delete | \
argocd_app_diff | \
argocd_app_edit | \
argocd_app_get | \
argocd_app_history | \
argocd_app_manifests | \
argocd_app_patch-resource | \
argocd_app_set | \
argocd_app_sync | \
argocd_app_terminate-op | \
argocd_app_unset | \
argocd_app_wait | \
argocd_app_create)
__argocd_list_apps
return
;;
argocd_app_rollback)
__argocd_app_rollback
return
;;
argocd_cluster_get | \
argocd_cluster_rm | \
argocd_login | \
argocd_cluster_add)
__argocd_list_servers
return
;;
argocd_repo_rm | \
argocd_repo_add)
__argocd_list_repos
return
;;
argocd_proj_add-destination | \
argocd_proj_remove-destination)
__argocd_proj_server_namespace
return
;;
argocd_proj_add-source | \
argocd_proj_remove-source | \
argocd_proj_allow-cluster-resource | \
argocd_proj_allow-namespace-resource | \
argocd_proj_deny-cluster-resource | \
argocd_proj_deny-namespace-resource | \
argocd_proj_delete | \
argocd_proj_edit | \
argocd_proj_get | \
argocd_proj_set | \
argocd_proj_role_list)
__argocd_list_projects
return
;;
argocd_proj_role_remove-policy | \
argocd_proj_role_add-policy | \
argocd_proj_role_create | \
argocd_proj_role_delete | \
argocd_proj_role_get | \
argocd_proj_role_create-token | \
argocd_proj_role_delete-token)
__argocd_proj_role
return
;;
*)
;;
esac
}
`
)
func NewCompletionCommand() *cobra.Command {
var command = &cobra.Command{
Use: "completion SHELL",
Short: "output shell completion code for the specified shell (bash or zsh)",
Long: `Write bash or zsh shell completion code to standard output.
For bash, ensure you have bash completions installed and enabled.
To access completions in your current shell, run
$ source <(argocd completion bash)
Alternatively, write it to a file and source in .bash_profile
For zsh, output to a file in a directory referenced by the $fpath shell
variable.
`,
Run: func(cmd *cobra.Command, args []string) {
if len(args) != 1 {
cmd.HelpFunc()(cmd, args)
os.Exit(1)
}
shell := args[0]
rootCommand := NewCommand()
rootCommand.BashCompletionFunction = bashCompletionFunc
availableCompletions := map[string]func(io.Writer) error{
"bash": rootCommand.GenBashCompletion,
"zsh": rootCommand.GenZshCompletion,
}
completion, ok := availableCompletions[shell]
if !ok {
fmt.Printf("Invalid shell '%s'. The supported shells are bash and zsh.\n", shell)
os.Exit(1)
}
if err := completion(os.Stdout); err != nil {
log.Fatal(err)
}
},
}
return command
}
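Beyond stdout, the same cobra generators can write the completion script anywhere that implements io.Writer. A short sketch writing bash completions to a file (path is illustrative; assumes the imports already present in this file):

// Inside a program that can construct the root command:
rootCommand := NewCommand()
rootCommand.BashCompletionFunction = bashCompletionFunc
f, err := os.Create("/tmp/argocd-completion.bash") // illustrative path
if err != nil {
	log.Fatal(err)
}
defer f.Close()
if err := rootCommand.GenBashCompletion(f); err != nil {
	log.Fatal(err)
}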

View File

@@ -92,7 +92,7 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
errors.CheckError(err)
oauth2conf, provider, err := acdClient.OIDCConfig(ctx, acdSet)
errors.CheckError(err)
tokenString, refreshToken = oauth2Login(ctx, ssoPort, acdSet.GetOIDCConfig(), oauth2conf, provider)
tokenString, refreshToken = oauth2Login(ctx, ssoPort, oauth2conf, provider)
}
parser := &jwt.Parser{
@@ -154,7 +154,7 @@ func userDisplayName(claims jwt.MapClaims) string {
// oauth2Login opens a browser, runs a temporary HTTP server to delegate OAuth2 login flow and
// returns the JWT token and a refresh token (if supported)
func oauth2Login(ctx context.Context, port int, oidcSettings *settingspkg.OIDCConfig, oauth2conf *oauth2.Config, provider *oidc.Provider) (string, string) {
func oauth2Login(ctx context.Context, port int, oauth2conf *oauth2.Config, provider *oidc.Provider) (string, string) {
oauth2conf.RedirectURL = fmt.Sprintf("http://localhost:%d/auth/callback", port)
oidcConf, err := oidcutil.ParseConfig(provider)
errors.CheckError(err)
@@ -243,17 +243,12 @@ func oauth2Login(ctx context.Context, port int, oidcSettings *settingspkg.OIDCCo
fmt.Printf("Opening browser for authentication\n")
var url string
grantType := oidcutil.InferGrantType(oidcConf)
opts := []oauth2.AuthCodeOption{oauth2.AccessTypeOffline}
if claimsRequested := oidcSettings.GetIDTokenClaims(); claimsRequested != nil {
opts = oidcutil.AppendClaimsAuthenticationRequestParameter(opts, claimsRequested)
}
grantType := oidcutil.InferGrantType(oauth2conf, oidcConf)
switch grantType {
case oidcutil.GrantTypeAuthorizationCode:
url = oauth2conf.AuthCodeURL(stateNonce, opts...)
url = oauth2conf.AuthCodeURL(stateNonce, oauth2.AccessTypeOffline)
case oidcutil.GrantTypeImplicit:
url = oidcutil.ImplicitFlowURL(oauth2conf, stateNonce, opts...)
url = oidcutil.ImplicitFlowURL(oauth2conf, stateNonce, oauth2.AccessTypeOffline)
default:
log.Fatalf("Unsupported grant type: %v", grantType)
}
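For context, AuthCodeURL and the offline access option come straight from golang.org/x/oauth2. A self-contained sketch of building the authorization URL (endpoints, scopes, and client ID are illustrative, not Argo CD's actual values):

package main

import (
	"fmt"

	"golang.org/x/oauth2"
)

func main() {
	conf := &oauth2.Config{
		ClientID:    "argo-cd-cli", // illustrative
		RedirectURL: "http://localhost:8085/auth/callback",
		Scopes:      []string{"openid", "profile", "email"},
		Endpoint: oauth2.Endpoint{
			AuthURL:  "https://dex.example.com/auth", // illustrative issuer
			TokenURL: "https://dex.example.com/token",
		},
	}
	// AccessTypeOffline requests a refresh token alongside the access token.
	fmt.Println(conf.AuthCodeURL("state-nonce", oauth2.AccessTypeOffline))
}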

View File

@@ -1,12 +1,10 @@
package commands
import (
"bufio"
"context"
"encoding/json"
"fmt"
"io"
"net/url"
"os"
"strings"
"text/tabwriter"
@@ -18,7 +16,6 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/pflag"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/pointer"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
@@ -26,16 +23,13 @@ import (
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/config"
"github.com/argoproj/argo-cd/util/git"
)
type projectOpts struct {
description string
destinations []string
sources []string
orphanedResourcesEnabled bool
orphanedResourcesWarn bool
description string
destinations []string
sources []string
}
type policyOpts struct {
@@ -85,7 +79,6 @@ func NewProjectCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
command.AddCommand(NewProjectDenyClusterResourceCommand(clientOpts))
command.AddCommand(NewProjectAllowNamespaceResourceCommand(clientOpts))
command.AddCommand(NewProjectDenyNamespaceResourceCommand(clientOpts))
command.AddCommand(NewProjectWindowsCommand(clientOpts))
return command
}
@@ -93,21 +86,7 @@ func addProjFlags(command *cobra.Command, opts *projectOpts) {
command.Flags().StringVarP(&opts.description, "description", "", "", "Project description")
command.Flags().StringArrayVarP(&opts.destinations, "dest", "d", []string{},
"Permitted destination server and namespace (e.g. https://192.168.99.100:8443,default)")
command.Flags().StringArrayVarP(&opts.sources, "src", "s", []string{}, "Permitted source repository URL")
command.Flags().BoolVar(&opts.orphanedResourcesEnabled, "orphaned-resources", false, "Enables orphaned resources monitoring")
command.Flags().BoolVar(&opts.orphanedResourcesWarn, "orphaned-resources-warn", false, "Specifies if applications should be a warning condition when orphaned resources detected")
}
func getOrphanedResourcesSettings(c *cobra.Command, opts projectOpts) *v1alpha1.OrphanedResourcesMonitorSettings {
warnChanged := c.Flag("orphaned-resources-warn").Changed
if opts.orphanedResourcesEnabled || warnChanged {
settings := v1alpha1.OrphanedResourcesMonitorSettings{}
if warnChanged {
settings.Warn = pointer.BoolPtr(opts.orphanedResourcesWarn)
}
return &settings
}
return nil
command.Flags().StringArrayVarP(&opts.sources, "src", "s", []string{}, "Permitted git source repository URL")
}
func addPolicyFlags(command *cobra.Command, opts *policyOpts) {
@@ -124,63 +103,32 @@ func humanizeTimestamp(epoch int64) string {
// NewProjectCreateCommand returns a new instance of an `argocd proj create` command
func NewProjectCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts projectOpts
fileURL string
upsert bool
opts projectOpts
)
var command = &cobra.Command{
Use: "create PROJECT",
Short: "Create a project",
Run: func(c *cobra.Command, args []string) {
var proj v1alpha1.AppProject
if fileURL == "-" {
// read stdin
reader := bufio.NewReader(os.Stdin)
err := config.UnmarshalReader(reader, &proj)
if err != nil {
log.Fatalf("unable to read manifest from stdin: %v", err)
}
} else if fileURL != "" {
// read uri
parsedURL, err := url.ParseRequestURI(fileURL)
if err != nil || !(parsedURL.Scheme == "http" || parsedURL.Scheme == "https") {
err = config.UnmarshalLocalFile(fileURL, &proj)
} else {
err = config.UnmarshalRemoteFile(fileURL, &proj)
}
errors.CheckError(err)
if len(args) == 1 && args[0] != proj.Name {
log.Fatalf("project name '%s' does not match project spec metadata.name '%s'", args[0], proj.Name)
}
} else {
// read arguments
if len(args) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
proj = v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{Name: projName},
Spec: v1alpha1.AppProjectSpec{
Description: opts.description,
Destinations: opts.GetDestinations(),
SourceRepos: opts.sources,
OrphanedResources: getOrphanedResourcesSettings(c, opts),
},
}
if len(args) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
proj := v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{Name: projName},
Spec: v1alpha1.AppProjectSpec{
Description: opts.description,
Destinations: opts.GetDestinations(),
SourceRepos: opts.sources,
},
}
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
_, err := projIf.Create(context.Background(), &projectpkg.ProjectCreateRequest{Project: &proj, Upsert: upsert})
_, err := projIf.Create(context.Background(), &projectpkg.ProjectCreateRequest{Project: &proj})
errors.CheckError(err)
},
}
command.Flags().BoolVar(&upsert, "upsert", false, "Allow overriding a project with the same name even if the supplied project spec differs from the existing spec")
command.Flags().StringVarP(&fileURL, "file", "f", "", "Filename or URL to Kubernetes manifests for the project")
err := command.Flags().SetAnnotation("file", cobra.BashCompFilenameExt, []string{"json", "yaml", "yml"})
if err != nil {
log.Fatal(err)
}
addProjFlags(command, &opts)
return command
}
@@ -215,8 +163,6 @@ func NewProjectSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
proj.Spec.Destinations = opts.GetDestinations()
case "src":
proj.Spec.SourceRepos = opts.sources
case "orphaned-resources", "orphaned-resources-warn":
proj.Spec.OrphanedResources = getOrphanedResourcesSettings(c, opts)
}
})
if visited == 0 {
@@ -497,28 +443,8 @@ func NewProjectDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
return command
}
// Print list of project names
func printProjectNames(projects []v1alpha1.AppProject) {
for _, p := range projects {
fmt.Println(p.Name)
}
}
// Print table of project info
func printProjectTable(projects []v1alpha1.AppProject) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\tORPHANED-RESOURCES\n")
for _, p := range projects {
printProjectLine(w, &p)
}
_ = w.Flush()
}
// NewProjectListCommand returns a new instance of an `argocd proj list` command
func NewProjectListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
)
var command = &cobra.Command{
Use: "list",
Short: "List projects",
@@ -527,24 +453,17 @@ func NewProjectListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
defer util.Close(conn)
projects, err := projIf.List(context.Background(), &projectpkg.ProjectQuery{})
errors.CheckError(err)
if output == "name" {
printProjectNames(projects.Items)
} else {
printProjectTable(projects.Items)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\n")
for _, p := range projects.Items {
printProjectLine(w, &p)
}
_ = w.Flush()
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: wide|name")
return command
}
func formatOrphanedResources(p *v1alpha1.AppProject) string {
if p.Spec.OrphanedResources == nil {
return "disabled"
}
return fmt.Sprintf("enabled (warn=%v)", p.Spec.OrphanedResources.IsWarn())
}
func printProjectLine(w io.Writer, p *v1alpha1.AppProject) {
var destinations, sourceRepos, clusterWhitelist, namespaceBlacklist string
switch len(p.Spec.Destinations) {
@@ -577,7 +496,7 @@ func printProjectLine(w io.Writer, p *v1alpha1.AppProject) {
default:
namespaceBlacklist = fmt.Sprintf("%d resources", len(p.Spec.NamespaceResourceBlacklist))
}
fmt.Fprintf(w, "%s\t%s\t%v\t%v\t%v\t%v\t%v\n", p.Name, p.Spec.Description, destinations, sourceRepos, clusterWhitelist, namespaceBlacklist, formatOrphanedResources(p))
fmt.Fprintf(w, "%s\t%s\t%v\t%v\t%v\t%v\n", p.Name, p.Spec.Description, destinations, sourceRepos, clusterWhitelist, namespaceBlacklist)
}
// NewProjectGetCommand returns a new instance of an `argocd proj get` command
@@ -638,7 +557,6 @@ func NewProjectGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
for i := 1; i < len(p.Spec.NamespaceResourceBlacklist); i++ {
fmt.Printf(printProjFmtStr, "", fmt.Sprintf("%s/%s", p.Spec.NamespaceResourceBlacklist[i].Group, p.Spec.NamespaceResourceBlacklist[i].Kind))
}
fmt.Printf(printProjFmtStr, "Orphaned Resources:", formatOrphanedResources(p))
},
}
return command

View File

@@ -15,6 +15,7 @@ import (
projectpkg "github.com/argoproj/argo-cd/pkg/apiclient/project"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
projectutil "github.com/argoproj/argo-cd/util/project"
)
const (
@@ -39,8 +40,6 @@ func NewProjectRoleCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
roleCommand.AddCommand(NewProjectRoleDeleteTokenCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleAddPolicyCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleRemovePolicyCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleAddGroupCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleRemoveGroupCommand(clientOpts))
return roleCommand
}
@@ -65,7 +64,7 @@ func NewProjectRoleAddPolicyCommand(clientOpts *argocdclient.ClientOptions) *cob
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
role, roleIndex, err := proj.GetRoleByName(roleName)
role, roleIndex, err := projectutil.GetRoleByName(proj, roleName)
errors.CheckError(err)
policy := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
@@ -100,7 +99,7 @@ func NewProjectRoleRemovePolicyCommand(clientOpts *argocdclient.ClientOptions) *
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
role, roleIndex, err := proj.GetRoleByName(roleName)
role, roleIndex, err := projectutil.GetRoleByName(proj, roleName)
errors.CheckError(err)
policyToRemove := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
@@ -145,7 +144,7 @@ func NewProjectRoleCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
_, _, err = proj.GetRoleByName(roleName)
_, _, err = projectutil.GetRoleByName(proj, roleName)
if err == nil {
fmt.Printf("Role '%s' already exists\n", roleName)
return
@@ -179,7 +178,7 @@ func NewProjectRoleDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
_, index, err := proj.GetRoleByName(roleName)
_, index, err := projectutil.GetRoleByName(proj, roleName)
if err != nil {
fmt.Printf("Role '%s' does not exist in project\n", roleName)
return
@@ -249,28 +248,8 @@ func NewProjectRoleDeleteTokenCommand(clientOpts *argocdclient.ClientOptions) *c
return command
}
// Print list of project role names
func printProjectRoleListName(roles []v1alpha1.ProjectRole) {
for _, role := range roles {
fmt.Println(role.Name)
}
}
// Print table of project roles
func printProjectRoleListTable(roles []v1alpha1.ProjectRole) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ROLE-NAME\tDESCRIPTION\n")
for _, role := range roles {
fmt.Fprintf(w, "%s\t%s\n", role.Name, role.Description)
}
_ = w.Flush()
}
// NewProjectRoleListCommand returns a new instance of an `argocd proj roles list` command
func NewProjectRoleListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
)
var command = &cobra.Command{
Use: "list PROJECT",
Short: "List all the roles in a project",
@@ -285,14 +264,14 @@ func NewProjectRoleListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
project, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
if output == "name" {
printProjectRoleListName(project.Spec.Roles)
} else {
printProjectRoleListTable(project.Spec.Roles)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ROLE-NAME\tDESCRIPTION\n")
for _, role := range project.Spec.Roles {
fmt.Fprintf(w, "%s\t%s\n", role.Name, role.Description)
}
_ = w.Flush()
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: wide|name")
return command
}
@@ -314,7 +293,7 @@ func NewProjectRoleGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
role, _, err := proj.GetRoleByName(roleName)
role, _, err := projectutil.GetRoleByName(proj, roleName)
errors.CheckError(err)
printRoleFmtStr := "%-15s%s\n"
@@ -343,9 +322,9 @@ func NewProjectRoleGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
func NewProjectRoleAddGroupCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "add-group PROJECT ROLE-NAME GROUP-CLAIM",
Short: "Add a group claim to a project role",
Short: "Add a policy to a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
@@ -354,9 +333,9 @@ func NewProjectRoleAddGroupCommand(clientOpts *argocdclient.ClientOptions) *cobr
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
updated, err := proj.AddGroupToRole(roleName, groupName)
updated, err := projectutil.AddGroupToRole(proj, roleName, groupName)
errors.CheckError(err)
if !updated {
if updated {
fmt.Printf("Group '%s' already present in role '%s'\n", groupName, roleName)
return
}
@@ -383,7 +362,7 @@ func NewProjectRoleRemoveGroupCommand(clientOpts *argocdclient.ClientOptions) *c
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
updated, err := proj.RemoveGroupFromRole(roleName, groupName)
updated, err := projectutil.RemoveGroupFromRole(proj, roleName, groupName)
errors.CheckError(err)
if !updated {
fmt.Printf("Group '%s' not present in role '%s'\n", groupName, roleName)

View File

@@ -1,311 +0,0 @@
package commands
import (
"context"
"os"
"github.com/spf13/cobra"
"fmt"
"strings"
"text/tabwriter"
"strconv"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
projectpkg "github.com/argoproj/argo-cd/pkg/apiclient/project"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
)
// NewProjectWindowsCommand returns a new instance of the `argocd proj windows` command
func NewProjectWindowsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
roleCommand := &cobra.Command{
Use: "windows",
Short: "Manage a project's sync windows",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
},
}
roleCommand.AddCommand(NewProjectWindowsDisableManualSyncCommand(clientOpts))
roleCommand.AddCommand(NewProjectWindowsEnableManualSyncCommand(clientOpts))
roleCommand.AddCommand(NewProjectWindowsAddWindowCommand(clientOpts))
roleCommand.AddCommand(NewProjectWindowsDeleteCommand(clientOpts))
roleCommand.AddCommand(NewProjectWindowsListCommand(clientOpts))
roleCommand.AddCommand(NewProjectWindowsUpdateCommand(clientOpts))
return roleCommand
}
// NewProjectWindowsDisableManualSyncCommand returns a new instance of an `argocd proj windows disable-manual-sync` command
func NewProjectWindowsDisableManualSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "disable-manual-sync PROJECT ID",
Short: "Disable manual sync for a sync window",
Long: "Disable manual sync for a sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
id, err := strconv.Atoi(args[1])
errors.CheckError(err)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
for i, window := range proj.Spec.SyncWindows {
if id == i {
window.ManualSync = false
}
}
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
return command
}
// NewProjectWindowsEnableManualSyncCommand returns a new instance of an `argocd proj windows enable-manual-sync` command
func NewProjectWindowsEnableManualSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "enable-manual-sync PROJECT ID",
Short: "Enable manual sync for a sync window",
Long: "Enable manual sync for a sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
id, err := strconv.Atoi(args[1])
errors.CheckError(err)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
for i, window := range proj.Spec.SyncWindows {
if id == i {
window.ManualSync = true
}
}
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
return command
}
// NewProjectWindowsAddWindowCommand returns a new instance of an `argocd proj windows add` command
func NewProjectWindowsAddWindowCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
kind string
schedule string
duration string
applications []string
namespaces []string
clusters []string
manualSync bool
)
var command = &cobra.Command{
Use: "add PROJECT",
Short: "Add a sync window to a project",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
err = proj.Spec.AddWindow(kind, schedule, duration, applications, namespaces, clusters, manualSync)
errors.CheckError(err)
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
command.Flags().StringVarP(&kind, "kind", "k", "", "Sync window kind, either allow or deny")
command.Flags().StringVar(&schedule, "schedule", "", "Sync window schedule in cron format. (e.g. --schedule \"0 22 * * *\")")
command.Flags().StringVar(&duration, "duration", "", "Sync window duration. (e.g. --duration 1h)")
command.Flags().StringSliceVar(&applications, "applications", []string{}, "Applications that the schedule will be applied to. Comma separated, wildcards supported (e.g. --applications prod-\\*,website)")
command.Flags().StringSliceVar(&namespaces, "namespaces", []string{}, "Namespaces that the schedule will be applied to. Comma separated, wildcards supported (e.g. --namespaces default,\\*-prod)")
command.Flags().StringSliceVar(&clusters, "clusters", []string{}, "Clusters that the schedule will be applied to. Comma separated, wildcards supported (e.g. --clusters prod,staging)")
command.Flags().BoolVar(&manualSync, "manual-sync", false, "Allow manual syncs for both deny and allow windows")
return command
}
// NewProjectWindowsDeleteCommand returns a new instance of an `argocd proj windows delete` command
func NewProjectWindowsDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "delete PROJECT ID",
Short: "Delete a sync window from a project. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
id, err := strconv.Atoi(args[1])
errors.CheckError(err)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
err = proj.Spec.DeleteWindow(id)
errors.CheckError(err)
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
return command
}
// NewProjectWindowsUpdateCommand returns a new instance of an `argocd proj windows update` command
func NewProjectWindowsUpdateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
schedule string
duration string
applications []string
namespaces []string
clusters []string
)
var command = &cobra.Command{
Use: "update PROJECT ID",
Short: "Update a project sync window",
Long: "Update a project sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
id, err := strconv.Atoi(args[1])
errors.CheckError(err)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
for i, window := range proj.Spec.SyncWindows {
if id == i {
err := window.Update(schedule, duration, applications, namespaces, clusters)
if err != nil {
errors.CheckError(err)
}
}
}
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
command.Flags().StringVar(&schedule, "schedule", "", "Sync window schedule in cron format. (e.g. --schedule \"0 22 * * *\")")
command.Flags().StringVar(&duration, "duration", "", "Sync window duration. (e.g. --duration 1h)")
command.Flags().StringSliceVar(&applications, "applications", []string{}, "Applications that the schedule will be applied to. Comma separated, wildcards supported (e.g. --applications prod-\\*,website)")
command.Flags().StringSliceVar(&namespaces, "namespaces", []string{}, "Namespaces that the schedule will be applied to. Comma separated, wildcards supported (e.g. --namespaces default,\\*-prod)")
command.Flags().StringSliceVar(&clusters, "clusters", []string{}, "Clusters that the schedule will be applied to. Comma separated, wildcards supported (e.g. --clusters prod,staging)")
return command
}
// NewProjectWindowsListCommand returns a new instance of an `argocd proj windows list` command
func NewProjectWindowsListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "list PROJECT",
Short: "List project sync windows",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
printSyncWindows(proj)
},
}
return command
}
// Print table of sync window data
func printSyncWindows(proj *v1alpha1.AppProject) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
var fmtStr string
headers := []interface{}{"ID", "STATUS", "KIND", "SCHEDULE", "DURATION", "APPLICATIONS", "NAMESPACES", "CLUSTERS", "MANUALSYNC"}
fmtStr = "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n"
fmt.Fprintf(w, fmtStr, headers...)
if proj.Spec.SyncWindows.HasWindows() {
for i, window := range proj.Spec.SyncWindows {
vals := []interface{}{
strconv.Itoa(i),
formatBoolOutput(window.Active()),
window.Kind,
window.Schedule,
window.Duration,
formatListOutput(window.Applications),
formatListOutput(window.Namespaces),
formatListOutput(window.Clusters),
formatManualOutput(window.ManualSync),
}
fmt.Fprintf(w, fmtStr, vals...)
}
}
_ = w.Flush()
}
func formatListOutput(list []string) string {
var o string
if len(list) == 0 {
o = "-"
} else {
o = strings.Join(list, ",")
}
return o
}
func formatBoolOutput(active bool) string {
var o string
if active {
o = "Active"
} else {
o = "Inactive"
}
return o
}
func formatManualOutput(active bool) string {
var o string
if active {
o = "Enabled"
} else {
o = "Disabled"
}
return o
}
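How a window's Active() state is derived is not shown in this diff. A rough sketch of the idea using github.com/robfig/cron (an assumption for illustration, not the actual implementation): a window is open if now falls within one duration of the most recent cron activation.

package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron"
)

// windowActive reports whether the window defined by a cron schedule and a
// duration is currently open.
func windowActive(schedule string, duration time.Duration, now time.Time) (bool, error) {
	s, err := cron.ParseStandard(schedule)
	if err != nil {
		return false, err
	}
	// The next activation after (now - duration) is the most recent one that
	// could still be open; the window is active if now lies within it.
	start := s.Next(now.Add(-duration))
	return start.Before(now) && now.Before(start.Add(duration)), nil
}

func main() {
	active, err := windowActive("0 22 * * *", time.Hour, time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Println("window active:", active)
}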

View File

@@ -67,7 +67,7 @@ func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comm
errors.CheckError(err)
oauth2conf, provider, err := acdClient.OIDCConfig(ctx, acdSet)
errors.CheckError(err)
tokenString, refreshToken = oauth2Login(ctx, ssoPort, acdSet.GetOIDCConfig(), oauth2conf, provider)
tokenString, refreshToken = oauth2Login(ctx, ssoPort, oauth2conf, provider)
}
localCfg.UpsertUser(localconfig.User{

View File

@@ -10,7 +10,6 @@ import (
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
repositorypkg "github.com/argoproj/argo-cd/pkg/apiclient/repository"
@@ -24,7 +23,7 @@ import (
func NewRepoCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "repo",
Short: "Manage repository connection parameters",
Short: "Manage git repository credentials",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
@@ -40,117 +39,45 @@ func NewRepoCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
// NewRepoAddCommand returns a new instance of an `argocd repo add` command
func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
repo appsv1.Repository
upsert bool
sshPrivateKeyPath string
insecureIgnoreHostKey bool
insecureSkipServerVerification bool
tlsClientCertPath string
tlsClientCertKeyPath string
enableLfs bool
repo appsv1.Repository
upsert bool
sshPrivateKeyPath string
insecureIgnoreHostKey bool
)
// For better readability and easier formatting
var repoAddExamples = ` # Add a Git repository via SSH using a private key for authentication, ignoring the server's host key:
argocd repo add git@git.example.com:repos/repo --insecure-ignore-host-key --ssh-private-key-path ~/id_rsa
# Add a private Git repository via HTTPS using username/password and TLS client certificates:
argocd repo add https://git.example.com/repos/repo --username git --password secret --tls-client-cert-path ~/mycert.crt --tls-client-cert-key-path ~/mycert.key
# Add a private Git repository via HTTPS using username/password without verifying the server's TLS certificate
argocd repo add https://git.example.com/repos/repo --username git --password secret --insecure-skip-server-verification
# Add a public Helm repository named 'stable' via HTTPS
argocd repo add https://kubernetes-charts.storage.googleapis.com --type helm --name stable
# Add a private Helm repository named 'stable' via HTTPS
argocd repo add https://kubernetes-charts.storage.googleapis.com --type helm --name stable --username test --password test
`
var command = &cobra.Command{
Use: "add REPOURL",
Short: "Add git repository connection parameters",
Example: repoAddExamples,
Use: "add REPO",
Short: "Add git repository credentials",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
// Repository URL
repo.Repo = args[0]
// Specifying ssh-private-key-path is only valid for SSH repositories
if sshPrivateKeyPath != "" {
if ok, _ := git.IsSSHURL(repo.Repo); ok {
keyData, err := ioutil.ReadFile(sshPrivateKeyPath)
if err != nil {
log.Fatal(err)
}
repo.SSHPrivateKey = string(keyData)
} else {
err := fmt.Errorf("--ssh-private-key-path is only supported for SSH repositories.")
errors.CheckError(err)
keyData, err := ioutil.ReadFile(sshPrivateKeyPath)
if err != nil {
log.Fatal(err)
}
repo.SSHPrivateKey = string(keyData)
}
// tls-client-cert-path and tls-client-cert-key-path must always be
// specified together
if (tlsClientCertPath != "" && tlsClientCertKeyPath == "") || (tlsClientCertPath == "" && tlsClientCertKeyPath != "") {
err := fmt.Errorf("--tls-client-cert-path and --tls-client-cert-key-path must be specified together")
errors.CheckError(err)
}
// Specifying tls-client-cert-path is only valid for HTTPS repositories
if tlsClientCertPath != "" {
if git.IsHTTPSURL(repo.Repo) {
tlsCertData, err := ioutil.ReadFile(tlsClientCertPath)
errors.CheckError(err)
tlsCertKey, err := ioutil.ReadFile(tlsClientCertKeyPath)
errors.CheckError(err)
repo.TLSClientCertData = string(tlsCertData)
repo.TLSClientCertKey = string(tlsCertKey)
} else {
err := fmt.Errorf("--tls-client-cert-path is only supported for HTTPS repositories")
errors.CheckError(err)
}
}
// InsecureIgnoreHostKey is deprecated and only here for backwards compat
repo.InsecureIgnoreHostKey = insecureIgnoreHostKey
repo.Insecure = insecureSkipServerVerification
repo.EnableLFS = enableLfs
if repo.Type == "helm" && repo.Name == "" {
errors.CheckError(fmt.Errorf("Must specify --name for repos of type 'helm'"))
// First test the repo *without* username/password. This gives us a hint on whether this
// is a private repo.
// NOTE: it is important not to run git commands to test git credentials on the user's
// system since it may mess with their git credential store (e.g. osx keychain).
// See issue #315
err := git.TestRepo(repo.Repo, "", "", repo.SSHPrivateKey, repo.InsecureIgnoreHostKey)
if err != nil {
if yes, _ := git.IsSSHURL(repo.Repo); yes {
// If we failed using git SSH credentials, then the repo is automatically bad
log.Fatal(err)
}
// If we can't test the repo, it's probably private. Prompt for credentials and
// let the server test it.
repo.Username, repo.Password = cli.PromptCredentials(repo.Username, repo.Password)
}
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
// If the user set a username but didn't supply a password via --password,
// then we prompt for it
if repo.Username != "" && repo.Password == "" {
repo.Password = cli.PromptPassword(repo.Password)
}
// We let the server check access to the repository before adding it. If
// it is a private repo but we cannot access it with the credentials
// that were supplied, we bail out.
repoAccessReq := repositorypkg.RepoAccessQuery{
Repo: repo.Repo,
Type: repo.Type,
Name: repo.Name,
Username: repo.Username,
Password: repo.Password,
SshPrivateKey: repo.SSHPrivateKey,
TlsClientCertData: repo.TLSClientCertData,
TlsClientCertKey: repo.TLSClientCertKey,
Insecure: repo.IsInsecure(),
}
_, err := repoIf.ValidateAccess(context.Background(), &repoAccessReq)
errors.CheckError(err)
repoCreateReq := repositorypkg.RepoCreateRequest{
Repo: &repo,
Upsert: upsert,
@@ -160,16 +87,10 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
fmt.Printf("repository '%s' added\n", createdRepo.Repo)
},
}
command.Flags().StringVar(&repo.Type, "type", common.DefaultRepoType, "type of the repository, \"git\" or \"helm\"")
command.Flags().StringVar(&repo.Name, "name", "", "name of the repository, mandatory for repositories of type helm")
command.Flags().StringVar(&repo.Username, "username", "", "username to the repository")
command.Flags().StringVar(&repo.Password, "password", "", "password to the repository")
command.Flags().StringVar(&sshPrivateKeyPath, "ssh-private-key-path", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
command.Flags().StringVar(&tlsClientCertPath, "tls-client-cert-path", "", "path to the TLS client cert (must be PEM format)")
command.Flags().StringVar(&tlsClientCertKeyPath, "tls-client-cert-key-path", "", "path to the TLS client cert's key path (must be PEM format)")
command.Flags().BoolVar(&insecureIgnoreHostKey, "insecure-ignore-host-key", false, "disables SSH strict host key checking (deprecated, use --insecure-skip-server-verification instead)")
command.Flags().BoolVar(&insecureSkipServerVerification, "insecure-skip-server-verification", false, "disables server certificate and host key checks")
command.Flags().BoolVar(&enableLfs, "enable-lfs", false, "enable git-lfs (Large File Support) on this repository")
command.Flags().BoolVar(&insecureIgnoreHostKey, "insecure-ignore-host-key", false, "disables SSH strict host key checking")
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing repository with the same name even if the spec differs")
return command
}
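Two pieces of reasoning in the add command above are worth isolating: --tls-client-cert-path and --tls-client-cert-key-path must travel together, and the repo is probed anonymously before the user is prompted for credentials. A stripped-down, runnable sketch of the pairing check; requireTogether is an invented helper, not part of Argo CD:

package main

import (
	"fmt"
	"log"
)

// requireTogether returns an error unless both flag values are set or both
// are empty, mirroring the cert/key validation in NewRepoAddCommand.
func requireTogether(nameA, valA, nameB, valB string) error {
	if (valA == "") != (valB == "") {
		return fmt.Errorf("%s and %s must be specified together", nameA, nameB)
	}
	return nil
}

func main() {
	// Cert supplied without its key: fails the pairing check.
	if err := requireTogether("--tls-client-cert-path", "cert.pem", "--tls-client-cert-key-path", ""); err != nil {
		log.Fatal(err)
	}
}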
@@ -178,7 +99,7 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
func NewRepoRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "rm REPO",
Short: "Remove repository credentials",
Short: "Remove git repository credentials",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
c.HelpFunc()(c, args)
@@ -195,34 +116,8 @@ func NewRepoRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
return command
}
// Print table of repo info
func printRepoTable(repos appsv1.Repositories) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "TYPE\tNAME\tREPO\tINSECURE\tLFS\tUSER\tSTATUS\tMESSAGE\n")
for _, r := range repos {
var username string
if r.Username == "" {
username = "-"
} else {
username = r.Username
}
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%v\t%v\t%s\t%s\t%s\n", r.Type, r.Name, r.Repo, r.IsInsecure(), r.EnableLFS, username, r.ConnectionState.Status, r.ConnectionState.Message)
}
_ = w.Flush()
}
// Print list of repo URLs
func printRepoUrls(repos appsv1.Repositories) {
for _, r := range repos {
fmt.Println(r.Repo)
}
}
// NewRepoListCommand returns a new instance of an `argocd repo list` command
func NewRepoListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
)
var command = &cobra.Command{
Use: "list",
Short: "List configured repositories",
@@ -231,13 +126,13 @@ func NewRepoListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
defer util.Close(conn)
repos, err := repoIf.List(context.Background(), &repositorypkg.RepoQuery{})
errors.CheckError(err)
if output == "url" {
printRepoUrls(repos.Items)
} else {
printRepoTable(repos.Items)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "REPO\tUSER\tSTATUS\tMESSAGE\n")
for _, r := range repos.Items {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", r.Repo, r.Username, r.ConnectionState.Status, r.ConnectionState.Message)
}
_ = w.Flush()
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: wide|url")
return command
}
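Both list renderings above lean on text/tabwriter for column alignment. A self-contained sketch of the same pattern, with made-up repo data:

package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	// minwidth=0, tabwidth=0, padding=2, padchar=' ', flags=0 matches the
	// tabwriter.NewWriter calls in the diff above.
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "REPO\tUSER\tSTATUS\tMESSAGE\n")
	fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", "https://github.com/argoproj/argocd-example-apps", "git", "Successful", "")
	// Columns are only aligned once Flush runs.
	_ = w.Flush()
}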

View File

@@ -36,7 +36,6 @@ func NewCommand() *cobra.Command {
},
}
command.AddCommand(NewCompletionCommand())
command.AddCommand(NewVersionCmd(&clientOpts))
command.AddCommand(NewClusterCommand(&clientOpts, pathOpts))
command.AddCommand(NewApplicationCommand(&clientOpts))
@@ -47,7 +46,6 @@ func NewCommand() *cobra.Command {
command.AddCommand(NewProjectCommand(&clientOpts))
command.AddCommand(NewAccountCommand(&clientOpts))
command.AddCommand(NewLogoutCommand(&clientOpts))
command.AddCommand(NewCertCommand(&clientOpts))
defaultLocalConfigPath, err := localconfig.DefaultLocalConfigPath()
errors.CheckError(err)
@@ -59,6 +57,5 @@ func NewCommand() *cobra.Command {
command.PersistentFlags().StringVar(&clientOpts.AuthToken, "auth-token", config.GetFlag("auth-token", ""), "Authentication token")
command.PersistentFlags().BoolVar(&clientOpts.GRPCWeb, "grpc-web", config.GetBoolFlag("grpc-web"), "Enables gRPC-web protocol. Useful if Argo CD server is behind proxy which does not support HTTP2.")
command.PersistentFlags().StringVar(&logLevel, "loglevel", config.GetFlag("loglevel", "info"), "Set the logging level. One of: debug|info|warn|error")
command.PersistentFlags().StringSliceVarP(&clientOpts.Headers, "header", "H", []string{}, "Sets additional header to all requests made by Argo CD CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers)")
return command
}

View File

@@ -56,9 +56,6 @@ func NewVersionCmd(clientOpts *argocdclient.ClientOptions) *cobra.Command {
fmt.Printf(" Compiler: %s\n", serverVers.Compiler)
fmt.Printf(" Platform: %s\n", serverVers.Platform)
fmt.Printf(" Ksonnet Version: %s\n", serverVers.KsonnetVersion)
fmt.Printf(" Kustomize Version: %s\n", serverVers.KustomizeVersion)
fmt.Printf(" Helm Version: %s\n", serverVers.HelmVersion)
fmt.Printf(" Kubectl Version: %s\n", serverVers.KubectlVersion)
}
},

View File

@@ -15,16 +15,11 @@ const (
ArgoCDConfigMapName = "argocd-cm"
ArgoCDSecretName = "argocd-secret"
ArgoCDRBACConfigMapName = "argocd-rbac-cm"
// Contains SSH known hosts data for connecting to repositories. Gets mounted as a volume into pods
ArgoCDKnownHostsConfigMapName = "argocd-ssh-known-hosts-cm"
// Contains TLS certificate data for connecting to repositories. Gets mounted as a volume into pods
ArgoCDTLSCertsConfigMapName = "argocd-tls-certs-cm"
)
// Some default configurables
// Default system namespace
const (
DefaultSystemNamespace = "kube-system"
DefaultRepoType = "git"
)
// Default listener ports for ArgoCD components
@@ -36,18 +31,6 @@ const (
DefaultPortRepoServerMetrics = 8084
)
// Default paths on the pod's file system
const (
// The default base path where application config is located
DefaultPathAppConfig = "/app/config"
// The default path where TLS certificates for repositories are located
DefaultPathTLSConfig = "/app/config/tls"
// The default path where SSH known hosts are stored
DefaultPathSSHConfig = "/app/config/ssh"
// Default name for the SSH known hosts file
DefaultSSHKnownHostsName = "ssh_known_hosts"
)
// Argo CD application related constants
const (
// KubernetesInternalAPIServerAddr is the address of the k8s API server when accessed from inside the cluster
@@ -76,8 +59,6 @@ const (
LoginEndpoint = "/auth/login"
// CallbackEndpoint is Argo CD's final callback endpoint we reach after OAuth 2.0 login flow has been completed
CallbackEndpoint = "/auth/callback"
// DexCallbackEndpoint is Argo CD's final callback endpoint when Dex is configured
DexCallbackEndpoint = "/api/dex/callback"
// ArgoCDClientAppName is the name of the OAuth client app used when registering our web app with dex
ArgoCDClientAppName = "Argo CD"
// ArgoCDClientAppID is the OAuth client ID we will use when registering our app with dex
@@ -117,6 +98,10 @@ const (
AnnotationKeyManagedBy = "managed-by"
// AnnotationValueManagedByArgoCD is a 'managed-by' annotation value for resources managed by Argo CD
AnnotationValueManagedByArgoCD = "argocd.argoproj.io"
// AnnotationKeyHelmHook is the helm hook annotation
AnnotationKeyHelmHook = "helm.sh/hook"
// AnnotationValueHelmHookCRDInstall is the value of the crd-install helm hook
AnnotationValueHelmHookCRDInstall = "crd-install"
// ResourcesFinalizerName the finalizer value which we inject to finalize deletion of an application
ResourcesFinalizerName = "resources-finalizer.argocd.argoproj.io"
)
@@ -130,19 +115,13 @@ const (
// EnvVarFakeInClusterConfig is an environment variable to fake an in-cluster RESTConfig using
// the current kubectl context (for development purposes)
EnvVarFakeInClusterConfig = "ARGOCD_FAKE_IN_CLUSTER"
// Overrides the location where SSH known hosts data for repo access is stored
EnvVarSSHDataPath = "ARGOCD_SSH_DATA_PATH"
// Overrides the location where TLS certificate data for repo access is stored
EnvVarTLSDataPath = "ARGOCD_TLS_DATA_PATH"
// Specifies the number of attempts for git remote operations
EnvGitAttemptsCount = "ARGOCD_GIT_ATTEMPTS_COUNT"
)
const (
// MinClientVersion is the minimum client version that can interface with this API server.
// When introducing breaking changes to the API or datastructures, this number should be bumped.
// The value here may be lower than the current value in VERSION
MinClientVersion = "1.3.0"
MinClientVersion = "1.0.0"
// CacheVersion is the version of objects cached using util/cache/cache.go.
// The number should be bumped on a backward-incompatible change to make sure the cache is invalidated after an upgrade.
CacheVersion = "1.0.0"

View File

@@ -12,7 +12,6 @@ import (
"time"
log "github.com/sirupsen/logrus"
"golang.org/x/sync/semaphore"
v1 "k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -34,7 +33,7 @@ import (
appinformers "github.com/argoproj/argo-cd/pkg/client/informers/externalversions"
"github.com/argoproj/argo-cd/pkg/client/informers/externalversions/application/v1alpha1"
applisters "github.com/argoproj/argo-cd/pkg/client/listers/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
argocache "github.com/argoproj/argo-cd/util/cache"
@@ -46,8 +45,6 @@ import (
const (
updateOperationStateTimeout = 1 * time.Second
// orphanedIndex contains applications that monitor orphaned resources, indexed by namespace
orphanedIndex = "orphaned"
)
type CompareWith int
@@ -81,14 +78,13 @@ type ApplicationController struct {
appStateManager AppStateManager
stateCache statecache.LiveStateCache
statusRefreshTimeout time.Duration
selfHealTimeout time.Duration
repoClientset apiclient.Clientset
repoClientset reposerver.Clientset
db db.ArgoDB
settings *settings_util.ArgoCDSettings
settingsMgr *settings_util.SettingsManager
refreshRequestedApps map[string]CompareWith
refreshRequestedAppsMutex *sync.Mutex
metricsServer *metrics.MetricsServer
kubectlSemaphore *semaphore.Weighted
}
type ApplicationControllerConfig struct {
@@ -102,20 +98,22 @@ func NewApplicationController(
settingsMgr *settings_util.SettingsManager,
kubeClientset kubernetes.Interface,
applicationClientset appclientset.Interface,
repoClientset apiclient.Clientset,
repoClientset reposerver.Clientset,
argoCache *argocache.Cache,
kubectl kube.Kubectl,
appResyncPeriod time.Duration,
selfHealTimeout time.Duration,
metricsPort int,
kubectlParallelismLimit int64,
) (*ApplicationController, error) {
db := db.NewDB(namespace, settingsMgr, kubeClientset)
settings, err := settingsMgr.GetSettings()
if err != nil {
return nil, err
}
kubectlCmd := kube.KubectlCmd{}
ctrl := ApplicationController{
cache: argoCache,
namespace: namespace,
kubeClientset: kubeClientset,
kubectl: kubectl,
kubectl: kubectlCmd,
applicationClientset: applicationClientset,
repoClientset: repoClientset,
appRefreshQueue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
@@ -126,25 +124,17 @@ func NewApplicationController(
refreshRequestedAppsMutex: &sync.Mutex{},
auditLogger: argo.NewAuditLogger(namespace, kubeClientset, "argocd-application-controller"),
settingsMgr: settingsMgr,
selfHealTimeout: selfHealTimeout,
settings: settings,
}
if kubectlParallelismLimit > 0 {
ctrl.kubectlSemaphore = semaphore.NewWeighted(kubectlParallelismLimit)
}
kubectl.SetOnKubectlRun(ctrl.onKubectlRun)
appInformer, appLister, err := ctrl.newApplicationInformerAndLister()
if err != nil {
return nil, err
}
indexers := cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}
projInformer := v1alpha1.NewAppProjectInformer(applicationClientset, namespace, appResyncPeriod, indexers)
appInformer, appLister := ctrl.newApplicationInformerAndLister()
projInformer := v1alpha1.NewAppProjectInformer(applicationClientset, namespace, appResyncPeriod, cache.Indexers{})
metricsAddr := fmt.Sprintf("0.0.0.0:%d", metricsPort)
ctrl.metricsServer = metrics.NewMetricsServer(metricsAddr, appLister, func() error {
_, err := kubeClientset.Discovery().ServerVersion()
return err
})
stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settingsMgr, kubectl, ctrl.metricsServer, ctrl.handleObjectUpdated)
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectl, ctrl.settingsMgr, stateCache, projInformer, ctrl.metricsServer)
stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settings, kubectlCmd, ctrl.metricsServer, ctrl.handleAppUpdated)
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectlCmd, ctrl.settings, stateCache, projInformer, ctrl.metricsServer)
ctrl.appInformer = appInformer
ctrl.appLister = appLister
ctrl.projInformer = projInformer
@@ -154,23 +144,6 @@ func NewApplicationController(
return &ctrl, nil
}
func (ctrl *ApplicationController) onKubectlRun(command string) (util.Closer, error) {
ctrl.metricsServer.IncKubectlExec(command)
if ctrl.kubectlSemaphore != nil {
if err := ctrl.kubectlSemaphore.Acquire(context.Background(), 1); err != nil {
return nil, err
}
ctrl.metricsServer.IncKubectlExecPending(command)
}
return util.NewCloser(func() error {
if ctrl.kubectlSemaphore != nil {
ctrl.kubectlSemaphore.Release(1)
ctrl.metricsServer.DecKubectlExecPending(command)
}
return nil
}), nil
}
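onKubectlRun above bounds concurrent kubectl executions with a weighted semaphore and hands back a Closer that releases the slot. A minimal standalone sketch of the same acquire/release pattern; the worker loop is invented, and only the semaphore usage mirrors the code (a limit of 2 stands in for kubectlParallelismLimit=2):

package main

import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/semaphore"
)

func main() {
	// At most 2 "kubectl" executions may run at once.
	sem := semaphore.NewWeighted(2)
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			if err := sem.Acquire(context.Background(), 1); err != nil {
				return
			}
			// Release plays the role of the returned Closer.
			defer sem.Release(1)
			fmt.Printf("worker %d holds a kubectl slot\n", n)
		}(i)
	}
	wg.Wait()
}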
func isSelfReferencedApp(app *appv1.Application, ref v1.ObjectReference) bool {
gvk := ref.GroupVersionKind()
return ref.UID == app.UID &&
@@ -180,51 +153,27 @@ func isSelfReferencedApp(app *appv1.Application, ref v1.ObjectReference) bool {
gvk.Kind == application.ApplicationKind
}
func (ctrl *ApplicationController) getAppProj(app *appv1.Application) (*appv1.AppProject, error) {
return argo.GetAppProject(&app.Spec, applisters.NewAppProjectLister(ctrl.projInformer.GetIndexer()), ctrl.namespace)
}
func (ctrl *ApplicationController) handleAppUpdated(appName string, isManagedResource bool, ref v1.ObjectReference) {
skipForceRefresh := false
func (ctrl *ApplicationController) handleObjectUpdated(managedByApp map[string]bool, ref v1.ObjectReference) {
// if a namespaced resource is not managed by any app, it might be an orphaned resource of some other app
if len(managedByApp) == 0 && ref.Namespace != "" {
// retrieve applications that monitor orphaned resources in the same namespace and refresh them, unless the resource is blacklisted in the app project
if objs, err := ctrl.appInformer.GetIndexer().ByIndex(orphanedIndex, ref.Namespace); err == nil {
for i := range objs {
app, ok := objs[i].(*appv1.Application)
if !ok {
continue
}
// exclude the resource unless it is permitted in the app project. If it is not permitted, it is not controlled by the user and there is no point showing the warning.
if proj, err := ctrl.getAppProj(app); err == nil && proj.IsResourcePermitted(metav1.GroupKind{Group: ref.GroupVersionKind().Group, Kind: ref.Kind}, true) &&
!isKnownOrphanedResourceExclusion(kube.NewResourceKey(ref.GroupVersionKind().Group, ref.GroupVersionKind().Kind, ref.Namespace, ref.Name)) {
managedByApp[app.Name] = false
}
}
}
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(ctrl.namespace + "/" + appName)
if app, ok := obj.(*appv1.Application); exists && err == nil && ok && isSelfReferencedApp(app, ref) {
// Don't force-refresh the app if the related resource is the application itself. This prevents an infinite reconciliation loop.
skipForceRefresh = true
}
for appName, isManagedResource := range managedByApp {
skipForceRefresh := false
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(ctrl.namespace + "/" + appName)
if app, ok := obj.(*appv1.Application); exists && err == nil && ok && isSelfReferencedApp(app, ref) {
// Don't force-refresh the app if the related resource is the application itself. This prevents an infinite reconciliation loop.
skipForceRefresh = true
if !skipForceRefresh {
level := ComparisonWithNothing
if isManagedResource {
level = CompareWithRecent
}
if !skipForceRefresh {
level := ComparisonWithNothing
if isManagedResource {
level = CompareWithRecent
}
ctrl.requestAppRefresh(appName, level)
}
ctrl.appRefreshQueue.Add(fmt.Sprintf("%s/%s", ctrl.namespace, appName))
ctrl.requestAppRefresh(appName, level)
}
ctrl.appRefreshQueue.Add(fmt.Sprintf("%s/%s", ctrl.namespace, appName))
}
func (ctrl *ApplicationController) setAppManagedResources(a *appv1.Application, comparisonResult *comparisonResult) (*appv1.ApplicationTree, error) {
managedResources, err := ctrl.managedResources(comparisonResult)
managedResources, err := ctrl.managedResources(a, comparisonResult)
if err != nil {
return nil, err
}
@@ -239,37 +188,11 @@ func (ctrl *ApplicationController) setAppManagedResources(a *appv1.Application,
return tree, ctrl.cache.SetAppManagedResources(a.Name, managedResources)
}
// returns true if the given resource exists in the namespace by default and is not managed by the user
func isKnownOrphanedResourceExclusion(key kube.ResourceKey) bool {
if key.Namespace == "default" && key.Group == "" && key.Kind == kube.ServiceKind && key.Name == "kubernetes" {
return true
}
if key.Group == "" && key.Kind == kube.ServiceAccountKind && key.Name == "default" {
return true
}
return false
}
func (ctrl *ApplicationController) getResourceTree(a *appv1.Application, managedResources []*appv1.ResourceDiff) (*appv1.ApplicationTree, error) {
nodes := make([]appv1.ResourceNode, 0)
proj, err := argo.GetAppProject(&a.Spec, applisters.NewAppProjectLister(ctrl.projInformer.GetIndexer()), ctrl.namespace)
if err != nil {
return nil, err
}
orphanedNodesMap := make(map[kube.ResourceKey]appv1.ResourceNode)
warnOrphaned := true
if proj.Spec.OrphanedResources != nil {
orphanedNodesMap, err = ctrl.stateCache.GetNamespaceTopLevelResources(a.Spec.Destination.Server, a.Spec.Destination.Namespace)
if err != nil {
return nil, err
}
warnOrphaned = proj.Spec.OrphanedResources.IsWarn()
}
for i := range managedResources {
managedResource := managedResources[i]
delete(orphanedNodesMap, kube.NewResourceKey(managedResource.Group, managedResource.Kind, managedResource.Namespace, managedResource.Name))
var live = &unstructured.Unstructured{}
err := json.Unmarshal([]byte(managedResource.LiveState), &live)
if err != nil {
@@ -292,45 +215,19 @@ func (ctrl *ApplicationController) getResourceTree(a *appv1.Application, managed
},
})
} else {
err := ctrl.stateCache.IterateHierarchy(a.Spec.Destination.Server, kube.GetResourceKey(live), func(child appv1.ResourceNode, appName string) {
err := ctrl.stateCache.IterateHierarchy(a.Spec.Destination.Server, live, func(child appv1.ResourceNode) {
nodes = append(nodes, child)
})
if err != nil {
return nil, err
}
}
}
orphanedNodes := make([]appv1.ResourceNode, 0)
for k := range orphanedNodesMap {
if k.Namespace != "" && proj.IsResourcePermitted(metav1.GroupKind{Group: k.Group, Kind: k.Kind}, true) && !isKnownOrphanedResourceExclusion(k) {
err := ctrl.stateCache.IterateHierarchy(a.Spec.Destination.Server, k, func(child appv1.ResourceNode, appName string) {
belongToAnotherApp := false
if appName != "" {
if _, exists, err := ctrl.appInformer.GetIndexer().GetByKey(ctrl.namespace + "/" + appName); exists && err == nil {
belongToAnotherApp = true
}
}
if !belongToAnotherApp {
orphanedNodes = append(orphanedNodes, child)
}
})
if err != nil {
return nil, err
}
}
}
var conditions []appv1.ApplicationCondition
if len(orphanedNodes) > 0 && warnOrphaned {
conditions = []appv1.ApplicationCondition{{
Type: appv1.ApplicationConditionOrphanedResourceWarning,
Message: fmt.Sprintf("Application has %d orphaned resources", len(orphanedNodes)),
}}
}
a.Status.SetConditions(conditions, map[appv1.ApplicationConditionType]bool{appv1.ApplicationConditionOrphanedResourceWarning: true})
return &appv1.ApplicationTree{Nodes: nodes, OrphanedNodes: orphanedNodes}, nil
return &appv1.ApplicationTree{Nodes: nodes}, nil
}
func (ctrl *ApplicationController) managedResources(comparisonResult *comparisonResult) ([]*appv1.ResourceDiff, error) {
func (ctrl *ApplicationController) managedResources(a *appv1.Application, comparisonResult *comparisonResult) ([]*appv1.ResourceDiff, error) {
items := make([]*appv1.ResourceDiff, len(comparisonResult.managedResources))
for i := range comparisonResult.managedResources {
res := comparisonResult.managedResources[i]
@@ -339,7 +236,6 @@ func (ctrl *ApplicationController) managedResources(comparisonResult *comparison
Name: res.Name,
Group: res.Group,
Kind: res.Kind,
Hook: res.Hook,
}
target := res.Target
@@ -391,13 +287,14 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
go ctrl.appInformer.Run(ctx.Done())
go ctrl.projInformer.Run(ctx.Done())
go ctrl.watchSettings(ctx)
if !cache.WaitForCacheSync(ctx.Done(), ctrl.appInformer.HasSynced, ctrl.projInformer.HasSynced) {
log.Error("Timed out waiting for caches to sync")
return
}
go func() { errors.CheckError(ctrl.stateCache.Run(ctx)) }()
go ctrl.stateCache.Run(ctx)
go func() { errors.CheckError(ctrl.metricsServer.ListenAndServe()) }()
for i := 0; i < statusProcessors; i++ {
@@ -642,13 +539,7 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
if state.Phase.Completed() {
// if we just completed an operation, force a refresh so that UI will report up-to-date
// sync/health information
if key, err := cache.MetaNamespaceKeyFunc(app); err == nil {
// force app refresh with using CompareWithLatest comparison type and trigger app reconciliation loop
ctrl.requestAppRefresh(app.Name, CompareWithLatest)
ctrl.appRefreshQueue.Add(key)
} else {
logCtx.Warnf("Fails to requeue application: %v", err)
}
ctrl.requestAppRefresh(app.ObjectMeta.Name, CompareWithLatest)
}
}
@@ -753,21 +644,14 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
defer func() {
reconcileDuration := time.Since(startTime)
ctrl.metricsServer.IncReconcile(origApp, reconcileDuration)
logCtx := log.WithFields(log.Fields{
"application": origApp.Name,
"time_ms": reconcileDuration.Seconds() * 1e3,
"level": comparisonLevel,
"dest-server": origApp.Spec.Destination.Server,
"dest-namespace": origApp.Spec.Destination.Namespace,
})
logCtx := log.WithFields(log.Fields{"application": origApp.Name, "time_ms": reconcileDuration.Seconds() * 1e3, "level": comparisonLevel})
logCtx.Info("Reconciliation completed")
}()
app := origApp.DeepCopy()
logCtx := log.WithFields(log.Fields{"application": app.Name})
if comparisonLevel == ComparisonWithNothing {
managedResources := make([]*appv1.ResourceDiff, 0)
if err := ctrl.cache.GetAppManagedResources(app.Name, &managedResources); err != nil {
if managedResources, err := ctrl.cache.GetAppManagedResources(app.Name); err != nil {
logCtx.Warnf("Failed to get cached managed resources for tree reconciliation, fallback to full reconciliation")
} else {
if tree, err := ctrl.getResourceTree(app, managedResources); err != nil {
@@ -779,23 +663,23 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
return
}
}
now := metav1.Now()
app.Status.ObservedAt = &now
app.Status.ObservedAt = metav1.Now()
ctrl.persistAppStatus(origApp, &app.Status)
return
}
}
hasErrors := ctrl.refreshAppConditions(app)
conditions, hasErrors := ctrl.refreshAppConditions(app)
if hasErrors {
app.Status.Sync.Status = appv1.SyncStatusCodeUnknown
app.Status.Health.Status = appv1.HealthStatusUnknown
app.Status.Conditions = conditions
ctrl.persistAppStatus(origApp, &app.Status)
return
}
var localManifests []string
if opState := app.Status.OperationState; opState != nil && opState.Operation.Sync != nil {
if opState := app.Status.OperationState; opState != nil {
localManifests = opState.Operation.Sync.Manifests
}
@@ -803,12 +687,13 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
if comparisonLevel == CompareWithRecent {
revision = app.Status.Sync.Revision
}
observedAt := metav1.Now()
compareResult := ctrl.appStateManager.CompareAppState(app, revision, app.Spec.Source, refreshType == appv1.RefreshTypeHard, localManifests)
ctrl.normalizeApplication(origApp, app)
compareResult, err := ctrl.appStateManager.CompareAppState(app, revision, app.Spec.Source, refreshType == appv1.RefreshTypeHard, localManifests)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{Type: appv1.ApplicationConditionComparisonError, Message: err.Error()})
} else {
ctrl.normalizeApplication(origApp, app, compareResult.appSourceType)
conditions = append(conditions, compareResult.conditions...)
}
tree, err := ctrl.setAppManagedResources(app, compareResult)
if err != nil {
logCtx.Errorf("Failed to cache app resources: %v", err)
@@ -816,29 +701,17 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
app.Status.Summary = tree.GetSummary()
}
project, err := ctrl.getAppProj(app)
if err != nil {
logCtx.Infof("Could not lookup project for %s in order to check schedules state", app.Name)
} else {
if project.Spec.SyncWindows.Matches(app).CanSync(false) {
syncErrCond := ctrl.autoSync(app, compareResult.syncStatus, compareResult.resources)
if syncErrCond != nil {
app.Status.SetConditions([]appv1.ApplicationCondition{*syncErrCond}, map[appv1.ApplicationConditionType]bool{appv1.ApplicationConditionSyncError: true})
} else {
app.Status.SetConditions([]appv1.ApplicationCondition{}, map[appv1.ApplicationConditionType]bool{appv1.ApplicationConditionSyncError: true})
}
} else {
logCtx.Infof("Sync prevented by sync window")
}
syncErrCond := ctrl.autoSync(app, compareResult.syncStatus)
if syncErrCond != nil {
conditions = append(conditions, *syncErrCond)
}
if app.Status.ReconciledAt == nil || comparisonLevel == CompareWithLatest {
app.Status.ReconciledAt = &observedAt
}
app.Status.ObservedAt = &observedAt
app.Status.ObservedAt = compareResult.reconciledAt
app.Status.ReconciledAt = compareResult.reconciledAt
app.Status.Sync = *compareResult.syncStatus
app.Status.Health = *compareResult.healthStatus
app.Status.Resources = compareResult.resources
app.Status.Conditions = conditions
app.Status.SourceType = compareResult.appSourceType
ctrl.persistAppStatus(origApp, &app.Status)
return
@@ -853,23 +726,22 @@ func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application,
var reason string
compareWith := CompareWithLatest
refreshType := appv1.RefreshTypeNormal
expired := app.Status.ReconciledAt == nil || app.Status.ReconciledAt.Add(statusRefreshTimeout).Before(time.Now().UTC())
expired := app.Status.ReconciledAt.Add(statusRefreshTimeout).Before(time.Now().UTC())
if requestedType, ok := app.IsRefreshRequested(); ok {
// user requested app refresh.
refreshType = requestedType
reason = fmt.Sprintf("%s refresh requested", refreshType)
} else if expired {
reason = fmt.Sprintf("comparison expired. reconciledAt: %v, expiry: %v", app.Status.ReconciledAt, statusRefreshTimeout)
} else if requested, level := ctrl.isRefreshRequested(app.Name); requested {
compareWith = level
reason = fmt.Sprintf("controller refresh requested")
} else if app.Status.Sync.Status == appv1.SyncStatusCodeUnknown && expired {
reason = "comparison status unknown"
} else if !app.Spec.Source.Equals(app.Status.Sync.ComparedTo.Source) {
reason = "spec.source differs"
} else if !app.Spec.Destination.Equals(app.Status.Sync.ComparedTo.Destination) {
reason = "spec.destination differs"
} else if requested, level := ctrl.isRefreshRequested(app.Name); requested {
compareWith = level
reason = fmt.Sprintf("controller refresh requested")
} else if expired {
reason = fmt.Sprintf("comparison expired. reconciledAt: %v, expiry: %v", app.Status.ReconciledAt, statusRefreshTimeout)
}
if reason != "" {
logCtx.Infof("Refreshing app status (%s), level (%d)", reason, compareWith)
return true, refreshType, compareWith
@@ -877,17 +749,17 @@ func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application,
return false, refreshType, compareWith
}
func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application) bool {
errorConditions := make([]appv1.ApplicationCondition, 0)
proj, err := ctrl.getAppProj(app)
func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application) ([]appv1.ApplicationCondition, bool) {
conditions := make([]appv1.ApplicationCondition, 0)
proj, err := argo.GetAppProject(&app.Spec, applisters.NewAppProjectLister(ctrl.projInformer.GetIndexer()), ctrl.namespace)
if err != nil {
if apierr.IsNotFound(err) {
errorConditions = append(errorConditions, appv1.ApplicationCondition{
conditions = append(conditions, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("Application referencing project %s which does not exist", app.Spec.Project),
})
} else {
errorConditions = append(errorConditions, appv1.ApplicationCondition{
conditions = append(conditions, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionUnknownError,
Message: err.Error(),
})
@@ -895,25 +767,47 @@ func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application)
} else {
specConditions, err := argo.ValidatePermissions(context.Background(), &app.Spec, proj, ctrl.db)
if err != nil {
errorConditions = append(errorConditions, appv1.ApplicationCondition{
conditions = append(conditions, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionUnknownError,
Message: err.Error(),
})
} else {
errorConditions = append(errorConditions, specConditions...)
conditions = append(conditions, specConditions...)
}
}
app.Status.SetConditions(errorConditions, map[appv1.ApplicationConditionType]bool{
appv1.ApplicationConditionInvalidSpecError: true,
appv1.ApplicationConditionUnknownError: true,
})
return len(errorConditions) > 0
// List of condition types which have to be reevaluated by the controller; all remaining conditions should stay as-is.
reevaluateTypes := map[appv1.ApplicationConditionType]bool{
appv1.ApplicationConditionInvalidSpecError: true,
appv1.ApplicationConditionUnknownError: true,
appv1.ApplicationConditionComparisonError: true,
appv1.ApplicationConditionSharedResourceWarning: true,
appv1.ApplicationConditionSyncError: true,
appv1.ApplicationConditionRepeatedResourceWarning: true,
}
appConditions := make([]appv1.ApplicationCondition, 0)
for i := 0; i < len(app.Status.Conditions); i++ {
condition := app.Status.Conditions[i]
if _, ok := reevaluateTypes[condition.Type]; !ok {
appConditions = append(appConditions, condition)
}
}
hasErrors := false
for i := range conditions {
condition := conditions[i]
appConditions = append(appConditions, condition)
if condition.IsError() {
hasErrors = true
}
}
return appConditions, hasErrors
}
// normalizeApplication normalizes an application.spec and additionally persists updates if it changed
func (ctrl *ApplicationController) normalizeApplication(orig, app *appv1.Application) {
func (ctrl *ApplicationController) normalizeApplication(orig, app *appv1.Application, sourceType appv1.ApplicationSourceType) {
logCtx := log.WithFields(log.Fields{"application": app.Name})
app.Spec = *argo.NormalizeApplicationSpec(&app.Spec)
app.Spec = *argo.NormalizeApplicationSpec(&app.Spec, sourceType)
patch, modified, err := diff.CreateTwoWayMergePatch(orig, app, appv1.Application{})
if err != nil {
logCtx.Errorf("error constructing app spec patch: %v", err)
@@ -969,7 +863,7 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
}
// autoSync will initiate a sync operation for an application configured with automated sync
func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *appv1.SyncStatus, resources []appv1.ResourceStatus) *appv1.ApplicationCondition {
func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *appv1.SyncStatus) *appv1.ApplicationCondition {
if app.Spec.SyncPolicy == nil || app.Spec.SyncPolicy.Automated == nil {
return nil
}
@@ -982,59 +876,34 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
logCtx.Infof("Skipping auto-sync: deletion in progress")
return nil
}
// Only perform auto-sync if we detect OutOfSync status. This is to prevent us from attempting
// a sync when application is already in a Synced or Unknown state
if syncStatus.Status != appv1.SyncStatusCodeOutOfSync {
logCtx.Infof("Skipping auto-sync: application status is %s", syncStatus.Status)
return nil
}
desiredCommitSHA := syncStatus.Revision
alreadyAttempted, attemptPhase := alreadyAttemptedSync(app, desiredCommitSHA)
selfHeal := app.Spec.SyncPolicy.Automated.SelfHeal
op := appv1.Operation{
Sync: &appv1.SyncOperation{
Revision: desiredCommitSHA,
Prune: app.Spec.SyncPolicy.Automated.Prune,
},
}
// It is possible for manifests to remain OutOfSync even after a sync/kubectl apply (e.g.
// auto-sync with pruning disabled). We need to ensure that we do not keep Syncing an
// application in an infinite loop. To detect this, we only attempt the Sync if the revision
// and parameter overrides are different from our most recent sync operation.
if alreadyAttempted && (!selfHeal || !attemptPhase.Successful()) {
if !attemptPhase.Successful() {
if alreadyAttemptedSync(app, desiredCommitSHA) {
if app.Status.OperationState.Phase != appv1.OperationSucceeded {
logCtx.Warnf("Skipping auto-sync: failed previous sync attempt to %s", desiredCommitSHA)
message := fmt.Sprintf("Failed sync attempt to %s: %s", desiredCommitSHA, app.Status.OperationState.Message)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: message}
}
logCtx.Infof("Skipping auto-sync: most recent sync already to %s", desiredCommitSHA)
return nil
} else if alreadyAttempted && selfHeal {
if shouldSelfHeal, retryAfter := ctrl.shouldSelfHeal(app); shouldSelfHeal {
for _, resource := range resources {
if resource.Status != appv1.SyncStatusCodeSynced {
op.Sync.Resources = append(op.Sync.Resources, appv1.SyncOperationResource{
Kind: resource.Kind,
Group: resource.Group,
Name: resource.Name,
})
}
}
} else {
logCtx.Infof("Skipping auto-sync: already attempted sync to %s with timeout %v (retrying in %v)", desiredCommitSHA, ctrl.selfHealTimeout, retryAfter)
if key, err := cache.MetaNamespaceKeyFunc(app); err == nil {
ctrl.requestAppRefresh(app.Name, CompareWithLatest)
ctrl.appRefreshQueue.AddAfter(key, retryAfter)
} else {
logCtx.Warnf("Fails to requeue application: %v", err)
}
return nil
}
}
op := appv1.Operation{
Sync: &appv1.SyncOperation{
Revision: desiredCommitSHA,
Prune: app.Spec.SyncPolicy.Automated.Prune,
},
}
appIf := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace)
_, err := argo.SetAppOperation(appIf, app.Name, &op)
if err != nil {
@@ -1049,12 +918,12 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
// alreadyAttemptedSync returns whether or not the most recent sync was performed against the
// commitSHA and with the same app source config that is currently set in the app
func alreadyAttemptedSync(app *appv1.Application, commitSHA string) (bool, appv1.OperationPhase) {
func alreadyAttemptedSync(app *appv1.Application, commitSHA string) bool {
if app.Status.OperationState == nil || app.Status.OperationState.Operation.Sync == nil || app.Status.OperationState.SyncResult == nil {
return false, ""
return false
}
if app.Status.OperationState.SyncResult.Revision != commitSHA {
return false, ""
return false
}
// Ignore differences in target revision, since we already just verified commitSHAs are equal,
// and we do not want to trigger auto-sync due to things like HEAD != master
@@ -1062,24 +931,10 @@ func alreadyAttemptedSync(app *appv1.Application, commitSHA string) (bool, appv1
specSource.TargetRevision = ""
syncResSource := app.Status.OperationState.SyncResult.Source.DeepCopy()
syncResSource.TargetRevision = ""
return reflect.DeepEqual(app.Spec.Source, app.Status.OperationState.SyncResult.Source), app.Status.OperationState.Phase
return reflect.DeepEqual(app.Spec.Source, app.Status.OperationState.SyncResult.Source)
}
func (ctrl *ApplicationController) shouldSelfHeal(app *appv1.Application) (bool, time.Duration) {
if app.Status.OperationState == nil {
return true, time.Duration(0)
}
var retryAfter time.Duration
if app.Status.OperationState.FinishedAt == nil {
retryAfter = ctrl.selfHealTimeout
} else {
retryAfter = ctrl.selfHealTimeout - time.Since(app.Status.OperationState.FinishedAt.Time)
}
return retryAfter <= 0, retryAfter
}
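shouldSelfHeal above waits out the remainder of selfHealTimeout, measured from the last operation's FinishedAt. A worked sketch of that arithmetic with illustrative values:

package main

import (
	"fmt"
	"time"
)

func main() {
	selfHealTimeout := 5 * time.Second
	finishedAt := time.Now().Add(-2 * time.Second) // last sync finished 2s ago

	// Same computation as shouldSelfHeal: remaining = timeout - elapsed.
	retryAfter := selfHealTimeout - time.Since(finishedAt)
	if retryAfter <= 0 {
		fmt.Println("self-heal now")
	} else {
		// With the values above this prints roughly "3s".
		fmt.Printf("self-heal allowed in ~%v\n", retryAfter.Round(time.Second))
	}
}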
func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.SharedIndexInformer, applisters.ApplicationLister, error) {
func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.SharedIndexInformer, applisters.ApplicationLister) {
appInformerFactory := appinformers.NewFilteredSharedInformerFactory(
ctrl.applicationClientset,
ctrl.statusRefreshTimeout,
@@ -1123,24 +978,7 @@ func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.Shar
},
},
)
err := informer.AddIndexers(cache.Indexers{
orphanedIndex: func(obj interface{}) (i []string, e error) {
app, ok := obj.(*appv1.Application)
if !ok {
return nil, nil
}
proj, err := ctrl.getAppProj(app)
if err != nil {
return nil, nil
}
if proj.Spec.OrphanedResources != nil {
return []string{app.Spec.Destination.Namespace}, nil
}
return nil, nil
},
})
return informer, lister, err
return informer, lister
}
func isOperationInProgress(app *appv1.Application) bool {
@@ -1160,3 +998,45 @@ func toggledAutomatedSync(old *appv1.Application, new *appv1.Application) bool {
// nothing changed
return false
}
func (ctrl *ApplicationController) watchSettings(ctx context.Context) {
updateCh := make(chan *settings_util.ArgoCDSettings, 1)
ctrl.settingsMgr.Subscribe(updateCh)
prevAppLabelKey := ctrl.settings.GetAppInstanceLabelKey()
prevResourceExclusions := ctrl.settings.ResourceExclusions
prevResourceInclusions := ctrl.settings.ResourceInclusions
prevConfigManagementPlugins := ctrl.settings.ConfigManagementPlugins
done := false
for !done {
select {
case newSettings := <-updateCh:
newAppLabelKey := newSettings.GetAppInstanceLabelKey()
*ctrl.settings = *newSettings
if prevAppLabelKey != newAppLabelKey {
log.Infof("label key changed: %s -> %s", prevAppLabelKey, newAppLabelKey)
ctrl.stateCache.Invalidate()
prevAppLabelKey = newAppLabelKey
}
if !reflect.DeepEqual(prevResourceExclusions, newSettings.ResourceExclusions) {
log.WithFields(log.Fields{"prevResourceExclusions": prevResourceExclusions, "newResourceExclusions": newSettings.ResourceExclusions}).Info("resource exclusions modified")
ctrl.stateCache.Invalidate()
prevResourceExclusions = newSettings.ResourceExclusions
}
if !reflect.DeepEqual(prevResourceInclusions, newSettings.ResourceInclusions) {
log.WithFields(log.Fields{"prevResourceInclusions": prevResourceInclusions, "newResourceInclusions": newSettings.ResourceInclusions}).Info("resource inclusions modified")
ctrl.stateCache.Invalidate()
prevResourceInclusions = newSettings.ResourceInclusions
}
if !reflect.DeepEqual(prevConfigManagementPlugins, newSettings.ConfigManagementPlugins) {
log.WithFields(log.Fields{"prevConfigManagementPlugins": prevConfigManagementPlugins, "newConfigManagementPlugins": newSettings.ConfigManagementPlugins}).Info("config management plugins modified")
ctrl.stateCache.Invalidate()
prevConfigManagementPlugins = newSettings.ConfigManagementPlugins
}
case <-ctx.Done():
done = true
}
}
log.Info("shutting down settings watch")
ctrl.settingsMgr.Unsubscribe(updateCh)
close(updateCh)
}
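watchSettings follows a subscribe/notify pattern: register an update channel with the settings manager, react inside a select loop, and unsubscribe on shutdown. A reduced, runnable sketch of the same loop; settingsUpdate and the hand-fed channel are inventions standing in for the real SettingsManager:

package main

import (
	"context"
	"fmt"
	"time"
)

type settingsUpdate struct{ appLabelKey string }

// watch mirrors the select loop in watchSettings: apply updates as they
// arrive, exit when the context is cancelled.
func watch(ctx context.Context, updates <-chan settingsUpdate) {
	prev := "app.kubernetes.io/instance"
	for {
		select {
		case s := <-updates:
			if s.appLabelKey != prev {
				fmt.Printf("label key changed: %s -> %s (invalidate cache)\n", prev, s.appLabelKey)
				prev = s.appLabelKey
			}
		case <-ctx.Done():
			fmt.Println("shutting down settings watch")
			return
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	updates := make(chan settingsUpdate, 1)
	updates <- settingsUpdate{appLabelKey: "argocd.argoproj.io/instance"}
	watch(ctx, updates)
}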

View File

@@ -2,7 +2,6 @@ package controller
import (
"context"
"encoding/json"
"testing"
"time"
@@ -23,27 +22,19 @@ import (
mockstatecache "github.com/argoproj/argo-cd/controller/cache/mocks"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/fake"
"github.com/argoproj/argo-cd/reposerver/apiclient"
mockrepoclient "github.com/argoproj/argo-cd/reposerver/apiclient/mocks"
mockreposerver "github.com/argoproj/argo-cd/reposerver/mocks"
"github.com/argoproj/argo-cd/reposerver/repository"
mockrepoclient "github.com/argoproj/argo-cd/reposerver/repository/mocks"
"github.com/argoproj/argo-cd/test"
utilcache "github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/kube/kubetest"
"github.com/argoproj/argo-cd/util/settings"
)
type namespacedResource struct {
argoappv1.ResourceNode
AppName string
}
type fakeData struct {
apps []runtime.Object
manifestResponse *apiclient.ManifestResponse
managedLiveObjs map[kube.ResourceKey]*unstructured.Unstructured
namespacedResources map[kube.ResourceKey]namespacedResource
configMapData map[string]string
apps []runtime.Object
manifestResponse *repository.ManifestResponse
managedLiveObjs map[kube.ResourceKey]*unstructured.Unstructured
}
func newFakeController(data *fakeData) *ApplicationController {
@@ -73,15 +64,11 @@ func newFakeController(data *fakeData) *ApplicationController {
ObjectMeta: metav1.ObjectMeta{
Name: "argocd-cm",
Namespace: test.FakeArgoCDNamespace,
Labels: map[string]string{
"app.kubernetes.io/part-of": "argocd",
},
},
Data: data.configMapData,
Data: nil,
}
kubeClient := fake.NewSimpleClientset(&clust, &cm, &secret)
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClient, test.FakeArgoCDNamespace)
kubectl := &kubetest.MockKubectlCmd{}
ctrl, err := NewApplicationController(
test.FakeArgoCDNamespace,
settingsMgr,
@@ -89,11 +76,8 @@ func newFakeController(data *fakeData) *ApplicationController {
appclientset.NewSimpleClientset(data.apps...),
&mockRepoClientset,
utilcache.NewCache(utilcache.NewInMemoryCache(1*time.Hour)),
kubectl,
time.Minute,
time.Minute,
common.DefaultPortArgoCDMetrics,
0,
)
if err != nil {
panic(err)
@@ -102,25 +86,14 @@ func newFakeController(data *fakeData) *ApplicationController {
defer cancelProj()
cancelApp := test.StartInformer(ctrl.appInformer)
defer cancelApp()
mockStateCache := mockstatecache.LiveStateCache{}
ctrl.appStateManager.(*appStateManager).liveStateCache = &mockStateCache
ctrl.stateCache = &mockStateCache
mockStateCache.On("IsNamespaced", mock.Anything, mock.Anything).Return(true, nil)
mockStateCache.On("GetManagedLiveObjs", mock.Anything, mock.Anything).Return(data.managedLiveObjs, nil)
response := make(map[kube.ResourceKey]argoappv1.ResourceNode)
for k, v := range data.namespacedResources {
response[k] = v.ResourceNode
// Mock out call to GetManagedLiveObjs if fake data supplied
if data.managedLiveObjs != nil {
mockStateCache := mockstatecache.LiveStateCache{}
mockStateCache.On("GetManagedLiveObjs", mock.Anything, mock.Anything).Return(data.managedLiveObjs, nil)
mockStateCache.On("IsNamespaced", mock.Anything, mock.Anything).Return(true, nil)
ctrl.stateCache = &mockStateCache
ctrl.appStateManager.(*appStateManager).liveStateCache = &mockStateCache
}
mockStateCache.On("GetNamespaceTopLevelResources", mock.Anything, mock.Anything).Return(response, nil)
mockStateCache.On("IterateHierarchy", mock.Anything, mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
key := args[1].(kube.ResourceKey)
action := args[2].(func(child argoappv1.ResourceNode, appName string))
appName := ""
if res, ok := data.namespacedResources[key]; ok {
appName = res.AppName
}
action(argoappv1.ResourceNode{ResourceRef: argoappv1.ResourceRef{Group: key.Group, Namespace: key.Namespace, Name: key.Name}}, appName)
}).Return(nil)
return ctrl
}
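The fake controller stubs LiveStateCache with testify mocks; the IterateHierarchy stub uses Run to call back into the visitor supplied by the code under test. A minimal standalone sketch of that Run pattern, with an invented mockWalker type in place of the real mock:

package main

import (
	"fmt"

	"github.com/stretchr/testify/mock"
)

// mockWalker stands in for a dependency with a visitor-style method,
// mirroring the LiveStateCache.IterateHierarchy mock above.
type mockWalker struct{ mock.Mock }

func (m *mockWalker) Walk(root string, visit func(node string)) error {
	args := m.Called(root, visit)
	return args.Error(0)
}

func main() {
	w := &mockWalker{}
	// Run lets the stub invoke the visitor supplied by the caller, the same
	// trick used for the IterateHierarchy mock.
	w.On("Walk", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
		visit := args[1].(func(string))
		visit("child-node")
	}).Return(nil)

	if err := w.Walk("root", func(node string) { fmt.Println("visited:", node) }); err != nil {
		fmt.Println("walk failed:", err)
	}
}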
@@ -203,7 +176,7 @@ func TestAutoSync(t *testing.T) {
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -222,7 +195,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -237,7 +210,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: argoappv1.SyncStatusCodeSynced,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -253,7 +226,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -270,7 +243,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -296,7 +269,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -332,7 +305,7 @@ func TestAutoSyncIndicateError(t *testing.T) {
Source: *app.Spec.Source.DeepCopy(),
},
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -375,7 +348,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -429,7 +402,7 @@ func TestNormalizeApplication(t *testing.T) {
app.Spec.Source.Kustomize = &argoappv1.ApplicationSourceKustomize{NamePrefix: "foo-"}
data := fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -487,44 +460,17 @@ func TestHandleAppUpdated(t *testing.T) {
app.Spec.Destination.Server = common.KubernetesInternalAPIServerAddr
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
ctrl.handleObjectUpdated(map[string]bool{app.Name: true}, kube.GetObjectRef(kube.MustToUnstructured(app)))
ctrl.handleAppUpdated(app.Name, true, kube.GetObjectRef(kube.MustToUnstructured(app)))
isRequested, level := ctrl.isRefreshRequested(app.Name)
assert.False(t, isRequested)
assert.Equal(t, ComparisonWithNothing, level)
ctrl.handleObjectUpdated(map[string]bool{app.Name: true}, corev1.ObjectReference{UID: "test", Kind: kube.DeploymentKind, Name: "test", Namespace: "default"})
ctrl.handleAppUpdated(app.Name, true, corev1.ObjectReference{UID: "test", Kind: kube.DeploymentKind, Name: "test", Namespace: "default"})
isRequested, level = ctrl.isRefreshRequested(app.Name)
assert.True(t, isRequested)
assert.Equal(t, CompareWithRecent, level)
}
func TestHandleOrphanedResourceUpdated(t *testing.T) {
app1 := newFakeApp()
app1.Name = "app1"
app1.Spec.Destination.Namespace = test.FakeArgoCDNamespace
app1.Spec.Destination.Server = common.KubernetesInternalAPIServerAddr
app2 := newFakeApp()
app2.Name = "app2"
app2.Spec.Destination.Namespace = test.FakeArgoCDNamespace
app2.Spec.Destination.Server = common.KubernetesInternalAPIServerAddr
proj := defaultProj.DeepCopy()
proj.Spec.OrphanedResources = &argoappv1.OrphanedResourcesMonitorSettings{}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app1, app2, proj}})
ctrl.handleObjectUpdated(map[string]bool{}, corev1.ObjectReference{UID: "test", Kind: kube.DeploymentKind, Name: "test", Namespace: test.FakeArgoCDNamespace})
isRequested, level := ctrl.isRefreshRequested(app1.Name)
assert.True(t, isRequested)
assert.Equal(t, ComparisonWithNothing, level)
isRequested, level = ctrl.isRefreshRequested(app2.Name)
assert.True(t, isRequested)
assert.Equal(t, ComparisonWithNothing, level)
}
func TestSetOperationStateOnDeletedApp(t *testing.T) {
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}})
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
@@ -542,8 +488,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}})
app := newFakeApp()
now := metav1.Now()
app.Status.ReconciledAt = &now
app.Status.ReconciledAt = metav1.Now()
app.Status.Sync = argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeSynced,
ComparedTo: argoappv1.ComparedTo{
@@ -573,160 +518,13 @@ func TestNeedRefreshAppStatus(t *testing.T) {
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
{
// refresh app using the 'latest' level if comparison expired
app := app.DeepCopy()
ctrl.requestAppRefresh(app.Name, CompareWithRecent)
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
app.Status.ReconciledAt = &reconciledAt
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Minute)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
// execute hard refresh if app has refresh annotation
app.Annotations = map[string]string{
common.AnnotationKeyRefresh: string(argoappv1.RefreshTypeHard),
}
{
app := app.DeepCopy()
// execute hard refresh if app has refresh annotation
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
app.Status.ReconciledAt = &reconciledAt
app.Annotations = map[string]string{
common.AnnotationKeyRefresh: string(argoappv1.RefreshTypeHard),
}
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeHard, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
}
{
app := app.DeepCopy()
// ensure that CompareWithLatest level is used if application source has changed
ctrl.requestAppRefresh(app.Name, ComparisonWithNothing)
// sample app source change
app.Spec.Source.Helm = &argoappv1.ApplicationSourceHelm{
Parameters: []argoappv1.HelmParameter{{
Name: "foo",
Value: "bar",
}},
}
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
}
}
func TestRefreshAppConditions(t *testing.T) {
defaultProj := argoappv1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: test.FakeArgoCDNamespace,
},
Spec: argoappv1.AppProjectSpec{
SourceRepos: []string{"*"},
Destinations: []argoappv1.ApplicationDestination{
{
Server: "*",
Namespace: "*",
},
},
},
}
t.Run("NoErrorConditions", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}})
hasErrors := ctrl.refreshAppConditions(app)
assert.False(t, hasErrors)
assert.Len(t, app.Status.Conditions, 0)
})
t.Run("PreserveExistingWarningCondition", func(t *testing.T) {
app := newFakeApp()
app.Status.SetConditions([]argoappv1.ApplicationCondition{{Type: argoappv1.ApplicationConditionExcludedResourceWarning}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}})
hasErrors := ctrl.refreshAppConditions(app)
assert.False(t, hasErrors)
assert.Len(t, app.Status.Conditions, 1)
assert.Equal(t, argoappv1.ApplicationConditionExcludedResourceWarning, app.Status.Conditions[0].Type)
})
t.Run("ReplacesSpecErrorCondition", func(t *testing.T) {
app := newFakeApp()
app.Spec.Project = "wrong project"
app.Status.SetConditions([]argoappv1.ApplicationCondition{{Type: argoappv1.ApplicationConditionInvalidSpecError, Message: "old message"}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}})
hasErrors := ctrl.refreshAppConditions(app)
assert.True(t, hasErrors)
assert.Len(t, app.Status.Conditions, 1)
assert.Equal(t, argoappv1.ApplicationConditionInvalidSpecError, app.Status.Conditions[0].Type)
assert.Equal(t, "Application referencing project wrong project which does not exist", app.Status.Conditions[0].Message)
})
}
func TestUpdateReconciledAt(t *testing.T) {
app := newFakeApp()
reconciledAt := metav1.NewTime(time.Now().Add(-1 * time.Second))
app.Status = argoappv1.ApplicationStatus{ReconciledAt: &reconciledAt}
app.Status.Sync = argoappv1.SyncStatus{ComparedTo: argoappv1.ComparedTo{Source: app.Spec.Source, Destination: app.Spec.Destination}}
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
})
key, _ := cache.MetaNamespaceKeyFunc(app)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
receivedPatch := map[string]interface{}{}
fakeAppCs.AddReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
if patchAction, ok := action.(kubetesting.PatchAction); ok {
assert.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
}
return true, nil, nil
})
t.Run("UpdatedOnFullReconciliation", func(t *testing.T) {
receivedPatch = map[string]interface{}{}
ctrl.requestAppRefresh(app.Name, CompareWithLatest)
ctrl.appRefreshQueue.Add(key)
ctrl.processAppRefreshQueueItem()
_, updated, err := unstructured.NestedString(receivedPatch, "status", "reconciledAt")
assert.NoError(t, err)
assert.True(t, updated)
_, updated, err = unstructured.NestedString(receivedPatch, "status", "observedAt")
assert.NoError(t, err)
assert.True(t, updated)
})
t.Run("NotUpdatedOnPartialReconciliation", func(t *testing.T) {
receivedPatch = map[string]interface{}{}
ctrl.appRefreshQueue.Add(key)
ctrl.requestAppRefresh(app.Name, CompareWithRecent)
ctrl.processAppRefreshQueueItem()
_, updated, err := unstructured.NestedString(receivedPatch, "status", "reconciledAt")
assert.NoError(t, err)
assert.False(t, updated)
_, updated, err = unstructured.NestedString(receivedPatch, "status", "observedAt")
assert.NoError(t, err)
assert.True(t, updated)
})
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeHard, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
}
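
The reactor above is standard client-go fake-clientset machinery and works the same against any generated clientset. Here is a self-contained version of the pattern using the core fake clientset and a current client-go signature; the ConfigMap name and patch payload are illustrative.

package main

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes/fake"
	kubetesting "k8s.io/client-go/testing"
)

func main() {
	cs := fake.NewSimpleClientset()
	received := map[string]interface{}{}
	// Intercept every patch sent through the clientset, as the test above
	// does for Application status patches.
	cs.PrependReactor("patch", "*", func(action kubetesting.Action) (bool, runtime.Object, error) {
		if patchAction, ok := action.(kubetesting.PatchAction); ok {
			_ = json.Unmarshal(patchAction.GetPatch(), &received)
		}
		return true, nil, nil
	})
	_, _ = cs.CoreV1().ConfigMaps("default").Patch(context.Background(), "cm",
		types.MergePatchType, []byte(`{"data":{"k":"v"}}`), metav1.PatchOptions{})
	fmt.Println(received["data"]) // map[k:v]
}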

View File

@@ -2,7 +2,6 @@ package cache
import (
"context"
"reflect"
"sync"
log "github.com/sirupsen/logrus"
@@ -20,27 +19,19 @@ import (
"github.com/argoproj/argo-cd/util/settings"
)
type cacheSettings struct {
ResourceOverrides map[string]appv1.ResourceOverride
AppInstanceLabelKey string
ResourcesFilter *settings.ResourcesFilter
}
type LiveStateCache interface {
IsNamespaced(server string, gk schema.GroupKind) (bool, error)
IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error)
// Executes the given callback against the resource specified by the key and all its children
IterateHierarchy(server string, key kube.ResourceKey, action func(child appv1.ResourceNode, appName string)) error
IterateHierarchy(server string, obj *unstructured.Unstructured, action func(child appv1.ResourceNode)) error
// Returns the state of live nodes which correspond to the target nodes of the specified application.
GetManagedLiveObjs(a *appv1.Application, targetObjs []*unstructured.Unstructured) (map[kube.ResourceKey]*unstructured.Unstructured, error)
// Returns all top-level resources (resources without owner references) of a specified namespace
GetNamespaceTopLevelResources(server string, namespace string) (map[kube.ResourceKey]appv1.ResourceNode, error)
// Starts watching resources of each controlled cluster.
Run(ctx context.Context) error
Run(ctx context.Context)
// Invalidate invalidates the entire cluster state cache
Invalidate()
}
type ObjectUpdatedHandler = func(managedByApp map[string]bool, ref v1.ObjectReference)
type AppUpdatedHandler = func(appName string, isManagedResource bool, ref v1.ObjectReference)
func GetTargetObjKey(a *appv1.Application, un *unstructured.Unstructured, isNamespaced bool) kube.ResourceKey {
key := kube.GetResourceKey(un)
@@ -56,51 +47,32 @@ func GetTargetObjKey(a *appv1.Application, un *unstructured.Unstructured, isName
func NewLiveStateCache(
db db.ArgoDB,
appInformer cache.SharedIndexInformer,
settingsMgr *settings.SettingsManager,
settings *settings.ArgoCDSettings,
kubectl kube.Kubectl,
metricsServer *metrics.MetricsServer,
onObjectUpdated ObjectUpdatedHandler) LiveStateCache {
onAppUpdated AppUpdatedHandler) LiveStateCache {
return &liveStateCache{
appInformer: appInformer,
db: db,
clusters: make(map[string]*clusterInfo),
lock: &sync.Mutex{},
onObjectUpdated: onObjectUpdated,
kubectl: kubectl,
settingsMgr: settingsMgr,
metricsServer: metricsServer,
cacheSettingsLock: &sync.Mutex{},
appInformer: appInformer,
db: db,
clusters: make(map[string]*clusterInfo),
lock: &sync.Mutex{},
onAppUpdated: onAppUpdated,
kubectl: kubectl,
settings: settings,
metricsServer: metricsServer,
}
}
type liveStateCache struct {
db db.ArgoDB
clusters map[string]*clusterInfo
lock *sync.Mutex
appInformer cache.SharedIndexInformer
onObjectUpdated ObjectUpdatedHandler
kubectl kube.Kubectl
settingsMgr *settings.SettingsManager
metricsServer *metrics.MetricsServer
cacheSettingsLock *sync.Mutex
cacheSettings *cacheSettings
}
func (c *liveStateCache) loadCacheSettings() (*cacheSettings, error) {
appInstanceLabelKey, err := c.settingsMgr.GetAppInstanceLabelKey()
if err != nil {
return nil, err
}
resourcesFilter, err := c.settingsMgr.GetResourcesFilter()
if err != nil {
return nil, err
}
resourceOverrides, err := c.settingsMgr.GetResourceOverrides()
if err != nil {
return nil, err
}
return &cacheSettings{AppInstanceLabelKey: appInstanceLabelKey, ResourceOverrides: resourceOverrides, ResourcesFilter: resourcesFilter}, nil
db db.ArgoDB
clusters map[string]*clusterInfo
lock *sync.Mutex
appInformer cache.SharedIndexInformer
onAppUpdated AppUpdatedHandler
kubectl kube.Kubectl
settings *settings.ArgoCDSettings
metricsServer *metrics.MetricsServer
}
func (c *liveStateCache) getCluster(server string) (*clusterInfo, error) {
@@ -113,17 +85,17 @@ func (c *liveStateCache) getCluster(server string) (*clusterInfo, error) {
return nil, err
}
info = &clusterInfo{
apisMeta: make(map[schema.GroupKind]*apiMeta),
lock: &sync.Mutex{},
nodes: make(map[kube.ResourceKey]*node),
nsIndex: make(map[string]map[kube.ResourceKey]*node),
onObjectUpdated: c.onObjectUpdated,
kubectl: c.kubectl,
cluster: cluster,
syncTime: nil,
syncLock: &sync.Mutex{},
log: log.WithField("server", cluster.Server),
cacheSettingsSrc: c.getCacheSettings,
apisMeta: make(map[schema.GroupKind]*apiMeta),
lock: &sync.Mutex{},
nodes: make(map[kube.ResourceKey]*node),
nsIndex: make(map[string]map[kube.ResourceKey]*node),
onAppUpdated: c.onAppUpdated,
kubectl: c.kubectl,
cluster: cluster,
syncTime: nil,
syncLock: &sync.Mutex{},
log: log.WithField("server", cluster.Server),
settings: c.settings,
}
c.clusters[cluster.Server] = info
@@ -155,31 +127,23 @@ func (c *liveStateCache) Invalidate() {
log.Info("live state cache invalidated")
}
func (c *liveStateCache) IsNamespaced(server string, gk schema.GroupKind) (bool, error) {
func (c *liveStateCache) IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error) {
clusterInfo, err := c.getSyncedCluster(server)
if err != nil {
return false, err
}
return clusterInfo.isNamespaced(gk), nil
return clusterInfo.isNamespaced(obj), nil
}
func (c *liveStateCache) IterateHierarchy(server string, key kube.ResourceKey, action func(child appv1.ResourceNode, appName string)) error {
func (c *liveStateCache) IterateHierarchy(server string, obj *unstructured.Unstructured, action func(child appv1.ResourceNode)) error {
clusterInfo, err := c.getSyncedCluster(server)
if err != nil {
return err
}
clusterInfo.iterateHierarchy(key, action)
clusterInfo.iterateHierarchy(obj, action)
return nil
}
func (c *liveStateCache) GetNamespaceTopLevelResources(server string, namespace string) (map[kube.ResourceKey]appv1.ResourceNode, error) {
clusterInfo, err := c.getSyncedCluster(server)
if err != nil {
return nil, err
}
return clusterInfo.getNamespaceTopLevelResources(namespace), nil
}
func (c *liveStateCache) GetManagedLiveObjs(a *appv1.Application, targetObjs []*unstructured.Unstructured) (map[kube.ResourceKey]*unstructured.Unstructured, error) {
clusterInfo, err := c.getSyncedCluster(a.Spec.Destination.Server)
if err != nil {
@@ -197,55 +161,8 @@ func isClusterHasApps(apps []interface{}, cluster *appv1.Cluster) bool {
return false
}
func (c *liveStateCache) getCacheSettings() *cacheSettings {
c.cacheSettingsLock.Lock()
defer c.cacheSettingsLock.Unlock()
return c.cacheSettings
}
func (c *liveStateCache) watchSettings(ctx context.Context) {
updateCh := make(chan *settings.ArgoCDSettings, 1)
c.settingsMgr.Subscribe(updateCh)
done := false
for !done {
select {
case <-updateCh:
nextCacheSettings, err := c.loadCacheSettings()
if err != nil {
log.Warnf("Failed to read updated settings: %v", err)
continue
}
c.cacheSettingsLock.Lock()
needInvalidate := false
if !reflect.DeepEqual(c.cacheSettings, nextCacheSettings) {
c.cacheSettings = nextCacheSettings
needInvalidate = true
}
c.cacheSettingsLock.Unlock()
if needInvalidate {
c.Invalidate()
}
case <-ctx.Done():
done = true
}
}
log.Info("shutting down settings watch")
c.settingsMgr.Unsubscribe(updateCh)
close(updateCh)
}
// Run watches for resource changes annotated with the application label on all registered clusters and schedules a corresponding app refresh.
func (c *liveStateCache) Run(ctx context.Context) error {
cacheSettings, err := c.loadCacheSettings()
if err != nil {
return err
}
c.cacheSettings = cacheSettings
go c.watchSettings(ctx)
func (c *liveStateCache) Run(ctx context.Context) {
util.RetryUntilSucceed(func() error {
clusterEventCallback := func(event *db.ClusterEvent) {
c.lock.Lock()
@@ -271,5 +188,4 @@ func (c *liveStateCache) Run(ctx context.Context) error {
}, "watch clusters", ctx, clusterRetryTimeout)
<-ctx.Done()
return nil
}
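
This file's diff swaps the map-based ObjectUpdatedHandler for a per-application AppUpdatedHandler. Given the two type declarations shown above, the relationship can be made explicit with a hypothetical adapter (not part of the source):

// adapt bridges the per-app callback to the map-based shape; compare the
// onNodeUpdated hunk later in this diff, which performs the same fan-out.
func adapt(onApp AppUpdatedHandler) ObjectUpdatedHandler {
	return func(managedByApp map[string]bool, ref v1.ObjectReference) {
		for appName, isManagedResource := range managedByApp {
			onApp(appName, isManagedResource, ref)
		}
	}
}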

View File

@@ -24,6 +24,7 @@ import (
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/health"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/settings"
)
const (
@@ -48,11 +49,11 @@ type clusterInfo struct {
nodes map[kube.ResourceKey]*node
nsIndex map[string]map[kube.ResourceKey]*node
onObjectUpdated ObjectUpdatedHandler
kubectl kube.Kubectl
cluster *appv1.Cluster
log *log.Entry
cacheSettingsSrc func() *cacheSettings
onAppUpdated AppUpdatedHandler
kubectl kube.Kubectl
cluster *appv1.Cluster
log *log.Entry
settings *settings.ArgoCDSettings
}
func (c *clusterInfo) replaceResourceCache(gk schema.GroupKind, resourceVersion string, objs []unstructured.Unstructured) {
@@ -85,33 +86,6 @@ func (c *clusterInfo) replaceResourceCache(gk schema.GroupKind, resourceVersion
}
}
func isServiceAccountTokenSecret(un *unstructured.Unstructured) (bool, metav1.OwnerReference) {
ref := metav1.OwnerReference{
APIVersion: "v1",
Kind: kube.ServiceAccountKind,
}
if un.GetKind() != kube.SecretKind || un.GroupVersionKind().Group != "" {
return false, ref
}
if typeVal, ok, err := unstructured.NestedString(un.Object, "type"); !ok || err != nil || typeVal != "kubernetes.io/service-account-token" {
return false, ref
}
annotations := un.GetAnnotations()
if annotations == nil {
return false, ref
}
id, okId := annotations["kubernetes.io/service-account.uid"]
name, okName := annotations["kubernetes.io/service-account.name"]
if okId && okName {
ref.Name = name
ref.UID = types.UID(id)
}
return ref.Name != "" && ref.UID != "", ref
}
func (c *clusterInfo) createObjInfo(un *unstructured.Unstructured, appInstanceLabel string) *node {
ownerRefs := un.GetOwnerReferences()
// Special case for endpoint. Remove after https://github.com/kubernetes/kubernetes/issues/28483 is fixed
@@ -119,28 +93,21 @@ func (c *clusterInfo) createObjInfo(un *unstructured.Unstructured, appInstanceLa
ownerRefs = append(ownerRefs, metav1.OwnerReference{
Name: un.GetName(),
Kind: kube.ServiceKind,
APIVersion: "v1",
APIVersion: "",
})
}
// Edge case: consider auto-created service account tokens as children of service account objects
if yes, ref := isServiceAccountTokenSecret(un); yes {
ownerRefs = append(ownerRefs, ref)
}
nodeInfo := &node{
resourceVersion: un.GetResourceVersion(),
ref: kube.GetObjectRef(un),
ownerRefs: ownerRefs,
}
populateNodeInfo(un, nodeInfo)
appName := kube.GetAppInstanceLabel(un, appInstanceLabel)
if len(ownerRefs) == 0 && appName != "" {
nodeInfo.appName = appName
nodeInfo.resource = un
}
nodeInfo.health, _ = health.GetResourceHealth(un, c.cacheSettingsSrc().ResourceOverrides)
nodeInfo.health, _ = health.GetResourceHealth(un, c.settings.ResourceOverrides)
return nodeInfo
}
@@ -199,7 +166,7 @@ func (c *clusterInfo) stopWatching(gk schema.GroupKind) {
// startMissingWatches lists supported cluster resources and starts watching for changes unless a watch is already running
func (c *clusterInfo) startMissingWatches() error {
apis, err := c.kubectl.GetAPIResources(c.cluster.RESTConfig(), c.cacheSettingsSrc().ResourcesFilter)
apis, err := c.kubectl.GetAPIResources(c.cluster.RESTConfig(), c.settings)
if err != nil {
return err
}
@@ -271,7 +238,12 @@ func (c *clusterInfo) watchEvents(ctx context.Context, api kube.APIResourceInfo,
if ok {
obj := event.Object.(*unstructured.Unstructured)
info.resourceVersion = obj.GetResourceVersion()
c.processEvent(event.Type, obj)
err = c.processEvent(event.Type, obj)
if err != nil {
log.Warnf("Failed to process event %s %s/%s/%s: %v", event.Type, obj.GroupVersionKind(), obj.GetNamespace(), obj.GetName(), err)
continue
}
if kube.IsCRD(obj) {
if event.Type == watch.Deleted {
group, groupOk, groupErr := unstructured.NestedString(obj.Object, "spec", "group")
@@ -310,7 +282,7 @@ func (c *clusterInfo) sync() (err error) {
c.apisMeta = make(map[schema.GroupKind]*apiMeta)
c.nodes = make(map[kube.ResourceKey]*node)
apis, err := c.kubectl.GetAPIResources(c.cluster.RESTConfig(), c.cacheSettingsSrc().ResourcesFilter)
apis, err := c.kubectl.GetAPIResources(c.cluster.RESTConfig(), c.settings)
if err != nil {
return err
}
@@ -324,7 +296,7 @@ func (c *clusterInfo) sync() (err error) {
lock.Lock()
for i := range list.Items {
c.setNode(c.createObjInfo(&list.Items[i], c.cacheSettingsSrc().AppInstanceLabelKey))
c.setNode(c.createObjInfo(&list.Items[i], c.settings.GetAppInstanceLabelKey()))
}
lock.Unlock()
return nil
@@ -357,24 +329,13 @@ func (c *clusterInfo) ensureSynced() error {
return c.syncError
}
func (c *clusterInfo) getNamespaceTopLevelResources(namespace string) map[kube.ResourceKey]appv1.ResourceNode {
c.lock.Lock()
defer c.lock.Unlock()
nodes := make(map[kube.ResourceKey]appv1.ResourceNode)
for _, node := range c.nsIndex[namespace] {
if len(node.ownerRefs) == 0 {
nodes[node.resourceKey()] = node.asResourceNode()
}
}
return nodes
}
func (c *clusterInfo) iterateHierarchy(key kube.ResourceKey, action func(child appv1.ResourceNode, appName string)) {
func (c *clusterInfo) iterateHierarchy(obj *unstructured.Unstructured, action func(child appv1.ResourceNode)) {
c.lock.Lock()
defer c.lock.Unlock()
key := kube.GetResourceKey(obj)
if objInfo, ok := c.nodes[key]; ok {
action(objInfo.asResourceNode())
nsNodes := c.nsIndex[key.Namespace]
action(objInfo.asResourceNode(), objInfo.getApp(nsNodes))
childrenByUID := make(map[types.UID][]*node)
for _, child := range nsNodes {
if objInfo.isParentOf(child) {
@@ -392,15 +353,17 @@ func (c *clusterInfo) iterateHierarchy(key kube.ResourceKey, action func(child a
return strings.Compare(key1.String(), key2.String()) < 0
})
child := children[0]
action(child.asResourceNode(), child.getApp(nsNodes))
action(child.asResourceNode())
child.iterateChildren(nsNodes, map[kube.ResourceKey]bool{objInfo.resourceKey(): true}, action)
}
}
} else {
action(c.createObjInfo(obj, c.settings.GetAppInstanceLabelKey()).asResourceNode())
}
}
func (c *clusterInfo) isNamespaced(gk schema.GroupKind) bool {
if api, ok := c.apisMeta[gk]; ok && !api.namespaced {
func (c *clusterInfo) isNamespaced(obj *unstructured.Unstructured) bool {
if api, ok := c.apisMeta[kube.GetResourceKey(obj).GroupKind()]; ok && !api.namespaced {
return false
}
return true
@@ -423,7 +386,7 @@ func (c *clusterInfo) getManagedLiveObjs(a *appv1.Application, targetObjs []*uns
lock := &sync.Mutex{}
err := util.RunAllAsync(len(targetObjs), func(i int) error {
targetObj := targetObjs[i]
key := GetTargetObjKey(a, targetObj, c.isNamespaced(targetObj.GroupVersionKind().GroupKind()))
key := GetTargetObjKey(a, targetObj, c.isNamespaced(targetObj))
lock.Lock()
managedObj := managedObjs[key]
lock.Unlock()
@@ -482,7 +445,7 @@ func (c *clusterInfo) getManagedLiveObjs(a *appv1.Application, targetObjs []*uns
return managedObjs, nil
}
func (c *clusterInfo) processEvent(event watch.EventType, un *unstructured.Unstructured) {
func (c *clusterInfo) processEvent(event watch.EventType, un *unstructured.Unstructured) error {
c.lock.Lock()
defer c.lock.Unlock()
key := kube.GetResourceKey(un)
@@ -494,6 +457,8 @@ func (c *clusterInfo) processEvent(event watch.EventType, un *unstructured.Unstr
} else if event != watch.Deleted {
c.onNodeUpdated(exists, existingNode, un, key)
}
return nil
}
func (c *clusterInfo) onNodeUpdated(exists bool, existingNode *node, un *unstructured.Unstructured, key kube.ResourceKey) {
@@ -501,7 +466,7 @@ func (c *clusterInfo) onNodeUpdated(exists bool, existingNode *node, un *unstruc
if exists {
nodes = append(nodes, existingNode)
}
newObj := c.createObjInfo(un, c.cacheSettingsSrc().AppInstanceLabelKey)
newObj := c.createObjInfo(un, c.settings.GetAppInstanceLabelKey())
c.setNode(newObj)
nodes = append(nodes, newObj)
toNotify := make(map[string]bool)
@@ -515,7 +480,9 @@ func (c *clusterInfo) onNodeUpdated(exists bool, existingNode *node, un *unstruc
toNotify[app] = n.isRootAppNode() || toNotify[app]
}
}
c.onObjectUpdated(toNotify, newObj.ref)
for name, isRootAppNode := range toNotify {
c.onAppUpdated(name, isRootAppNode, newObj.ref)
}
}
func (c *clusterInfo) onNodeRemoved(key kube.ResourceKey, n *node) {
@@ -525,11 +492,9 @@ func (c *clusterInfo) onNodeRemoved(key kube.ResourceKey, n *node) {
}
c.removeNode(key)
managedByApp := make(map[string]bool)
if appName != "" {
managedByApp[appName] = n.isRootAppNode()
c.onAppUpdated(appName, n.isRootAppNode(), n.ref)
}
c.onObjectUpdated(managedByApp, n.ref)
}
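
Because processEvent now returns an error (see the watchEvents hunk above), the watch loop can log and continue instead of silently dropping failures. A minimal consumption sketch, assuming a watchCh channel of watch.Event values and the enclosing retry/teardown logic:

// Sketch only: watchCh and the surrounding retry loop are assumed.
for event := range watchCh {
	obj, ok := event.Object.(*unstructured.Unstructured)
	if !ok {
		continue
	}
	if err := c.processEvent(event.Type, obj); err != nil {
		log.Warnf("Failed to process event %s %s/%s: %v",
			event.Type, obj.GetNamespace(), obj.GetName(), err)
	}
}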
var (

View File

@@ -18,11 +18,11 @@ import (
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/dynamic/fake"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/kube/kubetest"
"github.com/argoproj/argo-cd/util/settings"
)
func strToUnstructured(jsonStr string) *unstructured.Unstructured {
@@ -60,8 +60,6 @@ var (
uid: "2"
name: helm-guestbook-rs
namespace: default
annotations:
deployment.kubernetes.io/revision: "2"
ownerReferences:
- apiVersion: apps/v1beta1
kind: Deployment
@@ -87,7 +85,6 @@ var (
name: helm-guestbook
namespace: default
resourceVersion: "123"
uid: "4"
spec:
selector:
app: guestbook
@@ -103,7 +100,6 @@ var (
metadata:
name: helm-guestbook
namespace: default
uid: "4"
spec:
backend:
serviceName: not-found-service
@@ -148,66 +144,33 @@ func newCluster(objs ...*unstructured.Unstructured) *clusterInfo {
Meta: metav1.APIResource{Namespaced: true},
}}
return newClusterExt(&kubetest.MockKubectlCmd{APIResources: apiResources})
return newClusterExt(kubetest.MockKubectlCmd{APIResources: apiResources})
}
func newClusterExt(kubectl kube.Kubectl) *clusterInfo {
return &clusterInfo{
lock: &sync.Mutex{},
nodes: make(map[kube.ResourceKey]*node),
onObjectUpdated: func(managedByApp map[string]bool, reference corev1.ObjectReference) {},
kubectl: kubectl,
nsIndex: make(map[string]map[kube.ResourceKey]*node),
cluster: &appv1.Cluster{},
syncTime: nil,
syncLock: &sync.Mutex{},
apisMeta: make(map[schema.GroupKind]*apiMeta),
log: log.WithField("cluster", "test"),
cacheSettingsSrc: func() *cacheSettings {
return &cacheSettings{AppInstanceLabelKey: common.LabelKeyAppInstance}
},
lock: &sync.Mutex{},
nodes: make(map[kube.ResourceKey]*node),
onAppUpdated: func(appName string, fullRefresh bool, reference corev1.ObjectReference) {},
kubectl: kubectl,
nsIndex: make(map[string]map[kube.ResourceKey]*node),
cluster: &appv1.Cluster{},
syncTime: nil,
syncLock: &sync.Mutex{},
apisMeta: make(map[schema.GroupKind]*apiMeta),
log: log.WithField("cluster", "test"),
settings: &settings.ArgoCDSettings{},
}
}
func getChildren(cluster *clusterInfo, un *unstructured.Unstructured) []appv1.ResourceNode {
hierarchy := make([]appv1.ResourceNode, 0)
cluster.iterateHierarchy(kube.GetResourceKey(un), func(child appv1.ResourceNode, app string) {
cluster.iterateHierarchy(un, func(child appv1.ResourceNode) {
hierarchy = append(hierarchy, child)
})
return hierarchy[1:]
}
func TestGetNamespaceResources(t *testing.T) {
defaultNamespaceTopLevel1 := strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata: {"name": "helm-guestbook1", "namespace": "default"}
`)
defaultNamespaceTopLevel2 := strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata: {"name": "helm-guestbook2", "namespace": "default"}
`)
kubesystemNamespaceTopLevel2 := strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata: {"name": "helm-guestbook3", "namespace": "kube-system"}
`)
cluster := newCluster(defaultNamespaceTopLevel1, defaultNamespaceTopLevel2, kubesystemNamespaceTopLevel2)
err := cluster.ensureSynced()
assert.Nil(t, err)
resources := cluster.getNamespaceTopLevelResources("default")
assert.Len(t, resources, 2)
assert.Equal(t, resources[kube.GetResourceKey(defaultNamespaceTopLevel1)].Name, "helm-guestbook1")
assert.Equal(t, resources[kube.GetResourceKey(defaultNamespaceTopLevel2)].Name, "helm-guestbook2")
resources = cluster.getNamespaceTopLevelResources("kube-system")
assert.Len(t, resources, 1)
assert.Equal(t, resources[kube.GetResourceKey(kubesystemNamespaceTopLevel2)].Name, "helm-guestbook3")
}
func TestGetChildren(t *testing.T) {
cluster := newCluster(testPod, testRS, testDeploy)
err := cluster.ensureSynced()
@@ -249,7 +212,7 @@ func TestGetChildren(t *testing.T) {
},
ResourceVersion: "123",
Health: &appv1.HealthStatus{Status: appv1.HealthStatusHealthy},
Info: []appv1.InfoItem{{Name: "Revision", Value: "Rev:2"}},
Info: []appv1.InfoItem{},
ParentRefs: []appv1.ResourceRef{{Group: "apps", Version: "", Kind: "Deployment", Namespace: "default", Name: "helm-guestbook", UID: "3"}},
}}, rsChildren...), deployChildren)
}
@@ -286,7 +249,8 @@ func TestChildDeletedEvent(t *testing.T) {
err := cluster.ensureSynced()
assert.Nil(t, err)
cluster.processEvent(watch.Deleted, testPod)
err = cluster.processEvent(watch.Deleted, testPod)
assert.Nil(t, err)
rsChildren := getChildren(cluster, testRS)
assert.Equal(t, []appv1.ResourceNode{}, rsChildren)
@@ -311,7 +275,8 @@ func TestProcessNewChildEvent(t *testing.T) {
uid: "2"
resourceVersion: "123"`)
cluster.processEvent(watch.Added, newPod)
err = cluster.processEvent(watch.Added, newPod)
assert.Nil(t, err)
rsChildren := getChildren(cluster, testRS)
sort.Slice(rsChildren, func(i, j int) bool {
@@ -392,7 +357,8 @@ func TestUpdateResourceTags(t *testing.T) {
},
}},
}
cluster.processEvent(watch.Modified, mustToUnstructured(pod))
err = cluster.processEvent(watch.Modified, mustToUnstructured(pod))
assert.Nil(t, err)
podNode = cluster.nodes[kube.GetResourceKey(mustToUnstructured(pod))]
@@ -403,16 +369,15 @@ func TestUpdateResourceTags(t *testing.T) {
func TestUpdateAppResource(t *testing.T) {
updatesReceived := make([]string, 0)
cluster := newCluster(testPod, testRS, testDeploy)
cluster.onObjectUpdated = func(managedByApp map[string]bool, _ corev1.ObjectReference) {
for appName, fullRefresh := range managedByApp {
updatesReceived = append(updatesReceived, fmt.Sprintf("%s: %v", appName, fullRefresh))
}
cluster.onAppUpdated = func(appName string, fullRefresh bool, _ corev1.ObjectReference) {
updatesReceived = append(updatesReceived, fmt.Sprintf("%s: %v", appName, fullRefresh))
}
err := cluster.ensureSynced()
assert.Nil(t, err)
cluster.processEvent(watch.Modified, mustToUnstructured(testPod))
err = cluster.processEvent(watch.Modified, mustToUnstructured(testPod))
assert.Nil(t, err)
assert.Contains(t, updatesReceived, "helm-guestbook: false")
}
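
One small but load-bearing change above: newCluster now passes kubetest.MockKubectlCmd by value instead of by pointer, which compiles only because the mock's methods use value receivers. A generic illustration of Go's method-set rule (all names invented):

package main

import "fmt"

type Kubectl interface{ Version() string }

type mockKubectl struct{}

// Value receiver: both mockKubectl and *mockKubectl satisfy Kubectl.
func (m mockKubectl) Version() string { return "v1.14" }

func main() {
	var k Kubectl
	k = mockKubectl{} // legal only because Version has a value receiver
	fmt.Println(k.Version())
	k = &mockKubectl{} // a pointer's method set includes value-receiver methods
	fmt.Println(k.Version())
}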

View File

@@ -11,16 +11,11 @@ import (
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/resource"
)
func populateNodeInfo(un *unstructured.Unstructured, node *node) {
gvk := un.GroupVersionKind()
revision := resource.GetRevision(un)
if revision > 0 {
node.info = append(node.info, v1alpha1.InfoItem{Name: "Revision", Value: fmt.Sprintf("Rev:%v", revision)})
}
switch gvk.Group {
case "":
switch gvk.Kind {
@@ -31,13 +26,14 @@ func populateNodeInfo(un *unstructured.Unstructured, node *node) {
populateServiceInfo(un, node)
return
}
case "extensions", "networking.k8s.io":
case "extensions":
switch gvk.Kind {
case kube.IngressKind:
populateIngressInfo(un, node)
return
}
}
node.info = []v1alpha1.InfoItem{}
}
func getIngress(un *unstructured.Unstructured) []v1.LoadBalancerIngress {
@@ -126,23 +122,14 @@ func populateIngressInfo(un *unstructured.Unstructured, node *node) {
stringPort = fmt.Sprintf("%v", port)
}
var externalURL string
switch stringPort {
case "80", "http":
externalURL = fmt.Sprintf("http://%s", host)
urlsSet[fmt.Sprintf("http://%s", host)] = true
case "443", "https":
externalURL = fmt.Sprintf("https://%s", host)
urlsSet[fmt.Sprintf("https://%s", host)] = true
default:
externalURL = fmt.Sprintf("http://%s:%s", host, stringPort)
urlsSet[fmt.Sprintf("http://%s:%s", host, stringPort)] = true
}
subPath := ""
if nestedPath, ok, err := unstructured.NestedString(path, "path"); ok && err == nil {
subPath = nestedPath
}
externalURL += subPath
urlsSet[externalURL] = true
}
}
}
@@ -162,6 +149,7 @@ func populatePodInfo(un *unstructured.Unstructured, node *node) {
pod := v1.Pod{}
err := runtime.DefaultUnstructuredConverter.FromUnstructured(un.Object, &pod)
if err != nil {
node.info = []v1alpha1.InfoItem{}
return
}
restarts := 0
@@ -249,6 +237,7 @@ func populatePodInfo(un *unstructured.Unstructured, node *node) {
reason = "Terminating"
}
node.info = make([]v1alpha1.InfoItem, 0)
if reason != "" {
node.info = append(node.info, v1alpha1.InfoItem{Name: "Status Reason", Value: reason})
}
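
The ingress hunk above folds the scheme choice and the rule's optional path into a single external URL. A hypothetical distillation of that rule as a standalone function (externalURL is an invented name):

package main

import "fmt"

// externalURL mirrors the switch in populateIngressInfo: well-known ports
// select the scheme, any other port is rendered host:port, and the ingress
// rule's sub-path (if present) is appended.
func externalURL(host, port, subPath string) string {
	switch port {
	case "80", "http":
		return "http://" + host + subPath
	case "443", "https":
		return "https://" + host + subPath
	default:
		return fmt.Sprintf("http://%s:%s%s", host, port, subPath)
	}
}

func main() {
	fmt.Println(externalURL("helm-guestbook.com", "443", "/my/sub/path/"))
	// https://helm-guestbook.com/my/sub/path/
}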

View File

@@ -68,7 +68,7 @@ func TestGetIngressInfo(t *testing.T) {
Kind: kube.ServiceKind,
Name: "helm-guestbook",
}},
ExternalURLs: []string{"https://helm-guestbook.com/"},
ExternalURLs: []string{"https://helm-guestbook.com"},
}, node.networkingInfo)
}
@@ -103,120 +103,6 @@ func TestGetIngressInfoNoHost(t *testing.T) {
Kind: kube.ServiceKind,
Name: "helm-guestbook",
}},
ExternalURLs: []string{"https://107.178.210.11/"},
ExternalURLs: []string{"https://107.178.210.11"},
}, node.networkingInfo)
}
func TestExternalUrlWithSubPath(t *testing.T) {
ingress := strToUnstructured(`
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helm-guestbook
namespace: default
spec:
rules:
- http:
paths:
- backend:
serviceName: helm-guestbook
servicePort: 443
path: /my/sub/path/
status:
loadBalancer:
ingress:
- ip: 107.178.210.11`)
node := &node{}
populateNodeInfo(ingress, node)
expectedExternalUrls := []string{"https://107.178.210.11/my/sub/path/"}
assert.Equal(t, expectedExternalUrls, node.networkingInfo.ExternalURLs)
}
func TestExternalUrlWithMultipleSubPaths(t *testing.T) {
ingress := strToUnstructured(`
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helm-guestbook
namespace: default
spec:
rules:
- host: helm-guestbook.com
http:
paths:
- backend:
serviceName: helm-guestbook
servicePort: 443
path: /my/sub/path/
- backend:
serviceName: helm-guestbook-2
servicePort: 443
path: /my/sub/path/2
- backend:
serviceName: helm-guestbook-3
servicePort: 443
status:
loadBalancer:
ingress:
- ip: 107.178.210.11`)
node := &node{}
populateNodeInfo(ingress, node)
expectedExternalUrls := []string{"https://helm-guestbook.com/my/sub/path/", "https://helm-guestbook.com/my/sub/path/2", "https://helm-guestbook.com"}
actualURLs := node.networkingInfo.ExternalURLs
sort.Strings(expectedExternalUrls)
sort.Strings(actualURLs)
assert.Equal(t, expectedExternalUrls, actualURLs)
}
func TestExternalUrlWithNoSubPath(t *testing.T) {
ingress := strToUnstructured(`
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helm-guestbook
namespace: default
spec:
rules:
- http:
paths:
- backend:
serviceName: helm-guestbook
servicePort: 443
status:
loadBalancer:
ingress:
- ip: 107.178.210.11`)
node := &node{}
populateNodeInfo(ingress, node)
expectedExternalUrls := []string{"https://107.178.210.11"}
assert.Equal(t, expectedExternalUrls, node.networkingInfo.ExternalURLs)
}
func TestExternalUrlWithNetworkingApi(t *testing.T) {
ingress := strToUnstructured(`
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: helm-guestbook
namespace: default
spec:
rules:
- http:
paths:
- backend:
serviceName: helm-guestbook
servicePort: 443
status:
loadBalancer:
ingress:
- ip: 107.178.210.11`)
node := &node{}
populateNodeInfo(ingress, node)
expectedExternalUrls := []string{"https://107.178.210.11"}
assert.Equal(t, expectedExternalUrls, node.networkingInfo.ExternalURLs)
}

View File

@@ -2,19 +2,11 @@
package mocks
import (
context "context"
mock "github.com/stretchr/testify/mock"
kube "github.com/argoproj/argo-cd/util/kube"
schema "k8s.io/apimachinery/pkg/runtime/schema"
unstructured "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
v1alpha1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
)
import context "context"
import kube "github.com/argoproj/argo-cd/util/kube"
import mock "github.com/stretchr/testify/mock"
import unstructured "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
import v1alpha1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
// LiveStateCache is an autogenerated mock type for the LiveStateCache type
type LiveStateCache struct {
@@ -44,48 +36,25 @@ func (_m *LiveStateCache) GetManagedLiveObjs(a *v1alpha1.Application, targetObjs
return r0, r1
}
// GetNamespaceTopLevelResources provides a mock function with given fields: server, namespace
func (_m *LiveStateCache) GetNamespaceTopLevelResources(server string, namespace string) (map[kube.ResourceKey]v1alpha1.ResourceNode, error) {
ret := _m.Called(server, namespace)
var r0 map[kube.ResourceKey]v1alpha1.ResourceNode
if rf, ok := ret.Get(0).(func(string, string) map[kube.ResourceKey]v1alpha1.ResourceNode); ok {
r0 = rf(server, namespace)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(map[kube.ResourceKey]v1alpha1.ResourceNode)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(string, string) error); ok {
r1 = rf(server, namespace)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Invalidate provides a mock function with given fields:
func (_m *LiveStateCache) Invalidate() {
_m.Called()
}
// IsNamespaced provides a mock function with given fields: server, gk
func (_m *LiveStateCache) IsNamespaced(server string, gk schema.GroupKind) (bool, error) {
ret := _m.Called(server, gk)
// IsNamespaced provides a mock function with given fields: server, obj
func (_m *LiveStateCache) IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error) {
ret := _m.Called(server, obj)
var r0 bool
if rf, ok := ret.Get(0).(func(string, schema.GroupKind) bool); ok {
r0 = rf(server, gk)
if rf, ok := ret.Get(0).(func(string, *unstructured.Unstructured) bool); ok {
r0 = rf(server, obj)
} else {
r0 = ret.Get(0).(bool)
}
var r1 error
if rf, ok := ret.Get(1).(func(string, schema.GroupKind) error); ok {
r1 = rf(server, gk)
if rf, ok := ret.Get(1).(func(string, *unstructured.Unstructured) error); ok {
r1 = rf(server, obj)
} else {
r1 = ret.Error(1)
}
@@ -93,13 +62,13 @@ func (_m *LiveStateCache) IsNamespaced(server string, gk schema.GroupKind) (bool
return r0, r1
}
// IterateHierarchy provides a mock function with given fields: server, key, action
func (_m *LiveStateCache) IterateHierarchy(server string, key kube.ResourceKey, action func(v1alpha1.ResourceNode, string)) error {
ret := _m.Called(server, key, action)
// IterateHierarchy provides a mock function with given fields: server, obj, action
func (_m *LiveStateCache) IterateHierarchy(server string, obj *unstructured.Unstructured, action func(v1alpha1.ResourceNode)) error {
ret := _m.Called(server, obj, action)
var r0 error
if rf, ok := ret.Get(0).(func(string, kube.ResourceKey, func(v1alpha1.ResourceNode, string)) error); ok {
r0 = rf(server, key, action)
if rf, ok := ret.Get(0).(func(string, *unstructured.Unstructured, func(v1alpha1.ResourceNode)) error); ok {
r0 = rf(server, obj, action)
} else {
r0 = ret.Error(0)
}
@@ -108,15 +77,6 @@ func (_m *LiveStateCache) IterateHierarchy(server string, key kube.ResourceKey,
}
// Run provides a mock function with given fields: ctx
func (_m *LiveStateCache) Run(ctx context.Context) error {
ret := _m.Called(ctx)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context) error); ok {
r0 = rf(ctx)
} else {
r0 = ret.Error(0)
}
return r0
func (_m *LiveStateCache) Run(ctx context.Context) {
_m.Called(ctx)
}
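
The regenerated mock keeps testify's usual shape, so tests stub it with On/Return. A usage sketch against the GroupKind-based IsNamespaced signature shown above; the import path of the mocks package and the cluster URL are assumptions:

package mocks_test

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
	"k8s.io/apimachinery/pkg/runtime/schema"

	"github.com/argoproj/argo-cd/controller/cache/mocks" // assumed path
)

func TestIsNamespacedStub(t *testing.T) {
	c := &mocks.LiveStateCache{}
	c.On("IsNamespaced", "https://kubernetes.default.svc", mock.Anything).Return(true, nil)

	namespaced, err := c.IsNamespaced("https://kubernetes.default.svc",
		schema.GroupKind{Group: "apps", Kind: "Deployment"})
	assert.NoError(t, err)
	assert.True(t, namespaced)
	c.AssertExpectations(t)
}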

View File

@@ -35,15 +35,7 @@ func (n *node) resourceKey() kube.ResourceKey {
}
func (n *node) isParentOf(child *node) bool {
for i, ownerRef := range child.ownerRefs {
// backfill UID of inferred owner child references
if ownerRef.UID == "" && n.ref.Kind == ownerRef.Kind && n.ref.APIVersion == ownerRef.APIVersion && n.ref.Name == ownerRef.Name {
ownerRef.UID = n.ref.UID
child.ownerRefs[i] = ownerRef
return true
}
for _, ownerRef := range child.ownerRefs {
if n.ref.UID == ownerRef.UID {
return true
}
@@ -127,14 +119,14 @@ func (n *node) asResourceNode() appv1.ResourceNode {
}
}
func (n *node) iterateChildren(ns map[kube.ResourceKey]*node, parents map[kube.ResourceKey]bool, action func(child appv1.ResourceNode, appName string)) {
func (n *node) iterateChildren(ns map[kube.ResourceKey]*node, parents map[kube.ResourceKey]bool, action func(child appv1.ResourceNode)) {
for childKey, child := range ns {
if n.isParentOf(ns[childKey]) {
if parents[childKey] {
key := n.resourceKey()
log.Warnf("Circular dependency detected. %s is child and parent of %s", childKey.String(), key.String())
} else {
action(child.asResourceNode(), child.getApp(ns))
action(child.asResourceNode())
child.iterateChildren(ns, newResourceKeySet(parents, n.resourceKey()), action)
}
}

View File

@@ -3,14 +3,12 @@ package cache
import (
"testing"
"github.com/argoproj/argo-cd/common"
"github.com/stretchr/testify/assert"
"github.com/argoproj/argo-cd/util/settings"
)
var c = &clusterInfo{cacheSettingsSrc: func() *cacheSettings {
return &cacheSettings{AppInstanceLabelKey: common.LabelKeyAppInstance}
}}
var c = &clusterInfo{settings: &settings.ArgoCDSettings{}}
func TestIsParentOf(t *testing.T) {
child := c.createObjInfo(testPod, "")
@@ -30,54 +28,3 @@ func TestIsParentOfSameKindDifferentGroupAndUID(t *testing.T) {
assert.False(t, invalidParent.isParentOf(child))
}
func TestIsServiceParentOfEndPointWithTheSameName(t *testing.T) {
nonMatchingNameEndPoint := c.createObjInfo(strToUnstructured(`
apiVersion: v1
kind: Endpoints
metadata:
name: not-matching-name
namespace: default
`), "")
matchingNameEndPoint := c.createObjInfo(strToUnstructured(`
apiVersion: v1
kind: Endpoints
metadata:
name: helm-guestbook
namespace: default
`), "")
parent := c.createObjInfo(testService, "")
assert.True(t, parent.isParentOf(matchingNameEndPoint))
assert.Equal(t, parent.ref.UID, matchingNameEndPoint.ownerRefs[0].UID)
assert.False(t, parent.isParentOf(nonMatchingNameEndPoint))
}
func TestIsServiceAccoountParentOfSecret(t *testing.T) {
serviceAccount := c.createObjInfo(strToUnstructured(`
apiVersion: v1
kind: ServiceAccount
metadata:
name: default
namespace: default
uid: '123'
secrets:
- name: default-token-123
`), "")
tokenSecret := c.createObjInfo(strToUnstructured(`
apiVersion: v1
kind: Secret
metadata:
annotations:
kubernetes.io/service-account.name: default
kubernetes.io/service-account.uid: '123'
name: default-token-123
namespace: default
uid: '345'
type: kubernetes.io/service-account-token
`), "")
assert.True(t, serviceAccount.isParentOf(tokenSecret))
}
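
The final test above exercises isServiceAccountTokenSecret from earlier in this diff: a v1 Secret of type kubernetes.io/service-account-token becomes a child of the ServiceAccount named by its annotations. A standalone distillation of just the annotation check (inferOwner is an invented name):

package main

import "fmt"

// inferOwner mirrors the annotation handling in isServiceAccountTokenSecret:
// both the service-account name and uid annotations must be present and
// non-empty before an owner reference is inferred.
func inferOwner(annotations map[string]string) (name, uid string, ok bool) {
	name, okName := annotations["kubernetes.io/service-account.name"]
	uid, okID := annotations["kubernetes.io/service-account.uid"]
	return name, uid, okName && okID && name != "" && uid != ""
}

func main() {
	name, uid, ok := inferOwner(map[string]string{
		"kubernetes.io/service-account.name": "default",
		"kubernetes.io/service-account.uid":  "123",
	})
	fmt.Println(name, uid, ok) // default 123 true
}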

View File

@@ -18,11 +18,9 @@ import (
type MetricsServer struct {
*http.Server
syncCounter *prometheus.CounterVec
k8sRequestCounter *prometheus.CounterVec
kubectlExecCounter *prometheus.CounterVec
kubectlExecPendingGauge *prometheus.GaugeVec
reconcileHistogram *prometheus.HistogramVec
syncCounter *prometheus.CounterVec
k8sRequestCounter *prometheus.CounterVec
reconcileHistogram *prometheus.HistogramVec
}
const (
@@ -78,16 +76,6 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, health
append(descAppDefaultLabels, "phase"),
)
appRegistry.MustRegister(syncCounter)
kubectlExecCounter := prometheus.NewCounterVec(prometheus.CounterOpts{
Name: "argocd_kubectl_exec_total",
Help: "Number of kubectl executions",
}, []string{"command"})
appRegistry.MustRegister(kubectlExecCounter)
kubectlExecPendingGauge := prometheus.NewGaugeVec(prometheus.GaugeOpts{
Name: "argocd_kubectl_exec_pending",
Help: "Number of pending kubectl executions",
}, []string{"command"})
appRegistry.MustRegister(kubectlExecPendingGauge)
k8sRequestCounter := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "argocd_app_k8s_request_total",
@@ -104,7 +92,7 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, health
// Buckets chosen after observing a ~2100ms mean reconcile time
Buckets: []float64{0.25, .5, 1, 2, 4, 8, 16},
},
descAppDefaultLabels,
append(descAppDefaultLabels),
)
appRegistry.MustRegister(reconcileHistogram)
@@ -114,11 +102,9 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, health
Addr: addr,
Handler: mux,
},
syncCounter: syncCounter,
k8sRequestCounter: k8sRequestCounter,
reconcileHistogram: reconcileHistogram,
kubectlExecCounter: kubectlExecCounter,
kubectlExecPendingGauge: kubectlExecPendingGauge,
syncCounter: syncCounter,
k8sRequestCounter: k8sRequestCounter,
reconcileHistogram: reconcileHistogram,
}
}
@@ -140,18 +126,6 @@ func (m *MetricsServer) IncReconcile(app *argoappv1.Application, duration time.D
m.reconcileHistogram.WithLabelValues(app.Namespace, app.Name, app.Spec.GetProject()).Observe(duration.Seconds())
}
func (m *MetricsServer) IncKubectlExec(command string) {
m.kubectlExecCounter.WithLabelValues(command).Inc()
}
func (m *MetricsServer) IncKubectlExecPending(command string) {
m.kubectlExecPendingGauge.WithLabelValues(command).Inc()
}
func (m *MetricsServer) DecKubectlExecPending(command string) {
m.kubectlExecPendingGauge.WithLabelValues(command).Dec()
}
type appCollector struct {
store applister.ApplicationLister
}
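
The kubectl counter and pending gauge removed above are plain prometheus vectors registered on the controller's private registry. A self-contained sketch of the pattern; the registry setup here is illustrative, not the Argo CD server wiring:

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	registry := prometheus.NewRegistry()
	kubectlExecCounter := prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "argocd_kubectl_exec_total",
		Help: "Number of kubectl executions",
	}, []string{"command"})
	kubectlExecPendingGauge := prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Name: "argocd_kubectl_exec_pending",
		Help: "Number of pending kubectl executions",
	}, []string{"command"})
	registry.MustRegister(kubectlExecCounter, kubectlExecPendingGauge)

	// Mirrors IncKubectlExecPending / IncKubectlExec / DecKubectlExecPending.
	kubectlExecPendingGauge.WithLabelValues("apply").Inc()
	kubectlExecCounter.WithLabelValues("apply").Inc()
	kubectlExecPendingGauge.WithLabelValues("apply").Dec()

	fmt.Println("metrics registered")
}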

View File

@@ -9,7 +9,6 @@ import (
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/cache"
@@ -19,7 +18,8 @@ import (
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/db"
@@ -28,7 +28,6 @@ import (
hookutil "github.com/argoproj/argo-cd/util/hook"
kubeutil "github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/resource"
"github.com/argoproj/argo-cd/util/resource/ignore"
"github.com/argoproj/argo-cd/util/settings"
)
@@ -53,50 +52,42 @@ func GetLiveObjs(res []managedResource) []*unstructured.Unstructured {
}
type ResourceInfoProvider interface {
IsNamespaced(server string, gk schema.GroupKind) (bool, error)
IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error)
}
// AppStateManager defines methods which allow comparing the application spec with the actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool, localObjects []string) *comparisonResult
CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool, localObjects []string) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState)
}
type comparisonResult struct {
reconciledAt metav1.Time
syncStatus *v1alpha1.SyncStatus
healthStatus *v1alpha1.HealthStatus
resources []v1alpha1.ResourceStatus
managedResources []managedResource
conditions []v1alpha1.ApplicationCondition
hooks []*unstructured.Unstructured
diffNormalizer diff.Normalizer
appSourceType v1alpha1.ApplicationSourceType
}
func (cr *comparisonResult) targetObjs() []*unstructured.Unstructured {
objs := cr.hooks
for _, r := range cr.managedResources {
if r.Target != nil {
objs = append(objs, r.Target)
}
}
return objs
}
// appStateManager allows to compare applications to git
type appStateManager struct {
metricsServer *metrics.MetricsServer
db db.ArgoDB
settingsMgr *settings.SettingsManager
settings *settings.ArgoCDSettings
appclientset appclientset.Interface
projInformer cache.SharedIndexInformer
kubectl kubeutil.Kubectl
repoClientset apiclient.Clientset
repoClientset reposerver.Clientset
liveStateCache statecache.LiveStateCache
namespace string
}
func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1.ApplicationSource, appLabelKey, revision string, noCache bool) ([]*unstructured.Unstructured, []*unstructured.Unstructured, *apiclient.ManifestResponse, error) {
helmRepos, err := m.db.ListHelmRepositories(context.Background())
func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1.ApplicationSource, appLabelKey, revision string, noCache bool) ([]*unstructured.Unstructured, []*unstructured.Unstructured, *repository.ManifestResponse, error) {
helmRepos, err := m.db.ListHelmRepos(context.Background())
if err != nil {
return nil, nil, nil, err
}
@@ -114,31 +105,14 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
revision = source.TargetRevision
}
plugins, err := m.settingsMgr.GetConfigManagementPlugins()
if err != nil {
return nil, nil, nil, err
tools := make([]*appv1.ConfigManagementPlugin, len(m.settings.ConfigManagementPlugins))
for i := range m.settings.ConfigManagementPlugins {
tools[i] = &m.settings.ConfigManagementPlugins[i]
}
tools := make([]*appv1.ConfigManagementPlugin, len(plugins))
for i := range plugins {
tools[i] = &plugins[i]
}
buildOptions, err := m.settingsMgr.GetKustomizeBuildOptions()
if err != nil {
return nil, nil, nil, err
}
cluster, err := m.db.GetCluster(context.Background(), app.Spec.Destination.Server)
if err != nil {
return nil, nil, nil, err
}
cluster.ServerVersion, err = m.kubectl.GetServerVersion(cluster.RESTConfig())
if err != nil {
return nil, nil, nil, err
}
manifestInfo, err := repoClient.GenerateManifest(context.Background(), &apiclient.ManifestRequest{
manifestInfo, err := repoClient.GenerateManifest(context.Background(), &repository.ManifestRequest{
Repo: repo,
Repos: helmRepos,
HelmRepos: helmRepos,
Revision: revision,
NoCache: noCache,
AppLabelKey: appLabelKey,
@@ -146,19 +120,14 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
Namespace: app.Spec.Destination.Namespace,
ApplicationSource: &source,
Plugins: tools,
KustomizeOptions: &appv1.KustomizeOptions{
BuildOptions: buildOptions,
},
KubeVersion: cluster.ServerVersion,
})
if err != nil {
return nil, nil, nil, err
}
targetObjs, hooks, err := unmarshalManifests(manifestInfo.Manifests)
if err != nil {
return nil, nil, nil, err
}
targetObjs, hooks, err := unmarshalManifests(manifestInfo.Manifests)
return targetObjs, hooks, manifestInfo, nil
}
func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, []*unstructured.Unstructured, error) {
@@ -169,7 +138,7 @@ func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, []*un
if err != nil {
return nil, nil, err
}
if ignore.Ignore(obj) {
if resource.Ignore(obj) {
continue
}
if hookutil.IsHook(obj) {
@@ -191,7 +160,7 @@ func DeduplicateTargetObjects(
targetByKey := make(map[kubeutil.ResourceKey][]*unstructured.Unstructured)
for i := range objs {
obj := objs[i]
isNamespaced, err := infoProvider.IsNamespaced(server, obj.GroupVersionKind().GroupKind())
isNamespaced, err := infoProvider.IsNamespaced(server, obj)
if err != nil {
return objs, nil, err
}
@@ -254,49 +223,24 @@ func dedupLiveResources(targetObjs []*unstructured.Unstructured, liveObjsByKey m
}
}
func (m *appStateManager) getComparisonSettings(app *appv1.Application) (string, map[string]v1alpha1.ResourceOverride, diff.Normalizer, error) {
resourceOverrides, err := m.settingsMgr.GetResourceOverrides()
if err != nil {
return "", nil, nil, err
}
appLabelKey, err := m.settingsMgr.GetAppInstanceLabelKey()
if err != nil {
return "", nil, nil, err
}
diffNormalizer, err := argo.NewDiffNormalizer(app.Spec.IgnoreDifferences, resourceOverrides)
if err != nil {
return "", nil, nil, err
}
return appLabelKey, resourceOverrides, diffNormalizer, nil
}
// CompareAppState compares the application's git state to the live app state, using the specified
// revision and supplied source. If the revision or overrides are empty, it compares against the
// revision and overrides in the app spec.
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool, localManifests []string) *comparisonResult {
appLabelKey, resourceOverrides, diffNormalizer, err := m.getComparisonSettings(app)
// return unknown comparison result if basic comparison settings cannot be loaded
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool, localManifests []string) (*comparisonResult, error) {
diffNormalizer, err := argo.NewDiffNormalizer(app.Spec.IgnoreDifferences, m.settings.ResourceOverrides)
if err != nil {
return &comparisonResult{
syncStatus: &v1alpha1.SyncStatus{
ComparedTo: appv1.ComparedTo{Source: source, Destination: app.Spec.Destination},
Status: appv1.SyncStatusCodeUnknown,
},
healthStatus: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
}
return nil, err
}
// make a best effort to load live and target state, so as to present as much information about the app state as possible
failedToLoadObjs := false
conditions := make([]v1alpha1.ApplicationCondition, 0)
logCtx := log.WithField("application", app.Name)
logCtx.Infof("Comparing app state (cluster: %s, namespace: %s)", app.Spec.Destination.Server, app.Spec.Destination.Namespace)
observedAt := metav1.Now()
failedToLoadObjs := false
conditions := make([]v1alpha1.ApplicationCondition, 0)
appLabelKey := m.settings.GetAppInstanceLabelKey()
var targetObjs []*unstructured.Unstructured
var hooks []*unstructured.Unstructured
var manifestInfo *apiclient.ManifestResponse
var manifestInfo *repository.ManifestResponse
if len(localManifests) == 0 {
targetObjs, hooks, manifestInfo, err = m.getRepoObjs(app, source, appLabelKey, revision, noCache)
@@ -321,23 +265,6 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
}
conditions = append(conditions, dedupConditions...)
resFilter, err := m.settingsMgr.GetResourcesFilter()
if err != nil {
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
} else {
for i := len(targetObjs) - 1; i >= 0; i-- {
targetObj := targetObjs[i]
gvk := targetObj.GroupVersionKind()
if resFilter.IsExcludedResource(gvk.Group, gvk.Kind, app.Spec.Destination.Server) {
targetObjs = append(targetObjs[:i], targetObjs[i+1:]...)
conditions = append(conditions, v1alpha1.ApplicationCondition{
Type: v1alpha1.ApplicationConditionExcludedResourceWarning,
Message: fmt.Sprintf("Resource %s/%s %s is excluded in the settings", gvk.Group, gvk.Kind, targetObj.GetName()),
})
}
}
}
logCtx.Debugf("Generated config manifests")
liveObjByKey, err := m.liveStateCache.GetManagedLiveObjs(app, targetObjs)
dedupLiveResources(targetObjs, liveObjByKey)
@@ -363,7 +290,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
for i, obj := range targetObjs {
gvk := obj.GroupVersionKind()
ns := util.FirstNonEmpty(obj.GetNamespace(), app.Spec.Destination.Namespace)
if namespaced, err := m.liveStateCache.IsNamespaced(app.Spec.Destination.Server, obj.GroupVersionKind().GroupKind()); err == nil && !namespaced {
if namespaced, err := m.liveStateCache.IsNamespaced(app.Spec.Destination.Server, obj); err == nil && !namespaced {
ns = ""
}
key := kubeutil.NewResourceKey(gvk.Group, gvk.Kind, ns, obj.GetName())
@@ -386,9 +313,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
// Do the actual comparison
diffResults, err := diff.DiffArray(targetObjs, managedLiveObj, diffNormalizer)
if err != nil {
diffResults = &diff.DiffResultList{}
failedToLoadObjs = true
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
return nil, err
}
syncCode := v1alpha1.SyncStatusCodeSynced
@@ -406,17 +331,16 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
gvk := obj.GroupVersionKind()
resState := v1alpha1.ResourceStatus{
Namespace: obj.GetNamespace(),
Name: obj.GetName(),
Kind: gvk.Kind,
Version: gvk.Version,
Group: gvk.Group,
Hook: hookutil.IsHook(obj),
RequiresPruning: targetObj == nil && liveObj != nil,
Namespace: obj.GetNamespace(),
Name: obj.GetName(),
Kind: gvk.Kind,
Version: gvk.Version,
Group: gvk.Group,
Hook: hookutil.IsHook(obj),
}
diffResult := diffResults.Diffs[i]
if resState.Hook || ignore.Ignore(obj) {
if resState.Hook || resource.Ignore(obj) {
// For resource hooks, don't store sync status, and do not affect overall sync status
} else if diffResult.Modified || targetObj == nil || liveObj == nil {
// Set resource state to OutOfSync since one of the following is true:
@@ -432,10 +356,6 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
} else {
resState.Status = v1alpha1.SyncStatusCodeSynced
}
// we can't say anything about the status if we were unable to get the target objects
if failedToLoadObjs {
resState.Status = v1alpha1.SyncStatusCodeUnknown
}
managedResources[i] = managedResource{
Name: resState.Name,
Namespace: resState.Namespace,
@@ -464,7 +384,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
syncStatus.Revision = manifestInfo.Revision
}
healthStatus, err := health.SetApplicationHealth(resourceSummaries, GetLiveObjs(managedResources), resourceOverrides, func(obj *unstructured.Unstructured) bool {
healthStatus, err := health.SetApplicationHealth(resourceSummaries, GetLiveObjs(managedResources), m.settings.ResourceOverrides, func(obj *unstructured.Unstructured) bool {
return !isSelfReferencedApp(app, kubeutil.GetObjectRef(obj))
})
@@ -473,23 +393,19 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
}
compRes := comparisonResult{
reconciledAt: observedAt,
syncStatus: &syncStatus,
healthStatus: healthStatus,
resources: resourceSummaries,
managedResources: managedResources,
conditions: conditions,
hooks: hooks,
diffNormalizer: diffNormalizer,
}
if manifestInfo != nil {
compRes.appSourceType = v1alpha1.ApplicationSourceType(manifestInfo.SourceType)
}
app.Status.SetConditions(conditions, map[appv1.ApplicationConditionType]bool{
appv1.ApplicationConditionComparisonError: true,
appv1.ApplicationConditionSharedResourceWarning: true,
appv1.ApplicationConditionRepeatedResourceWarning: true,
appv1.ApplicationConditionExcludedResourceWarning: true,
})
return &compRes
return &compRes, nil
}
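
One side of this diff has CompareAppState degrade gracefully (Unknown sync/health status plus a comparison-error condition) while the other simply returns the error. Under the returning-error contract, caller-side handling would look roughly like the sketch below; the surrounding controller code is assumed:

compRes, err := m.CompareAppState(app, revision, app.Spec.Source, noCache, nil)
if err != nil {
	// Record the failure as a condition rather than guessing at a status;
	// SetConditions replaces only the listed condition types.
	app.Status.SetConditions([]v1alpha1.ApplicationCondition{{
		Type:    v1alpha1.ApplicationConditionComparisonError,
		Message: err.Error(),
	}}, map[v1alpha1.ApplicationConditionType]bool{
		v1alpha1.ApplicationConditionComparisonError: true,
	})
	return
}
_ = compRes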
func (m *appStateManager) persistRevisionHistory(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource) error {
@@ -524,10 +440,10 @@ func (m *appStateManager) persistRevisionHistory(app *v1alpha1.Application, revi
func NewAppStateManager(
db db.ArgoDB,
appclientset appclientset.Interface,
repoClientset apiclient.Clientset,
repoClientset reposerver.Clientset,
namespace string,
kubectl kubeutil.Kubectl,
settingsMgr *settings.SettingsManager,
settings *settings.ArgoCDSettings,
liveStateCache statecache.LiveStateCache,
projInformer cache.SharedIndexInformer,
metricsServer *metrics.MetricsServer,
@@ -539,7 +455,7 @@ func NewAppStateManager(
kubectl: kubectl,
repoClientset: repoClientset,
namespace: namespace,
settingsMgr: settingsMgr,
settings: settings,
projInformer: projInformer,
metricsServer: metricsServer,
}

View File

@@ -12,7 +12,7 @@ import (
"github.com/argoproj/argo-cd/common"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util/kube"
)
@@ -21,7 +21,7 @@ import (
func TestCompareAppStateEmpty(t *testing.T) {
app := newFakeApp()
data := fakeData{
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -30,21 +30,21 @@ func TestCompareAppStateEmpty(t *testing.T) {
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Len(t, compRes.resources, 0)
assert.Len(t, compRes.managedResources, 0)
assert.Len(t, app.Status.Conditions, 0)
assert.Equal(t, 0, len(compRes.resources))
assert.Equal(t, 0, len(compRes.managedResources))
assert.Equal(t, 0, len(compRes.conditions))
}
// TestCompareAppStateMissing tests when there is a manifest defined in the repo which doesn't exist in live
// TestCompareAppStateMissing tests when there is a manifest defined in git which doesn't exist in live
func TestCompareAppStateMissing(t *testing.T) {
app := newFakeApp()
data := fakeData{
apps: []runtime.Object{app},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{string(test.PodManifest)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -53,13 +53,13 @@ func TestCompareAppStateMissing(t *testing.T) {
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
assert.Equal(t, argoappv1.SyncStatusCodeOutOfSync, compRes.syncStatus.Status)
assert.Len(t, compRes.resources, 1)
assert.Len(t, compRes.managedResources, 1)
assert.Len(t, app.Status.Conditions, 0)
assert.Equal(t, 1, len(compRes.resources))
assert.Equal(t, 1, len(compRes.managedResources))
assert.Equal(t, 0, len(compRes.conditions))
}
// TestCompareAppStateExtra tests when there is an extra object in live but not defined in git
@@ -69,7 +69,7 @@ func TestCompareAppStateExtra(t *testing.T) {
app := newFakeApp()
key := kube.ResourceKey{Group: "", Kind: "Pod", Namespace: test.FakeDestNamespace, Name: app.Name}
data := fakeData{
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -80,12 +80,13 @@ func TestCompareAppStateExtra(t *testing.T) {
},
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeOutOfSync, compRes.syncStatus.Status)
assert.Equal(t, 1, len(compRes.resources))
assert.Equal(t, 1, len(compRes.managedResources))
assert.Equal(t, 0, len(app.Status.Conditions))
assert.Equal(t, 0, len(compRes.conditions))
}
// TestCompareAppStateHook checks that hooks are detected during manifest generation, and not
@@ -97,7 +98,7 @@ func TestCompareAppStateHook(t *testing.T) {
app := newFakeApp()
data := fakeData{
apps: []runtime.Object{app},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{string(podBytes)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -106,13 +107,14 @@ func TestCompareAppStateHook(t *testing.T) {
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Equal(t, 0, len(compRes.resources))
assert.Equal(t, 0, len(compRes.managedResources))
assert.Equal(t, 1, len(compRes.hooks))
assert.Equal(t, 0, len(app.Status.Conditions))
assert.Equal(t, 0, len(compRes.conditions))
}
// checks that ignored resources are detected, but excluded from status
@@ -122,7 +124,7 @@ func TestCompareAppStateCompareOptionIgnoreExtraneous(t *testing.T) {
app := newFakeApp()
data := fakeData{
apps: []runtime.Object{app},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -132,13 +134,14 @@ func TestCompareAppStateCompareOptionIgnoreExtraneous(t *testing.T) {
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Len(t, compRes.resources, 0)
assert.Len(t, compRes.managedResources, 0)
assert.Len(t, app.Status.Conditions, 0)
assert.Len(t, compRes.conditions, 0)
}
// TestCompareAppStateExtraHook tests when there is an extra _hook_ object in live but not defined in git
@@ -149,7 +152,7 @@ func TestCompareAppStateExtraHook(t *testing.T) {
app := newFakeApp()
key := kube.ResourceKey{Group: "", Kind: "Pod", Namespace: test.FakeDestNamespace, Name: app.Name}
data := fakeData{
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -160,14 +163,14 @@ func TestCompareAppStateExtraHook(t *testing.T) {
},
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Equal(t, 1, len(compRes.resources))
assert.Equal(t, 1, len(compRes.managedResources))
assert.Equal(t, 0, len(compRes.hooks))
assert.Equal(t, 0, len(app.Status.Conditions))
assert.Equal(t, 0, len(compRes.conditions))
}
func toJSON(t *testing.T, obj *unstructured.Unstructured) string {
@@ -185,7 +188,7 @@ func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
app := newFakeApp()
data := fakeData{
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{toJSON(t, obj1), toJSON(t, obj2), toJSON(t, obj3)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -197,10 +200,10 @@ func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
},
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Contains(t, app.Status.Conditions, argoappv1.ApplicationCondition{
assert.Contains(t, compRes.conditions, argoappv1.ApplicationCondition{
Message: "Resource /Pod/fake-dest-ns/my-pod appeared 2 times among application resources.",
Type: argoappv1.ApplicationConditionRepeatedResourceWarning,
})
@@ -237,7 +240,7 @@ func TestSetHealth(t *testing.T) {
})
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -248,7 +251,8 @@ func TestSetHealth(t *testing.T) {
},
})
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
assert.NoError(t, err)
assert.Equal(t, compRes.healthStatus.Status, argoappv1.HealthStatusHealthy)
}
@@ -268,7 +272,7 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
})
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -280,114 +284,8 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
},
})
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
assert.NoError(t, err)
assert.Equal(t, compRes.healthStatus.Status, argoappv1.HealthStatusHealthy)
}
func TestSetManagedResourcesWithOrphanedResources(t *testing.T) {
proj := defaultProj.DeepCopy()
proj.Spec.OrphanedResources = &argoappv1.OrphanedResourcesMonitorSettings{}
app := newFakeApp()
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app.Namespace, "guestbook"): {
ResourceNode: argoappv1.ResourceNode{
ResourceRef: argoappv1.ResourceRef{Kind: kube.DeploymentKind, Name: "guestbook", Namespace: app.Namespace},
},
AppName: "",
},
},
})
tree, err := ctrl.setAppManagedResources(app, &comparisonResult{managedResources: make([]managedResource, 0)})
assert.NoError(t, err)
assert.Equal(t, len(tree.OrphanedNodes), 1)
assert.Equal(t, "guestbook", tree.OrphanedNodes[0].Name)
assert.Equal(t, app.Namespace, tree.OrphanedNodes[0].Namespace)
}
func TestSetManagedResourcesWithResourcesOfAnotherApp(t *testing.T) {
proj := defaultProj.DeepCopy()
proj.Spec.OrphanedResources = &argoappv1.OrphanedResourcesMonitorSettings{}
app1 := newFakeApp()
app1.Name = "app1"
app2 := newFakeApp()
app2.Name = "app2"
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app1, app2, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app2.Namespace, "guestbook"): {
ResourceNode: argoappv1.ResourceNode{
ResourceRef: argoappv1.ResourceRef{Kind: kube.DeploymentKind, Name: "guestbook", Namespace: app2.Namespace},
},
AppName: "app2",
},
},
})
tree, err := ctrl.setAppManagedResources(app1, &comparisonResult{managedResources: make([]managedResource, 0)})
assert.NoError(t, err)
assert.Equal(t, len(tree.OrphanedNodes), 0)
}
func TestReturnUnknownComparisonStateOnSettingLoadError(t *testing.T) {
proj := defaultProj.DeepCopy()
proj.Spec.OrphanedResources = &argoappv1.OrphanedResourcesMonitorSettings{}
app := newFakeApp()
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, proj},
configMapData: map[string]string{
"resource.customizations": "invalid setting",
},
})
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
assert.Equal(t, argoappv1.HealthStatusUnknown, compRes.healthStatus.Status)
assert.Equal(t, argoappv1.SyncStatusCodeUnknown, compRes.syncStatus.Status)
}
func TestSetManagedResourcesKnownOrphanedResourceExceptions(t *testing.T) {
proj := defaultProj.DeepCopy()
proj.Spec.OrphanedResources = &argoappv1.OrphanedResourcesMonitorSettings{}
app := newFakeApp()
app.Namespace = "default"
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app.Namespace, "guestbook"): {
ResourceNode: argoappv1.ResourceNode{ResourceRef: argoappv1.ResourceRef{Group: "apps", Kind: kube.DeploymentKind, Name: "guestbook", Namespace: app.Namespace}},
},
kube.NewResourceKey("", kube.ServiceAccountKind, app.Namespace, "default"): {
ResourceNode: argoappv1.ResourceNode{ResourceRef: argoappv1.ResourceRef{Kind: kube.ServiceAccountKind, Name: "default", Namespace: app.Namespace}},
},
kube.NewResourceKey("", kube.ServiceKind, app.Namespace, "kubernetes"): {
ResourceNode: argoappv1.ResourceNode{ResourceRef: argoappv1.ResourceRef{Kind: kube.ServiceAccountKind, Name: "kubernetes", Namespace: app.Namespace}},
},
},
})
tree, err := ctrl.setAppManagedResources(app, &comparisonResult{managedResources: make([]managedResource, 0)})
assert.NoError(t, err)
assert.Len(t, tree.OrphanedNodes, 1)
assert.Equal(t, "guestbook", tree.OrphanedNodes[0].Name)
}
func Test_comparisonResult_obs(t *testing.T) {
assert.Len(t, (&comparisonResult{}).targetObjs(), 0)
assert.Len(t, (&comparisonResult{managedResources: []managedResource{{}}}).targetObjs(), 0)
assert.Len(t, (&comparisonResult{managedResources: []managedResource{{Target: test.NewPod()}}}).targetObjs(), 1)
assert.Len(t, (&comparisonResult{hooks: []*unstructured.Unstructured{{}}}).targetObjs(), 1)
}

View File

@@ -7,17 +7,11 @@ import (
"sort"
"strings"
"sync"
"sync/atomic"
"time"
log "github.com/sirupsen/logrus"
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
"k8s.io/apimachinery/pkg/api/errors"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/discovery"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
@@ -30,33 +24,25 @@ import (
"github.com/argoproj/argo-cd/util/health"
"github.com/argoproj/argo-cd/util/hook"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/rand"
"github.com/argoproj/argo-cd/util/resource"
)
const (
crdReadinessTimeout = time.Duration(3) * time.Second
)
var syncIdPrefix uint64 = 0
type syncContext struct {
resourceOverrides map[string]v1alpha1.ResourceOverride
appName string
proj *v1alpha1.AppProject
compareResult *comparisonResult
config *rest.Config
dynamicIf dynamic.Interface
disco discovery.DiscoveryInterface
extensionsclientset *clientset.Clientset
kubectl kube.Kubectl
namespace string
server string
syncOp *v1alpha1.SyncOperation
syncRes *v1alpha1.SyncOperationResult
syncResources []v1alpha1.SyncOperationResource
opState *v1alpha1.OperationState
log *log.Entry
resourceOverrides map[string]v1alpha1.ResourceOverride
appName string
proj *v1alpha1.AppProject
compareResult *comparisonResult
config *rest.Config
dynamicIf dynamic.Interface
disco discovery.DiscoveryInterface
kubectl kube.Kubectl
namespace string
server string
syncOp *v1alpha1.SyncOperation
syncRes *v1alpha1.SyncOperationResult
syncResources []v1alpha1.SyncOperationResource
opState *v1alpha1.OperationState
log *log.Entry
// lock to protect concurrent updates of the result list
lock sync.Mutex
}
@@ -106,13 +92,21 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
revision = syncOp.Revision
}
compareResult := m.CompareAppState(app, revision, source, false, syncOp.Manifests)
compareResult, err := m.CompareAppState(app, revision, source, false, syncOp.Manifests)
if err != nil {
state.Phase = v1alpha1.OperationError
state.Message = err.Error()
return
}
// If there are any comparison or spec error conditions, do not perform the operation
if errConditions := app.Status.GetConditions(map[v1alpha1.ApplicationConditionType]bool{
v1alpha1.ApplicationConditionComparisonError: true,
v1alpha1.ApplicationConditionInvalidSpecError: true,
}); len(errConditions) > 0 {
// If there are any error conditions, do not perform the operation
errConditions := make([]v1alpha1.ApplicationCondition, 0)
for i := range compareResult.conditions {
if compareResult.conditions[i].IsError() {
errConditions = append(errConditions, compareResult.conditions[i])
}
}
if len(errConditions) > 0 {
state.Phase = v1alpha1.OperationError
state.Message = argo.FormatAppConditions(errConditions)
return
@@ -143,13 +137,6 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
return
}
extensionsclientset, err := clientset.NewForConfig(restConfig)
if err != nil {
state.Phase = v1alpha1.OperationError
state.Message = fmt.Sprintf("Failed to initialize extensions client: %v", err)
return
}
proj, err := argo.GetAppProject(&app.Spec, listersv1alpha1.NewAppProjectLister(m.projInformer.GetIndexer()), m.namespace)
if err != nil {
state.Phase = v1alpha1.OperationError
@@ -157,43 +144,31 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
return
}
resourceOverrides, err := m.settingsMgr.GetResourceOverrides()
if err != nil {
state.Phase = v1alpha1.OperationError
state.Message = fmt.Sprintf("Failed to load resource overrides: %v", err)
return
}
atomic.AddUint64(&syncIdPrefix, 1)
syncId := fmt.Sprintf("%05d-%s", syncIdPrefix, rand.RandString(5))
syncCtx := syncContext{
resourceOverrides: resourceOverrides,
appName: app.Name,
proj: proj,
compareResult: compareResult,
config: restConfig,
dynamicIf: dynamicIf,
disco: disco,
extensionsclientset: extensionsclientset,
kubectl: m.kubectl,
namespace: app.Spec.Destination.Namespace,
server: app.Spec.Destination.Server,
syncOp: &syncOp,
syncRes: syncRes,
syncResources: syncResources,
opState: state,
log: log.WithFields(log.Fields{"application": app.Name, "syncId": syncId}),
resourceOverrides: m.settings.ResourceOverrides,
appName: app.Name,
proj: proj,
compareResult: compareResult,
config: restConfig,
dynamicIf: dynamicIf,
disco: disco,
kubectl: m.kubectl,
namespace: app.Spec.Destination.Namespace,
server: app.Spec.Destination.Server,
syncOp: &syncOp,
syncRes: syncRes,
syncResources: syncResources,
opState: state,
log: log.WithFields(log.Fields{"application": app.Name}),
}
start := time.Now()
if state.Phase == v1alpha1.OperationTerminating {
syncCtx.terminate()
} else {
syncCtx.sync()
}
syncCtx.log.WithField("duration", time.Since(start)).Info("sync/terminate complete")
syncCtx.log.Info("sync/terminate complete")
if !syncOp.DryRun && !syncCtx.isSelectiveSync() && syncCtx.opState.Phase.Successful() {
err := m.persistRevisionHistory(app, compareResult.syncStatus.Revision, source)
@@ -206,8 +181,8 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
// sync performs the actual apply or hook-based sync
func (sc *syncContext) sync() {
sc.log.WithFields(log.Fields{"isSelectiveSync": sc.isSelectiveSync(), "skipHooks": sc.skipHooks(), "started": sc.started()}).Info("syncing")
tasks, ok := sc.getSyncTasks()
if !ok {
tasks, successful := sc.getSyncTasks()
if !successful {
sc.setOperationPhase(v1alpha1.OperationFailed, "one or more synchronization tasks are not valid")
return
}
@@ -222,7 +197,7 @@ func (sc *syncContext) sync() {
// the dry-run for this operation, is if the resource or hook list is empty.
if !sc.started() {
sc.log.Debug("dry-run")
if sc.runTasks(tasks, true) == failed {
if !sc.runTasks(tasks, true) {
sc.setOperationPhase(v1alpha1.OperationFailed, "one or more objects failed to apply (dry run)")
return
}
@@ -239,9 +214,9 @@ func (sc *syncContext) sync() {
sc.setResourceResult(task, "", operationState, message)
// maybe delete the hook
if task.needsDeleting() {
if enforceHookDeletePolicy(task.liveObj, task.operationState) {
err := sc.deleteResource(task)
if err != nil && !errors.IsNotFound(err) {
if err != nil {
sc.setResourceResult(task, "", v1alpha1.OperationError, fmt.Sprintf("failed to delete resource: %v", err))
}
}
@@ -265,27 +240,21 @@ func (sc *syncContext) sync() {
}
}
// if (a) we are multi-step and we have any running tasks,
// or (b) there are any running hooks,
// then wait...
multiStep := tasks.multiStep()
if tasks.Any(func(t *syncTask) bool { return (multiStep || t.isHook()) && t.running() }) {
// any running tasks, let's wait...
if tasks.Find(func(t *syncTask) bool { return t.running() }) != nil {
sc.setOperationPhase(v1alpha1.OperationRunning, "one or more tasks are running")
return
}
// syncFailTasks only run during failure, so separate them from regular tasks
syncFailTasks, tasks := tasks.Split(func(t *syncTask) bool { return t.phase == v1alpha1.SyncPhaseSyncFail })
// if there are any completed but unsuccessful tasks, sync is a failure.
if tasks.Any(func(t *syncTask) bool { return t.completed() && !t.successful() }) {
sc.setOperationFailed(syncFailTasks, "one or more synchronization tasks completed unsuccessfully")
if tasks.Find(func(t *syncTask) bool { return t.completed() && !t.successful() }) != nil {
sc.setOperationPhase(v1alpha1.OperationFailed, "one or more synchronization tasks completed unsuccessfully")
return
}
sc.log.WithFields(log.Fields{"tasks": tasks}).Debug("filtering out non-pending tasks")
sc.log.WithFields(log.Fields{"tasks": tasks}).Debug("filtering out completed tasks")
// remove tasks that are completed, we can assume that there are no running tasks
tasks = tasks.Filter(func(t *syncTask) bool { return t.pending() })
tasks = tasks.Filter(func(t *syncTask) bool { return !t.completed() })
// If no sync tasks were generated (e.g., in case all application manifests have been removed),
// the sync operation is successful.
@@ -301,43 +270,23 @@ func (sc *syncContext) sync() {
// if it is the last phase/wave and the only remaining tasks are non-hooks, then we are successful,
// EVEN if those objects subsequently degrade.
// This handles the common case where neither hooks nor waves are used and a sync equates to simply an (asynchronous) kubectl apply of manifests, which succeeds immediately.
complete := !tasks.Any(func(t *syncTask) bool { return t.phase != phase || wave != t.wave() || t.isHook() })
complete := tasks.Find(func(t *syncTask) bool { return t.phase != phase || wave != t.wave() || t.isHook() }) == nil
sc.log.WithFields(log.Fields{"phase": phase, "wave": wave, "tasks": tasks, "syncFailTasks": syncFailTasks}).Debug("filtering tasks in correct phase and wave")
sc.log.WithFields(log.Fields{"phase": phase, "wave": wave, "tasks": tasks}).Debug("filtering tasks in correct phase and wave")
tasks = tasks.Filter(func(t *syncTask) bool { return t.phase == phase && t.wave() == wave })
sc.setOperationPhase(v1alpha1.OperationRunning, "one or more tasks are running")
sc.log.WithFields(log.Fields{"tasks": tasks}).Debug("wet-run")
runState := sc.runTasks(tasks, false)
switch runState {
case failed:
sc.setOperationFailed(syncFailTasks, "one or more objects failed to apply")
case successful:
if !sc.runTasks(tasks, false) {
sc.setOperationPhase(v1alpha1.OperationFailed, "one or more objects failed to apply")
} else {
if complete {
sc.setOperationPhase(v1alpha1.OperationSucceeded, "successfully synced (all tasks run)")
}
}
}
func (sc *syncContext) setOperationFailed(syncFailTasks syncTasks, message string) {
if len(syncFailTasks) > 0 {
// if all the failure hooks are completed, don't run them again, and mark the sync as failed
if syncFailTasks.All(func(task *syncTask) bool { return task.completed() }) {
sc.setOperationPhase(v1alpha1.OperationFailed, message)
return
}
// otherwise, we need to start the failure hooks, and then return without setting
// the phase, so we make sure we have at least one more sync
sc.log.WithFields(log.Fields{"syncFailTasks": syncFailTasks}).Debug("running sync fail tasks")
if sc.runTasks(syncFailTasks, false) == failed {
sc.setOperationPhase(v1alpha1.OperationFailed, message)
}
} else {
sc.setOperationPhase(v1alpha1.OperationFailed, message)
}
}
func (sc *syncContext) started() bool {
return len(sc.syncRes.Resources) > 0
}
@@ -386,7 +335,7 @@ func (sc *syncContext) getSyncTasks() (_ syncTasks, successful bool) {
for _, resource := range sc.compareResult.managedResources {
if !sc.containsResource(resource) {
sc.log.WithFields(log.Fields{"group": resource.Group, "kind": resource.Kind, "name": resource.Name}).
log.WithFields(log.Fields{"group": resource.Group, "kind": resource.Kind, "name": resource.Name}).
Debug("skipping")
continue
}
@@ -395,7 +344,7 @@ func (sc *syncContext) getSyncTasks() (_ syncTasks, successful bool) {
// this creates garbage tasks
if hook.IsHook(obj) {
sc.log.WithFields(log.Fields{"group": obj.GroupVersionKind().Group, "kind": obj.GetKind(), "namespace": obj.GetNamespace(), "name": obj.GetName()}).
log.WithFields(log.Fields{"group": obj.GroupVersionKind().Group, "kind": obj.GetKind(), "namespace": obj.GetNamespace(), "name": obj.GetName()}).
Debug("skipping hook")
continue
}
@@ -525,32 +474,12 @@ func (sc *syncContext) setOperationPhase(phase v1alpha1.OperationPhase, message
sc.opState.Message = message
}
// ensureCRDReady waits until the specified CRD is ready (its Established condition is true). The method is best effort: it polls up to crdReadinessTimeout and does not fail even if the CRD never becomes ready.
func (sc *syncContext) ensureCRDReady(name string) {
_ = wait.PollImmediate(time.Duration(100)*time.Millisecond, crdReadinessTimeout, func() (bool, error) {
crd, err := sc.extensionsclientset.ApiextensionsV1beta1().CustomResourceDefinitions().Get(name, metav1.GetOptions{})
if err != nil {
return false, err
}
for _, condition := range crd.Status.Conditions {
if condition.Type == v1beta1.Established {
return condition.Status == v1beta1.ConditionTrue, nil
}
}
return false, nil
})
}
// applyObject performs a `kubectl apply` of a single resource
func (sc *syncContext) applyObject(targetObj *unstructured.Unstructured, dryRun bool, force bool) (v1alpha1.ResultCode, string) {
validate := !resource.HasAnnotationOption(targetObj, common.AnnotationSyncOptions, "Validate=false")
message, err := sc.kubectl.ApplyResource(sc.config, targetObj, targetObj.GetNamespace(), dryRun, force, validate)
message, err := sc.kubectl.ApplyResource(sc.config, targetObj, targetObj.GetNamespace(), dryRun, force)
if err != nil {
return v1alpha1.ResultCodeSyncFailed, err.Error()
}
if kube.IsCRD(targetObj) && !dryRun {
sc.ensureCRDReady(targetObj.GetName())
}
return v1alpha1.ResultCodeSynced, message
}
@@ -578,13 +507,13 @@ func (sc *syncContext) pruneObject(liveObj *unstructured.Unstructured, prune, dr
}
func (sc *syncContext) hasCRDOfGroupKind(group string, kind string) bool {
for _, obj := range sc.compareResult.targetObjs() {
if kube.IsCRD(obj) {
crdGroup, ok, err := unstructured.NestedString(obj.Object, "spec", "group")
for _, res := range sc.compareResult.managedResources {
if res.Target != nil && kube.IsCRD(res.Target) {
crdGroup, ok, err := unstructured.NestedString(res.Target.Object, "spec", "group")
if err != nil || !ok {
continue
}
crdKind, ok, err := unstructured.NestedString(obj.Object, "spec", "names", "kind")
crdKind, ok, err := unstructured.NestedString(res.Target.Object, "spec", "names", "kind")
if err != nil || !ok {
continue
}
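For reference, a standalone sketch of the nested fields this lookup reads from a CRD manifest; the group and kind values here are illustrative:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	// Minimal CRD object shaped the way hasCRDOfGroupKind inspects it.
	crd := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "apiextensions.k8s.io/v1beta1",
		"kind":       "CustomResourceDefinition",
		"spec": map[string]interface{}{
			"group": "argoproj.io",
			"names": map[string]interface{}{"kind": "Rollout"},
		},
	}}
	// NestedString returns (value, found, error); a missing field is not an error.
	group, _, _ := unstructured.NestedString(crd.Object, "spec", "group")
	kind, _, _ := unstructured.NestedString(crd.Object, "spec", "names", "kind")
	fmt.Println(group, kind) // argoproj.io Rollout
}
```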
@@ -623,25 +552,17 @@ func (sc *syncContext) terminate() {
}
func (sc *syncContext) deleteResource(task *syncTask) error {
sc.log.WithFields(log.Fields{"task": task}).Debug("deleting resource")
resIf, err := sc.getResourceIf(task)
sc.log.WithFields(log.Fields{"task": task}).Debug("deleting task")
apiResource, err := kube.ServerResourceForGroupVersionKind(sc.disco, task.groupVersionKind())
if err != nil {
return err
}
resource := kube.ToGroupVersionResource(task.groupVersionKind().GroupVersion().String(), apiResource)
resIf := kube.ToResourceInterface(sc.dynamicIf, apiResource, resource, task.namespace())
propagationPolicy := metav1.DeletePropagationForeground
return resIf.Delete(task.name(), &metav1.DeleteOptions{PropagationPolicy: &propagationPolicy})
}
func (sc *syncContext) getResourceIf(task *syncTask) (dynamic.ResourceInterface, error) {
apiResource, err := kube.ServerResourceForGroupVersionKind(sc.disco, task.groupVersionKind())
if err != nil {
return nil, err
}
res := kube.ToGroupVersionResource(task.groupVersionKind().GroupVersion().String(), apiResource)
resIf := kube.ToResourceInterface(sc.dynamicIf, apiResource, res, task.namespace())
return resIf, err
}
var operationPhases = map[v1alpha1.ResultCode]v1alpha1.OperationPhase{
v1alpha1.ResultCodeSynced: v1alpha1.OperationRunning,
v1alpha1.ResultCodeSyncFailed: v1alpha1.OperationFailed,
@@ -649,22 +570,13 @@ var operationPhases = map[v1alpha1.ResultCode]v1alpha1.OperationPhase{
v1alpha1.ResultCodePruneSkipped: v1alpha1.OperationSucceeded,
}
// tri-state
type runState = int
const (
successful = iota
pending
failed
)
func (sc *syncContext) runTasks(tasks syncTasks, dryRun bool) runState {
func (sc *syncContext) runTasks(tasks syncTasks, dryRun bool) bool {
dryRun = dryRun || sc.syncOp.DryRun
sc.log.WithFields(log.Fields{"numTasks": len(tasks), "dryRun": dryRun}).Debug("running tasks")
runState := successful
successful := true
var createTasks syncTasks
var pruneTasks syncTasks
@@ -675,92 +587,60 @@ func (sc *syncContext) runTasks(tasks syncTasks, dryRun bool) runState {
createTasks = append(createTasks, task)
}
}
// prune first
{
var wg sync.WaitGroup
for _, task := range pruneTasks {
wg.Add(1)
var wg sync.WaitGroup
for _, task := range pruneTasks {
wg.Add(1)
go func(t *syncTask) {
defer wg.Done()
sc.log.WithFields(log.Fields{"dryRun": dryRun, "task": t}).Debug("pruning")
result, message := sc.pruneObject(t.liveObj, sc.syncOp.Prune, dryRun)
if result == v1alpha1.ResultCodeSyncFailed {
successful = false
}
if !dryRun || result == v1alpha1.ResultCodeSyncFailed {
sc.setResourceResult(t, result, operationPhases[result], message)
}
}(task)
}
wg.Wait()
processCreateTasks := func(tasks syncTasks) {
var createWg sync.WaitGroup
for _, task := range tasks {
if dryRun && task.skipDryRun {
continue
}
createWg.Add(1)
go func(t *syncTask) {
defer wg.Done()
sc.log.WithFields(log.Fields{"dryRun": dryRun, "task": t}).Debug("pruning")
result, message := sc.pruneObject(t.liveObj, sc.syncOp.Prune, dryRun)
defer createWg.Done()
sc.log.WithFields(log.Fields{"dryRun": dryRun, "task": t}).Debug("applying")
result, message := sc.applyObject(t.targetObj, dryRun, sc.syncOp.SyncStrategy.Force())
if result == v1alpha1.ResultCodeSyncFailed {
runState = failed
successful = false
}
if !dryRun || result == v1alpha1.ResultCodeSyncFailed {
sc.setResourceResult(t, result, operationPhases[result], message)
}
}(task)
}
wg.Wait()
createWg.Wait()
}
// delete anything that needs deleting
if runState == successful && createTasks.Any(func(t *syncTask) bool { return t.needsDeleting() }) {
var wg sync.WaitGroup
for _, task := range createTasks.Filter(func(t *syncTask) bool { return t.needsDeleting() }) {
wg.Add(1)
go func(t *syncTask) {
defer wg.Done()
sc.log.WithFields(log.Fields{"dryRun": dryRun, "task": t}).Debug("deleting")
if !dryRun {
err := sc.deleteResource(t)
if err != nil {
// it is possible to get a race condition here, such that the resource does not exist when
// delete is requested; we treat this as a no-op
if !apierr.IsNotFound(err) {
runState = failed
sc.setResourceResult(t, "", v1alpha1.OperationError, fmt.Sprintf("failed to delete resource: %v", err))
}
} else {
// if there is anything that needs deleting, we are at best now in pending and
// want to return and wait for sync to be invoked again
runState = pending
}
}
}(task)
}
wg.Wait()
}
// finally create resources
if runState == successful {
processCreateTasks := func(tasks syncTasks) {
var createWg sync.WaitGroup
for _, task := range tasks {
if dryRun && task.skipDryRun {
continue
}
createWg.Add(1)
go func(t *syncTask) {
defer createWg.Done()
sc.log.WithFields(log.Fields{"dryRun": dryRun, "task": t}).Debug("applying")
result, message := sc.applyObject(t.targetObj, dryRun, sc.syncOp.SyncStrategy.Force())
if result == v1alpha1.ResultCodeSyncFailed {
runState = failed
}
if !dryRun || result == v1alpha1.ResultCodeSyncFailed {
sc.setResourceResult(t, result, operationPhases[result], message)
}
}(task)
}
createWg.Wait()
}
var tasksGroup syncTasks
for _, task := range createTasks {
// Only wait if the kind of the next task differs from the previous one
if len(tasksGroup) > 0 && tasksGroup[0].targetObj.GetKind() != task.kind() {
processCreateTasks(tasksGroup)
tasksGroup = syncTasks{task}
} else {
tasksGroup = append(tasksGroup, task)
}
}
if len(tasksGroup) > 0 {
var tasksGroup syncTasks
for _, task := range createTasks {
// Only wait if the kind of the next task differs from the previous one
if len(tasksGroup) > 0 && tasksGroup[0].targetObj.GetKind() != task.kind() {
processCreateTasks(tasksGroup)
tasksGroup = syncTasks{task}
} else {
tasksGroup = append(tasksGroup, task)
}
}
return runState
if len(tasksGroup) > 0 {
processCreateTasks(tasksGroup)
}
return successful
}
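The kind-based batching in `runTasks` is easy to miss in the diff. This standalone sketch, with hypothetical input kinds, reproduces the grouping rule: adjacent tasks of the same kind form one concurrent batch, and a kind change flushes the batch before the next one starts.

```go
package main

import "fmt"

func main() {
	kinds := []string{"Namespace", "CustomResourceDefinition", "CustomResourceDefinition", "Deployment"}
	var group []string
	flush := func() {
		if len(group) > 0 {
			fmt.Println("apply batch:", group) // each batch is applied concurrently
		}
	}
	for _, k := range kinds {
		// start a new batch whenever the kind changes from the previous task's kind
		if len(group) > 0 && group[0] != k {
			flush()
			group = []string{k}
		} else {
			group = append(group, k)
		}
	}
	flush()
	// Output:
	// apply batch: [Namespace]
	// apply batch: [CustomResourceDefinition CustomResourceDefinition]
	// apply batch: [Deployment]
}
```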
// setResourceResult sets a resource's details in the SyncResult.Resources list

View File

@@ -2,25 +2,46 @@ package controller
import (
"fmt"
"strings"
"github.com/argoproj/argo-cd/util/health"
wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1"
apiv1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/kubernetes/pkg/apis/batch"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
)
// enforceHookDeletePolicy examines the hook deletion policy of an object and reports whether it should be deleted based on the operation phase
func enforceHookDeletePolicy(hook *unstructured.Unstructured, operation v1alpha1.OperationPhase) bool {
annotations := hook.GetAnnotations()
if annotations == nil {
return false
}
deletePolicies := strings.Split(annotations[common.AnnotationKeyHookDeletePolicy], ",")
for _, dp := range deletePolicies {
policy := v1alpha1.HookDeletePolicy(strings.TrimSpace(dp))
if policy == v1alpha1.HookDeletePolicyHookSucceeded && operation == v1alpha1.OperationSucceeded {
return true
}
if policy == v1alpha1.HookDeletePolicyHookFailed && operation == v1alpha1.OperationFailed {
return true
}
}
return false
}
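To make the policy concrete, here is a minimal usage sketch, assuming it sits in this controller package alongside the function above (imports: `fmt`, `unstructured`, `v1alpha1`; the pod itself is illustrative):

```go
// Illustrative hook: asks to be cleaned up only after the operation succeeds.
func exampleEnforceHookDeletePolicy() {
	hook := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Pod",
		"metadata": map[string]interface{}{
			"name": "db-migrate",
			"annotations": map[string]interface{}{
				"argocd.argoproj.io/hook":               "PreSync",
				"argocd.argoproj.io/hook-delete-policy": "HookSucceeded",
			},
		},
	}}
	fmt.Println(enforceHookDeletePolicy(hook, v1alpha1.OperationSucceeded)) // true: delete the hook
	fmt.Println(enforceHookDeletePolicy(hook, v1alpha1.OperationFailed))    // false: keep it for debugging
}
```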
// getOperationPhase returns a hook status from a _live_ unstructured object
func getOperationPhase(hook *unstructured.Unstructured) (operation v1alpha1.OperationPhase, message string) {
gvk := hook.GroupVersionKind()
if isBatchJob(gvk) {
return getStatusFromBatchJob(hook)
} else if isArgoWorkflow(gvk) {
return health.GetStatusFromArgoWorkflow(hook)
return getStatusFromArgoWorkflow(hook)
} else if isPod(gvk) {
return getStatusFromPod(hook)
} else {
@@ -71,6 +92,26 @@ func isArgoWorkflow(gvk schema.GroupVersionKind) bool {
return gvk.Group == "argoproj.io" && gvk.Kind == "Workflow"
}
// TODO - should we move this to health.go?
func getStatusFromArgoWorkflow(hook *unstructured.Unstructured) (operation v1alpha1.OperationPhase, message string) {
var wf wfv1.Workflow
err := runtime.DefaultUnstructuredConverter.FromUnstructured(hook.Object, &wf)
if err != nil {
return v1alpha1.OperationError, err.Error()
}
switch wf.Status.Phase {
case wfv1.NodePending, wfv1.NodeRunning:
return v1alpha1.OperationRunning, wf.Status.Message
case wfv1.NodeSucceeded:
return v1alpha1.OperationSucceeded, wf.Status.Message
case wfv1.NodeFailed:
return v1alpha1.OperationFailed, wf.Status.Message
case wfv1.NodeError:
return v1alpha1.OperationError, wf.Status.Message
}
return v1alpha1.OperationSucceeded, wf.Status.Message
}
func isPod(gvk schema.GroupVersionKind) bool {
return gvk.Group == "" && gvk.Kind == "Pod"
}

View File

@@ -11,17 +11,13 @@ func syncPhases(obj *unstructured.Unstructured) []v1alpha1.SyncPhase {
if hook.Skip(obj) {
return nil
} else if hook.IsHook(obj) {
phasesMap := make(map[v1alpha1.SyncPhase]bool)
var phases []v1alpha1.SyncPhase
for _, hookType := range hook.Types(obj) {
switch hookType {
case v1alpha1.HookTypePreSync, v1alpha1.HookTypeSync, v1alpha1.HookTypePostSync, v1alpha1.HookTypeSyncFail:
phasesMap[v1alpha1.SyncPhase(hookType)] = true
case v1alpha1.HookTypePreSync, v1alpha1.HookTypeSync, v1alpha1.HookTypePostSync:
phases = append(phases, v1alpha1.SyncPhase(hookType))
}
}
var phases []v1alpha1.SyncPhase
for phase := range phasesMap {
phases = append(phases, phase)
}
return phases
} else {
return []v1alpha1.SyncPhase{v1alpha1.SyncPhaseSync}

View File

@@ -35,23 +35,12 @@ func TestSyncPhasePost(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhasePostSync}, syncPhases(pod("PostSync")))
}
func TestSyncPhaseFail(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhaseSyncFail}, syncPhases(pod("SyncFail")))
}
func TestSyncPhaseTwoPhases(t *testing.T) {
assert.ElementsMatch(t, []SyncPhase{SyncPhasePreSync, SyncPhasePostSync}, syncPhases(pod("PreSync,PostSync")))
}
func TestSyncDuplicatedPhases(t *testing.T) {
assert.ElementsMatch(t, []SyncPhase{SyncPhasePreSync}, syncPhases(pod("PreSync,PreSync")))
assert.ElementsMatch(t, []SyncPhase{SyncPhasePreSync}, syncPhases(podWithHelmHook("pre-install,pre-upgrade")))
assert.Equal(t, []SyncPhase{SyncPhasePreSync, SyncPhasePostSync}, syncPhases(pod("PreSync,PostSync")))
}
func pod(hookType string) *unstructured.Unstructured {
return test.Annotate(test.NewPod(), "argocd.argoproj.io/hook", hookType)
}
func podWithHelmHook(hookType string) *unstructured.Unstructured {
return test.Annotate(test.NewPod(), "helm.sh/hook", hookType)
pod := test.NewPod()
pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": hookType})
return pod
}

View File

@@ -2,13 +2,14 @@ package controller
import (
"fmt"
"strconv"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/hook"
"github.com/argoproj/argo-cd/util/resource/syncwaves"
)
// syncTask holds the live and target object. At least one should be non-nil. A targetObj of nil
@@ -52,7 +53,18 @@ func (t *syncTask) obj() *unstructured.Unstructured {
}
func (t *syncTask) wave() int {
return syncwaves.Wave(t.obj())
text := t.obj().GetAnnotations()[common.AnnotationSyncWave]
if text == "" {
return 0
}
val, err := strconv.Atoi(text)
if err != nil {
return 0
}
return val
}
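A small sketch of the ordering rule this encodes, assuming the controller package (imports: `fmt`, `unstructured`; the objects are illustrative):

```go
func exampleSyncTaskWave() {
	annotated := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Pod",
		"metadata": map[string]interface{}{
			"name":        "example",
			"annotations": map[string]interface{}{"argocd.argoproj.io/sync-wave": "3"},
		},
	}}
	plain := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Pod",
		"metadata":   map[string]interface{}{"name": "example"},
	}}
	fmt.Println((&syncTask{targetObj: annotated}).wave()) // 3
	fmt.Println((&syncTask{targetObj: plain}).wave())     // 0 (missing or unparsable values default to 0)
}
```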
func (t *syncTask) isHook() bool {
@@ -82,12 +94,8 @@ func (t *syncTask) namespace() string {
return t.obj().GetNamespace()
}
func (t *syncTask) pending() bool {
return t.operationState == ""
}
func (t *syncTask) running() bool {
return t.operationState.Running()
return t.operationState == v1alpha1.OperationRunning
}
func (t *syncTask) completed() bool {
@@ -98,10 +106,6 @@ func (t *syncTask) successful() bool {
return t.operationState.Successful()
}
func (t *syncTask) failed() bool {
return t.operationState.Failed()
}
func (t *syncTask) hookType() v1alpha1.HookType {
if t.isHook() {
return v1alpha1.HookType(t.phase)
@@ -109,22 +113,3 @@ func (t *syncTask) hookType() v1alpha1.HookType {
return ""
}
}
func (t *syncTask) hasHookDeletePolicy(policy v1alpha1.HookDeletePolicy) bool {
// cannot have a policy if it is not a hook, it is meaningless
if !t.isHook() {
return false
}
for _, p := range hook.DeletePolicies(t.obj()) {
if p == policy {
return true
}
}
return false
}
func (t *syncTask) needsDeleting() bool {
return t.liveObj != nil && (t.pending() && t.hasHookDeletePolicy(v1alpha1.HookDeletePolicyBeforeHookCreation) ||
t.successful() && t.hasHookDeletePolicy(v1alpha1.HookDeletePolicyHookSucceeded) ||
t.failed() && t.hasHookDeletePolicy(v1alpha1.HookDeletePolicyHookFailed))
}

View File

@@ -7,7 +7,7 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
. "github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/test"
)
func Test_syncTask_hookType(t *testing.T) {
@@ -20,10 +20,10 @@ func Test_syncTask_hookType(t *testing.T) {
fields fields
want HookType
}{
{"Empty", fields{SyncPhaseSync, NewPod()}, ""},
{"PreSyncHook", fields{SyncPhasePreSync, NewHook(HookTypePreSync)}, HookTypePreSync},
{"SyncHook", fields{SyncPhaseSync, NewHook(HookTypeSync)}, HookTypeSync},
{"PostSyncHook", fields{SyncPhasePostSync, NewHook(HookTypePostSync)}, HookTypePostSync},
{"Empty", fields{SyncPhaseSync, test.NewPod()}, ""},
{"PreSyncHook", fields{SyncPhasePreSync, test.NewHook(HookTypePreSync)}, HookTypePreSync},
{"SyncHook", fields{SyncPhaseSync, test.NewHook(HookTypeSync)}, HookTypeSync},
{"PostSyncHook", fields{SyncPhasePostSync, test.NewHook(HookTypePostSync)}, HookTypePostSync},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
@@ -36,31 +36,3 @@ func Test_syncTask_hookType(t *testing.T) {
})
}
}
func Test_syncTask_hasHookDeletePolicy(t *testing.T) {
assert.False(t, (&syncTask{targetObj: NewPod()}).hasHookDeletePolicy(HookDeletePolicyBeforeHookCreation))
assert.False(t, (&syncTask{targetObj: NewPod()}).hasHookDeletePolicy(HookDeletePolicyHookSucceeded))
assert.False(t, (&syncTask{targetObj: NewPod()}).hasHookDeletePolicy(HookDeletePolicyHookFailed))
// must be hook
assert.False(t, (&syncTask{targetObj: Annotate(NewPod(), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).hasHookDeletePolicy(HookDeletePolicyBeforeHookCreation))
assert.True(t, (&syncTask{targetObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).hasHookDeletePolicy(HookDeletePolicyBeforeHookCreation))
assert.True(t, (&syncTask{targetObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookSucceeded")}).hasHookDeletePolicy(HookDeletePolicyHookSucceeded))
assert.True(t, (&syncTask{targetObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookFailed")}).hasHookDeletePolicy(HookDeletePolicyHookFailed))
}
func Test_syncTask_needsDeleting(t *testing.T) {
assert.False(t, (&syncTask{liveObj: NewPod()}).needsDeleting())
// must be hook
assert.False(t, (&syncTask{liveObj: Annotate(NewPod(), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).needsDeleting())
// no need to delete if no live obj
assert.False(t, (&syncTask{targetObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).needsDeleting())
assert.True(t, (&syncTask{liveObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).needsDeleting())
assert.True(t, (&syncTask{liveObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).needsDeleting())
assert.True(t, (&syncTask{operationState: OperationSucceeded, liveObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookSucceeded")}).needsDeleting())
assert.True(t, (&syncTask{operationState: OperationFailed, liveObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookFailed")}).needsDeleting())
}
func Test_syncTask_wave(t *testing.T) {
assert.Equal(t, 0, (&syncTask{targetObj: NewPod()}).wave())
assert.Equal(t, 1, (&syncTask{targetObj: Annotate(NewPod(), "argocd.argoproj.io/sync-wave", "1")}).wave())
}

View File

@@ -11,7 +11,6 @@ var syncPhaseOrder = map[v1alpha1.SyncPhase]int{
v1alpha1.SyncPhasePreSync: -1,
v1alpha1.SyncPhaseSync: 0,
v1alpha1.SyncPhasePostSync: 1,
v1alpha1.SyncPhaseSyncFail: 2,
}
// kindOrder represents the correct order of Kubernetes resources within a manifest
@@ -106,35 +105,6 @@ func (s syncTasks) Filter(predicate func(task *syncTask) bool) (tasks syncTasks)
return tasks
}
func (s syncTasks) Split(predicate func(task *syncTask) bool) (trueTasks, falseTasks syncTasks) {
for _, task := range s {
if predicate(task) {
trueTasks = append(trueTasks, task)
} else {
falseTasks = append(falseTasks, task)
}
}
return trueTasks, falseTasks
}
func (s syncTasks) All(predicate func(task *syncTask) bool) bool {
for _, task := range s {
if !predicate(task) {
return false
}
}
return true
}
func (s syncTasks) Any(predicate func(task *syncTask) bool) bool {
for _, task := range s {
if predicate(task) {
return true
}
}
return false
}
func (s syncTasks) Find(predicate func(task *syncTask) bool) *syncTask {
for _, task := range s {
if predicate(task) {
@@ -165,21 +135,3 @@ func (s syncTasks) wave() int {
}
return 0
}
func (s syncTasks) lastPhase() v1alpha1.SyncPhase {
if len(s) > 0 {
return s[len(s)-1].phase
}
return ""
}
func (s syncTasks) lastWave() int {
if len(s) > 0 {
return s[len(s)-1].wave()
}
return 0
}
func (s syncTasks) multiStep() bool {
return s.wave() != s.lastWave() || s.phase() != s.lastPhase()
}

View File

@@ -8,9 +8,7 @@ import (
apiv1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/common"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
. "github.com/argoproj/argo-cd/test"
)
func Test_syncTasks_kindOrder(t *testing.T) {
@@ -24,39 +22,6 @@ func TestSortSyncTask(t *testing.T) {
assert.Equal(t, sortedTasks, unsortedTasks)
}
func TestAnySyncTasks(t *testing.T) {
res := unsortedTasks.Any(func(task *syncTask) bool {
return task.name() == "a"
})
assert.True(t, res)
res = unsortedTasks.Any(func(task *syncTask) bool {
return task.name() == "does-not-exist"
})
assert.False(t, res)
}
func TestAllSyncTasks(t *testing.T) {
res := unsortedTasks.All(func(task *syncTask) bool {
return task.name() != ""
})
assert.False(t, res)
res = unsortedTasks.All(func(task *syncTask) bool {
return task.name() == "a"
})
assert.False(t, res)
}
func TestSplitSyncTasks(t *testing.T) {
named, unnamed := sortedTasks.Split(func(task *syncTask) bool {
return task.name() != ""
})
assert.Equal(t, named, namedObjTasks)
assert.Equal(t, unnamed, unnamedTasks)
}
var unsortedTasks = syncTasks{
{
targetObj: &unstructured.Unstructured{
@@ -82,9 +47,6 @@ var unsortedTasks = syncTasks{
},
},
},
{
phase: SyncPhaseSyncFail, targetObj: &unstructured.Unstructured{},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
@@ -237,107 +199,6 @@ var sortedTasks = syncTasks{
phase: SyncPhasePostSync,
targetObj: &unstructured.Unstructured{},
},
{
phase: SyncPhaseSyncFail,
targetObj: &unstructured.Unstructured{},
},
}
var namedObjTasks = syncTasks{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "a",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "b",
},
},
},
},
}
var unnamedTasks = syncTasks{
{
phase: SyncPhasePreSync,
targetObj: &unstructured.Unstructured{},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "-1",
},
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "1",
},
},
},
},
},
{
phase: SyncPhasePostSync,
targetObj: &unstructured.Unstructured{},
},
{
phase: SyncPhaseSyncFail,
targetObj: &unstructured.Unstructured{},
},
}
func Test_syncTasks_Filter(t *testing.T) {
@@ -368,25 +229,3 @@ func TestSyncNamespaceAgainstCRD(t *testing.T) {
assert.Equal(t, syncTasks{namespace, crd}, unsorted)
}
func Test_syncTasks_multiStep(t *testing.T) {
t.Run("Single", func(t *testing.T) {
tasks := syncTasks{{liveObj: Annotate(NewPod(), common.AnnotationSyncWave, "-1"), phase: SyncPhaseSync}}
assert.Equal(t, SyncPhaseSync, tasks.phase())
assert.Equal(t, -1, tasks.wave())
assert.Equal(t, SyncPhaseSync, tasks.lastPhase())
assert.Equal(t, -1, tasks.lastWave())
assert.False(t, tasks.multiStep())
})
t.Run("Double", func(t *testing.T) {
tasks := syncTasks{
{liveObj: Annotate(NewPod(), common.AnnotationSyncWave, "-1"), phase: SyncPhasePreSync},
{liveObj: Annotate(NewPod(), common.AnnotationSyncWave, "1"), phase: SyncPhasePostSync},
}
assert.Equal(t, SyncPhasePreSync, tasks.phase())
assert.Equal(t, -1, tasks.wave())
assert.Equal(t, SyncPhasePostSync, tasks.lastPhase())
assert.Equal(t, 1, tasks.lastWave())
assert.True(t, tasks.multiStep())
})
}

View File

@@ -13,14 +13,13 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
fakedisco "k8s.io/client-go/discovery/fake"
"k8s.io/client-go/dynamic/fake"
"k8s.io/client-go/rest"
testcore "k8s.io/client-go/testing"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/kube/kubetest"
@@ -73,21 +72,10 @@ func newTestSyncCtx(resources ...*v1.APIResourceList) *syncContext {
disco: fakeDisco,
log: log.WithFields(log.Fields{"application": "fake-app"}),
}
sc.kubectl = &kubetest.MockKubectlCmd{}
sc.kubectl = kubetest.MockKubectlCmd{}
return &sc
}
func newManagedResource(live *unstructured.Unstructured) managedResource {
return managedResource{
Live: live,
Group: live.GroupVersionKind().Group,
Version: live.GroupVersionKind().Version,
Kind: live.GroupVersionKind().Kind,
Namespace: live.GetNamespace(),
Name: live.GetName(),
}
}
func TestSyncNotPermittedNamespace(t *testing.T) {
syncCtx := newTestSyncCtx()
targetPod := test.NewPod()
@@ -151,7 +139,7 @@ func TestSyncCreateNotWhitelistedClusterResources(t *testing.T) {
{Group: "argoproj.io", Kind: "*"},
}
syncCtx.kubectl = &kubetest.MockKubectlCmd{}
syncCtx.kubectl = kubetest.MockKubectlCmd{}
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
@@ -251,7 +239,7 @@ func TestSyncDeleteSuccessfully(t *testing.T) {
func TestSyncCreateFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
testSvc := test.NewService()
syncCtx.kubectl = &kubetest.MockKubectlCmd{
syncCtx.kubectl = kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{
testSvc.GetName(): {
Output: "",
@@ -274,7 +262,7 @@ func TestSyncCreateFailure(t *testing.T) {
func TestSyncPruneFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = &kubetest.MockKubectlCmd{
syncCtx.kubectl = kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{
"test-service": {
Output: "",
@@ -338,33 +326,6 @@ func TestDontPrunePruneFalse(t *testing.T) {
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
}
// make sure Validate=false means we don't validate
func TestSyncOptionValidate(t *testing.T) {
tests := []struct {
name string
annotationVal string
want bool
}{
{"Empty", "", true},
{"True", "Validate=true", true},
{"False", "Validate=false", false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationSyncOptions: tt.annotationVal})
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Target: pod, Live: pod}}}
syncCtx.sync()
kubectl, _ := syncCtx.kubectl.(*kubetest.MockKubectlCmd)
assert.Equal(t, tt.want, kubectl.LastValidate)
})
}
}
func TestSelectiveSyncOnly(t *testing.T) {
syncCtx := newTestSyncCtx()
pod1 := test.NewPod()
@@ -456,7 +417,7 @@ func TestPersistRevisionHistory(t *testing.T) {
}
data := fakeData{
apps: []runtime.Object{app, defaultProject},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -493,7 +454,7 @@ func TestPersistRevisionHistoryRollback(t *testing.T) {
}
data := fakeData{
apps: []runtime.Object{app, defaultProject},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -530,94 +491,6 @@ func TestPersistRevisionHistoryRollback(t *testing.T) {
assert.Equal(t, "abc123", updatedApp.Status.History[0].Revision)
}
func TestSyncFailureHookWithSuccessfulSync(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: test.NewPod()}},
hooks: []*unstructured.Unstructured{test.NewHook(HookTypeSyncFail)},
}
syncCtx.sync()
assert.Equal(t, OperationSucceeded, syncCtx.opState.Phase)
// only one result; we did not run the failure hook
assert.Len(t, syncCtx.syncRes.Resources, 1)
}
func TestSyncFailureHookWithFailedSync(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
pod := test.NewPod()
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: pod}},
hooks: []*unstructured.Unstructured{test.NewHook(HookTypeSyncFail)},
}
syncCtx.kubectl = &kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{pod.GetName(): {Err: fmt.Errorf("")}},
}
syncCtx.sync()
syncCtx.sync()
assert.Equal(t, OperationFailed, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 2)
}
func TestBeforeHookCreation(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
hook := test.Annotate(test.Annotate(test.NewPod(), common.AnnotationKeyHook, "Sync"), common.AnnotationKeyHookDeletePolicy, "BeforeHookCreation")
hook.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{newManagedResource(hook)},
hooks: []*unstructured.Unstructured{hook},
}
syncCtx.dynamicIf = fake.NewSimpleDynamicClient(runtime.NewScheme())
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Empty(t, syncCtx.syncRes.Resources[0].Message)
}
func TestRunSyncFailHooksFailed(t *testing.T) {
// Tests that other SyncFail Hooks run even if one of them fail.
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
pod := test.NewPod()
successfulSyncFailHook := test.NewHook(HookTypeSyncFail)
successfulSyncFailHook.SetName("successful-sync-fail-hook")
failedSyncFailHook := test.NewHook(HookTypeSyncFail)
failedSyncFailHook.SetName("failed-sync-fail-hook")
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: pod}},
hooks: []*unstructured.Unstructured{successfulSyncFailHook, failedSyncFailHook},
}
syncCtx.kubectl = &kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{
// Fail operation
pod.GetName(): {Err: fmt.Errorf("")},
// Fail a single SyncFail hook
failedSyncFailHook.GetName(): {Err: fmt.Errorf("")}},
}
syncCtx.sync()
syncCtx.sync()
fmt.Println(syncCtx.syncRes.Resources)
fmt.Println(syncCtx.opState.Phase)
// Operation as a whole should fail
assert.Equal(t, OperationFailed, syncCtx.opState.Phase)
// failedSyncFailHook should fail
assert.Equal(t, OperationFailed, syncCtx.syncRes.Resources[1].HookPhase)
assert.Equal(t, ResultCodeSyncFailed, syncCtx.syncRes.Resources[1].Status)
// successfulSyncFailHook should be synced and still running (it is an nginx pod)
assert.Equal(t, OperationRunning, syncCtx.syncRes.Resources[2].HookPhase)
assert.Equal(t, ResultCodeSynced, syncCtx.syncRes.Resources[2].Status)
}
func Test_syncContext_isSelectiveSync(t *testing.T) {
type fields struct {
compareResult *comparisonResult
@@ -688,12 +561,3 @@ func Test_syncContext_liveObj(t *testing.T) {
})
}
}
func Test_syncContext_hasCRDOfGroupKind(t *testing.T) {
// target
assert.False(t, (&syncContext{compareResult: &comparisonResult{managedResources: []managedResource{{Target: test.NewCRD()}}}}).hasCRDOfGroupKind("", ""))
assert.True(t, (&syncContext{compareResult: &comparisonResult{managedResources: []managedResource{{Target: test.NewCRD()}}}}).hasCRDOfGroupKind("argoproj.io", "TestCrd"))
// hook
assert.False(t, (&syncContext{compareResult: &comparisonResult{hooks: []*unstructured.Unstructured{test.NewCRD()}}}).hasCRDOfGroupKind("", ""))
assert.True(t, (&syncContext{compareResult: &comparisonResult{hooks: []*unstructured.Unstructured{test.NewCRD()}}}).hasCRDOfGroupKind("argoproj.io", "TestCrd"))
}

View File

@@ -10,29 +10,38 @@ Then, to get a good grounding in Go, try out [the tutorial](https://tour.golang.
Install:
* [docker](https://docs.docker.com/install/#supported-platforms)
* [git](https://git-scm.com/) and [git-lfs](https://git-lfs.github.com/)
* [golang](https://golang.org/)
* [dep](https://github.com/golang/dep)
* [protobuf](https://developers.google.com/protocol-buffers/)
* [ksonnet](https://github.com/ksonnet/ksonnet#install)
* [helm](https://github.com/helm/helm/releases)
* [kustomize](https://github.com/kubernetes-sigs/kustomize/releases)
* [go-swagger](https://github.com/go-swagger/go-swagger/blob/master/docs/install.md)
* [jq](https://stedolan.github.io/jq/)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* [kubectx](https://kubectx.dev)
* [minikube](https://kubernetes.io/docs/setup/minikube/) or Docker for Desktop
!!! warning "Versions"
You will find problems generating code if you do not have the correct versions of `protoc` and `swagger`:
```bash
$ protoc --version
libprotoc 3.7.1
~/go/src/github.com/argoproj/argo-cd (ui)
$ swagger version
version: v0.19.0
```
Brew users can quickly install the lot:
```bash
brew install go git-lfs kubectl kubectx dep ksonnet/tap/ks kubernetes-helm kustomize
brew tap go-swagger/go-swagger
brew install go dep protobuf kubectl kubectx ksonnet/tap/ks kubernetes-helm jq go-swagger kustomize
```
Check the versions:
```
go version ;# must be v1.12.x
helm version ;# must be v2.13.x
kustomize version ;# must be v3.10.x
```
!!! note "Kustomize"
Since Argo CD supports Kustomize v1.0 and v2.0, you will need to install both versions in order for the unit tests to run. The Kustomize 1 unit test expects to find a `kustomize1` binary in the path. You can use this [link](https://github.com/argoproj/argo-cd/blob/master/Dockerfile#L66-L69) to find the Kustomize 1 currently used by Argo CD and modify the curl command to download the correct OS.
Set up environment variables (e.g. in `~/.bashrc`):
@@ -48,24 +57,40 @@ go get -u github.com/argoproj/argo-cd
cd ~/go/src/github.com/argoproj/argo-cd
```
## Building
Install go dependencies:
Ensure dependencies are up to date first:
```shell
dep ensure
make dev-builder-image
make install-lint-tools
go get github.com/mattn/goreman
```bash
go get github.com/gobuffalo/packr/packr
go get github.com/gogo/protobuf/gogoproto
go get github.com/golang/protobuf/protoc-gen-go
go get github.com/golangci/golangci-lint/cmd/golangci-lint
go get github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
go get github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
go get github.com/jstemmer/go-junit-report
go get github.com/mattn/goreman
go get golang.org/x/tools/cmd/goimports
```
Common make targets:
## Building
* `make codegen` - Run code generation
* `make lint` - Lint code
* `make test` - Run unit tests
* `make cli` - Make the `argocd` CLI tool
```bash
make
```
The make command can take a while, so we recommend building only the specific component you are working on:
* `make codegen` - Builds protobuf and swagger files
* `make cli` - Make the argocd CLI tool
* `make server` - Make the API/repo/controller server
* `make argocd-util` - Make the administrator's utility, used for certain tasks such as import/export
## Running Tests
To run unit tests:
```bash
make test
```
Check out the following [documentation](https://github.com/argoproj/argo-cd/blob/master/docs/developer-guide/test-e2e.md) for instructions on running the e2e tests.
@@ -76,20 +101,14 @@ It is much easier to run and debug if you run ArgoCD on your local machine than
You should scale the deployments to zero:
```bash
kubectl -n argocd scale deployment/argocd-application-controller --replicas 0
kubectl -n argocd scale deployment/argocd-dex-server --replicas 0
kubectl -n argocd scale deployment/argocd-repo-server --replicas 0
kubectl -n argocd scale deployment/argocd-server --replicas 0
kubectl -n argocd scale deployment/argocd-redis --replicas 0
kubectl -n argocd scale deployment.extensions/argocd-application-controller --replicas 0
kubectl -n argocd scale deployment.extensions/argocd-dex-server --replicas 0
kubectl -n argocd scale deployment.extensions/argocd-repo-server --replicas 0
kubectl -n argocd scale deployment.extensions/argocd-server --replicas 0
kubectl -n argocd scale deployment.extensions/argocd-redis --replicas 0
```
Download Yarn dependencies and compile:
```bash
cd ~/go/src/github.com/argoproj/argo-cd/ui
yarn install
yarn build
```
Note: you'll need to use the https://localhost:6443 cluster now.
Then start the services:
@@ -101,17 +120,12 @@ make start
You can now execute `argocd` command against your locally running ArgoCD by appending `--server localhost:8080 --plaintext --insecure`, e.g.:
```bash
argocd app create guestbook --path guestbook --repo https://github.com/argoproj/argocd-example-apps.git --dest-server https://kubernetes.default.svc --dest-namespace default --server localhost:8080 --plaintext --insecure
argocd app set guestbook --path guestbook --repo https://github.com/argoproj/argocd-example-apps.git --dest-server https://localhost:6443 --dest-namespace default --server localhost:8080 --plaintext --insecure
```
You can open the UI: http://localhost:4000
You can open the UI: http://localhost:8080
As an alternative to using the above command line parameters each time you call `argocd` CLI, you can set the following environment variables:
```bash
export ARGOCD_SERVER=127.0.0.1:8080
export ARGOCD_OPTS="--plaintext --insecure"
```
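With those set, the earlier commands shorten considerably, e.g. (assuming the guestbook app created above):
```bash
argocd app list
argocd app get guestbook
```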
Note: you'll need to use the https://kubernetes.default.svc cluster now.
## Running Local Containers
@@ -129,19 +143,21 @@ Add your username as the environment variable, e.g. to your `~/.bash_profile`:
export IMAGE_NAMESPACE=alexcollinsintuit
```
If you don't want to use `latest` as the image's tag (the default), you can set it from the environment too:
If you have not built the UI image (see [the UI README](https://github.com/argoproj/argo-cd/blob/master/ui/README.md)), then do the following:
```bash
export IMAGE_TAG=yourtag
docker pull argoproj/argocd-ui:latest
docker tag argoproj/argocd-ui:latest $IMAGE_NAMESPACE/argocd-ui:latest
docker push $IMAGE_NAMESPACE/argocd-ui:latest
```
Build the image:
Build the images:
```bash
DOCKER_PUSH=true make image
```
Update the manifests (be sure to do that from a shell that has the above environment variables set)
Update the manifests:
```bash
make manifests
@@ -156,11 +172,11 @@ kubectl -n argocd apply --force -f manifests/install.yaml
Scale your deployments up:
```bash
kubectl -n argocd scale deployment/argocd-application-controller --replicas 1
kubectl -n argocd scale deployment/argocd-dex-server --replicas 1
kubectl -n argocd scale deployment/argocd-repo-server --replicas 1
kubectl -n argocd scale deployment/argocd-server --replicas 1
kubectl -n argocd scale deployment/argocd-redis --replicas 1
kubectl -n argocd scale deployment.extensions/argocd-application-controller --replicas 1
kubectl -n argocd scale deployment.extensions/argocd-dex-server --replicas 1
kubectl -n argocd scale deployment.extensions/argocd-repo-server --replicas 1
kubectl -n argocd scale deployment.extensions/argocd-server --replicas 1
kubectl -n argocd scale deployment.extensions/argocd-redis --replicas 1
```
Now you can set up the port-forwarding and open the UI or CLI.
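For example, a minimal port-forward to the API server (the service name assumes the standard install manifests):
```bash
kubectl port-forward svc/argocd-server -n argocd 8080:443
```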

@@ -1,32 +1,3 @@
# API Docs
You can find the Swagger docs by appending the path `/swagger-ui` to your Argo CD UI's URL. E.g. [http://localhost:8080/swagger-ui](http://localhost:8080/swagger-ui).
## Authorization
You'll need to authorize your API using a bearer token. To get a token:
```bash
$ curl $ARGOCD_SERVER/api/v1/session -d $'{"username":"admin","password":"password"}'
{"token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE1Njc4MTIzODcsImlzcyI6ImFyZ29jZCIsIm5iZiI6MTU2NzgxMjM4Nywic3ViIjoiYWRtaW4ifQ.ejyTgFxLhuY9mOBtKhcnvobg3QZXJ4_RusN_KIdVwao"}
```
> <=v1.2
Then pass it using the HTTP `Cookie` header, prefixed with `argocd.token=`:
```bash
$ curl $ARGOCD_SERVER/api/v1/applications --cookie "argocd.token=$ARGOCD_TOKEN"
{"metadata":{"selfLink":"/apis/argoproj.io/v1alpha1/namespaces/argocd/applications","resourceVersion":"37755"},"items":...}
```
> >v1.3
Then pass it using the HTTP `Authorization` header, prefixed with `Bearer `:
```bash
$ curl $ARGOCD_SERVER/api/v1/applications -H "Authorization: Bearer $ARGOCD_TOKEN"
{"metadata":{"selfLink":"/apis/argoproj.io/v1alpha1/namespaces/argocd/applications","resourceVersion":"37755"},"items":...}
```
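As a convenience you can capture the token straight into `ARGOCD_TOKEN`; a minimal sketch using `jq` (the admin password shown is the placeholder from above):
```bash
export ARGOCD_TOKEN=$(curl -s $ARGOCD_SERVER/api/v1/session \
  -d $'{"username":"admin","password":"password"}' | jq -r .token)
```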
You can find the Swagger docs by appending the path `/swagger-ui` to your Argo CD UI's URL. E.g. [http://localhost:8080/swagger-ui](http://localhost:8080/swagger-ui).


@@ -19,13 +19,35 @@ Set the `VERSION` environment variable:
# release candidate
VERSION=v1.0.0-rc1
# GA release
VERSION=v1.0.2
VERSION=v1.0.0
```
If not already created, create UI release branch:
```bash
cd argo-cd-ui
git checkout -b $BRANCH
```
Tag UI:
```bash
git tag $VERSION
git push $REPO $BRANCH --tags
IMAGE_NAMESPACE=argoproj IMAGE_TAG=$VERSION DOCKER_PUSH=true yarn docker
```
If not already created, create release branch:
```bash
cd argo-cd
git checkout -b $BRANCH
git push $REPO $BRANCH
```
Update `VERSION` and manifests with new version:
```bash
git checkout $BRANCH
echo ${VERSION:1} > VERSION
make manifests IMAGE_TAG=$VERSION
git commit -am "Update manifests to $VERSION"
@@ -36,7 +58,6 @@ Tag, build, and push release to Docker Hub
```bash
git tag $VERSION
git clean -fd
make release IMAGE_NAMESPACE=argoproj IMAGE_TAG=$VERSION DOCKER_PUSH=true
git push $REPO $VERSION
```
@@ -45,7 +66,7 @@ Update [Github releases](https://github.com/argoproj/argo-cd/releases) with:
* Getting started (copy from previous release)
* Changelog
* Binaries (e.g. `dist/argocd-darwin-amd64`).
* Binaries (e.g. dist/argocd-darwin-amd64).
If GA, update `stable` tag:
@@ -59,10 +80,8 @@ If GA, update Brew formula:
```bash
git clone https://github.com/argoproj/homebrew-tap
cd homebrew-tap
git checkout master
git pull
./update.sh ~/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64
git commit -am "Update argocd to $VERSION"
git commit -a -m "Update argocd to $VERSION"
git push
```


@@ -13,18 +13,6 @@ Git repository via file url: `file:///tmp/argocd-e2e***`.
You can observe the tests by using the UI [http://localhost:8080/applications](http://localhost:8080/applications).
## Configuration of E2E Tests execution
The Makefile's `start-e2e` target starts instances of ArgoCD on your local machine, most of which require a network listener. If for whatever reason you already have network services on your machine listening on the same ports, the e2e tests will not be able to run. You can deviate from the defaults by setting the following environment variables before you run `make start-e2e`:
* `ARGOCD_E2E_APISERVER_PORT`: Listener port for `argocd-server` (default: `8080`)
* `ARGOCD_E2E_REPOSERVER_PORT`: Listener port for `argocd-reposerver` (default: `8081`)
* `ARGOCD_E2E_DEX_PORT`: Listener port for `dex` (default: `5556`)
* `ARGOCD_E2E_REDIS_PORT`: Listener port for `redis` (default: `6379`)
* `ARGOCD_E2E_YARN_CMD`: Command to use for starting the UI via Yarn (default: `yarn`)
If you have changed the port for `argocd-server`, be sure to also set the `ARGOCD_SERVER` environment variable to point to that port, e.g. `export ARGOCD_SERVER=localhost:8888`, before running `make test-e2e` so that the tests communicate with the correct server component.
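For example (the port value `8888` is illustrative):
```bash
# start the test instances with a non-default API server port
ARGOCD_E2E_APISERVER_PORT=8888 make start-e2e

# in another shell, point the tests at that port
export ARGOCD_SERVER=localhost:8888
make test-e2e
```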
## CI Set-up
The tests are executed by an Argo Workflow defined at `.argo-ci/ci.yaml`. The CI job builds an Argo CD image and deploys Argo CD components into a throw-away Kubernetes cluster provisioned
@@ -33,7 +21,7 @@ using k3s, and runs e2e tests against it.
## Test Isolation
Some effort has been made to balance test isolation with speed. Tests are isolated as follows; each test gets:
* A random 5 character ID.
* A unique Git repository containing the `testdata` in `/tmp/argocd-e2e/${id}`.
* A namespace `argocd-e2e-ns-${id}`.
@@ -46,7 +34,7 @@ Some effort has been made to balance test isolation with speed. Tests are isolat
This may be due to the metrics server; run this:
```bash
kubectl api-resources
kubectl api-resources
```
If it exits with status code 1, run:
@@ -59,4 +47,4 @@ Remove `/spec/finalizers` from the namespace
```bash
kubectl edit ns argocd-e2e-ns-*
```
```


@@ -1,12 +1,5 @@
# FAQ
## I've deleted/corrupted my repo and can't delete my app.
Argo CD can't delete an app if it cannot generate manifests. You need to either:
1. Reinstate/fix your repo.
1. Delete the app using `--cascade=false` and then manually delete the resources.
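For the second option, a minimal sketch (assuming an app named `my-app`):
```bash
argocd app delete my-app --cascade=false
```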
## Why is my application still `OutOfSync` immediately after a successful Sync?
See [Diffing](user-guide/diffing.md) documentation for reasons resources can be OutOfSync, and ways to configure
@@ -22,30 +15,17 @@ to return `Progressing` state instead of `Healthy`.
([contour](https://github.com/heptio/contour/issues/403), [traefik](https://github.com/argoproj/argo-cd/issues/968#issuecomment-451082913)) don't update
`status.loadBalancer.ingress` field, which causes the `Ingress` to be stuck in the `Progressing` state forever.
* `StatefulSet` is considered healthy if value of `status.updatedReplicas` field matches to `spec.replicas` field. Due to Kubernetes bug
* `StatufulSet` is considered healthy if value of `status.updatedReplicas` field matches to `spec.replicas` field. Due to Kubernetes bug
[kubernetes/kubernetes#68573](https://github.com/kubernetes/kubernetes/issues/68573) the `status.updatedReplicas` is not populated. So unless you run a Kubernetes version which
includes the fix [kubernetes/kubernetes#67570](https://github.com/kubernetes/kubernetes/pull/67570), `StatefulSet` might stay in the `Progressing` state.
* Your `StatefulSet` or `DaemonSet` is using `OnDelete` instead of `RollingUpdate` strategy. See [#1881](https://github.com/argoproj/argo-cd/issues/1881).
As a workaround, Argo CD allows providing [health check](operator-manual/health.md) customizations which override the default behavior.
## I forgot the admin password, how do I reset it?
By default the password is set to the name of the server pod, as per [the getting started guide](getting_started.md).
To change the password, edit the `argocd-secret` secret and update the `admin.password` field with a new bcrypt hash. You
can use a site like https://www.browserling.com/tools/bcrypt to generate a new hash. For example:
```bash
# bcrypt(password)=$2a$10$rRyBsGSHK6.uc8fntPwVIuLVHgsAhAX7TcdrqW/RADU0uh7CaChLa
kubectl -n argocd patch secret argocd-secret \
-p '{"stringData": {
"admin.password": "$2a$10$rRyBsGSHK6.uc8fntPwVIuLVHgsAhAX7TcdrqW/RADU0uh7CaChLa",
"admin.passwordMtime": "'$(date +%FT%T%Z)'"
}}'
```
Another option is to delete both the `admin.password` and `admin.passwordMtime` keys and restart argocd-server. This will set the password back to the pod name as per [the getting started guide](getting_started.md).
Edit the `argocd-secret` secret and update the `admin.password` field with a new bcrypt hash. You
can use a site like https://www.browserling.com/tools/bcrypt to generate a new hash. Another option
is to delete both the `admin.password` and `admin.passwordMtime` keys and restart argocd-server.
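A sketch of that key-deletion option (the label selector assumes the standard install manifests):
```bash
# a null value removes the key from the secret
kubectl -n argocd patch secret argocd-secret \
  -p '{"data": {"admin.password": null, "admin.passwordMtime": null}}'
# restart argocd-server so the password is regenerated from the pod name
kubectl -n argocd delete pod -l app.kubernetes.io/name=argocd-server
```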
## Argo CD cannot deploy Helm Chart based applications without internet access, how can I solve it?
@@ -55,15 +35,9 @@ uses only internally available Helm repositories. Even if the chart uses only de
```yaml
data:
# v1.2 or earlier use `helm.repositories`
helm.repositories: |
- url: http://<internal-helm-repo-host>:8080
name: stable
# v1.3 or later use `repositories` with `type: helm`
repositories: |
- type: helm
url: http://<internal-helm-repo-host>:8080
name: stable
```
## I've configured [cluster secret](./operator-manual/declarative-setup.md#clusters) but it does not show up in CLI/UI, how do I fix it?
@@ -87,62 +61,4 @@ Now you can manually verify that cluster is accessible from the Argo CD pod.
To terminate the sync, click on the "synchronisation" then "terminate":
![Synchronization](assets/synchronization-button.png) ![Terminate](assets/terminate-button.png)
## Why Is My App Out Of Sync Even After Syncing?
In some cases, the tool you use may conflict with Argo CD by adding the `app.kubernetes.io/instance` label, e.g. when using the Kustomize common labels feature.
Argo CD automatically sets the `app.kubernetes.io/instance` label and uses it to determine which resources form the app. If the tool does this too, this causes confusion. You can change this label by setting the `application.instanceLabelKey` value in the `argocd-cm`. We recommend that you use `argocd.argoproj.io/instance`.
!!! note
When you make this change your applications will become out of sync and will need re-syncing.
See [#1482](https://github.com/argoproj/argo-cd/issues/1482).
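A minimal `argocd-cm` snippet implementing the recommendation above:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  application.instanceLabelKey: argocd.argoproj.io/instance
```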
## Why Are My Resource Limits Out Of Sync?
Kubernetes normalizes your resource limits when they are applied; Argo CD then compares the version in your generated manifests to the normalized one in Kubernetes, so they won't match.
E.g.
* `'1000m'` normalized to `'1'`
* `'0.1'` normalized to `'100m'`
* `'3072Mi'` normalized to `'3Gi'`
* `3072` normalized to `'3072'` (quotes added)
To fix this - replace your values with the normalized values.
See [#1615](https://github.com/argoproj/argo-cd/issues/1615)
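For instance, a hypothetical container spec should use the normalized forms:
```yaml
resources:
  limits:
    cpu: '1'      # not '1000m'
    memory: 3Gi   # not '3072Mi'
```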
## How Do I Fix "invalid cookie, longer than max length 4093"?
Argo CD uses a JWT as the auth token. You are likely part of many groups and have gone over the 4KB limit set for cookies.
You can get the list of groups by opening "developer tools -> network":
* Click log in
* Find the call to `<argocd_instance>/auth/callback?code=<random_string>`
Decode the token at https://jwt.io/. That will provide the list of teams that you can remove yourself from.
See [#2165](https://github.com/argoproj/argo-cd/issues/2165).
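A rough local alternative to jwt.io, assuming `jq` is installed (the payload segment may need base64 padding appended):
```bash
# decode the JWT payload (the second dot-separated segment)
echo "$ARGOCD_TOKEN" | cut -d. -f2 | base64 -d | jq .groups
```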
## Why Am I Getting `rpc error: code = Unavailable desc = transport is closing` When Using The CLI?
Maybe you're behind a proxy that does not support HTTP/2? Try the `--grpc-web` flag:
```bash
argocd ... --grpc-web
```
## Why Am I Getting `x509: certificate signed by unknown authority` When Using The CLI?
You're not running your server with the correct certs.
If you're not running in a production system (e.g. you're testing Argo CD out), try the `--insecure` flag:
```bash
argocd ... --insecure
```
!!! warning "Do not use `--insecure` in production"
![Synchronization](assets/synchronization-button.png) ![Terminate](assets/terminate-button.png)


@@ -25,7 +25,7 @@ kubectl create clusterrolebinding YOURNAME-cluster-admin-binding --clusterrole=c
## 2. Download Argo CD CLI
Download the latest Argo CD version from [https://github.com/argoproj/argo-cd/releases/latest](https://github.com/argoproj/argo-cd/releases/latest).
Download the latest Argo CD version from [https://github.com/argoproj/argo-cd/releases/latest].
Also available in Mac Homebrew:
@@ -110,7 +110,7 @@ service account token to perform its management tasks (i.e. deploy/monitoring).
## 6. Create An Application From A Git Repository
An example repository containing a guestbook application is available at
[https://github.com/argoproj/argocd-example-apps.git](https://github.com/argoproj/argocd-example-apps.git) to demonstrate how Argo CD works.
https://github.com/argoproj/argocd-example-apps.git to demonstrate how Argo CD works.
### Creating Apps Via CLI
@@ -125,7 +125,7 @@ argocd app create guestbook \
### Creating Apps Via UI
Open a browser to the Argo CD external UI, and login using the credentials, IP/hostname set in step 4.
Connect the [https://github.com/argoproj/argocd-example-apps.git](https://github.com/argoproj/argocd-example-apps.git) repo to Argo CD:
Connect the https://github.com/argoproj/argocd-example-apps.git repo to Argo CD:
![connect repo](assets/connect_repo.png)


@@ -79,6 +79,14 @@ For additional details, see [architecture overview](operator-manual/architecture
* Prometheus metrics
* Parameter overrides for overriding ksonnet/helm parameters in Git
## Community Blogs And Presentations
* GitOps with Argo CD: [Simplify and Automate Deployments Using GitOps with IBM Multicloud Manager](https://www.ibm.com/blogs/bluemix/2019/02/simplify-and-automate-deployments-using-gitops-with-ibm-multicloud-manager-3-1-2/)
* KubeCon talk: [CI/CD in Light Speed with K8s and Argo CD](https://www.youtube.com/watch?v=OdzH82VpMwI&feature=youtu.be)
* KubeCon talk: [Machine Learning as Code](https://www.youtube.com/watch?v=VXrGp5er1ZE&t=0s&index=135&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU)
* Among other things, describes how Kubeflow uses Argo CD to implement GitOps for ML
* SIG Apps demo: [Argo CD - GitOps Continuous Delivery for Kubernetes](https://www.youtube.com/watch?v=aWDIQMbp1cc&feature=youtu.be&t=1m4s)
## Development Status
Argo CD is actively developed and is being used in production to deploy SaaS services at Intuit


@@ -19,14 +19,6 @@ spec:
# helm specific config
helm:
# Extra parameters to set (same as setting through values.yaml, but these take precedence)
parameters:
- name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
value: mydomain.example.com
# Release name override (defaults to application name)
releaseName: guestbook
valueFiles:
- values-prod.yaml
@@ -34,35 +26,36 @@ spec:
kustomize:
# Optional image name prefix
namePrefix: prod-
# Optional images passed to "kustomize edit set image".
# Optional image tags passed to "kustomize edit set imagetag" (Kustomize 1 only).
imageTags:
- name: gcr.io/heptio-images/ks-guestbook-demo
value: "0.2"
# Optional images passed to "kustomize edit set image" (Kustomize 2 only).
images:
- gcr.io/heptio-images/ks-guestbook-demo:0.2
# directory
directory:
recurse: true
jsonnet:
# A list of Jsonnet External Variables
extVars:
- name: foo
value: bar
# You can use "code to determine if the value is either string (false, the default) or Jsonnet code (if code is true).
- code: true
name: baz
value: "true"
# A list of Jsonnet Top-level Arguments
tlas:
- code: false
name: foo
value: bar
jsonnet:
# A list of Jsonnet External Variables
extVars:
- name: foo
value: bar
# You can use "code to determine if the value is either string (false, the default) or Jsonnet code (if code is true).
- code: true
name: baz
value: "true"
# A list of Jsonnet Top-level Arguments
tlas:
- code: false
name: foo
value: bar
# plugin specific config
plugin:
name: mypluginname
# environment variables passed to the plugin
env:
- name: FOO
value: bar
- name: mypluginname
# Destination cluster and namespace to deploy the application
destination:
@@ -72,8 +65,7 @@ spec:
# Sync policy
syncPolicy:
automated:
prune: true # Specifies if resources should be pruned during auto-syncing ( false by default ).
selfHeal: true # Specifies if partial app sync should be executed when resources are changed only in target Kubernetes cluster and no git change detected ( false by default ).
prune: true
# Ignore differences at the specified json pointers
ignoreDifferences:


@@ -30,5 +30,5 @@ manifests when provided the following inputs:
The application controller is a Kubernetes controller which continuously monitors running
applications and compares the current, live state against the desired target state (as specified in
the repo). It detects `OutOfSync` application state and optionally takes corrective action. It
is responsible for invoking any user-defined hooks for lifecycle events (PreSync, Sync, PostSync)
is responsible for invoking any user-defined hooks for lifcecycle events (PreSync, Sync, PostSync)


@@ -2,32 +2,12 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
# Argo CD's externally facing base URL (optional). Required when configuring SSO
url: https://argo-cd-demo.argoproj.io
# Enables application status badge feature
statusbadge.enabled: 'true'
# Enables anonymous user access. The anonymous users get default role permissions specified in argocd-rbac-cm.yaml.
users.anonymous.enabled: "true"
# Enables Google Analytics tracking if specified
ga.trackingid: 'UA-12345-1'
# Unless set to 'false', user ids are hashed before sending to Google Analytics
ga.anonymizeusers: 'false'
# the URL for getting chat help, this will typically be your Slack channel for support
help.chatUrl: 'https://mycorp.slack.com/argo-cd'
# the text for getting chat help, defaults to "Chat now!"
help.chatText: 'Chat now!'
# A dex connector configuration (optional). See SSO configuration documentation:
# https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/sso.md
# https://github.com/argoproj/argo-cd/blob/master/docs/sso.md
# https://github.com/dexidp/dex/tree/master/Documentation/connectors
dex.config: |
connectors:
@@ -51,12 +31,9 @@ data:
clientSecret: $oidc.okta.clientSecret
# Optional set of OIDC scopes to request. If omitted, defaults to: ["openid", "profile", "email", "groups"]
requestedScopes: ["openid", "profile", "email"]
# Optional set of OIDC claims to request on the ID token.
requestedIDTokenClaims: {"groups": {"essential": true}}
# Git repositories configure Argo CD with (optional).
# This list is updated when configuring/removing repos from the UI/CLI
# Note: 'type: helm' field is supported in v1.3+. Use 'helm.repositories' for older versions.
repositories: |
- url: https://github.com/argoproj/my-private-repository
passwordSecret:
@@ -68,20 +45,8 @@ data:
sshPrivateKeySecret:
name: my-secret
key: sshPrivateKey
- type: helm
url: https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
name: istio.io
- type: helm
url: https://my-private-chart-repo.internal
name: private-repo
usernameSecret:
name: my-secret
key: username
passwordSecret:
name: my-secret
key: password
# Non-standard and private Helm repositories (deprecated in 1.3).
# Non-standard and private Helm repositories (optional).
helm.repositories: |
- url: https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
name: istio.io
@@ -124,27 +89,6 @@ data:
hs.status = "Progressing"
hs.message = "Waiting for certificate"
return hs
apps/Deployment:
# List of Lua Scripts to introduce custom actions
actions: |
# Lua Script to indicate which custom actions are available on the resource
discovery.lua: |
actions = {}
actions["restart"] = {}
return actions
definitions:
- name: restart
# Lua Script to modify the obj
action.lua: |
local os = require("os")
if obj.spec.template.metadata == nil then
obj.spec.template.metadata = {}
end
if obj.spec.template.metadata.annotations == nil then
obj.spec.template.metadata.annotations = {}
end
obj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] = os.date("!%Y-%m-%dT%XZ")
return obj
# Configuration to completely ignore entire classes of resource group/kinds (optional).
# Excluding high-volume resources improves performance and memory usage, and reduces load and
@@ -168,9 +112,6 @@ data:
generate:
command: [kasane, show]
# Build options/parameters to use with `kustomize build` (optional)
kustomize.buildOptions: --load_restrictor none
# The metadata.label key name where Argo CD injects the app name as a tracking label (optional).
# Tracking labels are used to determine which resources need to be deleted when pruning.
# If omitted, Argo CD injects the app name into the label: 'app.kubernetes.io/instance'


@@ -2,17 +2,13 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-rbac-cm
app.kubernetes.io/part-of: argocd
data:
# policy.csv is a file containing user-defined RBAC policies and role definitions (optional).
# Policy rules are in the form:
# p, subject, resource, action, object, effect
# Role definitions and bindings are in the form:
# g, subject, inherited-subject
# See https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/rbac.md for additional information.
# See https://github.com/argoproj/argo-cd/blob/master/docs/rbac.md for additional information.
policy.csv: |
# Grant all members of the group 'my-org:team-alpha' the ability to sync apps in 'my-project'
p, my-org:team-alpha, applications, sync, my-project/*, allow
@@ -25,6 +21,6 @@ data:
policy.default: role:readonly
# scopes controls which OIDC scopes to examine during rbac enforcement (in addition to `sub` scope).
# If omitted, defaults to: '[groups]'. The scope value can be a string, or a list of strings.
scopes: '[cognito:groups, email]'
# If omitted, defaults to: `[groups]`. The scope value can be a string, or a list of strings.
scopes: [cognito:groups, email]


@@ -2,10 +2,6 @@ apiVersion: v1
kind: Secret
metadata:
name: argocd-secret
namespace: argocd
labels:
app.kubernetes.io/name: argocd-secret
app.kubernetes.io/part-of: argocd
type: Opaque
data:
# TLS certificate and private key for API server (required).
@@ -23,7 +19,7 @@ data:
server.secretkey:
# Shared secrets for authenticating GitHub, GitLab, BitBucket webhook events (optional).
# See https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/webhook.md for additional details.
# See https://github.com/argoproj/argo-cd/blob/master/docs/webhook.md for additional details.
github.webhook.secret:
gitlab.webhook.secret:
bitbucket.webhook.uuid:


@@ -1,16 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/name: argocd-ssh-known-hosts-cm
app.kubernetes.io/part-of: argocd
name: argocd-ssh-known-hosts-cm
data:
ssh_known_hosts: |
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==
github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
vs-ssh.visualstudio.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H


@@ -1,45 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-tls-certs-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
server.example.com: |
-----BEGIN CERTIFICATE-----
MIIF1zCCA7+gAwIBAgIUQdTcSHY2Sxd3Tq/v1eIEZPCNbOowDQYJKoZIhvcNAQEL
BQAwezELMAkGA1UEBhMCREUxFTATBgNVBAgMDExvd2VyIFNheG9ueTEQMA4GA1UE
BwwHSGFub3ZlcjEVMBMGA1UECgwMVGVzdGluZyBDb3JwMRIwEAYDVQQLDAlUZXN0
c3VpdGUxGDAWBgNVBAMMD2Jhci5leGFtcGxlLmNvbTAeFw0xOTA3MDgxMzU2MTda
Fw0yMDA3MDcxMzU2MTdaMHsxCzAJBgNVBAYTAkRFMRUwEwYDVQQIDAxMb3dlciBT
YXhvbnkxEDAOBgNVBAcMB0hhbm92ZXIxFTATBgNVBAoMDFRlc3RpbmcgQ29ycDES
MBAGA1UECwwJVGVzdHN1aXRlMRgwFgYDVQQDDA9iYXIuZXhhbXBsZS5jb20wggIi
MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCv4mHMdVUcafmaSHVpUM0zZWp5
NFXfboxA4inuOkE8kZlbGSe7wiG9WqLirdr39Ts+WSAFA6oANvbzlu3JrEQ2CHPc
CNQm6diPREFwcDPFCe/eMawbwkQAPVSHPts0UoRxnpZox5pn69ghncBR+jtvx+/u
P6HdwW0qqTvfJnfAF1hBJ4oIk2AXiip5kkIznsAh9W6WRy6nTVCeetmIepDOGe0G
ZJIRn/OfSz7NzKylfDCat2z3EAutyeT/5oXZoWOmGg/8T7pn/pR588GoYYKRQnp+
YilqCPFX+az09EqqK/iHXnkdZ/Z2fCuU+9M/Zhrnlwlygl3RuVBI6xhm/ZsXtL2E
Gxa61lNy6pyx5+hSxHEFEJshXLtioRd702VdLKxEOuYSXKeJDs1x9o6cJ75S6hko
Ml1L4zCU+xEsMcvb1iQ2n7PZdacqhkFRUVVVmJ56th8aYyX7KNX6M9CD+kMpNm6J
kKC1li/Iy+RI138bAvaFplajMF551kt44dSvIoJIbTr1LigudzWPqk31QaZXV/4u
kD1n4p/XMc9HYU/was/CmQBFqmIZedTLTtK7clkuFN6wbwzdo1wmUNgnySQuMacO
gxhHxxzRWxd24uLyk9Px+9U3BfVPaRLiOPaPoC58lyVOykjSgfpgbus7JS69fCq7
bEH4Jatp/10zkco+UQIDAQABo1MwUTAdBgNVHQ4EFgQUjXH6PHi92y4C4hQpey86
r6+x1ewwHwYDVR0jBBgwFoAUjXH6PHi92y4C4hQpey86r6+x1ewwDwYDVR0TAQH/
BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAgEAFE4SdKsX9UsLy+Z0xuHSxhTd0jfn
Iih5mtzb8CDNO5oTw4z0aMeAvpsUvjJ/XjgxnkiRACXh7K9hsG2r+ageRWGevyvx
CaRXFbherV1kTnZw4Y9/pgZTYVWs9jlqFOppz5sStkfjsDQ5lmPJGDii/StENAz2
XmtiPOgfG9Upb0GAJBCuKnrU9bIcT4L20gd2F4Y14ccyjlf8UiUi192IX6yM9OjT
+TuXwZgqnTOq6piVgr+FTSa24qSvaXb5z/mJDLlk23npecTouLg83TNSn3R6fYQr
d/Y9eXuUJ8U7/qTh2Ulz071AO9KzPOmleYPTx4Xty4xAtWi1QE5NHW9/Ajlv5OtO
OnMNWIs7ssDJBsB7VFC8hcwf79jz7kC0xmQqDfw51Xhhk04kla+v+HZcFW2AO9so
6ZdVHHQnIbJa7yQJKZ+hK49IOoBR6JgdB5kymoplLLiuqZSYTcwSBZ72FYTm3iAr
jzvt1hxpxVDmXvRnkhRrIRhK4QgJL0jRmirBjDY+PYYd7bdRIjN7WNZLFsgplnS8
9w6CwG32pRlm0c8kkiQ7FXA6BYCqOsDI8f1VGQv331OpR2Ck+FTv+L7DAmg6l37W
+LB9LGh4OAp68ImTjqf6ioGKG0RBSznwME+r4nXtT1S/qLR6ASWUS4ViWRhbRlNK
XWyb96wrUlv+E8I=
-----END CERTIFICATE-----


@@ -1,12 +1,12 @@
# Cluster Bootstrapping
This guide is for operators who have already installed Argo CD, and have a new cluster and are looking to install many apps in that cluster.
This guide is for operators who have already installed Argo CD, and have a new cluster and are looking to install many applications in that cluster.
There's no one particular pattern to solve this problem, e.g. you could write a script to create your apps, or you could even manually create them. However, users of Argo CD tend to use the **app of apps pattern**.
There's no one particular pattern to solve this problem, e.g. you could write a script to create your applications, or you could even manually create them. However, users of Argo CD tend to use the **application of applications pattern**.
## App Of Apps Pattern
## Application Of Applications Pattern
[Declaratively](declarative-setup.md) specify one Argo CD app that consists only of other apps.
[Declaratively](declarative-setup.md) specify one Argo CD application that consists only of other applications.
![Application of Applications](../assets/application-of-applications.png)
@@ -28,7 +28,7 @@ A typical layout of your Git repository for this might be:
`Chart.yaml` is boiler-plate.
`templates` contains one file for each child app, roughly:
`templates` contains one file for each application, roughly:
```yaml
apiVersion: argoproj.io/v1alpha1
@@ -46,17 +46,15 @@ spec:
source:
path: guestbook
repoURL: https://github.com/argoproj/argocd-example-apps
targetRevision: 08836bd970037ebcd14494831de4635bad961139
targetRevision: {{ .Values.spec.source.targetRevision }}
syncPolicy:
automated:
prune: true
```
The sync policy is set to automated + prune, so that child apps are automatically created, synced, and deleted when the manifest is changed, but you may wish to disable this. I've also added the finalizer, which will ensure that your apps are deleted correctly.
In this example, I've set the sync policy to automated + prune, so that applications are automatically created, synced, and deleted when the manifest is changed, but you may wish to disable this. I've also added the finalizer, which will ensure that your applications are deleted correctly.
Fix the revision to a specific Git commit SHA to make sure that, even if the child apps repo changes, the app will only change when the parent app changes that revision. Alternatively, you can set it to HEAD or a branch name.
As you probably want to override the cluster server, this is a templated value.
As you probably want to override the cluster server and maybe the revision, these are templated values.
`values.yaml` contains the default values:
@@ -64,19 +62,21 @@ As you probably want to override the cluster server, this is a templated values.
spec:
destination:
server: https://kubernetes.default.svc
source:
targetRevision: HEAD
```
Finally, you need to create your parent app, e.g.:
Finally, you need to create your application, e.g.:
```bash
argocd app create applications \
--dest-namespace argocd \
--dest-server https://kubernetes.default.svc \
--repo https://github.com/argoproj/argocd-example-apps.git \
--path apps \
--path applications \
--sync-policy automated
```
In this example, I excluded auto-prune, as this would result in all apps being deleted if someone accidentally deleted the *app of apps*.
In this example, I excluded auto-prune, as this would result in all applications being deleted if someone accidentally deleted the *application of applications*.
View [the example on Github](https://github.com/argoproj/argocd-example-apps/tree/master/apps).
View [the example on Github](https://github.com/argoproj/argocd-example-apps/tree/master/applications).


@@ -8,8 +8,6 @@ Argo CD applications, projects and settings can be defined declaratively using K
| [`argocd-cm.yaml`](argocd-cm.yaml) | ConfigMap | General Argo CD configuration |
| [`argocd-secret.yaml`](argocd-secret.yaml) | Secret | Password, Certificates, Signing Key |
| [`argocd-rbac-cm.yaml`](argocd-rbac-cm.yaml) | ConfigMap | RBAC Configuration |
| [`argocd-tls-certs-cm.yaml`](argocd-tls-certs-cm.yaml) | ConfigMap | Custom TLS certificates for connecting Git repositories via HTTPS (v1.2 and later) |
| [`argocd-ssh-known-hosts-cm.yaml`](argocd-ssh-known-hosts-cm.yaml) | ConfigMap | SSH known hosts data for connecting Git repositories via SSH (v1.2 and later) |
| [`application.yaml`](application.yaml) | Application | Example application spec |
| [`project.yaml`](project.yaml) | AppProject | Example project spec |
@@ -28,7 +26,6 @@ apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: guestbook
namespace: argocd
spec:
project: default
source:
@@ -42,9 +39,6 @@ spec:
See [application.yaml](application.yaml) for additional fields
!!! note
The namespace must match the namespace of your Argo CD instance, typically this is `argocd`.
!!! warning
By default, deleting an application will not perform a cascade delete, which would also delete its resources. You must add the finalizer if you want this behaviour - which you may well not want.
@@ -54,10 +48,10 @@ metadata:
- resources-finalizer.argocd.argoproj.io
```
### App of Apps
### Application of Applications
You can create an app that creates other apps, which in turn can create other apps.
This allows you to declaratively manage a group of apps that can be deployed and configured in concert.
You can create an application that creates other applications, which in turn can create other applications.
This allows you to declaratively manage a group of applications that can be deployed and configured in concert.
See [cluster bootstrapping](cluster-bootstrapping.md).
@@ -76,7 +70,6 @@ apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
name: my-project
namespace: argocd
spec:
description: Example Project
# Allow manifests to deploy from any Git repos
@@ -133,10 +126,6 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
repositories: |
- url: https://github.com/argoproj/my-private-repository
@@ -155,10 +144,6 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
repositories: |
- url: git@github.com:argoproj/my-private-repository
@@ -171,9 +156,7 @@ data:
The Kubernetes documentation has [instructions for creating a secret containing a private key](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys).
### Repository Credentials
>v1.1
### Repository Credentials (v1.1+)
If you want to use the same credentials for multiple repositories, you can use `repository.credentials`:
@@ -182,10 +165,6 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
repositories: |
- url: https://github.com/argoproj/private-repo
@@ -214,10 +193,6 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
repositories: |
# this has its own credentials
@@ -258,96 +233,6 @@ data:
key: sshPrivateKey
```
### Repositories using self-signed TLS certificates (or are signed by custom CA)
> v1.2 or later
You can manage the TLS certificates used to verify the authenticity of your repository servers in a ConfigMap object named `argocd-tls-certs-cm`. The data section should contain a map, with the repository server's hostname part (not the complete URL) as key, and the certificate(s) in PEM format as data. So, if you connect to a repository with the URL `https://server.example.com/repos/my-repo`, you should use `server.example.com` as key. The certificate data should be either the server's certificate (in case of a self-signed certificate) or the certificate of the CA that was used to sign the server's certificate. You can configure multiple certificates for each server, e.g. if you have a certificate roll-over planned.
If there are no dedicated certificates configured for a repository server, the system's default trust store is used for validating the server's certificate. This should be good enough for most (if not all) public Git repository services such as GitLab, GitHub and Bitbucket as well as most privately hosted sites which use certificates from well-known CAs, including Let's Encrypt certificates.
An example ConfigMap object:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-tls-certs-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
server.example.com: |
-----BEGIN CERTIFICATE-----
MIIF1zCCA7+gAwIBAgIUQdTcSHY2Sxd3Tq/v1eIEZPCNbOowDQYJKoZIhvcNAQEL
BQAwezELMAkGA1UEBhMCREUxFTATBgNVBAgMDExvd2VyIFNheG9ueTEQMA4GA1UE
BwwHSGFub3ZlcjEVMBMGA1UECgwMVGVzdGluZyBDb3JwMRIwEAYDVQQLDAlUZXN0
c3VpdGUxGDAWBgNVBAMMD2Jhci5leGFtcGxlLmNvbTAeFw0xOTA3MDgxMzU2MTda
Fw0yMDA3MDcxMzU2MTdaMHsxCzAJBgNVBAYTAkRFMRUwEwYDVQQIDAxMb3dlciBT
YXhvbnkxEDAOBgNVBAcMB0hhbm92ZXIxFTATBgNVBAoMDFRlc3RpbmcgQ29ycDES
MBAGA1UECwwJVGVzdHN1aXRlMRgwFgYDVQQDDA9iYXIuZXhhbXBsZS5jb20wggIi
MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCv4mHMdVUcafmaSHVpUM0zZWp5
NFXfboxA4inuOkE8kZlbGSe7wiG9WqLirdr39Ts+WSAFA6oANvbzlu3JrEQ2CHPc
CNQm6diPREFwcDPFCe/eMawbwkQAPVSHPts0UoRxnpZox5pn69ghncBR+jtvx+/u
P6HdwW0qqTvfJnfAF1hBJ4oIk2AXiip5kkIznsAh9W6WRy6nTVCeetmIepDOGe0G
ZJIRn/OfSz7NzKylfDCat2z3EAutyeT/5oXZoWOmGg/8T7pn/pR588GoYYKRQnp+
YilqCPFX+az09EqqK/iHXnkdZ/Z2fCuU+9M/Zhrnlwlygl3RuVBI6xhm/ZsXtL2E
Gxa61lNy6pyx5+hSxHEFEJshXLtioRd702VdLKxEOuYSXKeJDs1x9o6cJ75S6hko
Ml1L4zCU+xEsMcvb1iQ2n7PZdacqhkFRUVVVmJ56th8aYyX7KNX6M9CD+kMpNm6J
kKC1li/Iy+RI138bAvaFplajMF551kt44dSvIoJIbTr1LigudzWPqk31QaZXV/4u
kD1n4p/XMc9HYU/was/CmQBFqmIZedTLTtK7clkuFN6wbwzdo1wmUNgnySQuMacO
gxhHxxzRWxd24uLyk9Px+9U3BfVPaRLiOPaPoC58lyVOykjSgfpgbus7JS69fCq7
bEH4Jatp/10zkco+UQIDAQABo1MwUTAdBgNVHQ4EFgQUjXH6PHi92y4C4hQpey86
r6+x1ewwHwYDVR0jBBgwFoAUjXH6PHi92y4C4hQpey86r6+x1ewwDwYDVR0TAQH/
BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAgEAFE4SdKsX9UsLy+Z0xuHSxhTd0jfn
Iih5mtzb8CDNO5oTw4z0aMeAvpsUvjJ/XjgxnkiRACXh7K9hsG2r+ageRWGevyvx
CaRXFbherV1kTnZw4Y9/pgZTYVWs9jlqFOppz5sStkfjsDQ5lmPJGDii/StENAz2
XmtiPOgfG9Upb0GAJBCuKnrU9bIcT4L20gd2F4Y14ccyjlf8UiUi192IX6yM9OjT
+TuXwZgqnTOq6piVgr+FTSa24qSvaXb5z/mJDLlk23npecTouLg83TNSn3R6fYQr
d/Y9eXuUJ8U7/qTh2Ulz071AO9KzPOmleYPTx4Xty4xAtWi1QE5NHW9/Ajlv5OtO
OnMNWIs7ssDJBsB7VFC8hcwf79jz7kC0xmQqDfw51Xhhk04kla+v+HZcFW2AO9so
6ZdVHHQnIbJa7yQJKZ+hK49IOoBR6JgdB5kymoplLLiuqZSYTcwSBZ72FYTm3iAr
jzvt1hxpxVDmXvRnkhRrIRhK4QgJL0jRmirBjDY+PYYd7bdRIjN7WNZLFsgplnS8
9w6CwG32pRlm0c8kkiQ7FXA6BYCqOsDI8f1VGQv331OpR2Ck+FTv+L7DAmg6l37W
+LB9LGh4OAp68ImTjqf6ioGKG0RBSznwME+r4nXtT1S/qLR6ASWUS4ViWRhbRlNK
XWyb96wrUlv+E8I=
-----END CERTIFICATE-----
```
!!! note
The `argocd-tls-certs-cm` ConfigMap will be mounted as a volume at the mount path `/app/config/tls` in the pods of `argocd-server` and `argocd-repo-server`. It will create files for each data key in the mount path directory, so above example would leave the file `/app/config/tls/server.example.com`, which contains the certificate data. It might take a while for changes in the ConfigMap to be reflected in your pods, depending on your Kubernetes configuration.
### SSH known host public keys
If you are connecting repositories via SSH, ArgoCD will need to know the SSH known hosts public key of the repository servers. You can manage the SSH known hosts data in the ConfigMap named `argocd-ssh-known-hosts-cm`. This ConfigMap contains a single key/value pair, with `ssh_known_hosts` as the key and the actual public keys of the SSH servers as data. As opposed to the TLS configuration, the public key(s) of every repository server ArgoCD will connect to via SSH must be configured, otherwise the connections to the repository will fail. There is no fallback. The data can be copied from any existing `ssh_known_hosts` file, or from the output of the `ssh-keyscan` utility. The basic format is `<servername> <keydata>`, one entry per line.
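For example, to generate an entry for a hypothetical private Git server:
```bash
ssh-keyscan git.example.com
```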
An example ConfigMap object:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-ssh-known-hosts-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
ssh_known_hosts: |
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==
github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
vs-ssh.visualstudio.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
```
!!! note
The `argocd-ssh-known-hosts-cm` ConfigMap will be mounted as a volume at the mount path `/app/config/ssh` in the pods of `argocd-server` and `argocd-repo-server`. It will create a file `ssh_known_hosts` in that directory, which contains the SSH known hosts data used by ArgoCD for connecting to Git repositories via SSH. It might take a while for changes in the ConfigMap to be reflected in your pods, depending on your Kubernetes configuration.
## Clusters
@@ -412,8 +297,8 @@ stringData:
## Helm Chart Repositories
Non standard Helm Chart repositories have to be registered under the `repositories` key in the
`argocd-cm` ConfigMap. Each repository must have `url`, `type` and `name` fields. For private Helm repos you
Non standard Helm Chart repositories have to be registered under the `helm.repositories` key in the
`argocd-cm` ConfigMap. Each repository must have `url` and `name` fields. For private Helm repos you
may need to configure access credentials and HTTPS settings using `usernameSecret`, `passwordSecret`,
`caSecret`, `certSecret` and `keySecret` fields.
@@ -424,22 +309,11 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
# v1.2 or earlier use `helm.repositories`
helm.repositories: |
- url: https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
name: istio.io
# v1.3 or later use `repositories` with `type: helm`
repositories: |
- type: helm
url: https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
name: istio.io
- type: helm
url: https://argoproj.github.io/argo-helm
- url: https://argoproj.github.io/argo-helm
name: argo
usernameSecret:
name: my-secret


@@ -11,61 +11,9 @@ A set of HA manifests is provided for users who wish to run Argo CD in a highly
## Scaling Up
### argocd-repo-server
You might scale up some Argo CD services in the following circumstances:
**settings:**
* The `argocd-repo-server` can scale up when there is too much contention on a single git repo (e.g. many apps defined in a single git repo).
* The `argocd-server` can scale up to support more front-end load.
The `argocd-repo-server` is responsible for cloning the Git repository, keeping it up to date and generating manifests using the appropriate tool.
* `argocd-repo-server` forks/execs the config management tool to generate manifests. The fork can fail due to lack of memory or the limit on the number of OS threads.
The `--parallelismlimit` flag controls how many manifest generations run concurrently and helps avoid OOM kills.
* One instance of `argocd-repo-server` executes only one operation on one Git repo concurrently. Increase the `argocd-repo-server` replica count if you have a lot of
applications in the same repository.
* `argocd-repo-server` clones the repository into `/tmp` (or the path specified in the `TMPDIR` env variable). The pod might run out of disk space if it has too many repositories
or the repositories have a lot of files. To avoid this problem, mount a persistent volume.
* `argocd-repo-server` uses `git ls-remote` to resolve ambiguous revisions such as `HEAD`, a branch or a tag name. This operation happens pretty frequently
and might fail. To avoid failed syncs, use the `ARGOCD_GIT_ATTEMPTS_COUNT` environment variable to retry failed requests.
**metrics:**
* `argocd_git_request_total` - Number of git requests. The metric provides two tags: `repo` - Git repo URL; `request_type` - `ls-remote` or `fetch`.
### argocd-application-controller
**settings:**
The `argocd-application-controller` uses `argocd-repo-server` to get generated manifests and the Kubernetes API server to get the actual cluster state.
* The controller uses two separate queues to process application reconciliation (milliseconds) and app syncing (seconds). The number of queue processors for each queue is controlled by the
`--status-processors` (20 by default) and `--operation-processors` (10 by default) flags. Increase the number of processors if your Argo CD instance manages too many applications.
For 1000 applications we use 50 for `--status-processors` and 25 for `--operation-processors`.
* The manifest generation typically takes the most time during reconciliation. The duration of manifest generation is limited to make sure the controller refresh queue does not overflow.
The app reconciliation fails with a `Context deadline exceeded` error if manifest generation takes too much time. As a workaround, increase the value of `--repo-server-timeout-seconds` and
consider scaling up the `argocd-repo-server` deployment.
* The controller uses `kubectl` fork/exec to push changes into the cluster and to convert resources from the preferred version into a user-specified version
(e.g. Deployment `apps/v1` into `extensions/v1beta1`). As with the config management tools, `kubectl` fork/exec might cause a pod OOM kill. Use the `--kubectl-parallelism-limit` flag to limit the
number of allowed concurrent kubectl fork/execs.
* The controller uses Kubernetes watch APIs to maintain a lightweight Kubernetes cluster cache. This avoids querying Kubernetes during app reconciliation and significantly improves
performance. For performance reasons the controller monitors and caches only the preferred version of a resource. During reconciliation, the controller might have to convert a cached resource from the
preferred version into the version of the resource stored in Git. If `kubectl convert` fails because the conversion is not supported, the controller falls back to a Kubernetes API query, which slows down
reconciliation. In this case, use the preferred resource version in Git.
**metrics:**
* `argocd_app_reconcile` - reports application reconciliation duration. Can be used to build a reconciliation duration heat map to get a high-level reconciliation performance picture.
* `argocd_app_k8s_request_total` - number of k8s requests per application. The number of fallback Kubernetes API queries is useful to identify which application has a resource with a
non-preferred version and causes performance issues.
### argocd-server
The `argocd-server` is stateless and probably the least likely to cause issues. You might consider increasing the number of replicas to 3 or more to ensure there is no downtime during upgrades.
### argocd-dex-server, argocd-redis
The `argocd-dex-server` uses an in-memory database, and two or more instances would have inconsistent data. `argocd-redis` is pre-configured with the understanding of only three total redis servers/sentinels.
All other services should run with their pre-determined number of replicas. The `argocd-application-controller` must not be increased because multiple controllers will fight. The `argocd-dex-server` uses an in-memory database, and two or more instances would have inconsistent data. `argocd-redis` is pre-configured with the understanding of only three total redis servers/sentinels.


@@ -12,7 +12,7 @@ There are several ways in which Ingress can be configured.
### Option 1: SSL-Passthrough
Argo CD serves multiple protocols (gRPC/HTTPS) on the same port (443), this provides a
Because Argo CD serves multiple protocols (gRPC/HTTPS) on the same port (443), this provides a
challenge when attempting to define a single nginx ingress object and rule for the argocd-service,
since the `nginx.ingress.kubernetes.io/backend-protocol` [annotation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol)
accepts only a single value for the backend protocol (e.g. HTTP, HTTPS, GRPC, GRPCS).
@@ -115,60 +115,22 @@ spec:
- --insecure
```
The obvious disadvantage to this approach is that this technique requires two separate hostnames for
the API server -- one for gRPC and the other for HTTP/HTTPS. However it allows TLS termination to
The obvious disadvantage to this approach is that this technique require two separate hostnames for
the API server -- one for gRPC and the other for HTTP/HTTPS. However it allow TLS termination to
happen at the ingress controller.
## [Traefik (v2.0)](https://docs.traefik.io/)
Traefik can be used as an edge router and provide [TLS](https://docs.traefik.io/user-guides/crd-acme/) termination within the same deployment.
It currently has an advantage over NGINX in that it can terminate both TCP and HTTP connections _on the same port_ meaning you do not require multiple ingress objects and hosts.
```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: argocd-server-ingress
spec:
entryPoints:
- websecure
routes:
- match: Host(`argocd.example.com`)
kind: Rule
services:
- name: argocd-server
port: 80
tls:
certResolver: default
options: {}
```
## AWS Application Load Balancers (ALBs) And Classic ELB (HTTP Mode)
ALBs and Classic ELBs don't fully support HTTP2/gRPC, which is used by the `argocd` CLI.
Thus, when using an AWS load balancer, either Classic ELB in
Neither ALBs and Classic ELB in HTTP mode, do not have full support for HTTP2/gRPC which is the
protocol used by the `argocd` CLI. Thus, when using an AWS load balancer, either Classic ELB in
passthrough mode is needed, or NLBs.
```shell
$ argocd login <host>:<port> --grpc-web
```
## Authenticating through multiple layers of authenticating reverse proxies
ArgoCD endpoints may be protected by one or more layers of authenticating reverse proxies; in that case, you can provide additional headers through the `argocd` CLI `--header` parameter to authenticate through those layers.
```shell
$ argocd login <host>:<port> --header 'x-token1:foo' --header 'x-token2:bar' # can be repeated multiple times
$ argocd login <host>:<port> --header 'x-token1:foo,x-token2:bar' # headers can also be comma separated
```
## UI Base Path
If the Argo CD UI is available under a non-root path (e.g. `/argo-cd` instead of `/`) then the UI path should be configured in the API server.
To configure the UI path add the `--basehref` flag into the `argocd-server` deployment command:
If Argo CD UI is available under non-root path (e.g. `/argo-cd` instead of `/`) then UI path should be configured in API server.
To configure UI path add `--basehref` flag into `argocd-server` deployment command:
```yaml
spec:
@@ -186,7 +148,7 @@ spec:
- /argo-cd
```
NOTE: The flag `--basehref` only changes the UI base URL. The API server will keep using the `/` path so you need to add a URL rewrite rule to the proxy config.
NOTE: flag `--basehref` only changes UI base URL. API server keep using `/` path so you need to add URL rewrite rule to proxy config.
Example nginx.conf with URL rewrite:
```
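# A minimal sketch; the /argo-cd base path, port and upstream below are
# assumptions, not the project's reference configuration.
worker_processes 1;
events { worker_connections 1024; }
http {
    server {
        listen 443;
        location /argo-cd/ {
            # strip the UI base path; the API server still serves on /
            rewrite /argo-cd/(.*) /$1 break;
            proxy_pass https://localhost:8080;
        }
    }
}
```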


@@ -43,7 +43,7 @@ metadata:
spec:
selector:
matchLabels:
app.kubernetes.io/name: argocd-server
app.kubernetes.io/name: argocd-server-metrics
endpoints:
- port: metrics
```


@@ -2,7 +2,6 @@ apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
name: my-project
namespace: argocd
spec:
# Project description
description: Example Project
@@ -30,10 +29,6 @@ spec:
- group: ''
kind: NetworkPolicy
# Enables namespace orphaned resource monitoring.
orphanedResources:
warn: false
roles:
# A role which provides read-only access to all applications in the project
- name: read-only


@@ -28,7 +28,6 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
namespace: argocd
data:
policy.default: role:readonly
policy.csv: |
@@ -41,8 +40,3 @@ data:
g, your-github-org:your-team, role:org-admin
```
## Anonymous Access
Anonymous access to Argo CD can be enabled using the `users.anonymous.enabled` field in `argocd-cm` (see [./argocd-cm.yaml](argocd-cm.yaml)).
Anonymous users get the default role permissions specified by `policy.default` in `argocd-rbac-cm.yaml`. For read-only access you'll want `policy.default: role:readonly` as above.


@@ -86,8 +86,6 @@ NOTES:
* Any value which starts with '$' will look to a key in argocd-secret of the same name (minus the $) to obtain the actual value. This allows you to store the `clientSecret` as a Kubernetes secret.
* If you are editing the secret using `kubectl edit secret`, remember that the `data` field expects a base64 encoded value (`echo -n "<CLIENT_SECRET>" | base64`); see the sketch after these notes for an alternative.
* The error: *Failed to authenticate: github: get user: github: get URL Get https://api.github.com/user: oauth2: token expired and refresh token is not set* can be attributed to the secret value not being interpreted correctly by dex (e.g. incorrect client secret value).
* There is no need to set `redirectURI` in the `connectors.config` as shown in the dex documentation.
Argo CD will automatically use the correct `redirectURI` for any OAuth2 connectors, to match the
correct external callback URL (e.g. https://argocd.example.com/api/dex/callback)
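For example, to store the secret referenced as `$oidc.okta.clientSecret` in the example below without hand-encoding base64, a sketch using `kubectl patch` (`stringData` lets the API server do the encoding for you):
```shell
$ kubectl -n argocd patch secret argocd-secret \
    --type merge \
    -p '{"stringData": {"oidc.okta.clientSecret": "<CLIENT_SECRET>"}}'
```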
@@ -108,47 +106,9 @@ data:
    clientID: aaaabbbbccccddddeee
    clientSecret: $oidc.okta.clientSecret
    # Optional set of OIDC scopes to request. If omitted, defaults to: ["openid", "profile", "email", "groups"]
    requestedScopes: ["openid", "profile", "email", "groups"]
    # Optional set of OIDC claims to request on the ID token.
    requestedIDTokenClaims: {"groups": {"essential": true}}
    # Some OIDC providers require a separate clientID for different callback URLs.
    # For example, if configuring Argo CD with self-hosted Dex, you will need a separate client ID
    # for the 'localhost' (CLI) client to Dex. This field is optional. If omitted, the CLI will
    # use the same clientID as the Argo CD server
    cliClientID: vvvvwwwwxxxxyyyyzzzz
### Requesting additional ID token claims
Not all OIDC providers support a special `groups` scope. Okta, OneLogin, and Microsoft, for example, do support the `groups` scope and will return group membership with the default `requestedScopes`.
Other OIDC providers might be able to return a claim with group membership if explicitly requested to do so.
Individual claims can be requested with `requestedIDTokenClaims`; see
[OpenID Connect Claims Parameter](https://connect2id.com/products/server/docs/guides/requesting-openid-claims#claims-parameter)
for details. The Argo CD configuration for claims is as follows:
```yaml
oidc.config: |
  requestedIDTokenClaims:
    email:
      essential: true
    groups:
      essential: true
      value: org:myorg
    acr:
      essential: true
      values:
      - urn:mace:incommon:iap:silver
      - urn:mace:incommon:iap:bronze
```
For a simple case this can be:
```yaml
oidc.config: |
  requestedIDTokenClaims: {"groups": {"essential": true}}
```
View File
@@ -4,7 +4,7 @@
Argo CD polls Git repositories every three minutes to detect changes to the manifests. To eliminate
this delay from polling, the API server can be configured to receive webhook events. Argo CD supports
Git webhook notifications from GitHub, GitLab, Bitbucket, Bitbucket Server and Gogs. The following explains how to configure
a Git webhook for GitHub, but the same process should be applicable to other providers.
### 1. Create The WebHook In The Git Provider
@@ -27,13 +27,11 @@ accessible, then configuring a webhook secret is recommended to prevent a DDoS attack.
In the `argocd-secret` kubernetes secret, configure one of the following keys with the Git
provider's webhook secret configured in step 1.
| Provider | K8s Secret Key |
|-----------------| ---------------------------------|
| GitHub | `webhook.github.secret` |
| GitLab | `webhook.gitlab.secret` |
| BitBucket | `webhook.bitbucket.uuid` |
| BitBucketServer | `webhook.bitbucketserver.secret` |
| Gogs | `webhook.gogs.secret` |
Edit the Argo CD kubernetes secret:
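For example, using `kubectl` (any method of updating the secret works):
```shell
$ kubectl edit secret argocd-secret -n argocd
```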
@@ -58,19 +56,13 @@ data:
stringData:
  # github webhook secret
  webhook.github.secret: shhhh! it's a github secret
  # gitlab webhook secret
  webhook.gitlab.secret: shhhh! it's a gitlab secret
  # bitbucket webhook secret
  webhook.bitbucket.uuid: your-bitbucket-uuid
  # bitbucket server webhook secret
  webhook.bitbucketserver.secret: shhhh! it's a bitbucket server secret
  # gogs server webhook secret
  webhook.gogs.secret: shhhh! it's a gogs server secret
After saving, the changes should take effect automatically.
View File
@@ -1,10 +1,10 @@
# App Deletion
Apps can be deleted with or without a cascade option. A **cascade delete** deletes both the app and its resources, rather than only the app.
## Deletion Using `argocd`
To perform a non-cascade delete:
```bash
argocd app delete APPNAME
```
@@ -43,4 +43,4 @@ metadata:
Argo CD's app controller watches for this and will then delete both the app and its resources.
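For reference, the finalizer being watched for lives in the Application's `metadata`; a minimal sketch:
```yaml
metadata:
  finalizers:
  # tells the app controller to delete the app's resources before the app itself
  - resources-finalizer.argocd.argoproj.io
```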
When you invoke `argocd app delete` with `--cascade`, the finalizer is added automatically.
Some files were not shown because too many files have changed in this diff.