Compare commits

44 Commits

Author  SHA1  Message  Date

Alex Collins  e0bd546a07  Update manifests to v1.0.2  2019-06-14 09:46:05 -07:00
Alex Collins  984829fcc8  Merge branch 'release-1.0' of github.com:argoproj/argo-cd into release-1.0  2019-06-14 09:38:08 -07:00
Jesse Suen  c48e27f265  Cluster registration was unintentionally persisting client-cert auth credentials (#1742)  2019-06-14 04:03:01 -07:00
    Remove unused CreateClusterFromKubeConfig server method
Alex Collins  5fe1447b72  Update manifests to v1.0.1  2019-05-28 10:08:34 -07:00
Alex Collins  539516bd43  Update manifests to v1.0.1  2019-05-28 10:07:32 -07:00
Alex Collins  8a57d544ff  Update manifests to v1.0.1  2019-05-28 09:01:21 -07:00
Alex Collins  cd77e2a048  Update manifests to v1.0.1  2019-05-28 08:58:01 -07:00
Alex Collins  a52f766815  removes file which cannot be compiled  2019-05-24 15:03:49 -07:00
Alex Collins  646fd37e16  Public git creds (#1633)  2019-05-24 15:01:03 -07:00
Alexander Matyushentsev  c74ca22023  Update manifests to v1.0.0  2019-05-16 13:00:40 -07:00
Alexander Matyushentsev  2d170be242  Issue #1471 - Support configuring requested OIDC provider scopes and enforced RBAC scopes (#1585)  2019-05-16 07:35:44 -07:00
    * Issue #1471 - Support configuring requested OIDC provider scopes and enforced RBAC scopes
    * Apply reviewer notes
Alexander Matyushentsev  079101522d  Issue #1533 - Prevent reconciliation loop for self-managed apps (#1608)  2019-05-14 08:05:28 -07:00
Jesse Suen  1bea98e01b  Supply resourceVersion to watch request to prevent reading of stale cache (#1612)  2019-05-13 15:01:15 -07:00
Alexander Matyushentsev  7c09221f7c  Update manifests to v1.0.0-rc3  2019-05-09 09:52:56 -07:00
Alexander Matyushentsev  6355e910d4  Fix flaky TestGetIngressInfo unit test (#1529)  2019-05-09 09:52:16 -07:00
Alexander Matyushentsev  891e0320d7  Issue #1586 - Ignore patch errors during diffing normalization (#1599)  2019-05-09 09:30:48 -07:00
Alexander Matyushentsev  486323ae58  Issue #1596 - SSH URLs support is partially broken (#1597)  2019-05-09 09:30:42 -07:00
Alexander Matyushentsev  4ef875aa0b  Issue #1552 - Improve rendering app image information (#1584)  2019-05-09 09:30:31 -07:00
Alexander Matyushentsev  e756b8db7a  Fix ingress browsable url formatting if port is not string (#1576)  2019-05-09 09:28:49 -07:00
Alexander Matyushentsev  8023f8ac8d  Issue #1579 - Impossible to sync to HEAD from UI if auto-sync is enabled (#1580)  2019-05-09 09:28:42 -07:00
Alexander Matyushentsev  803408904a  Issue #1570 - Application controller is unable to delete self-referenced app (#1574)  2019-05-09 09:28:35 -07:00
Alexander Matyushentsev  702f9095da  Issue #1546 - Add liveness probe to repo server/api servers (#1560)  2019-05-09 09:28:30 -07:00
Alexander Matyushentsev  0b9ee1ae6d  Issue #1557 - Controller incorrectly report health state of self managed application (#1558)  2019-05-09 09:28:19 -07:00
Alexander Matyushentsev  2f003e08ff  Issue #1540 - Fix kustomize manifest generation crash is manifest has image without version (#1559)  2019-05-09 09:28:14 -07:00
Paul Brit  e090857d6b  Fix hardcoded 'git' user in util/git.NewClient (#1556)  2019-05-09 09:28:09 -07:00
    Closes #1555
dthomson25  4e29fff5a3  Improve Rollout health.lua (#1554)  2019-05-09 09:28:03 -07:00
Alexander Matyushentsev  e279377696  Fix invalid URL for ingress without hostname (#1553)  2019-05-01 15:38:40 -07:00
Alexander Matyushentsev  5e52839ce3  Issue #1533 - Prevent reconciliation loop for self-managed apps (#1547)  2019-05-01 10:22:09 -07:00
Alexander Matyushentsev  3ca632a552  Update manifests to v1.0.0-rc2  2019-04-30 13:19:50 -07:00
Alexander Matyushentsev  cfe55357ac  Rollout health checks/actions should support v0.2 and v0.2+ versions (#1543)  2019-04-30 13:18:15 -07:00
Alex Collins  0f6d768eca  Fixes bug in normalizer (#1542)  2019-04-30 11:32:54 -07:00
Alexander Matyushentsev  75330da328  Ingress resource might get invalid ExternalURL (#1522) (#1523)  2019-04-30 11:14:18 -07:00
Alexander Matyushentsev  a1bcbab0e5  Issue 1476 - Avoid validating repository in application controller (#1535)  2019-04-30 11:10:06 -07:00
Alexander Matyushentsev  db9272032a  Issue #1414 - Load target resource using K8S if conversion fails (#1527)  2019-04-30 11:10:02 -07:00
Alexander Matyushentsev  d6d6c655ff  Issue #1476 - Add repo server grpc call timeout (#1528)  2019-04-30 11:09:58 -07:00
Alex Collins  58acc92790  Adds support for configuring repo creds at a domain/org level. Closes… (#1496)  2019-04-30 11:09:53 -07:00
Simon Behar  c3074c0977  Whitelisting of resources (#1509)  2019-04-30 11:09:26 -07:00
    * Added whitelisting of resources
Simon Behar  af254f3047  Added ability to sync specific labels from the command line (#1501)  2019-04-30 11:09:16 -07:00
    * Finished initial implementation
    * Added tests and fix a few bugs
Alex Collins  c140976eeb  Updates Makefile  2019-04-24 10:52:53 -07:00
Alex Collins  05c22d4ddc  Updates VERSION  2019-04-24 10:51:50 -07:00
Alex Collins  3e08938a20  Updates VERSION  2019-04-24 10:49:30 -07:00
Alex Collins  5937bb574d  Update manifests to v1.0.0-rc1  2019-04-24 10:48:24 -07:00
Alex Collins  ded55b26d1  Update manifests to v1.0.0-rc1  2019-04-24 10:46:23 -07:00
Alex Collins  d79ed65de0  Updated CHANGELOG.md  2019-04-24 10:35:04 -07:00
575 changed files with 9776 additions and 54631 deletions

.argo-ci/ci.yaml (new file)

@@ -0,0 +1,163 @@
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: argo-cd-ci-
spec:
entrypoint: argo-cd-ci
arguments:
parameters:
- name: revision
value: master
- name: repo
value: https://github.com/argoproj/argo-cd.git
volumes:
- name: k3setc
emptyDir: {}
- name: k3svar
emptyDir: {}
- name: tmp
emptyDir: {}
templates:
- name: argo-cd-ci
steps:
- - name: build-e2e
template: build-e2e
- name: test
template: ci-builder
arguments:
parameters:
- name: cmd
value: "dep ensure && make lint test && bash <(curl -s https://codecov.io/bash) -f coverage.out"
# This step builds the Argo CD image, deploys Argo CD components into a throw-away Kubernetes cluster provisioned using k3s, and runs e2e tests against it.
- name: build-e2e
inputs:
artifacts:
- name: code
path: /go/src/github.com/argoproj/argo-cd
git:
repo: "{{workflow.parameters.repo}}"
revision: "{{workflow.parameters.revision}}"
container:
image: argoproj/argo-cd-ci-builder:v0.13.1
imagePullPolicy: Always
command: [sh, -c]
# The main container builds the argocd image. The image is saved into the k3s agent images directory so it can be preloaded by the k3s cluster.
args: ["
dep ensure && until docker ps; do sleep 3; done && \
make image DEV_IMAGE=true && mkdir -p /var/lib/rancher/k3s/agent/images && \
docker save argocd:latest > /var/lib/rancher/k3s/agent/images/argocd.tar && \
touch /var/lib/rancher/k3s/ready && until ls /etc/rancher/k3s/k3s.yaml; do sleep 3; done && \
kubectl create ns argocd-e2e && kustomize build ./test/manifests/ci | kubectl apply -n argocd-e2e -f - && \
kubectl rollout status deployment -n argocd-e2e argocd-application-controller && kubectl rollout status deployment -n argocd-e2e argocd-server && \
git config --global user.email \"test@example.com\" && \
export ARGOCD_SERVER=$(kubectl get service argocd-server -o=jsonpath={.spec.clusterIP} -n argocd-e2e):443 && make test-e2e"
]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: USER
value: argocd
- name: DOCKER_HOST
value: 127.0.0.1
- name: DOCKER_BUILDKIT
value: "1"
- name: KUBECONFIG
value: /etc/rancher/k3s/k3s.yaml
volumeMounts:
- name: tmp
mountPath: /tmp
- name: k3setc
mountPath: /etc/rancher/k3s
- name: k3svar
mountPath: /var/lib/rancher/k3s
sidecars:
- name: dind
image: docker:18.09-dind
securityContext:
privileged: true
resources:
requests:
memory: 2048Mi
cpu: 500m
mirrorVolumeMounts: true
# This step waits for the file /var/lib/rancher/k3s/ready, which indicates that all required images are ready, then starts the cluster.
- name: k3s
image: rancher/k3s:v0.3.0-rc1
imagePullPolicy: Always
command: [sh, -c]
args: ["until ls /var/lib/rancher/k3s/ready; do sleep 3; done && k3s server || true"]
securityContext:
privileged: true
volumeMounts:
- name: tmp
mountPath: /tmp
- name: k3setc
mountPath: /etc/rancher/k3s
- name: k3svar
mountPath: /var/lib/rancher/k3s
- name: ci-builder
inputs:
parameters:
- name: cmd
artifacts:
- name: code
path: /go/src/github.com/argoproj/argo-cd
git:
repo: "{{workflow.parameters.repo}}"
revision: "{{workflow.parameters.revision}}"
container:
image: argoproj/argo-cd-ci-builder:v0.13.1
imagePullPolicy: Always
command: [bash, -c]
args: ["{{inputs.parameters.cmd}}"]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: CODECOV_TOKEN
valueFrom:
secretKeyRef:
name: codecov-token
key: codecov-token
resources:
requests:
memory: 1024Mi
cpu: 200m
archiveLocation:
archiveLogs: true
- name: ci-dind
inputs:
parameters:
- name: cmd
artifacts:
- name: code
path: /go/src/github.com/argoproj/argo-cd
git:
repo: "{{workflow.parameters.repo}}"
revision: "{{workflow.parameters.revision}}"
container:
image: argoproj/argo-cd-ci-builder:v0.13.1
imagePullPolicy: Always
command: [sh, -c]
args: ["until docker ps; do sleep 3; done && {{inputs.parameters.cmd}}"]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: DOCKER_HOST
value: 127.0.0.1
- name: DOCKER_BUILDKIT
value: "1"
resources:
requests:
memory: 1024Mi
cpu: 200m
sidecars:
- name: dind
image: docker:18.09-dind
securityContext:
privileged: true
mirrorVolumeMounts: true
archiveLocation:
archiveLogs: true
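As a hedged aside (not part of this diff): a Workflow manifest like the one above would typically be submitted with the Argo Workflows CLI, overriding the `revision` and `repo` parameters declared under `spec.arguments`. The parameter values below are placeholders, not taken from this repository's CI setup.

```shell
# Sketch only: submit the CI workflow and watch it run.
argo submit .argo-ci/ci.yaml \
  -p revision=release-1.0 \
  -p repo=https://github.com/argoproj/argo-cd.git \
  --watch
```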

.circleci/config.yml (deleted)

@@ -1,327 +0,0 @@
version: 2.1
commands:
before:
steps:
- restore_go_cache
- install_golang
- install_tools
- clean_checkout
- configure_git
- install_go_deps
- dep_ensure
configure_git:
steps:
- run:
name: Configure Git
command: |
set -x
# must be configured for tests to run
git config --global user.email you@example.com
git config --global user.name "Your Name"
echo "export PATH=/home/circleci/.go_workspace/src/github.com/argoproj/argo-cd/hack:\$PATH" | tee -a $BASH_ENV
echo "export GIT_ASKPASS=git-ask-pass.sh" | tee -a $BASH_ENV
clean_checkout:
steps:
- run:
name: Remove checked out code
command: rm -Rf /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
- checkout
dep_ensure:
steps:
- restore_cache:
keys:
- vendor-v4-{{ checksum "Gopkg.lock" }}
- run:
name: Run dep ensure
command: dep ensure -v
- save_cache:
key: vendor-v4-{{ checksum "Gopkg.lock" }}
paths:
- vendor
install_golang:
steps:
- run:
name: Install Golang v1.12.6
command: |
go get golang.org/dl/go1.12.6
[ -e /home/circleci/sdk/go1.12.6 ] || go1.12.6 download
echo "export GOPATH=/home/circleci/.go_workspace" | tee -a $BASH_ENV
echo "export PATH=/home/circleci/sdk/go1.12.6/bin:\$PATH" | tee -a $BASH_ENV
- run:
name: Golang diagnostics
command: |
env
which go
go version
go env
install_go_deps:
steps:
- run:
name: Install Go deps
command: |
set -x
go get github.com/jstemmer/go-junit-report
go get github.com/mattn/goreman
install_tools:
steps:
- run:
name: Create downloads dir
command: mkdir -p /tmp/dl
- restore_cache:
keys:
- dl-v7
- run:
name: Install Kubectl v1.14.0
command: |
set -x
[ -e /tmp/dl/kubectl ] || curl -sLf -C - -o /tmp/dl/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl
sudo cp /tmp/dl/kubectl /usr/local/bin/kubectl
sudo chmod +x /usr/local/bin/kubectl
- run:
name: Install Kubectx v0.6.3
command: |
set -x
[ -e /tmp/dl/kubectx.zip ] || curl -sLf -C - -o /tmp/dl/kubectx.zip https://github.com/ahmetb/kubectx/archive/v0.6.3.zip
sudo unzip /tmp/dl/kubectx.zip kubectx-0.6.3/kubectx
sudo unzip /tmp/dl/kubectx.zip kubectx-0.6.3/kubens
sudo mv kubectx-0.6.3/kubectx /usr/local/bin/
sudo mv kubectx-0.6.3/kubens /usr/local/bin/
sudo chmod +x /usr/local/bin/kubectx
sudo chmod +x /usr/local/bin/kubens
- run:
name: Install Dep v0.5.3
command: |
set -x
[ -e /tmp/dl/dep ] || curl -sLf -C - -o /tmp/dl/dep https://github.com/golang/dep/releases/download/v0.5.3/dep-linux-amd64
sudo cp /tmp/dl/dep /usr/local/go/bin/dep
sudo chmod +x /usr/local/go/bin/dep
dep version
- run:
name: Install Ksonnet v0.13.1
command: |
set -x
[ -e /tmp/dl/ks.tar.gz ] || curl -sLf -C - -o /tmp/dl/ks.tar.gz https://github.com/ksonnet/ksonnet/releases/download/v0.13.1/ks_0.13.1_linux_amd64.tar.gz
tar -C /tmp -xf /tmp/dl/ks.tar.gz
sudo cp /tmp/ks_0.13.1_linux_amd64/ks /usr/local/go/bin/ks
sudo chmod +x /usr/local/go/bin/ks
ks version
- run:
name: Install Helm v2.13.1
command: |
set -x
[ -e /tmp/dl/helm.tar.gz ] || curl -sLf -C - -o /tmp/dl/helm.tar.gz https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
tar -C /tmp/ -xf /tmp/dl/helm.tar.gz
sudo cp /tmp/linux-amd64/helm /usr/local/go/bin/helm
helm version --client
helm init --client-only
- run:
name: Install Kustomize v3.1.0
command: |
set -x
export VER=3.1.0
[ -e /tmp/dl/kustomize_${VER} ] || curl -sLf -C - -o /tmp/dl/kustomize_${VER} https://github.com/kubernetes-sigs/kustomize/releases/download/v${VER}/kustomize_${VER}_linux_amd64
sudo cp /tmp/dl/kustomize_${VER} /usr/local/go/bin/kustomize
sudo chmod +x /usr/local/go/bin/kustomize
kustomize version
- save_cache:
key: dl-v7
paths:
- /tmp/dl
save_go_cache:
steps:
- save_cache:
key: go-v17-{{ .Branch }}
paths:
- /home/circleci/.go_workspace
- /home/circleci/.cache/go-build
- /home/circleci/sdk/go1.12.6
restore_go_cache:
steps:
- restore_cache:
keys:
- go-v17-{{ .Branch }}
- go-v17-master
- go-v16-{{ .Branch }}
- go-v16-master
jobs:
build:
working_directory: /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
machine:
image: circleci/classic:201808-01
steps:
- before
- run:
name: Run unit tests
command: |
set -x
mkdir -p /tmp/test-results
trap "go-junit-report </tmp/test-results/go-test.out > /tmp/test-results/go-test-report.xml" EXIT
make test | tee /tmp/test-results/go-test.out
- save_go_cache
- run:
name: Uploading code coverage
command: bash <(curl -s https://codecov.io/bash) -f coverage.out
# This takes 2m, let's background it.
background: true
- store_test_results:
path: /tmp/test-results
- run:
name: Generate code
command: make codegen
- run:
name: Lint code
# use GOGC to limit memory usage in exchange for CPU usage, https://github.com/golangci/golangci-lint#memory-usage-of-golangci-lint
# we have 8GB RAM, 2CPUs https://circleci.com/docs/2.0/executor-types/#using-machine
command: LINT_GOGC=20 LINT_CONCURRENCY=1 LINT_DEADLINE=3m0s make lint
- run:
name: Check nothing has changed
command: |
set -xo pipefail
# This makes sure you ran `make pre-commit` before you pushed.
# We exclude the Swagger resources; CircleCI doesn't generate them correctly.
# When this fails, it will create a patch file you can apply locally to fix it.
# To troubleshoot builds: https://argoproj.github.io/argo-cd/developer-guide/ci/
git diff --exit-code -- . ':!Gopkg.lock' ':!assets/swagger.json' | tee codegen.patch
- store_artifacts:
path: codegen.patch
when: always
e2e:
working_directory: /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
machine:
image: circleci/classic:201808-01
steps:
- run:
name: Install and start K3S v0.5.0
command: |
curl -sfL https://get.k3s.io | sh -
sudo chmod -R a+rw /etc/rancher/k3s
kubectl version
background: true
environment:
INSTALL_K3S_EXEC: --docker
INSTALL_K3S_VERSION: v0.5.0
- before
- run:
# do this before we build everything else in the background, as they tend to explode
name: Make CLI
command: |
set -x
make cli
# must be added to path for tests
echo export PATH="`pwd`/dist:\$PATH" | tee -a $BASH_ENV
- run:
name: Create namespace
command: |
set -x
cat /etc/rancher/k3s/k3s.yaml | sed "s/localhost/`hostname`/" | tee ~/.kube/config
echo "127.0.0.1 `hostname`" | sudo tee -a /etc/hosts
kubectl create ns argocd-e2e
kubens argocd-e2e
# install the certificates (not 100% sure we need this)
sudo cp /var/lib/rancher/k3s/server/tls/token-ca.crt /usr/local/share/ca-certificates/k3s.crt
sudo update-ca-certificates
- run:
name: Apply manifests
command: kustomize build test/manifests/base | kubectl apply -f -
- run:
name: Start Redis
command: docker run --rm --name argocd-redis -i -p 6379:6379 redis:5.0.3-alpine --save "" --appendonly no
background: true
- run:
name: Start repo server
command: go run ./cmd/argocd-repo-server/main.go --loglevel debug --redis localhost:6379
background: true
environment:
# pft. if you do not quote "true", CircleCI turns it into "1", stoopid
ARGOCD_FAKE_IN_CLUSTER: "true"
ARGOCD_SSH_DATA_PATH: "/tmp/argo-e2e/app/config/ssh"
ARGOCD_TLS_DATA_PATH: "/tmp/argo-e2e/app/config/tls"
- run:
name: Start API server
command: go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:6379 --insecure --dex-server http://localhost:5556 --repo-server localhost:8081 --staticassets ../argo-cd-ui/dist/app
background: true
environment:
ARGOCD_FAKE_IN_CLUSTER: "true"
ARGOCD_SSH_DATA_PATH: "/tmp/argo-e2e/app/config/ssh"
ARGOCD_TLS_DATA_PATH: "/tmp/argo-e2e/app/config/tls"
- run:
name: Start Test Git
command: |
test/fixture/testrepos/start-git.sh
background: true
- run:
name: Wait for API server
command: |
set -x
until curl -v http://localhost:8080/healthz; do sleep 3; done
- run:
name: Start controller
command: go run ./cmd/argocd-application-controller/main.go --loglevel debug --redis localhost:6379 --repo-server localhost:8081 --kubeconfig ~/.kube/config
background: true
environment:
ARGOCD_FAKE_IN_CLUSTER: "true"
- run:
name: Smoke test
command: |
set -x
argocd login localhost:8080 --plaintext --username admin --password password
argocd app create guestbook --dest-namespace default --dest-server https://kubernetes.default.svc --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook
argocd app sync guestbook
argocd app delete guestbook
- run:
name: Run e2e tests
command: |
set -x
mkdir -p /tmp/test-results
trap "go-junit-report </tmp/test-results/go-e2e.out > /tmp/test-results/go-e2e-report.xml" EXIT
make test-e2e | tee /tmp/test-results/go-e2e.out
environment:
ARGOCD_OPTS: "--server localhost:8080 --plaintext"
ARGOCD_E2E_EXPECT_TIMEOUT: "30"
ARGOCD_E2E_K3S: "true"
- store_test_results:
path: /tmp/test-results
ui:
# note that we checkout the code in ~/argo-cd/, but then work in ~/argo-cd/ui
working_directory: ~/argo-cd/ui
docker:
- image: node:11.15.0
steps:
- checkout:
path: ~/argo-cd/
- restore_cache:
name: Restore Yarn Package Cache
keys:
- yarn-packages-v3-{{ checksum "yarn.lock" }}
- run:
name: Install
command:
yarn install --frozen-lockfile --ignore-optional --non-interactive
- save_cache:
name: Save Yarn Package Cache
key: yarn-packages-v3-{{ checksum "yarn.lock" }}
paths:
- ~/.cache/yarn
- node_modules
- run:
name: Test
command: yarn test
# This does not appear to work, and I don't want to spend time on it.
- store_test_results:
path: junit.xml
- run:
name: Build
command: yarn build
- run:
name: Lint
command: yarn lint
workflows:
version: 2
workflow:
jobs:
- build
- e2e
- ui:
# this isn't strictly true, we just put it in here so that we use 2/4 executors rather than 3/4
requires:
- build
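A minimal sketch, assuming the CircleCI CLI is installed: the config above can at least be schema-checked before pushing, although its machine-executor jobs generally cannot run outside CircleCI itself.

```shell
# Sketch only: validate the CircleCI 2.1 config locally.
circleci config validate .circleci/config.yml
```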

(codecov configuration)

@@ -5,13 +5,3 @@ ignore:
- "pkg/apis/.*"
- "pkg/client/.*"
- "test/.*"
coverage:
status:
# allow test coverage to drop by 1%, assuming that it's typically due to CI problems
patch:
default:
enabled: no
if_not_found: success
project:
default:
threshold: 1

.gitignore

@@ -10,4 +10,3 @@ dist/
cmd/**/debug
debug.test
coverage.out
ui/node_modules/

.github/ISSUE_TEMPLATE/bug_report.md

@@ -11,7 +11,7 @@ assignees: ''
A clear and concise description of what the bug is.
**To Reproduce**
If we cannot reproduce, we cannot fix! Steps to reproduce the behavior:
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
@@ -23,21 +23,5 @@ A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Version**
```shell
Paste the output from `argocd version` here.
```
**Logs**
```
Paste any relevant application logs here.
```
**Have you thought about contributing a fix yourself?**
Open Source software thrives with your contribution. It not only gives you skills you might not be able to get in your day job, it also looks amazing on your resume.
If you want to get involved, check out the
[contributing guide](https://github.com/argoproj/argo-cd/blob/master/docs/CONTRIBUTING.md), then reach out to us on [Slack](https://argoproj.github.io/community/join-slack) so we can see how to get you started.
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/feature_request.md

@@ -13,9 +13,8 @@ A clear and concise description of what the problem is. Ex. I'm always frustrate
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Have you thought about contributing yourself?**
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
Open Source software thrives with your contribution. It not only gives you skills you might not be able to get in your day job, it also looks amazing on your resume.
If you want to get involved, check out the
[contributing guide](https://github.com/argoproj/argo-cd/blob/master/docs/CONTRIBUTING.md), then reach out to us on [Slack](https://argoproj.github.io/community/join-slack) so we can see how to get you started.
**Additional context**
Add any other context or screenshots about the feature request here.

.github/PULL_REQUEST_TEMPLATE.md

@@ -1,7 +0,0 @@
<!--
Thank you for submitting your PR!
We'd love your organisation to be listed in the [README](https://github.com/argoproj/argo-cd). Don't forget to add it if you can!
To troubleshoot builds: https://argoproj.github.io/argo-cd/developer-guide/ci/
-->

.github/stale.yml

@@ -1,4 +1 @@
# See https://github.com/probot/stale
# See https://github.com/probot/stale
exemptLabels:
- backlog

.golangci.yml

@@ -1,5 +1,5 @@
run:
deadline: 2m
deadline: 8m
skip-files:
- ".*\\.pb\\.go"
skip-dirs:
@@ -19,4 +19,3 @@ linters:
- ineffassign
- unconvert
- misspell
- unparam
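For reference, the raised `deadline` corresponds to the lint flags used in this diff's Makefile (`lint-local` target); a representative local invocation with those values would be:

```shell
# GOGC trades CPU for memory so the linter doesn't OOM on constrained CI hosts.
GOGC=20 golangci-lint run --fix --verbose --concurrency 1 --deadline 8m0s
```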

CHANGELOG.md

@@ -1,123 +1,22 @@
# Changelog
## v1.1.2 (2019-07-30)
- 'argocd app wait' should print correct sync status (#2049)
- Check that TLS is enabled when registering DEX Handlers (#2047)
- Do not ignore Argo hooks when there is a Helm hook. (#1952)
## v1.1.1 (2019-07-25)
+ Support 'override' action in UI/API (#1984)
- Fix argocd app wait message (#1982)
## v1.1.0 (2019-07-24)
### New Features
#### Sync Waves
Sync waves allow executing a sync operation in a number of steps, or waves. Within each synchronization phase (pre-sync, sync, post-sync) you can have one or more waves,
which lets you ensure certain resources are healthy before subsequent resources are synced.
#### Optimized Interaction With Git
Argo CD needs to execute a `git fetch` operation to access application manifests and `git ls-remote` to resolve ambiguous git revisions. `git ls-remote` is executed very frequently,
and although the operation is very lightweight, it adds unnecessary load on the Git server and might cause performance issues. In the v1.1 release, the application reconciliation process was
optimized, significantly reducing the number of Git requests. With v1.1, Argo CD should send 3x to 5x fewer Git requests.
#### User Defined Application Metadata
User-defined Application metadata enables the user to define a list of useful URLs for their specific application and expose those links in the UI
(e.g. a reference to a CI pipeline or an application-specific management tool). These links provide helpful shortcuts that make it easier to integrate Argo CD into existing
systems and to find related components inside and outside Argo CD.
### Deprecation Notice
* Kustomize v1.0 is deprecated and support will be removed in the Argo CD v1.2 release.
#### Enhancements
- Sync waves [#1544](https://github.com/argoproj/argo-cd/issues/1544)
- Adds Prune=false and IgnoreExtraneous options [#1629](https://github.com/argoproj/argo-cd/issues/1629)
- Forward Git credentials to config management plugins [#1628](https://github.com/argoproj/argo-cd/issues/1628)
- Improve Kustomize 2 parameters UI [#1609](https://github.com/argoproj/argo-cd/issues/1609)
- Adds `argocd logout` [#1210](https://github.com/argoproj/argo-cd/issues/1210)
- Make it possible to set Helm release name different from Argo CD app name. [#1066](https://github.com/argoproj/argo-cd/issues/1066)
- Add ability to specify system namespace during cluster add operation [#1661](https://github.com/argoproj/argo-cd/pull/1661)
- Make listener and metrics ports configurable [#1647](https://github.com/argoproj/argo-cd/pull/1647)
- Using SSH keys to authenticate kustomize bases from git [#827](https://github.com/argoproj/argo-cd/issues/827)
- Adds `argocd app sync APPNAME --async` [#1728](https://github.com/argoproj/argo-cd/issues/1728)
- Allow users to define app specific urls to expose in the UI [#1677](https://github.com/argoproj/argo-cd/issues/1677)
- Error view instead of blank page in UI [#1375](https://github.com/argoproj/argo-cd/issues/1375)
- Project Editor: Whitelisted Cluster Resources doesn't strip whitespace [#1693](https://github.com/argoproj/argo-cd/issues/1693)
- Eliminate unnecessary git interactions for top-level resource changes (#1919)
- Ability to rotate the bearer token used to manage external clusters (#1084)
#### Bug Fixes
- Project Editor: Whitelisted Cluster Resources doesn't strip whitespace [#1693](https://github.com/argoproj/argo-cd/issues/1693)
- \[ui small bug\] menu position outside block [#1711](https://github.com/argoproj/argo-cd/issues/1711)
- UI will crash when create application without destination namespace [#1701](https://github.com/argoproj/argo-cd/issues/1701)
- ArgoCD synchronization failed due to internal error [#1697](https://github.com/argoproj/argo-cd/issues/1697)
- Replicasets ordering is not stable on app tree view [#1668](https://github.com/argoproj/argo-cd/issues/1668)
- Stuck processor on App Controller after deleting application with incomplete operation [#1665](https://github.com/argoproj/argo-cd/issues/1665)
- Role edit page fails with JS error [#1662](https://github.com/argoproj/argo-cd/issues/1662)
- failed parsing on parameters with comma [#1660](https://github.com/argoproj/argo-cd/issues/1660)
- Handle nil obj when processing custom actions [#1700](https://github.com/argoproj/argo-cd/pull/1700)
- Account for missing fields in Rollout HealthStatus [#1699](https://github.com/argoproj/argo-cd/pull/1699)
- Sync operation unnecessary waits for a healthy state of all resources [#1715](https://github.com/argoproj/argo-cd/issues/1715)
- failed parsing on parameters with comma [#1660](https://github.com/argoproj/argo-cd/issues/1660)
- argocd app sync hangs when cluster is not configured (#1935)
- Do not allow app-of-app child app's Missing status to affect parent (#1954)
- Argo CD don't handle well k8s objects which size exceeds 1mb (#1685)
- Secret data not redacted in last-applied-configuration (#897)
- Running app actions requires only read privileges (#1827)
- UI should allow editing repo URL (#1763)
- Make status fields as optional fields (#1779)
- Use correct healthcheck for Rollout with empty steps list (#1776)
#### Other
- Add Prometheus metrics for git repo interactions (#1912)
- App controller should log additional information during app syncing (#1909)
- Make sure api server to repo server grpc calls have timeout (#1820)
- Forked tool processes should timeout (#1821)
- Add health check to the controller deployment (#1785)
#### Contributors
* [Aditya Gupta](https://github.com/AdityaGupta1)
* [Alex Collins](https://github.com/alexec)
* [Alex Matyushentsev](https://github.com/alexmt)
* [Danny Thomson](https://github.com/dthomson25)
* [jannfis](https://github.com/jannfis)
* [Jesse Suen](https://github.com/jessesuen)
* [Liviu Costea](https://github.com/lcostea)
* [narg95](https://github.com/narg95)
* [Simon Behar](https://github.com/simster7)
See also [milestone v1.1](https://github.com/argoproj/argo-cd/milestone/13)
## v1.0.0 (2019-05-16)
## v1.0.0
### New Features
#### Network View
A new way to visualize application resources has been introduced on the Application Details page. The Network View visualizes connections between Ingresses, Services and Pods
based on the ingress's referenced service, the service's label selectors, and pod labels. The new view is useful for understanding the application traffic flow and troubleshooting connectivity issues.
TODO
#### Custom Actions
Argo CD introduces Custom Resource Actions to allow users to provide their own Lua scripts to modify existing Kubernetes resources in their applications. These actions are exposed in the UI to allow easy, safe, and reliable changes to their resources. This functionality can be used for tasks such as suspending and enabling a Kubernetes CronJob, continuing a BlueGreen deployment with Argo Rollouts, or scaling a Deployment.
#### UI Enhancements & Usability Enhancements
#### UI Enhancements
* New color palette intended to highlight unhealthy and out-of-sync resources more clearly.
* The health of more resources is displayed, so it is easier to quickly zoom in on unhealthy pods, replica-sets, etc.
* Resources that do not have health no longer appear to be healthy.
* Support for configuring Git repo credentials at a domain/org level
* Support for configuring requested OIDC provider scopes and enforced RBAC scopes
* Support for configuring monitored resources whitelist in addition to excluded resources
### Breaking Changes
@@ -149,12 +48,6 @@ Argo CD introduces Custom Resource Actions to allow users to provide their own L
* UI Enhancement Proposals Quick Wins #1274
* Update argocd-util import/export to support proper backup and restore (#1328)
* Whitelisting repos/clusters in projects should consider repo/cluster permissions #1432
* Adds support for configuring repo creds at a domain/org level. (#1332)
* Implement whitelist option analogous to `resource.exclusions` (#1490)
* Added ability to sync specific labels from the command line (#1241)
* Improve rendering app image information (#1552)
* Add liveness probe to repo server/api servers (#1546)
* Support configuring requested OIDC provider scopes and enforced RBAC scopes (#1471)
#### Bug Fixes
@@ -168,17 +61,7 @@ Argo CD introduces Custom Resource Actions to allow users to provide their own L
- Rollback UI is not showing correct ksonnet parameters in preview #1326
- See details of applications fails with "r.nodes is undefined" #1371
- UI fails to load custom actions is resource is not deployed #1502
- Unable to create app from private repo: x509: certificate signed by unknown authority (#1171)
- Fix hardcoded 'git' user in `util/git.NewClient` (#1555)
- Application controller becomes unresponsive (#1476)
- Load target resource using K8S if conversion fails (#1414)
- Can't ignore a non-existent pointer anymore (#1586)
- Impossible to sync to HEAD from UI if auto-sync is enabled (#1579)
- Application controller is unable to delete self-referenced app (#1570)
- Prevent reconciliation loop for self-managed apps (#1533)
- Controller incorrectly report health state of self managed application (#1557)
- Fix kustomize manifest generation crash is manifest has image without version (#1540)
- Supply resourceVersion to watch request to prevent reading of stale cache (#1605)
- Unable to create app from private repo: x509: certificate signed by unknown authority #1171
## v0.12.2 (2019-04-22)
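As a hedged aside, two of the CLI additions called out in the notes above can be exercised as follows; the `guestbook` app name reuses the smoke-test app from the CI config earlier in this diff.

```shell
# Trigger a sync without waiting for it to complete (#1728).
argocd app sync guestbook --async
# Log out of the server context created by `argocd login` (#1210).
argocd logout localhost:8080
```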

Dockerfile

@@ -1,19 +1,12 @@
ARG BASE_IMAGE=debian:9.5-slim
####################################################################################################
# Builder image
# Initial stage which prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM golang:1.12.6 as builder
RUN echo 'deb http://deb.debian.org/debian stretch-backports main' >> /etc/apt/sources.list
FROM golang:1.11.4 as builder
RUN apt-get update && apt-get install -y \
openssh-server \
nginx \
fcgiwrap \
git \
git-lfs \
make \
wget \
gcc \
@@ -23,11 +16,24 @@ RUN apt-get update && apt-get install -y \
WORKDIR /tmp
# Install docker
ENV DOCKER_CHANNEL stable
ENV DOCKER_VERSION 18.09.1
RUN wget -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz" && \
tar --extract --file docker.tgz --strip-components 1 --directory /usr/local/bin/ && \
rm docker.tgz
# Install dep
ENV DEP_VERSION=0.5.0
RUN wget https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 -O /usr/local/bin/dep && \
chmod +x /usr/local/bin/dep
# Install gometalinter
ENV GOMETALINTER_VERSION=2.0.12
RUN curl -sLo- https://github.com/alecthomas/gometalinter/releases/download/v${GOMETALINTER_VERSION}/gometalinter-${GOMETALINTER_VERSION}-linux-amd64.tar.gz | \
tar -xzC "$GOPATH/bin" --exclude COPYING --exclude README.md --strip-components 1 -f- && \
ln -s $GOPATH/bin/gometalinter $GOPATH/bin/gometalinter.v2
# Install packr
ENV PACKR_VERSION=1.21.9
RUN wget https://github.com/gobuffalo/packr/releases/download/v${PACKR_VERSION}/packr_${PACKR_VERSION}_linux_amd64.tar.gz && \
@@ -55,7 +61,14 @@ RUN wget https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-li
mv /tmp/linux-amd64/helm /usr/local/bin/helm && \
helm version --client
ENV KUSTOMIZE_VERSION=3.1.0
# Install kustomize
ENV KUSTOMIZE1_VERSION=1.0.11
RUN curl -L -o /usr/local/bin/kustomize1 https://github.com/kubernetes-sigs/kustomize/releases/download/v${KUSTOMIZE1_VERSION}/kustomize_${KUSTOMIZE1_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/kustomize1 && \
kustomize1 version
ENV KUSTOMIZE_VERSION=2.0.3
RUN curl -L -o /usr/local/bin/kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/kustomize && \
kustomize version
@@ -65,42 +78,40 @@ ENV AWS_IAM_AUTHENTICATOR_VERSION=0.4.0-alpha.1
RUN curl -L -o /usr/local/bin/aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/${AWS_IAM_AUTHENTICATOR_VERSION}/aws-iam-authenticator_${AWS_IAM_AUTHENTICATOR_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/aws-iam-authenticator
# Install golangci-lint
RUN wget https://install.goreleaser.com/github.com/golangci/golangci-lint.sh && \
chmod +x ./golangci-lint.sh && \
./golangci-lint.sh -b $GOPATH/bin && \
golangci-lint linters
COPY .golangci.yml ${GOPATH}/src/dummy/.golangci.yml
RUN cd ${GOPATH}/src/dummy && \
touch dummy.go \
golangci-lint run
####################################################################################################
# Argo CD Base - used as the base for both the release and dev argocd images
####################################################################################################
FROM $BASE_IMAGE as argocd-base
USER root
RUN echo 'deb http://deb.debian.org/debian stretch-backports main' >> /etc/apt/sources.list
FROM debian:9.5-slim as argocd-base
RUN groupadd -g 999 argocd && \
useradd -r -u 999 -g argocd argocd && \
mkdir -p /home/argocd && \
chown argocd:0 /home/argocd && \
chmod g=u /home/argocd && \
chmod g=u /etc/passwd && \
chown argocd:argocd /home/argocd && \
apt-get update && \
apt-get install -y git git-lfs && \
apt-get install -y git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY hack/ssh_known_hosts /etc/ssh/ssh_known_hosts
COPY hack/git-ask-pass.sh /usr/local/bin/git-ask-pass.sh
COPY --from=builder /usr/local/bin/ks /usr/local/bin/ks
COPY --from=builder /usr/local/bin/helm /usr/local/bin/helm
COPY --from=builder /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY --from=builder /usr/local/bin/kustomize1 /usr/local/bin/kustomize1
COPY --from=builder /usr/local/bin/kustomize /usr/local/bin/kustomize
COPY --from=builder /usr/local/bin/aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
# script to add current (possibly arbitrary) user to /etc/passwd at runtime
# (if it's not already there, to be openshift friendly)
COPY uid_entrypoint.sh /usr/local/bin/uid_entrypoint.sh
# support for mounting configuration from a configmap
RUN mkdir -p /app/config/ssh && \
touch /app/config/ssh/ssh_known_hosts && \
ln -s /app/config/ssh/ssh_known_hosts /etc/ssh/ssh_known_hosts
RUN mkdir -p /app/config/tls
# workaround ksonnet issue https://github.com/ksonnet/ksonnet/issues/298
ENV USER=argocd
@@ -108,26 +119,11 @@ ENV USER=argocd
USER argocd
WORKDIR /home/argocd
####################################################################################################
# Argo CD UI stage
####################################################################################################
FROM node:11.15.0 as argocd-ui
WORKDIR /src
ADD ["ui/package.json", "ui/yarn.lock", "./"]
RUN yarn install
ADD ["ui/", "."]
ARG ARGO_VERSION=latest
ENV ARGO_VERSION=$ARGO_VERSION
RUN NODE_ENV='production' yarn build
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM golang:1.12.6 as argocd-build
FROM golang:1.11.4 as argocd-build
COPY --from=builder /usr/local/bin/dep /usr/local/bin/dep
COPY --from=builder /usr/local/bin/packr /usr/local/bin/packr
@@ -154,5 +150,3 @@ RUN make cli server controller repo-server argocd-util && \
####################################################################################################
FROM argocd-base
COPY --from=argocd-build /go/src/github.com/argoproj/argo-cd/dist/argocd* /usr/local/bin/
COPY --from=argocd-ui ./src/dist/app /shared/app
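A sketch of building this multi-stage Dockerfile: `--build-arg` overrides the `ARG BASE_IMAGE` declared at the top of the file, and the tag matches the one the CI workflow preloads into k3s.

```shell
# Sketch only: build the release image with the default Debian base.
docker build --build-arg BASE_IMAGE=debian:9.5-slim -t argocd:latest .
```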

Gopkg.lock (generated)

@@ -48,6 +48,14 @@
pruneopts = ""
revision = "09c41003ee1d5015b75f331e52215512e7145b8d"
[[projects]]
branch = "master"
digest = "1:0cac9c70f3308d54ed601878aa66423eb95c374616fdd7d2ea4e2d18b045ae62"
name = "github.com/ant31/crd-validation"
packages = ["pkg"]
pruneopts = ""
revision = "38f6a293f140402953f884b015014e0cd519bbb3"
[[projects]]
branch = "master"
digest = "1:0caf9208419fa5db5a0ca7112affaa9550c54291dda8e2abac0c0e76181c959e"
@@ -62,16 +70,14 @@
[[projects]]
branch = "master"
digest = "1:4f6afcf4ebe041b3d4aa7926d09344b48d2f588e1f957526bbbe54f9cbb366a1"
digest = "1:e8ec0abbf32fdcc9f7eb14c0656c1d0fc2fc7ec8f60dff4b7ac080c50afd8e49"
name = "github.com/argoproj/pkg"
packages = [
"errors",
"exec",
"rand",
"time",
]
pruneopts = ""
revision = "38dba6e98495680ff1f8225642b63db10a96bb06"
revision = "88ab0e836a8e8c70bc297c5764669bd7da27afd1"
[[projects]]
digest = "1:d8a2bb36a048d1571bcc1aee208b61f39dc16c6c53823feffd37449dde162507"
@@ -89,14 +95,6 @@
pruneopts = ""
revision = "3a771d992973f24aa725d07868b467d1ddfceafb"
[[projects]]
digest = "1:a6ee710e45210bafe11f2f28963571be2ac8809f9a7b675a6d2c02302a1ce1a9"
name = "github.com/bouk/monkey"
packages = ["."]
pruneopts = ""
revision = "5df1f207ff77e025801505ae4d903133a0b4353f"
version = "v1.0.0"
[[projects]]
digest = "1:e04162bd6a6d4950541bae744c968108e14913b1cebccf29f7650b573f44adb3"
name = "github.com/casbin/casbin"
@@ -351,14 +349,6 @@
revision = "5ccd90ef52e1e632236f7326478d4faa74f99438"
version = "v0.2.3"
[[projects]]
branch = "master"
digest = "1:9a06e7365c6039daf4db9bbf79650e2933a2880982cbab8106cb74a36617f40d"
name = "github.com/gogits/go-gogs-client"
packages = ["."]
pruneopts = ""
revision = "5a05380e4bc2440e0ec12f54f6f45648dbdd5e55"
[[projects]]
digest = "1:6e73003ecd35f4487a5e88270d3ca0a81bc80dc88053ac7e4dcfec5fba30d918"
name = "github.com/gogo/protobuf"
@@ -427,6 +417,14 @@
revision = "aa810b61a9c79d51363740d207bb46cf8e620ed5"
version = "v1.2.0"
[[projects]]
branch = "master"
digest = "1:1e5b1e14524ed08301977b7b8e10c719ed853cbf3f24ecb66fae783a46f207a6"
name = "github.com/google/btree"
packages = ["."]
pruneopts = ""
revision = "4030bb1f1f0c35b30ca7009e9ebd06849dd45306"
[[projects]]
digest = "1:14d826ee25139b4674e9768ac287a135f4e7c14e1134a5b15e4e152edfd49f41"
name = "github.com/google/go-jsonnet"
@@ -474,6 +472,17 @@
revision = "66b9c49e59c6c48f0ffce28c2d8b8a5678502c6d"
version = "v1.4.0"
[[projects]]
branch = "master"
digest = "1:009a1928b8c096338b68b5822d838a72b4d8520715c1463614476359f3282ec8"
name = "github.com/gregjones/httpcache"
packages = [
".",
"diskcache",
]
pruneopts = ""
revision = "9cad4c3443a7200dd6400aef47183728de563a38"
[[projects]]
branch = "master"
digest = "1:9dca8c981b8aed7448d94e78bc68a76784867a38b3036d5aabc0b32d92ffd1f4"
@@ -671,6 +680,22 @@
revision = "c37440a7cf42ac63b919c752ca73a85067e05992"
version = "v0.2.0"
[[projects]]
branch = "master"
digest = "1:c24598ffeadd2762552269271b3b1510df2d83ee6696c1e543a0ff653af494bc"
name = "github.com/petar/GoLLRB"
packages = ["llrb"]
pruneopts = ""
revision = "53be0d36a84c2a886ca057d34b6aa4468df9ccb4"
[[projects]]
digest = "1:b46305723171710475f2dd37547edd57b67b9de9f2a6267cafdd98331fd6897f"
name = "github.com/peterbourgon/diskv"
packages = ["."]
pruneopts = ""
revision = "5f041e8faa004a95c88a202771f4cc3e991971e6"
version = "v2.0.1"
[[projects]]
digest = "1:7365acd48986e205ccb8652cc746f09c8b7876030d53710ea6ef7d0bd0dcd7ca"
name = "github.com/pkg/errors"
@@ -761,10 +786,7 @@
[[projects]]
digest = "1:01d968ff6535945510c944983eee024e81f1c949043e9bbfe5ab206ebc3588a4"
name = "github.com/sirupsen/logrus"
packages = [
".",
"hooks/test",
]
packages = ["."]
pruneopts = ""
revision = "a67f783a3814b8729bd2dac5780b5f78f8dbd64d"
version = "v1.1.0"
@@ -786,19 +808,19 @@
version = "v0.1.4"
[[projects]]
digest = "1:9ba49264cef4386aded205f9cb5b1f2d30f983d7dc37a21c780d9db3edfac9a7"
digest = "1:2208a80fc3259291e43b30f42f844d18f4218036dff510f42c653ec9890d460a"
name = "github.com/spf13/cobra"
packages = ["."]
pruneopts = ""
revision = "fe5e611709b0c57fa4a89136deaa8e1d4004d053"
revision = "7b2c5ac9fc04fc5efafb60700713d4fa609b777b"
version = "v0.0.1"
[[projects]]
digest = "1:8e243c568f36b09031ec18dff5f7d2769dcf5ca4d624ea511c8e3197dc3d352d"
digest = "1:261bc565833ef4f02121450d74eb88d5ae4bd74bfe5d0e862cddb8550ec35000"
name = "github.com/spf13/pflag"
packages = ["."]
pruneopts = ""
revision = "583c0c0531f06d5278b7d917446061adc344b5cd"
version = "v1.0.1"
revision = "e57e3eeb33f795204c1ca35f56c44f83227c6e66"
[[projects]]
digest = "1:b1861b9a1aa0801b0b62945ed7477c1ab61a4bd03b55dfbc27f6d4f378110c8c"
@@ -1011,39 +1033,6 @@
pruneopts = ""
revision = "5e776fee60db37e560cee3fb46db699d2f095386"
[[projects]]
branch = "master"
digest = "1:e9e4b928898842a138bc345d42aae33741baa6d64f3ca69b0931f9c7a4fd0437"
name = "gonum.org/v1/gonum"
packages = [
"blas",
"blas/blas64",
"blas/cblas128",
"blas/gonum",
"floats",
"graph",
"graph/internal/linear",
"graph/internal/ordered",
"graph/internal/set",
"graph/internal/uid",
"graph/iterator",
"graph/simple",
"graph/topo",
"graph/traverse",
"internal/asm/c128",
"internal/asm/c64",
"internal/asm/f32",
"internal/asm/f64",
"internal/cmplx64",
"internal/math32",
"lapack",
"lapack/gonum",
"lapack/lapack64",
"mat",
]
pruneopts = ""
revision = "90b7154515874cee6c33cf56b29e257403a09a69"
[[projects]]
digest = "1:934fb8966f303ede63aa405e2c8d7f0a427a05ea8df335dfdc1833dd4d40756f"
name = "google.golang.org/appengine"
@@ -1113,18 +1102,17 @@
version = "v1.15.0"
[[projects]]
digest = "1:adf5b0ae3467c3182757ecb86fbfe819939473bb870a42789dc1a3e7729397cd"
name = "gopkg.in/go-playground/webhooks.v5"
digest = "1:bf7444e1e6a36e633f4f1624a67b9e4734cfb879c27ac0a2082ac16aff8462ac"
name = "gopkg.in/go-playground/webhooks.v3"
packages = [
".",
"bitbucket",
"bitbucket-server",
"github",
"gitlab",
"gogs",
]
pruneopts = ""
revision = "175186584584a83966dc9a7b8ec6c3d3a4ce6110"
version = "v5.11.0"
revision = "5580947e3ec83427ef5f6f2392eddca8dde5d99a"
version = "v3.11.0"
[[projects]]
digest = "1:e5d1fb981765b6f7513f793a3fcaac7158408cca77f75f7311ac82cc88e9c445"
@@ -1237,16 +1225,16 @@
version = "v2.2.2"
[[projects]]
branch = "release-1.14"
digest = "1:d8a6f1ec98713e685346a2e4b46c6ec4a1792a5535f8b0dffe3b1c08c9d69b12"
branch = "release-1.12"
digest = "1:3e3e9df293bd6f9fd64effc9fa1f0edcd97e6c74145cd9ab05d35719004dc41f"
name = "k8s.io/api"
packages = [
"admission/v1beta1",
"admissionregistration/v1alpha1",
"admissionregistration/v1beta1",
"apps/v1",
"apps/v1beta1",
"apps/v1beta2",
"auditregistration/v1alpha1",
"authentication/v1",
"authentication/v1beta1",
"authorization/v1",
@@ -1258,21 +1246,16 @@
"batch/v1beta1",
"batch/v2alpha1",
"certificates/v1beta1",
"coordination/v1",
"coordination/v1beta1",
"core/v1",
"events/v1beta1",
"extensions/v1beta1",
"imagepolicy/v1alpha1",
"networking/v1",
"networking/v1beta1",
"node/v1alpha1",
"node/v1beta1",
"policy/v1beta1",
"rbac/v1",
"rbac/v1alpha1",
"rbac/v1beta1",
"scheduling/v1",
"scheduling/v1alpha1",
"scheduling/v1beta1",
"settings/v1alpha1",
@@ -1281,7 +1264,7 @@
"storage/v1beta1",
]
pruneopts = ""
revision = "40a48860b5abbba9aa891b02b32da429b08d96a0"
revision = "6db15a15d2d3874a6c3ddb2140ac9f3bc7058428"
[[projects]]
branch = "master"
@@ -1290,16 +1273,14 @@
packages = [
"pkg/apis/apiextensions",
"pkg/apis/apiextensions/v1beta1",
"pkg/client/clientset/clientset",
"pkg/client/clientset/clientset/scheme",
"pkg/client/clientset/clientset/typed/apiextensions/v1beta1",
"pkg/features",
]
pruneopts = ""
revision = "7f7d2b94eca3a7a1c49840e119a8bc03c3afb1e3"
[[projects]]
branch = "release-1.14"
digest = "1:a802c91b189a31200cfb66744441fe62dac961ec7c5c58c47716570de7da195c"
branch = "release-1.12"
digest = "1:5899da40e41bcc8c1df101b72954096bba9d85b763bc17efc846062ccc111c7b"
name = "k8s.io/apimachinery"
packages = [
"pkg/api/equality",
@@ -1351,11 +1332,22 @@
"third_party/forked/golang/reflect",
]
pruneopts = ""
revision = "6a84e37a896db9780c75367af8d2ed2bb944022e"
revision = "f71dbbc36e126f5a371b85f6cca96bc8c57db2b6"
[[projects]]
branch = "release-11.0"
digest = "1:794140b3ac07405646ea3d4a57e1f6155186e672aed8aa0c996779381cd92fe6"
branch = "release-1.12"
digest = "1:b2c55ff9df6d053e40094b943f949c257c3f7dcdbb035c11487c93c96df9eade"
name = "k8s.io/apiserver"
packages = [
"pkg/features",
"pkg/util/feature",
]
pruneopts = ""
revision = "5e1c1f41ee34b3bb153f928f8c91c2a6dd9482a9"
[[projects]]
branch = "release-9.0"
digest = "1:77bf3d9f18ec82e08ac6c4c7e2d9d1a2ef8d16b25d3ff72fcefcf9256d751573"
name = "k8s.io/client-go"
packages = [
"discovery",
@@ -1367,6 +1359,8 @@
"kubernetes",
"kubernetes/fake",
"kubernetes/scheme",
"kubernetes/typed/admissionregistration/v1alpha1",
"kubernetes/typed/admissionregistration/v1alpha1/fake",
"kubernetes/typed/admissionregistration/v1beta1",
"kubernetes/typed/admissionregistration/v1beta1/fake",
"kubernetes/typed/apps/v1",
@@ -1375,8 +1369,6 @@
"kubernetes/typed/apps/v1beta1/fake",
"kubernetes/typed/apps/v1beta2",
"kubernetes/typed/apps/v1beta2/fake",
"kubernetes/typed/auditregistration/v1alpha1",
"kubernetes/typed/auditregistration/v1alpha1/fake",
"kubernetes/typed/authentication/v1",
"kubernetes/typed/authentication/v1/fake",
"kubernetes/typed/authentication/v1beta1",
@@ -1399,8 +1391,6 @@
"kubernetes/typed/batch/v2alpha1/fake",
"kubernetes/typed/certificates/v1beta1",
"kubernetes/typed/certificates/v1beta1/fake",
"kubernetes/typed/coordination/v1",
"kubernetes/typed/coordination/v1/fake",
"kubernetes/typed/coordination/v1beta1",
"kubernetes/typed/coordination/v1beta1/fake",
"kubernetes/typed/core/v1",
@@ -1411,12 +1401,6 @@
"kubernetes/typed/extensions/v1beta1/fake",
"kubernetes/typed/networking/v1",
"kubernetes/typed/networking/v1/fake",
"kubernetes/typed/networking/v1beta1",
"kubernetes/typed/networking/v1beta1/fake",
"kubernetes/typed/node/v1alpha1",
"kubernetes/typed/node/v1alpha1/fake",
"kubernetes/typed/node/v1beta1",
"kubernetes/typed/node/v1beta1/fake",
"kubernetes/typed/policy/v1beta1",
"kubernetes/typed/policy/v1beta1/fake",
"kubernetes/typed/rbac/v1",
@@ -1425,8 +1409,6 @@
"kubernetes/typed/rbac/v1alpha1/fake",
"kubernetes/typed/rbac/v1beta1",
"kubernetes/typed/rbac/v1beta1/fake",
"kubernetes/typed/scheduling/v1",
"kubernetes/typed/scheduling/v1/fake",
"kubernetes/typed/scheduling/v1alpha1",
"kubernetes/typed/scheduling/v1alpha1/fake",
"kubernetes/typed/scheduling/v1beta1",
@@ -1463,22 +1445,23 @@
"tools/remotecommand",
"transport",
"transport/spdy",
"util/buffer",
"util/cert",
"util/connrotation",
"util/exec",
"util/flowcontrol",
"util/homedir",
"util/integer",
"util/jsonpath",
"util/keyutil",
"util/retry",
"util/workqueue",
]
pruneopts = ""
revision = "11646d1007e006f6f24995cb905c68bc62901c81"
revision = "13596e875accbd333e0b5bd5fd9462185acd9958"
[[projects]]
branch = "release-1.14"
digest = "1:742ce70d2c6de0f02b5331a25d4d549f55de6b214af22044455fd6e6b451cad9"
branch = "release-1.12"
digest = "1:8108815d1aef9159daabdb3f0fcef04a88765536daf0c0cd29a31fdba135ee54"
name = "k8s.io/code-generator"
packages = [
"cmd/go-to-protobuf",
@@ -1487,7 +1470,7 @@
"third_party/forked/golang/reflect",
]
pruneopts = ""
revision = "50b561225d70b3eb79a1faafd3dfe7b1a62cbe73"
revision = "b1289fc74931d4b6b04bd1a259acfc88a2cb0a66"
[[projects]]
branch = "master"
@@ -1505,27 +1488,15 @@
revision = "e17681d19d3ac4837a019ece36c2a0ec31ffe985"
[[projects]]
digest = "1:9eaf86f4f6fb4a8f177220d488ef1e3255d06a691cca95f14ef085d4cd1cef3c"
digest = "1:4f5eb833037cc0ba0bf8fe9cae6be9df62c19dd1c869415275c708daa8ccfda5"
name = "k8s.io/klog"
packages = ["."]
pruneopts = ""
revision = "d98d8acdac006fb39831f1b25640813fef9c314f"
version = "v0.3.3"
revision = "a5bc97fbc634d635061f3146511332c7e313a55a"
version = "v0.1.0"
[[projects]]
branch = "master"
digest = "1:0d737d598e9db0a38d6ef6cba514c358b9fe7e1bc6b1128d02b2622700c75f2a"
name = "k8s.io/kube-aggregator"
packages = [
"pkg/apis/apiregistration",
"pkg/apis/apiregistration/v1",
"pkg/apis/apiregistration/v1beta1",
]
pruneopts = ""
revision = "e80910364765199a4baebd4dec54c885fe52b680"
[[projects]]
digest = "1:42ea993b351fdd39b9aad3c9ebe71f2fdb5d1f8d12eed24e71c3dff1a31b2a43"
digest = "1:aae856b28533bfdb39112514290f7722d00a55a99d2d0b897c6ee82e6feb87b3"
name = "k8s.io/kube-openapi"
packages = [
"cmd/openapi-gen",
@@ -1537,11 +1508,10 @@
"pkg/util/sets",
]
pruneopts = ""
revision = "411b2483e5034420675ebcdd4a55fc76fe5e55cf"
revision = "master"
[[projects]]
branch = "release-1.14"
digest = "1:78aa6079e011ece0d28513c7fe1bd64284fa9eb5d671760803a839ffdf0e9e38"
digest = "1:6061aa42761235df375f20fa4a1aa6d1845cba3687575f3adb2ef3f3bc540af5"
name = "k8s.io/kubernetes"
packages = [
"pkg/api/v1/pod",
@@ -1549,26 +1519,19 @@
"pkg/apis/autoscaling",
"pkg/apis/batch",
"pkg/apis/core",
"pkg/apis/extensions",
"pkg/apis/networking",
"pkg/apis/policy",
"pkg/features",
"pkg/kubectl/scheme",
"pkg/kubectl/util/term",
"pkg/kubelet/apis",
"pkg/util/interrupt",
"pkg/util/node",
]
pruneopts = ""
revision = "2d20b5759406ded89f8b25cf085ff4733b144ba5"
[[projects]]
branch = "master"
digest = "1:4c5d39f7ca1c940d7e74dbc62d2221e2c59b3d35c54f1fa9c77f3fd3113bbcb1"
name = "k8s.io/utils"
packages = [
"buffer",
"integer",
"pointer",
"trace",
]
pruneopts = ""
revision = "c55fbcfc754a5b2ec2fbae8fb9dcac36bdba6a12"
revision = "17c77c7898218073f14c8d573582e8d2313dc740"
version = "v1.12.2"
[[projects]]
branch = "master"
@@ -1578,26 +1541,17 @@
pruneopts = ""
revision = "97fed8db84274c421dbfffbb28ec859901556b97"
[[projects]]
digest = "1:321081b4a44256715f2b68411d8eda9a17f17ebfe6f0cc61d2cc52d11c08acfa"
name = "sigs.k8s.io/yaml"
packages = ["."]
pruneopts = ""
revision = "fd68e9863619f6ec2fdd8625fe1f02e7c877e480"
version = "v1.1.0"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
input-imports = [
"github.com/Masterminds/semver",
"github.com/TomOnTime/utfutil",
"github.com/ant31/crd-validation/pkg",
"github.com/argoproj/argo/pkg/apis/workflow/v1alpha1",
"github.com/argoproj/argo/util",
"github.com/argoproj/pkg/errors",
"github.com/argoproj/pkg/exec",
"github.com/argoproj/pkg/time",
"github.com/bouk/monkey",
"github.com/casbin/casbin",
"github.com/casbin/casbin/model",
"github.com/casbin/casbin/persist",
@@ -1613,7 +1567,6 @@
"github.com/go-redis/redis",
"github.com/gobuffalo/packr",
"github.com/gobwas/glob",
"github.com/gogits/go-gogs-client",
"github.com/gogo/protobuf/gogoproto",
"github.com/gogo/protobuf/proto",
"github.com/gogo/protobuf/protoc-gen-gofast",
@@ -1642,7 +1595,6 @@
"github.com/prometheus/client_golang/prometheus",
"github.com/prometheus/client_golang/prometheus/promhttp",
"github.com/sirupsen/logrus",
"github.com/sirupsen/logrus/hooks/test",
"github.com/skratchdot/open-golang/open",
"github.com/soheilhy/cmux",
"github.com/spf13/cobra",
@@ -1655,7 +1607,6 @@
"github.com/yuin/gopher-lua",
"golang.org/x/crypto/bcrypt",
"golang.org/x/crypto/ssh",
"golang.org/x/crypto/ssh/knownhosts",
"golang.org/x/crypto/ssh/terminal",
"golang.org/x/net/context",
"golang.org/x/oauth2",
@@ -1669,20 +1620,17 @@
"google.golang.org/grpc/metadata",
"google.golang.org/grpc/reflection",
"google.golang.org/grpc/status",
"gopkg.in/go-playground/webhooks.v5/bitbucket",
"gopkg.in/go-playground/webhooks.v5/bitbucket-server",
"gopkg.in/go-playground/webhooks.v5/github",
"gopkg.in/go-playground/webhooks.v5/gitlab",
"gopkg.in/go-playground/webhooks.v5/gogs",
"gopkg.in/go-playground/webhooks.v3",
"gopkg.in/go-playground/webhooks.v3/bitbucket",
"gopkg.in/go-playground/webhooks.v3/github",
"gopkg.in/go-playground/webhooks.v3/gitlab",
"gopkg.in/src-d/go-git.v4",
"gopkg.in/src-d/go-git.v4/config",
"gopkg.in/src-d/go-git.v4/plumbing",
"gopkg.in/src-d/go-git.v4/plumbing/transport",
"gopkg.in/src-d/go-git.v4/plumbing/transport/client",
"gopkg.in/src-d/go-git.v4/plumbing/transport/http",
"gopkg.in/src-d/go-git.v4/plumbing/transport/ssh",
"gopkg.in/src-d/go-git.v4/storage/memory",
"gopkg.in/src-d/go-git.v4/utils/ioutil",
"gopkg.in/yaml.v2",
"k8s.io/api/apps/v1",
"k8s.io/api/batch/v1",
@@ -1690,7 +1638,6 @@
"k8s.io/api/extensions/v1beta1",
"k8s.io/api/rbac/v1",
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1",
"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset",
"k8s.io/apimachinery/pkg/api/equality",
"k8s.io/apimachinery/pkg/api/errors",
"k8s.io/apimachinery/pkg/apis/meta/v1",
@@ -1724,9 +1671,6 @@
"k8s.io/client-go/util/flowcontrol",
"k8s.io/client-go/util/workqueue",
"k8s.io/code-generator/cmd/go-to-protobuf",
"k8s.io/klog",
"k8s.io/kube-aggregator/pkg/apis/apiregistration/v1",
"k8s.io/kube-aggregator/pkg/apis/apiregistration/v1beta1",
"k8s.io/kube-openapi/cmd/openapi-gen",
"k8s.io/kube-openapi/pkg/common",
"k8s.io/kubernetes/pkg/api/v1/pod",

Gopkg.toml

@@ -35,24 +35,16 @@ required = [
name = "github.com/prometheus/client_golang"
revision = "7858729281ec582767b20e0d696b6041d995d5e0"
[[override]]
branch = "release-1.14"
[[constraint]]
branch = "release-1.12"
name = "k8s.io/api"
[[override]]
branch = "release-1.14"
name = "k8s.io/kubernetes"
[[override]]
branch = "release-1.14"
[[constraint]]
branch = "release-1.12"
name = "k8s.io/code-generator"
[[override]]
branch = "release-1.14"
name = "k8s.io/apimachinery"
[[override]]
branch = "release-11.0"
[[constraint]]
branch = "release-9.0"
name = "k8s.io/client-go"
[[constraint]]
@@ -71,12 +63,6 @@ required = [
branch = "master"
name = "github.com/yudai/gojsondiff"
[[constraint]]
name = "github.com/spf13/cobra"
revision = "fe5e611709b0c57fa4a89136deaa8e1d4004d053"
# TODO: move off of k8s.io/kube-openapi and use controller-tools for CRD spec generation
# (override argoproj/argo constraint on master)
[[override]]
revision = "411b2483e5034420675ebcdd4a55fc76fe5e55cf"
revision = "master"
name = "k8s.io/kube-openapi"
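After editing constraints like the ones above, the lock file is regenerated rather than hand-edited; the dep commands used throughout this diff would refresh it.

```shell
# Sync vendor/ and Gopkg.lock with the constraints in Gopkg.toml.
dep ensure -v
# Report how each constraint resolved (useful before committing the lock file).
dep status
```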

Makefile

@@ -1,4 +1,4 @@
PACKAGE=github.com/argoproj/argo-cd/common
PACKAGE=github.com/argoproj/argo-cd
CURRENT_DIR=$(shell pwd)
DIST_DIR=${CURRENT_DIR}/dist
CLI_NAME=argocd
@@ -9,27 +9,17 @@ GIT_COMMIT=$(shell git rev-parse HEAD)
GIT_TAG=$(shell if [ -z "`git status --porcelain`" ]; then git describe --exact-match --tags HEAD 2>/dev/null; fi)
GIT_TREE_STATE=$(shell if [ -z "`git status --porcelain`" ]; then echo "clean" ; else echo "dirty"; fi)
PACKR_CMD=$(shell if [ "`which packr`" ]; then echo "packr"; else echo "go run vendor/github.com/gobuffalo/packr/packr/main.go"; fi)
define run-in-dev-tool
docker run --rm -it -u $(shell id -u) -e HOME=/home/user -v ${CURRENT_DIR}:/go/src/github.com/argoproj/argo-cd -w /go/src/github.com/argoproj/argo-cd argocd-dev-tools bash -c "GOPATH=/go $(1)"
endef
TEST_CMD=$(shell [ "`which gotestsum`" != "" ] && echo gotestsum -- || echo go test)
PATH:=$(PATH):$(PWD)/hack
# docker image publishing options
DOCKER_PUSH?=false
IMAGE_TAG?=
DOCKER_PUSH=false
IMAGE_TAG=latest
# perform static compilation
STATIC_BUILD?=true
STATIC_BUILD=true
# build development images
DEV_IMAGE?=false
# lint is memory and CPU intensive, so we can limit on CI to mitigate OOM
LINT_GOGC?=off
LINT_CONCURRENCY?=8
# Set timeout for linter
LINT_DEADLINE?=4m0s
CODEGEN=true
LINT=true
DEV_IMAGE=false
override LDFLAGS += \
-X ${PACKAGE}.version=${VERSION} \
@@ -71,12 +61,8 @@ openapigen:
clientgen:
./hack/update-codegen.sh
.PHONY: codegen-local
codegen-local: protogen clientgen openapigen manifests-local
.PHONY: codegen
codegen: dev-tools-image
@if [ "$(CODEGEN)" = "true" ]; then $(call run-in-dev-tool,make codegen-local) ; fi
codegen: protogen clientgen openapigen
.PHONY: cli
cli: clean-debug
@@ -92,30 +78,21 @@ release-cli: clean-debug image
.PHONY: argocd-util
argocd-util: clean-debug
# Build argocd-util as a statically linked binary, so it could run within the alpine-based dex container (argoproj/argo-cd#844)
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-util ./cmd/argocd-util
.PHONY: dev-tools-image
dev-tools-image:
docker build -t argocd-dev-tools ./hack -f ./hack/Dockerfile.dev-tools
.PHONY: manifests-local
manifests-local:
./hack/update-manifests.sh
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-util ./cmd/argocd-util
.PHONY: manifests
manifests: dev-tools-image
$(call run-in-dev-tool,make manifests-local IMAGE_TAG='${IMAGE_TAG}')
manifests:
./hack/update-manifests.sh
# NOTE: we use packr to do the build instead of go, since we embed swagger files and policy.csv
# files into the go binary
.PHONY: server
server: clean-debug
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd/argocd-server
.PHONY: repo-server
repo-server:
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-repo-server ./cmd/argocd-repo-server
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-repo-server ./cmd/argocd-repo-server
.PHONY: controller
controller:
@@ -149,50 +126,35 @@ endif
.PHONY: builder-image
builder-image:
docker build -t $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) --target builder .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) ; fi
docker push $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG)
.PHONY: dep-ensure
dep-ensure:
dep ensure -no-vendor
.PHONY: lint-local
lint-local: build
# golangci-lint does not do a good job of formatting imports
goimports -local github.com/argoproj/argo-cd -w `find . ! -path './vendor/*' ! -path './pkg/client/*' ! -path '*.pb.go' ! -path '*.gw.go' -type f -name '*.go'`
GOGC=$(LINT_GOGC) golangci-lint run --fix --verbose --concurrency $(LINT_CONCURRENCY) --deadline $(LINT_DEADLINE)
.PHONY: lint
lint: dev-tools-image
@if [ "$(LINT)" = "true" ]; then $(call run-in-dev-tool,make lint-local LINT_CONCURRENCY=$(LINT_CONCURRENCY) LINT_DEADLINE=$(LINT_DEADLINE) LINT_GOGC=$(LINT_GOGC)); fi
lint:
golangci-lint run --fix
.PHONY: build
build:
go build -v `go list ./... | grep -v 'resource_customizations\|test/e2e'`
go build `go list ./... | grep -v resource_customizations`
.PHONY: test
test:
go test -v -covermode=count -coverprofile=coverage.out `go list ./... | grep -v "test/e2e"`
.PHONY: cover
cover:
go tool cover -html=coverage.out
$(TEST_CMD) -covermode=count -coverprofile=coverage.out `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
.PHONY: test-e2e
test-e2e: cli
go test -v -timeout 10m ./test/e2e
$(TEST_CMD) -v -failfast -timeout 20m ./test/e2e
.PHONY: start-e2e
start-e2e: cli
killall goreman || true
# check we can connect to Docker to start Redis
docker version
kubectl create ns argocd-e2e || true
kubectl config set-context --current --namespace=argocd-e2e
kubens argocd-e2e
kustomize build test/manifests/base | kubectl apply -f -
# set paths for locally managed ssh known hosts and tls certs data
ARGOCD_SSH_DATA_PATH=/tmp/argo-e2e/app/config/ssh \
ARGOCD_TLS_DATA_PATH=/tmp/argo-e2e/app/config/tls \
goreman start
make start
# Cleans VSCode debug.test files from sub-dirs to prevent them from being included in packr boxes
.PHONY: clean-debug
@@ -206,8 +168,6 @@ clean: clean-debug
.PHONY: start
start:
killall goreman || true
# check we can connect to Docker to start Redis
docker version
kubens argocd
goreman start
@@ -221,4 +181,4 @@ release-precheck: manifests
@if [ "$(GIT_TAG)" != "v`cat VERSION`" ]; then echo 'VERSION does not match git tag'; exit 1; fi
.PHONY: release
release: pre-commit release-precheck image release-cli
release: release-precheck pre-commit image release-cli
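For context on the `override LDFLAGS += -X ${PACKAGE}.version=${VERSION}` lines above: the Go linker's -X flag overwrites a package-level string variable at link time, which is how these binaries embed version and git metadata. A minimal sketch under that assumption (the variable names here are hypothetical, not taken from this repo):

package main

import "fmt"

// Overridden at build time, e.g.:
//   go build -ldflags "-X main.version=v1.0.2 -X main.gitCommit=$(git rev-parse HEAD)"
var (
	version   = "unknown"
	gitCommit = "unknown"
)

func main() {
	fmt.Printf("version=%s commit=%s\n", version, gitCommit)
}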

OWNERS

@@ -3,6 +3,6 @@ owners:
- jessesuen
approvers:
- alexc
- alexmt
- jessesuen
- merenbach


@@ -1,7 +1,5 @@
controller: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-application-controller/main.go --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
api-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --staticassets ui/dist/app"
dex: sh -c "go run ./cmd/argocd-util/main.go gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/dexidp/dex:v2.14.0 serve /dex.yaml"
redis: docker run --rm --name argocd-redis -i -p ${ARGOCD_E2E_REDIS_PORT:-6379}:${ARGOCD_E2E_REDIS_PORT:-6379} redis:5.0.3-alpine --save "" --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}
repo-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-repo-server/main.go --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379}"
ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
git-server: test/fixture/testrepos/start-git.sh
controller: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-application-controller/main.go --loglevel debug --redis localhost:6379 --repo-server localhost:8081"
api-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:6379 --disable-auth --insecure --dex-server http://localhost:5556 --repo-server localhost:8081 --staticassets ../argo-cd-ui/dist/app"
repo-server: sh -c "FORCE_LOG_COLORS=1 go run ./cmd/argocd-repo-server/main.go --loglevel debug --redis localhost:6379"
dex: sh -c "go run ./cmd/argocd-util/main.go gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p 5556:5556 -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/dexidp/dex:v2.14.0 serve /dex.yaml"
redis: docker run --rm --name argocd-redis -i -p 6379:6379 redis:5.0.3-alpine --save "" --appendonly no
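The `${ARGOCD_E2E_REDIS_PORT:-6379}` forms above are plain shell parameter expansion: use the variable when set, otherwise fall back to the default. The equivalent lookup in Go, as an illustrative helper (envOrDefault is hypothetical, not a function from this repo):

package main

import (
	"fmt"
	"os"
)

// envOrDefault returns the value of key, or def when the variable is unset or empty.
func envOrDefault(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

func main() {
	redisAddr := "localhost:" + envOrDefault("ARGOCD_E2E_REDIS_PORT", "6379")
	fmt.Println("connecting to redis at", redisAddr)
}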


@@ -19,38 +19,14 @@ Application deployment and lifecycle management should be automated, auditable,
Organizations below are **officially** using Argo CD. Please send a PR with your organization name if you are using Argo CD.
1. [ANSTO - Australian Synchrotron](https://www.synchrotron.org.au/)
1. [Codility](https://www.codility.com/)
1. [Commonbond](https://commonbond.co/)
1. [CyberAgent](https://www.cyberagent.co.jp/en/)
1. [END.](https://www.endclothing.com/)
1. [GMETRI](https://gmetri.com/)
1. [Intuit](https://www.intuit.com/)
1. [KintoHub](https://www.kintohub.com/)
1. [KompiTech GmbH](https://www.kompitech.com/)
1. [Mambu](https://www.mambu.com/)
1. [Mirantis](https://mirantis.com/)
1. [OpenSaaS Studio](https://opensaas.studio)
1. [Optoro](https://www.optoro.com/)
1. [Riskified](https://www.riskified.com/)
1. [Saildrone](https://www.saildrone.com/)
1. [Tesla](https://tesla.com/)
1. [tZERO](https://www.tzero.com/)
1. [Ticketmaster](https://ticketmaster.com)
1. [Yieldlab](https://www.yieldlab.de/)
1. [Volvo Cars](https://www.volvocars.com/)
2. [KompiTech GmbH](https://www.kompitech.com/)
3. [Yieldlab](https://www.yieldlab.de/)
4. [Ticketmaster](https://ticketmaster.com)
5. [CyberAgent](https://www.cyberagent.co.jp/en/)
6. [OpenSaaS Studio](https://opensaas.studio)
7. [Riskified](https://www.riskified.com/)
## Documentation
To learn more about Argo CD [go to the complete documentation](https://argoproj.github.io/argo-cd/).
## Community Blogs and Presentations
1. [Comparison of Argo CD, Spinnaker, Jenkins X, and Tekton](https://www.inovex.de/blog/spinnaker-vs-argo-cd-vs-tekton-vs-jenkins-x/)
1. [Simplify and Automate Deployments Using GitOps with IBM Multicloud Manager 3.1.2](https://medium.com/ibm-cloud/simplify-and-automate-deployments-using-gitops-with-ibm-multicloud-manager-3-1-2-4395af317359)
1. [GitOps for Kubeflow using Argo CD](https://www.kubeflow.org/docs/use-cases/gitops-for-kubeflow/)
1. [GitOps Toolsets on Kubernetes with CircleCI and Argo CD](https://www.digitalocean.com/community/tutorials/webinar-series-gitops-tool-sets-on-kubernetes-with-circleci-and-argo-cd)
1. [Simplify and Automate Deployments Using GitOps with IBM Multicloud Manager](https://www.ibm.com/blogs/bluemix/2019/02/simplify-and-automate-deployments-using-gitops-with-ibm-multicloud-manager-3-1-2/)
1. [CI/CD in Light Speed with K8s and Argo CD](https://www.youtube.com/watch?v=OdzH82VpMwI&feature=youtu.be)
1. [Machine Learning as Code](https://www.youtube.com/watch?v=VXrGp5er1ZE&t=0s&index=135&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU). Among other things, describes how Kubeflow uses Argo CD to implement GitOps for ML
1. [Argo CD - GitOps Continuous Delivery for Kubernetes](https://www.youtube.com/watch?v=aWDIQMbp1cc&feature=youtu.be&t=1m4s)


@@ -1 +1 @@
1.2.5
1.0.2


@@ -1,22 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="131" height="20">
<linearGradient id="b" x2="0" y2="100%">
<stop offset="0" stop-color="#bbb" stop-opacity=".1"/>
<stop offset="1" stop-opacity=".1"/>
</linearGradient>
<clipPath id="a">
<rect width="131" height="20" rx="3" fill="#fff"/>
</clipPath>
<g clip-path="url(#a)">
<path id="leftPath" fill="#555" d="M0 0h74v20H0z"/>
<path id="rightPath" fill="#4c1" d="M74 0h57v20H74z"/>
<path fill="url(#b)" d="M0 0h131v20H0z"/>
</g>
<g fill="#fff" text-anchor="middle" font-family="DejaVu Sans,Verdana,Geneva,sans-serif" font-size="90">
<image x="5" y="3" width="14" height="14" xlink:href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAB8AAAAeCAYAAADU8sWcAAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAACXBIWXMAAABPAAAATwFjiv3XAAACC2lUWHRYTUw6Y29tLmFkb2JlLnhtcAAAAAAAPHg6eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1QIENvcmUgNS40LjAiPgogICA8cmRmOlJERiB4bWxuczpyZGY9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkvMDIvMjItcmRmLXN5bnRheC1ucyMiPgogICAgICA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgICAgICAgICB4bWxuczp0aWZmPSJodHRwOi8vbnMuYWRvYmUuY29tL3RpZmYvMS4wLyI+CiAgICAgICAgIDx0aWZmOlJlc29sdXRpb25Vbml0PjI8L3RpZmY6UmVzb2x1dGlvblVuaXQ+CiAgICAgICAgIDx0aWZmOkNvbXByZXNzaW9uPjE8L3RpZmY6Q29tcHJlc3Npb24+CiAgICAgICAgIDx0aWZmOk9yaWVudGF0aW9uPjE8L3RpZmY6T3JpZW50YXRpb24+CiAgICAgICAgIDx0aWZmOlBob3RvbWV0cmljSW50ZXJwcmV0YXRpb24+MjwvdGlmZjpQaG90b21ldHJpY0ludGVycHJldGF0aW9uPgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KD0UqkwAACpZJREFUSA11VnmQFcUZ//qY4527by92WWTXwHKqEJDDg0MQUioKSWopK4oaS9RAlMofUSPGbFlBSaViKomgQSNmIdECkUS8ophdozEJArKoKCsgIFlY2Pu9N29mero7X78FxCR2Vb8309PTv+/4fb9vCHz1II2Nm+jmzYul2bJsdlMyO7LyPGlZtYqQck1ZHCgwLZWiROeJgB7bDzpYb/7o0y/emzXv4Pts8+ZGBUC0uf/vQf57wdw3NTVRnOYFfXvj6pJcadkUmbDGRoxVSEItSYAQQrUmRBP81VoRpkFTJUMuZA/3g/28q3dn89b7u885D4348vgf8NPAxY033LJmSlDizlSOU94f6UhoHaZdC/3QCHWON2gDYBhAYyRAWgyD4QSi1815f29++vv/+CoD2Lm2nAFGl8mndzyxMMgk5/iW6xzPi/xVk+oyE8+vLNt14FR/0uYW14ox0IwUJ4ZAaUpAaaBEYByiyLbikWWNnnThvPIxu+L717auVeb81tbWsyk4C34u8I3LnlhcSCameMzykg7TwzOJGIYWQj+M+voLYcSo/hxY1E65FECUq4G4RFP0nSjMCMVAIIKUnOHkw90L6qq/vXvbh02trV8yoBh2E0NMY9GiG+58fIGXTF6epyw7JOnamZQbz/tCnDrVVzgqqdwTT0ZTdWDN0wPpMh3ZPZqHLSw98C7Y4RwtnAodsTwGAg0ppkcxrlkYJFID+Z0bn/zelsFImzQRXfScNJFiOG689dcT/FT6GzlqedVJTHRpPH6yz/PyfTlx2IrLCpvSdWrv5OXkX9fOJB/Mnaz3XzItap+yMDoyZgEE7F1SceIghryeRNwnTBkEqhRIxiLF6bCpDXNybW2v/ruxcTzbt2+zZmfCvWT+zxNhbek3cxaPlyYsXVWSSHT25rxC3o+OWK6aDn7sZ3r79ZNY28w45Esxuhw5RjmVvEQPlIwgR8bOiE7VdrPaA2+TeGEEhFbhtAEmG5pzk5WqqbWTPv7jC8sLpgxNSRWZrUeWTAgor7EY9ytTiURvzg8GskGU5xxKkc33yDcX1yc7x3ijLxF+zZgoEljgCrSiXOWIG/WGrjifHxl1V9iyeKwW1lHNo5SKmDmcYinivwhjVpkor55sQj9+/D4MO9bpLW8RN6zJzMvbPDUkHaecE368N1+IUUJfIm5hPf1w2kTedlm/LgntmqEWEQGT3ScJYUh2NIxiiWupSUFasor1VgzXvPATfv6BC3Roq0E9MLqAXKQcAe1rqi/dv+q3DwQUw6djY6tiAaG1jLEw7nAn64WhcStPqZqpA6chap8aage077GwbQcJDx8Awq3T3DHlbeijUHWwytHFuuizCTergdQ+YocxLRHVPNcESRAJxqo60nbSvExvvvWx23MCVvucDyRt5hitwDRHHG09pZiaT7MlSUdUhtQGZttYRdwA4WnFQjFnDA4LX7UdIlgMYpaouBgGMnuBKVN/ZhgBRCMVynNAHOfRm777q4UcrW3EZxcEnD/kclYlpVaFKFIZrIRtluvdls/G9InnLZ9fpLXoJjwZAzZkDJ5mjkIj8HTCKETH9oD0eoikMSDQ5cTrLrUg4QoaFlxUH5MdYy6yAFAaYBHed6ODkMPOkMcFZggZKa0SkaL/jMeDpq5D1Vez9693V/wNUg1jQRY8yLX8BcSba8AaOgyBOdqtQBz7DNj8lZCeNY9SN6aiD3bTOc89s2iliDVvKa3uGxf6doQAxYE2oBbm0HQPG5IxqKhOg9ZpSSQj+pDU+jv5t7913spV5c70GcqLJwgMGw6ZJUuBLbgHPW0D4rgQdewDvnAllN58B0D918AvzVBn3jWi/p4Hq2/s+et1URgBRlVRMGVvBtITfwyuyYPAoIRmWSIrEqjZW1Ml/qN7dtRWzptfBzXDoaeri6TicWhes8Zsg9iMOaAyU0D3fw4qdSEkrryquP77xx+HZCwG3T09HEZfCBUXTxh125H9VZ9SHp1OfZF4aEWI9MM1AhmiVCmaIwV6a2F5ge2q8s7OBB8ytPiOZduw6pFHYOS4cUUQGsNWXj4MVLYdaGU9UAQ0o2H8eHh49WpAITReKqdmGC0PsvF2JB4Gu+i5RqKg5JSi82mOnu9GYGEhwbFjSOXadJII7Pa6Ed3hwU8igAU8lU7D/ffdZ1pFEUT19YI+sR9oZgLIjo8h6usDu2oIzJpxOc4ZphKQtoqFxw6LzxIV/RMxkabSUPAJRz3A5oc50x9T7Lf3xkR0R0xGSS+IRIFQOUUUkqvGXdR5tGXnLmjbYXI0OLCJykIBvFe2ABVHgCQqgMpjkH9xE9IY7cSgGd0w+8mOVjiy68DOh+sauusiwSM8F11nlhCm39/SvP7ux4qNZVfba/0TJ183Is9opWtxEXe5E/cC+Y5bf+CilmczqaC/JigEOjjYTrwtzwB8tAn4kJFGMIEmy0B98gb4B7tAYimHncd19ObLrP25rW0/rb76pQqsoKTEkKJgoT64biCObVx35xvGGz67qYW3Nl0RWYH/ftxmo7pyfpSoKFFjEiLxhE73jXaueHXlGw+PZ68rS8pQ01QlodUjTqcAfcQo8qGjQB36EwQfbFAuF8wjZbmNtSt
efZ4nC0uibKIHez92AeZEUln5wocGuLGxyUbg2cVEFp5a9pFz11MH/ZCP6M0V8gmbxWZL35K2paFmUmjDgCXAMQkjqv8wgFMGBFVSCw8g1wWsrB5IBrTDAvB0eaAkhVHCeGzSQDUm3bE9/1hs/d7dBhy5aSqAaNNWN6Mvdn9+e0KGfl8hZCd6vBwqABmISKSkCgkqr/a6QMWrwLpsKaYWhfrwi1ikfWBNvQmUhXItAlznICMlcopFFNXcEA25wBw/EIms2L4O1onZs5u46abFUsILDWhAc/OKo7H+/OtuEFqaUcsmIE8CR47zEMsSuJvRQcceODVmOqR/uAHKftAC6R+9AKcmzgVxci9qexwLTIDZ34+MR6FVErXECXw7kc+3/G7j8gPG0dbWJmQnum1+zDj3U2rJ0rWzvFRynu/YskMTb5vetmwI7xyeg7R0g072ynEGwa0PwZTRDbDn0GGI1j0Ii2pCKPCyqJTl+eei9tOl5Kr1eU2seuG5ZMBv2fjUIMkQymAWC6jouQE3jdlYZa43PLnsLbcn++eYVwi6FCQktTCEmO3QAx2rhOllAl6bOwsahg2FDTMvhRklWVSUMkxcWEyHJFbUqyCW9AtRSV//K18AF4XmbOWeBTegCF78ujTXf3hm+XvTeo49G8/67Sg+A0a0OGc6CDzIlFbDL3+8GPYuuxLWP7AYysprIQx9YNjdzAglyVkD3qGvd3dsWvv0infM2qBj53zr49rZsJsNZwa2WeTTFxtP3L1oe1VCzh3AWqOM2FJJsFBwbMuCUAgQyAqGrVVJ7Zdwyz2RZS/X/GbrguJ5eNYgyhfnncH5v+DmYcft18Y1T5S7fl9jPMN+4aM6IpeQPmgxThNA5AnuJFhIeI2tHafiNqEUW1XQL+8NrfRznENP1drNuTOA5/5/KezmAZ4zaJBSdVaUvQk5PsNTeqfrMsKQOui5LGKibBAjmKgSRk/RIMlc8BiWCLbI95Dx05jybrMkfsh+xfgPf+hH0AC4OlsAAAAASUVORK5CYII="/>
<text id="leftText1" x="435" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="470"></text>
<text id="leftText2" x="435" y="140" transform="scale(.1)" textLength="470"></text>
<text id="rightText1" x="995" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="470"></text>
<text id="rightText1" x="995" y="140" transform="scale(.1)" textLength="470"></text></g>
</svg>



@@ -7,7 +7,6 @@
# p, <user/group>, <resource>, <action>, <object>
p, role:readonly, applications, get, */*, allow
p, role:readonly, certificates, get, *, allow
p, role:readonly, clusters, get, *, allow
p, role:readonly, repositories, get, *, allow
p, role:readonly, projects, get, *, allow
@@ -16,10 +15,6 @@ p, role:admin, applications, create, */*, allow
p, role:admin, applications, update, */*, allow
p, role:admin, applications, delete, */*, allow
p, role:admin, applications, sync, */*, allow
p, role:admin, applications, override, */*, allow
p, role:admin, certificates, create, *, allow
p, role:admin, certificates, update, *, allow
p, role:admin, certificates, delete, *, allow
p, role:admin, clusters, create, *, allow
p, role:admin, clusters, update, *, allow
p, role:admin, clusters, delete, *, allow
# Built-in policy which defines two roles: role:readonly and role:admin,
# p, <user/group>, <resource>, <action>, <object>
p, role:readonly, applications, get, */*, allow
p, role:readonly, certificates, get, *, allow
p, role:readonly, clusters, get, *, allow
p, role:readonly, repositories, get, *, allow
p, role:readonly, projects, get, *, allow
p, role:admin, applications, create, */*, allow
p, role:admin, applications, sync, */*, allow
p, role:admin, applications, override, */*, allow
p, role:admin, certificates, create, *, allow
p, role:admin, certificates, update, *, allow
p, role:admin, certificates, delete, *, allow
p, role:admin, clusters, create, *, allow
p, role:admin, clusters, update, *, allow
p, role:admin, clusters, delete, *, allow
p, role:admin, repositories, create, *, allow
p, role:admin, repositories, update, *, allow
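Each policy row reads `p, <subject>, <resource>, <action>, <object>, <effect>`, with `*` and `*/*` acting as glob patterns. Argo CD evaluates these rules with casbin; the matching idea can be approximated with glob matching in Go (a toy sketch, not the real enforcer):

package main

import (
	"fmt"
	"path"
)

type policy struct{ sub, res, act, obj, effect string }

// A tiny subset of the built-in policy above.
var policies = []policy{
	{"role:readonly", "applications", "get", "*/*", "allow"},
	{"role:admin", "applications", "sync", "*/*", "allow"},
}

// allowed reports whether some "allow" row matches the request; path.Match
// treats '*' as "any run of non-'/' characters", so "*/*" matches "ns/app".
func allowed(sub, res, act, obj string) bool {
	for _, p := range policies {
		if p.sub == sub && p.res == res && p.act == act && p.effect == "allow" {
			if ok, _ := path.Match(p.obj, obj); ok {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(allowed("role:admin", "applications", "sync", "default/guestbook"))    // true
	fmt.Println(allowed("role:readonly", "applications", "sync", "default/guestbook")) // false
}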


@@ -49,7 +49,7 @@
"ApplicationService"
],
"summary": "List returns list of applications",
"operationId": "ListMixin6",
"operationId": "ListMixin5",
"parameters": [
{
"type": "string",
@@ -89,7 +89,7 @@
"ApplicationService"
],
"summary": "Create creates an application",
"operationId": "CreateMixin6",
"operationId": "CreateMixin5",
"parameters": [
{
"name": "body",
@@ -116,7 +116,7 @@
"ApplicationService"
],
"summary": "Update updates an application",
"operationId": "UpdateMixin6",
"operationId": "UpdateMixin5",
"parameters": [
{
"type": "string",
@@ -197,7 +197,7 @@
"ApplicationService"
],
"summary": "Get returns an application by name",
"operationId": "GetMixin6",
"operationId": "GetMixin5",
"parameters": [
{
"type": "string",
@@ -238,7 +238,7 @@
"ApplicationService"
],
"summary": "Delete deletes an application",
"operationId": "DeleteMixin6",
"operationId": "DeleteMixin5",
"parameters": [
{
"type": "string",
@@ -639,37 +639,6 @@
}
}
},
"/api/v1/applications/{name}/revisions/{revision}/metadata": {
"get": {
"tags": [
"ApplicationService"
],
"summary": "Get the meta-data (author, date, tags, message) for a specific revision of the application",
"operationId": "RevisionMetadata",
"parameters": [
{
"type": "string",
"name": "name",
"in": "path",
"required": true
},
{
"type": "string",
"name": "revision",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/v1alpha1RevisionMetadata"
}
}
}
}
},
"/api/v1/applications/{name}/rollback": {
"post": {
"tags": [
@@ -769,83 +738,6 @@
}
}
},
"/api/v1/certificates": {
"get": {
"tags": [
"CertificateService"
],
"summary": "List all available repository certificates",
"operationId": "ListCertificates",
"parameters": [
{
"type": "string",
"description": "A file-glob pattern (not regular expression) the host name has to match.",
"name": "hostNamePattern",
"in": "query"
},
{
"type": "string",
"description": "The type of the certificate to match (ssh or https).",
"name": "certType",
"in": "query"
},
{
"type": "string",
"description": "The sub type of the certificate to match (protocol dependent, usually only used for ssh certs).",
"name": "certSubType",
"in": "query"
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/v1alpha1RepositoryCertificateList"
}
}
}
},
"post": {
"tags": [
"CertificateService"
],
"summary": "Creates repository certificates on the server",
"operationId": "CreateCertificate",
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1alpha1RepositoryCertificateList"
}
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/v1alpha1RepositoryCertificateList"
}
}
}
},
"delete": {
"tags": [
"CertificateService"
],
"summary": "Delete the certificates that match the RepositoryCertificateQuery",
"operationId": "DeleteCertificate",
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/v1alpha1RepositoryCertificateList"
}
}
}
}
},
"/api/v1/clusters": {
"get": {
"tags": [
@@ -976,38 +868,13 @@
}
}
},
"/api/v1/clusters/{server}/rotate-auth": {
"post": {
"tags": [
"ClusterService"
],
"summary": "RotateAuth returns a cluster by server address",
"operationId": "RotateAuth",
"parameters": [
{
"type": "string",
"name": "server",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/clusterClusterResponse"
}
}
}
}
},
"/api/v1/projects": {
"get": {
"tags": [
"ProjectService"
],
"summary": "List returns list of projects",
"operationId": "ListMixin4",
"operationId": "ListMixin3",
"parameters": [
{
"type": "string",
@@ -1029,7 +896,7 @@
"ProjectService"
],
"summary": "Create a new project.",
"operationId": "CreateMixin4",
"operationId": "CreateMixin3",
"parameters": [
{
"name": "body",
@@ -1056,7 +923,7 @@
"ProjectService"
],
"summary": "Get returns a project by name",
"operationId": "GetMixin4",
"operationId": "GetMixin3",
"parameters": [
{
"type": "string",
@@ -1079,7 +946,7 @@
"ProjectService"
],
"summary": "Delete deletes a project",
"operationId": "DeleteMixin4",
"operationId": "DeleteMixin3",
"parameters": [
{
"type": "string",
@@ -1129,7 +996,7 @@
"ProjectService"
],
"summary": "Update updates a project",
"operationId": "UpdateMixin4",
"operationId": "UpdateMixin3",
"parameters": [
{
"type": "string",
@@ -1419,46 +1286,13 @@
}
}
},
"/api/v1/repositories/{repo}/validate": {
"post": {
"tags": [
"RepositoryService"
],
"summary": "ValidateAccess validates access to a repository with given parameters",
"operationId": "ValidateAccess",
"parameters": [
{
"type": "string",
"name": "repo",
"in": "path",
"required": true
},
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/repositoryRepoResponse"
}
}
}
}
},
"/api/v1/session": {
"post": {
"tags": [
"SessionService"
],
"summary": "Create a new JWT for authentication and set a cookie if using HTTP.",
"operationId": "CreateMixin8",
"operationId": "CreateMixin7",
"parameters": [
{
"name": "body",
@@ -1483,7 +1317,7 @@
"SessionService"
],
"summary": "Delete an existing JWT cookie if using HTTP.",
"operationId": "DeleteMixin8",
"operationId": "DeleteMixin7",
"responses": {
"200": {
"description": "(empty)",
@@ -1595,9 +1429,6 @@
},
"patch": {
"type": "string"
},
"patchType": {
"type": "string"
}
}
},
@@ -1640,12 +1471,6 @@
"type": "boolean",
"format": "boolean"
},
"manifests": {
"type": "array",
"items": {
"type": "string"
}
},
"name": {
"type": "string"
},
@@ -1728,32 +1553,6 @@
}
}
},
"clusterGoogleAnalyticsConfig": {
"type": "object",
"properties": {
"anonymizeUsers": {
"type": "boolean",
"format": "boolean"
},
"trackingID": {
"type": "string"
}
}
},
"clusterHelp": {
"type": "object",
"title": "Help settings",
"properties": {
"chatText": {
"type": "string",
"title": "the text for getting chat help, defaults to \"Chat now!\""
},
"chatUrl": {
"type": "string",
"title": "the URL for getting chat help, this will typically be your Slack channel for support"
}
}
},
"clusterOIDCConfig": {
"type": "object",
"properties": {
@@ -1786,15 +1585,6 @@
"dexConfig": {
"$ref": "#/definitions/clusterDexConfig"
},
"googleAnalytics": {
"$ref": "#/definitions/clusterGoogleAnalyticsConfig"
},
"help": {
"$ref": "#/definitions/clusterHelp"
},
"kustomizeOptions": {
"$ref": "#/definitions/v1alpha1KustomizeOptions"
},
"oidcConfig": {
"$ref": "#/definitions/clusterOIDCConfig"
},
@@ -1804,10 +1594,6 @@
"$ref": "#/definitions/v1alpha1ResourceOverride"
}
},
"statusBadgeEnabled": {
"type": "boolean",
"format": "boolean"
},
"url": {
"type": "string"
}
@@ -1982,8 +1768,15 @@
"type": "object",
"title": "KustomizeAppSpec contains kustomize app name and path in source repo",
"properties": {
"imageTags": {
"description": "imageTags is a list of available image tags. This is only populated for Kustomize 1.",
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1KustomizeImageTag"
}
},
"images": {
"description": "images is a list of available images.",
"description": "images is a list of available images. This is only populated for Kustomize 2.",
"type": "array",
"items": {
"type": "string"
@@ -2184,19 +1977,6 @@
}
}
},
"v1Fields": {
"type": "object",
"title": "Fields stores a set of fields in a data structure like a Trie.\nTo understand how this is used, see: https://github.com/kubernetes-sigs/structured-merge-diff",
"properties": {
"map": {
"description": "Map stores a set of fields in a data structure like a Trie.\n\nEach key is either a '.' representing the field itself, and will always map to an empty set,\nor a string representing a sub-field or item. The string will follow one of these four formats:\n'f:<name>', where <name> is the name of a field in a struct, or key in a map\n'v:<value>', where <value> is the exact json formatted value of a list item\n'i:<index>', where <index> is position of a item in a list\n'k:<keys>', where <keys> is a map of a list item's key fields to their unique values\nIf a key maps to an empty Fields value, the field that key represents is part of the set.\n\nThe exact format is defined in k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/internal",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/v1Fields"
}
}
}
},
"v1GroupKind": {
"description": "+protobuf.options.(gogoproto.goproto_stringer)=false",
"type": "object",
@@ -2268,30 +2048,6 @@
}
}
},
"v1ManagedFieldsEntry": {
"description": "ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource\nthat the fieldset applies to.",
"type": "object",
"properties": {
"apiVersion": {
"description": "APIVersion defines the version of this resource that this field set\napplies to. The format is \"group/version\" just like the top-level\nAPIVersion field. It is necessary to track the version of a field\nset because it cannot be automatically converted.",
"type": "string"
},
"fields": {
"$ref": "#/definitions/v1Fields"
},
"manager": {
"description": "Manager is an identifier of the workflow managing these fields.",
"type": "string"
},
"operation": {
"description": "Operation is the type of operation which lead to this ManagedFieldsEntry being created.\nThe only valid values for this field are 'Apply' and 'Update'.",
"type": "string"
},
"time": {
"$ref": "#/definitions/v1Time"
}
}
},
"v1MicroTime": {
"description": "MicroTime is version of Time with microsecond level precision.\n\n+protobuf.options.marshal=false\n+protobuf.as=Timestamp\n+protobuf.options.(gogoproto.goproto_stringer)=false",
"type": "object",
@@ -2360,13 +2116,6 @@
"type": "string"
}
},
"managedFields": {
"description": "ManagedFields maps workflow-id and version to the set of fields\nthat are managed by that workflow. This is mostly for internal\nhousekeeping, and users typically shouldn't need to set or\nunderstand this field. A workflow can be the user's name, a\ncontroller's name, or the name of a specific apply path like\n\"ci-cd\". The set of fields is always in the version that the\nworkflow used when modifying the object.\n\nThis field is alpha and can be changed or removed without notice.\n\n+optional",
"type": "array",
"items": {
"$ref": "#/definitions/v1ManagedFieldsEntry"
}
},
"name": {
"type": "string",
"title": "Name must be unique within a namespace. Is required when creating resources, although\nsome resources may allow a client to request the generation of an appropriate name\nautomatically. Name is primarily intended for creation idempotence and configuration\ndefinition.\nCannot be updated.\nMore info: http://kubernetes.io/docs/user-guide/identifiers#names\n+optional"
@@ -2431,7 +2180,7 @@
}
},
"v1OwnerReference": {
"description": "OwnerReference contains enough information to let you identify an owning\nobject. An owning object must be in the same namespace as the dependent, or\nbe cluster-scoped, so there is no namespace field.",
"description": "OwnerReference contains enough information to let you identify an owning\nobject. Currently, an owning object must be in the same namespace, so there\nis no namespace field.",
"type": "object",
"properties": {
"apiVersion": {
@@ -2575,7 +2324,7 @@
},
"v1alpha1AppProject": {
"type": "object",
"title": "AppProject provides a logical grouping of applications, providing controls for:\n* where the apps may deploy to (cluster whitelist)\n* what may be deployed (repository whitelist, resource whitelist/blacklist)\n* who can access these applications (roles, OIDC group claims bindings)\n* and what they can do (RBAC policies)\n* automation access to these roles (JWT tokens)\n+genclient\n+genclient:noStatus\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n+kubebuilder:resource:path=appprojects,shortName=appproj;appprojs",
"title": "AppProject provides a logical grouping of applications, providing controls for:\n* where the apps may deploy to (cluster whitelist)\n* what may be deployed (repository whitelist, resource whitelist/blacklist)\n* who can access these applications (roles, OIDC group claims bindings)\n* and what they can do (RBAC policies)\n* automation access to these roles (JWT tokens)\n+genclient\n+genclient:noStatus\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object",
"properties": {
"metadata": {
"$ref": "#/definitions/v1ObjectMeta"
@@ -2647,7 +2396,7 @@
},
"v1alpha1Application": {
"type": "object",
"title": "Application is a definition of Application resource.\n+genclient\n+genclient:noStatus\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n+kubebuilder:resource:path=applications,shortName=app;apps",
"title": "Application is a definition of Application resource.\n+genclient\n+genclient:noStatus\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object",
"properties": {
"metadata": {
"$ref": "#/definitions/v1ObjectMeta"
@@ -2735,7 +2484,7 @@
},
"targetRevision": {
"type": "string",
"title": "TargetRevision defines the commit, tag, or branch in which to sync the application to.\nIf omitted, will sync to HEAD"
"title": "Environment is a ksonnet application environment name\nTargetRevision defines the commit, tag, or branch in which to sync the application to.\nIf omitted, will sync to HEAD"
}
}
},
@@ -2762,10 +2511,6 @@
"$ref": "#/definitions/v1alpha1HelmParameter"
}
},
"releaseName": {
"type": "string",
"title": "The Helm release name. If omitted it will use the application name"
},
"valueFiles": {
"type": "array",
"title": "ValuesFiles is a list of Helm value files to use when generating a template",
@@ -2816,16 +2561,16 @@
"type": "object",
"title": "ApplicationSourceKustomize holds kustomize specific options",
"properties": {
"commonLabels": {
"type": "object",
"title": "CommonLabels adds additional kustomize commonLabels",
"additionalProperties": {
"type": "string"
"imageTags": {
"type": "array",
"title": "ImageTags are kustomize 1.0 image tag overrides",
"items": {
"$ref": "#/definitions/v1alpha1KustomizeImageTag"
}
},
"images": {
"type": "array",
"title": "Images are kustomize image overrides",
"title": "Images are kustomize 2.0 image overrides",
"items": {
"type": "string"
}
@@ -2840,12 +2585,6 @@
"type": "object",
"title": "ApplicationSourcePlugin holds config management plugin specific options",
"properties": {
"env": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1EnvEntry"
}
},
"name": {
"type": "string"
}
@@ -2865,13 +2604,6 @@
"$ref": "#/definitions/v1alpha1ResourceIgnoreDifferences"
}
},
"info": {
"type": "array",
"title": "Infos contains a list of useful information (URLs, email addresses, and plain text) that relates to the application",
"items": {
"$ref": "#/definitions/v1alpha1Info"
}
},
"project": {
"description": "Project is a application project name. Empty name means that application belongs to 'default' project.",
"type": "string"
@@ -3057,19 +2789,6 @@
}
}
},
"v1alpha1EnvEntry": {
"type": "object",
"properties": {
"name": {
"type": "string",
"title": "the name, usually uppercase"
},
"value": {
"type": "string",
"title": "the value"
}
}
},
"v1alpha1HealthStatus": {
"type": "object",
"properties": {
@@ -3085,11 +2804,6 @@
"type": "object",
"title": "HelmParameter is a parameter to a helm template",
"properties": {
"forceString": {
"type": "boolean",
"format": "boolean",
"title": "ForceString determines whether to tell Helm to interpret booleans and numbers as strings"
},
"name": {
"type": "string",
"title": "Name is the name of the helm parameter"
@@ -3100,17 +2814,6 @@
}
}
},
"v1alpha1Info": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"type": "string"
}
}
},
"v1alpha1InfoItem": {
"type": "object",
"title": "InfoItem contains human readable information about object",
@@ -3170,13 +2873,17 @@
}
}
},
"v1alpha1KustomizeOptions": {
"v1alpha1KustomizeImageTag": {
"type": "object",
"title": "KustomizeOptions are options for kustomize to use when building manifests",
"title": "KustomizeImageTag is a kustomize image tag",
"properties": {
"buildOptions": {
"name": {
"type": "string",
"title": "BuildOptions is a string of build parameters to use when calling `kustomize build`"
"title": "Name is the name of the image (e.g. nginx)"
},
"value": {
"type": "string",
"title": "Value is the value for the new tag (e.g. 1.8.0)"
}
}
},
@@ -3257,87 +2964,21 @@
"connectionState": {
"$ref": "#/definitions/v1alpha1ConnectionState"
},
"enableLfs": {
"type": "boolean",
"format": "boolean",
"title": "Whether git-lfs support should be enabled for this repo"
},
"insecure": {
"type": "boolean",
"format": "boolean",
"title": "Whether the repo is insecure"
},
"insecureIgnoreHostKey": {
"type": "boolean",
"format": "boolean",
"title": "InsecureIgnoreHostKey should not be used anymore, Insecure is favoured"
"format": "boolean"
},
"password": {
"type": "string",
"title": "Password for authenticating at the repo server"
"type": "string"
},
"repo": {
"type": "string",
"title": "URL of the repo"
"type": "string"
},
"sshPrivateKey": {
"type": "string",
"title": "SSH private key data for authenticating at the repo server"
},
"tlsClientCertData": {
"type": "string",
"title": "TLS client cert data for authenticating at the repo server"
},
"tlsClientCertKey": {
"type": "string",
"title": "TLS client cert key for authenticating at the repo server"
"type": "string"
},
"username": {
"type": "string",
"title": "Username for authenticating at the repo server"
}
}
},
"v1alpha1RepositoryCertificate": {
"type": "object",
"title": "A RepositoryCertificate is either SSH known hosts entry or TLS certificate",
"properties": {
"certData": {
"type": "string",
"format": "byte",
"title": "Actual certificate data, protocol dependent"
},
"certInfo": {
"type": "string",
"title": "Additional certificate info (e.g. SSH fingerprint, X509 CommonName)"
},
"certSubType": {
"type": "string",
"title": "The sub type of the cert, i.e. \"ssh-rsa\""
},
"certType": {
"type": "string",
"title": "Type of certificate - currently \"https\" or \"ssh\""
},
"serverName": {
"type": "string",
"title": "Name of the server the certificate is intended for"
}
}
},
"v1alpha1RepositoryCertificateList": {
"type": "object",
"title": "RepositoryCertificateList is a collection of RepositoryCertificates",
"properties": {
"items": {
"type": "array",
"title": "List of certificates to be processed",
"items": {
"$ref": "#/definitions/v1alpha1RepositoryCertificate"
}
},
"metadata": {
"$ref": "#/definitions/v1ListMeta"
"type": "string"
}
}
},
@@ -3397,10 +3038,6 @@
"group": {
"type": "string"
},
"hook": {
"type": "boolean",
"format": "boolean"
},
"kind": {
"type": "string"
},
@@ -3546,9 +3183,6 @@
"namespace": {
"type": "string"
},
"uid": {
"type": "string"
},
"version": {
"type": "string"
}
@@ -3562,19 +3196,16 @@
"type": "string"
},
"hookPhase": {
"type": "string",
"title": "the state of any operation associated with this resource OR hook\nnote: can contain values for non-hook resources"
"type": "string"
},
"hookType": {
"type": "string",
"title": "the type of the hook, empty for non-hook resources"
"type": "string"
},
"kind": {
"type": "string"
},
"message": {
"type": "string",
"title": "message for the last sync OR operation"
"type": "string"
},
"name": {
"type": "string"
@@ -3583,12 +3214,7 @@
"type": "string"
},
"status": {
"type": "string",
"title": "the final result of the sync, this is be empty if the resources is yet to be applied/pruned and is always zero-value for hooks"
},
"syncPhase": {
"type": "string",
"title": "indicates the particular phase of the sync that this is for"
"type": "string"
},
"version": {
"type": "string"
@@ -3618,10 +3244,6 @@
"namespace": {
"type": "string"
},
"requiresPruning": {
"type": "boolean",
"format": "boolean"
},
"status": {
"type": "string"
},
@@ -3649,30 +3271,6 @@
}
}
},
"v1alpha1RevisionMetadata": {
"type": "object",
"title": "data about a specific revision within a repo",
"properties": {
"author": {
"type": "string",
"title": "who authored this revision,\ntypically their name and email, e.g. \"John Doe <john_doe@my-company.com>\",\nbut might not match this example"
},
"date": {
"$ref": "#/definitions/v1Time"
},
"message": {
"type": "string",
"title": "the message associated with the revision,\nprobably the commit message,\nthis is truncated to the first newline or 64 characters (which ever comes first)"
},
"tags": {
"type": "array",
"title": "tags on the revision,\nnote - tags can move from one revision to another",
"items": {
"type": "string"
}
}
}
},
"v1alpha1SyncOperation": {
"description": "SyncOperation contains sync operation details.",
"type": "object",
@@ -3682,13 +3280,6 @@
"format": "boolean",
"title": "DryRun will perform a `kubectl apply --dry-run` without actually performing the sync"
},
"manifests": {
"type": "array",
"title": "Manifests is an optional field that overrides sync source with a local directory for development",
"items": {
"type": "string"
}
},
"prune": {
"type": "boolean",
"format": "boolean",
@@ -3765,11 +3356,6 @@
"type": "boolean",
"format": "boolean",
"title": "Prune will prune resources automatically as part of automated sync (default: false)"
},
"selfHeal": {
"type": "boolean",
"format": "boolean",
"title": "SelfHeal enables auto-syncing if (default: false)"
}
}
},
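The removed `/api/v1/applications/{name}/revisions/{revision}/metadata` route above is a plain authenticated GET returning a v1alpha1RevisionMetadata object. Calling it from Go might look like this sketch (the host, application name, and token are hypothetical placeholders):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint shape taken from the swagger above; everything else is a placeholder.
	url := "https://argocd.example.com/api/v1/applications/guestbook/revisions/HEAD/metadata"
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer <token>")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // expect author, date, tags, message as JSON
}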


@@ -16,11 +16,12 @@ import (
// load the oidc plugin (required to authenticate with OpenID Connect).
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
argocd "github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/settings"
@@ -40,13 +41,10 @@ func newCommand() *cobra.Command {
appResyncPeriod int64
repoServerAddress string
repoServerTimeoutSeconds int
selfHealTimeoutSeconds int
statusProcessors int
operationProcessors int
logLevel string
glogLevel int
metricsPort int
kubectlParallelismLimit int64
cacheSrc func() (*cache.Cache, error)
)
var command = cobra.Command{
@@ -68,7 +66,7 @@ func newCommand() *cobra.Command {
errors.CheckError(err)
resyncDuration := time.Duration(appResyncPeriod) * time.Second
repoClientset := apiclient.NewRepoServerClientset(repoServerAddress, repoServerTimeoutSeconds)
repoClientset := reposerver.NewRepoServerClientset(repoServerAddress, repoServerTimeoutSeconds)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -83,13 +81,10 @@ func newCommand() *cobra.Command {
appClient,
repoClientset,
cache,
resyncDuration,
time.Duration(selfHealTimeoutSeconds)*time.Second,
metricsPort,
kubectlParallelismLimit)
resyncDuration)
errors.CheckError(err)
log.Infof("Application Controller (version: %s) starting (namespace: %s)", common.GetVersion(), namespace)
log.Infof("Application Controller (version: %s) starting (namespace: %s)", argocd.GetVersion(), namespace)
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
@@ -109,10 +104,6 @@ func newCommand() *cobra.Command {
command.Flags().IntVar(&operationProcessors, "operation-processors", 1, "Number of application operation processors")
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortArgoCDMetrics, "Start metrics server on given port")
command.Flags().IntVar(&selfHealTimeoutSeconds, "self-heal-timeout-seconds", 5, "Specifies timeout between application self heal attempts")
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", 0, "Number of allowed concurrent kubectl fork/execs.")
cacheSrc = cache.AddCacheFlagsToCmd(&command)
return &command
}
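The new `--kubectl-parallelism-limit` flag above bounds concurrent kubectl fork/execs. A buffered channel used as a counting semaphore is the usual Go idiom for such a limit; this is an illustrative sketch, not the controller's actual implementation:

package main

import (
	"fmt"
	"sync"
)

func main() {
	const limit = 3 // stand-in for the --kubectl-parallelism-limit value
	sem := make(chan struct{}, limit)
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot; blocks when limit is reached
			defer func() { <-sem }() // release the slot
			fmt.Println("kubectl task", n)
		}(i)
	}
	wg.Wait()
}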


@@ -7,11 +7,11 @@ import (
"os"
"time"
"github.com/argoproj/argo-cd/reposerver/metrics"
"github.com/prometheus/client_golang/prometheus/promhttp"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
argocd "github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/reposerver"
@@ -31,8 +31,6 @@ func newCommand() *cobra.Command {
var (
logLevel string
parallelismLimit int64
listenPort int
metricsPort int
cacheSrc func() (*cache.Cache, error)
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
)
@@ -48,18 +46,16 @@ func newCommand() *cobra.Command {
cache, err := cacheSrc()
errors.CheckError(err)
metricsServer := metrics.NewMetricsServer(git.NewFactory())
server, err := reposerver.NewServer(metricsServer, cache, tlsConfigCustomizer, parallelismLimit)
server, err := reposerver.NewServer(git.NewFactory(), cache, tlsConfigCustomizer, parallelismLimit)
errors.CheckError(err)
grpc := server.CreateGRPC()
listener, err := net.Listen("tcp", fmt.Sprintf(":%d", listenPort))
listener, err := net.Listen("tcp", fmt.Sprintf(":%d", common.PortRepoServer))
errors.CheckError(err)
http.Handle("/metrics", metricsServer.GetHandler())
go func() { errors.CheckError(http.ListenAndServe(fmt.Sprintf(":%d", metricsPort), nil)) }()
http.Handle("/metrics", promhttp.Handler())
go func() { errors.CheckError(http.ListenAndServe(fmt.Sprintf(":%d", common.PortRepoServerMetrics), nil)) }()
log.Infof("argocd-repo-server %s serving on %s", common.GetVersion(), listener.Addr())
log.Infof("argocd-repo-server %s serving on %s", argocd.GetVersion(), listener.Addr())
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
@@ -71,8 +67,6 @@ func newCommand() *cobra.Command {
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().Int64Var(&parallelismLimit, "parallelismlimit", 0, "Limit on number of concurrent manifests generate requests. Any value less the 1 means no limit.")
command.Flags().IntVar(&listenPort, "port", common.DefaultPortRepoServer, "Listen on given port for incoming connections")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortRepoServerMetrics, "Start metrics server on given port")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
cacheSrc = cache.AddCacheFlagsToCmd(&command)
return &command
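The hunk above replaces the bare promhttp handler on a fixed port with a dedicated metrics server on a configurable --metrics-port. Serving Prometheus metrics on a separate listener follows roughly this pattern (a sketch; the port number is a placeholder, not necessarily common.DefaultPortRepoServerMetrics):

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Metrics get their own mux and listener so they never share a port
	// with application traffic.
	mux := http.NewServeMux()
	mux.Handle("/metrics", promhttp.Handler())
	go func() {
		log.Fatal(http.ListenAndServe(":8084", mux)) // placeholder port
	}()
	select {} // the real binary would start its gRPC server here instead
}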


@@ -11,7 +11,7 @@ import (
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/server"
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/cli"
@@ -22,20 +22,17 @@ import (
// NewCommand returns a new instance of an argocd command
func NewCommand() *cobra.Command {
var (
insecure bool
listenPort int
metricsPort int
logLevel string
glogLevel int
clientConfig clientcmd.ClientConfig
repoServerTimeoutSeconds int
staticAssetsDir string
baseHRef string
repoServerAddress string
dexServerAddress string
disableAuth bool
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
cacheSrc func() (*cache.Cache, error)
insecure bool
logLevel string
glogLevel int
clientConfig clientcmd.ClientConfig
staticAssetsDir string
baseHRef string
repoServerAddress string
dexServerAddress string
disableAuth bool
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
cacheSrc func() (*cache.Cache, error)
)
var command = &cobra.Command{
Use: cliName,
@@ -60,12 +57,10 @@ func NewCommand() *cobra.Command {
kubeclientset := kubernetes.NewForConfigOrDie(config)
appclientset := appclientset.NewForConfigOrDie(config)
repoclientset := apiclient.NewRepoServerClientset(repoServerAddress, repoServerTimeoutSeconds)
repoclientset := reposerver.NewRepoServerClientset(repoServerAddress, 0)
argoCDOpts := server.ArgoCDServerOpts{
Insecure: insecure,
ListenPort: listenPort,
MetricsPort: metricsPort,
Namespace: namespace,
StaticAssetsDir: staticAssetsDir,
BaseHRef: baseHRef,
@@ -86,7 +81,7 @@ func NewCommand() *cobra.Command {
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
argocd := server.NewServer(ctx, argoCDOpts)
argocd.Run(ctx, listenPort, metricsPort)
argocd.Run(ctx, common.PortAPIServer)
cancel()
}
},
@@ -102,9 +97,6 @@ func NewCommand() *cobra.Command {
command.Flags().StringVar(&dexServerAddress, "dex-server", common.DefaultDexServerAddr, "Dex server address")
command.Flags().BoolVar(&disableAuth, "disable-auth", false, "Disable client authentication")
command.AddCommand(cli.NewVersionCmd(cliName))
command.Flags().IntVar(&listenPort, "port", common.DefaultPortAPIServer, "Listen on given port")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortArgoCDAPIServerMetrics, "Start metrics on given port")
command.Flags().IntVar(&repoServerTimeoutSeconds, "repo-server-timeout-seconds", 60, "Repo server RPC call timeout seconds.")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(command)
cacheSrc = cache.AddCacheFlagsToCmd(command)
return command
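The `ctx, cancel := context.WithCancel(ctx)` / `argocd.Run(...)` / `cancel()` pairing above typically lives in a loop so the API server can restart itself when settings change. Schematically (an assumption about intent, with a sleep standing in for Run):

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	for i := 0; i < 3; i++ {
		ctx, cancel := context.WithCancel(context.Background())
		done := make(chan struct{})
		go func() {
			<-ctx.Done() // a long-running server would watch this and shut down
			close(done)
		}()
		time.Sleep(50 * time.Millisecond) // stand-in for argocd.Run(ctx, ...)
		cancel()
		<-done
		fmt.Println("server restarted, iteration", i)
	}
}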


@@ -8,7 +8,6 @@ import (
"io/ioutil"
"os"
"os/exec"
"regexp"
"syscall"
"github.com/ghodss/yaml"
@@ -109,7 +108,7 @@ func NewRunDexCommand() *cobra.Command {
} else {
err = ioutil.WriteFile("/tmp/dex.yaml", dexCfgBytes, 0644)
errors.CheckError(err)
log.Info(redactor(string(dexCfgBytes)))
log.Info(string(dexCfgBytes))
cmd = exec.Command("dex", "serve", "/tmp/dex.yaml")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
@@ -387,12 +386,6 @@ func NewExportCommand() *cobra.Command {
acdRBACConfigMap, err := acdClients.configMaps.Get(common.ArgoCDRBACConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
export(writer, *acdRBACConfigMap)
acdKnownHostsConfigMap, err := acdClients.configMaps.Get(common.ArgoCDKnownHostsConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
export(writer, *acdKnownHostsConfigMap)
acdTLSCertsConfigMap, err := acdClients.configMaps.Get(common.ArgoCDTLSCertsConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
export(writer, *acdTLSCertsConfigMap)
referencedSecrets := getReferencedSecrets(*acdConfigMap)
secrets, err := acdClients.secrets.List(metav1.ListOptions{})
@@ -442,12 +435,6 @@ func getReferencedSecrets(un unstructured.Unstructured) map[string]bool {
if cred.UsernameSecret != nil {
referencedSecrets[cred.UsernameSecret.Name] = true
}
if cred.TLSClientCertDataSecret != nil {
referencedSecrets[cred.TLSClientCertDataSecret.Name] = true
}
if cred.TLSClientCertKeySecret != nil {
referencedSecrets[cred.TLSClientCertKeySecret.Name] = true
}
}
}
if helmReposRAW, ok := cm.Data["helm.repositories"]; ok {
@@ -555,11 +542,6 @@ func NewClusterConfig() *cobra.Command {
return command
}
func redactor(dirtyString string) string {
dirtyString = regexp.MustCompile("(clientSecret: )[^ \n]*").ReplaceAllString(dirtyString, "$1********")
return regexp.MustCompile("(secret: )[^ \n]*").ReplaceAllString(dirtyString, "$1********")
}
func main() {
if err := NewCommand().Execute(); err != nil {
fmt.Println(err)


@@ -1,73 +0,0 @@
package main
import (
"testing"
"github.com/stretchr/testify/assert"
)
var textToRedact = `
- config:
clientID: aabbccddeeff00112233
clientSecret: $dex.github.clientSecret
orgs:
- name: your-github-org
redirectURI: https://argocd.example.com/api/dex/callback
id: github
name: GitHub
type: github
grpc:
addr: 0.0.0.0:5557
issuer: https://argocd.example.com/api/dex
oauth2:
skipApprovalScreen: true
staticClients:
- id: argo-cd
name: Argo CD
redirectURIs:
- https://argocd.example.com/auth/callback
secret: Dis9M-GA11oTwZVQQWdDklPQw-sWXZkWJFyyEhMs
- id: argo-cd-cli
name: Argo CD CLI
public: true
redirectURIs:
- http://localhost
storage:
type: memory
web:
http: 0.0.0.0:5556`
var expectedRedaction = `
- config:
clientID: aabbccddeeff00112233
clientSecret: ********
orgs:
- name: your-github-org
redirectURI: https://argocd.example.com/api/dex/callback
id: github
name: GitHub
type: github
grpc:
addr: 0.0.0.0:5557
issuer: https://argocd.example.com/api/dex
oauth2:
skipApprovalScreen: true
staticClients:
- id: argo-cd
name: Argo CD
redirectURIs:
- https://argocd.example.com/auth/callback
secret: ********
- id: argo-cd-cli
name: Argo CD CLI
public: true
redirectURIs:
- http://localhost
storage:
type: memory
web:
http: 0.0.0.0:5556`
func TestSecretsRedactor(t *testing.T) {
assert.Equal(t, expectedRedaction, redactor(textToRedact))
}
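The deleted test above exercises the `redactor` helper removed from argocd-util, whose whole mechanic is regexp.ReplaceAllString keeping the captured prefix via $1. In isolation:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as the removed redactor: keep the key, mask the value.
	re := regexp.MustCompile("(clientSecret: )[^ \n]*")
	fmt.Println(re.ReplaceAllString("clientSecret: s3cr3t", "$1********"))
	// Output: clientSecret: ********
}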


@@ -11,7 +11,7 @@ import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
accountpkg "github.com/argoproj/argo-cd/pkg/apiclient/account"
"github.com/argoproj/argo-cd/server/account"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/localconfig"
@@ -57,7 +57,7 @@ func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *co
errors.CheckError(err)
}
updatePasswordRequest := accountpkg.UpdatePasswordRequest{
updatePasswordRequest := account.UpdatePasswordRequest{
NewPassword: newPassword,
CurrentPassword: currentPassword,
}

File diff suppressed because it is too large


@@ -11,8 +11,8 @@ import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
applicationpkg "github.com/argoproj/argo-cd/pkg/apiclient/application"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/application"
"github.com/argoproj/argo-cd/util"
)
@@ -51,14 +51,14 @@ func NewApplicationResourceActionsListCommand(clientOpts *argocdclient.ClientOpt
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName})
resources, err := appIf.ManagedResources(ctx, &application.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
filteredObjects := filterResources(command, resources.Items, group, kind, namespace, resourceName, all)
availableActions := make(map[string][]argoappv1.ResourceAction)
for i := range filteredObjects {
obj := filteredObjects[i]
gvk := obj.GroupVersionKind()
availActionsForResource, err := appIf.ListResourceActions(ctx, &applicationpkg.ApplicationResourceRequest{
availActionsForResource, err := appIf.ListResourceActions(ctx, &application.ApplicationResourceRequest{
Name: &appName,
Namespace: obj.GetNamespace(),
ResourceName: obj.GetName(),
@@ -128,14 +128,14 @@ func NewApplicationResourceActionsRunCommand(clientOpts *argocdclient.ClientOpti
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName})
resources, err := appIf.ManagedResources(ctx, &application.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
filteredObjects := filterResources(command, resources.Items, group, kind, namespace, resourceName, all)
for i := range filteredObjects {
obj := filteredObjects[i]
gvk := obj.GroupVersionKind()
objResourceName := obj.GetName()
_, err := appIf.RunResourceAction(context.Background(), &applicationpkg.ResourceActionRunRequest{
_, err := appIf.RunResourceAction(context.Background(), &application.ResourceActionRunRequest{
Name: &appName,
Namespace: obj.GetNamespace(),
ResourceName: objResourceName,


@@ -1,290 +0,0 @@
package commands
import (
"context"
"fmt"
"os"
"sort"
"strings"
"text/tabwriter"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
certificatepkg "github.com/argoproj/argo-cd/pkg/apiclient/certificate"
appsv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
certutil "github.com/argoproj/argo-cd/util/cert"
"crypto/x509"
)
// NewCertCommand returns a new instance of an `argocd repo` command
func NewCertCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "cert",
Short: "Manage repository certificates and SSH known hosts entries",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
},
}
command.AddCommand(NewCertAddSSHCommand(clientOpts))
command.AddCommand(NewCertAddTLSCommand(clientOpts))
command.AddCommand(NewCertListCommand(clientOpts))
command.AddCommand(NewCertRemoveCommand(clientOpts))
return command
}
func NewCertAddTLSCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
fromFile string
upsert bool
)
var command = &cobra.Command{
Use: "add-tls SERVERNAME",
Short: "Add TLS certificate data for connecting to repository server SERVERNAME",
Run: func(c *cobra.Command, args []string) {
conn, certIf := argocdclient.NewClientOrDie(clientOpts).NewCertClientOrDie()
defer util.Close(conn)
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
var certificateArray []string
var err error
if fromFile != "" {
fmt.Printf("Reading TLS certificate data in PEM format from '%s'\n", fromFile)
certificateArray, err = certutil.ParseTLSCertificatesFromPath(fromFile)
} else {
fmt.Println("Enter TLS certificate data in PEM format. Press CTRL-D when finished.")
certificateArray, err = certutil.ParseTLSCertificatesFromStream(os.Stdin)
}
errors.CheckError(err)
certificateList := make([]appsv1.RepositoryCertificate, 0)
subjectMap := make(map[string]*x509.Certificate)
for _, entry := range certificateArray {
// We want to make sure to only send valid certificate data to the
// server, so we decode the certificate into X509 structure before
// further processing it.
x509cert, err := certutil.DecodePEMCertificateToX509(entry)
errors.CheckError(err)
// TODO: We need a better way to detect duplicates sent in the stream,
// maybe by using fingerprints? For now, no two certs with the same
// subject may be sent.
if subjectMap[x509cert.Subject.String()] != nil {
fmt.Printf("ERROR: Cert with subject '%s' already seen in the input stream.\n", x509cert.Subject.String())
continue
} else {
subjectMap[x509cert.Subject.String()] = x509cert
}
}
serverName := args[0]
if len(certificateArray) > 0 {
certificateList = append(certificateList, appsv1.RepositoryCertificate{
ServerName: serverName,
CertType: "https",
CertData: []byte(strings.Join(certificateArray, "\n")),
})
certificates, err := certIf.CreateCertificate(context.Background(), &certificatepkg.RepositoryCertificateCreateRequest{
Certificates: &appsv1.RepositoryCertificateList{
Items: certificateList,
},
Upsert: upsert,
})
errors.CheckError(err)
fmt.Printf("Created entry with %d PEM certificates for repository server %s\n", len(certificates.Items), serverName)
} else {
fmt.Printf("No valid certificates have been detected in the stream.\n")
}
},
}
command.Flags().StringVar(&fromFile, "from", "", "read TLS certificate data from file (default is to read from stdin)")
command.Flags().BoolVar(&upsert, "upsert", false, "Replace existing TLS certificate if certificate is different in input")
return command
}
// NewCertAddCommand returns a new instance of an `argocd cert add` command
func NewCertAddSSHCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
fromFile string
batchProcess bool
upsert bool
certificates []appsv1.RepositoryCertificate
)
var command = &cobra.Command{
Use: "add-ssh --batch",
Short: "Add SSH known host entries for repository servers",
Run: func(c *cobra.Command, args []string) {
conn, certIf := argocdclient.NewClientOrDie(clientOpts).NewCertClientOrDie()
defer util.Close(conn)
var sshKnownHostsLists []string
var err error
// --batch is a flag, but it is mandatory for now.
if batchProcess {
if fromFile != "" {
fmt.Printf("Reading SSH known hosts entries from file '%s'\n", fromFile)
sshKnownHostsLists, err = certutil.ParseSSHKnownHostsFromPath(fromFile)
} else {
fmt.Println("Enter SSH known hosts entries, one per line. Press CTRL-D when finished.")
sshKnownHostsLists, err = certutil.ParseSSHKnownHostsFromStream(os.Stdin)
}
} else {
err = fmt.Errorf("You need to specify --batch or specify --help for usage instructions")
}
errors.CheckError(err)
if len(sshKnownHostsLists) == 0 {
errors.CheckError(fmt.Errorf("No valid SSH known hosts data found."))
}
for _, knownHostsEntry := range sshKnownHostsLists {
hostname, certSubType, certData, err := certutil.TokenizeSSHKnownHostsEntry(knownHostsEntry)
errors.CheckError(err)
_, _, err = certutil.KnownHostsLineToPublicKey(knownHostsEntry)
errors.CheckError(err)
certificate := appsv1.RepositoryCertificate{
ServerName: hostname,
CertType: "ssh",
CertSubType: certSubType,
CertData: certData,
}
certificates = append(certificates, certificate)
}
certList := &appsv1.RepositoryCertificateList{Items: certificates}
response, err := certIf.CreateCertificate(context.Background(), &certificatepkg.RepositoryCertificateCreateRequest{
Certificates: certList,
Upsert: upsert,
})
errors.CheckError(err)
fmt.Printf("Successfully created %d SSH known host entries\n", len(response.Items))
},
}
command.Flags().StringVar(&fromFile, "from", "", "Read SSH known hosts data from file (default is to read from stdin)")
command.Flags().BoolVar(&batchProcess, "batch", false, "Perform batch processing by reading in SSH known hosts data (mandatory flag)")
command.Flags().BoolVar(&upsert, "upsert", false, "Replace existing SSH server public host keys if the key in the input differs")
return command
}
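For reference, the tokenization and validation that certutil performs on each known_hosts line can also be expressed with golang.org/x/crypto/ssh. A minimal sketch, assuming a well-formed single-line entry; the host key below is GitHub's published ed25519 key, used here purely as sample data:

package main

import (
    "fmt"
    "log"

    "golang.org/x/crypto/ssh"
)

func main() {
    // A known_hosts entry has the form "hostname key-type base64-key [comment]".
    line := []byte("github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl")
    _, hosts, pubKey, _, _, err := ssh.ParseKnownHosts(line)
    if err != nil {
        log.Fatal(err)
    }
    // hosts corresponds to ServerName, pubKey.Type() to CertSubType, and
    // the base64 key material to CertData in the RepositoryCertificate above.
    fmt.Println(hosts, pubKey.Type()) // [github.com] ssh-ed25519
}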
// NewCertRemoveCommand returns a new instance of an `argocd cert rm` command
func NewCertRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
certType string
certSubType string
certQuery certificatepkg.RepositoryCertificateQuery
)
var command = &cobra.Command{
Use: "rm REPOSERVER",
Short: "Remove certificate of TYPE for REPOSERVER",
Run: func(c *cobra.Command, args []string) {
if len(args) < 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
conn, certIf := argocdclient.NewClientOrDie(clientOpts).NewCertClientOrDie()
defer util.Close(conn)
hostNamePattern := args[0]
// As a precautionary measure, prevent the user from specifying a lone
// wildcard as the hostname -- the user could still use "?*" or any other
// pattern to remove all certificates, but that is less likely to happen
// by accident (see the illustration after this function).
if hostNamePattern == "*" {
err := fmt.Errorf("A single wildcard is not allowed as the REPOSERVER name")
errors.CheckError(err)
}
certQuery = certificatepkg.RepositoryCertificateQuery{
HostNamePattern: hostNamePattern,
CertType: certType,
CertSubType: certSubType,
}
removed, err := certIf.DeleteCertificate(context.Background(), &certQuery)
errors.CheckError(err)
if len(removed.Items) > 0 {
for _, cert := range removed.Items {
fmt.Printf("Removed cert for '%s' of type '%s' (subtype '%s')\n", cert.ServerName, cert.CertType, cert.CertSubType)
}
} else {
fmt.Println("No certificates were removed (none matched the given patterns)")
}
},
}
command.Flags().StringVar(&certType, "cert-type", "", "Only remove certs of given type (ssh, https)")
command.Flags().StringVar(&certSubType, "cert-sub-type", "", "Only remove certs of given sub-type (only for ssh)")
return command
}
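To make the precaution above concrete: only the literal "*" is rejected, while a pattern like "?*" still matches every non-empty hostname. A quick illustration with Go's path.Match, on the assumption that the server applies comparable glob semantics:

package main

import (
    "fmt"
    "path"
)

func main() {
    // "*" is rejected by the CLI outright, but these patterns are accepted,
    // and the first one still matches any non-empty hostname.
    for _, pattern := range []string{"?*", "*.example.com", "*.example.org"} {
        matched, _ := path.Match(pattern, "git1.example.com")
        fmt.Printf("%-18q matches git1.example.com: %v\n", pattern, matched)
    }
}

Running this prints true, true, false: the single-wildcard guard narrows the blast radius of a typo, but it is no substitute for double-checking the pattern.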
// NewCertListCommand returns a new instance of an `argocd cert list` command
func NewCertListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
certType string
hostNamePattern string
sortOrder string
)
var command = &cobra.Command{
Use: "list",
Short: "List configured certificates",
Run: func(c *cobra.Command, args []string) {
if certType != "" {
switch certType {
case "ssh":
case "https":
default:
fmt.Println("cert-type must be either ssh or https")
os.Exit(1)
}
}
conn, certIf := argocdclient.NewClientOrDie(clientOpts).NewCertClientOrDie()
defer util.Close(conn)
certificates, err := certIf.ListCertificates(context.Background(), &certificatepkg.RepositoryCertificateQuery{HostNamePattern: hostNamePattern, CertType: certType})
errors.CheckError(err)
printCertTable(certificates.Items, sortOrder)
},
}
command.Flags().StringVar(&sortOrder, "sort", "", "set display sort order, valid: 'hostname', 'type'")
command.Flags().StringVar(&certType, "cert-type", "", "only list certificates of given type, valid: 'ssh','https'")
command.Flags().StringVar(&hostNamePattern, "hostname-pattern", "", "only list certificates for hosts matching given glob-pattern")
return command
}
// Print table of certificate info
func printCertTable(certs []appsv1.RepositoryCertificate, sortOrder string) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "HOSTNAME\tTYPE\tSUBTYPE\tINFO\n")
if sortOrder == "hostname" || sortOrder == "" {
sort.Slice(certs, func(i, j int) bool {
return certs[i].ServerName < certs[j].ServerName
})
} else if sortOrder == "type" {
sort.Slice(certs, func(i, j int) bool {
return certs[i].CertType < certs[j].CertType
})
}
for _, c := range certs {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", c.ServerName, c.CertType, c.CertSubType, c.CertInfo)
}
_ = w.Flush()
}

View File

@@ -19,10 +19,9 @@ import (
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
clusterpkg "github.com/argoproj/argo-cd/pkg/apiclient/cluster"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/cluster"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/clusterauth"
)
// NewClusterCommand returns a new instance of an `argocd cluster` command
@@ -40,18 +39,16 @@ func NewClusterCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clientc
command.AddCommand(NewClusterGetCommand(clientOpts))
command.AddCommand(NewClusterListCommand(clientOpts))
command.AddCommand(NewClusterRemoveCommand(clientOpts))
command.AddCommand(NewClusterRotateAuthCommand(clientOpts))
return command
}
// NewClusterAddCommand returns a new instance of an `argocd cluster add` command
func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clientcmd.PathOptions) *cobra.Command {
var (
inCluster bool
upsert bool
awsRoleArn string
awsClusterName string
systemNamespace string
inCluster bool
upsert bool
awsRoleArn string
awsClusterName string
)
var command = &cobra.Command{
Use: "add",
@@ -88,7 +85,7 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
// Install RBAC resources for managing the cluster
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
managerBearerToken, err = clusterauth.InstallClusterManagerRBAC(clientset, systemNamespace)
managerBearerToken, err = common.InstallClusterManagerRBAC(clientset)
errors.CheckError(err)
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
@@ -97,7 +94,7 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
if inCluster {
clst.Server = common.KubernetesInternalAPIServerAddr
}
clstCreateReq := clusterpkg.ClusterCreateRequest{
clstCreateReq := cluster.ClusterCreateRequest{
Cluster: clst,
Upsert: upsert,
}
@@ -111,7 +108,6 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing cluster with the same name even if the spec differs")
command.Flags().StringVar(&awsClusterName, "aws-cluster-name", "", "AWS Cluster name if set then aws-iam-authenticator will be used to access cluster")
command.Flags().StringVar(&awsRoleArn, "aws-role-arn", "", "Optional AWS role arn. If set then AWS IAM Authenticator assume a role to perform cluster operations instead of the default AWS credential provider chain.")
command.Flags().StringVar(&systemNamespace, "system-namespace", common.DefaultSystemNamespace, "Use different system namespace")
return command
}
@@ -180,7 +176,7 @@ func NewCluster(name string, conf *rest.Config, managerBearerToken string, awsAu
// NewClusterGetCommand returns a new instance of an `argocd cluster get` command
func NewClusterGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "get CLUSTER",
Use: "get",
Short: "Get cluster information",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
@@ -190,7 +186,7 @@ func NewClusterGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
for _, clusterName := range args {
clst, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Server: clusterName})
clst, err := clusterIf.Get(context.Background(), &cluster.ClusterQuery{Server: clusterName})
errors.CheckError(err)
yamlBytes, err := yaml.Marshal(clst)
errors.CheckError(err)
@@ -204,7 +200,7 @@ func NewClusterGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
// NewClusterRemoveCommand returns a new instance of an `argocd cluster rm` command
func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "rm CLUSTER",
Use: "rm",
Short: "Remove cluster credentials",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
@@ -219,9 +215,9 @@ func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
for _, clusterName := range args {
// TODO(jessesuen): find the right context and remove manager RBAC artifacts
// err := clusterauth.UninstallClusterManagerRBAC(clientset)
// err := common.UninstallClusterManagerRBAC(clientset)
// errors.CheckError(err)
_, err := clusterIf.Delete(context.Background(), &clusterpkg.ClusterQuery{Server: clusterName})
_, err := clusterIf.Delete(context.Background(), &cluster.ClusterQuery{Server: clusterName})
errors.CheckError(err)
}
},
@@ -229,65 +225,22 @@ func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
return command
}
// Print table of cluster information
func printClusterTable(clusters []argoappv1.Cluster) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "SERVER\tNAME\tSTATUS\tMESSAGE\n")
for _, c := range clusters {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", c.Server, c.Name, c.ConnectionState.Status, c.ConnectionState.Message)
}
_ = w.Flush()
}
// Print list of cluster servers
func printClusterServers(clusters []argoappv1.Cluster) {
for _, c := range clusters {
fmt.Println(c.Server)
}
}
// NewClusterListCommand returns a new instance of an `argocd cluster list` command
func NewClusterListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
)
var command = &cobra.Command{
Use: "list",
Short: "List configured clusters",
Run: func(c *cobra.Command, args []string) {
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
clusters, err := clusterIf.List(context.Background(), &clusterpkg.ClusterQuery{})
clusters, err := clusterIf.List(context.Background(), &cluster.ClusterQuery{})
errors.CheckError(err)
if output == "server" {
printClusterServers(clusters.Items)
} else {
printClusterTable(clusters.Items)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "SERVER\tNAME\tSTATUS\tMESSAGE\n")
for _, c := range clusters.Items {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", c.Server, c.Name, c.ConnectionState.Status, c.ConnectionState.Message)
}
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: wide|server")
return command
}
// NewClusterRotateAuthCommand returns a new instance of an `argocd cluster rotate-auth` command
func NewClusterRotateAuthCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "rotate-auth CLUSTER",
Short: fmt.Sprintf("%s cluster rotate-auth CLUSTER", cliName),
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
clusterQuery := clusterpkg.ClusterQuery{
Server: args[0],
}
_, err := clusterIf.RotateAuth(context.Background(), &clusterQuery)
errors.CheckError(err)
fmt.Printf("Cluster '%s' rotated auth\n", clusterQuery.Server)
_ = w.Flush()
},
}
return command

View File

@@ -1,233 +0,0 @@
package commands
import (
"fmt"
"io"
"log"
"os"
"github.com/spf13/cobra"
)
const (
bashCompletionFunc = `
__argocd_list_apps() {
local -a argocd_out
if argocd_out=($(argocd app list --output name 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_list_app_history() {
local app=$1
local -a argocd_out
if argocd_out=($(argocd app history $app --output id 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_app_rollback() {
local -a command
for comp_word in "${COMP_WORDS[@]}"; do
if [[ $comp_word =~ ^-.*$ ]]; then
continue
fi
command+=($comp_word)
done
# fourth arg is the app (if present), e.g. argocd app rollback guestbook
local app=${command[3]}
local id=${command[4]}
if [[ -z $app || $app == $cur ]]; then
__argocd_list_apps
elif [[ -z $id || $id == $cur ]]; then
__argocd_list_app_history $app
fi
}
__argocd_list_servers() {
local -a argocd_out
if argocd_out=($(argocd cluster list --output server 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_list_repos() {
local -a argocd_out
if argocd_out=($(argocd repo list --output url 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_list_projects() {
local -a argocd_out
if argocd_out=($(argocd proj list --output name 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_list_namespaces() {
local -a argocd_out
if argocd_out=($(kubectl get namespaces --no-headers 2>/dev/null | cut -f1 -d' ' 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_proj_server_namespace() {
local -a command
for comp_word in "${COMP_WORDS[@]}"; do
if [[ $comp_word =~ ^-.*$ ]]; then
continue
fi
command+=($comp_word)
done
# expect something like this: argocd proj add-destination PROJECT SERVER NAMESPACE
local project=${command[3]}
local server=${command[4]}
local namespace=${command[5]}
if [[ -z $project || $project == $cur ]]; then
__argocd_list_projects
elif [[ -z $server || $server == $cur ]]; then
__argocd_list_servers
elif [[ -z $namespace || $namespace == $cur ]]; then
__argocd_list_namespaces
fi
}
__argocd_list_project_role() {
local project="$1"
local -a argocd_out
if argocd_out=($(argocd proj role list "$project" --output=name 2>/dev/null)); then
COMPREPLY+=( $( compgen -W "${argocd_out[*]}" -- "$cur" ) )
fi
}
__argocd_proj_role(){
local -a command
for comp_word in "${COMP_WORDS[@]}"; do
if [[ $comp_word =~ ^-.*$ ]]; then
continue
fi
command+=($comp_word)
done
# expect something like this: argocd proj role add-policy PROJECT ROLE-NAME
local project=${command[4]}
local role=${command[5]}
if [[ -z $project || $project == $cur ]]; then
__argocd_list_projects
elif [[ -z $role || $role == $cur ]]; then
__argocd_list_project_role $project
fi
}
__argocd_custom_func() {
case ${last_command} in
argocd_app_delete | \
argocd_app_diff | \
argocd_app_edit | \
argocd_app_get | \
argocd_app_history | \
argocd_app_manifests | \
argocd_app_patch-resource | \
argocd_app_set | \
argocd_app_sync | \
argocd_app_terminate-op | \
argocd_app_unset | \
argocd_app_wait | \
argocd_app_create)
__argocd_list_apps
return
;;
argocd_app_rollback)
__argocd_app_rollback
return
;;
argocd_cluster_get | \
argocd_cluster_rm | \
argocd_login | \
argocd_cluster_add)
__argocd_list_servers
return
;;
argocd_repo_rm | \
argocd_repo_add)
__argocd_list_repos
return
;;
argocd_proj_add-destination | \
argocd_proj_remove-destination)
__argocd_proj_server_namespace
return
;;
argocd_proj_add-source | \
argocd_proj_remove-source | \
argocd_proj_allow-cluster-resource | \
argocd_proj_allow-namespace-resource | \
argocd_proj_deny-cluster-resource | \
argocd_proj_deny-namespace-resource | \
argocd_proj_delete | \
argocd_proj_edit | \
argocd_proj_get | \
argocd_proj_set | \
argocd_proj_role_list)
__argocd_list_projects
return
;;
argocd_proj_role_remove-policy | \
argocd_proj_role_add-policy | \
argocd_proj_role_create | \
argocd_proj_role_delete | \
argocd_proj_role_get | \
argocd_proj_role_create-token | \
argocd_proj_role_delete-token)
__argocd_proj_role
return
;;
*)
;;
esac
}
`
)
func NewCompletionCommand() *cobra.Command {
var command = &cobra.Command{
Use: "completion SHELL",
Short: "output shell completion code for the specified shell (bash or zsh)",
Long: `Write bash or zsh shell completion code to standard output.
For bash, ensure you have bash completions installed and enabled.
To access completions in your current shell, run
$ source <(argocd completion bash)
Alternatively, write it to a file and source it in .bash_profile
For zsh, output to a file in a directory referenced by the $fpath shell
variable.
`,
Run: func(cmd *cobra.Command, args []string) {
if len(args) != 1 {
cmd.HelpFunc()(cmd, args)
os.Exit(1)
}
shell := args[0]
rootCommand := NewCommand()
rootCommand.BashCompletionFunction = bashCompletionFunc
availableCompletions := map[string]func(io.Writer) error{
"bash": rootCommand.GenBashCompletion,
"zsh": rootCommand.GenZshCompletion,
}
completion, ok := availableCompletions[shell]
if !ok {
fmt.Printf("Invalid shell '%s'. The supported shells are bash and zsh.\n", shell)
os.Exit(1)
}
if err := completion(os.Stdout); err != nil {
log.Fatal(err)
}
},
}
return command
}

View File

@@ -8,8 +8,6 @@ import (
"strings"
"text/tabwriter"
"github.com/spf13/pflag"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
@@ -20,36 +18,16 @@ import (
// NewContextCommand returns a new instance of an `argocd ctx` command
func NewContextCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var delete bool
var command = &cobra.Command{
Use: "context",
Aliases: []string{"ctx"},
Short: "Switch between contexts",
Run: func(c *cobra.Command, args []string) {
localCfg, err := localconfig.ReadLocalConfig(clientOpts.ConfigPath)
errors.CheckError(err)
deletePresentContext := false
c.Flags().Visit(func(f *pflag.Flag) {
if f.Name == "delete" {
deletePresentContext = true
}
})
if len(args) == 0 {
if deletePresentContext {
err := deleteContext(localCfg.CurrentContext, clientOpts.ConfigPath)
errors.CheckError(err)
return
} else {
printArgoCDContexts(clientOpts.ConfigPath)
return
}
printArgoCDContexts(clientOpts.ConfigPath)
return
}
ctxName := args[0]
argoCDDir, err := localconfig.DefaultConfigDir()
errors.CheckError(err)
prevCtxFile := path.Join(argoCDDir, ".prev-ctx")
@@ -59,6 +37,8 @@ func NewContextCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
errors.CheckError(err)
ctxName = string(prevCtxBytes)
}
localCfg, err := localconfig.ReadLocalConfig(clientOpts.ConfigPath)
errors.CheckError(err)
if localCfg.CurrentContext == ctxName {
fmt.Printf("Already at context '%s'\n", localCfg.CurrentContext)
return
@@ -68,7 +48,6 @@ func NewContextCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
prevCtx := localCfg.CurrentContext
localCfg.CurrentContext = ctxName
err = localconfig.WriteLocalConfig(*localCfg, clientOpts.ConfigPath)
errors.CheckError(err)
err = ioutil.WriteFile(prevCtxFile, []byte(prevCtx), 0644)
@@ -76,43 +55,9 @@ func NewContextCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
fmt.Printf("Switched to context '%s'\n", localCfg.CurrentContext)
},
}
command.Flags().BoolVar(&delete, "delete", false, "Delete the context instead of switching to it")
return command
}
func deleteContext(context, configPath string) error {
localCfg, err := localconfig.ReadLocalConfig(configPath)
errors.CheckError(err)
if localCfg == nil {
return fmt.Errorf("Nothing to logout from")
}
serverName, ok := localCfg.RemoveContext(context)
if !ok {
return fmt.Errorf("Context %s does not exist", context)
}
_ = localCfg.RemoveUser(context)
_ = localCfg.RemoveServer(serverName)
if localCfg.IsEmpty() {
err = localconfig.DeleteLocalConfig(configPath)
errors.CheckError(err)
} else {
if localCfg.CurrentContext == context {
localCfg.CurrentContext = localCfg.Contexts[0].Name
}
err = localconfig.ValidateLocalConfig(*localCfg)
if err != nil {
return fmt.Errorf("Error in logging out")
}
err = localconfig.WriteLocalConfig(*localCfg, configPath)
errors.CheckError(err)
}
fmt.Printf("Context '%s' deleted\n", context)
return nil
}
func printArgoCDContexts(configPath string) {
localCfg, err := localconfig.ReadLocalConfig(configPath)
errors.CheckError(err)

View File

@@ -1,60 +0,0 @@
package commands
import (
"io/ioutil"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/argoproj/argo-cd/util/localconfig"
)
const testConfig = `contexts:
- name: argocd.example.com:443
server: argocd.example.com:443
user: argocd.example.com:443
- name: localhost:8080
server: localhost:8080
user: localhost:8080
current-context: localhost:8080
servers:
- server: argocd.example.com:443
- plain-text: true
server: localhost:8080
users:
- auth-token: vErrYS3c3tReFRe$hToken
name: argocd.example.com:443
refresh-token: vErrYS3c3tReFRe$hToken
- auth-token: vErrYS3c3tReFRe$hToken
name: localhost:8080`
const testConfigFilePath = "./testdata/config"
func TestContextDelete(t *testing.T) {
// Write the test config file
err := ioutil.WriteFile(testConfigFilePath, []byte(testConfig), os.ModePerm)
assert.NoError(t, err)
localConfig, err := localconfig.ReadLocalConfig(testConfigFilePath)
assert.NoError(t, err)
assert.Equal(t, localConfig.CurrentContext, "localhost:8080")
assert.Contains(t, localConfig.Contexts, localconfig.ContextRef{Name: "localhost:8080", Server: "localhost:8080", User: "localhost:8080"})
err = deleteContext("localhost:8080", testConfigFilePath)
assert.NoError(t, err)
localConfig, err = localconfig.ReadLocalConfig(testConfigFilePath)
assert.NoError(t, err)
assert.Equal(t, localConfig.CurrentContext, "argocd.example.com:443")
assert.NotContains(t, localConfig.Contexts, localconfig.ContextRef{Name: "localhost:8080", Server: "localhost:8080", User: "localhost:8080"})
assert.NotContains(t, localConfig.Servers, localconfig.Server{PlainText: true, Server: "localhost:8080"})
assert.NotContains(t, localConfig.Users, localconfig.User{AuthToken: "vErrYS3c3tReFRe$hToken", Name: "localhost:8080"})
assert.Contains(t, localConfig.Contexts, localconfig.ContextRef{Name: "argocd.example.com:443", Server: "argocd.example.com:443", User: "argocd.example.com:443"})
// Write the original contents back so the test leaves no changes behind in git
err = ioutil.WriteFile(testConfigFilePath, []byte(testConfig), os.ModePerm)
assert.NoError(t, err)
}

View File

@@ -8,8 +8,8 @@ import (
"strconv"
"time"
"github.com/coreos/go-oidc"
"github.com/dgrijalva/jwt-go"
oidc "github.com/coreos/go-oidc"
jwt "github.com/dgrijalva/jwt-go"
log "github.com/sirupsen/logrus"
"github.com/skratchdot/open-golang/open"
"github.com/spf13/cobra"
@@ -17,8 +17,8 @@ import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
sessionpkg "github.com/argoproj/argo-cd/pkg/apiclient/session"
settingspkg "github.com/argoproj/argo-cd/pkg/apiclient/settings"
"github.com/argoproj/argo-cd/server/session"
"github.com/argoproj/argo-cd/server/settings"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
grpc_util "github.com/argoproj/argo-cd/util/grpc"
@@ -88,7 +88,7 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
httpClient, err := acdClient.HTTPClient()
errors.CheckError(err)
ctx = oidc.ClientContext(ctx, httpClient)
acdSet, err := setIf.Get(ctx, &settingspkg.SettingsQuery{})
acdSet, err := setIf.Get(ctx, &settings.SettingsQuery{})
errors.CheckError(err)
oauth2conf, provider, err := acdClient.OIDCConfig(ctx, acdSet)
errors.CheckError(err)
@@ -278,7 +278,7 @@ func passwordLogin(acdClient argocdclient.Client, username, password string) str
username, password = cli.PromptCredentials(username, password)
sessConn, sessionIf := acdClient.NewSessionClientOrDie()
defer util.Close(sessConn)
sessionRequest := sessionpkg.SessionCreateRequest{
sessionRequest := session.SessionCreateRequest{
Username: username,
Password: password,
}

View File

@@ -1,50 +0,0 @@
package commands
import (
"fmt"
"os"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util/localconfig"
)
// NewLogoutCommand returns a new instance of `argocd logout` command
func NewLogoutCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "logout CONTEXT",
Short: "Log out from Argo CD",
Long: "Log out from Argo CD",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
context := args[0]
localCfg, err := localconfig.ReadLocalConfig(globalClientOpts.ConfigPath)
errors.CheckError(err)
if localCfg == nil {
log.Fatalf("Nothing to logout from")
}
ok := localCfg.RemoveToken(context)
if !ok {
log.Fatalf("Context %s does not exist", context)
}
err = localconfig.ValidateLocalConfig(*localCfg)
if err != nil {
log.Fatalf("Error in logging out: %s", err)
}
err = localconfig.WriteLocalConfig(*localCfg, globalClientOpts.ConfigPath)
errors.CheckError(err)
fmt.Printf("Logged out from '%s'\n", context)
},
}
return command
}

View File

@@ -1,39 +0,0 @@
package commands
import (
"io/ioutil"
"os"
"testing"
"github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/stretchr/testify/assert"
"github.com/argoproj/argo-cd/util/localconfig"
)
func TestLogout(t *testing.T) {
// Write the test config file
err := ioutil.WriteFile(testConfigFilePath, []byte(testConfig), os.ModePerm)
assert.NoError(t, err)
localConfig, err := localconfig.ReadLocalConfig(testConfigFilePath)
assert.NoError(t, err)
assert.Equal(t, localConfig.CurrentContext, "localhost:8080")
assert.Contains(t, localConfig.Contexts, localconfig.ContextRef{Name: "localhost:8080", Server: "localhost:8080", User: "localhost:8080"})
command := NewLogoutCommand(&apiclient.ClientOptions{ConfigPath: testConfigFilePath})
command.Run(nil, []string{"localhost:8080"})
localConfig, err = localconfig.ReadLocalConfig(testConfigFilePath)
assert.NoError(t, err)
assert.Equal(t, localConfig.CurrentContext, "localhost:8080")
assert.NotContains(t, localConfig.Users, localconfig.User{AuthToken: "vErrYS3c3tReFRe$hToken", Name: "localhost:8080"})
assert.Contains(t, localConfig.Contexts, localconfig.ContextRef{Name: "argocd.example.com:443", Server: "argocd.example.com:443", User: "argocd.example.com:443"})
// Write the original contents back so the test leaves no changes behind in git
err = ioutil.WriteFile(testConfigFilePath, []byte(testConfig), os.ModePerm)
assert.NoError(t, err)
}

View File

@@ -19,8 +19,8 @@ import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
projectpkg "github.com/argoproj/argo-cd/pkg/apiclient/project"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/project"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/git"
@@ -125,7 +125,7 @@ func NewProjectCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
_, err := projIf.Create(context.Background(), &projectpkg.ProjectCreateRequest{Project: &proj})
_, err := projIf.Create(context.Background(), &project.ProjectCreateRequest{Project: &proj})
errors.CheckError(err)
},
}
@@ -150,7 +150,7 @@ func NewProjectSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
visited := 0
@@ -171,7 +171,7 @@ func NewProjectSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
os.Exit(1)
}
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
@@ -195,7 +195,7 @@ func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *co
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
for _, dest := range proj.Spec.Destinations {
@@ -204,7 +204,7 @@ func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *co
}
}
proj.Spec.Destinations = append(proj.Spec.Destinations, v1alpha1.ApplicationDestination{Server: server, Namespace: namespace})
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
@@ -227,7 +227,7 @@ func NewProjectRemoveDestinationCommand(clientOpts *argocdclient.ClientOptions)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index := -1
@@ -241,7 +241,7 @@ func NewProjectRemoveDestinationCommand(clientOpts *argocdclient.ClientOptions)
log.Fatal("Specified destination does not exist in project")
} else {
proj.Spec.Destinations = append(proj.Spec.Destinations[:index], proj.Spec.Destinations[index+1:]...)
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
@@ -265,7 +265,7 @@ func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
for _, item := range proj.Spec.SourceRepos {
@@ -279,7 +279,7 @@ func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
}
}
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos, url)
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
@@ -299,11 +299,11 @@ func modifyProjectResourceCmd(cmdUse, cmdDesc string, clientOpts *argocdclient.C
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
if action(proj, group, kind) {
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
@@ -399,7 +399,7 @@ func NewProjectRemoveSourceCommand(clientOpts *argocdclient.ClientOptions) *cobr
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index := -1
@@ -413,7 +413,7 @@ func NewProjectRemoveSourceCommand(clientOpts *argocdclient.ClientOptions) *cobr
fmt.Printf("Source repository '%s' does not exist in project\n", url)
} else {
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos[:index], proj.Spec.SourceRepos[index+1:]...)
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
@@ -435,7 +435,7 @@ func NewProjectDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
for _, name := range args {
_, err := projIf.Delete(context.Background(), &projectpkg.ProjectQuery{Name: name})
_, err := projIf.Delete(context.Background(), &project.ProjectQuery{Name: name})
errors.CheckError(err)
}
},
@@ -443,44 +443,24 @@ func NewProjectDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
return command
}
// Print list of project names
func printProjectNames(projects []v1alpha1.AppProject) {
for _, p := range projects {
fmt.Println(p.Name)
}
}
// Print table of project info
func printProjectTable(projects []v1alpha1.AppProject) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\n")
for _, p := range projects {
printProjectLine(w, &p)
}
_ = w.Flush()
}
// NewProjectListCommand returns a new instance of an `argocd proj list` command
func NewProjectListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
)
var command = &cobra.Command{
Use: "list",
Short: "List projects",
Run: func(c *cobra.Command, args []string) {
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
projects, err := projIf.List(context.Background(), &projectpkg.ProjectQuery{})
projects, err := projIf.List(context.Background(), &project.ProjectQuery{})
errors.CheckError(err)
if output == "name" {
printProjectNames(projects.Items)
} else {
printProjectTable(projects.Items)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\n")
for _, p := range projects.Items {
printProjectLine(w, &p)
}
_ = w.Flush()
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: wide|name")
return command
}
@@ -533,7 +513,7 @@ func NewProjectGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
p, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
p, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
fmt.Printf(printProjFmtStr, "Name:", p.Name)
fmt.Printf(printProjFmtStr, "Description:", p.Spec.Description)
@@ -594,7 +574,7 @@ func NewProjectEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
projData, err := json.Marshal(proj.Spec)
errors.CheckError(err)
@@ -611,12 +591,12 @@ func NewProjectEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
if err != nil {
return err
}
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
if err != nil {
return err
}
proj.Spec = updatedSpec
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
if err != nil {
return fmt.Errorf("Failed to update project:\n%v", err)
}

View File

@@ -12,9 +12,10 @@ import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
projectpkg "github.com/argoproj/argo-cd/pkg/apiclient/project"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/project"
"github.com/argoproj/argo-cd/util"
projectutil "github.com/argoproj/argo-cd/util/project"
)
const (
@@ -39,8 +40,6 @@ func NewProjectRoleCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
roleCommand.AddCommand(NewProjectRoleDeleteTokenCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleAddPolicyCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleRemovePolicyCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleAddGroupCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleRemoveGroupCommand(clientOpts))
return roleCommand
}
@@ -62,16 +61,16 @@ func NewProjectRoleAddPolicyCommand(clientOpts *argocdclient.ClientOptions) *cob
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
role, roleIndex, err := proj.GetRoleByName(roleName)
role, roleIndex, err := projectutil.GetRoleByName(proj, roleName)
errors.CheckError(err)
policy := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
proj.Spec.Roles[roleIndex].Policies = append(role.Policies, policy)
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
@@ -97,10 +96,10 @@ func NewProjectRoleRemovePolicyCommand(clientOpts *argocdclient.ClientOptions) *
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
role, roleIndex, err := proj.GetRoleByName(roleName)
role, roleIndex, err := projectutil.GetRoleByName(proj, roleName)
errors.CheckError(err)
policyToRemove := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
@@ -116,7 +115,7 @@ func NewProjectRoleRemovePolicyCommand(clientOpts *argocdclient.ClientOptions) *
}
role.Policies[duplicateIndex] = role.Policies[len(role.Policies)-1]
proj.Spec.Roles[roleIndex].Policies = role.Policies[:len(role.Policies)-1]
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
@@ -142,17 +141,17 @@ func NewProjectRoleCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
_, _, err = proj.GetRoleByName(roleName)
_, _, err = projectutil.GetRoleByName(proj, roleName)
if err == nil {
fmt.Printf("Role '%s' already exists\n", roleName)
return
}
proj.Spec.Roles = append(proj.Spec.Roles, v1alpha1.ProjectRole{Name: roleName, Description: description})
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
fmt.Printf("Role '%s' created\n", roleName)
},
@@ -176,10 +175,10 @@ func NewProjectRoleDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
_, index, err := proj.GetRoleByName(roleName)
_, index, err := projectutil.GetRoleByName(proj, roleName)
if err != nil {
fmt.Printf("Role '%s' does not exist in project\n", roleName)
return
@@ -187,7 +186,7 @@ func NewProjectRoleDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.
proj.Spec.Roles[index] = proj.Spec.Roles[len(proj.Spec.Roles)-1]
proj.Spec.Roles = proj.Spec.Roles[:len(proj.Spec.Roles)-1]
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
fmt.Printf("Role '%s' deleted\n", roleName)
},
@@ -214,7 +213,7 @@ func NewProjectRoleCreateTokenCommand(clientOpts *argocdclient.ClientOptions) *c
defer util.Close(conn)
duration, err := timeutil.ParseDuration(expiresIn)
errors.CheckError(err)
token, err := projIf.CreateToken(context.Background(), &projectpkg.ProjectTokenCreateRequest{Project: projName, Role: roleName, ExpiresIn: int64(duration.Seconds())})
token, err := projIf.CreateToken(context.Background(), &project.ProjectTokenCreateRequest{Project: projName, Role: roleName, ExpiresIn: int64(duration.Seconds())})
errors.CheckError(err)
fmt.Println(token.Token)
},
@@ -242,35 +241,15 @@ func NewProjectRoleDeleteTokenCommand(clientOpts *argocdclient.ClientOptions) *c
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
_, err = projIf.DeleteToken(context.Background(), &projectpkg.ProjectTokenDeleteRequest{Project: projName, Role: roleName, Iat: issuedAt})
_, err = projIf.DeleteToken(context.Background(), &project.ProjectTokenDeleteRequest{Project: projName, Role: roleName, Iat: issuedAt})
errors.CheckError(err)
},
}
return command
}
// Print list of project role names
func printProjectRoleListName(roles []v1alpha1.ProjectRole) {
for _, role := range roles {
fmt.Println(role.Name)
}
}
// Print table of project roles
func printProjectRoleListTable(roles []v1alpha1.ProjectRole) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ROLE-NAME\tDESCRIPTION\n")
for _, role := range roles {
fmt.Fprintf(w, "%s\t%s\n", role.Name, role.Description)
}
_ = w.Flush()
}
// NewProjectRoleListCommand returns a new instance of an `argocd proj role list` command
func NewProjectRoleListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
)
var command = &cobra.Command{
Use: "list PROJECT",
Short: "List all the roles in a project",
@@ -283,16 +262,16 @@ func NewProjectRoleListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
project, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
project, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
if output == "name" {
printProjectRoleListName(project.Spec.Roles)
} else {
printProjectRoleListTable(project.Spec.Roles)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ROLE-NAME\tDESCRIPTION\n")
for _, role := range project.Spec.Roles {
fmt.Fprintf(w, "%s\t%s\n", role.Name, role.Description)
}
_ = w.Flush()
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: wide|name")
return command
}
@@ -311,10 +290,10 @@ func NewProjectRoleGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
role, _, err := proj.GetRoleByName(roleName)
role, _, err := projectutil.GetRoleByName(proj, roleName)
errors.CheckError(err)
printRoleFmtStr := "%-15s%s\n"
@@ -343,24 +322,24 @@ func NewProjectRoleGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
func NewProjectRoleAddGroupCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "add-group PROJECT ROLE-NAME GROUP-CLAIM",
Short: "Add a group claim to a project role",
Short: "Add a policy to a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName, roleName, groupName := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
updated, err := proj.AddGroupToRole(roleName, groupName)
updated, err := projectutil.AddGroupToRole(proj, roleName, groupName)
errors.CheckError(err)
if !updated {
if updated {
fmt.Printf("Group '%s' already present in role '%s'\n", groupName, roleName)
return
}
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
fmt.Printf("Group '%s' added to role '%s'\n", groupName, roleName)
},
@@ -381,15 +360,15 @@ func NewProjectRoleRemoveGroupCommand(clientOpts *argocdclient.ClientOptions) *c
projName, roleName, groupName := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
updated, err := proj.RemoveGroupFromRole(roleName, groupName)
updated, err := projectutil.RemoveGroupFromRole(proj, roleName, groupName)
errors.CheckError(err)
if !updated {
fmt.Printf("Group '%s' not present in role '%s'\n", groupName, roleName)
return
}
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
fmt.Printf("Group '%s' removed from role '%s'\n", groupName, roleName)
},

View File

@@ -5,13 +5,13 @@ import (
"fmt"
"os"
"github.com/coreos/go-oidc"
oidc "github.com/coreos/go-oidc"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
settingspkg "github.com/argoproj/argo-cd/pkg/apiclient/settings"
"github.com/argoproj/argo-cd/server/settings"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/localconfig"
"github.com/argoproj/argo-cd/util/session"
@@ -63,7 +63,7 @@ func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comm
httpClient, err := acdClient.HTTPClient()
errors.CheckError(err)
ctx = oidc.ClientContext(ctx, httpClient)
acdSet, err := setIf.Get(ctx, &settingspkg.SettingsQuery{})
acdSet, err := setIf.Get(ctx, &settings.SettingsQuery{})
errors.CheckError(err)
oauth2conf, provider, err := acdClient.OIDCConfig(ctx, acdSet)
errors.CheckError(err)

View File

@@ -12,8 +12,8 @@ import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
repositorypkg "github.com/argoproj/argo-cd/pkg/apiclient/repository"
appsv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/repository"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/git"
@@ -23,7 +23,7 @@ import (
func NewRepoCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "repo",
Short: "Manage git repository connection parameters",
Short: "Manage git repository credentials",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
@@ -39,105 +39,46 @@ func NewRepoCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
// NewRepoAddCommand returns a new instance of an `argocd repo add` command
func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
repo appsv1.Repository
upsert bool
sshPrivateKeyPath string
insecureIgnoreHostKey bool
insecureSkipServerVerification bool
tlsClientCertPath string
tlsClientCertKeyPath string
enableLfs bool
repo appsv1.Repository
upsert bool
sshPrivateKeyPath string
insecureIgnoreHostKey bool
)
// For better readability and easier formatting
var repoAddExamples = `
Add an SSH repository using a private key for authentication, ignoring the server's host key:
$ argocd repo add git@git.example.com --insecure-ignore-host-key --ssh-private-key-path ~/id_rsa
Add an HTTPS repository using username/password and TLS client certificates:
$ argocd repo add https://git.example.com --username git --password secret --tls-client-cert-path ~/mycert.crt --tls-client-cert-key-path ~/mycert.key
Add an HTTPS repository using username/password without verifying the server's TLS certificate:
$ argocd repo add https://git.example.com --username git --password secret --insecure-skip-server-verification
`
var command = &cobra.Command{
Use: "add REPOURL",
Short: "Add git repository connection parameters",
Example: repoAddExamples,
Use: "add REPO",
Short: "Add git repository credentials",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
// Repository URL
repo.Repo = args[0]
// Specifying ssh-private-key-path is only valid for SSH repositories
if sshPrivateKeyPath != "" {
if ok, _ := git.IsSSHURL(repo.Repo); ok {
keyData, err := ioutil.ReadFile(sshPrivateKeyPath)
if err != nil {
log.Fatal(err)
}
repo.SSHPrivateKey = string(keyData)
} else {
err := fmt.Errorf("--ssh-private-key-path is only supported for SSH repositories.")
errors.CheckError(err)
keyData, err := ioutil.ReadFile(sshPrivateKeyPath)
if err != nil {
log.Fatal(err)
}
repo.SSHPrivateKey = string(keyData)
}
// tls-client-cert-path and tls-client-cert-key-path must always be
// specified together
if (tlsClientCertPath != "" && tlsClientCertKeyPath == "") || (tlsClientCertPath == "" && tlsClientCertKeyPath != "") {
err := fmt.Errorf("--tls-client-cert-path and --tls-client-cert-key-path must be specified together")
errors.CheckError(err)
}
// Specifying tls-client-cert-path is only valid for HTTPS repositories
if tlsClientCertPath != "" {
if git.IsHTTPSURL(repo.Repo) {
tlsCertData, err := ioutil.ReadFile(tlsClientCertPath)
errors.CheckError(err)
tlsCertKey, err := ioutil.ReadFile(tlsClientCertKeyPath)
errors.CheckError(err)
repo.TLSClientCertData = string(tlsCertData)
repo.TLSClientCertKey = string(tlsCertKey)
} else {
err := fmt.Errorf("--tls-client-cert-path is only supported for HTTPS repositories")
errors.CheckError(err)
}
}
// InsecureIgnoreHostKey is deprecated and only here for backwards compat
repo.InsecureIgnoreHostKey = insecureIgnoreHostKey
repo.Insecure = insecureSkipServerVerification
repo.EnableLFS = enableLfs
// First test the repo *without* username/password. This gives us a hint on whether this
// is a private repo.
// NOTE: it is important not to run git commands to test git credentials on the user's
// system, since doing so may interfere with their git credential store (e.g. the macOS
// keychain). See issue #315, and the sketch after this function.
err := git.TestRepo(repo.Repo, "", "", repo.SSHPrivateKey, repo.InsecureIgnoreHostKey)
if err != nil {
if yes, _ := git.IsSSHURL(repo.Repo); yes {
// If we failed using git SSH credentials, then the repo is automatically bad
log.Fatal(err)
}
// If we can't test the repo, it's probably private. Prompt for credentials and
// let the server test it.
repo.Username, repo.Password = cli.PromptCredentials(repo.Username, repo.Password)
}
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
// If the user set a username, but didn't supply password via --password,
// then we prompt for it
if repo.Username != "" && repo.Password == "" {
repo.Password = cli.PromptPassword(repo.Password)
}
// We let the server check access to the repository before adding it. If
// it is a private repo, but we cannot access it with the credentials
// that were supplied, we bail out.
repoAccessReq := repositorypkg.RepoAccessQuery{
Repo: repo.Repo,
Username: repo.Username,
Password: repo.Password,
SshPrivateKey: repo.SSHPrivateKey,
TlsClientCertData: repo.TLSClientCertData,
TlsClientCertKey: repo.TLSClientCertKey,
Insecure: repo.IsInsecure(),
}
_, err := repoIf.ValidateAccess(context.Background(), &repoAccessReq)
errors.CheckError(err)
repoCreateReq := repositorypkg.RepoCreateRequest{
repoCreateReq := repository.RepoCreateRequest{
Repo: &repo,
Upsert: upsert,
}
@@ -149,11 +90,7 @@ Add a HTTPS repository using username/password without verifying the server's TL
command.Flags().StringVar(&repo.Username, "username", "", "username to the repository")
command.Flags().StringVar(&repo.Password, "password", "", "password to the repository")
command.Flags().StringVar(&sshPrivateKeyPath, "ssh-private-key-path", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
command.Flags().StringVar(&tlsClientCertPath, "tls-client-cert-path", "", "path to the TLS client cert (must be PEM format)")
command.Flags().StringVar(&tlsClientCertKeyPath, "tls-client-cert-key-path", "", "path to the TLS client cert's key path (must be PEM format)")
command.Flags().BoolVar(&insecureIgnoreHostKey, "insecure-ignore-host-key", false, "disables SSH strict host key checking (deprecated, use --insecure-skip-server-verification instead)")
command.Flags().BoolVar(&insecureSkipServerVerification, "insecure-skip-server-verification", false, "disables server certificate and host key checks")
command.Flags().BoolVar(&enableLfs, "enable-lfs", false, "enable git-lfs (Large File Support) on this repository")
command.Flags().BoolVar(&insecureIgnoreHostKey, "insecure-ignore-host-key", false, "disables SSH strict host key checking")
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing repository with the same name even if the spec differs")
return command
}
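The probe-then-prompt flow above is easy to miss in the diff, so here is a minimal sketch of the decision logic, with testRepo as a hypothetical stand-in for the anonymous access probe (not the actual git.TestRepo signature):

package main

import (
    "errors"
    "fmt"
)

// testRepo is a hypothetical stand-in for an access probe against a
// repository URL. Crucially, it must not shell out to the local git binary,
// so the user's credential helpers (e.g. the macOS keychain) stay untouched.
func testRepo(url, username, password string) error {
    if username == "" {
        return errors.New("authentication required") // pretend the repo is private
    }
    return nil
}

func main() {
    url := "https://git.example.com/private/repo.git"
    // Probe without credentials first: success suggests a public repo.
    if err := testRepo(url, "", ""); err != nil {
        // The anonymous probe failed, so the repo is probably private:
        // collect credentials and retry (normally via cli.PromptCredentials,
        // with the final check delegated to the API server).
        username, password := "git", "secret"
        if err := testRepo(url, username, password); err != nil {
            fmt.Println("cannot access repository:", err)
            return
        }
    }
    fmt.Println("repository is accessible; proceeding to add it")
}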
@@ -171,7 +108,7 @@ func NewRepoRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
for _, repoURL := range args {
_, err := repoIf.Delete(context.Background(), &repositorypkg.RepoQuery{Repo: repoURL})
_, err := repoIf.Delete(context.Background(), &repository.RepoQuery{Repo: repoURL})
errors.CheckError(err)
}
},
@@ -179,49 +116,23 @@ func NewRepoRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
return command
}
// Print table of repo info
func printRepoTable(repos []appsv1.Repository) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "REPO\tINSECURE\tLFS\tUSER\tSTATUS\tMESSAGE\n")
for _, r := range repos {
var username string
if r.Username == "" {
username = "-"
} else {
username = r.Username
}
fmt.Fprintf(w, "%s\t%v\t%v\t%s\t%s\t%s\n", r.Repo, r.IsInsecure(), r.EnableLFS, username, r.ConnectionState.Status, r.ConnectionState.Message)
}
_ = w.Flush()
}
// Print list of repo urls
func printRepoUrls(repos []appsv1.Repository) {
for _, r := range repos {
fmt.Println(r.Repo)
}
}
// NewRepoListCommand returns a new instance of an `argocd repo list` command
func NewRepoListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
)
var command = &cobra.Command{
Use: "list",
Short: "List configured repositories",
Run: func(c *cobra.Command, args []string) {
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
repos, err := repoIf.List(context.Background(), &repositorypkg.RepoQuery{})
repos, err := repoIf.List(context.Background(), &repository.RepoQuery{})
errors.CheckError(err)
if output == "url" {
printRepoUrls(repos.Items)
} else {
printRepoTable(repos.Items)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "REPO\tUSER\tSTATUS\tMESSAGE\n")
for _, r := range repos.Items {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", r.Repo, r.Username, r.ConnectionState.Status, r.ConnectionState.Message)
}
_ = w.Flush()
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: wide|url")
return command
}

View File

@@ -36,7 +36,6 @@ func NewCommand() *cobra.Command {
},
}
command.AddCommand(NewCompletionCommand())
command.AddCommand(NewVersionCmd(&clientOpts))
command.AddCommand(NewClusterCommand(&clientOpts, pathOpts))
command.AddCommand(NewApplicationCommand(&clientOpts))
@@ -46,8 +45,6 @@ func NewCommand() *cobra.Command {
command.AddCommand(NewContextCommand(&clientOpts))
command.AddCommand(NewProjectCommand(&clientOpts))
command.AddCommand(NewAccountCommand(&clientOpts))
command.AddCommand(NewLogoutCommand(&clientOpts))
command.AddCommand(NewCertCommand(&clientOpts))
defaultLocalConfigPath, err := localconfig.DefaultLocalConfigPath()
errors.CheckError(err)


@@ -1,18 +0,0 @@
contexts:
- name: argocd.example.com:443
server: argocd.example.com:443
user: argocd.example.com:443
- name: localhost:8080
server: localhost:8080
user: localhost:8080
current-context: localhost:8080
servers:
- server: argocd.example.com:443
- plain-text: true
server: localhost:8080
users:
- auth-token: vErrYS3c3tReFRe$hToken
name: argocd.example.com:443
refresh-token: vErrYS3c3tReFRe$hToken
- auth-token: vErrYS3c3tReFRe$hToken
name: localhost:8080


@@ -7,7 +7,7 @@ import (
"github.com/golang/protobuf/ptypes/empty"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/common"
argocd "github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util"
@@ -22,7 +22,7 @@ func NewVersionCmd(clientOpts *argocdclient.ClientOptions) *cobra.Command {
Use: "version",
Short: fmt.Sprintf("Print version information"),
Run: func(cmd *cobra.Command, args []string) {
version := common.GetVersion()
version := argocd.GetVersion()
fmt.Printf("%s: %s\n", cliName, version)
if !short {
fmt.Printf(" BuildDate: %s\n", version.BuildDate)


@@ -15,36 +15,14 @@ const (
ArgoCDConfigMapName = "argocd-cm"
ArgoCDSecretName = "argocd-secret"
ArgoCDRBACConfigMapName = "argocd-rbac-cm"
// Contains SSH known hosts data for connecting repositories. Will get mounted as volume to pods
ArgoCDKnownHostsConfigMapName = "argocd-ssh-known-hosts-cm"
// Contains TLS certificate data for connecting repositories. Will get mounted as volume to pods
ArgoCDTLSCertsConfigMapName = "argocd-tls-certs-cm"
)
// Default system namespace
const (
DefaultSystemNamespace = "kube-system"
)
// Default listener ports for ArgoCD components
const (
DefaultPortAPIServer = 8080
DefaultPortRepoServer = 8081
DefaultPortArgoCDMetrics = 8082
DefaultPortArgoCDAPIServerMetrics = 8083
DefaultPortRepoServerMetrics = 8084
)
// Default paths on the pod's file system
const (
// The default base path where application config is located
DefaultPathAppConfig = "/app/config"
// The default path where TLS certificates for repositories are located
DefaultPathTLSConfig = "/app/config/tls"
// The default path where SSH known hosts are stored
DefaultPathSSHConfig = "/app/config/ssh"
// Default name for the SSH known hosts file
DefaultSSHKnownHostsName = "ssh_known_hosts"
PortAPIServer = 8080
PortRepoServer = 8081
PortArgoCDMetrics = 8082
PortArgoCDAPIServerMetrics = 8083
PortRepoServerMetrics = 8084
)
// Argo CD application related constants
@@ -97,12 +75,6 @@ const (
// LabelValueSecretTypeCluster indicates a secret type of cluster
LabelValueSecretTypeCluster = "cluster"
// AnnotationCompareOptions is a comma-separated list of options for comparison
AnnotationCompareOptions = "argocd.argoproj.io/compare-options"
// AnnotationSyncOptions is a comma-separated list of options for syncing
AnnotationSyncOptions = "argocd.argoproj.io/sync-options"
// AnnotationSyncWave indicates which wave of the sync the resource or hook should be in
AnnotationSyncWave = "argocd.argoproj.io/sync-wave"
// AnnotationKeyHook contains the hook type of a resource
AnnotationKeyHook = "argocd.argoproj.io/hook"
// AnnotationKeyHookDeletePolicy is the policy of deleting a hook
@@ -131,10 +103,6 @@ const (
// EnvVarFakeInClusterConfig is an environment variable to fake an in-cluster RESTConfig using
// the current kubectl context (for development purposes)
EnvVarFakeInClusterConfig = "ARGOCD_FAKE_IN_CLUSTER"
// Overrides the location where SSH known hosts for repo access data is stored
EnvVarSSHDataPath = "ARGOCD_SSH_DATA_PATH"
// Overrides the location where TLS certificate for repo access data is stored
EnvVarTLSDataPath = "ARGOCD_TLS_DATA_PATH"
)
const (


@@ -1,13 +1,11 @@
package clusterauth
package common
import (
"fmt"
"strings"
"time"
jwt "github.com/dgrijalva/jwt-go"
log "github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
apiv1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -35,13 +33,13 @@ var ArgoCDManagerPolicyRules = []rbacv1.PolicyRule{
},
}
// CreateServiceAccount creates a service account in a given namespace
// CreateServiceAccount creates a service account
func CreateServiceAccount(
clientset kubernetes.Interface,
serviceAccountName string,
namespace string,
) error {
serviceAccount := corev1.ServiceAccount{
serviceAccount := apiv1.ServiceAccount{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
Kind: "ServiceAccount",
@@ -54,12 +52,12 @@ func CreateServiceAccount(
_, err := clientset.CoreV1().ServiceAccounts(namespace).Create(&serviceAccount)
if err != nil {
if !apierr.IsAlreadyExists(err) {
return fmt.Errorf("Failed to create service account %q in namespace %q: %v", serviceAccountName, namespace, err)
return fmt.Errorf("Failed to create service account %q: %v", serviceAccountName, err)
}
log.Infof("ServiceAccount %q already exists in namespace %q", serviceAccountName, namespace)
log.Infof("ServiceAccount %q already exists", serviceAccountName)
return nil
}
log.Infof("ServiceAccount %q created in namespace %q", serviceAccountName, namespace)
log.Infof("ServiceAccount %q created", serviceAccountName)
return nil
}
@@ -138,7 +136,8 @@ func CreateClusterRoleBinding(
}
// InstallClusterManagerRBAC installs RBAC resources for a cluster manager to operate a cluster. Returns a token
func InstallClusterManagerRBAC(clientset kubernetes.Interface, ns string) (string, error) {
func InstallClusterManagerRBAC(clientset kubernetes.Interface) (string, error) {
const ns = "kube-system"
err := CreateServiceAccount(clientset, ArgoCDManagerServiceAccount, ns)
if err != nil {
@@ -155,7 +154,7 @@ func InstallClusterManagerRBAC(clientset kubernetes.Interface, ns string) (strin
return "", err
}
var serviceAccount *corev1.ServiceAccount
var serviceAccount *apiv1.ServiceAccount
var secretName string
err = wait.Poll(500*time.Millisecond, 30*time.Second, func() (bool, error) {
serviceAccount, err = clientset.CoreV1().ServiceAccounts(ns).Get(ArgoCDManagerServiceAccount, metav1.GetOptions{})
@@ -217,106 +216,3 @@ func UninstallRBAC(clientset kubernetes.Interface, namespace, bindingName, roleN
}
return nil
}
type ServiceAccountClaims struct {
Sub string `json:"sub"`
Iss string `json:"iss"`
Namespace string `json:"kubernetes.io/serviceaccount/namespace"`
SecretName string `json:"kubernetes.io/serviceaccount/secret.name"`
ServiceAccountName string `json:"kubernetes.io/serviceaccount/service-account.name"`
ServiceAccountUID string `json:"kubernetes.io/serviceaccount/service-account.uid"`
}
// Valid satisfies the jwt.Claims interface to enable JWT parsing
func (sac *ServiceAccountClaims) Valid() error {
return nil
}
// ParseServiceAccountToken parses a Kubernetes service account token
func ParseServiceAccountToken(token string) (*ServiceAccountClaims, error) {
parser := &jwt.Parser{
SkipClaimsValidation: true,
}
var claims ServiceAccountClaims
_, _, err := parser.ParseUnverified(token, &claims)
if err != nil {
return nil, fmt.Errorf("Failed to parse service account token: %s", err)
}
return &claims, nil
}
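ParseServiceAccountToken deliberately skips signature verification: the token was just read out of the cluster's own secret, so only the embedded claims are of interest. The same jwt-go technique, sketched with generic MapClaims and a placeholder token string:
package main

import (
	"fmt"

	jwt "github.com/dgrijalva/jwt-go"
)

func main() {
	// ParseUnverified decodes claims without checking the signature;
	// SkipClaimsValidation additionally skips exp/nbf checks. Only
	// appropriate for tokens obtained from an already-trusted source.
	parser := &jwt.Parser{SkipClaimsValidation: true}
	claims := jwt.MapClaims{}
	token := "eyJhbGciOi..." // placeholder, not a real token
	if _, _, err := parser.ParseUnverified(token, claims); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println(claims["kubernetes.io/serviceaccount/namespace"])
}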
// GenerateNewClusterManagerSecret creates a new secret derived from the same metadata as the existing one
// and waits until the secret is populated with a bearer token
func GenerateNewClusterManagerSecret(clientset kubernetes.Interface, claims *ServiceAccountClaims) (*corev1.Secret, error) {
secretsClient := clientset.CoreV1().Secrets(claims.Namespace)
existingSecret, err := secretsClient.Get(claims.SecretName, metav1.GetOptions{})
if err != nil {
return nil, err
}
var newSecret corev1.Secret
secretNameSplit := strings.Split(claims.SecretName, "-")
if len(secretNameSplit) > 0 {
secretNameSplit = secretNameSplit[:len(secretNameSplit)-1]
}
newSecret.Type = corev1.SecretTypeServiceAccountToken
newSecret.GenerateName = strings.Join(secretNameSplit, "-") + "-"
newSecret.Annotations = existingSecret.Annotations
// We will create an empty secret and let kubernetes populate the data
newSecret.Data = nil
created, err := secretsClient.Create(&newSecret)
if err != nil {
return nil, err
}
err = wait.Poll(500*time.Millisecond, 30*time.Second, func() (bool, error) {
created, err = secretsClient.Get(created.Name, metav1.GetOptions{})
if err != nil {
return false, err
}
if len(created.Data) == 0 {
return false, nil
}
return true, nil
})
if err != nil {
return nil, fmt.Errorf("Timed out waiting for secret to generate new token")
}
return created, nil
}
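The wait.Poll call above is the usual client-go polling idiom: invoke a condition function on an interval until it reports done, returns an error, or the timeout elapses (yielding a timeout error). A small sketch with invented timings:
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	err := wait.Poll(100*time.Millisecond, 5*time.Second, func() (bool, error) {
		attempts++
		return attempts >= 3, nil // pretend the secret is populated on the third check
	})
	fmt.Println(attempts, err) // 3 <nil>
}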
// RotateServiceAccountSecrets rotates the entries in the service accounts secrets list
func RotateServiceAccountSecrets(clientset kubernetes.Interface, claims *ServiceAccountClaims, newSecret *corev1.Secret) error {
// 1. update service account secrets list with new secret name while also removing the old name
saClient := clientset.CoreV1().ServiceAccounts(claims.Namespace)
sa, err := saClient.Get(claims.ServiceAccountName, metav1.GetOptions{})
if err != nil {
return err
}
var newSecretsList []corev1.ObjectReference
alreadyPresent := false
for _, objRef := range sa.Secrets {
if objRef.Name == claims.SecretName {
continue
}
if objRef.Name == newSecret.Name {
alreadyPresent = true
}
newSecretsList = append(newSecretsList, objRef)
}
if !alreadyPresent {
sa.Secrets = append(newSecretsList, corev1.ObjectReference{Name: newSecret.Name})
}
_, err = saClient.Update(sa)
if err != nil {
return err
}
// 2. delete existing secret object
secretsClient := clientset.CoreV1().Secrets(claims.Namespace)
err = secretsClient.Delete(claims.SecretName, &metav1.DeleteOptions{})
if !apierr.IsNotFound(err) {
return err
}
return nil
}


@@ -4,15 +4,15 @@ import (
"context"
"encoding/json"
"fmt"
"math"
"reflect"
"runtime/debug"
"strings"
"sync"
"time"
"github.com/argoproj/argo-cd/pkg/apis/application"
log "github.com/sirupsen/logrus"
"golang.org/x/sync/semaphore"
v1 "k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -28,13 +28,12 @@ import (
statecache "github.com/argoproj/argo-cd/controller/cache"
"github.com/argoproj/argo-cd/controller/metrics"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apis/application"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
appinformers "github.com/argoproj/argo-cd/pkg/client/informers/externalversions"
"github.com/argoproj/argo-cd/pkg/client/informers/externalversions/application/v1alpha1"
applisters "github.com/argoproj/argo-cd/pkg/client/listers/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
argocache "github.com/argoproj/argo-cd/util/cache"
@@ -48,21 +47,6 @@ const (
updateOperationStateTimeout = 1 * time.Second
)
type CompareWith int
const (
// Compare live application state against state defined in latest git revision.
CompareWithLatest CompareWith = 2
// Compare live application state against state defined using revision of most recent comparison.
CompareWithRecent CompareWith = 1
// Skip comparison and only refresh application resources tree
ComparisonWithNothing CompareWith = 0
)
func (a CompareWith) Max(b CompareWith) CompareWith {
return CompareWith(math.Max(float64(a), float64(b)))
}
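Max gives the comparison levels escalation semantics: when several refresh requests race in, the deepest one wins. A self-contained sketch (re-declaring the constants above) showing how a tree-only request is upgraded by a later managed-resource change:
package main

import (
	"fmt"
	"math"
)

type CompareWith int

const (
	CompareWithLatest     CompareWith = 2
	CompareWithRecent     CompareWith = 1
	ComparisonWithNothing CompareWith = 0
)

func (a CompareWith) Max(b CompareWith) CompareWith {
	return CompareWith(math.Max(float64(a), float64(b)))
}

func main() {
	requested := ComparisonWithNothing           // a tree-only refresh was queued
	requested = requested.Max(CompareWithRecent) // then a managed resource changed
	fmt.Println(requested)                       // 1: compare against the recent revision
}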
// ApplicationController is the controller for application resources.
type ApplicationController struct {
cache *argocache.Cache
@@ -79,14 +63,13 @@ type ApplicationController struct {
appStateManager AppStateManager
stateCache statecache.LiveStateCache
statusRefreshTimeout time.Duration
selfHealTimeout time.Duration
repoClientset apiclient.Clientset
repoClientset reposerver.Clientset
db db.ArgoDB
settings *settings_util.ArgoCDSettings
settingsMgr *settings_util.SettingsManager
refreshRequestedApps map[string]CompareWith
refreshRequestedApps map[string]bool
refreshRequestedAppsMutex *sync.Mutex
metricsServer *metrics.MetricsServer
kubectlSemaphore *semaphore.Weighted
}
type ApplicationControllerConfig struct {
@@ -100,14 +83,15 @@ func NewApplicationController(
settingsMgr *settings_util.SettingsManager,
kubeClientset kubernetes.Interface,
applicationClientset appclientset.Interface,
repoClientset apiclient.Clientset,
repoClientset reposerver.Clientset,
argoCache *argocache.Cache,
appResyncPeriod time.Duration,
selfHealTimeout time.Duration,
metricsPort int,
kubectlParallelismLimit int64,
) (*ApplicationController, error) {
db := db.NewDB(namespace, settingsMgr, kubeClientset)
settings, err := settingsMgr.GetSettings()
if err != nil {
return nil, err
}
kubectlCmd := kube.KubectlCmd{}
ctrl := ApplicationController{
cache: argoCache,
@@ -120,26 +104,18 @@ func NewApplicationController(
appOperationQueue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
db: db,
statusRefreshTimeout: appResyncPeriod,
refreshRequestedApps: make(map[string]CompareWith),
refreshRequestedApps: make(map[string]bool),
refreshRequestedAppsMutex: &sync.Mutex{},
auditLogger: argo.NewAuditLogger(namespace, kubeClientset, "argocd-application-controller"),
settingsMgr: settingsMgr,
selfHealTimeout: selfHealTimeout,
settings: settings,
}
if kubectlParallelismLimit > 0 {
ctrl.kubectlSemaphore = semaphore.NewWeighted(kubectlParallelismLimit)
}
kubectlCmd.OnKubectlRun = ctrl.onKubectlRun
appInformer, appLister := ctrl.newApplicationInformerAndLister()
projInformer := v1alpha1.NewAppProjectInformer(applicationClientset, namespace, appResyncPeriod, cache.Indexers{})
metricsAddr := fmt.Sprintf("0.0.0.0:%d", metricsPort)
ctrl.metricsServer = metrics.NewMetricsServer(metricsAddr, appLister, func() error {
_, err := kubeClientset.Discovery().ServerVersion()
return err
})
stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settingsMgr, kubectlCmd, ctrl.metricsServer, ctrl.handleAppUpdated)
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectlCmd, ctrl.settingsMgr, stateCache, projInformer, ctrl.metricsServer)
metricsAddr := fmt.Sprintf("0.0.0.0:%d", common.PortArgoCDMetrics)
ctrl.metricsServer = metrics.NewMetricsServer(metricsAddr, appLister)
stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settings, kubectlCmd, ctrl.metricsServer, ctrl.handleAppUpdated)
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectlCmd, ctrl.settings, stateCache, projInformer, ctrl.metricsServer)
ctrl.appInformer = appInformer
ctrl.appLister = appLister
ctrl.projInformer = projInformer
@@ -149,23 +125,6 @@ func NewApplicationController(
return &ctrl, nil
}
func (ctrl *ApplicationController) onKubectlRun(command string) (util.Closer, error) {
ctrl.metricsServer.IncKubectlExec(command)
if ctrl.kubectlSemaphore != nil {
if err := ctrl.kubectlSemaphore.Acquire(context.Background(), 1); err != nil {
return nil, err
}
ctrl.metricsServer.IncKubectlExecPending(command)
}
return util.NewCloser(func() error {
if ctrl.kubectlSemaphore != nil {
ctrl.kubectlSemaphore.Release(1)
ctrl.metricsServer.DecKubectlExecPending(command)
}
return nil
}), nil
}
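onKubectlRun gates kubectl invocations behind a weighted semaphore so at most kubectlParallelismLimit commands run concurrently, returning a Closer that releases the slot. The underlying golang.org/x/sync/semaphore pattern, sketched with a limit of 2:
package main

import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/semaphore"
)

func main() {
	sem := semaphore.NewWeighted(2) // at most 2 concurrent invocations
	var wg sync.WaitGroup
	ctx := context.Background()
	for i := 0; i < 4; i++ {
		if err := sem.Acquire(ctx, 1); err != nil { // blocks while 2 are in flight
			fmt.Println(err)
			return
		}
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			defer sem.Release(1) // free the slot when the command finishes
			fmt.Println("kubectl invocation", n)
		}(i)
	}
	wg.Wait()
}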
func isSelfReferencedApp(app *appv1.Application, ref v1.ObjectReference) bool {
gvk := ref.GroupVersionKind()
return ref.UID == app.UID &&
@@ -175,7 +134,7 @@ func isSelfReferencedApp(app *appv1.Application, ref v1.ObjectReference) bool {
gvk.Kind == application.ApplicationKind
}
func (ctrl *ApplicationController) handleAppUpdated(appName string, isManagedResource bool, ref v1.ObjectReference) {
func (ctrl *ApplicationController) handleAppUpdated(appName string, fullRefresh bool, ref v1.ObjectReference) {
skipForceRefresh := false
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(ctrl.namespace + "/" + appName)
@@ -185,17 +144,13 @@ func (ctrl *ApplicationController) handleAppUpdated(appName string, isManagedRes
}
if !skipForceRefresh {
level := ComparisonWithNothing
if isManagedResource {
level = CompareWithRecent
}
ctrl.requestAppRefresh(appName, level)
ctrl.requestAppRefresh(appName, fullRefresh)
}
ctrl.appRefreshQueue.Add(fmt.Sprintf("%s/%s", ctrl.namespace, appName))
}
func (ctrl *ApplicationController) setAppManagedResources(a *appv1.Application, comparisonResult *comparisonResult) (*appv1.ApplicationTree, error) {
managedResources, err := ctrl.managedResources(comparisonResult)
managedResources, err := ctrl.managedResources(a, comparisonResult)
if err != nil {
return nil, err
}
@@ -237,7 +192,7 @@ func (ctrl *ApplicationController) getResourceTree(a *appv1.Application, managed
},
})
} else {
err := ctrl.stateCache.IterateHierarchy(a.Spec.Destination.Server, live, func(child appv1.ResourceNode) {
err := ctrl.stateCache.IterateHierarchy(a.Spec.Destination.Server, kube.GetResourceKey(live), func(child appv1.ResourceNode) {
nodes = append(nodes, child)
})
if err != nil {
@@ -249,7 +204,7 @@ func (ctrl *ApplicationController) getResourceTree(a *appv1.Application, managed
return &appv1.ApplicationTree{Nodes: nodes}, nil
}
func (ctrl *ApplicationController) managedResources(comparisonResult *comparisonResult) ([]*appv1.ResourceDiff, error) {
func (ctrl *ApplicationController) managedResources(a *appv1.Application, comparisonResult *comparisonResult) ([]*appv1.ResourceDiff, error) {
items := make([]*appv1.ResourceDiff, len(comparisonResult.managedResources))
for i := range comparisonResult.managedResources {
res := comparisonResult.managedResources[i]
@@ -258,7 +213,6 @@ func (ctrl *ApplicationController) managedResources(comparisonResult *comparison
Name: res.Name,
Group: res.Group,
Kind: res.Kind,
Hook: res.Hook,
}
target := res.Target
@@ -310,13 +264,14 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
go ctrl.appInformer.Run(ctx.Done())
go ctrl.projInformer.Run(ctx.Done())
go ctrl.watchSettings(ctx)
if !cache.WaitForCacheSync(ctx.Done(), ctrl.appInformer.HasSynced, ctrl.projInformer.HasSynced) {
log.Error("Timed out waiting for caches to sync")
return
}
go func() { errors.CheckError(ctrl.stateCache.Run(ctx)) }()
go ctrl.stateCache.Run(ctx)
go func() { errors.CheckError(ctrl.metricsServer.ListenAndServe()) }()
for i := 0; i < statusProcessors; i++ {
@@ -336,20 +291,20 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
<-ctx.Done()
}
func (ctrl *ApplicationController) requestAppRefresh(appName string, compareWith CompareWith) {
func (ctrl *ApplicationController) requestAppRefresh(appName string, fullRefresh bool) {
ctrl.refreshRequestedAppsMutex.Lock()
defer ctrl.refreshRequestedAppsMutex.Unlock()
ctrl.refreshRequestedApps[appName] = compareWith.Max(ctrl.refreshRequestedApps[appName])
ctrl.refreshRequestedApps[appName] = fullRefresh || ctrl.refreshRequestedApps[appName]
}
func (ctrl *ApplicationController) isRefreshRequested(appName string) (bool, CompareWith) {
func (ctrl *ApplicationController) isRefreshRequested(appName string) (bool, bool) {
ctrl.refreshRequestedAppsMutex.Lock()
defer ctrl.refreshRequestedAppsMutex.Unlock()
level, ok := ctrl.refreshRequestedApps[appName]
fullRefresh, ok := ctrl.refreshRequestedApps[appName]
if ok {
delete(ctrl.refreshRequestedApps, appName)
}
return ok, level
return ok, fullRefresh
}
func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext bool) {
@@ -539,7 +494,6 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
ctrl.setOperationState(app, state)
logCtx.Infof("Initialized new operation: %v", *app.Operation)
}
ctrl.appStateManager.SyncAppState(app, state)
if state.Phase == appv1.OperationRunning {
@@ -561,13 +515,7 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
if state.Phase.Completed() {
// if we just completed an operation, force a refresh so that UI will report up-to-date
// sync/health information
if key, err := cache.MetaNamespaceKeyFunc(app); err == nil {
// force app refresh with using CompareWithLatest comparison type and trigger app reconciliation loop
ctrl.requestAppRefresh(app.Name, CompareWithLatest)
ctrl.appRefreshQueue.Add(key)
} else {
logCtx.Warnf("Fails to requeue application: %v", err)
}
ctrl.requestAppRefresh(app.ObjectMeta.Name, true)
}
}
@@ -602,10 +550,6 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
appClient := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace)
_, err = appClient.Patch(app.Name, types.MergePatchType, patchJSON)
if err != nil {
// Stop retrying updating deleted application
if apierr.IsNotFound(err) {
return nil
}
return err
}
log.Infof("updated '%s' operation (phase: %s)", app.Name, state.Phase)
@@ -662,7 +606,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
needRefresh, refreshType, comparisonLevel := ctrl.needRefreshAppStatus(origApp, ctrl.statusRefreshTimeout)
needRefresh, refreshType, fullRefresh := ctrl.needRefreshAppStatus(origApp, ctrl.statusRefreshTimeout)
if !needRefresh {
return
@@ -672,19 +616,13 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
defer func() {
reconcileDuration := time.Since(startTime)
ctrl.metricsServer.IncReconcile(origApp, reconcileDuration)
logCtx := log.WithFields(log.Fields{
"application": origApp.Name,
"time_ms": reconcileDuration.Seconds() * 1e3,
"level": comparisonLevel,
"dest-server": origApp.Spec.Destination.Server,
"dest-namespace": origApp.Spec.Destination.Namespace,
})
logCtx := log.WithFields(log.Fields{"application": origApp.Name, "time_ms": reconcileDuration.Seconds() * 1e3, "full": fullRefresh})
logCtx.Info("Reconciliation completed")
}()
app := origApp.DeepCopy()
logCtx := log.WithFields(log.Fields{"application": app.Name})
if comparisonLevel == ComparisonWithNothing {
if !fullRefresh {
if managedResources, err := ctrl.cache.GetAppManagedResources(app.Name); err != nil {
logCtx.Warnf("Failed to get cached managed resources for tree reconciliation, fallback to full reconciliation")
} else {
@@ -697,8 +635,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
return
}
}
now := metav1.Now()
app.Status.ObservedAt = &now
app.Status.ObservedAt = metav1.Now()
ctrl.persistAppStatus(origApp, &app.Status)
return
}
@@ -713,23 +650,13 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
return
}
var localManifests []string
if opState := app.Status.OperationState; opState != nil && opState.Operation.Sync != nil {
localManifests = opState.Operation.Sync.Manifests
compareResult, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, refreshType == appv1.RefreshTypeHard)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{Type: appv1.ApplicationConditionComparisonError, Message: err.Error()})
} else {
ctrl.normalizeApplication(origApp, app, compareResult.appSourceType)
conditions = append(conditions, compareResult.conditions...)
}
revision := app.Spec.Source.TargetRevision
if comparisonLevel == CompareWithRecent {
revision = app.Status.Sync.Revision
}
observedAt := metav1.Now()
compareResult := ctrl.appStateManager.CompareAppState(app, revision, app.Spec.Source, refreshType == appv1.RefreshTypeHard, localManifests)
ctrl.normalizeApplication(origApp, app, compareResult.appSourceType)
conditions = append(conditions, compareResult.conditions...)
tree, err := ctrl.setAppManagedResources(app, compareResult)
if err != nil {
logCtx.Errorf("Failed to cache app resources: %v", err)
@@ -737,15 +664,13 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
app.Status.Summary = tree.GetSummary()
}
syncErrCond := ctrl.autoSync(app, compareResult.syncStatus, compareResult.resources)
syncErrCond := ctrl.autoSync(app, compareResult.syncStatus)
if syncErrCond != nil {
conditions = append(conditions, *syncErrCond)
}
if app.Status.ReconciledAt == nil || comparisonLevel == CompareWithLatest {
app.Status.ReconciledAt = &observedAt
}
app.Status.ObservedAt = &observedAt
app.Status.ObservedAt = compareResult.reconciledAt
app.Status.ReconciledAt = compareResult.reconciledAt
app.Status.Sync = *compareResult.syncStatus
app.Status.Health = *compareResult.healthStatus
app.Status.Resources = compareResult.resources
@@ -759,33 +684,32 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
// Returns true if the application has never been compared, has changed, or the comparison result has expired.
// Additionally returns whether or not a full refresh was requested.
// If a full refresh is requested, target and live state should be reconciled; otherwise only the live state tree should be updated.
func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application, statusRefreshTimeout time.Duration) (bool, appv1.RefreshType, CompareWith) {
func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application, statusRefreshTimeout time.Duration) (bool, appv1.RefreshType, bool) {
logCtx := log.WithFields(log.Fields{"application": app.Name})
var reason string
compareWith := CompareWithLatest
fullRefresh := true
refreshType := appv1.RefreshTypeNormal
expired := app.Status.ReconciledAt == nil || app.Status.ReconciledAt.Add(statusRefreshTimeout).Before(time.Now().UTC())
expired := app.Status.ReconciledAt.Add(statusRefreshTimeout).Before(time.Now().UTC())
if requestedType, ok := app.IsRefreshRequested(); ok {
// user requested app refresh.
refreshType = requestedType
reason = fmt.Sprintf("%s refresh requested", refreshType)
} else if expired {
reason = fmt.Sprintf("comparison expired. reconciledAt: %v, expiry: %v", app.Status.ReconciledAt, statusRefreshTimeout)
} else if requested, full := ctrl.isRefreshRequested(app.Name); requested {
fullRefresh = full
reason = fmt.Sprintf("controller refresh requested")
} else if app.Status.Sync.Status == appv1.SyncStatusCodeUnknown && expired {
reason = "comparison status unknown"
} else if !app.Spec.Source.Equals(app.Status.Sync.ComparedTo.Source) {
reason = "spec.source differs"
} else if !app.Spec.Destination.Equals(app.Status.Sync.ComparedTo.Destination) {
reason = "spec.destination differs"
} else if requested, level := ctrl.isRefreshRequested(app.Name); requested {
compareWith = level
reason = fmt.Sprintf("controller refresh requested")
} else if expired {
reason = fmt.Sprintf("comparison expired. reconciledAt: %v, expiry: %v", app.Status.ReconciledAt, statusRefreshTimeout)
}
if reason != "" {
logCtx.Infof("Refreshing app status (%s), level (%d)", reason, compareWith)
return true, refreshType, compareWith
logCtx.Infof("Refreshing app status (%s)", reason)
return true, refreshType, fullRefresh
}
return false, refreshType, compareWith
return false, refreshType, fullRefresh
}
func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application) ([]appv1.ApplicationCondition, bool) {
@@ -823,7 +747,6 @@ func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application)
appv1.ApplicationConditionSharedResourceWarning: true,
appv1.ApplicationConditionSyncError: true,
appv1.ApplicationConditionRepeatedResourceWarning: true,
appv1.ApplicationConditionExcludedResourceWarning: true,
}
appConditions := make([]appv1.ApplicationCondition, 0)
for i := 0; i < len(app.Status.Conditions); i++ {
@@ -903,7 +826,7 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
}
// autoSync will initiate a sync operation for an application configured with automated sync
func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *appv1.SyncStatus, resources []appv1.ResourceStatus) *appv1.ApplicationCondition {
func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *appv1.SyncStatus) *appv1.ApplicationCondition {
if app.Spec.SyncPolicy == nil || app.Spec.SyncPolicy.Automated == nil {
return nil
}
@@ -922,52 +845,28 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
logCtx.Infof("Skipping auto-sync: application status is %s", syncStatus.Status)
return nil
}
desiredCommitSHA := syncStatus.Revision
alreadyAttempted, attemptPhase := alreadyAttemptedSync(app, desiredCommitSHA)
selfHeal := app.Spec.SyncPolicy.Automated.SelfHeal
op := appv1.Operation{
Sync: &appv1.SyncOperation{
Revision: desiredCommitSHA,
Prune: app.Spec.SyncPolicy.Automated.Prune,
},
}
// It is possible for manifests to remain OutOfSync even after a sync/kubectl apply (e.g.
// auto-sync with pruning disabled). We need to ensure that we do not keep Syncing an
// application in an infinite loop. To detect this, we only attempt the Sync if the revision
// and parameter overrides are different from our most recent sync operation.
if alreadyAttempted && (!selfHeal || !attemptPhase.Successful()) {
if !attemptPhase.Successful() {
if alreadyAttemptedSync(app, desiredCommitSHA) {
if app.Status.OperationState.Phase != appv1.OperationSucceeded {
logCtx.Warnf("Skipping auto-sync: failed previous sync attempt to %s", desiredCommitSHA)
message := fmt.Sprintf("Failed sync attempt to %s: %s", desiredCommitSHA, app.Status.OperationState.Message)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: message}
}
logCtx.Infof("Skipping auto-sync: most recent sync already to %s", desiredCommitSHA)
return nil
} else if alreadyAttempted && selfHeal {
if shouldSelfHeal, retryAfter := ctrl.shouldSelfHeal(app); shouldSelfHeal {
for _, resource := range resources {
if resource.Status != appv1.SyncStatusCodeSynced {
op.Sync.Resources = append(op.Sync.Resources, appv1.SyncOperationResource{
Kind: resource.Kind,
Group: resource.Group,
Name: resource.Name,
})
}
}
} else {
logCtx.Infof("Skipping auto-sync: already attempted sync to %s with timeout %v (retrying in %v)", desiredCommitSHA, ctrl.selfHealTimeout, retryAfter)
if key, err := cache.MetaNamespaceKeyFunc(app); err == nil {
ctrl.requestAppRefresh(app.Name, CompareWithLatest)
ctrl.appRefreshQueue.AddAfter(key, retryAfter)
} else {
logCtx.Warnf("Fails to requeue application: %v", err)
}
return nil
}
}
op := appv1.Operation{
Sync: &appv1.SyncOperation{
Revision: desiredCommitSHA,
Prune: app.Spec.SyncPolicy.Automated.Prune,
},
}
appIf := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace)
_, err := argo.SetAppOperation(appIf, app.Name, &op)
if err != nil {
@@ -982,12 +881,12 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
// alreadyAttemptedSync returns whether or not the most recent sync was performed against the
// commitSHA and with the same app source config that is currently set in the app
func alreadyAttemptedSync(app *appv1.Application, commitSHA string) (bool, appv1.OperationPhase) {
func alreadyAttemptedSync(app *appv1.Application, commitSHA string) bool {
if app.Status.OperationState == nil || app.Status.OperationState.Operation.Sync == nil || app.Status.OperationState.SyncResult == nil {
return false, ""
return false
}
if app.Status.OperationState.SyncResult.Revision != commitSHA {
return false, ""
return false
}
// Ignore differences in target revision, since we already just verified commitSHAs are equal,
// and we do not want to trigger auto-sync due to things like HEAD != master
@@ -995,21 +894,7 @@ func alreadyAttemptedSync(app *appv1.Application, commitSHA string) (bool, appv1
specSource.TargetRevision = ""
syncResSource := app.Status.OperationState.SyncResult.Source.DeepCopy()
syncResSource.TargetRevision = ""
return reflect.DeepEqual(app.Spec.Source, app.Status.OperationState.SyncResult.Source), app.Status.OperationState.Phase
}
func (ctrl *ApplicationController) shouldSelfHeal(app *appv1.Application) (bool, time.Duration) {
if app.Status.OperationState == nil {
return true, time.Duration(0)
}
var retryAfter time.Duration
if app.Status.OperationState.FinishedAt == nil {
retryAfter = ctrl.selfHealTimeout
} else {
retryAfter = ctrl.selfHealTimeout - time.Since(app.Status.OperationState.FinishedAt.Time)
}
return retryAfter <= 0, retryAfter
return reflect.DeepEqual(app.Spec.Source, app.Status.OperationState.SyncResult.Source)
}
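shouldSelfHeal throttles self-healing: a heal is allowed only once the configured timeout has elapsed since the last operation finished, and the remainder is surfaced as a retry delay. The arithmetic, sketched with invented numbers:
package main

import (
	"fmt"
	"time"
)

func main() {
	// With a 5-minute self-heal timeout and a sync that finished 2 minutes
	// ago, the controller should wait the remaining ~3 minutes.
	selfHealTimeout := 5 * time.Minute
	finishedAt := time.Now().Add(-2 * time.Minute)
	retryAfter := selfHealTimeout - time.Since(finishedAt)
	fmt.Println(retryAfter <= 0, retryAfter.Round(time.Second)) // false 3m0s
}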
func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.SharedIndexInformer, applisters.ApplicationLister) {
@@ -1040,7 +925,7 @@ func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.Shar
if oldOK && newOK {
if toggledAutomatedSync(oldApp, newApp) {
log.WithField("application", newApp.Name).Info("Enabled automated sync")
ctrl.requestAppRefresh(newApp.Name, CompareWithLatest)
ctrl.requestAppRefresh(newApp.Name, true)
}
}
ctrl.appRefreshQueue.Add(key)
@@ -1076,3 +961,39 @@ func toggledAutomatedSync(old *appv1.Application, new *appv1.Application) bool {
// nothing changed
return false
}
func (ctrl *ApplicationController) watchSettings(ctx context.Context) {
updateCh := make(chan *settings_util.ArgoCDSettings, 1)
ctrl.settingsMgr.Subscribe(updateCh)
prevAppLabelKey := ctrl.settings.GetAppInstanceLabelKey()
prevResourceExclusions := ctrl.settings.ResourceExclusions
prevResourceInclusions := ctrl.settings.ResourceInclusions
done := false
for !done {
select {
case newSettings := <-updateCh:
newAppLabelKey := newSettings.GetAppInstanceLabelKey()
*ctrl.settings = *newSettings
if prevAppLabelKey != newAppLabelKey {
log.Infof("label key changed: %s -> %s", prevAppLabelKey, newAppLabelKey)
ctrl.stateCache.Invalidate()
prevAppLabelKey = newAppLabelKey
}
if !reflect.DeepEqual(prevResourceExclusions, newSettings.ResourceExclusions) {
log.WithFields(log.Fields{"prevResourceExclusions": prevResourceExclusions, "newResourceExclusions": newSettings.ResourceExclusions}).Info("resource exclusions modified")
ctrl.stateCache.Invalidate()
prevResourceExclusions = newSettings.ResourceExclusions
}
if !reflect.DeepEqual(prevResourceInclusions, newSettings.ResourceInclusions) {
log.WithFields(log.Fields{"prevResourceInclusions": prevResourceInclusions, "newResourceInclusions": newSettings.ResourceInclusions}).Info("resource inclusions modified")
ctrl.stateCache.Invalidate()
prevResourceInclusions = newSettings.ResourceInclusions
}
case <-ctx.Done():
done = true
}
}
log.Info("shutting down settings watch")
ctrl.settingsMgr.Unsubscribe(updateCh)
close(updateCh)
}
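watchSettings is a subscribe-and-select loop: it drains the settings update channel, invalidates the state cache when a relevant field changed, and exits when the context is cancelled. The generic shape of that loop, sketched with a plain string channel standing in for settings updates:
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	updateCh := make(chan string, 1)
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	go func() { updateCh <- "new-label-key" }()
	for {
		select {
		case s := <-updateCh:
			fmt.Println("settings changed:", s) // e.g. invalidate caches here
		case <-ctx.Done():
			fmt.Println("shutting down settings watch")
			return
		}
	}
}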


@@ -2,30 +2,28 @@ package controller
import (
"context"
"encoding/json"
"testing"
"time"
"github.com/argoproj/argo-cd/common"
"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
corev1 "k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/kubernetes/fake"
kubetesting "k8s.io/client-go/testing"
"k8s.io/client-go/tools/cache"
"github.com/argoproj/argo-cd/common"
mockstatecache "github.com/argoproj/argo-cd/controller/cache/mocks"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/fake"
"github.com/argoproj/argo-cd/reposerver/apiclient"
mockrepoclient "github.com/argoproj/argo-cd/reposerver/apiclient/mocks"
mockreposerver "github.com/argoproj/argo-cd/reposerver/mocks"
"github.com/argoproj/argo-cd/reposerver/repository"
mockrepoclient "github.com/argoproj/argo-cd/reposerver/repository/mocks"
"github.com/argoproj/argo-cd/test"
utilcache "github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/kube"
@@ -34,9 +32,8 @@ import (
type fakeData struct {
apps []runtime.Object
manifestResponse *apiclient.ManifestResponse
manifestResponse *repository.ManifestResponse
managedLiveObjs map[kube.ResourceKey]*unstructured.Unstructured
configMapData map[string]string
}
func newFakeController(data *fakeData) *ApplicationController {
@@ -66,11 +63,8 @@ func newFakeController(data *fakeData) *ApplicationController {
ObjectMeta: metav1.ObjectMeta{
Name: "argocd-cm",
Namespace: test.FakeArgoCDNamespace,
Labels: map[string]string{
"app.kubernetes.io/part-of": "argocd",
},
},
Data: data.configMapData,
Data: nil,
}
kubeClient := fake.NewSimpleClientset(&clust, &cm, &secret)
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClient, test.FakeArgoCDNamespace)
@@ -82,9 +76,6 @@ func newFakeController(data *fakeData) *ApplicationController {
&mockRepoClientset,
utilcache.NewCache(utilcache.NewInMemoryCache(1*time.Hour)),
time.Minute,
time.Minute,
common.DefaultPortArgoCDMetrics,
0,
)
if err != nil {
panic(err)
@@ -183,7 +174,7 @@ func TestAutoSync(t *testing.T) {
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -202,7 +193,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -217,7 +208,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: argoappv1.SyncStatusCodeSynced,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -233,7 +224,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -250,7 +241,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -276,7 +267,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -312,7 +303,7 @@ func TestAutoSyncIndicateError(t *testing.T) {
Source: *app.Spec.Source.DeepCopy(),
},
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -355,7 +346,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
}
cond := ctrl.autoSync(app, &syncStatus, []argoappv1.ResourceStatus{})
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
@@ -409,7 +400,7 @@ func TestNormalizeApplication(t *testing.T) {
app.Spec.Source.Kustomize = &argoappv1.ApplicationSourceKustomize{NamePrefix: "foo-"}
data := fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -468,165 +459,10 @@ func TestHandleAppUpdated(t *testing.T) {
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
ctrl.handleAppUpdated(app.Name, true, kube.GetObjectRef(kube.MustToUnstructured(app)))
isRequested, level := ctrl.isRefreshRequested(app.Name)
isRequested, _ := ctrl.isRefreshRequested(app.Name)
assert.False(t, isRequested)
assert.Equal(t, ComparisonWithNothing, level)
ctrl.handleAppUpdated(app.Name, true, corev1.ObjectReference{UID: "test", Kind: kube.DeploymentKind, Name: "test", Namespace: "default"})
isRequested, level = ctrl.isRefreshRequested(app.Name)
isRequested, _ = ctrl.isRefreshRequested(app.Name)
assert.True(t, isRequested)
assert.Equal(t, CompareWithRecent, level)
}
func TestSetOperationStateOnDeletedApp(t *testing.T) {
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}})
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
patched := false
fakeAppCs.AddReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
patched = true
return true, nil, apierr.NewNotFound(schema.GroupResource{}, "my-app")
})
ctrl.setOperationState(newFakeApp(), &argoappv1.OperationState{Phase: argoappv1.OperationSucceeded})
assert.True(t, patched)
}
func TestNeedRefreshAppStatus(t *testing.T) {
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}})
app := newFakeApp()
now := metav1.Now()
app.Status.ReconciledAt = &now
app.Status.Sync = argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeSynced,
ComparedTo: argoappv1.ComparedTo{
Source: app.Spec.Source,
Destination: app.Spec.Destination,
},
}
// no need to refresh a just-reconciled application
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour)
assert.False(t, needRefresh)
// refresh app using the 'deepest' requested comparison level
ctrl.requestAppRefresh(app.Name, CompareWithRecent)
ctrl.requestAppRefresh(app.Name, ComparisonWithNothing)
needRefresh, refreshType, compareWith := ctrl.needRefreshAppStatus(app, 1*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithRecent, compareWith)
// refresh an application whose status is not reconciled using the latest commit
app.Status.Sync = argoappv1.SyncStatus{Status: argoappv1.SyncStatusCodeUnknown}
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
{
// refresh app using the 'latest' level if comparison expired
app := app.DeepCopy()
ctrl.requestAppRefresh(app.Name, CompareWithRecent)
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
app.Status.ReconciledAt = &reconciledAt
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Minute)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
}
{
app := app.DeepCopy()
// execute hard refresh if app has refresh annotation
reconciledAt := metav1.NewTime(time.Now().UTC().Add(-1 * time.Hour))
app.Status.ReconciledAt = &reconciledAt
app.Annotations = map[string]string{
common.AnnotationKeyRefresh: string(argoappv1.RefreshTypeHard),
}
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeHard, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
}
{
app := app.DeepCopy()
// ensure that CompareWithLatest level is used if application source has changed
ctrl.requestAppRefresh(app.Name, ComparisonWithNothing)
// sample app source change
app.Spec.Source.Helm = &argoappv1.ApplicationSourceHelm{
Parameters: []argoappv1.HelmParameter{{
Name: "foo",
Value: "bar",
}},
}
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
}
}
func TestUpdateReconciledAt(t *testing.T) {
app := newFakeApp()
reconciledAt := metav1.NewTime(time.Now().Add(-1 * time.Second))
app.Status = argoappv1.ApplicationStatus{ReconciledAt: &reconciledAt}
app.Status.Sync = argoappv1.SyncStatus{ComparedTo: argoappv1.ComparedTo{Source: app.Spec.Source, Destination: app.Spec.Destination}}
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
})
key, _ := cache.MetaNamespaceKeyFunc(app)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
receivedPatch := map[string]interface{}{}
fakeAppCs.AddReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
if patchAction, ok := action.(kubetesting.PatchAction); ok {
assert.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
}
return true, nil, nil
})
t.Run("UpdatedOnFullReconciliation", func(t *testing.T) {
receivedPatch = map[string]interface{}{}
ctrl.requestAppRefresh(app.Name, CompareWithLatest)
ctrl.appRefreshQueue.Add(key)
ctrl.processAppRefreshQueueItem()
_, updated, err := unstructured.NestedString(receivedPatch, "status", "reconciledAt")
assert.NoError(t, err)
assert.True(t, updated)
_, updated, err = unstructured.NestedString(receivedPatch, "status", "observedAt")
assert.NoError(t, err)
assert.True(t, updated)
})
t.Run("NotUpdatedOnPartialReconciliation", func(t *testing.T) {
receivedPatch = map[string]interface{}{}
ctrl.appRefreshQueue.Add(key)
ctrl.requestAppRefresh(app.Name, CompareWithRecent)
ctrl.processAppRefreshQueueItem()
_, updated, err := unstructured.NestedString(receivedPatch, "status", "reconciledAt")
assert.NoError(t, err)
assert.False(t, updated)
_, updated, err = unstructured.NestedString(receivedPatch, "status", "observedAt")
assert.NoError(t, err)
assert.True(t, updated)
})
}


@@ -2,7 +2,6 @@ package cache
import (
"context"
"reflect"
"sync"
log "github.com/sirupsen/logrus"
@@ -20,25 +19,19 @@ import (
"github.com/argoproj/argo-cd/util/settings"
)
type cacheSettings struct {
ResourceOverrides map[string]appv1.ResourceOverride
AppInstanceLabelKey string
ResourcesFilter *settings.ResourcesFilter
}
type LiveStateCache interface {
IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error)
// Executes the given callback against the resource specified by the key and all its children
IterateHierarchy(server string, obj *unstructured.Unstructured, action func(child appv1.ResourceNode)) error
IterateHierarchy(server string, key kube.ResourceKey, action func(child appv1.ResourceNode)) error
// Returns the state of live nodes which correspond to target nodes of the specified application.
GetManagedLiveObjs(a *appv1.Application, targetObjs []*unstructured.Unstructured) (map[kube.ResourceKey]*unstructured.Unstructured, error)
// Starts watching resources of each controlled cluster.
Run(ctx context.Context) error
Run(ctx context.Context)
// Invalidate invalidates the entire cluster state cache
Invalidate()
}
type AppUpdatedHandler = func(appName string, isManagedResource bool, ref v1.ObjectReference)
type AppUpdatedHandler = func(appName string, fullRefresh bool, ref v1.ObjectReference)
func GetTargetObjKey(a *appv1.Application, un *unstructured.Unstructured, isNamespaced bool) kube.ResourceKey {
key := kube.GetResourceKey(un)
@@ -54,51 +47,32 @@ func GetTargetObjKey(a *appv1.Application, un *unstructured.Unstructured, isName
func NewLiveStateCache(
db db.ArgoDB,
appInformer cache.SharedIndexInformer,
settingsMgr *settings.SettingsManager,
settings *settings.ArgoCDSettings,
kubectl kube.Kubectl,
metricsServer *metrics.MetricsServer,
onAppUpdated AppUpdatedHandler) LiveStateCache {
return &liveStateCache{
appInformer: appInformer,
db: db,
clusters: make(map[string]*clusterInfo),
lock: &sync.Mutex{},
onAppUpdated: onAppUpdated,
kubectl: kubectl,
settingsMgr: settingsMgr,
metricsServer: metricsServer,
cacheSettingsLock: &sync.Mutex{},
appInformer: appInformer,
db: db,
clusters: make(map[string]*clusterInfo),
lock: &sync.Mutex{},
onAppUpdated: onAppUpdated,
kubectl: kubectl,
settings: settings,
metricsServer: metricsServer,
}
}
type liveStateCache struct {
db db.ArgoDB
clusters map[string]*clusterInfo
lock *sync.Mutex
appInformer cache.SharedIndexInformer
onAppUpdated AppUpdatedHandler
kubectl kube.Kubectl
settingsMgr *settings.SettingsManager
metricsServer *metrics.MetricsServer
cacheSettingsLock *sync.Mutex
cacheSettings *cacheSettings
}
func (c *liveStateCache) loadCacheSettings() (*cacheSettings, error) {
appInstanceLabelKey, err := c.settingsMgr.GetAppInstanceLabelKey()
if err != nil {
return nil, err
}
resourcesFilter, err := c.settingsMgr.GetResourcesFilter()
if err != nil {
return nil, err
}
resourceOverrides, err := c.settingsMgr.GetResourceOverrides()
if err != nil {
return nil, err
}
return &cacheSettings{AppInstanceLabelKey: appInstanceLabelKey, ResourceOverrides: resourceOverrides, ResourcesFilter: resourcesFilter}, nil
db db.ArgoDB
clusters map[string]*clusterInfo
lock *sync.Mutex
appInformer cache.SharedIndexInformer
onAppUpdated AppUpdatedHandler
kubectl kube.Kubectl
settings *settings.ArgoCDSettings
metricsServer *metrics.MetricsServer
}
func (c *liveStateCache) getCluster(server string) (*clusterInfo, error) {
@@ -111,17 +85,17 @@ func (c *liveStateCache) getCluster(server string) (*clusterInfo, error) {
return nil, err
}
info = &clusterInfo{
apisMeta: make(map[schema.GroupKind]*apiMeta),
lock: &sync.Mutex{},
nodes: make(map[kube.ResourceKey]*node),
nsIndex: make(map[string]map[kube.ResourceKey]*node),
onAppUpdated: c.onAppUpdated,
kubectl: c.kubectl,
cluster: cluster,
syncTime: nil,
syncLock: &sync.Mutex{},
log: log.WithField("server", cluster.Server),
cacheSettingsSrc: c.getCacheSettings,
apisMeta: make(map[schema.GroupKind]*apiMeta),
lock: &sync.Mutex{},
nodes: make(map[kube.ResourceKey]*node),
nsIndex: make(map[string]map[kube.ResourceKey]*node),
onAppUpdated: c.onAppUpdated,
kubectl: c.kubectl,
cluster: cluster,
syncTime: nil,
syncLock: &sync.Mutex{},
log: log.WithField("server", cluster.Server),
settings: c.settings,
}
c.clusters[cluster.Server] = info
@@ -161,12 +135,12 @@ func (c *liveStateCache) IsNamespaced(server string, obj *unstructured.Unstructu
return clusterInfo.isNamespaced(obj), nil
}
func (c *liveStateCache) IterateHierarchy(server string, obj *unstructured.Unstructured, action func(child appv1.ResourceNode)) error {
func (c *liveStateCache) IterateHierarchy(server string, key kube.ResourceKey, action func(child appv1.ResourceNode)) error {
clusterInfo, err := c.getSyncedCluster(server)
if err != nil {
return err
}
clusterInfo.iterateHierarchy(obj, action)
clusterInfo.iterateHierarchy(key, action)
return nil
}
@@ -187,55 +161,8 @@ func isClusterHasApps(apps []interface{}, cluster *appv1.Cluster) bool {
return false
}
func (c *liveStateCache) getCacheSettings() *cacheSettings {
c.cacheSettingsLock.Lock()
defer c.cacheSettingsLock.Unlock()
return c.cacheSettings
}
func (c *liveStateCache) watchSettings(ctx context.Context) {
updateCh := make(chan *settings.ArgoCDSettings, 1)
c.settingsMgr.Subscribe(updateCh)
done := false
for !done {
select {
case <-updateCh:
nextCacheSettings, err := c.loadCacheSettings()
if err != nil {
log.Warnf("Failed to read updated settings: %v", err)
continue
}
c.cacheSettingsLock.Lock()
needInvalidate := false
if !reflect.DeepEqual(c.cacheSettings, nextCacheSettings) {
c.cacheSettings = nextCacheSettings
needInvalidate = true
}
c.cacheSettingsLock.Unlock()
if needInvalidate {
c.Invalidate()
}
case <-ctx.Done():
done = true
}
}
log.Info("shutting down settings watch")
c.settingsMgr.Unsubscribe(updateCh)
close(updateCh)
}
// Run watches for resource changes annotated with the application label on all registered clusters and schedules corresponding app refreshes.
func (c *liveStateCache) Run(ctx context.Context) error {
cacheSettings, err := c.loadCacheSettings()
if err != nil {
return err
}
c.cacheSettings = cacheSettings
go c.watchSettings(ctx)
func (c *liveStateCache) Run(ctx context.Context) {
util.RetryUntilSucceed(func() error {
clusterEventCallback := func(event *db.ClusterEvent) {
c.lock.Lock()
@@ -261,5 +188,4 @@ func (c *liveStateCache) Run(ctx context.Context) error {
}, "watch clusters", ctx, clusterRetryTimeout)
<-ctx.Done()
return nil
}
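RetryUntilSucceed keeps the cluster-watch registration alive: the action is retried on an interval until it succeeds or the context ends. A hypothetical stand-in for that helper (the real util function's signature may differ):
package main

import (
	"context"
	"fmt"
	"time"
)

// retryUntilSucceed is a hypothetical sketch of the helper used above:
// invoke action until it returns nil or ctx is cancelled.
func retryUntilSucceed(ctx context.Context, interval time.Duration, action func() error) {
	for {
		if err := action(); err == nil {
			return
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(interval):
		}
	}
}

func main() {
	tries := 0
	retryUntilSucceed(context.Background(), 10*time.Millisecond, func() error {
		tries++
		if tries < 3 {
			return fmt.Errorf("not ready")
		}
		return nil
	})
	fmt.Println("succeeded after", tries, "tries")
}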


@@ -4,13 +4,9 @@ import (
"context"
"fmt"
"runtime/debug"
"sort"
"strings"
"sync"
"time"
"k8s.io/apimachinery/pkg/types"
"github.com/argoproj/argo-cd/controller/metrics"
log "github.com/sirupsen/logrus"
@@ -24,6 +20,7 @@ import (
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/health"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/settings"
)
const (
@@ -48,11 +45,11 @@ type clusterInfo struct {
nodes map[kube.ResourceKey]*node
nsIndex map[string]map[kube.ResourceKey]*node
onAppUpdated AppUpdatedHandler
kubectl kube.Kubectl
cluster *appv1.Cluster
log *log.Entry
cacheSettingsSrc func() *cacheSettings
onAppUpdated AppUpdatedHandler
kubectl kube.Kubectl
cluster *appv1.Cluster
log *log.Entry
settings *settings.ArgoCDSettings
}
func (c *clusterInfo) replaceResourceCache(gk schema.GroupKind, resourceVersion string, objs []unstructured.Unstructured) {
@@ -92,7 +89,7 @@ func (c *clusterInfo) createObjInfo(un *unstructured.Unstructured, appInstanceLa
ownerRefs = append(ownerRefs, metav1.OwnerReference{
Name: un.GetName(),
Kind: kube.ServiceKind,
APIVersion: "v1",
APIVersion: "",
})
}
nodeInfo := &node{
@@ -106,7 +103,7 @@ func (c *clusterInfo) createObjInfo(un *unstructured.Unstructured, appInstanceLa
nodeInfo.appName = appName
nodeInfo.resource = un
}
nodeInfo.health, _ = health.GetResourceHealth(un, c.cacheSettingsSrc().ResourceOverrides)
nodeInfo.health, _ = health.GetResourceHealth(un, c.settings.ResourceOverrides)
return nodeInfo
}
@@ -165,7 +162,7 @@ func (c *clusterInfo) stopWatching(gk schema.GroupKind) {
// startMissingWatches lists supported cluster resources and starts watching for changes unless a watch is already running
func (c *clusterInfo) startMissingWatches() error {
apis, err := c.kubectl.GetAPIResources(c.cluster.RESTConfig(), c.cacheSettingsSrc().ResourcesFilter)
apis, err := c.kubectl.GetAPIResources(c.cluster.RESTConfig(), c.settings)
if err != nil {
return err
}
@@ -237,7 +234,12 @@ func (c *clusterInfo) watchEvents(ctx context.Context, api kube.APIResourceInfo,
if ok {
obj := event.Object.(*unstructured.Unstructured)
info.resourceVersion = obj.GetResourceVersion()
c.processEvent(event.Type, obj)
err = c.processEvent(event.Type, obj)
if err != nil {
log.Warnf("Failed to process event %s %s/%s/%s: %v", event.Type, obj.GroupVersionKind(), obj.GetNamespace(), obj.GetName(), err)
continue
}
if kube.IsCRD(obj) {
if event.Type == watch.Deleted {
group, groupOk, groupErr := unstructured.NestedString(obj.Object, "spec", "group")
@@ -276,7 +278,7 @@ func (c *clusterInfo) sync() (err error) {
c.apisMeta = make(map[schema.GroupKind]*apiMeta)
c.nodes = make(map[kube.ResourceKey]*node)
apis, err := c.kubectl.GetAPIResources(c.cluster.RESTConfig(), c.cacheSettingsSrc().ResourcesFilter)
apis, err := c.kubectl.GetAPIResources(c.cluster.RESTConfig(), c.settings)
if err != nil {
return err
}
@@ -290,7 +292,7 @@ func (c *clusterInfo) sync() (err error) {
lock.Lock()
for i := range list.Items {
c.setNode(c.createObjInfo(&list.Items[i], c.cacheSettingsSrc().AppInstanceLabelKey))
c.setNode(c.createObjInfo(&list.Items[i], c.settings.GetAppInstanceLabelKey()))
}
lock.Unlock()
return nil
@@ -323,36 +325,18 @@ func (c *clusterInfo) ensureSynced() error {
return c.syncError
}
func (c *clusterInfo) iterateHierarchy(obj *unstructured.Unstructured, action func(child appv1.ResourceNode)) {
func (c *clusterInfo) iterateHierarchy(key kube.ResourceKey, action func(child appv1.ResourceNode)) {
c.lock.Lock()
defer c.lock.Unlock()
key := kube.GetResourceKey(obj)
if objInfo, ok := c.nodes[key]; ok {
action(objInfo.asResourceNode())
nsNodes := c.nsIndex[key.Namespace]
childrenByUID := make(map[types.UID][]*node)
for _, child := range nsNodes {
if objInfo.isParentOf(child) {
childrenByUID[child.ref.UID] = append(childrenByUID[child.ref.UID], child)
}
}
// make sure children have no duplicates
for _, children := range childrenByUID {
if len(children) > 0 {
// The object might have multiple children with the same UID (e.g. replicaset from apps and extensions group). It is ok to pick any object but we need to make sure
// we pick the same child after every refresh.
sort.Slice(children, func(i, j int) bool {
key1 := children[i].resourceKey()
key2 := children[j].resourceKey()
return strings.Compare(key1.String(), key2.String()) < 0
})
child := children[0]
action(child.asResourceNode())
child.iterateChildren(nsNodes, map[kube.ResourceKey]bool{objInfo.resourceKey(): true}, action)
}
}
} else {
action(c.createObjInfo(obj, c.cacheSettingsSrc().AppInstanceLabelKey).asResourceNode())
}
}
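The refactor above drops the UID bookkeeping in favor of pure key matching when walking an app's hierarchy. A minimal, self-contained sketch of that idea, using illustrative stand-in types rather than the project's actual `kube.ResourceKey` and `node`:

```go
package main

import "fmt"

// resourceKey is a simplified stand-in for kube.ResourceKey.
type resourceKey struct {
	Group, Kind, Namespace, Name string
}

type fakeNode struct {
	key    resourceKey
	owners []resourceKey // owner references resolved to keys
}

// isParentOf mirrors the key-based check: a node is the parent of a child
// when one of the child's owner references resolves to the node's key.
func isParentOf(parent, child fakeNode) bool {
	for _, owner := range child.owners {
		if owner == parent.key {
			return true
		}
	}
	return false
}

func main() {
	deploy := fakeNode{key: resourceKey{Group: "apps", Kind: "Deployment", Namespace: "default", Name: "guestbook"}}
	rs := fakeNode{
		key:    resourceKey{Group: "apps", Kind: "ReplicaSet", Namespace: "default", Name: "guestbook-rs"},
		owners: []resourceKey{deploy.key},
	}
	fmt.Println(isParentOf(deploy, rs)) // true: matched by key, no UID needed
}
```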
@@ -399,15 +383,6 @@ func (c *clusterInfo) getManagedLiveObjs(a *appv1.Application, targetObjs []*uns
return err
}
}
} else if _, watched := c.apisMeta[key.GroupKind()]; !watched {
var err error
managedObj, err = c.kubectl.GetResource(config, targetObj.GroupVersionKind(), targetObj.GetName(), targetObj.GetNamespace())
if err != nil {
if errors.IsNotFound(err) {
return nil
}
return err
}
}
}
@@ -439,7 +414,7 @@ func (c *clusterInfo) getManagedLiveObjs(a *appv1.Application, targetObjs []*uns
return managedObjs, nil
}
func (c *clusterInfo) processEvent(event watch.EventType, un *unstructured.Unstructured) {
func (c *clusterInfo) processEvent(event watch.EventType, un *unstructured.Unstructured) error {
c.lock.Lock()
defer c.lock.Unlock()
key := kube.GetResourceKey(un)
@@ -451,6 +426,8 @@ func (c *clusterInfo) processEvent(event watch.EventType, un *unstructured.Unstr
} else if event != watch.Deleted {
c.onNodeUpdated(exists, existingNode, un, key)
}
return nil
}
func (c *clusterInfo) onNodeUpdated(exists bool, existingNode *node, un *unstructured.Unstructured, key kube.ResourceKey) {
@@ -458,7 +435,7 @@ func (c *clusterInfo) onNodeUpdated(exists bool, existingNode *node, un *unstruc
if exists {
nodes = append(nodes, existingNode)
}
newObj := c.createObjInfo(un, c.cacheSettingsSrc().AppInstanceLabelKey)
newObj := c.createObjInfo(un, c.settings.GetAppInstanceLabelKey())
c.setNode(newObj)
nodes = append(nodes, newObj)
toNotify := make(map[string]bool)
@@ -472,8 +449,8 @@ func (c *clusterInfo) onNodeUpdated(exists bool, existingNode *node, un *unstruc
toNotify[app] = n.isRootAppNode() || toNotify[app]
}
}
for name, isRootAppNode := range toNotify {
c.onAppUpdated(name, isRootAppNode, newObj.ref)
for name, full := range toNotify {
c.onAppUpdated(name, full, newObj.ref)
}
}
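The renamed `full` flag keeps the same OR-accumulation semantics: an app gets one callback per update batch, with `full=true` if any touched node was a root resource of that app. A tiny runnable sketch of the aggregation (names are illustrative):

```go
package main

import "fmt"

func main() {
	// The notify map ORs the root flag per app across all updated nodes.
	type update struct {
		app  string
		root bool
	}
	updates := []update{{"guestbook", false}, {"guestbook", true}}
	toNotify := map[string]bool{}
	for _, u := range updates {
		toNotify[u.app] = u.root || toNotify[u.app]
	}
	fmt.Println(toNotify) // map[guestbook:true]
}
```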

View File

@@ -18,11 +18,11 @@ import (
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/dynamic/fake"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/kube/kubetest"
"github.com/argoproj/argo-cd/util/settings"
)
func strToUnstructured(jsonStr string) *unstructured.Unstructured {
@@ -43,30 +43,24 @@ var (
apiVersion: v1
kind: Pod
metadata:
uid: "1"
name: helm-guestbook-pod
namespace: default
ownerReferences:
- apiVersion: apps/v1
- apiVersion: extensions/v1beta1
kind: ReplicaSet
name: helm-guestbook-rs
uid: "2"
resourceVersion: "123"`)
testRS = strToUnstructured(`
apiVersion: apps/v1
kind: ReplicaSet
metadata:
uid: "2"
name: helm-guestbook-rs
namespace: default
annotations:
deployment.kubernetes.io/revision: "2"
ownerReferences:
- apiVersion: apps/v1beta1
- apiVersion: extensions/v1beta1
kind: Deployment
name: helm-guestbook
uid: "3"
resourceVersion: "123"`)
testDeploy = strToUnstructured(`
@@ -75,7 +69,6 @@ var (
metadata:
labels:
app.kubernetes.io/instance: helm-guestbook
uid: "3"
name: helm-guestbook
namespace: default
resourceVersion: "123"`)
@@ -87,7 +80,6 @@ var (
name: helm-guestbook
namespace: default
resourceVersion: "123"
uid: "4"
spec:
selector:
app: guestbook
@@ -103,7 +95,6 @@ var (
metadata:
name: helm-guestbook
namespace: default
uid: "4"
spec:
backend:
serviceName: not-found-service
@@ -148,7 +139,7 @@ func newCluster(objs ...*unstructured.Unstructured) *clusterInfo {
Meta: metav1.APIResource{Namespaced: true},
}}
return newClusterExt(&kubetest.MockKubectlCmd{APIResources: apiResources})
return newClusterExt(kubetest.MockKubectlCmd{APIResources: apiResources})
}
func newClusterExt(kubectl kube.Kubectl) *clusterInfo {
@@ -163,15 +154,13 @@ func newClusterExt(kubectl kube.Kubectl) *clusterInfo {
syncLock: &sync.Mutex{},
apisMeta: make(map[schema.GroupKind]*apiMeta),
log: log.WithField("cluster", "test"),
cacheSettingsSrc: func() *cacheSettings {
return &cacheSettings{AppInstanceLabelKey: common.LabelKeyAppInstance}
},
settings: &settings.ArgoCDSettings{},
}
}
func getChildren(cluster *clusterInfo, un *unstructured.Unstructured) []appv1.ResourceNode {
hierarchy := make([]appv1.ResourceNode, 0)
cluster.iterateHierarchy(un, func(child appv1.ResourceNode) {
cluster.iterateHierarchy(kube.GetResourceKey(un), func(child appv1.ResourceNode) {
hierarchy = append(hierarchy, child)
})
return hierarchy[1:]
@@ -190,7 +179,6 @@ func TestGetChildren(t *testing.T) {
Name: "helm-guestbook-pod",
Group: "",
Version: "v1",
UID: "1",
},
ParentRefs: []appv1.ResourceRef{{
Group: "apps",
@@ -198,7 +186,6 @@ func TestGetChildren(t *testing.T) {
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
UID: "2",
}},
Health: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
NetworkingInfo: &appv1.ResourceNetworkingInfo{Labels: testPod.GetLabels()},
@@ -214,12 +201,11 @@ func TestGetChildren(t *testing.T) {
Name: "helm-guestbook-rs",
Group: "apps",
Version: "v1",
UID: "2",
},
ResourceVersion: "123",
Health: &appv1.HealthStatus{Status: appv1.HealthStatusHealthy},
Info: []appv1.InfoItem{{Name: "Revision", Value: "Rev:2"}},
ParentRefs: []appv1.ResourceRef{{Group: "apps", Version: "", Kind: "Deployment", Namespace: "default", Name: "helm-guestbook", UID: "3"}},
Info: []appv1.InfoItem{},
ParentRefs: []appv1.ResourceRef{{Group: "apps", Version: "", Kind: "Deployment", Namespace: "default", Name: "helm-guestbook"}},
}}, rsChildren...), deployChildren)
}
@@ -255,7 +241,8 @@ func TestChildDeletedEvent(t *testing.T) {
err := cluster.ensureSynced()
assert.Nil(t, err)
cluster.processEvent(watch.Deleted, testPod)
err = cluster.processEvent(watch.Deleted, testPod)
assert.Nil(t, err)
rsChildren := getChildren(cluster, testRS)
assert.Equal(t, []appv1.ResourceNode{}, rsChildren)
@@ -270,17 +257,16 @@ func TestProcessNewChildEvent(t *testing.T) {
apiVersion: v1
kind: Pod
metadata:
uid: "4"
name: helm-guestbook-pod2
namespace: default
ownerReferences:
- apiVersion: apps/v1
- apiVersion: extensions/v1beta1
kind: ReplicaSet
name: helm-guestbook-rs
uid: "2"
resourceVersion: "123"`)
cluster.processEvent(watch.Added, newPod)
err = cluster.processEvent(watch.Added, newPod)
assert.Nil(t, err)
rsChildren := getChildren(cluster, testRS)
sort.Slice(rsChildren, func(i, j int) bool {
@@ -293,7 +279,6 @@ func TestProcessNewChildEvent(t *testing.T) {
Name: "helm-guestbook-pod",
Group: "",
Version: "v1",
UID: "1",
},
Info: []appv1.InfoItem{{Name: "Containers", Value: "0/0"}},
Health: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
@@ -304,7 +289,6 @@ func TestProcessNewChildEvent(t *testing.T) {
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
UID: "2",
}},
ResourceVersion: "123",
}, {
@@ -314,7 +298,6 @@ func TestProcessNewChildEvent(t *testing.T) {
Name: "helm-guestbook-pod2",
Group: "",
Version: "v1",
UID: "4",
},
NetworkingInfo: &appv1.ResourceNetworkingInfo{Labels: testPod.GetLabels()},
Info: []appv1.InfoItem{{Name: "Containers", Value: "0/0"}},
@@ -325,7 +308,6 @@ func TestProcessNewChildEvent(t *testing.T) {
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
UID: "2",
}},
ResourceVersion: "123",
}}, rsChildren)
@@ -361,7 +343,8 @@ func TestUpdateResourceTags(t *testing.T) {
},
}},
}
cluster.processEvent(watch.Modified, mustToUnstructured(pod))
err = cluster.processEvent(watch.Modified, mustToUnstructured(pod))
assert.Nil(t, err)
podNode = cluster.nodes[kube.GetResourceKey(mustToUnstructured(pod))]
@@ -379,7 +362,8 @@ func TestUpdateAppResource(t *testing.T) {
err := cluster.ensureSynced()
assert.Nil(t, err)
cluster.processEvent(watch.Modified, mustToUnstructured(testPod))
err = cluster.processEvent(watch.Modified, mustToUnstructured(testPod))
assert.Nil(t, err)
assert.Contains(t, updatesReceived, "helm-guestbook: false")
}
@@ -435,21 +419,3 @@ func TestWatchCacheUpdated(t *testing.T) {
_, ok = cluster.nodes[kube.GetResourceKey(added)]
assert.True(t, ok)
}
func TestGetDuplicatedChildren(t *testing.T) {
extensionsRS := testRS.DeepCopy()
extensionsRS.SetGroupVersionKind(schema.GroupVersionKind{Group: "extensions", Kind: kube.ReplicaSetKind, Version: "v1beta1"})
cluster := newCluster(testDeploy, testRS, extensionsRS)
err := cluster.ensureSynced()
assert.Nil(t, err)
// Get children multiple times to make sure the right child is picked up every time.
for i := 0; i < 5; i++ {
children := getChildren(cluster, testDeploy)
assert.Len(t, children, 1)
assert.Equal(t, "apps", children[0].Group)
assert.Equal(t, kube.ReplicaSetKind, children[0].Kind)
assert.Equal(t, testRS.GetName(), children[0].Name)
}
}

View File

@@ -11,16 +11,11 @@ import (
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/resource"
)
func populateNodeInfo(un *unstructured.Unstructured, node *node) {
gvk := un.GroupVersionKind()
revision := resource.GetRevision(un)
if revision > 0 {
node.info = append(node.info, v1alpha1.InfoItem{Name: "Revision", Value: fmt.Sprintf("Rev:%v", revision)})
}
switch gvk.Group {
case "":
switch gvk.Kind {
@@ -38,6 +33,7 @@ func populateNodeInfo(un *unstructured.Unstructured, node *node) {
return
}
}
node.info = []v1alpha1.InfoItem{}
}
func getIngress(un *unstructured.Unstructured) []v1.LoadBalancerIngress {
@@ -153,6 +149,7 @@ func populatePodInfo(un *unstructured.Unstructured, node *node) {
pod := v1.Pod{}
err := runtime.DefaultUnstructuredConverter.FromUnstructured(un.Object, &pod)
if err != nil {
node.info = []v1alpha1.InfoItem{}
return
}
restarts := 0
@@ -240,6 +237,7 @@ func populatePodInfo(un *unstructured.Unstructured, node *node) {
reason = "Terminating"
}
node.info = make([]v1alpha1.InfoItem, 0)
if reason != "" {
node.info = append(node.info, v1alpha1.InfoItem{Name: "Status Reason", Value: reason})
}

View File

@@ -13,6 +13,20 @@ type LiveStateCache struct {
mock.Mock
}
// Delete provides a mock function with given fields: server, obj
func (_m *LiveStateCache) Delete(server string, obj *unstructured.Unstructured) error {
ret := _m.Called(server, obj)
var r0 error
if rf, ok := ret.Get(0).(func(string, *unstructured.Unstructured) error); ok {
r0 = rf(server, obj)
} else {
r0 = ret.Error(0)
}
return r0
}
// GetManagedLiveObjs provides a mock function with given fields: a, targetObjs
func (_m *LiveStateCache) GetManagedLiveObjs(a *v1alpha1.Application, targetObjs []*unstructured.Unstructured) (map[kube.ResourceKey]*unstructured.Unstructured, error) {
ret := _m.Called(a, targetObjs)
@@ -62,13 +76,13 @@ func (_m *LiveStateCache) IsNamespaced(server string, obj *unstructured.Unstruct
return r0, r1
}
// IterateHierarchy provides a mock function with given fields: server, obj, action
func (_m *LiveStateCache) IterateHierarchy(server string, obj *unstructured.Unstructured, action func(v1alpha1.ResourceNode)) error {
ret := _m.Called(server, obj, action)
// IterateHierarchy provides a mock function with given fields: server, key, action
func (_m *LiveStateCache) IterateHierarchy(server string, key kube.ResourceKey, action func(v1alpha1.ResourceNode)) error {
ret := _m.Called(server, key, action)
var r0 error
if rf, ok := ret.Get(0).(func(string, *unstructured.Unstructured, func(v1alpha1.ResourceNode)) error); ok {
r0 = rf(server, obj, action)
if rf, ok := ret.Get(0).(func(string, kube.ResourceKey, func(v1alpha1.ResourceNode)) error); ok {
r0 = rf(server, key, action)
} else {
r0 = ret.Error(0)
}
@@ -77,15 +91,6 @@ func (_m *LiveStateCache) IterateHierarchy(server string, obj *unstructured.Unst
}
// Run provides a mock function with given fields: ctx
func (_m *LiveStateCache) Run(ctx context.Context) error {
ret := _m.Called(ctx)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context) error); ok {
r0 = rf(ctx)
} else {
r0 = ret.Error(0)
}
return r0
func (_m *LiveStateCache) Run(ctx context.Context) {
_m.Called(ctx)
}
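Tests consuming the regenerated mock program expectations with testify's `On`/`Return`. A reduced, runnable sketch assuming only the `IterateHierarchy` method, with plain string keys standing in for `kube.ResourceKey`:

```go
package main

import (
	"fmt"

	"github.com/stretchr/testify/mock"
)

// LiveStateCache is a reduced stand-in for the generated mock above,
// keeping only IterateHierarchy for illustration.
type LiveStateCache struct {
	mock.Mock
}

func (_m *LiveStateCache) IterateHierarchy(server string, key string, action func(string)) error {
	ret := _m.Called(server, key, action)
	return ret.Error(0)
}

func main() {
	c := &LiveStateCache{}
	// Program the expectation the way a test would: match any action callback.
	c.On("IterateHierarchy", "https://cluster", "apps/ReplicaSet/default/rs", mock.Anything).Return(nil)

	err := c.IterateHierarchy("https://cluster", "apps/ReplicaSet/default/rs", func(string) {})
	fmt.Println(err) // <nil>
}
```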

View File

@@ -35,16 +35,9 @@ func (n *node) resourceKey() kube.ResourceKey {
}
func (n *node) isParentOf(child *node) bool {
for i, ownerRef := range child.ownerRefs {
// backfill UID of inferred owner child references
if ownerRef.UID == "" && n.ref.Kind == ownerRef.Kind && n.ref.APIVersion == ownerRef.APIVersion && n.ref.Name == ownerRef.Name {
ownerRef.UID = n.ref.UID
child.ownerRefs[i] = ownerRef
return true
}
if n.ref.UID == ownerRef.UID {
for _, ownerRef := range child.ownerRefs {
ownerGvk := schema.FromAPIVersionAndKind(ownerRef.APIVersion, ownerRef.Kind)
if kube.NewResourceKey(ownerGvk.Group, ownerRef.Kind, n.ref.Namespace, ownerRef.Name) == n.resourceKey() {
return true
}
}
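The new check resolves each owner reference to a group/kind/name key via `schema.FromAPIVersionAndKind`, so references that differ only in API group stay distinct without any UID comparison. A small sketch of that resolution (values are made up):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Two owner references with the same kind/name but different API groups;
	// resolving them to group/kind/name keys keeps them distinct without UIDs.
	refs := []metav1.OwnerReference{
		{APIVersion: "extensions/v1beta1", Kind: "ReplicaSet", Name: "guestbook-rs"},
		{APIVersion: "apps/v1", Kind: "ReplicaSet", Name: "guestbook-rs"},
	}
	for _, ref := range refs {
		gvk := schema.FromAPIVersionAndKind(ref.APIVersion, ref.Kind)
		fmt.Printf("group=%q kind=%q name=%q\n", gvk.Group, gvk.Kind, ref.Name)
	}
}
```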
@@ -107,11 +100,10 @@ func (n *node) asResourceNode() appv1.ResourceNode {
for _, ownerRef := range n.ownerRefs {
ownerGvk := schema.FromAPIVersionAndKind(ownerRef.APIVersion, ownerRef.Kind)
ownerKey := kube.NewResourceKey(ownerGvk.Group, ownerRef.Kind, n.ref.Namespace, ownerRef.Name)
parentRefs[0] = appv1.ResourceRef{Name: ownerRef.Name, Kind: ownerKey.Kind, Namespace: n.ref.Namespace, Group: ownerKey.Group, UID: string(ownerRef.UID)}
parentRefs[0] = appv1.ResourceRef{Name: ownerRef.Name, Kind: ownerKey.Kind, Namespace: n.ref.Namespace, Group: ownerKey.Group}
}
return appv1.ResourceNode{
ResourceRef: appv1.ResourceRef{
UID: string(n.ref.UID),
Name: n.ref.Name,
Group: gv.Group,
Version: gv.Version,

View File

@@ -3,14 +3,12 @@ package cache
import (
"testing"
"github.com/argoproj/argo-cd/common"
"github.com/stretchr/testify/assert"
"github.com/argoproj/argo-cd/util/settings"
)
var c = &clusterInfo{cacheSettingsSrc: func() *cacheSettings {
return &cacheSettings{AppInstanceLabelKey: common.LabelKeyAppInstance}
}}
var c = &clusterInfo{settings: &settings.ArgoCDSettings{}}
func TestIsParentOf(t *testing.T) {
child := c.createObjInfo(testPod, "")
@@ -21,36 +19,11 @@ func TestIsParentOf(t *testing.T) {
assert.False(t, grandParent.isParentOf(child))
}
func TestIsParentOfSameKindDifferentGroupAndUID(t *testing.T) {
func TestIsParentOfSameKindDifferentGroup(t *testing.T) {
rs := testRS.DeepCopy()
rs.SetAPIVersion("somecrd.io/v1")
rs.SetUID("123")
child := c.createObjInfo(testPod, "")
invalidParent := c.createObjInfo(rs, "")
assert.False(t, invalidParent.isParentOf(child))
}
func TestIsServiceParentOfEndPointWithTheSameName(t *testing.T) {
nonMatchingNameEndPoint := c.createObjInfo(strToUnstructured(`
apiVersion: v1
kind: Endpoints
metadata:
name: not-matching-name
namespace: default
`), "")
matchingNameEndPoint := c.createObjInfo(strToUnstructured(`
apiVersion: v1
kind: Endpoints
metadata:
name: helm-guestbook
namespace: default
`), "")
parent := c.createObjInfo(testService, "")
assert.True(t, parent.isParentOf(matchingNameEndPoint))
assert.Equal(t, parent.ref.UID, matchingNameEndPoint.ownerRefs[0].UID)
assert.False(t, parent.isParentOf(nonMatchingNameEndPoint))
}

View File

@@ -13,16 +13,13 @@ import (
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
applister "github.com/argoproj/argo-cd/pkg/client/listers/application/v1alpha1"
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/healthz"
)
type MetricsServer struct {
*http.Server
syncCounter *prometheus.CounterVec
k8sRequestCounter *prometheus.CounterVec
kubectlExecCounter *prometheus.CounterVec
kubectlExecPendingGauge *prometheus.GaugeVec
reconcileHistogram *prometheus.HistogramVec
syncCounter *prometheus.CounterVec
k8sRequestCounter *prometheus.CounterVec
reconcileHistogram *prometheus.HistogramVec
}
const (
@@ -62,13 +59,12 @@ var (
)
// NewMetricsServer returns a new prometheus server which collects application metrics
func NewMetricsServer(addr string, appLister applister.ApplicationLister, healthCheck func() error) *MetricsServer {
func NewMetricsServer(addr string, appLister applister.ApplicationLister) *MetricsServer {
mux := http.NewServeMux()
appRegistry := NewAppRegistry(appLister)
appRegistry.MustRegister(prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{}))
appRegistry.MustRegister(prometheus.NewGoCollector())
mux.Handle(MetricsPath, promhttp.HandlerFor(appRegistry, promhttp.HandlerOpts{}))
healthz.ServeHealthCheck(mux, healthCheck)
syncCounter := prometheus.NewCounterVec(
prometheus.CounterOpts{
@@ -78,16 +74,6 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, health
append(descAppDefaultLabels, "phase"),
)
appRegistry.MustRegister(syncCounter)
kubectlExecCounter := prometheus.NewCounterVec(prometheus.CounterOpts{
Name: "argocd_kubectl_exec_total",
Help: "Number of kubectl executions",
}, []string{"command"})
appRegistry.MustRegister(kubectlExecCounter)
kubectlExecPendingGauge := prometheus.NewGaugeVec(prometheus.GaugeOpts{
Name: "argocd_kubectl_exec_pending",
Help: "Number of pending kubectl executions",
}, []string{"command"})
appRegistry.MustRegister(kubectlExecPendingGauge)
k8sRequestCounter := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "argocd_app_k8s_request_total",
@@ -114,11 +100,9 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, health
Addr: addr,
Handler: mux,
},
syncCounter: syncCounter,
k8sRequestCounter: k8sRequestCounter,
reconcileHistogram: reconcileHistogram,
kubectlExecCounter: kubectlExecCounter,
kubectlExecPendingGauge: kubectlExecPendingGauge,
syncCounter: syncCounter,
k8sRequestCounter: k8sRequestCounter,
reconcileHistogram: reconcileHistogram,
}
}
@@ -140,18 +124,6 @@ func (m *MetricsServer) IncReconcile(app *argoappv1.Application, duration time.D
m.reconcileHistogram.WithLabelValues(app.Namespace, app.Name, app.Spec.GetProject()).Observe(duration.Seconds())
}
func (m *MetricsServer) IncKubectlExec(command string) {
m.kubectlExecCounter.WithLabelValues(command).Inc()
}
func (m *MetricsServer) IncKubectlExecPending(command string) {
m.kubectlExecPendingGauge.WithLabelValues(command).Inc()
}
func (m *MetricsServer) DecKubectlExecPending(command string) {
m.kubectlExecPendingGauge.WithLabelValues(command).Dec()
}
type appCollector struct {
store applister.ApplicationLister
}
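The kubectl exec metrics removed above follow the standard client_golang pattern: create a `CounterVec`, register it once, and bump it per labeled command. A self-contained sketch of that pattern against a private registry (the metric name here is illustrative):

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
)

func main() {
	reg := prometheus.NewRegistry()
	execCounter := prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "demo_kubectl_exec_total",
		Help: "Number of kubectl executions (illustrative).",
	}, []string{"command"})
	reg.MustRegister(execCounter)

	execCounter.WithLabelValues("apply").Inc()

	// Read the value back to show the label pairing.
	m := &dto.Metric{}
	if err := execCounter.WithLabelValues("apply").Write(m); err != nil {
		panic(err)
	}
	fmt.Println(m.GetCounter().GetValue()) // 1
}
```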

View File

@@ -104,10 +104,6 @@ argocd_app_sync_status{name="my-app",namespace="argocd",project="default",sync_s
argocd_app_sync_status{name="my-app",namespace="argocd",project="default",sync_status="Unknown"} 0
`
var noOpHealthCheck = func() error {
return nil
}
func newFakeApp(fakeApp string) *argoappv1.Application {
var app argoappv1.Application
err := yaml.Unmarshal([]byte(fakeApp), &app)
@@ -137,7 +133,7 @@ func newFakeLister(fakeApp ...string) (context.CancelFunc, applister.Application
func testApp(t *testing.T, fakeApp string, expectedResponse string) {
cancel, appLister := newFakeLister(fakeApp)
defer cancel()
metricsServ := NewMetricsServer("localhost:8082", appLister, noOpHealthCheck)
metricsServ := NewMetricsServer("localhost:8082", appLister)
req, err := http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)
rr := httptest.NewRecorder()
@@ -180,7 +176,7 @@ argocd_app_sync_total{name="my-app",namespace="argocd",phase="Succeeded",project
func TestMetricsSyncCounter(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ := NewMetricsServer("localhost:8082", appLister, noOpHealthCheck)
metricsServ := NewMetricsServer("localhost:8082", appLister)
fakeApp := newFakeApp(fakeApp)
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: argoappv1.OperationRunning})
@@ -221,7 +217,7 @@ argocd_app_reconcile_count{name="my-app",namespace="argocd",project="important-p
func TestReconcileMetrics(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ := NewMetricsServer("localhost:8082", appLister, noOpHealthCheck)
metricsServ := NewMetricsServer("localhost:8082", appLister)
fakeApp := newFakeApp(fakeApp)
metricsServ.IncReconcile(fakeApp, 5*time.Second)

View File

@@ -18,7 +18,8 @@ import (
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/db"
@@ -26,7 +27,6 @@ import (
"github.com/argoproj/argo-cd/util/health"
hookutil "github.com/argoproj/argo-cd/util/hook"
kubeutil "github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/resource"
"github.com/argoproj/argo-cd/util/settings"
)
@@ -56,11 +56,12 @@ type ResourceInfoProvider interface {
// AppStateManager defines methods which allow to compare application spec and actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool, localObjects []string) *comparisonResult
CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState)
}
type comparisonResult struct {
reconciledAt metav1.Time
syncStatus *v1alpha1.SyncStatus
healthStatus *v1alpha1.HealthStatus
resources []v1alpha1.ResourceStatus
@@ -75,16 +76,16 @@ type comparisonResult struct {
type appStateManager struct {
metricsServer *metrics.MetricsServer
db db.ArgoDB
settingsMgr *settings.SettingsManager
settings *settings.ArgoCDSettings
appclientset appclientset.Interface
projInformer cache.SharedIndexInformer
kubectl kubeutil.Kubectl
repoClientset apiclient.Clientset
repoClientset reposerver.Clientset
liveStateCache statecache.LiveStateCache
namespace string
}
func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1.ApplicationSource, appLabelKey, revision string, noCache bool) ([]*unstructured.Unstructured, []*unstructured.Unstructured, *apiclient.ManifestResponse, error) {
func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1.ApplicationSource, appLabelKey, revision string, noCache bool) ([]*unstructured.Unstructured, []*unstructured.Unstructured, *repository.ManifestResponse, error) {
helmRepos, err := m.db.ListHelmRepos(context.Background())
if err != nil {
return nil, nil, nil, err
@@ -103,21 +104,12 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
revision = source.TargetRevision
}
plugins, err := m.settingsMgr.GetConfigManagementPlugins()
if err != nil {
return nil, nil, nil, err
tools := make([]*appv1.ConfigManagementPlugin, len(m.settings.ConfigManagementPlugins))
for i := range m.settings.ConfigManagementPlugins {
tools[i] = &m.settings.ConfigManagementPlugins[i]
}
tools := make([]*appv1.ConfigManagementPlugin, len(plugins))
for i := range plugins {
tools[i] = &plugins[i]
}
buildOptions, err := m.settingsMgr.GetKustomizeBuildOptions()
if err != nil {
return nil, nil, nil, err
}
manifestInfo, err := repoClient.GenerateManifest(context.Background(), &apiclient.ManifestRequest{
manifestInfo, err := repoClient.GenerateManifest(context.Background(), &repository.ManifestRequest{
Repo: repo,
HelmRepos: helmRepos,
Revision: revision,
@@ -127,30 +119,17 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
Namespace: app.Spec.Destination.Namespace,
ApplicationSource: &source,
Plugins: tools,
KustomizeOptions: &appv1.KustomizeOptions{
BuildOptions: buildOptions,
},
})
if err != nil {
return nil, nil, nil, err
}
targetObjs, hooks, err := unmarshalManifests(manifestInfo.Manifests)
if err != nil {
return nil, nil, nil, err
}
return targetObjs, hooks, manifestInfo, nil
}
func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, []*unstructured.Unstructured, error) {
targetObjs := make([]*unstructured.Unstructured, 0)
hooks := make([]*unstructured.Unstructured, 0)
for _, manifest := range manifests {
for _, manifest := range manifestInfo.Manifests {
obj, err := v1alpha1.UnmarshalToUnstructured(manifest)
if err != nil {
return nil, nil, err
}
if resource.Ignore(obj) {
continue
return nil, nil, nil, err
}
if hookutil.IsHook(obj) {
hooks = append(hooks, obj)
@@ -158,7 +137,7 @@ func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, []*un
targetObjs = append(targetObjs, obj)
}
}
return targetObjs, hooks, nil
return targetObjs, hooks, manifestInfo, nil
}
func DeduplicateTargetObjects(
@@ -198,129 +177,34 @@ func DeduplicateTargetObjects(
return result, conditions, nil
}
// dedupLiveResources removes live resource duplicates that share the same UID. Duplicates are created in separate resource groups.
// E.g. apps/Deployment produces a duplicate in extensions/Deployment, authorization.openshift.io/ClusterRole produces a duplicate in rbac.authorization.k8s.io/ClusterRole, etc.
// The method removes such duplicates unless they are defined in git (exist in the target resources list). At least one duplicate stays.
// If none of the duplicates are in git, a random one stays.
func dedupLiveResources(targetObjs []*unstructured.Unstructured, liveObjsByKey map[kubeutil.ResourceKey]*unstructured.Unstructured) {
targetObjByKey := make(map[kubeutil.ResourceKey]*unstructured.Unstructured)
for i := range targetObjs {
targetObjByKey[kubeutil.GetResourceKey(targetObjs[i])] = targetObjs[i]
}
liveObjsById := make(map[types.UID][]*unstructured.Unstructured)
for k := range liveObjsByKey {
obj := liveObjsByKey[k]
if obj != nil {
liveObjsById[obj.GetUID()] = append(liveObjsById[obj.GetUID()], obj)
}
}
for id := range liveObjsById {
objs := liveObjsById[id]
if len(objs) > 1 {
duplicatesLeft := len(objs)
for i := range objs {
obj := objs[i]
resourceKey := kubeutil.GetResourceKey(obj)
if _, ok := targetObjByKey[resourceKey]; !ok {
delete(liveObjsByKey, resourceKey)
duplicatesLeft--
if duplicatesLeft == 1 {
break
}
}
}
}
}
}
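Worked through on a toy input, the dedup pass behaves as the comment describes: duplicates sharing a UID are pruned down to the git-defined entries, and at least one always survives. A simplified, self-contained model, with string keys standing in for `kube.ResourceKey`:

```go
package main

import "fmt"

// liveObj models a live resource that may be surfaced under several API groups.
type liveObj struct {
	uid string
}

// dedup deletes non-git duplicates per UID, keeping at least one entry.
func dedup(targetKeys map[string]bool, live map[string]liveObj) {
	byUID := map[string][]string{}
	for k, o := range live {
		byUID[o.uid] = append(byUID[o.uid], k)
	}
	for _, keys := range byUID {
		left := len(keys)
		for _, k := range keys {
			if left == 1 {
				break
			}
			if !targetKeys[k] {
				delete(live, k)
				left--
			}
		}
	}
}

func main() {
	live := map[string]liveObj{
		"apps/Deployment/default/guestbook":       {uid: "1"},
		"extensions/Deployment/default/guestbook": {uid: "1"},
	}
	dedup(map[string]bool{"apps/Deployment/default/guestbook": true}, live)
	for k := range live {
		fmt.Println(k) // only the apps/Deployment entry survives
	}
}
```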
func (m *appStateManager) getComparisonSettings(app *appv1.Application) (string, map[string]v1alpha1.ResourceOverride, diff.Normalizer, error) {
resourceOverrides, err := m.settingsMgr.GetResourceOverrides()
if err != nil {
return "", nil, nil, err
}
appLabelKey, err := m.settingsMgr.GetAppInstanceLabelKey()
if err != nil {
return "", nil, nil, err
}
diffNormalizer, err := argo.NewDiffNormalizer(app.Spec.IgnoreDifferences, resourceOverrides)
if err != nil {
return "", nil, nil, err
}
return appLabelKey, resourceOverrides, diffNormalizer, nil
}
// CompareAppState compares application git state to the live app state, using the specified
// revision and supplied source. If revision or overrides are empty, then compares against
// revision and overrides in the app spec.
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool, localManifests []string) *comparisonResult {
appLabelKey, resourceOverrides, diffNormalizer, err := m.getComparisonSettings(app)
// return unknown comparison result if basic comparison settings cannot be loaded
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool) (*comparisonResult, error) {
diffNormalizer, err := argo.NewDiffNormalizer(app.Spec.IgnoreDifferences, m.settings.ResourceOverrides)
if err != nil {
return &comparisonResult{
syncStatus: &v1alpha1.SyncStatus{
ComparedTo: appv1.ComparedTo{Source: source, Destination: app.Spec.Destination},
Status: appv1.SyncStatusCodeUnknown,
},
healthStatus: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
}
return nil, err
}
// do a best-effort load of live and target state to present as much information about app state as possible
failedToLoadObjs := false
conditions := make([]v1alpha1.ApplicationCondition, 0)
logCtx := log.WithField("application", app.Name)
logCtx.Infof("Comparing app state (cluster: %s, namespace: %s)", app.Spec.Destination.Server, app.Spec.Destination.Namespace)
var targetObjs []*unstructured.Unstructured
var hooks []*unstructured.Unstructured
var manifestInfo *apiclient.ManifestResponse
if len(localManifests) == 0 {
targetObjs, hooks, manifestInfo, err = m.getRepoObjs(app, source, appLabelKey, revision, noCache)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
failedToLoadObjs = true
}
} else {
targetObjs, hooks, err = unmarshalManifests(localManifests)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
failedToLoadObjs = true
}
manifestInfo = nil
observedAt := metav1.Now()
failedToLoadObjs := false
conditions := make([]v1alpha1.ApplicationCondition, 0)
appLabelKey := m.settings.GetAppInstanceLabelKey()
targetObjs, hooks, manifestInfo, err := m.getRepoObjs(app, source, appLabelKey, revision, noCache)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
failedToLoadObjs = true
}
targetObjs, dedupConditions, err := DeduplicateTargetObjects(app.Spec.Destination.Server, app.Spec.Destination.Namespace, targetObjs, m.liveStateCache)
if err != nil {
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
}
conditions = append(conditions, dedupConditions...)
resFilter, err := m.settingsMgr.GetResourcesFilter()
if err != nil {
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
} else {
for i := len(targetObjs) - 1; i >= 0; i-- {
targetObj := targetObjs[i]
gvk := targetObj.GroupVersionKind()
if resFilter.IsExcludedResource(gvk.Group, gvk.Kind, app.Spec.Destination.Server) {
targetObjs = append(targetObjs[:i], targetObjs[i+1:]...)
conditions = append(conditions, v1alpha1.ApplicationCondition{
Type: v1alpha1.ApplicationConditionExcludedResourceWarning,
Message: fmt.Sprintf("Resource %s/%s %s is excluded in the settings", gvk.Group, gvk.Kind, targetObj.GetName()),
})
}
}
}
logCtx.Debugf("Generated config manifests")
liveObjByKey, err := m.liveStateCache.GetManagedLiveObjs(app, targetObjs)
dedupLiveResources(targetObjs, liveObjByKey)
if err != nil {
liveObjByKey = make(map[kubeutil.ResourceKey]*unstructured.Unstructured)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
@@ -366,19 +250,16 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
// Do the actual comparison
diffResults, err := diff.DiffArray(targetObjs, managedLiveObj, diffNormalizer)
if err != nil {
diffResults = &diff.DiffResultList{}
failedToLoadObjs = true
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
return nil, err
}
syncCode := v1alpha1.SyncStatusCodeSynced
managedResources := make([]managedResource, len(targetObjs))
resourceSummaries := make([]v1alpha1.ResourceStatus, len(targetObjs))
for i, targetObj := range targetObjs {
liveObj := managedLiveObj[i]
obj := liveObj
for i := 0; i < len(targetObjs); i++ {
obj := managedLiveObj[i]
if obj == nil {
obj = targetObj
obj = targetObjs[i]
}
if obj == nil {
continue
@@ -386,44 +267,35 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
gvk := obj.GroupVersionKind()
resState := v1alpha1.ResourceStatus{
Namespace: obj.GetNamespace(),
Name: obj.GetName(),
Kind: gvk.Kind,
Version: gvk.Version,
Group: gvk.Group,
Hook: hookutil.IsHook(obj),
RequiresPruning: targetObj == nil && liveObj != nil,
Namespace: obj.GetNamespace(),
Name: obj.GetName(),
Kind: gvk.Kind,
Version: gvk.Version,
Group: gvk.Group,
Hook: hookutil.IsHook(obj),
}
diffResult := diffResults.Diffs[i]
if resState.Hook || resource.Ignore(obj) {
if resState.Hook {
// For resource hooks, don't store sync status, and do not affect overall sync status
} else if diffResult.Modified || targetObj == nil || liveObj == nil {
} else if diffResult.Modified || targetObjs[i] == nil || managedLiveObj[i] == nil {
// Set resource state to OutOfSync since one of the following is true:
// * target and live resource are different
// * target resource not defined and live resource is extra
// * target resource present but live resource is missing
resState.Status = v1alpha1.SyncStatusCodeOutOfSync
// we ignore the status if the obj needs pruning AND we have the annotation
needsPruning := targetObj == nil && liveObj != nil
if !(needsPruning && resource.HasAnnotationOption(obj, common.AnnotationCompareOptions, "IgnoreExtraneous")) {
syncCode = v1alpha1.SyncStatusCodeOutOfSync
}
syncCode = v1alpha1.SyncStatusCodeOutOfSync
} else {
resState.Status = v1alpha1.SyncStatusCodeSynced
}
// we can't say anything about the status if we were unable to get the target objects
if failedToLoadObjs {
resState.Status = v1alpha1.SyncStatusCodeUnknown
}
managedResources[i] = managedResource{
Name: resState.Name,
Namespace: resState.Namespace,
Group: resState.Group,
Kind: resState.Kind,
Version: resState.Version,
Live: liveObj,
Target: targetObj,
Live: managedLiveObj[i],
Target: targetObjs[i],
Diff: diffResult,
Hook: resState.Hook,
}
@@ -444,7 +316,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
syncStatus.Revision = manifestInfo.Revision
}
healthStatus, err := health.SetApplicationHealth(resourceSummaries, GetLiveObjs(managedResources), resourceOverrides, func(obj *unstructured.Unstructured) bool {
healthStatus, err := health.SetApplicationHealth(resourceSummaries, GetLiveObjs(managedResources), m.settings.ResourceOverrides, func(obj *unstructured.Unstructured) bool {
return !isSelfReferencedApp(app, kubeutil.GetObjectRef(obj))
})
@@ -453,6 +325,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
}
compRes := comparisonResult{
reconciledAt: observedAt,
syncStatus: &syncStatus,
healthStatus: healthStatus,
resources: resourceSummaries,
@@ -464,7 +337,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
if manifestInfo != nil {
compRes.appSourceType = v1alpha1.ApplicationSourceType(manifestInfo.SourceType)
}
return &compRes
return &compRes, nil
}
func (m *appStateManager) persistRevisionHistory(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource) error {
@@ -499,10 +372,10 @@ func (m *appStateManager) persistRevisionHistory(app *v1alpha1.Application, revi
func NewAppStateManager(
db db.ArgoDB,
appclientset appclientset.Interface,
repoClientset apiclient.Clientset,
repoClientset reposerver.Clientset,
namespace string,
kubectl kubeutil.Kubectl,
settingsMgr *settings.SettingsManager,
settings *settings.ArgoCDSettings,
liveStateCache statecache.LiveStateCache,
projInformer cache.SharedIndexInformer,
metricsServer *metrics.MetricsServer,
@@ -514,7 +387,7 @@ func NewAppStateManager(
kubectl: kubectl,
repoClientset: repoClientset,
namespace: namespace,
settingsMgr: settingsMgr,
settings: settings,
projInformer: projInformer,
metricsServer: metricsServer,
}

View File

@@ -4,15 +4,16 @@ import (
"encoding/json"
"testing"
"github.com/stretchr/testify/assert"
v1 "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"github.com/argoproj/argo-cd/common"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util/kube"
)
@@ -21,7 +22,7 @@ import (
func TestCompareAppStateEmpty(t *testing.T) {
app := newFakeApp()
data := fakeData{
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -30,7 +31,8 @@ func TestCompareAppStateEmpty(t *testing.T) {
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Equal(t, 0, len(compRes.resources))
@@ -43,7 +45,7 @@ func TestCompareAppStateMissing(t *testing.T) {
app := newFakeApp()
data := fakeData{
apps: []runtime.Object{app},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{string(test.PodManifest)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -52,7 +54,8 @@ func TestCompareAppStateMissing(t *testing.T) {
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeOutOfSync, compRes.syncStatus.Status)
assert.Equal(t, 1, len(compRes.resources))
@@ -67,7 +70,7 @@ func TestCompareAppStateExtra(t *testing.T) {
app := newFakeApp()
key := kube.ResourceKey{Group: "", Kind: "Pod", Namespace: test.FakeDestNamespace, Name: app.Name}
data := fakeData{
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -78,7 +81,8 @@ func TestCompareAppStateExtra(t *testing.T) {
},
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeOutOfSync, compRes.syncStatus.Status)
assert.Equal(t, 1, len(compRes.resources))
@@ -95,7 +99,7 @@ func TestCompareAppStateHook(t *testing.T) {
app := newFakeApp()
data := fakeData{
apps: []runtime.Object{app},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{string(podBytes)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -104,41 +108,15 @@ func TestCompareAppStateHook(t *testing.T) {
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Equal(t, 0, len(compRes.resources))
assert.Equal(t, 0, len(compRes.managedResources))
assert.Equal(t, 1, len(compRes.hooks))
assert.Equal(t, 0, len(compRes.conditions))
}
// checks that ignored resources are detected, but excluded from status
func TestCompareAppStateCompareOptionIgnoreExtraneous(t *testing.T) {
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationCompareOptions: "IgnoreExtraneous"})
app := newFakeApp()
data := fakeData{
apps: []runtime.Object{app},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Len(t, compRes.resources, 0)
assert.Len(t, compRes.managedResources, 0)
assert.Len(t, compRes.conditions, 0)
}
// TestCompareAppStateExtraHook tests when there is an extra _hook_ object in live but not defined in git
func TestCompareAppStateExtraHook(t *testing.T) {
pod := test.NewPod()
@@ -147,7 +125,7 @@ func TestCompareAppStateExtraHook(t *testing.T) {
app := newFakeApp()
key := kube.ResourceKey{Group: "", Kind: "Pod", Namespace: test.FakeDestNamespace, Name: app.Name}
data := fakeData{
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -158,13 +136,12 @@ func TestCompareAppStateExtraHook(t *testing.T) {
},
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Equal(t, 1, len(compRes.resources))
assert.Equal(t, 1, len(compRes.managedResources))
assert.Equal(t, 0, len(compRes.hooks))
assert.Equal(t, 0, len(compRes.conditions))
}
@@ -183,7 +160,7 @@ func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
app := newFakeApp()
data := fakeData{
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{toJSON(t, obj1), toJSON(t, obj2), toJSON(t, obj3)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -195,8 +172,8 @@ func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
},
}
ctrl := newFakeController(&data)
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Contains(t, compRes.conditions, argoappv1.ApplicationCondition{
Message: "Resource /Pod/fake-dest-ns/my-pod appeared 2 times among application resources.",
@@ -235,7 +212,7 @@ func TestSetHealth(t *testing.T) {
})
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -246,7 +223,8 @@ func TestSetHealth(t *testing.T) {
},
})
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.Equal(t, compRes.healthStatus.Status, argoappv1.HealthStatusHealthy)
}
@@ -266,7 +244,7 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
})
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -278,7 +256,8 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
},
})
compRes := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.Equal(t, compRes.healthStatus.Status, argoappv1.HealthStatusHealthy)
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,66 @@
package controller
import (
"testing"
"github.com/stretchr/testify/assert"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util/kube/kubetest"
)
var clusterRoleHook = `
{
"apiVersion": "rbac.authorization.k8s.io/v1",
"kind": "ClusterRole",
"metadata": {
"name": "cluster-role-hook",
"annotations": {
"argocd.argoproj.io/hook": "PostSync"
}
}
}`
func TestSyncHookProjectPermissions(t *testing.T) {
syncCtx := newTestSyncCtx(&v1.APIResourceList{
GroupVersion: "v1",
APIResources: []v1.APIResource{
{Name: "pod", Namespaced: true, Kind: "Pod", Group: "v1"},
},
}, &v1.APIResourceList{
GroupVersion: "rbac.authorization.k8s.io/v1",
APIResources: []v1.APIResource{
{Name: "clusterroles", Namespaced: false, Kind: "ClusterRole", Group: "rbac.authorization.k8s.io"},
},
})
syncCtx.kubectl = kubetest.MockKubectlCmd{}
crHook, _ := v1alpha1.UnmarshalToUnstructured(clusterRoleHook)
syncCtx.compareResult = &comparisonResult{
hooks: []*unstructured.Unstructured{
crHook,
},
managedResources: []managedResource{{
Target: test.NewPod(),
}},
}
syncCtx.proj.Spec.ClusterResourceWhitelist = []v1.GroupKind{}
syncCtx.syncOp.SyncStrategy = nil
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationFailed, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 0)
assert.Contains(t, syncCtx.opState.Message, "not permitted in project")
// Now add the resource to the whitelist and try again. Resource should be created
syncCtx.proj.Spec.ClusterResourceWhitelist = []v1.GroupKind{
{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"},
}
syncCtx.syncOp.SyncStrategy = nil
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResultCodeSynced, syncCtx.syncRes.Resources[0].Status)
}

View File

@@ -2,55 +2,334 @@ package controller
import (
"fmt"
"reflect"
"strings"
wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1"
apiv1 "k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/kubernetes/pkg/apis/batch"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
hookutil "github.com/argoproj/argo-cd/util/hook"
"github.com/argoproj/argo-cd/util/kube"
)
// enforceHookDeletePolicy examines the hook deletion policy of an object and deletes it based on the status
func enforceHookDeletePolicy(hook *unstructured.Unstructured, operation v1alpha1.OperationPhase) bool {
// doHookSync initiates (or continues) a hook-based sync. This method will be invoked when there may
// already be in-flight (potentially incomplete) jobs/workflows, and should be idempotent.
func (sc *syncContext) doHookSync(syncTasks []syncTask, hooks []*unstructured.Unstructured) {
if !sc.startedPreSyncPhase() {
if !sc.verifyPermittedHooks(hooks) {
return
}
}
// 1. Run PreSync hooks
if !sc.runHooks(hooks, appv1.HookTypePreSync) {
return
}
// 2. Run Sync hooks (e.g. blue-green sync workflow)
// Before performing Sync hooks, apply any normal manifests which aren't annotated with a hook.
// We only want to do this once per operation.
shouldContinue := true
if !sc.startedSyncPhase() {
if !sc.syncNonHookTasks(syncTasks) {
sc.setOperationPhase(appv1.OperationFailed, "one or more objects failed to apply")
return
}
shouldContinue = false
}
if !sc.runHooks(hooks, appv1.HookTypeSync) {
shouldContinue = false
}
if !shouldContinue {
return
}
// 3. Run PostSync hooks
// Before running PostSync hooks, we want to make sure the rollout is complete (app is healthy). If we
// already started the post-sync phase, then we do not need to perform the health check.
postSyncHooks, _ := sc.getHooks(appv1.HookTypePostSync)
if len(postSyncHooks) > 0 && !sc.startedPostSyncPhase() {
sc.log.Infof("PostSync application health check: %s", sc.compareResult.healthStatus.Status)
if sc.compareResult.healthStatus.Status != appv1.HealthStatusHealthy {
sc.setOperationPhase(appv1.OperationRunning, fmt.Sprintf("waiting for %s state to run %s hooks (current health: %s)",
appv1.HealthStatusHealthy, appv1.HookTypePostSync, sc.compareResult.healthStatus.Status))
return
}
}
if !sc.runHooks(hooks, appv1.HookTypePostSync) {
return
}
// if we get here, all hooks successfully completed
sc.setOperationPhase(appv1.OperationSucceeded, "successfully synced")
}
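The control flow above amounts to a strict three-phase chain with an extra health gate before PostSync. A compact sketch of the ordering (the runner is illustrative; the phase names mirror the HookType constants):

```go
package main

import "fmt"

func main() {
	healthy := true // stand-in for compareResult.healthStatus being Healthy
	run := func(phase string) bool {
		fmt.Println("running", phase, "hooks")
		return true // a failed or still-running phase stops the chain here
	}
	if !run("PreSync") {
		return
	}
	if !run("Sync") {
		return
	}
	if !healthy {
		fmt.Println("waiting for Healthy state before PostSync hooks")
		return // requeued; PostSync runs on a later reconciliation
	}
	if !run("PostSync") {
		return
	}
	fmt.Println("operation Succeeded")
}
```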
// verifyPermittedHooks verifies all hooks are permitted in the project
func (sc *syncContext) verifyPermittedHooks(hooks []*unstructured.Unstructured) bool {
for _, hook := range hooks {
gvk := hook.GroupVersionKind()
serverRes, err := kube.ServerResourceForGroupVersionKind(sc.disco, gvk)
if err != nil {
sc.setOperationPhase(appv1.OperationError, fmt.Sprintf("unable to identify api resource type: %v", gvk))
return false
}
if !sc.proj.IsResourcePermitted(metav1.GroupKind{Group: gvk.Group, Kind: gvk.Kind}, serverRes.Namespaced) {
sc.setOperationPhase(appv1.OperationFailed, fmt.Sprintf("Hook resource %s:%s is not permitted in project %s", gvk.Group, gvk.Kind, sc.proj.Name))
return false
}
if serverRes.Namespaced && !sc.proj.IsDestinationPermitted(appv1.ApplicationDestination{Namespace: hook.GetNamespace(), Server: sc.server}) {
gvk := hook.GroupVersionKind()
sc.setResourceDetails(&appv1.ResourceResult{
Name: hook.GetName(),
Group: gvk.Group,
Version: gvk.Version,
Kind: hook.GetKind(),
Namespace: hook.GetNamespace(),
Message: fmt.Sprintf("namespace %v is not permitted in project '%s'", hook.GetNamespace(), sc.proj.Name),
Status: appv1.ResultCodeSyncFailed,
})
return false
}
}
return true
}
// getHooks returns all Argo CD hooks, optionally filtered to those of the specified type(s)
func (sc *syncContext) getHooks(hookTypes ...appv1.HookType) ([]*unstructured.Unstructured, error) {
var hooks []*unstructured.Unstructured
for _, hook := range sc.compareResult.hooks {
if hook.GetNamespace() == "" {
hook.SetNamespace(sc.namespace)
}
if !hookutil.IsArgoHook(hook) {
// TODO: in the future, if we want to map helm hooks to Argo CD lifecycles, we should
// include helm hooks in the returned list
continue
}
if len(hookTypes) > 0 {
match := false
for _, desiredType := range hookTypes {
if isHookType(hook, desiredType) {
match = true
break
}
}
if !match {
continue
}
}
hooks = append(hooks, hook)
}
return hooks, nil
}
// runHooks iterates & filters the target manifests for resources of the specified hook type, then
// creates the resource. Updates the sc.opRes.hooks with the current status. Returns whether or not
// we should continue to the next hook phase.
func (sc *syncContext) runHooks(hooks []*unstructured.Unstructured, hookType appv1.HookType) bool {
shouldContinue := true
for _, hook := range hooks {
if hookType == appv1.HookTypeSync && isHookType(hook, appv1.HookTypeSkip) {
// If we get here, we are invoking all sync hooks and reached a resource that is
// annotated with the Skip hook. This will update the resource details to indicate it
// was skipped due to annotation
gvk := hook.GroupVersionKind()
sc.setResourceDetails(&appv1.ResourceResult{
Name: hook.GetName(),
Group: gvk.Group,
Version: gvk.Version,
Kind: hook.GetKind(),
Namespace: hook.GetNamespace(),
Message: "Skipped",
})
continue
}
if !isHookType(hook, hookType) {
continue
}
updated, err := sc.runHook(hook, hookType)
if err != nil {
sc.setOperationPhase(appv1.OperationError, fmt.Sprintf("%s hook error: %v", hookType, err))
return false
}
if updated {
// If the result of running a hook caused us to modify hook resource state, we should
// not proceed to the next hook phase. This is because before proceeding to the next
// phase, we want a full health assessment to happen. By returning early, we allow
// the application to get requeued into the controller workqueue, and on the next
// process iteration, a new CompareAppState() will be performed to get the most
// up-to-date live state. This enables us to accurately wait for an application to
// become Healthy before proceeding to run PostSync tasks.
shouldContinue = false
}
}
if !shouldContinue {
sc.log.Infof("Stopping after %s phase due to modifications to hook resource state", hookType)
return false
}
completed, successful := areHooksCompletedSuccessful(hookType, sc.syncRes.Resources)
if !completed {
return false
}
if !successful {
sc.setOperationPhase(appv1.OperationFailed, fmt.Sprintf("%s hook failed", hookType))
return false
}
return true
}
// syncNonHookTasks syncs or prunes the objects that are not handled by hooks using an apply sync.
// Returns true if the sync was successful
func (sc *syncContext) syncNonHookTasks(syncTasks []syncTask) bool {
var nonHookTasks []syncTask
for _, task := range syncTasks {
if task.targetObj == nil {
nonHookTasks = append(nonHookTasks, task)
} else {
annotations := task.targetObj.GetAnnotations()
if annotations != nil && annotations[common.AnnotationKeyHook] != "" {
// we are doing a hook sync and this resource is annotated with a hook annotation
continue
}
// if we get here, this resource does not have any hook annotation so we
// should perform a `kubectl apply`
nonHookTasks = append(nonHookTasks, task)
}
}
return sc.doApplySync(nonHookTasks, false, sc.syncOp.SyncStrategy.Hook.Force, true)
}
// runHook runs the supplied hook and updates the hook status. Returns true if the result of
// invoking this method resulted in changes to any hook status
func (sc *syncContext) runHook(hook *unstructured.Unstructured, hookType appv1.HookType) (bool, error) {
// Hook resource names are deterministic, whether they are defined by the user (metadata.name),
// or formulated at the time of the operation (metadata.generateName). If the user specifies
// metadata.generateName, then we will generate a formulated metadata.name before submission.
if hook.GetName() == "" {
postfix := strings.ToLower(fmt.Sprintf("%s-%s-%d", sc.syncRes.Revision[0:7], hookType, sc.opState.StartedAt.UTC().Unix()))
generatedName := hook.GetGenerateName()
hook = hook.DeepCopy()
hook.SetName(fmt.Sprintf("%s%s", generatedName, postfix))
}
// Check our hook statuses to see if we already completed this hook.
// If so, this method is a noop
prevStatus := sc.getHookStatus(hook, hookType)
if prevStatus != nil && prevStatus.HookPhase.Completed() {
return false, nil
}
gvk := hook.GroupVersionKind()
apiResource, err := kube.ServerResourceForGroupVersionKind(sc.disco, gvk)
if err != nil {
return false, err
}
resource := kube.ToGroupVersionResource(gvk.GroupVersion().String(), apiResource)
resIf := kube.ToResourceInterface(sc.dynamicIf, apiResource, resource, hook.GetNamespace())
var liveObj *unstructured.Unstructured
existing, err := resIf.Get(hook.GetName(), metav1.GetOptions{})
if err != nil {
if !apierr.IsNotFound(err) {
return false, fmt.Errorf("Failed to get status of %s hook %s '%s': %v", hookType, gvk, hook.GetName(), err)
}
_, err := sc.kubectl.ApplyResource(sc.config, hook, hook.GetNamespace(), false, false)
if err != nil {
return false, fmt.Errorf("Failed to create %s hook %s '%s': %v", hookType, gvk, hook.GetName(), err)
}
created, err := resIf.Get(hook.GetName(), metav1.GetOptions{})
if err != nil {
return true, fmt.Errorf("Failed to get status of %s hook %s '%s': %v", hookType, gvk, hook.GetName(), err)
}
sc.log.Infof("%s hook %s '%s' created", hookType, gvk, created.GetName())
sc.setOperationPhase(appv1.OperationRunning, fmt.Sprintf("running %s hooks", hookType))
liveObj = created
} else {
liveObj = existing
}
hookStatus := newHookStatus(liveObj, hookType)
if hookStatus.HookPhase.Completed() {
if enforceHookDeletePolicy(hook, hookStatus.HookPhase) {
err = sc.deleteHook(hook.GetName(), hook.GetNamespace(), hook.GroupVersionKind())
if err != nil {
hookStatus.HookPhase = appv1.OperationFailed
hookStatus.Message = fmt.Sprintf("failed to delete %s hook: %v", hookStatus.HookPhase, err)
}
}
}
return sc.updateHookStatus(hookStatus), nil
}
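For `generateName` hooks, the formulated name is reproducible across retries, which is what makes the "already completed" short-circuit above safe. A standalone rendering of the naming scheme with made-up values:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	// Mirrors the scheme runHook uses for generateName hooks:
	// <generateName><first-7-of-revision>-<hook-type>-<started-unix>.
	// All values below are illustrative.
	generateName := "db-migrate-"
	revision := "e0bd546a07"
	hookType := "PreSync"
	startedAt := time.Date(2019, 5, 9, 9, 30, 0, 0, time.UTC)

	postfix := strings.ToLower(fmt.Sprintf("%s-%s-%d", revision[0:7], hookType, startedAt.Unix()))
	fmt.Println(generateName + postfix) // db-migrate-e0bd546-presync-1557394200
}
```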
// enforceHookDeletePolicy examines the hook deletion policy of an object and deletes it based on the status
func enforceHookDeletePolicy(hook *unstructured.Unstructured, phase appv1.OperationPhase) bool {
annotations := hook.GetAnnotations()
if annotations == nil {
return false
}
deletePolicies := strings.Split(annotations[common.AnnotationKeyHookDeletePolicy], ",")
for _, dp := range deletePolicies {
policy := appv1.HookDeletePolicy(strings.TrimSpace(dp))
if policy == appv1.HookDeletePolicyHookSucceeded && phase == appv1.OperationSucceeded {
return true
}
if policy == appv1.HookDeletePolicyHookFailed && phase == appv1.OperationFailed {
return true
}
}
return false
}
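// Example (illustrative): a hook annotated with
// "argocd.argoproj.io/hook-delete-policy: HookSucceeded" makes this function
// return true once the hook's phase is OperationSucceeded, and runHook then
// deletes the spent hook resource.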
// isHookType tells whether or not the supplied object is a hook of the specified type
func isHookType(hook *unstructured.Unstructured, hookType appv1.HookType) bool {
annotations := hook.GetAnnotations()
if annotations == nil {
return false
}
resHookTypes := strings.Split(annotations[common.AnnotationKeyHook], ",")
for _, ht := range resHookTypes {
if string(hookType) == strings.TrimSpace(ht) {
return true
}
}
return false
}
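// Example (illustrative): an object annotated with
// "argocd.argoproj.io/hook: PreSync,PostSync" satisfies both
// isHookType(obj, "PreSync") and isHookType(obj, "PostSync"), but not
// isHookType(obj, "Sync").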
// newHookStatus returns a hook status from a _live_ unstructured object
func newHookStatus(hook *unstructured.Unstructured, hookType appv1.HookType) appv1.ResourceResult {
gvk := hook.GroupVersionKind()
hookStatus := appv1.ResourceResult{
Name: hook.GetName(),
Kind: hook.GetKind(),
Group: gvk.Group,
Version: gvk.Version,
HookType: hookType,
HookPhase: appv1.OperationRunning,
Namespace: hook.GetNamespace(),
}
if isBatchJob(gvk) {
updateStatusFromBatchJob(hook, &hookStatus)
} else if isArgoWorkflow(gvk) {
updateStatusFromArgoWorkflow(hook, &hookStatus)
} else if isPod(gvk) {
updateStatusFromPod(hook, &hookStatus)
} else {
hookStatus.HookPhase = appv1.OperationSucceeded
hookStatus.Message = fmt.Sprintf("%s created", hook.GetName())
}
return hookStatus
}
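// Example (illustrative): a batch/v1 Job hook has its completion conditions
// mapped onto the hook phase by updateStatusFromBatchJob, whereas a plain
// ConfigMap hook falls through to the else branch and is immediately marked
// OperationSucceeded with a "created" message.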
// isRunnable reports whether the resource is a runnable type which may need to be terminated
func isRunnable(res *appv1.ResourceResult) bool {
gvk := res.GroupVersionKind()
return isBatchJob(gvk) || isArgoWorkflow(gvk) || isPod(gvk)
}
@@ -58,16 +337,18 @@ func isBatchJob(gvk schema.GroupVersionKind) bool {
return gvk.Group == "batch" && gvk.Kind == "Job"
}
// TODO this is a copy-and-paste of health.getJobHealth(), refactor out?
func updateStatusFromBatchJob(hook *unstructured.Unstructured, hookStatus *appv1.ResourceResult) {
var job batch.Job
err := runtime.DefaultUnstructuredConverter.FromUnstructured(hook.Object, &job)
if err != nil {
hookStatus.HookPhase = appv1.OperationError
hookStatus.Message = err.Error()
return
}
failed := false
var failMsg string
complete := false
var message string
for _, condition := range job.Status.Conditions {
switch condition.Type {
case batch.JobFailed:
@@ -80,11 +361,14 @@ func getStatusFromBatchJob(hook *unstructured.Unstructured) (operation v1alpha1.
}
}
if !complete {
hookStatus.HookPhase = appv1.OperationRunning
hookStatus.Message = message
} else if failed {
hookStatus.HookPhase = appv1.OperationFailed
hookStatus.Message = failMsg
} else {
hookStatus.HookPhase = appv1.OperationSucceeded
hookStatus.Message = message
}
}
@@ -92,36 +376,38 @@ func isArgoWorkflow(gvk schema.GroupVersionKind) bool {
return gvk.Group == "argoproj.io" && gvk.Kind == "Workflow"
}
// TODO - should we move this to health.go?
func updateStatusFromArgoWorkflow(hook *unstructured.Unstructured, hookStatus *appv1.ResourceResult) {
var wf wfv1.Workflow
err := runtime.DefaultUnstructuredConverter.FromUnstructured(hook.Object, &wf)
if err != nil {
hookStatus.HookPhase = appv1.OperationError
hookStatus.Message = err.Error()
return
}
switch wf.Status.Phase {
case wfv1.NodePending, wfv1.NodeRunning:
hookStatus.HookPhase = appv1.OperationRunning
case wfv1.NodeSucceeded:
hookStatus.HookPhase = appv1.OperationSucceeded
case wfv1.NodeFailed:
hookStatus.HookPhase = appv1.OperationFailed
case wfv1.NodeError:
hookStatus.HookPhase = appv1.OperationError
}
hookStatus.Message = wf.Status.Message
}
func isPod(gvk schema.GroupVersionKind) bool {
return gvk.Group == "" && gvk.Kind == "Pod"
}
// TODO - this is very similar to health.getPodHealth() should we use that instead?
func updateStatusFromPod(hook *unstructured.Unstructured, hookStatus *appv1.ResourceResult) {
var pod apiv1.Pod
err := runtime.DefaultUnstructuredConverter.FromUnstructured(hook.Object, &pod)
if err != nil {
hookStatus.HookPhase = appv1.OperationError
hookStatus.Message = err.Error()
return
}
getFailMessage := func(ctr *apiv1.ContainerStatus) string {
if ctr.State.Terminated != nil {
@@ -140,22 +426,135 @@ func getStatusFromPod(hook *unstructured.Unstructured) (v1alpha1.OperationPhase,
switch pod.Status.Phase {
case apiv1.PodPending, apiv1.PodRunning:
return v1alpha1.OperationRunning, ""
hookStatus.HookPhase = appv1.OperationRunning
case apiv1.PodSucceeded:
return v1alpha1.OperationSucceeded, ""
hookStatus.HookPhase = appv1.OperationSucceeded
case apiv1.PodFailed:
hookStatus.HookPhase = appv1.OperationFailed
if pod.Status.Message != "" {
// Pod has a nice error message. Use that.
return v1alpha1.OperationFailed, pod.Status.Message
hookStatus.Message = pod.Status.Message
return
}
for _, ctr := range append(pod.Status.InitContainerStatuses, pod.Status.ContainerStatuses...) {
if msg := getFailMessage(&ctr); msg != "" {
return v1alpha1.OperationFailed, msg
hookStatus.Message = msg
return
}
}
return v1alpha1.OperationFailed, ""
case apiv1.PodUnknown:
return v1alpha1.OperationError, ""
hookStatus.HookPhase = appv1.OperationError
}
return v1alpha1.OperationRunning, ""
}
func (sc *syncContext) getHookStatus(hookObj *unstructured.Unstructured, hookType appv1.HookType) *appv1.ResourceResult {
for _, hr := range sc.syncRes.Resources {
if !hr.IsHook() {
continue
}
ns := util.FirstNonEmpty(hookObj.GetNamespace(), sc.namespace)
if hookEqual(hr, hookObj.GroupVersionKind().Group, hookObj.GetKind(), ns, hookObj.GetName(), hookType) {
return hr
}
}
return nil
}
func hookEqual(hr *appv1.ResourceResult, group, kind, namespace, name string, hookType appv1.HookType) bool {
return hr.Group == group &&
hr.Kind == kind &&
hr.Namespace == namespace &&
hr.Name == name &&
hr.HookType == hookType
}
// updateHookStatus updates the status of a hook. Returns true if the hook was modified
func (sc *syncContext) updateHookStatus(hookStatus appv1.ResourceResult) bool {
sc.lock.Lock()
defer sc.lock.Unlock()
for i, prev := range sc.syncRes.Resources {
if !prev.IsHook() {
continue
}
if hookEqual(prev, hookStatus.Group, hookStatus.Kind, hookStatus.Namespace, hookStatus.Name, hookStatus.HookType) {
if reflect.DeepEqual(prev, hookStatus) {
return false
}
if prev.HookPhase != hookStatus.HookPhase {
sc.log.Infof("Hook %s %s/%s hookPhase: %s -> %s", hookStatus.HookType, prev.Kind, prev.Name, prev.HookPhase, hookStatus.HookPhase)
}
if prev.Status != hookStatus.Status {
sc.log.Infof("Hook %s %s/%s status: %s -> %s", hookStatus.HookType, prev.Kind, prev.Name, prev.Status, hookStatus.Status)
}
if prev.Message != hookStatus.Message {
sc.log.Infof("Hook %s %s/%s message: '%s' -> '%s'", hookStatus.HookType, prev.Kind, prev.Name, prev.Message, hookStatus.Message)
}
sc.syncRes.Resources[i] = &hookStatus
return true
}
}
sc.syncRes.Resources = append(sc.syncRes.Resources, &hookStatus)
sc.log.Infof("Set new hook %s %s/%s. phase: %s, message: %s", hookStatus.HookType, hookStatus.Kind, hookStatus.Name, hookStatus.HookPhase, hookStatus.Message)
return true
}
// areHooksCompletedSuccessful returns whether all hooks of the specified type have completed, and whether they all completed successfully
func areHooksCompletedSuccessful(hookType appv1.HookType, hookStatuses []*appv1.ResourceResult) (bool, bool) {
isSuccessful := true
for _, hookStatus := range hookStatuses {
if !hookStatus.IsHook() {
continue
}
if hookStatus.HookType != hookType {
continue
}
if !hookStatus.HookPhase.Completed() {
return false, false
}
if !hookStatus.HookPhase.Successful() {
isSuccessful = false
}
}
return true, isSuccessful
}
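// Example (illustrative): for hookType PostSync, the statuses
// [Succeeded, Running] yield (false, false), [Succeeded, Failed] yield
// (true, false), and [Succeeded, Succeeded] yield (true, true).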
// terminate looks for any running jobs/workflow hooks and deletes the resource
func (sc *syncContext) terminate() {
terminateSuccessful := true
for _, hookStatus := range sc.syncRes.Resources {
if !hookStatus.IsHook() {
continue
}
if hookStatus.HookPhase.Completed() {
continue
}
if isRunnable(hookStatus) {
hookStatus.HookPhase = appv1.OperationFailed
err := sc.deleteHook(hookStatus.Name, hookStatus.Namespace, hookStatus.GroupVersionKind())
if err != nil {
hookStatus.Message = fmt.Sprintf("Failed to delete %s hook %s/%s: %v", hookStatus.HookType, hookStatus.Kind, hookStatus.Name, err)
terminateSuccessful = false
} else {
hookStatus.Message = fmt.Sprintf("Deleted %s hook %s/%s", hookStatus.HookType, hookStatus.Kind, hookStatus.Name)
}
sc.updateHookStatus(*hookStatus)
}
}
if terminateSuccessful {
sc.setOperationPhase(appv1.OperationFailed, "Operation terminated")
} else {
sc.setOperationPhase(appv1.OperationError, "Operation termination had errors")
}
}
func (sc *syncContext) deleteHook(name, namespace string, gvk schema.GroupVersionKind) error {
apiResource, err := kube.ServerResourceForGroupVersionKind(sc.disco, gvk)
if err != nil {
return err
}
resource := kube.ToGroupVersionResource(gvk.GroupVersion().String(), apiResource)
resIf := kube.ToResourceInterface(sc.dynamicIf, apiResource, resource, namespace)
propagationPolicy := metav1.DeletePropagationForeground
return resIf.Delete(name, &metav1.DeleteOptions{PropagationPolicy: &propagationPolicy})
}
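// Note: foreground propagation means dependents (e.g. the Pods owned by a hook
// Job) are deleted before the hook object itself is removed.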

View File

@@ -1,25 +0,0 @@
package controller
import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/hook"
)
func syncPhases(obj *unstructured.Unstructured) []v1alpha1.SyncPhase {
if hook.Skip(obj) {
return nil
} else if hook.IsHook(obj) {
var phases []v1alpha1.SyncPhase
for _, hookType := range hook.Types(obj) {
switch hookType {
case v1alpha1.HookTypePreSync, v1alpha1.HookTypeSync, v1alpha1.HookTypePostSync, v1alpha1.HookTypeSyncFail:
phases = append(phases, v1alpha1.SyncPhase(hookType))
}
}
return phases
} else {
return []v1alpha1.SyncPhase{v1alpha1.SyncPhaseSync}
}
}
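// Example (illustrative): a Pod annotated with
// "argocd.argoproj.io/hook: PreSync,PostSync" yields
// []SyncPhase{SyncPhasePreSync, SyncPhasePostSync}, while an object with no hook
// annotation yields []SyncPhase{SyncPhaseSync}.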

View File

@@ -1,50 +0,0 @@
package controller
import (
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/test"
)
func TestSyncPhaseNone(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhaseSync}, syncPhases(&unstructured.Unstructured{}))
}
func TestSyncPhasePreSync(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhasePreSync}, syncPhases(pod("PreSync")))
}
func TestSyncPhaseSync(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhaseSync}, syncPhases(pod("Sync")))
}
func TestSyncPhaseSkip(t *testing.T) {
assert.Nil(t, syncPhases(pod("Skip")))
}
// garbage hooks are still hooks, but have no phases, because some user spelled something wrong
func TestSyncPhaseGarbage(t *testing.T) {
assert.Nil(t, syncPhases(pod("Garbage")))
}
func TestSyncPhasePost(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhasePostSync}, syncPhases(pod("PostSync")))
}
func TestSyncPhaseFail(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhaseSyncFail}, syncPhases(pod("SyncFail")))
}
func TestSyncPhaseTwoPhases(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhasePreSync, SyncPhasePostSync}, syncPhases(pod("PreSync,PostSync")))
}
func pod(hookType string) *unstructured.Unstructured {
pod := test.NewPod()
pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": hookType})
return pod
}

View File

@@ -1,119 +0,0 @@
package controller
import (
"fmt"
"strconv"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/hook"
)
// syncTask holds the live and target object. At least one should be non-nil. A targetObj of nil
// indicates the live object needs to be pruned. A liveObj of nil indicates the object has yet to
// be deployed
type syncTask struct {
phase v1alpha1.SyncPhase
liveObj *unstructured.Unstructured
targetObj *unstructured.Unstructured
skipDryRun bool
syncStatus v1alpha1.ResultCode
operationState v1alpha1.OperationPhase
message string
}
func ternary(val bool, a, b string) string {
if val {
return a
} else {
return b
}
}
func (t *syncTask) String() string {
return fmt.Sprintf("%s/%d %s %s/%s:%s/%s %s->%s (%s,%s,%s)",
t.phase, t.wave(),
ternary(t.isHook(), "hook", "resource"), t.group(), t.kind(), t.namespace(), t.name(),
ternary(t.liveObj != nil, "obj", "nil"), ternary(t.targetObj != nil, "obj", "nil"),
t.syncStatus, t.operationState, t.message,
)
}
func (t *syncTask) isPrune() bool {
return t.targetObj == nil
}
// return the target object (if this exists) otherwise the live object
// some caution - often you explicitly want the live object not the target object
func (t *syncTask) obj() *unstructured.Unstructured {
return obj(t.targetObj, t.liveObj)
}
func (t *syncTask) wave() int {
text := t.obj().GetAnnotations()[common.AnnotationSyncWave]
if text == "" {
return 0
}
val, err := strconv.Atoi(text)
if err != nil {
return 0
}
return val
}
func (t *syncTask) isHook() bool {
return hook.IsHook(t.obj())
}
func (t *syncTask) group() string {
return t.groupVersionKind().Group
}
func (t *syncTask) kind() string {
return t.groupVersionKind().Kind
}
func (t *syncTask) version() string {
return t.groupVersionKind().Version
}
func (t *syncTask) groupVersionKind() schema.GroupVersionKind {
return t.obj().GroupVersionKind()
}
func (t *syncTask) name() string {
return t.obj().GetName()
}
func (t *syncTask) namespace() string {
return t.obj().GetNamespace()
}
func (t *syncTask) pending() bool {
return t.operationState == ""
}
func (t *syncTask) running() bool {
return t.operationState == v1alpha1.OperationRunning
}
func (t *syncTask) completed() bool {
return t.operationState.Completed()
}
func (t *syncTask) successful() bool {
return t.operationState.Successful()
}
func (t *syncTask) hookType() v1alpha1.HookType {
if t.isHook() {
return v1alpha1.HookType(t.phase)
} else {
return ""
}
}

View File

@@ -1,38 +0,0 @@
package controller
import (
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/test"
)
func Test_syncTask_hookType(t *testing.T) {
type fields struct {
phase SyncPhase
liveObj *unstructured.Unstructured
}
tests := []struct {
name string
fields fields
want HookType
}{
{"Empty", fields{SyncPhaseSync, test.NewPod()}, ""},
{"PreSyncHook", fields{SyncPhasePreSync, test.NewHook(HookTypePreSync)}, HookTypePreSync},
{"SyncHook", fields{SyncPhaseSync, test.NewHook(HookTypeSync)}, HookTypeSync},
{"PostSyncHook", fields{SyncPhasePostSync, test.NewHook(HookTypePostSync)}, HookTypePostSync},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
task := &syncTask{
phase: tt.fields.phase,
liveObj: tt.fields.liveObj,
}
hookType := task.hookType()
assert.EqualValues(t, tt.want, hookType)
})
}
}

View File

@@ -1,185 +0,0 @@
package controller
import (
"strings"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
)
// syncPhaseOrder represents the order in which sync phases are executed
var syncPhaseOrder = map[v1alpha1.SyncPhase]int{
v1alpha1.SyncPhasePreSync: -1,
v1alpha1.SyncPhaseSync: 0,
v1alpha1.SyncPhasePostSync: 1,
v1alpha1.SyncPhaseSyncFail: 2,
}
// kindOrder represents the correct order of Kubernetes resources within a manifest
// https://github.com/helm/helm/blob/master/pkg/tiller/kind_sorter.go
var kindOrder = map[string]int{}
func init() {
kinds := []string{
"Namespace",
"ResourceQuota",
"LimitRange",
"PodSecurityPolicy",
"PodDisruptionBudget",
"Secret",
"ConfigMap",
"StorageClass",
"PersistentVolume",
"PersistentVolumeClaim",
"ServiceAccount",
"CustomResourceDefinition",
"ClusterRole",
"ClusterRoleBinding",
"Role",
"RoleBinding",
"Service",
"DaemonSet",
"Pod",
"ReplicationController",
"ReplicaSet",
"Deployment",
"StatefulSet",
"Job",
"CronJob",
"Ingress",
"APIService",
}
for i, kind := range kinds {
// make sure none of the above entries are zero, we need that for custom resources
kindOrder[kind] = i - len(kinds)
}
}
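// Example (illustrative): with 27 kinds listed above, kindOrder["Namespace"] is
// -27 and kindOrder["APIService"] is -1, while an unlisted kind (e.g. a custom
// resource) falls back to the zero value 0 and therefore sorts after all of them.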
type syncTasks []*syncTask
func (s syncTasks) Len() int {
return len(s)
}
func (s syncTasks) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
// order is
// 1. phase
// 2. wave
// 3. kind
// 4. name
func (s syncTasks) Less(i, j int) bool {
tA := s[i]
tB := s[j]
d := syncPhaseOrder[tA.phase] - syncPhaseOrder[tB.phase]
if d != 0 {
return d < 0
}
d = tA.wave() - tB.wave()
if d != 0 {
return d < 0
}
a := tA.obj()
b := tB.obj()
// we take advantage of the fact that if the kind is not in the kindOrder map,
// then it will return the default int value of zero, which is the highest value
d = kindOrder[a.GetKind()] - kindOrder[b.GetKind()]
if d != 0 {
return d < 0
}
return a.GetName() < b.GetName()
}
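// Example (illustrative): given a PreSync hook task, a Namespace task, and a
// Deployment task all in wave 0, sort.Sort(tasks) orders the hook first (phase),
// then the Namespace before the Deployment (kindOrder), with names breaking ties.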
func (s syncTasks) Filter(predicate func(task *syncTask) bool) (tasks syncTasks) {
for _, task := range s {
if predicate(task) {
tasks = append(tasks, task)
}
}
return tasks
}
func (s syncTasks) Split(predicate func(task *syncTask) bool) (trueTasks, falseTasks syncTasks) {
for _, task := range s {
if predicate(task) {
trueTasks = append(trueTasks, task)
} else {
falseTasks = append(falseTasks, task)
}
}
return trueTasks, falseTasks
}
func (s syncTasks) All(predicate func(task *syncTask) bool) bool {
for _, task := range s {
if !predicate(task) {
return false
}
}
return true
}
func (s syncTasks) Any(predicate func(task *syncTask) bool) bool {
for _, task := range s {
if predicate(task) {
return true
}
}
return false
}
func (s syncTasks) Find(predicate func(task *syncTask) bool) *syncTask {
for _, task := range s {
if predicate(task) {
return task
}
}
return nil
}
func (s syncTasks) String() string {
var values []string
for _, task := range s {
values = append(values, task.String())
}
return "[" + strings.Join(values, ", ") + "]"
}
func (s syncTasks) phase() v1alpha1.SyncPhase {
if len(s) > 0 {
return s[0].phase
}
return ""
}
func (s syncTasks) wave() int {
if len(s) > 0 {
return s[0].wave()
}
return 0
}
func (s syncTasks) lastPhase() v1alpha1.SyncPhase {
if len(s) > 0 {
return s[len(s)-1].phase
}
return ""
}
func (s syncTasks) lastWave() int {
if len(s) > 0 {
return s[len(s)-1].wave()
}
return 0
}
func (s syncTasks) multiStep() bool {
return s.wave() != s.lastWave() || s.phase() != s.lastPhase()
}
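// Example (illustrative): a PreSync task in wave -1 followed by a PostSync task
// in wave 1 spans multiple phases/waves, so multiStep() returns true; a single
// Sync-phase task in a single wave returns false.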

View File

@@ -1,392 +0,0 @@
package controller
import (
"sort"
"testing"
"github.com/stretchr/testify/assert"
apiv1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/common"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
. "github.com/argoproj/argo-cd/test"
)
func Test_syncTasks_kindOrder(t *testing.T) {
assert.Equal(t, -27, kindOrder["Namespace"])
assert.Equal(t, -1, kindOrder["APIService"])
assert.Equal(t, 0, kindOrder["MyCRD"])
}
func TestSortSyncTask(t *testing.T) {
sort.Sort(unsortedTasks)
assert.Equal(t, sortedTasks, unsortedTasks)
}
func TestAnySyncTasks(t *testing.T) {
res := unsortedTasks.Any(func(task *syncTask) bool {
return task.name() == "a"
})
assert.True(t, res)
res = unsortedTasks.Any(func(task *syncTask) bool {
return task.name() == "does-not-exist"
})
assert.False(t, res)
}
func TestAllSyncTasks(t *testing.T) {
res := unsortedTasks.All(func(task *syncTask) bool {
return task.name() != ""
})
assert.False(t, res)
res = unsortedTasks.All(func(task *syncTask) bool {
return task.name() == "a"
})
assert.False(t, res)
}
func TestSplitSyncTasks(t *testing.T) {
named, unnamed := sortedTasks.Split(func(task *syncTask) bool {
return task.name() != ""
})
assert.Equal(t, named, namedObjTasks)
assert.Equal(t, unnamed, unnamedTasks)
}
var unsortedTasks = syncTasks{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
phase: SyncPhaseSyncFail, targetObj: &unstructured.Unstructured{},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "1",
},
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "b",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "a",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "-1",
},
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
phase: SyncPhasePreSync,
targetObj: &unstructured.Unstructured{},
},
{
phase: SyncPhasePostSync, targetObj: &unstructured.Unstructured{},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
}
var sortedTasks = syncTasks{
{
phase: SyncPhasePreSync,
targetObj: &unstructured.Unstructured{},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "-1",
},
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "a",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "b",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "1",
},
},
},
},
},
{
phase: SyncPhasePostSync,
targetObj: &unstructured.Unstructured{},
},
{
phase: SyncPhaseSyncFail,
targetObj: &unstructured.Unstructured{},
},
}
var namedObjTasks = syncTasks{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "a",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "b",
},
},
},
},
}
var unnamedTasks = syncTasks{
{
phase: SyncPhasePreSync,
targetObj: &unstructured.Unstructured{},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "-1",
},
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "1",
},
},
},
},
},
{
phase: SyncPhasePostSync,
targetObj: &unstructured.Unstructured{},
},
{
phase: SyncPhaseSyncFail,
targetObj: &unstructured.Unstructured{},
},
}
func Test_syncTasks_Filter(t *testing.T) {
tasks := syncTasks{{phase: SyncPhaseSync}, {phase: SyncPhasePostSync}}
assert.Equal(t, syncTasks{{phase: SyncPhaseSync}}, tasks.Filter(func(t *syncTask) bool {
return t.phase == SyncPhaseSync
}))
}
func TestSyncNamespaceAgainstCRD(t *testing.T) {
crd := &syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"kind": "Workflow",
},
}}
namespace := &syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"kind": "Namespace",
},
},
}
unsorted := syncTasks{crd, namespace}
sort.Sort(unsorted)
assert.Equal(t, syncTasks{namespace, crd}, unsorted)
}
func Test_syncTasks_multiStep(t *testing.T) {
t.Run("Single", func(t *testing.T) {
tasks := syncTasks{{liveObj: Annotate(NewPod(), common.AnnotationSyncWave, "-1"), phase: SyncPhaseSync}}
assert.Equal(t, SyncPhaseSync, tasks.phase())
assert.Equal(t, -1, tasks.wave())
assert.Equal(t, SyncPhaseSync, tasks.lastPhase())
assert.Equal(t, -1, tasks.lastWave())
assert.False(t, tasks.multiStep())
})
t.Run("Double", func(t *testing.T) {
tasks := syncTasks{
{liveObj: Annotate(NewPod(), common.AnnotationSyncWave, "-1"), phase: SyncPhasePreSync},
{liveObj: Annotate(NewPod(), common.AnnotationSyncWave, "1"), phase: SyncPhasePostSync},
}
assert.Equal(t, SyncPhasePreSync, tasks.phase())
assert.Equal(t, -1, tasks.wave())
assert.Equal(t, SyncPhasePostSync, tasks.lastPhase())
assert.Equal(t, 1, tasks.lastWave())
assert.True(t, tasks.multiStep())
})
}

View File

@@ -2,11 +2,12 @@ package controller
import (
"fmt"
"reflect"
"sort"
"testing"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
apiv1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -18,8 +19,7 @@ import (
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/kube/kubetest"
@@ -45,9 +45,7 @@ func newTestSyncCtx(resources ...*v1.APIResourceList) *syncContext {
config: &rest.Config{},
namespace: test.FakeArgoCDNamespace,
server: test.FakeClusterURL,
syncRes: &v1alpha1.SyncOperationResult{
Revision: "FooBarBaz",
},
syncOp: &v1alpha1.SyncOperation{
Prune: true,
SyncStrategy: &v1alpha1.SyncStrategy{
@@ -72,7 +70,7 @@ func newTestSyncCtx(resources ...*v1.APIResourceList) *syncContext {
disco: fakeDisco,
log: log.WithFields(log.Fields{"application": "fake-app"}),
}
sc.kubectl = &kubetest.MockKubectlCmd{}
return &sc
}
@@ -106,19 +104,18 @@ func TestSyncCreateInSortedOrder(t *testing.T) {
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 2)
for i := range syncCtx.syncRes.Resources {
result := syncCtx.syncRes.Resources[i]
if result.Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodeSynced, result.Status)
assert.Equal(t, "", result.Message)
} else if result.Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodeSynced, result.Status)
assert.Equal(t, "", result.Message)
} else {
t.Error("Resource isn't a pod or a service")
}
}
}
func TestSyncCreateNotWhitelistedClusterResources(t *testing.T) {
@@ -139,7 +136,7 @@ func TestSyncCreateNotWhitelistedClusterResources(t *testing.T) {
{Group: "argoproj.io", Kind: "*"},
}
syncCtx.kubectl = &kubetest.MockKubectlCmd{}
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
@@ -150,9 +147,8 @@ func TestSyncCreateNotWhitelistedClusterResources(t *testing.T) {
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
result := syncCtx.syncRes.Resources[0]
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, result.Status)
assert.Contains(t, result.Message, "not permitted in project")
}
func TestSyncBlacklistedNamespacedResources(t *testing.T) {
@@ -170,83 +166,73 @@ func TestSyncBlacklistedNamespacedResources(t *testing.T) {
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
result := syncCtx.syncRes.Resources[0]
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, result.Status)
assert.Contains(t, result.Message, "not permitted in project")
}
func TestSyncSuccessfully(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: test.NewService(),
}, {
Live: pod,
Target: nil,
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 2)
for i := range syncCtx.syncRes.Resources {
result := syncCtx.syncRes.Resources[i]
if result.Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodePruned, result.Status)
assert.Equal(t, "pruned", result.Message)
} else if result.Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodeSynced, result.Status)
assert.Equal(t, "", result.Message)
if syncCtx.syncRes.Resources[i].Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodePruned, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodeSynced, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
}
func TestSyncDeleteSuccessfully(t *testing.T) {
syncCtx := newTestSyncCtx()
svc := test.NewService()
svc.SetNamespace(test.FakeArgoCDNamespace)
pod := test.NewPod()
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: svc,
Target: nil,
}, {
Live: pod,
Target: nil,
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
for i := range syncCtx.syncRes.Resources {
result := syncCtx.syncRes.Resources[i]
if result.Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodePruned, result.Status)
assert.Equal(t, "pruned", result.Message)
} else if result.Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodePruned, result.Status)
assert.Equal(t, "pruned", result.Message)
if syncCtx.syncRes.Resources[i].Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodePruned, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodePruned, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
}
func TestSyncCreateFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
testSvc := test.NewService()
syncCtx.kubectl = &kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{
testSvc.GetName(): {
"test-service": {
Output: "",
Err: fmt.Errorf("foo"),
Err: fmt.Errorf("error: error validating \"test.yaml\": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false"),
},
},
}
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
@@ -255,24 +241,21 @@ func TestSyncCreateFailure(t *testing.T) {
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
result := syncCtx.syncRes.Resources[0]
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, result.Status)
assert.Equal(t, "foo", result.Message)
}
func TestSyncPruneFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = &kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{
"test-service": {
Output: "",
Err: fmt.Errorf("foo"),
Err: fmt.Errorf(" error: timed out waiting for \"test-service\" to be synced"),
},
},
}
testSvc := test.NewService()
testSvc.SetName("test-service")
testSvc.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: testSvc,
@@ -280,11 +263,155 @@ func TestSyncPruneFailure(t *testing.T) {
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationFailed, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 1)
result := syncCtx.syncRes.Resources[0]
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, result.Status)
assert.Equal(t, "foo", result.Message)
}
func unsortedManifest() []syncTask {
return []syncTask{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
}
}
func sortedManifest() []syncTask {
return []syncTask{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
}
}
func TestSortKubernetesResourcesSuccessfully(t *testing.T) {
unsorted := unsortedManifest()
ks := newKindSorter(unsorted, resourceOrder)
sort.Sort(ks)
expectedOrder := sortedManifest()
assert.Equal(t, len(unsorted), len(expectedOrder))
for i, sorted := range unsorted {
assert.Equal(t, expectedOrder[i], sorted)
}
}
func TestSortManifestHandleNil(t *testing.T) {
task := syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
}
manifest := []syncTask{
{},
task,
}
ks := newKindSorter(manifest, resourceOrder)
sort.Sort(ks)
assert.Equal(t, task, manifest[0])
assert.Nil(t, manifest[1].targetObj)
}
func TestSyncNamespaceAgainstCRD(t *testing.T) {
crd := syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": "argoproj.io/alpha1",
"kind": "Workflow",
},
}}
namespace := syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Namespace",
},
},
}
unsorted := []syncTask{crd, namespace}
ks := newKindSorter(unsorted, resourceOrder)
sort.Sort(ks)
expectedOrder := []syncTask{namespace, crd}
assert.Equal(t, len(unsorted), len(expectedOrder))
for i, sorted := range unsorted {
assert.Equal(t, expectedOrder[i], sorted)
}
}
func TestDontSyncOrPruneHooks(t *testing.T) {
@@ -294,141 +421,23 @@ func TestDontSyncOrPruneHooks(t *testing.T) {
targetPod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync"})
liveSvc := test.NewService()
liveSvc.SetName("dont-prune-me")
liveSvc.SetNamespace(test.FakeArgoCDNamespace)
liveSvc.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync"})
syncCtx.compareResult = &comparisonResult{
hooks: []*unstructured.Unstructured{targetPod, liveSvc},
managedResources: []managedResource{{
Live: nil,
Target: targetPod,
Hook: true,
}, {
Live: liveSvc,
Target: nil,
Hook: true,
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 0)
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
}
// make sure that we do not prune resources with Prune=false
func TestDontPrunePruneFalse(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationSyncOptions: "Prune=false"})
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Live: pod}}}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResultCodePruneSkipped, syncCtx.syncRes.Resources[0].Status)
assert.Equal(t, "ignored (no prune)", syncCtx.syncRes.Resources[0].Message)
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
}
// make sure Validate=false means we don't validate
func TestSyncOptionValidate(t *testing.T) {
tests := []struct {
name string
annotationVal string
want bool
}{
{"Empty", "", true},
{"True", "Validate=true", true},
{"False", "Validate=false", false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationSyncOptions: tt.annotationVal})
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Target: pod, Live: pod}}}
syncCtx.sync()
kubectl, _ := syncCtx.kubectl.(*kubetest.MockKubectlCmd)
assert.Equal(t, tt.want, kubectl.LastValidate)
})
}
}
func TestSelectiveSyncOnly(t *testing.T) {
syncCtx := newTestSyncCtx()
pod1 := test.NewPod()
pod1.SetName("pod-1")
pod2 := test.NewPod()
pod2.SetName("pod-2")
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: pod1}},
}
syncCtx.syncResources = []v1alpha1.SyncOperationResource{{Kind: "Pod", Name: "pod-1"}}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 1)
assert.Equal(t, "pod-1", tasks[0].name())
}
func TestUnnamedHooksGetUniqueNames(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
pod := test.NewPod()
pod.SetName("")
pod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync,PostSync"})
syncCtx.compareResult = &comparisonResult{hooks: []*unstructured.Unstructured{pod}}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 2)
assert.Contains(t, tasks[0].name(), "foobarb-presync-")
assert.Contains(t, tasks[1].name(), "foobarb-postsync-")
assert.Equal(t, "", pod.GetName())
}
func TestManagedResourceAreNotNamed(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetName("")
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Target: pod}}}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 1)
assert.Equal(t, "", tasks[0].name())
assert.Equal(t, "", pod.GetName())
}
func TestDeDupingTasks(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "Sync"})
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: pod}},
hooks: []*unstructured.Unstructured{pod},
}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 1)
}
func TestObjectsGetANamespace(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Target: pod}}}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 1)
assert.Equal(t, test.FakeArgoCDNamespace, tasks[0].namespace())
assert.Equal(t, "", pod.GetNamespace())
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestPersistRevisionHistory(t *testing.T) {
@@ -444,7 +453,7 @@ func TestPersistRevisionHistory(t *testing.T) {
}
data := fakeData{
apps: []runtime.Object{app, defaultProject},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -481,7 +490,7 @@ func TestPersistRevisionHistoryRollback(t *testing.T) {
}
data := fakeData{
apps: []runtime.Object{app, defaultProject},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
@@ -517,146 +526,3 @@ func TestPersistRevisionHistoryRollback(t *testing.T) {
assert.Equal(t, source, updatedApp.Status.History[0].Source)
assert.Equal(t, "abc123", updatedApp.Status.History[0].Revision)
}
func TestSyncFailureHookWithSuccessfulSync(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: test.NewPod()}},
hooks: []*unstructured.Unstructured{test.NewHook(HookTypeSyncFail)},
}
syncCtx.sync()
assert.Equal(t, OperationSucceeded, syncCtx.opState.Phase)
// only one result, we did not run the SyncFail hook
assert.Len(t, syncCtx.syncRes.Resources, 1)
}
func TestSyncFailureHookWithFailedSync(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
pod := test.NewPod()
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: pod}},
hooks: []*unstructured.Unstructured{test.NewHook(HookTypeSyncFail)},
}
syncCtx.kubectl = &kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{pod.GetName(): {Err: fmt.Errorf("")}},
}
syncCtx.sync()
syncCtx.sync()
assert.Equal(t, OperationFailed, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 2)
}
func TestRunSyncFailHooksFailed(t *testing.T) {
// Tests that other SyncFail Hooks run even if one of them fail.
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
pod := test.NewPod()
successfulSyncFailHook := test.NewHook(HookTypeSyncFail)
successfulSyncFailHook.SetName("successful-sync-fail-hook")
failedSyncFailHook := test.NewHook(HookTypeSyncFail)
failedSyncFailHook.SetName("failed-sync-fail-hook")
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: pod}},
hooks: []*unstructured.Unstructured{successfulSyncFailHook, failedSyncFailHook},
}
syncCtx.kubectl = &kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{
// Fail operation
pod.GetName(): {Err: fmt.Errorf("")},
// Fail a single SyncFail hook
failedSyncFailHook.GetName(): {Err: fmt.Errorf("")}},
}
syncCtx.sync()
syncCtx.sync()
// Operation as a whole should fail
assert.Equal(t, OperationFailed, syncCtx.opState.Phase)
// failedSyncFailHook should fail
assert.Equal(t, OperationFailed, syncCtx.syncRes.Resources[1].HookPhase)
assert.Equal(t, ResultCodeSyncFailed, syncCtx.syncRes.Resources[1].Status)
// successfulSyncFailHook should be synced and still running (it is an nginx pod)
assert.Equal(t, OperationRunning, syncCtx.syncRes.Resources[2].HookPhase)
assert.Equal(t, ResultCodeSynced, syncCtx.syncRes.Resources[2].Status)
}
func Test_syncContext_isSelectiveSync(t *testing.T) {
type fields struct {
compareResult *comparisonResult
syncResources []SyncOperationResource
}
oneSyncResource := []SyncOperationResource{{}}
oneResource := func(group, kind, name string, hook bool) *comparisonResult {
return &comparisonResult{resources: []v1alpha1.ResourceStatus{{Group: group, Kind: kind, Name: name, Hook: hook}}}
}
tests := []struct {
name string
fields fields
want bool
}{
{"Empty", fields{}, false},
{"OneCompareResult", fields{oneResource("", "", "", false), []SyncOperationResource{}}, true},
{"OneSyncResource", fields{&comparisonResult{}, oneSyncResource}, true},
{"Equal", fields{oneResource("", "", "", false), oneSyncResource}, false},
{"EqualOutOfOrder", fields{&comparisonResult{resources: []v1alpha1.ResourceStatus{{Group: "a"}, {Group: "b"}}}, []SyncOperationResource{{Group: "b"}, {Group: "a"}}}, false},
{"KindDifferent", fields{oneResource("foo", "", "", false), oneSyncResource}, true},
{"GroupDifferent", fields{oneResource("", "foo", "", false), oneSyncResource}, true},
{"NameDifferent", fields{oneResource("", "", "foo", false), oneSyncResource}, true},
{"HookIgnored", fields{oneResource("", "", "", true), []SyncOperationResource{}}, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
sc := &syncContext{
compareResult: tt.fields.compareResult,
syncResources: tt.fields.syncResources,
}
if got := sc.isSelectiveSync(); got != tt.want {
t.Errorf("syncContext.isSelectiveSync() = %v, want %v", got, tt.want)
}
})
}
}
func Test_syncContext_liveObj(t *testing.T) {
type fields struct {
compareResult *comparisonResult
}
type args struct {
obj *unstructured.Unstructured
}
obj := test.NewPod()
obj.SetNamespace("my-ns")
found := test.NewPod()
tests := []struct {
name string
fields fields
args args
want *unstructured.Unstructured
}{
{"None", fields{compareResult: &comparisonResult{managedResources: []managedResource{}}}, args{obj: &unstructured.Unstructured{}}, nil},
{"Found", fields{compareResult: &comparisonResult{managedResources: []managedResource{{Group: obj.GroupVersionKind().Group, Kind: obj.GetKind(), Namespace: obj.GetNamespace(), Name: obj.GetName(), Live: found}}}}, args{obj: obj}, found},
{"EmptyNamespace", fields{compareResult: &comparisonResult{managedResources: []managedResource{{Group: obj.GroupVersionKind().Group, Kind: obj.GetKind(), Name: obj.GetName(), Live: found}}}}, args{obj: obj}, found},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
sc := &syncContext{
compareResult: tt.fields.compareResult,
}
if got := sc.liveObj(tt.args.obj); !reflect.DeepEqual(got, tt.want) {
t.Errorf("syncContext.liveObj() = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -10,12 +10,14 @@ Then, to get a good grounding in Go, try out [the tutorial](https://tour.golang.
Install:
* [docker](https://docs.docker.com/install/#supported-platforms)
* [git](https://git-scm.com/) and [git-lfs](https://git-lfs.github.com/)
* [golang](https://golang.org/)
* [dep](https://github.com/golang/dep)
* [protobuf](https://developers.google.com/protocol-buffers/)
* [ksonnet](https://github.com/ksonnet/ksonnet#install)
* [helm](https://github.com/helm/helm/releases)
* [kustomize](https://github.com/kubernetes-sigs/kustomize/releases)
* [go-swagger](https://github.com/go-swagger/go-swagger/blob/master/docs/install.md)
* [jq](https://stedolan.github.io/jq/)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* [kubectx](https://kubectx.dev)
* [minikube](https://kubernetes.io/docs/setup/minikube/) or Docker for Desktop
@@ -23,9 +25,13 @@ Install:
Brew users can quickly install the lot:
```bash
brew tap go-swagger/go-swagger
brew install go dep protobuf kubectl kubectx ksonnet/tap/ks kubernetes-helm jq go-swagger
```
!!! note "Kustomize"
Since Argo CD supports both Kustomize v1.0 and v2.0, you will need to install both versions for the unit tests to run. The Kustomize 1 unit test expects to find a `kustomize1` binary in the path. You can use this [link](https://github.com/argoproj/argo-cd/blob/master/Dockerfile#L66-L69) to find the Kustomize 1 version currently used by Argo CD and modify the curl command to download the correct binary for your OS.
Set up environment variables (e.g. in `~/.bashrc`):
```bash
@@ -33,34 +39,29 @@ export GOPATH=~/go
export PATH=$PATH:$GOPATH/bin
```
Checkout the code:
```bash
go get -u github.com/argoproj/argo-cd
cd ~/go/src/github.com/argoproj/argo-cd
```
Install go dependencies:
```bash
go get -u github.com/golang/protobuf/protoc-gen-go
go get -u github.com/go-swagger/go-swagger/cmd/swagger
go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
go get -u github.com/golangci/golangci-lint/cmd/golangci-lint
go get -u github.com/mattn/goreman
go get -u gotest.tools/gotestsum
```
## Building
Ensure dependencies are up to date first:
```shell
make dep
```
Build `cli`, `image`, and `argocd-util` as default targets by running make:
```bash
make
```
The make command can take a while, and we recommend building the specific component you are working on:
* `make codegen` - Builds protobuf and swagger files. Note: `make codegen` is slow because it uses docker + volume mounts; to improve performance you can install the binaries from `./hack/Dockerfile.dev-tools` and use `make codegen-local`. It is still recommended to run `make codegen` once before sending a PR to make sure the correct version of the codegen tools is used.
* `make cli` - Make the argocd CLI tool
* `make server` - Make the API/repo/controller server
* `make argocd-util` - Make the administrator's utility, used for certain tasks such as import/export
@@ -89,6 +90,15 @@ kubectl -n argocd scale deployment.extensions/argocd-server --replicas 0
kubectl -n argocd scale deployment.extensions/argocd-redis --replicas 0
```
Then checkout and build the UI next to your code:
```bash
cd ~/go/src/github.com/argoproj
git clone git@github.com:argoproj/argo-cd-ui.git
```
Follow the UI's [README](https://github.com/argoproj/argo-cd-ui/blob/master/README.md) to build it.
Note: you'll need to use the https://localhost:6443 cluster now.
Then start the services:
@@ -124,7 +134,7 @@ Add your username as the environment variable, e.g. to your `~/.bash_profile`:
export IMAGE_NAMESPACE=alexcollinsintuit
```
If you have not built the UI image (see [the UI README](https://github.com/argoproj/argo-cd-ui/blob/master/README.md)), then do the following:
```bash
docker pull argoproj/argocd-ui:latest
@@ -161,3 +171,16 @@ kubectl -n argocd scale deployment.extensions/argocd-redis --replicas 1
```
Now you can set-up the port-forwarding and open the UI or CLI.
## Pre-commit Checks
Before you commit, make sure you've formatted and linted your code, or your PR will fail CI:
```bash
STAGED_GO_FILES=$(git diff --cached --name-only | grep ".go$")
gofmt -w $STAGED_GO_FILES
make codegen
make precommit ;# lint and test
```

View File

@@ -2,5 +2,5 @@
1. Make sure you've read [understanding the basics](understand_the_basics.md) and the [getting started guide](getting_started.md).
2. Look for an answer in [the frequently asked questions](faq.md).
3. Ask a question in [the Argo CD Slack channel ⧉](https://argoproj.github.io/community/join-slack).
4. [Read issues, report a bug, or request a feature ⧉](https://github.com/argoproj/argo-cd/issues)

Binary file not shown.

Before

Width:  |  Height:  |  Size: 51 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 187 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 280 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 86 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 92 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 39 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 46 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 71 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 64 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 233 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 26 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 2.0 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 31 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 34 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 10 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 5.1 KiB

View File

@@ -1,41 +0,0 @@
# CI
## Troubleshooting Builds
### "Check nothing has changed" step fails
If your PR fails the `codegen` CI step, you can either:
(1) Simple - download the `codegen.patch` file from CircleCI and apply it:
![download codegen patch file](../assets/download-codegen-patch-file.png)
```bash
git apply codegen.patch
git commit -am "Applies codegen patch"
```
(2) Advanced - if you have the tools installed (see the contributing guide), run the following:
```bash
make pre-commit
git commit -am 'Ran pre-commit checks'
```
## Updating The Builder Image
Login to Docker Hub:
```bash
docker login
```
Build image:
```bash
make builder-image IMAGE_NAMESPACE=argoproj IMAGE_TAG=v1.0.0
```
## Public CD
[https://cd.apps.argoproj.io/](https://cd.apps.argoproj.io/)

View File

@@ -1,88 +1,48 @@
# Releasing
Make sure you are logged into Docker Hub:
1. Tag, build, and push argo-cd-ui
```bash
docker login
cd argo-cd-ui
git checkout -b release-X.Y
git tag vX.Y.Z
git push upstream release-X.Y --tags
IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z DOCKER_PUSH=true yarn docker
```
Export the upstream repository and branch name, e.g.:
2. Create release-X.Y branch (if creating initial X.Y release)
```bash
REPO=upstream ;# or origin
BRANCH=release-1.0
git checkout -b release-X.Y
git push upstream release-X.Y
```
Set the `VERSION` environment variable:
```bash
# release candidate
VERSION=v1.0.0-rc1
# GA release
VERSION=v1.0.2
```
Update `VERSION` and manifests with new version:
3. Update VERSION and manifests with new version
```bash
git checkout $BRANCH
echo ${VERSION:1} > VERSION
make manifests IMAGE_TAG=$VERSION
git commit -am "Update manifests to $VERSION"
git push $REPO $BRANCH
vi VERSION # ensure value is desired X.Y.Z semantic version
make manifests IMAGE_TAG=vX.Y.Z
git commit -a -m "Update manifests to vX.Y.Z"
git push upstream release-X.Y
```
Tag, build, and push release to Docker Hub
4. Tag, build, and push release to Docker Hub
```bash
git tag $VERSION
git clean -fd
make release IMAGE_NAMESPACE=argoproj IMAGE_TAG=$VERSION DOCKER_PUSH=true
git push $REPO $VERSION
git tag vX.Y.Z
make release IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z DOCKER_PUSH=true
git push upstream vX.Y.Z
```
Update [GitHub releases](https://github.com/argoproj/argo-cd/releases) with:
* Getting started (copy from previous release)
* Changelog
* Binaries (e.g. `dist/argocd-darwin-amd64`).
If GA, update `stable` tag:
```bash
git tag stable --force && git push $REPO stable --force
```
If GA, update Brew formula:
5. Update argocd brew formula
```bash
git clone https://github.com/argoproj/homebrew-tap
cd homebrew-tap
git checkout master
git pull
./update.sh ~/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64
git commit -am "Update argocd to $VERSION"
git commit -a -m "Update argocd to vX.Y.Z"
git push
```
### Verify
Locally:
```bash
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/$VERSION/manifests/install.yaml
6. Update documentation:
* Edit CHANGELOG.md with release notes
* Update `stable` tag
```
Follow the [Getting Started Guide](../getting_started/).
If GA:
```bash
brew upgrade argocd
/usr/local/bin/argocd version
git tag stable --force && git push upstream stable --force
```
Sync Argo CD in [https://cd.apps.argoproj.io/applications/argo-cd](https://cd.apps.argoproj.io/applications/argo-cd).
Deploy the [site](site.md).
* Create a GitHub release from the new tag and upload binaries (e.g. `dist/argocd-darwin-amd64`)

View File

@@ -13,50 +13,7 @@ Git repository via file url: `file:///tmp/argocd-e2e***`.
You can observe the tests via the UI at [http://localhost:8080/applications](http://localhost:8080/applications).
## Configuration of E2E Tests execution
The Makefile's `start-e2e` target starts instances of Argo CD on your local machine, most of which require a network listener. If for whatever reason you already have network services on your machine listening on the same ports, the e2e tests will not be able to run. You can deviate from the defaults by setting the following environment variables before you run `make start-e2e`:
* `ARGOCD_E2E_APISERVER_PORT`: Listener port for `argocd-server` (default: `8080`)
* `ARGOCD_E2E_REPOSERVER_PORT`: Listener port for `argocd-reposerver` (default: `8081`)
* `ARGOCD_E2E_DEX_PORT`: Listener port for `dex` (default: `5556`)
* `ARGOCD_E2E_REDIS_PORT`: Listener port for `redis` (default: `6379`)
* `ARGOCD_E2E_YARN_CMD`: Command to use for starting the UI via Yarn (default: `yarn`)
If you have changed the port for `argocd-server`, be sure to also set the `ARGOCD_SERVER` environment variable to point to that port, e.g. `export ARGOCD_SERVER=localhost:8888`, before running `make test-e2e` so that the tests communicate with the correct server component.
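For example, a minimal sketch that moves the API server off a busy port 8080 (the port values are illustrative):
```bash
# Use an alternative API server port to avoid a clash on 8080.
export ARGOCD_E2E_APISERVER_PORT=8888
export ARGOCD_SERVER=localhost:8888
make start-e2e
# In another terminal:
make test-e2e
```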
## CI Set-up
The tests are executed by an Argo Workflow defined at `.argo-ci/ci.yaml`. The CI job builds an Argo CD image, deploys the Argo CD components into a throw-away Kubernetes cluster provisioned using k3s, and runs the e2e tests against it.
## Test Isolation
Some effort has been made to balance test isolation with speed. Tests are isolated as follows; each test gets:
* A random 5 character ID.
* A unique Git repository containing the `testdata` in `/tmp/argocd-e2e/${id}`.
* A namespace `argocd-e2e-ns-${id}`.
* A primary name for the app `argocd-e2e-${id}`.
## Troubleshooting
**Tests fail to delete `argocd-e2e-ns-*` namespaces.**
This may be due to the metrics server. Run:
```bash
kubectl api-resources
```
If it exits with status code 1, run:
```bash
kubectl delete apiservice v1beta1.metrics.k8s.io
```
Remove `/spec/finalizers` from the namespace:
```bash
kubectl edit ns argocd-e2e-ns-*
```

View File

@@ -1,12 +1,5 @@
# FAQ
## I've deleted/corrupted my repo and can't delete my app.
Argo CD can't delete an app if it cannot generate manifests. You need to either:
1. Reinstate/fix your repo.
1. Delete the app using `--cascade=false` and then manually delete the resources (see the sketch below).
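A sketch of the second option (the app and resource names are illustrative):
```bash
# Remove the Application object without touching its resources.
argocd app delete my-app --cascade=false
# Then clean up the leftover resources by hand, e.g.:
kubectl -n my-namespace delete deployment my-deployment
```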
## Why is my application still `OutOfSync` immediately after a successful Sync?
See [Diffing](user-guide/diffing.md) documentation for reasons resources can be OutOfSync, and ways to configure
@@ -22,7 +15,7 @@ to return `Progressing` state instead of `Healthy`.
([contour](https://github.com/heptio/contour/issues/403), [traefik](https://github.com/argoproj/argo-cd/issues/968#issuecomment-451082913)) don't update
`status.loadBalancer.ingress` field, which causes the `Ingress` to get stuck in the `Progressing` state forever.
* `StatefulSet` is considered healthy if the value of the `status.updatedReplicas` field matches the `spec.replicas` field. Due to a Kubernetes bug
* `StatufulSet` is considered healthy if value of `status.updatedReplicas` field matches to `spec.replicas` field. Due to Kubernetes bug
[kubernetes/kubernetes#68573](https://github.com/kubernetes/kubernetes/issues/68573), `status.updatedReplicas` is not populated, so unless you run a Kubernetes version which
includes the fix [kubernetes/kubernetes#67570](https://github.com/kubernetes/kubernetes/pull/67570), a `StatefulSet` might stay in the `Progressing` state.
@@ -30,22 +23,9 @@ As workaround Argo CD allows providing [health check](operator-manual/health.md)
## I forgot the admin password, how do I reset it?
By default the password is set to the name of the server pod, as per [the getting started guide](getting_started.md).
To change the password, edit the `argocd-secret` secret and update the `admin.password` field with a new bcrypt hash. You
can use a site like https://www.browserling.com/tools/bcrypt to generate a new hash. For example:
```bash
# bcrypt(Password1!)=$2a$10$hDj12Tw9xVmvybSahN1Y0.f9DZixxN8oybyA32Uy/eqWklFU4Mo8O
kubectl -n argocd patch secret argocd-secret \
-p "{\"data\": \
{\
\"admin.password\": \"$(echo -n '$2a$10$hDj12Tw9xVmvybSahN1Y0.f9DZixxN8oybyA32Uy/eqWklFU4Mo8O' | base64)\", \
\"admin.passwordMtime\": \"$(date +%FT%T%Z | base64)\" \
}}"
```
Another option is to delete both the `admin.password` and `admin.passwordMtime` keys and restart argocd-server. This will set the password back to the pod name as per [the getting started guide](getting_started.md).
Edit the `argocd-secret` secret and update the `admin.password` field with a new bcrypt hash. You
can use a site like https://www.browserling.com/tools/bcrypt to generate a new hash. Another option
is to delete both the `admin.password` and `admin.passwordMtime` keys and restart argocd-server.
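If you would rather not paste a password into a website, a local alternative (assuming `htpasswd` from apache2-utils is installed) is:
```bash
# Print a bcrypt hash of the new password (cost factor 10); the sed normalises
# the $2y$ prefix htpasswd emits to the $2a$ prefix some versions expect.
htpasswd -nbBC 10 "" 'MyNewPassword1!' | tr -d ':\n' | sed 's/$2y/$2a/'
```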
## Argo CD cannot deploy Helm Chart based applications without internet access, how can I solve it?
@@ -76,52 +56,3 @@ KUBECONFIG=/tmp/config kubectl get pods # test connection manually
```
Now you can manually verify that the cluster is accessible from the Argo CD pod.
## How Can I Terminate A Sync?
To terminate the sync, click "synchronisation" and then "terminate":
![Synchronization](assets/synchronization-button.png) ![Terminate](assets/terminate-button.png)
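The CLI can also terminate an in-progress operation; a sketch, assuming an app named `my-app`:
```bash
argocd app terminate-op my-app
```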
## Why Is My App Out Of Sync Even After Syncing?
In some cases, the tool you use may conflict with Argo CD by adding the `app.kubernetes.io/instance` label, e.g. when using the Kustomize common labels feature.
Argo CD automatically sets the `app.kubernetes.io/instance` label and uses it to determine which resources form the app. If the tool does this too, the two conflict. You can change the label Argo CD uses by setting the `application.instanceLabelKey` value in the `argocd-cm` ConfigMap; we recommend `argocd.argoproj.io/instance` (see the sketch after this note).
!!! note
When you make this change, your applications will become out of sync and will need re-syncing.
See [#1482](https://github.com/argoproj/argo-cd/issues/1482).
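A minimal sketch of that change, assuming the default `argocd` namespace:
```bash
# Set a custom tracking label key; existing apps will need re-syncing afterwards.
kubectl -n argocd patch configmap argocd-cm --type merge \
  -p '{"data":{"application.instanceLabelKey":"argocd.argoproj.io/instance"}}'
```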
## How Do I Fix "invalid cookie, longer than max length 4093"?
Argo CD uses a JWT as the auth token. You are likely a member of many groups and have exceeded the 4KB size limit that applies to cookies.
You can get the list of groups by opening "developer tools -> network":
* Click log in
* Find the call to `<argocd_instance>/auth/callback?code=<random_string>`
Decode the token at https://jwt.io/. That will provide the list of groups that you can remove yourself from.
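To decode the payload locally instead, a rough sketch (assumes `jq` is installed; the claim that holds the groups depends on your OIDC provider):
```bash
# $TOKEN holds the JWT copied from the auth callback.
payload=$(echo "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Restore base64 padding before decoding.
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
echo "$payload" | base64 -d | jq '.groups'
```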
See [#2165](https://github.com/argoproj/argo-cd/issues/2165).
## Why Am I Getting `rpc error: code = Unavailable desc = transport is closing` When Using The CLI?
Maybe you're behind a proxy that does not support HTTP/2? Try the `--grpc-web` flag:
```bash
argocd ... --grpc-web
```
## Why Am I Getting `x509: certificate signed by unknown authority` When Using The CLI?
You're not running your server with the correct certs.
If you're not running in a production system (e.g. you're testing Argo CD out), try the `--insecure` flag:
```bash
argocd ... --insecure
```
!!! warning "Do not use `--insecure` in production"

View File

@@ -79,6 +79,14 @@ For additional details, see [architecture overview](operator-manual/architecture
* Prometheus metrics
* Parameter overrides for overriding ksonnet/helm parameters in Git
## Community Blogs And Presentations
* GitOps with Argo CD: [Simplify and Automate Deployments Using GitOps with IBM Multicloud Manager](https://www.ibm.com/blogs/bluemix/2019/02/simplify-and-automate-deployments-using-gitops-with-ibm-multicloud-manager-3-1-2/)
* KubeCon talk: [CI/CD in Light Speed with K8s and Argo CD](https://www.youtube.com/watch?v=OdzH82VpMwI&feature=youtu.be)
* KubeCon talk: [Machine Learning as Code](https://www.youtube.com/watch?v=VXrGp5er1ZE&t=0s&index=135&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU)
* Among other things, describes how Kubeflow uses Argo CD to implement GitOps for ML
* SIG Apps demo: [Argo CD - GitOps Continuous Delivery for Kubernetes](https://www.youtube.com/watch?v=aWDIQMbp1cc&feature=youtu.be&t=1m4s)
## Development Status
Argo CD is actively developed and is being used in production to deploy SaaS services at Intuit.

View File

@@ -19,14 +19,6 @@ spec:
# helm specific config
helm:
# Extra parameters to set (same as setting through values.yaml, but these take precedence)
parameters:
- name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
value: mydomain.example.com
# Release name override (defaults to application name)
releaseName: guestbook
valueFiles:
- values-prod.yaml
@@ -34,35 +26,36 @@ spec:
kustomize:
# Optional image name prefix
namePrefix: prod-
# Optional images passed to "kustomize edit set image".
# Optional image tags passed to "kustomize edit set imagetag" (Kustomize 1 only).
imageTags:
- name: gcr.io/heptio-images/ks-guestbook-demo
value: "0.2"
# Optional images passed to "kustomize edit set image" (Kustomize 2 only).
images:
- gcr.io/heptio-images/ks-guestbook-demo:0.2
# directory
directory:
recurse: true
jsonnet:
# A list of Jsonnet External Variables
extVars:
- name: foo
value: bar
# You can use "code" to determine whether the value is a string (false, the default) or Jsonnet code (if "code" is true).
- code: true
name: baz
value: "true"
# A list of Jsonnet Top-level Arguments
tlas:
- code: false
name: foo
value: bar
jsonnet:
# A list of Jsonnet External Variables
extVars:
- name: foo
value: bar
# You can use "code" to determine whether the value is a string (false, the default) or Jsonnet code (if "code" is true).
- code: true
name: baz
value: "true"
# A list of Jsonnet Top-level Arguments
tlas:
- code: false
name: foo
value: bar
# plugin specific config
plugin:
name: mypluginname
# environment variables passed to the plugin
env:
- name: FOO
value: bar
- name: mypluginname
# Destination cluster and namespace to deploy the application
destination:
@@ -72,8 +65,7 @@ spec:
# Sync policy
syncPolicy:
automated:
prune: true # Specifies if resources should be pruned during auto-syncing (false by default).
selfHeal: true # Specifies if a partial app sync should be executed when resources change only in the target Kubernetes cluster and no git change is detected (false by default).
prune: true
# Ignore differences at the specified json pointers
ignoreDifferences:

View File

@@ -2,27 +2,10 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
data:
# Argo CD's externally facing base URL (optional). Required when configuring SSO
url: https://argo-cd-demo.argoproj.io
# Enables application status badge feature
statusbadge.enabled: 'true'
# Enables anonymous user access. Anonymous users get the default role permissions specified in argocd-rbac-cm.yaml.
users.anonymous.enabled: "true"
# Enables Google Analytics tracking if specified
ga.trackingid: 'UA-12345-1'
# Unless set to 'false', user IDs are hashed before being sent to Google Analytics
ga.anonymizeusers: 'false'
# the URL for getting chat help, this will typically be your Slack channel for support
help.chatUrl: 'https://mycorp.slack.com/argo-cd'
# the text for getting chat help, defaults to "Chat now!"
help.chatText: 'Chat now!'
# A dex connector configuration (optional). See SSO configuration documentation:
# https://github.com/argoproj/argo-cd/blob/master/docs/sso.md
# https://github.com/dexidp/dex/tree/master/Documentation/connectors
@@ -129,9 +112,6 @@ data:
generate:
command: [kasane, show]
# Build options/parameters to use with `kustomize build` (optional)
kustomize.buildOptions: --load_restrictor none
# The metadata.label key name where Argo CD injects the app name as a tracking label (optional).
# Tracking labels are used to determine which resources need to be deleted when pruning.
# If omitted, Argo CD injects the app name into the label: 'app.kubernetes.io/instance'

View File

@@ -2,7 +2,6 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
namespace: argocd
data:
# policy.csv is a file containing user-defined RBAC policies and role definitions (optional).
# Policy rules are in the form:
@@ -22,6 +21,6 @@ data:
policy.default: role:readonly
# scopes controls which OIDC scopes to examine during rbac enforcement (in addition to `sub` scope).
# If omitted, defaults to: '[groups]'. The scope value can be a string, or a list of strings.
scopes: '[cognito:groups, email]'
# If omitted, defaults to: `[groups]`. The scope value can be a string, or a list of strings.
scopes: [cognito:groups, email]

View File

@@ -2,7 +2,6 @@ apiVersion: v1
kind: Secret
metadata:
name: argocd-secret
namespace: argocd
type: Opaque
data:
# TLS certificate and private key for API server (required).

View File

@@ -1,82 +0,0 @@
# Cluster Bootstrapping
This guide is for operators who have already installed Argo CD, have a new cluster, and are looking to install many applications in that cluster.
There's no single pattern for solving this problem; for example, you could write a script to create your applications, or you could even create them manually. However, users of Argo CD tend to use the **application of applications pattern**.
## Application Of Applications Pattern
[Declaratively](declarative-setup.md) specify one Argo CD application that consists only of other applications.
![Application of Applications](../assets/application-of-applications.png)
### Helm Example
This example shows how to use Helm to achieve this. You can, of course, use another tool if you like.
A typical layout of your Git repository for this might be:
```
├── Chart.yaml
├── templates
│   ├── guestbook.yaml
│   ├── helm-dependency.yaml
│   ├── helm-guestbook.yaml
│   └── kustomize-guestbook.yaml
└── values.yaml
```
`Chart.yaml` is boilerplate.
`templates` contains one file for each application, roughly:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: guestbook
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
destination:
namespace: argocd
server: {{ .Values.spec.destination.server }}
project: default
source:
path: guestbook
repoURL: https://github.com/argoproj/argocd-example-apps
targetRevision: {{ .Values.spec.source.targetRevision }}
syncPolicy:
automated:
prune: true
```
In this example, I've set the sync policy to automated + prune, so that applications are automatically created, synced, and deleted when the manifest changes, but you may wish to disable this. I've also added the finalizer, which will ensure that your applications are deleted correctly.
As you probably want to override the cluster server and maybe the revision, these are templated values.
`values.yaml` contains the default values:
```yaml
spec:
destination:
server: https://kubernetes.default.svc
source:
targetRevision: HEAD
```
Finally, you need to create your application, e.g.:
```bash
argocd app create applications \
--dest-namespace argocd \
--dest-server https://kubernetes.default.svc \
--repo https://github.com/argoproj/argocd-example-apps.git \
--path applications \
--sync-policy automated
```
In this example, I excluded auto-prune, as this would result in all applications being deleted if someone accidentally deleted the *application of applications*.
View [the example on Github](https://github.com/argoproj/argocd-example-apps/tree/master/applications).

View File

@@ -8,8 +8,6 @@ Argo CD applications, projects and settings can be defined declaratively using K
| [`argocd-cm.yaml`](argocd-cm.yaml) | ConfigMap | General Argo CD configuration |
| [`argocd-secret.yaml`](argocd-secret.yaml) | Secret | Password, Certificates, Signing Key |
| [`argocd-rbac-cm.yaml`](argocd-rbac-cm.yaml) | ConfigMap | RBAC Configuration |
| [`argocd-tls-certs-cm.yaml`](argocd-rbac-cm.yaml) | ConfigMap | Custom TLS certificates for connecting Git repositories via HTTPS (v1.2 and later) |
| [`argocd-ssh-known-hosts-cm.yaml`](argocd-rbac-cm.yaml) | ConfigMap | SSH known hosts data for connecting Git repositories via SSH (v1.2 and later) |
| [`application.yaml`](application.yaml) | Application | Example application spec |
| [`project.yaml`](project.yaml) | AppProject | Example project spec |
@@ -28,7 +26,6 @@ apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: guestbook
namespace: argocd
spec:
project: default
source:
@@ -42,9 +39,6 @@ spec:
See [application.yaml](application.yaml) for additional fields
!!! note
The namespace must match the namespace of your Argo CD instance; typically this is `argocd`.
!!! warning
By default, deleting an application will not perform a cascade delete, which would also delete its resources. You must add the finalizer if you want this behaviour - which you may well not want.
@@ -54,12 +48,10 @@ metadata:
- resources-finalizer.argocd.argoproj.io
```
### Application of Applications
### App of Apps of Apps
You can create an application that creates other applications, which in turn can create other applications.
This allows you to declaratively manage a group of applications that can be deployed and configured in concert.
See [cluster bootstrapping](cluster-bootstrapping.md).
This allows you to declaratively manage a group of applications that can be deployed and configured in concert.
## Projects
The AppProject CRD is the Kubernetes resource object representing a logical grouping of applications.
@@ -76,7 +68,6 @@ apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
name: my-project
namespace: argocd
spec:
description: Example Project
# Allow manifests to deploy from any Git repos
@@ -133,7 +124,6 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
data:
repositories: |
- url: https://github.com/argoproj/my-private-repository
@@ -152,7 +142,6 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
data:
repositories: |
- url: git@github.com:argoproj/my-private-repository
@@ -165,9 +154,7 @@ data:
The Kubernetes documentation has [instructions for creating a secret containing a private key](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys).
### Repository Credentials
>v1.1
### Repository Credentials (v1.1+)
If you want to use the same credentials for multiple repositories, you can use `repository.credentials`:
@@ -176,7 +163,6 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
data:
repositories: |
- url: https://github.com/argoproj/private-repo
@@ -205,10 +191,6 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
repositories: |
# this has its own credentials
@@ -249,96 +231,6 @@ data:
key: sshPrivateKey
```
### Repositories using self-signed TLS certificates (or are signed by custom CA)
> v1.2 or later
You can manage the TLS certificates used to verify the authenticity of your repository servers in a ConfigMap object named `argocd-tls-certs-cm`. The data section should contain a map, with the repository server's hostname part (not the complete URL) as key, and the certificate(s) in PEM format as data. So, if you connect to a repository with the URL `https://server.example.com/repos/my-repo`, you should use `server.example.com` as key. The certificate data should be either the server's certificate (in case of a self-signed certificate) or the certificate of the CA that was used to sign the server's certificate. You can configure multiple certificates for each server, e.g. if you have a certificate roll-over planned.
If there are no dedicated certificates configured for a repository server, the system's default trust store is used to validate the server's certificate. This should be good enough for most (if not all) public Git repository services such as GitLab, GitHub and Bitbucket, as well as most privately hosted sites which use certificates from well-known CAs, including Let's Encrypt certificates.
An example ConfigMap object:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-tls-certs-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
server.example.com: |
-----BEGIN CERTIFICATE-----
MIIF1zCCA7+gAwIBAgIUQdTcSHY2Sxd3Tq/v1eIEZPCNbOowDQYJKoZIhvcNAQEL
BQAwezELMAkGA1UEBhMCREUxFTATBgNVBAgMDExvd2VyIFNheG9ueTEQMA4GA1UE
BwwHSGFub3ZlcjEVMBMGA1UECgwMVGVzdGluZyBDb3JwMRIwEAYDVQQLDAlUZXN0
c3VpdGUxGDAWBgNVBAMMD2Jhci5leGFtcGxlLmNvbTAeFw0xOTA3MDgxMzU2MTda
Fw0yMDA3MDcxMzU2MTdaMHsxCzAJBgNVBAYTAkRFMRUwEwYDVQQIDAxMb3dlciBT
YXhvbnkxEDAOBgNVBAcMB0hhbm92ZXIxFTATBgNVBAoMDFRlc3RpbmcgQ29ycDES
MBAGA1UECwwJVGVzdHN1aXRlMRgwFgYDVQQDDA9iYXIuZXhhbXBsZS5jb20wggIi
MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCv4mHMdVUcafmaSHVpUM0zZWp5
NFXfboxA4inuOkE8kZlbGSe7wiG9WqLirdr39Ts+WSAFA6oANvbzlu3JrEQ2CHPc
CNQm6diPREFwcDPFCe/eMawbwkQAPVSHPts0UoRxnpZox5pn69ghncBR+jtvx+/u
P6HdwW0qqTvfJnfAF1hBJ4oIk2AXiip5kkIznsAh9W6WRy6nTVCeetmIepDOGe0G
ZJIRn/OfSz7NzKylfDCat2z3EAutyeT/5oXZoWOmGg/8T7pn/pR588GoYYKRQnp+
YilqCPFX+az09EqqK/iHXnkdZ/Z2fCuU+9M/Zhrnlwlygl3RuVBI6xhm/ZsXtL2E
Gxa61lNy6pyx5+hSxHEFEJshXLtioRd702VdLKxEOuYSXKeJDs1x9o6cJ75S6hko
Ml1L4zCU+xEsMcvb1iQ2n7PZdacqhkFRUVVVmJ56th8aYyX7KNX6M9CD+kMpNm6J
kKC1li/Iy+RI138bAvaFplajMF551kt44dSvIoJIbTr1LigudzWPqk31QaZXV/4u
kD1n4p/XMc9HYU/was/CmQBFqmIZedTLTtK7clkuFN6wbwzdo1wmUNgnySQuMacO
gxhHxxzRWxd24uLyk9Px+9U3BfVPaRLiOPaPoC58lyVOykjSgfpgbus7JS69fCq7
bEH4Jatp/10zkco+UQIDAQABo1MwUTAdBgNVHQ4EFgQUjXH6PHi92y4C4hQpey86
r6+x1ewwHwYDVR0jBBgwFoAUjXH6PHi92y4C4hQpey86r6+x1ewwDwYDVR0TAQH/
BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAgEAFE4SdKsX9UsLy+Z0xuHSxhTd0jfn
Iih5mtzb8CDNO5oTw4z0aMeAvpsUvjJ/XjgxnkiRACXh7K9hsG2r+ageRWGevyvx
CaRXFbherV1kTnZw4Y9/pgZTYVWs9jlqFOppz5sStkfjsDQ5lmPJGDii/StENAz2
XmtiPOgfG9Upb0GAJBCuKnrU9bIcT4L20gd2F4Y14ccyjlf8UiUi192IX6yM9OjT
+TuXwZgqnTOq6piVgr+FTSa24qSvaXb5z/mJDLlk23npecTouLg83TNSn3R6fYQr
d/Y9eXuUJ8U7/qTh2Ulz071AO9KzPOmleYPTx4Xty4xAtWi1QE5NHW9/Ajlv5OtO
OnMNWIs7ssDJBsB7VFC8hcwf79jz7kC0xmQqDfw51Xhhk04kla+v+HZcFW2AO9so
6ZdVHHQnIbJa7yQJKZ+hK49IOoBR6JgdB5kymoplLLiuqZSYTcwSBZ72FYTm3iAr
jzvt1hxpxVDmXvRnkhRrIRhK4QgJL0jRmirBjDY+PYYd7bdRIjN7WNZLFsgplnS8
9w6CwG32pRlm0c8kkiQ7FXA6BYCqOsDI8f1VGQv331OpR2Ck+FTv+L7DAmg6l37W
+LB9LGh4OAp68ImTjqf6ioGKG0RBSznwME+r4nXtT1S/qLR6ASWUS4ViWRhbRlNK
XWyb96wrUlv+E8I=
-----END CERTIFICATE-----
```
!!! note
The `argocd-tls-certs-cm` ConfigMap will be mounted as a volume at the mount path `/app/config/tls` in the pods of `argocd-server` and `argocd-repo-server`. It will create a file for each data key in the mount path directory, so the above example would create the file `/app/config/tls/server.example.com`, containing the certificate data. It might take a while for changes in the ConfigMap to be reflected in your pods, depending on your Kubernetes configuration.
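One way to obtain the PEM data for a server is via `openssl` (the hostname is illustrative; verify the certificate out-of-band before trusting it):
```bash
# Print the certificate presented by the repository server in PEM format.
openssl s_client -connect server.example.com:443 -servername server.example.com </dev/null 2>/dev/null \
  | openssl x509 -outform PEM
```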
### SSH known host public keys
If you are connecting repositories via SSH, Argo CD will need to know the SSH known hosts public key of the repository servers. You can manage the SSH known hosts data in the ConfigMap named `argocd-ssh-known-hosts-cm`. This ConfigMap contains a single key/value pair, with `ssh_known_hosts` as the key and the actual public keys of the SSH servers as data. As opposed to the TLS configuration, the public key(s) of every repository server Argo CD will connect to via SSH must be configured; otherwise, the connections to the repository will fail. There is no fallback. The data can be copied from any existing `ssh_known_hosts` file, or from the output of the `ssh-keyscan` utility (a sketch follows the example below). The basic format is `<servername> <keydata>`, one entry per line.
An example ConfigMap object:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-ssh-known-hosts-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
ssh_known_hosts: |
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==
github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
vs-ssh.visualstudio.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
```
!!! note
The `argocd-ssh-known-hosts-cm` ConfigMap will be mounted as a volume at the mount path `/app/config/ssh` in the pods of `argocd-server` and `argocd-repo-server`. It will create a file `ssh_known_hosts` in that directory, which contains the SSH known hosts data used by ArgoCD for connecting to Git repositories via SSH. It might take a while for changes in the ConfigMap to be reflected in your pods, depending on your Kubernetes configuration.
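As mentioned above, `ssh-keyscan` can generate these entries; a sketch (always verify the fingerprints against the provider's published values):
```bash
# Collect host keys for the Git servers you use, then paste the output into the ConfigMap.
ssh-keyscan github.com gitlab.com bitbucket.org 2>/dev/null
```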
## Clusters
@@ -415,7 +307,6 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
data:
helm.repositories: |
- url: https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
@@ -496,7 +387,7 @@ Example of `kustomization.yaml`:
```yaml
bases:
- github.com/argoproj/argo-cd//manifests/cluster-install?ref=v1.0.1
- github.com/argoproj/argo-cd//manifests/cluster-install?ref=v0.10.6
# additional resources like ingress rules, cluster and repository secrets.
resources:

View File

@@ -1,25 +0,0 @@
# Disaster Recovery
You can use `argocd-util` to import and export all Argo CD data.
Make sure you have `~/.kube/config` pointing to your Argo CD cluster.
Figure out what version of Argo CD you're running:
```bash
argocd version | grep server
# ...
export VERSION=v1.0.1
```
Export to a backup:
```bash
docker run -v ~/.kube:/home/argocd/.kube --rm argoproj/argocd:$VERSION argocd-util export > backup.yaml
```
Import from a backup:
```bash
docker run -v ~/.kube:/home/argocd/.kube --rm argoproj/argocd:$VERSION argocd-util import - < backup.yaml
```

View File

@@ -1,19 +0,0 @@
# High Availability
Argo CD is largely stateless: all data is persisted as Kubernetes objects, which in turn are stored in Kubernetes' etcd. Redis is only used as a throw-away cache and can be lost. When lost, it will be rebuilt without loss of service.
A set of HA manifests is provided for users who wish to run Argo CD in a highly available manner. This runs more containers and runs Redis in HA mode.
[Manifests ⧉](https://github.com/argoproj/argo-cd/tree/master/manifests)
!!! note
The HA installation will require at least three different nodes due to pod anti-affinity rules in the specs.
## Scaling Up
You might scale up some Argo CD services in the following circumstances:
* The `argocd-repo-server` can be scaled up when there is too much contention on a single Git repo (e.g. many apps defined in a single Git repo).
* The `argocd-server` can be scaled up to support more front-end load.
All other services should run with their pre-determined number of replicas. The `argocd-application-controller` must not be scaled up, because multiple controllers would fight over the same applications. The `argocd-dex-server` uses an in-memory database, so two or more instances would have inconsistent data. `argocd-redis` is pre-configured with the understanding that there are only three Redis servers/sentinels in total.
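A sketch of scaling the two scalable services (the replica counts are illustrative):
```bash
kubectl -n argocd scale deployment argocd-repo-server --replicas 3
kubectl -n argocd scale deployment argocd-server --replicas 3
```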

Some files were not shown because too many files have changed in this diff.