Compare commits


44 Commits

Author SHA1 Message Date
Alex Collins
e0bd546a07 Update manifests to v1.0.2 2019-06-14 09:46:05 -07:00
Alex Collins
984829fcc8 Merge branch 'release-1.0' of github.com:argoproj/argo-cd into release-1.0 2019-06-14 09:38:08 -07:00
Jesse Suen
c48e27f265 Cluster registration was unintentionally persisting client-cert auth credentials (#1742)
Remove unused CreateClusterFromKubeConfig server method
2019-06-14 04:03:01 -07:00
Alex Collins
5fe1447b72 Update manifests to v1.0.1 2019-05-28 10:08:34 -07:00
Alex Collins
539516bd43 Update manifests to v1.0.1 2019-05-28 10:07:32 -07:00
Alex Collins
8a57d544ff Update manifests to v1.0.1 2019-05-28 09:01:21 -07:00
Alex Collins
cd77e2a048 Update manifests to v1.0.1 2019-05-28 08:58:01 -07:00
Alex Collins
a52f766815 removes file which cannot be compiled 2019-05-24 15:03:49 -07:00
Alex Collins
646fd37e16 Public git creds (#1633) 2019-05-24 15:01:03 -07:00
Alexander Matyushentsev
c74ca22023 Update manifests to v1.0.0 2019-05-16 13:00:40 -07:00
Alexander Matyushentsev
2d170be242 Issue #1471 - Support configuring requested OIDC provider scopes and enforced RBAC scopes (#1585)
* Issue #1471 - Support configuring requested OIDC provider scopes and enforced RBAC scopes

* Apply reviewer notes
2019-05-16 07:35:44 -07:00
Alexander Matyushentsev
079101522d Issue #1533 - Prevent reconciliation loop for self-managed apps (#1608) 2019-05-14 08:05:28 -07:00
Jesse Suen
1bea98e01b Supply resourceVersion to watch request to prevent reading of stale cache (#1612) 2019-05-13 15:01:15 -07:00
Alexander Matyushentsev
7c09221f7c Update manifests to v1.0.0-rc3 2019-05-09 09:52:56 -07:00
Alexander Matyushentsev
6355e910d4 Fix flaky TestGetIngressInfo unit test (#1529) 2019-05-09 09:52:16 -07:00
Alexander Matyushentsev
891e0320d7 Issue #1586 - Ignore patch errors during diffing normalization (#1599) 2019-05-09 09:30:48 -07:00
Alexander Matyushentsev
486323ae58 Issue #1596 - SSH URLs support is partially broken (#1597) 2019-05-09 09:30:42 -07:00
Alexander Matyushentsev
4ef875aa0b Issue #1552 - Improve rendering app image information (#1584) 2019-05-09 09:30:31 -07:00
Alexander Matyushentsev
e756b8db7a Fix ingress browsable url formatting if port is not string (#1576) 2019-05-09 09:28:49 -07:00
Alexander Matyushentsev
8023f8ac8d Issue #1579 - Impossible to sync to HEAD from UI if auto-sync is enabled (#1580) 2019-05-09 09:28:42 -07:00
Alexander Matyushentsev
803408904a Issue #1570 - Application controller is unable to delete self-referenced app (#1574) 2019-05-09 09:28:35 -07:00
Alexander Matyushentsev
702f9095da Issue #1546 - Add liveness probe to repo server/api servers (#1560) 2019-05-09 09:28:30 -07:00
Alexander Matyushentsev
0b9ee1ae6d Issue #1557 - Controller incorrectly reports health state of self-managed application (#1558) 2019-05-09 09:28:19 -07:00
Alexander Matyushentsev
2f003e08ff Issue #1540 - Fix kustomize manifest generation crash if manifest has image without version (#1559) 2019-05-09 09:28:14 -07:00
Paul Brit
e090857d6b Fix hardcoded 'git' user in util/git.NewClient (#1556)
Closes #1555
2019-05-09 09:28:09 -07:00
dthomson25
4e29fff5a3 Improve Rollout health.lua (#1554) 2019-05-09 09:28:03 -07:00
Alexander Matyushentsev
e279377696 Fix invalid URL for ingress without hostname (#1553) 2019-05-01 15:38:40 -07:00
Alexander Matyushentsev
5e52839ce3 Issue #1533 - Prevent reconciliation loop for self-managed apps (#1547) 2019-05-01 10:22:09 -07:00
Alexander Matyushentsev
3ca632a552 Update manifests to v1.0.0-rc2 2019-04-30 13:19:50 -07:00
Alexander Matyushentsev
cfe55357ac Rollout health checks/actions should support v0.2 and v0.2+ versions (#1543) 2019-04-30 13:18:15 -07:00
Alex Collins
0f6d768eca Fixes bug in normalizer (#1542) 2019-04-30 11:32:54 -07:00
Alexander Matyushentsev
75330da328 Ingress resource might get invalid ExternalURL (#1522) (#1523) 2019-04-30 11:14:18 -07:00
Alexander Matyushentsev
a1bcbab0e5 Issue 1476 - Avoid validating repository in application controller (#1535) 2019-04-30 11:10:06 -07:00
Alexander Matyushentsev
db9272032a Issue #1414 - Load target resource using K8S if conversion fails (#1527) 2019-04-30 11:10:02 -07:00
Alexander Matyushentsev
d6d6c655ff Issue #1476 - Add repo server grpc call timeout (#1528) 2019-04-30 11:09:58 -07:00
Alex Collins
58acc92790 Adds support for configuring repo creds at a domain/org level. Closes… (#1496) 2019-04-30 11:09:53 -07:00
Simon Behar
c3074c0977 Whitelisting of resources (#1509)
* Added whitelisting of resources
2019-04-30 11:09:26 -07:00
Simon Behar
af254f3047 Added ability to sync specific labels from the command line (#1501)
* Finished initial implementation

* Added tests and fix a few bugs
2019-04-30 11:09:16 -07:00
Alex Collins
c140976eeb Updates Makefile 2019-04-24 10:52:53 -07:00
Alex Collins
05c22d4ddc Updates VERSION 2019-04-24 10:51:50 -07:00
Alex Collins
3e08938a20 Updates VERSION 2019-04-24 10:49:30 -07:00
Alex Collins
5937bb574d Update manifests to v1.0.0-rc1 2019-04-24 10:48:24 -07:00
Alex Collins
ded55b26d1 Update manifests to v1.0.0-rc1 2019-04-24 10:46:23 -07:00
Alex Collins
d79ed65de0 Updated CHANGELOG.md 2019-04-24 10:35:04 -07:00
388 changed files with 4210 additions and 29020 deletions

View File

@@ -1,356 +0,0 @@
version: 2.1
commands:
before:
steps:
- restore_go_cache
- install_golang
- install_tools
- clean_checkout
- configure_git
- install_go_deps
- dep_ensure
configure_git:
steps:
- run:
name: Configure Git
command: |
set -x
# must be configured for tests to run
git config --global user.email you@example.com
git config --global user.name "Your Name"
echo "export PATH=/home/circleci/.go_workspace/src/github.com/argoproj/argo-cd/hack:\$PATH" | tee -a $BASH_ENV
echo "export GIT_ASKPASS=git-ask-pass.sh" | tee -a $BASH_ENV
- run:
name: Make sure we can clone out the test private repo
command: |
set -x
export GIT_USERNAME=blah
export GIT_PASSWORD=B5sBDeoqAVUouoHkrovy
git-ask-pass.sh Username
git-ask-pass.sh Password
git clone https://gitlab.com/argo-cd-test/test-apps.git /tmp/test-apps
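The `GIT_ASKPASS` hook configured above lets git pull credentials from the environment instead of prompting interactively; git invokes the program once per prompt, passing the prompt text as its first argument. The actual `git-ask-pass.sh` lives in `hack/` and is not shown in this diff; a minimal sketch of how such a helper typically looks:

```sh
#!/bin/sh
# hypothetical askpass helper -- the real hack/git-ask-pass.sh may differ.
# git calls this with "Username for ..." or "Password for ..." as $1.
case "$1" in
  Username*) echo "${GIT_USERNAME}" ;;
  Password*) echo "${GIT_PASSWORD}" ;;
esac
```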
clean_checkout:
steps:
- run:
name: Remove checked out code
command: rm -Rf /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
- checkout
install_go_deps:
steps:
- run:
name: Install Go deps
command: |
set -x
go get github.com/gobuffalo/packr/packr
go get github.com/gogo/protobuf/gogoproto
go get github.com/golang/protobuf/protoc-gen-go
go get github.com/golangci/golangci-lint/cmd/golangci-lint
go get github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
go get github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
go get github.com/jstemmer/go-junit-report
go get github.com/mattn/goreman
go get golang.org/x/tools/cmd/goimports
dep_ensure:
steps:
- restore_cache:
keys:
- vendor-v4-{{ checksum "Gopkg.lock" }}
- run:
name: Run dep ensure
command: dep ensure -v
- save_cache:
key: vendor-v4-{{ checksum "Gopkg.lock" }}
paths:
- vendor
install_golang:
steps:
- run:
name: Install Golang v1.11.4
command: |
go get golang.org/dl/go1.11.4
[ -e /home/circleci/sdk/go1.11.4 ] || go1.11.4 download
echo "export GOPATH=/home/circleci/.go_workspace" | tee -a $BASH_ENV
echo "export PATH=/home/circleci/sdk/go1.11.4/bin:\$PATH" | tee -a $BASH_ENV
- run:
name: Golang diagnostics
command: |
env
which go
go version
go env
install_tools:
steps:
- run:
name: Create downloads dir
command: mkdir -p /tmp/dl
- restore_cache:
keys:
- dl-v4
- dl-v3
- run:
name: Install JQ v1.6
command: |
set -x
[ -e /tmp/dl/jq ] || curl -sLf -C - -o /tmp/dl/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
sudo cp /tmp/dl/jq /usr/local/bin/jq
sudo chmod +x /usr/local/bin/jq
jq --version
- run:
name: Install Kubectx v0.6.3
command: |
set -x
[ -e /tmp/dl/kubectx.zip ] || curl -sLf -C - -o /tmp/dl/kubectx.zip https://github.com/ahmetb/kubectx/archive/v0.6.3.zip
sudo unzip /tmp/dl/kubectx.zip kubectx-0.6.3/kubectx
sudo unzip /tmp/dl/kubectx.zip kubectx-0.6.3/kubens
sudo mv kubectx-0.6.3/kubectx /usr/local/bin/
sudo mv kubectx-0.6.3/kubens /usr/local/bin/
sudo chmod +x /usr/local/bin/kubectx
sudo chmod +x /usr/local/bin/kubens
- run:
name: Install Dep v0.5.3
command: |
set -x
[ -e /tmp/dl/dep ] || curl -sLf -C - -o /tmp/dl/dep https://github.com/golang/dep/releases/download/v0.5.3/dep-linux-amd64
sudo cp /tmp/dl/dep /usr/local/go/bin/dep
sudo chmod +x /usr/local/go/bin/dep
dep version
- run:
name: Install Go Swagger v0.19.0
command: |
set -x
[ -e /tmp/dl/swagger ] || curl -sLf -C - -o /tmp/dl/swagger https://github.com/go-swagger/go-swagger/releases/download/v0.19.0/swagger_linux_amd64
sudo cp /tmp/dl/swagger /usr/local/bin/swagger
sudo chmod +x /usr/local/bin/swagger
swagger version
- run:
name: Install Ksonnet v0.13.1
command: |
set -x
[ -e /tmp/dl/ks.tar.gz ] || curl -sLf -C - -o /tmp/dl/ks.tar.gz https://github.com/ksonnet/ksonnet/releases/download/v0.13.1/ks_0.13.1_linux_amd64.tar.gz
tar -C /tmp -xf /tmp/dl/ks.tar.gz
sudo cp /tmp/ks_0.13.1_linux_amd64/ks /usr/local/go/bin/ks
sudo chmod +x /usr/local/go/bin/ks
ks version
- run:
name: Install Helm v2.13.1
command: |
set -x
[ -e /tmp/dl/helm.tar.gz ] || curl -sLf -C - -o /tmp/dl/helm.tar.gz https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
tar -C /tmp/ -xf /tmp/dl/helm.tar.gz
sudo cp /tmp/linux-amd64/helm /usr/local/go/bin/helm
helm version --client
helm init --client-only
- run:
name: Install Kustomize v1.0.11
command: |
set -x
[ -e /tmp/dl/kustomize1 ] || curl -sLf -C - -o /tmp/dl/kustomize1 https://github.com/kubernetes-sigs/kustomize/releases/download/v1.0.11/kustomize_1.0.11_linux_amd64
sudo cp /tmp/dl/kustomize1 /usr/local/go/bin/
sudo chmod +x /usr/local/go/bin/kustomize1
kustomize1 version
- run:
name: Install Kustomize v2.0.3
command: |
set -x
[ -e /tmp/dl/kustomize ] || curl -sLf -C - -o /tmp/dl/kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v2.0.3/kustomize_2.0.3_linux_amd64
sudo cp /tmp/dl/kustomize /usr/local/go/bin/
sudo chmod +x /usr/local/go/bin/kustomize
kustomize version
- run:
name: Install Protobuf compiler v3.7.1
command: |
set -x
[ -e /tmp/dl/protoc.zip ] || curl -sLf -C - -o /tmp/dl/protoc.zip https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protoc-3.7.1-linux-x86_64.zip
sudo unzip /tmp/dl/protoc.zip bin/protoc -d /usr/local/
sudo chmod +x /usr/local/bin/protoc
sudo unzip /tmp/dl/protoc.zip include/* -d /usr/local/
protoc --version
- save_cache:
key: dl-v4
paths:
- /tmp/dl
save_go_cache:
steps:
- save_cache:
key: go-v15-{{ .Branch }}
paths:
- /home/circleci/.go_workspace
- /home/circleci/.cache/go-build
- /home/circleci/sdk/go1.11.4
restore_go_cache:
steps:
- restore_cache:
keys:
- go-v15-{{ .Branch }}
- go-v15-master
jobs:
build:
working_directory: /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
machine:
image: circleci/classic:201808-01
steps:
- before
- run:
name: Run unit tests
command: |
set -x
mkdir -p /tmp/test-results
trap "go-junit-report </tmp/test-results/go-test.out > /tmp/test-results/go-test-report.xml" EXIT
make test | tee /tmp/test-results/go-test.out
- save_go_cache
- run:
name: Uploading code coverage
command: bash <(curl -s https://codecov.io/bash) -f coverage.out
# This takes 2m, let's background it.
background: true
- store_test_results:
path: /tmp/test-results
- run:
name: Generate code
command: make codegen
- run:
name: Lint code
# use GOGC to limit memory usage in exchange for CPU usage, https://github.com/golangci/golangci-lint#memory-usage-of-golangci-lint
# we have 8GB RAM, 2CPUs https://circleci.com/docs/2.0/executor-types/#using-machine
command: LINT_GOGC=50 LINT_CONCURRENCY=2 make lint
- run:
name: Check nothing has changed
command: |
set -xo pipefail
# This makes sure you ran `make pre-commit` before you pushed.
# We exclude the Swagger resources; CircleCI doesn't generate them correctly.
# When this fails, it will create a patch file you can apply locally to fix it (see the sketch below).
# To troubleshoot builds: https://argoproj.github.io/argo-cd/developer-guide/ci/
git diff --exit-code -- . ':!Gopkg.lock' ':!assets/swagger.json' ':!pkg/apis/api-rules/violation_exceptions.list' ':!pkg/apis/application/v1alpha1/openapi_generated.go' | tee codegen.patch
- store_artifacts:
path: codegen.patch
when: always
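A hypothetical recovery flow when that check fails, assuming you have downloaded `codegen.patch` from the build's artifacts:

```sh
# re-apply the generated changes the CI produced, or regenerate locally
git apply codegen.patch   # from the stored CI artifact above
make codegen              # alternatively, regenerate from scratch
git commit -am "Run make codegen"
```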
e2e:
working_directory: /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
machine:
image: circleci/classic:201808-01
steps:
- run:
name: Install and start K3S v0.5.0
command: |
curl -sfL https://get.k3s.io | sh -
sudo chmod -R a+rw /etc/rancher/k3s
kubectl version
background: true
environment:
INSTALL_K3S_EXEC: --docker
INSTALL_K3S_VERSION: v0.5.0
- before
- run:
# do this before we build everything else in the background, as they tend to explode
name: Make CLI
command: |
set -x
make cli
# must be added to path for tests
echo export PATH="`pwd`/dist:\$PATH" | tee -a $BASH_ENV
- run:
name: Create namespace
command: |
set -x
kubectl create ns argocd-e2e
kubens argocd-e2e
# install the certificates (not 100% sure we need this)
sudo cp /var/lib/rancher/k3s/server/tls/token-ca.crt /usr/local/share/ca-certificates/k3s.crt
sudo update-ca-certificates
# create the kubecfg, again - not sure we need this
cat /etc/rancher/k3s/k3s.yaml | sed "s/localhost/`hostname`/" | tee ~/.kube/config
echo "127.0.0.1 `hostname`" | sudo tee -a /etc/hosts
- run:
name: Apply manifests
command: kustomize build test/manifests/base | kubectl apply -f -
- run:
name: Start Redis
command: docker run --rm --name argocd-redis -i -p 6379:6379 redis:5.0.3-alpine --save "" --appendonly no
background: true
- run:
name: Start repo server
command: go run ./cmd/argocd-repo-server/main.go --loglevel debug --redis localhost:6379
background: true
environment:
# pft. if you do not quote "true", CircleCI turns it into "1", stoopid
ARGOCD_FAKE_IN_CLUSTER: "true"
- run:
name: Start API server
command: go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:6379 --insecure --dex-server http://localhost:5556 --repo-server localhost:8081 --staticassets ../argo-cd-ui/dist/app
background: true
environment:
ARGOCD_FAKE_IN_CLUSTER: "true"
- run:
name: Wait for API server
command: |
set -x
until curl -v http://localhost:8080/healthz; do sleep 3; done
- run:
name: Start controller
command: go run ./cmd/argocd-application-controller/main.go --loglevel debug --redis localhost:6379 --repo-server localhost:8081 --kubeconfig ~/.kube/config
background: true
environment:
ARGOCD_FAKE_IN_CLUSTER: "true"
- run:
name: Smoke test
command: |
set -x
argocd login localhost:8080 --plaintext --username admin --password password
argocd app create guestbook --dest-namespace default --dest-server https://kubernetes.default.svc --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook
argocd app sync guestbook
argocd app delete guestbook
- run:
name: Run e2e tests
command: |
set -x
mkdir -p /tmp/test-results
trap "go-junit-report </tmp/test-results/go-e2e.out > /tmp/test-results/go-e2e-report.xml" EXIT
make test-e2e | tee /tmp/test-results/go-e2e.out
environment:
ARGOCD_OPTS: "--server localhost:8080 --plaintext"
ARGOCD_E2E_EXPECT_TIMEOUT: "30"
ARGOCD_E2E_K3S: "true"
- store_test_results:
path: /tmp/test-results
ui:
# note that we checkout the code in ~/argo-cd/, but then work in ~/argo-cd/ui
working_directory: ~/argo-cd/ui
docker:
- image: node:11.15.0
steps:
- checkout:
path: ~/argo-cd/
- restore_cache:
name: Restore Yarn Package Cache
keys:
- yarn-packages-v3-{{ checksum "yarn.lock" }}
- run:
name: Install
command:
yarn install --frozen-lockfile --ignore-optional --non-interactive
- save_cache:
name: Save Yarn Package Cache
key: yarn-packages-v3-{{ checksum "yarn.lock" }}
paths:
- ~/.cache/yarn
- node_modules
- run:
name: Test
command: yarn test
# This does not appear to work, and I don't want to spend time on it.
- store_test_results:
path: junit.xml
- run:
name: Lint
command: yarn lint
workflows:
version: 2
workflow:
jobs:
- build
- e2e
- ui:
# this isn't strictly true, we just put it here so that we use 2/4 executors rather than 3/4
requires:
- build

View File

@@ -5,12 +5,3 @@ ignore:
- "pkg/apis/.*"
- "pkg/client/.*"
- "test/.*"
coverage:
status:
# allow test coverage to drop by 0.1%, assume that it's typically due to CI problems
patch:
default:
threshold: 0.1
project:
default:
threshold: 0.1

View File

@@ -10,4 +10,3 @@ dist/
cmd/**/debug
debug.test
coverage.out
ui/node_modules/

View File

@@ -11,7 +11,7 @@ assignees: ''
A clear and concise description of what the bug is.
**To Reproduce**
If we cannot reproduce, we cannot fix! Steps to reproduce the behavior:
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
@@ -23,21 +23,5 @@ A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Version**
```shell
Paste the output from `argocd version` here.
```
**Logs**
```
Paste any relevant application logs here.
```
**Have you thought about contributing a fix yourself?**
Open Source software thrives with your contribution. It not only gives you skills you might not be able to get in your day job, it also looks amazing on your resume.
If you want to get involved, check out the
[contributing guide](https://github.com/argoproj/argo-cd/blob/master/docs/CONTRIBUTING.md), then reach out to us on [Slack](https://argoproj.github.io/community/join-slack) so we can see how to get you started.
**Additional context**
Add any other context about the problem here.

View File

@@ -13,9 +13,8 @@ A clear and concise description of what the problem is. Ex. I'm always frustrate
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Have you thought about contributing yourself?**
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
Open Source software thrives with your contribution. It not only gives you skills you might not be able to get in your day job, it also looks amazing on your resume.
If you want to get involved, check out the
[contributing guide](https://github.com/argoproj/argo-cd/blob/master/docs/CONTRIBUTING.md), then reach out to us on [Slack](https://argoproj.github.io/community/join-slack) so we can see how to get you started.
**Additional context**
Add any other context or screenshots about the feature request here.

View File

@@ -1,7 +0,0 @@
<!--
Thank you for submitting your PR!
We'd love your organisation to be listed in the [README](https://github.com/argoproj/argo-cd). Don't forget to add it if you can!
To troubleshoot builds: https://argoproj.github.io/argo-cd/developer-guide/ci/
-->

View File

@@ -1,5 +1,5 @@
run:
deadline: 2m
deadline: 8m
skip-files:
- ".*\\.pb\\.go"
skip-dirs:

View File

@@ -1,26 +1,22 @@
# Changelog
## v1.0.0 (2019-05-16)
## v1.0.0
### New Features
#### Network View
A new way to visualize application resources has been introduced to the Application Details page. The Network View visualizes connections between Ingresses, Services and Pods
based on the ingress's referenced service, the service's label selectors, and labels. The new view is useful for understanding the application traffic flow and troubleshooting connectivity issues.
TODO
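The linkage the view draws can also be traced by hand; a rough kubectl sketch, with `my-ingress`, `my-service`, and the `app=my-app` label as stand-in names:

```sh
# ingress -> service: which backend service the ingress routes to
kubectl get ingress my-ingress \
  -o jsonpath='{.spec.rules[0].http.paths[0].backend.serviceName}'
# service -> pods: the label selector the service matches on
kubectl get service my-service -o jsonpath='{.spec.selector}'
# the pods carrying that label
kubectl get pods -l app=my-app
```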
#### Custom Actions
Argo CD introduces Custom Resource Actions to allow users to provide their own Lua scripts to modify existing Kubernetes resources in their applications. These actions are exposed in the UI to allow easy, safe, and reliable changes to their resources. This can be used, for example, to suspend and enable a Kubernetes CronJob, continue a BlueGreen deployment with Argo Rollouts, or scale a Deployment.
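The paragraph above describes the UI path; later CLI releases expose the same actions under `argocd app actions`. A hypothetical invocation, assuming an app named `guestbook` whose CronJob resources define a `suspend` action:

```sh
# list the custom actions available on the app's resources (assumed CLI shape)
argocd app actions list guestbook
# run one against resources of a given kind
argocd app actions run guestbook suspend --kind CronJob
```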
#### UI Enhancements & Usability Enhancements
#### UI Enhancements
* New color palette intended to highlight unhealthy and out-of-sync resources more clearly.
* The health of more resources is displayed, so it is easier to quickly zoom to unhealthy pods, replica-sets, etc.
* Resources that do not have health no longer appear to be healthy.
* Support for configuring Git repo credentials at a domain/org level
* Support for configuring requested OIDC provider scopes and enforced RBAC scopes
* Support for configuring monitored resources whitelist in addition to excluded resources
### Breaking Changes
@@ -52,12 +48,6 @@ Argo CD introduces Custom Resource Actions to allow users to provide their own L
* UI Enhancement Proposals Quick Wins #1274
* Update argocd-util import/export to support proper backup and restore (#1328)
* Whitelisting repos/clusters in projects should consider repo/cluster permissions #1432
* Adds support for configuring repo creds at a domain/org level. (#1332)
* Implement whitelist option analogous to `resource.exclusions` (#1490)
* Added ability to sync specific labels from the command line (#1241)
* Improve rendering app image information (#1552)
* Add liveness probe to repo server/api servers (#1546)
* Support configuring requested OIDC provider scopes and enforced RBAC scopes (#1471)
#### Bug Fixes
@@ -71,17 +61,7 @@ Argo CD introduces Custom Resource Actions to allow users to provide their own L
- Rollback UI is not showing correct ksonnet parameters in preview #1326
- See details of applications fails with "r.nodes is undefined" #1371
- UI fails to load custom actions if resource is not deployed #1502
- Unable to create app from private repo: x509: certificate signed by unknown authority (#1171)
- Fix hardcoded 'git' user in `util/git.NewClient` (#1555)
- Application controller becomes unresponsive (#1476)
- Load target resource using K8S if conversion fails (#1414)
- Can't ignore a non-existent pointer anymore (#1586)
- Impossible to sync to HEAD from UI if auto-sync is enabled (#1579)
- Application controller is unable to delete self-referenced app (#1570)
- Prevent reconciliation loop for self-managed apps (#1533)
- Controller incorrectly reports health state of self-managed application (#1557)
- Fix kustomize manifest generation crash if manifest has image without version (#1540)
- Supply resourceVersion to watch request to prevent reading of stale cache (#1605)
- Unable to create app from private repo: x509: certificate signed by unknown authority #1171
## v0.12.2 (2019-04-22)

View File

@@ -1,4 +1,3 @@
ARG BASE_IMAGE=debian:9.5-slim
####################################################################################################
# Builder image
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
@@ -94,9 +93,7 @@ RUN cd ${GOPATH}/src/dummy && \
####################################################################################################
# Argo CD Base - used as the base for both the release and dev argocd images
####################################################################################################
FROM $BASE_IMAGE as argocd-base
USER root
FROM debian:9.5-slim as argocd-base
RUN groupadd -g 999 argocd && \
useradd -r -u 999 -g argocd argocd && \

Gopkg.lock generated
View File

@@ -70,15 +70,14 @@
[[projects]]
branch = "master"
digest = "1:4f6afcf4ebe041b3d4aa7926d09344b48d2f588e1f957526bbbe54f9cbb366a1"
digest = "1:e8ec0abbf32fdcc9f7eb14c0656c1d0fc2fc7ec8f60dff4b7ac080c50afd8e49"
name = "github.com/argoproj/pkg"
packages = [
"exec",
"rand",
"time",
]
pruneopts = ""
revision = "38dba6e98495680ff1f8225642b63db10a96bb06"
revision = "88ab0e836a8e8c70bc297c5764669bd7da27afd1"
[[projects]]
digest = "1:d8a2bb36a048d1571bcc1aee208b61f39dc16c6c53823feffd37449dde162507"
@@ -418,6 +417,14 @@
revision = "aa810b61a9c79d51363740d207bb46cf8e620ed5"
version = "v1.2.0"
[[projects]]
branch = "master"
digest = "1:1e5b1e14524ed08301977b7b8e10c719ed853cbf3f24ecb66fae783a46f207a6"
name = "github.com/google/btree"
packages = ["."]
pruneopts = ""
revision = "4030bb1f1f0c35b30ca7009e9ebd06849dd45306"
[[projects]]
digest = "1:14d826ee25139b4674e9768ac287a135f4e7c14e1134a5b15e4e152edfd49f41"
name = "github.com/google/go-jsonnet"
@@ -465,6 +472,17 @@
revision = "66b9c49e59c6c48f0ffce28c2d8b8a5678502c6d"
version = "v1.4.0"
[[projects]]
branch = "master"
digest = "1:009a1928b8c096338b68b5822d838a72b4d8520715c1463614476359f3282ec8"
name = "github.com/gregjones/httpcache"
packages = [
".",
"diskcache",
]
pruneopts = ""
revision = "9cad4c3443a7200dd6400aef47183728de563a38"
[[projects]]
branch = "master"
digest = "1:9dca8c981b8aed7448d94e78bc68a76784867a38b3036d5aabc0b32d92ffd1f4"
@@ -662,6 +680,22 @@
revision = "c37440a7cf42ac63b919c752ca73a85067e05992"
version = "v0.2.0"
[[projects]]
branch = "master"
digest = "1:c24598ffeadd2762552269271b3b1510df2d83ee6696c1e543a0ff653af494bc"
name = "github.com/petar/GoLLRB"
packages = ["llrb"]
pruneopts = ""
revision = "53be0d36a84c2a886ca057d34b6aa4468df9ccb4"
[[projects]]
digest = "1:b46305723171710475f2dd37547edd57b67b9de9f2a6267cafdd98331fd6897f"
name = "github.com/peterbourgon/diskv"
packages = ["."]
pruneopts = ""
revision = "5f041e8faa004a95c88a202771f4cc3e991971e6"
version = "v2.0.1"
[[projects]]
digest = "1:7365acd48986e205ccb8652cc746f09c8b7876030d53710ea6ef7d0bd0dcd7ca"
name = "github.com/pkg/errors"
@@ -999,39 +1033,6 @@
pruneopts = ""
revision = "5e776fee60db37e560cee3fb46db699d2f095386"
[[projects]]
branch = "master"
digest = "1:e9e4b928898842a138bc345d42aae33741baa6d64f3ca69b0931f9c7a4fd0437"
name = "gonum.org/v1/gonum"
packages = [
"blas",
"blas/blas64",
"blas/cblas128",
"blas/gonum",
"floats",
"graph",
"graph/internal/linear",
"graph/internal/ordered",
"graph/internal/set",
"graph/internal/uid",
"graph/iterator",
"graph/simple",
"graph/topo",
"graph/traverse",
"internal/asm/c128",
"internal/asm/c64",
"internal/asm/f32",
"internal/asm/f64",
"internal/cmplx64",
"internal/math32",
"lapack",
"lapack/gonum",
"lapack/lapack64",
"mat",
]
pruneopts = ""
revision = "90b7154515874cee6c33cf56b29e257403a09a69"
[[projects]]
digest = "1:934fb8966f303ede63aa405e2c8d7f0a427a05ea8df335dfdc1833dd4d40756f"
name = "google.golang.org/appengine"
@@ -1224,16 +1225,16 @@
version = "v2.2.2"
[[projects]]
branch = "release-1.14"
digest = "1:d8a6f1ec98713e685346a2e4b46c6ec4a1792a5535f8b0dffe3b1c08c9d69b12"
branch = "release-1.12"
digest = "1:3e3e9df293bd6f9fd64effc9fa1f0edcd97e6c74145cd9ab05d35719004dc41f"
name = "k8s.io/api"
packages = [
"admission/v1beta1",
"admissionregistration/v1alpha1",
"admissionregistration/v1beta1",
"apps/v1",
"apps/v1beta1",
"apps/v1beta2",
"auditregistration/v1alpha1",
"authentication/v1",
"authentication/v1beta1",
"authorization/v1",
@@ -1245,21 +1246,16 @@
"batch/v1beta1",
"batch/v2alpha1",
"certificates/v1beta1",
"coordination/v1",
"coordination/v1beta1",
"core/v1",
"events/v1beta1",
"extensions/v1beta1",
"imagepolicy/v1alpha1",
"networking/v1",
"networking/v1beta1",
"node/v1alpha1",
"node/v1beta1",
"policy/v1beta1",
"rbac/v1",
"rbac/v1alpha1",
"rbac/v1beta1",
"scheduling/v1",
"scheduling/v1alpha1",
"scheduling/v1beta1",
"settings/v1alpha1",
@@ -1268,7 +1264,7 @@
"storage/v1beta1",
]
pruneopts = ""
revision = "40a48860b5abbba9aa891b02b32da429b08d96a0"
revision = "6db15a15d2d3874a6c3ddb2140ac9f3bc7058428"
[[projects]]
branch = "master"
@@ -1277,13 +1273,14 @@
packages = [
"pkg/apis/apiextensions",
"pkg/apis/apiextensions/v1beta1",
"pkg/features",
]
pruneopts = ""
revision = "7f7d2b94eca3a7a1c49840e119a8bc03c3afb1e3"
[[projects]]
branch = "release-1.14"
digest = "1:a802c91b189a31200cfb66744441fe62dac961ec7c5c58c47716570de7da195c"
branch = "release-1.12"
digest = "1:5899da40e41bcc8c1df101b72954096bba9d85b763bc17efc846062ccc111c7b"
name = "k8s.io/apimachinery"
packages = [
"pkg/api/equality",
@@ -1335,11 +1332,22 @@
"third_party/forked/golang/reflect",
]
pruneopts = ""
revision = "6a84e37a896db9780c75367af8d2ed2bb944022e"
revision = "f71dbbc36e126f5a371b85f6cca96bc8c57db2b6"
[[projects]]
branch = "release-11.0"
digest = "1:794140b3ac07405646ea3d4a57e1f6155186e672aed8aa0c996779381cd92fe6"
branch = "release-1.12"
digest = "1:b2c55ff9df6d053e40094b943f949c257c3f7dcdbb035c11487c93c96df9eade"
name = "k8s.io/apiserver"
packages = [
"pkg/features",
"pkg/util/feature",
]
pruneopts = ""
revision = "5e1c1f41ee34b3bb153f928f8c91c2a6dd9482a9"
[[projects]]
branch = "release-9.0"
digest = "1:77bf3d9f18ec82e08ac6c4c7e2d9d1a2ef8d16b25d3ff72fcefcf9256d751573"
name = "k8s.io/client-go"
packages = [
"discovery",
@@ -1351,6 +1359,8 @@
"kubernetes",
"kubernetes/fake",
"kubernetes/scheme",
"kubernetes/typed/admissionregistration/v1alpha1",
"kubernetes/typed/admissionregistration/v1alpha1/fake",
"kubernetes/typed/admissionregistration/v1beta1",
"kubernetes/typed/admissionregistration/v1beta1/fake",
"kubernetes/typed/apps/v1",
@@ -1359,8 +1369,6 @@
"kubernetes/typed/apps/v1beta1/fake",
"kubernetes/typed/apps/v1beta2",
"kubernetes/typed/apps/v1beta2/fake",
"kubernetes/typed/auditregistration/v1alpha1",
"kubernetes/typed/auditregistration/v1alpha1/fake",
"kubernetes/typed/authentication/v1",
"kubernetes/typed/authentication/v1/fake",
"kubernetes/typed/authentication/v1beta1",
@@ -1383,8 +1391,6 @@
"kubernetes/typed/batch/v2alpha1/fake",
"kubernetes/typed/certificates/v1beta1",
"kubernetes/typed/certificates/v1beta1/fake",
"kubernetes/typed/coordination/v1",
"kubernetes/typed/coordination/v1/fake",
"kubernetes/typed/coordination/v1beta1",
"kubernetes/typed/coordination/v1beta1/fake",
"kubernetes/typed/core/v1",
@@ -1395,12 +1401,6 @@
"kubernetes/typed/extensions/v1beta1/fake",
"kubernetes/typed/networking/v1",
"kubernetes/typed/networking/v1/fake",
"kubernetes/typed/networking/v1beta1",
"kubernetes/typed/networking/v1beta1/fake",
"kubernetes/typed/node/v1alpha1",
"kubernetes/typed/node/v1alpha1/fake",
"kubernetes/typed/node/v1beta1",
"kubernetes/typed/node/v1beta1/fake",
"kubernetes/typed/policy/v1beta1",
"kubernetes/typed/policy/v1beta1/fake",
"kubernetes/typed/rbac/v1",
@@ -1409,8 +1409,6 @@
"kubernetes/typed/rbac/v1alpha1/fake",
"kubernetes/typed/rbac/v1beta1",
"kubernetes/typed/rbac/v1beta1/fake",
"kubernetes/typed/scheduling/v1",
"kubernetes/typed/scheduling/v1/fake",
"kubernetes/typed/scheduling/v1alpha1",
"kubernetes/typed/scheduling/v1alpha1/fake",
"kubernetes/typed/scheduling/v1beta1",
@@ -1447,22 +1445,23 @@
"tools/remotecommand",
"transport",
"transport/spdy",
"util/buffer",
"util/cert",
"util/connrotation",
"util/exec",
"util/flowcontrol",
"util/homedir",
"util/integer",
"util/jsonpath",
"util/keyutil",
"util/retry",
"util/workqueue",
]
pruneopts = ""
revision = "11646d1007e006f6f24995cb905c68bc62901c81"
revision = "13596e875accbd333e0b5bd5fd9462185acd9958"
[[projects]]
branch = "release-1.14"
digest = "1:742ce70d2c6de0f02b5331a25d4d549f55de6b214af22044455fd6e6b451cad9"
branch = "release-1.12"
digest = "1:8108815d1aef9159daabdb3f0fcef04a88765536daf0c0cd29a31fdba135ee54"
name = "k8s.io/code-generator"
packages = [
"cmd/go-to-protobuf",
@@ -1471,7 +1470,7 @@
"third_party/forked/golang/reflect",
]
pruneopts = ""
revision = "50b561225d70b3eb79a1faafd3dfe7b1a62cbe73"
revision = "b1289fc74931d4b6b04bd1a259acfc88a2cb0a66"
[[projects]]
branch = "master"
@@ -1489,15 +1488,15 @@
revision = "e17681d19d3ac4837a019ece36c2a0ec31ffe985"
[[projects]]
digest = "1:9eaf86f4f6fb4a8f177220d488ef1e3255d06a691cca95f14ef085d4cd1cef3c"
digest = "1:4f5eb833037cc0ba0bf8fe9cae6be9df62c19dd1c869415275c708daa8ccfda5"
name = "k8s.io/klog"
packages = ["."]
pruneopts = ""
revision = "d98d8acdac006fb39831f1b25640813fef9c314f"
version = "v0.3.3"
revision = "a5bc97fbc634d635061f3146511332c7e313a55a"
version = "v0.1.0"
[[projects]]
digest = "1:42ea993b351fdd39b9aad3c9ebe71f2fdb5d1f8d12eed24e71c3dff1a31b2a43"
digest = "1:aae856b28533bfdb39112514290f7722d00a55a99d2d0b897c6ee82e6feb87b3"
name = "k8s.io/kube-openapi"
packages = [
"cmd/openapi-gen",
@@ -1509,11 +1508,10 @@
"pkg/util/sets",
]
pruneopts = ""
revision = "411b2483e5034420675ebcdd4a55fc76fe5e55cf"
revision = "master"
[[projects]]
branch = "release-1.14"
digest = "1:78aa6079e011ece0d28513c7fe1bd64284fa9eb5d671760803a839ffdf0e9e38"
digest = "1:6061aa42761235df375f20fa4a1aa6d1845cba3687575f3adb2ef3f3bc540af5"
name = "k8s.io/kubernetes"
packages = [
"pkg/api/v1/pod",
@@ -1521,25 +1519,19 @@
"pkg/apis/autoscaling",
"pkg/apis/batch",
"pkg/apis/core",
"pkg/apis/extensions",
"pkg/apis/networking",
"pkg/apis/policy",
"pkg/features",
"pkg/kubectl/scheme",
"pkg/kubectl/util/term",
"pkg/kubelet/apis",
"pkg/util/interrupt",
"pkg/util/node",
]
pruneopts = ""
revision = "2d20b5759406ded89f8b25cf085ff4733b144ba5"
[[projects]]
branch = "master"
digest = "1:4c5d39f7ca1c940d7e74dbc62d2221e2c59b3d35c54f1fa9c77f3fd3113bbcb1"
name = "k8s.io/utils"
packages = [
"buffer",
"integer",
"trace",
]
pruneopts = ""
revision = "c55fbcfc754a5b2ec2fbae8fb9dcac36bdba6a12"
revision = "17c77c7898218073f14c8d573582e8d2313dc740"
version = "v1.12.2"
[[projects]]
branch = "master"
@@ -1549,14 +1541,6 @@
pruneopts = ""
revision = "97fed8db84274c421dbfffbb28ec859901556b97"
[[projects]]
digest = "1:321081b4a44256715f2b68411d8eda9a17f17ebfe6f0cc61d2cc52d11c08acfa"
name = "sigs.k8s.io/yaml"
packages = ["."]
pruneopts = ""
revision = "fd68e9863619f6ec2fdd8625fe1f02e7c877e480"
version = "v1.1.0"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
@@ -1687,7 +1671,6 @@
"k8s.io/client-go/util/flowcontrol",
"k8s.io/client-go/util/workqueue",
"k8s.io/code-generator/cmd/go-to-protobuf",
"k8s.io/klog",
"k8s.io/kube-openapi/cmd/openapi-gen",
"k8s.io/kube-openapi/pkg/common",
"k8s.io/kubernetes/pkg/api/v1/pod",

View File

@@ -35,24 +35,16 @@ required = [
name = "github.com/prometheus/client_golang"
revision = "7858729281ec582767b20e0d696b6041d995d5e0"
[[override]]
branch = "release-1.14"
[[constraint]]
branch = "release-1.12"
name = "k8s.io/api"
[[override]]
branch = "release-1.14"
name = "k8s.io/kubernetes"
[[override]]
branch = "release-1.14"
[[constraint]]
branch = "release-1.12"
name = "k8s.io/code-generator"
[[override]]
branch = "release-1.14"
name = "k8s.io/apimachinery"
[[override]]
branch = "release-11.0"
[[constraint]]
branch = "release-9.0"
name = "k8s.io/client-go"
[[constraint]]
@@ -71,8 +63,6 @@ required = [
branch = "master"
name = "github.com/yudai/gojsondiff"
# TODO: move off of k8s.io/kube-openapi and use controller-tools for CRD spec generation
# (override argoproj/argo constraint on master)
[[override]]
revision = "411b2483e5034420675ebcdd4a55fc76fe5e55cf"
revision = "master"
name = "k8s.io/kube-openapi"

View File

@@ -1,4 +1,4 @@
PACKAGE=github.com/argoproj/argo-cd/common
PACKAGE=github.com/argoproj/argo-cd
CURRENT_DIR=$(shell pwd)
DIST_DIR=${CURRENT_DIR}/dist
CLI_NAME=argocd
@@ -9,19 +9,17 @@ GIT_COMMIT=$(shell git rev-parse HEAD)
GIT_TAG=$(shell if [ -z "`git status --porcelain`" ]; then git describe --exact-match --tags HEAD 2>/dev/null; fi)
GIT_TREE_STATE=$(shell if [ -z "`git status --porcelain`" ]; then echo "clean" ; else echo "dirty"; fi)
PACKR_CMD=$(shell if [ "`which packr`" ]; then echo "packr"; else echo "go run vendor/github.com/gobuffalo/packr/packr/main.go"; fi)
TEST_CMD=$(shell [ "`which gotestsum`" != "" ] && echo gotestsum -- || echo go test)
PATH:=$(PATH):$(PWD)/hack
# docker image publishing options
DOCKER_PUSH?=false
IMAGE_TAG?=latest
DOCKER_PUSH=false
IMAGE_TAG=latest
# perform static compilation
STATIC_BUILD?=true
STATIC_BUILD=true
# build development images
DEV_IMAGE?=false
# lint is memory and CPU intensive, so we can limit on CI to mitigate OOM
LINT_GOGC?=off
LINT_CONCURRENCY?=8
DEV_IMAGE=false
override LDFLAGS += \
-X ${PACKAGE}.version=${VERSION} \
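The `-X ${PACKAGE}.version=${VERSION}` flag above stamps the version into a package-level variable at link time. Outside the Makefile, the equivalent manual build looks roughly like this (the version value is illustrative; `PACKAGE` resolves to `github.com/argoproj/argo-cd` on one side of this diff and `github.com/argoproj/argo-cd/common` on the other):

```sh
# inject the version string into the binary at link time
go build -ldflags "-X github.com/argoproj/argo-cd.version=v1.0.2" \
  -o dist/argocd ./cmd/argocd
```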
@@ -64,7 +62,7 @@ clientgen:
./hack/update-codegen.sh
.PHONY: codegen
codegen: protogen clientgen openapigen manifests
codegen: protogen clientgen openapigen
.PHONY: cli
cli: clean-debug
@@ -91,7 +89,7 @@ manifests:
.PHONY: server
server: clean-debug
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd/argocd-server
.PHONY: repo-server
repo-server:
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-repo-server ./cmd/argocd-repo-server
@@ -136,25 +134,19 @@ dep-ensure:
.PHONY: lint
lint:
# golangci-lint does not do a good job of formatting imports
goimports -local github.com/argoproj/argo-cd -w `find . ! -path './vendor/*' ! -path './pkg/client/*' -type f -name '*.go'`
GOGC=$(LINT_GOGC) golangci-lint run --fix --verbose --concurrency $(LINT_CONCURRENCY)
golangci-lint run --fix
.PHONY: build
build:
go build -v `go list ./... | grep -v 'resource_customizations\|test/e2e'`
go build `go list ./... | grep -v resource_customizations`
.PHONY: test
test:
go test -v -covermode=count -coverprofile=coverage.out `go list ./... | grep -v "test/e2e"`
.PHONY: cover
cover:
go tool cover -html=coverage.out
$(TEST_CMD) -covermode=count -coverprofile=coverage.out `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
.PHONY: test-e2e
test-e2e: cli
go test -v -timeout 10m ./test/e2e
$(TEST_CMD) -v -failfast -timeout 20m ./test/e2e
.PHONY: start-e2e
start-e2e: cli
@@ -162,7 +154,7 @@ start-e2e: cli
kubectl create ns argocd-e2e || true
kubens argocd-e2e
kustomize build test/manifests/base | kubectl apply -f -
goreman start
make start
# Cleans VSCode debug.test files from sub-dirs to prevent them from being included in packr boxes
.PHONY: clean-debug

View File

@@ -1,6 +1,5 @@
controller: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-application-controller/main.go --loglevel debug --redis localhost:6379 --repo-server localhost:8081"
api-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:6379 --disable-auth --insecure --dex-server http://localhost:5556 --repo-server localhost:8081 --staticassets ui/dist/app"
dex: sh -c "go run ./cmd/argocd-util/main.go gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p 5556:5556 -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/dexidp/dex:v2.14.0 serve /dex.yaml"
redis: docker run --rm --name argocd-redis -i -p 6379:6379 redis:5.0.3-alpine --save "" --appendonly no
api-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:6379 --disable-auth --insecure --dex-server http://localhost:5556 --repo-server localhost:8081 --staticassets ../argo-cd-ui/dist/app"
repo-server: sh -c "FORCE_LOG_COLORS=1 go run ./cmd/argocd-repo-server/main.go --loglevel debug --redis localhost:6379"
ui: sh -c 'cd ui && yarn start'
dex: sh -c "go run ./cmd/argocd-util/main.go gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p 5556:5556 -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/dexidp/dex:v2.14.0 serve /dex.yaml"
redis: docker run --rm --name argocd-redis -i -p 6379:6379 redis:5.0.3-alpine --save "" --appendonly no
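These Procfile entries are what a foreman-style process manager runs for local development; the CI config above installs `goreman` for exactly this. The usual loop:

```sh
# install the process manager (also done in the CI's install_go_deps step)
go get github.com/mattn/goreman
# start every Procfile process: controller, api-server, repo-server, dex, redis, ui
goreman start
```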

View File

@@ -19,23 +19,13 @@ Application deployment and lifecycle management should be automated, auditable,
Organizations below are **officially** using Argo CD. Please send a PR with your organization name if you are using Argo CD.
1. [Codility](https://www.codility.com/)
1. [Commonbond](https://commonbond.co/)
1. [CyberAgent](https://www.cyberagent.co.jp/en/)
1. [END.](https://www.endclothing.com/)
1. [GMETRI](https://gmetri.com/)
1. [Intuit](https://www.intuit.com/)
1. [KintoHub](https://www.kintohub.com/)
1. [KompiTech GmbH](https://www.kompitech.com/)
1. [Mirantis](https://mirantis.com/)
1. [OpenSaaS Studio](https://opensaas.studio)
1. [Optoro](https://www.optoro.com/)
1. [Riskified](https://www.riskified.com/)
1. [Tesla](https://tesla.com/)
1. [tZERO](https://www.tzero.com/)
1. [Ticketmaster](https://ticketmaster.com)
1. [Yieldlab](https://www.yieldlab.de/)
1. [Volvo Cars](https://www.volvocars.com/)
2. [KompiTech GmbH](https://www.kompitech.com/)
3. [Yieldlab](https://www.yieldlab.de/)
4. [Ticketmaster](https://ticketmaster.com)
5. [CyberAgent](https://www.cyberagent.co.jp/en/)
6. [OpenSaaS Studio](https://opensaas.studio)
7. [Riskified](https://www.riskified.com/)
## Documentation

View File

@@ -1 +1 @@
1.1.2
1.0.2

View File

@@ -15,7 +15,6 @@ p, role:admin, applications, create, */*, allow
p, role:admin, applications, update, */*, allow
p, role:admin, applications, delete, */*, allow
p, role:admin, applications, sync, */*, allow
p, role:admin, applications, override, */*, allow
p, role:admin, clusters, create, *, allow
p, role:admin, clusters, update, *, allow
p, role:admin, clusters, delete, *, allow

View File

@@ -868,31 +868,6 @@
}
}
},
"/api/v1/clusters/{server}/rotate-auth": {
"post": {
"tags": [
"ClusterService"
],
"summary": "RotateAuth returns a cluster by server address",
"operationId": "RotateAuth",
"parameters": [
{
"type": "string",
"name": "server",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/clusterClusterResponse"
}
}
}
}
},
"/api/v1/projects": {
"get": {
"tags": [
@@ -1496,12 +1471,6 @@
"type": "boolean",
"format": "boolean"
},
"manifests": {
"type": "array",
"items": {
"type": "string"
}
},
"name": {
"type": "string"
},
@@ -2008,19 +1977,6 @@
}
}
},
"v1Fields": {
"type": "object",
"title": "Fields stores a set of fields in a data structure like a Trie.\nTo understand how this is used, see: https://github.com/kubernetes-sigs/structured-merge-diff",
"properties": {
"map": {
"description": "Map stores a set of fields in a data structure like a Trie.\n\nEach key is either a '.' representing the field itself, and will always map to an empty set,\nor a string representing a sub-field or item. The string will follow one of these four formats:\n'f:<name>', where <name> is the name of a field in a struct, or key in a map\n'v:<value>', where <value> is the exact json formatted value of a list item\n'i:<index>', where <index> is position of a item in a list\n'k:<keys>', where <keys> is a map of a list item's key fields to their unique values\nIf a key maps to an empty Fields value, the field that key represents is part of the set.\n\nThe exact format is defined in k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/internal",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/v1Fields"
}
}
}
},
"v1GroupKind": {
"description": "+protobuf.options.(gogoproto.goproto_stringer)=false",
"type": "object",
@@ -2092,30 +2048,6 @@
}
}
},
"v1ManagedFieldsEntry": {
"description": "ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource\nthat the fieldset applies to.",
"type": "object",
"properties": {
"apiVersion": {
"description": "APIVersion defines the version of this resource that this field set\napplies to. The format is \"group/version\" just like the top-level\nAPIVersion field. It is necessary to track the version of a field\nset because it cannot be automatically converted.",
"type": "string"
},
"fields": {
"$ref": "#/definitions/v1Fields"
},
"manager": {
"description": "Manager is an identifier of the workflow managing these fields.",
"type": "string"
},
"operation": {
"description": "Operation is the type of operation which lead to this ManagedFieldsEntry being created.\nThe only valid values for this field are 'Apply' and 'Update'.",
"type": "string"
},
"time": {
"$ref": "#/definitions/v1Time"
}
}
},
"v1MicroTime": {
"description": "MicroTime is version of Time with microsecond level precision.\n\n+protobuf.options.marshal=false\n+protobuf.as=Timestamp\n+protobuf.options.(gogoproto.goproto_stringer)=false",
"type": "object",
@@ -2184,13 +2116,6 @@
"type": "string"
}
},
"managedFields": {
"description": "ManagedFields maps workflow-id and version to the set of fields\nthat are managed by that workflow. This is mostly for internal\nhousekeeping, and users typically shouldn't need to set or\nunderstand this field. A workflow can be the user's name, a\ncontroller's name, or the name of a specific apply path like\n\"ci-cd\". The set of fields is always in the version that the\nworkflow used when modifying the object.\n\nThis field is alpha and can be changed or removed without notice.\n\n+optional",
"type": "array",
"items": {
"$ref": "#/definitions/v1ManagedFieldsEntry"
}
},
"name": {
"type": "string",
"title": "Name must be unique within a namespace. Is required when creating resources, although\nsome resources may allow a client to request the generation of an appropriate name\nautomatically. Name is primarily intended for creation idempotence and configuration\ndefinition.\nCannot be updated.\nMore info: http://kubernetes.io/docs/user-guide/identifiers#names\n+optional"
@@ -2255,7 +2180,7 @@
}
},
"v1OwnerReference": {
"description": "OwnerReference contains enough information to let you identify an owning\nobject. An owning object must be in the same namespace as the dependent, or\nbe cluster-scoped, so there is no namespace field.",
"description": "OwnerReference contains enough information to let you identify an owning\nobject. Currently, an owning object must be in the same namespace, so there\nis no namespace field.",
"type": "object",
"properties": {
"apiVersion": {
@@ -2586,10 +2511,6 @@
"$ref": "#/definitions/v1alpha1HelmParameter"
}
},
"releaseName": {
"type": "string",
"title": "The Helm release name. If omitted it will use the application name"
},
"valueFiles": {
"type": "array",
"title": "ValuesFiles is a list of Helm value files to use when generating a template",
@@ -2640,13 +2561,6 @@
"type": "object",
"title": "ApplicationSourceKustomize holds kustomize specific options",
"properties": {
"commonLabels": {
"type": "object",
"title": "CommonLabels adds additional kustomize commonLabels",
"additionalProperties": {
"type": "string"
}
},
"imageTags": {
"type": "array",
"title": "ImageTags are kustomize 1.0 image tag overrides",
@@ -2690,13 +2604,6 @@
"$ref": "#/definitions/v1alpha1ResourceIgnoreDifferences"
}
},
"info": {
"type": "array",
"title": "Infos contains a list of useful information (URLs, email addresses, and plain text) that relates to the application",
"items": {
"$ref": "#/definitions/v1alpha1Info"
}
},
"project": {
"description": "Project is a application project name. Empty name means that application belongs to 'default' project.",
"type": "string"
@@ -2907,17 +2814,6 @@
}
}
},
"v1alpha1Info": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"type": "string"
}
}
},
"v1alpha1InfoItem": {
"type": "object",
"title": "InfoItem contains human readable information about object",
@@ -3287,9 +3183,6 @@
"namespace": {
"type": "string"
},
"uid": {
"type": "string"
},
"version": {
"type": "string"
}
@@ -3303,19 +3196,16 @@
"type": "string"
},
"hookPhase": {
"type": "string",
"title": "the state of any operation associated with this resource OR hook\nnote: can contain values for non-hook resources"
"type": "string"
},
"hookType": {
"type": "string",
"title": "the type of the hook, empty for non-hook resources"
"type": "string"
},
"kind": {
"type": "string"
},
"message": {
"type": "string",
"title": "message for the last sync OR operation"
"type": "string"
},
"name": {
"type": "string"
@@ -3324,12 +3214,7 @@
"type": "string"
},
"status": {
"type": "string",
"title": "the final result of the sync, this is be empty if the resources is yet to be applied/pruned and is always zero-value for hooks"
},
"syncPhase": {
"type": "string",
"title": "indicates the particular phase of the sync that this is for"
"type": "string"
},
"version": {
"type": "string"
@@ -3395,13 +3280,6 @@
"format": "boolean",
"title": "DryRun will perform a `kubectl apply --dry-run` without actually performing the sync"
},
"manifests": {
"type": "array",
"title": "Manifests is an optional field that overrides sync source with a local directory for development",
"items": {
"type": "string"
}
},
"prune": {
"type": "boolean",
"format": "boolean",

View File

@@ -16,6 +16,7 @@ import (
// load the oidc plugin (required to authenticate with OpenID Connect).
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
argocd "github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
@@ -44,7 +45,6 @@ func newCommand() *cobra.Command {
operationProcessors int
logLevel string
glogLevel int
metricsPort int
cacheSrc func() (*cache.Cache, error)
)
var command = cobra.Command{
@@ -81,11 +81,10 @@ func newCommand() *cobra.Command {
appClient,
repoClientset,
cache,
resyncDuration,
metricsPort)
resyncDuration)
errors.CheckError(err)
log.Infof("Application Controller (version: %s) starting (namespace: %s)", common.GetVersion(), namespace)
log.Infof("Application Controller (version: %s) starting (namespace: %s)", argocd.GetVersion(), namespace)
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
@@ -105,7 +104,6 @@ func newCommand() *cobra.Command {
command.Flags().IntVar(&operationProcessors, "operation-processors", 1, "Number of application operation processors")
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortArgoCDMetrics, "Start metrics server on given port")
cacheSrc = cache.AddCacheFlagsToCmd(&command)
return &command
}

View File

@@ -7,11 +7,11 @@ import (
"os"
"time"
"github.com/argoproj/argo-cd/reposerver/metrics"
"github.com/prometheus/client_golang/prometheus/promhttp"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
argocd "github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/reposerver"
@@ -31,8 +31,6 @@ func newCommand() *cobra.Command {
var (
logLevel string
parallelismLimit int64
listenPort int
metricsPort int
cacheSrc func() (*cache.Cache, error)
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
)
@@ -48,18 +46,16 @@ func newCommand() *cobra.Command {
cache, err := cacheSrc()
errors.CheckError(err)
metricsServer := metrics.NewMetricsServer(git.NewFactory())
server, err := reposerver.NewServer(metricsServer, cache, tlsConfigCustomizer, parallelismLimit)
server, err := reposerver.NewServer(git.NewFactory(), cache, tlsConfigCustomizer, parallelismLimit)
errors.CheckError(err)
grpc := server.CreateGRPC()
listener, err := net.Listen("tcp", fmt.Sprintf(":%d", listenPort))
listener, err := net.Listen("tcp", fmt.Sprintf(":%d", common.PortRepoServer))
errors.CheckError(err)
http.Handle("/metrics", metricsServer.GetHandler())
go func() { errors.CheckError(http.ListenAndServe(fmt.Sprintf(":%d", metricsPort), nil)) }()
http.Handle("/metrics", promhttp.Handler())
go func() { errors.CheckError(http.ListenAndServe(fmt.Sprintf(":%d", common.PortRepoServerMetrics), nil)) }()
log.Infof("argocd-repo-server %s serving on %s", common.GetVersion(), listener.Addr())
log.Infof("argocd-repo-server %s serving on %s", argocd.GetVersion(), listener.Addr())
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
@@ -71,8 +67,6 @@ func newCommand() *cobra.Command {
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().Int64Var(&parallelismLimit, "parallelismlimit", 0, "Limit on number of concurrent manifest generation requests. Any value less than 1 means no limit.")
command.Flags().IntVar(&listenPort, "port", common.DefaultPortRepoServer, "Listen on given port for incoming connections")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortRepoServerMetrics, "Start metrics server on given port")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
cacheSrc = cache.AddCacheFlagsToCmd(&command)
return &command
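Both sides of this hunk expose Prometheus metrics over plain HTTP next to the gRPC listener; once the process is up, a quick scrape confirms it. The port below is an assumption — the authoritative default is the `common` port constant referenced in the code:

```sh
# scrape the repo server's metrics endpoint; 8084 is assumed, check common/
curl -s localhost:8084/metrics | head
```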

View File

@@ -22,20 +22,17 @@ import (
// NewCommand returns a new instance of an argocd command
func NewCommand() *cobra.Command {
var (
insecure bool
listenPort int
metricsPort int
logLevel string
glogLevel int
clientConfig clientcmd.ClientConfig
repoServerTimeoutSeconds int
staticAssetsDir string
baseHRef string
repoServerAddress string
dexServerAddress string
disableAuth bool
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
cacheSrc func() (*cache.Cache, error)
insecure bool
logLevel string
glogLevel int
clientConfig clientcmd.ClientConfig
staticAssetsDir string
baseHRef string
repoServerAddress string
dexServerAddress string
disableAuth bool
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
cacheSrc func() (*cache.Cache, error)
)
var command = &cobra.Command{
Use: cliName,
@@ -60,12 +57,10 @@ func NewCommand() *cobra.Command {
kubeclientset := kubernetes.NewForConfigOrDie(config)
appclientset := appclientset.NewForConfigOrDie(config)
repoclientset := reposerver.NewRepoServerClientset(repoServerAddress, repoServerTimeoutSeconds)
repoclientset := reposerver.NewRepoServerClientset(repoServerAddress, 0)
argoCDOpts := server.ArgoCDServerOpts{
Insecure: insecure,
ListenPort: listenPort,
MetricsPort: metricsPort,
Namespace: namespace,
StaticAssetsDir: staticAssetsDir,
BaseHRef: baseHRef,
@@ -86,7 +81,7 @@ func NewCommand() *cobra.Command {
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
argocd := server.NewServer(ctx, argoCDOpts)
argocd.Run(ctx, listenPort, metricsPort)
argocd.Run(ctx, common.PortAPIServer)
cancel()
}
},
@@ -102,9 +97,6 @@ func NewCommand() *cobra.Command {
command.Flags().StringVar(&dexServerAddress, "dex-server", common.DefaultDexServerAddr, "Dex server address")
command.Flags().BoolVar(&disableAuth, "disable-auth", false, "Disable client authentication")
command.AddCommand(cli.NewVersionCmd(cliName))
command.Flags().IntVar(&listenPort, "port", common.DefaultPortAPIServer, "Listen on given port")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortArgoCDAPIServerMetrics, "Start metrics on given port")
command.Flags().IntVar(&repoServerTimeoutSeconds, "repo-server-timeout-seconds", 60, "Repo server RPC call timeout seconds.")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(command)
cacheSrc = cache.AddCacheFlagsToCmd(command)
return command

View File

@@ -11,7 +11,7 @@ import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
accountpkg "github.com/argoproj/argo-cd/pkg/apiclient/account"
"github.com/argoproj/argo-cd/server/account"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/localconfig"
@@ -57,7 +57,7 @@ func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *co
errors.CheckError(err)
}
updatePasswordRequest := accountpkg.UpdatePasswordRequest{
updatePasswordRequest := account.UpdatePasswordRequest{
NewPassword: newPassword,
CurrentPassword: currentPassword,
}

View File

@@ -11,8 +11,6 @@ import (
"os/exec"
"path"
"reflect"
"regexp"
"sort"
"strconv"
"strings"
"text/tabwriter"
@@ -33,10 +31,10 @@ import (
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apiclient"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
applicationpkg "github.com/argoproj/argo-cd/pkg/apiclient/application"
settingspkg "github.com/argoproj/argo-cd/pkg/apiclient/settings"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/server/application"
"github.com/argoproj/argo-cd/server/settings"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/cli"
@@ -45,7 +43,6 @@ import (
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/hook"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/resource"
"github.com/argoproj/argo-cd/util/templates"
)
@@ -141,7 +138,7 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
}
conn, appIf := argocdClient.NewApplicationClientOrDie()
defer util.Close(conn)
appCreateRequest := applicationpkg.ApplicationCreateRequest{
appCreateRequest := application.ApplicationCreateRequest{
Application: app,
Upsert: &upsert,
}
@@ -192,7 +189,7 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
conn, appIf := acdClient.NewApplicationClientOrDie()
defer util.Close(conn)
appName := args[0]
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName, Refresh: getRefreshType(refresh, hardRefresh)})
app, err := appIf.Get(context.Background(), &application.ApplicationQuery{Name: &appName, Refresh: getRefreshType(refresh, hardRefresh)})
errors.CheckError(err)
switch output {
case "yaml":
@@ -224,7 +221,7 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
if len(app.Status.Resources) > 0 {
fmt.Println()
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
printAppResources(w, app)
printAppResources(w, app, showOperation)
_ = w.Flush()
}
default:
@@ -325,7 +322,7 @@ func truncateString(str string, num int) string {
}
// printParams prints parameters and overrides
func printParams(app *argoappv1.Application, appIf applicationpkg.ApplicationServiceClient) {
func printParams(app *argoappv1.Application, appIf application.ApplicationServiceClient) {
paramLenLimit := 80
fmt.Println()
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
@@ -363,7 +360,7 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
argocdClient := argocdclient.NewClientOrDie(clientOpts)
conn, appIf := argocdClient.NewApplicationClientOrDie()
defer util.Close(conn)
app, err := appIf.Get(ctx, &applicationpkg.ApplicationQuery{Name: &appName})
app, err := appIf.Get(ctx, &application.ApplicationQuery{Name: &appName})
errors.CheckError(err)
visited := setAppOptions(c.Flags(), app, &appOpts)
if visited == 0 {
@@ -372,7 +369,7 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
os.Exit(1)
}
setParameterOverrides(app, appOpts.parameters)
_, err = appIf.UpdateSpec(ctx, &applicationpkg.ApplicationUpdateSpecRequest{
_, err = appIf.UpdateSpec(ctx, &application.ApplicationUpdateSpecRequest{
Name: &app.Name,
Spec: app.Spec,
})
@@ -397,9 +394,7 @@ func setAppOptions(flags *pflag.FlagSet, app *argoappv1.Application, appOpts *ap
case "revision":
app.Spec.Source.TargetRevision = appOpts.revision
case "values":
setHelmOpt(&app.Spec.Source, appOpts.valuesFiles, nil)
case "release-name":
setHelmOpt(&app.Spec.Source, nil, &appOpts.releaseName)
setHelmOpt(&app.Spec.Source, appOpts.valuesFiles)
case "directory-recurse":
app.Spec.Source.Directory = &argoappv1.ApplicationSourceDirectory{Recurse: appOpts.directoryRecurse}
case "config-management-plugin":
@@ -459,16 +454,13 @@ func setKustomizeOpt(src *argoappv1.ApplicationSource, namePrefix *string) {
}
}
func setHelmOpt(src *argoappv1.ApplicationSource, valueFiles []string, releaseName *string) {
func setHelmOpt(src *argoappv1.ApplicationSource, valueFiles []string) {
if src.Helm == nil {
src.Helm = &argoappv1.ApplicationSourceHelm{}
}
if valueFiles != nil {
src.Helm.ValueFiles = valueFiles
}
if releaseName != nil {
src.Helm.ReleaseName = *releaseName
}
if src.Helm.IsZero() {
src.Helm = nil
}
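
As a standalone illustration of the optional-setter pattern changed above (simplified stand-in types, not the real `ApplicationSource`): a nil argument means "leave unchanged", and the Helm block is normalized back to nil once it carries no information.

```go
package main

import "fmt"

type helmSource struct {
	ValueFiles  []string
	ReleaseName string
}

func (h *helmSource) isZero() bool {
	return len(h.ValueFiles) == 0 && h.ReleaseName == ""
}

type appSource struct{ Helm *helmSource }

// setHelmOpt mirrors the variant with the optional release name: nil means
// "leave as-is", and an empty Helm block is dropped again.
func setHelmOpt(src *appSource, valueFiles []string, releaseName *string) {
	if src.Helm == nil {
		src.Helm = &helmSource{}
	}
	if valueFiles != nil {
		src.Helm.ValueFiles = valueFiles
	}
	if releaseName != nil {
		src.Helm.ReleaseName = *releaseName
	}
	if src.Helm.isZero() {
		src.Helm = nil
	}
}

func main() {
	src := appSource{}
	name := "my-release"
	setHelmOpt(&src, []string{"values.yaml"}, &name)
	fmt.Println(src.Helm.ValueFiles, src.Helm.ReleaseName) // [values.yaml] my-release

	empty := ""
	setHelmOpt(&src, []string{}, &empty) // clearing both fields drops the Helm block
	fmt.Println(src.Helm == nil)         // true
}
```
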
@@ -483,7 +475,6 @@ type appOptions struct {
destNamespace string
parameters []string
valuesFiles []string
releaseName string
project string
syncPolicy string
autoPrune bool
@@ -501,7 +492,6 @@ func addAppFlags(command *cobra.Command, opts *appOptions) {
command.Flags().StringVar(&opts.destNamespace, "dest-namespace", "", "K8s target namespace (overrides the namespace specified in the ksonnet app.yaml)")
command.Flags().StringArrayVarP(&opts.parameters, "parameter", "p", []string{}, "set a parameter override (e.g. -p guestbook=image=example/guestbook:latest)")
command.Flags().StringArrayVar(&opts.valuesFiles, "values", []string{}, "Helm values file(s) to use")
command.Flags().StringVar(&opts.releaseName, "release-name", "", "Helm release-name")
command.Flags().StringVar(&opts.project, "project", "", "Application project name")
command.Flags().StringVar(&opts.syncPolicy, "sync-policy", "", "Set the sync policy (one of: automated, none)")
command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning when sync is automated")
@@ -527,7 +517,7 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
appName := args[0]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName})
app, err := appIf.Get(context.Background(), &application.ApplicationQuery{Name: &appName})
errors.CheckError(err)
updated := false
@@ -568,13 +558,13 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
}
}
}
setHelmOpt(&app.Spec.Source, specValueFiles, nil)
setHelmOpt(&app.Spec.Source, specValueFiles)
if !updated {
return
}
}
_, err = appIf.UpdateSpec(context.Background(), &applicationpkg.ApplicationUpdateSpecRequest{
_, err = appIf.UpdateSpec(context.Background(), &application.ApplicationUpdateSpecRequest{
Name: &app.Name,
Spec: app.Spec,
})
@@ -613,18 +603,6 @@ func liveObjects(resources []*argoappv1.ResourceDiff) ([]*unstructured.Unstructu
}
func getLocalObjects(app *argoappv1.Application, local string, appLabelKey string) []*unstructured.Unstructured {
manifestStrings := getLocalObjectsString(app, local, appLabelKey)
objs := make([]*unstructured.Unstructured, len(manifestStrings))
for i := range manifestStrings {
obj := unstructured.Unstructured{}
err := json.Unmarshal([]byte(manifestStrings[i]), &obj)
errors.CheckError(err)
objs[i] = &obj
}
return objs
}
func getLocalObjectsString(app *argoappv1.Application, local string, appLabelKey string) []string {
res, err := repository.GenerateManifests(local, &repository.ManifestRequest{
ApplicationSource: &app.Spec.Source,
AppLabelKey: appLabelKey,
@@ -632,8 +610,14 @@ func getLocalObjectsString(app *argoappv1.Application, local string, appLabelKey
Namespace: app.Spec.Destination.Namespace,
})
errors.CheckError(err)
return res.Manifests
objs := make([]*unstructured.Unstructured, len(res.Manifests))
for i := range res.Manifests {
obj := unstructured.Unstructured{}
err = json.Unmarshal([]byte(res.Manifests[i]), &obj)
errors.CheckError(err)
objs[i] = &obj
}
return objs
}
type resourceInfoProvider struct {
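
Both shapes of `getLocalObjects` above do the same core work: unmarshal each rendered manifest string into an `unstructured.Unstructured`. A self-contained sketch of that conversion, returning an error instead of calling `errors.CheckError`:

```go
package main

import (
	"encoding/json"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// toUnstructured converts rendered JSON manifests into unstructured objects,
// the same conversion both versions of getLocalObjects perform above.
func toUnstructured(manifests []string) ([]*unstructured.Unstructured, error) {
	objs := make([]*unstructured.Unstructured, len(manifests))
	for i, m := range manifests {
		obj := &unstructured.Unstructured{}
		if err := json.Unmarshal([]byte(m), obj); err != nil {
			return nil, fmt.Errorf("manifest %d: %w", i, err)
		}
		objs[i] = obj
	}
	return objs, nil
}

func main() {
	objs, err := toUnstructured([]string{`{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"cm"}}`})
	if err != nil {
		panic(err)
	}
	fmt.Println(objs[0].GetKind(), objs[0].GetName()) // ConfigMap cm
}
```
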
@@ -660,7 +644,7 @@ func groupLocalObjs(localObs []*unstructured.Unstructured, liveObjs []*unstructu
objByKey := make(map[kube.ResourceKey]*unstructured.Unstructured)
for i := range localObs {
obj := localObs[i]
if !(hook.IsHook(obj) || resource.Ignore(obj)) {
if !hook.IsHook(obj) {
objByKey[kube.GetResourceKey(obj)] = obj
}
}
@@ -689,9 +673,9 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
conn, appIf := clientset.NewApplicationClientOrDie()
defer util.Close(conn)
appName := args[0]
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName, Refresh: getRefreshType(refresh, hardRefresh)})
app, err := appIf.Get(context.Background(), &application.ApplicationQuery{Name: &appName, Refresh: getRefreshType(refresh, hardRefresh)})
errors.CheckError(err)
resources, err := appIf.ManagedResources(context.Background(), &applicationpkg.ResourcesQuery{ApplicationName: &appName})
resources, err := appIf.ManagedResources(context.Background(), &application.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
liveObjs, err := liveObjects(resources.Items)
errors.CheckError(err)
@@ -703,7 +687,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
conn, settingsIf := clientset.NewSettingsClientOrDie()
defer util.Close(conn)
argoSettings, err := settingsIf.Get(context.Background(), &settingspkg.SettingsQuery{})
argoSettings, err := settingsIf.Get(context.Background(), &settings.SettingsQuery{})
errors.CheckError(err)
if local != "" {
@@ -873,7 +857,7 @@ func NewApplicationDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
for _, appName := range args {
appDeleteReq := applicationpkg.ApplicationDeleteRequest{
appDeleteReq := application.ApplicationDeleteRequest{
Name: &appName,
}
if c.Flag("cascade").Changed {
@@ -899,7 +883,7 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
Run: func(c *cobra.Command, args []string) {
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
apps, err := appIf.List(context.Background(), &applicationpkg.ApplicationQuery{})
apps, err := appIf.List(context.Background(), &application.ApplicationQuery{})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
var fmtStr string
@@ -1052,10 +1036,51 @@ func NewApplicationWaitCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
// printAppResources prints the resources of an application in a tabwriter table
func printAppResources(w io.Writer, app *argoappv1.Application) {
_, _ = fmt.Fprintf(w, "GROUP\tKIND\tNAMESPACE\tNAME\tSTATUS\tHEALTH\tHOOK\tMESSAGE\n")
for _, res := range getResourceStates(app, nil) {
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n", res.Group, res.Kind, res.Namespace, res.Name, res.Status, res.Health, res.Hook, res.Message)
// Optionally prints the message from the operation state
func printAppResources(w io.Writer, app *argoappv1.Application, showOperation bool) {
messages := make(map[string]string)
opState := app.Status.OperationState
var syncRes *argoappv1.SyncOperationResult
if showOperation {
fmt.Fprintf(w, "GROUP\tKIND\tNAMESPACE\tNAME\tSTATUS\tHEALTH\tHOOK\tMESSAGE\n")
if opState != nil {
if opState.SyncResult != nil {
syncRes = opState.SyncResult
}
}
if syncRes != nil {
for _, res := range syncRes.Resources {
if !res.IsHook() {
messages[fmt.Sprintf("%s/%s/%s/%s", res.Group, res.Kind, res.Namespace, res.Name)] = res.Message
} else if res.HookType == argoappv1.HookTypePreSync {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n", res.Group, res.Kind, res.Namespace, res.Name, res.HookPhase, "", res.HookType, res.Message)
}
}
}
} else {
fmt.Fprintf(w, "GROUP\tKIND\tNAMESPACE\tNAME\tSTATUS\tHEALTH\n")
}
for _, res := range app.Status.Resources {
healthStatus := ""
if res.Health != nil {
healthStatus = res.Health.Status
}
if showOperation {
message := messages[fmt.Sprintf("%s/%s/%s/%s", res.Group, res.Kind, res.Namespace, res.Name)]
fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s", res.Group, res.Kind, res.Namespace, res.Name, res.Status, healthStatus, "", message)
} else {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s", res.Group, res.Kind, res.Namespace, res.Name, res.Status, healthStatus)
}
fmt.Fprint(w, "\n")
}
if showOperation && syncRes != nil {
for _, res := range syncRes.Resources {
if res.HookType == argoappv1.HookTypeSync || res.HookType == argoappv1.HookTypePostSync {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n", res.Group, res.Kind, res.Namespace, res.Name, res.HookPhase, "", res.HookType, res.Message)
}
}
}
}
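
`printAppResources` above relies on `text/tabwriter` for column alignment; a minimal runnable example of the same printing pattern:

```go
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	// Same writer parameters as printAppResources: min width 0, tab width 0,
	// padding 2, pad with spaces, no flags.
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "GROUP\tKIND\tNAMESPACE\tNAME\tSTATUS\tHEALTH\n")
	fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\n", "apps", "Deployment", "default", "guestbook", "Synced", "Healthy")
	fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\n", "", "Service", "default", "guestbook-ui", "Synced", "Healthy")
	_ = w.Flush() // columns are only aligned once Flush is called
}
```
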
@@ -1070,8 +1095,6 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
timeout uint
strategy string
force bool
async bool
local string
)
var command = &cobra.Command{
Use: "sync APPNAME",
@@ -1099,7 +1122,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
revision = "HEAD"
}
q := applicationpkg.ApplicationManifestQuery{
q := application.ApplicationManifestQuery{
Name: &appName,
Revision: revision,
}
@@ -1130,31 +1153,12 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
selectedResources := parseSelectedResources(resources)
var localObjsStrings []string
if local != "" {
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName})
if app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.Automated != nil {
log.Fatal("Cannot use local sync when Automatic Sync Policy is enabled")
}
errors.CheckError(err)
conn, settingsIf := acdClient.NewSettingsClientOrDie()
argoSettings, err := settingsIf.Get(context.Background(), &settingspkg.SettingsQuery{})
errors.CheckError(err)
util.Close(conn)
localObjsStrings = getLocalObjectsString(app, local, argoSettings.AppLabelKey)
}
syncReq := applicationpkg.ApplicationSyncRequest{
syncReq := application.ApplicationSyncRequest{
Name: &appName,
DryRun: dryRun,
Revision: revision,
Resources: selectedResources,
Prune: prune,
Manifests: localObjsStrings,
}
switch strategy {
case "apply":
@@ -1170,20 +1174,23 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
_, err := appIf.Sync(ctx, &syncReq)
errors.CheckError(err)
if !async {
app, err := waitOnApplicationStatus(acdClient, appName, timeout, false, false, true, false, selectedResources)
errors.CheckError(err)
app, err := waitOnApplicationStatus(acdClient, appName, timeout, false, false, true, false, selectedResources)
errors.CheckError(err)
// Only get resources to be pruned if sync was application-wide
if len(selectedResources) == 0 {
pruningRequired := app.Status.OperationState.SyncResult.Resources.PruningRequired()
if pruningRequired > 0 {
log.Fatalf("%d resources require pruning", pruningRequired)
// Only get resources to be pruned if sync was application-wide
if len(selectedResources) == 0 {
pruningRequired := 0
for _, resDetails := range app.Status.OperationState.SyncResult.Resources {
if resDetails.Status == argoappv1.ResultCodePruneSkipped {
pruningRequired++
}
}
if pruningRequired > 0 {
log.Fatalf("%d resources require pruning", pruningRequired)
}
if !app.Status.OperationState.Phase.Successful() && !dryRun {
os.Exit(1)
}
if !app.Status.OperationState.Phase.Successful() && !dryRun {
os.Exit(1)
}
}
},
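
Both sides of the pruning hunk above compute the same count; the longer side spells it out. A hedged sketch of that manual loop with simplified types (`ResultCodePruneSkipped` is the constant named in the diff; the string value used here is an assumption):

```go
package main

import "fmt"

// Assumed value of the ResultCodePruneSkipped constant from the diff.
const resultCodePruneSkipped = "PruneSkipped"

type resourceResult struct {
	Name   string
	Status string
}

// pruningRequired counts resources the sync left behind because pruning was
// not requested, mirroring the inline loop in the hunk above.
func pruningRequired(results []resourceResult) int {
	n := 0
	for _, r := range results {
		if r.Status == resultCodePruneSkipped {
			n++
		}
	}
	return n
}

func main() {
	results := []resourceResult{
		{Name: "old-cm", Status: resultCodePruneSkipped},
		{Name: "app", Status: "Synced"},
	}
	if n := pruningRequired(results); n > 0 {
		// Same message format the diff passes to log.Fatalf.
		fmt.Printf("%d resources require pruning\n", n)
	}
}
```
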
@@ -1196,8 +1203,6 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().UintVar(&timeout, "timeout", defaultCheckTimeoutSeconds, "Time out after this many seconds")
command.Flags().StringVar(&strategy, "strategy", "", "Sync strategy (one of: apply|hook)")
command.Flags().BoolVar(&force, "force", false, "Use a force apply")
command.Flags().BoolVar(&async, "async", false, "Do not wait for application to sync before continuing")
command.Flags().StringVar(&local, "local", "", "Path to a local directory. When this flag is present no git queries will be made")
return command
}
@@ -1213,6 +1218,33 @@ type resourceState struct {
Message string
}
func newResourceStateFromStatus(res *argoappv1.ResourceStatus) *resourceState {
healthStatus := ""
if res.Health != nil {
healthStatus = res.Health.Status
}
return &resourceState{
Group: res.Group,
Kind: res.Kind,
Namespace: res.Namespace,
Name: res.Name,
Status: string(res.Status),
Health: healthStatus,
}
}
func newResourceStateFromResult(res *argoappv1.ResourceResult) *resourceState {
return &resourceState{
Group: res.Group,
Kind: res.Kind,
Namespace: res.Namespace,
Name: res.Name,
Status: string(res.HookPhase),
Hook: string(res.HookType),
Message: res.Message,
}
}
// Key returns a unique-ish key for the resource.
func (rs *resourceState) Key() string {
return fmt.Sprintf("%s/%s/%s/%s", rs.Group, rs.Kind, rs.Namespace, rs.Name)
@@ -1241,69 +1273,44 @@ func (rs *resourceState) Merge(newState *resourceState) bool {
return updated
}
func getResourceStates(app *argoappv1.Application, selectedResources []argoappv1.SyncOperationResource) []*resourceState {
var states []*resourceState
resourceByKey := make(map[kube.ResourceKey]argoappv1.ResourceStatus)
for i := range app.Status.Resources {
res := app.Status.Resources[i]
resourceByKey[kube.NewResourceKey(res.Group, res.Kind, res.Namespace, res.Name)] = res
func calculateResourceStates(app *argoappv1.Application, selectedResources []argoappv1.SyncOperationResource) map[string]*resourceState {
resStates := getResourceStates(app, selectedResources)
var opResult *argoappv1.SyncOperationResult
if app.Status.OperationState != nil {
if app.Status.OperationState.SyncResult != nil {
opResult = app.Status.OperationState.SyncResult
}
}
if opResult == nil {
return resStates
}
// print most resources info along with most recent operation results
if app.Status.OperationState != nil && app.Status.OperationState.SyncResult != nil {
for _, res := range app.Status.OperationState.SyncResult.Resources {
sync := string(res.HookPhase)
health := string(res.Status)
key := kube.NewResourceKey(res.Group, res.Kind, res.Namespace, res.Name)
if resource, ok := resourceByKey[key]; ok && res.HookType == "" {
health = argoappv1.HealthStatusUnknown
if resource.Health != nil {
health = resource.Health.Status
}
sync = string(resource.Status)
}
states = append(states, &resourceState{
Group: res.Group, Kind: res.Kind, Namespace: res.Namespace, Name: res.Name, Status: sync, Health: health, Hook: string(res.HookType), Message: res.Message})
delete(resourceByKey, kube.NewResourceKey(res.Group, res.Kind, res.Namespace, res.Name))
for _, result := range opResult.Resources {
newState := newResourceStateFromResult(result)
key := newState.Key()
if prev, ok := resStates[key]; ok {
prev.Merge(newState)
} else {
resStates[key] = newState
}
}
resKeys := make([]kube.ResourceKey, 0)
for k := range resourceByKey {
resKeys = append(resKeys, k)
}
sort.Slice(resKeys, func(i, j int) bool {
return resKeys[i].String() < resKeys[j].String()
})
// print rest of resources which were not part of most recent operation
for _, resKey := range resKeys {
res := resourceByKey[resKey]
health := argoappv1.HealthStatusUnknown
if res.Health != nil {
health = res.Health.Status
}
states = append(states, &resourceState{
Group: res.Group, Kind: res.Kind, Namespace: res.Namespace, Name: res.Name, Status: string(res.Status), Health: health, Hook: "", Message: ""})
}
// filter out not selected resources
if len(selectedResources) > 0 {
for i := len(states) - 1; i >= 0; i-- {
res := states[i]
if !argo.ContainsSyncResource(res.Name, schema.GroupVersionKind{Group: res.Group, Kind: res.Kind}, selectedResources) {
states = append(states[:i], states[i+1:]...)
}
}
}
return states
return resStates
}
func groupResourceStates(app *argoappv1.Application, selectedResources []argoappv1.SyncOperationResource) map[string]*resourceState {
func getResourceStates(app *argoappv1.Application, selectedResources []argoappv1.SyncOperationResource) map[string]*resourceState {
resStates := make(map[string]*resourceState)
for _, result := range getResourceStates(app, selectedResources) {
key := result.Key()
for _, res := range app.Status.Resources {
if len(selectedResources) > 0 && !argo.ContainsSyncResource(res.Name, res.GroupVersionKind(), selectedResources) {
continue
}
newState := newResourceStateFromStatus(&res)
key := newState.Key()
if prev, ok := resStates[key]; ok {
prev.Merge(result)
prev.Merge(newState)
} else {
resStates[key] = result
resStates[key] = newState
}
}
return resStates
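
The restructured `calculateResourceStates` above seeds a map keyed by group/kind/namespace/name from the app's resource statuses, then folds the latest operation results in via `Merge`. A compact sketch with simplified types and a stand-in `Merge`:

```go
package main

import "fmt"

type resourceState struct {
	Group, Kind, Namespace, Name string
	Status, Hook, Message        string
}

func (rs *resourceState) Key() string {
	return fmt.Sprintf("%s/%s/%s/%s", rs.Group, rs.Kind, rs.Namespace, rs.Name)
}

// Merge is a simplified stand-in for the method referenced above: copy any
// non-empty fields from newState and report whether anything changed.
func (rs *resourceState) Merge(newState *resourceState) bool {
	updated := false
	if newState.Status != "" && rs.Status != newState.Status {
		rs.Status = newState.Status
		updated = true
	}
	if newState.Hook != "" && rs.Hook != newState.Hook {
		rs.Hook = newState.Hook
		updated = true
	}
	if newState.Message != "" && rs.Message != newState.Message {
		rs.Message = newState.Message
		updated = true
	}
	return updated
}

// fold merges operation results into the map of live resource states,
// mirroring the loop over opResult.Resources above.
func fold(states map[string]*resourceState, results []*resourceState) map[string]*resourceState {
	for _, newState := range results {
		key := newState.Key()
		if prev, ok := states[key]; ok {
			prev.Merge(newState)
		} else {
			states[key] = newState
		}
	}
	return states
}

func main() {
	states := map[string]*resourceState{}
	live := &resourceState{Kind: "Deployment", Namespace: "default", Name: "guestbook", Status: "OutOfSync"}
	states[live.Key()] = live
	fold(states, []*resourceState{{Kind: "Deployment", Namespace: "default", Name: "guestbook", Status: "Synced", Message: "applied"}})
	fmt.Println(states[live.Key()].Status, states[live.Key()].Message) // Synced applied
}
```
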
@@ -1341,7 +1348,7 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
if refresh {
conn, appClient := acdClient.NewApplicationClientOrDie()
refreshType := string(argoappv1.RefreshTypeNormal)
app, err = appClient.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName, Refresh: &refreshType})
app, err = appClient.Get(context.Background(), &application.ApplicationQuery{Name: &appName, Refresh: &refreshType})
errors.CheckError(err)
_ = conn.Close()
}
@@ -1356,7 +1363,7 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
if len(app.Status.Resources) > 0 {
fmt.Println()
w := tabwriter.NewWriter(os.Stdout, 5, 0, 2, ' ', 0)
printAppResources(w, app)
printAppResources(w, app, watchOperation)
_ = w.Flush()
}
}
@@ -1368,13 +1375,13 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
}
w := tabwriter.NewWriter(os.Stdout, 5, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, waitFormatString, "TIMESTAMP", "GROUP", "KIND", "NAMESPACE", "NAME", "STATUS", "HEALTH", "HOOK", "MESSAGE")
fmt.Fprintf(w, waitFormatString, "TIMESTAMP", "GROUP", "KIND", "NAMESPACE", "NAME", "STATUS", "HEALTH", "HOOK", "MESSAGE")
prevStates := make(map[string]*resourceState)
appEventCh := acdClient.WatchApplicationWithRetry(ctx, appName)
conn, appClient := acdClient.NewApplicationClientOrDie()
defer util.Close(conn)
app, err := appClient.Get(ctx, &applicationpkg.ApplicationQuery{Name: &appName})
app, err := appClient.Get(ctx, &application.ApplicationQuery{Name: &appName})
errors.CheckError(err)
for appEvent := range appEventCh {
@@ -1400,12 +1407,12 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
selectedResourcesAreReady = checkResourceStatus(watchSync, watchHealth, watchOperation, watchSuspended, app.Status.Health.Status, string(app.Status.Sync.Status), appEvent.Application.Operation)
}
if selectedResourcesAreReady {
if len(app.Status.GetErrorConditions()) == 0 && selectedResourcesAreReady {
printFinalStatus(app)
return app, nil
}
newStates := groupResourceStates(app, selectedResources)
newStates := calculateResourceStates(app, selectedResources)
for _, newState := range newStates {
var doPrint bool
stateKey := newState.Key()
@@ -1486,7 +1493,6 @@ func setParameterOverrides(app *argoappv1.Application, parameters []string) {
if app.Spec.Source.Helm == nil {
app.Spec.Source.Helm = &argoappv1.ApplicationSourceHelm{}
}
re := regexp.MustCompile(`([^\\]),`)
for _, paramStr := range parameters {
parts := strings.SplitN(paramStr, "=", 2)
if len(parts) != 2 {
@@ -1494,7 +1500,7 @@ func setParameterOverrides(app *argoappv1.Application, parameters []string) {
}
newParam := argoappv1.HelmParameter{
Name: parts[0],
Value: re.ReplaceAllString(parts[1], `$1\,`),
Value: parts[1],
}
found := false
for i, cp := range app.Spec.Source.Helm.Parameters {
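
One side of the hunk above escapes unescaped commas in Helm parameter values before storing them. A quick runnable check of that exact regexp and replacement:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Matches a comma preceded by any character that is not a backslash;
	// $1 re-emits that character so only the comma gains an escape.
	re := regexp.MustCompile(`([^\\]),`)
	fmt.Println(re.ReplaceAllString("a,b,c", `$1\,`)) // a\,b\,c
	fmt.Println(re.ReplaceAllString(`a\,b`, `$1\,`))  // a\,b (already escaped, unchanged)
}
```
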
@@ -1529,7 +1535,7 @@ func NewApplicationHistoryCommand(clientOpts *argocdclient.ClientOptions) *cobra
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
appName := args[0]
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName})
app, err := appIf.Get(context.Background(), &application.ApplicationQuery{Name: &appName})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ID\tDATE\tREVISION\n")
@@ -1568,7 +1574,7 @@ func NewApplicationRollbackCommand(clientOpts *argocdclient.ClientOptions) *cobr
conn, appIf := acdClient.NewApplicationClientOrDie()
defer util.Close(conn)
ctx := context.Background()
app, err := appIf.Get(ctx, &applicationpkg.ApplicationQuery{Name: &appName})
app, err := appIf.Get(ctx, &application.ApplicationQuery{Name: &appName})
errors.CheckError(err)
var depInfo *argoappv1.RevisionHistory
for _, di := range app.Status.History {
@@ -1581,7 +1587,7 @@ func NewApplicationRollbackCommand(clientOpts *argocdclient.ClientOptions) *cobr
log.Fatalf("Application '%s' does not have deployment id '%d' in history\n", app.ObjectMeta.Name, depID)
}
_, err = appIf.Rollback(ctx, &applicationpkg.ApplicationRollbackRequest{
_, err = appIf.Rollback(ctx, &application.ApplicationRollbackRequest{
Name: &appName,
ID: int64(depID),
Prune: prune,
@@ -1641,14 +1647,14 @@ func NewApplicationManifestsCommand(clientOpts *argocdclient.ClientOptions) *cob
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(context.Background(), &applicationpkg.ResourcesQuery{ApplicationName: &appName})
resources, err := appIf.ManagedResources(context.Background(), &application.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
var unstructureds []*unstructured.Unstructured
switch source {
case "git":
if revision != "" {
q := applicationpkg.ApplicationManifestQuery{
q := application.ApplicationManifestQuery{
Name: &appName,
Revision: revision,
}
@@ -1699,7 +1705,7 @@ func NewApplicationTerminateOpCommand(clientOpts *argocdclient.ClientOptions) *c
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
ctx := context.Background()
_, err := appIf.TerminateOperation(ctx, &applicationpkg.OperationTerminateRequest{Name: &appName})
_, err := appIf.TerminateOperation(ctx, &application.OperationTerminateRequest{Name: &appName})
errors.CheckError(err)
fmt.Printf("Application '%s' operation terminating\n", appName)
},
@@ -1719,7 +1725,7 @@ func NewApplicationEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
appName := args[0]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName})
app, err := appIf.Get(context.Background(), &application.ApplicationQuery{Name: &appName})
errors.CheckError(err)
appData, err := json.Marshal(app.Spec)
errors.CheckError(err)
@@ -1736,7 +1742,7 @@ func NewApplicationEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
if err != nil {
return err
}
_, err = appIf.UpdateSpec(context.Background(), &applicationpkg.ApplicationUpdateSpecRequest{Name: &app.Name, Spec: updatedSpec})
_, err = appIf.UpdateSpec(context.Background(), &application.ApplicationUpdateSpecRequest{Name: &app.Name, Spec: updatedSpec})
if err != nil {
return fmt.Errorf("Failed to update application spec:\n%v", err)
}
@@ -1762,7 +1768,7 @@ func NewApplicationPatchCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
patchedApp, err := appIf.Patch(context.Background(), &applicationpkg.ApplicationPatchRequest{
patchedApp, err := appIf.Patch(context.Background(), &application.ApplicationPatchRequest{
Name: &appName,
Patch: patch,
})
@@ -1785,9 +1791,6 @@ func filterResources(command *cobra.Command, resources []*argoappv1.ResourceDiff
filteredObjects := make([]*unstructured.Unstructured, 0)
for i := range liveObjs {
obj := liveObjs[i]
if obj == nil {
continue
}
gvk := obj.GroupVersionKind()
if command.Flags().Changed("group") && group != gvk.Group {
continue
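
`filterResources` above gates each predicate on whether its flag was explicitly set (`command.Flags().Changed(...)`), and one side of the hunk also skips nil live objects. A sketch of that idea with plain booleans standing in for cobra flags and a simplified object type:

```go
package main

import "fmt"

type gvk struct{ Group, Kind string }

type object struct {
	GVK  gvk
	Name string
}

// filter keeps an object only if every *explicitly set* predicate matches,
// mirroring the Flags().Changed(...) checks in filterResources above.
func filter(objs []*object, group, kind string, groupSet, kindSet bool) []*object {
	out := make([]*object, 0, len(objs))
	for _, obj := range objs {
		if obj == nil { // nil entries (e.g. resources with no live object) are skipped
			continue
		}
		if groupSet && group != obj.GVK.Group {
			continue
		}
		if kindSet && kind != obj.GVK.Kind {
			continue
		}
		out = append(out, obj)
	}
	return out
}

func main() {
	objs := []*object{
		{GVK: gvk{"apps", "Deployment"}, Name: "guestbook"},
		nil,
		{GVK: gvk{"", "Service"}, Name: "guestbook-ui"},
	}
	for _, o := range filter(objs, "", "Service", false, true) {
		fmt.Println(o.Name) // guestbook-ui
	}
}
```
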
@@ -1853,13 +1856,13 @@ func NewApplicationPatchResourceCommand(clientOpts *argocdclient.ClientOptions)
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName})
resources, err := appIf.ManagedResources(ctx, &application.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
objectsToPatch := filterResources(command, resources.Items, group, kind, namespace, resourceName, all)
for i := range objectsToPatch {
obj := objectsToPatch[i]
gvk := obj.GroupVersionKind()
_, err = appIf.PatchResource(ctx, &applicationpkg.ApplicationResourcePatchRequest{
_, err = appIf.PatchResource(ctx, &application.ApplicationResourcePatchRequest{
Name: &appName,
Namespace: obj.GetNamespace(),
ResourceName: obj.GetName(),


@@ -11,8 +11,8 @@ import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
applicationpkg "github.com/argoproj/argo-cd/pkg/apiclient/application"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/application"
"github.com/argoproj/argo-cd/util"
)
@@ -51,14 +51,14 @@ func NewApplicationResourceActionsListCommand(clientOpts *argocdclient.ClientOpt
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName})
resources, err := appIf.ManagedResources(ctx, &application.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
filteredObjects := filterResources(command, resources.Items, group, kind, namespace, resourceName, all)
availableActions := make(map[string][]argoappv1.ResourceAction)
for i := range filteredObjects {
obj := filteredObjects[i]
gvk := obj.GroupVersionKind()
availActionsForResource, err := appIf.ListResourceActions(ctx, &applicationpkg.ApplicationResourceRequest{
availActionsForResource, err := appIf.ListResourceActions(ctx, &application.ApplicationResourceRequest{
Name: &appName,
Namespace: obj.GetNamespace(),
ResourceName: obj.GetName(),
@@ -128,14 +128,14 @@ func NewApplicationResourceActionsRunCommand(clientOpts *argocdclient.ClientOpti
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName})
resources, err := appIf.ManagedResources(ctx, &application.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
filteredObjects := filterResources(command, resources.Items, group, kind, namespace, resourceName, all)
for i := range filteredObjects {
obj := filteredObjects[i]
gvk := obj.GroupVersionKind()
objResourceName := obj.GetName()
_, err := appIf.RunResourceAction(context.Background(), &applicationpkg.ResourceActionRunRequest{
_, err := appIf.RunResourceAction(context.Background(), &application.ResourceActionRunRequest{
Name: &appName,
Namespace: obj.GetNamespace(),
ResourceName: objResourceName,


@@ -19,10 +19,9 @@ import (
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
clusterpkg "github.com/argoproj/argo-cd/pkg/apiclient/cluster"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/cluster"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/clusterauth"
)
// NewClusterCommand returns a new instance of an `argocd cluster` command
@@ -40,18 +39,16 @@ func NewClusterCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clientc
command.AddCommand(NewClusterGetCommand(clientOpts))
command.AddCommand(NewClusterListCommand(clientOpts))
command.AddCommand(NewClusterRemoveCommand(clientOpts))
command.AddCommand(NewClusterRotateAuthCommand(clientOpts))
return command
}
// NewClusterAddCommand returns a new instance of an `argocd cluster add` command
func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clientcmd.PathOptions) *cobra.Command {
var (
inCluster bool
upsert bool
awsRoleArn string
awsClusterName string
systemNamespace string
inCluster bool
upsert bool
awsRoleArn string
awsClusterName string
)
var command = &cobra.Command{
Use: "add",
@@ -88,7 +85,7 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
// Install RBAC resources for managing the cluster
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
managerBearerToken, err = clusterauth.InstallClusterManagerRBAC(clientset, systemNamespace)
managerBearerToken, err = common.InstallClusterManagerRBAC(clientset)
errors.CheckError(err)
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
@@ -97,7 +94,7 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
if inCluster {
clst.Server = common.KubernetesInternalAPIServerAddr
}
clstCreateReq := clusterpkg.ClusterCreateRequest{
clstCreateReq := cluster.ClusterCreateRequest{
Cluster: clst,
Upsert: upsert,
}
@@ -111,7 +108,6 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing cluster with the same name even if the spec differs")
command.Flags().StringVar(&awsClusterName, "aws-cluster-name", "", "AWS Cluster name if set then aws-iam-authenticator will be used to access cluster")
command.Flags().StringVar(&awsRoleArn, "aws-role-arn", "", "Optional AWS role arn. If set then AWS IAM Authenticator assume a role to perform cluster operations instead of the default AWS credential provider chain.")
command.Flags().StringVar(&systemNamespace, "system-namespace", common.DefaultSystemNamespace, "Use different system namespace")
return command
}
@@ -180,7 +176,7 @@ func NewCluster(name string, conf *rest.Config, managerBearerToken string, awsAu
// NewClusterGetCommand returns a new instance of an `argocd cluster get` command
func NewClusterGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "get CLUSTER",
Use: "get",
Short: "Get cluster information",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
@@ -190,7 +186,7 @@ func NewClusterGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
for _, clusterName := range args {
clst, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Server: clusterName})
clst, err := clusterIf.Get(context.Background(), &cluster.ClusterQuery{Server: clusterName})
errors.CheckError(err)
yamlBytes, err := yaml.Marshal(clst)
errors.CheckError(err)
@@ -204,7 +200,7 @@ func NewClusterGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
// NewClusterRemoveCommand returns a new instance of an `argocd cluster list` command
func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "rm CLUSTER",
Use: "rm",
Short: "Remove cluster credentials",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
@@ -219,9 +215,9 @@ func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
for _, clusterName := range args {
// TODO(jessesuen): find the right context and remove manager RBAC artifacts
// err := clusterauth.UninstallClusterManagerRBAC(clientset)
// err := common.UninstallClusterManagerRBAC(clientset)
// errors.CheckError(err)
_, err := clusterIf.Delete(context.Background(), &clusterpkg.ClusterQuery{Server: clusterName})
_, err := clusterIf.Delete(context.Background(), &cluster.ClusterQuery{Server: clusterName})
errors.CheckError(err)
}
},
@@ -237,7 +233,7 @@ func NewClusterListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
Run: func(c *cobra.Command, args []string) {
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
clusters, err := clusterIf.List(context.Background(), &clusterpkg.ClusterQuery{})
clusters, err := clusterIf.List(context.Background(), &cluster.ClusterQuery{})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "SERVER\tNAME\tSTATUS\tMESSAGE\n")
@@ -249,26 +245,3 @@ func NewClusterListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
}
return command
}
// NewClusterRotateAuthCommand returns a new instance of an `argocd cluster rotate-auth` command
func NewClusterRotateAuthCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "rotate-auth CLUSTER",
Short: fmt.Sprintf("%s cluster rotate-auth CLUSTER", cliName),
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
clusterQuery := clusterpkg.ClusterQuery{
Server: args[0],
}
_, err := clusterIf.RotateAuth(context.Background(), &clusterQuery)
errors.CheckError(err)
fmt.Printf("Cluster '%s' rotated auth\n", clusterQuery.Server)
},
}
return command
}


@@ -8,8 +8,6 @@ import (
"strings"
"text/tabwriter"
"github.com/spf13/pflag"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
@@ -20,36 +18,16 @@ import (
// NewContextCommand returns a new instance of an `argocd ctx` command
func NewContextCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var delete bool
var command = &cobra.Command{
Use: "context",
Aliases: []string{"ctx"},
Short: "Switch between contexts",
Run: func(c *cobra.Command, args []string) {
localCfg, err := localconfig.ReadLocalConfig(clientOpts.ConfigPath)
errors.CheckError(err)
deletePresentContext := false
c.Flags().Visit(func(f *pflag.Flag) {
if f.Name == "delete" {
deletePresentContext = true
}
})
if len(args) == 0 {
if deletePresentContext {
err := deleteContext(localCfg.CurrentContext, clientOpts.ConfigPath)
errors.CheckError(err)
return
} else {
printArgoCDContexts(clientOpts.ConfigPath)
return
}
printArgoCDContexts(clientOpts.ConfigPath)
return
}
ctxName := args[0]
argoCDDir, err := localconfig.DefaultConfigDir()
errors.CheckError(err)
prevCtxFile := path.Join(argoCDDir, ".prev-ctx")
@@ -59,6 +37,8 @@ func NewContextCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
errors.CheckError(err)
ctxName = string(prevCtxBytes)
}
localCfg, err := localconfig.ReadLocalConfig(clientOpts.ConfigPath)
errors.CheckError(err)
if localCfg.CurrentContext == ctxName {
fmt.Printf("Already at context '%s'\n", localCfg.CurrentContext)
return
@@ -68,7 +48,6 @@ func NewContextCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
prevCtx := localCfg.CurrentContext
localCfg.CurrentContext = ctxName
err = localconfig.WriteLocalConfig(*localCfg, clientOpts.ConfigPath)
errors.CheckError(err)
err = ioutil.WriteFile(prevCtxFile, []byte(prevCtx), 0644)
@@ -76,43 +55,9 @@ func NewContextCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
fmt.Printf("Switched to context '%s'\n", localCfg.CurrentContext)
},
}
command.Flags().BoolVar(&delete, "delete", false, "Delete the context instead of switching to it")
return command
}
func deleteContext(context, configPath string) error {
localCfg, err := localconfig.ReadLocalConfig(configPath)
errors.CheckError(err)
if localCfg == nil {
return fmt.Errorf("Nothing to logout from")
}
serverName, ok := localCfg.RemoveContext(context)
if !ok {
return fmt.Errorf("Context %s does not exist", context)
}
_ = localCfg.RemoveUser(context)
_ = localCfg.RemoveServer(serverName)
if localCfg.IsEmpty() {
err = localconfig.DeleteLocalConfig(configPath)
errors.CheckError(err)
} else {
if localCfg.CurrentContext == context {
localCfg.CurrentContext = localCfg.Contexts[0].Name
}
err = localconfig.ValidateLocalConfig(*localCfg)
if err != nil {
return fmt.Errorf("Error in logging out")
}
err = localconfig.WriteLocalConfig(*localCfg, configPath)
errors.CheckError(err)
}
fmt.Printf("Context '%s' deleted\n", context)
return nil
}
func printArgoCDContexts(configPath string) {
localCfg, err := localconfig.ReadLocalConfig(configPath)
errors.CheckError(err)


@@ -1,60 +0,0 @@
package commands
import (
"io/ioutil"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/argoproj/argo-cd/util/localconfig"
)
const testConfig = `contexts:
- name: argocd.example.com:443
server: argocd.example.com:443
user: argocd.example.com:443
- name: localhost:8080
server: localhost:8080
user: localhost:8080
current-context: localhost:8080
servers:
- server: argocd.example.com:443
- plain-text: true
server: localhost:8080
users:
- auth-token: vErrYS3c3tReFRe$hToken
name: argocd.example.com:443
refresh-token: vErrYS3c3tReFRe$hToken
- auth-token: vErrYS3c3tReFRe$hToken
name: localhost:8080`
const testConfigFilePath = "./testdata/config"
func TestContextDelete(t *testing.T) {
// Write the test config file
err := ioutil.WriteFile(testConfigFilePath, []byte(testConfig), os.ModePerm)
assert.NoError(t, err)
localConfig, err := localconfig.ReadLocalConfig(testConfigFilePath)
assert.NoError(t, err)
assert.Equal(t, localConfig.CurrentContext, "localhost:8080")
assert.Contains(t, localConfig.Contexts, localconfig.ContextRef{Name: "localhost:8080", Server: "localhost:8080", User: "localhost:8080"})
err = deleteContext("localhost:8080", testConfigFilePath)
assert.NoError(t, err)
localConfig, err = localconfig.ReadLocalConfig(testConfigFilePath)
assert.NoError(t, err)
assert.Equal(t, localConfig.CurrentContext, "argocd.example.com:443")
assert.NotContains(t, localConfig.Contexts, localconfig.ContextRef{Name: "localhost:8080", Server: "localhost:8080", User: "localhost:8080"})
assert.NotContains(t, localConfig.Servers, localconfig.Server{PlainText: true, Server: "localhost:8080"})
assert.NotContains(t, localConfig.Users, localconfig.User{AuthToken: "vErrYS3c3tReFRe$hToken", Name: "localhost:8080"})
assert.Contains(t, localConfig.Contexts, localconfig.ContextRef{Name: "argocd.example.com:443", Server: "argocd.example.com:443", User: "argocd.example.com:443"})
// Write the file again so that no conflicts are made in git
err = ioutil.WriteFile(testConfigFilePath, []byte(testConfig), os.ModePerm)
assert.NoError(t, err)
}


@@ -8,8 +8,8 @@ import (
"strconv"
"time"
"github.com/coreos/go-oidc"
"github.com/dgrijalva/jwt-go"
oidc "github.com/coreos/go-oidc"
jwt "github.com/dgrijalva/jwt-go"
log "github.com/sirupsen/logrus"
"github.com/skratchdot/open-golang/open"
"github.com/spf13/cobra"
@@ -17,8 +17,8 @@ import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
sessionpkg "github.com/argoproj/argo-cd/pkg/apiclient/session"
settingspkg "github.com/argoproj/argo-cd/pkg/apiclient/settings"
"github.com/argoproj/argo-cd/server/session"
"github.com/argoproj/argo-cd/server/settings"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
grpc_util "github.com/argoproj/argo-cd/util/grpc"
@@ -88,7 +88,7 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
httpClient, err := acdClient.HTTPClient()
errors.CheckError(err)
ctx = oidc.ClientContext(ctx, httpClient)
acdSet, err := setIf.Get(ctx, &settingspkg.SettingsQuery{})
acdSet, err := setIf.Get(ctx, &settings.SettingsQuery{})
errors.CheckError(err)
oauth2conf, provider, err := acdClient.OIDCConfig(ctx, acdSet)
errors.CheckError(err)
@@ -278,7 +278,7 @@ func passwordLogin(acdClient argocdclient.Client, username, password string) str
username, password = cli.PromptCredentials(username, password)
sessConn, sessionIf := acdClient.NewSessionClientOrDie()
defer util.Close(sessConn)
sessionRequest := sessionpkg.SessionCreateRequest{
sessionRequest := session.SessionCreateRequest{
Username: username,
Password: password,
}


@@ -1,50 +0,0 @@
package commands
import (
"fmt"
"os"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util/localconfig"
)
// NewLogoutCommand returns a new instance of `argocd logout` command
func NewLogoutCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "logout CONTEXT",
Short: "Log out from Argo CD",
Long: "Log out from Argo CD",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
context := args[0]
localCfg, err := localconfig.ReadLocalConfig(globalClientOpts.ConfigPath)
errors.CheckError(err)
if localCfg == nil {
log.Fatalf("Nothing to logout from")
}
ok := localCfg.RemoveToken(context)
if !ok {
log.Fatalf("Context %s does not exist", context)
}
err = localconfig.ValidateLocalConfig(*localCfg)
if err != nil {
log.Fatalf("Error in logging out: %s", err)
}
err = localconfig.WriteLocalConfig(*localCfg, globalClientOpts.ConfigPath)
errors.CheckError(err)
fmt.Printf("Logged out from '%s'\n", context)
},
}
return command
}


@@ -1,39 +0,0 @@
package commands
import (
"io/ioutil"
"os"
"testing"
"github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/stretchr/testify/assert"
"github.com/argoproj/argo-cd/util/localconfig"
)
func TestLogout(t *testing.T) {
// Write the test config file
err := ioutil.WriteFile(testConfigFilePath, []byte(testConfig), os.ModePerm)
assert.NoError(t, err)
localConfig, err := localconfig.ReadLocalConfig(testConfigFilePath)
assert.NoError(t, err)
assert.Equal(t, localConfig.CurrentContext, "localhost:8080")
assert.Contains(t, localConfig.Contexts, localconfig.ContextRef{Name: "localhost:8080", Server: "localhost:8080", User: "localhost:8080"})
command := NewLogoutCommand(&apiclient.ClientOptions{ConfigPath: testConfigFilePath})
command.Run(nil, []string{"localhost:8080"})
localConfig, err = localconfig.ReadLocalConfig(testConfigFilePath)
assert.NoError(t, err)
assert.Equal(t, localConfig.CurrentContext, "localhost:8080")
assert.NotContains(t, localConfig.Users, localconfig.User{AuthToken: "vErrYS3c3tReFRe$hToken", Name: "localhost:8080"})
assert.Contains(t, localConfig.Contexts, localconfig.ContextRef{Name: "argocd.example.com:443", Server: "argocd.example.com:443", User: "argocd.example.com:443"})
// Write the file again so that no conflicts are made in git
err = ioutil.WriteFile(testConfigFilePath, []byte(testConfig), os.ModePerm)
assert.NoError(t, err)
}


@@ -19,8 +19,8 @@ import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
projectpkg "github.com/argoproj/argo-cd/pkg/apiclient/project"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/project"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/git"
@@ -125,7 +125,7 @@ func NewProjectCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
_, err := projIf.Create(context.Background(), &projectpkg.ProjectCreateRequest{Project: &proj})
_, err := projIf.Create(context.Background(), &project.ProjectCreateRequest{Project: &proj})
errors.CheckError(err)
},
}
@@ -150,7 +150,7 @@ func NewProjectSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
visited := 0
@@ -171,7 +171,7 @@ func NewProjectSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
os.Exit(1)
}
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
@@ -195,7 +195,7 @@ func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *co
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
for _, dest := range proj.Spec.Destinations {
@@ -204,7 +204,7 @@ func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *co
}
}
proj.Spec.Destinations = append(proj.Spec.Destinations, v1alpha1.ApplicationDestination{Server: server, Namespace: namespace})
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
@@ -227,7 +227,7 @@ func NewProjectRemoveDestinationCommand(clientOpts *argocdclient.ClientOptions)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index := -1
@@ -241,7 +241,7 @@ func NewProjectRemoveDestinationCommand(clientOpts *argocdclient.ClientOptions)
log.Fatal("Specified destination does not exist in project")
} else {
proj.Spec.Destinations = append(proj.Spec.Destinations[:index], proj.Spec.Destinations[index+1:]...)
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
@@ -265,7 +265,7 @@ func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
for _, item := range proj.Spec.SourceRepos {
@@ -279,7 +279,7 @@ func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
}
}
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos, url)
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
@@ -299,11 +299,11 @@ func modifyProjectResourceCmd(cmdUse, cmdDesc string, clientOpts *argocdclient.C
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
if action(proj, group, kind) {
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
@@ -399,7 +399,7 @@ func NewProjectRemoveSourceCommand(clientOpts *argocdclient.ClientOptions) *cobr
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index := -1
@@ -413,7 +413,7 @@ func NewProjectRemoveSourceCommand(clientOpts *argocdclient.ClientOptions) *cobr
fmt.Printf("Source repository '%s' does not exist in project\n", url)
} else {
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos[:index], proj.Spec.SourceRepos[index+1:]...)
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
@@ -435,7 +435,7 @@ func NewProjectDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
for _, name := range args {
_, err := projIf.Delete(context.Background(), &projectpkg.ProjectQuery{Name: name})
_, err := projIf.Delete(context.Background(), &project.ProjectQuery{Name: name})
errors.CheckError(err)
}
},
@@ -451,7 +451,7 @@ func NewProjectListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
Run: func(c *cobra.Command, args []string) {
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
projects, err := projIf.List(context.Background(), &projectpkg.ProjectQuery{})
projects, err := projIf.List(context.Background(), &project.ProjectQuery{})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\n")
@@ -513,7 +513,7 @@ func NewProjectGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
p, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
p, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
fmt.Printf(printProjFmtStr, "Name:", p.Name)
fmt.Printf(printProjFmtStr, "Description:", p.Spec.Description)
@@ -574,7 +574,7 @@ func NewProjectEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
projData, err := json.Marshal(proj.Spec)
errors.CheckError(err)
@@ -591,12 +591,12 @@ func NewProjectEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
if err != nil {
return err
}
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
if err != nil {
return err
}
proj.Spec = updatedSpec
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
if err != nil {
return fmt.Errorf("Failed to update project:\n%v", err)
}


@@ -12,8 +12,8 @@ import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
projectpkg "github.com/argoproj/argo-cd/pkg/apiclient/project"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/project"
"github.com/argoproj/argo-cd/util"
projectutil "github.com/argoproj/argo-cd/util/project"
)
@@ -61,7 +61,7 @@ func NewProjectRoleAddPolicyCommand(clientOpts *argocdclient.ClientOptions) *cob
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
role, roleIndex, err := projectutil.GetRoleByName(proj, roleName)
@@ -70,7 +70,7 @@ func NewProjectRoleAddPolicyCommand(clientOpts *argocdclient.ClientOptions) *cob
policy := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
proj.Spec.Roles[roleIndex].Policies = append(role.Policies, policy)
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
@@ -96,7 +96,7 @@ func NewProjectRoleRemovePolicyCommand(clientOpts *argocdclient.ClientOptions) *
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
role, roleIndex, err := projectutil.GetRoleByName(proj, roleName)
@@ -115,7 +115,7 @@ func NewProjectRoleRemovePolicyCommand(clientOpts *argocdclient.ClientOptions) *
}
role.Policies[duplicateIndex] = role.Policies[len(role.Policies)-1]
proj.Spec.Roles[roleIndex].Policies = role.Policies[:len(role.Policies)-1]
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
@@ -141,7 +141,7 @@ func NewProjectRoleCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
_, _, err = projectutil.GetRoleByName(proj, roleName)
@@ -151,7 +151,7 @@ func NewProjectRoleCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
}
proj.Spec.Roles = append(proj.Spec.Roles, v1alpha1.ProjectRole{Name: roleName, Description: description})
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
fmt.Printf("Role '%s' created\n", roleName)
},
@@ -175,7 +175,7 @@ func NewProjectRoleDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
_, index, err := projectutil.GetRoleByName(proj, roleName)
@@ -186,7 +186,7 @@ func NewProjectRoleDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.
proj.Spec.Roles[index] = proj.Spec.Roles[len(proj.Spec.Roles)-1]
proj.Spec.Roles = proj.Spec.Roles[:len(proj.Spec.Roles)-1]
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
fmt.Printf("Role '%s' deleted\n", roleName)
},
@@ -213,7 +213,7 @@ func NewProjectRoleCreateTokenCommand(clientOpts *argocdclient.ClientOptions) *c
defer util.Close(conn)
duration, err := timeutil.ParseDuration(expiresIn)
errors.CheckError(err)
token, err := projIf.CreateToken(context.Background(), &projectpkg.ProjectTokenCreateRequest{Project: projName, Role: roleName, ExpiresIn: int64(duration.Seconds())})
token, err := projIf.CreateToken(context.Background(), &project.ProjectTokenCreateRequest{Project: projName, Role: roleName, ExpiresIn: int64(duration.Seconds())})
errors.CheckError(err)
fmt.Println(token.Token)
},
@@ -241,7 +241,7 @@ func NewProjectRoleDeleteTokenCommand(clientOpts *argocdclient.ClientOptions) *c
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
_, err = projIf.DeleteToken(context.Background(), &projectpkg.ProjectTokenDeleteRequest{Project: projName, Role: roleName, Iat: issuedAt})
_, err = projIf.DeleteToken(context.Background(), &project.ProjectTokenDeleteRequest{Project: projName, Role: roleName, Iat: issuedAt})
errors.CheckError(err)
},
}
@@ -262,7 +262,7 @@ func NewProjectRoleListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
project, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
project, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ROLE-NAME\tDESCRIPTION\n")
@@ -290,7 +290,7 @@ func NewProjectRoleGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
role, _, err := projectutil.GetRoleByName(proj, roleName)
@@ -331,7 +331,7 @@ func NewProjectRoleAddGroupCommand(clientOpts *argocdclient.ClientOptions) *cobr
projName, roleName, groupName := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
updated, err := projectutil.AddGroupToRole(proj, roleName, groupName)
errors.CheckError(err)
@@ -339,7 +339,7 @@ func NewProjectRoleAddGroupCommand(clientOpts *argocdclient.ClientOptions) *cobr
fmt.Printf("Group '%s' already present in role '%s'\n", groupName, roleName)
return
}
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
fmt.Printf("Group '%s' added to role '%s'\n", groupName, roleName)
},
@@ -360,7 +360,7 @@ func NewProjectRoleRemoveGroupCommand(clientOpts *argocdclient.ClientOptions) *c
projName, roleName, groupName := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
updated, err := projectutil.RemoveGroupFromRole(proj, roleName, groupName)
errors.CheckError(err)
@@ -368,7 +368,7 @@ func NewProjectRoleRemoveGroupCommand(clientOpts *argocdclient.ClientOptions) *c
fmt.Printf("Group '%s' not present in role '%s'\n", groupName, roleName)
return
}
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
fmt.Printf("Group '%s' removed from role '%s'\n", groupName, roleName)
},

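For reference, the fmt.Sprintf call in the add-policy hunk above renders one casbin-style grant line. Assuming the standard project-role policyTemplate ("p, proj:%s:%s, applications, %s, %s/%s, %s", defined outside this hunk), an illustrative expansion looks like this:

// Illustrative only: the template literal and values are assumptions, not part of this diff.
policy := fmt.Sprintf("p, proj:%s:%s, applications, %s, %s/%s, %s",
	"my-proj", "ci-role", "sync", "my-proj", "guestbook", "allow")
// policy == "p, proj:my-proj:ci-role, applications, sync, my-proj/guestbook, allow"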

@@ -5,13 +5,13 @@ import (
"fmt"
"os"
"github.com/coreos/go-oidc"
oidc "github.com/coreos/go-oidc"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
settingspkg "github.com/argoproj/argo-cd/pkg/apiclient/settings"
"github.com/argoproj/argo-cd/server/settings"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/localconfig"
"github.com/argoproj/argo-cd/util/session"
@@ -63,7 +63,7 @@ func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comm
httpClient, err := acdClient.HTTPClient()
errors.CheckError(err)
ctx = oidc.ClientContext(ctx, httpClient)
acdSet, err := setIf.Get(ctx, &settingspkg.SettingsQuery{})
acdSet, err := setIf.Get(ctx, &settings.SettingsQuery{})
errors.CheckError(err)
oauth2conf, provider, err := acdClient.OIDCConfig(ctx, acdSet)
errors.CheckError(err)

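For context, oidc.ClientContext in the hunk above is the standard go-oidc pattern: the custom *http.Client is pinned into the context so that provider discovery and token exchange reuse it. A minimal sketch under that assumption (the wrapper function and issuer URL are illustrative):

// discoverProvider pins httpClient into the context; go-oidc reads it back out
// during issuer discovery. Sketch only; not part of this diff.
func discoverProvider(httpClient *http.Client, issuer string) (*oidc.Provider, error) {
	ctx := oidc.ClientContext(context.Background(), httpClient)
	return oidc.NewProvider(ctx, issuer)
}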

@@ -12,8 +12,8 @@ import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
repositorypkg "github.com/argoproj/argo-cd/pkg/apiclient/repository"
appsv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/repository"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/git"
@@ -78,7 +78,7 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
repoCreateReq := repositorypkg.RepoCreateRequest{
repoCreateReq := repository.RepoCreateRequest{
Repo: &repo,
Upsert: upsert,
}
@@ -108,7 +108,7 @@ func NewRepoRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
for _, repoURL := range args {
_, err := repoIf.Delete(context.Background(), &repositorypkg.RepoQuery{Repo: repoURL})
_, err := repoIf.Delete(context.Background(), &repository.RepoQuery{Repo: repoURL})
errors.CheckError(err)
}
},
@@ -124,7 +124,7 @@ func NewRepoListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
Run: func(c *cobra.Command, args []string) {
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
repos, err := repoIf.List(context.Background(), &repositorypkg.RepoQuery{})
repos, err := repoIf.List(context.Background(), &repository.RepoQuery{})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "REPO\tUSER\tSTATUS\tMESSAGE\n")


@@ -45,7 +45,6 @@ func NewCommand() *cobra.Command {
command.AddCommand(NewContextCommand(&clientOpts))
command.AddCommand(NewProjectCommand(&clientOpts))
command.AddCommand(NewAccountCommand(&clientOpts))
command.AddCommand(NewLogoutCommand(&clientOpts))
defaultLocalConfigPath, err := localconfig.DefaultLocalConfigPath()
errors.CheckError(err)


@@ -1,18 +0,0 @@
contexts:
- name: argocd.example.com:443
  server: argocd.example.com:443
  user: argocd.example.com:443
- name: localhost:8080
  server: localhost:8080
  user: localhost:8080
current-context: localhost:8080
servers:
- server: argocd.example.com:443
- plain-text: true
  server: localhost:8080
users:
- auth-token: vErrYS3c3tReFRe$hToken
  name: argocd.example.com:443
  refresh-token: vErrYS3c3tReFRe$hToken
- auth-token: vErrYS3c3tReFRe$hToken
  name: localhost:8080


@@ -7,7 +7,7 @@ import (
"github.com/golang/protobuf/ptypes/empty"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/common"
argocd "github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util"
@@ -22,7 +22,7 @@ func NewVersionCmd(clientOpts *argocdclient.ClientOptions) *cobra.Command {
Use: "version",
Short: fmt.Sprintf("Print version information"),
Run: func(cmd *cobra.Command, args []string) {
version := common.GetVersion()
version := argocd.GetVersion()
fmt.Printf("%s: %s\n", cliName, version)
if !short {
fmt.Printf(" BuildDate: %s\n", version.BuildDate)


@@ -17,18 +17,12 @@ const (
ArgoCDRBACConfigMapName = "argocd-rbac-cm"
)
// Default system namespace
const (
DefaultSystemNamespace = "kube-system"
)
// Default listener ports for ArgoCD components
const (
DefaultPortAPIServer = 8080
DefaultPortRepoServer = 8081
DefaultPortArgoCDMetrics = 8082
DefaultPortArgoCDAPIServerMetrics = 8083
DefaultPortRepoServerMetrics = 8084
PortAPIServer = 8080
PortRepoServer = 8081
PortArgoCDMetrics = 8082
PortArgoCDAPIServerMetrics = 8083
PortRepoServerMetrics = 8084
)
// Argo CD application related constants
@@ -81,12 +75,6 @@ const (
// LabelValueSecretTypeCluster indicates a secret type of cluster
LabelValueSecretTypeCluster = "cluster"
// AnnotationCompareOptions is a comma-separated list of options for comparison
AnnotationCompareOptions = "argocd.argoproj.io/compare-options"
// AnnotationSyncOptions is a comma-separated list of options for syncing
AnnotationSyncOptions = "argocd.argoproj.io/sync-options"
// AnnotationSyncWave indicates which wave of the sync the resource or hook should be in
AnnotationSyncWave = "argocd.argoproj.io/sync-wave"
// AnnotationKeyHook contains the hook type of a resource
AnnotationKeyHook = "argocd.argoproj.io/hook"
// AnnotationKeyHookDeletePolicy is the policy of deleting a hook

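As a side note, the annotation keys above are consumed as plain metadata lookups at runtime. A minimal illustrative helper (the function name is an assumption, not from this diff):

// hookTypeOf returns the argocd.argoproj.io/hook annotation of an object, or ""
// when the object carries none. unstructured is
// k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.
func hookTypeOf(obj *unstructured.Unstructured) string {
	return obj.GetAnnotations()["argocd.argoproj.io/hook"]
}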

@@ -1,13 +1,11 @@
package clusterauth
package common
import (
"fmt"
"strings"
"time"
jwt "github.com/dgrijalva/jwt-go"
log "github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
apiv1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -35,13 +33,13 @@ var ArgoCDManagerPolicyRules = []rbacv1.PolicyRule{
},
}
// CreateServiceAccount creates a service account in a given namespace
// CreateServiceAccount creates a service account
func CreateServiceAccount(
clientset kubernetes.Interface,
serviceAccountName string,
namespace string,
) error {
serviceAccount := corev1.ServiceAccount{
serviceAccount := apiv1.ServiceAccount{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
Kind: "ServiceAccount",
@@ -54,12 +52,12 @@ func CreateServiceAccount(
_, err := clientset.CoreV1().ServiceAccounts(namespace).Create(&serviceAccount)
if err != nil {
if !apierr.IsAlreadyExists(err) {
return fmt.Errorf("Failed to create service account %q in namespace %q: %v", serviceAccountName, namespace, err)
return fmt.Errorf("Failed to create service account %q: %v", serviceAccountName, err)
}
log.Infof("ServiceAccount %q already exists in namespace %q", serviceAccountName, namespace)
log.Infof("ServiceAccount %q already exists", serviceAccountName)
return nil
}
log.Infof("ServiceAccount %q created in namespace %q", serviceAccountName, namespace)
log.Infof("ServiceAccount %q created", serviceAccountName)
return nil
}
@@ -138,7 +136,8 @@ func CreateClusterRoleBinding(
}
// InstallClusterManagerRBAC installs RBAC resources for a cluster manager to operate a cluster. Returns a token
func InstallClusterManagerRBAC(clientset kubernetes.Interface, ns string) (string, error) {
func InstallClusterManagerRBAC(clientset kubernetes.Interface) (string, error) {
const ns = "kube-system"
err := CreateServiceAccount(clientset, ArgoCDManagerServiceAccount, ns)
if err != nil {
@@ -155,7 +154,7 @@ func InstallClusterManagerRBAC(clientset kubernetes.Interface, ns string) (strin
return "", err
}
var serviceAccount *corev1.ServiceAccount
var serviceAccount *apiv1.ServiceAccount
var secretName string
err = wait.Poll(500*time.Millisecond, 30*time.Second, func() (bool, error) {
serviceAccount, err = clientset.CoreV1().ServiceAccounts(ns).Get(ArgoCDManagerServiceAccount, metav1.GetOptions{})
@@ -217,106 +216,3 @@ func UninstallRBAC(clientset kubernetes.Interface, namespace, bindingName, roleN
}
return nil
}
type ServiceAccountClaims struct {
Sub string `json:"sub"`
Iss string `json:"iss"`
Namespace string `json:"kubernetes.io/serviceaccount/namespace"`
SecretName string `json:"kubernetes.io/serviceaccount/secret.name"`
ServiceAccountName string `json:"kubernetes.io/serviceaccount/service-account.name"`
ServiceAccountUID string `json:"kubernetes.io/serviceaccount/service-account.uid"`
}
// Valid satisfies the jwt.Claims interface to enable JWT parsing
func (sac *ServiceAccountClaims) Valid() error {
return nil
}
// ParseServiceAccountToken parses a Kubernetes service account token
func ParseServiceAccountToken(token string) (*ServiceAccountClaims, error) {
parser := &jwt.Parser{
SkipClaimsValidation: true,
}
var claims ServiceAccountClaims
_, _, err := parser.ParseUnverified(token, &claims)
if err != nil {
return nil, fmt.Errorf("Failed to parse service account token: %s", err)
}
return &claims, nil
}
// GenerateNewClusterManagerSecret creates a new secret derived from the metadata of the existing one
// and waits until the secret is populated with a bearer token
func GenerateNewClusterManagerSecret(clientset kubernetes.Interface, claims *ServiceAccountClaims) (*corev1.Secret, error) {
secretsClient := clientset.CoreV1().Secrets(claims.Namespace)
existingSecret, err := secretsClient.Get(claims.SecretName, metav1.GetOptions{})
if err != nil {
return nil, err
}
var newSecret corev1.Secret
secretNameSplit := strings.Split(claims.SecretName, "-")
if len(secretNameSplit) > 0 {
secretNameSplit = secretNameSplit[:len(secretNameSplit)-1]
}
newSecret.Type = corev1.SecretTypeServiceAccountToken
newSecret.GenerateName = strings.Join(secretNameSplit, "-") + "-"
newSecret.Annotations = existingSecret.Annotations
// We will create an empty secret and let kubernetes populate the data
newSecret.Data = nil
created, err := secretsClient.Create(&newSecret)
if err != nil {
return nil, err
}
err = wait.Poll(500*time.Millisecond, 30*time.Second, func() (bool, error) {
created, err = secretsClient.Get(created.Name, metav1.GetOptions{})
if err != nil {
return false, err
}
if len(created.Data) == 0 {
return false, nil
}
return true, nil
})
if err != nil {
return nil, fmt.Errorf("Timed out waiting for secret to generate new token")
}
return created, nil
}
// RotateServiceAccountSecrets rotates the entries in the service account's secrets list
func RotateServiceAccountSecrets(clientset kubernetes.Interface, claims *ServiceAccountClaims, newSecret *corev1.Secret) error {
// 1. update service account secrets list with new secret name while also removing the old name
saClient := clientset.CoreV1().ServiceAccounts(claims.Namespace)
sa, err := saClient.Get(claims.ServiceAccountName, metav1.GetOptions{})
if err != nil {
return err
}
var newSecretsList []corev1.ObjectReference
alreadyPresent := false
for _, objRef := range sa.Secrets {
if objRef.Name == claims.SecretName {
continue
}
if objRef.Name == newSecret.Name {
alreadyPresent = true
}
newSecretsList = append(newSecretsList, objRef)
}
if !alreadyPresent {
sa.Secrets = append(newSecretsList, corev1.ObjectReference{Name: newSecret.Name})
}
_, err = saClient.Update(sa)
if err != nil {
return err
}
// 2. delete existing secret object
secretsClient := clientset.CoreV1().Secrets(claims.Namespace)
err = secretsClient.Delete(claims.SecretName, &metav1.DeleteOptions{})
if !apierr.IsNotFound(err) {
return err
}
return nil
}

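Taken together, the helpers above compose into a token-rotation flow: parse the current bearer token's claims, mint a replacement secret, rotate it into the service account, then read the fresh token. A minimal sketch of that flow (the wrapper function is an assumption; the helpers it calls are the ones shown above):

// rotateClusterManagerToken is an illustrative wrapper, not part of this diff.
func rotateClusterManagerToken(clientset kubernetes.Interface, oldToken string) (string, error) {
	claims, err := ParseServiceAccountToken(oldToken)
	if err != nil {
		return "", err
	}
	newSecret, err := GenerateNewClusterManagerSecret(clientset, claims)
	if err != nil {
		return "", err
	}
	if err := RotateServiceAccountSecrets(clientset, claims, newSecret); err != nil {
		return "", err
	}
	// Kubernetes populates service-account token secrets under the "token" key.
	return string(newSecret.Data["token"]), nil
}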

@@ -4,13 +4,14 @@ import (
"context"
"encoding/json"
"fmt"
"math"
"reflect"
"runtime/debug"
"strings"
"sync"
"time"
"github.com/argoproj/argo-cd/pkg/apis/application"
log "github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
@@ -27,7 +28,6 @@ import (
statecache "github.com/argoproj/argo-cd/controller/cache"
"github.com/argoproj/argo-cd/controller/metrics"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apis/application"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
appinformers "github.com/argoproj/argo-cd/pkg/client/informers/externalversions"
@@ -47,21 +47,6 @@ const (
updateOperationStateTimeout = 1 * time.Second
)
type CompareWith int
const (
// Compare live application state against state defined in latest git revision.
CompareWithLatest CompareWith = 2
// Compare live application state against state defined using revision of most recent comparison.
CompareWithRecent CompareWith = 1
// Skip comparison and only refresh application resources tree
ComparisonWithNothing CompareWith = 0
)
func (a CompareWith) Max(b CompareWith) CompareWith {
return CompareWith(math.Max(float64(a), float64(b)))
}
// ApplicationController is the controller for application resources.
type ApplicationController struct {
cache *argocache.Cache
@@ -82,7 +67,7 @@ type ApplicationController struct {
db db.ArgoDB
settings *settings_util.ArgoCDSettings
settingsMgr *settings_util.SettingsManager
refreshRequestedApps map[string]CompareWith
refreshRequestedApps map[string]bool
refreshRequestedAppsMutex *sync.Mutex
metricsServer *metrics.MetricsServer
}
@@ -101,7 +86,6 @@ func NewApplicationController(
repoClientset reposerver.Clientset,
argoCache *argocache.Cache,
appResyncPeriod time.Duration,
metricsPort int,
) (*ApplicationController, error) {
db := db.NewDB(namespace, settingsMgr, kubeClientset)
settings, err := settingsMgr.GetSettings()
@@ -120,7 +104,7 @@ func NewApplicationController(
appOperationQueue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
db: db,
statusRefreshTimeout: appResyncPeriod,
refreshRequestedApps: make(map[string]CompareWith),
refreshRequestedApps: make(map[string]bool),
refreshRequestedAppsMutex: &sync.Mutex{},
auditLogger: argo.NewAuditLogger(namespace, kubeClientset, "argocd-application-controller"),
settingsMgr: settingsMgr,
@@ -128,11 +112,8 @@ func NewApplicationController(
}
appInformer, appLister := ctrl.newApplicationInformerAndLister()
projInformer := v1alpha1.NewAppProjectInformer(applicationClientset, namespace, appResyncPeriod, cache.Indexers{})
metricsAddr := fmt.Sprintf("0.0.0.0:%d", metricsPort)
ctrl.metricsServer = metrics.NewMetricsServer(metricsAddr, appLister, func() error {
_, err := kubeClientset.Discovery().ServerVersion()
return err
})
metricsAddr := fmt.Sprintf("0.0.0.0:%d", common.PortArgoCDMetrics)
ctrl.metricsServer = metrics.NewMetricsServer(metricsAddr, appLister)
stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settings, kubectlCmd, ctrl.metricsServer, ctrl.handleAppUpdated)
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectlCmd, ctrl.settings, stateCache, projInformer, ctrl.metricsServer)
ctrl.appInformer = appInformer
@@ -153,7 +134,7 @@ func isSelfReferencedApp(app *appv1.Application, ref v1.ObjectReference) bool {
gvk.Kind == application.ApplicationKind
}
func (ctrl *ApplicationController) handleAppUpdated(appName string, isManagedResource bool, ref v1.ObjectReference) {
func (ctrl *ApplicationController) handleAppUpdated(appName string, fullRefresh bool, ref v1.ObjectReference) {
skipForceRefresh := false
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(ctrl.namespace + "/" + appName)
@@ -163,11 +144,7 @@ func (ctrl *ApplicationController) handleAppUpdated(appName string, isManagedRes
}
if !skipForceRefresh {
level := ComparisonWithNothing
if isManagedResource {
level = CompareWithRecent
}
ctrl.requestAppRefresh(appName, level)
ctrl.requestAppRefresh(appName, fullRefresh)
}
ctrl.appRefreshQueue.Add(fmt.Sprintf("%s/%s", ctrl.namespace, appName))
}
@@ -215,7 +192,7 @@ func (ctrl *ApplicationController) getResourceTree(a *appv1.Application, managed
},
})
} else {
err := ctrl.stateCache.IterateHierarchy(a.Spec.Destination.Server, live, func(child appv1.ResourceNode) {
err := ctrl.stateCache.IterateHierarchy(a.Spec.Destination.Server, kube.GetResourceKey(live), func(child appv1.ResourceNode) {
nodes = append(nodes, child)
})
if err != nil {
@@ -314,20 +291,20 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
<-ctx.Done()
}
func (ctrl *ApplicationController) requestAppRefresh(appName string, compareWith CompareWith) {
func (ctrl *ApplicationController) requestAppRefresh(appName string, fullRefresh bool) {
ctrl.refreshRequestedAppsMutex.Lock()
defer ctrl.refreshRequestedAppsMutex.Unlock()
ctrl.refreshRequestedApps[appName] = compareWith.Max(ctrl.refreshRequestedApps[appName])
ctrl.refreshRequestedApps[appName] = fullRefresh || ctrl.refreshRequestedApps[appName]
}
func (ctrl *ApplicationController) isRefreshRequested(appName string) (bool, CompareWith) {
func (ctrl *ApplicationController) isRefreshRequested(appName string) (bool, bool) {
ctrl.refreshRequestedAppsMutex.Lock()
defer ctrl.refreshRequestedAppsMutex.Unlock()
level, ok := ctrl.refreshRequestedApps[appName]
fullRefresh, ok := ctrl.refreshRequestedApps[appName]
if ok {
delete(ctrl.refreshRequestedApps, appName)
}
return ok, level
return ok, fullRefresh
}
func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext bool) {
@@ -517,7 +494,6 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
ctrl.setOperationState(app, state)
logCtx.Infof("Initialized new operation: %v", *app.Operation)
}
ctrl.appStateManager.SyncAppState(app, state)
if state.Phase == appv1.OperationRunning {
@@ -539,7 +515,7 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
if state.Phase.Completed() {
// if we just completed an operation, force a refresh so that UI will report up-to-date
// sync/health information
ctrl.requestAppRefresh(app.ObjectMeta.Name, CompareWithLatest)
ctrl.requestAppRefresh(app.ObjectMeta.Name, true)
}
}
@@ -574,10 +550,6 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
appClient := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace)
_, err = appClient.Patch(app.Name, types.MergePatchType, patchJSON)
if err != nil {
// Stop retrying updates for a deleted application
if apierr.IsNotFound(err) {
return nil
}
return err
}
log.Infof("updated '%s' operation (phase: %s)", app.Name, state.Phase)
@@ -634,7 +606,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
needRefresh, refreshType, comparisonLevel := ctrl.needRefreshAppStatus(origApp, ctrl.statusRefreshTimeout)
needRefresh, refreshType, fullRefresh := ctrl.needRefreshAppStatus(origApp, ctrl.statusRefreshTimeout)
if !needRefresh {
return
@@ -644,13 +616,13 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
defer func() {
reconcileDuration := time.Since(startTime)
ctrl.metricsServer.IncReconcile(origApp, reconcileDuration)
logCtx := log.WithFields(log.Fields{"application": origApp.Name, "time_ms": reconcileDuration.Seconds() * 1e3, "level": comparisonLevel})
logCtx := log.WithFields(log.Fields{"application": origApp.Name, "time_ms": reconcileDuration.Seconds() * 1e3, "full": fullRefresh})
logCtx.Info("Reconciliation completed")
}()
app := origApp.DeepCopy()
logCtx := log.WithFields(log.Fields{"application": app.Name})
if comparisonLevel == ComparisonWithNothing {
if !fullRefresh {
if managedResources, err := ctrl.cache.GetAppManagedResources(app.Name); err != nil {
logCtx.Warnf("Failed to get cached managed resources for tree reconciliation, fallback to full reconciliation")
} else {
@@ -678,16 +650,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
return
}
var localManifests []string
if opState := app.Status.OperationState; opState != nil {
localManifests = opState.Operation.Sync.Manifests
}
revision := app.Spec.Source.TargetRevision
if comparisonLevel == CompareWithRecent {
revision = app.Status.Sync.Revision
}
compareResult, err := ctrl.appStateManager.CompareAppState(app, revision, app.Spec.Source, refreshType == appv1.RefreshTypeHard, localManifests)
compareResult, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, refreshType == appv1.RefreshTypeHard)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{Type: appv1.ApplicationConditionComparisonError, Message: err.Error()})
} else {
@@ -721,17 +684,17 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
// Returns true if the application has never been compared, has changed, or the comparison result has expired.
// Additionally returns whether a full refresh was requested.
// If a full refresh is requested, target and live state should be reconciled; otherwise only the live state tree should be updated.
func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application, statusRefreshTimeout time.Duration) (bool, appv1.RefreshType, CompareWith) {
func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application, statusRefreshTimeout time.Duration) (bool, appv1.RefreshType, bool) {
logCtx := log.WithFields(log.Fields{"application": app.Name})
var reason string
compareWith := CompareWithLatest
fullRefresh := true
refreshType := appv1.RefreshTypeNormal
expired := app.Status.ReconciledAt.Add(statusRefreshTimeout).Before(time.Now().UTC())
if requestedType, ok := app.IsRefreshRequested(); ok {
refreshType = requestedType
reason = fmt.Sprintf("%s refresh requested", refreshType)
} else if requested, level := ctrl.isRefreshRequested(app.Name); requested {
compareWith = level
} else if requested, full := ctrl.isRefreshRequested(app.Name); requested {
fullRefresh = full
reason = fmt.Sprintf("controller refresh requested")
} else if app.Status.Sync.Status == appv1.SyncStatusCodeUnknown && expired {
reason = "comparison status unknown"
@@ -743,10 +706,10 @@ func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application,
reason = fmt.Sprintf("comparison expired. reconciledAt: %v, expiry: %v", app.Status.ReconciledAt, statusRefreshTimeout)
}
if reason != "" {
logCtx.Infof("Refreshing app status (%s), level (%d)", reason, compareWith)
return true, refreshType, compareWith
logCtx.Infof("Refreshing app status (%s)", reason)
return true, refreshType, fullRefresh
}
return false, refreshType, compareWith
return false, refreshType, fullRefresh
}
func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application) ([]appv1.ApplicationCondition, bool) {
@@ -962,7 +925,7 @@ func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.Shar
if oldOK && newOK {
if toggledAutomatedSync(oldApp, newApp) {
log.WithField("application", newApp.Name).Info("Enabled automated sync")
ctrl.requestAppRefresh(newApp.Name, CompareWithLatest)
ctrl.requestAppRefresh(newApp.Name, true)
}
}
ctrl.appRefreshQueue.Add(key)
@@ -1005,7 +968,6 @@ func (ctrl *ApplicationController) watchSettings(ctx context.Context) {
prevAppLabelKey := ctrl.settings.GetAppInstanceLabelKey()
prevResourceExclusions := ctrl.settings.ResourceExclusions
prevResourceInclusions := ctrl.settings.ResourceInclusions
prevConfigManagementPlugins := ctrl.settings.ConfigManagementPlugins
done := false
for !done {
select {
@@ -1027,11 +989,6 @@ func (ctrl *ApplicationController) watchSettings(ctx context.Context) {
ctrl.stateCache.Invalidate()
prevResourceInclusions = newSettings.ResourceInclusions
}
if !reflect.DeepEqual(prevConfigManagementPlugins, newSettings.ConfigManagementPlugins) {
log.WithFields(log.Fields{"prevConfigManagementPlugins": prevConfigManagementPlugins, "newConfigManagementPlugins": newSettings.ConfigManagementPlugins}).Info("config management plugins modified")
ctrl.stateCache.Invalidate()
prevConfigManagementPlugins = newSettings.ConfigManagementPlugins
}
case <-ctx.Done():
done = true
}

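The CompareWith levels threaded through the controller hunks above form an ordered escalation: when several refresh requests for one app queue up, requestAppRefresh collapses them to the deepest level via Max. A self-contained sketch of that semantics, mirroring the type from the diff:

package main

import (
	"fmt"
	"math"
)

// CompareWith mirrors the ordered comparison levels in the diff above; higher
// values mean a more expensive refresh.
type CompareWith int

const (
	CompareWithLatest     CompareWith = 2 // compare against latest git revision
	CompareWithRecent     CompareWith = 1 // compare against most recently compared revision
	ComparisonWithNothing CompareWith = 0 // only refresh the resource tree
)

func (a CompareWith) Max(b CompareWith) CompareWith {
	return CompareWith(math.Max(float64(a), float64(b)))
}

func main() {
	// Two queued requests for the same app collapse to the deepest level.
	level := ComparisonWithNothing.Max(CompareWithRecent)
	fmt.Println(level) // 1: a tree-only refresh was upgraded to a recent-revision compare
}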

@@ -5,20 +5,19 @@ import (
"testing"
"time"
"github.com/argoproj/argo-cd/common"
"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
corev1 "k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/kubernetes/fake"
kubetesting "k8s.io/client-go/testing"
"k8s.io/client-go/tools/cache"
"github.com/argoproj/argo-cd/common"
mockstatecache "github.com/argoproj/argo-cd/controller/cache/mocks"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/fake"
@@ -77,7 +76,6 @@ func newFakeController(data *fakeData) *ApplicationController {
&mockRepoClientset,
utilcache.NewCache(utilcache.NewInMemoryCache(1*time.Hour)),
time.Minute,
common.DefaultPortArgoCDMetrics,
)
if err != nil {
panic(err)
@@ -461,70 +459,10 @@ func TestHandleAppUpdated(t *testing.T) {
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
ctrl.handleAppUpdated(app.Name, true, kube.GetObjectRef(kube.MustToUnstructured(app)))
isRequested, level := ctrl.isRefreshRequested(app.Name)
isRequested, _ := ctrl.isRefreshRequested(app.Name)
assert.False(t, isRequested)
assert.Equal(t, ComparisonWithNothing, level)
ctrl.handleAppUpdated(app.Name, true, corev1.ObjectReference{UID: "test", Kind: kube.DeploymentKind, Name: "test", Namespace: "default"})
isRequested, level = ctrl.isRefreshRequested(app.Name)
isRequested, _ = ctrl.isRefreshRequested(app.Name)
assert.True(t, isRequested)
assert.Equal(t, CompareWithRecent, level)
}
func TestSetOperationStateOnDeletedApp(t *testing.T) {
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}})
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
patched := false
fakeAppCs.AddReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
patched = true
return true, nil, apierr.NewNotFound(schema.GroupResource{}, "my-app")
})
ctrl.setOperationState(newFakeApp(), &argoappv1.OperationState{Phase: argoappv1.OperationSucceeded})
assert.True(t, patched)
}
func TestNeedRefreshAppStatus(t *testing.T) {
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}})
app := newFakeApp()
app.Status.ReconciledAt = metav1.Now()
app.Status.Sync = argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeSynced,
ComparedTo: argoappv1.ComparedTo{
Source: app.Spec.Source,
Destination: app.Spec.Destination,
},
}
// no need to refresh an application that was just reconciled
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour)
assert.False(t, needRefresh)
// refresh app using the 'deepest' requested comparison level
ctrl.requestAppRefresh(app.Name, CompareWithRecent)
ctrl.requestAppRefresh(app.Name, ComparisonWithNothing)
needRefresh, refreshType, compareWith := ctrl.needRefreshAppStatus(app, 1*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithRecent, compareWith)
// refresh an application whose status is not reconciled, using the latest commit
app.Status.Sync = argoappv1.SyncStatus{Status: argoappv1.SyncStatusCodeUnknown}
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
// execute hard refresh if app has refresh annotation
app.Annotations = map[string]string{
common.AnnotationKeyRefresh: string(argoappv1.RefreshTypeHard),
}
needRefresh, refreshType, compareWith = ctrl.needRefreshAppStatus(app, 1*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, argoappv1.RefreshTypeHard, refreshType)
assert.Equal(t, CompareWithLatest, compareWith)
}


@@ -22,7 +22,7 @@ import (
type LiveStateCache interface {
IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error)
// Executes the given callback against the resource specified by the key and all its children
IterateHierarchy(server string, obj *unstructured.Unstructured, action func(child appv1.ResourceNode)) error
IterateHierarchy(server string, key kube.ResourceKey, action func(child appv1.ResourceNode)) error
// Returns state of live nodes which correspond to target nodes of the specified application.
GetManagedLiveObjs(a *appv1.Application, targetObjs []*unstructured.Unstructured) (map[kube.ResourceKey]*unstructured.Unstructured, error)
// Starts watching resources of each controlled cluster.
@@ -31,7 +31,7 @@ type LiveStateCache interface {
Invalidate()
}
type AppUpdatedHandler = func(appName string, isManagedResource bool, ref v1.ObjectReference)
type AppUpdatedHandler = func(appName string, fullRefresh bool, ref v1.ObjectReference)
func GetTargetObjKey(a *appv1.Application, un *unstructured.Unstructured, isNamespaced bool) kube.ResourceKey {
key := kube.GetResourceKey(un)
@@ -135,12 +135,12 @@ func (c *liveStateCache) IsNamespaced(server string, obj *unstructured.Unstructu
return clusterInfo.isNamespaced(obj), nil
}
func (c *liveStateCache) IterateHierarchy(server string, obj *unstructured.Unstructured, action func(child appv1.ResourceNode)) error {
func (c *liveStateCache) IterateHierarchy(server string, key kube.ResourceKey, action func(child appv1.ResourceNode)) error {
clusterInfo, err := c.getSyncedCluster(server)
if err != nil {
return err
}
clusterInfo.iterateHierarchy(obj, action)
clusterInfo.iterateHierarchy(key, action)
return nil
}


@@ -4,13 +4,9 @@ import (
"context"
"fmt"
"runtime/debug"
"sort"
"strings"
"sync"
"time"
"k8s.io/apimachinery/pkg/types"
"github.com/argoproj/argo-cd/controller/metrics"
log "github.com/sirupsen/logrus"
@@ -329,36 +325,18 @@ func (c *clusterInfo) ensureSynced() error {
return c.syncError
}
func (c *clusterInfo) iterateHierarchy(obj *unstructured.Unstructured, action func(child appv1.ResourceNode)) {
func (c *clusterInfo) iterateHierarchy(key kube.ResourceKey, action func(child appv1.ResourceNode)) {
c.lock.Lock()
defer c.lock.Unlock()
key := kube.GetResourceKey(obj)
if objInfo, ok := c.nodes[key]; ok {
action(objInfo.asResourceNode())
nsNodes := c.nsIndex[key.Namespace]
childrenByUID := make(map[types.UID][]*node)
for _, child := range nsNodes {
if objInfo.isParentOf(child) {
childrenByUID[child.ref.UID] = append(childrenByUID[child.ref.UID], child)
}
}
// make sure children have no duplicates
for _, children := range childrenByUID {
if len(children) > 0 {
// The object might have multiple children with the same UID (e.g. replicaset from apps and extensions group). It is ok to pick any object but we need to make sure
// we pick the same child after every refresh.
sort.Slice(children, func(i, j int) bool {
key1 := children[i].resourceKey()
key2 := children[j].resourceKey()
return strings.Compare(key1.String(), key2.String()) < 0
})
child := children[0]
action(child.asResourceNode())
child.iterateChildren(nsNodes, map[kube.ResourceKey]bool{objInfo.resourceKey(): true}, action)
}
}
} else {
action(c.createObjInfo(obj, c.settings.GetAppInstanceLabelKey()).asResourceNode())
}
}
@@ -405,15 +383,6 @@ func (c *clusterInfo) getManagedLiveObjs(a *appv1.Application, targetObjs []*uns
return err
}
}
} else if _, watched := c.apisMeta[key.GroupKind()]; !watched {
var err error
managedObj, err = c.kubectl.GetResource(config, targetObj.GroupVersionKind(), targetObj.GetName(), targetObj.GetNamespace())
if err != nil {
if errors.IsNotFound(err) {
return nil
}
return err
}
}
}
@@ -480,8 +449,8 @@ func (c *clusterInfo) onNodeUpdated(exists bool, existingNode *node, un *unstruc
toNotify[app] = n.isRootAppNode() || toNotify[app]
}
}
for name, isRootAppNode := range toNotify {
c.onAppUpdated(name, isRootAppNode, newObj.ref)
for name, full := range toNotify {
c.onAppUpdated(name, full, newObj.ref)
}
}

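The duplicate handling in iterateHierarchy above relies on one detail: when an object has children sharing a UID under different group/kind keys (e.g. a ReplicaSet surfaced by both apps and extensions), the candidates are sorted by resource key so every refresh picks the same child. A standalone sketch of that stable-pick rule:

package main

import (
	"fmt"
	"sort"
	"strings"
)

// pickStableChild mirrors the sort in iterateHierarchy: among duplicates that
// share a UID, always return the lexicographically smallest key so repeated
// refreshes traverse the same node.
func pickStableChild(keys []string) string {
	sort.Slice(keys, func(i, j int) bool {
		return strings.Compare(keys[i], keys[j]) < 0
	})
	return keys[0]
}

func main() {
	dups := []string{
		"extensions/ReplicaSet/default/helm-guestbook-rs",
		"apps/ReplicaSet/default/helm-guestbook-rs",
	}
	fmt.Println(pickStableChild(dups)) // the apps/ copy wins on every run
}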

@@ -43,28 +43,24 @@ var (
apiVersion: v1
kind: Pod
metadata:
uid: "1"
name: helm-guestbook-pod
namespace: default
ownerReferences:
- apiVersion: apps/v1
- apiVersion: extensions/v1beta1
kind: ReplicaSet
name: helm-guestbook-rs
uid: "2"
resourceVersion: "123"`)
testRS = strToUnstructured(`
apiVersion: apps/v1
kind: ReplicaSet
metadata:
uid: "2"
name: helm-guestbook-rs
namespace: default
ownerReferences:
- apiVersion: apps/v1beta1
- apiVersion: extensions/v1beta1
kind: Deployment
name: helm-guestbook
uid: "3"
resourceVersion: "123"`)
testDeploy = strToUnstructured(`
@@ -73,7 +69,6 @@ var (
metadata:
labels:
app.kubernetes.io/instance: helm-guestbook
uid: "3"
name: helm-guestbook
namespace: default
resourceVersion: "123"`)
@@ -165,7 +160,7 @@ func newClusterExt(kubectl kube.Kubectl) *clusterInfo {
func getChildren(cluster *clusterInfo, un *unstructured.Unstructured) []appv1.ResourceNode {
hierarchy := make([]appv1.ResourceNode, 0)
cluster.iterateHierarchy(un, func(child appv1.ResourceNode) {
cluster.iterateHierarchy(kube.GetResourceKey(un), func(child appv1.ResourceNode) {
hierarchy = append(hierarchy, child)
})
return hierarchy[1:]
@@ -184,7 +179,6 @@ func TestGetChildren(t *testing.T) {
Name: "helm-guestbook-pod",
Group: "",
Version: "v1",
UID: "1",
},
ParentRefs: []appv1.ResourceRef{{
Group: "apps",
@@ -192,7 +186,6 @@ func TestGetChildren(t *testing.T) {
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
UID: "2",
}},
Health: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
NetworkingInfo: &appv1.ResourceNetworkingInfo{Labels: testPod.GetLabels()},
@@ -208,12 +201,11 @@ func TestGetChildren(t *testing.T) {
Name: "helm-guestbook-rs",
Group: "apps",
Version: "v1",
UID: "2",
},
ResourceVersion: "123",
Health: &appv1.HealthStatus{Status: appv1.HealthStatusHealthy},
Info: []appv1.InfoItem{},
ParentRefs: []appv1.ResourceRef{{Group: "apps", Version: "", Kind: "Deployment", Namespace: "default", Name: "helm-guestbook", UID: "3"}},
ParentRefs: []appv1.ResourceRef{{Group: "apps", Version: "", Kind: "Deployment", Namespace: "default", Name: "helm-guestbook"}},
}}, rsChildren...), deployChildren)
}
@@ -265,14 +257,12 @@ func TestProcessNewChildEvent(t *testing.T) {
apiVersion: v1
kind: Pod
metadata:
uid: "4"
name: helm-guestbook-pod2
namespace: default
ownerReferences:
- apiVersion: apps/v1
- apiVersion: extensions/v1beta1
kind: ReplicaSet
name: helm-guestbook-rs
uid: "2"
resourceVersion: "123"`)
err = cluster.processEvent(watch.Added, newPod)
@@ -289,7 +279,6 @@ func TestProcessNewChildEvent(t *testing.T) {
Name: "helm-guestbook-pod",
Group: "",
Version: "v1",
UID: "1",
},
Info: []appv1.InfoItem{{Name: "Containers", Value: "0/0"}},
Health: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
@@ -300,7 +289,6 @@ func TestProcessNewChildEvent(t *testing.T) {
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
UID: "2",
}},
ResourceVersion: "123",
}, {
@@ -310,7 +298,6 @@ func TestProcessNewChildEvent(t *testing.T) {
Name: "helm-guestbook-pod2",
Group: "",
Version: "v1",
UID: "4",
},
NetworkingInfo: &appv1.ResourceNetworkingInfo{Labels: testPod.GetLabels()},
Info: []appv1.InfoItem{{Name: "Containers", Value: "0/0"}},
@@ -321,7 +308,6 @@ func TestProcessNewChildEvent(t *testing.T) {
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
UID: "2",
}},
ResourceVersion: "123",
}}, rsChildren)
@@ -433,21 +419,3 @@ func TestWatchCacheUpdated(t *testing.T) {
_, ok = cluster.nodes[kube.GetResourceKey(added)]
assert.True(t, ok)
}
func TestGetDuplicatedChildren(t *testing.T) {
extensionsRS := testRS.DeepCopy()
extensionsRS.SetGroupVersionKind(schema.GroupVersionKind{Group: "extensions", Kind: kube.ReplicaSetKind, Version: "v1beta1"})
cluster := newCluster(testDeploy, testRS, extensionsRS)
err := cluster.ensureSynced()
assert.Nil(t, err)
// Get children multiple times to make sure the right child is picked up every time.
for i := 0; i < 5; i++ {
children := getChildren(cluster, testDeploy)
assert.Len(t, children, 1)
assert.Equal(t, "apps", children[0].Group)
assert.Equal(t, kube.ReplicaSetKind, children[0].Kind)
assert.Equal(t, testRS.GetName(), children[0].Name)
}
}


@@ -13,6 +13,20 @@ type LiveStateCache struct {
mock.Mock
}
// Delete provides a mock function with given fields: server, obj
func (_m *LiveStateCache) Delete(server string, obj *unstructured.Unstructured) error {
ret := _m.Called(server, obj)
var r0 error
if rf, ok := ret.Get(0).(func(string, *unstructured.Unstructured) error); ok {
r0 = rf(server, obj)
} else {
r0 = ret.Error(0)
}
return r0
}
// GetManagedLiveObjs provides a mock function with given fields: a, targetObjs
func (_m *LiveStateCache) GetManagedLiveObjs(a *v1alpha1.Application, targetObjs []*unstructured.Unstructured) (map[kube.ResourceKey]*unstructured.Unstructured, error) {
ret := _m.Called(a, targetObjs)
@@ -62,13 +76,13 @@ func (_m *LiveStateCache) IsNamespaced(server string, obj *unstructured.Unstruct
return r0, r1
}
// IterateHierarchy provides a mock function with given fields: server, obj, action
func (_m *LiveStateCache) IterateHierarchy(server string, obj *unstructured.Unstructured, action func(v1alpha1.ResourceNode)) error {
ret := _m.Called(server, obj, action)
// IterateHierarchy provides a mock function with given fields: server, key, action
func (_m *LiveStateCache) IterateHierarchy(server string, key kube.ResourceKey, action func(v1alpha1.ResourceNode)) error {
ret := _m.Called(server, key, action)
var r0 error
if rf, ok := ret.Get(0).(func(string, *unstructured.Unstructured, func(v1alpha1.ResourceNode)) error); ok {
r0 = rf(server, obj, action)
if rf, ok := ret.Get(0).(func(string, kube.ResourceKey, func(v1alpha1.ResourceNode)) error); ok {
r0 = rf(server, key, action)
} else {
r0 = ret.Error(0)
}


@@ -36,7 +36,8 @@ func (n *node) resourceKey() kube.ResourceKey {
func (n *node) isParentOf(child *node) bool {
for _, ownerRef := range child.ownerRefs {
if n.ref.UID == ownerRef.UID {
ownerGvk := schema.FromAPIVersionAndKind(ownerRef.APIVersion, ownerRef.Kind)
if kube.NewResourceKey(ownerGvk.Group, ownerRef.Kind, n.ref.Namespace, ownerRef.Name) == n.resourceKey() {
return true
}
}
@@ -99,11 +100,10 @@ func (n *node) asResourceNode() appv1.ResourceNode {
for _, ownerRef := range n.ownerRefs {
ownerGvk := schema.FromAPIVersionAndKind(ownerRef.APIVersion, ownerRef.Kind)
ownerKey := kube.NewResourceKey(ownerGvk.Group, ownerRef.Kind, n.ref.Namespace, ownerRef.Name)
parentRefs[0] = appv1.ResourceRef{Name: ownerRef.Name, Kind: ownerKey.Kind, Namespace: n.ref.Namespace, Group: ownerKey.Group, UID: string(ownerRef.UID)}
parentRefs[0] = appv1.ResourceRef{Name: ownerRef.Name, Kind: ownerKey.Kind, Namespace: n.ref.Namespace, Group: ownerKey.Group}
}
return appv1.ResourceNode{
ResourceRef: appv1.ResourceRef{
UID: string(n.ref.UID),
Name: n.ref.Name,
Group: gv.Group,
Version: gv.Version,

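The change above replaces a bare UID comparison with a resource-key comparison built from the ownerReference, so a same-kind owner from a different API group no longer counts as a parent. A standalone sketch of that check (the helper is a local stand-in, not the diff's kube package):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// matchesParentKey resolves an ownerReference to a group/kind/name key and
// requires it to equal the candidate parent's key, as in the hunk above.
func matchesParentKey(ownerAPIVersion, ownerKind, ownerName string,
	parentGroup, parentKind, parentName string) bool {
	ownerGvk := schema.FromAPIVersionAndKind(ownerAPIVersion, ownerKind)
	return ownerGvk.Group == parentGroup &&
		ownerKind == parentKind &&
		ownerName == parentName
}

func main() {
	// As in TestIsParentOfSameKindDifferentGroup below: a CRD masquerading as a
	// ReplicaSet owner is rejected because its group differs from apps.
	fmt.Println(matchesParentKey("somecrd.io/v1", "ReplicaSet", "helm-guestbook-rs",
		"apps", "ReplicaSet", "helm-guestbook-rs")) // false
}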

@@ -19,10 +19,9 @@ func TestIsParentOf(t *testing.T) {
assert.False(t, grandParent.isParentOf(child))
}
func TestIsParentOfSameKindDifferentGroupAndUID(t *testing.T) {
func TestIsParentOfSameKindDifferentGroup(t *testing.T) {
rs := testRS.DeepCopy()
rs.SetAPIVersion("somecrd.io/v1")
rs.SetUID("123")
child := c.createObjInfo(testPod, "")
invalidParent := c.createObjInfo(rs, "")


@@ -13,7 +13,6 @@ import (
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
applister "github.com/argoproj/argo-cd/pkg/client/listers/application/v1alpha1"
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/healthz"
)
type MetricsServer struct {
@@ -60,13 +59,12 @@ var (
)
// NewMetricsServer returns a new prometheus server which collects application metrics
func NewMetricsServer(addr string, appLister applister.ApplicationLister, healthCheck func() error) *MetricsServer {
func NewMetricsServer(addr string, appLister applister.ApplicationLister) *MetricsServer {
mux := http.NewServeMux()
appRegistry := NewAppRegistry(appLister)
appRegistry.MustRegister(prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{}))
appRegistry.MustRegister(prometheus.NewGoCollector())
mux.Handle(MetricsPath, promhttp.HandlerFor(appRegistry, promhttp.HandlerOpts{}))
healthz.ServeHealthCheck(mux, healthCheck)
syncCounter := prometheus.NewCounterVec(
prometheus.CounterOpts{

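The healthCheck parameter above is wired by the application controller shown earlier in this diff: the liveness probe passes only while the Kubernetes API answers. A sketch of that caller-side wiring (the wrapper function is an assumption; the closure matches the controller hunk):

// newControllerMetricsServer is illustrative, not part of this diff.
func newControllerMetricsServer(kubeClientset kubernetes.Interface,
	appLister applister.ApplicationLister) *MetricsServer {
	return NewMetricsServer("0.0.0.0:8082", appLister, func() error {
		_, err := kubeClientset.Discovery().ServerVersion()
		return err
	})
}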

@@ -104,10 +104,6 @@ argocd_app_sync_status{name="my-app",namespace="argocd",project="default",sync_s
argocd_app_sync_status{name="my-app",namespace="argocd",project="default",sync_status="Unknown"} 0
`
var noOpHealthCheck = func() error {
return nil
}
func newFakeApp(fakeApp string) *argoappv1.Application {
var app argoappv1.Application
err := yaml.Unmarshal([]byte(fakeApp), &app)
@@ -137,7 +133,7 @@ func newFakeLister(fakeApp ...string) (context.CancelFunc, applister.Application
func testApp(t *testing.T, fakeApp string, expectedResponse string) {
cancel, appLister := newFakeLister(fakeApp)
defer cancel()
metricsServ := NewMetricsServer("localhost:8082", appLister, noOpHealthCheck)
metricsServ := NewMetricsServer("localhost:8082", appLister)
req, err := http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)
rr := httptest.NewRecorder()
@@ -180,7 +176,7 @@ argocd_app_sync_total{name="my-app",namespace="argocd",phase="Succeeded",project
func TestMetricsSyncCounter(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ := NewMetricsServer("localhost:8082", appLister, noOpHealthCheck)
metricsServ := NewMetricsServer("localhost:8082", appLister)
fakeApp := newFakeApp(fakeApp)
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: argoappv1.OperationRunning})
@@ -221,7 +217,7 @@ argocd_app_reconcile_count{name="my-app",namespace="argocd",project="important-p
func TestReconcileMetrics(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ := NewMetricsServer("localhost:8082", appLister, noOpHealthCheck)
metricsServ := NewMetricsServer("localhost:8082", appLister)
fakeApp := newFakeApp(fakeApp)
metricsServ.IncReconcile(fakeApp, 5*time.Second)


@@ -27,7 +27,6 @@ import (
"github.com/argoproj/argo-cd/util/health"
hookutil "github.com/argoproj/argo-cd/util/hook"
kubeutil "github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/resource"
"github.com/argoproj/argo-cd/util/settings"
)
@@ -57,7 +56,7 @@ type ResourceInfoProvider interface {
// AppStateManager defines methods which allow comparing the application spec with the actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool, localObjects []string) (*comparisonResult, error)
CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState)
}
@@ -125,21 +124,12 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
return nil, nil, nil, err
}
targetObjs, hooks, err := unmarshalManifests(manifestInfo.Manifests)
return targetObjs, hooks, manifestInfo, nil
}
func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, []*unstructured.Unstructured, error) {
targetObjs := make([]*unstructured.Unstructured, 0)
hooks := make([]*unstructured.Unstructured, 0)
for _, manifest := range manifests {
for _, manifest := range manifestInfo.Manifests {
obj, err := v1alpha1.UnmarshalToUnstructured(manifest)
if err != nil {
return nil, nil, err
}
if resource.Ignore(obj) {
continue
return nil, nil, nil, err
}
if hookutil.IsHook(obj) {
hooks = append(hooks, obj)
@@ -147,7 +137,7 @@ func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, []*un
targetObjs = append(targetObjs, obj)
}
}
return targetObjs, hooks, nil
return targetObjs, hooks, manifestInfo, nil
}
func DeduplicateTargetObjects(
@@ -187,46 +177,10 @@ func DeduplicateTargetObjects(
return result, conditions, nil
}
// dedupLiveResources removes live resource duplicates with the same UID. Duplicates are created in separate resource groups.
// E.g. apps/Deployment produces a duplicate in extensions/Deployment, authorization.openshift.io/ClusterRole produces a duplicate in rbac.authorization.k8s.io/ClusterRole, etc.
// The method removes such duplicates unless they are defined in git (exist in the target resources list). At least one duplicate stays.
// If none of the duplicates are in git, a random one stays.
func dedupLiveResources(targetObjs []*unstructured.Unstructured, liveObjsByKey map[kubeutil.ResourceKey]*unstructured.Unstructured) {
targetObjByKey := make(map[kubeutil.ResourceKey]*unstructured.Unstructured)
for i := range targetObjs {
targetObjByKey[kubeutil.GetResourceKey(targetObjs[i])] = targetObjs[i]
}
liveObjsById := make(map[types.UID][]*unstructured.Unstructured)
for k := range liveObjsByKey {
obj := liveObjsByKey[k]
if obj != nil {
liveObjsById[obj.GetUID()] = append(liveObjsById[obj.GetUID()], obj)
}
}
for id := range liveObjsById {
objs := liveObjsById[id]
if len(objs) > 1 {
duplicatesLeft := len(objs)
for i := range objs {
obj := objs[i]
resourceKey := kubeutil.GetResourceKey(obj)
if _, ok := targetObjByKey[resourceKey]; !ok {
delete(liveObjsByKey, resourceKey)
duplicatesLeft--
if duplicatesLeft == 1 {
break
}
}
}
}
}
}
// CompareAppState compares application git state to the live app state, using the specified
// revision and supplied source. If revision or overrides are empty, then compares against
// revision and overrides in the app spec.
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool, localManifests []string) (*comparisonResult, error) {
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool) (*comparisonResult, error) {
diffNormalizer, err := argo.NewDiffNormalizer(app.Spec.IgnoreDifferences, m.settings.ResourceOverrides)
if err != nil {
return nil, err
@@ -237,28 +191,12 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
failedToLoadObjs := false
conditions := make([]v1alpha1.ApplicationCondition, 0)
appLabelKey := m.settings.GetAppInstanceLabelKey()
var targetObjs []*unstructured.Unstructured
var hooks []*unstructured.Unstructured
var manifestInfo *repository.ManifestResponse
if len(localManifests) == 0 {
targetObjs, hooks, manifestInfo, err = m.getRepoObjs(app, source, appLabelKey, revision, noCache)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
failedToLoadObjs = true
}
} else {
targetObjs, hooks, err = unmarshalManifests(localManifests)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
failedToLoadObjs = true
}
manifestInfo = nil
targetObjs, hooks, manifestInfo, err := m.getRepoObjs(app, source, appLabelKey, revision, noCache)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
failedToLoadObjs = true
}
targetObjs, dedupConditions, err := DeduplicateTargetObjects(app.Spec.Destination.Server, app.Spec.Destination.Namespace, targetObjs, m.liveStateCache)
if err != nil {
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
@@ -267,7 +205,6 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
logCtx.Debugf("Generated config manifests")
liveObjByKey, err := m.liveStateCache.GetManagedLiveObjs(app, targetObjs)
dedupLiveResources(targetObjs, liveObjByKey)
if err != nil {
liveObjByKey = make(map[kubeutil.ResourceKey]*unstructured.Unstructured)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
@@ -319,11 +256,10 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
syncCode := v1alpha1.SyncStatusCodeSynced
managedResources := make([]managedResource, len(targetObjs))
resourceSummaries := make([]v1alpha1.ResourceStatus, len(targetObjs))
for i, targetObj := range targetObjs {
liveObj := managedLiveObj[i]
obj := liveObj
for i := 0; i < len(targetObjs); i++ {
obj := managedLiveObj[i]
if obj == nil {
obj = targetObj
obj = targetObjs[i]
}
if obj == nil {
continue
@@ -340,19 +276,15 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
}
diffResult := diffResults.Diffs[i]
if resState.Hook || resource.Ignore(obj) {
if resState.Hook {
// For resource hooks, don't store sync status, and do not affect overall sync status
} else if diffResult.Modified || targetObj == nil || liveObj == nil {
} else if diffResult.Modified || targetObjs[i] == nil || managedLiveObj[i] == nil {
// Set resource state to OutOfSync since one of the following is true:
// * target and live resource are different
// * target resource not defined and live resource is extra
// * target resource present but live resource is missing
resState.Status = v1alpha1.SyncStatusCodeOutOfSync
// we ignore the status if the obj needs pruning AND we have the annotation
needsPruning := targetObj == nil && liveObj != nil
if !(needsPruning && resource.HasAnnotationOption(obj, common.AnnotationCompareOptions, "IgnoreExtraneous")) {
syncCode = v1alpha1.SyncStatusCodeOutOfSync
}
syncCode = v1alpha1.SyncStatusCodeOutOfSync
} else {
resState.Status = v1alpha1.SyncStatusCodeSynced
}
@@ -362,8 +294,8 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
Group: resState.Group,
Kind: resState.Kind,
Version: resState.Version,
Live: liveObj,
Target: targetObj,
Live: managedLiveObj[i],
Target: targetObjs[i],
Diff: diffResult,
Hook: resState.Hook,
}

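dedupLiveResources in the hunk above encodes a simple rule: among live objects sharing a UID, drop those not defined in git, but never drop the last remaining duplicate. A standalone sketch of that rule over plain string keys:

package main

import "fmt"

// dedupByUID mirrors dedupLiveResources for one UID group: delete keys absent
// from the target set while more than one duplicate remains.
func dedupByUID(live []string, target map[string]bool) []string {
	kept := make([]string, 0, len(live))
	left := len(live)
	for _, k := range live {
		if !target[k] && left > 1 {
			left-- // duplicate not in git: safe to drop
			continue
		}
		kept = append(kept, k)
	}
	return kept
}

func main() {
	live := []string{"extensions/Deployment/guestbook", "apps/Deployment/guestbook"}
	target := map[string]bool{"apps/Deployment/guestbook": true}
	fmt.Println(dedupByUID(live, target)) // [apps/Deployment/guestbook]
}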

@@ -4,9 +4,10 @@ import (
"encoding/json"
"testing"
"github.com/stretchr/testify/assert"
v1 "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
@@ -30,7 +31,7 @@ func TestCompareAppStateEmpty(t *testing.T) {
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
@@ -53,7 +54,7 @@ func TestCompareAppStateMissing(t *testing.T) {
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeOutOfSync, compRes.syncStatus.Status)
@@ -80,7 +81,7 @@ func TestCompareAppStateExtra(t *testing.T) {
},
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeOutOfSync, compRes.syncStatus.Status)
@@ -107,43 +108,15 @@ func TestCompareAppStateHook(t *testing.T) {
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Equal(t, 0, len(compRes.resources))
assert.Equal(t, 0, len(compRes.managedResources))
assert.Equal(t, 1, len(compRes.hooks))
assert.Equal(t, 0, len(compRes.conditions))
}
// checks that ignored resources are detected, but excluded from status
func TestCompareAppStateCompareOptionIgnoreExtraneous(t *testing.T) {
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationCompareOptions: "IgnoreExtraneous"})
app := newFakeApp()
data := fakeData{
apps: []runtime.Object{app},
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Len(t, compRes.resources, 0)
assert.Len(t, compRes.managedResources, 0)
assert.Len(t, compRes.conditions, 0)
}
// TestCompareAppStateExtraHook tests when there is an extra _hook_ object in live but not defined in git
func TestCompareAppStateExtraHook(t *testing.T) {
pod := test.NewPod()
@@ -163,13 +136,12 @@ func TestCompareAppStateExtraHook(t *testing.T) {
},
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Equal(t, 1, len(compRes.resources))
assert.Equal(t, 1, len(compRes.managedResources))
assert.Equal(t, 0, len(compRes.hooks))
assert.Equal(t, 0, len(compRes.conditions))
}
@@ -200,7 +172,7 @@ func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
},
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Contains(t, compRes.conditions, argoappv1.ApplicationCondition{
@@ -251,7 +223,7 @@ func TestSetHealth(t *testing.T) {
},
})
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.Equal(t, compRes.healthStatus.Status, argoappv1.HealthStatusHealthy)
@@ -284,7 +256,7 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
},
})
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.Equal(t, compRes.healthStatus.Status, argoappv1.HealthStatusHealthy)

File diff suppressed because it is too large

View File

@@ -0,0 +1,66 @@
package controller
import (
"testing"
"github.com/stretchr/testify/assert"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util/kube/kubetest"
)
var clusterRoleHook = `
{
"apiVersion": "rbac.authorization.k8s.io/v1",
"kind": "ClusterRole",
"metadata": {
"name": "cluster-role-hook",
"annotations": {
"argocd.argoproj.io/hook": "PostSync"
}
}
}`
func TestSyncHookProjectPermissions(t *testing.T) {
syncCtx := newTestSyncCtx(&v1.APIResourceList{
GroupVersion: "v1",
APIResources: []v1.APIResource{
{Name: "pod", Namespaced: true, Kind: "Pod", Group: "v1"},
},
}, &v1.APIResourceList{
GroupVersion: "rbac.authorization.k8s.io/v1",
APIResources: []v1.APIResource{
{Name: "clusterroles", Namespaced: false, Kind: "ClusterRole", Group: "rbac.authorization.k8s.io"},
},
})
syncCtx.kubectl = kubetest.MockKubectlCmd{}
crHook, _ := v1alpha1.UnmarshalToUnstructured(clusterRoleHook)
syncCtx.compareResult = &comparisonResult{
hooks: []*unstructured.Unstructured{
crHook,
},
managedResources: []managedResource{{
Target: test.NewPod(),
}},
}
syncCtx.proj.Spec.ClusterResourceWhitelist = []v1.GroupKind{}
syncCtx.syncOp.SyncStrategy = nil
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationFailed, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 0)
assert.Contains(t, syncCtx.opState.Message, "not permitted in project")
// Now add the resource to the whitelist and try again. Resource should be created
syncCtx.proj.Spec.ClusterResourceWhitelist = []v1.GroupKind{
{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"},
}
syncCtx.syncOp.SyncStrategy = nil
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResultCodeSynced, syncCtx.syncRes.Resources[0].Status)
}

View File

@@ -2,55 +2,334 @@ package controller
import (
"fmt"
"reflect"
"strings"
wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1"
apiv1 "k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/kubernetes/pkg/apis/batch"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
hookutil "github.com/argoproj/argo-cd/util/hook"
"github.com/argoproj/argo-cd/util/kube"
)
// enforceHookDeletePolicy examines the hook deletion policy of an object and deletes it based on the status
func enforceHookDeletePolicy(hook *unstructured.Unstructured, operation v1alpha1.OperationPhase) bool {
// doHookSync initiates (or continues) a hook-based sync. This method will be invoked when there may
// already be in-flight (potentially incomplete) jobs/workflows, and should be idempotent.
func (sc *syncContext) doHookSync(syncTasks []syncTask, hooks []*unstructured.Unstructured) {
if !sc.startedPreSyncPhase() {
if !sc.verifyPermittedHooks(hooks) {
return
}
}
// 1. Run PreSync hooks
if !sc.runHooks(hooks, appv1.HookTypePreSync) {
return
}
// 2. Run Sync hooks (e.g. blue-green sync workflow)
// Before performing Sync hooks, apply any normal manifests which aren't annotated with a hook.
// We only want to do this once per operation.
shouldContinue := true
if !sc.startedSyncPhase() {
if !sc.syncNonHookTasks(syncTasks) {
sc.setOperationPhase(appv1.OperationFailed, "one or more objects failed to apply")
return
}
shouldContinue = false
}
if !sc.runHooks(hooks, appv1.HookTypeSync) {
shouldContinue = false
}
if !shouldContinue {
return
}
// 3. Run PostSync hooks
// Before running PostSync hooks, we want to make sure the rollout is complete (app is healthy). If we
// already started the post-sync phase, then we do not need to perform the health check.
postSyncHooks, _ := sc.getHooks(appv1.HookTypePostSync)
if len(postSyncHooks) > 0 && !sc.startedPostSyncPhase() {
sc.log.Infof("PostSync application health check: %s", sc.compareResult.healthStatus.Status)
if sc.compareResult.healthStatus.Status != appv1.HealthStatusHealthy {
sc.setOperationPhase(appv1.OperationRunning, fmt.Sprintf("waiting for %s state to run %s hooks (current health: %s)",
appv1.HealthStatusHealthy, appv1.HookTypePostSync, sc.compareResult.healthStatus.Status))
return
}
}
if !sc.runHooks(hooks, appv1.HookTypePostSync) {
return
}
// if we get here, all hooks successfully completed
sc.setOperationPhase(appv1.OperationSucceeded, "successfully synced")
}
// verifyPermittedHooks verifies all hooks are permitted in the project
func (sc *syncContext) verifyPermittedHooks(hooks []*unstructured.Unstructured) bool {
for _, hook := range hooks {
gvk := hook.GroupVersionKind()
serverRes, err := kube.ServerResourceForGroupVersionKind(sc.disco, gvk)
if err != nil {
sc.setOperationPhase(appv1.OperationError, fmt.Sprintf("unable to identify api resource type: %v", gvk))
return false
}
if !sc.proj.IsResourcePermitted(metav1.GroupKind{Group: gvk.Group, Kind: gvk.Kind}, serverRes.Namespaced) {
sc.setOperationPhase(appv1.OperationFailed, fmt.Sprintf("Hook resource %s:%s is not permitted in project %s", gvk.Group, gvk.Kind, sc.proj.Name))
return false
}
if serverRes.Namespaced && !sc.proj.IsDestinationPermitted(appv1.ApplicationDestination{Namespace: hook.GetNamespace(), Server: sc.server}) {
gvk := hook.GroupVersionKind()
sc.setResourceDetails(&appv1.ResourceResult{
Name: hook.GetName(),
Group: gvk.Group,
Version: gvk.Version,
Kind: hook.GetKind(),
Namespace: hook.GetNamespace(),
Message: fmt.Sprintf("namespace %v is not permitted in project '%s'", hook.GetNamespace(), sc.proj.Name),
Status: appv1.ResultCodeSyncFailed,
})
return false
}
}
return true
}
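// Example (illustrative, not part of the original source): a cluster-scoped
// hook such as a ClusterRole only passes this check once the AppProject
// whitelists its group/kind, e.g.:
//
//	proj.Spec.ClusterResourceWhitelist = []metav1.GroupKind{
//		{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"},
//	}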
// getHooks returns all Argo CD hooks, optionally filtered to the specified type(s)
func (sc *syncContext) getHooks(hookTypes ...appv1.HookType) ([]*unstructured.Unstructured, error) {
var hooks []*unstructured.Unstructured
for _, hook := range sc.compareResult.hooks {
if hook.GetNamespace() == "" {
hook.SetNamespace(sc.namespace)
}
if !hookutil.IsArgoHook(hook) {
// TODO: in the future, if we want to map helm hooks to Argo CD lifecycles, we should
// include helm hooks in the returned list
continue
}
if len(hookTypes) > 0 {
match := false
for _, desiredType := range hookTypes {
if isHookType(hook, desiredType) {
match = true
break
}
}
if !match {
continue
}
}
hooks = append(hooks, hook)
}
return hooks, nil
}
// runHooks iterates & filters the target manifests for resources of the specified hook type, then
// creates the resource. Updates sc.syncRes.Resources with the current status. Returns whether or not
// we should continue to the next hook phase.
func (sc *syncContext) runHooks(hooks []*unstructured.Unstructured, hookType appv1.HookType) bool {
shouldContinue := true
for _, hook := range hooks {
if hookType == appv1.HookTypeSync && isHookType(hook, appv1.HookTypeSkip) {
// If we get here, we are invoking all sync hooks and reached a resource that is
// annotated with the Skip hook. This will update the resource details to indicate it
// was skipped due to the annotation
gvk := hook.GroupVersionKind()
sc.setResourceDetails(&appv1.ResourceResult{
Name: hook.GetName(),
Group: gvk.Group,
Version: gvk.Version,
Kind: hook.GetKind(),
Namespace: hook.GetNamespace(),
Message: "Skipped",
})
continue
}
if !isHookType(hook, hookType) {
continue
}
updated, err := sc.runHook(hook, hookType)
if err != nil {
sc.setOperationPhase(appv1.OperationError, fmt.Sprintf("%s hook error: %v", hookType, err))
return false
}
if updated {
// If the result of running a hook caused us to modify hook resource state, we should
// not proceed to the next hook phase. This is because before proceeding to the next
// phase, we want a full health assessment to happen. By returning early, we allow
// the application to get requeued into the controller workqueue, and on the next
// process iteration, a new CompareAppState() will be performed to get the most
// up-to-date live state. This enables us to accurately wait for an application to
// become Healthy before proceeding to run PostSync tasks.
shouldContinue = false
}
}
if !shouldContinue {
sc.log.Infof("Stopping after %s phase due to modifications to hook resource state", hookType)
return false
}
completed, successful := areHooksCompletedSuccessful(hookType, sc.syncRes.Resources)
if !completed {
return false
}
if !successful {
sc.setOperationPhase(appv1.OperationFailed, fmt.Sprintf("%s hook failed", hookType))
return false
}
return true
}
// syncNonHookTasks syncs or prunes the objects that are not handled by hooks, using an apply sync.
// Returns true if the sync was successful
func (sc *syncContext) syncNonHookTasks(syncTasks []syncTask) bool {
var nonHookTasks []syncTask
for _, task := range syncTasks {
if task.targetObj == nil {
nonHookTasks = append(nonHookTasks, task)
} else {
annotations := task.targetObj.GetAnnotations()
if annotations != nil && annotations[common.AnnotationKeyHook] != "" {
// we are doing a hook sync and this resource is annotated with a hook annotation
continue
}
// if we get here, this resource does not have any hook annotation so we
// should perform a `kubectl apply`
nonHookTasks = append(nonHookTasks, task)
}
}
return sc.doApplySync(nonHookTasks, false, sc.syncOp.SyncStrategy.Hook.Force, true)
}
// runHook runs the supplied hook and updates the hook status. Returns true if invoking
// this method resulted in changes to any hook status
func (sc *syncContext) runHook(hook *unstructured.Unstructured, hookType appv1.HookType) (bool, error) {
// Hook resource names are deterministic, whether they are defined by the user (metadata.name),
// or formulated at the time of the operation (metadata.generateName). If the user specifies
// metadata.generateName, then we will generate a formulated metadata.name before submission.
if hook.GetName() == "" {
postfix := strings.ToLower(fmt.Sprintf("%s-%s-%d", sc.syncRes.Revision[0:7], hookType, sc.opState.StartedAt.UTC().Unix()))
generatedName := hook.GetGenerateName()
hook = hook.DeepCopy()
hook.SetName(fmt.Sprintf("%s%s", generatedName, postfix))
}
// Check our hook statuses to see if we already completed this hook.
// If so, this method is a noop
prevStatus := sc.getHookStatus(hook, hookType)
if prevStatus != nil && prevStatus.HookPhase.Completed() {
return false, nil
}
gvk := hook.GroupVersionKind()
apiResource, err := kube.ServerResourceForGroupVersionKind(sc.disco, gvk)
if err != nil {
return false, err
}
resource := kube.ToGroupVersionResource(gvk.GroupVersion().String(), apiResource)
resIf := kube.ToResourceInterface(sc.dynamicIf, apiResource, resource, hook.GetNamespace())
var liveObj *unstructured.Unstructured
existing, err := resIf.Get(hook.GetName(), metav1.GetOptions{})
if err != nil {
if !apierr.IsNotFound(err) {
return false, fmt.Errorf("Failed to get status of %s hook %s '%s': %v", hookType, gvk, hook.GetName(), err)
}
_, err := sc.kubectl.ApplyResource(sc.config, hook, hook.GetNamespace(), false, false)
if err != nil {
return false, fmt.Errorf("Failed to create %s hook %s '%s': %v", hookType, gvk, hook.GetName(), err)
}
created, err := resIf.Get(hook.GetName(), metav1.GetOptions{})
if err != nil {
return true, fmt.Errorf("Failed to get status of %s hook %s '%s': %v", hookType, gvk, hook.GetName(), err)
}
sc.log.Infof("%s hook %s '%s' created", hookType, gvk, created.GetName())
sc.setOperationPhase(appv1.OperationRunning, fmt.Sprintf("running %s hooks", hookType))
liveObj = created
} else {
liveObj = existing
}
hookStatus := newHookStatus(liveObj, hookType)
if hookStatus.HookPhase.Completed() {
if enforceHookDeletePolicy(hook, hookStatus.HookPhase) {
err = sc.deleteHook(hook.GetName(), hook.GetNamespace(), hook.GroupVersionKind())
if err != nil {
hookStatus.HookPhase = appv1.OperationFailed
hookStatus.Message = fmt.Sprintf("failed to delete %s hook: %v", hookStatus.HookPhase, err)
}
}
}
return sc.updateHookStatus(hookStatus), nil
}
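// Example (illustrative, not part of the original source): for a hook with the
// hypothetical metadata.generateName "db-migrate-", revision "abc1234...",
// hook type PreSync, and an operation started at Unix time 1557500000, the
// formulated name submitted to the cluster is:
//
//	db-migrate-abc1234-presync-1557500000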
// enforceHookDeletePolicy examines the hook deletion policy of an object and deletes it based on the status
func enforceHookDeletePolicy(hook *unstructured.Unstructured, phase appv1.OperationPhase) bool {
annotations := hook.GetAnnotations()
if annotations == nil {
return false
}
deletePolicies := strings.Split(annotations[common.AnnotationKeyHookDeletePolicy], ",")
for _, dp := range deletePolicies {
policy := v1alpha1.HookDeletePolicy(strings.TrimSpace(dp))
if policy == v1alpha1.HookDeletePolicyHookSucceeded && operation == v1alpha1.OperationSucceeded {
policy := appv1.HookDeletePolicy(strings.TrimSpace(dp))
if policy == appv1.HookDeletePolicyHookSucceeded && phase == appv1.OperationSucceeded {
return true
}
if policy == v1alpha1.HookDeletePolicyHookFailed && operation == v1alpha1.OperationFailed {
if policy == appv1.HookDeletePolicyHookFailed && phase == appv1.OperationFailed {
return true
}
}
return false
}
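// Example (illustrative, not part of the original source): a hook that should
// be deleted once it succeeds carries the delete-policy annotation alongside
// the hook annotation:
//
//	metadata:
//	  annotations:
//	    argocd.argoproj.io/hook: PostSync
//	    argocd.argoproj.io/hook-delete-policy: HookSucceeded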
// getOperationPhase returns an operation phase from a _live_ unstructured object
func getOperationPhase(hook *unstructured.Unstructured) (operation v1alpha1.OperationPhase, message string) {
gvk := hook.GroupVersionKind()
if isBatchJob(gvk) {
return getStatusFromBatchJob(hook)
} else if isArgoWorkflow(gvk) {
return getStatusFromArgoWorkflow(hook)
} else if isPod(gvk) {
return getStatusFromPod(hook)
} else {
return v1alpha1.OperationSucceeded, fmt.Sprintf("%s created", hook.GetName())
// isHookType tells whether or not the supplied object is a hook of the specified type
func isHookType(hook *unstructured.Unstructured, hookType appv1.HookType) bool {
annotations := hook.GetAnnotations()
if annotations == nil {
return false
}
resHookTypes := strings.Split(annotations[common.AnnotationKeyHook], ",")
for _, ht := range resHookTypes {
if string(hookType) == strings.TrimSpace(ht) {
return true
}
}
return false
}
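// Example (illustrative, not part of the original source): the hook annotation
// may carry several comma-separated types, so for an object annotated with
// "argocd.argoproj.io/hook: PreSync,PostSync":
//
//	isHookType(obj, appv1.HookTypePreSync)  // true
//	isHookType(obj, appv1.HookTypePostSync) // true
//	isHookType(obj, appv1.HookTypeSync)     // false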
// newHookStatus returns a hook status from a _live_ unstructured object
func newHookStatus(hook *unstructured.Unstructured, hookType appv1.HookType) appv1.ResourceResult {
gvk := hook.GroupVersionKind()
hookStatus := appv1.ResourceResult{
Name: hook.GetName(),
Kind: hook.GetKind(),
Group: gvk.Group,
Version: gvk.Version,
HookType: hookType,
HookPhase: appv1.OperationRunning,
Namespace: hook.GetNamespace(),
}
if isBatchJob(gvk) {
updateStatusFromBatchJob(hook, &hookStatus)
} else if isArgoWorkflow(gvk) {
updateStatusFromArgoWorkflow(hook, &hookStatus)
} else if isPod(gvk) {
updateStatusFromPod(hook, &hookStatus)
} else {
hookStatus.HookPhase = appv1.OperationSucceeded
hookStatus.Message = fmt.Sprintf("%s created", hook.GetName())
}
return hookStatus
}
// isRunnable returns if the resource object is a runnable type which needs to be terminated
func isRunnable(gvk schema.GroupVersionKind) bool {
func isRunnable(res *appv1.ResourceResult) bool {
gvk := res.GroupVersionKind()
return isBatchJob(gvk) || isArgoWorkflow(gvk) || isPod(gvk)
}
@@ -58,16 +337,18 @@ func isBatchJob(gvk schema.GroupVersionKind) bool {
return gvk.Group == "batch" && gvk.Kind == "Job"
}
// TODO this is a copy-and-paste of health.getJobHealth(), refactor out?
func getStatusFromBatchJob(hook *unstructured.Unstructured) (operation v1alpha1.OperationPhase, message string) {
func updateStatusFromBatchJob(hook *unstructured.Unstructured, hookStatus *appv1.ResourceResult) {
var job batch.Job
err := runtime.DefaultUnstructuredConverter.FromUnstructured(hook.Object, &job)
if err != nil {
return v1alpha1.OperationError, err.Error()
hookStatus.HookPhase = appv1.OperationError
hookStatus.Message = err.Error()
return
}
failed := false
var failMsg string
complete := false
var message string
for _, condition := range job.Status.Conditions {
switch condition.Type {
case batch.JobFailed:
@@ -80,11 +361,14 @@ func getStatusFromBatchJob(hook *unstructured.Unstructured) (operation v1alpha1.
}
}
if !complete {
return v1alpha1.OperationRunning, message
hookStatus.HookPhase = appv1.OperationRunning
hookStatus.Message = message
} else if failed {
return v1alpha1.OperationFailed, failMsg
hookStatus.HookPhase = appv1.OperationFailed
hookStatus.Message = failMsg
} else {
return v1alpha1.OperationSucceeded, message
hookStatus.HookPhase = appv1.OperationSucceeded
hookStatus.Message = message
}
}
@@ -92,36 +376,38 @@ func isArgoWorkflow(gvk schema.GroupVersionKind) bool {
return gvk.Group == "argoproj.io" && gvk.Kind == "Workflow"
}
// TODO - should we move this to health.go?
func getStatusFromArgoWorkflow(hook *unstructured.Unstructured) (operation v1alpha1.OperationPhase, message string) {
func updateStatusFromArgoWorkflow(hook *unstructured.Unstructured, hookStatus *appv1.ResourceResult) {
var wf wfv1.Workflow
err := runtime.DefaultUnstructuredConverter.FromUnstructured(hook.Object, &wf)
if err != nil {
return v1alpha1.OperationError, err.Error()
hookStatus.HookPhase = appv1.OperationError
hookStatus.Message = err.Error()
return
}
switch wf.Status.Phase {
case wfv1.NodePending, wfv1.NodeRunning:
return v1alpha1.OperationRunning, wf.Status.Message
hookStatus.HookPhase = appv1.OperationRunning
case wfv1.NodeSucceeded:
return v1alpha1.OperationSucceeded, wf.Status.Message
hookStatus.HookPhase = appv1.OperationSucceeded
case wfv1.NodeFailed:
return v1alpha1.OperationFailed, wf.Status.Message
hookStatus.HookPhase = appv1.OperationFailed
case wfv1.NodeError:
return v1alpha1.OperationError, wf.Status.Message
hookStatus.HookPhase = appv1.OperationError
}
return v1alpha1.OperationSucceeded, wf.Status.Message
hookStatus.Message = wf.Status.Message
}
func isPod(gvk schema.GroupVersionKind) bool {
return gvk.Group == "" && gvk.Kind == "Pod"
}
// TODO - this is very similar to health.getPodHealth() should we use that instead?
func getStatusFromPod(hook *unstructured.Unstructured) (v1alpha1.OperationPhase, string) {
func updateStatusFromPod(hook *unstructured.Unstructured, hookStatus *appv1.ResourceResult) {
var pod apiv1.Pod
err := runtime.DefaultUnstructuredConverter.FromUnstructured(hook.Object, &pod)
if err != nil {
return v1alpha1.OperationError, err.Error()
hookStatus.HookPhase = appv1.OperationError
hookStatus.Message = err.Error()
return
}
getFailMessage := func(ctr *apiv1.ContainerStatus) string {
if ctr.State.Terminated != nil {
@@ -140,22 +426,135 @@ func getStatusFromPod(hook *unstructured.Unstructured) (v1alpha1.OperationPhase,
switch pod.Status.Phase {
case apiv1.PodPending, apiv1.PodRunning:
return v1alpha1.OperationRunning, ""
hookStatus.HookPhase = appv1.OperationRunning
case apiv1.PodSucceeded:
return v1alpha1.OperationSucceeded, ""
hookStatus.HookPhase = appv1.OperationSucceeded
case apiv1.PodFailed:
hookStatus.HookPhase = appv1.OperationFailed
if pod.Status.Message != "" {
// Pod has a nice error message. Use that.
return v1alpha1.OperationFailed, pod.Status.Message
hookStatus.Message = pod.Status.Message
return
}
for _, ctr := range append(pod.Status.InitContainerStatuses, pod.Status.ContainerStatuses...) {
if msg := getFailMessage(&ctr); msg != "" {
return v1alpha1.OperationFailed, msg
hookStatus.Message = msg
return
}
}
return v1alpha1.OperationFailed, ""
case apiv1.PodUnknown:
return v1alpha1.OperationError, ""
hookStatus.HookPhase = appv1.OperationError
}
return v1alpha1.OperationRunning, ""
}
func (sc *syncContext) getHookStatus(hookObj *unstructured.Unstructured, hookType appv1.HookType) *appv1.ResourceResult {
for _, hr := range sc.syncRes.Resources {
if !hr.IsHook() {
continue
}
ns := util.FirstNonEmpty(hookObj.GetNamespace(), sc.namespace)
if hookEqual(hr, hookObj.GroupVersionKind().Group, hookObj.GetKind(), ns, hookObj.GetName(), hookType) {
return hr
}
}
return nil
}
func hookEqual(hr *appv1.ResourceResult, group, kind, namespace, name string, hookType appv1.HookType) bool {
return hr.Group == group &&
hr.Kind == kind &&
hr.Namespace == namespace &&
hr.Name == name &&
hr.HookType == hookType
}
// updateHookStatus updates the status of a hook. Returns true if the hook was modified
func (sc *syncContext) updateHookStatus(hookStatus appv1.ResourceResult) bool {
sc.lock.Lock()
defer sc.lock.Unlock()
for i, prev := range sc.syncRes.Resources {
if !prev.IsHook() {
continue
}
if hookEqual(prev, hookStatus.Group, hookStatus.Kind, hookStatus.Namespace, hookStatus.Name, hookStatus.HookType) {
if reflect.DeepEqual(prev, hookStatus) {
return false
}
if prev.HookPhase != hookStatus.HookPhase {
sc.log.Infof("Hook %s %s/%s hookPhase: %s -> %s", hookStatus.HookType, prev.Kind, prev.Name, prev.HookPhase, hookStatus.HookPhase)
}
if prev.Status != hookStatus.Status {
sc.log.Infof("Hook %s %s/%s status: %s -> %s", hookStatus.HookType, prev.Kind, prev.Name, prev.Status, hookStatus.Status)
}
if prev.Message != hookStatus.Message {
sc.log.Infof("Hook %s %s/%s message: '%s' -> '%s'", hookStatus.HookType, prev.Kind, prev.Name, prev.Message, hookStatus.Message)
}
sc.syncRes.Resources[i] = &hookStatus
return true
}
}
sc.syncRes.Resources = append(sc.syncRes.Resources, &hookStatus)
sc.log.Infof("Set new hook %s %s/%s. phase: %s, message: %s", hookStatus.HookType, hookStatus.Kind, hookStatus.Name, hookStatus.HookPhase, hookStatus.Message)
return true
}
// areHooksCompletedSuccessful checks if all the hooks of the specified type are completed and successful
func areHooksCompletedSuccessful(hookType appv1.HookType, hookStatuses []*appv1.ResourceResult) (bool, bool) {
isSuccessful := true
for _, hookStatus := range hookStatuses {
if !hookStatus.IsHook() {
continue
}
if hookStatus.HookType != hookType {
continue
}
if !hookStatus.HookPhase.Completed() {
return false, false
}
if !hookStatus.HookPhase.Successful() {
isSuccessful = false
}
}
return true, isSuccessful
}
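// Example (illustrative, not part of the original source): callers such as
// runHooks gate phase transitions on both return values:
//
//	completed, successful := areHooksCompletedSuccessful(hookType, sc.syncRes.Resources)
//	if !completed {
//		return false // hooks still in-flight; re-check on the next reconciliation
//	}
//	if !successful {
//		sc.setOperationPhase(appv1.OperationFailed, fmt.Sprintf("%s hook failed", hookType))
//	}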
// terminate looks for any running jobs/workflow hooks and deletes the resource
func (sc *syncContext) terminate() {
terminateSuccessful := true
for _, hookStatus := range sc.syncRes.Resources {
if !hookStatus.IsHook() {
continue
}
if hookStatus.HookPhase.Completed() {
continue
}
if isRunnable(hookStatus) {
hookStatus.HookPhase = appv1.OperationFailed
err := sc.deleteHook(hookStatus.Name, hookStatus.Namespace, hookStatus.GroupVersionKind())
if err != nil {
hookStatus.Message = fmt.Sprintf("Failed to delete %s hook %s/%s: %v", hookStatus.HookType, hookStatus.Kind, hookStatus.Name, err)
terminateSuccessful = false
} else {
hookStatus.Message = fmt.Sprintf("Deleted %s hook %s/%s", hookStatus.HookType, hookStatus.Kind, hookStatus.Name)
}
sc.updateHookStatus(*hookStatus)
}
}
if terminateSuccessful {
sc.setOperationPhase(appv1.OperationFailed, "Operation terminated")
} else {
sc.setOperationPhase(appv1.OperationError, "Operation termination had errors")
}
}
func (sc *syncContext) deleteHook(name, namespace string, gvk schema.GroupVersionKind) error {
apiResource, err := kube.ServerResourceForGroupVersionKind(sc.disco, gvk)
if err != nil {
return err
}
resource := kube.ToGroupVersionResource(gvk.GroupVersion().String(), apiResource)
resIf := kube.ToResourceInterface(sc.dynamicIf, apiResource, resource, namespace)
propagationPolicy := metav1.DeletePropagationForeground
return resIf.Delete(name, &metav1.DeleteOptions{PropagationPolicy: &propagationPolicy})
}

View File

@@ -1,25 +0,0 @@
package controller
import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/hook"
)
func syncPhases(obj *unstructured.Unstructured) []v1alpha1.SyncPhase {
if hook.Skip(obj) {
return nil
} else if hook.IsHook(obj) {
var phases []v1alpha1.SyncPhase
for _, hookType := range hook.Types(obj) {
switch hookType {
case v1alpha1.HookTypePreSync, v1alpha1.HookTypeSync, v1alpha1.HookTypePostSync:
phases = append(phases, v1alpha1.SyncPhase(hookType))
}
}
return phases
} else {
return []v1alpha1.SyncPhase{v1alpha1.SyncPhaseSync}
}
}
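// Example (illustrative, not part of the original source; annotatedPod is a
// hypothetical helper that sets the argocd.argoproj.io/hook annotation):
//
//	syncPhases(test.NewPod())                    // [SyncPhaseSync]
//	syncPhases(annotatedPod("PreSync,PostSync")) // [SyncPhasePreSync, SyncPhasePostSync]
//	syncPhases(annotatedPod("Skip"))             // nil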

View File

@@ -1,46 +0,0 @@
package controller
import (
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/test"
)
func TestSyncPhaseNone(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhaseSync}, syncPhases(&unstructured.Unstructured{}))
}
func TestSyncPhasePreSync(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhasePreSync}, syncPhases(pod("PreSync")))
}
func TestSyncPhaseSync(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhaseSync}, syncPhases(pod("Sync")))
}
func TestSyncPhaseSkip(t *testing.T) {
assert.Nil(t, syncPhases(pod("Skip")))
}
// garbage hooks are still hooks, but have no phases, because some user spelled something wrong
func TestSyncPhaseGarbage(t *testing.T) {
assert.Nil(t, syncPhases(pod("Garbage")))
}
func TestSyncPhasePost(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhasePostSync}, syncPhases(pod("PostSync")))
}
func TestSyncPhaseTwoPhases(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhasePreSync, SyncPhasePostSync}, syncPhases(pod("PreSync,PostSync")))
}
func pod(hookType string) *unstructured.Unstructured {
pod := test.NewPod()
pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": hookType})
return pod
}

View File

@@ -1,115 +0,0 @@
package controller
import (
"fmt"
"strconv"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/hook"
)
// syncTask holds the live and target object. At least one should be non-nil. A targetObj of nil
// indicates the live object needs to be pruned. A liveObj of nil indicates the object has yet to
// be deployed
type syncTask struct {
phase v1alpha1.SyncPhase
liveObj *unstructured.Unstructured
targetObj *unstructured.Unstructured
skipDryRun bool
syncStatus v1alpha1.ResultCode
operationState v1alpha1.OperationPhase
message string
}
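// Example (illustrative, not part of the original source): the three
// live/target combinations this struct encodes:
//
//	syncTask{liveObj: nil, targetObj: obj} // not yet deployed; create it
//	syncTask{liveObj: obj, targetObj: nil} // no longer in git; prune it
//	syncTask{liveObj: old, targetObj: new} // exists; apply the target state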
func ternary(val bool, a, b string) string {
if val {
return a
} else {
return b
}
}
func (t *syncTask) String() string {
return fmt.Sprintf("%s/%d %s %s/%s:%s/%s %s->%s (%s,%s,%s)",
t.phase, t.wave(),
ternary(t.isHook(), "hook", "resource"), t.group(), t.kind(), t.namespace(), t.name(),
ternary(t.liveObj != nil, "obj", "nil"), ternary(t.targetObj != nil, "obj", "nil"),
t.syncStatus, t.operationState, t.message,
)
}
func (t *syncTask) isPrune() bool {
return t.targetObj == nil
}
// returns the target object (if it exists), otherwise the live object.
// Use with caution - often you explicitly want the live object, not the target object.
func (t *syncTask) obj() *unstructured.Unstructured {
return obj(t.targetObj, t.liveObj)
}
func (t *syncTask) wave() int {
text := t.obj().GetAnnotations()[common.AnnotationSyncWave]
if text == "" {
return 0
}
val, err := strconv.Atoi(text)
if err != nil {
return 0
}
return val
}
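// Example (illustrative, not part of the original source): waves come from the
// sync-wave annotation and default to 0 when absent or unparsable:
//
//	argocd.argoproj.io/sync-wave: "5"  -> wave() == 5
//	argocd.argoproj.io/sync-wave: "-1" -> wave() == -1
//	no annotation, or a value like "x" -> wave() == 0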
func (t *syncTask) isHook() bool {
return hook.IsHook(t.obj())
}
func (t *syncTask) group() string {
return t.groupVersionKind().Group
}
func (t *syncTask) kind() string {
return t.groupVersionKind().Kind
}
func (t *syncTask) version() string {
return t.groupVersionKind().Version
}
func (t *syncTask) groupVersionKind() schema.GroupVersionKind {
return t.obj().GroupVersionKind()
}
func (t *syncTask) name() string {
return t.obj().GetName()
}
func (t *syncTask) namespace() string {
return t.obj().GetNamespace()
}
func (t *syncTask) running() bool {
return t.operationState == v1alpha1.OperationRunning
}
func (t *syncTask) completed() bool {
return t.operationState.Completed()
}
func (t *syncTask) successful() bool {
return t.operationState.Successful()
}
func (t *syncTask) hookType() v1alpha1.HookType {
if t.isHook() {
return v1alpha1.HookType(t.phase)
} else {
return ""
}
}

View File

@@ -1,38 +0,0 @@
package controller
import (
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/test"
)
func Test_syncTask_hookType(t *testing.T) {
type fields struct {
phase SyncPhase
liveObj *unstructured.Unstructured
}
tests := []struct {
name string
fields fields
want HookType
}{
{"Empty", fields{SyncPhaseSync, test.NewPod()}, ""},
{"PreSyncHook", fields{SyncPhasePreSync, test.NewHook(HookTypePreSync)}, HookTypePreSync},
{"SyncHook", fields{SyncPhaseSync, test.NewHook(HookTypeSync)}, HookTypeSync},
{"PostSyncHook", fields{SyncPhasePostSync, test.NewHook(HookTypePostSync)}, HookTypePostSync},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
task := &syncTask{
phase: tt.fields.phase,
liveObj: tt.fields.liveObj,
}
hookType := task.hookType()
assert.EqualValues(t, tt.want, hookType)
})
}
}

View File

@@ -1,137 +0,0 @@
package controller
import (
"strings"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
)
// syncPhaseOrder represents the relative order of the sync phases
var syncPhaseOrder = map[v1alpha1.SyncPhase]int{
v1alpha1.SyncPhasePreSync: -1,
v1alpha1.SyncPhaseSync: 0,
v1alpha1.SyncPhasePostSync: 1,
}
// kindOrder represents the correct order of Kubernetes resources within a manifest
// https://github.com/helm/helm/blob/master/pkg/tiller/kind_sorter.go
var kindOrder = map[string]int{}
func init() {
kinds := []string{
"Namespace",
"ResourceQuota",
"LimitRange",
"PodSecurityPolicy",
"PodDisruptionBudget",
"Secret",
"ConfigMap",
"StorageClass",
"PersistentVolume",
"PersistentVolumeClaim",
"ServiceAccount",
"CustomResourceDefinition",
"ClusterRole",
"ClusterRoleBinding",
"Role",
"RoleBinding",
"Service",
"DaemonSet",
"Pod",
"ReplicationController",
"ReplicaSet",
"Deployment",
"StatefulSet",
"Job",
"CronJob",
"Ingress",
"APIService",
}
for i, kind := range kinds {
// make sure none of the above entries are zero, we need that for custom resources
kindOrder[kind] = i - len(kinds)
}
}
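// Example (illustrative, not part of the original source): offsetting by
// -len(kinds) keeps every known kind negative, so an unknown kind (e.g. a
// custom resource) maps to the zero value and sorts after all known kinds:
//
//	kindOrder["Namespace"]  == -27 // first of the 27 known kinds
//	kindOrder["APIService"] == -1  // last known kind
//	kindOrder["MyCRD"]      == 0   // unknown; sorted last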
type syncTasks []*syncTask
func (s syncTasks) Len() int {
return len(s)
}
func (s syncTasks) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
// order is
// 1. phase
// 2. wave
// 3. kind
// 4. name
func (s syncTasks) Less(i, j int) bool {
tA := s[i]
tB := s[j]
d := syncPhaseOrder[tA.phase] - syncPhaseOrder[tB.phase]
if d != 0 {
return d < 0
}
d = tA.wave() - tB.wave()
if d != 0 {
return d < 0
}
a := tA.obj()
b := tB.obj()
// we take advantage of the fact that if the kind is not in the kindOrder map,
// then it will return the default int value of zero, which is the highest value
d = kindOrder[a.GetKind()] - kindOrder[b.GetKind()]
if d != 0 {
return d < 0
}
return a.GetName() < b.GetName()
}
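// Example (illustrative, not part of the original source): under this ordering,
// sort.Sort(tasks) arranges for instance:
//
//	PreSync hook              // phase order -1 precedes every Sync task
//	Sync object, wave -1      // lower wave first within a phase
//	Sync ConfigMap            // kindOrder breaks ties within a wave
//	Sync Pod
//	Sync object "a", then "b" // name is the final tie-breaker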
func (s syncTasks) Filter(predicate func(task *syncTask) bool) (tasks syncTasks) {
for _, task := range s {
if predicate(task) {
tasks = append(tasks, task)
}
}
return tasks
}
func (s syncTasks) Find(predicate func(task *syncTask) bool) *syncTask {
for _, task := range s {
if predicate(task) {
return task
}
}
return nil
}
func (s syncTasks) String() string {
var values []string
for _, task := range s {
values = append(values, task.String())
}
return "[" + strings.Join(values, ", ") + "]"
}
func (s syncTasks) phase() v1alpha1.SyncPhase {
if len(s) > 0 {
return s[0].phase
}
return ""
}
func (s syncTasks) wave() int {
if len(s) > 0 {
return s[0].wave()
}
return 0
}

View File

@@ -1,231 +0,0 @@
package controller
import (
"sort"
"testing"
"github.com/stretchr/testify/assert"
apiv1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
)
func Test_syncTasks_kindOrder(t *testing.T) {
assert.Equal(t, -27, kindOrder["Namespace"])
assert.Equal(t, -1, kindOrder["APIService"])
assert.Equal(t, 0, kindOrder["MyCRD"])
}
func TestSortSyncTask(t *testing.T) {
sort.Sort(unsortedTasks)
assert.Equal(t, sortedTasks, unsortedTasks)
}
var unsortedTasks = syncTasks{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "1",
},
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "b",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "a",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "-1",
},
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
phase: SyncPhasePreSync,
targetObj: &unstructured.Unstructured{},
},
{
phase: SyncPhasePostSync, targetObj: &unstructured.Unstructured{},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
}
var sortedTasks = syncTasks{
{
phase: SyncPhasePreSync,
targetObj: &unstructured.Unstructured{},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "-1",
},
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "a",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "b",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "1",
},
},
},
},
},
{
phase: SyncPhasePostSync,
targetObj: &unstructured.Unstructured{},
},
}
func Test_syncTasks_Filter(t *testing.T) {
tasks := syncTasks{{phase: SyncPhaseSync}, {phase: SyncPhasePostSync}}
assert.Equal(t, syncTasks{{phase: SyncPhaseSync}}, tasks.Filter(func(t *syncTask) bool {
return t.phase == SyncPhaseSync
}))
}
func TestSyncNamespaceAgainstCRD(t *testing.T) {
crd := &syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"kind": "Workflow",
},
}}
namespace := &syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"kind": "Namespace",
},
},
}
unsorted := syncTasks{crd, namespace}
sort.Sort(unsorted)
assert.Equal(t, syncTasks{namespace, crd}, unsorted)
}

View File

@@ -2,11 +2,12 @@ package controller
import (
"fmt"
"reflect"
"sort"
"testing"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
apiv1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -18,7 +19,6 @@ import (
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util/kube"
@@ -45,9 +45,7 @@ func newTestSyncCtx(resources ...*v1.APIResourceList) *syncContext {
config: &rest.Config{},
namespace: test.FakeArgoCDNamespace,
server: test.FakeClusterURL,
syncRes: &v1alpha1.SyncOperationResult{
Revision: "FooBarBaz",
},
syncRes: &v1alpha1.SyncOperationResult{},
syncOp: &v1alpha1.SyncOperation{
Prune: true,
SyncStrategy: &v1alpha1.SyncStrategy{
@@ -106,19 +104,18 @@ func TestSyncCreateInSortedOrder(t *testing.T) {
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 2)
for i := range syncCtx.syncRes.Resources {
result := syncCtx.syncRes.Resources[i]
if result.Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodeSynced, result.Status)
assert.Equal(t, "", result.Message)
} else if result.Kind == "Service" {
assert.Equal(t, "", result.Message)
if syncCtx.syncRes.Resources[i].Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodeSynced, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodeSynced, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestSyncCreateNotWhitelistedClusterResources(t *testing.T) {
@@ -150,9 +147,8 @@ func TestSyncCreateNotWhitelistedClusterResources(t *testing.T) {
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
result := syncCtx.syncRes.Resources[0]
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, result.Status)
assert.Contains(t, result.Message, "not permitted in project")
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, syncCtx.syncRes.Resources[0].Status)
assert.Contains(t, syncCtx.syncRes.Resources[0].Message, "not permitted in project")
}
func TestSyncBlacklistedNamespacedResources(t *testing.T) {
@@ -170,83 +166,73 @@ func TestSyncBlacklistedNamespacedResources(t *testing.T) {
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
result := syncCtx.syncRes.Resources[0]
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, result.Status)
assert.Contains(t, result.Message, "not permitted in project")
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, syncCtx.syncRes.Resources[0].Status)
assert.Contains(t, syncCtx.syncRes.Resources[0].Message, "not permitted in project")
}
func TestSyncSuccessfully(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: test.NewService(),
}, {
Live: pod,
Live: test.NewPod(),
Target: nil,
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 2)
for i := range syncCtx.syncRes.Resources {
result := syncCtx.syncRes.Resources[i]
if result.Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodePruned, result.Status)
assert.Equal(t, "pruned", result.Message)
} else if result.Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodeSynced, result.Status)
assert.Equal(t, "", result.Message)
if syncCtx.syncRes.Resources[i].Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodePruned, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodeSynced, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestSyncDeleteSuccessfully(t *testing.T) {
syncCtx := newTestSyncCtx()
svc := test.NewService()
svc.SetNamespace(test.FakeArgoCDNamespace)
pod := test.NewPod()
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: svc,
Live: test.NewService(),
Target: nil,
}, {
Live: pod,
Live: test.NewPod(),
Target: nil,
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
for i := range syncCtx.syncRes.Resources {
result := syncCtx.syncRes.Resources[i]
if result.Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodePruned, result.Status)
assert.Equal(t, "pruned", result.Message)
} else if result.Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodePruned, result.Status)
assert.Equal(t, "pruned", result.Message)
if syncCtx.syncRes.Resources[i].Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodePruned, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodePruned, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestSyncCreateFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
testSvc := test.NewService()
syncCtx.kubectl = kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{
testSvc.GetName(): {
"test-service": {
Output: "",
Err: fmt.Errorf("foo"),
Err: fmt.Errorf("error: error validating \"test.yaml\": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false"),
},
},
}
testSvc := test.NewService()
testSvc.SetAPIVersion("")
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
@@ -255,9 +241,7 @@ func TestSyncCreateFailure(t *testing.T) {
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
result := syncCtx.syncRes.Resources[0]
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, result.Status)
assert.Equal(t, "foo", result.Message)
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, syncCtx.syncRes.Resources[0].Status)
}
func TestSyncPruneFailure(t *testing.T) {
@@ -266,13 +250,12 @@ func TestSyncPruneFailure(t *testing.T) {
Commands: map[string]kubetest.KubectlOutput{
"test-service": {
Output: "",
Err: fmt.Errorf("foo"),
Err: fmt.Errorf(" error: timed out waiting for \"test-service\" to be synced"),
},
},
}
testSvc := test.NewService()
testSvc.SetName("test-service")
testSvc.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: testSvc,
@@ -280,11 +263,155 @@ func TestSyncPruneFailure(t *testing.T) {
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationFailed, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 1)
result := syncCtx.syncRes.Resources[0]
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, result.Status)
assert.Equal(t, "foo", result.Message)
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, syncCtx.syncRes.Resources[0].Status)
}
func unsortedManifest() []syncTask {
return []syncTask{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
}
}
func sortedManifest() []syncTask {
return []syncTask{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
}
}
func TestSortKubernetesResourcesSuccessfully(t *testing.T) {
unsorted := unsortedManifest()
ks := newKindSorter(unsorted, resourceOrder)
sort.Sort(ks)
expectedOrder := sortedManifest()
assert.Equal(t, len(unsorted), len(expectedOrder))
for i, sorted := range unsorted {
assert.Equal(t, expectedOrder[i], sorted)
}
}
func TestSortManifestHandleNil(t *testing.T) {
task := syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
}
manifest := []syncTask{
{},
task,
}
ks := newKindSorter(manifest, resourceOrder)
sort.Sort(ks)
assert.Equal(t, task, manifest[0])
assert.Nil(t, manifest[1].targetObj)
}
func TestSyncNamespaceAgainstCRD(t *testing.T) {
crd := syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": "argoproj.io/alpha1",
"kind": "Workflow",
},
}}
namespace := syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Namespace",
},
},
}
unsorted := []syncTask{crd, namespace}
ks := newKindSorter(unsorted, resourceOrder)
sort.Sort(ks)
expectedOrder := []syncTask{namespace, crd}
assert.Equal(t, len(unsorted), len(expectedOrder))
for i, sorted := range unsorted {
assert.Equal(t, expectedOrder[i], sorted)
}
}
func TestDontSyncOrPruneHooks(t *testing.T) {
@@ -294,114 +421,23 @@ func TestDontSyncOrPruneHooks(t *testing.T) {
targetPod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync"})
liveSvc := test.NewService()
liveSvc.SetName("dont-prune-me")
liveSvc.SetNamespace(test.FakeArgoCDNamespace)
liveSvc.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync"})
syncCtx.compareResult = &comparisonResult{
hooks: []*unstructured.Unstructured{targetPod, liveSvc},
managedResources: []managedResource{{
Live: nil,
Target: targetPod,
Hook: true,
}, {
Live: liveSvc,
Target: nil,
Hook: true,
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 0)
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
}
// make sure that we do not prune resources with Prune=false
func TestDontPrunePruneFalse(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationSyncOptions: "Prune=false"})
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Live: pod}}}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResultCodePruneSkipped, syncCtx.syncRes.Resources[0].Status)
assert.Equal(t, "ignored (no prune)", syncCtx.syncRes.Resources[0].Message)
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
}
func TestSelectiveSyncOnly(t *testing.T) {
syncCtx := newTestSyncCtx()
pod1 := test.NewPod()
pod1.SetName("pod-1")
pod2 := test.NewPod()
pod2.SetName("pod-2")
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: pod1}},
}
syncCtx.syncResources = []v1alpha1.SyncOperationResource{{Kind: "Pod", Name: "pod-1"}}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 1)
assert.Equal(t, "pod-1", tasks[0].name())
}
func TestUnnamedHooksGetUniqueNames(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
pod := test.NewPod()
pod.SetName("")
pod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync,PostSync"})
syncCtx.compareResult = &comparisonResult{hooks: []*unstructured.Unstructured{pod}}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 2)
assert.Contains(t, tasks[0].name(), "foobarb-presync-")
assert.Contains(t, tasks[1].name(), "foobarb-postsync-")
assert.Equal(t, "", pod.GetName())
}
func TestManagedResourceAreNotNamed(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetName("")
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Target: pod}}}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 1)
assert.Equal(t, "", tasks[0].name())
assert.Equal(t, "", pod.GetName())
}
func TestDeDupingTasks(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "Sync"})
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: pod}},
hooks: []*unstructured.Unstructured{pod},
}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 1)
}
func TestObjectsGetANamespace(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Target: pod}}}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 1)
assert.Equal(t, test.FakeArgoCDNamespace, tasks[0].namespace())
assert.Equal(t, "", pod.GetNamespace())
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestPersistRevisionHistory(t *testing.T) {
@@ -490,74 +526,3 @@ func TestPersistRevisionHistoryRollback(t *testing.T) {
assert.Equal(t, source, updatedApp.Status.History[0].Source)
assert.Equal(t, "abc123", updatedApp.Status.History[0].Revision)
}
func Test_syncContext_isSelectiveSync(t *testing.T) {
type fields struct {
compareResult *comparisonResult
syncResources []SyncOperationResource
}
oneSyncResource := []SyncOperationResource{{}}
oneResource := func(group, kind, name string, hook bool) *comparisonResult {
return &comparisonResult{resources: []v1alpha1.ResourceStatus{{Group: group, Kind: kind, Name: name, Hook: hook}}}
}
tests := []struct {
name string
fields fields
want bool
}{
{"Empty", fields{}, false},
{"OneCompareResult", fields{oneResource("", "", "", false), []SyncOperationResource{}}, true},
{"OneSyncResource", fields{&comparisonResult{}, oneSyncResource}, true},
{"Equal", fields{oneResource("", "", "", false), oneSyncResource}, false},
{"EqualOutOfOrder", fields{&comparisonResult{resources: []v1alpha1.ResourceStatus{{Group: "a"}, {Group: "b"}}}, []SyncOperationResource{{Group: "b"}, {Group: "a"}}}, false},
{"KindDifferent", fields{oneResource("foo", "", "", false), oneSyncResource}, true},
{"GroupDifferent", fields{oneResource("", "foo", "", false), oneSyncResource}, true},
{"NameDifferent", fields{oneResource("", "", "foo", false), oneSyncResource}, true},
{"HookIgnored", fields{oneResource("", "", "", true), []SyncOperationResource{}}, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
sc := &syncContext{
compareResult: tt.fields.compareResult,
syncResources: tt.fields.syncResources,
}
if got := sc.isSelectiveSync(); got != tt.want {
t.Errorf("syncContext.isSelectiveSync() = %v, want %v", got, tt.want)
}
})
}
}
func Test_syncContext_liveObj(t *testing.T) {
type fields struct {
compareResult *comparisonResult
}
type args struct {
obj *unstructured.Unstructured
}
obj := test.NewPod()
obj.SetNamespace("my-ns")
found := test.NewPod()
tests := []struct {
name string
fields fields
args args
want *unstructured.Unstructured
}{
{"None", fields{compareResult: &comparisonResult{managedResources: []managedResource{}}}, args{obj: &unstructured.Unstructured{}}, nil},
{"Found", fields{compareResult: &comparisonResult{managedResources: []managedResource{{Group: obj.GroupVersionKind().Group, Kind: obj.GetKind(), Namespace: obj.GetNamespace(), Name: obj.GetName(), Live: found}}}}, args{obj: obj}, found},
{"EmptyNamespace", fields{compareResult: &comparisonResult{managedResources: []managedResource{{Group: obj.GroupVersionKind().Group, Kind: obj.GetKind(), Name: obj.GetName(), Live: found}}}}, args{obj: obj}, found},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
sc := &syncContext{
compareResult: tt.fields.compareResult,
}
if got := sc.liveObj(tt.args.obj); !reflect.DeepEqual(got, tt.want) {
t.Errorf("syncContext.liveObj() = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -22,22 +22,11 @@ Install:
* [kubectx](https://kubectx.dev)
* [minikube](https://kubernetes.io/docs/setup/minikube/) or Docker for Desktop
!!! warning "Versions"
You will find problems generating code if you do not have the correct versions of `protoc` and `swagger`
```bash
$ protoc --version
libprotoc 3.7.1
~/go/src/github.com/argoproj/argo-cd (ui)
$ swagger version
version: v0.19.0
```
Brew users can quickly install the lot:
```bash
brew tap go-swagger/go-swagger
brew install go dep protobuf kubectl kubectx ksonnet/tap/ks kubernetes-helm jq go-swagger kustomize
brew install go dep protobuf kubectl kubectx ksonnet/tap/ks kubernetes-helm jq go-swagger
```
!!! note "Kustomize"
@@ -50,30 +39,23 @@ export GOPATH=~/go
export PATH=$PATH:$GOPATH/bin
```
Checkout the code:
```bash
go get -u github.com/argoproj/argo-cd
cd ~/go/src/github.com/argoproj/argo-cd
```
Install go dependencies:
```bash
go get github.com/gobuffalo/packr/packr
go get github.com/gogo/protobuf/gogoproto
go get github.com/golang/protobuf/protoc-gen-go
go get github.com/golangci/golangci-lint/cmd/golangci-lint
go get github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
go get github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
go get github.com/jstemmer/go-junit-report
go get github.com/mattn/goreman
go get golang.org/x/tools/cmd/goimports
go get -u github.com/golang/protobuf/protoc-gen-go
go get -u github.com/go-swagger/go-swagger/cmd/swagger
go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
go get -u github.com/golangci/golangci-lint/cmd/golangci-lint
go get -u github.com/mattn/goreman
go get -u gotest.tools/gotestsum
```
## Building
```bash
go get -u github.com/argoproj/argo-cd
dep ensure
make
```
@@ -108,6 +90,15 @@ kubectl -n argocd scale deployment.extensions/argocd-server --replicas 0
kubectl -n argocd scale deployment.extensions/argocd-redis --replicas 0
```
Then check out and build the UI next to your code:
```
cd ~/go/src/github.com/argoproj
git clone git@github.com:argoproj/argo-cd-ui.git
```
Follow the UI's [README](https://github.com/argoproj/argo-cd-ui/blob/master/README.md) to build it.
Note: you'll need to use the https://localhost:6443 cluster now.
Then start the services:
@@ -143,7 +134,7 @@ Add your username as the environment variable, e.g. to your `~/.bash_profile`:
export IMAGE_NAMESPACE=alexcollinsintuit
```
If you have not built the UI image (see [the UI README](https://github.com/argoproj/argo-cd/blob/master/ui/README.md)), then do the following:
If you have not built the UI image (see [the UI README](https://github.com/argoproj/argo-cd-ui/blob/master/README.md)), then do the following:
```bash
docker pull argoproj/argocd-ui:latest
@@ -180,3 +171,16 @@ kubectl -n argocd scale deployment.extensions/argocd-redis --replicas 1
```
Now you can set-up the port-forwarding and open the UI or CLI.
## Pre-commit Checks
Before you commit, make sure you've formatted and linted your code, or your PR will fail CI:
```bash
STAGED_GO_FILES=$(git diff --cached --name-only | grep ".go$")
gofmt -w $STAGED_GO_FILES
make codegen
make precommit ;# lint and test
```
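If you'd like these steps to run automatically, here is a minimal `.git/hooks/pre-commit` sketch (an illustration only, assuming the same `gofmt` workflow as above):
```bash
#!/usr/bin/env bash
set -o errexit
# Format staged Go files, then re-stage them so the formatted
# version is what actually gets committed.
STAGED_GO_FILES=$(git diff --cached --name-only | grep ".go$" || true)
if [ -n "$STAGED_GO_FILES" ]; then
  gofmt -w $STAGED_GO_FILES
  git add $STAGED_GO_FILES
fi
```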

View File

@@ -2,5 +2,5 @@
1. Make sure you've read [understanding the basics](understand_the_basics.md) and the [getting started guide](getting_started.md).
2. Looked for an answer in [the frequently asked questions](faq.md).
3. Ask a question in [the Argo CD Slack channel ⧉](https://argoproj.github.io/community/join-slack).
3. Ask a question in [the Argo CD Slack channel ⧉](https://argoproj.slack.com/messages/CASHNF6MS).
4. [Read issues, report a bug, or request a feature ⧉](https://github.com/argoproj/argo-cd/issues)

Binary files not shown: 8 images removed (51 KiB, 39 KiB, 46 KiB, 26 KiB, 31 KiB, 34 KiB, 10 KiB, and 5.1 KiB before deletion).

View File

@@ -1,41 +0,0 @@
# CI
## Troubleshooting Builds
### "Check nothing has changed" step fails
If your PR fails the `codegen` CI step, you can either:
(1) Simple - download the `codegen.patch` file from CircleCI and apply it:
![download codegen patch file](../assets/download-codegen-patch-file.png)
```bash
git apply codegen.patch
git commit -am "Applies codegen patch"
```
(2) Advanced - if you have the tools installed (see the contributing guide), run the following:
```bash
make pre-commit
git commit -am 'Ran pre-commit checks'
```
## Updating The Builder Image
Login to Docker Hub:
```bash
docker login
```
Build image:
```bash
make builder-image IMAGE_NAMESPACE=argoproj IMAGE_TAG=v1.0.0
```
## Public CD
[https://cd.apps.argoproj.io/](https://cd.apps.argoproj.io/)

View File

@@ -1,107 +1,48 @@
# Releasing
Make sure you are logged into Docker Hub:
```bash
docker login
```
Export the upstream repository and branch name, e.g.:
```bash
REPO=upstream ;# or origin
BRANCH=release-1.0
```
Set the `VERSION` environment variable:
```bash
# release candidate
VERSION=v1.0.0-rc1
# GA release
VERSION=v1.0.0
```
If not already created, create UI release branch:
1. Tag, build, and push argo-cd-ui
```bash
cd argo-cd-ui
git checkout -b $BRANCH
git checkout -b release-X.Y
git tag vX.Y.Z
git push upstream release-X.Y --tags
IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z DOCKER_PUSH=true yarn docker
```
Tag UI:
2. Create release-X.Y branch (if creating initial X.Y release)
```bash
git tag $VERSION
git push $REPO $BRANCH --tags
IMAGE_NAMESPACE=argoproj IMAGE_TAG=$VERSION DOCKER_PUSH=true yarn docker
git checkout -b release-X.Y
git push upstream release-X.Y
```
If not already created, create release branch:
3. Update VERSION and manifests with new version
```bash
cd argo-cd
git checkout -b $BRANCH
git push $REPO $BRANCH
vi VERSION # ensure value is desired X.Y.Z semantic version
make manifests IMAGE_TAG=vX.Y.Z
git commit -a -m "Update manifests to vX.Y.Z"
git push upstream release-X.Y
```
Update `VERSION` and manifests with new version:
4. Tag, build, and push release to docker hub
```bash
echo ${VERSION:1} > VERSION
make manifests IMAGE_TAG=$VERSION
git commit -am "Update manifests to $VERSION"
git push $REPO $BRANCH
git tag vX.Y.Z
make release IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z DOCKER_PUSH=true
git push upstream vX.Y.Z
```
Tag, build, and push release to Docker Hub
```bash
git tag $VERSION
make release IMAGE_NAMESPACE=argoproj IMAGE_TAG=$VERSION DOCKER_PUSH=true
git push $REPO $VERSION
```
Update [GitHub releases](https://github.com/argoproj/argo-cd/releases) with:
* Getting started (copy from previous release)
* Changelog
* Binaries (e.g. dist/argocd-darwin-amd64).
If GA, update `stable` tag:
```bash
git tag stable --force && git push $REPO stable --force
```
If GA, update Brew formula:
5. Update argocd brew formula
```bash
git clone https://github.com/argoproj/homebrew-tap
cd homebrew-tap
./update.sh ~/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64
git commit -a -m "Update argocd to $VERSION"
git commit -a -m "Update argocd to vX.Y.Z"
git push
```
### Verify
Locally:
```bash
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/$VERSION/manifests/install.yaml
6. Update documentation:
* Edit CHANGELOG.md with release notes
* Update `stable` tag
```
Follow the [Getting Started Guide](../getting_started/).
If GA:
```bash
brew upgrade argocd
/usr/local/bin/argocd version
git tag stable --force && git push upstream stable --force
```
Sync Argo CD in [https://cd.apps.argoproj.io/applications/argo-cd](https://cd.apps.argoproj.io/applications/argo-cd).
Deploy the [site](site.md).
* Create GitHub release from new tag and upload binaries (e.g. dist/argocd-darwin-amd64)

View File

@@ -17,34 +17,3 @@ You can observe the tests by using the UI [http://localhost:8080/applications](h
The tests are executed by the Argo Workflow defined in `.argo-ci/ci.yaml`. The CI job builds an Argo CD image, deploys the Argo CD components into a throw-away Kubernetes cluster provisioned
using k3s, and runs the e2e tests against it.
## Test Isolation
Some effort has been made to balance test isolation with speed. Tests are isolated as follows; each test gets:
* A random 5 character ID.
* A unique Git repository containing the `testdata` in `/tmp/argocd-e2e/${id}`.
* A namespace `argocd-e2e-ns-${id}`.
* A primary name for the app `argocd-e2e-${id}`.
## Troubleshooting
**Tests fail to delete `argocd-e2e-ns-*` namespaces.**
This may be due to the metrics server. Run:
```bash
kubectl api-resources
```
If it exits with status code 1, run:
```bash
kubectl delete apiservice v1beta1.metrics.k8s.io
```
Remove `/spec/finalizers` from the namespace:
```bash
kubectl edit ns argocd-e2e-ns-*
```
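If editing the namespace by hand is tedious, the finalizers can also be cleared via the finalize subresource. This is a hedged sketch (the namespace name is a placeholder, and it assumes `jq` is installed):
```bash
NS=argocd-e2e-ns-abcde ;# substitute the stuck namespace name
kubectl get ns "$NS" -o json \
  | jq 'del(.spec.finalizers)' \
  | kubectl replace --raw "/api/v1/namespaces/$NS/finalize" -f -
```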

View File

@@ -56,9 +56,3 @@ KUBECONFIG=/tmp/config kubectl get pods # test connection manually
```
Now you can manually verify that cluster is accessible from the Argo CD pod.
## How Can I Terminate A Sync?
To terminate the sync, click on the "synchronisation" then "terminate":
![Synchronization](assets/synchronization-button.png) ![Terminate](assets/terminate-button.png)

View File

@@ -1,82 +0,0 @@
# Cluster Bootstrapping
This guide is for operators who have already installed Argo CD, have a new cluster, and are looking to install many applications in that cluster.
There's no one particular pattern to solve this problem, e.g. you could write a script to create your applications, or you could even manually create them. However, users of Argo CD tend to use the **application of applications pattern**.
## Application Of Applications Pattern
[Declaratively](declarative-setup.md) specify one Argo CD application that consists only of other applications.
![Application of Applications](../assets/application-of-applications.png)
### Helm Example
This example shows how to use Helm to achieve this. You can, of course, use another tool if you like.
A typical layout of your Git repository for this might be:
```
├── Chart.yaml
├── templates
│   ├── guestbook.yaml
│   ├── helm-dependency.yaml
│   ├── helm-guestbook.yaml
│   └── kustomize-guestbook.yaml
└── values.yaml
```
`Chart.yaml` is boilerplate.
`templates` contains one file for each application, roughly:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: guestbook
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
destination:
namespace: argocd
server: {{ .Values.spec.destination.server }}
project: default
source:
path: guestbook
repoURL: https://github.com/argoproj/argocd-example-apps
targetRevision: {{ .Values.spec.source.targetRevision }}
syncPolicy:
automated:
prune: true
```
In this example, I've set the sync policy to automated + prune, so that applications are automatically created, synced, and deleted when the manifest is changed, but you may wish to disable this. I've also added the finalizer, which will ensure that your applications are deleted correctly.
As you probably want to override the cluster server and maybe the revision, these are templated values.
`values.yaml` contains the default values:
```yaml
spec:
destination:
server: https://kubernetes.default.svc
source:
targetRevision: HEAD
```
Finally, you need to create your application, e.g.:
```bash
argocd app create applications \
--dest-namespace argocd \
--dest-server https://kubernetes.default.svc \
--repo https://github.com/argoproj/argocd-example-apps.git \
--path applications \
--sync-policy automated
```
In this example, I excluded auto-prune, as this would result in all applications being deleted if someone accidentally deleted the *application of applications*.
View [the example on Github](https://github.com/argoproj/argocd-example-apps/tree/master/applications).

View File

@@ -48,12 +48,10 @@ metadata:
- resources-finalizer.argocd.argoproj.io
```
### Application of Applications
### App of Apps of Apps
You can create an application that creates other applications, which in turn can create other applications.
This allows you to declaratively manage a group of applications that can be deployed and configured in concert.
See [cluster bootstrapping](cluster-bootstrapping.md).
## Projects
The AppProject CRD is the Kubernetes resource object representing a logical grouping of applications.
@@ -389,7 +387,7 @@ Example of `kustomization.yaml`:
```yaml
bases:
- github.com/argoproj/argo-cd//manifests/cluster-install?ref=v1.0.1
- github.com/argoproj/argo-cd//manifests/cluster-install?ref=v0.10.6
# additional resources like ingress rules, cluster and repository secrets.
resources:

View File

@@ -1,25 +0,0 @@
# Disaster Recovery
The `argocd-util` CLI can be used to import and export all Argo CD data.
Make sure you have `~/.kube/config` pointing to your Argo CD cluster.
Figure out what version of Argo CD you're running:
```bash
argocd version | grep server
# ...
export VERSION=v1.0.1
```
Export to a backup:
```bash
docker run -v ~/.kube:/home/argocd/.kube --rm argoproj/argocd:$VERSION argocd-util export > backup.yaml
```
Import from a backup:
```bash
docker run -v ~/.kube:/home/argocd/.kube --rm argoproj/argocd:$VERSION argocd-util import - < backup.yaml
```
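Before relying on a backup, a quick sanity check can confirm it actually captured resources. A minimal sketch (it simply counts top-level `kind:` fields in the exported YAML stream):
```bash
grep -c '^kind:' backup.yaml
```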

View File

@@ -1,19 +0,0 @@
# High Availability
Argo CD is largely stateless; all data is persisted as Kubernetes objects, which in turn are stored in Kubernetes' etcd. Redis is only used as a throw-away cache and can be lost. When lost, it will be rebuilt without loss of service.
A set of HA manifests is provided for users who wish to run Argo CD in a highly available manner. These run more containers, and run Redis in HA mode.
[Manifests ⧉](https://github.com/argoproj/argo-cd/tree/master/manifests)
!!! note
The HA installation will require at least three different nodes due to pod anti-affinity roles in the specs.
## Scaling Up
You might scale up some Argo CD services in the following circumstances:
* The `argocd-repo-server` can scale up when there is too much contention on a single git repo (e.g. many apps defined in a single git repo).
* The `argocd-server` can scale up to support more front-end load.
All other services should run with their pre-determined number of replicas. The `argocd-application-controller` must not be scaled up, because multiple controllers would fight over the same applications. The `argocd-dex-server` uses an in-memory database, so two or more instances would have inconsistent data. `argocd-redis` is pre-configured with the understanding of only three total Redis servers/sentinels.
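For example, to scale the repo server to three replicas (a sketch; pick a replica count that matches your Git contention):
```bash
kubectl -n argocd scale deployment.extensions/argocd-repo-server --replicas 3
```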

View File

@@ -1,46 +0,0 @@
# App Deletion
Apps can be deleted with or without a cascade option. A **cascade delete** deletes both the app and its resources, rather than only the app.
## Deletion Using `argocd`
To perform a non-cascade delete:
```bash
argocd app delete APPNAME
```
To perform a cascade delete:
```bash
argocd app delete APPNAME --cascade
```
## Deletion Using `kubectl`
To perform a non-cascade delete:
```bash
kubectl delete app APPNAME
```
To perform a cascade delete, set the finalizer, e.g. using `kubectl patch`:
```bash
kubectl patch app APPNAME -p '{"metadata": {"finalizers": ["resources-finalizer.argocd.argoproj.io"]}}' --type merge
kubectl delete app APPNAME
```
## About The Deletion Finalizer
For the technical amongst you, the Argo CD application controller watches for this finalizer:
```yaml
metadata:
finalizers:
- resources-finalizer.argocd.argoproj.io
```
When this finalizer is present, Argo CD's app controller will delete both the app and its resources.
When you invoke `argocd app delete` with `--cascade`, the finalizer is added automatically.
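Conversely, if the finalizer is already set and you want a non-cascade delete, you can remove it before deleting. A hedged sketch (assuming the finalizer is currently the only one on the app):
```bash
kubectl patch app APPNAME --type json -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
kubectl delete app APPNAME
```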

View File

@@ -1,21 +1,144 @@
# Tools
# Application Source Types
## Production
Argo CD supports several different ways in which kubernetes manifests can be defined:
Argo CD supports several different ways in which Kubernetes manifests can be defined:
* **[Ksonnet](https://ksonnet.io)** applications
* **[Kustomize](https://kustomize.io)** applications
* **[Helm](https://helm.sh)** charts
* **Directory** of YAML/json/jsonnet manifests
* Any custom config management tool configured as a config management plugin
* [Kustomize](kustomize.md) applications
* [Helm](helm.md) charts
* [Ksonnet](ksonnet.md) applications
* A directory of YAML/JSON/Jsonnet manifests
* Any [custom config management tool](config-management-plugins.md) configured as a config management plugin
Some additional considerations should be made when deploying apps of a particular type:
## Development
Argo CD also supports uploading local manifests directly. Since this is an anti-pattern of the
GitOps paradigm, this should only be done for development purposes. A user with an `override` permission is required
to upload manifests locally (typically an admin). All of the different Kubernetes deployment tools above are supported.
To upload a local application:
## Kustomize
Oops. We haven't got around to writing this part yet.
## Ksonnet
### Environments
Ksonnet has a first class concept of an "environment." To create an application from a ksonnet
app directory, an environment must be specified. For example, the following command creates the
"guestbook-default" app, which points to the `default` environment:
```bash
$ argocd app sync APPNAME --local /path/to/dir/
argocd app create guestbook-default --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --env default
```
### Parameters
Ksonnet parameters all belong to a component. For example, the following are the parameters
available in the guestbook app, all of which belong to the `guestbook-ui` component:
```bash
$ ks param list
COMPONENT PARAM VALUE
========= ===== =====
guestbook-ui containerPort 80
guestbook-ui image "gcr.io/heptio-images/ks-guestbook-demo:0.1"
guestbook-ui name "guestbook-ui"
guestbook-ui replicas 1
guestbook-ui servicePort 80
guestbook-ui type "LoadBalancer"
```
When overriding ksonnet parameters in Argo CD, the component name should also be specified in the
`argocd app set` command, in the form of `-p COMPONENT=PARAM=VALUE`. For example:
```bash
argocd app set guestbook-default -p guestbook-ui=image=gcr.io/heptio-images/ks-guestbook-demo:0.1
```
## Helm
### Values Files
Helm has the ability to use a different, or even multiple, "values.yaml" files to derive its
parameters from. Alternate or multiple values files can be specified using the `--values`
flag, which can be repeated to support multiple values files:
```bash
argocd app set helm-guestbook --values values-production.yaml
```
### Helm Parameters
Helm has the ability to set parameter values, which override any values in
a `values.yaml`. For example, `service.type` is a common parameter which is exposed in a Helm chart:
```bash
helm template . --set service.type=LoadBalancer
```
Similarly, Argo CD can override values in `values.yaml` using the `argocd app set` command,
in the form of `-p PARAM=VALUE`. For example:
```bash
argocd app set helm-guestbook -p service.type=LoadBalancer
```
### Helm Hooks
Helm hooks are equivalent in concept to [Argo CD resource hooks](resource_hooks.md). In Helm, a hook
is any normal Kubernetes resource annotated with the `helm.sh/hook` annotation. When Argo CD deploys a
Helm application which contains Helm hooks, all Helm hook resources are currently ignored during
the `kubectl apply` of the manifests. There is an
[open issue](https://github.com/argoproj/argo-cd/issues/355) to map Helm hooks to Argo CD's concept
of Pre/Post/Sync hooks.
### Random Data
Helm templating has the ability to generate random data during chart rendering via the
`randAlphaNum` function. Many helm charts from the [charts repository](https://github.com/helm/charts)
make use of this feature. For example, the following is the secret for the
[redis helm chart](https://github.com/helm/charts/blob/master/stable/redis/templates/secret.yaml):
```yaml
data:
{{- if .Values.password }}
redis-password: {{ .Values.password | b64enc | quote }}
{{- else }}
redis-password: {{ randAlphaNum 10 | b64enc | quote }}
{{- end }}
```
The Argo CD application controller periodically compares Git state against the live state, running
the `helm template <CHART>` command to generate the helm manifests. Because the random value is
regenerated every time the comparison is made, any application which makes use of the `randAlphaNum`
function will always be in an `OutOfSync` state. This can be mitigated by explicitly setting a
value in values.yaml such that the value is stable between each comparison. For example:
```bash
argocd app set redis -p password=abc123
```
## Config Management Plugins
Argo CD allows integrating more config management tools using config management plugins. The following changes are required to configure a new plugin:
* Make sure the required binaries are available in the `argocd-repo-server` pod. The binaries can be added via volume mounts or using a custom image (see [custom_tools](../operator-manual/custom_tools.md)).
* Register a new plugin in `argocd-cm` ConfigMap:
```yaml
data:
configManagementPlugins: |
- name: pluginName
init: # Optional command to initialize application source directory
command: ["sample command"]
args: ["sample args"]
generate: # Command to generate manifests YAML
command: ["sample command"]
args: ["sample args"]
```
The `generate` command must print a valid YAML stream to stdout. Both `init` and `generate` commands are executed inside the application source directory.
Commands have access to system environment variables and the following additional variables:
`ARGOCD_APP_NAME` - the name of the application; `ARGOCD_APP_NAMESPACE` - the destination application namespace.
* Create an application and specify the required config management plugin name.
```bash
argocd app create <appName> --config-management-plugin <pluginName>
```
More config management plugin examples are available in [argocd-example-apps](https://github.com/argoproj/argocd-example-apps/tree/master/plugins).
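As a concrete illustration, the ConfigMap can also be patched from the command line. This is only a sketch: the plugin name is made up, and it assumes `kustomize` and `envsubst` are available in the repo-server image:
```bash
kubectl -n argocd patch configmap argocd-cm --type merge -p '
data:
  configManagementPlugins: |
    - name: envsubst-kustomize   # hypothetical plugin name
      generate:
        command: ["sh", "-c"]
        args: ["kustomize build . | envsubst"]
'
```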

View File

@@ -24,7 +24,7 @@ from your application source code, is highly recommended for the following reaso
unintentionally. By having separate repos, commit access can be given to the source code repo,
and not the application config repo.
5. If you are automating your CI pipeline, pushing manifest changes to the same Git repository can
trigger an infinite loop of build jobs and Git commit triggers. Having a separate repo to push
config changes to prevents this from happening.

View File

@@ -1,34 +0,0 @@
# Compare Options
## Ignoring Resources That Are Extraneous
You may wish to exclude resources from the app's overall sync status under certain circumstances, e.g. if they are generated by a tool. This can be done by adding this annotation:
```yaml
metadata:
annotations:
argocd.argoproj.io/compare-options: IgnoreExtraneous
```
![compare option needs pruning](../assets/compare-option-ignore-needs-pruning.png)
!!! note
This only affects the sync status. If the resource's health is degraded, then the app will also be degraded.
Kustomize has a feature that allows you to generate config maps ([read more ⧉](https://github.com/kubernetes-sigs/kustomize/blob/master/examples/configGeneration.md)). You can set `generatorOptions` to add this annotation so that your app remains in sync:
```yaml
configMapGenerator:
- name: my-map
literals:
- foo=bar
generatorOptions:
annotations:
argocd.argoproj.io/compare-options: IgnoreExtraneous
kind: Kustomization
```
!!! note
`generatorOptions` adds annotations to both config maps and secrets ([read more ⧉](https://github.com/kubernetes-sigs/kustomize/blob/master/examples/generatorOptions.md)).
You may wish to combine this with the [`Prune=false` sync option](sync-options.md).

View File

@@ -1,31 +0,0 @@
# Plugins
Argo CD allows integrating more config management tools using config management plugins. The following changes are required to configure a new plugin:
* Make sure the required binaries are available in the `argocd-repo-server` pod. The binaries can be added via volume mounts or using a custom image (see [custom_tools](../operator-manual/custom_tools.md)).
* Register a new plugin in `argocd-cm` ConfigMap:
```yaml
data:
configManagementPlugins: |
- name: pluginName
init: # Optional command to initialize application source directory
command: ["sample command"]
args: ["sample args"]
generate: # Command to generate manifests YAML
command: ["sample command"]
args: ["sample args"]
```
The `generate` command must print a valid YAML stream to stdout. Both `init` and `generate` commands are executed inside the application source directory.
Commands have access to system environment variables and the following additional variables:
`ARGOCD_APP_NAME` - the name of the application; `ARGOCD_APP_NAMESPACE` - the destination application namespace.
* Create an application and specify the required config management plugin name.
```bash
argocd app create <appName> --config-management-plugin <pluginName>
```
More config management plugin examples are available in [argocd-example-apps](https://github.com/argoproj/argocd-example-apps/tree/master/plugins).

View File

@@ -1,89 +0,0 @@
# Helm
## Values Files
Helm has the ability to use a different, or even multiple, "values.yaml" files to derive its
parameters from. Alternate or multiple values files can be specified using the `--values`
flag, which can be repeated to support multiple values files:
```bash
argocd app set helm-guestbook --values values-production.yaml
```
## Helm Parameters
Helm has the ability to set parameter values, which override any values in
a `values.yaml`. For example, `service.type` is a common parameter which is exposed in a Helm chart:
```bash
helm template . --set service.type=LoadBalancer
```
Similarly, Argo CD can override values in `values.yaml` using the `argocd app set` command,
in the form of `-p PARAM=VALUE`. For example:
```bash
argocd app set helm-guestbook -p service.type=LoadBalancer
```
## Helm Release Name
By default the Helm release name is equal to the name of the Application it belongs to. Sometimes, especially on a centralised Argo CD,
you may want to override that name, and this is possible with the `--release-name` flag on the CLI:
```bash
argocd app set helm-guestbook --release-name myRelease
```
or using the `releaseName` field in YAML:
```yaml
source:
helm:
releaseName: myRelease
```
!!! warning "Important notice on overriding the release name"
Please note that overriding the Helm release name might cause problems when the chart you are deploying uses the `app.kubernetes.io/instance` label.
Argo CD injects this label with the value of the Application name for tracking purposes, so when overriding the release name, the Application name will
stop being equal to the release name. Because Argo CD will overwrite the label with the Application name, some selectors on the resources might
stop working. To avoid this, you can configure Argo CD to use another label for tracking in the [Argo CD configmap argocd-cm.yaml](./../operator-manual/argocd-cm.yaml);
check the lines describing `application.instanceLabelKey`.
## Helm Hooks
Helm hooks are equivalent in concept to [Argo CD resource hooks](resource_hooks.md). In Helm, a hook
is any normal Kubernetes resource annotated with the `helm.sh/hook` annotation. When Argo CD deploys a
Helm application which contains Helm hooks, all Helm hook resources are currently ignored during
the `kubectl apply` of the manifests. There is an
[open issue](https://github.com/argoproj/argo-cd/issues/355) to map Helm hooks to Argo CD's concept
of Pre/Post/Sync hooks.
## Random Data
Helm templating has the ability to generate random data during chart rendering via the
`randAlphaNum` function. Many helm charts from the [charts repository](https://github.com/helm/charts)
make use of this feature. For example, the following is the secret for the
[redis helm chart](https://github.com/helm/charts/blob/master/stable/redis/templates/secret.yaml):
```yaml
data:
{{- if .Values.password }}
redis-password: {{ .Values.password | b64enc | quote }}
{{- else }}
redis-password: {{ randAlphaNum 10 | b64enc | quote }}
{{- end }}
```
The Argo CD application controller periodically compares Git state against the live state, running
the `helm template <CHART>` command to generate the helm manifests. Because the random value is
regenerated every time the comparison is made, any application which makes use of the `randAlphaNum`
function will always be in an `OutOfSync` state. This can be mitigated by explicitly setting a
value in values.yaml such that the value is stable between each comparison. For example:
```bash
argocd app set redis -p password=abc123
```

View File

@@ -1,35 +0,0 @@
# Ksonnet
## Environments
Ksonnet has a first class concept of an "environment." To create an application from a ksonnet
app directory, an environment must be specified. For example, the following command creates the
"guestbook-default" app, which points to the `default` environment:
```bash
argocd app create guestbook-default --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --env default
```
## Parameters
Ksonnet parameters all belong to a component. For example, the following are the parameters
available in the guestbook app, all of which belong to the `guestbook-ui` component:
```bash
$ ks param list
COMPONENT PARAM VALUE
========= ===== =====
guestbook-ui containerPort 80
guestbook-ui image "gcr.io/heptio-images/ks-guestbook-demo:0.1"
guestbook-ui name "guestbook-ui"
guestbook-ui replicas 1
guestbook-ui servicePort 80
guestbook-ui type "LoadBalancer"
```
When overriding ksonnet parameters in Argo CD, the component name should also be specified in the
`argocd app set` command, in the form of `-p COMPONENT=PARAM=VALUE`. For example:
```bash
argocd app set guestbook-default -p guestbook-ui=image=gcr.io/heptio-images/ks-guestbook-demo:0.1
```

View File

@@ -1,24 +0,0 @@
# Kustomize
!!! warning "Kustomize 1 vs 2"
Argo CD supports both versions, and auto-detects them by looking for the `apiVersion`/`kind` fields in `kustomization.yaml`.
You're probably using version 2 now, so make sure you have those fields.
You have three configuration options for Kustomize:
* `namePrefix` is a prefix prepended to resource names in Kustomize apps
* `imageTags` is a list of Kustomize 1.0 image tag overrides
* `images` is a list of Kustomize 2.0 image overrides
To use Kustomize with an overlay, point your path to the overlay.
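For example, setting a name prefix from the CLI might look like this (a sketch; `guestbook-kustomize` is a placeholder app name):
```bash
argocd app set guestbook-kustomize --nameprefix staging-
```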
!!! tip
If you're generating resources, you should read up how to ignore those generated resources using the [`IgnoreExtraneous` compare option](compare-options.md).
## Private Remote Bases
If you have remote bases that are either (a) HTTPS and need a username/password, or (b) SSH and need an SSH private key, then they'll inherit those from the app's repo.
This will work if the remote bases use the same credentials/private key. It will not work if they use different ones. For security reasons your app only ever knows about its own repo (not other teams' or users' repos), and so you won't be able to access other private repos, even if Argo CD knows about them.
Read more about [private repos](private-repositories.md).

View File

@@ -4,7 +4,7 @@
If application manifests are located in a private repository then repository credentials have to be configured. Argo CD supports both HTTP and SSH Git credentials.
### HTTPS Username And Password Credential
### HTTP Username And Password Credential
Private repositories that require a username and password typically have a URL that starts with "https://" rather than "git@" or "ssh://".
@@ -41,34 +41,30 @@ The Argo CD UI don't support configuring SSH credentials. The SSH credentials ca
argocd repo add git@github.com:argoproj/argocd-example-apps.git --ssh-private-key-path ~/.ssh/id_rsa
```
## Self-signed & Untrusted TLS Certificates
## Self-Signed Certificates
We do not currently have first-class support for this. See [#1513](https://github.com/argoproj/argo-cd/issues/1513).
If you are using a self-hosted Git hosting service with a self-signed certificate then you need to disable certificate validation for that Git host.
The following options are available:
As a work-around, you can customize your Argo CD image. See [#1344](https://github.com/argoproj/argo-cd/issues/1344#issuecomment-479811810)
Add repository using Argo CD CLI and `--insecure-ignore-host-key` flag:
## Unknown SSH Hosts
If you are using a privately hosted Git service over SSH, then you have the following options:
```bash
argocd repo add git@github.com:argoproj/argocd-example-apps.git --ssh-private-key-path ~/.ssh/id_rsa
```
(1) You can customize the Argo CD Docker image by adding the host's SSH public key to `/etc/ssh/ssh_known_hosts`. Additional entries to this file can be generated using the `ssh-keyscan` utility (e.g. `ssh-keyscan your-private-git-server.com`). For more information, see the [example](https://github.com/argoproj/argo-cd/tree/master/examples/known-hosts) which demonstrates how `/etc/ssh/ssh_known_hosts` can be customized.
The flag disables certificate validation only for the specified repository.
!!! warning
The `--insecure-ignore-host-key` flag does not work for HTTPS Git URLs. See [#1513](https://github.com/argoproj/argo-cd/issues/1513).
You can add the Git service hostname to `/etc/ssh/ssh_known_hosts` in each Argo CD deployment and disable cert validation for Git SSL URLs. For more information see the
[example](https://github.com/argoproj/argo-cd/tree/master/examples/known-hosts) which demonstrates how `/etc/ssh/ssh_known_hosts` can be customized.
!!! note
The `/etc/ssh/ssh_known_hosts` file should include the Git host on each Argo CD deployment, as well as on any computer where `argocd repo add` is executed. After issue
[#1514](https://github.com/argoproj/argo-cd/issues/1514) is resolved, only the `argocd-repo-server` deployment will have to be customized.
(1) Add repository using Argo CD CLI and `--insecure-ignore-host-key` flag:
```bash
argocd repo add git@github.com:argoproj/argocd-example-apps.git --ssh-private-key-path ~/.ssh/id_rsa --insecure-ignore-host-key
```
!!! warning "Don't use in production"
The `--insecure-ignore-host-key` flag should not be used in production, as it is subject to man-in-the-middle attacks.
!!! warning "This does not work for Kustomize remote bases or custom plugins"
For Kustomize support, see [#827](https://github.com/argoproj/argo-cd/issues/827).
## Declarative Configuration
See [declarative setup](../operator-manual/declarative-setup#Repositories)

View File

@@ -1,8 +1,5 @@
# Resource Hooks
!!! warning
Helm hooks are currently ignored. [Read more](helm.md).
## Overview
Synchronization can be configured using resource hooks. Hooks are ways to interject custom logic before, during,
@@ -43,10 +40,6 @@ The following hooks are defined:
| `PostSync` | Executes after all `Sync` hooks completed and were successful, a successful apply, and all resources in a `Healthy` state. |
## Selective Sync
Hooks are run during [selective sync](selective_sync.md).
## Hook Deletion Policies
Hooks can be deleted in an automatic fashion using the annotation: `argocd.argoproj.io/hook-delete-policy`.

View File

@@ -1,9 +0,0 @@
# Selective Sync
A *selective sync* is one where only some resources are synced. You can choose which resources to sync from the UI:
![selective sync](../assets/selective-sync.png)
When doing so, bear in mind:
* Your sync is not recorded in the history, and so rollback is not possible.
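Selective syncs can also be performed from the CLI using the `--resource` flag, if your CLI version supports it. A hedged sketch (the group:kind:name triple is illustrative):
```bash
argocd app sync guestbook --resource apps:Deployment:guestbook-ui
```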

View File

@@ -1,24 +0,0 @@
# Sync Options
## No Prune Resources
You may wish to prevent an object from being pruned:
```yaml
metadata:
annotations:
argocd.argoproj.io/sync-options: Prune=false
```
In the UI, the pod will simply appear as out-of-sync:
![sync option no prune](../assets/sync-option-no-prune.png)
The sync-status panel shows that pruning was skipped, and why:
![sync option no prune](../assets/sync-option-no-prune-sync-status.png)
!!! note
The app will be out of sync if Argo CD expects a resource to be pruned. You may wish to use this along with [compare options](compare-options.md).
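To experiment with this on a live object, you can set the annotation with `kubectl`. A hedged sketch (assuming a Deployment named `my-deploy`; for resources still tracked in Git, set the annotation in the manifest instead):
```bash
kubectl annotate deployment my-deploy argocd.argoproj.io/sync-options=Prune=false
```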

View File

@@ -1,48 +0,0 @@
# Sync Phases and Waves
<iframe width="560" height="315" src="https://www.youtube.com/embed/zIHe3EVp528" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Argo CD executes a sync operation in a number of steps. At a high level, there are three phases: *pre-sync*, *sync* and *post-sync*.
Within each phase you can have one or more waves, which allow you to ensure certain resources are healthy before subsequent resources are synced.
## How Do I Configure Phases?
Pre-sync and post-sync can only contain hooks. Apply the hook annotation:
```yaml
metadata:
annotations:
argocd.argoproj.io/hook: PreSync
```
[Read more about hooks](resource_hooks.md).
## How Do I Configure Waves?
Specify the wave using the following annotation:
```yaml
metadata:
annotations:
argocd.argoproj.io/sync-wave: "5"
```
Hooks and resources are assigned to wave zero by default. The wave can be negative, so you can create a wave that runs before all other resources.
## How Does It Work?
When Argo CD starts a sync, it orders the resources in the following precedence:
* The phase
* The wave they are in (lower values first)
* By kind (e.g. namespaces first)
* By name
It then determines the number of the next wave to apply: this is the first wave in which any resource is out-of-sync or unhealthy.
It applies the resources in that wave.
It repeats this process until all phases and waves are in sync and healthy.
Because an application can have resources that are unhealthy in the first wave, it may be that the app never becomes healthy.

View File

@@ -10,7 +10,3 @@ func CheckError(err error) {
log.Fatal(err)
}
}
func FailOnErr(_ interface{}, err error) {
CheckError(err)
}

View File

@@ -10,10 +10,3 @@ for Git repositories connected using SSL urls:
!!! note
The `/etc/ssh/ssh_known_hosts` file should include the Git host on each Argo CD deployment, as well as on any computer where `argocd repo add` is executed. After issue
[#1514](https://github.com/argoproj/argo-cd/issues/1514) is resolved, only the `argocd-repo-server` deployment will have to be customized.
For the known_hosts file to work with a custom repository port, you have to obtain the public key using `ssh-keyscan` and hash the file before adding it to the configmap, e.g.:
```
ssh-keyscan -p 1234 git.repo.com > known_hosts
ssh-keygen -Hf known_hosts
cat known_hosts
```

View File

@@ -9,11 +9,6 @@ set -o errexit
set -o nounset
set -o pipefail
# output tool versions
protoc --version
swagger version
jq --version
PROJECT_ROOT=$(cd $(dirname ${BASH_SOURCE})/..; pwd)
CODEGEN_PKG=${CODEGEN_PKG:-$(cd ${PROJECT_ROOT}; ls -d -1 ./vendor/k8s.io/code-generator 2>/dev/null || echo ../code-generator)}
PATH="${PROJECT_ROOT}/dist:${PATH}"
@@ -63,7 +58,7 @@ go build -i -o dist/protoc-gen-grpc-gateway ./vendor/github.com/grpc-ecosystem/g
go build -i -o dist/protoc-gen-swagger ./vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
# Generate server/<service>/(<service>.pb.go|<service>.pb.gw.go)
PROTO_FILES=$(find $PROJECT_ROOT \( -name "*.proto" -and -path '*/server/*' -or -path '*/reposerver/*' -and -name "*.proto" \) | sort)
PROTO_FILES=$(find $PROJECT_ROOT \( -name "*.proto" -and -path '*/server/*' -or -path '*/reposerver/*' -and -name "*.proto" \))
for i in ${PROTO_FILES}; do
# Path to the google API gateway annotations.proto will be different depending if we are
# building natively (e.g. from workspace) vs. part of a docker build.
@@ -121,7 +116,7 @@ clean_swagger() {
/usr/bin/find "${SWAGGER_ROOT}" -name '*.swagger.json' -delete
}
collect_swagger server 26
collect_swagger server 24
clean_swagger server
clean_swagger reposerver
clean_swagger controller

View File

@@ -1,8 +1,4 @@
#! /usr/bin/env bash
set -x
set -o errexit
set -o nounset
set -o pipefail
#!/bin/sh -x -e
SRCROOT="$( CDPATH='' cd -- "$(dirname "$0")/.." && pwd -P )"
AUTOGENMSG="# This is an auto-generated file. DO NOT EDIT"

View File

@@ -30,14 +30,7 @@ spec:
ports:
- containerPort: 8082
readinessProbe:
httpGet:
path: /healthz
port: 8082
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
httpGet:
path: /healthz
tcpSocket:
port: 8082
initialDelaySeconds: 5
periodSeconds: 10

View File

@@ -12,7 +12,7 @@ bases:
images:
- name: argoproj/argocd
newName: argoproj/argocd
newTag: v1.1.2
newTag: v1.0.2
- name: argoproj/argocd-ui
newName: argoproj/argocd-ui
newTag: v1.1.2
newTag: v1.0.2

View File

@@ -38,12 +38,6 @@ spec:
description: DryRun will perform a `kubectl apply --dry-run` without
actually performing the sync
type: boolean
manifests:
description: Manifests is an optional field that overrides sync
source with a local directory for development
items:
type: string
type: array
prune:
description: Prune deletes resources that are no longer tracked
in git
@@ -130,10 +124,6 @@ spec:
type: string
type: object
type: array
releaseName:
description: The Helm release name. If omitted it will use
the application name
type: string
valueFiles:
description: ValuesFiles is a list of Helm value files to
use when generating a template
@@ -171,9 +161,6 @@ spec:
description: ApplicationSourceKustomize holds kustomize specific
options
properties:
commonLabels:
description: CommonLabels adds additional kustomize commonLabels
type: object
imageTags:
description: ImageTags are kustomize 1.0 image tag overrides
items:
@@ -302,20 +289,6 @@ spec:
- jsonPointers
type: object
type: array
info:
description: Infos contains a list of useful information (URLs, email
addresses, and plain text) that relates to the application
items:
properties:
name:
type: string
value:
type: string
required:
- name
- value
type: object
type: array
project:
description: Project is an application project name. Empty name means
that application belongs to 'default' project.
@@ -382,10 +355,6 @@ spec:
type: string
type: object
type: array
releaseName:
description: The Helm release name. If omitted it will use the
application name
type: string
valueFiles:
description: ValuesFiles is a list of Helm value files to use
when generating a template
@@ -422,9 +391,6 @@ spec:
description: ApplicationSourceKustomize holds kustomize specific
options
properties:
commonLabels:
description: CommonLabels adds additional kustomize commonLabels
type: object
imageTags:
description: ImageTags are kustomize 1.0 image tag overrides
items:
@@ -595,10 +561,6 @@ spec:
type: string
type: object
type: array
releaseName:
description: The Helm release name. If omitted it will
use the application name
type: string
valueFiles:
description: ValuesFiles is a list of Helm value files
to use when generating a template
@@ -637,9 +599,6 @@ spec:
description: ApplicationSourceKustomize holds kustomize specific
options
properties:
commonLabels:
description: CommonLabels adds additional kustomize commonLabels
type: object
imageTags:
description: ImageTags are kustomize 1.0 image tag overrides
items:
@@ -695,6 +654,7 @@ spec:
- revision
- deployedAt
- id
- source
type: object
type: array
observedAt: {}
@@ -717,12 +677,6 @@ spec:
description: DryRun will perform a `kubectl apply --dry-run`
without actually performing the sync
type: boolean
manifests:
description: Manifests is an optional field that overrides
sync source with a local directory for development
items:
type: string
type: array
prune:
description: Prune deletes resources that are no longer
tracked in git
@@ -819,10 +773,6 @@ spec:
type: string
type: object
type: array
releaseName:
description: The Helm release name. If omitted it
will use the application name
type: string
valueFiles:
description: ValuesFiles is a list of Helm value
files to use when generating a template
@@ -861,10 +811,6 @@ spec:
description: ApplicationSourceKustomize holds kustomize
specific options
properties:
commonLabels:
description: CommonLabels adds additional kustomize
commonLabels
type: object
imageTags:
description: ImageTags are kustomize 1.0 image tag
overrides
@@ -971,31 +917,18 @@ spec:
group:
type: string
hookPhase:
description: 'the state of any operation associated with
this resource OR hook note: can contain values for non-hook
resources'
type: string
hookType:
description: the type of the hook, empty for non-hook
resources
type: string
kind:
type: string
message:
description: message for the last sync OR operation
type: string
name:
type: string
namespace:
type: string
status:
description: the final result of the sync, this will be
empty if the resource is yet to be applied/pruned and
is always zero-value for hooks
type: string
syncPhase:
description: indicates the particular phase of the sync
that this is for
type: string
version:
type: string
@@ -1076,10 +1009,6 @@ spec:
type: string
type: object
type: array
releaseName:
description: The Helm release name. If omitted it will
use the application name
type: string
valueFiles:
description: ValuesFiles is a list of Helm value files
to use when generating a template
@@ -1118,10 +1047,6 @@ spec:
description: ApplicationSourceKustomize holds kustomize
specific options
properties:
commonLabels:
description: CommonLabels adds additional kustomize
commonLabels
type: object
imageTags:
description: ImageTags are kustomize 1.0 image tag overrides
items:
@@ -1175,6 +1100,7 @@ spec:
type: object
required:
- revision
- source
type: object
required:
- operation
@@ -1313,10 +1239,6 @@ spec:
type: string
type: object
type: array
releaseName:
description: The Helm release name. If omitted it will
use the application name
type: string
valueFiles:
description: ValuesFiles is a list of Helm value files
to use when generating a template
@@ -1355,10 +1277,6 @@ spec:
description: ApplicationSourceKustomize holds kustomize
specific options
properties:
commonLabels:
description: CommonLabels adds additional kustomize
commonLabels
type: object
imageTags:
description: ImageTags are kustomize 1.0 image tag overrides
items:
@@ -1420,6 +1338,8 @@ spec:
type: string
required:
- status
- comparedTo
- revision
type: object
type: object
type: object

View File

@@ -17,7 +17,7 @@ patchesStrategicMerge:
images:
- name: argoproj/argocd
newName: argoproj/argocd
newTag: v1.1.2
newTag: v1.0.2
- name: argoproj/argocd-ui
newName: argoproj/argocd-ui
newTag: v1.1.2
newTag: v1.0.2

View File

@@ -3,7 +3,7 @@
helm dependency update ./chart --skip-refresh
# This step is necessary because we do not want the helm tests to be included
templates=$(tar -tf ./chart/charts/redis-ha-*.tgz | grep 'redis-ha/templates/redis-.*.yaml')
templates=$(tar -tf ./chart/charts/redis-ha-*.tgz 'redis-ha/templates/redis-*.yaml')
helm_execute=""
for tmpl in ${templates}; do
helm_execute="${helm_execute} -x charts/${tmpl}"

Some files were not shown because too many files have changed in this diff Show More