Compare commits


5 Commits

Author SHA1 Message Date
Alexander Matyushentsev
e4d0bd3926 Take into account number of unavailable replicas to decide if deployment is healthy or not (#270)
* Take into account number of unavailable replicas to decide if deployment is healthy or not

* Run one controller for all e2e tests to reduce tests duration

* Apply reviewer notes: use logic from kubectl/rollout_status.go to check deployment health
2018-06-07 11:09:38 -07:00
Jesse Suen
18dc82d14d Remove hard requirement of initializing OIDC app during server startup (resolves #272) 2018-06-07 02:12:36 -07:00
Jesse Suen
e720abb58b Bump version to v0.4.7 2018-06-06 14:28:14 -07:00
Jesse Suen
a2e9a9ee49 Repo names containing underscores were not being accepted (resolves #258) 2018-06-06 14:27:41 -07:00
Jesse Suen
cf3776903d Retry argocd app wait connection errors from EOF watch. Show detailed state changes 2018-06-06 10:45:59 -07:00
313 changed files with 6424 additions and 40588 deletions

View File

@@ -29,19 +29,7 @@ spec:
- name: cmd
value: "{{item}}"
withItems:
- dep ensure && make cli lint
- name: test-coverage
template: ci-builder
arguments:
parameters:
- name: cmd
value: "dep ensure && go get github.com/mattn/goveralls && make test-coverage"
- name: test-e2e
template: ci-builder
arguments:
parameters:
- name: cmd
value: "dep ensure && make test-e2e"
- dep ensure && make cli lint test test-e2e
- name: ci-builder
inputs:
@@ -56,22 +44,12 @@ spec:
container:
image: argoproj/argo-cd-ci-builder:latest
command: [sh, -c]
args: ["mkfifo pipe; tee /tmp/logs.txt < pipe & {{inputs.parameters.cmd}} > pipe"]
args: ["{{inputs.parameters.cmd}}"]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: COVERALLS_TOKEN
valueFrom:
secretKeyRef:
name: coverall-token
key: coverall-token
resources:
requests:
memory: 1024Mi
cpu: 200m
outputs:
artifacts:
- name: logs
path: /tmp/logs.txt
- name: ci-dind
inputs:
@@ -86,7 +64,7 @@ spec:
container:
image: argoproj/argo-cd-ci-builder:latest
command: [sh, -c]
args: ["mkfifo pipe; tee /tmp/logs.txt < pipe & until docker ps; do sleep 3; done && {{inputs.parameters.cmd}} > pipe"]
args: ["until docker ps; do sleep 3; done && {{inputs.parameters.cmd}}"]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: DOCKER_HOST
@@ -101,7 +79,4 @@ spec:
securityContext:
privileged: true
mirrorVolumeMounts: true
outputs:
artifacts:
- name: logs
path: /tmp/logs.txt

1
.gitignore vendored
View File

@@ -7,4 +7,3 @@ dist/
# delve debug binaries
cmd/**/debug
debug.test
coverage.out

View File

@@ -1,279 +1,5 @@
# Changelog
## v0.10.0 (TBD)
### Changes since v0.9:
+ Allow more fine-grained sync (issue #508)
+ Display init container logs (issue #681)
+ Redirect to /auth/login instead of /login when SSO token is used for authentication (issue #348)
+ Support ability to use a helm values files from a URL (issue #624)
+ Support public not-connected repo in app creation UI (issue #426)
+ Use ksonnet CLI instead of ksonnet libs (issue #626)
+ We should be able to select the order of the `yaml` files while creating a Helm App (#664)
* Remove default params from app history (issue #556)
* Update to ksonnet v0.13.0
* Update to kustomize 1.0.8
- API Server fails to return apps due to grpc max message size limit (issue #690)
- App Creation UI for Helm Apps shows only files prefixed with `values-` (issue #663)
- App creation UI should allow specifying values files outside of helm app directory bug (issue #658)
- argocd-server logs credentials in plain text when adding git repositories (issue #653)
- Azure Repos do not work as a repository (issue #643)
- Better update conflict error handling during app editing (issue #685)
- Cluster watch needs to be restarted when CRDs get created (issue #627)
- Credentials not being accepted for Google Source Repositories (issue #651)
- Default project is created without permission to deploy cluster level resources (issue #679)
- Generate role token click resets policy changes (issue #655)
- Input type text instead of password on Connect repo panel (issue #693)
- Metrics endpoint not reachable through the metrics kubernetes service (issue #672)
- Operation stuck in 'in progress' state if application has no resources (issue #682)
- Project should influence options for cluster and namespace during app creation (issue #592)
- Repo server unable to execute ls-remote for private repos (issue #639)
- Resource is always out of sync if it has only 'ksonnet.io/component' label (issue #686)
- Resource nodes are 'jumping' on app details page (issue #683)
- Sync always suggest using latest revision instead of target UI bug (issue #669)
- Temporary ignore service catalog resources (issue #650)
## v0.9.2 (2018-09-28)
* Update to kustomize 1.0.8
- Fix issue where argocd-server logged credentials in plain text during repo add (issue #653)
- Credentials not being accepted for Google Source Repositories (issue #651)
- Azure Repos do not work as a repository (issue #643)
- Temporary ignore service catalog resources (issue #650)
- Normalize policies by always adding space after comma
## v0.9.1 (2018-09-24)
- Repo server unable to execute ls-remote for private repos (issue #639)
## v0.9.0 (2018-09-24)
### Notes about upgrading from v0.8
* Cluster wide resources should be allowed in default project (due to issue #330):
```
argocd project allow-cluster-resource default '*' '*'
```
* Projects now provide the ability to allow or deny deployments of cluster-scoped resources
(e.g. Namespaces, ClusterRoles, CustomResourceDefinitions). When upgrading from v0.8 to v0.9, to
match the behavior of v0.8 (which did not have restrictions on deploying resources) and continue to
allow deployment of cluster-scoped resources, an additional command should be run:
```bash
argocd proj allow-cluster-resource default '*' '*'
```
The above command allows the `default` project to deploy any cluster-scoped resource, which matches
the behavior of v0.8.
* The secret keys in argocd-secret containing the TLS certificate and key have been renamed from
`server.crt` and `server.key` to the standard `tls.crt` and `tls.key` keys. This enables ArgoCD
to integrate better with Ingress and cert-manager. When upgrading to v0.9, the `server.crt` and
`server.key` keys in argocd-secret should be renamed to the new keys.
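For illustration only, a minimal sketch of one way to copy the existing entries into the new key names with kubectl (assumes the default `argocd` namespace):
```bash
# Copy the old base64-encoded cert/key data into the tls.crt/tls.key entries expected by v0.9.
crt=$(kubectl -n argocd get secret argocd-secret -o jsonpath='{.data.server\.crt}')
key=$(kubectl -n argocd get secret argocd-secret -o jsonpath='{.data.server\.key}')
kubectl -n argocd patch secret argocd-secret --type merge \
  -p "{\"data\": {\"tls.crt\": \"$crt\", \"tls.key\": \"$key\"}}"
```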
### Changes since v0.8:
+ Auto-sync option in application CRD instance (issue #79)
+ Support raw jsonnet as an application source (issue #540)
+ Reorder K8s resources to correct creation order (issue #102)
+ Redact K8s secrets from API server payloads (issue #470)
+ Support --in-cluster authentication without providing a kubeconfig (issue #527)
+ Special handling of CustomResourceDefinitions (issue #613)
+ ArgoCD should download helm chart dependencies (issue #582)
+ Export ArgoCD stats as prometheus style metrics (issue #513)
+ Support restricting TLS version (issue #609)
+ Use 'kubectl auth reconcile' before 'kubectl apply' (issue #523)
+ Projects need controls on cluster-scoped resources (issue #330)
+ Support IAM Authentication for managing external K8s clusters (issue #482)
+ Compatibility with cert manager (issue #617)
* Enable TLS for repo server (issue #553)
* Split out dex into its own deployment (instead of sidecar) (issue #555)
+ [UI] Support selection of helm values files in App creation wizard (issue #499)
+ [UI] Support specifying source revision in App creation wizard (issue #503)
+ [UI] Improve resource diff rendering (issue #457)
+ [UI] Indicate number of ready containers in pod (issue #539)
+ [UI] Indicate when app is overriding parameters (issue #503)
+ [UI] Provide a YAML view of resources (issue #396)
+ [UI] Project Role/Token management from UI (issue #548)
+ [UI] App creation wizard should allow specifying source revision (issue #562)
+ [UI] Ability to modify application from UI (issue #615)
+ [UI] indicate when operation is in progress or has failed (issue #566)
- Fix issue where changes were not pulled when tracking a branch (issue #567)
- Lazy enforcement of unknown cluster/namespace restricted resources (issue #599)
- Fix controller hot loop when app source contains bad manifests (issue #568)
- Fix issue where ArgoCD fails to deploy when resources are in a K8s list format (issue #584)
- Fix comparison failure when app contains unregistered custom resource (issue #583)
- Fix issue where helm hooks were being deployed as part of sync (issue #605)
- Fix race conditions in kube.GetResourcesWithLabel and DeleteResourceWithLabel (issue #587)
- [UI] Fix issue where projects filter does not work when application got changed
- [UI] Creating apps from directories is not obvious (issue #565)
- Helm hooks are being deployed as resources (issue #605)
- Disagreement in three way diff calculation (issue #597)
- SIGSEGV in kube.GetResourcesWithLabel (issue #587)
- ArgoCD fails to deploy resources list (issue #584)
- Branch tracking not working properly (issue #567)
- Controller hot loop when application source has bad manifests (issue #568)
## v0.8.2 (2018-09-12)
- Downgrade ksonnet from v0.12.0 to v0.11.0 due to quote unescape regression
- Fix CLI panic when performing an initial `argocd sync/wait`
## v0.8.1 (2018-09-10)
+ [UI] Support selection of helm values files in App creation wizard (issue #499)
+ [UI] Support specifying source revision in App creation wizard (issue #503)
+ [UI] Improve resource diff rendering (issue #457)
+ [UI] Indicate number of ready containers in pod (issue #539)
+ [UI] Indicate when app is overriding parameters (issue #503)
+ [UI] Provide a YAML view of resources (issue #396)
- Fix issue where changes were not pulled when tracking a branch (issue #567)
- Fix controller hot loop when app source contains bad manifests (issue #568)
- [UI] Fix issue where projects filter does not work when application got changed
## v0.8.0 (2018-09-04)
### Notes about upgrading from v0.7
* The RBAC model has been improved to support explicit denies. This means that any previous
RBAC policy rules need to be rewritten to include one extra column with the effect:
`allow` or `deny`. For example, if a rule was written like this:
```
p, my-org:my-team, applications, get, */*
```
It should be rewritten to look like this:
```
p, my-org:my-team, applications, get, */*, allow
```
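For reference, a minimal sketch of updating the policy in place (this assumes the policy is kept in the `argocd-rbac-cm` ConfigMap under a `policy.csv` key, as in the default install manifests):
```bash
# Open the RBAC ConfigMap and append the new allow/deny column to every rule in policy.csv.
kubectl -n argocd edit configmap argocd-rbac-cm
```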
### Changes since v0.7:
+ Support kustomize as an application source (issue #510)
+ Introduce project tokens for automation access (issue #498)
+ Add ability to delete a single application resource to support immutable updates (issue #262)
+ Update RBAC model to support explicit denies (issue #497)
+ Ability to view Kubernetes events related to application projects for auditing
+ Add PVC healthcheck to controller (issue #501)
+ Run all containers as an unprivileged user (issue #528)
* Upgrade ksonnet to v0.12.0
* Add readiness probes to API server (issue #522)
* Use gRPC error codes instead of fmt.Errorf (#532)
- API discovery becomes best effort when partial resource list is returned (issue #524)
- Fix `argocd app wait` printing incorrect Sync output (issue #542)
- Fix issue where argocd could not sync to a tag (#541)
- Fix issue where static assets were browser cached between upgrades (issue #489)
## v0.7.2 (2018-08-21)
- API discovery becomes best effort when partial resource list is returned (issue #524)
## v0.7.1 (2018-08-03)
+ Surface helm parameters to the application level (#485)
+ [UI] Improve application creation wizard (#459)
+ [UI] Show indicator when refresh is still in progress (#493)
* [UI] Improve data loading error notification (#446)
* Infer username from claims during an `argocd relogin` (#475)
* Expand RBAC role to be able to create application events. Fix username claims extraction
- Fix scalability issues with the ListApps API (#494)
- Fix issue where application server was retrieving events from incorrect cluster (#478)
- Fix failure in identifying app source type when path was '.'
- AppProjectSpec SourceRepos mislabeled (#490)
- Failed e2e test was not failing CI workflow
* Fix linux download link in getting_started.md (#487) (@chocopowwwa)
## v0.7.0 (2018-07-27)
+ Support helm charts and yaml directories as an application source
+ Audit trails in the form of API call logs
+ Generate kubernetes events for application state changes
+ Add ksonnet version to version endpoint (#433)
+ Show CLI progress for sync and rollback
+ Make use of dex refresh tokens and store them into local config
+ Expire local superuser tokens when their password changes
+ Add `argocd relogin` command as a convenience around login to current context
- Fix saving default connection status for repos and clusters
- Fix undesired fail-fast behavior of health check
- Fix memory leak in the cluster resource watch
- Health checks for StatefulSets, DaemonSets, and ReplicaSets were failing due to use of wrong converters
## v0.6.2 (2018-07-23)
- Health checks for StatefulSets, DaemonSets, and ReplicaSets were failing due to use of wrong converters
## v0.6.1 (2018-07-18)
- Fix regression where deployment health check incorrectly reported Healthy
+ Intercept dex SSO errors and present them in Argo login page
## v0.6.0 (2018-07-16)
+ Support PreSync, Sync, PostSync resource hooks
+ Introduce Application Projects for finer grain RBAC controls
+ Swagger Docs & UI
+ Support in-cluster deployments using the internal kubernetes service name
+ Refactoring & Improvements
* Improved error handling, status and condition reporting
* Remove installer in favor of kubectl apply instructions
* Add validation when setting application parameters
* Cascade deletion is decided during app deletion, instead of app creation
- Fix git authentication implementation when using SSH key
- app-name label was inadvertently injected into spec.selector if selector was omitted from v1beta1 specs
## v0.5.4 (2018-06-27)
- Refresh flag to sync should be optional, not required
## v0.5.3 (2018-06-20)
+ Support cluster management using the internal k8s API address https://kubernetes.default.svc (#307)
+ Support diffing a local ksonnet app to the live application state (resolves #239) (#298)
+ Add ability to show last operation result in app get. Show path in app list -o wide (#297)
+ Update dependencies: ksonnet v0.11, golang v1.10, debian v9.4 (#296)
+ Add ability to force a refresh of an app during get (resolves #269) (#293)
+ Automatically restart API server upon certificate changes (#292)
## v0.5.2 (2018-06-14)
+ Resource events tab on application details page (#286)
+ Display pod status on application details page (#231)
## v0.5.1 (2018-06-13)
- API server incorrectly composed application fully qualified name for RBAC check (#283)
- UI crash while rendering application operation info if operation failed
## v0.5.0 (2018-06-12)
+ RBAC access control
+ Repository/Cluster state monitoring
+ ArgoCD settings import/export
+ Application creation UI wizard
+ argocd app manifests for printing the application manifests
+ argocd app unset command to unset parameter overrides
+ Fail app sync if prune flag is required (#276)
+ Take into account number of unavailable replicas to decide if deployment is healthy or not #270
+ Add ability to show parameters and overrides in CLI (resolves #240)
- Repo names containing underscores were not being accepted (#258)
- Cookie token was not parsed properly when mixed with other site cookies
## v0.4.7 (2018-06-07)
- Fix argocd app wait health checking logic
## v0.4.6 (2018-06-06)
- Retry argocd app wait connection errors from EOF watch. Show detailed state changes
## v0.4.5 (2018-05-31)
+ Add argocd app unset command to unset parameter overrides
- Cookie token was not parsed properly when mixed with other site cookies
## v0.4.4 (2018-05-30)
+ Add ability to show parameters and overrides in CLI (resolves #240)
+ Add Events API endpoint
+ Issue #238 - add upsert flag to 'argocd app create' command
+ Add repo browsing endpoint (#229)
+ Support subscribing to settings updates and auto-restart of dex and API server
- Issue #233 - Controller does not persist rollback operation result
- App sync frequently fails due to concurrent app modification
## v0.4.3 (2018-05-21)
- Move local branch deletion as part of git Reset() (resolves #185) (#222)
- Fix exit code for app wait (#219)
## v0.4.2 (2018-05-21)
+ Show URL in argocd app get
- Remove interactive context name prompt during login which broke login automation
* Rename force flag to cascade in argocd app delete
## v0.4.1 (2018-05-18)
+ Implemented argocd app wait command
## v0.4.0 (2018-05-17)
+ SSO Integration
+ GitHub Webhook
@@ -286,36 +12,3 @@ RBAC policy rules, need to be rewritten to include one extra column with the eff
* Manifests are memoized in repo server
- Fix connection timeouts to SSH repos
## v0.3.2 (2018-05-03)
+ Application sync should delete 'unexpected' resources #139
+ Update ksonnet to v0.10.1
+ Detect unexpected resources
- Fix: App sync frequently fails due to concurrent app modification #147
- Fix: improve app state comparator: #136, #132
## v0.3.1 (2018-04-24)
+ Add new rollback RPC with numeric identifiers
+ New argo app history and argo app rollback command
+ Switch to gogo/protobuf for golang code generation
- Fix: create .argocd directory during argo login (issue #123)
- Fix: Allow overriding server or namespace separately (issue #110)
## v0.3.0 (2018-04-23)
+ Auth support
+ TLS support
+ DAG-based application view
+ Bulk watch
+ ksonnet v0.10.0-alpha.3
+ kubectl apply deployment strategy
+ CLI improvements for app management
## v0.2.0 (2018-04-03)
+ Rollback UI
+ Override parameters
## v0.1.0 (2018-03-12)
+ Define app in Github with dev and preprod environment using KSonnet
+ Add cluster Diff App with a cluster Deploy app in a cluster
+ Deploy a new version of the app in the cluster
+ App sync based on Github app config change - polling only
+ Basic UI: App diff between Git and k8s cluster for all environments

View File

@@ -1,23 +1,10 @@
## Requirements
Make sure you have the following tools installed:
* [docker](https://docs.docker.com/install/#supported-platforms)
* [golang](https://golang.org/)
* [dep](https://github.com/golang/dep)
* [protobuf](https://developers.google.com/protocol-buffers/)
* [ksonnet](https://github.com/ksonnet/ksonnet#install)
* [helm](https://github.com/helm/helm/releases)
* [kustomize](https://github.com/kubernetes-sigs/kustomize/releases)
* [go-swagger](https://github.com/go-swagger/go-swagger/blob/master/docs/install.md)
* [jq](https://stedolan.github.io/jq/)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
Make sure you have following tools installed [golang](https://golang.org/), [dep](https://github.com/golang/dep), [protobuf](https://developers.google.com/protocol-buffers/),
[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
```
$ brew tap go-swagger/go-swagger
$ brew install go dep protobuf kubectl ksonnet/tap/ks kubernetes-helm jq go-swagger
$ brew install go dep protobuf kubectl
$ go get -u github.com/golang/protobuf/protoc-gen-go
$ go get -u github.com/go-swagger/go-swagger/cmd/swagger
$ go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
$ go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
```
Nice to have [gometalinter](https://github.com/alecthomas/gometalinter) and [goreman](https://github.com/mattn/goreman):
@@ -33,23 +20,6 @@ $ go get -u github.com/argoproj/argo-cd
$ dep ensure
$ make
```
NOTE: The make command can take a while, and we recommend building the specific component you are working on:
* `make cli` - Make the argocd CLI tool
* `make server` - Make the API/repo/controller server
* `make codegen` - Builds protobuf and swagger files
* `make argocd-util` - Make the administrator's utility, used for certain tasks such as import/export
## Generating ArgoCD manifests for a specific image repository/tag
During development, the `update-manifests.sh` script can be used to conveniently regenerate the
ArgoCD installation manifests with a customized image namespace and tag. This enables developers
to easily apply manifests that use the images they pushed into their personal container
repository.
```
$ IMAGE_NAMESPACE=jessesuen IMAGE_TAG=latest ./hack/update-manifests.sh
$ kubectl apply -n argocd -f ./manifests/install.yaml
```
## Running locally
@@ -68,10 +38,3 @@ $ kubectl create -f install/manifests/01_application-crd.yaml
```
$ goreman start
```
## Troubleshooting
* Ensure argocd is installed: ./dist/argocd install
* Ensure you're logged in: ./dist/argocd login --username admin --password <whatever password you set at install> localhost:8080
* Ensure that roles are configured: kubectl create -f install/manifests/02c_argocd-rbac-cm.yaml
* Ensure minikube is running: minikube stop && minikube start
* Ensure Argo CD is aware of minikube: ./dist/argocd cluster add minikube

13
COPYRIGHT Normal file
View File

@@ -0,0 +1,13 @@
Copyright 2017-2018 The Argo Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,136 +0,0 @@
####################################################################################################
# Builder image
# Initial stage which pulls and prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM golang:1.10.3 as builder
RUN apt-get update && apt-get install -y \
git \
make \
wget \
gcc \
zip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /tmp
# Install docker
ENV DOCKER_VERSION=18.06.0
RUN curl -O https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}-ce.tgz && \
tar -xzf docker-${DOCKER_VERSION}-ce.tgz && \
mv docker/docker /usr/local/bin/docker && \
rm -rf ./docker
# Install dep
ENV DEP_VERSION=0.5.0
RUN wget https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 -O /usr/local/bin/dep && \
chmod +x /usr/local/bin/dep
# Install gometalinter
RUN curl -sLo- https://github.com/alecthomas/gometalinter/releases/download/v2.0.5/gometalinter-2.0.5-linux-amd64.tar.gz | \
tar -xzC "$GOPATH/bin" --exclude COPYING --exclude README.md --strip-components 1 -f- && \
ln -s $GOPATH/bin/gometalinter $GOPATH/bin/gometalinter.v2
# Install packr
ENV PACKR_VERSION=1.13.2
RUN wget https://github.com/gobuffalo/packr/releases/download/v${PACKR_VERSION}/packr_${PACKR_VERSION}_linux_amd64.tar.gz && \
tar -vxf packr*.tar.gz -C /tmp/ && \
mv /tmp/packr /usr/local/bin/packr
# Install kubectl
RUN curl -L -o /usr/local/bin/kubectl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
chmod +x /usr/local/bin/kubectl
# Install ksonnet
# NOTE: we frequently switch between tip of master ksonnet vs. official builds. Comment/uncomment
# the corresponding section to switch between the two options:
# Option 1: build ksonnet ourselves
#RUN go get -v -u github.com/ksonnet/ksonnet && mv ${GOPATH}/bin/ksonnet /usr/local/bin/ks
# Option 2: use official tagged ksonnet release
ENV KSONNET_VERSION=0.13.0
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /usr/local/bin/ks
# Install helm
ENV HELM_VERSION=2.11.0
RUN wget https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
tar -C /tmp/ -xf helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
mv /tmp/linux-amd64/helm /usr/local/bin/helm
# Install kustomize
ENV KUSTOMIZE_VERSION=1.0.10
RUN curl -L -o /usr/local/bin/kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/kustomize
ENV AWS_IAM_AUTHENTICATOR_VERSION=0.3.0
RUN curl -L -o /usr/local/bin/aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.3.0/heptio-authenticator-aws_${AWS_IAM_AUTHENTICATOR_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/aws-iam-authenticator
####################################################################################################
# ArgoCD Build stage which performs the actual build of ArgoCD binaries
####################################################################################################
FROM golang:1.10.3 as argocd-build
COPY --from=builder /usr/local/bin/dep /usr/local/bin/dep
COPY --from=builder /usr/local/bin/packr /usr/local/bin/packr
# A dummy directory is created under $GOPATH/src/dummy so we are able to use dep
# to install all the packages of our dep lock file
COPY Gopkg.toml ${GOPATH}/src/dummy/Gopkg.toml
COPY Gopkg.lock ${GOPATH}/src/dummy/Gopkg.lock
RUN cd ${GOPATH}/src/dummy && \
dep ensure -vendor-only && \
mv vendor/* ${GOPATH}/src/ && \
rmdir vendor
# Perform the build
WORKDIR /go/src/github.com/argoproj/argo-cd
COPY . .
ARG MAKE_TARGET="cli server controller repo-server argocd-util"
RUN make ${MAKE_TARGET}
####################################################################################################
# Final image
####################################################################################################
FROM debian:9.5-slim
RUN groupadd -g 999 argocd && \
useradd -r -u 999 -g argocd argocd && \
mkdir -p /home/argocd && \
chown argocd:argocd /home/argocd && \
apt-get update && \
apt-get install -y git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY --from=builder /usr/local/bin/ks /usr/local/bin/ks
COPY --from=builder /usr/local/bin/helm /usr/local/bin/helm
COPY --from=builder /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY --from=builder /usr/local/bin/kustomize /usr/local/bin/kustomize
COPY --from=builder /usr/local/bin/aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
# workaround ksonnet issue https://github.com/ksonnet/ksonnet/issues/298
ENV USER=argocd
COPY --from=argocd-build /go/src/github.com/argoproj/argo-cd/dist/* /usr/local/bin/
# Symlink argocd binaries under / for backwards compatibility with callers that expect them there
RUN ln -s /usr/local/bin/argocd /argocd && \
ln -s /usr/local/bin/argocd-server /argocd-server && \
ln -s /usr/local/bin/argocd-util /argocd-util && \
ln -s /usr/local/bin/argocd-application-controller /argocd-application-controller && \
ln -s /usr/local/bin/argocd-repo-server /argocd-repo-server
USER argocd
RUN helm init --client-only
WORKDIR /home/argocd
ARG BINARY
CMD ${BINARY}

83
Dockerfile-argocd Normal file
View File

@@ -0,0 +1,83 @@
FROM debian:9.3 as builder
RUN apt-get update && apt-get install -y \
git \
make \
wget \
gcc \
zip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install go
ENV GO_VERSION 1.9.3
ENV GO_ARCH amd64
ENV GOPATH /root/go
ENV PATH ${GOPATH}/bin:/usr/local/go/bin:${PATH}
RUN wget https://storage.googleapis.com/golang/go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \
tar -C /usr/local/ -xf /go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \
rm /go${GO_VERSION}.linux-${GO_ARCH}.tar.gz
# Install protoc, dep, packr
ENV PROTOBUF_VERSION 3.5.1
RUN cd /usr/local && \
wget https://github.com/google/protobuf/releases/download/v${PROTOBUF_VERSION}/protoc-${PROTOBUF_VERSION}-linux-x86_64.zip && \
unzip protoc-*.zip && \
wget https://github.com/golang/dep/releases/download/v0.4.1/dep-linux-amd64 -O /usr/local/bin/dep && \
chmod +x /usr/local/bin/dep && \
wget https://github.com/gobuffalo/packr/releases/download/v1.10.4/packr_1.10.4_linux_amd64.tar.gz && \
tar -vxf packr*.tar.gz -C /tmp/ && \
mv /tmp/packr /usr/local/bin/packr
# A dummy directory is created under $GOPATH/src/dummy so we are able to use dep
# to install all the packages of our dep lock file
COPY Gopkg.toml ${GOPATH}/src/dummy/Gopkg.toml
COPY Gopkg.lock ${GOPATH}/src/dummy/Gopkg.lock
RUN cd ${GOPATH}/src/dummy && \
dep ensure -vendor-only && \
mv vendor/* ${GOPATH}/src/ && \
rmdir vendor
# Perform the build
WORKDIR /root/go/src/github.com/argoproj/argo-cd
COPY . .
ARG MAKE_TARGET="cli server controller repo-server argocd-util"
RUN make ${MAKE_TARGET}
##############################################################
# This stage will pull in or build any CLI tooling we need for our final image
FROM golang:1.10 as cli-tooling
# NOTE: we frequently switch between tip of master ksonnet vs. official builds. Comment/uncomment
# the corresponding section to switch between the two options:
# Option 1: build ksonnet ourselves
#RUN go get -v -u github.com/ksonnet/ksonnet && mv ${GOPATH}/bin/ksonnet /ks
# Option 2: use official tagged ksonnet release
ENV KSONNET_VERSION=0.10.2
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /ks
RUN curl -o /kubectl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
chmod +x /kubectl
##############################################################
FROM debian:9.3
RUN apt-get update && apt-get install -y git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY --from=cli-tooling /ks /usr/local/bin/ks
COPY --from=cli-tooling /kubectl /usr/local/bin/kubectl
# workaround ksonnet issue https://github.com/ksonnet/ksonnet/issues/298
ENV USER=root
COPY --from=builder /root/go/src/github.com/argoproj/argo-cd/dist/* /
ARG BINARY
CMD /${BINARY}

22
Dockerfile-ci-builder Normal file
View File

@@ -0,0 +1,22 @@
FROM golang:1.9.2
WORKDIR /tmp
RUN curl -O https://get.docker.com/builds/Linux/x86_64/docker-1.13.1.tgz && \
tar -xzf docker-1.13.1.tgz && \
mv docker/docker /usr/local/bin/docker && \
rm -rf ./docker && \
go get -u github.com/golang/dep/cmd/dep && \
go get -u gopkg.in/alecthomas/gometalinter.v2 && \
gometalinter.v2 --install
# Install kubectl
RUN curl -o /kubectl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
chmod +x /kubectl && mv /kubectl /usr/local/bin/kubectl
# Install ksonnet
ENV KSONNET_VERSION=0.10.2
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /usr/local/bin/ks && \
rm -rf /tmp/ks_${KSONNET_VERSION}

956
Gopkg.lock generated

File diff suppressed because it is too large

View File

@@ -1,63 +1,48 @@
# Packages should only be added to the following list when we use them *outside* of our go code.
# (e.g. we want to build the binary to invoke as part of the build process, such as in
# generate-proto.sh). Normal use of golang packages should be added via `dep ensure`, and pinned
# with a [[constraint]] or [[override]] when version is important.
required = [
"github.com/golang/protobuf/protoc-gen-go",
"github.com/gogo/protobuf/protoc-gen-gofast",
"github.com/gogo/protobuf/protoc-gen-gogofast",
"k8s.io/code-generator/cmd/go-to-protobuf",
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway",
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger",
"golang.org/x/sync/errgroup",
"k8s.io/code-generator/cmd/go-to-protobuf",
]
[[constraint]]
name = "google.golang.org/grpc"
version = "1.15.0"
[[constraint]]
name = "github.com/gogo/protobuf"
version = "1.1.1"
# override github.com/grpc-ecosystem/go-grpc-middleware's constraint on master
[[override]]
name = "github.com/golang/protobuf"
version = "1.2.0"
version = "1.9.2"
[[constraint]]
name = "github.com/grpc-ecosystem/grpc-gateway"
version = "v1.3.1"
# prometheus does not believe in semversioning yet
[[constraint]]
name = "github.com/prometheus/client_golang"
revision = "7858729281ec582767b20e0d696b6041d995d5e0"
# override ksonnet's release-1.8 dependency
[[override]]
branch = "release-1.9"
name = "k8s.io/apimachinery"
[[constraint]]
branch = "release-1.12"
branch = "release-1.9"
name = "k8s.io/api"
[[constraint]]
name = "k8s.io/apiextensions-apiserver"
branch = "release-1.12"
branch = "release-1.9"
[[constraint]]
branch = "release-1.12"
branch = "release-1.9"
name = "k8s.io/code-generator"
[[constraint]]
branch = "release-9.0"
branch = "release-6.0"
name = "k8s.io/client-go"
[[constraint]]
name = "github.com/stretchr/testify"
version = "1.2.2"
version = "1.2.1"
[[constraint]]
name = "github.com/gobuffalo/packr"
version = "v1.11.0"
name = "github.com/ksonnet/ksonnet"
version = "v0.10.1"
[[constraint]]
branch = "master"
name = "github.com/argoproj/pkg"
# override ksonnet's logrus dependency
[[override]]
name = "github.com/sirupsen/logrus"
version = "v1.0.3"

View File

@@ -187,7 +187,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2017-2018 The Argo Authors
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -57,38 +57,34 @@ codegen: protogen clientgen
# NOTE: we use packr to do the build instead of go, since we embed .yaml files into the go binary.
# This enables ease of maintenance of the yaml files.
.PHONY: cli
cli: clean-debug
${PACKR_CMD} build -v -i -ldflags '${LDFLAGS} -extldflags "-static"' -o ${DIST_DIR}/${CLI_NAME} ./cmd/argocd
cli:
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS} -extldflags "-static"' -o ${DIST_DIR}/${CLI_NAME} ./cmd/argocd
.PHONY: cli-linux
cli-linux: clean-debug
docker build --iidfile /tmp/argocd-linux-id --target argocd-build --build-arg MAKE_TARGET="cli IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-linux-amd64" .
cli-linux:
docker build --iidfile /tmp/argocd-linux-id --target builder --build-arg MAKE_TARGET="cli IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-linux-amd64" -f Dockerfile-argocd .
docker create --name tmp-argocd-linux `cat /tmp/argocd-linux-id`
docker cp tmp-argocd-linux:/go/src/github.com/argoproj/argo-cd/dist/argocd-linux-amd64 dist/
docker cp tmp-argocd-linux:/root/go/src/github.com/argoproj/argo-cd/dist/argocd-linux-amd64 dist/
docker rm tmp-argocd-linux
.PHONY: cli-darwin
cli-darwin: clean-debug
docker build --iidfile /tmp/argocd-darwin-id --target argocd-build --build-arg MAKE_TARGET="cli GOOS=darwin IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-darwin-amd64" .
cli-darwin:
docker build --iidfile /tmp/argocd-darwin-id --target builder --build-arg MAKE_TARGET="cli GOOS=darwin IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-darwin-amd64" -f Dockerfile-argocd .
docker create --name tmp-argocd-darwin `cat /tmp/argocd-darwin-id`
docker cp tmp-argocd-darwin:/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64 dist/
docker cp tmp-argocd-darwin:/root/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64 dist/
docker rm tmp-argocd-darwin
.PHONY: argocd-util
argocd-util: clean-debug
argocd-util:
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS} -extldflags "-static"' -o ${DIST_DIR}/argocd-util ./cmd/argocd-util
.PHONY: manifests
manifests:
./hack/update-manifests.sh
.PHONY: server
server: clean-debug
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd/argocd-server
server:
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd/argocd-server
.PHONY: server-image
server-image:
docker build --build-arg BINARY=argocd-server -t $(IMAGE_PREFIX)argocd-server:$(IMAGE_TAG) .
docker build --build-arg BINARY=argocd-server -t $(IMAGE_PREFIX)argocd-server:$(IMAGE_TAG) -f Dockerfile-argocd .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-server:$(IMAGE_TAG) ; fi
.PHONY: repo-server
@@ -97,7 +93,7 @@ repo-server:
.PHONY: repo-server-image
repo-server-image:
docker build --build-arg BINARY=argocd-repo-server -t $(IMAGE_PREFIX)argocd-repo-server:$(IMAGE_TAG) .
docker build --build-arg BINARY=argocd-repo-server -t $(IMAGE_PREFIX)argocd-repo-server:$(IMAGE_TAG) -f Dockerfile-argocd .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-repo-server:$(IMAGE_TAG) ; fi
.PHONY: controller
@@ -106,17 +102,17 @@ controller:
.PHONY: controller-image
controller-image:
docker build --build-arg BINARY=argocd-application-controller -t $(IMAGE_PREFIX)argocd-application-controller:$(IMAGE_TAG) .
docker build --build-arg BINARY=argocd-application-controller -t $(IMAGE_PREFIX)argocd-application-controller:$(IMAGE_TAG) -f Dockerfile-argocd .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-application-controller:$(IMAGE_TAG) ; fi
.PHONY: cli-image
cli-image:
docker build --build-arg BINARY=argocd -t $(IMAGE_PREFIX)argocd-cli:$(IMAGE_TAG) .
docker build --build-arg BINARY=argocd -t $(IMAGE_PREFIX)argocd-cli:$(IMAGE_TAG) -f Dockerfile-argocd .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-cli:$(IMAGE_TAG) ; fi
.PHONY: builder-image
builder-image:
docker build -t $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) --target builder .
docker build -t $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) -f Dockerfile-ci-builder .
.PHONY: lint
lint:
@@ -124,34 +120,23 @@ lint:
.PHONY: test
test:
go test -v `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
.PHONY: test-coverage
test-coverage:
go test -v -covermode=count -coverprofile=coverage.out `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
@if [ "$(COVERALLS_TOKEN)" != "" ] ; then goveralls -ignore `find . -name '*.pb*.go' | grep -v vendor/ | sed 's!^./!!' | paste -d, -s -` -coverprofile=coverage.out -service=argo-ci -repotoken "$(COVERALLS_TOKEN)"; else echo 'No COVERALLS_TOKEN env var specified. Skipping submission to Coveralls.io'; fi
go test `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
.PHONY: test-e2e
test-e2e:
go test -v -failfast -timeout 20m ./test/e2e
# Cleans VSCode debug.test files from sub-dirs to prevent them from being included in packr boxes
.PHONY: clean-debug
clean-debug:
-find ${CURRENT_DIR} -name debug.test | xargs rm -f
go test ./test/e2e
.PHONY: clean
clean: clean-debug
clean:
-rm -rf ${CURRENT_DIR}/dist
.PHONY: precheckin
precheckin: test lint
.PHONY: release-precheck
release-precheck: manifests
release-precheck:
@if [ "$(GIT_TREE_STATE)" != "clean" ]; then echo 'git tree state is $(GIT_TREE_STATE)' ; exit 1; fi
@if [ -z "$(GIT_TAG)" ]; then echo 'commit must be tagged to perform release' ; exit 1; fi
@if [ "$(GIT_TAG)" != "v`cat VERSION`" ]; then echo 'VERSION does not match git tag'; exit 1; fi
.PHONY: release
release: release-precheck precheckin cli-darwin cli-linux server-image controller-image repo-server-image cli-image

View File

@@ -1,4 +1,5 @@
controller: go run ./cmd/argocd-application-controller/main.go
controller: go run ./cmd/argocd-application-controller/main.go --app-resync 10
api-server: go run ./cmd/argocd-server/main.go --insecure --disable-auth
repo-server: go run ./cmd/argocd-repo-server/main.go --loglevel debug
dex: sh -c "go run ./cmd/argocd-util/main.go gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p 5556:5556 -p 5557:5557 -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/coreos/dex:v2.10.0 serve /dex.yaml"
redis: docker run --rm -p 6379:6379 redis:3.2.11

View File

@@ -1,12 +1,9 @@
[![Coverage Status](https://coveralls.io/repos/github/argoproj/argo-cd/badge.svg?branch=master)](https://coveralls.io/github/argoproj/argo-cd?branch=master)
# Argo CD - Declarative Continuous Delivery for Kubernetes
# Argo CD - GitOps Continuous Delivery for Kubernetes
## What is Argo CD?
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
![Argo CD UI](docs/argocd-ui.gif)
Argo CD is a declarative, continuous delivery service based on ksonnet for Kubernetes.
## Why Argo CD?
@@ -15,37 +12,25 @@ Application deployment and lifecycle management should be automated, auditable,
## Getting Started
Follow our [getting started guide](docs/getting_started.md). Further [documentation](docs/)
is provided for additional features.
Follow our [getting started guide](docs/getting_started.md).
## How it works
Argo CD follows the **GitOps** pattern of using git repositories as the source of truth for defining
the desired application state. Kubernetes manifests can be specified in several ways:
* [ksonnet](https://ksonnet.io) applications
* [kustomize](https://kustomize.io) applications
* [helm](https://helm.sh) charts
* Plain directory of YAML/json manifests
Argo CD uses git repositories as the source of truth for defining the desired application state as
well as the target deployment environments. Kubernetes manifests are specified as
[ksonnet](https://ksonnet.io) applications. Argo CD automates the deployment of the desired
application states in the specified target environments.
![Argo CD Architecture](docs/argocd_architecture.png)
Argo CD automates the deployment of the desired application states in the specified target environments.
Application deployments can track updates to branches, tags, or pinned to a specific version of
manifests at a git commit. See [tracking strategies](docs/tracking_strategies.md) for additional
details about the different tracking strategies available.
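As an illustrative sketch only (the app name and repo URL below are examples, and flag names reflect the CLI of this era), pinning an application to a branch might look like:
```bash
# hypothetical example: the app tracks the tip of the given branch on each sync
argocd app create guestbook \
  --repo https://github.com/argoproj/argocd-example-apps.git \
  --path guestbook \
  --revision master \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default
```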
For a quick 10 minute overview of ArgoCD, check out the demo presented to the Sig Apps community
meeting:
[![Alt text](https://img.youtube.com/vi/aWDIQMbp1cc/0.jpg)](https://youtu.be/aWDIQMbp1cc?t=1m4s)
## Architecture
![Argo CD Architecture](docs/argocd_architecture.png)
Argo CD is implemented as a kubernetes controller which continuously monitors running applications
and compares the current, live state against the desired target state (as specified in the git repo).
A deployed application whose live state deviates from the target state is considered `OutOfSync`.
Argo CD reports & visualizes the differences, while providing facilities to automatically or
A deployed application whose live state deviates from the target state is considered out-of-sync.
Argo CD reports & visualizes the differences as well as providing facilities to automatically or
manually sync the live state back to the desired target state. Any modifications made to the desired
target state in the git repo can be automatically applied and reflected in the specified target
environments.
@@ -55,22 +40,52 @@ For additional details, see [architecture overview](docs/architecture.md).
## Features
* Automated deployment of applications to specified target environments
* Flexibility in support for multiple config management tools (Ksonnet, Kustomize, Helm, plain-YAML)
* Continuous monitoring of deployed applications
* Automated or manual syncing of applications to their desired state
* Web and CLI based visualization of applications and differences between live vs. desired state
* Automated or manual syncing of applications to their target state
* Web and CLI based visualization of applications and differences between live vs. target state
* Rollback/Roll-anywhere to any application state committed in the git repository
* Health assessment statuses on all components of the application
* SSO Integration (OIDC, OAuth2, LDAP, SAML 2.0, GitLab, Microsoft, LinkedIn)
* SSO Integration (OIDC, LDAP, SAML 2.0, GitLab, Microsoft, LinkedIn)
* Webhook Integration (GitHub, BitBucket, GitLab)
* PreSync, Sync, PostSync hooks to support complex application rollouts (e.g. blue/green & canary upgrades)
* Audit trails for application events and API calls
* Parameter overrides for overriding ksonnet/helm parameters in git
* Service account/access key management for CI pipelines
## What is ksonnet?
* [Jsonnet](http://jsonnet.org), the basis for ksonnet, is a domain specific configuration language,
which provides extreme flexibility for composing and manipulating JSON/YAML specifications.
* [Ksonnet](http://ksonnet.io) goes one step further by applying Jsonnet principles to Kubernetes
manifests. It provides an opinionated file & directory structure to organize applications into
reusable components, parameters, and environments. Environments can be hierarchical, which promotes
both re-use and granular customization of application and environment specifications.
## Why ksonnet?
Application configuration management is a hard problem and grows rapidly in complexity as you deploy
more applications against more and more environments. Current templating systems, such as Jinja
and Golang templating, are unnatural ways to maintain kubernetes manifests, and are not well suited to
capturing subtle configuration differences between environments. Their ability to compose and re-use
application and environment configurations is also very limited.
Imagine we have a single guestbook application deployed in the following environments:
| Environment | K8s Version | Application Image | DB Connection String | Environment Vars | Sidecars |
|---------------|-------------|------------------------|-----------------------|------------------|---------------|
| minikube | 1.10.0 | jesse/guestbook:latest | sql://localhost/db | DEBUG=true | |
| dev | 1.9.0 | app/guestbook:latest | sql://dev-test/db | DEBUG=true | |
| staging | 1.8.0 | app/guestbook:e3c0263 | sql://staging/db | | istio,dnsmasq |
| us-west-1 | 1.8.0 | app/guestbook:abc1234 | sql://prod/db | FOO_FEATURE=true | istio,dnsmasq |
| us-west-2 | 1.8.0 | app/guestbook:abc1234 | sql://prod/db | | istio,dnsmasq |
| us-east-1 | 1.9.0 | app/guestbook:abc1234 | sql://prod/db | BAR_FEATURE=true | istio,dnsmasq |
Ksonnet:
* Enables composition and re-use of common YAML specifications
* Allows overrides, additions, and subtractions of YAML sub-components specific to each environment
* Guarantees proper generation of K8s manifests suitable for the corresponding Kubernetes API version
* Provides [kubernetes-specific jsonnet libraries](https://github.com/ksonnet/ksonnet-lib) to enable
concise definition of kubernetes manifests
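As a rough sketch of the workflow this enables (component and environment names here are illustrative, and `ks init`/`ks env add` assume a reachable cluster):
```bash
ks init guestbook && cd guestbook
ks generate deployed-service guestbook --image app/guestbook:latest
ks env add us-west-1
# Override the image only for the us-west-1 environment; other environments keep the default.
ks param set guestbook image app/guestbook:abc1234 --env us-west-1
ks show us-west-1     # renders manifests with the override applied
```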
## Development Status
* Argo CD is being used in production to deploy SaaS services at Intuit
* Argo CD is in early development
## Roadmap
* Revamped UI, and feature parity with CLI
* Customizable application actions
* PreSync, PostSync, OutOfSync hooks
* Customized application actions as Argo workflows
* Blue/Green & canary upgrades

View File

@@ -1 +1 @@
0.10.6
0.4.7

View File

@@ -8,6 +8,12 @@ import (
"strconv"
"time"
"github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/db"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
@@ -16,15 +22,8 @@ import (
// load the gcp plugin (required to authenticate against GKE clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
// load the oidc plugin (required to authenticate with OpenID Connect).
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
"github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/stats"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
)
const (
@@ -66,25 +65,30 @@ func newCommand() *cobra.Command {
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
// TODO (amatyushentsev): Use config map to store controller configuration
controllerConfig := controller.ApplicationControllerConfig{
Namespace: namespace,
InstanceID: "",
}
db := db.NewDB(namespace, kubeClient)
resyncDuration := time.Duration(appResyncPeriod) * time.Second
repoClientset := reposerver.NewRepositoryServerClientset(repoServerAddress)
appStateManager := controller.NewAppStateManager(db, appClient, reposerver.NewRepositoryServerClientset(repoServerAddress), namespace)
appHealthManager := controller.NewAppHealthManager(db, namespace)
appController := controller.NewApplicationController(
namespace,
kubeClient,
appClient,
repoClientset,
resyncDuration)
secretController := controller.NewSecretController(kubeClient, repoClientset, resyncDuration, namespace)
db,
appStateManager,
appHealthManager,
resyncDuration,
&controllerConfig)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
log.Infof("Application Controller (version: %s) starting (namespace: %s)", argocd.GetVersion(), namespace)
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
go secretController.Run(ctx)
go appController.Run(ctx, statusProcessors, operationProcessors)
// Wait forever
select {}

View File

@@ -4,10 +4,6 @@ import (
"fmt"
"net"
"os"
"time"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/errors"
@@ -16,8 +12,9 @@ import (
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/ksonnet"
"github.com/argoproj/argo-cd/util/stats"
"github.com/argoproj/argo-cd/util/tls"
"github.com/go-redis/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
const (
@@ -28,8 +25,7 @@ const (
func newCommand() *cobra.Command {
var (
logLevel string
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
logLevel string
)
var command = cobra.Command{
Use: cliName,
@@ -39,11 +35,7 @@ func newCommand() *cobra.Command {
errors.CheckError(err)
log.SetLevel(level)
tlsConfigCustomizer, err := tlsConfigCustomizerSrc()
errors.CheckError(err)
server, err := reposerver.NewServer(git.NewFactory(), newCache(), tlsConfigCustomizer)
errors.CheckError(err)
server := reposerver.NewServer(git.NewFactory(), newCache())
grpc := server.CreateGRPC()
listener, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
errors.CheckError(err)
@@ -53,9 +45,6 @@ func newCommand() *cobra.Command {
log.Infof("argocd-repo-server %s serving on %s", argocd.GetVersion(), listener.Addr())
log.Infof("ksonnet version: %s", ksVers)
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
err = grpc.Serve(listener)
errors.CheckError(err)
return nil
@@ -63,18 +52,17 @@ func newCommand() *cobra.Command {
}
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
return &command
}
func newCache() cache.Cache {
return cache.NewInMemoryCache(repository.DefaultRepoCacheExpiration)
// client := redis.NewClient(&redis.Options{
// Addr: "localhost:6379",
// Password: "",
// DB: 0,
// })
// return cache.NewRedisCache(client, repository.DefaultRepoCacheExpiration)
//return cache.NewInMemoryCache(repository.DefaultRepoCacheExpiration)
client := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
Password: "",
DB: 0,
})
return cache.NewRedisCache(client, repository.DefaultRepoCacheExpiration)
}
func main() {

View File

@@ -2,35 +2,27 @@ package commands
import (
"context"
"flag"
"strconv"
"time"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/errors"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/server"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/stats"
"github.com/argoproj/argo-cd/util/tls"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
)
// NewCommand returns a new instance of an argocd command
func NewCommand() *cobra.Command {
var (
insecure bool
logLevel string
glogLevel int
clientConfig clientcmd.ClientConfig
staticAssetsDir string
repoServerAddress string
disableAuth bool
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
insecure bool
logLevel string
clientConfig clientcmd.ClientConfig
staticAssetsDir string
repoServerAddress string
disableAuth bool
)
var command = &cobra.Command{
Use: cliName,
@@ -41,41 +33,28 @@ func NewCommand() *cobra.Command {
errors.CheckError(err)
log.SetLevel(level)
// Set the glog level for the k8s go-client
_ = flag.CommandLine.Parse([]string{})
_ = flag.Lookup("logtostderr").Value.Set("true")
_ = flag.Lookup("v").Value.Set(strconv.Itoa(glogLevel))
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
tlsConfigCustomizer, err := tlsConfigCustomizerSrc()
errors.CheckError(err)
kubeclientset := kubernetes.NewForConfigOrDie(config)
appclientset := appclientset.NewForConfigOrDie(config)
repoclientset := reposerver.NewRepositoryServerClientset(repoServerAddress)
argoCDOpts := server.ArgoCDServerOpts{
Insecure: insecure,
Namespace: namespace,
StaticAssetsDir: staticAssetsDir,
KubeClientset: kubeclientset,
AppClientset: appclientset,
RepoClientset: repoclientset,
DisableAuth: disableAuth,
TLSConfigCustomizer: tlsConfigCustomizer,
Insecure: insecure,
Namespace: namespace,
StaticAssetsDir: staticAssetsDir,
KubeClientset: kubeclientset,
AppClientset: appclientset,
RepoClientset: repoclientset,
DisableAuth: disableAuth,
}
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
argocd := server.NewServer(argoCDOpts)
for {
argocd := server.NewServer(argoCDOpts)
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
argocd.Run(ctx, 8080)
@@ -88,10 +67,8 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&insecure, "insecure", false, "Run server without TLS")
command.Flags().StringVar(&staticAssetsDir, "staticassets", "", "Static assets directory path")
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
command.Flags().StringVar(&repoServerAddress, "repo-server", "localhost:8081", "Repo server address.")
command.Flags().BoolVar(&disableAuth, "disable-auth", false, "Disable client authentication")
command.AddCommand(cli.NewVersionCmd(cliName))
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(command)
return command
}

View File

@@ -6,23 +6,14 @@ import (
"io/ioutil"
"os"
"os/exec"
"strings"
"syscall"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/dex"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/settings"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
apiv1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
@@ -35,9 +26,6 @@ import (
const (
// CLIName is the name of the CLI
cliName = "argocd-util"
// YamlSeparator separates sections of a YAML file
yamlSeparator = "\n---\n"
)
// NewCommand returns a new instance of an argocd command
@@ -57,10 +45,6 @@ func NewCommand() *cobra.Command {
command.AddCommand(cli.NewVersionCmd(cliName))
command.AddCommand(NewRunDexCommand())
command.AddCommand(NewGenDexConfigCommand())
command.AddCommand(NewImportCommand())
command.AddCommand(NewExportCommand())
command.AddCommand(NewSettingsCommand())
command.AddCommand(NewClusterConfig())
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
return command
@@ -170,251 +154,6 @@ func NewGenDexConfigCommand() *cobra.Command {
return &command
}
// NewImportCommand defines a new command for importing Kubernetes and Argo CD resources.
func NewImportCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
)
var command = cobra.Command{
Use: "import SOURCE",
Short: "Import Argo CD data from stdin (specify `-') or a file",
RunE: func(c *cobra.Command, args []string) error {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
var (
input []byte
err error
newSettings *settings.ArgoCDSettings
newRepos []*v1alpha1.Repository
newClusters []*v1alpha1.Cluster
newApps []*v1alpha1.Application
newRBACCM *apiv1.ConfigMap
)
if in := args[0]; in == "-" {
input, err = ioutil.ReadAll(os.Stdin)
errors.CheckError(err)
} else {
input, err = ioutil.ReadFile(in)
errors.CheckError(err)
}
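// The input is expected to contain five YAML documents, in the same order produced by
// the export command: settings, repositories, clusters, applications, and the RBAC ConfigMap.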
inputStrings := strings.Split(string(input), yamlSeparator)
err = yaml.Unmarshal([]byte(inputStrings[0]), &newSettings)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[1]), &newRepos)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[2]), &newClusters)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[3]), &newApps)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[4]), &newRBACCM)
errors.CheckError(err)
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
kubeClientset := kubernetes.NewForConfigOrDie(config)
settingsMgr := settings.NewSettingsManager(kubeClientset, namespace)
err = settingsMgr.SaveSettings(newSettings)
errors.CheckError(err)
db := db.NewDB(namespace, kubeClientset)
_, err = kubeClientset.CoreV1().ConfigMaps(namespace).Create(newRBACCM)
errors.CheckError(err)
for _, repo := range newRepos {
_, err := db.CreateRepository(context.Background(), repo)
if err != nil {
log.Warn(err)
}
}
for _, cluster := range newClusters {
_, err := db.CreateCluster(context.Background(), cluster)
if err != nil {
log.Warn(err)
}
}
appClientset := appclientset.NewForConfigOrDie(config)
for _, app := range newApps {
out, err := appClientset.ArgoprojV1alpha1().Applications(namespace).Create(app)
errors.CheckError(err)
log.Println(out)
}
return nil
},
}
clientConfig = cli.AddKubectlFlagsToCmd(&command)
return &command
}
// NewExportCommand defines a new command for exporting Kubernetes and Argo CD resources.
func NewExportCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
out string
)
var command = cobra.Command{
Use: "export",
Short: "Export all Argo CD data to stdout (default) or a file",
RunE: func(c *cobra.Command, args []string) error {
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
kubeClientset := kubernetes.NewForConfigOrDie(config)
settingsMgr := settings.NewSettingsManager(kubeClientset, namespace)
settings, err := settingsMgr.GetSettings()
errors.CheckError(err)
// certificate data is included in secrets that are exported alongside
settings.Certificate = nil
db := db.NewDB(namespace, kubeClientset)
clusters, err := db.ListClusters(context.Background())
errors.CheckError(err)
repos, err := db.ListRepositories(context.Background())
errors.CheckError(err)
appClientset := appclientset.NewForConfigOrDie(config)
apps, err := appClientset.ArgoprojV1alpha1().Applications(namespace).List(metav1.ListOptions{})
errors.CheckError(err)
rbacCM, err := kubeClientset.CoreV1().ConfigMaps(namespace).Get(common.ArgoCDRBACConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
// remove extraneous cruft from output
rbacCM.ObjectMeta = metav1.ObjectMeta{
Name: rbacCM.ObjectMeta.Name,
}
// remove extraneous cruft from output
for idx, app := range apps.Items {
apps.Items[idx].ObjectMeta = metav1.ObjectMeta{
Name: app.ObjectMeta.Name,
Finalizers: app.ObjectMeta.Finalizers,
}
apps.Items[idx].Status = v1alpha1.ApplicationStatus{
History: app.Status.History,
}
apps.Items[idx].Operation = nil
}
// take a list of exportable objects, marshal them to YAML,
// and return a string joined by a delimiter
output := func(delimiter string, oo ...interface{}) string {
out := make([]string, 0)
for _, o := range oo {
data, err := yaml.Marshal(o)
errors.CheckError(err)
out = append(out, string(data))
}
return strings.Join(out, delimiter)
}(yamlSeparator, settings, repos.Items, clusters.Items, apps.Items, rbacCM)
if out == "-" {
fmt.Println(output)
} else {
err = ioutil.WriteFile(out, []byte(output), 0644)
errors.CheckError(err)
}
return nil
},
}
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().StringVarP(&out, "out", "o", "-", "Output to the specified file instead of stdout")
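// Illustrative usage: `argocd-util export -o backup.yaml` writes all exportable state to
// backup.yaml; with the default `-o -` the YAML stream is written to stdout.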
return &command
}
// NewSettingsCommand returns a new instance of `argocd-util settings` command
func NewSettingsCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
updateSuperuser bool
superuserPassword string
updateSignature bool
)
var command = &cobra.Command{
Use: "settings",
Short: "Creates or updates ArgoCD settings",
Long: "Creates or updates ArgoCD settings",
Run: func(c *cobra.Command, args []string) {
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, wasSpecified, err := clientConfig.Namespace()
errors.CheckError(err)
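// Fall back to the "argocd" namespace when the kubectl flags do not specify one.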
if !(wasSpecified) {
namespace = "argocd"
}
kubeclientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
settingsMgr := settings.NewSettingsManager(kubeclientset, namespace)
_, err = settings.UpdateSettings(superuserPassword, settingsMgr, updateSignature, updateSuperuser, namespace)
errors.CheckError(err)
},
}
command.Flags().BoolVar(&updateSuperuser, "update-superuser", false, "force updating the superuser password")
command.Flags().StringVar(&superuserPassword, "superuser-password", "", "password for super user")
command.Flags().BoolVar(&updateSignature, "update-signature", false, "force updating the server-side token signing signature")
clientConfig = cli.AddKubectlFlagsToCmd(command)
return command
}
// NewClusterConfig returns a new instance of `argocd-util cluster-kubeconfig` command
func NewClusterConfig() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
)
var command = &cobra.Command{
Use: "cluster-kubeconfig CLUSTER_URL OUTPUT_PATH",
Short: "Generates kubeconfig for the specified cluster",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
serverUrl := args[0]
output := args[1]
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, wasSpecified, err := clientConfig.Namespace()
errors.CheckError(err)
if !(wasSpecified) {
namespace = "argocd"
}
kubeclientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
cluster, err := db.NewDB(namespace, kubeclientset).GetCluster(context.Background(), serverUrl)
errors.CheckError(err)
err = kube.WriteKubeConfig(cluster.RESTConfig(), namespace, output)
errors.CheckError(err)
},
}
clientConfig = cli.AddKubectlFlagsToCmd(command)
return command
}
func main() {
if err := NewCommand().Execute(); err != nil {
fmt.Println(err)

View File

@@ -1,74 +0,0 @@
package commands
import (
"context"
"fmt"
"os"
"syscall"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/server/account"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/settings"
"github.com/spf13/cobra"
"golang.org/x/crypto/ssh/terminal"
)
func NewAccountCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "account",
Short: "Manage account settings",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
},
}
command.AddCommand(NewAccountUpdatePasswordCommand(clientOpts))
return command
}
func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
currentPassword string
newPassword string
)
var command = &cobra.Command{
Use: "update-password",
Short: "Update password",
Run: func(c *cobra.Command, args []string) {
if len(args) != 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
if currentPassword == "" {
fmt.Print("*** Enter current password: ")
password, err := terminal.ReadPassword(syscall.Stdin)
errors.CheckError(err)
currentPassword = string(password)
fmt.Print("\n")
}
if newPassword == "" {
var err error
newPassword, err = settings.ReadAndConfirmPassword()
errors.CheckError(err)
}
updatePasswordRequest := account.UpdatePasswordRequest{
NewPassword: newPassword,
CurrentPassword: currentPassword,
}
conn, usrIf := argocdclient.NewClientOrDie(clientOpts).NewAccountClientOrDie()
defer util.Close(conn)
_, err := usrIf.UpdatePassword(context.Background(), &updatePasswordRequest)
errors.CheckError(err)
fmt.Printf("Password updated\n")
},
}
command.Flags().StringVar(&currentPassword, "current-password", "", "current password you wish to change")
command.Flags().StringVar(&newPassword, "new-password", "", "new password you want to update to")
return command
}

File diff suppressed because it is too large

View File

@@ -18,7 +18,6 @@ import (
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
)
@@ -43,12 +42,6 @@ func NewClusterCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clientc
// NewClusterAddCommand returns a new instance of an `argocd cluster add` command
func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clientcmd.PathOptions) *cobra.Command {
var (
inCluster bool
upsert bool
awsRoleArn string
awsClusterName string
)
var command = &cobra.Command{
Use: "add",
Short: fmt.Sprintf("%s cluster add CONTEXT", cliName),
@@ -65,7 +58,6 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
if clstContext == nil {
log.Fatalf("Context %s does not exist in kubeconfig", args[0])
}
overrides := clientcmd.ConfigOverrides{
Context: *clstContext,
}
@@ -73,40 +65,18 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
managerBearerToken := ""
var awsAuthConf *argoappv1.AWSAuthConfig
if awsClusterName != "" {
awsAuthConf = &argoappv1.AWSAuthConfig{
ClusterName: awsClusterName,
RoleARN: awsRoleArn,
}
} else {
// Install RBAC resources for managing the cluster
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
managerBearerToken, err = common.InstallClusterManagerRBAC(clientset)
errors.CheckError(err)
}
// Install RBAC resources for managing the cluster
managerBearerToken := common.InstallClusterManagerRBAC(conf)
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
clst := NewCluster(args[0], conf, managerBearerToken, awsAuthConf)
if inCluster {
clst.Server = common.KubernetesInternalAPIServerAddr
}
clstCreateReq := cluster.ClusterCreateRequest{
Cluster: clst,
Upsert: upsert,
}
clst, err = clusterIf.Create(context.Background(), &clstCreateReq)
clst := NewCluster(args[0], conf, managerBearerToken)
clst, err = clusterIf.Create(context.Background(), clst)
errors.CheckError(err)
fmt.Printf("Cluster '%s' added\n", clst.Name)
},
}
command.PersistentFlags().StringVar(&pathOpts.LoadingRules.ExplicitPath, pathOpts.ExplicitFileFlag, pathOpts.LoadingRules.ExplicitPath, "use a particular kubeconfig file")
command.Flags().BoolVar(&inCluster, "in-cluster", false, "Indicates ArgoCD resides inside this cluster and should connect using the internal k8s hostname (kubernetes.default.svc)")
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing cluster with the same name even if the spec differs")
command.Flags().StringVar(&awsClusterName, "aws-cluster-name", "", "AWS cluster name; if set, aws-iam-authenticator will be used to access the cluster")
command.Flags().StringVar(&awsRoleArn, "aws-role-arn", "", "Optional AWS role ARN. If set, aws-iam-authenticator assumes this role to perform cluster operations instead of using the default AWS credential provider chain.")
return command
}
@@ -126,20 +96,9 @@ func printKubeContexts(ca clientcmd.ConfigAccess) {
}
sort.Strings(contextNames)
if config.Clusters == nil {
return
}
for _, name := range contextNames {
// ignore malformed kube config entries
context := config.Contexts[name]
if context == nil {
continue
}
cluster := config.Clusters[context.Cluster]
if cluster == nil {
continue
}
prefix := " "
if config.CurrentContext == name {
prefix = "*"
@@ -149,7 +108,7 @@ func printKubeContexts(ca clientcmd.ConfigAccess) {
}
}
func NewCluster(name string, conf *rest.Config, managerBearerToken string, awsAuthConf *argoappv1.AWSAuthConfig) *argoappv1.Cluster {
func NewCluster(name string, conf *rest.Config, managerBearerToken string) *argoappv1.Cluster {
tlsClientConfig := argoappv1.TLSClientConfig{
Insecure: conf.TLSClientConfig.Insecure,
ServerName: conf.TLSClientConfig.ServerName,
@@ -178,7 +137,6 @@ func NewCluster(name string, conf *rest.Config, managerBearerToken string, awsAu
Config: argoappv1.ClusterConfig{
BearerToken: managerBearerToken,
TLSClientConfig: tlsClientConfig,
AWSAuthConfig: awsAuthConf,
},
}
return &clst
@@ -220,14 +178,9 @@ func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
// clientset, err := kubernetes.NewForConfig(conf)
// errors.CheckError(err)
for _, clusterName := range args {
// TODO(jessesuen): find the right context and remove manager RBAC artifacts
// err := common.UninstallClusterManagerRBAC(clientset)
// errors.CheckError(err)
// common.UninstallClusterManagerRBAC(conf)
_, err := clusterIf.Delete(context.Background(), &cluster.ClusterQuery{Server: clusterName})
errors.CheckError(err)
}
@@ -247,9 +200,9 @@ func NewClusterListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
clusters, err := clusterIf.List(context.Background(), &cluster.ClusterQuery{})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "SERVER\tNAME\tSTATUS\tMESSAGE\n")
fmt.Fprintf(w, "SERVER\tNAME\tMESSAGE\n")
for _, c := range clusters.Items {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", c.Server, c.Name, c.ConnectionState.Status, c.ConnectionState.Message)
fmt.Fprintf(w, "%s\t%s\t%s\n", c.Server, c.Name, c.Message)
}
_ = w.Flush()
},

View File

@@ -0,0 +1,75 @@
package commands
import (
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/install"
"github.com/argoproj/argo-cd/util/cli"
"github.com/spf13/cobra"
"k8s.io/client-go/tools/clientcmd"
)
// NewInstallCommand returns a new instance of `argocd install` command
func NewInstallCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
installOpts install.InstallOptions
)
var command = &cobra.Command{
Use: "install",
Short: "Install Argo CD",
Long: "Install Argo CD",
Run: func(c *cobra.Command, args []string) {
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, wasSpecified, err := clientConfig.Namespace()
errors.CheckError(err)
if wasSpecified {
installOpts.Namespace = namespace
}
installer, err := install.NewInstaller(conf, installOpts)
errors.CheckError(err)
installer.Install()
},
}
command.Flags().BoolVar(&installOpts.Upgrade, "upgrade", false, "upgrade controller/ui deployments and configmap if already installed")
command.Flags().BoolVar(&installOpts.DryRun, "dry-run", false, "print the kubernetes manifests to stdout instead of installing")
command.Flags().StringVar(&installOpts.SuperuserPassword, "superuser-password", "", "password for super user")
command.Flags().StringVar(&installOpts.ControllerImage, "controller-image", install.DefaultControllerImage, "use a specified controller image")
command.Flags().StringVar(&installOpts.ServerImage, "server-image", install.DefaultServerImage, "use a specified api server image")
command.Flags().StringVar(&installOpts.UIImage, "ui-image", install.DefaultUIImage, "use a specified ui image")
command.Flags().StringVar(&installOpts.RepoServerImage, "repo-server-image", install.DefaultRepoServerImage, "use a specified repo server image")
command.Flags().StringVar(&installOpts.ImagePullPolicy, "image-pull-policy", "", "set the image pull policy of the pod specs")
clientConfig = cli.AddKubectlFlagsToCmd(command)
command.AddCommand(newSettingsCommand())
return command
}
// newSettingsCommand returns a new instance of `argocd install settings` command
func newSettingsCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
installOpts install.InstallOptions
)
var command = &cobra.Command{
Use: "settings",
Short: "Creates or updates ArgoCD settings",
Long: "Creates or updates ArgoCD settings",
Run: func(c *cobra.Command, args []string) {
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, wasSpecified, err := clientConfig.Namespace()
errors.CheckError(err)
if wasSpecified {
installOpts.Namespace = namespace
}
installer, err := install.NewInstaller(conf, installOpts)
errors.CheckError(err)
installer.InstallSettings()
},
}
command.Flags().BoolVar(&installOpts.UpdateSuperuser, "update-superuser", false, "force updating the superuser password")
command.Flags().StringVar(&installOpts.SuperuserPassword, "superuser-password", "", "password for super user")
command.Flags().BoolVar(&installOpts.UpdateSignature, "update-signature", false, "force updating the server-side token signing signature")
clientConfig = cli.AddKubectlFlagsToCmd(command)
return command
}

View File

@@ -77,7 +77,6 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
// Perform the login
var tokenString string
var refreshToken string
if !sso {
tokenString = passwordLogin(acdClient, username, password)
} else {
@@ -86,7 +85,15 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
if !ssoConfigured(acdSet) {
log.Fatalf("ArgoCD instance is not configured with SSO")
}
tokenString, refreshToken = oauth2Login(server, clientOpts.PlainText)
tokenString = oauth2Login(server, clientOpts.PlainText)
// The token which we just received from the OAuth2 flow, was from dex. ArgoCD
// currently does not back dex with any kind of persistent storage (it is run
// in-memory). As a result, this token cannot be used in any permanent capacity.
// Restarts of dex will result in a different signing key, and sessions becoming
// invalid. Instead we turn-around and ask ArgoCD to re-sign the token (who *does*
// have persistence of signing keys), and is what we store in the config. Should we
// ever decide to have a database layer for dex, the next line can be removed.
tokenString = tokenLogin(acdClient, tokenString)
}
parser := &jwt.Parser{
@@ -109,9 +116,8 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
Insecure: globalClientOpts.Insecure,
})
localCfg.UpsertUser(localconfig.User{
Name: ctxName,
AuthToken: tokenString,
RefreshToken: refreshToken,
Name: ctxName,
AuthToken: tokenString,
})
if ctxName == "" {
ctxName = server
@@ -157,9 +163,8 @@ func getFreePort() (int, error) {
return ln.Addr().(*net.TCPAddr).Port, ln.Close()
}
// oauth2Login opens a browser, runs a temporary HTTP server to delegate OAuth2 login flow and
// returns the JWT token and a refresh token (if supported)
func oauth2Login(host string, plaintext bool) (string, string) {
// oauth2Login opens a browser, runs a temporary HTTP server to delegate OAuth2 login flow and returns the JWT token
func oauth2Login(host string, plaintext bool) string {
ctx := context.Background()
port, err := getFreePort()
errors.CheckError(err)
@@ -178,7 +183,6 @@ func oauth2Login(host string, plaintext bool) (string, string) {
}
srv := &http.Server{Addr: ":" + strconv.Itoa(port)}
var tokenString string
var refreshToken string
loginCompleted := make(chan struct{})
callbackHandler := func(w http.ResponseWriter, r *http.Request) {
@@ -211,9 +215,8 @@ func oauth2Login(host string, plaintext bool) (string, string) {
log.Fatal(errMsg)
return
}
refreshToken, _ = tok.Extra("refresh_token").(string)
log.Debugf("Token: %s", tokenString)
log.Debugf("Refresh Token: %s", tokenString)
successPage := `
<div style="height:100px; width:100%!; display:flex; flex-direction: column; justify-content: center; align-items:center; background-color:#2ecc71; color:white; font-size:22"><div>Authentication successful!</div></div>
<p style="margin-top:20px; font-size:18; text-align:center">Authentication was successful, you can now return to CLI. This page will close automatically</p>
@@ -245,7 +248,7 @@ func oauth2Login(host string, plaintext bool) (string, string) {
}()
<-loginCompleted
_ = srv.Shutdown(ctx)
return tokenString, refreshToken
return tokenString
}
func passwordLogin(acdClient argocdclient.Client, username, password string) string {
@@ -260,3 +263,14 @@ func passwordLogin(acdClient argocdclient.Client, username, password string) str
errors.CheckError(err)
return createdSession.Token
}
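// tokenLogin exchanges the dex-issued token for an Argo CD signed session token
// (see the comment in NewLoginCommand above for why re-signing is necessary).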
func tokenLogin(acdClient argocdclient.Client, token string) string {
sessConn, sessionIf := acdClient.NewSessionClientOrDie()
defer util.Close(sessConn)
sessionRequest := session.SessionCreateRequest{
Token: token,
}
createdSession, err := sessionIf.Create(context.Background(), &sessionRequest)
errors.CheckError(err)
return createdSession.Token
}

View File

@@ -1,790 +0,0 @@
package commands
import (
"context"
"fmt"
"os"
"strconv"
"strings"
"text/tabwriter"
"time"
timeutil "github.com/argoproj/pkg/time"
"github.com/dustin/go-humanize"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/project"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/git"
projectutil "github.com/argoproj/argo-cd/util/project"
)
const (
policyTemplate = "p, proj:%s:%s, applications, %s, %s/%s, %s"
)
type projectOpts struct {
description string
destinations []string
sources []string
}
type policyOpts struct {
action string
permission string
object string
}
func (opts *projectOpts) GetDestinations() []v1alpha1.ApplicationDestination {
destinations := make([]v1alpha1.ApplicationDestination, 0)
for _, destStr := range opts.destinations {
parts := strings.Split(destStr, ",")
if len(parts) != 2 {
log.Fatalf("Expected destination of the form: server,namespace. Received: %s", destStr)
} else {
destinations = append(destinations, v1alpha1.ApplicationDestination{
Server: parts[0],
Namespace: parts[1],
})
}
}
return destinations
}
// NewProjectCommand returns a new instance of an `argocd proj` command
func NewProjectCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "proj",
Short: "Manage projects",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
},
}
command.AddCommand(NewProjectRoleCommand(clientOpts))
command.AddCommand(NewProjectCreateCommand(clientOpts))
command.AddCommand(NewProjectDeleteCommand(clientOpts))
command.AddCommand(NewProjectListCommand(clientOpts))
command.AddCommand(NewProjectSetCommand(clientOpts))
command.AddCommand(NewProjectAddDestinationCommand(clientOpts))
command.AddCommand(NewProjectRemoveDestinationCommand(clientOpts))
command.AddCommand(NewProjectAddSourceCommand(clientOpts))
command.AddCommand(NewProjectRemoveSourceCommand(clientOpts))
command.AddCommand(NewProjectAllowClusterResourceCommand(clientOpts))
command.AddCommand(NewProjectDenyClusterResourceCommand(clientOpts))
command.AddCommand(NewProjectAllowNamespaceResourceCommand(clientOpts))
command.AddCommand(NewProjectDenyNamespaceResourceCommand(clientOpts))
return command
}
func addProjFlags(command *cobra.Command, opts *projectOpts) {
command.Flags().StringVarP(&opts.description, "description", "", "", "Project description")
command.Flags().StringArrayVarP(&opts.destinations, "dest", "d", []string{},
"Permitted destination server and namespace (e.g. https://192.168.99.100:8443,default)")
command.Flags().StringArrayVarP(&opts.sources, "src", "s", []string{}, "Permitted git source repository URL")
}
func addPolicyFlags(command *cobra.Command, opts *policyOpts) {
command.Flags().StringVarP(&opts.action, "action", "a", "", "Action to grant/deny permission on (e.g. get, create, list, update, delete)")
command.Flags().StringVarP(&opts.permission, "permission", "p", "allow", "Whether to allow or deny access to object with the action. This can only be 'allow' or 'deny'")
command.Flags().StringVarP(&opts.object, "object", "o", "", "Object within the project to grant/deny access to. Use '*' for a wildcard. The resulting policy applies to '<project>/<object>'")
}
// NewProjectRoleCommand returns a new instance of the `argocd proj role` command
func NewProjectRoleCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
roleCommand := &cobra.Command{
Use: "role",
Short: "Manage a project's roles",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
},
}
roleCommand.AddCommand(NewProjectRoleListCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleGetCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleCreateCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleDeleteCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleCreateTokenCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleDeleteTokenCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleAddPolicyCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleRemovePolicyCommand(clientOpts))
return roleCommand
}
// NewProjectRoleAddPolicyCommand returns a new instance of an `argocd proj role add-policy` command
func NewProjectRoleAddPolicyCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts policyOpts
)
var command = &cobra.Command{
Use: "add-policy PROJECT ROLE-NAME",
Short: "Add a policy to a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
if len(opts.action) <= 0 {
log.Fatal("Action needs to longer than 0 characters")
}
if len(opts.object) <= 0 {
log.Fatal("Objects needs to longer than 0 characters")
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
roleIndex, err := projectutil.GetRoleIndexByName(proj, roleName)
if err != nil {
log.Fatal(err)
}
role := proj.Spec.Roles[roleIndex]
policy := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
proj.Spec.Roles[roleIndex].Policies = append(role.Policies, policy)
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
addPolicyFlags(command, &opts)
return command
}
// NewProjectRoleRemovePolicyCommand returns a new instance of an `argocd proj role remove-policy` command
func NewProjectRoleRemovePolicyCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts policyOpts
)
var command = &cobra.Command{
Use: "remove-policy PROJECT ROLE-NAME",
Short: "Remove a policy from a role within a project",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
if opts.permission != "allow" && opts.permission != "deny" {
log.Fatal("Permission flag can only have the values 'allow' or 'deny'")
}
if len(opts.action) <= 0 {
log.Fatal("Action needs to longer than 0 characters")
}
if len(opts.object) <= 0 {
log.Fatal("Objects needs to longer than 0 characters")
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
roleIndex, err := projectutil.GetRoleIndexByName(proj, roleName)
if err != nil {
log.Fatal(err)
}
role := proj.Spec.Roles[roleIndex]
policyToRemove := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
duplicateIndex := -1
for i, policy := range role.Policies {
if policy == policyToRemove {
duplicateIndex = i
break
}
}
if duplicateIndex < 0 {
return
}
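// Remove the policy by overwriting it with the last element and truncating the slice;
// policy order is not preserved.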
role.Policies[duplicateIndex] = role.Policies[len(role.Policies)-1]
proj.Spec.Roles[roleIndex].Policies = role.Policies[:len(role.Policies)-1]
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
addPolicyFlags(command, &opts)
return command
}
// NewProjectRoleCreateCommand returns a new instance of an `argocd proj role create` command
func NewProjectRoleCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
description string
)
var command = &cobra.Command{
Use: "create PROJECT ROLE-NAME",
Short: "Create a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
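// A nil error means a role with this name already exists, so creating it again is a no-op.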
_, err = projectutil.GetRoleIndexByName(proj, roleName)
if err == nil {
return
}
proj.Spec.Roles = append(proj.Spec.Roles, v1alpha1.ProjectRole{Name: roleName, Description: description})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
command.Flags().StringVarP(&description, "description", "", "", "Project description")
return command
}
// NewProjectRoleDeleteCommand returns a new instance of an `argocd proj role delete` command
func NewProjectRoleDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "delete PROJECT ROLE-NAME",
Short: "Delete a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index, err := projectutil.GetRoleIndexByName(proj, roleName)
if err != nil {
return
}
proj.Spec.Roles[index] = proj.Spec.Roles[len(proj.Spec.Roles)-1]
proj.Spec.Roles = proj.Spec.Roles[:len(proj.Spec.Roles)-1]
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
return command
}
// NewProjectRoleCreateTokenCommand returns a new instance of an `argocd proj role create-token` command
func NewProjectRoleCreateTokenCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
expiresIn string
)
var command = &cobra.Command{
Use: "create-token PROJECT ROLE-NAME",
Short: "Create a project token",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
duration, err := timeutil.ParseDuration(expiresIn)
errors.CheckError(err)
token, err := projIf.CreateToken(context.Background(), &project.ProjectTokenCreateRequest{Project: projName, Role: roleName, ExpiresIn: int64(duration.Seconds())})
errors.CheckError(err)
fmt.Println(token.Token)
},
}
command.Flags().StringVarP(&expiresIn, "expires-in", "e", "0s", "Duration before the token will expire. (Default: No expiration)")
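// Illustrative usage: `argocd proj role create-token myproj ci-role -e 24h` prints a token
// that expires in 24 hours (project and role names here are hypothetical).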
return command
}
// NewProjectRoleDeleteTokenCommand returns a new instance of an `argocd proj role delete-token` command
func NewProjectRoleDeleteTokenCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "delete-token PROJECT ROLE-NAME ISSUED-AT",
Short: "Delete a project token",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
issuedAt, err := strconv.ParseInt(args[2], 10, 64)
if err != nil {
log.Fatal(err)
}
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
_, err = projIf.DeleteToken(context.Background(), &project.ProjectTokenDeleteRequest{Project: projName, Role: roleName, Iat: issuedAt})
errors.CheckError(err)
},
}
return command
}
// NewProjectRoleListCommand returns a new instance of an `argocd proj role list` command
func NewProjectRoleListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "list PROJECT",
Short: "List all the roles in a project",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
project, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ROLE-NAME\tDESCRIPTION\n")
for _, role := range project.Spec.Roles {
fmt.Fprintf(w, "%s\t%s\n", role.Name, role.Description)
}
_ = w.Flush()
},
}
return command
}
// NewProjectRoleGetCommand returns a new instance of an `argocd proj role get` command
func NewProjectRoleGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "get PROJECT ROLE-NAME",
Short: "Get the details of a specific role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
project, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index, err := projectutil.GetRoleIndexByName(project, roleName)
errors.CheckError(err)
role := project.Spec.Roles[index]
printRoleFmtStr := "%-15s%s\n"
fmt.Printf(printRoleFmtStr, "Role Name:", roleName)
fmt.Printf(printRoleFmtStr, "Description:", role.Description)
fmt.Printf("Policies:\n")
fmt.Printf("%s\n", project.ProjectPoliciesString())
fmt.Printf("JWT Tokens:\n")
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ID\tISSUED-AT\tEXPIRES-AT\n")
for _, token := range role.JWTTokens {
expiresAt := "<none>"
if token.ExpiresAt > 0 {
expiresAt = humanizeTimestamp(token.ExpiresAt)
}
fmt.Fprintf(w, "%d\t%s\t%s\n", token.IssuedAt, humanizeTimestamp(token.IssuedAt), expiresAt)
}
_ = w.Flush()
},
}
return command
}
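// humanizeTimestamp formats a unix epoch as an RFC3339 timestamp followed by a
// human-readable relative time (e.g. "3 days ago").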
func humanizeTimestamp(epoch int64) string {
ts := time.Unix(epoch, 0)
return fmt.Sprintf("%s (%s)", ts.Format(time.RFC3339), humanize.Time(ts))
}
// NewProjectCreateCommand returns a new instance of an `argocd proj create` command
func NewProjectCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts projectOpts
)
var command = &cobra.Command{
Use: "create PROJECT",
Short: "Create a project",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
proj := v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{Name: projName},
Spec: v1alpha1.AppProjectSpec{
Description: opts.description,
Destinations: opts.GetDestinations(),
SourceRepos: opts.sources,
},
}
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
_, err := projIf.Create(context.Background(), &project.ProjectCreateRequest{Project: &proj})
errors.CheckError(err)
},
}
addProjFlags(command, &opts)
return command
}
// NewProjectSetCommand returns a new instance of an `argocd proj set` command
func NewProjectSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts projectOpts
)
var command = &cobra.Command{
Use: "set PROJECT",
Short: "Set project parameters",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
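// Apply only the flags that were explicitly set on the command line; require at least one.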
visited := 0
c.Flags().Visit(func(f *pflag.Flag) {
visited++
switch f.Name {
case "description":
proj.Spec.Description = opts.description
case "dest":
proj.Spec.Destinations = opts.GetDestinations()
case "src":
proj.Spec.SourceRepos = opts.sources
}
})
if visited == 0 {
log.Error("Please set at least one option to update")
c.HelpFunc()(c, args)
os.Exit(1)
}
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
addProjFlags(command, &opts)
return command
}
// NewProjectAddDestinationCommand returns a new instance of an `argocd proj add-destination` command
func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "add-destination PROJECT SERVER NAMESPACE",
Short: "Add project destination",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
server := args[1]
namespace := args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
for _, dest := range proj.Spec.Destinations {
if dest.Namespace == namespace && dest.Server == server {
log.Fatal("Specified destination is already defined in project")
}
}
proj.Spec.Destinations = append(proj.Spec.Destinations, v1alpha1.ApplicationDestination{Server: server, Namespace: namespace})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
return command
}
// NewProjectRemoveDestinationCommand returns a new instance of an `argocd proj remove-destination` command
func NewProjectRemoveDestinationCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "remove-destination PROJECT SERVER NAMESPACE",
Short: "Remove project destination",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
server := args[1]
namespace := args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index := -1
for i, dest := range proj.Spec.Destinations {
if dest.Namespace == namespace && dest.Server == server {
index = i
break
}
}
if index == -1 {
log.Fatal("Specified destination does not exist in project")
} else {
proj.Spec.Destinations = append(proj.Spec.Destinations[:index], proj.Spec.Destinations[index+1:]...)
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
}
return command
}
// NewProjectAddSourceCommand returns a new instance of an `argocd proj add-src` command
func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "add-source PROJECT URL",
Short: "Add project source repository",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
url := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
for _, item := range proj.Spec.SourceRepos {
if item == "*" && item == url {
log.Info("Wildcard source repository is already defined in project")
return
}
if item == git.NormalizeGitURL(url) {
log.Info("Specified source repository is already defined in project")
return
}
}
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos, url)
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
return command
}
func modifyProjectResourceCmd(cmdUse, cmdDesc string, clientOpts *argocdclient.ClientOptions, action func(proj *v1alpha1.AppProject, group string, kind string) bool) *cobra.Command {
return &cobra.Command{
Use: cmdUse,
Short: cmdDesc,
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName, group, kind := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
if action(proj, group, kind) {
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
}
}
// NewProjectAllowNamespaceResourceCommand returns a new instance of an `argocd proj allow-namespace-resource` command
func NewProjectAllowNamespaceResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "allow-namespace-resource PROJECT GROUP KIND"
desc := "Removes a namespaced API resource from the blacklist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
index := -1
for i, item := range proj.Spec.NamespaceResourceBlacklist {
if item.Group == group && item.Kind == kind {
index = i
break
}
}
if index == -1 {
log.Info("Specified cluster resource is not blacklisted")
return false
}
proj.Spec.NamespaceResourceBlacklist = append(proj.Spec.NamespaceResourceBlacklist[:index], proj.Spec.NamespaceResourceBlacklist[index+1:]...)
return true
})
}
// NewProjectDenyNamespaceResourceCommand returns a new instance of an `argocd proj deny-namespace-resource` command
func NewProjectDenyNamespaceResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "deny-namespace-resource PROJECT GROUP KIND"
desc := "Adds a namespaced API resource to the blacklist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
for _, item := range proj.Spec.NamespaceResourceBlacklist {
if item.Group == group && item.Kind == kind {
log.Infof("Group '%s' and kind '%s' are already blacklisted in project", item.Group, item.Kind)
return false
}
}
proj.Spec.NamespaceResourceBlacklist = append(proj.Spec.NamespaceResourceBlacklist, v1.GroupKind{Group: group, Kind: kind})
return true
})
}
// NewProjectDenyClusterResourceCommand returns a new instance of an `argocd proj deny-cluster-resource` command
func NewProjectDenyClusterResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "deny-cluster-resource PROJECT GROUP KIND"
desc := "Removes a cluster-scoped API resource from the whitelist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
index := -1
for i, item := range proj.Spec.ClusterResourceWhitelist {
if item.Group == group && item.Kind == kind {
index = i
break
}
}
if index == -1 {
log.Info("Specified cluster resource already denied in project")
return false
}
proj.Spec.ClusterResourceWhitelist = append(proj.Spec.ClusterResourceWhitelist[:index], proj.Spec.ClusterResourceWhitelist[index+1:]...)
return true
})
}
// NewProjectAllowClusterResourceCommand returns a new instance of an `argocd proj allow-cluster-resource` command
func NewProjectAllowClusterResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "allow-cluster-resource PROJECT GROUP KIND"
desc := "Adds a cluster-scoped API resource to the whitelist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
for _, item := range proj.Spec.ClusterResourceWhitelist {
if item.Group == group && item.Kind == kind {
log.Infof("Group '%s' and kind '%s' are already whitelisted in project", item.Group, item.Kind)
return false
}
}
proj.Spec.ClusterResourceWhitelist = append(proj.Spec.ClusterResourceWhitelist, v1.GroupKind{Group: group, Kind: kind})
return true
})
}
// NewProjectRemoveSourceCommand returns a new instance of an `argocd proj remove-src` command
func NewProjectRemoveSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "remove-source PROJECT URL",
Short: "Remove project source repository",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
url := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index := -1
for i, item := range proj.Spec.SourceRepos {
if item == "*" && item == url {
index = i
break
}
if item == git.NormalizeGitURL(url) {
index = i
break
}
}
if index == -1 {
log.Info("Specified source repository does not exist in project")
} else {
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos[:index], proj.Spec.SourceRepos[index+1:]...)
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
}
return command
}
// NewProjectDeleteCommand returns a new instance of an `argocd proj delete` command
func NewProjectDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "delete PROJECT",
Short: "Delete project",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
for _, name := range args {
_, err := projIf.Delete(context.Background(), &project.ProjectQuery{Name: name})
errors.CheckError(err)
}
},
}
return command
}
// NewProjectListCommand returns a new instance of an `argocd proj list` command
func NewProjectListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "list",
Short: "List projects",
Run: func(c *cobra.Command, args []string) {
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
projects, err := projIf.List(context.Background(), &project.ProjectQuery{})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\n")
for _, p := range projects.Items {
fmt.Fprintf(w, "%s\t%s\t%v\t%v\t%v\t%v\n", p.Name, p.Spec.Description, p.Spec.Destinations, p.Spec.SourceRepos, p.Spec.ClusterResourceWhitelist, p.Spec.NamespaceResourceBlacklist)
}
_ = w.Flush()
},
}
return command
}

View File

@@ -1,75 +0,0 @@
package commands
import (
"fmt"
"os"
jwt "github.com/dgrijalva/jwt-go"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util/localconfig"
"github.com/argoproj/argo-cd/util/session"
)
// NewReloginCommand returns a new instance of `argocd relogin` command
func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
password string
)
var command = &cobra.Command{
Use: "relogin",
Short: "Refresh an expired authenticate token",
Long: "Refresh an expired authenticate token",
Run: func(c *cobra.Command, args []string) {
if len(args) != 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
localCfg, err := localconfig.ReadLocalConfig(globalClientOpts.ConfigPath)
errors.CheckError(err)
if localCfg == nil {
log.Fatalf("No context found. Login using `argocd login`")
}
configCtx, err := localCfg.ResolveContext(localCfg.CurrentContext)
errors.CheckError(err)
parser := &jwt.Parser{
SkipClaimsValidation: true,
}
claims := jwt.StandardClaims{}
_, _, err = parser.ParseUnverified(configCtx.User.AuthToken, &claims)
errors.CheckError(err)
var tokenString string
var refreshToken string
if claims.Issuer == session.SessionManagerClaimsIssuer {
clientOpts := argocdclient.ClientOptions{
ConfigPath: "",
ServerAddr: configCtx.Server.Server,
Insecure: configCtx.Server.Insecure,
PlainText: configCtx.Server.PlainText,
}
acdClient := argocdclient.NewClientOrDie(&clientOpts)
fmt.Printf("Relogging in as '%s'\n", claims.Subject)
tokenString = passwordLogin(acdClient, claims.Subject, password)
} else {
fmt.Println("Reinitiating SSO login")
tokenString, refreshToken = oauth2Login(configCtx.Server.Server, configCtx.Server.PlainText)
}
localCfg.UpsertUser(localconfig.User{
Name: localCfg.CurrentContext,
AuthToken: tokenString,
RefreshToken: refreshToken,
})
err = localconfig.WriteLocalConfig(*localCfg, globalClientOpts.ConfigPath)
errors.CheckError(err)
fmt.Printf("Context '%s' updated\n", localCfg.CurrentContext)
},
}
command.Flags().StringVar(&password, "password", "", "the password of an account to authenticate")
return command
}

View File

@@ -7,9 +7,6 @@ import (
"os"
"text/tabwriter"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
appsv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
@@ -17,6 +14,8 @@ import (
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/git"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
// NewRepoCommand returns a new instance of an `argocd repo` command
@@ -40,7 +39,6 @@ func NewRepoCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
repo appsv1.Repository
upsert bool
sshPrivateKeyPath string
)
var command = &cobra.Command{
@@ -59,36 +57,27 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
repo.SSHPrivateKey = string(keyData)
}
// First test the repo *without* username/password. This gives us a hint on whether this
// is a private repo.
// NOTE: it is important not to run git commands to test git credentials on the user's
// system since it may mess with their git credential store (e.g. osx keychain).
// See issue #315
err := git.TestRepo(repo.Repo, "", "", repo.SSHPrivateKey)
err := git.TestRepo(repo.Repo, repo.Username, repo.Password, repo.SSHPrivateKey)
if err != nil {
if git.IsSSHURL(repo.Repo) {
// If we failed using git SSH credentials, then the repo is automatically bad
if repo.Username != "" && repo.Password != "" || git.IsSSHURL(repo.Repo) {
// if everything was supplied or repo URL is SSH url, one of the inputs was definitely bad
log.Fatal(err)
}
// If we can't test the repo, it's probably private. Prompt for credentials and
// let the server test it.
// If we can't test the repo, it's probably private. Prompt for credentials and try again.
repo.Username, repo.Password = cli.PromptCredentials(repo.Username, repo.Password)
err = git.TestRepo(repo.Repo, repo.Username, repo.Password, repo.SSHPrivateKey)
}
errors.CheckError(err)
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
repoCreateReq := repository.RepoCreateRequest{
Repo: &repo,
Upsert: upsert,
}
createdRepo, err := repoIf.Create(context.Background(), &repoCreateReq)
createdRepo, err := repoIf.Create(context.Background(), &repo)
errors.CheckError(err)
fmt.Printf("repository '%s' added\n", createdRepo.Repo)
},
}
command.Flags().StringVar(&repo.Username, "username", "", "username to the repository")
command.Flags().StringVar(&repo.Password, "password", "", "password to the repository")
command.Flags().StringVar(&sshPrivateKeyPath, "ssh-private-key-path", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing repository with the same name even if the spec differs")
command.Flags().StringVar(&sshPrivateKeyPath, "sshPrivateKeyPath", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
return command
}
@@ -124,9 +113,9 @@ func NewRepoListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
repos, err := repoIf.List(context.Background(), &repository.RepoQuery{})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "REPO\tUSER\tSTATUS\tMESSAGE\n")
fmt.Fprintf(w, "REPO\tUSER\tMESSAGE\n")
for _, r := range repos.Items {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", r.Repo, r.Username, r.ConnectionState.Status, r.ConnectionState.Message)
fmt.Fprintf(w, "%s\t%s\t%s\n", r.Repo, r.Username, r.Message)
}
_ = w.Flush()
},

View File

@@ -27,11 +27,10 @@ func NewCommand() *cobra.Command {
command.AddCommand(NewClusterCommand(&clientOpts, pathOpts))
command.AddCommand(NewApplicationCommand(&clientOpts))
command.AddCommand(NewLoginCommand(&clientOpts))
command.AddCommand(NewReloginCommand(&clientOpts))
command.AddCommand(NewRepoCommand(&clientOpts))
command.AddCommand(NewInstallCommand())
command.AddCommand(NewUninstallCommand())
command.AddCommand(NewContextCommand(&clientOpts))
command.AddCommand(NewProjectCommand(&clientOpts))
command.AddCommand(NewAccountCommand(&clientOpts))
defaultLocalConfigPath, err := localconfig.DefaultLocalConfigPath()
errors.CheckError(err)

View File

@@ -0,0 +1,40 @@
package commands
import (
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/install"
"github.com/argoproj/argo-cd/util/cli"
"github.com/spf13/cobra"
"k8s.io/client-go/tools/clientcmd"
)
// NewUninstallCommand returns a new instance of `argocd uninstall` command
func NewUninstallCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
installOpts install.InstallOptions
deleteNamespace bool
deleteCRD bool
)
var command = &cobra.Command{
Use: "uninstall",
Short: "Uninstall Argo CD",
Long: "Uninstall Argo CD",
Run: func(c *cobra.Command, args []string) {
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, wasSpecified, err := clientConfig.Namespace()
errors.CheckError(err)
if wasSpecified {
installOpts.Namespace = namespace
}
installer, err := install.NewInstaller(conf, installOpts)
errors.CheckError(err)
installer.Uninstall(deleteNamespace, deleteCRD)
},
}
clientConfig = cli.AddKubectlFlagsToCmd(command)
command.Flags().BoolVar(&deleteNamespace, "delete-namespace", false, "Also delete the namespace during uninstall")
command.Flags().BoolVar(&deleteCRD, "delete-crd", false, "Also delete the Application CRD during uninstall")
return command
}

View File

@@ -54,7 +54,6 @@ func NewVersionCmd(clientOpts *argocdclient.ClientOptions) *cobra.Command {
fmt.Printf(" GoVersion: %s\n", serverVers.GoVersion)
fmt.Printf(" Compiler: %s\n", serverVers.Compiler)
fmt.Printf(" Platform: %s\n", serverVers.Platform)
fmt.Printf(" Ksonnet Version: %s\n", serverVers.KsonnetVersion)
}
},

View File

@@ -1,9 +1,8 @@
package common
import (
rbacv1 "k8s.io/api/rbac/v1"
"github.com/argoproj/argo-cd/pkg/apis/application"
rbacv1 "k8s.io/api/rbac/v1"
)
const (
@@ -20,16 +19,12 @@ const (
AuthCookieName = "argocd.token"
// ResourcesFinalizerName is the name of the application CRD finalizer
ResourcesFinalizerName = "resources-finalizer." + MetadataPrefix
// KubernetesInternalAPIServerAddr is the address of the k8s API server when accessed from within the cluster
KubernetesInternalAPIServerAddr = "https://kubernetes.default.svc"
)
const (
ArgoCDAdminUsername = "admin"
ArgoCDSecretName = "argocd-secret"
ArgoCDConfigMapName = "argocd-cm"
ArgoCDRBACConfigMapName = "argocd-rbac-cm"
ArgoCDAdminUsername = "admin"
ArgoCDSecretName = "argocd-secret"
ArgoCDConfigMapName = "argocd-cm"
)
const (
@@ -49,10 +44,6 @@ const (
ArgoCDCLIClientAppID = "argo-cd-cli"
// EnvVarSSODebug is an environment variable to enable additional OAuth debugging in the API server
EnvVarSSODebug = "ARGOCD_SSO_DEBUG"
// EnvVarRBACDebug is an environment variable to enable additional RBAC debugging in the API server
EnvVarRBACDebug = "ARGOCD_RBAC_DEBUG"
// DefaultAppProjectName contains name of default app project. The default app project allows deploying application to any cluster.
DefaultAppProjectName = "default"
)
var (
@@ -62,20 +53,6 @@ var (
// LabelKeySecretType contains the type of argocd secret (either 'cluster' or 'repo')
LabelKeySecretType = MetadataPrefix + "/secret-type"
// AnnotationConnectionStatus contains connection state status
AnnotationConnectionStatus = MetadataPrefix + "/connection-status"
// AnnotationConnectionMessage contains additional information about connection status
AnnotationConnectionMessage = MetadataPrefix + "/connection-message"
// AnnotationConnectionModifiedAt contains timestamp when connection state had been modified
AnnotationConnectionModifiedAt = MetadataPrefix + "/connection-modified-at"
// AnnotationHook contains the hook type of a resource
AnnotationHook = MetadataPrefix + "/hook"
// AnnotationHookDeletePolicy is the policy of deleting a hook
AnnotationHookDeletePolicy = MetadataPrefix + "/hook-delete-policy"
// AnnotationHelmHook is the helm hook annotation
AnnotationHelmHook = "helm.sh/hook"
// LabelKeyApplicationControllerInstanceID is the label which allows to separate application among multiple running application controllers.
LabelKeyApplicationControllerInstanceID = application.ApplicationFullName + "/controller-instanceid"
@@ -102,8 +79,4 @@ var ArgoCDManagerPolicyRules = []rbacv1.PolicyRule{
Resources: []string{"*"},
Verbs: []string{"*"},
},
{
NonResourceURLs: []string{"*"},
Verbs: []string{"*"},
},
}
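As a side note, the retained LabelKeySecretType constant is what distinguishes Argo CD cluster secrets from repo secrets; a small illustrative sketch follows (assumed caller, not part of this change):

package main

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/argoproj/argo-cd/common"
)

func main() {
	// Label a secret so Argo CD recognizes it as a cluster credential; the
	// constant's comment above documents 'cluster' and 'repo' as the values.
	secret := apiv1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "example-cluster-secret",
			Labels: map[string]string{common.LabelKeySecretType: "cluster"},
		},
	}
	fmt.Println(secret.Labels)
}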

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"time"
"github.com/argoproj/argo-cd/errors"
log "github.com/sirupsen/logrus"
apiv1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
@@ -11,6 +12,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
// CreateServiceAccount creates a service account
@@ -18,7 +20,7 @@ func CreateServiceAccount(
clientset kubernetes.Interface,
serviceAccountName string,
namespace string,
) error {
) {
serviceAccount := apiv1.ServiceAccount{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
@@ -32,13 +34,12 @@ func CreateServiceAccount(
_, err := clientset.CoreV1().ServiceAccounts(namespace).Create(&serviceAccount)
if err != nil {
if !apierr.IsAlreadyExists(err) {
return fmt.Errorf("Failed to create service account %q: %v", serviceAccountName, err)
log.Fatalf("Failed to create service account '%s': %v\n", serviceAccountName, err)
}
log.Infof("ServiceAccount %q already exists", serviceAccountName)
return nil
fmt.Printf("ServiceAccount '%s' already exists\n", serviceAccountName)
return
}
log.Infof("ServiceAccount %q created", serviceAccountName)
return nil
fmt.Printf("ServiceAccount '%s' created\n", serviceAccountName)
}
// CreateClusterRole creates a cluster role
@@ -46,7 +47,7 @@ func CreateClusterRole(
clientset kubernetes.Interface,
clusterRoleName string,
rules []rbacv1.PolicyRule,
) error {
) {
clusterRole := rbacv1.ClusterRole{
TypeMeta: metav1.TypeMeta{
APIVersion: "rbac.authorization.k8s.io/v1",
@@ -61,17 +62,16 @@ func CreateClusterRole(
_, err := crclient.Create(&clusterRole)
if err != nil {
if !apierr.IsAlreadyExists(err) {
return fmt.Errorf("Failed to create ClusterRole %q: %v", clusterRoleName, err)
log.Fatalf("Failed to create ClusterRole '%s': %v\n", clusterRoleName, err)
}
_, err = crclient.Update(&clusterRole)
if err != nil {
return fmt.Errorf("Failed to update ClusterRole %q: %v", clusterRoleName, err)
log.Fatalf("Failed to update ClusterRole '%s': %v\n", clusterRoleName, err)
}
log.Infof("ClusterRole %q updated", clusterRoleName)
fmt.Printf("ClusterRole '%s' updated\n", clusterRoleName)
} else {
log.Infof("ClusterRole %q created", clusterRoleName)
fmt.Printf("ClusterRole '%s' created\n", clusterRoleName)
}
return nil
}
// CreateClusterRoleBinding create a ClusterRoleBinding
@@ -81,7 +81,7 @@ func CreateClusterRoleBinding(
serviceAccountName,
clusterRoleName string,
namespace string,
) error {
) {
roleBinding := rbacv1.ClusterRoleBinding{
TypeMeta: metav1.TypeMeta{
APIVersion: "rbac.authorization.k8s.io/v1",
@@ -106,34 +106,22 @@ func CreateClusterRoleBinding(
_, err := clientset.RbacV1().ClusterRoleBindings().Create(&roleBinding)
if err != nil {
if !apierr.IsAlreadyExists(err) {
return fmt.Errorf("Failed to create ClusterRoleBinding %s: %v", clusterBindingRoleName, err)
log.Fatalf("Failed to create ClusterRoleBinding %s: %v\n", clusterBindingRoleName, err)
}
log.Infof("ClusterRoleBinding %q already exists", clusterBindingRoleName)
return nil
fmt.Printf("ClusterRoleBinding '%s' already exists\n", clusterBindingRoleName)
return
}
log.Infof("ClusterRoleBinding %q created, bound %q to %q", clusterBindingRoleName, serviceAccountName, clusterRoleName)
return nil
fmt.Printf("ClusterRoleBinding '%s' created, bound '%s' to '%s'\n", clusterBindingRoleName, serviceAccountName, clusterRoleName)
}
// InstallClusterManagerRBAC installs RBAC resources for a cluster manager to operate a cluster. Returns a token
func InstallClusterManagerRBAC(clientset kubernetes.Interface) (string, error) {
func InstallClusterManagerRBAC(conf *rest.Config) string {
const ns = "kube-system"
var err error
err = CreateServiceAccount(clientset, ArgoCDManagerServiceAccount, ns)
if err != nil {
return "", err
}
err = CreateClusterRole(clientset, ArgoCDManagerClusterRole, ArgoCDManagerPolicyRules)
if err != nil {
return "", err
}
err = CreateClusterRoleBinding(clientset, ArgoCDManagerClusterRoleBinding, ArgoCDManagerServiceAccount, ArgoCDManagerClusterRole, ns)
if err != nil {
return "", err
}
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
CreateServiceAccount(clientset, ArgoCDManagerServiceAccount, ns)
CreateClusterRole(clientset, ArgoCDManagerClusterRole, ArgoCDManagerPolicyRules)
CreateClusterRoleBinding(clientset, ArgoCDManagerClusterRoleBinding, ArgoCDManagerServiceAccount, ArgoCDManagerClusterRole, ns)
var serviceAccount *apiv1.ServiceAccount
var secretName string
@@ -149,51 +137,52 @@ func InstallClusterManagerRBAC(clientset kubernetes.Interface) (string, error) {
return true, nil
})
if err != nil {
return "", fmt.Errorf("Failed to wait for service account secret: %v", err)
log.Fatalf("Failed to wait for service account secret: %v", err)
}
secret, err := clientset.CoreV1().Secrets(ns).Get(secretName, metav1.GetOptions{})
if err != nil {
return "", fmt.Errorf("Failed to retrieve secret %q: %v", secretName, err)
log.Fatalf("Failed to retrieve secret '%s': %v", secretName, err)
}
token, ok := secret.Data["token"]
if !ok {
return "", fmt.Errorf("Secret %q for service account %q did not have a token", secretName, serviceAccount)
log.Fatalf("Secret '%s' for service account '%s' did not have a token", secretName, serviceAccount)
}
return string(token), nil
return string(token)
}
// UninstallClusterManagerRBAC removes RBAC resources for a cluster manager to operate a cluster
func UninstallClusterManagerRBAC(clientset kubernetes.Interface) error {
return UninstallRBAC(clientset, "kube-system", ArgoCDManagerClusterRoleBinding, ArgoCDManagerClusterRole, ArgoCDManagerServiceAccount)
func UninstallClusterManagerRBAC(conf *rest.Config) {
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
UninstallRBAC(clientset, "kube-system", ArgoCDManagerClusterRoleBinding, ArgoCDManagerClusterRole, ArgoCDManagerServiceAccount)
}
// UninstallRBAC uninstalls RBAC related resources for a binding, role, and service account
func UninstallRBAC(clientset kubernetes.Interface, namespace, bindingName, roleName, serviceAccount string) error {
func UninstallRBAC(clientset kubernetes.Interface, namespace, bindingName, roleName, serviceAccount string) {
if err := clientset.RbacV1().ClusterRoleBindings().Delete(bindingName, &metav1.DeleteOptions{}); err != nil {
if !apierr.IsNotFound(err) {
return fmt.Errorf("Failed to delete ClusterRoleBinding: %v", err)
log.Fatalf("Failed to delete ClusterRoleBinding: %v\n", err)
}
log.Infof("ClusterRoleBinding %q not found", bindingName)
fmt.Printf("ClusterRoleBinding '%s' not found\n", bindingName)
} else {
log.Infof("ClusterRoleBinding %q deleted", bindingName)
fmt.Printf("ClusterRoleBinding '%s' deleted\n", bindingName)
}
if err := clientset.RbacV1().ClusterRoles().Delete(roleName, &metav1.DeleteOptions{}); err != nil {
if !apierr.IsNotFound(err) {
return fmt.Errorf("Failed to delete ClusterRole: %v", err)
log.Fatalf("Failed to delete ClusterRole: %v\n", err)
}
log.Infof("ClusterRole %q not found", roleName)
fmt.Printf("ClusterRole '%s' not found\n", roleName)
} else {
log.Infof("ClusterRole %q deleted", roleName)
fmt.Printf("ClusterRole '%s' deleted\n", roleName)
}
if err := clientset.CoreV1().ServiceAccounts(namespace).Delete(serviceAccount, &metav1.DeleteOptions{}); err != nil {
if !apierr.IsNotFound(err) {
return fmt.Errorf("Failed to delete ServiceAccount: %v", err)
log.Fatalf("Failed to delete ServiceAccount: %v\n", err)
}
log.Infof("ServiceAccount %q in namespace %q not found", serviceAccount, namespace)
fmt.Printf("ServiceAccount '%s' in namespace '%s' not found\n", serviceAccount, namespace)
} else {
log.Infof("ServiceAccount %q deleted", serviceAccount)
fmt.Printf("ServiceAccount '%s' deleted\n", serviceAccount)
}
return nil
}
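To make the new signatures concrete, here is a usage sketch of the refactored InstallClusterManagerRBAC, which now accepts a *rest.Config and returns the bearer token directly, terminating the process via log.Fatalf on failure. The package clause and the exampleInstallClusterManagerRBAC function are assumptions for illustration; only the helper call itself comes from the code above.

package common // assumed: same package as the RBAC helpers above

import (
	"k8s.io/client-go/tools/clientcmd"

	"github.com/argoproj/argo-cd/errors"
)

// exampleInstallClusterManagerRBAC is a sketch only: build a rest.Config from
// the default kubeconfig and install the cluster manager RBAC resources.
func exampleInstallClusterManagerRBAC() string {
	conf, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	errors.CheckError(err)
	// New signature: no error return; failures exit the process internally.
	return InstallClusterManagerRBAC(conf)
}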

View File

@@ -1,864 +0,0 @@
package controller
import (
"context"
"encoding/json"
"fmt"
"reflect"
"runtime/debug"
"sync"
"time"
log "github.com/sirupsen/logrus"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/strategicpatch"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"github.com/argoproj/argo-cd/common"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
appinformers "github.com/argoproj/argo-cd/pkg/client/informers/externalversions"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/health"
"github.com/argoproj/argo-cd/util/kube"
)
const (
watchResourcesRetryTimeout = 10 * time.Second
updateOperationStateTimeout = 1 * time.Second
)
// ApplicationController is the controller for application resources.
type ApplicationController struct {
namespace string
kubeClientset kubernetes.Interface
kubectl kube.Kubectl
applicationClientset appclientset.Interface
auditLogger *argo.AuditLogger
appRefreshQueue workqueue.RateLimitingInterface
appOperationQueue workqueue.RateLimitingInterface
appInformer cache.SharedIndexInformer
appStateManager AppStateManager
statusRefreshTimeout time.Duration
repoClientset reposerver.Clientset
db db.ArgoDB
forceRefreshApps map[string]bool
forceRefreshAppsMutex *sync.Mutex
}
type ApplicationControllerConfig struct {
InstanceID string
Namespace string
}
// NewApplicationController creates new instance of ApplicationController.
func NewApplicationController(
namespace string,
kubeClientset kubernetes.Interface,
applicationClientset appclientset.Interface,
repoClientset reposerver.Clientset,
appResyncPeriod time.Duration,
) *ApplicationController {
db := db.NewDB(namespace, kubeClientset)
kubectlCmd := kube.KubectlCmd{}
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectlCmd)
ctrl := ApplicationController{
namespace: namespace,
kubeClientset: kubeClientset,
kubectl: kubectlCmd,
applicationClientset: applicationClientset,
repoClientset: repoClientset,
appRefreshQueue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
appOperationQueue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
appStateManager: appStateManager,
db: db,
statusRefreshTimeout: appResyncPeriod,
forceRefreshApps: make(map[string]bool),
forceRefreshAppsMutex: &sync.Mutex{},
auditLogger: argo.NewAuditLogger(namespace, kubeClientset, "application-controller"),
}
ctrl.appInformer = ctrl.newApplicationInformer()
return &ctrl
}
// Run starts the Application CRD controller.
func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int, operationProcessors int) {
defer runtime.HandleCrash()
defer ctrl.appRefreshQueue.ShutDown()
go ctrl.appInformer.Run(ctx.Done())
if !cache.WaitForCacheSync(ctx.Done(), ctrl.appInformer.HasSynced) {
log.Error("Timed out waiting for caches to sync")
return
}
go ctrl.watchAppsResources()
for i := 0; i < statusProcessors; i++ {
go wait.Until(func() {
for ctrl.processAppRefreshQueueItem() {
}
}, time.Second, ctx.Done())
}
for i := 0; i < operationProcessors; i++ {
go wait.Until(func() {
for ctrl.processAppOperationQueueItem() {
}
}, time.Second, ctx.Done())
}
<-ctx.Done()
}
func (ctrl *ApplicationController) forceAppRefresh(appName string) {
ctrl.forceRefreshAppsMutex.Lock()
defer ctrl.forceRefreshAppsMutex.Unlock()
ctrl.forceRefreshApps[appName] = true
}
func (ctrl *ApplicationController) isRefreshForced(appName string) bool {
ctrl.forceRefreshAppsMutex.Lock()
defer ctrl.forceRefreshAppsMutex.Unlock()
_, ok := ctrl.forceRefreshApps[appName]
if ok {
delete(ctrl.forceRefreshApps, appName)
}
return ok
}
// watchClusterResources watches for resource changes annotated with the application label on the specified cluster and schedules a corresponding app refresh.
func (ctrl *ApplicationController) watchClusterResources(ctx context.Context, item appv1.Cluster) {
retryUntilSucceed(func() (err error) {
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("Recovered from panic: %v\n", r)
}
}()
config := item.RESTConfig()
watchStartTime := time.Now()
ch, err := ctrl.kubectl.WatchResources(ctx, config, "", func(gvk schema.GroupVersionKind) metav1.ListOptions {
ops := metav1.ListOptions{}
if !kube.IsCRDGroupVersionKind(gvk) {
ops.LabelSelector = common.LabelApplicationName
}
return ops
})
if err != nil {
return err
}
for event := range ch {
eventObj := event.Object.(*unstructured.Unstructured)
if kube.IsCRD(eventObj) {
// restart if new CRD has been created after watch started
if event.Type == watch.Added && watchStartTime.Before(eventObj.GetCreationTimestamp().Time) {
return fmt.Errorf("Restarting the watch because a new CRD was added.")
} else if event.Type == watch.Deleted {
return fmt.Errorf("Restarting the watch because a CRD was deleted.")
}
}
objLabels := eventObj.GetLabels()
if objLabels == nil {
objLabels = make(map[string]string)
}
if appName, ok := objLabels[common.LabelApplicationName]; ok {
ctrl.forceAppRefresh(appName)
ctrl.appRefreshQueue.Add(ctrl.namespace + "/" + appName)
}
}
return fmt.Errorf("resource updates channel has closed")
}, fmt.Sprintf("watch app resources on %s", item.Server), ctx, watchResourcesRetryTimeout)
}
func isClusterHasApps(apps []interface{}, cluster *appv1.Cluster) bool {
for _, obj := range apps {
if app, ok := obj.(*appv1.Application); ok && app.Spec.Destination.Server == cluster.Server {
return true
}
}
return false
}
// WatchAppsResources watches for resource changes annotated with the application label on all registered clusters and schedules a corresponding app refresh.
func (ctrl *ApplicationController) watchAppsResources() {
watchingClusters := make(map[string]struct {
cancel context.CancelFunc
cluster *appv1.Cluster
})
retryUntilSucceed(func() error {
clusterEventCallback := func(event *db.ClusterEvent) {
info, ok := watchingClusters[event.Cluster.Server]
hasApps := isClusterHasApps(ctrl.appInformer.GetStore().List(), event.Cluster)
// cluster resources must be watched only if cluster has at least one app
if (event.Type == watch.Deleted || !hasApps) && ok {
info.cancel()
delete(watchingClusters, event.Cluster.Server)
} else if event.Type != watch.Deleted && !ok && hasApps {
ctx, cancel := context.WithCancel(context.Background())
watchingClusters[event.Cluster.Server] = struct {
cancel context.CancelFunc
cluster *appv1.Cluster
}{
cancel: cancel,
cluster: event.Cluster,
}
go ctrl.watchClusterResources(ctx, *event.Cluster)
}
}
onAppModified := func(obj interface{}) {
if app, ok := obj.(*appv1.Application); ok {
var cluster *appv1.Cluster
info, infoOk := watchingClusters[app.Spec.Destination.Server]
if infoOk {
cluster = info.cluster
} else {
cluster, _ = ctrl.db.GetCluster(context.Background(), app.Spec.Destination.Server)
}
if cluster != nil {
// trigger cluster event every time when app created/deleted to either start or stop watching resources
clusterEventCallback(&db.ClusterEvent{Cluster: cluster, Type: watch.Modified})
}
}
}
ctrl.appInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{AddFunc: onAppModified, DeleteFunc: onAppModified})
return ctrl.db.WatchClusters(context.Background(), clusterEventCallback)
}, "watch clusters", context.Background(), watchResourcesRetryTimeout)
<-context.Background().Done()
}
// retryUntilSucceed keeps retrying the given action with the specified timeout until the action succeeds or the specified context is done.
func retryUntilSucceed(action func() error, desc string, ctx context.Context, timeout time.Duration) {
ctxCompleted := false
go func() {
select {
case <-ctx.Done():
ctxCompleted = true
}
}()
for {
err := action()
if err == nil {
return
}
if err != nil {
if ctxCompleted {
log.Infof("Stop retrying %s", desc)
return
} else {
log.Warnf("Failed to %s: %+v, retrying in %v", desc, err, timeout)
time.Sleep(timeout)
}
}
}
}
func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appOperationQueue.Get()
if shutdown {
processNext = false
return
} else {
processNext = true
}
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appOperationQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey.(string))
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
}
app, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
if app.Operation != nil {
ctrl.processRequestedAppOperation(app)
} else if app.DeletionTimestamp != nil && app.CascadedDeletion() {
ctrl.finalizeApplicationDeletion(app)
}
return
}
func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Application) {
logCtx := log.WithField("application", app.Name)
logCtx.Infof("Deleting resources")
// Get refreshed application info, since informer app copy might be stale
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(app.Name, metav1.GetOptions{})
if err != nil {
if !errors.IsNotFound(err) {
logCtx.Errorf("Unable to get refreshed application info prior deleting resources: %v", err)
}
return
}
clst, err := ctrl.db.GetCluster(context.Background(), app.Spec.Destination.Server)
if err == nil {
config := clst.RESTConfig()
err = kube.DeleteResourcesWithLabel(config, app.Spec.Destination.Namespace, common.LabelApplicationName, app.Name)
if err == nil {
app.SetCascadedDeletion(false)
var patch []byte
patch, err = json.Marshal(map[string]interface{}{
"metadata": map[string]interface{}{
"finalizers": app.Finalizers,
},
})
if err == nil {
_, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Patch(app.Name, types.MergePatchType, patch)
}
}
}
if err != nil {
ctrl.setAppCondition(app, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionDeletionError,
Message: err.Error(),
})
message := fmt.Sprintf("Unable to delete application resources: %v", err)
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonStatusRefreshed, Type: v1.EventTypeWarning}, message)
} else {
logCtx.Info("Successfully deleted resources")
}
}
func (ctrl *ApplicationController) setAppCondition(app *appv1.Application, condition appv1.ApplicationCondition) {
index := -1
for i, existing := range app.Status.Conditions {
if existing.Type == condition.Type {
index = i
break
}
}
if index > -1 {
app.Status.Conditions[index] = condition
} else {
app.Status.Conditions = append(app.Status.Conditions, condition)
}
var patch []byte
patch, err := json.Marshal(map[string]interface{}{
"status": map[string]interface{}{
"conditions": app.Status.Conditions,
},
})
if err == nil {
_, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Patch(app.Name, types.MergePatchType, patch)
}
if err != nil {
log.Errorf("Unable to set application condition: %v", err)
}
}
func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Application) {
logCtx := log.WithField("application", app.Name)
var state *appv1.OperationState
// Recover from any unexpected panics and automatically set the status to be failed
defer func() {
if r := recover(); r != nil {
logCtx.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
state.Phase = appv1.OperationError
if rerr, ok := r.(error); ok {
state.Message = rerr.Error()
} else {
state.Message = fmt.Sprintf("%v", r)
}
ctrl.setOperationState(app, state)
}
}()
if isOperationInProgress(app) {
// If we get here, we are about to process an operation but we notice it is already in progress.
// We need to detect if the app object we pulled off the informer is stale and doesn't
// reflect the fact that the operation is completed. We don't want to perform the operation
// again. To detect this, always retrieve the latest version to ensure it is not stale.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
if err != nil {
logCtx.Errorf("Failed to retrieve latest application state: %v", err)
return
}
if !isOperationInProgress(freshApp) {
logCtx.Infof("Skipping operation on stale application state")
return
}
app = freshApp
state = app.Status.OperationState.DeepCopy()
logCtx.Infof("Resuming in-progress operation. phase: %s, message: %s", state.Phase, state.Message)
} else {
state = &appv1.OperationState{Phase: appv1.OperationRunning, Operation: *app.Operation, StartedAt: metav1.Now()}
ctrl.setOperationState(app, state)
logCtx.Infof("Initialized new operation: %v", *app.Operation)
}
ctrl.appStateManager.SyncAppState(app, state)
if state.Phase == appv1.OperationRunning {
// It's possible for an app to be terminated while we were operating on it. We do not want
// to clobber the Terminated state with Running. Get the latest app state to check for this.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
if err == nil {
if freshApp.Status.OperationState != nil && freshApp.Status.OperationState.Phase == appv1.OperationTerminating {
state.Phase = appv1.OperationTerminating
state.Message = "operation is terminating"
// after this, we will get requeued to the workqueue, but next time the
// SyncAppState will operate in a Terminating phase, allowing the worker to perform
// cleanup (e.g. delete jobs, workflows, etc...)
}
}
}
ctrl.setOperationState(app, state)
if state.Phase.Completed() {
// if we just completed an operation, force a refresh so that UI will report up-to-date
// sync/health information
ctrl.forceAppRefresh(app.ObjectMeta.Name)
}
}
func (ctrl *ApplicationController) setOperationState(app *appv1.Application, state *appv1.OperationState) {
retryUntilSucceed(func() error {
if state.Phase == "" {
// expose any bugs where we neglect to set phase
panic("no phase was set")
}
if state.Phase.Completed() {
now := metav1.Now()
state.FinishedAt = &now
}
patch := map[string]interface{}{
"status": map[string]interface{}{
"operationState": state,
},
}
if state.Phase.Completed() {
// If operation is completed, clear the operation field to indicate no operation is
// in progress.
patch["operation"] = nil
}
if reflect.DeepEqual(app.Status.OperationState, state) {
log.Infof("No operation updates necessary to '%s'. Skipping patch", app.Name)
return nil
}
patchJSON, err := json.Marshal(patch)
if err != nil {
return err
}
appClient := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace)
_, err = appClient.Patch(app.Name, types.MergePatchType, patchJSON)
if err != nil {
return err
}
log.Infof("updated '%s' operation (phase: %s)", app.Name, state.Phase)
if state.Phase.Completed() {
eventInfo := argo.EventInfo{Reason: argo.EventReasonOperationCompleted}
var message string
if state.Phase.Successful() {
eventInfo.Type = v1.EventTypeNormal
message = "Operation succeeded"
} else {
eventInfo.Type = v1.EventTypeWarning
message = fmt.Sprintf("Operation failed: %v", state.Message)
}
ctrl.auditLogger.LogAppEvent(app, eventInfo, message)
}
return nil
}, "Update application operation state", context.Background(), updateOperationStateTimeout)
}
func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appRefreshQueue.Get()
if shutdown {
processNext = false
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appRefreshQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey.(string))
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
}
app, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
if !ctrl.needRefreshAppStatus(app, ctrl.statusRefreshTimeout) {
return
}
app = app.DeepCopy()
conditions, hasErrors := ctrl.refreshAppConditions(app)
if hasErrors {
comparisonResult := app.Status.ComparisonResult.DeepCopy()
comparisonResult.Status = appv1.ComparisonStatusUnknown
health := app.Status.Health.DeepCopy()
health.Status = appv1.HealthStatusUnknown
ctrl.updateAppStatus(app, comparisonResult, health, nil, conditions)
return
}
comparisonResult, manifestInfo, compConditions, err := ctrl.appStateManager.CompareAppState(app, "", nil)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{Type: appv1.ApplicationConditionComparisonError, Message: err.Error()})
} else {
conditions = append(conditions, compConditions...)
}
var parameters []*appv1.ComponentParameter
if manifestInfo != nil {
parameters = manifestInfo.Params
}
healthState, err := setApplicationHealth(ctrl.kubectl, comparisonResult)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{Type: appv1.ApplicationConditionComparisonError, Message: err.Error()})
}
syncErrCond := ctrl.autoSync(app, comparisonResult)
if syncErrCond != nil {
conditions = append(conditions, *syncErrCond)
}
ctrl.updateAppStatus(app, comparisonResult, healthState, parameters, conditions)
return
}
// needRefreshAppStatus answers if application status needs to be refreshed.
// Returns true if the application has never been compared, has changed, or the comparison result has expired.
func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application, statusRefreshTimeout time.Duration) bool {
logCtx := log.WithFields(log.Fields{"application": app.Name})
var reason string
expired := app.Status.ComparisonResult.ComparedAt.Add(statusRefreshTimeout).Before(time.Now().UTC())
if ctrl.isRefreshForced(app.Name) {
reason = "force refresh"
} else if app.Status.ComparisonResult.Status == appv1.ComparisonStatusUnknown && expired {
reason = "comparison status unknown"
} else if !app.Spec.Source.Equals(app.Status.ComparisonResult.ComparedTo) {
reason = "spec.source differs"
} else if expired {
reason = fmt.Sprintf("comparison expired. comparedAt: %v, expiry: %v", app.Status.ComparisonResult.ComparedAt, statusRefreshTimeout)
}
if reason != "" {
logCtx.Infof("Refreshing app status (%s)", reason)
return true
}
return false
}
func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application) ([]appv1.ApplicationCondition, bool) {
conditions := make([]appv1.ApplicationCondition, 0)
proj, err := argo.GetAppProject(&app.Spec, ctrl.applicationClientset, ctrl.namespace)
if err != nil {
if errors.IsNotFound(err) {
conditions = append(conditions, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("Application referencing project %s which does not exist", app.Spec.Project),
})
} else {
conditions = append(conditions, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionUnknownError,
Message: err.Error(),
})
}
} else {
specConditions, err := argo.GetSpecErrors(context.Background(), &app.Spec, proj, ctrl.repoClientset, ctrl.db)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionUnknownError,
Message: err.Error(),
})
} else {
conditions = append(conditions, specConditions...)
}
}
// List of condition types which have to be reevaluated by controller; all remaining conditions should stay as is.
reevaluateTypes := map[appv1.ApplicationConditionType]bool{
appv1.ApplicationConditionInvalidSpecError: true,
appv1.ApplicationConditionUnknownError: true,
appv1.ApplicationConditionComparisonError: true,
appv1.ApplicationConditionSharedResourceWarning: true,
appv1.ApplicationConditionSyncError: true,
}
appConditions := make([]appv1.ApplicationCondition, 0)
for i := 0; i < len(app.Status.Conditions); i++ {
condition := app.Status.Conditions[i]
if _, ok := reevaluateTypes[condition.Type]; !ok {
appConditions = append(appConditions, condition)
}
}
hasErrors := false
for i := range conditions {
condition := conditions[i]
appConditions = append(appConditions, condition)
if condition.IsError() {
hasErrors = true
}
}
return appConditions, hasErrors
}
// setApplicationHealth updates the health statuses of all resources included in the comparison
func setApplicationHealth(kubectl kube.Kubectl, comparisonResult *appv1.ComparisonResult) (*appv1.HealthStatus, error) {
var savedErr error
appHealth := appv1.HealthStatus{Status: appv1.HealthStatusHealthy}
if comparisonResult.Status == appv1.ComparisonStatusUnknown {
appHealth.Status = appv1.HealthStatusUnknown
}
for i, resource := range comparisonResult.Resources {
if resource.LiveState == "null" {
resource.Health = appv1.HealthStatus{Status: appv1.HealthStatusMissing}
} else {
var obj unstructured.Unstructured
err := json.Unmarshal([]byte(resource.LiveState), &obj)
if err != nil {
return nil, err
}
healthState, err := health.GetAppHealth(kubectl, &obj)
if err != nil && savedErr == nil {
savedErr = err
}
resource.Health = *healthState
}
comparisonResult.Resources[i] = resource
if health.IsWorse(appHealth.Status, resource.Health.Status) {
appHealth.Status = resource.Health.Status
}
}
return &appHealth, savedErr
}
// updateAppStatus persists updates to application status. It computes a merge patch and skips the update when nothing has changed.
func (ctrl *ApplicationController) updateAppStatus(
app *appv1.Application,
comparisonResult *appv1.ComparisonResult,
healthState *appv1.HealthStatus,
parameters []*appv1.ComponentParameter,
conditions []appv1.ApplicationCondition,
) {
logCtx := log.WithFields(log.Fields{"application": app.Name})
modifiedApp := app.DeepCopy()
if comparisonResult != nil {
modifiedApp.Status.ComparisonResult = *comparisonResult
if app.Status.ComparisonResult.Status != comparisonResult.Status {
message := fmt.Sprintf("Updated sync status: %s -> %s", app.Status.ComparisonResult.Status, comparisonResult.Status)
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message)
}
logCtx.Infof("Comparison result: prev: %s. current: %s", app.Status.ComparisonResult.Status, comparisonResult.Status)
}
if healthState != nil {
if modifiedApp.Status.Health.Status != healthState.Status {
message := fmt.Sprintf("Updated health status: %s -> %s", modifiedApp.Status.Health.Status, healthState.Status)
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message)
}
modifiedApp.Status.Health = *healthState
}
if parameters != nil {
modifiedApp.Status.Parameters = make([]appv1.ComponentParameter, len(parameters))
for i := range parameters {
modifiedApp.Status.Parameters[i] = *parameters[i]
}
}
if conditions != nil {
modifiedApp.Status.Conditions = conditions
}
origBytes, err := json.Marshal(app)
if err != nil {
logCtx.Errorf("Error updating (marshal orig app): %v", err)
return
}
modifiedBytes, err := json.Marshal(modifiedApp)
if err != nil {
logCtx.Errorf("Error updating (marshal modified app): %v", err)
return
}
patch, err := strategicpatch.CreateTwoWayMergePatch(origBytes, modifiedBytes, appv1.Application{})
if err != nil {
logCtx.Errorf("Error calculating patch for update: %v", err)
return
}
if string(patch) == "{}" {
logCtx.Infof("No status changes. Skipping patch")
return
}
appClient := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace)
_, err = appClient.Patch(app.Name, types.MergePatchType, patch)
if err != nil {
logCtx.Warnf("Error updating application: %v", err)
} else {
logCtx.Infof("Update successful")
}
}
// autoSync will initiate a sync operation for an application configured with automated sync
func (ctrl *ApplicationController) autoSync(app *appv1.Application, comparisonResult *appv1.ComparisonResult) *appv1.ApplicationCondition {
if app.Spec.SyncPolicy == nil || app.Spec.SyncPolicy.Automated == nil {
return nil
}
logCtx := log.WithFields(log.Fields{"application": app.Name})
if app.Operation != nil {
logCtx.Infof("Skipping auto-sync: another operation is in progress")
return nil
}
// Only perform auto-sync if we detect OutOfSync status. This is to prevent us from attempting
// a sync when application is already in a Synced or Unknown state
if comparisonResult.Status != appv1.ComparisonStatusOutOfSync {
logCtx.Infof("Skipping auto-sync: application status is %s", comparisonResult.Status)
return nil
}
desiredCommitSHA := comparisonResult.Revision
// It is possible for manifests to remain OutOfSync even after a sync/kubectl apply (e.g.
// auto-sync with pruning disabled). We need to ensure that we do not keep Syncing an
// application in an infinite loop. To detect this, we only attempt the Sync if the revision
// and parameter overrides are different from our most recent sync operation.
if alreadyAttemptedSync(app, desiredCommitSHA) {
if app.Status.OperationState.Phase != appv1.OperationSucceeded {
logCtx.Warnf("Skipping auto-sync: failed previous sync attempt to %s", desiredCommitSHA)
message := fmt.Sprintf("Failed sync attempt to %s: %s", desiredCommitSHA, app.Status.OperationState.Message)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: message}
}
logCtx.Infof("Skipping auto-sync: most recent sync already to %s", desiredCommitSHA)
return nil
}
op := appv1.Operation{
Sync: &appv1.SyncOperation{
Revision: desiredCommitSHA,
Prune: app.Spec.SyncPolicy.Automated.Prune,
ParameterOverrides: app.Spec.Source.ComponentParameterOverrides,
},
}
appIf := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace)
_, err := argo.SetAppOperation(context.Background(), appIf, ctrl.auditLogger, app.Name, &op)
if err != nil {
logCtx.Errorf("Failed to initiate auto-sync to %s: %v", desiredCommitSHA, err)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: err.Error()}
}
message := fmt.Sprintf("Initiated automated sync to '%s'", desiredCommitSHA)
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonOperationStarted, Type: v1.EventTypeNormal}, message)
logCtx.Info(message)
return nil
}
// alreadyAttemptedSync returns whether or not the most recent sync was performed against the
// commitSHA and with the same parameter overrides which are currently set in the app
func alreadyAttemptedSync(app *appv1.Application, commitSHA string) bool {
if app.Status.OperationState == nil || app.Status.OperationState.Operation.Sync == nil || app.Status.OperationState.SyncResult == nil {
return false
}
if app.Status.OperationState.SyncResult.Revision != commitSHA {
return false
}
if !reflect.DeepEqual(appv1.ParameterOverrides(app.Spec.Source.ComponentParameterOverrides), app.Status.OperationState.Operation.Sync.ParameterOverrides) {
return false
}
return true
}
func (ctrl *ApplicationController) newApplicationInformer() cache.SharedIndexInformer {
appInformerFactory := appinformers.NewFilteredSharedInformerFactory(
ctrl.applicationClientset,
ctrl.statusRefreshTimeout,
ctrl.namespace,
func(options *metav1.ListOptions) {},
)
informer := appInformerFactory.Argoproj().V1alpha1().Applications().Informer()
informer.AddEventHandler(
cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(obj)
if err == nil {
ctrl.appRefreshQueue.Add(key)
ctrl.appOperationQueue.Add(key)
}
},
UpdateFunc: func(old, new interface{}) {
key, err := cache.MetaNamespaceKeyFunc(new)
if err != nil {
return
}
oldApp, oldOK := old.(*appv1.Application)
newApp, newOK := new.(*appv1.Application)
if oldOK && newOK {
if toggledAutomatedSync(oldApp, newApp) {
log.WithField("application", newApp.Name).Info("Enabled automated sync")
ctrl.forceAppRefresh(newApp.Name)
}
}
ctrl.appRefreshQueue.Add(key)
ctrl.appOperationQueue.Add(key)
},
DeleteFunc: func(obj interface{}) {
// IndexerInformer uses a delta queue, therefore for deletes we have to use this
// key function.
key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
if err == nil {
ctrl.appRefreshQueue.Add(key)
}
},
},
)
return informer
}
func isOperationInProgress(app *appv1.Application) bool {
return app.Status.OperationState != nil && !app.Status.OperationState.Phase.Completed()
}
// toggledAutomatedSync tests if an app went from auto-sync disabled to enabled.
// if it was toggled to be enabled, the informer handler will force a refresh
func toggledAutomatedSync(old *appv1.Application, new *appv1.Application) bool {
if new.Spec.SyncPolicy == nil || new.Spec.SyncPolicy.Automated == nil {
return false
}
// auto-sync is enabled. check if it was previously disabled
if old.Spec.SyncPolicy == nil || old.Spec.SyncPolicy.Automated == nil {
return true
}
// nothing changed
return false
}
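One pattern shared by the old controller above and the new one below is retryUntilSucceed, which loops an action until it succeeds or its context is cancelled. A tiny hedged usage sketch follows; the connectToRepoServer action is hypothetical and not from this diff, and "context" and "time" are assumed to be imported alongside the controller code.

// exampleRetry shows the helper's contract: retry every 10 seconds, stop
// retrying once the passed-in context is done.
func exampleRetry(ctx context.Context) {
	retryUntilSucceed(func() error {
		return connectToRepoServer() // hypothetical flaky action standing in for real work
	}, "connect to repo server", ctx, 10*time.Second)
}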

View File

@@ -1,231 +0,0 @@
package controller
import (
"testing"
"time"
"github.com/ghodss/yaml"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/fake"
reposerver "github.com/argoproj/argo-cd/reposerver/mocks"
"github.com/stretchr/testify/assert"
)
func newFakeController(apps ...runtime.Object) *ApplicationController {
kubeClientset := fake.NewSimpleClientset()
appClientset := appclientset.NewSimpleClientset(apps...)
repoClientset := reposerver.Clientset{}
return NewApplicationController(
"argocd",
kubeClientset,
appClientset,
&repoClientset,
time.Minute,
)
}
var fakeApp = `
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-app
namespace: argocd
spec:
destination:
namespace: dummy-namespace
server: https://localhost:6443
project: default
source:
path: some/path
repoURL: https://github.com/argoproj/argocd-example-apps.git
syncPolicy:
automated: {}
status:
operationState:
finishedAt: 2018-09-21T23:50:29Z
message: successfully synced
operation:
sync:
revision: HEAD
phase: Succeeded
startedAt: 2018-09-21T23:50:25Z
syncResult:
resources:
- kind: RoleBinding
message: |-
rolebinding.rbac.authorization.k8s.io/always-outofsync reconciled
rolebinding.rbac.authorization.k8s.io/always-outofsync configured
name: always-outofsync
namespace: default
status: Synced
revision: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
`
func newFakeApp() *argoappv1.Application {
var app argoappv1.Application
err := yaml.Unmarshal([]byte(fakeApp), &app)
if err != nil {
panic(err)
}
return &app
}
func TestAutoSync(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(app)
compRes := argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.NotNil(t, app.Operation)
assert.NotNil(t, app.Operation.Sync)
assert.False(t, app.Operation.Sync.Prune)
}
func TestSkipAutoSync(t *testing.T) {
// Verify we skip when we previously synced to it in our most recent history
// Set current to 'aaaaa', desired to 'aaaa' and mark system OutOfSync
app := newFakeApp()
ctrl := newFakeController(app)
compRes := argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
cond := ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
// Verify we skip when we are already Synced (even if revision is different)
app = newFakeApp()
ctrl = newFakeController(app)
compRes = argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusSynced,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond = ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
// Verify we skip when auto-sync is disabled
app = newFakeApp()
app.Spec.SyncPolicy = nil
ctrl = newFakeController(app)
compRes = argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond = ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
// Verify we skip when previous sync attempt failed and return error condition
// Set current to 'aaaaa', desired to 'bbbbb' and add 'bbbbb' to failure history
app = newFakeApp()
app.Status.OperationState = &argoappv1.OperationState{
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{},
},
Phase: argoappv1.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
},
}
ctrl = newFakeController(app)
compRes = argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond = ctrl.autoSync(app, &compRes)
assert.NotNil(t, cond)
app, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
}
// TestAutoSyncIndicateError verifies we skip auto-sync and return error condition if previous sync failed
func TestAutoSyncIndicateError(t *testing.T) {
app := newFakeApp()
app.Spec.Source.ComponentParameterOverrides = []argoappv1.ComponentParameter{
{
Name: "a",
Value: "1",
},
}
ctrl := newFakeController(app)
compRes := argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
app.Status.OperationState = &argoappv1.OperationState{
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{
ParameterOverrides: argoappv1.ParameterOverrides{
{
Name: "a",
Value: "1",
},
},
},
},
Phase: argoappv1.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
}
cond := ctrl.autoSync(app, &compRes)
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
}
// TestAutoSyncParameterOverrides verifies we auto-sync if revision is same but parameter overrides are different
func TestAutoSyncParameterOverrides(t *testing.T) {
app := newFakeApp()
app.Spec.Source.ComponentParameterOverrides = []argoappv1.ComponentParameter{
{
Name: "a",
Value: "1",
},
}
ctrl := newFakeController(app)
compRes := argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
app.Status.OperationState = &argoappv1.OperationState{
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{
ParameterOverrides: argoappv1.ParameterOverrides{
{
Name: "a",
Value: "2", // this value changed
},
},
},
},
Phase: argoappv1.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
}
cond := ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.NotNil(t, app.Operation)
}
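The force-refresh bookkeeping exercised indirectly above (forceAppRefresh sets a flag that isRefreshForced consumes exactly once) lends itself to the same fake-controller helpers; a hedged sketch of an additional test, not part of this change:

func TestForceRefreshIsConsumedOnce(t *testing.T) {
	app := newFakeApp()
	ctrl := newFakeController(app)
	ctrl.forceAppRefresh(app.Name)
	// The flag is set ...
	assert.True(t, ctrl.isRefreshForced(app.Name))
	// ... and cleared by the read above, so a second check reports false.
	assert.False(t, ctrl.isRefreshForced(app.Name))
}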

controller/controller.go (new file, 560 lines)
View File

@@ -0,0 +1,560 @@
package controller
import (
"context"
"encoding/json"
"fmt"
"runtime/debug"
"sync"
"time"
"github.com/argoproj/argo-cd/common"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
appinformers "github.com/argoproj/argo-cd/pkg/client/informers/externalversions"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/kube"
log "github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/selection"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
)
const (
watchResourcesRetryTimeout = 10 * time.Second
updateOperationStateTimeout = 1 * time.Second
)
// ApplicationController is the controller for application resources.
type ApplicationController struct {
namespace string
kubeClientset kubernetes.Interface
applicationClientset appclientset.Interface
appRefreshQueue workqueue.RateLimitingInterface
appOperationQueue workqueue.RateLimitingInterface
appInformer cache.SharedIndexInformer
appStateManager AppStateManager
appHealthManager AppHealthManager
statusRefreshTimeout time.Duration
db db.ArgoDB
forceRefreshApps map[string]bool
forceRefreshAppsMutex *sync.Mutex
}
type ApplicationControllerConfig struct {
InstanceID string
Namespace string
}
// NewApplicationController creates new instance of ApplicationController.
func NewApplicationController(
namespace string,
kubeClientset kubernetes.Interface,
applicationClientset appclientset.Interface,
db db.ArgoDB,
appStateManager AppStateManager,
appHealthManager AppHealthManager,
appResyncPeriod time.Duration,
config *ApplicationControllerConfig,
) *ApplicationController {
appRefreshQueue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
appOperationQueue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
return &ApplicationController{
namespace: namespace,
kubeClientset: kubeClientset,
applicationClientset: applicationClientset,
appRefreshQueue: appRefreshQueue,
appOperationQueue: appOperationQueue,
appStateManager: appStateManager,
appHealthManager: appHealthManager,
appInformer: newApplicationInformer(applicationClientset, appRefreshQueue, appOperationQueue, appResyncPeriod, config),
db: db,
statusRefreshTimeout: appResyncPeriod,
forceRefreshApps: make(map[string]bool),
forceRefreshAppsMutex: &sync.Mutex{},
}
}
// Run starts the Application CRD controller.
func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int, operationProcessors int) {
defer runtime.HandleCrash()
defer ctrl.appRefreshQueue.ShutDown()
go ctrl.appInformer.Run(ctx.Done())
go ctrl.watchAppsResources()
if !cache.WaitForCacheSync(ctx.Done(), ctrl.appInformer.HasSynced) {
log.Error("Timed out waiting for caches to sync")
return
}
for i := 0; i < statusProcessors; i++ {
go wait.Until(func() {
for ctrl.processAppRefreshQueueItem() {
}
}, time.Second, ctx.Done())
}
for i := 0; i < operationProcessors; i++ {
go wait.Until(func() {
for ctrl.processAppOperationQueueItem() {
}
}, time.Second, ctx.Done())
}
<-ctx.Done()
}
func (ctrl *ApplicationController) forceAppRefresh(appName string) {
ctrl.forceRefreshAppsMutex.Lock()
defer ctrl.forceRefreshAppsMutex.Unlock()
ctrl.forceRefreshApps[appName] = true
}
func (ctrl *ApplicationController) isRefreshForced(appName string) bool {
ctrl.forceRefreshAppsMutex.Lock()
defer ctrl.forceRefreshAppsMutex.Unlock()
_, ok := ctrl.forceRefreshApps[appName]
if ok {
delete(ctrl.forceRefreshApps, appName)
}
return ok
}
// watchClusterResources watches for resource changes annotated with the application label on the specified cluster and schedules a corresponding app refresh.
func (ctrl *ApplicationController) watchClusterResources(ctx context.Context, item appv1.Cluster) {
config := item.RESTConfig()
retryUntilSucceed(func() error {
ch, err := kube.WatchResourcesWithLabel(ctx, config, "", common.LabelApplicationName)
if err != nil {
return err
}
for event := range ch {
eventObj := event.Object.(*unstructured.Unstructured)
objLabels := eventObj.GetLabels()
if objLabels == nil {
objLabels = make(map[string]string)
}
if appName, ok := objLabels[common.LabelApplicationName]; ok {
ctrl.forceAppRefresh(appName)
ctrl.appRefreshQueue.Add(ctrl.namespace + "/" + appName)
}
}
return fmt.Errorf("resource updates channel has closed")
}, fmt.Sprintf("watch app resources on %s", config.Host), ctx, watchResourcesRetryTimeout)
}
// watchAppsResources watches for resource changes annotated with the application label on all registered clusters and schedules a corresponding app refresh.
func (ctrl *ApplicationController) watchAppsResources() {
watchingClusters := make(map[string]context.CancelFunc)
retryUntilSucceed(func() error {
return ctrl.db.WatchClusters(context.Background(), func(event *db.ClusterEvent) {
cancel, ok := watchingClusters[event.Cluster.Server]
if event.Type == watch.Deleted && ok {
cancel()
delete(watchingClusters, event.Cluster.Server)
} else if event.Type != watch.Deleted && !ok {
ctx, cancel := context.WithCancel(context.Background())
watchingClusters[event.Cluster.Server] = cancel
go ctrl.watchClusterResources(ctx, *event.Cluster)
}
})
}, "watch clusters", context.Background(), watchResourcesRetryTimeout)
<-context.Background().Done()
}
// retryUntilSucceed keeps retrying the given action with the specified timeout until the action succeeds or the specified context is done.
func retryUntilSucceed(action func() error, desc string, ctx context.Context, timeout time.Duration) {
ctxCompleted := false
go func() {
select {
case <-ctx.Done():
ctxCompleted = true
}
}()
for {
err := action()
if err == nil {
return
}
if err != nil {
if ctxCompleted {
log.Infof("Stop retrying %s", desc)
return
} else {
log.Warnf("Failed to %s: %v, retrying in %v", desc, err, timeout)
time.Sleep(timeout)
}
}
}
}
func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appOperationQueue.Get()
if shutdown {
processNext = false
return
} else {
processNext = true
}
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appOperationQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey.(string))
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
}
app, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
if app.Operation != nil {
ctrl.processRequestedAppOperation(app)
} else if app.DeletionTimestamp != nil && app.CascadedDeletion() {
ctrl.finalizeApplicationDeletion(app)
}
return
}
func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Application) {
log.Infof("Deleting resources for application %s", app.Name)
// Get refreshed application info, since informer app copy might be stale
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(app.Name, metav1.GetOptions{})
if err != nil {
if !errors.IsNotFound(err) {
log.Errorf("Unable to get refreshed application info prior deleting resources: %v", err)
}
return
}
clst, err := ctrl.db.GetCluster(context.Background(), app.Spec.Destination.Server)
if err == nil {
config := clst.RESTConfig()
err = kube.DeleteResourceWithLabel(config, app.Spec.Destination.Namespace, common.LabelApplicationName, app.Name)
if err == nil {
app.SetCascadedDeletion(false)
var patch []byte
patch, err = json.Marshal(map[string]interface{}{
"metadata": map[string]interface{}{
"finalizers": app.Finalizers,
},
})
if err == nil {
_, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Patch(app.Name, types.MergePatchType, patch)
}
}
}
if err != nil {
log.Errorf("Unable to delete application resources: %v", err)
ctrl.setAppCondition(app, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionDeletionError,
Message: err.Error(),
})
} else {
log.Infof("Successfully deleted resources for application %s", app.Name)
}
}
func (ctrl *ApplicationController) setAppCondition(app *appv1.Application, condition appv1.ApplicationCondition) {
index := -1
for i, existing := range app.Status.Conditions {
if existing.Type == condition.Type {
index = i
break
}
}
if index > -1 {
app.Status.Conditions[index] = condition
} else {
app.Status.Conditions = append(app.Status.Conditions, condition)
}
var patch []byte
patch, err := json.Marshal(map[string]interface{}{
"status": map[string]interface{}{
"conditions": app.Status.Conditions,
},
})
if err == nil {
_, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Patch(app.Name, types.MergePatchType, patch)
}
if err != nil {
log.Errorf("Unable to set application condition: %v", err)
}
}
func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Application) {
state := appv1.OperationState{Phase: appv1.OperationRunning, Operation: *app.Operation, StartedAt: metav1.Now()}
// Recover from any unexpected panics and automatically set the status to be failed
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
// TODO: consider adding Error OperationStatus in addition to Failed
state.Phase = appv1.OperationError
if rerr, ok := r.(error); ok {
state.Message = rerr.Error()
} else {
state.Message = fmt.Sprintf("%v", r)
}
ctrl.setOperationState(app.Name, state, app.Operation)
}
}()
if app.Status.OperationState != nil && !app.Status.OperationState.Phase.Completed() {
// If we get here, we are about to process an operation but we notice it is already Running.
// We need to detect if the controller crashed before completing the operation, or if the
// app object we pulled off the informer is simply stale and doesn't reflect the fact
// that the operation is completed. We don't want to perform the operation again. To detect
// this, always retrieve the latest version to ensure it is not stale.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
if err != nil {
log.Errorf("Failed to retrieve latest application state: %v", err)
return
}
if freshApp.Status.OperationState == nil || freshApp.Status.OperationState.Phase.Completed() {
log.Infof("Skipping operation on stale application state (%s)", app.ObjectMeta.Name)
return
}
log.Warnf("Found interrupted application operation %s %v", app.ObjectMeta.Name, app.Status.OperationState)
} else {
ctrl.setOperationState(app.Name, state, app.Operation)
}
if app.Operation.Sync != nil {
opRes := ctrl.appStateManager.SyncAppState(app, app.Operation.Sync.Revision, nil, app.Operation.Sync.DryRun, app.Operation.Sync.Prune)
state.Phase = opRes.Phase
state.Message = opRes.Message
state.SyncResult = opRes.SyncResult
} else if app.Operation.Rollback != nil {
var deploymentInfo *appv1.DeploymentInfo
for _, info := range app.Status.History {
if info.ID == app.Operation.Rollback.ID {
deploymentInfo = &info
break
}
}
if deploymentInfo == nil {
state.Phase = appv1.OperationFailed
state.Message = fmt.Sprintf("application %s does not have deployment with id %v", app.Name, app.Operation.Rollback.ID)
} else {
opRes := ctrl.appStateManager.SyncAppState(app, deploymentInfo.Revision, &deploymentInfo.ComponentParameterOverrides, app.Operation.Rollback.DryRun, app.Operation.Rollback.Prune)
state.Phase = opRes.Phase
state.Message = opRes.Message
state.RollbackResult = opRes.SyncResult
}
} else {
state.Phase = appv1.OperationFailed
state.Message = "Invalid operation request"
}
ctrl.setOperationState(app.Name, state, app.Operation)
}
func (ctrl *ApplicationController) setOperationState(appName string, state appv1.OperationState, operation *appv1.Operation) {
retryUntilSucceed(func() error {
var inProgressOpValue *appv1.Operation
if state.Phase == "" {
// expose any bugs where we neglect to set phase
panic("no phase was set")
}
if !state.Phase.Completed() {
// If operation is still running, we populate the app.operation field, which prevents
// any other operation from running at the same time. Otherwise, it is cleared by setting
// it to nil which indicates no operation is in progress.
inProgressOpValue = operation
} else {
nowTime := metav1.Now()
state.FinishedAt = &nowTime
}
patch, err := json.Marshal(map[string]interface{}{
"status": map[string]interface{}{
"operationState": state,
},
"operation": inProgressOpValue,
})
if err != nil {
return err
}
appClient := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace)
_, err = appClient.Patch(appName, types.MergePatchType, patch)
if err != nil {
return err
}
log.Infof("updated '%s' operation (phase: %s)", appName, state.Phase)
return nil
}, "Update application operation state", context.Background(), updateOperationStateTimeout)
}
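// processAppRefreshQueueItem pops one application key off the refresh queue and, if the app was force
// refreshed or its status is stale, recomputes the comparison result, parameters, and health, then
// persists them in the application status. It returns whether the next queue item should be processed.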
func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appRefreshQueue.Get()
if shutdown {
processNext = false
return
} else {
processNext = true
}
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appRefreshQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey.(string))
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
}
app, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
isForceRefreshed := ctrl.isRefreshForced(app.Name)
if isForceRefreshed || app.NeedRefreshAppStatus(ctrl.statusRefreshTimeout) {
log.Infof("Refreshing application '%s' status (force refreshed: %v)", app.Name, isForceRefreshed)
comparisonResult, parameters, healthState, err := ctrl.tryRefreshAppStatus(app.DeepCopy())
if err != nil {
comparisonResult = &appv1.ComparisonResult{
Status: appv1.ComparisonStatusError,
Error: fmt.Sprintf("Failed to get application status for application '%s': %v", app.Name, err),
ComparedTo: app.Spec.Source,
ComparedAt: metav1.Time{Time: time.Now().UTC()},
}
parameters = nil
healthState = &appv1.HealthStatus{Status: appv1.HealthStatusUnknown}
}
ctrl.updateAppStatus(app.Name, app.Namespace, comparisonResult, parameters, *healthState)
}
return
}
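// tryRefreshAppStatus compares the application's target state (from git) against the live cluster state
// and returns the resulting comparison result, component parameters, and aggregated health status.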
func (ctrl *ApplicationController) tryRefreshAppStatus(app *appv1.Application) (*appv1.ComparisonResult, *[]appv1.ComponentParameter, *appv1.HealthStatus, error) {
comparisonResult, manifestInfo, err := ctrl.appStateManager.CompareAppState(app)
if err != nil {
return nil, nil, nil, err
}
log.Infof("App %s comparison result: prev: %s. current: %s", app.Name, app.Status.ComparisonResult.Status, comparisonResult.Status)
parameters := make([]appv1.ComponentParameter, len(manifestInfo.Params))
for i := range manifestInfo.Params {
parameters[i] = *manifestInfo.Params[i]
}
healthState, err := ctrl.appHealthManager.GetAppHealth(app.Spec.Destination.Server, app.Spec.Destination.Namespace, comparisonResult)
if err != nil {
return nil, nil, nil, err
}
return comparisonResult, &parameters, healthState, nil
}
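// updateAppStatus persists the comparison result, parameters, and health state into the application's
// status field using a merge patch.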
func (ctrl *ApplicationController) updateAppStatus(
appName string, namespace string, comparisonResult *appv1.ComparisonResult, parameters *[]appv1.ComponentParameter, healthState appv1.HealthStatus) {
statusPatch := make(map[string]interface{})
statusPatch["comparisonResult"] = comparisonResult
statusPatch["parameters"] = parameters
statusPatch["health"] = healthState
patch, err := json.Marshal(map[string]interface{}{
"status": statusPatch,
})
if err == nil {
appClient := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(namespace)
_, err = appClient.Patch(appName, types.MergePatchType, patch)
}
if err != nil {
log.Warnf("Error updating application: %v", err)
} else {
log.Info("Application update successful")
}
}
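// newApplicationInformer returns a shared informer which watches Applications matching the controller's
// instance ID label, enqueueing add/update events onto both the refresh and operation queues and delete
// events onto the refresh queue only.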
func newApplicationInformer(
appClientset appclientset.Interface,
appQueue workqueue.RateLimitingInterface,
appOperationQueue workqueue.RateLimitingInterface,
appResyncPeriod time.Duration,
config *ApplicationControllerConfig) cache.SharedIndexInformer {
appInformerFactory := appinformers.NewFilteredSharedInformerFactory(
appClientset,
appResyncPeriod,
config.Namespace,
func(options *metav1.ListOptions) {
var instanceIDReq *labels.Requirement
var err error
if config.InstanceID != "" {
instanceIDReq, err = labels.NewRequirement(common.LabelKeyApplicationControllerInstanceID, selection.Equals, []string{config.InstanceID})
} else {
instanceIDReq, err = labels.NewRequirement(common.LabelKeyApplicationControllerInstanceID, selection.DoesNotExist, nil)
}
if err != nil {
panic(err)
}
options.FieldSelector = fields.Everything().String()
labelSelector := labels.NewSelector().Add(*instanceIDReq)
options.LabelSelector = labelSelector.String()
},
)
informer := appInformerFactory.Argoproj().V1alpha1().Applications().Informer()
informer.AddEventHandler(
cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(obj)
if err == nil {
appQueue.Add(key)
appOperationQueue.Add(key)
}
},
UpdateFunc: func(old, new interface{}) {
key, err := cache.MetaNamespaceKeyFunc(new)
if err == nil {
appQueue.Add(key)
appOperationQueue.Add(key)
}
},
DeleteFunc: func(obj interface{}) {
// IndexerInformer uses a delta queue, therefore for deletes we have to use this
// key function.
key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
if err == nil {
appQueue.Add(key)
}
},
},
)
return informer
}

161 controller/health.go Normal file
View File

@@ -0,0 +1,161 @@
package controller
import (
"context"
"encoding/json"
"fmt"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/kube"
"k8s.io/api/apps/v1"
coreV1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
const (
maxHistoryCnt = 5
)
type AppHealthManager interface {
GetAppHealth(server string, namespace string, comparisonResult *appv1.ComparisonResult) (*appv1.HealthStatus, error)
}
type kubeAppHealthManager struct {
db db.ArgoDB
namespace string
}
func NewAppHealthManager(db db.ArgoDB, namespace string) AppHealthManager {
return &kubeAppHealthManager{db: db, namespace: namespace}
}
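// getServiceHealth reports a Service as Progressing while a LoadBalancer ingress is still being
// provisioned, and Healthy otherwise.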
func (ctrl *kubeAppHealthManager) getServiceHealth(config *rest.Config, namespace string, name string) (*appv1.HealthStatus, error) {
clientSet, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, err
}
service, err := clientSet.CoreV1().Services(namespace).Get(name, metav1.GetOptions{})
if err != nil {
return nil, err
}
health := appv1.HealthStatus{Status: appv1.HealthStatusHealthy}
if service.Spec.Type == coreV1.ServiceTypeLoadBalancer {
health.Status = appv1.HealthStatusProgressing
for _, ingress := range service.Status.LoadBalancer.Ingress {
if ingress.Hostname != "" || ingress.IP != "" {
health.Status = appv1.HealthStatusHealthy
break
}
}
}
return &health, nil
}
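// getDeploymentHealth reports a Deployment as Degraded when its progress deadline has been exceeded,
// Progressing while replicas are still being updated or becoming available (or the observed generation
// lags the spec), and Healthy once the rollout has completed.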
func (ctrl *kubeAppHealthManager) getDeploymentHealth(config *rest.Config, namespace string, name string) (*appv1.HealthStatus, error) {
clientSet, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, err
}
deployment, err := clientSet.AppsV1().Deployments(namespace).Get(name, metav1.GetOptions{})
if err != nil {
return nil, err
}
if deployment.Generation <= deployment.Status.ObservedGeneration {
cond := getDeploymentCondition(deployment.Status, v1.DeploymentProgressing)
if cond != nil && cond.Reason == "ProgressDeadlineExceeded" {
return &appv1.HealthStatus{
Status: appv1.HealthStatusDegraded,
StatusDetails: fmt.Sprintf("Deployment %q exceeded its progress deadline", name),
}, nil
} else if deployment.Spec.Replicas != nil && deployment.Status.UpdatedReplicas < *deployment.Spec.Replicas {
return &appv1.HealthStatus{
Status: appv1.HealthStatusProgressing,
StatusDetails: fmt.Sprintf("Waiting for rollout to finish: %d out of %d new replicas have been updated...\n", deployment.Status.UpdatedReplicas, *deployment.Spec.Replicas),
}, nil
} else if deployment.Status.Replicas > deployment.Status.UpdatedReplicas {
return &appv1.HealthStatus{
Status: appv1.HealthStatusProgressing,
StatusDetails: fmt.Sprintf("Waiting for rollout to finish: %d old replicas are pending termination...\n", deployment.Status.Replicas-deployment.Status.UpdatedReplicas),
}, nil
} else if deployment.Status.AvailableReplicas < deployment.Status.UpdatedReplicas {
return &appv1.HealthStatus{
Status: appv1.HealthStatusProgressing,
StatusDetails: fmt.Sprintf("Waiting for rollout to finish: %d of %d updated replicas are available...\n", deployment.Status.AvailableReplicas, deployment.Status.UpdatedReplicas),
}, nil
}
} else {
return &appv1.HealthStatus{
Status: appv1.HealthStatusProgressing,
StatusDetails: "Waiting for rollout to finish: observed deployment generation less then desired generation",
}, nil
}
return &appv1.HealthStatus{
Status: appv1.HealthStatusHealthy,
}, nil
}
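// getDeploymentCondition returns the deployment condition of the requested type, or nil if not present.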
func getDeploymentCondition(status v1.DeploymentStatus, condType v1.DeploymentConditionType) *v1.DeploymentCondition {
for i := range status.Conditions {
c := status.Conditions[i]
if c.Type == condType {
return &c
}
}
return nil
}
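// GetAppHealth assesses the health of each resource in the comparison result and aggregates the worst
// status (Degraded over Progressing over Healthy) into the overall application health.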
func (ctrl *kubeAppHealthManager) GetAppHealth(server string, namespace string, comparisonResult *appv1.ComparisonResult) (*appv1.HealthStatus, error) {
clst, err := ctrl.db.GetCluster(context.Background(), server)
if err != nil {
return nil, err
}
restConfig := clst.RESTConfig()
appHealth := appv1.HealthStatus{Status: appv1.HealthStatusHealthy}
for i := range comparisonResult.Resources {
resource := comparisonResult.Resources[i]
if resource.LiveState == "null" {
resource.Health = appv1.HealthStatus{Status: appv1.HealthStatusUnknown}
} else {
var obj unstructured.Unstructured
err := json.Unmarshal([]byte(resource.LiveState), &obj)
if err != nil {
return nil, err
}
switch obj.GetKind() {
case kube.DeploymentKind:
state, err := ctrl.getDeploymentHealth(restConfig, namespace, obj.GetName())
if err != nil {
return nil, err
}
resource.Health = *state
case kube.ServiceKind:
state, err := ctrl.getServiceHealth(restConfig, namespace, obj.GetName())
if err != nil {
return nil, err
}
resource.Health = *state
default:
resource.Health = appv1.HealthStatus{Status: appv1.HealthStatusHealthy}
}
if resource.Health.Status == appv1.HealthStatusProgressing {
if appHealth.Status == appv1.HealthStatusHealthy {
appHealth.Status = appv1.HealthStatusProgressing
}
} else if resource.Health.Status == appv1.HealthStatusDegraded {
if appHealth.Status == appv1.HealthStatusHealthy || appHealth.Status == appv1.HealthStatusProgressing {
appHealth.Status = appv1.HealthStatusDegraded
}
}
}
comparisonResult.Resources[i] = resource
}
return &appHealth, nil
}

View File

@@ -1,195 +0,0 @@
package controller
import (
"context"
"encoding/json"
"runtime/debug"
"time"
log "github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/selection"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/git"
)
type SecretController struct {
kubeClient kubernetes.Interface
secretQueue workqueue.RateLimitingInterface
secretInformer cache.SharedIndexInformer
repoClientset reposerver.Clientset
namespace string
}
func (ctrl *SecretController) Run(ctx context.Context) {
go ctrl.secretInformer.Run(ctx.Done())
if !cache.WaitForCacheSync(ctx.Done(), ctrl.secretInformer.HasSynced) {
log.Error("Timed out waiting for caches to sync")
return
}
go wait.Until(func() {
for ctrl.processSecret() {
}
}, time.Second, ctx.Done())
}
func (ctrl *SecretController) processSecret() (processNext bool) {
secretKey, shutdown := ctrl.secretQueue.Get()
if shutdown {
processNext = false
return
} else {
processNext = true
}
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.secretQueue.Done(secretKey)
}()
obj, exists, err := ctrl.secretInformer.GetIndexer().GetByKey(secretKey.(string))
if err != nil {
log.Errorf("Failed to get secret '%s' from informer index: %+v", secretKey, err)
return
}
if !exists {
// This happens after secret was deleted, but the work queue still had an entry for it.
return
}
secret, ok := obj.(*corev1.Secret)
if !ok {
log.Warnf("Key '%s' in index is not an secret", secretKey)
return
}
if secret.Labels[common.LabelKeySecretType] == common.SecretTypeCluster {
cluster := db.SecretToCluster(secret)
ctrl.updateState(secret, ctrl.getClusterState(cluster))
} else if secret.Labels[common.LabelKeySecretType] == common.SecretTypeRepository {
repo := db.SecretToRepo(secret)
ctrl.updateState(secret, ctrl.getRepoConnectionState(repo))
}
return
}
func (ctrl *SecretController) getRepoConnectionState(repo *v1alpha1.Repository) v1alpha1.ConnectionState {
state := v1alpha1.ConnectionState{
ModifiedAt: repo.ConnectionState.ModifiedAt,
Status: v1alpha1.ConnectionStatusUnknown,
}
err := git.TestRepo(repo.Repo, repo.Username, repo.Password, repo.SSHPrivateKey)
if err == nil {
state.Status = v1alpha1.ConnectionStatusSuccessful
} else {
state.Status = v1alpha1.ConnectionStatusFailed
state.Message = err.Error()
}
return state
}
func (ctrl *SecretController) getClusterState(cluster *v1alpha1.Cluster) v1alpha1.ConnectionState {
state := v1alpha1.ConnectionState{
ModifiedAt: cluster.ConnectionState.ModifiedAt,
Status: v1alpha1.ConnectionStatusUnknown,
}
kubeClientset, err := kubernetes.NewForConfig(cluster.RESTConfig())
if err == nil {
_, err = kubeClientset.Discovery().ServerVersion()
}
if err == nil {
state.Status = v1alpha1.ConnectionStatusSuccessful
} else {
state.Status = v1alpha1.ConnectionStatusFailed
state.Message = err.Error()
}
return state
}
func (ctrl *SecretController) updateState(secret *corev1.Secret, state v1alpha1.ConnectionState) {
annotationsPatch := make(map[string]string)
for key, value := range db.AnnotationsFromConnectionState(&state) {
if secret.Annotations[key] != value {
annotationsPatch[key] = value
}
}
if len(annotationsPatch) > 0 {
annotationsPatch[common.AnnotationConnectionModifiedAt] = metav1.Now().Format(time.RFC3339)
patchData, err := json.Marshal(map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": annotationsPatch,
},
})
if err != nil {
log.Warnf("Unable to prepare secret state annotation patch: %v", err)
} else {
_, err := ctrl.kubeClient.CoreV1().Secrets(secret.Namespace).Patch(secret.Name, types.MergePatchType, patchData)
if err != nil {
log.Warnf("Unable to patch secret state annotation: %v", err)
}
}
}
}
func newSecretInformer(client kubernetes.Interface, resyncPeriod time.Duration, namespace string, secretQueue workqueue.RateLimitingInterface) cache.SharedIndexInformer {
informerFactory := informers.NewFilteredSharedInformerFactory(
client,
resyncPeriod,
namespace,
func(options *metav1.ListOptions) {
var req *labels.Requirement
req, err := labels.NewRequirement(common.LabelKeySecretType, selection.In, []string{common.SecretTypeCluster, common.SecretTypeRepository})
if err != nil {
panic(err)
}
options.FieldSelector = fields.Everything().String()
labelSelector := labels.NewSelector().Add(*req)
options.LabelSelector = labelSelector.String()
},
)
informer := informerFactory.Core().V1().Secrets().Informer()
informer.AddEventHandler(
cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(obj)
if err == nil {
secretQueue.Add(key)
}
},
UpdateFunc: func(old, new interface{}) {
key, err := cache.MetaNamespaceKeyFunc(new)
if err == nil {
secretQueue.Add(key)
}
},
},
)
return informer
}
func NewSecretController(kubeClient kubernetes.Interface, repoClientset reposerver.Clientset, resyncPeriod time.Duration, namespace string) *SecretController {
secretQueue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
return &SecretController{
kubeClient: kubeClient,
secretQueue: secretQueue,
secretInformer: newSecretInformer(kubeClient, resyncPeriod, namespace, secretQueue),
namespace: namespace,
repoClientset: repoClientset,
}
}

View File

@@ -6,14 +6,6 @@ import (
"fmt"
"time"
log "github.com/sirupsen/logrus"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/discovery"
"k8s.io/client-go/dynamic"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
@@ -22,32 +14,34 @@ import (
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/diff"
"github.com/argoproj/argo-cd/util/kube"
kubeutil "github.com/argoproj/argo-cd/util/kube"
)
const (
maxHistoryCnt = 5
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/discovery"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
)
// AppStateManager defines methods which allow to compare application spec and actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) (
*v1alpha1.ComparisonResult, *repository.ManifestResponse, []v1alpha1.ApplicationCondition, error)
SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState)
CompareAppState(app *v1alpha1.Application) (*v1alpha1.ComparisonResult, *repository.ManifestResponse, error)
SyncAppState(app *v1alpha1.Application, revision string, overrides *[]v1alpha1.ComponentParameter, dryRun bool, prune bool) *v1alpha1.OperationState
}
// appStateManager allows to compare application using KSonnet CLI
type appStateManager struct {
// KsonnetAppStateManager allows to compare application using KSonnet CLI
type KsonnetAppStateManager struct {
db db.ArgoDB
appclientset appclientset.Interface
kubectl kubeutil.Kubectl
repoClientset reposerver.Clientset
namespace string
}
// groupLiveObjects deduplicate list of kubernetes resources and choose correct version of resource: if resource has corresponding expected application resource then method pick
// kubernetes resource with matching version, otherwise chooses single kubernetes resource with any version
func groupLiveObjects(liveObjs []*unstructured.Unstructured, targetObjs []*unstructured.Unstructured) map[string]*unstructured.Unstructured {
func (ks *KsonnetAppStateManager) groupLiveObjects(liveObjs []*unstructured.Unstructured, targetObjs []*unstructured.Unstructured) map[string]*unstructured.Unstructured {
targetByFullName := make(map[string]*unstructured.Unstructured)
for _, obj := range targetObjs {
targetByFullName[getResourceFullName(obj)] = obj
@@ -85,33 +79,19 @@ func groupLiveObjects(liveObjs []*unstructured.Unstructured, targetObjs []*unstr
return liveByFullName
}
func (s *appStateManager) getTargetObjs(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) ([]*unstructured.Unstructured, *repository.ManifestResponse, error) {
repo := s.getRepo(app.Spec.Source.RepoURL)
conn, repoClient, err := s.repoClientset.NewRepositoryClient()
// CompareAppState compares application spec and real app state using KSonnet
func (ks *KsonnetAppStateManager) CompareAppState(app *v1alpha1.Application) (*v1alpha1.ComparisonResult, *repository.ManifestResponse, error) {
repo := ks.getRepo(app.Spec.Source.RepoURL)
conn, repoClient, err := ks.repoClientset.NewRepositoryClient()
if err != nil {
return nil, nil, err
}
defer util.Close(conn)
if revision == "" {
revision = app.Spec.Source.TargetRevision
}
// Decide what overrides to compare with.
var mfReqOverrides []*v1alpha1.ComponentParameter
if overrides != nil {
// If overrides is supplied, use that
mfReqOverrides = make([]*v1alpha1.ComponentParameter, len(overrides))
for i := range overrides {
item := overrides[i]
mfReqOverrides[i] = &item
}
} else {
// Otherwise, use the overrides in the app spec
mfReqOverrides = make([]*v1alpha1.ComponentParameter, len(app.Spec.Source.ComponentParameterOverrides))
overrides := make([]*v1alpha1.ComponentParameter, len(app.Spec.Source.ComponentParameterOverrides))
if app.Spec.Source.ComponentParameterOverrides != nil {
for i := range app.Spec.Source.ComponentParameterOverrides {
item := app.Spec.Source.ComponentParameterOverrides[i]
mfReqOverrides[i] = &item
overrides[i] = &item
}
}
@@ -119,62 +99,45 @@ func (s *appStateManager) getTargetObjs(app *v1alpha1.Application, revision stri
Repo: repo,
Environment: app.Spec.Source.Environment,
Path: app.Spec.Source.Path,
Revision: revision,
ComponentParameterOverrides: mfReqOverrides,
Revision: app.Spec.Source.TargetRevision,
ComponentParameterOverrides: overrides,
AppLabel: app.Name,
ValueFiles: app.Spec.Source.ValuesFiles,
Namespace: app.Spec.Destination.Namespace,
})
if err != nil {
return nil, nil, err
}
targetObjs := make([]*unstructured.Unstructured, 0)
for _, manifest := range manifestInfo.Manifests {
targetObjs := make([]*unstructured.Unstructured, len(manifestInfo.Manifests))
for i, manifest := range manifestInfo.Manifests {
obj, err := v1alpha1.UnmarshalToUnstructured(manifest)
if err != nil {
return nil, nil, err
}
if isHook(obj) {
continue
}
targetObjs = append(targetObjs, obj)
targetObjs[i] = obj
}
return targetObjs, manifestInfo, nil
}
func (s *appStateManager) getLiveObjs(app *v1alpha1.Application, targetObjs []*unstructured.Unstructured) (
[]*unstructured.Unstructured, map[string]*unstructured.Unstructured, error) {
server, namespace := app.Spec.Destination.Server, app.Spec.Destination.Namespace
log.Infof("Comparing app %s state in cluster %s (namespace: %s)", app.ObjectMeta.Name, server, namespace)
// Get the REST config for the cluster corresponding to the environment
clst, err := s.db.GetCluster(context.Background(), app.Spec.Destination.Server)
clst, err := ks.db.GetCluster(context.Background(), server)
if err != nil {
return nil, nil, err
}
restConfig := clst.RESTConfig()
// Retrieve the live versions of the objects. exclude any hook objects
labeledObjs, err := kubeutil.GetResourcesWithLabel(restConfig, app.Spec.Destination.Namespace, common.LabelApplicationName, app.Name)
// Retrieve the live versions of the objects
liveObjs, err := kubeutil.GetResourcesWithLabel(restConfig, namespace, common.LabelApplicationName, app.Name)
if err != nil {
return nil, nil, err
}
liveObjs := make([]*unstructured.Unstructured, 0)
for _, obj := range labeledObjs {
if isHook(obj) {
continue
}
liveObjs = append(liveObjs, obj)
}
liveObjByFullName := groupLiveObjects(liveObjs, targetObjs)
liveObjByFullName := ks.groupLiveObjects(liveObjs, targetObjs)
controlledLiveObj := make([]*unstructured.Unstructured, len(targetObjs))
// Move live resources which have corresponding target object to controlledLiveObj
dynamicIf, err := dynamic.NewForConfig(restConfig)
if err != nil {
return nil, nil, err
}
dynClientPool := dynamic.NewDynamicClientPool(restConfig)
disco, err := discovery.NewDiscoveryClientForConfig(restConfig)
if err != nil {
return nil, nil, err
@@ -182,64 +145,28 @@ func (s *appStateManager) getLiveObjs(app *v1alpha1.Application, targetObjs []*u
for i, targetObj := range targetObjs {
fullName := getResourceFullName(targetObj)
liveObj := liveObjByFullName[fullName]
if liveObj == nil && targetObj.GetName() != "" {
if liveObj == nil {
// If we get here, it indicates we did not find the live resource when querying using
// our app label. However, it is possible that the resource was created/modified outside
// of ArgoCD. In order to determine that it is truly missing, we fall back to perform a
// direct lookup of the resource by name. See issue #141
gvk := targetObj.GroupVersionKind()
dclient, err := dynClientPool.ClientForGroupVersionKind(gvk)
if err != nil {
return nil, nil, err
}
apiResource, err := kubeutil.ServerResourceForGroupVersionKind(disco, gvk)
if err != nil {
if !apierr.IsNotFound(err) {
return nil, nil, err
}
// If we get here, the app is comprised of a custom resource which has yet to be registered
} else {
liveObj, err = kubeutil.GetLiveResource(dynamicIf, targetObj, apiResource, app.Spec.Destination.Namespace)
if err != nil {
return nil, nil, err
}
return nil, nil, err
}
liveObj, err = kubeutil.GetLiveResource(dclient, targetObj, apiResource, namespace)
if err != nil {
return nil, nil, err
}
}
controlledLiveObj[i] = liveObj
delete(liveObjByFullName, fullName)
}
return controlledLiveObj, liveObjByFullName, nil
}
// CompareAppState compares application git state to the live app state, using the specified
// revision and supplied overrides. If revision or overrides are empty, then compares against
// revision and overrides in the app spec.
func (s *appStateManager) CompareAppState(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) (
*v1alpha1.ComparisonResult, *repository.ManifestResponse, []v1alpha1.ApplicationCondition, error) {
failedToLoadObjs := false
conditions := make([]v1alpha1.ApplicationCondition, 0)
targetObjs, manifestInfo, err := s.getTargetObjs(app, revision, overrides)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
failedToLoadObjs = true
}
controlledLiveObj, liveObjByFullName, err := s.getLiveObjs(app, targetObjs)
if err != nil {
controlledLiveObj = make([]*unstructured.Unstructured, len(targetObjs))
liveObjByFullName = make(map[string]*unstructured.Unstructured)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
failedToLoadObjs = true
}
for _, liveObj := range controlledLiveObj {
if liveObj != nil && liveObj.GetLabels() != nil {
if appLabelVal, ok := liveObj.GetLabels()[common.LabelApplicationName]; ok && appLabelVal != "" && appLabelVal != app.Name {
conditions = append(conditions, v1alpha1.ApplicationCondition{
Type: v1alpha1.ApplicationConditionSharedResourceWarning,
Message: fmt.Sprintf("Resource %s/%s is controlled by applications '%s' and '%s'", liveObj.GetKind(), liveObj.GetName(), app.Name, appLabelVal),
})
}
}
}
// Move root level live resources to controlledLiveObj and add nil to targetObjs to indicate that target object is missing
for fullName := range liveObjByFullName {
@@ -250,14 +177,11 @@ func (s *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
}
}
log.Infof("Comparing app %s state in cluster %s (namespace: %s)", app.ObjectMeta.Name, app.Spec.Destination.Server, app.Spec.Destination.Namespace)
// Do the actual comparison
diffResults, err := diff.DiffArray(targetObjs, controlledLiveObj)
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
comparisonStatus := v1alpha1.ComparisonStatusSynced
resources := make([]v1alpha1.ResourceState, len(targetObjs))
@@ -282,7 +206,7 @@ func (s *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
} else {
targetObjBytes, err := json.Marshal(targetObjs[i].Object)
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
resState.TargetState = string(targetObjBytes)
}
@@ -295,7 +219,7 @@ func (s *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
} else {
liveObjBytes, err := json.Marshal(controlledLiveObj[i].Object)
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
resState.LiveState = string(liveObjBytes)
}
@@ -308,26 +232,19 @@ func (s *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
if liveResource != nil {
childResources, err := getChildren(liveResource, liveObjByFullName)
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
resource.ChildLiveResources = childResources
resources[i] = resource
}
}
if failedToLoadObjs {
comparisonStatus = v1alpha1.ComparisonStatusUnknown
}
compResult := v1alpha1.ComparisonResult{
ComparedTo: app.Spec.Source,
ComparedAt: metav1.Time{Time: time.Now().UTC()},
Resources: resources,
Status: comparisonStatus,
}
if manifestInfo != nil {
compResult.Revision = manifestInfo.Revision
}
return &compResult, manifestInfo, conditions, nil
return &compResult, manifestInfo, nil
}
func hasParent(obj *unstructured.Unstructured) bool {
@@ -369,7 +286,29 @@ func getResourceFullName(obj *unstructured.Unstructured) string {
return fmt.Sprintf("%s:%s", obj.GetKind(), obj.GetName())
}
func (s *appStateManager) getRepo(repoURL string) *v1alpha1.Repository {
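// SyncAppState applies the given revision and parameter overrides to the app spec, syncs the application's
// resources against the target cluster, and (unless this was a dry run) records the deployment in history.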
func (s *KsonnetAppStateManager) SyncAppState(
app *v1alpha1.Application, revision string, overrides *[]v1alpha1.ComponentParameter, dryRun bool, prune bool) *v1alpha1.OperationState {
if revision != "" {
app.Spec.Source.TargetRevision = revision
}
if overrides != nil {
app.Spec.Source.ComponentParameterOverrides = *overrides
}
opRes, manifest := s.syncAppResources(app, dryRun, prune)
if !dryRun && opRes.Phase.Successful() {
err := s.persistDeploymentInfo(app, manifest.Revision, manifest.Params, nil)
if err != nil {
opRes.Phase = v1alpha1.OperationError
opRes.Message = fmt.Sprintf("failed to record sync to history: %v", err)
}
}
return opRes
}
func (s *KsonnetAppStateManager) getRepo(repoURL string) *v1alpha1.Repository {
repo, err := s.db.GetRepository(context.Background(), repoURL)
if err != nil {
// If we couldn't retrieve from the repo service, assume public repositories
@@ -378,7 +317,7 @@ func (s *appStateManager) getRepo(repoURL string) *v1alpha1.Repository {
return repo
}
func (s *appStateManager) persistDeploymentInfo(
func (s *KsonnetAppStateManager) persistDeploymentInfo(
app *v1alpha1.Application, revision string, envParams []*v1alpha1.ComponentParameter, overrides *[]v1alpha1.ComponentParameter) error {
params := make([]v1alpha1.ComponentParameter, len(envParams))
@@ -386,15 +325,16 @@ func (s *appStateManager) persistDeploymentInfo(
param := *envParams[i]
params[i] = param
}
var nextID int64 = 0
var nextId int64 = 0
if len(app.Status.History) > 0 {
nextID = app.Status.History[len(app.Status.History)-1].ID + 1
nextId = app.Status.History[len(app.Status.History)-1].ID + 1
}
history := append(app.Status.History, v1alpha1.DeploymentInfo{
ComponentParameterOverrides: app.Spec.Source.ComponentParameterOverrides,
Revision: revision,
DeployedAt: metav1.NewTime(time.Now().UTC()),
ID: nextID,
Params: params,
DeployedAt: metav1.NewTime(time.Now()),
ID: nextId,
})
if len(history) > maxHistoryCnt {
@@ -413,18 +353,141 @@ func (s *appStateManager) persistDeploymentInfo(
return err
}
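// syncAppResources compares the application state, then applies (or prunes) each resource via the
// Kubernetes API: first as a dry run across all resources and, only if every dry run succeeds, for real.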
func (s *KsonnetAppStateManager) syncAppResources(
app *v1alpha1.Application,
dryRun bool,
prune bool) (*v1alpha1.OperationState, *repository.ManifestResponse) {
opRes := v1alpha1.OperationState{
SyncResult: &v1alpha1.SyncOperationResult{},
}
comparison, manifestInfo, err := s.CompareAppState(app)
if err != nil {
opRes.Phase = v1alpha1.OperationError
opRes.Message = err.Error()
return &opRes, manifestInfo
}
clst, err := s.db.GetCluster(context.Background(), app.Spec.Destination.Server)
if err != nil {
opRes.Phase = v1alpha1.OperationError
opRes.Message = err.Error()
return &opRes, manifestInfo
}
config := clst.RESTConfig()
opRes.SyncResult.Resources = make([]*v1alpha1.ResourceDetails, len(comparison.Resources))
liveObjs := make([]*unstructured.Unstructured, len(comparison.Resources))
targetObjs := make([]*unstructured.Unstructured, len(comparison.Resources))
// First perform a `kubectl apply --dry-run` against all the manifests. This will detect most
// (but not all) validation issues with the users' manifests (e.g. will detect syntax issues,
// but will not not detect if they are mutating immutable fields). If anything fails, we will
// refuse to perform the sync.
dryRunSuccessful := true
for i, resourceState := range comparison.Resources {
liveObj, err := resourceState.LiveObject()
if err != nil {
opRes.Phase = v1alpha1.OperationError
opRes.Message = fmt.Sprintf("Failed to unmarshal live object: %v", err)
return &opRes, manifestInfo
}
targetObj, err := resourceState.TargetObject()
if err != nil {
opRes.Phase = v1alpha1.OperationError
opRes.Message = fmt.Sprintf("Failed to unmarshal target object: %v", err)
return &opRes, manifestInfo
}
liveObjs[i] = liveObj
targetObjs[i] = targetObj
resDetails, successful := syncObject(config, app.Spec.Destination.Namespace, targetObj, liveObj, prune, true)
if !successful {
dryRunSuccessful = false
}
opRes.SyncResult.Resources[i] = &resDetails
}
if !dryRunSuccessful {
opRes.Phase = v1alpha1.OperationFailed
opRes.Message = "one or more objects failed to apply (dry run)"
return &opRes, manifestInfo
}
if dryRun {
opRes.Phase = v1alpha1.OperationSucceeded
opRes.Message = "successfully synced (dry run)"
return &opRes, manifestInfo
}
// If we get here, all objects passed their dry-run, so we are now ready to actually perform the
// `kubectl apply`. Loop through the resources again, this time without dry-run.
syncSuccessful := true
for i := range comparison.Resources {
resDetails, successful := syncObject(config, app.Spec.Destination.Namespace, targetObjs[i], liveObjs[i], prune, false)
if !successful {
syncSuccessful = false
}
opRes.SyncResult.Resources[i] = &resDetails
}
if !syncSuccessful {
opRes.Message = "one or more objects failed to apply"
opRes.Phase = v1alpha1.OperationFailed
} else {
opRes.Message = "successfully synced"
opRes.Phase = v1alpha1.OperationSucceeded
}
return &opRes, manifestInfo
}
// syncObject performs a sync of a single resource
func syncObject(config *rest.Config, namespace string, targetObj, liveObj *unstructured.Unstructured, prune, dryRun bool) (v1alpha1.ResourceDetails, bool) {
obj := targetObj
if obj == nil {
obj = liveObj
}
resDetails := v1alpha1.ResourceDetails{
Name: obj.GetName(),
Kind: obj.GetKind(),
Namespace: namespace,
}
needsDelete := targetObj == nil
successful := true
if needsDelete {
if prune {
if dryRun {
resDetails.Message = "pruned (dry run)"
} else {
err := kubeutil.DeleteResource(config, liveObj, namespace)
if err != nil {
resDetails.Message = err.Error()
successful = false
} else {
resDetails.Message = "pruned"
}
}
} else {
resDetails.Message = "ignored (requires pruning)"
}
} else {
message, err := kube.ApplyResource(config, targetObj, namespace, dryRun)
if err != nil {
resDetails.Message = err.Error()
successful = false
} else {
resDetails.Message = message
}
}
return resDetails, successful
}
// NewAppStateManager creates new instance of Ksonnet app comparator
func NewAppStateManager(
db db.ArgoDB,
appclientset appclientset.Interface,
repoClientset reposerver.Clientset,
namespace string,
kubectl kubeutil.Kubectl,
) AppStateManager {
return &appStateManager{
return &KsonnetAppStateManager{
db: db,
appclientset: appclientset,
kubectl: kubectl,
repoClientset: repoClientset,
namespace: namespace,
}

View File

@@ -1,52 +0,0 @@
package controller
import (
"testing"
"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
var podManifest = []byte(`
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- image: nginx:1.7.9
name: nginx
resources:
requests:
cpu: 0.2
`)
func newPod() *unstructured.Unstructured {
var un unstructured.Unstructured
err := yaml.Unmarshal(podManifest, &un)
if err != nil {
panic(err)
}
return &un
}
func TestIsHook(t *testing.T) {
pod := newPod()
assert.False(t, isHook(pod))
pod.SetAnnotations(map[string]string{"helm.sh/hook": "post-install"})
assert.True(t, isHook(pod))
pod = newPod()
pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": "PreSync"})
assert.True(t, isHook(pod))
pod = newPod()
pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": "Skip"})
assert.False(t, isHook(pod))
pod = newPod()
pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": "Unknown"})
assert.False(t, isHook(pod))
}

File diff suppressed because it is too large

View File

@@ -1,402 +0,0 @@
package controller
import (
"context"
"fmt"
"sort"
"testing"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/watch"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/kube"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
apiv1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
fakedisco "k8s.io/client-go/discovery/fake"
"k8s.io/client-go/rest"
testcore "k8s.io/client-go/testing"
)
type kubectlOutput struct {
output string
err error
}
type mockKubectlCmd struct {
commands map[string]kubectlOutput
events chan watch.Event
}
func (k mockKubectlCmd) WatchResources(
ctx context.Context, config *rest.Config, namespace string, selector func(kind schema.GroupVersionKind) v1.ListOptions) (chan watch.Event, error) {
return k.events, nil
}
func (k mockKubectlCmd) DeleteResource(config *rest.Config, obj *unstructured.Unstructured, namespace string) error {
command, ok := k.commands[obj.GetName()]
if !ok {
return nil
}
return command.err
}
func (k mockKubectlCmd) ApplyResource(config *rest.Config, obj *unstructured.Unstructured, namespace string, dryRun, force bool) (string, error) {
command, ok := k.commands[obj.GetName()]
if !ok {
return "", nil
}
return command.output, command.err
}
// ConvertToVersion converts an unstructured object into the specified group/version
func (k mockKubectlCmd) ConvertToVersion(obj *unstructured.Unstructured, group, version string) (*unstructured.Unstructured, error) {
return obj, nil
}
func newTestSyncCtx(resources ...*v1.APIResourceList) *syncContext {
fakeDisco := &fakedisco.FakeDiscovery{Fake: &testcore.Fake{}}
fakeDisco.Resources = append(resources, &v1.APIResourceList{
APIResources: []v1.APIResource{
{Kind: "pod", Namespaced: true},
{Kind: "deployment", Namespaced: true},
{Kind: "service", Namespaced: true},
},
})
kube.FlushServerResourcesCache()
return &syncContext{
comparison: &v1alpha1.ComparisonResult{},
config: &rest.Config{},
namespace: "test-namespace",
syncRes: &v1alpha1.SyncOperationResult{},
syncOp: &v1alpha1.SyncOperation{
Prune: true,
SyncStrategy: &v1alpha1.SyncStrategy{
Apply: &v1alpha1.SyncStrategyApply{},
},
},
proj: &v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{
Name: "test",
},
Spec: v1alpha1.AppProjectSpec{
ClusterResourceWhitelist: []v1.GroupKind{
{Group: "*", Kind: "*"},
},
},
},
opState: &v1alpha1.OperationState{},
disco: fakeDisco,
log: log.WithFields(log.Fields{"application": "fake-app"}),
}
}
func TestSyncCreateInSortedOrder(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = mockKubectlCmd{}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "",
TargetState: "{\"kind\":\"pod\"}",
}, {
LiveState: "",
TargetState: "{\"kind\":\"service\"}",
},
},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 2)
for i := range syncCtx.syncRes.Resources {
if syncCtx.syncRes.Resources[i].Kind == "pod" {
assert.Equal(t, v1alpha1.ResourceDetailsSynced, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "service" {
assert.Equal(t, v1alpha1.ResourceDetailsSynced, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestSyncCreateNotWhitelistedClusterResources(t *testing.T) {
syncCtx := newTestSyncCtx(&v1.APIResourceList{
GroupVersion: v1alpha1.SchemeGroupVersion.String(),
APIResources: []v1.APIResource{
{Name: "workflows", Namespaced: false, Kind: "Workflow", Group: "argoproj.io"},
{Name: "application", Namespaced: false, Kind: "Application", Group: "argoproj.io"},
},
}, &v1.APIResourceList{
GroupVersion: "rbac.authorization.k8s.io/v1",
APIResources: []v1.APIResource{
{Name: "clusterroles", Namespaced: false, Kind: "ClusterRole", Group: "rbac.authorization.k8s.io"},
},
})
syncCtx.proj.Spec.ClusterResourceWhitelist = []v1.GroupKind{
{Group: "argoproj.io", Kind: "*"},
}
syncCtx.kubectl = mockKubectlCmd{}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "",
TargetState: `{"apiVersion": "rbac.authorization.k8s.io/v1", "kind": "ClusterRole", "metadata": {"name": "argo-ui-cluster-role" }}`,
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResourceDetailsSyncFailed, syncCtx.syncRes.Resources[0].Status)
assert.Contains(t, syncCtx.syncRes.Resources[0].Message, "not permitted in project")
}
func TestSyncBlacklistedNamespacedResources(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.proj.Spec.NamespaceResourceBlacklist = []v1.GroupKind{
{Group: "*", Kind: "deployment"},
}
syncCtx.kubectl = mockKubectlCmd{}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "",
TargetState: "{\"kind\":\"deployment\"}",
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResourceDetailsSyncFailed, syncCtx.syncRes.Resources[0].Status)
assert.Contains(t, syncCtx.syncRes.Resources[0].Message, "not permitted in project")
}
func TestSyncSuccessfully(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = mockKubectlCmd{}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "",
TargetState: "{\"kind\":\"service\"}",
}, {
LiveState: "{\"kind\":\"pod\"}",
TargetState: "",
},
},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 2)
for i := range syncCtx.syncRes.Resources {
if syncCtx.syncRes.Resources[i].Kind == "pod" {
assert.Equal(t, v1alpha1.ResourceDetailsSyncedAndPruned, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "service" {
assert.Equal(t, v1alpha1.ResourceDetailsSynced, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestSyncDeleteSuccessfully(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = mockKubectlCmd{}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "{\"kind\":\"service\"}",
TargetState: "",
}, {
LiveState: "{\"kind\":\"pod\"}",
TargetState: "",
},
},
}
syncCtx.sync()
for i := range syncCtx.syncRes.Resources {
if syncCtx.syncRes.Resources[i].Kind == "pod" {
assert.Equal(t, v1alpha1.ResourceDetailsSyncedAndPruned, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "service" {
assert.Equal(t, v1alpha1.ResourceDetailsSyncedAndPruned, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestSyncCreateFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = mockKubectlCmd{
commands: map[string]kubectlOutput{
"test-service": {
output: "",
err: fmt.Errorf("error: error validating \"test.yaml\": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false"),
},
},
}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "",
TargetState: "{\"kind\":\"service\", \"metadata\":{\"name\":\"test-service\"}}",
},
},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResourceDetailsSyncFailed, syncCtx.syncRes.Resources[0].Status)
}
func TestSyncPruneFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = mockKubectlCmd{
commands: map[string]kubectlOutput{
"test-service": {
output: "",
err: fmt.Errorf(" error: timed out waiting for \"test-service\" to be synced"),
},
},
}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "{\"kind\":\"service\", \"metadata\":{\"name\":\"test-service\"}}",
TargetState: "",
},
},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResourceDetailsSyncFailed, syncCtx.syncRes.Resources[0].Status)
}
func TestRunWorkflows(t *testing.T) {
// syncCtx := newTestSyncCtx()
// syncCtx.doWorkflowSync(nil, nil)
}
func unsortedManifest() []syncTask {
return []syncTask{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
}
}
func sortedManifest() []syncTask {
return []syncTask{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
}
}
func TestSortKubernetesResourcesSuccessfully(t *testing.T) {
unsorted := unsortedManifest()
ks := newKindSorter(unsorted, resourceOrder)
sort.Sort(ks)
expectedOrder := sortedManifest()
assert.Equal(t, len(unsorted), len(expectedOrder))
for i, sorted := range unsorted {
assert.Equal(t, expectedOrder[i], sorted)
}
}
func TestSortManifestHandleNil(t *testing.T) {
task := syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
}
manifest := []syncTask{
{},
task,
}
ks := newKindSorter(manifest, resourceOrder)
sort.Sort(ks)
assert.Equal(t, task, manifest[0])
assert.Nil(t, manifest[1].targetObj)
}

View File

@@ -1,22 +0,0 @@
# ArgoCD Documentation
## [Getting Started](getting_started.md)
## Concepts
* [Architecture](architecture.md)
* [Tracking Strategies](tracking_strategies.md)
## Features
* [Application Sources](application_sources.md)
* [Application Parameters](parameters.md)
* [Projects](projects.md)
* [Automated Sync](auto_sync.md)
* [Resource Health](health.md)
* [Resource Hooks](resource_hooks.md)
* [Single Sign On](sso.md)
* [Webhooks](webhook.md)
* [RBAC](rbac.md)
## Other
* [Configuring Ingress](ingress.md)
* [F.A.Q.](faq.md)

View File

@@ -1,103 +0,0 @@
# Application Source Types
ArgoCD supports several different ways in which Kubernetes manifests can be defined:
* [ksonnet](https://ksonnet.io) applications
* [kustomize](https://kustomize.io) applications
* [helm](https://helm.sh) charts
* Directory of YAML/JSON/Jsonnet manifests
Some additional considerations should be made when deploying apps of a particular type:
## Ksonnet
### Environments
Ksonnet has a first class concept of an "environment." To create an application from a ksonnet
app directory, an environment must be specified. For example, the following command creates the
"guestbook-default" app, which points to the `default` environment:
```
argocd app create guestbook-default --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --env default
```
### Parameters
Ksonnet parameters all belong to a component. For example, the following are the parameters
available in the guestbook app, all of which belong to the `guestbook-ui` component:
```
$ ks param list
COMPONENT PARAM VALUE
========= ===== =====
guestbook-ui containerPort 80
guestbook-ui image "gcr.io/heptio-images/ks-guestbook-demo:0.1"
guestbook-ui name "guestbook-ui"
guestbook-ui replicas 1
guestbook-ui servicePort 80
guestbook-ui type "LoadBalancer"
```
When overriding ksonnet parameters in ArgoCD, the component name should also be specified in the
`argocd app set` command, in the form of `-p COMPONENT=PARAM=VALUE`. For example:
```
argocd app set guestbook-default -p guestbook-ui=image=gcr.io/heptio-images/ks-guestbook-demo:0.1
```
## Helm
### Values Files
Helm has the ability to use a different, or even multiple "values.yaml" files to derive its
parameters from. Alternate or multiple values file(s) can be specified using the `--values`
flag. The flag can be repeated to support multiple values files:
```
argocd app set helm-guestbook --values values-production.yaml
```
### Helm Parameters
Helm has the ability to set parameter values, which override any values in
a `values.yaml`. For example, `service.type` is a common parameter which is exposed in a Helm chart:
```
helm template . --set service.type=LoadBalancer
```
Similarly, ArgoCD can override values in the `values.yaml` parameters using the `argocd app set` command,
in the form of `-p PARAM=VALUE`. For example:
```
argocd app set helm-guestbook -p service.type=LoadBalancer
```
### Helm Hooks
Helm hooks are equivalent in concept to [ArgoCD resource hooks](resource_hooks.md). In Helm, a hook
is any normal Kubernetes resource annotated with the `helm.sh/hook` annotation. When ArgoCD deploys a
Helm application which contains hooks, all hook resources are currently ignored during
the `kubectl apply` of the manifests. There is an
[open issue](https://github.com/argoproj/argo-cd/issues/355) to map Helm hooks to ArgoCD's concept
of Pre/Post/Sync hooks.
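As a rough illustration (the resource name and image below are hypothetical), a chart manifest becomes a
hook simply by carrying the `helm.sh/hook` annotation:
```
apiVersion: batch/v1
kind: Job
metadata:
  name: example-post-install-job   # hypothetical name, for illustration only
  annotations:
    "helm.sh/hook": post-install
spec:
  template:
    spec:
      containers:
      - name: hello
        image: alpine:3.7
        command: ["echo", "post-install hook executed"]
      restartPolicy: Never
```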
### Random Data
Helm templating has the ability to generate random data during chart rendering via the
`randAlphaNum` function. Many helm charts from the [charts repository](https://github.com/helm/charts)
make use of this feature. For example, the following is the secret for the
[redis helm chart](https://github.com/helm/charts/blob/master/stable/redis/templates/secrets.yaml):
```
data:
{{- if .Values.password }}
redis-password: {{ .Values.password | b64enc | quote }}
{{- else }}
redis-password: {{ randAlphaNum 10 | b64enc | quote }}
{{- end }}
```
The ArgoCD application controller periodically compares git state against the live state, running
the `helm template <CHART>` command to generate the helm manifests. Because the random value is
regenerated every time the comparison is made, any application which makes use of the `randAlphaNum`
function will always be in an `OutOfSync` state. This can be mitigated by explicitly setting a
value in the values.yaml, such that the value is stable between each comparison. For example:
```
argocd app set redis -p password=abc123
```

View File

@@ -22,57 +22,15 @@ manifests when provided the following inputs:
* repository URL
* git revision (commit, tag, branch)
* application path
* template specific settings: parameters, ksonnet environments, helm values.yaml
* application environment
### Application Controller
The application controller is a Kubernetes controller which continuously monitors running
applications and compares the current, live state against the desired target state (as specified in
the git repo). It detects `OutOfSync` application state and optionally takes corrective action. It
is responsible for invoking any user-defined hooks for lifecycle events (PreSync, Sync, PostSync)
the git repo). It detects out-of-sync application state and optionally takes corrective action. It
is responsible for invoking any user-defined handlers (argo workflows) for Sync, OutOfSync events
### Application CRD (Custom Resource Definition)
The Application CRD is the Kubernetes resource object representing a deployed application instance
in an environment. It is defined by two key pieces of information:
* `source` reference to the desired state in git (repository, revision, path, environment)
* `destination` reference to the target cluster and namespace.
An example spec is as follows:
```
spec:
project: default
source:
repoURL: https://github.com/argoproj/argocd-example-apps.git
targetRevision: HEAD
path: guestbook
environment: default
destination:
server: https://kubernetes.default.svc
namespace: default
```
### AppProject CRD (Custom Resource Definition)
The AppProject CRD is the Kubernetes resource object representing a grouping of applications. It is defined by three key pieces of information:
* `sourceRepos` reference to the repositories that applications within the project can pull manifests from.
* `destinations` reference to clusters and namespaces that applications within the project can deploy into.
* `roles` list of entities with definitions of their access to resources within the project.
An example spec is as follows:
```
spec:
description: Description of the project
destinations:
- namespace: default
server: https://kubernetes.default.svc
roles:
- description: Description of the role
jwtTokens:
- iat: 1535390316
name: role-name
policies:
- p, proj:proj-name:role-name, applications, get, proj-name/*, allow
- p, proj:proj-name:role-name, applications, sync, proj-name/*, deny
sourceRepos:
- https://github.com/argoproj/argocd-example-apps.git
```
in an environment. It holds a reference to the desired target state (repo, revision, app, environment)
which the application controller will enforce against the live state.

Binary image files changed: docs/argocd-ui.png added (109 KiB); eight other binary images removed (3.5 MiB, 76 KiB, 99 KiB, 95 KiB, 113 KiB, 71 KiB, 70 KiB, 73 KiB); contents not shown.
View File

@@ -1,51 +0,0 @@
# Automated Sync Policy
ArgoCD has the ability to automatically sync an application when it detects differences between
the desired manifests in git, and the live state in the cluster. A benefit of automatic sync is that
CI/CD pipelines no longer need direct access to the ArgoCD API server to perform the deployment.
Instead, the pipeline simply commits and pushes the manifest changes to the tracking git repository.
To configure automated sync run:
```bash
argocd app set <APPNAME> --sync-policy automated
```
Alternatively, if creating the application from an application manifest, specify a syncPolicy with an
`automated` policy.
```yaml
spec:
syncPolicy:
automated: {}
```
## Automatic Pruning
By default (and as a safety mechanism), automated sync will not delete resources when ArgoCD detects
the resource is no longer defined in git. To prune the resources, a manual sync can always be
performed (with pruning checked). Pruning can also be enabled to happen automatically as part of the
automated sync by running:
```bash
argocd app set <APPNAME> --auto-prune
```
Or by setting the prune option to true in the automated sync policy:
```yaml
spec:
syncPolicy:
automated:
prune: true
```
## Automated Sync Semantics
* An automated sync will only be performed if the application is OutOfSync. Applications in a
Synced or error state will not attempt automated sync.
* Automated sync will only attempt one synchronization per unique combination of commit SHA1 and
application parameters. If the most recent successful sync in the history was already performed
against the same commit-SHA and parameters, a second sync will not be attempted.
* Automatic sync will not reattempt a sync if the previous sync attempt against the same commit-SHA
and parameters had failed.
* Rollback cannot be performed against an application with automated sync enabled.

View File

@@ -1,16 +0,0 @@
# FAQ
## Why is my application still `OutOfSync` immediately after a successful Sync?
It is possible for an application to still be `OutOfSync` even immediately after a successful Sync
operation. Some reasons for this might be:
* There may be problems in the manifests themselves, which may contain extra/unknown fields not in
the actual K8s spec. These extra fields would get dropped when querying Kubernetes for the live state,
resulting in an `OutOfSync` status indicating a missing field was detected.
* The sync was performed (with pruning disabled), and there are resources which need to be deleted.
* A mutating webhook altered the manifest after it was submitted to Kubernetes
To debug `OutOfSync` issues, run the `app diff` command to see the differences between git and live:
```
argocd app diff APPNAME
```

View File

@@ -1,177 +1,81 @@
# ArgoCD Getting Started
# Argo CD Getting Started
An example guestbook application is provided to demonstrate how ArgoCD works.
An example Ksonnet guestbook application is provided to demonstrate how Argo CD works.
## Requirements
* Installed [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool
* Installed [minikube](https://github.com/kubernetes/minikube#installation)
* Installed the [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool
* Have a [kubeconfig](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) file (default location is `~/.kube/config`).
## 1. Install ArgoCD
```bash
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v0.9.2/manifests/install.yaml
## 1. Download Argo CD
Download the latest Argo CD version
```
This will create a new namespace, `argocd`, where ArgoCD services and application resources will live.
NOTE:
* On GKE with RBAC enabled, you may need to grant your account the ability to create new cluster roles
```bash
kubectl create clusterrolebinding YOURNAME-cluster-admin-binding --clusterrole=cluster-admin --user=YOUREMAIL@gmail.com
```
## 2. Download ArgoCD CLI
Download the latest ArgoCD version:
On Mac:
```bash
brew install argoproj/tap/argocd
```
On Linux:
```bash
curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v0.9.2/argocd-linux-amd64
curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v0.4.3/argocd-darwin-amd64
chmod +x /usr/local/bin/argocd
```
## 3. Access the ArgoCD API server
By default, the ArgoCD API server is not exposed with an external IP. To access the API server,
choose one of the following means to expose the ArgoCD API server:
## 2. Install Argo CD
```
argocd install
```
This will create a new namespace, `argocd`, where Argo CD services and application resources will live.
### Service Type LoadBalancer
Change the argocd-server service type to `LoadBalancer`:
## 3. Open access to Argo CD API server
```bash
By default, the Argo CD API server is not exposed with an external IP. To expose the API server,
change service type to `LoadBalancer`:
```
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
```
### Ingress
Follow the [ingress documentation](ingress.md) on how to configure ArgoCD with ingress.
## 4. Login to the server from the CLI
### Port Forwarding
`kubectl port-forward` can also be used to connect to the API server without exposing the service.
The API server can be accessed using the localhost address/port.
## 4. Login using the CLI
Login as the `admin` user. The initial password is autogenerated to be the pod name of the
ArgoCD API server. This can be retrieved with the command:
```bash
kubectl get pods -n argocd -l app=argocd-server -o name | cut -d'/' -f 2
```
argocd login $(minikube service argocd-server -n argocd --url | cut -d'/' -f 3)
```
Using the above password, login to ArgoCD's external IP:
```bash
kubectl get svc -n argocd argocd-server
argocd login <EXTERNAL-IP>
Now, the Argo CD cli is configured to talk to API server and you can deploy your first application.
## 5. Connect and deploy the Guestbook application
1. Register the minikube cluster to Argo CD:
```
argocd cluster add minikube
```
The `argocd cluster add CONTEXT` command installs an `argocd-manager` ServiceAccount and ClusterRole into
the cluster associated with the supplied kubectl context. Argo CD then uses the associated service account
token to perform its required management tasks (i.e. deploy/monitoring).
2. Add the guestbook application and github repository containing the Guestbook application
```
argocd app create --name guestbook --repo https://github.com/argoproj/argo-cd.git --path examples/guestbook --env minikube --dest-server https://$(minikube ip):8443
```
After logging in, change the password using the command:
```bash
argocd account update-password
argocd relogin
Once the application is added, you can now see its status:
```
## 5. Register a cluster to deploy apps to (optional)
This step registers a cluster's credentials to ArgoCD, and is only necessary when deploying to
an external cluster. When deploying internally (to the same cluster that ArgoCD is running in),
https://kubernetes.default.svc should be used as the application's K8s API server address.
First list all cluster contexts in your current kubeconfig:
```bash
argocd cluster add
```
Choose a context name from the list and supply it to `argocd cluster add CONTEXTNAME`. For example,
for docker-for-desktop context, run:
```bash
argocd cluster add docker-for-desktop
```
The above command installs an `argocd-manager` ServiceAccount and ClusterRole into the cluster
associated with the supplied kubectl context. ArgoCD uses this service account token to perform its
management tasks (i.e. deploy/monitoring).
## 6. Create an application from a git repository location
### Creating apps via UI
Open a browser to the ArgoCD external UI, and login using the credentials and the
external IP/hostname configured in the previous steps.
Connect a git repository containing your apps. An example repository containing a sample
guestbook application is available at https://github.com/argoproj/argocd-example-apps.git.
![connect repo](assets/connect_repo.png)
After connecting a git repository, select the guestbook application for creation:
![select repo](assets/select_repo.png)
![select app](assets/select_app.png)
![select env](assets/select_env.png)
![create app](assets/create_app.png)
### Creating apps via CLI
Applications can also be created using the ArgoCD CLI:
```bash
argocd app create guestbook-default --repo https://github.com/argoproj/argocd-example-apps.git --path ksonnet-guestbook
```
## 7. Sync (deploy) the application
Once the guestbook application is created, you can now view its status:
From UI:
![guestbook app](assets/guestbook-app.png)
From CLI:
```bash
$ argocd app get guestbook-default
Name: guestbook-default
Server: https://kubernetes.default.svc
Namespace: default
URL: https://192.168.64.36:31880/applications/argocd/guestbook-default
Environment: default
Repo: https://github.com/argoproj/argocd-example-apps.git
Path: guestbook
Target: HEAD
KIND NAME STATUS HEALTH
Service guestbook-ui OutOfSync
Deployment guestbook-ui OutOfSync
argocd app list
argocd app get guestbook
```
The application status is initially in an `OutOfSync` state, since the application has yet to be
deployed, and no Kubernetes resources have been created. To sync (deploy) the application, run:
```bash
$ argocd app sync guestbook-default
Application: guestbook-default
Operation: Sync
Phase: Succeeded
Message: successfully synced
KIND NAME MESSAGE
Service guestbook-ui service "guestbook-ui" created
Deployment guestbook-ui deployment.apps "guestbook-ui" created
```
argocd app sync guestbook
```
This command retrieves the manifests from the git repository and performs a `kubectl apply` of the
manifests. The guestbook app is now running and you can view its resource components, logs,
events, and assessed health status:
[![asciicast](https://asciinema.org/a/uYnbFMy5WI2rc9S49oEAyGLb0.png)](https://asciinema.org/a/uYnbFMy5WI2rc9S49oEAyGLb0)
![view app](assets/guestbook-tree.png)
Argo CD also lets you view and manage applications using the web UI. Get the web UI URL by running:
## 8. Next Steps
```
minikube service argocd-server -n argocd --url
```
ArgoCD supports additional features such as automated sync, SSO, WebHooks, RBAC, Projects. See the
rest of the [documentation](./) for details.
![argo cd ui](argocd-ui.png)

View File

@@ -1,20 +0,0 @@
# Resource Health
## Overview
ArgoCD provides built-in health assessment for several standard Kubernetes types, which is then
surfaced to the overall Application health status as a whole. The following checks are made for
specific types of Kubernetes resources:
### Deployment, ReplicaSet, StatefulSet, DaemonSet
* Observed generation is equal to desired generation.
* Number of **updated** replicas equals the number of desired replicas.
### Service
* If service type is of type `LoadBalancer`, the `status.loadBalancer.ingress` list is non-empty,
with at least one value for `hostname` or `IP`.
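As a hypothetical illustration (not part of this change), a `LoadBalancer` Service that passes this check has its `status.loadBalancer.ingress` populated:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui
spec:
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: abc123.elb.example.com  # at least one hostname or IP means the Service is considered healthy
```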
### Ingress
* The `status.loadBalancer.ingress` list is non-empty, with at least one value for `hostname` or `IP`.
### PersistentVolumeClaim
* The `status.phase` is `Bound`

View File

@@ -1,126 +0,0 @@
# Ingress Configuration
ArgoCD runs both a gRPC server (used by the CLI) and a HTTP/HTTPS server (used by the UI).
Both protocols are exposed by the argocd-server service object on the following ports:
* 443 - gRPC/HTTPS
* 80 - HTTP (redirects to HTTPS)
There are several ways in which Ingress can be configured.
## [kubernetes/ingress-nginx](https://github.com/kubernetes/ingress-nginx)
### Option 1: ssl-passthrough
Because ArgoCD serves multiple protocols (gRPC/HTTPS) on the same port (443), this provides a
challenge when attempting to define a single nginx ingress object and rule for the argocd-service,
since the `nginx.ingress.kubernetes.io/backend-protocol` [annotation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol)
accepts only a single value for the backend protocol (e.g. HTTP, HTTPS, GRPC, GRPCS).
In order to expose the ArgoCD API server with a single ingress rule and hostname, the
`nginx.ingress.kubernetes.io/ssl-passthrough` [annotation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough)
must be used to passthrough TLS connections and terminate TLS at the ArgoCD API server.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: argocd-server-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
rules:
- host: argocd.example.com
http:
paths:
- backend:
serviceName: argocd-server
servicePort: https
```
The above rule terminates TLS at the ArgoCD API server, which detects the protocol being used,
and responds appropriately. Note that the `nginx.ingress.kubernetes.io/ssl-passthrough` annotation
requires that the `--enable-ssl-passthrough` flag be added to the command line arguments to
`nginx-ingress-controller`.
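For reference, a sketch of where that flag is added (the container name and surrounding arguments are illustrative and depend on how the controller was deployed):

```yaml
# Hypothetical excerpt of an nginx-ingress-controller Deployment
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller
        args:
        - /nginx-ingress-controller
        - --enable-ssl-passthrough  # required for the ssl-passthrough annotation to take effect
```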
### Option 2: Multiple ingress objects and hosts
Since ingress-nginx Ingress supports only a single protocol per Ingress object, an alternative
way would be to define two Ingress objects. One for HTTP/HTTPS, and the other for gRPC:
HTTP/HTTPS Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: argocd-server-http-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
rules:
- http:
paths:
- backend:
serviceName: argocd-server
servicePort: http
host: argocd.example.com
tls:
- hosts:
- argocd.example.com
secretName: argocd-secret
```
gRPC Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: argocd-server-grpc-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
rules:
- http:
paths:
- backend:
serviceName: argocd-server
servicePort: https
host: grpc.argocd.example.com
tls:
- hosts:
- grpc.argocd.example.com
secretName: argocd-secret
```
The API server should then be run with TLS disabled. Edit the `argocd-server` deployment to add the
`--insecure` flag to the argocd-server command:
```yaml
spec:
template:
spec:
name: argocd-server
containers:
- command:
- /argocd-server
- --staticassets
- /shared/app
- --repo-server
- argocd-repo-server:8081
- --insecure
```
The obvious disadvantage to this approach is that it requires two separate hostnames for
the API server -- one for gRPC and the other for HTTP/HTTPS. However, it allows TLS termination to
happen at the ingress controller.
## AWS Application Load Balancers (ALBs) and Classic ELB (HTTP mode)
Neither ALBs nor Classic ELBs in HTTP mode have full support for HTTP/2 gRPC, which is the
protocol used by the `argocd` CLI. Thus, when using an AWS load balancer, either a Classic ELB in
passthrough mode or an NLB is needed.
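As an illustrative sketch (the annotation comes from the upstream Kubernetes AWS cloud provider, not from this change), an NLB can typically be requested by annotating the argocd-server service when switching it to a LoadBalancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"  # assumed annotation; verify for your Kubernetes version
spec:
  type: LoadBalancer
```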

View File

@@ -1,47 +0,0 @@
# ArgoCD Release Instructions
1. Tag, build, and push argo-cd-ui
```bash
cd argo-cd-ui
git checkout -b release-X.Y
git tag vX.Y.Z
git push upstream release-X.Y --tags
IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z DOCKER_PUSH=true yarn docker
```
2. Create release-X.Y branch (if creating initial X.Y release)
```bash
git checkout -b release-X.Y
git push upstream release-X.Y
```
3. Update VERSION and manifests with new version
```bash
vi VERSION # ensure value is desired X.Y.Z semantic version
vi manifests/base/kustomization.yaml # update with new image tags
make manifests
git commit -a -m "Update manifests to vX.Y.Z"
git push upstream release-X.Y
```
4. Tag, build, and push release to docker hub
```bash
git tag vX.Y.Z
make release IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z DOCKER_PUSH=true
git push upstream vX.Y.Z
```
5. Update argocd brew formula
```bash
git clone https://github.com/argoproj/homebrew-tap
cd homebrew-tap
shasum -a 256 ~/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64
# edit argocd.rb with version and checksum
git commit -a -m "Update argocd to vX.Y.Z"
git push
```
6. Update documentation:
* Edit CHANGELOG.md with release notes
* Update getting_started.md with new version
* Create GitHub release from new tag and upload binaries (e.g. dist/argocd-darwin-amd64)

View File

@@ -1,39 +0,0 @@
# Parameter Overrides
ArgoCD provides a mechanism to override the parameters of a ksonnet/helm app. This gives some extra
flexibility in having most of the application manifests defined in git, while leaving room for
*some* parts of the k8s manifests to be determined dynamically, or outside of git. It also serves as an
alternative way of redeploying an application by changing application parameters via ArgoCD, instead
of making the changes to the manifests in git.
**NOTE:** many consider this mode of operation an anti-pattern to GitOps, since the source of
truth becomes a union of the git repository and the application overrides. The ArgoCD parameter
overrides feature is provided mainly as a convenience to developers and is intended to be used in
dev/test environments, rather than production environments.
To use parameter overrides, run the `argocd app set -p (COMPONENT=)PARAM=VALUE` command:
```
argocd app set guestbook -p guestbook=image=example/guestbook:abcd123
argocd app sync guestbook
```
The following are situations where parameter overrides would be useful:
1. A team maintains a "dev" environment, which needs to be continually updated with the latest
version of their guestbook application after every build in the tip of master. To address this use
case, the application would expose a parameter named `image`, whose value used in the `dev`
environment contains a placeholder value (e.g. `example/guestbook:replaceme`). The placeholder value
would be determined externally (outside of git), such as by a build system. Then, as part of the build
pipeline, the parameter value of the `image` would be continually updated to the freshly built image
(e.g. `argocd app set guestbook -p guestbook=image=example/guestbook:abcd123`). A sync operation
would result in the application being redeployed with the new image.
2. A repository of helm manifests is already publicly available (e.g. https://github.com/helm/charts).
Since commit access to the repository is unavailable, it is useful to be able to install charts from
the public repository, customizing the deployment with different parameters, without resorting to
forking the repository to make the changes. For example, to install redis from the helm chart
repository and customize the database password, you would run:
```
argocd app create redis --repo https://github.com/helm/charts.git --path stable/redis --dest-server https://kubernetes.default.svc --dest-namespace default -p password=abc123
```

View File

@@ -1,181 +0,0 @@
## Projects
Projects provide a logical grouping of applications, which is useful when ArgoCD is used by multiple
teams. Projects provide the following features:
* ability to restrict *what* may be deployed (the git source repositories)
* ability to restrict *where* apps may be deployed (the destination clusters and namespaces)
* ability to control what type of objects may be deployed (e.g. RBAC, CRDs, DaemonSets, NetworkPolicy etc...)
### The default project
Every application belongs to a single project. If unspecified, an application belongs to the
`default` project, which is created automatically and by default, permits deployments from any
source repo, to any cluster, and all resource Kinds. The default project can be modified, but not
deleted. When initially created, its specification is configured to be the most permissive:
```yaml
spec:
sourceRepos:
- '*'
destinations:
- namespace: '*'
server: '*'
clusterResourceWhitelist:
- group: '*'
kind: '*'
```
### Creating Projects
Additional projects can be created to give separate teams different levels of access to namespaces.
The following command creates a new project `myproject` which can deploy applications to namespace
`mynamespace` of cluster `https://kubernetes.default.svc`. The permitted git source repository is
set to `https://github.com/argoproj/argocd-example-apps.git`.
```
argocd proj create myproject -d https://kubernetes.default.svc,mynamespace -s https://github.com/argoproj/argocd-example-apps.git
```
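The same project could also be expressed declaratively as an `AppProject` resource (a sketch only; it assumes the `argoproj.io/v1alpha1` group/version and reuses the field names shown in the default project example above):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: myproject
  namespace: argocd   # assumed to live in the ArgoCD namespace
spec:
  sourceRepos:
  - https://github.com/argoproj/argocd-example-apps.git
  destinations:
  - namespace: mynamespace
    server: https://kubernetes.default.svc
```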
### Managing Projects
Permitted source git repositories are managed using commands:
```bash
argocd project add-source <PROJECT> <REPO>
argocd project remove-source <PROJECT> <REPO>
```
Permitted destination clusters and namespaces are managed with the commands:
```
argocd project add-destination <PROJECT> <CLUSTER>,<NAMESPACE>
argocd project remove-destination <PROJECT> <CLUSTER>,<NAMESPACE>
```
Permitted destination K8s resource kinds are managed with the following commands. Note that
namespace-scoped resources are restricted via a blacklist, whereas cluster-scoped resources are
restricted via a whitelist.
```
argocd project allow-cluster-resource <PROJECT> <GROUP> <KIND>
argocd project allow-namespace-resource <PROJECT> <GROUP> <KIND>
argocd project deny-cluster-resource <PROJECT> <GROUP> <KIND>
argocd project deny-namespace-resource <PROJECT> <GROUP> <KIND>
```
### Assign application to a project
The application project can be changed using `app set` command. In order to change the project of
an app, the user must have permissions to access the new project.
```
argocd app set guestbook-default --project myproject
```
### Configuring RBAC with projects
Once projects have been defined, RBAC rules can be written to restrict access to the applications
in the project. The following example configures RBAC for two GitHub teams: `team1` and `team2`,
both in the GitHub org, `some-github-org`. There are two projects, `project-a` and `project-b`.
`team1` can only manage applications in `project-a`, while `team2` can only manage applications in
`project-b`. Both `team1` and `team2` have the ability to manage repositories.
*ConfigMap `argocd-rbac-cm` example:*
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
data:
policy.default: ""
policy.csv: |
p, some-github-org:team1, applications, *, project-a/*, allow
p, some-github-org:team2, applications, *, project-b/*, allow
p, role:org-admin, repositories, get, *, allow
p, role:org-admin, repositories, create, *, allow
p, role:org-admin, repositories, update, *, allow
p, role:org-admin, repositories, delete, *, allow
g, some-github-org:team1, role:org-admin
g, some-github-org:team2, role:org-admin
```
## Project Roles
Projects include a feature called roles that enable automated access to a project's applications.
These can be used to give a CI pipeline a restricted set of permissions. For example, a CI system
may only be able to sync a single app (but not change its source or destination).
Projects can have multiple roles, and those roles can have different access granted to them. These
permissions are called policies, and they are stored within the role as a list of policy strings.
A role's policies can only grant access to that role and are limited to applications within the role's
project. However, a policy can also grant wildcard access to any application
within a project.
In order to create roles in a project and add policies to a role, a user will need permission to
update a project. The following commands can be used to manage a role.
```bash
argocd proj role list
argocd proj role get
argocd proj role create
argocd proj role delete
argocd proj role add-policy
argocd proj role remove-policy
```
Project roles by themselves are not useful without generating a token to associate with the role. ArgoCD
supports JWT tokens as the means to authenticate to a role. Since the JWT token is
associated with a role's policies, any changes to the role's policies will immediately take effect
for that JWT token.
The following commands are used to manage the JWT tokens.
```bash
argocd proj role create-token PROJECT ROLE-NAME
argocd proj role delete-token PROJECT ROLE-NAME ISSUED-AT
```
Since the JWT tokens aren't stored in ArgoCD, they can only be retrieved when they are created. A
user can leverage them in the cli by either passing them in using the `--auth-token` flag or setting
the ARGOCD_AUTH_TOKEN environment variable. The JWT tokens can be used until they expire or are
revoked. The JWT tokens can be created with or without an expiration; the default on the CLI is to
create them without an expiration date. Even if a token has not expired, it cannot be used if
the token has been revoked.
Below is an example of leveraging a JWT token to access a guestbook application. It makes the
assumption that the user already has a project named myproject and an application called
guestbook-default.
```bash
PROJ=myproject
APP=guestbook-default
ROLE=get-role
argocd proj role create $PROJ $ROLE
argocd proj role create-token $PROJ $ROLE -e 10m
JWT=<value from command above>
argocd proj role list $PROJ
argocd proj role get $PROJ $ROLE
# This command will fail because the JWT Token associated with the project role does not have a policy to allow access to the application
argocd app get $APP --auth-token $JWT
# Adding a policy to grant access to the application for the new role
argocd proj role add-policy $PROJ $ROLE --action get --permission allow --object $APP
argocd app get $APP --auth-token $JWT
# Removing the policy we added and adding one with a wildcard.
argocd proj role remove-policy $PROJ $ROLE -a get -o $APP
argocd proj role add-policy $PROJ $ROLE --action get --permission allow --object '*'
# The wildcard policy now allows access to the application.
argocd app get $APP --auth-token $JWT
argocd proj role get $PROJ
argocd proj role get $PROJ $ROLE
# Revoking the JWT token
argocd proj role delete-token $PROJ $ROLE <id field from the last command>
# This will fail since the JWT Token was deleted for the project role.
argocd app get $APP --auth-token $JWT
```

View File

@@ -1,40 +0,0 @@
# RBAC
## Overview
The RBAC feature enables restriction of access to ArgoCD resources. ArgoCD does not have its own
user management system and has only one built-in user `admin`. The `admin` user is a superuser and
it has unrestricted access to the system. RBAC requires [SSO configuration](./sso.md). Once SSO is
configured, additional RBAC roles can be defined, and SSO groups can then be mapped to roles.
## Configure RBAC
RBAC configuration allows defining roles and groups. ArgoCD has two pre-defined roles:
* `role:readonly` - read-only access to all resources
* `role:admin` - unrestricted access to all resources
These role definitions can be seen in [builtin-policy.csv](../util/rbac/builtin-policy.csv)
Additional roles and groups can be configured in `argocd-rbac-cm` ConfigMap. The example below
configures a custom role, named `org-admin`. The role is assigned to any user which belongs to
`your-github-org:your-team` group. All other users get the default policy of `role:readonly`,
which cannot modify ArgoCD settings.
*ConfigMap `argocd-rbac-cm` example:*
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
data:
policy.default: role:readonly
policy.csv: |
p, role:org-admin, applications, *, */*, allow
p, role:org-admin, clusters, get, *, allow
p, role:org-admin, repositories, get, *, allow
p, role:org-admin, repositories, create, *, allow
p, role:org-admin, repositories, update, *, allow
p, role:org-admin, repositories, delete, *, allow
g, your-github-org:your-team, role:org-admin
```

View File

@@ -1,61 +0,0 @@
# Resource Hooks
## Overview
Hooks are ways to interject custom logic before, during, and after a Sync operation. Some use cases
for hooks are:
* Using a `PreSync` hook to perform a database schema migration before deploying a new version of the app.
* Using a `Sync` hook to orchestrate a complex deployment requiring more sophistication than the
kubernetes rolling update strategy (e.g. a blue/green deployment).
* Using a `PostSync` hook to run integration and health checks after a deployment.
## Usage
Hooks are simply Kubernetes manifests annotated with the `argocd.argoproj.io/hook` annotation. To
make use of hooks, add the annotation to any resource:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
generateName: schema-migrate-
annotations:
argocd.argoproj.io/hook: PreSync
```
During a Sync operation, ArgoCD will create the resource during the appropriate stage of the
deployment. Hooks can be any type of Kubernetes resource kind, but tend to be most useful as
[Kubernetes Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/)
or [Argo Workflows](https://github.com/argoproj/argo). Multiple hooks can be specified as a comma
separated list.
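For example (an illustrative fragment), a resource that should participate in both the `Sync` and `PostSync` stages would carry a comma separated list:

```yaml
metadata:
  annotations:
    argocd.argoproj.io/hook: Sync,PostSync
```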
## Available Hooks
The following hooks are defined:
| Hook | Description |
|------|-------------|
| `PreSync` | Executes prior to the apply of the manifests. |
| `Sync` | Executes after all `PreSync` hooks completed and were successful. Occurs in conjunction with the apply of the manifests. |
| `Skip` | Indicates to ArgoCD to skip the apply of the manifest. This is typically used in conjunction with a `Sync` hook which is presumably handling the deployment in an alternate way (e.g. blue-green deployment) |
| `PostSync` | Executes after all `Sync` hooks completed and were successful, a successful apply, and all resources in a `Healthy` state. |
## Hook Deletion Policies
Hooks can be deleted in an automatic fashion using the annotation: `argocd.argoproj.io/hook-delete-policy`.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
generateName: integration-test-
annotations:
argocd.argoproj.io/hook: PostSync
argocd.argoproj.io/hook-delete-policy: OnSuccess
```
The following policies define when the hook will be deleted.
| Policy | Description |
|--------|-------------|
| `OnSuccess` | The hook resource is deleted after the hook succeeded (e.g. Job/Workflow completed successfully). |
| `OnFailure` | The hook resource is deleted after the hook failed. |

View File

@@ -3,9 +3,9 @@
## Overview
ArgoCD embeds and bundles [Dex](https://github.com/coreos/dex) as part of its installation, for the
purpose of delegating authentication to an external identity provider. Multiple types of identity
purposes of delegating authentication to an external identity provider. Multiple types of identity
providers are supported (OIDC, SAML, LDAP, GitHub, etc...). SSO configuration of ArgoCD requires
editing the `argocd-cm` ConfigMap with
editing the `argocd-cm` ConfigMap with a
[Dex connector](https://github.com/coreos/dex/tree/master/Documentation/connectors) settings.
This document describes how to configure ArgoCD SSO using GitHub (OAuth2) as an example, but the
@@ -35,7 +35,7 @@ kubectl edit configmap argocd-cm
[GitHub connector](https://github.com/coreos/dex/blob/master/Documentation/connectors/github.md)
documentation for explanation of the fields. A minimal config should populate the clientID,
clientSecret generated in Step 1.
* You will very likely want to restrict logins to one or more GitHub organization. In the
* You will very likely want to restrict logins to one ore more GitHub organization. In the
`connectors.config.orgs` list, add one or more GitHub organizations. Any member of the org will
then be able to login to ArgoCD to perform management tasks.
@@ -45,39 +45,16 @@ data:
dex.config: |
connectors:
# GitHub example
- type: github
id: github
name: GitHub
config:
clientID: aabbccddeeff00112233
clientSecret: $dex.github.clientSecret
orgs:
- name: your-github-org
# GitHub enterprise example
- type: github
id: acme-github
name: Acme GitHub
config:
hostName: github.acme.com
clientID: abcdefghijklmnopqrst
clientSecret: $dex.acme.clientSecret
orgs:
- name: your-github-org
# OIDC example (e.g. Okta)
- type: oidc
id: okta
name: Okta
config:
issuer: https://dev-123456.oktapreview.com
clientID: aaaabbbbccccddddeee
clientSecret: $dex.okta.clientSecret
- type: github
id: github
name: GitHub
config:
clientID: 5aae0fcec2c11634be8c
clientSecret: c6fcb18177869174bd09be2c51259fb049c9d4e5
orgs:
- name: your-github-org
```
After saving, the changes should take effect automatically.
NOTES:
* Any values which start with '$' will look to a key in argocd-secret of the same name (minus the $),
to obtain the actual value. This allows you to store the `clientSecret` as a kubernetes secret.
@@ -85,3 +62,11 @@ NOTES:
ArgoCD will automatically use the correct `redirectURI` for any OAuth2 connectors, to match the
correct external callback URL (e.g. https://argocd.example.com/api/dex/callback)
### 3. Restart ArgoCD for changes to take effect
Any changes to the `argocd-cm` ConfigMap or `argocd-secret` Secret, currently require a restart of
the ArgoCD API server for the settings to take effect. Delete the `argocd-server` pod to force a
restart. [Issue #174](https://github.com/argoproj/argo-cd/issues/174) will address this limitation.
```
kubectl delete pod -l app=argocd-server
```

View File

@@ -1,46 +1,41 @@
# Tracking and Deployment Strategies
An ArgoCD application spec provides several different ways of tracking kubernetes resource manifests in
git. This document describes the different techniques and the means of deploying those manifests to
the target environment.
An ArgoCD application spec provides several different ways of tracking kubernetes resource manifests in git. This document describes the different techniques and the means of deploying those manifests to the target environment.
## HEAD / Branch Tracking
## Branch Tracking
If a branch name, or a symbolic reference (like HEAD) is specified, ArgoCD will continually compare
live state against the resource manifests defined at the tip of the specified branch or the
deferenced commit of the symbolic reference.
If a branch name is specified, ArgoCD will continually compare live state against the resource manifests defined at the tip of the specified branch.
To redeploy an application, a user makes changes to the manifests, and commits/pushes the
changes to the tracked branch/symbolic reference, which will then be detected by the ArgoCD controller.
To redeploy an application, a user makes changes to the manifests, and commits/pushes the changes to the tracked branch, which will then be detected by the ArgoCD controller.
## Tag Tracking
If a tag is specified, the manifests at the specified git tag will be used to perform the sync
comparison. This provides some advantages over branch tracking in that a tag is generally considered
more stable, and less frequently updated, with some manual judgement of what constitutes a tag.
If a tag is specified, the manifests at the specified git tag will be used to perform the sync comparison. This provides some advantages over branch tracking in that a tag is generally considered more stable, and less frequently updated, with some manual judgement of what constitutes a tag.
To redeploy an application, the user uses git to change the meaning of a tag by retagging it to a
different commit SHA. ArgoCD will detect the new meaning of the tag when performing the
comparison/sync.
To redeploy an application, the user uses git to change the meaning of a tag by retagging it to a different commit SHA. ArgoCD will detect the new meaning of the tag when performing the comparison/sync.
## Commit Pinning
If a git commit SHA is specified, the application is effectively pinned to the manifests defined at
the specified commit. This is the most restrictive of the techniques and is typically used to
control production environments.
If a git commit SHA is specified, the application is effectively pinned to the manifests defined at the specified commit. This is the most restrictive of the techniques and is typically used to control production environments.
Since commit SHAs cannot change meaning, the only way to change the live state of an application
which is pinned to a commit, is by updating the tracking revision in the application to a different
commit containing the new manifests. Note that [parameter overrides](parameters.md) can still be set
on an application which is pinned to a revision.
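To make the three strategies concrete, they differ only in what the application's revision field points at (a sketch; the field name `targetRevision` is an assumption for this version of the spec):

```yaml
spec:
  source:
    targetRevision: HEAD      # branch / symbolic-ref tracking
    # targetRevision: v1.2.0  # tag tracking
    # targetRevision: 0fbf8a1 # commit pinning
```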
## Automated Sync
In all tracking strategies, the application has the option to sync automatically. If [auto-sync](auto_sync.md)
is configured, the new resource manifests will be applied automatically -- as soon as a difference
is detected between the target state (git) and live state. If auto-sync is disabled, a manual sync
will be needed using the Argo UI, CLI, or API.
Since commit SHAs cannot change meaning, the only way to change the live state of an application which is pinned to a commit, is by updating the tracking revision in the application to a different commit containing the new manifests.
Note that parameter overrides can still be made against an application which is pinned to a revision.
## Parameter Overrides
Note that in all tracking strategies, any [parameter overrides](parameters.md) set in the
application instance take precedence over the git state.
ArgoCD provides means to override the parameters of a ksonnet app. This gives some extra flexibility in having *some* parts of the k8s manifests determined dynamically. It also serves as an alternative way of redeploying an application by changing application parameters via ArgoCD, instead of making the changes to the manifests in git.
The following is an example of where this would be useful: A team maintains a "dev" environment, which needs to be continually updated with the latest version of their guestbook application after every build in the tip of master. To address this use case, the ksonnet application should expose a parameter named `image`, whose value used in the `dev` environment contains a placeholder value (e.g. `example/guestbook:replaceme`), intended to be set externally (outside of git), such as by a build system. As part of the build pipeline, the parameter value of the `image` would be continually updated to the freshly built image (e.g. `example/guestbook:abcd123`). A sync operation would result in the application being redeployed with the new image.
ArgoCD provides these operations conveniently via the CLI, or alternatively via the gRPC/REST API.
```
$ argocd app set guestbook -p guestbook=image=example/guestbook:abcd123
$ argocd app sync guestbook
```
Note that in all tracking strategies, any parameter overrides set in the application instance will be honored.
## [Auto-Sync](https://github.com/argoproj/argo-cd/issues/79) (Not Yet Implemented)
In all tracking strategies, the application will have the option to sync automatically. If auto-sync is configured, the new resource manifests will be applied automatically -- as soon as a difference is detected between the target state (git) and live state. If auto-sync is disabled, a manual sync will be needed using the Argo UI, CLI, or API.

View File

@@ -60,4 +60,11 @@ stringData:
```
After saving, the changes should take effect automatically.
### 3. Restart ArgoCD for changes to take effect
Any changes to the `argocd-cm` ConfigMap or `argocd-secret` Secret, currently require a restart of
the ArgoCD API server for the settings to take effect. Delete the `argocd-server` pod to force a
restart. [Issue #174](https://github.com/argoproj/argo-cd/issues/174) will address this limitation.
```
kubectl delete pod -l app=argocd-server
```

View File

@@ -24,6 +24,6 @@ local appDeployment = deployment
container
.new(params.name, params.image)
.withPorts(containerPort.new(targetPort)) + if params.command != null then { command: [ params.command ] } else {},
labels).withProgressDeadlineSeconds(5);
labels);
k.core.v1.list.new([appService, appDeployment])

View File

@@ -1,4 +1,4 @@
#! /usr/bin/env bash
#!/bin/bash
# This script auto-generates protobuf related files. It is intended to be run manually when either
# API types are added/modified, or server gRPC calls are added. The generated files should then
@@ -55,8 +55,6 @@ GOPROTOBINARY=gogofast
# protoc-gen-grpc-gateway is used to build <service>.pb.gw.go files from from .proto files
go build -i -o dist/protoc-gen-grpc-gateway ./vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
# protoc-gen-swagger is used to build swagger.json
go build -i -o dist/protoc-gen-swagger ./vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
# Generate server/<service>/(<service>.pb.go|<service>.pb.gw.go)
PROTO_FILES=$(find $PROJECT_ROOT \( -name "*.proto" -and -path '*/server/*' -or -path '*/reposerver/*' -and -name "*.proto" \))
@@ -64,8 +62,8 @@ for i in ${PROTO_FILES}; do
# Path to the google API gateway annotations.proto will be different depending if we are
# building natively (e.g. from workspace) vs. part of a docker build.
if [ -f /.dockerenv ]; then
GOOGLE_PROTO_API_PATH=$GOPATH/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis
GOGO_PROTOBUF_PATH=$GOPATH/src/github.com/gogo/protobuf
GOOGLE_PROTO_API_PATH=/root/go/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis
GOGO_PROTOBUF_PATH=/root/go/src/github.com/gogo/protobuf
else
GOOGLE_PROTO_API_PATH=${PROJECT_ROOT}/vendor/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis
GOGO_PROTOBUF_PATH=${PROJECT_ROOT}/vendor/github.com/gogo/protobuf
@@ -79,44 +77,5 @@ for i in ${PROTO_FILES}; do
-I${GOGO_PROTOBUF_PATH} \
--${GOPROTOBINARY}_out=plugins=grpc:$GOPATH/src \
--grpc-gateway_out=logtostderr=true:$GOPATH/src \
--swagger_out=logtostderr=true:. \
$i
done
# collect_swagger gathers swagger files into a subdirectory
collect_swagger() {
SWAGGER_ROOT="$1"
EXPECTED_COLLISIONS="$2"
SWAGGER_OUT="${SWAGGER_ROOT}/swagger.json"
PRIMARY_SWAGGER=`mktemp`
COMBINED_SWAGGER=`mktemp`
cat <<EOF > "${PRIMARY_SWAGGER}"
{
"swagger": "2.0",
"info": {
"title": "Consolidate Services",
"description": "Description of all APIs",
"version": "version not set"
},
"paths": {}
}
EOF
/bin/rm -f "${SWAGGER_OUT}"
/usr/bin/find "${SWAGGER_ROOT}" -name '*.swagger.json' -exec /usr/local/bin/swagger mixin -c "${EXPECTED_COLLISIONS}" "${PRIMARY_SWAGGER}" '{}' \+ > "${COMBINED_SWAGGER}"
/usr/local/bin/jq -r 'del(.definitions[].properties[]? | select(."$ref"!=null and .description!=null).description) | del(.definitions[].properties[]? | select(."$ref"!=null and .title!=null).title)' "${COMBINED_SWAGGER}" > "${SWAGGER_OUT}"
/bin/rm "${PRIMARY_SWAGGER}" "${COMBINED_SWAGGER}"
}
# clean up generated swagger files (should come after collect_swagger)
clean_swagger() {
SWAGGER_ROOT="$1"
/usr/bin/find "${SWAGGER_ROOT}" -name '*.swagger.json' -delete
}
collect_swagger server 21
clean_swagger server
clean_swagger reposerver

View File

@@ -0,0 +1,168 @@
package main
import (
"context"
"fmt"
"hash/fnv"
"log"
"os"
"strings"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/server/repository"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/git"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
)
// origRepoURLToSecretName hashes repo URL to the secret name using a formula.
// Part of the original repo name is incorporated for debugging purposes
func origRepoURLToSecretName(repo string) string {
repo = git.NormalizeGitURL(repo)
h := fnv.New32a()
_, _ = h.Write([]byte(repo))
parts := strings.Split(strings.TrimSuffix(repo, ".git"), "/")
return fmt.Sprintf("repo-%s-%v", strings.ToLower(parts[len(parts)-1]), h.Sum32())
}
// repoURLToSecretName hashes repo URL to the secret name using a formula.
// Part of the original repo name is incorporated for debugging purposes
func repoURLToSecretName(repo string) string {
repo = strings.ToLower(git.NormalizeGitURL(repo))
h := fnv.New32a()
_, _ = h.Write([]byte(repo))
parts := strings.Split(strings.TrimSuffix(repo, ".git"), "/")
return fmt.Sprintf("repo-%s-%v", parts[len(parts)-1], h.Sum32())
}
// renameSecret renames a Kubernetes secret in a given namespace.
func renameSecret(namespace, oldName, newName string) {
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
loadingRules.DefaultClientConfig = &clientcmd.DefaultClientConfig
overrides := clientcmd.ConfigOverrides{}
clientConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, &overrides)
log.Printf("Renaming secret %q to %q in namespace %q\n", oldName, newName, namespace)
config, err := clientConfig.ClientConfig()
if err != nil {
log.Println("Could not retrieve client config: ", err)
return
}
kubeclientset := kubernetes.NewForConfigOrDie(config)
repoSecret, err := kubeclientset.CoreV1().Secrets(namespace).Get(oldName, metav1.GetOptions{})
if err != nil {
log.Println("Could not retrieve old secret: ", err)
return
}
repoSecret.ObjectMeta.Name = newName
repoSecret.ObjectMeta.ResourceVersion = ""
repoSecret, err = kubeclientset.CoreV1().Secrets(namespace).Create(repoSecret)
if err != nil {
log.Println("Could not create new secret: ", err)
return
}
err = kubeclientset.CoreV1().Secrets(namespace).Delete(oldName, &metav1.DeleteOptions{})
if err != nil {
log.Println("Could not remove old secret: ", err)
}
}
// renameRepositorySecrets ensures that repository secrets use the new naming format.
func renameRepositorySecrets(clientOpts argocdclient.ClientOptions, namespace string) {
conn, repoIf := argocdclient.NewClientOrDie(&clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
repos, err := repoIf.List(context.Background(), &repository.RepoQuery{})
if err != nil {
log.Println("An error occurred, so skipping secret renaming: ", err)
return
}
log.Println("Renaming repository secrets...")
for _, repo := range repos.Items {
oldSecretName := origRepoURLToSecretName(repo.Repo)
newSecretName := repoURLToSecretName(repo.Repo)
if oldSecretName != newSecretName {
log.Printf("Repo %q had its secret name change, so updating\n", repo.Repo)
renameSecret(namespace, oldSecretName, newSecretName)
}
}
}
/*
// PopulateAppDestinations ensures that apps have a Server and Namespace set explicitly.
func populateAppDestinations(clientOpts argocdclient.ClientOptions) {
conn, appIf := argocdclient.NewClientOrDie(&clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
apps, err := appIf.List(context.Background(), &application.ApplicationQuery{})
if err != nil {
log.Println("An error occurred, so skipping destination population: ", err)
return
}
log.Println("Populating app Destination fields")
for _, app := range apps.Items {
changed := false
log.Printf("Ensuring destination field is populated on app %q\n", app.ObjectMeta.Name)
if app.Spec.Destination.Server == "" {
if app.Status.ComparisonResult.Status == appv1.ComparisonStatusUnknown || app.Status.ComparisonResult.Status == appv1.ComparisonStatusError {
log.Printf("App %q was missing Destination.Server, but could not fill it in: %s", app.ObjectMeta.Name, app.Status.ComparisonResult.Status)
} else {
log.Printf("App %q was missing Destination.Server, so setting to %q\n", app.ObjectMeta.Name, app.Status.ComparisonResult.Server)
app.Spec.Destination.Server = app.Status.ComparisonResult.Server
changed = true
}
}
if app.Spec.Destination.Namespace == "" {
if app.Status.ComparisonResult.Status == appv1.ComparisonStatusUnknown || app.Status.ComparisonResult.Status == appv1.ComparisonStatusError {
log.Printf("App %q was missing Destination.Namespace, but could not fill it in: %s", app.ObjectMeta.Name, app.Status.ComparisonResult.Status)
} else {
log.Printf("App %q was missing Destination.Namespace, so setting to %q\n", app.ObjectMeta.Name, app.Status.ComparisonResult.Namespace)
app.Spec.Destination.Namespace = app.Status.ComparisonResult.Namespace
changed = true
}
}
if changed {
_, err = appIf.UpdateSpec(context.Background(), &application.ApplicationSpecRequest{
AppName: app.Name,
Spec: &app.Spec,
})
if err != nil {
log.Println("An error occurred (but continuing anyway): ", err)
}
}
}
}
*/
func main() {
if len(os.Args) < 3 {
log.Fatalf("USAGE: %s SERVER NAMESPACE\n", os.Args[0])
}
server, namespace := os.Args[1], os.Args[2]
log.Printf("Using argocd server %q and namespace %q\n", server, namespace)
isLocalhost := false
switch {
case strings.HasPrefix(server, "localhost:"):
isLocalhost = true
case strings.HasPrefix(server, "127.0.0.1:"):
isLocalhost = true
}
clientOpts := argocdclient.ClientOptions{
ServerAddr: server,
Insecure: true,
PlainText: isLocalhost,
}
renameRepositorySecrets(clientOpts, namespace)
//populateAppDestinations(clientOpts)
}

View File

@@ -1,22 +0,0 @@
#!/bin/sh
SRCROOT="$( cd "$(dirname "$0")/.." ; pwd -P )"
AUTOGENMSG="# This is an auto-generated file. DO NOT EDIT"
update_image () {
if [ ! -z "${IMAGE_NAMESPACE}" ]; then
sed -i '' 's| image: \(.*\)/\(argocd-.*\)| image: '"${IMAGE_NAMESPACE}"'/\2|g' ${1}
fi
if [ ! -z "${IMAGE_TAG}" ]; then
sed -i '' 's|\( image: .*/argocd-.*\)\:.*|\1:'"${IMAGE_TAG}"'|g' ${1}
fi
}
echo "${AUTOGENMSG}" > ${SRCROOT}/manifests/install.yaml
kustomize build ${SRCROOT}/manifests/cluster-install >> ${SRCROOT}/manifests/install.yaml
update_image ${SRCROOT}/manifests/install.yaml
echo "${AUTOGENMSG}" > ${SRCROOT}/manifests/namespace-install.yaml
kustomize build ${SRCROOT}/manifests/base >> ${SRCROOT}/manifests/namespace-install.yaml
update_image ${SRCROOT}/manifests/namespace-install.yaml

379
install/install.go Normal file
View File

@@ -0,0 +1,379 @@
package install
import (
"fmt"
"strings"
"syscall"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/util/diff"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/password"
"github.com/argoproj/argo-cd/util/session"
"github.com/argoproj/argo-cd/util/settings"
tlsutil "github.com/argoproj/argo-cd/util/tls"
"github.com/ghodss/yaml"
"github.com/gobuffalo/packr"
log "github.com/sirupsen/logrus"
"github.com/yudai/gojsondiff/formatter"
"golang.org/x/crypto/ssh/terminal"
appsv1beta2 "k8s.io/api/apps/v1beta2"
apiv1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/client-go/discovery"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
var (
// These values will be overridden by the link flags during build
// (e.g. imageTag will use the official release tag on tagged builds)
imageNamespace = "argoproj"
imageTag = "latest"
// Default namespace and image names which `argocd install` uses during install
DefaultInstallNamespace = "argocd"
DefaultControllerImage = imageNamespace + "/argocd-application-controller:" + imageTag
DefaultUIImage = imageNamespace + "/argocd-ui:" + imageTag
DefaultServerImage = imageNamespace + "/argocd-server:" + imageTag
DefaultRepoServerImage = imageNamespace + "/argocd-repo-server:" + imageTag
)
// InstallOptions stores a collection of installation settings.
type InstallOptions struct {
DryRun bool
Upgrade bool
UpdateSuperuser bool
UpdateSignature bool
SuperuserPassword string
Namespace string
ControllerImage string
UIImage string
ServerImage string
RepoServerImage string
ImagePullPolicy string
}
type Installer struct {
InstallOptions
box packr.Box
config *rest.Config
dynClientPool dynamic.ClientPool
disco discovery.DiscoveryInterface
}
func NewInstaller(config *rest.Config, opts InstallOptions) (*Installer, error) {
shallowCopy := *config
inst := Installer{
InstallOptions: opts,
box: packr.NewBox("./manifests"),
config: &shallowCopy,
}
if opts.Namespace == "" {
inst.Namespace = DefaultInstallNamespace
}
var err error
inst.dynClientPool = dynamic.NewDynamicClientPool(inst.config)
inst.disco, err = discovery.NewDiscoveryClientForConfig(inst.config)
if err != nil {
return nil, err
}
return &inst, nil
}
func (i *Installer) Install() {
i.InstallNamespace()
i.InstallApplicationCRD()
i.InstallSettings()
i.InstallApplicationController()
i.InstallArgoCDServer()
i.InstallArgoCDRepoServer()
}
func (i *Installer) Uninstall(deleteNamespace, deleteCRD bool) {
manifests := i.box.List()
for _, manifestPath := range manifests {
if strings.HasSuffix(manifestPath, ".yaml") || strings.HasSuffix(manifestPath, ".yml") {
var obj unstructured.Unstructured
i.unmarshalManifest(manifestPath, &obj)
switch strings.ToLower(obj.GetKind()) {
case "namespace":
if !deleteNamespace {
log.Infof("Skipped deletion of Namespace: '%s'", obj.GetName())
continue
}
case "customresourcedefinition":
if !deleteCRD {
log.Infof("Skipped deletion of CustomResourceDefinition: '%s'", obj.GetName())
continue
}
}
i.MustUninstallResource(&obj)
}
}
}
func (i *Installer) InstallNamespace() {
if i.Namespace != DefaultInstallNamespace {
// don't create namespace if a different one was supplied
return
}
var namespace apiv1.Namespace
i.unmarshalManifest("00_namespace.yaml", &namespace)
namespace.ObjectMeta.Name = i.Namespace
i.MustInstallResource(kube.MustToUnstructured(&namespace))
}
func (i *Installer) InstallApplicationCRD() {
var applicationCRD apiextensionsv1beta1.CustomResourceDefinition
i.unmarshalManifest("01_application-crd.yaml", &applicationCRD)
i.MustInstallResource(kube.MustToUnstructured(&applicationCRD))
}
func (i *Installer) InstallSettings() {
kubeclientset, err := kubernetes.NewForConfig(i.config)
errors.CheckError(err)
settingsMgr := settings.NewSettingsManager(kubeclientset, i.Namespace)
cdSettings, err := settingsMgr.GetSettings()
if err != nil {
if apierr.IsNotFound(err) {
cdSettings = &settings.ArgoCDSettings{}
} else {
log.Fatal(err)
}
}
if cdSettings.ServerSignature == nil || i.UpdateSignature {
// set JWT signature
signature, err := session.MakeSignature(32)
errors.CheckError(err)
cdSettings.ServerSignature = signature
}
if cdSettings.LocalUsers == nil {
cdSettings.LocalUsers = make(map[string]string)
}
if _, ok := cdSettings.LocalUsers[common.ArgoCDAdminUsername]; !ok || i.UpdateSuperuser {
passwordRaw := i.SuperuserPassword
if passwordRaw == "" {
passwordRaw = readAndConfirmPassword()
}
hashedPassword, err := password.HashPassword(passwordRaw)
errors.CheckError(err)
cdSettings.LocalUsers = map[string]string{
common.ArgoCDAdminUsername: hashedPassword,
}
}
if cdSettings.Certificate == nil {
// generate TLS cert
hosts := []string{
"localhost",
"argocd-server",
fmt.Sprintf("argocd-server.%s", i.Namespace),
fmt.Sprintf("argocd-server.%s.svc", i.Namespace),
fmt.Sprintf("argocd-server.%s.svc.cluster.local", i.Namespace),
}
certOpts := tlsutil.CertOptions{
Hosts: hosts,
Organization: "Argo CD",
IsCA: true,
}
cert, err := tlsutil.GenerateX509KeyPair(certOpts)
errors.CheckError(err)
cdSettings.Certificate = cert
}
err = settingsMgr.SaveSettings(cdSettings)
errors.CheckError(err)
}
func readAndConfirmPassword() string {
for {
fmt.Print("*** Enter an admin password: ")
password, err := terminal.ReadPassword(syscall.Stdin)
errors.CheckError(err)
fmt.Print("\n")
fmt.Print("*** Confirm the admin password: ")
confirmPassword, err := terminal.ReadPassword(syscall.Stdin)
errors.CheckError(err)
fmt.Print("\n")
if string(password) == string(confirmPassword) {
return string(password)
}
log.Error("Passwords do not match")
}
}
func (i *Installer) InstallApplicationController() {
var applicationControllerServiceAccount apiv1.ServiceAccount
var applicationControllerRole rbacv1.Role
var applicationControllerRoleBinding rbacv1.RoleBinding
var applicationControllerDeployment appsv1beta2.Deployment
i.unmarshalManifest("03a_application-controller-sa.yaml", &applicationControllerServiceAccount)
i.unmarshalManifest("03b_application-controller-role.yaml", &applicationControllerRole)
i.unmarshalManifest("03c_application-controller-rolebinding.yaml", &applicationControllerRoleBinding)
i.unmarshalManifest("03d_application-controller-deployment.yaml", &applicationControllerDeployment)
applicationControllerDeployment.Spec.Template.Spec.Containers[0].Image = i.ControllerImage
applicationControllerDeployment.Spec.Template.Spec.Containers[0].ImagePullPolicy = apiv1.PullPolicy(i.ImagePullPolicy)
i.MustInstallResource(kube.MustToUnstructured(&applicationControllerServiceAccount))
i.MustInstallResource(kube.MustToUnstructured(&applicationControllerRole))
i.MustInstallResource(kube.MustToUnstructured(&applicationControllerRoleBinding))
i.MustInstallResource(kube.MustToUnstructured(&applicationControllerDeployment))
}
func (i *Installer) InstallArgoCDServer() {
var argoCDServerServiceAccount apiv1.ServiceAccount
var argoCDServerControllerRole rbacv1.Role
var argoCDServerControllerRoleBinding rbacv1.RoleBinding
var argoCDServerControllerDeployment appsv1beta2.Deployment
var argoCDServerService apiv1.Service
i.unmarshalManifest("04a_argocd-server-sa.yaml", &argoCDServerServiceAccount)
i.unmarshalManifest("04b_argocd-server-role.yaml", &argoCDServerControllerRole)
i.unmarshalManifest("04c_argocd-server-rolebinding.yaml", &argoCDServerControllerRoleBinding)
i.unmarshalManifest("04d_argocd-server-deployment.yaml", &argoCDServerControllerDeployment)
i.unmarshalManifest("04e_argocd-server-service.yaml", &argoCDServerService)
argoCDServerControllerDeployment.Spec.Template.Spec.InitContainers[0].Image = i.ServerImage
argoCDServerControllerDeployment.Spec.Template.Spec.InitContainers[0].ImagePullPolicy = apiv1.PullPolicy(i.ImagePullPolicy)
argoCDServerControllerDeployment.Spec.Template.Spec.InitContainers[1].Image = i.UIImage
argoCDServerControllerDeployment.Spec.Template.Spec.InitContainers[1].ImagePullPolicy = apiv1.PullPolicy(i.ImagePullPolicy)
argoCDServerControllerDeployment.Spec.Template.Spec.Containers[0].Image = i.ServerImage
argoCDServerControllerDeployment.Spec.Template.Spec.Containers[0].ImagePullPolicy = apiv1.PullPolicy(i.ImagePullPolicy)
i.MustInstallResource(kube.MustToUnstructured(&argoCDServerServiceAccount))
i.MustInstallResource(kube.MustToUnstructured(&argoCDServerControllerRole))
i.MustInstallResource(kube.MustToUnstructured(&argoCDServerControllerRoleBinding))
i.MustInstallResource(kube.MustToUnstructured(&argoCDServerControllerDeployment))
i.MustInstallResource(kube.MustToUnstructured(&argoCDServerService))
}
func (i *Installer) InstallArgoCDRepoServer() {
var argoCDRepoServerControllerDeployment appsv1beta2.Deployment
var argoCDRepoServerService apiv1.Service
i.unmarshalManifest("05a_argocd-repo-server-deployment.yaml", &argoCDRepoServerControllerDeployment)
i.unmarshalManifest("05b_argocd-repo-server-service.yaml", &argoCDRepoServerService)
argoCDRepoServerControllerDeployment.Spec.Template.Spec.Containers[0].Image = i.RepoServerImage
argoCDRepoServerControllerDeployment.Spec.Template.Spec.Containers[0].ImagePullPolicy = apiv1.PullPolicy(i.ImagePullPolicy)
i.MustInstallResource(kube.MustToUnstructured(&argoCDRepoServerControllerDeployment))
i.MustInstallResource(kube.MustToUnstructured(&argoCDRepoServerService))
}
func (i *Installer) unmarshalManifest(fileName string, obj interface{}) {
yamlBytes, err := i.box.MustBytes(fileName)
errors.CheckError(err)
err = yaml.Unmarshal(yamlBytes, obj)
errors.CheckError(err)
}
func (i *Installer) MustInstallResource(obj *unstructured.Unstructured) *unstructured.Unstructured {
obj, err := i.InstallResource(obj)
errors.CheckError(err)
return obj
}
func isNamespaced(obj *unstructured.Unstructured) bool {
switch obj.GetKind() {
case "Namespace", "ClusterRole", "ClusterRoleBinding", "CustomResourceDefinition":
return false
}
return true
}
// InstallResource creates or updates a resource. If the installed resource is already up-to-date, it does nothing
func (i *Installer) InstallResource(obj *unstructured.Unstructured) (*unstructured.Unstructured, error) {
if isNamespaced(obj) {
obj.SetNamespace(i.Namespace)
}
// remove 'creationTimestamp' and 'status' fields from the object so that they do not affect the diff
obj.SetCreationTimestamp(metav1.Time{})
delete(obj.Object, "status")
if i.DryRun {
printYAML(obj)
return nil, nil
}
gvk := obj.GroupVersionKind()
dclient, err := i.dynClientPool.ClientForGroupVersionKind(gvk)
if err != nil {
return nil, err
}
apiResource, err := kube.ServerResourceForGroupVersionKind(i.disco, gvk)
if err != nil {
return nil, err
}
reIf := dclient.Resource(apiResource, i.Namespace)
liveObj, err := reIf.Create(obj)
if err == nil {
fmt.Printf("%s '%s' created\n", liveObj.GetKind(), liveObj.GetName())
return liveObj, nil
}
if !apierr.IsAlreadyExists(err) {
return nil, err
}
liveObj, err = reIf.Get(obj.GetName(), metav1.GetOptions{})
if err != nil {
return nil, err
}
diffRes := diff.Diff(obj, liveObj)
if !diffRes.Modified {
fmt.Printf("%s '%s' up-to-date\n", liveObj.GetKind(), liveObj.GetName())
return liveObj, nil
}
if !i.Upgrade {
log.Println(diffRes.ASCIIFormat(obj, formatter.AsciiFormatterConfig{}))
return nil, fmt.Errorf("%s '%s' already exists. Rerun with --upgrade to update", obj.GetKind(), obj.GetName())
}
liveObj, err = reIf.Update(obj)
if err != nil {
return nil, err
}
fmt.Printf("%s '%s' updated\n", liveObj.GetKind(), liveObj.GetName())
return liveObj, nil
}
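// MustUninstallResource deletes the given resource and exits on error.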
func (i *Installer) MustUninstallResource(obj *unstructured.Unstructured) {
err := i.UninstallResource(obj)
errors.CheckError(err)
}
// UninstallResource deletes a resource from the cluster
func (i *Installer) UninstallResource(obj *unstructured.Unstructured) error {
if isNamespaced(obj) {
obj.SetNamespace(i.Namespace)
}
gvk := obj.GroupVersionKind()
dclient, err := i.dynClientPool.ClientForGroupVersionKind(gvk)
if err != nil {
return err
}
apiResource, err := kube.ServerResourceForGroupVersionKind(i.disco, gvk)
if err != nil {
return err
}
reIf := dclient.Resource(apiResource, i.Namespace)
deletePolicy := metav1.DeletePropagationForeground
err = reIf.Delete(obj.GetName(), &metav1.DeleteOptions{PropagationPolicy: &deletePolicy})
if err != nil {
if apierr.IsNotFound(err) {
fmt.Printf("%s '%s' not found\n", obj.GetKind(), obj.GetName())
return nil
}
return err
}
fmt.Printf("%s '%s' deleted\n", obj.GetKind(), obj.GetName())
return nil
}
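// printYAML marshals obj to YAML and prints it as a separate YAML document.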
func printYAML(obj interface{}) {
objBytes, err := yaml.Marshal(obj)
if err != nil {
log.Fatalf("Failed to marshal %v", obj)
}
fmt.Printf("---\n%s\n", string(objBytes))
}
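// A rough sketch of how an installer instance would drive the functions above, shown as a
// commented example because the constructor lives elsewhere in this package and its exact
// signature is an assumption here (NewInstaller and its arguments are illustrative, not the
// package's real API). With DryRun enabled, each Install* call only prints the rendered
// manifests via printYAML instead of creating resources in the cluster:
//
//	installer, err := NewInstaller(kubeConfig, InstallOptions{ // hypothetical constructor/options
//		Namespace: "argocd",
//		DryRun:    true,
//	})
//	errors.CheckError(err)
//	installer.InstallArgoCDServer()
//	installer.InstallArgoCDRepoServer()
//	// ...plus the application-controller and CRD installers defined earlier in this file.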


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: argocd


@@ -0,0 +1,57 @@
# NOTE: the values here are just an example and are not the values used during an install.
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
data:
# url is the externally facing base URL of ArgoCD.
# This field is required when configuring SSO, since ArgoCD uses it as part of the redirectURI for the
# dex connectors. When configuring the application in the SSO provider (e.g. github, okta), the
# authorization callback URL will be url + /api/dex/callback. For example, if ArgoCD's url is
# https://example.com, then the auth callback to set in the SSO provider should be:
# https://example.com/api/dex/callback
url: http://localhost:8080
# dex.config holds the contents of the configuration YAML for the dex OIDC/OAuth2 provider sidecar.
# Only a subset of a full dex config is required, namely the connectors list. ArgoCD will generate
# the complete dex config based on the configured URL, and the known callback endpoint which the
# ArgoCD API server exposes (i.e. /api/dex/callback).
dex.config: |
# connectors is a list of dex connector configurations. For details on available connectors and
# how to configure them, see: https://github.com/coreos/dex/tree/master/Documentation/connectors
# NOTE:
# * Any values which start with '$' will be looked up as a key of the same name in argocd-secret, to
#     obtain the actual value.
# * ArgoCD will automatically set the 'redirectURI' field in any OAuth2 connectors, to match the
# external callback URL (e.g. https://example.com/api/dex/callback)
connectors:
# GitHub example
- type: github
id: github
name: GitHub
config:
clientID: aabbccddeeff00112233
clientSecret: $dex.github.clientSecret
orgs:
- name: your-github-org
# GitHub enterprise example
- type: github
id: acme-github
name: Acme GitHub
config:
hostName: github.acme.com
clientID: abcdefghijklmnopqrst
clientSecret: $dex.acme.clientSecret
orgs:
- name: your-github-org
# OIDC example (e.g. Okta)
- type: oidc
id: okta
name: Okta
config:
issuer: https://dev-123456.oktapreview.com
clientID: aaaabbbbccccddddeee
clientSecret: $dex.okta.clientSecret


@@ -0,0 +1,27 @@
# NOTE: the values in this secret are provided as a working manifest example and are not the values
# used during an install. New values will be generated as part of `argocd install`
apiVersion: v1
kind: Secret
metadata:
name: argocd-secret
namespace: argocd
type: Opaque
stringData:
# bcrypt hash of the string "password"
admin.password: $2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W
# random server signature key for session validation
server.secretkey: aEDvv73vv70F77+9CRBSNu+/vTYQ77+9EUFh77+9LzFyJ++/vXfLsO+/vWRbeu+/ve+/vQ==
# The following keys hold the shared secret for authenticating GitHub/GitLab/BitBucket webhook
# events. To enable webhooks, configure one or more of the following keys with the shared git
# provider webhook secret. The payload URL configured in the git provider should use the
# /api/webhook endpoint of your ArgoCD instance (e.g. https://argocd.example.com/api/webhook)
github.webhook.secret: shhhh! it's a github secret
gitlab.webhook.secret: shhhh! it's a gitlab secret
bitbucket.webhook.uuid: your-bitbucket-uuid
# the following are user-defined keys which are referenced in the example argocd-cm configmap
# as part of the SSO configuration.
dex.github.clientSecret: nv1vx8w4gw5byrflujfkxww6ykh85yq818aorvwy
dex.acme.clientSecret: 5pp7dyre3d5nyk0ree1tr0gd68k18xn94x8lfae9
dex.okta.clientSecret: x41ztv6ufyf07oyoopc6f62p222c00mox2ciquvt


@@ -2,3 +2,4 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: application-controller
namespace: argocd


@@ -2,6 +2,7 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: application-controller-role
namespace: argocd
rules:
- apiGroups:
- ""
@@ -10,14 +11,10 @@ rules:
verbs:
- get
- watch
- list
- patch
- update
- apiGroups:
- argoproj.io
resources:
- applications
- appprojects
verbs:
- create
- get
@@ -26,11 +23,3 @@ rules:
- update
- patch
- delete
- apiGroups:
- ""
resources:
- events
verbs:
- create
- list


@@ -2,6 +2,7 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: application-controller-role-binding
namespace: argocd
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role


@@ -1,7 +1,8 @@
apiVersion: apps/v1
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: application-controller
namespace: argocd
spec:
selector:
matchLabels:
@@ -12,14 +13,7 @@ spec:
app: application-controller
spec:
containers:
- command:
- /argocd-application-controller
- --repo-server
- argocd-repo-server:8081
- --status-processors
- "20"
- --operation-processors
- "10"
- command: [/argocd-application-controller, --repo-server, 'argocd-repo-server:8081']
image: argoproj/argocd-application-controller:latest
name: application-controller
serviceAccountName: application-controller


@@ -2,3 +2,4 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: argocd-server
namespace: argocd


@@ -2,11 +2,33 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: argocd-server-role
namespace: argocd
rules:
- apiGroups:
- ""
resources:
- pods
- pods/exec
- pods/log
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- get
- list
- watch
- update
- patch
- delete
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
@@ -20,7 +42,6 @@ rules:
- argoproj.io
resources:
- applications
- appprojects
verbs:
- create
- get
@@ -29,10 +50,3 @@ rules:
- update
- delete
- patch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- list


@@ -2,6 +2,7 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: argocd-server-role-binding
namespace: argocd
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role


@@ -1,7 +1,8 @@
apiVersion: apps/v1
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: argocd-server
namespace: argocd
spec:
selector:
matchLabels:
@@ -13,6 +14,12 @@ spec:
spec:
serviceAccountName: argocd-server
initContainers:
- name: copyutil
image: argoproj/argocd-server:latest
command: [cp, /argocd-util, /shared]
volumeMounts:
- mountPath: /shared
name: static-files
- name: ui
image: argoproj/argocd-ui:latest
command: [cp, -r, /app, /shared]
@@ -26,12 +33,12 @@ spec:
volumeMounts:
- mountPath: /shared
name: static-files
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 3
periodSeconds: 30
- name: dex
image: quay.io/coreos/dex:v2.10.0
command: [/shared/argocd-util, rundex]
volumeMounts:
- mountPath: /shared
name: static-files
volumes:
- emptyDir: {}
name: static-files


@@ -2,6 +2,7 @@ apiVersion: v1
kind: Service
metadata:
name: argocd-server
namespace: argocd
spec:
ports:
- name: http


@@ -1,7 +1,8 @@
apiVersion: apps/v1
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: argocd-repo-server
namespace: argocd
spec:
selector:
matchLabels:
@@ -11,15 +12,13 @@ spec:
labels:
app: argocd-repo-server
spec:
automountServiceAccountToken: false
containers:
- name: argocd-repo-server
image: argoproj/argocd-repo-server:latest
command: [/argocd-repo-server]
ports:
- containerPort: 8081
readinessProbe:
tcpSocket:
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
- containerPort: 8081
- name: redis
image: redis:3.2.11
ports:
- containerPort: 6379


@@ -2,6 +2,7 @@ apiVersion: v1
kind: Service
metadata:
name: argocd-repo-server
namespace: argocd
spec:
ports:
- port: 8081


@@ -1,16 +0,0 @@
# ArgoCD Installation Manifests
Two sets of installation manifests are provided:
* [install.yaml](install.yaml) - Standard ArgoCD installation with cluster-admin access. Use this
manifest set if you plan to use ArgoCD to deploy applications in the same cluster that ArgoCD runs
in (i.e. kubernetes.default.svc). You will still be able to deploy to external clusters with
provided credentials.
* [namespace-install.yaml](namespace-install.yaml) - Installation of ArgoCD which requires only
namespace-level privileges (does not need cluster roles). Use this manifest set if you do not
need ArgoCD to deploy applications in the same cluster that ArgoCD runs in, and will rely solely
on provided cluster credentials. An example use of this set of manifests is running several
ArgoCD instances for different teams, where each instance deploys applications to external
clusters. It is still possible to deploy to the same cluster (kubernetes.default.svc) with
provided credentials (i.e. `argocd cluster add <CONTEXT> --in-cluster`).


@@ -1,14 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: appprojects.argoproj.io
spec:
group: argoproj.io
names:
kind: AppProject
plural: appprojects
shortNames:
- appproj
- appprojs
scope: Namespaced
version: v1alpha1


@@ -1,23 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
# data:
# # ArgoCD's externally facing base URL. Required for configuring SSO
# # url: https://argo-cd-demo.argoproj.io
#
# # A dex connector configuration. See documentation on how to configure SSO:
# # https://github.com/argoproj/argo-cd/blob/master/docs/sso.md#2-configure-argocd-for-sso
# dex.config: |
# connectors:
# # GitHub example
# - type: github
# id: github
# name: GitHub
# config:
# clientID: aabbccddeeff00112233
# clientSecret: $dex.github.clientSecret
# orgs:
# - name: your-github-org
# teams:
# - red-team


@@ -1,12 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: argocd-metrics
spec:
ports:
- name: http
protocol: TCP
port: 8082
targetPort: 8082
selector:
app: argocd-server


@@ -1,15 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
# data:
# # An RBAC policy .csv file containing additional policy and role definitions.
# # See https://github.com/argoproj/argo-cd/blob/master/docs/rbac.md on how to write RBAC policies.
# policy.csv: |
# # Give all members of "my-org:team-alpha" the ability to sync apps in "my-project"
# p, my-org:team-alpha, applications, sync, my-project/*, allow
# # Make all members of "my-org:team-beta" admins
# g, my-org:team-beta, role:admin
#
# # The default role ArgoCD will fall back to when authorizing API requests
# policy.default: role:readonly


@@ -1,26 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: argocd-secret
type: Opaque
# data:
# # TLS certificate and private key for API server.
# # Autogenerated with a self-signed certificate if keys are missing.
# tls.crt:
# tls.key:
#
# # bcrypt hash of the admin password and its last modified time. Autogenerated on initial
# # startup. To reset a forgotten password, delete both keys and restart argocd-server.
# admin.password:
# admin.passwordMtime:
#
# # random server signature key for session validation. Autogenerated on initial startup
# server.secretkey:
#
# # The following keys hold the shared secret for authenticating GitHub/GitLab/BitBucket webhook
# # events. To enable webhooks, configure one or more of the following keys with the shared git
# # provider webhook secret. The payload URL configured in the git provider should use the
# # /api/webhook endpoint of your ArgoCD instance (e.g. https://argocd.example.com/api/webhook)
# github.webhook.secret:
# gitlab.webhook.secret:
# bitbucket.webhook.uuid:


@@ -1,34 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: dex-server
spec:
selector:
matchLabels:
app: dex-server
template:
metadata:
labels:
app: dex-server
spec:
serviceAccountName: dex-server
initContainers:
- name: copyutil
image: argoproj/argocd-server:latest
command: [cp, /argocd-util, /shared]
volumeMounts:
- mountPath: /shared
name: static-files
containers:
- name: dex
image: quay.io/dexidp/dex:v2.11.0
command: [/shared/argocd-util, rundex]
ports:
- containerPort: 5556
- containerPort: 5557
volumeMounts:
- mountPath: /shared
name: static-files
volumes:
- emptyDir: {}
name: static-files


@@ -1,14 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: dex-server-role
rules:
- apiGroups:
- ""
resources:
- secrets
- configmaps
verbs:
- get
- list
- watch


@@ -1,11 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: dex-server-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: dex-server-role
subjects:
- kind: ServiceAccount
name: dex-server


@@ -1,4 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: dex-server


@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: dex-server
spec:
ports:
- name: http
protocol: TCP
port: 5556
targetPort: 5556
- name: grpc
protocol: TCP
port: 5557
targetPort: 5557
selector:
app: dex-server


@@ -1,33 +0,0 @@
resources:
- application-crd.yaml
- appproject-crd.yaml
- argocd-cm.yaml
- argocd-secret.yaml
- argocd-rbac-cm.yaml
- application-controller-sa.yaml
- application-controller-role.yaml
- application-controller-rolebinding.yaml
- application-controller-deployment.yaml
- argocd-server-sa.yaml
- argocd-server-role.yaml
- argocd-server-rolebinding.yaml
- argocd-server-deployment.yaml
- argocd-server-service.yaml
- argocd-metrics-service.yaml
- argocd-repo-server-deployment.yaml
- argocd-repo-server-service.yaml
- dex-server-sa.yaml
- dex-server-role.yaml
- dex-server-rolebinding.yaml
- dex-server-deployment.yaml
- dex-server-service.yaml
imageTags:
- name: argoproj/argocd-server
newTag: v0.10.6
- name: argoproj/argocd-ui
newTag: v0.10.6
- name: argoproj/argocd-repo-server
newTag: v0.10.6
- name: argoproj/argocd-application-controller
newTag: v0.10.6


@@ -1,15 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: application-controller-clusterrole
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'


@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: application-controller-clusterrolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: application-controller-clusterrole
subjects:
- kind: ServiceAccount
name: application-controller
namespace: argocd

Some files were not shown because too many files have changed in this diff.