Compare commits


3 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Jesse Suen | 08c63ec234 | Update install manifests to v0.6.2 | 2018-07-20 22:18:27 -07:00 |
| Jesse Suen | 41f950fd43 | Bump version to v0.6.2 | 2018-07-20 21:52:38 -07:00 |
| Jesse Suen | 826ee0dfa0 | Health check was using wrong converter for statefulsets, daemonset, replicasets (#439) | 2018-07-20 21:49:43 -07:00 |
172 changed files with 2355 additions and 13604 deletions


@@ -29,19 +29,7 @@ spec:
- name: cmd
value: "{{item}}"
withItems:
- dep ensure && make cli lint
- name: test-coverage
template: ci-builder
arguments:
parameters:
- name: cmd
value: "dep ensure && go get github.com/mattn/goveralls && make test-coverage"
- name: test-e2e
template: ci-builder
arguments:
parameters:
- name: cmd
value: "dep ensure && make test-e2e"
- dep ensure && make cli lint test test-e2e
- name: ci-builder
inputs:
@@ -56,22 +44,12 @@ spec:
container:
image: argoproj/argo-cd-ci-builder:latest
command: [sh, -c]
args: ["mkfifo pipe; tee /tmp/logs.txt < pipe & {{inputs.parameters.cmd}} > pipe"]
args: ["{{inputs.parameters.cmd}}"]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: COVERALLS_TOKEN
valueFrom:
secretKeyRef:
name: coverall-token
key: coverall-token
resources:
requests:
memory: 1024Mi
cpu: 200m
outputs:
artifacts:
- name: logs
path: /tmp/logs.txt
- name: ci-dind
inputs:
@@ -86,7 +64,7 @@ spec:
container:
image: argoproj/argo-cd-ci-builder:latest
command: [sh, -c]
args: ["mkfifo pipe; tee /tmp/logs.txt < pipe & until docker ps; do sleep 3; done && {{inputs.parameters.cmd}} > pipe"]
args: ["until docker ps; do sleep 3; done && {{inputs.parameters.cmd}}"]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: DOCKER_HOST
@@ -101,7 +79,4 @@ spec:
securityContext:
privileged: true
mirrorVolumeMounts: true
outputs:
artifacts:
- name: logs
path: /tmp/logs.txt
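The `mkfifo`/`tee` wrapper in the workflow args above captures a command's output to a file (for the logs artifact) while still streaming it to stdout. A standalone sketch of that pattern, with a hypothetical command and paths:

```shell
# Demo of the fifo/tee log-capture pattern from the old workflow args.
# Output written to the named pipe is duplicated by tee: one copy goes
# to logs.txt (the artifact), the other to stdout (the live CI log).
workdir=$(mktemp -d)
cd "$workdir"
mkfifo pipe
tee logs.txt < pipe &    # reader: duplicate pipe contents to file + stdout
echo "build ok" > pipe   # stand-in for {{inputs.parameters.cmd}}
wait                     # let tee drain the pipe before inspecting the log
cat logs.txt
```

The pipe blocks the writer until `tee` has opened it for reading, so no output is lost even if the command starts quickly.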

.gitignore

@@ -7,4 +7,3 @@ dist/
# delve debug binaries
cmd/**/debug
debug.test
coverage.out


@@ -1,164 +1,5 @@
# Changelog
## v0.8.2 (2018-09-12)
- Downgrade ksonnet from v0.12.0 to v0.11.0 due to quote unescape regression
- Fix CLI panic when performing an initial `argocd sync/wait`
## v0.8.1 (2018-09-10)
+ [UI] Support selection of helm values files in App creation wizard (issue #499)
+ [UI] Support specifying source revision in App creation wizard (issue #503)
+ [UI] Improve resource diff rendering (issue #457)
+ [UI] Indicate number of ready containers in pod (issue #539)
+ [UI] Indicate when app is overriding parameters (issue #503)
+ [UI] Provide a YAML view of resources (issue #396)
- Fix issue where changes were not pulled when tracking a branch (issue #567)
- Fix controller hot loop when app source contains bad manifests (issue #568)
- [UI] Fix issue where the projects filter did not work after an application changed
## v0.8.0 (2018-09-04)
### Notes about upgrading from v0.7
* The RBAC model has been improved to support explicit denies. This means that any previous
RBAC policy rules need to be rewritten to include one extra column specifying the effect:
`allow` or `deny`. For example, if a rule was written like this:
```
p, my-org:my-team, applications, get, */*
```
It should be rewritten to look like this:
```
p, my-org:my-team, applications, get, */*, allow
```
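With the effect column, a rule can also explicitly deny access. A hypothetical example in the same policy format (project and team names are illustrative) that blocks the team from deleting applications in a production project:

```
p, my-org:my-team, applications, delete, production/*, deny
```

Under a deny-overrides evaluation, such a rule is intended to win over any matching `allow` rule.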
### Changes since v0.7:
+ Support kustomize as an application source (issue #510)
+ Introduce project tokens for automation access (issue #498)
+ Add ability to delete a single application resource to support immutable updates (issue #262)
+ Update RBAC model to support explicit denies (issue #497)
+ Ability to view Kubernetes events related to application projects for auditing
+ Add PVC healthcheck to controller (issue #501)
+ Run all containers as an unprivileged user (issue #528)
* Upgrade ksonnet to v0.12.0
* Add readiness probes to API server (issue #522)
* Use gRPC error codes instead of fmt.Errorf (#532)
- API discovery becomes best effort when partial resource list is returned (issue #524)
- Fix `argocd app wait` printing incorrect Sync output (issue #542)
- Fix issue where argocd could not sync to a tag (#541)
- Fix issue where static assets were browser cached between upgrades (issue #489)
## v0.7.2 (2018-08-21)
- API discovery becomes best effort when partial resource list is returned (issue #524)
## v0.7.1 (2018-08-03)
+ Surface helm parameters to the application level (#485)
+ [UI] Improve application creation wizard (#459)
+ [UI] Show indicator when refresh is still in progress (#493)
* [UI] Improve data loading error notification (#446)
* Infer username from claims during an `argocd relogin` (#475)
* Expand RBAC role to be able to create application events. Fix username claims extraction
- Fix scalability issues with the ListApps API (#494)
- Fix issue where application server was retrieving events from incorrect cluster (#478)
- Fix failure in identifying app source type when path was '.'
- AppProjectSpec SourceRepos mislabeled (#490)
- A failed e2e test was not failing the CI workflow
* Fix linux download link in getting_started.md (#487) (@chocopowwwa)
## v0.7.0 (2018-07-27)
+ Support helm charts and yaml directories as an application source
+ Audit trails in the form of API call logs
+ Generate kubernetes events for application state changes
+ Add ksonnet version to version endpoint (#433)
+ Show CLI progress for sync and rollback
+ Make use of dex refresh tokens and store them into local config
+ Expire local superuser tokens when their password changes
+ Add `argocd relogin` command as a convenience around login to current context
- Fix saving default connection status for repos and clusters
- Fix undesired fail-fast behavior of health check
- Fix memory leak in the cluster resource watch
- Health checks for StatefulSets, DaemonSets, and ReplicaSets were failing due to use of the wrong converters
## v0.6.2 (2018-07-23)
- Health checks for StatefulSets, DaemonSets, and ReplicaSets were failing due to use of the wrong converters
## v0.6.1 (2018-07-18)
- Fix regression where deployment health check incorrectly reported Healthy
+ Intercept dex SSO errors and present them in Argo login page
## v0.6.0 (2018-07-16)
+ Support PreSync, Sync, PostSync resource hooks
+ Introduce Application Projects for finer grain RBAC controls
+ Swagger Docs & UI
+ Support in-cluster deployments using the internal kubernetes service name
+ Refactoring & Improvements
* Improved error handling, status and condition reporting
* Remove installer in favor of kubectl apply instructions
* Add validation when setting application parameters
* Cascade deletion is decided during app deletion, instead of app creation
- Fix git authentication implementation when using an SSH key
- app-name label was inadvertently injected into spec.selector if selector was omitted from v1beta1 specs
## v0.5.4 (2018-06-27)
- Refresh flag to sync should be optional, not required
## v0.5.3 (2018-06-20)
+ Support cluster management using the internal k8s API address https://kubernetes.default.svc (#307)
+ Support diffing a local ksonnet app to the live application state (resolves #239) (#298)
+ Add ability to show last operation result in app get. Show path in app list -o wide (#297)
+ Update dependencies: ksonnet v0.11, golang v1.10, debian v9.4 (#296)
+ Add ability to force a refresh of an app during get (resolves #269) (#293)
+ Automatically restart API server upon certificate changes (#292)
## v0.5.2 (2018-06-14)
+ Resource events tab on application details page (#286)
+ Display pod status on application details page (#231)
## v0.5.1 (2018-06-13)
- API server incorrectly composed the application's fully-qualified name for the RBAC check (#283)
- UI crash while rendering application operation info if operation failed
## v0.5.0 (2018-06-12)
+ RBAC access control
+ Repository/Cluster state monitoring
+ ArgoCD settings import/export
+ Application creation UI wizard
+ argocd app manifests for printing the application manifests
+ argocd app unset command to unset parameter overrides
+ Fail app sync if prune flag is required (#276)
+ Take into account the number of unavailable replicas to decide whether a deployment is healthy (#270)
+ Add ability to show parameters and overrides in CLI (resolves #240)
- Repo names containing underscores were not being accepted (#258)
- Cookie token was not parsed properly when mixed with other site cookies
## v0.4.7 (2018-06-07)
- Fix argocd app wait health checking logic
## v0.4.6 (2018-06-06)
- Retry argocd app wait connection errors from EOF watch. Show detailed state changes
## v0.4.5 (2018-05-31)
+ Add argocd app unset command to unset parameter overrides
- Cookie token was not parsed properly when mixed with other site cookies
## v0.4.4 (2018-05-30)
+ Add ability to show parameters and overrides in CLI (resolves #240)
+ Add Events API endpoint
+ Issue #238 - add upsert flag to 'argocd app create' command
+ Add repo browsing endpoint (#229)
+ Support subscribing to settings updates and auto-restart of dex and API server
- Issue #233 - Controller does not persist rollback operation result
- App sync frequently fails due to concurrent app modification
## v0.4.3 (2018-05-21)
- Move local branch deletion as part of git Reset() (resolves #185) (#222)
- Fix exit code for app wait (#219)
## v0.4.2 (2018-05-21)
+ Show URL in argocd app get
- Remove interactive context name prompt during login which broke login automation
* Rename force flag to cascade in argocd app delete
## v0.4.1 (2018-05-18)
+ Implemented argocd app wait command
## v0.4.0 (2018-05-17)
+ SSO Integration
+ GitHub Webhook
@@ -171,36 +12,3 @@ RBAC policy rules, need to be rewritten to include one extra column with the eff
* Manifests are memoized in repo server
- Fix connection timeouts to SSH repos
## v0.3.2 (2018-05-03)
+ Application sync should delete 'unexpected' resources #139
+ Update ksonnet to v0.10.1
+ Detect unexpected resources
- Fix: App sync frequently fails due to concurrent app modification #147
- Fix: improve app state comparator: #136, #132
## v0.3.1 (2018-04-24)
+ Add new rollback RPC with numeric identifiers
+ New argo app history and argo app rollback command
+ Switch to gogo/protobuf for golang code generation
- Fix: create .argocd directory during argo login (issue #123)
- Fix: Allow overriding server or namespace separately (issue #110)
## v0.3.0 (2018-04-23)
+ Auth support
+ TLS support
+ DAG-based application view
+ Bulk watch
+ ksonnet v0.10.0-alpha.3
+ kubectl apply deployment strategy
+ CLI improvements for app management
## v0.2.0 (2018-04-03)
+ Rollback UI
+ Override parameters
## v0.1.0 (2018-03-12)
+ Define app in GitHub with dev and preprod environments using ksonnet
+ Add cluster Diff App with a cluster Deploy app in a cluster
+ Deploy a new version of the app in the cluster
+ App sync based on Github app config change - polling only
+ Basic UI: App diff between Git and k8s cluster for all environments


@@ -1,19 +1,10 @@
## Requirements
Make sure you have the following tools installed:
* [docker](https://docs.docker.com/install/#supported-platforms)
* [golang](https://golang.org/)
* [dep](https://github.com/golang/dep)
* [protobuf](https://developers.google.com/protocol-buffers/)
* [ksonnet](https://github.com/ksonnet/ksonnet#install)
* [helm](https://github.com/helm/helm/releases)
* [kustomize](https://github.com/kubernetes-sigs/kustomize/releases)
* [go-swagger](https://github.com/go-swagger/go-swagger/blob/master/docs/install.md)
* [jq](https://stedolan.github.io/jq/)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
Make sure you have the following tools installed: [docker](https://docs.docker.com/install/#supported-platforms), [golang](https://golang.org/), [dep](https://github.com/golang/dep), [protobuf](https://developers.google.com/protocol-buffers/), [ksonnet](https://github.com/ksonnet/ksonnet#install), [go-swagger](https://github.com/go-swagger/go-swagger/blob/master/docs/install.md), [jq](https://stedolan.github.io/jq/), and
[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
```
$ brew tap go-swagger/go-swagger
$ brew install go dep protobuf kubectl ksonnet/tap/ks kubernetes-helm jq go-swagger
$ brew install go dep protobuf kubectl ksonnet/tap/ks jq go-swagger
$ go get -u github.com/golang/protobuf/protoc-gen-go
$ go get -u github.com/go-swagger/go-swagger/cmd/swagger
$ go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
@@ -39,18 +30,6 @@ NOTE: The make command can take a while, and we recommend building the specific
* `make codegen` - Builds protobuf and swagger files
* `make argocd-util` - Make the administrator's utility, used for certain tasks such as import/export
## Generating ArgoCD manifests for a specific image repository/tag
During development, the `update-manifests.sh` script can be used to conveniently regenerate the
ArgoCD installation manifests with a customized image namespace and tag. This enables developers
to easily apply manifests which are using the images that they pushed into their personal container
repository.
```
$ IMAGE_NAMESPACE=jessesuen IMAGE_TAG=latest ./hack/update-manifests.sh
$ kubectl apply -n argocd -f ./manifests/install.yaml
```
## Running locally
You need access to a Kubernetes cluster (such as [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) or [docker edge](https://docs.docker.com/docker-for-mac/install/)) in order to run Argo CD on your laptop:


@@ -1,131 +0,0 @@
####################################################################################################
# Builder image
# Initial stage which pulls and prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM golang:1.10.3 as builder
RUN apt-get update && apt-get install -y \
git \
make \
wget \
gcc \
zip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /tmp
# Install docker
ENV DOCKER_VERSION=18.06.0
RUN curl -O https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}-ce.tgz && \
tar -xzf docker-${DOCKER_VERSION}-ce.tgz && \
mv docker/docker /usr/local/bin/docker && \
rm -rf ./docker
# Install dep
ENV DEP_VERSION=0.5.0
RUN wget https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 -O /usr/local/bin/dep && \
chmod +x /usr/local/bin/dep
# Install gometalinter
RUN curl -sLo- https://github.com/alecthomas/gometalinter/releases/download/v2.0.5/gometalinter-2.0.5-linux-amd64.tar.gz | \
tar -xzC "$GOPATH/bin" --exclude COPYING --exclude README.md --strip-components 1 -f- && \
ln -s $GOPATH/bin/gometalinter $GOPATH/bin/gometalinter.v2
# Install packr
ENV PACKR_VERSION=1.13.2
RUN wget https://github.com/gobuffalo/packr/releases/download/v${PACKR_VERSION}/packr_${PACKR_VERSION}_linux_amd64.tar.gz && \
tar -vxf packr*.tar.gz -C /tmp/ && \
mv /tmp/packr /usr/local/bin/packr
# Install kubectl
RUN curl -L -o /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
chmod +x /usr/local/bin/kubectl
# Install ksonnet
# NOTE: we frequently switch between tip of master ksonnet vs. official builds. Comment/uncomment
# the corresponding section to switch between the two options:
# Option 1: build ksonnet ourselves
#RUN go get -v -u github.com/ksonnet/ksonnet && mv ${GOPATH}/bin/ksonnet /usr/local/bin/ks
# Option 2: use official tagged ksonnet release
ENV KSONNET_VERSION=0.11.0
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /usr/local/bin/ks
# Install helm
ENV HELM_VERSION=2.9.1
RUN wget https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
tar -C /tmp/ -xf helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
mv /tmp/linux-amd64/helm /usr/local/bin/helm
# Install kustomize
ENV KUSTOMIZE_VERSION=1.0.7
RUN curl -L -o /usr/local/bin/kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/kustomize
####################################################################################################
# ArgoCD Build stage which performs the actual build of ArgoCD binaries
####################################################################################################
FROM golang:1.10.3 as argocd-build
COPY --from=builder /usr/local/bin/dep /usr/local/bin/dep
COPY --from=builder /usr/local/bin/packr /usr/local/bin/packr
# A dummy directory is created under $GOPATH/src/dummy so we are able to use dep
# to install all the packages of our dep lock file
COPY Gopkg.toml ${GOPATH}/src/dummy/Gopkg.toml
COPY Gopkg.lock ${GOPATH}/src/dummy/Gopkg.lock
RUN cd ${GOPATH}/src/dummy && \
dep ensure -vendor-only && \
mv vendor/* ${GOPATH}/src/ && \
rmdir vendor
# Perform the build
WORKDIR /go/src/github.com/argoproj/argo-cd
COPY . .
ARG MAKE_TARGET="cli server controller repo-server argocd-util"
RUN make ${MAKE_TARGET}
####################################################################################################
# Final image
####################################################################################################
FROM debian:9.5-slim
RUN groupadd -g 999 argocd && \
useradd -r -u 999 -g argocd argocd && \
mkdir -p /home/argocd && \
chown argocd:argocd /home/argocd && \
apt-get update && \
apt-get install -y git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY --from=builder /usr/local/bin/ks /usr/local/bin/ks
COPY --from=builder /usr/local/bin/helm /usr/local/bin/helm
COPY --from=builder /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY --from=builder /usr/local/bin/kustomize /usr/local/bin/kustomize
# workaround ksonnet issue https://github.com/ksonnet/ksonnet/issues/298
ENV USER=argocd
COPY --from=argocd-build /go/src/github.com/argoproj/argo-cd/dist/* /usr/local/bin/
# Symlink argocd binaries under / for backwards compatibility with consumers that expect them there
RUN ln -s /usr/local/bin/argocd /argocd && \
ln -s /usr/local/bin/argocd-server /argocd-server && \
ln -s /usr/local/bin/argocd-util /argocd-util && \
ln -s /usr/local/bin/argocd-application-controller /argocd-application-controller && \
ln -s /usr/local/bin/argocd-repo-server /argocd-repo-server
USER argocd
RUN helm init --client-only
WORKDIR /home/argocd
ARG BINARY
CMD ${BINARY}

Dockerfile-argocd (new file)

@@ -0,0 +1,83 @@
FROM debian:9.4 as builder
RUN apt-get update && apt-get install -y \
git \
make \
wget \
gcc \
zip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install go
ENV GO_VERSION 1.10.3
ENV GO_ARCH amd64
ENV GOPATH /root/go
ENV PATH ${GOPATH}/bin:/usr/local/go/bin:${PATH}
RUN wget https://storage.googleapis.com/golang/go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \
tar -C /usr/local/ -xf /go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \
rm /go${GO_VERSION}.linux-${GO_ARCH}.tar.gz
# Install protoc, dep, packr
ENV PROTOBUF_VERSION 3.5.1
RUN cd /usr/local && \
wget https://github.com/google/protobuf/releases/download/v${PROTOBUF_VERSION}/protoc-${PROTOBUF_VERSION}-linux-x86_64.zip && \
unzip protoc-*.zip && \
wget https://github.com/golang/dep/releases/download/v0.4.1/dep-linux-amd64 -O /usr/local/bin/dep && \
chmod +x /usr/local/bin/dep && \
wget https://github.com/gobuffalo/packr/releases/download/v1.11.0/packr_1.11.0_linux_amd64.tar.gz && \
tar -vxf packr*.tar.gz -C /tmp/ && \
mv /tmp/packr /usr/local/bin/packr
# A dummy directory is created under $GOPATH/src/dummy so we are able to use dep
# to install all the packages of our dep lock file
COPY Gopkg.toml ${GOPATH}/src/dummy/Gopkg.toml
COPY Gopkg.lock ${GOPATH}/src/dummy/Gopkg.lock
RUN cd ${GOPATH}/src/dummy && \
dep ensure -vendor-only && \
mv vendor/* ${GOPATH}/src/ && \
rmdir vendor
# Perform the build
WORKDIR /root/go/src/github.com/argoproj/argo-cd
COPY . .
ARG MAKE_TARGET="cli server controller repo-server argocd-util"
RUN make ${MAKE_TARGET}
##############################################################
# This stage will pull in or build any CLI tooling we need for our final image
FROM golang:1.10 as cli-tooling
# NOTE: we frequently switch between tip of master ksonnet vs. official builds. Comment/uncomment
# the corresponding section to switch between the two options:
# Option 1: build ksonnet ourselves
#RUN go get -v -u github.com/ksonnet/ksonnet && mv ${GOPATH}/bin/ksonnet /ks
# Option 2: use official tagged ksonnet release
ENV KSONNET_VERSION=0.11.0
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /ks
RUN curl -L -o /kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
chmod +x /kubectl
##############################################################
FROM debian:9.3
RUN apt-get update && apt-get install -y git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY --from=cli-tooling /ks /usr/local/bin/ks
COPY --from=cli-tooling /kubectl /usr/local/bin/kubectl
# workaround ksonnet issue https://github.com/ksonnet/ksonnet/issues/298
ENV USER=root
COPY --from=builder /root/go/src/github.com/argoproj/argo-cd/dist/* /
ARG BINARY
CMD /${BINARY}

Dockerfile-ci-builder (new file)

@@ -0,0 +1,22 @@
FROM golang:1.10.3
WORKDIR /tmp
RUN curl -O https://get.docker.com/builds/Linux/x86_64/docker-1.13.1.tgz && \
tar -xzf docker-1.13.1.tgz && \
mv docker/docker /usr/local/bin/docker && \
rm -rf ./docker && \
go get -u github.com/golang/dep/cmd/dep && \
go get -u gopkg.in/alecthomas/gometalinter.v2 && \
gometalinter.v2 --install
# Install kubectl
RUN curl -L -o /kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
chmod +x /kubectl && mv /kubectl /usr/local/bin/kubectl
# Install ksonnet
ENV KSONNET_VERSION=0.11.0
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /usr/local/bin/ks && \
rm -rf /tmp/ks_${KSONNET_VERSION}

Gopkg.lock (generated; diff suppressed because it is too large)


@@ -6,9 +6,6 @@ required = [
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway",
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger",
"github.com/golang/protobuf/protoc-gen-go",
"golang.org/x/tools/cmd/cover",
"github.com/argoproj/pkg/time",
"github.com/dustin/go-humanize",
]
[[constraint]]
@@ -56,7 +53,3 @@ required = [
[[override]]
name = "github.com/sirupsen/logrus"
revision = "ea8897e79973357ba785ac2533559a6297e83c44"
[[constraint]]
branch = "master"
name = "github.com/argoproj/pkg"


@@ -62,16 +62,16 @@ cli: clean-debug
.PHONY: cli-linux
cli-linux: clean-debug
docker build --iidfile /tmp/argocd-linux-id --target argocd-build --build-arg MAKE_TARGET="cli IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-linux-amd64" .
docker build --iidfile /tmp/argocd-linux-id --target builder --build-arg MAKE_TARGET="cli IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-linux-amd64" -f Dockerfile-argocd .
docker create --name tmp-argocd-linux `cat /tmp/argocd-linux-id`
docker cp tmp-argocd-linux:/go/src/github.com/argoproj/argo-cd/dist/argocd-linux-amd64 dist/
docker cp tmp-argocd-linux:/root/go/src/github.com/argoproj/argo-cd/dist/argocd-linux-amd64 dist/
docker rm tmp-argocd-linux
.PHONY: cli-darwin
cli-darwin: clean-debug
docker build --iidfile /tmp/argocd-darwin-id --target argocd-build --build-arg MAKE_TARGET="cli GOOS=darwin IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-darwin-amd64" .
docker build --iidfile /tmp/argocd-darwin-id --target builder --build-arg MAKE_TARGET="cli GOOS=darwin IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-darwin-amd64" -f Dockerfile-argocd .
docker create --name tmp-argocd-darwin `cat /tmp/argocd-darwin-id`
docker cp tmp-argocd-darwin:/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64 dist/
docker cp tmp-argocd-darwin:/root/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64 dist/
docker rm tmp-argocd-darwin
.PHONY: argocd-util
@@ -81,7 +81,8 @@ argocd-util: clean-debug
.PHONY: install-manifest
install-manifest:
if [ "${IMAGE_NAMESPACE}" = "" ] ; then echo "IMAGE_NAMESPACE must be set to build install manifest" ; exit 1 ; fi
./hack/update-manifests.sh
echo "# This is an auto-generated file. DO NOT EDIT" > manifests/install.yaml
cat manifests/components/*.yaml | sed 's@\( image: argoproj/\(.*\):latest\)@ image: '"${IMAGE_NAMESPACE}"'/\2:'"${IMAGE_TAG}"'@g' >> manifests/install.yaml
.PHONY: server
server: clean-debug
@@ -89,7 +90,7 @@ server: clean-debug
.PHONY: server-image
server-image:
docker build --build-arg BINARY=argocd-server -t $(IMAGE_PREFIX)argocd-server:$(IMAGE_TAG) .
docker build --build-arg BINARY=argocd-server -t $(IMAGE_PREFIX)argocd-server:$(IMAGE_TAG) -f Dockerfile-argocd .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-server:$(IMAGE_TAG) ; fi
.PHONY: repo-server
@@ -98,7 +99,7 @@ repo-server:
.PHONY: repo-server-image
repo-server-image:
docker build --build-arg BINARY=argocd-repo-server -t $(IMAGE_PREFIX)argocd-repo-server:$(IMAGE_TAG) .
docker build --build-arg BINARY=argocd-repo-server -t $(IMAGE_PREFIX)argocd-repo-server:$(IMAGE_TAG) -f Dockerfile-argocd .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-repo-server:$(IMAGE_TAG) ; fi
.PHONY: controller
@@ -107,17 +108,17 @@ controller:
.PHONY: controller-image
controller-image:
docker build --build-arg BINARY=argocd-application-controller -t $(IMAGE_PREFIX)argocd-application-controller:$(IMAGE_TAG) .
docker build --build-arg BINARY=argocd-application-controller -t $(IMAGE_PREFIX)argocd-application-controller:$(IMAGE_TAG) -f Dockerfile-argocd .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-application-controller:$(IMAGE_TAG) ; fi
.PHONY: cli-image
cli-image:
docker build --build-arg BINARY=argocd -t $(IMAGE_PREFIX)argocd-cli:$(IMAGE_TAG) .
docker build --build-arg BINARY=argocd -t $(IMAGE_PREFIX)argocd-cli:$(IMAGE_TAG) -f Dockerfile-argocd .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-cli:$(IMAGE_TAG) ; fi
.PHONY: builder-image
builder-image:
docker build -t $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) --target builder .
docker build -t $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) -f Dockerfile-ci-builder .
.PHONY: lint
lint:
@@ -125,16 +126,11 @@ lint:
.PHONY: test
test:
go test -v `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
.PHONY: test-coverage
test-coverage:
go test -v -covermode=count -coverprofile=coverage.out `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
@if [ "$(COVERALLS_TOKEN)" != "" ] ; then goveralls -ignore `find . -name '*.pb*.go' | grep -v vendor/ | sed 's!^./!!' | paste -d, -s -` -coverprofile=coverage.out -service=argo-ci -repotoken "$(COVERALLS_TOKEN)"; else echo 'No COVERALLS_TOKEN env var specified. Skipping submission to Coveralls.io'; fi
go test `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
.PHONY: test-e2e
test-e2e:
go test -v -failfast -timeout 20m ./test/e2e
go test ./test/e2e
# Cleans VSCode debug.test files from sub-dirs to prevent them from being included in packr boxes
.PHONY: clean-debug
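The inline `sed` in the `install-manifest` target above rewrites `argoproj/<name>:latest` image references to a custom namespace and tag. It can be sanity-checked on a single manifest line (the namespace and tag values here are illustrative):

```shell
IMAGE_NAMESPACE=jessesuen   # example namespace; any registry/user works
IMAGE_TAG=v0.6.2            # example tag
echo '        image: argoproj/argocd-server:latest' | \
  sed 's@\( image: argoproj/\(.*\):latest\)@ image: '"${IMAGE_NAMESPACE}"'/\2:'"${IMAGE_TAG}"'@g'
# expected:        image: jessesuen/argocd-server:v0.6.2
```

The `@` delimiter avoids escaping the slashes in the image path, and the second capture group (`\2`) carries the image name through the substitution unchanged.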


@@ -1,10 +1,9 @@
[![Coverage Status](https://coveralls.io/repos/github/argoproj/argo-cd/badge.svg?branch=master)](https://coveralls.io/github/argoproj/argo-cd?branch=master)
# Argo CD - Declarative Continuous Delivery for Kubernetes
## What is Argo CD?
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
Argo CD is a declarative, continuous delivery service based on **ksonnet** for Kubernetes.
![Argo CD UI](docs/argocd-ui.gif)
@@ -20,25 +19,21 @@ is provided for additional features.
## How it works
Argo CD follows the **GitOps** pattern of using git repositories as the source of truth for defining
the desired application state. Kubernetes manifests can be specified in several ways:
* [ksonnet](https://ksonnet.io) applications
* [helm](https://helm.sh) charts
* Simple directory of YAML/json manifests
Argo CD follows the **GitOps** pattern of using git repositories as the source of truth for defining the
desired application state. Kubernetes manifests are specified as [ksonnet](https://ksonnet.io)
applications. Argo CD automates the deployment of the desired
application states in the specified target environments.
![Argo CD Architecture](docs/argocd_architecture.png)
Argo CD automates the deployment of the desired application states in the specified target environments.
Application deployments can track updates to branches, tags, or pinned to a specific version of
manifests at a git commit. See [tracking strategies](docs/tracking_strategies.md) for additional
details about the different tracking strategies available.
## Architecture
![Argo CD Architecture](docs/argocd_architecture.png)
Argo CD is implemented as a kubernetes controller which continuously monitors running applications
and compares the current, live state against the desired target state (as specified in the git repo).
A deployed application whose live state deviates from the target state is considered `OutOfSync`.
Argo CD reports & visualizes the differences, while providing facilities to automatically or
A deployed application whose live state deviates from the target state is considered out-of-sync.
Argo CD reports & visualizes the differences as well as providing facilities to automatically or
manually sync the live state back to the desired target state. Any modifications made to the desired
target state in the git repo can be automatically applied and reflected in the specified target
environments.
@@ -56,15 +51,47 @@ For additional details, see [architecture overview](docs/architecture.md).
* SSO Integration (OIDC, OAuth2, LDAP, SAML 2.0, GitLab, Microsoft, LinkedIn)
* Webhook Integration (GitHub, BitBucket, GitLab)
* PreSync, Sync, PostSync hooks to support complex application rollouts (e.g. blue/green & canary upgrades)
* Audit trails for application events and API calls
* Parameter overrides for overriding ksonnet/helm parameters in git
## What is ksonnet?
* [Jsonnet](http://jsonnet.org), the basis for ksonnet, is a domain-specific configuration language,
which provides extreme flexibility for composing and manipulating JSON/YAML specifications.
* [Ksonnet](http://ksonnet.io) goes one step further by applying Jsonnet principles to Kubernetes
manifests. It provides an opinionated file & directory structure to organize applications into
reusable components, parameters, and environments. Environments can be hierarchical, which promotes
both re-use and granular customization of application and environment specifications.
## Why ksonnet?
Application configuration management is a hard problem that grows rapidly in complexity as you deploy
more applications against more and more environments. Current templating systems, such as Jinja
and Golang templating, are unnatural ways to maintain Kubernetes manifests and are not well suited to
capturing subtle configuration differences between environments. Their ability to compose and re-use
application and environment configurations is also very limited.
Imagine we have a single guestbook application deployed in the following environments:
| Environment | K8s Version | Application Image | DB Connection String | Environment Vars | Sidecars |
|---------------|-------------|------------------------|-----------------------|------------------|---------------|
| minikube      | 1.10.0      | jesse/guestbook:latest | sql://localhost/db    | DEBUG=true       |               |
| dev | 1.11.0 | app/guestbook:latest | sql://dev-test/db | DEBUG=true | |
| staging | 1.10.0 | app/guestbook:e3c0263 | sql://staging/db | | istio,dnsmasq |
| us-west-1 | 1.9.0 | app/guestbook:abc1234 | sql://prod/db | FOO_FEATURE=true | istio,dnsmasq |
| us-west-2 | 1.10.0 | app/guestbook:abc1234 | sql://prod/db | | istio,dnsmasq |
| us-east-1 | 1.9.0 | app/guestbook:abc1234 | sql://prod/db | BAR_FEATURE=true | istio,dnsmasq |
Ksonnet:
* Enables composition and re-use of common YAML specifications
* Allows overrides, additions, and subtractions of YAML sub-components specific to each environment
* Guarantees proper generation of K8s manifests suitable for the corresponding Kubernetes API version
* Provides [kubernetes-specific jsonnet libraries](https://github.com/ksonnet/ksonnet-lib) to enable
concise definition of kubernetes manifests
## Development Status
* Argo CD is being used in production to deploy SaaS services at Intuit
## Roadmap
* Auto-sync toggle to directly apply git state changes to live state
* Audit trails for application events and API calls
* Service account/access key management for CI pipelines
* Support for additional config management tools (Kustomize?)
* Revamped UI, and feature parity with CLI
* Revamped UI
* Customizable application actions

@@ -1 +1 @@
0.8.2
0.6.2

@@ -7,7 +7,6 @@ import (
"io"
"net/url"
"os"
"reflect"
"strconv"
"strings"
"text/tabwriter"
@@ -31,7 +30,7 @@ import (
"github.com/argoproj/argo-cd/server/application"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/config"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/diff"
"github.com/argoproj/argo-cd/util/ksonnet"
kubeutil "github.com/argoproj/argo-cd/util/kube"
@@ -72,27 +71,29 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
upsert bool
)
var command = &cobra.Command{
Use: "create APPNAME",
Use: "create",
Short: "Create an application from a git location",
Run: func(c *cobra.Command, args []string) {
if len(args) != 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
var app argoappv1.Application
if fileURL != "" {
parsedURL, err := url.ParseRequestURI(fileURL)
if err != nil || !(parsedURL.Scheme == "http" || parsedURL.Scheme == "https") {
err = config.UnmarshalLocalFile(fileURL, &app)
err = cli.UnmarshalLocalFile(fileURL, &app)
} else {
err = config.UnmarshalRemoteFile(fileURL, &app)
err = cli.UnmarshalRemoteFile(fileURL, &app)
}
errors.CheckError(err)
if err != nil {
log.Fatal(err)
}
} else {
if len(args) == 1 {
if appName != "" && appName != args[0] {
log.Fatalf("--name argument '%s' does not match app name %s", appName, args[0])
}
appName = args[0]
}
if appOpts.repoURL == "" || appOpts.appPath == "" || appName == "" {
log.Fatal("name, repo, path are required")
if appOpts.repoURL == "" || appOpts.appPath == "" || appOpts.env == "" || appName == "" {
log.Fatal("name, repo, path, env are required")
os.Exit(1)
}
app = argoappv1.Application{
ObjectMeta: metav1.ObjectMeta{
@@ -116,9 +117,6 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
app.Spec.Destination.Namespace = appOpts.destNamespace
}
setParameterOverrides(&app, appOpts.parameters)
if len(appOpts.valuesFiles) > 0 {
app.Spec.Source.ValuesFiles = appOpts.valuesFiles
}
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
appCreateRequest := application.ApplicationCreateRequest{
@@ -131,7 +129,7 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
},
}
command.Flags().StringVarP(&fileURL, "file", "f", "", "Filename or URL to Kubernetes manifests for the app")
command.Flags().StringVar(&appName, "name", "", "A name for the app, ignored if a file is set (DEPRECATED)")
command.Flags().StringVar(&appName, "name", "", "A name for the app, ignored if a file is set")
command.Flags().BoolVar(&upsert, "upsert", false, "Allows overriding an application with the same name even if the supplied application spec differs from the existing spec")
addAppFlags(command, &appOpts)
return command
@@ -173,15 +171,10 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
fmt.Printf(printOpFmtStr, "Server:", app.Spec.Destination.Server)
fmt.Printf(printOpFmtStr, "Namespace:", app.Spec.Destination.Namespace)
fmt.Printf(printOpFmtStr, "URL:", appURL(acdClient, app))
fmt.Printf(printOpFmtStr, "Environment:", app.Spec.Source.Environment)
fmt.Printf(printOpFmtStr, "Repo:", app.Spec.Source.RepoURL)
fmt.Printf(printOpFmtStr, "Target:", app.Spec.Source.TargetRevision)
fmt.Printf(printOpFmtStr, "Path:", app.Spec.Source.Path)
if app.Spec.Source.Environment != "" {
fmt.Printf(printOpFmtStr, "Environment:", app.Spec.Source.Environment)
}
if len(app.Spec.Source.ValuesFiles) > 0 {
fmt.Printf(printOpFmtStr, "Helm Values:", strings.Join(app.Spec.Source.ValuesFiles, ","))
}
fmt.Printf(printOpFmtStr, "Target:", app.Spec.Source.TargetRevision)
if len(app.Status.Conditions) > 0 {
fmt.Println()
@@ -258,19 +251,10 @@ func printParams(app *argoappv1.Application) {
}
fmt.Println()
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
isKsonnet := app.Spec.Source.Environment != ""
if isKsonnet {
fmt.Fprintf(w, "COMPONENT\tNAME\tVALUE\tOVERRIDE\n")
for _, p := range app.Status.Parameters {
overrideValue := overrides[fmt.Sprintf("%s/%s", p.Component, p.Name)]
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", p.Component, p.Name, truncateString(p.Value, paramLenLimit), truncateString(overrideValue, paramLenLimit))
}
} else {
fmt.Fprintf(w, "NAME\tVALUE\n")
for _, p := range app.Spec.Source.ComponentParameterOverrides {
fmt.Fprintf(w, "%s\t%s\n", p.Name, truncateString(p.Value, paramLenLimit))
}
fmt.Fprintf(w, "COMPONENT\tNAME\tVALUE\tOVERRIDE\n")
for _, p := range app.Status.Parameters {
overrideValue := overrides[fmt.Sprintf("%s/%s", p.Component, p.Name)]
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", p.Component, p.Name, truncateString(p.Value, paramLenLimit), truncateString(overrideValue, paramLenLimit))
}
_ = w.Flush()
}
@@ -305,8 +289,6 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
app.Spec.Source.Environment = appOpts.env
case "revision":
app.Spec.Source.TargetRevision = appOpts.revision
case "values":
app.Spec.Source.ValuesFiles = appOpts.valuesFiles
case "dest-server":
app.Spec.Destination.Server = appOpts.destServer
case "dest-namespace":
@@ -356,7 +338,6 @@ type appOptions struct {
destServer string
destNamespace string
parameters []string
valuesFiles []string
project string
}
@@ -368,21 +349,19 @@ func addAppFlags(command *cobra.Command, opts *appOptions) {
command.Flags().StringVar(&opts.destServer, "dest-server", "", "K8s cluster URL (overrides the server URL specified in the ksonnet app.yaml)")
command.Flags().StringVar(&opts.destNamespace, "dest-namespace", "", "K8s target namespace (overrides the namespace specified in the ksonnet app.yaml)")
command.Flags().StringArrayVarP(&opts.parameters, "parameter", "p", []string{}, "set a parameter override (e.g. -p guestbook=image=example/guestbook:latest)")
command.Flags().StringArrayVar(&opts.valuesFiles, "values", []string{}, "Helm values file(s) to use")
command.Flags().StringVar(&opts.project, "project", "", "Application project name")
}
// NewApplicationUnsetCommand returns a new instance of an `argocd app unset` command
func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
parameters []string
valuesFiles []string
parameters []string
)
var command = &cobra.Command{
Use: "unset APPNAME -p COMPONENT=PARAM",
Short: "Unset application parameters",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 || (len(parameters) == 0 && len(valuesFiles) == 0) {
if len(args) != 1 || len(parameters) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
@@ -391,44 +370,22 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
defer util.Close(conn)
app, err := appIf.Get(context.Background(), &application.ApplicationQuery{Name: &appName})
errors.CheckError(err)
isKsonnetApp := app.Spec.Source.Environment != ""
updated := false
for _, paramStr := range parameters {
if isKsonnetApp {
parts := strings.SplitN(paramStr, "=", 2)
if len(parts) != 2 {
log.Fatalf("Expected parameter of the form: component=param. Received: %s", paramStr)
}
overrides := app.Spec.Source.ComponentParameterOverrides
for i, override := range overrides {
if override.Component == parts[0] && override.Name == parts[1] {
app.Spec.Source.ComponentParameterOverrides = append(overrides[0:i], overrides[i+1:]...)
updated = true
break
}
}
} else {
overrides := app.Spec.Source.ComponentParameterOverrides
for i, override := range overrides {
if override.Name == paramStr {
app.Spec.Source.ComponentParameterOverrides = append(overrides[0:i], overrides[i+1:]...)
updated = true
break
}
}
parts := strings.SplitN(paramStr, "=", 2)
if len(parts) != 2 {
log.Fatalf("Expected parameter of the form: component=param. Received: %s", paramStr)
}
}
for _, valuesFile := range valuesFiles {
for i, vf := range app.Spec.Source.ValuesFiles {
if vf == valuesFile {
app.Spec.Source.ValuesFiles = append(app.Spec.Source.ValuesFiles[0:i], app.Spec.Source.ValuesFiles[i+1:]...)
overrides := app.Spec.Source.ComponentParameterOverrides
for i, override := range overrides {
if override.Component == parts[0] && override.Name == parts[1] {
app.Spec.Source.ComponentParameterOverrides = append(overrides[0:i], overrides[i+1:]...)
updated = true
break
}
}
}
if !updated {
return
}
@@ -440,7 +397,6 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
},
}
command.Flags().StringArrayVarP(&parameters, "parameter", "p", []string{}, "unset a parameter override (e.g. -p guestbook=image)")
command.Flags().StringArrayVar(&valuesFiles, "values", []string{}, "unset one or more helm values files")
return command
}
@@ -636,10 +592,9 @@ func formatConditionsSummary(app argoappv1.Application) string {
// NewApplicationWaitCommand returns a new instance of an `argocd app wait` command
func NewApplicationWaitCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
watchSync bool
watchHealth bool
watchOperations bool
timeout uint
syncOnly bool
healthOnly bool
timeout uint
)
var command = &cobra.Command{
Use: "wait APPNAME",
@@ -649,22 +604,49 @@ func NewApplicationWaitCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
c.HelpFunc()(c, args)
os.Exit(1)
}
if !watchSync && !watchHealth && !watchOperations {
watchSync = true
watchHealth = true
watchOperations = true
if syncOnly && healthOnly {
log.Fatalln("Please specify at most one of --sync-only or --health-only.")
}
appName := args[0]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
defer cancel()
_, err := waitOnApplicationStatus(appIf, appName, timeout, watchSync, watchHealth, watchOperations)
if timeout != 0 {
time.AfterFunc(time.Duration(timeout)*time.Second, func() {
cancel()
})
}
// print the initial components to format the tabwriter columns
app, err := appIf.Get(ctx, &application.ApplicationQuery{Name: &appName})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
printAppResources(w, app, false)
_ = w.Flush()
prevCompRes := &app.Status.ComparisonResult
appEventCh := watchApp(ctx, appIf, appName)
for appEvent := range appEventCh {
app := appEvent.Application
printAppStateChange(w, prevCompRes, &app)
_ = w.Flush()
prevCompRes = &app.Status.ComparisonResult
synced := app.Status.ComparisonResult.Status == argoappv1.ComparisonStatusSynced
healthy := app.Status.Health.Status == argoappv1.HealthStatusHealthy
if len(app.Status.GetErrorConditions()) == 0 && ((synced && healthy) || (synced && syncOnly) || (healthy && healthOnly)) {
log.Printf("App %q matches desired state", appName)
return
}
}
log.Fatalf("Timed out (%ds) waiting for app %q to match desired state", timeout, appName)
},
}
command.Flags().BoolVar(&watchSync, "sync", false, "Wait for sync")
command.Flags().BoolVar(&watchHealth, "health", false, "Wait for health")
command.Flags().BoolVar(&watchOperations, "operation", false, "Wait for pending operations")
command.Flags().BoolVar(&syncOnly, "sync-only", false, "Wait only for sync")
command.Flags().BoolVar(&healthOnly, "health-only", false, "Wait only for health")
command.Flags().UintVar(&timeout, "timeout", defaultCheckTimeoutSeconds, "Time out after this many seconds")
return command
}
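The wait command above bounds its event loop with `context.WithCancel` plus `time.AfterFunc` rather than a deadline on the watch itself. A self-contained sketch of that pattern, assuming nothing from the Argo CD API (`watchEvents` and `runWait` are illustrative names):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// watchEvents mimics an application event stream: it emits values until
// the context is cancelled, then closes the channel.
func watchEvents(ctx context.Context) <-chan int {
	ch := make(chan int)
	go func() {
		defer close(ch)
		for i := 0; ; i++ {
			select {
			case <-ctx.Done():
				return
			case ch <- i:
			}
			time.Sleep(10 * time.Millisecond)
		}
	}()
	return ch
}

// runWait consumes events until the timeout fires, using the same
// context.WithCancel + time.AfterFunc pattern as the wait command.
// A timeout of 0 means wait forever.
func runWait(timeout uint) int {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	if timeout != 0 {
		time.AfterFunc(time.Duration(timeout)*time.Second, cancel)
	}
	n := 0
	for range watchEvents(ctx) {
		n++
	}
	return n
}

func main() {
	fmt.Println("events before timeout:", runWait(1))
}
```

Cancelling the context closes the channel, so the consuming `for range` loop exits cleanly instead of leaking the watch goroutine.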
@@ -776,6 +758,38 @@ func printAppResources(w io.Writer, app *argoappv1.Application, showOperation bo
}
}
// printAppStateChange prints a component state change if it was different from the last time we saw it
func printAppStateChange(w io.Writer, prevComp *argoappv1.ComparisonResult, app *argoappv1.Application) {
getPrevResState := func(kind, name string) (argoappv1.ComparisonStatus, argoappv1.HealthStatusCode) {
for _, res := range prevComp.Resources {
obj, err := argoappv1.UnmarshalToUnstructured(res.TargetState)
errors.CheckError(err)
if obj == nil {
obj, err = argoappv1.UnmarshalToUnstructured(res.LiveState)
errors.CheckError(err)
}
if obj.GetKind() == kind && obj.GetName() == name {
return res.Status, res.Health.Status
}
}
return "", ""
}
if len(app.Status.ComparisonResult.Resources) > 0 {
for _, res := range app.Status.ComparisonResult.Resources {
obj, err := argoappv1.UnmarshalToUnstructured(res.TargetState)
errors.CheckError(err)
if obj == nil {
obj, err = argoappv1.UnmarshalToUnstructured(res.LiveState)
errors.CheckError(err)
}
prevSync, prevHealth := getPrevResState(obj.GetKind(), obj.GetName())
if prevSync != res.Status || prevHealth != res.Health.Status {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", obj.GetKind(), obj.GetName(), res.Status, res.Health.Status)
}
}
}
}
// NewApplicationSyncCommand returns a new instance of an `argocd app sync` command
func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
@@ -816,10 +830,23 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
ctx := context.Background()
_, err := appIf.Sync(ctx, &syncReq)
errors.CheckError(err)
app, err := waitOnApplicationStatus(appIf, appName, timeout, false, false, true)
app, err := waitUntilOperationCompleted(appIf, appName, timeout)
errors.CheckError(err)
// get refreshed app before printing to show accurate sync/health status
app, err = appIf.Get(ctx, &application.ApplicationQuery{Name: &appName, Refresh: true})
errors.CheckError(err)
fmt.Printf(printOpFmtStr, "Application:", appName)
printOperationResult(app.Status.OperationState)
if len(app.Status.ComparisonResult.Resources) > 0 {
fmt.Println()
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
printAppResources(w, app, true)
_ = w.Flush()
}
pruningRequired := 0
for _, resDetails := range app.Status.OperationState.SyncResult.Resources {
if resDetails.Status == argoappv1.ResourceDetailsPruningRequired {
@@ -844,187 +871,26 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
return command
}
// resourceState tracks the state of a resource when waiting on an application status.
type resourceState struct {
Kind string
Name string
Status string
Health string
Hook string
Message string
}
func newResourceState(kind, name, status, health, hook, message string) *resourceState {
return &resourceState{
Kind: kind,
Name: name,
Status: status,
Health: health,
Hook: hook,
Message: message,
}
}
// Key returns a unique-ish key for the resource.
func (rs *resourceState) Key() string {
return fmt.Sprintf("%s/%s", rs.Kind, rs.Name)
}
func (rs *resourceState) String() string {
return fmt.Sprintf("%s\t%s\t%s\t%s\t%s\t%s", rs.Kind, rs.Name, rs.Status, rs.Health, rs.Hook, rs.Message)
}
// Merge merges the new state with any different contents from another resourceState.
// Blank fields in the receiver state will be updated to non-blank.
// Non-blank fields in the receiver state will never be updated to blank.
// Returns whether any fields were updated.
func (rs *resourceState) Merge(newState *resourceState) bool {
updated := false
for _, field := range []string{"Status", "Health", "Hook", "Message"} {
v := reflect.ValueOf(rs).Elem().FieldByName(field)
currVal := v.String()
newVal := reflect.ValueOf(newState).Elem().FieldByName(field).String()
if newVal != "" && currVal != newVal {
v.SetString(newVal)
updated = true
}
}
return updated
}
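The reflection-based Merge above can be exercised in isolation. A minimal sketch, using a trimmed-down struct rather than the full resourceState, shows the "non-blank never overwritten by blank" rule:

```go
package main

import (
	"fmt"
	"reflect"
)

// resourceState mirrors a subset of the struct above; fields must be
// exported for reflection to set them.
type resourceState struct {
	Status  string
	Health  string
	Message string
}

// merge copies non-blank fields from newState into rs, never overwriting
// an existing value with blank, and reports whether anything changed.
func (rs *resourceState) merge(newState *resourceState) bool {
	updated := false
	for _, field := range []string{"Status", "Health", "Message"} {
		v := reflect.ValueOf(rs).Elem().FieldByName(field)
		newVal := reflect.ValueOf(newState).Elem().FieldByName(field).String()
		if newVal != "" && v.String() != newVal {
			v.SetString(newVal)
			updated = true
		}
	}
	return updated
}

func main() {
	cur := &resourceState{Status: "OutOfSync", Health: "Progressing"}
	// Status changes, Health stays (blank in new state), Message fills in.
	cur.merge(&resourceState{Status: "Synced", Message: "applied"})
	fmt.Println(cur.Status, cur.Health, cur.Message)
}
```

This is why repeated watch events only print a row when something actually changed: unchanged or blank incoming fields produce no update.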
func calculateResourceStates(app *argoappv1.Application) map[string]*resourceState {
resStates := make(map[string]*resourceState)
for _, res := range app.Status.ComparisonResult.Resources {
obj, err := argoappv1.UnmarshalToUnstructured(res.TargetState)
errors.CheckError(err)
if obj == nil {
obj, err = argoappv1.UnmarshalToUnstructured(res.LiveState)
errors.CheckError(err)
}
newState := newResourceState(obj.GetKind(), obj.GetName(), string(res.Status), res.Health.Status, "", "")
key := newState.Key()
if prev, ok := resStates[key]; ok {
prev.Merge(newState)
} else {
resStates[key] = newState
}
}
var opResult *argoappv1.SyncOperationResult
if app.Status.OperationState != nil {
if app.Status.OperationState.SyncResult != nil {
opResult = app.Status.OperationState.SyncResult
} else if app.Status.OperationState.RollbackResult != nil {
opResult = app.Status.OperationState.RollbackResult
}
}
if opResult == nil {
return resStates
}
for _, hook := range opResult.Hooks {
newState := newResourceState(hook.Kind, hook.Name, string(hook.Status), "", string(hook.Type), hook.Message)
key := newState.Key()
if prev, ok := resStates[key]; ok {
prev.Merge(newState)
} else {
resStates[key] = newState
}
}
for _, res := range opResult.Resources {
newState := newResourceState(res.Kind, res.Name, "", "", "", res.Message)
key := newState.Key()
if prev, ok := resStates[key]; ok {
prev.Merge(newState)
} else {
resStates[key] = newState
}
}
return resStates
}
func waitOnApplicationStatus(appClient application.ApplicationServiceClient, appName string, timeout uint, watchSync, watchHealth, watchOperation bool) (*argoappv1.Application, error) {
func waitUntilOperationCompleted(appClient application.ApplicationServiceClient, appName string, timeout uint) (*argoappv1.Application, error) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// refresh controls whether or not we refresh the app before printing the final status.
// We only want to do this when an operation is in progress, since that is the only
// time the sync status lags behind the operation's completion
refresh := false
printFinalStatus := func(app *argoappv1.Application) {
var err error
if refresh {
app, err = appClient.Get(context.Background(), &application.ApplicationQuery{Name: &appName, Refresh: true})
errors.CheckError(err)
}
fmt.Println()
fmt.Printf(printOpFmtStr, "Application:", app.Name)
if watchOperation {
printOperationResult(app.Status.OperationState)
}
if len(app.Status.ComparisonResult.Resources) > 0 {
fmt.Println()
w := tabwriter.NewWriter(os.Stdout, 5, 0, 2, ' ', 0)
printAppResources(w, app, watchOperation)
_ = w.Flush()
}
}
if timeout != 0 {
time.AfterFunc(time.Duration(timeout)*time.Second, func() {
cancel()
})
}
w := tabwriter.NewWriter(os.Stdout, 5, 0, 2, ' ', 0)
fmt.Fprintln(w, "KIND\tNAME\tSTATUS\tHEALTH\tHOOK\tOPERATIONMSG")
prevStates := make(map[string]*resourceState)
appEventCh := watchApp(ctx, appClient, appName)
var app *argoappv1.Application
for appEvent := range appEventCh {
app = &appEvent.Application
if app.Operation != nil {
refresh = true
if appEvent.Application.Status.OperationState != nil && appEvent.Application.Status.OperationState.Phase.Completed() {
return &appEvent.Application, nil
}
// consider skipped checks successful
synced := !watchSync || app.Status.ComparisonResult.Status == argoappv1.ComparisonStatusSynced
healthy := !watchHealth || app.Status.Health.Status == argoappv1.HealthStatusHealthy
operational := !watchOperation || appEvent.Application.Operation == nil
if len(app.Status.GetErrorConditions()) == 0 && synced && healthy && operational {
printFinalStatus(app)
return app, nil
}
newStates := calculateResourceStates(app)
for _, newState := range newStates {
var doPrint bool
stateKey := newState.Key()
if prevState, found := prevStates[stateKey]; found {
doPrint = prevState.Merge(newState)
} else {
prevStates[stateKey] = newState
doPrint = true
}
if doPrint {
fmt.Fprintln(w, prevStates[stateKey])
}
}
_ = w.Flush()
}
printFinalStatus(app)
return nil, fmt.Errorf("Timed out (%ds) waiting for app %q to match desired state", timeout, appName)
}
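The two override forms that setParameterOverrides (below) accepts can be sketched on their own. `parseOverride` is an illustrative helper, not part of the Argo CD codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// parseOverride splits a -p argument into (component, name, value).
// Ksonnet apps use component=param=value; helm apps use param=value,
// leaving component empty.
func parseOverride(paramStr string, isKsonnet bool) (component, name, value string, err error) {
	if isKsonnet {
		parts := strings.SplitN(paramStr, "=", 3)
		if len(parts) != 3 {
			return "", "", "", fmt.Errorf("expected component=param=value, got %q", paramStr)
		}
		return parts[0], parts[1], parts[2], nil
	}
	parts := strings.SplitN(paramStr, "=", 2)
	if len(parts) != 2 {
		return "", "", "", fmt.Errorf("expected param=value, got %q", paramStr)
	}
	return "", parts[0], parts[1], nil
}

func main() {
	c, n, v, _ := parseOverride("guestbook=image=example/guestbook:latest", true)
	fmt.Println(c, n, v)
}
```

Note the SplitN limits: 3 for ksonnet and 2 for helm, so an `=` inside the value (e.g. a connection string) survives intact.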
// setParameterOverrides updates an existing parameter override in the application, or appends a new one.
// If the app is a ksonnet app, then parameters are expected to be in the form: component=param=value
// Otherwise, the app is assumed to be a helm app and parameters are expected to be in the form:
// param=value
func setParameterOverrides(app *argoappv1.Application, parameters []string) {
if len(parameters) == 0 {
return
@@ -1035,28 +901,15 @@ func setParameterOverrides(app *argoappv1.Application, parameters []string) {
} else {
newParams = make([]argoappv1.ComponentParameter, 0)
}
isKsonnetApp := app.Spec.Source.Environment != ""
for _, paramStr := range parameters {
var newParam argoappv1.ComponentParameter
if isKsonnetApp {
parts := strings.SplitN(paramStr, "=", 3)
if len(parts) != 3 {
log.Fatalf("Expected ksonnet parameter of the form: component=param=value. Received: %s", paramStr)
}
newParam = argoappv1.ComponentParameter{
Component: parts[0],
Name: parts[1],
Value: parts[2],
}
} else {
parts := strings.SplitN(paramStr, "=", 2)
if len(parts) != 2 {
log.Fatalf("Expected helm parameter of the form: param=value. Received: %s", paramStr)
}
newParam = argoappv1.ComponentParameter{
Name: parts[0],
Value: parts[1],
}
parts := strings.SplitN(paramStr, "=", 3)
if len(parts) != 3 {
log.Fatalf("Expected parameter of the form: component=param=value. Received: %s", paramStr)
}
newParam := argoappv1.ComponentParameter{
Component: parts[0],
Name: parts[1],
Value: parts[2],
}
index := -1
for i, cp := range newParams {
@@ -1166,8 +1019,18 @@ func NewApplicationRollbackCommand(clientOpts *argocdclient.ClientOptions) *cobr
})
errors.CheckError(err)
_, err = waitOnApplicationStatus(appIf, appName, timeout, false, false, true)
app, err = waitUntilOperationCompleted(appIf, appName, timeout)
errors.CheckError(err)
// get refreshed app before printing to show accurate sync/health status
app, err = appIf.Get(ctx, &application.ApplicationQuery{Name: &appName, Refresh: true})
errors.CheckError(err)
fmt.Printf(printOpFmtStr, "Application:", appName)
printOperationResult(app.Status.OperationState)
if !app.Status.OperationState.Phase.Successful() {
os.Exit(1)
}
},
}
command.Flags().BoolVar(&prune, "prune", false, "Allow deleting unexpected resources")

@@ -18,7 +18,6 @@ import (
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
)
@@ -71,10 +70,7 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
errors.CheckError(err)
// Install RBAC resources for managing the cluster
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
managerBearerToken, err := common.InstallClusterManagerRBAC(clientset)
errors.CheckError(err)
managerBearerToken := common.InstallClusterManagerRBAC(conf)
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
@@ -206,14 +202,9 @@ func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
// clientset, err := kubernetes.NewForConfig(conf)
// errors.CheckError(err)
for _, clusterName := range args {
// TODO(jessesuen): find the right context and remove manager RBAC artifacts
// err := common.UninstallClusterManagerRBAC(clientset)
// errors.CheckError(err)
// common.UninstallClusterManagerRBAC(conf)
_, err := clusterIf.Delete(context.Background(), &cluster.ClusterQuery{Server: clusterName})
errors.CheckError(err)
}

@@ -77,7 +77,6 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
// Perform the login
var tokenString string
var refreshToken string
if !sso {
tokenString = passwordLogin(acdClient, username, password)
} else {
@@ -86,7 +85,15 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
if !ssoConfigured(acdSet) {
log.Fatalf("ArgoCD instance is not configured with SSO")
}
tokenString, refreshToken = oauth2Login(server, clientOpts.PlainText)
tokenString = oauth2Login(server, clientOpts.PlainText)
// The token we just received from the OAuth2 flow was issued by dex. Argo CD
// currently does not back dex with any kind of persistent storage (it runs
// in-memory). As a result, this token cannot be used in any permanent capacity:
// restarts of dex will result in a different signing key, and sessions becoming
// invalid. Instead, we turn around and ask Argo CD (which *does* persist its
// signing keys) to re-sign the token, and that is what we store in the config.
// Should we ever decide to have a database layer for dex, the next line can be removed.
tokenString = tokenLogin(acdClient, tokenString)
}
parser := &jwt.Parser{
@@ -109,9 +116,8 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
Insecure: globalClientOpts.Insecure,
})
localCfg.UpsertUser(localconfig.User{
Name: ctxName,
AuthToken: tokenString,
RefreshToken: refreshToken,
Name: ctxName,
AuthToken: tokenString,
})
if ctxName == "" {
ctxName = server
@@ -157,9 +163,8 @@ func getFreePort() (int, error) {
return ln.Addr().(*net.TCPAddr).Port, ln.Close()
}
// oauth2Login opens a browser, runs a temporary HTTP server to delegate OAuth2 login flow and
// returns the JWT token and a refresh token (if supported)
func oauth2Login(host string, plaintext bool) (string, string) {
// oauth2Login opens a browser, runs a temporary HTTP server to delegate OAuth2 login flow and returns the JWT token
func oauth2Login(host string, plaintext bool) string {
ctx := context.Background()
port, err := getFreePort()
errors.CheckError(err)
@@ -178,7 +183,6 @@ func oauth2Login(host string, plaintext bool) (string, string) {
}
srv := &http.Server{Addr: ":" + strconv.Itoa(port)}
var tokenString string
var refreshToken string
loginCompleted := make(chan struct{})
callbackHandler := func(w http.ResponseWriter, r *http.Request) {
@@ -211,9 +215,8 @@ func oauth2Login(host string, plaintext bool) (string, string) {
log.Fatal(errMsg)
return
}
refreshToken, _ = tok.Extra("refresh_token").(string)
log.Debugf("Token: %s", tokenString)
log.Debugf("Refresh Token: %s", refreshToken)
successPage := `
<div style="height:100px; width:100%!; display:flex; flex-direction: column; justify-content: center; align-items:center; background-color:#2ecc71; color:white; font-size:22"><div>Authentication successful!</div></div>
<p style="margin-top:20px; font-size:18; text-align:center">Authentication was successful, you can now return to CLI. This page will close automatically</p>
@@ -245,7 +248,7 @@ func oauth2Login(host string, plaintext bool) (string, string) {
}()
<-loginCompleted
_ = srv.Shutdown(ctx)
return tokenString, refreshToken
return tokenString
}
func passwordLogin(acdClient argocdclient.Client, username, password string) string {
@@ -260,3 +263,14 @@ func passwordLogin(acdClient argocdclient.Client, username, password string) str
errors.CheckError(err)
return createdSession.Token
}
func tokenLogin(acdClient argocdclient.Client, token string) string {
sessConn, sessionIf := acdClient.NewSessionClientOrDie()
defer util.Close(sessConn)
sessionRequest := session.SessionCreateRequest{
Token: token,
}
createdSession, err := sessionIf.Create(context.Background(), &sessionRequest)
errors.CheckError(err)
return createdSession.Token
}

@@ -1,20 +1,17 @@
package commands
import (
"context"
"fmt"
"os"
"strconv"
"strings"
"text/tabwriter"
"time"
timeutil "github.com/argoproj/pkg/time"
"github.com/dustin/go-humanize"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"k8s.io/apimachinery/pkg/apis/meta/v1"
"strings"
"context"
"fmt"
"text/tabwriter"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
@@ -22,11 +19,8 @@ import (
"github.com/argoproj/argo-cd/server/project"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/git"
projectutil "github.com/argoproj/argo-cd/util/project"
)
const (
policyTemplate = "p, proj:%s:%s, applications, %s, %s/%s, %s"
"github.com/spf13/pflag"
"k8s.io/apimachinery/pkg/apis/meta/v1"
)
type projectOpts struct {
@@ -35,12 +29,6 @@ type projectOpts struct {
sources []string
}
type policyOpts struct {
action string
permission string
object string
}
func (opts *projectOpts) GetDestinations() []v1alpha1.ApplicationDestination {
destinations := make([]v1alpha1.ApplicationDestination, 0)
for _, destStr := range opts.destinations {
@@ -67,7 +55,6 @@ func NewProjectCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
os.Exit(1)
},
}
command.AddCommand(NewProjectRoleCommand(clientOpts))
command.AddCommand(NewProjectCreateCommand(clientOpts))
command.AddCommand(NewProjectDeleteCommand(clientOpts))
command.AddCommand(NewProjectListCommand(clientOpts))
@@ -80,341 +67,10 @@ func NewProjectCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
func addProjFlags(command *cobra.Command, opts *projectOpts) {
command.Flags().StringVarP(&opts.description, "description", "", "", "Project description")
command.Flags().StringVarP(&opts.description, "description", "", "desc", "Project description")
command.Flags().StringArrayVarP(&opts.destinations, "dest", "d", []string{},
"Permitted destination server and namespace (e.g. https://192.168.99.100:8443,default)")
command.Flags().StringArrayVarP(&opts.sources, "src", "s", []string{}, "Permitted git source repository URL")
}
func addPolicyFlags(command *cobra.Command, opts *policyOpts) {
command.Flags().StringVarP(&opts.action, "action", "a", "", "Action to grant/deny permission on (e.g. get, create, list, update, delete)")
command.Flags().StringVarP(&opts.permission, "permission", "p", "allow", "Whether to allow or deny access to object with the action. This can only be 'allow' or 'deny'")
command.Flags().StringVarP(&opts.object, "object", "o", "", "Object within the project to grant/deny access to. Use '*' for a wildcard. Access is granted to '<project>/<object>'")
}
// NewProjectRoleCommand returns a new instance of the `argocd proj role` command
func NewProjectRoleCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
roleCommand := &cobra.Command{
Use: "role",
Short: "Manage a project's roles",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
},
}
roleCommand.AddCommand(NewProjectRoleListCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleGetCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleCreateCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleDeleteCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleCreateTokenCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleDeleteTokenCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleAddPolicyCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleRemovePolicyCommand(clientOpts))
return roleCommand
}
// NewProjectRoleAddPolicyCommand returns a new instance of an `argocd proj role add-policy` command
func NewProjectRoleAddPolicyCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts policyOpts
)
var command = &cobra.Command{
Use: "add-policy PROJECT ROLE-NAME",
Short: "Add a policy to a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
if opts.permission != "allow" && opts.permission != "deny" {
log.Fatal("Permission flag can only have the values 'allow' or 'deny'")
}
if len(opts.action) == 0 {
log.Fatal("Action cannot be empty")
}
if len(opts.object) == 0 {
log.Fatal("Object cannot be empty")
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
roleIndex, err := projectutil.GetRoleIndexByName(proj, roleName)
if err != nil {
log.Fatal(err)
}
role := proj.Spec.Roles[roleIndex]
policy := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
proj.Spec.Roles[roleIndex].Policies = append(role.Policies, policy)
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
addPolicyFlags(command, &opts)
return command
}
// NewProjectRoleRemovePolicyCommand returns a new instance of an `argocd proj role remove-policy` command
func NewProjectRoleRemovePolicyCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts policyOpts
)
var command = &cobra.Command{
Use: "remove-policy PROJECT ROLE-NAME",
Short: "Remove a policy from a role within a project",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
if opts.permission != "allow" && opts.permission != "deny" {
log.Fatal("Permission flag can only have the values 'allow' or 'deny'")
}
if len(opts.action) == 0 {
log.Fatal("Action cannot be empty")
}
if len(opts.object) == 0 {
log.Fatal("Object cannot be empty")
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
roleIndex, err := projectutil.GetRoleIndexByName(proj, roleName)
if err != nil {
log.Fatal(err)
}
role := proj.Spec.Roles[roleIndex]
policyToRemove := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
duplicateIndex := -1
for i, policy := range role.Policies {
if policy == policyToRemove {
duplicateIndex = i
break
}
}
if duplicateIndex < 0 {
return
}
role.Policies[duplicateIndex] = role.Policies[len(role.Policies)-1]
proj.Spec.Roles[roleIndex].Policies = role.Policies[:len(role.Policies)-1]
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
addPolicyFlags(command, &opts)
return command
}
// NewProjectRoleCreateCommand returns a new instance of an `argocd proj role create` command
func NewProjectRoleCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
description string
)
var command = &cobra.Command{
Use: "create PROJECT ROLE-NAME",
Short: "Create a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
_, err = projectutil.GetRoleIndexByName(proj, roleName)
if err == nil {
return
}
proj.Spec.Roles = append(proj.Spec.Roles, v1alpha1.ProjectRole{Name: roleName, Description: description})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
command.Flags().StringVarP(&description, "description", "", "", "Project description")
return command
}
// NewProjectRoleDeleteCommand returns a new instance of an `argocd proj role delete` command
func NewProjectRoleDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "delete PROJECT ROLE-NAME",
Short: "Delete a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index, err := projectutil.GetRoleIndexByName(proj, roleName)
if err != nil {
return
}
proj.Spec.Roles[index] = proj.Spec.Roles[len(proj.Spec.Roles)-1]
proj.Spec.Roles = proj.Spec.Roles[:len(proj.Spec.Roles)-1]
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
return command
}
// NewProjectRoleCreateTokenCommand returns a new instance of an `argocd proj role create-token` command
func NewProjectRoleCreateTokenCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
expiresIn string
)
var command = &cobra.Command{
Use: "create-token PROJECT ROLE-NAME",
Short: "Create a project token",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
duration, err := timeutil.ParseDuration(expiresIn)
errors.CheckError(err)
token, err := projIf.CreateToken(context.Background(), &project.ProjectTokenCreateRequest{Project: projName, Role: roleName, ExpiresIn: int64(duration.Seconds())})
errors.CheckError(err)
fmt.Println(token.Token)
},
}
command.Flags().StringVarP(&expiresIn, "expires-in", "e", "0s", "Duration before the token will expire. (Default: No expiration)")
return command
}
// NewProjectRoleDeleteTokenCommand returns a new instance of an `argocd proj role delete-token` command
func NewProjectRoleDeleteTokenCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "delete-token PROJECT ROLE-NAME ISSUED-AT",
Short: "Delete a project token",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
issuedAt, err := strconv.ParseInt(args[2], 10, 64)
if err != nil {
log.Fatal(err)
}
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
_, err = projIf.DeleteToken(context.Background(), &project.ProjectTokenDeleteRequest{Project: projName, Role: roleName, Iat: issuedAt})
errors.CheckError(err)
},
}
return command
}
// NewProjectRoleListCommand returns a new instance of an `argocd proj roles list` command
func NewProjectRoleListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "list PROJECT",
Short: "List all the roles in a project",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ROLE-NAME\tDESCRIPTION\n")
for _, role := range proj.Spec.Roles {
fmt.Fprintf(w, "%s\t%s\n", role.Name, role.Description)
}
_ = w.Flush()
},
}
return command
}
// NewProjectRoleGetCommand returns a new instance of an `argocd proj roles get` command
func NewProjectRoleGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "get PROJECT ROLE-NAME",
Short: "Get the details of a specific role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index, err := projectutil.GetRoleIndexByName(proj, roleName)
errors.CheckError(err)
role := proj.Spec.Roles[index]
printRoleFmtStr := "%-15s%s\n"
fmt.Printf(printRoleFmtStr, "Role Name:", roleName)
fmt.Printf(printRoleFmtStr, "Description:", role.Description)
fmt.Printf("Policies:\n")
fmt.Printf("%s\n", proj.ProjectPoliciesString())
fmt.Printf("JWT Tokens:\n")
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ID\tISSUED-AT\tEXPIRES-AT\n")
for _, token := range role.JWTTokens {
expiresAt := "<none>"
if token.ExpiresAt > 0 {
expiresAt = humanizeTimestamp(token.ExpiresAt)
}
fmt.Fprintf(w, "%d\t%s\t%s\n", token.IssuedAt, humanizeTimestamp(token.IssuedAt), expiresAt)
}
_ = w.Flush()
},
}
return command
}
func humanizeTimestamp(epoch int64) string {
ts := time.Unix(epoch, 0)
return fmt.Sprintf("%s (%s)", ts.Format(time.RFC3339), humanize.Time(ts))
}
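`humanizeTimestamp` pairs an absolute RFC3339 timestamp with a relative phrase from the third-party `github.com/dustin/go-humanize` package. The absolute half can be reproduced with the standard library alone; this sketch formats in UTC for deterministic output, whereas the original uses local time:

```go
package main

import (
	"fmt"
	"time"
)

// formatEpoch renders a Unix epoch as RFC3339, the absolute half of
// humanizeTimestamp above. UTC keeps the output independent of the host
// timezone.
func formatEpoch(epoch int64) string {
	return time.Unix(epoch, 0).UTC().Format(time.RFC3339)
}

func main() {
	fmt.Println(formatEpoch(0)) // → 1970-01-01T00:00:00Z
}
```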
// NewProjectCreateCommand returns a new instance of an `argocd proj create` command
@@ -586,13 +242,8 @@ func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
errors.CheckError(err)
for _, item := range proj.Spec.SourceRepos {
if item == "*" && item == url {
log.Info("Wildcard source repository is already defined in project")
return
}
if item == git.NormalizeGitURL(url) {
log.Info("Specified source repository is already defined in project")
return
log.Fatal("Specified source repository is already defined in project")
}
}
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos, url)
@@ -623,17 +274,13 @@ func NewProjectRemoveSourceCommand(clientOpts *argocdclient.ClientOptions) *cobr
index := -1
for i, item := range proj.Spec.SourceRepos {
if item == "*" && item == url {
index = i
break
}
if item == git.NormalizeGitURL(url) {
index = i
break
}
}
if index == -1 {
log.Info("Specified source repository does not exist in project")
log.Fatal("Specified source repository does not exist in project")
} else {
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos[:index], proj.Spec.SourceRepos[index+1:]...)
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})


@@ -1,75 +0,0 @@
package commands
import (
"fmt"
"os"
jwt "github.com/dgrijalva/jwt-go"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util/localconfig"
"github.com/argoproj/argo-cd/util/session"
)
// NewReloginCommand returns a new instance of `argocd relogin` command
func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
password string
)
var command = &cobra.Command{
Use: "relogin",
Short: "Refresh an expired authentication token",
Long: "Refresh an expired authentication token",
Run: func(c *cobra.Command, args []string) {
if len(args) != 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
localCfg, err := localconfig.ReadLocalConfig(globalClientOpts.ConfigPath)
errors.CheckError(err)
if localCfg == nil {
log.Fatalf("No context found. Login using `argocd login`")
}
configCtx, err := localCfg.ResolveContext(localCfg.CurrentContext)
errors.CheckError(err)
parser := &jwt.Parser{
SkipClaimsValidation: true,
}
claims := jwt.StandardClaims{}
_, _, err = parser.ParseUnverified(configCtx.User.AuthToken, &claims)
errors.CheckError(err)
var tokenString string
var refreshToken string
if claims.Issuer == session.SessionManagerClaimsIssuer {
clientOpts := argocdclient.ClientOptions{
ConfigPath: "",
ServerAddr: configCtx.Server.Server,
Insecure: configCtx.Server.Insecure,
PlainText: configCtx.Server.PlainText,
}
acdClient := argocdclient.NewClientOrDie(&clientOpts)
fmt.Printf("Relogging in as '%s'\n", claims.Subject)
tokenString = passwordLogin(acdClient, claims.Subject, password)
} else {
fmt.Println("Reinitiating SSO login")
tokenString, refreshToken = oauth2Login(configCtx.Server.Server, configCtx.Server.PlainText)
}
localCfg.UpsertUser(localconfig.User{
Name: localCfg.CurrentContext,
AuthToken: tokenString,
RefreshToken: refreshToken,
})
err = localconfig.WriteLocalConfig(*localCfg, globalClientOpts.ConfigPath)
errors.CheckError(err)
fmt.Printf("Context '%s' updated\n", localCfg.CurrentContext)
},
}
command.Flags().StringVar(&password, "password", "", "the password of an account to authenticate")
return command
}


@@ -27,7 +27,6 @@ func NewCommand() *cobra.Command {
command.AddCommand(NewClusterCommand(&clientOpts, pathOpts))
command.AddCommand(NewApplicationCommand(&clientOpts))
command.AddCommand(NewLoginCommand(&clientOpts))
command.AddCommand(NewReloginCommand(&clientOpts))
command.AddCommand(NewRepoCommand(&clientOpts))
command.AddCommand(NewContextCommand(&clientOpts))
command.AddCommand(NewProjectCommand(&clientOpts))


@@ -54,7 +54,6 @@ func NewVersionCmd(clientOpts *argocdclient.ClientOptions) *cobra.Command {
fmt.Printf(" GoVersion: %s\n", serverVers.GoVersion)
fmt.Printf(" Compiler: %s\n", serverVers.Compiler)
fmt.Printf(" Platform: %s\n", serverVers.Platform)
fmt.Printf(" Ksonnet Version: %s\n", serverVers.KsonnetVersion)
}
},


@@ -73,8 +73,6 @@ var (
AnnotationHook = MetadataPrefix + "/hook"
// AnnotationHookDeletePolicy is the policy of deleting a hook
AnnotationHookDeletePolicy = MetadataPrefix + "/hook-delete-policy"
// AnnotationHelmHook is the helm hook annotation
AnnotationHelmHook = "helm.sh/hook"
// LabelKeyApplicationControllerInstanceID is the label that allows separating applications among multiple running application controllers.
LabelKeyApplicationControllerInstanceID = application.ApplicationFullName + "/controller-instanceid"
@@ -102,8 +100,4 @@ var ArgoCDManagerPolicyRules = []rbacv1.PolicyRule{
Resources: []string{"*"},
Verbs: []string{"*"},
},
{
NonResourceURLs: []string{"*"},
Verbs: []string{"*"},
},
}


@@ -4,6 +4,7 @@ import (
"fmt"
"time"
"github.com/argoproj/argo-cd/errors"
log "github.com/sirupsen/logrus"
apiv1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
@@ -11,6 +12,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
// CreateServiceAccount creates a service account
@@ -18,7 +20,7 @@ func CreateServiceAccount(
clientset kubernetes.Interface,
serviceAccountName string,
namespace string,
) error {
) {
serviceAccount := apiv1.ServiceAccount{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
@@ -32,13 +34,12 @@ func CreateServiceAccount(
_, err := clientset.CoreV1().ServiceAccounts(namespace).Create(&serviceAccount)
if err != nil {
if !apierr.IsAlreadyExists(err) {
return fmt.Errorf("Failed to create service account %q: %v", serviceAccountName, err)
log.Fatalf("Failed to create service account '%s': %v\n", serviceAccountName, err)
}
log.Infof("ServiceAccount %q already exists", serviceAccountName)
return nil
fmt.Printf("ServiceAccount '%s' already exists\n", serviceAccountName)
return
}
log.Infof("ServiceAccount %q created", serviceAccountName)
return nil
fmt.Printf("ServiceAccount '%s' created\n", serviceAccountName)
}
// CreateClusterRole creates a cluster role
@@ -46,7 +47,7 @@ func CreateClusterRole(
clientset kubernetes.Interface,
clusterRoleName string,
rules []rbacv1.PolicyRule,
) error {
) {
clusterRole := rbacv1.ClusterRole{
TypeMeta: metav1.TypeMeta{
APIVersion: "rbac.authorization.k8s.io/v1",
@@ -61,17 +62,16 @@ func CreateClusterRole(
_, err := crclient.Create(&clusterRole)
if err != nil {
if !apierr.IsAlreadyExists(err) {
return fmt.Errorf("Failed to create ClusterRole %q: %v", clusterRoleName, err)
log.Fatalf("Failed to create ClusterRole '%s': %v\n", clusterRoleName, err)
}
_, err = crclient.Update(&clusterRole)
if err != nil {
return fmt.Errorf("Failed to update ClusterRole %q: %v", clusterRoleName, err)
log.Fatalf("Failed to update ClusterRole '%s': %v\n", clusterRoleName, err)
}
log.Infof("ClusterRole %q updated", clusterRoleName)
fmt.Printf("ClusterRole '%s' updated\n", clusterRoleName)
} else {
log.Infof("ClusterRole %q created", clusterRoleName)
fmt.Printf("ClusterRole '%s' created\n", clusterRoleName)
}
return nil
}
// CreateClusterRoleBinding create a ClusterRoleBinding
@@ -81,7 +81,7 @@ func CreateClusterRoleBinding(
serviceAccountName,
clusterRoleName string,
namespace string,
) error {
) {
roleBinding := rbacv1.ClusterRoleBinding{
TypeMeta: metav1.TypeMeta{
APIVersion: "rbac.authorization.k8s.io/v1",
@@ -106,34 +106,22 @@ func CreateClusterRoleBinding(
_, err := clientset.RbacV1().ClusterRoleBindings().Create(&roleBinding)
if err != nil {
if !apierr.IsAlreadyExists(err) {
return fmt.Errorf("Failed to create ClusterRoleBinding %s: %v", clusterBindingRoleName, err)
log.Fatalf("Failed to create ClusterRoleBinding %s: %v\n", clusterBindingRoleName, err)
}
log.Infof("ClusterRoleBinding %q already exists", clusterBindingRoleName)
return nil
fmt.Printf("ClusterRoleBinding '%s' already exists\n", clusterBindingRoleName)
return
}
log.Infof("ClusterRoleBinding %q created, bound %q to %q", clusterBindingRoleName, serviceAccountName, clusterRoleName)
return nil
fmt.Printf("ClusterRoleBinding '%s' created, bound '%s' to '%s'\n", clusterBindingRoleName, serviceAccountName, clusterRoleName)
}
// InstallClusterManagerRBAC installs RBAC resources for a cluster manager to operate a cluster. Returns a token
func InstallClusterManagerRBAC(clientset kubernetes.Interface) (string, error) {
func InstallClusterManagerRBAC(conf *rest.Config) string {
const ns = "kube-system"
var err error
err = CreateServiceAccount(clientset, ArgoCDManagerServiceAccount, ns)
if err != nil {
return "", err
}
err = CreateClusterRole(clientset, ArgoCDManagerClusterRole, ArgoCDManagerPolicyRules)
if err != nil {
return "", err
}
err = CreateClusterRoleBinding(clientset, ArgoCDManagerClusterRoleBinding, ArgoCDManagerServiceAccount, ArgoCDManagerClusterRole, ns)
if err != nil {
return "", err
}
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
CreateServiceAccount(clientset, ArgoCDManagerServiceAccount, ns)
CreateClusterRole(clientset, ArgoCDManagerClusterRole, ArgoCDManagerPolicyRules)
CreateClusterRoleBinding(clientset, ArgoCDManagerClusterRoleBinding, ArgoCDManagerServiceAccount, ArgoCDManagerClusterRole, ns)
var serviceAccount *apiv1.ServiceAccount
var secretName string
@@ -149,51 +137,52 @@ func InstallClusterManagerRBAC(clientset kubernetes.Interface) (string, error) {
return true, nil
})
if err != nil {
return "", fmt.Errorf("Failed to wait for service account secret: %v", err)
log.Fatalf("Failed to wait for service account secret: %v", err)
}
secret, err := clientset.CoreV1().Secrets(ns).Get(secretName, metav1.GetOptions{})
if err != nil {
return "", fmt.Errorf("Failed to retrieve secret %q: %v", secretName, err)
log.Fatalf("Failed to retrieve secret '%s': %v", secretName, err)
}
token, ok := secret.Data["token"]
if !ok {
return "", fmt.Errorf("Secret %q for service account %q did not have a token", secretName, serviceAccount)
log.Fatalf("Secret '%s' for service account '%s' did not have a token", secretName, serviceAccount)
}
return string(token), nil
return string(token)
}
// UninstallClusterManagerRBAC removes RBAC resources for a cluster manager to operate a cluster
func UninstallClusterManagerRBAC(clientset kubernetes.Interface) error {
return UninstallRBAC(clientset, "kube-system", ArgoCDManagerClusterRoleBinding, ArgoCDManagerClusterRole, ArgoCDManagerServiceAccount)
func UninstallClusterManagerRBAC(conf *rest.Config) {
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
UninstallRBAC(clientset, "kube-system", ArgoCDManagerClusterRoleBinding, ArgoCDManagerClusterRole, ArgoCDManagerServiceAccount)
}
// UninstallRBAC uninstalls RBAC related resources for a binding, role, and service account
func UninstallRBAC(clientset kubernetes.Interface, namespace, bindingName, roleName, serviceAccount string) error {
func UninstallRBAC(clientset kubernetes.Interface, namespace, bindingName, roleName, serviceAccount string) {
if err := clientset.RbacV1().ClusterRoleBindings().Delete(bindingName, &metav1.DeleteOptions{}); err != nil {
if !apierr.IsNotFound(err) {
return fmt.Errorf("Failed to delete ClusterRoleBinding: %v", err)
log.Fatalf("Failed to delete ClusterRoleBinding: %v\n", err)
}
log.Infof("ClusterRoleBinding %q not found", bindingName)
fmt.Printf("ClusterRoleBinding '%s' not found\n", bindingName)
} else {
log.Infof("ClusterRoleBinding %q deleted", bindingName)
fmt.Printf("ClusterRoleBinding '%s' deleted\n", bindingName)
}
if err := clientset.RbacV1().ClusterRoles().Delete(roleName, &metav1.DeleteOptions{}); err != nil {
if !apierr.IsNotFound(err) {
return fmt.Errorf("Failed to delete ClusterRole: %v", err)
log.Fatalf("Failed to delete ClusterRole: %v\n", err)
}
log.Infof("ClusterRole %q not found", roleName)
fmt.Printf("ClusterRole '%s' not found\n", roleName)
} else {
log.Infof("ClusterRole %q deleted", roleName)
fmt.Printf("ClusterRole '%s' deleted\n", roleName)
}
if err := clientset.CoreV1().ServiceAccounts(namespace).Delete(serviceAccount, &metav1.DeleteOptions{}); err != nil {
if !apierr.IsNotFound(err) {
return fmt.Errorf("Failed to delete ServiceAccount: %v", err)
log.Fatalf("Failed to delete ServiceAccount: %v\n", err)
}
log.Infof("ServiceAccount %q in namespace %q not found", serviceAccount, namespace)
fmt.Printf("ServiceAccount '%s' in namespace '%s' not found\n", serviceAccount, namespace)
} else {
log.Infof("ServiceAccount %q deleted", serviceAccount)
fmt.Printf("ServiceAccount '%s' deleted\n", serviceAccount)
}
return nil
}

View File

@@ -10,7 +10,6 @@ import (
"time"
log "github.com/sirupsen/logrus"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -47,7 +46,6 @@ type ApplicationController struct {
namespace string
kubeClientset kubernetes.Interface
applicationClientset appclientset.Interface
auditLogger *argo.AuditLogger
appRefreshQueue workqueue.RateLimitingInterface
appOperationQueue workqueue.RateLimitingInterface
appInformer cache.SharedIndexInformer
@@ -90,7 +88,6 @@ func NewApplicationController(
statusRefreshTimeout: appResyncPeriod,
forceRefreshApps: make(map[string]bool),
forceRefreshAppsMutex: &sync.Mutex{},
auditLogger: argo.NewAuditLogger(namespace, kubeClientset, "application-controller"),
}
}
@@ -204,7 +201,7 @@ func retryUntilSucceed(action func() error, desc string, ctx context.Context, ti
log.Infof("Stop retrying %s", desc)
return
} else {
log.Warnf("Failed to %s: %+v, retrying in %v", desc, err, timeout)
log.Warnf("Failed to %s: %v, retrying in %v", desc, err, timeout)
time.Sleep(timeout)
}
}
@@ -286,7 +283,6 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
Type: appv1.ApplicationConditionDeletionError,
Message: err.Error(),
})
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonStatusRefreshed, Action: "refresh_status"}, v1.EventTypeWarning)
} else {
log.Infof("Successfully deleted resources for application %s", app.Name)
}
@@ -400,7 +396,6 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
// If operation is completed, clear the operation field to indicate no operation is
// in progress.
patch["operation"] = nil
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Action: "refresh_status"}, v1.EventTypeNormal)
}
if reflect.DeepEqual(app.Status.OperationState, state) {
log.Infof("No operation updates necessary to '%s'. Skipping patch", app.Name)
@@ -487,14 +482,13 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
// Returns true if application never been compared, has changed or comparison result has expired.
func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application, statusRefreshTimeout time.Duration) bool {
var reason string
expired := app.Status.ComparisonResult.ComparedAt.Add(statusRefreshTimeout).Before(time.Now().UTC())
if ctrl.isRefreshForced(app.Name) {
reason = "force refresh"
} else if app.Status.ComparisonResult.Status == appv1.ComparisonStatusUnknown && expired {
} else if app.Status.ComparisonResult.Status == appv1.ComparisonStatusUnknown {
reason = "comparison status unknown"
} else if !app.Spec.Source.Equals(app.Status.ComparisonResult.ComparedTo) {
reason = "spec.source differs"
} else if expired {
} else if app.Status.ComparisonResult.ComparedAt.Add(statusRefreshTimeout).Before(time.Now().UTC()) {
reason = fmt.Sprintf("comparison expired. comparedAt: %v, expiry: %v", app.Status.ComparisonResult.ComparedAt, statusRefreshTimeout)
}
if reason != "" {
@@ -559,7 +553,6 @@ func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application)
// setApplicationHealth updates the health statuses of all resources performed in the comparison
func setApplicationHealth(comparisonResult *appv1.ComparisonResult) (*appv1.HealthStatus, error) {
var savedErr error
appHealth := appv1.HealthStatus{Status: appv1.HealthStatusHealthy}
if comparisonResult.Status == appv1.ComparisonStatusUnknown {
appHealth.Status = appv1.HealthStatusUnknown
@@ -574,8 +567,8 @@ func setApplicationHealth(comparisonResult *appv1.ComparisonResult) (*appv1.Heal
return nil, err
}
healthState, err := health.GetAppHealth(&obj)
if err != nil && savedErr == nil {
savedErr = err
if err != nil {
return nil, err
}
resource.Health = *healthState
}
@@ -584,7 +577,7 @@ func setApplicationHealth(comparisonResult *appv1.ComparisonResult) (*appv1.Heal
appHealth.Status = resource.Health.Status
}
}
return &appHealth, savedErr
return &appHealth, nil
}
// updateAppStatus persists updates to application status. Detects if there patch


@@ -7,7 +7,6 @@ import (
"time"
log "github.com/sirupsen/logrus"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
@@ -121,8 +120,6 @@ func (s *ksonnetAppStateManager) getTargetObjs(app *v1alpha1.Application, revisi
Revision: revision,
ComponentParameterOverrides: mfReqOverrides,
AppLabel: app.Name,
ValueFiles: app.Spec.Source.ValuesFiles,
Namespace: app.Spec.Destination.Namespace,
})
if err != nil {
return nil, nil, err
@@ -190,15 +187,11 @@ func (s *ksonnetAppStateManager) getLiveObjs(app *v1alpha1.Application, targetOb
}
apiResource, err := kubeutil.ServerResourceForGroupVersionKind(disco, gvk)
if err != nil {
if !apierr.IsNotFound(err) {
return nil, nil, err
}
// If we get here, the app is comprised of a custom resource which has yet to be registered
} else {
liveObj, err = kubeutil.GetLiveResource(dclient, targetObj, apiResource, app.Spec.Destination.Namespace)
if err != nil {
return nil, nil, err
}
return nil, nil, err
}
liveObj, err = kubeutil.GetLiveResource(dclient, targetObj, apiResource, app.Spec.Destination.Namespace)
if err != nil {
return nil, nil, err
}
}
controlledLiveObj[i] = liveObj


@@ -1,52 +0,0 @@
package controller
import (
"testing"
"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
var podManifest = []byte(`
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- image: nginx:1.7.9
name: nginx
resources:
requests:
cpu: 0.2
`)
func newPod() *unstructured.Unstructured {
var un unstructured.Unstructured
err := yaml.Unmarshal(podManifest, &un)
if err != nil {
panic(err)
}
return &un
}
func TestIsHook(t *testing.T) {
pod := newPod()
assert.False(t, isHook(pod))
pod.SetAnnotations(map[string]string{"helm.sh/hook": "post-install"})
assert.True(t, isHook(pod))
pod = newPod()
pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": "PreSync"})
assert.True(t, isHook(pod))
pod = newPod()
pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": "Skip"})
assert.False(t, isHook(pod))
pod = newPod()
pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": "Unknown"})
assert.False(t, isHook(pod))
}


@@ -431,7 +431,7 @@ func (sc *syncContext) doHookSync(syncTasks []syncTask, hooks []*unstructured.Un
sc.setOperationPhase(appv1.OperationSucceeded, "successfully synced")
}
// getHooks returns all ArgoCD hooks, optionally filtered by ones of the specific type(s)
// getHooks returns all hooks, or ones of the specific type(s)
func (sc *syncContext) getHooks(hookTypes ...appv1.HookType) ([]*unstructured.Unstructured, error) {
var hooks []*unstructured.Unstructured
for _, manifest := range sc.manifestInfo.Manifests {
@@ -440,9 +440,7 @@ func (sc *syncContext) getHooks(hookTypes ...appv1.HookType) ([]*unstructured.Un
if err != nil {
return nil, err
}
if !isArgoHook(&hook) {
// TODO: in the future, if we want to map helm hooks to ArgoCD lifecycles, we should
// include helm hooks in the returned list
if !isHook(&hook) {
continue
}
if len(hookTypes) > 0 {
@@ -616,24 +614,9 @@ func isHookType(hook *unstructured.Unstructured, hookType appv1.HookType) bool {
return false
}
// isHook indicates if the object is either a ArgoCD or Helm hook
// isHook tells whether or not the supplied object is an application lifecycle hook, or a normal,
// synced application resource
func isHook(obj *unstructured.Unstructured) bool {
return isArgoHook(obj) || isHelmHook(obj)
}
// isHelmHook indicates if the supplied object is a helm hook
func isHelmHook(obj *unstructured.Unstructured) bool {
annotations := obj.GetAnnotations()
if annotations == nil {
return false
}
_, ok := annotations[common.AnnotationHelmHook]
return ok
}
// isArgoHook indicates if the supplied object is an ArgoCD application lifecycle hook
// (vs. a normal, synced application resource)
func isArgoHook(obj *unstructured.Unstructured) bool {
annotations := obj.GetAnnotations()
if annotations == nil {
return false


@@ -7,8 +7,6 @@
* [Tracking Strategies](tracking_strategies.md)
## Features
* [Application Sources](application_sources.md)
* [Application Parameters](parameters.md)
* [Resource Health](health.md)
* [Resource Hooks](resource_hooks.md)
* [Single Sign On](sso.md)


@@ -1,102 +0,0 @@
# Application Source Types
ArgoCD supports several different ways in which Kubernetes manifests can be defined:
* [ksonnet](https://ksonnet.io) applications
* [helm](https://helm.sh) charts
* Simple directory of YAML/JSON manifests
Some additional considerations should be made when deploying apps of a particular type:
## Ksonnet
### Environments
Ksonnet has a first class concept of an "environment." To create an application from a ksonnet
app directory, an environment must be specified. For example, the following command creates the
"guestbook-default" app, which points to the `default` environment:
```
argocd app create guestbook-default --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --env default
```
### Parameters
Ksonnet parameters all belong to a component. For example, the following are the parameters
available in the guestbook app, all of which belong to the `guestbook-ui` component:
```
$ ks param list
COMPONENT PARAM VALUE
========= ===== =====
guestbook-ui containerPort 80
guestbook-ui image "gcr.io/heptio-images/ks-guestbook-demo:0.1"
guestbook-ui name "guestbook-ui"
guestbook-ui replicas 1
guestbook-ui servicePort 80
guestbook-ui type "LoadBalancer"
```
When overriding ksonnet parameters in ArgoCD, the component name should also be specified in the
`argocd app set` command, in the form of `-p COMPONENT=PARAM=VALUE`. For example:
```
argocd app set guestbook-default -p guestbook-ui=image=gcr.io/heptio-images/ks-guestbook-demo:0.1
```
## Helm
### Values Files
Helm has the ability to use a different, or even multiple, "values.yaml" files to derive its
parameters from. Alternate or multiple values file(s) can be specified using the `--values`
flag. The flag can be repeated to support multiple values files:
```
argocd app set helm-guestbook --values values-production.yaml
```
### Helm Parameters
Helm has the ability to set parameter values, which override any values in
a `values.yaml`. For example, `service.type` is a common parameter which is exposed in a Helm chart:
```
helm template . --set service.type=LoadBalancer
```
Similarly ArgoCD can override values in the `values.yaml` parameters using `argo app set` command,
in the form of `-p PARAM=VALUE`. For example:
```
argocd app set helm-guestbook -p service.type=LoadBalancer
```
### Helm Hooks
Helm hooks are equivalent in concept to [ArgoCD resource hooks](resource_hooks.md). In helm, a hook
is any normal kubernetes resource annotated with the `helm.sh/hook` annotation. When ArgoCD deploys
a helm application which contains helm hooks, all helm hook resources are currently ignored during
the `kubectl apply` of the manifests. There is an
[open issue](https://github.com/argoproj/argo-cd/issues/355) to map Helm hooks to ArgoCD's concept
of Pre/Post/Sync hooks.
### Random Data
Helm templating has the ability to generate random data during chart rendering via the
`randAlphaNum` function. Many helm charts from the [charts repository](https://github.com/helm/charts)
make use of this feature. For example, the following is the secret for the
[redis helm chart](https://github.com/helm/charts/blob/master/stable/redis/templates/secrets.yaml):
```
data:
{{- if .Values.password }}
redis-password: {{ .Values.password | b64enc | quote }}
{{- else }}
redis-password: {{ randAlphaNum 10 | b64enc | quote }}
{{- end }}
```
The ArgoCD application controller periodically compares git state against the live state, running
the `helm template <CHART>` command to generate the helm manifests. Because the random value is
regenerated every time the comparison is made, any application which makes use of the `randAlphaNum`
function will always be in an `OutOfSync` state. This can be mitigated by explicitly setting a
value in the values.yaml, such that the value is stable between each comparison. For example:
```
argocd app set redis -p password=abc123
```


@@ -22,57 +22,15 @@ manifests when provided the following inputs:
* repository URL
* git revision (commit, tag, branch)
* application path
* template specific settings: parameters, ksonnet environments, helm values.yaml
* application environment
### Application Controller
The application controller is a Kubernetes controller which continuously monitors running
applications and compares the current, live state against the desired target state (as specified in
the git repo). It detects `OutOfSync` application state and optionally takes corrective action. It
is responsible for invoking any user-defined hooks for lifecycle events (PreSync, Sync, PostSync)
the git repo). It detects out-of-sync application state and optionally takes corrective action. It
is responsible for invoking any user-defined handlers (argo workflows) for Sync, OutOfSync events
### Application CRD (Custom Resource Definition)
The Application CRD is the Kubernetes resource object representing a deployed application instance
in an environment. It is defined by two key pieces of information:
* `source` reference to the desired state in git (repository, revision, path, environment)
* `destination` reference to the target cluster and namespace.
An example spec is as follows:
```
spec:
project: default
source:
repoURL: https://github.com/argoproj/argocd-example-apps.git
targetRevision: HEAD
path: guestbook
environment: default
destination:
server: https://kubernetes.default.svc
namespace: default
```
### AppProject CRD (Custom Resource Definition)
The AppProject CRD is the Kubernetes resource object representing a grouping of applications. It is defined by three key pieces of information:
* `sourceRepos` reference to the repositories that applications within the project can pull manifests from.
* `destinations` reference to clusters and namespaces that applications within the project can deploy into.
* `roles` list of entities with definitions of their access to resources within the project.
An example spec is as follows:
```
spec:
description: Description of the project
destinations:
- namespace: default
server: https://kubernetes.default.svc
roles:
- description: Description of the role
jwtTokens:
- iat: 1535390316
name: role-name
policies:
- p, proj:proj-name:role-name, applications, get, proj-name/*, allow
- p, proj:proj-name:role-name, applications, sync, proj-name/*, deny
sourceRepos:
- https://github.com/argoproj/argocd-example-apps.git
```
in an environment. It holds a reference to the desired target state (repo, revision, app, environment)
of which the application controller will enforce state against.


@@ -1,37 +1,30 @@
# ArgoCD Getting Started
An example guestbook application is provided to demonstrate how ArgoCD works.
An example Ksonnet guestbook application is provided to demonstrate how ArgoCD works.
## Requirements
* Installed [minikube](https://github.com/kubernetes/minikube#installation)
* Installed [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool
* Have a [kubeconfig](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) file (default location is `~/.kube/config`).
## 1. Install ArgoCD
```
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v0.7.2/manifests/install.yaml
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/install.yaml
```
This will create a new namespace, `argocd`, where ArgoCD services and application resources will live.
NOTE:
* On GKE with RBAC enabled, you may need to grant your account the ability to create new cluster roles
```
kubectl create clusterrolebinding YOURNAME-cluster-admin-binding --clusterrole=cluster-admin --user=YOUREMAIL@gmail.com
$ kubectl create clusterrolebinding YOURNAME-cluster-admin-binding --clusterrole=cluster-admin --user=YOUREMAIL@gmail.com
```
## 2. Download ArgoCD CLI
Download the latest ArgoCD version:
On Mac:
```
brew install argoproj/tap/argocd
```
On Linux:
```
curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v0.7.2/argocd-linux-amd64
curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v0.6.0/argocd-darwin-amd64
chmod +x /usr/local/bin/argocd
```
@@ -60,14 +53,13 @@ argocd login $(minikube service argocd-server -n argocd --url | cut -d'/' -f 3)
```
Other clusters:
```
kubectl get svc -n argocd argocd-server
kubectl get svc argocd-server
argocd login <EXTERNAL-IP>
```
After logging in, change the password using the command:
```
argocd account update-password
argocd relogin
```
@@ -123,7 +115,7 @@ After connecting a git repository, select the guestbook application for creation
Applications can be also be created using the ArgoCD CLI:
```
argocd app create guestbook-default --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --env default
argocd app create --name guestbook-default --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --env default
```
## 7. Sync (deploy) the application
@@ -165,13 +157,17 @@ Service guestbook-ui service "guestbook-ui" created
Deployment guestbook-ui deployment.apps "guestbook-ui" created
```
This command retrieves the manifests from git repository and performs a `kubectl apply` of the
manifests. The guestbook app is now running and you can now view its resource
This command retrieves the manifests from the ksonnet app in the git repository and performs a
`kubectl apply` of the manifests. The guestbook app is now running and you can now view its resource
components, logs, events, and assessed health:
![view app](assets/guestbook-tree.png)
## 8. Next Steps
ArgoCD supports additional features such as SSO, WebHooks, RBAC, Projects. See the rest of
the [documentation](./) for details.
ArgoCD supports additional features such as SSO, WebHooks, RBAC. See the following guides on setting
these up:
* [Configuring SSO](sso.md)
* [Configuring RBAC](rbac.md)
* [Configuring WebHooks](webhook.md)


@@ -1,39 +0,0 @@
# Parameter Overrides
ArgoCD provides a mechanism to override the parameters of a ksonnet/helm app. This gives some extra
flexibility in having most of the application manifests defined in git, while leaving room for
*some* parts of the k8s manifests determined dynamically, or outside of git. It also serves as an
alternative way of redeploying an application by changing application parameters via ArgoCD, instead
of making the changes to the manifests in git.
**NOTE:** many consider this mode of operation as an anti-pattern to GitOps, since the source of
truth becomes a union of the git repository, and the application overrides. The ArgoCD parameter
overrides feature is provided mainly as a convenience to developers and is intended to be used more for
dev/test environments, vs. production environments.
To use parameter overrides, run the `argocd app set -p (COMPONENT=)PARAM=VALUE` command:
```
argocd app set guestbook -p guestbook=image=example/guestbook:abcd123
argocd app sync guestbook
```
The following are situations where parameter overrides would be useful:
1. A team maintains a "dev" environment, which needs to be continually updated with the latest
version of their guestbook application after every build in the tip of master. To address this use
case, the application would expose a parameter named `image`, whose value used in the `dev`
environment contains a placeholder value (e.g. `example/guestbook:replaceme`). The placeholder value
would be determined externally (outside of git), such as by a build system. Then, as part of the build
pipeline, the parameter value of the `image` would be continually updated to the freshly built image
(e.g. `argocd app set guestbook -p guestbook=image=example/guestbook:abcd123`). A sync operation
would result in the application being redeployed with the new image.
2. A repository of helm manifests is already publicly available (e.g. https://github.com/helm/charts).
Since commit access to the repository is unavailable, it is useful to be able to install charts from
the public repository, customizing the deployment with different parameters, without resorting to
forking the repository to make the changes. For example, to install redis from the helm chart
repository and customize the database password, you would run:
```
argocd app create redis --repo https://github.com/helm/charts.git --path stable/redis --dest-server https://kubernetes.default.svc --dest-namespace default -p password=abc123
```


@@ -2,17 +2,13 @@
## Overview
The RBAC feature enables restriction of access to ArgoCD resources. ArgoCD does not have its own
user management system and has only one built-in user `admin`. The `admin` user is a superuser and
it has unrestricted access to the system. RBAC requires [SSO configuration](./sso.md). Once SSO is
configured, additional RBAC roles can be defined, and SSO groups can then be mapped to roles.
The feature RBAC allows restricting access to ArgoCD resources. ArgoCD does not have own user management system and has only one built-in user `admin`. The `admin` user is a
superuser and it has full access. RBAC requires configuring [SSO](./sso.md) integration. Once [SSO](./sso.md) is connected you can define RBAC roles and map roles to groups.
## Configure RBAC
RBAC configuration allows defining roles and groups. ArgoCD has two pre-defined roles:
* `role:readonly` - read-only access to all resources
* `role:admin` - unrestricted access to all resources
These role definitions can be seen in [builtin-policy.csv](../util/rbac/builtin-policy.csv)
RBAC configuration allows defining roles and groups. ArgoCD has two pre-defined roles: role `role:readonly` which provides read-only access to all resources and role `role:admin`
which provides full access. Role definitions are available in [builtin-policy.csv](../util/rbac/builtin-policy.csv) file.
Additional roles and groups can be configured in the `argocd-rbac-cm` ConfigMap. The example below defines a custom role, `org-admin`. The role is assigned to any user who belongs to
`your-github-org:your-team` group. All other users get `role:readonly` and cannot modify ArgoCD settings.
@@ -24,16 +20,16 @@ apiVersion: v1
data:
policy.default: role:readonly
policy.csv: |
p, role:org-admin, applications, *, */*, allow
p, role:org-admin, applications/*, *, */*, allow
p, role:org-admin, applications, *, */*
p, role:org-admin, applications/*, *, */*
p, role:org-admin, clusters, get, *, allow
p, role:org-admin, repositories, get, *, allow
p, role:org-admin, repositories/apps, get, *, allow
p, role:org-admin, clusters, get, *
p, role:org-admin, repositories, get, *
p, role:org-admin, repositories/apps, get, *
p, role:org-admin, repositories, create, *, allow
p, role:org-admin, repositories, update, *, allow
p, role:org-admin, repositories, delete, *, allow
p, role:org-admin, repositories, create, *
p, role:org-admin, repositories, update, *
p, role:org-admin, repositories, delete, *
g, your-github-org:your-team, role:org-admin
kind: ConfigMap
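The policy lines above follow a casbin-style tuple of `p, subject, resource, action, object`, where objects such as `*/*` and `default/*` are glob patterns over `project/application`. As a rough illustration (this is not ArgoCD's actual casbin enforcer, just a sketch of segment-wise glob matching), Go's `path.Match` behaves similarly because `*` does not cross `/` boundaries:

```go
package main

import (
	"fmt"
	"path"
)

// matches reports whether a policy object pattern such as "default/*"
// or "*/*" covers a concrete object like "default/guestbook".
// '*' matches within a single path segment, so "*/*" requires exactly
// one '/' in the object, mirroring project/app scoping.
func matches(pattern, object string) bool {
	ok, err := path.Match(pattern, object)
	return err == nil && ok
}

func main() {
	fmt.Println(matches("*/*", "default/guestbook"))         // matches any project/app pair
	fmt.Println(matches("myproject/*", "default/guestbook")) // wrong project: no match
}
```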
@@ -48,19 +44,15 @@ Kubernetes clusters which can be used by applications belonging to the project.
### 1. Create new project
Following command creates project `myproject` which can deploy applications to namespace `default` of cluster `https://kubernetes.default.svc`. The valid application source is defined in the `https://github.com/argoproj/argocd-example-apps.git` repository.
Following command creates project `myproject` which can deploy applications to namespace `default` of cluster `https://kubernetes.default.svc`. The source ksonnet application
should be defined in `https://github.com/argoproj/argocd-example-apps.git` repository.
```
argocd proj create myproject -d https://kubernetes.default.svc,default -s https://github.com/argoproj/argocd-example-apps.git
```
Project sources and destinations can be managed using commands
```
argocd project add-destination
argocd project remove-destination
argocd project add-source
argocd project remove-source
```
Project sources and destinations can be managed using commands `argocd project add-destination`, `argocd project remove-destination`, `argocd project add-source`
and `argocd project remove-source`.
### 2. Assign application to a project
@@ -83,19 +75,19 @@ apiVersion: v1
data:
policy.default: ""
policy.csv: |
p, role:team1-admin, applications, *, default/*, allow
p, role:team1-admin, applications/*, *, default/*, allow
p, role:team1-admin, applications, *, default/*
p, role:team1-admin, applications/*, *, default/*
p, role:team1-admin, applications, *, myproject/*, allow
p, role:team1-admin, applications/*, *, myproject/*, allow
p, role:team1-admin, applications, *, myproject/*
p, role:team1-admin, applications/*, *, myproject/*
p, role:org-admin, clusters, get, *, allow
p, role:org-admin, repositories, get, *, allow
p, role:org-admin, repositories/apps, get, *, allow
p, role:org-admin, clusters, get, *
p, role:org-admin, repositories, get, *
p, role:org-admin, repositories/apps, get, *
p, role:org-admin, repositories, create, *, allow
p, role:org-admin, repositories, update, *, allow
p, role:org-admin, repositories, delete, *, allow
p, role:org-admin, repositories, create, *
p, role:org-admin, repositories, update, *
p, role:org-admin, repositories, delete, *
g, role:team1-admin, org-admin
g, role:team2-admin, org-admin
@@ -105,58 +97,3 @@ kind: ConfigMap
metadata:
name: argocd-rbac-cm
```
## Project Roles
Projects include a feature called roles that allow users to define access to a project's applications. A project can have multiple roles, and those roles can have different access granted to them. These permissions are called policies, and they are stored within the role as a list of casbin strings. A role's policies can only grant access to that role and are limited to applications within the role's project. However, the policies have an option for granting wildcard access to any application within a project.
In order to create roles in a project and add policies to a role, a user will need permission to update a project. The following commands can be used to manage a role.
```
argocd proj role list
argocd proj role get
argocd proj role create
argocd proj role delete
argocd proj role add-policy
argocd proj role remove-policy
```
Project roles cannot be used unless a user creates an entity that is associated with that project role. ArgoCD supports creating JWT tokens with a role associated with them. Since a JWT token is associated with a role's policies, any changes to the role's policies will immediately take effect for that JWT token.
A user will need permission to update a project in order to create a JWT token for a role, and they can use the following commands to manage the JWT tokens.
```
argocd proj role create-token
argocd proj role delete-token
```
Since the JWT tokens aren't stored in ArgoCD, they can only be retrieved when they are created. A user can leverage them in the CLI by either passing them in using the `--auth-token` flag or setting the `ARGOCD_AUTH_TOKEN` environment variable. JWT tokens can be used until they expire or are revoked. They can be created with or without an expiration, but by default the CLI creates them without an expiration date. Even if a token has not expired, it cannot be used if it has been revoked.
Below is an example of leveraging a JWT token to access the guestbook application. It makes the assumption that the user already has a project named myproject and an application called guestbook-default.
```
PROJ=myproject
APP=guestbook-default
ROLE=get-role
argocd proj role create $PROJ $ROLE
argocd proj role create-token $PROJ $ROLE -e 10m
JWT=<value from command above>
argocd proj role list $PROJ
argocd proj role get $PROJ $ROLE
# This command will fail because the JWT token associated with the project role does not have a policy to allow access to the application
argocd app get $APP --auth-token $JWT
# Adding a policy to grant access to the application for the new role
argocd proj role add-policy $PROJ $ROLE --action get --permission allow --object $APP
argocd app get $APP --auth-token $JWT
# Removing the policy we added and adding one with a wildcard instead
argocd proj role remove-policy $PROJ $ROLE -a get -o $APP
argocd proj role add-policy $PROJ $ROLE -a get --permission allow -o '*'
# The wildcard policy now allows access to the application
argocd app get $APP --auth-token $JWT
argocd proj role get $PROJ
argocd proj role get $PROJ $ROLE
# Revoking the JWT token
argocd proj role delete-token $PROJ $ROLE <id field from the last command>
# This will fail since the JWT Token was deleted for the project role.
argocd app get $APP --auth-token $JWT
```
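Project role tokens are stateless JWTs signed by the server, which is why they cannot be retrieved after creation and why revocation works by tracking token metadata (the `iat` claim shown in the AppProject spec) against the role. A minimal sketch of HS256 compact-JWT construction, with made-up claims and secret (ArgoCD's actual signing uses the `server.secretkey` from `argocd-secret`):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// signHS256 builds a compact JWT (header.claims.signature) from
// pre-encoded JSON. Deterministic for the same inputs.
func signHS256(headerJSON, claimsJSON string, secret []byte) string {
	enc := base64.RawURLEncoding
	signingInput := enc.EncodeToString([]byte(headerJSON)) + "." + enc.EncodeToString([]byte(claimsJSON))
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(signingInput))
	return signingInput + "." + enc.EncodeToString(mac.Sum(nil))
}

func main() {
	header := `{"alg":"HS256","typ":"JWT"}`
	// iat identifies this token; revoking it means removing the matching
	// jwtTokens entry (by iat) from the AppProject role.
	claims := `{"sub":"proj:myproject:get-role","iat":1535390316}`
	fmt.Println(signHS256(header, claims, []byte("example-secret")))
}
```

Because the token itself is never stored server-side, validation is purely cryptographic; the `jwtTokens` list on the role exists so the server can reject tokens whose metadata has been deleted.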


@@ -35,7 +35,7 @@ kubectl edit configmap argocd-cm
[GitHub connector](https://github.com/coreos/dex/blob/master/Documentation/connectors/github.md)
documentation for explanation of the fields. A minimal config should populate the clientID,
clientSecret generated in Step 1.
* You will very likely want to restrict logins to one or more GitHub organization. In the
* You will very likely want to restrict logins to one ore more GitHub organization. In the
`connectors.config.orgs` list, add one or more GitHub organizations. Any member of the org will
then be able to login to ArgoCD to perform management tasks.


@@ -1,45 +1,41 @@
# Tracking and Deployment Strategies
An ArgoCD application spec provides several different ways to track kubernetes resource manifests in
git. This document describes the different techniques and the means of deploying those manifests to
the target environment.
An ArgoCD application spec provides several different ways to track kubernetes resource manifests in git. This document describes the different techniques and the means of deploying those manifests to the target environment.
## Branch Tracking
If a branch name is specified, ArgoCD will continually compare live state against the resource
manifests defined at the tip of the specified branch.
If a branch name is specified, ArgoCD will continually compare live state against the resource manifests defined at the tip of the specified branch.
To redeploy an application, a user makes changes to the manifests, and commits/pushes those
changes to the tracked branch, which will then be detected by the ArgoCD controller.
To redeploy an application, a user makes changes to the manifests, and commits/pushes those changes to the tracked branch, which will then be detected by the ArgoCD controller.
## Tag Tracking
If a tag is specified, the manifests at the specified git tag will be used to perform the sync
comparison. This provides some advantages over branch tracking in that a tag is generally considered
more stable, and less frequently updated, with some manual judgement of what constitutes a tag.
If a tag is specified, the manifests at the specified git tag will be used to perform the sync comparison. This provides some advantages over branch tracking in that a tag is generally considered more stable, and less frequently updated, with some manual judgement of what constitutes a tag.
To redeploy an application, the user uses git to change the meaning of a tag by retagging it to a
different commit SHA. ArgoCD will detect the new meaning of the tag when performing the
comparison/sync.
To redeploy an application, the user uses git to change the meaning of a tag by retagging it to a different commit SHA. ArgoCD will detect the new meaning of the tag when performing the comparison/sync.
## Commit Pinning
If a git commit SHA is specified, the application is effectively pinned to the manifests defined at
the specified commit. This is the most restrictive of the techniques and is typically used to
control production environments.
If a git commit SHA is specified, the application is effectively pinned to the manifests defined at the specified commit. This is the most restrictive of the techniques and is typically used to control production environments.
Since commit SHAs cannot change meaning, the only way to change the live state of an application
which is pinned to a commit, is by updating the tracking revision in the application to a different
commit containing the new manifests. Note that [parameter overrides](parameters.md) can still be set
on an application which is pinned to a revision.
## Auto-Sync [(Not Yet Implemented)]((https://github.com/argoproj/argo-cd/issues/79))
In all tracking strategies, the application will have the option to sync automatically. If auto-sync
is configured, the new resources manifests will be applied automatically -- as soon as a difference
is detected between the target state (git) and live state. If auto-sync is disabled, a manual sync
will be needed using the Argo UI, CLI, or API.
Since commit SHAs cannot change meaning, the only way to change the live state of an application which is pinned to a commit, is by updating the tracking revision in the application to a different commit containing the new manifests.
Note that parameter overrides can still be made against an application which is pinned to a revision.
## Parameter Overrides
Note that in all tracking strategies, any [parameter overrides](parameters.md) set in the
application instance take precedence over the git state.
ArgoCD provides means to override the parameters of a ksonnet app. This gives some extra flexibility in having *some* parts of the k8s manifests determined dynamically. It also serves as an alternative way of redeploying an application by changing application parameters via ArgoCD, instead of making the changes to the manifests in git.
The following is an example of where this would be useful: A team maintains a "dev" environment, which needs to be continually updated with the latest version of their guestbook application after every build in the tip of master. To address this use case, the ksonnet application should expose a parameter named `image`, whose value used in the `dev` environment contains a placeholder value (e.g. `example/guestbook:replaceme`), intended to be set externally (outside of git), such as by a build system. As part of the build pipeline, the parameter value of the `image` would be continually updated to the freshly built image (e.g. `example/guestbook:abcd123`). A sync operation would result in the application being redeployed with the new image.
ArgoCD provides these operations conveniently via the CLI, or alternatively via the gRPC/REST API.
```
$ argocd app set guestbook -p guestbook=image=example/guestbook:abcd123
$ argocd app sync guestbook
```
Note that in all tracking strategies, any parameter overrides set in the application instance will be honored.
## [Auto-Sync](https://github.com/argoproj/argo-cd/issues/79) (Not Yet Implemented)
In all tracking strategies, the application will have the option to sync automatically. If auto-sync is configured, the new resources manifests will be applied automatically -- as soon as a difference is detected between the target state (git) and live state. If auto-sync is disabled, a manual sync will be needed using the Argo UI, CLI, or API.


@@ -117,6 +117,6 @@ clean_swagger() {
/usr/bin/find "${SWAGGER_ROOT}" -name '*.swagger.json' -delete
}
collect_swagger server 21
collect_swagger server 15
clean_swagger server
clean_swagger reposerver


@@ -0,0 +1,168 @@
package main
import (
"context"
"fmt"
"hash/fnv"
"log"
"os"
"strings"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/server/repository"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/git"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
)
// origRepoURLToSecretName hashes repo URL to the secret name using a formula.
// Part of the original repo name is incorporated for debugging purposes
func origRepoURLToSecretName(repo string) string {
repo = git.NormalizeGitURL(repo)
h := fnv.New32a()
_, _ = h.Write([]byte(repo))
parts := strings.Split(strings.TrimSuffix(repo, ".git"), "/")
return fmt.Sprintf("repo-%s-%v", strings.ToLower(parts[len(parts)-1]), h.Sum32())
}
// repoURLToSecretName hashes repo URL to the secret name using a formula.
// Part of the original repo name is incorporated for debugging purposes
func repoURLToSecretName(repo string) string {
repo = strings.ToLower(git.NormalizeGitURL(repo))
h := fnv.New32a()
_, _ = h.Write([]byte(repo))
parts := strings.Split(strings.TrimSuffix(repo, ".git"), "/")
return fmt.Sprintf("repo-%s-%v", parts[len(parts)-1], h.Sum32())
}
// renameSecret renames a Kubernetes secret in a given namespace.
func renameSecret(namespace, oldName, newName string) {
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
loadingRules.DefaultClientConfig = &clientcmd.DefaultClientConfig
overrides := clientcmd.ConfigOverrides{}
clientConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, &overrides)
log.Printf("Renaming secret %q to %q in namespace %q\n", oldName, newName, namespace)
config, err := clientConfig.ClientConfig()
if err != nil {
log.Println("Could not retrieve client config: ", err)
return
}
kubeclientset := kubernetes.NewForConfigOrDie(config)
repoSecret, err := kubeclientset.CoreV1().Secrets(namespace).Get(oldName, metav1.GetOptions{})
if err != nil {
log.Println("Could not retrieve old secret: ", err)
return
}
repoSecret.ObjectMeta.Name = newName
repoSecret.ObjectMeta.ResourceVersion = ""
repoSecret, err = kubeclientset.CoreV1().Secrets(namespace).Create(repoSecret)
if err != nil {
log.Println("Could not create new secret: ", err)
return
}
err = kubeclientset.CoreV1().Secrets(namespace).Delete(oldName, &metav1.DeleteOptions{})
if err != nil {
log.Println("Could not remove old secret: ", err)
}
}
// renameRepositorySecrets ensures that repository secrets use the new naming format.
func renameRepositorySecrets(clientOpts argocdclient.ClientOptions, namespace string) {
conn, repoIf := argocdclient.NewClientOrDie(&clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
repos, err := repoIf.List(context.Background(), &repository.RepoQuery{})
if err != nil {
log.Println("An error occurred, so skipping secret renaming: ", err)
return
}
log.Println("Renaming repository secrets...")
for _, repo := range repos.Items {
oldSecretName := origRepoURLToSecretName(repo.Repo)
newSecretName := repoURLToSecretName(repo.Repo)
if oldSecretName != newSecretName {
log.Printf("Repo %q had its secret name change, so updating\n", repo.Repo)
renameSecret(namespace, oldSecretName, newSecretName)
}
}
}
/*
// PopulateAppDestinations ensures that apps have a Server and Namespace set explicitly.
func populateAppDestinations(clientOpts argocdclient.ClientOptions) {
conn, appIf := argocdclient.NewClientOrDie(&clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
apps, err := appIf.List(context.Background(), &application.ApplicationQuery{})
if err != nil {
log.Println("An error occurred, so skipping destination population: ", err)
return
}
log.Println("Populating app Destination fields")
for _, app := range apps.Items {
changed := false
log.Printf("Ensuring destination field is populated on app %q\n", app.ObjectMeta.Name)
if app.Spec.Destination.Server == "" {
if app.Status.ComparisonResult.Status == appv1.ComparisonStatusUnknown || app.Status.ComparisonResult.Status == appv1.ComparisonStatusError {
log.Printf("App %q was missing Destination.Server, but could not fill it in: %s", app.ObjectMeta.Name, app.Status.ComparisonResult.Status)
} else {
log.Printf("App %q was missing Destination.Server, so setting to %q\n", app.ObjectMeta.Name, app.Status.ComparisonResult.Server)
app.Spec.Destination.Server = app.Status.ComparisonResult.Server
changed = true
}
}
if app.Spec.Destination.Namespace == "" {
if app.Status.ComparisonResult.Status == appv1.ComparisonStatusUnknown || app.Status.ComparisonResult.Status == appv1.ComparisonStatusError {
log.Printf("App %q was missing Destination.Namespace, but could not fill it in: %s", app.ObjectMeta.Name, app.Status.ComparisonResult.Status)
} else {
log.Printf("App %q was missing Destination.Namespace, so setting to %q\n", app.ObjectMeta.Name, app.Status.ComparisonResult.Namespace)
app.Spec.Destination.Namespace = app.Status.ComparisonResult.Namespace
changed = true
}
}
if changed {
_, err = appIf.UpdateSpec(context.Background(), &application.ApplicationSpecRequest{
AppName: app.Name,
Spec: &app.Spec,
})
if err != nil {
log.Println("An error occurred (but continuing anyway): ", err)
}
}
}
}
*/
func main() {
if len(os.Args) < 3 {
log.Fatalf("USAGE: %s SERVER NAMESPACE\n", os.Args[0])
}
server, namespace := os.Args[1], os.Args[2]
log.Printf("Using argocd server %q and namespace %q\n", server, namespace)
isLocalhost := false
switch {
case strings.HasPrefix(server, "localhost:"):
isLocalhost = true
case strings.HasPrefix(server, "127.0.0.1:"):
isLocalhost = true
}
clientOpts := argocdclient.ClientOptions{
ServerAddr: server,
Insecure: true,
PlainText: isLocalhost,
}
renameRepositorySecrets(clientOpts, namespace)
//populateAppDestinations(clientOpts)
}


@@ -1,11 +0,0 @@
#!/bin/sh
IMAGE_NAMESPACE=${IMAGE_NAMESPACE:='argoproj'}
IMAGE_TAG=${IMAGE_TAG:='latest'}
for i in manifests/components/*.yaml; do
sed -i '' 's@\( image: \(.*\)/\(argocd-.*\):.*\)@ image: '"${IMAGE_NAMESPACE}"'/\3:'"${IMAGE_TAG}"'@g' $i
done
echo "# This is an auto-generated file. DO NOT EDIT" > manifests/install.yaml
cat manifests/components/*.yaml >> manifests/install.yaml

View File

@@ -3,23 +3,12 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
#data:
# ArgoCD's externally facing URL
# url: https://argo-cd-demo.argoproj.io
data:
# See https://github.com/argoproj/argo-cd/blob/master/docs/sso.md#2-configure-argocd-for-sso
# for more details about how to set up the data config needed for SSO
# A dex connector configuration.
# Visit https://github.com/argoproj/argo-cd/blob/master/docs/sso.md#2-configure-argocd-for-sso
# for instructions on configuring SSO.
# dex.config: |
# connectors:
# # GitHub example
# - type: github
# id: github
# name: GitHub
# config:
# clientID: aabbccddeeff00112233
# clientSecret: $dex.github.clientSecret
# orgs:
# - name: your-github-org
# teams:
# - red-team
# URL is the external URL of ArgoCD
#url:
# A dex connector configuration
#dex.config:

View File

@@ -1,26 +1,24 @@
---
# NOTE: some values in this secret will be populated by the initial startup of the API server
# NOTE: the values in this secret will be populated by the initial startup of the API server
apiVersion: v1
kind: Secret
metadata:
name: argocd-secret
type: Opaque
#data:
# bcrypt hash of the admin password
#admin.password:
# random server signature key for session validation
#server.secretkey:
# TLS certificate and private key for API server
# server.crt:
# server.key:
#server.crt:
#server.key:
# The following keys hold the shared secret for authenticating GitHub/GitLab/BitBucket webhook
# events. To enable webhooks, configure one or more of the following keys with the shared git
# provider webhook secret. The payload URL configured in the git provider should use the
# /api/webhook endpoint of your ArgoCD instance (e.g. https://argocd.example.com/api/webhook)
# github.webhook.secret:
# gitlab.webhook.secret:
# bitbucket.webhook.uuid:
# bcrypt hash of the admin password (autogenerated on initial startup).
# To reset a forgotten password, delete this key and restart the argocd-server
# admin.password:
# random server signature key for session validation (autogenerated on initial startup)
# server.secretkey:
#github.webhook.secret:
#gitlab.webhook.secret:
#bitbucket.webhook.uuid:

View File

@@ -3,14 +3,24 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
#data:
# An RBAC policy .csv file containing additional policy and role definitions.
# See https://github.com/argoproj/argo-cd/blob/master/docs/rbac.md on how to write RBAC policies.
# policy.csv: |
# # Give all members of "my-org:team-alpha" the ability to sync apps in "my-project"
# p, my-org:team-alpha, applications, sync, my-project/*, allow
# # Make all members of "my-org:team-beta" admins
# g, my-org:team-beta, role:admin
data:
# policy.csv holds the CSV policy file, which contains additional policy and role definitions.
# ArgoCD defines two built-in roles:
# * role:readonly - readonly access to all objects
# * role:admin - admin access to all objects
# The built-in policy can be seen under util/rbac/builtin-policy.csv
#policy.csv: ""
# The default role ArgoCD will fall back to when authorizing API requests
# policy.default: role:readonly
# There are two policy formats:
# 1. Applications (which belong to a project):
# p, <user/group>, <resource>, <action>, <project>/<object>
# 2. All other resources:
# p, <user/group>, <resource>, <action>, <object>
# For example, the following rule gives all members of 'my-org:team1' the ability to sync
# applications in the project named: my-project
# p, my-org:team1, applications, sync, my-project/*
# policy.default holds the default policy which ArgoCD will fall back to when authorizing
# a user for API requests
policy.default: role:readonly
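The comments above describe two policy formats (project-scoped application rules and all other resources). A short illustrative `policy.csv`, reusing the group and project names from the comment's own example (they are placeholders, not defaults):

```csv
# Give members of 'my-org:team1' sync rights on all apps in my-project
p, my-org:team1, applications, sync, my-project/*
# Let the same group read the project object itself
p, my-org:team1, projects, get, my-project
# Grant a second group the built-in admin role
g, my-org:team2, role:admin
```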

View File

@@ -27,11 +27,3 @@ rules:
- update
- patch
- delete
- apiGroups:
- ""
resources:
- events
verbs:
- create
- list

View File

@@ -14,6 +14,6 @@ spec:
spec:
containers:
- command: [/argocd-application-controller, --repo-server, 'argocd-repo-server:8081']
image: argoproj/argocd-application-controller:v0.8.2
image: argoproj/argocd-application-controller:latest
name: application-controller
serviceAccountName: application-controller

View File

@@ -30,10 +30,3 @@ rules:
- update
- delete
- patch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- list

View File

@@ -15,30 +15,24 @@ spec:
serviceAccountName: argocd-server
initContainers:
- name: copyutil
image: argoproj/argocd-server:v0.8.2
image: argoproj/argocd-server:latest
command: [cp, /argocd-util, /shared]
volumeMounts:
- mountPath: /shared
name: static-files
- name: ui
image: argoproj/argocd-ui:v0.8.2
image: argoproj/argocd-ui:latest
command: [cp, -r, /app, /shared]
volumeMounts:
- mountPath: /shared
name: static-files
containers:
- name: argocd-server
image: argoproj/argocd-server:v0.8.2
image: argoproj/argocd-server:latest
command: [/argocd-server, --staticassets, /shared/app, --repo-server, 'argocd-repo-server:8081']
volumeMounts:
- mountPath: /shared
name: static-files
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 3
periodSeconds: 30
- name: dex
image: quay.io/coreos/dex:v2.10.0
command: [/shared/argocd-util, rundex]

View File

@@ -12,15 +12,9 @@ spec:
labels:
app: argocd-repo-server
spec:
automountServiceAccountToken: false
containers:
- name: argocd-repo-server
image: argoproj/argocd-repo-server:v0.8.2
image: argoproj/argocd-repo-server:latest
command: [/argocd-repo-server]
ports:
- containerPort: 8081
readinessProbe:
tcpSocket:
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
- containerPort: 8081

View File

@@ -33,68 +33,65 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
#data:
# ArgoCD's externally facing URL
# url: https://argo-cd-demo.argoproj.io
data:
# See https://github.com/argoproj/argo-cd/blob/master/docs/sso.md#2-configure-argocd-for-sso
# for more details about how to set up the data config needed for SSO
# A dex connector configuration.
# Visit https://github.com/argoproj/argo-cd/blob/master/docs/sso.md#2-configure-argocd-for-sso
# for instructions on configuring SSO.
# dex.config: |
# connectors:
# # GitHub example
# - type: github
# id: github
# name: GitHub
# config:
# clientID: aabbccddeeff00112233
# clientSecret: $dex.github.clientSecret
# orgs:
# - name: your-github-org
# teams:
# - red-team
# URL is the external URL of ArgoCD
#url:
# A dex connector configuration
#dex.config:
---
# NOTE: some values in this secret will be populated by the initial startup of the API server
# NOTE: the values in this secret will be populated by the initial startup of the API server
apiVersion: v1
kind: Secret
metadata:
name: argocd-secret
type: Opaque
#data:
# bcrypt hash of the admin password
#admin.password:
# random server signature key for session validation
#server.secretkey:
# TLS certificate and private key for API server
# server.crt:
# server.key:
#server.crt:
#server.key:
# The following keys hold the shared secret for authenticating GitHub/GitLab/BitBucket webhook
# events. To enable webhooks, configure one or more of the following keys with the shared git
# provider webhook secret. The payload URL configured in the git provider should use the
# /api/webhook endpoint of your ArgoCD instance (e.g. https://argocd.example.com/api/webhook)
# github.webhook.secret:
# gitlab.webhook.secret:
# bitbucket.webhook.uuid:
# bcrypt hash of the admin password (autogenerated on initial startup).
# To reset a forgotten password, delete this key and restart the argocd-server
# admin.password:
# random server signature key for session validation (autogenerated on initial startup)
# server.secretkey:
#github.webhook.secret:
#gitlab.webhook.secret:
#bitbucket.webhook.uuid:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
#data:
# An RBAC policy .csv file containing additional policy and role definitions.
# See https://github.com/argoproj/argo-cd/blob/master/docs/rbac.md on how to write RBAC policies.
# policy.csv: |
# # Give all members of "my-org:team-alpha" the ability to sync apps in "my-project"
# p, my-org:team-alpha, applications, sync, my-project/*, allow
# # Make all members of "my-org:team-beta" admins
# g, my-org:team-beta, role:admin
data:
# policy.csv holds the CSV policy file, which contains additional policy and role definitions.
# ArgoCD defines two built-in roles:
# * role:readonly - readonly access to all objects
# * role:admin - admin access to all objects
# The built-in policy can be seen under util/rbac/builtin-policy.csv
#policy.csv: ""
# The default role ArgoCD will fall back to when authorizing API requests
# policy.default: role:readonly
# There are two policy formats:
# 1. Applications (which belong to a project):
# p, <user/group>, <resource>, <action>, <project>/<object>
# 2. All other resources:
# p, <user/group>, <resource>, <action>, <object>
# For example, the following rule gives all members of 'my-org:team1' the ability to sync
# applications in the project named: my-project
# p, my-org:team1, applications, sync, my-project/*
# policy.default holds the default policy which ArgoCD will fall back to when authorizing
# a user for API requests
policy.default: role:readonly
---
apiVersion: v1
kind: ServiceAccount
@@ -129,14 +126,6 @@ rules:
- update
- patch
- delete
- apiGroups:
- ""
resources:
- events
verbs:
- create
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
@@ -165,7 +154,7 @@ spec:
spec:
containers:
- command: [/argocd-application-controller, --repo-server, 'argocd-repo-server:8081']
image: argoproj/argocd-application-controller:v0.8.2
image: argoproj/argocd-application-controller:v0.6.2
name: application-controller
serviceAccountName: application-controller
---
@@ -205,13 +194,6 @@ rules:
- update
- delete
- patch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
@@ -241,30 +223,24 @@ spec:
serviceAccountName: argocd-server
initContainers:
- name: copyutil
image: argoproj/argocd-server:v0.8.2
image: argoproj/argocd-server:v0.6.2
command: [cp, /argocd-util, /shared]
volumeMounts:
- mountPath: /shared
name: static-files
- name: ui
image: argoproj/argocd-ui:v0.8.2
image: argoproj/argocd-ui:v0.6.2
command: [cp, -r, /app, /shared]
volumeMounts:
- mountPath: /shared
name: static-files
containers:
- name: argocd-server
image: argoproj/argocd-server:v0.8.2
image: argoproj/argocd-server:v0.6.2
command: [/argocd-server, --staticassets, /shared/app, --repo-server, 'argocd-repo-server:8081']
volumeMounts:
- mountPath: /shared
name: static-files
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 3
periodSeconds: 30
- name: dex
image: quay.io/coreos/dex:v2.10.0
command: [/shared/argocd-util, rundex]
@@ -307,7 +283,7 @@ spec:
spec:
containers:
- name: argocd-repo-server
image: argoproj/argocd-repo-server:v0.8.2
image: argoproj/argocd-repo-server:v0.6.2
command: [/argocd-repo-server]
ports:
- containerPort: 8081

View File

@@ -8,20 +8,9 @@ import (
"errors"
"fmt"
"io/ioutil"
"net"
"net/http"
"os"
"strings"
"time"
oidc "github.com/coreos/go-oidc"
jwt "github.com/dgrijalva/jwt-go"
log "github.com/sirupsen/logrus"
"golang.org/x/oauth2"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/server/account"
"github.com/argoproj/argo-cd/server/application"
"github.com/argoproj/argo-cd/server/cluster"
@@ -32,6 +21,9 @@ import (
"github.com/argoproj/argo-cd/server/version"
grpc_util "github.com/argoproj/argo-cd/util/grpc"
"github.com/argoproj/argo-cd/util/localconfig"
log "github.com/sirupsen/logrus"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
)
const (
@@ -76,12 +68,11 @@ type ClientOptions struct {
}
type client struct {
ServerAddr string
PlainText bool
Insecure bool
CertPEMData []byte
AuthToken string
RefreshToken string
ServerAddr string
PlainText bool
Insecure bool
CertPEMData []byte
AuthToken string
}
// NewClient creates a new API client from a set of config options.
@@ -91,7 +82,6 @@ func NewClient(opts *ClientOptions) (Client, error) {
if err != nil {
return nil, err
}
var ctxName string
if localCfg != nil {
configCtx, err := localCfg.ResolveContext(opts.Context)
if err != nil {
@@ -108,8 +98,6 @@ func NewClient(opts *ClientOptions) (Client, error) {
c.PlainText = configCtx.Server.PlainText
c.Insecure = configCtx.Server.Insecure
c.AuthToken = configCtx.User.AuthToken
c.RefreshToken = configCtx.User.RefreshToken
ctxName = configCtx.Name
}
}
// Override server address if specified in env or CLI flag
@@ -149,97 +137,9 @@ func NewClient(opts *ClientOptions) (Client, error) {
if opts.Insecure {
c.Insecure = true
}
if localCfg != nil {
err = c.refreshAuthToken(localCfg, ctxName, opts.ConfigPath)
if err != nil {
return nil, err
}
}
return &c, nil
}
// refreshAuthToken refreshes a JWT auth token if it is invalid (e.g. expired)
func (c *client) refreshAuthToken(localCfg *localconfig.LocalConfig, ctxName, configPath string) error {
configCtx, err := localCfg.ResolveContext(ctxName)
if err != nil {
return err
}
if c.RefreshToken == "" {
// If we have no refresh token, there's no point in doing anything
return nil
}
parser := &jwt.Parser{
SkipClaimsValidation: true,
}
var claims jwt.StandardClaims
_, _, err = parser.ParseUnverified(configCtx.User.AuthToken, &claims)
if err != nil {
return err
}
if claims.Valid() == nil {
// token is still valid
return nil
}
log.Debug("Auth token no longer valid. Refreshing")
tlsConfig, err := c.tlsConfig()
if err != nil {
return err
}
httpClient := &http.Client{
Transport: &http.Transport{
TLSClientConfig: tlsConfig,
Proxy: http.ProxyFromEnvironment,
Dial: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).Dial,
TLSHandshakeTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
},
}
ctx := oidc.ClientContext(context.Background(), httpClient)
var scheme string
if c.PlainText {
scheme = "http"
} else {
scheme = "https"
}
conf := &oauth2.Config{
ClientID: common.ArgoCDCLIClientAppID,
Scopes: []string{"openid", "profile", "email", "groups", "offline_access"},
Endpoint: oauth2.Endpoint{
AuthURL: fmt.Sprintf("%s://%s%s/auth", scheme, c.ServerAddr, common.DexAPIEndpoint),
TokenURL: fmt.Sprintf("%s://%s%s/token", scheme, c.ServerAddr, common.DexAPIEndpoint),
},
RedirectURL: fmt.Sprintf("%s://%s/auth/callback", scheme, c.ServerAddr),
}
t := &oauth2.Token{
RefreshToken: c.RefreshToken,
}
token, err := conf.TokenSource(ctx, t).Token()
if err != nil {
return err
}
rawIDToken, ok := token.Extra("id_token").(string)
if !ok {
return errors.New("no id_token in token response")
}
refreshToken, _ := token.Extra("refresh_token").(string)
c.AuthToken = rawIDToken
c.RefreshToken = refreshToken
localCfg.UpsertUser(localconfig.User{
Name: ctxName,
AuthToken: c.AuthToken,
RefreshToken: c.RefreshToken,
})
err = localconfig.WriteLocalConfig(*localCfg, configPath)
if err != nil {
return err
}
return nil
}
// NewClientOrDie creates a new API client from a set of config options, or fails fatally if the new client creation fails.
func NewClientOrDie(opts *ClientOptions) Client {
client, err := NewClient(opts)
@@ -262,17 +162,25 @@ func (c jwtCredentials) RequireTransportSecurity() bool {
func (c jwtCredentials) GetRequestMetadata(context.Context, ...string) (map[string]string, error) {
return map[string]string{
MetaDataTokenKey: c.Token,
"tokens": c.Token, // legacy key. delete eventually
}, nil
}
func (c *client) NewConn() (*grpc.ClientConn, error) {
var creds credentials.TransportCredentials
if !c.PlainText {
tlsConfig, err := c.tlsConfig()
if err != nil {
return nil, err
var tlsConfig tls.Config
if len(c.CertPEMData) > 0 {
cp := x509.NewCertPool()
if !cp.AppendCertsFromPEM(c.CertPEMData) {
return nil, fmt.Errorf("credentials: failed to append certificates")
}
tlsConfig.RootCAs = cp
}
creds = credentials.NewTLS(tlsConfig)
if c.Insecure {
tlsConfig.InsecureSkipVerify = true
}
creds = credentials.NewTLS(&tlsConfig)
}
endpointCredentials := jwtCredentials{
Token: c.AuthToken,
@@ -280,21 +188,6 @@ func (c *client) NewConn() (*grpc.ClientConn, error) {
return grpc_util.BlockingDial(context.Background(), "tcp", c.ServerAddr, creds, grpc.WithPerRPCCredentials(endpointCredentials))
}
func (c *client) tlsConfig() (*tls.Config, error) {
var tlsConfig tls.Config
if len(c.CertPEMData) > 0 {
cp := x509.NewCertPool()
if !cp.AppendCertsFromPEM(c.CertPEMData) {
return nil, fmt.Errorf("credentials: failed to append certificates")
}
tlsConfig.RootCAs = cp
}
if c.Insecure {
tlsConfig.InsecureSkipVerify = true
}
return &tlsConfig, nil
}
func (c *client) ClientOptions() ClientOptions {
return ClientOptions{
ServerAddr: c.ServerAddr,

File diff suppressed because it is too large

View File

@@ -34,15 +34,13 @@ message AppProjectList {
// AppProjectSpec represents the specification of an AppProject
message AppProjectSpec {
// SourceRepos contains list of git repository URLs which can be used for deployment
repeated string sourceRepos = 1;
repeated string sources = 1;
// Destinations contains list of destinations available for deployment
repeated ApplicationDestination destinations = 2;
// Description contains optional project description
optional string description = 3;
repeated ProjectRole roles = 4;
}
// Application is a definition of Application resource.
@@ -87,24 +85,21 @@ message ApplicationList {
// ApplicationSource contains information about github repository, path within repository and target application environment.
message ApplicationSource {
// RepoURL is the git repository URL of the application manifests
// RepoURL is the repository URL containing the ksonnet application.
optional string repoURL = 1;
// Path is a directory path within the repository containing a
// Path is a directory path within the repository which contains the ksonnet application.
optional string path = 2;
// Environment is a ksonnet application environment name
// Environment is a ksonnet application environment name.
optional string environment = 3;
// TargetRevision defines the commit, tag, or branch to which to sync the application.
// If omitted, will sync to HEAD
optional string targetRevision = 4;
// ComponentParameterOverrides are a list of parameter override values
// Environment parameter override values
repeated ComponentParameter componentParameterOverrides = 5;
// ValuesFiles is a list of Helm values files to use when generating a template
repeated string valuesFiles = 6;
}
// ApplicationSpec represents desired application state. Contains link to repository with application definition and additional parameters link definition revision.
@@ -254,13 +249,6 @@ message HookStatus {
optional string message = 6;
}
// JWTToken holds the issuedAt and expiresAt values of a token
message JWTToken {
optional int64 iat = 1;
optional int64 exp = 2;
}
// Operation contains requested operation parameters.
message Operation {
optional SyncOperation sync = 1;
@@ -292,18 +280,6 @@ message OperationState {
optional k8s.io.apimachinery.pkg.apis.meta.v1.Time finishedAt = 7;
}
// ProjectRole represents a role that has access to a project
message ProjectRole {
optional string name = 1;
optional string description = 2;
// Policies stores a list of Casbin-formatted strings that define access policies for the role in the project.
repeated string policies = 3;
repeated JWTToken jwtTokens = 4;
}
// Repository is a Git repository holding application configurations
message Repository {
optional string repo = 1;

View File

@@ -2,7 +2,6 @@ package v1alpha1
import (
"encoding/json"
"reflect"
"strings"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -222,26 +221,24 @@ type ApplicationSpec struct {
// ComponentParameter contains information about component parameter value
type ComponentParameter struct {
Component string `json:"component,omitempty" protobuf:"bytes,1,opt,name=component"`
Component string `json:"component" protobuf:"bytes,1,opt,name=component"`
Name string `json:"name" protobuf:"bytes,2,opt,name=name"`
Value string `json:"value" protobuf:"bytes,3,opt,name=value"`
}
// ApplicationSource contains information about github repository, path within repository and target application environment.
type ApplicationSource struct {
// RepoURL is the git repository URL of the application manifests
// RepoURL is the repository URL containing the ksonnet application.
RepoURL string `json:"repoURL" protobuf:"bytes,1,opt,name=repoURL"`
// Path is a directory path within the repository containing a
// Path is a directory path within the repository which contains the ksonnet application.
Path string `json:"path" protobuf:"bytes,2,opt,name=path"`
// Environment is a ksonnet application environment name
Environment string `json:"environment,omitempty" protobuf:"bytes,3,opt,name=environment"`
// Environment is a ksonnet application environment name.
Environment string `json:"environment" protobuf:"bytes,3,opt,name=environment"`
// TargetRevision defines the commit, tag, or branch to which to sync the application.
// If omitted, will sync to HEAD
TargetRevision string `json:"targetRevision,omitempty" protobuf:"bytes,4,opt,name=targetRevision"`
// ComponentParameterOverrides are a list of parameter override values
// Environment parameter override values
ComponentParameterOverrides []ComponentParameter `json:"componentParameterOverrides,omitempty" protobuf:"bytes,5,opt,name=componentParameterOverrides"`
// ValuesFiles is a list of Helm values files to use when generating a template
ValuesFiles []string `json:"valuesFiles,omitempty" protobuf:"bytes,6,opt,name=valuesFiles"`
}
// ApplicationDestination contains deployment destination information
@@ -443,42 +440,25 @@ type AppProject struct {
Spec AppProjectSpec `json:"spec" protobuf:"bytes,2,opt,name=spec"`
}
// ProjectPoliciesString returns a Casbin-formatted string of a project's policies for each role
func (proj *AppProject) ProjectPoliciesString() string {
var policies []string
for _, role := range proj.Spec.Roles {
policies = append(policies, role.Policies...)
}
return strings.Join(policies, "\n")
}
// AppProjectSpec represents the specification of an AppProject
type AppProjectSpec struct {
// SourceRepos contains list of git repository URLs which can be used for deployment
SourceRepos []string `json:"sourceRepos" protobuf:"bytes,1,name=sourceRepos"`
SourceRepos []string `json:"sources" protobuf:"bytes,1,name=destination"`
// Destinations contains list of destinations available for deployment
Destinations []ApplicationDestination `json:"destinations" protobuf:"bytes,2,name=destination"`
// Description contains optional project description
Description string `json:"description,omitempty" protobuf:"bytes,3,opt,name=description"`
Roles []ProjectRole `json:"roles,omitempty" protobuf:"bytes,4,rep,name=roles"`
}
// ProjectRole represents a role that has access to a project
type ProjectRole struct {
Name string `json:"name" protobuf:"bytes,1,opt,name=name"`
Description string `json:"description" protobuf:"bytes,2,opt,name=description"`
// Policies stores a list of Casbin-formatted strings that define access policies for the role in the project.
Policies []string `json:"policies" protobuf:"bytes,3,rep,name=policies"`
JWTTokens []JWTToken `json:"jwtTokens" protobuf:"bytes,4,rep,name=jwtTokens"`
}
// JWTToken holds the issuedAt and expiresAt values of a token
type JWTToken struct {
IssuedAt int64 `json:"iat,omitempty" protobuf:"int64,1,opt,name=iat"`
ExpiresAt int64 `json:"exp,omitempty" protobuf:"int64,2,opt,name=exp"`
func GetDefaultProject(namespace string) AppProject {
return AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: common.DefaultAppProjectName,
Namespace: namespace,
},
}
}
func (app *Application) getFinalizerIndex(name string) int {
@@ -527,7 +507,10 @@ func (condition *ApplicationCondition) IsError() bool {
// Equals compares two instances of ApplicationSource and return true if instances are equal.
func (source ApplicationSource) Equals(other ApplicationSource) bool {
return reflect.DeepEqual(source, other)
return source.TargetRevision == other.TargetRevision &&
source.RepoURL == other.RepoURL &&
source.Path == other.Path &&
source.Environment == other.Environment
}
func (spec ApplicationSpec) BelongsToDefaultProject() bool {
@@ -541,14 +524,16 @@ func (spec ApplicationSpec) GetProject() string {
return spec.Project
}
// IsSourcePermitted validates whether the provided application's source is one of the allowed sources for the project.
func (proj AppProject) IsSourcePermitted(src ApplicationSource) bool {
func (proj AppProject) IsDefault() bool {
return proj.Name == "" || proj.Name == common.DefaultAppProjectName
}
func (proj AppProject) IsSourcePermitted(src ApplicationSource) bool {
if proj.IsDefault() {
return true
}
normalizedURL := git.NormalizeGitURL(src.RepoURL)
for _, repoURL := range proj.Spec.SourceRepos {
if repoURL == "*" {
return true
}
if git.NormalizeGitURL(repoURL) == normalizedURL {
return true
}
@@ -556,14 +541,13 @@ func (proj AppProject) IsSourcePermitted(src ApplicationSource) bool {
return false
}
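`IsSourcePermitted` above normalizes both the application's repo URL and each allowed repo URL before comparing, so cosmetic differences don't cause false denials. The sketch below is a guess at what such a normalizer does (trailing slash, `.git` suffix, case); the real `util/git.NormalizeGitURL` in the repository may behave differently:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeGitURL makes "https://github.com/argoproj/argo-cd.git" and
// "https://github.com/argoproj/argo-cd" compare equal. This is an
// illustrative approximation, not the repository's implementation.
func normalizeGitURL(u string) string {
	u = strings.TrimSpace(u)
	u = strings.TrimSuffix(u, "/")
	u = strings.TrimSuffix(u, ".git")
	return strings.ToLower(u)
}

func main() {
	a := normalizeGitURL("https://github.com/argoproj/argo-cd.git")
	b := normalizeGitURL("https://github.com/Argoproj/argo-cd")
	fmt.Println(a == b) // true
}
```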
// IsDestinationPermitted validates whether the provided application's destination is one of the allowed destinations for the project
func (proj AppProject) IsDestinationPermitted(dst ApplicationDestination) bool {
if proj.IsDefault() {
return true
}
for _, item := range proj.Spec.Destinations {
if item.Server == dst.Server || item.Server == "*" {
if item.Namespace == dst.Namespace || item.Namespace == "*" {
return true
}
if item.Server == dst.Server && item.Namespace == dst.Namespace {
return true
}
}
return false
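One side of the hunk above supports `"*"` wildcards in a project destination's server or namespace, while the other requires an exact match. A minimal sketch of the wildcard-matching variant, using a simplified stand-in type (the names here are mine):

```go
package main

import "fmt"

// dest is a simplified stand-in for ApplicationDestination.
type dest struct{ Server, Namespace string }

// permitted mirrors the wildcard-matching side of the hunk:
// "*" in either field of an allowed destination matches anything.
func permitted(allowed []dest, d dest) bool {
	for _, item := range allowed {
		if item.Server == d.Server || item.Server == "*" {
			if item.Namespace == d.Namespace || item.Namespace == "*" {
				return true
			}
		}
	}
	return false
}

func main() {
	allowed := []dest{{Server: "*", Namespace: "team-a"}}
	fmt.Println(permitted(allowed, dest{"https://kube.example.com", "team-a"})) // true
	fmt.Println(permitted(allowed, dest{"https://kube.example.com", "team-b"})) // false
}
```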

View File

@@ -82,13 +82,6 @@ func (in *AppProjectSpec) DeepCopyInto(out *AppProjectSpec) {
*out = make([]ApplicationDestination, len(*in))
copy(*out, *in)
}
if in.Roles != nil {
in, out := &in.Roles, &out.Roles
*out = make([]ProjectRole, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
@@ -212,11 +205,6 @@ func (in *ApplicationSource) DeepCopyInto(out *ApplicationSource) {
*out = make([]ComponentParameter, len(*in))
copy(*out, *in)
}
if in.ValuesFiles != nil {
in, out := &in.ValuesFiles, &out.ValuesFiles
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
@@ -493,22 +481,6 @@ func (in *HookStatus) DeepCopy() *HookStatus {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JWTToken) DeepCopyInto(out *JWTToken) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JWTToken.
func (in *JWTToken) DeepCopy() *JWTToken {
if in == nil {
return nil
}
out := new(JWTToken)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Operation) DeepCopyInto(out *Operation) {
*out = *in
@@ -588,32 +560,6 @@ func (in *OperationState) DeepCopy() *OperationState {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProjectRole) DeepCopyInto(out *ProjectRole) {
*out = *in
if in.Policies != nil {
in, out := &in.Policies, &out.Policies
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.JWTTokens != nil {
in, out := &in.JWTTokens, &out.JWTTokens
*out = make([]JWTToken, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProjectRole.
func (in *ProjectRole) DeepCopy() *ProjectRole {
if in == nil {
return nil
}
out := new(ProjectRole)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Repository) DeepCopyInto(out *Repository) {
*out = *in

View File

@@ -7,26 +7,17 @@ import (
"io/ioutil"
"os"
"path"
"regexp"
"strings"
"time"
"github.com/ksonnet/ksonnet/pkg/app"
log "github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/helm"
"github.com/argoproj/argo-cd/util/ksonnet"
ksutil "github.com/argoproj/argo-cd/util/ksonnet"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/kustomize"
)
const (
@@ -34,15 +25,6 @@ const (
DefaultRepoCacheExpiration = 24 * time.Hour
)
type AppSourceType string
const (
AppSourceKsonnet AppSourceType = "ksonnet"
AppSourceHelm AppSourceType = "helm"
AppSourceKustomize AppSourceType = "kustomize"
AppSourceDirectory AppSourceType = "directory"
)
// Service implements ManifestService interface
type Service struct {
repoLock *util.KeyLock
@@ -83,7 +65,7 @@ func (s *Service) ListDir(ctx context.Context, q *ListDirRequest) (*FileList, er
return &res, nil
}
err = checkoutRevision(gitClient, commitSHA)
err = checkoutRevision(gitClient, q.Revision)
if err != nil {
return nil, err
}
@@ -117,11 +99,7 @@ func (s *Service) GetFile(ctx context.Context, q *GetFileRequest) (*GetFileRespo
if err != nil {
return nil, err
}
commitSHA, err := gitClient.LsRemote(q.Revision)
if err != nil {
return nil, err
}
err = checkoutRevision(gitClient, commitSHA)
err = checkoutRevision(gitClient, q.Revision)
if err != nil {
return nil, err
}
@@ -139,7 +117,7 @@ func (s *Service) GenerateManifest(c context.Context, q *ManifestRequest) (*Mani
var res ManifestResponse
if git.IsCommitSHA(q.Revision) {
cacheKey := manifestCacheKey(q.Revision, q)
err := s.cache.Get(cacheKey, res)
err := s.cache.Get(cacheKey, &res)
if err == nil {
log.Infof("manifest cache hit: %s", cacheKey)
return &res, nil
@@ -170,21 +148,65 @@ func (s *Service) GenerateManifest(c context.Context, q *ManifestRequest) (*Mani
log.Infof("manifest cache miss: %s", cacheKey)
}
err = checkoutRevision(gitClient, commitSHA)
err = checkoutRevision(gitClient, q.Revision)
if err != nil {
return nil, err
}
appPath := path.Join(appRepoPath, q.Path)
ksApp, err := ksutil.NewKsonnetApp(appPath)
if err != nil {
return nil, fmt.Errorf("unable to load application from %s: %v", appPath, err)
}
genRes, err := generateManifests(appPath, q)
params, err := ksApp.ListEnvParams(q.Environment)
if err != nil {
return nil, fmt.Errorf("Failed to list ksonnet app params: %v", err)
}
if q.ComponentParameterOverrides != nil {
for _, override := range q.ComponentParameterOverrides {
err = ksApp.SetComponentParams(q.Environment, override.Component, override.Name, override.Value)
if err != nil {
return nil, err
}
}
}
appSpec := ksApp.App()
env, err := appSpec.Environment(q.Environment)
if err != nil {
return nil, fmt.Errorf("environment '%s' does not exist in ksonnet app", q.Environment)
}
targetObjs, err := ksApp.Show(q.Environment)
if err != nil {
return nil, err
}
res = *genRes
res.Revision = commitSHA
manifests := make([]string, len(targetObjs))
for i, target := range targetObjs {
if q.AppLabel != "" {
err = kube.SetLabel(target, common.LabelApplicationName, q.AppLabel)
if err != nil {
return nil, err
}
}
manifestStr, err := json.Marshal(target.Object)
if err != nil {
return nil, err
}
manifests[i] = string(manifestStr)
}
res = ManifestResponse{
Revision: commitSHA,
Manifests: manifests,
Namespace: env.Destination.Namespace,
Server: env.Destination.Server,
Params: params,
}
err = s.cache.Set(&cache.Item{
Key: cacheKey,
Object: res,
Object: &res,
Expiration: DefaultRepoCacheExpiration,
})
if err != nil {
@@ -193,127 +215,22 @@ func (s *Service) GenerateManifest(c context.Context, q *ManifestRequest) (*Mani
return &res, nil
}
// generateManifests generates manifests from a path
func generateManifests(appPath string, q *ManifestRequest) (*ManifestResponse, error) {
var targetObjs []*unstructured.Unstructured
var params []*v1alpha1.ComponentParameter
var env *app.EnvironmentSpec
var err error
appSourceType := IdentifyAppSourceTypeByAppDir(appPath)
switch appSourceType {
case AppSourceKsonnet:
targetObjs, params, env, err = ksShow(appPath, q.Environment, q.ComponentParameterOverrides)
case AppSourceHelm:
h := helm.NewHelmApp(appPath)
err = h.DependencyBuild()
if err != nil {
return nil, err
}
targetObjs, err = h.Template(q.AppLabel, q.Namespace, q.ValueFiles, q.ComponentParameterOverrides)
if err != nil {
return nil, err
}
params, err = h.GetParameters(q.ValueFiles)
if err != nil {
return nil, err
}
case AppSourceKustomize:
k := kustomize.NewKustomizeApp(appPath)
targetObjs, err = k.Build()
case AppSourceDirectory:
targetObjs, err = findManifests(appPath)
}
if err != nil {
return nil, err
}
manifests := make([]string, 0)
for _, obj := range targetObjs {
var targets []*unstructured.Unstructured
if obj.IsList() {
err = obj.EachListItem(func(object runtime.Object) error {
unstructuredObj, ok := object.(*unstructured.Unstructured)
if ok {
targets = append(targets, unstructuredObj)
return nil
} else {
return fmt.Errorf("resource list item has unexpected type")
}
})
if err != nil {
return nil, err
}
} else {
targets = []*unstructured.Unstructured{obj}
}
for _, target := range targets {
if q.AppLabel != "" {
err = kube.SetLabel(target, common.LabelApplicationName, q.AppLabel)
if err != nil {
return nil, err
}
}
manifestStr, err := json.Marshal(target.Object)
if err != nil {
return nil, err
}
manifests = append(manifests, string(manifestStr))
}
}
res := ManifestResponse{
Manifests: manifests,
Params: params,
}
if env != nil {
res.Namespace = env.Destination.Namespace
res.Server = env.Destination.Server
}
return &res, nil
}
// tempRepoPath returns a formulated temporary directory location to clone a repository
func tempRepoPath(repo string) string {
return path.Join(os.TempDir(), strings.Replace(repo, "/", "_", -1))
}
// IdentifyAppSourceTypeByAppDir examines a directory and determines its application source type
func IdentifyAppSourceTypeByAppDir(appDirPath string) AppSourceType {
if pathExists(path.Join(appDirPath, "app.yaml")) {
return AppSourceKsonnet
}
if pathExists(path.Join(appDirPath, "Chart.yaml")) {
return AppSourceHelm
}
if pathExists(path.Join(appDirPath, "kustomization.yaml")) {
return AppSourceKustomize
}
return AppSourceDirectory
}
// IdentifyAppSourceTypeByAppPath determines application source type by app file path
func IdentifyAppSourceTypeByAppPath(appFilePath string) AppSourceType {
if strings.HasSuffix(appFilePath, "app.yaml") {
return AppSourceKsonnet
}
if strings.HasSuffix(appFilePath, "Chart.yaml") {
return AppSourceHelm
}
if strings.HasSuffix(appFilePath, "kustomization.yaml") {
return AppSourceKustomize
}
return AppSourceDirectory
}
// checkoutRevision is a convenience function to initialize a repo, fetch, and checkout a revision
func checkoutRevision(gitClient git.Client, commitSHA string) error {
func checkoutRevision(gitClient git.Client, revision string) error {
err := gitClient.Fetch()
if err != nil {
return err
}
err = gitClient.Checkout(commitSHA)
err = gitClient.Reset()
if err != nil {
log.Warn(err)
}
err = gitClient.Checkout(revision)
if err != nil {
return err
}
@@ -322,92 +239,9 @@ func checkoutRevision(gitClient git.Client, commitSHA string) error {
func manifestCacheKey(commitSHA string, q *ManifestRequest) string {
pStr, _ := json.Marshal(q.ComponentParameterOverrides)
valuesFiles := strings.Join(q.ValueFiles, ",")
return fmt.Sprintf("mfst|%s|%s|%s|%s|%s|%s|%s", q.AppLabel, q.Path, q.Environment, commitSHA, string(pStr), valuesFiles, q.Namespace)
return fmt.Sprintf("mfst|%s|%s|%s|%s", q.Path, q.Environment, commitSHA, string(pStr))
}
func listDirCacheKey(commitSHA string, q *ListDirRequest) string {
return fmt.Sprintf("ldir|%s|%s", q.Path, commitSHA)
}
// ksShow runs `ks show` in an app directory after setting any component parameter overrides
func ksShow(appPath, envName string, overrides []*v1alpha1.ComponentParameter) ([]*unstructured.Unstructured, []*v1alpha1.ComponentParameter, *app.EnvironmentSpec, error) {
ksApp, err := ksonnet.NewKsonnetApp(appPath)
if err != nil {
return nil, nil, nil, status.Errorf(codes.FailedPrecondition, "unable to load application from %s: %v", appPath, err)
}
params, err := ksApp.ListEnvParams(envName)
if err != nil {
return nil, nil, nil, status.Errorf(codes.InvalidArgument, "Failed to list ksonnet app params: %v", err)
}
if overrides != nil {
for _, override := range overrides {
err = ksApp.SetComponentParams(envName, override.Component, override.Name, override.Value)
if err != nil {
return nil, nil, nil, err
}
}
}
appSpec := ksApp.App()
env, err := appSpec.Environment(envName)
if err != nil {
return nil, nil, nil, status.Errorf(codes.NotFound, "environment %q does not exist in ksonnet app", envName)
}
targetObjs, err := ksApp.Show(envName)
if err != nil {
return nil, nil, nil, err
}
return targetObjs, params, env, nil
}
var manifestFile = regexp.MustCompile(`^.*\.(yaml|yml|json)$`)
// findManifests looks at all yaml files in a directory and unmarshals them into a list of unstructured objects
func findManifests(appPath string) ([]*unstructured.Unstructured, error) {
files, err := ioutil.ReadDir(appPath)
if err != nil {
return nil, status.Errorf(codes.FailedPrecondition, "Failed to read dir %s: %v", appPath, err)
}
var objs []*unstructured.Unstructured
for _, f := range files {
if f.IsDir() || !manifestFile.MatchString(f.Name()) {
continue
}
out, err := ioutil.ReadFile(path.Join(appPath, f.Name()))
if err != nil {
return nil, err
}
if strings.HasSuffix(f.Name(), ".json") {
var obj unstructured.Unstructured
err = json.Unmarshal(out, &obj)
if err != nil {
return nil, status.Errorf(codes.FailedPrecondition, "Failed to unmarshal %q: %v", f.Name(), err)
}
objs = append(objs, &obj)
} else {
yamlObjs, err := kube.SplitYAML(string(out))
if err != nil {
if len(yamlObjs) > 0 {
// If we get here, we had multiple objects in a single YAML file, some of which
// were valid k8s objects while others failed to parse (within the same file).
// It's very likely the user messed up a portion of the YAML, so report on that.
return nil, status.Errorf(codes.FailedPrecondition, "Failed to unmarshal %q: %v", f.Name(), err)
}
// Otherwise, it might be an unrelated YAML file, which we will ignore
continue
}
objs = append(objs, yamlObjs...)
}
}
return objs, nil
}
// pathExists reports whether the named file or directory exists.
func pathExists(name string) bool {
if _, err := os.Stat(name); err != nil {
if os.IsNotExist(err) {
return false
}
}
return true
}


@@ -49,8 +49,6 @@ type ManifestRequest struct {
Environment string `protobuf:"bytes,4,opt,name=environment,proto3" json:"environment,omitempty"`
AppLabel string `protobuf:"bytes,5,opt,name=appLabel,proto3" json:"appLabel,omitempty"`
ComponentParameterOverrides []*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ComponentParameter `protobuf:"bytes,6,rep,name=componentParameterOverrides" json:"componentParameterOverrides,omitempty"`
ValueFiles []string `protobuf:"bytes,7,rep,name=valueFiles" json:"valueFiles,omitempty"`
Namespace string `protobuf:"bytes,8,opt,name=namespace,proto3" json:"namespace,omitempty"`
}
func (m *ManifestRequest) Reset() { *m = ManifestRequest{} }
@@ -100,20 +98,6 @@ func (m *ManifestRequest) GetComponentParameterOverrides() []*github_com_argopro
return nil
}
func (m *ManifestRequest) GetValueFiles() []string {
if m != nil {
return m.ValueFiles
}
return nil
}
func (m *ManifestRequest) GetNamespace() string {
if m != nil {
return m.Namespace
}
return ""
}
type ManifestResponse struct {
Manifests []string `protobuf:"bytes,1,rep,name=manifests" json:"manifests,omitempty"`
Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
@@ -476,27 +460,6 @@ func (m *ManifestRequest) MarshalTo(dAtA []byte) (int, error) {
i += n
}
}
if len(m.ValueFiles) > 0 {
for _, s := range m.ValueFiles {
dAtA[i] = 0x3a
i++
l = len(s)
for l >= 1<<7 {
dAtA[i] = uint8(uint64(l)&0x7f | 0x80)
l >>= 7
i++
}
dAtA[i] = uint8(l)
i++
i += copy(dAtA[i:], s)
}
}
if len(m.Namespace) > 0 {
dAtA[i] = 0x42
i++
i = encodeVarintRepository(dAtA, i, uint64(len(m.Namespace)))
i += copy(dAtA[i:], m.Namespace)
}
return i, nil
}
@@ -738,16 +701,6 @@ func (m *ManifestRequest) Size() (n int) {
n += 1 + l + sovRepository(uint64(l))
}
}
if len(m.ValueFiles) > 0 {
for _, s := range m.ValueFiles {
l = len(s)
n += 1 + l + sovRepository(uint64(l))
}
}
l = len(m.Namespace)
if l > 0 {
n += 1 + l + sovRepository(uint64(l))
}
return n
}
@@ -1061,64 +1014,6 @@ func (m *ManifestRequest) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
case 7:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ValueFiles", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowRepository
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthRepository
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.ValueFiles = append(m.ValueFiles, string(dAtA[iNdEx:postIndex]))
iNdEx = postIndex
case 8:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Namespace", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowRepository
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthRepository
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Namespace = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipRepository(dAtA[iNdEx:])
@@ -1887,42 +1782,40 @@ var (
func init() { proto.RegisterFile("reposerver/repository/repository.proto", fileDescriptorRepository) }
var fileDescriptorRepository = []byte{
// 584 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x55, 0xdd, 0x8a, 0xd3, 0x40,
0x14, 0xde, 0x6c, 0xbb, 0xdd, 0x76, 0x2a, 0xee, 0x3a, 0x14, 0x09, 0x69, 0x29, 0x21, 0xa0, 0xf4,
0xc6, 0x84, 0xd6, 0x1b, 0x6f, 0x44, 0xd0, 0xd5, 0x45, 0xd8, 0x65, 0x25, 0x5e, 0xe9, 0x8d, 0x4c,
0xd3, 0x63, 0x3a, 0x36, 0x99, 0x19, 0x67, 0xa6, 0x01, 0x9f, 0xc2, 0x07, 0xf0, 0x0d, 0x7c, 0x12,
0x2f, 0x7d, 0x04, 0xe9, 0xdd, 0xbe, 0x85, 0x64, 0x9a, 0x34, 0x69, 0xb7, 0xec, 0x8d, 0x08, 0x7b,
0x77, 0xe6, 0x3b, 0x27, 0xdf, 0x77, 0xfe, 0x38, 0x41, 0x8f, 0x25, 0x08, 0xae, 0x40, 0x66, 0x20,
0x03, 0x63, 0x52, 0xcd, 0xe5, 0xb7, 0x9a, 0xe9, 0x0b, 0xc9, 0x35, 0xc7, 0xa8, 0x42, 0x9c, 0x5e,
0xcc, 0x63, 0x6e, 0xe0, 0x20, 0xb7, 0xd6, 0x11, 0xce, 0x20, 0xe6, 0x3c, 0x4e, 0x20, 0x20, 0x82,
0x06, 0x84, 0x31, 0xae, 0x89, 0xa6, 0x9c, 0xa9, 0xc2, 0xeb, 0x2d, 0x9e, 0x29, 0x9f, 0x72, 0xe3,
0x8d, 0xb8, 0x84, 0x20, 0x1b, 0x07, 0x31, 0x30, 0x90, 0x44, 0xc3, 0xac, 0x88, 0x79, 0x1b, 0x53,
0x3d, 0x5f, 0x4e, 0xfd, 0x88, 0xa7, 0x01, 0x91, 0x46, 0xe2, 0x8b, 0x31, 0x9e, 0x44, 0xb3, 0x40,
0x2c, 0xe2, 0xfc, 0x63, 0x15, 0x10, 0x21, 0x12, 0x1a, 0x19, 0xf2, 0x20, 0x1b, 0x93, 0x44, 0xcc,
0xc9, 0x0d, 0x2a, 0xef, 0x67, 0x03, 0x9d, 0x5c, 0x12, 0x46, 0x3f, 0x83, 0xd2, 0x21, 0x7c, 0x5d,
0x82, 0xd2, 0xf8, 0x03, 0x6a, 0xe6, 0x45, 0xd8, 0x96, 0x6b, 0x8d, 0xba, 0x93, 0xd7, 0x7e, 0xa5,
0xe6, 0x97, 0x6a, 0xc6, 0xf8, 0x14, 0xcd, 0x7c, 0xb1, 0x88, 0xfd, 0x5c, 0xcd, 0xaf, 0xa9, 0xf9,
0xa5, 0x9a, 0x1f, 0x6e, 0x7a, 0x11, 0x1a, 0x4a, 0xec, 0xa0, 0xb6, 0x84, 0x8c, 0x2a, 0xca, 0x99,
0x7d, 0xe8, 0x5a, 0xa3, 0x4e, 0xb8, 0x79, 0x63, 0x8c, 0x9a, 0x82, 0xe8, 0xb9, 0xdd, 0x30, 0xb8,
0xb1, 0xb1, 0x8b, 0xba, 0xc0, 0x32, 0x2a, 0x39, 0x4b, 0x81, 0x69, 0xbb, 0x69, 0x5c, 0x75, 0x28,
0x67, 0x24, 0x42, 0x5c, 0x90, 0x29, 0x24, 0xf6, 0xd1, 0x9a, 0xb1, 0x7c, 0xe3, 0xef, 0x16, 0xea,
0x47, 0x3c, 0x15, 0x9c, 0x01, 0xd3, 0xef, 0x88, 0x24, 0x29, 0x68, 0x90, 0x57, 0x19, 0x48, 0x49,
0x67, 0xa0, 0xec, 0x96, 0xdb, 0x18, 0x75, 0x27, 0x97, 0xff, 0x50, 0xe0, 0xab, 0x1b, 0xec, 0xe1,
0x6d, 0x8a, 0x78, 0x88, 0x50, 0x46, 0x92, 0x25, 0xbc, 0xa1, 0x09, 0x28, 0xfb, 0xd8, 0x6d, 0x8c,
0x3a, 0x61, 0x0d, 0xc1, 0x03, 0xd4, 0x61, 0x24, 0x05, 0x25, 0x48, 0x04, 0x76, 0xdb, 0x94, 0x53,
0x01, 0xde, 0xb5, 0x85, 0x4e, 0xab, 0x61, 0x29, 0xc1, 0x99, 0x82, 0xfc, 0x93, 0xb4, 0xc0, 0x94,
0x6d, 0x19, 0xc6, 0x0a, 0xd8, 0x26, 0x3c, 0xdc, 0x21, 0xc4, 0x0f, 0x51, 0x6b, 0xbd, 0xd2, 0x45,
0xd3, 0x8b, 0xd7, 0xd6, 0x98, 0x9a, 0x3b, 0x63, 0x02, 0xd4, 0x12, 0x79, 0x61, 0xca, 0x3e, 0xfa,
0x1f, 0xed, 0x2b, 0xc8, 0xbd, 0x1f, 0x16, 0xba, 0x7f, 0x41, 0x95, 0x3e, 0xa3, 0xf2, 0xee, 0xed,
0xa5, 0xe7, 0xa2, 0x76, 0x3e, 0xb0, 0x3c, 0x41, 0xdc, 0x43, 0x47, 0x54, 0x43, 0x5a, 0x36, 0x7f,
0xfd, 0x30, 0xf9, 0x9f, 0x83, 0xce, 0xa3, 0xee, 0x60, 0xfe, 0x8f, 0xd0, 0xc9, 0x26, 0xb9, 0x62,
0x8f, 0x30, 0x6a, 0xce, 0x88, 0x26, 0x26, 0xbb, 0x7b, 0xa1, 0xb1, 0x27, 0xd7, 0x16, 0x7a, 0x50,
0x69, 0xbd, 0x07, 0x99, 0xd1, 0x08, 0xf0, 0x15, 0x3a, 0x3d, 0x2f, 0xce, 0x48, 0xb9, 0x8d, 0xb8,
0xef, 0xd7, 0x2e, 0xe1, 0xce, 0x41, 0x71, 0x06, 0xfb, 0x9d, 0x6b, 0x61, 0xef, 0x00, 0x3f, 0x47,
0xc7, 0xc5, 0xa8, 0xb1, 0x53, 0x0f, 0xdd, 0x9e, 0xbf, 0xd3, 0xab, 0xfb, 0xca, 0xf6, 0x7b, 0x07,
0xf8, 0x0c, 0x1d, 0x17, 0xc5, 0x6c, 0x7f, 0xbe, 0xdd, 0x7e, 0xa7, 0xbf, 0xd7, 0x57, 0x26, 0xf1,
0xf2, 0xc5, 0xaf, 0xd5, 0xd0, 0xfa, 0xbd, 0x1a, 0x5a, 0x7f, 0x56, 0x43, 0xeb, 0xe3, 0xf8, 0xb6,
0x13, 0xbb, 0xf7, 0x57, 0x30, 0x6d, 0x99, 0x8b, 0xfa, 0xf4, 0x6f, 0x00, 0x00, 0x00, 0xff, 0xff,
0x22, 0x8e, 0xa1, 0x51, 0x2a, 0x06, 0x00, 0x00,
// 560 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x54, 0x4d, 0x8b, 0xd3, 0x40,
0x18, 0xde, 0x6c, 0x3f, 0x76, 0x3b, 0x15, 0x77, 0x1d, 0x8a, 0x84, 0xb4, 0x94, 0x10, 0x50, 0x7a,
0x31, 0xa1, 0xf5, 0xe2, 0x45, 0x04, 0x5d, 0x5d, 0x84, 0x5d, 0x56, 0xe2, 0x49, 0x2f, 0x32, 0x4d,
0x5f, 0xd3, 0xb1, 0xcd, 0xcc, 0x38, 0x33, 0x1b, 0xf0, 0x57, 0xf8, 0x03, 0xfc, 0x43, 0x1e, 0xfd,
0x09, 0xd2, 0xdb, 0x82, 0x3f, 0x42, 0x32, 0x9d, 0x34, 0xe9, 0x6e, 0xd9, 0x8b, 0x08, 0x7b, 0x7b,
0xbf, 0xf2, 0x3c, 0xcf, 0x3c, 0x79, 0x79, 0xd1, 0x63, 0x09, 0x82, 0x2b, 0x90, 0x39, 0xc8, 0xc8,
0x84, 0x54, 0x73, 0xf9, 0xad, 0x16, 0x86, 0x42, 0x72, 0xcd, 0x31, 0xaa, 0x2a, 0x5e, 0x2f, 0xe5,
0x29, 0x37, 0xe5, 0xa8, 0x88, 0xd6, 0x13, 0xde, 0x20, 0xe5, 0x3c, 0x5d, 0x42, 0x44, 0x04, 0x8d,
0x08, 0x63, 0x5c, 0x13, 0x4d, 0x39, 0x53, 0xb6, 0x1b, 0x2c, 0x9e, 0xa9, 0x90, 0x72, 0xd3, 0x4d,
0xb8, 0x84, 0x28, 0x1f, 0x47, 0x29, 0x30, 0x90, 0x44, 0xc3, 0xcc, 0xce, 0xbc, 0x4d, 0xa9, 0x9e,
0x5f, 0x4e, 0xc3, 0x84, 0x67, 0x11, 0x91, 0x86, 0xe2, 0x8b, 0x09, 0x9e, 0x24, 0xb3, 0x48, 0x2c,
0xd2, 0xe2, 0x63, 0x15, 0x11, 0x21, 0x96, 0x34, 0x31, 0xe0, 0x51, 0x3e, 0x26, 0x4b, 0x31, 0x27,
0x37, 0xa0, 0x82, 0x3f, 0xfb, 0xe8, 0xe8, 0x9c, 0x30, 0xfa, 0x19, 0x94, 0x8e, 0xe1, 0xeb, 0x25,
0x28, 0x8d, 0x3f, 0xa0, 0x66, 0xf1, 0x08, 0xd7, 0xf1, 0x9d, 0x51, 0x77, 0xf2, 0x3a, 0xac, 0xd8,
0xc2, 0x92, 0xcd, 0x04, 0x9f, 0x92, 0x59, 0x28, 0x16, 0x69, 0x58, 0xb0, 0x85, 0x35, 0xb6, 0xb0,
0x64, 0x0b, 0xe3, 0x8d, 0x17, 0xb1, 0x81, 0xc4, 0x1e, 0x3a, 0x94, 0x90, 0x53, 0x45, 0x39, 0x73,
0xf7, 0x7d, 0x67, 0xd4, 0x89, 0x37, 0x39, 0xc6, 0xa8, 0x29, 0x88, 0x9e, 0xbb, 0x0d, 0x53, 0x37,
0x31, 0xf6, 0x51, 0x17, 0x58, 0x4e, 0x25, 0x67, 0x19, 0x30, 0xed, 0x36, 0x4d, 0xab, 0x5e, 0x2a,
0x10, 0x89, 0x10, 0x67, 0x64, 0x0a, 0x4b, 0xb7, 0xb5, 0x46, 0x2c, 0x73, 0xfc, 0xdd, 0x41, 0xfd,
0x84, 0x67, 0x82, 0x33, 0x60, 0xfa, 0x1d, 0x91, 0x24, 0x03, 0x0d, 0xf2, 0x22, 0x07, 0x29, 0xe9,
0x0c, 0x94, 0xdb, 0xf6, 0x1b, 0xa3, 0xee, 0xe4, 0xfc, 0x1f, 0x1e, 0xf8, 0xea, 0x06, 0x7a, 0x7c,
0x1b, 0x63, 0x70, 0xe5, 0xa0, 0xe3, 0xca, 0x6e, 0x25, 0x38, 0x53, 0x80, 0x07, 0xa8, 0x93, 0xd9,
0x9a, 0x72, 0x1d, 0xbf, 0x31, 0xea, 0xc4, 0x55, 0xa1, 0xe8, 0x32, 0x92, 0x81, 0x12, 0x24, 0x01,
0xeb, 0x59, 0x55, 0xc0, 0x0f, 0x51, 0x7b, 0xbd, 0x94, 0xd6, 0x36, 0x9b, 0x6d, 0x19, 0xdd, 0xbc,
0x66, 0x34, 0xa0, 0xb6, 0x28, 0xa4, 0x29, 0xb7, 0xf5, 0x3f, 0x0c, 0xb0, 0xe0, 0xc1, 0x0f, 0x07,
0xdd, 0x3f, 0xa3, 0x4a, 0x9f, 0x50, 0x79, 0xf7, 0x36, 0x2b, 0xf0, 0xd1, 0xe1, 0x1b, 0xba, 0x84,
0x42, 0x20, 0xee, 0xa1, 0x16, 0xd5, 0x90, 0x95, 0xe6, 0xaf, 0x13, 0xa3, 0xff, 0x14, 0x74, 0x31,
0x75, 0x07, 0xf5, 0x3f, 0x42, 0x47, 0x1b, 0x71, 0x76, 0x8f, 0x30, 0x6a, 0xce, 0x88, 0x26, 0x46,
0xdd, 0xbd, 0xd8, 0xc4, 0x93, 0x2b, 0x07, 0x3d, 0xa8, 0xb8, 0xde, 0x83, 0xcc, 0x69, 0x02, 0xf8,
0x02, 0x1d, 0x9f, 0xda, 0x43, 0x50, 0x6e, 0x23, 0xee, 0x87, 0xb5, 0x5b, 0x76, 0xed, 0x24, 0x78,
0x83, 0xdd, 0xcd, 0x35, 0x71, 0xb0, 0x87, 0x9f, 0xa3, 0x03, 0xfb, 0xab, 0xb1, 0x57, 0x1f, 0xdd,
0xfe, 0xff, 0x5e, 0xaf, 0xde, 0x2b, 0xed, 0x0f, 0xf6, 0xf0, 0x09, 0x3a, 0xb0, 0x8f, 0xd9, 0xfe,
0x7c, 0xdb, 0x7e, 0xaf, 0xbf, 0xb3, 0x57, 0x8a, 0x78, 0xf9, 0xe2, 0xe7, 0x6a, 0xe8, 0xfc, 0x5a,
0x0d, 0x9d, 0xdf, 0xab, 0xa1, 0xf3, 0x71, 0x7c, 0xdb, 0x91, 0xdc, 0x79, 0xcc, 0xa7, 0x6d, 0x73,
0x13, 0x9f, 0xfe, 0x0d, 0x00, 0x00, 0xff, 0xff, 0x01, 0x96, 0x09, 0x12, 0xec, 0x05, 0x00, 0x00,
}


@@ -16,8 +16,6 @@ message ManifestRequest {
string environment = 4;
string appLabel = 5;
repeated github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1.ComponentParameter componentParameterOverrides = 6;
repeated string valueFiles = 7;
string namespace = 8;
}
message ManifestResponse {


@@ -1,19 +0,0 @@
package repository
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestGenerateManifestInDir(t *testing.T) {
q := ManifestRequest{}
res1, err := generateManifests("../../manifests/components", &q)
assert.Nil(t, err)
assert.True(t, len(res1.Manifests) == 16) // update this value if we add/remove manifests
// this will test concatenated manifests to verify we split YAMLs correctly
res2, err := generateManifests("../../manifests", &q)
assert.Nil(t, err)
assert.True(t, len(res2.Manifests) == len(res1.Manifests))
}


@@ -1,14 +1,11 @@
package account
import (
"time"
jwt "github.com/dgrijalva/jwt-go"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"github.com/argoproj/argo-cd/common"
jwtutil "github.com/argoproj/argo-cd/util/jwt"
"github.com/argoproj/argo-cd/util/password"
"github.com/argoproj/argo-cd/util/session"
@@ -30,17 +27,16 @@ func NewServer(sessionMgr *session.SessionManager, settingsMgr *settings.Setting
}
// UpdatePassword updates the password of the local admin superuser.
// UpdatePassword updates a local user's password
func (s *Server) UpdatePassword(ctx context.Context, q *UpdatePasswordRequest) (*UpdatePasswordResponse, error) {
username := getAuthenticatedUser(ctx)
if username != common.ArgoCDAdminUsername {
return nil, status.Errorf(codes.InvalidArgument, "password can only be changed for local users, not user %q", username)
}
cdSettings, err := s.settingsMgr.GetSettings()
if err != nil {
return nil, err
}
if _, ok := cdSettings.LocalUsers[username]; !ok {
return nil, status.Errorf(codes.InvalidArgument, "password can only be changed for local users")
}
err = s.sessionMgr.VerifyUsernamePassword(username, q.CurrentPassword)
if err != nil {
@@ -52,8 +48,7 @@ func (s *Server) UpdatePassword(ctx context.Context, q *UpdatePasswordRequest) (
return nil, err
}
cdSettings.AdminPasswordHash = hashedPassword
cdSettings.AdminPasswordMtime = time.Now().UTC()
cdSettings.LocalUsers[username] = hashedPassword
err = s.settingsMgr.SaveSettings(cdSettings)
if err != nil {


@@ -15,7 +15,6 @@ import (
"k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes"
@@ -32,9 +31,7 @@ import (
argoutil "github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/grpc"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/rbac"
"github.com/argoproj/argo-cd/util/session"
)
// Server provides a Application service
@@ -47,7 +44,6 @@ type Server struct {
appComparator controller.AppStateManager
enf *rbac.Enforcer
projectLock *util.KeyLock
auditLogger *argo.AuditLogger
}
// NewServer returns a new instance of the Application service
@@ -70,7 +66,6 @@ func NewServer(
appComparator: controller.NewAppStateManager(db, appclientset, repoClientset, namespace),
enf: enf,
projectLock: projectLock,
auditLogger: argo.NewAuditLogger(namespace, kubeclientset, "argocd-server"),
}
}
@@ -102,8 +97,10 @@ func (s *Server) Create(ctx context.Context, q *ApplicationCreateRequest) (*appv
return nil, grpc.ErrPermissionDenied
}
s.projectLock.Lock(q.Application.Spec.Project)
defer s.projectLock.Unlock(q.Application.Spec.Project)
if !q.Application.Spec.BelongsToDefaultProject() {
s.projectLock.Lock(q.Application.Spec.Project)
defer s.projectLock.Unlock(q.Application.Spec.Project)
}
a := q.Application
err := s.validateApp(ctx, &a.Spec)
@@ -131,10 +128,6 @@ func (s *Server) Create(ctx context.Context, q *ApplicationCreateRequest) (*appv
}
}
}
if err == nil {
s.logEvent(out, ctx, argo.EventReasonResourceCreated, "create")
}
return out, err
}
@@ -173,8 +166,6 @@ func (s *Server) GetManifests(ctx context.Context, q *ApplicationManifestQuery)
Revision: revision,
ComponentParameterOverrides: overrides,
AppLabel: a.Name,
ValueFiles: a.Spec.Source.ValuesFiles,
Namespace: a.Spec.Destination.Namespace,
})
if err != nil {
return nil, err
@@ -215,41 +206,24 @@ func (s *Server) ListResourceEvents(ctx context.Context, q *ApplicationResourceE
if !s.enf.EnforceClaims(ctx.Value("claims"), "applications/events", "get", appRBACName(*a)) {
return nil, grpc.ErrPermissionDenied
}
var (
kubeClientset kubernetes.Interface
fieldSelector string
namespace string
)
// There are two places where we get events. If we are getting application events, we query
// our own cluster. If it is events on a resource on an external cluster, then we query the
// external cluster using its rest.Config
if q.ResourceName == "" && q.ResourceUID == "" {
kubeClientset = s.kubeclientset
namespace = a.Namespace
fieldSelector = fields.SelectorFromSet(map[string]string{
"involvedObject.name": a.Name,
"involvedObject.uid": string(a.UID),
"involvedObject.namespace": a.Namespace,
}).String()
} else {
var config *rest.Config
config, namespace, err = s.getApplicationClusterConfig(*q.Name)
if err != nil {
return nil, err
}
kubeClientset, err = kubernetes.NewForConfig(config)
if err != nil {
return nil, err
}
fieldSelector = fields.SelectorFromSet(map[string]string{
"involvedObject.name": q.ResourceName,
"involvedObject.uid": q.ResourceUID,
"involvedObject.namespace": namespace,
}).String()
config, namespace, err := s.getApplicationClusterConfig(*q.Name)
if err != nil {
return nil, err
}
kubeClientset, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, err
}
fieldSelector := fields.SelectorFromSet(map[string]string{
"involvedObject.name": q.ResourceName,
"involvedObject.uid": q.ResourceUID,
"involvedObject.namespace": namespace,
}).String()
log.Infof("Querying for resource events with field selector: %s", fieldSelector)
opts := metav1.ListOptions{FieldSelector: fieldSelector}
return kubeClientset.CoreV1().Events(namespace).List(opts)
}
@@ -259,8 +233,10 @@ func (s *Server) Update(ctx context.Context, q *ApplicationUpdateRequest) (*appv
return nil, grpc.ErrPermissionDenied
}
s.projectLock.Lock(q.Application.Spec.Project)
defer s.projectLock.Unlock(q.Application.Spec.Project)
if !q.Application.Spec.BelongsToDefaultProject() {
s.projectLock.Lock(q.Application.Spec.Project)
defer s.projectLock.Unlock(q.Application.Spec.Project)
}
a := q.Application
err := s.validateApp(ctx, &a.Spec)
@@ -275,10 +251,6 @@ func (s *Server) Update(ctx context.Context, q *ApplicationUpdateRequest) (*appv
// throws an error if the passed override is invalid
// if both the passed override and the old overrides are invalid, throws an error; old overrides are not dropped
func (s *Server) removeInvalidOverrides(a *appv1.Application, q *ApplicationUpdateSpecRequest) (*ApplicationUpdateSpecRequest, error) {
if a.Spec.Source.Environment == "" {
// this method is only valid for ksonnet apps
return q, nil
}
oldParams := argo.ParamToMap(a.Spec.Source.ComponentParameterOverrides)
validAppSet := argo.ParamToMap(a.Status.Parameters)
@@ -288,7 +260,7 @@ func (s *Server) removeInvalidOverrides(a *appv1.Application, q *ApplicationUpda
if !argo.CheckValidParam(validAppSet, param) {
alreadySet := argo.CheckValidParam(oldParams, param)
if !alreadySet {
return nil, status.Errorf(codes.InvalidArgument, "Parameter '%s' in '%s' does not exist in ksonnet app", param.Name, param.Component)
return nil, status.Errorf(codes.InvalidArgument, "Parameter '%s' in '%s' does not exist in ksonnet", param.Name, param.Component)
}
} else {
params = append(params, param)
@@ -301,8 +273,10 @@ func (s *Server) removeInvalidOverrides(a *appv1.Application, q *ApplicationUpda
// UpdateSpec updates an application spec and filters out any invalid parameter overrides
func (s *Server) UpdateSpec(ctx context.Context, q *ApplicationUpdateSpecRequest) (*appv1.ApplicationSpec, error) {
s.projectLock.Lock(q.Spec.Project)
defer s.projectLock.Unlock(q.Spec.Project)
if !q.Spec.BelongsToDefaultProject() {
s.projectLock.Lock(q.Spec.Project)
defer s.projectLock.Unlock(q.Spec.Project)
}
a, err := s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Get(*q.Name, metav1.GetOptions{})
if err != nil {
@@ -319,23 +293,14 @@ func (s *Server) UpdateSpec(ctx context.Context, q *ApplicationUpdateSpecRequest
if err != nil {
return nil, err
}
for {
a.Spec = q.Spec
_, err = s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Update(a)
if err == nil {
if err != nil {
s.logEvent(a, ctx, argo.EventReasonResourceUpdated, "update")
}
return &q.Spec, nil
}
if !apierr.IsConflict(err) {
return nil, err
}
a, err = s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Get(*q.Name, metav1.GetOptions{})
if err != nil {
return nil, err
}
patch, err := json.Marshal(map[string]appv1.ApplicationSpec{
"spec": q.Spec,
})
if err != nil {
return nil, err
}
_, err = s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Patch(*q.Name, types.MergePatchType, patch)
return &q.Spec, err
}
// Delete removes an application and all associated resources
@@ -345,8 +310,10 @@ func (s *Server) Delete(ctx context.Context, q *ApplicationDeleteRequest) (*Appl
return nil, err
}
s.projectLock.Lock(a.Spec.Project)
defer s.projectLock.Unlock(a.Spec.Project)
if !a.Spec.BelongsToDefaultProject() {
s.projectLock.Lock(a.Spec.Project)
defer s.projectLock.Unlock(a.Spec.Project)
}
if !s.enf.EnforceClaims(ctx.Value("claims"), "applications", "delete", appRBACName(*a)) {
return nil, grpc.ErrPermissionDenied
@@ -388,7 +355,6 @@ func (s *Server) Delete(ctx context.Context, q *ApplicationDeleteRequest) (*Appl
return nil, err
}
s.logEvent(a, ctx, argo.EventReasonResourceDeleted, "delete")
return &ApplicationResponse{}, nil
}
@@ -422,7 +388,6 @@ func (s *Server) Watch(q *ApplicationQuery, ws ApplicationService_WatchServer) e
case <-ws.Context().Done():
w.Stop()
case <-done:
w.Stop()
}
return nil
}
@@ -443,7 +408,7 @@ func (s *Server) validateApp(ctx context.Context, spec *appv1.ApplicationSpec) e
return err
}
if len(conditions) > 0 {
return status.Errorf(codes.InvalidArgument, "application spec is invalid: %s", argo.FormatAppConditions(conditions))
return status.Errorf(codes.InvalidArgument, "application spec is invalid: \n%s", argo.FormatAppConditions(conditions))
}
return nil
}
@@ -476,69 +441,35 @@ func (s *Server) ensurePodBelongsToApp(applicationName string, podName, namespac
return nil
}
func (s *Server) DeleteResource(ctx context.Context, q *ApplicationDeleteResourceRequest) (*ApplicationResponse, error) {
func (s *Server) DeletePod(ctx context.Context, q *ApplicationDeletePodRequest) (*ApplicationResponse, error) {
a, err := s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Get(*q.Name, metav1.GetOptions{})
if err != nil {
return nil, err
}
if !s.enf.EnforceClaims(ctx.Value("claims"), "applications/resources", "delete", appRBACName(*a)) {
if !s.enf.EnforceClaims(ctx.Value("claims"), "applications/pods", "delete", appRBACName(*a)) {
return nil, grpc.ErrPermissionDenied
}
found := findResource(a, q)
if found == nil {
return nil, status.Errorf(codes.InvalidArgument, "%s %s %s not found as part of application %s", q.Kind, q.APIVersion, q.ResourceName, *q.Name)
}
config, namespace, err := s.getApplicationClusterConfig(*q.Name)
if err != nil {
return nil, err
}
err = kube.DeleteResource(config, found, namespace)
kubeClientset, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, err
}
err = s.ensurePodBelongsToApp(*q.Name, *q.PodName, namespace, kubeClientset)
if err != nil {
return nil, err
}
err = kubeClientset.CoreV1().Pods(namespace).Delete(*q.PodName, &metav1.DeleteOptions{})
if err != nil {
return nil, err
}
return &ApplicationResponse{}, nil
}
func findResource(a *appv1.Application, q *ApplicationDeleteResourceRequest) *unstructured.Unstructured {
for _, res := range a.Status.ComparisonResult.Resources {
liveObj, err := res.LiveObject()
if err != nil {
log.Warnf("Failed to unmarshal live object: %v", err)
continue
}
if liveObj == nil {
continue
}
if q.ResourceName == liveObj.GetName() && q.APIVersion == liveObj.GetAPIVersion() && q.Kind == liveObj.GetKind() {
return liveObj
}
liveObj = recurseResourceNode(q.ResourceName, q.APIVersion, q.Kind, res.ChildLiveResources)
if liveObj != nil {
return liveObj
}
}
return nil
}
func recurseResourceNode(name, apiVersion, kind string, nodes []appv1.ResourceNode) *unstructured.Unstructured {
for _, node := range nodes {
var childObj unstructured.Unstructured
err := json.Unmarshal([]byte(node.State), &childObj)
if err != nil {
log.Warnf("Failed to unmarshal child live object: %v", err)
continue
}
if name == childObj.GetName() && apiVersion == childObj.GetAPIVersion() && kind == childObj.GetKind() {
return &childObj
}
recurseChildObj := recurseResourceNode(name, apiVersion, kind, node.Children)
if recurseChildObj != nil {
return recurseChildObj
}
}
return nil
}
func (s *Server) PodLogs(q *ApplicationPodLogsQuery, ws ApplicationService_PodLogsServer) error {
a, err := s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Get(*q.Name, metav1.GetOptions{})
if err != nil {
@@ -639,7 +570,7 @@ func (s *Server) Sync(ctx context.Context, syncReq *ApplicationSyncRequest) (*ap
if !s.enf.EnforceClaims(ctx.Value("claims"), "applications", "sync", appRBACName(*a)) {
return nil, grpc.ErrPermissionDenied
}
return s.setAppOperation(ctx, *syncReq.Name, "sync", func(app *appv1.Application) (*appv1.Operation, error) {
return s.setAppOperation(ctx, *syncReq.Name, func(app *appv1.Application) (*appv1.Operation, error) {
syncOp := appv1.SyncOperation{
Revision: syncReq.Revision,
Prune: syncReq.Prune,
@@ -660,7 +591,7 @@ func (s *Server) Rollback(ctx context.Context, rollbackReq *ApplicationRollbackR
if !s.enf.EnforceClaims(ctx.Value("claims"), "applications", "rollback", appRBACName(*a)) {
return nil, grpc.ErrPermissionDenied
}
return s.setAppOperation(ctx, *rollbackReq.Name, "rollback", func(app *appv1.Application) (*appv1.Operation, error) {
return s.setAppOperation(ctx, *rollbackReq.Name, func(app *appv1.Application) (*appv1.Operation, error) {
return &appv1.Operation{
Rollback: &appv1.RollbackOperation{
ID: rollbackReq.ID,
@@ -671,7 +602,7 @@ func (s *Server) Rollback(ctx context.Context, rollbackReq *ApplicationRollbackR
})
}
func (s *Server) setAppOperation(ctx context.Context, appName string, operationName string, operationCreator func(app *appv1.Application) (*appv1.Operation, error)) (*appv1.Application, error) {
func (s *Server) setAppOperation(ctx context.Context, appName string, operationCreator func(app *appv1.Application) (*appv1.Operation, error)) (*appv1.Application, error) {
for {
a, err := s.Get(ctx, &ApplicationQuery{Name: &appName})
if err != nil {
@@ -690,9 +621,6 @@ func (s *Server) setAppOperation(ctx context.Context, appName string, operationN
if err != nil && apierr.IsConflict(err) {
log.Warnf("Failed to set operation for app '%s' due to update conflict. Retrying again...", appName)
} else {
if err == nil {
s.logEvent(a, ctx, argo.EventReasonResourceUpdated, operationName)
}
return a, err
}
}
@@ -724,13 +652,7 @@ func (s *Server) TerminateOperation(ctx context.Context, termOpReq *OperationTer
a, err = s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Get(*termOpReq.Name, metav1.GetOptions{})
if err != nil {
return nil, err
} else {
s.logEvent(a, ctx, argo.EventReasonResourceUpdated, "terminateop")
}
}
return nil, status.Errorf(codes.Internal, "Failed to terminate app. Too many conflicts")
}
func (s *Server) logEvent(a *appv1.Application, ctx context.Context, reason string, action string) {
s.auditLogger.LogAppEvent(a, argo.EventInfo{Reason: reason, Action: action, Username: session.Username(ctx)}, v1.EventTypeNormal)
}


@@ -22,7 +22,7 @@
ApplicationSyncRequest
ApplicationUpdateSpecRequest
ApplicationRollbackRequest
ApplicationDeleteResourceRequest
ApplicationDeletePodRequest
ApplicationPodLogsQuery
LogEntry
OperationTerminateRequest
@@ -359,45 +359,29 @@ func (m *ApplicationRollbackRequest) GetPrune() bool {
return false
}
type ApplicationDeleteResourceRequest struct {
type ApplicationDeletePodRequest struct {
Name *string `protobuf:"bytes,1,req,name=name" json:"name,omitempty"`
ResourceName string `protobuf:"bytes,2,req,name=resourceName" json:"resourceName"`
APIVersion string `protobuf:"bytes,3,req,name=apiVersion" json:"apiVersion"`
Kind string `protobuf:"bytes,4,req,name=kind" json:"kind"`
PodName *string `protobuf:"bytes,2,req,name=podName" json:"podName,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *ApplicationDeleteResourceRequest) Reset() { *m = ApplicationDeleteResourceRequest{} }
func (m *ApplicationDeleteResourceRequest) String() string { return proto.CompactTextString(m) }
func (*ApplicationDeleteResourceRequest) ProtoMessage() {}
func (*ApplicationDeleteResourceRequest) Descriptor() ([]byte, []int) {
func (m *ApplicationDeletePodRequest) Reset() { *m = ApplicationDeletePodRequest{} }
func (m *ApplicationDeletePodRequest) String() string { return proto.CompactTextString(m) }
func (*ApplicationDeletePodRequest) ProtoMessage() {}
func (*ApplicationDeletePodRequest) Descriptor() ([]byte, []int) {
return fileDescriptorApplication, []int{10}
}
func (m *ApplicationDeleteResourceRequest) GetName() string {
func (m *ApplicationDeletePodRequest) GetName() string {
if m != nil && m.Name != nil {
return *m.Name
}
return ""
}
func (m *ApplicationDeleteResourceRequest) GetResourceName() string {
if m != nil {
return m.ResourceName
}
return ""
}
func (m *ApplicationDeleteResourceRequest) GetAPIVersion() string {
if m != nil {
return m.APIVersion
}
return ""
}
func (m *ApplicationDeleteResourceRequest) GetKind() string {
if m != nil {
return m.Kind
func (m *ApplicationDeletePodRequest) GetPodName() string {
if m != nil && m.PodName != nil {
return *m.PodName
}
return ""
}
@@ -535,7 +519,7 @@ func init() {
proto.RegisterType((*ApplicationSyncRequest)(nil), "application.ApplicationSyncRequest")
proto.RegisterType((*ApplicationUpdateSpecRequest)(nil), "application.ApplicationUpdateSpecRequest")
proto.RegisterType((*ApplicationRollbackRequest)(nil), "application.ApplicationRollbackRequest")
proto.RegisterType((*ApplicationDeleteResourceRequest)(nil), "application.ApplicationDeleteResourceRequest")
proto.RegisterType((*ApplicationDeletePodRequest)(nil), "application.ApplicationDeletePodRequest")
proto.RegisterType((*ApplicationPodLogsQuery)(nil), "application.ApplicationPodLogsQuery")
proto.RegisterType((*LogEntry)(nil), "application.LogEntry")
proto.RegisterType((*OperationTerminateRequest)(nil), "application.OperationTerminateRequest")
@@ -577,8 +561,8 @@ type ApplicationServiceClient interface {
Rollback(ctx context.Context, in *ApplicationRollbackRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Application, error)
// TerminateOperation terminates the currently running operation
TerminateOperation(ctx context.Context, in *OperationTerminateRequest, opts ...grpc.CallOption) (*OperationTerminateResponse, error)
// DeleteResource deletes a single application resource
DeleteResource(ctx context.Context, in *ApplicationDeleteResourceRequest, opts ...grpc.CallOption) (*ApplicationResponse, error)
// DeletePod deletes a pod that belongs to the application
DeletePod(ctx context.Context, in *ApplicationDeletePodRequest, opts ...grpc.CallOption) (*ApplicationResponse, error)
// PodLogs returns stream of log entries for the specified pod. Pod
PodLogs(ctx context.Context, in *ApplicationPodLogsQuery, opts ...grpc.CallOption) (ApplicationService_PodLogsClient, error)
}
@@ -722,9 +706,9 @@ func (c *applicationServiceClient) TerminateOperation(ctx context.Context, in *O
return out, nil
}
func (c *applicationServiceClient) DeleteResource(ctx context.Context, in *ApplicationDeleteResourceRequest, opts ...grpc.CallOption) (*ApplicationResponse, error) {
func (c *applicationServiceClient) DeletePod(ctx context.Context, in *ApplicationDeletePodRequest, opts ...grpc.CallOption) (*ApplicationResponse, error) {
out := new(ApplicationResponse)
err := grpc.Invoke(ctx, "/application.ApplicationService/DeleteResource", in, out, c.cc, opts...)
err := grpc.Invoke(ctx, "/application.ApplicationService/DeletePod", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
@@ -790,8 +774,8 @@ type ApplicationServiceServer interface {
Rollback(context.Context, *ApplicationRollbackRequest) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Application, error)
// TerminateOperation terminates the currently running operation
TerminateOperation(context.Context, *OperationTerminateRequest) (*OperationTerminateResponse, error)
// DeleteResource deletes a single application resource
DeleteResource(context.Context, *ApplicationDeleteResourceRequest) (*ApplicationResponse, error)
// DeletePod deletes a pod that belongs to the application
DeletePod(context.Context, *ApplicationDeletePodRequest) (*ApplicationResponse, error)
// PodLogs returns stream of log entries for the specified pod. Pod
PodLogs(*ApplicationPodLogsQuery, ApplicationService_PodLogsServer) error
}
@@ -1019,20 +1003,20 @@ func _ApplicationService_TerminateOperation_Handler(srv interface{}, ctx context
return interceptor(ctx, in, info, handler)
}
func _ApplicationService_DeleteResource_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ApplicationDeleteResourceRequest)
func _ApplicationService_DeletePod_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ApplicationDeletePodRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ApplicationServiceServer).DeleteResource(ctx, in)
return srv.(ApplicationServiceServer).DeletePod(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/application.ApplicationService/DeleteResource",
FullMethod: "/application.ApplicationService/DeletePod",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ApplicationServiceServer).DeleteResource(ctx, req.(*ApplicationDeleteResourceRequest))
return srv.(ApplicationServiceServer).DeletePod(ctx, req.(*ApplicationDeletePodRequest))
}
return interceptor(ctx, in, info, handler)
}
@@ -1107,8 +1091,8 @@ var _ApplicationService_serviceDesc = grpc.ServiceDesc{
Handler: _ApplicationService_TerminateOperation_Handler,
},
{
MethodName: "DeleteResource",
Handler: _ApplicationService_DeleteResource_Handler,
MethodName: "DeletePod",
Handler: _ApplicationService_DeletePod_Handler,
},
},
Streams: []grpc.StreamDesc{
@@ -1522,7 +1506,7 @@ func (m *ApplicationRollbackRequest) MarshalTo(dAtA []byte) (int, error) {
return i, nil
}
func (m *ApplicationDeleteResourceRequest) Marshal() (dAtA []byte, err error) {
func (m *ApplicationDeletePodRequest) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
@@ -1532,7 +1516,7 @@ func (m *ApplicationDeleteResourceRequest) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
func (m *ApplicationDeleteResourceRequest) MarshalTo(dAtA []byte) (int, error) {
func (m *ApplicationDeletePodRequest) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
@@ -1545,18 +1529,14 @@ func (m *ApplicationDeleteResourceRequest) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintApplication(dAtA, i, uint64(len(*m.Name)))
i += copy(dAtA[i:], *m.Name)
}
dAtA[i] = 0x12
i++
i = encodeVarintApplication(dAtA, i, uint64(len(m.ResourceName)))
i += copy(dAtA[i:], m.ResourceName)
dAtA[i] = 0x1a
i++
i = encodeVarintApplication(dAtA, i, uint64(len(m.APIVersion)))
i += copy(dAtA[i:], m.APIVersion)
dAtA[i] = 0x22
i++
i = encodeVarintApplication(dAtA, i, uint64(len(m.Kind)))
i += copy(dAtA[i:], m.Kind)
if m.PodName == nil {
return 0, proto.NewRequiredNotSetError("podName")
} else {
dAtA[i] = 0x12
i++
i = encodeVarintApplication(dAtA, i, uint64(len(*m.PodName)))
i += copy(dAtA[i:], *m.PodName)
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
@@ -1876,19 +1856,17 @@ func (m *ApplicationRollbackRequest) Size() (n int) {
return n
}
func (m *ApplicationDeleteResourceRequest) Size() (n int) {
func (m *ApplicationDeletePodRequest) Size() (n int) {
var l int
_ = l
if m.Name != nil {
l = len(*m.Name)
n += 1 + l + sovApplication(uint64(l))
}
l = len(m.ResourceName)
n += 1 + l + sovApplication(uint64(l))
l = len(m.APIVersion)
n += 1 + l + sovApplication(uint64(l))
l = len(m.Kind)
n += 1 + l + sovApplication(uint64(l))
if m.PodName != nil {
l = len(*m.PodName)
n += 1 + l + sovApplication(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
@@ -3177,7 +3155,7 @@ func (m *ApplicationRollbackRequest) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *ApplicationDeleteResourceRequest) Unmarshal(dAtA []byte) error {
func (m *ApplicationDeletePodRequest) Unmarshal(dAtA []byte) error {
var hasFields [1]uint64
l := len(dAtA)
iNdEx := 0
@@ -3201,10 +3179,10 @@ func (m *ApplicationDeleteResourceRequest) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ApplicationDeleteResourceRequest: wiretype end group for non-group")
return fmt.Errorf("proto: ApplicationDeletePodRequest: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ApplicationDeleteResourceRequest: illegal tag %d (wire type %d)", fieldNum, wire)
return fmt.Errorf("proto: ApplicationDeletePodRequest: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
@@ -3240,7 +3218,7 @@ func (m *ApplicationDeleteResourceRequest) Unmarshal(dAtA []byte) error {
hasFields[0] |= uint64(0x00000001)
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ResourceName", wireType)
return fmt.Errorf("proto: wrong wireType = %d for field PodName", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
@@ -3265,69 +3243,10 @@ func (m *ApplicationDeleteResourceRequest) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.ResourceName = string(dAtA[iNdEx:postIndex])
s := string(dAtA[iNdEx:postIndex])
m.PodName = &s
iNdEx = postIndex
hasFields[0] |= uint64(0x00000002)
case 3:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field APIVersion", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowApplication
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthApplication
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.APIVersion = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
hasFields[0] |= uint64(0x00000004)
case 4:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowApplication
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthApplication
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Kind = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
hasFields[0] |= uint64(0x00000008)
default:
iNdEx = preIndex
skippy, err := skipApplication(dAtA[iNdEx:])
@@ -3348,13 +3267,7 @@ func (m *ApplicationDeleteResourceRequest) Unmarshal(dAtA []byte) error {
return proto.NewRequiredNotSetError("name")
}
if hasFields[0]&uint64(0x00000002) == 0 {
return proto.NewRequiredNotSetError("resourceName")
}
if hasFields[0]&uint64(0x00000004) == 0 {
return proto.NewRequiredNotSetError("apiVersion")
}
if hasFields[0]&uint64(0x00000008) == 0 {
return proto.NewRequiredNotSetError("kind")
return proto.NewRequiredNotSetError("podName")
}
if iNdEx > l {
@@ -3982,88 +3895,86 @@ var (
func init() { proto.RegisterFile("server/application/application.proto", fileDescriptorApplication) }
var fileDescriptorApplication = []byte{
// 1323 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xb4, 0x58, 0xcd, 0x6f, 0xdc, 0x44,
0x14, 0x67, 0x76, 0xb7, 0xf9, 0x78, 0xa9, 0x10, 0x0c, 0x6d, 0x31, 0x26, 0x4d, 0x57, 0x6e, 0x9a,
0xa6, 0x29, 0xb5, 0x9b, 0x08, 0x09, 0x54, 0x21, 0xa1, 0x86, 0x96, 0x36, 0x28, 0x94, 0xb0, 0x69,
0x41, 0xe2, 0x82, 0xa6, 0xf6, 0x74, 0x63, 0xb2, 0x3b, 0x63, 0x66, 0x66, 0x17, 0x2d, 0x55, 0x0f,
0x14, 0xc4, 0x09, 0xa9, 0x42, 0x70, 0xe0, 0x06, 0xf4, 0x8c, 0xb8, 0x70, 0xe7, 0xdc, 0x23, 0x12,
0xf7, 0x08, 0x45, 0x5c, 0xf9, 0x1f, 0xd0, 0x8c, 0xed, 0xf5, 0xb8, 0xd9, 0x75, 0x0a, 0x2c, 0xb7,
0xf1, 0x9b, 0x37, 0xef, 0xfd, 0xde, 0xc7, 0xcc, 0xfb, 0xc9, 0xb0, 0x28, 0xa9, 0xe8, 0x53, 0x11,
0x90, 0x24, 0xe9, 0xc4, 0x21, 0x51, 0x31, 0x67, 0xf6, 0xda, 0x4f, 0x04, 0x57, 0x1c, 0xcf, 0x59,
0x22, 0xf7, 0x58, 0x9b, 0xb7, 0xb9, 0x91, 0x07, 0x7a, 0x95, 0xaa, 0xb8, 0xf3, 0x6d, 0xce, 0xdb,
0x1d, 0x1a, 0x90, 0x24, 0x0e, 0x08, 0x63, 0x5c, 0x19, 0x65, 0x99, 0xed, 0x7a, 0xbb, 0xaf, 0x4a,
0x3f, 0xe6, 0x66, 0x37, 0xe4, 0x82, 0x06, 0xfd, 0xd5, 0xa0, 0x4d, 0x19, 0x15, 0x44, 0xd1, 0x28,
0xd3, 0x79, 0xb9, 0xd0, 0xe9, 0x92, 0x70, 0x27, 0x66, 0x54, 0x0c, 0x82, 0x64, 0xb7, 0xad, 0x05,
0x32, 0xe8, 0x52, 0x45, 0x46, 0x9d, 0xda, 0x68, 0xc7, 0x6a, 0xa7, 0x77, 0xdb, 0x0f, 0x79, 0x37,
0x20, 0xc2, 0x00, 0xfb, 0xc8, 0x2c, 0x2e, 0x84, 0x51, 0x71, 0xda, 0x0e, 0xaf, 0xbf, 0x4a, 0x3a,
0xc9, 0x0e, 0x39, 0x68, 0x6a, 0xbd, 0xca, 0x94, 0xa0, 0x09, 0xcf, 0x72, 0x65, 0x96, 0xb1, 0xe2,
0x62, 0x60, 0x2d, 0x53, 0x1b, 0x1e, 0x83, 0x67, 0x2e, 0x17, 0xbe, 0xde, 0xed, 0x51, 0x31, 0xc0,
0x18, 0x1a, 0x8c, 0x74, 0xa9, 0x83, 0x9a, 0x68, 0x79, 0xb6, 0x65, 0xd6, 0x78, 0x01, 0xa6, 0x05,
0xbd, 0x23, 0xa8, 0xdc, 0x71, 0x6a, 0x4d, 0xb4, 0x3c, 0xb3, 0xde, 0x78, 0xb4, 0x77, 0xea, 0xa9,
0x56, 0x2e, 0xc4, 0x4b, 0x30, 0xad, 0xdd, 0xd3, 0x50, 0x39, 0xf5, 0x66, 0x7d, 0x79, 0x76, 0xfd,
0xe8, 0xfe, 0xde, 0xa9, 0x99, 0xad, 0x54, 0x24, 0x5b, 0xf9, 0xa6, 0xf7, 0x25, 0x82, 0x05, 0xcb,
0x61, 0x8b, 0x4a, 0xde, 0x13, 0x21, 0xbd, 0xda, 0xa7, 0x4c, 0xc9, 0xc7, 0xdd, 0xd7, 0x86, 0xee,
0x97, 0xe1, 0xa8, 0xc8, 0x54, 0x6f, 0xe8, 0xbd, 0x9a, 0xde, 0xcb, 0x30, 0x94, 0x76, 0xf0, 0x12,
0xcc, 0xe5, 0xdf, 0xb7, 0x36, 0xae, 0x38, 0x75, 0x4b, 0xd1, 0xde, 0xf0, 0xb6, 0xc0, 0xb1, 0x70,
0xbc, 0x4d, 0x58, 0x7c, 0x87, 0x4a, 0x35, 0x1e, 0x41, 0x13, 0x66, 0x04, 0xed, 0xc7, 0x32, 0xe6,
0xcc, 0x64, 0x20, 0x37, 0x3a, 0x94, 0x7a, 0xc7, 0xe1, 0xb9, 0x72, 0x64, 0x09, 0x67, 0x92, 0x7a,
0x0f, 0x51, 0xc9, 0xd3, 0x1b, 0x82, 0x12, 0x45, 0x5b, 0xf4, 0xe3, 0x1e, 0x95, 0x0a, 0x33, 0xb0,
0x5b, 0xd5, 0x38, 0x9c, 0x5b, 0x7b, 0xd3, 0x2f, 0x0a, 0xeb, 0xe7, 0x85, 0x35, 0x8b, 0x0f, 0xc3,
0xc8, 0x4f, 0x76, 0xdb, 0xbe, 0xee, 0x11, 0xdf, 0x6e, 0xfb, 0xbc, 0x47, 0x7c, 0xcb, 0x53, 0x1e,
0xb5, 0xa5, 0x87, 0x4f, 0xc0, 0x54, 0x2f, 0x91, 0x54, 0xa8, 0xb4, 0x8a, 0xad, 0xec, 0xcb, 0xfb,
0xa2, 0x0c, 0xf2, 0x56, 0x12, 0x59, 0x20, 0x77, 0xfe, 0x47, 0x90, 0x25, 0x78, 0xde, 0xf5, 0x12,
0x8a, 0x2b, 0xb4, 0x43, 0x0b, 0x14, 0xa3, 0x8a, 0xe2, 0xc0, 0x74, 0x48, 0x64, 0x48, 0x22, 0x9a,
0xc5, 0x93, 0x7f, 0x7a, 0x7f, 0x21, 0x38, 0x61, 0x99, 0xda, 0x1e, 0xb0, 0xb0, 0xca, 0xd0, 0xa1,
0xd5, 0xc5, 0xf3, 0x30, 0x15, 0x89, 0x41, 0xab, 0xc7, 0x9c, 0xba, 0xd5, 0xff, 0x99, 0x0c, 0xbb,
0x70, 0x24, 0x11, 0x3d, 0x46, 0x9d, 0x86, 0xb5, 0x99, 0x8a, 0x70, 0x08, 0x33, 0x52, 0xe9, 0x7b,
0xdb, 0x1e, 0x38, 0x47, 0x9a, 0x68, 0x79, 0x6e, 0xed, 0xda, 0x7f, 0xc8, 0x9d, 0x8e, 0x64, 0x3b,
0x33, 0xd7, 0x1a, 0x1a, 0xf6, 0xbe, 0x43, 0x30, 0x7f, 0xa0, 0x80, 0xdb, 0x09, 0xad, 0x8c, 0x3a,
0x82, 0x86, 0x4c, 0x68, 0x68, 0x6e, 0xd3, 0xdc, 0xda, 0x5b, 0x93, 0xa9, 0xa8, 0x76, 0x9a, 0x25,
0xc0, 0x58, 0xd7, 0x57, 0xde, 0xb5, 0x2b, 0xce, 0x3b, 0x9d, 0xdb, 0x24, 0xdc, 0xad, 0x02, 0xe6,
0x42, 0x2d, 0x8e, 0x0c, 0xac, 0xfa, 0x3a, 0x68, 0x53, 0xfb, 0x7b, 0xa7, 0x6a, 0x1b, 0x57, 0x5a,
0xb5, 0x38, 0xfa, 0xf7, 0x85, 0xf0, 0x7e, 0x46, 0xd0, 0x1c, 0xd1, 0x5e, 0xe9, 0x9b, 0x50, 0x05,
0xe7, 0xc9, 0x5f, 0x9f, 0x35, 0x00, 0x92, 0xc4, 0xef, 0x51, 0x61, 0x3a, 0x29, 0x7d, 0x7c, 0x70,
0x16, 0x00, 0x5c, 0xde, 0xda, 0xc8, 0x76, 0x5a, 0x96, 0x16, 0x76, 0xa0, 0xb1, 0x1b, 0xb3, 0xc8,
0x69, 0x58, 0x56, 0x8d, 0xc4, 0xfb, 0xb1, 0x06, 0xcf, 0x5b, 0x80, 0xb7, 0x78, 0xb4, 0xc9, 0xdb,
0x15, 0xaf, 0xa4, 0x03, 0xd3, 0x09, 0x8f, 0x0a, 0x88, 0xad, 0xfc, 0x13, 0x7b, 0x30, 0x1b, 0x72,
0xa6, 0x88, 0x1e, 0x52, 0xa5, 0x37, 0xb1, 0x10, 0xeb, 0x28, 0x65, 0xcc, 0x42, 0xba, 0x4d, 0x43,
0xce, 0x22, 0x69, 0xf0, 0xd4, 0xf3, 0x28, 0xed, 0x1d, 0x7c, 0x1d, 0x66, 0xcd, 0xf7, 0xcd, 0xb8,
0x4b, 0xb3, 0x96, 0x5e, 0xf1, 0xd3, 0x69, 0xe8, 0xdb, 0xd3, 0xb0, 0x68, 0x1a, 0x3d, 0x0d, 0xfd,
0xfe, 0xaa, 0xaf, 0x4f, 0xb4, 0x8a, 0xc3, 0x1a, 0x97, 0x22, 0x71, 0x67, 0x33, 0x66, 0x54, 0x3a,
0x53, 0x96, 0xc3, 0x42, 0xac, 0x0b, 0x7e, 0x87, 0x77, 0x3a, 0xfc, 0x13, 0x67, 0xba, 0x59, 0x2b,
0x0a, 0x9e, 0xca, 0xbc, 0x4f, 0x61, 0x66, 0x93, 0xb7, 0xaf, 0x32, 0x25, 0x06, 0x7a, 0x48, 0xe9,
0x70, 0x28, 0x53, 0x69, 0x5a, 0xf2, 0x21, 0x95, 0x09, 0xf1, 0x0d, 0x98, 0x55, 0x71, 0x97, 0x6e,
0x2b, 0xd2, 0x4d, 0xb2, 0xa6, 0xff, 0x07, 0xb8, 0x87, 0xc8, 0x72, 0x13, 0x5e, 0x00, 0x2f, 0xbc,
0x93, 0xe8, 0x91, 0x1c, 0x73, 0x76, 0x93, 0x8a, 0x6e, 0xcc, 0x48, 0xe5, 0x7b, 0xe5, 0xcd, 0x83,
0x3b, 0xea, 0x40, 0x3a, 0x29, 0xd6, 0x3e, 0x7f, 0x16, 0xb0, 0x7d, 0x91, 0xa8, 0xe8, 0xc7, 0x21,
0xc5, 0x0f, 0x10, 0x34, 0x36, 0x63, 0xa9, 0xf0, 0xc9, 0xd2, 0xdd, 0x7b, 0x7c, 0x6c, 0xbb, 0x13,
0xba, 0xbf, 0xda, 0x95, 0x37, 0x7f, 0xff, 0xf7, 0x3f, 0xbf, 0xa9, 0x9d, 0xc0, 0xc7, 0x0c, 0x03,
0xea, 0xaf, 0xda, 0x84, 0x44, 0xe2, 0xaf, 0x10, 0x60, 0xad, 0x56, 0x9e, 0xde, 0xf8, 0xfc, 0x38,
0x7c, 0x23, 0xa6, 0xbc, 0x7b, 0xd2, 0x4a, 0xbc, 0xaf, 0x29, 0x96, 0x4e, 0xb3, 0x51, 0x30, 0x00,
0x56, 0x0c, 0x80, 0x45, 0xec, 0x8d, 0x02, 0x10, 0xdc, 0xd5, 0xd9, 0xbc, 0x17, 0xd0, 0xd4, 0xef,
0xf7, 0x08, 0x8e, 0xbc, 0x4f, 0x54, 0xb8, 0x73, 0x58, 0x86, 0xb6, 0x26, 0x93, 0x21, 0xe3, 0xcb,
0x40, 0xf5, 0x4e, 0x1b, 0x98, 0x27, 0xf1, 0x8b, 0x39, 0x4c, 0xa9, 0x04, 0x25, 0xdd, 0x12, 0xda,
0x8b, 0x08, 0x3f, 0x44, 0x30, 0x95, 0x0e, 0x7e, 0x7c, 0x66, 0x1c, 0xc4, 0x12, 0x31, 0x70, 0x27,
0x34, 0x5e, 0xbd, 0x73, 0x06, 0xe0, 0x69, 0x6f, 0x64, 0x21, 0x2f, 0x95, 0xb8, 0xc1, 0xd7, 0x08,
0xea, 0xd7, 0xe8, 0xa1, 0x6d, 0x36, 0x29, 0x64, 0x07, 0x52, 0x37, 0xa2, 0xc2, 0xf8, 0x3e, 0x82,
0xa3, 0xd7, 0xa8, 0xca, 0xe9, 0x99, 0x1c, 0x9f, 0xbe, 0x12, 0x83, 0x73, 0xe7, 0x7d, 0x8b, 0xe9,
0xe6, 0x5b, 0x43, 0x4a, 0x76, 0xc1, 0xb8, 0x3e, 0x8b, 0xcf, 0x54, 0x35, 0x57, 0x77, 0xe8, 0xf3,
0x57, 0x04, 0x53, 0xe9, 0x40, 0x1d, 0xef, 0xbe, 0xc4, 0x98, 0x26, 0x96, 0xa3, 0xab, 0x06, 0xe8,
0xeb, 0xee, 0xc5, 0xd1, 0x40, 0xed, 0xf3, 0xfa, 0xa5, 0x8a, 0x88, 0x22, 0xbe, 0x41, 0x5f, 0xae,
0xec, 0x2f, 0x08, 0xa0, 0x60, 0x04, 0xf8, 0x5c, 0x75, 0x10, 0x16, 0x6b, 0x70, 0x27, 0xc8, 0x09,
0x3c, 0xdf, 0x04, 0xb3, 0xec, 0x36, 0xab, 0xb2, 0xae, 0x19, 0xc3, 0x25, 0xc3, 0x1b, 0x70, 0x1f,
0xa6, 0xd2, 0x11, 0x3d, 0x3e, 0xeb, 0x25, 0x86, 0xe8, 0x36, 0x2b, 0xde, 0x9f, 0xb4, 0xf0, 0x59,
0xcf, 0xad, 0x54, 0xf6, 0xdc, 0x0f, 0x08, 0x1a, 0x9a, 0x65, 0xe1, 0xd3, 0xe3, 0xec, 0x59, 0x6c,
0x72, 0x62, 0xa5, 0x3e, 0x6f, 0xa0, 0x9d, 0xf1, 0xaa, 0xb3, 0x33, 0x60, 0xe1, 0x25, 0xb4, 0x82,
0x7f, 0x42, 0x30, 0x93, 0xf3, 0x28, 0x7c, 0x76, 0x6c, 0xd8, 0x65, 0xa6, 0x35, 0x31, 0xa8, 0x81,
0x81, 0x7a, 0xce, 0x5b, 0xac, 0x82, 0x2a, 0x32, 0xe7, 0x1a, 0xee, 0xb7, 0x08, 0xf0, 0x70, 0xdc,
0x0d, 0x07, 0x20, 0x5e, 0x2a, 0xb9, 0x1a, 0x3b, 0x49, 0xdd, 0xb3, 0x87, 0xea, 0x95, 0xef, 0xf5,
0x4a, 0xe5, 0xbd, 0xe6, 0x43, 0xff, 0x0f, 0x10, 0x3c, 0x5d, 0x26, 0x81, 0xf8, 0xc2, 0x61, 0x9d,
0x56, 0x22, 0x8b, 0x4f, 0xd0, 0x71, 0x2f, 0x19, 0x48, 0x4b, 0x2b, 0xd5, 0xb9, 0xca, 0xdd, 0x7f,
0x86, 0x60, 0x3a, 0x63, 0x79, 0x78, 0x71, 0x9c, 0x6d, 0x9b, 0x06, 0xba, 0xc7, 0x4b, 0x5a, 0x39,
0x13, 0xf2, 0x5e, 0x31, 0x6e, 0x57, 0x71, 0x50, 0xe5, 0x36, 0xe1, 0x91, 0x0c, 0xee, 0x66, 0x14,
0xf1, 0x5e, 0xd0, 0xe1, 0x6d, 0x79, 0x11, 0xad, 0xbf, 0xf6, 0x68, 0x7f, 0x01, 0xfd, 0xb6, 0xbf,
0x80, 0xfe, 0xd8, 0x5f, 0x40, 0x1f, 0xf8, 0x55, 0xff, 0x18, 0x0e, 0xfe, 0x8b, 0xf9, 0x3b, 0x00,
0x00, 0xff, 0xff, 0x23, 0x18, 0x5a, 0x1f, 0xa0, 0x11, 0x00, 0x00,
// 1288 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xb4, 0x58, 0x41, 0x8f, 0x1b, 0x35,
0x14, 0xc6, 0xd9, 0x74, 0x77, 0xe3, 0xed, 0xa1, 0x32, 0x6d, 0x19, 0xa6, 0xe9, 0x36, 0x72, 0xb7,
0x6d, 0x9a, 0xd2, 0x99, 0xee, 0x0a, 0x09, 0x54, 0x21, 0x21, 0x96, 0x96, 0xb6, 0xb0, 0x94, 0x25,
0xdb, 0x0a, 0x89, 0x0b, 0x72, 0x67, 0xdc, 0xc9, 0xb0, 0x89, 0x3d, 0xd8, 0x4e, 0x50, 0xa8, 0x7a,
0xa0, 0x42, 0x5c, 0x40, 0x42, 0x08, 0x0e, 0xdc, 0x80, 0x9e, 0xb9, 0x71, 0xe7, 0xdc, 0x23, 0x12,
0xf7, 0x0a, 0xad, 0xb8, 0xf2, 0x1b, 0x40, 0xf6, 0xcc, 0x64, 0x3c, 0xdd, 0x64, 0xb6, 0x40, 0xb8,
0x79, 0x9e, 0xed, 0xf7, 0x7d, 0x7e, 0xef, 0xd9, 0xdf, 0x4b, 0xe0, 0x9a, 0xa4, 0x62, 0x44, 0x85,
0x4f, 0x92, 0xa4, 0x1f, 0x07, 0x44, 0xc5, 0x9c, 0xd9, 0x63, 0x2f, 0x11, 0x5c, 0x71, 0xb4, 0x62,
0x99, 0xdc, 0xa3, 0x11, 0x8f, 0xb8, 0xb1, 0xfb, 0x7a, 0x94, 0x2e, 0x71, 0x9b, 0x11, 0xe7, 0x51,
0x9f, 0xfa, 0x24, 0x89, 0x7d, 0xc2, 0x18, 0x57, 0x66, 0xb1, 0xcc, 0x66, 0xf1, 0xee, 0xcb, 0xd2,
0x8b, 0xb9, 0x99, 0x0d, 0xb8, 0xa0, 0xfe, 0x68, 0xdd, 0x8f, 0x28, 0xa3, 0x82, 0x28, 0x1a, 0x66,
0x6b, 0x5e, 0x2c, 0xd6, 0x0c, 0x48, 0xd0, 0x8b, 0x19, 0x15, 0x63, 0x3f, 0xd9, 0x8d, 0xb4, 0x41,
0xfa, 0x03, 0xaa, 0xc8, 0xb4, 0x5d, 0x37, 0xa2, 0x58, 0xf5, 0x86, 0x77, 0xbc, 0x80, 0x0f, 0x7c,
0x22, 0x0c, 0xb1, 0x0f, 0xcd, 0xe0, 0x62, 0x10, 0x16, 0xbb, 0xed, 0xe3, 0x8d, 0xd6, 0x49, 0x3f,
0xe9, 0x91, 0xfd, 0xae, 0x36, 0xab, 0x5c, 0x09, 0x9a, 0xf0, 0x2c, 0x56, 0x66, 0x18, 0x2b, 0x2e,
0xc6, 0xd6, 0x30, 0xf5, 0x81, 0x19, 0x3c, 0xf2, 0x5a, 0x81, 0xf5, 0xee, 0x90, 0x8a, 0x31, 0x42,
0xb0, 0xce, 0xc8, 0x80, 0x3a, 0xa0, 0x05, 0xda, 0x8d, 0xae, 0x19, 0xa3, 0x55, 0xb8, 0x24, 0xe8,
0x5d, 0x41, 0x65, 0xcf, 0xa9, 0xb5, 0x40, 0x7b, 0x79, 0xb3, 0xfe, 0xe8, 0xf1, 0xa9, 0x67, 0xba,
0xb9, 0x11, 0x9d, 0x85, 0x4b, 0x1a, 0x9e, 0x06, 0xca, 0x59, 0x68, 0x2d, 0xb4, 0x1b, 0x9b, 0x87,
0xf7, 0x1e, 0x9f, 0x5a, 0xde, 0x4e, 0x4d, 0xb2, 0x9b, 0x4f, 0xe2, 0xcf, 0x01, 0x5c, 0xb5, 0x00,
0xbb, 0x54, 0xf2, 0xa1, 0x08, 0xe8, 0xd5, 0x11, 0x65, 0x4a, 0x3e, 0x09, 0x5f, 0x9b, 0xc0, 0xb7,
0xe1, 0x61, 0x91, 0x2d, 0xbd, 0xa9, 0xe7, 0x6a, 0x7a, 0x2e, 0xe3, 0x50, 0x9a, 0x41, 0x67, 0xe1,
0x4a, 0xfe, 0x7d, 0xfb, 0xc6, 0x15, 0x67, 0xc1, 0x5a, 0x68, 0x4f, 0xe0, 0x6d, 0xe8, 0x58, 0x3c,
0xde, 0x26, 0x2c, 0xbe, 0x4b, 0xa5, 0x9a, 0xcd, 0xa0, 0x05, 0x97, 0x05, 0x1d, 0xc5, 0x32, 0xe6,
0xcc, 0x44, 0x20, 0x77, 0x3a, 0xb1, 0xe2, 0x63, 0xf0, 0xd9, 0xf2, 0xc9, 0x12, 0xce, 0x24, 0xc5,
0x0f, 0x41, 0x09, 0xe9, 0x75, 0x41, 0x89, 0xa2, 0x5d, 0xfa, 0xd1, 0x90, 0x4a, 0x85, 0x18, 0xb4,
0x4b, 0xd5, 0x00, 0xae, 0x6c, 0xbc, 0xe1, 0x15, 0x89, 0xf5, 0xf2, 0xc4, 0x9a, 0xc1, 0x07, 0x41,
0xe8, 0x25, 0xbb, 0x91, 0xa7, 0x6b, 0xc4, 0xb3, 0xcb, 0x3e, 0xaf, 0x11, 0xcf, 0x42, 0xca, 0x4f,
0x6d, 0xad, 0x43, 0xc7, 0xe1, 0xe2, 0x30, 0x91, 0x54, 0xa8, 0x34, 0x8b, 0xdd, 0xec, 0x0b, 0x7f,
0x56, 0x26, 0x79, 0x3b, 0x09, 0x2d, 0x92, 0xbd, 0xff, 0x91, 0x64, 0x89, 0x1e, 0xbe, 0x5e, 0x62,
0x71, 0x85, 0xf6, 0x69, 0xc1, 0x62, 0x5a, 0x52, 0x1c, 0xb8, 0x14, 0x10, 0x19, 0x90, 0x90, 0x66,
0xe7, 0xc9, 0x3f, 0xf1, 0x9f, 0x00, 0x1e, 0xb7, 0x5c, 0xed, 0x8c, 0x59, 0x50, 0xe5, 0xe8, 0xc0,
0xec, 0xa2, 0x26, 0x5c, 0x0c, 0xc5, 0xb8, 0x3b, 0x64, 0xce, 0x82, 0x55, 0xff, 0x99, 0x0d, 0xb9,
0xf0, 0x50, 0x22, 0x86, 0x8c, 0x3a, 0x75, 0x6b, 0x32, 0x35, 0xa1, 0x00, 0x2e, 0x4b, 0xa5, 0xef,
0x6d, 0x34, 0x76, 0x0e, 0xb5, 0x40, 0x7b, 0x65, 0xe3, 0xda, 0x7f, 0x88, 0x9d, 0x3e, 0xc9, 0x4e,
0xe6, 0xae, 0x3b, 0x71, 0x8c, 0xbf, 0x03, 0xb0, 0xb9, 0x2f, 0x81, 0x3b, 0x09, 0xad, 0x3c, 0x75,
0x08, 0xeb, 0x32, 0xa1, 0x81, 0xb9, 0x4d, 0x2b, 0x1b, 0x6f, 0xce, 0x27, 0xa3, 0x1a, 0x34, 0x0b,
0x80, 0xf1, 0xae, 0xaf, 0xbc, 0x6b, 0x67, 0x9c, 0xf7, 0xfb, 0x77, 0x48, 0xb0, 0x5b, 0x45, 0xcc,
0x85, 0xb5, 0x38, 0x34, 0xb4, 0x16, 0x36, 0xa1, 0x76, 0xb5, 0xf7, 0xf8, 0x54, 0xed, 0xc6, 0x95,
0x6e, 0x2d, 0x0e, 0xff, 0x7d, 0x22, 0xf0, 0x5b, 0xf0, 0xc4, 0xbe, 0xea, 0xda, 0xe6, 0xe1, 0x01,
0x05, 0x96, 0xf0, 0xb0, 0x78, 0x72, 0xba, 0xf9, 0x27, 0xfe, 0xb1, 0x06, 0x9f, 0xb3, 0xbc, 0x6d,
0xf3, 0x70, 0x8b, 0x47, 0x15, 0x2f, 0xd8, 0x4c, 0x4f, 0x08, 0xc3, 0x46, 0xc0, 0x99, 0x22, 0x5a,
0x40, 0x4a, 0xef, 0x55, 0x61, 0xd6, 0xef, 0x9f, 0x8c, 0x59, 0x40, 0x77, 0x68, 0xc0, 0x59, 0x28,
0x9d, 0xba, 0x09, 0x4d, 0xf6, 0xfe, 0xd9, 0x33, 0xe8, 0x3a, 0x6c, 0x98, 0xef, 0x5b, 0xf1, 0x80,
0x66, 0xe5, 0xd6, 0xf1, 0x52, 0xa5, 0xf2, 0x6c, 0xa5, 0x2a, 0x12, 0xaa, 0x95, 0xca, 0x1b, 0xad,
0x7b, 0x7a, 0x47, 0xb7, 0xd8, 0xac, 0x79, 0x29, 0x12, 0xf7, 0xb7, 0x62, 0x46, 0xa5, 0xb3, 0x68,
0x01, 0x16, 0x66, 0x9d, 0x8c, 0xbb, 0xbc, 0xdf, 0xe7, 0x1f, 0x3b, 0x4b, 0xad, 0x5a, 0x91, 0x8c,
0xd4, 0x86, 0x3f, 0x81, 0xcb, 0x5b, 0x3c, 0xba, 0xca, 0x94, 0x18, 0x6b, 0x01, 0xd1, 0xc7, 0xa1,
0x4c, 0xa5, 0x61, 0xc9, 0x05, 0x24, 0x33, 0xa2, 0x9b, 0xb0, 0xa1, 0xe2, 0x01, 0xdd, 0x51, 0x64,
0x90, 0x64, 0x05, 0xf9, 0x0f, 0x78, 0x4f, 0x98, 0xe5, 0x2e, 0xb0, 0x0f, 0x9f, 0x7f, 0x27, 0xd1,
0x72, 0x19, 0x73, 0x76, 0x8b, 0x8a, 0x41, 0xcc, 0x48, 0xe5, 0x5b, 0x82, 0x9b, 0xd0, 0x9d, 0xb6,
0x21, 0x7d, 0xc5, 0x37, 0xfe, 0x3a, 0x02, 0x91, 0x5d, 0xe4, 0x54, 0x8c, 0xe2, 0x80, 0xa2, 0xaf,
0x00, 0xac, 0x6f, 0xc5, 0x52, 0xa1, 0x93, 0xa5, 0x7b, 0xf1, 0xa4, 0xa4, 0xba, 0x73, 0xba, 0x5b,
0x1a, 0x0a, 0x37, 0x1f, 0xfc, 0xf6, 0xc7, 0x37, 0xb5, 0xe3, 0xe8, 0xa8, 0xe9, 0x4e, 0x46, 0xeb,
0x76, 0xb3, 0x20, 0xd1, 0x97, 0x00, 0x22, 0xbd, 0xac, 0xac, 0xac, 0xe8, 0xc2, 0x2c, 0x7e, 0x53,
0x14, 0xd8, 0x3d, 0x69, 0x05, 0xde, 0xd3, 0xed, 0x8f, 0x0e, 0xb3, 0x59, 0x60, 0x08, 0x74, 0x0c,
0x81, 0x35, 0x84, 0xa7, 0x11, 0xf0, 0xef, 0xe9, 0x68, 0xde, 0xf7, 0x69, 0x8a, 0xfb, 0x3d, 0x80,
0x87, 0xde, 0x23, 0x2a, 0xe8, 0x1d, 0x14, 0xa1, 0xed, 0xf9, 0x44, 0xc8, 0x60, 0x19, 0xaa, 0xf8,
0xb4, 0xa1, 0x79, 0x12, 0x9d, 0xc8, 0x69, 0x4a, 0x25, 0x28, 0x19, 0x94, 0xd8, 0x5e, 0x02, 0xe8,
0x21, 0x80, 0x8b, 0xa9, 0x28, 0xa3, 0x33, 0xb3, 0x28, 0x96, 0x44, 0xdb, 0x9d, 0x93, 0xf4, 0xe1,
0xf3, 0x86, 0xe0, 0x69, 0x3c, 0x35, 0x91, 0x97, 0x4b, 0xba, 0xfd, 0x35, 0x80, 0x0b, 0xd7, 0xe8,
0x81, 0x65, 0x36, 0x2f, 0x66, 0xfb, 0x42, 0x37, 0x25, 0xc3, 0xe8, 0x01, 0x80, 0x87, 0xaf, 0x51,
0x95, 0xb7, 0x4e, 0x72, 0x76, 0xf8, 0x4a, 0xdd, 0x95, 0xdb, 0xf4, 0xac, 0x2e, 0x34, 0x9f, 0x9a,
0xb4, 0x4b, 0x17, 0x0d, 0xf4, 0x39, 0x74, 0xa6, 0xaa, 0xb8, 0x06, 0x13, 0xcc, 0x5f, 0x00, 0x5c,
0x4c, 0xc5, 0x6e, 0x36, 0x7c, 0xa9, 0x9b, 0x99, 0x5b, 0x8c, 0xae, 0x1a, 0xa2, 0xaf, 0xba, 0x97,
0xa6, 0x13, 0xb5, 0xf7, 0xeb, 0x97, 0x2a, 0x24, 0x8a, 0x78, 0x86, 0x7d, 0x39, 0xb3, 0x3f, 0x03,
0x08, 0x0b, 0xb5, 0x46, 0xe7, 0xab, 0x0f, 0x61, 0x29, 0xba, 0x3b, 0x47, 0xbd, 0xc6, 0x9e, 0x39,
0x4c, 0xdb, 0x6d, 0x55, 0x45, 0x5d, 0xab, 0xf9, 0x65, 0xa3, 0xe9, 0x68, 0x04, 0x17, 0x53, 0xfd,
0x9c, 0x1d, 0xf5, 0x52, 0xf7, 0xe6, 0xb6, 0x2a, 0xde, 0x9f, 0x34, 0xf1, 0x59, 0xcd, 0x75, 0x2a,
0x6b, 0xee, 0x07, 0x00, 0xeb, 0xba, 0x03, 0x42, 0xa7, 0x67, 0xf9, 0xb3, 0x3a, 0xbd, 0xb9, 0xa5,
0xfa, 0x82, 0xa1, 0x76, 0x06, 0x57, 0x47, 0x67, 0xcc, 0x82, 0xcb, 0xa0, 0x83, 0x7e, 0x02, 0x70,
0x39, 0xef, 0x71, 0xd0, 0xb9, 0x99, 0xc7, 0x2e, 0x77, 0x41, 0x73, 0xa3, 0xea, 0x1b, 0xaa, 0xe7,
0xf1, 0x5a, 0x15, 0x55, 0x91, 0x81, 0x6b, 0xba, 0xdf, 0x02, 0x88, 0x26, 0x72, 0x37, 0x11, 0x40,
0x74, 0xb6, 0x04, 0x35, 0x53, 0x49, 0xdd, 0x73, 0x07, 0xae, 0x2b, 0xdf, 0xeb, 0x4e, 0xe5, 0xbd,
0xe6, 0x13, 0xfc, 0x2f, 0x00, 0x6c, 0x4c, 0x3a, 0x34, 0xd4, 0xae, 0x2e, 0xb2, 0xa2, 0x89, 0x7b,
0x8a, 0x3a, 0xdb, 0x30, 0x44, 0x5e, 0xe8, 0x74, 0xaa, 0x88, 0x24, 0x3c, 0x94, 0xfe, 0xbd, 0xac,
0x43, 0xbb, 0x8f, 0x3e, 0x05, 0x70, 0x29, 0xeb, 0xf0, 0xd0, 0xda, 0x2c, 0x04, 0xbb, 0x05, 0x74,
0x8f, 0x95, 0x56, 0xe5, 0x5d, 0x10, 0x7e, 0xc9, 0x80, 0xaf, 0x23, 0xff, 0xe9, 0xc1, 0xfd, 0x3e,
0x8f, 0xe4, 0x25, 0xb0, 0xf9, 0xca, 0xa3, 0xbd, 0x55, 0xf0, 0xeb, 0xde, 0x2a, 0xf8, 0x7d, 0x6f,
0x15, 0xbc, 0xef, 0x55, 0xfd, 0xf6, 0xdf, 0xff, 0x1f, 0xc9, 0xdf, 0x01, 0x00, 0x00, 0xff, 0xff,
0x6a, 0x89, 0x9d, 0x9f, 0x38, 0x11, 0x00, 0x00,
}


@@ -382,12 +382,8 @@ func request_ApplicationService_TerminateOperation_0(ctx context.Context, marsha
}
var (
filter_ApplicationService_DeleteResource_0 = &utilities.DoubleArray{Encoding: map[string]int{"name": 0}, Base: []int{1, 1, 0}, Check: []int{0, 1, 2}}
)
func request_ApplicationService_DeleteResource_0(ctx context.Context, marshaler runtime.Marshaler, client ApplicationServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq ApplicationDeleteResourceRequest
func request_ApplicationService_DeletePod_0(ctx context.Context, marshaler runtime.Marshaler, client ApplicationServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq ApplicationDeletePodRequest
var metadata runtime.ServerMetadata
var (
@@ -408,11 +404,18 @@ func request_ApplicationService_DeleteResource_0(ctx context.Context, marshaler
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "name", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.URL.Query(), filter_ApplicationService_DeleteResource_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
val, ok = pathParams["podName"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "podName")
}
msg, err := client.DeleteResource(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
protoReq.PodName, err = runtime.StringP(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "podName", err)
}
msg, err := client.DeletePod(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
@@ -857,7 +860,7 @@ func RegisterApplicationServiceHandlerClient(ctx context.Context, mux *runtime.S
})
mux.Handle("DELETE", pattern_ApplicationService_DeleteResource_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("DELETE", pattern_ApplicationService_DeletePod_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
if cn, ok := w.(http.CloseNotifier); ok {
@@ -875,14 +878,14 @@ func RegisterApplicationServiceHandlerClient(ctx context.Context, mux *runtime.S
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_ApplicationService_DeleteResource_0(rctx, inboundMarshaler, client, req, pathParams)
resp, md, err := request_ApplicationService_DeletePod_0(rctx, inboundMarshaler, client, req, pathParams)
ctx = runtime.NewServerMetadataContext(ctx, md)
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
forward_ApplicationService_DeleteResource_0(ctx, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
forward_ApplicationService_DeletePod_0(ctx, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
@@ -943,7 +946,7 @@ var (
pattern_ApplicationService_TerminateOperation_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4}, []string{"api", "v1", "applications", "name", "operation"}, ""))
pattern_ApplicationService_DeleteResource_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4}, []string{"api", "v1", "applications", "name", "resource"}, ""))
pattern_ApplicationService_DeletePod_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4, 1, 0, 4, 1, 5, 5}, []string{"api", "v1", "applications", "name", "pods", "podName"}, ""))
pattern_ApplicationService_PodLogs_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4, 1, 0, 4, 1, 5, 5, 2, 6}, []string{"api", "v1", "applications", "name", "pods", "podName", "logs"}, ""))
)
@@ -973,7 +976,7 @@ var (
forward_ApplicationService_TerminateOperation_0 = runtime.ForwardResponseMessage
forward_ApplicationService_DeleteResource_0 = runtime.ForwardResponseMessage
forward_ApplicationService_DeletePod_0 = runtime.ForwardResponseMessage
forward_ApplicationService_PodLogs_0 = runtime.ForwardResponseStream
)
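The integer sequences passed to `runtime.NewPattern` above are grpc-gateway's compiled form of each HTTP path template: opcode/operand pairs executed over a pool of literal strings. As a rough illustration (the opcode values are assumed from grpc-gateway's `utilities` package, not taken from this diff), a small decoder can render them back into the path template:

```go
package main

import (
	"fmt"
	"strings"
)

// Opcode values as used by runtime.NewPattern (assumed from the
// grpc-gateway "utilities" package; verify against the vendored copy).
const (
	opNop     = 0 // does nothing
	opPush    = 1 // pushes one path segment (a wildcard match)
	opLitPush = 2 // pushes a literal segment from the pool
	opPushM   = 3 // pushes the rest of the path (deep wildcard)
	opConcatN = 4 // concatenates the top N segments
	opCapture = 5 // captures the top segment into the named variable
)

// decodePattern renders the opcode/operand pairs back into the
// HTTP path template they were compiled from.
func decodePattern(ops []int, pool []string) string {
	var segs []string
	for i := 0; i+1 < len(ops); i += 2 {
		op, operand := ops[i], ops[i+1]
		switch op {
		case opLitPush:
			segs = append(segs, pool[operand])
		case opPush:
			segs = append(segs, "*")
		case opPushM:
			segs = append(segs, "**")
		case opCapture:
			// replace the segment just pushed with a named capture
			segs[len(segs)-1] = "{" + pool[operand] + "}"
		case opConcatN, opNop:
			// no effect on the rendered template
		}
	}
	return "/" + strings.Join(segs, "/")
}

func main() {
	ops := []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4, 1, 0, 4, 1, 5, 5}
	pool := []string{"api", "v1", "applications", "name", "pods", "podName"}
	fmt.Println(decodePattern(ops, pool)) // /api/v1/applications/{name}/pods/{podName}
}
```

Running it on the `pattern_ApplicationService_DeletePod_0` operands above recovers the route registered in the proto's `google.api.http` option.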

View File

@@ -72,11 +72,9 @@ message ApplicationRollbackRequest {
optional bool prune = 4 [(gogoproto.nullable) = false];
}
message ApplicationDeleteResourceRequest {
message ApplicationDeletePodRequest {
required string name = 1;
required string resourceName = 2 [(gogoproto.nullable) = false];
required string apiVersion = 3 [(gogoproto.customname) = "APIVersion", (gogoproto.nullable) = false];
required string kind = 4 [(gogoproto.nullable) = false];
required string podName = 2;
}
message ApplicationPodLogsQuery {
@@ -181,9 +179,9 @@ service ApplicationService {
};
}
// DeleteResource deletes a single application resource
rpc DeleteResource(ApplicationDeleteResourceRequest) returns (ApplicationResponse) {
option (google.api.http).delete = "/api/v1/applications/{name}/resource";
// DeletePod deletes a pod
rpc DeletePod(ApplicationDeletePodRequest) returns (ApplicationResponse) {
option (google.api.http).delete = "/api/v1/applications/{name}/pods/{podName}";
}
// PodLogs returns stream of log entries for the specified pod. Pod

View File

@@ -6,7 +6,6 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"golang.org/x/net/context"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes/fake"
"github.com/argoproj/argo-cd/common"
@@ -65,23 +64,13 @@ version: 0.0.1
}
}
func fakeListDirResponse() *repository.FileList {
return &repository.FileList{
Items: []string{
"some/path/app.yaml",
},
}
}
// newTestAppServer returns an ApplicationServiceServer which returns fake data
func newTestAppServer() ApplicationServiceServer {
kubeclientset := fake.NewSimpleClientset()
enforcer := rbac.NewEnforcer(kubeclientset, testNamespace, common.ArgoCDRBACConfigMapName, nil)
enforcer.SetBuiltinPolicy(test.BuiltinPolicy)
enforcer.SetDefaultRole("role:admin")
enforcer.SetClaimsEnforcerFunc(func(rvals ...interface{}) bool {
return true
})
db := db.NewDB(testNamespace, kubeclientset)
ctx := context.Background()
_, err := db.CreateRepository(ctx, fakeRepo())
@@ -91,23 +80,14 @@ func newTestAppServer() ApplicationServiceServer {
mockRepoServiceClient := mockreposerver.RepositoryServiceClient{}
mockRepoServiceClient.On("GetFile", mock.Anything, mock.Anything).Return(fakeFileResponse(), nil)
mockRepoServiceClient.On("ListDir", mock.Anything, mock.Anything).Return(fakeListDirResponse(), nil)
mockRepoClient := &mockrepo.Clientset{}
mockRepoClient.On("NewRepositoryClient").Return(&fakeCloser{}, &mockRepoServiceClient, nil)
defaultProj := &appsv1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: "default"},
Spec: appsv1.AppProjectSpec{
SourceRepos: []string{"*"},
Destinations: []appsv1.ApplicationDestination{{Server: "*", Namespace: "*"}},
},
}
return NewServer(
testNamespace,
kubeclientset,
apps.NewSimpleClientset(defaultProj),
apps.NewSimpleClientset(),
mockRepoClient,
db,
enforcer,
@@ -122,7 +102,7 @@ func TestCreateApp(t *testing.T) {
Spec: appsv1.ApplicationSpec{
Source: appsv1.ApplicationSource{
RepoURL: fakeRepoURL,
Path: "some/path",
Path: ".",
Environment: "default",
TargetRevision: "HEAD",
},

View File

@@ -6,10 +6,7 @@ import (
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/common"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/grpc"
@@ -79,53 +76,6 @@ func (s *Server) Create(ctx context.Context, q *ClusterCreateRequest) (*appv1.Cl
return redact(clust), err
}
// CreateFromKubeConfig installs the argocd-manager service account into the cluster specified in the given kubeconfig and context
func (s *Server) CreateFromKubeConfig(ctx context.Context, q *ClusterCreateFromKubeConfigRequest) (*appv1.Cluster, error) {
kubeconfig, err := clientcmd.Load([]byte(q.Kubeconfig))
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not unmarshal kubeconfig: %v", err)
}
var clusterServer string
var clusterInsecure bool
if q.InCluster {
clusterServer = common.KubernetesInternalAPIServerAddr
} else if cluster, ok := kubeconfig.Clusters[q.Context]; ok {
clusterServer = cluster.Server
clusterInsecure = cluster.InsecureSkipTLSVerify
} else {
return nil, status.Errorf(codes.Internal, "Context %s does not exist in kubeconfig", q.Context)
}
c := &appv1.Cluster{
Server: clusterServer,
Name: q.Context,
Config: appv1.ClusterConfig{
TLSClientConfig: appv1.TLSClientConfig{
Insecure: clusterInsecure,
},
},
}
// Temporarily install RBAC resources for managing the cluster
clientset, err := kubernetes.NewForConfig(c.RESTConfig())
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not create Kubernetes clientset: %v", err)
}
bearerToken, err := common.InstallClusterManagerRBAC(clientset)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not install cluster manager RBAC: %v", err)
}
c.Config.BearerToken = bearerToken
return s.Create(ctx, &ClusterCreateRequest{
Cluster: c,
Upsert: q.Upsert,
})
}
// Get returns a cluster from a query
func (s *Server) Get(ctx context.Context, q *ClusterQuery) (*appv1.Cluster, error) {
if !s.enf.EnforceClaims(ctx.Value("claims"), "clusters", "get", q.Server) {

View File

@@ -15,7 +15,6 @@
ClusterQuery
ClusterResponse
ClusterCreateRequest
ClusterCreateFromKubeConfigRequest
ClusterUpdateRequest
*/
package cluster
@@ -93,48 +92,6 @@ func (m *ClusterCreateRequest) GetUpsert() bool {
return false
}
type ClusterCreateFromKubeConfigRequest struct {
Kubeconfig string `protobuf:"bytes,1,opt,name=kubeconfig,proto3" json:"kubeconfig,omitempty"`
Context string `protobuf:"bytes,2,opt,name=context,proto3" json:"context,omitempty"`
Upsert bool `protobuf:"varint,3,opt,name=upsert,proto3" json:"upsert,omitempty"`
InCluster bool `protobuf:"varint,4,opt,name=inCluster,proto3" json:"inCluster,omitempty"`
}
func (m *ClusterCreateFromKubeConfigRequest) Reset() { *m = ClusterCreateFromKubeConfigRequest{} }
func (m *ClusterCreateFromKubeConfigRequest) String() string { return proto.CompactTextString(m) }
func (*ClusterCreateFromKubeConfigRequest) ProtoMessage() {}
func (*ClusterCreateFromKubeConfigRequest) Descriptor() ([]byte, []int) {
return fileDescriptorCluster, []int{3}
}
func (m *ClusterCreateFromKubeConfigRequest) GetKubeconfig() string {
if m != nil {
return m.Kubeconfig
}
return ""
}
func (m *ClusterCreateFromKubeConfigRequest) GetContext() string {
if m != nil {
return m.Context
}
return ""
}
func (m *ClusterCreateFromKubeConfigRequest) GetUpsert() bool {
if m != nil {
return m.Upsert
}
return false
}
func (m *ClusterCreateFromKubeConfigRequest) GetInCluster() bool {
if m != nil {
return m.InCluster
}
return false
}
type ClusterUpdateRequest struct {
Cluster *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster `protobuf:"bytes,1,opt,name=cluster" json:"cluster,omitempty"`
}
@@ -142,7 +99,7 @@ type ClusterUpdateRequest struct {
func (m *ClusterUpdateRequest) Reset() { *m = ClusterUpdateRequest{} }
func (m *ClusterUpdateRequest) String() string { return proto.CompactTextString(m) }
func (*ClusterUpdateRequest) ProtoMessage() {}
func (*ClusterUpdateRequest) Descriptor() ([]byte, []int) { return fileDescriptorCluster, []int{4} }
func (*ClusterUpdateRequest) Descriptor() ([]byte, []int) { return fileDescriptorCluster, []int{3} }
func (m *ClusterUpdateRequest) GetCluster() *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster {
if m != nil {
@@ -155,7 +112,6 @@ func init() {
proto.RegisterType((*ClusterQuery)(nil), "cluster.ClusterQuery")
proto.RegisterType((*ClusterResponse)(nil), "cluster.ClusterResponse")
proto.RegisterType((*ClusterCreateRequest)(nil), "cluster.ClusterCreateRequest")
proto.RegisterType((*ClusterCreateFromKubeConfigRequest)(nil), "cluster.ClusterCreateFromKubeConfigRequest")
proto.RegisterType((*ClusterUpdateRequest)(nil), "cluster.ClusterUpdateRequest")
}
@@ -174,8 +130,6 @@ type ClusterServiceClient interface {
List(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ClusterList, error)
// Create creates a cluster
Create(ctx context.Context, in *ClusterCreateRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
// CreateFromKubeConfig installs the argocd-manager service account into the cluster specified in the given kubeconfig and context
CreateFromKubeConfig(ctx context.Context, in *ClusterCreateFromKubeConfigRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
// Get returns a cluster by server address
Get(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
// Update updates a cluster
@@ -210,15 +164,6 @@ func (c *clusterServiceClient) Create(ctx context.Context, in *ClusterCreateRequ
return out, nil
}
func (c *clusterServiceClient) CreateFromKubeConfig(ctx context.Context, in *ClusterCreateFromKubeConfigRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error) {
out := new(github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster)
err := grpc.Invoke(ctx, "/cluster.ClusterService/CreateFromKubeConfig", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *clusterServiceClient) Get(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error) {
out := new(github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster)
err := grpc.Invoke(ctx, "/cluster.ClusterService/Get", in, out, c.cc, opts...)
@@ -253,8 +198,6 @@ type ClusterServiceServer interface {
List(context.Context, *ClusterQuery) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ClusterList, error)
// Create creates a cluster
Create(context.Context, *ClusterCreateRequest) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
// CreateFromKubeConfig installs the argocd-manager service account into the cluster specified in the given kubeconfig and context
CreateFromKubeConfig(context.Context, *ClusterCreateFromKubeConfigRequest) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
// Get returns a cluster by server address
Get(context.Context, *ClusterQuery) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
// Update updates a cluster
@@ -303,24 +246,6 @@ func _ClusterService_Create_Handler(srv interface{}, ctx context.Context, dec fu
return interceptor(ctx, in, info, handler)
}
func _ClusterService_CreateFromKubeConfig_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ClusterCreateFromKubeConfigRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ClusterServiceServer).CreateFromKubeConfig(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/cluster.ClusterService/CreateFromKubeConfig",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ClusterServiceServer).CreateFromKubeConfig(ctx, req.(*ClusterCreateFromKubeConfigRequest))
}
return interceptor(ctx, in, info, handler)
}
func _ClusterService_Get_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ClusterQuery)
if err := dec(in); err != nil {
@@ -387,10 +312,6 @@ var _ClusterService_serviceDesc = grpc.ServiceDesc{
MethodName: "Create",
Handler: _ClusterService_Create_Handler,
},
{
MethodName: "CreateFromKubeConfig",
Handler: _ClusterService_CreateFromKubeConfig_Handler,
},
{
MethodName: "Get",
Handler: _ClusterService_Get_Handler,
@@ -488,56 +409,6 @@ func (m *ClusterCreateRequest) MarshalTo(dAtA []byte) (int, error) {
return i, nil
}
func (m *ClusterCreateFromKubeConfigRequest) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *ClusterCreateFromKubeConfigRequest) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.Kubeconfig) > 0 {
dAtA[i] = 0xa
i++
i = encodeVarintCluster(dAtA, i, uint64(len(m.Kubeconfig)))
i += copy(dAtA[i:], m.Kubeconfig)
}
if len(m.Context) > 0 {
dAtA[i] = 0x12
i++
i = encodeVarintCluster(dAtA, i, uint64(len(m.Context)))
i += copy(dAtA[i:], m.Context)
}
if m.Upsert {
dAtA[i] = 0x18
i++
if m.Upsert {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i++
}
if m.InCluster {
dAtA[i] = 0x20
i++
if m.InCluster {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i++
}
return i, nil
}
func (m *ClusterUpdateRequest) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -604,26 +475,6 @@ func (m *ClusterCreateRequest) Size() (n int) {
return n
}
func (m *ClusterCreateFromKubeConfigRequest) Size() (n int) {
var l int
_ = l
l = len(m.Kubeconfig)
if l > 0 {
n += 1 + l + sovCluster(uint64(l))
}
l = len(m.Context)
if l > 0 {
n += 1 + l + sovCluster(uint64(l))
}
if m.Upsert {
n += 2
}
if m.InCluster {
n += 2
}
return n
}
func (m *ClusterUpdateRequest) Size() (n int) {
var l int
_ = l
@@ -879,154 +730,6 @@ func (m *ClusterCreateRequest) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *ClusterCreateFromKubeConfigRequest) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowCluster
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ClusterCreateFromKubeConfigRequest: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ClusterCreateFromKubeConfigRequest: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Kubeconfig", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowCluster
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthCluster
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Kubeconfig = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Context", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowCluster
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthCluster
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Context = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Upsert", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowCluster
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
m.Upsert = bool(v != 0)
case 4:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field InCluster", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowCluster
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
m.InCluster = bool(v != 0)
default:
iNdEx = preIndex
skippy, err := skipCluster(dAtA[iNdEx:])
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthCluster
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
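The generated Marshal/Unmarshal code above is hand-rolled protobuf wire format: each field starts with a varint key `(fieldNumber << 3) | wireType`, which is why `Kubeconfig` (field 1, length-delimited) is written as `0xa` and `Upsert` (field 3, varint) as `0x18`, followed by the payload. A minimal standalone sketch of the same varint mechanics (`appendVarint`, `readVarint`, and `tag` are illustrative names, not the generated identifiers):

```go
package main

import "fmt"

// appendVarint encodes v in protobuf base-128 varint form, low bits
// first, with the high bit of each byte marking continuation. This is
// the loop the generated encodeVarintCluster helper performs.
func appendVarint(b []byte, v uint64) []byte {
	for v >= 0x80 {
		b = append(b, byte(v)|0x80)
		v >>= 7
	}
	return append(b, byte(v))
}

// readVarint mirrors the shift-accumulate loop in the generated
// Unmarshal methods; it returns the value and the byte count consumed.
func readVarint(b []byte) (uint64, int) {
	var v uint64
	for i, c := range b {
		v |= uint64(c&0x7F) << (7 * uint(i))
		if c < 0x80 {
			return v, i + 1
		}
	}
	return 0, 0 // truncated input
}

// tag builds the key byte(s) for a field: (fieldNumber << 3) | wireType.
func tag(fieldNumber, wireType int) []byte {
	return appendVarint(nil, uint64(fieldNumber<<3|wireType))
}

func main() {
	fmt.Printf("%#x\n", tag(1, 2)) // 0x0a: Kubeconfig, length-delimited
	fmt.Printf("%#x\n", tag(3, 0)) // 0x18: Upsert, varint
	v, n := readVarint(appendVarint(nil, 300))
	fmt.Println(v, n) // 300 2
}
```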
func (m *ClusterUpdateRequest) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
@@ -1218,41 +921,35 @@ var (
func init() { proto.RegisterFile("server/cluster/cluster.proto", fileDescriptorCluster) }
var fileDescriptorCluster = []byte{
// 564 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x95, 0xcd, 0x6e, 0x13, 0x31,
0x10, 0xc7, 0xe5, 0xb6, 0xda, 0x12, 0x83, 0xf8, 0xb0, 0x0a, 0x5a, 0xd2, 0x10, 0xa5, 0x3e, 0x54,
0x55, 0xa0, 0xb6, 0x12, 0x2e, 0x55, 0x8f, 0x0d, 0x2a, 0x42, 0x70, 0x21, 0x88, 0x0b, 0xaa, 0x84,
0x36, 0x9b, 0x61, 0xbb, 0x24, 0x5d, 0x2f, 0xb6, 0x37, 0x02, 0x21, 0x84, 0x04, 0x57, 0xc4, 0x05,
0xee, 0x3c, 0x02, 0xaf, 0xc1, 0x11, 0x89, 0x1b, 0x27, 0x14, 0xf1, 0x20, 0x68, 0xbd, 0x76, 0xbe,
0xc3, 0x85, 0x88, 0x53, 0xec, 0x19, 0x67, 0xe6, 0x37, 0x33, 0xff, 0x4c, 0x70, 0x45, 0x81, 0x1c,
0x80, 0xe4, 0x61, 0x3f, 0x53, 0x7a, 0xfc, 0xc9, 0x52, 0x29, 0xb4, 0x20, 0x9b, 0xf6, 0x5a, 0xde,
0x8a, 0x44, 0x24, 0x8c, 0x8d, 0xe7, 0xa7, 0xc2, 0x5d, 0xae, 0x44, 0x42, 0x44, 0x7d, 0xe0, 0x41,
0x1a, 0xf3, 0x20, 0x49, 0x84, 0x0e, 0x74, 0x2c, 0x12, 0x65, 0xbd, 0xb4, 0x77, 0xa0, 0x58, 0x2c,
0x8c, 0x37, 0x14, 0x12, 0xf8, 0xa0, 0xc1, 0x23, 0x48, 0x40, 0x06, 0x1a, 0xba, 0xf6, 0xcd, 0xbd,
0x28, 0xd6, 0xa7, 0x59, 0x87, 0x85, 0xe2, 0x8c, 0x07, 0xd2, 0xa4, 0x78, 0x6e, 0x0e, 0xfb, 0x61,
0x97, 0xa7, 0xbd, 0x28, 0xff, 0xb2, 0xe2, 0x41, 0x9a, 0xf6, 0xe3, 0xd0, 0x04, 0xe7, 0x83, 0x46,
0xd0, 0x4f, 0x4f, 0x83, 0xb9, 0x50, 0x74, 0x17, 0x5f, 0x68, 0x15, 0xb4, 0x0f, 0x33, 0x90, 0xaf,
0xc8, 0x35, 0xec, 0x15, 0xb5, 0xf9, 0xa8, 0x86, 0xf6, 0x4a, 0x6d, 0x7b, 0xa3, 0x57, 0xf0, 0x25,
0xfb, 0xae, 0x0d, 0x2a, 0x15, 0x89, 0x02, 0xfa, 0x01, 0xe1, 0x2d, 0x6b, 0x6b, 0x49, 0x08, 0x34,
0xb4, 0xe1, 0x45, 0x06, 0x4a, 0x93, 0x13, 0xec, 0x3a, 0x60, 0x82, 0x9c, 0x6f, 0x1e, 0xb1, 0x31,
0x30, 0x73, 0xc0, 0xe6, 0xf0, 0x34, 0xec, 0xb2, 0xb4, 0x17, 0xb1, 0x1c, 0x98, 0x4d, 0x00, 0x33,
0x07, 0xcc, 0x5c, 0x56, 0x17, 0x32, 0x27, 0xcc, 0x52, 0x05, 0x52, 0xfb, 0x6b, 0x35, 0xb4, 0x77,
0xae, 0x6d, 0x6f, 0xf4, 0x33, 0xc2, 0x74, 0x0a, 0xe7, 0x58, 0x8a, 0xb3, 0xfb, 0x59, 0x07, 0x5a,
0x22, 0x79, 0x16, 0x47, 0x0e, 0xae, 0x8a, 0x71, 0x2f, 0xeb, 0x40, 0x68, 0x8c, 0xb6, 0xc8, 0x09,
0x0b, 0xf1, 0xf1, 0x66, 0x28, 0x12, 0x0d, 0x2f, 0x8b, 0xf8, 0xa5, 0xb6, 0xbb, 0x4e, 0x24, 0x5e,
0x9f, 0x4c, 0x4c, 0x2a, 0xb8, 0x14, 0x27, 0x36, 0xb3, 0xbf, 0x61, 0x5c, 0x63, 0x03, 0xd5, 0xa3,
0x26, 0x3d, 0x4e, 0xbb, 0xff, 0xab, 0x49, 0xcd, 0x9f, 0x1e, 0xbe, 0x68, 0x8d, 0x8f, 0x40, 0x0e,
0xe2, 0x10, 0xc8, 0x5b, 0xbc, 0xf1, 0x20, 0x56, 0x9a, 0x5c, 0x65, 0x4e, 0xad, 0x93, 0x83, 0x2f,
0x1f, 0xff, 0x7b, 0xfa, 0x3c, 0x3c, 0xf5, 0xdf, 0xfd, 0xf8, 0xfd, 0x69, 0x8d, 0x90, 0xcb, 0x46,
0xc1, 0x83, 0x86, 0xfb, 0x6d, 0x28, 0xf2, 0x11, 0x61, 0xaf, 0x98, 0x0c, 0xb9, 0x31, 0xcb, 0x30,
0x25, 0xa0, 0xf2, 0x0a, 0x5a, 0x41, 0x77, 0x0c, 0xc7, 0x36, 0x9d, 0xe3, 0x38, 0x1c, 0x29, 0xe9,
0x6b, 0x2e, 0xe0, 0x05, 0x52, 0x21, 0x37, 0x17, 0xe3, 0x2d, 0x14, 0xd4, 0x4a, 0x60, 0x77, 0x0d,
0x6c, 0x8d, 0x6e, 0xcf, 0xc2, 0xee, 0x8f, 0x95, 0x79, 0x88, 0xea, 0xe4, 0x3d, 0xc2, 0xeb, 0x77,
0x61, 0xe9, 0x0c, 0x57, 0xd8, 0x37, 0x72, 0x7d, 0x16, 0x85, 0xbf, 0x2e, 0x56, 0xc1, 0x1b, 0xf2,
0x05, 0x61, 0xaf, 0x10, 0xf3, 0xfc, 0x20, 0xa7, 0x44, 0xbe, 0x12, 0xa0, 0xa6, 0x01, 0xba, 0x55,
0xde, 0x99, 0x07, 0x72, 0xb9, 0x2d, 0xd8, 0x78, 0xb2, 0x27, 0xd8, 0xbb, 0x03, 0x7d, 0xd0, 0xb0,
0xac, 0x53, 0xfe, 0xac, 0x79, 0xb4, 0xd5, 0x6c, 0xfd, 0xf5, 0xe5, 0xf5, 0x1f, 0x1d, 0x7c, 0x1b,
0x56, 0xd1, 0xf7, 0x61, 0x15, 0xfd, 0x1a, 0x56, 0xd1, 0x93, 0xfa, 0xdf, 0x96, 0xf1, 0xf4, 0xff,
0x44, 0xc7, 0x33, 0x4b, 0xf7, 0xf6, 0x9f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x23, 0xc4, 0x8d, 0xa4,
0x40, 0x06, 0x00, 0x00,
// 472 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x94, 0xcf, 0x8b, 0x13, 0x31,
0x14, 0xc7, 0xc9, 0xaa, 0xa3, 0x46, 0xf1, 0x47, 0x58, 0xa5, 0x8e, 0x6b, 0xd9, 0xcd, 0x41, 0x96,
0x45, 0x13, 0x5a, 0x2f, 0x8b, 0xc7, 0x5d, 0x51, 0x04, 0x2f, 0x56, 0xbc, 0xc8, 0x82, 0x64, 0xa7,
0x8f, 0xec, 0xd8, 0x71, 0x12, 0x93, 0xcc, 0x80, 0x88, 0x08, 0x7a, 0x15, 0x2f, 0xfe, 0x01, 0x5e,
0xfd, 0x53, 0x3c, 0x0a, 0xfe, 0x03, 0x52, 0xfc, 0x43, 0x64, 0x32, 0x49, 0xbb, 0x6d, 0xa9, 0x17,
0xcb, 0x9e, 0x9a, 0xbc, 0xa4, 0xef, 0x7d, 0xf2, 0x7d, 0xdf, 0x79, 0x78, 0xc3, 0x82, 0xa9, 0xc1,
0xf0, 0xac, 0xa8, 0xac, 0x9b, 0xfe, 0x32, 0x6d, 0x94, 0x53, 0xe4, 0x6c, 0xd8, 0xa6, 0xeb, 0x52,
0x49, 0xe5, 0x63, 0xbc, 0x59, 0xb5, 0xc7, 0xe9, 0x86, 0x54, 0x4a, 0x16, 0xc0, 0x85, 0xce, 0xb9,
0x28, 0x4b, 0xe5, 0x84, 0xcb, 0x55, 0x69, 0xc3, 0x29, 0x1d, 0xed, 0x5a, 0x96, 0x2b, 0x7f, 0x9a,
0x29, 0x03, 0xbc, 0xee, 0x71, 0x09, 0x25, 0x18, 0xe1, 0x60, 0x18, 0xee, 0x3c, 0x96, 0xb9, 0x3b,
0xaa, 0x0e, 0x59, 0xa6, 0x5e, 0x73, 0x61, 0x7c, 0x89, 0x57, 0x7e, 0x71, 0x37, 0x1b, 0x72, 0x3d,
0x92, 0xcd, 0x9f, 0x2d, 0x17, 0x5a, 0x17, 0x79, 0xe6, 0x93, 0xf3, 0xba, 0x27, 0x0a, 0x7d, 0x24,
0x16, 0x52, 0xd1, 0xdb, 0xf8, 0xe2, 0x7e, 0x4b, 0xfb, 0xb4, 0x02, 0xf3, 0x96, 0x5c, 0xc7, 0x49,
0xfb, 0xb6, 0x0e, 0xda, 0x44, 0xdb, 0xe7, 0x07, 0x61, 0x47, 0xaf, 0xe2, 0xcb, 0xe1, 0xde, 0x00,
0xac, 0x56, 0xa5, 0x05, 0xfa, 0x19, 0xe1, 0xf5, 0x10, 0xdb, 0x37, 0x20, 0x1c, 0x0c, 0xe0, 0x4d,
0x05, 0xd6, 0x91, 0x03, 0x1c, 0x15, 0xf0, 0x49, 0x2e, 0xf4, 0xf7, 0xd8, 0x14, 0x98, 0x45, 0x60,
0xbf, 0x78, 0x99, 0x0d, 0x99, 0x1e, 0x49, 0xd6, 0x00, 0xb3, 0x63, 0xc0, 0x2c, 0x02, 0xb3, 0x58,
0x35, 0xa6, 0x6c, 0x08, 0x2b, 0x6d, 0xc1, 0xb8, 0xce, 0xda, 0x26, 0xda, 0x3e, 0x37, 0x08, 0x3b,
0xea, 0x26, 0x34, 0xcf, 0xf5, 0xf0, 0xa4, 0x68, 0xfa, 0xdf, 0xcf, 0xe0, 0x4b, 0x21, 0xf8, 0x0c,
0x4c, 0x9d, 0x67, 0x40, 0x3e, 0xe0, 0xd3, 0x4f, 0x72, 0xeb, 0xc8, 0x35, 0x16, 0x6d, 0x71, 0x5c,
0xe1, 0xf4, 0xe1, 0xff, 0x97, 0x6f, 0xd2, 0xd3, 0xce, 0xc7, 0x5f, 0x7f, 0xbe, 0xae, 0x11, 0x72,
0xc5, 0x5b, 0xa5, 0xee, 0x45, 0x13, 0x5a, 0xf2, 0x05, 0xe1, 0xa4, 0xed, 0x08, 0xb9, 0x35, 0xcf,
0x30, 0xd3, 0xa9, 0x74, 0x05, 0x52, 0xd0, 0x2d, 0xcf, 0x71, 0x93, 0x2e, 0x70, 0xdc, 0x9f, 0xb4,
0xec, 0x13, 0xc2, 0xa7, 0x1e, 0xc1, 0x52, 0x45, 0x56, 0x48, 0x41, 0x6e, 0xcc, 0x53, 0xf0, 0x77,
0xad, 0x83, 0xdf, 0x93, 0x6f, 0x08, 0x27, 0xad, 0x35, 0x16, 0x65, 0x99, 0xb1, 0xcc, 0x4a, 0x80,
0xfa, 0x1e, 0xe8, 0x4e, 0xba, 0xb5, 0x08, 0x14, 0x6b, 0x07, 0xb0, 0xa9, 0x4e, 0x07, 0x38, 0x79,
0x00, 0x05, 0x38, 0x58, 0xa6, 0x54, 0x67, 0x3e, 0x3c, 0xf9, 0x18, 0xc3, 0xfb, 0x77, 0x96, 0xbf,
0x7f, 0x6f, 0xf7, 0xc7, 0xb8, 0x8b, 0x7e, 0x8e, 0xbb, 0xe8, 0xf7, 0xb8, 0x8b, 0x5e, 0xec, 0xfc,
0x6b, 0x86, 0xcc, 0x8e, 0xb7, 0xc3, 0xc4, 0xcf, 0x8a, 0x7b, 0x7f, 0x03, 0x00, 0x00, 0xff, 0xff,
0x2c, 0xae, 0x46, 0x1e, 0xf7, 0x04, 0x00, 0x00,
}

View File

@@ -66,19 +66,6 @@ func request_ClusterService_Create_0(ctx context.Context, marshaler runtime.Mars
}
func request_ClusterService_CreateFromKubeConfig_0(ctx context.Context, marshaler runtime.Marshaler, client ClusterServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq ClusterCreateFromKubeConfigRequest
var metadata runtime.ServerMetadata
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.CreateFromKubeConfig(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func request_ClusterService_Get_0(ctx context.Context, marshaler runtime.Marshaler, client ClusterServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq ClusterQuery
var metadata runtime.ServerMetadata
@@ -260,35 +247,6 @@ func RegisterClusterServiceHandlerClient(ctx context.Context, mux *runtime.Serve
})
mux.Handle("POST", pattern_ClusterService_CreateFromKubeConfig_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
if cn, ok := w.(http.CloseNotifier); ok {
go func(done <-chan struct{}, closed <-chan bool) {
select {
case <-done:
case <-closed:
cancel()
}
}(ctx.Done(), cn.CloseNotify())
}
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
rctx, err := runtime.AnnotateContext(ctx, mux, req)
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_ClusterService_CreateFromKubeConfig_0(rctx, inboundMarshaler, client, req, pathParams)
ctx = runtime.NewServerMetadataContext(ctx, md)
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
forward_ClusterService_CreateFromKubeConfig_0(ctx, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle("GET", pattern_ClusterService_Get_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
@@ -384,8 +342,6 @@ var (
pattern_ClusterService_Create_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "clusters"}, ""))
pattern_ClusterService_CreateFromKubeConfig_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "clusters-kubeconfig"}, ""))
pattern_ClusterService_Get_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "clusters", "server"}, ""))
pattern_ClusterService_Update_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "clusters", "cluster.server"}, ""))
@@ -398,8 +354,6 @@ var (
forward_ClusterService_Create_0 = runtime.ForwardResponseMessage
forward_ClusterService_CreateFromKubeConfig_0 = runtime.ForwardResponseMessage
forward_ClusterService_Get_0 = runtime.ForwardResponseMessage
forward_ClusterService_Update_0 = runtime.ForwardResponseMessage

View File

@@ -24,13 +24,6 @@ message ClusterCreateRequest {
bool upsert = 2;
}
message ClusterCreateFromKubeConfigRequest {
string kubeconfig = 1;
string context = 2;
bool upsert = 3;
bool inCluster = 4;
}
message ClusterUpdateRequest {
github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1.Cluster cluster = 1;
}
@@ -50,14 +43,6 @@ service ClusterService {
body: "cluster"
};
}
// CreateFromKubeConfig installs the argocd-manager service account into the cluster specified in the given kubeconfig and context
rpc CreateFromKubeConfig(ClusterCreateFromKubeConfigRequest) returns (github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1.Cluster) {
option (google.api.http) = {
post: "/api/v1/clusters-kubeconfig"
body: "*"
};
}
// Get returns a cluster by server address
rpc Get(ClusterQuery) returns (github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1.Cluster) {

View File

@@ -3,14 +3,8 @@ package project
import (
"context"
"fmt"
"strings"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/client-go/kubernetes"
"strings"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
@@ -19,118 +13,23 @@ import (
"github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/grpc"
projectutil "github.com/argoproj/argo-cd/util/project"
"github.com/argoproj/argo-cd/util/rbac"
"github.com/argoproj/argo-cd/util/session"
jwt "github.com/dgrijalva/jwt-go"
)
const (
// JWTTokenSubFormat format of the JWT token subject that ArgoCD vends out.
JWTTokenSubFormat = "proj:%s:%s"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// Server provides a Project service
type Server struct {
ns string
enf *rbac.Enforcer
appclientset appclientset.Interface
kubeclientset kubernetes.Interface
auditLogger *argo.AuditLogger
projectLock *util.KeyLock
sessionMgr *session.SessionManager
ns string
enf *rbac.Enforcer
appclientset appclientset.Interface
projectLock *util.KeyLock
}
// NewServer returns a new instance of the Project service
func NewServer(ns string, kubeclientset kubernetes.Interface, appclientset appclientset.Interface, enf *rbac.Enforcer, projectLock *util.KeyLock, sessionMgr *session.SessionManager) *Server {
auditLogger := argo.NewAuditLogger(ns, kubeclientset, "argocd-server")
return &Server{enf: enf, appclientset: appclientset, kubeclientset: kubeclientset, ns: ns, projectLock: projectLock, auditLogger: auditLogger, sessionMgr: sessionMgr}
}
// CreateToken creates a new token to access a project
func (s *Server) CreateToken(ctx context.Context, q *ProjectTokenCreateRequest) (*ProjectTokenResponse, error) {
if !s.enf.EnforceClaims(ctx.Value("claims"), "projects", "update", q.Project) {
return nil, grpc.ErrPermissionDenied
}
project, err := s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Get(q.Project, metav1.GetOptions{})
if err != nil {
return nil, err
}
err = validateProject(project)
if err != nil {
return nil, err
}
s.projectLock.Lock(q.Project)
defer s.projectLock.Unlock(q.Project)
index, err := projectutil.GetRoleIndexByName(project, q.Role)
if err != nil {
return nil, status.Errorf(codes.NotFound, "project '%s' does not have role '%s'", q.Project, q.Role)
}
tokenName := fmt.Sprintf(JWTTokenSubFormat, q.Project, q.Role)
jwtToken, err := s.sessionMgr.Create(tokenName, q.ExpiresIn)
if err != nil {
return nil, status.Error(codes.InvalidArgument, err.Error())
}
parser := &jwt.Parser{
SkipClaimsValidation: true,
}
claims := jwt.StandardClaims{}
_, _, err = parser.ParseUnverified(jwtToken, &claims)
if err != nil {
return nil, status.Error(codes.InvalidArgument, err.Error())
}
issuedAt := claims.IssuedAt
expiresAt := claims.ExpiresAt
project.Spec.Roles[index].JWTTokens = append(project.Spec.Roles[index].JWTTokens, v1alpha1.JWTToken{IssuedAt: issuedAt, ExpiresAt: expiresAt})
_, err = s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Update(project)
if err != nil {
return nil, err
}
s.logEvent(project, ctx, argo.EventReasonResourceCreated, "create token")
return &ProjectTokenResponse{Token: jwtToken}, nil
}
// DeleteToken deletes a token in a project
func (s *Server) DeleteToken(ctx context.Context, q *ProjectTokenDeleteRequest) (*EmptyResponse, error) {
if !s.enf.EnforceClaims(ctx.Value("claims"), "projects", "delete", q.Project) {
return nil, grpc.ErrPermissionDenied
}
project, err := s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Get(q.Project, metav1.GetOptions{})
if err != nil {
return nil, err
}
err = validateProject(project)
if err != nil {
return nil, err
}
s.projectLock.Lock(q.Project)
defer s.projectLock.Unlock(q.Project)
roleIndex, err := projectutil.GetRoleIndexByName(project, q.Role)
if err != nil {
return &EmptyResponse{}, nil
}
if project.Spec.Roles[roleIndex].JWTTokens == nil {
return &EmptyResponse{}, nil
}
jwtTokenIndex, err := projectutil.GetJWTTokenIndexByIssuedAt(project, roleIndex, q.Iat)
if err != nil {
return &EmptyResponse{}, nil
}
project.Spec.Roles[roleIndex].JWTTokens[jwtTokenIndex] = project.Spec.Roles[roleIndex].JWTTokens[len(project.Spec.Roles[roleIndex].JWTTokens)-1]
project.Spec.Roles[roleIndex].JWTTokens = project.Spec.Roles[roleIndex].JWTTokens[:len(project.Spec.Roles[roleIndex].JWTTokens)-1]
_, err = s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Update(project)
if err != nil {
return nil, err
}
s.logEvent(project, ctx, argo.EventReasonResourceDeleted, "delete token")
return &EmptyResponse{}, nil
}
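DeleteToken above drops the token from the role's slice by copying the last element over the victim and shrinking the slice by one: an O(1) delete that does not preserve order (fine here, since nothing depends on token ordering). The pattern in isolation, with a hypothetical helper name:

```go
package main

import "fmt"

// removeAtUnordered deletes s[i] in O(1) by swapping in the last element;
// element order after the call is NOT preserved.
func removeAtUnordered(s []int64, i int) []int64 {
	s[i] = s[len(s)-1]
	return s[:len(s)-1]
}

func main() {
	tokens := []int64{10, 20, 30, 40}
	tokens = removeAtUnordered(tokens, 1) // drop 20
	fmt.Println(tokens)                   // [10 40 30]
}
```

Use `append(s[:i], s[i+1:]...)` instead when relative order must survive, at the cost of shifting every later element.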
func NewServer(ns string, appclientset appclientset.Interface, enf *rbac.Enforcer, projectLock *util.KeyLock) *Server {
return &Server{enf: enf, appclientset: appclientset, ns: ns, projectLock: projectLock}
}
// Create a new project.
@@ -138,21 +37,20 @@ func (s *Server) Create(ctx context.Context, q *ProjectCreateRequest) (*v1alpha1
if !s.enf.EnforceClaims(ctx.Value("claims"), "projects", "create", q.Project.Name) {
return nil, grpc.ErrPermissionDenied
}
if q.Project.Name == common.DefaultAppProjectName {
return nil, status.Errorf(codes.InvalidArgument, "name '%s' is reserved and cannot be used as a project name", q.Project.Name)
}
err := validateProject(q.Project)
if err != nil {
return nil, err
}
res, err := s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Create(q.Project)
if err == nil {
s.logEvent(res, ctx, argo.EventReasonResourceCreated, "create")
}
return res, err
return s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Create(q.Project)
}
// List returns a list of projects
func (s *Server) List(ctx context.Context, q *ProjectQuery) (*v1alpha1.AppProjectList, error) {
list, err := s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).List(metav1.ListOptions{})
list.Items = append(list.Items, v1alpha1.GetDefaultProject(s.ns))
if list != nil {
newItems := make([]v1alpha1.AppProject, 0)
for i := range list.Items {
@@ -214,58 +112,6 @@ func getRemovedSources(oldProj, newProj *v1alpha1.AppProject) map[string]bool {
return removed
}
func validateJWTToken(proj string, token string, policy string) error {
err := validatePolicy(proj, policy)
if err != nil {
return err
}
policyComponents := strings.Split(policy, ",")
if strings.Trim(policyComponents[2], " ") != "applications" {
return status.Errorf(codes.InvalidArgument, "incorrect format for '%s' as JWT tokens can only access applications", policy)
}
roleComponents := strings.Split(strings.Trim(policyComponents[1], " "), ":")
if len(roleComponents) != 3 {
return status.Errorf(codes.InvalidArgument, "incorrect number of role arguments for '%s' policy", policy)
}
if roleComponents[0] != "proj" {
return status.Errorf(codes.InvalidArgument, "incorrect policy format for '%s' as role should start with 'proj:'", policy)
}
if roleComponents[1] != proj {
return status.Errorf(codes.InvalidArgument, "incorrect policy format for '%s' as policy can't grant access to other projects", policy)
}
if roleComponents[2] != token {
return status.Errorf(codes.InvalidArgument, "incorrect policy format for '%s' as policy can't grant access to other roles", policy)
}
return nil
}
func validatePolicy(proj string, policy string) error {
policyComponents := strings.Split(policy, ",")
if len(policyComponents) != 6 {
return status.Errorf(codes.InvalidArgument, "incorrect number of policy arguments for '%s'", policy)
}
if strings.Trim(policyComponents[0], " ") != "p" {
return status.Errorf(codes.InvalidArgument, "policies can only use the policy format: '%s'", policy)
}
if len(strings.Trim(policyComponents[1], " ")) <= 0 {
return status.Errorf(codes.InvalidArgument, "incorrect policy format for '%s' as subject must be longer than 0 characters", policy)
}
if len(strings.Trim(policyComponents[2], " ")) <= 0 {
return status.Errorf(codes.InvalidArgument, "incorrect policy format for '%s' as object must be longer than 0 characters", policy)
}
if len(strings.Trim(policyComponents[3], " ")) <= 0 {
return status.Errorf(codes.InvalidArgument, "incorrect policy format for '%s' as action must be longer than 0 characters", policy)
}
if !strings.HasPrefix(strings.Trim(policyComponents[4], " "), proj) {
return status.Errorf(codes.InvalidArgument, "incorrect policy format for '%s' as policies can't grant access to other projects", policy)
}
effect := strings.Trim(policyComponents[5], " ")
if effect != "allow" && effect != "deny" {
return status.Errorf(codes.InvalidArgument, "incorrect policy format for '%s' as effect can only have value 'allow' or 'deny'", policy)
}
return nil
}
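validatePolicy above enforces the shape of a six-field casbin-style CSV rule, `p, <subject>, <resource>, <action>, <object>, <effect>`. A dependency-free sketch of the same shape check, returning plain errors instead of the gRPC status codes used above:

```go
package main

import (
	"fmt"
	"strings"
)

// checkPolicyShape validates the casbin-style rule
// "p, <subject>, <resource>, <action>, <object>, <effect>"
// the way validatePolicy does, minus the gRPC status wrapping.
func checkPolicyShape(proj, policy string) error {
	f := strings.Split(policy, ",")
	if len(f) != 6 {
		return fmt.Errorf("expected 6 fields, got %d", len(f))
	}
	for i := range f {
		f[i] = strings.TrimSpace(f[i])
	}
	if f[0] != "p" {
		return fmt.Errorf("rule must start with 'p'")
	}
	for _, field := range f[1:4] { // subject, resource, action
		if field == "" {
			return fmt.Errorf("subject, resource and action must be non-empty")
		}
	}
	if !strings.HasPrefix(f[4], proj) {
		return fmt.Errorf("object %q escapes project %q", f[4], proj)
	}
	if f[5] != "allow" && f[5] != "deny" {
		return fmt.Errorf("effect must be 'allow' or 'deny'")
	}
	return nil
}

func main() {
	ok := checkPolicyShape("myproj", "p, proj:myproj:ci, applications, get, myproj/*, allow")
	bad := checkPolicyShape("myproj", "p, someone, applications, get, other/*, allow")
	fmt.Println(ok, bad != nil) // <nil> true
}
```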
func validateProject(p *v1alpha1.AppProject) error {
destKeys := make(map[string]bool)
for _, dest := range p.Spec.Destinations {
@@ -278,9 +124,7 @@ func validateProject(p *v1alpha1.AppProject) error {
}
srcRepos := make(map[string]bool)
for i, src := range p.Spec.SourceRepos {
if src != "*" {
src = git.NormalizeGitURL(src)
}
src = git.NormalizeGitURL(src)
p.Spec.SourceRepos[i] = src
if _, ok := srcRepos[src]; !ok {
srcRepos[src] = true
@@ -288,39 +132,14 @@ func validateProject(p *v1alpha1.AppProject) error {
return status.Errorf(codes.InvalidArgument, "source repository %s should not be listed more than once.", src)
}
}
roleNames := make(map[string]bool)
for _, role := range p.Spec.Roles {
existingPolicies := make(map[string]bool)
for _, policy := range role.Policies {
var err error
if role.JWTTokens != nil {
err = validateJWTToken(p.Name, role.Name, policy)
} else {
err = validatePolicy(p.Name, policy)
}
if err != nil {
return err
}
if _, ok := existingPolicies[policy]; !ok {
existingPolicies[policy] = true
} else {
return status.Errorf(codes.AlreadyExists, "policy '%s' already exists for role '%s'", policy, role.Name)
}
}
if _, ok := roleNames[role.Name]; !ok {
roleNames[role.Name] = true
} else {
return status.Errorf(codes.AlreadyExists, "can't have duplicate roles: role '%s' already exists", role.Name)
}
}
return nil
}
// Update updates a project
func (s *Server) Update(ctx context.Context, q *ProjectUpdateRequest) (*v1alpha1.AppProject, error) {
if q.Project.Name == common.DefaultAppProjectName {
return nil, grpc.ErrPermissionDenied
}
if !s.enf.EnforceClaims(ctx.Value("claims"), "projects", "update", q.Project.Name) {
return nil, grpc.ErrPermissionDenied
}
@@ -368,18 +187,11 @@ func (s *Server) Update(ctx context.Context, q *ProjectUpdateRequest) (*v1alpha1
codes.InvalidArgument, "following source repos are used by one or more application and cannot be removed: %s", strings.Join(removedSrcUsed, ";"))
}
res, err := s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Update(q.Project)
if err == nil {
s.logEvent(res, ctx, argo.EventReasonResourceUpdated, "update")
}
return res, err
return s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Update(q.Project)
}
// Delete deletes a project
func (s *Server) Delete(ctx context.Context, q *ProjectQuery) (*EmptyResponse, error) {
if q.Name == common.DefaultAppProjectName {
return nil, status.Errorf(codes.InvalidArgument, "name '%s' is reserved and cannot be deleted", q.Name)
}
if !s.enf.EnforceClaims(ctx.Value("claims"), "projects", "delete", q.Name) {
return nil, grpc.ErrPermissionDenied
}
@@ -387,11 +199,6 @@ func (s *Server) Delete(ctx context.Context, q *ProjectQuery) (*EmptyResponse, e
s.projectLock.Lock(q.Name)
defer s.projectLock.Unlock(q.Name)
p, err := s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Get(q.Name, metav1.GetOptions{})
if err != nil {
return nil, err
}
appsList, err := s.appclientset.ArgoprojV1alpha1().Applications(s.ns).List(metav1.ListOptions{})
if err != nil {
return nil, err
@@ -400,29 +207,5 @@ func (s *Server) Delete(ctx context.Context, q *ProjectQuery) (*EmptyResponse, e
if len(apps) > 0 {
return nil, status.Errorf(codes.InvalidArgument, "project is referenced by %d applications", len(apps))
}
err = s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Delete(q.Name, &metav1.DeleteOptions{})
if err == nil {
s.logEvent(p, ctx, argo.EventReasonResourceDeleted, "delete")
}
return &EmptyResponse{}, err
}
func (s *Server) ListEvents(ctx context.Context, q *ProjectQuery) (*v1.EventList, error) {
if !s.enf.EnforceClaims(ctx.Value("claims"), "projects/events", "get", q.Name) {
return nil, grpc.ErrPermissionDenied
}
proj, err := s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Get(q.Name, metav1.GetOptions{})
if err != nil {
return nil, err
}
fieldSelector := fields.SelectorFromSet(map[string]string{
"involvedObject.name": proj.Name,
"involvedObject.uid": string(proj.UID),
"involvedObject.namespace": proj.Namespace,
}).String()
return s.kubeclientset.CoreV1().Events(s.ns).List(metav1.ListOptions{FieldSelector: fieldSelector})
}
func (s *Server) logEvent(p *v1alpha1.AppProject, ctx context.Context, reason string, action string) {
s.auditLogger.LogAppProjEvent(p, argo.EventInfo{Reason: reason, Action: action, Username: session.Username(ctx)}, v1.EventTypeNormal)
}
return &EmptyResponse{}, s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Delete(q.Name, &metav1.DeleteOptions{})
}


@@ -13,9 +13,6 @@
It has these top-level messages:
ProjectCreateRequest
ProjectTokenDeleteRequest
ProjectTokenCreateRequest
ProjectTokenResponse
ProjectQuery
ProjectUpdateRequest
EmptyResponse
@@ -27,8 +24,7 @@ import fmt "fmt"
import math "math"
import _ "github.com/gogo/protobuf/gogoproto"
import _ "google.golang.org/genproto/googleapis/api/annotations"
import k8s_io_api_core_v1 "k8s.io/api/core/v1"
import _ "k8s.io/apimachinery/pkg/apis/meta/v1"
import _ "k8s.io/api/core/v1"
import github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
import context "golang.org/x/net/context"
@@ -64,98 +60,6 @@ func (m *ProjectCreateRequest) GetProject() *github_com_argoproj_argo_cd_pkg_api
return nil
}
// ProjectTokenDeleteRequest defines project token deletion parameters.
type ProjectTokenDeleteRequest struct {
Project string `protobuf:"bytes,1,opt,name=project,proto3" json:"project,omitempty"`
Role string `protobuf:"bytes,2,opt,name=role,proto3" json:"role,omitempty"`
Iat int64 `protobuf:"varint,3,opt,name=iat,proto3" json:"iat,omitempty"`
}
func (m *ProjectTokenDeleteRequest) Reset() { *m = ProjectTokenDeleteRequest{} }
func (m *ProjectTokenDeleteRequest) String() string { return proto.CompactTextString(m) }
func (*ProjectTokenDeleteRequest) ProtoMessage() {}
func (*ProjectTokenDeleteRequest) Descriptor() ([]byte, []int) { return fileDescriptorProject, []int{1} }
func (m *ProjectTokenDeleteRequest) GetProject() string {
if m != nil {
return m.Project
}
return ""
}
func (m *ProjectTokenDeleteRequest) GetRole() string {
if m != nil {
return m.Role
}
return ""
}
func (m *ProjectTokenDeleteRequest) GetIat() int64 {
if m != nil {
return m.Iat
}
return 0
}
// ProjectTokenCreateRequest defines project token creation parameters.
type ProjectTokenCreateRequest struct {
Project string `protobuf:"bytes,1,opt,name=project,proto3" json:"project,omitempty"`
Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"`
Role string `protobuf:"bytes,3,opt,name=role,proto3" json:"role,omitempty"`
// expiresIn represents a duration in seconds
ExpiresIn int64 `protobuf:"varint,4,opt,name=expiresIn,proto3" json:"expiresIn,omitempty"`
}
func (m *ProjectTokenCreateRequest) Reset() { *m = ProjectTokenCreateRequest{} }
func (m *ProjectTokenCreateRequest) String() string { return proto.CompactTextString(m) }
func (*ProjectTokenCreateRequest) ProtoMessage() {}
func (*ProjectTokenCreateRequest) Descriptor() ([]byte, []int) { return fileDescriptorProject, []int{2} }
func (m *ProjectTokenCreateRequest) GetProject() string {
if m != nil {
return m.Project
}
return ""
}
func (m *ProjectTokenCreateRequest) GetDescription() string {
if m != nil {
return m.Description
}
return ""
}
func (m *ProjectTokenCreateRequest) GetRole() string {
if m != nil {
return m.Role
}
return ""
}
func (m *ProjectTokenCreateRequest) GetExpiresIn() int64 {
if m != nil {
return m.ExpiresIn
}
return 0
}
// ProjectTokenResponse wraps the created token or returns an empty string if deleted.
type ProjectTokenResponse struct {
Token string `protobuf:"bytes,1,opt,name=token,proto3" json:"token,omitempty"`
}
func (m *ProjectTokenResponse) Reset() { *m = ProjectTokenResponse{} }
func (m *ProjectTokenResponse) String() string { return proto.CompactTextString(m) }
func (*ProjectTokenResponse) ProtoMessage() {}
func (*ProjectTokenResponse) Descriptor() ([]byte, []int) { return fileDescriptorProject, []int{3} }
func (m *ProjectTokenResponse) GetToken() string {
if m != nil {
return m.Token
}
return ""
}
// ProjectQuery is a query for Project resources
type ProjectQuery struct {
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
@@ -164,7 +68,7 @@ type ProjectQuery struct {
func (m *ProjectQuery) Reset() { *m = ProjectQuery{} }
func (m *ProjectQuery) String() string { return proto.CompactTextString(m) }
func (*ProjectQuery) ProtoMessage() {}
func (*ProjectQuery) Descriptor() ([]byte, []int) { return fileDescriptorProject, []int{4} }
func (*ProjectQuery) Descriptor() ([]byte, []int) { return fileDescriptorProject, []int{1} }
func (m *ProjectQuery) GetName() string {
if m != nil {
@@ -180,7 +84,7 @@ type ProjectUpdateRequest struct {
func (m *ProjectUpdateRequest) Reset() { *m = ProjectUpdateRequest{} }
func (m *ProjectUpdateRequest) String() string { return proto.CompactTextString(m) }
func (*ProjectUpdateRequest) ProtoMessage() {}
func (*ProjectUpdateRequest) Descriptor() ([]byte, []int) { return fileDescriptorProject, []int{5} }
func (*ProjectUpdateRequest) Descriptor() ([]byte, []int) { return fileDescriptorProject, []int{2} }
func (m *ProjectUpdateRequest) GetProject() *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.AppProject {
if m != nil {
@@ -195,13 +99,10 @@ type EmptyResponse struct {
func (m *EmptyResponse) Reset() { *m = EmptyResponse{} }
func (m *EmptyResponse) String() string { return proto.CompactTextString(m) }
func (*EmptyResponse) ProtoMessage() {}
func (*EmptyResponse) Descriptor() ([]byte, []int) { return fileDescriptorProject, []int{6} }
func (*EmptyResponse) Descriptor() ([]byte, []int) { return fileDescriptorProject, []int{3} }
func init() {
proto.RegisterType((*ProjectCreateRequest)(nil), "project.ProjectCreateRequest")
proto.RegisterType((*ProjectTokenDeleteRequest)(nil), "project.ProjectTokenDeleteRequest")
proto.RegisterType((*ProjectTokenCreateRequest)(nil), "project.ProjectTokenCreateRequest")
proto.RegisterType((*ProjectTokenResponse)(nil), "project.ProjectTokenResponse")
proto.RegisterType((*ProjectQuery)(nil), "project.ProjectQuery")
proto.RegisterType((*ProjectUpdateRequest)(nil), "project.ProjectUpdateRequest")
proto.RegisterType((*EmptyResponse)(nil), "project.EmptyResponse")
@@ -218,10 +119,6 @@ const _ = grpc.SupportPackageIsVersion4
// Client API for ProjectService service
type ProjectServiceClient interface {
// Create a new project token.
CreateToken(ctx context.Context, in *ProjectTokenCreateRequest, opts ...grpc.CallOption) (*ProjectTokenResponse, error)
// Delete a project token.
DeleteToken(ctx context.Context, in *ProjectTokenDeleteRequest, opts ...grpc.CallOption) (*EmptyResponse, error)
// Create a new project.
Create(ctx context.Context, in *ProjectCreateRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.AppProject, error)
// List returns a list of projects
@@ -232,8 +129,6 @@ type ProjectServiceClient interface {
Update(ctx context.Context, in *ProjectUpdateRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.AppProject, error)
// Delete deletes a project
Delete(ctx context.Context, in *ProjectQuery, opts ...grpc.CallOption) (*EmptyResponse, error)
// ListEvents returns a list of project events
ListEvents(ctx context.Context, in *ProjectQuery, opts ...grpc.CallOption) (*k8s_io_api_core_v1.EventList, error)
}
type projectServiceClient struct {
@@ -244,24 +139,6 @@ func NewProjectServiceClient(cc *grpc.ClientConn) ProjectServiceClient {
return &projectServiceClient{cc}
}
func (c *projectServiceClient) CreateToken(ctx context.Context, in *ProjectTokenCreateRequest, opts ...grpc.CallOption) (*ProjectTokenResponse, error) {
out := new(ProjectTokenResponse)
err := grpc.Invoke(ctx, "/project.ProjectService/CreateToken", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *projectServiceClient) DeleteToken(ctx context.Context, in *ProjectTokenDeleteRequest, opts ...grpc.CallOption) (*EmptyResponse, error) {
out := new(EmptyResponse)
err := grpc.Invoke(ctx, "/project.ProjectService/DeleteToken", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *projectServiceClient) Create(ctx context.Context, in *ProjectCreateRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.AppProject, error) {
out := new(github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.AppProject)
err := grpc.Invoke(ctx, "/project.ProjectService/Create", in, out, c.cc, opts...)
@@ -307,22 +184,9 @@ func (c *projectServiceClient) Delete(ctx context.Context, in *ProjectQuery, opt
return out, nil
}
func (c *projectServiceClient) ListEvents(ctx context.Context, in *ProjectQuery, opts ...grpc.CallOption) (*k8s_io_api_core_v1.EventList, error) {
out := new(k8s_io_api_core_v1.EventList)
err := grpc.Invoke(ctx, "/project.ProjectService/ListEvents", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for ProjectService service
type ProjectServiceServer interface {
// Create a new project token.
CreateToken(context.Context, *ProjectTokenCreateRequest) (*ProjectTokenResponse, error)
// Delete a project token.
DeleteToken(context.Context, *ProjectTokenDeleteRequest) (*EmptyResponse, error)
// Create a new project.
Create(context.Context, *ProjectCreateRequest) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.AppProject, error)
// List returns a list of projects
@@ -333,50 +197,12 @@ type ProjectServiceServer interface {
Update(context.Context, *ProjectUpdateRequest) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.AppProject, error)
// Delete deletes a project
Delete(context.Context, *ProjectQuery) (*EmptyResponse, error)
// ListEvents returns a list of project events
ListEvents(context.Context, *ProjectQuery) (*k8s_io_api_core_v1.EventList, error)
}
func RegisterProjectServiceServer(s *grpc.Server, srv ProjectServiceServer) {
s.RegisterService(&_ProjectService_serviceDesc, srv)
}
func _ProjectService_CreateToken_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ProjectTokenCreateRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ProjectServiceServer).CreateToken(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/project.ProjectService/CreateToken",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ProjectServiceServer).CreateToken(ctx, req.(*ProjectTokenCreateRequest))
}
return interceptor(ctx, in, info, handler)
}
func _ProjectService_DeleteToken_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ProjectTokenDeleteRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ProjectServiceServer).DeleteToken(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/project.ProjectService/DeleteToken",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ProjectServiceServer).DeleteToken(ctx, req.(*ProjectTokenDeleteRequest))
}
return interceptor(ctx, in, info, handler)
}
func _ProjectService_Create_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ProjectCreateRequest)
if err := dec(in); err != nil {
@@ -467,36 +293,10 @@ func _ProjectService_Delete_Handler(srv interface{}, ctx context.Context, dec fu
return interceptor(ctx, in, info, handler)
}
func _ProjectService_ListEvents_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ProjectQuery)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ProjectServiceServer).ListEvents(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/project.ProjectService/ListEvents",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ProjectServiceServer).ListEvents(ctx, req.(*ProjectQuery))
}
return interceptor(ctx, in, info, handler)
}
var _ProjectService_serviceDesc = grpc.ServiceDesc{
ServiceName: "project.ProjectService",
HandlerType: (*ProjectServiceServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateToken",
Handler: _ProjectService_CreateToken_Handler,
},
{
MethodName: "DeleteToken",
Handler: _ProjectService_DeleteToken_Handler,
},
{
MethodName: "Create",
Handler: _ProjectService_Create_Handler,
@@ -517,10 +317,6 @@ var _ProjectService_serviceDesc = grpc.ServiceDesc{
MethodName: "Delete",
Handler: _ProjectService_Delete_Handler,
},
{
MethodName: "ListEvents",
Handler: _ProjectService_ListEvents_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "server/project/project.proto",
@@ -554,106 +350,6 @@ func (m *ProjectCreateRequest) MarshalTo(dAtA []byte) (int, error) {
return i, nil
}
func (m *ProjectTokenDeleteRequest) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *ProjectTokenDeleteRequest) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.Project) > 0 {
dAtA[i] = 0xa
i++
i = encodeVarintProject(dAtA, i, uint64(len(m.Project)))
i += copy(dAtA[i:], m.Project)
}
if len(m.Role) > 0 {
dAtA[i] = 0x12
i++
i = encodeVarintProject(dAtA, i, uint64(len(m.Role)))
i += copy(dAtA[i:], m.Role)
}
if m.Iat != 0 {
dAtA[i] = 0x18
i++
i = encodeVarintProject(dAtA, i, uint64(m.Iat))
}
return i, nil
}
func (m *ProjectTokenCreateRequest) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *ProjectTokenCreateRequest) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.Project) > 0 {
dAtA[i] = 0xa
i++
i = encodeVarintProject(dAtA, i, uint64(len(m.Project)))
i += copy(dAtA[i:], m.Project)
}
if len(m.Description) > 0 {
dAtA[i] = 0x12
i++
i = encodeVarintProject(dAtA, i, uint64(len(m.Description)))
i += copy(dAtA[i:], m.Description)
}
if len(m.Role) > 0 {
dAtA[i] = 0x1a
i++
i = encodeVarintProject(dAtA, i, uint64(len(m.Role)))
i += copy(dAtA[i:], m.Role)
}
if m.ExpiresIn != 0 {
dAtA[i] = 0x20
i++
i = encodeVarintProject(dAtA, i, uint64(m.ExpiresIn))
}
return i, nil
}
func (m *ProjectTokenResponse) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *ProjectTokenResponse) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.Token) > 0 {
dAtA[i] = 0xa
i++
i = encodeVarintProject(dAtA, i, uint64(len(m.Token)))
i += copy(dAtA[i:], m.Token)
}
return i, nil
}
func (m *ProjectQuery) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -743,54 +439,6 @@ func (m *ProjectCreateRequest) Size() (n int) {
return n
}
func (m *ProjectTokenDeleteRequest) Size() (n int) {
var l int
_ = l
l = len(m.Project)
if l > 0 {
n += 1 + l + sovProject(uint64(l))
}
l = len(m.Role)
if l > 0 {
n += 1 + l + sovProject(uint64(l))
}
if m.Iat != 0 {
n += 1 + sovProject(uint64(m.Iat))
}
return n
}
func (m *ProjectTokenCreateRequest) Size() (n int) {
var l int
_ = l
l = len(m.Project)
if l > 0 {
n += 1 + l + sovProject(uint64(l))
}
l = len(m.Description)
if l > 0 {
n += 1 + l + sovProject(uint64(l))
}
l = len(m.Role)
if l > 0 {
n += 1 + l + sovProject(uint64(l))
}
if m.ExpiresIn != 0 {
n += 1 + sovProject(uint64(m.ExpiresIn))
}
return n
}
func (m *ProjectTokenResponse) Size() (n int) {
var l int
_ = l
l = len(m.Token)
if l > 0 {
n += 1 + l + sovProject(uint64(l))
}
return n
}
func (m *ProjectQuery) Size() (n int) {
var l int
_ = l
@@ -913,368 +561,6 @@ func (m *ProjectCreateRequest) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *ProjectTokenDeleteRequest) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowProject
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ProjectTokenDeleteRequest: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ProjectTokenDeleteRequest: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Project", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowProject
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthProject
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Project = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Role", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowProject
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthProject
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Role = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Iat", wireType)
}
m.Iat = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowProject
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Iat |= (int64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipProject(dAtA[iNdEx:])
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthProject
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
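The generated Unmarshal loops above hand-roll protobuf varint decoding: 7 payload bits per byte, a set high bit meaning another byte follows, and the decoded key splitting into `fieldNum = wire >> 3` and `wireType = wire & 0x7`. The standard library's `encoding/binary.Uvarint` performs the same base-128 decode:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// decodeTag reads one protobuf key varint and splits it into field number
// and wire type, mirroring the shift/mask in the generated code above.
func decodeTag(b []byte) (fieldNum int32, wireType int, n int) {
	wire, n := binary.Uvarint(b) // base-128, low groups first, MSB = continue
	return int32(wire >> 3), int(wire & 0x7), n
}

func main() {
	// 0x0a = (field 1 << 3) | wire type 2 (length-delimited) -- the tag the
	// MarshalTo code writes before the Project string field.
	f, wt, n := decodeTag([]byte{0x0a})
	fmt.Println(f, wt, n) // 1 2 1
}
```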
func (m *ProjectTokenCreateRequest) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowProject
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ProjectTokenCreateRequest: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ProjectTokenCreateRequest: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Project", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowProject
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthProject
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Project = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Description", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowProject
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthProject
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Description = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Role", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowProject
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthProject
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Role = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 4:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field ExpiresIn", wireType)
}
m.ExpiresIn = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowProject
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.ExpiresIn |= (int64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipProject(dAtA[iNdEx:])
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthProject
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *ProjectTokenResponse) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowProject
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ProjectTokenResponse: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ProjectTokenResponse: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Token", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowProject
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthProject
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Token = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipProject(dAtA[iNdEx:])
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthProject
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *ProjectQuery) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
@@ -1595,49 +881,36 @@ var (
func init() { proto.RegisterFile("server/project/project.proto", fileDescriptorProject) }
var fileDescriptorProject = []byte{
// 689 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x55, 0x5d, 0x6b, 0x13, 0x4d,
0x14, 0x66, 0x9a, 0xbe, 0x79, 0xed, 0xc4, 0x8f, 0x32, 0xb4, 0x9a, 0xc6, 0x36, 0x86, 0xb9, 0x90,
0x12, 0xec, 0x0c, 0x69, 0x15, 0x8a, 0x77, 0x7e, 0x14, 0x29, 0x78, 0xa1, 0x51, 0x41, 0xf4, 0xa2,
0x4c, 0x37, 0x87, 0xed, 0x36, 0xc9, 0xce, 0x38, 0x3b, 0x5d, 0x2d, 0xa5, 0x20, 0xc5, 0x1b, 0xf5,
0xd2, 0x9f, 0x20, 0xf8, 0x5b, 0xbc, 0x14, 0xfc, 0x03, 0x12, 0xfc, 0x21, 0x32, 0xb3, 0xbb, 0x49,
0xb6, 0xe9, 0x16, 0x84, 0xe0, 0x55, 0xce, 0x9e, 0x39, 0x73, 0x9e, 0xe7, 0x39, 0x1f, 0x19, 0xbc,
0x1c, 0x81, 0x8e, 0x41, 0x73, 0xa5, 0xe5, 0x3e, 0x78, 0x26, 0xfb, 0x65, 0x4a, 0x4b, 0x23, 0xc9,
0xff, 0xe9, 0x67, 0x6d, 0xc1, 0x97, 0xbe, 0x74, 0x3e, 0x6e, 0xad, 0xe4, 0xb8, 0xb6, 0xec, 0x4b,
0xe9, 0xf7, 0x80, 0x0b, 0x15, 0x70, 0x11, 0x86, 0xd2, 0x08, 0x13, 0xc8, 0x30, 0x4a, 0x4f, 0x69,
0x77, 0x33, 0x62, 0x81, 0x74, 0xa7, 0x9e, 0xd4, 0xc0, 0xe3, 0x16, 0xf7, 0x21, 0x04, 0x2d, 0x0c,
0x74, 0xd2, 0x98, 0xdb, 0xa3, 0x98, 0xbe, 0xf0, 0xf6, 0x82, 0x10, 0xf4, 0x21, 0x57, 0x5d, 0xdf,
0x3a, 0x22, 0xde, 0x07, 0x23, 0xce, 0xba, 0xb5, 0xed, 0x07, 0x66, 0xef, 0x60, 0x97, 0x79, 0xb2,
0xcf, 0x85, 0x76, 0xc4, 0xf6, 0x9d, 0xb1, 0xe6, 0x75, 0x46, 0xb7, 0x85, 0x52, 0xbd, 0xc0, 0x73,
0x94, 0x78, 0xdc, 0x12, 0x3d, 0xb5, 0x27, 0x26, 0x52, 0xd1, 0xb7, 0x78, 0xe1, 0x49, 0xa2, 0xf1,
0x81, 0x06, 0x61, 0xa0, 0x0d, 0x6f, 0x0e, 0x20, 0x32, 0x64, 0x07, 0x67, 0xda, 0xab, 0xa8, 0x81,
0x56, 0x2b, 0xeb, 0x5b, 0x6c, 0x04, 0xca, 0x32, 0x50, 0x67, 0xec, 0x78, 0x1d, 0xa6, 0xba, 0x3e,
0xb3, 0xa0, 0x6c, 0x0c, 0x94, 0x65, 0xa0, 0xec, 0x9e, 0x52, 0x29, 0x48, 0x3b, 0xcb, 0x4a, 0x5f,
0xe3, 0xa5, 0xd4, 0xf7, 0x5c, 0x76, 0x21, 0x7c, 0x08, 0x3d, 0x18, 0xa1, 0x57, 0xf3, 0xe8, 0x73,
0xc3, 0x6b, 0x84, 0xe0, 0x59, 0x2d, 0x7b, 0x50, 0x9d, 0x71, 0x6e, 0x67, 0x93, 0x79, 0x5c, 0x0a,
0x84, 0xa9, 0x96, 0x1a, 0x68, 0xb5, 0xd4, 0xb6, 0x26, 0xfd, 0x88, 0xf2, 0xd9, 0xf3, 0xda, 0x8a,
0xb3, 0x37, 0x70, 0xa5, 0x03, 0x91, 0xa7, 0x03, 0x65, 0x05, 0xa4, 0x20, 0xe3, 0xae, 0x21, 0x7e,
0x69, 0x0c, 0x7f, 0x19, 0xcf, 0xc1, 0x3b, 0x15, 0x68, 0x88, 0xb6, 0xc3, 0xea, 0xac, 0x63, 0x31,
0x72, 0xd0, 0x5b, 0xc3, 0x0a, 0x3b, 0x2a, 0x6d, 0x88, 0x94, 0x0c, 0x23, 0x20, 0x0b, 0xf8, 0x3f,
0x63, 0x1d, 0x29, 0x87, 0xe4, 0x83, 0x52, 0x7c, 0x31, 0x8d, 0x7e, 0x7a, 0x00, 0xfa, 0xd0, 0xe2,
0x85, 0xa2, 0x0f, 0x69, 0x90, 0xb3, 0xc7, 0x7a, 0xf6, 0x42, 0x75, 0xfe, 0x65, 0xcf, 0xae, 0xe0,
0x4b, 0x5b, 0x7d, 0x65, 0x0e, 0x33, 0x0d, 0xeb, 0xdf, 0x2e, 0xe0, 0xcb, 0x69, 0xd4, 0x33, 0xd0,
0x71, 0xe0, 0x01, 0xf9, 0x84, 0x70, 0x25, 0x29, 0xb7, 0x93, 0x4b, 0x28, 0xcb, 0x56, 0xaa, 0xb0,
0x21, 0xb5, 0x95, 0x33, 0x63, 0x32, 0x14, 0xba, 0x79, 0xf2, 0xf3, 0xf7, 0x97, 0x99, 0x75, 0xba,
0xe6, 0x56, 0x29, 0x6e, 0x65, 0x4b, 0x1a, 0xf1, 0xa3, 0xd4, 0x3a, 0xe6, 0xb6, 0x11, 0x11, 0x3f,
0xb2, 0x3f, 0xc7, 0xdc, 0x95, 0xf2, 0x2e, 0x6a, 0x92, 0xf7, 0x08, 0x57, 0x92, 0xc9, 0x3a, 0x8f,
0x4c, 0x6e, 0xf6, 0x6a, 0x57, 0x87, 0x31, 0x39, 0xad, 0xf4, 0x8e, 0x63, 0xc1, 0x9b, 0x7f, 0xc7,
0x82, 0x7c, 0x46, 0xb8, 0x9c, 0xa8, 0x25, 0x13, 0x32, 0xf3, 0x55, 0x98, 0x4e, 0xb7, 0xe8, 0x75,
0xc7, 0x73, 0x91, 0xce, 0x9f, 0xe6, 0x69, 0x0b, 0x72, 0x82, 0xf0, 0xec, 0xe3, 0x20, 0x32, 0x64,
0xf1, 0x34, 0x17, 0x37, 0x6e, 0xb5, 0xed, 0xa9, 0x70, 0xb0, 0x08, 0xb4, 0xea, 0x78, 0x10, 0x32,
0xc1, 0x83, 0x7c, 0x40, 0xb8, 0xf4, 0x08, 0x0a, 0x39, 0x4c, 0xa9, 0x0e, 0x37, 0x1c, 0xfe, 0x12,
0xb9, 0x36, 0xd9, 0x2f, 0xbb, 0x45, 0xc7, 0xe4, 0x2b, 0xc2, 0xe5, 0x64, 0x81, 0x26, 0x3b, 0x93,
0x5b, 0xac, 0x69, 0x31, 0xda, 0x70, 0x8c, 0xd6, 0x6a, 0xab, 0x85, 0x13, 0xc4, 0xec, 0x3f, 0x7e,
0x47, 0x18, 0xc1, 0x1c, 0x45, 0xdb, 0xb1, 0x97, 0xb8, 0x9c, 0xcc, 0x67, 0x51, 0xb9, 0x8a, 0xe6,
0x35, 0xd5, 0xdf, 0x2c, 0xd4, 0xbf, 0x8f, 0xb1, 0x6d, 0xd4, 0x56, 0x0c, 0xa1, 0x89, 0x8a, 0xb2,
0xaf, 0xb0, 0xe4, 0x85, 0xb2, 0x0a, 0x99, 0x7d, 0xc5, 0x58, 0xdc, 0x62, 0xee, 0x8a, 0x6b, 0xf2,
0x4d, 0x07, 0xd2, 0x20, 0xf5, 0x02, 0x10, 0x0e, 0x2e, 0xfb, 0xfd, 0xcd, 0xef, 0x83, 0x3a, 0xfa,
0x31, 0xa8, 0xa3, 0x5f, 0x83, 0x3a, 0x7a, 0xd5, 0x3c, 0xef, 0xfd, 0xca, 0x3f, 0xc8, 0xbb, 0x65,
0xf7, 0x4e, 0x6d, 0xfc, 0x09, 0x00, 0x00, 0xff, 0xff, 0x53, 0xd4, 0xec, 0x49, 0xa9, 0x07, 0x00,
0x00,
// 484 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x94, 0x4d, 0x6b, 0x14, 0x31,
0x18, 0xc7, 0x89, 0xd6, 0x15, 0xe3, 0x2b, 0xa1, 0xd5, 0x3a, 0x6d, 0x57, 0x19, 0x2f, 0xb2, 0x68,
0xc2, 0xd6, 0x83, 0xc5, 0x9b, 0x2f, 0x45, 0x0a, 0x1e, 0x74, 0xc5, 0x8b, 0x08, 0x25, 0x9d, 0x79,
0x48, 0xd3, 0xdd, 0x9d, 0xc4, 0x24, 0x3b, 0x52, 0x8a, 0x97, 0xe2, 0xcd, 0xa3, 0x1f, 0xc2, 0xbb,
0x9f, 0xc2, 0xa3, 0xe0, 0x17, 0x90, 0xc5, 0x0f, 0x22, 0xf3, 0xec, 0x44, 0xbb, 0xdd, 0xae, 0xa7,
0xa1, 0xa7, 0x79, 0x26, 0xc9, 0xe4, 0xf7, 0x9b, 0x27, 0x7f, 0x42, 0x57, 0x3d, 0xb8, 0x12, 0x9c,
0xb0, 0xce, 0xec, 0x41, 0x16, 0xe2, 0x93, 0x5b, 0x67, 0x82, 0x61, 0xe7, 0xeb, 0xd7, 0x64, 0x51,
0x19, 0x65, 0x70, 0x4c, 0x54, 0xd5, 0x64, 0x3a, 0x59, 0x55, 0xc6, 0xa8, 0x01, 0x08, 0x69, 0xb5,
0x90, 0x45, 0x61, 0x82, 0x0c, 0xda, 0x14, 0xbe, 0x9e, 0x4d, 0xfb, 0x1b, 0x9e, 0x6b, 0x83, 0xb3,
0x99, 0x71, 0x20, 0xca, 0xae, 0x50, 0x50, 0x80, 0x93, 0x01, 0xf2, 0x7a, 0xcd, 0x96, 0xd2, 0x61,
0x77, 0xb4, 0xc3, 0x33, 0x33, 0x14, 0xd2, 0x21, 0x62, 0x0f, 0x8b, 0xfb, 0x59, 0x2e, 0x6c, 0x5f,
0x55, 0x1f, 0x7b, 0x21, 0xad, 0x1d, 0xe8, 0x0c, 0x37, 0x17, 0x65, 0x57, 0x0e, 0xec, 0xae, 0x9c,
0xd9, 0x2a, 0xfd, 0x40, 0x17, 0x5f, 0x4e, 0x6c, 0x9f, 0x3a, 0x90, 0x01, 0x7a, 0xf0, 0x7e, 0x04,
0x3e, 0xb0, 0x6d, 0x1a, 0xff, 0x62, 0x99, 0xdc, 0x26, 0x77, 0x2f, 0xae, 0x6f, 0xf2, 0x7f, 0x50,
0x1e, 0xa1, 0x58, 0x6c, 0x67, 0x39, 0xb7, 0x7d, 0xc5, 0x2b, 0x28, 0x3f, 0x02, 0xe5, 0x11, 0xca,
0x1f, 0x5b, 0x5b, 0x43, 0x7a, 0x71, 0xd7, 0x34, 0xa5, 0x97, 0xea, 0xb1, 0x57, 0x23, 0x70, 0xfb,
0x8c, 0xd1, 0x85, 0x42, 0x0e, 0x01, 0x69, 0x17, 0x7a, 0x58, 0x1f, 0x91, 0x7b, 0x63, 0xf3, 0xd3,
0x94, 0xbb, 0x4a, 0x2f, 0x6f, 0x0e, 0x6d, 0xd8, 0xef, 0x81, 0xb7, 0xa6, 0xf0, 0xb0, 0xfe, 0xed,
0x1c, 0xbd, 0x52, 0xaf, 0x7a, 0x0d, 0xae, 0xd4, 0x19, 0xb0, 0xcf, 0x84, 0xb6, 0x26, 0x3d, 0x63,
0x6b, 0x3c, 0x06, 0xe0, 0xa4, 0x5e, 0x26, 0xcd, 0xd8, 0xa5, 0x2b, 0x87, 0x3f, 0x7f, 0x7f, 0x39,
0xb3, 0x94, 0x5e, 0xc3, 0x6c, 0x94, 0xdd, 0x98, 0x3a, 0xff, 0x88, 0x74, 0xd8, 0x21, 0xa1, 0x0b,
0x2f, 0xb4, 0x0f, 0x6c, 0xe9, 0xb8, 0x0b, 0xb6, 0x37, 0xd9, 0x6a, 0xc4, 0xa1, 0x22, 0xa4, 0xcb,
0xe8, 0xc1, 0xd8, 0x8c, 0x07, 0xfb, 0x44, 0xe8, 0xd9, 0xe7, 0x30, 0xd7, 0xa1, 0xa1, 0x3e, 0xdc,
0x42, 0xfe, 0x4d, 0x76, 0xe3, 0x38, 0x5f, 0x1c, 0x54, 0xa9, 0xf9, 0xc8, 0xbe, 0x12, 0xda, 0x9a,
0x04, 0x66, 0xf6, 0x64, 0xa6, 0x82, 0xd4, 0x94, 0xd1, 0x43, 0x34, 0xea, 0x26, 0xf7, 0xa2, 0x91,
0x03, 0x6b, 0xbc, 0x0e, 0xc6, 0x69, 0xf0, 0xe2, 0x20, 0x2a, 0x0c, 0x21, 0xc8, 0x5c, 0x06, 0xc9,
0x51, 0xb3, 0x3a, 0xb5, 0x77, 0xb4, 0xf5, 0x0c, 0x06, 0x10, 0x60, 0x5e, 0xcb, 0xae, 0xff, 0x1d,
0x9e, 0xca, 0x63, 0x7a, 0x07, 0x89, 0x6b, 0x9d, 0x95, 0x93, 0x89, 0x08, 0x78, 0xb2, 0xf1, 0x7d,
0xdc, 0x26, 0x3f, 0xc6, 0x6d, 0xf2, 0x6b, 0xdc, 0x26, 0x6f, 0x3b, 0xff, 0xbb, 0x34, 0xa6, 0xef,
0xb3, 0x9d, 0x16, 0x5e, 0x0e, 0x0f, 0xfe, 0x04, 0x00, 0x00, 0xff, 0xff, 0x95, 0xb1, 0xb7, 0x37,
0xe8, 0x04, 0x00, 0x00,
}

View File

@@ -28,94 +28,6 @@ var _ status.Status
var _ = runtime.String
var _ = utilities.NewDoubleArray
func request_ProjectService_CreateToken_0(ctx context.Context, marshaler runtime.Marshaler, client ProjectServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq ProjectTokenCreateRequest
var metadata runtime.ServerMetadata
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
var (
val string
ok bool
err error
_ = err
)
val, ok = pathParams["project"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "project")
}
protoReq.Project, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "project", err)
}
val, ok = pathParams["role"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "role")
}
protoReq.Role, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "role", err)
}
msg, err := client.CreateToken(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
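Each generated handler above repeats the same lookup-and-populate step over `pathParams`: every declared path variable must be present, or the request is rejected before the gRPC client is ever called. A compact sketch of that step, using hypothetical names and plain map access in place of the grpc-gateway runtime:

```go
package main

import "fmt"

// tokenRequest stands in for the generated ProjectTokenCreateRequest.
type tokenRequest struct {
	Project string
	Role    string
}

// fromPathParams is an illustrative helper, not part of the generated
// code: it fails fast on any missing path parameter, like the handlers.
func fromPathParams(pathParams map[string]string) (*tokenRequest, error) {
	var req tokenRequest
	for _, p := range []struct {
		name string
		dst  *string
	}{{"project", &req.Project}, {"role", &req.Role}} {
		val, ok := pathParams[p.name]
		if !ok {
			return nil, fmt.Errorf("missing parameter %s", p.name)
		}
		*p.dst = val
	}
	return &req, nil
}

func main() {
	req, err := fromPathParams(map[string]string{"project": "test", "role": "ci"})
	fmt.Println(req.Project, req.Role, err)
}
```

In the real handlers the conversion goes through `runtime.String` so that typed parameters (ints, bools) get the same treatment; for string fields it is effectively the identity.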
var (
filter_ProjectService_DeleteToken_0 = &utilities.DoubleArray{Encoding: map[string]int{"project": 0, "role": 1}, Base: []int{1, 1, 2, 0, 0}, Check: []int{0, 1, 1, 2, 3}}
)
func request_ProjectService_DeleteToken_0(ctx context.Context, marshaler runtime.Marshaler, client ProjectServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq ProjectTokenDeleteRequest
var metadata runtime.ServerMetadata
var (
val string
ok bool
err error
_ = err
)
val, ok = pathParams["project"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "project")
}
protoReq.Project, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "project", err)
}
val, ok = pathParams["role"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "role")
}
protoReq.Role, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "role", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.URL.Query(), filter_ProjectService_DeleteToken_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.DeleteToken(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func request_ProjectService_Create_0(ctx context.Context, marshaler runtime.Marshaler, client ProjectServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq ProjectCreateRequest
var metadata runtime.ServerMetadata
@@ -231,33 +143,6 @@ func request_ProjectService_Delete_0(ctx context.Context, marshaler runtime.Mars
}
func request_ProjectService_ListEvents_0(ctx context.Context, marshaler runtime.Marshaler, client ProjectServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq ProjectQuery
var metadata runtime.ServerMetadata
var (
val string
ok bool
err error
_ = err
)
val, ok = pathParams["name"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "name")
}
protoReq.Name, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "name", err)
}
msg, err := client.ListEvents(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
// RegisterProjectServiceHandlerFromEndpoint is same as RegisterProjectServiceHandler but
// automatically dials to "endpoint" and closes the connection when "ctx" gets done.
func RegisterProjectServiceHandlerFromEndpoint(ctx context.Context, mux *runtime.ServeMux, endpoint string, opts []grpc.DialOption) (err error) {
@@ -296,64 +181,6 @@ func RegisterProjectServiceHandler(ctx context.Context, mux *runtime.ServeMux, c
// "ProjectServiceClient" to call the correct interceptors.
func RegisterProjectServiceHandlerClient(ctx context.Context, mux *runtime.ServeMux, client ProjectServiceClient) error {
mux.Handle("POST", pattern_ProjectService_CreateToken_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
if cn, ok := w.(http.CloseNotifier); ok {
go func(done <-chan struct{}, closed <-chan bool) {
select {
case <-done:
case <-closed:
cancel()
}
}(ctx.Done(), cn.CloseNotify())
}
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
rctx, err := runtime.AnnotateContext(ctx, mux, req)
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_ProjectService_CreateToken_0(rctx, inboundMarshaler, client, req, pathParams)
ctx = runtime.NewServerMetadataContext(ctx, md)
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
forward_ProjectService_CreateToken_0(ctx, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle("DELETE", pattern_ProjectService_DeleteToken_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
if cn, ok := w.(http.CloseNotifier); ok {
go func(done <-chan struct{}, closed <-chan bool) {
select {
case <-done:
case <-closed:
cancel()
}
}(ctx.Done(), cn.CloseNotify())
}
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
rctx, err := runtime.AnnotateContext(ctx, mux, req)
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_ProjectService_DeleteToken_0(rctx, inboundMarshaler, client, req, pathParams)
ctx = runtime.NewServerMetadataContext(ctx, md)
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
forward_ProjectService_DeleteToken_0(ctx, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle("POST", pattern_ProjectService_Create_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
@@ -499,61 +326,22 @@ func RegisterProjectServiceHandlerClient(ctx context.Context, mux *runtime.Serve
})
mux.Handle("GET", pattern_ProjectService_ListEvents_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
if cn, ok := w.(http.CloseNotifier); ok {
go func(done <-chan struct{}, closed <-chan bool) {
select {
case <-done:
case <-closed:
cancel()
}
}(ctx.Done(), cn.CloseNotify())
}
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
rctx, err := runtime.AnnotateContext(ctx, mux, req)
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_ProjectService_ListEvents_0(rctx, inboundMarshaler, client, req, pathParams)
ctx = runtime.NewServerMetadataContext(ctx, md)
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
forward_ProjectService_ListEvents_0(ctx, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
var (
pattern_ProjectService_CreateToken_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4, 1, 0, 4, 1, 5, 5, 2, 6}, []string{"api", "v1", "projects", "project", "roles", "role", "token"}, ""))
pattern_ProjectService_DeleteToken_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4, 1, 0, 4, 1, 5, 5, 2, 6}, []string{"api", "v1", "projects", "project", "roles", "role", "token"}, ""))
pattern_ProjectService_Create_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "projects"}, ""))
pattern_ProjectService_List_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "projects"}, ""))
pattern_ProjectService_Get_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "projects", "name"}, ""))
pattern_ProjectService_Update_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "projects", "project.metadata.name"}, ""))
pattern_ProjectService_Update_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "repositories", "project.metadata.name"}, ""))
pattern_ProjectService_Delete_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "projects", "name"}, ""))
pattern_ProjectService_ListEvents_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4}, []string{"api", "v1", "projects", "name", "events"}, ""))
pattern_ProjectService_Delete_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "repositories", "name"}, ""))
)
var (
forward_ProjectService_CreateToken_0 = runtime.ForwardResponseMessage
forward_ProjectService_DeleteToken_0 = runtime.ForwardResponseMessage
forward_ProjectService_Create_0 = runtime.ForwardResponseMessage
forward_ProjectService_List_0 = runtime.ForwardResponseMessage
@@ -563,6 +351,4 @@ var (
forward_ProjectService_Update_0 = runtime.ForwardResponseMessage
forward_ProjectService_Delete_0 = runtime.ForwardResponseMessage
forward_ProjectService_ListEvents_0 = runtime.ForwardResponseMessage
)

View File

@@ -9,7 +9,6 @@ package project;
import "gogoproto/gogo.proto";
import "google/api/annotations.proto";
import "k8s.io/api/core/v1/generated.proto";
import "k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto";
import "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1/generated.proto";
@@ -18,27 +17,6 @@ message ProjectCreateRequest {
github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1.AppProject project = 1;
}
// ProjectTokenDeleteRequest defines project token deletion parameters.
message ProjectTokenDeleteRequest {
string project = 1;
string role = 2;
int64 iat = 3;
}
// ProjectTokenCreateRequest defines project token creation parameters.
message ProjectTokenCreateRequest {
string project = 1;
string description = 2;
string role = 3;
// expiresIn represents a duration in seconds
int64 expiresIn = 4;
}
// ProjectTokenResponse wraps the created token or returns an empty string if deleted.
message ProjectTokenResponse {
string token = 1;
}
// ProjectQuery is a query for Project resources
message ProjectQuery {
string name = 1;
@@ -53,19 +31,6 @@ message EmptyResponse {}
// ProjectService
service ProjectService {
// Create a new project token.
rpc CreateToken(ProjectTokenCreateRequest) returns (ProjectTokenResponse) {
option (google.api.http) = {
post: "/api/v1/projects/{project}/roles/{role}/token"
body: "*"
};
}
// Delete an existing project token.
rpc DeleteToken(ProjectTokenDeleteRequest) returns (EmptyResponse) {
option (google.api.http).delete = "/api/v1/projects/{project}/roles/{role}/token";
}
// Create a new project.
rpc Create(ProjectCreateRequest) returns (github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1.AppProject) {
option (google.api.http) = {
@@ -87,18 +52,13 @@ service ProjectService {
// Update updates a project
rpc Update(ProjectUpdateRequest) returns (github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1.AppProject) {
option (google.api.http) = {
put: "/api/v1/projects/{project.metadata.name}"
put: "/api/v1/repositories/{project.metadata.name}"
body: "*"
};
}
// Delete deletes a project
rpc Delete(ProjectQuery) returns (EmptyResponse) {
option (google.api.http).delete = "/api/v1/projects/{name}";
}
// ListEvents returns a list of project events
rpc ListEvents(ProjectQuery) returns (k8s.io.api.core.v1.EventList) {
option (google.api.http).get = "/api/v1/projects/{name}/events";
option (google.api.http).delete = "/api/v1/repositories/{name}";
}
}

View File

@@ -2,7 +2,6 @@ package project
import (
"context"
"fmt"
"testing"
"github.com/stretchr/testify/assert"
@@ -16,19 +15,13 @@ import (
apps "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/fake"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util"
jwtutil "github.com/argoproj/argo-cd/util/jwt"
"github.com/argoproj/argo-cd/util/rbac"
"github.com/argoproj/argo-cd/util/session"
"github.com/argoproj/argo-cd/util/settings"
)
func TestProjectServer(t *testing.T) {
enforcer := rbac.NewEnforcer(fake.NewSimpleClientset(), "default", common.ArgoCDRBACConfigMapName, nil)
enforcer.SetBuiltinPolicy(test.BuiltinPolicy)
enforcer.SetDefaultRole("role:admin")
enforcer.SetClaimsEnforcerFunc(func(rvals ...interface{}) bool {
return true
})
existingProj := v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{Name: "test", Namespace: "default"},
Spec: v1alpha1.AppProjectSpec{
@@ -40,15 +33,13 @@ func TestProjectServer(t *testing.T) {
},
}
policyTemplate := "p, proj:%s:%s, applications, %s, %s/%s, %s"
t.Run("TestRemoveDestinationSuccessful", func(t *testing.T) {
existingApp := v1alpha1.Application{
ObjectMeta: v1.ObjectMeta{Name: "test", Namespace: "default"},
Spec: v1alpha1.ApplicationSpec{Project: "test", Destination: v1alpha1.ApplicationDestination{Namespace: "ns3", Server: "https://server3"}},
}
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(&existingProj, &existingApp), enforcer, util.NewKeyLock(), nil)
projectServer := NewServer("default", apps.NewSimpleClientset(&existingProj, &existingApp), enforcer, util.NewKeyLock())
updatedProj := existingProj.DeepCopy()
updatedProj.Spec.Destinations = updatedProj.Spec.Destinations[1:]
@@ -64,7 +55,7 @@ func TestProjectServer(t *testing.T) {
Spec: v1alpha1.ApplicationSpec{Project: "test", Destination: v1alpha1.ApplicationDestination{Namespace: "ns1", Server: "https://server1"}},
}
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(&existingProj, &existingApp), enforcer, util.NewKeyLock(), nil)
projectServer := NewServer("default", apps.NewSimpleClientset(&existingProj, &existingApp), enforcer, util.NewKeyLock())
updatedProj := existingProj.DeepCopy()
updatedProj.Spec.Destinations = updatedProj.Spec.Destinations[1:]
@@ -81,7 +72,7 @@ func TestProjectServer(t *testing.T) {
Spec: v1alpha1.ApplicationSpec{Project: "test"},
}
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(&existingProj, &existingApp), enforcer, util.NewKeyLock(), nil)
projectServer := NewServer("default", apps.NewSimpleClientset(&existingProj, &existingApp), enforcer, util.NewKeyLock())
updatedProj := existingProj.DeepCopy()
updatedProj.Spec.SourceRepos = []string{}
@@ -97,7 +88,7 @@ func TestProjectServer(t *testing.T) {
Spec: v1alpha1.ApplicationSpec{Project: "test", Source: v1alpha1.ApplicationSource{RepoURL: "https://github.com/argoproj/argo-cd.git"}},
}
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(&existingProj, &existingApp), enforcer, util.NewKeyLock(), nil)
projectServer := NewServer("default", apps.NewSimpleClientset(&existingProj, &existingApp), enforcer, util.NewKeyLock())
updatedProj := existingProj.DeepCopy()
updatedProj.Spec.SourceRepos = []string{}
@@ -109,221 +100,24 @@ func TestProjectServer(t *testing.T) {
})
t.Run("TestDeleteProjectSuccessful", func(t *testing.T) {
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(&existingProj), enforcer, util.NewKeyLock(), nil)
projectServer := NewServer("default", apps.NewSimpleClientset(&existingProj), enforcer, util.NewKeyLock())
_, err := projectServer.Delete(context.Background(), &ProjectQuery{Name: "test"})
assert.Nil(t, err)
})
t.Run("TestDeleteDefaultProjectFailure", func(t *testing.T) {
defaultProj := v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{Name: "default", Namespace: "default"},
Spec: v1alpha1.AppProjectSpec{},
}
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(&defaultProj), enforcer, util.NewKeyLock(), nil)
_, err := projectServer.Delete(context.Background(), &ProjectQuery{Name: defaultProj.Name})
assert.Equal(t, codes.InvalidArgument, grpc.Code(err))
})
t.Run("TestDeleteProjectReferencedByApp", func(t *testing.T) {
existingApp := v1alpha1.Application{
ObjectMeta: v1.ObjectMeta{Name: "test", Namespace: "default"},
Spec: v1alpha1.ApplicationSpec{Project: "test"},
}
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(&existingProj, &existingApp), enforcer, util.NewKeyLock(), nil)
projectServer := NewServer("default", apps.NewSimpleClientset(&existingProj, &existingApp), enforcer, util.NewKeyLock())
_, err := projectServer.Delete(context.Background(), &ProjectQuery{Name: "test"})
assert.NotNil(t, err)
assert.Equal(t, codes.InvalidArgument, grpc.Code(err))
})
t.Run("TestCreateTokenSuccessfully", func(t *testing.T) {
sessionMgr := session.NewSessionManager(&settings.ArgoCDSettings{})
projectWithRole := existingProj.DeepCopy()
tokenName := "testToken"
projectWithRole.Spec.Roles = []v1alpha1.ProjectRole{{Name: tokenName}}
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(projectWithRole), enforcer, util.NewKeyLock(), sessionMgr)
tokenResponse, err := projectServer.CreateToken(context.Background(), &ProjectTokenCreateRequest{Project: projectWithRole.Name, Role: tokenName, ExpiresIn: 1})
assert.Nil(t, err)
claims, err := sessionMgr.Parse(tokenResponse.Token)
assert.Nil(t, err)
mapClaims, err := jwtutil.MapClaims(claims)
subject, ok := mapClaims["sub"].(string)
assert.True(t, ok)
expectedSubject := fmt.Sprintf(JWTTokenSubFormat, projectWithRole.Name, tokenName)
assert.Equal(t, expectedSubject, subject)
assert.Nil(t, err)
})
t.Run("TestDeleteTokenSuccessfully", func(t *testing.T) {
sessionMgr := session.NewSessionManager(&settings.ArgoCDSettings{})
projWithToken := existingProj.DeepCopy()
tokenName := "testToken"
issuedAt := int64(1)
secondIssuedAt := issuedAt + 1
token := v1alpha1.ProjectRole{Name: tokenName, JWTTokens: []v1alpha1.JWTToken{{IssuedAt: issuedAt}, {IssuedAt: secondIssuedAt}}}
projWithToken.Spec.Roles = append(projWithToken.Spec.Roles, token)
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(projWithToken), enforcer, util.NewKeyLock(), sessionMgr)
_, err := projectServer.DeleteToken(context.Background(), &ProjectTokenDeleteRequest{Project: projWithToken.Name, Role: tokenName, Iat: issuedAt})
assert.Nil(t, err)
projWithoutToken, err := projectServer.Get(context.Background(), &ProjectQuery{Name: projWithToken.Name})
assert.Nil(t, err)
assert.Len(t, projWithoutToken.Spec.Roles, 1)
assert.Len(t, projWithoutToken.Spec.Roles[0].JWTTokens, 1)
assert.Equal(t, projWithoutToken.Spec.Roles[0].JWTTokens[0].IssuedAt, secondIssuedAt)
})
t.Run("TestCreateTwoTokensInRoleSuccess", func(t *testing.T) {
sessionMgr := session.NewSessionManager(&settings.ArgoCDSettings{})
projWithToken := existingProj.DeepCopy()
tokenName := "testToken"
token := v1alpha1.ProjectRole{Name: tokenName, JWTTokens: []v1alpha1.JWTToken{{IssuedAt: 1}}}
projWithToken.Spec.Roles = append(projWithToken.Spec.Roles, token)
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(projWithToken), enforcer, util.NewKeyLock(), sessionMgr)
_, err := projectServer.CreateToken(context.Background(), &ProjectTokenCreateRequest{Project: projWithToken.Name, Role: tokenName})
assert.Nil(t, err)
projWithTwoTokens, err := projectServer.Get(context.Background(), &ProjectQuery{Name: projWithToken.Name})
assert.Nil(t, err)
assert.Len(t, projWithTwoTokens.Spec.Roles, 1)
assert.Len(t, projWithTwoTokens.Spec.Roles[0].JWTTokens, 2)
})
t.Run("TestAddWildcardSource", func(t *testing.T) {
proj := existingProj.DeepCopy()
wildSourceRepo := "*"
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos, wildSourceRepo)
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(proj), enforcer, util.NewKeyLock(), nil)
request := &ProjectUpdateRequest{Project: proj}
updatedProj, err := projectServer.Update(context.Background(), request)
assert.Nil(t, err)
assert.Equal(t, wildSourceRepo, updatedProj.Spec.SourceRepos[1])
})
t.Run("TestCreateRolePolicySuccessfully", func(t *testing.T) {
action := "create"
object := "testApplication"
roleName := "testRole"
effect := "allow"
projWithRole := existingProj.DeepCopy()
role := v1alpha1.ProjectRole{Name: roleName, JWTTokens: []v1alpha1.JWTToken{{IssuedAt: 1}}}
policy := fmt.Sprintf(policyTemplate, projWithRole.Name, roleName, action, projWithRole.Name, object, effect)
role.Policies = append(role.Policies, policy)
projWithRole.Spec.Roles = append(projWithRole.Spec.Roles, role)
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(projWithRole), enforcer, util.NewKeyLock(), nil)
request := &ProjectUpdateRequest{Project: projWithRole}
_, err := projectServer.Update(context.Background(), request)
assert.Nil(t, err)
t.Log(projWithRole.Spec.Roles[0].Policies[0])
expectedPolicy := fmt.Sprintf(policyTemplate, projWithRole.Name, role.Name, action, projWithRole.Name, object, effect)
assert.Equal(t, projWithRole.Spec.Roles[0].Policies[0], expectedPolicy)
})
t.Run("TestValidatePolicyDuplicatePolicyFailure", func(t *testing.T) {
action := "create"
object := "testApplication"
roleName := "testRole"
effect := "allow"
projWithRole := existingProj.DeepCopy()
role := v1alpha1.ProjectRole{Name: roleName, JWTTokens: []v1alpha1.JWTToken{{IssuedAt: 1}}}
policy := fmt.Sprintf(policyTemplate, projWithRole.Name, roleName, action, projWithRole.Name, object, effect)
role.Policies = append(role.Policies, policy)
role.Policies = append(role.Policies, policy)
projWithRole.Spec.Roles = append(projWithRole.Spec.Roles, role)
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(projWithRole), enforcer, util.NewKeyLock(), nil)
request := &ProjectUpdateRequest{Project: projWithRole}
_, err := projectServer.Update(context.Background(), request)
expectedErr := fmt.Sprintf("rpc error: code = AlreadyExists desc = policy '%s' already exists for role '%s'", policy, roleName)
assert.EqualError(t, err, expectedErr)
})
t.Run("TestValidateProjectAccessToSeparateProjectObjectFailure", func(t *testing.T) {
action := "create"
object := "testApplication"
roleName := "testRole"
otherProject := "other-project"
effect := "allow"
projWithRole := existingProj.DeepCopy()
role := v1alpha1.ProjectRole{Name: roleName, JWTTokens: []v1alpha1.JWTToken{{IssuedAt: 1}}}
policy := fmt.Sprintf(policyTemplate, projWithRole.Name, roleName, action, otherProject, object, effect)
role.Policies = append(role.Policies, policy)
projWithRole.Spec.Roles = append(projWithRole.Spec.Roles, role)
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(projWithRole), enforcer, util.NewKeyLock(), nil)
request := &ProjectUpdateRequest{Project: projWithRole}
_, err := projectServer.Update(context.Background(), request)
expectedErr := fmt.Sprintf("rpc error: code = InvalidArgument desc = incorrect policy format for '%s' as policies can't grant access to other projects", policy)
assert.EqualError(t, err, expectedErr)
})
t.Run("TestValidateProjectIncorrectProjectInRoleFailure", func(t *testing.T) {
action := "create"
object := "testApplication"
roleName := "testRole"
otherProject := "other-project"
effect := "allow"
projWithRole := existingProj.DeepCopy()
role := v1alpha1.ProjectRole{Name: roleName, JWTTokens: []v1alpha1.JWTToken{{IssuedAt: 1}}}
invalidPolicy := fmt.Sprintf(policyTemplate, otherProject, roleName, action, projWithRole.Name, object, effect)
role.Policies = append(role.Policies, invalidPolicy)
projWithRole.Spec.Roles = append(projWithRole.Spec.Roles, role)
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(projWithRole), enforcer, util.NewKeyLock(), nil)
request := &ProjectUpdateRequest{Project: projWithRole}
_, err := projectServer.Update(context.Background(), request)
expectedErr := fmt.Sprintf("rpc error: code = InvalidArgument desc = incorrect policy format for '%s' as policy can't grant access to other projects", invalidPolicy)
assert.EqualError(t, err, expectedErr)
})
t.Run("TestValidateProjectIncorrectTokenInRoleFailure", func(t *testing.T) {
action := "create"
object := "testApplication"
roleName := "testRole"
otherToken := "other-token"
effect := "allow"
projWithRole := existingProj.DeepCopy()
role := v1alpha1.ProjectRole{Name: roleName, JWTTokens: []v1alpha1.JWTToken{{IssuedAt: 1}}}
invalidPolicy := fmt.Sprintf(policyTemplate, projWithRole.Name, otherToken, action, projWithRole.Name, object, effect)
role.Policies = append(role.Policies, invalidPolicy)
projWithRole.Spec.Roles = append(projWithRole.Spec.Roles, role)
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(projWithRole), enforcer, util.NewKeyLock(), nil)
request := &ProjectUpdateRequest{Project: projWithRole}
_, err := projectServer.Update(context.Background(), request)
expectedErr := fmt.Sprintf("rpc error: code = InvalidArgument desc = incorrect policy format for '%s' as policy can't grant access to other roles", invalidPolicy)
assert.EqualError(t, err, expectedErr)
})
t.Run("TestValidateProjectInvalidEffectFailure", func(t *testing.T) {
action := "create"
object := "testApplication"
roleName := "testRole"
effect := "testEffect"
projWithRole := existingProj.DeepCopy()
role := v1alpha1.ProjectRole{Name: roleName, JWTTokens: []v1alpha1.JWTToken{{IssuedAt: 1}}}
invalidPolicy := fmt.Sprintf(policyTemplate, projWithRole.Name, roleName, action, projWithRole.Name, object, effect)
role.Policies = append(role.Policies, invalidPolicy)
projWithRole.Spec.Roles = append(projWithRole.Spec.Roles, role)
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(projWithRole), enforcer, util.NewKeyLock(), nil)
request := &ProjectUpdateRequest{Project: projWithRole}
_, err := projectServer.Update(context.Background(), request)
expectedErr := fmt.Sprintf("rpc error: code = InvalidArgument desc = incorrect policy format for '%s' as effect can only have value 'allow' or 'deny'", invalidPolicy)
assert.EqualError(t, err, expectedErr)
})
}
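The subtests above all feed variations of the same casbin-style policy line (`p, proj:<project>:<role>, <resource>, <action>, <project>/<object>, <effect>`) through the project server and assert on the rejection message. A minimal sketch of those checks, with a hypothetical `validatePolicy` helper (the real validation lives in the project server's `Update` path):

```go
package main

import (
	"fmt"
	"strings"
)

// validatePolicy sketches the rules exercised by the subtests above: a role's
// policy must have six comma-separated fields, may only reference its own
// project and role, and must use the effect "allow" or "deny".
func validatePolicy(proj, role, policy string) error {
	fields := strings.Split(policy, ",")
	if len(fields) != 6 {
		return fmt.Errorf("incorrect policy format for '%s'", policy)
	}
	for i := range fields {
		fields[i] = strings.TrimSpace(fields[i])
	}
	if fields[1] != fmt.Sprintf("proj:%s:%s", proj, role) {
		return fmt.Errorf("policy can't grant access to other roles or projects")
	}
	if !strings.HasPrefix(fields[4], proj+"/") {
		return fmt.Errorf("policy can't grant access to other projects")
	}
	if effect := fields[5]; effect != "allow" && effect != "deny" {
		return fmt.Errorf("effect can only have value 'allow' or 'deny'")
	}
	return nil
}

func main() {
	// Valid policy for the role's own project.
	fmt.Println(validatePolicy("testProj", "testRole",
		"p, proj:testProj:testRole, applications, get, testProj/*, allow"))
	// Object in another project is rejected, as in the failure subtests.
	fmt.Println(validatePolicy("testProj", "testRole",
		"p, proj:testProj:testRole, applications, get, other-project/*, allow"))
}
```

The helper name and exact error strings are illustrative, not the server's literal implementation.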


@@ -0,0 +1,150 @@
// Code generated by mockery v1.0.0
package mocks
import context "context"
import mock "github.com/stretchr/testify/mock"
import repository "github.com/argoproj/argo-cd/server/repository"
import v1alpha1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
// RepositoryServiceServer is an autogenerated mock type for the RepositoryServiceServer type
type RepositoryServiceServer struct {
mock.Mock
}
// Create provides a mock function with given fields: _a0, _a1
func (_m *RepositoryServiceServer) Create(_a0 context.Context, _a1 *repository.RepoCreateRequest) (*v1alpha1.Repository, error) {
ret := _m.Called(_a0, _a1)
var r0 *v1alpha1.Repository
if rf, ok := ret.Get(0).(func(context.Context, *repository.RepoCreateRequest) *v1alpha1.Repository); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*v1alpha1.Repository)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, *repository.RepoCreateRequest) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Delete provides a mock function with given fields: _a0, _a1
func (_m *RepositoryServiceServer) Delete(_a0 context.Context, _a1 *repository.RepoQuery) (*repository.RepoResponse, error) {
ret := _m.Called(_a0, _a1)
var r0 *repository.RepoResponse
if rf, ok := ret.Get(0).(func(context.Context, *repository.RepoQuery) *repository.RepoResponse); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*repository.RepoResponse)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, *repository.RepoQuery) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Get provides a mock function with given fields: _a0, _a1
func (_m *RepositoryServiceServer) Get(_a0 context.Context, _a1 *repository.RepoQuery) (*v1alpha1.Repository, error) {
ret := _m.Called(_a0, _a1)
var r0 *v1alpha1.Repository
if rf, ok := ret.Get(0).(func(context.Context, *repository.RepoQuery) *v1alpha1.Repository); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*v1alpha1.Repository)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, *repository.RepoQuery) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// List provides a mock function with given fields: _a0, _a1
func (_m *RepositoryServiceServer) List(_a0 context.Context, _a1 *repository.RepoQuery) (*v1alpha1.RepositoryList, error) {
ret := _m.Called(_a0, _a1)
var r0 *v1alpha1.RepositoryList
if rf, ok := ret.Get(0).(func(context.Context, *repository.RepoQuery) *v1alpha1.RepositoryList); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*v1alpha1.RepositoryList)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, *repository.RepoQuery) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// ListKsonnetApps provides a mock function with given fields: _a0, _a1
func (_m *RepositoryServiceServer) ListKsonnetApps(_a0 context.Context, _a1 *repository.RepoKsonnetQuery) (*repository.RepoKsonnetResponse, error) {
ret := _m.Called(_a0, _a1)
var r0 *repository.RepoKsonnetResponse
if rf, ok := ret.Get(0).(func(context.Context, *repository.RepoKsonnetQuery) *repository.RepoKsonnetResponse); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*repository.RepoKsonnetResponse)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, *repository.RepoKsonnetQuery) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Update provides a mock function with given fields: _a0, _a1
func (_m *RepositoryServiceServer) Update(_a0 context.Context, _a1 *repository.RepoUpdateRequest) (*v1alpha1.Repository, error) {
ret := _m.Called(_a0, _a1)
var r0 *v1alpha1.Repository
if rf, ok := ret.Get(0).(func(context.Context, *repository.RepoUpdateRequest) *v1alpha1.Repository); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*v1alpha1.Repository)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, *repository.RepoUpdateRequest) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
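Every method in the generated mock above follows the same mockery dispatch pattern: the stubbed return value is either invoked as a function of the call's arguments (`rf(_a0, _a1)`) or returned as a canned value. A stdlib-only sketch of that pattern, with a hypothetical `resolve` helper standing in for the generated branches:

```go
package main

import "fmt"

// resolve mirrors the generated mock's dispatch: if the recorded return value
// is a function matching the method signature, call it with the arguments;
// otherwise treat it as a canned value and return it directly.
func resolve(ret interface{}, arg string) string {
	if rf, ok := ret.(func(string) string); ok {
		return rf(arg) // computed return, like mockery's `rf(_a0, _a1)` branch
	}
	return ret.(string) // canned return, like mockery's `ret.Get(0)` branch
}

func main() {
	fmt.Println(resolve("fixed", "repo"))
	fmt.Println(resolve(func(n string) string { return "hello " + n }, "repo"))
}
```

In real tests the values come from testify's `Mock.On(...).Return(...)`; the sketch only shows why the generated code does a function-type assertion before falling back to a plain value.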


@@ -1,8 +1,6 @@
package repository
import (
"path"
"path/filepath"
"reflect"
"github.com/ghodss/yaml"
@@ -56,7 +54,7 @@ func (s *Server) List(ctx context.Context, q *RepoQuery) (*appsv1.RepositoryList
}
// ListKsonnetApps returns list of Ksonnet apps in the repo
func (s *Server) ListApps(ctx context.Context, q *RepoAppsQuery) (*RepoAppsResponse, error) {
func (s *Server) ListKsonnetApps(ctx context.Context, q *RepoKsonnetQuery) (*RepoKsonnetResponse, error) {
if !s.enf.EnforceClaims(ctx.Value("claims"), "repositories/apps", "get", q.Repo) {
return nil, grpc.ErrPermissionDenied
}
@@ -77,119 +75,39 @@ func (s *Server) ListApps(ctx context.Context, q *RepoAppsQuery) (*RepoAppsRespo
revision = "HEAD"
}
ksonnetRes, err := repoClient.ListDir(ctx, &repository.ListDirRequest{Repo: repo, Revision: revision, Path: "*app.yaml"})
if err != nil {
return nil, err
}
helmRes, err := repoClient.ListDir(ctx, &repository.ListDirRequest{Repo: repo, Revision: revision, Path: "*Chart.yaml"})
if err != nil {
return nil, err
}
kustomizationRes, err := repoClient.ListDir(ctx, &repository.ListDirRequest{Repo: repo, Revision: revision, Path: "*kustomization.yaml"})
if err != nil {
return nil, err
}
items := make([]*AppInfo, 0)
for i := range ksonnetRes.Items {
items = append(items, &AppInfo{Type: string(repository.AppSourceKsonnet), Path: ksonnetRes.Items[i]})
}
for i := range helmRes.Items {
items = append(items, &AppInfo{Type: string(repository.AppSourceHelm), Path: helmRes.Items[i]})
}
for i := range kustomizationRes.Items {
items = append(items, &AppInfo{Type: string(repository.AppSourceKustomize), Path: kustomizationRes.Items[i]})
}
return &RepoAppsResponse{Items: items}, nil
}
func (s *Server) GetAppDetails(ctx context.Context, q *RepoAppDetailsQuery) (*RepoAppDetailsResponse, error) {
if !s.enf.EnforceClaims(ctx.Value("claims"), "repositories/apps", "get", q.Repo) {
return nil, grpc.ErrPermissionDenied
}
repo, err := s.db.GetRepository(ctx, q.Repo)
if err != nil {
return nil, err
}
// Test the repo
conn, repoClient, err := s.repoClientset.NewRepositoryClient()
if err != nil {
return nil, err
}
defer util.Close(conn)
revision := q.Revision
if revision == "" {
revision = "HEAD"
}
appSpecRes, err := repoClient.GetFile(ctx, &repository.GetFileRequest{
// Verify app.yaml is functional
req := repository.ListDirRequest{
Repo: repo,
Revision: revision,
Path: q.Path,
})
Path: "*app.yaml",
}
getRes, err := repoClient.ListDir(ctx, &req)
if err != nil {
return nil, err
}
appSourceType := repository.IdentifyAppSourceTypeByAppPath(q.Path)
switch appSourceType {
case repository.AppSourceKsonnet:
var appSpec KsonnetAppSpec
appSpec.Path = q.Path
err = yaml.Unmarshal(appSpecRes.Data, &appSpec)
if err != nil {
return nil, err
}
return &RepoAppDetailsResponse{
Type: string(appSourceType),
Ksonnet: &appSpec,
}, nil
case repository.AppSourceHelm:
var appSpec HelmAppSpec
appSpec.Path = q.Path
err = yaml.Unmarshal(appSpecRes.Data, &appSpec)
if err != nil {
return nil, err
}
appDir := path.Dir(q.Path)
valuesFilesRes, err := repoClient.ListDir(ctx, &repository.ListDirRequest{
Revision: revision,
out := make([]*KsonnetAppSpec, 0)
for _, path := range getRes.Items {
getFileRes, err := repoClient.GetFile(ctx, &repository.GetFileRequest{
Repo: repo,
Path: path.Join(appDir, "*values*.yaml"),
Revision: revision,
Path: path,
})
if err != nil {
return nil, err
}
appSpec.ValueFiles = make([]string, len(valuesFilesRes.Items))
for i := range valuesFilesRes.Items {
valueFilePath, err := filepath.Rel(appDir, valuesFilesRes.Items[i])
if err != nil {
return nil, err
}
appSpec.ValueFiles[i] = valueFilePath
var appSpec KsonnetAppSpec
appSpec.Path = path
err = yaml.Unmarshal(getFileRes.Data, &appSpec)
if err == nil && appSpec.Name != "" && len(appSpec.Environments) > 0 {
out = append(out, &appSpec)
}
return &RepoAppDetailsResponse{
Type: string(appSourceType),
Helm: &appSpec,
}, nil
case repository.AppSourceKustomize:
appSpec := KustomizeAppSpec{
Path: q.Path,
}
return &RepoAppDetailsResponse{
Type: string(appSourceType),
Kustomize: &appSpec,
}, nil
}
return nil, status.Errorf(codes.InvalidArgument, "specified application path is not supported")
return &RepoKsonnetResponse{
Items: out,
}, nil
}
// Create creates a repository
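The rewritten handler above asks the repo service's `ListDir` for files matching `"*app.yaml"` and then parses each candidate as a Ksonnet app spec. A minimal sketch of that glob filter using the standard library's matcher (the real matching lives in the repo server, and a hypothetical `matchAppFiles` helper is used here):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// matchAppFiles keeps only the files whose base name matches the glob
// pattern, so a nested guestbook/app.yaml still matches "*app.yaml".
func matchAppFiles(pattern string, files []string) []string {
	var out []string
	for _, f := range files {
		if ok, _ := filepath.Match(pattern, filepath.Base(f)); ok {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	files := []string{"guestbook/app.yaml", "chart/Chart.yaml", "README.md"}
	fmt.Println(matchAppFiles("*app.yaml", files))
}
```

Note that `"*app.yaml"` also matches names like `myapp.yaml`, which is why the handler still unmarshals each hit and checks `appSpec.Name` and `appSpec.Environments` before including it.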

File diff suppressed because it is too large


@@ -46,11 +46,11 @@ func request_RepositoryService_List_0(ctx context.Context, marshaler runtime.Mar
}
var (
filter_RepositoryService_ListApps_0 = &utilities.DoubleArray{Encoding: map[string]int{"repo": 0}, Base: []int{1, 1, 0}, Check: []int{0, 1, 2}}
filter_RepositoryService_ListKsonnetApps_0 = &utilities.DoubleArray{Encoding: map[string]int{"repo": 0}, Base: []int{1, 1, 0}, Check: []int{0, 1, 2}}
)
func request_RepositoryService_ListApps_0(ctx context.Context, marshaler runtime.Marshaler, client RepositoryServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq RepoAppsQuery
func request_RepositoryService_ListKsonnetApps_0(ctx context.Context, marshaler runtime.Marshaler, client RepositoryServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq RepoKsonnetQuery
var metadata runtime.ServerMetadata
var (
@@ -71,57 +71,11 @@ func request_RepositoryService_ListApps_0(ctx context.Context, marshaler runtime
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "repo", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.URL.Query(), filter_RepositoryService_ListApps_0); err != nil {
if err := runtime.PopulateQueryParameters(&protoReq, req.URL.Query(), filter_RepositoryService_ListKsonnetApps_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.ListApps(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
var (
filter_RepositoryService_GetAppDetails_0 = &utilities.DoubleArray{Encoding: map[string]int{"repo": 0, "path": 1}, Base: []int{1, 1, 2, 0, 0}, Check: []int{0, 1, 1, 2, 3}}
)
func request_RepositoryService_GetAppDetails_0(ctx context.Context, marshaler runtime.Marshaler, client RepositoryServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var protoReq RepoAppDetailsQuery
var metadata runtime.ServerMetadata
var (
val string
ok bool
err error
_ = err
)
val, ok = pathParams["repo"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "repo")
}
protoReq.Repo, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "repo", err)
}
val, ok = pathParams["path"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "path")
}
protoReq.Path, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "path", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.URL.Query(), filter_RepositoryService_GetAppDetails_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.GetAppDetails(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
msg, err := client.ListKsonnetApps(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
@@ -299,7 +253,7 @@ func RegisterRepositoryServiceHandlerClient(ctx context.Context, mux *runtime.Se
})
mux.Handle("GET", pattern_RepositoryService_ListApps_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
mux.Handle("GET", pattern_RepositoryService_ListKsonnetApps_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
if cn, ok := w.(http.CloseNotifier); ok {
@@ -317,43 +271,14 @@ func RegisterRepositoryServiceHandlerClient(ctx context.Context, mux *runtime.Se
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_RepositoryService_ListApps_0(rctx, inboundMarshaler, client, req, pathParams)
resp, md, err := request_RepositoryService_ListKsonnetApps_0(rctx, inboundMarshaler, client, req, pathParams)
ctx = runtime.NewServerMetadataContext(ctx, md)
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
forward_RepositoryService_ListApps_0(ctx, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle("GET", pattern_RepositoryService_GetAppDetails_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
if cn, ok := w.(http.CloseNotifier); ok {
go func(done <-chan struct{}, closed <-chan bool) {
select {
case <-done:
case <-closed:
cancel()
}
}(ctx.Done(), cn.CloseNotify())
}
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
rctx, err := runtime.AnnotateContext(ctx, mux, req)
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_RepositoryService_GetAppDetails_0(rctx, inboundMarshaler, client, req, pathParams)
ctx = runtime.NewServerMetadataContext(ctx, md)
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
forward_RepositoryService_GetAppDetails_0(ctx, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
forward_RepositoryService_ListKsonnetApps_0(ctx, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
@@ -479,9 +404,7 @@ func RegisterRepositoryServiceHandlerClient(ctx context.Context, mux *runtime.Se
var (
pattern_RepositoryService_List_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "repositories"}, ""))
pattern_RepositoryService_ListApps_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4}, []string{"api", "v1", "repositories", "repo", "apps"}, ""))
pattern_RepositoryService_GetAppDetails_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4, 1, 0, 4, 1, 5, 5}, []string{"api", "v1", "repositories", "repo", "apps", "path"}, ""))
pattern_RepositoryService_ListKsonnetApps_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4}, []string{"api", "v1", "repositories", "repo", "ksonnet"}, ""))
pattern_RepositoryService_Create_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "repositories"}, ""))
@@ -495,9 +418,7 @@ var (
var (
forward_RepositoryService_List_0 = runtime.ForwardResponseMessage
forward_RepositoryService_ListApps_0 = runtime.ForwardResponseMessage
forward_RepositoryService_GetAppDetails_0 = runtime.ForwardResponseMessage
forward_RepositoryService_ListKsonnetApps_0 = runtime.ForwardResponseMessage
forward_RepositoryService_Create_0 = runtime.ForwardResponseMessage


@@ -11,37 +11,15 @@ import "google/api/annotations.proto";
import "k8s.io/api/core/v1/generated.proto";
import "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1/generated.proto";
// RepoAppsQuery is a query for Repository apps
message RepoAppsQuery {
// RepoKsonnetQuery is a query for Repository contents matching a particular path
message RepoKsonnetQuery {
string repo = 1;
string revision = 2;
}
// AppInfo contains application type and app file path
message AppInfo {
string type = 1;
string path = 2;
}
// RepoAppDetailsQuery contains query information for app details request
message RepoAppDetailsQuery {
string repo = 1;
string revision = 2;
string path = 3;
}
// RepoAppDetailsResponse application details
message RepoAppDetailsResponse {
string type = 1;
KsonnetAppSpec ksonnet = 2;
HelmAppSpec helm = 3;
KustomizeAppSpec kustomize = 4;
}
// RepoAppsResponse contains applications of specified repository
message RepoAppsResponse {
repeated AppInfo items = 1;
// RepoKsonnetResponse is a response for Repository contents matching a particular path
message RepoKsonnetResponse {
repeated KsonnetAppSpec items = 1;
}
// KsonnetAppSpec contains Ksonnet app response
@@ -52,18 +30,6 @@ message KsonnetAppSpec {
map<string, KsonnetEnvironment> environments = 3;
}
// HelmAppSpec contains helm app name and path in source repo
message HelmAppSpec {
string name = 1;
string path = 2;
repeated string valueFiles = 3;
}
// KustomizeAppSpec contains kustomize app name and path in source repo
message KustomizeAppSpec {
string path = 1;
}
message KsonnetEnvironment {
// Name is the user defined name of an environment
string name = 1;
@@ -106,14 +72,9 @@ service RepositoryService {
option (google.api.http).get = "/api/v1/repositories";
}
// ListApps returns list of apps in the repo
rpc ListApps(RepoAppsQuery) returns (RepoAppsResponse) {
option (google.api.http).get = "/api/v1/repositories/{repo}/apps";
}
// GetAppDetails returns application details by given path
rpc GetAppDetails(RepoAppDetailsQuery) returns (RepoAppDetailsResponse) {
option (google.api.http).get = "/api/v1/repositories/{repo}/apps/{path}";
// ListKsonnetApps returns list of Ksonnet apps in the repo
rpc ListKsonnetApps(RepoKsonnetQuery) returns (RepoKsonnetResponse) {
option (google.api.http).get = "/api/v1/repositories/{repo}/ksonnet";
}
// Create creates a repo


@@ -8,36 +8,31 @@ import (
"net/http"
"net/url"
"os"
"regexp"
"strings"
"time"
jwt "github.com/dgrijalva/jwt-go"
"github.com/gobuffalo/packr"
golang_proto "github.com/golang/protobuf/proto"
"github.com/grpc-ecosystem/go-grpc-middleware"
"github.com/grpc-ecosystem/go-grpc-middleware/auth"
"github.com/grpc-ecosystem/go-grpc-middleware/logging/logrus"
grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
grpc_auth "github.com/grpc-ecosystem/go-grpc-middleware/auth"
grpc_logrus "github.com/grpc-ecosystem/go-grpc-middleware/logging/logrus"
"github.com/grpc-ecosystem/grpc-gateway/runtime"
log "github.com/sirupsen/logrus"
"github.com/soheilhy/cmux"
netCtx "golang.org/x/net/context"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/reflection"
"google.golang.org/grpc/status"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"github.com/argoproj/argo-cd"
argocd "github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/server/account"
@@ -53,10 +48,7 @@ import (
"github.com/argoproj/argo-cd/util/dex"
dexutil "github.com/argoproj/argo-cd/util/dex"
grpc_util "github.com/argoproj/argo-cd/util/grpc"
"github.com/argoproj/argo-cd/util/healthz"
jsonutil "github.com/argoproj/argo-cd/util/json"
jwtutil "github.com/argoproj/argo-cd/util/jwt"
projectutil "github.com/argoproj/argo-cd/util/project"
"github.com/argoproj/argo-cd/util/rbac"
util_session "github.com/argoproj/argo-cd/util/session"
settings_util "github.com/argoproj/argo-cd/util/settings"
@@ -70,13 +62,6 @@ var (
ErrNoSession = status.Errorf(codes.Unauthenticated, "no session information")
)
var noCacheHeaders = map[string]string{
"Expires": time.Unix(0, 0).Format(time.RFC1123),
"Cache-Control": "no-cache, private, max-age=0",
"Pragma": "no-cache",
"X-Accel-Expires": "0",
}
var backoff = wait.Backoff{
Steps: 5,
Duration: 500 * time.Millisecond,
@@ -120,23 +105,6 @@ type ArgoCDServerOpts struct {
RepoClientset reposerver.Clientset
}
// initializeDefaultProject creates the default project if it does not already exist
func initializeDefaultProject(opts ArgoCDServerOpts) error {
defaultProj := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: common.DefaultAppProjectName, Namespace: opts.Namespace},
Spec: v1alpha1.AppProjectSpec{
SourceRepos: []string{"*"},
Destinations: []v1alpha1.ApplicationDestination{{Server: "*", Namespace: "*"}},
},
}
_, err := opts.AppClientset.ArgoprojV1alpha1().AppProjects(opts.Namespace).Create(defaultProj)
if apierrors.IsAlreadyExists(err) {
return nil
}
return err
}
// initializeSettings sets default secret settings (password set to hostname)
func initializeSettings(settingsMgr *settings_util.SettingsManager, opts ArgoCDServerOpts) (*settings_util.ArgoCDSettings, error) {
@@ -151,11 +119,8 @@ func initializeSettings(settingsMgr *settings_util.SettingsManager, opts ArgoCDS
// NewServer returns a new instance of the ArgoCD API server
func NewServer(opts ArgoCDServerOpts) *ArgoCDServer {
settingsMgr := settings_util.NewSettingsManager(opts.KubeClientset, opts.Namespace)
settings, err := initializeSettings(settingsMgr, opts)
errors.CheckError(err)
err = initializeDefaultProject(opts)
errors.CheckError(err)
sessionMgr := util_session.NewSessionManager(settings)
enf := rbac.NewEnforcer(opts.KubeClientset, opts.Namespace, common.ArgoCDRBACConfigMapName, nil)
@@ -334,18 +299,11 @@ func (a *ArgoCDServer) useTLS() bool {
func (a *ArgoCDServer) newGRPCServer() *grpc.Server {
var sOpts []grpc.ServerOption
sensitiveMethods := map[string]bool{
"/session.SessionService/Create": true,
"/account.AccountService/UpdatePassword": true,
}
// NOTE: notice we do not configure the gRPC server here with TLS (e.g. grpc.Creds(creds))
// This is because TLS handshaking occurs in cmux handling
sOpts = append(sOpts, grpc.StreamInterceptor(grpc_middleware.ChainStreamServer(
grpc_logrus.StreamServerInterceptor(a.log),
grpc_auth.StreamServerInterceptor(a.authenticate),
grpc_util.PayloadStreamServerInterceptor(a.log, true, func(ctx netCtx.Context, fullMethodName string, servingObject interface{}) bool {
return !sensitiveMethods[fullMethodName]
}),
grpc_util.ErrorCodeStreamServerInterceptor(),
grpc_util.PanicLoggerStreamServerInterceptor(a.log),
)))
@@ -353,13 +311,10 @@ func (a *ArgoCDServer) newGRPCServer() *grpc.Server {
bug21955WorkaroundInterceptor,
grpc_logrus.UnaryServerInterceptor(a.log),
grpc_auth.UnaryServerInterceptor(a.authenticate),
grpc_util.PayloadUnaryServerInterceptor(a.log, true, func(ctx netCtx.Context, fullMethodName string, servingObject interface{}) bool {
return !sensitiveMethods[fullMethodName]
}),
grpc_util.ErrorCodeUnaryServerInterceptor(),
grpc_util.PanicLoggerUnaryServerInterceptor(a.log),
)))
a.enf.SetClaimsEnforcerFunc(EnforceClaims(a.enf, a.AppClientset, a.Namespace))
grpcS := grpc.NewServer(sOpts...)
db := db.NewDB(a.Namespace, a.KubeClientset)
clusterService := cluster.NewServer(db, a.enf)
@@ -367,7 +322,7 @@ func (a *ArgoCDServer) newGRPCServer() *grpc.Server {
sessionService := session.NewServer(a.sessionMgr)
projectLock := util.NewKeyLock()
applicationService := application.NewServer(a.Namespace, a.KubeClientset, a.AppClientset, a.RepoClientset, db, a.enf, projectLock)
projectService := project.NewServer(a.Namespace, a.KubeClientset, a.AppClientset, a.enf, projectLock, a.sessionMgr)
projectService := project.NewServer(a.Namespace, a.AppClientset, a.enf, projectLock)
settingsService := settings.NewServer(a.settingsMgr)
accountService := account.NewServer(a.sessionMgr, a.settingsMgr)
version.RegisterVersionServiceServer(grpcS, &version.Server{})
@@ -438,10 +393,6 @@ func (a *ArgoCDServer) newHTTPServer(ctx context.Context, port int) *http.Server
mustRegisterGWHandler(project.RegisterProjectServiceHandlerFromEndpoint, ctx, gwmux, endpoint, dOpts)
swagger.ServeSwaggerUI(mux, packr.NewBox("."), "/swagger-ui")
healthz.ServeHealthCheck(mux, func() error {
_, err := a.KubeClientset.(*kubernetes.Clientset).ServerVersion()
return err
})
// Dex reverse proxy and client app and OAuth2 login/callback
a.registerDexHandlers(mux)
@@ -463,9 +414,6 @@ func (a *ArgoCDServer) newHTTPServer(ctx context.Context, port int) *http.Server
// serve index.html for non file requests to support HTML5 History API
if acceptHTML && !fileRequest && (request.Method == "GET" || request.Method == "HEAD") {
for k, v := range noCacheHeaders {
writer.Header().Set(k, v)
}
http.ServeFile(writer, request, a.StaticAssetsDir+"/index.html")
} else {
http.ServeFile(writer, request, a.StaticAssetsDir+request.URL.Path)
@@ -544,6 +492,11 @@ func getToken(md metadata.MD) string {
if ok && len(tokens) > 0 {
return tokens[0]
}
// check the legacy key (v0.3.2 and below). 'tokens' was renamed to 'token'
tokens, ok = md["tokens"]
if ok && len(tokens) > 0 {
return tokens[0]
}
// check the HTTP cookie
for _, cookieToken := range md["grpcgateway-cookie"] {
header := http.Header{}
@@ -562,17 +515,22 @@ type bug21955Workaround struct {
handler http.Handler
}
var pathPatters = []*regexp.Regexp{
regexp.MustCompile(`/api/v1/clusters/[^/]+`),
regexp.MustCompile(`/api/v1/repositories/[^/]+`),
regexp.MustCompile(`/api/v1/repositories/[^/]+/apps`),
regexp.MustCompile(`/api/v1/repositories/[^/]+/apps/[^/]+`),
}
func (bf *bug21955Workaround) ServeHTTP(w http.ResponseWriter, r *http.Request) {
for _, pattern := range pathPatters {
if pattern.MatchString(r.URL.RawPath) {
r.URL.Path = r.URL.RawPath
paths := map[string][]string{
"/api/v1/repositories/": {"ksonnet"},
"/api/v1/clusters/": {},
}
for path, subPaths := range paths {
if strings.Index(r.URL.Path, path) > -1 {
postfix := ""
for _, subPath := range subPaths {
if strings.LastIndex(r.URL.Path, subPath) == len(r.URL.Path)-len(subPath) {
postfix = "/" + subPath
r.URL.Path = r.URL.Path[0 : len(r.URL.Path)-len(subPath)-1]
break
}
}
r.URL.Path = path + url.QueryEscape(r.URL.Path[len(path):]) + postfix
break
}
}
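The rewritten `ServeHTTP` above works around grpc-gateway's handling of path parameters that themselves contain slashes: a repo URL embedded in the request path is query-escaped back into a single segment, keeping a known trailing sub-path such as `ksonnet` intact. A stdlib-only sketch of that rewrite, with a hypothetical `rewrite` helper hard-coded to the repositories prefix:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// rewrite escapes everything after the API prefix into one path segment so
// the gateway's router sees a single {repo} parameter, preserving a trailing
// "/ksonnet" sub-path if present.
func rewrite(p string) string {
	const prefix = "/api/v1/repositories/"
	if !strings.HasPrefix(p, prefix) {
		return p
	}
	rest := p[len(prefix):]
	postfix := ""
	if strings.HasSuffix(rest, "/ksonnet") {
		postfix = "/ksonnet"
		rest = strings.TrimSuffix(rest, "/ksonnet")
	}
	return prefix + url.QueryEscape(rest) + postfix
}

func main() {
	fmt.Println(rewrite("/api/v1/repositories/https://github.com/argoproj/argo-cd.git/ksonnet"))
}
```

The gRPC interceptor later calls `url.QueryUnescape` on `RepoKsonnetQuery.Repo` to undo this escaping, which is the other half of the workaround shown in the next hunk.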
@@ -586,23 +544,12 @@ func bug21955WorkaroundInterceptor(ctx context.Context, req interface{}, _ *grpc
return nil, err
}
rq.Repo = repo
} else if rk, ok := req.(*repository.RepoAppsQuery); ok {
} else if rk, ok := req.(*repository.RepoKsonnetQuery); ok {
repo, err := url.QueryUnescape(rk.Repo)
if err != nil {
return nil, err
}
rk.Repo = repo
} else if rdq, ok := req.(*repository.RepoAppDetailsQuery); ok {
repo, err := url.QueryUnescape(rdq.Repo)
if err != nil {
return nil, err
}
path, err := url.QueryUnescape(rdq.Path)
if err != nil {
return nil, err
}
rdq.Repo = repo
rdq.Path = path
} else if ru, ok := req.(*repository.RepoUpdateRequest); ok {
repo, err := url.QueryUnescape(ru.Repo.Repo)
if err != nil {
@@ -624,71 +571,3 @@ func bug21955WorkaroundInterceptor(ctx context.Context, req interface{}, _ *grpc
}
return handler(ctx, req)
}
func EnforceClaims(enf *rbac.Enforcer, a appclientset.Interface, namespace string) func(rvals ...interface{}) bool {
return func(rvals ...interface{}) bool {
claims, ok := rvals[0].(jwt.Claims)
if !ok {
if rvals[0] == nil {
vals := append([]interface{}{""}, rvals[1:]...)
return enf.Enforce(vals...)
}
return enf.Enforce(rvals...)
}
mapClaims, err := jwtutil.MapClaims(claims)
if err != nil {
vals := append([]interface{}{""}, rvals[1:]...)
return enf.Enforce(vals...)
}
groups := jwtutil.GetGroups(mapClaims)
for _, group := range groups {
vals := append([]interface{}{group}, rvals[1:]...)
if enf.Enforcer.Enforce(vals...) {
return true
}
}
user := jwtutil.GetField(mapClaims, "sub")
if strings.HasPrefix(user, "proj:") {
return enforceProjectToken(enf, a, namespace, user, mapClaims, rvals...)
}
vals := append([]interface{}{user}, rvals[1:]...)
return enf.Enforce(vals...)
}
}
func enforceProjectToken(enf *rbac.Enforcer, a appclientset.Interface, namespace string, user string, claims jwt.MapClaims, rvals ...interface{}) bool {
userSplit := strings.Split(user, ":")
if len(userSplit) != 3 {
return false
}
projName := userSplit[1]
tokenName := userSplit[2]
proj, err := a.ArgoprojV1alpha1().AppProjects(namespace).Get(projName, metav1.GetOptions{})
if err != nil {
return false
}
index, err := projectutil.GetRoleIndexByName(proj, tokenName)
if err != nil {
return false
}
if proj.Spec.Roles[index].JWTTokens == nil {
return false
}
iatField, ok := claims["iat"]
if !ok {
return false
}
iatFloat, ok := iatField.(float64)
if !ok {
return false
}
iat := int64(iatFloat)
_, err = projectutil.GetJWTTokenIndexByIssuedAt(proj, index, iat)
if err != nil {
return false
}
vals := append([]interface{}{user}, rvals[1:]...)
return enf.EnforceCustomPolicy(proj.ProjectPoliciesString(), vals...)
}
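`enforceProjectToken` above relies on the project token's JWT `sub` claim having the form `proj:<project>:<role>` (the `proj:` prefix check happens in the caller). A minimal sketch of that subject parsing, with a hypothetical `parseProjectSub` helper:

```go
package main

import (
	"fmt"
	"strings"
)

// parseProjectSub splits a project-token subject of the form
// "proj:<project>:<role>" into its project and role names; anything that
// does not split into exactly three parts is rejected, as in the tests below.
func parseProjectSub(sub string) (project, role string, ok bool) {
	parts := strings.Split(sub, ":")
	if len(parts) != 3 || parts[0] != "proj" {
		return "", "", false
	}
	return parts[1], parts[2], true
}

func main() {
	p, r, ok := parseProjectSub("proj:testProj:testRole")
	fmt.Println(p, r, ok)
	_, _, ok = parseProjectSub("proj:test") // malformed, like the failure test case
	fmt.Println(ok)
}
```

The real function additionally looks up the role on the AppProject, matches the token's `iat` claim against a stored `JWTToken.IssuedAt`, and only then enforces the project's own policy set.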


@@ -1,240 +0,0 @@
package server
import (
"fmt"
"testing"
jwt "github.com/dgrijalva/jwt-go"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
apiv1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes/fake"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
apps "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/fake"
"github.com/argoproj/argo-cd/util/rbac"
)
const (
fakeNamespace = "fake-ns"
builtinPolicyFile = "builtin-policy.csv"
)
func fakeConfigMap() *apiv1.ConfigMap {
cm := apiv1.ConfigMap{
TypeMeta: v1.TypeMeta{
Kind: "ConfigMap",
APIVersion: "v1",
},
ObjectMeta: v1.ObjectMeta{
Name: common.ArgoCDConfigMapName,
Namespace: fakeNamespace,
},
Data: make(map[string]string),
}
return &cm
}
func fakeSecret(policy ...string) *apiv1.Secret {
secret := apiv1.Secret{
TypeMeta: v1.TypeMeta{
Kind: "Secret",
APIVersion: "v1",
},
ObjectMeta: v1.ObjectMeta{
Name: common.ArgoCDSecretName,
Namespace: fakeNamespace,
},
Data: make(map[string][]byte),
}
return &secret
}
func TestEnforceProjectToken(t *testing.T) {
projectName := "testProj"
roleName := "testRole"
subFormat := "proj:%s:%s"
policyTemplate := "p, %s, applications, get, %s/%s, %s"
defaultObject := "*"
defaultEffect := "allow"
defaultTestObject := fmt.Sprintf("%s/%s", projectName, "test")
defaultIssuedAt := int64(1)
defaultSub := fmt.Sprintf(subFormat, projectName, roleName)
defaultPolicy := fmt.Sprintf(policyTemplate, defaultSub, projectName, defaultObject, defaultEffect)
role := v1alpha1.ProjectRole{Name: roleName, Policies: []string{defaultPolicy}, JWTTokens: []v1alpha1.JWTToken{{IssuedAt: defaultIssuedAt}}}
existingProj := v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{Name: projectName, Namespace: fakeNamespace},
Spec: v1alpha1.AppProjectSpec{
Roles: []v1alpha1.ProjectRole{role},
},
}
cm := fakeConfigMap()
secret := fakeSecret()
kubeclientset := fake.NewSimpleClientset(cm, secret)
t.Run("TestEnforceProjectTokenSuccessful", func(t *testing.T) {
s := NewServer(ArgoCDServerOpts{Namespace: fakeNamespace, KubeClientset: kubeclientset, AppClientset: apps.NewSimpleClientset(&existingProj)})
s.newGRPCServer()
claims := jwt.MapClaims{"sub": defaultSub, "iat": defaultIssuedAt}
assert.True(t, s.enf.EnforceClaims(claims, "applications", "get", defaultTestObject))
})
t.Run("TestEnforceProjectTokenWithDiffCreateAtFailure", func(t *testing.T) {
s := NewServer(ArgoCDServerOpts{Namespace: fakeNamespace, KubeClientset: kubeclientset, AppClientset: apps.NewSimpleClientset(&existingProj)})
s.newGRPCServer()
diffCreateAt := defaultIssuedAt + 1
claims := jwt.MapClaims{"sub": defaultSub, "iat": diffCreateAt}
assert.False(t, s.enf.EnforceClaims(claims, "applications", "get", defaultTestObject))
})
t.Run("TestEnforceProjectTokenIncorrectSubFormatFailure", func(t *testing.T) {
s := NewServer(ArgoCDServerOpts{Namespace: fakeNamespace, KubeClientset: kubeclientset, AppClientset: apps.NewSimpleClientset(&existingProj)})
s.newGRPCServer()
invalidSub := "proj:test"
claims := jwt.MapClaims{"sub": invalidSub, "iat": defaultIssuedAt}
assert.False(t, s.enf.EnforceClaims(claims, "applications", "get", defaultTestObject))
})
t.Run("TestEnforceProjectTokenNoTokenFailure", func(t *testing.T) {
s := NewServer(ArgoCDServerOpts{Namespace: fakeNamespace, KubeClientset: kubeclientset, AppClientset: apps.NewSimpleClientset(&existingProj)})
s.newGRPCServer()
nonExistentToken := "fake-token"
invalidSub := fmt.Sprintf(subFormat, projectName, nonExistentToken)
claims := jwt.MapClaims{"sub": invalidSub, "iat": defaultIssuedAt}
assert.False(t, s.enf.EnforceClaims(claims, "applications", "get", defaultTestObject))
})
t.Run("TestEnforceProjectTokenNotJWTTokenFailure", func(t *testing.T) {
proj := existingProj.DeepCopy()
proj.Spec.Roles[0].JWTTokens = nil
s := NewServer(ArgoCDServerOpts{Namespace: fakeNamespace, KubeClientset: kubeclientset, AppClientset: apps.NewSimpleClientset(proj)})
s.newGRPCServer()
claims := jwt.MapClaims{"sub": defaultSub, "iat": defaultIssuedAt}
assert.False(t, s.enf.EnforceClaims(claims, "applications", "get", defaultTestObject))
})
t.Run("TestEnforceProjectTokenExplicitDeny", func(t *testing.T) {
denyApp := "testDenyApp"
allowPolicy := fmt.Sprintf(policyTemplate, defaultSub, projectName, defaultObject, defaultEffect)
denyPolicy := fmt.Sprintf(policyTemplate, defaultSub, projectName, denyApp, "deny")
role := v1alpha1.ProjectRole{Name: roleName, Policies: []string{allowPolicy, denyPolicy}, JWTTokens: []v1alpha1.JWTToken{{IssuedAt: defaultIssuedAt}}}
proj := existingProj.DeepCopy()
proj.Spec.Roles[0] = role
s := NewServer(ArgoCDServerOpts{Namespace: fakeNamespace, KubeClientset: kubeclientset, AppClientset: apps.NewSimpleClientset(proj)})
s.newGRPCServer()
claims := jwt.MapClaims{"sub": defaultSub, "iat": defaultIssuedAt}
allowedObject := fmt.Sprintf("%s/%s", projectName, "test")
denyObject := fmt.Sprintf("%s/%s", projectName, denyApp)
assert.True(t, s.enf.EnforceClaims(claims, "applications", "get", allowedObject))
assert.False(t, s.enf.EnforceClaims(claims, "applications", "get", denyObject))
})
}
func TestEnforceClaims(t *testing.T) {
kubeclientset := fake.NewSimpleClientset(fakeConfigMap())
enf := rbac.NewEnforcer(kubeclientset, fakeNamespace, common.ArgoCDConfigMapName, nil)
enf.SetBuiltinPolicy(box.String(builtinPolicyFile))
enf.SetClaimsEnforcerFunc(EnforceClaims(enf, nil, fakeNamespace))
policy := `
g, org2:team2, role:admin
g, bob, role:admin
`
enf.SetUserPolicy(policy)
allowed := []jwt.Claims{
jwt.MapClaims{"groups": []string{"org1:team1", "org2:team2"}},
jwt.StandardClaims{Subject: "admin"},
}
for _, c := range allowed {
if !assert.True(t, enf.EnforceClaims(c, "applications", "delete", "foo/obj")) {
log.Errorf("%v: expected true, got false", c)
}
}
disallowed := []jwt.Claims{
jwt.MapClaims{"groups": []string{"org3:team3"}},
jwt.StandardClaims{Subject: "nobody"},
}
for _, c := range disallowed {
if !assert.False(t, enf.EnforceClaims(c, "applications", "delete", "foo/obj")) {
log.Errorf("%v: expected false, got true", c)
}
}
}
func TestDefaultRoleWithClaims(t *testing.T) {
kubeclientset := fake.NewSimpleClientset()
enf := rbac.NewEnforcer(kubeclientset, fakeNamespace, common.ArgoCDConfigMapName, nil)
enf.SetBuiltinPolicy(box.String(builtinPolicyFile))
enf.SetClaimsEnforcerFunc(EnforceClaims(enf, nil, fakeNamespace))
claims := jwt.MapClaims{"groups": []string{"org1:team1", "org2:team2"}}
assert.False(t, enf.EnforceClaims(claims, "applications", "get", "foo/bar"))
// after setting the default role to be the read-only role, this should now pass
enf.SetDefaultRole("role:readonly")
assert.True(t, enf.EnforceClaims(claims, "applications", "get", "foo/bar"))
}
func TestEnforceNilClaims(t *testing.T) {
kubeclientset := fake.NewSimpleClientset(fakeConfigMap())
enf := rbac.NewEnforcer(kubeclientset, fakeNamespace, common.ArgoCDConfigMapName, nil)
enf.SetBuiltinPolicy(box.String(builtinPolicyFile))
enf.SetClaimsEnforcerFunc(EnforceClaims(enf, nil, fakeNamespace))
assert.False(t, enf.EnforceClaims(nil, "applications", "get", "foo/obj"))
enf.SetDefaultRole("role:readonly")
assert.True(t, enf.EnforceClaims(nil, "applications", "get", "foo/obj"))
}
func TestInitializingExistingDefaultProject(t *testing.T) {
cm := fakeConfigMap()
secret := fakeSecret()
kubeclientset := fake.NewSimpleClientset(cm, secret)
defaultProj := &v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{Name: common.DefaultAppProjectName, Namespace: fakeNamespace},
Spec: v1alpha1.AppProjectSpec{},
}
appClientSet := apps.NewSimpleClientset(defaultProj)
argoCDOpts := ArgoCDServerOpts{
Namespace: fakeNamespace,
KubeClientset: kubeclientset,
AppClientset: appClientSet,
}
argocd := NewServer(argoCDOpts)
assert.NotNil(t, argocd)
proj, err := appClientSet.ArgoprojV1alpha1().AppProjects(fakeNamespace).Get(common.DefaultAppProjectName, v1.GetOptions{})
assert.Nil(t, err)
assert.NotNil(t, proj)
assert.Equal(t, proj.Name, common.DefaultAppProjectName)
}
func TestInitializingNotExistingDefaultProject(t *testing.T) {
cm := fakeConfigMap()
secret := fakeSecret()
kubeclientset := fake.NewSimpleClientset(cm, secret)
appClientSet := apps.NewSimpleClientset()
argoCDOpts := ArgoCDServerOpts{
Namespace: fakeNamespace,
KubeClientset: kubeclientset,
AppClientset: appClientSet,
}
argocd := NewServer(argoCDOpts)
assert.NotNil(t, argocd)
proj, err := appClientSet.ArgoprojV1alpha1().AppProjects(fakeNamespace).Get(common.DefaultAppProjectName, v1.GetOptions{})
assert.Nil(t, err)
assert.NotNil(t, proj)
assert.Equal(t, proj.Name, common.DefaultAppProjectName)
}
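The tests above build casbin policy lines from the template `p, <subject>, <resource>, <action>, <object>, <effect>`, with the subject encoded as `proj:<project>:<role>`. A small sketch of that string construction (the `buildPolicy` helper is hypothetical, shown only to make the field layout explicit):

```go
package main

import (
	"fmt"
	"strings"
)

// buildPolicy formats a casbin policy line the way the tests do:
//   p, <subject>, <resource>, <action>, <object>, <effect>
// where the subject for a project token is "proj:<project>:<role>".
func buildPolicy(project, role, action, object, effect string) string {
	sub := fmt.Sprintf("proj:%s:%s", project, role)
	return fmt.Sprintf("p, %s, applications, %s, %s/%s, %s", sub, action, project, object, effect)
}

func main() {
	p := buildPolicy("testProj", "testRole", "get", "*", "allow")
	fmt.Println(p)
	// Splitting on ", " recovers the six casbin fields.
	fmt.Println(len(strings.Split(p, ", ")))
}
```

The explicit-deny test works because casbin evaluates a matching `deny` line with higher priority than a matching `allow` for the same subject.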

View File

@@ -2,10 +2,12 @@ package session
import (
"context"
"fmt"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"github.com/argoproj/argo-cd/util/jwt"
sessionmgr "github.com/argoproj/argo-cd/util/session"
)
@@ -21,24 +23,41 @@ func NewServer(mgr *sessionmgr.SessionManager) *Server {
}
}
// Create generates a JWT token signed by ArgoCD intended for web/CLI logins of the admin user
// using username/password
// Create generates a non-expiring JWT token signed by ArgoCD. This endpoint is used in two circumstances:
// 1. Web/CLI logins for local users (i.e. admin), for when SSO is not configured. In this case,
// username/password are supplied.
// 2. CLI logins which completed an OAuth2 login flow but wish to store a permanent token in their config
func (s *Server) Create(ctx context.Context, q *SessionCreateRequest) (*SessionResponse, error) {
if q.Token != "" {
return nil, status.Errorf(codes.Unauthenticated, "token-based session creation no longer supported. please upgrade argocd cli to v0.7+")
}
if q.Username == "" || q.Password == "" {
var tokenString string
var err error
if q.Password != "" {
// first case
err = s.mgr.VerifyUsernamePassword(q.Username, q.Password)
if err != nil {
return nil, err
}
tokenString, err = s.mgr.Create(q.Username)
if err != nil {
return nil, err
}
} else if q.Token != "" {
// second case
claimsIf, err := s.mgr.VerifyToken(q.Token)
if err != nil {
return nil, err
}
claims, err := jwt.MapClaims(claimsIf)
if err != nil {
return nil, err
}
tokenString, err = s.mgr.ReissueClaims(claims)
if err != nil {
return nil, fmt.Errorf("Failed to re-sign claims: %v", err)
}
} else {
return nil, status.Errorf(codes.Unauthenticated, "no credentials supplied")
}
err := s.mgr.VerifyUsernamePassword(q.Username, q.Password)
if err != nil {
return nil, err
}
jwtToken, err := s.mgr.Create(q.Username, 0)
if err != nil {
return nil, err
}
return &SessionResponse{Token: jwtToken}, nil
return &SessionResponse{Token: tokenString}, nil
}
// Delete an authentication cookie from the client. This makes sense only for the Web client.
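The reworked `Create` handler branches on which credential was supplied: a password login verifies the user and mints a fresh token, an existing token is verified and its claims re-signed, and a request with neither is rejected. A stripped-down sketch of that dispatch, with stub verifier/reissuer functions standing in for Argo CD's `SessionManager` (the token strings here are placeholders, not real JWTs):

```go
package main

import (
	"errors"
	"fmt"
)

// createSession mirrors the control flow of Server.Create: prefer
// username/password when a password is present, fall back to re-signing
// an existing token, and reject requests with no credentials at all.
// verify and reissue are hypothetical stand-ins for the session manager.
func createSession(username, password, token string,
	verify func(user, pass string) error,
	reissue func(tok string) (string, error)) (string, error) {
	switch {
	case password != "":
		if err := verify(username, password); err != nil {
			return "", err
		}
		return "signed:" + username, nil
	case token != "":
		return reissue(token)
	default:
		return "", errors.New("no credentials supplied")
	}
}

func main() {
	verify := func(user, pass string) error {
		if user == "admin" && pass == "secret" {
			return nil
		}
		return errors.New("bad credentials")
	}
	reissue := func(tok string) (string, error) { return "resigned:" + tok, nil }

	tok, _ := createSession("admin", "secret", "", verify, reissue)
	fmt.Println(tok)
	tok, _ = createSession("", "", "old-token", verify, reissue)
	fmt.Println(tok)
}
```

Checking the password branch first means a request carrying both credentials behaves like a fresh login, matching the order of the `if`/`else if` chain in the handler above.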

View File

@@ -291,6 +291,37 @@
}
}
},
"/api/v1/applications/{name}/pods/{podName}": {
"delete": {
"tags": [
"ApplicationService"
],
"summary": "DeletePod deletes the specified pod of an application",
"operationId": "DeletePod",
"parameters": [
{
"type": "string",
"name": "name",
"in": "path",
"required": true
},
{
"type": "string",
"name": "podName",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/applicationApplicationResponse"
}
}
}
}
},
"/api/v1/applications/{name}/pods/{podName}/logs": {
"get": {
"tags": [
@@ -359,31 +390,6 @@
}
}
},
"/api/v1/applications/{name}/resource": {
"delete": {
"tags": [
"ApplicationService"
],
"summary": "DeleteResource deletes a single application resource",
"operationId": "DeleteResource",
"parameters": [
{
"type": "string",
"name": "name",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/applicationApplicationResponse"
}
}
}
}
},
"/api/v1/applications/{name}/rollback": {
"post": {
"tags": [
@@ -532,33 +538,6 @@
}
}
},
"/api/v1/clusters-kubeconfig": {
"post": {
"tags": [
"ClusterService"
],
"summary": "CreateFromKubeConfig installs the argocd-manager service account into the cluster specified in the given kubeconfig and context",
"operationId": "CreateFromKubeConfig",
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/clusterClusterCreateFromKubeConfigRequest"
}
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/v1alpha1Cluster"
}
}
}
}
},
"/api/v1/clusters/{cluster.server}": {
"put": {
"tags": [
@@ -712,155 +691,6 @@
}
}
}
},
"delete": {
"tags": [
"ProjectService"
],
"summary": "Delete deletes a project",
"operationId": "DeleteMixin3",
"parameters": [
{
"type": "string",
"name": "name",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/projectEmptyResponse"
}
}
}
}
},
"/api/v1/projects/{name}/events": {
"get": {
"tags": [
"ProjectService"
],
"summary": "ListEvents returns a list of project events",
"operationId": "ListEvents",
"parameters": [
{
"type": "string",
"name": "name",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/v1EventList"
}
}
}
}
},
"/api/v1/projects/{project.metadata.name}": {
"put": {
"tags": [
"ProjectService"
],
"summary": "Update updates a project",
"operationId": "UpdateMixin3",
"parameters": [
{
"type": "string",
"name": "project.metadata.name",
"in": "path",
"required": true
},
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/projectProjectUpdateRequest"
}
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/v1alpha1AppProject"
}
}
}
}
},
"/api/v1/projects/{project}/roles/{role}/token": {
"post": {
"tags": [
"ProjectService"
],
"summary": "Create a new project token.",
"operationId": "CreateToken",
"parameters": [
{
"type": "string",
"name": "project",
"in": "path",
"required": true
},
{
"type": "string",
"name": "role",
"in": "path",
"required": true
},
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/projectProjectTokenCreateRequest"
}
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/projectProjectTokenResponse"
}
}
}
},
"delete": {
"tags": [
"ProjectService"
],
"summary": "Delete a project token.",
"operationId": "DeleteToken",
"parameters": [
{
"type": "string",
"name": "project",
"in": "path",
"required": true
},
{
"type": "string",
"name": "role",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/projectEmptyResponse"
}
}
}
}
},
"/api/v1/repositories": {
@@ -912,6 +742,64 @@
}
}
},
"/api/v1/repositories/{name}": {
"delete": {
"tags": [
"ProjectService"
],
"summary": "Delete deletes a project",
"operationId": "DeleteMixin3",
"parameters": [
{
"type": "string",
"name": "name",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/projectEmptyResponse"
}
}
}
}
},
"/api/v1/repositories/{project.metadata.name}": {
"put": {
"tags": [
"ProjectService"
],
"summary": "Update updates a project",
"operationId": "UpdateMixin3",
"parameters": [
{
"type": "string",
"name": "project.metadata.name",
"in": "path",
"required": true
},
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/projectProjectUpdateRequest"
}
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/v1alpha1AppProject"
}
}
}
}
},
"/api/v1/repositories/{repo.repo}": {
"put": {
"tags": [
@@ -993,13 +881,13 @@
}
}
},
"/api/v1/repositories/{repo}/apps": {
"/api/v1/repositories/{repo}/ksonnet": {
"get": {
"tags": [
"RepositoryService"
],
"summary": "ListApps returns list of apps in the repo",
"operationId": "ListApps",
"summary": "ListKsonnetApps returns list of Ksonnet apps in the repo",
"operationId": "ListKsonnetApps",
"parameters": [
{
"type": "string",
@@ -1017,43 +905,7 @@
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/repositoryRepoAppsResponse"
}
}
}
}
},
"/api/v1/repositories/{repo}/apps/{path}": {
"get": {
"tags": [
"RepositoryService"
],
"summary": "GetAppDetails returns application details by given path",
"operationId": "GetAppDetails",
"parameters": [
{
"type": "string",
"name": "repo",
"in": "path",
"required": true
},
{
"type": "string",
"name": "path",
"in": "path",
"required": true
},
{
"type": "string",
"name": "revision",
"in": "query"
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/repositoryRepoAppDetailsResponse"
"$ref": "#/definitions/repositoryRepoKsonnetResponse"
}
}
}
@@ -1330,25 +1182,6 @@
"applicationOperationTerminateResponse": {
"type": "object"
},
"clusterClusterCreateFromKubeConfigRequest": {
"type": "object",
"properties": {
"context": {
"type": "string"
},
"inCluster": {
"type": "boolean",
"format": "boolean"
},
"kubeconfig": {
"type": "string"
},
"upsert": {
"type": "boolean",
"format": "boolean"
}
}
},
"clusterClusterResponse": {
"type": "object"
},
@@ -1397,35 +1230,6 @@
}
}
},
"projectProjectTokenCreateRequest": {
"description": "ProjectTokenCreateRequest defines project token creation parameters.",
"type": "object",
"properties": {
"description": {
"type": "string"
},
"expiresIn": {
"type": "string",
"format": "int64",
"title": "expiresIn represents a duration in seconds"
},
"project": {
"type": "string"
},
"role": {
"type": "string"
}
}
},
"projectProjectTokenResponse": {
"description": "ProjectTokenResponse wraps the created token or returns an empty string if deleted.",
"type": "object",
"properties": {
"token": {
"type": "string"
}
}
},
"projectProjectUpdateRequest": {
"type": "object",
"properties": {
@@ -1434,36 +1238,6 @@
}
}
},
"repositoryAppInfo": {
"type": "object",
"title": "AppInfo contains application type and app file path",
"properties": {
"path": {
"type": "string"
},
"type": {
"type": "string"
}
}
},
"repositoryHelmAppSpec": {
"type": "object",
"title": "HelmAppSpec contains helm app name and path in source repo",
"properties": {
"name": {
"type": "string"
},
"path": {
"type": "string"
},
"valueFiles": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"repositoryKsonnetAppSpec": {
"type": "object",
"title": "KsonnetAppSpec contains Ksonnet app response\nThis roughly reflects: ksonnet/ksonnet/metadata/app/schema.go",
@@ -1515,15 +1289,6 @@
}
}
},
"repositoryKustomizeAppSpec": {
"type": "object",
"title": "KustomizeAppSpec contains kustomize app name and path in source repo",
"properties": {
"path": {
"type": "string"
}
}
},
"repositoryManifestResponse": {
"type": "object",
"properties": {
@@ -1550,32 +1315,14 @@
}
}
},
"repositoryRepoAppDetailsResponse": {
"repositoryRepoKsonnetResponse": {
"type": "object",
"title": "RepoAppDetailsResponse application details",
"properties": {
"helm": {
"$ref": "#/definitions/repositoryHelmAppSpec"
},
"ksonnet": {
"$ref": "#/definitions/repositoryKsonnetAppSpec"
},
"kustomize": {
"$ref": "#/definitions/repositoryKustomizeAppSpec"
},
"type": {
"type": "string"
}
}
},
"repositoryRepoAppsResponse": {
"type": "object",
"title": "RepoAppsResponse contains applications of specified repository",
"title": "RepoKsonnetResponse is a response for Repository contents matching a particular path",
"properties": {
"items": {
"type": "array",
"items": {
"$ref": "#/definitions/repositoryAppInfo"
"$ref": "#/definitions/repositoryKsonnetAppSpec"
}
}
}
@@ -1979,13 +1726,7 @@
"$ref": "#/definitions/v1alpha1ApplicationDestination"
}
},
"roles": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1ProjectRole"
}
},
"sourceRepos": {
"sources": {
"type": "array",
"title": "SourceRepos contains list of git repository URLs which can be used for deployment",
"items": {
@@ -2061,33 +1802,26 @@
"properties": {
"componentParameterOverrides": {
"type": "array",
"title": "ComponentParameterOverrides are a list of parameter override values",
"title": "Environment parameter override values",
"items": {
"$ref": "#/definitions/v1alpha1ComponentParameter"
}
},
"environment": {
"type": "string",
"title": "Environment is a ksonnet application environment name"
"description": "Environment is a ksonnet application environment name.",
"type": "string"
},
"path": {
"type": "string",
"title": "Path is a directory path within the repository containing a"
"description": "Path is a directory path within repository which contains ksonnet application.",
"type": "string"
},
"repoURL": {
"type": "string",
"title": "RepoURL is the git repository URL of the application manifests"
"description": "RepoURL is the repository URL containing the ksonnet application.",
"type": "string"
},
"targetRevision": {
"type": "string",
"title": "TargetRevision defines the commit, tag, or branch in which to sync the application to.\nIf omitted, will sync to HEAD"
},
"valuesFiles": {
"type": "array",
"title": "ValuesFiles is a list of Helm values files to use when generating a template",
"items": {
"type": "string"
}
}
}
},
@@ -2327,20 +2061,6 @@
}
}
},
"v1alpha1JWTToken": {
"type": "object",
"title": "JWTToken holds the issuedAt and expiresAt values of a token",
"properties": {
"exp": {
"type": "string",
"format": "int64"
},
"iat": {
"type": "string",
"format": "int64"
}
}
},
"v1alpha1Operation": {
"description": "Operation contains requested operation parameters.",
"type": "object",
@@ -2382,31 +2102,6 @@
}
}
},
"v1alpha1ProjectRole": {
"type": "object",
"title": "ProjectRole represents a role that has access to a project",
"properties": {
"description": {
"type": "string"
},
"jwtTokens": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1JWTToken"
}
},
"name": {
"type": "string"
},
"policies": {
"description": "Policies Stores a list of casbin formated strings that define access policies for the role in the project.",
"type": "array",
"items": {
"type": "string"
}
}
}
},
"v1alpha1Repository": {
"type": "object",
"title": "Repository is a Git repository holding application configurations",
@@ -2650,9 +2345,6 @@
"GoVersion": {
"type": "string"
},
"KsonnetVersion": {
"type": "string"
},
"Platform": {
"type": "string"
},

View File

@@ -2,7 +2,6 @@ package version
import (
argocd "github.com/argoproj/argo-cd"
ksutil "github.com/argoproj/argo-cd/util/ksonnet"
"github.com/golang/protobuf/ptypes/empty"
"golang.org/x/net/context"
)
@@ -12,20 +11,15 @@ type Server struct{}
// Version returns the version of the API server
func (s *Server) Version(context.Context, *empty.Empty) (*VersionMessage, error) {
vers := argocd.GetVersion()
ksonnetVersion, err := ksutil.KsonnetVersion()
if err != nil {
return nil, err
}
return &VersionMessage{
Version: vers.Version,
BuildDate: vers.BuildDate,
GitCommit: vers.GitCommit,
GitTag: vers.GitTag,
GitTreeState: vers.GitTreeState,
GoVersion: vers.GoVersion,
Compiler: vers.Compiler,
Platform: vers.Platform,
KsonnetVersion: ksonnetVersion,
Version: vers.Version,
BuildDate: vers.BuildDate,
GitCommit: vers.GitCommit,
GitTag: vers.GitTag,
GitTreeState: vers.GitTreeState,
GoVersion: vers.GoVersion,
Compiler: vers.Compiler,
Platform: vers.Platform,
}, nil
}

View File

@@ -40,15 +40,14 @@ const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
// VersionMessage represents version of the ArgoCD API server
type VersionMessage struct {
Version string `protobuf:"bytes,1,opt,name=Version,proto3" json:"Version,omitempty"`
BuildDate string `protobuf:"bytes,2,opt,name=BuildDate,proto3" json:"BuildDate,omitempty"`
GitCommit string `protobuf:"bytes,3,opt,name=GitCommit,proto3" json:"GitCommit,omitempty"`
GitTag string `protobuf:"bytes,4,opt,name=GitTag,proto3" json:"GitTag,omitempty"`
GitTreeState string `protobuf:"bytes,5,opt,name=GitTreeState,proto3" json:"GitTreeState,omitempty"`
GoVersion string `protobuf:"bytes,6,opt,name=GoVersion,proto3" json:"GoVersion,omitempty"`
Compiler string `protobuf:"bytes,7,opt,name=Compiler,proto3" json:"Compiler,omitempty"`
Platform string `protobuf:"bytes,8,opt,name=Platform,proto3" json:"Platform,omitempty"`
KsonnetVersion string `protobuf:"bytes,9,opt,name=KsonnetVersion,proto3" json:"KsonnetVersion,omitempty"`
Version string `protobuf:"bytes,1,opt,name=Version,proto3" json:"Version,omitempty"`
BuildDate string `protobuf:"bytes,2,opt,name=BuildDate,proto3" json:"BuildDate,omitempty"`
GitCommit string `protobuf:"bytes,3,opt,name=GitCommit,proto3" json:"GitCommit,omitempty"`
GitTag string `protobuf:"bytes,4,opt,name=GitTag,proto3" json:"GitTag,omitempty"`
GitTreeState string `protobuf:"bytes,5,opt,name=GitTreeState,proto3" json:"GitTreeState,omitempty"`
GoVersion string `protobuf:"bytes,6,opt,name=GoVersion,proto3" json:"GoVersion,omitempty"`
Compiler string `protobuf:"bytes,7,opt,name=Compiler,proto3" json:"Compiler,omitempty"`
Platform string `protobuf:"bytes,8,opt,name=Platform,proto3" json:"Platform,omitempty"`
}
func (m *VersionMessage) Reset() { *m = VersionMessage{} }
@@ -112,13 +111,6 @@ func (m *VersionMessage) GetPlatform() string {
return ""
}
func (m *VersionMessage) GetKsonnetVersion() string {
if m != nil {
return m.KsonnetVersion
}
return ""
}
func init() {
proto.RegisterType((*VersionMessage)(nil), "version.VersionMessage")
}
@@ -260,12 +252,6 @@ func (m *VersionMessage) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintVersion(dAtA, i, uint64(len(m.Platform)))
i += copy(dAtA[i:], m.Platform)
}
if len(m.KsonnetVersion) > 0 {
dAtA[i] = 0x4a
i++
i = encodeVarintVersion(dAtA, i, uint64(len(m.KsonnetVersion)))
i += copy(dAtA[i:], m.KsonnetVersion)
}
return i, nil
}
@@ -313,10 +299,6 @@ func (m *VersionMessage) Size() (n int) {
if l > 0 {
n += 1 + l + sovVersion(uint64(l))
}
l = len(m.KsonnetVersion)
if l > 0 {
n += 1 + l + sovVersion(uint64(l))
}
return n
}
@@ -594,35 +576,6 @@ func (m *VersionMessage) Unmarshal(dAtA []byte) error {
}
m.Platform = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 9:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field KsonnetVersion", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowVersion
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthVersion
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.KsonnetVersion = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipVersion(dAtA[iNdEx:])
@@ -752,27 +705,26 @@ var (
func init() { proto.RegisterFile("server/version/version.proto", fileDescriptorVersion) }
var fileDescriptorVersion = []byte{
// 343 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x5c, 0x92, 0xcf, 0x4a, 0xc3, 0x40,
0x10, 0xc6, 0x49, 0xd5, 0xfe, 0x59, 0x4a, 0x0f, 0x8b, 0xd4, 0x25, 0x96, 0x22, 0x3d, 0x88, 0x08,
0x26, 0xa0, 0x17, 0xcf, 0xad, 0xd2, 0x83, 0x08, 0xc5, 0x8a, 0x07, 0x6f, 0x9b, 0x76, 0x1a, 0x57,
0x92, 0x4c, 0xd8, 0x4c, 0x0b, 0x5e, 0x7d, 0x05, 0x5f, 0xc0, 0xc7, 0xf1, 0x28, 0xf8, 0x02, 0x52,
0x7c, 0x10, 0xc9, 0x26, 0x1b, 0x89, 0xa7, 0xcc, 0xf7, 0xfd, 0x86, 0x2f, 0xe1, 0x9b, 0xb0, 0x41,
0x06, 0x7a, 0x03, 0xda, 0xdf, 0x80, 0xce, 0x14, 0x26, 0xf6, 0xe9, 0xa5, 0x1a, 0x09, 0x79, 0xab,
0x94, 0xee, 0x20, 0x44, 0x0c, 0x23, 0xf0, 0x65, 0xaa, 0x7c, 0x99, 0x24, 0x48, 0x92, 0x14, 0x26,
0x59, 0xb1, 0xe6, 0x1e, 0x96, 0xd4, 0xa8, 0x60, 0xbd, 0xf2, 0x21, 0x4e, 0xe9, 0xa5, 0x80, 0xa3,
0xf7, 0x06, 0xeb, 0x3d, 0x14, 0x31, 0xb7, 0x90, 0x65, 0x32, 0x04, 0x2e, 0x58, 0xab, 0x74, 0x84,
0x73, 0xe4, 0x9c, 0x74, 0xee, 0xac, 0xe4, 0x03, 0xd6, 0x19, 0xaf, 0x55, 0xb4, 0xbc, 0x92, 0x04,
0xa2, 0x61, 0xd8, 0x9f, 0x91, 0xd3, 0xa9, 0xa2, 0x09, 0xc6, 0xb1, 0x22, 0xb1, 0x53, 0xd0, 0xca,
0xe0, 0x7d, 0xd6, 0x9c, 0x2a, 0xba, 0x97, 0xa1, 0xd8, 0x35, 0xa8, 0x54, 0x7c, 0xc4, 0xba, 0xf9,
0xa4, 0x01, 0xe6, 0x94, 0xc7, 0xee, 0x19, 0x5a, 0xf3, 0x4c, 0x32, 0xda, 0x6f, 0x6a, 0x96, 0xc9,
0xd6, 0xe0, 0x2e, 0x6b, 0x4f, 0x30, 0x4e, 0x55, 0x04, 0x5a, 0xb4, 0x0c, 0xac, 0x74, 0xce, 0x66,
0x91, 0xa4, 0x15, 0xea, 0x58, 0xb4, 0x0b, 0x66, 0x35, 0x3f, 0x66, 0xbd, 0x9b, 0x0c, 0x93, 0x04,
0xc8, 0x46, 0x77, 0xcc, 0xc6, 0x3f, 0xf7, 0x3c, 0xa8, 0x1a, 0x9a, 0x83, 0xde, 0xa8, 0x05, 0xf0,
0x59, 0xd5, 0x10, 0xef, 0x7b, 0x45, 0xbb, 0x9e, 0x6d, 0xd7, 0xbb, 0xce, 0xdb, 0x75, 0x0f, 0x3c,
0x7b, 0xab, 0x7a, 0xbb, 0xa3, 0xfd, 0xd7, 0xaf, 0x9f, 0xb7, 0x46, 0x8f, 0x77, 0xcd, 0xb5, 0xca,
0xa5, 0xf1, 0xe5, 0xc7, 0x76, 0xe8, 0x7c, 0x6e, 0x87, 0xce, 0xf7, 0x76, 0xe8, 0x3c, 0x9e, 0x86,
0x8a, 0x9e, 0xd6, 0x81, 0xb7, 0xc0, 0xd8, 0x97, 0x3a, 0xc4, 0x54, 0xe3, 0xb3, 0x19, 0xce, 0x16,
0x4b, 0xbf, 0xfe, 0x4b, 0x04, 0x4d, 0xf3, 0xe2, 0x8b, 0xdf, 0x00, 0x00, 0x00, 0xff, 0xff, 0x82,
0x85, 0x6d, 0x31, 0x2b, 0x02, 0x00, 0x00,
// 327 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x54, 0x91, 0xcd, 0x4a, 0xfb, 0x40,
0x14, 0xc5, 0x49, 0xff, 0x7f, 0xfb, 0x31, 0x94, 0x2e, 0x06, 0xa9, 0x43, 0x2c, 0x45, 0xba, 0x12,
0xc1, 0x04, 0x74, 0xe3, 0xba, 0x55, 0xba, 0x12, 0x8a, 0x15, 0x17, 0xee, 0x26, 0xed, 0x6d, 0x1c,
0xc9, 0xe4, 0x86, 0xc9, 0xb4, 0xe0, 0xd6, 0x57, 0xf0, 0xa5, 0x5c, 0x0a, 0xbe, 0x80, 0x14, 0x1f,
0xc2, 0xa5, 0x64, 0x3e, 0x22, 0x59, 0x65, 0xce, 0xf9, 0x0d, 0x27, 0x73, 0xcf, 0x25, 0xa3, 0x12,
0xd4, 0x0e, 0x54, 0xbc, 0x03, 0x55, 0x0a, 0xcc, 0xfd, 0x37, 0x2a, 0x14, 0x6a, 0xa4, 0x1d, 0x27,
0xc3, 0x51, 0x8a, 0x98, 0x66, 0x10, 0xf3, 0x42, 0xc4, 0x3c, 0xcf, 0x51, 0x73, 0x2d, 0x30, 0x2f,
0xed, 0xb5, 0xf0, 0xd8, 0x51, 0xa3, 0x92, 0xed, 0x26, 0x06, 0x59, 0xe8, 0x17, 0x0b, 0x27, 0x3f,
0x01, 0x19, 0x3c, 0xd8, 0x98, 0x5b, 0x28, 0x4b, 0x9e, 0x02, 0x65, 0xa4, 0xe3, 0x1c, 0x16, 0x9c,
0x04, 0xa7, 0xbd, 0x3b, 0x2f, 0xe9, 0x88, 0xf4, 0xa6, 0x5b, 0x91, 0xad, 0xaf, 0xb9, 0x06, 0xd6,
0x32, 0xec, 0xcf, 0xa8, 0xe8, 0x5c, 0xe8, 0x19, 0x4a, 0x29, 0x34, 0xfb, 0x67, 0x69, 0x6d, 0xd0,
0x21, 0x69, 0xcf, 0x85, 0xbe, 0xe7, 0x29, 0xfb, 0x6f, 0x90, 0x53, 0x74, 0x42, 0xfa, 0xd5, 0x49,
0x01, 0x2c, 0x75, 0x15, 0x7b, 0x60, 0x68, 0xc3, 0x33, 0xc9, 0xe8, 0xdf, 0xd4, 0x76, 0xc9, 0xde,
0xa0, 0x21, 0xe9, 0xce, 0x50, 0x16, 0x22, 0x03, 0xc5, 0x3a, 0x06, 0xd6, 0xba, 0x62, 0x8b, 0x8c,
0xeb, 0x0d, 0x2a, 0xc9, 0xba, 0x96, 0x79, 0x7d, 0x91, 0xd4, 0x93, 0x2f, 0x41, 0xed, 0xc4, 0x0a,
0xe8, 0xa2, 0x9e, 0x9c, 0x0e, 0x23, 0xdb, 0x5a, 0xe4, 0x5b, 0x8b, 0x6e, 0xaa, 0xd6, 0xc2, 0xa3,
0xc8, 0xef, 0xa0, 0xd9, 0xda, 0xe4, 0xf0, 0xf5, 0xf3, 0xfb, 0xad, 0x35, 0xa0, 0x7d, 0xb3, 0x05,
0x77, 0x69, 0x7a, 0xf5, 0xbe, 0x1f, 0x07, 0x1f, 0xfb, 0x71, 0xf0, 0xb5, 0x1f, 0x07, 0x8f, 0x67,
0xa9, 0xd0, 0x4f, 0xdb, 0x24, 0x5a, 0xa1, 0x8c, 0xb9, 0x4a, 0xb1, 0x50, 0xf8, 0x6c, 0x0e, 0xe7,
0xab, 0x75, 0xdc, 0x5c, 0x75, 0xd2, 0x36, 0x3f, 0xbe, 0xfc, 0x0d, 0x00, 0x00, 0xff, 0xff, 0x19,
0x01, 0x2c, 0x30, 0x03, 0x02, 0x00, 0x00,
}

View File

@@ -19,7 +19,6 @@ message VersionMessage {
string GoVersion = 6;
string Compiler = 7;
string Platform = 8;
string KsonnetVersion = 9;
}
// VersionService returns the version of the API server.

View File

@@ -5,39 +5,18 @@ import (
"testing"
"time"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
// load the gcp plugin (required to authenticate against GKE clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
// load the oidc plugin (required to authenticate with OpenID Connect).
"k8s.io/apimachinery/pkg/api/errors"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/argo"
)
func TestAppManagement(t *testing.T) {
assertAppHasEvent := func(a *v1alpha1.Application, action string, reason string) {
list, err := fixture.KubeClient.CoreV1().Events(fixture.Namespace).List(metav1.ListOptions{
FieldSelector: fields.SelectorFromSet(map[string]string{
"involvedObject.name": a.Name,
"involvedObject.uid": string(a.UID),
"involvedObject.namespace": fixture.Namespace,
}).String(),
})
if err != nil {
t.Fatalf("Unable to get app events %v", err)
}
for i := range list.Items {
event := list.Items[i]
if event.Reason == reason && event.Action == action {
return
}
}
t.Errorf("Unable to find event with reason=%s; action=%s", reason, action)
}
testApp := &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{
Source: v1alpha1.ApplicationSource{
@@ -73,7 +52,6 @@ func TestAppManagement(t *testing.T) {
assert.Equal(t, ".", app.Spec.Source.Path)
assert.Equal(t, fixture.Namespace, app.Spec.Destination.Namespace)
assert.Equal(t, fixture.Config.Host, app.Spec.Destination.Server)
assertAppHasEvent(app, "create", argo.EventReasonResourceCreated)
})
t.Run("TestAppDeletion", func(t *testing.T) {
@@ -84,15 +62,10 @@ func TestAppManagement(t *testing.T) {
t.Fatalf("Unable to delete app %v", err)
}
a, err := fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.Name, metav1.GetOptions{})
_, err = fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.Name, metav1.GetOptions{})
if err != nil && !errors.IsNotFound(err) {
t.Fatalf("Unable to get app %v", err)
} else {
assert.NotNil(t, a.DeletionTimestamp)
}
assertAppHasEvent(app, "delete", argo.EventReasonResourceDeleted)
assert.NotNil(t, err)
assert.True(t, errors.IsNotFound(err))
})
t.Run("TestTrackAppStateAndSyncApp", func(t *testing.T) {
@@ -107,7 +80,6 @@ func TestAppManagement(t *testing.T) {
if err != nil {
t.Fatalf("Unable to sync app %v", err)
}
assertAppHasEvent(app, "sync", argo.EventReasonResourceUpdated)
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
@@ -148,8 +120,6 @@ func TestAppManagement(t *testing.T) {
t.Fatalf("Unable to sync app %v", err)
}
assertAppHasEvent(app, "rollback", argo.EventReasonResourceUpdated)
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.ComparisonResult.Status == v1alpha1.ComparisonStatusSynced, err
@@ -169,7 +139,7 @@ func TestAppManagement(t *testing.T) {
WaitUntil(t, func() (done bool, err error) {
app, err := fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.ComparisonResult.Status == v1alpha1.ComparisonStatusUnknown && len(app.Status.Conditions) > 0, err
return err == nil && app.Status.ComparisonResult.Status != v1alpha1.ComparisonStatusUnknown && len(app.Status.Conditions) > 0, err
})
app, err := fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})

View File

@@ -21,13 +21,9 @@ import (
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"path"
"path/filepath"
"github.com/argoproj/argo-cd/cmd/argocd/commands"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
@@ -190,10 +186,7 @@ func (f *Fixture) ensureClusterRegistered() error {
return err
}
// Install RBAC resources for managing the cluster
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
managerBearerToken, err := common.InstallClusterManagerRBAC(clientset)
errors.CheckError(err)
managerBearerToken := common.InstallClusterManagerRBAC(conf)
clst := commands.NewCluster(f.Config.Host, conf, managerBearerToken)
clstCreateReq := cluster.ClusterCreateRequest{Cluster: clst}
_, err = cluster.NewServer(f.DB, f.Enforcer).Create(context.Background(), &clstCreateReq)
@@ -259,7 +252,6 @@ func NewFixture() (*Fixture, error) {
}
db := db.NewDB(namespace, kubeClient)
enforcer := rbac.NewEnforcer(kubeClient, namespace, common.ArgoCDRBACConfigMapName, nil)
enforcer.SetClaimsEnforcerFunc(server.EnforceClaims(enforcer, appClient, namespace))
err = enforcer.SetBuiltinPolicy(test.BuiltinPolicy)
if err != nil {
return nil, err
@@ -418,14 +410,7 @@ func (c *FakeGitClient) LsRemote(s string) (string, error) {
}
func (c *FakeGitClient) LsFiles(s string) ([]string, error) {
matches, err := filepath.Glob(path.Join(c.root, s))
if err != nil {
return nil, err
}
for i := range matches {
matches[i] = strings.TrimPrefix(matches[i], c.root)
}
return matches, nil
return []string{"abcdef123456890"}, nil
}
func (c *FakeGitClient) CommitSHA() (string, error) {

View File

@@ -17,8 +17,7 @@ func TestMain(m *testing.M) {
println(fmt.Sprintf("Unable to create e2e fixture: %v", err))
os.Exit(-1)
} else {
code := m.Run()
fixture.TearDown()
os.Exit(code)
defer fixture.TearDown()
m.Run()
}
}
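The `TestMain` change above touches a classic Go gotcha: `os.Exit` terminates the process without running deferred functions, so `defer fixture.TearDown()` before `m.Run()` only works if nothing later calls `os.Exit`, while the explicit `code := m.Run(); fixture.TearDown(); os.Exit(code)` form guarantees teardown and preserves the test exit code. A minimal standalone sketch of that wiring, with hypothetical `setup`/`teardown` helpers and a fake runner standing in for `*testing.M`:

```go
package main

import (
	"fmt"
	"os"
)

// runner abstracts *testing.M so the wiring can be exercised without `go test`.
type runner interface{ Run() int }

type fakeRunner struct{ code int }

func (f fakeRunner) Run() int { return f.code }

var events []string

// setup and teardown are hypothetical stand-ins for fixture creation/cleanup.
func setup() error { events = append(events, "setup"); return nil }
func teardown()    { events = append(events, "teardown") }

// run performs setup, runs the suite, and tears down explicitly.
// Teardown must happen before the caller invokes os.Exit, because
// os.Exit does not run deferred functions.
func run(m runner) int {
	if err := setup(); err != nil {
		fmt.Fprintf(os.Stderr, "unable to create fixture: %v\n", err)
		return 1
	}
	code := m.Run()
	teardown()
	return code
}

func main() {
	code := run(fakeRunner{code: 0})
	fmt.Println("exit code:", code)
	// in a real TestMain you would call os.Exit(code) here
}
```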

View File

@@ -6,36 +6,13 @@ import (
"testing"
"time"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/argo"
)
func TestProjectManagement(t *testing.T) {
assertProjHasEvent := func(a *v1alpha1.AppProject, action string, reason string) {
list, err := fixture.KubeClient.CoreV1().Events(fixture.Namespace).List(metav1.ListOptions{
FieldSelector: fields.SelectorFromSet(map[string]string{
"involvedObject.name": a.Name,
"involvedObject.uid": string(a.UID),
"involvedObject.namespace": fixture.Namespace,
}).String(),
})
if err != nil {
t.Fatalf("Unable to get app events %v", err)
}
for i := range list.Items {
event := list.Items[i]
if event.Reason == reason && event.Action == action {
return
}
}
t.Errorf("Unable to find event with reason=%s; action=%s", reason, action)
}
t.Run("TestProjectCreation", func(t *testing.T) {
projectName := "proj-" + strconv.FormatInt(time.Now().Unix(), 10)
_, err := fixture.RunCli("proj", "create", projectName,
@@ -43,7 +20,9 @@ func TestProjectManagement(t *testing.T) {
"-d", "https://192.168.99.100:8443,default",
"-d", "https://192.168.99.100:8443,service",
"-s", "https://github.com/argoproj/argo-cd.git")
assert.Nil(t, err)
if err != nil {
t.Fatalf("Unable to create project %v", err)
}
proj, err := fixture.AppClient.ArgoprojV1alpha1().AppProjects(fixture.Namespace).Get(projectName, metav1.GetOptions{})
if err != nil {
@@ -60,13 +39,11 @@ func TestProjectManagement(t *testing.T) {
assert.Equal(t, 1, len(proj.Spec.SourceRepos))
assert.Equal(t, "https://github.com/argoproj/argo-cd.git", proj.Spec.SourceRepos[0])
assertProjHasEvent(proj, "create", argo.EventReasonResourceCreated)
})
t.Run("TestProjectDeletion", func(t *testing.T) {
projectName := "proj-" + strconv.FormatInt(time.Now().Unix(), 10)
proj, err := fixture.AppClient.ArgoprojV1alpha1().AppProjects(fixture.Namespace).Create(&v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: projectName}})
_, err := fixture.AppClient.ArgoprojV1alpha1().AppProjects(fixture.Namespace).Create(&v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: projectName}})
if err != nil {
t.Fatalf("Unable to create project %v", err)
}
@@ -78,7 +55,6 @@ func TestProjectManagement(t *testing.T) {
_, err = fixture.AppClient.ArgoprojV1alpha1().AppProjects(fixture.Namespace).Get(projectName, metav1.GetOptions{})
assert.True(t, errors.IsNotFound(err))
assertProjHasEvent(proj, "delete", argo.EventReasonResourceDeleted)
})
t.Run("TestSetProject", func(t *testing.T) {
@@ -108,7 +84,6 @@ func TestProjectManagement(t *testing.T) {
assert.Equal(t, "https://192.168.99.100:8443", proj.Spec.Destinations[1].Server)
assert.Equal(t, "service", proj.Spec.Destinations[1].Namespace)
assertProjHasEvent(proj, "update", argo.EventReasonResourceUpdated)
})
t.Run("TestAddProjectDestination", func(t *testing.T) {
@@ -143,7 +118,6 @@ func TestProjectManagement(t *testing.T) {
assert.Equal(t, "https://192.168.99.100:8443", proj.Spec.Destinations[0].Server)
assert.Equal(t, "test1", proj.Spec.Destinations[0].Namespace)
assertProjHasEvent(proj, "update", argo.EventReasonResourceUpdated)
})
t.Run("TestRemoveProjectDestination", func(t *testing.T) {
@@ -184,7 +158,6 @@ func TestProjectManagement(t *testing.T) {
}
assert.Equal(t, projectName, proj.Name)
assert.Equal(t, 0, len(proj.Spec.Destinations))
assertProjHasEvent(proj, "update", argo.EventReasonResourceUpdated)
})
t.Run("TestAddProjectSource", func(t *testing.T) {
@@ -201,7 +174,8 @@ func TestProjectManagement(t *testing.T) {
}
_, err = fixture.RunCli("proj", "add-source", projectName, "https://github.com/argoproj/argo-cd.git")
assert.Nil(t, err)
assert.NotNil(t, err)
assert.True(t, strings.Contains(err.Error(), "already defined"))
proj, err := fixture.AppClient.ArgoprojV1alpha1().AppProjects(fixture.Namespace).Get(projectName, metav1.GetOptions{})
if err != nil {
@@ -233,7 +207,8 @@ func TestProjectManagement(t *testing.T) {
}
_, err = fixture.RunCli("proj", "remove-source", projectName, "https://github.com/argoproj/argo-cd.git")
assert.Nil(t, err)
assert.NotNil(t, err)
assert.True(t, strings.Contains(err.Error(), "does not exist"))
proj, err := fixture.AppClient.ArgoprojV1alpha1().AppProjects(fixture.Namespace).Get(projectName, metav1.GetOptions{})
if err != nil {
@@ -241,42 +216,5 @@ func TestProjectManagement(t *testing.T) {
}
assert.Equal(t, projectName, proj.Name)
assert.Equal(t, 0, len(proj.Spec.SourceRepos))
assertProjHasEvent(proj, "update", argo.EventReasonResourceUpdated)
})
t.Run("TestUseJWTToken", func(t *testing.T) {
projectName := "proj-" + strconv.FormatInt(time.Now().Unix(), 10)
appName := "app-" + strconv.FormatInt(time.Now().Unix(), 10)
roleName := "roleTest"
testApp := &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: appName,
},
Spec: v1alpha1.ApplicationSpec{
Source: v1alpha1.ApplicationSource{
RepoURL: "https://github.com/argoproj/argo-cd.git", Path: ".", Environment: "minikube",
},
Destination: v1alpha1.ApplicationDestination{
Server: fixture.Config.Host,
Namespace: fixture.Namespace,
},
Project: projectName,
},
}
_, err := fixture.AppClient.ArgoprojV1alpha1().AppProjects(fixture.Namespace).Create(&v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: projectName}})
assert.Nil(t, err)
_, err = fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Create(testApp)
assert.Nil(t, err)
_, err = fixture.RunCli("proj", "role", "create", projectName, roleName)
assert.Nil(t, err)
_, err = fixture.RunCli("proj", "role", "create-token", projectName, roleName)
assert.Nil(t, err)
_, err = fixture.RunCli("proj", "role", "add-policy", projectName, roleName, "-a", "get", "-o", "*", "-p", "allow")
assert.Nil(t, err)
})
}

View File

@@ -6,20 +6,17 @@ import (
"errors"
"fmt"
"path"
"strings"
"time"
"github.com/ghodss/yaml"
"github.com/ksonnet/ksonnet/pkg/app"
log "github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/watch"
"strings"
"github.com/argoproj/argo-cd/common"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
@@ -29,10 +26,10 @@ import (
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/git"
)
const (
errDestinationMissing = "Destination server and/or namespace missing from app spec"
"github.com/ghodss/yaml"
"github.com/ksonnet/ksonnet/pkg/app"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// FormatAppConditions returns a string representation of the given app condition list
@@ -214,31 +211,59 @@ func GetSpecErrors(
}
if repoAccessable {
appSourceType, err := queryAppSourceType(ctx, spec, repoRes, repoClient)
// Verify app.yaml is functional
req := repository.GetFileRequest{
Repo: &argoappv1.Repository{
Repo: spec.Source.RepoURL,
},
Revision: spec.Source.TargetRevision,
Path: path.Join(spec.Source.Path, "app.yaml"),
}
if repoRes != nil {
req.Repo.Username = repoRes.Username
req.Repo.Password = repoRes.Password
req.Repo.SSHPrivateKey = repoRes.SSHPrivateKey
}
getRes, err := repoClient.GetFile(ctx, &req)
if err != nil {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("Unable to determine app source type: %v", err),
Message: fmt.Sprintf("Unable to load app.yaml: %v", err),
})
} else {
switch appSourceType {
case repository.AppSourceKsonnet:
appYamlConditions := verifyAppYAML(ctx, repoRes, spec, repoClient)
if len(appYamlConditions) > 0 {
conditions = append(conditions, appYamlConditions...)
var appSpec app.Spec
err = yaml.Unmarshal(getRes.Data, &appSpec)
if err != nil {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: "app.yaml is not a valid ksonnet app spec",
})
} else {
// Default revision to HEAD if unspecified
if spec.Source.TargetRevision == "" {
spec.Source.TargetRevision = "HEAD"
}
case repository.AppSourceHelm:
helmConditions := verifyHelmChart(ctx, repoRes, spec, repoClient)
if len(helmConditions) > 0 {
conditions = append(conditions, helmConditions...)
}
case repository.AppSourceDirectory, repository.AppSourceKustomize:
maniDirConditions := verifyGenerateManifests(ctx, repoRes, spec, repoClient)
if len(maniDirConditions) > 0 {
conditions = append(conditions, maniDirConditions...)
}
}
// Verify the specified environment is defined in it
envSpec, ok := appSpec.Environments[spec.Source.Environment]
if !ok || envSpec == nil {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("environment '%s' does not exist in ksonnet app", spec.Source.Environment),
})
}
if envSpec != nil {
// If server and namespace are not supplied, pull them from the app.yaml
if spec.Destination.Server == "" {
spec.Destination.Server = envSpec.Destination.Server
}
if spec.Destination.Namespace == "" {
spec.Destination.Namespace = envSpec.Destination.Namespace
}
}
}
}
}
@@ -276,172 +301,14 @@ func GetSpecErrors(
return conditions, nil
}
// GetAppProject returns a project from an application
func GetAppProject(spec *argoappv1.ApplicationSpec, appclientset appclientset.Interface, ns string) (*argoappv1.AppProject, error) {
var proj *argoappv1.AppProject
var err error
if spec.BelongsToDefaultProject() {
return appclientset.ArgoprojV1alpha1().AppProjects(ns).Get(common.DefaultAppProjectName, metav1.GetOptions{})
}
return appclientset.ArgoprojV1alpha1().AppProjects(ns).Get(spec.Project, metav1.GetOptions{})
}
// queryAppSourceType queries the repo server for YAML files in a directory and determines the
// application source type based on the files found there.
func queryAppSourceType(ctx context.Context, spec *argoappv1.ApplicationSpec, repoRes *argoappv1.Repository, repoClient repository.RepositoryServiceClient) (repository.AppSourceType, error) {
req := repository.ListDirRequest{
Repo: &argoappv1.Repository{
Repo: spec.Source.RepoURL,
},
Revision: spec.Source.TargetRevision,
Path: fmt.Sprintf("%s/*.yaml", spec.Source.Path),
}
if repoRes != nil {
req.Repo.Username = repoRes.Username
req.Repo.Password = repoRes.Password
req.Repo.SSHPrivateKey = repoRes.SSHPrivateKey
}
getRes, err := repoClient.ListDir(ctx, &req)
if err != nil {
return "", err
}
for _, gitPath := range getRes.Items {
// gitPath may look like: app.yaml, or some/subpath/app.yaml
trimmedPath := strings.TrimPrefix(gitPath, spec.Source.Path)
trimmedPath = strings.TrimPrefix(trimmedPath, "/")
if trimmedPath == "app.yaml" {
return repository.AppSourceKsonnet, nil
}
if trimmedPath == "Chart.yaml" {
return repository.AppSourceHelm, nil
}
if trimmedPath == "kustomization.yaml" {
return repository.AppSourceKustomize, nil
}
}
return repository.AppSourceDirectory, nil
}
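`queryAppSourceType` above classifies a repo directory by the presence of a well-known top-level file: `app.yaml` means ksonnet, `Chart.yaml` Helm, `kustomization.yaml` Kustomize, and anything else is a plain directory. The path-trimming and matching logic can be sketched in isolation (the returned strings here are stand-ins for the `repository.AppSource*` constants):

```go
package main

import (
	"fmt"
	"strings"
)

// sourceType classifies a directory by matching known top-level files.
// gitPath may look like "app.yaml" or "some/subpath/app.yaml".
func sourceType(srcPath string, gitPaths []string) string {
	for _, gitPath := range gitPaths {
		trimmed := strings.TrimPrefix(gitPath, srcPath)
		trimmed = strings.TrimPrefix(trimmed, "/")
		switch trimmed {
		case "app.yaml":
			return "ksonnet"
		case "Chart.yaml":
			return "helm"
		case "kustomization.yaml":
			return "kustomize"
		}
	}
	return "directory"
}

func main() {
	fmt.Println(sourceType("apps/guestbook", []string{"apps/guestbook/Chart.yaml"}))
}
```

Files nested below the top level keep a residual path after trimming, so they fall through the switch and never misclassify the directory.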
// verifyAppYAML verifies that a ksonnet app.yaml is functional
func verifyAppYAML(ctx context.Context, repoRes *argoappv1.Repository, spec *argoappv1.ApplicationSpec, repoClient repository.RepositoryServiceClient) []argoappv1.ApplicationCondition {
req := repository.GetFileRequest{
Repo: &argoappv1.Repository{
Repo: spec.Source.RepoURL,
},
Revision: spec.Source.TargetRevision,
Path: path.Join(spec.Source.Path, "app.yaml"),
}
if repoRes != nil {
req.Repo.Username = repoRes.Username
req.Repo.Password = repoRes.Password
req.Repo.SSHPrivateKey = repoRes.SSHPrivateKey
}
getRes, err := repoClient.GetFile(ctx, &req)
var conditions []argoappv1.ApplicationCondition
if err != nil {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("Unable to load app.yaml: %v", err),
})
defaultProj := argoappv1.GetDefaultProject(ns)
proj = &defaultProj
} else {
var appSpec app.Spec
err = yaml.Unmarshal(getRes.Data, &appSpec)
if err != nil {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: "app.yaml is not a valid ksonnet app spec",
})
} else {
// Default revision to HEAD if unspecified
if spec.Source.TargetRevision == "" {
spec.Source.TargetRevision = "HEAD"
}
// Verify the specified environment is defined in it
envSpec, ok := appSpec.Environments[spec.Source.Environment]
if !ok || envSpec == nil {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("environment '%s' does not exist in ksonnet app", spec.Source.Environment),
})
}
if envSpec != nil {
// If server and namespace are not supplied, pull them from the app.yaml
if spec.Destination.Server == "" {
spec.Destination.Server = envSpec.Destination.Server
}
if spec.Destination.Namespace == "" {
spec.Destination.Namespace = envSpec.Destination.Namespace
}
}
}
proj, err = appclientset.ArgoprojV1alpha1().AppProjects(ns).Get(spec.Project, metav1.GetOptions{})
}
return conditions
}
// verifyHelmChart verifies a helm chart is functional
func verifyHelmChart(ctx context.Context, repoRes *argoappv1.Repository, spec *argoappv1.ApplicationSpec, repoClient repository.RepositoryServiceClient) []argoappv1.ApplicationCondition {
var conditions []argoappv1.ApplicationCondition
if spec.Destination.Server == "" || spec.Destination.Namespace == "" {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: errDestinationMissing,
})
}
req := repository.GetFileRequest{
Repo: &argoappv1.Repository{
Repo: spec.Source.RepoURL,
},
Revision: spec.Source.TargetRevision,
Path: path.Join(spec.Source.Path, "Chart.yaml"),
}
if repoRes != nil {
req.Repo.Username = repoRes.Username
req.Repo.Password = repoRes.Password
req.Repo.SSHPrivateKey = repoRes.SSHPrivateKey
}
_, err := repoClient.GetFile(ctx, &req)
if err != nil {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("Unable to load Chart.yaml: %v", err),
})
}
return conditions
}
// verifyGenerateManifests verifies a repo path can generate manifests
func verifyGenerateManifests(ctx context.Context, repoRes *argoappv1.Repository, spec *argoappv1.ApplicationSpec, repoClient repository.RepositoryServiceClient) []argoappv1.ApplicationCondition {
var conditions []argoappv1.ApplicationCondition
if spec.Destination.Server == "" || spec.Destination.Namespace == "" {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: errDestinationMissing,
})
}
req := repository.ManifestRequest{
Repo: &argoappv1.Repository{
Repo: spec.Source.RepoURL,
},
Revision: spec.Source.TargetRevision,
Path: spec.Source.Path,
Namespace: spec.Destination.Namespace,
}
if repoRes != nil {
req.Repo.Username = repoRes.Username
req.Repo.Password = repoRes.Password
req.Repo.SSHPrivateKey = repoRes.SSHPrivateKey
}
manRes, err := repoClient.GenerateManifest(ctx, &req)
if err != nil {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("Unable to generate manifests in %s: %v", spec.Source.Path, err),
})
} else if len(manRes.Manifests) == 0 {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("Path '%s' contained no kubernetes manifests", spec.Source.Path),
})
}
return conditions
return proj, err
}

View File

@@ -29,23 +29,6 @@ func TestRefreshApp(t *testing.T) {
//assert.True(t, ok)
}
func TestGetAppProjectWithNoProjDefined(t *testing.T) {
projName := "default"
namespace := "default"
testProj := &argoappv1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: projName, Namespace: namespace},
}
var testApp argoappv1.Application
testApp.Name = "test-app"
testApp.Namespace = namespace
appClientset := appclientset.NewSimpleClientset(testProj)
proj, err := GetAppProject(&testApp.Spec, appClientset, namespace)
assert.Nil(t, err)
assert.Equal(t, proj.Name, projName)
}
func TestCheckValidParam(t *testing.T) {
oldAppSet := make(map[string]map[string]bool)
oldAppSet["testComponent"] = make(map[string]bool)

View File

@@ -1,85 +0,0 @@
package argo
import (
log "github.com/sirupsen/logrus"
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/kubernetes"
"fmt"
"time"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
)
type AuditLogger struct {
kIf kubernetes.Interface
component string
ns string
}
type EventInfo struct {
Action string
Reason string
Username string
}
const (
EventReasonStatusRefreshed = "StatusRefreshed"
EventReasonResourceCreated = "ResourceCreated"
EventReasonResourceUpdated = "ResourceUpdated"
EventReasonResourceDeleted = "ResourceDeleted"
)
func (l *AuditLogger) logEvent(objMeta metav1.ObjectMeta, gvk schema.GroupVersionKind, info EventInfo, eventType string) {
var message string
if info.Username != "" {
message = fmt.Sprintf("User %s executed action %s", info.Username, info.Action)
} else {
message = fmt.Sprintf("Unknown user executed action %s", info.Action)
}
t := metav1.Time{Time: time.Now()}
_, err := l.kIf.CoreV1().Events(l.ns).Create(&v1.Event{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%v.%x", objMeta.Name, t.UnixNano()),
},
Source: v1.EventSource{
Component: l.component,
},
InvolvedObject: v1.ObjectReference{
Kind: gvk.Kind,
Name: objMeta.Name,
Namespace: objMeta.Namespace,
ResourceVersion: objMeta.ResourceVersion,
APIVersion: gvk.Version,
UID: objMeta.UID,
},
FirstTimestamp: t,
LastTimestamp: t,
Count: 1,
Message: message,
Type: eventType,
Action: info.Action,
Reason: info.Reason,
})
if err != nil {
log.Errorf("Unable to create audit event: %v", err)
}
}
func (l *AuditLogger) LogAppEvent(app *v1alpha1.Application, info EventInfo, eventType string) {
l.logEvent(app.ObjectMeta, v1alpha1.ApplicationSchemaGroupVersionKind, info, eventType)
}
func (l *AuditLogger) LogAppProjEvent(proj *v1alpha1.AppProject, info EventInfo, eventType string) {
l.logEvent(proj.ObjectMeta, v1alpha1.AppProjectSchemaGroupVersionKind, info, eventType)
}
func NewAuditLogger(ns string, kIf kubernetes.Interface, component string) *AuditLogger {
return &AuditLogger{
ns: ns,
kIf: kIf,
component: component,
}
}

View File

@@ -1,4 +1,4 @@
package config
package cli
import (
"encoding/json"

View File

@@ -1,4 +1,4 @@
package config
package cli
import (
"fmt"

View File

@@ -61,7 +61,7 @@ func (s *db) CreateCluster(ctx context.Context, c *appv1.Cluster) (*appv1.Cluste
clusterSecret, err = s.kubeclientset.CoreV1().Secrets(s.ns).Create(clusterSecret)
if err != nil {
if apierr.IsAlreadyExists(err) {
return nil, status.Errorf(codes.AlreadyExists, "cluster %q already exists", c.Server)
return nil, status.Errorf(codes.AlreadyExists, "cluster '%s' already exists", c.Server)
}
return nil, err
}
@@ -88,24 +88,17 @@ func (s *db) WatchClusters(ctx context.Context, callback func(*ClusterEvent)) er
if err != nil {
return err
}
defer w.Stop()
done := make(chan bool)
go func() {
for next := range w.ResultChan() {
secret := next.Object.(*apiv1.Secret)
cluster := SecretToCluster(secret)
callback(&ClusterEvent{
Type: next.Type,
Cluster: cluster,
})
}
done <- true
<-ctx.Done()
w.Stop()
}()
select {
case <-done:
case <-ctx.Done():
for next := range w.ResultChan() {
secret := next.Object.(*apiv1.Secret)
cluster := SecretToCluster(secret)
callback(&ClusterEvent{
Type: next.Type,
Cluster: cluster,
})
}
return nil
}
@@ -118,7 +111,7 @@ func (s *db) getClusterSecret(server string) (*apiv1.Secret, error) {
clusterSecret, err := s.kubeclientset.CoreV1().Secrets(s.ns).Get(secName, metav1.GetOptions{})
if err != nil {
if apierr.IsNotFound(err) {
return nil, status.Errorf(codes.NotFound, "cluster %q not found", server)
return nil, status.Errorf(codes.NotFound, "cluster '%s' not found", server)
}
return nil, err
}

View File

@@ -57,7 +57,6 @@ func (s *db) CreateRepository(ctx context.Context, r *appsv1.Repository) (*appsv
},
}
repoSecret.Data = repoToData(r)
repoSecret.Annotations = AnnotationsFromConnectionState(&r.ConnectionState)
repoSecret, err := s.kubeclientset.CoreV1().Secrets(s.ns).Create(repoSecret)
if err != nil {
if apierr.IsAlreadyExists(err) {

View File

@@ -13,8 +13,6 @@ import (
"k8s.io/apimachinery/pkg/util/strategicpatch"
"k8s.io/kubernetes/pkg/apis/core"
"k8s.io/kubernetes/pkg/kubectl/scheme"
jsonutil "github.com/argoproj/argo-cd/util/json"
)
type DiffResult struct {
@@ -45,7 +43,7 @@ func TwoWayDiff(config, live *unstructured.Unstructured) *DiffResult {
configObj = config.Object
}
if live != nil {
liveObj = jsonutil.RemoveMapFields(configObj, live.Object)
liveObj = RemoveMapFields(configObj, live.Object)
}
gjDiff := gojsondiff.New().CompareObjects(configObj, liveObj)
dr := DiffResult{
@@ -60,7 +58,7 @@ func TwoWayDiff(config, live *unstructured.Unstructured) *DiffResult {
func ThreeWayDiff(orig, config, live *unstructured.Unstructured) *DiffResult {
orig = removeNamespaceAnnotation(orig)
// remove extra fields in the live, that were not in the original object
liveObj := jsonutil.RemoveMapFields(orig.Object, live.Object)
liveObj := RemoveMapFields(orig.Object, live.Object)
// now we have a pruned live object
gjDiff := gojsondiff.New().CompareObjects(config.Object, liveObj)
dr := DiffResult{
@@ -230,3 +228,42 @@ func (d *DiffResult) ASCIIFormat(left *unstructured.Unstructured, formatOpts for
asciiFmt := formatter.NewAsciiFormatter(left.Object, formatOpts)
return asciiFmt.Format(d.Diff)
}
// https://github.com/ksonnet/ksonnet/blob/master/pkg/kubecfg/diff.go
func removeFields(config, live interface{}) interface{} {
switch c := config.(type) {
case map[string]interface{}:
return RemoveMapFields(c, live.(map[string]interface{}))
case []interface{}:
return removeListFields(c, live.([]interface{}))
default:
return live
}
}
// RemoveMapFields removes all fields from the live object that do not exist in the config
func RemoveMapFields(config, live map[string]interface{}) map[string]interface{} {
result := map[string]interface{}{}
for k, v1 := range config {
v2, ok := live[k]
if !ok {
continue
}
result[k] = removeFields(v1, v2)
}
return result
}
func removeListFields(config, live []interface{}) []interface{} {
// If live is longer than config, then the extra elements at the end of the
// list will be returned as-is so they appear in the diff.
result := make([]interface{}, 0, len(live))
for i, v2 := range live {
if len(config) > i {
result = append(result, removeFields(config[i], v2))
} else {
result = append(result, v2)
}
}
return result
}
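The pruning helpers above (borrowed from ksonnet) intersect the live object with the config: map keys absent from config are dropped, lists are pruned element-wise, and extra trailing live elements are kept so they still appear in the diff. A self-contained sketch of that behavior (unexported copies of the same three functions, plus illustrative data):

```go
package main

import "fmt"

// removeFields dispatches on the config value's type.
func removeFields(config, live interface{}) interface{} {
	switch c := config.(type) {
	case map[string]interface{}:
		return removeMapFields(c, live.(map[string]interface{}))
	case []interface{}:
		return removeListFields(c, live.([]interface{}))
	default:
		return live
	}
}

// removeMapFields drops every key in live that does not exist in config.
func removeMapFields(config, live map[string]interface{}) map[string]interface{} {
	result := map[string]interface{}{}
	for k, v1 := range config {
		v2, ok := live[k]
		if !ok {
			continue
		}
		result[k] = removeFields(v1, v2)
	}
	return result
}

// removeListFields prunes element-wise; trailing live elements beyond the
// config's length are returned as-is so they show up in the diff.
func removeListFields(config, live []interface{}) []interface{} {
	result := make([]interface{}, 0, len(live))
	for i, v2 := range live {
		if len(config) > i {
			result = append(result, removeFields(config[i], v2))
		} else {
			result = append(result, v2)
		}
	}
	return result
}

func main() {
	config := map[string]interface{}{"spec": map[string]interface{}{"replicas": 1}}
	live := map[string]interface{}{
		"spec":   map[string]interface{}{"replicas": 3, "clusterIP": "10.0.0.1"},
		"status": map[string]interface{}{"ready": true},
	}
	// "status" and "clusterIP" are pruned; the live "replicas" value survives.
	fmt.Println(removeMapFields(config, live))
}
```

Note the asymmetry: maps are pruned to the config's keys, but lists keep extra live elements, which is what makes unexpected additions to a list visible in the diff.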

View File

@@ -136,7 +136,7 @@ func (m *nativeGitClient) setCredentials() error {
func (m *nativeGitClient) Fetch() error {
var err error
log.Debugf("Fetching repo %s at %s", m.repoURL, m.root)
if _, err = m.runCmd("git", "fetch", "origin", "--tags", "--force"); err != nil {
if _, err = m.runCmd("git", "fetch", "origin"); err != nil {
return err
}
// git fetch does not update the HEAD reference. The following command will update the local
@@ -192,10 +192,7 @@ func (m *nativeGitClient) Checkout(revision string) error {
if revision == "" || revision == "HEAD" {
revision = "origin/HEAD"
}
if _, err := m.runCmd("git", "checkout", "--force", revision); err != nil {
return err
}
if _, err := m.runCmd("git", "clean", "-fd"); err != nil {
if _, err := m.runCmd("git", "checkout", revision); err != nil {
return err
}
return nil

View File

@@ -61,8 +61,6 @@ func IsSSHURL(url string) bool {
const gitSSHCommand = "ssh -q -F /dev/null -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=20"
// TODO: Make sure every public method works with '*' repo
// GetGitCommandEnvAndURL returns URL and env options for git operation
func GetGitCommandEnvAndURL(repo, username, password string, sshPrivateKey string) (string, []string, error) {
cmdURL := repo

View File

@@ -1,82 +0,0 @@
package grpc
import (
"bytes"
"encoding/json"
"fmt"
"golang.org/x/net/context"
"google.golang.org/grpc"
"github.com/gogo/protobuf/proto"
"github.com/grpc-ecosystem/go-grpc-middleware/logging"
"github.com/grpc-ecosystem/go-grpc-middleware/logging/logrus"
"github.com/grpc-ecosystem/go-grpc-middleware/tags/logrus"
"github.com/sirupsen/logrus"
)
func logRequest(entry *logrus.Entry, info string, pbMsg interface{}, ctx context.Context, logClaims bool) {
if logClaims {
if data, err := json.Marshal(ctx.Value("claims")); err == nil {
entry = entry.WithField("grpc.request.claims", string(data))
}
}
if p, ok := pbMsg.(proto.Message); ok {
entry = entry.WithField("grpc.request.content", &jsonpbMarshalleble{p})
}
entry.Info(info)
}
type jsonpbMarshalleble struct {
proto.Message
}
func (j *jsonpbMarshalleble) MarshalJSON() ([]byte, error) {
b := &bytes.Buffer{}
if err := grpc_logrus.JsonPbMarshaller.Marshal(b, j.Message); err != nil {
return nil, fmt.Errorf("jsonpb serializer failed: %v", err)
}
return b.Bytes(), nil
}
type loggingServerStream struct {
grpc.ServerStream
entry *logrus.Entry
logClaims bool
info string
}
func (l *loggingServerStream) SendMsg(m interface{}) error {
return l.ServerStream.SendMsg(m)
}
func (l *loggingServerStream) RecvMsg(m interface{}) error {
err := l.ServerStream.RecvMsg(m)
if err == nil {
logRequest(l.entry, l.info, m, l.ServerStream.Context(), l.logClaims)
}
return err
}
func PayloadStreamServerInterceptor(entry *logrus.Entry, logClaims bool, decider grpc_logging.ServerPayloadLoggingDecider) grpc.StreamServerInterceptor {
return func(srv interface{}, stream grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
if !decider(stream.Context(), info.FullMethod, srv) {
return handler(srv, stream)
}
logEntry := entry.WithFields(ctx_logrus.Extract(stream.Context()).Data)
newStream := &loggingServerStream{ServerStream: stream, entry: logEntry, logClaims: logClaims, info: fmt.Sprintf("received streaming call %s", info.FullMethod)}
return handler(srv, newStream)
}
}
func PayloadUnaryServerInterceptor(entry *logrus.Entry, logClaims bool, decider grpc_logging.ServerPayloadLoggingDecider) grpc.UnaryServerInterceptor {
return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
if !decider(ctx, info.FullMethod, info.Server) {
return handler(ctx, req)
}
logEntry := entry.WithFields(ctx_logrus.Extract(ctx).Data)
logRequest(logEntry, fmt.Sprintf("received unary call %s", info.FullMethod), req, ctx, logClaims)
resp, err := handler(ctx, req)
return resp, err
}
}

View File

@@ -16,34 +16,22 @@ import (
)
func GetAppHealth(obj *unstructured.Unstructured) (*appv1.HealthStatus, error) {
var err error
var health *appv1.HealthStatus
switch obj.GetKind() {
case kube.DeploymentKind:
health, err = getDeploymentHealth(obj)
return getDeploymentHealth(obj)
case kube.ServiceKind:
health, err = getServiceHealth(obj)
return getServiceHealth(obj)
case kube.IngressKind:
health, err = getIngressHealth(obj)
return getIngressHealth(obj)
case kube.StatefulSetKind:
health, err = getStatefulSetHealth(obj)
return getStatefulSetHealth(obj)
case kube.ReplicaSetKind:
health, err = getReplicaSetHealth(obj)
return getReplicaSetHealth(obj)
case kube.DaemonSetKind:
health, err = getDaemonSetHealth(obj)
case kube.PersistentVolumeClaimKind:
health, err = getPvcHealth(obj)
return getDaemonSetHealth(obj)
default:
health = &appv1.HealthStatus{Status: appv1.HealthStatusHealthy}
return &appv1.HealthStatus{Status: appv1.HealthStatusHealthy}, nil
}
if err != nil {
health.Status = appv1.HealthStatusUnknown
health.StatusDetails = err.Error()
}
return health, err
}
// healthOrder is a list of health codes in order of most healthy to least healthy
@@ -70,29 +58,6 @@ func IsWorse(current, new appv1.HealthStatusCode) bool {
return newIndex > currentIndex
}
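`IsWorse` above compares two health codes by their position in the ordered `healthOrder` list, healthiest first; a hedged standalone sketch of that mechanism (the list contents here are assumptions for illustration, only the index comparison matches the code above):

```go
package main

import "fmt"

// healthOrder is assumed here: health codes from most healthy to least healthy.
var healthOrder = []string{"Healthy", "Progressing", "Degraded", "Unknown"}

// indexOf returns the position of a health code in the severity ordering.
func indexOf(code string) int {
	for i, c := range healthOrder {
		if c == code {
			return i
		}
	}
	return -1
}

// isWorse reports whether new is less healthy than current, so an
// aggregate status can only degrade as worse child statuses are folded in.
func isWorse(current, new string) bool {
	return indexOf(new) > indexOf(current)
}

func main() {
	fmt.Println(isWorse("Healthy", "Degraded"))
	fmt.Println(isWorse("Degraded", "Progressing"))
}
```

Folding child statuses with this predicate makes the overall app health the worst health seen across its resources.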
func getPvcHealth(obj *unstructured.Unstructured) (*appv1.HealthStatus, error) {
obj, err := kube.ConvertToVersion(obj, "", "v1")
if err != nil {
return nil, err
}
var pvc coreV1.PersistentVolumeClaim
err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &pvc)
if err != nil {
return nil, err
}
switch pvc.Status.Phase {
case coreV1.ClaimLost:
return &appv1.HealthStatus{Status: appv1.HealthStatusDegraded}, nil
case coreV1.ClaimPending:
return &appv1.HealthStatus{Status: appv1.HealthStatusProgressing}, nil
case coreV1.ClaimBound:
return &appv1.HealthStatus{Status: appv1.HealthStatusHealthy}, nil
default:
return &appv1.HealthStatus{Status: appv1.HealthStatusUnknown}, nil
}
}
func getIngressHealth(obj *unstructured.Unstructured) (*appv1.HealthStatus, error) {
obj, err := kube.ConvertToVersion(obj, "extensions", "v1beta1")
if err != nil {
@@ -147,7 +112,6 @@ func getDeploymentHealth(obj *unstructured.Unstructured) (*appv1.HealthStatus, e
 	if err != nil {
 		return nil, err
 	}
-	// Borrowed at kubernetes/kubectl/rollout_status.go https://github.com/kubernetes/kubernetes/blob/5232ad4a00ec93942d0b2c6359ee6cd1201b46bc/pkg/kubectl/rollout_status.go#L80
 	if deployment.Generation <= deployment.Status.ObservedGeneration {
 		cond := getDeploymentCondition(deployment.Status, v1.DeploymentProgressing)
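The health.go hunks above replace an accumulate-then-check flow with a direct return per kind, and drop the PersistentVolumeClaim case along the way. A minimal runnable sketch of the refactored shape, using simplified stand-in types rather than the real Argo CD/Kubernetes APIs (the `getDeploymentHealth` stub here is hypothetical):

```go
package main

import "fmt"

// HealthStatus is a simplified stand-in for appv1.HealthStatus.
type HealthStatus struct {
	Status        string
	StatusDetails string
}

// getDeploymentHealth is a stub standing in for the per-kind converters.
func getDeploymentHealth(name string) (*HealthStatus, error) {
	return &HealthStatus{Status: "Healthy"}, nil
}

// getAppHealth mirrors the refactored shape: each case returns immediately,
// so one kind's conversion error can no longer leak into another kind's result.
func getAppHealth(kind, name string) (*HealthStatus, error) {
	switch kind {
	case "Deployment":
		return getDeploymentHealth(name)
	default:
		// Unrecognized kinds are reported healthy, as in the diff above.
		return &HealthStatus{Status: "Healthy"}, nil
	}
}

func main() {
	h, err := getAppHealth("Deployment", "guestbook")
	fmt.Println(h.Status, err) // prints: Healthy <nil>
}
```

The direct-return form also removes the shared `health`/`err` variables, which was the source of the wrong-converter bug this compare fixes.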


@@ -58,27 +58,3 @@ func TestStatefulSetHealth(t *testing.T) {
 	assert.NotNil(t, health)
 	assert.Equal(t, appv1.HealthStatusHealthy, health.Status)
 }
-
-func TestPvcHealthy(t *testing.T) {
-	yamlBytes, err := ioutil.ReadFile("./testdata/pvc-bound.yaml")
-	assert.Nil(t, err)
-	var obj unstructured.Unstructured
-	err = yaml.Unmarshal(yamlBytes, &obj)
-	assert.Nil(t, err)
-	health, err := GetAppHealth(&obj)
-	assert.Nil(t, err)
-	assert.NotNil(t, health)
-	assert.Equal(t, appv1.HealthStatusHealthy, health.Status)
-}
-
-func TestPvcPending(t *testing.T) {
-	yamlBytes, err := ioutil.ReadFile("./testdata/pvc-pending.yaml")
-	assert.Nil(t, err)
-	var obj unstructured.Unstructured
-	err = yaml.Unmarshal(yamlBytes, &obj)
-	assert.Nil(t, err)
-	health, err := GetAppHealth(&obj)
-	assert.Nil(t, err)
-	assert.NotNil(t, health)
-	assert.Equal(t, appv1.HealthStatusProgressing, health.Status)
-}


@@ -1,34 +0,0 @@
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  annotations:
-    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"e57a9040-a984-11e8-836b-c4b301c4d0d1","leaseDurationSeconds":15,"acquireTime":"2018-08-27T23:00:54Z","renewTime":"2018-08-27T23:00:56Z","leaderTransitions":0}'
-    kubectl.kubernetes.io/last-applied-configuration: |
-      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"applications.argoproj.io/app-name":"working-pvc"},"name":"testpvc","namespace":"argocd"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"2Gi"}}}}
-    pv.kubernetes.io/bind-completed: "yes"
-    pv.kubernetes.io/bound-by-controller: "yes"
-    volume.beta.kubernetes.io/storage-provisioner: docker.io/hostpath
-  creationTimestamp: 2018-08-27T23:00:54Z
-  finalizers:
-  - kubernetes.io/pvc-protection
-  labels:
-    applications.argoproj.io/app-name: working-pvc
-  name: testpvc
-  namespace: argocd
-  resourceVersion: "323170"
-  selfLink: /api/v1/namespaces/argocd/persistentvolumeclaims/testpvc
-  uid: 0cedda2c-aa4d-11e8-a271-025000000001
-spec:
-  accessModes:
-  - ReadWriteOnce
-  resources:
-    requests:
-      storage: 2Gi
-  storageClassName: hostpath
-  volumeName: pvc-0cedda2c-aa4d-11e8-a271-025000000001
-status:
-  accessModes:
-  - ReadWriteOnce
-  capacity:
-    storage: 2Gi
-  phase: Bound


@@ -1,26 +0,0 @@
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  annotations:
-    kubectl.kubernetes.io/last-applied-configuration: |
-      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"applications.argoproj.io/app-name":"working-pvc"},"name":"testpvc-2","namespace":"argocd"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"2Gi"}},"storageClassName":"slow"}}
-    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
-  creationTimestamp: 2018-08-27T23:00:54Z
-  finalizers:
-  - kubernetes.io/pvc-protection
-  labels:
-    applications.argoproj.io/app-name: working-pvc
-  name: testpvc-2
-  namespace: argocd
-  resourceVersion: "323141"
-  selfLink: /api/v1/namespaces/argocd/persistentvolumeclaims/testpvc-2
-  uid: 0cedfc44-aa4d-11e8-a271-025000000001
-spec:
-  accessModes:
-  - ReadWriteOnce
-  resources:
-    requests:
-      storage: 2Gi
-  storageClassName: slow
-status:
-  phase: Pending
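The two deleted fixtures above (one Bound, one Pending) exercised the phase mapping in the removed getPvcHealth: Lost maps to Degraded, Pending to Progressing, Bound to Healthy, and anything else to Unknown. A standalone sketch of that mapping, with plain strings in place of the corev1 phase constants and health-status types:

```go
package main

import "fmt"

// pvcHealth sketches the phase-to-health mapping from the removed getPvcHealth;
// phases and statuses are plain strings rather than the corev1/appv1 constants.
func pvcHealth(phase string) string {
	switch phase {
	case "Lost":
		return "Degraded"
	case "Pending":
		return "Progressing"
	case "Bound":
		return "Healthy"
	default:
		return "Unknown"
	}
}

func main() {
	// The two fixture files correspond to these inputs.
	fmt.Println(pvcHealth("Bound"), pvcHealth("Pending")) // prints: Healthy Progressing
}
```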

Some files were not shown because too many files have changed in this diff.