Compare commits


61 Commits

Author SHA1 Message Date
Alexander Matyushentsev
ff0c23fd46 Update manifests to v0.12.2 2019-04-22 14:28:14 -07:00
Alexander Matyushentsev
400b5f57d5 Issue #1476 - Fix race condition in controller cache (#1485) 2019-04-18 08:24:19 -07:00
dthomson25
5f9c84b218 Fix Failing Linter (#1350) 2019-04-18 08:24:05 -07:00
Alexander Matyushentsev
88611d2b61 Generate random name for grpc proxy unix socket file instead of time stamp (#1455) 2019-04-12 14:27:42 -07:00
Alexander Matyushentsev
1d70f47f68 Issue #1446 - Delete helm temp directories (#1449) 2019-04-12 07:35:19 -07:00
Alexander Matyushentsev
6f6caae5e9 Issue #1389 - Fix null pointer exception in secret normalization function (#1443) 2019-04-12 07:35:14 -07:00
Alexander Matyushentsev
a3a972611b Issue #1425 - Argo CD should not delete CRDs (#1428) 2019-04-11 09:07:55 -07:00
Alexander Matyushentsev
6a1751ad8d Update manifests to v0.12.1 2019-04-09 14:03:22 -07:00
Alexander Matyushentsev
16e061ffb6 Run 'go fmt' for application.go and server.go (#1417) 2019-04-09 10:31:26 -07:00
dthomson25
5953a00204 Add patch audit (#1416)
* Add auditing to patching commands

* Omit Patch Resource logs to prevent secret leaks
2019-04-09 08:57:37 -07:00
Alexander Matyushentsev
d0f20393cc Issue #1406 - Don't try deleting application resource if it already have (#1407) 2019-04-09 08:30:48 -07:00
Alexander Matyushentsev
161e56c844 Issue #1404 - App controller unnecessarily sets namespace on cluster-level resources (#1405) 2019-04-09 08:30:39 -07:00
Alexander Matyushentsev
410e00a5f5 Issue #1374 - Add k8s objects circular dependency protection to getApp method (#1379) 2019-04-09 08:30:29 -07:00
Alexander Matyushentsev
cafb0e1265 Issue #1366 - Fix null pointer dereference error in 'argocd app wait' (#1380) 2019-04-09 08:28:24 -07:00
Alexander Matyushentsev
2ff8650cef Issue #1012 - kubectl v1.13 fails to convert extensions/NetworkPolicy (#1360) 2019-04-09 08:28:20 -07:00
Tom Wieczorek
fe9619ba49 Add mapping to new canonical Ingress API group (#1348)
Since Kubernetes 1.14, Ingress resources are only available via networking.k8s.io/v1beta1.
2019-04-09 08:28:16 -07:00
Alexander Matyushentsev
0968103655 Issue #1294 - CLI diff should take into account resource customizations (#1337)
* Issue #1294 - CLI diff should take into account resource customizations

* Apply reviewer notes: add comments to type definition and e2e test
2019-04-09 08:28:03 -07:00
Alexander Matyushentsev
f17e0ce7a6 Issue #1218 - Allow using any name for secrets which store cluster credentials (#1336) 2019-04-09 08:26:01 -07:00
Alexander Matyushentsev
7b888e6d9f Issue #733 - 'argocd app wait' should fail sooner if app transitioned to (#1339)
Issue #733 - 'argocd app wait' should fail sooner if app transitioned to Degraded state
2019-04-09 08:25:57 -07:00
Jesse Suen
7d1b9f79f7 Update argocd-util import/export to support proper backup and restore (#1328) 2019-04-09 08:25:51 -07:00
Alex Collins
7878a6a9b0 Adds support for kustomize edit set image. Closes #1275 (#1324) 2019-04-09 08:25:46 -07:00
Alex Collins
6fec791452 Fixes deps (#1325) 2019-04-09 08:25:41 -07:00
Alexander Matyushentsev
f19f6c29a7 Issue #1319 - Fix invalid group filtering in 'patch-resource' command (#1320) 2019-04-09 08:25:22 -07:00
Alexander Matyushentsev
3625d65ea7 Issue #1135 - Run e2e tests in throw-away kubernetes cluster (#1318)
* Issue #1135 - Run e2e tests in throw-away kubernetes cluster
2019-04-09 08:24:49 -07:00
Jesse Suen
cd4bb2553d Update version and manifests v0.12.0 2019-03-22 11:58:31 -07:00
Jesse Suen
47e86f6488 Use Recreate deployment strategy for controller (#1315) 2019-03-22 11:52:40 -07:00
Jesse Suen
b0282b17f9 Fix goroutine leak in RetryUntilSucceed (#1314) 2019-03-22 11:52:34 -07:00
Jesse Suen
f6661cc841 Support a separate OAuth2 CLI clientID different from server (#1307) 2019-03-22 03:31:00 -07:00
Andre Krueger
e8deeb2622 Honor os environment variables for helm commands (#1306) 2019-03-22 03:31:00 -07:00
Alexander Matyushentsev
b0301b43dd Issue #1308 - argo diff --local fails if live object does not exist (#1309) 2019-03-21 15:38:47 -07:00
Jesse Suen
d5ee07ef62 Update VERSION to v0.12.0-rc6 2019-03-20 14:56:19 -07:00
Jesse Suen
9468012cba Update manifests to v0.12.0-rc6 2019-03-20 14:48:48 -07:00
Alexander Matyushentsev
3ffc4586dc Unavailable cache should not prevent reconciling/syncing application (#1303) 2019-03-20 14:44:09 -07:00
Jesse Suen
14c9b63f21 Update redis-ha chart to resolve redis failover issues (#1301) 2019-03-20 14:42:30 -07:00
Marc
20dea3fa9e only print to stdout, if there is a diff + exit code (#1288) 2019-03-20 08:53:49 -07:00
Alexander Matyushentsev
8f990a9b91 Issue #1258 - Disable CGO_ENABLED for server/controller binaries (#1286) 2019-03-20 08:53:40 -07:00
Alexander Matyushentsev
3ca2a8bc2a Controller: don't stop running watches on cluster resync (#1298) 2019-03-20 08:53:31 -07:00
Alexander Matyushentsev
268df4364d Update manifests to v0.12.0-rc5 2019-03-18 23:49:42 -07:00
Alexander Matyushentsev
68bb7e2046 Issue #1290 - Fix concurrent read/write error in state cache (#1293) 2019-03-18 23:39:18 -07:00
Jesse Suen
4e921a279c Fix a goroutine leak in api-server application.PodLogs and application.Watch (#1292) 2019-03-18 21:51:33 -07:00
Alexander Matyushentsev
ff72b82bd6 Issue #1287 - Fix local diff of non-namespaced resources. Also handle duplicates in local diff (#1289) 2019-03-18 21:51:26 -07:00
Jesse Suen
17393c3e70 Fix issue where argocd app set -p required repo privileges. (#1280)
Grant patch privileges to argocd-server
2019-03-18 14:42:18 -07:00
Alexander Matyushentsev
3ed3a44944 Issue #1070 - Handle duplicated resource definitions (#1284) 2019-03-18 14:41:53 -07:00
Jesse Suen
27922c8f83 Add golang prometheus metrics to controller and repo-server (#1281) 2019-03-18 14:41:13 -07:00
Jesse Suen
c35f35666f Git cloning via SSH was not verifying host public key (#1276) 2019-03-15 14:29:40 -07:00
Alexander Matyushentsev
6a61987d3d Rename Application observedAt to reconciledAt and use observedAt to notify about partial app refresh (#1270) 2019-03-14 16:42:59 -07:00
Alexander Matyushentsev
c68e4a5a56 Bug fix: set 'Version' field while saving application resources tree (#1268) 2019-03-14 15:53:21 -07:00
Alexander Matyushentsev
1525f8e051 Avoid doing full reconciliation unless application 'managed' resource has changed (#1267) 2019-03-14 15:01:38 -07:00
Jesse Suen
1c2b248d27 Support kustomize apps with remote bases in private repos in the same host (#1264) 2019-03-14 14:26:07 -07:00
Alexander Matyushentsev
22fe538645 Update manifests to v0.12.0-rc4 2019-03-12 14:27:35 -07:00
Alexander Matyushentsev
24d73e56f1 Issue #1252 - Application controller incorrectly builds application objects tree (#1253) 2019-03-12 12:21:50 -07:00
Alexander Matyushentsev
8f93bdc2a5 Issue #1247 - Fix CRD creation/deletion handling (#1249) 2019-03-12 12:21:45 -07:00
Alex Collins
6d982ca397 Migrates from gometalinter to golangci-lint. Closes #1225 (#1226) 2019-03-12 12:21:34 -07:00
Jesse Suen
eff67bffad Replace git fetch implementation with git CLI (from go-git) (#1244) 2019-03-08 14:09:22 -08:00
Jesse Suen
2b781eea49 Bump version and manifests to v0.12.0-rc3 2019-03-06 16:53:14 -08:00
Jesse Suen
2f0dc20235 Fix nil pointer dereference in CompareAppState (#1234) 2019-03-06 13:46:52 -08:00
Jesse Suen
36dc50c121 Bump version and manifests to v0.12.0-rc2 2019-03-06 02:01:07 -08:00
Alexander Matyushentsev
f8f974e871 Issue #1231 - Deprecated resource kinds from 'extensions' groups are not reconciled correctly (#1232) 2019-03-06 01:59:33 -08:00
Alexander Matyushentsev
c4b474ae98 Issue #1229 - App creation failed for public repository (#1230) 2019-03-06 01:59:26 -08:00
Jesse Suen
8d98d6e058 Sort kustomize params in GetAppDetails 2019-03-05 15:45:12 -08:00
Jesse Suen
233708ecdd Set release to v0.12.0-rc1 2019-03-05 15:23:34 -08:00
134 changed files with 4182 additions and 2679 deletions


@@ -10,28 +10,94 @@ spec:
value: master
- name: repo
value: https://github.com/argoproj/argo-cd.git
volumes:
- name: k3setc
emptyDir: {}
- name: k3svar
emptyDir: {}
- name: tmp
emptyDir: {}
templates:
- name: argo-cd-ci
steps:
- - name: build
template: ci-dind
arguments:
parameters:
- name: cmd
value: make image
- - name: build-e2e
template: build-e2e
- name: test
template: ci-builder
arguments:
parameters:
- name: cmd
value: "dep ensure && make lint test && bash <(curl -s https://codecov.io/bash) -f coverage.out"
- name: test-e2e
template: ci-builder
arguments:
parameters:
- name: cmd
value: "dep ensure && make test-e2e"
# This step builds the Argo CD image, deploys the Argo CD components into a throw-away Kubernetes cluster provisioned using k3s, and runs e2e tests against it.
- name: build-e2e
inputs:
artifacts:
- name: code
path: /go/src/github.com/argoproj/argo-cd
git:
repo: "{{workflow.parameters.repo}}"
revision: "{{workflow.parameters.revision}}"
container:
image: argoproj/argo-cd-ci-builder:v0.13.1
imagePullPolicy: Always
command: [sh, -c]
# The main container builds the argocd image. The image is saved into the k3s agent images directory so it can be preloaded by the k3s cluster.
args: ["
dep ensure && until docker ps; do sleep 3; done && \
make image DEV_IMAGE=true && mkdir -p /var/lib/rancher/k3s/agent/images && \
docker save argocd:latest > /var/lib/rancher/k3s/agent/images/argocd.tar && \
touch /var/lib/rancher/k3s/ready && until ls /etc/rancher/k3s/k3s.yaml; do sleep 3; done && \
kubectl create ns argocd-e2e && kustomize build ./test/manifests/ci | kubectl apply -n argocd-e2e -f - && \
kubectl rollout status deployment -n argocd-e2e argocd-application-controller && kubectl rollout status deployment -n argocd-e2e argocd-server && \
git config --global user.email \"test@example.com\" && \
export ARGOCD_SERVER=$(kubectl get service argocd-server -o=jsonpath={.spec.clusterIP} -n argocd-e2e):443 && make test-e2e"
]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: USER
value: argocd
- name: DOCKER_HOST
value: 127.0.0.1
- name: DOCKER_BUILDKIT
value: "1"
- name: KUBECONFIG
value: /etc/rancher/k3s/k3s.yaml
volumeMounts:
- name: tmp
mountPath: /tmp
- name: k3setc
mountPath: /etc/rancher/k3s
- name: k3svar
mountPath: /var/lib/rancher/k3s
sidecars:
- name: dind
image: docker:18.09-dind
securityContext:
privileged: true
resources:
requests:
memory: 2048Mi
cpu: 500m
mirrorVolumeMounts: true
# The step waits for the file /var/lib/rancher/k3s/ready, which indicates that all required images are ready, then starts the cluster.
- name: k3s
image: rancher/k3s:v0.3.0-rc1
imagePullPolicy: Always
command: [sh, -c]
args: ["until ls /var/lib/rancher/k3s/ready; do sleep 3; done && k3s server || true"]
securityContext:
privileged: true
volumeMounts:
- name: tmp
mountPath: /tmp
- name: k3setc
mountPath: /etc/rancher/k3s
- name: k3svar
mountPath: /var/lib/rancher/k3s
- name: ci-builder
inputs:
@@ -44,7 +110,7 @@ spec:
repo: "{{workflow.parameters.repo}}"
revision: "{{workflow.parameters.revision}}"
container:
-image: argoproj/argo-cd-ci-builder:v0.12.0
+image: argoproj/argo-cd-ci-builder:v0.13.1
imagePullPolicy: Always
command: [bash, -c]
args: ["{{inputs.parameters.cmd}}"]
@@ -73,7 +139,7 @@ spec:
repo: "{{workflow.parameters.repo}}"
revision: "{{workflow.parameters.revision}}"
container:
-image: argoproj/argo-cd-ci-builder:v0.12.0
+image: argoproj/argo-cd-ci-builder:v0.13.1
imagePullPolicy: Always
command: [sh, -c]
args: ["until docker ps; do sleep 3; done && {{inputs.parameters.cmd}}"]
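The builder and the k3s sidecar above coordinate purely through marker files (/var/lib/rancher/k3s/ready, /etc/rancher/k3s/k3s.yaml) on shared emptyDir volumes. A minimal Go sketch of that handshake, assuming only the paths used in the workflow; everything else is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists, mirroring the workflow's
// `until ls <path>; do sleep 3; done` loops.
func waitForFile(path string, interval time.Duration) {
	for {
		if _, err := os.Stat(path); err == nil {
			return
		}
		time.Sleep(interval)
	}
}

func main() {
	// The k3s sidecar blocks here until the builder has saved the
	// argocd image tarball and touched the ready marker.
	waitForFile("/var/lib/rancher/k3s/ready", 3*time.Second)
	fmt.Println("marker found; safe to start the cluster")
}
```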

.golangci.yml Normal file

@@ -0,0 +1,22 @@
run:
deadline: 8m
skip-files:
- ".*\\.pb\\.go"
skip-dirs:
- pkg/client
- vendor
linters-settings:
goimports:
local-prefixes: github.com/argoproj/argo-cd
linters:
enable:
- vet
- gofmt
- goimports
- deadcode
- varcheck
- structcheck
- ineffassign
- unconvert
- misspell
disable-all: true
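For context, here is a contrived Go sketch of the kind of code this config flags (both marked lines are violations; nothing here is from the Argo CD codebase):

```go
package example

import "fmt"

// Describe has two issues the linters enabled above catch:
// an ineffectual assignment (ineffassign) and a redundant
// conversion (unconvert).
func Describe(n int) string {
	n = n * 2 // ineffassign: overwritten below before use
	n = 10
	return fmt.Sprintf("%d", int(n)) // unconvert: n is already an int
}
```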


@@ -40,10 +40,8 @@ go get -u github.com/golang/protobuf/protoc-gen-go
go get -u github.com/go-swagger/go-swagger/cmd/swagger
go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
-go get -u gopkg.in/alecthomas/gometalinter.v2
+go get -u github.com/golangci/golangci-lint/cmd/golangci-lint
go get -u github.com/mattn/goreman
-gometalinter.v2 --install
```
## Building


@@ -42,8 +42,7 @@ RUN wget https://github.com/gobuffalo/packr/releases/download/v${PACKR_VERSION}/
# Install kubectl
# NOTE: keep the version synced with https://storage.googleapis.com/kubernetes-release/release/stable.txt
-# Keep version at 1.12.X until https://github.com/argoproj/argo-cd/issues/1012 is resolved
-ENV KUBECTL_VERSION=1.12.4
+ENV KUBECTL_VERSION=1.14.0
RUN curl -L -o /usr/local/bin/kubectl -LO https://storage.googleapis.com/kubernetes-release/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl && \
chmod +x /usr/local/bin/kubectl && \
kubectl version --client
@@ -69,7 +68,7 @@ RUN curl -L -o /usr/local/bin/kustomize1 https://github.com/kubernetes-sigs/kust
kustomize1 version
-ENV KUSTOMIZE_VERSION=2.0.2
+ENV KUSTOMIZE_VERSION=2.0.3
RUN curl -L -o /usr/local/bin/kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/kustomize && \
kustomize version
@@ -79,6 +78,17 @@ ENV AWS_IAM_AUTHENTICATOR_VERSION=0.4.0-alpha.1
RUN curl -L -o /usr/local/bin/aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/${AWS_IAM_AUTHENTICATOR_VERSION}/aws-iam-authenticator_${AWS_IAM_AUTHENTICATOR_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/aws-iam-authenticator
# Install golangci-lint
RUN wget https://install.goreleaser.com/github.com/golangci/golangci-lint.sh && \
chmod +x ./golangci-lint.sh && \
./golangci-lint.sh -b $GOPATH/bin && \
golangci-lint linters
COPY .golangci.yml ${GOPATH}/src/dummy/.golangci.yml
RUN cd ${GOPATH}/src/dummy && \
touch dummy.go && \
golangci-lint run
####################################################################################################
# Argo CD Base - used as the base for both the release and dev argocd images
@@ -94,6 +104,8 @@ RUN groupadd -g 999 argocd && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY hack/ssh_known_hosts /etc/ssh/ssh_known_hosts
COPY hack/git-ask-pass.sh /usr/local/bin/git-ask-pass.sh
COPY --from=builder /usr/local/bin/ks /usr/local/bin/ks
COPY --from=builder /usr/local/bin/helm /usr/local/bin/helm
COPY --from=builder /usr/local/bin/kubectl /usr/local/bin/kubectl

Gopkg.lock generated

@@ -48,25 +48,6 @@
pruneopts = ""
revision = "09c41003ee1d5015b75f331e52215512e7145b8d"
[[projects]]
branch = "master"
digest = "1:a74730e052a45a3fab1d310fdef2ec17ae3d6af16228421e238320846f2aaec8"
name = "github.com/alecthomas/template"
packages = [
".",
"parse",
]
pruneopts = ""
revision = "a0175ee3bccc567396460bf5acd36800cb10c49c"
[[projects]]
branch = "master"
digest = "1:8483994d21404c8a1d489f6be756e25bfccd3b45d65821f25695577791a08e68"
name = "github.com/alecthomas/units"
packages = ["."]
pruneopts = ""
revision = "2efee857e7cfd4f3d0138cc3cbb1b4966962b93a"
[[projects]]
branch = "master"
digest = "1:0caf9208419fa5db5a0ca7112affaa9550c54291dda8e2abac0c0e76181c959e"
@@ -760,7 +741,6 @@
packages = [
"expfmt",
"internal/bitbucket.org/ww/goautoneg",
"log",
"model",
]
pruneopts = ""
@@ -992,8 +972,6 @@
packages = [
"unix",
"windows",
"windows/registry",
"windows/svc/eventlog",
]
pruneopts = ""
revision = "d0be0721c37eeb5299f245a996a483160fc36940"
@@ -1115,14 +1093,6 @@
revision = "8dea3dc473e90c8179e519d91302d0597c0ca1d1"
version = "v1.15.0"
[[projects]]
digest = "1:15d017551627c8bb091bde628215b2861bed128855343fdd570c62d08871f6e1"
name = "gopkg.in/alecthomas/kingpin.v2"
packages = ["."]
pruneopts = ""
revision = "947dcec5ba9c011838740e680966fd7087a71d0d"
version = "v2.2.6"
[[projects]]
digest = "1:bf7444e1e6a36e633f4f1624a67b9e4734cfb879c27ac0a2082ac16aff8462ac"
name = "gopkg.in/go-playground/webhooks.v3"
@@ -1292,14 +1262,7 @@
branch = "release-1.12"
digest = "1:39be82077450762b5e14b5268e679a14ac0e9c7d3286e2fcface437556a29e4c"
name = "k8s.io/apiextensions-apiserver"
packages = [
"pkg/apis/apiextensions",
"pkg/apis/apiextensions/v1beta1",
"pkg/client/clientset/clientset",
"pkg/client/clientset/clientset/scheme",
"pkg/client/clientset/clientset/typed/apiextensions/v1beta1",
"pkg/features",
]
packages = ["pkg/features"]
pruneopts = ""
revision = "ca1024863b48cf0701229109df75ac5f0bb4907e"
@@ -1378,6 +1341,7 @@
"discovery",
"discovery/fake",
"dynamic",
"dynamic/fake",
"informers/core/v1",
"informers/internalinterfaces",
"kubernetes",
@@ -1610,7 +1574,6 @@
"github.com/pkg/errors",
"github.com/prometheus/client_golang/prometheus",
"github.com/prometheus/client_golang/prometheus/promhttp",
"github.com/prometheus/common/log",
"github.com/sirupsen/logrus",
"github.com/skratchdot/open-golang/open",
"github.com/soheilhy/cmux",
@@ -1654,7 +1617,6 @@
"k8s.io/api/core/v1",
"k8s.io/api/extensions/v1beta1",
"k8s.io/api/rbac/v1",
"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset",
"k8s.io/apimachinery/pkg/api/equality",
"k8s.io/apimachinery/pkg/api/errors",
"k8s.io/apimachinery/pkg/apis/meta/v1",
@@ -1673,6 +1635,7 @@
"k8s.io/client-go/discovery",
"k8s.io/client-go/discovery/fake",
"k8s.io/client-go/dynamic",
"k8s.io/client-go/dynamic/fake",
"k8s.io/client-go/informers/core/v1",
"k8s.io/client-go/kubernetes",
"k8s.io/client-go/kubernetes/fake",


@@ -38,10 +38,6 @@ required = [
branch = "release-1.12"
name = "k8s.io/api"
[[constraint]]
name = "k8s.io/apiextensions-apiserver"
branch = "release-1.12"
[[constraint]]
branch = "release-1.12"
name = "k8s.io/code-generator"


@@ -81,15 +81,15 @@ manifests:
# files into the go binary
.PHONY: server
server: clean-debug
-${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd/argocd-server
+CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd/argocd-server
.PHONY: repo-server
repo-server:
-go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-repo-server ./cmd/argocd-repo-server
+CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-repo-server ./cmd/argocd-repo-server
.PHONY: controller
controller:
-${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-application-controller ./cmd/argocd-application-controller
+CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-application-controller ./cmd/argocd-application-controller
.PHONY: packr
packr:
@@ -119,10 +119,11 @@ endif
.PHONY: builder-image
builder-image:
docker build -t $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) --target builder .
+docker push $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG)
.PHONY: dep-ensure
dep-ensure:
-dep ensure
+dep ensure -no-vendor
.PHONY: format-code
format-code:
@@ -130,14 +131,14 @@ format-code:
.PHONY: lint
lint:
-gometalinter.v2 --config gometalinter.json ./...
+golangci-lint run
.PHONY: test
test:
go test -covermode=count -coverprofile=coverage.out `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
.PHONY: test-e2e
-test-e2e:
+test-e2e: cli
go test -v -failfast -timeout 20m ./test/e2e
# Cleans VSCode debug.test files from sub-dirs to prevent them from being included in packr boxes
@@ -159,4 +160,4 @@ release-precheck: manifests
@if [ "$(GIT_TAG)" != "v`cat VERSION`" ]; then echo 'VERSION does not match git tag'; exit 1; fi
.PHONY: release
-release: release-precheck precheckin image release-cli
+release: release-precheck pre-commit image release-cli


@@ -1,5 +1,5 @@
-controller: go run ./cmd/argocd-application-controller/main.go --redis localhost:6379 --repo-server localhost:8081
-api-server: go run ./cmd/argocd-server/main.go --redis localhost:6379 --disable-auth --insecure --dex-server http://localhost:5556 --repo-server localhost:8081 --staticassets ../argo-cd-ui/dist/app
+controller: sh -c "ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-application-controller/main.go --loglevel debug --redis localhost:6379 --repo-server localhost:8081"
+api-server: sh -c "ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:6379 --disable-auth --insecure --dex-server http://localhost:5556 --repo-server localhost:8081 --staticassets ../argo-cd-ui/dist/app"
repo-server: go run ./cmd/argocd-repo-server/main.go --loglevel debug --redis localhost:6379
dex: sh -c "go run ./cmd/argocd-util/main.go gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p 5556:5556 -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/dexidp/dex:v2.14.0 serve /dex.yaml"
redis: docker run --rm -i -p 6379:6379 redis:5.0.3-alpine --save "" --appendonly no


@@ -1 +1 @@
-0.12.0
+0.12.2


@@ -1503,6 +1503,9 @@
"clusterOIDCConfig": {
"type": "object",
"properties": {
"cliClientID": {
"type": "string"
},
"clientID": {
"type": "string"
},
@@ -1526,6 +1529,12 @@
"oidcConfig": {
"$ref": "#/definitions/clusterOIDCConfig"
},
"resourceOverrides": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/v1alpha1ResourceOverride"
}
},
"url": {
"type": "string"
}
@@ -1693,11 +1702,19 @@
"title": "KustomizeAppSpec contains kustomize app name and path in source repo",
"properties": {
"imageTags": {
"description": "imageTags is a list of available image tags. This is only populated for Kustomize 1.",
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1KustomizeImageTag"
}
},
"images": {
"description": "images is a list of available images. This is only populated for Kustomize 2.",
"type": "array",
"items": {
"type": "string"
}
},
"path": {
"type": "string"
}
@@ -2477,6 +2494,13 @@
"$ref": "#/definitions/v1alpha1KustomizeImageTag"
}
},
"images": {
"type": "array",
"title": "Images are kustomize 2.0 image overrides",
"items": {
"type": "string"
}
},
"namePrefix": {
"type": "string",
"title": "NamePrefix is a prefix appended to resources for kustomize apps"
@@ -2543,12 +2567,18 @@
"operationState": {
"$ref": "#/definitions/v1alpha1OperationState"
},
"reconciledAt": {
"$ref": "#/definitions/v1Time"
},
"resources": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1ResourceStatus"
}
},
"sourceType": {
"type": "string"
},
"sync": {
"$ref": "#/definitions/v1alpha1SyncStatus"
}
@@ -2841,6 +2871,10 @@
"connectionState": {
"$ref": "#/definitions/v1alpha1ConnectionState"
},
"insecureIgnoreHostKey": {
"type": "boolean",
"format": "boolean"
},
"password": {
"type": "string"
},
@@ -2957,6 +2991,18 @@
}
}
},
"v1alpha1ResourceOverride": {
"type": "object",
"title": "ResourceOverride holds configuration to customize resource diffing and health assessment",
"properties": {
"healthLua": {
"type": "string"
},
"ignoreDifferences": {
"type": "string"
}
}
},
"v1alpha1ResourceResult": {
"type": "object",
"title": "ResourceResult holds the operation result details of a specific resource",

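The new resourceOverrides settings field carries healthLua and ignoreDifferences as opaque strings keyed by group/kind. A hedged sketch of the wire shape: the struct below is a hand-written mirror of the v1alpha1ResourceOverride definition above, not the generated client type, and the key format and override values are assumptions for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ResourceOverride mirrors the v1alpha1ResourceOverride swagger definition.
type ResourceOverride struct {
	HealthLua         string `json:"healthLua,omitempty"`
	IgnoreDifferences string `json:"ignoreDifferences,omitempty"`
}

func main() {
	overrides := map[string]ResourceOverride{
		// Keying by "group/Kind" is an assumption for illustration.
		"apps/Deployment": {
			HealthLua:         `hs = {}; hs.status = "Healthy"; return hs`,
			IgnoreDifferences: "jsonPointers:\n- /spec/replicas",
		},
	}
	out, _ := json.MarshalIndent(overrides, "", "  ")
	fmt.Println(string(out))
}
```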

@@ -3,19 +3,21 @@ package main
import (
"fmt"
"net"
"net/http"
"os"
"time"
"github.com/prometheus/client_golang/prometheus/promhttp"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
-"github.com/argoproj/argo-cd"
+argocd "github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/ksonnet"
"github.com/argoproj/argo-cd/util/stats"
"github.com/argoproj/argo-cd/util/tls"
)
@@ -23,7 +25,6 @@ import (
const (
// CLIName is the name of the CLI
cliName = "argocd-repo-server"
-port = 8081
)
func newCommand() *cobra.Command {
@@ -48,14 +49,13 @@ func newCommand() *cobra.Command {
server, err := reposerver.NewServer(git.NewFactory(), cache, tlsConfigCustomizer, parallelismLimit)
errors.CheckError(err)
grpc := server.CreateGRPC()
-listener, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
+listener, err := net.Listen("tcp", fmt.Sprintf(":%d", common.PortRepoServer))
errors.CheckError(err)
ksVers, err := ksonnet.KsonnetVersion()
errors.CheckError(err)
http.Handle("/metrics", promhttp.Handler())
go func() { errors.CheckError(http.ListenAndServe(fmt.Sprintf(":%d", common.PortRepoServerMetrics), nil)) }()
log.Infof("argocd-repo-server %s serving on %s", argocd.GetVersion(), listener.Addr())
log.Infof("ksonnet version: %s", ksVers)
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
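The hunk above moves the repo server onto the shared port constants and exposes Prometheus metrics on a dedicated port (PortRepoServerMetrics, 8084). Stripped of Argo CD specifics, the metrics endpoint is just the standard client_golang handler on a side listener; a minimal sketch:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Serve /metrics on its own port so scraping never competes
	// with (or exposes) the main gRPC listener.
	http.Handle("/metrics", promhttp.Handler())
	go func() {
		log.Fatal(http.ListenAndServe(":8084", nil))
	}()

	select {} // stand-in for the real gRPC Serve loop
}
```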


@@ -81,7 +81,7 @@ func NewCommand() *cobra.Command {
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
argocd := server.NewServer(ctx, argoCDOpts)
-argocd.Run(ctx, 8080)
+argocd.Run(ctx, common.PortAPIServer)
cancel()
}
},


@@ -1,26 +1,31 @@
package main
import (
"bufio"
"context"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"strings"
"syscall"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/util"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
apiv1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/dex"
@@ -36,9 +41,15 @@ import (
const (
// CLIName is the name of the CLI
cliName = "argocd-util"
// YamlSeparator separates sections of a YAML file
-yamlSeparator = "\n---\n"
+yamlSeparator = "---\n"
)
var (
configMapResource = schema.GroupVersionResource{Group: "", Version: "v1", Resource: "configmaps"}
secretResource = schema.GroupVersionResource{Group: "", Version: "v1", Resource: "secrets"}
applicationsResource = schema.GroupVersionResource{Group: "argoproj.io", Version: "v1alpha1", Resource: "applications"}
appprojectsResource = schema.GroupVersionResource{Group: "argoproj.io", Version: "v1alpha1", Resource: "appprojects"}
)
// NewCommand returns a new instance of an argocd command
@@ -195,94 +206,153 @@ func NewGenDexConfigCommand() *cobra.Command {
func NewImportCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
prune bool
dryRun bool
)
var command = cobra.Command{
Use: "import SOURCE",
Short: "Import Argo CD data from stdin (specify `-') or a file",
RunE: func(c *cobra.Command, args []string) error {
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
var (
input []byte
err error
newSettings *settings.ArgoCDSettings
newRepos []*v1alpha1.Repository
newClusters []*v1alpha1.Cluster
newApps []*v1alpha1.Application
newRBACCM *apiv1.ConfigMap
)
if in := args[0]; in == "-" {
input, err = ioutil.ReadAll(os.Stdin)
errors.CheckError(err)
} else {
input, err = ioutil.ReadFile(in)
errors.CheckError(err)
}
inputStrings := strings.Split(string(input), yamlSeparator)
err = yaml.Unmarshal([]byte(inputStrings[0]), &newSettings)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[1]), &newRepos)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[2]), &newClusters)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[3]), &newApps)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[4]), &newRBACCM)
errors.CheckError(err)
config, err := clientConfig.ClientConfig()
config.QPS = 100
config.Burst = 50
errors.CheckError(err)
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
kubeClientset := kubernetes.NewForConfigOrDie(config)
acdClients := newArgoCDClientsets(config, namespace)
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClientset, namespace)
err = settingsMgr.SaveSettings(newSettings)
var input []byte
if in := args[0]; in == "-" {
input, err = ioutil.ReadAll(os.Stdin)
} else {
input, err = ioutil.ReadFile(in)
}
errors.CheckError(err)
db := db.NewDB(namespace, settingsMgr, kubeClientset)
var dryRunMsg string
if dryRun {
dryRunMsg = " (dry run)"
}
_, err = kubeClientset.CoreV1().ConfigMaps(namespace).Create(newRBACCM)
// pruneObjects tracks each live object and its current resource version. Any remaining
// items in this map indicate resources that should be pruned, since they no longer appear
// in the backup
pruneObjects := make(map[kube.ResourceKey]string)
configMaps, err := acdClients.configMaps.List(metav1.ListOptions{})
errors.CheckError(err)
for _, cm := range configMaps.Items {
cmName := cm.GetName()
if cmName == common.ArgoCDConfigMapName || cmName == common.ArgoCDRBACConfigMapName {
pruneObjects[kube.ResourceKey{Group: "", Kind: "ConfigMap", Name: cm.GetName()}] = cm.GetResourceVersion()
}
}
secrets, err := acdClients.secrets.List(metav1.ListOptions{})
errors.CheckError(err)
for _, secret := range secrets.Items {
if isArgoCDSecret(nil, secret) {
pruneObjects[kube.ResourceKey{Group: "", Kind: "Secret", Name: secret.GetName()}] = secret.GetResourceVersion()
}
}
applications, err := acdClients.applications.List(metav1.ListOptions{})
errors.CheckError(err)
for _, app := range applications.Items {
pruneObjects[kube.ResourceKey{Group: "argoproj.io", Kind: "Application", Name: app.GetName()}] = app.GetResourceVersion()
}
projects, err := acdClients.projects.List(metav1.ListOptions{})
errors.CheckError(err)
for _, proj := range projects.Items {
pruneObjects[kube.ResourceKey{Group: "argoproj.io", Kind: "AppProject", Name: proj.GetName()}] = proj.GetResourceVersion()
}
for _, repo := range newRepos {
_, err := db.CreateRepository(context.Background(), repo)
if err != nil {
log.Warn(err)
// Create or replace existing object
objs, err := kube.SplitYAML(string(input))
errors.CheckError(err)
for _, obj := range objs {
gvk := obj.GroupVersionKind()
key := kube.ResourceKey{Group: gvk.Group, Kind: gvk.Kind, Name: obj.GetName()}
resourceVersion, exists := pruneObjects[key]
delete(pruneObjects, key)
var dynClient dynamic.ResourceInterface
switch obj.GetKind() {
case "Secret":
dynClient = acdClients.secrets
case "ConfigMap":
dynClient = acdClients.configMaps
case "AppProject":
dynClient = acdClients.projects
case "Application":
dynClient = acdClients.applications
}
if !exists {
if !dryRun {
_, err = dynClient.Create(obj, metav1.CreateOptions{})
errors.CheckError(err)
}
fmt.Printf("%s/%s %s created%s\n", gvk.Group, gvk.Kind, obj.GetName(), dryRunMsg)
} else {
if !dryRun {
obj.SetResourceVersion(resourceVersion)
_, err = dynClient.Update(obj, metav1.UpdateOptions{})
errors.CheckError(err)
}
fmt.Printf("%s/%s %s replaced%s\n", gvk.Group, gvk.Kind, obj.GetName(), dryRunMsg)
}
}
for _, cluster := range newClusters {
_, err := db.CreateCluster(context.Background(), cluster)
if err != nil {
log.Warn(err)
// Delete objects not in backup
for key := range pruneObjects {
if prune {
var dynClient dynamic.ResourceInterface
switch key.Kind {
case "Secret":
dynClient = acdClients.secrets
case "AppProject":
dynClient = acdClients.projects
case "Application":
dynClient = acdClients.applications
default:
log.Fatalf("Unexpected kind '%s' in prune list", key.Kind)
}
if !dryRun {
err = dynClient.Delete(key.Name, &metav1.DeleteOptions{})
errors.CheckError(err)
}
fmt.Printf("%s/%s %s pruned%s\n", key.Group, key.Kind, key.Name, dryRunMsg)
} else {
fmt.Printf("%s/%s %s needs pruning\n", key.Group, key.Kind, key.Name)
}
}
appClientset := appclientset.NewForConfigOrDie(config)
for _, app := range newApps {
out, err := appClientset.ArgoprojV1alpha1().Applications(namespace).Create(app)
errors.CheckError(err)
log.Println(out)
}
return nil
},
}
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().BoolVar(&dryRun, "dry-run", false, "Print what will be performed")
command.Flags().BoolVar(&prune, "prune", false, "Prune secrets, applications and projects which do not appear in the backup")
return &command
}
type argoCDClientsets struct {
configMaps dynamic.ResourceInterface
secrets dynamic.ResourceInterface
applications dynamic.ResourceInterface
projects dynamic.ResourceInterface
}
func newArgoCDClientsets(config *rest.Config, namespace string) *argoCDClientsets {
dynamicIf, err := dynamic.NewForConfig(config)
errors.CheckError(err)
return &argoCDClientsets{
configMaps: dynamicIf.Resource(configMapResource).Namespace(namespace),
secrets: dynamicIf.Resource(secretResource).Namespace(namespace),
applications: dynamicIf.Resource(applicationsResource).Namespace(namespace),
projects: dynamicIf.Resource(appprojectsResource).Namespace(namespace),
}
}
// NewExportCommand defines a new command for exporting Kubernetes and Argo CD resources.
func NewExportCommand() *cobra.Command {
var (
@@ -292,75 +362,48 @@ func NewExportCommand() *cobra.Command {
var command = cobra.Command{
Use: "export",
Short: "Export all Argo CD data to stdout (default) or a file",
RunE: func(c *cobra.Command, args []string) error {
Run: func(c *cobra.Command, args []string) {
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
kubeClientset := kubernetes.NewForConfigOrDie(config)
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClientset, namespace)
settings, err := settingsMgr.GetSettings()
errors.CheckError(err)
// certificate data is included in secrets that are exported alongside
settings.Certificate = nil
db := db.NewDB(namespace, settingsMgr, kubeClientset)
clusters, err := db.ListClusters(context.Background())
errors.CheckError(err)
repoURLs, err := db.ListRepoURLs(context.Background())
errors.CheckError(err)
repos := make([]*v1alpha1.Repository, len(repoURLs))
for i := range repoURLs {
repo, err := db.GetRepository(context.Background(), repoURLs[i])
errors.CheckError(err)
repos = append(repos, repo)
}
appClientset := appclientset.NewForConfigOrDie(config)
apps, err := appClientset.ArgoprojV1alpha1().Applications(namespace).List(metav1.ListOptions{})
errors.CheckError(err)
rbacCM, err := kubeClientset.CoreV1().ConfigMaps(namespace).Get(common.ArgoCDRBACConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
// remove extraneous cruft from output
rbacCM.ObjectMeta = metav1.ObjectMeta{
Name: rbacCM.ObjectMeta.Name,
}
// remove extraneous cruft from output
for idx, app := range apps.Items {
apps.Items[idx].ObjectMeta = metav1.ObjectMeta{
Name: app.ObjectMeta.Name,
Finalizers: app.ObjectMeta.Finalizers,
}
apps.Items[idx].Status = v1alpha1.ApplicationStatus{
History: app.Status.History,
}
apps.Items[idx].Operation = nil
}
// take a list of exportable objects, marshal them to YAML,
// and return a string joined by a delimiter
output := func(delimiter string, oo ...interface{}) string {
out := make([]string, 0)
for _, o := range oo {
data, err := yaml.Marshal(o)
errors.CheckError(err)
out = append(out, string(data))
}
return strings.Join(out, delimiter)
}(yamlSeparator, settings, clusters.Items, repos, apps.Items, rbacCM)
var writer io.Writer
if out == "-" {
fmt.Println(output)
writer = os.Stdout
} else {
err = ioutil.WriteFile(out, []byte(output), 0644)
f, err := os.Create(out)
errors.CheckError(err)
defer util.Close(f)
writer = bufio.NewWriter(f)
}
acdClients := newArgoCDClientsets(config, namespace)
acdConfigMap, err := acdClients.configMaps.Get(common.ArgoCDConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
export(writer, *acdConfigMap)
acdRBACConfigMap, err := acdClients.configMaps.Get(common.ArgoCDRBACConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
export(writer, *acdRBACConfigMap)
referencedSecrets := getReferencedSecrets(*acdConfigMap)
secrets, err := acdClients.secrets.List(metav1.ListOptions{})
errors.CheckError(err)
for _, secret := range secrets.Items {
if isArgoCDSecret(referencedSecrets, secret) {
export(writer, secret)
}
}
projects, err := acdClients.projects.List(metav1.ListOptions{})
errors.CheckError(err)
for _, proj := range projects.Items {
export(writer, proj)
}
applications, err := acdClients.applications.List(metav1.ListOptions{})
errors.CheckError(err)
for _, app := range applications.Items {
export(writer, app)
}
return nil
},
}
@@ -370,13 +413,109 @@ func NewExportCommand() *cobra.Command {
return &command
}
// NewClusterConfig returns a new instance of `argocd-util cluster-kubeconfig` command
// getReferencedSecrets examines the argocd-cm config for any referenced repo secrets and returns a
// map of all referenced secrets.
func getReferencedSecrets(un unstructured.Unstructured) map[string]bool {
var cm apiv1.ConfigMap
err := runtime.DefaultUnstructuredConverter.FromUnstructured(un.Object, &cm)
errors.CheckError(err)
referencedSecrets := make(map[string]bool)
if reposRAW, ok := cm.Data["repositories"]; ok {
repoCreds := make([]settings.RepoCredentials, 0)
err := yaml.Unmarshal([]byte(reposRAW), &repoCreds)
errors.CheckError(err)
for _, cred := range repoCreds {
if cred.PasswordSecret != nil {
referencedSecrets[cred.PasswordSecret.Name] = true
}
if cred.SSHPrivateKeySecret != nil {
referencedSecrets[cred.SSHPrivateKeySecret.Name] = true
}
if cred.UsernameSecret != nil {
referencedSecrets[cred.UsernameSecret.Name] = true
}
}
}
if helmReposRAW, ok := cm.Data["helm.repositories"]; ok {
helmRepoCreds := make([]settings.HelmRepoCredentials, 0)
err := yaml.Unmarshal([]byte(helmReposRAW), &helmRepoCreds)
errors.CheckError(err)
for _, cred := range helmRepoCreds {
if cred.CASecret != nil {
referencedSecrets[cred.CASecret.Name] = true
}
if cred.CertSecret != nil {
referencedSecrets[cred.CertSecret.Name] = true
}
if cred.KeySecret != nil {
referencedSecrets[cred.KeySecret.Name] = true
}
if cred.UsernameSecret != nil {
referencedSecrets[cred.UsernameSecret.Name] = true
}
if cred.PasswordSecret != nil {
referencedSecrets[cred.PasswordSecret.Name] = true
}
}
}
return referencedSecrets
}
// isArgoCDSecret returns whether or not the given secret is a part of Argo CD configuration
// (e.g. argocd-secret, repo credentials, or cluster credentials)
func isArgoCDSecret(repoSecretRefs map[string]bool, un unstructured.Unstructured) bool {
secretName := un.GetName()
if secretName == common.ArgoCDSecretName {
return true
}
if repoSecretRefs != nil {
if _, ok := repoSecretRefs[secretName]; ok {
return true
}
}
if labels := un.GetLabels(); labels != nil {
if _, ok := labels[common.LabelKeySecretType]; ok {
return true
}
}
if annotations := un.GetAnnotations(); annotations != nil {
if annotations[common.AnnotationKeyManagedBy] == common.AnnotationValueManagedByArgoCD {
return true
}
}
return false
}
// export strips extraneous cruft from the unstructured object's metadata, then writes it to the output
func export(w io.Writer, un unstructured.Unstructured) {
name := un.GetName()
finalizers := un.GetFinalizers()
apiVersion := un.GetAPIVersion()
kind := un.GetKind()
labels := un.GetLabels()
annotations := un.GetAnnotations()
unstructured.RemoveNestedField(un.Object, "metadata")
un.SetName(name)
un.SetFinalizers(finalizers)
un.SetAPIVersion(apiVersion)
un.SetKind(kind)
un.SetLabels(labels)
un.SetAnnotations(annotations)
data, err := yaml.Marshal(un.Object)
errors.CheckError(err)
_, err = w.Write(data)
errors.CheckError(err)
_, err = w.Write([]byte(yamlSeparator))
errors.CheckError(err)
}
// NewClusterConfig returns a new instance of `argocd-util kubeconfig` command
func NewClusterConfig() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
)
var command = &cobra.Command{
-Use: "cluster-kubeconfig CLUSTER_URL OUTPUT_PATH",
+Use: "kubeconfig CLUSTER_URL OUTPUT_PATH",
Short: "Generates kubeconfig for the specified cluster",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
@@ -387,12 +526,8 @@ func NewClusterConfig() *cobra.Command {
output := args[1]
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
-namespace, wasSpecified, err := clientConfig.Namespace()
+namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
-if !(wasSpecified) {
-namespace = "argocd"
-}
kubeclientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
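The reworked export now emits each object as its own YAML document terminated by the --- separator, and import reads the stream back generically (via kube.SplitYAML) instead of assuming a fixed five-section layout. A sketch of the consuming side, using apimachinery's YAML reader in place of the repo's helper (an assumed but equivalent approach):

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	utilyaml "k8s.io/apimachinery/pkg/util/yaml"
	"sigs.k8s.io/yaml"
)

func main() {
	// Read a backup stream (e.g. the output of `argocd-util export`)
	// one YAML document at a time.
	reader := utilyaml.NewYAMLReader(bufio.NewReader(os.Stdin))
	for {
		doc, err := reader.Read()
		if err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		var obj unstructured.Unstructured
		if err := yaml.Unmarshal(doc, &obj.Object); err != nil {
			panic(err)
		}
		// Group/Kind/Name is exactly the key the import command uses
		// to decide between create, replace, and prune.
		fmt.Printf("%s/%s %s\n", obj.GroupVersionKind().Group, obj.GetKind(), obj.GetName())
	}
}
```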


@@ -27,13 +27,13 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apiclient"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/server/application"
apirepository "github.com/argoproj/argo-cd/server/repository"
"github.com/argoproj/argo-cd/server/settings"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
@@ -43,7 +43,6 @@ import (
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/hook"
"github.com/argoproj/argo-cd/util/kube"
argosettings "github.com/argoproj/argo-cd/util/settings"
)
// NewApplicationCommand returns a new instance of an `argocd app` command
@@ -116,7 +115,7 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
},
}
setAppOptions(c.Flags(), &app, &appOpts)
-setParameterOverrides(&app, argocdClient, appOpts.parameters)
+setParameterOverrides(&app, appOpts.parameters)
}
if app.Name == "" {
c.HelpFunc()(c, args)
@@ -225,6 +224,7 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
func printAppSummaryTable(app *argoappv1.Application, appURL string) {
fmt.Printf(printOpFmtStr, "Name:", app.Name)
fmt.Printf(printOpFmtStr, "Project:", app.Spec.GetProject())
fmt.Printf(printOpFmtStr, "Server:", app.Spec.Destination.Server)
fmt.Printf(printOpFmtStr, "Namespace:", app.Spec.Destination.Namespace)
fmt.Printf(printOpFmtStr, "URL:", appURL)
@@ -353,7 +353,7 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
c.HelpFunc()(c, args)
os.Exit(1)
}
-setParameterOverrides(app, argocdClient, appOpts.parameters)
+setParameterOverrides(app, appOpts.parameters)
_, err = appIf.UpdateSpec(ctx, &application.ApplicationUpdateSpecRequest{
Name: &app.Name,
Spec: app.Spec,
@@ -611,6 +611,17 @@ func getLocalObjects(app *argoappv1.Application, local string, appLabelKey strin
return objs
}
type resourceInfoProvider struct {
namespacedByGk map[schema.GroupKind]bool
}
// Infer if obj is namespaced or not from corresponding live objects list. If corresponding live object has namespace then target object is also namespaced.
// If live object is missing then it does not matter if target is namespaced or not.
func (p *resourceInfoProvider) IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error) {
key := kube.GetResourceKey(obj)
return p.namespacedByGk[key.GroupKind()], nil
}
func groupLocalObjs(localObs []*unstructured.Unstructured, liveObjs []*unstructured.Unstructured, appNamespace string) map[kube.ResourceKey]*unstructured.Unstructured {
namespacedByGk := make(map[schema.GroupKind]bool)
for i := range liveObjs {
@@ -619,22 +630,13 @@ func groupLocalObjs(localObs []*unstructured.Unstructured, liveObjs []*unstructu
namespacedByGk[schema.GroupKind{Group: key.Group, Kind: key.Kind}] = key.Namespace != ""
}
}
localObs, _, err := controller.DeduplicateTargetObjects("", appNamespace, localObs, &resourceInfoProvider{namespacedByGk: namespacedByGk})
errors.CheckError(err)
objByKey := make(map[kube.ResourceKey]*unstructured.Unstructured)
for i := range localObs {
obj := localObs[i]
gk := obj.GroupVersionKind().GroupKind()
// Infer if obj is namespaced or not from corresponding live objects list. If corresponding live object has namespace then target object is also namespaced.
// If live object is missing then it does not matter if target is namespaced or not.
namespace := obj.GetNamespace()
if !namespacedByGk[gk] {
namespace = ""
} else {
if namespace == "" {
namespace = appNamespace
}
}
if !hook.IsHook(obj) {
objByKey[kube.NewResourceKey(gk.Group, gk.Kind, namespace, obj.GetName())] = obj
objByKey[kube.GetResourceKey(obj)] = obj
}
}
return objByKey
@@ -651,11 +653,11 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
var command = &cobra.Command{
Use: "diff APPNAME",
Short: shortDesc,
-Long: shortDesc + "\nUses 'diff' to render the difference. KUBECTL_EXTERNAL_DIFF environment variable can be used to select your own diff tool.",
+Long: shortDesc + "\nUses 'diff' to render the difference. KUBECTL_EXTERNAL_DIFF environment variable can be used to select your own diff tool.\nReturns the following exit codes: 2 on general errors, 1 when a diff is found, and 0 when no diff is found",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
c.HelpFunc()(c, args)
-os.Exit(1)
+os.Exit(2)
}
clientset := argocdclient.NewClientOrDie(clientOpts)
@@ -673,20 +675,31 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
live *unstructured.Unstructured
target *unstructured.Unstructured
}, 0)
+conn, settingsIf := clientset.NewSettingsClientOrDie()
+defer util.Close(conn)
+argoSettings, err := settingsIf.Get(context.Background(), &settings.SettingsQuery{})
+errors.CheckError(err)
if local != "" {
-conn, settingsIf := clientset.NewSettingsClientOrDie()
-defer util.Close(conn)
-argoSettings, err := settingsIf.Get(context.Background(), &settings.SettingsQuery{})
-errors.CheckError(err)
localObjs := groupLocalObjs(getLocalObjects(app, local, argoSettings.AppLabelKey), liveObjs, app.Spec.Destination.Namespace)
for _, res := range resources.Items {
var live = &unstructured.Unstructured{}
err := json.Unmarshal([]byte(res.LiveState), &live)
errors.CheckError(err)
-key := kube.NewResourceKey(res.Group, res.Kind, res.Namespace, res.Name)
+var key kube.ResourceKey
+if live != nil {
+key = kube.GetResourceKey(live)
+} else {
+var target = &unstructured.Unstructured{}
+err = json.Unmarshal([]byte(res.TargetState), &target)
+errors.CheckError(err)
+key = kube.GetResourceKey(target)
+}
if local, ok := localObjs[key]; ok || live != nil {
-if local != nil {
+if local != nil && !kube.IsCRD(local) {
err = kube.SetAppInstanceLabel(local, argoSettings.AppLabelKey, appName)
errors.CheckError(err)
}
@@ -737,15 +750,20 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
}
foundDiffs := false
for i := range items {
item := items[i]
-// TODO (amatyushentsev): use resource overrides exposed from API server
-normalizer, err := argo.NewDiffNormalizer(app.Spec.IgnoreDifferences, make(map[string]argosettings.ResourceOverride))
+overrides := make(map[string]argoappv1.ResourceOverride)
+for k := range argoSettings.ResourceOverrides {
+val := argoSettings.ResourceOverrides[k]
+overrides[k] = *val
+}
+normalizer, err := argo.NewDiffNormalizer(app.Spec.IgnoreDifferences, overrides)
errors.CheckError(err)
// Diff is already available in ResourceDiff Diff field but we have to recalculate diff again due to https://github.com/yudai/gojsondiff/issues/31
diffRes := diff.Diff(item.target, item.live, normalizer)
fmt.Printf("===== %s/%s %s/%s ======\n", item.key.Group, item.key.Kind, item.key.Namespace, item.key.Name)
if diffRes.Modified || item.target == nil || item.live == nil {
fmt.Printf("===== %s/%s %s/%s ======\n", item.key.Group, item.key.Kind, item.key.Namespace, item.key.Name)
var live *unstructured.Unstructured
var target *unstructured.Unstructured
if item.target != nil && item.live != nil {
@@ -757,9 +775,13 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
target = item.target
}
foundDiffs = true
printDiff(item.key.Name, target, live)
}
}
if foundDiffs {
os.Exit(1)
}
},
}
@@ -1271,6 +1293,10 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
var doPrint bool
stateKey := newState.Key()
if prevState, found := prevStates[stateKey]; found {
if watchHealth && prevState.Health != argoappv1.HealthStatusUnknown && prevState.Health != argoappv1.HealthStatusDegraded && newState.Health == argoappv1.HealthStatusDegraded {
printFinalStatus(app)
return nil, fmt.Errorf("Application '%s' health state has transitioned from %s to %s", appName, prevState.Health, newState.Health)
}
doPrint = prevState.Merge(newState)
} else {
prevStates[stateKey] = newState
@@ -1290,21 +1316,30 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
// If the app is a ksonnet app, then parameters are expected to be in the form: component=param=value
// Otherwise, the app is assumed to be a helm app and is expected to be in the form:
// param=value
-func setParameterOverrides(app *argoappv1.Application, argocdClient argocdclient.Client, parameters []string) {
+func setParameterOverrides(app *argoappv1.Application, parameters []string) {
if len(parameters) == 0 {
return
}
conn, repoIf := argocdClient.NewRepoClientOrDie()
defer util.Close(conn)
var sourceType argoappv1.ApplicationSourceType
if st, _ := app.Spec.Source.ExplicitType(); st != nil {
sourceType = *st
} else if app.Status.SourceType != "" {
sourceType = app.Status.SourceType
} else {
// HACK: we don't know the source type, so make an educated guess based on the supplied
// parameter string. This code handles the corner case where app doesn't exist yet, and the
// command is something like: `argocd app create MYAPP -p foo=bar`
// This logic is not foolproof, but when ksonnet is deprecated, this will no longer matter
// since helm will remain as the only source type which has parameters.
if len(strings.SplitN(parameters[0], "=", 3)) == 3 {
sourceType = argoappv1.ApplicationSourceTypeKsonnet
} else if len(strings.SplitN(parameters[0], "=", 2)) == 2 {
sourceType = argoappv1.ApplicationSourceTypeHelm
}
}
appDetails, err := repoIf.GetAppDetails(context.Background(), &apirepository.RepoAppDetailsQuery{
Repo: app.Spec.Source.RepoURL,
Revision: app.Spec.Source.TargetRevision,
Path: app.Spec.Source.Path,
})
errors.CheckError(err)
if appDetails.Ksonnet != nil {
switch sourceType {
case argoappv1.ApplicationSourceTypeKsonnet:
if app.Spec.Source.Ksonnet == nil {
app.Spec.Source.Ksonnet = &argoappv1.ApplicationSourceKsonnet{}
}
@@ -1330,7 +1365,7 @@ func setParameterOverrides(app *argoappv1.Application, argocdClient argocdclient
app.Spec.Source.Ksonnet.Parameters = append(app.Spec.Source.Ksonnet.Parameters, newParam)
}
}
} else if appDetails.Helm != nil {
case argoappv1.ApplicationSourceTypeHelm:
if app.Spec.Source.Helm == nil {
app.Spec.Source.Helm = &argoappv1.ApplicationSourceHelm{}
}
@@ -1355,7 +1390,7 @@ func setParameterOverrides(app *argoappv1.Application, argocdClient argocdclient
app.Spec.Source.Helm.Parameters = append(app.Spec.Source.Helm.Parameters, newParam)
}
}
} else {
default:
log.Fatalf("Parameters can only be set against Ksonnet or Helm applications")
}
}
@@ -1448,6 +1483,9 @@ const printOpFmtStr = "%-20s%s\n"
const defaultCheckTimeoutSeconds = 0
func printOperationResult(opState *argoappv1.OperationState) {
if opState == nil {
return
}
if opState.SyncResult != nil {
fmt.Printf(printOpFmtStr, "Operation:", "Sync")
fmt.Printf(printOpFmtStr, "Sync Revision:", opState.SyncResult.Revision)
@@ -1665,7 +1703,7 @@ func NewApplicationPatchResourceCommand(clientOpts *argocdclient.ClientOptions)
for i := range liveObjs {
obj := liveObjs[i]
gvk := obj.GroupVersionKind()
-if command.Flags().Changed("group") && kind != gvk.Group {
+if command.Flags().Changed("group") && group != gvk.Group {
continue
}
if namespace != "" && namespace != obj.GetNamespace() {
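With the diff changes above, argocd app diff now signals its outcome through the exit code (0 when there is no diff, 1 when a diff is found, 2 on usage or general errors), which makes it scriptable. A sketch of driving it from Go; the app name my-app is hypothetical:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("argocd", "app", "diff", "my-app")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no diff: live state matches target state")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Println("drift detected between live and target state")
	default:
		fmt.Printf("diff failed to run: %v\n", err)
	}
}
```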


@@ -15,7 +15,7 @@ import (
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
-"k8s.io/apimachinery/pkg/apis/meta/v1"
+v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"


@@ -39,9 +39,10 @@ func NewRepoCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
// NewRepoAddCommand returns a new instance of an `argocd repo add` command
func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
-repo appsv1.Repository
-upsert bool
-sshPrivateKeyPath string
+repo appsv1.Repository
+upsert bool
+sshPrivateKeyPath string
+insecureIgnoreHostKey bool
)
var command = &cobra.Command{
Use: "add REPO",
@@ -59,12 +60,13 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
repo.SSHPrivateKey = string(keyData)
}
repo.InsecureIgnoreHostKey = insecureIgnoreHostKey
// First test the repo *without* username/password. This gives us a hint on whether this
// is a private repo.
// NOTE: it is important not to run git commands to test git credentials on the user's
// system since it may mess with their git credential store (e.g. osx keychain).
// See issue #315
-err := git.TestRepo(repo.Repo, "", "", repo.SSHPrivateKey)
+err := git.TestRepo(repo.Repo, "", "", repo.SSHPrivateKey, repo.InsecureIgnoreHostKey)
if err != nil {
if git.IsSSHURL(repo.Repo) {
// If we failed using git SSH credentials, then the repo is automatically bad
@@ -88,6 +90,7 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
command.Flags().StringVar(&repo.Username, "username", "", "username to the repository")
command.Flags().StringVar(&repo.Password, "password", "", "password to the repository")
command.Flags().StringVar(&sshPrivateKeyPath, "ssh-private-key-path", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
command.Flags().BoolVar(&insecureIgnoreHostKey, "insecure-ignore-host-key", false, "disables SSH strict host key checking")
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing repository with the same name even if the spec differs")
return command
}


@@ -22,6 +22,7 @@ const (
PortRepoServer = 8081
PortArgoCDMetrics = 8082
PortArgoCDAPIServerMetrics = 8083
PortRepoServerMetrics = 8084
)
// Argo CD application related constants


@@ -112,8 +112,8 @@ func NewApplicationController(
}
appInformer, appLister := ctrl.newApplicationInformerAndLister()
projInformer := v1alpha1.NewAppProjectInformer(applicationClientset, namespace, appResyncPeriod, cache.Indexers{})
-stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settings, kubectlCmd, func(appName string) {
-ctrl.requestAppRefresh(appName)
+stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settings, kubectlCmd, func(appName string, fullRefresh bool) {
+ctrl.requestAppRefresh(appName, fullRefresh)
ctrl.appRefreshQueue.Add(fmt.Sprintf("%s/%s", ctrl.namespace, appName))
})
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectlCmd, ctrl.settings, stateCache, projInformer)
@@ -143,11 +143,11 @@ func (ctrl *ApplicationController) getApp(name string) (*appv1.Application, erro
}
func (ctrl *ApplicationController) setAppManagedResources(a *appv1.Application, comparisonResult *comparisonResult) error {
-tree, err := ctrl.resourceTree(a, comparisonResult.managedResources)
+managedResources, err := ctrl.managedResources(a, comparisonResult)
if err != nil {
return err
}
-managedResources, err := ctrl.managedResources(a, comparisonResult)
+tree, err := ctrl.resourceTree(a, managedResources)
if err != nil {
return err
}
@@ -158,20 +158,40 @@ func (ctrl *ApplicationController) setAppManagedResources(a *appv1.Application,
return ctrl.cache.SetAppManagedResources(a.Name, managedResources)
}
-func (ctrl *ApplicationController) resourceTree(a *appv1.Application, resources []managedResource) ([]*appv1.ResourceNode, error) {
+func (ctrl *ApplicationController) resourceTree(a *appv1.Application, managedResources []*appv1.ResourceDiff) ([]*appv1.ResourceNode, error) {
items := make([]*appv1.ResourceNode, 0)
for i := range resources {
managedResource := resources[i]
node := appv1.ResourceNode{
Name: managedResource.Name,
Version: managedResource.Version,
Kind: managedResource.Kind,
Group: managedResource.Group,
Namespace: managedResource.Namespace,
for i := range managedResources {
managedResource := managedResources[i]
var live = &unstructured.Unstructured{}
err := json.Unmarshal([]byte(managedResource.LiveState), &live)
if err != nil {
return nil, err
}
if managedResource.Live != nil {
node.ResourceVersion = managedResource.Live.GetResourceVersion()
children, err := ctrl.stateCache.GetChildren(a.Spec.Destination.Server, managedResource.Live)
var target = &unstructured.Unstructured{}
err = json.Unmarshal([]byte(managedResource.TargetState), &target)
if err != nil {
return nil, err
}
version := ""
resourceVersion := ""
if live != nil {
resourceVersion = live.GetResourceVersion()
version = live.GroupVersionKind().Version
} else if target != nil {
version = target.GroupVersionKind().Version
}
node := appv1.ResourceNode{
Version: version,
ResourceVersion: resourceVersion,
Name: managedResource.Name,
Kind: managedResource.Kind,
Group: managedResource.Group,
Namespace: managedResource.Namespace,
}
if live != nil {
children, err := ctrl.stateCache.GetChildren(a.Spec.Destination.Server, live)
if err != nil {
return nil, err
}
@@ -269,20 +289,20 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
<-ctx.Done()
}
func (ctrl *ApplicationController) requestAppRefresh(appName string) {
func (ctrl *ApplicationController) requestAppRefresh(appName string, fullRefresh bool) {
ctrl.refreshRequestedAppsMutex.Lock()
defer ctrl.refreshRequestedAppsMutex.Unlock()
ctrl.refreshRequestedApps[appName] = true
ctrl.refreshRequestedApps[appName] = fullRefresh || ctrl.refreshRequestedApps[appName]
}
func (ctrl *ApplicationController) isRefreshRequested(appName string) bool {
func (ctrl *ApplicationController) isRefreshRequested(appName string) (bool, bool) {
ctrl.refreshRequestedAppsMutex.Lock()
defer ctrl.refreshRequestedAppsMutex.Unlock()
_, ok := ctrl.refreshRequestedApps[appName]
fullRefresh, ok := ctrl.refreshRequestedApps[appName]
if ok {
delete(ctrl.refreshRequestedApps, appName)
}
return ok
return ok, fullRefresh
}
func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext bool) {
@@ -347,7 +367,9 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
}
objs := make([]*unstructured.Unstructured, 0)
for k := range objsMap {
objs = append(objs, objsMap[k])
if objsMap[k].GetDeletionTimestamp() == nil && !kube.IsCRD(objsMap[k]) {
objs = append(objs, objsMap[k])
}
}
err = util.RunAllAsync(len(objs), func(i int) error {
obj := objs[i]
@@ -475,7 +497,7 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
if state.Phase.Completed() {
// if we just completed an operation, force a refresh so that UI will report up-to-date
// sync/health information
ctrl.requestAppRefresh(app.ObjectMeta.Name)
ctrl.requestAppRefresh(app.ObjectMeta.Name, true)
}
}
@@ -566,19 +588,39 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
needRefresh, refreshType := ctrl.needRefreshAppStatus(origApp, ctrl.statusRefreshTimeout)
needRefresh, refreshType, fullRefresh := ctrl.needRefreshAppStatus(origApp, ctrl.statusRefreshTimeout)
if !needRefresh {
return
}
startTime := time.Now()
defer func() {
reconcileDuration := time.Now().Sub(startTime)
ctrl.metricsServer.IncReconcile(origApp, reconcileDuration)
logCtx := log.WithFields(log.Fields{"application": origApp.Name, "time_ms": reconcileDuration.Seconds() * 1e3})
logCtx := log.WithFields(log.Fields{"application": origApp.Name, "time_ms": reconcileDuration.Seconds() * 1e3, "full": fullRefresh})
logCtx.Info("Reconciliation completed")
}()
app := origApp.DeepCopy()
logCtx := log.WithFields(log.Fields{"application": app.Name})
if !fullRefresh {
if managedResources, err := ctrl.cache.GetAppManagedResources(app.Name); err != nil {
logCtx.Warnf("Failed to get cached managed resources for tree reconciliation, fallback to full reconciliation")
} else {
if tree, err := ctrl.resourceTree(app, managedResources); err != nil {
app.Status.Conditions = []appv1.ApplicationCondition{{Type: appv1.ApplicationConditionComparisonError, Message: err.Error()}}
} else {
if err = ctrl.cache.SetAppResourcesTree(app.Name, tree); err != nil {
logCtx.Errorf("Failed to cache resources tree: %v", err)
return
}
}
app.Status.ObservedAt = metav1.Now()
ctrl.persistAppStatus(origApp, &app.Status)
return
}
}
conditions, hasErrors := ctrl.refreshAppConditions(app)
if hasErrors {
@@ -598,7 +640,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}
err = ctrl.setAppManagedResources(app, compareResult)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{Type: appv1.ApplicationConditionComparisonError, Message: err.Error()})
logCtx.Errorf("Failed to cache app resources: %v", err)
}
syncErrCond := ctrl.autoSync(app, compareResult.syncStatus)
@@ -606,26 +648,32 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
conditions = append(conditions, *syncErrCond)
}
app.Status.ObservedAt = compareResult.observedAt
app.Status.ObservedAt = compareResult.reconciledAt
app.Status.ReconciledAt = compareResult.reconciledAt
app.Status.Sync = *compareResult.syncStatus
app.Status.Health = *compareResult.healthStatus
app.Status.Resources = compareResult.resources
app.Status.Conditions = conditions
app.Status.SourceType = compareResult.appSourceType
ctrl.persistAppStatus(origApp, &app.Status)
return
}
// needRefreshAppStatus answers whether the application status needs to be refreshed.
// Returns true if the application has never been compared, has changed, or the comparison result has expired.
func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application, statusRefreshTimeout time.Duration) (bool, appv1.RefreshType) {
// Additionally returns whether a full refresh was requested.
// If a full refresh is requested, then target and live state should be reconciled; otherwise only the live state tree should be updated.
func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application, statusRefreshTimeout time.Duration) (bool, appv1.RefreshType, bool) {
logCtx := log.WithFields(log.Fields{"application": app.Name})
var reason string
fullRefresh := true
refreshType := appv1.RefreshTypeNormal
expired := app.Status.ObservedAt.Add(statusRefreshTimeout).Before(time.Now().UTC())
expired := app.Status.ReconciledAt.Add(statusRefreshTimeout).Before(time.Now().UTC())
if requestedType, ok := app.IsRefreshRequested(); ok {
refreshType = requestedType
reason = fmt.Sprintf("%s refresh requested", refreshType)
} else if ctrl.isRefreshRequested(app.Name) {
} else if requested, full := ctrl.isRefreshRequested(app.Name); requested {
fullRefresh = full
reason = fmt.Sprintf("controller refresh requested")
} else if app.Status.Sync.Status == appv1.SyncStatusCodeUnknown && expired {
reason = "comparison status unknown"
@@ -634,13 +682,13 @@ func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application,
} else if !app.Spec.Destination.Equals(app.Status.Sync.ComparedTo.Destination) {
reason = "spec.destination differs"
} else if expired {
reason = fmt.Sprintf("comparison expired. observedAt: %v, expiry: %v", app.Status.ObservedAt, statusRefreshTimeout)
reason = fmt.Sprintf("comparison expired. reconciledAt: %v, expiry: %v", app.Status.ReconciledAt, statusRefreshTimeout)
}
if reason != "" {
logCtx.Infof("Refreshing app status (%s)", reason)
return true, refreshType
return true, refreshType, fullRefresh
}
return false, refreshType
return false, refreshType, fullRefresh
}
func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application) ([]appv1.ApplicationCondition, bool) {
@@ -659,7 +707,7 @@ func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application)
})
}
} else {
specConditions, err := argo.GetSpecErrors(context.Background(), &app.Spec, proj, ctrl.repoClientset, ctrl.db)
specConditions, _, err := argo.GetSpecErrors(context.Background(), &app.Spec, proj, ctrl.repoClientset, ctrl.db)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionUnknownError,
@@ -672,11 +720,12 @@ func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application)
// List of condition types which have to be reevaluated by controller; all remaining conditions should stay as is.
reevaluateTypes := map[appv1.ApplicationConditionType]bool{
appv1.ApplicationConditionInvalidSpecError: true,
appv1.ApplicationConditionUnknownError: true,
appv1.ApplicationConditionComparisonError: true,
appv1.ApplicationConditionSharedResourceWarning: true,
appv1.ApplicationConditionSyncError: true,
appv1.ApplicationConditionInvalidSpecError: true,
appv1.ApplicationConditionUnknownError: true,
appv1.ApplicationConditionComparisonError: true,
appv1.ApplicationConditionSharedResourceWarning: true,
appv1.ApplicationConditionSyncError: true,
appv1.ApplicationConditionRepeatedResourceWarning: true,
}
appConditions := make([]appv1.ApplicationCondition, 0)
for i := 0; i < len(app.Status.Conditions); i++ {
@@ -858,7 +907,7 @@ func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.Shar
if oldOK && newOK {
if toggledAutomatedSync(oldApp, newApp) {
log.WithField("application", newApp.Name).Info("Enabled automated sync")
ctrl.requestAppRefresh(newApp.Name)
ctrl.requestAppRefresh(newApp.Name, true)
}
}
ctrl.appRefreshQueue.Add(key)

View File

@@ -105,12 +105,12 @@ data:
# minikube
name: aHR0cHM6Ly9sb2NhbGhvc3Q6NjQ0Mw==
# https://localhost:6443
server: aHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3Zj
server: aHR0cHM6Ly9sb2NhbGhvc3Q6NjQ0Mw==
kind: Secret
metadata:
labels:
argocd.argoproj.io/secret-type: cluster
name: localhost-6443
name: some-secret
namespace: ` + test.FakeArgoCDNamespace + `
type: Opaque
`

View File

@@ -2,7 +2,6 @@ package cache
import (
"context"
"fmt"
"sync"
log "github.com/sirupsen/logrus"
@@ -19,7 +18,7 @@ import (
)
type LiveStateCache interface {
IsNamespaced(server string, gvk schema.GroupVersionKind) (bool, error)
IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error)
// Returns child nodes for a given k8s resource
GetChildren(server string, obj *unstructured.Unstructured) ([]appv1.ResourceNode, error)
// Returns state of live nodes which correspond to target nodes of the specified application.
@@ -43,7 +42,7 @@ func GetTargetObjKey(a *appv1.Application, un *unstructured.Unstructured, isName
return key
}
func NewLiveStateCache(db db.ArgoDB, appInformer cache.SharedIndexInformer, settings *settings.ArgoCDSettings, kubectl kube.Kubectl, onAppUpdated func(appName string)) LiveStateCache {
func NewLiveStateCache(db db.ArgoDB, appInformer cache.SharedIndexInformer, settings *settings.ArgoCDSettings, kubectl kube.Kubectl, onAppUpdated func(appName string, fullRefresh bool)) LiveStateCache {
return &liveStateCache{
appInformer: appInformer,
db: db,
@@ -60,7 +59,7 @@ type liveStateCache struct {
clusters map[string]*clusterInfo
lock *sync.Mutex
appInformer cache.SharedIndexInformer
onAppUpdated func(appName string)
onAppUpdated func(appName string, fullRefresh bool)
kubectl kube.Kubectl
settings *settings.ArgoCDSettings
}
@@ -73,13 +72,6 @@ func (c *liveStateCache) processEvent(event watch.EventType, obj *unstructured.U
return info.processEvent(event, obj)
}
func (c *liveStateCache) removeCluster(server string) {
c.lock.Lock()
defer c.lock.Unlock()
delete(c.clusters, server)
log.Infof("Dropped cluster %s cache", server)
}
func (c *liveStateCache) getCluster(server string) (*clusterInfo, error) {
c.lock.Lock()
defer c.lock.Unlock()
@@ -90,7 +82,7 @@ func (c *liveStateCache) getCluster(server string) (*clusterInfo, error) {
return nil, err
}
info = &clusterInfo{
apis: make(map[schema.GroupKind]*gkInfo),
apisMeta: make(map[schema.GroupKind]*apiMeta),
lock: &sync.Mutex{},
nodes: make(map[kube.ResourceKey]*node),
nsIndex: make(map[string]map[kube.ResourceKey]*node),
@@ -140,12 +132,12 @@ func (c *liveStateCache) Delete(server string, obj *unstructured.Unstructured) e
return clusterInfo.delete(obj)
}
func (c *liveStateCache) IsNamespaced(server string, gvk schema.GroupVersionKind) (bool, error) {
func (c *liveStateCache) IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error) {
clusterInfo, err := c.getSyncedCluster(server)
if err != nil {
return false, err
}
return clusterInfo.isNamespaced(gvk.GroupKind()), nil
return clusterInfo.isNamespaced(obj), nil
}
func (c *liveStateCache) GetChildren(server string, obj *unstructured.Unstructured) ([]appv1.ResourceNode, error) {
@@ -175,135 +167,29 @@ func isClusterHasApps(apps []interface{}, cluster *appv1.Cluster) bool {
// Run watches for resource changes annotated with the application label on all registered clusters and schedules a corresponding app refresh.
func (c *liveStateCache) Run(ctx context.Context) {
watchingClustersLock := sync.Mutex{}
watchingClusters := make(map[string]struct {
cancel context.CancelFunc
cluster *appv1.Cluster
})
util.RetryUntilSucceed(func() error {
clusterEventCallback := func(event *db.ClusterEvent) {
info, ok := watchingClusters[event.Cluster.Server]
hasApps := isClusterHasApps(c.appInformer.GetStore().List(), event.Cluster)
// cluster resources must be watched only if cluster has at least one app
if (event.Type == watch.Deleted || !hasApps) && ok {
info.cancel()
watchingClustersLock.Lock()
delete(watchingClusters, event.Cluster.Server)
watchingClustersLock.Unlock()
} else if event.Type != watch.Deleted && !ok && hasApps {
ctx, cancel := context.WithCancel(ctx)
watchingClustersLock.Lock()
watchingClusters[event.Cluster.Server] = struct {
cancel context.CancelFunc
cluster *appv1.Cluster
}{
cancel: func() {
c.removeCluster(event.Cluster.Server)
cancel()
},
cluster: event.Cluster,
c.lock.Lock()
defer c.lock.Unlock()
if cluster, ok := c.clusters[event.Cluster.Server]; ok {
if event.Type == watch.Deleted {
cluster.invalidate()
delete(c.clusters, event.Cluster.Server)
} else if event.Type == watch.Modified {
cluster.cluster = event.Cluster
cluster.invalidate()
}
watchingClustersLock.Unlock()
go c.watchClusterResources(ctx, *event.Cluster)
} else if event.Type == watch.Added && isClusterHasApps(c.appInformer.GetStore().List(), event.Cluster) {
go func() {
// warm up cache for cluster with apps
_, _ = c.getSyncedCluster(event.Cluster.Server)
}()
}
}
onAppModified := func(obj interface{}) {
if app, ok := obj.(*appv1.Application); ok {
var cluster *appv1.Cluster
info, infoOk := watchingClusters[app.Spec.Destination.Server]
if infoOk {
cluster = info.cluster
} else {
cluster, _ = c.db.GetCluster(ctx, app.Spec.Destination.Server)
}
if cluster != nil {
// trigger cluster event every time an app is created/deleted to either start or stop watching resources
clusterEventCallback(&db.ClusterEvent{Cluster: cluster, Type: watch.Modified})
}
}
}
c.appInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: onAppModified,
UpdateFunc: func(oldObj, newObj interface{}) {
oldApp, oldOk := oldObj.(*appv1.Application)
newApp, newOk := newObj.(*appv1.Application)
if oldOk && newOk {
if oldApp.Spec.Destination.Server != newApp.Spec.Destination.Server {
onAppModified(oldObj)
onAppModified(newApp)
}
}
},
DeleteFunc: onAppModified,
})
return c.db.WatchClusters(ctx, clusterEventCallback)
}, "watch clusters", ctx, clusterRetryTimeout)
<-ctx.Done()
}
// watchClusterResources watches for resource changes annotated with the application label on the specified cluster and schedules a corresponding app refresh.
func (c *liveStateCache) watchClusterResources(ctx context.Context, item appv1.Cluster) {
util.RetryUntilSucceed(func() (err error) {
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("Recovered from panic: %v\n", r)
}
}()
config := item.RESTConfig()
ctx, cancel := context.WithCancel(ctx)
defer cancel()
ch, err := c.kubectl.WatchResources(ctx, config, c.settings, func(gk schema.GroupKind) (s string, e error) {
clusterInfo, err := c.getSyncedCluster(item.Server)
if err != nil {
return "", err
}
return clusterInfo.getResourceVersion(gk), nil
})
if err != nil {
return err
}
for event := range ch {
if event.WatchEvent != nil {
eventObj := event.WatchEvent.Object.(*unstructured.Unstructured)
if kube.IsCRD(eventObj) {
// restart if new CRD has been created after watch started
if event.WatchEvent.Type == watch.Added {
c.removeCluster(item.Server)
return fmt.Errorf("Restarting the watch because a new CRD %s was added", eventObj.GetName())
} else if event.WatchEvent.Type == watch.Deleted {
c.removeCluster(item.Server)
return fmt.Errorf("Restarting the watch because CRD %s was deleted", eventObj.GetName())
}
}
err = c.processEvent(event.WatchEvent.Type, eventObj, item.Server)
if err != nil {
log.Warnf("Failed to process event %s for obj %v: %v", event.WatchEvent.Type, event.WatchEvent.Object, err)
}
} else {
err = c.updateCache(item.Server, event.CacheRefresh.GVK.GroupKind(), event.CacheRefresh.ResourceVersion, event.CacheRefresh.Objects)
if err != nil {
log.Warnf("Failed to process event %s for obj %v: %v", event.WatchEvent.Type, event.WatchEvent.Object, err)
}
}
}
return fmt.Errorf("resource updates channel has closed")
}, fmt.Sprintf("watch app resources on %s", item.Server), ctx, clusterRetryTimeout)
}
func (c *liveStateCache) updateCache(server string, gk schema.GroupKind, resourceVersion string, objs []unstructured.Unstructured) error {
clusterInfo, err := c.getSyncedCluster(server)
if err != nil {
return err
}
clusterInfo.updateCache(gk, resourceVersion, objs)
return nil
}

View File

@@ -1,129 +0,0 @@
package cache
import (
"context"
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/apimachinery/pkg/watch"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/kube/kubetest"
)
const (
pollInterval = 500 * time.Millisecond
)
func TestWatchClusterResourcesHandlesResourceEvents(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
events := make(chan kube.WatchEvent)
defer func() {
cancel()
close(events)
}()
pod := testPod.DeepCopy()
kubeMock := &kubetest.MockKubectlCmd{
Resources: []kube.ResourcesBatch{{
GVK: pod.GroupVersionKind(),
Objects: make([]unstructured.Unstructured, 0),
}},
Events: events,
}
server := "https://test"
clusterCache := newClusterExt(kubeMock)
cache := &liveStateCache{
clusters: map[string]*clusterInfo{server: clusterCache},
lock: &sync.Mutex{},
kubectl: kubeMock,
}
go cache.watchClusterResources(ctx, v1alpha1.Cluster{Server: server})
assert.False(t, clusterCache.synced())
events <- kube.WatchEvent{WatchEvent: &watch.Event{Object: pod, Type: watch.Added}}
err := wait.Poll(pollInterval, wait.ForeverTestTimeout, func() (bool, error) {
_, hasPod := clusterCache.nodes[kube.GetResourceKey(pod)]
return hasPod, nil
})
assert.Nil(t, err)
pod.SetResourceVersion("updated-resource-version")
events <- kube.WatchEvent{WatchEvent: &watch.Event{Object: pod, Type: watch.Modified}}
err = wait.Poll(pollInterval, wait.ForeverTestTimeout, func() (bool, error) {
updatedPodInfo, hasPod := clusterCache.nodes[kube.GetResourceKey(pod)]
return hasPod && updatedPodInfo.resourceVersion == "updated-resource-version", nil
})
assert.Nil(t, err)
events <- kube.WatchEvent{WatchEvent: &watch.Event{Object: pod, Type: watch.Deleted}}
err = wait.Poll(pollInterval, wait.ForeverTestTimeout, func() (bool, error) {
_, hasPod := clusterCache.nodes[kube.GetResourceKey(pod)]
return !hasPod, nil
})
assert.Nil(t, err)
}
func TestClusterCacheDroppedOnCreatedDeletedCRD(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
events := make(chan kube.WatchEvent)
defer func() {
cancel()
close(events)
}()
kubeMock := &kubetest.MockKubectlCmd{
Resources: []kube.ResourcesBatch{{
GVK: testCRD.GroupVersionKind(),
Objects: make([]unstructured.Unstructured, 0),
}},
Events: events,
}
server := "https://test"
clusterCache := newClusterExt(kubeMock)
cache := &liveStateCache{
clusters: map[string]*clusterInfo{server: clusterCache},
lock: &sync.Mutex{},
kubectl: kubeMock,
}
go cache.watchClusterResources(ctx, v1alpha1.Cluster{Server: server})
err := clusterCache.ensureSynced()
assert.Nil(t, err)
events <- kube.WatchEvent{WatchEvent: &watch.Event{Object: testCRD, Type: watch.Added}}
err = wait.Poll(pollInterval, wait.ForeverTestTimeout, func() (bool, error) {
cache.lock.Lock()
defer cache.lock.Unlock()
_, hasCache := cache.clusters[server]
return !hasCache, nil
})
assert.Nil(t, err)
cache.clusters[server] = clusterCache
events <- kube.WatchEvent{WatchEvent: &watch.Event{Object: testCRD, Type: watch.Deleted}}
err = wait.Poll(pollInterval, wait.ForeverTestTimeout, func() (bool, error) {
cache.lock.Lock()
defer cache.lock.Unlock()
_, hasCache := cache.clusters[server]
return !hasCache, nil
})
assert.Nil(t, err)
}

View File

@@ -1,6 +1,7 @@
package cache
import (
"context"
"fmt"
"runtime/debug"
"sync"
@@ -21,44 +22,38 @@ import (
)
const (
clusterSyncTimeout = 24 * time.Hour
clusterRetryTimeout = 10 * time.Second
clusterSyncTimeout = 24 * time.Hour
clusterRetryTimeout = 10 * time.Second
watchResourcesRetryTimeout = 1 * time.Second
)
type gkInfo struct {
resource metav1.APIResource
type apiMeta struct {
namespaced bool
resourceVersion string
watchCancel context.CancelFunc
}
type clusterInfo struct {
apis map[schema.GroupKind]*gkInfo
nodes map[kube.ResourceKey]*node
nsIndex map[string]map[kube.ResourceKey]*node
lock *sync.Mutex
onAppUpdated func(appName string)
syncLock *sync.Mutex
syncTime *time.Time
syncError error
apisMeta map[schema.GroupKind]*apiMeta
lock *sync.Mutex
nodes map[kube.ResourceKey]*node
nsIndex map[string]map[kube.ResourceKey]*node
onAppUpdated func(appName string, fullRefresh bool)
kubectl kube.Kubectl
cluster *appv1.Cluster
syncLock *sync.Mutex
syncTime *time.Time
syncError error
log *log.Entry
settings *settings.ArgoCDSettings
}
func (c *clusterInfo) getResourceVersion(gk schema.GroupKind) string {
func (c *clusterInfo) replaceResourceCache(gk schema.GroupKind, resourceVersion string, objs []unstructured.Unstructured) {
c.lock.Lock()
defer c.lock.Unlock()
info, ok := c.apis[gk]
if ok {
return info.resourceVersion
}
return ""
}
func (c *clusterInfo) updateCache(gk schema.GroupKind, resourceVersion string, objs []unstructured.Unstructured) {
c.lock.Lock()
defer c.lock.Unlock()
info, ok := c.apis[gk]
info, ok := c.apisMeta[gk]
if ok {
objByKind := make(map[kube.ResourceKey]*unstructured.Unstructured)
for i := range objs {
@@ -136,7 +131,13 @@ func (c *clusterInfo) removeNode(key kube.ResourceKey) {
}
func (c *clusterInfo) invalidate() {
c.syncLock.Lock()
defer c.syncLock.Unlock()
c.syncTime = nil
for i := range c.apisMeta {
c.apisMeta[i].watchCancel()
}
c.apisMeta = nil
}
func (c *clusterInfo) synced() bool {
@@ -149,38 +150,163 @@ func (c *clusterInfo) synced() bool {
return time.Now().Before(c.syncTime.Add(clusterSyncTimeout))
}
func (c *clusterInfo) sync() (err error) {
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
}()
func (c *clusterInfo) stopWatching(gk schema.GroupKind) {
c.syncLock.Lock()
defer c.syncLock.Unlock()
if info, ok := c.apisMeta[gk]; ok {
info.watchCancel()
delete(c.apisMeta, gk)
c.replaceResourceCache(gk, "", []unstructured.Unstructured{})
log.Warnf("Stop watching %s not found on %s.", gk, c.cluster.Server)
}
}
c.log.Info("Start syncing cluster")
// startMissingWatches lists supported cluster resources and starts watching for changes unless a watch is already running
func (c *clusterInfo) startMissingWatches() error {
c.apis = make(map[schema.GroupKind]*gkInfo)
c.nodes = make(map[kube.ResourceKey]*node)
resources, err := c.kubectl.GetResources(c.cluster.RESTConfig(), c.settings, "")
apis, err := c.kubectl.GetAPIResources(c.cluster.RESTConfig(), c.settings)
if err != nil {
log.Errorf("Failed to sync cluster %s: %v", c.cluster.Server, err)
return err
}
appLabelKey := c.settings.GetAppInstanceLabelKey()
for res := range resources {
if res.Error != nil {
return res.Error
for i := range apis {
api := apis[i]
if _, ok := c.apisMeta[api.GroupKind]; !ok {
ctx, cancel := context.WithCancel(context.Background())
info := &apiMeta{namespaced: api.Meta.Namespaced, watchCancel: cancel}
c.apisMeta[api.GroupKind] = info
go c.watchEvents(ctx, api, info)
}
if _, ok := c.apis[res.GVK.GroupKind()]; !ok {
c.apis[res.GVK.GroupKind()] = &gkInfo{
resourceVersion: res.ListResourceVersion,
resource: res.ResourceInfo,
}
return nil
}
func runSynced(lock *sync.Mutex, action func() error) error {
lock.Lock()
defer lock.Unlock()
return action()
}
func (c *clusterInfo) watchEvents(ctx context.Context, api kube.APIResourceInfo, info *apiMeta) {
util.RetryUntilSucceed(func() (err error) {
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
}()
err = runSynced(c.syncLock, func() error {
if info.resourceVersion == "" {
list, err := api.Interface.List(metav1.ListOptions{})
if err != nil {
return err
}
c.replaceResourceCache(api.GroupKind, list.GetResourceVersion(), list.Items)
}
return nil
})
if err != nil {
return err
}
w, err := api.Interface.Watch(metav1.ListOptions{ResourceVersion: info.resourceVersion})
if errors.IsNotFound(err) {
c.stopWatching(api.GroupKind)
return nil
}
err = runSynced(c.syncLock, func() error {
if errors.IsGone(err) {
info.resourceVersion = ""
log.Warnf("Resource version of %s on %s is too old.", api.GroupKind, c.cluster.Server)
}
return err
})
if err != nil {
return err
}
defer w.Stop()
for {
select {
case <-ctx.Done():
return nil
case event, ok := <-w.ResultChan():
if ok {
obj := event.Object.(*unstructured.Unstructured)
info.resourceVersion = obj.GetResourceVersion()
err = c.processEvent(event.Type, obj)
if err != nil {
log.Warnf("Failed to process event %s %s/%s/%s: %v", event.Type, obj.GroupVersionKind(), obj.GetNamespace(), obj.GetName(), err)
continue
}
if kube.IsCRD(obj) {
if event.Type == watch.Deleted {
group, groupOk, groupErr := unstructured.NestedString(obj.Object, "spec", "group")
kind, kindOk, kindErr := unstructured.NestedString(obj.Object, "spec", "names", "kind")
if groupOk && groupErr == nil && kindOk && kindErr == nil {
gk := schema.GroupKind{Group: group, Kind: kind}
c.stopWatching(gk)
}
} else {
err = runSynced(c.syncLock, func() error {
return c.startMissingWatches()
})
}
}
if err != nil {
log.Warnf("Failed to start missing watch: %v", err)
}
} else {
return fmt.Errorf("Watch %s on %s has closed", api.GroupKind, c.cluster.Server)
}
}
}
for i := range res.Objects {
c.setNode(createObjInfo(&res.Objects[i], appLabelKey))
}, fmt.Sprintf("watch %s on %s", api.GroupKind, c.cluster.Server), ctx, watchResourcesRetryTimeout)
}
func (c *clusterInfo) sync() (err error) {
c.log.Info("Start syncing cluster")
for i := range c.apisMeta {
c.apisMeta[i].watchCancel()
}
c.apisMeta = make(map[schema.GroupKind]*apiMeta)
c.nodes = make(map[kube.ResourceKey]*node)
apis, err := c.kubectl.GetAPIResources(c.cluster.RESTConfig(), c.settings)
if err != nil {
return err
}
lock := sync.Mutex{}
err = util.RunAllAsync(len(apis), func(i int) error {
api := apis[i]
list, err := api.Interface.List(metav1.ListOptions{})
if err != nil {
return err
}
lock.Lock()
for i := range list.Items {
c.setNode(createObjInfo(&list.Items[i], c.settings.GetAppInstanceLabelKey()))
}
lock.Unlock()
return nil
})
if err == nil {
err = c.startMissingWatches()
}
if err != nil {
log.Errorf("Failed to sync cluster %s: %v", c.cluster.Server, err)
return err
}
c.log.Info("Cluster successfully synced")
@@ -188,9 +314,6 @@ func (c *clusterInfo) sync() (err error) {
}
func (c *clusterInfo) ensureSynced() error {
if c.synced() {
return c.syncError
}
c.syncLock.Lock()
defer c.syncLock.Unlock()
if c.synced() {
@@ -219,8 +342,8 @@ func (c *clusterInfo) getChildren(obj *unstructured.Unstructured) []appv1.Resour
return children
}
func (c *clusterInfo) isNamespaced(gk schema.GroupKind) bool {
if api, ok := c.apis[gk]; ok && !api.resource.Namespaced {
func (c *clusterInfo) isNamespaced(obj *unstructured.Unstructured) bool {
if api, ok := c.apisMeta[kube.GetResourceKey(obj).GroupKind()]; ok && !api.namespaced {
return false
}
return true
@@ -242,7 +365,7 @@ func (c *clusterInfo) getManagedLiveObjs(a *appv1.Application, targetObjs []*uns
lock := &sync.Mutex{}
err := util.RunAllAsync(len(targetObjs), func(i int) error {
targetObj := targetObjs[i]
key := GetTargetObjKey(a, targetObj, c.isNamespaced(targetObj.GroupVersionKind().GroupKind()))
key := GetTargetObjKey(a, targetObj, c.isNamespaced(targetObj))
lock.Lock()
managedObj := managedObjs[key]
lock.Unlock()
@@ -256,7 +379,6 @@ func (c *clusterInfo) getManagedLiveObjs(a *appv1.Application, targetObjs []*uns
managedObj, err = c.kubectl.GetResource(c.cluster.RESTConfig(), targetObj.GroupVersionKind(), existingObj.ref.Name, existingObj.ref.Namespace)
if err != nil {
if errors.IsNotFound(err) {
c.checkAndInvalidateStaleCache(targetObj.GroupVersionKind(), existingObj.ref.Namespace, existingObj.ref.Name)
return nil
}
return err
@@ -284,36 +406,13 @@ func (c *clusterInfo) getManagedLiveObjs(a *appv1.Application, targetObjs []*uns
}
func (c *clusterInfo) delete(obj *unstructured.Unstructured) error {
err := c.kubectl.DeleteResource(c.cluster.RESTConfig(), obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), false)
if err != nil && errors.IsNotFound(err) {
// a delete request came in for an object which does not exist. it's possible that our cache
// is stale. Check and invalidate if it is
c.lock.Lock()
c.checkAndInvalidateStaleCache(obj.GroupVersionKind(), obj.GetNamespace(), obj.GetName())
c.lock.Unlock()
return nil
}
return err
}
// checkAndInvalidateStaleCache checks if our cache is stale and invalidate it based on error
// should be called whenever we suspect our cache is stale
func (c *clusterInfo) checkAndInvalidateStaleCache(gvk schema.GroupVersionKind, namespace string, name string) {
if _, ok := c.nodes[kube.NewResourceKey(gvk.Group, gvk.Kind, namespace, name)]; ok {
if c.syncTime != nil {
c.log.Warnf("invalidated stale cache due to mismatch of %s, %s/%s", gvk, namespace, name)
c.invalidate()
}
}
return c.kubectl.DeleteResource(c.cluster.RESTConfig(), obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), false)
}
func (c *clusterInfo) processEvent(event watch.EventType, un *unstructured.Unstructured) error {
c.lock.Lock()
defer c.lock.Unlock()
key := kube.GetResourceKey(un)
if info, ok := c.apis[schema.GroupKind{Group: key.Group, Kind: key.Kind}]; ok {
info.resourceVersion = un.GetResourceVersion()
}
existingNode, exists := c.nodes[key]
if event == watch.Deleted {
if exists {
@@ -342,18 +441,23 @@ func (c *clusterInfo) onNodeUpdated(exists bool, existingNode *node, un *unstruc
if app == "" || skipAppRequeing(key) {
continue
}
toNotify[app] = true
toNotify[app] = n.isRootAppNode() || toNotify[app]
}
}
for name := range toNotify {
c.onAppUpdated(name)
for name, full := range toNotify {
c.onAppUpdated(name, full)
}
}
func (c *clusterInfo) onNodeRemoved(key kube.ResourceKey, existingNode *node) {
func (c *clusterInfo) onNodeRemoved(key kube.ResourceKey, n *node) {
appName := n.appName
if ns, ok := c.nsIndex[key.Namespace]; ok {
appName = n.getApp(ns)
}
c.removeNode(key)
if existingNode.appName != "" {
c.onAppUpdated(existingNode.appName)
if appName != "" {
c.onAppUpdated(appName, n.isRootAppNode())
}
}

View File

@@ -1,11 +1,15 @@
package cache
import (
"fmt"
"sort"
"strings"
"sync"
"testing"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/dynamic/fake"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
@@ -49,8 +53,7 @@ var (
resourceVersion: "123"`)
testRS = strToUnstructured(`
apiVersion: v1
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: helm-guestbook-rs
@@ -62,52 +65,52 @@ var (
resourceVersion: "123"`)
testDeploy = strToUnstructured(`
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/instance: helm-guestbook
name: helm-guestbook
namespace: default
resourceVersion: "123"`)
testCRD = strToUnstructured(`
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: my-custom-resource-definition
resourceVersion: "123"`)
)
func newCluster(objs ...*unstructured.Unstructured) *clusterInfo {
resByGVK := make(map[schema.GroupVersionKind][]unstructured.Unstructured)
runtimeObjs := make([]runtime.Object, len(objs))
for i := range objs {
resByGVK[objs[i].GroupVersionKind()] = append(resByGVK[objs[i].GroupVersionKind()], *objs[i])
runtimeObjs[i] = objs[i]
}
resources := make([]kube.ResourcesBatch, 0)
for gvk, objects := range resByGVK {
resources = append(resources, kube.ResourcesBatch{
ListResourceVersion: "1",
GVK: gvk,
Objects: objects,
})
}
return newClusterExt(kubetest.MockKubectlCmd{
Resources: resources,
})
scheme := runtime.NewScheme()
client := fake.NewSimpleDynamicClient(scheme, runtimeObjs...)
apiResources := []kube.APIResourceInfo{{
GroupKind: schema.GroupKind{Group: "", Kind: "Pod"},
Interface: client.Resource(schema.GroupVersionResource{Group: "", Version: "v1", Resource: "pods"}),
Meta: metav1.APIResource{Namespaced: true},
}, {
GroupKind: schema.GroupKind{Group: "apps", Kind: "ReplicaSet"},
Interface: client.Resource(schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "replicasets"}),
Meta: metav1.APIResource{Namespaced: true},
}, {
GroupKind: schema.GroupKind{Group: "apps", Kind: "Deployment"},
Interface: client.Resource(schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}),
Meta: metav1.APIResource{Namespaced: true},
}}
return newClusterExt(kubetest.MockKubectlCmd{APIResources: apiResources})
}
func newClusterExt(kubectl kube.Kubectl) *clusterInfo {
return &clusterInfo{
lock: &sync.Mutex{},
nodes: make(map[kube.ResourceKey]*node),
onAppUpdated: func(appName string) {},
onAppUpdated: func(appName string, fullRefresh bool) {},
kubectl: kubectl,
nsIndex: make(map[string]map[kube.ResourceKey]*node),
cluster: &appv1.Cluster{},
syncTime: nil,
syncLock: &sync.Mutex{},
apis: make(map[schema.GroupKind]*gkInfo),
apisMeta: make(map[schema.GroupKind]*apiMeta),
log: log.WithField("cluster", "test"),
settings: &settings.ArgoCDSettings{},
}
@@ -135,8 +138,8 @@ func TestGetChildren(t *testing.T) {
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
Group: "extensions",
Version: "v1beta1",
Group: "apps",
Version: "v1",
ResourceVersion: "123",
Children: rsChildren,
Info: []appv1.InfoItem{},
@@ -149,7 +152,7 @@ func TestGetManagedLiveObjs(t *testing.T) {
assert.Nil(t, err)
targetDeploy := strToUnstructured(`
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: helm-guestbook
@@ -269,8 +272,8 @@ func TestUpdateResourceTags(t *testing.T) {
func TestUpdateAppResource(t *testing.T) {
updatesReceived := make([]string, 0)
cluster := newCluster(testPod, testRS, testDeploy)
cluster.onAppUpdated = func(appName string) {
updatesReceived = append(updatesReceived, appName)
cluster.onAppUpdated = func(appName string, fullRefresh bool) {
updatesReceived = append(updatesReceived, fmt.Sprintf("%s: %v", appName, fullRefresh))
}
err := cluster.ensureSynced()
@@ -279,7 +282,7 @@ func TestUpdateAppResource(t *testing.T) {
err = cluster.processEvent(watch.Modified, mustToUnstructured(testPod))
assert.Nil(t, err)
assert.Equal(t, []string{"helm-guestbook"}, updatesReceived)
assert.Contains(t, updatesReceived, "helm-guestbook: false")
}
func TestCircularReference(t *testing.T) {
@@ -296,6 +299,11 @@ func TestCircularReference(t *testing.T) {
children := cluster.getChildren(dep)
assert.Len(t, children, 1)
node := cluster.nodes[kube.GetResourceKey(dep)]
assert.NotNil(t, node)
app := node.getApp(cluster.nodes)
assert.Equal(t, "", app)
}
func TestWatchCacheUpdated(t *testing.T) {
@@ -316,7 +324,7 @@ func TestWatchCacheUpdated(t *testing.T) {
podGroupKind := testPod.GroupVersionKind().GroupKind()
cluster.updateCache(podGroupKind, "updated-list-version", []unstructured.Unstructured{*updated, *added})
cluster.replaceResourceCache(podGroupKind, "updated-list-version", []unstructured.Unstructured{*updated, *added})
_, ok := cluster.nodes[kube.GetResourceKey(removed)]
assert.False(t, ok)
@@ -327,6 +335,4 @@ func TestWatchCacheUpdated(t *testing.T) {
_, ok = cluster.nodes[kube.GetResourceKey(added)]
assert.True(t, ok)
assert.Equal(t, cluster.getResourceVersion(podGroupKind), "updated-list-version")
}

View File

@@ -3,7 +3,7 @@ package cache
import (
"fmt"
"k8s.io/api/core/v1"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
k8snode "k8s.io/kubernetes/pkg/util/node"

View File

@@ -2,14 +2,11 @@
package mocks
import (
"context"
)
import "github.com/argoproj/argo-cd/util/kube"
import "github.com/stretchr/testify/mock"
import "k8s.io/apimachinery/pkg/runtime/schema"
import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
import "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
import context "context"
import kube "github.com/argoproj/argo-cd/util/kube"
import mock "github.com/stretchr/testify/mock"
import unstructured "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
import v1alpha1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
// LiveStateCache is an autogenerated mock type for the LiveStateCache type
type LiveStateCache struct {
@@ -81,20 +78,20 @@ func (_m *LiveStateCache) Invalidate() {
_m.Called()
}
// IsNamespaced provides a mock function with given fields: server, gvk
func (_m *LiveStateCache) IsNamespaced(server string, gvk schema.GroupVersionKind) (bool, error) {
ret := _m.Called(server, gvk)
// IsNamespaced provides a mock function with given fields: server, obj
func (_m *LiveStateCache) IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error) {
ret := _m.Called(server, obj)
var r0 bool
if rf, ok := ret.Get(0).(func(string, schema.GroupVersionKind) bool); ok {
r0 = rf(server, gvk)
if rf, ok := ret.Get(0).(func(string, *unstructured.Unstructured) bool); ok {
r0 = rf(server, obj)
} else {
r0 = ret.Get(0).(bool)
}
var r1 error
if rf, ok := ret.Get(1).(func(string, schema.GroupVersionKind) error); ok {
r1 = rf(server, gvk)
if rf, ok := ret.Get(1).(func(string, *unstructured.Unstructured) error); ok {
r1 = rf(server, obj)
} else {
r1 = ret.Error(1)
}

View File

@@ -6,7 +6,7 @@ import (
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/kube"
"k8s.io/api/core/v1"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
@@ -21,13 +21,17 @@ type node struct {
resource *unstructured.Unstructured
}
func (n *node) isRootAppNode() bool {
return n.appName != "" && len(n.ownerRefs) == 0
}
func (n *node) resourceKey() kube.ResourceKey {
return kube.NewResourceKey(n.ref.GroupVersionKind().Group, n.ref.Kind, n.ref.Namespace, n.ref.Name)
}
func (n *node) isParentOf(child *node) bool {
ownerGvk := n.ref.GroupVersionKind()
for _, ownerRef := range child.ownerRefs {
ownerGvk := schema.FromAPIVersionAndKind(ownerRef.APIVersion, ownerRef.Kind)
if kube.NewResourceKey(ownerGvk.Group, ownerRef.Kind, n.ref.Namespace, ownerRef.Name) == n.resourceKey() {
return true
}
@@ -45,13 +49,24 @@ func ownerRefGV(ownerRef metav1.OwnerReference) schema.GroupVersion {
}
func (n *node) getApp(ns map[kube.ResourceKey]*node) string {
return n.getAppRecursive(ns, map[kube.ResourceKey]bool{})
}
func (n *node) getAppRecursive(ns map[kube.ResourceKey]*node, visited map[kube.ResourceKey]bool) string {
if !visited[n.resourceKey()] {
visited[n.resourceKey()] = true
} else {
log.Warnf("Circular dependency detected: %v.", visited)
return n.appName
}
if n.appName != "" {
return n.appName
}
for _, ownerRef := range n.ownerRefs {
gv := ownerRefGV(ownerRef)
if parent, ok := ns[kube.NewResourceKey(gv.Group, ownerRef.Kind, n.ref.Namespace, ownerRef.Name)]; ok {
app := parent.getApp(ns)
app := parent.getAppRecursive(ns, visited)
if app != "" {
return app
}

controller/cache/node_test.go vendored Normal file
View File

@@ -0,0 +1,25 @@
package cache
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestIsParentOf(t *testing.T) {
child := createObjInfo(testPod, "")
parent := createObjInfo(testRS, "")
grandParent := createObjInfo(testDeploy, "")
assert.True(t, parent.isParentOf(child))
assert.False(t, grandParent.isParentOf(child))
}
func TestIsParentOfSameKindDifferentGroup(t *testing.T) {
rs := testRS.DeepCopy()
rs.SetAPIVersion("somecrd.io/v1")
child := createObjInfo(testPod, "")
invalidParent := createObjInfo(rs, "")
assert.False(t, invalidParent.isParentOf(child))
}

View File

@@ -60,6 +60,8 @@ var (
func NewMetricsServer(addr string, appLister applister.ApplicationLister) *MetricsServer {
mux := http.NewServeMux()
appRegistry := NewAppRegistry(appLister)
appRegistry.MustRegister(prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{}))
appRegistry.MustRegister(prometheus.NewGoCollector())
mux.Handle(MetricsPath, promhttp.HandlerFor(appRegistry, promhttp.HandlerOpts{}))
syncCounter := prometheus.NewCounterVec(

View File

@@ -220,7 +220,7 @@ func TestReconcileMetrics(t *testing.T) {
metricsServ := NewMetricsServer("localhost:8082", appLister)
fakeApp := newFakeApp(fakeApp)
metricsServ.IncReconcile(fakeApp, time.Duration(5*time.Second))
metricsServ.IncReconcile(fakeApp, 5*time.Second)
req, err := http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)

View File

@@ -49,6 +49,10 @@ func GetLiveObjs(res []managedResource) []*unstructured.Unstructured {
return objs
}
type ResourceInfoProvider interface {
IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error)
}
// AppStateManager defines methods which allow comparing the application spec and the actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool) (*comparisonResult, error)
@@ -56,7 +60,7 @@ type AppStateManager interface {
}
type comparisonResult struct {
observedAt metav1.Time
reconciledAt metav1.Time
syncStatus *v1alpha1.SyncStatus
healthStatus *v1alpha1.HealthStatus
resources []v1alpha1.ResourceStatus
@@ -131,6 +135,43 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
return targetObjs, hooks, manifestInfo, nil
}
func DeduplicateTargetObjects(
server string,
namespace string,
objs []*unstructured.Unstructured,
infoProvider ResourceInfoProvider,
) ([]*unstructured.Unstructured, []v1alpha1.ApplicationCondition, error) {
targetByKey := make(map[kubeutil.ResourceKey][]*unstructured.Unstructured)
for i := range objs {
obj := objs[i]
isNamespaced, err := infoProvider.IsNamespaced(server, obj)
if err != nil {
return objs, nil, err
}
if !isNamespaced {
obj.SetNamespace("")
} else if obj.GetNamespace() == "" {
obj.SetNamespace(namespace)
}
key := kubeutil.GetResourceKey(obj)
targetByKey[key] = append(targetByKey[key], obj)
}
conditions := make([]v1alpha1.ApplicationCondition, 0)
result := make([]*unstructured.Unstructured, 0)
for key, targets := range targetByKey {
if len(targets) > 1 {
conditions = append(conditions, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionRepeatedResourceWarning,
Message: fmt.Sprintf("Resource %s appeared %d times among application resources.", key.String(), len(targets)),
})
}
result = append(result, targets[len(targets)-1])
}
return result, conditions, nil
}
// CompareAppState compares application git state to the live app state, using the specified
// revision and supplied source. If revision or overrides are empty, then compares against
// revision and overrides in the app spec.
@@ -151,6 +192,12 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
failedToLoadObjs = true
}
targetObjs, dedupConditions, err := DeduplicateTargetObjects(app.Spec.Destination.Server, app.Spec.Destination.Namespace, targetObjs, m.liveStateCache)
if err != nil {
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
}
conditions = append(conditions, dedupConditions...)
logCtx.Debugf("Generated config manifests")
liveObjByKey, err := m.liveStateCache.GetManagedLiveObjs(app, targetObjs)
if err != nil {
@@ -175,7 +222,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
for i, obj := range targetObjs {
gvk := obj.GroupVersionKind()
ns := util.FirstNonEmpty(obj.GetNamespace(), app.Spec.Destination.Namespace)
if namespaced, err := m.liveStateCache.IsNamespaced(app.Spec.Destination.Server, obj.GroupVersionKind()); err == nil && !namespaced {
if namespaced, err := m.liveStateCache.IsNamespaced(app.Spec.Destination.Server, obj); err == nil && !namespaced {
ns = ""
}
key := kubeutil.NewResourceKey(gvk.Group, gvk.Kind, ns, obj.GetName())
@@ -215,7 +262,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
gvk := obj.GroupVersionKind()
resState := v1alpha1.ResourceStatus{
Namespace: util.FirstNonEmpty(obj.GetNamespace(), app.Spec.Destination.Namespace),
Namespace: obj.GetNamespace(),
Name: obj.GetName(),
Kind: gvk.Kind,
Version: gvk.Version,
@@ -270,7 +317,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
}
compRes := comparisonResult{
observedAt: observedAt,
reconciledAt: observedAt,
syncStatus: &syncStatus,
healthStatus: healthStatus,
resources: resourceSummaries,
@@ -278,7 +325,9 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision st
conditions: conditions,
hooks: hooks,
diffNormalizer: diffNormalizer,
appSourceType: v1alpha1.ApplicationSourceType(manifestInfo.SourceType),
}
if manifestInfo != nil {
compRes.appSourceType = v1alpha1.ApplicationSourceType(manifestInfo.SourceType)
}
return &compRes, nil
}

View File

@@ -141,3 +141,40 @@ func TestCompareAppStateExtraHook(t *testing.T) {
assert.Equal(t, 1, len(compRes.managedResources))
assert.Equal(t, 0, len(compRes.conditions))
}
func toJSON(t *testing.T, obj *unstructured.Unstructured) string {
data, err := json.Marshal(obj)
assert.NoError(t, err)
return string(data)
}
func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
obj1 := test.NewPod()
obj1.SetNamespace(test.FakeDestNamespace)
obj2 := test.NewPod()
obj3 := test.NewPod()
obj3.SetNamespace("kube-system")
app := newFakeApp()
data := fakeData{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{toJSON(t, obj1), toJSON(t, obj2), toJSON(t, obj3)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(obj1): obj1,
kube.GetResourceKey(obj3): obj3,
},
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Contains(t, compRes.conditions, argoappv1.ApplicationCondition{
Message: "Resource /Pod/fake-dest-ns/my-pod appeared 2 times among application resources.",
Type: argoappv1.ApplicationConditionRepeatedResourceWarning,
})
assert.Equal(t, 2, len(compRes.resources))
}

View File

@@ -4,7 +4,7 @@ import (
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"

View File

@@ -126,7 +126,7 @@ data:
## Clusters
Cluster credentials are stored in secrets, same as repository credentials, but do not require an entry in the `argocd-cm` config map. Each secret must have the label
`argocd.argoproj.io/secret-type: cluster` and a name following the convention `<hostname>-<port>`.
`argocd.argoproj.io/secret-type: cluster`.
The secret data must include the following fields:
* `name` - cluster name
@@ -166,7 +166,7 @@ Cluster secret example:
apiVersion: v1
kind: Secret
metadata:
name: mycluster.com-443
name: mycluster-secret
labels:
argocd.argoproj.io/secret-type: cluster
type: Opaque
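Pulling the documented fields together, a complete cluster secret might look like the following minimal sketch; the `config` field layout and the token/CA values are illustrative placeholders, not taken from this changeset:

```yaml
# Minimal sketch of a cluster secret; token and CA data are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.com
  server: https://mycluster.com
  config: |
    {
      "bearerToken": "<authentication token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```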

View File

@@ -103,4 +103,10 @@ data:
issuer: https://dev-123456.oktapreview.com
clientID: aaaabbbbccccddddeee
clientSecret: $oidc.okta.clientSecret
# Some OIDC providers require a separate clientID for different callback URLs.
# For example, if configuring Argo CD with self-hosted Dex, you will need a separate client ID
# for the 'localhost' (CLI) client to Dex. This field is optional. If omitted, the CLI will
# use the same clientID as the Argo CD server
cliClientID: vvvvwwwwxxxxyyyyzzzz
```

View File

@@ -1,31 +0,0 @@
{
"Vendor": true,
"DisableAll": true,
"Deadline": "8m",
"Enable": [
"vet",
"gofmt",
"goimports",
"deadcode",
"errcheck",
"varcheck",
"structcheck",
"ineffassign",
"unconvert",
"misspell"
],
"Linters": {
"goimports": {"Command": "goimports -l --local github.com/argoproj/argo-cd"}
},
"Skip": [
"pkg/client",
"vendor/",
".pb.go"
],
"Exclude": [
"pkg/client",
"vendor/",
".pb.go",
".*warning.*fmt.Fprint"
]
}

hack/git-ask-pass.sh Executable file
View File

@@ -0,0 +1,7 @@
#!/bin/sh
# This script is used as the command supplied to GIT_ASKPASS as a way to supply username/password
# credentials to git, without having to use git credential helpers or on-disk config.
case "$1" in
Username*) echo "${GIT_USERNAME}" ;;
Password*) echo "${GIT_PASSWORD}" ;;
esac
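For reference, git invokes the GIT_ASKPASS program once per prompt, passing the prompt text (e.g. `Username for 'https://github.com':`) as the first argument, which the `case` statement above matches on. A hypothetical usage, with illustrative URL and credentials:

```sh
# Hypothetical usage sketch: supply HTTPS credentials via environment variables.
export GIT_ASKPASS=hack/git-ask-pass.sh
export GIT_USERNAME=myuser
export GIT_PASSWORD=mytoken
git clone https://github.com/argoproj/argocd-example-apps.git
```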

hack/ssh_known_hosts Normal file
View File

@@ -0,0 +1,8 @@
# This file was automatically generated. DO NOT EDIT
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==
github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
vs-ssh.visualstudio.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H

hack/update-ssh-known-hosts.sh Executable file
View File

@@ -0,0 +1,24 @@
#!/bin/bash
set -e
KNOWN_HOSTS_FILE=$(dirname "$0")/ssh_known_hosts
HEADER="# This file was automatically generated. DO NOT EDIT"
echo "$HEADER" > $KNOWN_HOSTS_FILE
ssh-keyscan github.com gitlab.com bitbucket.org ssh.dev.azure.com vs-ssh.visualstudio.com | sort -u >> $KNOWN_HOSTS_FILE
chmod 0644 $KNOWN_HOSTS_FILE
# Public SSH keys can be verified at the following URLs:
# - github.com: https://help.github.com/articles/github-s-ssh-key-fingerprints/
# - gitlab.com: https://docs.gitlab.com/ee/user/gitlab_com/#ssh-host-keys-fingerprints
# - bitbucket.org: https://confluence.atlassian.com/bitbucket/ssh-keys-935365775.html
# - ssh.dev.azure.com, vs-ssh.visualstudio.com: https://docs.microsoft.com/en-us/azure/devops/repos/git/use-ssh-keys-to-authenticate?view=azure-devops
diff - <(ssh-keygen -l -f $KNOWN_HOSTS_FILE | sort -k 3) <<EOF
2048 SHA256:zzXQOXSRBEiUtuE8AikJYKwbHaxvSc0ojez9YXaGp1A bitbucket.org (RSA)
2048 SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8 github.com (RSA)
256 SHA256:HbW3g8zUjNSksFbqTiUWPWg2Bq1x8xdGUrliXFzSnUw gitlab.com (ECDSA)
256 SHA256:eUXGGm1YGsMAS7vkcx6JOJdOGHPem5gQp4taiCfCLB8 gitlab.com (ED25519)
2048 SHA256:ROQFvPThGrW4RuWLoL9tq9I9zJ42fK4XywyRtbOz/EQ gitlab.com (RSA)
2048 SHA256:ohD8VZEXGWo6Ez8GSEJQ9WpafgLFsOfLOtGGQCQo6Og ssh.dev.azure.com (RSA)
2048 SHA256:ohD8VZEXGWo6Ez8GSEJQ9WpafgLFsOfLOtGGQCQo6Og vs-ssh.visualstudio.com (RSA)
EOF

View File

@@ -7,6 +7,8 @@ metadata:
app.kubernetes.io/component: application-controller
name: argocd-application-controller
spec:
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: argocd-application-controller

View File

@@ -12,7 +12,7 @@ bases:
images:
- name: argoproj/argocd
newName: argoproj/argocd
newTag: latest
newTag: v0.12.2
- name: argoproj/argocd-ui
newName: argoproj/argocd-ui
newTag: latest
newTag: v0.12.2

View File

@@ -26,6 +26,7 @@ spec:
- argocd-redis:6379
ports:
- containerPort: 8081
- containerPort: 8084
readinessProbe:
tcpSocket:
port: 8081

View File

@@ -8,7 +8,13 @@ metadata:
name: argocd-repo-server
spec:
ports:
- port: 8081
- name: server
protocol: TCP
port: 8081
targetPort: 8081
- name: metrics
protocol: TCP
port: 8084
targetPort: 8084
selector:
app.kubernetes.io/name: argocd-repo-server

View File

@@ -7,26 +7,24 @@ metadata:
app.kubernetes.io/component: server
name: argocd-server
rules:
# support viewing and deleting a live object view in UI
- apiGroups:
- '*'
resources:
- '*'
verbs:
- delete
- get
# support listing events of
- delete # supports deleting a live object in UI
- get # supports viewing live object manifest in UI
- patch # supports `argocd app patch`
- apiGroups:
- ""
resources:
- events
verbs:
- list
# support viewing pod logs from UI
- list # supports listing events in UI
- apiGroups:
- ""
resources:
- pods
- pods/log
verbs:
- get
- get # supports viewing pod logs from UI

View File

@@ -17,7 +17,7 @@ patchesStrategicMerge:
images:
- name: argoproj/argocd
newName: argoproj/argocd
newTag: latest
newTag: v0.12.2
- name: argoproj/argocd-ui
newName: argoproj/argocd-ui
newTag: latest
newTag: v0.12.2

View File

@@ -1,6 +1,6 @@
dependencies:
- name: redis-ha
repository: https://kubernetes-charts.storage.googleapis.com
version: 3.1.6
digest: sha256:f804c4ab7502492dcd2e9865984df6f9df8273cc8ce4b729de2a1a80c31db459
generated: 2019-02-23T03:25:05.793489-08:00
version: 3.3.1
digest: sha256:3e273208c389589d3d8935ddc39bf245f72c3ea0b6d3e61c4dc0862b7c3839eb
generated: 2019-03-19T21:34:14.183861-07:00

View File

@@ -1,4 +1,4 @@
dependencies:
- name: redis-ha
version: 3.1.6
version: 3.3.1
repository: https://kubernetes-charts.storage.googleapis.com

View File

@@ -8,7 +8,7 @@ metadata:
labels:
heritage: Tiller
release: argocd
chart: redis-ha-3.1.6
chart: redis-ha-3.3.1
app: argocd-redis-ha
data:
redis.conf: |
@@ -20,7 +20,7 @@ data:
rdbchecksum yes
rdbcompression yes
repl-diskless-sync yes
save 900 1
save ""
sentinel.conf: |
dir "/data"
@@ -66,8 +66,9 @@ data:
echo "Setting up defaults"
if [ "$INDEX" = "0" ]; then
echo "Setting this pod as the default master"
sed -i "s/^.*slaveof.*//" "$REDIS_CONF"
redis_update "$ANNOUNCE_IP"
sentinel_update "$ANNOUNCE_IP"
sed -i "s/^.*slaveof.*//" "$REDIS_CONF"
else
DEFAULT_MASTER="$(getent hosts "$SERVICE-announce-0" | awk '{ print $1 }')"
if [ -z "$DEFAULT_MASTER" ]; then
@@ -135,7 +136,7 @@ metadata:
labels:
heritage: Tiller
release: argocd
chart: redis-ha-3.1.6
chart: redis-ha-3.3.1
app: argocd-redis-ha
data:
check-quorum.sh: |
@@ -170,6 +171,59 @@ data:
exit 1
fi
---
# Source: redis-ha/charts/redis-ha/templates/redis-ha-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: argocd-redis-ha
labels:
heritage: Tiller
release: argocd
chart: redis-ha-3.3.1
app: argocd-redis-ha
---
# Source: redis-ha/charts/redis-ha/templates/redis-ha-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: argocd-redis-ha
labels:
heritage: Tiller
release: argocd
chart: redis-ha-3.3.1
app: argocd-redis-ha
rules:
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
# Source: redis-ha/charts/redis-ha/templates/redis-ha-rolebinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: argocd-redis-ha
labels:
heritage: Tiller
release: argocd
chart: redis-ha-3.3.1
app: argocd-redis-ha
subjects:
- kind: ServiceAccount
name: argocd-redis-ha
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: argocd-redis-ha
---
# Source: redis-ha/charts/redis-ha/templates/redis-ha-announce-service.yaml
@@ -182,7 +236,7 @@ metadata:
app: redis-ha
heritage: "Tiller"
release: "argocd"
chart: redis-ha-3.1.6
chart: redis-ha-3.3.1
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
@@ -210,7 +264,7 @@ metadata:
app: redis-ha
heritage: "Tiller"
release: "argocd"
chart: redis-ha-3.1.6
chart: redis-ha-3.3.1
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
@@ -238,7 +292,7 @@ metadata:
app: redis-ha
heritage: "Tiller"
release: "argocd"
chart: redis-ha-3.1.6
chart: redis-ha-3.3.1
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
@@ -268,7 +322,7 @@ metadata:
app: redis-ha
heritage: "Tiller"
release: "argocd"
chart: redis-ha-3.1.6
chart: redis-ha-3.3.1
annotations:
spec:
type: ClusterIP
@@ -296,7 +350,7 @@ metadata:
app: redis-ha
heritage: "Tiller"
release: "argocd"
chart: redis-ha-3.1.6
chart: redis-ha-3.3.1
spec:
selector:
matchLabels:
@@ -310,8 +364,8 @@ spec:
template:
metadata:
annotations:
checksum/init-config: fd61dcd77a6f4a93ac6609398c79958da8eeae692c1203f77809abf3301e09e6
checksum/probe-config: 6a71f32ef8f1e6e6046352b15c3ee4e201a0288442ae2abc3a64461df55d7bc4
checksum/init-config: 06440ee4a409be2aa01dfa08c14dd964fe3bad2ada57da1a538ad5cd771a045f
checksum/probe-config: 4b9888f173366e436f167856ee3469e8c1fd5221e29caa2129373db23578ec10
labels:
release: argocd
app: redis-ha
@@ -338,6 +392,7 @@ spec:
runAsNonRoot: true
runAsUser: 1000
serviceAccountName: argocd-redis-ha
initContainers:
- name: config-init
image: redis:5.0.3-alpine
@@ -351,13 +406,13 @@ spec:
- /readonly-config/init.sh
env:
- name: SENTINEL_ID_0
value: fcf9c936f413aee4a7faceeef0817f2894e477ff
value: e791a161cb06f0d3eb0cc392117d34cf0eae9d71
- name: SENTINEL_ID_1
value: cdf48d26a604810dc199f2d532723d2087f68e1e
value: d9b3204a90597a7500530efd881942d8145996ac
- name: SENTINEL_ID_2
value: 4e85671bbbc383ff723c51ffb0ac2bc2e7a60cea
value: d9deb539c0402841c2492e9959c8086602fa4284
volumeMounts:
- name: config


@@ -3,3 +3,5 @@ redis-ha:
enabled: false
redis:
masterGroupName: argocd
config:
save: "\"\""
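
The escaped value feeds Redis the directive `save ""`, which clears every RDB snapshot schedule; the replaced `save 900 1` meant "snapshot if at least 1 key changed in 900 seconds". Disabling persistence is reasonable here since Argo CD uses this Redis purely as a rebuildable cache. One way to verify the effect, sketched with the go-redis v6 client (an assumed dependency; the address is illustrative):

package main

import (
	"fmt"

	"github.com/go-redis/redis"
)

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "argocd-redis:6379"}) // address illustrative
	val, err := rdb.ConfigGet("save").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(val) // expect ["save" ""] once RDB snapshotting is disabled
}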


@@ -42,6 +42,23 @@ patchesJson6902:
kind: StatefulSet
name: argocd-redis-ha-server
path: overlays/modify-labels.yaml
- target:
version: v1
kind: ServiceAccount
name: argocd-redis-ha
path: overlays/modify-labels.yaml
- target:
group: rbac.authorization.k8s.io
version: v1
kind: Role
name: argocd-redis-ha
path: overlays/modify-labels.yaml
- target:
group: rbac.authorization.k8s.io
version: v1
kind: RoleBinding
name: argocd-redis-ha
path: overlays/modify-labels.yaml
# add pod template labels
- target:


@@ -55,6 +55,15 @@ metadata:
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: redis
app.kubernetes.io/name: argocd-redis-ha
app.kubernetes.io/part-of: argocd
name: argocd-redis-ha
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: server
@@ -122,6 +131,22 @@ rules:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: redis
app.kubernetes.io/name: argocd-redis-ha
app.kubernetes.io/part-of: argocd
name: argocd-redis-ha
rules:
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: server
@@ -199,6 +224,7 @@ rules:
verbs:
- delete
- get
- patch
- apiGroups:
- ""
resources:
@@ -247,6 +273,22 @@ subjects:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: redis
app.kubernetes.io/name: argocd-redis-ha
app.kubernetes.io/part-of: argocd
name: argocd-redis-ha
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: argocd-redis-ha
subjects:
- kind: ServiceAccount
name: argocd-redis-ha
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: server
@@ -326,18 +368,19 @@ data:
$REDIS_PORT\" >> $REDIS_CONF\n}\n\ncopy_config() {\n cp /readonly-config/redis.conf
\"$REDIS_CONF\"\n cp /readonly-config/sentinel.conf \"$SENTINEL_CONF\"\n}\n\nsetup_defaults()
{\n echo \"Setting up defaults\"\n if [ \"$INDEX\" = \"0\" ]; then\n echo
\"Setting this pod as the default master\"\n sed -i \"s/^.*slaveof.*//\"
\"$REDIS_CONF\"\n sentinel_update \"$ANNOUNCE_IP\"\n else\n DEFAULT_MASTER=\"$(getent
hosts \"$SERVICE-announce-0\" | awk '{ print $1 }')\"\n if [ -z \"$DEFAULT_MASTER\"
]; then\n echo \"Unable to resolve host\"\n exit 1\n fi\n
\ echo \"Setting default slave config..\"\n redis_update \"$DEFAULT_MASTER\"\n
\ sentinel_update \"$DEFAULT_MASTER\"\n fi\n}\n\nfind_master() {\n echo
\"Attempting to find master\"\n if [ \"$(redis-cli -h \"$MASTER\" ping)\" !=
\"PONG\" ]; then\n echo \"Can't ping master, attempting to force failover\"\n
\ if redis-cli -h \"$SERVICE\" -p \"$SENTINEL_PORT\" sentinel failover \"$MASTER_GROUP\"
| grep -q 'NOGOODSLAVE' ; then \n setup_defaults\n return
0\n fi\n sleep 10\n MASTER=\"$(redis-cli -h $SERVICE -p $SENTINEL_PORT
sentinel get-master-addr-by-name $MASTER_GROUP | grep -E '[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}')\"\n
\"Setting this pod as the default master\"\n redis_update \"$ANNOUNCE_IP\"\n
\ sentinel_update \"$ANNOUNCE_IP\"\n sed -i \"s/^.*slaveof.*//\"
\"$REDIS_CONF\"\n else\n DEFAULT_MASTER=\"$(getent hosts \"$SERVICE-announce-0\"
| awk '{ print $1 }')\"\n if [ -z \"$DEFAULT_MASTER\" ]; then\n echo
\"Unable to resolve host\"\n exit 1\n fi\n echo \"Setting
default slave config..\"\n redis_update \"$DEFAULT_MASTER\"\n sentinel_update
\"$DEFAULT_MASTER\"\n fi\n}\n\nfind_master() {\n echo \"Attempting to find
master\"\n if [ \"$(redis-cli -h \"$MASTER\" ping)\" != \"PONG\" ]; then\n
\ echo \"Can't ping master, attempting to force failover\"\n if redis-cli
-h \"$SERVICE\" -p \"$SENTINEL_PORT\" sentinel failover \"$MASTER_GROUP\" | grep
-q 'NOGOODSLAVE' ; then \n setup_defaults\n return 0\n fi\n
\ sleep 10\n MASTER=\"$(redis-cli -h $SERVICE -p $SENTINEL_PORT sentinel
get-master-addr-by-name $MASTER_GROUP | grep -E '[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}')\"\n
\ if [ \"$MASTER\" ]; then\n sentinel_update \"$MASTER\"\n redis_update
\"$MASTER\"\n else\n echo \"Could not failover, exiting...\"\n
\ exit 1\n fi\n else\n echo \"Found reachable master,
@@ -357,7 +400,7 @@ data:
rdbchecksum yes
rdbcompression yes
repl-diskless-sync yes
save 900 1
save ""
sentinel.conf: |
dir "/data"
sentinel down-after-milliseconds argocd 10000
@@ -571,8 +614,14 @@ metadata:
name: argocd-repo-server
spec:
ports:
- port: 8081
- name: server
port: 8081
protocol: TCP
targetPort: 8081
- name: metrics
port: 8084
protocol: TCP
targetPort: 8084
selector:
app.kubernetes.io/name: argocd-repo-server
---
@@ -626,6 +675,8 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: argocd-application-controller
strategy:
type: Recreate
template:
metadata:
labels:
@@ -646,7 +697,7 @@ spec:
- argocd-redis-ha-announce-2:26379
- --sentinelmaster
- argocd
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -693,7 +744,7 @@ spec:
- cp
- /usr/local/bin/argocd-util
- /shared
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: copyutil
volumeMounts:
@@ -748,11 +799,12 @@ spec:
- argocd-redis-ha-announce-2:26379
- --sentinelmaster
- argocd
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: argocd-repo-server
ports:
- containerPort: 8081
- containerPort: 8084
readinessProbe:
initialDelaySeconds: 5
periodSeconds: 10
@@ -804,7 +856,7 @@ spec:
- argocd-redis-ha-announce-2:26379
- --sentinelmaster
- argocd
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: argocd-server
ports:
@@ -825,7 +877,7 @@ spec:
- -r
- /app
- /shared
image: argoproj/argocd-ui:latest
image: argoproj/argocd-ui:v0.12.2
imagePullPolicy: Always
name: ui
volumeMounts:
@@ -854,8 +906,8 @@ spec:
template:
metadata:
annotations:
checksum/init-config: fd61dcd77a6f4a93ac6609398c79958da8eeae692c1203f77809abf3301e09e6
checksum/probe-config: 6a71f32ef8f1e6e6046352b15c3ee4e201a0288442ae2abc3a64461df55d7bc4
checksum/init-config: 06440ee4a409be2aa01dfa08c14dd964fe3bad2ada57da1a538ad5cd771a045f
checksum/probe-config: 4b9888f173366e436f167856ee3469e8c1fd5221e29caa2129373db23578ec10
labels:
app.kubernetes.io/name: argocd-redis-ha
spec:
@@ -945,11 +997,11 @@ spec:
- sh
env:
- name: SENTINEL_ID_0
value: fcf9c936f413aee4a7faceeef0817f2894e477ff
value: e791a161cb06f0d3eb0cc392117d34cf0eae9d71
- name: SENTINEL_ID_1
value: cdf48d26a604810dc199f2d532723d2087f68e1e
value: d9b3204a90597a7500530efd881942d8145996ac
- name: SENTINEL_ID_2
value: 4e85671bbbc383ff723c51ffb0ac2bc2e7a60cea
value: d9deb539c0402841c2492e9959c8086602fa4284
image: redis:5.0.3-alpine
imagePullPolicy: IfNotPresent
name: config-init
@@ -964,6 +1016,7 @@ spec:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
serviceAccountName: argocd-redis-ha
volumes:
- configMap:
name: argocd-redis-ha-configmap


@@ -55,6 +55,15 @@ metadata:
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: redis
app.kubernetes.io/name: argocd-redis-ha
app.kubernetes.io/part-of: argocd
name: argocd-redis-ha
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: server
@@ -122,6 +131,22 @@ rules:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: redis
app.kubernetes.io/name: argocd-redis-ha
app.kubernetes.io/part-of: argocd
name: argocd-redis-ha
rules:
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: server
@@ -197,6 +222,22 @@ subjects:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: redis
app.kubernetes.io/name: argocd-redis-ha
app.kubernetes.io/part-of: argocd
name: argocd-redis-ha
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: argocd-redis-ha
subjects:
- kind: ServiceAccount
name: argocd-redis-ha
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: server
@@ -242,18 +283,19 @@ data:
$REDIS_PORT\" >> $REDIS_CONF\n}\n\ncopy_config() {\n cp /readonly-config/redis.conf
\"$REDIS_CONF\"\n cp /readonly-config/sentinel.conf \"$SENTINEL_CONF\"\n}\n\nsetup_defaults()
{\n echo \"Setting up defaults\"\n if [ \"$INDEX\" = \"0\" ]; then\n echo
\"Setting this pod as the default master\"\n sed -i \"s/^.*slaveof.*//\"
\"$REDIS_CONF\"\n sentinel_update \"$ANNOUNCE_IP\"\n else\n DEFAULT_MASTER=\"$(getent
hosts \"$SERVICE-announce-0\" | awk '{ print $1 }')\"\n if [ -z \"$DEFAULT_MASTER\"
]; then\n echo \"Unable to resolve host\"\n exit 1\n fi\n
\ echo \"Setting default slave config..\"\n redis_update \"$DEFAULT_MASTER\"\n
\ sentinel_update \"$DEFAULT_MASTER\"\n fi\n}\n\nfind_master() {\n echo
\"Attempting to find master\"\n if [ \"$(redis-cli -h \"$MASTER\" ping)\" !=
\"PONG\" ]; then\n echo \"Can't ping master, attempting to force failover\"\n
\ if redis-cli -h \"$SERVICE\" -p \"$SENTINEL_PORT\" sentinel failover \"$MASTER_GROUP\"
| grep -q 'NOGOODSLAVE' ; then \n setup_defaults\n return
0\n fi\n sleep 10\n MASTER=\"$(redis-cli -h $SERVICE -p $SENTINEL_PORT
sentinel get-master-addr-by-name $MASTER_GROUP | grep -E '[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}')\"\n
\"Setting this pod as the default master\"\n redis_update \"$ANNOUNCE_IP\"\n
\ sentinel_update \"$ANNOUNCE_IP\"\n sed -i \"s/^.*slaveof.*//\"
\"$REDIS_CONF\"\n else\n DEFAULT_MASTER=\"$(getent hosts \"$SERVICE-announce-0\"
| awk '{ print $1 }')\"\n if [ -z \"$DEFAULT_MASTER\" ]; then\n echo
\"Unable to resolve host\"\n exit 1\n fi\n echo \"Setting
default slave config..\"\n redis_update \"$DEFAULT_MASTER\"\n sentinel_update
\"$DEFAULT_MASTER\"\n fi\n}\n\nfind_master() {\n echo \"Attempting to find
master\"\n if [ \"$(redis-cli -h \"$MASTER\" ping)\" != \"PONG\" ]; then\n
\ echo \"Can't ping master, attempting to force failover\"\n if redis-cli
-h \"$SERVICE\" -p \"$SENTINEL_PORT\" sentinel failover \"$MASTER_GROUP\" | grep
-q 'NOGOODSLAVE' ; then \n setup_defaults\n return 0\n fi\n
\ sleep 10\n MASTER=\"$(redis-cli -h $SERVICE -p $SENTINEL_PORT sentinel
get-master-addr-by-name $MASTER_GROUP | grep -E '[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}')\"\n
\ if [ \"$MASTER\" ]; then\n sentinel_update \"$MASTER\"\n redis_update
\"$MASTER\"\n else\n echo \"Could not failover, exiting...\"\n
\ exit 1\n fi\n else\n echo \"Found reachable master,
@@ -273,7 +315,7 @@ data:
rdbchecksum yes
rdbcompression yes
repl-diskless-sync yes
save 900 1
save ""
sentinel.conf: |
dir "/data"
sentinel down-after-milliseconds argocd 10000
@@ -487,8 +529,14 @@ metadata:
name: argocd-repo-server
spec:
ports:
- port: 8081
- name: server
port: 8081
protocol: TCP
targetPort: 8081
- name: metrics
port: 8084
protocol: TCP
targetPort: 8084
selector:
app.kubernetes.io/name: argocd-repo-server
---
@@ -542,6 +590,8 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: argocd-application-controller
strategy:
type: Recreate
template:
metadata:
labels:
@@ -562,7 +612,7 @@ spec:
- argocd-redis-ha-announce-2:26379
- --sentinelmaster
- argocd
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -609,7 +659,7 @@ spec:
- cp
- /usr/local/bin/argocd-util
- /shared
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: copyutil
volumeMounts:
@@ -664,11 +714,12 @@ spec:
- argocd-redis-ha-announce-2:26379
- --sentinelmaster
- argocd
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: argocd-repo-server
ports:
- containerPort: 8081
- containerPort: 8084
readinessProbe:
initialDelaySeconds: 5
periodSeconds: 10
@@ -720,7 +771,7 @@ spec:
- argocd-redis-ha-announce-2:26379
- --sentinelmaster
- argocd
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: argocd-server
ports:
@@ -741,7 +792,7 @@ spec:
- -r
- /app
- /shared
image: argoproj/argocd-ui:latest
image: argoproj/argocd-ui:v0.12.2
imagePullPolicy: Always
name: ui
volumeMounts:
@@ -770,8 +821,8 @@ spec:
template:
metadata:
annotations:
checksum/init-config: fd61dcd77a6f4a93ac6609398c79958da8eeae692c1203f77809abf3301e09e6
checksum/probe-config: 6a71f32ef8f1e6e6046352b15c3ee4e201a0288442ae2abc3a64461df55d7bc4
checksum/init-config: 06440ee4a409be2aa01dfa08c14dd964fe3bad2ada57da1a538ad5cd771a045f
checksum/probe-config: 4b9888f173366e436f167856ee3469e8c1fd5221e29caa2129373db23578ec10
labels:
app.kubernetes.io/name: argocd-redis-ha
spec:
@@ -861,11 +912,11 @@ spec:
- sh
env:
- name: SENTINEL_ID_0
value: fcf9c936f413aee4a7faceeef0817f2894e477ff
value: e791a161cb06f0d3eb0cc392117d34cf0eae9d71
- name: SENTINEL_ID_1
value: cdf48d26a604810dc199f2d532723d2087f68e1e
value: d9b3204a90597a7500530efd881942d8145996ac
- name: SENTINEL_ID_2
value: 4e85671bbbc383ff723c51ffb0ac2bc2e7a60cea
value: d9deb539c0402841c2492e9959c8086602fa4284
image: redis:5.0.3-alpine
imagePullPolicy: IfNotPresent
name: config-init
@@ -880,6 +931,7 @@ spec:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
serviceAccountName: argocd-redis-ha
volumes:
- configMap:
name: argocd-redis-ha-configmap


@@ -199,6 +199,7 @@ rules:
verbs:
- delete
- get
- patch
- apiGroups:
- ""
resources:
@@ -384,8 +385,14 @@ metadata:
name: argocd-repo-server
spec:
ports:
- port: 8081
- name: server
port: 8081
protocol: TCP
targetPort: 8081
- name: metrics
port: 8084
protocol: TCP
targetPort: 8084
selector:
app.kubernetes.io/name: argocd-repo-server
---
@@ -439,6 +446,8 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: argocd-application-controller
strategy:
type: Recreate
template:
metadata:
labels:
@@ -451,7 +460,7 @@ spec:
- "20"
- --operation-processors
- "10"
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -498,7 +507,7 @@ spec:
- cp
- /usr/local/bin/argocd-util
- /shared
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: copyutil
volumeMounts:
@@ -561,11 +570,12 @@ spec:
- argocd-repo-server
- --redis
- argocd-redis:6379
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: argocd-repo-server
ports:
- containerPort: 8081
- containerPort: 8084
readinessProbe:
initialDelaySeconds: 5
periodSeconds: 10
@@ -594,7 +604,7 @@ spec:
- argocd-server
- --staticassets
- /shared/app
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: argocd-server
ports:
@@ -615,7 +625,7 @@ spec:
- -r
- /app
- /shared
image: argoproj/argocd-ui:latest
image: argoproj/argocd-ui:v0.12.2
imagePullPolicy: Always
name: ui
volumeMounts:


@@ -300,8 +300,14 @@ metadata:
name: argocd-repo-server
spec:
ports:
- port: 8081
- name: server
port: 8081
protocol: TCP
targetPort: 8081
- name: metrics
port: 8084
protocol: TCP
targetPort: 8084
selector:
app.kubernetes.io/name: argocd-repo-server
---
@@ -355,6 +361,8 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: argocd-application-controller
strategy:
type: Recreate
template:
metadata:
labels:
@@ -367,7 +375,7 @@ spec:
- "20"
- --operation-processors
- "10"
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -414,7 +422,7 @@ spec:
- cp
- /usr/local/bin/argocd-util
- /shared
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: copyutil
volumeMounts:
@@ -477,11 +485,12 @@ spec:
- argocd-repo-server
- --redis
- argocd-redis:6379
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: argocd-repo-server
ports:
- containerPort: 8081
- containerPort: 8084
readinessProbe:
initialDelaySeconds: 5
periodSeconds: 10
@@ -510,7 +519,7 @@ spec:
- argocd-server
- --staticassets
- /shared/app
image: argoproj/argocd:latest
image: argoproj/argocd:v0.12.2
imagePullPolicy: Always
name: argocd-server
ports:
@@ -531,7 +540,7 @@ spec:
- -r
- /app
- /shared
image: argoproj/argocd-ui:latest
image: argoproj/argocd-ui:v0.12.2
imagePullPolicy: Always
name: ui
volumeMounts:


@@ -3,7 +3,6 @@ package apiclient
import (
"context"
"crypto/tls"
"crypto/x509"
"encoding/base64"
"errors"
"fmt"
@@ -25,7 +24,7 @@ import (
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/status"
"github.com/argoproj/argo-cd"
argocd "github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
@@ -39,6 +38,7 @@ import (
"github.com/argoproj/argo-cd/server/version"
grpc_util "github.com/argoproj/argo-cd/util/grpc"
"github.com/argoproj/argo-cd/util/localconfig"
tls_util "github.com/argoproj/argo-cd/util/tls"
)
const (
@@ -201,7 +201,11 @@ func (c *client) OIDCConfig(ctx context.Context, set *settings.Settings) (*oauth
clientID = common.ArgoCDCLIClientAppID
issuerURL = fmt.Sprintf("%s%s", set.URL, common.DexAPIEndpoint)
} else if set.OIDCConfig != nil && set.OIDCConfig.Issuer != "" {
clientID = set.OIDCConfig.ClientID
if set.OIDCConfig.CLIClientID != "" {
clientID = set.OIDCConfig.CLIClientID
} else {
clientID = set.OIDCConfig.ClientID
}
issuerURL = set.OIDCConfig.Issuer
} else {
return nil, nil, fmt.Errorf("%s is not configured with SSO", c.ServerAddr)
@@ -389,7 +393,7 @@ func (c *client) newConn() (*grpc.ClientConn, io.Closer, error) {
func (c *client) tlsConfig() (*tls.Config, error) {
var tlsConfig tls.Config
if len(c.CertPEMData) > 0 {
cp := x509.NewCertPool()
cp := tls_util.BestEffortSystemCertPool()
if !cp.AppendCertsFromPEM(c.CertPEMData) {
return nil, fmt.Errorf("credentials: failed to append certificates")
}
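
Two fixes land in this file: OIDCConfig now prefers a dedicated CLI client ID when one is configured, and the client certificate pool is seeded from the system roots instead of starting empty, so a custom CA supplements rather than replaces the host's trust store. A sketch of what a best-effort system pool helper plausibly looks like (the real one lives in util/tls, imported above as tls_util; this body is an assumption):

package main

import (
	"crypto/x509"
	"fmt"
)

// bestEffortSystemCertPool returns the system roots when they can be loaded,
// and an empty pool otherwise, so AppendCertsFromPEM adds to rather than
// replaces the host's trust store.
func bestEffortSystemCertPool() *x509.CertPool {
	pool, err := x509.SystemCertPool()
	if err != nil || pool == nil {
		return x509.NewCertPool()
	}
	return pool
}

func main() {
	pool := bestEffortSystemCertPool()
	fmt.Println("trusted roots:", len(pool.Subjects())) // 0 only on the fallback path
}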


@@ -11,7 +11,6 @@ import (
"os"
"strconv"
"strings"
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
@@ -20,6 +19,7 @@ import (
argocderrors "github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/rand"
)
const (
@@ -103,7 +103,7 @@ func (c *client) executeRequest(fullMethodName string, msg []byte, md metadata.M
}
func (c *client) startGRPCProxy() (*grpc.Server, net.Listener, error) {
serverAddr := fmt.Sprintf("%s/argocd-%d.sock", os.TempDir(), time.Now().Unix())
serverAddr := fmt.Sprintf("%s/argocd-%s.sock", os.TempDir(), rand.RandString(16))
ln, err := net.Listen("unix", serverAddr)
if err != nil {
@@ -193,10 +193,9 @@ func (c *client) useGRPCProxy() (net.Addr, io.Closer, error) {
c.proxyUsersCount = c.proxyUsersCount - 1
if c.proxyUsersCount == 0 {
c.proxyServer.Stop()
err := c.proxyListener.Close()
c.proxyListener = nil
c.proxyServer = nil
return err
return nil
}
return nil
}}, nil
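
A socket path derived from time.Now().Unix() collides whenever two proxies start within the same second; a 16-character random suffix makes collisions practically impossible. The same idea using only the standard library (the real code uses the util/rand.RandString helper imported above):

package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"os"
)

func randomSocketPath() string {
	buf := make([]byte, 8)
	if _, err := rand.Read(buf); err != nil {
		panic(err) // crypto/rand failing is unrecoverable here
	}
	return fmt.Sprintf("%s/argocd-%s.sock", os.TempDir(), hex.EncodeToString(buf))
}

func main() {
	fmt.Println(randomSocketPath()) // e.g. /tmp/argocd-3f2a9c1d8b07e654.sock
}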

File diff suppressed because it is too large.


@@ -178,6 +178,9 @@ message ApplicationSourceKustomize {
// ImageTags are kustomize 1.0 image tag overrides
repeated KustomizeImageTag imageTags = 2;
// Images are kustomize 2.0 image overrides
repeated string images = 3;
}
// ApplicationSourcePlugin holds config management plugin specific options
@@ -215,9 +218,13 @@ message ApplicationStatus {
repeated ApplicationCondition conditions = 5;
optional k8s.io.apimachinery.pkg.apis.meta.v1.Time observedAt = 6;
optional k8s.io.apimachinery.pkg.apis.meta.v1.Time reconciledAt = 6;
optional OperationState operationState = 7;
optional k8s.io.apimachinery.pkg.apis.meta.v1.Time observedAt = 8;
optional string sourceType = 9;
}
// ApplicationWatchEvent contains information about application change.
@@ -444,6 +451,8 @@ message Repository {
optional string sshPrivateKey = 4;
optional ConnectionState connectionState = 5;
optional bool insecureIgnoreHostKey = 6;
}
// RepositoryList is a collection of Repositories.
@@ -502,6 +511,13 @@ message ResourceNode {
optional string resourceVersion = 8;
}
// ResourceOverride holds configuration to customize resource diffing and health assessment
message ResourceOverride {
optional string healthLua = 1;
optional string ignoreDifferences = 2;
}
// ResourceResult holds the operation result details of a specific resource
message ResourceResult {
optional string group = 1;


@@ -135,6 +135,8 @@ type ApplicationSourceKustomize struct {
NamePrefix string `json:"namePrefix" protobuf:"bytes,1,opt,name=namePrefix"`
// ImageTags are kustomize 1.0 image tag overrides
ImageTags []KustomizeImageTag `json:"imageTags" protobuf:"bytes,2,opt,name=imageTags"`
// Images are kustomize 2.0 image overrides
Images []string `json:"images" protobuf:"bytes,3,opt,name=images"`
}
// KustomizeImageTag is a kustomize image tag
@@ -146,7 +148,7 @@ type KustomizeImageTag struct {
}
func (k *ApplicationSourceKustomize) IsZero() bool {
return k.NamePrefix == "" && len(k.ImageTags) == 0
return k.NamePrefix == "" && len(k.ImageTags) == 0 && len(k.Images) == 0
}
// JsonnetVar is a jsonnet variable
@@ -220,8 +222,10 @@ type ApplicationStatus struct {
Health HealthStatus `json:"health,omitempty" protobuf:"bytes,3,opt,name=health"`
History []RevisionHistory `json:"history,omitempty" protobuf:"bytes,4,opt,name=history"`
Conditions []ApplicationCondition `json:"conditions,omitempty" protobuf:"bytes,5,opt,name=conditions"`
ObservedAt metav1.Time `json:"observedAt,omitempty" protobuf:"bytes,6,opt,name=observedAt"`
ReconciledAt metav1.Time `json:"reconciledAt,omitempty" protobuf:"bytes,6,opt,name=reconciledAt"`
OperationState *OperationState `json:"operationState,omitempty" protobuf:"bytes,7,opt,name=operationState"`
ObservedAt metav1.Time `json:"observedAt,omitempty" protobuf:"bytes,8,opt,name=observedAt"`
SourceType ApplicationSourceType `json:"sourceType,omitempty" protobuf:"bytes,9,opt,name=sourceType"`
}
// Operation contains requested operation parameters.
@@ -468,6 +472,8 @@ const (
ApplicationConditionUnknownError = "UnknownError"
// ApplicationConditionSharedResourceWarning indicates that the controller detected resources which belong to more than one application
ApplicationConditionSharedResourceWarning = "SharedResourceWarning"
// ApplicationConditionRepeatedResourceWarning indicates that the application source contains a resource with the same Group, Kind, Name, and Namespace multiple times
ApplicationConditionRepeatedResourceWarning = "RepeatedResourceWarning"
)
// ApplicationCondition contains details about current application condition
@@ -665,13 +671,20 @@ type HelmRepository struct {
Password string `json:"password,omitempty" protobuf:"bytes,7,opt,name=password"`
}
// ResourceOverride holds configuration to customize resource diffing and health assessment
type ResourceOverride struct {
HealthLua string `json:"health.lua,omitempty" protobuf:"bytes,1,opt,name=healthLua"`
IgnoreDifferences string `json:"ignoreDifferences,omitempty" protobuf:"bytes,2,opt,name=ignoreDifferences"`
}
// Repository is a Git repository holding application configurations
type Repository struct {
Repo string `json:"repo" protobuf:"bytes,1,opt,name=repo"`
Username string `json:"username,omitempty" protobuf:"bytes,2,opt,name=username"`
Password string `json:"password,omitempty" protobuf:"bytes,3,opt,name=password"`
SSHPrivateKey string `json:"sshPrivateKey,omitempty" protobuf:"bytes,4,opt,name=sshPrivateKey"`
ConnectionState ConnectionState `json:"connectionState,omitempty" protobuf:"bytes,5,opt,name=connectionState"`
Repo string `json:"repo" protobuf:"bytes,1,opt,name=repo"`
Username string `json:"username,omitempty" protobuf:"bytes,2,opt,name=username"`
Password string `json:"password,omitempty" protobuf:"bytes,3,opt,name=password"`
SSHPrivateKey string `json:"sshPrivateKey,omitempty" protobuf:"bytes,4,opt,name=sshPrivateKey"`
ConnectionState ConnectionState `json:"connectionState,omitempty" protobuf:"bytes,5,opt,name=connectionState"`
InsecureIgnoreHostKey bool `json:"insecureIgnoreHostKey,omitempty" protobuf:"bytes,6,opt,name=insecureIgnoreHostKey"`
}
// RepositoryList is a collection of Repositories.
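
IsZero had to learn about the new Images field: without the third clause, an application source carrying only kustomize 2.0 image overrides would be treated as empty and normalized away. A self-contained illustration with a simplified local mirror of the struct (the real type lives in pkg/apis/application/v1alpha1):

package main

import "fmt"

// kustomizeSource is a simplified local mirror of ApplicationSourceKustomize.
type kustomizeSource struct {
	NamePrefix string
	ImageTags  []string // simplified; the real field is []KustomizeImageTag
	Images     []string
}

func (k *kustomizeSource) IsZero() bool {
	return k.NamePrefix == "" && len(k.ImageTags) == 0 && len(k.Images) == 0
}

func main() {
	src := kustomizeSource{Images: []string{"argoproj/argocd:v0.12.2"}}
	fmt.Println(src.IsZero()) // false; the old two-clause check would have said true
}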


@@ -394,6 +394,11 @@ func (in *ApplicationSourceKustomize) DeepCopyInto(out *ApplicationSourceKustomi
*out = make([]KustomizeImageTag, len(*in))
copy(*out, *in)
}
if in.Images != nil {
in, out := &in.Images, &out.Images
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
@@ -479,7 +484,7 @@ func (in *ApplicationStatus) DeepCopyInto(out *ApplicationStatus) {
*out = make([]ApplicationCondition, len(*in))
copy(*out, *in)
}
in.ObservedAt.DeepCopyInto(&out.ObservedAt)
in.ReconciledAt.DeepCopyInto(&out.ReconciledAt)
if in.OperationState != nil {
in, out := &in.OperationState, &out.OperationState
if *in == nil {
@@ -489,6 +494,7 @@ func (in *ApplicationStatus) DeepCopyInto(out *ApplicationStatus) {
(*in).DeepCopyInto(*out)
}
}
in.ObservedAt.DeepCopyInto(&out.ObservedAt)
return
}
@@ -1039,6 +1045,22 @@ func (in *ResourceNode) DeepCopy() *ResourceNode {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ResourceOverride) DeepCopyInto(out *ResourceOverride) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceOverride.
func (in *ResourceOverride) DeepCopy() *ResourceOverride {
if in == nil {
return nil
}
out := new(ResourceOverride)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ResourceResult) DeepCopyInto(out *ResourceResult) {
*out = *in
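
The generated DeepCopyInto for ResourceOverride is a bare `*out = *in` because the struct contains only string fields, so value assignment already shares nothing with the source; slice-valued fields such as Images, by contrast, get the make-and-copy treatment above. The distinction in miniature:

package main

import "fmt"

type override struct{ HealthLua, IgnoreDifferences string }

func main() {
	a := override{HealthLua: "hs = {}"}
	b := a // value assignment fully copies a strings-only struct
	b.HealthLua = "changed"
	fmt.Println(a.HealthLua) // still "hs = {}": the copy shares nothing

	xs := []string{"nginx"}
	ys := make([]string, len(xs))
	copy(ys, xs) // the make-and-copy pattern the generator emits for slices
	ys[0] = "redis"
	fmt.Println(xs[0]) // still "nginx": no shared backing array
}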


@@ -200,6 +200,7 @@ func GenerateManifests(appPath string, q *ManifestRequest) (*ManifestResponse, e
if err != nil {
return nil, err
}
defer h.Dispose()
targetObjs, err = h.Template(q.AppLabelValue, q.Namespace, q.ApplicationSource.Helm)
if err != nil {
if !helm.IsMissingDependencyErr(err) {
@@ -215,8 +216,8 @@ func GenerateManifests(appPath string, q *ManifestRequest) (*ManifestResponse, e
}
}
case v1alpha1.ApplicationSourceTypeKustomize:
k := kustomize.NewKustomizeApp(appPath)
targetObjs, _, err = k.Build(q.ApplicationSource.Kustomize)
k := kustomize.NewKustomizeApp(appPath, kustomizeCredentials(q.Repo))
targetObjs, _, _, err = k.Build(q.ApplicationSource.Kustomize)
case v1alpha1.ApplicationSourceTypePlugin:
targetObjs, err = runConfigManagementPlugin(appPath, q, q.Plugins)
case v1alpha1.ApplicationSourceTypeDirectory:
@@ -489,7 +490,7 @@ func pathExists(ss ...string) bool {
func (s *Service) newClientResolveRevision(repo *v1alpha1.Repository, revision string) (git.Client, string, error) {
repoURL := git.NormalizeGitURL(repo.Repo)
appRepoPath := tempRepoPath(repoURL)
gitClient, err := s.gitFactory.NewClient(repoURL, appRepoPath, repo.Username, repo.Password, repo.SSHPrivateKey)
gitClient, err := s.gitFactory.NewClient(repoURL, appRepoPath, repo.Username, repo.Password, repo.SSHPrivateKey, repo.InsecureIgnoreHostKey)
if err != nil {
return nil, "", err
}
@@ -627,6 +628,7 @@ func (s *Service) GetAppDetails(ctx context.Context, q *RepoServerAppDetailsQuer
if err != nil {
return nil, err
}
defer h.Dispose()
params, err := h.GetParameters(q.valueFiles())
if err != nil {
return nil, err
@@ -635,19 +637,38 @@ func (s *Service) GetAppDetails(ctx context.Context, q *RepoServerAppDetailsQuer
case v1alpha1.ApplicationSourceTypeKustomize:
res.Kustomize = &KustomizeAppSpec{}
res.Kustomize.Path = q.Path
k := kustomize.NewKustomizeApp(appPath)
_, params, err := k.Build(nil)
k := kustomize.NewKustomizeApp(appPath, kustomizeCredentials(q.Repo))
_, imageTags, images, err := k.Build(nil)
if err != nil {
return nil, err
}
res.Kustomize.ImageTags = params
res.Kustomize.ImageTags = kustomizeImageTags(imageTags)
res.Kustomize.Images = images
}
return &res, nil
}
func kustomizeImageTags(imageTags []kustomize.ImageTag) []*v1alpha1.KustomizeImageTag {
output := make([]*v1alpha1.KustomizeImageTag, len(imageTags))
for i, imageTag := range imageTags {
output[i] = &v1alpha1.KustomizeImageTag{Name: imageTag.Name, Value: imageTag.Value}
}
return output
}
func (q *RepoServerAppDetailsQuery) valueFiles() []string {
if q.Helm == nil {
return nil
}
return q.Helm.ValueFiles
}
func kustomizeCredentials(repo *v1alpha1.Repository) *kustomize.GitCredentials {
if repo == nil || repo.Password == "" {
return nil
}
return &kustomize.GitCredentials{
Username: repo.Username,
Password: repo.Password,
}
}
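
The two `defer h.Dispose()` calls added above address leaked helm working directories: every early return previously left the directory behind, while deferring cleanup ties it to function exit on all paths. A sketch of the pattern under that assumption (the helper's real shape lives in util/helm; the workDir field here is illustrative):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

type helmApp struct{ workDir string } // workDir is an assumed field for illustration

func newHelmApp() (*helmApp, error) {
	dir, err := ioutil.TempDir("", "helm")
	if err != nil {
		return nil, err
	}
	return &helmApp{workDir: dir}, nil
}

// Dispose removes the working directory; deferring it right after construction
// guarantees cleanup on every return path, including early error returns.
func (h *helmApp) Dispose() { os.RemoveAll(h.workDir) }

func main() {
	h, err := newHelmApp()
	if err != nil {
		panic(err)
	}
	defer h.Dispose()
	fmt.Println("templating in", h.workDir)
}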


@@ -47,7 +47,7 @@ func (m *ManifestRequest) Reset() { *m = ManifestRequest{} }
func (m *ManifestRequest) String() string { return proto.CompactTextString(m) }
func (*ManifestRequest) ProtoMessage() {}
func (*ManifestRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{0}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{0}
}
func (m *ManifestRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -154,7 +154,7 @@ func (m *ManifestResponse) Reset() { *m = ManifestResponse{} }
func (m *ManifestResponse) String() string { return proto.CompactTextString(m) }
func (*ManifestResponse) ProtoMessage() {}
func (*ManifestResponse) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{1}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{1}
}
func (m *ManifestResponse) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -232,7 +232,7 @@ func (m *ListDirRequest) Reset() { *m = ListDirRequest{} }
func (m *ListDirRequest) String() string { return proto.CompactTextString(m) }
func (*ListDirRequest) ProtoMessage() {}
func (*ListDirRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{2}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{2}
}
func (m *ListDirRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -294,7 +294,7 @@ func (m *FileList) Reset() { *m = FileList{} }
func (m *FileList) String() string { return proto.CompactTextString(m) }
func (*FileList) ProtoMessage() {}
func (*FileList) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{3}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{3}
}
func (m *FileList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -344,7 +344,7 @@ func (m *GetFileRequest) Reset() { *m = GetFileRequest{} }
func (m *GetFileRequest) String() string { return proto.CompactTextString(m) }
func (*GetFileRequest) ProtoMessage() {}
func (*GetFileRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{4}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{4}
}
func (m *GetFileRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -406,7 +406,7 @@ func (m *GetFileResponse) Reset() { *m = GetFileResponse{} }
func (m *GetFileResponse) String() string { return proto.CompactTextString(m) }
func (*GetFileResponse) ProtoMessage() {}
func (*GetFileResponse) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{5}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{5}
}
func (m *GetFileResponse) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -459,7 +459,7 @@ func (m *RepoServerAppDetailsQuery) Reset() { *m = RepoServerAppDetailsQ
func (m *RepoServerAppDetailsQuery) String() string { return proto.CompactTextString(m) }
func (*RepoServerAppDetailsQuery) ProtoMessage() {}
func (*RepoServerAppDetailsQuery) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{6}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{6}
}
func (m *RepoServerAppDetailsQuery) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -541,7 +541,7 @@ func (m *HelmAppDetailsQuery) Reset() { *m = HelmAppDetailsQuery{} }
func (m *HelmAppDetailsQuery) String() string { return proto.CompactTextString(m) }
func (*HelmAppDetailsQuery) ProtoMessage() {}
func (*HelmAppDetailsQuery) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{7}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{7}
}
func (m *HelmAppDetailsQuery) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -593,7 +593,7 @@ func (m *RepoAppDetailsResponse) Reset() { *m = RepoAppDetailsResponse{}
func (m *RepoAppDetailsResponse) String() string { return proto.CompactTextString(m) }
func (*RepoAppDetailsResponse) ProtoMessage() {}
func (*RepoAppDetailsResponse) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{8}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{8}
}
func (m *RepoAppDetailsResponse) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -673,7 +673,7 @@ func (m *KsonnetAppSpec) Reset() { *m = KsonnetAppSpec{} }
func (m *KsonnetAppSpec) String() string { return proto.CompactTextString(m) }
func (*KsonnetAppSpec) ProtoMessage() {}
func (*KsonnetAppSpec) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{9}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{9}
}
func (m *KsonnetAppSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -745,7 +745,7 @@ func (m *HelmAppSpec) Reset() { *m = HelmAppSpec{} }
func (m *HelmAppSpec) String() string { return proto.CompactTextString(m) }
func (*HelmAppSpec) ProtoMessage() {}
func (*HelmAppSpec) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{10}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{10}
}
func (m *HelmAppSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -804,18 +804,21 @@ func (m *HelmAppSpec) GetParameters() []*v1alpha1.HelmParameter {
// KustomizeAppSpec contains kustomize app name and path in source repo
type KustomizeAppSpec struct {
Path string `protobuf:"bytes,1,opt,name=path,proto3" json:"path,omitempty"`
ImageTags []*v1alpha1.KustomizeImageTag `protobuf:"bytes,2,rep,name=imageTags" json:"imageTags,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
Path string `protobuf:"bytes,1,opt,name=path,proto3" json:"path,omitempty"`
// imageTags is a list of available image tags. This is only populated for Kustomize 1.
ImageTags []*v1alpha1.KustomizeImageTag `protobuf:"bytes,2,rep,name=imageTags" json:"imageTags,omitempty"`
// images is a list of available images. This is only populated for Kustomize 2.
Images []string `protobuf:"bytes,3,rep,name=images" json:"images,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *KustomizeAppSpec) Reset() { *m = KustomizeAppSpec{} }
func (m *KustomizeAppSpec) String() string { return proto.CompactTextString(m) }
func (*KustomizeAppSpec) ProtoMessage() {}
func (*KustomizeAppSpec) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{11}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{11}
}
func (m *KustomizeAppSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -858,6 +861,13 @@ func (m *KustomizeAppSpec) GetImageTags() []*v1alpha1.KustomizeImageTag {
return nil
}
func (m *KustomizeAppSpec) GetImages() []string {
if m != nil {
return m.Images
}
return nil
}
type KsonnetEnvironment struct {
// Name is the user defined name of an environment
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
@@ -876,7 +886,7 @@ func (m *KsonnetEnvironment) Reset() { *m = KsonnetEnvironment{} }
func (m *KsonnetEnvironment) String() string { return proto.CompactTextString(m) }
func (*KsonnetEnvironment) ProtoMessage() {}
func (*KsonnetEnvironment) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{12}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{12}
}
func (m *KsonnetEnvironment) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -947,7 +957,7 @@ func (m *KsonnetEnvironmentDestination) Reset() { *m = KsonnetEnvironmen
func (m *KsonnetEnvironmentDestination) String() string { return proto.CompactTextString(m) }
func (*KsonnetEnvironmentDestination) ProtoMessage() {}
func (*KsonnetEnvironmentDestination) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{13}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{13}
}
func (m *KsonnetEnvironmentDestination) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -1001,7 +1011,7 @@ func (m *DirectoryAppSpec) Reset() { *m = DirectoryAppSpec{} }
func (m *DirectoryAppSpec) String() string { return proto.CompactTextString(m) }
func (*DirectoryAppSpec) ProtoMessage() {}
func (*DirectoryAppSpec) Descriptor() ([]byte, []int) {
return fileDescriptor_repository_8ffb26589116c4a4, []int{14}
return fileDescriptor_repository_ca2a992ee8cf91c1, []int{14}
}
func (m *DirectoryAppSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -1882,6 +1892,21 @@ func (m *KustomizeAppSpec) MarshalTo(dAtA []byte) (int, error) {
i += n
}
}
if len(m.Images) > 0 {
for _, s := range m.Images {
dAtA[i] = 0x1a
i++
l = len(s)
for l >= 1<<7 {
dAtA[i] = uint8(uint64(l)&0x7f | 0x80)
l >>= 7
i++
}
dAtA[i] = uint8(l)
i++
i += copy(dAtA[i:], s)
}
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
@@ -2308,6 +2333,12 @@ func (m *KustomizeAppSpec) Size() (n int) {
n += 1 + l + sovRepository(uint64(l))
}
}
if len(m.Images) > 0 {
for _, s := range m.Images {
l = len(s)
n += 1 + l + sovRepository(uint64(l))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
@@ -4385,6 +4416,35 @@ func (m *KustomizeAppSpec) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
case 3:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Images", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowRepository
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthRepository
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Images = append(m.Images, string(dAtA[iNdEx:postIndex]))
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipRepository(dAtA[iNdEx:])
@@ -4844,76 +4904,77 @@ var (
)
func init() {
proto.RegisterFile("reposerver/repository/repository.proto", fileDescriptor_repository_8ffb26589116c4a4)
proto.RegisterFile("reposerver/repository/repository.proto", fileDescriptor_repository_ca2a992ee8cf91c1)
}
var fileDescriptor_repository_8ffb26589116c4a4 = []byte{
// 1066 bytes of a gzipped FileDescriptorProto
var fileDescriptor_repository_ca2a992ee8cf91c1 = []byte{
// 1075 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x17, 0x4d, 0x6f, 0x1b, 0x45,
0xb4, 0x6b, 0x3b, 0x1f, 0x7e, 0x4e, 0xdb, 0x64, 0x88, 0xca, 0xe2, 0x06, 0x63, 0xad, 0x28, 0x0a,
0x02, 0xd6, 0x8a, 0x5b, 0xa4, 0xa8, 0x12, 0x42, 0xa1, 0x09, 0x69, 0x94, 0x54, 0xa4, 0x9b, 0x52,
0x09, 0x84, 0x54, 0x4d, 0xd6, 0xaf, 0xeb, 0xa9, 0xed, 0x9d, 0x61, 0x67, 0x6c, 0xc9, 0xfd, 0x03,
0xdc, 0xb9, 0x72, 0xe3, 0xc6, 0x91, 0x5f, 0xc0, 0x11, 0x8e, 0x1c, 0x11, 0x27, 0x94, 0x5f, 0x82,
0x66, 0xbc, 0x6b, 0xcf, 0x3a, 0xdb, 0x08, 0xc9, 0x02, 0x7a, 0xb1, 0xde, 0xbe, 0xef, 0xef, 0x79,
0x86, 0xf7, 0x12, 0x14, 0x5c, 0x62, 0x32, 0xc2, 0xa4, 0x65, 0x40, 0xa6, 0x78, 0x32, 0xb6, 0x40,
0x5f, 0x24, 0x5c, 0x71, 0x02, 0x33, 0x4c, 0x7d, 0x33, 0xe2, 0x11, 0x37, 0xe8, 0x96, 0x86, 0x26,
0x1c, 0xf5, 0xad, 0x88, 0xf3, 0xa8, 0x8f, 0x2d, 0x2a, 0x58, 0x8b, 0xc6, 0x31, 0x57, 0x54, 0x31,
0x1e, 0xcb, 0x94, 0xea, 0xf5, 0x76, 0xa5, 0xcf, 0xb8, 0xa1, 0x86, 0x3c, 0xc1, 0xd6, 0x68, 0xa7,
0x15, 0x61, 0x8c, 0x09, 0x55, 0xd8, 0x49, 0x79, 0x8e, 0x22, 0xa6, 0xba, 0xc3, 0x73, 0x3f, 0xe4,
0x83, 0x16, 0x4d, 0x8c, 0x89, 0x17, 0x06, 0xf8, 0x28, 0xec, 0xb4, 0x44, 0x2f, 0xd2, 0xc2, 0xb2,
0x45, 0x85, 0xe8, 0xb3, 0xd0, 0x28, 0x6f, 0x8d, 0x76, 0x68, 0x5f, 0x74, 0xe9, 0x25, 0x55, 0xde,
0x9f, 0x15, 0xb8, 0xf9, 0x88, 0xc6, 0xec, 0x39, 0x4a, 0x15, 0xe0, 0xb7, 0x43, 0x94, 0x8a, 0x7c,
0x05, 0x15, 0x1d, 0x84, 0xeb, 0x34, 0x9d, 0xed, 0x5a, 0xfb, 0xc0, 0x9f, 0x59, 0xf3, 0x33, 0x6b,
0x06, 0x78, 0x16, 0x76, 0x7c, 0xd1, 0x8b, 0x7c, 0x6d, 0xcd, 0xb7, 0xac, 0xf9, 0x99, 0x35, 0x3f,
0x98, 0xe6, 0x22, 0x30, 0x2a, 0x49, 0x1d, 0x56, 0x13, 0x1c, 0x31, 0xc9, 0x78, 0xec, 0x96, 0x9a,
0xce, 0x76, 0x35, 0x98, 0x7e, 0x13, 0x17, 0x56, 0x62, 0xfe, 0x80, 0x86, 0x5d, 0x74, 0xcb, 0x4d,
0x67, 0x7b, 0x35, 0xc8, 0x3e, 0x49, 0x13, 0x6a, 0x54, 0x88, 0x13, 0x7a, 0x8e, 0xfd, 0x63, 0x1c,
0xbb, 0x15, 0x23, 0x68, 0xa3, 0xc8, 0xbb, 0x70, 0x3d, 0xfb, 0x7c, 0x4a, 0xfb, 0x43, 0x74, 0x97,
0x0c, 0x4f, 0x1e, 0x49, 0xb6, 0xa0, 0x1a, 0xd3, 0x01, 0x4a, 0x41, 0x43, 0x74, 0x57, 0x0d, 0xc7,
0x0c, 0x41, 0x5e, 0xc2, 0x86, 0x15, 0xc4, 0x19, 0x1f, 0x26, 0x21, 0xba, 0x60, 0x72, 0x70, 0xb2,
0x40, 0x0e, 0xf6, 0xe6, 0x75, 0x06, 0x97, 0xcd, 0x90, 0x08, 0xaa, 0x5d, 0xec, 0x0f, 0x4c, 0xbe,
0xdc, 0x5a, 0xb3, 0xbc, 0x5d, 0x6b, 0x1f, 0x2d, 0x60, 0xf3, 0x61, 0xa6, 0x6b, 0x92, 0xfb, 0x99,
0x6e, 0xd2, 0x83, 0x15, 0xd1, 0x1f, 0x46, 0x2c, 0x96, 0xee, 0x9a, 0x31, 0xf3, 0x78, 0x01, 0x33,
0x0f, 0x78, 0xfc, 0x9c, 0x45, 0x8f, 0x68, 0x4c, 0x23, 0x1c, 0x60, 0xac, 0x4e, 0x8d, 0xe6, 0x20,
0xb3, 0xe0, 0xfd, 0xe8, 0xc0, 0xfa, 0xac, 0xb9, 0xa4, 0xe0, 0xb1, 0x34, 0x45, 0x18, 0xa4, 0x38,
0xe9, 0x3a, 0xcd, 0xb2, 0x2e, 0xc2, 0x14, 0x91, 0x2f, 0x51, 0x69, 0xbe, 0x44, 0xb7, 0x60, 0x79,
0x32, 0x82, 0xa6, 0x43, 0xaa, 0x41, 0xfa, 0x95, 0x6b, 0xab, 0xca, 0x5c, 0x5b, 0x35, 0x00, 0xa4,
0x49, 0xf2, 0x93, 0xb1, 0x40, 0x77, 0xd9, 0x50, 0x2d, 0x8c, 0xf7, 0x83, 0x03, 0x37, 0x4e, 0x98,
0x54, 0xfb, 0x2c, 0xf9, 0x9f, 0x07, 0x80, 0x40, 0x45, 0x50, 0xd5, 0x4d, 0x63, 0x33, 0xb0, 0xd7,
0x84, 0xd5, 0xcf, 0x59, 0x1f, 0xb5, 0x83, 0x64, 0x13, 0x96, 0x98, 0xc2, 0x41, 0x96, 0xb5, 0xc9,
0x87, 0xf1, 0xff, 0x10, 0x95, 0xe6, 0x7a, 0x0d, 0xfd, 0xbf, 0x03, 0x37, 0xa7, 0xce, 0xa5, 0x0d,
0xfc, 0x08, 0x6e, 0xdc, 0xe0, 0xc6, 0x2f, 0xe0, 0x08, 0x47, 0x8e, 0x88, 0x13, 0xca, 0x2f, 0x41,
0x33, 0xde, 0xb5, 0x67, 0x9d, 0x6d, 0x84, 0x64, 0x01, 0xbd, 0xac, 0xde, 0xbc, 0xef, 0xcf, 0x99,
0xb7, 0xf0, 0x5e, 0x82, 0x82, 0x4b, 0x4c, 0x46, 0x98, 0xb4, 0x0c, 0xc8, 0x14, 0x4f, 0xc6, 0x16,
0xe8, 0x8b, 0x84, 0x2b, 0x4e, 0x60, 0x86, 0xa9, 0x6f, 0x46, 0x3c, 0xe2, 0x06, 0xdd, 0xd2, 0xd0,
0x84, 0xa3, 0xbe, 0x15, 0x71, 0x1e, 0xf5, 0xb1, 0x45, 0x05, 0x6b, 0xd1, 0x38, 0xe6, 0x8a, 0x2a,
0xc6, 0x63, 0x99, 0x52, 0xbd, 0xde, 0xae, 0xf4, 0x19, 0x37, 0xd4, 0x90, 0x27, 0xd8, 0x1a, 0xed,
0xb4, 0x22, 0x8c, 0x31, 0xa1, 0x0a, 0x3b, 0x29, 0xcf, 0x51, 0xc4, 0x54, 0x77, 0x78, 0xee, 0x87,
0x7c, 0xd0, 0xa2, 0x89, 0x31, 0xf1, 0xc2, 0x00, 0x1f, 0x85, 0x9d, 0x96, 0xe8, 0x45, 0x5a, 0x58,
0xb6, 0xa8, 0x10, 0x7d, 0x16, 0x1a, 0xe5, 0xad, 0xd1, 0x0e, 0xed, 0x8b, 0x2e, 0xbd, 0xa4, 0xca,
0xfb, 0xb3, 0x02, 0x37, 0x1f, 0xd1, 0x98, 0x3d, 0x47, 0xa9, 0x02, 0xfc, 0x76, 0x88, 0x52, 0x91,
0xaf, 0xa0, 0xa2, 0x83, 0x70, 0x9d, 0xa6, 0xb3, 0x5d, 0x6b, 0x1f, 0xf8, 0x33, 0x6b, 0x7e, 0x66,
0xcd, 0x00, 0xcf, 0xc2, 0x8e, 0x2f, 0x7a, 0x91, 0xaf, 0xad, 0xf9, 0x96, 0x35, 0x3f, 0xb3, 0xe6,
0x07, 0xd3, 0x5c, 0x04, 0x46, 0x25, 0xa9, 0xc3, 0x6a, 0x82, 0x23, 0x26, 0x19, 0x8f, 0xdd, 0x52,
0xd3, 0xd9, 0xae, 0x06, 0xd3, 0x33, 0x71, 0x61, 0x25, 0xe6, 0x0f, 0x68, 0xd8, 0x45, 0xb7, 0xdc,
0x74, 0xb6, 0x57, 0x83, 0xec, 0x48, 0x9a, 0x50, 0xa3, 0x42, 0x9c, 0xd0, 0x73, 0xec, 0x1f, 0xe3,
0xd8, 0xad, 0x18, 0x41, 0x1b, 0x45, 0xde, 0x85, 0xeb, 0xd9, 0xf1, 0x29, 0xed, 0x0f, 0xd1, 0x5d,
0x32, 0x3c, 0x79, 0x24, 0xd9, 0x82, 0x6a, 0x4c, 0x07, 0x28, 0x05, 0x0d, 0xd1, 0x5d, 0x35, 0x1c,
0x33, 0x04, 0x79, 0x09, 0x1b, 0x56, 0x10, 0x67, 0x7c, 0x98, 0x84, 0xe8, 0x82, 0xc9, 0xc1, 0xc9,
0x02, 0x39, 0xd8, 0x9b, 0xd7, 0x19, 0x5c, 0x36, 0x43, 0x22, 0xa8, 0x76, 0xb1, 0x3f, 0x30, 0xf9,
0x72, 0x6b, 0xcd, 0xf2, 0x76, 0xad, 0x7d, 0xb4, 0x80, 0xcd, 0x87, 0x99, 0xae, 0x49, 0xee, 0x67,
0xba, 0x49, 0x0f, 0x56, 0x44, 0x7f, 0x18, 0xb1, 0x58, 0xba, 0x6b, 0xc6, 0xcc, 0xe3, 0x05, 0xcc,
0x3c, 0xe0, 0xf1, 0x73, 0x16, 0x3d, 0xa2, 0x31, 0x8d, 0x70, 0x80, 0xb1, 0x3a, 0x35, 0x9a, 0x83,
0xcc, 0x82, 0xf7, 0x83, 0x03, 0xeb, 0xb3, 0xe6, 0x92, 0x82, 0xc7, 0xd2, 0x14, 0x61, 0x90, 0xe2,
0xa4, 0xeb, 0x34, 0xcb, 0xba, 0x08, 0x53, 0x44, 0xbe, 0x44, 0xa5, 0xf9, 0x12, 0xdd, 0x82, 0xe5,
0xc9, 0x08, 0x9a, 0x0e, 0xa9, 0x06, 0xe9, 0x29, 0xd7, 0x56, 0x95, 0xb9, 0xb6, 0x6a, 0x00, 0x48,
0x93, 0xe4, 0x27, 0x63, 0x81, 0xee, 0xb2, 0xa1, 0x5a, 0x18, 0xef, 0x7b, 0x07, 0x6e, 0x9c, 0x30,
0xa9, 0xf6, 0x59, 0xf2, 0x3f, 0x0f, 0x00, 0x81, 0x8a, 0xa0, 0xaa, 0x9b, 0xc6, 0x66, 0x60, 0xaf,
0x09, 0xab, 0x9f, 0xb3, 0x3e, 0x6a, 0x07, 0xc9, 0x26, 0x2c, 0x31, 0x85, 0x83, 0x2c, 0x6b, 0x93,
0x83, 0xf1, 0xff, 0x10, 0x95, 0xe6, 0x7a, 0x0d, 0xfd, 0xbf, 0x03, 0x37, 0xa7, 0xce, 0xa5, 0x0d,
0x40, 0xa0, 0xd2, 0xa1, 0x8a, 0x1a, 0xef, 0xd6, 0x02, 0x03, 0x7b, 0x3f, 0x97, 0xe1, 0x2d, 0x6d,
0xeb, 0xcc, 0xd4, 0x73, 0x4f, 0x88, 0x7d, 0x54, 0x94, 0xf5, 0xe5, 0xe3, 0x21, 0x26, 0xe3, 0xd7,
0x28, 0x9e, 0xfc, 0xa0, 0x56, 0xfe, 0x9b, 0x41, 0x5d, 0xfa, 0xb7, 0x07, 0x95, 0xdc, 0x85, 0x8a,
0xb6, 0x6c, 0xa6, 0xa3, 0xd6, 0x7e, 0xc7, 0xb7, 0x5e, 0x35, 0xed, 0xe1, 0x5c, 0x3d, 0x02, 0xc3,
0xec, 0x7d, 0x0c, 0x6f, 0x14, 0x10, 0xf5, 0xbc, 0x8d, 0xf4, 0xb6, 0xd5, 0x35, 0xcf, 0x5a, 0xd5,
0xc2, 0x78, 0xdf, 0x95, 0xe0, 0x96, 0x0e, 0x71, 0x26, 0x67, 0x77, 0x86, 0xd2, 0x43, 0xea, 0x4c,
0x12, 0xae, 0x61, 0x72, 0x0f, 0x56, 0x7a, 0x92, 0xc7, 0x31, 0x2a, 0x53, 0x9f, 0x5a, 0xbb, 0x6e,
0x7b, 0x77, 0x3c, 0x21, 0xed, 0x09, 0x71, 0x26, 0x30, 0x0c, 0x32, 0x56, 0xf2, 0x41, 0x1a, 0x50,
0xd9, 0x88, 0xbc, 0x59, 0x10, 0x90, 0xe1, 0x37, 0x4c, 0xe4, 0x3e, 0x54, 0x7b, 0x43, 0xa9, 0xf8,
0x80, 0xbd, 0x44, 0xb3, 0x3e, 0x6a, 0xed, 0xad, 0x9c, 0x91, 0x8c, 0x98, 0x89, 0xcd, 0xd8, 0xb5,
0x6c, 0x87, 0x25, 0x18, 0x6a, 0x46, 0xf3, 0xe8, 0xcc, 0xc9, 0xee, 0x67, 0xc4, 0xa9, 0xec, 0x94,
0xdd, 0xfb, 0xa3, 0x04, 0x37, 0xf2, 0x01, 0xe8, 0x0c, 0xe8, 0x6d, 0x97, 0x65, 0x40, 0xc3, 0xd3,
0x36, 0x2c, 0x59, 0x6d, 0x78, 0x0a, 0x6b, 0x18, 0x8f, 0x58, 0xc2, 0x63, 0x5d, 0x4e, 0xe9, 0x96,
0x4d, 0x8b, 0x7c, 0xf8, 0xea, 0xd4, 0xf8, 0x07, 0x16, 0xfb, 0x41, 0xac, 0x92, 0x71, 0x90, 0xd3,
0x40, 0x7a, 0x00, 0x82, 0x26, 0x74, 0x80, 0x0a, 0x93, 0xac, 0xb3, 0x8f, 0x17, 0x68, 0xb9, 0xd4,
0xfc, 0x69, 0xa6, 0x33, 0xb0, 0xd4, 0xd7, 0x9f, 0xc1, 0xc6, 0x25, 0x7f, 0xc8, 0x3a, 0x94, 0x7b,
0x38, 0x4e, 0x43, 0xd7, 0x20, 0xb9, 0x07, 0x4b, 0xa6, 0x71, 0xd2, 0xca, 0x37, 0x0a, 0xc2, 0xb3,
0xd4, 0x04, 0x13, 0xe6, 0xfb, 0xa5, 0x5d, 0xc7, 0xfb, 0xc5, 0x81, 0x9a, 0x55, 0xe8, 0x7f, 0x9c,
0xd7, 0x7c, 0xf3, 0x96, 0xe7, 0x9b, 0x97, 0x74, 0x0b, 0xb2, 0xf4, 0x70, 0xc1, 0xf9, 0x2f, 0x4c,
0x91, 0xf7, 0xbd, 0x03, 0xeb, 0xf3, 0x8d, 0x37, 0x75, 0xd9, 0xb1, 0x5c, 0x7e, 0x01, 0x55, 0x36,
0xa0, 0x11, 0x3e, 0xa1, 0x91, 0x74, 0x4b, 0xc6, 0xa3, 0x45, 0xce, 0x95, 0xa9, 0xcd, 0xa3, 0x54,
0x69, 0x30, 0x53, 0xef, 0xfd, 0xe4, 0x00, 0xb9, 0x9c, 0xf8, 0xc2, 0xec, 0x36, 0x00, 0x7a, 0xbb,
0xf2, 0x29, 0x26, 0xd6, 0x6a, 0xb5, 0x30, 0x85, 0xcb, 0xf5, 0x18, 0x6a, 0x1d, 0x94, 0x8a, 0xc5,
0xc6, 0xa7, 0x74, 0x14, 0xdf, 0xbf, 0xba, 0xea, 0xfb, 0x33, 0x81, 0xc0, 0x96, 0xf6, 0xbe, 0x84,
0xb7, 0xaf, 0xe4, 0xb6, 0x8e, 0x09, 0x27, 0x77, 0x4c, 0x5c, 0x79, 0x82, 0x78, 0x04, 0xd6, 0xe7,
0x67, 0xba, 0xfd, 0x6b, 0x09, 0x36, 0x66, 0xaf, 0x97, 0xfe, 0x65, 0x21, 0x92, 0x2f, 0x60, 0xfd,
0x30, 0xbd, 0xb6, 0xb3, 0x23, 0x88, 0xdc, 0xb6, 0x83, 0x99, 0xbb, 0xbb, 0xeb, 0x5b, 0xc5, 0xc4,
0xc9, 0x72, 0xf4, 0xae, 0x91, 0x4f, 0x60, 0x25, 0x3d, 0x54, 0x48, 0x6e, 0x09, 0xe6, 0xaf, 0x97,
0xfa, 0xa6, 0x4d, 0xcb, 0x8e, 0x07, 0xef, 0x1a, 0xd9, 0x87, 0x95, 0xf4, 0x29, 0xce, 0x8b, 0xe7,
0x8f, 0x87, 0xfa, 0xed, 0x42, 0xda, 0xd4, 0x89, 0x6f, 0xe0, 0xfa, 0xa1, 0xd9, 0x2a, 0xe9, 0xf2,
0x26, 0x77, 0x6c, 0xfe, 0x57, 0xbe, 0xe1, 0x75, 0x6f, 0x9e, 0xed, 0xf2, 0xfe, 0xf7, 0xae, 0x7d,
0xf6, 0xe9, 0x6f, 0x17, 0x0d, 0xe7, 0xf7, 0x8b, 0x86, 0xf3, 0xd7, 0x45, 0xc3, 0xf9, 0x7a, 0xe7,
0xaa, 0xff, 0x39, 0x85, 0xff, 0xc7, 0xce, 0x97, 0xcd, 0xdf, 0x9a, 0xbb, 0x7f, 0x07, 0x00, 0x00,
0xff, 0xff, 0x7a, 0x57, 0xd9, 0x78, 0xaf, 0x0d, 0x00, 0x00,
0xec, 0x7d, 0x0c, 0x6f, 0x14, 0x10, 0xf5, 0xbc, 0x8d, 0xf4, 0x6d, 0xab, 0x6b, 0x9e, 0xb5, 0xaa,
0x85, 0xf1, 0xbe, 0x2b, 0xc1, 0x2d, 0x1d, 0xe2, 0x4c, 0xce, 0xee, 0x0c, 0xa5, 0x87, 0xd4, 0x99,
0x24, 0x5c, 0xc3, 0xe4, 0x1e, 0xac, 0xf4, 0x24, 0x8f, 0x63, 0x54, 0xa6, 0x3e, 0xb5, 0x76, 0xdd,
0xf6, 0xee, 0x78, 0x42, 0xda, 0x13, 0xe2, 0x4c, 0x60, 0x18, 0x64, 0xac, 0xe4, 0x83, 0x34, 0xa0,
0xb2, 0x11, 0x79, 0xb3, 0x20, 0x20, 0xc3, 0x6f, 0x98, 0xc8, 0x7d, 0xa8, 0xf6, 0x86, 0x52, 0xf1,
0x01, 0x7b, 0x89, 0xe6, 0xfa, 0xa8, 0xb5, 0xb7, 0x72, 0x46, 0x32, 0x62, 0x26, 0x36, 0x63, 0xd7,
0xb2, 0x1d, 0x96, 0x60, 0xa8, 0x19, 0xcd, 0xa3, 0x33, 0x27, 0xbb, 0x9f, 0x11, 0xa7, 0xb2, 0x53,
0x76, 0xef, 0x8f, 0x12, 0xdc, 0xc8, 0x07, 0xa0, 0x33, 0xa0, 0x6f, 0xbb, 0x2c, 0x03, 0x1a, 0x9e,
0xb6, 0x61, 0xc9, 0x6a, 0xc3, 0x53, 0x58, 0xc3, 0x78, 0xc4, 0x12, 0x1e, 0xeb, 0x72, 0x4a, 0xb7,
0x6c, 0x5a, 0xe4, 0xc3, 0x57, 0xa7, 0xc6, 0x3f, 0xb0, 0xd8, 0x0f, 0x62, 0x95, 0x8c, 0x83, 0x9c,
0x06, 0xd2, 0x03, 0x10, 0x34, 0xa1, 0x03, 0x54, 0x98, 0x64, 0x9d, 0x7d, 0xbc, 0x40, 0xcb, 0xa5,
0xe6, 0x4f, 0x33, 0x9d, 0x81, 0xa5, 0xbe, 0xfe, 0x0c, 0x36, 0x2e, 0xf9, 0x43, 0xd6, 0xa1, 0xdc,
0xc3, 0x71, 0x1a, 0xba, 0x06, 0xc9, 0x3d, 0x58, 0x32, 0x8d, 0x93, 0x56, 0xbe, 0x51, 0x10, 0x9e,
0xa5, 0x26, 0x98, 0x30, 0xdf, 0x2f, 0xed, 0x3a, 0xde, 0x2f, 0x0e, 0xd4, 0xac, 0x42, 0xff, 0xe3,
0xbc, 0xe6, 0x9b, 0xb7, 0x3c, 0xdf, 0xbc, 0xa4, 0x5b, 0x90, 0xa5, 0x87, 0x0b, 0xce, 0x7f, 0x61,
0x8a, 0xbc, 0x9f, 0x1c, 0x58, 0x9f, 0x6f, 0xbc, 0xa9, 0xcb, 0x8e, 0xe5, 0xf2, 0x0b, 0xa8, 0xb2,
0x01, 0x8d, 0xf0, 0x09, 0x8d, 0xa4, 0x5b, 0x32, 0x1e, 0x2d, 0xb2, 0xae, 0x4c, 0x6d, 0x1e, 0xa5,
0x4a, 0x83, 0x99, 0x7a, 0xfd, 0xfe, 0x9a, 0x43, 0x96, 0x9a, 0xf4, 0xe4, 0xfd, 0xe8, 0x00, 0xb9,
0x5c, 0x90, 0xc2, 0xac, 0x37, 0x00, 0x7a, 0xbb, 0xf2, 0x29, 0x26, 0xd6, 0x95, 0x6b, 0x61, 0x0a,
0x2f, 0xdd, 0x63, 0xa8, 0x75, 0x50, 0x2a, 0x16, 0x1b, 0x5f, 0xd3, 0x11, 0x7d, 0xff, 0xea, 0x6e,
0xd8, 0x9f, 0x09, 0x04, 0xb6, 0xb4, 0xf7, 0x25, 0xbc, 0x7d, 0x25, 0xb7, 0xb5, 0x64, 0x38, 0xb9,
0x25, 0xe3, 0xca, 0xd5, 0xc4, 0x23, 0xb0, 0x3e, 0x3f, 0xeb, 0xed, 0x5f, 0x4b, 0xb0, 0x31, 0x7b,
0xd5, 0xf4, 0x97, 0x85, 0x48, 0xbe, 0x80, 0xf5, 0xc3, 0x74, 0x0b, 0xcf, 0x96, 0x23, 0x72, 0xdb,
0x0e, 0x66, 0x6e, 0x1f, 0xaf, 0x6f, 0x15, 0x13, 0x27, 0x97, 0xa6, 0x77, 0x8d, 0x7c, 0x02, 0x2b,
0xe9, 0x02, 0x43, 0x72, 0x97, 0x63, 0x7e, 0xab, 0xa9, 0x6f, 0xda, 0xb4, 0x6c, 0xa9, 0xf0, 0xae,
0x91, 0x7d, 0x58, 0x49, 0x9f, 0xe8, 0xbc, 0x78, 0x7e, 0xa9, 0xa8, 0xdf, 0x2e, 0xa4, 0x4d, 0x9d,
0xf8, 0x06, 0xae, 0x1f, 0x9a, 0xdb, 0x26, 0xbd, 0xd4, 0xc9, 0x1d, 0x9b, 0xff, 0x95, 0x6f, 0x7b,
0xdd, 0x9b, 0x67, 0xbb, 0xfc, 0x2e, 0x78, 0xd7, 0x3e, 0xfb, 0xf4, 0xb7, 0x8b, 0x86, 0xf3, 0xfb,
0x45, 0xc3, 0xf9, 0xeb, 0xa2, 0xe1, 0x7c, 0xbd, 0x73, 0xd5, 0xff, 0x4f, 0xe1, 0x7f, 0xda, 0xf9,
0xb2, 0xf9, 0xdd, 0xb9, 0xfb, 0x77, 0x00, 0x00, 0x00, 0xff, 0xff, 0x50, 0x2e, 0x2e, 0x03, 0xc7,
0x0d, 0x00, 0x00,
}
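
The hand-written marshal and unmarshal arms added above are plain protobuf wire format: the tag byte 0x1a encodes (field 3 << 3) | wire type 2 (length-delimited), followed by a base-128 varint length and the raw bytes. A worked example of exactly that encoding:

package main

import "fmt"

// appendStringField3 emits one protobuf string for field number 3: the tag byte
// 0x1a is (3<<3)|2, then a base-128 varint length, then the raw bytes.
func appendStringField3(dst []byte, s string) []byte {
	dst = append(dst, 0x1a)
	l := len(s)
	for l >= 1<<7 {
		dst = append(dst, byte(uint64(l)&0x7f|0x80)) // low 7 bits, continuation bit set
		l >>= 7
	}
	dst = append(dst, byte(l)) // final length byte, continuation bit clear
	return append(dst, s...)
}

func main() {
	fmt.Printf("% x\n", appendStringField3(nil, "nginx")) // prints: 1a 05 6e 67 69 6e 78
}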


@@ -97,7 +97,10 @@ message HelmAppSpec {
// KustomizeAppSpec contains kustomize app name and path in source repo
message KustomizeAppSpec {
string path = 1;
// imageTags is a list of available image tags. This is only populated for Kustomize 1.
repeated github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1.KustomizeImageTag imageTags = 2;
// images is a list of available images. This is only populated for Kustomize 2.
repeated string images = 3;
}
message KsonnetEnvironment {


@@ -36,7 +36,7 @@ type fakeGitClientFactory struct {
root string
}
func (f *fakeGitClientFactory) NewClient(repoURL, path, username, password, sshPrivateKey string) (git.Client, error) {
func (f *fakeGitClientFactory) NewClient(repoURL, path, username, password, sshPrivateKey string, insecureIgnoreHostKey bool) (git.Client, error) {
mockClient := gitmocks.Client{}
root := "./testdata"
if f.root != "" {
@@ -251,7 +251,6 @@ func TestGetAppDetailsKustomize(t *testing.T) {
Path: "kustomization_yaml",
})
assert.NoError(t, err)
assert.Equal(t, "nginx", res.Kustomize.ImageTags[0].Name)
assert.Equal(t, "1.15.4", res.Kustomize.ImageTags[0].Value)
assert.Equal(t, 2, len(res.Kustomize.ImageTags))
assert.Nil(t, res.Kustomize.Images)
assert.Equal(t, []*argoappv1.KustomizeImageTag{{Name: "nginx", Value: "1.15.4"}, {Name: "k8s.gcr.io/nginx-slim", Value: "0.8"}}, res.Kustomize.ImageTags)
}


@@ -361,6 +361,8 @@ func (s *Server) Patch(ctx context.Context, q *ApplicationPatchRequest) (*appv1.
return nil, err
}
s.logEvent(app, ctx, argo.EventReasonResourceUpdated, fmt.Sprintf("patched application %s/%s", app.Namespace, app.Name))
err = json.Unmarshal(patchApp, &app)
if err != nil {
return nil, err
@@ -433,6 +435,11 @@ func (s *Server) Watch(q *ApplicationQuery, ws ApplicationService_WatchServer) e
if err != nil {
return err
}
defer w.Stop()
logCtx := log.NewEntry(log.New())
if q.Name != nil {
logCtx = logCtx.WithField("application", *q.Name)
}
claims := ws.Context().Value("claims")
done := make(chan bool)
go func() {
@@ -448,17 +455,17 @@ func (s *Server) Watch(q *ApplicationQuery, ws ApplicationService_WatchServer) e
Application: a,
})
if err != nil {
log.Warnf("Unable to send stream message: %v", err)
logCtx.Warnf("Unable to send stream message: %v", err)
}
}
}
done <- true
logCtx.Info("k8s application watch event channel closed")
close(done)
}()
select {
case <-ws.Context().Done():
w.Stop()
logCtx.Info("client watch grpc context closed")
case <-done:
w.Stop()
}
return nil
}
@@ -492,18 +499,13 @@ func (s *Server) validateAndNormalizeApp(ctx context.Context, app *appv1.Applica
}
}
conditions, err := argo.GetSpecErrors(ctx, &app.Spec, proj, s.repoClientset, s.db)
conditions, appSourceType, err := argo.GetSpecErrors(ctx, &app.Spec, proj, s.repoClientset, s.db)
if err != nil {
return err
}
if len(conditions) > 0 {
return status.Errorf(codes.InvalidArgument, "application spec is invalid: %s", argo.FormatAppConditions(conditions))
}
appSourceType, err := argo.QueryAppSourceType(ctx, app, s.repoClientset, s.db)
if err != nil {
return err
}
app.Spec = *argo.NormalizeApplicationSpec(&app.Spec, appSourceType)
return nil
}
@@ -620,6 +622,7 @@ func (s *Server) PatchResource(ctx context.Context, q *ApplicationResourcePatchR
if err != nil {
return nil, err
}
s.logEvent(a, ctx, argo.EventReasonResourceUpdated, fmt.Sprintf("patched resource %s/%s '%s'", q.Group, q.Kind, q.ResourceName))
return &ApplicationResourceResponse{
Manifest: string(data),
}, nil
@@ -728,7 +731,10 @@ func (s *Server) PodLogs(q *ApplicationPodLogsQuery, ws ApplicationService_PodLo
if err != nil {
return err
}
logCtx := log.WithField("application", q.Name)
defer util.Close(stream)
done := make(chan bool)
gracefulExit := false
go func() {
scanner := bufio.NewScanner(stream)
for scanner.Scan() {
@@ -745,18 +751,25 @@ func (s *Server) PodLogs(q *ApplicationPodLogsQuery, ws ApplicationService_PodLo
TimeStamp: metaLogTime,
})
if err != nil {
log.Warnf("Unable to send stream message: %v", err)
logCtx.Warnf("Unable to send stream message: %v", err)
}
}
}
}
}
done <- true
if gracefulExit {
logCtx.Info("k8s pod logs scanner completed due to closed grpc context")
} else if err := scanner.Err(); err != nil {
logCtx.Warnf("k8s pod logs scanner failed with error: %v", err)
} else {
logCtx.Info("k8s pod logs scanner completed with EOF")
}
close(done)
}()
select {
case <-ws.Context().Done():
util.Close(stream)
logCtx.Info("client pod logs grpc context closed")
gracefulExit = true
case <-done:
}
return nil
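
The `gracefulExit` flag added above lets the scanner goroutine report why log streaming stopped: client hang-up, scanner error, or plain EOF. A minimal sketch of that three-way classification (illustrative input; the real code scans the pod log stream):

package main

import (
	"bufio"
	"log"
	"strings"
)

func main() {
	gracefulExit := false // set to true when the client's grpc context closes
	scanner := bufio.NewScanner(strings.NewReader("log line 1\nlog line 2\n"))
	for scanner.Scan() {
		log.Printf("send: %s", scanner.Text()) // stands in for ws.Send(...)
	}
	switch {
	case gracefulExit:
		log.Print("scanner completed due to closed grpc context")
	case scanner.Err() != nil:
		log.Printf("scanner failed with error: %v", scanner.Err())
	default:
		log.Print("scanner completed with EOF")
	}
}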
@@ -889,7 +902,7 @@ func (s *Server) resolveRevision(ctx context.Context, app *appv1.Application, sy
// If we couldn't retrieve from the repo service, assume public repositories
repo = &appv1.Repository{Repo: app.Spec.Source.RepoURL}
}
gitClient, err := s.gitFactory.NewClient(repo.Repo, "", repo.Username, repo.Password, repo.SSHPrivateKey)
gitClient, err := s.gitFactory.NewClient(repo.Repo, "", repo.Username, repo.Password, repo.SSHPrivateKey, repo.InsecureIgnoreHostKey)
if err != nil {
return "", "", err
}

View File

@@ -39,7 +39,7 @@ func NewServer(db db.ArgoDB, enf *rbac.Enforcer, cache *cache.Cache) *Server {
}
}
func (s *Server) getConnectionState(ctx context.Context, cluster appv1.Cluster) appv1.ConnectionState {
func (s *Server) getConnectionState(ctx context.Context, cluster appv1.Cluster, errorMessage string) appv1.ConnectionState {
if connectionState, err := s.cache.GetClusterConnectionState(cluster.Server); err == nil {
return connectionState
}
@@ -59,6 +59,12 @@ func (s *Server) getConnectionState(ctx context.Context, cluster appv1.Cluster)
connectionState.Status = appv1.ConnectionStatusFailed
connectionState.Message = fmt.Sprintf("Unable to connect to cluster: %v", err)
}
if errorMessage != "" {
connectionState.Status = appv1.ConnectionStatusFailed
connectionState.Message = fmt.Sprintf("%s %s", errorMessage, connectionState.Message)
}
err = s.cache.SetClusterConnectionState(cluster.Server, &connectionState)
if err != nil {
log.Warnf("getConnectionState cache set error %s: %v", cluster.Server, err)
@@ -72,26 +78,36 @@ func (s *Server) List(ctx context.Context, q *ClusterQuery) (*appv1.ClusterList,
if err != nil {
return nil, err
}
newItems := make([]appv1.Cluster, 0)
clustersByServer := make(map[string][]appv1.Cluster)
for _, clust := range clusterList.Items {
if s.enf.Enforce(ctx.Value("claims"), rbacpolicy.ResourceClusters, rbacpolicy.ActionGet, clust.Server) {
newItems = append(newItems, clust)
clustersByServer[clust.Server] = append(clustersByServer[clust.Server], clust)
}
}
servers := make([]string, 0)
for server := range clustersByServer {
servers = append(servers, server)
}
err = util.RunAllAsync(len(newItems), func(i int) error {
clust := newItems[i]
if clust.ConnectionState.Status == "" {
clust.ConnectionState = s.getConnectionState(ctx, clust)
items := make([]appv1.Cluster, len(servers))
err = util.RunAllAsync(len(servers), func(i int) error {
clusters := clustersByServer[servers[i]]
clust := clusters[0]
warningMessage := ""
if len(clusters) > 1 {
warningMessage = fmt.Sprintf("There are %d credentials configured for this cluster.", len(clusters))
}
newItems[i] = *redact(&clust)
if clust.ConnectionState.Status == "" {
clust.ConnectionState = s.getConnectionState(ctx, clust, warningMessage)
}
items[i] = *redact(&clust)
return nil
})
if err != nil {
return nil, err
}
clusterList.Items = newItems
clusterList.Items = items
return clusterList, err
}
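
The List rewrite above de-duplicates credential entries per server URL: each server's connection state is checked once, and a warning is surfaced when several secrets point at the same cluster. A self-contained sketch of the grouping step (the Cluster type is a stand-in for appv1.Cluster):

package main

import "fmt"

type Cluster struct{ Server string }

func main() {
	clusterList := []Cluster{{"https://a"}, {"https://a"}, {"https://b"}}

	// Bucket entries by server URL; keep one representative per server.
	clustersByServer := make(map[string][]Cluster)
	for _, c := range clusterList {
		clustersByServer[c.Server] = append(clustersByServer[c.Server], c)
	}
	for server, clusters := range clustersByServer {
		warning := ""
		if len(clusters) > 1 {
			warning = fmt.Sprintf("There are %d credentials configured for this cluster.", len(clusters))
		}
		fmt.Printf("%s %q\n", server, warning)
	}
}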

View File

@@ -59,7 +59,7 @@ func (s *Server) getConnectionState(ctx context.Context, url string) appsv1.Conn
}
repo, err := s.db.GetRepository(ctx, url)
if err == nil {
err = git.TestRepo(repo.Repo, repo.Username, repo.Password, repo.SSHPrivateKey)
err = git.TestRepo(repo.Repo, repo.Username, repo.Password, repo.SSHPrivateKey, repo.InsecureIgnoreHostKey)
}
if err != nil {
connectionState.Status = appsv1.ConnectionStatusFailed
@@ -235,7 +235,7 @@ func (s *Server) Create(ctx context.Context, q *RepoCreateRequest) (*appsv1.Repo
return nil, err
}
r := q.Repo
err := git.TestRepo(r.Repo, r.Username, r.Password, r.SSHPrivateKey)
err := git.TestRepo(r.Repo, r.Username, r.Password, r.SSHPrivateKey, r.InsecureIgnoreHostKey)
if err != nil {
return nil, err
}

View File

@@ -371,12 +371,13 @@ func (a *ArgoCDServer) newGRPCServer() *grpc.Server {
grpc.ConnectionTimeout(300 * time.Second),
}
sensitiveMethods := map[string]bool{
"/cluster.ClusterService/Create": true,
"/cluster.ClusterService/Update": true,
"/session.SessionService/Create": true,
"/account.AccountService/UpdatePassword": true,
"/repository.RepositoryService/Create": true,
"/repository.RepositoryService/Update": true,
"/cluster.ClusterService/Create": true,
"/cluster.ClusterService/Update": true,
"/session.SessionService/Create": true,
"/account.AccountService/UpdatePassword": true,
"/repository.RepositoryService/Create": true,
"/repository.RepositoryService/Update": true,
"/application.ApplicationService/PatchResource": true,
}
// NOTE: notice we do not configure the gRPC server here with TLS (e.g. grpc.Creds(creds))
// This is because TLS handshaking occurs in cmux handling
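
Adding `/application.ApplicationService/PatchResource` to `sensitiveMethods` presumably keeps resource patches, which may contain secret data, out of request logs; a hypothetical sketch of that kind of lookup — the logger shape here is an assumption, not the actual Argo CD interceptor:

package main

import "fmt"

// Hypothetical payload logger gated by a sensitive-method set.
var sensitiveMethods = map[string]bool{
	"/application.ApplicationService/PatchResource": true,
}

func logPayload(fullMethod, payload string) {
	if sensitiveMethods[fullMethod] {
		fmt.Printf("%s: payload redacted\n", fullMethod)
		return
	}
	fmt.Printf("%s: %s\n", fullMethod, payload)
}

func main() {
	logPayload("/application.ApplicationService/PatchResource", `{"data":{"password":"..."}}`)
	logPayload("/version.VersionService/Version", "{}")
}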

View File

@@ -4,6 +4,7 @@ import (
"github.com/ghodss/yaml"
"golang.org/x/net/context"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/settings"
)
@@ -25,9 +26,16 @@ func (s *Server) Get(ctx context.Context, q *SettingsQuery) (*Settings, error) {
if err != nil {
return nil, err
}
overrides := make(map[string]*v1alpha1.ResourceOverride)
for k := range argoCDSettings.ResourceOverrides {
val := argoCDSettings.ResourceOverrides[k]
overrides[k] = &val
}
set := Settings{
URL: argoCDSettings.URL,
AppLabelKey: argoCDSettings.GetAppInstanceLabelKey(),
URL: argoCDSettings.URL,
AppLabelKey: argoCDSettings.GetAppInstanceLabelKey(),
ResourceOverrides: overrides,
}
if argoCDSettings.DexConfig != "" {
var cfg DexConfig
@@ -38,9 +46,10 @@ func (s *Server) Get(ctx context.Context, q *SettingsQuery) (*Settings, error) {
}
if oidcConfig := argoCDSettings.OIDCConfig(); oidcConfig != nil {
set.OIDCConfig = &OIDCConfig{
Name: oidcConfig.Name,
Issuer: oidcConfig.Issuer,
ClientID: oidcConfig.ClientID,
Name: oidcConfig.Name,
Issuer: oidcConfig.Issuer,
ClientID: oidcConfig.ClientID,
CLIClientID: oidcConfig.CLIClientID,
}
}
return &set, nil
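
The Get handler now exposes resource overrides and the CLI OIDC client ID to API consumers. A self-contained sketch of the new response shape, using stand-in types that mirror the hunks above (the URL, label key, and override content are made-up examples, not real defaults):

package main

import "fmt"

// Stand-ins for the generated types referenced above.
type ResourceOverride struct{ IgnoreDifferences string }
type Settings struct {
	URL               string
	AppLabelKey       string
	ResourceOverrides map[string]*ResourceOverride
}

func main() {
	set := Settings{
		URL:         "https://argocd.example.com", // illustrative
		AppLabelKey: "app.kubernetes.io/instance", // illustrative
		ResourceOverrides: map[string]*ResourceOverride{
			"apps/Deployment": {IgnoreDifferences: `jsonPointers: ["/spec/replicas"]`},
		},
	}
	fmt.Printf("%+v\n", set)
}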

View File

@@ -12,6 +12,7 @@ package settings // import "github.com/argoproj/argo-cd/server/settings"
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
import v1alpha1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
import _ "github.com/gogo/protobuf/gogoproto"
import _ "google.golang.org/genproto/googleapis/api/annotations"
@@ -42,7 +43,7 @@ func (m *SettingsQuery) Reset() { *m = SettingsQuery{} }
func (m *SettingsQuery) String() string { return proto.CompactTextString(m) }
func (*SettingsQuery) ProtoMessage() {}
func (*SettingsQuery) Descriptor() ([]byte, []int) {
return fileDescriptor_settings_ec03f87e34619c39, []int{0}
return fileDescriptor_settings_f96c7d59ef70e4fa, []int{0}
}
func (m *SettingsQuery) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -72,20 +73,21 @@ func (m *SettingsQuery) XXX_DiscardUnknown() {
var xxx_messageInfo_SettingsQuery proto.InternalMessageInfo
type Settings struct {
URL string `protobuf:"bytes,1,opt,name=url,proto3" json:"url,omitempty"`
DexConfig *DexConfig `protobuf:"bytes,2,opt,name=dexConfig" json:"dexConfig,omitempty"`
OIDCConfig *OIDCConfig `protobuf:"bytes,3,opt,name=oidcConfig" json:"oidcConfig,omitempty"`
AppLabelKey string `protobuf:"bytes,4,opt,name=appLabelKey,proto3" json:"appLabelKey,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
URL string `protobuf:"bytes,1,opt,name=url,proto3" json:"url,omitempty"`
DexConfig *DexConfig `protobuf:"bytes,2,opt,name=dexConfig" json:"dexConfig,omitempty"`
OIDCConfig *OIDCConfig `protobuf:"bytes,3,opt,name=oidcConfig" json:"oidcConfig,omitempty"`
AppLabelKey string `protobuf:"bytes,4,opt,name=appLabelKey,proto3" json:"appLabelKey,omitempty"`
ResourceOverrides map[string]*v1alpha1.ResourceOverride `protobuf:"bytes,5,rep,name=resourceOverrides" json:"resourceOverrides,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Settings) Reset() { *m = Settings{} }
func (m *Settings) String() string { return proto.CompactTextString(m) }
func (*Settings) ProtoMessage() {}
func (*Settings) Descriptor() ([]byte, []int) {
return fileDescriptor_settings_ec03f87e34619c39, []int{1}
return fileDescriptor_settings_f96c7d59ef70e4fa, []int{1}
}
func (m *Settings) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -142,6 +144,13 @@ func (m *Settings) GetAppLabelKey() string {
return ""
}
func (m *Settings) GetResourceOverrides() map[string]*v1alpha1.ResourceOverride {
if m != nil {
return m.ResourceOverrides
}
return nil
}
type DexConfig struct {
Connectors []*Connector `protobuf:"bytes,1,rep,name=connectors" json:"connectors,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
@@ -153,7 +162,7 @@ func (m *DexConfig) Reset() { *m = DexConfig{} }
func (m *DexConfig) String() string { return proto.CompactTextString(m) }
func (*DexConfig) ProtoMessage() {}
func (*DexConfig) Descriptor() ([]byte, []int) {
return fileDescriptor_settings_ec03f87e34619c39, []int{2}
return fileDescriptor_settings_f96c7d59ef70e4fa, []int{2}
}
func (m *DexConfig) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -201,7 +210,7 @@ func (m *Connector) Reset() { *m = Connector{} }
func (m *Connector) String() string { return proto.CompactTextString(m) }
func (*Connector) ProtoMessage() {}
func (*Connector) Descriptor() ([]byte, []int) {
return fileDescriptor_settings_ec03f87e34619c39, []int{3}
return fileDescriptor_settings_f96c7d59ef70e4fa, []int{3}
}
func (m *Connector) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -248,6 +257,7 @@ type OIDCConfig struct {
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Issuer string `protobuf:"bytes,2,opt,name=issuer,proto3" json:"issuer,omitempty"`
ClientID string `protobuf:"bytes,3,opt,name=clientID,proto3" json:"clientID,omitempty"`
CLIClientID string `protobuf:"bytes,4,opt,name=cliClientID,proto3" json:"cliClientID,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
@@ -257,7 +267,7 @@ func (m *OIDCConfig) Reset() { *m = OIDCConfig{} }
func (m *OIDCConfig) String() string { return proto.CompactTextString(m) }
func (*OIDCConfig) ProtoMessage() {}
func (*OIDCConfig) Descriptor() ([]byte, []int) {
return fileDescriptor_settings_ec03f87e34619c39, []int{4}
return fileDescriptor_settings_f96c7d59ef70e4fa, []int{4}
}
func (m *OIDCConfig) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -307,9 +317,17 @@ func (m *OIDCConfig) GetClientID() string {
return ""
}
func (m *OIDCConfig) GetCLIClientID() string {
if m != nil {
return m.CLIClientID
}
return ""
}
func init() {
proto.RegisterType((*SettingsQuery)(nil), "cluster.SettingsQuery")
proto.RegisterType((*Settings)(nil), "cluster.Settings")
proto.RegisterMapType((map[string]*v1alpha1.ResourceOverride)(nil), "cluster.Settings.ResourceOverridesEntry")
proto.RegisterType((*DexConfig)(nil), "cluster.DexConfig")
proto.RegisterType((*Connector)(nil), "cluster.Connector")
proto.RegisterType((*OIDCConfig)(nil), "cluster.OIDCConfig")
@@ -457,6 +475,34 @@ func (m *Settings) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintSettings(dAtA, i, uint64(len(m.AppLabelKey)))
i += copy(dAtA[i:], m.AppLabelKey)
}
if len(m.ResourceOverrides) > 0 {
for k, _ := range m.ResourceOverrides {
dAtA[i] = 0x2a
i++
v := m.ResourceOverrides[k]
msgSize := 0
if v != nil {
msgSize = v.Size()
msgSize += 1 + sovSettings(uint64(msgSize))
}
mapSize := 1 + len(k) + sovSettings(uint64(len(k))) + msgSize
i = encodeVarintSettings(dAtA, i, uint64(mapSize))
dAtA[i] = 0xa
i++
i = encodeVarintSettings(dAtA, i, uint64(len(k)))
i += copy(dAtA[i:], k)
if v != nil {
dAtA[i] = 0x12
i++
i = encodeVarintSettings(dAtA, i, uint64(v.Size()))
n3, err := v.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n3
}
}
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
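
A protobuf tag byte is (field_number << 3) | wire_type, which is where the constants in the generated marshaler above come from: the map field (number 5, length-delimited) encodes as 0x2a, the map key (field 1) as 0xa, and the map value (field 2) as 0x12. A quick check:

package main

import "fmt"

// tag computes a protobuf tag byte: (field number << 3) | wire type.
func tag(field, wireType uint) uint { return field<<3 | wireType }

func main() {
	fmt.Printf("%#x %#x %#x\n", tag(5, 2), tag(1, 2), tag(2, 2)) // 0x2a 0xa 0x12
}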
@@ -562,6 +608,12 @@ func (m *OIDCConfig) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintSettings(dAtA, i, uint64(len(m.ClientID)))
i += copy(dAtA[i:], m.ClientID)
}
if len(m.CLIClientID) > 0 {
dAtA[i] = 0x22
i++
i = encodeVarintSettings(dAtA, i, uint64(len(m.CLIClientID)))
i += copy(dAtA[i:], m.CLIClientID)
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
@@ -605,6 +657,19 @@ func (m *Settings) Size() (n int) {
if l > 0 {
n += 1 + l + sovSettings(uint64(l))
}
if len(m.ResourceOverrides) > 0 {
for k, v := range m.ResourceOverrides {
_ = k
_ = v
l = 0
if v != nil {
l = v.Size()
l += 1 + sovSettings(uint64(l))
}
mapEntrySize := 1 + len(k) + sovSettings(uint64(len(k))) + l
n += mapEntrySize + 1 + sovSettings(uint64(mapEntrySize))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
@@ -658,6 +723,10 @@ func (m *OIDCConfig) Size() (n int) {
if l > 0 {
n += 1 + l + sovSettings(uint64(l))
}
l = len(m.CLIClientID)
if l > 0 {
n += 1 + l + sovSettings(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
@@ -881,6 +950,129 @@ func (m *Settings) Unmarshal(dAtA []byte) error {
}
m.AppLabelKey = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 5:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ResourceOverrides", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowSettings
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthSettings
}
postIndex := iNdEx + msglen
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.ResourceOverrides == nil {
m.ResourceOverrides = make(map[string]*v1alpha1.ResourceOverride)
}
var mapkey string
var mapvalue *v1alpha1.ResourceOverride
for iNdEx < postIndex {
entryPreIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowSettings
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
if fieldNum == 1 {
var stringLenmapkey uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowSettings
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLenmapkey |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLenmapkey := int(stringLenmapkey)
if intStringLenmapkey < 0 {
return ErrInvalidLengthSettings
}
postStringIndexmapkey := iNdEx + intStringLenmapkey
if postStringIndexmapkey > l {
return io.ErrUnexpectedEOF
}
mapkey = string(dAtA[iNdEx:postStringIndexmapkey])
iNdEx = postStringIndexmapkey
} else if fieldNum == 2 {
var mapmsglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowSettings
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
mapmsglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if mapmsglen < 0 {
return ErrInvalidLengthSettings
}
postmsgIndex := iNdEx + mapmsglen
if mapmsglen < 0 {
return ErrInvalidLengthSettings
}
if postmsgIndex > l {
return io.ErrUnexpectedEOF
}
mapvalue = &v1alpha1.ResourceOverride{}
if err := mapvalue.Unmarshal(dAtA[iNdEx:postmsgIndex]); err != nil {
return err
}
iNdEx = postmsgIndex
} else {
iNdEx = entryPreIndex
skippy, err := skipSettings(dAtA[iNdEx:])
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthSettings
}
if (iNdEx + skippy) > postIndex {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
m.ResourceOverrides[mapkey] = mapvalue
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipSettings(dAtA[iNdEx:])
@@ -1210,6 +1402,35 @@ func (m *OIDCConfig) Unmarshal(dAtA []byte) error {
}
m.ClientID = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 4:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field CLIClientID", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowSettings
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthSettings
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.CLIClientID = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipSettings(dAtA[iNdEx:])
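
Each shift loop in the generated Unmarshal code above decodes a base-128 varint: seven payload bits per byte, least-significant group first, with the high bit set on every byte except the last. A minimal standalone decoder for reference:

package main

import "fmt"

// decodeVarint reads a protobuf base-128 varint from b, mirroring the
// shift loops in the generated code above.
func decodeVarint(b []byte) (v uint64, n int) {
	for shift := uint(0); shift < 64; shift += 7 {
		if n >= len(b) {
			return 0, 0 // truncated input
		}
		c := b[n]
		n++
		v |= uint64(c&0x7f) << shift
		if c < 0x80 {
			return v, n
		}
	}
	return 0, 0 // overflow
}

func main() {
	v, n := decodeVarint([]byte{0xac, 0x02})
	fmt.Println(v, n) // 300 2
}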
@@ -1338,35 +1559,45 @@ var (
)
func init() {
proto.RegisterFile("server/settings/settings.proto", fileDescriptor_settings_ec03f87e34619c39)
proto.RegisterFile("server/settings/settings.proto", fileDescriptor_settings_f96c7d59ef70e4fa)
}
var fileDescriptor_settings_ec03f87e34619c39 = []byte{
// 415 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x52, 0x41, 0x6b, 0xd4, 0x40,
0x18, 0x65, 0x9a, 0xd2, 0x6e, 0xbe, 0xaa, 0xd5, 0x51, 0x4a, 0x5c, 0x64, 0x77, 0xc9, 0x69, 0x41,
0x4c, 0x74, 0x7b, 0xf2, 0x24, 0x24, 0x0b, 0x52, 0x2d, 0x88, 0x53, 0xbc, 0x08, 0x1e, 0x92, 0xd9,
0xcf, 0x38, 0x92, 0xce, 0x84, 0xc9, 0x64, 0x71, 0xaf, 0xfe, 0x05, 0xff, 0x8f, 0x67, 0x8f, 0x82,
0xf7, 0x45, 0x82, 0x3f, 0x44, 0x32, 0x9b, 0xa4, 0xa9, 0xf6, 0xf6, 0xe6, 0xbd, 0x79, 0xc9, 0x9b,
0xef, 0x7b, 0x30, 0x29, 0x51, 0xaf, 0x51, 0x87, 0x25, 0x1a, 0x23, 0x64, 0x56, 0xf6, 0x20, 0x28,
0xb4, 0x32, 0x8a, 0x1e, 0xf2, 0xbc, 0x2a, 0x0d, 0xea, 0xf1, 0x83, 0x4c, 0x65, 0xca, 0x72, 0x61,
0x83, 0x76, 0xf2, 0xf8, 0x51, 0xa6, 0x54, 0x96, 0x63, 0x98, 0x14, 0x22, 0x4c, 0xa4, 0x54, 0x26,
0x31, 0x42, 0xc9, 0xd6, 0xec, 0x1f, 0xc3, 0xed, 0x8b, 0xf6, 0x73, 0x6f, 0x2b, 0xd4, 0x1b, 0xff,
0x3b, 0x81, 0x51, 0xc7, 0xd0, 0x87, 0xe0, 0x54, 0x3a, 0xf7, 0xc8, 0x8c, 0xcc, 0xdd, 0xe8, 0xb0,
0xde, 0x4e, 0x9d, 0x77, 0xec, 0x9c, 0x35, 0x1c, 0x7d, 0x0a, 0xee, 0x0a, 0xbf, 0xc4, 0x4a, 0x7e,
0x14, 0x99, 0xb7, 0x37, 0x23, 0xf3, 0xa3, 0x05, 0x0d, 0xda, 0x24, 0xc1, 0xb2, 0x53, 0xd8, 0xd5,
0x25, 0x1a, 0x03, 0x28, 0xb1, 0xe2, 0xad, 0xc5, 0xb1, 0x96, 0xfb, 0xbd, 0xe5, 0xcd, 0xd9, 0x32,
0xde, 0x49, 0xd1, 0x9d, 0x7a, 0x3b, 0x85, 0xab, 0x33, 0x1b, 0xd8, 0xe8, 0x0c, 0x8e, 0x92, 0xa2,
0x38, 0x4f, 0x52, 0xcc, 0x5f, 0xe3, 0xc6, 0xdb, 0x6f, 0x92, 0xb1, 0x21, 0xe5, 0xbf, 0x00, 0xb7,
0xff, 0x3d, 0x5d, 0x00, 0x70, 0x25, 0x25, 0x72, 0xa3, 0x74, 0xe9, 0x91, 0x99, 0x73, 0x2d, 0x66,
0xdc, 0x49, 0x6c, 0x70, 0xcb, 0x3f, 0x05, 0xb7, 0x17, 0x28, 0x85, 0x7d, 0x99, 0x5c, 0xe2, 0x6e,
0x04, 0xcc, 0xe2, 0x86, 0x33, 0x9b, 0x02, 0xed, 0xab, 0x5d, 0x66, 0xb1, 0x9f, 0xc2, 0x20, 0xf1,
0x8d, 0xae, 0x13, 0x38, 0x10, 0x65, 0x59, 0xa1, 0x6e, 0x7d, 0xed, 0x89, 0xce, 0x61, 0xc4, 0x73,
0x81, 0xd2, 0x9c, 0x2d, 0xed, 0x50, 0xdc, 0xe8, 0x56, 0xbd, 0x9d, 0x8e, 0xe2, 0x96, 0x63, 0xbd,
0xba, 0xf8, 0x00, 0xc7, 0xdd, 0x66, 0x2e, 0x50, 0xaf, 0x05, 0x47, 0xfa, 0x0a, 0x9c, 0x97, 0x68,
0xe8, 0x49, 0xff, 0xa4, 0x6b, 0xcb, 0x1c, 0xdf, 0xfb, 0x8f, 0xf7, 0xbd, 0xaf, 0xbf, 0xfe, 0x7c,
0xdb, 0xa3, 0xf4, 0xae, 0x2d, 0xc4, 0xfa, 0x59, 0xdf, 0xa6, 0xe8, 0xf9, 0x8f, 0x7a, 0x42, 0x7e,
0xd6, 0x13, 0xf2, 0xbb, 0x9e, 0x90, 0xf7, 0x8f, 0x33, 0x61, 0x3e, 0x55, 0x69, 0xc0, 0xd5, 0x65,
0x98, 0x68, 0xdb, 0xab, 0xcf, 0x16, 0x3c, 0xe1, 0xab, 0xf0, 0x9f, 0x46, 0xa6, 0x07, 0xb6, 0x4c,
0xa7, 0x7f, 0x03, 0x00, 0x00, 0xff, 0xff, 0xcf, 0x0f, 0xf6, 0xb0, 0xab, 0x02, 0x00, 0x00,
var fileDescriptor_settings_f96c7d59ef70e4fa = []byte{
// 566 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x53, 0x5d, 0x6b, 0x13, 0x41,
0x14, 0x65, 0xbb, 0xfd, 0xca, 0x8d, 0xda, 0x76, 0x94, 0x12, 0x83, 0x24, 0x21, 0x4f, 0x01, 0x71,
0xd7, 0xa4, 0x2f, 0xea, 0x8b, 0x90, 0x54, 0x24, 0xb6, 0x50, 0x9c, 0xa2, 0x0f, 0x82, 0xc8, 0x74,
0x72, 0xdd, 0x8e, 0xd9, 0xee, 0x2c, 0xb3, 0xb3, 0x8b, 0x79, 0xf5, 0x1f, 0x88, 0xf8, 0x27, 0xfc,
0x25, 0x3e, 0x0a, 0xbe, 0x07, 0x59, 0xfc, 0x21, 0xb2, 0xb3, 0x1f, 0x59, 0x9a, 0xe2, 0xdb, 0xd9,
0x73, 0xee, 0x99, 0x9d, 0x7b, 0xcf, 0x5c, 0xe8, 0x44, 0xa8, 0x12, 0x54, 0x6e, 0x84, 0x5a, 0x8b,
0xc0, 0x8b, 0x2a, 0xe0, 0x84, 0x4a, 0x6a, 0x49, 0x76, 0xb8, 0x1f, 0x47, 0x1a, 0x55, 0xfb, 0x9e,
0x27, 0x3d, 0x69, 0x38, 0x37, 0x43, 0xb9, 0xdc, 0x7e, 0xe0, 0x49, 0xe9, 0xf9, 0xe8, 0xb2, 0x50,
0xb8, 0x2c, 0x08, 0xa4, 0x66, 0x5a, 0xc8, 0xa0, 0x30, 0xb7, 0xa7, 0x9e, 0xd0, 0x97, 0xf1, 0x85,
0xc3, 0xe5, 0x95, 0xcb, 0x94, 0xb1, 0x7f, 0x32, 0xe0, 0x11, 0x9f, 0xb9, 0xe1, 0xdc, 0xcb, 0x6c,
0x91, 0xcb, 0xc2, 0xd0, 0x17, 0xdc, 0x18, 0xdd, 0x64, 0xc8, 0xfc, 0xf0, 0x92, 0x0d, 0x5d, 0x0f,
0x03, 0x54, 0x4c, 0xe3, 0x2c, 0x3f, 0xaa, 0xbf, 0x07, 0xb7, 0xcf, 0x8b, 0x9b, 0xbd, 0x8e, 0x51,
0x2d, 0xfa, 0x3f, 0x6c, 0xd8, 0x2d, 0x19, 0x72, 0x1f, 0xec, 0x58, 0xf9, 0x2d, 0xab, 0x67, 0x0d,
0x1a, 0xe3, 0x9d, 0x74, 0xd9, 0xb5, 0xdf, 0xd0, 0x53, 0x9a, 0x71, 0xe4, 0x31, 0x34, 0x66, 0xf8,
0x79, 0x22, 0x83, 0x8f, 0xc2, 0x6b, 0x6d, 0xf4, 0xac, 0x41, 0x73, 0x44, 0x9c, 0xa2, 0x29, 0xe7,
0xb8, 0x54, 0xe8, 0xaa, 0x88, 0x4c, 0x00, 0xa4, 0x98, 0xf1, 0xc2, 0x62, 0x1b, 0xcb, 0xdd, 0xca,
0x72, 0x36, 0x3d, 0x9e, 0xe4, 0xd2, 0xf8, 0x4e, 0xba, 0xec, 0xc2, 0xea, 0x9b, 0xd6, 0x6c, 0xa4,
0x07, 0x4d, 0x16, 0x86, 0xa7, 0xec, 0x02, 0xfd, 0x13, 0x5c, 0xb4, 0x36, 0xb3, 0x9b, 0xd1, 0x3a,
0x45, 0xde, 0xc2, 0x81, 0xc2, 0x48, 0xc6, 0x8a, 0xe3, 0x59, 0x82, 0x4a, 0x89, 0x19, 0x46, 0xad,
0xad, 0x9e, 0x3d, 0x68, 0x8e, 0x06, 0xd5, 0xdf, 0xca, 0x0e, 0x1d, 0x7a, 0xbd, 0xf4, 0x45, 0xa0,
0xd5, 0x82, 0xae, 0x1f, 0xd1, 0xfe, 0x6a, 0xc1, 0xe1, 0xcd, 0xd5, 0x64, 0x1f, 0xec, 0x39, 0x2e,
0xf2, 0x31, 0xd1, 0x0c, 0x12, 0x06, 0x5b, 0x09, 0xf3, 0x63, 0x2c, 0x26, 0x73, 0xe2, 0xac, 0x12,
0x73, 0xca, 0xc4, 0x0c, 0xf8, 0xc0, 0x67, 0x4e, 0x38, 0xf7, 0x9c, 0x2c, 0x31, 0xa7, 0x96, 0x98,
0x53, 0x26, 0xb6, 0x76, 0x43, 0x9a, 0x9f, 0xfc, 0x6c, 0xe3, 0x89, 0xd5, 0x7f, 0x0e, 0x8d, 0x6a,
0xd4, 0x64, 0x04, 0xc0, 0x65, 0x10, 0x20, 0xd7, 0x52, 0x45, 0x2d, 0xcb, 0x74, 0xbc, 0x8a, 0x64,
0x52, 0x4a, 0xb4, 0x56, 0xd5, 0x3f, 0x82, 0x46, 0x25, 0x10, 0x02, 0x9b, 0x01, 0xbb, 0xc2, 0xa2,
0x0f, 0x83, 0x33, 0x4e, 0x2f, 0xc2, 0xbc, 0x8f, 0x06, 0x35, 0xb8, 0xff, 0xdd, 0x82, 0x5a, 0x3c,
0x37, 0xda, 0x0e, 0x61, 0x5b, 0x44, 0x51, 0x8c, 0xaa, 0x30, 0x16, 0x5f, 0x64, 0x00, 0xbb, 0xdc,
0x17, 0x18, 0xe8, 0xe9, 0xb1, 0x79, 0x01, 0x8d, 0xf1, 0xad, 0x74, 0xd9, 0xdd, 0x9d, 0x14, 0x1c,
0xad, 0x54, 0x32, 0x84, 0x26, 0xf7, 0x45, 0x29, 0xe4, 0x41, 0x8f, 0xf7, 0xd2, 0x65, 0xb7, 0x39,
0x39, 0x9d, 0x56, 0xf5, 0xf5, 0x9a, 0xd1, 0x7b, 0xd8, 0x2b, 0x73, 0x3d, 0x47, 0x95, 0x08, 0x8e,
0xe4, 0x15, 0xd8, 0x2f, 0x51, 0x93, 0xc3, 0xb5, 0xe0, 0xcd, 0x63, 0x6f, 0x1f, 0xac, 0xf1, 0xfd,
0xd6, 0x97, 0xdf, 0x7f, 0xbf, 0x6d, 0x10, 0xb2, 0x6f, 0x76, 0x2f, 0x19, 0x56, 0x8b, 0x3b, 0x7e,
0xfa, 0x33, 0xed, 0x58, 0xbf, 0xd2, 0x8e, 0xf5, 0x27, 0xed, 0x58, 0xef, 0x1e, 0xfe, 0x6f, 0x07,
0xaf, 0x2d, 0xff, 0xc5, 0xb6, 0x59, 0xb6, 0xa3, 0x7f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x62, 0x4c,
0x68, 0x20, 0x16, 0x04, 0x00, 0x00,
}

View File

@@ -8,6 +8,7 @@ package cluster;
import "gogoproto/gogo.proto";
import "google/api/annotations.proto";
import "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1/generated.proto";
// SettingsQuery is a query for Argo CD settings
message SettingsQuery {
@@ -18,6 +19,7 @@ message Settings {
DexConfig dexConfig = 2;
OIDCConfig oidcConfig = 3 [(gogoproto.customname) = "OIDCConfig"];
string appLabelKey = 4;
map<string, github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1.ResourceOverride> resourceOverrides = 5;
}
message DexConfig {
@@ -33,6 +35,7 @@ message OIDCConfig {
string name = 1;
string issuer = 2;
string clientID = 3 [(gogoproto.customname) = "ClientID"];
string cliClientID = 4 [(gogoproto.customname) = "CLIClientID"];
}
// SettingsService

test/README.md Normal file
View File

@@ -0,0 +1,17 @@
E2E tests
=============
This directory contains E2E tests and test applications. The tests assume that Argo CD services are installed into the `argocd-e2e` namespace of the cluster in the current context. One throw-away
namespace `argocd-e2e***` is created prior to test execution and is used as the target namespace for test applications.
The `test/e2e/testdata` directory contains various Argo CD applications. Before test execution the directory is copied into a `/tmp/argocd-e2e***` temp directory and used in tests as a
Git repository via a file URL: `file:///tmp/argocd-e2e***`.
Use the following steps to run the tests locally:
1. (Do it once) Create the `argocd-e2e` namespace and apply the base manifests: `kubectl create ns argocd-e2e && kustomize build test/manifests/base | kubectl apply -n argocd-e2e -f -`
1. Change the kubectl context namespace to `argocd-e2e` and start the services using `goreman start`
1. Keep the Argo CD services running and run the tests using `make test-e2e`
In CI the tests are executed by the Argo Workflow defined at `.argo-ci/ci.yaml`: the job builds the Argo CD image, deploys the Argo CD components into a throw-away Kubernetes cluster provisioned
using k3s, and runs the e2e tests against it.

View File

@@ -1,222 +1,392 @@
package e2e
import (
"context"
"fmt"
"path"
"sort"
"strconv"
"strings"
"testing"
"time"
"k8s.io/apimachinery/pkg/types"
"github.com/argoproj/argo-cd/util/diff"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/types"
// load the gcp plugin (required to authenticate against GKE clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/application"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/diff"
"github.com/argoproj/argo-cd/util/kube"
)
func TestAppManagement(t *testing.T) {
assertAppHasEvent := func(a *v1alpha1.Application, message string, reason string) {
list, err := fixture.KubeClient.CoreV1().Events(fixture.Namespace).List(metav1.ListOptions{
FieldSelector: fields.SelectorFromSet(map[string]string{
"involvedObject.name": a.Name,
"involvedObject.uid": string(a.UID),
"involvedObject.namespace": fixture.Namespace,
}).String(),
})
if err != nil {
t.Fatalf("Unable to get app events %v", err)
}
for i := range list.Items {
event := list.Items[i]
if event.Reason == reason && strings.Contains(event.Message, message) {
return
}
}
t.Errorf("Unable to find event with reason=%s; message=%s", reason, message)
}
const (
guestbookPath = "guestbook"
)
testApp := &v1alpha1.Application{
func assertAppHasEvent(t *testing.T, a *v1alpha1.Application, message string, reason string) {
list, err := fixture.KubeClientset.CoreV1().Events(fixture.ArgoCDNamespace).List(metav1.ListOptions{
FieldSelector: fields.SelectorFromSet(map[string]string{
"involvedObject.name": a.Name,
"involvedObject.uid": string(a.UID),
"involvedObject.namespace": fixture.ArgoCDNamespace,
}).String(),
})
assert.NoError(t, err)
for i := range list.Items {
event := list.Items[i]
if event.Reason == reason && strings.Contains(event.Message, message) {
return
}
}
t.Errorf("Unable to find event with reason=%s; message=%s", reason, message)
}
func getTestApp() *v1alpha1.Application {
return &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "test-app",
},
Spec: v1alpha1.ApplicationSpec{
Source: v1alpha1.ApplicationSource{
RepoURL: "https://github.com/argoproj/argo-cd.git",
Path: ".",
Ksonnet: &v1alpha1.ApplicationSourceKsonnet{
Environment: "minikube",
},
RepoURL: fixture.RepoURL(),
Path: guestbookPath,
},
Destination: v1alpha1.ApplicationDestination{
Server: fixture.Config.Host,
Namespace: fixture.Namespace,
Server: common.KubernetesInternalAPIServerAddr,
Namespace: fixture.DeploymentNamespace,
},
},
}
}
t.Run("TestAppCreation", func(t *testing.T) {
appName := "app-" + strconv.FormatInt(time.Now().Unix(), 10)
_, err := fixture.RunCli("app", "create",
"--name", appName,
"--repo", "https://github.com/argoproj/argo-cd.git",
"--env", "minikube",
"--path", ".",
"--dest-server", fixture.Config.Host,
"--dest-namespace", fixture.Namespace)
if err != nil {
t.Fatalf("Unable to create app %v", err)
}
func createAndSync(t *testing.T, appPath string) *v1alpha1.Application {
app := getTestApp()
app.Spec.Source.Path = appPath
app, err := fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(appName, metav1.GetOptions{})
if err != nil {
t.Fatalf("Unable to get app %v", err)
}
assert.Equal(t, appName, app.Name)
assert.Equal(t, "https://github.com/argoproj/argo-cd.git", app.Spec.Source.RepoURL)
assert.Equal(t, "minikube", app.Spec.Source.Ksonnet.Environment)
assert.Equal(t, ".", app.Spec.Source.Path)
assert.Equal(t, fixture.Namespace, app.Spec.Destination.Namespace)
assert.Equal(t, fixture.Config.Host, app.Spec.Destination.Server)
assertAppHasEvent(app, "create", argo.EventReasonResourceCreated)
app, err := fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Create(app)
assert.NoError(t, err)
_, err = fixture.RunCli("app", "sync", app.Name)
assert.NoError(t, err)
_, err = fixture.RunCli("app", "wait", app.Name, "--sync", "--timeout", "5")
assert.NoError(t, err)
app, err = fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Get(app.Name, metav1.GetOptions{})
assert.NoError(t, err)
return app
}
func createAndSyncDefault(t *testing.T) *v1alpha1.Application {
return createAndSync(t, guestbookPath)
}
func TestAppCreation(t *testing.T) {
fixture.EnsureCleanState()
appName := "app-" + strconv.FormatInt(time.Now().Unix(), 10)
_, err := fixture.RunCli("app", "create",
"--name", appName,
"--repo", fixture.RepoURL(),
"--path", guestbookPath,
"--dest-server", common.KubernetesInternalAPIServerAddr,
"--dest-namespace", fixture.DeploymentNamespace)
assert.NoError(t, err)
var app *v1alpha1.Application
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Get(appName, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeOutOfSync, err
})
t.Run("TestAppDeletion", func(t *testing.T) {
app := fixture.CreateApp(t, testApp)
_, err := fixture.RunCli("app", "delete", app.Name)
assert.Equal(t, appName, app.Name)
assert.Equal(t, fixture.RepoURL(), app.Spec.Source.RepoURL)
assert.Equal(t, guestbookPath, app.Spec.Source.Path)
assert.Equal(t, fixture.DeploymentNamespace, app.Spec.Destination.Namespace)
assert.Equal(t, common.KubernetesInternalAPIServerAddr, app.Spec.Destination.Server)
assertAppHasEvent(t, app, "create", argo.EventReasonResourceCreated)
}
func TestAppDeletion(t *testing.T) {
fixture.EnsureCleanState()
app, err := fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Create(getTestApp())
assert.NoError(t, err)
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Get(app.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeOutOfSync, err
})
_, err = fixture.RunCli("app", "delete", app.Name)
assert.NoError(t, err)
WaitUntil(t, func() (bool, error) {
_, err := fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Get(app.Name, metav1.GetOptions{})
if err != nil {
t.Fatalf("Unable to delete app %v", err)
}
WaitUntil(t, func() (bool, error) {
_, err := fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.Name, metav1.GetOptions{})
if err != nil {
if errors.IsNotFound(err) {
return true, nil
}
return false, err
if errors.IsNotFound(err) {
return true, nil
}
return false, nil
})
assertAppHasEvent(app, "delete", argo.EventReasonResourceDeleted)
return false, err
}
return false, nil
})
t.Run("TestTrackAppStateAndSyncApp", func(t *testing.T) {
app := fixture.CreateApp(t, testApp)
assertAppHasEvent(t, app, "delete", argo.EventReasonResourceDeleted)
}
// sync app and make sure it reaches InSync state
_, err := fixture.RunCli("app", "sync", app.Name)
if err != nil {
t.Fatalf("Unable to sync app %v", err)
}
assertAppHasEvent(app, "sync", argo.EventReasonResourceUpdated)
func TestTrackAppStateAndSyncApp(t *testing.T) {
fixture.EnsureCleanState()
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeSynced, err
})
assert.Equal(t, v1alpha1.SyncStatusCodeSynced, app.Status.Sync.Status)
assert.True(t, app.Status.OperationState.SyncResult != nil)
assert.True(t, app.Status.OperationState.Phase == v1alpha1.OperationSucceeded)
app := createAndSyncDefault(t)
assertAppHasEvent(t, app, "sync", argo.EventReasonResourceUpdated)
assert.Equal(t, v1alpha1.SyncStatusCodeSynced, app.Status.Sync.Status)
assert.True(t, app.Status.OperationState.SyncResult != nil)
assert.True(t, app.Status.OperationState.Phase == v1alpha1.OperationSucceeded)
}
func TestAppRollbackSuccessful(t *testing.T) {
fixture.EnsureCleanState()
// create app and ensure its comparison status is not SyncStatusCodeUnknown
app, err := fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Create(getTestApp())
assert.NoError(t, err)
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Revision != "", nil
})
t.Run("TestAppRollbackSuccessful", func(t *testing.T) {
// create app and ensure its comparison status is not SyncStatusCodeUnknown
app := fixture.CreateApp(t, testApp)
appWithHistory := app.DeepCopy()
appWithHistory.Status.History = []v1alpha1.RevisionHistory{{
ID: 1,
Revision: app.Status.Sync.Revision,
Source: app.Spec.Source,
}, {
ID: 2,
Revision: "cdb",
Source: app.Spec.Source,
}}
patch, _, err := diff.CreateTwoWayMergePatch(app, appWithHistory, &v1alpha1.Application{})
assert.NoError(t, err)
appWithHistory := app.DeepCopy()
appWithHistory.Status.History = []v1alpha1.RevisionHistory{{
ID: 1,
Revision: "abc",
Source: app.Spec.Source,
}, {
ID: 2,
Revision: "cdb",
Source: app.Spec.Source,
}}
patch, _, err := diff.CreateTwoWayMergePatch(app, appWithHistory, &v1alpha1.Application{})
assert.Nil(t, err)
app, err = fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Patch(app.Name, types.MergePatchType, patch)
assert.NoError(t, err)
app, err = fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Patch(app.Name, types.MergePatchType, patch)
assert.Nil(t, err)
// sync app and make sure it reaches InSync state
_, err = fixture.RunCli("app", "rollback", app.Name, "1")
assert.NoError(t, err)
// sync app and make sure it reaches InSync state
_, err = fixture.RunCli("app", "rollback", app.Name, "1")
if err != nil {
t.Fatalf("Unable to sync app %v", err)
}
assertAppHasEvent(t, app, "rollback", argo.EventReasonOperationStarted)
assertAppHasEvent(app, "rollback", argo.EventReasonOperationStarted)
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeSynced, err
})
assert.Equal(t, v1alpha1.SyncStatusCodeSynced, app.Status.Sync.Status)
assert.True(t, app.Status.OperationState.SyncResult != nil)
assert.Equal(t, 2, len(app.Status.OperationState.SyncResult.Resources))
assert.True(t, app.Status.OperationState.Phase == v1alpha1.OperationSucceeded)
assert.Equal(t, 3, len(app.Status.History))
}
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeSynced, err
})
assert.Equal(t, v1alpha1.SyncStatusCodeSynced, app.Status.Sync.Status)
assert.True(t, app.Status.OperationState.SyncResult != nil)
assert.Equal(t, 2, len(app.Status.OperationState.SyncResult.Resources))
assert.True(t, app.Status.OperationState.Phase == v1alpha1.OperationSucceeded)
assert.Equal(t, 3, len(app.Status.History))
func TestComparisonFailsIfClusterNotAdded(t *testing.T) {
fixture.EnsureCleanState()
invalidApp := getTestApp()
invalidApp.Spec.Destination.Server = "https://not-registered-cluster/api"
app, err := fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Create(invalidApp)
assert.NoError(t, err)
WaitUntil(t, func() (done bool, err error) {
app, err := fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeUnknown && len(app.Status.Conditions) > 0, err
})
t.Run("TestComparisonFailsIfClusterNotAdded", func(t *testing.T) {
invalidApp := testApp.DeepCopy()
invalidApp.Spec.Destination.Server = "https://not-registered-cluster/api"
app, err = fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
assert.NoError(t, err)
app := fixture.CreateApp(t, invalidApp)
assert.Equal(t, v1alpha1.ApplicationConditionInvalidSpecError, app.Status.Conditions[0].Type)
WaitUntil(t, func() (done bool, err error) {
app, err := fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeUnknown && len(app.Status.Conditions) > 0, err
})
_, err = fixture.RunCli("app", "delete", app.Name, "--cascade=false")
assert.NoError(t, err)
}
app, err := fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
if err != nil {
t.Fatalf("Unable to get app %v", err)
}
func TestArgoCDWaitEnsureAppIsNotCrashing(t *testing.T) {
fixture.EnsureCleanState()
assert.Equal(t, v1alpha1.ApplicationConditionInvalidSpecError, app.Status.Conditions[0].Type)
app := createAndSyncDefault(t)
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeSynced && app.Status.Health.Status == v1alpha1.HealthStatusHealthy, err
})
t.Run("TestArgoCDWaitEnsureAppIsNotCrashing", func(t *testing.T) {
updatedApp := testApp.DeepCopy()
_, err := fixture.RunCli("app", "set", app.Name, "--path", "crashing-guestbook")
assert.NoError(t, err)
// deploy app and make sure it is healthy
app := fixture.CreateApp(t, updatedApp)
_, err := fixture.RunCli("app", "sync", app.Name)
if err != nil {
t.Fatalf("Unable to sync app %v", err)
}
_, err = fixture.RunCli("app", "sync", app.Name)
assert.NoError(t, err)
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeSynced && app.Status.Health.Status == v1alpha1.HealthStatusHealthy, err
})
// deploy an app which fails and make sure it becomes unhealthy
app.Spec.Source.Ksonnet.Parameters = append(
app.Spec.Source.Ksonnet.Parameters,
v1alpha1.KsonnetParameter{Name: "command", Value: "wrong-command", Component: "guestbook-ui"})
_, err = fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Update(app)
if err != nil {
t.Fatalf("Unable to set app parameter %v", err)
}
_, err = fixture.RunCli("app", "sync", app.Name)
if err != nil {
t.Fatalf("Unable to sync app %v", err)
}
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClient.ArgoprojV1alpha1().Applications(fixture.Namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeSynced && app.Status.Health.Status == v1alpha1.HealthStatusDegraded, err
})
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeSynced && app.Status.Health.Status == v1alpha1.HealthStatusDegraded, err
})
}
func TestManipulateApplicationResources(t *testing.T) {
fixture.EnsureCleanState()
app := createAndSyncDefault(t)
manifests, err := fixture.RunCli("app", "manifests", app.Name, "--source", "live")
assert.NoError(t, err)
resources, err := kube.SplitYAML(manifests)
assert.NoError(t, err)
assert.Equal(t, 2, len(resources))
index := sort.Search(len(resources), func(i int) bool {
return resources[i].GetKind() == kube.DeploymentKind
})
assert.True(t, index > -1)
deployment := resources[index]
closer, client, err := fixture.ArgoCDClientset.NewApplicationClient()
assert.NoError(t, err)
defer util.Close(closer)
_, err = client.DeleteResource(context.Background(), &application.ApplicationResourceDeleteRequest{
Name: &app.Name,
Group: deployment.GroupVersionKind().Group,
Kind: deployment.GroupVersionKind().Kind,
Version: deployment.GroupVersionKind().Version,
Namespace: deployment.GetNamespace(),
ResourceName: deployment.GetName(),
})
assert.NoError(t, err)
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeOutOfSync, err
})
}
func TestAppWithSecrets(t *testing.T) {
fixture.EnsureCleanState()
app := createAndSync(t, "secrets")
app.Spec.IgnoreDifferences = []v1alpha1.ResourceIgnoreDifferences{{
Kind: kube.SecretKind, JSONPointers: []string{"/data/username"},
}}
closer, client, err := fixture.ArgoCDClientset.NewApplicationClient()
assert.NoError(t, err)
defer util.Close(closer)
_, err = client.UpdateSpec(context.Background(), &application.ApplicationUpdateSpecRequest{
Name: &app.Name,
Spec: app.Spec,
})
assert.NoError(t, err)
diffOutput, err := fixture.RunCli("app", "diff", app.Name)
assert.NoError(t, err)
assert.Empty(t, diffOutput)
}
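
The ignore rule above uses an RFC 6901 JSON pointer: `/data/username` addresses the `username` key under `data`, which is exactly the field the subsequent diff skips. A tiny sketch with an illustrative Secret-like document:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	doc := []byte(`{"data":{"username":"YWRtaW4=","password":"c2VjcmV0"}}`)
	var m map[string]map[string]string
	if err := json.Unmarshal(doc, &m); err != nil {
		panic(err)
	}
	// The pointer /data/username resolves to this value, ignored in diffs.
	fmt.Println("ignored field:", m["data"]["username"])
}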
func TestResourceDiffing(t *testing.T) {
fixture.EnsureCleanState()
app := getTestApp()
// deploy app and make sure it is healthy
app, err := fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Create(app)
assert.NoError(t, err)
_, err = fixture.RunCli("app", "sync", app.Name)
assert.NoError(t, err)
WaitUntil(t, func() (done bool, err error) {
app, err = fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
return err == nil && app.Status.Sync.Status == v1alpha1.SyncStatusCodeSynced, err
})
// Patch deployment
_, err = fixture.KubeClientset.AppsV1().Deployments(fixture.DeploymentNamespace).Patch(
"guestbook-ui", types.JSONPatchType, []byte(`[{ "op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "test" }]`))
assert.NoError(t, err)
closer, client, err := fixture.ArgoCDClientset.NewApplicationClient()
assert.NoError(t, err)
defer util.Close(closer)
refresh := string(v1alpha1.RefreshTypeNormal)
app, err = client.Get(context.Background(), &application.ApplicationQuery{Name: &app.Name, Refresh: &refresh})
assert.NoError(t, err)
// Make sure application is out of sync due to deployment image difference
assert.Equal(t, string(v1alpha1.SyncStatusCodeOutOfSync), string(app.Status.Sync.Status))
diffOutput, _ := fixture.RunCli("app", "diff", app.Name, "--local", "testdata/guestbook")
assert.Contains(t, diffOutput, fmt.Sprintf("===== apps/Deployment %s/guestbook-ui ======", fixture.DeploymentNamespace))
// Update settings to ignore image difference
settings, err := fixture.SettingsManager.GetSettings()
assert.NoError(t, err)
settings.ResourceOverrides = map[string]v1alpha1.ResourceOverride{
"apps/Deployment": {IgnoreDifferences: ` jsonPointers: ["/spec/template/spec/containers/0/image"]`},
}
err = fixture.SettingsManager.SaveSettings(settings)
assert.NoError(t, err)
app, err = client.Get(context.Background(), &application.ApplicationQuery{Name: &app.Name, Refresh: &refresh})
assert.NoError(t, err)
// Make sure application is in synced state and CLI shows no difference
assert.Equal(t, string(v1alpha1.SyncStatusCodeSynced), string(app.Status.Sync.Status))
diffOutput, err = fixture.RunCli("app", "diff", app.Name, "--local", "testdata/guestbook")
assert.Empty(t, diffOutput)
assert.NoError(t, err)
}
func TestEdgeCasesApplicationResources(t *testing.T) {
apps := map[string]string{
"DeprecatedExtensions": "deprecated-extensions",
"CRDs": "crd-creation",
"DuplicatedResources": "duplicated-resources",
}
for name, appPath := range apps {
t.Run(fmt.Sprintf("Test%s", name), func(t *testing.T) {
fixture.EnsureCleanState()
app := createAndSync(t, appPath)
closer, client, err := fixture.ArgoCDClientset.NewApplicationClient()
assert.NoError(t, err)
defer util.Close(closer)
refresh := string(v1alpha1.RefreshTypeNormal)
app, err = client.Get(context.Background(), &application.ApplicationQuery{Name: &app.Name, Refresh: &refresh})
assert.NoError(t, err)
assert.Equal(t, string(v1alpha1.SyncStatusCodeSynced), string(app.Status.Sync.Status))
diffOutput, err := fixture.RunCli("app", "diff", app.Name, "--local", path.Join("testdata", appPath))
assert.Empty(t, diffOutput)
assert.NoError(t, err)
})
}
}

View File

@@ -2,342 +2,225 @@ package e2e
import (
"context"
"crypto/tls"
"encoding/json"
"fmt"
"log"
"net"
"io/ioutil"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"testing"
"time"
v1 "k8s.io/api/core/v1"
apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/server/application"
"github.com/argoproj/argo-cd/server/session"
"github.com/argoproj/argo-cd/util"
grpc_util "github.com/argoproj/argo-cd/util/grpc"
"github.com/argoproj/argo-cd/util/settings"
argoexec "github.com/argoproj/pkg/exec"
log "github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/selection"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/cmd/argocd/commands"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/server"
"github.com/argoproj/argo-cd/server/application"
"github.com/argoproj/argo-cd/server/cluster"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/assets"
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/rbac"
"github.com/argoproj/argo-cd/util/settings"
)
const (
TestTimeout = time.Minute * 3
defaultApiServer = "localhost:8080"
adminPassword = "password"
testingLabel = "e2e.argoproj.io"
)
// Fixture represents the e2e test fixture.
type Fixture struct {
Config *rest.Config
KubeClient kubernetes.Interface
ExtensionsClient apiextensionsclient.Interface
AppClient appclientset.Interface
DB db.ArgoDB
Namespace string
RepoServerAddress string
ApiServerAddress string
ControllerServerAddress string
Enforcer *rbac.Enforcer
SettingsMgr *settings.SettingsManager
KubeClientset kubernetes.Interface
AppClientset appclientset.Interface
ArgoCDNamespace string
DeploymentNamespace string
ArgoCDClientset argocdclient.Client
SettingsManager *settings.SettingsManager
tearDownCallback func()
repoDirectory string
apiServerAddress string
token string
plainText bool
}
func createNamespace(kubeClient *kubernetes.Clientset) (string, error) {
ns := &v1.Namespace{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "argo-e2e-test-",
},
}
cns, err := kubeClient.CoreV1().Namespaces().Create(ns)
if err != nil {
return "", err
}
return cns.Name, nil
}
func (f *Fixture) setup() error {
_, err := exec.Command("kubectl", "apply", "-f", "../../manifests/crds/application-crd.yaml", "-f", "../../manifests/crds/appproject-crd.yaml").Output()
if err != nil {
return err
}
cm := v1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDRBACConfigMapName,
},
Data: map[string]string{
rbac.ConfigMapPolicyDefaultKey: "role:admin",
},
}
_, err = f.KubeClient.CoreV1().ConfigMaps(f.Namespace).Create(&cm)
if err != nil {
return err
}
err = f.SettingsMgr.SaveSettings(&settings.ArgoCDSettings{})
if err != nil {
return err
}
err = f.ensureClusterRegistered()
if err != nil {
return err
}
repoServerPort, err := test.GetFreePort()
if err != nil {
return err
}
apiServerPort, err := test.GetFreePort()
if err != nil {
return err
}
controllerServerPort, err := test.GetFreePort()
if err != nil {
return err
}
repoSrv, err := reposerver.NewServer(&FakeGitClientFactory{}, cache.NewCache(cache.NewInMemoryCache(1*time.Hour)), func(config *tls.Config) {}, 0)
if err != nil {
return err
}
repoServerGRPC := repoSrv.CreateGRPC()
f.RepoServerAddress = fmt.Sprintf("127.0.0.1:%d", repoServerPort)
f.ApiServerAddress = fmt.Sprintf("127.0.0.1:%d", apiServerPort)
f.ControllerServerAddress = fmt.Sprintf("127.0.0.1:%d", controllerServerPort)
ctx, cancel := context.WithCancel(context.Background())
apiServer := server.NewServer(ctx, server.ArgoCDServerOpts{
Namespace: f.Namespace,
AppClientset: f.AppClient,
DisableAuth: true,
Insecure: true,
KubeClientset: f.KubeClient,
RepoClientset: reposerver.NewRepoServerClientset(f.RepoServerAddress),
Cache: cache.NewCache(cache.NewInMemoryCache(1 * time.Hour)),
})
go func() {
apiServer.Run(ctx, apiServerPort)
}()
err = waitUntilE(func() (done bool, err error) {
clientset, err := f.NewApiClientset()
if err != nil {
return false, nil
}
conn, appClient, err := clientset.NewApplicationClient()
if err != nil {
return false, nil
}
defer util.Close(conn)
_, err = appClient.List(context.Background(), &application.ApplicationQuery{})
return err == nil, nil
})
if err != nil {
cancel()
return err
}
ctrl, err := f.createController()
if err != nil {
cancel()
return err
}
ctrlCtx, cancelCtrl := context.WithCancel(context.Background())
go ctrl.Run(ctrlCtx, 1, 1)
go func() {
var listener net.Listener
listener, err = net.Listen("tcp", f.RepoServerAddress)
if err == nil {
err = repoServerGRPC.Serve(listener)
}
}()
f.tearDownCallback = func() {
cancel()
cancelCtrl()
repoServerGRPC.Stop()
}
return err
}
func (f *Fixture) ensureClusterRegistered() error {
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
loadingRules.DefaultClientConfig = &clientcmd.DefaultClientConfig
overrides := clientcmd.ConfigOverrides{}
clientConfig := clientcmd.NewInteractiveDeferredLoadingClientConfig(loadingRules, &overrides, os.Stdin)
conf, err := clientConfig.ClientConfig()
if err != nil {
return err
}
// Install RBAC resources for managing the cluster
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
managerBearerToken, err := common.InstallClusterManagerRBAC(clientset)
errors.CheckError(err)
clst := commands.NewCluster(f.Config.Host, conf, managerBearerToken, nil)
clstCreateReq := cluster.ClusterCreateRequest{Cluster: clst}
_, err = cluster.NewServer(f.DB, f.Enforcer, cache.NewCache(cache.NewInMemoryCache(1*time.Minute))).Create(context.Background(), &clstCreateReq)
return err
}
// TearDown deletes fixture resources.
func (f *Fixture) TearDown() {
if f.tearDownCallback != nil {
f.tearDownCallback()
}
apps, err := f.AppClient.ArgoprojV1alpha1().Applications(f.Namespace).List(metav1.ListOptions{})
if err == nil {
for _, app := range apps.Items {
if len(app.Finalizers) > 0 {
var patch []byte
patch, err = json.Marshal(map[string]interface{}{
"metadata": map[string]interface{}{
"finalizers": make([]string, 0),
},
})
if err == nil {
_, err = f.AppClient.ArgoprojV1alpha1().Applications(app.Namespace).Patch(app.Name, types.MergePatchType, patch)
}
}
if err != nil {
break
}
}
}
if err == nil {
err = f.KubeClient.CoreV1().Namespaces().Delete(f.Namespace, &metav1.DeleteOptions{})
}
if err != nil {
println("Unable to tear down fixture")
}
}
// GetKubeConfig creates new kubernetes client config using specified config path and config overrides variables
func GetKubeConfig(configPath string, overrides clientcmd.ConfigOverrides) *rest.Config {
// getKubeConfig creates new kubernetes client config using specified config path and config overrides variables
func getKubeConfig(configPath string, overrides clientcmd.ConfigOverrides) *rest.Config {
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
loadingRules.ExplicitPath = configPath
clientConfig := clientcmd.NewInteractiveDeferredLoadingClientConfig(loadingRules, &overrides, os.Stdin)
var err error
restConfig, err := clientConfig.ClientConfig()
if err != nil {
log.Fatal(err)
}
errors.CheckError(err)
return restConfig
}
// NewFixture creates the e2e test fixture: ensures that the Application CRD is installed, creates a temporary namespace, starts the repo and API server,
// and configures the currently available cluster.
func NewFixture() (*Fixture, error) {
config := GetKubeConfig("", clientcmd.ConfigOverrides{})
extensionsClient := apiextensionsclient.NewForConfigOrDie(config)
config := getKubeConfig("", clientcmd.ConfigOverrides{})
appClient := appclientset.NewForConfigOrDie(config)
kubeClient := kubernetes.NewForConfigOrDie(config)
namespace, err := createNamespace(kubeClient)
if err != nil {
return nil, err
apiServerAddress := os.Getenv(argocdclient.EnvArgoCDServer)
if apiServerAddress == "" {
apiServerAddress = defaultApiServer
}
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClient, namespace)
db := db.NewDB(namespace, settingsMgr, kubeClient)
enforcer := rbac.NewEnforcer(kubeClient, namespace, common.ArgoCDRBACConfigMapName, nil)
err = enforcer.SetBuiltinPolicy(assets.BuiltinPolicyCSV)
if err != nil {
return nil, err
}
enforcer.SetDefaultRole("role:admin")
log.Warnf("Using Argo CD server %s", apiServerAddress)
tlsTestResult, err := grpc_util.TestTLS(apiServerAddress)
errors.CheckError(err)
argocdclientset, err := argocdclient.NewClient(&argocdclient.ClientOptions{Insecure: true, ServerAddr: apiServerAddress, PlainText: !tlsTestResult.TLS})
errors.CheckError(err)
closer, client, err := argocdclientset.NewSessionClient()
errors.CheckError(err)
defer util.Close(closer)
sessionResponse, err := client.Create(context.Background(), &session.SessionCreateRequest{Username: "admin", Password: adminPassword})
errors.CheckError(err)
argocdclientset, err = argocdclient.NewClient(&argocdclient.ClientOptions{
Insecure: true,
ServerAddr: apiServerAddress,
AuthToken: sessionResponse.Token,
PlainText: !tlsTestResult.TLS,
})
errors.CheckError(err)
testRepo, err := ioutil.TempDir("/tmp", "argocd-e2e")
errors.CheckError(err)
testRepo = path.Base(testRepo)
errors.CheckError(err)
_, err = argoexec.RunCommand(
"sh", "-c",
fmt.Sprintf("cp -r testdata/* /tmp/%s && chmod 777 /tmp/%s && cd /tmp/%s && git init && git add . && git commit -m 'initial commit'", testRepo, testRepo, testRepo))
errors.CheckError(err)
fixture := &Fixture{
Config: config,
ExtensionsClient: extensionsClient,
AppClient: appClient,
DB: db,
KubeClient: kubeClient,
Namespace: namespace,
Enforcer: enforcer,
SettingsMgr: settingsMgr,
}
err = fixture.setup()
if err != nil {
return nil, err
AppClientset: appClient,
KubeClientset: kubeClient,
ArgoCDClientset: argocdclientset,
ArgoCDNamespace: "argocd-e2e",
SettingsManager: settings.NewSettingsManager(context.Background(), kubeClient, "argocd-e2e"),
apiServerAddress: apiServerAddress,
token: sessionResponse.Token,
repoDirectory: testRepo,
plainText: !tlsTestResult.TLS,
}
fixture.DeploymentNamespace = fixture.createDeploymentNamespace()
return fixture, nil
}
// CreateApp creates application
func (f *Fixture) CreateApp(t *testing.T, application *v1alpha1.Application) *v1alpha1.Application {
application = application.DeepCopy()
application.Name = fmt.Sprintf("e2e-test-%v", time.Now().Unix())
labels := application.ObjectMeta.Labels
if labels == nil {
labels = make(map[string]string)
application.ObjectMeta.Labels = labels
}
application.Spec.Source.Ksonnet.Parameters = append(
application.Spec.Source.Ksonnet.Parameters,
v1alpha1.KsonnetParameter{Name: "name", Value: application.Name, Component: "guestbook-ui"})
app, err := f.AppClient.ArgoprojV1alpha1().Applications(f.Namespace).Create(application)
	if err != nil {
		t.Fatalf("Unable to create app %v", err)
	}
	return app
}
func (f *Fixture) RepoURL() string {
return fmt.Sprintf("file:///tmp/%s", f.repoDirectory)
}
// createController creates a new controller instance
func (f *Fixture) createController() (*controller.ApplicationController, error) {
return controller.NewApplicationController(
f.Namespace,
f.SettingsMgr,
f.KubeClient,
f.AppClient,
reposerver.NewRepoServerClientset(f.RepoServerAddress),
cache.NewCache(cache.NewInMemoryCache(1*time.Hour)),
		10*time.Second)
}
// cleanup deletes test applications and projects, the deployment namespace, and the local test repo.
func (f *Fixture) cleanup() {
f.EnsureCleanState()
f.deleteDeploymentNamespace()
f.cleanupTestRepo()
}
func (f *Fixture) NewApiClientset() (argocdclient.Client, error) {
	return argocdclient.NewClient(&argocdclient.ClientOptions{
		Insecure:   true,
		PlainText:  true,
		ServerAddr: f.ApiServerAddress,
	})
}
func (f *Fixture) cleanupTestRepo() {
err := os.RemoveAll(path.Join("/tmp", f.repoDirectory))
errors.CheckError(err)
}
func (f *Fixture) createDeploymentNamespace() string {
ns, err := f.KubeClientset.CoreV1().Namespaces().Create(&corev1.Namespace{
ObjectMeta: v1.ObjectMeta{
GenerateName: "argocd-e2e-",
Labels: map[string]string{
testingLabel: "true",
},
},
})
errors.CheckError(err)
return ns.Name
}
func (f *Fixture) EnsureCleanState() {
argoSettings, err := f.SettingsManager.GetSettings()
errors.CheckError(err)
if len(argoSettings.ResourceOverrides) > 0 {
argoSettings.ResourceOverrides = nil
errors.CheckError(f.SettingsManager.SaveSettings(argoSettings))
}
closer, client := f.ArgoCDClientset.NewApplicationClientOrDie()
defer util.Close(closer)
apps, err := client.List(context.Background(), &application.ApplicationQuery{})
errors.CheckError(err)
err = util.RunAllAsync(len(apps.Items), func(i int) error {
cascade := true
appName := apps.Items[i].Name
_, err := client.Delete(context.Background(), &application.ApplicationDeleteRequest{Name: &appName, Cascade: &cascade})
		if err != nil {
			return err
		}
return waitUntilE(func() (bool, error) {
_, err := f.AppClientset.ArgoprojV1alpha1().Applications(f.ArgoCDNamespace).Get(appName, v1.GetOptions{})
if apierrors.IsNotFound(err) {
return true, nil
}
return false, err
})
})
errors.CheckError(err)
projs, err := f.AppClientset.ArgoprojV1alpha1().AppProjects(f.ArgoCDNamespace).List(v1.ListOptions{})
errors.CheckError(err)
err = util.RunAllAsync(len(projs.Items), func(i int) error {
if projs.Items[i].Name == "default" {
return nil
}
return f.AppClientset.ArgoprojV1alpha1().AppProjects(f.ArgoCDNamespace).Delete(projs.Items[i].Name, &v1.DeleteOptions{})
})
errors.CheckError(err)
}
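EnsureCleanState polls with a waitUntilE helper that is referenced but not shown in this diff. A plausible minimal implementation, assuming the k8s.io/apimachinery/pkg/util/wait package; the one-second interval and one-minute timeout are assumptions, not values from this diff:

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitUntilE polls condition until it returns true or fails with an error.
// Interval and timeout here are illustrative assumptions.
func waitUntilE(condition wait.ConditionFunc) error {
	return wait.PollImmediate(time.Second, time.Minute, condition)
}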
func (f *Fixture) deleteDeploymentNamespace() {
labelSelector := labels.NewSelector()
req, err := labels.NewRequirement(testingLabel, selection.Equals, []string{"true"})
errors.CheckError(err)
labelSelector = labelSelector.Add(*req)
namespaces, err := f.KubeClientset.CoreV1().Namespaces().List(v1.ListOptions{LabelSelector: labelSelector.String()})
errors.CheckError(err)
for _, ns := range namespaces.Items {
if ns.DeletionTimestamp == nil {
err = f.KubeClientset.CoreV1().Namespaces().Delete(ns.Name, &v1.DeleteOptions{})
if err != nil {
log.Warnf("Failed to delete e2e deployment namespace: %s", ns.Name)
}
}
}
}
func (f *Fixture) RunCli(args ...string) (string, error) {
	if f.plainText {
		args = append(args, "--plaintext")
	}
	cmd := exec.Command("../../dist/argocd", append(args, "--server", f.apiServerAddress, "--auth-token", f.token, "--insecure")...)
log.Infof("CLI: %s", strings.Join(cmd.Args, " "))
outBytes, err := cmd.Output()
if err != nil {
exErr, ok := err.(*exec.ExitError)
@@ -348,7 +231,7 @@ func (f *Fixture) RunCli(args ...string) (string, error) {
if outBytes != nil {
errOutput = string(outBytes) + "\n" + errOutput
}
return "", fmt.Errorf(strings.TrimSpace(errOutput))
return errOutput, fmt.Errorf(strings.TrimSpace(errOutput))
}
return string(outBytes), nil
}
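Note the changed error path: RunCli now returns the captured output together with the error, so callers can assert on the CLI's error text. A short usage sketch (projectName is a placeholder; the duplicate-destination assertion mirrors the project tests below):

// Adding the same destination twice should surface the CLI's error text.
_, err := fixture.RunCli("proj", "add-destination", projectName,
	"https://192.168.99.100:8443", "test1")
assert.Error(t, err)
assert.Contains(t, err.Error(), "already defined")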
@@ -377,63 +260,3 @@ func WaitUntil(t *testing.T, condition wait.ConditionFunc) {
t.Fatalf("Failed to wait for expected condition: %v", err)
}
}
type FakeGitClientFactory struct{}
func (f *FakeGitClientFactory) NewClient(repoURL, path, username, password, sshPrivateKey string) (git.Client, error) {
return &FakeGitClient{
root: path,
}, nil
}
// FakeGitClient is a test git client implementation which always clones the local test repo.
type FakeGitClient struct {
root string
}
func (c *FakeGitClient) Init() error {
_, err := exec.Command("rm", "-rf", c.root).Output()
if err != nil {
return err
}
_, err = exec.Command("cp", "-r", "../../examples/guestbook", c.root).Output()
return err
}
func (c *FakeGitClient) Root() string {
return c.root
}
func (c *FakeGitClient) Fetch() error {
// do nothing
return nil
}
func (c *FakeGitClient) Checkout(revision string) error {
// do nothing
return nil
}
func (c *FakeGitClient) Reset() error {
// do nothing
return nil
}
func (c *FakeGitClient) LsRemote(s string) (string, error) {
return "abcdef123456890", nil
}
func (c *FakeGitClient) LsFiles(s string) ([]string, error) {
matches, err := filepath.Glob(path.Join(c.root, s))
if err != nil {
return nil, err
}
for i := range matches {
matches[i] = strings.TrimPrefix(matches[i], c.root)
}
return matches, nil
}
func (c *FakeGitClient) CommitSHA() (string, error) {
return "abcdef123456890", nil
}


@@ -18,7 +18,7 @@ func TestMain(m *testing.M) {
os.Exit(-1)
} else {
code := m.Run()
		fixture.cleanup()
os.Exit(code)
}
}


@@ -11,276 +11,274 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/argo"
)
func assertProjHasEvent(t *testing.T, a *v1alpha1.AppProject, message string, reason string) {
	list, err := fixture.KubeClientset.CoreV1().Events(fixture.ArgoCDNamespace).List(metav1.ListOptions{
		FieldSelector: fields.SelectorFromSet(map[string]string{
			"involvedObject.name":      a.Name,
			"involvedObject.uid":       string(a.UID),
			"involvedObject.namespace": fixture.ArgoCDNamespace,
		}).String(),
	})
	assert.NoError(t, err)
	for i := range list.Items {
		event := list.Items[i]
		if event.Reason == reason && strings.Contains(event.Message, message) {
			return
		}
	}
	t.Errorf("Unable to find event with reason=%s; message=%s", reason, message)
}
func TestProjectCreation(t *testing.T) {
fixture.EnsureCleanState()
projectName := "proj-" + strconv.FormatInt(time.Now().Unix(), 10)
_, err := fixture.RunCli("proj", "create", projectName,
"--description", "Test description",
"-d", "https://192.168.99.100:8443,default",
"-d", "https://192.168.99.100:8443,service",
"-s", "https://github.com/argoproj/argo-cd.git")
assert.Nil(t, err)
proj, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Get(projectName, metav1.GetOptions{})
assert.NoError(t, err)
assert.Equal(t, projectName, proj.Name)
assert.Equal(t, 2, len(proj.Spec.Destinations))
assert.Equal(t, "https://192.168.99.100:8443", proj.Spec.Destinations[0].Server)
assert.Equal(t, "default", proj.Spec.Destinations[0].Namespace)
assert.Equal(t, "https://192.168.99.100:8443", proj.Spec.Destinations[1].Server)
assert.Equal(t, "service", proj.Spec.Destinations[1].Namespace)
assert.Equal(t, 1, len(proj.Spec.SourceRepos))
assert.Equal(t, "https://github.com/argoproj/argo-cd.git", proj.Spec.SourceRepos[0])
assertProjHasEvent(t, proj, "create", argo.EventReasonResourceCreated)
}
func TestProjectDeletion(t *testing.T) {
fixture.EnsureCleanState()
projectName := "proj-" + strconv.FormatInt(time.Now().Unix(), 10)
proj, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Create(&v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: projectName}})
assert.NoError(t, err)
_, err = fixture.RunCli("proj", "delete", projectName)
assert.NoError(t, err)
_, err = fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Get(projectName, metav1.GetOptions{})
assert.True(t, errors.IsNotFound(err))
assertProjHasEvent(t, proj, "delete", argo.EventReasonResourceDeleted)
}
func TestSetProject(t *testing.T) {
fixture.EnsureCleanState()
projectName := "proj-" + strconv.FormatInt(time.Now().Unix(), 10)
_, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Create(&v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: projectName}})
assert.NoError(t, err)
_, err = fixture.RunCli("proj", "set", projectName,
"--description", "updated description",
"-d", "https://192.168.99.100:8443,default",
"-d", "https://192.168.99.100:8443,service")
assert.NoError(t, err)
proj, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Get(projectName, metav1.GetOptions{})
assert.NoError(t, err)
assert.Equal(t, projectName, proj.Name)
assert.Equal(t, 2, len(proj.Spec.Destinations))
assert.Equal(t, "https://192.168.99.100:8443", proj.Spec.Destinations[0].Server)
assert.Equal(t, "default", proj.Spec.Destinations[0].Namespace)
assert.Equal(t, "https://192.168.99.100:8443", proj.Spec.Destinations[1].Server)
assert.Equal(t, "service", proj.Spec.Destinations[1].Namespace)
assertProjHasEvent(t, proj, "update", argo.EventReasonResourceUpdated)
}
func TestAddProjectDestination(t *testing.T) {
	fixture.EnsureCleanState()
	projectName := "proj-" + strconv.FormatInt(time.Now().Unix(), 10)
	_, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Create(&v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: projectName}})
	if err != nil {
		t.Fatalf("Unable to create project %v", err)
	}
	_, err = fixture.RunCli("proj", "add-destination", projectName,
		"https://192.168.99.100:8443",
		"test1",
	)
	if err != nil {
		t.Fatalf("Unable to add project destination %v", err)
	}
	_, err = fixture.RunCli("proj", "add-destination", projectName,
		"https://192.168.99.100:8443",
		"test1",
	)
	assert.NotNil(t, err)
	assert.True(t, strings.Contains(err.Error(), "already defined"))
	proj, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Get(projectName, metav1.GetOptions{})
	assert.NoError(t, err)
	assert.Equal(t, projectName, proj.Name)
	assert.Equal(t, 1, len(proj.Spec.Destinations))
	assert.Equal(t, "https://192.168.99.100:8443", proj.Spec.Destinations[0].Server)
	assert.Equal(t, "test1", proj.Spec.Destinations[0].Namespace)
	assertProjHasEvent(t, proj, "update", argo.EventReasonResourceUpdated)
}
func TestRemoveProjectDestination(t *testing.T) {
fixture.EnsureCleanState()
projectName := "proj-" + strconv.FormatInt(time.Now().Unix(), 10)
_, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Create(&v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: projectName},
Spec: v1alpha1.AppProjectSpec{
Destinations: []v1alpha1.ApplicationDestination{{
Server: "https://192.168.99.100:8443",
Namespace: "test",
}},
},
})
if err != nil {
t.Fatalf("Unable to create project %v", err)
}
_, err = fixture.RunCli("proj", "remove-destination", projectName,
"https://192.168.99.100:8443",
"test",
)
if err != nil {
t.Fatalf("Unable to remove project destination %v", err)
}
_, err = fixture.RunCli("proj", "remove-destination", projectName,
"https://192.168.99.100:8443",
"test1",
)
assert.NotNil(t, err)
assert.True(t, strings.Contains(err.Error(), "does not exist"))
proj, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Get(projectName, metav1.GetOptions{})
if err != nil {
t.Fatalf("Unable to get project %v", err)
}
assert.Equal(t, projectName, proj.Name)
assert.Equal(t, 0, len(proj.Spec.Destinations))
assertProjHasEvent(t, proj, "update", argo.EventReasonResourceUpdated)
}
func TestAddProjectSource(t *testing.T) {
fixture.EnsureCleanState()
projectName := "proj-" + strconv.FormatInt(time.Now().Unix(), 10)
_, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Create(&v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: projectName}})
if err != nil {
t.Fatalf("Unable to create project %v", err)
}
_, err = fixture.RunCli("proj", "add-source", projectName, "https://github.com/argoproj/argo-cd.git")
if err != nil {
t.Fatalf("Unable to add project source %v", err)
}
_, err = fixture.RunCli("proj", "add-source", projectName, "https://github.com/argoproj/argo-cd.git")
assert.Nil(t, err)
proj, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Get(projectName, metav1.GetOptions{})
assert.NoError(t, err)
assert.Equal(t, projectName, proj.Name)
assert.Equal(t, 1, len(proj.Spec.SourceRepos))
assert.Equal(t, "https://github.com/argoproj/argo-cd.git", proj.Spec.SourceRepos[0])
}
func TestRemoveProjectSource(t *testing.T) {
fixture.EnsureCleanState()
projectName := "proj-" + strconv.FormatInt(time.Now().Unix(), 10)
_, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Create(&v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: projectName},
Spec: v1alpha1.AppProjectSpec{
SourceRepos: []string{"https://github.com/argoproj/argo-cd.git"},
},
})
assert.NoError(t, err)
_, err = fixture.RunCli("proj", "remove-source", projectName, "https://github.com/argoproj/argo-cd.git")
assert.NoError(t, err)
_, err = fixture.RunCli("proj", "remove-source", projectName, "https://github.com/argoproj/argo-cd.git")
assert.Nil(t, err)
proj, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Get(projectName, metav1.GetOptions{})
assert.NoError(t, err)
assert.Equal(t, projectName, proj.Name)
assert.Equal(t, 0, len(proj.Spec.SourceRepos))
assertProjHasEvent(t, proj, "update", argo.EventReasonResourceUpdated)
}
func TestUseJWTToken(t *testing.T) {
fixture.EnsureCleanState()
projectName := "proj-" + strconv.FormatInt(time.Now().Unix(), 10)
appName := "app-" + strconv.FormatInt(time.Now().Unix(), 10)
roleName := "roleTest"
testApp := &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: appName,
},
Spec: v1alpha1.ApplicationSpec{
Source: v1alpha1.ApplicationSource{
RepoURL: fixture.RepoURL(),
Path: "guestbook",
},
Destination: v1alpha1.ApplicationDestination{
Server: common.KubernetesInternalAPIServerAddr,
Namespace: fixture.ArgoCDNamespace,
},
Project: projectName,
},
}
_, err := fixture.AppClientset.ArgoprojV1alpha1().AppProjects(fixture.ArgoCDNamespace).Create(&v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: projectName},
Spec: v1alpha1.AppProjectSpec{
Destinations: []v1alpha1.ApplicationDestination{{
Server: common.KubernetesInternalAPIServerAddr,
Namespace: fixture.ArgoCDNamespace,
}},
SourceRepos: []string{"*"},
},
})
assert.Nil(t, err)
_, err = fixture.AppClientset.ArgoprojV1alpha1().Applications(fixture.ArgoCDNamespace).Create(testApp)
assert.Nil(t, err)
_, err = fixture.RunCli("proj", "role", "create", projectName, roleName)
assert.Nil(t, err)
_, err = fixture.RunCli("proj", "role", "create-token", projectName, roleName)
assert.Nil(t, err)
_, err = fixture.RunCli("proj", "role", "add-policy", projectName, roleName, "-a", "get", "-o", "*", "-p", "allow")
assert.Nil(t, err)
}


@@ -10,42 +10,38 @@ import (
"github.com/argoproj/argo-cd/util"
)
func TestAddRemovePublicRepo(t *testing.T) {
	repoUrl := "https://github.com/argoproj/argocd-example-apps.git"
	_, err := fixture.RunCli("repo", "add", repoUrl)
	assert.NoError(t, err)
	conn, repoClient, err := fixture.ArgoCDClientset.NewRepoClient()
	assert.Nil(t, err)
	defer util.Close(conn)
	repo, err := repoClient.List(context.Background(), &repository.RepoQuery{})
	assert.Nil(t, err)
	exists := false
	for i := range repo.Items {
		if repo.Items[i].Repo == repoUrl {
			exists = true
			break
		}
	}
	assert.True(t, exists)
	_, err = fixture.RunCli("repo", "rm", repoUrl)
	assert.Nil(t, err)
	repo, err = repoClient.List(context.Background(), &repository.RepoQuery{})
	assert.Nil(t, err)
	exists = false
	for i := range repo.Items {
		if repo.Items[i].Repo == repoUrl {
			exists = true
			break
		}
	}
	assert.False(t, exists)
}


@@ -0,0 +1,12 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: guestbook-ui
spec:
replicas: 1
progressDeadlineSeconds: 3
template:
spec:
containers:
- name: guestbook-ui
command: ["fail"]


@@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../guestbook
patches:
- guestbook-deployment.yaml


@@ -5,7 +5,7 @@ metadata:
labels:
app: extensions-deployment
spec:
replicas: 0
selector:
matchLabels:
app: extensions-deployment


@@ -0,0 +1,13 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: extensions-ingress
spec:
rules:
- host: extensions-ingress
http:
paths:
- backend:
serviceName: extensions-service
servicePort: 8080
path: /


@@ -0,0 +1,33 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterdummies.argoproj.io
spec:
group: argoproj.io
version: v1alpha1
scope: Cluster
names:
kind: ClusterDummy
plural: clusterdummies
---
apiVersion: argoproj.io/v1alpha1
kind: ClusterDummy
metadata:
name: cluster-dummy-crd-instance
---
apiVersion: argoproj.io/v1alpha1
kind: ClusterDummy
metadata:
name: cluster-dummy-crd-instance
namespace: default
---
apiVersion: argoproj.io/v1alpha1
kind: ClusterDummy
metadata:
name: cluster-dummy-crd-instance
namespace: kube-system


@@ -0,0 +1,32 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: dummies.argoproj.io
spec:
group: argoproj.io
version: v1alpha1
scope: Namespaced
names:
kind: Dummy
plural: dummies
---
apiVersion: argoproj.io/v1alpha1
kind: Dummy
metadata:
name: dummy-crd-instance
namespace: default
---
apiVersion: argoproj.io/v1alpha1
kind: Dummy
metadata:
name: dummy-crd-instance
namespace: kube-system
---
apiVersion: argoproj.io/v1alpha1
kind: Dummy
metadata:
name: dummy-crd-instance


@@ -0,0 +1,20 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: guestbook-ui
spec:
replicas: 0
revisionHistoryLimit: 3
selector:
matchLabels:
app: guestbook-ui
template:
metadata:
labels:
app: guestbook-ui
spec:
containers:
- image: gcr.io/heptio-images/ks-guestbook-demo:0.2
name: guestbook-ui
ports:
- containerPort: 80


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Service
metadata:
name: guestbook-ui
spec:
ports:
- port: 80
targetPort: 80
selector:
app: guestbook-ui


@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./guestbook-ui-deployment.yaml
- ./guestbook-ui-svc.yaml
