Compare commits
321 commits (release-2. ... v2.2.0)

Commit SHAs:
6da92a8e81, d5368f5714, 25cfb27d51, 47d23e1f07, 1dc14dc172, 5c06333914, 656bee1402, 2a30c92a7e,
6a1fec9d33, 0faeeb843d, 48bdabad1a, c3fd7f5f2d, 3f75a7faa3, 0f14657301, 02768367b5, 3b628b3af8,
d8d2920eff, 1a72853ca3, 5354e7d823, 34d8f12c99, 8840688e6e, c081ef0b00, caa246a38d, ebf27dea3e,
98fcb417eb, a3517d594b, e0c9827a37, 375e27bd7a, 46fdb6f364, 2e84638827, 0a64781616, 43a29e6b2d,
b77f947e1d, dc24380065, b0278c4493, 1431e04b90, 5bb45435e0, 691b77ff9e, b52793ab07, fc13eda8d6,
ff45418948, 0d4e40ed6a, fe3cc7218c, 513d8a4ae0, 6036e9b4c9, 8a6b759c4a, c2b3e74089, 8b10b64dfa,
e5c34f472e, 49b71522c4, 7a6b2ffe56, c7792522c5, f4fd836a1b, 2770c690a5, 5b914750a4, 43cba9cc0b,
0dbffb8557, 7b0cf773cc, d3bec0faaa, 7750f579bb, 5aeb113968, 4522a88289, 5f4ecf1ef2, 7ced59f346,
a912611b7a, a21b0363e3, 07b8a1f63c, 1804d771c7, 7fe8e5ff37, b8510f35f1, aa6aed3c04, d6d620e98c,
3e17a69df3, da63fb26d1, 401b266235, f46ac844b8, 046840dcf7, d460f844a4, 970bb80b28, 81e801d8ee,
872eff292b, 00874af5f2, ad97ed52d5, 7ea35924c9, c7074fe67a, 9a11790df2, 695699d08e, 374965a606,
ddc9f56d05, 3312dc0037, 67dd012b87, 10bb1c4e1f, f3730da01e, d8f6ee2d72, b5d1433a1f, 83ff035df7,
1d4b91efa2, 43bcbfb093, f0cd21a9f3, d7d8fe3da2, c80c1d4f95, 6644cbe452, ae02bc27fc, 5faea8f339,
678d35d850, 363e1d2d49, b2c70df619, 3c874ae065, 4dd570eb11, 1bc231dd99, 8a8373faba, 384f5b2bae,
b0736219b6, 4399348ae5, df5cce866f, 07eeddf5f1, 0c757534ed, d8d7b30f58, 9d88e614cd, 4e025838c0,
d0c6280d38, e084a110e4, df7e5ffd96, 6f8b10c324, eb70a1e3fa, 35bd9e1ed6, 778be231b9, d790e96aa9,
a6039e1be9, 3d4deeb4f0, ebd87b77ed, 31b2cbee70, a2677f5537, 65d669565b, 07f4034aaa, 459833d7c3,
bbdbf81108, 1842c98374, 94372adcd4, e3df023310, 6f5f7ffeee, 97302103d9, 380037b63d, 0d68194205,
20313adbe2, a731997c59, bb88a1c975, 917d0797fa, 993cb838a9, eb2c69d45e, 65012f502b, 2f6e91ad22,
b264729bfb, 3952f66fc7, b1f979aad6, 2a579a6c05, 9e234b9dec, 4f69e93802, 3799d706df, 344353cb16,
d698e5d68a, 5f9993a75b, f613fab6f3, 183d729d40, b26e92873f, 36ab4dadda, 91fea1470f, 17e6ebdce1,
6f794d0dc9, 9cf71bef90, 25db5038c7, b6c458e8f4, 2147ed3aea, 3df6be73e7, d823efee5d, 1817b6b4d4,
7ed06cc9f6, 49378a95c4, a38f3176e2, 08d1cf0e55, 85c45ca76e, 626c963b1c, fc49eb2e46, e0c07b8027,
7122b83fc3, c64e8df84c, be884d2172, d2100a30a0, 9559613c8f, bd7f8c8eda, a90ab1a2e1, a5126648e1,
d9e8a0fc31, b559bf88d1, c0f2bf55a7, 4b92f96034, 24054820da, 22600228c2, 9025318adf, 5d9aa989f3,
9613b240fb, 8fbcc1c7fc, d2ff65ae39, dd26a48f8a, 8c19c7864d, a9f009cb10, ce1d8031ae, dd2900eaeb,
c0bcd6b255, 6897aa4ed2, 42903044f3, 700806f408, b271d6ac5c, a940cb5914, 87de4edd15, abba8dddce,
44520ea479, ae803a2f1e, 49a854a738, 81155ccb0c, 18057fa5ee, ca5003f3ba, 1d3d03f5d3, a894d4b128,
440e4dc5d9, d6973c7b8a, 34e904dd7b, c7e532941f, 613db278a7, 91c883668e, d12eafa826, 0dcac9a8d9,
0078a4d9ee, 8ceceb8b55, 2e3405134b, 5b1906d2a7, 0093bddc39, 788613cb14, 53dd0da8c3, 8f21138d54,
676a8714bf, 4ae1d876fd, efc8ec1d78, e3b7a2be13, 44ec4e0a82, 24ad4618c2, d3fb6fb7db, 95d9016ed4,
e195995790, 124995c275, 7b89c4e53c, 1ab85de378, eb76649ef5, 3887289090, d842e8333a, 311926b8c9,
63d809614c, 741329afb6, d36cf3e2bb, 53bbb8ecf4, 86ca66b2d6, 6ac9753b63, afb1c8b210, 117eadaceb,
10b4e44c42, 8cf8a8cf81, 8a29cf8fbf, 4a3dd76f19, b91732a8eb, 97e8c7d00d, e5ee00f4d0, aab0b173d5,
50993206a5, 4164db88f5, c5eb25ceba, 685b07ed34, 3091bc5b6f, 810a977846, adade448a8, c22d8321cc,
819225c709, dd50b88cd1, 8b1c364f58, d74ce38c8c, 826288f861, e86c5ea17c, f7614e0c4e, 2130c8a949,
9d6ccee305, 9c0495d9a3, ef653cac57, 9099d0ba83, 50b0010470, 6b0d449997, c607a3b294, 4fef2114fe,
8c216ed7a8, d404f0e425, c5ebc584d0, 241a6fb1ba, 93e624d590, 8f4e002ae4, 3abf8c6735, 3ff8481694,
0d23207c59, fcd28b6eb1, 7e27d1072f, 1e5c763d6f, 7fbf50493b, f170d24ed1, bcfc112d82, 66acd16bce,
bee20c2154, 17bef1c2c6, 2abf284f81, 221d0c048d, cd9dcecccd, 52090425c6, 228c465393, 4a69cce7b1,
c7738a0cae
.github/workflows/ci-build.yaml (2 changed lines, vendored)

@@ -12,7 +12,7 @@ on:
env:
# Golang version to use across CI steps
GOLANG_VERSION: '1.16.5'
GOLANG_VERSION: '1.16.11'
jobs:
build-docker:

.github/workflows/gh-pages.yaml (2 changed lines, vendored)

@@ -16,7 +16,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v1
with:
python-version: 3.x
python-version: 3.9.8
- name: build
run: |
pip install -r docs/requirements.txt

.github/workflows/image.yaml (4 changed lines, vendored)

@@ -6,7 +6,7 @@ on:
- master
env:
GOLANG_VERSION: '1.16.5'
GOLANG_VERSION: '1.16.11'
jobs:
publish:
@@ -51,7 +51,7 @@ jobs:
env:
TOKEN: ${{ secrets.TOKEN }}
- run: |
docker run -v $(pwd):/src -w /src --rm -t lyft/kustomizer:v3.3.0 kustomize edit set image quay.io/argoproj/argocd=ghcr.io/argoproj/argocd:${{ steps.image.outputs.tag }}
docker run -u $(id -u):$(id -g) -v $(pwd):/src -w /src --rm -t ghcr.io/argoproj/argocd:${{ steps.image.outputs.tag }} kustomize edit set image quay.io/argoproj/argocd=ghcr.io/argoproj/argocd:${{ steps.image.outputs.tag }}
git config --global user.email 'ci@argoproj.com'
git config --global user.name 'CI'
git diff --exit-code && echo 'Already deployed' || (git commit -am 'Upgrade argocd to ${{ steps.image.outputs.tag }}' && git push)

.github/workflows/release.yaml (2 changed lines, vendored)

@@ -12,7 +12,7 @@ on:
- '!release-v0*'
env:
GOLANG_VERSION: '1.16.5'
GOLANG_VERSION: '1.16.11'
jobs:
prepare-release:
CHANGELOG.md (11 changed lines)

@@ -1,6 +1,15 @@
# Changelog
## v2.1.0 (Unreleased)
## v2.1.1 (2021-08-25)
### Bug Fixes
- fix: password reset requirements (#7071)
- fix: Custom Styles feature is broken (#7067)
- fix(ui): Add State to props passed to Extensions (#7045)
- fix: keep uid_entrypoint.sh for backward compatibility (#7047)
## v2.1.0 (2021-08-20)
> [Upgrade instructions](./docs/operator-manual/upgrading/2.0-2.1.md)
Dockerfile (12 changed lines)

@@ -4,7 +4,7 @@ ARG BASE_IMAGE=docker.io/library/ubuntu:21.04
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.16.5 as builder
FROM docker.io/library/golang:1.16.11 as builder
RUN echo 'deb http://deb.debian.org/debian buster-backports main' >> /etc/apt/sources.list
@@ -47,7 +47,6 @@ RUN groupadd -g 999 argocd && \
mkdir -p /home/argocd && \
chown argocd:0 /home/argocd && \
chmod g=u /home/argocd && \
chmod g=u /etc/passwd && \
apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y git git-lfs python3-pip tini gpg tzdata && \
@@ -62,9 +61,9 @@ COPY --from=builder /usr/local/bin/ks /usr/local/bin/ks
COPY --from=builder /usr/local/bin/helm2 /usr/local/bin/helm2
COPY --from=builder /usr/local/bin/helm /usr/local/bin/helm
COPY --from=builder /usr/local/bin/kustomize /usr/local/bin/kustomize
# script to add current (possibly arbitrary) user to /etc/passwd at runtime
# (if it's not already there, to be openshift friendly)
COPY uid_entrypoint.sh /usr/local/bin/uid_entrypoint.sh
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
# keep uid_entrypoint.sh for backward compatibility
RUN ln -s /usr/local/bin/entrypoint.sh /usr/local/bin/uid_entrypoint.sh
# support for mounting configuration from a configmap
RUN mkdir -p /app/config/ssh && \
@@ -102,7 +101,7 @@ RUN NODE_ENV='production' NODE_ONLINE_ENV='online' yarn build
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM golang:1.16.5 as argocd-build
FROM docker.io/library/golang:1.16.11 as argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd
@@ -131,6 +130,7 @@ COPY --from=argocd-build /go/src/github.com/argoproj/argo-cd/dist/argocd* /usr/l
USER root
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-server
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-repo-server
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-cmp-server
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-application-controller
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-dex
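
The new argocd-cmp-server symlink above follows the same multi-call-binary convention as the existing ones: a single argocd binary decides which component to run based on the name it was invoked as, or on the ARGOCD_BINARY_NAME variable that the Procfile below sets for local development. The Go sketch that follows only illustrates that dispatch idea; the component functions and the exact dispatch logic in Argo CD's cmd package are stand-ins, not the real implementation.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Illustrative sketch of name-based dispatch for a multi-call binary.
// ARGOCD_BINARY_NAME is the override used in local development; otherwise the
// invoked program name (one of the Dockerfile symlinks) selects the component.
func main() {
	name := os.Getenv("ARGOCD_BINARY_NAME")
	if name == "" {
		name = filepath.Base(os.Args[0])
	}

	switch name {
	case "argocd-server":
		fmt.Println("would start the API server")
	case "argocd-repo-server":
		fmt.Println("would start the repo server")
	case "argocd-cmp-server":
		fmt.Println("would start the config management plugin sidecar")
	case "argocd-application-controller":
		fmt.Println("would start the application controller")
	case "argocd-dex":
		fmt.Println("would run the dex helper")
	default:
		fmt.Println("would run the argocd CLI")
	}
}
```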
Makefile (9 changed lines)

@@ -266,6 +266,7 @@ image:
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-server
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-application-controller
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-repo-server
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-cmp-server
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-dex
cp Dockerfile.dev dist
docker build -t $(IMAGE_PREFIX)argocd:$(IMAGE_TAG) -f dist/Dockerfile.dev dist
@@ -409,12 +410,14 @@ start-e2e-local:
if test -d /tmp/argo-e2e/app/config/gpg; then rm -rf /tmp/argo-e2e/app/config/gpg/*; fi
mkdir -p /tmp/argo-e2e/app/config/gpg/keys && chmod 0700 /tmp/argo-e2e/app/config/gpg/keys
mkdir -p /tmp/argo-e2e/app/config/gpg/source && chmod 0700 /tmp/argo-e2e/app/config/gpg/source
mkdir -p /tmp/argo-e2e/app/config/plugin && chmod 0700 /tmp/argo-e2e/app/config/plugin
# set paths for locally managed ssh known hosts and tls certs data
ARGOCD_SSH_DATA_PATH=/tmp/argo-e2e/app/config/ssh \
ARGOCD_TLS_DATA_PATH=/tmp/argo-e2e/app/config/tls \
ARGOCD_GPG_DATA_PATH=/tmp/argo-e2e/app/config/gpg/source \
ARGOCD_GNUPGHOME=/tmp/argo-e2e/app/config/gpg/keys \
ARGOCD_GPG_ENABLED=$(ARGOCD_GPG_ENABLED) \
ARGOCD_PLUGINCONFIGFILEPATH=/tmp/argo-e2e/app/config/plugin \
ARGOCD_E2E_DISABLE_AUTH=false \
ARGOCD_ZJWT_FEATURE_FLAG=always \
ARGOCD_IN_CI=$(ARGOCD_IN_CI) \
@@ -451,6 +454,12 @@ start-local: mod-vendor-local dep-ui-local
ARGOCD_E2E_TEST=false \
goreman -f $(ARGOCD_PROCFILE) start ${ARGOCD_START}
# Run goreman start with exclude option, provide exclude env variable with list of services
.PHONY: run
run:
bash ./hack/goreman-start.sh
# Runs pre-commit validation with the virtualized toolchain
.PHONY: pre-commit
pre-commit: codegen build lint test
OWNERS (3 changed lines)

@@ -9,6 +9,7 @@ approvers:
- jessesuen
- jgwest
- mayzhang2000
- rbreeze
reviewers:
- dthomson25
@@ -18,3 +19,5 @@ reviewers:
- reginapizza
- hblixt
- chetan-rns
- wanghong230
- pasha-codefresh
Procfile (6 changed lines)

@@ -1,8 +1,8 @@
controller: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-application-controller go run ./cmd/main.go --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
api-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server go run ./cmd/main.go --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} "
dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v2/cmd gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:v2.27.0 serve /dex.yaml"
redis: bash -c "if [ $ARGOCD_REDIS_LOCAL == 'true' ]; then redis-server --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; else docker run --rm --name argocd-redis -i -p ${ARGOCD_E2E_REDIS_PORT:-6379}:${ARGOCD_E2E_REDIS_PORT:-6379} redis:6.2.4-alpine --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; fi"
repo-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server go run ./cmd/main.go --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379}"
dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v2/cmd gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:v2.30.0 serve /dex.yaml"
redis: bash -c "if [ \"$ARGOCD_REDIS_LOCAL\" == 'true' ]; then redis-server --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; else docker run --rm --name argocd-redis -i -p ${ARGOCD_E2E_REDIS_PORT:-6379}:${ARGOCD_E2E_REDIS_PORT:-6379} redis:6.2.4-alpine --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; fi"
repo-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-/tmp/argo-e2e/app/config/plugin} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} go run ./cmd/main.go --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379}"
ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
git-server: test/fixture/testrepos/start-git.sh
helm-registry: test/fixture/testrepos/start-helm-registry.sh
@@ -45,6 +45,7 @@ Participation in the Argo CD project is governed by the [CNCF Code of Conduct](h
### Blogs and Presentations
1. [Awesome-Argo: A Curated List of Awesome Projects and Resources Related to Argo](https://github.com/terrytangyuan/awesome-argo)
1. [GitOps Without Pipelines With ArgoCD Image Updater](https://youtu.be/avPUQin9kzU)
1. [Combining Argo CD (GitOps), Crossplane (Control Plane), And KubeVela (OAM)](https://youtu.be/eEcgn_gU3SM)
1. [How to Apply GitOps to Everything - Combining Argo CD and Crossplane](https://youtu.be/yrj4lmScKHQ)
1. [Couchbase - How To Run a Database Cluster in Kubernetes Using Argo CD](https://youtu.be/nkPoPaVzExY)
@@ -69,3 +71,4 @@ Participation in the Argo CD project is governed by the [CNCF Code of Conduct](h
1. [Applied GitOps with Argo CD](https://thenewstack.io/applied-gitops-with-argocd/)
1. [Solving configuration drift using GitOps with Argo CD](https://www.cncf.io/blog/2020/12/17/solving-configuration-drift-using-gitops-with-argo-cd/)
1. [Decentralized GitOps over environments](https://blogs.sap.com/2021/05/06/decentralized-gitops-over-environments/)
1. [How GitOps and Operators mark the rise of Infrastructure-As-Software](https://paytmlabs.com/blog/2021/10/how-to-improve-operational-work-with-operators-and-gitops/)
SECURITY.md (16 changed lines)

@@ -1,6 +1,6 @@
# Security Policy for Argo CD
Version: **v1.1 (2020-06-29)**
Version: **v1.2 (2020-08-07)**
## Preface
@@ -56,13 +56,11 @@ We will do our best to react quickly on your inquiry, and to coordinate a fix
and disclosure with you. Sometimes, it might take a little longer for us to
react (e.g. out of office conditions), so please bear with us in these cases.
We will publish security advisiories using the Git Hub SA feature to keep our
community well informed, and will credit you for your findings (unless you
prefer to stay anonymous, of course).
We will publish security advisiories using the
[Git Hub Security Advisories](https://github.com/argoproj/argo-cd/security/advisories)
feature to keep our community well informed, and will credit you for your
findings (unless you prefer to stay anonymous, of course).
Please report vulnerabilities by e-mail to all of the following people:
Please report vulnerabilities by e-mail to the following address:
* jfischer@redhat.com
* Jesse_Suen@intuit.com
* Alexander_Matyushentsev@intuit.com
* Edward_Lee@intuit.com
* cncf-argo-security@lists.cncf.io
USERS.md (26 changed lines)

@@ -9,6 +9,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [7shifts](https://www.7shifts.com/)
1. [Adevinta](https://www.adevinta.com/)
1. [Adventure](https://jp.adventurekk.com/)
1. [Akuity](https://akuity.io/)
1. [Alibaba Group](https://www.alibabagroup.com/)
1. [Ambassador Labs](https://www.getambassador.io/)
1. [Ant Group](https://www.antgroup.com/)
@@ -22,15 +23,17 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Beat](https://thebeat.co/en/)
1. [Beez Innovation Labs](https://www.beezlabs.com/)
1. [BioBox Analytics](https://biobox.io)
1. [BigPanda](https://bigpanda.io)
1. [BMW Group](https://www.bmwgroup.com/)
1. [Camptocamp](https://camptocamp.com)
1. [Capital One](https://www.capitalone.com)
1. [CARFAX](https://www.carfax.com)
1. [Celonis](https://www.celonis.com/)
1. [Chime](https://www.chime.com)
1. [Codefresh](https://www.codefresh.io/)
1. [Codility](https://www.codility.com/)
1. [Commonbond](https://commonbond.co/)
1. [Crédit Agricole](https://www.ca-cib.com)
1. [Crédit Agricole CIB](https://www.ca-cib.com)
1. [CROZ d.o.o.](https://croz.net/)
1. [CyberAgent](https://www.cyberagent.co.jp/en/)
1. [Cybozu](https://cybozu-global.com)
@@ -48,6 +51,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Garner](https://www.garnercorp.com)
1. [G DATA CyberDefense AG](https://www.gdata-software.com/)
1. [Generali Deutschland AG](https://www.generali.de/)
1. [Gitpod](https://www.gitpod.io)
1. [Glovo](https://www.glovoapp.com)
1. [GMETRI](https://gmetri.com/)
1. [Gojek](https://www.gojek.io/)
@@ -58,12 +62,14 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Hiya](https://hiya.com)
1. [Honestbank](https://honestbank.com)
1. [IBM](https://www.ibm.com/)
1. [IITS-Consulting](https://iits-consulting.de)
1. [Index Exchange](https://www.indexexchange.com/)
1. [InsideBoard](https://www.insideboard.com)
1. [Intuit](https://www.intuit.com/)
1. [Joblift](https://joblift.com/)
1. [JovianX](https://www.jovianx.com/)
1. [Karrot](https://www.daangn.com/)
1. [KarrotPay](https://www.daangnpay.com/)
1. [Kasa](https://kasa.co.kr/)
1. [Keptn](https://keptn.sh)
1. [Kinguin](https://www.kinguin.net/)
@@ -74,9 +80,11 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Lytt](https://www.lytt.co/)
1. [Major League Baseball](https://mlb.com)
1. [Mambu](https://www.mambu.com/)
1. [Mattermost](https://www.mattermost.com)
1. [Max Kelsen](https://www.maxkelsen.com/)
1. [MindSpore](https://mindspore.cn)
1. [Mirantis](https://mirantis.com/)
1. [mixi Group](https://mixi.co.jp/)
1. [Moengage](https://www.moengage.com/)
1. [Money Forward](https://corp.moneyforward.com/en/)
1. [MOO Print](https://www.moo.com/)
@@ -93,6 +101,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Opensurvey](https://www.opensurvey.co.kr/)
1. [Optoro](https://www.optoro.com/)
1. [Orbital Insight](https://orbitalinsight.com/)
1. [Packlink](https://www.packlink.com/)
1. [PayPay](https://paypay.ne.jp/)
1. [Peloton Interactive](https://www.onepeloton.com/)
1. [Pipefy](https://www.pipefy.com/)
@@ -110,9 +119,11 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Saildrone](https://www.saildrone.com/)
1. [Saloodo! GmbH](https://www.saloodo.com)
1. [Schwarz IT](https://jobs.schwarz/it-mission)
1. [Snyk](https://snyk.io/)
1. [Speee](https://speee.jp/)
1. [Spendesk](https://spendesk.com/)
1. [Sumo Logic](https://sumologic.com/)
1. [Sutpc](http://www.sutpc.com/)
1. [Swisscom](https://www.swisscom.ch)
1. [Swissquote](https://github.com/swissquote)
1. [Syncier](https://syncier.com/)
@@ -122,10 +133,12 @@ Currently, the following organizations are **officially** using Argo CD:
1. [ThousandEyes](https://www.thousandeyes.com/)
1. [Ticketmaster](https://ticketmaster.com)
1. [Tiger Analytics](https://www.tigeranalytics.com/)
1. [Tigera](https://www.tigera.io/)
1. [Toss](https://toss.im/en)
1. [tru.ID](https://tru.id)
1. [Twilio SendGrid](https://sendgrid.com)
1. [tZERO](https://www.tzero.com/)
1. [ungleich.ch](https://ungleich.ch/)
1. [UBIO](https://ub.io/)
1. [UFirstGroup](https://www.ufirstgroup.com/en/)
1. [Universidad Mesoamericana](https://www.umes.edu.gt/)
@@ -135,6 +148,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Volvo Cars](https://www.volvocars.com/)
1. [VSHN - The DevOps Company](https://vshn.ch/)
1. [Walkbase](https://www.walkbase.com/)
1. [Wehkamp](https://www.wehkamp.nl/)
1. [WeMo Scooter](https://www.wemoscooter.com/)
1. [Webstores](https://www.webstores.nl)
1. [Whitehat Berlin](https://whitehat.berlin) by Guido Maria Serra +Fenaroli
@@ -154,3 +168,13 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Beleza Na Web](https://www.belezanaweb.com.br/)
1. [MariaDB](https://mariadb.com)
1. [Lightricks](https://www.lightricks.com/)
1. [RightRev](https://rightrev.com/)
1. [MeDirect](https://medirect.com.mt/)
1. [Snapp](https://snapp.ir/)
1. [Technacy](https://www.technacy.it/)
1. [freee](https://corp.freee.co.jp/en/company/)
1. [Youverify](https://youverify.co/)
1. [Keeeb](https://www.keeeb.com/)
1. [p3r](https://www.p3r.one/)
1. [Faro](https://www.faro.com/)
1. [Rise](https://www.risecard.eu/)
@@ -11,4 +11,4 @@ g = _, _
e = some(where (p.eft == allow)) && !some(where (p.eft == deny))
[matchers]
m = g(r.sub, p.sub) && globMatch(r.res, p.res) && globMatch(r.act, p.act) && globMatch(r.obj, p.obj)
m = g(r.sub, p.sub) && globOrRegexMatch(r.res, p.res) && globOrRegexMatch(r.act, p.act) && globOrRegexMatch(r.obj, p.obj)
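
The matcher change above swaps globMatch for globOrRegexMatch, so RBAC policy values can be matched either as glob patterns or as regular expressions. The Go sketch below only illustrates that dual-matching idea; the slash-delimited regex convention and the function body are assumptions made for the example, not Argo CD's actual implementation.

```go
package main

import (
	"fmt"
	"path"
	"regexp"
	"strings"
)

// globOrRegexMatch is an illustrative stand-in for the new matcher: values
// wrapped in "/.../" are treated as regular expressions (an assumption made
// for this sketch), anything else as a glob pattern.
func globOrRegexMatch(val, pattern string) bool {
	if len(pattern) > 1 && strings.HasPrefix(pattern, "/") && strings.HasSuffix(pattern, "/") {
		re, err := regexp.Compile(strings.Trim(pattern, "/"))
		return err == nil && re.MatchString(val)
	}
	ok, err := path.Match(pattern, val)
	return err == nil && ok
}

func main() {
	fmt.Println(globOrRegexMatch("applications", "app*"))        // glob match: true
	fmt.Println(globOrRegexMatch("guestbook-dev", "/.*-dev$/"))  // regex match: true
	fmt.Println(globOrRegexMatch("guestbook-prod", "/.*-dev$/")) // regex match: false
}
```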
@@ -753,6 +753,11 @@
"type": "string",
"name": "resourceName",
"in": "query"
},
{
"type": "boolean",
"name": "previous",
"in": "query"
}
],
"responses": {
@@ -932,6 +937,11 @@
"type": "string",
"name": "resourceName",
"in": "query"
},
{
"type": "boolean",
"name": "previous",
"in": "query"
}
],
"responses": {
@@ -2081,6 +2091,37 @@
}
}
},
"/api/v1/projects/{name}/detailed": {
"get": {
"tags": [
"ProjectService"
],
"summary": "GetDetailedProject returns a project that include project, global project and scoped resources by name",
"operationId": "ProjectService_GetDetailedProject",
"parameters": [
{
"type": "string",
"name": "name",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/projectDetailedProjectsResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/projects/{name}/events": {
"get": {
"tags": [
@@ -2863,6 +2904,12 @@
"description": "HTTP/HTTPS proxy to access the repository.",
"name": "proxy",
"in": "query"
},
{
"type": "string",
"description": "Reference between project and repository that allow you automatically to be added as item inside SourceRepos project entity.",
"name": "project",
"in": "query"
}
],
"responses": {
@@ -3605,6 +3652,9 @@
"oidcConfig": {
"$ref": "#/definitions/clusterOIDCConfig"
},
"passwordPattern": {
"type": "string"
},
"plugins": {
"type": "array",
"items": {
@@ -3620,9 +3670,18 @@
"statusBadgeEnabled": {
"type": "boolean"
},
"trackingMethod": {
"type": "string"
},
"uiBannerContent": {
"type": "string"
},
"uiBannerPermanent": {
"type": "boolean"
},
"uiBannerPosition": {
"type": "string"
},
"uiBannerURL": {
"type": "string"
},
@@ -3674,6 +3733,32 @@
}
}
},
"projectDetailedProjectsResponse": {
"type": "object",
"properties": {
"clusters": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1Cluster"
}
},
"globalProjects": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1AppProject"
}
},
"project": {
"$ref": "#/definitions/v1alpha1AppProject"
},
"repositories": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1Repository"
}
}
}
},
"projectEmptyResponse": {
"type": "object"
},
@@ -4283,6 +4368,10 @@
"description": "Operation is the type of operation which lead to this ManagedFieldsEntry being created.\nThe only valid values for this field are 'Apply' and 'Update'.",
"type": "string"
},
"subresource": {
"description": "Subresource is the name of the subresource used to update that object, or\nempty string if the object was updated through the main resource. The\nvalue of this field is used to distinguish between managers, even if they\nshare the same name. For example, a status update will be distinct from a\nregular update using the same manager name.\nNote that the APIVersion field is not related to the Subresource field and\nit always corresponds to the version of the main resource.",
"type": "string"
},
"time": {
"$ref": "#/definitions/v1Time"
}
@@ -4437,7 +4526,7 @@
},
"v1ObjectReference": {
"type": "object",
"title": "ObjectReference contains enough information to let you inspect or modify the referred object.\n---\nNew uses of this type are discouraged because of difficulty describing its usage when embedded in APIs.\n 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage.\n 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular\n restrictions like, \"must refer only to types A and B\" or \"UID not honored\" or \"name must be restricted\".\n Those cannot be well described when embedded.\n 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen.\n 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity\n during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple\n and the version of the actual struct is irrelevant.\n 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type\n will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control.\nInstead of using this type, create a locally provided and used type that is well-focused on your reference.\nFor example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object",
"title": "ObjectReference contains enough information to let you inspect or modify the referred object.\n---\nNew uses of this type are discouraged because of difficulty describing its usage when embedded in APIs.\n 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage.\n 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular\n restrictions like, \"must refer only to types A and B\" or \"UID not honored\" or \"name must be restricted\".\n Those cannot be well described when embedded.\n 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen.\n 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity\n during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple\n and the version of the actual struct is irrelevant.\n 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type\n will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control.\nInstead of using this type, create a locally provided and used type that is well-focused on your reference.\nFor example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n+structType=atomic",
"properties": {
"apiVersion": {
"type": "string",
@@ -4470,8 +4559,8 @@
}
},
"v1OwnerReference": {
"description": "OwnerReference contains enough information to let you identify an owning\nobject. An owning object must be in the same namespace as the dependent, or\nbe cluster-scoped, so there is no namespace field.",
"type": "object",
"title": "OwnerReference contains enough information to let you identify an owning\nobject. An owning object must be in the same namespace as the dependent, or\nbe cluster-scoped, so there is no namespace field.\n+structType=atomic",
"properties": {
"apiVersion": {
"description": "API version of the referent.",
@@ -4810,6 +4899,10 @@
"$ref": "#/definitions/v1alpha1HelmParameter"
}
},
"passCredentials": {
"type": "boolean",
"title": "PassCredentials pass credentials to all domains (Helm's --pass-credentials)"
},
"releaseName": {
"type": "string",
"title": "ReleaseName is the Helm release name to use. If omitted it will use the application name"
@@ -5106,6 +5199,13 @@
"type": "object",
"title": "Cluster is the definition of a cluster resource",
"properties": {
"annotations": {
"type": "object",
"title": "Annotations for cluster secret metadata",
"additionalProperties": {
"type": "string"
}
},
"clusterResources": {
"description": "Indicates if cluster level resources should be managed. This setting is used only if cluster is connected in a namespaced mode.",
"type": "boolean"
@@ -5119,6 +5219,13 @@
"info": {
"$ref": "#/definitions/v1alpha1ClusterInfo"
},
"labels": {
"type": "object",
"title": "Labels for cluster secret metadata",
"additionalProperties": {
"type": "string"
}
},
"name": {
"type": "string",
"title": "Name of the cluster. If omitted, will use the server address"
@@ -5130,6 +5237,10 @@
"type": "string"
}
},
"project": {
"type": "string",
"title": "Reference between project and cluster that allow you automatically to be added as item inside Destinations project entity"
},
"refreshRequestedAt": {
"$ref": "#/definitions/v1Time"
},
@@ -5276,6 +5387,9 @@
"init": {
"$ref": "#/definitions/v1alpha1Command"
},
"lockRepo": {
"type": "boolean"
},
"name": {
"type": "string"
}
@@ -5846,6 +5960,10 @@
"type": "string",
"title": "Password contains the password or PAT used for authenticating at the remote repository"
},
"project": {
"type": "string",
"title": "Reference between project and repository that allow you automatically to be added as item inside SourceRepos project entity"
},
"proxy": {
"type": "string",
"title": "Proxy specifies the HTTP/HTTPS proxy used to access the repo"
@@ -6534,6 +6652,10 @@
"schedule": {
"type": "string",
"title": "Schedule is the time the window will begin, specified in cron format"
},
"timeZone": {
"type": "string",
"title": "TimeZone of the sync that will be applied to the schedule"
}
}
},
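
Among the swagger additions above is the new GET /api/v1/projects/{name}/detailed endpoint, which returns a projectDetailedProjectsResponse containing the project itself plus its global projects, repositories and clusters. The Go sketch below shows one way a client could call it; the server URL, the ARGOCD_TOKEN variable and the simplified response struct are placeholders for illustration, not part of the Argo CD client libraries.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// detailedProjectResponse mirrors only the top-level shape of
// projectDetailedProjectsResponse from the updated swagger; field types are
// simplified to raw JSON for this sketch.
type detailedProjectResponse struct {
	Project        json.RawMessage   `json:"project"`
	GlobalProjects []json.RawMessage `json:"globalProjects"`
	Repositories   []json.RawMessage `json:"repositories"`
	Clusters       []json.RawMessage `json:"clusters"`
}

func main() {
	// Placeholder server address and bearer token; adjust for a real install.
	url := "https://argocd.example.com/api/v1/projects/default/detailed"
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("ARGOCD_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out detailedProjectResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Printf("global projects: %d, repos: %d, clusters: %d\n",
		len(out.GlobalProjects), len(out.Repositories), len(out.Clusters))
}
```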
@@ -49,6 +49,7 @@ func NewCommand() *cobra.Command {
glogLevel int
metricsPort int
metricsCacheExpiration time.Duration
metricsAplicationLabels []string
kubectlParallelismLimit int64
cacheSrc func() (*appstatecache.Cache, error)
redisClient *redis.Client
@@ -110,10 +111,14 @@ func NewCommand() *cobra.Command {
errors.CheckError(err)
cache.Cache.SetClient(cacheutil.NewTwoLevelClient(cache.Cache.GetClient(), 10*time.Minute))
settingsMgr := settings.NewSettingsManager(ctx, kubeClient, namespace)
var appController *controller.ApplicationController
settingsMgr := settings.NewSettingsManager(ctx, kubeClient, namespace, settings.WithRepoOrClusterChangedHandler(func() {
appController.InvalidateProjectsCache()
}))
kubectl := kubeutil.NewKubectl()
clusterFilter := getClusterFilter()
appController, err := controller.NewApplicationController(
appController, err = controller.NewApplicationController(
namespace,
settingsMgr,
kubeClient,
@@ -125,6 +130,7 @@ func NewCommand() *cobra.Command {
time.Duration(selfHealTimeoutSeconds)*time.Second,
metricsPort,
metricsCacheExpiration,
metricsAplicationLabels,
kubectlParallelismLimit,
clusterFilter)
errors.CheckError(err)
@@ -144,20 +150,21 @@ func NewCommand() *cobra.Command {
}
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().Int64Var(&appResyncPeriod, "app-resync", int64(env.ParseDurationFromEnv("ARGOCD_RECONCILIATION_TIMEOUT", defaultAppResyncPeriod*time.Second, 0, math.MaxInt32).Seconds()), "Time period in seconds for application resync.")
command.Flags().Int64Var(&appResyncPeriod, "app-resync", int64(env.ParseDurationFromEnv("ARGOCD_RECONCILIATION_TIMEOUT", defaultAppResyncPeriod*time.Second, 0, math.MaxInt64).Seconds()), "Time period in seconds for application resync.")
command.Flags().StringVar(&repoServerAddress, "repo-server", env.StringFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER", common.DefaultRepoServerAddr), "Repo server address.")
command.Flags().IntVar(&repoServerTimeoutSeconds, "repo-server-timeout-seconds", int(env.ParseDurationFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_TIMEOUT_SECONDS", 60*time.Second, 0, math.MaxInt32).Seconds()), "Repo server RPC call timeout seconds.")
command.Flags().IntVar(&repoServerTimeoutSeconds, "repo-server-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_TIMEOUT_SECONDS", 60, 0, math.MaxInt64), "Repo server RPC call timeout seconds.")
command.Flags().IntVar(&statusProcessors, "status-processors", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_STATUS_PROCESSORS", 20, 0, math.MaxInt32), "Number of application status processors")
command.Flags().IntVar(&operationProcessors, "operation-processors", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_OPERATION_PROCESSORS", 10, 0, math.MaxInt32), "Number of application operation processors")
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", env.StringFromEnv("ARGOCD_APPLICATION_CONTROLLER_LOGFORMAT", "text"), "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", env.StringFromEnv("ARGOCD_APPLICATION_CONTROLLER_LOGLEVEL", "info"), "Set the logging level. One of: debug|info|warn|error")
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortArgoCDMetrics, "Start metrics server on given port")
command.Flags().DurationVar(&metricsCacheExpiration, "metrics-cache-expiration", env.ParseDurationFromEnv("ARGOCD_APPLICATION_CONTROLLER_METRICS_CACHE_EXPIRATION", 0*time.Second, 0, math.MaxInt32), "Prometheus metrics cache expiration (disabled by default. e.g. 24h0m0s)")
command.Flags().DurationVar(&metricsCacheExpiration, "metrics-cache-expiration", env.ParseDurationFromEnv("ARGOCD_APPLICATION_CONTROLLER_METRICS_CACHE_EXPIRATION", 0*time.Second, 0, math.MaxInt64), "Prometheus metrics cache expiration (disabled by default. e.g. 24h0m0s)")
command.Flags().IntVar(&selfHealTimeoutSeconds, "self-heal-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_TIMEOUT_SECONDS", 5, 0, math.MaxInt32), "Specifies timeout between application self heal attempts")
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", 20, "Number of allowed concurrent kubectl fork/execs. Any value less the 1 means no limit.")
command.Flags().BoolVar(&repoServerPlaintext, "repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Disable TLS on connections to repo server")
command.Flags().BoolVar(&repoServerStrictTLS, "repo-server-strict-tls", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_STRICT_TLS", false), "Whether to use strict validation of the TLS cert presented by the repo server")
command.Flags().StringSliceVar(&metricsAplicationLabels, "metrics-application-labels", []string{}, "List of Application labels that will be added to the argocd_application_labels metric")
cacheSrc = appstatecache.AddCacheFlagsToCmd(&command, func(client *redis.Client) {
redisClient = client
})
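
The controller wiring above now declares appController before the settings manager is built, so the WithRepoOrClusterChangedHandler closure can call InvalidateProjectsCache on a controller that is only assigned afterwards. A minimal standalone sketch of that declare-then-capture pattern, using stand-in types rather than the real Argo CD ones:

```go
package main

import "fmt"

// Minimal stand-ins for the real types; only the wiring pattern matters here.
type controller struct{}

func (c *controller) InvalidateProjectsCache() { fmt.Println("projects cache invalidated") }

type settingsManager struct{ onChange func() }

func newSettingsManager(onChange func()) *settingsManager {
	return &settingsManager{onChange: onChange}
}

func (m *settingsManager) notifyRepoOrClusterChanged() {
	if m.onChange != nil {
		m.onChange()
	}
}

func main() {
	// Declare the controller variable first so the closure can capture it ...
	var appController *controller

	mgr := newSettingsManager(func() {
		// ... even though the controller is only assigned below.
		appController.InvalidateProjectsCache()
	})

	appController = &controller{} // assigned after the settings manager exists
	mgr.notifyRepoOrClusterChanged()
}
```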
cmd/argocd-cmp-server/commands/argocd_cmp_server.go (58 changed lines, new file)

@@ -0,0 +1,58 @@
package commands

import (
"time"

"github.com/argoproj/pkg/stats"
"github.com/spf13/cobra"

cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
"github.com/argoproj/argo-cd/v2/cmpserver"
"github.com/argoproj/argo-cd/v2/cmpserver/plugin"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/errors"
)

const (
// CLIName is the name of the CLI
cliName = "argocd-cmp-server"
)

func NewCommand() *cobra.Command {
var (
configFilePath string
)
var command = cobra.Command{
Use: cliName,
Short: "Run ArgoCD ConfigManagementPlugin Server",
Long: "ArgoCD ConfigManagementPlugin Server is an internal service which runs as sidecar container in reposerver deployment. It can be configured by following options.",
DisableAutoGenTag: true,
RunE: func(c *cobra.Command, args []string) error {
cli.SetLogFormat(cmdutil.LogFormat)
cli.SetLogLevel(cmdutil.LogLevel)

config, err := plugin.ReadPluginConfig(configFilePath)
errors.CheckError(err)

server, err := cmpserver.NewServer(plugin.CMPServerInitConstants{
PluginConfig: *config,
})
errors.CheckError(err)

// register dumper
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")

// run argocd-cmp-server server
server.Run()
return nil
},
}

command.Flags().StringVar(&cmdutil.LogFormat, "logformat", "text", "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().StringVar(&configFilePath, "config-dir-path", common.DefaultPluginConfigFilePath, "Config management plugin configuration file location, Default is '/home/argocd/cmp-server/config/'")
return &command
}
@@ -63,6 +63,7 @@ func NewCommand() *cobra.Command {
frameOptions string
repoServerPlaintext bool
repoServerStrictTLS bool
staticAssetsDir string
)
var command = &cobra.Command{
Use: cliName,
@@ -139,6 +140,7 @@ func NewCommand() *cobra.Command {
Cache: cache,
XFrameOptions: frameOptions,
RedisClient: redisClient,
StaticAssetsDir: staticAssetsDir,
}
stats.RegisterStackDumper()
@@ -157,9 +159,7 @@ func NewCommand() *cobra.Command {
clientConfig = cli.AddKubectlFlagsToCmd(command)
command.Flags().BoolVar(&insecure, "insecure", env.ParseBoolFromEnv("ARGOCD_SERVER_INSECURE", false), "Run server without TLS")
var staticAssetsDir string
command.Flags().StringVar(&staticAssetsDir, "staticassets", "", "Static assets directory path")
_ = command.Flags().MarkDeprecated("staticassets", "The --staticassets flag is not longer supported. Static assets are embedded into binary.")
command.Flags().StringVar(&staticAssetsDir, "staticassets", env.StringFromEnv("ARGOCD_SERVER_STATIC_ASSETS", "/shared/app"), "Directory path that contains additional static assets")
command.Flags().StringVar(&baseHRef, "basehref", env.StringFromEnv("ARGOCD_SERVER_BASEHREF", "/"), "Value for base href in index.html. Used if Argo CD is running behind reverse proxy under subpath different from /")
command.Flags().StringVar(&rootPath, "rootpath", env.StringFromEnv("ARGOCD_SERVER_ROOTPATH", ""), "Used if Argo CD is running behind reverse proxy under subpath different from /")
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", env.StringFromEnv("ARGOCD_SERVER_LOGFORMAT", "text"), "Set the logging format. One of: text|json")
@@ -172,7 +172,7 @@ func NewCommand() *cobra.Command {
command.AddCommand(cli.NewVersionCmd(cliName))
command.Flags().IntVar(&listenPort, "port", common.DefaultPortAPIServer, "Listen on given port")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortArgoCDAPIServerMetrics, "Start metrics on given port")
command.Flags().IntVar(&repoServerTimeoutSeconds, "repo-server-timeout-seconds", int(env.ParseDurationFromEnv("ARGOCD_SERVER_REPO_SERVER_TIMEOUT_SECONDS", 60*time.Second, 0, math.MaxInt32).Seconds()), "Repo server RPC call timeout seconds.")
command.Flags().IntVar(&repoServerTimeoutSeconds, "repo-server-timeout-seconds", env.ParseNumFromEnv("ARGOCD_SERVER_REPO_SERVER_TIMEOUT_SECONDS", 60, 0, math.MaxInt64), "Repo server RPC call timeout seconds.")
command.Flags().StringVar(&frameOptions, "x-frame-options", env.StringFromEnv("ARGOCD_SERVER_X_FRAME_OPTIONS", "sameorigin"), "Set X-Frame-Options header in HTTP responses to `value`. To disable, set to \"\".")
command.Flags().BoolVar(&repoServerPlaintext, "repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_SERVER_REPO_SERVER_PLAINTEXT", false), "Use a plaintext client (non-TLS) to connect to repository server")
command.Flags().BoolVar(&repoServerStrictTLS, "repo-server-strict-tls", env.ParseBoolFromEnv("ARGOCD_SERVER_REPO_SERVER_STRICT_TLS", false), "Perform strict validation of TLS certificates when connecting to repo server")
@@ -54,7 +54,19 @@ func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *co
)
var command = &cobra.Command{
Use: "update-password",
Short: "Update password",
Short: "Update an account's password",
Long: `
This command can be used to update the password of the currently logged on
user, or an arbitrary local user account when the currently logged on user
has appropriate RBAC permissions to change other accounts.
`,
Example: `
# Update the current user's password
argocd account update-password

# Update the password for user foobar
argocd account update-password --account foobar
`,
Run: func(c *cobra.Command, args []string) {
if len(args) != 0 {
c.HelpFunc()(c, args)
@@ -67,16 +79,20 @@ func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *co
userInfo := getCurrentAccount(acdClient)
if userInfo.Iss == sessionutil.SessionManagerClaimsIssuer && currentPassword == "" {
fmt.Print("*** Enter current password: ")
fmt.Printf("*** Enter password of currently logged in user (%s): ", userInfo.Username)
password, err := term.ReadPassword(int(os.Stdin.Fd()))
errors.CheckError(err)
currentPassword = string(password)
fmt.Print("\n")
}
if account == "" {
account = userInfo.Username
}
if newPassword == "" {
var err error
newPassword, err = cli.ReadAndConfirmPassword()
newPassword, err = cli.ReadAndConfirmPassword(account)
errors.CheckError(err)
}
@@ -111,7 +127,7 @@ func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *co
},
}
command.Flags().StringVar(&currentPassword, "current-password", "", "current password you wish to change")
command.Flags().StringVar(&currentPassword, "current-password", "", "password of the currently logged on user")
command.Flags().StringVar(&newPassword, "new-password", "", "new password you want to update to")
command.Flags().StringVar(&account, "account", "", "an account name that should be updated. Defaults to current user account")
return command
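
ReadAndConfirmPassword now receives the account name so the prompt can say whose password is being set. A rough sketch of such a prompt-twice-and-compare helper, assuming golang.org/x/term for echo-free input; the prompts and control flow are illustrative, not copied from Argo CD's cli package.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/term"
)

// readAndConfirmPassword sketches the flow that the CLI change passes an
// account name into: prompt twice without echo, compare, and retry on mismatch.
func readAndConfirmPassword(account string) (string, error) {
	for {
		fmt.Printf("*** Enter new password for %s: ", account)
		first, err := term.ReadPassword(int(os.Stdin.Fd()))
		if err != nil {
			return "", err
		}
		fmt.Printf("\n*** Confirm new password for %s: ", account)
		second, err := term.ReadPassword(int(os.Stdin.Fd()))
		if err != nil {
			return "", err
		}
		fmt.Println()
		if string(first) == string(second) {
			return string(first), nil
		}
		fmt.Println("Passwords do not match, try again.")
	}
}

func main() {
	pw, err := readAndConfirmPassword("foobar")
	if err != nil {
		panic(err)
	}
	fmt.Println("password length:", len(pw))
}
```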
@@ -10,6 +10,8 @@ import (
"sort"
"time"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/ghodss/yaml"
"github.com/spf13/cobra"
apiv1 "k8s.io/api/core/v1"
@@ -88,9 +90,12 @@ func NewGenAppSpecCommand() *cobra.Command {
argocd admin app generate-spec ksane --repo https://github.com/argoproj/argocd-example-apps.git --path plugins/kasane --dest-namespace default --dest-server https://kubernetes.default.svc --config-management-plugin kasane
`,
Run: func(c *cobra.Command, args []string) {
app, err := cmdutil.ConstructApp(fileURL, appName, labels, annotations, args, appOpts, c.Flags())
apps, err := cmdutil.ConstructApps(fileURL, appName, labels, annotations, args, appOpts, c.Flags())
errors.CheckError(err)
if len(apps) > 1 {
errors.CheckError(fmt.Errorf("failed to generate spec, more than one application is not supported"))
}
app := apps[0]
if app.Name == "" {
c.HelpFunc()(c, args)
os.Exit(1)
@@ -329,11 +334,11 @@ func reconcileApplications(
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClientset, namespace)
argoDB := db.NewDB(namespace, settingsMgr, kubeClientset)
appInformerFactory := appinformers.NewFilteredSharedInformerFactory(
appInformerFactory := appinformers.NewSharedInformerFactoryWithOptions(
appClientset,
1*time.Hour,
namespace,
func(options *v1.ListOptions) {},
appinformers.WithNamespace(namespace),
appinformers.WithTweakListOptions(func(options *v1.ListOptions) {}),
)
appInformer := appInformerFactory.Argoproj().V1alpha1().Applications().Informer()
@@ -350,7 +355,7 @@ func reconcileApplications(
return true
}, func(r *http.Request) error {
return nil
})
}, []string{})
if err != nil {
return nil, err
@@ -366,7 +371,7 @@ func reconcileApplications(
)
appStateManager := controller.NewAppStateManager(
argoDB, appClientset, repoServerClient, namespace, kubeutil.NewKubectl(), settingsMgr, stateCache, projInformer, server, cache, time.Second)
argoDB, appClientset, repoServerClient, namespace, kubeutil.NewKubectl(), settingsMgr, stateCache, projInformer, server, cache, time.Second, argo.NewResourceTracking())
appsList, err := appClientset.ArgoprojV1alpha1().Applications(namespace).List(context.Background(), v1.ListOptions{LabelSelector: selector})
if err != nil {
@@ -408,5 +413,5 @@ func reconcileApplications(
}
func newLiveStateCache(argoDB db.ArgoDB, appInformer kubecache.SharedIndexInformer, settingsMgr *settings.SettingsManager, server *metrics.MetricsServer) cache.LiveStateCache {
return cache.NewLiveStateCache(argoDB, appInformer, settingsMgr, kubeutil.NewKubectl(), server, func(managedByApp map[string]bool, ref apiv1.ObjectReference) {}, nil)
return cache.NewLiveStateCache(argoDB, appInformer, settingsMgr, kubeutil.NewKubectl(), server, func(managedByApp map[string]bool, ref apiv1.ObjectReference) {}, nil, argo.NewResourceTracking())
}
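
The reconcile helper above switches from NewFilteredSharedInformerFactory to the options-based constructor. Argo CD's generated appinformers package mirrors client-go's factories, so the same namespace-scoped construction can be sketched with client-go and a fake clientset:

```go
package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
)

func main() {
	// A fake clientset is enough to demonstrate the constructor; in Argo CD the
	// equivalent call uses the generated appinformers factory and app clientset.
	client := fake.NewSimpleClientset()

	factory := informers.NewSharedInformerFactoryWithOptions(
		client,
		1*time.Hour,
		informers.WithNamespace("argocd"),
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {}),
	)

	podInformer := factory.Core().V1().Pods().Informer()
	fmt.Println("informer constructed:", podInformer != nil)
}
```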
@@ -10,7 +10,7 @@ import (
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/ghodss/yaml"
"github.com/sirupsen/logrus"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
apierr "k8s.io/apimachinery/pkg/api/errors"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -88,7 +88,11 @@ func NewExportCommand() *cobra.Command {
}
applicationSets, err := acdClients.applicationSets.List(context.Background(), v1.ListOptions{})
if err != nil && !apierr.IsNotFound(err) {
errors.CheckError(err)
if apierr.IsForbidden(err) {
log.Warn(err)
} else {
errors.CheckError(err)
}
}
if applicationSets != nil {
for _, appSet := range applicationSets.Items {
@@ -176,6 +180,17 @@ func NewImportCommand() *cobra.Command {
for _, proj := range projects.Items {
pruneObjects[kube.ResourceKey{Group: "argoproj.io", Kind: "AppProject", Name: proj.GetName()}] = proj
}
applicationSets, err := acdClients.applicationSets.List(context.Background(), v1.ListOptions{})
if apierr.IsForbidden(err) || apierr.IsNotFound(err) {
log.Warnf("argoproj.io/ApplicationSet: %v\n", err)
} else {
errors.CheckError(err)
}
if applicationSets != nil {
for _, appSet := range applicationSets.Items {
pruneObjects[kube.ResourceKey{Group: "argoproj.io", Kind: "ApplicationSet", Name: appSet.GetName()}] = appSet
}
}
// Create or replace existing object
backupObjects, err := kube.SplitYAML(input)
@@ -199,22 +214,39 @@ func NewImportCommand() *cobra.Command {
dynClient = acdClients.applicationSets
}
if !exists {
isForbidden := false
if !dryRun {
_, err = dynClient.Create(context.Background(), bakObj, v1.CreateOptions{})
errors.CheckError(err)
if apierr.IsForbidden(err) || apierr.IsNotFound(err) {
isForbidden = true
log.Warnf("%s/%s %s: %v", gvk.Group, gvk.Kind, bakObj.GetName(), err)
} else {
errors.CheckError(err)
}
}
fmt.Printf("%s/%s %s created%s\n", gvk.Group, gvk.Kind, bakObj.GetName(), dryRunMsg)
if !isForbidden {
fmt.Printf("%s/%s %s created%s\n", gvk.Group, gvk.Kind, bakObj.GetName(), dryRunMsg)
}
} else if specsEqual(*bakObj, liveObj) {
if verbose {
fmt.Printf("%s/%s %s unchanged%s\n", gvk.Group, gvk.Kind, bakObj.GetName(), dryRunMsg)
}
} else {
isForbidden := false
if !dryRun {
newLive := updateLive(bakObj, &liveObj)
_, err = dynClient.Update(context.Background(), newLive, v1.UpdateOptions{})
errors.CheckError(err)
if apierr.IsForbidden(err) || apierr.IsNotFound(err) {
isForbidden = true
log.Warnf("%s/%s %s: %v", gvk.Group, gvk.Kind, bakObj.GetName(), err)
} else {
errors.CheckError(err)
}
}
if !isForbidden {
fmt.Printf("%s/%s %s updated%s\n", gvk.Group, gvk.Kind, bakObj.GetName(), dryRunMsg)
}
fmt.Printf("%s/%s %s updated%s\n", gvk.Group, gvk.Kind, bakObj.GetName(), dryRunMsg)
}
}
@@ -239,16 +271,24 @@ func NewImportCommand() *cobra.Command {
}
}
}
case "ApplicationSet":
dynClient = acdClients.applicationSets
default:
logrus.Fatalf("Unexpected kind '%s' in prune list", key.Kind)
log.Fatalf("Unexpected kind '%s' in prune list", key.Kind)
}
isForbidden := false
if !dryRun {
err = dynClient.Delete(context.Background(), key.Name, v1.DeleteOptions{})
if err != nil && !apierr.IsNotFound(err) {
if apierr.IsForbidden(err) || apierr.IsNotFound(err) {
isForbidden = true
log.Warnf("%s/%s %s: %v\n", key.Group, key.Kind, key.Name, err)
} else {
errors.CheckError(err)
}
}
fmt.Printf("%s/%s %s pruned%s\n", key.Group, key.Kind, key.Name, dryRunMsg)
if !isForbidden {
fmt.Printf("%s/%s %s pruned%s\n", key.Group, key.Kind, key.Name, dryRunMsg)
}
} else {
fmt.Printf("%s/%s %s needs pruning\n", key.Group, key.Kind, key.Name)
}
@@ -304,6 +344,8 @@ func updateLive(bak, live *unstructured.Unstructured) *unstructured.Unstructured
if _, ok := bak.Object["status"]; ok {
newLive.Object["status"] = bak.Object["status"]
}
case "ApplicationSet":
newLive.Object["spec"] = bak.Object["spec"]
}
return newLive
}
@@ -35,6 +35,7 @@ import (
|
||||
"github.com/argoproj/argo-cd/v2/util/glob"
|
||||
kubeutil "github.com/argoproj/argo-cd/v2/util/kube"
|
||||
"github.com/argoproj/argo-cd/v2/util/settings"
|
||||
"github.com/argoproj/argo-cd/v2/util/text/label"
|
||||
)
|
||||
|
||||
func NewClusterCommand(pathOpts *clientcmd.PathOptions) *cobra.Command {
|
||||
@@ -508,6 +509,8 @@ func NewGenClusterConfigCommand(pathOpts *clientcmd.PathOptions) *cobra.Command
|
||||
bearerToken string
|
||||
generateToken bool
|
||||
outputFormat string
|
||||
labels []string
|
||||
annotations []string
|
||||
)
|
||||
var command = &cobra.Command{
|
||||
Use: "generate-spec CONTEXT",
|
||||
@@ -561,7 +564,13 @@ func NewGenClusterConfigCommand(pathOpts *clientcmd.PathOptions) *cobra.Command
|
||||
if clusterOpts.Name != "" {
|
||||
contextName = clusterOpts.Name
|
||||
}
|
||||
clst := cmdutil.NewCluster(contextName, clusterOpts.Namespaces, clusterOpts.ClusterResources, conf, bearerToken, awsAuthConf, execProviderConf)
|
||||
|
||||
labelsMap, err := label.Parse(labels)
|
||||
errors.CheckError(err)
|
||||
annotationsMap, err := label.Parse(annotations)
|
||||
errors.CheckError(err)
|
||||
|
||||
clst := cmdutil.NewCluster(contextName, clusterOpts.Namespaces, clusterOpts.ClusterResources, conf, bearerToken, awsAuthConf, execProviderConf, labelsMap, annotationsMap)
|
||||
if clusterOpts.InCluster {
|
||||
clst.Server = argoappv1.KubernetesInternalAPIServerAddr
|
||||
}
|
||||
@@ -590,6 +599,8 @@ func NewGenClusterConfigCommand(pathOpts *clientcmd.PathOptions) *cobra.Command
|
||||
command.Flags().StringVar(&clusterOpts.ServiceAccount, "service-account", "argocd-manager", fmt.Sprintf("System namespace service account to use for kubernetes resource management. If not set then default \"%s\" SA will be used", clusterauth.ArgoCDManagerServiceAccount))
|
||||
command.Flags().StringVar(&clusterOpts.SystemNamespace, "system-namespace", common.DefaultSystemNamespace, "Use different system namespace")
|
||||
command.Flags().StringVarP(&outputFormat, "output", "o", "yaml", "Output format. One of: json|yaml")
|
||||
command.Flags().StringArrayVar(&labels, "label", nil, "Set metadata labels (e.g. --label key=value)")
|
||||
command.Flags().StringArrayVar(&annotations, "annotation", nil, "Set metadata annotations (e.g. --annotation key=value)")
|
||||
cmdutil.AddClusterFlags(command, &clusterOpts)
|
||||
return command
|
||||
}
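The new --label and --annotation flags above are parsed into maps before being handed to cmdutil.NewCluster. A minimal sketch of that parsing step, assuming label.Parse accepts the repeated key=value strings and returns a map plus an error as the hunk suggests (the flag values here are invented):

```go
package main

import (
	"fmt"

	"github.com/argoproj/argo-cd/v2/util/text/label"
)

func main() {
	// Values as they would arrive from repeated --label flags on the CLI.
	labels := []string{"team=platform", "env=staging"}

	// Assumed behaviour: each key=value pair becomes a map entry; a malformed entry yields an error.
	labelsMap, err := label.Parse(labels)
	if err != nil {
		panic(err)
	}
	fmt.Println(labelsMap) // map[env:staging team:platform]
}
```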
|
||||
|
||||
@@ -401,7 +401,7 @@ argocd admin settings resource-overrides ignore-differences ./deploy.yaml --argo
|
||||
|
||||
executeResourceOverrideCommand(cmdCtx, args, func(res unstructured.Unstructured, override v1alpha1.ResourceOverride, overrides map[string]v1alpha1.ResourceOverride) {
|
||||
gvk := res.GroupVersionKind()
|
||||
if len(override.IgnoreDifferences.JSONPointers) == 0 {
|
||||
if len(override.IgnoreDifferences.JSONPointers) == 0 && len(override.IgnoreDifferences.JQPathExpressions) == 0 {
|
||||
_, _ = fmt.Printf("Ignore differences are not configured for '%s/%s'\n", gvk.Group, gvk.Kind)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -14,6 +14,8 @@ import (
|
||||
"time"
|
||||
"unicode/utf8"
|
||||
|
||||
"github.com/argoproj/gitops-engine/pkg/sync/common"
|
||||
|
||||
"github.com/argoproj/gitops-engine/pkg/diff"
|
||||
"github.com/argoproj/gitops-engine/pkg/health"
|
||||
"github.com/argoproj/gitops-engine/pkg/sync/hook"
|
||||
@@ -48,7 +50,6 @@ import (
|
||||
"github.com/argoproj/argo-cd/v2/util/errors"
|
||||
"github.com/argoproj/argo-cd/v2/util/git"
|
||||
argoio "github.com/argoproj/argo-cd/v2/util/io"
|
||||
argokube "github.com/argoproj/argo-cd/v2/util/kube"
|
||||
"github.com/argoproj/argo-cd/v2/util/templates"
|
||||
"github.com/argoproj/argo-cd/v2/util/text/label"
|
||||
)
|
||||
@@ -92,6 +93,7 @@ func NewApplicationCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
|
||||
command.AddCommand(NewApplicationEditCommand(clientOpts))
|
||||
command.AddCommand(NewApplicationPatchCommand(clientOpts))
|
||||
command.AddCommand(NewApplicationPatchResourceCommand(clientOpts))
|
||||
command.AddCommand(NewApplicationDeleteResourceCommand(clientOpts))
|
||||
command.AddCommand(NewApplicationResourceActionsCommand(clientOpts))
|
||||
command.AddCommand(NewApplicationListResourcesCommand(clientOpts))
|
||||
command.AddCommand(NewApplicationLogsCommand(clientOpts))
|
||||
@@ -133,24 +135,27 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
|
||||
Run: func(c *cobra.Command, args []string) {
|
||||
argocdClient := argocdclient.NewClientOrDie(clientOpts)
|
||||
|
||||
app, err := cmdutil.ConstructApp(fileURL, appName, labels, annotations, args, appOpts, c.Flags())
|
||||
apps, err := cmdutil.ConstructApps(fileURL, appName, labels, annotations, args, appOpts, c.Flags())
|
||||
errors.CheckError(err)
|
||||
|
||||
if app.Name == "" {
|
||||
c.HelpFunc()(c, args)
|
||||
os.Exit(1)
|
||||
for _, app := range apps {
|
||||
if app.Name == "" {
|
||||
c.HelpFunc()(c, args)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
conn, appIf := argocdClient.NewApplicationClientOrDie()
|
||||
defer argoio.Close(conn)
|
||||
appCreateRequest := applicationpkg.ApplicationCreateRequest{
|
||||
Application: *app,
|
||||
Upsert: &upsert,
|
||||
Validate: &appOpts.Validate,
|
||||
}
|
||||
created, err := appIf.Create(context.Background(), &appCreateRequest)
|
||||
errors.CheckError(err)
|
||||
fmt.Printf("application '%s' created\n", created.ObjectMeta.Name)
|
||||
}
|
||||
|
||||
conn, appIf := argocdClient.NewApplicationClientOrDie()
|
||||
defer argoio.Close(conn)
|
||||
appCreateRequest := applicationpkg.ApplicationCreateRequest{
|
||||
Application: *app,
|
||||
Upsert: &upsert,
|
||||
Validate: &appOpts.Validate,
|
||||
}
|
||||
created, err := appIf.Create(context.Background(), &appCreateRequest)
|
||||
errors.CheckError(err)
|
||||
fmt.Printf("application '%s' created\n", created.ObjectMeta.Name)
|
||||
},
|
||||
}
|
||||
command.Flags().StringVar(&appName, "name", "", "A name for the app, ignored if a file is set (DEPRECATED)")
|
||||
@@ -666,6 +671,10 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
|
||||
}
|
||||
}
|
||||
}
|
||||
if app.Spec.Source.Helm.PassCredentials {
|
||||
app.Spec.Source.Helm.PassCredentials = false
|
||||
updated = true
|
||||
}
|
||||
}
|
||||
|
||||
if app.Spec.Source.Plugin != nil {
|
||||
@@ -675,7 +684,7 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
|
||||
}
|
||||
for _, env := range pluginEnvs {
|
||||
err = app.Spec.Source.Plugin.RemoveEnvEntry(env)
|
||||
if err != nil {
|
||||
if err == nil {
|
||||
updated = true
|
||||
}
|
||||
}
|
||||
@@ -732,8 +741,8 @@ func liveObjects(resources []*argoappv1.ResourceDiff) ([]*unstructured.Unstructu
|
||||
}
|
||||
|
||||
func getLocalObjects(app *argoappv1.Application, local, localRepoRoot, appLabelKey, kubeVersion string, kustomizeOptions *argoappv1.KustomizeOptions,
|
||||
configManagementPlugins []*argoappv1.ConfigManagementPlugin) []*unstructured.Unstructured {
|
||||
manifestStrings := getLocalObjectsString(app, local, localRepoRoot, appLabelKey, kubeVersion, kustomizeOptions, configManagementPlugins)
|
||||
configManagementPlugins []*argoappv1.ConfigManagementPlugin, trackingMethod string) []*unstructured.Unstructured {
|
||||
manifestStrings := getLocalObjectsString(app, local, localRepoRoot, appLabelKey, kubeVersion, kustomizeOptions, configManagementPlugins, trackingMethod)
|
||||
objs := make([]*unstructured.Unstructured, len(manifestStrings))
|
||||
for i := range manifestStrings {
|
||||
obj := unstructured.Unstructured{}
|
||||
@@ -745,7 +754,7 @@ func getLocalObjects(app *argoappv1.Application, local, localRepoRoot, appLabelK
|
||||
}
|
||||
|
||||
func getLocalObjectsString(app *argoappv1.Application, local, localRepoRoot, appLabelKey, kubeVersion string, kustomizeOptions *argoappv1.KustomizeOptions,
|
||||
configManagementPlugins []*argoappv1.ConfigManagementPlugin) []string {
|
||||
configManagementPlugins []*argoappv1.ConfigManagementPlugin, trackingMethod string) []string {
|
||||
|
||||
res, err := repository.GenerateManifests(local, localRepoRoot, app.Spec.Source.TargetRevision, &repoapiclient.ManifestRequest{
|
||||
Repo: &argoappv1.Repository{Repo: app.Spec.Source.RepoURL},
|
||||
@@ -756,6 +765,7 @@ func getLocalObjectsString(app *argoappv1.Application, local, localRepoRoot, app
|
||||
KustomizeOptions: kustomizeOptions,
|
||||
KubeVersion: kubeVersion,
|
||||
Plugins: configManagementPlugins,
|
||||
TrackingMethod: trackingMethod,
|
||||
}, true)
|
||||
errors.CheckError(err)
|
||||
|
||||
@@ -841,7 +851,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
defer argoio.Close(conn)
|
||||
cluster, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Name: app.Spec.Destination.Name, Server: app.Spec.Destination.Server})
|
||||
errors.CheckError(err)
|
||||
localObjs := groupObjsByKey(getLocalObjects(app, local, localRepoRoot, argoSettings.AppLabelKey, cluster.ServerVersion, argoSettings.KustomizeOptions, argoSettings.ConfigManagementPlugins), liveObjs, app.Spec.Destination.Namespace)
|
||||
localObjs := groupObjsByKey(getLocalObjects(app, local, localRepoRoot, argoSettings.AppLabelKey, cluster.ServerVersion, argoSettings.KustomizeOptions, argoSettings.ConfigManagementPlugins, argoSettings.TrackingMethod), liveObjs, app.Spec.Destination.Namespace)
|
||||
items = groupObjsForDiff(resources, localObjs, items, argoSettings, appName)
|
||||
} else if revision != "" {
|
||||
var unstructureds []*unstructured.Unstructured
|
||||
@@ -923,6 +933,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
}
|
||||
|
||||
func groupObjsForDiff(resources *application.ManagedResourcesResponse, objs map[kube.ResourceKey]*unstructured.Unstructured, items []objKeyLiveTarget, argoSettings *settings.Settings, appName string) []objKeyLiveTarget {
|
||||
resourceTracking := argo.NewResourceTracking()
|
||||
for _, res := range resources.Items {
|
||||
var live = &unstructured.Unstructured{}
|
||||
err := json.Unmarshal([]byte(res.NormalizedLiveState), &live)
|
||||
@@ -936,7 +947,7 @@ func groupObjsForDiff(resources *application.ManagedResourcesResponse, objs map[
|
||||
}
|
||||
if local, ok := objs[key]; ok || live != nil {
|
||||
if local != nil && !kube.IsCRD(local) {
|
||||
err = argokube.SetAppInstanceLabel(local, argoSettings.AppLabelKey, appName)
|
||||
err = resourceTracking.SetAppInstance(local, argoSettings.AppLabelKey, appName, "", argoappv1.TrackingMethod(argoSettings.GetTrackingMethod()))
|
||||
errors.CheckError(err)
|
||||
}
|
||||
|
||||
@@ -1269,6 +1280,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
timeout uint
|
||||
strategy string
|
||||
force bool
|
||||
replace bool
|
||||
async bool
|
||||
retryLimit int64
|
||||
retryBackoffDuration time.Duration
|
||||
@@ -1376,18 +1388,35 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
cluster, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Name: app.Spec.Destination.Name, Server: app.Spec.Destination.Server})
|
||||
errors.CheckError(err)
|
||||
argoio.Close(conn)
|
||||
localObjsStrings = getLocalObjectsString(app, local, localRepoRoot, argoSettings.AppLabelKey, cluster.ServerVersion, argoSettings.KustomizeOptions, argoSettings.ConfigManagementPlugins)
|
||||
localObjsStrings = getLocalObjectsString(app, local, localRepoRoot, argoSettings.AppLabelKey, cluster.ServerVersion, argoSettings.KustomizeOptions, argoSettings.ConfigManagementPlugins, argoSettings.TrackingMethod)
|
||||
}
|
||||
|
||||
syncOptionsFactory := func() *applicationpkg.SyncOptions {
|
||||
syncOptions := applicationpkg.SyncOptions{}
|
||||
items := make([]string, 0)
|
||||
if replace {
|
||||
items = append(items, common.SyncOptionReplace)
|
||||
}
|
||||
|
||||
if len(items) == 0 {
|
||||
// avoid sending an empty array when it is not needed
|
||||
return nil
|
||||
}
|
||||
syncOptions.Items = items
|
||||
return &syncOptions
|
||||
}
|
||||
|
||||
syncReq := applicationpkg.ApplicationSyncRequest{
|
||||
Name: &appName,
|
||||
DryRun: dryRun,
|
||||
Revision: revision,
|
||||
Resources: selectedResources,
|
||||
Prune: prune,
|
||||
Manifests: localObjsStrings,
|
||||
Infos: getInfos(infos),
|
||||
Name: &appName,
|
||||
DryRun: dryRun,
|
||||
Revision: revision,
|
||||
Resources: selectedResources,
|
||||
Prune: prune,
|
||||
Manifests: localObjsStrings,
|
||||
Infos: getInfos(infos),
|
||||
SyncOptions: syncOptionsFactory(),
|
||||
}
|
||||
|
||||
switch strategy {
|
||||
case "apply":
|
||||
syncReq.Strategy = &argoappv1.SyncStrategy{Apply: &argoappv1.SyncStrategyApply{}}
|
||||
@@ -1444,6 +1473,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
|
||||
command.Flags().Int64Var(&retryBackoffFactor, "retry-backoff-factor", argoappv1.DefaultSyncRetryFactor, "Factor multiplies the base duration after each failed retry")
|
||||
command.Flags().StringVar(&strategy, "strategy", "", "Sync strategy (one of: apply|hook)")
|
||||
command.Flags().BoolVar(&force, "force", false, "Use a force apply")
|
||||
command.Flags().BoolVar(&replace, "replace", false, "Use kubectl create/replace instead of apply")
|
||||
command.Flags().BoolVar(&async, "async", false, "Do not wait for application to sync before continuing")
|
||||
command.Flags().StringVar(&local, "local", "", "Path to a local directory. When this flag is present no git queries will be made")
|
||||
command.Flags().StringVar(&localRepoRoot, "local-repo-root", "/", "Path to the repository root. Used together with --local allows setting the repository root")
|
||||
@@ -1814,6 +1844,23 @@ func NewApplicationHistoryCommand(clientOpts *argocdclient.ClientOptions) *cobra
|
||||
return command
|
||||
}
|
||||
|
||||
func findRevisionHistory(application *argoappv1.Application, historyId int64) (*argoappv1.RevisionHistory, error) {
|
||||
// if no history ID was passed, fall back to the previous revision in the history
|
||||
if historyId == -1 {
|
||||
l := len(application.Status.History)
|
||||
if l < 2 {
|
||||
return nil, fmt.Errorf("Application '%s' should have at least two successful deployments", application.ObjectMeta.Name)
|
||||
}
|
||||
return &application.Status.History[l-2], nil
|
||||
}
|
||||
for _, di := range application.Status.History {
|
||||
if di.ID == historyId {
|
||||
return &di, nil
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("Application '%s' does not have deployment id '%d' in history\n", application.ObjectMeta.Name, historyId)
|
||||
}
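For orientation, a rough sketch (not part of the diff) of how the new fallback behaves when exercised from the same package; the Application value below is fabricated:

```go
app := &argoappv1.Application{
	Status: argoappv1.ApplicationStatus{
		History: argoappv1.RevisionHistories{
			{ID: 1, Revision: "abc"},
			{ID: 2, Revision: "def"},
		},
	},
}

// No ID passed on the CLI (-1): fall back to the second-to-last deployment, ID 1 here.
prev, err := findRevisionHistory(app, -1)
fmt.Println(prev.ID, err) // 1 <nil>

// Explicit ID: the matching entry is returned, or an error if it does not exist.
byID, err := findRevisionHistory(app, 2)
fmt.Println(byID.ID, err) // 2 <nil>
```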
|
||||
|
||||
// NewApplicationRollbackCommand returns a new instance of an `argocd app rollback` command
|
||||
func NewApplicationRollbackCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
|
||||
var (
|
||||
@@ -1821,36 +1868,33 @@ func NewApplicationRollbackCommand(clientOpts *argocdclient.ClientOptions) *cobr
|
||||
timeout uint
|
||||
)
|
||||
var command = &cobra.Command{
|
||||
Use: "rollback APPNAME ID",
|
||||
Short: "Rollback application to a previous deployed version by History ID",
|
||||
Use: "rollback APPNAME [ID]",
|
||||
Short: "Rollback application to a previous deployed version by History ID, omitted will Rollback to the previous version",
|
||||
Run: func(c *cobra.Command, args []string) {
|
||||
if len(args) != 2 {
|
||||
if len(args) == 0 {
|
||||
c.HelpFunc()(c, args)
|
||||
os.Exit(1)
|
||||
}
|
||||
appName := args[0]
|
||||
depID, err := strconv.Atoi(args[1])
|
||||
errors.CheckError(err)
|
||||
var err error
|
||||
depID := -1
|
||||
if len(args) > 1 {
|
||||
depID, err = strconv.Atoi(args[1])
|
||||
errors.CheckError(err)
|
||||
}
|
||||
acdClient := argocdclient.NewClientOrDie(clientOpts)
|
||||
conn, appIf := acdClient.NewApplicationClientOrDie()
|
||||
defer argoio.Close(conn)
|
||||
ctx := context.Background()
|
||||
app, err := appIf.Get(ctx, &applicationpkg.ApplicationQuery{Name: &appName})
|
||||
errors.CheckError(err)
|
||||
var depInfo *argoappv1.RevisionHistory
|
||||
for _, di := range app.Status.History {
|
||||
if di.ID == int64(depID) {
|
||||
depInfo = &di
|
||||
break
|
||||
}
|
||||
}
|
||||
if depInfo == nil {
|
||||
log.Fatalf("Application '%s' does not have deployment id '%d' in history\n", app.ObjectMeta.Name, depID)
|
||||
}
|
||||
|
||||
depInfo, err := findRevisionHistory(app, int64(depID))
|
||||
errors.CheckError(err)
|
||||
|
||||
_, err = appIf.Rollback(ctx, &applicationpkg.ApplicationRollbackRequest{
|
||||
Name: &appName,
|
||||
ID: int64(depID),
|
||||
ID: depInfo.ID,
|
||||
Prune: prune,
|
||||
})
|
||||
errors.CheckError(err)
|
||||
@@ -2188,3 +2232,59 @@ func NewApplicationPatchResourceCommand(clientOpts *argocdclient.ClientOptions)
|
||||
|
||||
return command
|
||||
}
|
||||
|
||||
func NewApplicationDeleteResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
|
||||
var resourceName string
|
||||
var namespace string
|
||||
var kind string
|
||||
var group string
|
||||
var force bool
|
||||
var orphan bool
|
||||
var all bool
|
||||
command := &cobra.Command{
|
||||
Use: "delete-resource APPNAME",
|
||||
Short: "Delete resource in an application",
|
||||
}
|
||||
|
||||
command.Flags().StringVar(&resourceName, "resource-name", "", "Name of resource")
|
||||
command.Flags().StringVar(&kind, "kind", "", "Kind")
|
||||
err := command.MarkFlagRequired("kind")
|
||||
errors.CheckError(err)
|
||||
command.Flags().StringVar(&group, "group", "", "Group")
|
||||
command.Flags().StringVar(&namespace, "namespace", "", "Namespace")
|
||||
command.Flags().BoolVar(&force, "force", false, "Indicates whether to force delete the resource")
command.Flags().BoolVar(&orphan, "orphan", false, "Indicates whether to orphan the dependents of the deleted resource")
command.Flags().BoolVar(&all, "all", false, "Indicates whether to delete multiple matching resources")
|
||||
command.Run = func(c *cobra.Command, args []string) {
|
||||
if len(args) != 1 {
|
||||
c.HelpFunc()(c, args)
|
||||
os.Exit(1)
|
||||
}
|
||||
appName := args[0]
|
||||
|
||||
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
|
||||
defer argoio.Close(conn)
|
||||
ctx := context.Background()
|
||||
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName})
|
||||
errors.CheckError(err)
|
||||
objectsToDelete := filterResources(command, resources.Items, group, kind, namespace, resourceName, all)
|
||||
for i := range objectsToDelete {
|
||||
obj := objectsToDelete[i]
|
||||
gvk := obj.GroupVersionKind()
|
||||
_, err = appIf.DeleteResource(ctx, &applicationpkg.ApplicationResourceDeleteRequest{
|
||||
Name: &appName,
|
||||
Namespace: obj.GetNamespace(),
|
||||
ResourceName: obj.GetName(),
|
||||
Version: gvk.Version,
|
||||
Group: gvk.Group,
|
||||
Kind: gvk.Kind,
|
||||
Force: &force,
|
||||
Orphan: &orphan,
|
||||
})
|
||||
errors.CheckError(err)
|
||||
log.Infof("Resource '%s' deleted", obj.GetName())
|
||||
}
|
||||
}
|
||||
|
||||
return command
|
||||
}
|
||||
|
||||
cmd/argocd/commands/app_test.go (new file)
@@ -0,0 +1,163 @@
|
||||
package commands
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
)
|
||||
|
||||
func TestFindRevisionHistoryWithoutPassedId(t *testing.T) {
|
||||
|
||||
histories := v1alpha1.RevisionHistories{}
|
||||
|
||||
histories = append(histories, v1alpha1.RevisionHistory{ID: 1})
|
||||
histories = append(histories, v1alpha1.RevisionHistory{ID: 2})
|
||||
histories = append(histories, v1alpha1.RevisionHistory{ID: 3})
|
||||
|
||||
status := v1alpha1.ApplicationStatus{
|
||||
Resources: nil,
|
||||
Sync: v1alpha1.SyncStatus{},
|
||||
Health: v1alpha1.HealthStatus{},
|
||||
History: histories,
|
||||
Conditions: nil,
|
||||
ReconciledAt: nil,
|
||||
OperationState: nil,
|
||||
ObservedAt: nil,
|
||||
SourceType: "",
|
||||
Summary: v1alpha1.ApplicationSummary{},
|
||||
}
|
||||
|
||||
application := v1alpha1.Application{
|
||||
Status: status,
|
||||
}
|
||||
|
||||
history, err := findRevisionHistory(&application, -1)
|
||||
|
||||
if err != nil {
|
||||
t.Fatal("Find revision history should fail without errors")
|
||||
}
|
||||
|
||||
if history == nil {
|
||||
t.Fatal("History should be found")
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func TestFindRevisionHistoryWithoutPassedIdAndEmptyHistoryList(t *testing.T) {
|
||||
|
||||
histories := v1alpha1.RevisionHistories{}
|
||||
|
||||
status := v1alpha1.ApplicationStatus{
|
||||
Resources: nil,
|
||||
Sync: v1alpha1.SyncStatus{},
|
||||
Health: v1alpha1.HealthStatus{},
|
||||
History: histories,
|
||||
Conditions: nil,
|
||||
ReconciledAt: nil,
|
||||
OperationState: nil,
|
||||
ObservedAt: nil,
|
||||
SourceType: "",
|
||||
Summary: v1alpha1.ApplicationSummary{},
|
||||
}
|
||||
|
||||
application := v1alpha1.Application{
|
||||
Status: status,
|
||||
}
|
||||
|
||||
history, err := findRevisionHistory(&application, -1)
|
||||
|
||||
if err == nil {
|
||||
t.Fatal("Find revision history should fail with errors")
|
||||
}
|
||||
|
||||
if history != nil {
|
||||
t.Fatal("History should be empty")
|
||||
}
|
||||
|
||||
if err.Error() != "Application '' should have at least two successful deployments" {
|
||||
t.Fatal("Find revision history should fail with correct error message")
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func TestFindRevisionHistoryWithPassedId(t *testing.T) {
|
||||
|
||||
histories := v1alpha1.RevisionHistories{}
|
||||
|
||||
histories = append(histories, v1alpha1.RevisionHistory{ID: 1})
|
||||
histories = append(histories, v1alpha1.RevisionHistory{ID: 2})
|
||||
histories = append(histories, v1alpha1.RevisionHistory{ID: 3, Revision: "123"})
|
||||
|
||||
status := v1alpha1.ApplicationStatus{
|
||||
Resources: nil,
|
||||
Sync: v1alpha1.SyncStatus{},
|
||||
Health: v1alpha1.HealthStatus{},
|
||||
History: histories,
|
||||
Conditions: nil,
|
||||
ReconciledAt: nil,
|
||||
OperationState: nil,
|
||||
ObservedAt: nil,
|
||||
SourceType: "",
|
||||
Summary: v1alpha1.ApplicationSummary{},
|
||||
}
|
||||
|
||||
application := v1alpha1.Application{
|
||||
Status: status,
|
||||
}
|
||||
|
||||
history, err := findRevisionHistory(&application, 3)
|
||||
|
||||
if err != nil {
|
||||
t.Fatal("Find revision history should fail without errors")
|
||||
}
|
||||
|
||||
if history == nil {
|
||||
t.Fatal("History should be found")
|
||||
}
|
||||
|
||||
if history.Revision != "123" {
|
||||
t.Fatal("Failed to find correct history with correct revision")
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func TestFindRevisionHistoryWithPassedIdThatNotExist(t *testing.T) {
|
||||
|
||||
histories := v1alpha1.RevisionHistories{}
|
||||
|
||||
histories = append(histories, v1alpha1.RevisionHistory{ID: 1})
|
||||
histories = append(histories, v1alpha1.RevisionHistory{ID: 2})
|
||||
histories = append(histories, v1alpha1.RevisionHistory{ID: 3, Revision: "123"})
|
||||
|
||||
status := v1alpha1.ApplicationStatus{
|
||||
Resources: nil,
|
||||
Sync: v1alpha1.SyncStatus{},
|
||||
Health: v1alpha1.HealthStatus{},
|
||||
History: histories,
|
||||
Conditions: nil,
|
||||
ReconciledAt: nil,
|
||||
OperationState: nil,
|
||||
ObservedAt: nil,
|
||||
SourceType: "",
|
||||
Summary: v1alpha1.ApplicationSummary{},
|
||||
}
|
||||
|
||||
application := v1alpha1.Application{
|
||||
Status: status,
|
||||
}
|
||||
|
||||
history, err := findRevisionHistory(&application, 4)
|
||||
|
||||
if err == nil {
|
||||
t.Fatal("Find revision history should fail with errors")
|
||||
}
|
||||
|
||||
if history != nil {
|
||||
t.Fatal("History should be not found")
|
||||
}
|
||||
|
||||
if err.Error() != "Application '' does not have deployment id '4' in history\n" {
|
||||
t.Fatal("Find revision history should fail with correct error message")
|
||||
}
|
||||
|
||||
}
|
||||
@@ -4,13 +4,18 @@ import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"regexp"
|
||||
"strings"
|
||||
"text/tabwriter"
|
||||
|
||||
"github.com/mattn/go-isatty"
|
||||
"k8s.io/client-go/kubernetes"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/util/cli"
|
||||
"github.com/argoproj/argo-cd/v2/util/text/label"
|
||||
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/spf13/cobra"
|
||||
"k8s.io/client-go/kubernetes"
|
||||
"k8s.io/client-go/tools/clientcmd"
|
||||
|
||||
cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
|
||||
@@ -18,7 +23,6 @@ import (
|
||||
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
|
||||
clusterpkg "github.com/argoproj/argo-cd/v2/pkg/apiclient/cluster"
|
||||
argoappv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
"github.com/argoproj/argo-cd/v2/util/cli"
|
||||
"github.com/argoproj/argo-cd/v2/util/clusterauth"
|
||||
"github.com/argoproj/argo-cd/v2/util/errors"
|
||||
"github.com/argoproj/argo-cd/v2/util/io"
|
||||
@@ -60,6 +64,8 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
|
||||
var (
|
||||
clusterOpts cmdutil.ClusterOptions
|
||||
skipConfirmation bool
|
||||
labels []string
|
||||
annotations []string
|
||||
)
|
||||
var command = &cobra.Command{
|
||||
Use: "add CONTEXT",
|
||||
@@ -122,18 +128,27 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
|
||||
}
|
||||
errors.CheckError(err)
|
||||
}
|
||||
|
||||
labelsMap, err := label.Parse(labels)
|
||||
errors.CheckError(err)
|
||||
annotationsMap, err := label.Parse(annotations)
|
||||
errors.CheckError(err)
|
||||
|
||||
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
|
||||
defer io.Close(conn)
|
||||
if clusterOpts.Name != "" {
|
||||
contextName = clusterOpts.Name
|
||||
}
|
||||
clst := cmdutil.NewCluster(contextName, clusterOpts.Namespaces, clusterOpts.ClusterResources, conf, managerBearerToken, awsAuthConf, execProviderConf)
|
||||
clst := cmdutil.NewCluster(contextName, clusterOpts.Namespaces, clusterOpts.ClusterResources, conf, managerBearerToken, awsAuthConf, execProviderConf, labelsMap, annotationsMap)
|
||||
if clusterOpts.InCluster {
|
||||
clst.Server = argoappv1.KubernetesInternalAPIServerAddr
|
||||
}
|
||||
if clusterOpts.Shard >= 0 {
|
||||
clst.Shard = &clusterOpts.Shard
|
||||
}
|
||||
if clusterOpts.Project != "" {
|
||||
clst.Project = clusterOpts.Project
|
||||
}
|
||||
clstCreateReq := clusterpkg.ClusterCreateRequest{
|
||||
Cluster: clst,
|
||||
Upsert: clusterOpts.Upsert,
|
||||
@@ -148,6 +163,8 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
|
||||
command.Flags().StringVar(&clusterOpts.ServiceAccount, "service-account", "", fmt.Sprintf("System namespace service account to use for kubernetes resource management. If not set then default \"%s\" SA will be created", clusterauth.ArgoCDManagerServiceAccount))
|
||||
command.Flags().StringVar(&clusterOpts.SystemNamespace, "system-namespace", common.DefaultSystemNamespace, "Use different system namespace")
|
||||
command.Flags().BoolVarP(&skipConfirmation, "yes", "y", false, "Skip explicit confirmation")
|
||||
command.Flags().StringArrayVar(&labels, "label", nil, "Set metadata labels (e.g. --label key=value)")
|
||||
command.Flags().StringArrayVar(&annotations, "annotation", nil, "Set metadata annotations (e.g. --annotation key=value)")
|
||||
cmdutil.AddClusterFlags(command, &clusterOpts)
|
||||
return command
|
||||
}
|
||||
@@ -158,9 +175,10 @@ func NewClusterGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
|
||||
output string
|
||||
)
|
||||
var command = &cobra.Command{
|
||||
Use: "get SERVER",
|
||||
Short: "Get cluster information",
|
||||
Example: `argocd cluster get https://12.34.567.89`,
|
||||
Use: "get SERVER/NAME",
|
||||
Short: "Get cluster information",
|
||||
Example: `argocd cluster get https://12.34.567.89
|
||||
argocd cluster get in-cluster`,
|
||||
Run: func(c *cobra.Command, args []string) {
|
||||
if len(args) == 0 {
|
||||
c.HelpFunc()(c, args)
|
||||
@@ -169,8 +187,8 @@ func NewClusterGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
|
||||
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
|
||||
defer io.Close(conn)
|
||||
clusters := make([]argoappv1.Cluster, 0)
|
||||
for _, clusterName := range args {
|
||||
clst, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Server: clusterName})
|
||||
for _, clusterSelector := range args {
|
||||
clst, err := clusterIf.Get(context.Background(), getQueryBySelector(clusterSelector))
|
||||
errors.CheckError(err)
|
||||
clusters = append(clusters, *clst)
|
||||
}
|
||||
@@ -257,17 +275,29 @@ func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
|
||||
// Print table of cluster information
|
||||
func printClusterTable(clusters []argoappv1.Cluster) {
|
||||
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
|
||||
_, _ = fmt.Fprintf(w, "SERVER\tNAME\tVERSION\tSTATUS\tMESSAGE\n")
|
||||
_, _ = fmt.Fprintf(w, "SERVER\tNAME\tVERSION\tSTATUS\tMESSAGE\tPROJECT\n")
|
||||
for _, c := range clusters {
|
||||
server := c.Server
|
||||
if len(c.Namespaces) > 0 {
|
||||
server = fmt.Sprintf("%s (%d namespaces)", c.Server, len(c.Namespaces))
|
||||
}
|
||||
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\n", server, c.Name, c.ServerVersion, c.ConnectionState.Status, c.ConnectionState.Message)
|
||||
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\n", server, c.Name, c.ServerVersion, c.ConnectionState.Status, c.ConnectionState.Message, c.Project)
|
||||
}
|
||||
_ = w.Flush()
|
||||
}
|
||||
|
||||
// Returns cluster query for getting cluster depending on the cluster selector
|
||||
func getQueryBySelector(clusterSelector string) *clusterpkg.ClusterQuery {
|
||||
var query clusterpkg.ClusterQuery
|
||||
isServer, err := regexp.MatchString(`^https?://`, clusterSelector)
|
||||
if isServer || err != nil {
|
||||
query.Server = clusterSelector
|
||||
} else {
|
||||
query.Name = clusterSelector
|
||||
}
|
||||
return &query
|
||||
}
|
||||
|
||||
// Print list of cluster servers
|
||||
func printClusterServers(clusters []argoappv1.Cluster) {
|
||||
for _, c := range clusters {
|
||||
|
||||
@@ -6,8 +6,24 @@ import (
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
func Test_getQueryBySelector(t *testing.T) {
|
||||
query := getQueryBySelector("my-cluster")
|
||||
assert.Equal(t, query.Name, "my-cluster")
|
||||
assert.Equal(t, query.Server, "")
|
||||
|
||||
query = getQueryBySelector("http://my-server")
|
||||
assert.Equal(t, query.Name, "")
|
||||
assert.Equal(t, query.Server, "http://my-server")
|
||||
|
||||
query = getQueryBySelector("https://my-server")
|
||||
assert.Equal(t, query.Name, "")
|
||||
assert.Equal(t, query.Server, "https://my-server")
|
||||
}
|
||||
|
||||
func Test_printClusterTable(t *testing.T) {
|
||||
printClusterTable([]v1alpha1.Cluster{
|
||||
{
|
||||
|
||||
@@ -18,6 +18,7 @@ import (
|
||||
|
||||
type forwardCacheClient struct {
|
||||
namespace string
|
||||
context string
|
||||
init sync.Once
|
||||
client cache.CacheClient
|
||||
err error
|
||||
@@ -25,7 +26,9 @@ type forwardCacheClient struct {
|
||||
|
||||
func (c *forwardCacheClient) doLazy(action func(client cache.CacheClient) error) error {
|
||||
c.init.Do(func() {
|
||||
overrides := clientcmd.ConfigOverrides{}
|
||||
overrides := clientcmd.ConfigOverrides{
|
||||
CurrentContext: c.context,
|
||||
}
|
||||
redisPort, err := kubeutil.PortForward(6379, c.namespace, &overrides,
|
||||
"app.kubernetes.io/name=argocd-redis-ha-haproxy", "app.kubernetes.io/name=argocd-redis")
|
||||
if err != nil {
|
||||
@@ -74,6 +77,7 @@ func (c *forwardCacheClient) NotifyUpdated(key string) error {
|
||||
|
||||
type forwardRepoClientset struct {
|
||||
namespace string
|
||||
context string
|
||||
init sync.Once
|
||||
repoClientset repoapiclient.Clientset
|
||||
err error
|
||||
@@ -81,7 +85,9 @@ type forwardRepoClientset struct {
|
||||
|
||||
func (c *forwardRepoClientset) NewRepoServerClient() (io.Closer, repoapiclient.RepoServerServiceClient, error) {
|
||||
c.init.Do(func() {
|
||||
overrides := clientcmd.ConfigOverrides{}
|
||||
overrides := clientcmd.ConfigOverrides{
|
||||
CurrentContext: c.context,
|
||||
}
|
||||
repoServerPort, err := kubeutil.PortForward(8081, c.namespace, &overrides, "app.kubernetes.io/name=argocd-repo-server")
|
||||
if err != nil {
|
||||
c.err = err
|
||||
|
||||
@@ -12,10 +12,10 @@ import (
|
||||
"github.com/golang/protobuf/ptypes/empty"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/spf13/cobra"
|
||||
"github.com/spf13/pflag"
|
||||
"k8s.io/apimachinery/pkg/util/runtime"
|
||||
"k8s.io/client-go/kubernetes"
|
||||
"k8s.io/client-go/tools/cache"
|
||||
"k8s.io/client-go/tools/clientcmd"
|
||||
|
||||
argoapi "github.com/argoproj/argo-cd/v2/pkg/apiclient"
|
||||
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
@@ -43,21 +43,20 @@ func testAPI(clientOpts *argoapi.ClientOptions) error {
|
||||
return err
|
||||
}
|
||||
|
||||
func addKubectlFlagsToCmd(cmd *cobra.Command) clientcmd.ClientConfig {
|
||||
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
|
||||
loadingRules.DefaultClientConfig = &clientcmd.DefaultClientConfig
|
||||
overrides := clientcmd.ConfigOverrides{}
|
||||
kflags := clientcmd.RecommendedConfigOverrideFlags("")
|
||||
cmd.Flags().StringVar(&loadingRules.ExplicitPath, "kubeconfig", "", "Path to a kube config. Only required if out-of-cluster")
|
||||
clientcmd.BindOverrideFlags(&overrides, cmd.PersistentFlags(), kflags)
|
||||
return clientcmd.NewInteractiveDeferredLoadingClientConfig(loadingRules, &overrides, os.Stdin)
|
||||
}
|
||||
|
||||
// InitCommand allows executing command in a headless mode: on the fly starts Argo CD API server and
|
||||
// changes provided client options to use started API server port
|
||||
func InitCommand(cmd *cobra.Command, clientOpts *argoapi.ClientOptions, port *int) *cobra.Command {
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
clientConfig := addKubectlFlagsToCmd(cmd)
|
||||
flags := pflag.NewFlagSet("tmp", pflag.ContinueOnError)
|
||||
clientConfig := cli.AddKubectlFlagsToSet(flags)
|
||||
// copy k8s persistent flags into argocd command flags
|
||||
flags.VisitAll(func(flag *pflag.Flag) {
|
||||
// skip Kubernetes server flags since argocd has its own server flag
|
||||
if flag.Name == "server" {
|
||||
return
|
||||
}
|
||||
cmd.Flags().AddFlag(flag)
|
||||
})
|
||||
cmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {
|
||||
startInProcessAPI := clientOpts.Core
|
||||
if !startInProcessAPI {
|
||||
@@ -109,12 +108,17 @@ func InitCommand(cmd *cobra.Command, clientOpts *argoapi.ClientOptions, port *in
|
||||
return err
|
||||
}
|
||||
|
||||
var context string
|
||||
if cmd.Flag("context").Changed {
|
||||
context = cmd.Flag("context").Value.String()
|
||||
}
|
||||
|
||||
mr, err := miniredis.Run()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
appstateCache := appstatecache.NewCache(cacheutil.NewCache(&forwardCacheClient{namespace: namespace}), time.Hour)
|
||||
appstateCache := appstatecache.NewCache(cacheutil.NewCache(&forwardCacheClient{namespace: namespace, context: context}), time.Hour)
|
||||
srv := server.NewServer(ctx, server.ArgoCDServerOpts{
|
||||
EnableGZip: false,
|
||||
Namespace: namespace,
|
||||
@@ -126,7 +130,7 @@ func InitCommand(cmd *cobra.Command, clientOpts *argoapi.ClientOptions, port *in
|
||||
KubeClientset: kubeClientset,
|
||||
Insecure: true,
|
||||
ListenHost: "localhost",
|
||||
RepoClientset: &forwardRepoClientset{namespace: namespace},
|
||||
RepoClientset: &forwardRepoClientset{namespace: namespace, context: context},
|
||||
})
|
||||
|
||||
go srv.Run(ctx, *port, 0)
|
||||
|
||||
@@ -90,6 +90,8 @@ argocd login cd.argoproj.io --core`,
|
||||
ServerAddr: server,
|
||||
Insecure: globalClientOpts.Insecure,
|
||||
PlainText: globalClientOpts.PlainText,
|
||||
ClientCertFile: globalClientOpts.ClientCertFile,
|
||||
ClientCertKeyFile: globalClientOpts.ClientCertKeyFile,
|
||||
GRPCWeb: globalClientOpts.GRPCWeb,
|
||||
GRPCWebRootPath: globalClientOpts.GRPCWebRootPath,
|
||||
PortForward: globalClientOpts.PortForward,
|
||||
|
||||
@@ -219,8 +219,17 @@ func NewProjectRemoveSignatureKeyCommand(clientOpts *argocdclient.ClientOptions)
|
||||
|
||||
// NewProjectAddDestinationCommand returns a new instance of an `argocd proj add-destination` command
|
||||
func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
|
||||
var nameInsteadServer bool
|
||||
|
||||
buildApplicationDestination := func(destination string, namespace string, nameInsteadServer bool) v1alpha1.ApplicationDestination {
|
||||
if nameInsteadServer {
|
||||
return v1alpha1.ApplicationDestination{Name: destination, Namespace: namespace}
|
||||
}
|
||||
return v1alpha1.ApplicationDestination{Server: destination, Namespace: namespace}
|
||||
}
|
||||
|
||||
var command = &cobra.Command{
|
||||
Use: "add-destination PROJECT SERVER NAMESPACE",
|
||||
Use: "add-destination PROJECT SERVER/NAME NAMESPACE",
|
||||
Short: "Add project destination",
|
||||
Run: func(c *cobra.Command, args []string) {
|
||||
if len(args) != 3 {
|
||||
@@ -228,8 +237,8 @@ func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *co
|
||||
os.Exit(1)
|
||||
}
|
||||
projName := args[0]
|
||||
server := args[1]
|
||||
namespace := args[2]
|
||||
destination := buildApplicationDestination(args[1], namespace, nameInsteadServer)
|
||||
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
|
||||
defer argoio.Close(conn)
|
||||
|
||||
@@ -237,15 +246,18 @@ func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *co
|
||||
errors.CheckError(err)
|
||||
|
||||
for _, dest := range proj.Spec.Destinations {
|
||||
if dest.Namespace == namespace && dest.Server == server {
|
||||
dstServerExist := destination.Server != "" && dest.Server == destination.Server
|
||||
dstNameExist := destination.Name != "" && dest.Name == destination.Name
|
||||
if dest.Namespace == namespace && (dstServerExist || dstNameExist) {
|
||||
log.Fatal("Specified destination is already defined in project")
|
||||
}
|
||||
}
|
||||
proj.Spec.Destinations = append(proj.Spec.Destinations, v1alpha1.ApplicationDestination{Server: server, Namespace: namespace})
|
||||
proj.Spec.Destinations = append(proj.Spec.Destinations, destination)
|
||||
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
|
||||
errors.CheckError(err)
|
||||
},
|
||||
}
|
||||
command.Flags().BoolVar(&nameInsteadServer, "name", false, "Use the destination cluster's name instead of its server URL")
|
||||
return command
|
||||
}
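The --name flag only changes which field of the destination is populated; both shapes below use the field names visible in the hunk above (the values are illustrative):

```go
// Default behaviour: destination addressed by API server URL.
byServer := v1alpha1.ApplicationDestination{Server: "https://kubernetes.default.svc", Namespace: "default"}

// With --name: destination addressed by the cluster's registered name instead.
byName := v1alpha1.ApplicationDestination{Name: "in-cluster", Namespace: "default"}

// Either form is appended to proj.Spec.Destinations by the command.
proj.Spec.Destinations = append(proj.Spec.Destinations, byServer, byName)
```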
|
||||
|
||||
|
||||
@@ -116,6 +116,7 @@ func NewProjectWindowsAddWindowCommand(clientOpts *argocdclient.ClientOptions) *
|
||||
namespaces []string
|
||||
clusters []string
|
||||
manualSync bool
|
||||
timeZone string
|
||||
)
|
||||
var command = &cobra.Command{
|
||||
Use: "add PROJECT",
|
||||
@@ -132,7 +133,7 @@ func NewProjectWindowsAddWindowCommand(clientOpts *argocdclient.ClientOptions) *
|
||||
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
|
||||
errors.CheckError(err)
|
||||
|
||||
err = proj.Spec.AddWindow(kind, schedule, duration, applications, namespaces, clusters, manualSync)
|
||||
err = proj.Spec.AddWindow(kind, schedule, duration, applications, namespaces, clusters, manualSync, timeZone)
|
||||
errors.CheckError(err)
|
||||
|
||||
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
|
||||
@@ -146,6 +147,7 @@ func NewProjectWindowsAddWindowCommand(clientOpts *argocdclient.ClientOptions) *
|
||||
command.Flags().StringSliceVar(&namespaces, "namespaces", []string{}, "Namespaces that the schedule will be applied to. Comma separated, wildcards supported (e.g. --namespaces default,\\*-prod)")
|
||||
command.Flags().StringSliceVar(&clusters, "clusters", []string{}, "Clusters that the schedule will be applied to. Comma separated, wildcards supported (e.g. --clusters prod,staging)")
|
||||
command.Flags().BoolVar(&manualSync, "manual-sync", false, "Allow manual syncs for both deny and allow windows")
|
||||
command.Flags().StringVar(&timeZone, "time-zone", "UTC", "Time zone of the sync window")
|
||||
|
||||
return command
|
||||
}
|
||||
@@ -189,6 +191,7 @@ func NewProjectWindowsUpdateCommand(clientOpts *argocdclient.ClientOptions) *cob
|
||||
applications []string
|
||||
namespaces []string
|
||||
clusters []string
|
||||
timeZone string
|
||||
)
|
||||
var command = &cobra.Command{
|
||||
Use: "update PROJECT ID",
|
||||
@@ -212,7 +215,7 @@ func NewProjectWindowsUpdateCommand(clientOpts *argocdclient.ClientOptions) *cob
|
||||
|
||||
for i, window := range proj.Spec.SyncWindows {
|
||||
if id == i {
|
||||
err := window.Update(schedule, duration, applications, namespaces, clusters)
|
||||
err := window.Update(schedule, duration, applications, namespaces, clusters, timeZone)
|
||||
if err != nil {
|
||||
errors.CheckError(err)
|
||||
}
|
||||
@@ -228,6 +231,7 @@ func NewProjectWindowsUpdateCommand(clientOpts *argocdclient.ClientOptions) *cob
|
||||
command.Flags().StringSliceVar(&applications, "applications", []string{}, "Applications that the schedule will be applied to. Comma separated, wildcards supported (e.g. --applications prod-\\*,website)")
|
||||
command.Flags().StringSliceVar(&namespaces, "namespaces", []string{}, "Namespaces that the schedule will be applied to. Comma separated, wildcards supported (e.g. --namespaces default,\\*-prod)")
|
||||
command.Flags().StringSliceVar(&clusters, "clusters", []string{}, "Clusters that the schedule will be applied to. Comma separated, wildcards supported (e.g. --clusters prod,staging)")
|
||||
command.Flags().StringVar(&timeZone, "time-zone", "UTC", "Time zone of the sync window. (e.g. --time-zone \"America/New_York\")")
|
||||
return command
|
||||
}
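A hedged sketch of how the new time zone flows into the project spec, assuming the extended AddWindow signature shown in the hunk above (the window values are made up):

```go
// proj is an *v1alpha1.AppProject fetched earlier; the final argument is the new time zone.
err := proj.Spec.AddWindow("allow", "0 22 * * *", "1h",
	[]string{"*"}, []string{"default"}, []string{"in-cluster"}, false, "America/New_York")
errors.CheckError(err)
```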
|
||||
|
||||
|
||||
@@ -43,13 +43,15 @@ func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comm
|
||||
var tokenString string
|
||||
var refreshToken string
|
||||
clientOpts := argocdclient.ClientOptions{
|
||||
ConfigPath: "",
|
||||
ServerAddr: configCtx.Server.Server,
|
||||
Insecure: configCtx.Server.Insecure,
|
||||
GRPCWeb: globalClientOpts.GRPCWeb,
|
||||
GRPCWebRootPath: globalClientOpts.GRPCWebRootPath,
|
||||
PlainText: configCtx.Server.PlainText,
|
||||
Headers: globalClientOpts.Headers,
|
||||
ConfigPath: "",
|
||||
ServerAddr: configCtx.Server.Server,
|
||||
Insecure: configCtx.Server.Insecure,
|
||||
ClientCertFile: globalClientOpts.ClientCertFile,
|
||||
ClientCertKeyFile: globalClientOpts.ClientCertKeyFile,
|
||||
GRPCWeb: globalClientOpts.GRPCWeb,
|
||||
GRPCWebRootPath: globalClientOpts.GRPCWebRootPath,
|
||||
PlainText: configCtx.Server.PlainText,
|
||||
Headers: globalClientOpts.Headers,
|
||||
}
|
||||
acdClient := argocdclient.NewClientOrDie(&clientOpts)
|
||||
claims, err := configCtx.User.Claims()
|
||||
|
||||
@@ -182,6 +182,7 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
|
||||
GithubAppInstallationID: repoOpts.Repo.GithubAppInstallationId,
|
||||
GithubAppEnterpriseBaseUrl: repoOpts.Repo.GitHubAppEnterpriseBaseURL,
|
||||
Proxy: repoOpts.Proxy,
|
||||
Project: repoOpts.Repo.Project,
|
||||
}
|
||||
_, err := repoIf.ValidateAccess(context.Background(), &repoAccessReq)
|
||||
errors.CheckError(err)
|
||||
@@ -226,7 +227,7 @@ func NewRepoRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
|
||||
// Print table of repo info
|
||||
func printRepoTable(repos appsv1.Repositories) {
|
||||
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
|
||||
_, _ = fmt.Fprintf(w, "TYPE\tNAME\tREPO\tINSECURE\tOCI\tLFS\tCREDS\tSTATUS\tMESSAGE\n")
|
||||
_, _ = fmt.Fprintf(w, "TYPE\tNAME\tREPO\tINSECURE\tOCI\tLFS\tCREDS\tSTATUS\tMESSAGE\tPROJECT\n")
|
||||
for _, r := range repos {
|
||||
var hasCreds string
|
||||
if !r.HasCredentials() {
|
||||
@@ -238,7 +239,7 @@ func printRepoTable(repos appsv1.Repositories) {
|
||||
hasCreds = "true"
|
||||
}
|
||||
}
|
||||
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%v\t%v\t%v\t%s\t%s\t%s\n", r.Type, r.Name, r.Repo, r.IsInsecure(), r.EnableOCI, r.EnableLFS, hasCreds, r.ConnectionState.Status, r.ConnectionState.Message)
|
||||
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%v\t%v\t%v\t%s\t%s\t%s\t%s\n", r.Type, r.Name, r.Repo, r.IsInsecure(), r.EnableOCI, r.EnableLFS, hasCreds, r.ConnectionState.Status, r.ConnectionState.Message, r.Project)
|
||||
}
|
||||
_ = w.Flush()
|
||||
}
|
||||
|
||||
@@ -60,6 +60,9 @@ func NewRepoCredsAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comma
|
||||
|
||||
# Add credentials with GitHub App authentication to use for all repositories under https://ghe.example.com/repos
|
||||
argocd repocreds add https://ghe.example.com/repos/ --github-app-id 1 --github-app-installation-id 2 --github-app-private-key-path test.private-key.pem --github-app-enterprise-base-url https://ghe.example.com/api/v3
|
||||
|
||||
# Add credentials for a Helm OCI registry so that these OCI registry URLs do not need to be added as repos individually.
|
||||
argocd repocreds add localhost:5000/myrepo --enable-oci --type helm
|
||||
`
|
||||
|
||||
var command = &cobra.Command{
|
||||
|
||||
@@ -8,6 +8,7 @@ import (
|
||||
"github.com/spf13/cobra"
|
||||
|
||||
appcontroller "github.com/argoproj/argo-cd/v2/cmd/argocd-application-controller/commands"
|
||||
cmpserver "github.com/argoproj/argo-cd/v2/cmd/argocd-cmp-server/commands"
|
||||
dex "github.com/argoproj/argo-cd/v2/cmd/argocd-dex/commands"
|
||||
reposerver "github.com/argoproj/argo-cd/v2/cmd/argocd-repo-server/commands"
|
||||
apiserver "github.com/argoproj/argo-cd/v2/cmd/argocd-server/commands"
|
||||
@@ -34,6 +35,8 @@ func main() {
|
||||
command = appcontroller.NewCommand()
|
||||
case "argocd-repo-server":
|
||||
command = reposerver.NewCommand()
|
||||
case "argocd-cmp-server":
|
||||
command = cmpserver.NewCommand()
|
||||
case "argocd-dex":
|
||||
command = dex.NewCommand()
|
||||
default:
|
||||
|
||||
cmd/util/app.go
@@ -9,6 +9,8 @@ import (
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/argoproj/gitops-engine/pkg/utils/kube"
|
||||
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/spf13/cobra"
|
||||
"github.com/spf13/pflag"
|
||||
@@ -40,6 +42,7 @@ type AppOptions struct {
|
||||
helmSetStrings []string
|
||||
helmSetFiles []string
|
||||
helmVersion string
|
||||
helmPassCredentials bool
|
||||
project string
|
||||
syncPolicy string
|
||||
syncOptions []string
|
||||
@@ -86,6 +89,7 @@ func AddAppFlags(command *cobra.Command, opts *AppOptions) {
|
||||
command.Flags().StringVar(&opts.values, "values-literal-file", "", "Filename or URL to import as a literal Helm values block")
|
||||
command.Flags().StringVar(&opts.releaseName, "release-name", "", "Helm release-name")
|
||||
command.Flags().StringVar(&opts.helmVersion, "helm-version", "", "Helm version")
|
||||
command.Flags().BoolVar(&opts.helmPassCredentials, "helm-pass-credentials", false, "Pass credentials to all domains")
|
||||
command.Flags().StringArrayVar(&opts.helmSets, "helm-set", []string{}, "Helm set values on the command line (can be repeated to set several values: --helm-set key1=val1 --helm-set key2=val2)")
|
||||
command.Flags().StringArrayVar(&opts.helmSetStrings, "helm-set-string", []string{}, "Helm set STRING values on the command line (can be repeated to set several values: --helm-set-string key1=val1 --helm-set-string key2=val2)")
|
||||
command.Flags().StringArrayVar(&opts.helmSetFiles, "helm-set-file", []string{}, "Helm set values from respective files specified via the command line (can be repeated to set several values: --helm-set-file key1=path1 --helm-set-file key2=path2)")
|
||||
@@ -122,6 +126,9 @@ func AddAppFlags(command *cobra.Command, opts *AppOptions) {
|
||||
|
||||
func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, appOpts *AppOptions) int {
|
||||
visited := 0
|
||||
if flags == nil {
|
||||
return visited
|
||||
}
|
||||
flags.Visit(func(f *pflag.Flag) {
|
||||
visited++
|
||||
switch f.Name {
|
||||
@@ -156,6 +163,8 @@ func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
|
||||
setHelmOpt(&spec.Source, helmOpts{releaseName: appOpts.releaseName})
|
||||
case "helm-version":
|
||||
setHelmOpt(&spec.Source, helmOpts{version: appOpts.helmVersion})
|
||||
case "helm-pass-credentials":
|
||||
setHelmOpt(&spec.Source, helmOpts{passCredentials: appOpts.helmPassCredentials})
|
||||
case "helm-set":
|
||||
setHelmOpt(&spec.Source, helmOpts{helmSets: appOpts.helmSets})
|
||||
case "helm-set-string":
|
||||
@@ -372,13 +381,14 @@ func setPluginOptEnvs(src *argoappv1.ApplicationSource, envs []string) {
|
||||
}
|
||||
|
||||
type helmOpts struct {
|
||||
valueFiles []string
|
||||
values string
|
||||
releaseName string
|
||||
version string
|
||||
helmSets []string
|
||||
helmSetStrings []string
|
||||
helmSetFiles []string
|
||||
valueFiles []string
|
||||
values string
|
||||
releaseName string
|
||||
version string
|
||||
helmSets []string
|
||||
helmSetStrings []string
|
||||
helmSetFiles []string
|
||||
passCredentials bool
|
||||
}
|
||||
|
||||
func setHelmOpt(src *argoappv1.ApplicationSource, opts helmOpts) {
|
||||
@@ -397,6 +407,9 @@ func setHelmOpt(src *argoappv1.ApplicationSource, opts helmOpts) {
|
||||
if opts.version != "" {
|
||||
src.Helm.Version = opts.version
|
||||
}
|
||||
if opts.passCredentials {
|
||||
src.Helm.PassCredentials = opts.passCredentials
|
||||
}
|
||||
for _, text := range opts.helmSets {
|
||||
p, err := argoappv1.NewHelmParameter(text, false)
|
||||
if err != nil {
|
||||
@@ -518,39 +531,99 @@ func SetParameterOverrides(app *argoappv1.Application, parameters []string) {
|
||||
}
|
||||
}
|
||||
|
||||
func readAppFromStdin(app *argoappv1.Application) error {
|
||||
func readApps(yml []byte, apps *[]*argoappv1.Application) error {
|
||||
yamls, _ := kube.SplitYAMLToString(yml)
|
||||
|
||||
var err error
|
||||
|
||||
for _, yml := range yamls {
|
||||
var app argoappv1.Application
|
||||
err = config.Unmarshal([]byte(yml), &app)
|
||||
*apps = append(*apps, &app)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func readAppsFromStdin(apps *[]*argoappv1.Application) error {
|
||||
reader := bufio.NewReader(os.Stdin)
|
||||
err := config.UnmarshalReader(reader, &app)
|
||||
data, err := ioutil.ReadAll(reader)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = readApps(data, apps)
|
||||
if err != nil {
|
||||
return fmt.Errorf("unable to read manifest from stdin: %v", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func readAppFromURI(fileURL string, app *argoappv1.Application) error {
|
||||
parsedURL, err := url.ParseRequestURI(fileURL)
|
||||
if err != nil || !(parsedURL.Scheme == "http" || parsedURL.Scheme == "https") {
|
||||
err = config.UnmarshalLocalFile(fileURL, &app)
|
||||
} else {
|
||||
err = config.UnmarshalRemoteFile(fileURL, &app)
|
||||
func readAppsFromURI(fileURL string, apps *[]*argoappv1.Application) error {
|
||||
|
||||
readFilePayload := func() ([]byte, error) {
|
||||
parsedURL, err := url.ParseRequestURI(fileURL)
|
||||
if err != nil || !(parsedURL.Scheme == "http" || parsedURL.Scheme == "https") {
|
||||
return ioutil.ReadFile(fileURL)
|
||||
}
|
||||
return config.ReadRemoteFile(fileURL)
|
||||
}
|
||||
return err
|
||||
|
||||
yml, err := readFilePayload()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return readApps(yml, apps)
|
||||
}
|
||||
|
||||
func ConstructApp(fileURL, appName string, labels, annotations, args []string, appOpts AppOptions, flags *pflag.FlagSet) (*argoappv1.Application, error) {
|
||||
var app argoappv1.Application
|
||||
if fileURL == "-" {
|
||||
// read stdin
|
||||
err := readAppFromStdin(&app)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
} else if fileURL != "" {
|
||||
// read uri
|
||||
err := readAppFromURI(fileURL, &app)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
func constructAppsFromStdin() ([]*argoappv1.Application, error) {
|
||||
apps := make([]*argoappv1.Application, 0)
|
||||
// read stdin
|
||||
err := readAppsFromStdin(&apps)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return apps, nil
|
||||
}
|
||||
|
||||
func constructAppsBaseOnName(appName string, labels, annotations, args []string, appOpts AppOptions, flags *pflag.FlagSet) ([]*argoappv1.Application, error) {
|
||||
var app *argoappv1.Application
|
||||
// read arguments
|
||||
if len(args) == 1 {
|
||||
if appName != "" && appName != args[0] {
|
||||
return nil, fmt.Errorf("--name argument '%s' does not match app name %s", appName, args[0])
|
||||
}
|
||||
appName = args[0]
|
||||
}
|
||||
app = &argoappv1.Application{
|
||||
TypeMeta: v1.TypeMeta{
|
||||
Kind: application.ApplicationKind,
|
||||
APIVersion: application.Group + "/v1alpha1",
|
||||
},
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: appName,
|
||||
},
|
||||
}
|
||||
SetAppSpecOptions(flags, &app.Spec, &appOpts)
|
||||
SetParameterOverrides(app, appOpts.Parameters)
|
||||
mergeLabels(app, labels)
|
||||
setAnnotations(app, annotations)
|
||||
return []*argoappv1.Application{
|
||||
app,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func constructAppsFromFileUrl(fileURL, appName string, labels, annotations, args []string, appOpts AppOptions, flags *pflag.FlagSet) ([]*argoappv1.Application, error) {
|
||||
apps := make([]*argoappv1.Application, 0)
|
||||
// read uri
|
||||
err := readAppsFromURI(fileURL, &apps)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
for _, app := range apps {
|
||||
if len(args) == 1 && args[0] != app.Name {
|
||||
return nil, fmt.Errorf("app name '%s' does not match app spec metadata.name '%s'", args[0], app.Name)
|
||||
}
|
||||
@@ -561,32 +634,20 @@ func ConstructApp(fileURL, appName string, labels, annotations, args []string, a
|
||||
return nil, fmt.Errorf("app.Name is empty. --name argument can be used to provide app.Name")
|
||||
}
|
||||
SetAppSpecOptions(flags, &app.Spec, &appOpts)
|
||||
SetParameterOverrides(&app, appOpts.Parameters)
|
||||
mergeLabels(&app, labels)
|
||||
setAnnotations(&app, annotations)
|
||||
} else {
|
||||
// read arguments
|
||||
if len(args) == 1 {
|
||||
if appName != "" && appName != args[0] {
|
||||
return nil, fmt.Errorf("--name argument '%s' does not match app name %s", appName, args[0])
|
||||
}
|
||||
appName = args[0]
|
||||
}
|
||||
app = argoappv1.Application{
|
||||
TypeMeta: v1.TypeMeta{
|
||||
Kind: application.ApplicationKind,
|
||||
APIVersion: application.Group + "/v1alpha1",
|
||||
},
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: appName,
|
||||
},
|
||||
}
|
||||
SetAppSpecOptions(flags, &app.Spec, &appOpts)
|
||||
SetParameterOverrides(&app, appOpts.Parameters)
|
||||
mergeLabels(&app, labels)
|
||||
setAnnotations(&app, annotations)
|
||||
SetParameterOverrides(app, appOpts.Parameters)
|
||||
mergeLabels(app, labels)
|
||||
setAnnotations(app, annotations)
|
||||
}
|
||||
return &app, nil
|
||||
return apps, nil
|
||||
}
|
||||
|
||||
func ConstructApps(fileURL, appName string, labels, annotations, args []string, appOpts AppOptions, flags *pflag.FlagSet) ([]*argoappv1.Application, error) {
|
||||
if fileURL == "-" {
|
||||
return constructAppsFromStdin()
|
||||
} else if fileURL != "" {
|
||||
return constructAppsFromFileUrl(fileURL, appName, labels, annotations, args, appOpts, flags)
|
||||
}
|
||||
return constructAppsBaseOnName(appName, labels, annotations, args, appOpts, flags)
|
||||
}
|
||||
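ConstructApps simply dispatches on fileURL: "-" reads stdin, any other non-empty value is treated as a local path or HTTP(S) URL, and an empty value builds a single app from the --name argument. A minimal in-package sketch of the name-based path, mirroring TestConstructBasedOnName further below (the app name is illustrative):

apps, err := ConstructApps("", "guestbook", []string{}, []string{}, []string{}, AppOptions{}, nil)
// err == nil; len(apps) == 1; apps[0].Name == "guestbook"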
|
||||
func mergeLabels(app *argoappv1.Application, labels []string) {
|
||||
|
||||
@@ -1,12 +1,16 @@
|
||||
package util
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/spf13/cobra"
|
||||
"github.com/stretchr/testify/assert"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
argoappv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
)
|
||||
|
||||
func Test_setHelmOpt(t *testing.T) {
|
||||
@@ -45,6 +49,11 @@ func Test_setHelmOpt(t *testing.T) {
|
||||
setHelmOpt(&src, helmOpts{version: "v3"})
|
||||
assert.Equal(t, "v3", src.Helm.Version)
|
||||
})
|
||||
t.Run("HelmPassCredentials", func(t *testing.T) {
|
||||
src := v1alpha1.ApplicationSource{}
|
||||
setHelmOpt(&src, helmOpts{passCredentials: true})
|
||||
assert.Equal(t, true, src.Helm.PassCredentials)
|
||||
})
|
||||
}
|
||||
|
||||
func Test_setKustomizeOpt(t *testing.T) {
|
||||
@@ -190,3 +199,106 @@ func Test_setAnnotations(t *testing.T) {
|
||||
assert.Equal(t, map[string]string{"hoge": ""}, app.Annotations)
|
||||
})
|
||||
}
|
||||
|
||||
const appsYaml = `---
|
||||
# Source: apps/templates/helm.yaml
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: sth1
|
||||
namespace: argocd
|
||||
finalizers:
|
||||
- resources-finalizer.argocd.argoproj.io
|
||||
spec:
|
||||
destination:
|
||||
namespace: sth
|
||||
server: 'https://kubernetes.default.svc'
|
||||
project: default
|
||||
source:
|
||||
repoURL: 'https://github.com/pasha-codefresh/argocd-example-apps'
|
||||
targetRevision: HEAD
|
||||
path: apps
|
||||
helm:
|
||||
valueFiles:
|
||||
- values.yaml
|
||||
---
|
||||
# Source: apps/templates/helm.yaml
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: sth2
|
||||
namespace: argocd
|
||||
finalizers:
|
||||
- resources-finalizer.argocd.argoproj.io
|
||||
spec:
|
||||
destination:
|
||||
namespace: sth
|
||||
server: 'https://kubernetes.default.svc'
|
||||
project: default
|
||||
source:
|
||||
repoURL: 'https://github.com/pasha-codefresh/argocd-example-apps'
|
||||
targetRevision: HEAD
|
||||
path: apps
|
||||
helm:
|
||||
valueFiles:
|
||||
- values.yaml`
|
||||
|
||||
func TestReadAppsFromURI(t *testing.T) {
|
||||
file, err := ioutil.TempFile(os.TempDir(), "")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer func() {
|
||||
_ = os.Remove(file.Name())
|
||||
}()
|
||||
|
||||
_, _ = file.WriteString(appsYaml)
|
||||
_ = file.Sync()
|
||||
|
||||
apps := make([]*argoappv1.Application, 0)
|
||||
err = readAppsFromURI(file.Name(), &apps)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, 2, len(apps))
|
||||
|
||||
assert.Equal(t, "sth1", apps[0].Name)
|
||||
assert.Equal(t, "sth2", apps[1].Name)
|
||||
|
||||
}
|
||||
|
||||
func TestConstructAppFromStdin(t *testing.T) {
|
||||
file, err := ioutil.TempFile(os.TempDir(), "")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer func() {
|
||||
_ = os.Remove(file.Name())
|
||||
}()
|
||||
|
||||
_, _ = file.WriteString(appsYaml)
|
||||
_ = file.Sync()
|
||||
|
||||
if _, err := file.Seek(0, 0); err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
os.Stdin = file
|
||||
|
||||
apps, err := ConstructApps("-", "test", []string{}, []string{}, []string{}, AppOptions{}, nil)
|
||||
|
||||
if err := file.Close(); err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, 2, len(apps))
|
||||
assert.Equal(t, "sth1", apps[0].Name)
|
||||
assert.Equal(t, "sth2", apps[1].Name)
|
||||
|
||||
}
|
||||
|
||||
func TestConstructBasedOnName(t *testing.T) {
|
||||
apps, err := ConstructApps("", "test", []string{}, []string{}, []string{}, AppOptions{}, nil)
|
||||
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, 1, len(apps))
|
||||
assert.Equal(t, "test", apps[0].Name)
|
||||
}
|
||||
|
||||
@@ -55,7 +55,7 @@ func PrintKubeContexts(ca clientcmd.ConfigAccess) {
|
||||
}
|
||||
}
|
||||
|
||||
func NewCluster(name string, namespaces []string, clusterResources bool, conf *rest.Config, managerBearerToken string, awsAuthConf *argoappv1.AWSAuthConfig, execProviderConf *argoappv1.ExecProviderConfig) *argoappv1.Cluster {
|
||||
func NewCluster(name string, namespaces []string, clusterResources bool, conf *rest.Config, managerBearerToken string, awsAuthConf *argoappv1.AWSAuthConfig, execProviderConf *argoappv1.ExecProviderConfig, labels, annotations map[string]string) *argoappv1.Cluster {
|
||||
tlsClientConfig := argoappv1.TLSClientConfig{
|
||||
Insecure: conf.TLSClientConfig.Insecure,
|
||||
ServerName: conf.TLSClientConfig.ServerName,
|
||||
@@ -89,6 +89,8 @@ func NewCluster(name string, namespaces []string, clusterResources bool, conf *r
|
||||
AWSAuthConfig: awsAuthConf,
|
||||
ExecProviderConfig: execProviderConf,
|
||||
},
|
||||
Labels: labels,
|
||||
Annotations: annotations,
|
||||
}
|
||||
|
||||
// Bearer token will preferentially be used for auth if present,
|
||||
@@ -111,6 +113,7 @@ type ClusterOptions struct {
|
||||
Namespaces []string
|
||||
ClusterResources bool
|
||||
Name string
|
||||
Project string
|
||||
Shard int64
|
||||
ExecProviderCommand string
|
||||
ExecProviderArgs []string
|
||||
@@ -126,6 +129,7 @@ func AddClusterFlags(command *cobra.Command, opts *ClusterOptions) {
|
||||
command.Flags().StringArrayVar(&opts.Namespaces, "namespace", nil, "List of namespaces which are allowed to manage")
|
||||
command.Flags().BoolVar(&opts.ClusterResources, "cluster-resources", false, "Indicates if cluster level resources should be managed. The setting is used only if list of managed namespaces is not empty.")
|
||||
command.Flags().StringVar(&opts.Name, "name", "", "Overwrite the cluster name")
|
||||
command.Flags().StringVar(&opts.Project, "project", "", "project of the cluster")
|
||||
command.Flags().Int64Var(&opts.Shard, "shard", -1, "Cluster shard number; inferred from hostname if not set")
|
||||
command.Flags().StringVar(&opts.ExecProviderCommand, "exec-command", "", "Command to run to provide client credentials to the cluster. You may need to build a custom ArgoCD image to ensure the command is available at runtime.")
|
||||
command.Flags().StringArrayVar(&opts.ExecProviderArgs, "exec-command-args", nil, "Arguments to supply to the --exec-command executable")
|
||||
|
||||
@@ -11,6 +11,8 @@ import (
|
||||
)
|
||||
|
||||
func Test_newCluster(t *testing.T) {
|
||||
labels := map[string]string{"key1": "val1"}
|
||||
annotations := map[string]string{"key2": "val2"}
|
||||
clusterWithData := NewCluster("test-cluster", []string{"test-namespace"}, false, &rest.Config{
|
||||
TLSClientConfig: rest.TLSClientConfig{
|
||||
Insecure: false,
|
||||
@@ -23,11 +25,13 @@ func Test_newCluster(t *testing.T) {
|
||||
},
|
||||
"test-bearer-token",
|
||||
&v1alpha1.AWSAuthConfig{},
|
||||
&v1alpha1.ExecProviderConfig{})
|
||||
&v1alpha1.ExecProviderConfig{}, labels, annotations)
|
||||
|
||||
assert.Equal(t, "test-cert-data", string(clusterWithData.Config.CertData))
|
||||
assert.Equal(t, "test-key-data", string(clusterWithData.Config.KeyData))
|
||||
assert.Equal(t, "", clusterWithData.Config.BearerToken)
|
||||
assert.Equal(t, labels, clusterWithData.Labels)
|
||||
assert.Equal(t, annotations, clusterWithData.Annotations)
|
||||
|
||||
clusterWithFiles := NewCluster("test-cluster", []string{"test-namespace"}, false, &rest.Config{
|
||||
TLSClientConfig: rest.TLSClientConfig{
|
||||
@@ -41,11 +45,13 @@ func Test_newCluster(t *testing.T) {
|
||||
},
|
||||
"test-bearer-token",
|
||||
&v1alpha1.AWSAuthConfig{},
|
||||
&v1alpha1.ExecProviderConfig{})
|
||||
&v1alpha1.ExecProviderConfig{}, labels, nil)
|
||||
|
||||
assert.True(t, strings.Contains(string(clusterWithFiles.Config.CertData), "test-cert-data"))
|
||||
assert.True(t, strings.Contains(string(clusterWithFiles.Config.KeyData), "test-key-data"))
|
||||
assert.Equal(t, "", clusterWithFiles.Config.BearerToken)
|
||||
assert.Equal(t, labels, clusterWithFiles.Labels)
|
||||
assert.Nil(t, clusterWithFiles.Annotations)
|
||||
|
||||
clusterWithBearerToken := NewCluster("test-cluster", []string{"test-namespace"}, false, &rest.Config{
|
||||
TLSClientConfig: rest.TLSClientConfig{
|
||||
@@ -57,7 +63,9 @@ func Test_newCluster(t *testing.T) {
|
||||
},
|
||||
"test-bearer-token",
|
||||
&v1alpha1.AWSAuthConfig{},
|
||||
&v1alpha1.ExecProviderConfig{})
|
||||
&v1alpha1.ExecProviderConfig{}, nil, nil)
|
||||
|
||||
assert.Equal(t, "test-bearer-token", clusterWithBearerToken.Config.BearerToken)
|
||||
assert.Nil(t, clusterWithBearerToken.Labels)
|
||||
assert.Nil(t, clusterWithBearerToken.Annotations)
|
||||
}
|
||||
|
||||
@@ -27,6 +27,7 @@ type RepoOptions struct {
|
||||
func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
|
||||
command.Flags().StringVar(&opts.Repo.Type, "type", common.DefaultRepoType, "type of the repository, \"git\" or \"helm\"")
|
||||
command.Flags().StringVar(&opts.Repo.Name, "name", "", "name of the repository, mandatory for repositories of type helm")
|
||||
command.Flags().StringVar(&opts.Repo.Project, "project", "", "project of the repository")
|
||||
command.Flags().StringVar(&opts.Repo.Username, "username", "", "username to the repository")
|
||||
command.Flags().StringVar(&opts.Repo.Password, "password", "", "password to the repository")
|
||||
command.Flags().StringVar(&opts.SshPrivateKeyPath, "ssh-private-key-path", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
|
||||
|
||||
cmpserver/apiclient/clientset.go (new file, 66 lines) @@ -0,0 +1,66 @@
|
||||
package apiclient
|
||||
|
||||
import (
|
||||
"context"
|
||||
"time"
|
||||
|
||||
grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
|
||||
grpc_retry "github.com/grpc-ecosystem/go-grpc-middleware/retry"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"google.golang.org/grpc"
|
||||
|
||||
grpc_util "github.com/argoproj/argo-cd/v2/util/grpc"
|
||||
"github.com/argoproj/argo-cd/v2/util/io"
|
||||
)
|
||||
|
||||
const (
|
||||
// MaxGRPCMessageSize contains max grpc message size
|
||||
MaxGRPCMessageSize = 100 * 1024 * 1024
|
||||
)
|
||||
|
||||
// Clientset represents config management plugin server api clients
|
||||
type Clientset interface {
|
||||
NewConfigManagementPluginClient() (io.Closer, ConfigManagementPluginServiceClient, error)
|
||||
}
|
||||
|
||||
type clientSet struct {
|
||||
address string
|
||||
timeoutSeconds int
|
||||
}
|
||||
|
||||
func (c *clientSet) NewConfigManagementPluginClient() (io.Closer, ConfigManagementPluginServiceClient, error) {
|
||||
conn, err := NewConnection(c.address, c.timeoutSeconds)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
return conn, NewConfigManagementPluginServiceClient(conn), nil
|
||||
}
|
||||
|
||||
func NewConnection(address string, timeoutSeconds int) (*grpc.ClientConn, error) {
|
||||
retryOpts := []grpc_retry.CallOption{
|
||||
grpc_retry.WithMax(3),
|
||||
grpc_retry.WithBackoff(grpc_retry.BackoffLinear(1000 * time.Millisecond)),
|
||||
}
|
||||
unaryInterceptors := []grpc.UnaryClientInterceptor{grpc_retry.UnaryClientInterceptor(retryOpts...)}
|
||||
if timeoutSeconds > 0 {
|
||||
unaryInterceptors = append(unaryInterceptors, grpc_util.WithTimeout(time.Duration(timeoutSeconds)*time.Second))
|
||||
}
|
||||
dialOpts := []grpc.DialOption{
|
||||
grpc.WithStreamInterceptor(grpc_retry.StreamClientInterceptor(retryOpts...)),
|
||||
grpc.WithUnaryInterceptor(grpc_middleware.ChainUnaryClient(unaryInterceptors...)),
|
||||
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(MaxGRPCMessageSize), grpc.MaxCallSendMsgSize(MaxGRPCMessageSize)),
|
||||
}
|
||||
|
||||
dialOpts = append(dialOpts, grpc.WithInsecure())
|
||||
conn, err := grpc_util.BlockingDial(context.Background(), "unix", address, nil, dialOpts...)
|
||||
if err != nil {
|
||||
log.Errorf("Unable to connect to config management plugin service with address %s", address)
|
||||
return nil, err
|
||||
}
|
||||
return conn, nil
|
||||
}
|
||||
|
||||
// NewCMPServerClientset creates new instance of config management plugin server Clientset
|
||||
func NewConfigManagementPluginClientSet(address string, timeoutSeconds int) Clientset {
|
||||
return &clientSet{address: address, timeoutSeconds: timeoutSeconds}
|
||||
}
|
||||
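The Clientset above is how a consumer reaches a config management plugin over its Unix socket. A minimal, hedged sketch of a caller; the socket path and repository path are illustrative, and only the constructors and message fields shown in this file and in plugin.proto below are assumed:

package main

import (
	"context"
	"log"

	"github.com/argoproj/argo-cd/v2/cmpserver/apiclient"
)

func main() {
	// The address is the plugin's Unix socket; 60 is an arbitrary per-call timeout in seconds.
	cs := apiclient.NewConfigManagementPluginClientSet("/home/argocd/cmp-server/plugins/kustomize-v1.0.sock", 60)
	closer, client, err := cs.NewConfigManagementPluginClient()
	if err != nil {
		log.Fatal(err)
	}
	defer closer.Close()

	// Ask the plugin whether it recognizes the repository checkout at /tmp/repo.
	resp, err := client.MatchRepository(context.Background(), &apiclient.RepositoryRequest{Path: "/tmp/repo"})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("supported by plugin: %v", resp.IsSupported)
}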
cmpserver/apiclient/plugin.pb.go (new file, 1944 lines)
cmpserver/plugin/config.go (new file, 91 lines) @@ -0,0 +1,91 @@
|
||||
package plugin
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/common"
|
||||
configUtil "github.com/argoproj/argo-cd/v2/util/config"
|
||||
)
|
||||
|
||||
const (
|
||||
ConfigManagementPluginKind string = "ConfigManagementPlugin"
|
||||
)
|
||||
|
||||
type PluginConfig struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
Metadata metav1.ObjectMeta `json:"metadata"`
|
||||
Spec PluginConfigSpec `json:"spec"`
|
||||
}
|
||||
|
||||
type PluginConfigSpec struct {
|
||||
Version string `json:"version"`
|
||||
Init Command `json:"init,omitempty"`
|
||||
Generate Command `json:"generate"`
|
||||
Discover Discover `json:"discover"`
|
||||
AllowConcurrency bool `json:"allowConcurrency"`
|
||||
LockRepo bool `json:"lockRepo"`
|
||||
}
|
||||
|
||||
// Discover holds the find command/glob and the fileName used for plugin discovery
|
||||
type Discover struct {
|
||||
Find Find `json:"find"`
|
||||
FileName string `json:"fileName"`
|
||||
}
|
||||
|
||||
// Command holds binary path and arguments list
|
||||
type Command struct {
|
||||
Command []string `json:"command,omitempty"`
|
||||
Args []string `json:"args,omitempty"`
|
||||
}
|
||||
|
||||
// Find holds find command or glob pattern
|
||||
type Find struct {
|
||||
Command
|
||||
Glob string `json:"glob"`
|
||||
}
|
||||
|
||||
func ReadPluginConfig(filePath string) (*PluginConfig, error) {
|
||||
path := fmt.Sprintf("%s/%s", strings.TrimRight(filePath, "/"), common.PluginConfigFileName)
|
||||
|
||||
var config PluginConfig
|
||||
err := configUtil.UnmarshalLocalFile(path, &config)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err = ValidatePluginConfig(config); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &config, nil
|
||||
}
|
||||
|
||||
func ValidatePluginConfig(config PluginConfig) error {
|
||||
if config.Metadata.Name == "" {
|
||||
return fmt.Errorf("invalid plugin configuration file. metadata.name should be non-empty.")
|
||||
}
|
||||
if config.TypeMeta.Kind != ConfigManagementPluginKind {
|
||||
return fmt.Errorf("invalid plugin configuration file. kind should be %s, found %s", ConfigManagementPluginKind, config.TypeMeta.Kind)
|
||||
}
|
||||
if len(config.Spec.Generate.Command) == 0 {
|
||||
return fmt.Errorf("invalid plugin configuration file. spec.generate command should be non-empty")
|
||||
}
|
||||
if config.Spec.Discover.Find.Glob == "" && len(config.Spec.Discover.Find.Command.Command) == 0 && config.Spec.Discover.FileName == "" {
|
||||
return fmt.Errorf("invalid plugin configuration file. atleast one of discover.find.command or discover.find.glob or discover.fineName should be non-empty")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (cfg *PluginConfig) Address() string {
|
||||
var address string
|
||||
pluginSockFilePath := common.GetPluginSockFilePath()
|
||||
if cfg.Spec.Version != "" {
|
||||
address = fmt.Sprintf("%s/%s-%s.sock", pluginSockFilePath, cfg.Metadata.Name, cfg.Spec.Version)
|
||||
} else {
|
||||
address = fmt.Sprintf("%s/%s.sock", pluginSockFilePath, cfg.Metadata.Name)
|
||||
}
|
||||
return address
|
||||
}
|
||||
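Address() builds the socket path from metadata.name and the optional spec.version under the directory returned by common.GetPluginSockFilePath(). An in-package sketch of the expected value (assuming the fmt and metav1 imports above, and the default socket directory added to common.go later in this diff):

cfg := PluginConfig{
	Metadata: metav1.ObjectMeta{Name: "kustomize"},
	Spec:     PluginConfigSpec{Version: "v1.0"},
}
// With ARGOCD_PLUGINSOCKFILEPATH unset, this prints:
// /home/argocd/cmp-server/plugins/kustomize-v1.0.sock
fmt.Println(cfg.Address())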
cmpserver/plugin/plugin.go (new file, 137 lines) @@ -0,0 +1,137 @@
|
||||
package plugin
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/argoproj/gitops-engine/pkg/utils/kube"
|
||||
"github.com/mattn/go-zglob"
|
||||
log "github.com/sirupsen/logrus"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/cmpserver/apiclient"
|
||||
executil "github.com/argoproj/argo-cd/v2/util/exec"
|
||||
)
|
||||
|
||||
// Service implements ConfigManagementPluginService interface
|
||||
type Service struct {
|
||||
initConstants CMPServerInitConstants
|
||||
}
|
||||
|
||||
type CMPServerInitConstants struct {
|
||||
PluginConfig PluginConfig
|
||||
}
|
||||
|
||||
// NewService returns a new instance of the ConfigManagementPluginService
|
||||
func NewService(initConstants CMPServerInitConstants) *Service {
|
||||
return &Service{
|
||||
initConstants: initConstants,
|
||||
}
|
||||
}
|
||||
|
||||
func runCommand(command Command, path string, env []string) (string, error) {
|
||||
if len(command.Command) == 0 {
|
||||
return "", fmt.Errorf("Command is empty")
|
||||
}
|
||||
cmd := exec.Command(command.Command[0], append(command.Command[1:], command.Args...)...)
|
||||
cmd.Env = env
|
||||
cmd.Dir = path
|
||||
return executil.Run(cmd)
|
||||
}
|
||||
|
||||
// environ returns a list of environment variables in name=value format from a list of variables
|
||||
func environ(envVars []*apiclient.EnvEntry) []string {
|
||||
var environ []string
|
||||
for _, item := range envVars {
|
||||
if item != nil && item.Name != "" && item.Value != "" {
|
||||
environ = append(environ, fmt.Sprintf("%s=%s", item.Name, item.Value))
|
||||
}
|
||||
}
|
||||
return environ
|
||||
}
|
||||
|
||||
// GenerateManifest runs generate command from plugin config file and returns generated manifest files
|
||||
func (s *Service) GenerateManifest(ctx context.Context, q *apiclient.ManifestRequest) (*apiclient.ManifestResponse, error) {
|
||||
config := s.initConstants.PluginConfig
|
||||
|
||||
env := append(os.Environ(), environ(q.Env)...)
|
||||
if len(config.Spec.Init.Command) > 0 {
|
||||
_, err := runCommand(config.Spec.Init, q.AppPath, env)
|
||||
if err != nil {
|
||||
return &apiclient.ManifestResponse{}, err
|
||||
}
|
||||
}
|
||||
|
||||
out, err := runCommand(config.Spec.Generate, q.AppPath, env)
|
||||
if err != nil {
|
||||
return &apiclient.ManifestResponse{}, err
|
||||
}
|
||||
|
||||
manifests, err := kube.SplitYAMLToString([]byte(out))
|
||||
if err != nil {
|
||||
return &apiclient.ManifestResponse{}, err
|
||||
}
|
||||
|
||||
return &apiclient.ManifestResponse{
|
||||
Manifests: manifests,
|
||||
}, err
|
||||
}
|
||||
|
||||
// MatchRepository checks whether the application repository type is supported by config management plugin server
|
||||
func (s *Service) MatchRepository(ctx context.Context, q *apiclient.RepositoryRequest) (*apiclient.RepositoryResponse, error) {
|
||||
var repoResponse apiclient.RepositoryResponse
|
||||
config := s.initConstants.PluginConfig
|
||||
if config.Spec.Discover.FileName != "" {
|
||||
log.Debugf("config.Spec.Discover.FileName is provided")
|
||||
pattern := strings.TrimSuffix(q.Path, "/") + "/" + strings.TrimPrefix(config.Spec.Discover.FileName, "/")
|
||||
matches, err := filepath.Glob(pattern)
|
||||
if err != nil || len(matches) == 0 {
|
||||
log.Debugf("Could not find match for pattern %s. Error is %v.", pattern, err)
|
||||
return &repoResponse, err
|
||||
} else if len(matches) > 0 {
|
||||
repoResponse.IsSupported = true
|
||||
return &repoResponse, nil
|
||||
}
|
||||
}
|
||||
|
||||
if config.Spec.Discover.Find.Glob != "" {
|
||||
log.Debugf("config.Spec.Discover.Find.Glob is provided")
|
||||
pattern := strings.TrimSuffix(q.Path, "/") + "/" + strings.TrimPrefix(config.Spec.Discover.Find.Glob, "/")
|
||||
// filepath.Glob doesn't support '**', hence the third-party zglob lib
|
||||
// https://github.com/golang/go/issues/11862
|
||||
matches, err := zglob.Glob(pattern)
|
||||
if err != nil || len(matches) == 0 {
|
||||
log.Debugf("Could not find match for pattern %s. Error is %v.", pattern, err)
|
||||
return &repoResponse, err
|
||||
} else if len(matches) > 0 {
|
||||
repoResponse.IsSupported = true
|
||||
return &repoResponse, nil
|
||||
}
|
||||
}
|
||||
|
||||
log.Debugf("Going to try runCommand.")
|
||||
find, err := runCommand(config.Spec.Discover.Find.Command, q.Path, os.Environ())
|
||||
if err != nil {
|
||||
return &repoResponse, err
|
||||
}
|
||||
|
||||
var isSupported bool
|
||||
if find != "" {
|
||||
isSupported = true
|
||||
}
|
||||
return &apiclient.RepositoryResponse{
|
||||
IsSupported: isSupported,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// GetPluginConfig returns plugin config
|
||||
func (s *Service) GetPluginConfig(ctx context.Context, q *apiclient.ConfigRequest) (*apiclient.ConfigResponse, error) {
|
||||
config := s.initConstants.PluginConfig
|
||||
return &apiclient.ConfigResponse{
|
||||
AllowConcurrency: config.Spec.AllowConcurrency,
|
||||
LockRepo: config.Spec.LockRepo,
|
||||
}, nil
|
||||
}
|
||||
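MatchRepository above checks discover.fileName first, then discover.find.glob (via go-zglob, because the standard library's filepath.Glob cannot expand '**'; see golang/go#11862), and finally falls back to running discover.find.command. A small sketch of the difference the third-party lib makes (the repository path is illustrative; assumes the filepath, fmt and zglob imports):

// filepath.Glob treats "**" like a single "*", so it only matches one directory level.
shallow, _ := filepath.Glob("/tmp/repo/**/kustomization.yaml")

// zglob.Glob expands "**" across nested directories, matching the discovery
// patterns used in the testdata plugin.yaml files below.
deep, _ := zglob.Glob("/tmp/repo/**/kustomization.yaml")

fmt.Println(len(shallow), len(deep))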
cmpserver/plugin/plugin.proto (new file, 61 lines) @@ -0,0 +1,61 @@
|
||||
syntax = "proto3";
|
||||
option go_package = "github.com/argoproj/argo-cd/v2/cmpserver/apiclient";
|
||||
|
||||
package plugin;
|
||||
|
||||
import "k8s.io/api/core/v1/generated.proto";
|
||||
|
||||
// ManifestRequest is a query for manifest generation.
|
||||
message ManifestRequest {
|
||||
// Name of the application for which the request is triggered
|
||||
string appName = 1;
|
||||
string appPath = 2;
|
||||
string repoPath = 3;
|
||||
bool noCache = 4;
|
||||
repeated EnvEntry env = 5;
|
||||
}
|
||||
|
||||
// EnvEntry represents an entry in the application's environment
|
||||
message EnvEntry {
|
||||
// Name is the name of the variable, usually expressed in uppercase
|
||||
string name = 1;
|
||||
// Value is the value of the variable
|
||||
string value = 2;
|
||||
}
|
||||
|
||||
message ManifestResponse {
|
||||
repeated string manifests = 1;
|
||||
string sourceType = 2;
|
||||
}
|
||||
|
||||
message RepositoryRequest {
|
||||
string path = 1;
|
||||
repeated EnvEntry env = 2;
|
||||
}
|
||||
|
||||
message RepositoryResponse {
|
||||
bool isSupported = 1;
|
||||
}
|
||||
|
||||
message ConfigRequest {
|
||||
}
|
||||
|
||||
message ConfigResponse {
|
||||
bool allowConcurrency = 1;
|
||||
bool lockRepo = 2;
|
||||
}
|
||||
|
||||
// ConfigManagementPlugin Service
|
||||
service ConfigManagementPluginService {
|
||||
// GenerateManifest generates manifest for application in specified repo name and revision
|
||||
rpc GenerateManifest(ManifestRequest) returns (ManifestResponse) {
|
||||
}
|
||||
|
||||
// MatchRepository returns whether or not the given path is supported by the plugin
|
||||
rpc MatchRepository(RepositoryRequest) returns (RepositoryResponse) {
|
||||
}
|
||||
|
||||
// Get configuration of the plugin
|
||||
rpc GetPluginConfig(ConfigRequest) returns (ConfigResponse) {
|
||||
}
|
||||
}
|
||||
cmpserver/plugin/plugin_test.go (new file, 65 lines) @@ -0,0 +1,65 @@
|
||||
package plugin
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/cmpserver/apiclient"
|
||||
)
|
||||
|
||||
func newService(configFilePath string) (*Service, error) {
|
||||
config, err := ReadPluginConfig(configFilePath)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
initConstants := CMPServerInitConstants{
|
||||
PluginConfig: *config,
|
||||
}
|
||||
|
||||
service := &Service{
|
||||
initConstants: initConstants,
|
||||
}
|
||||
return service, nil
|
||||
}
|
||||
|
||||
func TestMatchRepository(t *testing.T) {
|
||||
configFilePath := "./testdata/ksonnet/config"
|
||||
service, err := newService(configFilePath)
|
||||
require.NoError(t, err)
|
||||
|
||||
q := apiclient.RepositoryRequest{}
|
||||
path, err := os.Getwd()
|
||||
require.NoError(t, err)
|
||||
q.Path = path
|
||||
|
||||
res1, err := service.MatchRepository(context.Background(), &q)
|
||||
require.NoError(t, err)
|
||||
require.True(t, res1.IsSupported)
|
||||
}
|
||||
|
||||
func Test_Negative_ConfigFile_DoesnotExist(t *testing.T) {
|
||||
configFilePath := "./testdata/kustomize-neg/config"
|
||||
service, err := newService(configFilePath)
|
||||
require.Error(t, err)
|
||||
require.Nil(t, service)
|
||||
}
|
||||
|
||||
func TestGenerateManifest(t *testing.T) {
|
||||
configFilePath := "./testdata/kustomize/config"
|
||||
service, err := newService(configFilePath)
|
||||
require.NoError(t, err)
|
||||
|
||||
q := apiclient.ManifestRequest{}
|
||||
res1, err := service.GenerateManifest(context.Background(), &q)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, res1)
|
||||
|
||||
expectedOutput := "{\"apiVersion\":\"v1\",\"data\":{\"foo\":\"bar\"},\"kind\":\"ConfigMap\",\"metadata\":{\"name\":\"my-map\"}}"
|
||||
if res1 != nil {
|
||||
require.Equal(t, expectedOutput, res1.Manifests[0])
|
||||
}
|
||||
}
|
||||
cmpserver/plugin/testdata/ksonnet/config/plugin.yaml (new vendored file, 15 lines) @@ -0,0 +1,15 @@
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
kind: ConfigManagementPlugin
|
||||
metadata:
|
||||
name: ksonnet
|
||||
spec:
|
||||
version: v1.0
|
||||
init:
|
||||
command: [ks, version]
|
||||
generate:
|
||||
command: [sh, -c, "ks show $ARGOCD_APP_ENV"]
|
||||
discover:
|
||||
find:
|
||||
glob: "**/*/main.jsonnet"
|
||||
allowConcurrency: false
|
||||
lockRepo: false
|
||||
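This is the config that TestMatchRepository above loads through newService. A sketch of the expected round trip (the path is relative to the cmpserver/plugin package, as in the tests):

cfg, err := ReadPluginConfig("./testdata/ksonnet/config")
// err == nil: ReadPluginConfig appends common.PluginConfigFileName ("plugin.yaml")
// to the directory and validates kind, the generate command and the discovery rules.
fmt.Println(cfg.Metadata.Name, cfg.Spec.Version) // ksonnet v1.0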
cmpserver/plugin/testdata/ksonnet/main.jsonnet (new vendored file, empty)
cmpserver/plugin/testdata/kustomize-neg/config/plugin-bad.yaml (new vendored file, 16 lines) @@ -0,0 +1,16 @@
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
kind: ConfigManagementPlugin
|
||||
metadata:
|
||||
name: kustomize
|
||||
spec:
|
||||
version: v1.0
|
||||
init:
|
||||
command: [kustomize, version]
|
||||
generate:
|
||||
command: [sh, -c, "cd testdata/kustomize && kustomize build"]
|
||||
discover:
|
||||
find:
|
||||
command: [sh, -c, find . -name kustomization.yaml]
|
||||
glob: "**/*/kustomization.yaml"
|
||||
allowConcurrency: true
|
||||
lockRepo: false
|
||||
cmpserver/plugin/testdata/kustomize/cm.yaml (new vendored file, 6 lines) @@ -0,0 +1,6 @@
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: my-map
|
||||
data:
|
||||
foo: bar
|
||||
cmpserver/plugin/testdata/kustomize/config/plugin.yaml (new vendored file, 16 lines) @@ -0,0 +1,16 @@
|
||||
apiVersion: argoproj.io/v1alpha1
|
||||
kind: ConfigManagementPlugin
|
||||
metadata:
|
||||
name: kustomize
|
||||
spec:
|
||||
version: v1.0
|
||||
init:
|
||||
command: [kustomize, version]
|
||||
generate:
|
||||
command: [sh, -c, "cd testdata/kustomize && kustomize build"]
|
||||
discover:
|
||||
find:
|
||||
command: [sh, -c, find . -name kustomization.yaml]
|
||||
glob: "**/*/kustomization.yaml"
|
||||
allowConcurrency: true
|
||||
lockRepo: false
|
||||
cmpserver/plugin/testdata/kustomize/kustomization.yaml (new vendored file, 5 lines) @@ -0,0 +1,5 @@
|
||||
apiVersion: kustomize.config.k8s.io/v1beta1
|
||||
kind: Kustomization
|
||||
|
||||
resources:
|
||||
- ./cm.yaml
|
||||
cmpserver/server.go (new file, 107 lines) @@ -0,0 +1,107 @@
|
||||
package cmpserver
|
||||
|
||||
import (
|
||||
"net"
|
||||
"os"
|
||||
"os/signal"
|
||||
"syscall"
|
||||
|
||||
grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
|
||||
grpc_logrus "github.com/grpc-ecosystem/go-grpc-middleware/logging/logrus"
|
||||
grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/health"
|
||||
"google.golang.org/grpc/health/grpc_health_v1"
|
||||
"google.golang.org/grpc/reflection"
|
||||
|
||||
"github.com/argoproj/argo-cd/v2/cmpserver/apiclient"
|
||||
"github.com/argoproj/argo-cd/v2/cmpserver/plugin"
|
||||
"github.com/argoproj/argo-cd/v2/common"
|
||||
versionpkg "github.com/argoproj/argo-cd/v2/pkg/apiclient/version"
|
||||
"github.com/argoproj/argo-cd/v2/server/version"
|
||||
"github.com/argoproj/argo-cd/v2/util/errors"
|
||||
grpc_util "github.com/argoproj/argo-cd/v2/util/grpc"
|
||||
)
|
||||
|
||||
// ArgoCDCMPServer is the config management plugin server implementation
|
||||
type ArgoCDCMPServer struct {
|
||||
log *log.Entry
|
||||
opts []grpc.ServerOption
|
||||
initConstants plugin.CMPServerInitConstants
|
||||
stopCh chan os.Signal
|
||||
doneCh chan interface{}
|
||||
sig os.Signal
|
||||
}
|
||||
|
||||
// NewServer returns a new instance of the Argo CD config management plugin server
|
||||
func NewServer(initConstants plugin.CMPServerInitConstants) (*ArgoCDCMPServer, error) {
|
||||
if os.Getenv(common.EnvEnableGRPCTimeHistogramEnv) == "true" {
|
||||
grpc_prometheus.EnableHandlingTimeHistogram()
|
||||
}
|
||||
|
||||
serverLog := log.NewEntry(log.StandardLogger())
|
||||
streamInterceptors := []grpc.StreamServerInterceptor{grpc_logrus.StreamServerInterceptor(serverLog), grpc_prometheus.StreamServerInterceptor, grpc_util.PanicLoggerStreamServerInterceptor(serverLog)}
|
||||
unaryInterceptors := []grpc.UnaryServerInterceptor{grpc_logrus.UnaryServerInterceptor(serverLog), grpc_prometheus.UnaryServerInterceptor, grpc_util.PanicLoggerUnaryServerInterceptor(serverLog)}
|
||||
|
||||
serverOpts := []grpc.ServerOption{
|
||||
grpc.UnaryInterceptor(grpc_middleware.ChainUnaryServer(unaryInterceptors...)),
|
||||
grpc.StreamInterceptor(grpc_middleware.ChainStreamServer(streamInterceptors...)),
|
||||
grpc.MaxRecvMsgSize(apiclient.MaxGRPCMessageSize),
|
||||
grpc.MaxSendMsgSize(apiclient.MaxGRPCMessageSize),
|
||||
}
|
||||
|
||||
return &ArgoCDCMPServer{
|
||||
log: serverLog,
|
||||
opts: serverOpts,
|
||||
stopCh: make(chan os.Signal),
|
||||
doneCh: make(chan interface{}),
|
||||
initConstants: initConstants,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (a *ArgoCDCMPServer) Run() {
|
||||
config := a.initConstants.PluginConfig
|
||||
|
||||
// Listen on the socket address
|
||||
_ = os.Remove(config.Address())
|
||||
listener, err := net.Listen("unix", config.Address())
|
||||
errors.CheckError(err)
|
||||
log.Infof("argocd-cmp-server %s serving on %s", common.GetVersion(), listener.Addr())
|
||||
|
||||
signal.Notify(a.stopCh, syscall.SIGINT, syscall.SIGTERM)
|
||||
go a.Shutdown(config.Address())
|
||||
|
||||
grpcServer := a.CreateGRPC()
|
||||
err = grpcServer.Serve(listener)
|
||||
errors.CheckError(err)
|
||||
|
||||
if a.sig != nil {
|
||||
<-a.doneCh
|
||||
}
|
||||
}
|
||||
|
||||
// CreateGRPC creates new configured grpc server
|
||||
func (a *ArgoCDCMPServer) CreateGRPC() *grpc.Server {
|
||||
server := grpc.NewServer(a.opts...)
|
||||
versionpkg.RegisterVersionServiceServer(server, version.NewServer(nil, func() (bool, error) {
|
||||
return true, nil
|
||||
}))
|
||||
pluginService := plugin.NewService(a.initConstants)
|
||||
apiclient.RegisterConfigManagementPluginServiceServer(server, pluginService)
|
||||
|
||||
healthService := health.NewServer()
|
||||
grpc_health_v1.RegisterHealthServer(server, healthService)
|
||||
|
||||
// Register reflection service on gRPC server.
|
||||
reflection.Register(server)
|
||||
|
||||
return server
|
||||
}
|
||||
|
||||
func (a *ArgoCDCMPServer) Shutdown(address string) {
|
||||
defer signal.Stop(a.stopCh)
|
||||
a.sig = <-a.stopCh
|
||||
_ = os.Remove(address)
|
||||
close(a.doneCh)
|
||||
}
|
||||
@@ -54,6 +54,12 @@ const (
|
||||
DefaultGnuPgHomePath = "/app/config/gpg/keys"
|
||||
// Default path to repo server TLS endpoint config
|
||||
DefaultAppConfigPath = "/app/config"
|
||||
// Default path to cmp server plugin socket file
|
||||
DefaultPluginSockFilePath = "/home/argocd/cmp-server/plugins"
|
||||
// Default path to cmp server plugin configuration file
|
||||
DefaultPluginConfigFilePath = "/home/argocd/cmp-server/config"
|
||||
// Plugin Config File is a ConfigManagementPlugin manifest located inside the plugin container
|
||||
PluginConfigFileName = "plugin.yaml"
|
||||
)
|
||||
|
||||
// Argo CD application related constants
|
||||
@@ -70,6 +76,9 @@ const (
|
||||
ChangePasswordSSOTokenMaxAge = time.Minute * 5
|
||||
// GithubAppCredsExpirationDuration is the default time used to cache the GitHub app credentials
|
||||
GithubAppCredsExpirationDuration = time.Minute * 60
|
||||
|
||||
// PasswordPatten is the default password patten
|
||||
PasswordPatten = `^.{8,32}$`
|
||||
)
|
||||
|
||||
// Dex related constants
|
||||
@@ -110,6 +119,9 @@ const (
|
||||
// LabelValueSecretTypeRepoCreds indicates a secret type of repository credentials
|
||||
LabelValueSecretTypeRepoCreds = "repo-creds"
|
||||
|
||||
// The Argo CD application name is used as the instance name
|
||||
AnnotationKeyAppInstance = "argocd.argoproj.io/tracking-id"
|
||||
|
||||
// AnnotationCompareOptions is a comma-separated list of options for comparison
|
||||
AnnotationCompareOptions = "argocd.argoproj.io/compare-options"
|
||||
|
||||
@@ -141,6 +153,12 @@ const (
|
||||
EnvVarTLSDataPath = "ARGOCD_TLS_DATA_PATH"
|
||||
// Specifies the number of git remote operation attempts
|
||||
EnvGitAttemptsCount = "ARGOCD_GIT_ATTEMPTS_COUNT"
|
||||
// Specifies max duration of git remote operation retry
|
||||
EnvGitRetryMaxDuration = "ARGOCD_GIT_RETRY_MAX_DURATION"
|
||||
// Specifies duration of git remote operation retry
|
||||
EnvGitRetryDuration = "ARGOCD_GIT_RETRY_DURATION"
|
||||
// Specifies factor of git remote operation retry
|
||||
EnvGitRetryFactor = "ARGOCD_GIT_RETRY_FACTOR"
|
||||
// Overrides git submodule support, true by default
|
||||
EnvGitSubmoduleEnabled = "ARGOCD_GIT_MODULES_ENABLED"
|
||||
// EnvGnuPGHome is the path to ArgoCD's GnuPG keyring for signature verification
|
||||
@@ -169,6 +187,10 @@ const (
|
||||
EnvLogFormat = "ARGOCD_LOG_FORMAT"
|
||||
// EnvLogLevel log level that is defined by `--loglevel` option
|
||||
EnvLogLevel = "ARGOCD_LOG_LEVEL"
|
||||
// EnvMaxCookieNumber max number of chunks a cookie can be broken into
|
||||
EnvMaxCookieNumber = "ARGOCD_MAX_COOKIE_NUMBER"
|
||||
// EnvPluginSockFilePath allows overriding the plugin socket file path for the repo server and cmp server
|
||||
EnvPluginSockFilePath = "ARGOCD_PLUGINSOCKFILEPATH"
|
||||
)
|
||||
|
||||
const (
|
||||
@@ -181,6 +203,12 @@ const (
|
||||
CacheVersion = "1.8.3"
|
||||
)
|
||||
|
||||
const (
|
||||
DefaultGitRetryMaxDuration time.Duration = time.Second * 5 // 5s
|
||||
DefaultGitRetryDuration time.Duration = time.Millisecond * 250 // 0.25s
|
||||
DefaultGitRetryFactor = int64(2)
|
||||
)
|
||||
|
||||
// GetGnuPGHomePath retrieves the path to use for GnuPG home directory, which is either taken from GNUPGHOME environment or a default value
|
||||
func GetGnuPGHomePath() string {
|
||||
if gnuPgHome := os.Getenv(EnvGnuPGHome); gnuPgHome == "" {
|
||||
@@ -189,3 +217,12 @@ func GetGnuPGHomePath() string {
|
||||
return gnuPgHome
|
||||
}
|
||||
}
|
||||
|
||||
// GetPluginSockFilePath retrieves the path of the plugin socket file, which is either taken from the EnvPluginSockFilePath environment variable or a default value
|
||||
func GetPluginSockFilePath() string {
|
||||
if pluginSockFilePath := os.Getenv(EnvPluginSockFilePath); pluginSockFilePath == "" {
|
||||
return DefaultPluginSockFilePath
|
||||
} else {
|
||||
return pluginSockFilePath
|
||||
}
|
||||
}
|
||||
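GetPluginSockFilePath mirrors GetGnuPGHomePath: the environment variable wins, otherwise the default constant is used. A trivial sketch of the behavior (the override value is illustrative; assumes the os and fmt imports):

_ = os.Setenv(common.EnvPluginSockFilePath, "/tmp/cmp-plugins")
fmt.Println(common.GetPluginSockFilePath()) // /tmp/cmp-plugins

_ = os.Unsetenv(common.EnvPluginSockFilePath)
fmt.Println(common.GetPluginSockFilePath()) // /home/argocd/cmp-server/plugins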
|
||||
@@ -112,6 +112,7 @@ type ApplicationController struct {
|
||||
metricsServer *metrics.MetricsServer
|
||||
kubectlSemaphore *semaphore.Weighted
|
||||
clusterFilter func(cluster *appv1.Cluster) bool
|
||||
projByNameCache sync.Map
|
||||
}
|
||||
|
||||
// NewApplicationController creates new instance of ApplicationController.
|
||||
@@ -127,6 +128,7 @@ func NewApplicationController(
|
||||
selfHealTimeout time.Duration,
|
||||
metricsPort int,
|
||||
metricsCacheExpiration time.Duration,
|
||||
metricsApplicationLabels []string,
|
||||
kubectlParallelismLimit int64,
|
||||
clusterFilter func(cluster *appv1.Cluster) bool,
|
||||
) (*ApplicationController, error) {
|
||||
@@ -151,6 +153,7 @@ func NewApplicationController(
|
||||
settingsMgr: settingsMgr,
|
||||
selfHealTimeout: selfHealTimeout,
|
||||
clusterFilter: clusterFilter,
|
||||
projByNameCache: sync.Map{},
|
||||
}
|
||||
if kubectlParallelismLimit > 0 {
|
||||
ctrl.kubectlSemaphore = semaphore.NewWeighted(kubectlParallelismLimit)
|
||||
@@ -163,16 +166,26 @@ func NewApplicationController(
|
||||
AddFunc: func(obj interface{}) {
|
||||
if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
|
||||
ctrl.projectRefreshQueue.Add(key)
|
||||
if projMeta, ok := obj.(metav1.Object); ok {
|
||||
ctrl.InvalidateProjectsCache(projMeta.GetName())
|
||||
}
|
||||
|
||||
}
|
||||
},
|
||||
UpdateFunc: func(old, new interface{}) {
|
||||
if key, err := cache.MetaNamespaceKeyFunc(new); err == nil {
|
||||
ctrl.projectRefreshQueue.Add(key)
|
||||
if projMeta, ok := new.(metav1.Object); ok {
|
||||
ctrl.InvalidateProjectsCache(projMeta.GetName())
|
||||
}
|
||||
}
|
||||
},
|
||||
DeleteFunc: func(obj interface{}) {
|
||||
if key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj); err == nil {
|
||||
ctrl.projectRefreshQueue.Add(key)
|
||||
if projMeta, ok := obj.(metav1.Object); ok {
|
||||
ctrl.InvalidateProjectsCache(projMeta.GetName())
|
||||
}
|
||||
}
|
||||
},
|
||||
})
|
||||
@@ -180,7 +193,7 @@ func NewApplicationController(
|
||||
var err error
|
||||
ctrl.metricsServer, err = metrics.NewMetricsServer(metricsAddr, appLister, ctrl.canProcessApp, func(r *http.Request) error {
|
||||
return nil
|
||||
})
|
||||
}, metricsApplicationLabels)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -190,8 +203,8 @@ func NewApplicationController(
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settingsMgr, kubectl, ctrl.metricsServer, ctrl.handleObjectUpdated, clusterFilter)
|
||||
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectl, ctrl.settingsMgr, stateCache, projInformer, ctrl.metricsServer, argoCache, ctrl.statusRefreshTimeout)
|
||||
stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settingsMgr, kubectl, ctrl.metricsServer, ctrl.handleObjectUpdated, clusterFilter, argo.NewResourceTracking())
|
||||
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectl, ctrl.settingsMgr, stateCache, projInformer, ctrl.metricsServer, argoCache, ctrl.statusRefreshTimeout, argo.NewResourceTracking())
|
||||
ctrl.appInformer = appInformer
|
||||
ctrl.appLister = appLister
|
||||
ctrl.projInformer = projInformer
|
||||
@@ -201,6 +214,19 @@ func NewApplicationController(
|
||||
return &ctrl, nil
|
||||
}
|
||||
|
||||
func (ctrl *ApplicationController) InvalidateProjectsCache(names ...string) {
|
||||
if len(names) > 0 {
|
||||
for _, name := range names {
|
||||
ctrl.projByNameCache.Delete(name)
|
||||
}
|
||||
} else {
|
||||
ctrl.projByNameCache.Range(func(key, _ interface{}) bool {
|
||||
ctrl.projByNameCache.Delete(key)
|
||||
return true
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func (ctrl *ApplicationController) GetMetricsServer() *metrics.MetricsServer {
|
||||
return ctrl.metricsServer
|
||||
}
|
||||
@@ -230,8 +256,35 @@ func isSelfReferencedApp(app *appv1.Application, ref v1.ObjectReference) bool {
|
||||
gvk.Kind == application.ApplicationKind
|
||||
}
|
||||
|
||||
func (ctrl *ApplicationController) newAppProjCache(name string) *appProjCache {
|
||||
return &appProjCache{name: name, ctrl: ctrl}
|
||||
}
|
||||
|
||||
type appProjCache struct {
|
||||
name string
|
||||
ctrl *ApplicationController
|
||||
|
||||
lock sync.Mutex
|
||||
appProj *appv1.AppProject
|
||||
}
|
||||
|
||||
func (projCache *appProjCache) GetAppProject(ctx context.Context) (*appv1.AppProject, error) {
|
||||
projCache.lock.Lock()
|
||||
defer projCache.lock.Unlock()
|
||||
if projCache.appProj != nil {
|
||||
return projCache.appProj, nil
|
||||
}
|
||||
proj, err := argo.GetAppProjectByName(projCache.name, applisters.NewAppProjectLister(projCache.ctrl.projInformer.GetIndexer()), projCache.ctrl.namespace, projCache.ctrl.settingsMgr, projCache.ctrl.db, ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
projCache.appProj = proj
|
||||
return projCache.appProj, nil
|
||||
}
|
||||
|
||||
func (ctrl *ApplicationController) getAppProj(app *appv1.Application) (*appv1.AppProject, error) {
|
||||
return argo.GetAppProject(&app.Spec, applisters.NewAppProjectLister(ctrl.projInformer.GetIndexer()), ctrl.namespace, ctrl.settingsMgr)
|
||||
projCache, _ := ctrl.projByNameCache.LoadOrStore(app.Spec.GetProject(), ctrl.newAppProjCache(app.Spec.GetProject()))
|
||||
return projCache.(*appProjCache).GetAppProject(context.TODO())
|
||||
}
|
||||
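getAppProj now goes through a per-project lazy cache: sync.Map.LoadOrStore guarantees a single appProjCache entry per project name even under concurrent lookups, and the entry's mutex makes the expensive GetAppProjectByName call happen at most once until InvalidateProjectsCache drops the entry. The underlying pattern, sketched independently of the controller types (Project and loadProject are illustrative placeholders):

var projects sync.Map // project name -> *entry

type entry struct {
	mu  sync.Mutex
	val *Project // nil until the first successful load
}

func getProject(name string) (*Project, error) {
	e, _ := projects.LoadOrStore(name, &entry{})
	ent := e.(*entry)
	ent.mu.Lock()
	defer ent.mu.Unlock()
	if ent.val != nil {
		return ent.val, nil
	}
	p, err := loadProject(name) // expensive lookup, e.g. lister with DB fallback
	if err != nil {
		return nil, err // not cached, so the next caller retries
	}
	ent.val = p
	return p, nil
}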
|
||||
func (ctrl *ApplicationController) handleObjectUpdated(managedByApp map[string]bool, ref v1.ObjectReference) {
|
||||
@@ -244,12 +297,8 @@ func (ctrl *ApplicationController) handleObjectUpdated(managedByApp map[string]b
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
// exclude the resource unless it is permitted in the app project. If the resource is not permitted, it is not controlled by the user and there is no point showing the warning.
|
||||
if proj, err := ctrl.getAppProj(app); err == nil && proj.IsGroupKindPermitted(ref.GroupVersionKind().GroupKind(), true) &&
|
||||
!isKnownOrphanedResourceExclusion(kube.NewResourceKey(ref.GroupVersionKind().Group, ref.GroupVersionKind().Kind, ref.Namespace, ref.Name), proj) {
|
||||
|
||||
managedByApp[app.Name] = false
|
||||
}
|
||||
managedByApp[app.Name] = true
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -276,17 +325,21 @@ func (ctrl *ApplicationController) handleObjectUpdated(managedByApp map[string]b
|
||||
func (ctrl *ApplicationController) setAppManagedResources(a *appv1.Application, comparisonResult *comparisonResult) (*appv1.ApplicationTree, error) {
|
||||
managedResources, err := ctrl.managedResources(comparisonResult)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, fmt.Errorf("error getting managed resources: %s", err)
|
||||
}
|
||||
tree, err := ctrl.getResourceTree(a, managedResources)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, fmt.Errorf("error getting resource tree: %s", err)
|
||||
}
|
||||
err = ctrl.cache.SetAppResourcesTree(a.Name, tree)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, fmt.Errorf("error setting app resource tree: %s", err)
|
||||
}
|
||||
return tree, ctrl.cache.SetAppManagedResources(a.Name, managedResources)
|
||||
err = ctrl.cache.SetAppManagedResources(a.Name, managedResources)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error setting app managed resources: %s", err)
|
||||
}
|
||||
return tree, nil
|
||||
}
|
||||
|
||||
// returns true if the given resource exists in the namespace by default and is not managed by the user
|
||||
@@ -316,7 +369,7 @@ func isKnownOrphanedResourceExclusion(key kube.ResourceKey, proj *appv1.AppProje
|
||||
func (ctrl *ApplicationController) getResourceTree(a *appv1.Application, managedResources []*appv1.ResourceDiff) (*appv1.ApplicationTree, error) {
|
||||
nodes := make([]appv1.ResourceNode, 0)
|
||||
|
||||
proj, err := argo.GetAppProject(&a.Spec, applisters.NewAppProjectLister(ctrl.projInformer.GetIndexer()), ctrl.namespace, ctrl.settingsMgr)
|
||||
proj, err := ctrl.getAppProj(a)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -510,18 +563,18 @@ func (ctrl *ApplicationController) managedResources(comparisonResult *comparison
|
||||
var err error
|
||||
target, live, err = diff.HideSecretData(res.Target, res.Live)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, fmt.Errorf("error hiding secret data: %s", err)
|
||||
}
|
||||
compareOptions, err := ctrl.settingsMgr.GetResourceCompareOptions()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, fmt.Errorf("error getting resource compare options: %s", err)
|
||||
}
|
||||
resDiffPtr, err := diff.Diff(target, live,
|
||||
diff.WithNormalizer(comparisonResult.diffNormalizer),
|
||||
diff.WithLogr(logutils.NewLogrusLogger(logutils.NewWithCurrentConfig())),
|
||||
diff.IgnoreAggregatedRoles(compareOptions.IgnoreAggregatedRoles))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, fmt.Errorf("error applying diff: %s", err)
|
||||
}
|
||||
resDiff = *resDiffPtr
|
||||
}
|
||||
@@ -529,7 +582,7 @@ func (ctrl *ApplicationController) managedResources(comparisonResult *comparison
|
||||
if live != nil {
|
||||
data, err := json.Marshal(live)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, fmt.Errorf("error marshaling live json: %s", err)
|
||||
}
|
||||
item.LiveState = string(data)
|
||||
} else {
|
||||
@@ -539,7 +592,7 @@ func (ctrl *ApplicationController) managedResources(comparisonResult *comparison
|
||||
if target != nil {
|
||||
data, err := json.Marshal(target)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, fmt.Errorf("error marshaling target json: %s", err)
|
||||
}
|
||||
item.TargetState = string(data)
|
||||
} else {
|
||||
@@ -665,6 +718,18 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
|
||||
}
|
||||
app := origApp.DeepCopy()
|
||||
|
||||
if app.Operation != nil {
|
||||
// If we get here, we are about to process an operation, but we cannot rely on the informer since it might have stale data.
|
||||
// So always retrieve the latest version to ensure it is not stale to avoid unnecessary syncing.
|
||||
// We cannot rely on informer since applications might be updated by both application controller and api server.
|
||||
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace).Get(context.Background(), app.ObjectMeta.Name, metav1.GetOptions{})
|
||||
if err != nil {
|
||||
log.Errorf("Failed to retrieve latest application state: %v", err)
|
||||
return
|
||||
}
|
||||
app = freshApp
|
||||
}
|
||||
|
||||
if app.Operation != nil {
|
||||
ctrl.processRequestedAppOperation(app)
|
||||
} else if app.DeletionTimestamp != nil && app.CascadedDeletion() {
|
||||
@@ -789,7 +854,7 @@ func (ctrl *ApplicationController) getPermittedAppLiveObjects(app *appv1.Applica
|
||||
}
|
||||
// Don't delete live resources which are not permitted in the app project
|
||||
for k, v := range objsMap {
|
||||
if !proj.IsLiveResourcePermitted(v, app.Spec.Destination.Server) {
|
||||
if !proj.IsLiveResourcePermitted(v, app.Spec.Destination.Server, app.Spec.Destination.Name) {
|
||||
delete(objsMap, k)
|
||||
}
|
||||
}
|
||||
@@ -1085,7 +1150,7 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
|
||||
}
|
||||
|
||||
appClient := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace)
|
||||
patchedApp, err := appClient.Patch(context.Background(), app.Name, types.MergePatchType, patchJSON, metav1.PatchOptions{})
|
||||
_, err = appClient.Patch(context.Background(), app.Name, types.MergePatchType, patchJSON, metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
// Stop retrying updating deleted application
|
||||
if apierr.IsNotFound(err) {
|
||||
@@ -1115,10 +1180,6 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
|
||||
ctrl.auditLogger.LogAppEvent(app, eventInfo, strings.Join(messages, " "))
|
||||
ctrl.metricsServer.IncSync(app, state)
|
||||
}
|
||||
// write back to informer in order to avoid stale cache
|
||||
if err := ctrl.appInformer.GetStore().Update(patchedApp); err != nil {
|
||||
log.Warnf("Fails to update informer: %v", err)
|
||||
}
|
||||
return nil
|
||||
})
|
||||
}
|
||||
@@ -1586,7 +1647,7 @@ func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.Shar
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
proj, err := ctrl.getAppProj(app)
|
||||
proj, err := applisters.NewAppProjectLister(ctrl.projInformer.GetIndexer()).AppProjects(ctrl.namespace).Get(app.Spec.GetProject())
|
||||
if err != nil {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
@@ -106,6 +106,7 @@ func newFakeController(data *fakeData) *ApplicationController {
|
||||
time.Minute,
|
||||
common.DefaultPortArgoCDMetrics,
|
||||
data.metricsCacheExpiration,
|
||||
[]string{},
|
||||
0,
|
||||
nil,
|
||||
)
|
||||
@@ -784,11 +785,11 @@ func TestHandleOrphanedResourceUpdated(t *testing.T) {
|
||||
|
||||
isRequested, level := ctrl.isRefreshRequested(app1.Name)
|
||||
assert.True(t, isRequested)
|
||||
assert.Equal(t, ComparisonWithNothing, level)
|
||||
assert.Equal(t, CompareWithRecent, level)
|
||||
|
||||
isRequested, level = ctrl.isRefreshRequested(app2.Name)
|
||||
assert.True(t, isRequested)
|
||||
assert.Equal(t, ComparisonWithNothing, level)
|
||||
assert.Equal(t, CompareWithRecent, level)
|
||||
}
|
||||
|
||||
func TestGetResourceTree_HasOrphanedResources(t *testing.T) {
|
||||
@@ -1083,12 +1084,10 @@ func TestProcessRequestedAppOperation_FailedNoRetries(t *testing.T) {
|
||||
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
|
||||
receivedPatch := map[string]interface{}{}
|
||||
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
patchedApp := &v1alpha1.Application{}
|
||||
if patchAction, ok := action.(kubetesting.PatchAction); ok {
|
||||
assert.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
|
||||
assert.NoError(t, json.Unmarshal(patchAction.GetPatch(), &patchedApp))
|
||||
}
|
||||
return true, patchedApp, nil
|
||||
return true, nil, nil
|
||||
})
|
||||
|
||||
ctrl.processRequestedAppOperation(app)
|
||||
@@ -1110,12 +1109,10 @@ func TestProcessRequestedAppOperation_InvalidDestination(t *testing.T) {
|
||||
fakeAppCs.Lock()
|
||||
defer fakeAppCs.Unlock()
|
||||
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
patchedApp := &v1alpha1.Application{}
|
||||
if patchAction, ok := action.(kubetesting.PatchAction); ok {
|
||||
assert.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
|
||||
assert.NoError(t, json.Unmarshal(patchAction.GetPatch(), &patchedApp))
|
||||
}
|
||||
return true, patchedApp, nil
|
||||
return true, nil, nil
|
||||
})
|
||||
}()
|
||||
|
||||
@@ -1138,12 +1135,10 @@ func TestProcessRequestedAppOperation_FailedHasRetries(t *testing.T) {
|
||||
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
|
||||
receivedPatch := map[string]interface{}{}
|
||||
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
patchedApp := &v1alpha1.Application{}
|
||||
if patchAction, ok := action.(kubetesting.PatchAction); ok {
|
||||
assert.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
|
||||
assert.NoError(t, json.Unmarshal(patchAction.GetPatch(), &patchedApp))
|
||||
}
|
||||
return true, patchedApp, nil
|
||||
return true, nil, nil
|
||||
})
|
||||
|
||||
ctrl.processRequestedAppOperation(app)
|
||||
@@ -1183,12 +1178,10 @@ func TestProcessRequestedAppOperation_RunningPreviouslyFailed(t *testing.T) {
|
||||
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
|
||||
receivedPatch := map[string]interface{}{}
|
||||
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
patchedApp := &v1alpha1.Application{}
|
||||
if patchAction, ok := action.(kubetesting.PatchAction); ok {
|
||||
assert.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
|
||||
assert.NoError(t, json.Unmarshal(patchAction.GetPatch(), &patchedApp))
|
||||
}
|
||||
return true, patchedApp, nil
|
||||
return true, nil, nil
|
||||
})
|
||||
|
||||
ctrl.processRequestedAppOperation(app)
|
||||
@@ -1218,12 +1211,10 @@ func TestProcessRequestedAppOperation_HasRetriesTerminated(t *testing.T) {
|
||||
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
|
||||
receivedPatch := map[string]interface{}{}
|
||||
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
|
||||
patchedApp := &v1alpha1.Application{}
|
||||
if patchAction, ok := action.(kubetesting.PatchAction); ok {
|
||||
assert.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
|
||||
assert.NoError(t, json.Unmarshal(patchAction.GetPatch(), &patchedApp))
|
||||
}
|
||||
return true, patchedApp, nil
|
||||
return true, nil, nil
|
||||
})
|
||||
|
||||
ctrl.processRequestedAppOperation(app)
|
||||
|
||||
controller/cache/cache.go (vendored, 104 changed lines) @@ -3,7 +3,7 @@ package cache
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"math"
|
||||
"reflect"
|
||||
"sync"
|
||||
"time"
|
||||
@@ -24,6 +24,7 @@ import (
|
||||
appv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
"github.com/argoproj/argo-cd/v2/util/argo"
|
||||
"github.com/argoproj/argo-cd/v2/util/db"
|
||||
"github.com/argoproj/argo-cd/v2/util/env"
|
||||
logutils "github.com/argoproj/argo-cd/v2/util/log"
|
||||
"github.com/argoproj/argo-cd/v2/util/lua"
|
||||
"github.com/argoproj/argo-cd/v2/util/settings"
|
||||
@@ -32,25 +33,47 @@ import (
|
||||
const (
|
||||
// EnvClusterCacheResyncDuration is the env variable that holds cluster cache re-sync duration
|
||||
EnvClusterCacheResyncDuration = "ARGOCD_CLUSTER_CACHE_RESYNC_DURATION"
|
||||
|
||||
// EnvClusterCacheWatchResyncDuration is the env variable that holds cluster cache watch re-sync duration
|
||||
EnvClusterCacheWatchResyncDuration = "ARGOCD_CLUSTER_CACHE_WATCH_RESYNC_DURATION"
|
||||
|
||||
// EnvClusterCacheListPageSize is the env variable to control size of the list page size when making K8s queries
|
||||
EnvClusterCacheListPageSize = "ARGOCD_CLUSTER_CACHE_LIST_PAGE_SIZE"
|
||||
|
||||
// EnvClusterCacheListSemaphore is the env variable to control size of the list semaphore
|
||||
// This is used to limit the number of concurrent memory consuming operations on the
|
||||
// k8s list queries results across all clusters to avoid memory spikes during cache initialization.
|
||||
EnvClusterCacheListSemaphore = "ARGOCD_CLUSTER_CACHE_LIST_SEMAPHORE"
|
||||
)
|
||||
|
||||
// GitOps engine cluster cache tuning options
|
||||
var (
|
||||
// K8SClusterResyncDuration controls the duration of cluster cache refresh
|
||||
K8SClusterResyncDuration = 12 * time.Hour
|
||||
// clusterCacheResyncDuration controls the duration of cluster cache refresh.
|
||||
// NOTE: this differs from gitops-engine default of 24h
|
||||
clusterCacheResyncDuration = 12 * time.Hour
|
||||
|
||||
// clusterCacheWatchResyncDuration controls the maximum duration that group/kind watches are allowed to run
|
||||
// for before relisting & restarting the watch
|
||||
clusterCacheWatchResyncDuration = 10 * time.Minute
|
||||
|
||||
// The default limit of 50 is chosen based on experiments.
|
||||
clusterCacheListSemaphoreSize int64 = 50
|
||||
|
||||
// clusterCacheListPageSize is the page size when performing K8s list requests.
|
||||
// 500 is equal to kubectl's size
|
||||
clusterCacheListPageSize int64 = 500
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
if clusterResyncDurationStr := os.Getenv(EnvClusterCacheResyncDuration); clusterResyncDurationStr != "" {
|
||||
if duration, err := time.ParseDuration(clusterResyncDurationStr); err == nil {
|
||||
K8SClusterResyncDuration = duration
|
||||
}
|
||||
}
|
||||
clusterCacheResyncDuration = env.ParseDurationFromEnv(EnvClusterCacheResyncDuration, clusterCacheResyncDuration, 0, math.MaxInt64)
|
||||
clusterCacheWatchResyncDuration = env.ParseDurationFromEnv(EnvClusterCacheWatchResyncDuration, clusterCacheWatchResyncDuration, 0, math.MaxInt64)
|
||||
clusterCacheListPageSize = env.ParseInt64FromEnv(EnvClusterCacheListPageSize, clusterCacheListPageSize, 0, math.MaxInt64)
|
||||
clusterCacheListSemaphoreSize = env.ParseInt64FromEnv(EnvClusterCacheListSemaphore, clusterCacheListSemaphoreSize, 0, math.MaxInt64)
|
||||
}
|
||||
|
||||
type LiveStateCache interface {
|
||||
// Returns k8s server version
|
||||
GetVersionsInfo(serverURL string) (string, []metav1.APIGroup, error)
|
||||
GetVersionsInfo(serverURL string) (string, []kube.APIResourceInfo, error)
|
||||
// Returns true if the given group kind is a namespaced resource
|
||||
IsNamespaced(server string, gk schema.GroupKind) (bool, error)
|
||||
// Returns synced cluster cache
|
||||
@@ -105,19 +128,19 @@ func NewLiveStateCache(
|
||||
kubectl kube.Kubectl,
|
||||
metricsServer *metrics.MetricsServer,
|
||||
onObjectUpdated ObjectUpdatedHandler,
|
||||
clusterFilter func(cluster *appv1.Cluster) bool) LiveStateCache {
|
||||
clusterFilter func(cluster *appv1.Cluster) bool,
|
||||
resourceTracking argo.ResourceTracking) LiveStateCache {
|
||||
|
||||
return &liveStateCache{
|
||||
appInformer: appInformer,
|
||||
db: db,
|
||||
clusters: make(map[string]clustercache.ClusterCache),
|
||||
onObjectUpdated: onObjectUpdated,
|
||||
kubectl: kubectl,
|
||||
settingsMgr: settingsMgr,
|
||||
metricsServer: metricsServer,
|
||||
// The default limit of 50 is chosen based on experiments.
|
||||
listSemaphore: semaphore.NewWeighted(50),
|
||||
clusterFilter: clusterFilter,
|
||||
appInformer: appInformer,
|
||||
db: db,
|
||||
clusters: make(map[string]clustercache.ClusterCache),
|
||||
onObjectUpdated: onObjectUpdated,
|
||||
kubectl: kubectl,
|
||||
settingsMgr: settingsMgr,
|
||||
metricsServer: metricsServer,
|
||||
clusterFilter: clusterFilter,
|
||||
resourceTracking: resourceTracking,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -127,17 +150,14 @@ type cacheSettings struct {
|
||||
}
|
||||
|
||||
type liveStateCache struct {
|
||||
db db.ArgoDB
|
||||
appInformer cache.SharedIndexInformer
|
||||
onObjectUpdated ObjectUpdatedHandler
|
||||
kubectl kube.Kubectl
|
||||
settingsMgr *settings.SettingsManager
|
||||
metricsServer *metrics.MetricsServer
|
||||
clusterFilter func(cluster *appv1.Cluster) bool
|
||||
|
||||
// listSemaphore is used to limit the number of concurrent memory consuming operations on the
|
||||
// k8s list queries results across all clusters to avoid memory spikes during cache initialization.
|
||||
listSemaphore *semaphore.Weighted
|
||||
db db.ArgoDB
|
||||
appInformer cache.SharedIndexInformer
|
||||
onObjectUpdated ObjectUpdatedHandler
|
||||
kubectl kube.Kubectl
|
||||
settingsMgr *settings.SettingsManager
|
||||
metricsServer *metrics.MetricsServer
|
||||
clusterFilter func(cluster *appv1.Cluster) bool
|
||||
resourceTracking argo.ResourceTracking
|
||||
|
||||
clusters map[string]clustercache.ClusterCache
|
||||
cacheSettings cacheSettings
|
||||
@@ -285,9 +305,12 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
|
||||
return nil, fmt.Errorf("controller is configured to ignore cluster %s", cluster.Server)
|
||||
}
|
||||
|
||||
clusterCache = clustercache.NewClusterCache(cluster.RESTConfig(),
|
||||
clustercache.SetListSemaphore(c.listSemaphore),
|
||||
clustercache.SetResyncTimeout(K8SClusterResyncDuration),
|
||||
trackingMethod := argo.GetTrackingMethod(c.settingsMgr)
|
||||
clusterCacheOpts := []clustercache.UpdateSettingsFunc{
|
||||
clustercache.SetListSemaphore(semaphore.NewWeighted(clusterCacheListSemaphoreSize)),
|
||||
clustercache.SetListPageSize(clusterCacheListPageSize),
|
||||
clustercache.SetWatchResyncTimeout(clusterCacheWatchResyncDuration),
|
||||
clustercache.SetResyncTimeout(clusterCacheResyncDuration),
|
||||
clustercache.SetSettings(cacheSettings.clusterSettings),
|
||||
clustercache.SetNamespaces(cluster.Namespaces),
|
||||
clustercache.SetClusterResources(cluster.ClusterResources),
|
||||
@@ -295,7 +318,8 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
|
||||
res := &ResourceInfo{}
|
||||
populateNodeInfo(un, res)
|
||||
res.Health, _ = health.GetResourceHealth(un, cacheSettings.clusterSettings.ResourceHealthOverride)
|
||||
appName := kube.GetAppInstanceLabel(un, cacheSettings.appInstanceLabelKey)
|
||||
|
||||
appName := c.resourceTracking.GetAppName(un, cacheSettings.appInstanceLabelKey, trackingMethod)
|
||||
if isRoot && appName != "" {
|
||||
res.AppName = appName
|
||||
}
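Replacing `kube.GetAppInstanceLabel` with `resourceTracking.GetAppName` lets resource ownership be resolved via the configured tracking method rather than only the app instance label. A hedged sketch of how that tracking method is typically selected; the `application.resourceTrackingMethod` key and its values are assumptions to verify against the Argo CD 2.2 settings documentation:

```bash
# Assumption: the tracking method is read from argocd-cm; common values are
# "label", "annotation" and "annotation+label".
kubectl -n argocd patch configmap argocd-cm --type merge \
  -p '{"data":{"application.resourceTrackingMethod":"annotation"}}'
```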
|
||||
@@ -306,7 +330,9 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
|
||||
return res, res.AppName != "" || gvk.Kind == kube.CustomResourceDefinitionKind
|
||||
}),
|
||||
clustercache.SetLogr(logutils.NewLogrusLogger(log.WithField("server", cluster.Server))),
|
||||
)
|
||||
}
|
||||
|
||||
clusterCache = clustercache.NewClusterCache(cluster.RESTConfig(), clusterCacheOpts...)
|
||||
|
||||
_ = clusterCache.OnResourceUpdated(func(newRes *clustercache.Resource, oldRes *clustercache.Resource, namespaceResources map[kube.ResourceKey]*clustercache.Resource) {
|
||||
toNotify := make(map[string]bool)
|
||||
@@ -419,12 +445,12 @@ func (c *liveStateCache) GetManagedLiveObjs(a *appv1.Application, targetObjs []*
|
||||
})
|
||||
}
|
||||
|
||||
func (c *liveStateCache) GetVersionsInfo(serverURL string) (string, []metav1.APIGroup, error) {
|
||||
func (c *liveStateCache) GetVersionsInfo(serverURL string) (string, []kube.APIResourceInfo, error) {
|
||||
clusterInfo, err := c.getSyncedCluster(serverURL)
|
||||
if err != nil {
|
||||
return "", nil, err
|
||||
}
|
||||
return clusterInfo.GetServerVersion(), clusterInfo.GetAPIGroups(), nil
|
||||
return clusterInfo.GetServerVersion(), clusterInfo.GetAPIResources(), nil
|
||||
}
|
||||
|
||||
func (c *liveStateCache) isClusterHasApps(apps []interface{}, cluster *appv1.Cluster) bool {
62 changed lines: controller/cache/info.go (vendored)
@@ -1,9 +1,12 @@
|
||||
package cache
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
|
||||
"github.com/argoproj/gitops-engine/pkg/utils/kube"
|
||||
"github.com/argoproj/gitops-engine/pkg/utils/text"
|
||||
v1 "k8s.io/api/core/v1"
|
||||
@@ -86,16 +89,36 @@ func populateServiceInfo(un *unstructured.Unstructured, res *ResourceInfo) {
|
||||
res.NetworkingInfo = &v1alpha1.ResourceNetworkingInfo{TargetLabels: targetLabels, Ingress: ingress}
|
||||
}
|
||||
|
||||
func getServiceName(backend map[string]interface{}, gvk schema.GroupVersionKind) (string, error) {
|
||||
switch gvk.Group {
|
||||
case "extensions":
|
||||
return fmt.Sprintf("%s", backend["serviceName"]), nil
|
||||
case "networking.k8s.io":
|
||||
switch gvk.Version {
|
||||
case "v1beta1":
|
||||
return fmt.Sprintf("%s", backend["serviceName"]), nil
|
||||
case "v1":
|
||||
if service, ok, err := unstructured.NestedMap(backend, "service"); ok && err == nil {
|
||||
return fmt.Sprintf("%s", service["name"]), nil
|
||||
}
|
||||
}
|
||||
}
|
||||
return "", errors.New("unable to resolve string")
|
||||
}
|
||||
|
||||
func populateIngressInfo(un *unstructured.Unstructured, res *ResourceInfo) {
|
||||
ingress := getIngress(un)
|
||||
targetsMap := make(map[v1alpha1.ResourceRef]bool)
|
||||
gvk := un.GroupVersionKind()
|
||||
if backend, ok, err := unstructured.NestedMap(un.Object, "spec", "backend"); ok && err == nil {
|
||||
targetsMap[v1alpha1.ResourceRef{
|
||||
Group: "",
|
||||
Kind: kube.ServiceKind,
|
||||
Namespace: un.GetNamespace(),
|
||||
Name: fmt.Sprintf("%s", backend["serviceName"]),
|
||||
}] = true
|
||||
if serviceName, err := getServiceName(backend, gvk); err == nil {
|
||||
targetsMap[v1alpha1.ResourceRef{
|
||||
Group: "",
|
||||
Kind: kube.ServiceKind,
|
||||
Namespace: un.GetNamespace(),
|
||||
Name: serviceName,
|
||||
}] = true
|
||||
}
|
||||
}
|
||||
urlsSet := make(map[string]bool)
|
||||
if rules, ok, err := unstructured.NestedSlice(un.Object, "spec", "rules"); ok && err == nil {
|
||||
@@ -123,13 +146,15 @@ func populateIngressInfo(un *unstructured.Unstructured, res *ResourceInfo) {
|
||||
continue
|
||||
}
|
||||
|
||||
if serviceName, ok, err := unstructured.NestedString(path, "backend", "serviceName"); ok && err == nil {
|
||||
targetsMap[v1alpha1.ResourceRef{
|
||||
Group: "",
|
||||
Kind: kube.ServiceKind,
|
||||
Namespace: un.GetNamespace(),
|
||||
Name: serviceName,
|
||||
}] = true
|
||||
if backend, ok, err := unstructured.NestedMap(path, "backend"); ok && err == nil {
|
||||
if serviceName, err := getServiceName(backend, gvk); err == nil {
|
||||
targetsMap[v1alpha1.ResourceRef{
|
||||
Group: "",
|
||||
Kind: kube.ServiceKind,
|
||||
Namespace: un.GetNamespace(),
|
||||
Name: serviceName,
|
||||
}] = true
|
||||
}
|
||||
}
|
||||
|
||||
if host == nil || host == "" {
|
||||
@@ -146,6 +171,17 @@ func populateIngressInfo(un *unstructured.Unstructured, res *ResourceInfo) {
|
||||
tlshost := tlsline["host"]
|
||||
if tlshost == host {
|
||||
stringPort = "https"
|
||||
continue
|
||||
}
|
||||
if hosts := tlsline["hosts"]; hosts != nil {
|
||||
tlshosts, ok := tlsline["hosts"].(map[string]interface{})
|
||||
if ok {
|
||||
for j := range tlshosts {
|
||||
if tlshosts[j] == host {
|
||||
stringPort = "https"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
93 changed lines: controller/cache/info_test.go (vendored)
@@ -133,6 +133,43 @@ var (
|
||||
ingress:
|
||||
- ip: 107.178.210.11`)
|
||||
|
||||
testIngressNetworkingV1 = strToUnstructured(`
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: helm-guestbook
|
||||
namespace: default
|
||||
uid: "4"
|
||||
spec:
|
||||
backend:
|
||||
service:
|
||||
name: not-found-service
|
||||
port:
|
||||
number: 443
|
||||
rules:
|
||||
- host: helm-guestbook.com
|
||||
http:
|
||||
paths:
|
||||
- backend:
|
||||
service:
|
||||
name: helm-guestbook
|
||||
port:
|
||||
number: 443
|
||||
path: /
|
||||
- backend:
|
||||
service:
|
||||
name: helm-guestbook
|
||||
port:
|
||||
name: https
|
||||
path: /
|
||||
tls:
|
||||
- host: helm-guestbook.com
|
||||
secretName: my-tls-secret
|
||||
status:
|
||||
loadBalancer:
|
||||
ingress:
|
||||
- ip: 107.178.210.11`)
|
||||
|
||||
testIstioVirtualService = strToUnstructured(`
|
||||
apiVersion: networking.istio.io/v1alpha3
|
||||
kind: VirtualService
|
||||
@@ -255,27 +292,35 @@ func TestGetIstioVirtualServiceInfo(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestGetIngressInfo(t *testing.T) {
|
||||
info := &ResourceInfo{}
|
||||
populateNodeInfo(testIngress, info)
|
||||
assert.Equal(t, 0, len(info.Info))
|
||||
sort.Slice(info.NetworkingInfo.TargetRefs, func(i, j int) bool {
|
||||
return strings.Compare(info.NetworkingInfo.TargetRefs[j].Name, info.NetworkingInfo.TargetRefs[i].Name) < 0
|
||||
})
|
||||
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{
|
||||
Ingress: []v1.LoadBalancerIngress{{IP: "107.178.210.11"}},
|
||||
TargetRefs: []v1alpha1.ResourceRef{{
|
||||
Namespace: "default",
|
||||
Group: "",
|
||||
Kind: kube.ServiceKind,
|
||||
Name: "not-found-service",
|
||||
}, {
|
||||
Namespace: "default",
|
||||
Group: "",
|
||||
Kind: kube.ServiceKind,
|
||||
Name: "helm-guestbook",
|
||||
}},
|
||||
ExternalURLs: []string{"https://helm-guestbook.com/"},
|
||||
}, info.NetworkingInfo)
|
||||
var tests = []struct {
|
||||
Ingress *unstructured.Unstructured
|
||||
}{
|
||||
{testIngress},
|
||||
{testIngressNetworkingV1},
|
||||
}
|
||||
for _, tc := range tests {
|
||||
info := &ResourceInfo{}
|
||||
populateNodeInfo(tc.Ingress, info)
|
||||
assert.Equal(t, 0, len(info.Info))
|
||||
sort.Slice(info.NetworkingInfo.TargetRefs, func(i, j int) bool {
|
||||
return strings.Compare(info.NetworkingInfo.TargetRefs[j].Name, info.NetworkingInfo.TargetRefs[i].Name) < 0
|
||||
})
|
||||
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{
|
||||
Ingress: []v1.LoadBalancerIngress{{IP: "107.178.210.11"}},
|
||||
TargetRefs: []v1alpha1.ResourceRef{{
|
||||
Namespace: "default",
|
||||
Group: "",
|
||||
Kind: kube.ServiceKind,
|
||||
Name: "not-found-service",
|
||||
}, {
|
||||
Namespace: "default",
|
||||
Group: "",
|
||||
Kind: kube.ServiceKind,
|
||||
Name: "helm-guestbook",
|
||||
}},
|
||||
ExternalURLs: []string{"https://helm-guestbook.com/"},
|
||||
}, info.NetworkingInfo)
|
||||
}
|
||||
}
|
||||
|
||||
func TestGetIngressInfoWildCardPath(t *testing.T) {
|
||||
@@ -396,7 +441,7 @@ func TestGetIngressInfoNoHost(t *testing.T) {
|
||||
}
|
||||
func TestExternalUrlWithSubPath(t *testing.T) {
|
||||
ingress := strToUnstructured(`
|
||||
apiVersion: extensions/v1beta1
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: helm-guestbook
|
||||
@@ -424,7 +469,7 @@ func TestExternalUrlWithSubPath(t *testing.T) {
|
||||
}
|
||||
func TestExternalUrlWithMultipleSubPaths(t *testing.T) {
|
||||
ingress := strToUnstructured(`
|
||||
apiVersion: extensions/v1beta1
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: helm-guestbook
|
||||
@@ -463,7 +508,7 @@ func TestExternalUrlWithMultipleSubPaths(t *testing.T) {
|
||||
}
|
||||
func TestExternalUrlWithNoSubPath(t *testing.T) {
|
||||
ingress := strToUnstructured(`
|
||||
apiVersion: extensions/v1beta1
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: helm-guestbook
10 changed lines: controller/cache/mocks/LiveStateCache.go (vendored)
@@ -17,8 +17,6 @@ import (
|
||||
|
||||
unstructured "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
|
||||
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
|
||||
v1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
|
||||
)
|
||||
|
||||
@@ -113,7 +111,7 @@ func (_m *LiveStateCache) GetNamespaceTopLevelResources(server string, namespace
|
||||
}
|
||||
|
||||
// GetVersionsInfo provides a mock function with given fields: serverURL
|
||||
func (_m *LiveStateCache) GetVersionsInfo(serverURL string) (string, []v1.APIGroup, error) {
|
||||
func (_m *LiveStateCache) GetVersionsInfo(serverURL string) (string, []kube.APIResourceInfo, error) {
|
||||
ret := _m.Called(serverURL)
|
||||
|
||||
var r0 string
|
||||
@@ -123,12 +121,12 @@ func (_m *LiveStateCache) GetVersionsInfo(serverURL string) (string, []v1.APIGro
|
||||
r0 = ret.Get(0).(string)
|
||||
}
|
||||
|
||||
var r1 []v1.APIGroup
|
||||
if rf, ok := ret.Get(1).(func(string) []v1.APIGroup); ok {
|
||||
var r1 []kube.APIResourceInfo
|
||||
if rf, ok := ret.Get(1).(func(string) []kube.APIResourceInfo); ok {
|
||||
r1 = rf(serverURL)
|
||||
} else {
|
||||
if ret.Get(1) != nil {
|
||||
r1 = ret.Get(1).([]v1.APIGroup)
|
||||
r1 = ret.Get(1).([]kube.APIResourceInfo)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -106,7 +106,7 @@ func (c *clusterInfoUpdater) updateClusterInfo(cluster appv1.Cluster, info *cach
|
||||
}
|
||||
if info != nil {
|
||||
clusterInfo.ServerVersion = info.K8SVersion
|
||||
clusterInfo.APIVersions = argo.APIGroupsToVersions(info.APIGroups)
|
||||
clusterInfo.APIVersions = argo.APIResourcesToStrings(info.APIResources, false)
|
||||
if info.LastCacheSyncTime == nil {
|
||||
clusterInfo.ConnectionState.Status = appv1.ConnectionStatusUnknown
|
||||
} else if info.SyncError == nil {
|
||||
|
||||
@@ -41,6 +41,12 @@ var (
|
||||
descClusterDefaultLabels,
|
||||
nil,
|
||||
)
|
||||
descClusterConnectionStatus = prometheus.NewDesc(
|
||||
"argocd_cluster_connection_status",
|
||||
"The k8s cluster current connection status.",
|
||||
append(descClusterDefaultLabels, "k8s_version"),
|
||||
nil,
|
||||
)
|
||||
)
|
||||
|
||||
type HasClustersInfo interface {
|
||||
@@ -77,9 +83,11 @@ func (c *clusterCollector) Describe(ch chan<- *prometheus.Desc) {
|
||||
ch <- descClusterCacheResources
|
||||
ch <- descClusterAPIs
|
||||
ch <- descClusterCacheAgeSeconds
|
||||
ch <- descClusterConnectionStatus
|
||||
}
|
||||
|
||||
func (c *clusterCollector) Collect(ch chan<- prometheus.Metric) {
|
||||
|
||||
now := time.Now()
|
||||
for _, c := range c.info {
|
||||
defaultValues := []string{c.Server}
|
||||
@@ -91,5 +99,6 @@ func (c *clusterCollector) Collect(ch chan<- prometheus.Metric) {
|
||||
cacheAgeSeconds = int(now.Sub(*c.LastCacheSyncTime).Seconds())
|
||||
}
|
||||
ch <- prometheus.MustNewConstMetric(descClusterCacheAgeSeconds, prometheus.GaugeValue, float64(cacheAgeSeconds), defaultValues...)
|
||||
ch <- prometheus.MustNewConstMetric(descClusterConnectionStatus, prometheus.GaugeValue, boolFloat64(c.SyncError == nil), append(defaultValues, c.K8SVersion)...)
|
||||
}
|
||||
}
104 added lines: controller/metrics/clustercollector_test.go (new file)
@@ -0,0 +1,104 @@
|
||||
package metrics
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"testing"
|
||||
|
||||
gitopsCache "github.com/argoproj/gitops-engine/pkg/cache"
|
||||
)
|
||||
|
||||
func TestMetricClusterConnectivity(t *testing.T) {
|
||||
type testCases struct {
|
||||
testCombination
|
||||
skip bool
|
||||
description string
|
||||
metricLabels []string
|
||||
clustersInfo []gitopsCache.ClusterInfo
|
||||
}
|
||||
cases := []testCases{
|
||||
{
|
||||
description: "metric will have value 1 if connected with the cluster",
|
||||
skip: false,
|
||||
metricLabels: []string{"non-existing"},
|
||||
testCombination: testCombination{
|
||||
applications: []string{fakeApp},
|
||||
responseContains: `
|
||||
# TYPE argocd_cluster_connection_status gauge
|
||||
argocd_cluster_connection_status{k8s_version="1.21",server="server1"} 1
|
||||
`,
|
||||
},
|
||||
clustersInfo: []gitopsCache.ClusterInfo{
|
||||
{
|
||||
Server: "server1",
|
||||
K8SVersion: "1.21",
|
||||
SyncError: nil,
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
description: "metric will have value 0 if not connected with the cluster",
|
||||
skip: false,
|
||||
metricLabels: []string{"non-existing"},
|
||||
testCombination: testCombination{
|
||||
applications: []string{fakeApp},
|
||||
responseContains: `
|
||||
# TYPE argocd_cluster_connection_status gauge
|
||||
argocd_cluster_connection_status{k8s_version="1.21",server="server1"} 0
|
||||
`,
|
||||
},
|
||||
clustersInfo: []gitopsCache.ClusterInfo{
|
||||
{
|
||||
Server: "server1",
|
||||
K8SVersion: "1.21",
|
||||
SyncError: errors.New("error connecting with cluster"),
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
description: "will have one metric per cluster",
|
||||
skip: false,
|
||||
metricLabels: []string{"non-existing"},
|
||||
testCombination: testCombination{
|
||||
applications: []string{fakeApp},
|
||||
responseContains: `
|
||||
# TYPE argocd_cluster_connection_status gauge
|
||||
argocd_cluster_connection_status{k8s_version="1.21",server="server1"} 1
|
||||
argocd_cluster_connection_status{k8s_version="1.21",server="server2"} 1
|
||||
argocd_cluster_connection_status{k8s_version="1.21",server="server3"} 1
|
||||
`,
|
||||
},
|
||||
clustersInfo: []gitopsCache.ClusterInfo{
|
||||
{
|
||||
Server: "server1",
|
||||
K8SVersion: "1.21",
|
||||
SyncError: nil,
|
||||
},
|
||||
{
|
||||
Server: "server2",
|
||||
K8SVersion: "1.21",
|
||||
SyncError: nil,
|
||||
},
|
||||
{
|
||||
Server: "server3",
|
||||
K8SVersion: "1.21",
|
||||
SyncError: nil,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
c := c
|
||||
t.Run(c.description, func(t *testing.T) {
|
||||
if !c.skip {
|
||||
cfg := TestMetricServerConfig{
|
||||
FakeAppYAMLs: c.applications,
|
||||
ExpectedResponse: c.responseContains,
|
||||
AppLabels: c.metricLabels,
|
||||
ClustersInfo: c.clustersInfo,
|
||||
}
|
||||
runTest(t, cfg)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
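Beyond the unit tests, the new gauge can be smoke-tested against a running controller. This assumes the metrics endpoint is reachable on localhost:8082, matching the port used by the test server above; adjust for your deployment:

```bash
# Inspect the new cluster connectivity gauge on the controller metrics endpoint.
curl -s http://localhost:8082/metrics | grep '^argocd_cluster_connection_status'
```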
@@ -7,6 +7,7 @@ import (
|
||||
"net/http"
|
||||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/argoproj/gitops-engine/pkg/health"
|
||||
@@ -20,6 +21,7 @@ import (
|
||||
applister "github.com/argoproj/argo-cd/v2/pkg/client/listers/application/v1alpha1"
|
||||
"github.com/argoproj/argo-cd/v2/util/git"
|
||||
"github.com/argoproj/argo-cd/v2/util/healthz"
|
||||
"github.com/argoproj/argo-cd/v2/util/profile"
|
||||
)
|
||||
|
||||
type MetricsServer struct {
|
||||
@@ -49,6 +51,8 @@ const (
|
||||
var (
|
||||
descAppDefaultLabels = []string{"namespace", "name", "project"}
|
||||
|
||||
descAppLabels *prometheus.Desc
|
||||
|
||||
descAppInfo = prometheus.NewDesc(
|
||||
"argocd_app_info",
|
||||
"Information about application.",
|
||||
@@ -121,7 +125,7 @@ var (
|
||||
redisRequestCounter = prometheus.NewCounterVec(
|
||||
prometheus.CounterOpts{
|
||||
Name: "argocd_redis_request_total",
|
||||
Help: "Number of kubernetes requests executed during application reconciliation.",
|
||||
Help: "Number of redis requests executed during application reconciliation.",
|
||||
},
|
||||
[]string{"hostname", "initiator", "failed"},
|
||||
)
|
||||
@@ -137,19 +141,31 @@ var (
|
||||
)
|
||||
|
||||
// NewMetricsServer returns a new prometheus server which collects application metrics
|
||||
func NewMetricsServer(addr string, appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, healthCheck func(r *http.Request) error) (*MetricsServer, error) {
|
||||
func NewMetricsServer(addr string, appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, healthCheck func(r *http.Request) error, appLabels []string) (*MetricsServer, error) {
|
||||
hostname, err := os.Hostname()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if len(appLabels) > 0 {
|
||||
normalizedLabels := normalizeLabels("label", appLabels)
|
||||
descAppLabels = prometheus.NewDesc(
|
||||
"argocd_app_labels",
|
||||
"Argo Application labels converted to Prometheus labels",
|
||||
append(descAppDefaultLabels, normalizedLabels...),
|
||||
nil,
|
||||
)
|
||||
}
|
||||
|
||||
mux := http.NewServeMux()
|
||||
registry := NewAppRegistry(appLister, appFilter)
|
||||
registry := NewAppRegistry(appLister, appFilter, appLabels)
|
||||
mux.Handle(MetricsPath, promhttp.HandlerFor(prometheus.Gatherers{
|
||||
// contains app controller specific metrics
|
||||
registry,
|
||||
// contains process, golang and controller workqueues metrics
|
||||
prometheus.DefaultGatherer,
|
||||
}, promhttp.HandlerOpts{}))
|
||||
profile.RegisterProfiler(mux)
|
||||
healthz.ServeHealthCheck(mux, healthCheck)
|
||||
|
||||
registry.MustRegister(syncCounter)
|
||||
@@ -180,6 +196,17 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, appFil
|
||||
}, nil
|
||||
}
|
||||
|
||||
func normalizeLabels(prefix string, appLabels []string) []string {
|
||||
results := []string{}
|
||||
for _, label := range appLabels {
|
||||
//prometheus labels don't accept dash in their name
|
||||
curr := strings.ReplaceAll(label, "-", "_")
|
||||
result := fmt.Sprintf("%s_%s", prefix, curr)
|
||||
results = append(results, result)
|
||||
}
|
||||
return results
|
||||
}
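`normalizeLabels` turns an Application label such as `team-name` into the Prometheus label `label_team_name`, since dashes are not valid in Prometheus label names. Exposing labels is opt-in through the new `appLabels` parameter; in the released controller this is expected to be wired to a CLI flag. The flag name below is an assumption inferred from this change, so confirm it with `argocd-application-controller --help`:

```bash
# Assumed flag name; each listed Application label is emitted on the
# argocd_app_labels metric (team-name surfaces as label_team_name).
argocd-application-controller \
  --metrics-application-labels team-name \
  --metrics-application-labels team-bu
```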
|
||||
|
||||
func (m *MetricsServer) RegisterClustersInfoSource(ctx context.Context, source HasClustersInfo) {
|
||||
collector := &clusterCollector{infoSource: source}
|
||||
go collector.Run(ctx)
|
||||
@@ -272,25 +299,30 @@ func (m *MetricsServer) SetExpiration(cacheExpiration time.Duration) error {
|
||||
type appCollector struct {
|
||||
store applister.ApplicationLister
|
||||
appFilter func(obj interface{}) bool
|
||||
appLabels []string
|
||||
}
|
||||
|
||||
// NewAppCollector returns a prometheus collector for application metrics
|
||||
func NewAppCollector(appLister applister.ApplicationLister, appFilter func(obj interface{}) bool) prometheus.Collector {
|
||||
func NewAppCollector(appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, appLabels []string) prometheus.Collector {
|
||||
return &appCollector{
|
||||
store: appLister,
|
||||
appFilter: appFilter,
|
||||
appLabels: appLabels,
|
||||
}
|
||||
}
|
||||
|
||||
// NewAppRegistry creates a new prometheus registry that collects applications
|
||||
func NewAppRegistry(appLister applister.ApplicationLister, appFilter func(obj interface{}) bool) *prometheus.Registry {
|
||||
func NewAppRegistry(appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, appLabels []string) *prometheus.Registry {
|
||||
registry := prometheus.NewRegistry()
|
||||
registry.MustRegister(NewAppCollector(appLister, appFilter))
|
||||
registry.MustRegister(NewAppCollector(appLister, appFilter, appLabels))
|
||||
return registry
|
||||
}
|
||||
|
||||
// Describe implements the prometheus.Collector interface
|
||||
func (c *appCollector) Describe(ch chan<- *prometheus.Desc) {
|
||||
if len(c.appLabels) > 0 {
|
||||
ch <- descAppLabels
|
||||
}
|
||||
ch <- descAppInfo
|
||||
ch <- descAppSyncStatusCode
|
||||
ch <- descAppHealthStatus
|
||||
@@ -305,7 +337,7 @@ func (c *appCollector) Collect(ch chan<- prometheus.Metric) {
|
||||
}
|
||||
for _, app := range apps {
|
||||
if c.appFilter(app) {
|
||||
collectApps(ch, app)
|
||||
c.collectApps(ch, app)
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -317,7 +349,7 @@ func boolFloat64(b bool) float64 {
|
||||
return 0
|
||||
}
|
||||
|
||||
func collectApps(ch chan<- prometheus.Metric, app *argoappv1.Application) {
|
||||
func (c *appCollector) collectApps(ch chan<- prometheus.Metric, app *argoappv1.Application) {
|
||||
addConstMetric := func(desc *prometheus.Desc, t prometheus.ValueType, v float64, lv ...string) {
|
||||
project := app.Spec.GetProject()
|
||||
lv = append([]string{app.Namespace, app.Name, project}, lv...)
|
||||
@@ -344,6 +376,15 @@ func collectApps(ch chan<- prometheus.Metric, app *argoappv1.Application) {
|
||||
|
||||
addGauge(descAppInfo, 1, git.NormalizeGitURL(app.Spec.Source.RepoURL), app.Spec.Destination.Server, app.Spec.Destination.Namespace, string(syncStatus), string(healthStatus), operation)
|
||||
|
||||
if len(c.appLabels) > 0 {
|
||||
labelValues := []string{}
|
||||
for _, desiredLabel := range c.appLabels {
|
||||
value := app.GetLabels()[desiredLabel]
|
||||
labelValues = append(labelValues, value)
|
||||
}
|
||||
addGauge(descAppLabels, 1, labelValues...)
|
||||
}
|
||||
|
||||
// Deprecated controller metrics
|
||||
if os.Getenv(EnvVarLegacyControllerMetrics) == "true" {
|
||||
addGauge(descAppCreated, float64(app.CreationTimestamp.Unix()))
|
||||
|
||||
@@ -10,6 +10,7 @@ import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
gitopsCache "github.com/argoproj/gitops-engine/pkg/cache"
|
||||
"github.com/argoproj/gitops-engine/pkg/sync/common"
|
||||
"github.com/ghodss/yaml"
|
||||
"github.com/stretchr/testify/assert"
|
||||
@@ -29,6 +30,9 @@ kind: Application
|
||||
metadata:
|
||||
name: my-app
|
||||
namespace: argocd
|
||||
labels:
|
||||
team-name: my-team
|
||||
team-bu: bu-id
|
||||
spec:
|
||||
destination:
|
||||
namespace: dummy-namespace
|
||||
@@ -50,6 +54,9 @@ kind: Application
|
||||
metadata:
|
||||
name: my-app-2
|
||||
namespace: argocd
|
||||
labels:
|
||||
team-name: my-team
|
||||
team-bu: bu-id
|
||||
spec:
|
||||
destination:
|
||||
namespace: dummy-namespace
|
||||
@@ -77,6 +84,9 @@ metadata:
|
||||
name: my-app-3
|
||||
namespace: argocd
|
||||
deletionTimestamp: "2020-03-16T09:17:45Z"
|
||||
labels:
|
||||
team-name: my-team
|
||||
team-bu: bu-id
|
||||
spec:
|
||||
destination:
|
||||
namespace: dummy-namespace
|
||||
@@ -138,7 +148,7 @@ func newFakeLister(fakeAppYAMLs ...string) (context.CancelFunc, applister.Applic
|
||||
fakeApps = append(fakeApps, a)
|
||||
}
|
||||
appClientset := appclientset.NewSimpleClientset(fakeApps...)
|
||||
factory := appinformer.NewFilteredSharedInformerFactory(appClientset, 0, "argocd", func(options *metav1.ListOptions) {})
|
||||
factory := appinformer.NewSharedInformerFactoryWithOptions(appClientset, 0, appinformer.WithNamespace("argocd"), appinformer.WithTweakListOptions(func(options *metav1.ListOptions) {}))
|
||||
appInformer := factory.Argoproj().V1alpha1().Applications().Informer()
|
||||
go appInformer.Run(ctx.Done())
|
||||
if !cache.WaitForCacheSync(ctx.Done(), appInformer.HasSynced) {
|
||||
@@ -148,30 +158,71 @@ func newFakeLister(fakeAppYAMLs ...string) (context.CancelFunc, applister.Applic
|
||||
}
|
||||
|
||||
func testApp(t *testing.T, fakeAppYAMLs []string, expectedResponse string) {
|
||||
cancel, appLister := newFakeLister(fakeAppYAMLs...)
|
||||
t.Helper()
|
||||
testMetricServer(t, fakeAppYAMLs, expectedResponse, []string{})
|
||||
}
|
||||
|
||||
type fakeClusterInfo struct {
|
||||
clustersInfo []gitopsCache.ClusterInfo
|
||||
}
|
||||
|
||||
func (f *fakeClusterInfo) GetClustersInfo() []gitopsCache.ClusterInfo {
|
||||
return f.clustersInfo
|
||||
}
|
||||
|
||||
type TestMetricServerConfig struct {
|
||||
FakeAppYAMLs []string
|
||||
ExpectedResponse string
|
||||
AppLabels []string
|
||||
ClustersInfo []gitopsCache.ClusterInfo
|
||||
}
|
||||
|
||||
func testMetricServer(t *testing.T, fakeAppYAMLs []string, expectedResponse string, appLabels []string) {
|
||||
t.Helper()
|
||||
cfg := TestMetricServerConfig{
|
||||
FakeAppYAMLs: fakeAppYAMLs,
|
||||
ExpectedResponse: expectedResponse,
|
||||
AppLabels: appLabels,
|
||||
ClustersInfo: []gitopsCache.ClusterInfo{},
|
||||
}
|
||||
runTest(t, cfg)
|
||||
}
|
||||
|
||||
func runTest(t *testing.T, cfg TestMetricServerConfig) {
|
||||
t.Helper()
|
||||
cancel, appLister := newFakeLister(cfg.FakeAppYAMLs...)
|
||||
defer cancel()
|
||||
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck)
|
||||
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, cfg.AppLabels)
|
||||
assert.NoError(t, err)
|
||||
|
||||
if len(cfg.ClustersInfo) > 0 {
|
||||
ci := &fakeClusterInfo{clustersInfo: cfg.ClustersInfo}
|
||||
collector := &clusterCollector{
|
||||
infoSource: ci,
|
||||
info: ci.GetClustersInfo(),
|
||||
}
|
||||
metricsServ.registry.MustRegister(collector)
|
||||
}
|
||||
|
||||
req, err := http.NewRequest("GET", "/metrics", nil)
|
||||
assert.NoError(t, err)
|
||||
rr := httptest.NewRecorder()
|
||||
metricsServ.Handler.ServeHTTP(rr, req)
|
||||
assert.Equal(t, rr.Code, http.StatusOK)
|
||||
body := rr.Body.String()
|
||||
log.Println(body)
|
||||
assertMetricsPrinted(t, expectedResponse, body)
|
||||
assertMetricsPrinted(t, cfg.ExpectedResponse, body)
|
||||
}
|
||||
|
||||
type testCombination struct {
|
||||
applications []string
|
||||
expectedResponse string
|
||||
responseContains string
|
||||
}
|
||||
|
||||
func TestMetrics(t *testing.T) {
|
||||
combinations := []testCombination{
|
||||
{
|
||||
applications: []string{fakeApp, fakeApp2, fakeApp3},
|
||||
expectedResponse: `
|
||||
responseContains: `
|
||||
# HELP argocd_app_info Information about application.
|
||||
# TYPE argocd_app_info gauge
|
||||
argocd_app_info{dest_namespace="dummy-namespace",dest_server="https://localhost:6443",health_status="Degraded",name="my-app-3",namespace="argocd",operation="delete",project="important-project",repo="https://github.com/argoproj/argocd-example-apps",sync_status="OutOfSync"} 1
|
||||
@@ -181,7 +232,7 @@ argocd_app_info{dest_namespace="dummy-namespace",dest_server="https://localhost:
|
||||
},
|
||||
{
|
||||
applications: []string{fakeDefaultApp},
|
||||
expectedResponse: `
|
||||
responseContains: `
|
||||
# HELP argocd_app_info Information about application.
|
||||
# TYPE argocd_app_info gauge
|
||||
argocd_app_info{dest_namespace="dummy-namespace",dest_server="https://localhost:6443",health_status="Healthy",name="my-app",namespace="argocd",operation="",project="default",repo="https://github.com/argoproj/argocd-example-apps",sync_status="Synced"} 1
|
||||
@@ -190,7 +241,50 @@ argocd_app_info{dest_namespace="dummy-namespace",dest_server="https://localhost:
|
||||
}
|
||||
|
||||
for _, combination := range combinations {
|
||||
testApp(t, combination.applications, combination.expectedResponse)
|
||||
testApp(t, combination.applications, combination.responseContains)
|
||||
}
|
||||
}
|
||||
|
||||
func TestMetricLabels(t *testing.T) {
|
||||
type testCases struct {
|
||||
testCombination
|
||||
description string
|
||||
metricLabels []string
|
||||
}
|
||||
cases := []testCases{
|
||||
{
|
||||
description: "will return the labels metrics successfully",
|
||||
metricLabels: []string{"team-name", "team-bu"},
|
||||
testCombination: testCombination{
|
||||
applications: []string{fakeApp, fakeApp2, fakeApp3},
|
||||
responseContains: `
|
||||
# TYPE argocd_app_labels gauge
|
||||
argocd_app_labels{label_team_bu="bu-id",label_team_name="my-team",name="my-app",namespace="argocd",project="important-project"} 1
|
||||
argocd_app_labels{label_team_bu="bu-id",label_team_name="my-team",name="my-app-2",namespace="argocd",project="important-project"} 1
|
||||
argocd_app_labels{label_team_bu="bu-id",label_team_name="my-team",name="my-app-3",namespace="argocd",project="important-project"} 1
|
||||
`,
|
||||
},
|
||||
},
|
||||
{
|
||||
description: "metric will have empty label value if not present in the application",
|
||||
metricLabels: []string{"non-existing"},
|
||||
testCombination: testCombination{
|
||||
applications: []string{fakeApp, fakeApp2, fakeApp3},
|
||||
responseContains: `
|
||||
# TYPE argocd_app_labels gauge
|
||||
argocd_app_labels{label_non_existing="",name="my-app",namespace="argocd",project="important-project"} 1
|
||||
argocd_app_labels{label_non_existing="",name="my-app-2",namespace="argocd",project="important-project"} 1
|
||||
argocd_app_labels{label_non_existing="",name="my-app-3",namespace="argocd",project="important-project"} 1
|
||||
`,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
c := c
|
||||
t.Run(c.description, func(t *testing.T) {
|
||||
testMetricServer(t, c.applications, c.responseContains, c.metricLabels)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -222,7 +316,7 @@ argocd_app_sync_status{name="my-app",namespace="argocd",project="important-proje
|
||||
func TestMetricsSyncCounter(t *testing.T) {
|
||||
cancel, appLister := newFakeLister()
|
||||
defer cancel()
|
||||
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck)
|
||||
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{})
|
||||
assert.NoError(t, err)
|
||||
|
||||
appSyncTotal := `
|
||||
@@ -252,11 +346,12 @@ argocd_app_sync_total{dest_server="https://localhost:6443",name="my-app",namespa
|
||||
|
||||
// assertMetricsPrinted asserts every line in the expected lines appears in the body
|
||||
func assertMetricsPrinted(t *testing.T, expectedLines, body string) {
|
||||
t.Helper()
|
||||
for _, line := range strings.Split(expectedLines, "\n") {
|
||||
if line == "" {
|
||||
continue
|
||||
}
|
||||
assert.Contains(t, body, line)
|
||||
assert.Contains(t, body, line, "expected metrics mismatch")
|
||||
}
|
||||
}
|
||||
|
||||
@@ -273,7 +368,7 @@ func assertMetricsNotPrinted(t *testing.T, expectedLines, body string) {
|
||||
func TestReconcileMetrics(t *testing.T) {
|
||||
cancel, appLister := newFakeLister()
|
||||
defer cancel()
|
||||
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck)
|
||||
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{})
|
||||
assert.NoError(t, err)
|
||||
|
||||
appReconcileMetrics := `
|
||||
@@ -306,7 +401,7 @@ argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argoc
|
||||
func TestMetricsReset(t *testing.T) {
|
||||
cancel, appLister := newFakeLister()
|
||||
defer cancel()
|
||||
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck)
|
||||
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{})
|
||||
assert.NoError(t, err)
|
||||
|
||||
appSyncTotal := `
|
||||
|
||||
@@ -64,6 +64,7 @@ type AppStateManager interface {
|
||||
SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState)
|
||||
}
|
||||
|
||||
// comparisonResult holds the state of an application after the reconciliation
|
||||
type comparisonResult struct {
|
||||
syncStatus *v1alpha1.SyncStatus
|
||||
healthStatus *v1alpha1.HealthStatus
|
||||
@@ -98,6 +99,7 @@ type appStateManager struct {
|
||||
cache *appstatecache.Cache
|
||||
namespace string
|
||||
statusRefreshTimeout time.Duration
|
||||
resourceTracking argo.ResourceTracking
|
||||
}
|
||||
|
||||
func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1.ApplicationSource, appLabelKey, revision string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error) {
|
||||
@@ -148,12 +150,13 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
kustomizeOptions, err := kustomizeSettings.GetOptions(app.Spec.Source)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
ts.AddCheckpoint("build_options_ms")
|
||||
serverVersion, apiGroups, err := m.liveStateCache.GetVersionsInfo(app.Spec.Destination.Server)
|
||||
serverVersion, apiResources, err := m.liveStateCache.GetVersionsInfo(app.Spec.Destination.Server)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
@@ -171,9 +174,10 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
|
||||
Plugins: tools,
|
||||
KustomizeOptions: kustomizeOptions,
|
||||
KubeVersion: serverVersion,
|
||||
ApiVersions: argo.APIGroupsToVersions(apiGroups),
|
||||
ApiVersions: argo.APIResourcesToStrings(apiResources, true),
|
||||
VerifySignature: verifySignature,
|
||||
HelmRepoCreds: permittedHelmCredentials,
|
||||
TrackingMethod: string(argo.GetTrackingMethod(m.settingsMgr)),
|
||||
})
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
@@ -455,14 +459,16 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
|
||||
|
||||
// filter out all resources which are not permitted in the application project
|
||||
for k, v := range liveObjByKey {
|
||||
if !project.IsLiveResourcePermitted(v, app.Spec.Destination.Server) {
|
||||
if !project.IsLiveResourcePermitted(v, app.Spec.Destination.Server, app.Spec.Destination.Name) {
|
||||
delete(liveObjByKey, k)
|
||||
}
|
||||
}
|
||||
|
||||
trackingMethod := argo.GetTrackingMethod(m.settingsMgr)
|
||||
|
||||
for _, liveObj := range liveObjByKey {
|
||||
if liveObj != nil {
|
||||
appInstanceName := kubeutil.GetAppInstanceLabel(liveObj, appLabelKey)
|
||||
appInstanceName := m.resourceTracking.GetAppName(liveObj, appLabelKey, trackingMethod)
|
||||
if appInstanceName != "" && appInstanceName != app.Name {
|
||||
conditions = append(conditions, v1alpha1.ApplicationCondition{
|
||||
Type: v1alpha1.ApplicationConditionSharedResourceWarning,
|
||||
@@ -497,6 +503,10 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
|
||||
_, refreshRequested := app.IsRefreshRequested()
|
||||
noCache = noCache || refreshRequested || app.Status.Expired(m.statusRefreshTimeout)
|
||||
|
||||
for i := range reconciliation.Target {
|
||||
_ = m.resourceTracking.Normalize(reconciliation.Target[i], reconciliation.Live[i], appLabelKey, string(trackingMethod))
|
||||
}
|
||||
|
||||
if noCache || specChanged || revisionChanged || m.cache.GetAppManagedResources(app.Name, &cachedDiff) != nil {
|
||||
// (rare) cache miss
|
||||
diffResults, err = diff.DiffArray(reconciliation.Target, reconciliation.Live, diffOpts...)
|
||||
@@ -681,6 +691,7 @@ func NewAppStateManager(
|
||||
metricsServer *metrics.MetricsServer,
|
||||
cache *appstatecache.Cache,
|
||||
statusRefreshTimeout time.Duration,
|
||||
resourceTracking argo.ResourceTracking,
|
||||
) AppStateManager {
|
||||
return &appStateManager{
|
||||
liveStateCache: liveStateCache,
|
||||
@@ -694,5 +705,6 @@ func NewAppStateManager(
|
||||
projInformer: projInformer,
|
||||
metricsServer: metricsServer,
|
||||
statusRefreshTimeout: statusRefreshTimeout,
|
||||
resourceTracking: resourceTracking,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -60,6 +60,16 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
|
||||
return
|
||||
}
|
||||
syncOp = *state.Operation.Sync
|
||||
|
||||
// validates if it should fail the sync if it finds shared resources
|
||||
hasSharedResource, sharedResourceMessage := hasSharedResourceCondition(app)
|
||||
if syncOp.SyncOptions.HasOption("FailOnSharedResource=true") &&
|
||||
hasSharedResource {
|
||||
state.Phase = common.OperationFailed
|
||||
state.Message = fmt.Sprintf("Shared resouce found: %s", sharedResourceMessage)
|
||||
return
|
||||
}
|
||||
|
||||
if syncOp.Source == nil {
|
||||
// normal sync case (where source is taken from app.spec.source)
|
||||
source = app.Spec.Source
|
||||
@@ -87,7 +97,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
|
||||
revision = syncOp.Revision
|
||||
}
|
||||
|
||||
proj, err := argo.GetAppProject(&app.Spec, listersv1alpha1.NewAppProjectLister(m.projInformer.GetIndexer()), m.namespace, m.settingsMgr)
|
||||
proj, err := argo.GetAppProject(&app.Spec, listersv1alpha1.NewAppProjectLister(m.projInformer.GetIndexer()), m.namespace, m.settingsMgr, m.db, context.TODO())
|
||||
if err != nil {
|
||||
state.Phase = common.OperationError
|
||||
state.Message = fmt.Sprintf("Failed to load application project: %v", err)
|
||||
@@ -176,7 +186,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
|
||||
if !proj.IsGroupKindPermitted(un.GroupVersionKind().GroupKind(), res.Namespaced) {
|
||||
return fmt.Errorf("Resource %s:%s is not permitted in project %s.", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, proj.Name)
|
||||
}
|
||||
if res.Namespaced && !proj.IsDestinationPermitted(v1alpha1.ApplicationDestination{Namespace: un.GetNamespace(), Server: app.Spec.Destination.Server}) {
|
||||
if res.Namespaced && !proj.IsDestinationPermitted(v1alpha1.ApplicationDestination{Namespace: un.GetNamespace(), Server: app.Spec.Destination.Server, Name: app.Spec.Destination.Name}) {
|
||||
return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), proj.Name)
|
||||
}
|
||||
return nil
|
||||
@@ -245,6 +255,18 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
|
||||
}
|
||||
}
|
||||
|
||||
// hasSharedResourceCondition will check if the Application has any resource that has already
|
||||
// been synced by another Application. If the resource is found in another Application it returns
|
||||
// true along with a human readable message of which specific resource has this condition.
|
||||
func hasSharedResourceCondition(app *v1alpha1.Application) (bool, string) {
|
||||
for _, condition := range app.Status.Conditions {
|
||||
if condition.Type == v1alpha1.ApplicationConditionSharedResourceWarning {
|
||||
return true, condition.Message
|
||||
}
|
||||
}
|
||||
return false, ""
|
||||
}
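Together with the check earlier in `SyncAppState`, `hasSharedResourceCondition` turns the shared-resource warning into a hard failure when the `FailOnSharedResource=true` sync option is set. One way this could be enabled from the CLI, assuming the standard `--sync-option` flag of the `argocd` client (verify against your CLI version):

```bash
# Assumed CLI usage; the option string matches the HasOption check above.
argocd app set my-app --sync-option FailOnSharedResource=true
argocd app sync my-app   # fails fast if a resource is already owned by another app
```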
|
||||
|
||||
// delayBetweenSyncWaves is a gitops-engine SyncWaveHook which introduces an artificial delay
|
||||
// between each sync wave. We introduce an artificial delay in order give other controllers a
|
||||
// _chance_ to react to the spec change that we just applied. This is important because without
|
||||
|
||||
@@ -5,6 +5,7 @@ import (
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
"github.com/argoproj/gitops-engine/pkg/sync/common"
|
||||
"github.com/argoproj/gitops-engine/pkg/utils/kube"
|
||||
"github.com/stretchr/testify/assert"
|
||||
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
@@ -142,3 +143,71 @@ func TestSyncComparisonError(t *testing.T) {
|
||||
assert.NotEmpty(t, conditions)
|
||||
assert.Equal(t, "abc123", opState.SyncResult.Revision)
|
||||
}
|
||||
|
||||
func TestAppStateManager_SyncAppState(t *testing.T) {
|
||||
type fixture struct {
|
||||
project *v1alpha1.AppProject
|
||||
application *v1alpha1.Application
|
||||
controller *ApplicationController
|
||||
}
|
||||
|
||||
setup := func() *fixture {
|
||||
app := newFakeApp()
|
||||
app.Status.OperationState = nil
|
||||
app.Status.History = nil
|
||||
|
||||
project := &v1alpha1.AppProject{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: test.FakeArgoCDNamespace,
|
||||
Name: "default",
|
||||
},
|
||||
Spec: v1alpha1.AppProjectSpec{
|
||||
SignatureKeys: []v1alpha1.SignatureKey{{KeyID: "test"}},
|
||||
},
|
||||
}
|
||||
data := fakeData{
|
||||
apps: []runtime.Object{app, project},
|
||||
manifestResponse: &apiclient.ManifestResponse{
|
||||
Manifests: []string{},
|
||||
Namespace: test.FakeDestNamespace,
|
||||
Server: test.FakeClusterURL,
|
||||
Revision: "abc123",
|
||||
},
|
||||
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
|
||||
}
|
||||
ctrl := newFakeController(&data)
|
||||
|
||||
return &fixture{
|
||||
project: project,
|
||||
application: app,
|
||||
controller: ctrl,
|
||||
}
|
||||
}
|
||||
|
||||
t.Run("will fail the sync if finds shared resources", func(t *testing.T) {
|
||||
// given
|
||||
t.Parallel()
|
||||
f := setup()
|
||||
syncErrorMsg := "deployment already applied by another application"
|
||||
condition := v1alpha1.ApplicationCondition{
|
||||
Type: v1alpha1.ApplicationConditionSharedResourceWarning,
|
||||
Message: syncErrorMsg,
|
||||
}
|
||||
f.application.Status.Conditions = append(f.application.Status.Conditions, condition)
|
||||
|
||||
// Sync with source unspecified
|
||||
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
|
||||
Sync: &v1alpha1.SyncOperation{
|
||||
Source: &v1alpha1.ApplicationSource{},
|
||||
SyncOptions: []string{"FailOnSharedResource=true"},
|
||||
},
|
||||
}}
|
||||
|
||||
// when
|
||||
f.controller.appStateManager.SyncAppState(f.application, opState)
|
||||
|
||||
// then
|
||||
assert.Equal(t, common.OperationFailed, opState.Phase)
|
||||
assert.Contains(t, opState.Message, syncErrorMsg)
|
||||
})
|
||||
}
|
||||
|
||||
@@ -1 +1 @@
|
||||
Please refer to [the Contribution Guide](https://argo-cd.readthedocs.io/en/latest/developer-guide/contributing/)
|
||||
Please refer to [the Contribution Guide](https://argo-cd.readthedocs.io/en/latest/developer-guide/code-contributions/)
docs/assets/azure-enterprise-claims.png (mode change: executable file → normal file, 7.2 KiB)
docs/assets/azure-enterprise-saml-urls.png (mode change: executable file → normal file, 52 KiB)
docs/assets/azure-enterprise-users.png (mode change: executable file → normal file, 31 KiB)
docs/assets/banner.png (new binary file, 31 KiB)
docs/assets/google-admin-oidc-uris.png (new binary file, 19 KiB)
docs/assets/google-groups-membership.png (new binary file, 35 KiB)
@@ -15,7 +15,7 @@ Since the CI pipeline is triggered on Git commits, there is currently no (known)
|
||||
If you are absolutely sure that the failure was due to a failure in the pipeline, and not an error within the changes you committed, you can push an empty commit to your branch, thus retriggering the pipeline without any code changes. To do so, issue
|
||||
|
||||
```bash
|
||||
git commit --allow-empty -m "Retrigger CI pipeline"
|
||||
git commit -s --allow-empty -m "Retrigger CI pipeline"
|
||||
git push origin <yourbranch>
|
||||
```
|
||||
@@ -23,7 +23,9 @@ git push origin <yourbranch>
|
||||
|
||||
First, make sure the failing build step succeeds on your machine. Remember the containerized build toolchain is available, too.
|
||||
|
||||
If the build is failing at the `Ensuring Gopkg.lock is up-to-date` step, you need to update the dependencies before you push your commits. Run `make dep-ensure` and `make dep` and commit the changes to `Gopkg.lock` to your branch.
|
||||
If the build is failing at the `Ensure Go modules synchronicity` step, first download all Go module dependencies locally via `go mod download`, then run `go mod tidy` to make sure `go.mod` and `go.sum` are tidied up. Finally, commit and push the changes to `go.mod` and `go.sum` to your branch.
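In practice this boils down to the following sequence; the commands are taken from the paragraph above, and the commit message is just an example:

```bash
go mod download   # fetch all module dependencies locally
go mod tidy       # reconcile go.mod and go.sum with the code
git add go.mod go.sum
git commit -s -m "chore: tidy Go modules"
git push origin <yourbranch>
```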
|
||||
|
||||
If the build is failing at the `Build & cache Go code`, you need to make sure `make build-local` runs successfully on your local machine.
|
||||
|
||||
### Why does the codegen step fail?
|
||||
112 added lines: docs/developer-guide/code-contributions.md (new file)
@@ -0,0 +1,112 @@
|
||||
# Submitting code contributions to Argo CD
|
||||
|
||||
## Preface
|
||||
|
||||
The Argo CD project continuously grows, both in terms of features and community size. It gets adopted by more and more organisations which entrust Argo CD to handle their critical production workloads. Thus, we need to take great care with any changes that affect compatibility, performance, scalability, stability and security of Argo CD. For this reason, every new feature or larger enhancement must be properly designed and discussed before it gets accepted into the code base.
|
||||
|
||||
We do welcome and encourage everyone to participate in the Argo CD project, but please understand that we can't accept each and every contribution from the community, for various reasons.
|
||||
|
||||
If you want to submit code for a great new feature or enhancement, we kindly ask you to take a look at the
|
||||
enhancement process outlined below before you start to write code or submit a PR. This will ensure that your idea is well aligned with the project's strategy and technical requirements, and it will help greatly in getting your code merged into our code base.
|
||||
|
||||
Before submitting code for a new feature (and also, to some extent, for more complex bug fixes) please
|
||||
[raise an Enhancement Proposal or Bug Issue](https://github.com/argoproj/argo-cd/issues/new/choose)
|
||||
first.
|
||||
|
||||
Each enhancement proposal needs to go through our
|
||||
[triage process](#triage-process)
|
||||
before we accept code contributions. To facilitate triage and to provide transparency, we use
|
||||
[this GitHub project](https://github.com/orgs/argoproj/projects/18) to keep track of this process' outcome.
|
||||
|
||||
_Please_ do not spend too much time on larger features or refactorings before the corresponding enhancement has been triaged. This may save everyone some amount of frustration and time, as the enhancement proposal might be rejected, and the code would never get merged. However, sometimes it's helpful to have some PoC code along with a proposal.
|
||||
|
||||
We will do our best to triage incoming enhancement proposals quickly, with one of the following outcomes:
|
||||
|
||||
* Accepted
|
||||
* Rejected
|
||||
* Proposal requires a design document to be further discussed
|
||||
|
||||
Depending on how many enhancement proposals we receive at given times, it may take some time until we can look at yours.
|
||||
|
||||
Also, please make sure you have read our
|
||||
[Toolchain Guide](toolchain-guide.md)
|
||||
to understand our toolchain and our continuous integration processes. It contains some invaluable information to get started with the complex code base that makes up Argo CD.
|
||||
|
||||
## Quick start
|
||||
|
||||
If you want a quick start contributing to Argo CD, take a look at issues that are labeled with
|
||||
[help wanted](https://github.com/argoproj/argo-cd/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)
|
||||
or
|
||||
[good first issue](https://github.com/argoproj/argo-cd/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
|
||||
|
||||
These are issues that were already triaged and accepted.
|
||||
|
||||
If the issue is already attached to the next
|
||||
[version milestone](https://github.com/argoproj/argo-cd/milestones),
|
||||
we have decided to also dedicate some of our time on reviews to PRs received for these issues.
|
||||
|
||||
We encourage our community to pick up issues that are labeled in this way *and* are attached to the next version's milestone, with a promise that they will receive a proper review and that the incoming PRs are intended to be merged.
|
||||
|
||||
## Triage process
|
||||
|
||||
### Overview
|
||||
|
||||
Our triage process for enhancement proposals ensures that we take a look at all incoming enhancements to determine whether we will accept code submissions to implement them.
|
||||
|
||||
The process works as follows:
|
||||
|
||||
* New Enhancement Proposals raised on our GitHub issue tracker are moved to the _Incoming_ column of the project's board. These are the proposals that are in the queue for triage.
|
||||
* The _Active_ column holds the issues that are currently being triaged, or will be triaged next.
|
||||
* The _Accepted_ column holds the issues that have been triaged and are considered good to be implemented (e.g. the project agreed that the feature would be great to have)
|
||||
* The _Declined_ column holds the issues that were rejected during triage. The issue will be updated with information about why the proposal has been rejected.
|
||||
* The _Needs discussion_ column holds the issues that were found to require additional information, or even a design document, during triage.
|
||||
|
||||
### Triage cadence
|
||||
|
||||
Triage of enhancement proposals is performed transparently, both offline using issue comments and online in our weekly contributor's meeting. _Everyone_ is invited to participate in triaging; the process is not limited to maintainers.
|
||||
|
||||
Usually, we will triage enhancement proposals in First-In-First-Out order, which means that the oldest proposals will be triaged first.
|
||||
|
||||
We aim to triage at least 10 proposals a week. Depending on our available time, we may be triaging a higher or lower number of proposals in any given week.
|
||||
|
||||
## Proposal states
|
||||
|
||||
### Accepted proposals
|
||||
|
||||
When a proposal is considered _Accepted_, it was decided that this enhancement would be valuable to the community at large and fits into the overall strategic roadmap of the project.
|
||||
|
||||
Implementation of the issue may be started, either by the proposal's creator or another community member (including maintainers of the project).
|
||||
|
||||
The issue should be refined enough by now to contain any concerns and guidelines to be taken into consideration during implementation.
|
||||
|
||||
### Declined proposals
|
||||
|
||||
We don't decline proposals lightly, and we will do our best to give proper reasoning for why we think the proposal does not fit with the future of the project. Reasons for declining a proposal may be, amongst others, that the change would be breaking for many users, or that it does not meet the strategic direction of the project. Usually, a discussion will be facilitated with the enhancement's creator before declining a proposal.
|
||||
|
||||
Once a proposal is in _Declined_ state it's unlikely that we will accept code contributions for its implementation.
|
||||
|
||||
### Proposals that need discussion
|
||||
|
||||
Sometimes, we can't completely understand a proposal from its GitHub issue and require more information on the original intent or more details about the implementation. If we are confronted with such an issue during triage, we move it to the _Needs discussion_ column to indicate that we expect the issue's creator to supply more information on their idea. We may ask you to provide this information, either by adding it to the issue itself or by joining one of our
|
||||
[regular contributor's meeting](#regular-contributor-meeting)
|
||||
to discuss the proposal with us.
|
||||
|
||||
Also, issues that we find to require a more formal design document will be moved to this column.
|
||||
|
||||
## Design documents
|
||||
|
||||
For some enhancement proposals (especially those that will change behavior of Argo CD substantially, come with certain caveats, or where upgrade/downgrade paths are not clear), a more formal design document will be required in order to fully discuss and understand the enhancement in the broader community. This requirement is usually determined during triage. If you submitted an enhancement proposal, we may ask you to provide this more formal write-up, along with some concerns or topics that need to be addressed.
|
||||
|
||||
Design documents are usually submitted as a PR and use [this template](https://github.com/argoproj/argo-cd/blob/master/docs/proposals/001-proposal-template.md) as a guide for the kind of information we're looking for. Discussion takes place in the review process. When a design document gets merged, we consider it approved, and code can be written and submitted to implement this specific design.
|
||||
|
||||
## Regular contributor meeting
|
||||
|
||||
Our community regularly meets virtually to discuss issues, ideas and enhancements around Argo CD. We invite you to join these virtual meetings if you want to bring up certain topics (including your enhancement proposals), participate in our triaging, or just get to know other contributors.
|
||||
|
||||
The current cadence of our meetings is weekly, every Thursday at 4pm UTC (9am Pacific, 12pm Eastern, 6pm Central European, 9:30pm Indian). We use Zoom to conduct these meetings.
|
||||
|
||||
* [Agenda document (Google Docs, includes Zoom link)](https://docs.google.com/document/d/1xkoFkVviB70YBzSEa4bDnu-rUZ1sIFtwKKG1Uw8XsY8)
|
||||
|
||||
If you want to discuss something, we kindly ask you to put your item on the
|
||||
[agenda](https://docs.google.com/document/d/1xkoFkVviB70YBzSEa4bDnu-rUZ1sIFtwKKG1Uw8XsY8)
|
||||
for one of the upcoming meetings so that we can plan time to discuss it.
|
||||
@@ -1,327 +1,2 @@
|
||||
# Contribution guide
|
||||
|
||||
## Preface
|
||||
|
||||
We want to make contributing to ArgoCD as simple and smooth as possible.
|
||||
|
||||
This guide shall help you in setting up your build & test environment, so that you can start developing and testing bug fixes and feature enhancements without having to make too much effort in setting up a local toolchain.
|
||||
|
||||
If you want to submit a PR, please read this document carefully, as it contains important information guiding you through our PR quality gates.
|
||||
|
||||
As is the case with the development process, this document is under constant change. If you notice any error, or if you think this document is out-of-date, or if you think it is missing something: Feel free to submit a PR or submit a bug to our GitHub issue tracker.
|
||||
|
||||
If you need guidance with submitting a PR, or have any other questions regarding development of ArgoCD, do not hesitate to [join our Slack](https://argoproj.github.io/community/join-slack) and get in touch with us in the `#argo-dev` channel!
|
||||
|
||||
## Before you start
|
||||
|
||||
You will need at least the following things in your toolchain in order to develop and test ArgoCD locally:
|
||||
|
||||
* A Kubernetes cluster. You won't need a fully blown multi-master, multi-node cluster, but you will need something like K3S, Minikube or microk8s. You will also need a working Kubernetes client (`kubectl`) configuration in your development environment. The configuration must reside in `~/.kube/config` and the API server URL must point to the IP address of your local machine (or VM), and **not** to `localhost` or `127.0.0.1` if you are using the virtualized development toolchain (see below)
|
||||
|
||||
* You will also need a working Docker runtime environment, to be able to build and run images.
|
||||
The Docker version must be fairly recent, and support multi-stage builds. You should not work as root. Make your local user a member of the `docker` group to be able to control the Docker service on your machine.
|
||||
|
||||
* Obviously, you will need a `git` client for pulling source code and pushing back your changes.
|
||||
|
||||
* Last but not least, you will need a Go SDK and related tools (such as GNU `make`) installed and working on your development environment. The minimum required Go version for building and testing ArgoCD is **v1.16**.
|
||||
|
||||
* We will assume that your Go workspace is at `~/go`.
|
||||
|
||||
!!! note
|
||||
**Attention minikube users**: By default, minikube will create Kubernetes client configuration that uses authentication data from files. This is incompatible with the virtualized toolchain. So if you intend to use the virtualized toolchain, you have to embed this authentication data into the client configuration. To do so, start minikube using `minikube start --embed-certs`. Please also note that minikube using the Docker driver is currently not supported with the virtualized toolchain, because the Docker driver exposes the API server on 127.0.0.1 hard-coded. If in doubt, run `make verify-kube-connect` to find out.
|
||||
|
||||
## Submitting PRs
|
||||
|
||||
When you submit a PR against ArgoCD's GitHub repository, a couple of CI checks will be run automatically to ensure your changes will build fine and meet certain quality standards. Your contribution needs to pass those checks in order to be merged into the repository.
|
||||
|
||||
In general, it might be beneficial to only submit a PR for an existing issue. Especially for larger changes, an Enhancement Proposal should exist beforehand.
|
||||
|
||||
!!!note
|
||||
|
||||
Please make sure that you always create PRs from a branch that is up-to-date with the latest changes from ArgoCD's master branch. Depending on how long it takes for the maintainers to review and merge your PR, it might be necessary to pull in latest changes into your branch again.
|
||||
|
||||
Please understand that we, as an Open Source project, have limited capacity for reviewing and merging PRs to ArgoCD. We will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.
|
||||
|
||||
The following read will help you to submit a PR that meets the standards of our CI tests:
|
||||
|
||||
### Title of the PR
|
||||
|
||||
Please use a meaningful and concise title for your PR. This will help us to pick PRs for review quickly, and the PR title will also end up in the Changelog.
|
||||
|
||||
We use the [Semantic PR title checker](https://github.com/zeke/semantic-pull-requests) to categorize your PR into one of the following categories:
|
||||
|
||||
* `fix` - Your PR contains one or more code bug fixes
|
||||
* `feat` - Your PR contains a new feature
|
||||
* `docs` - Your PR improves the documentation
|
||||
* `chore` - Your PR improves any internals of ArgoCD, such as the build process, unit tests, etc
|
||||
|
||||
Please prefix the title of your PR with one of the valid categories. For example, if you chose the title `Add documentation for GitHub SSO integration` for your PR, please use `docs: Add documentation for GitHub SSO integration` instead.
|
||||
|
||||
### Contributor License Agreement
|
||||
|
||||
Every contributor to ArgoCD must have signed the current Contributor License Agreement (CLA). You only have to sign the CLA when you are a first time contributor, or when the agreement has changed since your last time signing it. The main purpose of the CLA is to ensure that you hold the required rights for your contribution. The CLA signing is an automated process.
|
||||
|
||||
You can read the current version of the CLA [here](https://cla-assistant.io/argoproj/argo-cd).
|
||||
|
||||
### PR template checklist
|
||||
|
||||
Upon opening a PR, the details will contain a checklist from a template. Please read the checklist, and tick those marks that apply to you.
|
||||
|
||||
### Automated builds & tests
|
||||
|
||||
After you have submitted your PR, and whenever you push new commits to that branch, GitHub will run a number of Continuous Integration checks against your code. It will execute the following actions, and each of them has to pass:
|
||||
|
||||
* Build the Go code (`make build`)
|
||||
* Generate API glue code and manifests (`make codegen`)
|
||||
* Run a Go linter on the code (`make lint`)
|
||||
* Run the unit tests (`make test`)
|
||||
* Run the End-to-End tests (`make test-e2e`)
|
||||
* Build and lint the UI code (`make lint-ui`)
|
||||
* Build the `argocd` CLI (`make cli`)
|
||||
|
||||
If any of these tests in the CI pipeline fail, it means that some of your contribution is considered faulty (or a test might be flaky, see below).
|
||||
|
||||
### Code test coverage
|
||||
|
||||
We use [CodeCov](https://codecov.io) in our CI pipeline to check for test coverage, and once you submit your PR, it will run and report on the coverage difference as a comment within your PR. If the difference is too high in the negative, i.e. your submission introduced a significant drop in code coverage, the CI check will fail.
|
||||
|
||||
Whenever you develop a new feature or submit a bug fix, please also write appropriate unit tests for it. If you write a completely new module, please aim for at least 80% of coverage.
|
||||
If you want to see how much coverage just a specific module (e.g. your new one) has, you can set the `TEST_MODULE` variable to the (fully qualified) name of that module when running `make test`, e.g.:
|
||||
|
||||
```bash
|
||||
make test TEST_MODULE=github.com/argoproj/argo-cd/server/cache
|
||||
...
|
||||
ok github.com/argoproj/argo-cd/server/cache 0.029s coverage: 89.3% of statements
|
||||
```
|
||||
|
||||
## Local vs Virtualized toolchain
|
||||
|
||||
ArgoCD provides a fully virtualized development and testing toolchain using Docker images. It is recommended to use those images, as they provide the same runtime environment as the final product and it is much easier to keep up-to-date with changes to the toolchain and dependencies. But as using Docker comes with a slight performance penalty, you might want to setup a local toolchain.
|
||||
|
||||
Most relevant targets for the build & test cycles in the `Makefile` provide two variants, one of them suffixed with `-local`. For example, `make test` will run unit tests in the Docker container, `make test-local` will run it natively on your local system.
|
||||
|
||||
If you are going to use the virtualized toolchain, please bear in mind the following things:
|
||||
|
||||
* Your Kubernetes API server must listen on the interface of your local machine or VM, and not on `127.0.0.1` only.
|
||||
* Your Kubernetes client configuration (`~/.kube/config`) must not use an API URL that points to `localhost` or `127.0.0.1`.
|
||||
|
||||
You can test whether the virtualized toolchain has access to your Kubernetes cluster by running `make verify-kube-connect` (*after* you have setup your development environment, as described below), which will run `kubectl version` inside the Docker container used for running all tests.
|
||||
|
||||
The Docker container for the virtualized toolchain will use the following local mounts from your workstation, and possibly modify its contents:
|
||||
|
||||
* `~/go/src` - Your Go workspace's source directory (modifications expected)
|
||||
* `~/.cache/go-build` - Your Go build cache (modifications expected)
|
||||
* `~/.kube` - Your Kubernetes client configuration (no modifications)
|
||||
* `/tmp` - Your system's temp directory (modifications expected)
|
||||
|
||||
## Setting up your development environment
|
||||
|
||||
The following steps are required no matter whether you chose to use a virtualized or a local toolchain.
|
||||
|
||||
### Clone the ArgoCD repository from your personal fork on GitHub
|
||||
|
||||
* `mkdir -p ~/go/src/github.com/argoproj`
|
||||
* `cd ~/go/src/github.com/argoproj`
|
||||
* `git clone https://github.com/yourghuser/argo-cd`
|
||||
* `cd argo-cd`
|
||||
|
||||
### Optional: Setup an additional Git remote
|
||||
|
||||
While everyone has their own Git workflow, the author of this document recommends to create a remote called `upstream` in your local copy pointing to the original ArgoCD repository. This way, you can easily keep your local branches up-to-date by merging in latest changes from the ArgoCD repository, i.e. by doing a `git pull upstream master` in your locally checked out branch. To create the remote, run `git remote add upstream https://github.com/argoproj/argo-cd`
|
||||
|
||||
### Install the must-have requirements
|
||||
|
||||
Make sure you fulfill the pre-requisites above and run some preliminary tests. None of them should report an error.
|
||||
|
||||
* Run `kubectl version`
|
||||
* Run `docker version`
|
||||
* Run `go version`
|
||||
|
||||
### Build (or pull) the required Docker image
|
||||
|
||||
Build the required Docker image by running `make test-tools-image` or pull the latest version by issuing `docker pull argoproj/argocd-test-tools`.
|
||||
|
||||
The `Dockerfile` used to build these images can be found at `test/container/Dockerfile`.
|
||||
|
||||
### Test connection from build container to your K8s cluster
|
||||
|
||||
Run `make verify-kube-connect`, it should execute without error.
|
||||
|
||||
If you receive an error similar to the following:
|
||||
|
||||
```
|
||||
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
|
||||
make: *** [Makefile:386: verify-kube-connect] Error 1
|
||||
```
|
||||
|
||||
you should edit your `~/.kube/config` and modify the `server` option to point to your correct K8s API (as described above).
|
||||
|
||||
### Using k3d
|
||||
|
||||
[k3d](https://github.com/rancher/k3d) is a lightweight wrapper to run [k3s](https://github.com/rancher/k3s), a minimal Kubernetes distribution, in docker. Because it's running in a docker container, you're dealing with docker's internal networking rules when using k3d. A typical Kubernetes cluster running on your local machine is part of the same network that you're on so you can access it using **kubectl**. However, a Kubernetes cluster running within a docker container (in this case, the one launched by make) cannot access 0.0.0.0 from inside the container itself, when 0.0.0.0 is a network resource outside the container itself (and/or the container's network). This is the cost of a fully self-contained, disposable Kubernetes cluster. The following steps should help with a successful `make verify-kube-connect` execution.
|
||||
|
||||
1. Find your host IP by executing `ifconfig` on Mac/Linux and `ipconfig` on Windows. For most users, the following command works to find the IP address.
|
||||
|
||||
* For Mac:
|
||||
|
||||
```
|
||||
IP=`ifconfig en0 | grep inet | grep -v inet6 | awk '{print $2}'`
|
||||
echo $IP
|
||||
```
|
||||
|
||||
* For Linux:
|
||||
|
||||
```
|
||||
IP=`ifconfig eth0 | grep inet | grep -v inet6 | awk '{print $2}'`
|
||||
echo $IP
|
||||
```
|
||||
|
||||
Keep in mind that this IP is dynamically assigned by the router so if your router restarts for any reason, your IP might change.
|
||||
|
||||
2. Edit your ~/.kube/config and replace 0.0.0.0 with the above IP address.
|
||||
|
||||
3. Execute a `kubectl version` to make sure you can still connect to the Kubernetes API server via this new IP. Run `make verify-kube-connect` and check if it works.
|
||||
|
||||
4. Finally, so that you don't have to keep updating your kube-config whenever you spin up a new k3d cluster, add `--api-port $IP:6550` to your **k3d cluster create** command, where $IP is the value from step 1. An example command is provided here:
|
||||
|
||||
```
|
||||
k3d cluster create my-cluster --wait --k3s-server-arg '--disable=traefik' --api-port $IP:6550 -p 443:443@loadbalancer
|
||||
```
|
||||
|
||||
## The development cycle
|
||||
|
||||
When you have developed and possibly manually tested the code you want to contribute, you should ensure that everything will build correctly. Commit your changes to the local copy of your Git branch and perform the following steps:
|
||||
|
||||
### Pull in all build dependencies
|
||||
|
||||
As build dependencies change over time, you have to synchronize your development environment with the current specification. In order to pull in all required dependencies, issue:
|
||||
|
||||
* `make dep-ui`
|
||||
|
||||
ArgoCD recently migrated to Go modules. Usually, dependencies will be downloaded on build time, but the Makefile provides two targets to download and vendor all dependencies:
|
||||
|
||||
* `make mod-download` will download all required Go modules and
|
||||
* `make mod-vendor` will vendor those dependencies into the ArgoCD source tree
|
||||
|
||||
### Generate API glue code and other assets
|
||||
|
||||
ArgoCD relies on Google's [Protocol Buffers](https://developers.google.com/protocol-buffers) for its API, and this makes heavy use of auto-generated glue code and stubs. Whenever you touched parts of the API code, you must re-generate the auto generated code.
|
||||
|
||||
* Run `make codegen`, this might take a while
|
||||
* Check if something has changed by running `git status` or `git diff`
|
||||
* Commit any possible changes to your local Git branch, an appropriate commit message would be `Changes from codegen`, for example.
|
||||
|
||||
!!!note
|
||||
There are a few non-obvious assets that are auto-generated. You should not change the autogenerated assets, as they will be overwritten by a subsequent run of `make codegen`. Instead, change their source files. Prominent examples of non-obvious auto-generated code are `swagger.json` or the installation manifest YAMLs.
|
||||
|
||||
### Build your code and run unit tests
|
||||
|
||||
After the code glue has been generated, your code should build and the unit tests should run without any errors. Execute the following statements:
|
||||
|
||||
* `make build`
|
||||
* `make test`
|
||||
|
||||
These steps are non-modifying, so there's no need to check for changes afterwards.
|
||||
|
||||
### Lint your code base
|
||||
|
||||
In order to keep a consistent code style in our source tree, your code must be well-formed in accordance to some widely accepted rules, which are applied by a Linter.
|
||||
|
||||
The Linter might make some automatic changes to your code, such as indentation fixes. Some other errors reported by the Linter have to be fixed manually.
|
||||
|
||||
* Run `make lint` and observe any errors reported by the Linter
|
||||
* Fix any of the errors reported and commit to your local branch
|
||||
* Finally, after the Linter reports no errors anymore, run `git status` or `git diff` to check for any changes made automatically by Lint
|
||||
* If there were automatic changes, commit them to your local branch
|
||||
|
||||
If you touched UI code, you should also run the Yarn linter on it:
|
||||
|
||||
* Run `make lint-ui`
|
||||
* Fix any of the errors reported by it
|
||||
|
||||
## Contributing to Argo CD UI
|
||||
|
||||
Argo CD, along with Argo Workflows, uses shared React components from [Argo UI](https://github.com/argoproj/argo-ui). Examples of some of these components include buttons, containers, form controls,
|
||||
and others. Although you can make changes to these files and run them locally, in order to have these changes added to the Argo CD repo, you will need to follow these steps.
|
||||
|
||||
1. Fork and clone the [Argo UI repository](https://github.com/argoproj/argo-ui).
|
||||
|
||||
2. `cd` into your `argo-ui` directory, and then run `yarn install`.
|
||||
|
||||
3. Make your file changes.
|
||||
|
||||
4. Run `yarn start` to start a [storybook](https://storybook.js.org/) dev server and view the components in your browser. Make sure all your changes work as expected.
|
||||
|
||||
5. Use [yarn link](https://classic.yarnpkg.com/en/docs/cli/link/) to link Argo UI package to your Argo CD repository. (Commands below assume that `argo-ui` and `argo-cd` are both located within the same parent folder)
|
||||
|
||||
* `cd argo-ui`
|
||||
* `yarn link`
|
||||
* `cd ../argo-cd/ui`
|
||||
* `yarn link argo-ui`
|
||||
|
||||
Once `argo-ui` package has been successfully linked, test out changes in your local development environment.
|
||||
|
||||
6. Commit changes and open a PR to [Argo UI](https://github.com/argoproj/argo-ui).
|
||||
|
||||
7. Once your PR has been merged in Argo UI, `cd` into your `argo-cd` folder and run `yarn add https://github.com/argoproj/argo-ui.git`. This will update the commit SHA in the `ui/yarn.lock` file to use the latest master commit for argo-ui.
|
||||
|
||||
8. Submit the changes to `ui/yarn.lock` in a PR to Argo CD.
|
||||
|
||||
## Setting up a local toolchain
|
||||
|
||||
For development, you can either use the fully virtualized toolchain provided as Docker images, or you can set up the toolchain on your local development machine. Due to the dynamic nature of requirements, you might want to stay with the virtualized environment.
|
||||
|
||||
### Install required dependencies and build-tools
|
||||
|
||||
!!!note
|
||||
The installation instructions are valid for Linux hosts only. Mac instructions will follow shortly.
|
||||
|
||||
For installing the tools required to build and test ArgoCD on your local system, we provide convenient installer scripts. By default, they will install binaries to `/usr/local/bin` on your system, which might require `root` privileges.
|
||||
|
||||
You can change the target location by setting the `BIN` environment before running the installer scripts. For example, you can install the binaries into `~/go/bin` (which should then be the first component in your `PATH` environment, i.e. `export PATH=~/go/bin:$PATH`):
|
||||
|
||||
```shell
|
||||
make BIN=~/go/bin install-tools-local
|
||||
```
|
||||
|
||||
Additionally, you have to install at least the following tools via your OS's package manager (this list might not be always up-to-date):
|
||||
|
||||
* Git LFS plugin
|
||||
* GnuPG version 2
|
||||
|
||||
### Install Go dependencies
|
||||
|
||||
You need to pull in all required Go dependencies. To do so, run
|
||||
|
||||
* `make mod-download-local`
|
||||
* `make mod-vendor-local`
|
||||
|
||||
### Test your build toolchain
|
||||
|
||||
The first thing you can do to check whether your build toolchain is set up correctly is to generate the glue code for the API and, after that, run a normal build:
|
||||
|
||||
* `make codegen-local`
|
||||
* `make build-local`
|
||||
|
||||
This should return without any error.
|
||||
|
||||
### Run unit-tests
|
||||
|
||||
The next thing is to make sure that unit tests are running correctly on your system. These will require that all dependencies, such as Helm, Kustomize, Git, GnuPG, etc are correctly installed and fully functioning:
|
||||
|
||||
* `make test-local`
|
||||
|
||||
### Run end-to-end tests
|
||||
|
||||
The final step is running the End-to-End testsuite, which makes sure that your Kubernetes dependencies are working properly. This will involve starting all of the ArgoCD components locally on your computer. The end-to-end tests consist of two parts: a server component, and a client component.
|
||||
|
||||
* First, start the End-to-End server: `make start-e2e-local`. This will spawn a number of processes and services on your system.
|
||||
* When all components have started, run `make test-e2e-local` to run the end-to-end tests against your local services.
|
||||
|
||||
For more information about End-to-End tests, refer to the [End-to-End test documentation](test-e2e.md).
|
||||
|
||||
|
||||
## Enhancement proposals
|
||||
|
||||
If you are proposing a major feature, a change in design, or a process refactor, please help define what it would look like with a new enhancement proposal, as described in the enhancement proposal [template](/docs/proposals/001-proposal-template.md).
|
||||
|
||||
The contents of this document have been moved to the
|
||||
[Toolchain guide](toolchain-guide.md)
|
||||
@@ -45,7 +45,7 @@ If you make changes to the Argo UI component, and your Argo CD changes depend on
|
||||
1. Make changes to Argo UI and submit the PR request.
|
||||
2. Also, prepare your Argo CD changes, but don't create the PR just yet.
|
||||
3. **After** the Argo UI PR has been merged to master, then as part of your Argo CD changes:
|
||||
- Run `yarn add git+https://github.com/argoproj/argo-ui.git`, and then,
|
||||
- Run `yarn add git+https://github.com/argoproj/argo-ui.git` in the `ui/` directory, and then,
|
||||
- Check in the regenerated yarn.lock file as part of your Argo CD commit
|
||||
4. Create the Argo CD PR when you are ready. The PR build and test checks should pass.
|
||||
|
||||
|
||||
@@ -42,7 +42,8 @@ the maintainer should label it with a `verified` label.
|
||||
|
||||
The releasing procedure is described in [releasing](./releasing.md) document. Before closing the release milestone following should be verified:
|
||||
|
||||
- [ ] All merged PRs have the `verified` label
|
||||
- [ ] All merged PRs are verified (verify and remove the `needs-verification` label)
|
||||
- [ ] Triage issues reported by `yarn audit` and ensure there are no exploitable security issues.
|
||||
- [ ] Roadmap is updated based on current release changes
|
||||
- [ ] Next release milestone is created
|
||||
- [ ] Upcoming release milestone is updated
|
||||
@@ -4,9 +4,9 @@
|
||||
|
||||
During development, it might be viable to run ArgoCD outside of a Kubernetes cluster. This will greatly speed up development, as you don't have to constantly build, push and install new ArgoCD Docker images with your latest changes.
|
||||
|
||||
You will still need a working Kubernetes cluster, as described in the [Contribution Guide](contributing.md), where ArgoCD will store all of its resources.
|
||||
You will still need a working Kubernetes cluster, as described in the [Toolchain Guide](toolchain-guide.md), where ArgoCD will store all of its resources and configuration.
|
||||
|
||||
If you followed the [Contribution Guide](contributing.md) in setting up your toolchain, you can run ArgoCD locally with these simple steps:
|
||||
If you followed the [Toolchain Guide](toolchain-guide.md) in setting up your toolchain, you can run ArgoCD locally with these simple steps:
|
||||
|
||||
### Install ArgoCD resources to your cluster
|
||||
|
||||
|
||||
327
docs/developer-guide/toolchain-guide.md
Normal file
@@ -0,0 +1,327 @@
|
||||
# Development toolchain
|
||||
|
||||
## Preface
|
||||
|
||||
!!!note "Before you start"
|
||||
The Argo CD project continuously grows, both in terms of features and community size. It gets adopted by more and more organisations which entrust Argo CD to handle their critical production workloads. Thus, we need to take great care with any changes that affect compatibility, performance, scalability, stability and security of Argo CD. For this reason, every new feature or larger enhancement must be properly designed and discussed before it gets accepted into the code base.
|
||||
|
||||
We do welcome and encourage everyone to participate in the Argo CD project, but please understand that we can't accept each and every contribution from the community, for various reasons. If you want to submit code for a great new feature or enhancement, we kindly ask you to take a look at the
|
||||
[code contribution guide](code-contributions.md) before you start to write code or submit a PR.
|
||||
|
||||
We want to make contributing to Argo CD as simple and smooth as possible.
|
||||
|
||||
This guide shall help you in setting up your build & test environment, so that you can start developing and testing bug fixes and feature enhancements without having to make too much effort in setting up a local toolchain.
|
||||
|
||||
If you want to submit a PR, please read this document carefully, as it contains important information guiding you through our PR quality gates.
|
||||
|
||||
As is the case with the development process, this document is under constant change. If you notice any error, or if you think this document is out-of-date, or if you think it is missing something: Feel free to submit a PR or submit a bug to our GitHub issue tracker.
|
||||
|
||||
If you need guidance with submitting a PR, or have any other questions regarding development of Argo CD, do not hesitate to [join our Slack](https://argoproj.github.io/community/join-slack) and get in touch with us in the `#argo-contributors` channel!
|
||||
|
||||
## Before you start
|
||||
|
||||
You will need at least the following things in your toolchain in order to develop and test Argo CD locally:
|
||||
|
||||
* A Kubernetes cluster. You won't need a full-blown multi-master, multi-node cluster, but you will need something like K3S, Minikube or microk8s. You will also need a working Kubernetes client (`kubectl`) configuration in your development environment. The configuration must reside in `~/.kube/config` and the API server URL must point to the IP address of your local machine (or VM), and **not** to `localhost` or `127.0.0.1` if you are using the virtualized development toolchain (see below)
|
||||
|
||||
* You will also need a working Docker runtime environment, to be able to build and run images.
|
||||
The Docker version must be fairly recent and support multi-stage builds. You should not work as root. Make your local user a member of the `docker` group to be able to control the Docker service on your machine (one way to do this is sketched after this list).
|
||||
|
||||
* Obviously, you will need a `git` client for pulling source code and pushing back your changes.
|
||||
|
||||
* Last but not least, you will need a Go SDK and related tools (such as GNU `make`) installed and working on your development environment. The minimum required Go version for building and testing Argo CD is **v1.16**.
|
||||
|
||||
* We will assume that your Go workspace is at `~/go`.
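For the Docker requirement mentioned above, a common way on Linux to let your non-root user control Docker is to add it to the `docker` group; a sketch, assuming the group exists on your distribution:

```bash
sudo usermod -aG docker "$USER"
# Log out and back in (or run `newgrp docker`) so the new group membership takes effect,
# then verify that Docker works without sudo:
docker ps
```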
|
||||
|
||||
!!! note
|
||||
**Attention minikube users**: By default, minikube will create Kubernetes client configuration that uses authentication data from files. This is incompatible with the virtualized toolchain. So if you intend to use the virtualized toolchain, you have to embed this authentication data into the client configuration. To do so, start minikube using `minikube start --embed-certs`. Please also note that minikube using the Docker driver is currently not supported with the virtualized toolchain, because the Docker driver exposes the API server on 127.0.0.1 hard-coded. If in doubt, run `make verify-kube-connect` to find out.
|
||||
|
||||
## Submitting PRs
|
||||
|
||||
### Continuous Integration process
|
||||
|
||||
When you submit a PR against Argo CD's GitHub repository, a couple of CI checks will be run automatically to ensure your changes will build fine and meet certain quality standards. Your contribution needs to pass those checks in order to be merged into the repository.
|
||||
|
||||
!!!note
|
||||
|
||||
Please make sure that you always create PRs from a branch that is up-to-date with the latest changes from Argo CD's master branch. Depending on how long it takes for the maintainers to review and merge your PR, it might be necessary to pull in latest changes into your branch again.
|
||||
|
||||
Please understand that we, as an Open Source project, have limited capacity for reviewing and merging PRs to Argo CD. We will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.
|
||||
|
||||
The following read will help you to submit a PR that meets the standards of our CI tests:
|
||||
|
||||
### Title of the PR
|
||||
|
||||
Please use a meaningful and concise title for your PR. This will help us to pick PRs for review quickly, and the PR title will also end up in the Changelog.
|
||||
|
||||
We use the [Semantic PR title checker](https://github.com/zeke/semantic-pull-requests) to categorize your PR into one of the following categories:
|
||||
|
||||
* `fix` - Your PR contains one or more code bug fixes
|
||||
* `feat` - Your PR contains a new feature
|
||||
* `docs` - Your PR improves the documentation
|
||||
* `chore` - Your PR improves any internals of Argo CD, such as the build process, unit tests, etc
|
||||
|
||||
Please prefix the title of your PR with one of the valid categories. For example, if you chose the title `Add documentation for GitHub SSO integration` for your PR, please use `docs: Add documentation for GitHub SSO integration` instead.
|
||||
|
||||
### Contributor License Agreement
|
||||
|
||||
Every contributor to Argo CD must have signed the current Contributor License Agreement (CLA). You only have to sign the CLA when you are a first time contributor, or when the agreement has changed since your last time signing it. The main purpose of the CLA is to ensure that you hold the required rights for your contribution. The CLA signing is an automated process.
|
||||
|
||||
You can read the current version of the CLA [here](https://cla-assistant.io/argoproj/argo-cd).
|
||||
|
||||
### PR template checklist
|
||||
|
||||
Upon opening a PR, the details will contain a checklist from a template. Please read the checklist, and tick those marks that apply to you.
|
||||
|
||||
### Automated builds & tests
|
||||
|
||||
After you have submitted your PR, and whenever you push new commits to that branch, GitHub will run a number of Continuous Integration checks against your code. It will execute the following actions, and each of them has to pass:
|
||||
|
||||
* Build the Go code (`make build`)
|
||||
* Generate API glue code and manifests (`make codegen`)
|
||||
* Run a Go linter on the code (`make lint`)
|
||||
* Run the unit tests (`make test`)
|
||||
* Run the End-to-End tests (`make test-e2e`)
|
||||
* Build and lint the UI code (`make lint-ui`)
|
||||
* Build the `argocd` CLI (`make cli`)
|
||||
|
||||
If any of these tests in the CI pipeline fail, it means that some of your contribution is considered faulty (or a test might be flaky, see below).
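If you want to catch most of these failures before CI does, you can run the same targets locally before pushing (most of them also have a `-local` variant, described further below); a rough pre-push sketch:

```bash
make codegen   # regenerate API glue code and manifests
make build     # compile the Go code
make lint      # run the Go linter
make test      # run the unit tests
make lint-ui   # build and lint the UI code
make cli       # build the argocd CLI
```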
|
||||
|
||||
### Code test coverage
|
||||
|
||||
We use [CodeCov](https://codecov.io) in our CI pipeline to check for test coverage, and once you submit your PR, it will run and report on the coverage difference as a comment within your PR. If the difference is too high in the negative, i.e. your submission introduced a significant drop in code coverage, the CI check will fail.
|
||||
|
||||
Whenever you develop a new feature or submit a bug fix, please also write appropriate unit tests for it. If you write a completely new module, please aim for at least 80% of coverage.
|
||||
If you want to see how much coverage just a specific module (e.g. your new one) has, you can set the `TEST_MODULE` variable to the (fully qualified) name of that module when running `make test`, e.g.:
|
||||
|
||||
```bash
|
||||
make test TEST_MODULE=github.com/argoproj/argo-cd/server/cache
|
||||
...
|
||||
ok github.com/argoproj/argo-cd/server/cache 0.029s coverage: 89.3% of statements
|
||||
```
|
||||
|
||||
## Local vs Virtualized toolchain
|
||||
|
||||
Argo CD provides a fully virtualized development and testing toolchain using Docker images. It is recommended to use those images, as they provide the same runtime environment as the final product and it is much easier to keep up-to-date with changes to the toolchain and dependencies. But as using Docker comes with a slight performance penalty, you might want to set up a local toolchain.
|
||||
|
||||
Most relevant targets for the build & test cycles in the `Makefile` provide two variants, one of them suffixed with `-local`. For example, `make test` will run unit tests in the Docker container, while `make test-local` will run them natively on your local system.
|
||||
|
||||
If you are going to use the virtualized toolchain, please bear in mind the following things:
|
||||
|
||||
* Your Kubernetes API server must listen on the interface of your local machine or VM, and not on `127.0.0.1` only.
|
||||
* Your Kubernetes client configuration (`~/.kube/config`) must not use an API URL that points to `localhost` or `127.0.0.1`.
|
||||
|
||||
You can test whether the virtualized toolchain has access to your Kubernetes cluster by running `make verify-kube-connect` (*after* you have setup your development environment, as described below), which will run `kubectl version` inside the Docker container used for running all tests.
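A quick way to check which address your current kubeconfig context points at before running `make verify-kube-connect`:

```bash
# Should print your machine's (or VM's) IP address, not localhost or 127.0.0.1
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'; echo
```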
|
||||
|
||||
The Docker container for the virtualized toolchain will use the following local mounts from your workstation, and possibly modify its contents:
|
||||
|
||||
* `~/go/src` - Your Go workspace's source directory (modifications expected)
|
||||
* `~/.cache/go-build` - Your Go build cache (modifications expected)
|
||||
* `~/.kube` - Your Kubernetes client configuration (no modifications)
|
||||
* `/tmp` - Your system's temp directory (modifications expected)
|
||||
|
||||
## Setting up your development environment
|
||||
|
||||
The following steps are required no matter whether you chose to use a virtualized or a local toolchain.
|
||||
|
||||
### Clone the Argo CD repository from your personal fork on GitHub
|
||||
|
||||
* `mkdir -p ~/go/src/github.com/argoproj`
|
||||
* `cd ~/go/src/github.com/argoproj`
|
||||
* `git clone https://github.com/yourghuser/argo-cd`
|
||||
* `cd argo-cd`
|
||||
|
||||
### Optional: Setup an additional Git remote
|
||||
|
||||
While everyone has their own Git workflow, the author of this document recommends creating a remote called `upstream` in your local copy pointing to the original Argo CD repository. This way, you can easily keep your local branches up-to-date by merging in the latest changes from the Argo CD repository, i.e. by doing a `git pull upstream master` in your locally checked out branch. To create the remote, run `git remote add upstream https://github.com/argoproj/argo-cd`
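Assuming the `upstream` remote has been added as above, keeping a feature branch current could then look like this (the branch name is just an example):

```bash
git fetch upstream               # fetch the latest upstream changes
git checkout my-feature-branch   # placeholder branch name
git pull upstream master         # merge upstream's master into your branch
```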
|
||||
|
||||
### Install the must-have requirements
|
||||
|
||||
Make sure you fulfill the pre-requisites above and run some preliminary tests. None of them should report an error.
|
||||
|
||||
* Run `kubectl version`
|
||||
* Run `docker version`
|
||||
* Run `go version`
|
||||
|
||||
### Build (or pull) the required Docker image
|
||||
|
||||
Build the required Docker image by running `make test-tools-image` or pull the latest version by issuing `docker pull argoproj/argocd-test-tools`.
|
||||
|
||||
The `Dockerfile` used to build these images can be found at `test/container/Dockerfile`.
|
||||
|
||||
### Test connection from build container to your K8s cluster
|
||||
|
||||
Run `make verify-kube-connect`; it should execute without error.
|
||||
|
||||
If you receive an error similar to the following:
|
||||
|
||||
```
|
||||
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
|
||||
make: *** [Makefile:386: verify-kube-connect] Error 1
|
||||
```
|
||||
|
||||
you should edit your `~/.kube/config` and modify the `server` option to point to your correct K8s API (as described above).
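One way to do this without opening an editor is `kubectl config set-cluster`; a minimal sketch, assuming your cluster entry is named `default`, the API server listens on port 6443, and your host IP is stored in `$IP` (check the actual entry name with `kubectl config view`):

```bash
kubectl config set-cluster default --server="https://$IP:6443"
kubectl version   # confirm the connection still works with the new address
```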
|
||||
|
||||
### Using k3d
|
||||
|
||||
[k3d](https://github.com/rancher/k3d) is a lightweight wrapper to run [k3s](https://github.com/rancher/k3s), a minimal Kubernetes distribution, in docker. Because it's running in a docker container, you're dealing with docker's internal networking rules when using k3d. A typical Kubernetes cluster running on your local machine is part of the same network that you're on, so you can access it using **kubectl**. However, a Kubernetes cluster running within a docker container (in this case, the one launched by `make`) cannot access 0.0.0.0 from inside the container itself when 0.0.0.0 refers to a network resource outside the container (and/or the container's network). This is the cost of a fully self-contained, disposable Kubernetes cluster. The following steps should help with a successful `make verify-kube-connect` execution.
|
||||
|
||||
1. Find your host IP by executing `ifconfig` on Mac/Linux and `ipconfig` on Windows. For most users, the following command works to find the IP address.
|
||||
|
||||
* For Mac:
|
||||
|
||||
```
|
||||
IP=`ifconfig en0 | grep inet | grep -v inet6 | awk '{print $2}'`
|
||||
echo $IP
|
||||
```
|
||||
|
||||
* For Linux:
|
||||
|
||||
```
|
||||
IP=`ifconfig eth0 | grep inet | grep -v inet6 | awk '{print $2}'`
|
||||
echo $IP
|
||||
```
|
||||
|
||||
Keep in mind that this IP is dynamically assigned by the router so if your router restarts for any reason, your IP might change.
|
||||
|
||||
2. Edit your `~/.kube/config` and replace 0.0.0.0 with the above IP address (a one-liner for this is sketched after this list).
|
||||
|
||||
3. Execute a `kubectl version` to make sure you can still connect to the Kubernetes API server via this new IP. Run `make verify-kube-connect` and check if it works.
|
||||
|
||||
4. Finally, so that you don't have to keep updating your kube-config whenever you spin up a new k3d cluster, add `--api-port $IP:6550` to your **k3d cluster create** command, where $IP is the value from step 1. An example command is provided here:
|
||||
|
||||
```
|
||||
k3d cluster create my-cluster --wait --k3s-server-arg '--disable=traefik' --api-port $IP:6550 -p 443:443@loadbalancer
|
||||
```
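For step 2 above, the replacement can also be done with a one-liner; a sketch assuming GNU `sed` (on macOS, use `sed -i ''`) and that your host IP is stored in `$IP`, with a backup taken first:

```bash
cp ~/.kube/config ~/.kube/config.bak
sed -i "s/0\.0\.0\.0/$IP/g" ~/.kube/config
```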
|
||||
|
||||
## The development cycle
|
||||
|
||||
When you have developed and possibly manually tested the code you want to contribute, you should ensure that everything will build correctly. Commit your changes to the local copy of your Git branch and perform the following steps:
|
||||
|
||||
### Pull in all build dependencies
|
||||
|
||||
As build dependencies change over time, you have to synchronize your development environment with the current specification. In order to pull in all required dependencies, issue:
|
||||
|
||||
* `make dep-ui`
|
||||
|
||||
Argo CD recently migrated to Go modules. Usually, dependencies will be downloaded at build time, but the Makefile provides two targets to download and vendor all dependencies:
|
||||
|
||||
* `make mod-download` will download all required Go modules and
|
||||
* `make mod-vendor` will vendor those dependencies into the Argo CD source tree
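Put together, a full dependency refresh for both the UI and the Go code base could look like this:

```bash
make dep-ui         # pull in the UI build dependencies
make mod-download   # download all required Go modules
make mod-vendor     # vendor the modules into the source tree
```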
|
||||
|
||||
### Generate API glue code and other assets
|
||||
|
||||
Argo CD relies on Google's [Protocol Buffers](https://developers.google.com/protocol-buffers) for its API, which makes heavy use of auto-generated glue code and stubs. Whenever you touch parts of the API code, you must re-generate the auto-generated code.
|
||||
|
||||
* Run `make codegen`, this might take a while
|
||||
* Check if something has changed by running `git status` or `git diff`
|
||||
* Commit any resulting changes to your local Git branch; an appropriate commit message would be, for example, `Changes from codegen`.
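Taken together, the codegen part of the cycle might look like this:

```bash
make codegen
git status --short   # see which generated files changed, if any
git add -A && git commit -m "Changes from codegen"   # only needed if files changed
```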
|
||||
|
||||
!!!note
|
||||
There are a few non-obvious assets that are auto-generated. You should not change the autogenerated assets, as they will be overwritten by a subsequent run of `make codegen`. Instead, change their source files. Prominent examples of non-obvious auto-generated code are `swagger.json` or the installation manifest YAMLs.
|
||||
|
||||
### Build your code and run unit tests
|
||||
|
||||
After the code glue has been generated, your code should build and the unit tests should run without any errors. Execute the following statements:
|
||||
|
||||
* `make build`
|
||||
* `make test`
|
||||
|
||||
These steps are non-modifying, so there's no need to check for changes afterwards.
|
||||
|
||||
### Lint your code base
|
||||
|
||||
In order to keep a consistent code style in our source tree, your code must be well-formed in accordance with some widely accepted rules, which are applied by a Linter.
|
||||
|
||||
The Linter might make some automatic changes to your code, such as indentation fixes. Some other errors reported by the Linter have to be fixed manually.
|
||||
|
||||
* Run `make lint` and observe any errors reported by the Linter
|
||||
* Fix any of the errors reported and commit to your local branch
|
||||
* Finally, after the Linter reports no errors anymore, run `git status` or `git diff` to check for any changes made automatically by Lint
|
||||
* If there were automatic changes, commit them to your local branch
|
||||
|
||||
If you touched UI code, you should also run the Yarn linter on it:
|
||||
|
||||
* Run `make lint-ui`
|
||||
* Fix any of the errors reported by it
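A typical linting pass over both the Go and the UI code could then look like this:

```bash
make lint            # Go linter; may apply automatic fixes such as formatting
make lint-ui         # Yarn linter for the UI code
git status --short   # check for (and commit) any automatic changes
```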
|
||||
|
||||
## Contributing to Argo CD UI
|
||||
|
||||
Argo CD, along with Argo Workflows, uses shared React components from [Argo UI](https://github.com/argoproj/argo-ui). Examples of some of these components include buttons, containers, form controls,
|
||||
and others. Although you can make changes to these files and run them locally, in order to have these changes added to the Argo CD repo, you will need to follow these steps.
|
||||
|
||||
1. Fork and clone the [Argo UI repository](https://github.com/argoproj/argo-ui).
|
||||
|
||||
2. `cd` into your `argo-ui` directory, and then run `yarn install`.
|
||||
|
||||
3. Make your file changes.
|
||||
|
||||
4. Run `yarn start` to start a [storybook](https://storybook.js.org/) dev server and view the components in your browser. Make sure all your changes work as expected.
|
||||
|
||||
5. Use [yarn link](https://classic.yarnpkg.com/en/docs/cli/link/) to link Argo UI package to your Argo CD repository. (Commands below assume that `argo-ui` and `argo-cd` are both located within the same parent folder)
|
||||
|
||||
* `cd argo-ui`
|
||||
* `yarn link`
|
||||
* `cd ../argo-cd/ui`
|
||||
* `yarn link argo-ui`
|
||||
|
||||
Once `argo-ui` package has been successfully linked, test out changes in your local development environment.
|
||||
|
||||
6. Commit changes and open a PR to [Argo UI](https://github.com/argoproj/argo-ui).
|
||||
|
||||
7. Once your PR has been merged in Argo UI, `cd` into your `argo-cd/ui` folder and run `yarn add git+https://github.com/argoproj/argo-ui.git`. This will update the commit SHA in the `ui/yarn.lock` file to use the latest master commit for argo-ui.
|
||||
|
||||
8. Submit the changes to `ui/yarn.lock` in a PR to Argo CD.
|
||||
|
||||
## Setting up a local toolchain
|
||||
|
||||
For development, you can either use the fully virtualized toolchain provided as Docker images, or you can set up the toolchain on your local development machine. Due to the dynamic nature of requirements, you might want to stay with the virtualized environment.
|
||||
|
||||
### Install required dependencies and build-tools
|
||||
|
||||
!!!note
|
||||
The installation instructions are valid for Linux hosts only. Mac instructions will follow shortly.
|
||||
|
||||
For installing the tools required to build and test Argo CD on your local system, we provide convenient installer scripts. By default, they will install binaries to `/usr/local/bin` on your system, which might require `root` privileges.
|
||||
|
||||
You can change the target location by setting the `BIN` environment variable before running the installer scripts. For example, you can install the binaries into `~/go/bin` (which should then be the first component in your `PATH` environment, i.e. `export PATH=~/go/bin:$PATH`):
|
||||
|
||||
```shell
|
||||
make BIN=~/go/bin install-tools-local
|
||||
```
|
||||
|
||||
Additionally, you have to install at least the following tools via your OS's package manager (this list might not always be up-to-date):
|
||||
|
||||
* Git LFS plugin
|
||||
* GnuPG version 2
|
||||
|
||||
### Install Go dependencies
|
||||
|
||||
You need to pull in all required Go dependencies. To do so, run
|
||||
|
||||
* `make mod-download-local`
|
||||
* `make mod-vendor-local`
|
||||
|
||||
### Test your build toolchain
|
||||
|
||||
The first thing you can do to test whether your build toolchain is set up correctly is to generate the glue code for the API and, after that, run a normal build:
|
||||
|
||||
* `make codegen-local`
|
||||
* `make build-local`
|
||||
|
||||
This should return without any error.
|
||||
|
||||
### Run unit-tests
|
||||
|
||||
The next thing is to make sure that unit tests are running correctly on your system. These will require that all dependencies, such as Helm, Kustomize, Git, GnuPG, etc are correctly installed and fully functioning:
|
||||
|
||||
* `make test-local`
|
||||
|
||||
### Run end-to-end tests
|
||||
|
||||
The final step is running the End-to-End testsuite, which makes sure that your Kubernetes dependencies are working properly. This will involve starting all of the Argo CD components locally on your computer. The end-to-end tests consist of two parts: a server component, and a client component.
|
||||
|
||||
* First, start the End-to-End server: `make start-e2e-local`. This will spawn a number of processes and services on your system.
|
||||
* When all components have started, run `make test-e2e-local` to run the end-to-end tests against your local services.
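Since the server component keeps running while the tests execute, it is easiest to use two terminals; a sketch:

```bash
# Terminal 1: start the local Argo CD services required by the test suite
make start-e2e-local

# Terminal 2: once everything is up, run the tests against those services
make test-e2e-local
```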
|
||||
|
||||
For more information about End-to-End tests, refer to the [End-to-End test documentation](test-e2e.md).
|
||||
@@ -28,7 +28,7 @@ kubectl create namespace argocd
|
||||
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/core-install.yaml
|
||||
```
|
||||
|
||||
Use `argocd login --k8s-api` to [configure](./user-guide/commands/argocd_login.md) CLI access and skip steps 3-5.
|
||||
Use `argocd login --core` to [configure](./user-guide/commands/argocd_login.md) CLI access and skip steps 3-5.
|
||||
|
||||
## 2. Download Argo CD CLI
|
||||
|
||||
@@ -73,13 +73,9 @@ in your Argo CD installation namespace. You can simply retrieve this password
|
||||
using `kubectl`:
|
||||
|
||||
```bash
|
||||
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
|
||||
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
|
||||
```
|
||||
|
||||
For better readability, e.g. if you want to copy & paste the generated password,
|
||||
you can simply append `&& echo` to above command, which will add a newline to
|
||||
the output.
|
||||
|
||||
!!! warning
|
||||
You should delete the `argocd-initial-admin-secret` from the Argo CD
|
||||
namespace once you changed the password. The secret serves no other
|
||||
@@ -93,6 +89,9 @@ Using the username `admin` and the password from above, login to Argo CD's IP or
|
||||
argocd login <ARGOCD_SERVER>
|
||||
```
|
||||
|
||||
!!! note
|
||||
The CLI environment must be able to communicate with the Argo CD controller. If it isn't directly accessible as described above in step 3, you can tell the CLI to access it using port forwarding through one of these mechanisms: 1) add `--port-forward-namespace argocd` flag to every CLI command; or 2) set `ARGOCD_OPTS` environment variable: `export ARGOCD_OPTS='--port-forward-namespace argocd'`.
|
||||
|
||||
Change the password using the command:
|
||||
|
||||
```bash
|
||||
@@ -131,10 +130,11 @@ An example repository containing a guestbook application is available at
|
||||
|
||||
### Creating Apps Via CLI
|
||||
|
||||
!!! note
|
||||
You can access Argo CD using port forwarding: add `--port-forward-namespace argocd` flag to every CLI command or set `ARGOCD_OPTS` environment variable: `export ARGOCD_OPTS='--port-forward-namespace argocd'`:
|
||||
Create the example guestbook application with the following command:
|
||||
|
||||
`argocd app create guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --dest-server https://kubernetes.default.svc --dest-namespace default`
|
||||
```bash
|
||||
argocd app create guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --dest-server https://kubernetes.default.svc --dest-namespace default
|
||||
```
|
||||
|
||||
### Creating Apps Via UI
|
||||
|
||||
@@ -152,7 +152,7 @@ Connect the [https://github.com/argoproj/argocd-example-apps.git](https://github
|
||||
|
||||

|
||||
|
||||
For **Destination**, set cluster to `in-cluster` and namespace to `default`:
|
||||
For **Destination**, set cluster URL to `https://kubernetes.default.svc` (or `in-cluster` for cluster name) and namespace to `default`:
|
||||
|
||||

|
||||
|
||||
|
||||
@@ -18,6 +18,9 @@ data:
|
||||
# Specifies token expiration duration
|
||||
users.session.duration: "24h"
|
||||
|
||||
# Specifies regex expression for password
|
||||
passwordPattern: "^.{8,32}$"
|
||||
|
||||
# Enables google analytics tracking if specified
|
||||
ga.trackingid: "UA-12345-1"
|
||||
# Unless set to 'false', user ids are hashed before sending to google analytics
|
||||
@@ -221,6 +224,15 @@ data:
|
||||
# Optional link for banner. If set, the entire banner text will become a link.
|
||||
# You can have bannercontent without a bannerurl, but not the other way around.
|
||||
ui.bannerurl: "https://argoproj.github.io"
|
||||
# Uncomment to make the banner not show the close buttons, thereby making the banner permanent.
|
||||
# Because it is permanent, only one line of text is available to not take up too much real estate in the UI,
|
||||
# so it is recommended that the length of the bannercontent text is kept reasonably short. Note that you can
|
||||
# have either a permanent banner or a regular closeable banner, and NOT both. eg. A user can't dismiss a
|
||||
# notification message (closeable) banner, to then immediately see a permanent banner.
|
||||
# ui.bannerpermanent: "true"
|
||||
# An option to specify the position of the banner, either the top or bottom of the page. The default is at the top.
|
||||
# Uncomment to make the banner appear at the bottom of the page. Any value other than "bottom" will make the banner appear at the top.
|
||||
# ui.bannerposition: "bottom"
|
||||
|
||||
# Application reconciliation timeout is the max amount of time required to discover if a new manifest version got
|
||||
# published to the repository. Reconciliation by timeout is disabled if timeout is set to 0. Three minutes by default.
|
||||
|
||||
@@ -48,6 +48,8 @@ data:
|
||||
server.basehref: "/"
|
||||
# Used if Argo CD is running behind reverse proxy under subpath different from /
|
||||
server.rootpath: "/"
|
||||
# Directory path that contains additional static assets
|
||||
server.staticassets: "/shared/app"
|
||||
|
||||
# Set the logging format. One of: text|json (default "text")
|
||||
server.log.format: "text"
|
||||
|
||||
@@ -28,3 +28,8 @@ data:
|
||||
# If omitted, defaults to: '[groups]'. The scope value can be a string, or a list of strings.
|
||||
scopes: '[cognito:groups, email]'
|
||||
|
||||
# matchMode configures the matchers function for casbin.
|
||||
# There are two options for this, 'glob' for glob matcher or 'regex' for regex matcher. If omitted or mis-configured,
|
||||
# will be set to 'glob' as default.
|
||||
policy.matchMode: 'glob'
|
||||
|
||||
|
||||
@@ -14,3 +14,5 @@ data:
|
||||
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
|
||||
ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
|
||||
vs-ssh.visualstudio.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
|
||||
github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
|
||||
github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
|
||||
|
||||
@@ -88,11 +88,31 @@ spec:
|
||||
name: styles
|
||||
```
|
||||
|
||||
Note that the CSS file should be mounted within a subdirectory of the existing "/shared/app" directory
|
||||
Note that the CSS file should be mounted within a subdirectory of the "/shared/app" directory
|
||||
(e.g. "/shared/app/custom"). Otherwise, the file will likely fail to be imported by the browser with an
|
||||
"incorrect MIME type" error.
|
||||
"incorrect MIME type" error. The subdirectory can be changed using `server.staticassets` key of the
|
||||
[argocd-cmd-params-cm.yaml](./argocd-cmd-params-cm.yaml) ConfigMap.
|
||||
|
||||
## Developing Style Overlays
|
||||
The styles specified in the injected CSS file should be specific to components and classes defined in [argo-ui](https://github.com/argoproj/argo-ui).
|
||||
It is recommended to test out the styles you wish to apply first by making use of your browser's built-in developer tools. For a more full-featured
|
||||
experience, you may wish to build a separate project using the [Argo CD UI dev server](https://webpack.js.org/configuration/dev-server/).
|
||||
experience, you may wish to build a separate project using the [Argo CD UI dev server](https://webpack.js.org/configuration/dev-server/).
|
||||
|
||||
## Banners
|
||||
|
||||
Argo CD can optionally display a banner that can be used to notify your users of upcoming maintenance and operational changes. This feature can be enabled by specifying the banner message using the `ui.bannercontent` field in the `argocd-cm` ConfigMap and Argo CD will display this message at the top of every UI page. You can optionally add a link to this message by setting `ui.bannerurl`.
|
||||
|
||||
### argocd-cm
|
||||
```yaml
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
...
|
||||
name: argocd-cm
|
||||
data:
|
||||
ui.bannercontent: "Banner message linked to a URL"
|
||||
ui.bannerurl: "www.bannerlink.com"
|
||||
```
|
||||
|
||||

|
||||
|
||||
@@ -69,5 +69,5 @@ RUN apt-get update && \
    chmod +x /usr/local/bin/sops

# Switch back to non-root user
USER argocd
USER 999
```

@@ -171,6 +171,9 @@ Repository details are stored in secrets. To configure a repo, create a secret w
Consider using [bitnami-labs/sealed-secrets](https://github.com/bitnami-labs/sealed-secrets) to store an encrypted secret definition as a Kubernetes manifest.
Each repository must have a `url` field and, depending on whether you connect using HTTPS, SSH, or GitHub App, `username` and `password` (for HTTPS), `sshPrivateKey` (for SSH), or `githubAppPrivateKey` (for GitHub App).

!!!warning
    When using [bitnami-labs/sealed-secrets](https://github.com/bitnami-labs/sealed-secrets) the labels will be removed and have to be re-added as described here: https://github.com/bitnami-labs/sealed-secrets#sealedsecrets-as-templates-for-secrets
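
In practice that means declaring the label on the Secret template inside the SealedSecret itself. A rough sketch, assuming the sealed-secrets `spec.template` mechanism; the resource name and the encrypted values are placeholders, not taken from this page:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: private-repo
  namespace: argocd
spec:
  encryptedData:
    # Output of kubeseal; placeholder values shown here.
    url: AgB...
    username: AgB...
    password: AgB...
  template:
    metadata:
      # Labels declared here are applied to the unsealed Secret,
      # so the repository label survives sealing.
      labels:
        argocd.argoproj.io/secret-type: repository
```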

Example for HTTPS:

```yaml
@@ -182,6 +185,7 @@ metadata:
|
||||
labels:
|
||||
argocd.argoproj.io/secret-type: repository
|
||||
stringData:
|
||||
type: git
|
||||
url: https://github.com/argoproj/private-repo
|
||||
password: my-password
|
||||
username: my-username
|
||||
@@ -198,6 +202,7 @@ metadata:
|
||||
labels:
|
||||
argocd.argoproj.io/secret-type: repository
|
||||
stringData:
|
||||
type: git
|
||||
url: git@github.com:argoproj/my-private-repository
|
||||
sshPrivateKey: |
|
||||
-----BEGIN OPENSSH PRIVATE KEY-----
|
||||
@@ -215,6 +220,7 @@ metadata:
|
||||
labels:
|
||||
argocd.argoproj.io/secret-type: repository
|
||||
stringData:
|
||||
type: git
|
||||
repo: https://github.com/argoproj/my-private-repository
|
||||
githubAppID: 1
|
||||
githubAppInstallationID: 2
|
||||
@@ -231,6 +237,7 @@ metadata:
|
||||
labels:
|
||||
argocd.argoproj.io/secret-type: repository
|
||||
stringData:
|
||||
type: git
|
||||
repo: https://ghe.example.com/argoproj/my-private-repository
|
||||
githubAppID: 1
|
||||
githubAppInstallationID: 2
|
||||
@@ -257,6 +264,7 @@ metadata:
|
||||
labels:
|
||||
argocd.argoproj.io/secret-type: repository
|
||||
stringData:
|
||||
type: git
|
||||
url: https://github.com/argoproj/private-repo
|
||||
---
|
||||
apiVersion: v1
|
||||
@@ -267,6 +275,7 @@ metadata:
|
||||
labels:
|
||||
argocd.argoproj.io/secret-type: repository
|
||||
stringData:
|
||||
type: git
|
||||
url: https://github.com/argoproj/other-private-repo
|
||||
---
|
||||
apiVersion: v1
|
||||
@@ -277,6 +286,7 @@ metadata:
|
||||
labels:
|
||||
argocd.argoproj.io/secret-type: repo-creds
|
||||
stringData:
|
||||
type: git
|
||||
url: https://github.com/argoproj
|
||||
password: my-password
|
||||
username: my-username
|
||||
@@ -396,6 +406,8 @@ data:
|
||||
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
|
||||
ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
|
||||
vs-ssh.visualstudio.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H
|
||||
github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
|
||||
github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
|
||||
```
|
||||
|
||||
!!! note
|
||||
@@ -416,6 +428,7 @@ metadata:
|
||||
labels:
|
||||
argocd.argoproj.io/secret-type: repository
|
||||
stringData:
|
||||
type: git
|
||||
url: https://github.com/argoproj/private-repo
|
||||
proxy: https://proxy-server-url:8888
|
||||
password: my-password
|
||||
@@ -495,13 +508,13 @@ execProviderConfig:
|
||||
installHint: string
|
||||
# Transport layer security configuration settings
|
||||
tlsClientConfig:
|
||||
# PEM-encoded bytes (typically read from a client certificate file).
|
||||
# Base64 encoded PEM-encoded bytes (typically read from a client certificate file).
|
||||
caData: string
|
||||
# PEM-encoded bytes (typically read from a client certificate file).
|
||||
# Base64 encoded PEM-encoded bytes (typically read from a client certificate file).
|
||||
certData: string
|
||||
# Server should be accessed without verifying the TLS certificate
|
||||
insecure: boolean
|
||||
# PEM-encoded bytes (typically read from a client certificate key file).
|
||||
# Base64 encoded PEM-encoded bytes (typically read from a client certificate key file).
|
||||
keyData: string
|
||||
# ServerName is passed to the server for SNI and is used in the client to check server
|
||||
# certificates against. If ServerName is empty, the hostname used to contact the
|
||||
|
||||
@@ -36,21 +36,19 @@ metadata:
|
||||
app.kubernetes.io/name: argocd-cm
|
||||
app.kubernetes.io/part-of: argocd
|
||||
data:
|
||||
resource.customizations: |
|
||||
argoproj.io/Application:
|
||||
health.lua: |
|
||||
hs = {}
|
||||
hs.status = "Progressing"
|
||||
hs.message = ""
|
||||
if obj.status ~= nil then
|
||||
if obj.status.health ~= nil then
|
||||
hs.status = obj.status.health.status
|
||||
if obj.status.health.message ~= nil then
|
||||
hs.message = obj.status.health.message
|
||||
end
|
||||
end
|
||||
resource.customizations.health.argoproj.io_Application: |
|
||||
hs = {}
|
||||
hs.status = "Progressing"
|
||||
hs.message = ""
|
||||
if obj.status ~= nil then
|
||||
if obj.status.health ~= nil then
|
||||
hs.status = obj.status.health.status
|
||||
if obj.status.health.message ~= nil then
|
||||
hs.message = obj.status.health.message
|
||||
end
|
||||
return hs
|
||||
end
|
||||
end
|
||||
return hs
|
||||
```
|
||||
|
||||
## Custom Health Checks
|
||||
|
||||
@@ -38,7 +38,7 @@ and might fail. To avoid failed syncs use `ARGOCD_GIT_ATTEMPTS_COUNT` environmen

* `argocd_git_request_total` - Number of git requests. The metric provides two tags: `repo` - Git repo URL; `request_type` - `ls-remote` or `fetch`.

* `ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM` (v1.8+) - environment variable that enables collecting RPC performance metrics. Enable it if you need to troubleshoot performance issue. Note: metric is expensive to both query and store!
* `ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM` - environment variable that enables collecting RPC performance metrics. Enable it if you need to troubleshoot performance issue. Note: metric is expensive to both query and store!

### argocd-application-controller

@@ -86,7 +86,7 @@ spec:
value: "2"
```

* `ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM` (v1.8+)- environment variable that enables collecting RPC performance metrics. Enable it if you need to troubleshoot performance issue. Note: metric is expensive to both query and store!
* `ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM` - environment variable that enables collecting RPC performance metrics. Enable it if you need to troubleshoot performance issue. Note: metric is expensive to both query and store!
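
If it helps, enabling the variable is just a matter of adding it to the relevant component's pod template. A minimal sketch, assuming a standard container spec fragment and the value `"true"` (both assumptions, not taken from this page):

```yaml
spec:
  template:
    spec:
      containers:
        - name: argocd-application-controller  # or the component you are troubleshooting
          env:
            - name: ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM
              value: "true"
```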

**metrics**

@@ -119,7 +119,7 @@ If the manifest generation has no side effects then requests are processed in pa
* **Multiple Helm based applications pointing to the same directory in one Git repository:** ensure that your Helm charts don't have conditional
[dependencies](https://helm.sh/docs/chart_best_practices/dependencies/#conditions-and-tags) and create `.argocd-allow-concurrency` file in chart directory.

* **Multiple Custom plugin based applications:** avoid creating temporal files during manifest generation and and create `.argocd-allow-concurrency` file in app directory.
* **Multiple Custom plugin based applications:** avoid creating temporal files during manifest generation and create `.argocd-allow-concurrency` file in app directory.

* **Multiple Kustomize or Ksonnet applications in same repository with [parameter overrides](../user-guide/parameters.md):** sorry, no workaround for now.

@@ -79,7 +79,7 @@ Since Contour Ingress supports only a single protocol per Ingress object, define
|
||||
|
||||
Internal HTTP/HTTPS Ingress:
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: argocd-server-http
|
||||
@@ -102,7 +102,7 @@ spec:
|
||||
|
||||
Internal gRPC Ingress:
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: argocd-server-grpc
|
||||
@@ -124,7 +124,7 @@ spec:
|
||||
|
||||
External HTTPS SSO Callback Ingress:
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: argocd-server-external-callback-http
|
||||
@@ -178,7 +178,7 @@ In order to expose the Argo CD API server with a single ingress rule and hostnam
|
||||
must be used to passthrough TLS connections and terminate TLS at the Argo CD API server.
|
||||
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: argocd-server-ingress
|
||||
@@ -244,7 +244,7 @@ way would be to define two Ingress objects. One for HTTP/HTTPS, and the other fo
|
||||
|
||||
HTTP/HTTPS Ingress:
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: argocd-server-http-ingress
|
||||
@@ -269,7 +269,7 @@ spec:
|
||||
|
||||
gRPC Ingress:
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: argocd-server-grpc-ingress
|
||||
@@ -379,7 +379,7 @@ spec:
|
||||
Once we create this service, we can configure the Ingress to conditionally route all `application/grpc` traffic to the new HTTP2 backend, using the `alb.ingress.kubernetes.io/conditions` annotation, as seen below. Note: The value after the . in the condition annotation _must_ be the same name as the service that you want traffic to route to - and will be applied on any path with a matching serviceName.
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1 # Use extensions/v1beta1 for Kubernetes 1.18 and older
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
annotations:
|
||||
@@ -408,6 +408,162 @@ Once we create this service, we can configure the Ingress to conditionally route
|
||||
- argocd.argoproj.io
|
||||
```
|
||||
|
||||
## Google Cloud load balancers with Kubernetes Ingress
|
||||
|
||||
You can make use of the integration of GKE with Google Cloud to deploy Load Balancers using just Kubernetes objects.
|
||||
|
||||
For this we will need these five objects:
|
||||
- A Service
|
||||
- A BackendConfig
|
||||
- A FrontendConfig
|
||||
- A secret with your SSL certificate
|
||||
- An Ingress for GKE
|
||||
|
||||
If you need detail for all the options available for these Google integrations, you can check the [Google docs on configuring Ingress features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)
|
||||
|
||||
### Disable internal TLS
|
||||
|
||||
First, to avoid internal redirection loops from HTTP to HTTPS, the API server should be run with TLS disabled. Edit the argocd-server deployment to add the --insecure flag to the argocd-server command. For this you can edit your resource live with `kubectl -n argocd edit deployments.apps argocd-server` or use a kustomize patch before installing Argo CD.
|
||||
|
||||
The container command should change from:
|
||||
```yaml
|
||||
containers:
|
||||
- command:
|
||||
- argocd-server
|
||||
- --staticassets
|
||||
- /shared/app
|
||||
```
|
||||
|
||||
To:
|
||||
```yaml
|
||||
containers:
|
||||
- command:
|
||||
- argocd-server
|
||||
- --insecure
|
||||
- --staticassets
|
||||
- /shared/app
|
||||
```
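
If you would rather keep this change in Git than edit the live Deployment, the same flag can be appended with a kustomize patch. A sketch, assuming your kustomization already references the Argo CD install manifests as a resource (the file name below is a placeholder):

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
  - install.yaml  # the Argo CD install manifest you vendor or reference
patches:
  - target:
      kind: Deployment
      name: argocd-server
    patch: |-
      # Append --insecure to the argocd-server command.
      - op: add
        path: /spec/template/spec/containers/0/command/-
        value: --insecure
```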
|
||||
|
||||
### Creating a service
|
||||
|
||||
Now you need an externally accessible service. This is practically the same as the internal service Argo CD has, but as a NodePort and with Google Cloud annotations. Note that this service is annotated to use a [Network Endpoint Group](https://cloud.google.com/load-balancing/docs/negs) (NEG) to allow your load balancer to send traffic directly to your pods without using kube-proxy, so remove the `neg` annotation if that's not what you want.
|
||||
|
||||
The service:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: argocd-server-external
|
||||
namespace: argocd
|
||||
annotations:
|
||||
cloud.google.com/neg: '{"ingress": true}'
|
||||
cloud.google.com/backend-config: '{"ports": {"http":"argocd-backend-config"}}'
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- name: http
|
||||
port: 80
|
||||
protocol: TCP
|
||||
targetPort: 8080
|
||||
selector:
|
||||
app.kubernetes.io/name: argocd-server
|
||||
```
|
||||
|
||||
### Creating a BackendConfig
|
||||
|
||||
See how the previous service references a backend config called `argocd-backend-config`? So let's deploy it using this YAML:
|
||||
|
||||
```yaml
|
||||
apiVersion: cloud.google.com/v1
|
||||
kind: BackendConfig
|
||||
metadata:
|
||||
name: argocd-backend-config
|
||||
namespace: argocd
|
||||
spec:
|
||||
healthCheck:
|
||||
checkIntervalSec: 30
|
||||
timeoutSec: 5
|
||||
healthyThreshold: 1
|
||||
unhealthyThreshold: 2
|
||||
type: HTTP
|
||||
requestPath: /healthz
|
||||
port: 8080
|
||||
```
|
||||
|
||||
It uses the same health check as the pods.
|
||||
|
||||
### Creating a FrontendConfig
|
||||
|
||||
Now we can deploy a frontend config with an HTTP to HTTPS redirect:
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.gke.io/v1beta1
|
||||
kind: FrontendConfig
|
||||
metadata:
|
||||
name: argocd-frontend-config
|
||||
namespace: argocd
|
||||
spec:
|
||||
redirectToHttps:
|
||||
enabled: true
|
||||
```
|
||||
|
||||
---
|
||||
!!! note
|
||||
|
||||
The next two steps (the certificate secret and the Ingress) are described supposing that you manage the certificate yourself, and you have the certificate and key files for it. In the case that your certificate is Google-managed, fix the next two steps using the [guide to use a Google-managed SSL certificate](https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_google-managed_certificate).
|
||||
|
||||
---
|
||||
|
||||
### Creating a certificate secret
|
||||
|
||||
We now need to create a secret with the SSL certificate we want in our load balancer. It's as easy as executing this command in the directory where your certificate files are stored:
|
||||
|
||||
```
|
||||
kubectl -n argocd create secret tls secret-yourdomain-com \
|
||||
--cert cert-file.crt --key key-file.key
|
||||
```
|
||||
|
||||
### Creating an Ingress
|
||||
|
||||
And finally, to top it all, our Ingress. Note the reference to our frontend config, the service, and to the certificate secret:
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: argocd
|
||||
namespace: argocd
|
||||
annotations:
|
||||
networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
|
||||
spec:
|
||||
tls:
|
||||
- secretName: secret-yourdomain-com
|
||||
rules:
|
||||
- host: argocd.yourdomain.com
|
||||
http:
|
||||
paths:
|
||||
- path: /*
|
||||
backend:
|
||||
serviceName: argocd-server-external
|
||||
servicePort: http
|
||||
```
|
||||
---
|
||||
!!! warning "Deprecation Warning"
|
||||
|
||||
Note that, according to this [deprecation guide](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#ingress-v122), if you're using Kubernetes 1.22+, instead of `networking.k8s.io/v1beta1`, you should use `networking.k8s.io/v1`.
|
||||
|
||||
---
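
For reference, the same Ingress rewritten for the `networking.k8s.io/v1` API might look like the sketch below; the `pathType` and the nested `backend.service` form are requirements of the v1 schema, while all names are carried over from the example above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
  tls:
    - secretName: secret-yourdomain-com
  rules:
    - host: argocd.yourdomain.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: argocd-server-external
                port:
                  name: http
```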
|
||||
|
||||
As you may know already, it can take some minutes to deploy the load balancer and become ready to accept connections. Once it's ready, get the public IP address for your Load Balancer, go to your DNS server (Google or third party) and point your domain or subdomain (i.e. argocd.yourdomain.com) to that IP address.
|
||||
|
||||
You can get that IP address describing the Ingress object like this:
|
||||
|
||||
```
|
||||
kubectl -n argocd describe ingresses argocd | grep Address
|
||||
```
|
||||
|
||||
Once the DNS change is propagated, you're ready to use Argo CD with your Google Cloud Load Balancer.
|
||||
|
||||
## Authenticating through multiple layers of authenticating reverse proxies
|
||||
|
||||
ArgoCD endpoints may be protected by one or more layers of authenticating reverse proxies. In that case, you can provide additional headers through the `argocd` CLI `--header` parameter to authenticate through those layers.
|
||||
|
||||
@@ -1,13 +1,13 @@
|
||||
# Installation
|
||||
|
||||
Argo CD has two type of installations: multi-tennant and core.
|
||||
Argo CD has two types of installations: multi-tenant and core.
|
||||
|
||||
## Multi-Tenant
|
||||
|
||||
The multi-tenant installation is the most common way to install Argo CD. This type of installation is typically used to service multiple application developer teams
|
||||
in the organization and maintained by a platform team.
|
||||
|
||||
The end-users can access Argo CD via API server using Web UI or `argocd` CLI. The `argocd` has to be configured using `argocd login <server-host>` command
|
||||
The end-users can access Argo CD via the API server using the Web UI or `argocd` CLI. The `argocd` CLI has to be configured using `argocd login <server-host>` command
|
||||
(learn more [here](../user-guide/commands/argocd_login.md)).
|
||||
|
||||
Two types of installation manifests are provided:
|
||||
@@ -18,7 +18,7 @@ Not recommended for production use. This type of installation is typically used
|
||||
|
||||
* [install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/install.yaml) - Standard Argo CD installation with cluster-admin access. Use this
|
||||
manifest set if you plan to use Argo CD to deploy applications in the same cluster that Argo CD runs
|
||||
in (i.e. kubernetes.svc.default). Will still be able to deploy to external clusters with inputted
|
||||
in (i.e. kubernetes.svc.default). It will still be able to deploy to external clusters with inputted
|
||||
credentials.
|
||||
|
||||
* [namespace-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/namespace-install.yaml) - Installation of Argo CD which requires only
|
||||
@@ -26,11 +26,11 @@ Not recommended for production use. This type of installation is typically used
|
||||
need Argo CD to deploy applications in the same cluster that Argo CD runs in, and will rely solely
|
||||
on inputted cluster credentials. An example of using this set of manifests is if you run several
|
||||
Argo CD instances for different teams, where each instance will be deploying applications to
|
||||
external clusters. Will still be possible to deploy to the same cluster (kubernetes.svc.default)
|
||||
external clusters. It will still be possible to deploy to the same cluster (kubernetes.svc.default)
|
||||
with inputted credentials (i.e. `argocd cluster add <CONTEXT> --in-cluster --namespace <YOUR NAMESPACE>`).
|
||||
|
||||
> Note: Argo CD CRDs are not included into [namespace-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/namespace-install.yaml).
|
||||
> and have to be installed separately. The CRD manifests are located in [manifests/crds](https://github.com/argoproj/argo-cd/blob/master/manifests/crds) directory.
|
||||
> and have to be installed separately. The CRD manifests are located in the [manifests/crds](https://github.com/argoproj/argo-cd/blob/master/manifests/crds) directory.
|
||||
> Use the following command to install them:
|
||||
> ```bash
|
||||
> kubectl apply -k https://github.com/argoproj/argo-cd/manifests/crds\?ref\=stable
|
||||
@@ -38,7 +38,7 @@ Not recommended for production use. This type of installation is typically used
|
||||
|
||||
### High Availability:
|
||||
|
||||
High Availability installation is recommended for production use. Bundle includes the same components but tunned for high availability and resiliency.
|
||||
High Availability installation is recommended for production use. This bundle includes the same components but tuned for high availability and resiliency.
|
||||
|
||||
* [ha/install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/ha/install.yaml) - the same as install.yaml but with multiple replicas for
|
||||
supported components.
|
||||
@@ -49,10 +49,16 @@ High Availability installation is recommended for production use. Bundle include
|
||||
## Core
|
||||
|
||||
The core installation is most suitable for cluster administrators who independently use Argo CD and don't need multi-tenancy features. This installation
includes less components and easier to setup. The bundle does not include API server, UI as well as install non-HA light-weight version of each component.
includes fewer components and is easier to set up. The bundle does not include the API server or UI, and installs the lightweight (non-HA) version of each component.
|
||||
|
||||
The end-users need Kubernetes access to manage Argo CD. The `argocd` CLI has to be configured using `argocd login --k8s-api` command. The Web UI is also
|
||||
available and can be started using `argocd admin dashboard` command.
|
||||
The end-users need Kubernetes access to manage Argo CD. The `argocd` CLI has to be configured using the following commands:
|
||||
|
||||
```bash
|
||||
kubectl config set-context --current --namespace=argocd # change current kube context to argocd namespace
|
||||
argocd login --core
|
||||
```
|
||||
|
||||
The Web UI is also available and can be started using the `argocd admin dashboard` command.
|
||||
|
||||
Installation manifests are available at [core-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/core-install.yaml).
|
||||
|
||||
@@ -74,4 +80,4 @@ resources:
|
||||
## Helm
|
||||
|
||||
Argo CD can be installed using [Helm](https://helm.sh/). The Helm chart is currently community maintained and available at
[argo-helm/charts/argo-cd](https://github.com/argoproj/argo-helm/tree/master/charts/argo-cd).
|
||||
|
||||
@@ -1,13 +1,27 @@
# Metrics

Argo CD exposes two sets of Prometheus metrics
Argo CD exposes different sets of Prometheus metrics per server.

## Application Metrics
## Application Controller Metrics
Metrics about applications. Scraped at the `argocd-metrics:8082/metrics` endpoint.

* Gauge for application health status
* Gauge for application sync status
* Counter for application sync history
| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_app_info` | gauge | Information about Applications. It contains labels such as `sync_status` and `health_status` that reflect the application state in ArgoCD. |
| `argocd_app_k8s_request_total` | counter | Number of kubernetes requests executed during application reconciliation |
| `argocd_app_labels` | gauge | Argo Application labels converted to Prometheus labels. Disabled by default. See section below about how to enable it. |
| `argocd_app_reconcile` | histogram | Application reconciliation performance. |
| `argocd_app_sync_total` | counter | Counter for application sync history |
| `argocd_cluster_api_resource_objects` | gauge | Number of k8s resource objects in the cache. |
| `argocd_cluster_api_resources` | gauge | Number of monitored kubernetes API resources. |
| `argocd_cluster_cache_age_seconds` | gauge | Cluster cache age in seconds. |
| `argocd_cluster_connection_status` | gauge | The k8s cluster current connection status. |
| `argocd_cluster_events_total` | counter | Number of processed k8s resource events. |
| `argocd_cluster_info` | gauge | Information about cluster. |
| `argocd_kubectl_exec_pending` | gauge | Number of pending kubectl executions |
| `argocd_kubectl_exec_total` | counter | Number of kubectl executions |
| `argocd_redis_request_duration` | histogram | Redis requests duration. |
| `argocd_redis_request_total` | counter | Number of redis requests executed during application reconciliation |

If you use ArgoCD with many application and project creations and deletions,
the metrics page will keep your application and project history in cache.
@@ -16,10 +30,57 @@ to deleted resources, you can schedule a metrics reset to clean the
history with an application controller flag. Example:
`--metrics-cache-expiration="24h0m0s"`.
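
Concretely, that flag goes on the application controller's command line; a minimal sketch of the container spec fragment (the surrounding layout is assumed, only the flag comes from the text above):

```yaml
containers:
  - name: argocd-application-controller
    command:
      - argocd-application-controller
      # Reset metrics for deleted applications/projects once a day.
      - --metrics-cache-expiration=24h0m0s
```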

### Exposing Application labels as Prometheus metrics

There are use-cases where ArgoCD Applications contain labels that are desired to be exposed as Prometheus metrics.
Some examples are:

* Having the team name as a label to allow routing alerts to specific receivers
* Creating dashboards broken down by business units

As the Application labels are specific to each company, this feature is disabled by default. To enable it, add the
`--metrics-application-labels` flag to the ArgoCD application controller.

The example below will expose the ArgoCD Application labels `team-name` and `business-unit` to Prometheus:

containers:
- command:
  - argocd-application-controller
  - --metrics-application-labels
  - team-name
  - --metrics-application-labels
  - business-unit

In this case, the metric would look like:

```
# TYPE argocd_app_labels gauge
argocd_app_labels{label_business_unit="bu-id-1",label_team_name="my-team",name="my-app-1",namespace="argocd",project="important-project"} 1
argocd_app_labels{label_business_unit="bu-id-1",label_team_name="my-team",name="my-app-2",namespace="argocd",project="important-project"} 1
argocd_app_labels{label_business_unit="bu-id-2",label_team_name="another-team",name="my-app-3",namespace="argocd",project="important-project"} 1
```

## API Server Metrics
Metrics about API Server API request and response activity (request totals, response codes, etc...).
Scraped at the `argocd-server-metrics:8083/metrics` endpoint.

| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_redis_request_duration` | histogram | Redis requests duration. |
| `argocd_redis_request_total` | counter | Number of redis requests executed during application reconciliation. |

## Repo Server Metrics
Metrics about the Repo Server.
Scraped at the `argocd-repo-server:8084/metrics` endpoint.

| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_git_request_duration_seconds` | histogram | Git requests duration seconds. |
| `argocd_git_request_total` | counter | Number of git requests performed by repo server |
| `argocd_redis_request_duration_seconds` | histogram | Redis requests duration seconds. |
| `argocd_redis_request_total` | counter | Number of redis requests executed during application reconciliation. |
| `argocd_repo_pending_request_total` | gauge | Number of pending requests requiring repository lock |

## Prometheus Operator

If using Prometheus Operator, the following ServiceMonitor example manifests can be used.
@@ -65,7 +126,7 @@ metadata:
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-repo-server-metrics
      app.kubernetes.io/name: argocd-repo-server
  endpoints:
  - port: metrics
```
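
Putting the corrected selector into a complete manifest, a repo server ServiceMonitor might look like the sketch below; the `release: prometheus-operator` label is only an assumption about how your Prometheus instance selects ServiceMonitors:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-repo-server-metrics
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-repo-server
  endpoints:
    - port: metrics
```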

@@ -4,7 +4,7 @@ Argo CD is un-opinionated about how secrets are managed. There's many ways to do

* [Bitnami Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets)
* [GoDaddy Kubernetes External Secrets](https://github.com/godaddy/kubernetes-external-secrets)
* [External Secrets Operator](https://github.com/ContainerSolutions/externalsecret-operator)
* [External Secrets Operator](https://github.com/external-secrets/external-secrets)
* [Hashicorp Vault](https://www.vaultproject.io)
* [Banzai Cloud Bank-Vaults](https://github.com/banzaicloud/bank-vaults)
* [Helm Secrets](https://github.com/jkroepke/helm-secrets)

@@ -11,7 +11,8 @@ Authentication to Argo CD API server is performed exclusively using [JSON Web To
in one of the following ways:

1. For the local `admin` user, a username/password is exchanged for a JWT using the `/api/v1/session`
endpoint. This token is signed & issued by the Argo CD API server itself, and has no expiration.
endpoint. This token is signed & issued by the Argo CD API server itself and it expires after 24 hours
(this token used not to expire, see [CVE-2021-26921](https://github.com/argoproj/argo-cd/security/advisories/GHSA-9h6w-j7w4-jr52)).
When the admin password is updated, all existing admin JWT tokens are immediately revoked.
The password is stored as a bcrypt hash in the [`argocd-secret`](https://github.com/argoproj/argo-cd/blob/master/manifests/base/config/argocd-secret.yaml) Secret.
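
For orientation, the relevant part of that Secret looks roughly like the sketch below. The `admin.password` and `admin.passwordMtime` key names are taken from the argocd-secret manifest; the values shown are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argocd
type: Opaque
stringData:
  # bcrypt hash of the admin password (placeholder value)
  admin.password: "$2a$10$placeholderplaceholderplaceholderplaceho"
  # timestamp of the last password change (placeholder value)
  admin.passwordMtime: "2021-11-01T00:00:00Z"
```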

@@ -154,3 +155,12 @@ Payloads from webhook events are considered untrusted. Argo CD only examines the
the involved applications of the webhook event (e.g. which repo was modified), then refreshes
the related application for reconciliation. This refresh is the same refresh which occurs regularly
at three minute intervals, just fast-tracked by the webhook event.

## Logging

Argo CD logs payloads of most API requests except requests that are considered sensitive, such as
`/cluster.ClusterService/Create`, `/session.SessionService/Create` etc. The full list of methods
can be found in [server/server.go](https://github.com/argoproj/argo-cd/blob/abba8dddce8cd897ba23320e3715690f465b4a95/server/server.go#L516).

Argo CD does not log IP addresses of clients requesting API endpoints, since the API server is typically behind a proxy. Instead, it is recommended
to configure IP address logging in the proxy server that sits in front of the API server.

@@ -30,6 +30,7 @@ argocd-application-controller [flags]
|
||||
--kubectl-parallelism-limit int Number of allowed concurrent kubectl fork/execs. Any value less the 1 means no limit. (default 20)
|
||||
--logformat string Set the logging format. One of: text|json (default "text")
|
||||
--loglevel string Set the logging level. One of: debug|info|warn|error (default "info")
|
||||
--metrics-application-labels strings List of Application labels that will be added to the argocd_application_labels metric
|
||||
--metrics-cache-expiration duration Prometheus metrics cache expiration (disabled by default. e.g. 24h0m0s)
|
||||
--metrics-port int Start metrics server on given port (default 8082)
|
||||
-n, --namespace string If present, the namespace scope for this CLI request
|
||||
|
||||
@@ -56,6 +56,7 @@ argocd-server [flags]
|
||||
--sentinel stringArray Redis sentinel hostname and port (e.g. argocd-redis-ha-announce-0:6379).
|
||||
--sentinelmaster string Redis sentinel master group name. (default "master")
|
||||
--server string The address and port of the Kubernetes API server
|
||||
--staticassets string Directory path that contains additional static assets (default "/shared/app")
|
||||
--tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
|
||||
--tlsciphers string The list of acceptable ciphers to be used when establishing TLS connections. Use 'list' to list available ciphers. (default "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384:TLS_RSA_WITH_AES_256_GCM_SHA384")
|
||||
--tlsmaxversion string The maximum SSL/TLS version that is acceptable (one of: 1.0|1.1|1.2|1.3) (default "1.3")
|
||||
|
||||