Mirror of https://github.com/argoproj/argo-cd.git, synced 2026-03-12 11:28:46 +01:00
Compare commits: 94 commits (release-0. ... release-0.)

Commit SHAs:
5732f19d20, 0dfcd2f7d8, 7e49cff7e9, 5d185b6584, 7bfa374b40, 6c20e0f7d7, 93d2dd7ed0, 89ece31762, d565a0a119, 1e2b554f01, 8cb2f5d7e4, c5814d5946, a4a81d1de9, cb27cec021, e13e13e7ae, 88d41f8efa, dbe09104a1, 6a18870ec1, ca9f992fc2, 063ff34f00, a9980c3025, 3f5967c83e, 22b77f5b34, c76db90437, 080f7ff4e0, c5730c8f5f, d46f284d9f, 221f19ae15, 550cb277df, 9c79af9340, 1ba52c8880, 92629067f7, 93a808e65a, bf99b251f8, f491540636, 7f84f7d541, 42b01f7126, 7e5c17939b, d6937ec629, f5a32f47d3, 316fcc6126, e163177a12, 1fe257c71e, 1eaa813f28, 924dad8980, 1ba10a1a20, ab02e10791, dd94e5e5c3, 1fcb90c4d9, 523c7ddf82, 3577a68d2d, d963f5fcc5, bed82d68df, 359271dfa8, 0af77a0706, e6efd79ad8, c953934d2e, 5b4742d42b, 269f70df51, 67177f933b, 606fdcded7, 70b9db68b4, dc8a2f5d62, 8830cf9556, bfb558eb92, 505866a4c6, acd2de80fb, 0b08bf4537, 223091482c, 4699946e1b, 097f87fd52, 66b4f3a685, 02116d4bfc, fb17589af6, 15ce7ea880, 57a3123a55, 32e96e4bb2, 47ee26a77a, 9cd5d52fbc, aa2afcd47b, fd510e7933, e29d5b9634, 2f9891b15b, c3ecd615ff, 4e22a3cb21, bc98b65190, 02b756ef40, 954706570c, e2faf6970f, a528ae9c12, 0a5871eba4, 27471d5249, ed484c00db, b868f26ca4
CHANGELOG.md (130 lines changed)

@@ -1,5 +1,135 @@
# Changelog

## v0.10.0 (TBD)

### Changes since v0.9:

+ Allow more fine-grained sync (issue #508)
+ Display init container logs (issue #681)
+ Redirect to /auth/login instead of /login when SSO token is used for authentication (issue #348)
+ Support using helm values files from a URL (issue #624)
+ Support public not-connected repo in app creation UI (issue #426)
+ Use ksonnet CLI instead of ksonnet libs (issue #626)
+ Allow selecting the order of the `yaml` files while creating a Helm App (#664)
* Remove default params from app history (issue #556)
* Update to ksonnet v0.13.0
* Update to kustomize 1.0.8
- API server fails to return apps due to gRPC max message size limit (issue #690)
- App creation UI for Helm apps shows only files prefixed with `values-` (issue #663)
- App creation UI should allow specifying values files outside of the helm app directory (issue #658)
- argocd-server logs credentials in plain text when adding git repositories (issue #653)
- Azure Repos do not work as a repository (issue #643)
- Better update conflict error handling during app editing (issue #685)
- Cluster watch needs to be restarted when CRDs get created (issue #627)
- Credentials not being accepted for Google Source Repositories (issue #651)
- Default project is created without permission to deploy cluster-level resources (issue #679)
- Clicking "Generate role token" resets policy changes (issue #655)
- Input type text instead of password on Connect repo panel (issue #693)
- Metrics endpoint not reachable through the metrics Kubernetes service (issue #672)
- Operation stuck in 'in progress' state if application has no resources (issue #682)
- Project should influence options for cluster and namespace during app creation (issue #592)
- Repo server unable to execute ls-remote for private repos (issue #639)
- Resource is always out of sync if it has only the 'ksonnet.io/component' label (issue #686)
- Resource nodes are 'jumping' on app details page (issue #683)
- Sync always suggests using the latest revision instead of the target revision (issue #669)
- Temporarily ignore service catalog resources (issue #650)

## v0.9.2 (2018-09-28)

* Update to kustomize 1.0.8
- Fix issue where argocd-server logged credentials in plain text during repo add (issue #653)
- Credentials not being accepted for Google Source Repositories (issue #651)
- Azure Repos do not work as a repository (issue #643)
- Temporarily ignore service catalog resources (issue #650)
- Normalize policies by always adding a space after each comma
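The v0.9.2 policy normalization ("adding a space after each comma") amounts to canonicalizing the comma-separated RBAC policy lines. A minimal sketch of that idea in Python; the `normalize_policy` helper and sample policy line are illustrative, not ArgoCD's actual code:

```python
def normalize_policy(policy: str) -> str:
    """Rewrite each CSV-style policy line with exactly one space after each comma."""
    lines = []
    for line in policy.splitlines():
        fields = [f.strip() for f in line.split(",")]
        lines.append(", ".join(fields))
    return "\n".join(lines)

print(normalize_policy("p,role:admin,applications,*,*/*,allow"))
# -> p, role:admin, applications, *, */*, allow
```

Normalizing to one canonical spacing means policies compare equal regardless of how the user typed them.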

## v0.9.1 (2018-09-24)

- Repo server unable to execute ls-remote for private repos (issue #639)

## v0.9.0 (2018-09-24)

### Notes about upgrading from v0.8

* Cluster-wide resources should be allowed in the default project (due to issue #330):

```
argocd project allow-cluster-resource default '*' '*'
```

* Projects now provide the ability to allow or deny deployments of cluster-scoped resources (e.g. Namespaces, ClusterRoles, CustomResourceDefinitions). When upgrading from v0.8 to v0.9, to match the behavior of v0.8 (which did not restrict deployed resources) and continue to allow deployment of cluster-scoped resources, an additional command should be run:

```bash
argocd proj allow-cluster-resource default '*' '*'
```

The above command allows the `default` project to deploy any cluster-scoped resource, which matches the behavior of v0.8.

* The secret keys in argocd-secret containing the TLS certificate and key have been renamed from `server.crt` and `server.key` to the standard `tls.crt` and `tls.key` keys. This enables ArgoCD to integrate better with Ingress and cert-manager. When upgrading to v0.9, the `server.crt` and `server.key` keys in argocd-secret should be renamed to the new keys.

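One way to script that rename is to rewrite the secret's `data` map and re-apply it (for example, piping `kubectl get secret argocd-secret -o json` through a small script and back into `kubectl apply -f -`). A minimal sketch of just the key rename; the `rename_tls_keys` helper and the sample secret are illustrative, not part of ArgoCD:

```python
import json

def rename_tls_keys(secret: dict) -> dict:
    """Move server.crt/server.key to the standard tls.crt/tls.key data keys."""
    data = dict(secret.get("data", {}))
    for old, new in (("server.crt", "tls.crt"), ("server.key", "tls.key")):
        if old in data:
            data[new] = data.pop(old)  # keep the value, change only the key name
    return {**secret, "data": data}

secret = {"kind": "Secret", "metadata": {"name": "argocd-secret"},
          "data": {"server.crt": "PEM64", "server.key": "KEY64"}}
print(json.dumps(rename_tls_keys(secret)["data"], sort_keys=True))
# -> {"tls.crt": "PEM64", "tls.key": "KEY64"}
```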
### Changes since v0.8:

+ Auto-sync option in application CRD instance (issue #79)
+ Support raw jsonnet as an application source (issue #540)
+ Reorder K8s resources to correct creation order (issue #102)
+ Redact K8s secrets from API server payloads (issue #470)
+ Support --in-cluster authentication without providing a kubeconfig (issue #527)
+ Special handling of CustomResourceDefinitions (issue #613)
+ ArgoCD should download helm chart dependencies (issue #582)
+ Export ArgoCD stats as prometheus-style metrics (issue #513)
+ Support restricting TLS version (issue #609)
+ Use 'kubectl auth reconcile' before 'kubectl apply' (issue #523)
+ Projects need controls on cluster-scoped resources (issue #330)
+ Support IAM authentication for managing external K8s clusters (issue #482)
+ Compatibility with cert-manager (issue #617)
* Enable TLS for repo server (issue #553)
* Split out dex into its own deployment (instead of sidecar) (issue #555)
+ [UI] Support selection of helm values files in App creation wizard (issue #499)
+ [UI] Support specifying source revision in App creation wizard (issue #503)
+ [UI] Improve resource diff rendering (issue #457)
+ [UI] Indicate number of ready containers in pod (issue #539)
+ [UI] Indicate when app is overriding parameters (issue #503)
+ [UI] Provide a YAML view of resources (issue #396)
+ [UI] Project Role/Token management from UI (issue #548)
+ [UI] App creation wizard should allow specifying source revision (issue #562)
+ [UI] Ability to modify application from UI (issue #615)
+ [UI] Indicate when operation is in progress or has failed (issue #566)
- Fix issue where changes were not pulled when tracking a branch (issue #567)
- Lazy enforcement of unknown cluster/namespace restricted resources (issue #599)
- Fix controller hot loop when app source contains bad manifests (issue #568)
- Fix issue where ArgoCD fails to deploy when resources are in a K8s list format (issue #584)
- Fix comparison failure when app contains unregistered custom resource (issue #583)
- Fix issue where helm hooks were being deployed as part of sync (issue #605)
- Fix race conditions in kube.GetResourcesWithLabel and DeleteResourceWithLabel (issue #587)
- [UI] Fix issue where projects filter does not work when application got changed
- [UI] Creating apps from directories is not obvious (issue #565)
- Helm hooks are being deployed as resources (issue #605)
- Disagreement in three-way diff calculation (issue #597)
- SIGSEGV in kube.GetResourcesWithLabel (issue #587)
- ArgoCD fails to deploy resources list (issue #584)
- Branch tracking not working properly (issue #567)
- Controller hot loop when application source has bad manifests (issue #568)

## v0.8.2 (2018-09-12)

- Downgrade ksonnet from v0.12.0 to v0.11.0 due to quote unescape regression
- Fix CLI panic when performing an initial `argocd sync/wait`

## v0.8.1 (2018-09-10)

+ [UI] Support selection of helm values files in App creation wizard (issue #499)
+ [UI] Support specifying source revision in App creation wizard (issue #503)
+ [UI] Improve resource diff rendering (issue #457)
+ [UI] Indicate number of ready containers in pod (issue #539)
+ [UI] Indicate when app is overriding parameters (issue #503)
+ [UI] Provide a YAML view of resources (issue #396)
- Fix issue where changes were not pulled when tracking a branch (issue #567)
- Fix controller hot loop when app source contains bad manifests (issue #568)
- [UI] Fix issue where projects filter does not work when application got changed

## v0.8.0 (2018-09-04)

### Notes about upgrading from v0.7

Dockerfile (14 lines changed)

@@ -49,22 +49,26 @@ RUN curl -L -o /usr/local/bin/kubectl -LO https://storage.googleapis.com/kuberne
 # Option 1: build ksonnet ourselves
 #RUN go get -v -u github.com/ksonnet/ksonnet && mv ${GOPATH}/bin/ksonnet /usr/local/bin/ks
 # Option 2: use official tagged ksonnet release
-ENV KSONNET_VERSION=0.12.0
+ENV KSONNET_VERSION=0.13.0
 RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
     tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
     mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /usr/local/bin/ks

 # Install helm
-ENV HELM_VERSION=2.9.1
+ENV HELM_VERSION=2.11.0
 RUN wget https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
     tar -C /tmp/ -xf helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
     mv /tmp/linux-amd64/helm /usr/local/bin/helm

 # Install kustomize
-ENV KUSTOMIZE_VERSION=1.0.7
+ENV KUSTOMIZE_VERSION=1.0.10
 RUN curl -L -o /usr/local/bin/kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64 && \
     chmod +x /usr/local/bin/kustomize

+ENV AWS_IAM_AUTHENTICATOR_VERSION=0.3.0
+RUN curl -L -o /usr/local/bin/aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.3.0/heptio-authenticator-aws_${AWS_IAM_AUTHENTICATOR_VERSION}_linux_amd64 && \
+    chmod +x /usr/local/bin/aws-iam-authenticator
+
 ####################################################################################################
 # ArgoCD Build stage which performs the actual build of ArgoCD binaries
@@ -109,6 +113,7 @@ COPY --from=builder /usr/local/bin/ks /usr/local/bin/ks
 COPY --from=builder /usr/local/bin/helm /usr/local/bin/helm
 COPY --from=builder /usr/local/bin/kubectl /usr/local/bin/kubectl
 COPY --from=builder /usr/local/bin/kustomize /usr/local/bin/kustomize
+COPY --from=builder /usr/local/bin/aws-iam-authenticator /usr/local/bin/aws-iam-authenticator

 # workaround ksonnet issue https://github.com/ksonnet/ksonnet/issues/298
 ENV USER=argocd
@@ -123,6 +128,9 @@ RUN ln -s /usr/local/bin/argocd /argocd && \
     ln -s /usr/local/bin/argocd-repo-server /argocd-repo-server

+USER argocd
+
+RUN helm init --client-only

 WORKDIR /home/argocd
 ARG BINARY
 CMD ${BINARY}
Gopkg.lock (generated; 484 lines changed)

The generated lock file diff records routine dependency pin updates, including github.com/argoproj/argo (v2.1.1 to a master revision), github.com/gogo/protobuf (v1.0.0 to v1.1.1), github.com/golang/protobuf (to v1.2.0), github.com/sirupsen/logrus (to v1.1.0), github.com/stretchr/testify (v1.2.1 to v1.2.2), and google.golang.org/grpc (v1.10.0 to v1.15.0), plus newly added Prometheus client and gopkg.in/src-d/go-git.v4 packages.
|
||||
"plumbing/object",
|
||||
"plumbing/protocol/packp",
|
||||
"plumbing/protocol/packp/capability",
|
||||
"plumbing/protocol/packp/sideband",
|
||||
"plumbing/revlist",
|
||||
"plumbing/storer",
|
||||
"plumbing/transport",
|
||||
"plumbing/transport/client",
|
||||
"plumbing/transport/file",
|
||||
"plumbing/transport/git",
|
||||
"plumbing/transport/http",
|
||||
"plumbing/transport/internal/common",
|
||||
"plumbing/transport/server",
|
||||
"plumbing/transport/ssh",
|
||||
"storage",
|
||||
"storage/filesystem",
|
||||
"storage/filesystem/dotgit",
|
||||
"storage/memory",
|
||||
"utils/binary",
|
||||
"utils/diff",
|
||||
"utils/ioutil",
|
||||
"utils/merkletrie",
|
||||
"utils/merkletrie/filesystem",
|
||||
"utils/merkletrie/index",
|
||||
"utils/merkletrie/internal/frame",
|
||||
"utils/merkletrie/noder",
|
||||
]
|
||||
pruneopts = ""
|
||||
revision = "d3cec13ac0b195bfb897ed038a08b5130ab9969e"
|
||||
version = "v4.7.0"
|
||||
|
||||
[[projects]]
|
||||
digest = "1:ceec7e96590fb8168f36df4795fefe17051d4b0c2acc7ec4e260d8138c4dafac"
|
||||
name = "gopkg.in/warnings.v0"
|
||||
packages = ["."]
|
||||
pruneopts = ""
|
||||
revision = "ec4a0fea49c7b46c2aeb0b51aac55779c607e52b"
|
||||
version = "v0.1.2"
|
||||
|
||||
[[projects]]
|
||||
digest = "1:81314a486195626940617e43740b4fa073f265b0715c9f54ce2027fee1cb5f61"
|
||||
name = "gopkg.in/yaml.v2"
|
||||
@@ -906,8 +1089,8 @@
|
||||
revision = "eb3733d160e74a9c7e442f435eb3bea458e1d19f"
|
||||
|
||||
[[projects]]
|
||||
branch = "release-1.10"
|
||||
digest = "1:5beb32094452970c0d73a2bdacd79aa9cfaa4947a774d521c1bed4b4c2705f15"
|
||||
branch = "release-1.12"
|
||||
digest = "1:ed04c5203ecbf6358fb6a774b0ecd40ea992d6dcc42adc1d3b7cf9eceb66b6c8"
|
||||
name = "k8s.io/api"
|
||||
packages = [
|
||||
"admission/v1beta1",
|
||||
@@ -922,10 +1105,12 @@
|
||||
"authorization/v1beta1",
|
||||
"autoscaling/v1",
|
||||
"autoscaling/v2beta1",
|
||||
"autoscaling/v2beta2",
|
||||
"batch/v1",
|
||||
"batch/v1beta1",
|
||||
"batch/v2alpha1",
|
||||
"certificates/v1beta1",
|
||||
"coordination/v1beta1",
|
||||
"core/v1",
|
||||
"events/v1beta1",
|
||||
"extensions/v1beta1",
|
||||
@@ -936,17 +1121,18 @@
|
||||
"rbac/v1alpha1",
|
||||
"rbac/v1beta1",
|
||||
"scheduling/v1alpha1",
|
||||
"scheduling/v1beta1",
|
||||
"settings/v1alpha1",
|
||||
"storage/v1",
|
||||
"storage/v1alpha1",
|
||||
"storage/v1beta1",
|
||||
]
|
||||
pruneopts = ""
|
||||
revision = "8b7507fac302640dd5f1efbf9643199952cc58db"
|
||||
revision = "475331a8afff5587f47d0470a93f79c60c573c03"
|
||||
|
||||
[[projects]]
|
||||
branch = "release-1.10"
|
||||
digest = "1:7cb811fe9560718bd0ada29f2091acab5c4b4380ed23ef2824f64ce7038d899e"
|
||||
branch = "release-1.12"
|
||||
digest = "1:39be82077450762b5e14b5268e679a14ac0e9c7d3286e2fcface437556a29e4c"
|
||||
name = "k8s.io/apiextensions-apiserver"
|
||||
packages = [
|
||||
"pkg/apis/apiextensions",
|
||||
@@ -956,20 +1142,17 @@
|
||||
"pkg/client/clientset/clientset/typed/apiextensions/v1beta1",
|
||||
]
|
||||
pruneopts = ""
|
||||
revision = "b13a681559816a9c14f93086bbeeed1c7baf2bcb"
|
||||
revision = "ca1024863b48cf0701229109df75ac5f0bb4907e"
|
||||
|
||||
[[projects]]
|
||||
branch = "release-1.10"
|
||||
digest = "1:b9c6e8e91bab6a419c58a63377532782a9f5616552164c38a9527f91c9309bbe"
|
||||
branch = "release-1.12"
|
||||
digest = "1:5899da40e41bcc8c1df101b72954096bba9d85b763bc17efc846062ccc111c7b"
|
||||
name = "k8s.io/apimachinery"
|
||||
packages = [
|
||||
"pkg/api/equality",
|
||||
"pkg/api/errors",
|
||||
"pkg/api/meta",
|
||||
"pkg/api/resource",
|
||||
"pkg/apimachinery",
|
||||
"pkg/apimachinery/announced",
|
||||
"pkg/apimachinery/registered",
|
||||
"pkg/apis/meta/internalversion",
|
||||
"pkg/apis/meta/v1",
|
||||
"pkg/apis/meta/v1/unstructured",
|
||||
@@ -996,6 +1179,7 @@
|
||||
"pkg/util/intstr",
|
||||
"pkg/util/json",
|
||||
"pkg/util/mergepatch",
|
||||
"pkg/util/naming",
|
||||
"pkg/util/net",
|
||||
"pkg/util/runtime",
|
||||
"pkg/util/sets",
|
||||
@@ -1010,11 +1194,11 @@
|
||||
"third_party/forked/golang/reflect",
|
||||
]
|
||||
pruneopts = ""
|
||||
revision = "f6313580a4d36c7c74a3d845dda6e116642c4f90"
|
||||
revision = "f71dbbc36e126f5a371b85f6cca96bc8c57db2b6"
|
||||
|
||||
[[projects]]
|
||||
branch = "release-7.0"
|
||||
digest = "1:3a45889089f89cc371fb45b3f8a478248b755e4af17a8cf592e49bdf3481a0b3"
|
||||
branch = "release-9.0"
|
||||
digest = "1:77bf3d9f18ec82e08ac6c4c7e2d9d1a2ef8d16b25d3ff72fcefcf9256d751573"
|
||||
name = "k8s.io/client-go"
|
||||
packages = [
|
||||
"discovery",
|
||||
@@ -1032,12 +1216,15 @@
|
||||
"informers/autoscaling",
|
||||
"informers/autoscaling/v1",
|
||||
"informers/autoscaling/v2beta1",
|
||||
"informers/autoscaling/v2beta2",
|
||||
"informers/batch",
|
||||
"informers/batch/v1",
|
||||
"informers/batch/v1beta1",
|
||||
"informers/batch/v2alpha1",
|
||||
"informers/certificates",
|
||||
"informers/certificates/v1beta1",
|
||||
"informers/coordination",
|
||||
"informers/coordination/v1beta1",
|
||||
"informers/core",
|
||||
"informers/core/v1",
|
||||
"informers/events",
|
||||
@@ -1055,6 +1242,7 @@
|
||||
"informers/rbac/v1beta1",
|
||||
"informers/scheduling",
|
||||
"informers/scheduling/v1alpha1",
|
||||
"informers/scheduling/v1beta1",
|
||||
"informers/settings",
|
||||
"informers/settings/v1alpha1",
|
||||
"informers/storage",
|
||||
@@ -1086,6 +1274,8 @@
|
||||
"kubernetes/typed/autoscaling/v1/fake",
|
||||
"kubernetes/typed/autoscaling/v2beta1",
|
||||
"kubernetes/typed/autoscaling/v2beta1/fake",
|
||||
"kubernetes/typed/autoscaling/v2beta2",
|
||||
"kubernetes/typed/autoscaling/v2beta2/fake",
|
||||
"kubernetes/typed/batch/v1",
|
||||
"kubernetes/typed/batch/v1/fake",
|
||||
"kubernetes/typed/batch/v1beta1",
|
||||
@@ -1094,6 +1284,8 @@
|
||||
"kubernetes/typed/batch/v2alpha1/fake",
|
||||
"kubernetes/typed/certificates/v1beta1",
|
||||
"kubernetes/typed/certificates/v1beta1/fake",
|
||||
"kubernetes/typed/coordination/v1beta1",
|
||||
"kubernetes/typed/coordination/v1beta1/fake",
|
||||
"kubernetes/typed/core/v1",
|
||||
"kubernetes/typed/core/v1/fake",
|
||||
"kubernetes/typed/events/v1beta1",
|
||||
@@ -1112,6 +1304,8 @@
|
||||
"kubernetes/typed/rbac/v1beta1/fake",
|
||||
"kubernetes/typed/scheduling/v1alpha1",
|
||||
"kubernetes/typed/scheduling/v1alpha1/fake",
|
||||
"kubernetes/typed/scheduling/v1beta1",
|
||||
"kubernetes/typed/scheduling/v1beta1/fake",
|
||||
"kubernetes/typed/settings/v1alpha1",
|
||||
"kubernetes/typed/settings/v1alpha1/fake",
|
||||
"kubernetes/typed/storage/v1",
|
||||
@@ -1127,10 +1321,12 @@
|
||||
"listers/apps/v1beta2",
|
||||
"listers/autoscaling/v1",
|
||||
"listers/autoscaling/v2beta1",
|
||||
"listers/autoscaling/v2beta2",
|
||||
"listers/batch/v1",
|
||||
"listers/batch/v1beta1",
|
||||
"listers/batch/v2alpha1",
|
||||
"listers/certificates/v1beta1",
|
||||
"listers/coordination/v1beta1",
|
||||
"listers/core/v1",
|
||||
"listers/events/v1beta1",
|
||||
"listers/extensions/v1beta1",
|
||||
@@ -1140,12 +1336,14 @@
|
||||
"listers/rbac/v1alpha1",
|
||||
"listers/rbac/v1beta1",
|
||||
"listers/scheduling/v1alpha1",
|
||||
"listers/scheduling/v1beta1",
|
||||
"listers/settings/v1alpha1",
|
||||
"listers/storage/v1",
|
||||
"listers/storage/v1alpha1",
|
||||
"listers/storage/v1beta1",
|
||||
"pkg/apis/clientauthentication",
|
||||
"pkg/apis/clientauthentication/v1alpha1",
|
||||
"pkg/apis/clientauthentication/v1beta1",
|
||||
"pkg/version",
|
||||
"plugin/pkg/client/auth/exec",
|
||||
"plugin/pkg/client/auth/gcp",
|
||||
@@ -1166,6 +1364,7 @@
|
||||
"transport",
|
||||
"util/buffer",
|
||||
"util/cert",
|
||||
"util/connrotation",
|
||||
"util/flowcontrol",
|
||||
"util/homedir",
|
||||
"util/integer",
|
||||
@@ -1174,11 +1373,11 @@
|
||||
"util/workqueue",
|
||||
]
|
||||
pruneopts = ""
|
||||
revision = "26a26f55b28aa1b338fbaf6fbbe0bcd76aed05e0"
|
||||
revision = "13596e875accbd333e0b5bd5fd9462185acd9958"
|
||||
|
||||
[[projects]]
|
||||
branch = "release-1.10"
|
||||
digest = "1:34b0b3400ffdc2533ed4ea23721956638c2776ba49ca4c5def71dddcf0cdfd9b"
|
||||
branch = "release-1.12"
|
||||
digest = "1:e6fffdf0dfeb0d189a7c6d735e76e7564685d3b6513f8b19d3651191cb6b084b"
|
||||
name = "k8s.io/code-generator"
|
||||
packages = [
|
||||
"cmd/go-to-protobuf",
|
||||
@@ -1187,7 +1386,7 @@
|
||||
"third_party/forked/golang/reflect",
|
||||
]
|
||||
pruneopts = ""
|
||||
revision = "9de8e796a74d16d2a285165727d04c185ebca6dc"
|
||||
revision = "3dcf91f64f638563e5106f21f50c31fa361c918d"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
@@ -1215,7 +1414,7 @@
|
||||
revision = "50ae88d24ede7b8bad68e23c805b5d3da5c8abaf"
|
||||
|
||||
[[projects]]
|
||||
digest = "1:ad247ab9725165a7f289779d46747da832e33a4efe8ae264461afc571f65dac8"
|
||||
digest = "1:6061aa42761235df375f20fa4a1aa6d1845cba3687575f3adb2ef3f3bc540af5"
|
||||
name = "k8s.io/kubernetes"
|
||||
packages = [
|
||||
"pkg/apis/apps",
|
||||
@@ -1224,11 +1423,12 @@
|
||||
"pkg/apis/core",
|
||||
"pkg/apis/extensions",
|
||||
"pkg/apis/networking",
|
||||
"pkg/apis/policy",
|
||||
"pkg/kubectl/scheme",
|
||||
]
|
||||
pruneopts = ""
|
||||
revision = "81753b10df112992bf51bbc2c2f85208aad78335"
|
||||
version = "v1.10.2"
|
||||
revision = "17c77c7898218073f14c8d573582e8d2313dc740"
|
||||
version = "v1.12.2"
|
||||
|
||||
[solve-meta]
|
||||
analyzer-name = "dep"
|
||||
@@ -1253,10 +1453,10 @@
|
||||
"github.com/gogo/protobuf/proto",
|
||||
"github.com/gogo/protobuf/protoc-gen-gofast",
|
||||
"github.com/gogo/protobuf/protoc-gen-gogofast",
|
||||
"github.com/golang/glog",
|
||||
"github.com/golang/protobuf/proto",
|
||||
"github.com/golang/protobuf/protoc-gen-go",
|
||||
"github.com/golang/protobuf/ptypes/empty",
|
||||
"github.com/google/go-jsonnet",
|
||||
"github.com/grpc-ecosystem/go-grpc-middleware",
|
||||
"github.com/grpc-ecosystem/go-grpc-middleware/auth",
|
||||
"github.com/grpc-ecosystem/go-grpc-middleware/logging",
|
||||
@@ -1266,15 +1466,14 @@
|
||||
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger",
|
||||
"github.com/grpc-ecosystem/grpc-gateway/runtime",
|
||||
"github.com/grpc-ecosystem/grpc-gateway/utilities",
|
||||
"github.com/ksonnet/ksonnet/pkg/app",
|
||||
"github.com/ksonnet/ksonnet/pkg/component",
|
||||
"github.com/patrickmn/go-cache",
|
||||
"github.com/pkg/errors",
|
||||
"github.com/prometheus/client_golang/prometheus",
|
||||
"github.com/prometheus/client_golang/prometheus/promhttp",
|
||||
"github.com/qiangmzsx/string-adapter",
|
||||
"github.com/sirupsen/logrus",
|
||||
"github.com/skratchdot/open-golang/open",
|
||||
"github.com/soheilhy/cmux",
|
||||
"github.com/spf13/afero",
|
||||
"github.com/spf13/cobra",
|
||||
"github.com/spf13/pflag",
|
||||
"github.com/stretchr/testify/assert",
|
||||
@@ -1283,11 +1482,11 @@
|
||||
"github.com/yudai/gojsondiff",
|
||||
"github.com/yudai/gojsondiff/formatter",
|
||||
"golang.org/x/crypto/bcrypt",
|
||||
"golang.org/x/crypto/ssh",
|
||||
"golang.org/x/crypto/ssh/terminal",
|
||||
"golang.org/x/net/context",
|
||||
"golang.org/x/oauth2",
|
||||
"golang.org/x/sync/errgroup",
|
||||
"golang.org/x/tools/cmd/cover",
|
||||
"google.golang.org/genproto/googleapis/api/annotations",
|
||||
"google.golang.org/grpc",
|
||||
"google.golang.org/grpc/codes",
|
||||
@@ -1300,6 +1499,13 @@
|
||||
"gopkg.in/go-playground/webhooks.v3/bitbucket",
|
||||
"gopkg.in/go-playground/webhooks.v3/github",
|
||||
"gopkg.in/go-playground/webhooks.v3/gitlab",
|
||||
"gopkg.in/src-d/go-git.v4",
|
||||
"gopkg.in/src-d/go-git.v4/config",
|
||||
"gopkg.in/src-d/go-git.v4/plumbing",
|
||||
"gopkg.in/src-d/go-git.v4/plumbing/transport",
|
||||
"gopkg.in/src-d/go-git.v4/plumbing/transport/http",
|
||||
"gopkg.in/src-d/go-git.v4/plumbing/transport/ssh",
|
||||
"gopkg.in/src-d/go-git.v4/storage/memory",
|
||||
"k8s.io/api/apps/v1",
|
||||
"k8s.io/api/apps/v1beta1",
|
||||
"k8s.io/api/apps/v1beta2",
|
||||
|
||||
55 Gopkg.toml
@@ -1,62 +1,63 @@
# Packages should only be added to the following list when we use them *outside* of our go code.
# (e.g. we want to build the binary to invoke as part of the build process, such as in
# generate-proto.sh). Normal use of golang packages should be added via `dep ensure`, and pinned
# with a [[constraint]] or [[override]] when version is important.
required = [
"github.com/golang/protobuf/protoc-gen-go",
"github.com/gogo/protobuf/protoc-gen-gofast",
"github.com/gogo/protobuf/protoc-gen-gogofast",
"golang.org/x/sync/errgroup",
"k8s.io/code-generator/cmd/go-to-protobuf",
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway",
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger",
"github.com/golang/protobuf/protoc-gen-go",
"golang.org/x/tools/cmd/cover",
"github.com/argoproj/pkg/time",
"github.com/dustin/go-humanize",
"golang.org/x/sync/errgroup",
]

[[constraint]]
name = "google.golang.org/grpc"
version = "1.9.2"
version = "1.15.0"

[[constraint]]
name = "github.com/gogo/protobuf"
version = "1.1.1"

# override github.com/grpc-ecosystem/go-grpc-middleware's constraint on master
[[override]]
name = "github.com/golang/protobuf"
version = "1.2.0"

[[constraint]]
name = "github.com/grpc-ecosystem/grpc-gateway"
version = "v1.3.1"

# override argo outdated dependency
[[override]]
branch = "release-1.10"
name = "k8s.io/api"
# prometheus does not believe in semversioning yet
[[constraint]]
name = "github.com/prometheus/client_golang"
revision = "7858729281ec582767b20e0d696b6041d995d5e0"

[[override]]
branch = "release-1.10"
name = "k8s.io/apimachinery"
[[constraint]]
branch = "release-1.12"
name = "k8s.io/api"

[[constraint]]
name = "k8s.io/apiextensions-apiserver"
branch = "release-1.10"
branch = "release-1.12"

[[constraint]]
branch = "release-1.10"
branch = "release-1.12"
name = "k8s.io/code-generator"

[[override]]
branch = "release-7.0"
[[constraint]]
branch = "release-9.0"
name = "k8s.io/client-go"

[[constraint]]
name = "github.com/stretchr/testify"
version = "1.2.1"

[[constraint]]
name = "github.com/ksonnet/ksonnet"
version = "v0.12.0"
version = "1.2.2"

[[constraint]]
name = "github.com/gobuffalo/packr"
version = "v1.11.0"

# override ksonnet's logrus dependency
[[override]]
name = "github.com/sirupsen/logrus"
revision = "ea8897e79973357ba785ac2533559a6297e83c44"

[[constraint]]
branch = "master"
name = "github.com/argoproj/pkg"
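The Gopkg.toml hunks above mix `[[constraint]]` and `[[override]]` stanzas. As a reminder of the distinction in dep's syntax (a sketch; the version numbers below are simply copied from the diff): a constraint applies only when this project imports the package directly, while an override also forces the version chosen on behalf of transitive dependencies.

```toml
# Constraint: honored only for packages this project imports directly.
[[constraint]]
  name = "google.golang.org/grpc"
  version = "1.15.0"

# Override: wins even when a transitive dependency asks for another version.
[[override]]
  name = "github.com/golang/protobuf"
  version = "1.2.0"
```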
8 Makefile
@@ -78,9 +78,8 @@ cli-darwin: clean-debug
argocd-util: clean-debug
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS} -extldflags "-static"' -o ${DIST_DIR}/argocd-util ./cmd/argocd-util

.PHONY: install-manifest
install-manifest:
if [ "${IMAGE_NAMESPACE}" = "" ] ; then echo "IMAGE_NAMESPACE must be set to build install manifest" ; exit 1 ; fi
.PHONY: manifests
manifests:
./hack/update-manifests.sh

.PHONY: server
@@ -149,9 +148,10 @@ clean: clean-debug
precheckin: test lint

.PHONY: release-precheck
release-precheck: install-manifest
release-precheck: manifests
@if [ "$(GIT_TREE_STATE)" != "clean" ]; then echo 'git tree state is $(GIT_TREE_STATE)' ; exit 1; fi
@if [ -z "$(GIT_TAG)" ]; then echo 'commit must be tagged to perform release' ; exit 1; fi
@if [ "$(GIT_TAG)" != "v`cat VERSION`" ]; then echo 'VERSION does not match git tag'; exit 1; fi

.PHONY: release
release: release-precheck precheckin cli-darwin cli-linux server-image controller-image repo-server-image cli-image
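The new `release-precheck` target gates a release on a clean git tree, a tagged commit, and a tag that matches the `VERSION` file. The tag/VERSION comparison can be sketched in plain shell (the variable values below are illustrative; in the Makefile they come from `cat VERSION` and git metadata):

```shell
# Illustrative values; the Makefile derives these from the VERSION file and git.
VERSION="0.11.0"
GIT_TAG="v0.11.0"
if [ "$GIT_TAG" != "v$VERSION" ]; then
  echo "VERSION does not match git tag"
  exit 1
fi
echo "release precheck ok"
```

The check ensures the binaries built for a release are stamped with exactly the version recorded in the tagged source tree.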
2 Procfile
@@ -1,4 +1,4 @@
controller: go run ./cmd/argocd-application-controller/main.go
api-server: go run ./cmd/argocd-server/main.go --insecure
api-server: go run ./cmd/argocd-server/main.go --insecure --disable-auth
repo-server: go run ./cmd/argocd-repo-server/main.go --loglevel debug
dex: sh -c "go run ./cmd/argocd-util/main.go gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p 5556:5556 -p 5557:5557 -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/coreos/dex:v2.10.0 serve /dex.yaml"
14 README.md
@@ -23,14 +23,21 @@ is provided for additional features.
Argo CD follows the **GitOps** pattern of using git repositories as the source of truth for defining
the desired application state. Kubernetes manifests can be specified in several ways:
* [ksonnet](https://ksonnet.io) applications
* [kustomize](https://kustomize.io) applications
* [helm](https://helm.sh) charts
* Simple directory of YAML/json manifests
* Plain directory of YAML/json manifests

Argo CD automates the deployment of the desired application states in the specified target environments.
Application deployments can track updates to branches, tags, or pinned to a specific version of
manifests at a git commit. See [tracking strategies](docs/tracking_strategies.md) for additional
details about the different tracking strategies available.

For a quick 10 minute overview of ArgoCD, check out the demo presented to the Sig Apps community
meeting:
[](https://youtu.be/aWDIQMbp1cc?t=1m4s)

## Architecture



@@ -48,6 +55,7 @@ For additional details, see [architecture overview](docs/architecture.md).
## Features

* Automated deployment of applications to specified target environments
* Flexibility in support for multiple config management tools (Ksonnet, Kustomize, Helm, plain-YAML)
* Continuous monitoring of deployed applications
* Automated or manual syncing of applications to its desired state
* Web and CLI based visualization of applications and differences between live vs. desired state
@@ -58,13 +66,11 @@ For additional details, see [architecture overview](docs/architecture.md).
* PreSync, Sync, PostSync hooks to support complex application rollouts (e.g.blue/green & canary upgrades)
* Audit trails for application events and API calls
* Parameter overrides for overriding ksonnet/helm parameters in git
* Service account/access key management for CI pipelines

## Development Status
* Argo CD is being used in production to deploy SaaS services at Intuit

## Roadmap
* Auto-sync toggle to directly apply git state changes to live state
* Service account/access key management for CI pipelines
* Support for additional config management tools (Kustomize?)
* Revamped UI, and feature parity with CLI
* Customizable application actions
@@ -12,6 +12,7 @@ import (
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"

// load the gcp plugin (required to authenticate against GKE clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
// load the oidc plugin (required to authenticate with OpenID Connect).
@@ -23,7 +24,6 @@ import (
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/stats"
)

@@ -66,25 +66,14 @@ func newCommand() *cobra.Command {
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)

// TODO (amatyushentsev): Use config map to store controller configuration
controllerConfig := controller.ApplicationControllerConfig{
Namespace: namespace,
InstanceID: "",
}
db := db.NewDB(namespace, kubeClient)
resyncDuration := time.Duration(appResyncPeriod) * time.Second
repoClientset := reposerver.NewRepositoryServerClientset(repoServerAddress)
appStateManager := controller.NewAppStateManager(db, appClient, repoClientset, namespace)

appController := controller.NewApplicationController(
namespace,
kubeClient,
appClient,
repoClientset,
db,
appStateManager,
resyncDuration,
&controllerConfig)
resyncDuration)
secretController := controller.NewSecretController(kubeClient, repoClientset, resyncDuration, namespace)

ctx, cancel := context.WithCancel(context.Background())

@@ -17,6 +17,7 @@ import (
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/ksonnet"
"github.com/argoproj/argo-cd/util/stats"
"github.com/argoproj/argo-cd/util/tls"
)

const (
@@ -27,7 +28,8 @@ const (

func newCommand() *cobra.Command {
var (
logLevel string
logLevel string
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
)
var command = cobra.Command{
Use: cliName,
@@ -37,7 +39,11 @@ func newCommand() *cobra.Command {
errors.CheckError(err)
log.SetLevel(level)

server := reposerver.NewServer(git.NewFactory(), newCache())
tlsConfigCustomizer, err := tlsConfigCustomizerSrc()
errors.CheckError(err)

server, err := reposerver.NewServer(git.NewFactory(), newCache(), tlsConfigCustomizer)
errors.CheckError(err)
grpc := server.CreateGRPC()
listener, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
errors.CheckError(err)
@@ -57,6 +63,7 @@ func newCommand() *cobra.Command {
}

command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
return &command
}

@@ -17,18 +17,20 @@ import (
"github.com/argoproj/argo-cd/server"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/stats"
"github.com/argoproj/argo-cd/util/tls"
)

// NewCommand returns a new instance of an argocd command
func NewCommand() *cobra.Command {
var (
insecure bool
logLevel string
glogLevel int
clientConfig clientcmd.ClientConfig
staticAssetsDir string
repoServerAddress string
disableAuth bool
insecure bool
logLevel string
glogLevel int
clientConfig clientcmd.ClientConfig
staticAssetsDir string
repoServerAddress string
disableAuth bool
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
)
var command = &cobra.Command{
Use: cliName,
@@ -50,18 +52,22 @@ func NewCommand() *cobra.Command {
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)

tlsConfigCustomizer, err := tlsConfigCustomizerSrc()
errors.CheckError(err)

kubeclientset := kubernetes.NewForConfigOrDie(config)
appclientset := appclientset.NewForConfigOrDie(config)
repoclientset := reposerver.NewRepositoryServerClientset(repoServerAddress)

argoCDOpts := server.ArgoCDServerOpts{
Insecure: insecure,
Namespace: namespace,
StaticAssetsDir: staticAssetsDir,
KubeClientset: kubeclientset,
AppClientset: appclientset,
RepoClientset: repoclientset,
DisableAuth: disableAuth,
Insecure: insecure,
Namespace: namespace,
StaticAssetsDir: staticAssetsDir,
KubeClientset: kubeclientset,
AppClientset: appclientset,
RepoClientset: repoclientset,
DisableAuth: disableAuth,
TLSConfigCustomizer: tlsConfigCustomizer,
}

stats.RegisterStackDumper()
@@ -86,5 +92,6 @@ func NewCommand() *cobra.Command {
command.Flags().StringVar(&repoServerAddress, "repo-server", "localhost:8081", "Repo server address.")
command.Flags().BoolVar(&disableAuth, "disable-auth", false, "Disable client authentication")
command.AddCommand(cli.NewVersionCmd(cliName))
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(command)
return command
}

@@ -16,6 +16,7 @@ import (
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/dex"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/settings"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
@@ -59,6 +60,7 @@ func NewCommand() *cobra.Command {
command.AddCommand(NewImportCommand())
command.AddCommand(NewExportCommand())
command.AddCommand(NewSettingsCommand())
command.AddCommand(NewClusterConfig())

command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
return command
@@ -366,7 +368,8 @@ func NewSettingsCommand() *cobra.Command {
errors.CheckError(err)
settingsMgr := settings.NewSettingsManager(kubeclientset, namespace)

_ = settings.UpdateSettings(superuserPassword, settingsMgr, updateSignature, updateSuperuser, namespace)
_, err = settings.UpdateSettings(superuserPassword, settingsMgr, updateSignature, updateSuperuser, namespace)
errors.CheckError(err)
},
}
command.Flags().BoolVar(&updateSuperuser, "update-superuser", false, "force updating the superuser password")
@@ -376,6 +379,42 @@ func NewSettingsCommand() *cobra.Command {
return command
}

// NewClusterConfig returns a new instance of `argocd-util cluster-kubeconfig` command
func NewClusterConfig() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
)
var command = &cobra.Command{
Use: "cluster-kubeconfig CLUSTER_URL OUTPUT_PATH",
Short: "Generates kubeconfig for the specified cluster",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
serverUrl := args[0]
output := args[1]
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, wasSpecified, err := clientConfig.Namespace()
errors.CheckError(err)
if !(wasSpecified) {
namespace = "argocd"
}

kubeclientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)

cluster, err := db.NewDB(namespace, kubeclientset).GetCluster(context.Background(), serverUrl)
errors.CheckError(err)
err = kube.WriteKubeConfig(cluster.RESTConfig(), namespace, output)
errors.CheckError(err)
},
}
clientConfig = cli.AddKubectlFlagsToCmd(command)
return command
}

func main() {
if err := NewCommand().Execute(); err != nil {
fmt.Println(err)

@@ -50,7 +50,9 @@ func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *co
fmt.Print("\n")
}
if newPassword == "" {
newPassword = settings.ReadAndConfirmPassword()
var err error
newPassword, err = settings.ReadAndConfirmPassword()
errors.CheckError(err)
}

updatePasswordRequest := account.UpdatePasswordRequest{

@@ -119,6 +119,18 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
if len(appOpts.valuesFiles) > 0 {
app.Spec.Source.ValuesFiles = appOpts.valuesFiles
}
switch appOpts.syncPolicy {
case "automated":
app.Spec.SyncPolicy = &argoappv1.SyncPolicy{
Automated: &argoappv1.SyncPolicyAutomated{
Prune: appOpts.autoPrune,
},
}
case "none", "":
app.Spec.SyncPolicy = nil
default:
log.Fatalf("Invalid sync-policy: %s", appOpts.syncPolicy)
}
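The new `--sync-policy` handling above maps a flag string onto the application's sync policy. That mapping can be sketched as a standalone function (types simplified from the `argoappv1` structs in the diff; `syncPolicyFromFlag` is a hypothetical helper for illustration, not a function in the repo):

```go
package main

import "fmt"

// Simplified stand-ins for the argoappv1 types referenced in the diff.
type SyncPolicyAutomated struct{ Prune bool }
type SyncPolicy struct{ Automated *SyncPolicyAutomated }

// syncPolicyFromFlag mirrors the switch in the hunk above: "automated"
// enables automatic sync (optionally with pruning), "" or "none" leaves
// the policy unset, and anything else is rejected.
func syncPolicyFromFlag(flag string, autoPrune bool) (*SyncPolicy, error) {
	switch flag {
	case "automated":
		return &SyncPolicy{Automated: &SyncPolicyAutomated{Prune: autoPrune}}, nil
	case "none", "":
		return nil, nil
	default:
		return nil, fmt.Errorf("invalid sync-policy: %s", flag)
	}
}

func main() {
	p, _ := syncPolicyFromFlag("automated", true)
	fmt.Println(p.Automated.Prune) // true

	_, err := syncPolicyFromFlag("bogus", false)
	fmt.Println(err) // invalid sync-policy: bogus
}
```

Returning an error instead of calling `log.Fatalf` keeps the sketch testable; the CLI itself exits immediately on an invalid value.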
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
|
||||
defer util.Close(conn)
|
||||
appCreateRequest := application.ApplicationCreateRequest{
|
||||
@@ -182,6 +194,16 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
|
||||
if len(app.Spec.Source.ValuesFiles) > 0 {
|
||||
fmt.Printf(printOpFmtStr, "Helm Values:", strings.Join(app.Spec.Source.ValuesFiles, ","))
|
||||
}
|
||||
var syncPolicy string
|
||||
if app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.Automated != nil {
|
||||
syncPolicy = "Automated"
|
||||
if app.Spec.SyncPolicy.Automated.Prune {
|
||||
syncPolicy += " (Prune)"
|
||||
}
|
||||
} else {
|
||||
syncPolicy = "<none>"
|
||||
}
|
||||
fmt.Printf(printOpFmtStr, "Sync Policy:", syncPolicy)
|
||||
|
||||
if len(app.Status.Conditions) > 0 {
|
||||
fmt.Println()
|
||||
@@ -313,6 +335,17 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
|
||||
app.Spec.Destination.Namespace = appOpts.destNamespace
|
||||
case "project":
|
||||
app.Spec.Project = appOpts.project
|
||||
case "sync-policy":
|
||||
switch appOpts.syncPolicy {
|
||||
case "automated":
|
||||
app.Spec.SyncPolicy = &argoappv1.SyncPolicy{
|
||||
Automated: &argoappv1.SyncPolicyAutomated{},
|
||||
}
|
||||
case "none":
|
||||
app.Spec.SyncPolicy = nil
|
||||
default:
|
||||
log.Fatalf("Invalid sync-policy: %s", appOpts.syncPolicy)
|
||||
}
|
||||
}
|
||||
})
|
||||
if visited == 0 {
|
||||
@@ -320,6 +353,13 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
|
||||
c.HelpFunc()(c, args)
|
||||
os.Exit(1)
|
||||
}
|
||||
if c.Flags().Changed("auto-prune") {
|
||||
if app.Spec.SyncPolicy == nil || app.Spec.SyncPolicy.Automated == nil {
|
||||
log.Fatal("Cannot set --auto-prune: application not configured with automatic sync")
|
||||
}
|
||||
app.Spec.SyncPolicy.Automated.Prune = appOpts.autoPrune
|
||||
}
|
||||
|
||||
setParameterOverrides(app, appOpts.parameters)
|
||||
oldOverrides := app.Spec.Source.ComponentParameterOverrides
|
||||
updatedSpec, err := appIf.UpdateSpec(context.Background(), &application.ApplicationUpdateSpecRequest{
|
||||
@@ -358,6 +398,8 @@ type appOptions struct {
|
||||
parameters []string
|
||||
valuesFiles []string
|
||||
project string
|
||||
syncPolicy string
|
||||
autoPrune bool
|
||||
}
|
||||
|
||||
func addAppFlags(command *cobra.Command, opts *appOptions) {
|
||||
@@ -370,6 +412,8 @@ func addAppFlags(command *cobra.Command, opts *appOptions) {
|
||||
command.Flags().StringArrayVarP(&opts.parameters, "parameter", "p", []string{}, "set a parameter override (e.g. -p guestbook=image=example/guestbook:latest)")
|
||||
command.Flags().StringArrayVar(&opts.valuesFiles, "values", []string{}, "Helm values file(s) to use")
|
||||
command.Flags().StringVar(&opts.project, "project", "", "Application project name")
|
||||
command.Flags().StringVar(&opts.syncPolicy, "sync-policy", "", "Set the sync policy (one of: automated, none)")
|
||||
command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning when sync is automated")
|
||||
}
|
||||
|
||||
// NewApplicationUnsetCommand returns a new instance of an `argocd app unset` command
|
||||
@@ -658,7 +702,7 @@ func NewApplicationWaitCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)

_, err := waitOnApplicationStatus(appIf, appName, timeout, watchSync, watchHealth, watchOperations)
_, err := waitOnApplicationStatus(appIf, appName, timeout, watchSync, watchHealth, watchOperations, nil)
errors.CheckError(err)
},
}
@@ -734,8 +778,6 @@ func printAppResources(w io.Writer, app *argoappv1.Application, showOperation bo
if opState != nil {
if opState.SyncResult != nil {
syncRes = opState.SyncResult
} else if opState.RollbackResult != nil {
syncRes = opState.RollbackResult
}
}
if syncRes != nil {
@@ -779,12 +821,17 @@ func printAppResources(w io.Writer, app *argoappv1.Application, showOperation bo
// NewApplicationSyncCommand returns a new instance of an `argocd app sync` command
func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
revision string
prune bool
dryRun bool
timeout uint
strategy string
force bool
revision string
resources *[]string
prune bool
dryRun bool
timeout uint
strategy string
force bool
)
const (
resourceFieldDelimiter = ":"
resourceFieldCount = 3
)
var command = &cobra.Command{
Use: "sync APPNAME",
@@ -797,11 +844,28 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
appName := args[0]
var syncResources []argoappv1.SyncOperationResource
if resources != nil {
syncResources = []argoappv1.SyncOperationResource{}
for _, r := range *resources {
fields := strings.Split(r, resourceFieldDelimiter)
if len(fields) != resourceFieldCount {
log.Fatalf("Resource should have GROUP%sKIND%sNAME, but instead got: %s", resourceFieldDelimiter, resourceFieldDelimiter, r)
}
rsrc := argoappv1.SyncOperationResource{
Group: fields[0],
Kind: fields[1],
Name: fields[2],
}
syncResources = append(syncResources, rsrc)
}
}
syncReq := application.ApplicationSyncRequest{
Name: &appName,
DryRun: dryRun,
Revision: revision,
Prune: prune,
Name: &appName,
DryRun: dryRun,
Revision: revision,
Resources: syncResources,
Prune: prune,
}
switch strategy {
case "apply":
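The sync command above splits each `--resource` argument on the `:` delimiter and requires exactly three fields (GROUP:KIND:NAME), aborting otherwise. A minimal standalone sketch of that parsing, assuming a hypothetical `parseSyncResource` helper that is not part of the diff:

```go
package main

import (
	"fmt"
	"strings"
)

// parseSyncResource splits a --resource argument of the form GROUP:KIND:NAME.
// Fields may be blank, e.g. ":Service:guestbook-ui" for the core API group,
// mirroring the validation in the sync command above.
func parseSyncResource(r string) ([3]string, error) {
	fields := strings.Split(r, ":")
	if len(fields) != 3 {
		return [3]string{}, fmt.Errorf("resource should have GROUP:KIND:NAME, but instead got: %s", r)
	}
	return [3]string{fields[0], fields[1], fields[2]}, nil
}

func main() {
	res, err := parseSyncResource("apps:Deployment:guestbook")
	if err != nil {
		panic(err)
	}
	fmt.Println(res[0], res[1], res[2]) // apps Deployment guestbook
}
```

The real command builds an `argoappv1.SyncOperationResource` from the three fields instead of returning them directly; the split-and-count check is the same.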
@@ -817,7 +881,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
_, err := appIf.Sync(ctx, &syncReq)
errors.CheckError(err)

app, err := waitOnApplicationStatus(appIf, appName, timeout, false, false, true)
app, err := waitOnApplicationStatus(appIf, appName, timeout, false, false, true, syncResources)
errors.CheckError(err)

pruningRequired := 0
@@ -838,6 +902,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().BoolVar(&dryRun, "dry-run", false, "Preview apply without affecting cluster")
command.Flags().BoolVar(&prune, "prune", false, "Allow deleting unexpected resources")
command.Flags().StringVar(&revision, "revision", "", "Sync to a specific revision. Preserves parameter overrides")
resources = command.Flags().StringArray("resource", nil, fmt.Sprintf("Sync only specific resources as GROUP%sKIND%sNAME. Fields may be blank. This option may be specified repeatedly", resourceFieldDelimiter, resourceFieldDelimiter))
command.Flags().UintVar(&timeout, "timeout", defaultCheckTimeoutSeconds, "Time out after this many seconds")
command.Flags().StringVar(&strategy, "strategy", "", "Sync strategy (one of: apply|hook)")
command.Flags().BoolVar(&force, "force", false, "Use a force apply")
@@ -892,7 +957,7 @@ func (rs *resourceState) Merge(newState *resourceState) bool {
return updated
}

func calculateResourceStates(app *argoappv1.Application) map[string]*resourceState {
func calculateResourceStates(app *argoappv1.Application, syncResources []argoappv1.SyncOperationResource) map[string]*resourceState {
resStates := make(map[string]*resourceState)
for _, res := range app.Status.ComparisonResult.Resources {
obj, err := argoappv1.UnmarshalToUnstructured(res.TargetState)
@@ -901,6 +966,9 @@ func calculateResourceStates(app *argoappv1.Application) map[string]*resourceSta
obj, err = argoappv1.UnmarshalToUnstructured(res.LiveState)
errors.CheckError(err)
}
if syncResources != nil && !argo.ContainsSyncResource(obj, syncResources) {
continue
}
newState := newResourceState(obj.GetKind(), obj.GetName(), string(res.Status), res.Health.Status, "", "")
key := newState.Key()
if prev, ok := resStates[key]; ok {
@@ -911,10 +979,10 @@ func calculateResourceStates(app *argoappv1.Application) map[string]*resourceSta
}

var opResult *argoappv1.SyncOperationResult
if app.Status.OperationState.SyncResult != nil {
opResult = app.Status.OperationState.SyncResult
} else if app.Status.OperationState.RollbackResult != nil {
opResult = app.Status.OperationState.SyncResult
if app.Status.OperationState != nil {
if app.Status.OperationState.SyncResult != nil {
opResult = app.Status.OperationState.SyncResult
}
}
if opResult == nil {
return resStates
@@ -942,7 +1010,7 @@ func calculateResourceStates(app *argoappv1.Application) map[string]*resourceSta
return resStates
}

func waitOnApplicationStatus(appClient application.ApplicationServiceClient, appName string, timeout uint, watchSync, watchHealth, watchOperation bool) (*argoappv1.Application, error) {
func waitOnApplicationStatus(appClient application.ApplicationServiceClient, appName string, timeout uint, watchSync bool, watchHealth bool, watchOperation bool, syncResources []argoappv1.SyncOperationResource) (*argoappv1.Application, error) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

@@ -999,7 +1067,7 @@ func waitOnApplicationStatus(appClient application.ApplicationServiceClient, app
return app, nil
}

newStates := calculateResourceStates(app)
newStates := calculateResourceStates(app, syncResources)
for _, newState := range newStates {
var doPrint bool
stateKey := newState.Key()
@@ -1100,7 +1168,9 @@ func NewApplicationHistoryCommand(clientOpts *argocdclient.ClientOptions) *cobra
for _, depInfo := range app.Status.History {
switch output {
case "wide":
paramStr := paramString(depInfo.Params)
manifest, err := appIf.GetManifests(context.Background(), &application.ApplicationManifestQuery{Name: &appName, Revision: depInfo.Revision})
errors.CheckError(err)
paramStr := paramString(manifest.GetParams())
fmt.Fprintf(w, "%d\t%s\t%s\t%s\n", depInfo.ID, depInfo.DeployedAt, depInfo.Revision, paramStr)
default:
fmt.Fprintf(w, "%d\t%s\t%s\n", depInfo.ID, depInfo.DeployedAt, depInfo.Revision)
@@ -1113,7 +1183,7 @@ func NewApplicationHistoryCommand(clientOpts *argocdclient.ClientOptions) *cobra
return command
}

func paramString(params []argoappv1.ComponentParameter) string {
func paramString(params []*argoappv1.ComponentParameter) string {
if len(params) == 0 {
return ""
}
@@ -1164,7 +1234,7 @@ func NewApplicationRollbackCommand(clientOpts *argocdclient.ClientOptions) *cobr
})
errors.CheckError(err)

_, err = waitOnApplicationStatus(appIf, appName, timeout, false, false, true)
_, err = waitOnApplicationStatus(appIf, appName, timeout, false, false, true, nil)
errors.CheckError(err)
},
}
@@ -1179,8 +1249,6 @@ const defaultCheckTimeoutSeconds = 0
func printOperationResult(opState *argoappv1.OperationState) {
if opState.SyncResult != nil {
fmt.Printf(printOpFmtStr, "Operation:", "Sync")
} else if opState.RollbackResult != nil {
fmt.Printf(printOpFmtStr, "Operation:", "Rollback")
}
fmt.Printf(printOpFmtStr, "Phase:", opState.Phase)
fmt.Printf(printOpFmtStr, "Start:", opState.StartedAt)

@@ -44,8 +44,10 @@ func NewClusterCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clientc
// NewClusterAddCommand returns a new instance of an `argocd cluster add` command
func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clientcmd.PathOptions) *cobra.Command {
var (
inCluster bool
upsert bool
inCluster bool
upsert bool
awsRoleArn string
awsClusterName string
)
var command = &cobra.Command{
Use: "add",
@@ -63,6 +65,7 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
if clstContext == nil {
log.Fatalf("Context %s does not exist in kubeconfig", args[0])
}

overrides := clientcmd.ConfigOverrides{
Context: *clstContext,
}
@@ -70,15 +73,23 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)

// Install RBAC resources for managing the cluster
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
managerBearerToken, err := common.InstallClusterManagerRBAC(clientset)
errors.CheckError(err)

managerBearerToken := ""
var awsAuthConf *argoappv1.AWSAuthConfig
if awsClusterName != "" {
awsAuthConf = &argoappv1.AWSAuthConfig{
ClusterName: awsClusterName,
RoleARN: awsRoleArn,
}
} else {
// Install RBAC resources for managing the cluster
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
managerBearerToken, err = common.InstallClusterManagerRBAC(clientset)
errors.CheckError(err)
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
clst := NewCluster(args[0], conf, managerBearerToken)
clst := NewCluster(args[0], conf, managerBearerToken, awsAuthConf)
if inCluster {
clst.Server = common.KubernetesInternalAPIServerAddr
}
@@ -94,6 +105,8 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
command.PersistentFlags().StringVar(&pathOpts.LoadingRules.ExplicitPath, pathOpts.ExplicitFileFlag, pathOpts.LoadingRules.ExplicitPath, "use a particular kubeconfig file")
command.Flags().BoolVar(&inCluster, "in-cluster", false, "Indicates ArgoCD resides inside this cluster and should connect using the internal k8s hostname (kubernetes.default.svc)")
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing cluster with the same name even if the spec differs")
command.Flags().StringVar(&awsClusterName, "aws-cluster-name", "", "AWS Cluster name if set then aws-iam-authenticator will be used to access cluster")
command.Flags().StringVar(&awsRoleArn, "aws-role-arn", "", "Optional AWS role arn. If set then AWS IAM Authenticator assume a role to perform cluster operations instead of the default AWS credential provider chain.")
return command
}

@@ -136,7 +149,7 @@ func printKubeContexts(ca clientcmd.ConfigAccess) {
}
}

func NewCluster(name string, conf *rest.Config, managerBearerToken string) *argoappv1.Cluster {
func NewCluster(name string, conf *rest.Config, managerBearerToken string, awsAuthConf *argoappv1.AWSAuthConfig) *argoappv1.Cluster {
tlsClientConfig := argoappv1.TLSClientConfig{
Insecure: conf.TLSClientConfig.Insecure,
ServerName: conf.TLSClientConfig.ServerName,
@@ -165,6 +178,7 @@ func NewCluster(name string, conf *rest.Config, managerBearerToken string) *argo
Config: argoappv1.ClusterConfig{
BearerToken: managerBearerToken,
TLSClientConfig: tlsClientConfig,
AWSAuthConfig: awsAuthConf,
},
}
return &clst

@@ -76,6 +76,10 @@ func NewProjectCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
command.AddCommand(NewProjectRemoveDestinationCommand(clientOpts))
command.AddCommand(NewProjectAddSourceCommand(clientOpts))
command.AddCommand(NewProjectRemoveSourceCommand(clientOpts))
command.AddCommand(NewProjectAllowClusterResourceCommand(clientOpts))
command.AddCommand(NewProjectDenyClusterResourceCommand(clientOpts))
command.AddCommand(NewProjectAllowNamespaceResourceCommand(clientOpts))
command.AddCommand(NewProjectDenyNamespaceResourceCommand(clientOpts))
return command
}

@@ -603,6 +607,104 @@ func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
return command
}

func modifyProjectResourceCmd(cmdUse, cmdDesc string, clientOpts *argocdclient.ClientOptions, action func(proj *v1alpha1.AppProject, group string, kind string) bool) *cobra.Command {
return &cobra.Command{
Use: cmdUse,
Short: cmdDesc,
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName, group, kind := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)

proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)

if action(proj, group, kind) {
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
}
}

// NewProjectAllowNamespaceResourceCommand returns a new instance of an `deny-cluster-resources` command
func NewProjectAllowNamespaceResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "allow-namespace-resource PROJECT GROUP KIND"
desc := "Removes a namespaced API resource from the blacklist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
index := -1
for i, item := range proj.Spec.NamespaceResourceBlacklist {
if item.Group == group && item.Kind == kind {
index = i
break
}
}
if index == -1 {
log.Info("Specified cluster resource is not blacklisted")
return false
}
proj.Spec.NamespaceResourceBlacklist = append(proj.Spec.NamespaceResourceBlacklist[:index], proj.Spec.NamespaceResourceBlacklist[index+1:]...)
return true
})
}

// NewProjectDenyNamespaceResourceCommand returns a new instance of an `argocd proj deny-namespace-resource` command
func NewProjectDenyNamespaceResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "deny-namespace-resource PROJECT GROUP KIND"
desc := "Adds a namespaced API resource to the blacklist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
for _, item := range proj.Spec.NamespaceResourceBlacklist {
if item.Group == group && item.Kind == kind {
log.Infof("Group '%s' and kind '%s' are already blacklisted in project", item.Group, item.Kind)
return false
}
}
proj.Spec.NamespaceResourceBlacklist = append(proj.Spec.NamespaceResourceBlacklist, v1.GroupKind{Group: group, Kind: kind})
return true
})
}

// NewProjectDenyClusterResourceCommand returns a new instance of an `deny-cluster-resource` command
func NewProjectDenyClusterResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "deny-cluster-resource PROJECT GROUP KIND"
desc := "Removes a cluster-scoped API resource from the whitelist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
index := -1
for i, item := range proj.Spec.ClusterResourceWhitelist {
if item.Group == group && item.Kind == kind {
index = i
break
}
}
if index == -1 {
log.Info("Specified cluster resource already denied in project")
return false
}
proj.Spec.ClusterResourceWhitelist = append(proj.Spec.ClusterResourceWhitelist[:index], proj.Spec.ClusterResourceWhitelist[index+1:]...)
return true
})
}

// NewProjectAllowClusterResourceCommand returns a new instance of an `argocd proj allow-cluster-resource` command
func NewProjectAllowClusterResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "allow-cluster-resource PROJECT GROUP KIND"
desc := "Adds a cluster-scoped API resource to the whitelist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
for _, item := range proj.Spec.ClusterResourceWhitelist {
if item.Group == group && item.Kind == kind {
log.Infof("Group '%s' and kind '%s' are already whitelisted in project", item.Group, item.Kind)
return false
}
}
proj.Spec.ClusterResourceWhitelist = append(proj.Spec.ClusterResourceWhitelist, v1.GroupKind{Group: group, Kind: kind})
return true
})
}

// NewProjectRemoveSourceCommand returns a new instance of an `argocd proj remove-src` command
func NewProjectRemoveSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
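The allow/deny commands above remove a matching group/kind entry from a slice with the idiomatic `append(s[:i], s[i+1:]...)` trick. A self-contained sketch of that removal logic, using a local `GroupKind` struct and a hypothetical `removeGroupKind` helper (the real code operates directly on the project spec fields):

```go
package main

import "fmt"

// GroupKind mirrors the group/kind pair stored in the project spec lists
// (local copy so this sketch is self-contained).
type GroupKind struct {
	Group, Kind string
}

// removeGroupKind deletes the first matching entry, the same way the
// allow-namespace-resource command trims the blacklist. It returns the
// possibly shortened slice and whether anything was removed.
func removeGroupKind(list []GroupKind, group, kind string) ([]GroupKind, bool) {
	for i, item := range list {
		if item.Group == group && item.Kind == kind {
			return append(list[:i], list[i+1:]...), true
		}
	}
	return list, false
}

func main() {
	blacklist := []GroupKind{{"", "ConfigMap"}, {"apps", "Deployment"}}
	blacklist, removed := removeGroupKind(blacklist, "apps", "Deployment")
	fmt.Println(removed, len(blacklist)) // true 1
}
```

As in the diff, a no-op (no matching entry) is reported back so the caller can skip the project update RPC.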
@@ -677,9 +779,9 @@ func NewProjectListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
projects, err := projIf.List(context.Background(), &project.ProjectQuery{})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\n")
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\n")
for _, p := range projects.Items {
fmt.Fprintf(w, "%s\t%s\t%v\n", p.Name, p.Spec.Description, p.Spec.Destinations)
fmt.Fprintf(w, "%s\t%s\t%v\t%v\t%v\t%v\n", p.Name, p.Spec.Description, p.Spec.Destinations, p.Spec.SourceRepos, p.Spec.ClusterResourceWhitelist, p.Spec.NamespaceResourceBlacklist)
}
_ = w.Flush()
},

@@ -87,7 +87,7 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
command.Flags().StringVar(&repo.Username, "username", "", "username to the repository")
command.Flags().StringVar(&repo.Password, "password", "", "password to the repository")
command.Flags().StringVar(&sshPrivateKeyPath, "sshPrivateKeyPath", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
command.Flags().StringVar(&sshPrivateKeyPath, "ssh-private-key-path", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing repository with the same name even if the spec differs")
return command
}

@@ -102,4 +102,8 @@ var ArgoCDManagerPolicyRules = []rbacv1.PolicyRule{
Resources: []string{"*"},
Verbs: []string{"*"},
},
{
NonResourceURLs: []string{"*"},
Verbs: []string{"*"},
},
}

@@ -14,9 +14,7 @@ import (
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/selection"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/strategicpatch"
@@ -46,6 +44,7 @@ const (
type ApplicationController struct {
namespace string
kubeClientset kubernetes.Interface
kubectl kube.Kubectl
applicationClientset appclientset.Interface
auditLogger *argo.AuditLogger
appRefreshQueue workqueue.RateLimitingInterface
@@ -70,28 +69,28 @@ func NewApplicationController(
kubeClientset kubernetes.Interface,
applicationClientset appclientset.Interface,
repoClientset reposerver.Clientset,
db db.ArgoDB,
appStateManager AppStateManager,
appResyncPeriod time.Duration,
config *ApplicationControllerConfig,
) *ApplicationController {
appRefreshQueue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
appOperationQueue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
return &ApplicationController{
db := db.NewDB(namespace, kubeClientset)
kubectlCmd := kube.KubectlCmd{}
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectlCmd)
ctrl := ApplicationController{
namespace: namespace,
kubeClientset: kubeClientset,
kubectl: kubectlCmd,
applicationClientset: applicationClientset,
repoClientset: repoClientset,
appRefreshQueue: appRefreshQueue,
appOperationQueue: appOperationQueue,
appRefreshQueue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
appOperationQueue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
appStateManager: appStateManager,
appInformer: newApplicationInformer(applicationClientset, appRefreshQueue, appOperationQueue, appResyncPeriod, config),
db: db,
statusRefreshTimeout: appResyncPeriod,
forceRefreshApps: make(map[string]bool),
forceRefreshAppsMutex: &sync.Mutex{},
auditLogger: argo.NewAuditLogger(namespace, kubeClientset, "application-controller"),
}
ctrl.appInformer = ctrl.newApplicationInformer()
return &ctrl
}

// Run starts the Application CRD controller.
@@ -100,13 +99,14 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
defer ctrl.appRefreshQueue.ShutDown()

go ctrl.appInformer.Run(ctx.Done())
go ctrl.watchAppsResources()

if !cache.WaitForCacheSync(ctx.Done(), ctrl.appInformer.HasSynced) {
log.Error("Timed out waiting for caches to sync")
return
}

go ctrl.watchAppsResources()

for i := 0; i < statusProcessors; i++ {
go wait.Until(func() {
for ctrl.processAppRefreshQueueItem() {
@@ -142,14 +142,35 @@ func (ctrl *ApplicationController) isRefreshForced(appName string) bool {

// watchClusterResources watches for resource changes annotated with application label on specified cluster and schedule corresponding app refresh.
func (ctrl *ApplicationController) watchClusterResources(ctx context.Context, item appv1.Cluster) {
config := item.RESTConfig()
retryUntilSucceed(func() error {
ch, err := kube.WatchResourcesWithLabel(ctx, config, "", common.LabelApplicationName)
retryUntilSucceed(func() (err error) {
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("Recovered from panic: %v\n", r)
}
}()
config := item.RESTConfig()
watchStartTime := time.Now()
ch, err := ctrl.kubectl.WatchResources(ctx, config, "", func(gvk schema.GroupVersionKind) metav1.ListOptions {
ops := metav1.ListOptions{}
if !kube.IsCRDGroupVersionKind(gvk) {
ops.LabelSelector = common.LabelApplicationName
}
return ops
})

if err != nil {
return err
}
for event := range ch {
eventObj := event.Object.(*unstructured.Unstructured)
if kube.IsCRD(eventObj) {
// restart if new CRD has been created after watch started
if event.Type == watch.Added && watchStartTime.Before(eventObj.GetCreationTimestamp().Time) {
return fmt.Errorf("Restarting the watch because a new CRD was added.")
} else if event.Type == watch.Deleted {
return fmt.Errorf("Restarting the watch because a CRD was deleted.")
}
}
objLabels := eventObj.GetLabels()
if objLabels == nil {
objLabels = make(map[string]string)
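The controller change above gives the retry body a named error return and a deferred `recover()`, so a panic inside the watch loop becomes an ordinary error that the retry wrapper can handle. A minimal sketch of that recover-into-named-return pattern, with a hypothetical `runRecovered` helper standing in for the retry body:

```go
package main

import "fmt"

// runRecovered converts a panic inside fn into an ordinary error via a
// named return value, the same trick the watchClusterResources retry
// body uses so that retryUntilSucceed can simply retry on error.
func runRecovered(fn func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered from panic: %v", r)
		}
	}()
	fn()
	return nil
}

func main() {
	err := runRecovered(func() { panic("boom") })
	fmt.Println(err) // recovered from panic: boom
}
```

The named return is essential: a deferred function can only influence the caller's result by assigning to a named return value, which is why the diff rewrites the closure signature from `func() error` to `func() (err error)`.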
@@ -160,26 +181,68 @@ func (ctrl *ApplicationController) watchClusterResources(ctx context.Context, it
|
||||
}
|
||||
}
|
||||
return fmt.Errorf("resource updates channel has closed")
|
||||
}, fmt.Sprintf("watch app resources on %s", config.Host), ctx, watchResourcesRetryTimeout)
|
||||
}, fmt.Sprintf("watch app resources on %s", item.Server), ctx, watchResourcesRetryTimeout)
|
||||
|
||||
}
|
||||
|
||||
func isClusterHasApps(apps []interface{}, cluster *appv1.Cluster) bool {
|
||||
for _, obj := range apps {
|
||||
if app, ok := obj.(*appv1.Application); ok && app.Spec.Destination.Server == cluster.Server {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// WatchAppsResources watches for resource changes annotated with application label on all registered clusters and schedule corresponding app refresh.
|
||||
func (ctrl *ApplicationController) watchAppsResources() {
|
||||
watchingClusters := make(map[string]context.CancelFunc)
|
||||
watchingClusters := make(map[string]struct {
|
||||
cancel context.CancelFunc
|
||||
cluster *appv1.Cluster
|
||||
})
|
||||
|
||||
retryUntilSucceed(func() error {
|
||||
return ctrl.db.WatchClusters(context.Background(), func(event *db.ClusterEvent) {
|
||||
cancel, ok := watchingClusters[event.Cluster.Server]
|
||||
if event.Type == watch.Deleted && ok {
|
||||
cancel()
|
||||
clusterEventCallback := func(event *db.ClusterEvent) {
|
||||
info, ok := watchingClusters[event.Cluster.Server]
|
||||
hasApps := isClusterHasApps(ctrl.appInformer.GetStore().List(), event.Cluster)
|
||||
|
||||
// cluster resources must be watched only if cluster has at least one app
|
||||
if (event.Type == watch.Deleted || !hasApps) && ok {
|
||||
info.cancel()
|
||||
delete(watchingClusters, event.Cluster.Server)
|
||||
} else if event.Type != watch.Deleted && !ok {
|
||||
} else if event.Type != watch.Deleted && !ok && hasApps {
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
watchingClusters[event.Cluster.Server] = cancel
|
||||
watchingClusters[event.Cluster.Server] = struct {
|
||||
cancel context.CancelFunc
|
||||
cluster *appv1.Cluster
|
||||
}{
|
||||
cancel: cancel,
|
||||
cluster: event.Cluster,
|
||||
}
|
||||
go ctrl.watchClusterResources(ctx, *event.Cluster)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
onAppModified := func(obj interface{}) {
|
||||
if app, ok := obj.(*appv1.Application); ok {
|
||||
var cluster *appv1.Cluster
|
||||
info, infoOk := watchingClusters[app.Spec.Destination.Server]
|
||||
if infoOk {
|
||||
cluster = info.cluster
|
||||
} else {
|
||||
cluster, _ = ctrl.db.GetCluster(context.Background(), app.Spec.Destination.Server)
|
||||
}
|
||||
if cluster != nil {
|
||||
// trigger cluster event every time when app created/deleted to either start or stop watching resources
|
||||
clusterEventCallback(&db.ClusterEvent{Cluster: cluster, Type: watch.Modified})
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
ctrl.appInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{AddFunc: onAppModified, DeleteFunc: onAppModified})
|
||||
|
||||
return ctrl.db.WatchClusters(context.Background(), clusterEventCallback)
|
||||
|
||||
}, "watch clusters", context.Background(), watchResourcesRetryTimeout)
|
||||
|
||||
<-context.Background().Done()
|
||||
@@ -252,12 +315,13 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
|
||||
}
|
||||
|
||||
func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Application) {
|
||||
log.Infof("Deleting resources for application %s", app.Name)
|
||||
logCtx := log.WithField("application", app.Name)
|
||||
logCtx.Infof("Deleting resources")
|
||||
// Get refreshed application info, since informer app copy might be stale
|
||||
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(app.Name, metav1.GetOptions{})
|
||||
if err != nil {
|
||||
if !errors.IsNotFound(err) {
|
||||
log.Errorf("Unable to get refreshed application info prior deleting resources: %v", err)
|
||||
logCtx.Errorf("Unable to get refreshed application info prior deleting resources: %v", err)
|
||||
}
|
||||
return
|
||||
}
|
||||
@@ -266,7 +330,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
|
||||
|
||||
if err == nil {
|
||||
config := clst.RESTConfig()
|
||||
err = kube.DeleteResourceWithLabel(config, app.Spec.Destination.Namespace, common.LabelApplicationName, app.Name)
|
||||
err = kube.DeleteResourcesWithLabel(config, app.Spec.Destination.Namespace, common.LabelApplicationName, app.Name)
|
||||
if err == nil {
|
||||
app.SetCascadedDeletion(false)
|
||||
var patch []byte
|
||||
@@ -281,14 +345,14 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
}
}
if err != nil {
- log.Errorf("Unable to delete application resources: %v", err)
ctrl.setAppCondition(app, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionDeletionError,
Message: err.Error(),
})
- ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonStatusRefreshed, Action: "refresh_status"}, v1.EventTypeWarning)
+ message := fmt.Sprintf("Unable to delete application resources: %v", err)
+ ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonStatusRefreshed, Type: v1.EventTypeWarning}, message)
} else {
- log.Infof("Successfully deleted resources for application %s", app.Name)
+ logCtx.Info("Successfully deleted resources")
}
}
@@ -320,11 +384,12 @@ func (ctrl *ApplicationController) setAppCondition(app *appv1.Application, condi
}

func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Application) {
+ logCtx := log.WithField("application", app.Name)
var state *appv1.OperationState
// Recover from any unexpected panics and automatically set the status to be failed
defer func() {
if r := recover(); r != nil {
- log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
+ logCtx.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
state.Phase = appv1.OperationError
if rerr, ok := r.(error); ok {
state.Message = rerr.Error()
@@ -341,20 +406,20 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
// again. To detect this, always retrieve the latest version to ensure it is not stale.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
if err != nil {
- log.Errorf("Failed to retrieve latest application state: %v", err)
+ logCtx.Errorf("Failed to retrieve latest application state: %v", err)
return
}
if !isOperationInProgress(freshApp) {
- log.Infof("Skipping operation on stale application state (%s)", app.ObjectMeta.Name)
+ logCtx.Infof("Skipping operation on stale application state")
return
}
app = freshApp
state = app.Status.OperationState.DeepCopy()
- log.Infof("Resuming in-progress operation. app: %s, phase: %s, message: %s", app.ObjectMeta.Name, state.Phase, state.Message)
+ logCtx.Infof("Resuming in-progress operation. phase: %s, message: %s", state.Phase, state.Message)
} else {
state = &appv1.OperationState{Phase: appv1.OperationRunning, Operation: *app.Operation, StartedAt: metav1.Now()}
ctrl.setOperationState(app, state)
- log.Infof("Initialized new operation. app: %s, operation: %v", app.ObjectMeta.Name, *app.Operation)
+ logCtx.Infof("Initialized new operation: %v", *app.Operation)
}
ctrl.appStateManager.SyncAppState(app, state)
@@ -400,7 +465,6 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
// If operation is completed, clear the operation field to indicate no operation is
// in progress.
patch["operation"] = nil
- ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Action: "refresh_status"}, v1.EventTypeNormal)
}
if reflect.DeepEqual(app.Status.OperationState, state) {
log.Infof("No operation updates necessary to '%s'. Skipping patch", app.Name)
@@ -416,6 +480,18 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
return err
}
log.Infof("updated '%s' operation (phase: %s)", app.Name, state.Phase)
+ if state.Phase.Completed() {
+ eventInfo := argo.EventInfo{Reason: argo.EventReasonOperationCompleted}
+ var message string
+ if state.Phase.Successful() {
+ eventInfo.Type = v1.EventTypeNormal
+ message = "Operation succeeded"
+ } else {
+ eventInfo.Type = v1.EventTypeWarning
+ message = fmt.Sprintf("Operation failed: %v", state.Message)
+ }
+ ctrl.auditLogger.LogAppEvent(app, eventInfo, message)
+ }
return nil
}, "Update application operation state", context.Background(), updateOperationStateTimeout)
}
@@ -475,10 +551,16 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
parameters = manifestInfo.Params
}

- healthState, err := setApplicationHealth(comparisonResult)
+ healthState, err := setApplicationHealth(ctrl.kubectl, comparisonResult)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{Type: appv1.ApplicationConditionComparisonError, Message: err.Error()})
}

+ syncErrCond := ctrl.autoSync(app, comparisonResult)
+ if syncErrCond != nil {
+ conditions = append(conditions, *syncErrCond)
+ }

ctrl.updateAppStatus(app, comparisonResult, healthState, parameters, conditions)
return
}
@@ -486,18 +568,20 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
// needRefreshAppStatus answers if application status needs to be refreshed.
// Returns true if application never been compared, has changed or comparison result has expired.
func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application, statusRefreshTimeout time.Duration) bool {
+ logCtx := log.WithFields(log.Fields{"application": app.Name})
var reason string
+ expired := app.Status.ComparisonResult.ComparedAt.Add(statusRefreshTimeout).Before(time.Now().UTC())
if ctrl.isRefreshForced(app.Name) {
reason = "force refresh"
- } else if app.Status.ComparisonResult.Status == appv1.ComparisonStatusUnknown {
+ } else if app.Status.ComparisonResult.Status == appv1.ComparisonStatusUnknown && expired {
reason = "comparison status unknown"
} else if !app.Spec.Source.Equals(app.Status.ComparisonResult.ComparedTo) {
reason = "spec.source differs"
- } else if app.Status.ComparisonResult.ComparedAt.Add(statusRefreshTimeout).Before(time.Now().UTC()) {
+ } else if expired {
reason = fmt.Sprintf("comparison expired. comparedAt: %v, expiry: %v", app.Status.ComparisonResult.ComparedAt, statusRefreshTimeout)
}
if reason != "" {
- log.Infof("Refreshing application '%s' status (%s)", app.Name, reason)
+ logCtx.Infof("Refreshing app status (%s)", reason)
return true
}
return false
@@ -536,6 +620,7 @@ func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application)
appv1.ApplicationConditionUnknownError: true,
appv1.ApplicationConditionComparisonError: true,
appv1.ApplicationConditionSharedResourceWarning: true,
+ appv1.ApplicationConditionSyncError: true,
}
appConditions := make([]appv1.ApplicationCondition, 0)
for i := 0; i < len(app.Status.Conditions); i++ {
@@ -557,7 +642,7 @@ func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application)
}

// setApplicationHealth updates the health statuses of all resources performed in the comparison
- func setApplicationHealth(comparisonResult *appv1.ComparisonResult) (*appv1.HealthStatus, error) {
+ func setApplicationHealth(kubectl kube.Kubectl, comparisonResult *appv1.ComparisonResult) (*appv1.HealthStatus, error) {
var savedErr error
appHealth := appv1.HealthStatus{Status: appv1.HealthStatusHealthy}
if comparisonResult.Status == appv1.ComparisonStatusUnknown {
@@ -572,7 +657,7 @@ func setApplicationHealth(comparisonResult *appv1.ComparisonResult) (*appv1.Heal
if err != nil {
return nil, err
}
- healthState, err := health.GetAppHealth(&obj)
+ healthState, err := health.GetAppHealth(kubectl, &obj)
if err != nil && savedErr == nil {
savedErr = err
}
@@ -594,12 +679,21 @@ func (ctrl *ApplicationController) updateAppStatus(
parameters []*appv1.ComponentParameter,
conditions []appv1.ApplicationCondition,
) {
+ logCtx := log.WithFields(log.Fields{"application": app.Name})
modifiedApp := app.DeepCopy()
if comparisonResult != nil {
modifiedApp.Status.ComparisonResult = *comparisonResult
- log.Infof("App %s comparison result: prev: %s. current: %s", app.Name, app.Status.ComparisonResult.Status, comparisonResult.Status)
+ if app.Status.ComparisonResult.Status != comparisonResult.Status {
+ message := fmt.Sprintf("Updated sync status: %s -> %s", app.Status.ComparisonResult.Status, comparisonResult.Status)
+ ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message)
+ }
+ logCtx.Infof("Comparison result: prev: %s. current: %s", app.Status.ComparisonResult.Status, comparisonResult.Status)
}
if healthState != nil {
+ if modifiedApp.Status.Health.Status != healthState.Status {
+ message := fmt.Sprintf("Updated health status: %s -> %s", modifiedApp.Status.Health.Status, healthState.Status)
+ ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message)
+ }
modifiedApp.Status.Health = *healthState
}
if parameters != nil {
@@ -613,59 +707,104 @@ func (ctrl *ApplicationController) updateAppStatus(
}
origBytes, err := json.Marshal(app)
if err != nil {
- log.Errorf("Error updating application %s (marshal orig app): %v", app.Name, err)
+ logCtx.Errorf("Error updating (marshal orig app): %v", err)
return
}
modifiedBytes, err := json.Marshal(modifiedApp)
if err != nil {
- log.Errorf("Error updating application %s (marshal modified app): %v", app.Name, err)
+ logCtx.Errorf("Error updating (marshal modified app): %v", err)
return
}
patch, err := strategicpatch.CreateTwoWayMergePatch(origBytes, modifiedBytes, appv1.Application{})
if err != nil {
- log.Errorf("Error calculating patch for app %s update: %v", app.Name, err)
+ logCtx.Errorf("Error calculating patch for update: %v", err)
return
}
if string(patch) == "{}" {
- log.Infof("No status changes to %s. Skipping patch", app.Name)
+ logCtx.Infof("No status changes. Skipping patch")
return
}
appClient := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace)
_, err = appClient.Patch(app.Name, types.MergePatchType, patch)
if err != nil {
- log.Warnf("Error updating application %s: %v", app.Name, err)
+ logCtx.Warnf("Error updating application: %v", err)
} else {
- log.Infof("Application %s update successful", app.Name)
+ logCtx.Infof("Update successful")
}
}
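The patch-then-skip flow above relies on `strategicpatch.CreateTwoWayMergePatch` returning `{}` when nothing changed. A simplified, top-level-only stand-in (not the real strategic-merge algorithm, which also handles nested objects, lists, and deletions) shows why the `string(patch) == "{}"` check is a cheap no-op detector:

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// mergePatch keeps only the top-level keys whose values changed between
// orig and modified, so identical documents yield an empty "{}" patch.
func mergePatch(orig, modified []byte) ([]byte, error) {
	var o, m map[string]interface{}
	if err := json.Unmarshal(orig, &o); err != nil {
		return nil, err
	}
	if err := json.Unmarshal(modified, &m); err != nil {
		return nil, err
	}
	patch := map[string]interface{}{}
	for k, v := range m {
		if !reflect.DeepEqual(o[k], v) {
			patch[k] = v
		}
	}
	return json.Marshal(patch)
}

func main() {
	patch, err := mergePatch([]byte(`{"health":"Healthy"}`), []byte(`{"health":"Healthy"}`))
	if err != nil {
		panic(err)
	}
	// An empty patch means the controller can skip the API call entirely.
	fmt.Println(string(patch) == "{}")
}
```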
- func newApplicationInformer(
- appClientset appclientset.Interface,
- appQueue workqueue.RateLimitingInterface,
- appOperationQueue workqueue.RateLimitingInterface,
- appResyncPeriod time.Duration,
- config *ApplicationControllerConfig) cache.SharedIndexInformer {
+ // autoSync will initiate a sync operation for an application configured with automated sync
+ func (ctrl *ApplicationController) autoSync(app *appv1.Application, comparisonResult *appv1.ComparisonResult) *appv1.ApplicationCondition {
+ if app.Spec.SyncPolicy == nil || app.Spec.SyncPolicy.Automated == nil {
+ return nil
+ }
+ logCtx := log.WithFields(log.Fields{"application": app.Name})
+ if app.Operation != nil {
+ logCtx.Infof("Skipping auto-sync: another operation is in progress")
+ return nil
+ }
+ // Only perform auto-sync if we detect OutOfSync status. This is to prevent us from attempting
+ // a sync when application is already in a Synced or Unknown state
+ if comparisonResult.Status != appv1.ComparisonStatusOutOfSync {
+ logCtx.Infof("Skipping auto-sync: application status is %s", comparisonResult.Status)
+ return nil
+ }
+ desiredCommitSHA := comparisonResult.Revision

- appInformerFactory := appinformers.NewFilteredSharedInformerFactory(
- appClientset,
- appResyncPeriod,
- config.Namespace,
- func(options *metav1.ListOptions) {
- var instanceIDReq *labels.Requirement
- var err error
- if config.InstanceID != "" {
- instanceIDReq, err = labels.NewRequirement(common.LabelKeyApplicationControllerInstanceID, selection.Equals, []string{config.InstanceID})
- } else {
- instanceIDReq, err = labels.NewRequirement(common.LabelKeyApplicationControllerInstanceID, selection.DoesNotExist, nil)
- }
- if err != nil {
- panic(err)
- }
+ // It is possible for manifests to remain OutOfSync even after a sync/kubectl apply (e.g.
+ // auto-sync with pruning disabled). We need to ensure that we do not keep Syncing an
+ // application in an infinite loop. To detect this, we only attempt the Sync if the revision
+ // and parameter overrides are different from our most recent sync operation.
+ if alreadyAttemptedSync(app, desiredCommitSHA) {
+ if app.Status.OperationState.Phase != appv1.OperationSucceeded {
+ logCtx.Warnf("Skipping auto-sync: failed previous sync attempt to %s", desiredCommitSHA)
+ message := fmt.Sprintf("Failed sync attempt to %s: %s", desiredCommitSHA, app.Status.OperationState.Message)
+ return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: message}
+ }
+ logCtx.Infof("Skipping auto-sync: most recent sync already to %s", desiredCommitSHA)
+ return nil
+ }

- options.FieldSelector = fields.Everything().String()
- labelSelector := labels.NewSelector().Add(*instanceIDReq)
- options.LabelSelector = labelSelector.String()
+ op := appv1.Operation{
+ Sync: &appv1.SyncOperation{
+ Revision: desiredCommitSHA,
+ Prune: app.Spec.SyncPolicy.Automated.Prune,
+ ParameterOverrides: app.Spec.Source.ComponentParameterOverrides,
+ },
+ }
+ appIf := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace)
+ _, err := argo.SetAppOperation(context.Background(), appIf, ctrl.auditLogger, app.Name, &op)
+ if err != nil {
+ logCtx.Errorf("Failed to initiate auto-sync to %s: %v", desiredCommitSHA, err)
+ return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: err.Error()}
+ }
+ message := fmt.Sprintf("Initiated automated sync to '%s'", desiredCommitSHA)
+ ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonOperationStarted, Type: v1.EventTypeNormal}, message)
+ logCtx.Info(message)
+ return nil
+ }

+ // alreadyAttemptedSync returns whether or not the most recent sync was performed against the
+ // commitSHA and with the same parameter overrides which are currently set in the app
+ func alreadyAttemptedSync(app *appv1.Application, commitSHA string) bool {
+ if app.Status.OperationState == nil || app.Status.OperationState.Operation.Sync == nil || app.Status.OperationState.SyncResult == nil {
+ return false
+ }
+ if app.Status.OperationState.SyncResult.Revision != commitSHA {
+ return false
+ }
+ if !reflect.DeepEqual(appv1.ParameterOverrides(app.Spec.Source.ComponentParameterOverrides), app.Status.OperationState.Operation.Sync.ParameterOverrides) {
+ return false
+ }
+ return true
+ }
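The guard chain in `alreadyAttemptedSync` can be exercised in isolation. This sketch uses a hypothetical `syncState` struct in place of the real `OperationState`/`SyncOperationResult` types: no recorded state, a different revision, or different parameter overrides all mean the sync has not yet been attempted.

```go
package main

import (
	"fmt"
	"reflect"
)

// syncState is a simplified stand-in for the Application status fields
// referenced by alreadyAttemptedSync in the diff.
type syncState struct {
	Revision  string
	Overrides []string
}

// alreadyAttempted captures the three guards from the diff.
func alreadyAttempted(last *syncState, commitSHA string, overrides []string) bool {
	if last == nil {
		return false
	}
	if last.Revision != commitSHA {
		return false
	}
	if !reflect.DeepEqual(overrides, last.Overrides) {
		return false
	}
	return true
}

func main() {
	last := &syncState{Revision: "abc", Overrides: []string{"a=1"}}
	fmt.Println(alreadyAttempted(last, "abc", []string{"a=1"})) // same revision and overrides
	fmt.Println(alreadyAttempted(last, "abc", []string{"a=2"})) // overrides changed
}
```

The overrides comparison is why `TestAutoSyncParameterOverrides` below expects a new sync despite an identical revision.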
+ func (ctrl *ApplicationController) newApplicationInformer() cache.SharedIndexInformer {
+ appInformerFactory := appinformers.NewFilteredSharedInformerFactory(
+ ctrl.applicationClientset,
+ ctrl.statusRefreshTimeout,
+ ctrl.namespace,
+ func(options *metav1.ListOptions) {},
+ )
+ informer := appInformerFactory.Argoproj().V1alpha1().Applications().Informer()
+ informer.AddEventHandler(
@@ -673,23 +812,32 @@ func newApplicationInformer(
AddFunc: func(obj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(obj)
if err == nil {
- appQueue.Add(key)
- appOperationQueue.Add(key)
+ ctrl.appRefreshQueue.Add(key)
+ ctrl.appOperationQueue.Add(key)
}
},
UpdateFunc: func(old, new interface{}) {
key, err := cache.MetaNamespaceKeyFunc(new)
- if err == nil {
- appQueue.Add(key)
- appOperationQueue.Add(key)
+ if err != nil {
+ return
}
+ oldApp, oldOK := old.(*appv1.Application)
+ newApp, newOK := new.(*appv1.Application)
+ if oldOK && newOK {
+ if toggledAutomatedSync(oldApp, newApp) {
+ log.WithField("application", newApp.Name).Info("Enabled automated sync")
+ ctrl.forceAppRefresh(newApp.Name)
+ }
+ }
+ ctrl.appRefreshQueue.Add(key)
+ ctrl.appOperationQueue.Add(key)
},
DeleteFunc: func(obj interface{}) {
// IndexerInformer uses a delta queue, therefore for deletes we have to use this
// key function.
key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
if err == nil {
- appQueue.Add(key)
+ ctrl.appRefreshQueue.Add(key)
}
},
},
@@ -700,3 +848,17 @@ func newApplicationInformer(
func isOperationInProgress(app *appv1.Application) bool {
return app.Status.OperationState != nil && !app.Status.OperationState.Phase.Completed()
}

+ // toggledAutomatedSync tests if an app went from auto-sync disabled to enabled.
+ // if it was toggled to be enabled, the informer handler will force a refresh
+ func toggledAutomatedSync(old *appv1.Application, new *appv1.Application) bool {
+ if new.Spec.SyncPolicy == nil || new.Spec.SyncPolicy.Automated == nil {
+ return false
+ }
+ // auto-sync is enabled. check if it was previously disabled
+ if old.Spec.SyncPolicy == nil || old.Spec.SyncPolicy.Automated == nil {
+ return true
+ }
+ // nothing changed
+ return false
+ }
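The toggle check above reduces to two nil/enabled tests. A minimal sketch with a hypothetical `policy` struct standing in for the `SyncPolicy.Automated` pointer chain:

```go
package main

import "fmt"

// policy is a simplified stand-in for the SyncPolicy/Automated pointers
// checked by toggledAutomatedSync in the diff.
type policy struct{ automated bool }

// toggled reports an app going from auto-sync disabled to enabled:
// the new state must have automation enabled, and the old state must not.
func toggled(old, new *policy) bool {
	if new == nil || !new.automated {
		return false
	}
	if old == nil || !old.automated {
		return true
	}
	return false
}

func main() {
	fmt.Println(toggled(nil, &policy{automated: true}))                      // newly enabled
	fmt.Println(toggled(&policy{automated: true}, &policy{automated: true})) // unchanged
}
```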
controller/appcontroller_test.go (new file, 231 lines)
@@ -0,0 +1,231 @@
package controller

import (
"testing"
"time"

"github.com/ghodss/yaml"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"

argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/fake"
reposerver "github.com/argoproj/argo-cd/reposerver/mocks"
"github.com/stretchr/testify/assert"
)

func newFakeController(apps ...runtime.Object) *ApplicationController {
kubeClientset := fake.NewSimpleClientset()
appClientset := appclientset.NewSimpleClientset(apps...)
repoClientset := reposerver.Clientset{}
return NewApplicationController(
"argocd",
kubeClientset,
appClientset,
&repoClientset,
time.Minute,
)
}

var fakeApp = `
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  destination:
    namespace: dummy-namespace
    server: https://localhost:6443
  project: default
  source:
    path: some/path
    repoURL: https://github.com/argoproj/argocd-example-apps.git
  syncPolicy:
    automated: {}
status:
  operationState:
    finishedAt: 2018-09-21T23:50:29Z
    message: successfully synced
    operation:
      sync:
        revision: HEAD
    phase: Succeeded
    startedAt: 2018-09-21T23:50:25Z
    syncResult:
      resources:
      - kind: RoleBinding
        message: |-
          rolebinding.rbac.authorization.k8s.io/always-outofsync reconciled
          rolebinding.rbac.authorization.k8s.io/always-outofsync configured
        name: always-outofsync
        namespace: default
        status: Synced
      revision: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
`

func newFakeApp() *argoappv1.Application {
var app argoappv1.Application
err := yaml.Unmarshal([]byte(fakeApp), &app)
if err != nil {
panic(err)
}
return &app
}

func TestAutoSync(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(app)
compRes := argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.NotNil(t, app.Operation)
assert.NotNil(t, app.Operation.Sync)
assert.False(t, app.Operation.Sync.Prune)
}
func TestSkipAutoSync(t *testing.T) {
// Verify we skip when we previously synced to it in our most recent history
// Set current to 'aaaaa', desired to 'aaaa' and mark system OutOfSync
app := newFakeApp()
ctrl := newFakeController(app)
compRes := argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
cond := ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)

// Verify we skip when we are already Synced (even if revision is different)
app = newFakeApp()
ctrl = newFakeController(app)
compRes = argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusSynced,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond = ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)

// Verify we skip when auto-sync is disabled
app = newFakeApp()
app.Spec.SyncPolicy = nil
ctrl = newFakeController(app)
compRes = argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond = ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)

// Verify we skip when previous sync attempt failed and return error condition
// Set current to 'aaaaa', desired to 'bbbbb' and add 'bbbbb' to failure history
app = newFakeApp()
app.Status.OperationState = &argoappv1.OperationState{
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{},
},
Phase: argoappv1.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
},
}
ctrl = newFakeController(app)
compRes = argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond = ctrl.autoSync(app, &compRes)
assert.NotNil(t, cond)
app, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
}
// TestAutoSyncIndicateError verifies we skip auto-sync and return error condition if previous sync failed
func TestAutoSyncIndicateError(t *testing.T) {
app := newFakeApp()
app.Spec.Source.ComponentParameterOverrides = []argoappv1.ComponentParameter{
{
Name: "a",
Value: "1",
},
}
ctrl := newFakeController(app)
compRes := argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
app.Status.OperationState = &argoappv1.OperationState{
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{
ParameterOverrides: argoappv1.ParameterOverrides{
{
Name: "a",
Value: "1",
},
},
},
},
Phase: argoappv1.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
}
cond := ctrl.autoSync(app, &compRes)
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
}
// TestAutoSyncParameterOverrides verifies we auto-sync if revision is same but parameter overrides are different
func TestAutoSyncParameterOverrides(t *testing.T) {
app := newFakeApp()
app.Spec.Source.ComponentParameterOverrides = []argoappv1.ComponentParameter{
{
Name: "a",
Value: "1",
},
}
ctrl := newFakeController(app)
compRes := argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
app.Status.OperationState = &argoappv1.OperationState{
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{
ParameterOverrides: argoappv1.ParameterOverrides{
{
Name: "a",
Value: "2", // this value changed
},
},
},
},
Phase: argoappv1.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
}
cond := ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.NotNil(t, app.Operation)
}
@@ -3,16 +3,9 @@ package controller
import (
"context"
"encoding/json"
+ "runtime/debug"
"time"

- "runtime/debug"

- "github.com/argoproj/argo-cd/common"
- "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
- "github.com/argoproj/argo-cd/reposerver"
- "github.com/argoproj/argo-cd/reposerver/repository"
- "github.com/argoproj/argo-cd/util"
- "github.com/argoproj/argo-cd/util/db"
log "github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -25,6 +18,12 @@ import (
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"

+ "github.com/argoproj/argo-cd/common"
+ "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
+ "github.com/argoproj/argo-cd/reposerver"
+ "github.com/argoproj/argo-cd/util/db"
+ "github.com/argoproj/argo-cd/util/git"
)

type SecretController struct {
@@ -93,22 +92,13 @@ func (ctrl *SecretController) getRepoConnectionState(repo *v1alpha1.Repository)
ModifiedAt: repo.ConnectionState.ModifiedAt,
Status: v1alpha1.ConnectionStatusUnknown,
}
- closer, client, err := ctrl.repoClientset.NewRepositoryClient()
- if err != nil {
- log.Errorf("Unable to create repository client: %v", err)
- return state
- }
- defer util.Close(closer)
- _, err = client.ListDir(context.Background(), &repository.ListDirRequest{Repo: repo, Path: ".gitignore"})
+ err := git.TestRepo(repo.Repo, repo.Username, repo.Password, repo.SSHPrivateKey)
if err == nil {
state.Status = v1alpha1.ConnectionStatusSuccessful
} else {
state.Status = v1alpha1.ConnectionStatusFailed
state.Message = err.Error()
}
return state
}
@@ -7,6 +7,7 @@ import (
"time"

log "github.com/sirupsen/logrus"
+ apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
@@ -35,10 +36,11 @@ type AppStateManager interface {
SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState)
}

- // ksonnetAppStateManager allows to compare application using KSonnet CLI
- type ksonnetAppStateManager struct {
+ // appStateManager allows to compare application using KSonnet CLI
+ type appStateManager struct {
db db.ArgoDB
appclientset appclientset.Interface
+ kubectl kubeutil.Kubectl
repoClientset reposerver.Clientset
namespace string
}
@@ -83,7 +85,7 @@ func groupLiveObjects(liveObjs []*unstructured.Unstructured, targetObjs []*unstr
return liveByFullName
}

- func (s *ksonnetAppStateManager) getTargetObjs(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) ([]*unstructured.Unstructured, *repository.ManifestResponse, error) {
+ func (s *appStateManager) getTargetObjs(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) ([]*unstructured.Unstructured, *repository.ManifestResponse, error) {
repo := s.getRepo(app.Spec.Source.RepoURL)
conn, repoClient, err := s.repoClientset.NewRepositoryClient()
if err != nil {
@@ -121,6 +123,7 @@ func (s *ksonnetAppStateManager) getTargetObjs(app *v1alpha1.Application, revisi
ComponentParameterOverrides: mfReqOverrides,
AppLabel: app.Name,
ValueFiles: app.Spec.Source.ValuesFiles,
+ Namespace: app.Spec.Destination.Namespace,
})
if err != nil {
return nil, nil, err
@@ -140,7 +143,7 @@ func (s *ksonnetAppStateManager) getTargetObjs(app *v1alpha1.Application, revisi
return targetObjs, manifestInfo, nil
}

- func (s *ksonnetAppStateManager) getLiveObjs(app *v1alpha1.Application, targetObjs []*unstructured.Unstructured) (
+ func (s *appStateManager) getLiveObjs(app *v1alpha1.Application, targetObjs []*unstructured.Unstructured) (
[]*unstructured.Unstructured, map[string]*unstructured.Unstructured, error) {

// Get the REST config for the cluster corresponding to the environment
@@ -168,7 +171,10 @@ func (s *ksonnetAppStateManager) getLiveObjs(app *v1alpha1.Application, targetOb
controlledLiveObj := make([]*unstructured.Unstructured, len(targetObjs))

// Move live resources which have corresponding target object to controlledLiveObj
- dynClientPool := dynamic.NewDynamicClientPool(restConfig)
+ dynamicIf, err := dynamic.NewForConfig(restConfig)
+ if err != nil {
+ return nil, nil, err
+ }
disco, err := discovery.NewDiscoveryClientForConfig(restConfig)
if err != nil {
return nil, nil, err
@@ -182,30 +188,29 @@ func (s *ksonnetAppStateManager) getLiveObjs(app *v1alpha1.Application, targetOb
// of ArgoCD. In order to determine that it is truly missing, we fall back to perform a
// direct lookup of the resource by name. See issue #141
gvk := targetObj.GroupVersionKind()
- dclient, err := dynClientPool.ClientForGroupVersionKind(gvk)
- if err != nil {
- return nil, nil, err
- }
apiResource, err := kubeutil.ServerResourceForGroupVersionKind(disco, gvk)
if err != nil {
- return nil, nil, err
- }
- liveObj, err = kubeutil.GetLiveResource(dclient, targetObj, apiResource, app.Spec.Destination.Namespace)
- if err != nil {
- return nil, nil, err
+ if !apierr.IsNotFound(err) {
+ return nil, nil, err
+ }
+ // If we get here, the app is comprised of a custom resource which has yet to be registered
+ } else {
+ liveObj, err = kubeutil.GetLiveResource(dynamicIf, targetObj, apiResource, app.Spec.Destination.Namespace)
+ if err != nil {
+ return nil, nil, err
+ }
}
}
controlledLiveObj[i] = liveObj
delete(liveObjByFullName, fullName)
}

return controlledLiveObj, liveObjByFullName, nil
}
// CompareAppState compares application git state to the live app state, using the specified
|
||||
// revision and supplied overrides. If revision or overrides are empty, then compares against
|
||||
// revision and overrides in the app spec.
|
||||
func (s *ksonnetAppStateManager) CompareAppState(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) (
|
||||
func (s *appStateManager) CompareAppState(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) (
|
||||
*v1alpha1.ComparisonResult, *repository.ManifestResponse, []v1alpha1.ApplicationCondition, error) {
|
||||
|
||||
failedToLoadObjs := false
|
||||
@@ -318,6 +323,10 @@ func (s *ksonnetAppStateManager) CompareAppState(app *v1alpha1.Application, revi
|
||||
Resources: resources,
|
||||
Status: comparisonStatus,
|
||||
}
|
||||
|
||||
if manifestInfo != nil {
|
||||
compResult.Revision = manifestInfo.Revision
|
||||
}
|
||||
return &compResult, manifestInfo, conditions, nil
|
||||
}
|
||||
|
||||
@@ -360,7 +369,7 @@ func getResourceFullName(obj *unstructured.Unstructured) string {
|
||||
return fmt.Sprintf("%s:%s", obj.GetKind(), obj.GetName())
|
||||
}
|
||||
|
||||
func (s *ksonnetAppStateManager) getRepo(repoURL string) *v1alpha1.Repository {
|
||||
func (s *appStateManager) getRepo(repoURL string) *v1alpha1.Repository {
|
||||
repo, err := s.db.GetRepository(context.Background(), repoURL)
|
||||
if err != nil {
|
||||
// If we couldn't retrieve from the repo service, assume public repositories
|
||||
@@ -369,7 +378,7 @@ func (s *ksonnetAppStateManager) getRepo(repoURL string) *v1alpha1.Repository {
|
||||
return repo
|
||||
}
|
||||
|
||||
func (s *ksonnetAppStateManager) persistDeploymentInfo(
|
||||
func (s *appStateManager) persistDeploymentInfo(
|
||||
app *v1alpha1.Application, revision string, envParams []*v1alpha1.ComponentParameter, overrides *[]v1alpha1.ComponentParameter) error {
|
||||
|
||||
params := make([]v1alpha1.ComponentParameter, len(envParams))
|
||||
@@ -384,7 +393,6 @@ func (s *ksonnetAppStateManager) persistDeploymentInfo(
|
||||
history := append(app.Status.History, v1alpha1.DeploymentInfo{
|
||||
ComponentParameterOverrides: app.Spec.Source.ComponentParameterOverrides,
|
||||
Revision: revision,
|
||||
Params: params,
|
||||
DeployedAt: metav1.NewTime(time.Now().UTC()),
|
||||
ID: nextID,
|
||||
})
|
||||
@@ -411,10 +419,12 @@ func NewAppStateManager(
|
||||
appclientset appclientset.Interface,
|
||||
repoClientset reposerver.Clientset,
|
||||
namespace string,
|
||||
kubectl kubeutil.Kubectl,
|
||||
) AppStateManager {
|
||||
return &ksonnetAppStateManager{
|
||||
return &appStateManager{
|
||||
db: db,
|
||||
appclientset: appclientset,
|
||||
kubectl: kubectl,
|
||||
repoClientset: repoClientset,
|
||||
namespace: namespace,
|
||||
}
|
||||
|
||||
controller/state_test.go (new file, 52 lines)
@@ -0,0 +1,52 @@
+package controller
+
+import (
+	"testing"
+
+	"github.com/ghodss/yaml"
+	"github.com/stretchr/testify/assert"
+	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+)
+
+var podManifest = []byte(`
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-pod
+spec:
+  containers:
+  - image: nginx:1.7.9
+    name: nginx
+    resources:
+      requests:
+        cpu: 0.2
+`)
+
+func newPod() *unstructured.Unstructured {
+	var un unstructured.Unstructured
+	err := yaml.Unmarshal(podManifest, &un)
+	if err != nil {
+		panic(err)
+	}
+	return &un
+}
+
+func TestIsHook(t *testing.T) {
+	pod := newPod()
+	assert.False(t, isHook(pod))
+
+	pod.SetAnnotations(map[string]string{"helm.sh/hook": "post-install"})
+	assert.True(t, isHook(pod))
+
+	pod = newPod()
+	pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": "PreSync"})
+	assert.True(t, isHook(pod))
+
+	pod = newPod()
+	pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": "Skip"})
+	assert.False(t, isHook(pod))
+
+	pod = newPod()
+	pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": "Unknown"})
+	assert.False(t, isHook(pod))
+}
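The hook-detection behavior these tests exercise can be sketched as follows. This is a simplified stand-in for the controller's `isHook` (the name `isHookSketch` and the exact set of recognized ArgoCD hook phases are assumptions inferred from the test cases, not the real implementation): any Helm hook annotation marks the object as a hook, while the ArgoCD annotation must name a supported phase, so `Skip` and unrecognized values are not hooks.

```go
package main

import "fmt"

// isHookSketch is a simplified sketch (not the real controller code) of the
// behavior TestIsHook checks: a Helm hook annotation always marks a hook,
// and an ArgoCD hook annotation counts only for a recognized phase.
func isHookSketch(annotations map[string]string) bool {
	if _, ok := annotations["helm.sh/hook"]; ok {
		return true
	}
	// "Skip" and unknown values fall through and return false.
	switch annotations["argocd.argoproj.io/hook"] {
	case "PreSync", "Sync", "PostSync":
		return true
	}
	return false
}

func main() {
	fmt.Println(isHookSketch(map[string]string{"helm.sh/hook": "post-install"}))
	fmt.Println(isHookSketch(map[string]string{"argocd.argoproj.io/hook": "Skip"}))
}
```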
@@ -5,6 +5,7 @@ import (
 	"encoding/json"
 	"fmt"
 	"reflect"
+	"sort"
 	"strings"
 	"sync"

@@ -29,13 +30,16 @@ import (

 type syncContext struct {
 	appName       string
+	proj          *appv1.AppProject
 	comparison    *appv1.ComparisonResult
 	config        *rest.Config
-	dynClientPool dynamic.ClientPool
-	disco         *discovery.DiscoveryClient
+	dynamicIf     dynamic.Interface
+	disco         discovery.DiscoveryInterface
+	kubectl       kube.Kubectl
 	namespace     string
 	syncOp        *appv1.SyncOperation
 	syncRes       *appv1.SyncOperationResult
+	syncResources []appv1.SyncOperationResource
 	opState       *appv1.OperationState
 	manifestInfo  *repository.ManifestResponse
 	log           *log.Entry
@@ -43,8 +47,8 @@ type syncContext struct {
 	lock sync.Mutex
 }

-func (s *ksonnetAppStateManager) SyncAppState(app *appv1.Application, state *appv1.OperationState) {
-	// Sync requests are usually requested with ambiguous revisions (e.g. master, HEAD, v1.2.3).
+func (s *appStateManager) SyncAppState(app *appv1.Application, state *appv1.OperationState) {
+	// Sync requests might be requested with ambiguous revisions (e.g. master, HEAD, v1.2.3).
 	// This can change meaning when resuming operations (e.g a hook sync). After calculating a
 	// concrete git commit SHA, the SHA is remembered in the status.operationState.syncResult and
 	// rollbackResult fields. This ensures that when resuming an operation, we sync to the same
@@ -52,10 +56,13 @@ func (s *ksonnetAppStateManager) SyncAppState(app *appv1.Application, state *app
 	var revision string
 	var syncOp appv1.SyncOperation
 	var syncRes *appv1.SyncOperationResult
+	var syncResources []appv1.SyncOperationResource
 	var overrides []appv1.ComponentParameter

 	if state.Operation.Sync != nil {
 		syncOp = *state.Operation.Sync
+		syncResources = syncOp.Resources
 		overrides = []appv1.ComponentParameter(state.Operation.Sync.ParameterOverrides)
 		if state.SyncResult != nil {
 			syncRes = state.SyncResult
 			revision = state.SyncResult.Revision
@@ -63,34 +70,6 @@ func (s *ksonnetAppStateManager) SyncAppState(app *appv1.Application, state *app
 			syncRes = &appv1.SyncOperationResult{}
 			state.SyncResult = syncRes
 		}
-	} else if state.Operation.Rollback != nil {
-		var deploymentInfo *appv1.DeploymentInfo
-		for _, info := range app.Status.History {
-			if info.ID == app.Operation.Rollback.ID {
-				deploymentInfo = &info
-				break
-			}
-		}
-		if deploymentInfo == nil {
-			state.Phase = appv1.OperationFailed
-			state.Message = fmt.Sprintf("application %s does not have deployment with id %v", app.Name, app.Operation.Rollback.ID)
-			return
-		}
-		// Rollback is just a convenience around Sync
-		syncOp = appv1.SyncOperation{
-			Revision:     deploymentInfo.Revision,
-			DryRun:       state.Operation.Rollback.DryRun,
-			Prune:        state.Operation.Rollback.Prune,
-			SyncStrategy: &appv1.SyncStrategy{Apply: &appv1.SyncStrategyApply{}},
-		}
-		overrides = deploymentInfo.ComponentParameterOverrides
-		if state.RollbackResult != nil {
-			syncRes = state.RollbackResult
-			revision = state.RollbackResult.Revision
-		} else {
-			syncRes = &appv1.SyncOperationResult{}
-			state.RollbackResult = syncRes
-		}
 	} else {
 		state.Phase = appv1.OperationFailed
 		state.Message = "Invalid operation request: no operation specified"
@@ -103,6 +82,7 @@ func (s *ksonnetAppStateManager) SyncAppState(app *appv1.Application, state *app
 		// Take the value in the requested operation. We will resolve this to a SHA later.
 		revision = syncOp.Revision
 	}

 	comparison, manifestInfo, conditions, err := s.CompareAppState(app, revision, overrides)
 	if err != nil {
 		state.Phase = appv1.OperationError
@@ -132,23 +112,38 @@ func (s *ksonnetAppStateManager) SyncAppState(app *appv1.Application, state *app
 	}

 	restConfig := clst.RESTConfig()
-	dynClientPool := dynamic.NewDynamicClientPool(restConfig)
-	disco, err := discovery.NewDiscoveryClientForConfig(restConfig)
+	dynamicIf, err := dynamic.NewForConfig(restConfig)
 	if err != nil {
 		state.Phase = appv1.OperationError
+		state.Message = fmt.Sprintf("Failed to initialize dynamic client: %v", err)
 		return
 	}
+	disco, err := discovery.NewDiscoveryClientForConfig(restConfig)
+	if err != nil {
+		state.Phase = appv1.OperationError
+		state.Message = fmt.Sprintf("Failed to initialize discovery client: %v", err)
+		return
+	}

+	proj, err := argo.GetAppProject(&app.Spec, s.appclientset, s.namespace)
+	if err != nil {
+		state.Phase = appv1.OperationError
+		state.Message = fmt.Sprintf("Failed to load application project: %v", err)
+		return
+	}

 	syncCtx := syncContext{
 		appName:       app.Name,
+		proj:          proj,
 		comparison:    comparison,
 		config:        restConfig,
-		dynClientPool: dynClientPool,
+		dynamicIf:     dynamicIf,
 		disco:         disco,
+		kubectl:       s.kubectl,
 		namespace:     app.Spec.Destination.Namespace,
 		syncOp:        &syncOp,
 		syncRes:       syncRes,
+		syncResources: syncResources,
 		opState:       state,
 		manifestInfo:  manifestInfo,
 		log:           log.WithFields(log.Fields{"application": app.Name}),
@@ -160,7 +155,7 @@ func (s *ksonnetAppStateManager) SyncAppState(app *appv1.Application, state *app
 		syncCtx.sync()
 	}

-	if !syncOp.DryRun && syncCtx.opState.Phase.Successful() {
+	if !syncOp.DryRun && len(syncOp.Resources) == 0 && syncCtx.opState.Phase.Successful() {
 		err := s.persistDeploymentInfo(app, manifestInfo.Revision, manifestInfo.Params, nil)
 		if err != nil {
 			state.Phase = appv1.OperationError
@@ -249,13 +244,22 @@ func (sc *syncContext) generateSyncTasks() ([]syncTask, bool) {
 			sc.setOperationPhase(appv1.OperationError, fmt.Sprintf("Failed to unmarshal target object: %v", err))
 			return nil, false
 		}
-		syncTask := syncTask{
-			liveObj:   liveObj,
-			targetObj: targetObj,
+		if sc.syncResources == nil || argo.ContainsSyncResource(liveObj, sc.syncResources) || argo.ContainsSyncResource(targetObj, sc.syncResources) {
+			syncTask := syncTask{
+				liveObj:   liveObj,
+				targetObj: targetObj,
+			}
+			syncTasks = append(syncTasks, syncTask)
 		}
-		syncTasks = append(syncTasks, syncTask)
 	}
-	return syncTasks, true
+	if len(syncTasks) == 0 {
+		if len(sc.comparison.Resources) == 0 {
+			sc.setOperationPhase(appv1.OperationError, fmt.Sprintf("Application has no resources"))
+		} else {
+			sc.setOperationPhase(appv1.OperationError, fmt.Sprintf("Specified resources filter does not match any application resource"))
+		}
+	}
+	return syncTasks, len(syncTasks) > 0
 }

 // startedPreSyncPhase detects if we already started the PreSync stage of a sync operation.
@@ -310,12 +314,13 @@ func (sc *syncContext) applyObject(targetObj *unstructured.Unstructured, dryRun
 		Kind:      targetObj.GetKind(),
 		Namespace: sc.namespace,
 	}
-	message, err := kube.ApplyResource(sc.config, targetObj, sc.namespace, dryRun, force)
+	message, err := sc.kubectl.ApplyResource(sc.config, targetObj, sc.namespace, dryRun, force)
 	if err != nil {
 		resDetails.Message = err.Error()
 		resDetails.Status = appv1.ResourceDetailsSyncFailed
 		return resDetails
 	}

 	resDetails.Message = message
 	resDetails.Status = appv1.ResourceDetailsSynced
 	return resDetails
@@ -333,7 +338,7 @@ func (sc *syncContext) pruneObject(liveObj *unstructured.Unstructured, prune, dr
 		resDetails.Message = "pruned (dry run)"
 		resDetails.Status = appv1.ResourceDetailsSyncedAndPruned
 	} else {
-		err := kube.DeleteResource(sc.config, liveObj, sc.namespace)
+		err := sc.kubectl.DeleteResource(sc.config, liveObj, sc.namespace)
 		if err != nil {
 			resDetails.Message = err.Error()
 			resDetails.Status = appv1.ResourceDetailsSyncFailed
@@ -349,26 +354,49 @@ func (sc *syncContext) pruneObject(liveObj *unstructured.Unstructured, prune, dr
 	return resDetails
 }

+func hasCRDOfGroupKind(tasks []syncTask, group, kind string) bool {
+	for _, task := range tasks {
+		if kube.IsCRD(task.targetObj) {
+			crdGroup, ok, err := unstructured.NestedString(task.targetObj.Object, "spec", "group")
+			if err != nil || !ok {
+				continue
+			}
+			crdKind, ok, err := unstructured.NestedString(task.targetObj.Object, "spec", "names", "kind")
+			if err != nil || !ok {
+				continue
+			}
+			if group == crdGroup && crdKind == kind {
+				return true
+			}
+		}
+	}
+	return false
+}
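`hasCRDOfGroupKind` above leans on `unstructured.NestedString` to walk a decoded manifest down a key path. A minimal stdlib sketch of that lookup (the function name `nestedString` is an assumption; the real helper lives in `k8s.io/apimachinery`) shows the shape of the operation: each hop must be a `map[string]interface{}` containing the key, and the final value must be a string, otherwise `ok` is false.

```go
package main

import "fmt"

// nestedString is a simplified stand-in for unstructured.NestedString:
// walk obj down the given key path, returning ok=false if any hop is
// missing, not a map, or the final value is not a string.
func nestedString(obj map[string]interface{}, path ...string) (string, bool) {
	cur := interface{}(obj)
	for _, key := range path {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return "", false
		}
		cur, ok = m[key]
		if !ok {
			return "", false
		}
	}
	s, ok := cur.(string)
	return s, ok
}

func main() {
	// A CRD-shaped manifest like the ones hasCRDOfGroupKind inspects.
	crd := map[string]interface{}{
		"spec": map[string]interface{}{
			"group": "argoproj.io",
			"names": map[string]interface{}{"kind": "Application"},
		},
	}
	group, _ := nestedString(crd, "spec", "group")
	kind, _ := nestedString(crd, "spec", "names", "kind")
	fmt.Println(group, kind)
}
```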
 // performs an apply-based sync of the given sync tasks (possibly pruning the objects).
 // If update is true, the resource details are updated with the result;
 // if the prune/apply failed, the result is updated regardless.
 func (sc *syncContext) doApplySync(syncTasks []syncTask, dryRun, force, update bool) bool {
 	syncSuccessful := true
 	// apply all resources in parallel

+	var createTasks []syncTask
+	var pruneTasks []syncTask
+	for _, syncTask := range syncTasks {
+		if syncTask.targetObj == nil {
+			pruneTasks = append(pruneTasks, syncTask)
+		} else {
+			createTasks = append(createTasks, syncTask)
+		}
+	}
+	sort.Sort(newKindSorter(createTasks, resourceOrder))

 	var wg sync.WaitGroup
-	for _, task := range syncTasks {
+	for _, task := range pruneTasks {
 		wg.Add(1)
 		go func(t syncTask) {
 			defer wg.Done()
 			var resDetails appv1.ResourceDetails
-			if t.targetObj == nil {
-				resDetails = sc.pruneObject(t.liveObj, sc.syncOp.Prune, dryRun)
-			} else {
-				if isHook(t.targetObj) {
-					return
-				}
-				resDetails = sc.applyObject(t.targetObj, dryRun, force)
-			}
+			resDetails = sc.pruneObject(t.liveObj, sc.syncOp.Prune, dryRun)
 			if !resDetails.Status.Successful() {
 				syncSuccessful = false
 			}
@@ -378,6 +406,76 @@ func (sc *syncContext) doApplySync(syncTasks []syncTask, dryRun, force, update b
 		}(task)
 	}
 	wg.Wait()

+	processCreateTasks := func(tasks []syncTask, gvk schema.GroupVersionKind) {
+		serverRes, err := kube.ServerResourceForGroupVersionKind(sc.disco, gvk)
+		if err != nil {
+			// Special case for custom resources: if the custom resource definition is not supported
+			// by the cluster but is defined in the application, then skip verification using
+			// `kubectl apply --dry-run`, since the CRD will be created during app synchronization.
+			if dryRun && apierr.IsNotFound(err) && hasCRDOfGroupKind(createTasks, gvk.Group, gvk.Kind) {
+				return
+			} else {
+				syncSuccessful = false
+				for _, task := range tasks {
+					sc.setResourceDetails(&appv1.ResourceDetails{
+						Name:      task.targetObj.GetName(),
+						Kind:      task.targetObj.GetKind(),
+						Namespace: sc.namespace,
+						Message:   err.Error(),
+						Status:    appv1.ResourceDetailsSyncFailed,
+					})
+				}
+				return
+			}
+		}

+		if !sc.proj.IsResourcePermitted(metav1.GroupKind{Group: gvk.Group, Kind: gvk.Kind}, serverRes.Namespaced) {
+			syncSuccessful = false
+			for _, task := range tasks {
+				sc.setResourceDetails(&appv1.ResourceDetails{
+					Name:      task.targetObj.GetName(),
+					Kind:      task.targetObj.GetKind(),
+					Namespace: sc.namespace,
+					Message:   fmt.Sprintf("Resource %s:%s is not permitted in project %s.", gvk.Group, gvk.Kind, sc.proj.Name),
+					Status:    appv1.ResourceDetailsSyncFailed,
+				})
+			}
+			return
+		}

+		var createWg sync.WaitGroup
+		for i := range tasks {
+			createWg.Add(1)
+			go func(t syncTask) {
+				defer createWg.Done()
+				if isHook(t.targetObj) {
+					return
+				}
+				resDetails := sc.applyObject(t.targetObj, dryRun, force)
+				if !resDetails.Status.Successful() {
+					syncSuccessful = false
+				}
+				if update || !resDetails.Status.Successful() {
+					sc.setResourceDetails(&resDetails)
+				}
+			}(tasks[i])
+		}
+		createWg.Wait()
+	}

+	var tasksGroup []syncTask
+	for _, task := range createTasks {
+		// Only wait if the kind of the next task differs from the previous kind
+		if len(tasksGroup) > 0 && tasksGroup[0].targetObj.GetKind() != task.targetObj.GetKind() {
+			processCreateTasks(tasksGroup, tasksGroup[0].targetObj.GroupVersionKind())
+			tasksGroup = []syncTask{task}
+		} else {
+			tasksGroup = append(tasksGroup, task)
+		}
+	}
+	if len(tasksGroup) > 0 {
+		processCreateTasks(tasksGroup, tasksGroup[0].targetObj.GroupVersionKind())
+	}
 	return syncSuccessful
 }
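The grouping pass at the end of `doApplySync` walks the kind-sorted create tasks and flushes a batch every time the kind changes, so all resources of one kind are applied (and awaited) before the next kind starts. A minimal sketch of that pattern, using plain strings in place of sync tasks (the helper name `processInKindBatches` is an assumption for illustration):

```go
package main

import "fmt"

// processInKindBatches walks a kind-sorted list and invokes process on each
// run of equal kinds; a batch is flushed as soon as the kind changes, and the
// final batch is flushed after the loop, mirroring the loop above.
func processInKindBatches(kinds []string, process func(batch []string)) {
	var batch []string
	for _, k := range kinds {
		if len(batch) > 0 && batch[0] != k {
			process(batch)
			batch = []string{k}
		} else {
			batch = append(batch, k)
		}
	}
	if len(batch) > 0 {
		process(batch)
	}
}

func main() {
	var batches [][]string
	processInKindBatches(
		[]string{"Namespace", "ConfigMap", "ConfigMap", "Deployment"},
		func(b []string) { batches = append(batches, append([]string(nil), b...)) },
	)
	fmt.Println(batches)
}
```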
@@ -412,7 +510,7 @@ func (sc *syncContext) doHookSync(syncTasks []syncTask, hooks []*unstructured.Un
 	// already started the post-sync phase, then we do not need to perform the health check.
 	postSyncHooks, _ := sc.getHooks(appv1.HookTypePostSync)
 	if len(postSyncHooks) > 0 && !sc.startedPostSyncPhase() {
-		healthState, err := setApplicationHealth(sc.comparison)
+		healthState, err := setApplicationHealth(sc.kubectl, sc.comparison)
 		sc.log.Infof("PostSync application health check: %s", healthState.Status)
 		if err != nil {
 			sc.setOperationPhase(appv1.OperationError, fmt.Sprintf("failed to check application health: %v", err))
@@ -534,15 +632,12 @@ func (sc *syncContext) runHook(hook *unstructured.Unstructured, hookType appv1.H
 	}

 	gvk := hook.GroupVersionKind()
-	dclient, err := sc.dynClientPool.ClientForGroupVersionKind(gvk)
-	if err != nil {
-		return false, err
-	}
 	apiResource, err := kube.ServerResourceForGroupVersionKind(sc.disco, gvk)
 	if err != nil {
 		return false, err
 	}
-	resIf := dclient.Resource(apiResource, sc.namespace)
+	resource := kube.ToGroupVersionResource(gvk.GroupVersion().String(), apiResource)
+	resIf := kube.ToResourceInterface(sc.dynamicIf, apiResource, resource, sc.namespace)

 	var liveObj *unstructured.Unstructured
 	existing, err := resIf.Get(hook.GetName(), metav1.GetOptions{})
@@ -555,7 +650,7 @@ func (sc *syncContext) runHook(hook *unstructured.Unstructured, hookType appv1.H
 		if err != nil {
 			sc.log.Warnf("Failed to set application label on hook %v: %v", hook, err)
 		}
-		_, err := kube.ApplyResource(sc.config, hook, sc.namespace, false, false)
+		_, err := sc.kubectl.ApplyResource(sc.config, hook, sc.namespace, false, false)
 		if err != nil {
 			return false, fmt.Errorf("Failed to create %s hook %s '%s': %v", hookType, gvk, hook.GetName(), err)
 		}
@@ -627,7 +722,7 @@ func isHelmHook(obj *unstructured.Unstructured) bool {
 	if annotations == nil {
 		return false
 	}
-	_, ok := annotations[common.AnnotationHook]
+	_, ok := annotations[common.AnnotationHelmHook]
 	return ok
 }

@@ -845,14 +940,97 @@ func (sc *syncContext) deleteHook(name, kind, apiVersion string) error {
 		Version: groupVersion[1],
 		Kind:    kind,
 	}
-	dclient, err := sc.dynClientPool.ClientForGroupVersionKind(gvk)
-	if err != nil {
-		return err
-	}
 	apiResource, err := kube.ServerResourceForGroupVersionKind(sc.disco, gvk)
 	if err != nil {
 		return err
 	}
-	resIf := dclient.Resource(apiResource, sc.namespace)
+	resource := kube.ToGroupVersionResource(gvk.GroupVersion().String(), apiResource)
+	resIf := kube.ToResourceInterface(sc.dynamicIf, apiResource, resource, sc.namespace)
 	return resIf.Delete(name, &metav1.DeleteOptions{})
 }

+// This code is mostly taken from https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go
+
+// sortOrder is an ordering of Kinds.
+type sortOrder []string
+
+// resourceOrder represents the correct order of Kubernetes resources within a manifest
+var resourceOrder sortOrder = []string{
+	"Namespace",
+	"ResourceQuota",
+	"LimitRange",
+	"PodSecurityPolicy",
+	"Secret",
+	"ConfigMap",
+	"StorageClass",
+	"PersistentVolume",
+	"PersistentVolumeClaim",
+	"ServiceAccount",
+	"CustomResourceDefinition",
+	"ClusterRole",
+	"ClusterRoleBinding",
+	"Role",
+	"RoleBinding",
+	"Service",
+	"DaemonSet",
+	"Pod",
+	"ReplicationController",
+	"ReplicaSet",
+	"Deployment",
+	"StatefulSet",
+	"Job",
+	"CronJob",
+	"Ingress",
+	"APIService",
+}
+
+type kindSorter struct {
+	ordering  map[string]int
+	manifests []syncTask
+}
+
+func newKindSorter(m []syncTask, s sortOrder) *kindSorter {
+	o := make(map[string]int, len(s))
+	for v, k := range s {
+		o[k] = v
+	}
+
+	return &kindSorter{
+		manifests: m,
+		ordering:  o,
+	}
+}
+
+func (k *kindSorter) Len() int { return len(k.manifests) }
+
+func (k *kindSorter) Swap(i, j int) { k.manifests[i], k.manifests[j] = k.manifests[j], k.manifests[i] }
+
+func (k *kindSorter) Less(i, j int) bool {
+	a := k.manifests[i].targetObj
+	if a == nil {
+		return false
+	}
+	b := k.manifests[j].targetObj
+	if b == nil {
+		return true
+	}
+	first, aok := k.ordering[a.GetKind()]
+	second, bok := k.ordering[b.GetKind()]
+	// if same kind (including unknown) sub sort alphanumeric
+	if first == second {
+		// if both are unknown and of different kind sort by kind alphabetically
+		if !aok && !bok && a.GetKind() != b.GetKind() {
+			return a.GetKind() < b.GetKind()
+		}
+		return a.GetName() < b.GetName()
+	}
+	// unknown kind is last
+	if !aok {
+		return false
+	}
+	if !bok {
+		return true
+	}
+	// sort different kinds
+	return first < second
+}
@@ -1,26 +1,402 @@
|
||||
package controller
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"sort"
|
||||
"testing"
|
||||
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
|
||||
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
|
||||
"github.com/argoproj/argo-cd/util/kube"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/stretchr/testify/assert"
|
||||
|
||||
apiv1 "k8s.io/api/core/v1"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
fakedisco "k8s.io/client-go/discovery/fake"
|
||||
"k8s.io/client-go/rest"
|
||||
testcore "k8s.io/client-go/testing"
|
||||
)
|
||||
|
||||
func newTestSyncCtx() *syncContext {
|
||||
type kubectlOutput struct {
|
||||
output string
|
||||
err error
|
||||
}
|
||||
|
||||
type mockKubectlCmd struct {
|
||||
commands map[string]kubectlOutput
|
||||
events chan watch.Event
|
||||
}
|
||||
|
||||
func (k mockKubectlCmd) WatchResources(
|
||||
ctx context.Context, config *rest.Config, namespace string, selector func(kind schema.GroupVersionKind) v1.ListOptions) (chan watch.Event, error) {
|
||||
|
||||
return k.events, nil
|
||||
}
|
||||
|
||||
func (k mockKubectlCmd) DeleteResource(config *rest.Config, obj *unstructured.Unstructured, namespace string) error {
|
||||
command, ok := k.commands[obj.GetName()]
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
return command.err
|
||||
}
|
||||
|
||||
func (k mockKubectlCmd) ApplyResource(config *rest.Config, obj *unstructured.Unstructured, namespace string, dryRun, force bool) (string, error) {
|
||||
command, ok := k.commands[obj.GetName()]
|
||||
if !ok {
|
||||
return "", nil
|
||||
}
|
||||
return command.output, command.err
|
||||
}
|
||||
|
||||
// ConvertToVersion converts an unstructured object into the specified group/version
|
||||
func (k mockKubectlCmd) ConvertToVersion(obj *unstructured.Unstructured, group, version string) (*unstructured.Unstructured, error) {
|
||||
return obj, nil
|
||||
}
|
||||
|
||||
func newTestSyncCtx(resources ...*v1.APIResourceList) *syncContext {
|
||||
fakeDisco := &fakedisco.FakeDiscovery{Fake: &testcore.Fake{}}
|
||||
fakeDisco.Resources = append(resources, &v1.APIResourceList{
|
||||
APIResources: []v1.APIResource{
|
||||
{Kind: "pod", Namespaced: true},
|
||||
{Kind: "deployment", Namespaced: true},
|
||||
{Kind: "service", Namespaced: true},
|
||||
},
|
||||
})
|
||||
kube.FlushServerResourcesCache()
|
||||
return &syncContext{
|
||||
comparison: &v1alpha1.ComparisonResult{},
|
||||
config: &rest.Config{},
|
||||
namespace: "test-namespace",
|
||||
syncOp: &v1alpha1.SyncOperation{},
|
||||
opState: &v1alpha1.OperationState{},
|
||||
log: log.WithFields(log.Fields{"application": "fake-app"}),
|
||||
syncRes: &v1alpha1.SyncOperationResult{},
|
||||
syncOp: &v1alpha1.SyncOperation{
|
||||
Prune: true,
|
||||
SyncStrategy: &v1alpha1.SyncStrategy{
|
||||
Apply: &v1alpha1.SyncStrategyApply{},
|
||||
},
|
||||
},
|
||||
proj: &v1alpha1.AppProject{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Name: "test",
|
||||
},
|
||||
Spec: v1alpha1.AppProjectSpec{
|
||||
ClusterResourceWhitelist: []v1.GroupKind{
|
||||
{Group: "*", Kind: "*"},
|
||||
},
|
||||
},
|
||||
},
|
||||
opState: &v1alpha1.OperationState{},
|
||||
disco: fakeDisco,
|
||||
log: log.WithFields(log.Fields{"application": "fake-app"}),
|
||||
}
|
||||
}
|
||||
|
||||
func TestSyncCreateInSortedOrder(t *testing.T) {
|
||||
syncCtx := newTestSyncCtx()
|
||||
syncCtx.kubectl = mockKubectlCmd{}
|
||||
syncCtx.comparison = &v1alpha1.ComparisonResult{
|
||||
Resources: []v1alpha1.ResourceState{{
|
||||
LiveState: "",
|
||||
TargetState: "{\"kind\":\"pod\"}",
|
||||
}, {
|
||||
LiveState: "",
|
||||
TargetState: "{\"kind\":\"service\"}",
|
||||
},
|
||||
},
|
||||
}
|
||||
syncCtx.sync()
|
||||
assert.Len(t, syncCtx.syncRes.Resources, 2)
|
||||
for i := range syncCtx.syncRes.Resources {
|
||||
if syncCtx.syncRes.Resources[i].Kind == "pod" {
|
||||
assert.Equal(t, v1alpha1.ResourceDetailsSynced, syncCtx.syncRes.Resources[i].Status)
|
||||
} else if syncCtx.syncRes.Resources[i].Kind == "service" {
|
||||
assert.Equal(t, v1alpha1.ResourceDetailsSynced, syncCtx.syncRes.Resources[i].Status)
|
||||
} else {
|
||||
t.Error("Resource isn't a pod or a service")
|
||||
}
|
||||
}
|
||||
syncCtx.sync()
|
||||
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
|
||||
}
|
||||
|
||||
func TestSyncCreateNotWhitelistedClusterResources(t *testing.T) {
|
||||
syncCtx := newTestSyncCtx(&v1.APIResourceList{
|
||||
GroupVersion: v1alpha1.SchemeGroupVersion.String(),
|
||||
APIResources: []v1.APIResource{
|
||||
{Name: "workflows", Namespaced: false, Kind: "Workflow", Group: "argoproj.io"},
|
||||
{Name: "application", Namespaced: false, Kind: "Application", Group: "argoproj.io"},
|
||||
},
|
||||
}, &v1.APIResourceList{
|
||||
GroupVersion: "rbac.authorization.k8s.io/v1",
|
||||
APIResources: []v1.APIResource{
|
||||
{Name: "clusterroles", Namespaced: false, Kind: "ClusterRole", Group: "rbac.authorization.k8s.io"},
|
||||
},
|
||||
})
|
||||
|
||||
syncCtx.proj.Spec.ClusterResourceWhitelist = []v1.GroupKind{
|
||||
{Group: "argoproj.io", Kind: "*"},
|
||||
}
|
||||
|
||||
syncCtx.kubectl = mockKubectlCmd{}
|
||||
syncCtx.comparison = &v1alpha1.ComparisonResult{
|
||||
Resources: []v1alpha1.ResourceState{{
|
||||
LiveState: "",
|
||||
TargetState: `{"apiVersion": "rbac.authorization.k8s.io/v1", "kind": "ClusterRole", "metadata": {"name": "argo-ui-cluster-role" }}`,
|
||||
}},
|
||||
}
|
||||
syncCtx.sync()
|
||||
assert.Len(t, syncCtx.syncRes.Resources, 1)
|
||||
assert.Equal(t, v1alpha1.ResourceDetailsSyncFailed, syncCtx.syncRes.Resources[0].Status)
|
||||
assert.Contains(t, syncCtx.syncRes.Resources[0].Message, "not permitted in project")
|
||||
}
|
||||
|
||||
func TestSyncBlacklistedNamespacedResources(t *testing.T) {
|
||||
syncCtx := newTestSyncCtx()
|
||||
|
||||
syncCtx.proj.Spec.NamespaceResourceBlacklist = []v1.GroupKind{
|
||||
{Group: "*", Kind: "deployment"},
|
||||
}
|
||||
|
||||
syncCtx.kubectl = mockKubectlCmd{}
|
||||
syncCtx.comparison = &v1alpha1.ComparisonResult{
|
||||
Resources: []v1alpha1.ResourceState{{
|
||||
LiveState: "",
|
||||
TargetState: "{\"kind\":\"deployment\"}",
|
||||
}},
|
||||
}
|
||||
syncCtx.sync()
|
||||
assert.Len(t, syncCtx.syncRes.Resources, 1)
|
||||
assert.Equal(t, v1alpha1.ResourceDetailsSyncFailed, syncCtx.syncRes.Resources[0].Status)
|
||||
assert.Contains(t, syncCtx.syncRes.Resources[0].Message, "not permitted in project")
|
||||
}
|
||||
|
||||
func TestSyncSuccessfully(t *testing.T) {
|
||||
syncCtx := newTestSyncCtx()
|
||||
syncCtx.kubectl = mockKubectlCmd{}
|
||||
syncCtx.comparison = &v1alpha1.ComparisonResult{
|
||||
Resources: []v1alpha1.ResourceState{{
|
||||
LiveState: "",
|
||||
TargetState: "{\"kind\":\"service\"}",
|
||||
}, {
|
||||
LiveState: "{\"kind\":\"pod\"}",
|
||||
TargetState: "",
|
||||
},
|
||||
},
|
||||
}
|
||||
syncCtx.sync()
|
||||
assert.Len(t, syncCtx.syncRes.Resources, 2)
|
||||
for i := range syncCtx.syncRes.Resources {
|
||||
if syncCtx.syncRes.Resources[i].Kind == "pod" {
|
||||
assert.Equal(t, v1alpha1.ResourceDetailsSyncedAndPruned, syncCtx.syncRes.Resources[i].Status)
|
||||
} else if syncCtx.syncRes.Resources[i].Kind == "service" {
|
||||
assert.Equal(t, v1alpha1.ResourceDetailsSynced, syncCtx.syncRes.Resources[i].Status)
|
||||
} else {
|
||||
t.Error("Resource isn't a pod or a service")
|
||||
}
|
||||
}
|
||||
syncCtx.sync()
|
||||
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
|
||||
}
|
||||

func TestSyncDeleteSuccessfully(t *testing.T) {
	syncCtx := newTestSyncCtx()
	syncCtx.kubectl = mockKubectlCmd{}
	syncCtx.comparison = &v1alpha1.ComparisonResult{
		Resources: []v1alpha1.ResourceState{{
			LiveState:   "{\"kind\":\"service\"}",
			TargetState: "",
		}, {
			LiveState:   "{\"kind\":\"pod\"}",
			TargetState: "",
		}},
	}
	syncCtx.sync()
	for i := range syncCtx.syncRes.Resources {
		if syncCtx.syncRes.Resources[i].Kind == "pod" {
			assert.Equal(t, v1alpha1.ResourceDetailsSyncedAndPruned, syncCtx.syncRes.Resources[i].Status)
		} else if syncCtx.syncRes.Resources[i].Kind == "service" {
			assert.Equal(t, v1alpha1.ResourceDetailsSyncedAndPruned, syncCtx.syncRes.Resources[i].Status)
		} else {
			t.Error("Resource isn't a pod or a service")
		}
	}
	syncCtx.sync()
	assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
}

func TestSyncCreateFailure(t *testing.T) {
	syncCtx := newTestSyncCtx()
	syncCtx.kubectl = mockKubectlCmd{
		commands: map[string]kubectlOutput{
			"test-service": {
				output: "",
				err:    fmt.Errorf("error: error validating \"test.yaml\": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false"),
			},
		},
	}
	syncCtx.comparison = &v1alpha1.ComparisonResult{
		Resources: []v1alpha1.ResourceState{{
			LiveState:   "",
			TargetState: "{\"kind\":\"service\", \"metadata\":{\"name\":\"test-service\"}}",
		}},
	}
	syncCtx.sync()
	assert.Len(t, syncCtx.syncRes.Resources, 1)
	assert.Equal(t, v1alpha1.ResourceDetailsSyncFailed, syncCtx.syncRes.Resources[0].Status)
}

func TestSyncPruneFailure(t *testing.T) {
	syncCtx := newTestSyncCtx()
	syncCtx.kubectl = mockKubectlCmd{
		commands: map[string]kubectlOutput{
			"test-service": {
				output: "",
				err:    fmt.Errorf(" error: timed out waiting for \"test-service\" to be synced"),
			},
		},
	}
	syncCtx.comparison = &v1alpha1.ComparisonResult{
		Resources: []v1alpha1.ResourceState{{
			LiveState:   "{\"kind\":\"service\", \"metadata\":{\"name\":\"test-service\"}}",
			TargetState: "",
		}},
	}
	syncCtx.sync()
	assert.Len(t, syncCtx.syncRes.Resources, 1)
	assert.Equal(t, v1alpha1.ResourceDetailsSyncFailed, syncCtx.syncRes.Resources[0].Status)
}

func TestRunWorkflows(t *testing.T) {
	// syncCtx := newTestSyncCtx()
	// syncCtx.doWorkflowSync(nil, nil)
}

func unsortedManifest() []syncTask {
	return []syncTask{
		{
			targetObj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"GroupVersion": apiv1.SchemeGroupVersion.String(),
					"kind":         "Pod",
				},
			},
		},
		{
			targetObj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"GroupVersion": apiv1.SchemeGroupVersion.String(),
					"kind":         "Service",
				},
			},
		},
		{
			targetObj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"GroupVersion": apiv1.SchemeGroupVersion.String(),
					"kind":         "PersistentVolume",
				},
			},
		},
		{
			targetObj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"GroupVersion": apiv1.SchemeGroupVersion.String(),
				},
			},
		},
		{
			targetObj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"GroupVersion": apiv1.SchemeGroupVersion.String(),
					"kind":         "ConfigMap",
				},
			},
		},
	}
}

func sortedManifest() []syncTask {
	return []syncTask{
		{
			targetObj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"GroupVersion": apiv1.SchemeGroupVersion.String(),
					"kind":         "ConfigMap",
				},
			},
		},
		{
			targetObj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"GroupVersion": apiv1.SchemeGroupVersion.String(),
					"kind":         "PersistentVolume",
				},
			},
		},
		{
			targetObj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"GroupVersion": apiv1.SchemeGroupVersion.String(),
					"kind":         "Service",
				},
			},
		},
		{
			targetObj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"GroupVersion": apiv1.SchemeGroupVersion.String(),
					"kind":         "Pod",
				},
			},
		},
		{
			targetObj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"GroupVersion": apiv1.SchemeGroupVersion.String(),
				},
			},
		},
	}
}

func TestSortKubernetesResourcesSuccessfully(t *testing.T) {
	unsorted := unsortedManifest()
	ks := newKindSorter(unsorted, resourceOrder)
	sort.Sort(ks)

	expectedOrder := sortedManifest()
	assert.Equal(t, len(expectedOrder), len(unsorted))
	for i, sorted := range unsorted {
		assert.Equal(t, expectedOrder[i], sorted)
	}
}

func TestSortManifestHandleNil(t *testing.T) {
	task := syncTask{
		targetObj: &unstructured.Unstructured{
			Object: map[string]interface{}{
				"GroupVersion": apiv1.SchemeGroupVersion.String(),
				"kind":         "Service",
			},
		},
	}
	manifest := []syncTask{
		{},
		task,
	}
	ks := newKindSorter(manifest, resourceOrder)
	sort.Sort(ks)
	assert.Equal(t, task, manifest[0])
	assert.Nil(t, manifest[1].targetObj)
}
@@ -9,8 +9,14 @@
## Features
* [Application Sources](application_sources.md)
* [Application Parameters](parameters.md)
* [Projects](projects.md)
* [Automated Sync](auto_sync.md)
* [Resource Health](health.md)
* [Resource Hooks](resource_hooks.md)
* [Single Sign On](sso.md)
* [Webhooks](webhook.md)
* [RBAC](rbac.md)

## Other
* [Configuring Ingress](ingress.md)
* [F.A.Q.](faq.md)
@@ -3,8 +3,9 @@
ArgoCD supports several different ways in which Kubernetes manifests can be defined:

* [ksonnet](https://ksonnet.io) applications
* [kustomize](https://kustomize.io) applications
* [helm](https://helm.sh) charts
* Directory of YAML/JSON/jsonnet manifests

Some additional considerations should be made when deploying apps of a particular type:
51
docs/auto_sync.md
Normal file
@@ -0,0 +1,51 @@
# Automated Sync Policy

ArgoCD has the ability to automatically sync an application when it detects differences between
the desired manifests in git, and the live state in the cluster. A benefit of automatic sync is that
CI/CD pipelines no longer need direct access to the ArgoCD API server to perform the deployment.
Instead, the pipeline simply commits and pushes the manifest changes to the tracking git repo.

To configure automated sync run:
```bash
argocd app set <APPNAME> --sync-policy automated
```

Alternatively, if creating the application from an application manifest, specify a syncPolicy with
an `automated` policy.
```yaml
spec:
  syncPolicy:
    automated: {}
```

## Automatic Pruning

By default (and as a safety mechanism), automated sync will not delete resources when ArgoCD detects
the resource is no longer defined in git. To prune the resources, a manual sync can always be
performed (with pruning checked). Pruning can also be enabled to happen automatically as part of the
automated sync by running:

```bash
argocd app set <APPNAME> --auto-prune
```

Or by setting the prune option to true in the automated sync policy:

```yaml
spec:
  syncPolicy:
    automated:
      prune: true
```

## Automated Sync Semantics

* An automated sync will only be performed if the application is OutOfSync. Applications in a
  Synced or error state will not attempt automated sync.
* Automated sync will only attempt one synchronization per unique combination of commit SHA and
  application parameters. If the most recent successful sync in the history was already performed
  against the same commit SHA and parameters, a second sync will not be attempted.
* Automatic sync will not reattempt a sync if the previous sync attempt against the same commit SHA
  and parameters failed.
* Rollback cannot be performed against an application with automated sync enabled.
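Putting the pieces above together, a complete Application resource with automated sync and pruning might look like the following sketch. The source and destination values are illustrative, borrowed from the guestbook example in the getting started guide; treat fields other than `syncPolicy` as assumptions about the surrounding spec:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook-default
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    path: ksonnet-guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
```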
16
docs/faq.md
Normal file
@@ -0,0 +1,16 @@
# FAQ

## Why is my application still `OutOfSync` immediately after a successful Sync?

It is possible for an application to still be `OutOfSync` even immediately after a successful Sync
operation. Some reasons for this might be:
* There may be problems in the manifests themselves, which may contain extra/unknown fields beyond
  the actual K8s spec. These extra fields would get dropped when querying Kubernetes for the live
  state, resulting in an `OutOfSync` status indicating a missing field was detected.
* The sync was performed (with pruning disabled), and there are resources which need to be deleted.
* A mutating webhook altered the manifest after it was submitted to Kubernetes.

To debug `OutOfSync` issues, run the `app diff` command to see the differences between git and live:
```bash
argocd app diff APPNAME
```
@@ -7,15 +7,15 @@ An example guestbook application is provided to demonstrate how ArgoCD works.
* Have a [kubeconfig](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) file (default location is `~/.kube/config`).

## 1. Install ArgoCD
```bash
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v0.9.2/manifests/install.yaml
```
This will create a new namespace, `argocd`, where ArgoCD services and application resources will live.

NOTE:
* On GKE with RBAC enabled, you may need to grant your account the ability to create new cluster roles
```bash
kubectl create clusterrolebinding YOURNAME-cluster-admin-binding --clusterrole=cluster-admin --user=YOUREMAIL@gmail.com
```
@@ -24,86 +24,86 @@ kubectl create clusterrolebinding YOURNAME-cluster-admin-binding --clusterrole=c
Download the latest ArgoCD version:

On Mac:
```bash
brew install argoproj/tap/argocd
```

On Linux:

```bash
curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v0.9.2/argocd-linux-amd64
chmod +x /usr/local/bin/argocd
```

## 3. Access the ArgoCD API server

By default, the ArgoCD API server is not exposed with an external IP. To access the API server,
choose one of the following means to expose the ArgoCD API server:

### Service Type LoadBalancer
Change the argocd-server service type to `LoadBalancer`:

```bash
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
```

### Ingress
Follow the [ingress documentation](ingress.md) on how to configure ArgoCD with ingress.

### Port Forwarding
`kubectl port-forward` can also be used to connect to the API server without exposing the service.
The API server can then be accessed using the localhost address and port.
## 4. Login using the CLI

Login as the `admin` user. The initial password is autogenerated to be the pod name of the
ArgoCD API server. This can be retrieved with the command:
```bash
kubectl get pods -n argocd -l app=argocd-server -o name | cut -d'/' -f 2
```

Using the above password, login to ArgoCD's external IP:

On Minikube:
```bash
argocd login $(minikube service argocd-server -n argocd --url | cut -d'/' -f 3) --name minikube
```
Other clusters:
```bash
kubectl get svc -n argocd argocd-server
argocd login <EXTERNAL-IP>
```

After logging in, change the password using the command:
```bash
argocd account update-password
argocd relogin
```
## 5. Register a cluster to deploy apps to (optional)

This step registers a cluster's credentials to ArgoCD, and is only necessary when deploying to
an external cluster. When deploying internally (to the same cluster that ArgoCD is running in),
https://kubernetes.default.svc should be used as the application's K8s API server address.

First list all cluster contexts in your current kubeconfig:
```bash
argocd cluster add
```

Choose a context name from the list and supply it to `argocd cluster add CONTEXTNAME`. For example,
for the docker-for-desktop context, run:
```bash
argocd cluster add docker-for-desktop
```

The above command installs an `argocd-manager` ServiceAccount and ClusterRole into the cluster
associated with the supplied kubectl context. ArgoCD uses this service account token to perform its
management tasks (i.e. deploy/monitoring).
## 6. Create an application from a git repository location

### Creating apps via UI

Open a browser to the ArgoCD external UI, and login using the credentials and external IP/hostname
set in step 4.

Connect a git repository containing your apps. An example repository containing a sample
guestbook application is available at https://github.com/argoproj/argocd-example-apps.git.
@@ -122,8 +122,8 @@ After connecting a git repository, select the guestbook application for creation

Applications can also be created using the ArgoCD CLI:

```bash
argocd app create guestbook-default --repo https://github.com/argoproj/argocd-example-apps.git --path ksonnet-guestbook
```
## 7. Sync (deploy) the application
@@ -131,10 +131,10 @@ argocd app create guestbook-default --repo https://github.com/argoproj/argocd-ex

Once the guestbook application is created, you can now view its status:

From UI:
![guestbook app](assets/guestbook-app.png)

From CLI:
```bash
$ argocd app get guestbook-default
Name: guestbook-default
Server: https://kubernetes.default.svc
@@ -153,7 +153,7 @@ Deployment guestbook-ui OutOfSync

The application status is initially in an `OutOfSync` state, since the application has yet to be
deployed, and no Kubernetes resources have been created. To sync (deploy) the application, run:

```bash
$ argocd app sync guestbook-default
Application: guestbook-default
Operation: Sync
@@ -166,12 +166,12 @@ Deployment guestbook-ui deployment.apps "guestbook-ui" created
```

This command retrieves the manifests from the git repository and performs a `kubectl apply` of the
manifests. The guestbook app is now running and you can now view its resource components, logs,
events, and assessed health status:

![view app](assets/guestbook-tree.png)

## 8. Next Steps

ArgoCD supports additional features such as automated sync, SSO, WebHooks, RBAC, Projects. See the
rest of the [documentation](./) for details.
||||
@@ -6,7 +6,6 @@ surfaced to the overall Application health status as a whole. The following chec
|
||||
specific types of kuberentes resources:
|
||||
|
||||
### Deployment, ReplicaSet, StatefulSet DaemonSet
|
||||
|
||||
* Observed generation is equal to desired generation.
|
||||
* Number of **updated** replicas equals the number of desired replicas.
|
||||
|
||||
@@ -16,3 +15,6 @@ with at least one value for `hostname` or `IP`.
|
||||
|
||||
### Ingress
|
||||
* The `status.loadBalancer.ingress` list is non-empty, with at least one value for `hostname` or `IP`.
|
||||
|
||||
### PersistentVolumeClaim
|
||||
* The `status.phase` is `Bound`
|
||||
|
||||
126
docs/ingress.md
Normal file
@@ -0,0 +1,126 @@
# Ingress Configuration

ArgoCD runs both a gRPC server (used by the CLI), as well as a HTTP/HTTPS server (used by the UI).
Both protocols are exposed by the argocd-server service object on the following ports:
* 443 - gRPC/HTTPS
* 80 - HTTP (redirects to HTTPS)

There are several ways in which Ingress can be configured.

## [kubernetes/ingress-nginx](https://github.com/kubernetes/ingress-nginx)

### Option 1: ssl-passthrough

Because ArgoCD serves multiple protocols (gRPC/HTTPS) on the same port (443), this presents a
challenge when attempting to define a single nginx ingress object and rule for the argocd-service,
since the `nginx.ingress.kubernetes.io/backend-protocol` [annotation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol)
accepts only a single value for the backend protocol (e.g. HTTP, HTTPS, GRPC, GRPCS).

In order to expose the ArgoCD API server with a single ingress rule and hostname, the
`nginx.ingress.kubernetes.io/ssl-passthrough` [annotation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough)
must be used to pass through TLS connections and terminate TLS at the ArgoCD API server.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: argocd-server-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: argocd.example.com
    http:
      paths:
      - backend:
          serviceName: argocd-server
          servicePort: https
```

The above rule terminates TLS at the ArgoCD API server, which detects the protocol being used,
and responds appropriately. Note that the `nginx.ingress.kubernetes.io/ssl-passthrough` annotation
requires that the `--enable-ssl-passthrough` flag be added to the command line arguments to
`nginx-ingress-controller`.

### Option 2: Multiple ingress objects and hosts

Since an ingress-nginx Ingress supports only a single protocol per Ingress object, an alternative
way would be to define two Ingress objects. One for HTTP/HTTPS, and the other for gRPC:

HTTP/HTTPS Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: argocd-server-http-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: argocd-server
          servicePort: http
    host: argocd.example.com
  tls:
  - hosts:
    - argocd.example.com
    secretName: argocd-secret
```

gRPC Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: argocd-server-grpc-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: argocd-server
          servicePort: https
    host: grpc.argocd.example.com
  tls:
  - hosts:
    - grpc.argocd.example.com
    secretName: argocd-secret
```

The API server should then be run with TLS disabled. Edit the `argocd-server` deployment to add the
`--insecure` flag to the argocd-server command:

```yaml
spec:
  template:
    spec:
      name: argocd-server
      containers:
      - command:
        - /argocd-server
        - --staticassets
        - /shared/app
        - --repo-server
        - argocd-repo-server:8081
        - --insecure
```

The obvious disadvantage to this approach is that it requires two separate hostnames for the API
server -- one for gRPC and the other for HTTP/HTTPS. However, it allows TLS termination to happen
at the ingress controller.


## AWS Application Load Balancers (ALBs) and Classic ELB (HTTP mode)

Neither ALBs nor Classic ELBs in HTTP mode have full support for HTTP/2 and gRPC, which is the
protocol used by the `argocd` CLI. Thus, when using an AWS load balancer, either a Classic ELB in
passthrough (TCP) mode or an NLB is needed.
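If an NLB is an option, one way to request it is by annotating the argocd-server service. This is a sketch assuming the in-tree AWS cloud provider's `service.beta.kubernetes.io/aws-load-balancer-type` annotation; the fragment shows only the fields to merge into the existing argocd-server service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    # request a network load balancer (TCP passthrough) instead of the default Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
```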
47
docs/internal/releasing.md
Normal file
@@ -0,0 +1,47 @@
# ArgoCD Release Instructions

1. Tag, build, and push argo-cd-ui
```bash
cd argo-cd-ui
git checkout -b release-X.Y
git tag vX.Y.Z
git push upstream release-X.Y --tags
IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z DOCKER_PUSH=true yarn docker
```

2. Create release-X.Y branch (if creating initial X.Y release)
```bash
git checkout -b release-X.Y
git push upstream release-X.Y
```

3. Update VERSION and manifests with new version
```bash
vi VERSION # ensure value is desired X.Y.Z semantic version
vi manifests/base/kustomization.yaml # update with new image tags
make manifests
git commit -a -m "Update manifests to vX.Y.Z"
git push upstream release-X.Y
```

4. Tag, build, and push release to docker hub
```bash
git tag vX.Y.Z
make release IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z DOCKER_PUSH=true
git push upstream vX.Y.Z
```

5. Update argocd brew formula
```bash
git clone https://github.com/argoproj/homebrew-tap
cd homebrew-tap
shasum -a 256 ~/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64
# edit argocd.rb with version and checksum
git commit -a -m "Update argocd to vX.Y.Z"
git push
```

6. Update documentation:
* Edit CHANGELOG.md with release notes
* Update getting_started.md with new version
* Create GitHub release from new tag and upload binaries (e.g. dist/argocd-darwin-amd64)
181
docs/projects.md
Normal file
@@ -0,0 +1,181 @@
## Projects

Projects provide a logical grouping of applications, which is useful when ArgoCD is used by multiple
teams. Projects provide the following features:

* ability to restrict *what* may be deployed (the git source repositories)
* ability to restrict *where* apps may be deployed (the destination clusters and namespaces)
* ability to control what types of objects may be deployed (e.g. RBAC, CRDs, DaemonSets, NetworkPolicy etc...)

### The default project

Every application belongs to a single project. If unspecified, an application belongs to the
`default` project, which is created automatically and by default, permits deployments from any
source repo, to any cluster, and all resource Kinds. The default project can be modified, but not
deleted. When initially created, its specification is configured to be the most permissive:

```yaml
spec:
  sourceRepos:
  - '*'
  destinations:
  - namespace: '*'
    server: '*'
  clusterResourceWhitelist:
  - group: '*'
    kind: '*'
```

### Creating Projects

Additional projects can be created to give separate teams different levels of access to namespaces.
The following command creates a new project `myproject` which can deploy applications to namespace
`mynamespace` of cluster `https://kubernetes.default.svc`. The permitted git source repository is
set to the `https://github.com/argoproj/argocd-example-apps.git` repository.

```bash
argocd proj create myproject -d https://kubernetes.default.svc,mynamespace -s https://github.com/argoproj/argocd-example-apps.git
```

### Managing Projects

Permitted source git repositories are managed using the commands:

```bash
argocd project add-source <PROJECT> <REPO>
argocd project remove-source <PROJECT> <REPO>
```

Permitted destination clusters and namespaces are managed with the commands:
```bash
argocd project add-destination <PROJECT> <CLUSTER>,<NAMESPACE>
argocd project remove-destination <PROJECT> <CLUSTER>,<NAMESPACE>
```

Permitted destination K8s resource kinds are managed with the following commands. Note that
namespace-scoped resources are restricted via a blacklist, whereas cluster-scoped resources are
restricted via a whitelist.
```bash
argocd project allow-cluster-resource <PROJECT> <GROUP> <KIND>
argocd project allow-namespace-resource <PROJECT> <GROUP> <KIND>
argocd project deny-cluster-resource <PROJECT> <GROUP> <KIND>
argocd project deny-namespace-resource <PROJECT> <GROUP> <KIND>
```
### Assign application to a project

The application project can be changed using the `app set` command. In order to change the project
of an app, the user must have permissions to access the new project.

```bash
argocd app set guestbook-default --project myproject
```

### Configuring RBAC with projects

Once projects have been defined, RBAC rules can be written to restrict access to the applications
in the project. The following example configures RBAC for two GitHub teams: `team1` and `team2`,
both in the GitHub org, `some-github-org`. There are two projects, `project-a` and `project-b`.
`team1` can only manage applications in `project-a`, while `team2` can only manage applications in
`project-b`. Both `team1` and `team2` have the ability to manage repositories.

*ConfigMap `argocd-rbac-cm` example:*

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.default: ""
  policy.csv: |
    p, some-github-org:team1, applications, *, project-a/*, allow
    p, some-github-org:team2, applications, *, project-b/*, allow

    p, role:org-admin, repositories, get, *, allow
    p, role:org-admin, repositories, create, *, allow
    p, role:org-admin, repositories, update, *, allow
    p, role:org-admin, repositories, delete, *, allow

    g, some-github-org:team1, org-admin
    g, some-github-org:team2, org-admin
```
## Project Roles
|
||||
|
||||
Projects include a feature called roles that enable automated access to a project's applications.
|
||||
These can be used to give a CI pipeline a restricted set of permissions. For example, a CI system
|
||||
may only be able to sync a single app (but not change its source or destination).
|
||||
|
||||
Projects can have multiple roles, and those roles can have different access granted to them. These
|
||||
permissions are called policies, and they are stored within the role as a list of policy strings.
|
||||
A role's policy can only grant access to that role and are limited to applications within the role's
|
||||
project. However, the policies have an option for granting wildcard access to any application
|
||||
within a project.
|
||||
|
||||
In order to create roles in a project and add policies to a role, a user will need permission to
|
||||
update a project. The following commands can be used to manage a role.
|
||||
|
||||
```bash
|
||||
argoproj proj role list
|
||||
argoproj proj role get
|
||||
argoproj proj role create
|
||||
argoproj proj role delete
|
||||
argoproj proj role add-policy
|
||||
argoproj proj role remove-policy
|
||||
```
|
||||
|
||||
Project roles in itself are not useful without generating a token to associate to that role. ArgoCD
|
||||
supports JWT tokens as the means to authenticate to a role. Since the JWT token is
|
||||
associated with a role's policies, any changes to the role's policies will immediately take effect
|
||||
for that JWT token.
|
||||
|
||||
The following commands are used to manage the JWT tokens.
|
||||
|
||||
```bash
|
||||
argoproj proj role create-token PROJECT ROLE-NAME
|
||||
argoproj proj role delete-token PROJECT ROLE-NAME ISSUED-AT
|
||||
```
|
||||
|
||||
Since the JWT tokens aren't stored in ArgoCD, they can only be retrieved when they are created. A
user can leverage them in the CLI by either passing them in using the `--auth-token` flag or
setting the ARGOCD_AUTH_TOKEN environment variable. The JWT tokens can be used until they expire
or are revoked. They can be created with or without an expiration, but the default on the CLI is
to create them without an expiration date. Even if a token has not expired, it cannot be used if
it has been revoked.
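Because `delete-token` identifies a token by its issued-at (`iat`) value, it can be handy to inspect a token's claims locally. Below is a minimal sketch using only the Python standard library; the sample claim values and the `proj:<project>:<role>` subject format are illustrative assumptions, and no signature verification is performed:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload of a JWT.

    A JWT is three base64url segments separated by dots; the middle
    segment is the JSON claim set. No signature check is done here --
    this is only for inspecting claims such as `iat` (issued-at).
    """
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Hand-built token (header.payload.signature); claim values are made up.
header = base64.urlsafe_b64encode(b'{"alg":"HS256"}').rstrip(b"=")
payload = base64.urlsafe_b64encode(
    b'{"iat":1534892255,"iss":"argocd","sub":"proj:myproject:get-role"}'
).rstrip(b"=")
token = b".".join([header, payload, b"sig"]).decode()

claims = jwt_claims(token)
print(claims["iat"])  # the value to pass to delete-token
```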
Below is an example of leveraging a JWT token to access a guestbook application. It assumes that
the user already has a project named `myproject` and an application called `guestbook-default`.
```bash
PROJ=myproject
APP=guestbook-default
ROLE=get-role
argocd proj role create $PROJ $ROLE
argocd proj role create-token $PROJ $ROLE -e 10m
JWT=<value from command above>
argocd proj role list $PROJ
argocd proj role get $PROJ $ROLE

# This command will fail because the JWT token associated with the project role does not have a policy to allow access to the application
argocd app get $APP --auth-token $JWT
# Adding a policy to grant access to the application for the new role
argocd proj role add-policy $PROJ $ROLE --action get --permission allow --object $APP
argocd app get $APP --auth-token $JWT

# Removing the policy we added and adding one with a wildcard
argocd proj role remove-policy $PROJ $ROLE -a get -o $APP
argocd proj role add-policy $PROJ $ROLE -a get --permission allow -o '*'
# The wildcard policy now allows us to access the application
argocd app get $APP --auth-token $JWT

argocd proj role get $PROJ $ROLE
# Revoking the JWT token
argocd proj role delete-token $PROJ $ROLE <id field from the last command>
# This will fail since the JWT token was deleted for the project role
argocd app get $APP --auth-token $JWT
```
136
docs/rbac.md
@@ -14,149 +14,27 @@ RBAC configuration allows defining roles and groups. ArgoCD has two pre-defined
* `role:admin` - unrestricted access to all resources

These role definitions can be seen in [builtin-policy.csv](../util/rbac/builtin-policy.csv)
Additional roles and groups can be configured in the `argocd-rbac-cm` ConfigMap. The example below
configures a custom role, named `org-admin`. The role is assigned to any user which belongs to the
`your-github-org:your-team` group. All other users get the default policy of `role:readonly`,
which cannot modify ArgoCD settings.

*ConfigMap `argocd-rbac-cm` example:*
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.default: role:readonly
  policy.csv: |
    p, role:org-admin, applications, *, */*, allow
    p, role:org-admin, applications/*, *, */*, allow

    p, role:org-admin, clusters, get, *, allow
    p, role:org-admin, repositories, get, *, allow
    p, role:org-admin, repositories/apps, get, *, allow

    p, role:org-admin, repositories, create, *, allow
    p, role:org-admin, repositories, update, *, allow
    p, role:org-admin, repositories, delete, *, allow

    g, your-github-org:your-team, role:org-admin
```
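ArgoCD evaluates the `p` rows with casbin. As a rough illustration only (not ArgoCD's actual engine -- the matcher below substitutes Python glob matching, and the row layout is taken from the `policy.csv` format), a request is allowed when some row matches the subject, resource, action, and object:

```python
from fnmatch import fnmatch

# Each row mirrors a `p, subject, resource, action, object, effect` line.
POLICIES = [
    ("role:org-admin", "applications", "*", "*/*", "allow"),
    ("role:org-admin", "clusters", "get", "*", "allow"),
]

def is_allowed(subject, resource, action, obj):
    """Return True if any row grants the request (glob-style matching
    as a rough stand-in for casbin's matchers)."""
    for p_sub, p_res, p_act, p_obj, effect in POLICIES:
        if (p_sub == subject and fnmatch(resource, p_res)
                and fnmatch(action, p_act) and fnmatch(obj, p_obj)):
            return effect == "allow"
    return False

print(is_allowed("role:org-admin", "applications", "sync", "myproject/guestbook"))  # True
print(is_allowed("role:org-admin", "clusters", "delete", "in-cluster"))             # False
```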

## Configure Projects

Argo projects allow grouping applications, which is useful if ArgoCD is used by multiple teams. Additionally, projects restrict the source repositories and destination
Kubernetes clusters which can be used by applications belonging to the project.

### 1. Create new project

The following command creates project `myproject`, which can deploy applications to namespace `default` of cluster `https://kubernetes.default.svc`. The only valid application source is defined in the `https://github.com/argoproj/argocd-example-apps.git` repository.
```bash
argocd proj create myproject -d https://kubernetes.default.svc,default -s https://github.com/argoproj/argocd-example-apps.git
```

Project sources and destinations can be managed using the following commands:

```bash
argocd proj add-destination
argocd proj remove-destination
argocd proj add-source
argocd proj remove-source
```
### 2. Assign application to a project

Each application belongs to a project. By default, all applications belong to the `default` project, which provides access to any source repo/cluster. The application's project can be
changed using the `app set` command:
```bash
argocd app set guestbook-default --project myproject
```
### 3. Update RBAC rules

The following example configures admin access for two teams. Each team has access only to applications of one project (`team1` can access the `default` project and `team2` can access the
`myproject` project).

*ConfigMap `argocd-rbac-cm` example:*
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.default: ""
  policy.csv: |
    p, role:team1-admin, applications, *, default/*, allow
    p, role:team1-admin, applications/*, *, default/*, allow

    p, role:team2-admin, applications, *, myproject/*, allow
    p, role:team2-admin, applications/*, *, myproject/*, allow

    p, role:org-admin, clusters, get, *, allow
    p, role:org-admin, repositories, get, *, allow
    p, role:org-admin, repositories/apps, get, *, allow

    p, role:org-admin, repositories, create, *, allow
    p, role:org-admin, repositories, update, *, allow
    p, role:org-admin, repositories, delete, *, allow

    g, role:team1-admin, role:org-admin
    g, role:team2-admin, role:org-admin
    g, your-github-org:your-team1, role:team1-admin
    g, your-github-org:your-team2, role:team2-admin
```
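The `g` rows chain subjects into roles: a GitHub team maps to a team role, and the team roles inherit `org-admin`'s grants. A minimal sketch of how such group rows resolve transitively (role names follow the example above; the exact `role:` prefixing is an assumption of this sketch):

```python
# `g, child, parent` rows, mirroring the ConfigMap example.
GROUPS = [
    ("role:team1-admin", "role:org-admin"),
    ("role:team2-admin", "role:org-admin"),
    ("your-github-org:your-team1", "role:team1-admin"),
    ("your-github-org:your-team2", "role:team2-admin"),
]

def roles_for(subject):
    """Transitively collect every role a subject inherits via g-rows."""
    roles, frontier = set(), {subject}
    while frontier:
        frontier = {parent for child, parent in GROUPS
                    if child in frontier} - roles
        roles |= frontier
    return roles

print(sorted(roles_for("your-github-org:your-team1")))
# ['role:org-admin', 'role:team1-admin']
```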
@@ -4,13 +4,14 @@ An ArgoCD application spec provides several different ways of track kubernetes r
git. This document describes the different techniques and the means of deploying those manifests to
the target environment.

## HEAD / Branch Tracking

If a branch name, or a symbolic reference (like HEAD) is specified, ArgoCD will continually compare
live state against the resource manifests defined at the tip of the specified branch or the
dereferenced commit of the symbolic reference.

To redeploy an application, a user makes changes to the manifests, and commits/pushes those
changes to the tracked branch/symbolic reference, which will then be detected by the ArgoCD controller.
## Tag Tracking

@@ -33,9 +34,9 @@ which is pinned to a commit, is by updating the tracking revision in the applica
commit containing the new manifests. Note that [parameter overrides](parameters.md) can still be set
on an application which is pinned to a revision.

## Automated Sync

In all tracking strategies, the application has the option to sync automatically. If [auto-sync](auto_sync.md)
is configured, the new resource manifests will be applied automatically -- as soon as a difference
is detected between the target state (git) and live state. If auto-sync is disabled, a manual sync
will be needed using the Argo UI, CLI, or API.
@@ -64,8 +64,8 @@ for i in ${PROTO_FILES}; do
    # Path to the google API gateway annotations.proto will be different depending if we are
    # building natively (e.g. from workspace) vs. part of a docker build.
    if [ -f /.dockerenv ]; then
        GOOGLE_PROTO_API_PATH=$GOPATH/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis
        GOGO_PROTOBUF_PATH=$GOPATH/src/github.com/gogo/protobuf
    else
        GOOGLE_PROTO_API_PATH=${PROJECT_ROOT}/vendor/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis
        GOGO_PROTOBUF_PATH=${PROJECT_ROOT}/vendor/github.com/gogo/protobuf
@@ -1,11 +1,22 @@
#!/bin/sh

IMAGE_NAMESPACE=${IMAGE_NAMESPACE:='argoproj'}
IMAGE_TAG=${IMAGE_TAG:='latest'}
SRCROOT="$( cd "$(dirname "$0")/.." ; pwd -P )"
AUTOGENMSG="# This is an auto-generated file. DO NOT EDIT"

update_image () {
    if [ ! -z "${IMAGE_NAMESPACE}" ]; then
        sed -i '' 's| image: \(.*\)/\(argocd-.*\)| image: '"${IMAGE_NAMESPACE}"'/\2|g' ${1}
    fi
    if [ ! -z "${IMAGE_TAG}" ]; then
        sed -i '' 's|\( image: .*/argocd-.*\)\:.*|\1:'"${IMAGE_TAG}"'|g' ${1}
    fi
}

echo "${AUTOGENMSG}" > ${SRCROOT}/manifests/install.yaml
kustomize build ${SRCROOT}/manifests/cluster-install >> ${SRCROOT}/manifests/install.yaml
update_image ${SRCROOT}/manifests/install.yaml

echo "${AUTOGENMSG}" > ${SRCROOT}/manifests/namespace-install.yaml
kustomize build ${SRCROOT}/manifests/base >> ${SRCROOT}/manifests/namespace-install.yaml
update_image ${SRCROOT}/manifests/namespace-install.yaml
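The two `sed` expressions in `update_image` first rewrite the image repository and then pin the tag on every ` image: .../argocd-*` line of the generated manifest. The same rewrite logic can be sketched in Python (a hypothetical re-implementation for illustration, not part of the repo):

```python
import re

def update_image(manifest: str, namespace: str = "", tag: str = "") -> str:
    """Mirror the script's two sed passes: swap the image namespace,
    then pin the tag, on every ` image: .../argocd-*` line."""
    if namespace:
        manifest = re.sub(r"( image: )(.*)/(argocd-.*)",
                          r"\g<1>%s/\g<3>" % namespace, manifest)
    if tag:
        manifest = re.sub(r"( image: .*/argocd-.*):.*",
                          r"\g<1>:%s" % tag, manifest)
    return manifest

line = "        image: argoproj/argocd-server:v0.10.6"
print(update_image(line, namespace="myrepo", tag="latest"))
```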
16
manifests/README.md
Normal file
@@ -0,0 +1,16 @@
# ArgoCD Installation Manifests

Two sets of installation manifests are provided:

* [install.yaml](install.yaml) - Standard ArgoCD installation with cluster-admin access. Use this
  manifest set if you plan to use ArgoCD to deploy applications in the same cluster that ArgoCD runs
  in (i.e. kubernetes.default.svc). You will still be able to deploy to external clusters with inputted
  credentials.

* [namespace-install.yaml](namespace-install.yaml) - Installation of ArgoCD which requires only
  namespace level privileges (does not need cluster roles). Use this manifest set if you do not
  need ArgoCD to deploy applications in the same cluster that ArgoCD runs in, and will rely solely
  on inputted cluster credentials. An example of using this set of manifests is if you run several
  ArgoCD instances for different teams, where each instance will be deploying applications to
  external clusters. It will still be possible to deploy to the same cluster (kubernetes.default.svc)
  with inputted credentials (i.e. `argocd cluster add <CONTEXT> --in-cluster`).
@@ -1,4 +1,3 @@
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -13,7 +12,14 @@ spec:
        app: application-controller
    spec:
      containers:
      - command:
        - /argocd-application-controller
        - --repo-server
        - argocd-repo-server:8081
        - --status-processors
        - "20"
        - --operation-processors
        - "10"
        image: argoproj/argocd-application-controller:latest
        name: application-controller
      serviceAccountName: application-controller
@@ -1,4 +1,3 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
@@ -1,4 +1,3 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
@@ -1,4 +1,3 @@
apiVersion: v1
kind: ServiceAccount
metadata:
@@ -1,4 +1,3 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
@@ -1,4 +1,3 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
23
manifests/base/argocd-cm.yaml
Normal file
@@ -0,0 +1,23 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
# data:
#   # ArgoCD's externally facing base URL. Required for configuring SSO
#   # url: https://argo-cd-demo.argoproj.io
#
#   # A dex connector configuration. See documentation on how to configure SSO:
#   # https://github.com/argoproj/argo-cd/blob/master/docs/sso.md#2-configure-argocd-for-sso
#   dex.config: |
#     connectors:
#       # GitHub example
#       - type: github
#         id: github
#         name: GitHub
#         config:
#           clientID: aabbccddeeff00112233
#           clientSecret: $dex.github.clientSecret
#           orgs:
#           - name: your-github-org
#             teams:
#             - red-team
12
manifests/base/argocd-metrics-service.yaml
Normal file
@@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
  name: argocd-metrics
spec:
  ports:
  - name: http
    protocol: TCP
    port: 8082
    targetPort: 8082
  selector:
    app: argocd-server
15
manifests/base/argocd-rbac-cm.yaml
Normal file
@@ -0,0 +1,15 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
# data:
#   # An RBAC policy .csv file containing additional policy and role definitions.
#   # See https://github.com/argoproj/argo-cd/blob/master/docs/rbac.md on how to write RBAC policies.
#   policy.csv: |
#     # Give all members of "my-org:team-alpha" the ability to sync apps in "my-project"
#     p, my-org:team-alpha, applications, sync, my-project/*, allow
#     # Make all members of "my-org:team-beta" admins
#     g, my-org:team-beta, role:admin
#
#   # The default role ArgoCD will fall back to, when authorizing API requests
#   policy.default: role:readonly
@@ -1,4 +1,3 @@
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -12,9 +11,15 @@ spec:
      labels:
        app: argocd-repo-server
    spec:
      automountServiceAccountToken: false
      containers:
      - name: argocd-repo-server
        image: argoproj/argocd-repo-server:latest
        command: [/argocd-repo-server]
        ports:
        - containerPort: 8081
        readinessProbe:
          tcpSocket:
            port: 8081
          initialDelaySeconds: 5
          periodSeconds: 10
@@ -1,4 +1,3 @@
apiVersion: v1
kind: Service
metadata:
26
manifests/base/argocd-secret.yaml
Normal file
@@ -0,0 +1,26 @@
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
type: Opaque
# data:
#   # TLS certificate and private key for API server.
#   # Autogenerated with a self-signed certificate if keys are missing.
#   tls.crt:
#   tls.key:
#
#   # bcrypt hash of the admin password and its last modified time. Autogenerated on initial
#   # startup. To reset a forgotten password, delete both keys and restart argocd-server.
#   admin.password:
#   admin.passwordMtime:
#
#   # random server signature key for session validation. Autogenerated on initial startup
#   server.secretkey:
#
#   # The following keys hold the shared secret for authenticating GitHub/GitLab/BitBucket webhook
#   # events. To enable webhooks, configure one or more of the following keys with the shared git
#   # provider webhook secret. The payload URL configured in the git provider should use the
#   # /api/webhook endpoint of your ArgoCD instance (e.g. https://argocd.example.com/api/webhook)
#   github.webhook.secret:
#   gitlab.webhook.secret:
#   bitbucket.webhook.uuid:
@@ -1,4 +1,3 @@
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -14,21 +13,15 @@ spec:
    spec:
      serviceAccountName: argocd-server
      initContainers:
      - name: ui
        image: argoproj/argocd-ui:latest
        command: [cp, -r, /app, /shared]
        volumeMounts:
        - mountPath: /shared
          name: static-files
      containers:
      - name: argocd-server
        image: argoproj/argocd-server:latest
        command: [/argocd-server, --staticassets, /shared/app, --repo-server, 'argocd-repo-server:8081']
        volumeMounts:
        - mountPath: /shared
@@ -39,12 +32,6 @@ spec:
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 30
      volumes:
      - emptyDir: {}
        name: static-files
@@ -1,4 +1,3 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
@@ -1,4 +1,3 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
@@ -1,4 +1,3 @@
apiVersion: v1
kind: ServiceAccount
metadata:
@@ -1,4 +1,3 @@
apiVersion: v1
kind: Service
metadata:
34
manifests/base/dex-server-deployment.yaml
Normal file
@@ -0,0 +1,34 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dex-server
spec:
  selector:
    matchLabels:
      app: dex-server
  template:
    metadata:
      labels:
        app: dex-server
    spec:
      serviceAccountName: dex-server
      initContainers:
      - name: copyutil
        image: argoproj/argocd-server:latest
        command: [cp, /argocd-util, /shared]
        volumeMounts:
        - mountPath: /shared
          name: static-files
      containers:
      - name: dex
        image: quay.io/dexidp/dex:v2.11.0
        command: [/shared/argocd-util, rundex]
        ports:
        - containerPort: 5556
        - containerPort: 5557
        volumeMounts:
        - mountPath: /shared
          name: static-files
      volumes:
      - emptyDir: {}
        name: static-files
14
manifests/base/dex-server-role.yaml
Normal file
@@ -0,0 +1,14 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dex-server-role
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  - configmaps
  verbs:
  - get
  - list
  - watch
11
manifests/base/dex-server-rolebinding.yaml
Normal file
@@ -0,0 +1,11 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dex-server-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dex-server-role
subjects:
- kind: ServiceAccount
  name: dex-server
4
manifests/base/dex-server-sa.yaml
Normal file
@@ -0,0 +1,4 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dex-server
16
manifests/base/dex-server-service.yaml
Normal file
@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: dex-server
spec:
  ports:
  - name: http
    protocol: TCP
    port: 5556
    targetPort: 5556
  - name: grpc
    protocol: TCP
    port: 5557
    targetPort: 5557
  selector:
    app: dex-server
33
manifests/base/kustomization.yaml
Normal file
@@ -0,0 +1,33 @@
resources:
- application-crd.yaml
- appproject-crd.yaml
- argocd-cm.yaml
- argocd-secret.yaml
- argocd-rbac-cm.yaml
- application-controller-sa.yaml
- application-controller-role.yaml
- application-controller-rolebinding.yaml
- application-controller-deployment.yaml
- argocd-server-sa.yaml
- argocd-server-role.yaml
- argocd-server-rolebinding.yaml
- argocd-server-deployment.yaml
- argocd-server-service.yaml
- argocd-metrics-service.yaml
- argocd-repo-server-deployment.yaml
- argocd-repo-server-service.yaml
- dex-server-sa.yaml
- dex-server-role.yaml
- dex-server-rolebinding.yaml
- dex-server-deployment.yaml
- dex-server-service.yaml

imageTags:
- name: argoproj/argocd-server
  newTag: v0.10.6
- name: argoproj/argocd-ui
  newTag: v0.10.6
- name: argoproj/argocd-repo-server
  newTag: v0.10.6
- name: argoproj/argocd-application-controller
  newTag: v0.10.6
@@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: application-controller-clusterrole
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: application-controller-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: application-controller-clusterrole
subjects:
- kind: ServiceAccount
  name: application-controller
  namespace: argocd
24
manifests/cluster-install/argocd-server-clusterrole.yaml
Normal file
@@ -0,0 +1,24 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argocd-server-clusterrole
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - delete
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - list
- apiGroups:
  - ""
  resources:
  - pods
  - pods/log
  verbs:
  - get
@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-server-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argocd-server-clusterrole
subjects:
- kind: ServiceAccount
  name: argocd-server
  namespace: argocd
8
manifests/cluster-install/kustomization.yaml
Normal file
@@ -0,0 +1,8 @@
bases:
- ../base

resources:
- application-controller-clusterrole.yaml
- application-controller-clusterrolebinding.yaml
- argocd-server-clusterrole.yaml
- argocd-server-clusterrolebinding.yaml
@@ -1,25 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
@@ -1,26 +0,0 @@
---
# NOTE: some values in this secret will be populated by the initial startup of the API server
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
type: Opaque
@@ -1,16 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
@@ -1,5 +1,4 @@
# This is an auto-generated file. DO NOT EDIT
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
@@ -30,76 +29,19 @@ spec:
  version: v1alpha1
---
apiVersion: v1
kind: ConfigMap
kind: ServiceAccount
metadata:
  name: argocd-cm
#data:
# ArgoCD's externally facing URL
# url: https://argo-cd-demo.argoproj.io

# A dex connector configuration.
# Visit https://github.com/argoproj/argo-cd/blob/master/docs/sso.md#2-configure-argocd-for-sso
# for instructions on configuring SSO.
# dex.config: |
#   connectors:
#     # GitHub example
#     - type: github
#       id: github
#       name: GitHub
#       config:
#         clientID: aabbccddeeff00112233
#         clientSecret: $dex.github.clientSecret
#         orgs:
#         - name: your-github-org
#           teams:
#           - red-team
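Uncommenting the dex example above yields an argocd-cm along these lines (the client ID, org, and team names are the placeholder values from the comments; `$dex.github.clientSecret` references a key that must exist in `argocd-secret`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  # ArgoCD's externally facing URL
  url: https://argo-cd-demo.argoproj.io
  dex.config: |
    connectors:
    # GitHub example
    - type: github
      id: github
      name: GitHub
      config:
        clientID: aabbccddeeff00112233
        clientSecret: $dex.github.clientSecret
        orgs:
        - name: your-github-org
          teams:
          - red-team
```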
---
# NOTE: some values in this secret will be populated by the initial startup of the API server
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
type: Opaque
#data:
# TLS certificate and private key for API server
# server.crt:
# server.key:

# The following keys hold the shared secret for authenticating GitHub/GitLab/BitBucket webhook
# events. To enable webhooks, configure one or more of the following keys with the shared git
# provider webhook secret. The payload URL configured in the git provider should use the
# /api/webhook endpoint of your ArgoCD instance (e.g. https://argocd.example.com/api/webhook)
# github.webhook.secret:
# gitlab.webhook.secret:
# bitbucket.webhook.uuid:

# bcrypt hash of the admin password (autogenerated on initial startup).
# To reset a forgotten password, delete this key and restart the argocd-server
# admin.password:

# random server signature key for session validation (autogenerated on initial startup)
# server.secretkey:
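As a minimal sketch, one of the webhook shared secrets described above could be supplied via `stringData` so Kubernetes base64-encodes it on apply (the secret value here is a made-up placeholder, not a real credential):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
type: Opaque
stringData:
  # must match the secret configured on the GitHub webhook for /api/webhook
  github.webhook.secret: replace-with-your-shared-secret
```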
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
#data:
# An RBAC policy .csv file containing additional policy and role definitions.
# See https://github.com/argoproj/argo-cd/blob/master/docs/rbac.md on how to write RBAC policies.
# policy.csv: |
#   # Give all members of "my-org:team-alpha" the ability to sync apps in "my-project"
#   p, my-org:team-alpha, applications, sync, my-project/*, allow
#   # Make all members of "my-org:team-beta" admins
#   g, my-org:team-beta, role:admin

# The default role ArgoCD will fall back to, when authorizing API requests
# policy.default: role:readonly
  name: application-controller
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: application-controller
  name: argocd-server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dex-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
@@ -136,43 +78,6 @@ rules:
  verbs:
  - create
  - list

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: application-controller-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: application-controller-role
subjects:
- kind: ServiceAccount
  name: application-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-controller
spec:
  selector:
    matchLabels:
      app: application-controller
  template:
    metadata:
      labels:
        app: application-controller
    spec:
      containers:
      - command: [/argocd-application-controller, --repo-server, 'argocd-repo-server:8081']
        image: argoproj/argocd-application-controller:v0.8.0
        name: application-controller
      serviceAccountName: application-controller
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
@@ -214,6 +119,74 @@ rules:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dex-server-role
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: application-controller-clusterrole
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argocd-server-clusterrole
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - delete
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - list
- apiGroups:
  - ""
  resources:
  - pods
  - pods/log
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: application-controller-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: application-controller-role
subjects:
- kind: ServiceAccount
  name: application-controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-server-role-binding
@@ -225,55 +198,83 @@ subjects:
- kind: ServiceAccount
  name: argocd-server
---
apiVersion: apps/v1
kind: Deployment
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dex-server-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dex-server-role
subjects:
- kind: ServiceAccount
  name: dex-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: application-controller-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: application-controller-clusterrole
subjects:
- kind: ServiceAccount
  name: application-controller
  namespace: argocd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-server-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argocd-server-clusterrole
subjects:
- kind: ServiceAccount
  name: argocd-server
  namespace: argocd
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
---
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  name: argocd-metrics
spec:
  ports:
  - name: http
    port: 8082
    protocol: TCP
    targetPort: 8082
  selector:
    matchLabels:
      app: argocd-server
  template:
    metadata:
      labels:
        app: argocd-server
    spec:
      serviceAccountName: argocd-server
      initContainers:
      - name: copyutil
        image: argoproj/argocd-server:v0.8.0
        command: [cp, /argocd-util, /shared]
        volumeMounts:
        - mountPath: /shared
          name: static-files
      - name: ui
        image: argoproj/argocd-ui:v0.8.0
        command: [cp, -r, /app, /shared]
        volumeMounts:
        - mountPath: /shared
          name: static-files
      containers:
      - name: argocd-server
        image: argoproj/argocd-server:v0.8.0
        command: [/argocd-server, --staticassets, /shared/app, --repo-server, 'argocd-repo-server:8081']
        volumeMounts:
        - mountPath: /shared
          name: static-files
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 30
      - name: dex
        image: quay.io/coreos/dex:v2.10.0
        command: [/shared/argocd-util, rundex]
        volumeMounts:
        - mountPath: /shared
          name: static-files
      volumes:
      - emptyDir: {}
        name: static-files
    app: argocd-server
---
apiVersion: v1
kind: Service
metadata:
  name: argocd-repo-server
spec:
  ports:
  - port: 8081
    targetPort: 8081
  selector:
    app: argocd-repo-server
---
apiVersion: v1
kind: Service
@@ -282,16 +283,59 @@ metadata:
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    protocol: TCP
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app: argocd-server
---
apiVersion: v1
kind: Service
metadata:
  name: dex-server
spec:
  ports:
  - name: http
    port: 5556
    protocol: TCP
    targetPort: 5556
  - name: grpc
    port: 5557
    protocol: TCP
    targetPort: 5557
  selector:
    app: dex-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-controller
spec:
  selector:
    matchLabels:
      app: application-controller
  template:
    metadata:
      labels:
        app: application-controller
    spec:
      containers:
      - command:
        - /argocd-application-controller
        - --repo-server
        - argocd-repo-server:8081
        - --status-processors
        - "20"
        - --operation-processors
        - "10"
        image: argoproj/argocd-application-controller:v0.10.6
        name: application-controller
      serviceAccountName: application-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -305,20 +349,103 @@ spec:
      labels:
        app: argocd-repo-server
    spec:
      automountServiceAccountToken: false
      containers:
      - name: argocd-repo-server
        image: argoproj/argocd-repo-server:v0.8.0
        command: [/argocd-repo-server]
      - command:
        - /argocd-repo-server
        image: argoproj/argocd-repo-server:v0.10.6
        name: argocd-repo-server
        ports:
        - containerPort: 8081
        - containerPort: 8081
        readinessProbe:
          initialDelaySeconds: 5
          periodSeconds: 10
          tcpSocket:
            port: 8081
---
apiVersion: v1
kind: Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
  name: argocd-server
spec:
  ports:
  - port: 8081
    targetPort: 8081
  selector:
    app: argocd-repo-server
    matchLabels:
      app: argocd-server
  template:
    metadata:
      labels:
        app: argocd-server
    spec:
      containers:
      - command:
        - /argocd-server
        - --staticassets
        - /shared/app
        - --repo-server
        - argocd-repo-server:8081
        image: argoproj/argocd-server:v0.10.6
        name: argocd-server
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 30
        volumeMounts:
        - mountPath: /shared
          name: static-files
      initContainers:
      - command:
        - cp
        - -r
        - /app
        - /shared
        image: argoproj/argocd-ui:v0.10.6
        name: ui
        volumeMounts:
        - mountPath: /shared
          name: static-files
      serviceAccountName: argocd-server
      volumes:
      - emptyDir: {}
        name: static-files
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dex-server
spec:
  selector:
    matchLabels:
      app: dex-server
  template:
    metadata:
      labels:
        app: dex-server
    spec:
      containers:
      - command:
        - /shared/argocd-util
        - rundex
        image: quay.io/dexidp/dex:v2.11.0
        name: dex
        ports:
        - containerPort: 5556
        - containerPort: 5557
        volumeMounts:
        - mountPath: /shared
          name: static-files
      initContainers:
      - command:
        - cp
        - /argocd-util
        - /shared
        image: argoproj/argocd-server:v0.10.6
        name: copyutil
        volumeMounts:
        - mountPath: /shared
          name: static-files
      serviceAccountName: dex-server
      volumes:
      - emptyDir: {}
        name: static-files
384 manifests/namespace-install.yaml Normal file
@@ -0,0 +1,384 @@
# This is an auto-generated file. DO NOT EDIT
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: applications.argoproj.io
spec:
  group: argoproj.io
  names:
    kind: Application
    plural: applications
    shortNames:
    - app
  scope: Namespaced
  version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: appprojects.argoproj.io
spec:
  group: argoproj.io
  names:
    kind: AppProject
    plural: appprojects
    shortNames:
    - appproj
    - appprojs
  scope: Namespaced
  version: v1alpha1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: application-controller
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dex-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: application-controller-role
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - watch
  - list
  - patch
  - update
- apiGroups:
  - argoproj.io
  resources:
  - applications
  - appprojects
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-server-role
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  - configmaps
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - patch
  - delete
- apiGroups:
  - argoproj.io
  resources:
  - applications
  - appprojects
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - delete
  - patch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dex-server-role
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: application-controller-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: application-controller-role
subjects:
- kind: ServiceAccount
  name: application-controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-server-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argocd-server-role
subjects:
- kind: ServiceAccount
  name: argocd-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dex-server-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dex-server-role
subjects:
- kind: ServiceAccount
  name: dex-server
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
---
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  name: argocd-metrics
spec:
  ports:
  - name: http
    port: 8082
    protocol: TCP
    targetPort: 8082
  selector:
    app: argocd-server
---
apiVersion: v1
kind: Service
metadata:
  name: argocd-repo-server
spec:
  ports:
  - port: 8081
    targetPort: 8081
  selector:
    app: argocd-repo-server
---
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app: argocd-server
---
apiVersion: v1
kind: Service
metadata:
  name: dex-server
spec:
  ports:
  - name: http
    port: 5556
    protocol: TCP
    targetPort: 5556
  - name: grpc
    port: 5557
    protocol: TCP
    targetPort: 5557
  selector:
    app: dex-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-controller
spec:
  selector:
    matchLabels:
      app: application-controller
  template:
    metadata:
      labels:
        app: application-controller
    spec:
      containers:
      - command:
        - /argocd-application-controller
        - --repo-server
        - argocd-repo-server:8081
        - --status-processors
        - "20"
        - --operation-processors
        - "10"
        image: argoproj/argocd-application-controller:v0.10.6
        name: application-controller
      serviceAccountName: application-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  selector:
    matchLabels:
      app: argocd-repo-server
  template:
    metadata:
      labels:
        app: argocd-repo-server
    spec:
      automountServiceAccountToken: false
      containers:
      - command:
        - /argocd-repo-server
        image: argoproj/argocd-repo-server:v0.10.6
        name: argocd-repo-server
        ports:
        - containerPort: 8081
        readinessProbe:
          initialDelaySeconds: 5
          periodSeconds: 10
          tcpSocket:
            port: 8081
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
spec:
  selector:
    matchLabels:
      app: argocd-server
  template:
    metadata:
      labels:
        app: argocd-server
    spec:
      containers:
      - command:
        - /argocd-server
        - --staticassets
        - /shared/app
        - --repo-server
        - argocd-repo-server:8081
        image: argoproj/argocd-server:v0.10.6
        name: argocd-server
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 30
        volumeMounts:
        - mountPath: /shared
          name: static-files
      initContainers:
      - command:
        - cp
        - -r
        - /app
        - /shared
        image: argoproj/argocd-ui:v0.10.6
        name: ui
        volumeMounts:
        - mountPath: /shared
          name: static-files
      serviceAccountName: argocd-server
      volumes:
      - emptyDir: {}
        name: static-files
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dex-server
spec:
  selector:
    matchLabels:
      app: dex-server
  template:
    metadata:
      labels:
        app: dex-server
    spec:
      containers:
      - command:
        - /shared/argocd-util
        - rundex
        image: quay.io/dexidp/dex:v2.11.0
        name: dex
        ports:
        - containerPort: 5556
        - containerPort: 5557
        volumeMounts:
        - mountPath: /shared
          name: static-files
      initContainers:
      - command:
        - cp
        - /argocd-util
        - /shared
        image: argoproj/argocd-server:v0.10.6
        name: copyutil
        volumeMounts:
        - mountPath: /shared
          name: static-files
      serviceAccountName: dex-server
      volumes:
      - emptyDir: {}
        name: static-files
@@ -40,6 +40,8 @@ const (
	EnvArgoCDServer = "ARGOCD_SERVER"
	// EnvArgoCDAuthToken is the environment variable to look for an ArgoCD auth token
	EnvArgoCDAuthToken = "ARGOCD_AUTH_TOKEN"
	// MaxGRPCMessageSize contains max grpc message size
	MaxGRPCMessageSize = 100 * 1024 * 1024
)

// Client defines an interface for interaction with an Argo CD server.
@@ -277,7 +279,8 @@ func (c *client) NewConn() (*grpc.ClientConn, error) {
	endpointCredentials := jwtCredentials{
		Token: c.AuthToken,
	}
	return grpc_util.BlockingDial(context.Background(), "tcp", c.ServerAddr, creds, grpc.WithPerRPCCredentials(endpointCredentials))
	return grpc_util.BlockingDial(context.Background(), "tcp", c.ServerAddr, creds,
		grpc.WithPerRPCCredentials(endpointCredentials), grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(MaxGRPCMessageSize)))
}

func (c *client) tlsConfig() (*tls.Config, error) {
File diff suppressed because it is too large
@@ -8,11 +8,19 @@ package github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1;
import "k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto";
import "k8s.io/apimachinery/pkg/runtime/generated.proto";
import "k8s.io/apimachinery/pkg/runtime/schema/generated.proto";
import "k8s.io/apimachinery/pkg/util/intstr/generated.proto";

// Package-wide variables from generator "generated".
option go_package = "v1alpha1";

// AWSAuthConfig is an AWS IAM authentication configuration
message AWSAuthConfig {
    // ClusterName contains AWS cluster name
    optional string clusterName = 1;

    // RoleARN contains optional role ARN. If set then AWS IAM Authenticator assume a role to perform cluster operations instead of the default AWS credential provider chain.
    optional string roleARN = 2;
}

// AppProject is a definition of AppProject resource.
// +genclient
// +genclient:noStatus
@@ -43,6 +51,12 @@ message AppProjectSpec {
    optional string description = 3;

    repeated ProjectRole roles = 4;

    // ClusterResourceWhitelist contains list of whitelisted cluster level resources
    repeated k8s.io.apimachinery.pkg.apis.meta.v1.GroupKind clusterResourceWhitelist = 5;

    // NamespaceResourceBlacklist contains list of blacklisted namespace level resources
    repeated k8s.io.apimachinery.pkg.apis.meta.v1.GroupKind namespaceResourceBlacklist = 6;
}

// Application is a definition of Application resource.
@@ -117,6 +131,9 @@ message ApplicationSpec {

    // Project is a application project name. Empty name means that application belongs to 'default' project.
    optional string project = 3;

    // SyncPolicy controls when a sync will be performed
    optional SyncPolicy syncPolicy = 4;
}

// ApplicationStatus contains information about application status in target environment.
@@ -176,6 +193,9 @@ message ClusterConfig {

    // TLSClientConfig contains settings to enable transport layer security
    optional TLSClientConfig tlsClientConfig = 4;

    // AWSAuthConfig contains IAM authentication configuration
    optional AWSAuthConfig awsAuthConfig = 5;
}

// ClusterList is a collection of Clusters.
@@ -194,6 +214,8 @@ message ComparisonResult {
    optional string status = 5;

    repeated ResourceState resources = 6;

    optional string revision = 7;
}

// ComponentParameter contains information about component parameter value
@@ -216,8 +238,6 @@ message ConnectionState {

// DeploymentInfo contains information relevant to an application deployment
message DeploymentInfo {
    repeated ComponentParameter params = 1;

    optional string revision = 2;

    repeated ComponentParameter componentParameterOverrides = 3;
@@ -264,8 +284,6 @@ message JWTToken {
// Operation contains requested operation parameters.
message Operation {
    optional SyncOperation sync = 1;

    optional RollbackOperation rollback = 2;
}

// OperationState contains information about state of currently performing operation on application.
@@ -282,9 +300,6 @@ message OperationState {
    // SyncResult is the result of a Sync operation
    optional SyncOperationResult syncResult = 4;

    // RollbackResult is the result of a Rollback operation
    optional SyncOperationResult rollbackResult = 5;

    // StartedAt contains time of operation start
    optional k8s.io.apimachinery.pkg.apis.meta.v1.Time startedAt = 6;

@@ -292,6 +307,15 @@ message OperationState {
    optional k8s.io.apimachinery.pkg.apis.meta.v1.Time finishedAt = 7;
}

// ParameterOverrides masks the value so protobuf can generate
// +protobuf.nullable=true
// +protobuf.options.(gogoproto.goproto_stringer)=false
message ParameterOverrides {
    // items, if empty, will result in an empty slice

    repeated ComponentParameter items = 1;
}

// ProjectRole represents a role that has access to a project
message ProjectRole {
    optional string name = 1;
@@ -356,17 +380,10 @@ message ResourceState {
    optional HealthStatus health = 5;
}

message RollbackOperation {
    optional int64 id = 1;

    optional bool prune = 2;

    optional bool dryRun = 3;
}

// SyncOperation contains sync operation details.
message SyncOperation {
    // Revision is the git revision in which to sync the application to
    // Revision is the git revision in which to sync the application to.
    // If omitted, will use the revision specified in app spec.
    optional string revision = 1;

    // Prune deletes resources that are no longer tracked in git
@@ -377,6 +394,23 @@ message SyncOperation {

    // SyncStrategy describes how to perform the sync
    optional SyncStrategy syncStrategy = 4;

    // ParameterOverrides applies any parameter overrides as part of the sync
    // If nil, uses the parameter override set in application.
    // If empty, sets no parameter overrides
    optional ParameterOverrides parameterOverrides = 5;

    // Resources describes which resources to sync
    repeated SyncOperationResource resources = 6;
}

// SyncOperationResource contains resources to sync.
message SyncOperationResource {
    optional string group = 1;

    optional string kind = 2;

    optional string name = 3;
}

// SyncOperationResult represent result of sync operation
@@ -391,12 +425,24 @@ message SyncOperationResult {
    repeated HookStatus hooks = 3;
}

// SyncStrategy indicates the
// SyncPolicy controls when a sync will be performed in response to updates in git
message SyncPolicy {
    // Automated will keep an application synced to the target revision
    optional SyncPolicyAutomated automated = 1;
}

// SyncPolicyAutomated controls the behavior of an automated sync
message SyncPolicyAutomated {
    // Prune will prune resources automatically as part of automated sync (default: false)
    optional bool prune = 1;
}
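Given the new `SyncPolicy`/`SyncPolicyAutomated` messages added above, an Application could opt into automated sync roughly like this sketch (the app name, repo URL, path, and destination are illustrative only):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    path: guestbook
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    # keep the app synced to the target revision; prune defaults to false
    automated:
      prune: false
```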
|
||||
// SyncStrategy controls the manner in which a sync is performed
|
||||
message SyncStrategy {
|
||||
// Apply wil perform a `kubectl apply` to perform the sync. This is the default strategy
|
||||
// Apply wil perform a `kubectl apply` to perform the sync.
|
||||
optional SyncStrategyApply apply = 1;
|
||||
|
||||
// Hook will submit any referenced resources to perform the sync
|
||||
// Hook will submit any referenced resources to perform the sync. This is the default strategy
|
||||
optional SyncStrategyHook hook = 2;
|
||||
}
|
||||
|
||||
|
||||
@@ -2,6 +2,8 @@ package v1alpha1
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
fmt "fmt"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"strings"
|
||||
|
||||
@@ -9,14 +11,32 @@ import (
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/watch"
|
||||
"k8s.io/client-go/rest"
|
||||
"k8s.io/client-go/tools/clientcmd/api"
|
||||
|
||||
"github.com/argoproj/argo-cd/common"
|
||||
"github.com/argoproj/argo-cd/util/git"
|
||||
)
|
||||
|
||||
// SyncOperationResource contains resources to sync.
|
||||
type SyncOperationResource struct {
|
||||
Group string `json:"group,omitempty" protobuf:"bytes,1,opt,name=group"`
|
||||
Kind string `json:"kind" protobuf:"bytes,2,opt,name=kind"`
|
||||
Name string `json:"name" protobuf:"bytes,3,opt,name=name"`
|
||||
}
|
||||
|
||||
// HasIdentity determines whether a sync operation is identified by a manifest.
|
||||
func (r SyncOperationResource) HasIdentity(u *unstructured.Unstructured) bool {
|
||||
gvk := u.GroupVersionKind()
|
||||
if u.GetName() == r.Name && gvk.Kind == r.Kind && gvk.Group == r.Group {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}

// SyncOperation contains sync operation details.
type SyncOperation struct {
    // Revision is the git revision in which to sync the application to
    // Revision is the git revision in which to sync the application to.
    // If omitted, will use the revision specified in app spec.
    Revision string `json:"revision,omitempty" protobuf:"bytes,1,opt,name=revision"`
    // Prune deletes resources that are no longer tracked in git
    Prune bool `json:"prune,omitempty" protobuf:"bytes,2,opt,name=prune"`
@@ -24,18 +44,26 @@ type SyncOperation struct {
    DryRun bool `json:"dryRun,omitempty" protobuf:"bytes,3,opt,name=dryRun"`
    // SyncStrategy describes how to perform the sync
    SyncStrategy *SyncStrategy `json:"syncStrategy,omitempty" protobuf:"bytes,4,opt,name=syncStrategy"`
    // ParameterOverrides applies any parameter overrides as part of the sync
    // If nil, uses the parameter override set in application.
    // If empty, sets no parameter overrides
    ParameterOverrides ParameterOverrides `json:"parameterOverrides" protobuf:"bytes,5,opt,name=parameterOverrides"`
    // Resources describes which resources to sync
    Resources []SyncOperationResource `json:"resources,omitempty" protobuf:"bytes,6,opt,name=resources"`
}

type RollbackOperation struct {
    ID int64 `json:"id" protobuf:"bytes,1,opt,name=id"`
    Prune bool `json:"prune,omitempty" protobuf:"bytes,2,opt,name=prune"`
    DryRun bool `json:"dryRun,omitempty" protobuf:"bytes,3,opt,name=dryRun"`
}

// ParameterOverrides masks the value so protobuf can generate
// +protobuf.nullable=true
// +protobuf.options.(gogoproto.goproto_stringer)=false
type ParameterOverrides []ComponentParameter

func (po ParameterOverrides) String() string {
    return fmt.Sprintf("%v", []ComponentParameter(po))
}

// Operation contains requested operation parameters.
type Operation struct {
    Sync *SyncOperation `json:"sync,omitempty" protobuf:"bytes,1,opt,name=sync"`
    Rollback *RollbackOperation `json:"rollback,omitempty" protobuf:"bytes,2,opt,name=rollback"`
    Sync *SyncOperation `json:"sync,omitempty" protobuf:"bytes,1,opt,name=sync"`
}

type OperationPhase string
@@ -70,19 +98,29 @@ type OperationState struct {
    Message string `json:"message,omitempty" protobuf:"bytes,3,opt,name=message"`
    // SyncResult is the result of a Sync operation
    SyncResult *SyncOperationResult `json:"syncResult,omitempty" protobuf:"bytes,4,opt,name=syncResult"`
    // RollbackResult is the result of a Rollback operation
    RollbackResult *SyncOperationResult `json:"rollbackResult,omitempty" protobuf:"bytes,5,opt,name=rollbackResult"`
    // StartedAt contains time of operation start
    StartedAt metav1.Time `json:"startedAt" protobuf:"bytes,6,opt,name=startedAt"`
    // FinishedAt contains time of operation completion
    FinishedAt *metav1.Time `json:"finishedAt" protobuf:"bytes,7,opt,name=finishedAt"`
}

// SyncStrategy indicates the
// SyncPolicy controls when a sync will be performed in response to updates in git
type SyncPolicy struct {
    // Automated will keep an application synced to the target revision
    Automated *SyncPolicyAutomated `json:"automated,omitempty" protobuf:"bytes,1,opt,name=automated"`
}

// SyncPolicyAutomated controls the behavior of an automated sync
type SyncPolicyAutomated struct {
    // Prune will prune resources automatically as part of automated sync (default: false)
    Prune bool `json:"prune,omitempty" protobuf:"bytes,1,opt,name=prune"`
}

// SyncStrategy controls the manner in which a sync is performed
type SyncStrategy struct {
    // Apply will perform a `kubectl apply` to perform the sync. This is the default strategy
    // Apply will perform a `kubectl apply` to perform the sync.
    Apply *SyncStrategyApply `json:"apply,omitempty" protobuf:"bytes,1,opt,name=apply"`
    // Hook will submit any referenced resources to perform the sync
    // Hook will submit any referenced resources to perform the sync. This is the default strategy
    Hook *SyncStrategyHook `json:"hook,omitempty" protobuf:"bytes,2,opt,name=hook"`
}
@@ -171,7 +209,6 @@ type ResourceDetails struct {

// DeploymentInfo contains information relevant to an application deployment
type DeploymentInfo struct {
    Params []ComponentParameter `json:"params" protobuf:"bytes,1,name=params"`
    Revision string `json:"revision" protobuf:"bytes,2,opt,name=revision"`
    ComponentParameterOverrides []ComponentParameter `json:"componentParameterOverrides,omitempty" protobuf:"bytes,3,opt,name=componentParameterOverrides"`
    DeployedAt metav1.Time `json:"deployedAt" protobuf:"bytes,4,opt,name=deployedAt"`
@@ -218,6 +255,8 @@ type ApplicationSpec struct {
    Destination ApplicationDestination `json:"destination" protobuf:"bytes,2,name=destination"`
    // Project is an application project name. Empty name means that application belongs to 'default' project.
    Project string `json:"project" protobuf:"bytes,3,name=project"`
    // SyncPolicy controls when a sync will be performed
    SyncPolicy *SyncPolicy `json:"syncPolicy,omitempty" protobuf:"bytes,4,name=syncPolicy"`
}

// ComponentParameter contains information about component parameter value
@@ -283,8 +322,10 @@ const (
    ApplicationConditionDeletionError = "DeletionError"
    // ApplicationConditionInvalidSpecError indicates that application source is invalid
    ApplicationConditionInvalidSpecError = "InvalidSpecError"
    // ApplicationComparisonError indicates controller failed to compare application state
    // ApplicationConditionComparisonError indicates controller failed to compare application state
    ApplicationConditionComparisonError = "ComparisonError"
    // ApplicationConditionSyncError indicates controller failed to automatically sync the application
    ApplicationConditionSyncError = "SyncError"
    // ApplicationConditionUnknownError indicates an unknown controller error
    ApplicationConditionUnknownError = "UnknownError"
    // ApplicationConditionSharedResourceWarning indicates that controller detected resources which belong to more than one application
@@ -305,6 +346,7 @@ type ComparisonResult struct {
    ComparedTo ApplicationSource `json:"comparedTo" protobuf:"bytes,2,opt,name=comparedTo"`
    Status ComparisonStatus `json:"status" protobuf:"bytes,5,opt,name=status,casttype=ComparisonStatus"`
    Resources []ResourceState `json:"resources" protobuf:"bytes,6,opt,name=resources"`
    Revision string `json:"revision" protobuf:"bytes,7,opt,name=revision"`
}

type HealthStatus struct {
@@ -374,6 +416,15 @@ type ClusterList struct {
    Items []Cluster `json:"items" protobuf:"bytes,2,rep,name=items"`
}

// AWSAuthConfig is an AWS IAM authentication configuration
type AWSAuthConfig struct {
    // ClusterName contains AWS cluster name
    ClusterName string `json:"clusterName,omitempty" protobuf:"bytes,1,opt,name=clusterName"`

    // RoleARN contains optional role ARN. If set then AWS IAM Authenticator assumes a role to perform cluster operations instead of the default AWS credential provider chain.
    RoleARN string `json:"roleARN,omitempty" protobuf:"bytes,2,opt,name=roleARN"`
}

// ClusterConfig is the configuration attributes. This structure is subset of the go-client
// rest.Config with annotations added for marshalling.
type ClusterConfig struct {
@@ -388,6 +439,9 @@ type ClusterConfig struct {

    // TLSClientConfig contains settings to enable transport layer security
    TLSClientConfig `json:"tlsClientConfig" protobuf:"bytes,4,opt,name=tlsClientConfig"`

    // AWSAuthConfig contains IAM authentication configuration
    AWSAuthConfig *AWSAuthConfig `json:"awsAuthConfig" protobuf:"bytes,5,opt,name=awsAuthConfig"`
}

// TLSClientConfig contains settings to enable transport layer security
@@ -447,6 +501,8 @@ type AppProject struct {
func (proj *AppProject) ProjectPoliciesString() string {
    var policies []string
    for _, role := range proj.Spec.Roles {
        projectPolicy := fmt.Sprintf("p, proj:%s:%s, projects, get, %s, allow", proj.ObjectMeta.Name, role.Name, proj.ObjectMeta.Name)
        policies = append(policies, projectPolicy)
        policies = append(policies, role.Policies...)
    }
    return strings.Join(policies, "\n")
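The policy string assembled above is just newline-joined Casbin-style rows: one implicit `get` grant per role, followed by the role's custom policies. A minimal standalone sketch of the same assembly (the project and role names here are illustrative, and a plain map stands in for the `ProjectRole` slice):

```go
package main

import (
	"fmt"
	"strings"
)

// projectPolicies mimics ProjectPoliciesString: for each role, emit an
// implicit "projects, get" grant, then append the role's extra policies.
func projectPolicies(project string, roles map[string][]string) string {
	var policies []string
	for role, extra := range roles {
		policies = append(policies, fmt.Sprintf("p, proj:%s:%s, projects, get, %s, allow", project, role, project))
		policies = append(policies, extra...)
	}
	return strings.Join(policies, "\n")
}

func main() {
	out := projectPolicies("my-proj", map[string][]string{
		"admin": {"p, proj:my-proj:admin, applications, *, my-proj/*, allow"},
	})
	fmt.Println(out)
}
```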
@@ -464,6 +520,12 @@ type AppProjectSpec struct {
    Description string `json:"description,omitempty" protobuf:"bytes,3,opt,name=description"`

    Roles []ProjectRole `json:"roles,omitempty" protobuf:"bytes,4,rep,name=roles"`

    // ClusterResourceWhitelist contains list of whitelisted cluster level resources
    ClusterResourceWhitelist []metav1.GroupKind `json:"clusterResourceWhitelist,omitempty" protobuf:"bytes,5,opt,name=clusterResourceWhitelist"`

    // NamespaceResourceBlacklist contains list of blacklisted namespace level resources
    NamespaceResourceBlacklist []metav1.GroupKind `json:"namespaceResourceBlacklist,omitempty" protobuf:"bytes,6,opt,name=namespaceResourceBlacklist"`
}

// ProjectRole represents a role that has access to a project
@@ -541,7 +603,28 @@ func (spec ApplicationSpec) GetProject() string {
    return spec.Project
}

// IsSourcePermitted validiates if the provided application's source is a one of the allowed sources for the project.
func isResourceInList(res metav1.GroupKind, list []metav1.GroupKind) bool {
    for _, item := range list {
        ok, err := filepath.Match(item.Kind, res.Kind)
        if ok && err == nil {
            ok, err = filepath.Match(item.Group, res.Group)
            if ok && err == nil {
                return true
            }
        }
    }
    return false
}

func (proj AppProject) IsResourcePermitted(res metav1.GroupKind, namespaced bool) bool {
    if namespaced {
        return !isResourceInList(res, proj.Spec.NamespaceResourceBlacklist)
    } else {
        return isResourceInList(res, proj.Spec.ClusterResourceWhitelist)
    }
}
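Because `isResourceInList` matches through `filepath.Match`, whitelist and blacklist entries may be glob patterns such as `*`, not just literal group/kind values. A self-contained sketch of the same matching rule — the `GroupKind` struct here is a local stand-in for `metav1.GroupKind`:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// GroupKind is a local stand-in for metav1.GroupKind.
type GroupKind struct {
	Group, Kind string
}

// inList reports whether res matches any entry, treating both Group and Kind
// as glob patterns, exactly as isResourceInList does.
func inList(res GroupKind, list []GroupKind) bool {
	for _, item := range list {
		kindOK, err := filepath.Match(item.Kind, res.Kind)
		if kindOK && err == nil {
			groupOK, err := filepath.Match(item.Group, res.Group)
			if groupOK && err == nil {
				return true
			}
		}
	}
	return false
}

func main() {
	// A whitelist entry with Group "*" covers the empty core group as well,
	// since filepath.Match("*", "") matches.
	whitelist := []GroupKind{{Group: "*", Kind: "Namespace"}}
	fmt.Println(inList(GroupKind{Group: "", Kind: "Namespace"}, whitelist))   // true
	fmt.Println(inList(GroupKind{Group: "", Kind: "ClusterRole"}, whitelist)) // false
}
```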

// IsSourcePermitted validates if the provided application's source is one of the allowed sources for the project.
func (proj AppProject) IsSourcePermitted(src ApplicationSource) bool {

    normalizedURL := git.NormalizeGitURL(src.RepoURL)
@@ -558,7 +641,6 @@ func (proj AppProject) IsSourcePermitted(src ApplicationSource) bool {

// IsDestinationPermitted validates if the provided application's destination is one of the allowed destinations for the project
func (proj AppProject) IsDestinationPermitted(dst ApplicationDestination) bool {

    for _, item := range proj.Spec.Destinations {
        if item.Server == dst.Server || item.Server == "*" {
            if item.Namespace == dst.Namespace || item.Namespace == "*" {
@@ -571,18 +653,42 @@ func (proj AppProject) IsDestinationPermitted(dst ApplicationDestination) bool {

// RESTConfig returns a go-client REST config from cluster
func (c *Cluster) RESTConfig() *rest.Config {
    if c.Server == common.KubernetesInternalAPIServerAddr && c.Config.Username == "" && c.Config.Password == "" && c.Config.BearerToken == "" {
        config, err := rest.InClusterConfig()
        if err != nil {
            panic("Unable to create in-cluster config")
        }
        return config
    }
    tlsClientConfig := rest.TLSClientConfig{
        Insecure:   c.Config.TLSClientConfig.Insecure,
        ServerName: c.Config.TLSClientConfig.ServerName,
        CertData:   c.Config.TLSClientConfig.CertData,
        KeyData:    c.Config.TLSClientConfig.KeyData,
        CAData:     c.Config.TLSClientConfig.CAData,
    }
    if c.Config.AWSAuthConfig != nil {
        args := []string{"token", "-i", c.Config.AWSAuthConfig.ClusterName}
        if c.Config.AWSAuthConfig.RoleARN != "" {
            args = append(args, "-r", c.Config.AWSAuthConfig.RoleARN)
        }
        return &rest.Config{
            Host:            c.Server,
            TLSClientConfig: tlsClientConfig,
            ExecProvider: &api.ExecConfig{
                APIVersion: "client.authentication.k8s.io/v1alpha1",
                Command:    "aws-iam-authenticator",
                Args:       args,
            },
        }
    }

    return &rest.Config{
        Host:        c.Server,
        Username:    c.Config.Username,
        Password:    c.Config.Password,
        BearerToken: c.Config.BearerToken,
        TLSClientConfig: rest.TLSClientConfig{
            Insecure:   c.Config.TLSClientConfig.Insecure,
            ServerName: c.Config.TLSClientConfig.ServerName,
            CertData:   c.Config.TLSClientConfig.CertData,
            KeyData:    c.Config.TLSClientConfig.KeyData,
            CAData:     c.Config.TLSClientConfig.CAData,
        },
        Host:        c.Server,
        Username:    c.Config.Username,
        Password:    c.Config.Password,
        BearerToken: c.Config.BearerToken,
        TLSClientConfig: tlsClientConfig,
    }
}
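The AWS branch above hands authentication off to `aws-iam-authenticator` through the client-go exec credential plugin; the only dynamic part is the argument list. That construction can be sketched in isolation (the cluster name and role ARN below are illustrative values, not taken from the diff):

```go
package main

import "fmt"

// authenticatorArgs reproduces the argument construction from RESTConfig's
// AWS branch: always request a token for the named cluster, and add -r only
// when a role ARN is configured.
func authenticatorArgs(clusterName, roleARN string) []string {
	args := []string{"token", "-i", clusterName}
	if roleARN != "" {
		args = append(args, "-r", roleARN)
	}
	return args
}

func main() {
	fmt.Println(authenticatorArgs("prod-cluster", ""))
	fmt.Println(authenticatorArgs("prod-cluster", "arn:aws:iam::123456789012:role/eks-admin"))
}
```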


@@ -9,6 +9,22 @@ import (
    runtime "k8s.io/apimachinery/pkg/runtime"
)

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AWSAuthConfig) DeepCopyInto(out *AWSAuthConfig) {
    *out = *in
    return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AWSAuthConfig.
func (in *AWSAuthConfig) DeepCopy() *AWSAuthConfig {
    if in == nil {
        return nil
    }
    out := new(AWSAuthConfig)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AppProject) DeepCopyInto(out *AppProject) {
    *out = *in
@@ -89,6 +105,16 @@ func (in *AppProjectSpec) DeepCopyInto(out *AppProjectSpec) {
            (*in)[i].DeepCopyInto(&(*out)[i])
        }
    }
    if in.ClusterResourceWhitelist != nil {
        in, out := &in.ClusterResourceWhitelist, &out.ClusterResourceWhitelist
        *out = make([]v1.GroupKind, len(*in))
        copy(*out, *in)
    }
    if in.NamespaceResourceBlacklist != nil {
        in, out := &in.NamespaceResourceBlacklist, &out.NamespaceResourceBlacklist
        *out = make([]v1.GroupKind, len(*in))
        copy(*out, *in)
    }
    return
}

@@ -235,6 +261,15 @@ func (in *ApplicationSpec) DeepCopyInto(out *ApplicationSpec) {
    *out = *in
    in.Source.DeepCopyInto(&out.Source)
    out.Destination = in.Destination
    if in.SyncPolicy != nil {
        in, out := &in.SyncPolicy, &out.SyncPolicy
        if *in == nil {
            *out = nil
        } else {
            *out = new(SyncPolicy)
            (*in).DeepCopyInto(*out)
        }
    }
    return
}

@@ -331,6 +366,15 @@ func (in *Cluster) DeepCopy() *Cluster {
func (in *ClusterConfig) DeepCopyInto(out *ClusterConfig) {
    *out = *in
    in.TLSClientConfig.DeepCopyInto(&out.TLSClientConfig)
    if in.AWSAuthConfig != nil {
        in, out := &in.AWSAuthConfig, &out.AWSAuthConfig
        if *in == nil {
            *out = nil
        } else {
            *out = new(AWSAuthConfig)
            **out = **in
        }
    }
    return
}

@@ -437,11 +481,6 @@ func (in *ConnectionState) DeepCopy() *ConnectionState {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DeploymentInfo) DeepCopyInto(out *DeploymentInfo) {
    *out = *in
    if in.Params != nil {
        in, out := &in.Params, &out.Params
        *out = make([]ComponentParameter, len(*in))
        copy(*out, *in)
    }
    if in.ComponentParameterOverrides != nil {
        in, out := &in.ComponentParameterOverrides, &out.ComponentParameterOverrides
        *out = make([]ComponentParameter, len(*in))
@@ -521,15 +560,6 @@ func (in *Operation) DeepCopyInto(out *Operation) {
            (*in).DeepCopyInto(*out)
        }
    }
    if in.Rollback != nil {
        in, out := &in.Rollback, &out.Rollback
        if *in == nil {
            *out = nil
        } else {
            *out = new(RollbackOperation)
            **out = **in
        }
    }
    return
}

@@ -556,15 +586,6 @@ func (in *OperationState) DeepCopyInto(out *OperationState) {
            (*in).DeepCopyInto(*out)
        }
    }
    if in.RollbackResult != nil {
        in, out := &in.RollbackResult, &out.RollbackResult
        if *in == nil {
            *out = nil
        } else {
            *out = new(SyncOperationResult)
            (*in).DeepCopyInto(*out)
        }
    }
    in.StartedAt.DeepCopyInto(&out.StartedAt)
    if in.FinishedAt != nil {
        in, out := &in.FinishedAt, &out.FinishedAt
@@ -718,22 +739,6 @@ func (in *ResourceState) DeepCopy() *ResourceState {
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RollbackOperation) DeepCopyInto(out *RollbackOperation) {
    *out = *in
    return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RollbackOperation.
func (in *RollbackOperation) DeepCopy() *RollbackOperation {
    if in == nil {
        return nil
    }
    out := new(RollbackOperation)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SyncOperation) DeepCopyInto(out *SyncOperation) {
    *out = *in
@@ -746,6 +751,16 @@ func (in *SyncOperation) DeepCopyInto(out *SyncOperation) {
            (*in).DeepCopyInto(*out)
        }
    }
    if in.ParameterOverrides != nil {
        in, out := &in.ParameterOverrides, &out.ParameterOverrides
        *out = make(ParameterOverrides, len(*in))
        copy(*out, *in)
    }
    if in.Resources != nil {
        in, out := &in.Resources, &out.Resources
        *out = make([]SyncOperationResource, len(*in))
        copy(*out, *in)
    }
    return
}

@@ -759,6 +774,22 @@ func (in *SyncOperation) DeepCopy() *SyncOperation {
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SyncOperationResource) DeepCopyInto(out *SyncOperationResource) {
    *out = *in
    return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SyncOperationResource.
func (in *SyncOperationResource) DeepCopy() *SyncOperationResource {
    if in == nil {
        return nil
    }
    out := new(SyncOperationResource)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SyncOperationResult) DeepCopyInto(out *SyncOperationResult) {
    *out = *in
@@ -799,6 +830,47 @@ func (in *SyncOperationResult) DeepCopy() *SyncOperationResult {
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SyncPolicy) DeepCopyInto(out *SyncPolicy) {
    *out = *in
    if in.Automated != nil {
        in, out := &in.Automated, &out.Automated
        if *in == nil {
            *out = nil
        } else {
            *out = new(SyncPolicyAutomated)
            **out = **in
        }
    }
    return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SyncPolicy.
func (in *SyncPolicy) DeepCopy() *SyncPolicy {
    if in == nil {
        return nil
    }
    out := new(SyncPolicy)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SyncPolicyAutomated) DeepCopyInto(out *SyncPolicyAutomated) {
    *out = *in
    return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SyncPolicyAutomated.
func (in *SyncPolicyAutomated) DeepCopy() *SyncPolicyAutomated {
    if in == nil {
        return nil
    }
    out := new(SyncPolicyAutomated)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SyncStrategy) DeepCopyInto(out *SyncStrategy) {
    *out = *in

@@ -2,7 +2,6 @@ package versioned

import (
    argoprojv1alpha1 "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/typed/application/v1alpha1"
    glog "github.com/golang/glog"
    discovery "k8s.io/client-go/discovery"
    rest "k8s.io/client-go/rest"
    flowcontrol "k8s.io/client-go/util/flowcontrol"
@@ -56,7 +55,6 @@ func NewForConfig(c *rest.Config) (*Clientset, error) {

    cs.DiscoveryClient, err = discovery.NewDiscoveryClientForConfig(&configShallowCopy)
    if err != nil {
        glog.Errorf("failed to create the DiscoveryClient: %v", err)
        return nil, err
    }
    return &cs, nil

@@ -23,9 +23,10 @@ func NewSimpleClientset(objects ...runtime.Object) *Clientset {
        }
    }

    fakePtr := testing.Fake{}
    fakePtr.AddReactor("*", "*", testing.ObjectReaction(o))
    fakePtr.AddWatchReactor("*", func(action testing.Action) (handled bool, ret watch.Interface, err error) {
    cs := &Clientset{}
    cs.discovery = &fakediscovery.FakeDiscovery{Fake: &cs.Fake}
    cs.AddReactor("*", "*", testing.ObjectReaction(o))
    cs.AddWatchReactor("*", func(action testing.Action) (handled bool, ret watch.Interface, err error) {
        gvr := action.GetResource()
        ns := action.GetNamespace()
        watch, err := o.Watch(gvr, ns)
@@ -35,7 +36,7 @@ func NewSimpleClientset(objects ...runtime.Object) *Clientset {
        return true, watch, nil
    })

    return &Clientset{fakePtr, &fakediscovery.FakeDiscovery{Fake: &fakePtr}}
    return cs
}

// Clientset implements clientset.Interface. Meant to be embedded into a

@@ -6,15 +6,14 @@ import (
    runtime "k8s.io/apimachinery/pkg/runtime"
    schema "k8s.io/apimachinery/pkg/runtime/schema"
    serializer "k8s.io/apimachinery/pkg/runtime/serializer"
    util_runtime "k8s.io/apimachinery/pkg/util/runtime"
)

var scheme = runtime.NewScheme()
var codecs = serializer.NewCodecFactory(scheme)
var parameterCodec = runtime.NewParameterCodec(scheme)

func init() {
    v1.AddToGroupVersion(scheme, schema.GroupVersion{Version: "v1"})
    AddToScheme(scheme)
var localSchemeBuilder = runtime.SchemeBuilder{
    argoprojv1alpha1.AddToScheme,
}

// AddToScheme adds all types of this clientset into the given scheme. This allows composition
@@ -27,10 +26,13 @@ func init() {
//   )
//
//   kclientset, _ := kubernetes.NewForConfig(c)
//   aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
//   _ = aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
//
// After this, RawExtensions in Kubernetes types will serialize kube-aggregator types
// correctly.
func AddToScheme(scheme *runtime.Scheme) {
    argoprojv1alpha1.AddToScheme(scheme)
var AddToScheme = localSchemeBuilder.AddToScheme

func init() {
    v1.AddToGroupVersion(scheme, schema.GroupVersion{Version: "v1"})
    util_runtime.Must(AddToScheme(scheme))
}

@@ -6,15 +6,14 @@ import (
    runtime "k8s.io/apimachinery/pkg/runtime"
    schema "k8s.io/apimachinery/pkg/runtime/schema"
    serializer "k8s.io/apimachinery/pkg/runtime/serializer"
    util_runtime "k8s.io/apimachinery/pkg/util/runtime"
)

var Scheme = runtime.NewScheme()
var Codecs = serializer.NewCodecFactory(Scheme)
var ParameterCodec = runtime.NewParameterCodec(Scheme)

func init() {
    v1.AddToGroupVersion(Scheme, schema.GroupVersion{Version: "v1"})
    AddToScheme(Scheme)
var localSchemeBuilder = runtime.SchemeBuilder{
    argoprojv1alpha1.AddToScheme,
}

// AddToScheme adds all types of this clientset into the given scheme. This allows composition
@@ -27,10 +26,13 @@ func init() {
//   )
//
//   kclientset, _ := kubernetes.NewForConfig(c)
//   aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
//   _ = aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
//
// After this, RawExtensions in Kubernetes types will serialize kube-aggregator types
// correctly.
func AddToScheme(scheme *runtime.Scheme) {
    argoprojv1alpha1.AddToScheme(scheme)
var AddToScheme = localSchemeBuilder.AddToScheme

func init() {
    v1.AddToGroupVersion(Scheme, schema.GroupVersion{Version: "v1"})
    util_runtime.Must(AddToScheme(Scheme))
}

@@ -44,7 +44,7 @@ func (c *FakeApplications) List(opts v1.ListOptions) (result *v1alpha1.Applicati
    if label == nil {
        label = labels.Everything()
    }
    list := &v1alpha1.ApplicationList{}
    list := &v1alpha1.ApplicationList{ListMeta: obj.(*v1alpha1.ApplicationList).ListMeta}
    for _, item := range obj.(*v1alpha1.ApplicationList).Items {
        if label.Matches(labels.Set(item.Labels)) {
            list.Items = append(list.Items, item)

@@ -44,7 +44,7 @@ func (c *FakeAppProjects) List(opts v1.ListOptions) (result *v1alpha1.AppProject
    if label == nil {
        label = labels.Everything()
    }
    list := &v1alpha1.AppProjectList{}
    list := &v1alpha1.AppProjectList{ListMeta: obj.(*v1alpha1.AppProjectList).ListMeta}
    for _, item := range obj.(*v1alpha1.AppProjectList).Items {
        if label.Matches(labels.Set(item.Labels)) {
            list.Items = append(list.Items, item)

@@ -14,12 +14,16 @@ import (
    cache "k8s.io/client-go/tools/cache"
)

// SharedInformerOption defines the functional option type for SharedInformerFactory.
type SharedInformerOption func(*sharedInformerFactory) *sharedInformerFactory

type sharedInformerFactory struct {
    client versioned.Interface
    namespace string
    tweakListOptions internalinterfaces.TweakListOptionsFunc
    lock sync.Mutex
    defaultResync time.Duration
    customResync map[reflect.Type]time.Duration

    informers map[reflect.Type]cache.SharedIndexInformer
    // startedInformers is used for tracking which informers have been started.
@@ -27,23 +31,62 @@ type sharedInformerFactory struct {
    startedInformers map[reflect.Type]bool
}

// NewSharedInformerFactory constructs a new instance of sharedInformerFactory
// WithCustomResyncConfig sets a custom resync period for the specified informer types.
func WithCustomResyncConfig(resyncConfig map[v1.Object]time.Duration) SharedInformerOption {
    return func(factory *sharedInformerFactory) *sharedInformerFactory {
        for k, v := range resyncConfig {
            factory.customResync[reflect.TypeOf(k)] = v
        }
        return factory
    }
}

// WithTweakListOptions sets a custom filter on all listers of the configured SharedInformerFactory.
func WithTweakListOptions(tweakListOptions internalinterfaces.TweakListOptionsFunc) SharedInformerOption {
    return func(factory *sharedInformerFactory) *sharedInformerFactory {
        factory.tweakListOptions = tweakListOptions
        return factory
    }
}

// WithNamespace limits the SharedInformerFactory to the specified namespace.
func WithNamespace(namespace string) SharedInformerOption {
    return func(factory *sharedInformerFactory) *sharedInformerFactory {
        factory.namespace = namespace
        return factory
    }
}

// NewSharedInformerFactory constructs a new instance of sharedInformerFactory for all namespaces.
func NewSharedInformerFactory(client versioned.Interface, defaultResync time.Duration) SharedInformerFactory {
    return NewFilteredSharedInformerFactory(client, defaultResync, v1.NamespaceAll, nil)
    return NewSharedInformerFactoryWithOptions(client, defaultResync)
}

// NewFilteredSharedInformerFactory constructs a new instance of sharedInformerFactory.
// Listers obtained via this SharedInformerFactory will be subject to the same filters
// as specified here.
// Deprecated: Please use NewSharedInformerFactoryWithOptions instead
func NewFilteredSharedInformerFactory(client versioned.Interface, defaultResync time.Duration, namespace string, tweakListOptions internalinterfaces.TweakListOptionsFunc) SharedInformerFactory {
    return &sharedInformerFactory{
    return NewSharedInformerFactoryWithOptions(client, defaultResync, WithNamespace(namespace), WithTweakListOptions(tweakListOptions))
}

// NewSharedInformerFactoryWithOptions constructs a new instance of a SharedInformerFactory with additional options.
func NewSharedInformerFactoryWithOptions(client versioned.Interface, defaultResync time.Duration, options ...SharedInformerOption) SharedInformerFactory {
    factory := &sharedInformerFactory{
        client: client,
        namespace: namespace,
        tweakListOptions: tweakListOptions,
        namespace: v1.NamespaceAll,
        defaultResync: defaultResync,
        informers: make(map[reflect.Type]cache.SharedIndexInformer),
        startedInformers: make(map[reflect.Type]bool),
        customResync: make(map[reflect.Type]time.Duration),
    }

    // Apply all options
    for _, opt := range options {
        factory = opt(factory)
    }

    return factory
}
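The refactor above is the standard functional-options pattern: defaults live in the constructor, and each `SharedInformerOption` receives the factory, mutates it, and returns it. A stripped-down standalone sketch of the same shape — field names mirror the factory, but the informer machinery is omitted:

```go
package main

import "fmt"

// factory is a cut-down stand-in for sharedInformerFactory.
type factory struct {
	namespace     string
	defaultResync int
}

// Option mirrors SharedInformerOption: mutate the factory, return it.
type Option func(*factory) *factory

// WithNamespace limits the factory to one namespace, like the real option.
func WithNamespace(ns string) Option {
	return func(f *factory) *factory {
		f.namespace = ns
		return f
	}
}

// New sets defaults first, then applies each option in order, so later
// options win over defaults and over earlier options.
func New(defaultResync int, opts ...Option) *factory {
	f := &factory{namespace: "", defaultResync: defaultResync}
	for _, opt := range opts {
		f = opt(f)
	}
	return f
}

func main() {
	f := New(30, WithNamespace("argocd"))
	fmt.Println(f.namespace, f.defaultResync) // argocd 30
}
```

The deprecated `NewFilteredSharedInformerFactory` then becomes a one-line wrapper that forwards its positional parameters as options, which is exactly how the diff rewrites it.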
|
||||
|
||||
// Start initializes all requested informers.
|
||||
@@ -92,7 +135,13 @@ func (f *sharedInformerFactory) InformerFor(obj runtime.Object, newFunc internal
|
||||
if exists {
|
||||
return informer
|
||||
}
|
||||
informer = newFunc(f.client, f.defaultResync)
|
||||
|
||||
resyncPeriod, exists := f.customResync[informerType]
|
||||
if !exists {
|
||||
resyncPeriod = f.defaultResync
|
||||
}
|
||||
|
||||
informer = newFunc(f.client, resyncPeriod)
|
||||
f.informers[informerType] = informer
|
||||
|
||||
return informer
|
||||
|
||||
@@ -1,10 +1,13 @@
|
||||
package reposerver
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
|
||||
"github.com/argoproj/argo-cd/reposerver/repository"
|
||||
"github.com/argoproj/argo-cd/util"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/credentials"
|
||||
)
|
||||
|
||||
// Clientset represets repository server api clients
|
||||
@@ -17,7 +20,7 @@ type clientSet struct {
|
||||
}
|
||||
|
||||
func (c *clientSet) NewRepositoryClient() (util.Closer, repository.RepositoryServiceClient, error) {
|
||||
conn, err := grpc.Dial(c.address, grpc.WithInsecure())
|
||||
conn, err := grpc.Dial(c.address, grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{InsecureSkipVerify: true})))
|
||||
if err != nil {
|
||||
log.Errorf("Unable to connect to repository service with address %s", c.address)
|
||||
return nil, nil, err
|
||||
|
||||
@@ -11,11 +11,12 @@ import (
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/ksonnet/ksonnet/pkg/app"
|
||||
"github.com/google/go-jsonnet"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"google.golang.org/grpc/codes"
|
||||
"google.golang.org/grpc/status"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
|
||||
"github.com/argoproj/argo-cd/common"
|
||||
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
|
||||
@@ -60,17 +61,7 @@ func NewService(gitFactory git.ClientFactory, cache cache.Cache) *Service {

// ListDir lists the contents of a GitHub repo
func (s *Service) ListDir(ctx context.Context, q *ListDirRequest) (*FileList, error) {
    appRepoPath := tempRepoPath(q.Repo.Repo)
    s.repoLock.Lock(appRepoPath)
    defer s.repoLock.Unlock(appRepoPath)

    gitClient := s.gitFactory.NewClient(q.Repo.Repo, appRepoPath, q.Repo.Username, q.Repo.Password, q.Repo.SSHPrivateKey)
    err := gitClient.Init()
    if err != nil {
        return nil, err
    }

    commitSHA, err := gitClient.LsRemote(q.Revision)
    gitClient, commitSHA, err := s.newClientResolveRevision(q.Repo, q.Revision)
    if err != nil {
        return nil, err
    }
@@ -82,7 +73,9 @@ func (s *Service) ListDir(ctx context.Context, q *ListDirRequest) (*FileList, er
        return &res, nil
    }

    err = checkoutRevision(gitClient, q.Revision)
    s.repoLock.Lock(gitClient.Root())
    defer s.repoLock.Unlock(gitClient.Root())
    commitSHA, err = checkoutRevision(gitClient, commitSHA)
    if err != nil {
        return nil, err
    }
@@ -96,7 +89,7 @@ func (s *Service) ListDir(ctx context.Context, q *ListDirRequest) (*FileList, er
        Items: lsFiles,
    }
    err = s.cache.Set(&cache.Item{
        Key:        cacheKey,
        Key:        listDirCacheKey(commitSHA, q),
        Object:     &res,
        Expiration: DefaultRepoCacheExpiration,
    })
@@ -107,16 +100,21 @@ func (s *Service) ListDir(ctx context.Context, q *ListDirRequest) (*FileList, er
}

func (s *Service) GetFile(ctx context.Context, q *GetFileRequest) (*GetFileResponse, error) {
    appRepoPath := tempRepoPath(q.Repo.Repo)
    s.repoLock.Lock(appRepoPath)
    defer s.repoLock.Unlock(appRepoPath)

    gitClient := s.gitFactory.NewClient(q.Repo.Repo, appRepoPath, q.Repo.Username, q.Repo.Password, q.Repo.SSHPrivateKey)
    err := gitClient.Init()
    gitClient, commitSHA, err := s.newClientResolveRevision(q.Repo, q.Revision)
    if err != nil {
        return nil, err
    }
    err = checkoutRevision(gitClient, q.Revision)
    cacheKey := getFileCacheKey(commitSHA, q)
    var res GetFileResponse
    err = s.cache.Get(cacheKey, &res)
    if err == nil {
        log.Infof("getfile cache hit: %s", cacheKey)
        return &res, nil
    }

    s.repoLock.Lock(gitClient.Root())
    defer s.repoLock.Unlock(gitClient.Root())
    commitSHA, err = checkoutRevision(gitClient, commitSHA)
    if err != nil {
        return nil, err
    }
@@ -124,36 +122,27 @@ func (s *Service) GetFile(ctx context.Context, q *GetFileRequest) (*GetFileRespo
    if err != nil {
        return nil, err
    }
    res := GetFileResponse{
    res = GetFileResponse{
        Data: data,
    }
    err = s.cache.Set(&cache.Item{
        Key:        getFileCacheKey(commitSHA, q),
        Object:     &res,
        Expiration: DefaultRepoCacheExpiration,
    })
    if err != nil {
        log.Warnf("getfile cache set error %s: %v", cacheKey, err)
    }
    return &res, nil
}

func (s *Service) GenerateManifest(c context.Context, q *ManifestRequest) (*ManifestResponse, error) {
    var res ManifestResponse
    if git.IsCommitSHA(q.Revision) {
        cacheKey := manifestCacheKey(q.Revision, q)
        err := s.cache.Get(cacheKey, res)
        if err == nil {
            log.Infof("manifest cache hit: %s", cacheKey)
            return &res, nil
        }
    }
    appRepoPath := tempRepoPath(q.Repo.Repo)
    s.repoLock.Lock(appRepoPath)
    defer s.repoLock.Unlock(appRepoPath)

    gitClient := s.gitFactory.NewClient(q.Repo.Repo, appRepoPath, q.Repo.Username, q.Repo.Password, q.Repo.SSHPrivateKey)
    err := gitClient.Init()
    if err != nil {
        return nil, err
    }
    commitSHA, err := gitClient.LsRemote(q.Revision)
    gitClient, commitSHA, err := s.newClientResolveRevision(q.Repo, q.Revision)
    if err != nil {
        return nil, err
    }
    cacheKey := manifestCacheKey(commitSHA, q)
    var res ManifestResponse
    err = s.cache.Get(cacheKey, &res)
    if err == nil {
        log.Infof("manifest cache hit: %s", cacheKey)
@@ -165,11 +154,13 @@ func (s *Service) GenerateManifest(c context.Context, q *ManifestRequest) (*Mani
        log.Infof("manifest cache miss: %s", cacheKey)
    }

    err = checkoutRevision(gitClient, q.Revision)
    s.repoLock.Lock(gitClient.Root())
    defer s.repoLock.Unlock(gitClient.Root())
    commitSHA, err = checkoutRevision(gitClient, commitSHA)
    if err != nil {
        return nil, err
    }
    appPath := path.Join(appRepoPath, q.Path)
    appPath := path.Join(gitClient.Root(), q.Path)

    genRes, err := generateManifests(appPath, q)
    if err != nil {
@@ -178,7 +169,7 @@ func (s *Service) GenerateManifest(c context.Context, q *ManifestRequest) (*Mani
    res = *genRes
    res.Revision = commitSHA
    err = s.cache.Set(&cache.Item{
        Key:        cacheKey,
        Key:        manifestCacheKey(commitSHA, q),
        Object:     res,
        Expiration: DefaultRepoCacheExpiration,
    })
@@ -192,16 +183,20 @@ func (s *Service) GenerateManifest(c context.Context, q *ManifestRequest) (*Mani
func generateManifests(appPath string, q *ManifestRequest) (*ManifestResponse, error) {
    var targetObjs []*unstructured.Unstructured
    var params []*v1alpha1.ComponentParameter
    var env *app.EnvironmentConfig
    var dest *v1alpha1.ApplicationDestination
    var err error

    appSourceType := IdentifyAppSourceTypeByAppDir(appPath)
    switch appSourceType {
    case AppSourceKsonnet:
        targetObjs, params, env, err = ksShow(appPath, q.Environment, q.ComponentParameterOverrides)
        targetObjs, params, dest, err = ksShow(appPath, q.Environment, q.ComponentParameterOverrides)
    case AppSourceHelm:
        h := helm.NewHelmApp(appPath)
        targetObjs, err = h.Template(q.AppLabel, q.ValueFiles, q.ComponentParameterOverrides)
        err = h.DependencyBuild()
        if err != nil {
            return nil, err
        }
        targetObjs, err = h.Template(q.AppLabel, q.Namespace, q.ValueFiles, q.ComponentParameterOverrides)
        if err != nil {
            return nil, err
        }
@@ -218,30 +213,49 @@ func generateManifests(appPath string, q *ManifestRequest) (*ManifestResponse, e
    if err != nil {
        return nil, err
    }
    // TODO(jessesuen): we need to sort objects based on their dependency order of creation

    manifests := make([]string, len(targetObjs))
    for i, target := range targetObjs {
        if q.AppLabel != "" {
            err = kube.SetLabel(target, common.LabelApplicationName, q.AppLabel)
    manifests := make([]string, 0)
    for _, obj := range targetObjs {
        var targets []*unstructured.Unstructured
        if obj.IsList() {
            err = obj.EachListItem(func(object runtime.Object) error {
                unstructuredObj, ok := object.(*unstructured.Unstructured)
                if ok {
                    targets = append(targets, unstructuredObj)
                    return nil
                } else {
                    return fmt.Errorf("resource list item has unexpected type")
                }
            })
            if err != nil {
                return nil, err
            }
        } else {
            targets = []*unstructured.Unstructured{obj}
        }
        manifestStr, err := json.Marshal(target.Object)
        if err != nil {
            return nil, err

        for _, target := range targets {
            if q.AppLabel != "" && !kube.IsCRD(target) {
                err = kube.SetLabel(target, common.LabelApplicationName, q.AppLabel)
                if err != nil {
                    return nil, err
                }
            }
            manifestStr, err := json.Marshal(target.Object)
            if err != nil {
                return nil, err
            }
            manifests = append(manifests, string(manifestStr))
        }
        manifests[i] = string(manifestStr)
    }

    res := ManifestResponse{
        Manifests: manifests,
        Params:    params,
    }
    if env != nil {
        res.Namespace = env.Destination.Namespace
        res.Server = env.Destination.Server
    if dest != nil {
        res.Namespace = dest.Namespace
        res.Server = dest.Server
    }
    return &res, nil
}
@@ -280,34 +294,39 @@ func IdentifyAppSourceTypeByAppPath(appFilePath string) AppSourceType {
}

// checkoutRevision is a convenience function to initialize a repo, fetch, and checkout a revision
func checkoutRevision(gitClient git.Client, revision string) error {
    err := gitClient.Fetch()
// Returns the 40 character commit SHA after the checkout has been performed
func checkoutRevision(gitClient git.Client, commitSHA string) (string, error) {
    err := gitClient.Init()
    if err != nil {
        return err
        return "", status.Errorf(codes.Internal, "Failed to initialize git repo: %v", err)
    }
    err = gitClient.Reset()
    err = gitClient.Fetch()
    if err != nil {
        log.Warn(err)
        return "", status.Errorf(codes.Internal, "Failed to fetch git repo: %v", err)
    }
    err = gitClient.Checkout(revision)
    err = gitClient.Checkout(commitSHA)
    if err != nil {
        return err
        return "", status.Errorf(codes.Internal, "Failed to checkout %s: %v", commitSHA, err)
    }
    return nil
    return gitClient.CommitSHA()
}

func manifestCacheKey(commitSHA string, q *ManifestRequest) string {
    pStr, _ := json.Marshal(q.ComponentParameterOverrides)
    valuesFiles := strings.Join(q.ValueFiles, ",")
    return fmt.Sprintf("mfst|%s|%s|%s|%s|%s|%s", q.AppLabel, q.Path, q.Environment, commitSHA, string(pStr), valuesFiles)
    return fmt.Sprintf("mfst|%s|%s|%s|%s|%s|%s|%s", q.AppLabel, q.Path, q.Environment, commitSHA, string(pStr), valuesFiles, q.Namespace)
}

func listDirCacheKey(commitSHA string, q *ListDirRequest) string {
    return fmt.Sprintf("ldir|%s|%s", q.Path, commitSHA)
}

func getFileCacheKey(commitSHA string, q *GetFileRequest) string {
    return fmt.Sprintf("gfile|%s|%s", q.Path, commitSHA)
}

// ksShow runs `ks show` in an app directory after setting any component parameter overrides
func ksShow(appPath, envName string, overrides []*v1alpha1.ComponentParameter) ([]*unstructured.Unstructured, []*v1alpha1.ComponentParameter, *app.EnvironmentConfig, error) {
func ksShow(appPath, envName string, overrides []*v1alpha1.ComponentParameter) ([]*unstructured.Unstructured, []*v1alpha1.ComponentParameter, *v1alpha1.ApplicationDestination, error) {
    ksApp, err := ksonnet.NewKsonnetApp(appPath)
    if err != nil {
        return nil, nil, nil, status.Errorf(codes.FailedPrecondition, "unable to load application from %s: %v", appPath, err)
@@ -324,8 +343,7 @@ func ksShow(appPath, envName string, overrides []*v1alpha1.ComponentParameter) (
            }
        }
    }
    appSpec := ksApp.App()
    env, err := appSpec.Environment(envName)
    dest, err := ksApp.Destination(envName)
    if err != nil {
        return nil, nil, nil, status.Errorf(codes.NotFound, "environment %q does not exist in ksonnet app", envName)
    }
@@ -333,10 +351,10 @@ func ksShow(appPath, envName string, overrides []*v1alpha1.ComponentParameter) (
    if err != nil {
        return nil, nil, nil, err
    }
    return targetObjs, params, env, nil
    return targetObjs, params, dest, nil
}

var manifestFile = regexp.MustCompile(`^.*\.(yaml|yml|json)$`)
var manifestFile = regexp.MustCompile(`^.*\.(yaml|yml|json|jsonnet)$`)

// findManifests looks at all yaml files in a directory and unmarshals them into a list of unstructured objects
func findManifests(appPath string) ([]*unstructured.Unstructured, error) {
@@ -360,6 +378,29 @@ func findManifests(appPath string) ([]*unstructured.Unstructured, error) {
            return nil, status.Errorf(codes.FailedPrecondition, "Failed to unmarshal %q: %v", f.Name(), err)
        }
        objs = append(objs, &obj)
    } else if strings.HasSuffix(f.Name(), ".jsonnet") {
        vm := jsonnet.MakeVM()
        vm.Importer(&jsonnet.FileImporter{
            JPaths: []string{appPath},
        })
        jsonStr, err := vm.EvaluateSnippet(f.Name(), string(out))
        if err != nil {
            return nil, status.Errorf(codes.FailedPrecondition, "Failed to evaluate jsonnet %q: %v", f.Name(), err)
        }

        // attempt to unmarshal either array or single object
        var jsonObjs []*unstructured.Unstructured
        err = json.Unmarshal([]byte(jsonStr), &jsonObjs)
        if err == nil {
            objs = append(objs, jsonObjs...)
        } else {
            var jsonObj unstructured.Unstructured
            err = json.Unmarshal([]byte(jsonStr), &jsonObj)
            if err != nil {
                return nil, status.Errorf(codes.FailedPrecondition, "Failed to unmarshal generated json %q: %v", f.Name(), err)
            }
            objs = append(objs, &jsonObj)
        }
    } else {
        yamlObjs, err := kube.SplitYAML(string(out))
        if err != nil {
@@ -387,3 +428,18 @@ func pathExists(name string) bool {
    }
    return true
}

// newClientResolveRevision is a helper to perform the common task of instantiating a git client
// and resolving a revision to a commit SHA
func (s *Service) newClientResolveRevision(repo *v1alpha1.Repository, revision string) (git.Client, string, error) {
    appRepoPath := tempRepoPath(repo.Repo)
    gitClient, err := s.gitFactory.NewClient(repo.Repo, appRepoPath, repo.Username, repo.Password, repo.SSHPrivateKey)
    if err != nil {
        return nil, "", err
    }
    commitSHA, err := gitClient.LsRemote(revision)
    if err != nil {
        return nil, "", err
    }
    return gitClient, commitSHA, nil
}

@@ -1,29 +1,15 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: reposerver/repository/repository.proto

/*
Package repository is a generated protocol buffer package.

It is generated from these files:
    reposerver/repository/repository.proto

It has these top-level messages:
    ManifestRequest
    ManifestResponse
    ListDirRequest
    FileList
    GetFileRequest
    GetFileResponse
*/
package repository
package repository // import "github.com/argoproj/argo-cd/reposerver/repository"

import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
import v1alpha1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
import _ "github.com/gogo/protobuf/gogoproto"
import _ "google.golang.org/genproto/googleapis/api/annotations"
import _ "k8s.io/api/core/v1"
import github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"

import context "golang.org/x/net/context"
import grpc "google.golang.org/grpc"
@@ -43,21 +29,53 @@ const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package

// ManifestRequest is a query for manifest generation.
type ManifestRequest struct {
    Repo *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Repository `protobuf:"bytes,1,opt,name=repo" json:"repo,omitempty"`
    Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"`
    Path string `protobuf:"bytes,3,opt,name=path,proto3" json:"path,omitempty"`
    Environment string `protobuf:"bytes,4,opt,name=environment,proto3" json:"environment,omitempty"`
    AppLabel string `protobuf:"bytes,5,opt,name=appLabel,proto3" json:"appLabel,omitempty"`
    ComponentParameterOverrides []*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ComponentParameter `protobuf:"bytes,6,rep,name=componentParameterOverrides" json:"componentParameterOverrides,omitempty"`
    ValueFiles []string `protobuf:"bytes,7,rep,name=valueFiles" json:"valueFiles,omitempty"`
    Repo *v1alpha1.Repository `protobuf:"bytes,1,opt,name=repo" json:"repo,omitempty"`
    Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"`
    Path string `protobuf:"bytes,3,opt,name=path,proto3" json:"path,omitempty"`
    Environment string `protobuf:"bytes,4,opt,name=environment,proto3" json:"environment,omitempty"`
    AppLabel string `protobuf:"bytes,5,opt,name=appLabel,proto3" json:"appLabel,omitempty"`
    ComponentParameterOverrides []*v1alpha1.ComponentParameter `protobuf:"bytes,6,rep,name=componentParameterOverrides" json:"componentParameterOverrides,omitempty"`
    ValueFiles []string `protobuf:"bytes,7,rep,name=valueFiles" json:"valueFiles,omitempty"`
    Namespace string `protobuf:"bytes,8,opt,name=namespace,proto3" json:"namespace,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized []byte `json:"-"`
    XXX_sizecache int32 `json:"-"`
}

func (m *ManifestRequest) Reset()         { *m = ManifestRequest{} }
func (m *ManifestRequest) String() string { return proto.CompactTextString(m) }
func (*ManifestRequest) ProtoMessage()    {}
func (*ManifestRequest) Descriptor() ([]byte, []int) { return fileDescriptorRepository, []int{0} }
func (m *ManifestRequest) Reset()         { *m = ManifestRequest{} }
func (m *ManifestRequest) String() string { return proto.CompactTextString(m) }
func (*ManifestRequest) ProtoMessage()    {}
func (*ManifestRequest) Descriptor() ([]byte, []int) {
    return fileDescriptor_repository_49651600e73b0b40, []int{0}
}
func (m *ManifestRequest) XXX_Unmarshal(b []byte) error {
    return m.Unmarshal(b)
}
func (m *ManifestRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
    if deterministic {
        return xxx_messageInfo_ManifestRequest.Marshal(b, m, deterministic)
    } else {
        b = b[:cap(b)]
        n, err := m.MarshalTo(b)
        if err != nil {
            return nil, err
        }
        return b[:n], nil
    }
}
func (dst *ManifestRequest) XXX_Merge(src proto.Message) {
    xxx_messageInfo_ManifestRequest.Merge(dst, src)
}
func (m *ManifestRequest) XXX_Size() int {
    return m.Size()
}
func (m *ManifestRequest) XXX_DiscardUnknown() {
    xxx_messageInfo_ManifestRequest.DiscardUnknown(m)
}

func (m *ManifestRequest) GetRepo() *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Repository {
var xxx_messageInfo_ManifestRequest proto.InternalMessageInfo

func (m *ManifestRequest) GetRepo() *v1alpha1.Repository {
    if m != nil {
        return m.Repo
    }
@@ -92,7 +110,7 @@ func (m *ManifestRequest) GetAppLabel() string {
    return ""
}

func (m *ManifestRequest) GetComponentParameterOverrides() []*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ComponentParameter {
func (m *ManifestRequest) GetComponentParameterOverrides() []*v1alpha1.ComponentParameter {
    if m != nil {
        return m.ComponentParameterOverrides
    }
@@ -106,18 +124,56 @@ func (m *ManifestRequest) GetValueFiles() []string {
    return nil
}

type ManifestResponse struct {
    Manifests []string `protobuf:"bytes,1,rep,name=manifests" json:"manifests,omitempty"`
    Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
    Server string `protobuf:"bytes,3,opt,name=server,proto3" json:"server,omitempty"`
    Revision string `protobuf:"bytes,4,opt,name=revision,proto3" json:"revision,omitempty"`
    Params []*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ComponentParameter `protobuf:"bytes,5,rep,name=params" json:"params,omitempty"`
func (m *ManifestRequest) GetNamespace() string {
    if m != nil {
        return m.Namespace
    }
    return ""
}

func (m *ManifestResponse) Reset()         { *m = ManifestResponse{} }
func (m *ManifestResponse) String() string { return proto.CompactTextString(m) }
func (*ManifestResponse) ProtoMessage()    {}
func (*ManifestResponse) Descriptor() ([]byte, []int) { return fileDescriptorRepository, []int{1} }
type ManifestResponse struct {
    Manifests []string `protobuf:"bytes,1,rep,name=manifests" json:"manifests,omitempty"`
    Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
    Server string `protobuf:"bytes,3,opt,name=server,proto3" json:"server,omitempty"`
    Revision string `protobuf:"bytes,4,opt,name=revision,proto3" json:"revision,omitempty"`
    Params []*v1alpha1.ComponentParameter `protobuf:"bytes,5,rep,name=params" json:"params,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized []byte `json:"-"`
    XXX_sizecache int32 `json:"-"`
}

func (m *ManifestResponse) Reset()         { *m = ManifestResponse{} }
func (m *ManifestResponse) String() string { return proto.CompactTextString(m) }
func (*ManifestResponse) ProtoMessage()    {}
func (*ManifestResponse) Descriptor() ([]byte, []int) {
    return fileDescriptor_repository_49651600e73b0b40, []int{1}
}
func (m *ManifestResponse) XXX_Unmarshal(b []byte) error {
    return m.Unmarshal(b)
}
func (m *ManifestResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
    if deterministic {
        return xxx_messageInfo_ManifestResponse.Marshal(b, m, deterministic)
    } else {
        b = b[:cap(b)]
        n, err := m.MarshalTo(b)
        if err != nil {
            return nil, err
        }
        return b[:n], nil
    }
}
func (dst *ManifestResponse) XXX_Merge(src proto.Message) {
    xxx_messageInfo_ManifestResponse.Merge(dst, src)
}
func (m *ManifestResponse) XXX_Size() int {
    return m.Size()
}
func (m *ManifestResponse) XXX_DiscardUnknown() {
    xxx_messageInfo_ManifestResponse.DiscardUnknown(m)
}

var xxx_messageInfo_ManifestResponse proto.InternalMessageInfo

func (m *ManifestResponse) GetManifests() []string {
    if m != nil {
@@ -147,7 +203,7 @@ func (m *ManifestResponse) GetRevision() string {
    return ""
}

func (m *ManifestResponse) GetParams() []*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ComponentParameter {
func (m *ManifestResponse) GetParams() []*v1alpha1.ComponentParameter {
    if m != nil {
        return m.Params
    }
@@ -156,17 +212,48 @@ func (m *ManifestResponse) GetParams() []*github_com_argoproj_argo_cd_pkg_apis_a

// ListDirRequest requests a repository directory structure
type ListDirRequest struct {
    Repo *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Repository `protobuf:"bytes,1,opt,name=repo" json:"repo,omitempty"`
    Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"`
    Path string `protobuf:"bytes,3,opt,name=path,proto3" json:"path,omitempty"`
    Repo *v1alpha1.Repository `protobuf:"bytes,1,opt,name=repo" json:"repo,omitempty"`
    Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"`
    Path string `protobuf:"bytes,3,opt,name=path,proto3" json:"path,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized []byte `json:"-"`
    XXX_sizecache int32 `json:"-"`
}

func (m *ListDirRequest) Reset()         { *m = ListDirRequest{} }
func (m *ListDirRequest) String() string { return proto.CompactTextString(m) }
func (*ListDirRequest) ProtoMessage()    {}
func (*ListDirRequest) Descriptor() ([]byte, []int) { return fileDescriptorRepository, []int{2} }
func (m *ListDirRequest) Reset()         { *m = ListDirRequest{} }
func (m *ListDirRequest) String() string { return proto.CompactTextString(m) }
func (*ListDirRequest) ProtoMessage()    {}
func (*ListDirRequest) Descriptor() ([]byte, []int) {
    return fileDescriptor_repository_49651600e73b0b40, []int{2}
}
func (m *ListDirRequest) XXX_Unmarshal(b []byte) error {
    return m.Unmarshal(b)
}
func (m *ListDirRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
    if deterministic {
        return xxx_messageInfo_ListDirRequest.Marshal(b, m, deterministic)
    } else {
        b = b[:cap(b)]
        n, err := m.MarshalTo(b)
        if err != nil {
            return nil, err
        }
        return b[:n], nil
    }
}
func (dst *ListDirRequest) XXX_Merge(src proto.Message) {
    xxx_messageInfo_ListDirRequest.Merge(dst, src)
}
func (m *ListDirRequest) XXX_Size() int {
    return m.Size()
}
func (m *ListDirRequest) XXX_DiscardUnknown() {
    xxx_messageInfo_ListDirRequest.DiscardUnknown(m)
}

func (m *ListDirRequest) GetRepo() *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Repository {
var xxx_messageInfo_ListDirRequest proto.InternalMessageInfo

func (m *ListDirRequest) GetRepo() *v1alpha1.Repository {
    if m != nil {
        return m.Repo
    }
@@ -189,13 +276,44 @@ func (m *ListDirRequest) GetPath() string {

// FileList returns the contents of the repo of a ListDir request
type FileList struct {
    Items []string `protobuf:"bytes,1,rep,name=items" json:"items,omitempty"`
    Items []string `protobuf:"bytes,1,rep,name=items" json:"items,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized []byte `json:"-"`
    XXX_sizecache int32 `json:"-"`
}

func (m *FileList) Reset()         { *m = FileList{} }
func (m *FileList) String() string { return proto.CompactTextString(m) }
func (*FileList) ProtoMessage()    {}
func (*FileList) Descriptor() ([]byte, []int) { return fileDescriptorRepository, []int{3} }
func (m *FileList) Reset()         { *m = FileList{} }
func (m *FileList) String() string { return proto.CompactTextString(m) }
func (*FileList) ProtoMessage()    {}
func (*FileList) Descriptor() ([]byte, []int) {
    return fileDescriptor_repository_49651600e73b0b40, []int{3}
}
func (m *FileList) XXX_Unmarshal(b []byte) error {
    return m.Unmarshal(b)
}
func (m *FileList) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
    if deterministic {
        return xxx_messageInfo_FileList.Marshal(b, m, deterministic)
    } else {
        b = b[:cap(b)]
        n, err := m.MarshalTo(b)
        if err != nil {
            return nil, err
        }
        return b[:n], nil
    }
}
func (dst *FileList) XXX_Merge(src proto.Message) {
    xxx_messageInfo_FileList.Merge(dst, src)
}
func (m *FileList) XXX_Size() int {
    return m.Size()
}
func (m *FileList) XXX_DiscardUnknown() {
    xxx_messageInfo_FileList.DiscardUnknown(m)
}

var xxx_messageInfo_FileList proto.InternalMessageInfo

func (m *FileList) GetItems() []string {
    if m != nil {
@@ -206,17 +324,48 @@ func (m *FileList) GetItems() []string {

// GetFileRequest return
type GetFileRequest struct {
    Repo *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Repository `protobuf:"bytes,1,opt,name=repo" json:"repo,omitempty"`
    Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"`
    Path string `protobuf:"bytes,3,opt,name=path,proto3" json:"path,omitempty"`
    Repo *v1alpha1.Repository `protobuf:"bytes,1,opt,name=repo" json:"repo,omitempty"`
    Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"`
    Path string `protobuf:"bytes,3,opt,name=path,proto3" json:"path,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized []byte `json:"-"`
    XXX_sizecache int32 `json:"-"`
}

func (m *GetFileRequest) Reset()         { *m = GetFileRequest{} }
func (m *GetFileRequest) String() string { return proto.CompactTextString(m) }
func (*GetFileRequest) ProtoMessage()    {}
func (*GetFileRequest) Descriptor() ([]byte, []int) { return fileDescriptorRepository, []int{4} }
func (m *GetFileRequest) Reset()         { *m = GetFileRequest{} }
func (m *GetFileRequest) String() string { return proto.CompactTextString(m) }
func (*GetFileRequest) ProtoMessage()    {}
func (*GetFileRequest) Descriptor() ([]byte, []int) {
    return fileDescriptor_repository_49651600e73b0b40, []int{4}
}
func (m *GetFileRequest) XXX_Unmarshal(b []byte) error {
    return m.Unmarshal(b)
}
func (m *GetFileRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
    if deterministic {
        return xxx_messageInfo_GetFileRequest.Marshal(b, m, deterministic)
    } else {
        b = b[:cap(b)]
        n, err := m.MarshalTo(b)
        if err != nil {
            return nil, err
        }
        return b[:n], nil
    }
}
func (dst *GetFileRequest) XXX_Merge(src proto.Message) {
    xxx_messageInfo_GetFileRequest.Merge(dst, src)
}
func (m *GetFileRequest) XXX_Size() int {
    return m.Size()
}
func (m *GetFileRequest) XXX_DiscardUnknown() {
    xxx_messageInfo_GetFileRequest.DiscardUnknown(m)
}

func (m *GetFileRequest) GetRepo() *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Repository {
var xxx_messageInfo_GetFileRequest proto.InternalMessageInfo

func (m *GetFileRequest) GetRepo() *v1alpha1.Repository {
    if m != nil {
        return m.Repo
    }
@@ -239,13 +388,44 @@ func (m *GetFileRequest) GetPath() string {

// GetFileResponse returns the contents of the file of a GetFile request
type GetFileResponse struct {
    Data []byte `protobuf:"bytes,1,opt,name=data,proto3" json:"data,omitempty"`
    Data []byte `protobuf:"bytes,1,opt,name=data,proto3" json:"data,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized []byte `json:"-"`
    XXX_sizecache int32 `json:"-"`
}

func (m *GetFileResponse) Reset()         { *m = GetFileResponse{} }
func (m *GetFileResponse) String() string { return proto.CompactTextString(m) }
func (*GetFileResponse) ProtoMessage()    {}
func (*GetFileResponse) Descriptor() ([]byte, []int) { return fileDescriptorRepository, []int{5} }
func (m *GetFileResponse) Reset()         { *m = GetFileResponse{} }
func (m *GetFileResponse) String() string { return proto.CompactTextString(m) }
func (*GetFileResponse) ProtoMessage()    {}
func (*GetFileResponse) Descriptor() ([]byte, []int) {
    return fileDescriptor_repository_49651600e73b0b40, []int{5}
}
func (m *GetFileResponse) XXX_Unmarshal(b []byte) error {
    return m.Unmarshal(b)
}
func (m *GetFileResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
    if deterministic {
        return xxx_messageInfo_GetFileResponse.Marshal(b, m, deterministic)
    } else {
        b = b[:cap(b)]
        n, err := m.MarshalTo(b)
        if err != nil {
            return nil, err
        }
        return b[:n], nil
    }
}
func (dst *GetFileResponse) XXX_Merge(src proto.Message) {
    xxx_messageInfo_GetFileResponse.Merge(dst, src)
}
func (m *GetFileResponse) XXX_Size() int {
    return m.Size()
}
func (m *GetFileResponse) XXX_DiscardUnknown() {
    xxx_messageInfo_GetFileResponse.DiscardUnknown(m)
}

var xxx_messageInfo_GetFileResponse proto.InternalMessageInfo

func (m *GetFileResponse) GetData() []byte {
    if m != nil {
@@ -292,7 +472,7 @@ func NewRepositoryServiceClient(cc *grpc.ClientConn) RepositoryServiceClient {

func (c *repositoryServiceClient) GenerateManifest(ctx context.Context, in *ManifestRequest, opts ...grpc.CallOption) (*ManifestResponse, error) {
    out := new(ManifestResponse)
    err := grpc.Invoke(ctx, "/repository.RepositoryService/GenerateManifest", in, out, c.cc, opts...)
    err := c.cc.Invoke(ctx, "/repository.RepositoryService/GenerateManifest", in, out, opts...)
    if err != nil {
        return nil, err
    }
@@ -301,7 +481,7 @@ func (c *repositoryServiceClient) GenerateManifest(ctx context.Context, in *Mani

func (c *repositoryServiceClient) ListDir(ctx context.Context, in *ListDirRequest, opts ...grpc.CallOption) (*FileList, error) {
    out := new(FileList)
    err := grpc.Invoke(ctx, "/repository.RepositoryService/ListDir", in, out, c.cc, opts...)
    err := c.cc.Invoke(ctx, "/repository.RepositoryService/ListDir", in, out, opts...)
    if err != nil {
        return nil, err
    }
@@ -310,7 +490,7 @@ func (c *repositoryServiceClient) ListDir(ctx context.Context, in *ListDirReques

func (c *repositoryServiceClient) GetFile(ctx context.Context, in *GetFileRequest, opts ...grpc.CallOption) (*GetFileResponse, error) {
    out := new(GetFileResponse)
    err := grpc.Invoke(ctx, "/repository.RepositoryService/GetFile", in, out, c.cc, opts...)
    err := c.cc.Invoke(ctx, "/repository.RepositoryService/GetFile", in, out, opts...)
    if err != nil {
        return nil, err
    }
@@ -483,6 +663,15 @@ func (m *ManifestRequest) MarshalTo(dAtA []byte) (int, error) {
|
||||
i += copy(dAtA[i:], s)
|
||||
}
|
||||
}
|
||||
if len(m.Namespace) > 0 {
|
||||
dAtA[i] = 0x42
|
||||
i++
|
||||
i = encodeVarintRepository(dAtA, i, uint64(len(m.Namespace)))
|
||||
i += copy(dAtA[i:], m.Namespace)
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
i += copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
return i, nil
|
||||
}
|
||||
|
||||
@@ -546,6 +735,9 @@ func (m *ManifestResponse) MarshalTo(dAtA []byte) (int, error) {
|
||||
i += n
|
||||
}
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
i += copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
return i, nil
|
||||
}
|
||||
|
||||
@@ -586,6 +778,9 @@ func (m *ListDirRequest) MarshalTo(dAtA []byte) (int, error) {
|
||||
i = encodeVarintRepository(dAtA, i, uint64(len(m.Path)))
|
||||
i += copy(dAtA[i:], m.Path)
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
i += copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
return i, nil
|
||||
}
|
||||
|
||||
@@ -619,6 +814,9 @@ func (m *FileList) MarshalTo(dAtA []byte) (int, error) {
|
||||
i += copy(dAtA[i:], s)
|
||||
}
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
i += copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
return i, nil
|
||||
}
|
||||
|
||||
@@ -659,6 +857,9 @@ func (m *GetFileRequest) MarshalTo(dAtA []byte) (int, error) {
|
||||
i = encodeVarintRepository(dAtA, i, uint64(len(m.Path)))
|
||||
i += copy(dAtA[i:], m.Path)
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
i += copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
return i, nil
|
||||
}
|
||||
|
||||
@@ -683,6 +884,9 @@ func (m *GetFileResponse) MarshalTo(dAtA []byte) (int, error) {
|
||||
i = encodeVarintRepository(dAtA, i, uint64(len(m.Data)))
|
||||
i += copy(dAtA[i:], m.Data)
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
i += copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
return i, nil
|
||||
}
|
||||
|
||||
@@ -730,6 +934,13 @@ func (m *ManifestRequest) Size() (n int) {
|
||||
n += 1 + l + sovRepository(uint64(l))
|
||||
}
|
||||
}
|
||||
l = len(m.Namespace)
|
||||
if l > 0 {
|
||||
n += 1 + l + sovRepository(uint64(l))
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
@@ -760,6 +971,9 @@ func (m *ManifestResponse) Size() (n int) {
|
||||
n += 1 + l + sovRepository(uint64(l))
|
||||
}
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
@@ -778,6 +992,9 @@ func (m *ListDirRequest) Size() (n int) {
|
||||
if l > 0 {
|
||||
n += 1 + l + sovRepository(uint64(l))
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
@@ -790,6 +1007,9 @@ func (m *FileList) Size() (n int) {
|
||||
n += 1 + l + sovRepository(uint64(l))
|
||||
}
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
@@ -808,6 +1028,9 @@ func (m *GetFileRequest) Size() (n int) {
|
||||
if l > 0 {
|
||||
n += 1 + l + sovRepository(uint64(l))
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
@@ -818,6 +1041,9 @@ func (m *GetFileResponse) Size() (n int) {
|
||||
if l > 0 {
|
||||
n += 1 + l + sovRepository(uint64(l))
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
@@ -890,7 +1116,7 @@ func (m *ManifestRequest) Unmarshal(dAtA []byte) error {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if m.Repo == nil {
|
||||
m.Repo = &github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Repository{}
|
||||
m.Repo = &v1alpha1.Repository{}
|
||||
}
|
||||
if err := m.Repo.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
@@ -1038,7 +1264,7 @@ func (m *ManifestRequest) Unmarshal(dAtA []byte) error {
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.ComponentParameterOverrides = append(m.ComponentParameterOverrides, &github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ComponentParameter{})
|
||||
m.ComponentParameterOverrides = append(m.ComponentParameterOverrides, &v1alpha1.ComponentParameter{})
|
||||
if err := m.ComponentParameterOverrides[len(m.ComponentParameterOverrides)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -1072,6 +1298,35 @@ func (m *ManifestRequest) Unmarshal(dAtA []byte) error {
|
||||
}
|
||||
m.ValueFiles = append(m.ValueFiles, string(dAtA[iNdEx:postIndex]))
|
||||
iNdEx = postIndex
|
||||
case 8:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Namespace", wireType)
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowRepository
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
stringLen |= (uint64(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
intStringLen := int(stringLen)
|
||||
if intStringLen < 0 {
|
||||
return ErrInvalidLengthRepository
|
||||
}
|
||||
postIndex := iNdEx + intStringLen
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.Namespace = string(dAtA[iNdEx:postIndex])
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipRepository(dAtA[iNdEx:])
|
||||
@@ -1084,6 +1339,7 @@ func (m *ManifestRequest) Unmarshal(dAtA []byte) error {
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
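The Namespace field above is length-delimited, so the generated decoder first reads its length with protobuf's base-128 varint scheme (the `shift`/`0x7F` loop in the hunk). A minimal standalone sketch of the matching encode and decode loops, with hypothetical helper names mirroring `encodeVarintRepository` and the generated Unmarshal logic:

```go
package main

import "fmt"

// encodeVarint appends v to buf as a protobuf base-128 varint:
// 7 payload bits per byte, high bit set on every byte except the last.
func encodeVarint(buf []byte, v uint64) []byte {
	for v >= 0x80 {
		buf = append(buf, byte(v)|0x80)
		v >>= 7
	}
	return append(buf, byte(v))
}

// decodeVarint is the same shift/mask loop the generated Unmarshal uses,
// returning the value and the number of bytes consumed.
func decodeVarint(buf []byte) (uint64, int, error) {
	var v uint64
	for i, shift := 0, uint(0); i < len(buf); i, shift = i+1, shift+7 {
		if shift >= 64 {
			return 0, 0, fmt.Errorf("proto: integer overflow")
		}
		b := buf[i]
		v |= (uint64(b) & 0x7F) << shift
		if b < 0x80 {
			return v, i + 1, nil
		}
	}
	return 0, 0, fmt.Errorf("proto: unexpected EOF")
}

func main() {
	buf := encodeVarint(nil, 300)
	fmt.Printf("% x\n", buf) // 300 encodes as: ac 02
	v, n, err := decodeVarint(buf)
	fmt.Println(v, n, err)
}
```

The `0x42` byte written before the Namespace payload in MarshalTo is the field tag itself, which is encoded the same way (field number 8, wire type 2).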
@@ -1264,7 +1520,7 @@ func (m *ManifestResponse) Unmarshal(dAtA []byte) error {
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.Params = append(m.Params, &github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ComponentParameter{})
			m.Params = append(m.Params, &v1alpha1.ComponentParameter{})
			if err := m.Params[len(m.Params)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
				return err
			}
@@ -1281,6 +1537,7 @@ func (m *ManifestResponse) Unmarshal(dAtA []byte) error {
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}
@@ -1346,7 +1603,7 @@ func (m *ListDirRequest) Unmarshal(dAtA []byte) error {
				return io.ErrUnexpectedEOF
			}
			if m.Repo == nil {
				m.Repo = &github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Repository{}
				m.Repo = &v1alpha1.Repository{}
			}
			if err := m.Repo.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
				return err
@@ -1422,6 +1679,7 @@ func (m *ListDirRequest) Unmarshal(dAtA []byte) error {
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}
@@ -1501,6 +1759,7 @@ func (m *FileList) Unmarshal(dAtA []byte) error {
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}
@@ -1566,7 +1825,7 @@ func (m *GetFileRequest) Unmarshal(dAtA []byte) error {
				return io.ErrUnexpectedEOF
			}
			if m.Repo == nil {
				m.Repo = &github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Repository{}
				m.Repo = &v1alpha1.Repository{}
			}
			if err := m.Repo.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
				return err
@@ -1642,6 +1901,7 @@ func (m *GetFileRequest) Unmarshal(dAtA []byte) error {
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}
@@ -1723,6 +1983,7 @@ func (m *GetFileResponse) Unmarshal(dAtA []byte) error {
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}
@@ -1837,44 +2098,47 @@ var (
	ErrIntOverflowRepository = fmt.Errorf("proto: integer overflow")
)

func init() { proto.RegisterFile("reposerver/repository/repository.proto", fileDescriptorRepository) }

var fileDescriptorRepository = []byte{
	// 576 bytes of a gzipped FileDescriptorProto
	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x54, 0xcd, 0x6e, 0xd3, 0x40,
	0x10, 0xae, 0xc9, 0x5f, 0xb3, 0x41, 0xb4, 0xac, 0x22, 0x64, 0x39, 0x51, 0x64, 0x59, 0x02, 0xe5,
	0x82, 0xad, 0x84, 0x0b, 0x17, 0x84, 0x04, 0x85, 0x0a, 0xa9, 0x55, 0x91, 0x39, 0xc1, 0x05, 0x6d,
	0x9c, 0xc1, 0x59, 0x62, 0xef, 0x2e, 0xbb, 0x1b, 0x4b, 0xbc, 0x03, 0x12, 0x0f, 0xc0, 0x0b, 0x71,
	0xe4, 0x11, 0x50, 0x6e, 0x7d, 0x0b, 0xe4, 0x8d, 0x1d, 0x3b, 0x6d, 0xd4, 0x0b, 0xaa, 0xd4, 0xdb,
	0xcc, 0x37, 0xb3, 0xf3, 0xcd, 0x7e, 0x33, 0x1a, 0xf4, 0x44, 0x82, 0xe0, 0x0a, 0x64, 0x06, 0x32,
	0x30, 0x26, 0xd5, 0x5c, 0x7e, 0xaf, 0x99, 0xbe, 0x90, 0x5c, 0x73, 0x8c, 0x2a, 0xc4, 0xe9, 0xc7,
	0x3c, 0xe6, 0x06, 0x0e, 0x72, 0x6b, 0x93, 0xe1, 0x0c, 0x63, 0xce, 0xe3, 0x04, 0x02, 0x22, 0x68,
	0x40, 0x18, 0xe3, 0x9a, 0x68, 0xca, 0x99, 0x2a, 0xa2, 0xde, 0xf2, 0xb9, 0xf2, 0x29, 0x37, 0xd1,
	0x88, 0x4b, 0x08, 0xb2, 0x49, 0x10, 0x03, 0x03, 0x49, 0x34, 0xcc, 0x8b, 0x9c, 0x77, 0x31, 0xd5,
	0x8b, 0xd5, 0xcc, 0x8f, 0x78, 0x1a, 0x10, 0x69, 0x28, 0xbe, 0x1a, 0xe3, 0x69, 0x34, 0x0f, 0xc4,
	0x32, 0xce, 0x1f, 0xab, 0x80, 0x08, 0x91, 0xd0, 0xc8, 0x14, 0x0f, 0xb2, 0x09, 0x49, 0xc4, 0x82,
	0x5c, 0x2b, 0xe5, 0xfd, 0x68, 0xa0, 0xa3, 0x73, 0xc2, 0xe8, 0x17, 0x50, 0x3a, 0x84, 0x6f, 0x2b,
	0x50, 0x1a, 0x7f, 0x44, 0xcd, 0xfc, 0x13, 0xb6, 0xe5, 0x5a, 0xe3, 0xde, 0xf4, 0x8d, 0x5f, 0xb1,
	0xf9, 0x25, 0x9b, 0x31, 0x3e, 0x47, 0x73, 0x5f, 0x2c, 0x63, 0x3f, 0x67, 0xf3, 0x6b, 0x6c, 0x7e,
	0xc9, 0xe6, 0x87, 0x5b, 0x2d, 0x42, 0x53, 0x12, 0x3b, 0xe8, 0x50, 0x42, 0x46, 0x15, 0xe5, 0xcc,
	0xbe, 0xe7, 0x5a, 0xe3, 0x6e, 0xb8, 0xf5, 0x31, 0x46, 0x4d, 0x41, 0xf4, 0xc2, 0x6e, 0x18, 0xdc,
	0xd8, 0xd8, 0x45, 0x3d, 0x60, 0x19, 0x95, 0x9c, 0xa5, 0xc0, 0xb4, 0xdd, 0x34, 0xa1, 0x3a, 0x94,
	0x57, 0x24, 0x42, 0x9c, 0x91, 0x19, 0x24, 0x76, 0x6b, 0x53, 0xb1, 0xf4, 0xf1, 0x4f, 0x0b, 0x0d,
	0x22, 0x9e, 0x0a, 0xce, 0x80, 0xe9, 0xf7, 0x44, 0x92, 0x14, 0x34, 0xc8, 0x8b, 0x0c, 0xa4, 0xa4,
	0x73, 0x50, 0x76, 0xdb, 0x6d, 0x8c, 0x7b, 0xd3, 0xf3, 0xff, 0xf8, 0xe0, 0xeb, 0x6b, 0xd5, 0xc3,
	0x9b, 0x18, 0xf1, 0x08, 0xa1, 0x8c, 0x24, 0x2b, 0x78, 0x4b, 0x13, 0x50, 0x76, 0xc7, 0x6d, 0x8c,
	0xbb, 0x61, 0x0d, 0xf1, 0x2e, 0x2d, 0x74, 0x5c, 0x8d, 0x43, 0x09, 0xce, 0x14, 0xe0, 0x21, 0xea,
	0xa6, 0x05, 0xa6, 0x6c, 0xcb, 0xbc, 0xa9, 0x80, 0x3c, 0xca, 0x48, 0x0a, 0x4a, 0x90, 0x08, 0x0a,
	0x4d, 0x2b, 0x00, 0x3f, 0x42, 0xed, 0xcd, 0xd2, 0x16, 0xb2, 0x16, 0xde, 0xce, 0x20, 0x9a, 0x57,
	0x06, 0x01, 0xa8, 0x2d, 0xf2, 0xd6, 0x95, 0xdd, 0xba, 0x0d, 0x81, 0x8a, 0xe2, 0xde, 0x2f, 0x0b,
	0x3d, 0x38, 0xa3, 0x4a, 0x9f, 0x50, 0x79, 0xf7, 0x36, 0xcf, 0x73, 0xd1, 0x61, 0x3e, 0x92, 0xbc,
	0x41, 0xdc, 0x47, 0x2d, 0xaa, 0x21, 0x2d, 0xc5, 0xdf, 0x38, 0xa6, 0xff, 0x53, 0xd0, 0x79, 0xd6,
	0x1d, 0xec, 0xff, 0x31, 0x3a, 0xda, 0x36, 0x57, 0xec, 0x11, 0x46, 0xcd, 0x39, 0xd1, 0xc4, 0x74,
	0x77, 0x3f, 0x34, 0xf6, 0xf4, 0xd2, 0x42, 0x0f, 0x2b, 0xae, 0x0f, 0x20, 0x33, 0x1a, 0x01, 0xbe,
	0x40, 0xc7, 0xa7, 0xc5, 0xa1, 0x28, 0xb7, 0x11, 0x0f, 0xfc, 0xda, 0xad, 0xbb, 0x72, 0x32, 0x9c,
	0xe1, 0xfe, 0xe0, 0x86, 0xd8, 0x3b, 0xc0, 0x2f, 0x50, 0xa7, 0x18, 0x35, 0x76, 0xea, 0xa9, 0xbb,
	0xf3, 0x77, 0xfa, 0xf5, 0x58, 0x29, 0xbf, 0x77, 0x80, 0x4f, 0x50, 0xa7, 0xf8, 0xcc, 0xee, 0xf3,
	0x5d, 0xf9, 0x9d, 0xc1, 0xde, 0x58, 0xd9, 0xc4, 0xab, 0x97, 0xbf, 0xd7, 0x23, 0xeb, 0xcf, 0x7a,
	0x64, 0xfd, 0x5d, 0x8f, 0xac, 0x4f, 0x93, 0x9b, 0x8e, 0xe8, 0xde, 0x63, 0x3f, 0x6b, 0x9b, 0x9b,
	0xf9, 0xec, 0x5f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x12, 0x81, 0x49, 0x46, 0x0c, 0x06, 0x00, 0x00,
func init() {
	proto.RegisterFile("reposerver/repository/repository.proto", fileDescriptor_repository_49651600e73b0b40)
}

var fileDescriptor_repository_49651600e73b0b40 = []byte{
	// 584 bytes of a gzipped FileDescriptorProto
	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x55, 0xdd, 0x8a, 0xd3, 0x40,
	0x14, 0xde, 0x6c, 0xbb, 0xdd, 0x76, 0x2a, 0xee, 0x3a, 0x14, 0x09, 0x69, 0x29, 0x21, 0xa0, 0xf4,
	0xc6, 0x84, 0xd6, 0x1b, 0x6f, 0x44, 0xd0, 0xd5, 0x45, 0xd8, 0x65, 0x25, 0x5e, 0xe9, 0x8d, 0x4c,
	0xd3, 0x63, 0x3a, 0x36, 0x99, 0x19, 0x67, 0xa6, 0x01, 0x9f, 0xc2, 0x07, 0xf0, 0x0d, 0x7c, 0x12,
	0x2f, 0x7d, 0x04, 0xe9, 0xdd, 0xbe, 0x85, 0x64, 0x9a, 0x34, 0x69, 0xb7, 0xec, 0x8d, 0x08, 0x7b,
	0x77, 0xe6, 0x3b, 0x27, 0xdf, 0x77, 0xfe, 0x38, 0x41, 0x8f, 0x25, 0x08, 0xae, 0x40, 0x66, 0x20,
	0x03, 0x63, 0x52, 0xcd, 0xe5, 0xb7, 0x9a, 0xe9, 0x0b, 0xc9, 0x35, 0xc7, 0xa8, 0x42, 0x9c, 0x5e,
	0xcc, 0x63, 0x6e, 0xe0, 0x20, 0xb7, 0xd6, 0x11, 0xce, 0x20, 0xe6, 0x3c, 0x4e, 0x20, 0x20, 0x82,
	0x06, 0x84, 0x31, 0xae, 0x89, 0xa6, 0x9c, 0xa9, 0xc2, 0xeb, 0x2d, 0x9e, 0x29, 0x9f, 0x72, 0xe3,
	0x8d, 0xb8, 0x84, 0x20, 0x1b, 0x07, 0x31, 0x30, 0x90, 0x44, 0xc3, 0xac, 0x88, 0x79, 0x1b, 0x53,
	0x3d, 0x5f, 0x4e, 0xfd, 0x88, 0xa7, 0x01, 0x91, 0x46, 0xe2, 0x8b, 0x31, 0x9e, 0x44, 0xb3, 0x40,
	0x2c, 0xe2, 0xfc, 0x63, 0x15, 0x10, 0x21, 0x12, 0x1a, 0x19, 0xf2, 0x20, 0x1b, 0x93, 0x44, 0xcc,
	0xc9, 0x0d, 0x2a, 0xef, 0x67, 0x03, 0x9d, 0x5c, 0x12, 0x46, 0x3f, 0x83, 0xd2, 0x21, 0x7c, 0x5d,
	0x82, 0xd2, 0xf8, 0x03, 0x6a, 0xe6, 0x45, 0xd8, 0x96, 0x6b, 0x8d, 0xba, 0x93, 0xd7, 0x7e, 0xa5,
	0xe6, 0x97, 0x6a, 0xc6, 0xf8, 0x14, 0xcd, 0x7c, 0xb1, 0x88, 0xfd, 0x5c, 0xcd, 0xaf, 0xa9, 0xf9,
	0xa5, 0x9a, 0x1f, 0x6e, 0x7a, 0x11, 0x1a, 0x4a, 0xec, 0xa0, 0xb6, 0x84, 0x8c, 0x2a, 0xca, 0x99,
	0x7d, 0xe8, 0x5a, 0xa3, 0x4e, 0xb8, 0x79, 0x63, 0x8c, 0x9a, 0x82, 0xe8, 0xb9, 0xdd, 0x30, 0xb8,
	0xb1, 0xb1, 0x8b, 0xba, 0xc0, 0x32, 0x2a, 0x39, 0x4b, 0x81, 0x69, 0xbb, 0x69, 0x5c, 0x75, 0x28,
	0x67, 0x24, 0x42, 0x5c, 0x90, 0x29, 0x24, 0xf6, 0xd1, 0x9a, 0xb1, 0x7c, 0xe3, 0xef, 0x16, 0xea,
	0x47, 0x3c, 0x15, 0x9c, 0x01, 0xd3, 0xef, 0x88, 0x24, 0x29, 0x68, 0x90, 0x57, 0x19, 0x48, 0x49,
	0x67, 0xa0, 0xec, 0x96, 0xdb, 0x18, 0x75, 0x27, 0x97, 0xff, 0x50, 0xe0, 0xab, 0x1b, 0xec, 0xe1,
	0x6d, 0x8a, 0x78, 0x88, 0x50, 0x46, 0x92, 0x25, 0xbc, 0xa1, 0x09, 0x28, 0xfb, 0xd8, 0x6d, 0x8c,
	0x3a, 0x61, 0x0d, 0xc1, 0x03, 0xd4, 0x61, 0x24, 0x05, 0x25, 0x48, 0x04, 0x76, 0xdb, 0x94, 0x53,
	0x01, 0xde, 0xb5, 0x85, 0x4e, 0xab, 0x61, 0x29, 0xc1, 0x99, 0x82, 0xfc, 0x93, 0xb4, 0xc0, 0x94,
	0x6d, 0x19, 0xc6, 0x0a, 0xd8, 0x26, 0x3c, 0xdc, 0x21, 0xc4, 0x0f, 0x51, 0x6b, 0xbd, 0xd2, 0x45,
	0xd3, 0x8b, 0xd7, 0xd6, 0x98, 0x9a, 0x3b, 0x63, 0x02, 0xd4, 0x12, 0x79, 0x61, 0xca, 0x3e, 0xfa,
	0x1f, 0xed, 0x2b, 0xc8, 0xbd, 0x1f, 0x16, 0xba, 0x7f, 0x41, 0x95, 0x3e, 0xa3, 0xf2, 0xee, 0xed,
	0xa5, 0xe7, 0xa2, 0x76, 0x3e, 0xb0, 0x3c, 0x41, 0xdc, 0x43, 0x47, 0x54, 0x43, 0x5a, 0x36, 0x7f,
	0xfd, 0x30, 0xf9, 0x9f, 0x83, 0xce, 0xa3, 0xee, 0x60, 0xfe, 0x8f, 0xd0, 0xc9, 0x26, 0xb9, 0x62,
	0x8f, 0x30, 0x6a, 0xce, 0x88, 0x26, 0x26, 0xbb, 0x7b, 0xa1, 0xb1, 0x27, 0xd7, 0x16, 0x7a, 0x50,
	0x69, 0xbd, 0x07, 0x99, 0xd1, 0x08, 0xf0, 0x15, 0x3a, 0x3d, 0x2f, 0xce, 0x48, 0xb9, 0x8d, 0xb8,
	0xef, 0xd7, 0x2e, 0xe1, 0xce, 0x41, 0x71, 0x06, 0xfb, 0x9d, 0x6b, 0x61, 0xef, 0x00, 0x3f, 0x47,
	0xc7, 0xc5, 0xa8, 0xb1, 0x53, 0x0f, 0xdd, 0x9e, 0xbf, 0xd3, 0xab, 0xfb, 0xca, 0xf6, 0x7b, 0x07,
	0xf8, 0x0c, 0x1d, 0x17, 0xc5, 0x6c, 0x7f, 0xbe, 0xdd, 0x7e, 0xa7, 0xbf, 0xd7, 0x57, 0x26, 0xf1,
	0xf2, 0xc5, 0xaf, 0xd5, 0xd0, 0xfa, 0xbd, 0x1a, 0x5a, 0x7f, 0x56, 0x43, 0xeb, 0xe3, 0xf8, 0xb6,
	0x13, 0xbb, 0xf7, 0x57, 0x30, 0x6d, 0x99, 0x8b, 0xfa, 0xf4, 0x6f, 0x00, 0x00, 0x00, 0xff, 0xff,
	0x22, 0x8e, 0xa1, 0x51, 0x2a, 0x06, 0x00, 0x00,
}

@@ -17,6 +17,7 @@ message ManifestRequest {
    string appLabel = 5;
    repeated github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1.ComponentParameter componentParameterOverrides = 6;
    repeated string valueFiles = 7;
    string namespace = 8;
}

message ManifestResponse {

@@ -6,14 +6,24 @@ import (
	"github.com/stretchr/testify/assert"
)

func TestGenerateManifestInDir(t *testing.T) {
func TestGenerateYamlManifestInDir(t *testing.T) {
	// update this value if we add/remove manifests
	const countOfManifests = 22

	q := ManifestRequest{}
	res1, err := generateManifests("../../manifests/components", &q)
	res1, err := generateManifests("../../manifests/base", &q)
	assert.Nil(t, err)
	assert.True(t, len(res1.Manifests) == 16) // update this value if we add/remove manifests
	assert.Equal(t, len(res1.Manifests), countOfManifests)

	// this will test concatenated manifests to verify we split YAMLs correctly
	res2, err := generateManifests("../../manifests", &q)
	res2, err := generateManifests("./testdata/concatenated", &q)
	assert.Nil(t, err)
	assert.True(t, len(res2.Manifests) == len(res1.Manifests))
	assert.Equal(t, 3, len(res2.Manifests))
}
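The concatenated-manifest test expects three manifests from the sa1/sa2/sa3 fixture, even though it contains extra `---` separators. The actual splitting logic lives in `generateManifests` and is not shown in this diff; a rough stdlib-only sketch of what such a split step might look like (`splitYAMLDocuments` is a hypothetical helper name, and it does not handle `---` appearing inside quoted strings):

```go
package main

import (
	"fmt"
	"strings"
)

// splitYAMLDocuments splits a concatenated YAML stream on "---" separator
// lines and drops empty documents, as the sa1/sa2/sa3 fixture requires.
func splitYAMLDocuments(data string) []string {
	var docs []string
	for _, part := range strings.Split(data, "\n---") {
		part = strings.TrimPrefix(part, "---") // leading separator on the first chunk
		if doc := strings.TrimSpace(part); doc != "" {
			docs = append(docs, doc)
		}
	}
	return docs
}

func main() {
	stream := "---\n" +
		"apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: sa1\n" +
		"---\n" +
		"apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: sa2\n" +
		"---\n---\n" + // empty document between two separators
		"apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: sa3\n" +
		"---\n"
	fmt.Println(len(splitYAMLDocuments(stream))) // 3, matching assert.Equal(t, 3, ...)
}
```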

func TestGenerateJsonnetManifestInDir(t *testing.T) {
	q := ManifestRequest{}
	res1, err := generateManifests("./testdata/jsonnet", &q)
	assert.Nil(t, err)
	assert.Equal(t, len(res1.Manifests), 2)
}

reposerver/repository/testdata/concatenated/concatenated.yaml (new file, 17 lines, vendored)
@@ -0,0 +1,17 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa2
---
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa3
---
reposerver/repository/testdata/jsonnet/guestbook-ui.jsonnet (new file, 58 lines, vendored)
@@ -0,0 +1,58 @@
local params = import 'params.libsonnet';

[
  {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
      "name": params.name
    },
    "spec": {
      "ports": [
        {
          "port": params.servicePort,
          "targetPort": params.containerPort
        }
      ],
      "selector": {
        "app": params.name
      },
      "type": params.type
    }
  },
  {
    "apiVersion": "apps/v1beta2",
    "kind": "Deployment",
    "metadata": {
      "name": params.name
    },
    "spec": {
      "replicas": params.replicas,
      "selector": {
        "matchLabels": {
          "app": params.name
        },
      },
      "template": {
        "metadata": {
          "labels": {
            "app": params.name
          }
        },
        "spec": {
          "containers": [
            {
              "image": params.image,
              "name": params.name,
              "ports": [
                {
                  "containerPort": params.containerPort
                }
              ]
            }
          ]
        }
      }
    }
  }
]
reposerver/repository/testdata/jsonnet/params.libsonnet (new file, 8 lines, vendored)
@@ -0,0 +1,8 @@
{
  containerPort: 80,
  image: "gcr.io/heptio-images/ks-guestbook-demo:0.2",
  name: "guestbook-ui",
  replicas: 1,
  servicePort: 80,
  type: "LoadBalancer",
}
@@ -1,15 +1,19 @@
package reposerver

import (
	"crypto/tls"

	"github.com/argoproj/argo-cd/reposerver/repository"
	"github.com/argoproj/argo-cd/server/version"
	"github.com/argoproj/argo-cd/util/cache"
	"github.com/argoproj/argo-cd/util/git"
	grpc_util "github.com/argoproj/argo-cd/util/grpc"
	tlsutil "github.com/argoproj/argo-cd/util/tls"
	"github.com/grpc-ecosystem/go-grpc-middleware"
	"github.com/grpc-ecosystem/go-grpc-middleware/logging/logrus"
	log "github.com/sirupsen/logrus"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/reflection"
)

@@ -18,28 +22,51 @@ type ArgoCDRepoServer struct {
	log        *log.Entry
	gitFactory git.ClientFactory
	cache      cache.Cache
	opts       []grpc.ServerOption
}

// NewServer returns a new instance of the ArgoCD Repo server
func NewServer(gitFactory git.ClientFactory, cache cache.Cache) *ArgoCDRepoServer {
func NewServer(gitFactory git.ClientFactory, cache cache.Cache, tlsConfCustomizer tlsutil.ConfigCustomizer) (*ArgoCDRepoServer, error) {
	// generate TLS cert
	hosts := []string{
		"localhost",
		"argocd-repo-server",
	}
	cert, err := tlsutil.GenerateX509KeyPair(tlsutil.CertOptions{
		Hosts:        hosts,
		Organization: "Argo CD",
		IsCA:         true,
	})

	if err != nil {
		return nil, err
	}

	tlsConfig := &tls.Config{Certificates: []tls.Certificate{*cert}}
	tlsConfCustomizer(tlsConfig)

	opts := []grpc.ServerOption{grpc.Creds(credentials.NewTLS(tlsConfig))}

	return &ArgoCDRepoServer{
		log:        log.NewEntry(log.New()),
		gitFactory: gitFactory,
		cache:      cache,
	}
		opts:       opts,
	}, nil
}
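`tlsutil.GenerateX509KeyPair` is a project helper whose body is not shown in this diff. A self-contained sketch of roughly what such a helper does with the standard library (generate a key, self-sign a CA-style certificate for the given hosts, and assemble a `tls.Certificate`); the function name and details here are illustrative, not the project's actual implementation:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

// generateSelfSignedCert builds a self-signed CA-style certificate for the
// given DNS names, approximating what a GenerateX509KeyPair helper might do.
func generateSelfSignedCert(hosts []string) (*tls.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"Argo CD"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
		IsCA:                  true,
		DNSNames:              hosts,
	}
	// Self-signed: the template is both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyDER, err := x509.MarshalECPrivateKey(key)
	if err != nil {
		return nil, err
	}
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})
	cert, err := tls.X509KeyPair(certPEM, keyPEM)
	if err != nil {
		return nil, err
	}
	return &cert, nil
}

func main() {
	cert, err := generateSelfSignedCert([]string{"localhost", "argocd-repo-server"})
	fmt.Println(err, cert != nil)
}
```

The resulting certificate would then be wrapped in a `tls.Config` and handed to `grpc.Creds(credentials.NewTLS(...))`, as `NewServer` does above.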

// CreateGRPC creates new configured grpc server
func (a *ArgoCDRepoServer) CreateGRPC() *grpc.Server {
	server := grpc.NewServer(
		grpc.StreamInterceptor(grpc_middleware.ChainStreamServer(
			grpc_logrus.StreamServerInterceptor(a.log),
			grpc_util.PanicLoggerStreamServerInterceptor(a.log),
		)),
		grpc.UnaryInterceptor(grpc_middleware.ChainUnaryServer(
			grpc_logrus.UnaryServerInterceptor(a.log),
			grpc_util.PanicLoggerUnaryServerInterceptor(a.log),
		)),
		append(a.opts,
			grpc.StreamInterceptor(grpc_middleware.ChainStreamServer(
				grpc_logrus.StreamServerInterceptor(a.log),
				grpc_util.PanicLoggerStreamServerInterceptor(a.log),
			)),
			grpc.UnaryInterceptor(grpc_middleware.ChainUnaryServer(
				grpc_logrus.UnaryServerInterceptor(a.log),
				grpc_util.PanicLoggerUnaryServerInterceptor(a.log),
			)))...,
	)
	version.RegisterVersionServiceServer(server, &version.Server{})
	manifestService := repository.NewService(a.gitFactory, a.cache)

@@ -1,17 +1,7 @@
|
||||
// Code generated by protoc-gen-gogo. DO NOT EDIT.
|
||||
// source: server/account/account.proto
|
||||
|
||||
/*
|
||||
Package account is a generated protocol buffer package.
|
||||
|
||||
It is generated from these files:
|
||||
server/account/account.proto
|
||||
|
||||
It has these top-level messages:
|
||||
UpdatePasswordRequest
|
||||
UpdatePasswordResponse
|
||||
*/
|
||||
package account
|
||||
package account // import "github.com/argoproj/argo-cd/server/account"
|
||||
|
||||
import proto "github.com/gogo/protobuf/proto"
|
||||
import fmt "fmt"
|
||||
@@ -36,14 +26,45 @@ var _ = math.Inf
|
||||
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
|
||||
|
||||
type UpdatePasswordRequest struct {
|
||||
NewPassword string `protobuf:"bytes,1,opt,name=newPassword,proto3" json:"newPassword,omitempty"`
|
||||
CurrentPassword string `protobuf:"bytes,2,opt,name=currentPassword,proto3" json:"currentPassword,omitempty"`
|
||||
NewPassword string `protobuf:"bytes,1,opt,name=newPassword,proto3" json:"newPassword,omitempty"`
|
||||
CurrentPassword string `protobuf:"bytes,2,opt,name=currentPassword,proto3" json:"currentPassword,omitempty"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *UpdatePasswordRequest) Reset() { *m = UpdatePasswordRequest{} }
|
||||
func (m *UpdatePasswordRequest) String() string { return proto.CompactTextString(m) }
|
||||
func (*UpdatePasswordRequest) ProtoMessage() {}
|
||||
func (*UpdatePasswordRequest) Descriptor() ([]byte, []int) { return fileDescriptorAccount, []int{0} }
|
||||
func (m *UpdatePasswordRequest) Reset() { *m = UpdatePasswordRequest{} }
|
||||
func (m *UpdatePasswordRequest) String() string { return proto.CompactTextString(m) }
|
||||
func (*UpdatePasswordRequest) ProtoMessage() {}
|
||||
func (*UpdatePasswordRequest) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_account_d27ff2bbd0f6944b, []int{0}
|
||||
}
|
||||
func (m *UpdatePasswordRequest) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *UpdatePasswordRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
if deterministic {
|
||||
return xxx_messageInfo_UpdatePasswordRequest.Marshal(b, m, deterministic)
|
||||
} else {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalTo(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
}
|
||||
func (dst *UpdatePasswordRequest) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_UpdatePasswordRequest.Merge(dst, src)
|
||||
}
|
||||
func (m *UpdatePasswordRequest) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *UpdatePasswordRequest) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_UpdatePasswordRequest.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_UpdatePasswordRequest proto.InternalMessageInfo
|
||||
|
||||
func (m *UpdatePasswordRequest) GetNewPassword() string {
|
||||
if m != nil {
|
||||
@@ -60,12 +81,43 @@ func (m *UpdatePasswordRequest) GetCurrentPassword() string {
|
||||
}
|
||||
|
||||
type UpdatePasswordResponse struct {
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *UpdatePasswordResponse) Reset() { *m = UpdatePasswordResponse{} }
|
||||
func (m *UpdatePasswordResponse) String() string { return proto.CompactTextString(m) }
|
||||
func (*UpdatePasswordResponse) ProtoMessage() {}
|
||||
func (*UpdatePasswordResponse) Descriptor() ([]byte, []int) { return fileDescriptorAccount, []int{1} }
|
||||
func (m *UpdatePasswordResponse) Reset() { *m = UpdatePasswordResponse{} }
|
||||
func (m *UpdatePasswordResponse) String() string { return proto.CompactTextString(m) }
|
||||
func (*UpdatePasswordResponse) ProtoMessage() {}
|
||||
func (*UpdatePasswordResponse) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_account_d27ff2bbd0f6944b, []int{1}
|
||||
}
|
||||
func (m *UpdatePasswordResponse) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *UpdatePasswordResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
if deterministic {
|
||||
return xxx_messageInfo_UpdatePasswordResponse.Marshal(b, m, deterministic)
|
||||
} else {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalTo(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
}
|
||||
func (dst *UpdatePasswordResponse) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_UpdatePasswordResponse.Merge(dst, src)
|
||||
}
|
||||
func (m *UpdatePasswordResponse) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *UpdatePasswordResponse) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_UpdatePasswordResponse.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_UpdatePasswordResponse proto.InternalMessageInfo
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*UpdatePasswordRequest)(nil), "account.UpdatePasswordRequest")
|
||||
@@ -97,7 +149,7 @@ func NewAccountServiceClient(cc *grpc.ClientConn) AccountServiceClient {
|
||||
|
||||
func (c *accountServiceClient) UpdatePassword(ctx context.Context, in *UpdatePasswordRequest, opts ...grpc.CallOption) (*UpdatePasswordResponse, error) {
	out := new(UpdatePasswordResponse)
	err := grpc.Invoke(ctx, "/account.AccountService/UpdatePassword", in, out, c.cc, opts...)
	err := c.cc.Invoke(ctx, "/account.AccountService/UpdatePassword", in, out, opts...)
	if err != nil {
		return nil, err
	}
@@ -173,6 +225,9 @@ func (m *UpdatePasswordRequest) MarshalTo(dAtA []byte) (int, error) {
		i = encodeVarintAccount(dAtA, i, uint64(len(m.CurrentPassword)))
		i += copy(dAtA[i:], m.CurrentPassword)
	}
	if m.XXX_unrecognized != nil {
		i += copy(dAtA[i:], m.XXX_unrecognized)
	}
	return i, nil
}

@@ -191,6 +246,9 @@ func (m *UpdatePasswordResponse) MarshalTo(dAtA []byte) (int, error) {
	_ = i
	var l int
	_ = l
	if m.XXX_unrecognized != nil {
		i += copy(dAtA[i:], m.XXX_unrecognized)
	}
	return i, nil
}

@@ -214,12 +272,18 @@ func (m *UpdatePasswordRequest) Size() (n int) {
	if l > 0 {
		n += 1 + l + sovAccount(uint64(l))
	}
	if m.XXX_unrecognized != nil {
		n += len(m.XXX_unrecognized)
	}
	return n
}

func (m *UpdatePasswordResponse) Size() (n int) {
	var l int
	_ = l
	if m.XXX_unrecognized != nil {
		n += len(m.XXX_unrecognized)
	}
	return n
}

@@ -335,6 +399,7 @@ func (m *UpdatePasswordRequest) Unmarshal(dAtA []byte) error {
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}
@@ -385,6 +450,7 @@ func (m *UpdatePasswordResponse) Unmarshal(dAtA []byte) error {
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}
@@ -499,9 +565,11 @@ var (
	ErrIntOverflowAccount = fmt.Errorf("proto: integer overflow")
)

func init() { proto.RegisterFile("server/account/account.proto", fileDescriptorAccount) }
func init() {
	proto.RegisterFile("server/account/account.proto", fileDescriptor_account_d27ff2bbd0f6944b)
}

var fileDescriptorAccount = []byte{
var fileDescriptor_account_d27ff2bbd0f6944b = []byte{
	// 268 bytes of a gzipped FileDescriptorProto
	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x29, 0x4e, 0x2d, 0x2a,
	0x4b, 0x2d, 0xd2, 0x4f, 0x4c, 0x4e, 0xce, 0x2f, 0xcd, 0x2b, 0x81, 0xd1, 0x7a, 0x05, 0x45, 0xf9,

||||
@@ -43,6 +43,7 @@ type Server struct {
	kubeclientset kubernetes.Interface
	appclientset  appclientset.Interface
	repoClientset reposerver.Clientset
	kubectl       kube.Kubectl
	db            db.ArgoDB
	appComparator controller.AppStateManager
	enf           *rbac.Enforcer
@@ -56,6 +57,7 @@ func NewServer(
	kubeclientset kubernetes.Interface,
	appclientset appclientset.Interface,
	repoClientset reposerver.Clientset,
	kubectl kube.Kubectl,
	db db.ArgoDB,
	enf *rbac.Enforcer,
	projectLock *util.KeyLock,
@@ -67,7 +69,8 @@ func NewServer(
		kubeclientset: kubeclientset,
		db:            db,
		repoClientset: repoClientset,
		appComparator: controller.NewAppStateManager(db, appclientset, repoClientset, namespace),
		kubectl:       kubectl,
		appComparator: controller.NewAppStateManager(db, appclientset, repoClientset, namespace, kubectl),
		enf:           enf,
		projectLock:   projectLock,
		auditLogger:   argo.NewAuditLogger(namespace, kubeclientset, "argocd-server"),
@@ -79,6 +82,65 @@ func appRBACName(app appv1.Application) string {
	return fmt.Sprintf("%s/%s", app.Spec.GetProject(), app.Name)
}

func toString(val interface{}) string {
	if val == nil {
		return ""
	}
	return fmt.Sprintf("%s", val)
}

// hideSecretData checks whether the given object's kind is Secret, replaces each data value with stars, and returns the unchanged data map. It additionally checks whether a data value differs
// from the corresponding key of the optional `otherData` parameter and appends an extra star to record the difference, so that when secret data is out of sync the user can still see which
// fields are different.
func hideSecretData(state string, otherData map[string]interface{}) (string, map[string]interface{}) {
	obj, err := appv1.UnmarshalToUnstructured(state)
	if err == nil {
		if obj != nil && obj.GetKind() == kube.SecretKind {
			if data, ok, err := unstructured.NestedMap(obj.Object, "data"); err == nil && ok {
				unchangedData := make(map[string]interface{})
				for k, v := range data {
					unchangedData[k] = v
				}
				for k := range data {
					replacement := "********"
					if otherData != nil {
						if val, ok := otherData[k]; ok && toString(val) != toString(data[k]) {
							replacement = replacement + "*"
						}
					}
					data[k] = replacement
				}
				_ = unstructured.SetNestedMap(obj.Object, data, "data")
				newState, err := json.Marshal(obj)
				if err == nil {
					return string(newState), unchangedData
				}
			}
		}
	}
	return state, nil
}
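The doc comment above describes a simple masking rule: every Secret value becomes eight stars, and a ninth star is appended when the corresponding value in the other state differs, so an out-of-sync key stays visible in a diff without leaking either value. A minimal standalone sketch of just that rule (the helper name `maskSecretValue` is hypothetical, not part of the server code):

```go
package main

import "fmt"

// maskSecretValue replaces a Secret data value with eight stars, and appends
// a ninth star when the corresponding value in the other state exists and
// differs, so the user can see *that* the values differ without seeing them.
func maskSecretValue(val, otherVal string, hasOther bool) string {
	replacement := "********"
	if hasOther && otherVal != val {
		replacement += "*"
	}
	return replacement
}

func main() {
	fmt.Println(maskSecretValue("cGFzc3dvcmQ=", "cGFzc3dvcmQ=", true)) // prints ********
	fmt.Println(maskSecretValue("cGFzc3dvcmQ=", "b3RoZXI=", true))     // prints *********
}
```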

func hideNodesSecrets(nodes []appv1.ResourceNode) {
	for i := range nodes {
		node := nodes[i]
		node.State, _ = hideSecretData(node.State, nil)
		hideNodesSecrets(node.Children)
		nodes[i] = node
	}
}

func hideAppSecrets(app *appv1.Application) {
	for i := range app.Status.ComparisonResult.Resources {
		res := app.Status.ComparisonResult.Resources[i]
		var data map[string]interface{}
		res.LiveState, data = hideSecretData(res.LiveState, nil)
		res.TargetState, _ = hideSecretData(res.TargetState, data)
		hideNodesSecrets(res.ChildLiveResources)
		app.Status.ComparisonResult.Resources[i] = res
	}
}

// List returns list of applications
func (s *Server) List(ctx context.Context, q *ApplicationQuery) (*appv1.ApplicationList, error) {
	appList, err := s.appclientset.ArgoprojV1alpha1().Applications(s.ns).List(metav1.ListOptions{})
@@ -92,6 +154,11 @@ func (s *Server) List(ctx context.Context, q *ApplicationQuery) (*appv1.Applicat
		}
	}
	newItems = argoutil.FilterByProjects(newItems, q.Projects)
	for i := range newItems {
		app := newItems[i]
		hideAppSecrets(&app)
		newItems[i] = app
	}
	appList.Items = newItems
	return appList, nil
}
@@ -133,8 +200,9 @@ func (s *Server) Create(ctx context.Context, q *ApplicationCreateRequest) (*appv
	}

	if err == nil {
		s.logEvent(out, ctx, argo.EventReasonResourceCreated, "create")
		s.logEvent(out, ctx, argo.EventReasonResourceCreated, "created application")
	}
	hideAppSecrets(out)
	return out, err
}

@@ -144,7 +212,7 @@ func (s *Server) GetManifests(ctx context.Context, q *ApplicationManifestQuery)
	if err != nil {
		return nil, err
	}
	if !s.enf.EnforceClaims(ctx.Value("claims"), "applications/manifests", "get", appRBACName(*a)) {
	if !s.enf.EnforceClaims(ctx.Value("claims"), "applications", "get", appRBACName(*a)) {
		return nil, grpc.ErrPermissionDenied
	}
	repo := s.getRepo(ctx, a.Spec.Source.RepoURL)
@@ -174,6 +242,7 @@ func (s *Server) GetManifests(ctx context.Context, q *ApplicationManifestQuery)
		ComponentParameterOverrides: overrides,
		AppLabel:                    a.Name,
		ValueFiles:                  a.Spec.Source.ValuesFiles,
		Namespace:                   a.Spec.Destination.Namespace,
	})
	if err != nil {
		return nil, err
@@ -202,6 +271,7 @@ func (s *Server) Get(ctx context.Context, q *ApplicationQuery) (*appv1.Applicati
			return nil, err
		}
	}
	hideAppSecrets(a)
	return a, nil
}

@@ -211,7 +281,7 @@ func (s *Server) ListResourceEvents(ctx context.Context, q *ApplicationResourceE
	if err != nil {
		return nil, err
	}
	if !s.enf.EnforceClaims(ctx.Value("claims"), "applications/events", "get", appRBACName(*a)) {
	if !s.enf.EnforceClaims(ctx.Value("claims"), "applications", "get", appRBACName(*a)) {
		return nil, grpc.ErrPermissionDenied
	}
	var (
@@ -266,7 +336,14 @@ func (s *Server) Update(ctx context.Context, q *ApplicationUpdateRequest) (*appv
	if err != nil {
		return nil, err
	}
	return s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Update(a)
	out, err := s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Update(a)
	if out != nil {
		hideAppSecrets(out)
	}
	if err == nil {
		s.logEvent(a, ctx, argo.EventReasonResourceUpdated, "updated application")
	}
	return out, err
}

// removeInvalidOverrides removes any parameter overrides that are no longer valid
@@ -299,7 +376,6 @@ func (s *Server) removeInvalidOverrides(a *appv1.Application, q *ApplicationUpda

// UpdateSpec updates an application spec and filters out any invalid parameter overrides
func (s *Server) UpdateSpec(ctx context.Context, q *ApplicationUpdateSpecRequest) (*appv1.ApplicationSpec, error) {

	s.projectLock.Lock(q.Spec.Project)
	defer s.projectLock.Unlock(q.Spec.Project)

@@ -322,9 +398,7 @@ func (s *Server) UpdateSpec(ctx context.Context, q *ApplicationUpdateSpecRequest
		a.Spec = q.Spec
		_, err = s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Update(a)
		if err == nil {
		if err != nil {
			s.logEvent(a, ctx, argo.EventReasonResourceUpdated, "update")
		}
			s.logEvent(a, ctx, argo.EventReasonResourceUpdated, "updated application spec")
			return &q.Spec, nil
		}
		if !apierr.IsConflict(err) {
@@ -387,7 +461,7 @@ func (s *Server) Delete(ctx context.Context, q *ApplicationDeleteRequest) (*Appl
		return nil, err
	}

	s.logEvent(a, ctx, argo.EventReasonResourceDeleted, "delete")
	s.logEvent(a, ctx, argo.EventReasonResourceDeleted, "deleted application")
	return &ApplicationResponse{}, nil
}

@@ -406,6 +480,7 @@ func (s *Server) Watch(q *ApplicationQuery, ws ApplicationService_WatchServer) e
			// do not emit apps the user does not have access to
			continue
		}
		hideAppSecrets(&a)
		err = ws.Send(&appv1.ApplicationWatchEvent{
			Type:        next.Type,
			Application: a,
@@ -480,7 +555,7 @@ func (s *Server) DeleteResource(ctx context.Context, q *ApplicationDeleteResourc
	if err != nil {
		return nil, err
	}
	if !s.enf.EnforceClaims(ctx.Value("claims"), "applications/resources", "delete", appRBACName(*a)) {
	if !s.enf.EnforceClaims(ctx.Value("claims"), "applications", "delete", appRBACName(*a)) {
		return nil, grpc.ErrPermissionDenied
	}
	found := findResource(a, q)
@@ -491,10 +566,11 @@ func (s *Server) DeleteResource(ctx context.Context, q *ApplicationDeleteResourc
	if err != nil {
		return nil, err
	}
	err = kube.DeleteResource(config, found, namespace)
	err = s.kubectl.DeleteResource(config, found, namespace)
	if err != nil {
		return nil, err
	}
	s.logEvent(a, ctx, argo.EventReasonResourceDeleted, fmt.Sprintf("deleted resource %s/%s '%s'", q.APIVersion, q.Kind, q.ResourceName))
	return &ApplicationResponse{}, nil
}

@@ -543,7 +619,7 @@ func (s *Server) PodLogs(q *ApplicationPodLogsQuery, ws ApplicationService_PodLo
	if err != nil {
		return err
	}
	if !s.enf.EnforceClaims(ws.Context().Value("claims"), "applications/logs", "get", appRBACName(*a)) {
	if !s.enf.EnforceClaims(ws.Context().Value("claims"), "applications", "get", appRBACName(*a)) {
		return grpc.ErrPermissionDenied
	}
	config, namespace, err := s.getApplicationClusterConfig(*q.Name)
@@ -631,70 +707,100 @@ func (s *Server) getRepo(ctx context.Context, repoURL string) *appv1.Repository

// Sync syncs an application to its target state
func (s *Server) Sync(ctx context.Context, syncReq *ApplicationSyncRequest) (*appv1.Application, error) {
	a, err := s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Get(*syncReq.Name, metav1.GetOptions{})
	appIf := s.appclientset.ArgoprojV1alpha1().Applications(s.ns)
	a, err := appIf.Get(*syncReq.Name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	if !s.enf.EnforceClaims(ctx.Value("claims"), "applications", "sync", appRBACName(*a)) {
		return nil, grpc.ErrPermissionDenied
	}
	return s.setAppOperation(ctx, *syncReq.Name, "sync", func(app *appv1.Application) (*appv1.Operation, error) {
		syncOp := appv1.SyncOperation{
			Revision:     syncReq.Revision,
			Prune:        syncReq.Prune,
			DryRun:       syncReq.DryRun,
			SyncStrategy: syncReq.Strategy,
	if a.Spec.SyncPolicy != nil && a.Spec.SyncPolicy.Automated != nil {
		if syncReq.Revision != "" && syncReq.Revision != a.Spec.Source.TargetRevision {
			return nil, status.Errorf(codes.FailedPrecondition, "Cannot sync to %s: auto-sync currently set to %s", syncReq.Revision, a.Spec.Source.TargetRevision)
		}
		return &appv1.Operation{
			Sync: &syncOp,
		}, nil
	})
	}

	parameterOverrides := make(appv1.ParameterOverrides, 0)
	if syncReq.Parameter != nil {
		// If parameter overrides are supplied, the caller explicitly states to use the provided
		// list of overrides. NOTE: gogo/protobuf cannot currently distinguish between empty arrays
		// vs nil arrays, which is why the wrapping syncReq.Parameter is examined for intent.
		// See: https://github.com/gogo/protobuf/issues/181
		for _, p := range syncReq.Parameter.Overrides {
			parameterOverrides = append(parameterOverrides, appv1.ComponentParameter{
				Name:      p.Name,
				Value:     p.Value,
				Component: p.Component,
			})
		}
	} else {
		// If parameter overrides are omitted completely, we use what is set in the application
		if a.Spec.Source.ComponentParameterOverrides != nil {
			parameterOverrides = appv1.ParameterOverrides(a.Spec.Source.ComponentParameterOverrides)
		}
	}

	op := appv1.Operation{
		Sync: &appv1.SyncOperation{
			Revision:           syncReq.Revision,
			Prune:              syncReq.Prune,
			DryRun:             syncReq.DryRun,
			SyncStrategy:       syncReq.Strategy,
			ParameterOverrides: parameterOverrides,
			Resources:          syncReq.Resources,
		},
	}
	a, err = argo.SetAppOperation(ctx, appIf, s.auditLogger, *syncReq.Name, &op)
	if err == nil {
		rev := syncReq.Revision
		if syncReq.Revision == "" {
			rev = a.Spec.Source.TargetRevision
		}
		message := fmt.Sprintf("initiated sync to %s", rev)
		s.logEvent(a, ctx, argo.EventReasonOperationStarted, message)
	}
	return a, err
}
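The NOTE in the Sync handler above explains why overrides are wrapped in a message: gogo/protobuf cannot distinguish a nil slice from an empty one, so a wrapper field carries the caller's intent ("use the spec's overrides" vs. "explicitly use this, possibly empty, list"). A minimal sketch of that pattern, with hypothetical types standing in for the generated protobuf code:

```go
package main

import "fmt"

// parameterWrapper mimics the ParameterOverrides wrapper message: a nil
// wrapper means the request omitted the field, while a non-nil wrapper
// with an empty list means "explicitly clear all overrides".
// (Illustrative types only; not the generated protobuf code.)
type parameterWrapper struct {
	Overrides []string
}

// effectiveOverrides resolves the overrides to use for a sync: the wrapper's
// list when the caller set one (even if empty), otherwise the app spec's.
func effectiveOverrides(req *parameterWrapper, specOverrides []string) []string {
	if req != nil {
		// Caller stated intent explicitly, even if the list is empty.
		return req.Overrides
	}
	// Field omitted entirely: fall back to what the application spec has.
	return specOverrides
}

func main() {
	spec := []string{"image.tag=v2"}
	fmt.Println(effectiveOverrides(nil, spec))                // falls back to spec
	fmt.Println(effectiveOverrides(&parameterWrapper{}, spec)) // explicit empty list
}
```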

func (s *Server) Rollback(ctx context.Context, rollbackReq *ApplicationRollbackRequest) (*appv1.Application, error) {
	a, err := s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Get(*rollbackReq.Name, metav1.GetOptions{})
	appIf := s.appclientset.ArgoprojV1alpha1().Applications(s.ns)
	a, err := appIf.Get(*rollbackReq.Name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	if !s.enf.EnforceClaims(ctx.Value("claims"), "applications", "rollback", appRBACName(*a)) {
	if !s.enf.EnforceClaims(ctx.Value("claims"), "applications", "sync", appRBACName(*a)) {
		return nil, grpc.ErrPermissionDenied
	}
	return s.setAppOperation(ctx, *rollbackReq.Name, "rollback", func(app *appv1.Application) (*appv1.Operation, error) {
		return &appv1.Operation{
			Rollback: &appv1.RollbackOperation{
				ID:     rollbackReq.ID,
				Prune:  rollbackReq.Prune,
				DryRun: rollbackReq.DryRun,
			},
		}, nil
	})
}
	if a.Spec.SyncPolicy != nil && a.Spec.SyncPolicy.Automated != nil {
		return nil, status.Errorf(codes.FailedPrecondition, "Rollback cannot be initiated when auto-sync is enabled")
	}

func (s *Server) setAppOperation(ctx context.Context, appName string, operationName string, operationCreator func(app *appv1.Application) (*appv1.Operation, error)) (*appv1.Application, error) {
	for {
		a, err := s.Get(ctx, &ApplicationQuery{Name: &appName})
		if err != nil {
			return nil, err
		}
		if a.Operation != nil {
			return nil, status.Errorf(codes.InvalidArgument, "another operation is already in progress")
		}
		op, err := operationCreator(a)
		if err != nil {
			return nil, err
		}
		a.Operation = op
		a.Status.OperationState = nil
		_, err = s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Update(a)
		if err != nil && apierr.IsConflict(err) {
			log.Warnf("Failed to set operation for app '%s' due to update conflict. Retrying again...", appName)
		} else {
			if err == nil {
				s.logEvent(a, ctx, argo.EventReasonResourceUpdated, operationName)
			}
			return a, err
	var deploymentInfo *appv1.DeploymentInfo
	for _, info := range a.Status.History {
		if info.ID == rollbackReq.ID {
			deploymentInfo = &info
			break
		}
	}
	if deploymentInfo == nil {
		return nil, fmt.Errorf("application %s does not have deployment with id %v", a.Name, rollbackReq.ID)
	}
	// Rollback is just a convenience around Sync
	op := appv1.Operation{
		Sync: &appv1.SyncOperation{
			Revision:           deploymentInfo.Revision,
			DryRun:             rollbackReq.DryRun,
			Prune:              rollbackReq.Prune,
			SyncStrategy:       &appv1.SyncStrategy{Apply: &appv1.SyncStrategyApply{}},
			ParameterOverrides: deploymentInfo.ComponentParameterOverrides,
		},
	}
	a, err = argo.SetAppOperation(ctx, appIf, s.auditLogger, *rollbackReq.Name, &op)
	if err == nil {
		s.logEvent(a, ctx, argo.EventReasonOperationStarted, fmt.Sprintf("initiated rollback to %d", rollbackReq.ID))
	}
	return a, err
}

func (s *Server) TerminateOperation(ctx context.Context, termOpReq *OperationTerminateRequest) (*OperationTerminateResponse, error) {
@@ -702,7 +808,7 @@ func (s *Server) TerminateOperation(ctx context.Context, termOpReq *OperationTer
	if err != nil {
		return nil, err
	}
	if !s.enf.EnforceClaims(ctx.Value("claims"), "applications", "terminateop", appRBACName(*a)) {
	if !s.enf.EnforceClaims(ctx.Value("claims"), "applications", "sync", appRBACName(*a)) {
		return nil, grpc.ErrPermissionDenied
	}

@@ -718,18 +824,24 @@ func (s *Server) TerminateOperation(ctx context.Context, termOpReq *OperationTer
		if !apierr.IsConflict(err) {
			return nil, err
		}
		log.Warnf("Failed to set operation for app '%s' due to update conflict. Retrying again...", termOpReq.Name)
		log.Warnf("Failed to set operation for app '%s' due to update conflict. Retrying again...", *termOpReq.Name)
		time.Sleep(100 * time.Millisecond)
		a, err = s.appclientset.ArgoprojV1alpha1().Applications(s.ns).Get(*termOpReq.Name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		} else {
			s.logEvent(a, ctx, argo.EventReasonResourceUpdated, "terminateop")
			s.logEvent(a, ctx, argo.EventReasonResourceUpdated, "terminated running operation")
		}
	}
	return nil, status.Errorf(codes.Internal, "Failed to terminate app. Too many conflicts")
}

func (s *Server) logEvent(a *appv1.Application, ctx context.Context, reason string, action string) {
	s.auditLogger.LogAppEvent(a, argo.EventInfo{Reason: reason, Action: action, Username: session.Username(ctx)}, v1.EventTypeNormal)
	eventInfo := argo.EventInfo{Type: v1.EventTypeNormal, Reason: reason}
	user := session.Username(ctx)
	if user == "" {
		user = "Unknown user"
	}
	message := fmt.Sprintf("%s %s", user, action)
	s.auditLogger.LogAppEvent(a, eventInfo, message)
}

File diff suppressed because it is too large
@@ -57,6 +57,20 @@ message ApplicationSyncRequest {
	optional bool dryRun = 3 [(gogoproto.nullable) = false];
	optional bool prune = 4 [(gogoproto.nullable) = false];
	optional github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1.SyncStrategy strategy = 5;
	optional ParameterOverrides parameter = 6;
	repeated github.com.argoproj.argo_cd.pkg.apis.application.v1alpha1.SyncOperationResource resources = 7 [(gogoproto.nullable) = false];
}

// ParameterOverrides is a wrapper on a list of parameters. If omitted, the application's overrides
// in the spec will be used. If set, the supplied list of overrides will be used.
message ParameterOverrides {
	repeated Parameter overrides = 1;
}

message Parameter {
	required string name = 1 [(gogoproto.nullable) = false];
	optional string value = 2 [(gogoproto.nullable) = false];
	optional string component = 3 [(gogoproto.nullable) = false];
}

// ApplicationUpdateSpecRequest is a request to update application spec

||||
@@ -19,6 +19,7 @@ import (
	"github.com/argoproj/argo-cd/test"
	"github.com/argoproj/argo-cd/util"
	"github.com/argoproj/argo-cd/util/db"
	"github.com/argoproj/argo-cd/util/kube"
	"github.com/argoproj/argo-cd/util/rbac"
)

@@ -109,6 +110,7 @@ func newTestAppServer() ApplicationServiceServer {
		kubeclientset,
		apps.NewSimpleClientset(defaultProj),
		mockRepoClient,
		kube.KubectlCmd{},
		db,
		enforcer,
		util.NewKeyLock(),

||||
@@ -1,32 +1,21 @@
|
||||
// Code generated by protoc-gen-gogo. DO NOT EDIT.
|
||||
// source: server/cluster/cluster.proto
|
||||
|
||||
/*
|
||||
Package cluster is a generated protocol buffer package.
|
||||
package cluster // import "github.com/argoproj/argo-cd/server/cluster"
|
||||
|
||||
/*
|
||||
Cluster Service
|
||||
|
||||
Cluster Service API performs CRUD actions against cluster resources
|
||||
|
||||
It is generated from these files:
|
||||
server/cluster/cluster.proto
|
||||
|
||||
It has these top-level messages:
|
||||
ClusterQuery
|
||||
ClusterResponse
|
||||
ClusterCreateRequest
|
||||
ClusterCreateFromKubeConfigRequest
|
||||
ClusterUpdateRequest
|
||||
*/
|
||||
package cluster
|
||||
|
||||
import proto "github.com/gogo/protobuf/proto"
|
||||
import fmt "fmt"
|
||||
import math "math"
|
||||
import v1alpha1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
|
||||
import _ "github.com/gogo/protobuf/gogoproto"
|
||||
import _ "google.golang.org/genproto/googleapis/api/annotations"
|
||||
import _ "k8s.io/api/core/v1"
|
||||
import github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
|
||||
|
||||
import context "golang.org/x/net/context"
|
||||
import grpc "google.golang.org/grpc"
|
||||
@@ -46,13 +35,44 @@ const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
|
||||
|
||||
// ClusterQuery is a query for cluster resources
|
||||
type ClusterQuery struct {
|
||||
Server string `protobuf:"bytes,1,opt,name=server,proto3" json:"server,omitempty"`
|
||||
Server string `protobuf:"bytes,1,opt,name=server,proto3" json:"server,omitempty"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *ClusterQuery) Reset() { *m = ClusterQuery{} }
|
||||
func (m *ClusterQuery) String() string { return proto.CompactTextString(m) }
|
||||
func (*ClusterQuery) ProtoMessage() {}
|
||||
func (*ClusterQuery) Descriptor() ([]byte, []int) { return fileDescriptorCluster, []int{0} }
|
||||
func (m *ClusterQuery) Reset() { *m = ClusterQuery{} }
|
||||
func (m *ClusterQuery) String() string { return proto.CompactTextString(m) }
|
||||
func (*ClusterQuery) ProtoMessage() {}
|
||||
func (*ClusterQuery) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_cluster_0875510a34378ea0, []int{0}
|
||||
}
|
||||
func (m *ClusterQuery) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *ClusterQuery) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
if deterministic {
|
||||
return xxx_messageInfo_ClusterQuery.Marshal(b, m, deterministic)
|
||||
} else {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalTo(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
}
|
||||
func (dst *ClusterQuery) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_ClusterQuery.Merge(dst, src)
|
||||
}
|
||||
func (m *ClusterQuery) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *ClusterQuery) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_ClusterQuery.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_ClusterQuery proto.InternalMessageInfo
|
||||
|
||||
func (m *ClusterQuery) GetServer() string {
|
||||
if m != nil {
|
||||
@@ -62,24 +82,86 @@ func (m *ClusterQuery) GetServer() string {
|
||||
}
|
||||
|
||||
type ClusterResponse struct {
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *ClusterResponse) Reset() { *m = ClusterResponse{} }
|
||||
func (m *ClusterResponse) String() string { return proto.CompactTextString(m) }
|
||||
func (*ClusterResponse) ProtoMessage() {}
|
||||
func (*ClusterResponse) Descriptor() ([]byte, []int) { return fileDescriptorCluster, []int{1} }
|
||||
func (m *ClusterResponse) Reset() { *m = ClusterResponse{} }
|
||||
func (m *ClusterResponse) String() string { return proto.CompactTextString(m) }
|
||||
func (*ClusterResponse) ProtoMessage() {}
|
||||
func (*ClusterResponse) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_cluster_0875510a34378ea0, []int{1}
|
||||
}
|
||||
func (m *ClusterResponse) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *ClusterResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
if deterministic {
|
||||
return xxx_messageInfo_ClusterResponse.Marshal(b, m, deterministic)
|
||||
} else {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalTo(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
}
|
||||
func (dst *ClusterResponse) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_ClusterResponse.Merge(dst, src)
|
||||
}
|
||||
func (m *ClusterResponse) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *ClusterResponse) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_ClusterResponse.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_ClusterResponse proto.InternalMessageInfo
|
||||
|
||||
type ClusterCreateRequest struct {
|
||||
Cluster *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster `protobuf:"bytes,1,opt,name=cluster" json:"cluster,omitempty"`
|
||||
Upsert bool `protobuf:"varint,2,opt,name=upsert,proto3" json:"upsert,omitempty"`
|
||||
Cluster *v1alpha1.Cluster `protobuf:"bytes,1,opt,name=cluster" json:"cluster,omitempty"`
|
||||
Upsert bool `protobuf:"varint,2,opt,name=upsert,proto3" json:"upsert,omitempty"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *ClusterCreateRequest) Reset() { *m = ClusterCreateRequest{} }
|
||||
func (m *ClusterCreateRequest) String() string { return proto.CompactTextString(m) }
|
||||
func (*ClusterCreateRequest) ProtoMessage() {}
|
||||
func (*ClusterCreateRequest) Descriptor() ([]byte, []int) { return fileDescriptorCluster, []int{2} }
|
||||
func (m *ClusterCreateRequest) Reset() { *m = ClusterCreateRequest{} }
|
||||
func (m *ClusterCreateRequest) String() string { return proto.CompactTextString(m) }
|
||||
func (*ClusterCreateRequest) ProtoMessage() {}
|
||||
func (*ClusterCreateRequest) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_cluster_0875510a34378ea0, []int{2}
|
||||
}
|
||||
func (m *ClusterCreateRequest) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *ClusterCreateRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
if deterministic {
|
||||
return xxx_messageInfo_ClusterCreateRequest.Marshal(b, m, deterministic)
|
||||
} else {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalTo(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
}
|
||||
func (dst *ClusterCreateRequest) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_ClusterCreateRequest.Merge(dst, src)
|
||||
}
|
||||
func (m *ClusterCreateRequest) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *ClusterCreateRequest) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_ClusterCreateRequest.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
func (m *ClusterCreateRequest) GetCluster() *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster {
|
||||
var xxx_messageInfo_ClusterCreateRequest proto.InternalMessageInfo
|
||||
|
||||
func (m *ClusterCreateRequest) GetCluster() *v1alpha1.Cluster {
|
||||
if m != nil {
|
||||
return m.Cluster
|
||||
}
|
||||
@@ -94,18 +176,47 @@ func (m *ClusterCreateRequest) GetUpsert() bool {
|
||||
}
|
||||
|
||||
type ClusterCreateFromKubeConfigRequest struct {
|
||||
Kubeconfig string `protobuf:"bytes,1,opt,name=kubeconfig,proto3" json:"kubeconfig,omitempty"`
|
||||
Context string `protobuf:"bytes,2,opt,name=context,proto3" json:"context,omitempty"`
|
||||
Upsert bool `protobuf:"varint,3,opt,name=upsert,proto3" json:"upsert,omitempty"`
|
||||
InCluster bool `protobuf:"varint,4,opt,name=inCluster,proto3" json:"inCluster,omitempty"`
|
||||
Kubeconfig string `protobuf:"bytes,1,opt,name=kubeconfig,proto3" json:"kubeconfig,omitempty"`
|
||||
Context string `protobuf:"bytes,2,opt,name=context,proto3" json:"context,omitempty"`
|
||||
Upsert bool `protobuf:"varint,3,opt,name=upsert,proto3" json:"upsert,omitempty"`
|
||||
InCluster bool `protobuf:"varint,4,opt,name=inCluster,proto3" json:"inCluster,omitempty"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *ClusterCreateFromKubeConfigRequest) Reset() { *m = ClusterCreateFromKubeConfigRequest{} }
|
||||
func (m *ClusterCreateFromKubeConfigRequest) String() string { return proto.CompactTextString(m) }
|
||||
func (*ClusterCreateFromKubeConfigRequest) ProtoMessage() {}
|
||||
func (*ClusterCreateFromKubeConfigRequest) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptorCluster, []int{3}
|
||||
return fileDescriptor_cluster_0875510a34378ea0, []int{3}
|
||||
}
|
||||
func (m *ClusterCreateFromKubeConfigRequest) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *ClusterCreateFromKubeConfigRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
if deterministic {
|
||||
return xxx_messageInfo_ClusterCreateFromKubeConfigRequest.Marshal(b, m, deterministic)
|
||||
} else {
|
||||
		b = b[:cap(b)]
		n, err := m.MarshalTo(b)
		if err != nil {
			return nil, err
		}
		return b[:n], nil
	}
}
func (dst *ClusterCreateFromKubeConfigRequest) XXX_Merge(src proto.Message) {
	xxx_messageInfo_ClusterCreateFromKubeConfigRequest.Merge(dst, src)
}
func (m *ClusterCreateFromKubeConfigRequest) XXX_Size() int {
	return m.Size()
}
func (m *ClusterCreateFromKubeConfigRequest) XXX_DiscardUnknown() {
	xxx_messageInfo_ClusterCreateFromKubeConfigRequest.DiscardUnknown(m)
}

var xxx_messageInfo_ClusterCreateFromKubeConfigRequest proto.InternalMessageInfo

func (m *ClusterCreateFromKubeConfigRequest) GetKubeconfig() string {
	if m != nil {
@@ -136,15 +247,46 @@ func (m *ClusterCreateFromKubeConfigRequest) GetInCluster() bool {
}

type ClusterUpdateRequest struct {
	Cluster *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster `protobuf:"bytes,1,opt,name=cluster" json:"cluster,omitempty"`
	Cluster *v1alpha1.Cluster `protobuf:"bytes,1,opt,name=cluster" json:"cluster,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized []byte `json:"-"`
	XXX_sizecache int32 `json:"-"`
}

func (m *ClusterUpdateRequest) Reset() { *m = ClusterUpdateRequest{} }
func (m *ClusterUpdateRequest) String() string { return proto.CompactTextString(m) }
func (*ClusterUpdateRequest) ProtoMessage() {}
func (*ClusterUpdateRequest) Descriptor() ([]byte, []int) { return fileDescriptorCluster, []int{4} }
func (m *ClusterUpdateRequest) Reset() { *m = ClusterUpdateRequest{} }
func (m *ClusterUpdateRequest) String() string { return proto.CompactTextString(m) }
func (*ClusterUpdateRequest) ProtoMessage() {}
func (*ClusterUpdateRequest) Descriptor() ([]byte, []int) {
	return fileDescriptor_cluster_0875510a34378ea0, []int{4}
}
func (m *ClusterUpdateRequest) XXX_Unmarshal(b []byte) error {
	return m.Unmarshal(b)
}
func (m *ClusterUpdateRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	if deterministic {
		return xxx_messageInfo_ClusterUpdateRequest.Marshal(b, m, deterministic)
	} else {
		b = b[:cap(b)]
		n, err := m.MarshalTo(b)
		if err != nil {
			return nil, err
		}
		return b[:n], nil
	}
}
func (dst *ClusterUpdateRequest) XXX_Merge(src proto.Message) {
	xxx_messageInfo_ClusterUpdateRequest.Merge(dst, src)
}
func (m *ClusterUpdateRequest) XXX_Size() int {
	return m.Size()
}
func (m *ClusterUpdateRequest) XXX_DiscardUnknown() {
	xxx_messageInfo_ClusterUpdateRequest.DiscardUnknown(m)
}

func (m *ClusterUpdateRequest) GetCluster() *github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster {
var xxx_messageInfo_ClusterUpdateRequest proto.InternalMessageInfo

func (m *ClusterUpdateRequest) GetCluster() *v1alpha1.Cluster {
	if m != nil {
		return m.Cluster
	}
@@ -171,15 +313,15 @@ const _ = grpc.SupportPackageIsVersion4

type ClusterServiceClient interface {
	// List returns list of clusters
	List(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ClusterList, error)
	List(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*v1alpha1.ClusterList, error)
	// Create creates a cluster
	Create(ctx context.Context, in *ClusterCreateRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
	Create(ctx context.Context, in *ClusterCreateRequest, opts ...grpc.CallOption) (*v1alpha1.Cluster, error)
	// CreateFromKubeConfig installs the argocd-manager service account into the cluster specified in the given kubeconfig and context
	CreateFromKubeConfig(ctx context.Context, in *ClusterCreateFromKubeConfigRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
	CreateFromKubeConfig(ctx context.Context, in *ClusterCreateFromKubeConfigRequest, opts ...grpc.CallOption) (*v1alpha1.Cluster, error)
	// Get returns a cluster by server address
	Get(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
	Get(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*v1alpha1.Cluster, error)
	// Update updates a cluster
	Update(ctx context.Context, in *ClusterUpdateRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
	Update(ctx context.Context, in *ClusterUpdateRequest, opts ...grpc.CallOption) (*v1alpha1.Cluster, error)
	// Delete deletes a cluster
	Delete(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*ClusterResponse, error)
}
@@ -192,45 +334,45 @@ func NewClusterServiceClient(cc *grpc.ClientConn) ClusterServiceClient {
	return &clusterServiceClient{cc}
}

func (c *clusterServiceClient) List(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ClusterList, error) {
	out := new(github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ClusterList)
	err := grpc.Invoke(ctx, "/cluster.ClusterService/List", in, out, c.cc, opts...)
func (c *clusterServiceClient) List(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*v1alpha1.ClusterList, error) {
	out := new(v1alpha1.ClusterList)
	err := c.cc.Invoke(ctx, "/cluster.ClusterService/List", in, out, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

func (c *clusterServiceClient) Create(ctx context.Context, in *ClusterCreateRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error) {
	out := new(github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster)
	err := grpc.Invoke(ctx, "/cluster.ClusterService/Create", in, out, c.cc, opts...)
func (c *clusterServiceClient) Create(ctx context.Context, in *ClusterCreateRequest, opts ...grpc.CallOption) (*v1alpha1.Cluster, error) {
	out := new(v1alpha1.Cluster)
	err := c.cc.Invoke(ctx, "/cluster.ClusterService/Create", in, out, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

func (c *clusterServiceClient) CreateFromKubeConfig(ctx context.Context, in *ClusterCreateFromKubeConfigRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error) {
	out := new(github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster)
	err := grpc.Invoke(ctx, "/cluster.ClusterService/CreateFromKubeConfig", in, out, c.cc, opts...)
func (c *clusterServiceClient) CreateFromKubeConfig(ctx context.Context, in *ClusterCreateFromKubeConfigRequest, opts ...grpc.CallOption) (*v1alpha1.Cluster, error) {
	out := new(v1alpha1.Cluster)
	err := c.cc.Invoke(ctx, "/cluster.ClusterService/CreateFromKubeConfig", in, out, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

func (c *clusterServiceClient) Get(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error) {
	out := new(github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster)
	err := grpc.Invoke(ctx, "/cluster.ClusterService/Get", in, out, c.cc, opts...)
func (c *clusterServiceClient) Get(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*v1alpha1.Cluster, error) {
	out := new(v1alpha1.Cluster)
	err := c.cc.Invoke(ctx, "/cluster.ClusterService/Get", in, out, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

func (c *clusterServiceClient) Update(ctx context.Context, in *ClusterUpdateRequest, opts ...grpc.CallOption) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error) {
	out := new(github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster)
	err := grpc.Invoke(ctx, "/cluster.ClusterService/Update", in, out, c.cc, opts...)
func (c *clusterServiceClient) Update(ctx context.Context, in *ClusterUpdateRequest, opts ...grpc.CallOption) (*v1alpha1.Cluster, error) {
	out := new(v1alpha1.Cluster)
	err := c.cc.Invoke(ctx, "/cluster.ClusterService/Update", in, out, opts...)
	if err != nil {
		return nil, err
	}
@@ -239,7 +381,7 @@ func (c *clusterServiceClient) Update(ctx context.Context, in *ClusterUpdateRequ

func (c *clusterServiceClient) Delete(ctx context.Context, in *ClusterQuery, opts ...grpc.CallOption) (*ClusterResponse, error) {
	out := new(ClusterResponse)
	err := grpc.Invoke(ctx, "/cluster.ClusterService/Delete", in, out, c.cc, opts...)
	err := c.cc.Invoke(ctx, "/cluster.ClusterService/Delete", in, out, opts...)
	if err != nil {
		return nil, err
	}
@@ -250,15 +392,15 @@ func (c *clusterServiceClient) Delete(ctx context.Context, in *ClusterQuery, opt

type ClusterServiceServer interface {
	// List returns list of clusters
	List(context.Context, *ClusterQuery) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.ClusterList, error)
	List(context.Context, *ClusterQuery) (*v1alpha1.ClusterList, error)
	// Create creates a cluster
	Create(context.Context, *ClusterCreateRequest) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
	Create(context.Context, *ClusterCreateRequest) (*v1alpha1.Cluster, error)
	// CreateFromKubeConfig installs the argocd-manager service account into the cluster specified in the given kubeconfig and context
	CreateFromKubeConfig(context.Context, *ClusterCreateFromKubeConfigRequest) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
	CreateFromKubeConfig(context.Context, *ClusterCreateFromKubeConfigRequest) (*v1alpha1.Cluster, error)
	// Get returns a cluster by server address
	Get(context.Context, *ClusterQuery) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
	Get(context.Context, *ClusterQuery) (*v1alpha1.Cluster, error)
	// Update updates a cluster
	Update(context.Context, *ClusterUpdateRequest) (*github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster, error)
	Update(context.Context, *ClusterUpdateRequest) (*v1alpha1.Cluster, error)
	// Delete deletes a cluster
	Delete(context.Context, *ClusterQuery) (*ClusterResponse, error)
}
@@ -429,6 +571,9 @@ func (m *ClusterQuery) MarshalTo(dAtA []byte) (int, error) {
		i = encodeVarintCluster(dAtA, i, uint64(len(m.Server)))
		i += copy(dAtA[i:], m.Server)
	}
	if m.XXX_unrecognized != nil {
		i += copy(dAtA[i:], m.XXX_unrecognized)
	}
	return i, nil
}

@@ -447,6 +592,9 @@ func (m *ClusterResponse) MarshalTo(dAtA []byte) (int, error) {
	_ = i
	var l int
	_ = l
	if m.XXX_unrecognized != nil {
		i += copy(dAtA[i:], m.XXX_unrecognized)
	}
	return i, nil
}

@@ -485,6 +633,9 @@ func (m *ClusterCreateRequest) MarshalTo(dAtA []byte) (int, error) {
		}
		i++
	}
	if m.XXX_unrecognized != nil {
		i += copy(dAtA[i:], m.XXX_unrecognized)
	}
	return i, nil
}

@@ -535,6 +686,9 @@ func (m *ClusterCreateFromKubeConfigRequest) MarshalTo(dAtA []byte) (int, error)
		}
		i++
	}
	if m.XXX_unrecognized != nil {
		i += copy(dAtA[i:], m.XXX_unrecognized)
	}
	return i, nil
}

@@ -563,6 +717,9 @@ func (m *ClusterUpdateRequest) MarshalTo(dAtA []byte) (int, error) {
		}
		i += n2
	}
	if m.XXX_unrecognized != nil {
		i += copy(dAtA[i:], m.XXX_unrecognized)
	}
	return i, nil
}

@@ -582,12 +739,18 @@ func (m *ClusterQuery) Size() (n int) {
	if l > 0 {
		n += 1 + l + sovCluster(uint64(l))
	}
	if m.XXX_unrecognized != nil {
		n += len(m.XXX_unrecognized)
	}
	return n
}

func (m *ClusterResponse) Size() (n int) {
	var l int
	_ = l
	if m.XXX_unrecognized != nil {
		n += len(m.XXX_unrecognized)
	}
	return n
}

@@ -601,6 +764,9 @@ func (m *ClusterCreateRequest) Size() (n int) {
	if m.Upsert {
		n += 2
	}
	if m.XXX_unrecognized != nil {
		n += len(m.XXX_unrecognized)
	}
	return n
}

@@ -621,6 +787,9 @@ func (m *ClusterCreateFromKubeConfigRequest) Size() (n int) {
	if m.InCluster {
		n += 2
	}
	if m.XXX_unrecognized != nil {
		n += len(m.XXX_unrecognized)
	}
	return n
}

@@ -631,6 +800,9 @@ func (m *ClusterUpdateRequest) Size() (n int) {
		l = m.Cluster.Size()
		n += 1 + l + sovCluster(uint64(l))
	}
	if m.XXX_unrecognized != nil {
		n += len(m.XXX_unrecognized)
	}
	return n
}

@@ -717,6 +889,7 @@ func (m *ClusterQuery) Unmarshal(dAtA []byte) error {
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}
@@ -767,6 +940,7 @@ func (m *ClusterResponse) Unmarshal(dAtA []byte) error {
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}
@@ -832,7 +1006,7 @@ func (m *ClusterCreateRequest) Unmarshal(dAtA []byte) error {
				return io.ErrUnexpectedEOF
			}
			if m.Cluster == nil {
				m.Cluster = &github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster{}
				m.Cluster = &v1alpha1.Cluster{}
			}
			if err := m.Cluster.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
				return err
@@ -870,6 +1044,7 @@ func (m *ClusterCreateRequest) Unmarshal(dAtA []byte) error {
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}
@@ -1018,6 +1193,7 @@ func (m *ClusterCreateFromKubeConfigRequest) Unmarshal(dAtA []byte) error {
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}
@@ -1083,7 +1259,7 @@ func (m *ClusterUpdateRequest) Unmarshal(dAtA []byte) error {
				return io.ErrUnexpectedEOF
			}
			if m.Cluster == nil {
				m.Cluster = &github_com_argoproj_argo_cd_pkg_apis_application_v1alpha1.Cluster{}
				m.Cluster = &v1alpha1.Cluster{}
			}
			if err := m.Cluster.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
				return err
@@ -1101,6 +1277,7 @@ func (m *ClusterUpdateRequest) Unmarshal(dAtA []byte) error {
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}
@@ -1215,9 +1392,11 @@ var (
	ErrIntOverflowCluster = fmt.Errorf("proto: integer overflow")
)

func init() { proto.RegisterFile("server/cluster/cluster.proto", fileDescriptorCluster) }
func init() {
	proto.RegisterFile("server/cluster/cluster.proto", fileDescriptor_cluster_0875510a34378ea0)
}

var fileDescriptorCluster = []byte{
var fileDescriptor_cluster_0875510a34378ea0 = []byte{
	// 564 bytes of a gzipped FileDescriptorProto
	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x95, 0xcd, 0x6e, 0x13, 0x31,
	0x10, 0xc7, 0xe5, 0xb6, 0xda, 0x12, 0x83, 0xf8, 0xb0, 0x0a, 0x5a, 0xd2, 0x10, 0xa5, 0x3e, 0x54,
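The `MarshalTo` and `Size` hunks above lean on the generated `sovCluster` and `encodeVarintCluster` helpers, which size and write protobuf base-128 varints. A minimal, self-contained sketch of what those helpers compute (illustrative names, not the generated code itself):

```go
package main

import "fmt"

// sovSketch mirrors the role of the generated sovCluster helper: the number
// of bytes needed to encode x as a protobuf base-128 varint (7 bits per byte).
func sovSketch(x uint64) (n int) {
	for {
		n++
		x >>= 7
		if x == 0 {
			break
		}
	}
	return n
}

// encodeVarintSketch mirrors the role of encodeVarintCluster: write v as a
// varint into dAtA starting at offset, returning the offset past the value.
// Each byte carries 7 payload bits; the high bit flags a continuation.
func encodeVarintSketch(dAtA []byte, offset int, v uint64) int {
	for v >= 1<<7 {
		dAtA[offset] = uint8(v&0x7f | 0x80)
		v >>= 7
		offset++
	}
	dAtA[offset] = uint8(v)
	return offset + 1
}

func main() {
	fmt.Println(sovSketch(300)) // prints 2: 300 needs two varint bytes
	buf := make([]byte, 10)
	n := encodeVarintSketch(buf, 0, 300)
	fmt.Println(n, buf[:n]) // prints 2 [172 2]
}
```

This is why `Size()` adds `1 + l + sovCluster(uint64(l))` for a message field: one byte of tag, `sovCluster(l)` bytes of length prefix, then `l` bytes of payload.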
130	server/metrics/metrics.go	Normal file
@@ -0,0 +1,130 @@
package metrics

import (
	"fmt"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	log "github.com/sirupsen/logrus"
	"k8s.io/apimachinery/pkg/labels"

	argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
	applister "github.com/argoproj/argo-cd/pkg/client/listers/application/v1alpha1"
)

const (
	// MetricsPath is the endpoint to collect application metrics
	MetricsPath = "/metrics"
)

var (
	descAppDefaultLabels = []string{"namespace", "name"}

	descAppInfo = prometheus.NewDesc(
		"argocd_app_info",
		"Information about application.",
		append(descAppDefaultLabels, "project", "repo", "dest_server", "dest_namespace"),
		nil,
	)
	descAppCreated = prometheus.NewDesc(
		"argocd_app_created_time",
		"Creation time in unix timestamp for an application.",
		descAppDefaultLabels,
		nil,
	)
	descAppSyncStatus = prometheus.NewDesc(
		"argocd_app_sync_status",
		"The application current sync status.",
		append(descAppDefaultLabels, "sync_status"),
		nil,
	)
	descAppHealthStatus = prometheus.NewDesc(
		"argocd_app_health_status",
		"The application current health status.",
		append(descAppDefaultLabels, "health_status"),
		nil,
	)
)

// NewMetricsServer returns a new prometheus server which collects application metrics
func NewMetricsServer(port int, appLister applister.ApplicationLister) *http.Server {
	mux := http.NewServeMux()
	appRegistry := NewAppRegistry(appLister)
	mux.Handle(MetricsPath, promhttp.HandlerFor(appRegistry, promhttp.HandlerOpts{}))
	return &http.Server{
		Addr:    fmt.Sprintf("0.0.0.0:%d", port),
		Handler: mux,
	}
}

type appCollector struct {
	store applister.ApplicationLister
}

// NewAppCollector returns a prometheus collector for application metrics
func NewAppCollector(appLister applister.ApplicationLister) prometheus.Collector {
	return &appCollector{
		store: appLister,
	}
}

// NewAppRegistry creates a new prometheus registry that collects applications
func NewAppRegistry(appLister applister.ApplicationLister) *prometheus.Registry {
	registry := prometheus.NewRegistry()
	registry.MustRegister(NewAppCollector(appLister))
	return registry
}

// Describe implements the prometheus.Collector interface
func (c *appCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- descAppInfo
	ch <- descAppCreated
	ch <- descAppSyncStatus
	ch <- descAppHealthStatus
}

// Collect implements the prometheus.Collector interface
func (c *appCollector) Collect(ch chan<- prometheus.Metric) {
	apps, err := c.store.List(labels.NewSelector())
	if err != nil {
		log.Warnf("Failed to collect applications: %v", err)
		return
	}
	for _, app := range apps {
		collectApps(ch, app)
	}
}

func boolFloat64(b bool) float64 {
	if b {
		return 1
	}
	return 0
}

func collectApps(ch chan<- prometheus.Metric, app *argoappv1.Application) {
	addConstMetric := func(desc *prometheus.Desc, t prometheus.ValueType, v float64, lv ...string) {
		lv = append([]string{app.Namespace, app.Name}, lv...)
		ch <- prometheus.MustNewConstMetric(desc, t, v, lv...)
	}
	addGauge := func(desc *prometheus.Desc, v float64, lv ...string) {
		addConstMetric(desc, prometheus.GaugeValue, v, lv...)
	}

	addGauge(descAppInfo, 1, app.Spec.Project, app.Spec.Source.RepoURL, app.Spec.Destination.Server, app.Spec.Destination.Namespace)

	addGauge(descAppCreated, float64(app.CreationTimestamp.Unix()))

	syncStatus := app.Status.ComparisonResult.Status
	addGauge(descAppSyncStatus, boolFloat64(syncStatus == argoappv1.ComparisonStatusSynced), string(argoappv1.ComparisonStatusSynced))
	addGauge(descAppSyncStatus, boolFloat64(syncStatus == argoappv1.ComparisonStatusOutOfSync), string(argoappv1.ComparisonStatusOutOfSync))
	addGauge(descAppSyncStatus, boolFloat64(syncStatus == argoappv1.ComparisonStatusUnknown || syncStatus == ""), string(argoappv1.ComparisonStatusUnknown))

	healthStatus := app.Status.Health.Status
	addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusUnknown || healthStatus == ""), string(argoappv1.HealthStatusUnknown))
	addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusProgressing), string(argoappv1.HealthStatusProgressing))
	addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusHealthy), string(argoappv1.HealthStatusHealthy))
	addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusDegraded), string(argoappv1.HealthStatusDegraded))
	addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusMissing), string(argoappv1.HealthStatusMissing))
}
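`collectApps` above exports each status enum as a family of 0/1 gauges, one time series per possible value, with exactly one series set to 1 (an empty status folds into `Unknown`). The pattern can be sketched without the prometheus dependency; the status strings below are illustrative stand-ins for the `argoappv1.ComparisonStatus*` constants:

```go
package main

import "fmt"

// boolFloat64 matches the helper in metrics.go: a condition becomes a
// gauge value of 1 (true) or 0 (false).
func boolFloat64(b bool) float64 {
	if b {
		return 1
	}
	return 0
}

// syncStatuses are illustrative stand-ins for the real enum values.
var syncStatuses = []string{"Synced", "OutOfSync", "Unknown"}

// gaugeRow mimics what collectApps emits for argocd_app_sync_status:
// one gauge per candidate status, 1 only for the current one, with an
// empty status reported as Unknown.
func gaugeRow(current string) map[string]float64 {
	row := make(map[string]float64)
	for _, s := range syncStatuses {
		row[s] = boolFloat64(current == s || (s == "Unknown" && current == ""))
	}
	return row
}

func main() {
	fmt.Println(gaugeRow("OutOfSync")) // prints map[OutOfSync:1 Synced:0 Unknown:0]
}
```

Exporting enums this way lets PromQL select a state by label (`argocd_app_sync_status{sync_status="OutOfSync"} == 1`) instead of decoding a numeric code.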
Some files were not shown because too many files have changed in this diff.