fix: do not exclude APIService resources (#22586)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
committed by GitHub
parent 17337de6eb
commit 76d1772b5c
@@ -35,25 +35,27 @@ Starting from 3.0, this flag is removed and the logs RBAC is enforced by default

#### Detection

Users who have `server.rbac.log.enforce.enable: "true"` in their `argocd-cm` ConfigMap are unaffected by this change.

Users who have `policy.default: role:readonly` or `policy.default: role:admin` in their `argocd-rbac-cm` ConfigMap are unaffected.

Users who don't have a `policy.default` in their `argocd-rbac-cm` ConfigMap, and either have `server.rbac.log.enforce.enable` set to `false` or don't have this setting at all in their `argocd-cm` ConfigMap, are affected and should perform the remediation steps below.

After the upgrade, it is recommended to remove the `server.rbac.log.enforce.enable` setting from the `argocd-cm` ConfigMap if it was there before the upgrade.
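
As a quick check, these are the fields to look for; a minimal sketch of an *unaffected* configuration, assuming the default `argocd` namespace (either setting alone is sufficient):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  server.rbac.log.enforce.enable: "true"  # logs RBAC already enforced
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly  # default role keeps log access after the upgrade
```
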
#### Remediation

##### Quick remediation (global change)

For users with an existing default policy that uses a custom role, add this policy to `policy.csv` for your custom role: `p, role:<YOUR_DEFAULT_ROLE>, logs, get, */*, allow`.
For users without a default policy, add this policy to `policy.csv`: `p, role:global-log-viewer, logs, get, */*, allow` and add the default policy for this role: `policy.default: role:global-log-viewer`.
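
For the second case, the resulting `argocd-rbac-cm` would look like this minimal sketch (assuming the default `argocd` namespace):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    p, role:global-log-viewer, logs, get, */*, allow
  policy.default: role:global-log-viewer
```
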
##### Recommended remediation (per-policy change)

Explicitly add a `logs, get` policy to every role that has a policy for `applications, get` or for `applications, *`.
This is the recommended way to maintain the principle of least privilege.
Similar to the way access to Applications is currently managed, access to logs can be granted either at the Project scope (Project resource) or in the `argocd-rbac-cm` ConfigMap.
See this [example](../upgrading/2.3-2.4.md#example-1) for more details.
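
A minimal sketch of the per-role change in `argocd-rbac-cm`; the role and project names here are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    # Existing application access for an illustrative team role...
    p, role:dev-team, applications, get, my-project/*, allow
    # ...now paired with an explicit logs policy of the same scope.
    p, role:dev-team, logs, get, my-project/*, allow
```
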
### Default `resource.exclusions` configurations
@@ -63,7 +65,7 @@ which we exclude for performance reasons, reducing connections and load to the K

 The excluded Kinds are:

-- **Kubernetes Resources**: `Endpoints`, `EndpointSlice`, `APIService`, `Lease`, `SelfSubjectReview`, `TokenReview`, `LocalSubjectAccessReview`, `SelfSubjectAccessReview`, `SelfSubjectRulesReview`, `SubjectAccessReview`, `CertificateSigningRequest`, `PolicyReport` and `ClusterPolicyReport`.
+- **Kubernetes Resources**: `Endpoints`, `EndpointSlice`, `Lease`, `SelfSubjectReview`, `TokenReview`, `LocalSubjectAccessReview`, `SelfSubjectAccessReview`, `SelfSubjectRulesReview`, `SubjectAccessReview`, `CertificateSigningRequest`, `PolicyReport` and `ClusterPolicyReport`.
 - **Cert Manager**: `CertificateRequest`.
 - **Kyverno**: `EphemeralReport`, `ClusterEphemeralReport`, `AdmissionReport`, `ClusterAdmissionReport`, `BackgroundScanReport`, `ClusterBackgroundScanReport` and `UpdateRequest`.
 - **Cilium**: `CiliumIdentity`, `CiliumEndpoint` and `CiliumEndpointSlice`.
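
If any of these Kinds must still be watched in your environment, the default can be overridden by setting `resource.exclusions` in the `argocd-cm` ConfigMap yourself; a minimal sketch, assuming the default `argocd` namespace (the value replaces the whole default list, and this illustrative one keeps only the Endpoints exclusions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.exclusions: |
    - apiGroups:
      - ""
      - discovery.k8s.io
      kinds:
      - Endpoints
      - EndpointSlice
```
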
@@ -216,26 +218,28 @@ spec:

    namespace: guestbook
```

### Upgraded Helm version with breaking changes

Helm was upgraded to 3.17.1.
This may require changing your `values.yaml` files for subcharts if a `values.yaml` contains a section with a `null` object.
See the related issue in the [Helm GitHub repository](https://github.com/helm/helm/issues/12469).
See the Helm 3.17.1 [release notes](https://github.com/helm/helm/releases/tag/v3.17.1).
See an example of such a [problem and resolution](https://github.com/argoproj/argo-cd/pull/22035/files).
Explanation:

- Prior to Helm 3.17.1, a `null` object in `values.yaml` resulted in the warning `cannot overwrite table with non table` when running `helm template`, and the resulting Kubernetes object was not overridden with the invalid `null` value.
- In Helm 3.17.1, this behavior changed: a `null` object in `values.yaml` still produces this warning when running `helm template`, but the resulting Kubernetes object will be overridden with the invalid `null` value.
- To resolve the issue, identify `values.yaml` files with `null` object values and remove those `null` values, as illustrated below.
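
A minimal illustration of the fix, using a hypothetical subchart named `mychart` whose defaults define a `resources` table:

```yaml
# Parent-chart values.yaml

# Before (problematic): on Helm >= 3.17.1 this null now overrides the
# subchart's entire "resources" table with null instead of being ignored.
#
# mychart:
#   resources: null

# After (fixed): drop the null entry, or provide a concrete override.
mychart:
  resources:
    limits:
      cpu: 200m
```
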
### Use Annotation-Based Tracking by Default

The default behavior for [tracking resources](../../user-guide/resource_tracking.md) has changed to use annotation-based
tracking instead of label-based tracking. Annotation-based tracking is more reliable and less prone to errors caused by
external code copying tracking labels from one resource to another.

#### Detection

To detect if you are impacted, check the `argocd-cm` ConfigMap for the `application.resourceTrackingMethod` field. If it
is unset or is set to `label`, you are using label-based tracking. If it is set to `annotation`, you are already using
annotation-based tracking and are not impacted by this change.
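
A minimal sketch of the field in question, assuming the default `argocd` namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Unset or "label" means label-based tracking (impacted by this change);
  # "annotation" matches the new default behavior.
  application.resourceTrackingMethod: annotation
```
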
@@ -70,10 +70,6 @@ data:
 - Endpoints
 - EndpointSlice
 ### Internal Kubernetes resources excluded reduce the number of watched events
-- apiGroups:
-  - apiregistration.k8s.io
-  kinds:
-  - APIService
 - apiGroups:
   - coordination.k8s.io
   kinds:

manifests/core-install-with-hydrator.yaml (generated, 4 lines changed)
@@ -24269,10 +24269,6 @@ data:
 - Endpoints
 - EndpointSlice
 ### Internal Kubernetes resources excluded reduce the number of watched events
-- apiGroups:
-  - apiregistration.k8s.io
-  kinds:
-  - APIService
 - apiGroups:
   - coordination.k8s.io
   kinds:

manifests/core-install.yaml (generated, 4 lines changed)
@@ -24260,10 +24260,6 @@ data:
 - Endpoints
 - EndpointSlice
 ### Internal Kubernetes resources excluded reduce the number of watched events
-- apiGroups:
-  - apiregistration.k8s.io
-  kinds:
-  - APIService
 - apiGroups:
   - coordination.k8s.io
   kinds:

manifests/ha/install-with-hydrator.yaml (generated, 4 lines changed)
@@ -24678,10 +24678,6 @@ data:
 - Endpoints
 - EndpointSlice
 ### Internal Kubernetes resources excluded reduce the number of watched events
-- apiGroups:
-  - apiregistration.k8s.io
-  kinds:
-  - APIService
 - apiGroups:
   - coordination.k8s.io
   kinds:

manifests/ha/install.yaml (generated, 4 lines changed)
@@ -24669,10 +24669,6 @@ data:
 - Endpoints
 - EndpointSlice
 ### Internal Kubernetes resources excluded reduce the number of watched events
-- apiGroups:
-  - apiregistration.k8s.io
-  kinds:
-  - APIService
 - apiGroups:
   - coordination.k8s.io
   kinds:

@@ -513,10 +513,6 @@ data:
 - Endpoints
 - EndpointSlice
 ### Internal Kubernetes resources excluded reduce the number of watched events
-- apiGroups:
-  - apiregistration.k8s.io
-  kinds:
-  - APIService
 - apiGroups:
   - coordination.k8s.io
   kinds:

manifests/ha/namespace-install.yaml (generated, 4 lines changed)
@@ -504,10 +504,6 @@ data:
 - Endpoints
 - EndpointSlice
 ### Internal Kubernetes resources excluded reduce the number of watched events
-- apiGroups:
-  - apiregistration.k8s.io
-  kinds:
-  - APIService
 - apiGroups:
   - coordination.k8s.io
   kinds:

manifests/install-with-hydrator.yaml (generated, 4 lines changed)
@@ -24629,10 +24629,6 @@ data:
 - Endpoints
 - EndpointSlice
 ### Internal Kubernetes resources excluded reduce the number of watched events
-- apiGroups:
-  - apiregistration.k8s.io
-  kinds:
-  - APIService
 - apiGroups:
   - coordination.k8s.io
   kinds:

manifests/install.yaml (generated, 4 lines changed)
@@ -24620,10 +24620,6 @@ data:
 - Endpoints
 - EndpointSlice
 ### Internal Kubernetes resources excluded reduce the number of watched events
-- apiGroups:
-  - apiregistration.k8s.io
-  kinds:
-  - APIService
 - apiGroups:
   - coordination.k8s.io
   kinds:

manifests/namespace-install-with-hydrator.yaml (generated, 4 lines changed)
@@ -464,10 +464,6 @@ data:
 - Endpoints
 - EndpointSlice
 ### Internal Kubernetes resources excluded reduce the number of watched events
-- apiGroups:
-  - apiregistration.k8s.io
-  kinds:
-  - APIService
 - apiGroups:
   - coordination.k8s.io
   kinds:

manifests/namespace-install.yaml (generated, 4 lines changed)
@@ -455,10 +455,6 @@ data:
 - Endpoints
 - EndpointSlice
 ### Internal Kubernetes resources excluded reduce the number of watched events
-- apiGroups:
-  - apiregistration.k8s.io
-  kinds:
-  - APIService
 - apiGroups:
   - coordination.k8s.io
   kinds: