chore: gitops-engine post migration fixes (#24727)

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
This commit is contained in:
Leonardo Luz Almeida
2025-09-29 09:05:41 -04:00
committed by GitHub
parent 8dd534ec38
commit a2b3f0a78e
13 changed files with 252 additions and 758 deletions


@@ -65,12 +65,8 @@ jobs:
go mod download
- name: Check for tidiness of go.mod and go.sum
run: |
rm go.work.sum
go work sync
go work vendor
go mod tidy
git diff --exit-code -- .
make workspace-vendor
git diff --exit-code -- . ':!go.work.sum'
build-go:
name: Build & cache Go code
if: ${{ needs.changes.outputs.backend == 'true' }}


@@ -379,6 +379,14 @@ mod-vendor: test-tools-image
mod-vendor-local: mod-download-local
go work vendor
# Update the go.work.sum file and the vendor folder
.PHONY: workspace-vendor
workspace-vendor:
rm -rf vendor
rm -f go.work.sum
go work vendor
go mod tidy
# Run linter on the code
.PHONY: lint
lint: test-tools-image


@@ -29,10 +29,9 @@ As build dependencies change over time, you have to synchronize your development
* `make dep-ui` or `make dep-ui-local`
Argo CD recently migrated to Go modules. Usually, dependencies will be downloaded at build time, but the Makefile provides two targets to download and vendor all dependencies:
Argo CD recently migrated to [Go Workspaces](https://go.dev/blog/get-familiar-with-workspaces), allowing it to manage both the main Go module dependencies and the gitops-engine dependencies seamlessly. Dependencies are typically downloaded during the build process. However, if you want to ensure your environment is up-to-date, refer to the following make target:
* `make mod-download` or `make mod-download-local` will download all required Go modules and
* `make mod-vendor` or `make mod-vendor-local` will vendor those dependencies into the Argo CD source tree
* `make workspace-vendor`: synchronize all Go dependencies and update the vendor folder.
### Generate API glue code and other assets


@@ -1,52 +1,11 @@
# Ongoing migration
The gitops-engine repository is migrating to https://github.com/argoproj/argo-cd.
As part of this migration, we are going to:
- close PRs that have had no updates in over a year
- ask people to update the remaining PRs that had some update in the last year but are outdated
- review the pending recent open PRs that we deem important
Please hold your PRs until the migration is complete.
# GitOps Engine
Various GitOps operators address different use-cases and provide different user experiences but all have a similar set of core features. The team behind
[Argo CD](https://github.com/argoproj/argo-cd) has implemented a reusable library that implements core GitOps features:
Various GitOps operators address different use-cases and provide
different user experiences but all have a similar set of core features.
This library implements core GitOps features:
- Kubernetes resource cache ✅
- Resources reconciliation ✅
- Sync Planning ✅
- Access to Git repositories
- Manifest Generation
## Proposals, specifications and ideas
Do you want to propose a new feature or enhance an existing one?
Proposals and ideas are in markdown docs in the [`specs/`](specs/) directory.
To create a new proposal, simply copy the spec [`template`](specs/template.md),
name the file corresponding to the title of your proposal, and place it in the
`specs/` directory.
A good starting point to understand the structure is the [GitOps Engine Design spec](specs/design.md).
We tried to answer frequently asked questions in a [separate FAQ document](docs/faq.md).
## Governance
This project is licensed under the [Apache 2 license](LICENSE).
The GitOps Engine follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
## Get involved
If you are as excited about GitOps and one common engine for it as we are, please get in touch. If you want to write code, that's great; if you want to share feedback, ideas, and use-cases, that's great too.
Find us in the [#argo-cd-contributors][argo-cd-contributors-slack] channel on CNCF Slack (get an [invite here][cncf-slack]).
[argo-cd-contributors-slack]: https://cloud-native.slack.com/archives/C020XM04CUW
[cncf-slack]: https://slack.cncf.io/
### Contributing to the effort
At this stage we are interested in feedback, use-cases and help on the GitOps Engine.


@@ -1,73 +0,0 @@
# Frequently Asked Questions
## General
**Q**: What's the backstory behind this?
**A**: In November 2019 the teams behind Argo CD and Flux announced that they were going to join efforts. Some of the announcement blog posts explain what the thinking at the time was:
- Jay Pipes on the [AWS blog](https://aws.amazon.com/de/blogs/containers/help-us-write-a-new-chapter-for-gitops-kubernetes-and-open-source-collaboration/)
- Pratik Wadher on the [Intuit blog](https://www.intuit.com/blog/technology/introducing-argo-flux/)
- Tamao Nakahara on the [Weaveworks blog](https://www.weave.works/blog/argo-flux-join-forces)
In the course of the next months, the two engineering teams [met on a regular basis](https://docs.google.com/document/d/17AEZgv6yVuD4HS7_oNPiMKmS7Q6vjkhk6jH0YCELpRk/edit) and scoped out the future of the project. Two options were on the table:
1. Rethink APIs and build the project from the ground up.
1. Extract useful code from Argo into an Engine project.
The latter was deemed to be the most practical solution.
In March 2020 the Flux team made a [proof of concept](https://github.com/fluxcd/flux/pull/2886) available, which rebased Flux on top of the GitOps Engine, but while looking at the breaking changes this was going to introduce, the Flux team decided that it was time for a [more ground-breaking approach](https://www.weave.works/blog/gitops-with-flux-v2) to doing GitOps. After some experimentation, the GitOps Toolkit was put out as an RFC in June 2020.
A [number of other projects](https://github.com/search?q=argoproj%2Fgitops-engine&type=Code) already started looking at integrating the GitOps Engine.
The Argo and Flux teams decided all of this on good terms. All of these discussions were immensely helpful in shaping both projects' futures. You might see each of us stealing good ideas from the other in the future and celebrating each other's successes. There might be future collaborations; we'll keep you posted.
----
**Q**: What are you hoping to get out of this collaboration?
**A**: Our primary motives for coming together to do this are:
- Argo CD and Flux CD are two of the main GitOps projects, solving very similar problems and having very similar views on implementing GitOps.
- We want to offer a shared vision for GitOps and the best possible GitOps experience for everyone.
- We hope to bring a bigger community together than we can on our own.
- We want to learn from each other's approaches and offer the best-in-breed GitOps solution out there.
----
**Q**: What can current Argo CD users look forward to from this collaboration with Flux CD?
**A**: We hope Argo CD might get the following:
- A Docker registry monitoring feature. It would be fantastic if we could extract the existing Flux CD code into a reusable component that works for both Argo CD and Flux CD.
- A better cluster management experience. Right now Argo CD users use the app-of-apps pattern, which is not perfect. Perhaps we can learn from the Flux CD community and contribute to the GitOps engine to improve both Argo CD and Flux CD.
- Advanced Git-related features like GPG commit verification and git secrets.
- Simplified installation/management.
----
**Q**: Does this project scope just synchronization of environment (git sync operator) or does it include progressive delivery?
**A**: We will be starting to work on a spec for progressive delivery alignment between Argo Rollouts and Weave Flagger in 2020.
----
**Q**: What comes after the GitOps Engine?
**A**: The ultimate goal is to merge user experiences and eventually have one project.
Right now nobody knows exactly what that project will look like or exactly how we get there. We will start with baby steps, e.g. get rid of code duplication (the GitOps engine), merge documentation and Slack/issue tracking, and then incrementally try to merge the CLI/UI to get one user experience.
We also want to highlight that before we even start doing this we want to be properly introduced to each other's communities and understand each other's use cases, as without this knowledge we won't be able to create something that serves all of you.
----
**Q**: Was it hard to put together the design for the GitOps Engine? What was most important to you when you started putting it together?
**A**: It was hard for the right reasons. We wanted to design something which ticked off all these points:
1. Realistic both in theory and practice (e.g. starting a new project from scratch didn't make sense, creating a Frankenstein-like project from pieces of both projects also didn't make sense).
1. Would allow us to keep nurturing the communities of Argo CD and Flux CD towards a common product, without jeopardizing them (e.g. without disrespecting the communities).
1. Useful. Both projects would benefit from it as a first step towards a joint product/solution.
In addition to that, finding a common language without falling into project-specific terms (keeping it really abstract) was also quite a challenge; e.g. what a repository is to Flux CD, an application is to Argo CD.


@@ -1,17 +0,0 @@
# Releasing
This document describes the `gitops-engine` library releasing process.
# Versioning
* The library is versioned using [semantic versioning](http://semver.org/): a new version will be backwards-compatible
with earlier versions within a single major version.
* The library has its own release cycle and is not tied to the Argo CD release cycle.
* The first library release is v0.1.0.
# Release process
* A release branch is created for every minor release.
* The branch name should use the following convention: `release-<major>.<minor>`. For example, all v0.1 releases should
be in the `release-0.1` branch.
* The actual release is a git tag which uses the following naming convention: `v<major>.<minor>.<patch>`. For example: `v0.1.0`, `v0.1.1`, etc.


@@ -1,34 +0,0 @@
# Deployment Repo Update Automation
## Summary
The GitOps-driven continuous deployment cycle starts with a change in the Git repository that contains resource manifests. Flux provides the
[Automated Image Update](https://docs.fluxcd.io/en/latest/references/automated-image-update.html) feature that continuously monitors the docker registry and automatically
updates the deployment repo when a new image is released. This functionality is not available to Argo CD users. Also, some Argo CD users need only the functionality related to
updating the Git repository and don't need docker registry monitoring.
This document is meant to collect requirements for the Git repository update functionality. As a next step, we could discuss whether it is possible to implement a Golang library or
a service that can be used in combination with Argo CD and Flux.
> Note: Flux already plans to split out the docker registry monitor and image updating feature into a separate component. We should consider re-using the extracted component.
## Requirements
### Manifests updating
When updates are discovered for any image referenced in resource manifests in the configuration repository, new manifests that refer to the updated image tags/versions must be generated.
The manifests might be stored as raw YAML files or as a templating tool package such as Kustomize or Helm. The manifest updating functionality should take a new image
set as an input and update manifest files or templating tool configs to use the provided set of images.
### Commit signing
The user might want to use GPG signing for each commit in the deployment repo. The commit signing feature should optionally allow
signing the commit containing the image changes.
### Interaction with Git
The feature provides the following basic functionality:
* Clone a Git repo or update a previously cloned local copy.
* Configure the local Git user name and email.
* Push changes back to the remote Git repo.
* Rebase onto remote changes in case of concurrent repository updates.


@@ -1,207 +0,0 @@
# GitOps Engine Design - Bottom Up [On Hold - see [Top Down](./design-top-down.md)]
## Summary
The Bottom Up approach assumes that teams identify similar components in Argo CD and Flux and merge them one by one. The following five components were identified so far:
* Access to Git repositories
* Kubernetes resource cache
* Manifest Generation
* Resources reconciliation
* Sync Planning
The rest of the document describes separate proposals of how exactly components could be merged.
## Kubernetes Resource Cache
Both Argo CD and Flux have to load target cluster resources. This is required to enable the following main use cases:
- Collect resources that are no longer in Git and delete/warn the user.
- Present information about cluster state to the user: Argo CD UI, fluxctl `list-images`, `list-workloads` commands.
- Compare the state of the cluster with the configuration in Git.
The projects use different approaches to collect cluster state information. Argo CD leverages the Kubernetes watch APIs to maintain a
lightweight cluster state cache. Flux fetches the required resources when information is needed.
The problem is that Kubernetes does not provide an SQL-like API that allows efficiently finding the required resources, so in
some cases Flux has to load the whole cluster/namespace state into memory and go through the in-memory resource list. This is
a time- and memory-consuming approach, which also puts pressure on the Kubernetes API server.
### Goals
Extract the Argo CD caching logic (`argoproj/argo-cd/controller/cache`) into a reusable component that maintains a lightweight cluster state cache.
### Non-Goals
Support multi-cluster caching. The ability to maintain a cache of multiple clusters is implemented in the Argo CD code, but it is tightly coupled
to how Argo CD stores cluster credentials and adds too much complexity.
### Proposal
The cluster cache component encapsulates interaction with the Kubernetes APIs and allows quick, thread-safe inspection of Kubernetes
resources. The component is responsible for the following tasks:
- Identifies the resource APIs supported by the target cluster and provides API metadata (e.g. whether an API is namespaced or cluster-scoped).
- Notifies about changes in the resource APIs supported by the target cluster (e.g. added CRDs, removed CRDs, ...).
- Loads the initial state and watches for changes in every supported resource API.
- Handles changes in the available APIs: starts/stops watches and removes obsolete APIs from the cache.
The component does not cache the whole resource manifest because it would require too much memory. Instead, it stores only
resource identifiers and relationships between resources. The whole resource manifest or any other resource metadata should
be cached by the component user using event handlers.
The component watches only the preferred version of each resource API, so the resource object passed to the event handlers has the
structure of the preferred version.
The component is responsible for handling the following Kubernetes API edge cases:
* Resources of the deprecated extensions API group have duplicates in the apps, networking.k8s.io, and policy groups.
* A ReplicaSet from the apps group might reference a Deployment from the extensions group as a parent.
* The relationship between Service and Endpoint is not explicit: [kubernetes/#28483](https://github.com/kubernetes/kubernetes/issues/28483)
* The relationship between ServiceAccount and Token is not explicit.
* Resources of the OpenShift deprecated groups authorization.openshift.io and project.openshift.io create
duplicates in the rbac.authorization.k8s.io and core groups.
#### Top-Level Component APIs
The listing below represents the top-level API exposed by the cluster cache component:
```golang
// ResourceID is a unique resource identifier.
type ResourceID struct {
// Namespace is empty for cluster-scoped resources.
Namespace string
Name string
Group string
Kind string
}
type ListOptions struct {
// A selector to restrict the list of returned objects by their labels.
Selector metav1.LabelSelector
// Restricts the list of returned objects by namespace. If not empty, only namespaced resources in these namespaces are returned.
Namespaces []string
// If set to true then only namespaced objects are returned.
NamespacedOnly bool
// If set to true then only cluster-level objects are returned.
ClusterLevelOnly bool
// If set to true then only objects without owners are returned.
TopLevelOnly bool
}
// Cache provides a set of methods to access the cached cluster's state. All methods are thread safe.
type Cache interface {
// List returns a list of resource ids which match the specified list options
List(options ListOptions) ([]ResourceID, error)
// GetResourceAPIMetadata returns the API metadata (metav1.APIResource includes information about supported verbs, namespaced/cluster scope, etc.)
GetResourceAPIMetadata(gk schema.GroupKind) (metav1.APIResource, error)
// IterateChildTree builds a DAG using parent-child relationships based on the ownerReferences resource field and
// traverses resources in topological order starting from the specified root ids
IterateChildTree(roots []ResourceID, action func(key ResourceID) error) error
}
```
The Cache interface methods serve the following use cases:
List:
* Returns resources managed by Argo CD/Flux. Typically top-level resources labeled with a special label.
* Returns orphaned namespace resources. This will enable the Argo CD feature of warning the user if a namespace has any unmanaged resources.
GetResourceAPIMetadata:
* Answers whether a resource is namespace-scoped or cluster-scoped. This is useful in two cases:
    * to gracefully handle user errors when a cluster-level resource in Git has a namespace. This is incorrect, but kubectl handles such errors gracefully.
    * to set a fallback namespace for namespaced resources that don't specify one.
* Helps to create a dynamic K8s client for a given resource/kind.
IterateChildTree:
* Allows Argo CD to get information about the resource tree, which is used to visualize cluster state in the UI.
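To make the `List` semantics concrete, here is a minimal, dependency-free sketch. Only `ResourceID`, `ListOptions`, and the filtering behavior come from the listing above; the in-memory `memCache` type and its seed data are hypothetical stand-ins for the real watch-backed cache.

```go
package main

import "fmt"

// ResourceID and ListOptions are trimmed to the fields List needs.
type ResourceID struct {
	Namespace string // empty for cluster-scoped resources
	Name      string
	Group     string
	Kind      string
}

type ListOptions struct {
	Namespaces       []string
	NamespacedOnly   bool
	ClusterLevelOnly bool
}

// memCache is a toy in-memory stand-in for the real cluster cache.
type memCache struct {
	resources []ResourceID
}

// List filters the cached resource ids by the supplied options.
func (c *memCache) List(opts ListOptions) ([]ResourceID, error) {
	var out []ResourceID
	for _, r := range c.resources {
		namespaced := r.Namespace != ""
		if opts.NamespacedOnly && !namespaced {
			continue
		}
		if opts.ClusterLevelOnly && namespaced {
			continue
		}
		if len(opts.Namespaces) > 0 {
			found := false
			for _, ns := range opts.Namespaces {
				if r.Namespace == ns {
					found = true
				}
			}
			if !found {
				continue
			}
		}
		out = append(out, r)
	}
	return out, nil
}

func main() {
	cache := &memCache{resources: []ResourceID{
		{Namespace: "default", Name: "guestbook", Group: "apps", Kind: "Deployment"},
		{Name: "admin", Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"},
	}}
	ids, _ := cache.List(ListOptions{NamespacedOnly: true})
	fmt.Println(len(ids), ids[0].Name)
}
```

The real component would keep this index up to date from Kubernetes watch events instead of a static slice.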
#### Customizations
The listing below contains a set of data structures that allow customizing the caching behavior.
```golang
type ResourceFilter struct {
APIGroups []string
Kinds []string
}
type ResourcesAPIFilter struct {
// ResourceExclusions holds the api groups, kinds which should be excluded
ResourceExclusions []ResourceFilter
// ResourceInclusions holds the api groups that should be included. If empty, everything is assumed to be included.
ResourceInclusions []ResourceFilter
}
// ResourceEventHandlers is a set of handlers which are executed when resources are created/updated/deleted.
type ResourceEventHandlers struct {
OnCreated func(obj *unstructured.Unstructured)
OnUpdated func(updated *unstructured.Unstructured)
OnDeleted func(key ResourceID)
}
// Settings contains the list of parameters which customize caching behavior
type Settings struct {
Filter ResourcesAPIFilter
EventHandlers ResourceEventHandlers
Namespaces []string
ResyncPeriod time.Duration
}
func NewClusterCache(config *rest.Config, settings Settings) (Cache, error)
```
ResourceEventHandlers:
A set of callbacks that are executed when the cache changes. Useful for collecting and caching additional information about resources, for example:
* Cache the whole manifest of a managed resource to use it later for reconciliation.
* Cache resource metadata such as a list of images or a health status.
ResourcesAPIFilter:
Enables limiting the set of monitored resource APIs.
Namespaces:
Allows switching the component into namespace-only mode. If one or more namespaces are specified, the component ignores cluster-level resources and watches only resources in the specified namespaces.
NOTE: The Kubernetes API allows listing/watching resources only in a single namespace or in the whole cluster. So if more than one namespace is specified, the component has to start a separate set of watches for each namespace.
ResyncPeriod:
Specifies the interval after which the cluster cache should be automatically invalidated.
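The event-handler customization described above can be sketched with simplified stand-in types. The real handlers receive `*unstructured.Unstructured`; the `Obj` type and the seen-map bookkeeping here are assumptions for illustration only.

```go
package main

import (
	"fmt"
	"time"
)

// Obj is a simplified stand-in for *unstructured.Unstructured.
type Obj struct{ Kind, Name string }

type ResourceEventHandlers struct {
	OnCreated func(obj Obj)
	OnDeleted func(key string)
}

type Settings struct {
	EventHandlers ResourceEventHandlers
	Namespaces    []string
	ResyncPeriod  time.Duration
}

func main() {
	// The consumer caches extra metadata (here: a set of seen resources)
	// via the event handlers instead of querying the cluster directly.
	seen := map[string]bool{}
	settings := Settings{
		EventHandlers: ResourceEventHandlers{
			OnCreated: func(o Obj) { seen[o.Kind+"/"+o.Name] = true },
			OnDeleted: func(key string) { delete(seen, key) },
		},
		Namespaces:   []string{"default"}, // namespace-only mode
		ResyncPeriod: 10 * time.Minute,
	}
	// Simulate the cache firing events as it watches the cluster.
	settings.EventHandlers.OnCreated(Obj{Kind: "Deployment", Name: "guestbook"})
	settings.EventHandlers.OnCreated(Obj{Kind: "Service", Name: "guestbook"})
	settings.EventHandlers.OnDeleted("Deployment/guestbook")
	fmt.Println(len(seen))
}
```

In the real component, `NewClusterCache(config, settings)` would invoke these callbacks from its watch loop.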
#### Health Assessment (optionally)
The health assessment subcomponent provides the ability to get health information about a given resource. The health assessment package is not directly related to caching, but it helps to leverage the functionality provided by caching and thus is proposed for inclusion in the caching component.
The health assessment logic is available in package argoproj/argo-cd/util/health and includes the following features:
* Support for several Kubernetes built-in resources such as Pod, ReplicaSet, Ingress, and a few others.
* A framework that allows customizing the health assessment logic using Lua scripts. The framework includes testing infrastructure.
The health information is represented by the following data structure:
```golang
type HealthStatus struct {
Status HealthStatusCode
Message string
}
```
The health status might take one of the following values:
* Healthy/Degraded - self-explanatory.
* Progressing - the resource is not healthy yet, but there is still a chance it will become Healthy.
* Unknown - the health assessment failed. The error message is in the `Message` field.
* Suspended - the resource is neither progressing nor degraded. For example, a Deployment is considered suspended if the `spec.paused` field is set to true.
* Missing - the expected resource is missing.
The library API is represented by a single method:
```golang
type HealthAssessor interface {
GetResourceHealth(obj *unstructured.Unstructured) (*HealthStatus, error)
}
```
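The per-kind assessment rules and the `HealthAssessor` contract above can be sketched with a toy, dependency-free assessor. This is not the argoproj implementation: the `deploymentHealth` function, its map-based input (standing in for `*unstructured.Unstructured`), and its specific rules are assumptions for illustration.

```go
package main

import "fmt"

type HealthStatusCode string

const (
	Healthy     HealthStatusCode = "Healthy"
	Progressing HealthStatusCode = "Progressing"
	Suspended   HealthStatusCode = "Suspended"
)

type HealthStatus struct {
	Status  HealthStatusCode
	Message string
}

// deploymentHealth is a toy version of a per-kind assessment rule,
// operating on an unstructured-style map.
func deploymentHealth(obj map[string]interface{}) HealthStatus {
	spec, _ := obj["spec"].(map[string]interface{})
	if paused, _ := spec["paused"].(bool); paused {
		// spec.paused=true maps to the Suspended status described above.
		return HealthStatus{Status: Suspended, Message: "deployment is paused"}
	}
	status, _ := obj["status"].(map[string]interface{})
	replicas, _ := spec["replicas"].(int)
	ready, _ := status["readyReplicas"].(int)
	if ready < replicas {
		return HealthStatus{Status: Progressing,
			Message: fmt.Sprintf("%d of %d replicas ready", ready, replicas)}
	}
	return HealthStatus{Status: Healthy}
}

func main() {
	obj := map[string]interface{}{
		"spec":   map[string]interface{}{"replicas": 3, "paused": false},
		"status": map[string]interface{}{"readyReplicas": 2},
	}
	fmt.Println(deploymentHealth(obj).Status)
}
```

A Lua-customizable framework, as proposed, would dispatch to a script per group/kind instead of a hard-coded Go function like this one.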
#### Additional Considerations
The live state cache could be useful for the docker-registry monitoring feature: the `OnUpdated` resource event handler can be used to maintain image pull secrets. However, if the docker registry part is extracted into a separate binary, we would have to run a separate instance of the cluster cache, which means 2x more Kubernetes API calls. A possible workaround would be to optionally point the Docker Registry Monitor at Flux.
## Reconciliation [WIP]
## Access to Git repositories [WIP]
## Manifest Generation [WIP]
## Resources reconciliation [WIP]
## Sync Planning [WIP]


@@ -1,227 +0,0 @@
# GitOps Engine Design - Top Down [WIP]
## Summary
During the elaboration of the [Bottom Up](./design-bottom-up.md) option, we've discovered that this
approach is challenging. Although the components are similar at a high level, there are
still a lot of differences in design and implementation, so it is hard to re-use either of them in
both projects without a redesign. This is not surprising, because the code was developed
by different teams and with a focus on different use-cases. To speed up the process, it is
proposed to take a whole sub-system of one project and make it customizable enough to
be suitable for both Argo CD and Flux.
## Proposal
It is proposed to extract the Argo CD application controller subsystem, which is responsible for
interacting with Kubernetes and for resource reconciliation and syncing. The sub-system should be refactored
and used in both Argo CD and Flux. During refactoring we should make sure that it supports all
customizations that are required to keep Flux behavior untouched. There are two main reasons to try using
Argo CD as the base for the reconciliation engine:
- Argo CD uses the _Application_ abstraction to represent the desired state and target the Kubernetes cluster.
This abstraction works for both Argo CD and Flux.
- The Argo CD controller leverages Kubernetes watch APIs instead of polling. This enables Argo CD features
such as Health assessment, UI and could provide better performance to Flux as well.
### Manifest Generation
The manifest generation logic is very different in Argo CD and Flux: Argo CD focuses on providing first-class
support for existing config management tools, while Flux provides a flexible way to connect any config
management tool using .flux.yaml files. I think we should merge manifest generation step by step, after the core
of the GitOps engine is established.
### Repository Access
Flux has many more Git-related features than Argo CD. I think we might do the same exercise as a next step:
generalize Flux's Git access, commit verification, and write-back features and contribute them to the engine as
a whole sub-system.
### Hypothesis and assumptions
The proposed solution is based on the assumption that, despite implementation differences, the core functionality of Argo CD and Flux behaves in the same way. Both projects
ultimately extract a set of manifests from Git and use "kubectl apply" to change the cluster state. Minor differences are expected, but we can resolve them by introducing new
knobs.
Also, the proposed approach is based on the assumption that the Argo CD engine is flexible enough to cover all Flux use-cases, reproduce Flux's behavior with minor differences, and can
be easily integrated into Argo CD.
However, there is a risk that there will be too many differences and it might not be feasible to support all of them. To get early feedback, we will start with a Proof-of-Concept
(PoC from now on) implementation which will serve as an experiment to assess the feasibility of the approach.
### Acceptance criteria
To consider the PoC successful (and with the exception of features excluded from the PoC to save time),
all the following must hold true:
1. All the Flux unit and end-to-end tests must pass. The existing tests are limited, so we may decide to
include additional ones.
2. The UX of Flux must remain unchanged. That includes:
- The flags of `fluxd` and `fluxctl` must be respected and can be used in the same way as before
resulting in the same configuration behavioural changes.
- Flux's API must remain unchanged. In particular, the web API (used by fluxctl) and the websocket API (e.g. used to
communicate with Weave Cloud) must work without changes.
3. Flux's writing behaviour on Git and Kubernetes must be identical. In particular:
- Flux+GitEngine should make changes in Git if and only if Flux without GitEngine would have done so,
in the same way (same content) and in the same situations.
- Flux+GitEngine should add and update Kubernetes resources if and only if Flux without GitEngine would have done so,
in the same way (same content) and in the same situations.
Unfortunately, there isn't a straightforward way to decidedly check for (3).
Additionally, there must be a clear way forward (in the shape of well-defined steps)
for the features not covered by the PoC to work (complying with the points above) in the final GitOps
Engine.
### GitOps Engine PoC
The PoC deliverables are:
- All PoC changes are in separate branches.
- Argo CD controller will be moved to https://github.com/argoproj/gitops-engine.
- Flux will import GitOps engine component from the https://github.com/argoproj/gitops-engine repository and use it to perform cluster state syncing.
- The Flux installation and fluxctl behavior will remain the same other than using the GitOps engine internally. That means there will be no creation of an Application CRD or Argo CD-specific
ConfigMaps/Secrets.
- For the sake of saving time, the PoC does not include implementing the features mentioned before: no commit verification, only plain .yaml file support, and full-cluster mode only.
## Design Details
The proposed design is based on the PoC that contains a draft implementation and provides the basic idea of the target design. The PoC is available at
[argo-cd#fargo/engine](https://github.com/alexmt/argo-cd/tree/fargo/engine) and [flux#fargo](https://github.com/alexmt/flux/tree/fargo).
The GitOps engine API consists of three main packages:
* The `utils` package. It consists of loosely coupled packages that implement K8s resource diffing, health assessment, etc.
* The `sync<Term>` package that contains the Sync<Term> data structure definition and a CRUD client interface.
* The `engine` package that leverages `utils` and uses a provided set of Applications as a configuration source.
```
gitops-engine
|-- pkg
| |-- utils
| | |-- diff # provides Kubernetes resource diffing functionality
| | |-- kube # provides utility methods to interact with Kubernetes
| | |-- lua # provides utility methods to run Lua scripts
| | |-- sync<Term> # provides utility methods to manipulate Sync<Term> (e.g. start sync operation, wait for sync operation, force reconciliation)
| | `-- health # provides Kubernetes resources health assessment
| |-- sync<Term>
| | |-- apis # contains data structures that describe Sync<Term>
| | `-- client # contains client that provides Sync<Term> CRUD operations
| `-- engine # the engine implementation
```
The engine is responsible for the reconciliation of Kubernetes resources, which includes:
- Interacting with the Kubernetes API: loading and maintaining the cluster state; pushing required changes to the Kubernetes API.
- Reconciliation logic: matching target K8S cluster resources with the resources stored in Git and deciding which resources should be updated/deleted/created.
- Syncing logic: determining the order in which resources should be modified; features like sync hooks, waves, etc.
Manifest generation is out of scope and should be implemented by the Engine consumer.
### Engine API
> `Sync<Term>` is a placeholder for the real name. The name is still under discussion.
The engine API includes three main parts:
- `Engine` golang interface
- `Sync<Term>` data structure and `Sync<Term>Store` interface which provides access to the units.
- A set of data structures which allow configuring the reconciliation process.
**Engine interface** - provides a set of methods that allow updating reconciliation settings and subscribing to engine events.
```golang
type Engine interface {
// Run starts reconciliation loop using specified number of processors for reconciliation and operation execution.
Run(ctx context.Context, statusProcessors int, operationProcessors int)
// SetReconciliationSettings updates reconciliation settings
SetReconciliationSettings(settings ReconciliationSettings)
// OnBeforeSync registers callback that is executed before each sync operation.
OnBeforeSync(callback func(appName string, tasks []SyncTaskInfo) ([]SyncTaskInfo, error)) Unsubscribe
// OnSyncCompleted registers callback that is executed after each sync operation.
OnSyncCompleted(callback func(appName string, state OperationState) error) Unsubscribe
// OnClusterCacheInitialized registers a callback that is executed when cluster cache initialization is completed.
// Callback is useful to wait until cluster cache is fully initialized before starting to use cached data.
OnClusterCacheInitialized(callback func(server string)) Unsubscribe
// OnResourceUpdated registers a callback that is executed when cluster resource got updated.
OnResourceUpdated(callback func(cluster string, un *unstructured.Unstructured)) Unsubscribe
// OnResourceRemoved registers a callback that is executed when a cluster resource gets removed.
OnResourceRemoved(callback func(cluster string, key kube.ResourceKey)) Unsubscribe
// On<Term>Event registers callback that is executed on every sync <Term> event.
On<Term>Event(callback func(id string, info EventInfo, message string)) Unsubscribe
}
type Unsubscribe func()
```
**Sync<Term> data structure** - allows the engine to access the sync <Term> and update its status.
```golang
type Sync<Term> struct {
ID string
Status Status
Source ManifestSource
Destination ManifestDestination
Operation *Operation
}
type Sync<Term>Store interface {
List() ([]Sync<Term>, error)
Get(id string) (Sync<Term>, error)
SetStatus(id string, status Status) (Sync<Term>, error)
SetOperation(id string, operation *Operation) error
}
```
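A minimal in-memory implementation can illustrate the intended CRUD semantics of the store contract. Since `Sync<Term>` is still a placeholder, this sketch uses the hypothetical name `SyncUnit`, trims the struct to the fields the example needs, and invents the `memStore` type; none of these names are part of the proposal.

```go
package main

import (
	"fmt"
	"sync"
)

// Status and SyncUnit are simplified stand-ins for the proposed
// Status and Sync<Term> data structures.
type Status string

type SyncUnit struct {
	ID     string
	Status Status
}

// memStore is a toy in-memory Sync<Term>Store implementation.
type memStore struct {
	mu    sync.Mutex
	units map[string]SyncUnit
}

func (s *memStore) Get(id string) (SyncUnit, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	u, ok := s.units[id]
	if !ok {
		return SyncUnit{}, fmt.Errorf("sync unit %q not found", id)
	}
	return u, nil
}

// SetStatus updates the stored status and returns the updated unit,
// mirroring the SetStatus signature in the interface above.
func (s *memStore) SetStatus(id string, status Status) (SyncUnit, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	u, ok := s.units[id]
	if !ok {
		return SyncUnit{}, fmt.Errorf("sync unit %q not found", id)
	}
	u.Status = status
	s.units[id] = u
	return u, nil
}

func main() {
	store := &memStore{units: map[string]SyncUnit{
		"guestbook": {ID: "guestbook", Status: "OutOfSync"},
	}}
	u, _ := store.SetStatus("guestbook", "Synced")
	fmt.Println(u.Status)
}
```

In practice the store would likely be backed by a CRD or the consumer's own persistence layer rather than a mutex-guarded map.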
**Settings data structures** - holds reconciliation settings and cluster credentials.
```golang
type ReconciliationSettings struct {
// AppInstanceLabelKey holds label key which is automatically added to every resource managed by the application
AppInstanceLabelKey string
// ResourcesFilter holds settings which allow configuring the list of managed resource APIs
ResourcesFilter resource.ResourcesFilter
// ResourceOverrides holds settings which customize resource diffing logic
ResourceOverrides map[scheme.GroupKind]appv1.ResourceOverride
}
// provides access to cluster credentials
type CredentialsStore interface {
GetCluster(ctx context.Context, name string) (*appv1.Cluster, error)
WatchClusters(ctx context.Context, callback func(event *ClusterEvent)) error
}
```
**ManifestGenerator** - a Go interface that must be implemented by the GitOps engine consumer.
```golang
type ManifestResponse struct {
// Generated manifests
Manifests []string
// Resolved Git revision
Revision string
}
type ManifestGenerator interface {
Generate(ctx context.Context, app *appv1.Application, revision string, noCache bool) (*ManifestResponse, error)
}
```
**Engine instantiation** - to create an engine, the consumer must provide a cluster credentials store and a manifest generator, as well as reconciliation settings.
The code snippets below contain the `NewEngine` function definition and a usage example:
```golang
func NewEngine(settings ReconciliationSettings, creds CredentialsStore, manifests ManifestGenerator)
```
```golang
myManifests := manifests{}
myClusters := clusters{}
engine := NewEngine(ReconciliationSettings{}, myClusters, myManifests)
engine.OnBeforeSync(func(appName string, tasks []SyncTaskInfo) ([]SyncTaskInfo, error) {
	// customize the syncing process by returning an updated list of sync tasks, or return an error to prevent syncing
return tasks, nil
})
```
## Alternatives
[Bottom-up](./design-bottom-up.md)

View File

@@ -1,29 +0,0 @@
# GitOps Engine Design
## Summary
Flux and ArgoCD are two popular open-source GitOps implementations. They currently offer different user experiences but, at their core, Flux and ArgoCD have a lot in common.
Therefore, the Flux and ArgoCD maintainers have decided to join forces, with the hypothesis that working on a single project will be more effective, avoiding duplicate work and
ultimately bringing more and better value to the end-user.
Effectively merging Flux and ArgoCD into a single solution is a long-term goal. As a first step, both the ArgoCD and Flux teams are going to work on designing and implementing
the GitOps Engine.
![](https://user-images.githubusercontent.com/426437/66851601-ea9a6880-ef2f-11e9-807d-0c5f09fcc384.png)
The maintenance and support of the GitOps Engine will be a joint effort by the Flux and ArgoCD teams.
## Goals
The GitOps Engine:
* should contain core functionality pre-existing in ArgoCD and Flux.
## Non-Goals
* is not intended as a general framework for implementing GitOps services.
## Proposals
Teams have considered two ways to extract common functionality into the GitOps engine:
1. [Bottom-up](./design-bottom-up.md). Identify components that are used in both projects and move them one by one into GitOps engine repository.
1. [Top-down](./design-top-down.md). Take a whole sub-system of one project and make it customizable enough to be suitable for both Argo CD and Flux.

View File

@@ -1,31 +0,0 @@
# Docker Image Update Monitoring
## Summary
Many GitOps users would like to automate Kubernetes manifest changes in the deployment repository
(see [Deployment Repo Update Automation](./deployment-repo-update.md)). The changes might be triggered by
the CI pipeline run or a new image in the Docker registry. Flux provides docker registry monitoring as part of its
[Automated Image Update](https://docs.fluxcd.io/en/latest/references/automated-image-update.html) feature.
This document is meant to collect requirements for a component that provides docker registry monitoring functionality and
can be used by Argo CD and potentially Flux users.
## Requirements
### Configurable Event Handler
When a new docker image is discovered, the component should execute an event handler and pass the docker image name/version as a parameter.
The event handler is a shell script. The user should be able to specify the handler in the component configuration.
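A minimal sketch of such a handler, assuming the component invokes it with the image name and version as positional arguments (the calling convention and function name here are hypothetical, not part of the requirements above):

```shell
#!/bin/sh
# Hypothetical event handler: the component is assumed to call it with
# the discovered image name and version as arguments.
handle_new_image() {
  image="$1"
  version="$2"
  # A real handler might patch a manifest in the deployment repo and
  # push the change; here we just report what was discovered.
  echo "handled ${image}:${version}"
}

handle_new_image nginx 1.25.3
```

Keeping the handler a plain shell script means users can wire in any tooling they like without changes to the monitoring component itself.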
### Docker Registry WebHooks
Some Docker Registries send a webhook when a new image gets pushed. The component should provide a webhook handler which, when invoked, triggers the event handler.
### Image Pulling
In addition to the webhook, the component should support pulling image metadata. Pulling should detect new images and invoke the event handler for each one.
### Image Credentials Auto-Discovering
If the component is running inside a Kubernetes cluster together with the deployments, it already has access to the Docker registry credentials. Auto-discovery
detects the available docker registry credentials and uses them to access registries instead of requiring users to configure credentials manually.

View File

@@ -1,87 +0,0 @@
<!--
This template was adapted from the Kubernetes KEP template:
https://github.com/kubernetes/enhancements/blob/master/keps/YYYYMMDD-kep-template.md
-->
# Title
This is the title of the spec. Keep it simple and descriptive. A good title
can help communicate what the spec is and should be considered as part of any
review.
The title should be lowercased and spaces/punctuation should be replaced with
`-`.
## Summary
The `Summary` section is incredibly important for producing high quality
user-focused documentation such as release notes or a development roadmap. It
should be possible to collect this information before implementation begins in
order to avoid requiring implementers to split their attention between writing
release notes and implementing the feature itself. Ensure that the tone and
content of the `Summary` section is useful for a wide audience.
A good summary is probably at least a paragraph in length.
## Goals
List the specific goals of the spec. How will we know that this has succeeded?
## Non-Goals
What is out of scope for this spec? Listing non-goals helps to focus
discussion and make progress.
## Proposal
This is where we get down to the nitty gritty of what the proposal actually is.
### User Stories [optional]
Detail the things that people will be able to do if this spec is implemented.
Include as much detail as possible so that people can understand the "how" of
the system. The goal here is to make this feel real for users without getting
bogged down.
#### Story 1
#### Story 2
### Implementation Details/Notes/Constraints [optional]
What are the caveats to the implementation? What are some important details
that didn't come across above? Go into as much detail as necessary here.
This might be a good place to talk about core concepts and how they relate.
### Risks and Mitigations
What are the risks of this proposal and how do we mitigate them? Think broadly.
For example, consider both security and how this will impact the larger
Kubernetes ecosystem.
How will security be reviewed and by whom? How will UX be reviewed and by
whom?
Consider including folks that also work outside the SIG or subproject.
## Design Details
### Upgrade / Downgrade / Migration Strategy
If applicable, how will the component be upgraded and downgraded? Does this
spec propose migrating users from one component or behaviour to another?
### Public API changes
Does the spec propose a public API-facing change? If so, describe the impact of
changes.
## Drawbacks [optional]
Why should this spec _not_ be implemented?
## Alternatives [optional]
Similar to the `Drawbacks` section the `Alternatives` section is used to
highlight and record other possible approaches to delivering the value proposed
by the spec.

View File

@@ -1,6 +1,243 @@
al.essio.dev/pkg/shellescape v1.6.0/go.mod h1:6sIqp7X2P6mThCQ7twERpZTuigpr6KbZWtls1U8I890=
bitbucket.org/bertimus9/systemstat v0.5.0/go.mod h1:EkUWPp8lKFPMXP8vnbpT5JDI0W/sTiLZAvN8ONWErHY=
buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.36.9-20250912141014-52f32327d4b0.1/go.mod h1:aY3zbkNan5F+cGm9lITDP6oxJIwu0dn9KjJuJjWaHkg=
buf.build/go/protovalidate v0.14.0/go.mod h1:+F/oISho9MO7gJQNYC2VWLzcO1fTPmaTA08SDYJZncA=
buf.build/go/protoyaml v0.6.0/go.mod h1:RgUOsBu/GYKLDSIRgQXniXbNgFlGEZnQpRAUdLAFV2Q=
cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cloud.google.com/go v0.112.2 h1:ZaGT6LiG7dBzi6zNOvVZwacaXlmf3lRqnC4DQzqyRQw=
cloud.google.com/go v0.112.2/go.mod h1:iEqjp//KquGIJV/m+Pk3xecgKNhV+ry+vVTsy4TbDms=
cloud.google.com/go/accessapproval v1.7.5/go.mod h1:g88i1ok5dvQ9XJsxpUInWWvUBrIZhyPDPbk4T01OoJ0=
cloud.google.com/go/accesscontextmanager v1.8.5/go.mod h1:TInEhcZ7V9jptGNqN3EzZ5XMhT6ijWxTGjzyETwmL0Q=
cloud.google.com/go/aiplatform v1.60.0/go.mod h1:eTlGuHOahHprZw3Hio5VKmtThIOak5/qy6pzdsqcQnM=
cloud.google.com/go/analytics v0.23.0/go.mod h1:YPd7Bvik3WS95KBok2gPXDqQPHy08TsCQG6CdUCb+u0=
cloud.google.com/go/apigateway v1.6.5/go.mod h1:6wCwvYRckRQogyDDltpANi3zsCDl6kWi0b4Je+w2UiI=
cloud.google.com/go/apigeeconnect v1.6.5/go.mod h1:MEKm3AiT7s11PqTfKE3KZluZA9O91FNysvd3E6SJ6Ow=
cloud.google.com/go/apigeeregistry v0.8.3/go.mod h1:aInOWnqF4yMQx8kTjDqHNXjZGh/mxeNlAf52YqtASUs=
cloud.google.com/go/appengine v1.8.5/go.mod h1:uHBgNoGLTS5di7BvU25NFDuKa82v0qQLjyMJLuPQrVo=
cloud.google.com/go/area120 v0.8.5/go.mod h1:BcoFCbDLZjsfe4EkCnEq1LKvHSK0Ew/zk5UFu6GMyA0=
cloud.google.com/go/artifactregistry v1.14.7/go.mod h1:0AUKhzWQzfmeTvT4SjfI4zjot72EMfrkvL9g9aRjnnM=
cloud.google.com/go/asset v1.17.2/go.mod h1:SVbzde67ehddSoKf5uebOD1sYw8Ab/jD/9EIeWg99q4=
cloud.google.com/go/assuredworkloads v1.11.5/go.mod h1:FKJ3g3ZvkL2D7qtqIGnDufFkHxwIpNM9vtmhvt+6wqk=
cloud.google.com/go/automl v1.13.5/go.mod h1:MDw3vLem3yh+SvmSgeYUmUKqyls6NzSumDm9OJ3xJ1Y=
cloud.google.com/go/baremetalsolution v1.2.4/go.mod h1:BHCmxgpevw9IEryE99HbYEfxXkAEA3hkMJbYYsHtIuY=
cloud.google.com/go/batch v1.8.0/go.mod h1:k8V7f6VE2Suc0zUM4WtoibNrA6D3dqBpB+++e3vSGYc=
cloud.google.com/go/beyondcorp v1.0.4/go.mod h1:Gx8/Rk2MxrvWfn4WIhHIG1NV7IBfg14pTKv1+EArVcc=
cloud.google.com/go/bigquery v1.59.1/go.mod h1:VP1UJYgevyTwsV7desjzNzDND5p6hZB+Z8gZJN1GQUc=
cloud.google.com/go/billing v1.18.2/go.mod h1:PPIwVsOOQ7xzbADCwNe8nvK776QpfrOAUkvKjCUcpSE=
cloud.google.com/go/binaryauthorization v1.8.1/go.mod h1:1HVRyBerREA/nhI7yLang4Zn7vfNVA3okoAR9qYQJAQ=
cloud.google.com/go/certificatemanager v1.7.5/go.mod h1:uX+v7kWqy0Y3NG/ZhNvffh0kuqkKZIXdvlZRO7z0VtM=
cloud.google.com/go/channel v1.17.5/go.mod h1:FlpaOSINDAXgEext0KMaBq/vwpLMkkPAw9b2mApQeHc=
cloud.google.com/go/cloudbuild v1.15.1/go.mod h1:gIofXZSu+XD2Uy+qkOrGKEx45zd7s28u/k8f99qKals=
cloud.google.com/go/clouddms v1.7.4/go.mod h1:RdrVqoFG9RWI5AvZ81SxJ/xvxPdtcRhFotwdE79DieY=
cloud.google.com/go/cloudtasks v1.12.6/go.mod h1:b7c7fe4+TJsFZfDyzO51F7cjq7HLUlRi/KZQLQjDsaY=
cloud.google.com/go/compute v1.24.0 h1:phWcR2eWzRJaL/kOiJwfFsPs4BaKq1j6vnpZrc1YlVg=
cloud.google.com/go/compute v1.24.0/go.mod h1:kw1/T+h/+tK2LJK0wiPPx1intgdAM3j/g3hFDlscY40=
cloud.google.com/go/contactcenterinsights v1.13.0/go.mod h1:ieq5d5EtHsu8vhe2y3amtZ+BE+AQwX5qAy7cpo0POsI=
cloud.google.com/go/container v1.31.0/go.mod h1:7yABn5s3Iv3lmw7oMmyGbeV6tQj86njcTijkkGuvdZA=
cloud.google.com/go/containeranalysis v0.11.4/go.mod h1:cVZT7rXYBS9NG1rhQbWL9pWbXCKHWJPYraE8/FTSYPE=
cloud.google.com/go/datacatalog v1.19.3/go.mod h1:ra8V3UAsciBpJKQ+z9Whkxzxv7jmQg1hfODr3N3YPJ4=
cloud.google.com/go/dataflow v0.9.5/go.mod h1:udl6oi8pfUHnL0z6UN9Lf9chGqzDMVqcYTcZ1aPnCZQ=
cloud.google.com/go/dataform v0.9.2/go.mod h1:S8cQUwPNWXo7m/g3DhWHsLBoufRNn9EgFrMgne2j7cI=
cloud.google.com/go/datafusion v1.7.5/go.mod h1:bYH53Oa5UiqahfbNK9YuYKteeD4RbQSNMx7JF7peGHc=
cloud.google.com/go/datalabeling v0.8.5/go.mod h1:IABB2lxQnkdUbMnQaOl2prCOfms20mcPxDBm36lps+s=
cloud.google.com/go/dataplex v1.14.2/go.mod h1:0oGOSFlEKef1cQeAHXy4GZPB/Ife0fz/PxBf+ZymA2U=
cloud.google.com/go/dataproc/v2 v2.4.0/go.mod h1:3B1Ht2aRB8VZIteGxQS/iNSJGzt9+CA0WGnDVMEm7Z4=
cloud.google.com/go/dataqna v0.8.5/go.mod h1:vgihg1mz6n7pb5q2YJF7KlXve6tCglInd6XO0JGOlWM=
cloud.google.com/go/datastore v1.15.0/go.mod h1:GAeStMBIt9bPS7jMJA85kgkpsMkvseWWXiaHya9Jes8=
cloud.google.com/go/datastream v1.10.4/go.mod h1:7kRxPdxZxhPg3MFeCSulmAJnil8NJGGvSNdn4p1sRZo=
cloud.google.com/go/deploy v1.17.1/go.mod h1:SXQyfsXrk0fBmgBHRzBjQbZhMfKZ3hMQBw5ym7MN/50=
cloud.google.com/go/dialogflow v1.49.0/go.mod h1:dhVrXKETtdPlpPhE7+2/k4Z8FRNUp6kMV3EW3oz/fe0=
cloud.google.com/go/dlp v1.11.2/go.mod h1:9Czi+8Y/FegpWzgSfkRlyz+jwW6Te9Rv26P3UfU/h/w=
cloud.google.com/go/documentai v1.25.0/go.mod h1:ftLnzw5VcXkLItp6pw1mFic91tMRyfv6hHEY5br4KzY=
cloud.google.com/go/domains v0.9.5/go.mod h1:dBzlxgepazdFhvG7u23XMhmMKBjrkoUNaw0A8AQB55Y=
cloud.google.com/go/edgecontainer v1.1.5/go.mod h1:rgcjrba3DEDEQAidT4yuzaKWTbkTI5zAMu3yy6ZWS0M=
cloud.google.com/go/errorreporting v0.3.0/go.mod h1:xsP2yaAp+OAW4OIm60An2bbLpqIhKXdWR/tawvl7QzU=
cloud.google.com/go/essentialcontacts v1.6.6/go.mod h1:XbqHJGaiH0v2UvtuucfOzFXN+rpL/aU5BCZLn4DYl1Q=
cloud.google.com/go/eventarc v1.13.4/go.mod h1:zV5sFVoAa9orc/52Q+OuYUG9xL2IIZTbbuTHC6JSY8s=
cloud.google.com/go/filestore v1.8.1/go.mod h1:MbN9KcaM47DRTIuLfQhJEsjaocVebNtNQhSLhKCF5GM=
cloud.google.com/go/firestore v1.14.0/go.mod h1:96MVaHLsEhbvkBEdZgfN+AS/GIkco1LRpH9Xp9YZfzQ=
cloud.google.com/go/functions v1.16.0/go.mod h1:nbNpfAG7SG7Duw/o1iZ6ohvL7mc6MapWQVpqtM29n8k=
cloud.google.com/go/gkebackup v1.3.5/go.mod h1:KJ77KkNN7Wm1LdMopOelV6OodM01pMuK2/5Zt1t4Tvc=
cloud.google.com/go/gkeconnect v0.8.5/go.mod h1:LC/rS7+CuJ5fgIbXv8tCD/mdfnlAadTaUufgOkmijuk=
cloud.google.com/go/gkehub v0.14.5/go.mod h1:6bzqxM+a+vEH/h8W8ec4OJl4r36laxTs3A/fMNHJ0wA=
cloud.google.com/go/gkemulticloud v1.1.1/go.mod h1:C+a4vcHlWeEIf45IB5FFR5XGjTeYhF83+AYIpTy4i2Q=
cloud.google.com/go/gsuiteaddons v1.6.5/go.mod h1:Lo4P2IvO8uZ9W+RaC6s1JVxo42vgy+TX5a6hfBZ0ubs=
cloud.google.com/go/iam v1.1.6/go.mod h1:O0zxdPeGBoFdWW3HWmBxJsk0pfvNM/p/qa82rWOGTwI=
cloud.google.com/go/iap v1.9.4/go.mod h1:vO4mSq0xNf/Pu6E5paORLASBwEmphXEjgCFg7aeNu1w=
cloud.google.com/go/ids v1.4.5/go.mod h1:p0ZnyzjMWxww6d2DvMGnFwCsSxDJM666Iir1bK1UuBo=
cloud.google.com/go/iot v1.7.5/go.mod h1:nq3/sqTz3HGaWJi1xNiX7F41ThOzpud67vwk0YsSsqs=
cloud.google.com/go/kms v1.15.7/go.mod h1:ub54lbsa6tDkUwnu4W7Yt1aAIFLnspgh0kPGToDukeI=
cloud.google.com/go/language v1.12.3/go.mod h1:evFX9wECX6mksEva8RbRnr/4wi/vKGYnAJrTRXU8+f8=
cloud.google.com/go/lifesciences v0.9.5/go.mod h1:OdBm0n7C0Osh5yZB7j9BXyrMnTRGBJIZonUMxo5CzPw=
cloud.google.com/go/logging v1.9.0/go.mod h1:1Io0vnZv4onoUnsVUQY3HZ3Igb1nBchky0A0y7BBBhE=
cloud.google.com/go/longrunning v0.5.6/go.mod h1:vUaDrWYOMKRuhiv6JBnn49YxCPz2Ayn9GqyjaBT8/mA=
cloud.google.com/go/managedidentities v1.6.5/go.mod h1:fkFI2PwwyRQbjLxlm5bQ8SjtObFMW3ChBGNqaMcgZjI=
cloud.google.com/go/maps v1.6.4/go.mod h1:rhjqRy8NWmDJ53saCfsXQ0LKwBHfi6OSh5wkq6BaMhI=
cloud.google.com/go/mediatranslation v0.8.5/go.mod h1:y7kTHYIPCIfgyLbKncgqouXJtLsU+26hZhHEEy80fSs=
cloud.google.com/go/memcache v1.10.5/go.mod h1:/FcblbNd0FdMsx4natdj+2GWzTq+cjZvMa1I+9QsuMA=
cloud.google.com/go/metastore v1.13.4/go.mod h1:FMv9bvPInEfX9Ac1cVcRXp8EBBQnBcqH6gz3KvJ9BAE=
cloud.google.com/go/monitoring v1.18.0/go.mod h1:c92vVBCeq/OB4Ioyo+NbN2U7tlg5ZH41PZcdvfc+Lcg=
cloud.google.com/go/networkconnectivity v1.14.4/go.mod h1:PU12q++/IMnDJAB+3r+tJtuCXCfwfN+C6Niyj6ji1Po=
cloud.google.com/go/networkmanagement v1.9.4/go.mod h1:daWJAl0KTFytFL7ar33I6R/oNBH8eEOX/rBNHrC/8TA=
cloud.google.com/go/networksecurity v0.9.5/go.mod h1:KNkjH/RsylSGyyZ8wXpue8xpCEK+bTtvof8SBfIhMG8=
cloud.google.com/go/notebooks v1.11.3/go.mod h1:0wQyI2dQC3AZyQqWnRsp+yA+kY4gC7ZIVP4Qg3AQcgo=
cloud.google.com/go/optimization v1.6.3/go.mod h1:8ve3svp3W6NFcAEFr4SfJxrldzhUl4VMUJmhrqVKtYA=
cloud.google.com/go/orchestration v1.8.5/go.mod h1:C1J7HesE96Ba8/hZ71ISTV2UAat0bwN+pi85ky38Yq8=
cloud.google.com/go/orgpolicy v1.12.1/go.mod h1:aibX78RDl5pcK3jA8ysDQCFkVxLj3aOQqrbBaUL2V5I=
cloud.google.com/go/osconfig v1.12.5/go.mod h1:D9QFdxzfjgw3h/+ZaAb5NypM8bhOMqBzgmbhzWViiW8=
cloud.google.com/go/oslogin v1.13.1/go.mod h1:vS8Sr/jR7QvPWpCjNqy6LYZr5Zs1e8ZGW/KPn9gmhws=
cloud.google.com/go/phishingprotection v0.8.5/go.mod h1:g1smd68F7mF1hgQPuYn3z8HDbNre8L6Z0b7XMYFmX7I=
cloud.google.com/go/policytroubleshooter v1.10.3/go.mod h1:+ZqG3agHT7WPb4EBIRqUv4OyIwRTZvsVDHZ8GlZaoxk=
cloud.google.com/go/privatecatalog v0.9.5/go.mod h1:fVWeBOVe7uj2n3kWRGlUQqR/pOd450J9yZoOECcQqJk=
cloud.google.com/go/pubsub v1.36.1/go.mod h1:iYjCa9EzWOoBiTdd4ps7QoMtMln5NwaZQpK1hbRfBDE=
cloud.google.com/go/pubsublite v1.8.1/go.mod h1:fOLdU4f5xldK4RGJrBMm+J7zMWNj/k4PxwEZXy39QS0=
cloud.google.com/go/recaptchaenterprise/v2 v2.9.2/go.mod h1:trwwGkfhCmp05Ll5MSJPXY7yvnO0p4v3orGANAFHAuU=
cloud.google.com/go/recommendationengine v0.8.5/go.mod h1:A38rIXHGFvoPvmy6pZLozr0g59NRNREz4cx7F58HAsQ=
cloud.google.com/go/recommender v1.12.1/go.mod h1:gf95SInWNND5aPas3yjwl0I572dtudMhMIG4ni8nr+0=
cloud.google.com/go/redis v1.14.2/go.mod h1:g0Lu7RRRz46ENdFKQ2EcQZBAJ2PtJHJLuiiRuEXwyQw=
cloud.google.com/go/resourcemanager v1.9.5/go.mod h1:hep6KjelHA+ToEjOfO3garMKi/CLYwTqeAw7YiEI9x8=
cloud.google.com/go/resourcesettings v1.6.5/go.mod h1:WBOIWZraXZOGAgoR4ukNj0o0HiSMO62H9RpFi9WjP9I=
cloud.google.com/go/retail v1.16.0/go.mod h1:LW7tllVveZo4ReWt68VnldZFWJRzsh9np+01J9dYWzE=
cloud.google.com/go/run v1.3.4/go.mod h1:FGieuZvQ3tj1e9GnzXqrMABSuir38AJg5xhiYq+SF3o=
cloud.google.com/go/scheduler v1.10.6/go.mod h1:pe2pNCtJ+R01E06XCDOJs1XvAMbv28ZsQEbqknxGOuE=
cloud.google.com/go/secretmanager v1.11.5/go.mod h1:eAGv+DaCHkeVyQi0BeXgAHOU0RdrMeZIASKc+S7VqH4=
cloud.google.com/go/security v1.15.5/go.mod h1:KS6X2eG3ynWjqcIX976fuToN5juVkF6Ra6c7MPnldtc=
cloud.google.com/go/securitycenter v1.24.4/go.mod h1:PSccin+o1EMYKcFQzz9HMMnZ2r9+7jbc+LvPjXhpwcU=
cloud.google.com/go/servicedirectory v1.11.4/go.mod h1:Bz2T9t+/Ehg6x+Y7Ycq5xiShYLD96NfEsWNHyitj1qM=
cloud.google.com/go/shell v1.7.5/go.mod h1:hL2++7F47/IfpfTO53KYf1EC+F56k3ThfNEXd4zcuiE=
cloud.google.com/go/spanner v1.56.0/go.mod h1:DndqtUKQAt3VLuV2Le+9Y3WTnq5cNKrnLb/Piqcj+h0=
cloud.google.com/go/speech v1.21.1/go.mod h1:E5GHZXYQlkqWQwY5xRSLHw2ci5NMQNG52FfMU1aZrIA=
cloud.google.com/go/storagetransfer v1.10.4/go.mod h1:vef30rZKu5HSEf/x1tK3WfWrL0XVoUQN/EPDRGPzjZs=
cloud.google.com/go/talent v1.6.6/go.mod h1:y/WQDKrhVz12WagoarpAIyKKMeKGKHWPoReZ0g8tseQ=
cloud.google.com/go/texttospeech v1.7.5/go.mod h1:tzpCuNWPwrNJnEa4Pu5taALuZL4QRRLcb+K9pbhXT6M=
cloud.google.com/go/tpu v1.6.5/go.mod h1:P9DFOEBIBhuEcZhXi+wPoVy/cji+0ICFi4TtTkMHSSs=
cloud.google.com/go/trace v1.10.5/go.mod h1:9hjCV1nGBCtXbAE4YK7OqJ8pmPYSxPA0I67JwRd5s3M=
cloud.google.com/go/translate v1.10.3/go.mod h1:GW0vC1qvPtd3pgtypCv4k4U8B7EdgK9/QEF2aJEUovs=
cloud.google.com/go/video v1.20.4/go.mod h1:LyUVjyW+Bwj7dh3UJnUGZfyqjEto9DnrvTe1f/+QrW0=
cloud.google.com/go/videointelligence v1.11.5/go.mod h1:/PkeQjpRponmOerPeJxNPuxvi12HlW7Em0lJO14FC3I=
cloud.google.com/go/vision/v2 v2.8.0/go.mod h1:ocqDiA2j97pvgogdyhoxiQp2ZkDCyr0HWpicywGGRhU=
cloud.google.com/go/vmmigration v1.7.5/go.mod h1:pkvO6huVnVWzkFioxSghZxIGcsstDvYiVCxQ9ZH3eYI=
cloud.google.com/go/vmwareengine v1.1.1/go.mod h1:nMpdsIVkUrSaX8UvmnBhzVzG7PPvNYc5BszcvIVudYs=
cloud.google.com/go/vpcaccess v1.7.5/go.mod h1:slc5ZRvvjP78c2dnL7m4l4R9GwL3wDLcpIWz6P/ziig=
cloud.google.com/go/webrisk v1.9.5/go.mod h1:aako0Fzep1Q714cPEM5E+mtYX8/jsfegAuS8aivxy3U=
cloud.google.com/go/websecurityscanner v1.6.5/go.mod h1:QR+DWaxAz2pWooylsBF854/Ijvuoa3FCyS1zBa1rAVQ=
cloud.google.com/go/workflows v1.12.4/go.mod h1:yQ7HUqOkdJK4duVtMeBCAOPiN1ZF1E9pAMX51vpwB/w=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0/go.mod h1:Cz6ft6Dkn3Et6l2v2a9/RpN7epQ1GtDlO6lj8bEcOvw=
github.com/JeffAshton/win_pdh v0.0.0-20161109143554-76bb4ee9f0ab/go.mod h1:3VYc5hodBMJ5+l/7J4xAyMeuM2PNuepvHlGs8yilUCA=
github.com/MakeNowJust/heredoc/v2 v2.0.1/go.mod h1:6/2Abh5s+hc3g9nbWLe9ObDIOhaRrqsyY9MWy+4JdRM=
github.com/Microsoft/hnslib v0.1.1/go.mod h1:DRQR4IjLae6WHYVhW7uqe44hmFUiNhmaWA+jwMbz5tM=
github.com/alecthomas/kingpin/v2 v2.4.0/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/antlr4-go/antlr/v4 v4.13.1/go.mod h1:GKmUxMtwp6ZgGwZSva4eWPC5mS6vUAmOABFgjdkM7Nw=
github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bwesterb/go-ristretto v1.2.3/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0=
github.com/cenkalti/backoff v2.1.1+incompatible h1:tKJnvO2kl0zmb/jA5UKAt4VoEVw1qxKWjE/Bpp46npY=
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/container-storage-interface/spec v1.9.0/go.mod h1:ZfDu+3ZRyeVqxZM0Ds19MVLkN2d1XJ5MAfi1L3VjlT0=
github.com/containerd/containerd/api v1.8.0/go.mod h1:dFv4lt6S20wTu/hMcP4350RL87qPWLVa/OHOwmmdnYc=
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/containerd/ttrpc v1.2.6/go.mod h1:YCXHsb32f+Sq5/72xHubdiJRQY9inL4a4ZQrAbN1q9o=
github.com/containerd/typeurl/v2 v2.2.2/go.mod h1:95ljDnPfD3bAbDJRugOiShd/DlAAsxGtUBhJxIn7SCk=
github.com/coredns/caddy v1.1.1/go.mod h1:A6ntJQlAWuQfFlsd9hvigKbo2WS0VUs2l1e2F+BawD4=
github.com/coredns/corefile-migration v1.0.26/go.mod h1:56DPqONc3njpVPsdilEnfijCwNGC3/kTJLl7i7SPavY=
github.com/coreos/go-oidc v2.3.0+incompatible h1:+5vEsrgprdLjjQ9FzIKAzQz1wwPD+83hQRfUIPh7rO0=
github.com/coreos/go-oidc v2.3.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/danieljoos/wincred v1.2.2/go.mod h1:w7w4Utbrz8lqeMbDAK0lkNJUv5sAOkFi7nd/ogr0Uh8=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/envoyproxy/go-control-plane v0.13.4/go.mod h1:kDfuBlDVsSj2MjrLEtRWtHlsWIFcGyB2RMO44Dc5GZA=
github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4=
github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU=
github.com/euank/go-kmsg-parser v2.0.0+incompatible/go.mod h1:MhmAMZ8V4CYH4ybgdRwPr2TU5ThnS43puaKEMpja1uw=
github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/cadvisor v0.52.1/go.mod h1:OAhPcx1nOm5YwMh/JhpUOMKyv1YKLRtS9KgzWPndHmA=
github.com/google/cel-go v0.26.1/go.mod h1:A9O8OU9rdvrK5MQyrqfIxo1a0u4g3sF8KB6PUIaryMM=
github.com/google/go-pkcs11 v0.3.0/go.mod h1:6eQoGcuNJpa7jnd5pMGdkSaQpNDYvPlXWMcjXXThLlY=
github.com/grpc-ecosystem/go-grpc-middleware v1.2.2 h1:FlFbCRLd5Jr4iYXZufAvgWN6Ao0JrI5chLINnUXDDr0=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/imdario/mergo v0.3.13/go.mod h1:4lJ1jqUDcsbIECGy0RUJAXNIhg+6ocWgb1ALK2O4oXg=
github.com/ishidawataru/sctp v0.0.0-20250521072954-ae8eb7fa7995/go.mod h1:co9pwDoBCm1kGxawmb4sPq0cSIOOWNPT4KnHotMP1Zg=
github.com/jessevdk/go-flags v1.6.1/go.mod h1:Mk8T1hIAWpOiJiHa9rJASDK2UGWji0EuPGBnNLMooyc=
github.com/k0kubun/colorstring v0.0.0-20150214042306-9440f1994b88/go.mod h1:3w7q1U84EfirKl04SVQ/s7nPm1ZPhiXd34z40TNz36k=
github.com/k0kubun/pp v3.0.1+incompatible/go.mod h1:GWse8YhT0p8pT4ir3ZgBbfZild3tgzSScAn6HmfYukg=
github.com/karrick/godirwalk v1.17.0/go.mod h1:j4mkqPuvaLI8mp1DroR3P6ad7cyYd4c1qeJ3RV7ULlk=
github.com/keybase/dbus v0.0.0-20220506165403-5aa21ea2c23a/go.mod h1:YPNKjjE7Ubp9dTbnWvsP3HT+hYnY6TfXzubYTBeUxc8=
github.com/libopenstorage/openstorage v1.0.0/go.mod h1:Sp1sIObHjat1BeXhfMqLZ14wnOzEhNx2YQedreMcUyc=
github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
github.com/mitchellh/cli v1.1.5/go.mod h1:v8+iFts2sPIKUV1ltktPXMCC8fumSKFItNcD2cLtRR4=
github.com/moby/ipvs v1.1.0/go.mod h1:4VJMWuf098bsUMmZEiD4Tjk/O7mOn3l1PTD3s4OoYAs=
github.com/moby/sys/mountinfo v0.7.2/go.mod h1:1YOa8w8Ih7uW0wALDUgT1dTTSBrZ+HiBLGws92L2RU4=
github.com/moby/sys/userns v0.1.0/go.mod h1:IHUYgu/kao6N8YZlp9Cf444ySSvCmDlmzUcYfDHOl28=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
github.com/mrunalp/fileutils v0.5.1/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/olekukonko/ts v0.0.0-20171002115256-78ecb04241c0/go.mod h1:F/7q8/HZz+TXjlsoZQQKVYvXTZaFH4QRa3y+j1p7MS0=
github.com/opencontainers/cgroups v0.0.1/go.mod h1:s8lktyhlGUqM7OSRL5P7eAW6Wb+kWPNvt4qvVfzA5vs=
github.com/opencontainers/runtime-spec v1.2.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.11.1/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec=
github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/pquerna/cachecontrol v0.1.0/go.mod h1:NrUG3Z7Rdu85UNR3vm7SOsl1nFIeSiQnrHV5K9mBcUI=
github.com/russross/blackfriday v1.6.0 h1:KqfZb0pUVN2lYqZUYRddxF4OR8ZMURnJIG5Y3VRLtww=
github.com/russross/blackfriday v1.6.0/go.mod h1:ti0ldHuxg49ri4ksnFxlkCfN+hvslNlmVHqNRXXJNAY=
github.com/santhosh-tekuri/jsonschema/v5 v5.3.1/go.mod h1:uToXkOrWAZ6/Oc07xWQrPOhJotwFIyu2bBVN41fcDUY=
github.com/spiffe/go-spiffe/v2 v2.5.0/go.mod h1:P+NxobPc6wXhVtINNtFjNWGBTreew1GBUCwT2wPmb7g=
github.com/stoewer/go-strcase v1.3.1/go.mod h1:fAH5hQ5pehh+j3nZfvwdk2RgEgQjAoM8wodgtPmh1xo=
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75/go.mod h1:KO6IkyS8Y3j8OdNO85qEYBsRPuteD+YciPomcXdrMnk=
github.com/vishvananda/netlink v1.3.1/go.mod h1:ARtKouGSTGchR8aMwmkzC0qiNPrrWO5JS/XMVl45+b4=
github.com/vishvananda/netns v0.0.5/go.mod h1:SpkAiCQRtJ6TvvxPnOSyH3BMl6unz3xZlaprSwhNNJM=
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU=
github.com/xiang90/probing v0.0.0-20221125231312-a49e3df8f510/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
github.com/zalando/go-keyring v0.2.6/go.mod h1:2TCrxYrbUNYfNS/Kgy/LSrkSQzZ5UPVH85RwfczwvcI=
github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4=
go.etcd.io/bbolt v1.4.2/go.mod h1:Is8rSHO/b4f3XigBC0lL0+4FwAQv3HXEEIgFMuKHceM=
go.etcd.io/etcd/api/v3 v3.6.4/go.mod h1:eFhhvfR8Px1P6SEuLT600v+vrhdDTdcfMzmnxVXXSbk=
go.etcd.io/etcd/client/pkg/v3 v3.6.4/go.mod h1:sbdzr2cl3HzVmxNw//PH7aLGVtY4QySjQFuaCgcRFAI=
go.etcd.io/etcd/client/v3 v3.6.4/go.mod h1:jaNNHCyg2FdALyKWnd7hxZXZxZANb0+KGY+YQaEMISo=
go.etcd.io/etcd/pkg/v3 v3.6.4/go.mod h1:kKcYWP8gHuBRcteyv6MXWSN0+bVMnfgqiHueIZnKMtE=
go.etcd.io/etcd/server/v3 v3.6.4/go.mod h1:aYCL/h43yiONOv0QIR82kH/2xZ7m+IWYjzRmyQfnCAg=
go.etcd.io/raft/v3 v3.6.0/go.mod h1:nLvLevg6+xrVtHUmVaTcTz603gQPHfh7kUAwV6YpfGo=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/contrib/detectors/gcp v1.36.0/go.mod h1:IbBN8uAIIx734PTonTPxAxnjc2pQTxWNkwfstZ+6H2k=
go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelrestful v0.44.0/go.mod h1:uq8DrRaen3suIWTpdR/JNHCGpurSvMv9D5Nr5CU5TXc=
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca/go.mod h1:jxU+3+j+71eXOW14274+SmmuW82qJzl6iZSeqEtTGds=
google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds=
google.golang.org/genproto/googleapis/bytestream v0.0.0-20250219182151-9fdb1cabc7b2/go.mod h1:35wIojE/F1ptq1nfNDNjtowabHoMSA2qQs7+smpCO5s=
gopkg.in/go-jose/go-jose.v2 v2.6.3/go.mod h1:zzZDPkNNw/c9IE7Z9jr11mBZQhKQTMzoEEIoEdZlFBI=
gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
k8s.io/cloud-provider v0.34.0/go.mod h1:JbMa0t6JIGDMLI7Py6bdp9TN6cfuHrWGq+E/X+Ljkmo=
k8s.io/cluster-bootstrap v0.34.0/go.mod h1:ZpbQwB+CDTYZIjDKM6Hnt081s0xswcFrlhW7mHVNc7k=
k8s.io/cri-api v0.34.0/go.mod h1:4qVUjidMg7/Z9YGZpqIDygbkPWkg3mkS1PvOx/kpHTE=
k8s.io/cri-client v0.34.0/go.mod h1:KkGaUJWMvCdpSTf15Wiqtf3WKl3qjcvkBcMApPCqpxQ=
k8s.io/csi-translation-lib v0.34.0/go.mod h1:lZ+vpT3/6hx7GxXcI1mcoHxZSONvxgl2NwawzFnJP4Y=
k8s.io/dynamic-resource-allocation v0.34.0/go.mod h1:aqmoDIvXjQRhSgxQkFLl6+Ndg6MfdEOI+TQsj1j9V+g=
k8s.io/endpointslice v0.34.0/go.mod h1:aUArEJwcmRHkFG91fXsMmJXlGzlsRNfWsWNlaq6Rhqo=
k8s.io/externaljwt v0.34.0/go.mod h1:LIqFAVwSkcWVlP3c78wxe2VGmgDySxfqX/wwXzVrV/Q=
k8s.io/kms v0.34.0/go.mod h1:s1CFkLG7w9eaTYvctOxosx88fl4spqmixnNpys0JAtM=
k8s.io/kube-controller-manager v0.34.0/go.mod h1:qhiHYDzVwqtZBwg2bp2DiuyXpb2xPYgl6EfOyD1puzI=
k8s.io/kube-proxy v0.34.0/go.mod h1:tfwI8dCKm5Q0r+aVIbrq/aC36Kk936w2LZu8/rvJzWI=
k8s.io/kube-scheduler v0.34.0/go.mod h1:7pt2HDb32lZOihbt/aamuMBvSe1o+rrd2rQC8aJyfP0=
k8s.io/kubelet v0.34.0/go.mod h1:NqbF8ViVettlZbf9hw9DJhubaWn7rGvDDTcLMDm6tQ0=
k8s.io/metrics v0.34.0/go.mod h1:KCuXmotE0v4AvoARKUP8NC4lUnbK/Du1mluGdor5h4M=
k8s.io/mount-utils v0.34.0/go.mod h1:MIjjYlqJ0ziYQg0MO09kc9S96GIcMkhF/ay9MncF0GA=
k8s.io/pod-security-admission v0.34.0/go.mod h1:ICOx2MB6W7ZEjfIOJ5NuJFfMFZbeXWgxOmz08Ox51iQ=
k8s.io/sample-apiserver v0.34.0/go.mod h1:UuFQOwVRrKuCK1/ksjcuVjMU9WKD54ysaI+PgVVfyNI=
k8s.io/system-validators v1.10.1/go.mod h1:awfSS706v9R12VC7u7K89FKfqVy44G+E0L1A0FX9Wmw=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.2/go.mod h1:Ve9uj1L+deCXFrPOk1LpFXqTg7LCFzFso6PA48q/XZw=
sigs.k8s.io/knftables v0.0.17/go.mod h1:f/5ZLKYEUPUhVjUCg6l80ACdL7CIIyeL0DxfgojGRTk=
sigs.k8s.io/kustomize/kustomize/v5 v5.7.1/go.mod h1:+5/SrBcJ4agx1SJknGuR/c9thwRSKLxnKoI5BzXFaLU=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0/go.mod h1:dDy58f92j70zLsuZVuUX5Wp9vtxXpaZnkPGWeqDfCps=