Fixed a race condition in repository credentials handling by
implementing deep copying of secrets before modification.
This prevents concurrent map read/write panics when multiple
goroutines access the same secret.
The fix ensures thread-safe operations by always operating on
copies rather than shared objects.
Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
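The pattern described above can be sketched as follows. This is a minimal illustration, not the actual repository code: the secret is modeled as a plain `map[string][]byte` and the function name is illustrative.

```go
package main

import "fmt"

// deepCopySecretData returns an independent copy of a secret's data map,
// including copies of the value slices, so callers can modify the copy
// without racing against concurrent readers of the original shared map.
func deepCopySecretData(src map[string][]byte) map[string][]byte {
	dst := make(map[string][]byte, len(src))
	for k, v := range src {
		vCopy := make([]byte, len(v))
		copy(vCopy, v)
		dst[k] = vCopy
	}
	return dst
}

func main() {
	shared := map[string][]byte{"token": []byte("abc")}
	local := deepCopySecretData(shared)
	local["token"] = []byte("xyz") // mutate the copy only
	fmt.Println(string(shared["token"]), string(local["token"]))
}
```

Because each goroutine works on its own copy, no goroutine ever writes to a map another goroutine may be reading, which is what triggers Go's concurrent map read/write panic.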
* Using live obj to get the resource key if not nil
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Fixed failing unit tests
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Added test case for validating cluster scoped resources with ApplyOutOfSyncOnly=true
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Fixed gofumpt formatting errors
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Corrected unit tests for cluster scoped resources
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Removed unwanted code comments
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Added comments for explaining the reason why ns is set from live object
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Added comments in the unit test
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
---------
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* fix: go mod tidy is not working due to k8s.io/externaljwt dependency
Signed-off-by: pasha <pasha.k@fyxt.com>
* feat: Enable SkipDryRunOnMissingResource sync option on Application level
Signed-off-by: pasha <pasha.k@fyxt.com>
* feat: Enable SkipDryRunOnMissingResource sync option on Application level
* feat: add support for skipping dry run on missing resources in sync context
- Introduced a new option to skip dry run verification for missing resources at the application level.
- Updated the sync context to include a flag for this feature.
- Enhanced tests to cover scenarios where the skip dry run annotation is applied to all resources.
---------
Signed-off-by: pasha <pasha.k@fyxt.com>
Co-authored-by: pasha <pasha.k@fyxt.com>
map[] in error output exposes secret data in last-applied-annotation
& patch error
Invalid secrets with stringData expose the secret values in the diff. Attempt a
normalization to prevent it.
Refactor stringData to data conversion to eliminate code duplication
Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
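The normalization above can be sketched like this. It is a simplified illustration (the function name and unstructured-style maps are assumptions): the API server folds `stringData` into `data` with base64 encoding on admission, so diffing the normalized form keeps raw values out of the output.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// normalizeStringData folds a Secret's stringData entries into data
// (base64-encoded), mirroring what the API server does on admission,
// so raw string values never appear in a diff.
func normalizeStringData(secret map[string]interface{}) {
	stringData, ok := secret["stringData"].(map[string]interface{})
	if !ok {
		return
	}
	data, ok := secret["data"].(map[string]interface{})
	if !ok {
		data = map[string]interface{}{}
		secret["data"] = data
	}
	for k, v := range stringData {
		if s, ok := v.(string); ok {
			data[k] = base64.StdEncoding.EncodeToString([]byte(s))
		}
	}
	delete(secret, "stringData")
}

func main() {
	secret := map[string]interface{}{
		"stringData": map[string]interface{}{"password": "hunter2"},
	}
	normalizeStringData(secret)
	fmt.Println(secret["data"].(map[string]interface{})["password"])
}
```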
* chore: add CODEOWNERS and EMERITUS.md
Setting up a CODEOWNERS file so people are automatically notified about new PRs. We can
eventually set up a ruleset that requires at least one review before merging.
I based the current list in CODEOWNERS on people who have recently merged PRs. I'm wondering
if reviewers and approvers should be a team/group instead of a list of folks.
Set up EMERITUS.md that contains the list from OWNERS. Feedback will be incorporated into
this PR.
Signed-off-by: jmeridth <jmeridth@gmail.com>
* chore: match this repo's CODEOWNERS to argoproj/argo-cd CODEOWNERS
Signed-off-by: jmeridth <jmeridth@gmail.com>
---------
Signed-off-by: jmeridth <jmeridth@gmail.com>
* fix: Server side diff now works correctly with some fields removal
Helps with https://github.com/argoproj/argo-cd/issues/20792
Removed and modified sets may only contain the fields that changed, not including key fields like "name". This can cause merge to fail, since it expects those fields to be present if they are present in the predicted live.
Fortunately, we can inspect the set and derive the key fields necessary. Then they can be added to the set and used during a merge.
Also, have a new test which fails before the fix, but passes now.
Failure of the new test before the fix
```
Error: Received unexpected error:
error removing non config mutations for resource Deployment/nginx-deployment: error reverting webhook removed fields in predicted live resource: .spec.template.spec.containers: element 0: associative list with keys has an element that omits key field "name" (and doesn't have default value)
Test: TestServerSideDiff/will_test_removing_some_field_with_undoing_changes_done_by_webhook
```
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
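The key-field derivation above can be illustrated with a simplified sketch. This is not the structured-merge-diff implementation; it uses plain maps, an assumed function name, and positional correspondence between set and live elements as a deliberate simplification of matching by key.

```go
package main

import "fmt"

// ensureKeyFields copies the associative-list key field (e.g. "name")
// from the live elements into set elements that omit it, so a later
// merge can match list elements by key instead of failing.
func ensureKeyFields(set, live []map[string]interface{}, keyField string) {
	for i := range set {
		if i >= len(live) {
			break
		}
		if _, ok := set[i][keyField]; !ok {
			set[i][keyField] = live[i][keyField]
		}
	}
}

func main() {
	// A removal set that only records the changed field, omitting "name".
	removed := []map[string]interface{}{{"image": "nginx:1.25"}}
	live := []map[string]interface{}{{"name": "nginx", "image": "nginx:1.27"}}
	ensureKeyFields(removed, live, "name")
	fmt.Println(removed[0]["name"])
}
```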
* Use new version of structured merge diff with a new option
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
* Add DCO
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
* Try to fix sonar exclusions config
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
---------
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
- [x] fix warnings about case of `as` to `AS` in Dockerfile
- `FromAsCasing: 'as' and 'FROM' keywords' casing do not match (line 1)`
- [x] shorten go version in go.mod
- [x] update Dockerfile Go version from 1.17 to 1.22 to match go.mod
- [x] upgrade alpine/git image version to latest; the current one was 4 years old
  - from alpine/git:v2.24.3 (4 years old) to alpine/git:v2.45.2
- [x] fix warning with linting
- `WARN [config_reader] The configuration option 'run.skip-files' is deprecated, please use 'issues.exclude-files'`
- [x] add .tool-versions (asdf) to .gitignore
Signed-off-by: jmeridth <jmeridth@gmail.com>
* fix: Ability to disable Server Side Apply on individual resource level
Signed-off-by: pashakostohrys <pavel@codefresh.io>
* fix: Ability to disable Server Side Apply on individual resource level
Signed-off-by: pashakostohrys <pavel@codefresh.io>
---------
Signed-off-by: pashakostohrys <pavel@codefresh.io>
* chore: More optimal IterateHierarchyV2 and iterateChildrenV2 [#600]
Closes #600
The existing (effectively v1) implementations are suboptimal since they don't construct a graph before the iteration. They search for children by looking at all namespace resources and checking `isParentOf`, which can give `O(tree_size * namespace_resources_count)` time complexity. The v2 algorithms construct the graph and have `O(namespace_resources_count)` time complexity. See more details in the linked issues.
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
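The complexity difference can be sketched as follows. This is an illustration, not the actual implementation: nodes are simplified structs, and cycle/visited handling is omitted. The point is that building a children-by-parent-UID index once is O(n), after which each parent's children are found in O(1) instead of rescanning all namespace resources.

```go
package main

import "fmt"

type node struct {
	uid        string
	parentUIDs []string
}

// buildChildrenIndex indexes nodes by parent UID once (O(n)), so the
// iteration no longer scans every namespace resource per parent
// (O(tree_size * namespace_resources_count) in the v1 approach).
func buildChildrenIndex(nodes []node) map[string][]node {
	children := make(map[string][]node)
	for _, n := range nodes {
		for _, p := range n.parentUIDs {
			children[p] = append(children[p], n)
		}
	}
	return children
}

// iterate walks the hierarchy depth-first using the prebuilt index.
func iterate(root string, children map[string][]node, visit func(node)) {
	for _, c := range children[root] {
		visit(c)
		iterate(c.uid, children, visit)
	}
}

func main() {
	nodes := []node{
		{uid: "rs1", parentUIDs: []string{"deploy1"}},
		{uid: "pod1", parentUIDs: []string{"rs1"}},
	}
	idx := buildChildrenIndex(nodes)
	iterate("deploy1", idx, func(n node) { fmt.Println(n.uid) })
}
```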
* improvements to graph building
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* use old name
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* chore: More optimal IterateHierarchyV2 and iterateChildrenV2 [#600]
Closes #600
The existing (effectively v1) implementations are suboptimal since they don't construct a graph before the iteration. They search for children by looking at all namespace resources and checking `isParentOf`, which can give `O(tree_size * namespace_resources_count)` time complexity. The v2 algorithms construct the graph and have `O(namespace_resources_count)` time complexity. See more details in the linked issues.
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
* finish merge
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* chore: More optimal IterateHierarchyV2 and iterateChildrenV2 [#600]
Closes #600
The existing (effectively v1) implementations are suboptimal since they don't construct a graph before the iteration. They search for children by looking at all namespace resources and checking `isParentOf`, which can give `O(tree_size * namespace_resources_count)` time complexity. The v2 algorithms construct the graph and have `O(namespace_resources_count)` time complexity. See more details in the linked issues.
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
* discard unneeded copies of child resources as we go
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* remove unnecessary comment
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* make childrenByUID sparse
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* eliminate duplicate map
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* fix comment
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* add useful comment back
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* use nsNodes instead of dupe map
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* remove unused struct
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* skip invalid APIVersion
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
---------
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* sync.Reconcile: guard against incomplete discovery
When Reconcile performs its logic to compare the desired state (target
objects) against the actual state (live objects), it looks up each live
object based on a key comprised of data from the target object: API
group, API kind, namespace, and name. While group, kind, and name will
always be accurate, there is a chance that the value for namespace is
not. If a cluster-scoped target object has a namespace (because it
incorrectly has a namespace from its source) or the namespace parameter
passed into the Reconcile method has a non-empty value (indicating a
default value to use on namespace-scoped objects that don't have it set
in the source), AND the resInfo ResourceInfoProvider has incomplete or
missing API discovery data, the call to IsNamespacedOrUnknown will
return true when the information is unknown. This leads to the key being
incorrect - it will have a value for namespace when it shouldn't. As a
result, indexing into liveObjByKey will fail. This failure results in
the reconciliation containing incorrect data: there will be a nil entry
appended to targetObjs when there shouldn't be.
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
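The guard described above can be sketched like this. It is a simplified illustration (struct and function names are assumptions, not the repository's types): when a matching live object exists, its namespace is authoritative, because it reflects how the cluster actually stored the resource and is immune to incomplete discovery data.

```go
package main

import "fmt"

type resourceKey struct {
	group, kind, namespace, name string
}

type obj struct {
	group, kind, namespace, name string
}

// keyFor builds the lookup key for a target object. When a live object
// is available, prefer its namespace: a cluster-scoped resource stored
// in the cluster has no namespace, even if the target manifest (or a
// default-namespace parameter) incorrectly supplied one.
func keyFor(target obj, live *obj) resourceKey {
	ns := target.namespace
	if live != nil {
		ns = live.namespace
	}
	return resourceKey{target.group, target.kind, ns, target.name}
}

func main() {
	// Target carries a bogus namespace from its source manifest.
	target := obj{kind: "ClusterRole", namespace: "default", name: "admin"}
	live := obj{kind: "ClusterRole", namespace: "", name: "admin"}
	fmt.Println(keyFor(target, &live).namespace == "")
}
```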
* Address code review comments
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
---------
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
* fix: deduplicate OpenAPI definitions for GVKParser
* do the thing that was the whole point
* more logs
* don't uniquify models
* schema for both
* more logs
* fix logic
* better tainted gvk handling, better docs, update mocks
* add a test
* improvements from comments
---------
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
* Prune resources in reverse of sync wave order
Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
* Use waveOverride var instead of directly patching live obj
Directly patching live objs results in incorrect wave ordering,
as the new wave value from the live obj is used to perform reordering during the next sync
Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
---------
Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
* Revert "feat: retry with client side dry run if server one was failed (#548)"
This reverts commit c0c2dd1f6f.
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Revert "fix(server): use server side dry run in case if it is server side apply (#546)"
This reverts commit 4a5648ee41.
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Fixed the logic to disable server side apply if it is a dry run
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Added more values in the log message for better debugging
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Fixed compilation error
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Written an inline fn to get string value of dry-run strategy
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
* Added comment as requested with reference to the issue number
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
---------
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
Co-authored-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>
* feat: retry with client side dry run if server one was failed
Signed-off-by: pashakostohrys <pavel@codefresh.io>
* feat: retry with client side dry run if server one was failed
Signed-off-by: pashakostohrys <pavel@codefresh.io>
* feat: retry with client side dry run if server one was failed
Signed-off-by: pashakostohrys <pavel@codefresh.io>
---------
Signed-off-by: pashakostohrys <pavel@codefresh.io>
* fix: use server side dry run in case if it is server side apply
Signed-off-by: pashakostohrys <pavel@codefresh.io>
* fix: use server side dry run in case if it is server side apply
Signed-off-by: pashakostohrys <pavel@codefresh.io>
---------
Signed-off-by: pashakostohrys <pavel@codefresh.io>
* fix: avoid acquiring lock on mutex and semaphore at the same time to prevent deadlock
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
* apply reviewer notes
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
---------
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
* Revert "Revert "feat: Ability to create custom labels for namespaces created … (#455)"
This reverts commit ce2fb703a6.
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
* feat: enable namespace to be updated
Rename `WithNamespaceCreation` to `WithNamespaceModifier`, since this
method is also used for modifying existing namespaces. This method
takes a single argument for the actual updating, and unless this method
gets invoked by its caller no updating will take place (fulfilling what
the `createNamespace` argument used to do).
Within `autoCreateNamespace`, everywhere where we previously added tasks
we'll now need to check whether the namespace should be created (or
modified), which is now delegated to the `appendNsTask` and
`appendFailedNsTask` methods.
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
* fix: Only consider resources which support the appropriate verb for any given operation
Signed-off-by: jannfis <jann@mistrust.net>
* Fix unit tests
Signed-off-by: jannfis <jann@mistrust.net>
* Return MethodNotSupported and add some tests
Signed-off-by: jannfis <jann@mistrust.net>
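The verb filtering above can be sketched as follows. This is an illustrative simplification with an assumed struct: in the real API, discovery returns `metav1.APIResource` entries whose `Verbs` list advertises what each resource supports, and operations should skip unsupported resources rather than fail at request time.

```go
package main

import "fmt"

type apiResource struct {
	kind  string
	verbs []string
}

// supportsVerb reports whether an API resource advertises a verb in
// discovery. Callers can skip resources that don't support the verb
// needed for an operation (e.g. "watch" or "delete") and return
// MethodNotSupported instead of issuing a doomed request.
func supportsVerb(r apiResource, verb string) bool {
	for _, v := range r.verbs {
		if v == verb {
			return true
		}
	}
	return false
}

func main() {
	r := apiResource{kind: "SelfSubjectAccessReview", verbs: []string{"create"}}
	fmt.Println(supportsVerb(r, "watch"))
}
```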
* feat: support exiting early from IterateHierarchy method
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
* reviewer notes: comment action callback return value; add missing return value check
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
* feat: Support for retries when building up cluster cache
Signed-off-by: jannfis <jann@mistrust.net>
* Oops.
Signed-off-by: jannfis <jann@mistrust.net>
* RetryLimit must be at least 1
Signed-off-by: jannfis <jann@mistrust.net>
* RetryLimit must be at least 1
Signed-off-by: jannfis <jann@mistrust.net>
This commit bumps the k8s.io/kubernetes dependencies and all other
kubernetes deps to 1.23.1. There have been two breakages with the
upgrade:
- The `apply.NewApplyOptions` in the kubectl libraries [has been
removed / refactored](8165f83007 (diff-2672399cb3d708d5fed859b0a74387522408ab868b1c2457587b39cabe2b75ce))
and split up into `ApplyOptions` and `ApplyFlags`. This commit
populates the `ApplyOptions` directly, as going through
the `ApplyFlags` and then calling `ToOptions()` is almost impossible
due to its arguments.
- The `github.com/go-logr/logr` package has had some breaking changes
between the previous alpha release `v.0.4.0` and its stable release
`v1.0.0`. The generated mock has been updated to use `logr.LogSink`
instead (`logr.Logger` is not an interface anymore), and the test code
has been updated accordingly.
- Go has been updated to 1.17.6, as `sigs.k8s.io/json` depends on it.
Signed-off-by: Ramon Rüttimann <ramon@nine.ch>
apply reviewer notes: bump golang version; add missing ApplyOptions parameters
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
FakeDynamicClient returns typed lists (not unstructured lists) since Kubernetes 1.20. The type cast now handles that.
Signed-off-by: Mikhail Mazurskiy <mmazurskiy@gitlab.com>
If an HPA has a minimum or maximum replica count set and the metrics indicate a need to scale beyond those bounds, this is still an expected and healthy state.
Signed-off-by: Mike Bryant <mikebryant@bulb.co.uk>
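A minimal sketch of that health rule (function name and signature are illustrative, not the actual health check):

```go
package main

import "fmt"

// hpaHealthy treats an HPA pinned at its min or max replica bound as
// healthy: the metrics may call for scaling beyond the bound, but the
// bound itself is user-configured and therefore an expected state.
func hpaHealthy(current, minReplicas, maxReplicas, desired int32) bool {
	switch {
	case current == desired:
		return true
	case current == maxReplicas && desired > maxReplicas:
		return true // capped at max: expected
	case current == minReplicas && desired < minReplicas:
		return true // floored at min: expected
	default:
		return false
	}
}

func main() {
	// At max replicas while metrics want more: still healthy.
	fmt.Println(hpaHealthy(10, 1, 10, 15))
}
```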
* fix: Dry run stuck on pre sync hook
Signed-off-by: May Zhang <may_zhang@intuit.com>
* fix: Dry run stuck on pre sync hook
Signed-off-by: May Zhang <may_zhang@intuit.com>
* fix: issue 5080 to allow resource pruning at the final wave of sync phase
Signed-off-by: May Zhang <may_zhang@intuit.com>
* fix: Setting pruneLast tasks to lastWave + 1
Signed-off-by: May Zhang <may_zhang@intuit.com>
* fix: HPA health check is making incorrect assumption on annotations
Signed-off-by: May Zhang <may_zhang@intuit.com>
* fix: added handle of v1, v2beta1 and v2beta2
Signed-off-by: May Zhang <may_zhang@intuit.com>
* fix: clean up test data file
Signed-off-by: May Zhang <may_zhang@intuit.com>
* fix: Hook deletion policy HookSucceeded should run after the whole hook succeeds, not only after the resource succeeds
* fix: handle HookFailed.
* fix: fixing lint error.
* fix: Support transition from a git managed namespace to auto-create namespace
* fix: Support transition from a git managed namespace to auto-create namespace
* fix: use sync task to remove label
* fix: withNamespaceCreation
* fix: fix failed test
* fix: remove obsolete comment.
* fix: health status is set to healthy for statefulset with updateStrategy: OnDelete
* fix: updated message
* fix: added test
* fix: health status for daemon set with OnDelete updateStrategy
* fix: leverage RetryWatcher to watch cluster events and introduce periodic K8S API state resynchronization
* Apply reviewer notes
* enable race detection in tests
* chore: bump kubernetes deps
* refactor: remove dependency on pkg/errors
* refactor: remove indirect dependency version
* refactor: remove dependency on github.com/google/shlex
as it was only used in tests
* feat: use Kubernetes v1.18.6 libraries
* support switching between server and client side dry run mode
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
1. Instead of global semaphore use a
per-cache semaphore. Removes thread-safety
issues, allows fine control over limiting
if multiple caches are used in a program
2. Use the semaphore to guard whole
sections that use expensive list
operations, not just the list API call.
This ensures that memory usage is capped,
not just the number of concurrent list calls.
3. Allow controlling the list pager.
Reduce default prefetch limit to 1 page
from 10.
Co-authored-by: Alexander Matyushentsev <Alexander_Matyushentsev@intuit.com>
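Points 1 and 2 above can be sketched with a stdlib-only semaphore. This is an illustration, not the actual cache code (the real implementation uses a weighted semaphore; the type and method names here are assumptions):

```go
package main

import "fmt"

// listSemaphore caps how many expensive list sections run at once for
// one cache instance. Making it per-cache (not a package-global) avoids
// shared state between caches and lets each cache be tuned separately.
type listSemaphore chan struct{}

func newListSemaphore(n int) listSemaphore { return make(chan struct{}, n) }

// withLock guards the whole list-and-process section, not just the
// list API call, so the peak memory held by decoded pages is also
// bounded by the semaphore, not only the number of in-flight requests.
func (s listSemaphore) withLock(fn func()) {
	s <- struct{}{}
	defer func() { <-s }()
	fn()
}

func main() {
	sem := newListSemaphore(2)
	done := 0
	for i := 0; i < 3; i++ {
		sem.withLock(func() { done++ }) // list + process under the semaphore
	}
	fmt.Println(done)
}
```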
* feat: added prehook for creating ns
* feat: added prehook for creating ns
Initial Draft
* feat: added prehook for creating ns
* feat: added prehook for creating ns
* feat: added prehook for creating ns checking the health of namespace created.
* feat: added prehook for creating ns. checking if ns existed already.
* feat: getSyncTasks returns the same list each time. added checking if resources contains ns already.
* feat: move const variable to right location; added additional checking if namespace is included in resources.
* feat: fixing compile issue.
* feat: moved code closer together.
* feat: adding test cases.
* feat: auto create only for sc.namespace
* feat: fix failed test
* feat: update liveObj
* feat: added error handling
* feat: added error handling
* feat: move into its own function
* feat: fixing compile error
* fix: auto namespace creation
* fix: simplify sort method
* fix: remove sorting of namespace
* refactor: remove global variable
handlerKey does not need to be global,
field is just fine.
Atomic access is no longer needed because field
is protected by mutex.
* fix: use the correct mutex
* refactor: pre-allocate slice
* feat: added prehook for creating ns
* feat: added prehook for creating ns
Initial Draft
* feat: added prehook for creating ns
* feat: added prehook for creating ns
* feat: added prehook for creating ns checking the health of namespace created.
* feat: added prehook for creating ns. checking if ns existed already.
* feat: getSyncTasks returns the same list each time. added checking if resources contains ns already.
* feat: move const variable to right location; added additional checking if namespace is included in resources.
* feat: fixing compile issue.
* feat: moved code closer together.
* feat: adding test cases.
* feat: auto create only for sc.namespace
* feat: fix failed test
* feat: update liveObj
* feat: added error handling
* feat: added error handling
* feat: move into its own function
* feat: fixing compile error
* add methods in errors package to quit with exit codes
Signed-off-by: darshanime <deathbullet@gmail.com>
* use CheckErrorWithCode to exit with appropriate error code
Signed-off-by: darshanime <deathbullet@gmail.com>
For the announcement we used the "argo + flux" logo and while
the project does not have its own logo yet, I think it's only
fitting to add it, to add some colour to our GitHub and make it
instantly clear which two communities came together here.
relates to: #17
--body "🍒 Cherry-pick PR created for ${{ inputs.version_number }}: #$(gh pr list --head ${{ steps.cherry-pick.outputs.branch_name }} --json number --jq '.[0].number')"
logCtx.Infof("Application %v is not allowed to update yet, %v/%v Applications already updating in step %v in AppSet %v", appStatus.Application, updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, getAppStep(appStatus.Application, appStepMap), applicationSet.Name)
statusLogCtx.Infof("Application is not allowed to update yet, %v/%v Applications already updating in step %v", updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, getAppStep(appStatus.Application, appStepMap))
// ensure that Applications generated with RollingSync do not have an automated sync policy, since the AppSet controller will handle triggering the sync operation instead
Long: "Argo CD Commit Server is an internal service which commits and pushes hydrated manifests to git. This command runs Commit Server in the foreground.",
command.Flags().BoolVar(&includeHiddenDirectories, "include-hidden-directories", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_INCLUDE_HIDDEN_DIRECTORIES", false), "Include hidden directories from Git")
command.Flags().BoolVar(&cmpUseManifestGeneratePaths, "plugin-use-manifest-generate-paths", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_PLUGIN_USE_MANIFEST_GENERATE_PATHS", false), "Pass the resources described in argocd.argoproj.io/manifest-generate-paths value to the cmpserver to generate the application manifests.")
command.Flags().StringSliceVar(&ociMediaTypes, "oci-layer-media-types", env.StringsFromEnv("ARGOCD_REPO_SERVER_OCI_LAYER_MEDIA_TYPES", []string{"application/vnd.oci.image.layer.v1.tar", "application/vnd.oci.image.layer.v1.tar+gzip", "application/vnd.cncf.helm.chart.content.v1.tar+gzip"}, ","), "Comma separated list of allowed media types for OCI media types. This only accounts for media types within layers.")
command.Flags().BoolVar(&enableBuiltinGitConfig, "enable-builtin-git-config", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_ENABLE_BUILTIN_GIT_CONFIG", true), "Enable builtin git configuration options that are required for correct argocd-repo-server operation.")
command.Flags().StringVar(&opts.syncPolicy, "sync-policy", "", "Set the sync policy (one of: manual (aliases of manual: none), automated (aliases of automated: auto, automatic))")
command.Flags().StringArrayVar(&opts.syncOptions, "sync-option", []string{}, "Add or remove a sync option, e.g. add `Prune=false`. Remove using `!` prefix, e.g. `!Prune=false`")
command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning when sync is automated")
command.Flags().BoolVar(&opts.selfHeal, "self-heal", false, "Set self healing when sync is automated")
command.Flags().BoolVar(&opts.allowEmpty, "allow-empty", false, "Set allow zero live resources when sync is automated")
command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning for automated sync policy")
command.Flags().BoolVar(&opts.selfHeal, "self-heal", false, "Set self healing for automated sync policy")
command.Flags().BoolVar(&opts.allowEmpty, "allow-empty", false, "Set allow zero live resources for automated sync policy")
> [!NOTE]
> GitHub docker registry [requires](https://github.community/t5/GitHub-Actions/docker-pull-from-public-GitHub-Package-Registry-fail-with-quot/m-p/32888#M1294) authentication to read
> even publicly available packages. Follow the steps from Kubernetes [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry)
> to configure image pull secret if you want to use `ghcr.io/argoproj/argo-cd/argocd` image.
The image is automatically deployed to the dev Argo CD instance: [https://cd.apps.argoproj.io/](https://cd.apps.argoproj.io/)
This configuration example will be used as the basis for the next steps.
> [!NOTE]
> The Procfile for a component may change with time. Please go through the Procfile and make sure you use the latest configuration for debugging.
### Configure component env variables
The component that you will run in your IDE for debugging (`api-server` in our case) will need env variables. Copy the env variables from `Procfile`, located in the `argo-cd` root folder of your development branch. The env variables are located before the `$COMMAND` section in the `sh -c` section of the component run command.
Example for an `api-server` launch configuration snippet, based on our above example:
</component>
```
> [!NOTE]
> As an alternative to importing the above file to Goland, you can create a Run/Debug Configuration using the official [Goland docs](https://www.jetbrains.com/help/go/go-build.html) and just copy the `parameters`, `directory` and `PATH` sections from the example above (specifying `Run kind` as `Directory` in the Run/Debug Configurations wizard)
## Run Argo CD without the debugged component
Next, we need to run all Argo CD components, except for the debugged component (because we will run this component separately in the IDE).
To debug the `api-server`, run:
Finally, run the component you wish to debug from your IDE and make sure it does not have any errors.
## Important
When running Argo CD components separately, ensure components aren't creating conflicts - each component needs to be up exactly once, be it running locally with the local toolchain or running from your IDE. Otherwise you may get errors about ports not being available, or you may even end up debugging a process that does not contain your code changes.
After your GitOps Engine PR has been merged, ArgoCD needs to be updated to pull in the version of the GitOps engine that contains your change. Here are the steps:
- Retrieve the SHA hash for your commit. You will use this in the next step.
- From the `argo-cd` folder, run the following command
`go get github.com/argoproj/gitops-engine@<git-commit-sha>`
If you get an error message `invalid version: unknown revision`, then you got the wrong SHA hash.
- Run:
`go mod tidy`
- The following files are changed:
- `go.mod`
- `go.sum`
- Create an ArgoCD PR with a `refactor:` type in its title for the two file changes.
### Tips:
- See https://github.com/argoproj/argo-cd/pull/4434 as an example
- The PR might require additional, dependent changes in ArgoCD that are directly impacted by the changes made in the engine.
When you have developed and possibly manually tested the code you want to contribute, you should ensure that everything builds correctly. Commit your changes locally and perform the following steps, for each step the commands for both local and virtualized toolchain are listed.
### Docker privileges for virtualized toolchain users
[These instructions](toolchain-guide.md#docker-privileges) are relevant for most of the steps below
### Using Podman for virtualized toolchain users
As build dependencies change over time, you have to synchronize your development
* `make dep-ui` or `make dep-ui-local`
Argo CD recently migrated to Go modules. Usually, dependencies will be downloaded at build time, but the Makefile provides two targets to download and vendor all dependencies:
* `make mod-download` or `make mod-download-local` will download all required Go modules and
* `make mod-vendor` or `make mod-vendor-local` will vendor those dependencies into the Argo CD source tree
### Generate API glue code and other assets
Argo CD relies on Google's [Protocol Buffers](https://developers.google.com/protocol-buffers) for its API, and this makes heavy use of auto-generated glue code and stubs. Whenever you touch parts of the API code, you must re-generate the auto-generated code.
* Check if something has changed by running `git status` or `git diff`
* Commit any possible changes to your local Git branch, an appropriate commit message would be `Changes from codegen`, for example.
> [!NOTE]
> There are a few non-obvious assets that are auto-generated. You should not change the autogenerated assets, as they will be overwritten by a subsequent run of `make codegen`. Instead, change their source files. Prominent examples of non-obvious auto-generated code are `swagger.json` or the installation manifest YAMLs.
Sure thing! You can either open an Enhancement Proposal in our GitHub issue tracker or you can [join us on Slack](https://argoproj.github.io/community/join-slack) in channel #argo-contributors to discuss your ideas and get guidance for submitting a PR.
> [!NOTE]
> Regular [contributor meetings](https://argo-cd.readthedocs.io/en/latest/developer-guide/code-contributions/#regular-contributor-meeting) are held weekly. Please follow the link for more details.
> [!WARNING]
> **As an Argo CD user, you probably don't want to be reading this section of the docs.**
>
> This part of the manual is aimed at helping people contribute to Argo CD, documentation, or to develop third-party applications that interact with Argo CD, e.g.
>
> * A chat bot
> * A Slack integration
## Preface
#### Understand the [Code Contribution Guide](code-contributions.md)
## Contributing to Argo CD Notifications documentation
This guide will help you get started quickly with contributing documentation changes, performing the minimum setup you'll need.
The notifications docs are located in [notifications-engine](https://github.com/argoproj/notifications-engine) Git repository and require 2 pull requests: one for the `notifications-engine` repo and one for the `argo-cd` repo.
For backend and frontend contributions that require a full building-testing-running-locally cycle, please refer to [Contributing to Argo CD backend and frontend](index.md#contributing-to-argo-cd-backend-and-frontend).
### Fork and clone Argo CD repository
Need help? Start with the [Contributors FAQ](faq/).
Once the script is executed successfully, a GitHub workflow will start
execution. You can follow its progress under the [Actions](https://github.com/argoproj/argo-cd/actions/workflows/release.yaml) tab, the name of the action is `Publish ArgoCD Release`.
> [!WARNING]
> You cannot perform more than one release on the same release branch at the same time.
(depending on your toolchain) to build a new set of installation manifests which include your specific image reference.
> [!NOTE]
> Do not commit these manifests to your repository. If you want to revert the changes, the easiest way is to unset `IMAGE_NAMESPACE` and `IMAGE_TAG` from your environment and run `make manifests` again. This will re-create the default manifests.
> [!NOTE]
> **Before you start**
>
> The Argo CD project continuously grows, both in terms of features and community size. It gets adopted by more and more organizations which entrust Argo CD to handle their critical production workloads. Thus, we need to take great care with any changes that affect compatibility, performance, scalability, stability and security of Argo CD. For this reason, every new feature or larger enhancement must be properly designed and discussed before it gets accepted into the code base.
>
> We do welcome and encourage everyone to participate in the Argo CD project, but please understand that we can't accept each and every contribution from the community, for various reasons. If you want to submit code for a great new feature or enhancement, we kindly ask you to take a look at the
> [code contribution guide](code-contributions.md) before you start to write code or submit a PR.
If you want to submit a PR, please read this document carefully, as it contains important information guiding you through our PR quality gates.
When you submit a PR against Argo CD's GitHub repository, a couple of CI checks will be run automatically to ensure your changes will build fine and meet certain quality standards. Your contribution needs to pass those checks in order to be merged into the repository.
> [!NOTE]
> Please make sure that you always create PRs from a branch that is up-to-date with the latest changes from Argo CD's master branch. Depending on how long it takes for the maintainers to review and merge your PR, it might be necessary to pull in latest changes into your branch again.
Please understand that we, as an Open Source project, have limited capacity for reviewing and merging PRs to Argo CD. We will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.
The [/test/e2e/testdata](https://github.com/argoproj/argo-cd/tree/master/test/e2e/testdata) directory contains various Argo CD applications. Before test execution, the directory is copied into `/tmp/argo-e2e***` temp directory and used in tests as a
Git repository via file url: `file:///tmp/argo-e2e***`.
> [!NOTE]
> **Rancher Desktop Volume Sharing**
>
> The e2e git server runs in a container. If you are using Rancher Desktop, you will need to enable volume sharing for
> the e2e container to access the testdata directory. To do this, add the following to
> `~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml` and restart Rancher Desktop:
>
> ```yaml
> mounts:
> - location: /private/tmp
>   writable: true
> ```
#### Install required dependencies and build-tools
> [!NOTE]
> The installation instructions are valid for Linux hosts only. Mac instructions will follow shortly.
For installing the tools required to build and test Argo CD on your local system, we provide convenient installer scripts. By default, they will install binaries to `/usr/local/bin` on your system, which might require `root` privileges.
To apply the new password hash, use the following command (replacing the hash with your own):
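The command itself was lost in the diff above; a minimal sketch of the usual approach, assuming the standard `argocd-secret` layout and using `<your-bcrypt-hash>` as a placeholder for your generated hash:

```shell
# Store the new bcrypt hash in the argocd-secret Secret; Argo CD reads the
# admin password from the admin.password key and the modification time from
# admin.passwordMtime (placeholder hash below -- substitute your own).
kubectl -n argocd patch secret argocd-secret \
  -p '{"stringData": {
    "admin.password": "<your-bcrypt-hash>",
    "admin.passwordMtime": "'"$(date +%FT%T%Z)"'"
  }}'
```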
Argo CD automatically sets the `app.kubernetes.io/instance` label and uses it to determine which resources form the app.
If the tool does this too, this causes confusion. You can change this label by setting
the `application.instanceLabelKey` value in the `argocd-cm`. We recommend that you use `argocd.argoproj.io/instance`.
> [!NOTE]
> When you make this change, your applications will become out of sync and will need re-syncing.
See [#1482](https://github.com/argoproj/argo-cd/issues/1482).
## How often does Argo CD check for changes to my Git or Helm repository?
By default, Argo CD checks (polls) Git repositories every 3 minutes to detect changes.
This default interval is calculated as 120 seconds + up to 60 seconds of jitter (a small random delay to avoid simultaneous polling). You can customize this behavior by updating the following keys in the `argocd-cm` ConfigMap:
```yaml
timeout.reconciliation: 120s
timeout.reconciliation.jitter: 60s
```
During each polling cycle, Argo CD checks whether your tracked repositories have changed. If changes are found:
- Applications with auto-sync enabled will automatically sync to match the new state.
- Applications without auto-sync will simply be marked as OutOfSync in the UI.
Setting `timeout.reconciliation` to `0` completely disables automatic polling. In that case, Argo CD will only detect changes when triggered through webhooks or a manual refresh. When setting it to `0`, it may also be required to configure `ARGOCD_DEFAULT_CACHE_EXPIRATION`.
However, setting this value to `0` is not recommended, for reasons such as webhook delivery failing due to network issues or misconfiguration. If you are using webhooks and are interested in improving Argo CD performance and resource consumption, you can instead set `timeout.reconciliation` to a lower-frequency interval to reduce explicit polling, for example `15m`, `1h`, or another interval appropriate for your case.
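Putting the two keys above in context, a sketch of the relevant `argocd-cm` fragment (the interval values here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # poll Git every 5 minutes, plus up to 1 minute of random jitter
  timeout.reconciliation: 5m
  timeout.reconciliation.jitter: 1m
```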
## Why is my Argo CD application `OutOfSync` when there are no actual changes to the resource limits (or other fields with unit values)?
If you're not running in a production system (e.g. you're testing Argo CD out), you can use the `--insecure` flag:
argocd ... --insecure
```
> [!WARNING]
> Do not use `--insecure` in production.
## I have configured Dex via `dex.config` in `argocd-cm`, it still says Dex is unconfigured. Why?
In this case, the duplicated keys have been **emphasized** to help you identify them.
The most common instance of this error is with `env:` fields for `containers`.
> [!NOTE]
> **Dynamic applications**
>
> It's possible that your application is being generated by a tool in which case the duplication might not be evident within the scope of a single file. If you have trouble debugging this problem, consider filing a ticket to the owner of the generator tool asking them to improve its validation and error reporting.
## How to rotate Redis secret?
* Delete `argocd-redis` secret in the namespace where Argo CD is installed.
> [!TIP]
> This guide assumes you have a grounding in the tools that Argo CD is based on. Please read [understanding the basics](understand_the_basics.md) to learn about these tools.
This will create a new namespace, `argocd`, where Argo CD services and application resources will live.
> [!WARNING]
> The installation manifests include `ClusterRoleBinding` resources that reference `argocd` namespace. If you are installing Argo CD into a different
> namespace then make sure to update the namespace reference.
> [!TIP]
> If you are not interested in UI, SSO, and multi-cluster features, then you can install only the [core](operator-manual/core.md#installing) Argo CD components.
This default installation will have a self-signed certificate and cannot be accessed without a bit of extra work.
Do one of:
* Configure the client OS to trust the self-signed certificate.
* Use the `--insecure` flag on all Argo CD CLI operations in this guide.
> [!NOTE]
> Default namespace for `kubectl` config must be set to `argocd`.
> This is only needed for the following commands, since the previous commands have `-n argocd` already:
Use `argocd login --core` to [configure](./user-guide/commands/argocd_login.md) CLI access and skip steps 3-5.
> [!NOTE]
> This default installation for Redis is using password authentication. The Redis password is stored in Kubernetes secret `argocd-redis` with key `auth` in the namespace where Argo CD is installed.
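If you need to inspect that password (for example, to connect a Redis client), a sketch assuming the default secret layout:

```shell
# Read the auth key from the argocd-redis secret and decode it
kubectl -n argocd get secret argocd-redis \
  -o jsonpath='{.data.auth}' | base64 -d
```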
## 2. Download Argo CD CLI
You can retrieve this password using the `argocd` CLI:
argocd admin initial-password -n argocd
```
> [!WARNING]
> You should delete the `argocd-initial-admin-secret` from the Argo CD
> namespace once you have changed the password. The secret serves no other
> purpose than to store the initially generated password in clear text and can
> safely be deleted at any time. It will be re-created on demand by Argo CD
> if a new admin password must be re-generated.
Using the username `admin` and the password from above, login to Argo CD's IP or hostname:
argocd login <ARGOCD_SERVER>
```
> [!NOTE]
> The CLI environment must be able to communicate with the Argo CD API server. If it isn't directly accessible as described above in step 3, you can tell the CLI to access it using port forwarding through one of these mechanisms: 1) add `--port-forward-namespace argocd` flag to every CLI command; or 2) set `ARGOCD_OPTS` environment variable: `export ARGOCD_OPTS='--port-forward-namespace argocd'`.
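As a sketch, the port-forwarding options described above look like this (service and port names per the default installation):

```shell
# Option A: forward the API server port manually, then log in against it
kubectl port-forward svc/argocd-server -n argocd 8080:443
argocd login localhost:8080

# Option B: let every CLI call set up its own port-forward
export ARGOCD_OPTS='--port-forward-namespace argocd'
argocd app list
```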
Change the password using the command:
The above command installs a ServiceAccount (`argocd-manager`) into the kube-system namespace of
that kubectl context, and binds the service account to an admin-level ClusterRole. Argo CD uses this
service account token to perform its management tasks (i.e. deploy/monitoring).
> [!NOTE]
> The rules of the `argocd-manager-role` role can be modified such that it only has `create`, `update`, `patch`, `delete` privileges to a limited set of namespaces, groups, kinds.
> However `get`, `list`, `watch` privileges are required at the cluster-scope for Argo CD to function.
## 6. Create An Application From A Git Repository
An example repository containing a guestbook application is available at
[https://github.com/argoproj/argocd-example-apps.git](https://github.com/argoproj/argocd-example-apps.git) to demonstrate how Argo CD works.
> [!NOTE]
> The following example application may only be compatible with the AMD64 architecture. If you are running on a different architecture (such as ARM64 or ARMv7), you may encounter issues with dependencies or container images that are not built for your platform. Consider verifying the compatibility of the application or building architecture-specific images if necessary.
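Creating an Application from that repository can be sketched with the CLI (destination values assume deploying back to the local cluster's `default` namespace):

```shell
argocd app create guestbook \
  --repo https://github.com/argoproj/argocd-example-apps.git \
  --path guestbook \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default
```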
`argocd-notifications-controller-rbac-clusterrole.yaml` and `argocd-notifications-controller-rbac-clusterrolebinding.yaml` are used to support notifications controller to notify apps in all namespaces.
> [!NOTE]
> At some later point in time, we may make this cluster role part of the default installation manifests.
### Allowing additional namespaces in an AppProject
The `.spec.sourceNamespaces` field of the `AppProject` is a list that can contain an arbitrary number of namespaces, and each entry supports shell-style wildcards, so that you can allow namespaces with patterns like `team-one-*`.
> [!WARNING]
> Do not add user controlled namespaces in the `.spec.sourceNamespaces` field of any privileged AppProject like the `default` project. Always make sure that the AppProject follows the principle of granting least required privileges. Never grant access to the `argocd` namespace within the AppProject.
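A sketch of an unprivileged AppProject that allows Applications from team namespaces (the project name and pattern are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-one          # illustrative project name
  namespace: argocd
spec:
  # allow Applications to live in any namespace matching this pattern;
  # never add user-controlled namespaces to a privileged project
  sourceNamespaces:
  - team-one-*
```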
> [!NOTE]
> For backwards compatibility, Applications in the Argo CD control plane's namespace (`argocd`) are allowed to set their `.spec.project` field to reference any AppProject, regardless of the restrictions placed by the AppProject's `.spec.sourceNamespaces` field.
> [!NOTE]
> Currently it's not possible to have an ApplicationSet in one namespace and have the Application
> be generated in another. See [#11104](https://github.com/argoproj/argo-cd/issues/11104) for more info.
> [!WARNING]
> **Alpha Feature (Since 2.13.0)**
>
> This is an experimental, [alpha-quality](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#alpha)
> feature that allows you to control the service account used for the sync operation. The configured service account
> could have lesser privileges required for creating resources compared to the highly privileged access required for
> the control plane operations.
> [!WARNING]
> Please read this documentation carefully before you enable this feature. Misconfiguration could lead to potential security issues.
## Introduction
Some manual steps will need to be performed by the Argo CD administrator in order to enable this feature, as it is disabled by default.
> [!NOTE]
> This feature is considered alpha as of now. Some of the implementation details may change over the course of time until it is promoted to a stable status. We will be happy if early adopters use this feature and provide us with bug reports and feedback.
### What is Impersonation
Impersonation is a feature in Kubernetes, enabled in the `kubectl` CLI client, through which a user can act as another user via impersonation headers.
Impersonation requests first authenticate as the requesting user, then switch to the impersonated user info.
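Outside of Argo CD, the same mechanism can be exercised directly with `kubectl` (the namespace and service account names here are hypothetical):

```shell
# Authenticate as yourself, but perform the request as the given service account
kubectl get pods -n team-one \
  --as system:serviceaccount:team-one:deployer
```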
### Feature scope
Impersonation is currently only supported for the lifecycle of objects managed by an Application directly, which includes sync operations (creation, update, and pruning of resources) and deletion as part of Application finalizer logic. This *does not* include operations triggered via Argo CD's UI, which will still be executed with Argo CD's control-plane service account.
## Prerequisites
In a multi-team/multi-tenant environment, a team/tenant is typically granted access to a target namespace to self-manage their Kubernetes resources in a declarative way.
```yaml
data:
  application.sync.impersonation.enabled: "false"
```
> [!NOTE]
> This feature is disabled by default.
> [!NOTE]
> This feature can be enabled or disabled only at the system level; once enabled or disabled, it applies to all Applications managed by Argo CD.
Thus the lifecycle of the `ApplicationSet`, the `Application`, and the `Application`'s resources, are equivalent.
> [!NOTE]
> See also the [controlling resource modification](Controlling-Resource-Modification.md) page for more information about how to prevent deletion or modification of Application resources by the ApplicationSet controller.
It *is* still possible to delete an `ApplicationSet` resource, while preventing `Application`s (and their deployed resources) from also being deleted, using a non-cascading delete:
> [!WARNING]
> Even if using a non-cascaded delete, the `resources-finalizer.argocd.argoproj.io` is still specified on the `Application`. Thus, when the `Application` is deleted, all of its deployed resources will also be deleted. (The lifecycle of the Application, and its *child* objects, are still equivalent.)
>
> To prevent the deletion of the resources of the Application, such as Services, Deployments, etc, set `.syncPolicy.preserveResourcesOnDeletion` to true in the ApplicationSet. This syncPolicy parameter prevents the finalizer from being added to the Application.
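A sketch of where that flag lives in the ApplicationSet spec (the metadata here is illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook        # illustrative name
spec:
  syncPolicy:
    # keep deployed resources when generated Applications are deleted;
    # this prevents the resources finalizer from being added to them
    preserveResourcesOnDeletion: true
```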
> [!NOTE]
> The URL used in the `api` field of the `ApplicationSet` must match the URL declared by the Administrator, including the protocol.
> [!WARNING]
> The allow-list only applies to SCM providers for which the user may configure a custom `api`. Where an SCM or PR
> generator does not accept a custom API URL, the provider is implicitly allowed.
If you do not intend to allow users to use the SCM or PR generators, you can disable them entirely by setting the environment variable `ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_SCM_PROVIDERS` or the `argocd-cmd-params-cm` key `applicationsetcontroller.enable.scm.providers` to `false`.
Thus the ApplicationSet controller:
- Does not connect to clusters other than the one Argo CD is deployed to
- Does not interact with namespaces other than the one Argo CD is deployed within
> [!IMPORTANT]
> **Use the Argo CD namespace**
>
> All ApplicationSet resources and the ApplicationSet controller must be installed in the same namespace as Argo CD.
> ApplicationSet resources in a different namespace will be ignored.
It is Argo CD itself that is responsible for the actual deployment of the generated child `Application` resources, such as Deployments, Services, and ConfigMaps.
## Preserving changes made to an Application's annotations and labels
> [!NOTE]
> The same behavior can be achieved on a per-app basis using the [`ignoreApplicationDifferences`](#ignore-certain-changes-to-applications)
> feature described above. However, preserved fields may be configured globally, a feature that is not yet available
> for `ignoreApplicationDifferences`.
It is common practice in Kubernetes to store state in annotations; operators will often make use of this. To allow for this, it is possible to configure a list of annotations that the ApplicationSet should preserve when reconciling.
The ApplicationSet controller will leave this annotation and label as-is when reconciling.
By default, the Argo CD notifications and the Argo CD refresh type annotations are also preserved.
> [!NOTE]
> One can also set global preserved fields for the controller by passing a comma separated list of annotations and labels to
> `ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_ANNOTATIONS` and `ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_LABELS` respectively.
The ApplicationSet needs to be created in the Argo CD namespace.
The ClusterDecisionResource generator passes the 'name', 'server' and any other key/value in the duck-type resource's status list as parameters into the ApplicationSet template. In this example, the decision array contained an additional key `clusterName`, which is now available to the ApplicationSet template.
The Default Cluster list key is `clusters`.
> [!NOTE]
> **Clusters listed as `Status.Decisions` must be predefined in Argo CD**
>
> The cluster names listed in the `Status.Decisions` *must* be defined within Argo CD, in order to generate applications for these values. The ApplicationSet controller does not create clusters within Argo CD.
It automatically provides the following parameter values to the Application template:
- `metadata.labels.<key>` *(for each label in the Secret)*
- `metadata.annotations.<key>` *(for each annotation in the Secret)*
> [!NOTE]
> Use the `nameNormalized` parameter if your cluster name contains characters (such as underscores) that are not valid for Kubernetes resource names. This prevents rendering invalid Kubernetes resources with names like `my_cluster-app1`, and instead would convert them to `my-cluster-app1`.
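A sketch of using the parameter in a cluster-generator template (the Application naming is illustrative):

```yaml
spec:
  generators:
  - clusters: {}        # match every cluster registered in Argo CD
  template:
    metadata:
      # for a cluster named my_cluster this renders my-cluster-guestbook,
      # a valid Kubernetes resource name
      name: '{{nameNormalized}}-guestbook'
```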
Within [Argo CD cluster Secrets](../../declarative-setup/#clusters) are data fields describing the cluster:
In this example the `revision` value from the `generators.clusters` fields is passed into the template as `values.revision`, containing either `HEAD` or `stable` (based on which generator generated the set of parameters).
> [!NOTE]
> The `values.` prefix is always prepended to values provided via `generators.clusters.values` field. Ensure you include this prefix in the parameter name within the `template` when using it.
In `values` we can also interpolate the following parameter values (i.e. the same values as presented in the beginning of this page)
In case you are using several cluster generators, each with the flatList option, one Application is generated per cluster generator, as we can't simply merge values and templates that would potentially differ in each generator.
The Git generator contains two subtypes: the Git directory generator and the Git file generator.
> [!WARNING]
> Git generators are often used to make it easier for (non-admin) developers to create Applications.
> If the `project` field in your ApplicationSet is templated, developers may be able to create Applications under Projects with excessive permissions.
> For ApplicationSets with a templated `project` field, [the source of truth _must_ be controlled by admins](./Security.md#templated-project-field)
> - in the case of git generators, PRs must require admin approval.
> - Git generator does not support Signature Verification For ApplicationSets with a templated `project` field.
> - You must only use "non-scoped" repositories for ApplicationSets with a templated `project` field (see ["Repository Credentials for Applicationsets" below](#repository-credentials-for-applicationsets)).
## Git Generator: Directories
This example excludes the `exclude-helm-guestbook` directory from the list of directories scanned for this `ApplicationSet` resource.
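A hedged sketch of such a configuration (the repository URL and paths are assumptions for illustration):

```yaml
spec:
  generators:
  - git:
      repoURL: https://github.com/example/example-repo.git # assumed
      revision: HEAD
      directories:
      - path: apps/*
      - path: apps/exclude-helm-guestbook
        exclude: true # excluded even though it matches apps/*
```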
> [!NOTE]
> **Exclude rules have higher priority than include rules**
>
> If a directory matches at least one `exclude` pattern, it will be excluded. Or, said another way, *exclude rules take precedence over include rules.*
>
> As a corollary, which directories are included/excluded is not affected by the order of `path`s in the `directories` field list (because, as above, exclude rules always take precedence over include rules).
For example, with these directories:
namespace: '{{.values.cluster}}'
```
> [!NOTE]
> The `values.` prefix is always prepended to values provided via `generators.git.values` field. Ensure you include this prefix in the parameter name within the `template` when using it.
In `values` we can also interpolate all fields set by the git directory generator as mentioned above.
namespace: guestbook
```
> [!NOTE]
> The `values.` prefix is always prepended to values provided via `generators.git.values` field. Ensure you include this prefix in the parameter name within the `template` when using it.
In `values` we can also interpolate all fields set by the git files generator as mentioned above.
You can customize this interval per ApplicationSet using
`requeueAfterSeconds`.
> [!NOTE]
> The Git generator uses the ArgoCD Repo Server to retrieve file
> and directory lists from Git. Therefore, the Git generator is
> affected by the Repo Server's Revision Cache Expiration setting
> (see the description of the `timeout.reconciliation` parameter in
> If this value exceeds the configured Git Polling Interval, the
> Git generator might not see files or directories from new commits
> until the previous cache entry expires.
>
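Overriding the polling interval for one ApplicationSet might look like this (the repository URL is an assumption for illustration):

```yaml
spec:
  generators:
  - git:
      repoURL: https://github.com/example/example-repo.git # assumed
      revision: HEAD
      requeueAfterSeconds: 60 # re-evaluate this generator every 60 seconds
      directories:
      - path: apps/*
```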
## The `argocd.argoproj.io/application-set-refresh` Annotation
Setting the `argocd.argoproj.io/application-set-refresh` annotation
# ...
```
> [!NOTE]
> The ApplicationSet controller webhook does not use the same webhook as the API server as defined [here](../webhook.md). ApplicationSet exposes a webhook server as a service of type ClusterIP. An ApplicationSet specific Ingress resource needs to be created to expose this service to the webhook source.
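A hedged Ingress sketch for exposing that ClusterIP service (the service name, port, path, and host below are assumptions; verify them against your installation):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-applicationset-webhook # assumed name
  namespace: argocd # assumed install namespace
spec:
  rules:
  - host: applicationset.example.com # assumed host
    http:
      paths:
      - path: /api/webhook # assumed webhook path
        pathType: Prefix
        backend:
          service:
            name: argocd-applicationset-controller # assumed service name
            port:
              number: 7000 # assumed webhook port
```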
### 1. Create the webhook in the Git provider
> [!NOTE]
> When creating the webhook in GitHub, the "Content type" needs to be set to "application/json". The default value "application/x-www-form-urlencoded" is not supported by the library used to handle the hooks
### 2. Configure ApplicationSet with the webhook secret (Optional)
> [!NOTE]
> **Clusters must be predefined in Argo CD**
>
> These clusters *must* already be defined within Argo CD, in order to generate applications for these values. The ApplicationSet controller does not create clusters within Argo CD (for instance, it does not have the credentials to do so).
## Dynamically generated elements
The List generator can also dynamically generate its elements based on a yaml/json it gets from a previous generator like git by combining the two with a matrix generator. In this example we are using the matrix generator with a git followed by a list generator and pass the content of a file in git as input to the `elementsYaml` field of the list generator:
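A hedged sketch of that combination (the repository URL, file path, and the field holding the list are assumptions for illustration):

```yaml
spec:
  goTemplate: true
  generators:
  - matrix:
      generators:
      - git:
          repoURL: https://github.com/example/elements.git # assumed
          revision: HEAD
          files:
          - path: elements.yaml
      - list:
          # the file is assumed to contain a top-level "elements" list
          elementsYaml: "{{ .elements | toJson }}"
```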
- `configMapRef.name`: A `ConfigMap` name containing the plugin configuration to use for the RPC call.
- `input.parameters`: Input parameters included in the RPC call to the plugin. (Optional)
> [!NOTE]
> The concept of the plugin should not undermine the spirit of GitOps by externalizing data outside of Git. The goal is to be complementary in specific contexts.
> For example, when using one of the PullRequest generators, it's impossible to retrieve parameters related to the CI (only the commit hash is available), which limits the possibilities. By using a plugin, it's possible to retrieve the necessary parameters from a separate data source and use them to extend the functionality of the generator.
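A minimal plugin generator sketch tying the two fields together (the ConfigMap name and parameters are assumptions for illustration):

```yaml
spec:
  generators:
  - plugin:
      configMapRef:
        name: my-plugin # assumed ConfigMap holding the plugin configuration
      input:
        parameters: # optional, forwarded in the RPC call
          environment: staging # assumed parameter
```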
### Add a ConfigMap to configure the access of the plugin
> [!NOTE]
> Know the security implications of PR generators in ApplicationSets.
> [Only admins may create ApplicationSets](./Security.md#only-admins-may-createupdatedelete-applicationsets) to avoid
> leaking Secrets, and [only admins may create PRs](./Security.md#templated-project-field) if the `project` field of
> an ApplicationSet with a PR generator is templated, to avoid granting management of out-of-bounds resources.
## GitHub
The configuration is almost the same as the one described [in the Git generator](Generators-Git.md), but there is one difference: if you want to use the Pull Request Generator as well, additionally configure the following settings.
> [!NOTE]
> The ApplicationSet controller webhook does not use the same webhook as the API server as defined [here](../webhook.md). ApplicationSet exposes a webhook server as a service of type ClusterIP. An ApplicationSet specific Ingress resource needs to be created to expose this service to the webhook source.
### Github webhook configuration
namespace: default
```
> [!NOTE]
> The `values.` prefix is always prepended to values provided via `generators.pullRequest.values` field. Ensure you include this prefix in the parameter name within the `template` when using it.
In `values` we can also interpolate all fields set by the Pull Request generator as mentioned above.
- `cloneProtocol`: Which protocol to use for the SCM URL. Default is provider-specific but ssh if possible. Not all providers necessarily support all protocols, see provider documentation below for available options.
> [!NOTE]
> Know the security implications of using SCM generators. [Only admins may create ApplicationSets](./Security.md#only-admins-may-createupdatedelete-applicationsets)
> to avoid leaking Secrets, and [only admins may create repos/branches](./Security.md#templated-project-field) if the
> `project` field of an ApplicationSet with an SCM generator is templated, to avoid granting management of
> out-of-bounds resources.
## GitHub
namespace: default
```
> [!NOTE]
> The `values.` prefix is always prepended to values provided via `generators.scmProvider.values` field. Ensure you include this prefix in the parameter name within the `template` when using it.
In `values` we can also interpolate all fields set by the SCM generator as mentioned above.
- After each successful commit to the *argoproj/applicationset* `master` branch, a GitHub action will run that performs a container build/push to [`argoproj/argocd-applicationset:latest`](https://quay.io/repository/argoproj/argocd-applicationset?tab=tags)
- [Documentation for the `master`-branch-based developer builds](https://argocd-applicationset.readthedocs.io/en/master/) is available from Read the Docs.
> [!WARNING]
> Development builds contain newer features and bug fixes, but are more likely to be unstable, as compared to release builds.
See the `master` branch [Read the Docs](https://argocd-applicationset.readthedocs.io/en/master/) page for documentation on post-release features. -->
> [!WARNING]
> **Alpha Feature (Since v2.6.0)**
>
> This is an experimental, [alpha-quality](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#alpha)
> feature that allows you to control the order in which the ApplicationSet controller will create or update the Applications
> owned by an ApplicationSet resource. It may be removed in future releases or modified in backwards-incompatible ways.
## Use Cases
The Progressive Syncs feature set is intended to be light and flexible. The feature only interacts with the health of managed Applications. It is not intended to support direct integrations with other Rollout controllers (such as the native ReplicaSet controller or Argo Rollouts).
- Progressive Syncs watch for the managed Application resources to become "Healthy" before proceeding to the next stage.
- Deployments, DaemonSets, StatefulSets, and [Argo Rollouts](https://argoproj.github.io/argo-rollouts/) are all supported, because the Application enters a "Progressing" state while pods are being rolled out. In fact, any resource with a health check that can report a "Progressing" status is supported.
- [Argo CD Resource Hooks](../../user-guide/resource_hooks.md) are supported. We recommend this approach for users that need advanced functionality when an Argo Rollout cannot be used, such as smoke testing after a DaemonSet change.
## Enabling Progressive Syncs
As an experimental feature, progressive syncs must be explicitly enabled in one of these ways.
1. Pass `--enable-progressive-syncs` to the ApplicationSet controller args.
ApplicationSet strategies control both how applications are created (or updated) and deleted. These operations are configured using two separate fields:
- **Creation Strategy** (`type` field): Controls application creation and updates
- **Deletion Strategy** (`deletionOrder` field): Controls application deletion order
### Creation Strategies
The `type` field controls how applications are created and updated. Available values:
- **AllAtOnce** (default)
- **RollingSync**
#### AllAtOnce
This default Application update behavior is unchanged from the original ApplicationSet implementation.
All Applications managed by the ApplicationSet resource are updated simultaneously when the ApplicationSet is updated.
```yaml
spec:
  strategy:
    type: AllAtOnce # explicit, but this is the default
```
#### RollingSync
This update strategy allows you to group Applications by labels present on the generated Application resources.
When the ApplicationSet changes, the changes will be applied to each group of Application resources sequentially.
- Application groups are selected using their labels and `matchExpressions`.
- All `matchExpressions` must be true for an Application to be selected (multiple expressions match with AND behavior).
- The `In` and `NotIn` operators must match at least one value to be considered true (OR behavior).
- The `NotIn` operator has priority in the event that both a `NotIn` and `In` operator produce a match.
- All Applications in each group must become Healthy before the ApplicationSet controller will proceed to update the next group of Applications.
- The number of simultaneous Application updates in a group will not exceed its `maxUpdate` parameter (default is 100%, unbounded).
- RollingSync will capture external changes outside the ApplicationSet resource, since it relies on watching the OutOfSync status of the managed Applications.
- RollingSync will force all generated Applications to have autosync disabled. Warnings are printed in the applicationset-controller logs for any Application specs with an automated syncPolicy enabled.
- Sync operations are triggered the same way as if they were triggered by the UI or CLI (by directly setting the `operation` status field on the Application resource). This means that a RollingSync will respect sync windows just as if a user had clicked the "Sync" button in the Argo UI.
- When a sync is triggered, the sync is performed with the same syncPolicy configured for the Application. For example, this preserves the Application's retry settings.
- If an Application is considered "Pending" for `applicationsetcontroller.default.application.progressing.timeout` seconds, the Application is automatically moved to Healthy status (default 300).
- If an Application is not selected in any step, it will be excluded from the rolling sync and needs to be manually synced through the CLI or UI.
```yaml
spec:
The `deletionOrder` field controls the order in which applications are deleted when they are removed from the ApplicationSet. Available values:
- **AllAtOnce** (default)
- **Reverse**
#### AllAtOnce Deletion
This is the default behavior where all applications that need to be deleted are removed simultaneously. This works with both `AllAtOnce` and `RollingSync` creation strategies.
```yaml
spec:
  strategy:
    type: RollingSync # or AllAtOnce
    deletionOrder: AllAtOnce # explicit, but this is the default
```
#### Reverse Deletion
When using `deletionOrder: Reverse` with RollingSync strategy, applications are deleted in reverse order of the steps defined in `rollingSync.steps`. This ensures that applications deployed in later steps are deleted before applications deployed in earlier steps.
This strategy is particularly useful when you need to tear down dependent services in a particular sequence.
**Requirements for Reverse deletion:**
- Must be used with `type: RollingSync`
- Requires `rollingSync.steps` to be defined
- Applications are deleted in reverse order of step sequence
        - key: envLabel
          operator: In
          values:
          - env-dev # Step 1: Created first, deleted last
      - matchExpressions:
        - key: envLabel
          operator: In
          values:
          - env-prod # Step 2: Created second, deleted first
```
In this example, when applications are deleted:
1. `env-prod` applications (Step 2) are deleted first
2. `env-dev` applications (Step 1) are deleted second
This deletion order is useful for scenarios where you need to tear down dependent services in the correct sequence, such as deleting frontend services before backend dependencies.
#### Example
The following example illustrates how to stage a progressive sync over Applications with explicitly configured environment labels.
Once a change is pushed, the following will happen in order.
- All `env-dev` Applications will be updated simultaneously.
- The rollout will wait for all `env-qa` Applications to be manually synced via the `argocd` CLI or by clicking the Sync button in the UI.
- 10% of all `env-prod` Applications will be updated at a time until all `env-prod` Applications have been updated.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
  - list:
      elements:
      - cluster: engineering-dev
        url: https://1.2.3.4
        env: env-dev
      - cluster: engineering-qa
        url: https://2.4.6.8
        env: env-qa
      - cluster: engineering-prod
        url: https://9.8.7.6/
        env: env-prod
  strategy:
    type: RollingSync
    deletionOrder: Reverse # Applications will be deleted in reverse order of steps
    rollingSync:
      steps:
      - matchExpressions:
        - key: envLabel
          operator: In
          values:
          - env-dev
      - matchExpressions:
        - key: envLabel
          operator: In
          values:
          - env-qa
        maxUpdate: 0 # if 0, no matched applications will be updated
      - matchExpressions:
        - key: envLabel
          operator: In
          values:
          - env-prod
        maxUpdate: 10% # maxUpdate supports both integer and percentage string values (rounds down, but floored at 1 Application for >0%)
> [!IMPORTANT]
> `templatePatch` only works when [go templating](../applicationset/GoTemplate.md) is enabled.
> This means that the `goTemplate` field under `spec` needs to be set to `true` for template patching to work.
> [!IMPORTANT]
> The `templatePatch` can apply arbitrary changes to the template. If parameters include untrustworthy user input, it
> may be possible to inject malicious changes into the template. It is recommended to use `templatePatch` only with
> trusted input or to carefully escape the input before using it in the template. Piping input to `toJson` should help
> prevent, for example, a user from successfully injecting a string with newlines.
>
> The `spec.project` field is not supported in `templatePatch`. If you need to change the project, you can use the
> `spec.project` field in the `template` field.
> [!IMPORTANT]
> When writing a `templatePatch`, you're crafting a patch. So, if the patch includes an empty `spec: # nothing in here`, it will effectively clear out existing fields. See [#17040](https://github.com/argoproj/argo-cd/issues/17040) for an example of this behavior.
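A hedged `templatePatch` sketch (the annotation key and parameter are illustrative assumptions); note `goTemplate: true` and the `toJson` escaping:

```yaml
spec:
  goTemplate: true # required for templatePatch
  templatePatch: |
    metadata:
      annotations:
        # toJson guards against untrusted input injecting new YAML lines
        example.com/owner: '{{ .owner | toJson }}'
```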
- Improved support for monorepos: in the context of Argo CD, a monorepo is multiple Argo CD Application resources defined within a single Git repository
- Within multitenant clusters, improves the ability of individual cluster tenants to deploy applications using Argo CD (without needing to involve privileged cluster administrators in enabling the destination clusters/namespaces)
> [!NOTE]
> Be aware of the [security implications](./Security.md) of ApplicationSets before using them.
# application.sync.impersonation.enabled enables application sync to use a custom service account, via impersonation. This allows decoupling sync from control-plane service account.
application.sync.impersonation.enabled: "false"
# If true, passing a different revision from the one given in the application when syncing requires the `override` privilege.
# The current default setting up to now (`false`) requires only `sync` privilege for syncing to a different revision.
# We highly recommend that this be set to `true`. The next major release will set the default to be `true`.
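These keys live in the `argocd-cm` ConfigMap; for example, enabling impersonation would look like this (the `argocd` namespace is an assumed install location):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd # assumed install namespace
data:
  application.sync.impersonation.enabled: "true"
```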
# This template iterates through the fields in the `.metadata` object,
# and formats them based on their type (map, array, or primitive values).
{{- if .metadata.author }}
Co-authored-by: {{ .metadata.author }}
{{- end }}