Compare commits


30 Commits

Author SHA1 Message Date
github-actions[bot]
becb020064 Bump version to 3.1.8 on release-3.1 branch (#24794)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 11:29:59 -04:00
Steve Ramage
c63c2d8909 fix(docs): include v3.1 upgrade docs (#23529) [backport] (#24799)
Signed-off-by: Mubarak Jama <mubarak.jama@gmail.com>
Signed-off-by: Steve Ramage <gitcommits@sjrx.net>
Co-authored-by: Mubarak Jama <83465122+mubarak-j@users.noreply.github.com>
2025-09-30 05:24:54 -10:00
Ville Vesilehto
e20828f869 Merge commit from fork
Fixed a race condition in repository credentials handling by
implementing deep copying of secrets before modification.
This prevents concurrent map read/write panics when multiple
goroutines access the same secret.

The fix ensures thread-safe operations by always operating on
copies rather than shared objects.
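The deep-copy fix described above can be sketched as follows. This is a minimal illustration of the pattern, not the actual Argo CD code: the `secret` type and `deepCopy` helper here are hypothetical stand-ins for the Kubernetes secret objects the repository credential handler works with.

```go
package main

import (
	"fmt"
	"sync"
)

// secret mimics a shared object whose Data map may be read by many goroutines.
type secret struct {
	Data map[string][]byte
}

// deepCopy returns an independent copy. Mutating the copy cannot race with
// concurrent readers of the original map.
func (s *secret) deepCopy() *secret {
	out := &secret{Data: make(map[string][]byte, len(s.Data))}
	for k, v := range s.Data {
		out.Data[k] = append([]byte(nil), v...)
	}
	return out
}

func main() {
	shared := &secret{Data: map[string][]byte{"url": []byte("https://repo")}}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Operate on a copy, never the shared object.
			local := shared.deepCopy()
			local.Data["password"] = []byte("redacted")
		}()
	}
	wg.Wait()
	fmt.Println(string(shared.Data["url"])) // shared object left untouched
}
```

Without the copy, the goroutines would write to the shared map while others read it, which Go's runtime reports as a fatal concurrent map read/write.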

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-09-30 10:45:59 -04:00
Michael Crenshaw
761fc27068 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:45:32 -04:00
Michael Crenshaw
1a023f1ca7 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:07:24 -04:00
Michael Crenshaw
5c466a4e39 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 09:45:21 -04:00
argo-cd-cherry-pick-bot[bot]
b2fa7dcde6 fix: #24781 update crossplane healthchecks to V2 version (cherry-pick #24782 for 3.1) (#24783)
Signed-off-by: Jonasz Łasut-Balcerzak <jonasz.lasut@gmail.com>
Co-authored-by: Jonasz Łasut-Balcerzak <jonasz.lasut@gmail.com>
2025-09-30 18:04:57 +05:30
argo-cd-cherry-pick-bot[bot]
38808d03cd fix: allow for backwards compatibility of durations defined in days (cherry-pick #24769 for 3.1) (#24771)
Signed-off-by: lplazas <felipe.plazas10@gmail.com>
Co-authored-by: lplazas <luis.plazas@workato.com>
2025-09-29 15:23:09 +05:30
Atif Ali
41eac62eac fix: Clear ApplicationSet applicationStatus when ProgressiveSync is d… (#24715)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-09-26 10:18:37 -04:00
argo-cd-cherry-pick-bot[bot]
54bab39a80 fix: update ExternalSecret discovery.lua to also include the refreshPolicy (cherry-pick #24707 for 3.1) (#24712)
Signed-off-by: AvivGuiser <avivguiser@gmail.com>
Co-authored-by: AvivGuiser <avivguiser@gmail.com>
2025-09-23 13:38:55 -04:00
github-actions[bot]
511ebd799e Bump version to 3.1.7 on release-3.1 branch (#24702)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: alexmt <426437+alexmt@users.noreply.github.com>
2025-09-22 15:15:25 -07:00
Alexander Matyushentsev
2e4458b91a fix: limit number of resources in appset status (#24690) (#24696)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-22 14:57:55 -07:00
argo-cd-cherry-pick-bot[bot]
7f92418a9c ci(release): only set latest release in github when latest (cherry-pick #24525 for 3.1) (#24685)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-22 11:46:48 -04:00
argo-cd-cherry-pick-bot[bot]
f3d59b0bb7 fix: resolve argocdService initialization issue in notifications CLI (cherry-pick #24664 for 3.1) (#24681)
Signed-off-by: puretension <rlrlfhtm5@gmail.com>
Co-authored-by: DOHYEONG LEE <rlrlfhtm5@gmail.com>
2025-09-22 04:43:49 -10:00
argo-cd-cherry-pick-bot[bot]
4081e2983a fix(server): validate new project on update (#23970) (cherry-pick #23973 for 3.1) (#24662)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-19 11:21:57 -04:00
argo-cd-cherry-pick-bot[bot]
c26cd5502b fix: Progress Sync Unknown in UI (cherry-pick #24202 for 3.1) (#24643)
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Atif Ali <56743004+aali309@users.noreply.github.com>
2025-09-18 14:30:42 -04:00
github-actions[bot]
96797ba846 Bump version to 3.1.6 on release-3.1 branch (#24635)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: blakepettersson <1227954+blakepettersson@users.noreply.github.com>
2025-09-18 06:28:14 -10:00
argo-cd-cherry-pick-bot[bot]
b46a57ab82 fix(oci): loosen up layer restrictions (cherry-pick #24640 for 3.1) (#24649)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-09-18 06:01:17 -10:00
Alexander Matyushentsev
2b3df7f5a8 fix: use informer in webhook handler to reduce memory usage (#24622) (#24626)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-18 12:43:06 +05:30
argo-cd-cherry-pick-bot[bot]
4ef56634b4 docs: Delete dangling word in Source Hydrator docs (cherry-pick #24601 for 3.1) (#24603)
Signed-off-by: José Maia <josecbmaia@hotmail.com>
Co-authored-by: José Maia <josecbmaia@hotmail.com>
2025-09-17 11:36:10 -04:00
argo-cd-cherry-pick-bot[bot]
cb9574597e fix: correct post-delete finalizer removal when cluster not found (cherry-pick #24415 for 3.1) (#24590)
Signed-off-by: Pavel Aborilov <aborilov@gmail.com>
Co-authored-by: Pavel <aborilov@gmail.com>
2025-09-16 16:14:23 -07:00
Linus Ehlers
468870f65d fix: Ensure that symlink targets are not made absolute on extracting a tar (#24145) - backport/cherry-pick to 3.1 (#24519)
Signed-off-by: Linus Ehlers <Linus.Ehlers@ppi.de>
2025-09-11 10:48:02 -04:00
github-actions[bot]
cfeed49105 Bump version to 3.1.5 on release-3.1 branch (#24503)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-10 11:30:13 -04:00
Codey Jenkins
c21141a51f fix(cherry pick 3.1): RunResourceAction: error getting Lua resource action: built-in script does not exist #24491 (#24500)
Signed-off-by: Codey Jenkins <FourFifthsCode@users.noreply.github.com>
2025-09-10 11:28:01 -04:00
Fox Piacenti
0415c60af9 docs: Update URL for HA manifests to stable. (#24454)
Signed-off-by: Fox Danger Piacenti <fox@opencraft.com>
2025-09-09 12:36:21 +03:00
Nitish Kumar
9a3235ef92 fix(3.1): change the appset namespace to server namespace when generating appset (#24478)
Signed-off-by: nitishfy <justnitish06@gmail.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-09 10:45:31 +03:00
OpenGuidou
3320f1ed7a fix(cherry-pick-3.1): Do not block project update when a cluster referenced in an App doesn't exist (#24450)
Signed-off-by: OpenGuidou <guillaume.doussin@gmail.com>
2025-09-08 11:38:25 -04:00
github-actions[bot]
20dd73af34 Bump version to 3.1.4 on release-3.1 branch (#24424)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: agaudreault <47184027+agaudreault@users.noreply.github.com>
2025-09-05 14:56:15 -04:00
Alexandre Gaudreault
206d57b0de chore(deps): bump gitops-engine (#24418)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-05 13:29:18 -04:00
github-actions[bot]
c1467b81bc Bump version to 3.1.3 on release-3.1 branch (#24401)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-04 13:30:54 -04:00
71 changed files with 1459 additions and 379 deletions

View File

@@ -32,6 +32,42 @@ jobs:
quay_username: ${{ secrets.RELEASE_QUAY_USERNAME }}
quay_password: ${{ secrets.RELEASE_QUAY_TOKEN }}
setup-variables:
name: Setup Release Variables
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
outputs:
is_pre_release: ${{ steps.var.outputs.is_pre_release }}
is_latest_release: ${{ steps.var.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup variables
id: var
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_RELEASE_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | grep -v '-' | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo ${{ github.ref_name }} | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
IS_LATEST=false
# Ensure latest release tag matches github.ref_name
if [[ $LATEST_RELEASE_TAG == ${{ github.ref_name }} ]];then
IS_LATEST=true
fi
echo "is_pre_release=$PRE_RELEASE" >> $GITHUB_OUTPUT
echo "is_latest_release=$IS_LATEST" >> $GITHUB_OUTPUT
argocd-image-provenance:
needs: [argocd-image]
permissions:
@@ -50,15 +86,17 @@ jobs:
goreleaser:
needs:
- setup-variables
- argocd-image
- argocd-image-provenance
permissions:
contents: write # used for uploading assets
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
GORELEASER_MAKE_LATEST: ${{ needs.setup-variables.outputs.is_latest_release }}
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -142,7 +180,7 @@ jobs:
permissions:
contents: write # Needed for release uploads
outputs:
hashes: ${{ steps.sbom-hash.outputs.hashes}}
hashes: ${{ steps.sbom-hash.outputs.hashes }}
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
steps:
@@ -221,6 +259,7 @@ jobs:
post-release:
needs:
- setup-variables
- argocd-image
- goreleaser
- generate-sbom
@@ -229,6 +268,8 @@ jobs:
pull-requests: write # Needed to create PR for VERSION update.
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -242,27 +283,6 @@ jobs:
git config --global user.email 'ci@argoproj.com'
git config --global user.name 'CI'
- name: Check if tag is the latest version and not a pre-release
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo $LATEST_TAG | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
# Ensure latest tag matches github.ref_name & not a pre-release
if [[ $LATEST_TAG == ${{ github.ref_name }} ]] && [[ $PRE_RELEASE != 'true' ]];then
echo "TAG_STABLE=true" >> $GITHUB_ENV
else
echo "TAG_STABLE=false" >> $GITHUB_ENV
fi
- name: Update stable tag to latest version
run: |
git tag -f stable ${{ github.ref_name }}
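The tag-classification logic this workflow adds (a `-rcN` suffix marks a pre-release; the pushed ref is "latest" only when it equals the highest non-pre-release tag) can be sketched standalone. This is an illustrative reimplementation, assuming `latestTag` has already been produced by `git tag --sort=version:refname` as in the script above:

```go
package main

import (
	"fmt"
	"regexp"
)

// rcSuffix mirrors the workflow's grep pattern '-rc[0-9]+$'.
var rcSuffix = regexp.MustCompile(`-rc[0-9]+$`)

// isPreRelease reports whether a tag names a release candidate.
func isPreRelease(tag string) bool { return rcSuffix.MatchString(tag) }

// isLatest reports whether the pushed ref matches the highest stable tag.
func isLatest(refName, latestTag string) bool { return refName == latestTag }

func main() {
	fmt.Println(isPreRelease("v3.1.8-rc1")) // true
	fmt.Println(isPreRelease("v3.1.8"))     // false
	fmt.Println(isLatest("v3.1.8", "v3.1.8"))
}
```

Computing both flags once in a `setup-variables` job and passing them via job outputs, as the diff does, avoids re-deriving them in `goreleaser` and `post-release` (the deleted step further down duplicated exactly this logic).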

View File

@@ -49,13 +49,14 @@ archives:
- argocd-cli
name_template: |-
{{ .ProjectName }}-{{ .Os }}-{{ .Arch }}
formats: [ binary ]
formats: [binary]
checksum:
name_template: 'cli_checksums.txt'
algorithm: sha256
release:
make_latest: '{{ .Env.GORELEASER_MAKE_LATEST }}'
prerelease: auto
draft: false
header: |

View File

@@ -1 +1 @@
3.1.2
3.1.8

View File

@@ -92,6 +92,7 @@ type ApplicationSetReconciler struct {
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -230,6 +231,16 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, fmt.Errorf("failed to perform progressive sync reconciliation for application set: %w", err)
}
}
} else {
// Progressive Sync is disabled, clear any existing applicationStatus to prevent stale data
if len(applicationSetInfo.Status.ApplicationStatus) > 0 {
logCtx.Infof("Progressive Sync disabled, removing %v AppStatus entries from ApplicationSet %v", len(applicationSetInfo.Status.ApplicationStatus), applicationSetInfo.Name)
err := r.setAppSetApplicationStatus(ctx, logCtx, &applicationSetInfo, []argov1alpha1.ApplicationSetApplicationStatus{})
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to clear AppSet application statuses when Progressive Sync is disabled for %v: %w", applicationSetInfo.Name, err)
}
}
}
var validApps []argov1alpha1.Application
@@ -1310,6 +1321,11 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
sort.Slice(statuses, func(i, j int) bool {
return statuses[i].Name < statuses[j].Name
})
if r.MaxResourcesStatusCount > 0 && len(statuses) > r.MaxResourcesStatusCount {
logCtx.Warnf("Truncating ApplicationSet %s resource status from %d to max allowed %d entries", appset.Name, len(statuses), r.MaxResourcesStatusCount)
statuses = statuses[:r.MaxResourcesStatusCount]
}
appset.Status.Resources = statuses
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
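The `MaxResourcesStatusCount` guard added above sorts statuses by name (for deterministic output) and then caps the slice. A minimal sketch of that behavior, using plain strings in place of the `ResourceStatus` structs and treating `0` as "unlimited" as the flag's default implies:

```go
package main

import (
	"fmt"
	"sort"
)

// truncateStatuses sorts names and caps the result at max entries.
// A max of 0 disables truncation, matching the controller flag's default.
func truncateStatuses(names []string, max int) []string {
	sort.Strings(names)
	if max > 0 && len(names) > max {
		return names[:max]
	}
	return names
}

func main() {
	fmt.Println(truncateStatuses([]string{"app2", "app1", "app3"}, 2)) // [app1 app2]
}
```

Sorting before truncating matters: it makes the kept subset stable across reconciliations instead of depending on generator ordering.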

View File

@@ -6117,10 +6117,11 @@ func TestUpdateResourceStatus(t *testing.T) {
require.NoError(t, err)
for _, cc := range []struct {
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
maxResourcesStatusCount int
}{
{
name: "handles an empty application list",
@@ -6284,6 +6285,73 @@ func TestUpdateResourceStatus(t *testing.T) {
apps: []v1alpha1.Application{},
expectedResources: nil,
},
{
name: "truncates resources status list to",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "argocd",
},
Status: v1alpha1.ApplicationSetStatus{
Resources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
{
Name: "app2",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
},
},
},
apps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.AppHealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Name: "app2",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.AppHealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
},
expectedResources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeSynced,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
maxResourcesStatusCount: 1,
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
@@ -6294,13 +6362,14 @@ func TestUpdateResourceStatus(t *testing.T) {
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
MaxResourcesStatusCount: cc.maxResourcesStatusCount,
}
err := r.updateResourcesStatus(t.Context(), log.NewEntry(log.StandardLogger()), &cc.appSet, cc.apps)
@@ -7250,3 +7319,82 @@ func TestSyncApplication(t *testing.T) {
})
}
}
func TestReconcileProgressiveSyncDisabled(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
for _, cc := range []struct {
name string
appSet v1alpha1.ApplicationSet
enableProgressiveSyncs bool
expectedAppStatuses []v1alpha1.ApplicationSetApplicationStatus
}{
{
name: "clears applicationStatus when Progressive Sync is disabled",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "test-appset",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSetSpec{
Generators: []v1alpha1.ApplicationSetGenerator{},
Template: v1alpha1.ApplicationSetTemplate{},
},
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "test-appset-guestbook",
Message: "Application resource became Healthy, updating status from Progressing to Healthy.",
Status: "Healthy",
Step: "1",
},
},
},
},
enableProgressiveSyncs: false,
expectedAppStatuses: nil,
},
} {
t.Run(cc.name, func(t *testing.T) {
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&cc.appSet).WithStatusSubresource(&cc.appSet).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Renderer: &utils.Render{},
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
EnableProgressiveSyncs: cc.enableProgressiveSyncs,
}
req := ctrl.Request{
NamespacedName: types.NamespacedName{
Namespace: cc.appSet.Namespace,
Name: cc.appSet.Name,
},
}
// Run reconciliation
_, err = r.Reconcile(t.Context(), req)
require.NoError(t, err)
// Fetch the updated ApplicationSet
var updatedAppSet v1alpha1.ApplicationSet
err = r.Get(t.Context(), req.NamespacedName, &updatedAppSet)
require.NoError(t, err)
// Verify the applicationStatus field
assert.Equal(t, cc.expectedAppStatuses, updatedAppSet.Status.ApplicationStatus, "applicationStatus should match expected value")
})
}
}

View File

@@ -29,10 +29,10 @@ type GitGenerator struct {
}
// NewGitGenerator creates a new instance of Git Generator
func NewGitGenerator(repos services.Repos, namespace string) Generator {
func NewGitGenerator(repos services.Repos, controllerNamespace string) Generator {
g := &GitGenerator{
repos: repos,
namespace: namespace,
namespace: controllerNamespace,
}
return g
@@ -78,11 +78,11 @@ func (g *GitGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.Applic
if !strings.Contains(appSet.Spec.Template.Spec.Project, "{{") {
project := appSet.Spec.Template.Spec.Project
appProject := &argoprojiov1alpha1.AppProject{}
namespace := g.namespace
if namespace == "" {
namespace = appSet.Namespace
controllerNamespace := g.namespace
if controllerNamespace == "" {
controllerNamespace = appSet.Namespace
}
if err := client.Get(context.TODO(), types.NamespacedName{Name: project, Namespace: namespace}, appProject); err != nil {
if err := client.Get(context.TODO(), types.NamespacedName{Name: project, Namespace: controllerNamespace}, appProject); err != nil {
return nil, fmt.Errorf("error getting project %s: %w", project, err)
}
// we need to verify the signature on the Git revision if GPG is enabled

View File

@@ -10,15 +10,15 @@ import (
"github.com/argoproj/argo-cd/v3/applicationset/services"
)
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, namespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, controllerNamespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
terminalGenerators := map[string]Generator{
"List": NewListGenerator(),
"Clusters": NewClusterGenerator(ctx, c, k8sClient, namespace),
"Git": NewGitGenerator(argoCDService, namespace),
"Clusters": NewClusterGenerator(ctx, c, k8sClient, controllerNamespace),
"Git": NewGitGenerator(argoCDService, controllerNamespace),
"SCMProvider": NewSCMProviderGenerator(c, scmConfig),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, namespace),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, controllerNamespace),
"PullRequest": NewPullRequestGenerator(c, scmConfig),
"Plugin": NewPluginGenerator(c, namespace),
"Plugin": NewPluginGenerator(c, controllerNamespace),
}
nestedGenerators := map[string]Generator{

View File

@@ -79,6 +79,7 @@ func NewCommand() *cobra.Command {
enableScmProviders bool
webhookParallelism int
tokenRefStrictMode bool
maxResourcesStatusCount int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@@ -231,6 +232,7 @@ func NewCommand() *cobra.Command {
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -275,6 +277,7 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&metricsAplicationsetLabels, "metrics-applicationset-labels", []string{}, "List of Application labels that will be added to the argocd_applicationset_labels metric")
command.Flags().BoolVar(&enableGitHubAPIMetrics, "enable-github-api-metrics", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_GITHUB_API_METRICS", false), "Enable GitHub API metrics for generators that use the GitHub API")
command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 0, 0, math.MaxInt), "Max number of resources stored in appset status.")
return &command
}

View File

@@ -30,11 +30,12 @@ func NewNotificationsCommand() *cobra.Command {
)
var argocdService service.Service
toolsCommand := cmd.NewToolsCommand(
"notifications",
"argocd admin notifications",
applications,
settings.GetFactorySettingsForCLI(argocdService, "argocd-notifications-secret", "argocd-notifications-cm", false),
settings.GetFactorySettingsForCLI(func() service.Service { return argocdService }, "argocd-notifications-secret", "argocd-notifications-cm", false),
func(clientConfig clientcmd.ClientConfig) {
k8sCfg, err := clientConfig.ClientConfig()
if err != nil {
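The one-line change above wraps `argocdService` in a closure so the settings factory resolves the service at call time rather than capturing its (still-nil) value when the command is constructed. The deferred-resolution pattern can be sketched as follows; the `holder` and `notifier` types are hypothetical stand-ins, not Argo CD types:

```go
package main

import "fmt"

// Service is a stand-in for the notifications service interface.
type Service interface{ Name() string }

type notifier struct{}

func (notifier) Name() string { return "argocd-notifications" }

// holder mimics the CLI variable that is assigned only after setup runs.
type holder struct{ svc Service }

// getter returns a closure that reads the field at invocation time,
// so late initialization is observed instead of a captured nil.
func (h *holder) getter() func() Service {
	return func() Service { return h.svc }
}

func main() {
	h := &holder{}
	get := h.getter() // built while h.svc is still nil
	h.svc = notifier{} // initialized later, e.g. after kubeconfig parsing
	fmt.Println(get().Name())
}
```

Passing the bare variable instead of the closure would hand the factory a nil service permanently, which is the initialization bug the cherry-pick fixes.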

View File

@@ -1202,7 +1202,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
if err != nil {
logCtx.Warnf("Unable to get destination cluster: %v", err)
app.UnSetCascadedDeletion()
app.UnSetPostDeleteFinalizer()
app.UnSetPostDeleteFinalizerAll()
if err := ctrl.updateFinalizers(app); err != nil {
return err
}

View File

@@ -290,6 +290,8 @@ data:
applicationsetcontroller.global.preserved.labels: "acme.com/label1,acme.com/label2"
# Enable GitHub API metrics for generators that use GitHub API
applicationsetcontroller.enable.github.api.metrics: "true"
# The maximum number of resources stored in the status of an ApplicationSet. This is a safeguard to prevent the status from growing too large.
applicationsetcontroller.status.max.resources.count: "5000"
## Argo CD Notifications Controller Properties
# Set the logging level. One of: debug|info|warn|error (default "info")

View File

@@ -2,7 +2,7 @@
Argo CD is largely stateless. All data is persisted as Kubernetes objects, which in turn is stored in Kubernetes' etcd. Redis is only used as a throw-away cache and can be lost. When lost, it will be rebuilt without loss of service.
A set of [HA manifests](https://github.com/argoproj/argo-cd/tree/master/manifests/ha) are provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.
A set of [HA manifests](https://github.com/argoproj/argo-cd/tree/stable/manifests/ha) are provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.
!!! note

View File

@@ -37,6 +37,7 @@ argocd-applicationset-controller [flags]
--kubeconfig string Path to a kube config. Only required if out-of-cluster
--logformat string Set the logging format. One of: json|text (default "json")
--loglevel string Set the logging level. One of: debug|info|warn|error (default "info")
--max-resources-status-count int Max number of resources stored in appset status.
--metrics-addr string The address the metric endpoint binds to. (default ":8080")
--metrics-applicationset-labels strings List of Application labels that will be added to the argocd_applicationset_labels metric
-n, --namespace string If present, the namespace scope for this CLI request

View File

@@ -188,7 +188,7 @@ git commit -m "Bump image to v1.2.3" \
```
!!!note Newlines are not allowed
The commit trailers must not contain newlines. The
The commit trailers must not contain newlines.
So the full CI script might look something like this:

go.mod
View File

@@ -12,7 +12,7 @@ require (
github.com/Masterminds/sprig/v3 v3.3.0
github.com/TomOnTime/utfutil v1.0.0
github.com/alicebob/miniredis/v2 v2.35.0
github.com/argoproj/gitops-engine v0.7.1-0.20250617174952-093aef0dad58
github.com/argoproj/gitops-engine v0.7.1-0.20250905160054-e48120133eec
github.com/argoproj/notifications-engine v0.4.1-0.20250309174002-87bf0576a872
github.com/argoproj/pkg v0.13.6
github.com/argoproj/pkg/v2 v2.0.1

go.sum
View File

@@ -113,8 +113,8 @@ github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFI
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/appscode/go v0.0.0-20191119085241-0887d8ec2ecc/go.mod h1:OawnOmAL4ZX3YaPdN+8HTNwBveT1jMsqP74moa9XUbE=
github.com/argoproj/gitops-engine v0.7.1-0.20250617174952-093aef0dad58 h1:9ESamu44v3dR9j/I4/4Aa1Fx3QSIE8ElK1CR8Z285uk=
github.com/argoproj/gitops-engine v0.7.1-0.20250617174952-093aef0dad58/go.mod h1:aIBEG3ohgaC1gh/sw2On6knkSnXkqRLDoBj234Dqczw=
github.com/argoproj/gitops-engine v0.7.1-0.20250905160054-e48120133eec h1:rNAwbRQFvRIuW/e2bU+B10mlzghYXsnwZedYeA7Drz4=
github.com/argoproj/gitops-engine v0.7.1-0.20250905160054-e48120133eec/go.mod h1:aIBEG3ohgaC1gh/sw2On6knkSnXkqRLDoBj234Dqczw=
github.com/argoproj/notifications-engine v0.4.1-0.20250309174002-87bf0576a872 h1:ADGAdyN9ty0+RmTT/yn+xV9vwkqvLn9O1ccqeP0Zeas=
github.com/argoproj/notifications-engine v0.4.1-0.20250309174002-87bf0576a872/go.mod h1:d1RazGXWvKRFv9//rg4MRRR7rbvbE7XLgTSMT5fITTE=
github.com/argoproj/pkg v0.13.6 h1:36WPD9MNYECHcO1/R1pj6teYspiK7uMQLCgLGft2abM=

View File

@@ -187,6 +187,12 @@ spec:
name: argocd-cmd-params-cm
key: applicationsetcontroller.requeue.after
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
name: argocd-cmd-params-cm
key: applicationsetcontroller.status.max.resources.count
optional: true
volumeMounts:
- mountPath: /app/config/ssh
name: ssh-known-hosts

View File

@@ -12,4 +12,4 @@ resources:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: v3.1.2
newTag: v3.1.8

View File

@@ -5,7 +5,7 @@ kind: Kustomization
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: v3.1.2
newTag: v3.1.8
resources:
- ./application-controller
- ./dex

View File

@@ -24699,7 +24699,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -24825,7 +24831,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -24953,7 +24959,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25244,7 +25250,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25296,7 +25302,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -25638,7 +25644,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -24667,7 +24667,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -24787,7 +24793,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25078,7 +25084,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25130,7 +25136,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -25472,7 +25478,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -12,4 +12,4 @@ resources:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: v3.1.2
newTag: v3.1.8

View File

@@ -12,7 +12,7 @@ patches:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: v3.1.2
newTag: v3.1.8
resources:
- ../../base/application-controller
- ../../base/applicationset-controller

View File

@@ -26065,7 +26065,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -26191,7 +26197,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26342,7 +26348,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -26438,7 +26444,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -26562,7 +26568,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -26879,7 +26885,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26931,7 +26937,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -27305,7 +27311,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -27683,7 +27689,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -26035,7 +26035,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -26178,7 +26184,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -26274,7 +26280,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -26398,7 +26404,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -26715,7 +26721,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26767,7 +26773,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -27141,7 +27147,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -27519,7 +27525,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -1868,7 +1868,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1994,7 +2000,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2145,7 +2151,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -2241,7 +2247,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -2365,7 +2371,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -2682,7 +2688,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2734,7 +2740,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -3108,7 +3114,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -3486,7 +3492,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -1838,7 +1838,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1981,7 +1987,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -2077,7 +2083,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -2201,7 +2207,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -2518,7 +2524,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2570,7 +2576,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2944,7 +2950,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -3322,7 +3328,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -25159,7 +25159,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -25285,7 +25291,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25436,7 +25442,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -25532,7 +25538,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -25634,7 +25640,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25925,7 +25931,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25977,7 +25983,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -26349,7 +26355,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -26727,7 +26733,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-application-controller
ports:

manifests/install.yaml generated

@@ -25127,7 +25127,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -25270,7 +25276,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -25366,7 +25372,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -25468,7 +25474,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25759,7 +25765,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25811,7 +25817,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -26183,7 +26189,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -26561,7 +26567,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -962,7 +962,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1088,7 +1094,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1239,7 +1245,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1335,7 +1341,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1437,7 +1443,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -1728,7 +1734,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1780,7 +1786,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2152,7 +2158,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2530,7 +2536,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -930,7 +930,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1073,7 +1079,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1169,7 +1175,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1271,7 +1277,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -1562,7 +1568,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1614,7 +1620,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -1986,7 +1992,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2364,7 +2370,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.1.2
image: quay.io/argoproj/argocd:v3.1.8
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -131,6 +131,7 @@ nav:
- operator-manual/server-commands/additional-configuration-method.md
- Upgrading:
- operator-manual/upgrading/overview.md
- operator-manual/upgrading/3.0-3.1.md
- operator-manual/upgrading/2.14-3.0.md
- operator-manual/upgrading/2.13-2.14.md
- operator-manual/upgrading/2.12-2.13.md


@@ -3277,6 +3277,14 @@ func (app *Application) SetPostDeleteFinalizer(stage ...string) {
setFinalizer(&app.ObjectMeta, strings.Join(append([]string{PostDeleteFinalizerName}, stage...), "/"), true)
}
func (app *Application) UnSetPostDeleteFinalizerAll() {
for _, finalizer := range app.Finalizers {
if strings.HasPrefix(finalizer, PostDeleteFinalizerName) {
setFinalizer(&app.ObjectMeta, finalizer, false)
}
}
}
func (app *Application) UnSetPostDeleteFinalizer(stage ...string) {
setFinalizer(&app.ObjectMeta, strings.Join(append([]string{PostDeleteFinalizerName}, stage...), "/"), false)
}
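The new `UnSetPostDeleteFinalizerAll` above removes every finalizer sharing the `PostDeleteFinalizerName` prefix, covering stage-suffixed variants like `<prefix>/cleanup`. A minimal standalone Go sketch of that prefix-based removal (finalizer strings here are illustrative, not the exact constants from the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// removeFinalizersWithPrefix keeps only the finalizers that do NOT start
// with the given prefix, mirroring the loop in UnSetPostDeleteFinalizerAll.
func removeFinalizersWithPrefix(finalizers []string, prefix string) []string {
	var kept []string
	for _, f := range finalizers {
		if !strings.HasPrefix(f, prefix) {
			kept = append(kept, f)
		}
	}
	return kept
}

func main() {
	fs := []string{
		"resources-finalizer.argoproj.io",          // unrelated finalizer, kept
		"post-delete-finalizer.argoproj.io",        // bare post-delete finalizer
		"post-delete-finalizer.argoproj.io/cleanup", // stage-suffixed variant
	}
	fmt.Println(removeFinalizersWithPrefix(fs, "post-delete-finalizer.argoproj.io"))
}
```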


@@ -1,4 +1,4 @@
-- Health check copied from here: https://github.com/crossplane/docs/blob/bd701357e9d5eecf529a0b42f23a78850a6d1d87/content/master/guides/crossplane-with-argo-cd.md
-- Health check copied from here: https://github.com/crossplane/docs/blob/709889c5dbe6e5a2ea3dffd66fe276cf465b47b5/content/master/guides/crossplane-with-argo-cd.md
health_status = {
status = "Progressing",
@@ -18,9 +18,10 @@ local has_no_status = {
"Composition",
"CompositionRevision",
"DeploymentRuntimeConfig",
"ControllerConfig",
"ClusterProviderConfig",
"ProviderConfig",
"ProviderConfigUsage"
"ProviderConfigUsage",
"ControllerConfig" -- Added to ensure that healthcheck is backwards-compatible with Crossplane v1
}
if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.kind) then
health_status.status = "Healthy"
@@ -29,7 +30,7 @@ if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.
end
if obj.status == nil or next(obj.status) == nil or obj.status.conditions == nil then
if obj.kind == "ProviderConfig" and obj.status.users ~= nil then
if (obj.kind == "ProviderConfig" or obj.kind == "ClusterProviderConfig") and obj.status.users ~= nil then
health_status.status = "Healthy"
health_status.message = "Resource is in use."
return health_status
@@ -54,7 +55,7 @@ for i, condition in ipairs(obj.status.conditions) do
end
end
if contains({"Ready", "Healthy", "Offered", "Established"}, condition.type) then
if contains({"Ready", "Healthy", "Offered", "Established", "ValidPipeline", "RevisionHealthy"}, condition.type) then
if condition.status == "True" then
health_status.status = "Healthy"
health_status.message = "Resource is up-to-date."
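The hunk above widens the set of condition types the Crossplane health check treats as healthy, adding `ValidPipeline` and `RevisionHealthy` so Crossplane v2 resources such as `ConfigurationRevision` no longer stay `Progressing`. A simplified Go sketch of that condition scan (types and statuses modeled loosely, not the actual Lua runtime):

```go
package main

import "fmt"

// Condition is a minimal stand-in for a Crossplane status condition.
type Condition struct {
	Type   string
	Status string
}

// healthyTypes mirrors the expanded list in the updated Lua check.
var healthyTypes = map[string]bool{
	"Ready": true, "Healthy": true, "Offered": true,
	"Established": true, "ValidPipeline": true, "RevisionHealthy": true,
}

// healthStatus reports "Healthy" when any recognized condition is True,
// otherwise "Progressing" — a sketch of the Lua loop, not a full port.
func healthStatus(conds []Condition) string {
	for _, c := range conds {
		if healthyTypes[c.Type] && c.Status == "True" {
			return "Healthy"
		}
	}
	return "Progressing"
}

func main() {
	// Matches the ConfigurationRevision test fixture added in this change.
	fmt.Println(healthStatus([]Condition{{Type: "RevisionHealthy", Status: "True"}}))
}
```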


@@ -3,3 +3,7 @@ tests:
status: Healthy
message: "Resource is up-to-date."
inputPath: testdata/composition_healthy.yaml
- healthStatus:
status: Healthy
message: "Resource is up-to-date."
inputPath: testdata/configurationrevision_healthy.yaml


@@ -0,0 +1,22 @@
apiVersion: pkg.crossplane.io/v1
kind: ConfigurationRevision
metadata:
annotations:
meta.crossplane.io/license: Apache-2.0
meta.crossplane.io/maintainer: Upbound <support@upbound.io>
meta.crossplane.io/source: github.com/upbound/configuration-getting-started
name: upbound-configuration-getting-started-869bca254eb1
spec:
desiredState: Active
ignoreCrossplaneConstraints: false
image: xpkg.upbound.io/upbound/configuration-getting-started:v0.3.0
packagePullPolicy: IfNotPresent
revision: 1
skipDependencyResolution: false
status:
conditions:
- lastTransitionTime: "2025-09-29T18:06:40Z"
observedGeneration: 1
reason: HealthyPackageRevision
status: "True"
type: RevisionHealthy


@@ -1,4 +1,4 @@
-- Health check copied from here: https://github.com/crossplane/docs/blob/bd701357e9d5eecf529a0b42f23a78850a6d1d87/content/master/guides/crossplane-with-argo-cd.md
-- Health check copied from here: https://github.com/crossplane/docs/blob/709889c5dbe6e5a2ea3dffd66fe276cf465b47b5/content/master/guides/crossplane-with-argo-cd.md
health_status = {
status = "Progressing",
@@ -15,6 +15,7 @@ local function contains (table, val)
end
local has_no_status = {
"ClusterProviderConfig",
"ProviderConfig",
"ProviderConfigUsage"
}
@@ -26,7 +27,7 @@ if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.
end
if obj.status == nil or next(obj.status) == nil or obj.status.conditions == nil then
if obj.kind == "ProviderConfig" and obj.status.users ~= nil then
if (obj.kind == "ProviderConfig" or obj.kind == "ClusterProviderConfig") and obj.status.users ~= nil then
health_status.status = "Healthy"
health_status.message = "Resource is in use."
return health_status


@@ -7,3 +7,6 @@ discoveryTests:
- inputPath: testdata/external-secret.yaml
result:
- name: "refresh"
- inputPath: testdata/external-secret-refresh-policy.yaml
result:
- name: "refresh"


@@ -3,10 +3,11 @@ local actions = {}
local disable_refresh = false
local time_units = {"ns", "us", "µs", "ms", "s", "m", "h"}
local digits = obj.spec.refreshInterval
local policy = obj.spec.refreshPolicy
if digits ~= nil then
digits = tostring(digits)
for _, time_unit in ipairs(time_units) do
if digits == "0" or digits == "0" .. time_unit then
if (digits == "0" or digits == "0" .. time_unit) and policy ~= "OnChange" then
disable_refresh = true
break
end
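The Lua change above keeps the `refresh` action available when `spec.refreshPolicy` is `OnChange`, even though a zero `refreshInterval` would otherwise disable it. A hedged Go sketch of that predicate (field values follow the ExternalSecret spec shown in the test data; the function name is illustrative):

```go
package main

import "fmt"

// refreshDisabled returns true when the refresh action should be hidden:
// a zero interval (with or without a time-unit suffix) normally disables
// it, unless refreshPolicy is "OnChange".
func refreshDisabled(refreshInterval, refreshPolicy string) bool {
	zero := refreshInterval == "0"
	for _, unit := range []string{"ns", "us", "µs", "ms", "s", "m", "h"} {
		if refreshInterval == "0"+unit {
			zero = true
			break
		}
	}
	return zero && refreshPolicy != "OnChange"
}

func main() {
	fmt.Println(refreshDisabled("0", "OnChange")) // OnChange keeps refresh enabled
	fmt.Println(refreshDisabled("0s", ""))        // zero interval alone disables it
}
```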


@@ -0,0 +1,55 @@
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
creationTimestamp: '2021-11-16T21:59:33Z'
generation: 1
name: test-healthy
namespace: argocd
resourceVersion: '136487331'
selfLink: /apis/external-secrets.io/v1alpha1/namespaces/argocd/externalsecrets/test-healthy
uid: 1e754a7e-0781-4d57-932d-4651d5b19586
spec:
data:
- remoteRef:
key: secret/sa/example
property: api.address
secretKey: url
- remoteRef:
key: secret/sa/example
property: ca.crt
secretKey: ca
- remoteRef:
key: secret/sa/example
property: token
secretKey: token
refreshInterval: 0
refreshPolicy: OnChange
secretStoreRef:
kind: SecretStore
name: example
target:
creationPolicy: Owner
template:
data:
config: |
{
"bearerToken": "{{ .token | base64decode | toString }}",
"tlsClientConfig": {
"insecure": false,
"caData": "{{ .ca | toString }}"
}
}
name: cluster-test
server: '{{ .url | toString }}'
metadata:
labels:
argocd.argoproj.io/secret-type: cluster
status:
conditions:
- lastTransitionTime: '2021-11-16T21:59:34Z'
message: Secret was synced
reason: SecretSynced
status: 'True'
type: Ready
refreshTime: '2021-11-29T18:32:24Z'
syncedResourceVersion: 1-519a61da0dc68b2575b4f8efada70e42


@@ -1324,6 +1324,12 @@ func (s *Server) validateAndNormalizeApp(ctx context.Context, app *v1alpha1.Appl
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionUpdate, currApp.RBACName(s.ns)); err != nil {
return err
}
// Validate that the new project exists and the application is allowed to use it
newProj, err := s.getAppProject(ctx, app, log.WithFields(applog.GetAppLogFields(app)))
if err != nil {
return err
}
proj = newProj
}
if _, err := argo.GetDestinationCluster(ctx, app.Spec.Destination, s.db); err != nil {
@@ -2517,6 +2523,7 @@ func (s *Server) RunResourceAction(ctx context.Context, q *application.ResourceA
Kind: q.Kind,
Version: q.Version,
Group: q.Group,
Action: q.Action,
Project: q.Project,
}
return s.RunResourceActionV2(ctx, qV2)
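The one-line fix above forwards `Action` when the deprecated `RunResourceAction` delegates to `RunResourceActionV2`; previously the field was dropped, so V1 callers could never run a named action. A minimal sketch of that forwarding with hypothetical request types standing in for the generated gRPC messages:

```go
package main

import "fmt"

// Hypothetical stand-ins for the V1 and V2 resource-action requests.
type runRequest struct{ Name, Kind, Version, Group, Action, Project *string }
type runRequestV2 struct{ Name, Kind, Version, Group, Action, Project *string }

// toV2 copies every field across — including Action, whose omission
// was the bug this hunk fixes.
func toV2(q *runRequest) *runRequestV2 {
	return &runRequestV2{
		Name: q.Name, Kind: q.Kind, Version: q.Version,
		Group: q.Group, Action: q.Action, Project: q.Project,
	}
}

func main() {
	action := "restart"
	v2 := toV2(&runRequest{Action: &action})
	fmt.Println(*v2.Action)
}
```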


@@ -988,7 +988,21 @@ func TestNoAppEnumeration(t *testing.T) {
assert.EqualError(t, err, "rpc error: code = NotFound desc = applications.argoproj.io \"doest-not-exist\" not found", "when the request specifies a project, we can return the standard k8s error message")
})
//nolint:staticcheck,SA1019 // RunResourceAction is deprecated, but we still need to support it for backward compatibility.
t.Run("RunResourceAction", func(t *testing.T) {
_, err := appServer.RunResourceAction(adminCtx, &application.ResourceActionRunRequest{Name: ptr.To("test"), ResourceName: ptr.To("test"), Group: ptr.To("apps"), Kind: ptr.To("Deployment"), Namespace: ptr.To("test"), Action: ptr.To("restart")})
require.NoError(t, err)
_, err = appServer.RunResourceAction(noRoleCtx, &application.ResourceActionRunRequest{Name: ptr.To("test")})
require.EqualError(t, err, common.PermissionDeniedAPIError.Error(), "error message must be _only_ the permission error, to avoid leaking information about app existence")
_, err = appServer.RunResourceAction(noRoleCtx, &application.ResourceActionRunRequest{Group: ptr.To("argoproj.io"), Kind: ptr.To("Application"), Name: ptr.To("test")})
require.EqualError(t, err, common.PermissionDeniedAPIError.Error(), "error message must be _only_ the permission error, to avoid leaking information about app existence")
_, err = appServer.RunResourceAction(adminCtx, &application.ResourceActionRunRequest{Name: ptr.To("doest-not-exist")})
require.EqualError(t, err, common.PermissionDeniedAPIError.Error(), "error message must be _only_ the permission error, to avoid leaking information about app existence")
_, err = appServer.RunResourceAction(adminCtx, &application.ResourceActionRunRequest{Name: ptr.To("doest-not-exist"), Project: ptr.To("test")})
assert.EqualError(t, err, "rpc error: code = NotFound desc = applications.argoproj.io \"doest-not-exist\" not found", "when the request specifies a project, we can return the standard k8s error message")
})
t.Run("RunResourceActionV2", func(t *testing.T) {
_, err := appServer.RunResourceActionV2(adminCtx, &application.ResourceActionRunRequestV2{Name: ptr.To("test"), ResourceName: ptr.To("test"), Group: ptr.To("apps"), Kind: ptr.To("Deployment"), Namespace: ptr.To("test"), Action: ptr.To("restart")})
require.NoError(t, err)
_, err = appServer.RunResourceActionV2(noRoleCtx, &application.ResourceActionRunRequestV2{Name: ptr.To("test")})
@@ -1511,14 +1525,130 @@ func TestCreateAppWithOperation(t *testing.T) {
}
func TestUpdateApp(t *testing.T) {
testApp := newTestApp()
appServer := newTestAppServer(t, testApp)
testApp.Spec.Project = ""
app, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: testApp,
t.Parallel()
t.Run("Same spec", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
appServer := newTestAppServer(t, testApp)
testApp.Spec.Project = ""
app, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: testApp,
})
require.NoError(t, err)
assert.Equal(t, "default", app.Spec.Project)
})
t.Run("Invalid existing app can be updated", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
testApp.Spec.Destination.Server = "https://invalid-cluster"
appServer := newTestAppServer(t, testApp)
updateApp := newTestAppWithDestName()
updateApp.TypeMeta = testApp.TypeMeta
updateApp.Spec.Source.Name = "updated"
app, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: updateApp,
})
require.NoError(t, err)
require.NotNil(t, app)
assert.Equal(t, "updated", app.Spec.Source.Name)
})
t.Run("Can update application project from invalid", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
restrictedProj := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "restricted-proj", Namespace: "default"},
Spec: v1alpha1.AppProjectSpec{
SourceRepos: []string{"not-your-repo"},
Destinations: []v1alpha1.ApplicationDestination{{Server: "*", Namespace: "not-your-namespace"}},
},
}
testApp.Spec.Project = restrictedProj.Name
appServer := newTestAppServer(t, testApp, restrictedProj)
updateApp := newTestAppWithDestName()
updateApp.TypeMeta = testApp.TypeMeta
updateApp.Spec.Project = "my-proj"
app, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: updateApp,
})
require.NoError(t, err)
require.NotNil(t, app)
assert.Equal(t, "my-proj", app.Spec.Project)
})
t.Run("Cannot update application project to invalid", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
restrictedProj := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "restricted-proj", Namespace: "default"},
Spec: v1alpha1.AppProjectSpec{
SourceRepos: []string{"not-your-repo"},
Destinations: []v1alpha1.ApplicationDestination{{Server: "*", Namespace: "not-your-namespace"}},
},
}
appServer := newTestAppServer(t, testApp, restrictedProj)
updateApp := newTestAppWithDestName()
updateApp.TypeMeta = testApp.TypeMeta
updateApp.Spec.Project = restrictedProj.Name
_, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: updateApp,
})
require.Error(t, err)
require.ErrorContains(t, err, "application repo https://github.com/argoproj/argocd-example-apps.git is not permitted in project 'restricted-proj'")
require.ErrorContains(t, err, "application destination server 'fake-cluster' and namespace 'fake-dest-ns' do not match any of the allowed destinations in project 'restricted-proj'")
})
t.Run("Cannot update application project to inexisting", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
appServer := newTestAppServer(t, testApp)
updateApp := newTestAppWithDestName()
updateApp.TypeMeta = testApp.TypeMeta
updateApp.Spec.Project = "i-do-not-exist"
_, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: updateApp,
})
require.Error(t, err)
require.ErrorContains(t, err, "app is not allowed in project \"i-do-not-exist\", or the project does not exist")
})
t.Run("Can update application project with project argument", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
appServer := newTestAppServer(t, testApp)
updateApp := newTestAppWithDestName()
updateApp.TypeMeta = testApp.TypeMeta
updateApp.Spec.Project = "my-proj"
app, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: updateApp,
Project: ptr.To("default"),
})
require.NoError(t, err)
require.NotNil(t, app)
assert.Equal(t, "my-proj", app.Spec.Project)
})
t.Run("Existing label and annotations are replaced", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
testApp.Annotations = map[string]string{"test": "test-value", "update": "old"}
testApp.Labels = map[string]string{"test": "test-value", "update": "old"}
appServer := newTestAppServer(t, testApp)
updateApp := newTestAppWithDestName()
updateApp.TypeMeta = testApp.TypeMeta
updateApp.Annotations = map[string]string{"update": "new"}
updateApp.Labels = map[string]string{"update": "new"}
app, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: updateApp,
})
require.NoError(t, err)
require.NotNil(t, app)
assert.Len(t, app.Annotations, 1)
assert.Equal(t, "new", app.GetAnnotations()["update"])
assert.Len(t, app.Labels, 1)
assert.Equal(t, "new", app.GetLabels()["update"])
})
require.NoError(t, err)
assert.Equal(t, "default", app.Spec.Project)
}
func TestUpdateAppSpec(t *testing.T) {


@@ -200,7 +200,7 @@ func (s *Server) Create(ctx context.Context, q *applicationset.ApplicationSetCre
}
if q.GetDryRun() {
apps, err := s.generateApplicationSetApps(ctx, log.WithField("applicationset", appset.Name), *appset, namespace)
apps, err := s.generateApplicationSetApps(ctx, log.WithField("applicationset", appset.Name), *appset)
if err != nil {
return nil, fmt.Errorf("unable to generate Applications of ApplicationSet: %w", err)
}
@@ -260,12 +260,12 @@ func (s *Server) Create(ctx context.Context, q *applicationset.ApplicationSetCre
return updated, nil
}
func (s *Server) generateApplicationSetApps(ctx context.Context, logEntry *log.Entry, appset v1alpha1.ApplicationSet, namespace string) ([]v1alpha1.Application, error) {
func (s *Server) generateApplicationSetApps(ctx context.Context, logEntry *log.Entry, appset v1alpha1.ApplicationSet) ([]v1alpha1.Application, error) {
argoCDDB := s.db
scmConfig := generators.NewSCMConfig(s.ScmRootCAPath, s.AllowedScmProviders, s.EnableScmProviders, s.EnableGitHubAPIMetrics, github_app.NewAuthCredentials(argoCDDB.(db.RepoCredsDB)), true)
argoCDService := services.NewArgoCDService(s.db, s.GitSubmoduleEnabled, s.repoClientSet, s.EnableNewGitFileGlobbing)
appSetGenerators := generators.GetGenerators(ctx, s.client, s.k8sClient, namespace, argoCDService, s.dynamicClient, scmConfig)
appSetGenerators := generators.GetGenerators(ctx, s.client, s.k8sClient, s.ns, argoCDService, s.dynamicClient, scmConfig)
apps, _, err := appsettemplate.GenerateApplications(logEntry, appset, appSetGenerators, &appsetutils.Render{}, s.client)
if err != nil {
@@ -363,11 +363,15 @@ func (s *Server) Generate(ctx context.Context, q *applicationset.ApplicationSetG
if appset == nil {
return nil, errors.New("error creating ApplicationSets: ApplicationSets is nil in request")
}
namespace := s.appsetNamespaceOrDefault(appset.Namespace)
// The RBAC check needs to be performed against the appset namespace.
// However, when generating params, the server namespace needs to be
// passed.
namespace := s.appsetNamespaceOrDefault(appset.Namespace)
if !s.isNamespaceEnabled(namespace) {
return nil, security.NamespaceNotPermittedError(namespace)
}
projectName, err := s.validateAppSet(appset)
if err != nil {
return nil, fmt.Errorf("error validating ApplicationSets: %w", err)
@@ -380,7 +384,16 @@ func (s *Server) Generate(ctx context.Context, q *applicationset.ApplicationSetG
logger := log.New()
logger.SetOutput(logs)
apps, err := s.generateApplicationSetApps(ctx, logger.WithField("applicationset", appset.Name), *appset, namespace)
// The server namespace is used here, since this is the exact
// namespace used to generate parameters (especially for the Git
// generator).
//
// In the case of the Git generator, if the namespace were set to the
// appset namespace, we would look up the project in the appset
// namespace, which leads to an error when generating params for an
// appset using the "appsets in any namespace" feature.
// See https://github.com/argoproj/argo-cd/issues/22942
apps, err := s.generateApplicationSetApps(ctx, logger.WithField("applicationset", appset.Name), *appset)
if err != nil {
return nil, fmt.Errorf("unable to generate Applications of ApplicationSet: %w\n%s", err, logs.String())
}
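The comments in the hunk above distinguish two namespaces with different jobs. A minimal sketch of that split, with illustrative names rather than the actual Argo CD API:

```go
package main

import "fmt"

// resolveNamespaces sketches the namespace split described above: the
// RBAC check must target the appset's own namespace (falling back to a
// default), while parameter generation always runs against the server
// (control-plane) namespace, otherwise the Git generator would look up
// the AppProject in the appset namespace and fail.
func resolveNamespaces(appsetNamespace, serverNamespace, defaultNamespace string) (rbacNamespace, generatorNamespace string) {
	rbacNamespace = appsetNamespace
	if rbacNamespace == "" {
		rbacNamespace = defaultNamespace
	}
	return rbacNamespace, serverNamespace
}

func main() {
	// An appset living in an external namespace is RBAC-checked there,
	// but its generators still run in the server namespace.
	rbac, gen := resolveNamespaces("external-namespace", "argocd", "argocd")
	fmt.Println(rbac, gen)
}
```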

View File

@@ -4,6 +4,9 @@ import (
"sort"
"testing"
"sigs.k8s.io/controller-runtime/pkg/client"
cr_fake "sigs.k8s.io/controller-runtime/pkg/client/fake"
"github.com/argoproj/gitops-engine/pkg/health"
"github.com/argoproj/pkg/v2/sync"
"github.com/stretchr/testify/assert"
@@ -50,7 +53,7 @@ func fakeCluster() *appsv1.Cluster {
}
// return an ApplicationSet Server which returns fake data
func newTestAppSetServer(t *testing.T, objects ...runtime.Object) *Server {
func newTestAppSetServer(t *testing.T, objects ...client.Object) *Server {
t.Helper()
f := func(enf *rbac.Enforcer) {
_ = enf.SetBuiltinPolicy(assets.BuiltinPolicyCSV)
@@ -61,7 +64,7 @@ func newTestAppSetServer(t *testing.T, objects ...runtime.Object) *Server {
}
// return an ApplicationSet Server which returns fake data
func newTestNamespacedAppSetServer(t *testing.T, objects ...runtime.Object) *Server {
func newTestNamespacedAppSetServer(t *testing.T, objects ...client.Object) *Server {
t.Helper()
f := func(enf *rbac.Enforcer) {
_ = enf.SetBuiltinPolicy(assets.BuiltinPolicyCSV)
@@ -71,7 +74,7 @@ func newTestNamespacedAppSetServer(t *testing.T, objects ...runtime.Object) *Ser
return newTestAppSetServerWithEnforcerConfigure(t, f, scopedNamespaces, objects...)
}
func newTestAppSetServerWithEnforcerConfigure(t *testing.T, f func(*rbac.Enforcer), namespace string, objects ...runtime.Object) *Server {
func newTestAppSetServerWithEnforcerConfigure(t *testing.T, f func(*rbac.Enforcer), namespace string, objects ...client.Object) *Server {
t.Helper()
kubeclientset := fake.NewClientset(&corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
@@ -115,7 +118,11 @@ func newTestAppSetServerWithEnforcerConfigure(t *testing.T, f func(*rbac.Enforce
objects = append(objects, defaultProj, myProj)
fakeAppsClientset := apps.NewSimpleClientset(objects...)
runtimeObjects := make([]runtime.Object, len(objects))
for i := range objects {
runtimeObjects[i] = objects[i]
}
fakeAppsClientset := apps.NewSimpleClientset(runtimeObjects...)
factory := appinformer.NewSharedInformerFactoryWithOptions(fakeAppsClientset, 0, appinformer.WithNamespace(namespace), appinformer.WithTweakListOptions(func(_ *metav1.ListOptions) {}))
fakeProjLister := factory.Argoproj().V1alpha1().AppProjects().Lister().AppProjects(testNamespace)
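The element-wise loop in the hunk above is needed because Go does not allow converting a slice of one interface type to a slice of another, even when every element satisfies both. A standalone sketch of that rule, using toy types:

```go
package main

import "fmt"

// animal stands in for runtime.Object; dog stands in for a concrete
// type that satisfies it (names are illustrative only).
type animal interface{ sound() string }

type dog struct{}

func (dog) sound() string { return "woof" }

func main() {
	dogs := []dog{{}, {}}
	// animals := []animal(dogs) // compile error: cannot convert
	// An explicit element-wise copy is required, as in the test helper above.
	animals := make([]animal, len(dogs))
	for i := range dogs {
		animals[i] = dogs[i]
	}
	fmt.Println(len(animals), animals[0].sound())
}
```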
@@ -138,6 +145,13 @@ func newTestAppSetServerWithEnforcerConfigure(t *testing.T, f func(*rbac.Enforce
panic("Timed out waiting for caches to sync")
}
scheme := runtime.NewScheme()
err = appsv1.AddToScheme(scheme)
require.NoError(t, err)
err = corev1.AddToScheme(scheme)
require.NoError(t, err)
crClient := cr_fake.NewClientBuilder().WithScheme(scheme).WithObjects(objects...).Build()
projInformer := factory.Argoproj().V1alpha1().AppProjects().Informer()
go projInformer.Run(ctx.Done())
if !k8scache.WaitForCacheSync(ctx.Done(), projInformer.HasSynced) {
@@ -148,7 +162,7 @@ func newTestAppSetServerWithEnforcerConfigure(t *testing.T, f func(*rbac.Enforce
db,
kubeclientset,
nil,
nil,
crClient,
enforcer,
nil,
fakeAppsClientset,
@@ -640,3 +654,54 @@ func TestResourceTree(t *testing.T) {
assert.EqualError(t, err, "namespace 'NOT-ALLOWED' is not permitted")
})
}
func TestAppSet_Generate_Cluster(t *testing.T) {
appSet1 := newTestAppSet(func(appset *appsv1.ApplicationSet) {
appset.Name = "AppSet1"
appset.Spec.Template.Name = "{{name}}"
appset.Spec.Generators = []appsv1.ApplicationSetGenerator{
{
Clusters: &appsv1.ClusterGenerator{},
},
}
})
t.Run("Generate in default namespace", func(t *testing.T) {
appSetServer := newTestAppSetServer(t, appSet1)
appsetQuery := applicationset.ApplicationSetGenerateRequest{
ApplicationSet: appSet1,
}
res, err := appSetServer.Generate(t.Context(), &appsetQuery)
require.NoError(t, err)
require.Len(t, res.Applications, 2)
assert.Equal(t, "fake-cluster", res.Applications[0].Name)
assert.Equal(t, "in-cluster", res.Applications[1].Name)
})
t.Run("Generate in different namespace", func(t *testing.T) {
appSetServer := newTestAppSetServer(t, appSet1)
appSet1Ns := appSet1.DeepCopy()
appSet1Ns.Namespace = "external-namespace"
appsetQuery := applicationset.ApplicationSetGenerateRequest{ApplicationSet: appSet1Ns}
res, err := appSetServer.Generate(t.Context(), &appsetQuery)
require.NoError(t, err)
require.Len(t, res.Applications, 2)
assert.Equal(t, "fake-cluster", res.Applications[0].Name)
assert.Equal(t, "in-cluster", res.Applications[1].Name)
})
t.Run("Generate in not allowed namespace", func(t *testing.T) {
appSetServer := newTestAppSetServer(t, appSet1)
appSet1Ns := appSet1.DeepCopy()
appSet1Ns.Namespace = "NOT-ALLOWED"
appsetQuery := applicationset.ApplicationSetGenerateRequest{ApplicationSet: appSet1Ns}
_, err := appSetServer.Generate(t.Context(), &appsetQuery)
assert.EqualError(t, err, "namespace 'NOT-ALLOWED' is not permitted")
})
}

View File

@@ -420,7 +420,8 @@ func (s *Server) Update(ctx context.Context, q *project.ProjectUpdateRequest) (*
destCluster, err := argo.GetDestinationCluster(ctx, a.Spec.Destination, s.db)
if err != nil {
if err.Error() != argo.ErrDestinationMissing {
return nil, err
// If cluster is not found, we should discard this app, as it's most likely already in error
continue
}
invalidDstCount++
}

View File

@@ -743,6 +743,35 @@ p, role:admin, projects, update, *, allow`)
_, err := projectServer.GetSyncWindowsState(ctx, &project.SyncWindowsQuery{Name: projectWithSyncWindows.Name})
assert.EqualError(t, err, "rpc error: code = PermissionDenied desc = permission denied: projects, get, test")
})
t.Run("TestAddSyncWindowWhenAnAppReferencesAClusterThatDoesNotExist", func(t *testing.T) {
_ = enforcer.SetBuiltinPolicy(`p, role:admin, projects, get, *, allow
p, role:admin, projects, update, *, allow`)
sessionMgr := session.NewSessionManager(settingsMgr, test.NewFakeProjLister(), "", nil, session.NewUserStateStorage(nil))
projectWithAppWithInvalidCluster := existingProj.DeepCopy()
argoDB := db.NewDB("default", settingsMgr, kubeclientset)
invalidApp := v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{Name: "test-invalid", Namespace: "default"},
Spec: v1alpha1.ApplicationSpec{Source: &v1alpha1.ApplicationSource{}, Project: "test", Destination: v1alpha1.ApplicationDestination{Namespace: "ns3", Server: "https://server4"}},
}
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(projectWithAppWithInvalidCluster, &invalidApp), enforcer, sync.NewKeyLock(), sessionMgr, nil, projInformer, settingsMgr, argoDB, testEnableEventList)
// Add sync window
syncWindow := v1alpha1.SyncWindow{
Kind: "deny",
Schedule: "* * * * *",
Duration: "1h",
Applications: []string{"*"},
Clusters: []string{"*"},
}
projectWithAppWithInvalidCluster.Spec.SyncWindows = append(projectWithAppWithInvalidCluster.Spec.SyncWindows, &syncWindow)
res, err := projectServer.Update(ctx, &project.ProjectUpdateRequest{
Project: projectWithAppWithInvalidCluster,
})
require.NoError(t, err)
assert.Len(t, res.Spec.SyncWindows, 1)
})
}
func newEnforcer(kubeclientset *fake.Clientset) *rbac.Enforcer {

View File

@@ -1238,7 +1238,7 @@ func (server *ArgoCDServer) newHTTPServer(ctx context.Context, port int, grpcWeb
// Webhook handler for git events (note: cache timeouts are hardcoded because the API server does not write to the cache and does not really use them)
argoDB := db.NewDB(server.Namespace, server.settingsMgr, server.KubeClientset)
acdWebhookHandler := webhook.NewHandler(server.Namespace, server.ApplicationNamespaces, server.WebhookParallelism, server.AppClientset, server.settings, server.settingsMgr, server.RepoServerCache, server.Cache, argoDB, server.settingsMgr.GetMaxWebhookPayloadSize())
acdWebhookHandler := webhook.NewHandler(server.Namespace, server.ApplicationNamespaces, server.WebhookParallelism, server.AppClientset, server.appLister, server.settings, server.settingsMgr, server.RepoServerCache, server.Cache, argoDB, server.settingsMgr.GetMaxWebhookPayloadSize())
mux.HandleFunc("/api/webhook", acdWebhookHandler.Handler)

View File

@@ -212,7 +212,7 @@ func PushChartToOCIRegistry(t *testing.T, chartPathName, chartName, chartVersion
require.NoError(t, err1)
defer func() { _ = os.RemoveAll(tempDest) }()
chartAbsPath, err2 := filepath.Abs("./testdata/" + chartPathName)
chartAbsPath, err2 := filepath.Abs("./" + chartPathName)
require.NoError(t, err2)
t.Setenv("HELM_EXPERIMENTAL_OCI", "1")
@@ -236,7 +236,7 @@ func PushChartToAuthenticatedOCIRegistry(t *testing.T, chartPathName, chartName,
require.NoError(t, err1)
defer func() { _ = os.RemoveAll(tempDest) }()
chartAbsPath, err2 := filepath.Abs("./testdata/" + chartPathName)
chartAbsPath, err2 := filepath.Abs("./" + chartPathName)
require.NoError(t, err2)
t.Setenv("HELM_EXPERIMENTAL_OCI", "1")
@@ -274,13 +274,13 @@ func PushChartToAuthenticatedOCIRegistry(t *testing.T, chartPathName, chartName,
// PushImageToOCIRegistry adds a helm chart to helm OCI registry
func PushImageToOCIRegistry(t *testing.T, pathName, tag string) {
t.Helper()
imagePath := "./testdata/" + pathName
imagePath := "./" + pathName
errors.NewHandler(t).FailOnErr(fixture.Run(
imagePath,
"oras",
"push",
fmt.Sprintf("%s:%s", fmt.Sprintf("%s/%s", strings.TrimPrefix(fixture.OCIHostURL, "oci://"), pathName), tag),
fmt.Sprintf("%s:%s", fmt.Sprintf("%s/%s", strings.TrimPrefix(fixture.OCIHostURL, "oci://"), filepath.Base(pathName)), tag),
".",
))
}
@@ -288,13 +288,13 @@ func PushImageToOCIRegistry(t *testing.T, pathName, tag string) {
// PushImageToAuthenticatedOCIRegistry pushes an image to an authenticated OCI registry
func PushImageToAuthenticatedOCIRegistry(t *testing.T, pathName, tag string) {
t.Helper()
imagePath := "./testdata/" + pathName
imagePath := "./" + pathName
errors.NewHandler(t).FailOnErr(fixture.Run(
imagePath,
"oras",
"push",
fmt.Sprintf("%s:%s", fmt.Sprintf("%s/%s", strings.TrimPrefix(fixture.AuthenticatedOCIHostURL, "oci://"), pathName), tag),
fmt.Sprintf("%s:%s", fmt.Sprintf("%s/%s", strings.TrimPrefix(fixture.AuthenticatedOCIHostURL, "oci://"), filepath.Base(pathName)), tag),
".",
))
}

View File

@@ -552,7 +552,7 @@ func TestHelmRepoDiffLocal(t *testing.T) {
func TestHelmOCIRegistry(t *testing.T) {
Given(t).
PushChartToOCIRegistry("helm-values", "helm-values", "1.0.0").
PushChartToOCIRegistry("testdata/helm-values", "helm-values", "1.0.0").
HelmOCIRepoAdded("myrepo").
RepoURLType(fixture.RepoURLTypeHelmOCI).
Chart("helm-values").
@@ -570,7 +570,7 @@ func TestHelmOCIRegistry(t *testing.T) {
func TestGitWithHelmOCIRegistryDependencies(t *testing.T) {
Given(t).
PushChartToOCIRegistry("helm-values", "helm-values", "1.0.0").
PushChartToOCIRegistry("testdata/helm-values", "helm-values", "1.0.0").
HelmOCIRepoAdded("myrepo").
Path("helm-oci-with-dependencies").
When().
@@ -586,8 +586,8 @@ func TestGitWithHelmOCIRegistryDependencies(t *testing.T) {
func TestHelmOCIRegistryWithDependencies(t *testing.T) {
Given(t).
PushChartToOCIRegistry("helm-values", "helm-values", "1.0.0").
PushChartToOCIRegistry("helm-oci-with-dependencies", "helm-oci-with-dependencies", "1.0.0").
PushChartToOCIRegistry("testdata/helm-values", "helm-values", "1.0.0").
PushChartToOCIRegistry("testdata/helm-oci-with-dependencies", "helm-oci-with-dependencies", "1.0.0").
HelmOCIRepoAdded("myrepo").
RepoURLType(fixture.RepoURLTypeHelmOCI).
Chart("helm-oci-with-dependencies").
@@ -605,7 +605,7 @@ func TestHelmOCIRegistryWithDependencies(t *testing.T) {
func TestTemplatesGitWithHelmOCIDependencies(t *testing.T) {
Given(t).
PushChartToOCIRegistry("helm-values", "helm-values", "1.0.0").
PushChartToOCIRegistry("testdata/helm-values", "helm-values", "1.0.0").
HelmoOCICredentialsWithoutUserPassAdded().
Path("helm-oci-with-dependencies").
When().
@@ -621,8 +621,8 @@ func TestTemplatesGitWithHelmOCIDependencies(t *testing.T) {
func TestTemplatesHelmOCIWithDependencies(t *testing.T) {
Given(t).
PushChartToOCIRegistry("helm-values", "helm-values", "1.0.0").
PushChartToOCIRegistry("helm-oci-with-dependencies", "helm-oci-with-dependencies", "1.0.0").
PushChartToOCIRegistry("testdata/helm-values", "helm-values", "1.0.0").
PushChartToOCIRegistry("testdata/helm-oci-with-dependencies", "helm-oci-with-dependencies", "1.0.0").
HelmoOCICredentialsWithoutUserPassAdded().
RepoURLType(fixture.RepoURLTypeHelmOCI).
Chart("helm-oci-with-dependencies").

View File

@@ -14,7 +14,7 @@ import (
func TestOCIImage(t *testing.T) {
Given(t).
RepoURLType(fixture.RepoURLTypeOCI).
PushImageToOCIRegistry("guestbook", "1.0.0").
PushImageToOCIRegistry("testdata/guestbook", "1.0.0").
OCIRepoAdded("guestbook", "guestbook").
Revision("1.0.0").
OCIRegistry(fixture.OCIHostURL).
@@ -37,8 +37,8 @@ func TestOCIImage(t *testing.T) {
func TestOCIWithOCIHelmRegistryDependencies(t *testing.T) {
Given(t).
RepoURLType(fixture.RepoURLTypeOCI).
PushChartToOCIRegistry("helm-values", "helm-values", "1.0.0").
PushImageToOCIRegistry("helm-oci-with-dependencies", "1.0.0").
PushChartToOCIRegistry("testdata/helm-values", "helm-values", "1.0.0").
PushImageToOCIRegistry("testdata/helm-oci-with-dependencies", "1.0.0").
OCIRegistry(fixture.OCIHostURL).
OCIRepoAdded("helm-oci-with-dependencies", "helm-oci-with-dependencies").
OCIRegistryPath("helm-oci-with-dependencies").
@@ -58,8 +58,8 @@ func TestOCIWithOCIHelmRegistryDependencies(t *testing.T) {
func TestOCIWithAuthedOCIHelmRegistryDeps(t *testing.T) {
Given(t).
RepoURLType(fixture.RepoURLTypeOCI).
PushChartToAuthenticatedOCIRegistry("helm-values", "helm-values", "1.0.0").
PushImageToOCIRegistry("helm-oci-authed-with-dependencies", "1.0.0").
PushChartToAuthenticatedOCIRegistry("testdata/helm-values", "helm-values", "1.0.0").
PushImageToOCIRegistry("testdata/helm-oci-authed-with-dependencies", "1.0.0").
OCIRepoAdded("helm-oci-authed-with-dependencies", "helm-oci-authed-with-dependencies").
AuthenticatedOCIRepoAdded("helm-values", "myrepo/helm-values").
OCIRegistry(fixture.OCIHostURL).
@@ -76,3 +76,19 @@ func TestOCIWithAuthedOCIHelmRegistryDeps(t *testing.T) {
Expect(HealthIs(health.HealthStatusHealthy)).
Expect(SyncStatusIs(SyncStatusCodeSynced))
}
func TestOCIImageWithOutOfBoundsSymlink(t *testing.T) {
Given(t).
RepoURLType(fixture.RepoURLTypeOCI).
PushImageToOCIRegistry("testdata3/symlink-out-of-bounds", "1.0.0").
OCIRepoAdded("symlink-out-of-bounds", "symlink-out-of-bounds").
Revision("1.0.0").
OCIRegistry(fixture.OCIHostURL).
OCIRegistryPath("symlink-out-of-bounds").
Path(".").
When().
IgnoreErrors().
CreateApp().
Then().
Expect(Error("", "could not decompress layer: illegal filepath in symlink"))
}

View File

@@ -0,0 +1 @@
Out-of-bounds symlinks can negatively affect the other test cases, so they are kept in a separate testdata directory.

View File

@@ -0,0 +1 @@
..

View File

@@ -58,16 +58,12 @@ const sectionHeader = (info: SectionInfo, onClick?: () => any) => {
);
};
const hasRollingSyncEnabled = (application: models.Application): boolean => {
return application.metadata.ownerReferences?.some(ref => ref.kind === 'ApplicationSet') || false;
const getApplicationSetOwnerRef = (application: models.Application) => {
return application.metadata.ownerReferences?.find(ref => ref.kind === 'ApplicationSet');
};
const ProgressiveSyncStatus = ({application}: {application: models.Application}) => {
if (!hasRollingSyncEnabled(application)) {
return null;
}
const appSetRef = application.metadata.ownerReferences.find(ref => ref.kind === 'ApplicationSet');
const appSetRef = getApplicationSetOwnerRef(application);
if (!appSetRef) {
return null;
}
@@ -75,12 +71,46 @@ const ProgressiveSyncStatus = ({application}: {application: models.Application})
return (
<DataLoader
input={application}
errorRenderer={() => {
// For any errors, show a minimal error state
return (
<div className='application-status-panel__item'>
{sectionHeader({
title: 'PROGRESSIVE SYNC',
helpContent: 'Shows the current status of progressive sync for applications managed by an ApplicationSet.'
})}
<div className='application-status-panel__item-value'>
<i className='fa fa-exclamation-triangle' style={{color: COLORS.sync.unknown}} /> Error
</div>
<div className='application-status-panel__item-name'>Unable to load Progressive Sync status</div>
</div>
);
}}
load={async () => {
const appSet = await services.applications.getApplicationSet(appSetRef.name, application.metadata.namespace);
return appSet?.spec?.strategy?.type === 'RollingSync' ? appSet : null;
}}>
{(appSet: models.ApplicationSet) => {
// Check if user has permission to read ApplicationSets
const canReadApplicationSets = await services.accounts.canI('applicationsets', 'get', application.spec.project + '/' + application.metadata.name);
// Find ApplicationSet by searching all namespaces dynamically
const appSetList = await services.applications.listApplicationSets();
const appSet = appSetList.items?.find(item => item.metadata.name === appSetRef.name);
if (!appSet) {
throw new Error(`ApplicationSet ${appSetRef.name} not found in any namespace`);
}
return {canReadApplicationSets, appSet};
}}>
{({canReadApplicationSets, appSet}: {canReadApplicationSets: boolean; appSet: models.ApplicationSet}) => {
// Hide panel if: Progressive Sync disabled, no permission, or not RollingSync strategy
if (!appSet.status?.applicationStatus || appSet?.spec?.strategy?.type !== 'RollingSync' || !canReadApplicationSets) {
return null;
}
// Get the current application's status from the ApplicationSet applicationStatus
const appResource = appSet.status?.applicationStatus?.find(status => status.application === application.metadata.name);
// If no application status is found, show a default status
if (!appResource) {
return (
<div className='application-status-panel__item'>
{sectionHeader({
@@ -88,14 +118,15 @@ const ProgressiveSyncStatus = ({application}: {application: models.Application})
helpContent: 'Shows the current status of progressive sync for applications managed by an ApplicationSet with RollingSync strategy.'
})}
<div className='application-status-panel__item-value'>
<i className='fa fa-question-circle' style={{color: COLORS.sync.unknown}} /> Unknown
<i className='fa fa-clock' style={{color: COLORS.sync.out_of_sync}} /> Waiting
</div>
<div className='application-status-panel__item-name'>Application status not yet available from ApplicationSet</div>
</div>
);
}
// Get the current application's status from the ApplicationSet resources
const appResource = appSet.status?.applicationStatus?.find(status => status.application === application.metadata.name);
// Get last transition time from application status
const lastTransitionTime = appResource?.lastTransitionTime;
return (
<div className='application-status-panel__item'>
@@ -106,12 +137,14 @@ const ProgressiveSyncStatus = ({application}: {application: models.Application})
<div className='application-status-panel__item-value' style={{color: getProgressiveSyncStatusColor(appResource.status)}}>
{getProgressiveSyncStatusIcon({status: appResource.status})}&nbsp;{appResource.status}
</div>
<div className='application-status-panel__item-value'>Wave: {appResource.step}</div>
<div className='application-status-panel__item-name' style={{marginBottom: '0.5em'}}>
Last Transition: <br />
<Timestamp date={appResource.lastTransitionTime} />
</div>
{appResource.message && <div className='application-status-panel__item-name'>{appResource.message}</div>}
{appResource?.step && <div className='application-status-panel__item-value'>Wave: {appResource.step}</div>}
{lastTransitionTime && (
<div className='application-status-panel__item-name' style={{marginBottom: '0.5em'}}>
Last Transition: <br />
<Timestamp date={lastTransitionTime} />
</div>
)}
{appResource?.message && <div className='application-status-panel__item-name'>{appResource.message}</div>}
</div>
);
}}
@@ -123,7 +156,9 @@ export const ApplicationStatusPanel = ({application, showDiff, showOperation, sh
const [showProgressiveSync, setShowProgressiveSync] = React.useState(false);
React.useEffect(() => {
setShowProgressiveSync(hasRollingSyncEnabled(application));
// Only show Progressive Sync if the application has an ApplicationSet parent
// The actual strategy validation will be done inside ProgressiveSyncStatus component
setShowProgressiveSync(!!getApplicationSetOwnerRef(application));
}, [application]);
const today = new Date();

View File

@@ -1718,6 +1718,10 @@ export const getProgressiveSyncStatusIcon = ({status, isButton}: {status: string
return {icon: 'fa-clock', color: COLORS.sync.out_of_sync};
case 'Error':
return {icon: 'fa-times-circle', color: COLORS.health.degraded};
case 'Synced':
return {icon: 'fa-check-circle', color: COLORS.sync.synced};
case 'OutOfSync':
return {icon: 'fa-exclamation-triangle', color: COLORS.sync.out_of_sync};
default:
return {icon: 'fa-question-circle', color: COLORS.sync.unknown};
}
@@ -1740,6 +1744,10 @@ export const getProgressiveSyncStatusColor = (status: string): string => {
return COLORS.health.healthy;
case 'Error':
return COLORS.health.degraded;
case 'Synced':
return COLORS.sync.synced;
case 'OutOfSync':
return COLORS.sync.out_of_sync;
default:
return COLORS.sync.unknown;
}

View File

@@ -1117,3 +1117,8 @@ export interface ApplicationSet {
resources?: ApplicationSetResource[];
};
}
export interface ApplicationSetList {
metadata: models.ListMeta;
items: ApplicationSet[];
}

View File

@@ -550,4 +550,8 @@ export class ApplicationsService {
.query({appsetNamespace: namespace})
.then(res => res.body as models.ApplicationSet);
}
public async listApplicationSets(): Promise<models.ApplicationSetList> {
return requests.get(`/applicationsets`).then(res => res.body as models.ApplicationSetList);
}
}

View File

@@ -588,7 +588,7 @@ func ValidatePermissions(ctx context.Context, spec *argoappv1.ApplicationSpec, p
if !proj.IsSourcePermitted(spec.SourceHydrator.GetDrySource()) {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("application repo %s is not permitted in project '%s'", spec.GetSource().RepoURL, spec.Project),
Message: fmt.Sprintf("application repo %s is not permitted in project '%s'", spec.SourceHydrator.GetDrySource().RepoURL, proj.Name),
})
}
case spec.HasMultipleSources():
@@ -602,7 +602,7 @@ func ValidatePermissions(ctx context.Context, spec *argoappv1.ApplicationSpec, p
if !proj.IsSourcePermitted(source) {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("application repo %s is not permitted in project '%s'", source.RepoURL, spec.Project),
Message: fmt.Sprintf("application repo %s is not permitted in project '%s'", source.RepoURL, proj.Name),
})
}
}
@@ -615,7 +615,7 @@ func ValidatePermissions(ctx context.Context, spec *argoappv1.ApplicationSpec, p
if !proj.IsSourcePermitted(spec.GetSource()) {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("application repo %s is not permitted in project '%s'", spec.GetSource().RepoURL, spec.Project),
Message: fmt.Sprintf("application repo %s is not permitted in project '%s'", spec.GetSource().RepoURL, proj.Name),
})
}
}
@@ -628,22 +628,21 @@ func ValidatePermissions(ctx context.Context, spec *argoappv1.ApplicationSpec, p
})
return conditions, nil
}
if destCluster.Server != "" {
permitted, err := proj.IsDestinationPermitted(destCluster, spec.Destination.Namespace, func(project string) ([]*argoappv1.Cluster, error) {
return db.GetProjectClusters(ctx, project)
permitted, err := proj.IsDestinationPermitted(destCluster, spec.Destination.Namespace, func(project string) ([]*argoappv1.Cluster, error) {
return db.GetProjectClusters(ctx, project)
})
if err != nil {
return nil, err
}
if !permitted {
server := destCluster.Server
if spec.Destination.Name != "" {
server = destCluster.Name
}
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("application destination server '%s' and namespace '%s' do not match any of the allowed destinations in project '%s'", server, spec.Destination.Namespace, proj.Name),
})
if err != nil {
return nil, err
}
if !permitted {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("application destination server '%s' and namespace '%s' do not match any of the allowed destinations in project '%s'", spec.Destination.Server, spec.Destination.Namespace, spec.Project),
})
}
} else if destCluster.Server == "" {
conditions = append(conditions, argoappv1.ApplicationCondition{Type: argoappv1.ApplicationConditionInvalidSpecError, Message: ErrDestinationMissing})
}
return conditions, nil
}

View File

@@ -49,7 +49,7 @@ func ReceiveRepoStream(ctx context.Context, receiver StreamReceiver, destDir str
if err != nil {
return nil, fmt.Errorf("error receiving tgz file: %w", err)
}
err = files.Untgz(destDir, tgzFile, math.MaxInt64, preserveFileMode)
err = files.Untgz(destDir, tgzFile, math.MaxInt64, preserveFileMode, false)
if err != nil {
return nil, fmt.Errorf("error decompressing tgz file: %w", err)
}
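The extra boolean passed to `files.Untgz` above toggles whether out-of-bounds symlinks are allowed during extraction. A minimal sketch of the kind of containment check this implies (the helper name and exact semantics in the real implementation may differ):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isWithinDest reports whether a symlink target stays inside destDir.
// linkDir is the directory containing the symlink; relative targets are
// resolved against it. Targets that resolve outside destDir are the
// "illegal filepath in symlink" case rejected by the e2e test above.
func isWithinDest(destDir, linkDir, target string) bool {
	resolved := target
	if !filepath.IsAbs(resolved) {
		resolved = filepath.Join(linkDir, resolved)
	}
	rel, err := filepath.Rel(destDir, resolved)
	if err != nil {
		return false
	}
	return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
}

func main() {
	fmt.Println(isWithinDest("/tmp/dest", "/tmp/dest/sub", "../file")) // resolves inside destDir
	fmt.Println(isWithinDest("/tmp/dest", "/tmp/dest", ".."))          // escapes destDir
}
```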

View File

@@ -34,9 +34,9 @@ func (s *secretsRepositoryBackend) CreateRepository(ctx context.Context, reposit
},
}
s.repositoryToSecret(repository, repositorySecret)
updatedSecret := s.repositoryToSecret(repository, repositorySecret)
_, err := s.db.createSecret(ctx, repositorySecret)
_, err := s.db.createSecret(ctx, updatedSecret)
if err != nil {
if apierrors.IsAlreadyExists(err) {
hasLabel, err := s.hasRepoTypeLabel(secName)
@@ -142,9 +142,9 @@ func (s *secretsRepositoryBackend) UpdateRepository(ctx context.Context, reposit
return nil, err
}
s.repositoryToSecret(repository, repositorySecret)
updatedSecret := s.repositoryToSecret(repository, repositorySecret)
_, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, repositorySecret, metav1.UpdateOptions{})
_, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, updatedSecret, metav1.UpdateOptions{})
if err != nil {
return nil, err
}
@@ -187,9 +187,9 @@ func (s *secretsRepositoryBackend) CreateRepoCreds(ctx context.Context, repoCred
},
}
repoCredsToSecret(repoCreds, repoCredsSecret)
updatedSecret := repoCredsToSecret(repoCreds, repoCredsSecret)
_, err := s.db.createSecret(ctx, repoCredsSecret)
_, err := s.db.createSecret(ctx, updatedSecret)
if err != nil {
if apierrors.IsAlreadyExists(err) {
return nil, status.Errorf(codes.AlreadyExists, "repository credentials %q already exists", repoCreds.URL)
@@ -237,9 +237,9 @@ func (s *secretsRepositoryBackend) UpdateRepoCreds(ctx context.Context, repoCred
return nil, err
}
repoCredsToSecret(repoCreds, repoCredsSecret)
updatedSecret := repoCredsToSecret(repoCreds, repoCredsSecret)
repoCredsSecret, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, repoCredsSecret, metav1.UpdateOptions{})
repoCredsSecret, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, updatedSecret, metav1.UpdateOptions{})
if err != nil {
return nil, err
}
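The hunks above all follow one pattern: the conversion helper now returns an updated deep copy instead of mutating the shared secret in place, so concurrent readers of the cached object never race with writers. A minimal copy-on-write sketch of that pattern, using a plain map rather than the actual backend types:

```go
package main

import "fmt"

// withValue returns an updated copy instead of modifying the shared map
// in place, mirroring how repositoryToSecret/repoCredsToSecret now
// return a deep copy of the secret (names here are illustrative).
func withValue(shared map[string]string, key, val string) map[string]string {
	copied := make(map[string]string, len(shared)+1)
	for k, v := range shared {
		copied[k] = v
	}
	copied[key] = val
	return copied
}

func main() {
	cached := map[string]string{"url": "https://example.com/repo.git"}
	// Writers get their own copy; concurrent readers of `cached`
	// never observe a partially-written map.
	updated := withValue(cached, "username", "bot")
	fmt.Println(len(cached), len(updated))
}
```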
@@ -323,73 +323,75 @@ func (s *secretsRepositoryBackend) GetAllOCIRepoCreds(_ context.Context) ([]*app
}
func secretToRepository(secret *corev1.Secret) (*appsv1.Repository, error) {
secretCopy := secret.DeepCopy()
repository := &appsv1.Repository{
Name: string(secret.Data["name"]),
Repo: string(secret.Data["url"]),
Username: string(secret.Data["username"]),
Password: string(secret.Data["password"]),
BearerToken: string(secret.Data["bearerToken"]),
SSHPrivateKey: string(secret.Data["sshPrivateKey"]),
TLSClientCertData: string(secret.Data["tlsClientCertData"]),
TLSClientCertKey: string(secret.Data["tlsClientCertKey"]),
Type: string(secret.Data["type"]),
GithubAppPrivateKey: string(secret.Data["githubAppPrivateKey"]),
GitHubAppEnterpriseBaseURL: string(secret.Data["githubAppEnterpriseBaseUrl"]),
Proxy: string(secret.Data["proxy"]),
NoProxy: string(secret.Data["noProxy"]),
Project: string(secret.Data["project"]),
GCPServiceAccountKey: string(secret.Data["gcpServiceAccountKey"]),
Name: string(secretCopy.Data["name"]),
Repo: string(secretCopy.Data["url"]),
Username: string(secretCopy.Data["username"]),
Password: string(secretCopy.Data["password"]),
BearerToken: string(secretCopy.Data["bearerToken"]),
SSHPrivateKey: string(secretCopy.Data["sshPrivateKey"]),
TLSClientCertData: string(secretCopy.Data["tlsClientCertData"]),
TLSClientCertKey: string(secretCopy.Data["tlsClientCertKey"]),
Type: string(secretCopy.Data["type"]),
GithubAppPrivateKey: string(secretCopy.Data["githubAppPrivateKey"]),
GitHubAppEnterpriseBaseURL: string(secretCopy.Data["githubAppEnterpriseBaseUrl"]),
Proxy: string(secretCopy.Data["proxy"]),
NoProxy: string(secretCopy.Data["noProxy"]),
Project: string(secretCopy.Data["project"]),
GCPServiceAccountKey: string(secretCopy.Data["gcpServiceAccountKey"]),
}
insecureIgnoreHostKey, err := boolOrFalse(secret, "insecureIgnoreHostKey")
insecureIgnoreHostKey, err := boolOrFalse(secretCopy, "insecureIgnoreHostKey")
if err != nil {
return repository, err
}
repository.InsecureIgnoreHostKey = insecureIgnoreHostKey
insecure, err := boolOrFalse(secret, "insecure")
insecure, err := boolOrFalse(secretCopy, "insecure")
if err != nil {
return repository, err
}
repository.Insecure = insecure
enableLfs, err := boolOrFalse(secret, "enableLfs")
enableLfs, err := boolOrFalse(secretCopy, "enableLfs")
if err != nil {
return repository, err
}
repository.EnableLFS = enableLfs
enableOCI, err := boolOrFalse(secret, "enableOCI")
enableOCI, err := boolOrFalse(secretCopy, "enableOCI")
if err != nil {
return repository, err
}
repository.EnableOCI = enableOCI
insecureOCIForceHTTP, err := boolOrFalse(secret, "insecureOCIForceHttp")
insecureOCIForceHTTP, err := boolOrFalse(secretCopy, "insecureOCIForceHttp")
if err != nil {
return repository, err
}
repository.InsecureOCIForceHttp = insecureOCIForceHTTP
-githubAppID, err := intOrZero(secret, "githubAppID")
+githubAppID, err := intOrZero(secretCopy, "githubAppID")
if err != nil {
return repository, err
}
repository.GithubAppId = githubAppID
-githubAppInstallationID, err := intOrZero(secret, "githubAppInstallationID")
+githubAppInstallationID, err := intOrZero(secretCopy, "githubAppInstallationID")
if err != nil {
return repository, err
}
repository.GithubAppInstallationId = githubAppInstallationID
-forceBasicAuth, err := boolOrFalse(secret, "forceHttpBasicAuth")
+forceBasicAuth, err := boolOrFalse(secretCopy, "forceHttpBasicAuth")
if err != nil {
return repository, err
}
repository.ForceHttpBasicAuth = forceBasicAuth
-useAzureWorkloadIdentity, err := boolOrFalse(secret, "useAzureWorkloadIdentity")
+useAzureWorkloadIdentity, err := boolOrFalse(secretCopy, "useAzureWorkloadIdentity")
if err != nil {
return repository, err
}
@@ -398,86 +400,92 @@ func secretToRepository(secret *corev1.Secret) (*appsv1.Repository, error) {
return repository, nil
}
-func (s *secretsRepositoryBackend) repositoryToSecret(repository *appsv1.Repository, secret *corev1.Secret) {
-if secret.Data == nil {
-secret.Data = make(map[string][]byte)
+func (s *secretsRepositoryBackend) repositoryToSecret(repository *appsv1.Repository, secret *corev1.Secret) *corev1.Secret {
+secretCopy := secret.DeepCopy()
+if secretCopy.Data == nil {
+secretCopy.Data = make(map[string][]byte)
}
-updateSecretString(secret, "name", repository.Name)
-updateSecretString(secret, "project", repository.Project)
-updateSecretString(secret, "url", repository.Repo)
-updateSecretString(secret, "username", repository.Username)
-updateSecretString(secret, "password", repository.Password)
-updateSecretString(secret, "bearerToken", repository.BearerToken)
-updateSecretString(secret, "sshPrivateKey", repository.SSHPrivateKey)
-updateSecretBool(secret, "enableOCI", repository.EnableOCI)
-updateSecretBool(secret, "insecureOCIForceHttp", repository.InsecureOCIForceHttp)
-updateSecretString(secret, "tlsClientCertData", repository.TLSClientCertData)
-updateSecretString(secret, "tlsClientCertKey", repository.TLSClientCertKey)
-updateSecretString(secret, "type", repository.Type)
-updateSecretString(secret, "githubAppPrivateKey", repository.GithubAppPrivateKey)
-updateSecretInt(secret, "githubAppID", repository.GithubAppId)
-updateSecretInt(secret, "githubAppInstallationID", repository.GithubAppInstallationId)
-updateSecretString(secret, "githubAppEnterpriseBaseUrl", repository.GitHubAppEnterpriseBaseURL)
-updateSecretBool(secret, "insecureIgnoreHostKey", repository.InsecureIgnoreHostKey)
-updateSecretBool(secret, "insecure", repository.Insecure)
-updateSecretBool(secret, "enableLfs", repository.EnableLFS)
-updateSecretString(secret, "proxy", repository.Proxy)
-updateSecretString(secret, "noProxy", repository.NoProxy)
-updateSecretString(secret, "gcpServiceAccountKey", repository.GCPServiceAccountKey)
-updateSecretBool(secret, "forceHttpBasicAuth", repository.ForceHttpBasicAuth)
-updateSecretBool(secret, "useAzureWorkloadIdentity", repository.UseAzureWorkloadIdentity)
-addSecretMetadata(secret, s.getSecretType())
+updateSecretString(secretCopy, "name", repository.Name)
+updateSecretString(secretCopy, "project", repository.Project)
+updateSecretString(secretCopy, "url", repository.Repo)
+updateSecretString(secretCopy, "username", repository.Username)
+updateSecretString(secretCopy, "password", repository.Password)
+updateSecretString(secretCopy, "bearerToken", repository.BearerToken)
+updateSecretString(secretCopy, "sshPrivateKey", repository.SSHPrivateKey)
+updateSecretBool(secretCopy, "enableOCI", repository.EnableOCI)
+updateSecretBool(secretCopy, "insecureOCIForceHttp", repository.InsecureOCIForceHttp)
+updateSecretString(secretCopy, "tlsClientCertData", repository.TLSClientCertData)
+updateSecretString(secretCopy, "tlsClientCertKey", repository.TLSClientCertKey)
+updateSecretString(secretCopy, "type", repository.Type)
+updateSecretString(secretCopy, "githubAppPrivateKey", repository.GithubAppPrivateKey)
+updateSecretInt(secretCopy, "githubAppID", repository.GithubAppId)
+updateSecretInt(secretCopy, "githubAppInstallationID", repository.GithubAppInstallationId)
+updateSecretString(secretCopy, "githubAppEnterpriseBaseUrl", repository.GitHubAppEnterpriseBaseURL)
+updateSecretBool(secretCopy, "insecureIgnoreHostKey", repository.InsecureIgnoreHostKey)
+updateSecretBool(secretCopy, "insecure", repository.Insecure)
+updateSecretBool(secretCopy, "enableLfs", repository.EnableLFS)
+updateSecretString(secretCopy, "proxy", repository.Proxy)
+updateSecretString(secretCopy, "noProxy", repository.NoProxy)
+updateSecretString(secretCopy, "gcpServiceAccountKey", repository.GCPServiceAccountKey)
+updateSecretBool(secretCopy, "forceHttpBasicAuth", repository.ForceHttpBasicAuth)
+updateSecretBool(secretCopy, "useAzureWorkloadIdentity", repository.UseAzureWorkloadIdentity)
+addSecretMetadata(secretCopy, s.getSecretType())
+return secretCopy
}
func (s *secretsRepositoryBackend) secretToRepoCred(secret *corev1.Secret) (*appsv1.RepoCreds, error) {
+secretCopy := secret.DeepCopy()
repository := &appsv1.RepoCreds{
-URL: string(secret.Data["url"]),
-Username: string(secret.Data["username"]),
-Password: string(secret.Data["password"]),
-BearerToken: string(secret.Data["bearerToken"]),
-SSHPrivateKey: string(secret.Data["sshPrivateKey"]),
-TLSClientCertData: string(secret.Data["tlsClientCertData"]),
-TLSClientCertKey: string(secret.Data["tlsClientCertKey"]),
-Type: string(secret.Data["type"]),
-GithubAppPrivateKey: string(secret.Data["githubAppPrivateKey"]),
-GitHubAppEnterpriseBaseURL: string(secret.Data["githubAppEnterpriseBaseUrl"]),
-GCPServiceAccountKey: string(secret.Data["gcpServiceAccountKey"]),
-Proxy: string(secret.Data["proxy"]),
-NoProxy: string(secret.Data["noProxy"]),
+URL: string(secretCopy.Data["url"]),
+Username: string(secretCopy.Data["username"]),
+Password: string(secretCopy.Data["password"]),
+BearerToken: string(secretCopy.Data["bearerToken"]),
+SSHPrivateKey: string(secretCopy.Data["sshPrivateKey"]),
+TLSClientCertData: string(secretCopy.Data["tlsClientCertData"]),
+TLSClientCertKey: string(secretCopy.Data["tlsClientCertKey"]),
+Type: string(secretCopy.Data["type"]),
+GithubAppPrivateKey: string(secretCopy.Data["githubAppPrivateKey"]),
+GitHubAppEnterpriseBaseURL: string(secretCopy.Data["githubAppEnterpriseBaseUrl"]),
+GCPServiceAccountKey: string(secretCopy.Data["gcpServiceAccountKey"]),
+Proxy: string(secretCopy.Data["proxy"]),
+NoProxy: string(secretCopy.Data["noProxy"]),
}
-enableOCI, err := boolOrFalse(secret, "enableOCI")
+enableOCI, err := boolOrFalse(secretCopy, "enableOCI")
if err != nil {
return repository, err
}
repository.EnableOCI = enableOCI
-insecureOCIForceHTTP, err := boolOrFalse(secret, "insecureOCIForceHttp")
+insecureOCIForceHTTP, err := boolOrFalse(secretCopy, "insecureOCIForceHttp")
if err != nil {
return repository, err
}
repository.InsecureOCIForceHttp = insecureOCIForceHTTP
-githubAppID, err := intOrZero(secret, "githubAppID")
+githubAppID, err := intOrZero(secretCopy, "githubAppID")
if err != nil {
return repository, err
}
repository.GithubAppId = githubAppID
-githubAppInstallationID, err := intOrZero(secret, "githubAppInstallationID")
+githubAppInstallationID, err := intOrZero(secretCopy, "githubAppInstallationID")
if err != nil {
return repository, err
}
repository.GithubAppInstallationId = githubAppInstallationID
-forceBasicAuth, err := boolOrFalse(secret, "forceHttpBasicAuth")
+forceBasicAuth, err := boolOrFalse(secretCopy, "forceHttpBasicAuth")
if err != nil {
return repository, err
}
repository.ForceHttpBasicAuth = forceBasicAuth
-useAzureWorkloadIdentity, err := boolOrFalse(secret, "useAzureWorkloadIdentity")
+useAzureWorkloadIdentity, err := boolOrFalse(secretCopy, "useAzureWorkloadIdentity")
if err != nil {
return repository, err
}
@@ -486,31 +494,35 @@ func (s *secretsRepositoryBackend) secretToRepoCred(secret *corev1.Secret) (*app
return repository, nil
}
-func repoCredsToSecret(repoCreds *appsv1.RepoCreds, secret *corev1.Secret) {
-if secret.Data == nil {
-secret.Data = make(map[string][]byte)
+func repoCredsToSecret(repoCreds *appsv1.RepoCreds, secret *corev1.Secret) *corev1.Secret {
+secretCopy := secret.DeepCopy()
+if secretCopy.Data == nil {
+secretCopy.Data = make(map[string][]byte)
}
-updateSecretString(secret, "url", repoCreds.URL)
-updateSecretString(secret, "username", repoCreds.Username)
-updateSecretString(secret, "password", repoCreds.Password)
-updateSecretString(secret, "bearerToken", repoCreds.BearerToken)
-updateSecretString(secret, "sshPrivateKey", repoCreds.SSHPrivateKey)
-updateSecretBool(secret, "enableOCI", repoCreds.EnableOCI)
-updateSecretBool(secret, "insecureOCIForceHttp", repoCreds.InsecureOCIForceHttp)
-updateSecretString(secret, "tlsClientCertData", repoCreds.TLSClientCertData)
-updateSecretString(secret, "tlsClientCertKey", repoCreds.TLSClientCertKey)
-updateSecretString(secret, "type", repoCreds.Type)
-updateSecretString(secret, "githubAppPrivateKey", repoCreds.GithubAppPrivateKey)
-updateSecretInt(secret, "githubAppID", repoCreds.GithubAppId)
-updateSecretInt(secret, "githubAppInstallationID", repoCreds.GithubAppInstallationId)
-updateSecretString(secret, "githubAppEnterpriseBaseUrl", repoCreds.GitHubAppEnterpriseBaseURL)
-updateSecretString(secret, "gcpServiceAccountKey", repoCreds.GCPServiceAccountKey)
-updateSecretString(secret, "proxy", repoCreds.Proxy)
-updateSecretString(secret, "noProxy", repoCreds.NoProxy)
-updateSecretBool(secret, "forceHttpBasicAuth", repoCreds.ForceHttpBasicAuth)
-updateSecretBool(secret, "useAzureWorkloadIdentity", repoCreds.UseAzureWorkloadIdentity)
-addSecretMetadata(secret, common.LabelValueSecretTypeRepoCreds)
+updateSecretString(secretCopy, "url", repoCreds.URL)
+updateSecretString(secretCopy, "username", repoCreds.Username)
+updateSecretString(secretCopy, "password", repoCreds.Password)
+updateSecretString(secretCopy, "bearerToken", repoCreds.BearerToken)
+updateSecretString(secretCopy, "sshPrivateKey", repoCreds.SSHPrivateKey)
+updateSecretBool(secretCopy, "enableOCI", repoCreds.EnableOCI)
+updateSecretBool(secretCopy, "insecureOCIForceHttp", repoCreds.InsecureOCIForceHttp)
+updateSecretString(secretCopy, "tlsClientCertData", repoCreds.TLSClientCertData)
+updateSecretString(secretCopy, "tlsClientCertKey", repoCreds.TLSClientCertKey)
+updateSecretString(secretCopy, "type", repoCreds.Type)
+updateSecretString(secretCopy, "githubAppPrivateKey", repoCreds.GithubAppPrivateKey)
+updateSecretInt(secretCopy, "githubAppID", repoCreds.GithubAppId)
+updateSecretInt(secretCopy, "githubAppInstallationID", repoCreds.GithubAppInstallationId)
+updateSecretString(secretCopy, "githubAppEnterpriseBaseUrl", repoCreds.GitHubAppEnterpriseBaseURL)
+updateSecretString(secretCopy, "gcpServiceAccountKey", repoCreds.GCPServiceAccountKey)
+updateSecretString(secretCopy, "proxy", repoCreds.Proxy)
+updateSecretString(secretCopy, "noProxy", repoCreds.NoProxy)
+updateSecretBool(secretCopy, "forceHttpBasicAuth", repoCreds.ForceHttpBasicAuth)
+updateSecretBool(secretCopy, "useAzureWorkloadIdentity", repoCreds.UseAzureWorkloadIdentity)
+addSecretMetadata(secretCopy, common.LabelValueSecretTypeRepoCreds)
+return secretCopy
}
func (s *secretsRepositoryBackend) getRepositorySecret(repoURL, project string, allowFallback bool) (*corev1.Secret, error) {


@@ -1,7 +1,9 @@
package db
import (
"fmt"
"strconv"
"sync"
"testing"
"github.com/stretchr/testify/assert"
@@ -85,9 +87,9 @@ func TestSecretsRepositoryBackend_CreateRepository(t *testing.T) {
t.Parallel()
secret := &corev1.Secret{}
s := secretsRepositoryBackend{}
-s.repositoryToSecret(repo, secret)
-delete(secret.Labels, common.LabelKeySecretType)
-f := setupWithK8sObjects(secret)
+updatedSecret := s.repositoryToSecret(repo, secret)
+delete(updatedSecret.Labels, common.LabelKeySecretType)
+f := setupWithK8sObjects(updatedSecret)
f.clientSet.ReactionChain = nil
f.clientSet.AddReactor("create", "secrets", func(_ k8stesting.Action) (handled bool, ret runtime.Object, err error) {
gr := schema.GroupResource{
@@ -122,8 +124,8 @@ func TestSecretsRepositoryBackend_CreateRepository(t *testing.T) {
},
}
s := secretsRepositoryBackend{}
-s.repositoryToSecret(repo, secret)
-f := setupWithK8sObjects(secret)
+updatedSecret := s.repositoryToSecret(repo, secret)
+f := setupWithK8sObjects(updatedSecret)
f.clientSet.ReactionChain = nil
f.clientSet.WatchReactionChain = nil
f.clientSet.AddReactor("create", "secrets", func(_ k8stesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -134,7 +136,7 @@ func TestSecretsRepositoryBackend_CreateRepository(t *testing.T) {
return true, nil, apierrors.NewAlreadyExists(gr, "already exists")
})
watcher := watch.NewFakeWithChanSize(1, true)
-watcher.Add(secret)
+watcher.Add(updatedSecret)
f.clientSet.AddWatchReactor("secrets", func(_ k8stesting.Action) (handled bool, ret watch.Interface, err error) {
return true, watcher, nil
})
@@ -946,7 +948,7 @@ func TestRepoCredsToSecret(t *testing.T) {
GithubAppInstallationId: 456,
GitHubAppEnterpriseBaseURL: "GitHubAppEnterpriseBaseURL",
}
-repoCredsToSecret(creds, s)
+s = repoCredsToSecret(creds, s)
assert.Equal(t, []byte(creds.URL), s.Data["url"])
assert.Equal(t, []byte(creds.Username), s.Data["username"])
assert.Equal(t, []byte(creds.Password), s.Data["password"])
@@ -962,3 +964,169 @@ func TestRepoCredsToSecret(t *testing.T) {
assert.Equal(t, map[string]string{common.AnnotationKeyManagedBy: common.AnnotationValueManagedByArgoCD}, s.Annotations)
assert.Equal(t, map[string]string{common.LabelKeySecretType: common.LabelValueSecretTypeRepoCreds}, s.Labels)
}
func TestRaceConditionInRepoCredsOperations(t *testing.T) {
// Create a single shared secret that will be accessed concurrently
sharedSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: RepoURLToSecretName(repoSecretPrefix, "git@github.com:argoproj/argo-cd.git", ""),
Namespace: testNamespace,
Labels: map[string]string{
common.LabelKeySecretType: common.LabelValueSecretTypeRepoCreds,
},
},
Data: map[string][]byte{
"url": []byte("git@github.com:argoproj/argo-cd.git"),
"username": []byte("test-user"),
"password": []byte("test-pass"),
},
}
// Create test credentials that we'll use for conversion
repoCreds := &appsv1.RepoCreds{
URL: "git@github.com:argoproj/argo-cd.git",
Username: "test-user",
Password: "test-pass",
}
backend := &secretsRepositoryBackend{}
var wg sync.WaitGroup
concurrentOps := 50
errChan := make(chan error, concurrentOps*2) // Channel to collect errors
// Launch goroutines that perform concurrent operations
for i := 0; i < concurrentOps; i++ {
wg.Add(2)
// One goroutine converts from RepoCreds to Secret
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in repoCredsToSecret: %v", r)
}
}()
_ = repoCredsToSecret(repoCreds, sharedSecret)
}()
// Another goroutine converts from Secret to RepoCreds
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in secretToRepoCred: %v", r)
}
}()
creds, err := backend.secretToRepoCred(sharedSecret)
if err != nil {
errChan <- fmt.Errorf("error in secretToRepoCred: %w", err)
return
}
// Verify data integrity
if creds.URL != repoCreds.URL || creds.Username != repoCreds.Username || creds.Password != repoCreds.Password {
errChan <- fmt.Errorf("data mismatch in conversion: expected %v, got %v", repoCreds, creds)
}
}()
}
wg.Wait()
close(errChan)
// Check for any errors that occurred during concurrent operations
for err := range errChan {
t.Errorf("concurrent operation error: %v", err)
}
// Verify final state
finalCreds, err := backend.secretToRepoCred(sharedSecret)
require.NoError(t, err)
assert.Equal(t, repoCreds.URL, finalCreds.URL)
assert.Equal(t, repoCreds.Username, finalCreds.Username)
assert.Equal(t, repoCreds.Password, finalCreds.Password)
}
func TestRaceConditionInRepositoryOperations(t *testing.T) {
// Create a single shared secret that will be accessed concurrently
sharedSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: RepoURLToSecretName(repoSecretPrefix, "git@github.com:argoproj/argo-cd.git", ""),
Namespace: testNamespace,
Labels: map[string]string{
common.LabelKeySecretType: common.LabelValueSecretTypeRepository,
},
},
Data: map[string][]byte{
"url": []byte("git@github.com:argoproj/argo-cd.git"),
"name": []byte("test-repo"),
"username": []byte("test-user"),
"password": []byte("test-pass"),
},
}
// Create test repository that we'll use for conversion
repo := &appsv1.Repository{
Name: "test-repo",
Repo: "git@github.com:argoproj/argo-cd.git",
Username: "test-user",
Password: "test-pass",
}
backend := &secretsRepositoryBackend{}
var wg sync.WaitGroup
concurrentOps := 50
errChan := make(chan error, concurrentOps*2) // Channel to collect errors
// Launch goroutines that perform concurrent operations
for i := 0; i < concurrentOps; i++ {
wg.Add(2)
// One goroutine converts from Repository to Secret
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in repositoryToSecret: %v", r)
}
}()
_ = backend.repositoryToSecret(repo, sharedSecret)
}()
// Another goroutine converts from Secret to Repository
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in secretToRepository: %v", r)
}
}()
repository, err := secretToRepository(sharedSecret)
if err != nil {
errChan <- fmt.Errorf("error in secretToRepository: %w", err)
return
}
// Verify data integrity
if repository.Name != repo.Name || repository.Repo != repo.Repo ||
repository.Username != repo.Username || repository.Password != repo.Password {
errChan <- fmt.Errorf("data mismatch in conversion: expected %v, got %v", repo, repository)
}
}()
}
wg.Wait()
close(errChan)
// Check for any errors that occurred during concurrent operations
for err := range errChan {
t.Errorf("concurrent operation error: %v", err)
}
// Verify final state
finalRepo, err := secretToRepository(sharedSecret)
require.NoError(t, err)
assert.Equal(t, repo.Name, finalRepo.Name)
assert.Equal(t, repo.Repo, finalRepo.Repo)
assert.Equal(t, repo.Username, finalRepo.Username)
assert.Equal(t, repo.Password, finalRepo.Password)
}
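The copy-on-write pattern these tests exercise can be reduced to a short standalone sketch. The `fakeSecret` type and the `deepCopy`/`toSecret` helpers below are hypothetical stand-ins, not the Argo CD code: writers mutate a deep copy while readers keep using the shared object, so no goroutine ever writes to a map another goroutine may be reading.

```go
package main

import (
	"fmt"
	"sync"
)

// fakeSecret stands in for corev1.Secret; its Data map is the shared
// state that previously triggered concurrent map read/write panics.
type fakeSecret struct {
	Data map[string][]byte
}

// deepCopy mimics what secret.DeepCopy() provides: a fresh map whose
// mutation cannot race with readers of the original.
func (s *fakeSecret) deepCopy() *fakeSecret {
	out := &fakeSecret{Data: make(map[string][]byte, len(s.Data))}
	for k, v := range s.Data {
		out.Data[k] = append([]byte(nil), v...)
	}
	return out
}

// toSecret simulates the fixed conversion: it writes into a copy and
// returns it, leaving the shared secret untouched.
func toSecret(shared *fakeSecret, url string) *fakeSecret {
	c := shared.deepCopy()
	c.Data["url"] = []byte(url)
	return c
}

func main() {
	shared := &fakeSecret{Data: map[string][]byte{"url": []byte("git@example.com:org/repo.git")}}
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(2)
		go func() { defer wg.Done(); _ = toSecret(shared, "git@example.com:org/other.git") }() // writer, on a copy
		go func() { defer wg.Done(); _ = string(shared.Data["url"]) }()                        // concurrent reader
	}
	wg.Wait()
	fmt.Println(string(shared.Data["url"])) // shared object was never mutated
}
```

Run with `go run -race` and no race is reported; mutating `shared.Data` directly in the writer goroutine instead would trip the race detector.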

util/env/env.go

@@ -7,6 +7,8 @@ import (
"strings"
"time"
+timeutil "github.com/argoproj/pkg/time"
log "github.com/sirupsen/logrus"
)
@@ -125,8 +127,13 @@ func ParseDurationFromEnv(env string, defaultValue, minimum, maximum time.Durati
}
dur, err := time.ParseDuration(str)
if err != nil {
-log.Warnf("Could not parse '%s' as a duration string from environment %s", str, env)
-return defaultValue
+// provides backwards compatibility for durations defined in days, see: https://github.com/argoproj/argo-cd/issues/24740
+durPtr, err2 := timeutil.ParseDuration(str)
+if err2 != nil {
+log.Warnf("Could not parse '%s' as a duration from environment %s", str, env)
+return defaultValue
+}
+dur = *durPtr
}
if dur < minimum {

util/env/env_test.go

@@ -142,6 +142,90 @@ func TestParseDurationFromEnv(t *testing.T) {
}
}
func TestParseDurationFromEnvEdgeCases(t *testing.T) {
envKey := "SOME_ENV_KEY"
def := 3 * time.Minute
minimum := 1 * time.Second
maximum := 2160 * time.Hour // 3 months
testCases := []struct {
name string
env string
expected time.Duration
}{{
name: "EnvNotSet",
expected: def,
}, {
name: "Durations defined as days are valid",
env: "12d",
expected: time.Hour * 24 * 12,
}, {
name: "Negative durations should fail parsing and use the default value",
env: "-1h",
expected: def,
}, {
name: "Negative day durations should fail parsing and use the default value",
env: "-12d",
expected: def,
}, {
name: "Scientific notation should fail parsing and use the default value",
env: "1e3s",
expected: def,
}, {
name: "Durations with a leading zero are considered valid and parsed as decimal notation",
env: "0755s",
expected: time.Second * 755,
}, {
name: "Durations with many leading zeroes are considered valid and parsed as decimal notation",
env: "000083m",
expected: time.Minute * 83,
}, {
name: "Decimal Durations should not fail parsing",
env: "30.5m",
expected: time.Minute*30 + time.Second*30,
}, {
name: "Decimal Day Durations should fail parsing and use the default value",
env: "30.5d",
expected: def,
}, {
name: "Fraction Durations should fail parsing and use the default value",
env: "1/2h",
expected: def,
}, {
name: "Durations without a time unit should fail parsing and use the default value",
env: "15",
expected: def,
}, {
name: "Durations with a trailing symbol should fail parsing and use the default value",
env: "+12d",
expected: def,
}, {
name: "Leading space Duration should fail parsing and use the default value",
env: " 2h",
expected: def,
}, {
name: "Trailing space Duration should fail parsing and use the default value",
env: "6m ",
expected: def,
}, {
name: "Empty Duration should fail parsing and use the default value",
env: "",
expected: def,
}, {
name: "Whitespace Duration should fail parsing and use the default value",
env: " ",
expected: def,
}}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Setenv(envKey, tc.env)
val := ParseDurationFromEnv(envKey, def, minimum, maximum)
assert.Equal(t, tc.expected, val)
})
}
}
func Test_ParseBoolFromEnv(t *testing.T) {
envKey := "SOMEKEY"


@@ -135,7 +135,7 @@ func untarChart(tempDir string, cachedChartPath string, manifestMaxExtractedSize
if err != nil {
return fmt.Errorf("error opening cached chart path %s: %w", cachedChartPath, err)
}
-return files.Untgz(tempDir, reader, manifestMaxExtractedSize, false)
+return files.Untgz(tempDir, reader, manifestMaxExtractedSize, false, false)
}
func (c *nativeHelmChart) ExtractChart(chart string, version string, passCredentials bool, manifestMaxExtractedSize int64, disableManifestMaxExtractedSize bool) (string, utilio.Closer, error) {


@@ -71,7 +71,7 @@ func writeFile(srcPath string, inclusions []string, exclusions []string, writer
// - a full path
// - points to an empty directory or
// - points to a non-existing directory
-func Untgz(dstPath string, r io.Reader, maxSize int64, preserveFileMode bool) error {
+func Untgz(dstPath string, r io.Reader, maxSize int64, preserveFileMode bool, relativizeSymlinks bool) error {
if !filepath.IsAbs(dstPath) {
return fmt.Errorf("dstPath points to a relative path: %s", dstPath)
}
@@ -81,7 +81,7 @@ func Untgz(dstPath string, r io.Reader, maxSize int64, preserveFileMode bool) er
return fmt.Errorf("error reading file: %w", err)
}
defer gzr.Close()
-return untar(dstPath, io.LimitReader(gzr, maxSize), preserveFileMode)
+return untar(dstPath, io.LimitReader(gzr, maxSize), preserveFileMode, relativizeSymlinks)
}
// Untar will loop over the tar reader creating the file structure at dstPath.
@@ -89,12 +89,12 @@ func Untgz(dstPath string, r io.Reader, maxSize int64, preserveFileMode bool) er
// - a full path
// - points to an empty directory or
// - points to a non-existing directory
-func Untar(dstPath string, r io.Reader, maxSize int64, preserveFileMode bool) error {
+func Untar(dstPath string, r io.Reader, maxSize int64, preserveFileMode bool, relativizeSymlinks bool) error {
if !filepath.IsAbs(dstPath) {
return fmt.Errorf("dstPath points to a relative path: %s", dstPath)
}
-return untar(dstPath, io.LimitReader(r, maxSize), preserveFileMode)
+return untar(dstPath, io.LimitReader(r, maxSize), preserveFileMode, relativizeSymlinks)
}
// untar will loop over the tar reader creating the file structure at dstPath.
@@ -102,7 +102,7 @@ func Untar(dstPath string, r io.Reader, maxSize int64, preserveFileMode bool) er
// - a full path
// - points to an empty directory or
// - points to a non existing directory
-func untar(dstPath string, r io.Reader, preserveFileMode bool) error {
+func untar(dstPath string, r io.Reader, preserveFileMode bool, relativizeSymlinks bool) error {
tr := tar.NewReader(r)
for {
@@ -136,16 +136,27 @@ func untar(dstPath string, r io.Reader, preserveFileMode bool) error {
case tar.TypeSymlink:
// Sanity check to protect against symlink exploit
linkTarget := filepath.Join(filepath.Dir(target), header.Linkname)
-realPath, err := filepath.EvalSymlinks(linkTarget)
+realLinkTarget, err := filepath.EvalSymlinks(linkTarget)
if os.IsNotExist(err) {
-realPath = linkTarget
+realLinkTarget = linkTarget
} else if err != nil {
return fmt.Errorf("error checking symlink realpath: %w", err)
}
-if !Inbound(realPath, dstPath) {
+if !Inbound(realLinkTarget, dstPath) {
return fmt.Errorf("illegal filepath in symlink: %s", linkTarget)
}
-err = os.Symlink(realPath, target)
+if relativizeSymlinks {
+// Relativizing all symlink targets because path.CheckOutOfBoundsSymlinks disallows any absolute symlinks
+// and it makes more sense semantically to view symlinks in archives as relative.
+// Inbound ensures that we never allow symlinks that break out of the target directory.
+realLinkTarget, err = filepath.Rel(filepath.Dir(target), realLinkTarget)
+if err != nil {
+return fmt.Errorf("error relativizing link target: %w", err)
+}
+}
+err = os.Symlink(realLinkTarget, target)
if err != nil {
return fmt.Errorf("error creating symlink: %w", err)
}


@@ -159,7 +159,7 @@ func TestUntgz(t *testing.T) {
destDir := filepath.Join(tmpDir, "untgz1")
// when
-err := files.Untgz(destDir, tgzFile, math.MaxInt64, false)
+err := files.Untgz(destDir, tgzFile, math.MaxInt64, false, false)
// then
require.NoError(t, err)
@@ -182,7 +182,23 @@ func TestUntgz(t *testing.T) {
destDir := filepath.Join(tmpDir, "untgz2")
// when
-err := files.Untgz(destDir, tgzFile, math.MaxInt64, false)
+err := files.Untgz(destDir, tgzFile, math.MaxInt64, false, false)
// then
assert.ErrorContains(t, err, "illegal filepath in symlink")
})
t.Run("will protect against symlink exploit when relativizing symlinks", func(t *testing.T) {
// given
tmpDir := createTmpDir(t)
defer deleteTmpDir(t, tmpDir)
tgzFile := createTgz(t, filepath.Join(getTestDataDir(t), "symlink-exploit"), tmpDir)
defer tgzFile.Close()
destDir := filepath.Join(tmpDir, "untgz2")
// when
err := files.Untgz(destDir, tgzFile, math.MaxInt64, false, true)
// then
assert.ErrorContains(t, err, "illegal filepath in symlink")
@@ -198,14 +214,31 @@ func TestUntgz(t *testing.T) {
destDir := filepath.Join(tmpDir, "untgz1")
// when
-err := files.Untgz(destDir, tgzFile, math.MaxInt64, false)
+err := files.Untgz(destDir, tgzFile, math.MaxInt64, true, false)
require.NoError(t, err)
// then
scriptFileInfo, err := os.Stat(path.Join(destDir, "script.sh"))
require.NoError(t, err)
-assert.Equal(t, os.FileMode(0o644), scriptFileInfo.Mode())
+assert.Equal(t, os.FileMode(0o755), scriptFileInfo.Mode())
})
t.Run("relativizes symlinks", func(t *testing.T) {
// given
tmpDir := createTmpDir(t)
defer deleteTmpDir(t, tmpDir)
tgzFile := createTgz(t, getTestAppDir(t), tmpDir)
defer tgzFile.Close()
destDir := filepath.Join(tmpDir, "symlink-relativize")
// when
err := files.Untgz(destDir, tgzFile, math.MaxInt64, false, true)
// then
require.NoError(t, err)
names := readFiles(t, destDir)
assert.Equal(t, "../README.md", names["applicationset/readme-symlink"])
})
}


@@ -183,7 +183,7 @@ func ReceiveManifestFileStream(ctx context.Context, receiver RepoStreamReceiver,
if err != nil {
return nil, nil, fmt.Errorf("error receiving tgz file: %w", err)
}
-err = files.Untgz(destDir, tgzFile, maxExtractedSize, false)
+err = files.Untgz(destDir, tgzFile, maxExtractedSize, false, false)
if err != nil {
return nil, nil, fmt.Errorf("error decompressing tgz file: %w", err)
}


@@ -28,11 +28,12 @@ func GetFactorySettings(argocdService service.Service, secretName, configMapName
}
// GetFactorySettingsForCLI allows the initialization of argocdService to be deferred until it is used, when InitGetVars is called.
-func GetFactorySettingsForCLI(argocdService service.Service, secretName, configMapName string, selfServiceNotificationEnabled bool) api.Settings {
+func GetFactorySettingsForCLI(serviceGetter func() service.Service, secretName, configMapName string, selfServiceNotificationEnabled bool) api.Settings {
return api.Settings{
SecretName: secretName,
ConfigMapName: configMapName,
InitGetVars: func(cfg *api.Config, configMap *corev1.ConfigMap, secret *corev1.Secret) (api.GetVars, error) {
+argocdService := serviceGetter()
if argocdService == nil {
return nil, errors.New("argocdService is not initialized")
}


@@ -232,12 +232,28 @@ func (c *nativeOCIClient) Extract(ctx context.Context, digest string) (string, u
return "", nil, err
}
-if len(ociManifest.Layers) != 1 {
-return "", nil, fmt.Errorf("expected only a single oci layer, got %d", len(ociManifest.Layers))
+// Add a guard to defend against a ridiculous amount of layers. No idea what a good amount is, but normally we
+// shouldn't expect more than 2-3 in most real world use cases.
+if len(ociManifest.Layers) > 10 {
+return "", nil, fmt.Errorf("expected no more than 10 oci layers, got %d", len(ociManifest.Layers))
}
-if !slices.Contains(c.allowedMediaTypes, ociManifest.Layers[0].MediaType) {
-return "", nil, fmt.Errorf("oci layer media type %s is not in the list of allowed media types", ociManifest.Layers[0].MediaType)
+contentLayers := 0
+// Strictly speaking we only allow for a single content layer. There are images which contains extra layers, such
+// as provenance/attestation layers. Pending a better story to do this natively, we will skip such layers for now.
+for _, layer := range ociManifest.Layers {
+if isContentLayer(layer.MediaType) {
+if !slices.Contains(c.allowedMediaTypes, layer.MediaType) {
+return "", nil, fmt.Errorf("oci layer media type %s is not in the list of allowed media types", layer.MediaType)
+}
+contentLayers++
+}
+}
+if contentLayers != 1 {
+return "", nil, fmt.Errorf("expected only a single oci content layer, got %d", contentLayers)
+}
err = saveCompressedImageToPath(ctx, digest, c.repo, cachedPath)
@@ -405,7 +421,15 @@ func fileExists(filePath string) (bool, error) {
return true, nil
}
// TODO: A content layer could in theory be something that is not a compressed file, e.g a single yaml file or like.
// While IMO the utility in the context of Argo CD is limited, I'd at least like to make it known here and add an extensibility
// point for it in case we decide to loosen the current requirements.
func isContentLayer(mediaType string) bool {
return isCompressedLayer(mediaType)
}
func isCompressedLayer(mediaType string) bool {
// TODO: Is zstd something which is used in the wild? For now let's stick to these suffixes
return strings.HasSuffix(mediaType, "tar+gzip") || strings.HasSuffix(mediaType, "tar")
}
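The classification boils down to a media-type suffix check. This standalone sketch reproduces the two helpers (same logic, copied here for illustration) and shows that a Helm provenance layer is skipped rather than rejected:

```go
package main

import (
	"fmt"
	"strings"
)

// isCompressedLayer mirrors the helper above: only tar and tar+gzip
// layers count.
func isCompressedLayer(mediaType string) bool {
	return strings.HasSuffix(mediaType, "tar+gzip") || strings.HasSuffix(mediaType, "tar")
}

// isContentLayer is currently just an alias; anything else (provenance,
// attestations, signatures) falls through and is skipped by Extract.
func isContentLayer(mediaType string) bool {
	return isCompressedLayer(mediaType)
}

func main() {
	fmt.Println(isContentLayer("application/vnd.oci.image.layer.v1.tar+gzip"))        // true
	fmt.Println(isContentLayer("application/vnd.cncf.helm.chart.provenance.v1.prov")) // false
}
```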
@@ -500,7 +524,7 @@ func isHelmOCI(mediaType string) bool {
// Push looks in all the layers of an OCI image. Once it finds a layer that is compressed, it extracts the layer to a tempDir
// and then renames the temp dir to the directory where the repo-server expects to find k8s manifests.
func (s *compressedLayerExtracterStore) Push(ctx context.Context, desc imagev1.Descriptor, content io.Reader) error {
-if isCompressedLayer(desc.MediaType) {
+if isContentLayer(desc.MediaType) {
srcDir, err := files.CreateTempDir(os.TempDir())
if err != nil {
return err
@@ -508,9 +532,9 @@ func (s *compressedLayerExtracterStore) Push(ctx context.Context, desc imagev1.D
defer os.RemoveAll(srcDir)
if strings.HasSuffix(desc.MediaType, "tar+gzip") {
-err = files.Untgz(srcDir, content, s.maxSize, false)
+err = files.Untgz(srcDir, content, s.maxSize, false, true)
} else {
-err = files.Untar(srcDir, content, s.maxSize, false)
+err = files.Untar(srcDir, content, s.maxSize, false, true)
}
if err != nil {


@@ -122,7 +122,7 @@ func Test_nativeOCIClient_Extract(t *testing.T) {
expectedError: errors.New("cannot extract contents of oci image with revision sha256:1b6dfd71e2b35c2f35dffc39007c2276f3c0e235cbae4c39cba74bd406174e22: failed to perform \"Push\" on destination: could not decompress layer: error while iterating on tar reader: unexpected EOF"),
},
{
-name: "extraction fails due to multiple layers",
+name: "extraction fails due to multiple content layers",
fields: fields{
allowedMediaTypes: []string{imagev1.MediaTypeImageLayerGzip},
},
@@ -135,7 +135,21 @@ func Test_nativeOCIClient_Extract(t *testing.T) {
manifestMaxExtractedSize: 1000,
disableManifestMaxExtractedSize: false,
},
expectedError: errors.New("expected only a single oci layer, got 2"),
expectedError: errors.New("expected only a single oci content layer, got 2"),
},
{
name: "extraction with multiple layers, but just a single content layer",
fields: fields{
allowedMediaTypes: []string{imagev1.MediaTypeImageLayerGzip},
},
args: args{
digestFunc: func(store *memory.Store) string {
layerBlob := createGzippedTarWithContent(t, "some-path", "some content")
return generateManifest(t, store, layerConf{content.NewDescriptorFromBytes(imagev1.MediaTypeImageLayerGzip, layerBlob), layerBlob}, layerConf{content.NewDescriptorFromBytes("application/vnd.cncf.helm.chart.provenance.v1.prov", []byte{}), []byte{}})
},
manifestMaxExtractedSize: 1000,
disableManifestMaxExtractedSize: false,
},
},
{
name: "extraction fails due to invalid media type",


@@ -13,6 +13,9 @@ import (
"time"
bb "github.com/ktrysmt/go-bitbucket"
"k8s.io/apimachinery/pkg/labels"
alpha1 "github.com/argoproj/argo-cd/v3/pkg/client/listers/application/v1alpha1"
"github.com/Masterminds/semver/v3"
"github.com/go-playground/webhooks/v6/azuredevops"
@@ -23,7 +26,6 @@ import (
"github.com/go-playground/webhooks/v6/gogs"
gogsclient "github.com/gogits/go-gogs-client"
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
@@ -60,6 +62,7 @@ type ArgoCDWebhookHandler struct {
ns string
appNs []string
appClientset appclientset.Interface
appsLister alpha1.ApplicationLister
github *github.Webhook
gitlab *gitlab.Webhook
bitbucket *bitbucket.Webhook
@@ -72,7 +75,7 @@ type ArgoCDWebhookHandler struct {
maxWebhookPayloadSizeB int64
}
func NewHandler(namespace string, applicationNamespaces []string, webhookParallelism int, appClientset appclientset.Interface, set *settings.ArgoCDSettings, settingsSrc settingsSource, repoCache *cache.Cache, serverCache *servercache.Cache, argoDB db.ArgoDB, maxWebhookPayloadSizeB int64) *ArgoCDWebhookHandler {
func NewHandler(namespace string, applicationNamespaces []string, webhookParallelism int, appClientset appclientset.Interface, appsLister alpha1.ApplicationLister, set *settings.ArgoCDSettings, settingsSrc settingsSource, repoCache *cache.Cache, serverCache *servercache.Cache, argoDB db.ArgoDB, maxWebhookPayloadSizeB int64) *ArgoCDWebhookHandler {
githubWebhook, err := github.New(github.Options.Secret(set.WebhookGitHubSecret))
if err != nil {
log.Warnf("Unable to init the GitHub webhook")
@@ -115,6 +118,7 @@ func NewHandler(namespace string, applicationNamespaces []string, webhookParalle
db: argoDB,
queue: make(chan any, payloadQueueSize),
maxWebhookPayloadSizeB: maxWebhookPayloadSizeB,
appsLister: appsLister,
}
acdWebhook.startWorkerPool(webhookParallelism)
@@ -150,10 +154,12 @@ func (a *ArgoCDWebhookHandler) affectedRevisionInfo(payloadIf any) (webURLs []st
case azuredevops.GitPushEvent:
// See: https://learn.microsoft.com/en-us/azure/devops/service-hooks/events?view=azure-devops#git.push
webURLs = append(webURLs, payload.Resource.Repository.RemoteURL)
revision = ParseRevision(payload.Resource.RefUpdates[0].Name)
change.shaAfter = ParseRevision(payload.Resource.RefUpdates[0].NewObjectID)
change.shaBefore = ParseRevision(payload.Resource.RefUpdates[0].OldObjectID)
touchedHead = payload.Resource.RefUpdates[0].Name == payload.Resource.Repository.DefaultBranch
if len(payload.Resource.RefUpdates) > 0 {
revision = ParseRevision(payload.Resource.RefUpdates[0].Name)
change.shaAfter = ParseRevision(payload.Resource.RefUpdates[0].NewObjectID)
change.shaBefore = ParseRevision(payload.Resource.RefUpdates[0].OldObjectID)
touchedHead = payload.Resource.RefUpdates[0].Name == payload.Resource.Repository.DefaultBranch
}
// unfortunately, Azure DevOps doesn't provide a list of changed files
case github.PushPayload:
// See: https://developer.github.com/v3/activity/events/types/#pushevent
@@ -251,13 +257,15 @@ func (a *ArgoCDWebhookHandler) affectedRevisionInfo(payloadIf any) (webURLs []st
// Webhook module does not parse the inner links
if payload.Repository.Links != nil {
for _, l := range payload.Repository.Links["clone"].([]any) {
link := l.(map[string]any)
if link["name"] == "http" {
webURLs = append(webURLs, link["href"].(string))
}
if link["name"] == "ssh" {
webURLs = append(webURLs, link["href"].(string))
clone, ok := payload.Repository.Links["clone"].([]any)
if ok {
for _, l := range clone {
link := l.(map[string]any)
if link["name"] == "http" || link["name"] == "ssh" {
if href, ok := link["href"].(string); ok {
webURLs = append(webURLs, href)
}
}
}
}
}
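The hardened link handling above relies on Go's comma-ok type assertion, which yields `false` instead of panicking when the payload has an unexpected shape. A self-contained sketch, fed inputs shaped like the malicious payloads in the tests below:

```go
package main

import "fmt"

// extractCloneURLs walks a Bitbucket-style links map defensively: every
// type assertion uses the comma-ok form, so a malformed payload (e.g.
// "clone" being a plain string, or "href" not being a string) is skipped
// instead of panicking the handler.
func extractCloneURLs(links map[string]any) []string {
	var urls []string
	clone, ok := links["clone"].([]any)
	if !ok {
		return urls
	}
	for _, l := range clone {
		link, ok := l.(map[string]any)
		if !ok {
			continue
		}
		if link["name"] == "http" || link["name"] == "ssh" {
			if href, ok := link["href"].(string); ok {
				urls = append(urls, href)
			}
		}
	}
	return urls
}

func main() {
	// Well-formed payload.
	fmt.Println(extractCloneURLs(map[string]any{
		"clone": []any{map[string]any{"name": "http", "href": "https://example.test/repo.git"}},
	}))
	// Malformed payloads that previously caused a panic.
	fmt.Println(extractCloneURLs(map[string]any{"clone": "boom"}))
	fmt.Println(extractCloneURLs(map[string]any{
		"clone": []any{map[string]any{"name": "http", "href": []string{}}},
	}))
}
```

A bare assertion like `l.(map[string]any)` panics on mismatch; the two-value form is the idiomatic way to handle untrusted `any` payloads such as webhook bodies.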
@@ -276,11 +284,13 @@ func (a *ArgoCDWebhookHandler) affectedRevisionInfo(payloadIf any) (webURLs []st
// so we cannot update changedFiles for this type of payload
case gogsclient.PushPayload:
webURLs = append(webURLs, payload.Repo.HTMLURL)
revision = ParseRevision(payload.Ref)
change.shaAfter = ParseRevision(payload.After)
change.shaBefore = ParseRevision(payload.Before)
touchedHead = bool(payload.Repo.DefaultBranch == revision)
if payload.Repo != nil {
webURLs = append(webURLs, payload.Repo.HTMLURL)
touchedHead = payload.Repo.DefaultBranch == revision
}
for _, commit := range payload.Commits {
changedFiles = append(changedFiles, commit.Added...)
changedFiles = append(changedFiles, commit.Modified...)
@@ -313,8 +323,8 @@ func (a *ArgoCDWebhookHandler) HandleEvent(payload any) {
nsFilter = ""
}
appIf := a.appClientset.ArgoprojV1alpha1().Applications(nsFilter)
apps, err := appIf.List(context.Background(), metav1.ListOptions{})
appIf := a.appsLister.Applications(nsFilter)
apps, err := appIf.List(labels.Everything())
if err != nil {
log.Warnf("Failed to list applications: %v", err)
return
@@ -339,9 +349,9 @@ func (a *ArgoCDWebhookHandler) HandleEvent(payload any) {
// Skip any application that is neither in the control plane's namespace
// nor in the list of enabled namespaces.
var filteredApps []v1alpha1.Application
for _, app := range apps.Items {
for _, app := range apps {
if app.Namespace == a.ns || glob.MatchStringInList(a.appNs, app.Namespace, glob.REGEXP) {
filteredApps = append(filteredApps, app)
filteredApps = append(filteredApps, *app)
}
}
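The lister returns `[]*v1alpha1.Application` rather than a `List` object with an `Items` field, which is why the filter loop above now ranges over pointers and appends `*app`. A toy sketch of that pattern, using a stand-in struct rather than the real Argo CD types:

```go
package main

import "fmt"

// Application stands in for v1alpha1.Application; only the fields the
// filter cares about are modeled here.
type Application struct {
	Namespace string
	Name      string
}

// filterByNamespace mimics the updated loop: it ranges over a slice of
// pointers (what a lister returns) and appends dereferenced values, so the
// result is a plain []Application like before the change.
func filterByNamespace(apps []*Application, allowed func(string) bool) []Application {
	var filtered []Application
	for _, app := range apps {
		if allowed(app.Namespace) {
			filtered = append(filtered, *app)
		}
	}
	return filtered
}

func main() {
	apps := []*Application{
		{Namespace: "argocd", Name: "in-scope"},
		{Namespace: "other", Name: "out-of-scope"},
	}
	filtered := filterByNamespace(apps, func(ns string) bool { return ns == "argocd" })
	for _, a := range filtered {
		fmt.Println(a.Namespace, a.Name)
	}
}
```

Listing from an informer-backed lister instead of the clientset avoids an API-server round trip per webhook, which is the point of the handler change.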


@@ -15,8 +15,11 @@ import (
"text/template"
"time"
"github.com/go-playground/webhooks/v6/azuredevops"
bb "github.com/ktrysmt/go-bitbucket"
"github.com/stretchr/testify/mock"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"github.com/go-playground/webhooks/v6/bitbucket"
@@ -28,6 +31,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
kubetesting "k8s.io/client-go/testing"
argov1 "github.com/argoproj/argo-cd/v3/pkg/client/listers/application/v1alpha1"
servercache "github.com/argoproj/argo-cd/v3/server/cache"
"github.com/argoproj/argo-cd/v3/util/cache/appstate"
"github.com/argoproj/argo-cd/v3/util/db"
@@ -98,6 +102,31 @@ func NewMockHandlerForBitbucketCallback(reactor *reactorDef, applicationNamespac
return newMockHandler(reactor, applicationNamespaces, defaultMaxPayloadSize, &mockDB, &argoSettings, objects...)
}
type fakeAppsLister struct {
argov1.ApplicationLister
argov1.ApplicationNamespaceLister
namespace string
clientset *appclientset.Clientset
}
func (f *fakeAppsLister) Applications(namespace string) argov1.ApplicationNamespaceLister {
return &fakeAppsLister{namespace: namespace, clientset: f.clientset}
}
func (f *fakeAppsLister) List(selector labels.Selector) ([]*v1alpha1.Application, error) {
res, err := f.clientset.ArgoprojV1alpha1().Applications(f.namespace).List(context.Background(), metav1.ListOptions{
LabelSelector: selector.String(),
})
if err != nil {
return nil, err
}
var apps []*v1alpha1.Application
for i := range res.Items {
apps = append(apps, &res.Items[i])
}
return apps, nil
}
func newMockHandler(reactor *reactorDef, applicationNamespaces []string, maxPayloadSize int64, argoDB db.ArgoDB, argoSettings *settings.ArgoCDSettings, objects ...runtime.Object) *ArgoCDWebhookHandler {
appClientset := appclientset.NewSimpleClientset(objects...)
if reactor != nil {
@@ -109,8 +138,7 @@ func newMockHandler(reactor *reactorDef, applicationNamespaces []string, maxPayl
appClientset.AddReactor(reactor.verb, reactor.resource, reactor.reaction)
}
cacheClient := cacheutil.NewCache(cacheutil.NewInMemoryCache(1 * time.Hour))
return NewHandler("argocd", applicationNamespaces, 10, appClientset, argoSettings, &fakeSettingsSrc{}, cache.NewCache(
return NewHandler("argocd", applicationNamespaces, 10, appClientset, &fakeAppsLister{clientset: appClientset}, argoSettings, &fakeSettingsSrc{}, cache.NewCache(
cacheClient,
1*time.Minute,
1*time.Minute,
@@ -702,6 +730,26 @@ func Test_affectedRevisionInfo_appRevisionHasChanged(t *testing.T) {
{true, "refs/tags/no-slashes", bitbucketPushPayload("no-slashes"), "bitbucket push branch or tag name without slashes, targetRevision tag prefixed"},
{true, "refs/tags/no-slashes", bitbucketRefChangedPayload("no-slashes"), "bitbucket ref changed branch or tag name without slashes, targetRevision tag prefixed"},
{true, "refs/tags/no-slashes", gogsPushPayload("no-slashes"), "gogs push branch or tag name without slashes, targetRevision tag prefixed"},
// Tests fix for https://github.com/argoproj/argo-cd/security/advisories/GHSA-wp4p-9pxh-cgx2
{true, "test", gogsclient.PushPayload{Ref: "test", Repo: nil}, "gogs push branch with nil repo in payload"},
// Testing fix for https://github.com/argoproj/argo-cd/security/advisories/GHSA-gpx4-37g2-c8pv
{false, "test", azuredevops.GitPushEvent{Resource: azuredevops.Resource{RefUpdates: []azuredevops.RefUpdate{}}}, "Azure DevOps malformed push event with no ref updates"},
{true, "some-ref", bitbucketserver.RepositoryReferenceChangedPayload{
Changes: []bitbucketserver.RepositoryChange{
{Reference: bitbucketserver.RepositoryReference{ID: "refs/heads/some-ref"}},
},
Repository: bitbucketserver.Repository{Links: map[string]any{"clone": "boom"}}, // The string "boom" here is what previously caused a panic.
}, "bitbucket push branch or tag name, malformed link"}, // https://github.com/argoproj/argo-cd/security/advisories/GHSA-f9gq-prrc-hrhc
{true, "some-ref", bitbucketserver.RepositoryReferenceChangedPayload{
Changes: []bitbucketserver.RepositoryChange{
{Reference: bitbucketserver.RepositoryReference{ID: "refs/heads/some-ref"}},
},
Repository: bitbucketserver.Repository{Links: map[string]any{"clone": []any{map[string]any{"name": "http", "href": []string{}}}}}, // The href as an empty array is what previously caused a panic.
}, "bitbucket push branch or tag name, malformed href"},
}
for _, testCase := range tests {
testCopy := testCase