Compare commits

...

20 Commits

Author SHA1 Message Date
github-actions[bot]
973eccee0a Bump version to 3.2.0-rc2 on release-3.2 branch (#24797)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 11:32:15 -04:00
Ville Vesilehto
8f8a1ecacb Merge commit from fork
Fixed a race condition in repository credentials handling by
implementing deep copying of secrets before modification.
This prevents concurrent map read/write panics when multiple
goroutines access the same secret.

The fix ensures thread-safe operations by always operating on
copies rather than shared objects.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-09-30 10:45:59 -04:00
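
The commit above describes fixing a concurrent map read/write panic by deep-copying secrets before modification. A minimal Go sketch of that pattern follows — the `Secret` type here is a hypothetical stand-in, not Argo CD's actual `*corev1.Secret` handling:

```go
package main

import "fmt"

// Secret mimics the shape of a Kubernetes secret's data map.
// (Hypothetical minimal type for illustration only.)
type Secret struct {
	Data map[string][]byte
}

// DeepCopy returns an independent copy, so callers can mutate the
// result without racing against goroutines reading the shared original.
func (s *Secret) DeepCopy() *Secret {
	out := &Secret{Data: make(map[string][]byte, len(s.Data))}
	for k, v := range s.Data {
		// Copy each value slice too; map values are shared otherwise.
		out.Data[k] = append([]byte(nil), v...)
	}
	return out
}

func main() {
	shared := &Secret{Data: map[string][]byte{"username": []byte("admin")}}
	// Operate on a copy; the shared object is never written to.
	local := shared.DeepCopy()
	local.Data["username"] = []byte("other")
	fmt.Println(string(shared.Data["username"])) // prints "admin"
}
```

Because the shared object is only ever read, no write can race with concurrent readers, which is what triggers Go's concurrent map read/write panic.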
Michael Crenshaw
46409ae734 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:45:32 -04:00
Michael Crenshaw
5f5d46c78b Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:07:24 -04:00
Michael Crenshaw
722036d447 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 09:45:21 -04:00
argo-cd-cherry-pick-bot[bot]
001bfda068 fix: #24781 update crossplane healthchecks to V2 version (cherry-pick #24782 for 3.2) (#24784)
Signed-off-by: Jonasz Łasut-Balcerzak <jonasz.lasut@gmail.com>
Co-authored-by: Jonasz Łasut-Balcerzak <jonasz.lasut@gmail.com>
2025-09-30 18:04:50 +05:30
argo-cd-cherry-pick-bot[bot]
4821d71e3d fix(health): typo in PromotionStrategy health.lua (cherry-pick #24726 for 3.2) (#24760)
Co-authored-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>
2025-09-27 23:45:10 +02:00
Atif Ali
ef8ac49807 fix: Clear ApplicationSet applicationStatus when ProgressiveSync is disabled (cherry-pick #24587 for 3.2) (#24716)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-09-26 10:19:45 -04:00
Alexander Matyushentsev
feab307df3 feat: add status.resourcesCount field to appset and change limit default (#24698) (#24711)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-24 17:35:03 +05:30
argo-cd-cherry-pick-bot[bot]
087378c669 fix: update ExternalSecret discovery.lua to also include the refreshPolicy (cherry-pick #24707 for 3.2) (#24713)
Signed-off-by: AvivGuiser <avivguiser@gmail.com>
Co-authored-by: AvivGuiser <avivguiser@gmail.com>
2025-09-23 13:38:41 -04:00
Alexander Matyushentsev
f3c8e1d5e3 fix: limit number of resources in appset status (#24690) (#24697)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-22 14:58:00 -07:00
argo-cd-cherry-pick-bot[bot]
28510cdda6 fix: resolve argocdService initialization issue in notifications CLI (cherry-pick #24664 for 3.2) (#24680)
Signed-off-by: puretension <rlrlfhtm5@gmail.com>
Co-authored-by: DOHYEONG LEE <rlrlfhtm5@gmail.com>
2025-09-22 19:40:48 +02:00
argo-cd-cherry-pick-bot[bot]
6a2df4380a ci(release): only set latest release in github when latest (cherry-pick #24525 for 3.2) (#24686)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-22 11:46:55 -04:00
Michael Crenshaw
cd87a13a0d chore(ci): update github runners to oci gh arc runners (3.2) (#24632) (#24653)
Signed-off-by: Koray Oksay <koray.oksay@gmail.com>
Co-authored-by: Koray Oksay <koray.oksay@gmail.com>
2025-09-18 20:01:54 -04:00
argo-cd-cherry-pick-bot[bot]
1453367645 fix: Progress Sync Unknown in UI (cherry-pick #24202 for 3.2) (#24641)
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Atif Ali <56743004+aali309@users.noreply.github.com>
2025-09-18 14:37:39 -04:00
argo-cd-cherry-pick-bot[bot]
50531e6ab3 fix(oci): loosen up layer restrictions (cherry-pick #24640 for 3.2) (#24648)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-09-18 06:25:30 -10:00
argo-cd-cherry-pick-bot[bot]
bf9f927d55 fix: use informer in webhook handler to reduce memory usage (cherry-pick #24622 for 3.2) (#24623)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-17 14:51:08 -07:00
argo-cd-cherry-pick-bot[bot]
ee0de13be4 docs: Delete dangling word in Source Hydrator docs (cherry-pick #24601 for 3.2) (#24604)
Signed-off-by: José Maia <josecbmaia@hotmail.com>
Co-authored-by: José Maia <josecbmaia@hotmail.com>
2025-09-17 11:36:06 -04:00
argo-cd-cherry-pick-bot[bot]
4ac3f920d5 chore: bumps redis version to 8.2.1 (cherry-pick #24523 for 3.2) (#24582)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
Co-authored-by: Papapetrou Patroklos <1743100+ppapapetrou76@users.noreply.github.com>
2025-09-16 12:22:10 -04:00
github-actions[bot]
06ef059f9f Bump version to 3.2.0-rc1 on release-3.2 branch (#24581)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-16 09:37:35 -04:00
60 changed files with 2387 additions and 1250 deletions

View File

@@ -407,7 +407,7 @@ jobs:
test-e2e:
name: Run end-to-end tests
if: ${{ needs.changes.outputs.backend == 'true' }}
runs-on: ubuntu-latest-16-cores
runs-on: oracle-vm-16cpu-64gb-x86-64
strategy:
fail-fast: false
matrix:
@@ -426,7 +426,7 @@ jobs:
- build-go
- changes
env:
GOPATH: /home/runner/go
GOPATH: /home/ubuntu/go
ARGOCD_FAKE_IN_CLUSTER: 'true'
ARGOCD_SSH_DATA_PATH: '/tmp/argo-e2e/app/config/ssh'
ARGOCD_TLS_DATA_PATH: '/tmp/argo-e2e/app/config/tls'
@@ -462,9 +462,9 @@ jobs:
set -x
curl -sfL https://get.k3s.io | sh -
sudo chmod -R a+rw /etc/rancher/k3s
sudo mkdir -p $HOME/.kube && sudo chown -R runner $HOME/.kube
sudo mkdir -p $HOME/.kube && sudo chown -R ubuntu $HOME/.kube
sudo k3s kubectl config view --raw > $HOME/.kube/config
sudo chown runner $HOME/.kube/config
sudo chown ubuntu $HOME/.kube/config
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build cache
@@ -474,7 +474,7 @@ jobs:
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Add ~/go/bin to PATH
run: |
echo "/home/runner/go/bin" >> $GITHUB_PATH
echo "/home/ubuntu/go/bin" >> $GITHUB_PATH
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
@@ -496,11 +496,11 @@ jobs:
run: |
docker pull ghcr.io/dexidp/dex:v2.43.0
docker pull argoproj/argo-cd-ci-builder:v1.0.0
docker pull redis:7.2.7-alpine
docker pull redis:8.2.1-alpine
- name: Create target directory for binaries in the build-process
run: |
mkdir -p dist
chown runner dist
chown ubuntu dist
- name: Run E2E server and wait for it being available
timeout-minutes: 30
run: |

View File

@@ -32,6 +32,42 @@ jobs:
quay_username: ${{ secrets.RELEASE_QUAY_USERNAME }}
quay_password: ${{ secrets.RELEASE_QUAY_TOKEN }}
setup-variables:
name: Setup Release Variables
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
outputs:
is_pre_release: ${{ steps.var.outputs.is_pre_release }}
is_latest_release: ${{ steps.var.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup variables
id: var
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_RELEASE_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | grep -v '-' | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo ${{ github.ref_name }} | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
IS_LATEST=false
# Ensure latest release tag matches github.ref_name
if [[ $LATEST_RELEASE_TAG == ${{ github.ref_name }} ]];then
IS_LATEST=true
fi
echo "is_pre_release=$PRE_RELEASE" >> $GITHUB_OUTPUT
echo "is_latest_release=$IS_LATEST" >> $GITHUB_OUTPUT
argocd-image-provenance:
needs: [argocd-image]
permissions:
@@ -50,15 +86,17 @@ jobs:
goreleaser:
needs:
- setup-variables
- argocd-image
- argocd-image-provenance
permissions:
contents: write # used for uploading assets
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
GORELEASER_MAKE_LATEST: ${{ needs.setup-variables.outputs.is_latest_release }}
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -142,7 +180,7 @@ jobs:
permissions:
contents: write # Needed for release uploads
outputs:
hashes: ${{ steps.sbom-hash.outputs.hashes}}
hashes: ${{ steps.sbom-hash.outputs.hashes }}
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
steps:
@@ -221,6 +259,7 @@ jobs:
post-release:
needs:
- setup-variables
- argocd-image
- goreleaser
- generate-sbom
@@ -229,6 +268,8 @@ jobs:
pull-requests: write # Needed to create PR for VERSION update.
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -242,27 +283,6 @@ jobs:
git config --global user.email 'ci@argoproj.com'
git config --global user.name 'CI'
- name: Check if tag is the latest version and not a pre-release
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo $LATEST_TAG | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
# Ensure latest tag matches github.ref_name & not a pre-release
if [[ $LATEST_TAG == ${{ github.ref_name }} ]] && [[ $PRE_RELEASE != 'true' ]];then
echo "TAG_STABLE=true" >> $GITHUB_ENV
else
echo "TAG_STABLE=false" >> $GITHUB_ENV
fi
- name: Update stable tag to latest version
run: |
git tag -f stable ${{ github.ref_name }}

View File

@@ -49,13 +49,14 @@ archives:
- argocd-cli
name_template: |-
{{ .ProjectName }}-{{ .Os }}-{{ .Arch }}
formats: [ binary ]
formats: [binary]
checksum:
name_template: 'cli_checksums.txt'
algorithm: sha256
release:
make_latest: '{{ .Env.GORELEASER_MAKE_LATEST }}'
prerelease: auto
draft: false
header: |

View File

@@ -1 +1 @@
3.2.0
3.2.0-rc2

View File

@@ -100,6 +100,7 @@ type ApplicationSetReconciler struct {
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -251,6 +252,16 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, fmt.Errorf("failed to perform progressive sync reconciliation for application set: %w", err)
}
}
} else {
// Progressive Sync is disabled, clear any existing applicationStatus to prevent stale data
if len(applicationSetInfo.Status.ApplicationStatus) > 0 {
logCtx.Infof("Progressive Sync disabled, removing %v AppStatus entries from ApplicationSet %v", len(applicationSetInfo.Status.ApplicationStatus), applicationSetInfo.Name)
err := r.setAppSetApplicationStatus(ctx, logCtx, &applicationSetInfo, []argov1alpha1.ApplicationSetApplicationStatus{})
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to clear AppSet application statuses when Progressive Sync is disabled for %v: %w", applicationSetInfo.Name, err)
}
}
}
var validApps []argov1alpha1.Application
@@ -1398,7 +1409,13 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
sort.Slice(statuses, func(i, j int) bool {
return statuses[i].Name < statuses[j].Name
})
resourcesCount := int64(len(statuses))
if r.MaxResourcesStatusCount > 0 && len(statuses) > r.MaxResourcesStatusCount {
logCtx.Warnf("Truncating ApplicationSet %s resource status from %d to max allowed %d entries", appset.Name, len(statuses), r.MaxResourcesStatusCount)
statuses = statuses[:r.MaxResourcesStatusCount]
}
appset.Status.Resources = statuses
appset.Status.ResourcesCount = resourcesCount
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
@@ -1411,6 +1428,7 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
}
updatedAppset.Status.Resources = appset.Status.Resources
updatedAppset.Status.ResourcesCount = resourcesCount
// Update the newly fetched object with new status resources
err := r.Client.Status().Update(ctx, updatedAppset)

View File

@@ -6410,10 +6410,11 @@ func TestUpdateResourceStatus(t *testing.T) {
require.NoError(t, err)
for _, cc := range []struct {
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
maxResourcesStatusCount int
}{
{
name: "handles an empty application list",
@@ -6577,6 +6578,73 @@ func TestUpdateResourceStatus(t *testing.T) {
apps: []v1alpha1.Application{},
expectedResources: nil,
},
{
name: "truncates resources status list to",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "argocd",
},
Status: v1alpha1.ApplicationSetStatus{
Resources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
{
Name: "app2",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
},
},
},
apps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.AppHealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Name: "app2",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.AppHealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
},
expectedResources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeSynced,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
maxResourcesStatusCount: 1,
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
@@ -6587,13 +6655,14 @@ func TestUpdateResourceStatus(t *testing.T) {
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
MaxResourcesStatusCount: cc.maxResourcesStatusCount,
}
err := r.updateResourcesStatus(t.Context(), log.NewEntry(log.StandardLogger()), &cc.appSet, cc.apps)
@@ -7544,109 +7613,81 @@ func TestSyncApplication(t *testing.T) {
}
}
func TestIsRollingSyncDeletionReversed(t *testing.T) {
tests := []struct {
name string
appset *v1alpha1.ApplicationSet
expected bool
func TestReconcileProgressiveSyncDisabled(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
for _, cc := range []struct {
name string
appSet v1alpha1.ApplicationSet
enableProgressiveSyncs bool
expectedAppStatuses []v1alpha1.ApplicationSetApplicationStatus
}{
{
name: "Deletion Order on strategy is set as Reverse",
appset: &v1alpha1.ApplicationSet{
name: "clears applicationStatus when Progressive Sync is disabled",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "test-appset",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{
{
MatchExpressions: []v1alpha1.ApplicationMatchExpression{
{
Key: "environment",
Operator: "In",
Values: []string{
"dev",
},
},
},
},
{
MatchExpressions: []v1alpha1.ApplicationMatchExpression{
{
Key: "environment",
Operator: "In",
Values: []string{
"staging",
},
},
},
},
},
Generators: []v1alpha1.ApplicationSetGenerator{},
Template: v1alpha1.ApplicationSetTemplate{},
},
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "test-appset-guestbook",
Message: "Application resource became Healthy, updating status from Progressing to Healthy.",
Status: "Healthy",
Step: "1",
},
DeletionOrder: ReverseDeletionOrder,
},
},
},
expected: true,
enableProgressiveSyncs: false,
expectedAppStatuses: nil,
},
{
name: "Deletion Order on strategy is set as AllAtOnce",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{},
},
DeletionOrder: AllAtOnceDeletionOrder,
},
} {
t.Run(cc.name, func(t *testing.T) {
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&cc.appSet).WithStatusSubresource(&cc.appSet).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Renderer: &utils.Render{},
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
EnableProgressiveSyncs: cc.enableProgressiveSyncs,
}
req := ctrl.Request{
NamespacedName: types.NamespacedName{
Namespace: cc.appSet.Namespace,
Name: cc.appSet.Name,
},
},
expected: false,
},
{
name: "Deletion Order on strategy is set as Reverse but no steps in RollingSync",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{},
},
DeletionOrder: ReverseDeletionOrder,
},
},
},
expected: false,
},
{
name: "Deletion Order on strategy is set as Reverse, but AllAtOnce is explicitly set",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "AllAtOnce",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{},
},
DeletionOrder: ReverseDeletionOrder,
},
},
},
expected: false,
},
{
name: "Strategy is Nil",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{},
},
},
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := isProgressiveSyncDeletionOrderReversed(tt.appset)
assert.Equal(t, tt.expected, result)
}
// Run reconciliation
_, err = r.Reconcile(t.Context(), req)
require.NoError(t, err)
// Fetch the updated ApplicationSet
var updatedAppSet v1alpha1.ApplicationSet
err = r.Get(t.Context(), req.NamespacedName, &updatedAppSet)
require.NoError(t, err)
// Verify the applicationStatus field
assert.Equal(t, cc.expectedAppStatuses, updatedAppSet.Status.ApplicationStatus, "applicationStatus should match expected value")
})
}
}

assets/swagger.json (generated)
View File

@@ -7322,6 +7322,11 @@
"items": {
"$ref": "#/definitions/applicationv1alpha1ResourceStatus"
}
},
"resourcesCount": {
"description": "ResourcesCount is the total number of resources managed by this application set. The count may be higher than actual number of items in the Resources field when\nthe number of managed resources exceeds the limit imposed by the controller (to avoid making the status field too large).",
"type": "integer",
"format": "int64"
}
}
},

View File

@@ -79,6 +79,7 @@ func NewCommand() *cobra.Command {
enableScmProviders bool
webhookParallelism int
tokenRefStrictMode bool
maxResourcesStatusCount int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@@ -231,6 +232,7 @@ func NewCommand() *cobra.Command {
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -275,6 +277,7 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&metricsAplicationsetLabels, "metrics-applicationset-labels", []string{}, "List of Application labels that will be added to the argocd_applicationset_labels metric")
command.Flags().BoolVar(&enableGitHubAPIMetrics, "enable-github-api-metrics", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_GITHUB_API_METRICS", false), "Enable GitHub API metrics for generators that use the GitHub API")
command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 0, 0, math.MaxInt), "Max number of resources stored in appset status.")
return &command
}

View File

@@ -30,11 +30,12 @@ func NewNotificationsCommand() *cobra.Command {
)
var argocdService service.Service
toolsCommand := cmd.NewToolsCommand(
"notifications",
"argocd admin notifications",
applications,
settings.GetFactorySettingsForCLI(argocdService, "argocd-notifications-secret", "argocd-notifications-cm", false),
settings.GetFactorySettingsForCLI(func() service.Service { return argocdService }, "argocd-notifications-secret", "argocd-notifications-cm", false),
func(clientConfig clientcmd.ClientConfig) {
k8sCfg, err := clientConfig.ClientConfig()
if err != nil {

View File

@@ -292,6 +292,8 @@ data:
applicationsetcontroller.global.preserved.labels: "acme.com/label1,acme.com/label2"
# Enable GitHub API metrics for generators that use GitHub API
applicationsetcontroller.enable.github.api.metrics: "false"
# The maximum number of resources stored in the status of an ApplicationSet. This is a safeguard to prevent the status from growing too large.
applicationsetcontroller.status.max.resources.count: "5000"
## Argo CD Notifications Controller Properties
# Set the logging level. One of: debug|info|warn|error (default "info")

View File

@@ -37,6 +37,7 @@ argocd-applicationset-controller [flags]
--kubeconfig string Path to a kube config. Only required if out-of-cluster
--logformat string Set the logging format. One of: json|text (default "json")
--loglevel string Set the logging level. One of: debug|info|warn|error (default "info")
--max-resources-status-count int Max number of resources stored in appset status.
--metrics-addr string The address the metric endpoint binds to. (default ":8080")
--metrics-applicationset-labels strings List of Application labels that will be added to the argocd_applicationset_labels metric
-n, --namespace string If present, the namespace scope for this CLI request

View File

@@ -1,2 +1,5 @@
This page is populated for released Argo CD versions. Use the version selector to view this table for a specific
version.
| Argo CD version | Kubernetes versions |
|-----------------|---------------------|
| 3.2 | v1.33, v1.32, v1.31, v1.30 |
| 3.1 | v1.33, v1.32, v1.31, v1.30 |
| 3.0 | v1.32, v1.31, v1.30, v1.29 |

View File

@@ -86,3 +86,9 @@ If you do not want your CronJob to affect the Application's aggregated Health, y
Due to security reasons ([GHSA-786q-9hcg-v9ff](https://github.com/argoproj/argo-cd/security/advisories/GHSA-786q-9hcg-v9ff)),
the project API response was sanitized to remove sensitive information. This includes
credentials of project-scoped repositories and clusters.
## ApplicationSet `resources` field of `status` is limited to 5000 elements by default
The `resources` field in an ApplicationSet's `status` is now limited to 5000 elements by default. This prevents
the status object from growing too large and exceeding etcd size limits. The limit can be configured by setting the
`applicationsetcontroller.status.max.resources.count` key in the `argocd-cmd-params-cm` ConfigMap.
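
The limit described above truncates `status.resources` while `status.resourcesCount` still reports the true total, as the controller diff earlier in this comparison shows. A minimal sketch of that semantics (`resourceStatus` is a hypothetical stand-in for `v1alpha1.ResourceStatus`, not the exact controller code):

```go
package main

import "fmt"

// resourceStatus is a hypothetical stand-in for the controller's
// v1alpha1.ResourceStatus type.
type resourceStatus struct{ Name string }

// truncate keeps at most max entries while reporting the true total:
// the stored list is capped, but the count reflects every managed
// resource. A max of 0 means "no limit".
func truncate(statuses []resourceStatus, max int) ([]resourceStatus, int64) {
	total := int64(len(statuses))
	if max > 0 && len(statuses) > max {
		statuses = statuses[:max]
	}
	return statuses, total
}

func main() {
	kept, total := truncate([]resourceStatus{{"a"}, {"b"}, {"c"}}, 2)
	fmt.Println(len(kept), total) // 2 3
}
```

Capturing the total before slicing is the key detail: consumers can detect truncation by comparing `resourcesCount` against the length of `resources`.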

View File

@@ -195,7 +195,7 @@ git commit -m "Bump image to v1.2.3" \
```
!!!note Newlines are not allowed
The commit trailers must not contain newlines. The
The commit trailers must not contain newlines.
So the full CI script might look something like this:

View File

@@ -187,6 +187,12 @@ spec:
name: argocd-cmd-params-cm
key: applicationsetcontroller.requeue.after
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
name: argocd-cmd-params-cm
key: applicationsetcontroller.status.max.resources.count
optional: true
volumeMounts:
- mountPath: /app/config/ssh
name: ssh-known-hosts

View File

@@ -12,4 +12,4 @@ resources:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: latest
newTag: v3.2.0-rc2

View File

@@ -5,7 +5,7 @@ kind: Kustomization
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: latest
newTag: v3.2.0-rc2
resources:
- ./application-controller
- ./dex

View File

@@ -40,7 +40,7 @@ spec:
serviceAccountName: argocd-redis
containers:
- name: redis
image: redis:7.2.7-alpine
image: redis:8.2.1-alpine
imagePullPolicy: Always
args:
- "--save"

View File

@@ -23737,6 +23737,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata
@@ -24838,7 +24841,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -24964,7 +24973,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25076,7 +25085,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -25092,7 +25101,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25383,7 +25392,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25435,7 +25444,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -25783,7 +25792,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -23737,6 +23737,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata
@@ -24806,7 +24809,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -24910,7 +24919,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -24926,7 +24935,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25217,7 +25226,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25269,7 +25278,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -25617,7 +25626,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -12,4 +12,4 @@ resources:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: latest
newTag: v3.2.0-rc2

View File

@@ -17823,6 +17823,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata

View File

@@ -12,7 +12,7 @@ patches:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: latest
newTag: v3.2.0-rc2
resources:
- ../../base/application-controller
- ../../base/applicationset-controller

View File

@@ -1,6 +1,6 @@
dependencies:
- name: redis-ha
repository: https://dandydeveloper.github.io/charts
version: 4.33.2
digest: sha256:dfb01cb345d8e0c3cf41294ca9b7eae8272083f3ed165cf58d35e492403cf0aa
generated: "2025-03-04T08:25:54.199096-08:00"
version: 4.34.11
digest: sha256:65651a2e28ac28852dd75e642e42ec06557d9bc14949c62d817dec315812346e
generated: "2025-09-16T11:02:38.114394+03:00"

View File

@@ -1,4 +1,4 @@
dependencies:
- name: redis-ha
version: 4.33.2
version: 4.34.11
repository: https://dandydeveloper.github.io/charts

View File

@@ -9,7 +9,7 @@ metadata:
labels:
heritage: Helm
release: argocd
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
app: argocd-redis-ha
secrets:
- name: argocd-redis
@@ -23,7 +23,7 @@ metadata:
labels:
heritage: Helm
release: argocd
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
app: argocd-redis-ha
---
# Source: redis-ha/charts/redis-ha/templates/redis-ha-configmap.yaml
@@ -35,7 +35,7 @@ metadata:
labels:
heritage: Helm
release: argocd
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
app: argocd-redis-ha
data:
redis.conf: |
@@ -606,12 +606,28 @@ data:
 if [ "$MASTER" = "$ANNOUNCE_IP" ]; then
     redis_role
     if [ "$ROLE" != "master" ]; then
-        reinit
+        echo "waiting for redis to become master"
+        sleep 10
+        identify_master
+        redis_role
+        echo "Redis role is $ROLE, expected role is master. No need to reinitialize."
+        if [ "$ROLE" != "master" ]; then
+            echo "Redis role is $ROLE, expected role is master, reinitializing"
+            reinit
+        fi
     fi
 elif [ "${MASTER}" ]; then
     identify_redis_master
     if [ "$REDIS_MASTER" != "$MASTER" ]; then
-        reinit
+        echo "Redis master and local master are not the same. waiting."
+        sleep 10
+        identify_master
+        identify_redis_master
+        echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}. No need to reinitialize."
+        if [ "${REDIS_MASTER}" != "${MASTER}" ]; then
+            echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}, reinitializing"
+            reinit
+        fi
     fi
 fi
 done
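The behavior change in this hunk: instead of calling `reinit` as soon as the role or master looks wrong, the updated chart waits, re-checks, and only reinitializes if the state is still wrong. A minimal, self-contained sketch of that retry-before-reinit pattern — with the chart's helpers stubbed out, and the 10-second sleep omitted — could look like:

```shell
# Sketch of the retry-before-reinit pattern added in redis-ha 4.34.11.
# redis_role / identify_master / reinit are stubs standing in for the
# chart's real helpers; the real script also sleeps 10s between checks.
ROLE="slave"
ATTEMPTS=0
REINIT_RAN=0

redis_role() {
    ATTEMPTS=$((ATTEMPTS + 1))
    # Stub: pretend sentinel promotes this node by the second check.
    if [ "$ATTEMPTS" -ge 2 ]; then ROLE="master"; fi
}
identify_master() { :; }   # stub: would query sentinel for the master IP
reinit() { REINIT_RAN=1; } # stub: would wipe local state and restart

redis_role
if [ "$ROLE" != "master" ]; then
    echo "waiting for redis to become master"
    identify_master          # real script: sleep 10 first
    redis_role
    if [ "$ROLE" != "master" ]; then
        reinit               # only reinit if the role is *still* wrong
    fi
fi
```

Because the second check sees the corrected role, `reinit` never runs — which is exactly the spurious-reinitialization case the chart update avoids.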
@@ -779,7 +795,7 @@ metadata:
labels:
heritage: Helm
release: argocd
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
app: argocd-redis-ha
data:
redis_liveness.sh: |
@@ -855,7 +871,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
rules:
- apiGroups:
- ""
@@ -874,8 +890,8 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
component: argocd-redis-ha-haproxy
chart: redis-ha-4.34.11
component: haproxy
rules:
- apiGroups:
- ""
@@ -894,7 +910,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
subjects:
- kind: ServiceAccount
name: argocd-redis-ha
@@ -913,8 +929,8 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
component: argocd-redis-ha-haproxy
chart: redis-ha-4.34.11
component: haproxy
subjects:
- kind: ServiceAccount
name: argocd-redis-ha-haproxy
@@ -933,7 +949,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
annotations:
spec:
publishNotReadyAddresses: true
@@ -962,7 +978,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
annotations:
spec:
publishNotReadyAddresses: true
@@ -991,7 +1007,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
annotations:
spec:
publishNotReadyAddresses: true
@@ -1020,7 +1036,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
annotations:
spec:
type: ClusterIP
@@ -1048,8 +1064,8 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
component: argocd-redis-ha-haproxy
chart: redis-ha-4.34.11
component: haproxy
annotations:
spec:
type: ClusterIP
@@ -1076,7 +1092,8 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
component: haproxy
spec:
strategy:
type: RollingUpdate
@@ -1086,12 +1103,14 @@ spec:
matchLabels:
app: redis-ha-haproxy
release: argocd
component: haproxy
template:
metadata:
name: argocd-redis-ha-haproxy
labels:
app: redis-ha-haproxy
release: argocd
component: haproxy
annotations:
prometheus.io/port: "9101"
prometheus.io/scrape: "true"
@@ -1109,7 +1128,7 @@ spec:
nodeSelector:
{}
tolerations:
null
[]
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
@@ -1117,6 +1136,7 @@ spec:
matchLabels:
app: redis-ha-haproxy
release: argocd
component: haproxy
topologyKey: kubernetes.io/hostname
initContainers:
- name: config-init
@@ -1210,7 +1230,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
annotations:
{}
spec:
@@ -1226,7 +1246,7 @@ spec:
template:
metadata:
annotations:
checksum/init-config: bd30e83dfdad9912b6c1cc32a8c26d7d01429a0730f5ee7af380fb593e874d54
checksum/init-config: fd74f7d84e39b3f6eac1d7ce5deb0083e58f218376faf363343d91a0fb4f2563
labels:
release: argocd
app: redis-ha
@@ -1250,7 +1270,7 @@ spec:
automountServiceAccountToken: false
initContainers:
- name: config-init
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
resources:
{}
@@ -1290,7 +1310,7 @@ spec:
containers:
- name: redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
command:
- redis-server
@@ -1334,11 +1354,11 @@ spec:
- -c
- /health/redis_readiness.sh
startupProbe:
initialDelaySeconds: 5
periodSeconds: 10
initialDelaySeconds: 30
periodSeconds: 15
timeoutSeconds: 15
successThreshold: 1
failureThreshold: 3
failureThreshold: 5
exec:
command:
- sh
@@ -1364,7 +1384,7 @@ spec:
- /bin/sh
- /readonly-config/trigger-failover-if-master.sh
- name: sentinel
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
command:
- redis-sentinel
@@ -1437,7 +1457,7 @@ spec:
- sleep 30; redis-cli -p 26379 sentinel reset argocd
- name: split-brain-fix
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
command:
- sh


@@ -27,7 +27,7 @@ redis-ha:
serviceAccount:
automountToken: true
image:
tag: 7.2.7-alpine
tag: 8.2.1-alpine
sentinel:
bind: '0.0.0.0'
lifecycle:


@@ -23737,6 +23737,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata
@@ -25203,12 +25206,28 @@ data:
 if [ "$MASTER" = "$ANNOUNCE_IP" ]; then
     redis_role
     if [ "$ROLE" != "master" ]; then
-        reinit
+        echo "waiting for redis to become master"
+        sleep 10
+        identify_master
+        redis_role
+        echo "Redis role is $ROLE, expected role is master. No need to reinitialize."
+        if [ "$ROLE" != "master" ]; then
+            echo "Redis role is $ROLE, expected role is master, reinitializing"
+            reinit
+        fi
     fi
 elif [ "${MASTER}" ]; then
     identify_redis_master
     if [ "$REDIS_MASTER" != "$MASTER" ]; then
-        reinit
+        echo "Redis master and local master are not the same. waiting."
+        sleep 10
+        identify_master
+        identify_redis_master
+        echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}. No need to reinitialize."
+        if [ "${REDIS_MASTER}" != "${MASTER}" ]; then
+            echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}, reinitializing"
+            reinit
+        fi
     fi
 fi
 done
@@ -26188,7 +26207,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
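The new `ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT` env var above is sourced from a ConfigMap key, so operators could raise or lower the resources-status cap by setting that key in `argocd-cmd-params-cm`. A sketch — the value `"500"` is an arbitrary example, not the shipped default:

```yaml
# Sketch: overriding the ApplicationSet resources-status cap via
# argocd-cmd-params-cm. The key name comes from the manifest above;
# the value "500" is an illustrative assumption.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  applicationsetcontroller.status.max.resources.count: "500"
```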
@@ -26314,7 +26339,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26465,7 +26490,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -26561,7 +26586,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -26685,7 +26710,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -27002,7 +27027,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -27054,7 +27079,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -27428,7 +27453,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -27812,7 +27837,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -27887,7 +27912,7 @@ spec:
template:
metadata:
annotations:
checksum/init-config: bd30e83dfdad9912b6c1cc32a8c26d7d01429a0730f5ee7af380fb593e874d54
checksum/init-config: fd74f7d84e39b3f6eac1d7ce5deb0083e58f218376faf363343d91a0fb4f2563
labels:
app.kubernetes.io/name: argocd-redis-ha
spec:
@@ -27910,7 +27935,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -27958,9 +27983,9 @@ spec:
- sh
- -c
- /health/redis_readiness.sh
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 5
initialDelaySeconds: 30
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 15
volumeMounts:
@@ -27981,7 +28006,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -28056,7 +28081,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -28091,7 +28116,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -23737,6 +23737,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata
@@ -25194,12 +25197,28 @@ data:
 if [ "$MASTER" = "$ANNOUNCE_IP" ]; then
     redis_role
     if [ "$ROLE" != "master" ]; then
-        reinit
+        echo "waiting for redis to become master"
+        sleep 10
+        identify_master
+        redis_role
+        echo "Redis role is $ROLE, expected role is master. No need to reinitialize."
+        if [ "$ROLE" != "master" ]; then
+            echo "Redis role is $ROLE, expected role is master, reinitializing"
+            reinit
+        fi
     fi
 elif [ "${MASTER}" ]; then
     identify_redis_master
     if [ "$REDIS_MASTER" != "$MASTER" ]; then
-        reinit
+        echo "Redis master and local master are not the same. waiting."
+        sleep 10
+        identify_master
+        identify_redis_master
+        echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}. No need to reinitialize."
+        if [ "${REDIS_MASTER}" != "${MASTER}" ]; then
+            echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}, reinitializing"
+            reinit
+        fi
     fi
 fi
 done
@@ -26158,7 +26177,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -26301,7 +26326,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -26397,7 +26422,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -26521,7 +26546,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -26838,7 +26863,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26890,7 +26915,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -27264,7 +27289,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -27648,7 +27673,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -27723,7 +27748,7 @@ spec:
template:
metadata:
annotations:
checksum/init-config: bd30e83dfdad9912b6c1cc32a8c26d7d01429a0730f5ee7af380fb593e874d54
checksum/init-config: fd74f7d84e39b3f6eac1d7ce5deb0083e58f218376faf363343d91a0fb4f2563
labels:
app.kubernetes.io/name: argocd-redis-ha
spec:
@@ -27746,7 +27771,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -27794,9 +27819,9 @@ spec:
- sh
- -c
- /health/redis_readiness.sh
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 5
initialDelaySeconds: 30
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 15
volumeMounts:
@@ -27817,7 +27842,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -27892,7 +27917,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -27927,7 +27952,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -890,12 +890,28 @@ data:
 if [ "$MASTER" = "$ANNOUNCE_IP" ]; then
     redis_role
     if [ "$ROLE" != "master" ]; then
-        reinit
+        echo "waiting for redis to become master"
+        sleep 10
+        identify_master
+        redis_role
+        echo "Redis role is $ROLE, expected role is master. No need to reinitialize."
+        if [ "$ROLE" != "master" ]; then
+            echo "Redis role is $ROLE, expected role is master, reinitializing"
+            reinit
+        fi
     fi
 elif [ "${MASTER}" ]; then
     identify_redis_master
     if [ "$REDIS_MASTER" != "$MASTER" ]; then
-        reinit
+        echo "Redis master and local master are not the same. waiting."
+        sleep 10
+        identify_master
+        identify_redis_master
+        echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}. No need to reinitialize."
+        if [ "${REDIS_MASTER}" != "${MASTER}" ]; then
+            echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}, reinitializing"
+            reinit
+        fi
     fi
 fi
 done
@@ -1875,7 +1891,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -2001,7 +2023,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2152,7 +2174,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -2248,7 +2270,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -2372,7 +2394,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -2689,7 +2711,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2741,7 +2763,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -3115,7 +3137,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -3499,7 +3521,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -3574,7 +3596,7 @@ spec:
template:
metadata:
annotations:
checksum/init-config: bd30e83dfdad9912b6c1cc32a8c26d7d01429a0730f5ee7af380fb593e874d54
checksum/init-config: fd74f7d84e39b3f6eac1d7ce5deb0083e58f218376faf363343d91a0fb4f2563
labels:
app.kubernetes.io/name: argocd-redis-ha
spec:
@@ -3597,7 +3619,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -3645,9 +3667,9 @@ spec:
- sh
- -c
- /health/redis_readiness.sh
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 5
initialDelaySeconds: 30
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 15
volumeMounts:
@@ -3668,7 +3690,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -3743,7 +3765,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -3778,7 +3800,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -881,12 +881,28 @@ data:
 if [ "$MASTER" = "$ANNOUNCE_IP" ]; then
     redis_role
     if [ "$ROLE" != "master" ]; then
-        reinit
+        echo "waiting for redis to become master"
+        sleep 10
+        identify_master
+        redis_role
+        echo "Redis role is $ROLE, expected role is master. No need to reinitialize."
+        if [ "$ROLE" != "master" ]; then
+            echo "Redis role is $ROLE, expected role is master, reinitializing"
+            reinit
+        fi
     fi
 elif [ "${MASTER}" ]; then
     identify_redis_master
     if [ "$REDIS_MASTER" != "$MASTER" ]; then
-        reinit
+        echo "Redis master and local master are not the same. waiting."
+        sleep 10
+        identify_master
+        identify_redis_master
+        echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}. No need to reinitialize."
+        if [ "${REDIS_MASTER}" != "${MASTER}" ]; then
+            echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}, reinitializing"
+            reinit
+        fi
     fi
 fi
 done
@@ -1845,7 +1861,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1988,7 +2010,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -2084,7 +2106,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -2208,7 +2230,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -2525,7 +2547,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2577,7 +2599,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2951,7 +2973,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -3335,7 +3357,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -3410,7 +3432,7 @@ spec:
template:
metadata:
annotations:
checksum/init-config: bd30e83dfdad9912b6c1cc32a8c26d7d01429a0730f5ee7af380fb593e874d54
checksum/init-config: fd74f7d84e39b3f6eac1d7ce5deb0083e58f218376faf363343d91a0fb4f2563
labels:
app.kubernetes.io/name: argocd-redis-ha
spec:
@@ -3433,7 +3455,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -3481,9 +3503,9 @@ spec:
- sh
- -c
- /health/redis_readiness.sh
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 5
initialDelaySeconds: 30
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 15
volumeMounts:
@@ -3504,7 +3526,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -3579,7 +3601,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -3614,7 +3636,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -23737,6 +23737,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata
@@ -25282,7 +25285,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -25408,7 +25417,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25559,7 +25568,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -25655,7 +25664,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -25741,7 +25750,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -25757,7 +25766,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -26048,7 +26057,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26100,7 +26109,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -26472,7 +26481,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -26856,7 +26865,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-application-controller
ports:

manifests/install.yaml generated

@@ -23737,6 +23737,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata
@@ -25250,7 +25253,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -25393,7 +25402,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -25489,7 +25498,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -25575,7 +25584,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -25591,7 +25600,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25882,7 +25891,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25934,7 +25943,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -26306,7 +26315,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -26690,7 +26699,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -969,7 +969,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1095,7 +1101,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1246,7 +1252,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1342,7 +1348,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1428,7 +1434,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -1444,7 +1450,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -1735,7 +1741,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1787,7 +1793,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2159,7 +2165,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2543,7 +2549,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -937,7 +937,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1080,7 +1086,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1176,7 +1182,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
- image: quay.io/argoproj/argocd:latest
+ image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1262,7 +1268,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
- image: public.ecr.aws/docker/library/redis:7.2.7-alpine
+ image: public.ecr.aws/docker/library/redis:8.2.1-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -1278,7 +1284,7 @@ spec:
- argocd
- admin
- redis-initial-password
- image: quay.io/argoproj/argocd:latest
+ image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -1569,7 +1575,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
- image: quay.io/argoproj/argocd:latest
+ image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1621,7 +1627,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
- image: quay.io/argoproj/argocd:latest
+ image: quay.io/argoproj/argocd:v3.2.0-rc2
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -1993,7 +1999,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
- image: quay.io/argoproj/argocd:latest
+ image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2377,7 +2383,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
- image: quay.io/argoproj/argocd:latest
+ image: quay.io/argoproj/argocd:v3.2.0-rc2
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -805,6 +805,9 @@ type ApplicationSetStatus struct {
ApplicationStatus []ApplicationSetApplicationStatus `json:"applicationStatus,omitempty" protobuf:"bytes,2,name=applicationStatus"`
// Resources is a list of Applications resources managed by this application set.
Resources []ResourceStatus `json:"resources,omitempty" protobuf:"bytes,3,opt,name=resources"`
+ // ResourcesCount is the total number of resources managed by this application set. The count may be higher than actual number of items in the Resources field when
+ // the number of managed resources exceeds the limit imposed by the controller (to avoid making the status field too large).
+ ResourcesCount int64 `json:"resourcesCount,omitempty" protobuf:"varint,4,opt,name=resourcesCount"`
}
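The `ResourcesCount` comment above describes a cap-and-count pattern: the controller truncates the `Resources` list when it grows too large, but still reports the true total. A minimal sketch of that semantics (`truncateResources` and `maxCount` are illustrative names, not the controller's actual code):

```go
package main

import "fmt"

// ResourceStatus stands in for Argo CD's resource status entry.
type ResourceStatus struct{ Name string }

// truncateResources is a hypothetical helper showing the ResourcesCount
// semantics: keep at most maxCount entries in the status list, but always
// report the true total so consumers can detect truncation.
func truncateResources(all []ResourceStatus, maxCount int) ([]ResourceStatus, int64) {
	total := int64(len(all))
	kept := all
	if len(kept) > maxCount {
		kept = kept[:maxCount]
	}
	return kept, total
}

func main() {
	kept, total := truncateResources(make([]ResourceStatus, 5), 3)
	// The status holds only the kept entries; resourcesCount reports the total.
	fmt.Println(len(kept), total)
}
```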
// ApplicationSetCondition contains details about an applicationset condition, which is usually an error or warning

File diff suppressed because it is too large

View File

@@ -369,6 +369,10 @@ message ApplicationSetStatus {
// Resources is a list of Applications resources managed by this application set.
repeated ResourceStatus resources = 3;
+ // ResourcesCount is the total number of resources managed by this application set. The count may be higher than actual number of items in the Resources field when
+ // the number of managed resources exceeds the limit imposed by the controller (to avoid making the status field too large).
+ optional int64 resourcesCount = 4;
}
// ApplicationSetStrategy configures how generated Applications are updated in sequence.

View File

@@ -1,4 +1,4 @@
- -- Health check copied from here: https://github.com/crossplane/docs/blob/bd701357e9d5eecf529a0b42f23a78850a6d1d87/content/master/guides/crossplane-with-argo-cd.md
+ -- Health check copied from here: https://github.com/crossplane/docs/blob/709889c5dbe6e5a2ea3dffd66fe276cf465b47b5/content/master/guides/crossplane-with-argo-cd.md
health_status = {
status = "Progressing",
@@ -18,9 +18,10 @@ local has_no_status = {
"Composition",
"CompositionRevision",
"DeploymentRuntimeConfig",
- "ControllerConfig",
+ "ClusterProviderConfig",
"ProviderConfig",
- "ProviderConfigUsage"
+ "ProviderConfigUsage",
+ "ControllerConfig" -- Added to ensure that healthcheck is backwards-compatible with Crossplane v1
}
if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.kind) then
health_status.status = "Healthy"
@@ -29,7 +30,7 @@ if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.
end
if obj.status == nil or next(obj.status) == nil or obj.status.conditions == nil then
- if obj.kind == "ProviderConfig" and obj.status.users ~= nil then
+ if (obj.kind == "ProviderConfig" or obj.kind == "ClusterProviderConfig") and obj.status.users ~= nil then
health_status.status = "Healthy"
health_status.message = "Resource is in use."
return health_status
@@ -54,7 +55,7 @@ for i, condition in ipairs(obj.status.conditions) do
end
end
- if contains({"Ready", "Healthy", "Offered", "Established"}, condition.type) then
+ if contains({"Ready", "Healthy", "Offered", "Established", "ValidPipeline", "RevisionHealthy"}, condition.type) then
if condition.status == "True" then
health_status.status = "Healthy"
health_status.message = "Resource is up-to-date."

View File

@@ -3,3 +3,7 @@ tests:
status: Healthy
message: "Resource is up-to-date."
inputPath: testdata/composition_healthy.yaml
+ - healthStatus:
+ status: Healthy
+ message: "Resource is up-to-date."
+ inputPath: testdata/configurationrevision_healthy.yaml

View File

@@ -0,0 +1,22 @@
apiVersion: pkg.crossplane.io/v1
kind: ConfigurationRevision
metadata:
annotations:
meta.crossplane.io/license: Apache-2.0
meta.crossplane.io/maintainer: Upbound <support@upbound.io>
meta.crossplane.io/source: github.com/upbound/configuration-getting-started
name: upbound-configuration-getting-started-869bca254eb1
spec:
desiredState: Active
ignoreCrossplaneConstraints: false
image: xpkg.upbound.io/upbound/configuration-getting-started:v0.3.0
packagePullPolicy: IfNotPresent
revision: 1
skipDependencyResolution: false
status:
conditions:
- lastTransitionTime: "2025-09-29T18:06:40Z"
observedGeneration: 1
reason: HealthyPackageRevision
status: "True"
type: RevisionHealthy

View File

@@ -1,4 +1,4 @@
- -- Health check copied from here: https://github.com/crossplane/docs/blob/bd701357e9d5eecf529a0b42f23a78850a6d1d87/content/master/guides/crossplane-with-argo-cd.md
+ -- Health check copied from here: https://github.com/crossplane/docs/blob/709889c5dbe6e5a2ea3dffd66fe276cf465b47b5/content/master/guides/crossplane-with-argo-cd.md
health_status = {
status = "Progressing",
@@ -15,6 +15,7 @@ local function contains (table, val)
end
local has_no_status = {
+ "ClusterProviderConfig",
"ProviderConfig",
"ProviderConfigUsage"
}
@@ -26,7 +27,7 @@ if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.
end
if obj.status == nil or next(obj.status) == nil or obj.status.conditions == nil then
- if obj.kind == "ProviderConfig" and obj.status.users ~= nil then
+ if (obj.kind == "ProviderConfig" or obj.kind == "ClusterProviderConfig") and obj.status.users ~= nil then
health_status.status = "Healthy"
health_status.message = "Resource is in use."
return health_status

View File

@@ -7,3 +7,6 @@ discoveryTests:
- inputPath: testdata/external-secret.yaml
result:
- name: "refresh"
+ - inputPath: testdata/external-secret-refresh-policy.yaml
+ result:
+ - name: "refresh"

View File

@@ -3,10 +3,11 @@ local actions = {}
local disable_refresh = false
local time_units = {"ns", "us", "µs", "ms", "s", "m", "h"}
local digits = obj.spec.refreshInterval
+ local policy = obj.spec.refreshPolicy
if digits ~= nil then
digits = tostring(digits)
for _, time_unit in ipairs(time_units) do
- if digits == "0" or digits == "0" .. time_unit then
+ if (digits == "0" or digits == "0" .. time_unit) and policy ~= "OnChange" then
disable_refresh = true
break
end

View File

@@ -0,0 +1,55 @@
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
creationTimestamp: '2021-11-16T21:59:33Z'
generation: 1
name: test-healthy
namespace: argocd
resourceVersion: '136487331'
selfLink: /apis/external-secrets.io/v1alpha1/namespaces/argocd/externalsecrets/test-healthy
uid: 1e754a7e-0781-4d57-932d-4651d5b19586
spec:
data:
- remoteRef:
key: secret/sa/example
property: api.address
secretKey: url
- remoteRef:
key: secret/sa/example
property: ca.crt
secretKey: ca
- remoteRef:
key: secret/sa/example
property: token
secretKey: token
refreshInterval: 0
refreshPolicy: OnChange
secretStoreRef:
kind: SecretStore
name: example
target:
creationPolicy: Owner
template:
data:
config: |
{
"bearerToken": "{{ .token | base64decode | toString }}",
"tlsClientConfig": {
"insecure": false,
"caData": "{{ .ca | toString }}"
}
}
name: cluster-test
server: '{{ .url | toString }}'
metadata:
labels:
argocd.argoproj.io/secret-type: cluster
status:
conditions:
- lastTransitionTime: '2021-11-16T21:59:34Z'
message: Secret was synced
reason: SecretSynced
status: 'True'
type: Ready
refreshTime: '2021-11-29T18:32:24Z'
syncedResourceVersion: 1-519a61da0dc68b2575b4f8efada70e42

View File

@@ -68,7 +68,7 @@ local proposedSha = obj.status.environments[1].proposed.dry.sha -- Don't panic,
for _, env in ipairs(obj.status.environments) do
if env.proposed.dry.sha ~= proposedSha then
hs.status = "Progressing"
- hs.status = "Not all environments have the same proposed commit SHA. This likely means the hydrator has not run for all environments yet."
+ hs.message = "Not all environments have the same proposed commit SHA. This likely means the hydrator has not run for all environments yet."
return hs
end
end

View File

@@ -39,3 +39,7 @@ tests:
status: Healthy
message: All environments are up-to-date on commit 'abc1234'.
inputPath: testdata/all-healthy.yaml
+ - healthStatus:
+ status: Progressing
+ message: Not all environments have the same proposed commit SHA. This likely means the hydrator has not run for all environments yet.
+ inputPath: testdata/different-proposed-commits.yaml

View File

@@ -0,0 +1,412 @@
apiVersion: promoter.argoproj.io/v1alpha1
kind: PromotionStrategy
metadata:
name: promotion-strategy
namespace: gitops-promoter
spec:
activeCommitStatuses:
- key: argocd-health
environments:
- autoMerge: false
branch: wave/0/west
- autoMerge: false
branch: wave/1/west
- autoMerge: false
branch: wave/2/west
- autoMerge: false
branch: wave/3/west
- autoMerge: false
branch: wave/4/west
- autoMerge: false
branch: wave/5/west
gitRepositoryRef:
name: cdp-deployments
status:
conditions:
- lastTransitionTime: '2025-08-02T06:43:44Z'
message: Reconciliation succeeded
observedGeneration: 6
reason: ReconciliationSuccess
status: 'True'
type: Ready
environments:
- active:
commitStatuses:
- key: argocd-health
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:14:21Z'
sha: 6b1095752ea7bef9c5c4251cb0722ce33dfd68d1
subject: >-
This is a no-op commit merging from wave/0/west-next into
wave/0/west
branch: wave/0/west
history:
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-23T17:33:12Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 28f351384c8b8e58624726c4c3713979d0143775
subject: 'chore: fake old commit message'
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-23T17:36:40Z'
sha: 3ebd45b702bc23619c8ae6724e81955afc644555
subject: Promote 28f35 to `wave/0/west` (#2929)
proposed:
hydrated: {}
pullRequest: {}
- active:
dry:
author: asingh51 <fake@example.com>
commitTime: '2025-09-22T21:04:40Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 9dc9529ddba077fcdf8cffef716b7d20e1e8b844
subject: >-
Onboarding new cluster to argocd-genai on wave 5 [ARGO-2499]
(#2921)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T21:56:33Z'
sha: 2fefa5165e46988666cbb1d2d4f39e21871fa671
subject: Promote 9dc95 to `wave/0/west` (#2925)
proposed:
hydrated: {}
pullRequest: {}
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-22T19:29:53Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 7f91282cc8146644d0721efde1b9069ccbd475a1
subject: >-
chore: temporarily disable server-side diff on argo-argo
(ARGO-2528) (#2917)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T19:32:09Z'
sha: 6c6493caa0d5b7211bb09c93a7047c90db393cd6
subject: Promote 7f912 to `wave/0/west` (#2922)
proposed:
hydrated: {}
pullRequest: {}
proposed:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:14:16Z'
sha: 9dfd6245b7b2e86660a13743ecc85d26bf3df04c
subject: Merge branch 'wave/0/west' into wave/0/west-next
- active:
commitStatuses:
- key: argocd-health
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:14:01Z'
sha: 580bbde2b775a8c6dc1484c4ba970426029e63b4
subject: >-
This is a no-op commit merging from wave/1/west-next into
wave/1/west
branch: wave/1/west
history:
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-23T17:33:12Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 28f351384c8b8e58624726c4c3713979d0143775
subject: 'chore: fake old commit message'
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-23T17:37:46Z'
sha: c2b69cd2186df736ee4e27a6de5590135a3c5823
subject: Promote 28f35 to `wave/1/west` (#2928)
proposed:
hydrated: {}
pullRequest: {}
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-22T19:29:53Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 7f91282cc8146644d0721efde1b9069ccbd475a1
subject: >-
chore: temporarily disable server-side diff on argo-argo
(ARGO-2528) (#2917)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T20:15:13Z'
sha: baf2f21f5085adaa7fe4720ca41d4f7ff918d4fd
subject: Promote 7f912 to `wave/1/west` (#2883)
proposed:
hydrated: {}
pullRequest: {}
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-16T16:32:32Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 789df516a84b1d3beb2f8ffe56e052b5b411aa74
subject: >-
chore: upgrade to 3.2.0-rc1, argo and team instances (ARGO-2412)
(#2875)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-16T16:45:40Z'
sha: 728e1a237732a20281f986916301a5dca254e3a4
subject: Promote 789df to `wave/1/west` (#2876)
proposed:
hydrated: {}
pullRequest: {}
proposed:
commitStatuses:
- key: promoter-previous-environment
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:13:56Z'
sha: 50b7c73718813b9b92a0e1e166798cbd55874bbd
subject: Merge branch 'wave/1/west' into wave/1/west-next
- active:
commitStatuses:
- key: argocd-health
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-23T17:33:12Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 28f351384c8b8e58624726c4c3713979d0143775
subject: 'chore: fake old commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-23T17:34:05Z'
sha: 169626417d1863434901c9225ded78963b7bd691
subject: >-
This is a no-op commit merging from wave/2/west-next into
wave/2/west
branch: wave/2/west
history:
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-22T19:29:53Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 7f91282cc8146644d0721efde1b9069ccbd475a1
subject: >-
chore: temporarily disable server-side diff on argo-argo
(ARGO-2528) (#2917)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T20:59:45Z'
sha: 1bfe993b41fe27b298c0aab9a5f185b29a478178
subject: Promote 7f912 to `wave/2/west` (#2884)
proposed:
hydrated: {}
pullRequest: {}
proposed:
commitStatuses:
- key: promoter-previous-environment
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-23T17:33:12Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 28f351384c8b8e58624726c4c3713979d0143775
subject: 'chore: fake old commit message'
hydrated:
author: Argo CD
body: "Fake commit message"
commitTime: '2025-09-23T17:33:50Z'
sha: 6e0e11f0102faa5779223159a2b16ccee63a8ac0
subject: '28f3513: chore: fake old commit message'
- active:
commitStatuses:
- key: argocd-health
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:14:08Z'
sha: 73cf41db84a6e97a74c967243e65ffce66dd4198
subject: >-
This is a no-op commit merging from wave/3/west-next into
wave/3/west
branch: wave/3/west
history:
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-22T19:29:53Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 7f91282cc8146644d0721efde1b9069ccbd475a1
subject: >-
chore: fake super old commit message
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T21:00:07Z'
sha: e7456f8ff4a5943ace3bb425e2d4e7dd43a53e46
subject: Promote 7f912 to `wave/3/west` (#2887)
proposed:
hydrated: {}
pullRequest: {}
proposed:
commitStatuses:
- key: promoter-previous-environment
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: Argo CD
body: "Fake commit message"
commitTime: '2025-09-24T14:13:49Z'
sha: daf0534e9db36334c82870cbb2e5d14cc687e0f5
subject: 'f94c9d4: chore: fake new commit message'
- active:
commitStatuses:
- key: argocd-health
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:14:30Z'
sha: 5abc8f1fd4056ad9f78723f3a83ac2291e87c821
subject: >-
This is a no-op commit merging from wave/4/west-next into
wave/4/west
branch: wave/4/west
history:
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-22T19:29:53Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 7f91282cc8146644d0721efde1b9069ccbd475a1
subject: >-
chore: temporarily disable server-side diff on argo-argo
(ARGO-2528) (#2917)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T21:00:29Z'
sha: 1d6e2b279108f6430825c889f6d2248fe4388aba
subject: Promote 7f912 to `wave/4/west` (#2885)
proposed:
hydrated: {}
pullRequest: {}
proposed:
commitStatuses:
- key: promoter-previous-environment
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: Argo CD
body: "Fake commit message"
commitTime: '2025-09-24T14:14:09Z'
sha: 91d942a1da3799d6027a7c407eee98b1f9cdcde0
subject: 'f94c9d4: chore: fake new commit message'
- active:
commitStatuses:
- key: argocd-health
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:14:40Z'
sha: 7bee0fcb985ac4ecccddf70a8b2d02cc0c41f231
subject: >-
This is a no-op commit merging from wave/5/west-next into
wave/5/west
branch: wave/5/west
history:
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-22T19:29:53Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 7f91282cc8146644d0721efde1b9069ccbd475a1
subject: >-
chore: temporarily disable server-side diff on argo-argo
(ARGO-2528) (#2917)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T21:00:59Z'
sha: a9d26ef79a1cc786a2a90446f7e8bbef2e5be653
subject: Promote 7f912 to `wave/5/west` (#2888)
proposed:
hydrated: {}
pullRequest: {}
proposed:
commitStatuses:
- key: promoter-previous-environment
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: Argo CD
body: "Fake commit message"
commitTime: '2025-09-24T14:14:24Z'
sha: 20c41602695addbf7760236eff562f5692e585aa
subject: 'f94c9d4: chore: fake new commit message'

View File

@@ -1257,7 +1257,7 @@ func (server *ArgoCDServer) newHTTPServer(ctx context.Context, port int, grpcWeb
// Webhook handler for git events (Note: cache timeouts are hardcoded because API server does not write to cache and not really using them)
argoDB := db.NewDB(server.Namespace, server.settingsMgr, server.KubeClientset)
- acdWebhookHandler := webhook.NewHandler(server.Namespace, server.ApplicationNamespaces, server.WebhookParallelism, server.AppClientset, server.settings, server.settingsMgr, server.RepoServerCache, server.Cache, argoDB, server.settingsMgr.GetMaxWebhookPayloadSize())
+ acdWebhookHandler := webhook.NewHandler(server.Namespace, server.ApplicationNamespaces, server.WebhookParallelism, server.AppClientset, server.appLister, server.settings, server.settingsMgr, server.RepoServerCache, server.Cache, argoDB, server.settingsMgr.GetMaxWebhookPayloadSize())
mux.HandleFunc("/api/webhook", acdWebhookHandler.Handler)

View File

@@ -58,16 +58,12 @@ const sectionHeader = (info: SectionInfo, onClick?: () => any) => {
);
};
- const hasRollingSyncEnabled = (application: models.Application): boolean => {
- return application.metadata.ownerReferences?.some(ref => ref.kind === 'ApplicationSet') || false;
+ const getApplicationSetOwnerRef = (application: models.Application) => {
+ return application.metadata.ownerReferences?.find(ref => ref.kind === 'ApplicationSet');
};
const ProgressiveSyncStatus = ({application}: {application: models.Application}) => {
- if (!hasRollingSyncEnabled(application)) {
- return null;
- }
- const appSetRef = application.metadata.ownerReferences.find(ref => ref.kind === 'ApplicationSet');
+ const appSetRef = getApplicationSetOwnerRef(application);
if (!appSetRef) {
return null;
}
@@ -75,12 +71,46 @@ const ProgressiveSyncStatus = ({application}: {application: models.Application})
return (
<DataLoader
input={application}
+ errorRenderer={() => {
+ // For any errors, show a minimal error state
+ return (
+ <div className='application-status-panel__item'>
+ {sectionHeader({
+ title: 'PROGRESSIVE SYNC',
+ helpContent: 'Shows the current status of progressive sync for applications managed by an ApplicationSet.'
+ })}
+ <div className='application-status-panel__item-value'>
+ <i className='fa fa-exclamation-triangle' style={{color: COLORS.sync.unknown}} /> Error
+ </div>
+ <div className='application-status-panel__item-name'>Unable to load Progressive Sync status</div>
+ </div>
+ );
+ }}
load={async () => {
- const appSet = await services.applications.getApplicationSet(appSetRef.name, application.metadata.namespace);
- return appSet?.spec?.strategy?.type === 'RollingSync' ? appSet : null;
- }}>
- {(appSet: models.ApplicationSet) => {
+ // Check if user has permission to read ApplicationSets
+ const canReadApplicationSets = await services.accounts.canI('applicationsets', 'get', application.spec.project + '/' + application.metadata.name);
+ // Find ApplicationSet by searching all namespaces dynamically
+ const appSetList = await services.applications.listApplicationSets();
+ const appSet = appSetList.items?.find(item => item.metadata.name === appSetRef.name);
+ if (!appSet) {
+ throw new Error(`ApplicationSet ${appSetRef.name} not found in any namespace`);
+ }
+ return {canReadApplicationSets, appSet};
+ }}>
+ {({canReadApplicationSets, appSet}: {canReadApplicationSets: boolean; appSet: models.ApplicationSet}) => {
+ // Hide panel if: Progressive Sync disabled, no permission, or not RollingSync strategy
+ if (!appSet.status?.applicationStatus || appSet?.spec?.strategy?.type !== 'RollingSync' || !canReadApplicationSets) {
+ return null;
+ }
+ // Get the current application's status from the ApplicationSet applicationStatus
+ const appResource = appSet.status?.applicationStatus?.find(status => status.application === application.metadata.name);
+ // If no application status is found, show a default status
if (!appResource) {
return (
<div className='application-status-panel__item'>
{sectionHeader({
@@ -88,14 +118,15 @@ const ProgressiveSyncStatus = ({application}: {application: models.Application})
helpContent: 'Shows the current status of progressive sync for applications managed by an ApplicationSet with RollingSync strategy.'
})}
<div className='application-status-panel__item-value'>
- <i className='fa fa-question-circle' style={{color: COLORS.sync.unknown}} /> Unknown
+ <i className='fa fa-clock' style={{color: COLORS.sync.out_of_sync}} /> Waiting
</div>
<div className='application-status-panel__item-name'>Application status not yet available from ApplicationSet</div>
</div>
);
}
- // Get the current application's status from the ApplicationSet resources
- const appResource = appSet.status?.applicationStatus?.find(status => status.application === application.metadata.name);
+ // Get last transition time from application status
+ const lastTransitionTime = appResource?.lastTransitionTime;
return (
<div className='application-status-panel__item'>
@@ -106,12 +137,14 @@ const ProgressiveSyncStatus = ({application}: {application: models.Application})
<div className='application-status-panel__item-value' style={{color: getProgressiveSyncStatusColor(appResource.status)}}>
{getProgressiveSyncStatusIcon({status: appResource.status})}&nbsp;{appResource.status}
</div>
- <div className='application-status-panel__item-value'>Wave: {appResource.step}</div>
- <div className='application-status-panel__item-name' style={{marginBottom: '0.5em'}}>
- Last Transition: <br />
- <Timestamp date={appResource.lastTransitionTime} />
- </div>
- {appResource.message && <div className='application-status-panel__item-name'>{appResource.message}</div>}
+ {appResource?.step && <div className='application-status-panel__item-value'>Wave: {appResource.step}</div>}
+ {lastTransitionTime && (
+ <div className='application-status-panel__item-name' style={{marginBottom: '0.5em'}}>
+ Last Transition: <br />
+ <Timestamp date={lastTransitionTime} />
+ </div>
+ )}
+ {appResource?.message && <div className='application-status-panel__item-name'>{appResource.message}</div>}
</div>
);
}}
@@ -123,7 +156,9 @@ export const ApplicationStatusPanel = ({application, showDiff, showOperation, sh
const [showProgressiveSync, setShowProgressiveSync] = React.useState(false);
React.useEffect(() => {
- setShowProgressiveSync(hasRollingSyncEnabled(application));
+ // Only show Progressive Sync if the application has an ApplicationSet parent
+ // The actual strategy validation will be done inside ProgressiveSyncStatus component
+ setShowProgressiveSync(!!getApplicationSetOwnerRef(application));
}, [application]);
const today = new Date();

View File

@@ -1716,6 +1716,10 @@ export const getProgressiveSyncStatusIcon = ({status, isButton}: {status: string
return {icon: 'fa-clock', color: COLORS.sync.out_of_sync};
case 'Error':
return {icon: 'fa-times-circle', color: COLORS.health.degraded};
+ case 'Synced':
+ return {icon: 'fa-check-circle', color: COLORS.sync.synced};
+ case 'OutOfSync':
+ return {icon: 'fa-exclamation-triangle', color: COLORS.sync.out_of_sync};
default:
return {icon: 'fa-question-circle', color: COLORS.sync.unknown};
}
@@ -1738,6 +1742,10 @@ export const getProgressiveSyncStatusColor = (status: string): string => {
return COLORS.health.healthy;
case 'Error':
return COLORS.health.degraded;
+ case 'Synced':
+ return COLORS.sync.synced;
+ case 'OutOfSync':
+ return COLORS.sync.out_of_sync;
default:
return COLORS.sync.unknown;
}

View File

@@ -1134,3 +1134,8 @@ export interface ApplicationSet {
resources?: ApplicationSetResource[];
};
}
+ export interface ApplicationSetList {
+ metadata: models.ListMeta;
+ items: ApplicationSet[];
+ }

View File

@@ -550,4 +550,8 @@ export class ApplicationsService {
.query({appsetNamespace: namespace})
.then(res => res.body as models.ApplicationSet);
}
+ public async listApplicationSets(): Promise<models.ApplicationSetList> {
+ return requests.get(`/applicationsets`).then(res => res.body as models.ApplicationSetList);
+ }
}

View File

@@ -34,9 +34,9 @@ func (s *secretsRepositoryBackend) CreateRepository(ctx context.Context, reposit
},
}
- s.repositoryToSecret(repository, repositorySecret)
+ updatedSecret := s.repositoryToSecret(repository, repositorySecret)
- _, err := s.db.createSecret(ctx, repositorySecret)
+ _, err := s.db.createSecret(ctx, updatedSecret)
if err != nil {
if apierrors.IsAlreadyExists(err) {
hasLabel, err := s.hasRepoTypeLabel(secName)
@@ -142,9 +142,9 @@ func (s *secretsRepositoryBackend) UpdateRepository(ctx context.Context, reposit
return nil, err
}
- s.repositoryToSecret(repository, repositorySecret)
+ updatedSecret := s.repositoryToSecret(repository, repositorySecret)
- _, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, repositorySecret, metav1.UpdateOptions{})
+ _, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, updatedSecret, metav1.UpdateOptions{})
if err != nil {
return nil, err
}
@@ -187,9 +187,9 @@ func (s *secretsRepositoryBackend) CreateRepoCreds(ctx context.Context, repoCred
},
}
- s.repoCredsToSecret(repoCreds, repoCredsSecret)
+ updatedSecret := s.repoCredsToSecret(repoCreds, repoCredsSecret)
- _, err := s.db.createSecret(ctx, repoCredsSecret)
+ _, err := s.db.createSecret(ctx, updatedSecret)
if err != nil {
if apierrors.IsAlreadyExists(err) {
return nil, status.Errorf(codes.AlreadyExists, "repository credentials %q already exists", repoCreds.URL)
@@ -237,9 +237,9 @@ func (s *secretsRepositoryBackend) UpdateRepoCreds(ctx context.Context, repoCred
return nil, err
}
- s.repoCredsToSecret(repoCreds, repoCredsSecret)
+ updatedSecret := s.repoCredsToSecret(repoCreds, repoCredsSecret)
- repoCredsSecret, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, repoCredsSecret, metav1.UpdateOptions{})
+ repoCredsSecret, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, updatedSecret, metav1.UpdateOptions{})
if err != nil {
return nil, err
}
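The change above makes each helper return an updated secret rather than mutating its argument in place, which (together with the DeepCopy below) prevents concurrent map read/write panics on shared secret objects. A sketch of that copy-before-modify pattern (the `Secret` type and `applyRepository` here are illustrative, not Argo CD's actual types):

```go
package main

import "fmt"

// Secret is a stand-in for corev1.Secret, reduced to its Data map.
type Secret struct {
	Data map[string][]byte
}

// DeepCopy duplicates the secret, including the Data map, so the copy can be
// mutated without touching the shared original.
func (s *Secret) DeepCopy() *Secret {
	out := &Secret{Data: make(map[string][]byte, len(s.Data))}
	for k, v := range s.Data {
		out.Data[k] = append([]byte(nil), v...)
	}
	return out
}

// applyRepository writes fields onto a deep copy and returns it, leaving the
// (possibly cache-shared, concurrently read) original untouched.
func applyRepository(name string, shared *Secret) *Secret {
	updated := shared.DeepCopy()
	updated.Data["name"] = []byte(name)
	return updated
}

func main() {
	shared := &Secret{Data: map[string][]byte{}}
	updated := applyRepository("my-repo", shared)
	// The copy carries the new field; the shared map is unchanged.
	fmt.Println(string(updated.Data["name"]), len(shared.Data))
}
```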
@@ -323,73 +323,75 @@ func (s *secretsRepositoryBackend) GetAllOCIRepoCreds(_ context.Context) ([]*app
}
func secretToRepository(secret *corev1.Secret) (*appsv1.Repository, error) {
+ secretCopy := secret.DeepCopy()
repository := &appsv1.Repository{
- Name: string(secret.Data["name"]),
- Repo: string(secret.Data["url"]),
- Username: string(secret.Data["username"]),
- Password: string(secret.Data["password"]),
- BearerToken: string(secret.Data["bearerToken"]),
- SSHPrivateKey: string(secret.Data["sshPrivateKey"]),
- TLSClientCertData: string(secret.Data["tlsClientCertData"]),
- TLSClientCertKey: string(secret.Data["tlsClientCertKey"]),
- Type: string(secret.Data["type"]),
- GithubAppPrivateKey: string(secret.Data["githubAppPrivateKey"]),
- GitHubAppEnterpriseBaseURL: string(secret.Data["githubAppEnterpriseBaseUrl"]),
- Proxy: string(secret.Data["proxy"]),
- NoProxy: string(secret.Data["noProxy"]),
- Project: string(secret.Data["project"]),
- GCPServiceAccountKey: string(secret.Data["gcpServiceAccountKey"]),
+ Name: string(secretCopy.Data["name"]),
+ Repo: string(secretCopy.Data["url"]),
+ Username: string(secretCopy.Data["username"]),
+ Password: string(secretCopy.Data["password"]),
+ BearerToken: string(secretCopy.Data["bearerToken"]),
+ SSHPrivateKey: string(secretCopy.Data["sshPrivateKey"]),
+ TLSClientCertData: string(secretCopy.Data["tlsClientCertData"]),
+ TLSClientCertKey: string(secretCopy.Data["tlsClientCertKey"]),
+ Type: string(secretCopy.Data["type"]),
+ GithubAppPrivateKey: string(secretCopy.Data["githubAppPrivateKey"]),
+ GitHubAppEnterpriseBaseURL: string(secretCopy.Data["githubAppEnterpriseBaseUrl"]),
+ Proxy: string(secretCopy.Data["proxy"]),
+ NoProxy: string(secretCopy.Data["noProxy"]),
+ Project: string(secretCopy.Data["project"]),
+ GCPServiceAccountKey: string(secretCopy.Data["gcpServiceAccountKey"]),
}
insecureIgnoreHostKey, err := boolOrFalse(secret, "insecureIgnoreHostKey")
insecureIgnoreHostKey, err := boolOrFalse(secretCopy, "insecureIgnoreHostKey")
if err != nil {
return repository, err
}
repository.InsecureIgnoreHostKey = insecureIgnoreHostKey
insecure, err := boolOrFalse(secret, "insecure")
insecure, err := boolOrFalse(secretCopy, "insecure")
if err != nil {
return repository, err
}
repository.Insecure = insecure
enableLfs, err := boolOrFalse(secret, "enableLfs")
enableLfs, err := boolOrFalse(secretCopy, "enableLfs")
if err != nil {
return repository, err
}
repository.EnableLFS = enableLfs
enableOCI, err := boolOrFalse(secret, "enableOCI")
enableOCI, err := boolOrFalse(secretCopy, "enableOCI")
if err != nil {
return repository, err
}
repository.EnableOCI = enableOCI
insecureOCIForceHTTP, err := boolOrFalse(secret, "insecureOCIForceHttp")
insecureOCIForceHTTP, err := boolOrFalse(secretCopy, "insecureOCIForceHttp")
if err != nil {
return repository, err
}
repository.InsecureOCIForceHttp = insecureOCIForceHTTP
githubAppID, err := intOrZero(secret, "githubAppID")
githubAppID, err := intOrZero(secretCopy, "githubAppID")
if err != nil {
return repository, err
}
repository.GithubAppId = githubAppID
githubAppInstallationID, err := intOrZero(secret, "githubAppInstallationID")
githubAppInstallationID, err := intOrZero(secretCopy, "githubAppInstallationID")
if err != nil {
return repository, err
}
repository.GithubAppInstallationId = githubAppInstallationID
forceBasicAuth, err := boolOrFalse(secret, "forceHttpBasicAuth")
forceBasicAuth, err := boolOrFalse(secretCopy, "forceHttpBasicAuth")
if err != nil {
return repository, err
}
repository.ForceHttpBasicAuth = forceBasicAuth
useAzureWorkloadIdentity, err := boolOrFalse(secret, "useAzureWorkloadIdentity")
useAzureWorkloadIdentity, err := boolOrFalse(secretCopy, "useAzureWorkloadIdentity")
if err != nil {
return repository, err
}
@@ -398,86 +400,92 @@ func secretToRepository(secret *corev1.Secret) (*appsv1.Repository, error) {
return repository, nil
}
func (s *secretsRepositoryBackend) repositoryToSecret(repository *appsv1.Repository, secret *corev1.Secret) {
if secret.Data == nil {
secret.Data = make(map[string][]byte)
func (s *secretsRepositoryBackend) repositoryToSecret(repository *appsv1.Repository, secret *corev1.Secret) *corev1.Secret {
secretCopy := secret.DeepCopy()
if secretCopy.Data == nil {
secretCopy.Data = make(map[string][]byte)
}
updateSecretString(secret, "name", repository.Name)
updateSecretString(secret, "project", repository.Project)
updateSecretString(secret, "url", repository.Repo)
updateSecretString(secret, "username", repository.Username)
updateSecretString(secret, "password", repository.Password)
updateSecretString(secret, "bearerToken", repository.BearerToken)
updateSecretString(secret, "sshPrivateKey", repository.SSHPrivateKey)
updateSecretBool(secret, "enableOCI", repository.EnableOCI)
updateSecretBool(secret, "insecureOCIForceHttp", repository.InsecureOCIForceHttp)
updateSecretString(secret, "tlsClientCertData", repository.TLSClientCertData)
updateSecretString(secret, "tlsClientCertKey", repository.TLSClientCertKey)
updateSecretString(secret, "type", repository.Type)
updateSecretString(secret, "githubAppPrivateKey", repository.GithubAppPrivateKey)
updateSecretInt(secret, "githubAppID", repository.GithubAppId)
updateSecretInt(secret, "githubAppInstallationID", repository.GithubAppInstallationId)
updateSecretString(secret, "githubAppEnterpriseBaseUrl", repository.GitHubAppEnterpriseBaseURL)
updateSecretBool(secret, "insecureIgnoreHostKey", repository.InsecureIgnoreHostKey)
updateSecretBool(secret, "insecure", repository.Insecure)
updateSecretBool(secret, "enableLfs", repository.EnableLFS)
updateSecretString(secret, "proxy", repository.Proxy)
updateSecretString(secret, "noProxy", repository.NoProxy)
updateSecretString(secret, "gcpServiceAccountKey", repository.GCPServiceAccountKey)
updateSecretBool(secret, "forceHttpBasicAuth", repository.ForceHttpBasicAuth)
updateSecretBool(secret, "useAzureWorkloadIdentity", repository.UseAzureWorkloadIdentity)
addSecretMetadata(secret, s.getSecretType())
updateSecretString(secretCopy, "name", repository.Name)
updateSecretString(secretCopy, "project", repository.Project)
updateSecretString(secretCopy, "url", repository.Repo)
updateSecretString(secretCopy, "username", repository.Username)
updateSecretString(secretCopy, "password", repository.Password)
updateSecretString(secretCopy, "bearerToken", repository.BearerToken)
updateSecretString(secretCopy, "sshPrivateKey", repository.SSHPrivateKey)
updateSecretBool(secretCopy, "enableOCI", repository.EnableOCI)
updateSecretBool(secretCopy, "insecureOCIForceHttp", repository.InsecureOCIForceHttp)
updateSecretString(secretCopy, "tlsClientCertData", repository.TLSClientCertData)
updateSecretString(secretCopy, "tlsClientCertKey", repository.TLSClientCertKey)
updateSecretString(secretCopy, "type", repository.Type)
updateSecretString(secretCopy, "githubAppPrivateKey", repository.GithubAppPrivateKey)
updateSecretInt(secretCopy, "githubAppID", repository.GithubAppId)
updateSecretInt(secretCopy, "githubAppInstallationID", repository.GithubAppInstallationId)
updateSecretString(secretCopy, "githubAppEnterpriseBaseUrl", repository.GitHubAppEnterpriseBaseURL)
updateSecretBool(secretCopy, "insecureIgnoreHostKey", repository.InsecureIgnoreHostKey)
updateSecretBool(secretCopy, "insecure", repository.Insecure)
updateSecretBool(secretCopy, "enableLfs", repository.EnableLFS)
updateSecretString(secretCopy, "proxy", repository.Proxy)
updateSecretString(secretCopy, "noProxy", repository.NoProxy)
updateSecretString(secretCopy, "gcpServiceAccountKey", repository.GCPServiceAccountKey)
updateSecretBool(secretCopy, "forceHttpBasicAuth", repository.ForceHttpBasicAuth)
updateSecretBool(secretCopy, "useAzureWorkloadIdentity", repository.UseAzureWorkloadIdentity)
addSecretMetadata(secretCopy, s.getSecretType())
return secretCopy
}
func (s *secretsRepositoryBackend) secretToRepoCred(secret *corev1.Secret) (*appsv1.RepoCreds, error) {
secretCopy := secret.DeepCopy()
repository := &appsv1.RepoCreds{
URL: string(secret.Data["url"]),
Username: string(secret.Data["username"]),
Password: string(secret.Data["password"]),
BearerToken: string(secret.Data["bearerToken"]),
SSHPrivateKey: string(secret.Data["sshPrivateKey"]),
TLSClientCertData: string(secret.Data["tlsClientCertData"]),
TLSClientCertKey: string(secret.Data["tlsClientCertKey"]),
Type: string(secret.Data["type"]),
GithubAppPrivateKey: string(secret.Data["githubAppPrivateKey"]),
GitHubAppEnterpriseBaseURL: string(secret.Data["githubAppEnterpriseBaseUrl"]),
GCPServiceAccountKey: string(secret.Data["gcpServiceAccountKey"]),
Proxy: string(secret.Data["proxy"]),
NoProxy: string(secret.Data["noProxy"]),
URL: string(secretCopy.Data["url"]),
Username: string(secretCopy.Data["username"]),
Password: string(secretCopy.Data["password"]),
BearerToken: string(secretCopy.Data["bearerToken"]),
SSHPrivateKey: string(secretCopy.Data["sshPrivateKey"]),
TLSClientCertData: string(secretCopy.Data["tlsClientCertData"]),
TLSClientCertKey: string(secretCopy.Data["tlsClientCertKey"]),
Type: string(secretCopy.Data["type"]),
GithubAppPrivateKey: string(secretCopy.Data["githubAppPrivateKey"]),
GitHubAppEnterpriseBaseURL: string(secretCopy.Data["githubAppEnterpriseBaseUrl"]),
GCPServiceAccountKey: string(secretCopy.Data["gcpServiceAccountKey"]),
Proxy: string(secretCopy.Data["proxy"]),
NoProxy: string(secretCopy.Data["noProxy"]),
}
enableOCI, err := boolOrFalse(secret, "enableOCI")
enableOCI, err := boolOrFalse(secretCopy, "enableOCI")
if err != nil {
return repository, err
}
repository.EnableOCI = enableOCI
insecureOCIForceHTTP, err := boolOrFalse(secret, "insecureOCIForceHttp")
insecureOCIForceHTTP, err := boolOrFalse(secretCopy, "insecureOCIForceHttp")
if err != nil {
return repository, err
}
repository.InsecureOCIForceHttp = insecureOCIForceHTTP
githubAppID, err := intOrZero(secret, "githubAppID")
githubAppID, err := intOrZero(secretCopy, "githubAppID")
if err != nil {
return repository, err
}
repository.GithubAppId = githubAppID
githubAppInstallationID, err := intOrZero(secret, "githubAppInstallationID")
githubAppInstallationID, err := intOrZero(secretCopy, "githubAppInstallationID")
if err != nil {
return repository, err
}
repository.GithubAppInstallationId = githubAppInstallationID
forceBasicAuth, err := boolOrFalse(secret, "forceHttpBasicAuth")
forceBasicAuth, err := boolOrFalse(secretCopy, "forceHttpBasicAuth")
if err != nil {
return repository, err
}
repository.ForceHttpBasicAuth = forceBasicAuth
useAzureWorkloadIdentity, err := boolOrFalse(secret, "useAzureWorkloadIdentity")
useAzureWorkloadIdentity, err := boolOrFalse(secretCopy, "useAzureWorkloadIdentity")
if err != nil {
return repository, err
}
@@ -486,31 +494,35 @@ func (s *secretsRepositoryBackend) secretToRepoCred(secret *corev1.Secret) (*app
return repository, nil
}
func (s *secretsRepositoryBackend) repoCredsToSecret(repoCreds *appsv1.RepoCreds, secret *corev1.Secret) {
if secret.Data == nil {
secret.Data = make(map[string][]byte)
func (s *secretsRepositoryBackend) repoCredsToSecret(repoCreds *appsv1.RepoCreds, secret *corev1.Secret) *corev1.Secret {
secretCopy := secret.DeepCopy()
if secretCopy.Data == nil {
secretCopy.Data = make(map[string][]byte)
}
updateSecretString(secret, "url", repoCreds.URL)
updateSecretString(secret, "username", repoCreds.Username)
updateSecretString(secret, "password", repoCreds.Password)
updateSecretString(secret, "bearerToken", repoCreds.BearerToken)
updateSecretString(secret, "sshPrivateKey", repoCreds.SSHPrivateKey)
updateSecretBool(secret, "enableOCI", repoCreds.EnableOCI)
updateSecretBool(secret, "insecureOCIForceHttp", repoCreds.InsecureOCIForceHttp)
updateSecretString(secret, "tlsClientCertData", repoCreds.TLSClientCertData)
updateSecretString(secret, "tlsClientCertKey", repoCreds.TLSClientCertKey)
updateSecretString(secret, "type", repoCreds.Type)
updateSecretString(secret, "githubAppPrivateKey", repoCreds.GithubAppPrivateKey)
updateSecretInt(secret, "githubAppID", repoCreds.GithubAppId)
updateSecretInt(secret, "githubAppInstallationID", repoCreds.GithubAppInstallationId)
updateSecretString(secret, "githubAppEnterpriseBaseUrl", repoCreds.GitHubAppEnterpriseBaseURL)
updateSecretString(secret, "gcpServiceAccountKey", repoCreds.GCPServiceAccountKey)
updateSecretString(secret, "proxy", repoCreds.Proxy)
updateSecretString(secret, "noProxy", repoCreds.NoProxy)
updateSecretBool(secret, "forceHttpBasicAuth", repoCreds.ForceHttpBasicAuth)
updateSecretBool(secret, "useAzureWorkloadIdentity", repoCreds.UseAzureWorkloadIdentity)
addSecretMetadata(secret, s.getRepoCredSecretType())
updateSecretString(secretCopy, "url", repoCreds.URL)
updateSecretString(secretCopy, "username", repoCreds.Username)
updateSecretString(secretCopy, "password", repoCreds.Password)
updateSecretString(secretCopy, "bearerToken", repoCreds.BearerToken)
updateSecretString(secretCopy, "sshPrivateKey", repoCreds.SSHPrivateKey)
updateSecretBool(secretCopy, "enableOCI", repoCreds.EnableOCI)
updateSecretBool(secretCopy, "insecureOCIForceHttp", repoCreds.InsecureOCIForceHttp)
updateSecretString(secretCopy, "tlsClientCertData", repoCreds.TLSClientCertData)
updateSecretString(secretCopy, "tlsClientCertKey", repoCreds.TLSClientCertKey)
updateSecretString(secretCopy, "type", repoCreds.Type)
updateSecretString(secretCopy, "githubAppPrivateKey", repoCreds.GithubAppPrivateKey)
updateSecretInt(secretCopy, "githubAppID", repoCreds.GithubAppId)
updateSecretInt(secretCopy, "githubAppInstallationID", repoCreds.GithubAppInstallationId)
updateSecretString(secretCopy, "githubAppEnterpriseBaseUrl", repoCreds.GitHubAppEnterpriseBaseURL)
updateSecretString(secretCopy, "gcpServiceAccountKey", repoCreds.GCPServiceAccountKey)
updateSecretString(secretCopy, "proxy", repoCreds.Proxy)
updateSecretString(secretCopy, "noProxy", repoCreds.NoProxy)
updateSecretBool(secretCopy, "forceHttpBasicAuth", repoCreds.ForceHttpBasicAuth)
updateSecretBool(secretCopy, "useAzureWorkloadIdentity", repoCreds.UseAzureWorkloadIdentity)
addSecretMetadata(secretCopy, s.getRepoCredSecretType())
return secretCopy
}
func (s *secretsRepositoryBackend) getRepositorySecret(repoURL, project string, allowFallback bool) (*corev1.Secret, error) {


@@ -1,7 +1,9 @@
package db
import (
"fmt"
"strconv"
"sync"
"testing"
"github.com/stretchr/testify/assert"
@@ -85,9 +87,9 @@ func TestSecretsRepositoryBackend_CreateRepository(t *testing.T) {
t.Parallel()
secret := &corev1.Secret{}
s := secretsRepositoryBackend{}
s.repositoryToSecret(repo, secret)
delete(secret.Labels, common.LabelKeySecretType)
f := setupWithK8sObjects(secret)
updatedSecret := s.repositoryToSecret(repo, secret)
delete(updatedSecret.Labels, common.LabelKeySecretType)
f := setupWithK8sObjects(updatedSecret)
f.clientSet.ReactionChain = nil
f.clientSet.AddReactor("create", "secrets", func(_ k8stesting.Action) (handled bool, ret runtime.Object, err error) {
gr := schema.GroupResource{
@@ -122,8 +124,8 @@ func TestSecretsRepositoryBackend_CreateRepository(t *testing.T) {
},
}
s := secretsRepositoryBackend{}
s.repositoryToSecret(repo, secret)
f := setupWithK8sObjects(secret)
updatedSecret := s.repositoryToSecret(repo, secret)
f := setupWithK8sObjects(updatedSecret)
f.clientSet.ReactionChain = nil
f.clientSet.WatchReactionChain = nil
f.clientSet.AddReactor("create", "secrets", func(_ k8stesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -134,7 +136,7 @@ func TestSecretsRepositoryBackend_CreateRepository(t *testing.T) {
return true, nil, apierrors.NewAlreadyExists(gr, "already exists")
})
watcher := watch.NewFakeWithChanSize(1, true)
watcher.Add(secret)
watcher.Add(updatedSecret)
f.clientSet.AddWatchReactor("secrets", func(_ k8stesting.Action) (handled bool, ret watch.Interface, err error) {
return true, watcher, nil
})
@@ -952,7 +954,9 @@ func TestRepoCredsToSecret(t *testing.T) {
GithubAppInstallationId: 456,
GitHubAppEnterpriseBaseURL: "GitHubAppEnterpriseBaseURL",
}
testee.repoCredsToSecret(creds, s)
s = testee.repoCredsToSecret(creds, s)
assert.Equal(t, []byte(creds.URL), s.Data["url"])
assert.Equal(t, []byte(creds.Username), s.Data["username"])
assert.Equal(t, []byte(creds.Password), s.Data["password"])
@@ -994,7 +998,7 @@ func TestRepoWriteCredsToSecret(t *testing.T) {
GithubAppInstallationId: 456,
GitHubAppEnterpriseBaseURL: "GitHubAppEnterpriseBaseURL",
}
testee.repoCredsToSecret(creds, s)
s = testee.repoCredsToSecret(creds, s)
assert.Equal(t, []byte(creds.URL), s.Data["url"])
assert.Equal(t, []byte(creds.Username), s.Data["username"])
assert.Equal(t, []byte(creds.Password), s.Data["password"])
@@ -1010,3 +1014,169 @@ func TestRepoWriteCredsToSecret(t *testing.T) {
assert.Equal(t, map[string]string{common.AnnotationKeyManagedBy: common.AnnotationValueManagedByArgoCD}, s.Annotations)
assert.Equal(t, map[string]string{common.LabelKeySecretType: common.LabelValueSecretTypeRepoCredsWrite}, s.Labels)
}
func TestRaceConditionInRepoCredsOperations(t *testing.T) {
// Create a single shared secret that will be accessed concurrently
sharedSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: RepoURLToSecretName(repoSecretPrefix, "git@github.com:argoproj/argo-cd.git", ""),
Namespace: testNamespace,
Labels: map[string]string{
common.LabelKeySecretType: common.LabelValueSecretTypeRepoCreds,
},
},
Data: map[string][]byte{
"url": []byte("git@github.com:argoproj/argo-cd.git"),
"username": []byte("test-user"),
"password": []byte("test-pass"),
},
}
// Create test credentials that we'll use for conversion
repoCreds := &appsv1.RepoCreds{
URL: "git@github.com:argoproj/argo-cd.git",
Username: "test-user",
Password: "test-pass",
}
backend := &secretsRepositoryBackend{}
var wg sync.WaitGroup
concurrentOps := 50
errChan := make(chan error, concurrentOps*2) // Channel to collect errors
// Launch goroutines that perform concurrent operations
for i := 0; i < concurrentOps; i++ {
wg.Add(2)
// One goroutine converts from RepoCreds to Secret
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in repoCredsToSecret: %v", r)
}
}()
_ = backend.repoCredsToSecret(repoCreds, sharedSecret)
}()
// Another goroutine converts from Secret to RepoCreds
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in secretToRepoCred: %v", r)
}
}()
creds, err := backend.secretToRepoCred(sharedSecret)
if err != nil {
errChan <- fmt.Errorf("error in secretToRepoCred: %w", err)
return
}
// Verify data integrity
if creds.URL != repoCreds.URL || creds.Username != repoCreds.Username || creds.Password != repoCreds.Password {
errChan <- fmt.Errorf("data mismatch in conversion: expected %v, got %v", repoCreds, creds)
}
}()
}
wg.Wait()
close(errChan)
// Check for any errors that occurred during concurrent operations
for err := range errChan {
t.Errorf("concurrent operation error: %v", err)
}
// Verify final state
finalCreds, err := backend.secretToRepoCred(sharedSecret)
require.NoError(t, err)
assert.Equal(t, repoCreds.URL, finalCreds.URL)
assert.Equal(t, repoCreds.Username, finalCreds.Username)
assert.Equal(t, repoCreds.Password, finalCreds.Password)
}
func TestRaceConditionInRepositoryOperations(t *testing.T) {
// Create a single shared secret that will be accessed concurrently
sharedSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: RepoURLToSecretName(repoSecretPrefix, "git@github.com:argoproj/argo-cd.git", ""),
Namespace: testNamespace,
Labels: map[string]string{
common.LabelKeySecretType: common.LabelValueSecretTypeRepository,
},
},
Data: map[string][]byte{
"url": []byte("git@github.com:argoproj/argo-cd.git"),
"name": []byte("test-repo"),
"username": []byte("test-user"),
"password": []byte("test-pass"),
},
}
// Create test repository that we'll use for conversion
repo := &appsv1.Repository{
Name: "test-repo",
Repo: "git@github.com:argoproj/argo-cd.git",
Username: "test-user",
Password: "test-pass",
}
backend := &secretsRepositoryBackend{}
var wg sync.WaitGroup
concurrentOps := 50
errChan := make(chan error, concurrentOps*2) // Channel to collect errors
// Launch goroutines that perform concurrent operations
for i := 0; i < concurrentOps; i++ {
wg.Add(2)
// One goroutine converts from Repository to Secret
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in repositoryToSecret: %v", r)
}
}()
_ = backend.repositoryToSecret(repo, sharedSecret)
}()
// Another goroutine converts from Secret to Repository
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in secretToRepository: %v", r)
}
}()
repository, err := secretToRepository(sharedSecret)
if err != nil {
errChan <- fmt.Errorf("error in secretToRepository: %w", err)
return
}
// Verify data integrity
if repository.Name != repo.Name || repository.Repo != repo.Repo ||
repository.Username != repo.Username || repository.Password != repo.Password {
errChan <- fmt.Errorf("data mismatch in conversion: expected %v, got %v", repo, repository)
}
}()
}
wg.Wait()
close(errChan)
// Check for any errors that occurred during concurrent operations
for err := range errChan {
t.Errorf("concurrent operation error: %v", err)
}
// Verify final state
finalRepo, err := secretToRepository(sharedSecret)
require.NoError(t, err)
assert.Equal(t, repo.Name, finalRepo.Name)
assert.Equal(t, repo.Repo, finalRepo.Repo)
assert.Equal(t, repo.Username, finalRepo.Username)
assert.Equal(t, repo.Password, finalRepo.Password)
}


@@ -28,11 +28,12 @@ func GetFactorySettings(argocdService service.Service, secretName, configMapName
}
// GetFactorySettingsForCLI allows the initialization of argocdService to be deferred until it is used, when InitGetVars is called.
func GetFactorySettingsForCLI(argocdService service.Service, secretName, configMapName string, selfServiceNotificationEnabled bool) api.Settings {
func GetFactorySettingsForCLI(serviceGetter func() service.Service, secretName, configMapName string, selfServiceNotificationEnabled bool) api.Settings {
return api.Settings{
SecretName: secretName,
ConfigMapName: configMapName,
InitGetVars: func(cfg *api.Config, configMap *corev1.ConfigMap, secret *corev1.Secret) (api.GetVars, error) {
argocdService := serviceGetter()
if argocdService == nil {
return nil, errors.New("argocdService is not initialized")
}


@@ -232,12 +232,28 @@ func (c *nativeOCIClient) Extract(ctx context.Context, digest string) (string, u
return "", nil, err
}
if len(ociManifest.Layers) != 1 {
return "", nil, fmt.Errorf("expected only a single oci layer, got %d", len(ociManifest.Layers))
// Add a guard to defend against a ridiculous number of layers. There is no obviously right limit, but normally we
// shouldn't expect more than 2-3 in most real-world use cases.
if len(ociManifest.Layers) > 10 {
return "", nil, fmt.Errorf("expected no more than 10 oci layers, got %d", len(ociManifest.Layers))
}
if !slices.Contains(c.allowedMediaTypes, ociManifest.Layers[0].MediaType) {
return "", nil, fmt.Errorf("oci layer media type %s is not in the list of allowed media types", ociManifest.Layers[0].MediaType)
contentLayers := 0
// Strictly speaking we only allow for a single content layer. There are images which contain extra layers, such
// as provenance/attestation layers. Pending a better story to do this natively, we will skip such layers for now.
for _, layer := range ociManifest.Layers {
if isContentLayer(layer.MediaType) {
if !slices.Contains(c.allowedMediaTypes, layer.MediaType) {
return "", nil, fmt.Errorf("oci layer media type %s is not in the list of allowed media types", layer.MediaType)
}
contentLayers++
}
}
if contentLayers != 1 {
return "", nil, fmt.Errorf("expected only a single oci content layer, got %d", contentLayers)
}
err = saveCompressedImageToPath(ctx, digest, c.repo, cachedPath)
@@ -405,7 +421,15 @@ func fileExists(filePath string) (bool, error) {
return true, nil
}
// TODO: A content layer could in theory be something that is not a compressed file, e.g. a single yaml file or the like.
// While IMO the utility in the context of Argo CD is limited, I'd at least like to make it known here and add an extensibility
// point for it in case we decide to loosen the current requirements.
func isContentLayer(mediaType string) bool {
return isCompressedLayer(mediaType)
}
func isCompressedLayer(mediaType string) bool {
// TODO: Is zstd something which is used in the wild? For now let's stick to these suffixes
return strings.HasSuffix(mediaType, "tar+gzip") || strings.HasSuffix(mediaType, "tar")
}
@@ -500,7 +524,7 @@ func isHelmOCI(mediaType string) bool {
// Push looks in all the layers of an OCI image. Once it finds a layer that is compressed, it extracts the layer to a tempDir
// and then renames the temp dir to the directory where the repo-server expects to find k8s manifests.
func (s *compressedLayerExtracterStore) Push(ctx context.Context, desc imagev1.Descriptor, content io.Reader) error {
if isCompressedLayer(desc.MediaType) {
if isContentLayer(desc.MediaType) {
srcDir, err := files.CreateTempDir(os.TempDir())
if err != nil {
return err


@@ -122,7 +122,7 @@ func Test_nativeOCIClient_Extract(t *testing.T) {
expectedError: errors.New("cannot extract contents of oci image with revision sha256:1b6dfd71e2b35c2f35dffc39007c2276f3c0e235cbae4c39cba74bd406174e22: failed to perform \"Push\" on destination: could not decompress layer: error while iterating on tar reader: unexpected EOF"),
},
{
name: "extraction fails due to multiple layers",
name: "extraction fails due to multiple content layers",
fields: fields{
allowedMediaTypes: []string{imagev1.MediaTypeImageLayerGzip},
},
@@ -135,7 +135,21 @@ func Test_nativeOCIClient_Extract(t *testing.T) {
manifestMaxExtractedSize: 1000,
disableManifestMaxExtractedSize: false,
},
expectedError: errors.New("expected only a single oci layer, got 2"),
expectedError: errors.New("expected only a single oci content layer, got 2"),
},
{
name: "extraction with multiple layers, but just a single content layer",
fields: fields{
allowedMediaTypes: []string{imagev1.MediaTypeImageLayerGzip},
},
args: args{
digestFunc: func(store *memory.Store) string {
layerBlob := createGzippedTarWithContent(t, "some-path", "some content")
return generateManifest(t, store, layerConf{content.NewDescriptorFromBytes(imagev1.MediaTypeImageLayerGzip, layerBlob), layerBlob}, layerConf{content.NewDescriptorFromBytes("application/vnd.cncf.helm.chart.provenance.v1.prov", []byte{}), []byte{}})
},
manifestMaxExtractedSize: 1000,
disableManifestMaxExtractedSize: false,
},
},
{
name: "extraction fails due to invalid media type",


@@ -13,6 +13,9 @@ import (
"time"
bb "github.com/ktrysmt/go-bitbucket"
"k8s.io/apimachinery/pkg/labels"
alpha1 "github.com/argoproj/argo-cd/v3/pkg/client/listers/application/v1alpha1"
"github.com/Masterminds/semver/v3"
"github.com/go-playground/webhooks/v6/azuredevops"
@@ -23,7 +26,6 @@ import (
"github.com/go-playground/webhooks/v6/gogs"
gogsclient "github.com/gogits/go-gogs-client"
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
@@ -60,6 +62,7 @@ type ArgoCDWebhookHandler struct {
ns string
appNs []string
appClientset appclientset.Interface
appsLister alpha1.ApplicationLister
github *github.Webhook
gitlab *gitlab.Webhook
bitbucket *bitbucket.Webhook
@@ -72,7 +75,7 @@ type ArgoCDWebhookHandler struct {
maxWebhookPayloadSizeB int64
}
func NewHandler(namespace string, applicationNamespaces []string, webhookParallelism int, appClientset appclientset.Interface, set *settings.ArgoCDSettings, settingsSrc settingsSource, repoCache *cache.Cache, serverCache *servercache.Cache, argoDB db.ArgoDB, maxWebhookPayloadSizeB int64) *ArgoCDWebhookHandler {
func NewHandler(namespace string, applicationNamespaces []string, webhookParallelism int, appClientset appclientset.Interface, appsLister alpha1.ApplicationLister, set *settings.ArgoCDSettings, settingsSrc settingsSource, repoCache *cache.Cache, serverCache *servercache.Cache, argoDB db.ArgoDB, maxWebhookPayloadSizeB int64) *ArgoCDWebhookHandler {
githubWebhook, err := github.New(github.Options.Secret(set.GetWebhookGitHubSecret()))
if err != nil {
log.Warnf("Unable to init the GitHub webhook")
@@ -115,6 +118,7 @@ func NewHandler(namespace string, applicationNamespaces []string, webhookParalle
db: argoDB,
queue: make(chan any, payloadQueueSize),
maxWebhookPayloadSizeB: maxWebhookPayloadSizeB,
appsLister: appsLister,
}
acdWebhook.startWorkerPool(webhookParallelism)
@@ -150,10 +154,12 @@ func (a *ArgoCDWebhookHandler) affectedRevisionInfo(payloadIf any) (webURLs []st
case azuredevops.GitPushEvent:
// See: https://learn.microsoft.com/en-us/azure/devops/service-hooks/events?view=azure-devops#git.push
webURLs = append(webURLs, payload.Resource.Repository.RemoteURL)
revision = ParseRevision(payload.Resource.RefUpdates[0].Name)
change.shaAfter = ParseRevision(payload.Resource.RefUpdates[0].NewObjectID)
change.shaBefore = ParseRevision(payload.Resource.RefUpdates[0].OldObjectID)
touchedHead = payload.Resource.RefUpdates[0].Name == payload.Resource.Repository.DefaultBranch
if len(payload.Resource.RefUpdates) > 0 {
revision = ParseRevision(payload.Resource.RefUpdates[0].Name)
change.shaAfter = ParseRevision(payload.Resource.RefUpdates[0].NewObjectID)
change.shaBefore = ParseRevision(payload.Resource.RefUpdates[0].OldObjectID)
touchedHead = payload.Resource.RefUpdates[0].Name == payload.Resource.Repository.DefaultBranch
}
// unfortunately, Azure DevOps doesn't provide a list of changed files
case github.PushPayload:
// See: https://developer.github.com/v3/activity/events/types/#pushevent
@@ -251,13 +257,15 @@ func (a *ArgoCDWebhookHandler) affectedRevisionInfo(payloadIf any) (webURLs []st
// Webhook module does not parse the inner links
if payload.Repository.Links != nil {
for _, l := range payload.Repository.Links["clone"].([]any) {
link := l.(map[string]any)
if link["name"] == "http" {
webURLs = append(webURLs, link["href"].(string))
}
if link["name"] == "ssh" {
webURLs = append(webURLs, link["href"].(string))
clone, ok := payload.Repository.Links["clone"].([]any)
if ok {
for _, l := range clone {
link := l.(map[string]any)
if link["name"] == "http" || link["name"] == "ssh" {
if href, ok := link["href"].(string); ok {
webURLs = append(webURLs, href)
}
}
}
}
}
@@ -276,11 +284,13 @@ func (a *ArgoCDWebhookHandler) affectedRevisionInfo(payloadIf any) (webURLs []st
// so we cannot update changedFiles for this type of payload
case gogsclient.PushPayload:
webURLs = append(webURLs, payload.Repo.HTMLURL)
revision = ParseRevision(payload.Ref)
change.shaAfter = ParseRevision(payload.After)
change.shaBefore = ParseRevision(payload.Before)
touchedHead = bool(payload.Repo.DefaultBranch == revision)
if payload.Repo != nil {
webURLs = append(webURLs, payload.Repo.HTMLURL)
touchedHead = payload.Repo.DefaultBranch == revision
}
for _, commit := range payload.Commits {
changedFiles = append(changedFiles, commit.Added...)
changedFiles = append(changedFiles, commit.Modified...)
@@ -313,8 +323,8 @@ func (a *ArgoCDWebhookHandler) HandleEvent(payload any) {
nsFilter = ""
}
appIf := a.appClientset.ArgoprojV1alpha1().Applications(nsFilter)
apps, err := appIf.List(context.Background(), metav1.ListOptions{})
appIf := a.appsLister.Applications(nsFilter)
apps, err := appIf.List(labels.Everything())
if err != nil {
log.Warnf("Failed to list applications: %v", err)
return
@@ -339,9 +349,9 @@ func (a *ArgoCDWebhookHandler) HandleEvent(payload any) {
// Skip any application that is neither in the control plane's namespace
// nor in the list of enabled namespaces.
var filteredApps []v1alpha1.Application
for _, app := range apps.Items {
for _, app := range apps {
if app.Namespace == a.ns || glob.MatchStringInList(a.appNs, app.Namespace, glob.REGEXP) {
filteredApps = append(filteredApps, app)
filteredApps = append(filteredApps, *app)
}
}


@@ -15,8 +15,11 @@ import (
"text/template"
"time"
"github.com/go-playground/webhooks/v6/azuredevops"
bb "github.com/ktrysmt/go-bitbucket"
"github.com/stretchr/testify/mock"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"github.com/go-playground/webhooks/v6/bitbucket"
@@ -28,6 +31,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
kubetesting "k8s.io/client-go/testing"
argov1 "github.com/argoproj/argo-cd/v3/pkg/client/listers/application/v1alpha1"
servercache "github.com/argoproj/argo-cd/v3/server/cache"
"github.com/argoproj/argo-cd/v3/util/cache/appstate"
"github.com/argoproj/argo-cd/v3/util/db"
@@ -98,6 +102,31 @@ func NewMockHandlerForBitbucketCallback(reactor *reactorDef, applicationNamespac
return newMockHandler(reactor, applicationNamespaces, defaultMaxPayloadSize, &mockDB, &argoSettings, objects...)
}
type fakeAppsLister struct {
argov1.ApplicationLister
argov1.ApplicationNamespaceLister
namespace string
clientset *appclientset.Clientset
}
func (f *fakeAppsLister) Applications(namespace string) argov1.ApplicationNamespaceLister {
return &fakeAppsLister{namespace: namespace, clientset: f.clientset}
}
func (f *fakeAppsLister) List(selector labels.Selector) ([]*v1alpha1.Application, error) {
res, err := f.clientset.ArgoprojV1alpha1().Applications(f.namespace).List(context.Background(), metav1.ListOptions{
LabelSelector: selector.String(),
})
if err != nil {
return nil, err
}
var apps []*v1alpha1.Application
for i := range res.Items {
apps = append(apps, &res.Items[i])
}
return apps, nil
}
func newMockHandler(reactor *reactorDef, applicationNamespaces []string, maxPayloadSize int64, argoDB db.ArgoDB, argoSettings *settings.ArgoCDSettings, objects ...runtime.Object) *ArgoCDWebhookHandler {
appClientset := appclientset.NewSimpleClientset(objects...)
if reactor != nil {
@@ -109,8 +138,7 @@ func newMockHandler(reactor *reactorDef, applicationNamespaces []string, maxPayl
appClientset.AddReactor(reactor.verb, reactor.resource, reactor.reaction)
}
cacheClient := cacheutil.NewCache(cacheutil.NewInMemoryCache(1 * time.Hour))
-return NewHandler("argocd", applicationNamespaces, 10, appClientset, argoSettings, &fakeSettingsSrc{}, cache.NewCache(
+return NewHandler("argocd", applicationNamespaces, 10, appClientset, &fakeAppsLister{clientset: appClientset}, argoSettings, &fakeSettingsSrc{}, cache.NewCache(
cacheClient,
1*time.Minute,
1*time.Minute,
@@ -702,6 +730,26 @@ func Test_affectedRevisionInfo_appRevisionHasChanged(t *testing.T) {
{true, "refs/tags/no-slashes", bitbucketPushPayload("no-slashes"), "bitbucket push branch or tag name without slashes, targetRevision tag prefixed"},
{true, "refs/tags/no-slashes", bitbucketRefChangedPayload("no-slashes"), "bitbucket ref changed branch or tag name without slashes, targetRevision tag prefixed"},
{true, "refs/tags/no-slashes", gogsPushPayload("no-slashes"), "gogs push branch or tag name without slashes, targetRevision tag prefixed"},
// Tests fix for https://github.com/argoproj/argo-cd/security/advisories/GHSA-wp4p-9pxh-cgx2
{true, "test", gogsclient.PushPayload{Ref: "test", Repo: nil}, "gogs push branch with nil repo in payload"},
// Testing fix for https://github.com/argoproj/argo-cd/security/advisories/GHSA-gpx4-37g2-c8pv
{false, "test", azuredevops.GitPushEvent{Resource: azuredevops.Resource{RefUpdates: []azuredevops.RefUpdate{}}}, "Azure DevOps malformed push event with no ref updates"},
{true, "some-ref", bitbucketserver.RepositoryReferenceChangedPayload{
Changes: []bitbucketserver.RepositoryChange{
{Reference: bitbucketserver.RepositoryReference{ID: "refs/heads/some-ref"}},
},
Repository: bitbucketserver.Repository{Links: map[string]any{"clone": "boom"}}, // The string "boom" here is what previously caused a panic.
}, "bitbucket push branch or tag name, malformed link"}, // https://github.com/argoproj/argo-cd/security/advisories/GHSA-f9gq-prrc-hrhc
{true, "some-ref", bitbucketserver.RepositoryReferenceChangedPayload{
Changes: []bitbucketserver.RepositoryChange{
{Reference: bitbucketserver.RepositoryReference{ID: "refs/heads/some-ref"}},
},
Repository: bitbucketserver.Repository{Links: map[string]any{"clone": []any{map[string]any{"name": "http", "href": []string{}}}}}, // The href as an empty array is what previously caused a panic.
}, "bitbucket push branch or tag name, malformed href"},
}
for _, testCase := range tests {
testCopy := testCase