Compare commits

...

32 Commits

Author SHA1 Message Date
github-actions[bot]
1963030721 Bump version to 3.2.0-rc4 on release-3.2 branch (#25010)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-10-20 17:13:42 -04:00
argo-cd-cherry-pick-bot[bot]
b227ef1559 fix: don't show error about missing appset (cherry-pick #24995 for 3.2) (#24997)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-10-17 12:28:38 -07:00
argo-cd-cherry-pick-bot[bot]
d1251f407a fix(health): use promotion resource Ready condition regardless of reason (cherry-pick #24971 for 3.2) (#24973)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-15 22:28:47 -04:00
argo-cd-cherry-pick-bot[bot]
3db95b1fbe fix: make webhook payload handlers recover from panics (cherry-pick #24862 for 3.2) (#24912)
Signed-off-by: Jakub Ciolek <jakub@ciolek.dev>
Co-authored-by: Jakub Ciolek <66125090+jake-ciolek@users.noreply.github.com>
2025-10-14 14:15:16 -04:00
Carlos R.F.
7628473802 chore(deps): bump redis from 8.2.1 to 8.2.2 to address vuln (release-3.2) (#24891)
Signed-off-by: Carlos Rodriguez-Fernandez <carlosrodrifernandez@gmail.com>
2025-10-09 17:12:30 -04:00
github-actions[bot]
059e8d220e Bump version to 3.2.0-rc3 on release-3.2 branch (#24885)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-10-07 13:33:02 -04:00
argo-cd-cherry-pick-bot[bot]
1ba3929520 fix(server): ensure resource health status is inferred on application retrieval (#24832) (cherry-pick #24851 for 3.2) (#24865)
Signed-off-by: Viacheslav Rianov <rianovviacheslav@gmail.com>
Co-authored-by: Rianov Viacheslav <55545103+vr009@users.noreply.github.com>
2025-10-06 16:57:56 -04:00
Peter Jiang
a42ccaeeca chore: bump gitops engine (#24864)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-10-06 14:58:54 -04:00
Peter Jiang
d75bcfd7b2 fix(cherry-pick): server-side diff shows duplicate containerPorts (#24842)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-10-03 17:25:06 -04:00
argo-cd-cherry-pick-bot[bot]
35e3897f61 fix(health): incorrect reason in PullRequest script (cherry-pick #24826 for 3.2) (#24828)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-02 17:32:19 -04:00
argo-cd-cherry-pick-bot[bot]
dc309cbe0d docs: fix typo in hydrator commit message template documentation (cherry-pick #24822 for 3.2) (#24827)
Signed-off-by: gyu-young-park <gyoue200125@gmail.com>
Co-authored-by: gyu-young-park <44598664+gyu-young-park@users.noreply.github.com>
2025-10-02 17:06:47 -04:00
Alexandre Gaudreault
a1f42488d9 fix: hydration errors not set on applications (#24755) (#24809)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-01 11:46:15 -04:00
github-actions[bot]
973eccee0a Bump version to 3.2.0-rc2 on release-3.2 branch (#24797)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 11:32:15 -04:00
Ville Vesilehto
8f8a1ecacb Merge commit from fork
Fixed a race condition in repository credentials handling by
implementing deep copying of secrets before modification.
This prevents concurrent map read/write panics when multiple
goroutines access the same secret.

The fix ensures thread-safe operations by always operating on
copies rather than shared objects.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-09-30 10:45:59 -04:00
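The race-condition fix described in the commit message above follows a standard Kubernetes pattern: deep-copy a cache-shared object before mutating it. A minimal standalone sketch of that pattern (the `Secret` type and `addCredentials` helper here are illustrative stand-ins, not the real Argo CD code):

```go
package main

import (
	"fmt"
	"sync"
)

// Secret is a stand-in for a Kubernetes corev1.Secret whose Data map may
// be shared between goroutines via an informer cache.
type Secret struct {
	Data map[string][]byte
}

// DeepCopy returns an independent copy, mirroring the generated DeepCopy
// methods on Kubernetes API objects.
func (s *Secret) DeepCopy() *Secret {
	out := &Secret{Data: make(map[string][]byte, len(s.Data))}
	for k, v := range s.Data {
		out.Data[k] = append([]byte(nil), v...)
	}
	return out
}

// addCredentials demonstrates the safe pattern: copy first, then mutate.
// Writing to shared.Data directly from many goroutines risks a
// "concurrent map writes" panic.
func addCredentials(shared *Secret, workers int) {
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			local := shared.DeepCopy() // each goroutine mutates its own copy
			local.Data["password"] = []byte(fmt.Sprintf("cred-%d", i))
		}(i)
	}
	wg.Wait()
}

func main() {
	shared := &Secret{Data: map[string][]byte{"url": []byte("https://git.example.com/repo.git")}}
	addCredentials(shared, 8)
	fmt.Println(len(shared.Data)) // shared object was never mutated
}
```

Running the unsafe variant (mutating `shared.Data` directly) under `go run -race` reports the data race that this fix eliminates.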
Michael Crenshaw
46409ae734 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:45:32 -04:00
Michael Crenshaw
5f5d46c78b Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:07:24 -04:00
Michael Crenshaw
722036d447 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 09:45:21 -04:00
argo-cd-cherry-pick-bot[bot]
001bfda068 fix: #24781 update crossplane healthchecks to V2 version (cherry-pick #24782 for 3.2) (#24784)
Signed-off-by: Jonasz Łasut-Balcerzak <jonasz.lasut@gmail.com>
Co-authored-by: Jonasz Łasut-Balcerzak <jonasz.lasut@gmail.com>
2025-09-30 18:04:50 +05:30
argo-cd-cherry-pick-bot[bot]
4821d71e3d fix(health): typo in PromotionStrategy health.lua (cherry-pick #24726 for 3.2) (#24760)
Co-authored-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>
2025-09-27 23:45:10 +02:00
Atif Ali
ef8ac49807 fix: Clear ApplicationSet applicationStatus when ProgressiveSync is disabled (cherry-pick #24587 for 3.2) (#24716)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-09-26 10:19:45 -04:00
Alexander Matyushentsev
feab307df3 feat: add status.resourcesCount field to appset and change limit default (#24698) (#24711)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-24 17:35:03 +05:30
argo-cd-cherry-pick-bot[bot]
087378c669 fix: update ExternalSecret discovery.lua to also include the refreshPolicy (cherry-pick #24707 for 3.2) (#24713)
Signed-off-by: AvivGuiser <avivguiser@gmail.com>
Co-authored-by: AvivGuiser <avivguiser@gmail.com>
2025-09-23 13:38:41 -04:00
Alexander Matyushentsev
f3c8e1d5e3 fix: limit number of resources in appset status (#24690) (#24697)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-22 14:58:00 -07:00
argo-cd-cherry-pick-bot[bot]
28510cdda6 fix: resolve argocdService initialization issue in notifications CLI (cherry-pick #24664 for 3.2) (#24680)
Signed-off-by: puretension <rlrlfhtm5@gmail.com>
Co-authored-by: DOHYEONG LEE <rlrlfhtm5@gmail.com>
2025-09-22 19:40:48 +02:00
argo-cd-cherry-pick-bot[bot]
6a2df4380a ci(release): only set latest release in github when latest (cherry-pick #24525 for 3.2) (#24686)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-22 11:46:55 -04:00
Michael Crenshaw
cd87a13a0d chore(ci): update github runners to oci gh arc runners (3.2) (#24632) (#24653)
Signed-off-by: Koray Oksay <koray.oksay@gmail.com>
Co-authored-by: Koray Oksay <koray.oksay@gmail.com>
2025-09-18 20:01:54 -04:00
argo-cd-cherry-pick-bot[bot]
1453367645 fix: Progress Sync Unknown in UI (cherry-pick #24202 for 3.2) (#24641)
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Atif Ali <56743004+aali309@users.noreply.github.com>
2025-09-18 14:37:39 -04:00
argo-cd-cherry-pick-bot[bot]
50531e6ab3 fix(oci): loosen up layer restrictions (cherry-pick #24640 for 3.2) (#24648)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-09-18 06:25:30 -10:00
argo-cd-cherry-pick-bot[bot]
bf9f927d55 fix: use informer in webhook handler to reduce memory usage (cherry-pick #24622 for 3.2) (#24623)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-17 14:51:08 -07:00
argo-cd-cherry-pick-bot[bot]
ee0de13be4 docs: Delete dangling word in Source Hydrator docs (cherry-pick #24601 for 3.2) (#24604)
Signed-off-by: José Maia <josecbmaia@hotmail.com>
Co-authored-by: José Maia <josecbmaia@hotmail.com>
2025-09-17 11:36:06 -04:00
argo-cd-cherry-pick-bot[bot]
4ac3f920d5 chore: bumps redis version to 8.2.1 (cherry-pick #24523 for 3.2) (#24582)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
Co-authored-by: Papapetrou Patroklos <1743100+ppapapetrou76@users.noreply.github.com>
2025-09-16 12:22:10 -04:00
github-actions[bot]
06ef059f9f Bump version to 3.2.0-rc1 on release-3.2 branch (#24581)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-16 09:37:35 -04:00
89 changed files with 4089 additions and 1612 deletions

View File

@@ -407,7 +407,7 @@ jobs:
test-e2e:
name: Run end-to-end tests
if: ${{ needs.changes.outputs.backend == 'true' }}
runs-on: ubuntu-latest-16-cores
runs-on: oracle-vm-16cpu-64gb-x86-64
strategy:
fail-fast: false
matrix:
@@ -426,7 +426,7 @@ jobs:
- build-go
- changes
env:
GOPATH: /home/runner/go
GOPATH: /home/ubuntu/go
ARGOCD_FAKE_IN_CLUSTER: 'true'
ARGOCD_SSH_DATA_PATH: '/tmp/argo-e2e/app/config/ssh'
ARGOCD_TLS_DATA_PATH: '/tmp/argo-e2e/app/config/tls'
@@ -462,9 +462,9 @@ jobs:
set -x
curl -sfL https://get.k3s.io | sh -
sudo chmod -R a+rw /etc/rancher/k3s
sudo mkdir -p $HOME/.kube && sudo chown -R runner $HOME/.kube
sudo mkdir -p $HOME/.kube && sudo chown -R ubuntu $HOME/.kube
sudo k3s kubectl config view --raw > $HOME/.kube/config
sudo chown runner $HOME/.kube/config
sudo chown ubuntu $HOME/.kube/config
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build cache
@@ -474,7 +474,7 @@ jobs:
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Add ~/go/bin to PATH
run: |
echo "/home/runner/go/bin" >> $GITHUB_PATH
echo "/home/ubuntu/go/bin" >> $GITHUB_PATH
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
@@ -496,11 +496,11 @@ jobs:
run: |
docker pull ghcr.io/dexidp/dex:v2.43.0
docker pull argoproj/argo-cd-ci-builder:v1.0.0
docker pull redis:7.2.7-alpine
docker pull redis:8.2.2-alpine
- name: Create target directory for binaries in the build-process
run: |
mkdir -p dist
chown runner dist
chown ubuntu dist
- name: Run E2E server and wait for it being available
timeout-minutes: 30
run: |

View File

@@ -32,6 +32,42 @@ jobs:
quay_username: ${{ secrets.RELEASE_QUAY_USERNAME }}
quay_password: ${{ secrets.RELEASE_QUAY_TOKEN }}
setup-variables:
name: Setup Release Variables
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
outputs:
is_pre_release: ${{ steps.var.outputs.is_pre_release }}
is_latest_release: ${{ steps.var.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup variables
id: var
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_RELEASE_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | grep -v '-' | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo ${{ github.ref_name }} | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
IS_LATEST=false
# Ensure latest release tag matches github.ref_name
if [[ $LATEST_RELEASE_TAG == ${{ github.ref_name }} ]];then
IS_LATEST=true
fi
echo "is_pre_release=$PRE_RELEASE" >> $GITHUB_OUTPUT
echo "is_latest_release=$IS_LATEST" >> $GITHUB_OUTPUT
argocd-image-provenance:
needs: [argocd-image]
permissions:
@@ -50,15 +86,17 @@ jobs:
goreleaser:
needs:
- setup-variables
- argocd-image
- argocd-image-provenance
permissions:
contents: write # used for uploading assets
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
GORELEASER_MAKE_LATEST: ${{ needs.setup-variables.outputs.is_latest_release }}
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -142,7 +180,7 @@ jobs:
permissions:
contents: write # Needed for release uploads
outputs:
hashes: ${{ steps.sbom-hash.outputs.hashes}}
hashes: ${{ steps.sbom-hash.outputs.hashes }}
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
steps:
@@ -221,6 +259,7 @@ jobs:
post-release:
needs:
- setup-variables
- argocd-image
- goreleaser
- generate-sbom
@@ -229,6 +268,8 @@ jobs:
pull-requests: write # Needed to create PR for VERSION update.
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -242,27 +283,6 @@ jobs:
git config --global user.email 'ci@argoproj.com'
git config --global user.name 'CI'
- name: Check if tag is the latest version and not a pre-release
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo $LATEST_TAG | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
# Ensure latest tag matches github.ref_name & not a pre-release
if [[ $LATEST_TAG == ${{ github.ref_name }} ]] && [[ $PRE_RELEASE != 'true' ]];then
echo "TAG_STABLE=true" >> $GITHUB_ENV
else
echo "TAG_STABLE=false" >> $GITHUB_ENV
fi
- name: Update stable tag to latest version
run: |
git tag -f stable ${{ github.ref_name }}
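The `setup-variables` job above derives two flags from tag names: a `-rcN` suffix marks a pre-release, and the highest non-rc tag marks the latest release. The logic can be sketched standalone (hypothetical tags; `sort -V` approximates git's `versionsort.suffix=-rc` ordering used in the real workflow):

```shell
#!/bin/sh
# Hypothetical tag list standing in for `git tag --list`.
TAGS="v3.1.0
v3.2.0-rc1
v3.2.0"
REF_NAME="v3.2.0-rc1"  # the tag being released

# A tag ending in -rc<digits> is a pre-release.
PRE_RELEASE=false
if echo "$REF_NAME" | grep -qE -- '-rc[0-9]+$'; then
  PRE_RELEASE=true
fi

# The latest release is the highest version-sorted tag with no rc suffix.
LATEST=$(echo "$TAGS" | grep -v -- '-' | sort -V | tail -n1)
IS_LATEST=false
if [ "$LATEST" = "$REF_NAME" ]; then
  IS_LATEST=true
fi

echo "is_pre_release=$PRE_RELEASE is_latest_release=$IS_LATEST"
```

With `REF_NAME=v3.2.0-rc1` this prints `is_pre_release=true is_latest_release=false`, which is what gates `GORELEASER_MAKE_LATEST` and the `stable` tag update.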

View File

@@ -49,13 +49,14 @@ archives:
- argocd-cli
name_template: |-
{{ .ProjectName }}-{{ .Os }}-{{ .Arch }}
formats: [ binary ]
formats: [binary]
checksum:
name_template: 'cli_checksums.txt'
algorithm: sha256
release:
make_latest: '{{ .Env.GORELEASER_MAKE_LATEST }}'
prerelease: auto
draft: false
header: |

View File

@@ -24,7 +24,6 @@ packages:
Renderer: {}
github.com/argoproj/argo-cd/v3/commitserver/apiclient:
interfaces:
Clientset: {}
CommitServiceClient: {}
github.com/argoproj/argo-cd/v3/commitserver/commit:
interfaces:
@@ -35,6 +34,7 @@ packages:
github.com/argoproj/argo-cd/v3/controller/hydrator:
interfaces:
Dependencies: {}
RepoGetter: {}
github.com/argoproj/argo-cd/v3/pkg/apiclient/cluster:
interfaces:
ClusterServiceServer: {}

View File

@@ -1 +1 @@
3.2.0
3.2.0-rc4

View File

@@ -100,6 +100,7 @@ type ApplicationSetReconciler struct {
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -251,6 +252,16 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, fmt.Errorf("failed to perform progressive sync reconciliation for application set: %w", err)
}
}
} else {
// Progressive Sync is disabled, clear any existing applicationStatus to prevent stale data
if len(applicationSetInfo.Status.ApplicationStatus) > 0 {
logCtx.Infof("Progressive Sync disabled, removing %v AppStatus entries from ApplicationSet %v", len(applicationSetInfo.Status.ApplicationStatus), applicationSetInfo.Name)
err := r.setAppSetApplicationStatus(ctx, logCtx, &applicationSetInfo, []argov1alpha1.ApplicationSetApplicationStatus{})
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to clear AppSet application statuses when Progressive Sync is disabled for %v: %w", applicationSetInfo.Name, err)
}
}
}
var validApps []argov1alpha1.Application
@@ -1398,7 +1409,13 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
sort.Slice(statuses, func(i, j int) bool {
return statuses[i].Name < statuses[j].Name
})
resourcesCount := int64(len(statuses))
if r.MaxResourcesStatusCount > 0 && len(statuses) > r.MaxResourcesStatusCount {
logCtx.Warnf("Truncating ApplicationSet %s resource status from %d to max allowed %d entries", appset.Name, len(statuses), r.MaxResourcesStatusCount)
statuses = statuses[:r.MaxResourcesStatusCount]
}
appset.Status.Resources = statuses
appset.Status.ResourcesCount = resourcesCount
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
@@ -1411,6 +1428,7 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
}
updatedAppset.Status.Resources = appset.Status.Resources
updatedAppset.Status.ResourcesCount = resourcesCount
// Update the newly fetched object with new status resources
err := r.Client.Status().Update(ctx, updatedAppset)
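The truncation logic above bounds the stored status list while `resourcesCount` preserves the true total. A standalone sketch of that behavior (simplified to string slices; names are illustrative):

```go
package main

import "fmt"

// truncateStatuses mirrors the controller logic: when max > 0, cap the
// stored status list at max entries but report the untruncated count, so
// the status field stays small without losing the real total.
func truncateStatuses(statuses []string, max int) ([]string, int64) {
	count := int64(len(statuses))
	if max > 0 && len(statuses) > max {
		statuses = statuses[:max]
	}
	return statuses, count
}

func main() {
	kept, total := truncateStatuses([]string{"app1", "app2", "app3"}, 1)
	fmt.Println(len(kept), total) // stored 1 entry, counted 3
}
```

This matches the new `--max-resources-status-count` flag semantics: `0` (the default) disables truncation entirely.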

View File

@@ -6410,10 +6410,11 @@ func TestUpdateResourceStatus(t *testing.T) {
require.NoError(t, err)
for _, cc := range []struct {
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
maxResourcesStatusCount int
}{
{
name: "handles an empty application list",
@@ -6577,6 +6578,73 @@ func TestUpdateResourceStatus(t *testing.T) {
apps: []v1alpha1.Application{},
expectedResources: nil,
},
{
name: "truncates resources status list to",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "argocd",
},
Status: v1alpha1.ApplicationSetStatus{
Resources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
{
Name: "app2",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
},
},
},
apps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.AppHealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Name: "app2",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.AppHealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
},
expectedResources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeSynced,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
maxResourcesStatusCount: 1,
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
@@ -6587,13 +6655,14 @@ func TestUpdateResourceStatus(t *testing.T) {
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
MaxResourcesStatusCount: cc.maxResourcesStatusCount,
}
err := r.updateResourcesStatus(t.Context(), log.NewEntry(log.StandardLogger()), &cc.appSet, cc.apps)
@@ -7544,109 +7613,81 @@ func TestSyncApplication(t *testing.T) {
}
}
func TestIsRollingSyncDeletionReversed(t *testing.T) {
tests := []struct {
name string
appset *v1alpha1.ApplicationSet
expected bool
func TestReconcileProgressiveSyncDisabled(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
for _, cc := range []struct {
name string
appSet v1alpha1.ApplicationSet
enableProgressiveSyncs bool
expectedAppStatuses []v1alpha1.ApplicationSetApplicationStatus
}{
{
name: "Deletion Order on strategy is set as Reverse",
appset: &v1alpha1.ApplicationSet{
name: "clears applicationStatus when Progressive Sync is disabled",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "test-appset",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{
{
MatchExpressions: []v1alpha1.ApplicationMatchExpression{
{
Key: "environment",
Operator: "In",
Values: []string{
"dev",
},
},
},
},
{
MatchExpressions: []v1alpha1.ApplicationMatchExpression{
{
Key: "environment",
Operator: "In",
Values: []string{
"staging",
},
},
},
},
},
Generators: []v1alpha1.ApplicationSetGenerator{},
Template: v1alpha1.ApplicationSetTemplate{},
},
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "test-appset-guestbook",
Message: "Application resource became Healthy, updating status from Progressing to Healthy.",
Status: "Healthy",
Step: "1",
},
DeletionOrder: ReverseDeletionOrder,
},
},
},
expected: true,
enableProgressiveSyncs: false,
expectedAppStatuses: nil,
},
{
name: "Deletion Order on strategy is set as AllAtOnce",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{},
},
DeletionOrder: AllAtOnceDeletionOrder,
},
} {
t.Run(cc.name, func(t *testing.T) {
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&cc.appSet).WithStatusSubresource(&cc.appSet).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Renderer: &utils.Render{},
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
EnableProgressiveSyncs: cc.enableProgressiveSyncs,
}
req := ctrl.Request{
NamespacedName: types.NamespacedName{
Namespace: cc.appSet.Namespace,
Name: cc.appSet.Name,
},
},
expected: false,
},
{
name: "Deletion Order on strategy is set as Reverse but no steps in RollingSync",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{},
},
DeletionOrder: ReverseDeletionOrder,
},
},
},
expected: false,
},
{
name: "Deletion Order on strategy is set as Reverse, but AllAtOnce is explicitly set",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "AllAtOnce",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{},
},
DeletionOrder: ReverseDeletionOrder,
},
},
},
expected: false,
},
{
name: "Strategy is Nil",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{},
},
},
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := isProgressiveSyncDeletionOrderReversed(tt.appset)
assert.Equal(t, tt.expected, result)
}
// Run reconciliation
_, err = r.Reconcile(t.Context(), req)
require.NoError(t, err)
// Fetch the updated ApplicationSet
var updatedAppSet v1alpha1.ApplicationSet
err = r.Get(t.Context(), req.NamespacedName, &updatedAppSet)
require.NoError(t, err)
// Verify the applicationStatus field
assert.Equal(t, cc.expectedAppStatuses, updatedAppSet.Status.ApplicationStatus, "applicationStatus should match expected value")
})
}
}

View File

@@ -26,10 +26,14 @@ import (
"github.com/go-playground/webhooks/v6/github"
"github.com/go-playground/webhooks/v6/gitlab"
log "github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/v3/util/guard"
)
const payloadQueueSize = 50000
const panicMsgAppSet = "panic while processing applicationset-controller webhook event"
type WebhookHandler struct {
sync.WaitGroup // for testing
github *github.Webhook
@@ -102,6 +106,7 @@ func NewWebhookHandler(webhookParallelism int, argocdSettingsMgr *argosettings.S
}
func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
compLog := log.WithField("component", "applicationset-webhook")
for i := 0; i < webhookParallelism; i++ {
h.Add(1)
go func() {
@@ -111,7 +116,7 @@ func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
if !ok {
return
}
h.HandleEvent(payload)
guard.RecoverAndLog(func() { h.HandleEvent(payload) }, compLog, panicMsgAppSet)
}
}()
}
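The `guard.RecoverAndLog` call above wraps each handler invocation so a panic in one payload is logged instead of killing the worker goroutine. A minimal sketch of the pattern (the real `guard.RecoverAndLog` signature and logging may differ; this version returns the recovered value for demonstration):

```go
package main

import "fmt"

// recoverAndLog runs fn, converting a panic into a log line so one bad
// payload cannot crash the worker. Returns the recovered value, or nil
// if fn completed normally.
func recoverAndLog(fn func(), msg string) (recovered any) {
	defer func() {
		if r := recover(); r != nil {
			recovered = r
			fmt.Printf("%s: %v\n", msg, r)
		}
	}()
	fn()
	return nil
}

func main() {
	recoverAndLog(func() { panic("bad webhook payload") }, "panic while processing webhook event")
	fmt.Println("worker still alive") // the pool keeps draining the queue
}
```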

assets/swagger.json (generated)
View File

@@ -7322,6 +7322,11 @@
"items": {
"$ref": "#/definitions/applicationv1alpha1ResourceStatus"
}
},
"resourcesCount": {
"description": "ResourcesCount is the total number of resources managed by this application set. The count may be higher than actual number of items in the Resources field when\nthe number of managed resources exceeds the limit imposed by the controller (to avoid making the status field too large).",
"type": "integer",
"format": "int64"
}
}
},
@@ -10572,8 +10577,8 @@
"type": "string"
},
"targetBranch": {
"type": "string",
"title": "TargetBranch is the branch to which hydrated manifests should be committed"
"description": "TargetBranch is the branch from which hydrated manifests will be synced.\nIf HydrateTo is not set, this is also the branch to which hydrated manifests are committed.",
"type": "string"
}
}
},

View File

@@ -79,6 +79,7 @@ func NewCommand() *cobra.Command {
enableScmProviders bool
webhookParallelism int
tokenRefStrictMode bool
maxResourcesStatusCount int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@@ -231,6 +232,7 @@ func NewCommand() *cobra.Command {
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -275,6 +277,7 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&metricsAplicationsetLabels, "metrics-applicationset-labels", []string{}, "List of Application labels that will be added to the argocd_applicationset_labels metric")
command.Flags().BoolVar(&enableGitHubAPIMetrics, "enable-github-api-metrics", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_GITHUB_API_METRICS", false), "Enable GitHub API metrics for generators that use the GitHub API")
command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 0, 0, math.MaxInt), "Max number of resources stored in appset status.")
return &command
}

View File

@@ -30,11 +30,12 @@ func NewNotificationsCommand() *cobra.Command {
)
var argocdService service.Service
toolsCommand := cmd.NewToolsCommand(
"notifications",
"argocd admin notifications",
applications,
settings.GetFactorySettingsForCLI(argocdService, "argocd-notifications-secret", "argocd-notifications-cm", false),
settings.GetFactorySettingsForCLI(func() service.Service { return argocdService }, "argocd-notifications-secret", "argocd-notifications-cm", false),
func(clientConfig clientcmd.ClientConfig) {
k8sCfg, err := clientConfig.ClientConfig()
if err != nil {

View File

@@ -1,101 +1,14 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
package mocks
import (
"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
"github.com/argoproj/argo-cd/v3/util/io"
mock "github.com/stretchr/testify/mock"
utilio "github.com/argoproj/argo-cd/v3/util/io"
)
// NewClientset creates a new instance of Clientset. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewClientset(t interface {
mock.TestingT
Cleanup(func())
}) *Clientset {
mock := &Clientset{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
// Clientset is an autogenerated mock type for the Clientset type
type Clientset struct {
mock.Mock
CommitServiceClient apiclient.CommitServiceClient
}
type Clientset_Expecter struct {
mock *mock.Mock
}
func (_m *Clientset) EXPECT() *Clientset_Expecter {
return &Clientset_Expecter{mock: &_m.Mock}
}
// NewCommitServerClient provides a mock function for the type Clientset
func (_mock *Clientset) NewCommitServerClient() (io.Closer, apiclient.CommitServiceClient, error) {
ret := _mock.Called()
if len(ret) == 0 {
panic("no return value specified for NewCommitServerClient")
}
var r0 io.Closer
var r1 apiclient.CommitServiceClient
var r2 error
if returnFunc, ok := ret.Get(0).(func() (io.Closer, apiclient.CommitServiceClient, error)); ok {
return returnFunc()
}
if returnFunc, ok := ret.Get(0).(func() io.Closer); ok {
r0 = returnFunc()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(io.Closer)
}
}
if returnFunc, ok := ret.Get(1).(func() apiclient.CommitServiceClient); ok {
r1 = returnFunc()
} else {
if ret.Get(1) != nil {
r1 = ret.Get(1).(apiclient.CommitServiceClient)
}
}
if returnFunc, ok := ret.Get(2).(func() error); ok {
r2 = returnFunc()
} else {
r2 = ret.Error(2)
}
return r0, r1, r2
}
// Clientset_NewCommitServerClient_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'NewCommitServerClient'
type Clientset_NewCommitServerClient_Call struct {
*mock.Call
}
// NewCommitServerClient is a helper method to define mock.On call
func (_e *Clientset_Expecter) NewCommitServerClient() *Clientset_NewCommitServerClient_Call {
return &Clientset_NewCommitServerClient_Call{Call: _e.mock.On("NewCommitServerClient")}
}
func (_c *Clientset_NewCommitServerClient_Call) Run(run func()) *Clientset_NewCommitServerClient_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *Clientset_NewCommitServerClient_Call) Return(closer io.Closer, commitServiceClient apiclient.CommitServiceClient, err error) *Clientset_NewCommitServerClient_Call {
_c.Call.Return(closer, commitServiceClient, err)
return _c
}
func (_c *Clientset_NewCommitServerClient_Call) RunAndReturn(run func() (io.Closer, apiclient.CommitServiceClient, error)) *Clientset_NewCommitServerClient_Call {
_c.Call.Return(run)
return _c
func (c *Clientset) NewCommitServerClient() (utilio.Closer, apiclient.CommitServiceClient, error) {
return utilio.NopCloser, c.CommitServiceClient, nil
}

View File

@@ -1878,7 +1878,7 @@ func (ctrl *ApplicationController) processAppHydrateQueueItem() (processNext boo
return
}
ctrl.hydrator.ProcessAppHydrateQueueItem(origApp)
ctrl.hydrator.ProcessAppHydrateQueueItem(origApp.DeepCopy())
log.WithFields(applog.GetAppLogFields(origApp)).Debug("Successfully processed app hydrate queue item")
return

View File

@@ -4,7 +4,9 @@ import (
"context"
"encoding/json"
"fmt"
"maps"
"path/filepath"
"slices"
"sync"
"time"
@@ -99,47 +101,41 @@ func NewHydrator(dependencies Dependencies, statusRefreshTimeout time.Duration,
// It's likely that multiple applications will trigger hydration at the same time. The hydration queue key is meant to
// dedupe these requests.
func (h *Hydrator) ProcessAppHydrateQueueItem(origApp *appv1.Application) {
origApp = origApp.DeepCopy()
app := origApp.DeepCopy()
if app.Spec.SourceHydrator == nil {
return
}
logCtx := log.WithFields(applog.GetAppLogFields(app))
logCtx.Debug("Processing app hydrate queue item")
// TODO: don't reuse statusRefreshTimeout. Create a new timeout for hydration.
needsHydration, reason := appNeedsHydration(origApp, h.statusRefreshTimeout)
if !needsHydration {
return
needsHydration, reason := appNeedsHydration(app)
if needsHydration {
app.Status.SourceHydrator.CurrentOperation = &appv1.HydrateOperation{
StartedAt: metav1.Now(),
FinishedAt: nil,
Phase: appv1.HydrateOperationPhaseHydrating,
SourceHydrator: *app.Spec.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
}
logCtx.WithField("reason", reason).Info("Hydrating app")
app.Status.SourceHydrator.CurrentOperation = &appv1.HydrateOperation{
StartedAt: metav1.Now(),
FinishedAt: nil,
Phase: appv1.HydrateOperationPhaseHydrating,
SourceHydrator: *app.Spec.SourceHydrator,
needsRefresh := app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.StartedAt.Time) > h.statusRefreshTimeout
if needsHydration || needsRefresh {
logCtx.WithField("reason", reason).Info("Hydrating app")
h.dependencies.AddHydrationQueueItem(getHydrationQueueKey(app))
} else {
logCtx.WithField("reason", reason).Debug("Skipping hydration")
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
origApp.Status.SourceHydrator = app.Status.SourceHydrator
h.dependencies.AddHydrationQueueItem(getHydrationQueueKey(app))
logCtx.Debug("Successfully processed app hydrate queue item")
}
func getHydrationQueueKey(app *appv1.Application) types.HydrationQueueKey {
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
key := types.HydrationQueueKey{
SourceRepoURL: git.NormalizeGitURLAllowInvalid(app.Spec.SourceHydrator.DrySource.RepoURL),
SourceTargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
DestinationBranch: destinationBranch,
DestinationBranch: app.Spec.GetHydrateToSource().TargetRevision,
}
return key
}
@@ -148,43 +144,92 @@ func getHydrationQueueKey(app *appv1.Application) types.HydrationQueueKey {
// hydration key, hydrates their latest commit, and updates their status accordingly. If the hydration fails, it marks
// the operation as failed and logs the error. If successful, it updates the operation to indicate that hydration was
// successful and requests a refresh of the applications to pick up the new hydrated commit.
func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKey) (processNext bool) {
func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKey) {
logCtx := log.WithFields(log.Fields{
"sourceRepoURL": hydrationKey.SourceRepoURL,
"sourceTargetRevision": hydrationKey.SourceTargetRevision,
"destinationBranch": hydrationKey.DestinationBranch,
})
relevantApps, drySHA, hydratedSHA, err := h.hydrateAppsLatestCommit(logCtx, hydrationKey)
if len(relevantApps) == 0 {
// return early if there are no relevant apps found to hydrate
// otherwise you'll be stuck in hydrating
logCtx.Info("Skipping hydration since there are no relevant apps found to hydrate")
// Get all applications sharing the same hydration key
apps, err := h.getAppsForHydrationKey(hydrationKey)
if err != nil {
// If we get an error here, we cannot proceed with hydration and we do not know
// which apps to update with the failure. The best we can do is log an error in
// the controller and wait for statusRefreshTimeout to retry
logCtx.WithError(err).Error("failed to get apps for hydration")
return
}
logCtx = logCtx.WithField("appCount", len(apps))
// FIXME: we might end up in a race condition here where a HydrationQueueItem is processed
// before all applications have had their CurrentOperation set by ProcessAppHydrateQueueItem.
// This would cause this method to update an "old" CurrentOperation.
// It should only start hydration if all apps are in the HydrateOperationPhaseHydrating phase.
raceDetected := false
for _, app := range apps {
if app.Status.SourceHydrator.CurrentOperation == nil || app.Status.SourceHydrator.CurrentOperation.Phase != appv1.HydrateOperationPhaseHydrating {
raceDetected = true
break
}
}
if raceDetected {
logCtx.Warn("race condition detected: not all apps are in HydrateOperationPhaseHydrating phase")
}
// validate all the applications to make sure they are all correctly configured.
// All applications sharing the same hydration key must succeed for the hydration to be processed.
projects, validationErrors := h.validateApplications(apps)
if len(validationErrors) > 0 {
// For the applications that have an error, set the specific error in their status.
// Applications without error will still fail with a generic error since the hydration cannot be partial
genericError := genericHydrationError(validationErrors)
for _, app := range apps {
if err, ok := validationErrors[app.QualifiedName()]; ok {
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
logCtx.Errorf("failed to validate hydration app: %v", err)
h.setAppHydratorError(app, err)
} else {
h.setAppHydratorError(app, genericError)
}
}
return
}
// Hydrate all the apps
drySHA, hydratedSHA, appErrors, err := h.hydrate(logCtx, apps, projects)
if err != nil {
// If there is a single error, it affects every application
for i := range apps {
appErrors[apps[i].QualifiedName()] = err
}
}
if drySHA != "" {
logCtx = logCtx.WithField("drySHA", drySHA)
}
if err != nil {
logCtx.WithField("appCount", len(relevantApps)).WithError(err).Error("Failed to hydrate apps")
for _, app := range relevantApps {
origApp := app.DeepCopy()
app.Status.SourceHydrator.CurrentOperation.Phase = appv1.HydrateOperationPhaseFailed
failedAt := metav1.Now()
app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate revision %q: %v", drySHA, err.Error())
// We may or may not have gotten far enough in the hydration process to get a non-empty SHA, but set it just
// in case we did.
app.Status.SourceHydrator.CurrentOperation.DrySHA = drySHA
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
logCtx.Errorf("Failed to hydrate app: %v", err)
if len(appErrors) > 0 {
// For the applications that have an error, set the specific error in their status.
// Applications without error will still fail with a generic error since the hydration cannot be partial
genericError := genericHydrationError(appErrors)
for _, app := range apps {
if drySHA != "" {
// If we have a drySHA, we can set it on the app status
app.Status.SourceHydrator.CurrentOperation.DrySHA = drySHA
}
if err, ok := appErrors[app.QualifiedName()]; ok {
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
logCtx.Errorf("failed to hydrate app: %v", err)
h.setAppHydratorError(app, err)
} else {
h.setAppHydratorError(app, genericError)
}
}
return
}
logCtx.WithField("appCount", len(relevantApps)).Debug("Successfully hydrated apps")
logCtx.Debug("Successfully hydrated apps")
finishedAt := metav1.Now()
for _, app := range relevantApps {
for _, app := range apps {
origApp := app.DeepCopy()
operation := &appv1.HydrateOperation{
StartedAt: app.Status.SourceHydrator.CurrentOperation.StartedAt,
@@ -202,118 +247,123 @@ func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKe
SourceHydrator: app.Status.SourceHydrator.CurrentOperation.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
// Request a refresh since we pushed a new commit.
err := h.dependencies.RequestAppRefresh(app.Name, app.Namespace)
if err != nil {
logCtx.WithField("app", app.QualifiedName()).WithError(err).Error("Failed to request app refresh after hydration")
logCtx.WithFields(applog.GetAppLogFields(app)).WithError(err).Error("Failed to request app refresh after hydration")
}
}
return
}
func (h *Hydrator) hydrateAppsLatestCommit(logCtx *log.Entry, hydrationKey types.HydrationQueueKey) ([]*appv1.Application, string, string, error) {
relevantApps, projects, err := h.getRelevantAppsAndProjectsForHydration(logCtx, hydrationKey)
if err != nil {
return nil, "", "", fmt.Errorf("failed to get relevant apps for hydration: %w", err)
// setAppHydratorError updates the CurrentOperation with the error information.
func (h *Hydrator) setAppHydratorError(app *appv1.Application, err error) {
// if the operation is not in progress, we do not update the status
if app.Status.SourceHydrator.CurrentOperation.Phase != appv1.HydrateOperationPhaseHydrating {
return
}
dryRevision, hydratedRevision, err := h.hydrate(logCtx, relevantApps, projects)
if err != nil {
return relevantApps, dryRevision, "", fmt.Errorf("failed to hydrate apps: %w", err)
}
return relevantApps, dryRevision, hydratedRevision, nil
origApp := app.DeepCopy()
app.Status.SourceHydrator.CurrentOperation.Phase = appv1.HydrateOperationPhaseFailed
failedAt := metav1.Now()
app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate: %v", err.Error())
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
}
func (h *Hydrator) getRelevantAppsAndProjectsForHydration(logCtx *log.Entry, hydrationKey types.HydrationQueueKey) ([]*appv1.Application, map[string]*appv1.AppProject, error) {
// getAppsForHydrationKey returns the applications matching the hydration key.
func (h *Hydrator) getAppsForHydrationKey(hydrationKey types.HydrationQueueKey) ([]*appv1.Application, error) {
// Get all apps
apps, err := h.dependencies.GetProcessableApps()
if err != nil {
return nil, nil, fmt.Errorf("failed to list apps: %w", err)
return nil, fmt.Errorf("failed to list apps: %w", err)
}
var relevantApps []*appv1.Application
projects := make(map[string]*appv1.AppProject)
uniquePaths := make(map[string]bool, len(apps.Items))
for _, app := range apps.Items {
if app.Spec.SourceHydrator == nil {
continue
}
if !git.SameURL(app.Spec.SourceHydrator.DrySource.RepoURL, hydrationKey.SourceRepoURL) ||
app.Spec.SourceHydrator.DrySource.TargetRevision != hydrationKey.SourceTargetRevision {
continue
}
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
if destinationBranch != hydrationKey.DestinationBranch {
appKey := getHydrationQueueKey(&app)
if appKey != hydrationKey {
continue
}
relevantApps = append(relevantApps, &app)
}
return relevantApps, nil
}
path := app.Spec.SourceHydrator.SyncSource.Path
// ensure that the path is always set to a path that doesn't resolve to the root of the repo
if IsRootPath(path) {
return nil, nil, fmt.Errorf("app %q has path %q which resolves to repository root", app.QualifiedName(), path)
}
// validateApplications checks that all applications are valid for hydration.
func (h *Hydrator) validateApplications(apps []*appv1.Application) (map[string]*appv1.AppProject, map[string]error) {
projects := make(map[string]*appv1.AppProject)
errors := make(map[string]error)
uniquePaths := make(map[string]string, len(apps))
var proj *appv1.AppProject
for _, app := range apps {
// Get the project for the app and validate if the app is allowed to use the source.
// We can't short-circuit this even if we have seen this project before, because we need to verify that this
// particular app is allowed to use this project. That logic is in GetProcessableAppProj.
proj, err = h.dependencies.GetProcessableAppProj(&app)
// particular app is allowed to use this project.
proj, err := h.dependencies.GetProcessableAppProj(app)
if err != nil {
return nil, nil, fmt.Errorf("failed to get project %q for app %q: %w", app.Spec.Project, app.QualifiedName(), err)
errors[app.QualifiedName()] = fmt.Errorf("failed to get project %q: %w", app.Spec.Project, err)
continue
}
permitted := proj.IsSourcePermitted(app.Spec.GetSource())
if !permitted {
// Log and skip. We don't want to fail the entire operation because of one app.
logCtx.Warnf("App %q is not permitted to use source %q", app.QualifiedName(), app.Spec.Source.String())
errors[app.QualifiedName()] = fmt.Errorf("application repo %s is not permitted in project '%s'", app.Spec.GetSource().RepoURL, proj.Name)
continue
}
projects[app.Spec.Project] = proj
// Disallow hydrating to the repository root.
// Hydrating to root would overwrite or delete files at the top level of the repo,
// which can break other applications or shared configuration.
// Every hydrated app must write into a subdirectory instead.
destPath := app.Spec.SourceHydrator.SyncSource.Path
if IsRootPath(destPath) {
errors[app.QualifiedName()] = fmt.Errorf("app is configured to hydrate to the repository root (branch %q, path %q) which is not allowed", app.Spec.GetHydrateToSource().TargetRevision, destPath)
continue
}
// TODO: test the dupe detection
// TODO: normalize the path to avoid "path/.." from being treated as different from "."
if _, ok := uniquePaths[path]; ok {
return nil, nil, fmt.Errorf("multiple app hydrators use the same destination: %v", app.Spec.SourceHydrator.SyncSource.Path)
if appName, ok := uniquePaths[destPath]; ok {
errors[app.QualifiedName()] = fmt.Errorf("app %s hydrator uses the same destination: %v", appName, app.Spec.SourceHydrator.SyncSource.Path)
errors[appName] = fmt.Errorf("app %s hydrator uses the same destination: %v", app.QualifiedName(), app.Spec.SourceHydrator.SyncSource.Path)
continue
}
uniquePaths[path] = true
relevantApps = append(relevantApps, &app)
uniquePaths[destPath] = app.QualifiedName()
}
return relevantApps, projects, nil
// If there are any errors, return nil for projects to avoid possible partial processing.
if len(errors) > 0 {
projects = nil
}
return projects, errors
}
func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, projects map[string]*appv1.AppProject) (string, string, error) {
func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, projects map[string]*appv1.AppProject) (string, string, map[string]error, error) {
errors := make(map[string]error)
if len(apps) == 0 {
return "", "", nil
return "", "", nil, nil
}
// These values are the same for all apps being hydrated together, so just get them from the first app.
repoURL := apps[0].Spec.SourceHydrator.DrySource.RepoURL
syncBranch := apps[0].Spec.SourceHydrator.SyncSource.TargetBranch
repoURL := apps[0].Spec.GetHydrateToSource().RepoURL
targetBranch := apps[0].Spec.GetHydrateToSource().TargetRevision
// Disallow hydrating to the repository root.
// Hydrating to root would overwrite or delete files at the top level of the repo,
// which can break other applications or shared configuration.
// Every hydrated app must write into a subdirectory instead.
for _, app := range apps {
destPath := app.Spec.SourceHydrator.SyncSource.Path
if IsRootPath(destPath) {
return "", "", fmt.Errorf(
"app %q is configured to hydrate to the repository root (branch %q, path %q) which is not allowed",
app.QualifiedName(), targetBranch, destPath,
)
}
}
// FIXME: As a convenience, the commit server will create the syncBranch if it does not exist. If the
// targetBranch does not exist, it will create it based on the syncBranch. On the next line, we take
// the `syncBranch` from the first app and assume that they're all configured the same. Instead, if any
// app has a different syncBranch, we should send the commit server an empty string and allow it to
// create the targetBranch as an orphan since we can't reliably determine a reasonable base.
syncBranch := apps[0].Spec.SourceHydrator.SyncSource.TargetBranch
// Get a static SHA revision from the first app so that all apps are hydrated from the same revision.
targetRevision, pathDetails, err := h.getManifests(context.Background(), apps[0], "", projects[apps[0].Spec.Project])
if err != nil {
return "", "", fmt.Errorf("failed to get manifests for app %q: %w", apps[0].QualifiedName(), err)
errors[apps[0].QualifiedName()] = fmt.Errorf("failed to get manifests: %w", err)
return "", "", errors, nil
}
paths := []*commitclient.PathDetails{pathDetails}
@@ -324,18 +374,18 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
app := app
eg.Go(func() error {
_, pathDetails, err = h.getManifests(ctx, app, targetRevision, projects[app.Spec.Project])
if err != nil {
return fmt.Errorf("failed to get manifests for app %q: %w", app.QualifiedName(), err)
}
mu.Lock()
defer mu.Unlock()
if err != nil {
errors[app.QualifiedName()] = fmt.Errorf("failed to get manifests: %w", err)
return errors[app.QualifiedName()]
}
paths = append(paths, pathDetails)
mu.Unlock()
return nil
})
}
err = eg.Wait()
if err != nil {
return "", "", fmt.Errorf("failed to get manifests for apps: %w", err)
if err := eg.Wait(); err != nil {
return targetRevision, "", errors, nil
}
// If all the apps are under the same project, use that project. Otherwise, use an empty string to indicate that we
@@ -344,18 +394,19 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
if len(projects) == 1 {
for p := range projects {
project = p
break
}
}
// Get the commit metadata for the target revision.
revisionMetadata, err := h.getRevisionMetadata(context.Background(), repoURL, project, targetRevision)
if err != nil {
return "", "", fmt.Errorf("failed to get revision metadata for %q: %w", targetRevision, err)
return targetRevision, "", errors, fmt.Errorf("failed to get revision metadata for %q: %w", targetRevision, err)
}
repo, err := h.dependencies.GetWriteCredentials(context.Background(), repoURL, project)
if err != nil {
return "", "", fmt.Errorf("failed to get hydrator credentials: %w", err)
return targetRevision, "", errors, fmt.Errorf("failed to get hydrator credentials: %w", err)
}
if repo == nil {
// Try without credentials.
@@ -367,11 +418,11 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
// get the commit message template
commitMessageTemplate, err := h.dependencies.GetHydratorCommitMessageTemplate()
if err != nil {
return "", "", fmt.Errorf("failed to get hydrated commit message template: %w", err)
return targetRevision, "", errors, fmt.Errorf("failed to get hydrated commit message template: %w", err)
}
commitMessage, errMsg := getTemplatedCommitMessage(repoURL, targetRevision, commitMessageTemplate, revisionMetadata)
if errMsg != nil {
return "", "", fmt.Errorf("failed to get hydrator commit templated message: %w", errMsg)
return targetRevision, "", errors, fmt.Errorf("failed to get hydrator commit templated message: %w", errMsg)
}
manifestsRequest := commitclient.CommitHydratedManifestsRequest{
@@ -386,14 +437,14 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
closer, commitService, err := h.commitClientset.NewCommitServerClient()
if err != nil {
return targetRevision, "", fmt.Errorf("failed to create commit service: %w", err)
return targetRevision, "", errors, fmt.Errorf("failed to create commit service: %w", err)
}
defer utilio.Close(closer)
resp, err := commitService.CommitHydratedManifests(context.Background(), &manifestsRequest)
if err != nil {
return targetRevision, "", fmt.Errorf("failed to commit hydrated manifests: %w", err)
return targetRevision, "", errors, fmt.Errorf("failed to commit hydrated manifests: %w", err)
}
return targetRevision, resp.HydratedSha, nil
return targetRevision, resp.HydratedSha, errors, nil
}
// getManifests gets the manifests for the given application and target revision. It returns the resolved revision
@@ -456,34 +507,27 @@ func (h *Hydrator) getRevisionMetadata(ctx context.Context, repoURL, project, re
}
// appNeedsHydration reports whether the application needs its manifests hydrated.
func appNeedsHydration(app *appv1.Application, statusHydrateTimeout time.Duration) (needsHydration bool, reason string) {
if app.Spec.SourceHydrator == nil {
return false, "source hydrator not configured"
}
var hydratedAt *metav1.Time
if app.Status.SourceHydrator.CurrentOperation != nil {
hydratedAt = &app.Status.SourceHydrator.CurrentOperation.StartedAt
}
func appNeedsHydration(app *appv1.Application) (needsHydration bool, reason string) {
switch {
case app.IsHydrateRequested():
return true, "hydrate requested"
case app.Spec.SourceHydrator == nil:
return false, "source hydrator not configured"
case app.Status.SourceHydrator.CurrentOperation == nil:
return true, "no previous hydrate operation"
case app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating:
return false, "hydration operation already in progress"
case app.IsHydrateRequested():
return true, "hydrate requested"
case !app.Spec.SourceHydrator.DeepEquals(app.Status.SourceHydrator.CurrentOperation.SourceHydrator):
return true, "spec.sourceHydrator differs"
case app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseFailed && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.FinishedAt.Time) > 2*time.Minute:
return true, "previous hydrate operation failed more than 2 minutes ago"
case hydratedAt == nil || hydratedAt.Add(statusHydrateTimeout).Before(time.Now().UTC()):
return true, "hydration expired"
}
return false, ""
return false, "hydration not needed"
}
// Gets the multi-line commit message based on the template defined in the configmap. It is a two step process:
// 1. Get the metadata template engine would use to render the template
// getTemplatedCommitMessage gets the multi-line commit message based on the template defined in the configmap. It is a two step process:
// 1. Get the metadata template engine would use to render the template
// 2. Pass the output of Step 1 to the template Render function
func getTemplatedCommitMessage(repoURL, revision, commitMessageTemplate string, dryCommitMetadata *appv1.RevisionMetadata) (string, error) {
hydratorCommitMetadata, err := hydrator.GetCommitMetadata(repoURL, revision, dryCommitMetadata)
@@ -497,6 +541,20 @@ func getTemplatedCommitMessage(repoURL, revision, commitMessageTemplate string,
return templatedCommitMsg, nil
}
// genericHydrationError returns an error that summarizes the hydration errors for all applications.
func genericHydrationError(validationErrors map[string]error) error {
if len(validationErrors) == 0 {
return nil
}
keys := slices.Sorted(maps.Keys(validationErrors))
remainder := "has an error"
if len(keys) > 1 {
remainder = fmt.Sprintf("and %d more have errors", len(keys)-1)
}
return fmt.Errorf("cannot hydrate because application %s %s", keys[0], remainder)
}
// IsRootPath returns whether the path references a root path
func IsRootPath(path string) bool {
clean := filepath.Clean(path)

File diff suppressed because it is too large

controller/hydrator/mocks/RepoGetter.go generated Normal file

@@ -0,0 +1,113 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
package mocks
import (
"context"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
mock "github.com/stretchr/testify/mock"
)
// NewRepoGetter creates a new instance of RepoGetter. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewRepoGetter(t interface {
mock.TestingT
Cleanup(func())
}) *RepoGetter {
mock := &RepoGetter{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
// RepoGetter is an autogenerated mock type for the RepoGetter type
type RepoGetter struct {
mock.Mock
}
type RepoGetter_Expecter struct {
mock *mock.Mock
}
func (_m *RepoGetter) EXPECT() *RepoGetter_Expecter {
return &RepoGetter_Expecter{mock: &_m.Mock}
}
// GetRepository provides a mock function for the type RepoGetter
func (_mock *RepoGetter) GetRepository(ctx context.Context, repoURL string, project string) (*v1alpha1.Repository, error) {
ret := _mock.Called(ctx, repoURL, project)
if len(ret) == 0 {
panic("no return value specified for GetRepository")
}
var r0 *v1alpha1.Repository
var r1 error
if returnFunc, ok := ret.Get(0).(func(context.Context, string, string) (*v1alpha1.Repository, error)); ok {
return returnFunc(ctx, repoURL, project)
}
if returnFunc, ok := ret.Get(0).(func(context.Context, string, string) *v1alpha1.Repository); ok {
r0 = returnFunc(ctx, repoURL, project)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*v1alpha1.Repository)
}
}
if returnFunc, ok := ret.Get(1).(func(context.Context, string, string) error); ok {
r1 = returnFunc(ctx, repoURL, project)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// RepoGetter_GetRepository_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetRepository'
type RepoGetter_GetRepository_Call struct {
*mock.Call
}
// GetRepository is a helper method to define mock.On call
// - ctx context.Context
// - repoURL string
// - project string
func (_e *RepoGetter_Expecter) GetRepository(ctx interface{}, repoURL interface{}, project interface{}) *RepoGetter_GetRepository_Call {
return &RepoGetter_GetRepository_Call{Call: _e.mock.On("GetRepository", ctx, repoURL, project)}
}
func (_c *RepoGetter_GetRepository_Call) Run(run func(ctx context.Context, repoURL string, project string)) *RepoGetter_GetRepository_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 string
if args[1] != nil {
arg1 = args[1].(string)
}
var arg2 string
if args[2] != nil {
arg2 = args[2].(string)
}
run(
arg0,
arg1,
arg2,
)
})
return _c
}
func (_c *RepoGetter_GetRepository_Call) Return(repository *v1alpha1.Repository, err error) *RepoGetter_GetRepository_Call {
_c.Call.Return(repository, err)
return _c
}
func (_c *RepoGetter_GetRepository_Call) RunAndReturn(run func(ctx context.Context, repoURL string, project string) (*v1alpha1.Repository, error)) *RepoGetter_GetRepository_Call {
_c.Call.Return(run)
return _c
}


@@ -292,6 +292,8 @@ data:
applicationsetcontroller.global.preserved.labels: "acme.com/label1,acme.com/label2"
# Enable GitHub API metrics for generators that use GitHub API
applicationsetcontroller.enable.github.api.metrics: "false"
# The maximum number of resources stored in the status of an ApplicationSet. This is a safeguard to prevent the status from growing too large.
applicationsetcontroller.status.max.resources.count: "5000"
## Argo CD Notifications Controller Properties
# Set the logging level. One of: debug|info|warn|error (default "info")


@@ -37,6 +37,7 @@ argocd-applicationset-controller [flags]
--kubeconfig string Path to a kube config. Only required if out-of-cluster
--logformat string Set the logging format. One of: json|text (default "json")
--loglevel string Set the logging level. One of: debug|info|warn|error (default "info")
--max-resources-status-count int Max number of resources stored in appset status.
--metrics-addr string The address the metric endpoint binds to. (default ":8080")
--metrics-applicationset-labels strings List of Application labels that will be added to the argocd_applicationset_labels metric
-n, --namespace string If present, the namespace scope for this CLI request


@@ -1,2 +1,5 @@
This page is populated for released Argo CD versions. Use the version selector to view this table for a specific
version.
| Argo CD version | Kubernetes versions |
|-----------------|---------------------|
| 3.2 | v1.33, v1.32, v1.31, v1.30 |
| 3.1 | v1.33, v1.32, v1.31, v1.30 |
| 3.0 | v1.32, v1.31, v1.30, v1.29 |


@@ -86,3 +86,9 @@ If you do not want your CronJob to affect the Application's aggregated Health, y
For security reasons ([GHSA-786q-9hcg-v9ff](https://github.com/argoproj/argo-cd/security/advisories/GHSA-786q-9hcg-v9ff)),
the project API response was sanitized to remove sensitive information. This includes
credentials of project-scoped repositories and clusters.
## ApplicationSet `resources` field of `status` resource is limited to 5000 elements by default
The `resources` field in the `status` of an ApplicationSet is now limited to 5000 elements by default. This
prevents status bloat and exceeding etcd size limits. The limit can be configured by setting the `applicationsetcontroller.status.max.resources.count`
field in the `argocd-cmd-params-cm` ConfigMap.
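Applying the setting described above would look roughly like this (the namespace and the raised value are illustrative; only the ConfigMap name and key come from the note):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Raise the default cap of 5000 resources recorded in ApplicationSet status.
  applicationsetcontroller.status.max.resources.count: "10000"
```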


@@ -195,7 +195,7 @@ git commit -m "Bump image to v1.2.3" \
```
!!!note Newlines are not allowed
The commit trailers must not contain newlines. The
The commit trailers must not contain newlines.
So the full CI script might look something like this:
@@ -297,6 +297,7 @@ data:
{{- if .metadata.author }}
Co-authored-by: {{ .metadata.author }}
{{- end }}
```
### Credential Templates

go.mod

@@ -12,7 +12,7 @@ require (
github.com/Masterminds/sprig/v3 v3.3.0
github.com/TomOnTime/utfutil v1.0.0
github.com/alicebob/miniredis/v2 v2.35.0
github.com/argoproj/gitops-engine v0.7.1-0.20250908182407-97ad5b59a627
github.com/argoproj/gitops-engine v0.7.1-0.20251006172252-b89b0871b414
github.com/argoproj/notifications-engine v0.4.1-0.20250908182349-da04400446ff
github.com/argoproj/pkg v0.13.6
github.com/argoproj/pkg/v2 v2.0.1
@@ -115,7 +115,7 @@ require (
layeh.com/gopher-json v0.0.0-20190114024228-97fed8db8427
oras.land/oras-go/v2 v2.6.0
sigs.k8s.io/controller-runtime v0.21.0
sigs.k8s.io/structured-merge-diff/v6 v6.3.0
sigs.k8s.io/structured-merge-diff/v6 v6.3.1-0.20251003215857-446d8398e19c
sigs.k8s.io/yaml v1.6.0
)

go.sum

@@ -113,8 +113,8 @@ github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFI
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/appscode/go v0.0.0-20191119085241-0887d8ec2ecc/go.mod h1:OawnOmAL4ZX3YaPdN+8HTNwBveT1jMsqP74moa9XUbE=
github.com/argoproj/gitops-engine v0.7.1-0.20250908182407-97ad5b59a627 h1:yntvA+uaFz62HRfWGGwlvs4ErdxoLQjCpDXufdEt2FI=
github.com/argoproj/gitops-engine v0.7.1-0.20250908182407-97ad5b59a627/go.mod h1:yJ3t/GRn9Gx2LEyMrh9X0roL7zzVlk3nvuJt6G1o6jI=
github.com/argoproj/gitops-engine v0.7.1-0.20251006172252-b89b0871b414 h1:2w1vd2VZja7Mlf/rblJkp6/Eq8fNDuM7p6pI4PTAJhg=
github.com/argoproj/gitops-engine v0.7.1-0.20251006172252-b89b0871b414/go.mod h1:2nqYZBhj8CfVZb3ATakZpi1KNb/yc7mpadIHslicTFI=
github.com/argoproj/notifications-engine v0.4.1-0.20250908182349-da04400446ff h1:pGGAeHIktPuYCRl1Z540XdxPFnedqyUhJK4VgpyJZfY=
github.com/argoproj/notifications-engine v0.4.1-0.20250908182349-da04400446ff/go.mod h1:d1RazGXWvKRFv9//rg4MRRR7rbvbE7XLgTSMT5fITTE=
github.com/argoproj/pkg v0.13.6 h1:36WPD9MNYECHcO1/R1pj6teYspiK7uMQLCgLGft2abM=
@@ -1461,8 +1461,9 @@ sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HR
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v6 v6.2.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/structured-merge-diff/v6 v6.3.1-0.20251003215857-446d8398e19c h1:RCkxmWwPjOw2O1RiDgBgI6tfISvB07jAh+GEztp7TWk=
sigs.k8s.io/structured-merge-diff/v6 v6.3.1-0.20251003215857-446d8398e19c/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=
sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=


@@ -187,6 +187,12 @@ spec:
name: argocd-cmd-params-cm
key: applicationsetcontroller.requeue.after
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
name: argocd-cmd-params-cm
key: applicationsetcontroller.status.max.resources.count
optional: true
volumeMounts:
- mountPath: /app/config/ssh
name: ssh-known-hosts


@@ -12,4 +12,4 @@ resources:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: latest
newTag: v3.2.0-rc4


@@ -5,7 +5,7 @@ kind: Kustomization
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: latest
newTag: v3.2.0-rc4
resources:
- ./application-controller
- ./dex


@@ -40,7 +40,7 @@ spec:
serviceAccountName: argocd-redis
containers:
- name: redis
image: redis:7.2.7-alpine
image: redis:8.2.2-alpine
imagePullPolicy: Always
args:
- "--save"


@@ -1485,8 +1485,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
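The clarified description distinguishes the branch manifests are synced from versus committed to; a hypothetical Application fragment under the `sourceHydrator` layout this CRD appears to describe (repo URL, paths, and branch names are illustrative, not taken from this diff):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example            # illustrative name
spec:
  sourceHydrator:
    drySource:
      repoURL: https://example.com/org/repo.git   # illustrative
      targetRevision: main
      path: apps/example
    syncSource:
      # Branch hydrated manifests are synced FROM (and, if hydrateTo
      # is unset, also committed TO).
      targetBranch: environments/prod
      path: apps/example
    hydrateTo:
      # When set, hydrated manifests are committed here instead.
      targetBranch: environments/prod-next
```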
@@ -4900,8 +4901,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4982,8 +4984,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -23737,6 +23740,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata
@@ -24838,7 +24844,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -24964,7 +24976,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25076,7 +25088,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -25092,7 +25104,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25383,7 +25395,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25435,7 +25447,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -25783,7 +25795,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -1485,8 +1485,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4900,8 +4901,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4982,8 +4984,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -23737,6 +23740,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata
@@ -24806,7 +24812,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -24910,7 +24922,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -24926,7 +24938,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25217,7 +25229,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25269,7 +25281,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -25617,7 +25629,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -12,4 +12,4 @@ resources:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: latest
newTag: v3.2.0-rc4


@@ -1484,8 +1484,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4899,8 +4900,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4981,8 +4983,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path


@@ -17823,6 +17823,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata


@@ -12,7 +12,7 @@ patches:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: latest
newTag: v3.2.0-rc4
resources:
- ../../base/application-controller
- ../../base/applicationset-controller


@@ -1,6 +1,6 @@
dependencies:
- name: redis-ha
repository: https://dandydeveloper.github.io/charts
version: 4.33.2
digest: sha256:dfb01cb345d8e0c3cf41294ca9b7eae8272083f3ed165cf58d35e492403cf0aa
generated: "2025-03-04T08:25:54.199096-08:00"
version: 4.34.11
digest: sha256:65651a2e28ac28852dd75e642e42ec06557d9bc14949c62d817dec315812346e
generated: "2025-09-16T11:02:38.114394+03:00"


@@ -1,4 +1,4 @@
dependencies:
- name: redis-ha
version: 4.33.2
version: 4.34.11
repository: https://dandydeveloper.github.io/charts


@@ -9,7 +9,7 @@ metadata:
labels:
heritage: Helm
release: argocd
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
app: argocd-redis-ha
secrets:
- name: argocd-redis
@@ -23,7 +23,7 @@ metadata:
labels:
heritage: Helm
release: argocd
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
app: argocd-redis-ha
---
# Source: redis-ha/charts/redis-ha/templates/redis-ha-configmap.yaml
@@ -35,7 +35,7 @@ metadata:
labels:
heritage: Helm
release: argocd
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
app: argocd-redis-ha
data:
redis.conf: |
@@ -606,12 +606,28 @@ data:
if [ "$MASTER" = "$ANNOUNCE_IP" ]; then
redis_role
if [ "$ROLE" != "master" ]; then
reinit
echo "waiting for redis to become master"
sleep 10
identify_master
redis_role
echo "Redis role is $ROLE, expected role is master. No need to reinitialize."
if [ "$ROLE" != "master" ]; then
echo "Redis role is $ROLE, expected role is master, reinitializing"
reinit
fi
fi
elif [ "${MASTER}" ]; then
identify_redis_master
if [ "$REDIS_MASTER" != "$MASTER" ]; then
reinit
echo "Redis master and local master are not the same. waiting."
sleep 10
identify_master
identify_redis_master
echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}. No need to reinitialize."
if [ "${REDIS_MASTER}" != "${MASTER}" ]; then
echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}, reinitializing"
reinit
fi
fi
fi
done
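The updated init script above now waits and rechecks before reinitializing, instead of calling `reinit` on the first mismatch. A runnable sketch of that recheck-before-reinit pattern with stubbed helpers (`identify_master`, `redis_role`, and `reinit` here are stand-ins, not the chart's real functions, which query redis-cli):

```shell
#!/bin/sh
# Stubs standing in for the chart's helpers (hypothetical).
ROLE="slave"
identify_master() { :; }                 # no-op stub
redis_role() { ROLE="$CONVERGED_ROLE"; } # stub: reports the post-wait role
reinit() { echo "reinitializing"; }

# Recheck-before-reinit: only reinitialize if the role is still wrong
# after waiting and re-identifying the master.
maybe_reinit() {
  CONVERGED_ROLE="$1"
  if [ "$ROLE" != "master" ]; then
    echo "waiting for redis to become master"
    # sleep 10  # elided so the sketch runs instantly
    identify_master
    redis_role
    if [ "$ROLE" != "master" ]; then
      echo "Redis role is $ROLE, expected role is master, reinitializing"
      reinit
    else
      echo "Redis role is $ROLE, expected role is master. No need to reinitialize."
    fi
  fi
}

maybe_reinit master  # role converges after the wait, so no reinit happens
```

The design point of the chart change: a transient follower role during failover no longer triggers a full resync; `reinit` only runs when the role is still wrong after the grace period.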
@@ -779,7 +795,7 @@ metadata:
labels:
heritage: Helm
release: argocd
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
app: argocd-redis-ha
data:
redis_liveness.sh: |
@@ -855,7 +871,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
rules:
- apiGroups:
- ""
@@ -874,8 +890,8 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
component: argocd-redis-ha-haproxy
chart: redis-ha-4.34.11
component: haproxy
rules:
- apiGroups:
- ""
@@ -894,7 +910,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
subjects:
- kind: ServiceAccount
name: argocd-redis-ha
@@ -913,8 +929,8 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
component: argocd-redis-ha-haproxy
chart: redis-ha-4.34.11
component: haproxy
subjects:
- kind: ServiceAccount
name: argocd-redis-ha-haproxy
@@ -933,7 +949,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
annotations:
spec:
publishNotReadyAddresses: true
@@ -962,7 +978,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
annotations:
spec:
publishNotReadyAddresses: true
@@ -991,7 +1007,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
annotations:
spec:
publishNotReadyAddresses: true
@@ -1020,7 +1036,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
annotations:
spec:
type: ClusterIP
@@ -1048,8 +1064,8 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
component: argocd-redis-ha-haproxy
chart: redis-ha-4.34.11
component: haproxy
annotations:
spec:
type: ClusterIP
@@ -1076,7 +1092,8 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
component: haproxy
spec:
strategy:
type: RollingUpdate
@@ -1086,12 +1103,14 @@ spec:
matchLabels:
app: redis-ha-haproxy
release: argocd
component: haproxy
template:
metadata:
name: argocd-redis-ha-haproxy
labels:
app: redis-ha-haproxy
release: argocd
component: haproxy
annotations:
prometheus.io/port: "9101"
prometheus.io/scrape: "true"
@@ -1109,7 +1128,7 @@ spec:
nodeSelector:
{}
tolerations:
null
[]
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
@@ -1117,6 +1136,7 @@ spec:
matchLabels:
app: redis-ha-haproxy
release: argocd
component: haproxy
topologyKey: kubernetes.io/hostname
initContainers:
- name: config-init
@@ -1210,7 +1230,7 @@ metadata:
app: redis-ha
heritage: "Helm"
release: "argocd"
chart: redis-ha-4.33.2
chart: redis-ha-4.34.11
annotations:
{}
spec:
@@ -1226,7 +1246,7 @@ spec:
template:
metadata:
annotations:
checksum/init-config: bd30e83dfdad9912b6c1cc32a8c26d7d01429a0730f5ee7af380fb593e874d54
checksum/init-config: fd74f7d84e39b3f6eac1d7ce5deb0083e58f218376faf363343d91a0fb4f2563
labels:
release: argocd
app: redis-ha
@@ -1250,7 +1270,7 @@ spec:
automountServiceAccountToken: false
initContainers:
- name: config-init
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
resources:
{}
@@ -1290,7 +1310,7 @@ spec:
containers:
- name: redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
command:
- redis-server
@@ -1334,11 +1354,11 @@ spec:
- -c
- /health/redis_readiness.sh
startupProbe:
initialDelaySeconds: 5
periodSeconds: 10
initialDelaySeconds: 30
periodSeconds: 15
timeoutSeconds: 15
successThreshold: 1
failureThreshold: 3
failureThreshold: 5
exec:
command:
- sh
@@ -1364,7 +1384,7 @@ spec:
- /bin/sh
- /readonly-config/trigger-failover-if-master.sh
- name: sentinel
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
command:
- redis-sentinel
@@ -1437,7 +1457,7 @@ spec:
- sleep 30; redis-cli -p 26379 sentinel reset argocd
- name: split-brain-fix
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
command:
- sh


@@ -27,7 +27,7 @@ redis-ha:
serviceAccount:
automountToken: true
image:
tag: 7.2.7-alpine
tag: 8.2.2-alpine
sentinel:
bind: '0.0.0.0'
lifecycle:


@@ -1485,8 +1485,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4900,8 +4901,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4982,8 +4984,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -23737,6 +23740,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata
@@ -25203,12 +25209,28 @@ data:
if [ "$MASTER" = "$ANNOUNCE_IP" ]; then
redis_role
if [ "$ROLE" != "master" ]; then
reinit
echo "waiting for redis to become master"
sleep 10
identify_master
redis_role
echo "Redis role is $ROLE, expected role is master. No need to reinitialize."
if [ "$ROLE" != "master" ]; then
echo "Redis role is $ROLE, expected role is master, reinitializing"
reinit
fi
fi
elif [ "${MASTER}" ]; then
identify_redis_master
if [ "$REDIS_MASTER" != "$MASTER" ]; then
reinit
echo "Redis master and local master are not the same. waiting."
sleep 10
identify_master
identify_redis_master
echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}. No need to reinitialize."
if [ "${REDIS_MASTER}" != "${MASTER}" ]; then
echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}, reinitializing"
reinit
fi
fi
fi
done
@@ -26188,7 +26210,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -26314,7 +26342,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26465,7 +26493,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -26561,7 +26589,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -26685,7 +26713,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -27002,7 +27030,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -27054,7 +27082,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -27428,7 +27456,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -27812,7 +27840,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -27887,7 +27915,7 @@ spec:
template:
metadata:
annotations:
checksum/init-config: bd30e83dfdad9912b6c1cc32a8c26d7d01429a0730f5ee7af380fb593e874d54
checksum/init-config: fd74f7d84e39b3f6eac1d7ce5deb0083e58f218376faf363343d91a0fb4f2563
labels:
app.kubernetes.io/name: argocd-redis-ha
spec:
@@ -27910,7 +27938,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -27958,9 +27986,9 @@ spec:
- sh
- -c
- /health/redis_readiness.sh
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 5
initialDelaySeconds: 30
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 15
volumeMounts:
@@ -27981,7 +28009,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -28056,7 +28084,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -28091,7 +28119,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -1485,8 +1485,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4900,8 +4901,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4982,8 +4984,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -23737,6 +23740,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata
@@ -25194,12 +25200,28 @@ data:
if [ "$MASTER" = "$ANNOUNCE_IP" ]; then
redis_role
if [ "$ROLE" != "master" ]; then
reinit
echo "waiting for redis to become master"
sleep 10
identify_master
redis_role
echo "Redis role is $ROLE, expected role is master. No need to reinitialize."
if [ "$ROLE" != "master" ]; then
echo "Redis role is $ROLE, expected role is master, reinitializing"
reinit
fi
fi
elif [ "${MASTER}" ]; then
identify_redis_master
if [ "$REDIS_MASTER" != "$MASTER" ]; then
reinit
echo "Redis master and local master are not the same. waiting."
sleep 10
identify_master
identify_redis_master
echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}. No need to reinitialize."
if [ "${REDIS_MASTER}" != "${MASTER}" ]; then
echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}, reinitializing"
reinit
fi
fi
fi
done
@@ -26158,7 +26180,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -26301,7 +26329,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -26397,7 +26425,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -26521,7 +26549,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -26838,7 +26866,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26890,7 +26918,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -27264,7 +27292,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -27648,7 +27676,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -27723,7 +27751,7 @@ spec:
template:
metadata:
annotations:
checksum/init-config: bd30e83dfdad9912b6c1cc32a8c26d7d01429a0730f5ee7af380fb593e874d54
checksum/init-config: fd74f7d84e39b3f6eac1d7ce5deb0083e58f218376faf363343d91a0fb4f2563
labels:
app.kubernetes.io/name: argocd-redis-ha
spec:
@@ -27746,7 +27774,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -27794,9 +27822,9 @@ spec:
- sh
- -c
- /health/redis_readiness.sh
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 5
initialDelaySeconds: 30
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 15
volumeMounts:
@@ -27817,7 +27845,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -27892,7 +27920,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -27927,7 +27955,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -890,12 +890,28 @@ data:
if [ "$MASTER" = "$ANNOUNCE_IP" ]; then
redis_role
if [ "$ROLE" != "master" ]; then
reinit
echo "waiting for redis to become master"
sleep 10
identify_master
redis_role
echo "Redis role is $ROLE, expected role is master. No need to reinitialize."
if [ "$ROLE" != "master" ]; then
echo "Redis role is $ROLE, expected role is master, reinitializing"
reinit
fi
fi
elif [ "${MASTER}" ]; then
identify_redis_master
if [ "$REDIS_MASTER" != "$MASTER" ]; then
reinit
echo "Redis master and local master are not the same. waiting."
sleep 10
identify_master
identify_redis_master
echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}. No need to reinitialize."
if [ "${REDIS_MASTER}" != "${MASTER}" ]; then
echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}, reinitializing"
reinit
fi
fi
fi
done
@@ -1875,7 +1891,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -2001,7 +2023,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2152,7 +2174,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -2248,7 +2270,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -2372,7 +2394,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -2689,7 +2711,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2741,7 +2763,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -3115,7 +3137,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -3499,7 +3521,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -3574,7 +3596,7 @@ spec:
template:
metadata:
annotations:
checksum/init-config: bd30e83dfdad9912b6c1cc32a8c26d7d01429a0730f5ee7af380fb593e874d54
checksum/init-config: fd74f7d84e39b3f6eac1d7ce5deb0083e58f218376faf363343d91a0fb4f2563
labels:
app.kubernetes.io/name: argocd-redis-ha
spec:
@@ -3597,7 +3619,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -3645,9 +3667,9 @@ spec:
- sh
- -c
- /health/redis_readiness.sh
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 5
initialDelaySeconds: 30
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 15
volumeMounts:
@@ -3668,7 +3690,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -3743,7 +3765,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -3778,7 +3800,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -881,12 +881,28 @@ data:
if [ "$MASTER" = "$ANNOUNCE_IP" ]; then
redis_role
if [ "$ROLE" != "master" ]; then
reinit
echo "waiting for redis to become master"
sleep 10
identify_master
redis_role
echo "Redis role is $ROLE, expected role is master. No need to reinitialize."
if [ "$ROLE" != "master" ]; then
echo "Redis role is $ROLE, expected role is master, reinitializing"
reinit
fi
fi
elif [ "${MASTER}" ]; then
identify_redis_master
if [ "$REDIS_MASTER" != "$MASTER" ]; then
reinit
echo "Redis master and local master are not the same. waiting."
sleep 10
identify_master
identify_redis_master
echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}. No need to reinitialize."
if [ "${REDIS_MASTER}" != "${MASTER}" ]; then
echo "Redis master is ${MASTER}, expected master is ${REDIS_MASTER}, reinitializing"
reinit
fi
fi
fi
done
@@ -1845,7 +1861,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1988,7 +2010,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -2084,7 +2106,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -2208,7 +2230,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -2525,7 +2547,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2577,7 +2599,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2951,7 +2973,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -3335,7 +3357,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -3410,7 +3432,7 @@ spec:
template:
metadata:
annotations:
checksum/init-config: bd30e83dfdad9912b6c1cc32a8c26d7d01429a0730f5ee7af380fb593e874d54
checksum/init-config: fd74f7d84e39b3f6eac1d7ce5deb0083e58f218376faf363343d91a0fb4f2563
labels:
app.kubernetes.io/name: argocd-redis-ha
spec:
@@ -3433,7 +3455,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -3481,9 +3503,9 @@ spec:
- sh
- -c
- /health/redis_readiness.sh
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 5
initialDelaySeconds: 30
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 15
volumeMounts:
@@ -3504,7 +3526,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -3579,7 +3601,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -3614,7 +3636,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -1485,8 +1485,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4900,8 +4901,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4982,8 +4984,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -23737,6 +23740,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata
@@ -25282,7 +25288,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -25408,7 +25420,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25559,7 +25571,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -25655,7 +25667,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -25741,7 +25753,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -25757,7 +25769,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -26048,7 +26060,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26100,7 +26112,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -26472,7 +26484,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -26856,7 +26868,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-application-controller
ports:

manifests/install.yaml generated

@@ -1485,8 +1485,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4900,8 +4901,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -4982,8 +4984,9 @@ spec:
pattern: ^.{2,}|[^./]$
type: string
targetBranch:
description: TargetBranch is the branch to which hydrated
manifests should be committed
description: |-
TargetBranch is the branch from which hydrated manifests will be synced.
If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
type: string
required:
- path
@@ -23737,6 +23740,9 @@ spec:
type: string
type: object
type: array
resourcesCount:
format: int64
type: integer
type: object
required:
- metadata
@@ -25250,7 +25256,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -25393,7 +25405,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -25489,7 +25501,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -25575,7 +25587,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -25591,7 +25603,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25882,7 +25894,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25934,7 +25946,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -26306,7 +26318,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -26690,7 +26702,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -969,7 +969,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1095,7 +1101,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1246,7 +1252,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1342,7 +1348,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1428,7 +1434,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -1444,7 +1450,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -1735,7 +1741,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1787,7 +1793,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2159,7 +2165,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2543,7 +2549,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -937,7 +937,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1080,7 +1086,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1176,7 +1182,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1262,7 +1268,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:8.2.2-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -1278,7 +1284,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -1569,7 +1575,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1621,7 +1627,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -1993,7 +1999,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2377,7 +2383,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v3.2.0-rc4
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -805,6 +805,9 @@ type ApplicationSetStatus struct {
ApplicationStatus []ApplicationSetApplicationStatus `json:"applicationStatus,omitempty" protobuf:"bytes,2,name=applicationStatus"`
// Resources is a list of Applications resources managed by this application set.
Resources []ResourceStatus `json:"resources,omitempty" protobuf:"bytes,3,opt,name=resources"`
// ResourcesCount is the total number of resources managed by this application set. The count may be higher than the actual number of items in the Resources field when
// the number of managed resources exceeds the limit imposed by the controller (to avoid making the status field too large).
ResourcesCount int64 `json:"resourcesCount,omitempty" protobuf:"varint,4,opt,name=resourcesCount"`
}
// ApplicationSetCondition contains details about an applicationset condition, which is usually an error or warning

File diff suppressed because it is too large


@@ -369,6 +369,10 @@ message ApplicationSetStatus {
// Resources is a list of Applications resources managed by this application set.
repeated ResourceStatus resources = 3;
// ResourcesCount is the total number of resources managed by this application set. The count may be higher than the actual number of items in the Resources field when
// the number of managed resources exceeds the limit imposed by the controller (to avoid making the status field too large).
optional int64 resourcesCount = 4;
}
// ApplicationSetStrategy configures how generated Applications are updated in sequence.
@@ -2635,7 +2639,8 @@ message SyncPolicyAutomated {
// SyncSource specifies a location from which hydrated manifests may be synced. RepoURL is assumed based on the
// associated DrySource config in the SourceHydrator.
message SyncSource {
// TargetBranch is the branch to which hydrated manifests should be committed
// TargetBranch is the branch from which hydrated manifests will be synced.
// If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
optional string targetBranch = 1;
// Path is a directory path within the git repository where hydrated manifests should be committed to and synced


@@ -443,7 +443,8 @@ type DrySource struct {
// SyncSource specifies a location from which hydrated manifests may be synced. RepoURL is assumed based on the
// associated DrySource config in the SourceHydrator.
type SyncSource struct {
// TargetBranch is the branch to which hydrated manifests should be committed
// TargetBranch is the branch from which hydrated manifests will be synced.
// If HydrateTo is not set, this is also the branch to which hydrated manifests are committed.
TargetBranch string `json:"targetBranch" protobuf:"bytes,1,name=targetBranch"`
// Path is a directory path within the git repository where hydrated manifests should be committed to and synced
// from. The Path should never point to the root of the repo. If hydrateTo is set, this is just the path from which


@@ -1,4 +1,4 @@
-- Health check copied from here: https://github.com/crossplane/docs/blob/bd701357e9d5eecf529a0b42f23a78850a6d1d87/content/master/guides/crossplane-with-argo-cd.md
-- Health check copied from here: https://github.com/crossplane/docs/blob/709889c5dbe6e5a2ea3dffd66fe276cf465b47b5/content/master/guides/crossplane-with-argo-cd.md
health_status = {
status = "Progressing",
@@ -18,9 +18,10 @@ local has_no_status = {
"Composition",
"CompositionRevision",
"DeploymentRuntimeConfig",
"ControllerConfig",
"ClusterProviderConfig",
"ProviderConfig",
"ProviderConfigUsage"
"ProviderConfigUsage",
"ControllerConfig" -- Added to ensure that healthcheck is backwards-compatible with Crossplane v1
}
if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.kind) then
health_status.status = "Healthy"
@@ -29,7 +30,7 @@ if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.
end
if obj.status == nil or next(obj.status) == nil or obj.status.conditions == nil then
if obj.kind == "ProviderConfig" and obj.status.users ~= nil then
if (obj.kind == "ProviderConfig" or obj.kind == "ClusterProviderConfig") and obj.status.users ~= nil then
health_status.status = "Healthy"
health_status.message = "Resource is in use."
return health_status
@@ -54,7 +55,7 @@ for i, condition in ipairs(obj.status.conditions) do
end
end
if contains({"Ready", "Healthy", "Offered", "Established"}, condition.type) then
if contains({"Ready", "Healthy", "Offered", "Established", "ValidPipeline", "RevisionHealthy"}, condition.type) then
if condition.status == "True" then
health_status.status = "Healthy"
health_status.message = "Resource is up-to-date."


@@ -3,3 +3,7 @@ tests:
status: Healthy
message: "Resource is up-to-date."
inputPath: testdata/composition_healthy.yaml
- healthStatus:
status: Healthy
message: "Resource is up-to-date."
inputPath: testdata/configurationrevision_healthy.yaml


@@ -0,0 +1,22 @@
apiVersion: pkg.crossplane.io/v1
kind: ConfigurationRevision
metadata:
annotations:
meta.crossplane.io/license: Apache-2.0
meta.crossplane.io/maintainer: Upbound <support@upbound.io>
meta.crossplane.io/source: github.com/upbound/configuration-getting-started
name: upbound-configuration-getting-started-869bca254eb1
spec:
desiredState: Active
ignoreCrossplaneConstraints: false
image: xpkg.upbound.io/upbound/configuration-getting-started:v0.3.0
packagePullPolicy: IfNotPresent
revision: 1
skipDependencyResolution: false
status:
conditions:
- lastTransitionTime: "2025-09-29T18:06:40Z"
observedGeneration: 1
reason: HealthyPackageRevision
status: "True"
type: RevisionHealthy


@@ -1,4 +1,4 @@
-- Health check copied from here: https://github.com/crossplane/docs/blob/bd701357e9d5eecf529a0b42f23a78850a6d1d87/content/master/guides/crossplane-with-argo-cd.md
-- Health check copied from here: https://github.com/crossplane/docs/blob/709889c5dbe6e5a2ea3dffd66fe276cf465b47b5/content/master/guides/crossplane-with-argo-cd.md
health_status = {
status = "Progressing",
@@ -15,6 +15,7 @@ local function contains (table, val)
end
local has_no_status = {
"ClusterProviderConfig",
"ProviderConfig",
"ProviderConfigUsage"
}
@@ -26,7 +27,7 @@ if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.
end
if obj.status == nil or next(obj.status) == nil or obj.status.conditions == nil then
if obj.kind == "ProviderConfig" and obj.status.users ~= nil then
if (obj.kind == "ProviderConfig" or obj.kind == "ClusterProviderConfig") and obj.status.users ~= nil then
health_status.status = "Healthy"
health_status.message = "Resource is in use."
return health_status


@@ -7,3 +7,6 @@ discoveryTests:
- inputPath: testdata/external-secret.yaml
result:
- name: "refresh"
- inputPath: testdata/external-secret-refresh-policy.yaml
result:
- name: "refresh"


@@ -3,10 +3,11 @@ local actions = {}
local disable_refresh = false
local time_units = {"ns", "us", "µs", "ms", "s", "m", "h"}
local digits = obj.spec.refreshInterval
local policy = obj.spec.refreshPolicy
if digits ~= nil then
digits = tostring(digits)
for _, time_unit in ipairs(time_units) do
if digits == "0" or digits == "0" .. time_unit then
if (digits == "0" or digits == "0" .. time_unit) and policy ~= "OnChange" then
disable_refresh = true
break
end


@@ -0,0 +1,55 @@
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
creationTimestamp: '2021-11-16T21:59:33Z'
generation: 1
name: test-healthy
namespace: argocd
resourceVersion: '136487331'
selfLink: /apis/external-secrets.io/v1alpha1/namespaces/argocd/externalsecrets/test-healthy
uid: 1e754a7e-0781-4d57-932d-4651d5b19586
spec:
data:
- remoteRef:
key: secret/sa/example
property: api.address
secretKey: url
- remoteRef:
key: secret/sa/example
property: ca.crt
secretKey: ca
- remoteRef:
key: secret/sa/example
property: token
secretKey: token
refreshInterval: 0
refreshPolicy: OnChange
secretStoreRef:
kind: SecretStore
name: example
target:
creationPolicy: Owner
template:
data:
config: |
{
"bearerToken": "{{ .token | base64decode | toString }}",
"tlsClientConfig": {
"insecure": false,
"caData": "{{ .ca | toString }}"
}
}
name: cluster-test
server: '{{ .url | toString }}'
metadata:
labels:
argocd.argoproj.io/secret-type: cluster
status:
conditions:
- lastTransitionTime: '2021-11-16T21:59:34Z'
message: Secret was synced
reason: SecretSynced
status: 'True'
type: Ready
refreshTime: '2021-11-29T18:32:24Z'
syncedResourceVersion: 1-519a61da0dc68b2575b4f8efada70e42


@@ -25,9 +25,17 @@ if obj.status.conditions then
hs.message = "Waiting for Argo CD commit status spec update to be observed"
return hs
end
if condition.status == "False" and condition.reason == "ReconciliationError" then
-- Check for any False condition status
if condition.status == "False" then
hs.status = "Degraded"
hs.message = "Argo CD commit status reconciliation failed: " .. (condition.message or "Unknown error")
local msg = condition.message or "Unknown error"
local reason = condition.reason or "Unknown"
-- Don't include ReconciliationError in the message since it's redundant
if reason == "ReconciliationError" then
hs.message = "Argo CD commit status reconciliation failed: " .. msg
else
hs.message = "Argo CD commit status reconciliation failed (" .. reason .. "): " .. msg
end
return hs
end
end


@@ -15,6 +15,10 @@ tests:
status: Degraded
message: "Argo CD commit status reconciliation failed: Something went wrong"
inputPath: testdata/reconcile-error.yaml
- healthStatus:
status: Degraded
message: "Argo CD commit status reconciliation failed: Failed to query Argo CD applications: connection timeout"
inputPath: testdata/non-reconciliation-error.yaml
- healthStatus:
status: Progressing
message: Argo CD commit status is not ready yet


@@ -0,0 +1,21 @@
apiVersion: promoter.argoproj.io/v1alpha1
kind: ArgoCDCommitStatus
metadata:
name: test-commit-status
namespace: test
generation: 1
spec:
applicationSelector:
matchLabels:
environment: production
promotionStrategyRef:
name: test
status:
conditions:
- lastTransitionTime: '2025-10-15T16:00:00Z'
message: 'Failed to query Argo CD applications: connection timeout'
observedGeneration: 1
reason: ReconciliationError
status: 'False'
type: Ready


@@ -26,9 +26,17 @@ if obj.status.conditions then
hs.message = "Waiting for change transfer policy spec update to be observed"
return hs
end
if condition.status == "False" and condition.reason == "ReconciliationError" then
-- Check for any False condition status
if condition.status == "False" then
hs.status = "Degraded"
hs.message = "Change transfer policy reconciliation failed: " .. (condition.message or "Unknown error")
local msg = condition.message or "Unknown error"
local reason = condition.reason or "Unknown"
-- Don't include ReconciliationError in the message since it's redundant
if reason == "ReconciliationError" then
hs.message = "Change transfer policy reconciliation failed: " .. msg
else
hs.message = "Change transfer policy reconciliation failed (" .. reason .. "): " .. msg
end
return hs
end
end


@@ -39,3 +39,7 @@ tests:
status: Healthy
message: "Environment is up-to-date, but there are non-successful active commit statuses: 1 pending, 0 successful, 0 failed. Pending commit statuses: argocd-health. "
inputPath: testdata/non-successful-environments.yaml
- healthStatus:
status: Degraded
message: "Change transfer policy reconciliation failed (PullRequestNotReady): PullRequest \"deployment-environments-qal-usw2-next-environments-qal-usw2-7a8e7b70\" is not Ready because \"ReconciliationError\": Reconciliation failed: failed to merge pull request: failed to merge pull request: failed to merge pull request: PUT https://github.example.com/api/v3/repos/org/deployment/pulls/3/merge: 405 At least 2 approving reviews are required by reviewers with write access. Required status check \"continuous-integration/jenkins/pr-head\" is expected. You're not authorized to push to this branch."
inputPath: testdata/missing-sha-and-not-ready.yaml


@@ -0,0 +1,87 @@
apiVersion: promoter.argoproj.io/v1alpha1
kind: ChangeTransferPolicy
metadata:
annotations:
promoter.argoproj.io/reconcile-at: '2025-10-15T17:11:51.672763364Z'
creationTimestamp: '2025-10-15T16:26:17Z'
generation: 1
labels:
promoter.argoproj.io/environment: environments-qal-usw2
promoter.argoproj.io/promotion-strategy: strategy
name: strategy-environments-qal-usw2-27894e05
namespace: test
ownerReferences:
- apiVersion: promoter.argoproj.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: PromotionStrategy
name: strategy
uid: 146aaaba-72d5-4155-bf5b-b5fd03b8a6d0
resourceVersion: '116412858'
uid: 62e1ed12-d808-4c78-9217-f80822f930ae
spec:
activeBranch: environments/qal-usw2
activeCommitStatuses:
- key: argocd-health
autoMerge: true
gitRepositoryRef:
name: repo
proposedBranch: environments/qal-usw2-next
status:
active:
commitStatuses:
- key: argocd-health
phase: pending
dry: {}
hydrated:
author: ServiceAccount
commitTime: '2025-10-15T15:38:20Z'
sha: b060fc7f5079a5aa7364a3844cfe26ee3084e282
subject: '[Changed] - infrastructure deployment'
conditions:
- lastTransitionTime: '2025-10-15T17:34:14Z'
message: 'PullRequest "deployment-environments-qal-usw2-next-environments-qal-usw2-7a8e7b70" is not Ready because "ReconciliationError": Reconciliation failed: failed to merge pull request: failed to merge pull request: failed to merge pull request: PUT https://github.example.com/api/v3/repos/org/deployment/pulls/3/merge: 405 At least 2 approving reviews are required by reviewers with write access. Required status check "continuous-integration/jenkins/pr-head" is expected. You''re not authorized to push to this branch.'
observedGeneration: 1
reason: PullRequestNotReady
status: 'False'
type: Ready
history:
- active:
dry: {}
hydrated:
author: ServiceAccount
commitTime: '2025-10-15T15:38:20Z'
sha: b060fc7f5079a5aa7364a3844cfe26ee3084e282
subject: '[Changed] - infrastructure deployment'
proposed:
hydrated: {}
pullRequest: {}
- active:
dry: {}
hydrated:
author: ServiceAccount
commitTime: '2025-10-15T15:15:27Z'
sha: bb607f8854d83de1724bcc5515f685adf9d8a704
subject: Initial commit
proposed:
hydrated: {}
pullRequest: {}
proposed:
dry:
author: user <user@example.com>
commitTime: '2025-10-15T16:24:35Z'
repoURL: https://github.example.com/org/deployment.git
sha: ac7866685e376f32aaf09dced3cf2d39dc1ba50e
subject: Create promotion-strategy.yaml
hydrated:
author: Argo CD
body: 'Co-authored-by: user <user@example.com>'
commitTime: '2025-10-15T16:26:18Z'
sha: 2e8097c049a97c05e6f787f46aabfe3cc2a8af21
subject: 'ac78666: Create promotion-strategy.yaml'
pullRequest:
id: '3'
prCreationTime: '2025-10-15T17:11:53Z'
state: open
url: https://github.example.com/org/deployment/pull/3


@@ -25,9 +25,17 @@ if obj.status.conditions then
hs.message = "Waiting for commit status spec update to be observed"
return hs
end
if condition.status == "False" and condition.reason == "ReconciliationError" then
-- Check for any False condition status
if condition.status == "False" then
hs.status = "Degraded"
hs.message = "Commit status reconciliation failed: " .. (condition.message or "Unknown error")
local msg = condition.message or "Unknown error"
local reason = condition.reason or "Unknown"
-- Don't include ReconciliationError in the message since it's redundant
if reason == "ReconciliationError" then
hs.message = "Commit status reconciliation failed: " .. msg
else
hs.message = "Commit status reconciliation failed (" .. reason .. "): " .. msg
end
return hs
end
end


@@ -15,6 +15,10 @@ tests:
status: Degraded
message: "Commit status reconciliation failed: Something went wrong"
inputPath: testdata/reconcile-error.yaml
- healthStatus:
status: Degraded
message: "Commit status reconciliation failed: Failed to update commit status: API request failed"
inputPath: testdata/non-reconciliation-error.yaml
- healthStatus:
status: Progressing
message: Commit status is not ready yet


@@ -0,0 +1,25 @@
apiVersion: promoter.argoproj.io/v1alpha1
kind: CommitStatus
metadata:
  name: test-commit-status
  namespace: test
  generation: 1
spec:
  sha: abc1234567890
  url: https://example.com/ci/test
  gitRepositoryRef:
    name: repo
  description: example
  name: example
  phase: success
status:
  conditions:
  - lastTransitionTime: '2025-10-15T16:00:00Z'
    message: 'Failed to update commit status: API request failed'
    observedGeneration: 1
    reason: ReconciliationError
    status: 'False'
    type: Ready
  id: "1"
  sha: abc1234567890


@@ -26,9 +26,17 @@ if obj.status.conditions then
hs.message = "Waiting for promotion strategy spec update to be observed"
return hs
end
-    if condition.status == "False" and condition.reason == "ReconciliationError" then
+    -- Check for any False condition status
+    if condition.status == "False" then
      hs.status = "Degraded"
-      hs.message = "Promotion strategy reconciliation failed: " .. (condition.message or "Unknown error")
+      local msg = condition.message or "Unknown error"
+      local reason = condition.reason or "Unknown"
+      -- Don't include ReconciliationError in the message since it's redundant
+      if reason == "ReconciliationError" then
+        hs.message = "Promotion strategy reconciliation failed: " .. msg
+      else
+        hs.message = "Promotion strategy reconciliation failed (" .. reason .. "): " .. msg
+      end
return hs
end
end
@@ -68,7 +76,7 @@ local proposedSha = obj.status.environments[1].proposed.dry.sha -- Don't panic,
for _, env in ipairs(obj.status.environments) do
  if env.proposed.dry.sha ~= proposedSha then
    hs.status = "Progressing"
-    hs.status = "Not all environments have the same proposed commit SHA. This likely means the hydrator has not run for all environments yet."
+    hs.message = "Not all environments have the same proposed commit SHA. This likely means the hydrator has not run for all environments yet."
    return hs
  end
end
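The second hunk here fixes a bug where the explanatory text was assigned to `hs.status` instead of `hs.message`. The surrounding loop checks that every environment proposes the same dry SHA before the strategy is considered consistent. That check can be sketched in Python (an illustrative mirror, not the shipped Lua):

```python
def all_environments_on_same_proposed_sha(environments: list[dict]) -> bool:
    """Mirror of the Lua proposed-SHA consistency loop above (illustrative)."""
    # Compare every environment's proposed dry SHA against the first one.
    proposed_sha = environments[0]["proposed"]["dry"]["sha"]
    return all(env["proposed"]["dry"]["sha"] == proposed_sha for env in environments)
```

When this returns false, the health check reports `Progressing` because the hydrator likely has not run for all environments yet.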


@@ -39,3 +39,11 @@ tests:
    status: Healthy
    message: All environments are up-to-date on commit 'abc1234'.
  inputPath: testdata/all-healthy.yaml
+- healthStatus:
+    status: Progressing
+    message: Not all environments have the same proposed commit SHA. This likely means the hydrator has not run for all environments yet.
+  inputPath: testdata/different-proposed-commits.yaml
+- healthStatus:
+    status: Degraded
+    message: "Promotion strategy reconciliation failed (ChangeTransferPolicyNotReady): ChangeTransferPolicy \"strategy-environments-qal-usw2-27894e05\" is not Ready because \"ReconciliationError\": Reconciliation failed: failed to calculate ChangeTransferPolicy status: failed to get SHAs for proposed branch \"environments/qal-usw2-next\": exit status 128: fatal: 'origin/environments/qal-usw2-next' is not a commit and a branch 'environments/qal-usw2-next' cannot be created from it"
+  inputPath: testdata/missing-sha-and-not-ready.yaml


@@ -0,0 +1,412 @@
apiVersion: promoter.argoproj.io/v1alpha1
kind: PromotionStrategy
metadata:
name: promotion-strategy
namespace: gitops-promoter
spec:
activeCommitStatuses:
- key: argocd-health
environments:
- autoMerge: false
branch: wave/0/west
- autoMerge: false
branch: wave/1/west
- autoMerge: false
branch: wave/2/west
- autoMerge: false
branch: wave/3/west
- autoMerge: false
branch: wave/4/west
- autoMerge: false
branch: wave/5/west
gitRepositoryRef:
name: cdp-deployments
status:
conditions:
- lastTransitionTime: '2025-08-02T06:43:44Z'
message: Reconciliation succeeded
observedGeneration: 6
reason: ReconciliationSuccess
status: 'True'
type: Ready
environments:
- active:
commitStatuses:
- key: argocd-health
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:14:21Z'
sha: 6b1095752ea7bef9c5c4251cb0722ce33dfd68d1
subject: >-
This is a no-op commit merging from wave/0/west-next into
wave/0/west
branch: wave/0/west
history:
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-23T17:33:12Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 28f351384c8b8e58624726c4c3713979d0143775
subject: 'chore: fake old commit message'
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-23T17:36:40Z'
sha: 3ebd45b702bc23619c8ae6724e81955afc644555
subject: Promote 28f35 to `wave/0/west` (#2929)
proposed:
hydrated: {}
pullRequest: {}
- active:
dry:
author: asingh51 <fake@example.com>
commitTime: '2025-09-22T21:04:40Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 9dc9529ddba077fcdf8cffef716b7d20e1e8b844
subject: >-
Onboarding new cluster to argocd-genai on wave 5 [ARGO-2499]
(#2921)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T21:56:33Z'
sha: 2fefa5165e46988666cbb1d2d4f39e21871fa671
subject: Promote 9dc95 to `wave/0/west` (#2925)
proposed:
hydrated: {}
pullRequest: {}
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-22T19:29:53Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 7f91282cc8146644d0721efde1b9069ccbd475a1
subject: >-
chore: temporarily disable server-side diff on argo-argo
(ARGO-2528) (#2917)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T19:32:09Z'
sha: 6c6493caa0d5b7211bb09c93a7047c90db393cd6
subject: Promote 7f912 to `wave/0/west` (#2922)
proposed:
hydrated: {}
pullRequest: {}
proposed:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:14:16Z'
sha: 9dfd6245b7b2e86660a13743ecc85d26bf3df04c
subject: Merge branch 'wave/0/west' into wave/0/west-next
- active:
commitStatuses:
- key: argocd-health
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:14:01Z'
sha: 580bbde2b775a8c6dc1484c4ba970426029e63b4
subject: >-
This is a no-op commit merging from wave/1/west-next into
wave/1/west
branch: wave/1/west
history:
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-23T17:33:12Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 28f351384c8b8e58624726c4c3713979d0143775
subject: 'chore: fake old commit message'
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-23T17:37:46Z'
sha: c2b69cd2186df736ee4e27a6de5590135a3c5823
subject: Promote 28f35 to `wave/1/west` (#2928)
proposed:
hydrated: {}
pullRequest: {}
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-22T19:29:53Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 7f91282cc8146644d0721efde1b9069ccbd475a1
subject: >-
chore: temporarily disable server-side diff on argo-argo
(ARGO-2528) (#2917)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T20:15:13Z'
sha: baf2f21f5085adaa7fe4720ca41d4f7ff918d4fd
subject: Promote 7f912 to `wave/1/west` (#2883)
proposed:
hydrated: {}
pullRequest: {}
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-16T16:32:32Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 789df516a84b1d3beb2f8ffe56e052b5b411aa74
subject: >-
chore: upgrade to 3.2.0-rc1, argo and team instances (ARGO-2412)
(#2875)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-16T16:45:40Z'
sha: 728e1a237732a20281f986916301a5dca254e3a4
subject: Promote 789df to `wave/1/west` (#2876)
proposed:
hydrated: {}
pullRequest: {}
proposed:
commitStatuses:
- key: promoter-previous-environment
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:13:56Z'
sha: 50b7c73718813b9b92a0e1e166798cbd55874bbd
subject: Merge branch 'wave/1/west' into wave/1/west-next
- active:
commitStatuses:
- key: argocd-health
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-23T17:33:12Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 28f351384c8b8e58624726c4c3713979d0143775
subject: 'chore: fake old commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-23T17:34:05Z'
sha: 169626417d1863434901c9225ded78963b7bd691
subject: >-
This is a no-op commit merging from wave/2/west-next into
wave/2/west
branch: wave/2/west
history:
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-22T19:29:53Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 7f91282cc8146644d0721efde1b9069ccbd475a1
subject: >-
chore: temporarily disable server-side diff on argo-argo
(ARGO-2528) (#2917)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T20:59:45Z'
sha: 1bfe993b41fe27b298c0aab9a5f185b29a478178
subject: Promote 7f912 to `wave/2/west` (#2884)
proposed:
hydrated: {}
pullRequest: {}
proposed:
commitStatuses:
- key: promoter-previous-environment
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-23T17:33:12Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 28f351384c8b8e58624726c4c3713979d0143775
subject: 'chore: fake old commit message'
hydrated:
author: Argo CD
body: "Fake commit message"
commitTime: '2025-09-23T17:33:50Z'
sha: 6e0e11f0102faa5779223159a2b16ccee63a8ac0
subject: '28f3513: chore: fake old commit message'
- active:
commitStatuses:
- key: argocd-health
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:14:08Z'
sha: 73cf41db84a6e97a74c967243e65ffce66dd4198
subject: >-
This is a no-op commit merging from wave/3/west-next into
wave/3/west
branch: wave/3/west
history:
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-22T19:29:53Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 7f91282cc8146644d0721efde1b9069ccbd475a1
subject: >-
chore: fake super old commit message
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T21:00:07Z'
sha: e7456f8ff4a5943ace3bb425e2d4e7dd43a53e46
subject: Promote 7f912 to `wave/3/west` (#2887)
proposed:
hydrated: {}
pullRequest: {}
proposed:
commitStatuses:
- key: promoter-previous-environment
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: Argo CD
body: "Fake commit message"
commitTime: '2025-09-24T14:13:49Z'
sha: daf0534e9db36334c82870cbb2e5d14cc687e0f5
subject: 'f94c9d4: chore: fake new commit message'
- active:
commitStatuses:
- key: argocd-health
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:14:30Z'
sha: 5abc8f1fd4056ad9f78723f3a83ac2291e87c821
subject: >-
This is a no-op commit merging from wave/4/west-next into
wave/4/west
branch: wave/4/west
history:
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-22T19:29:53Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 7f91282cc8146644d0721efde1b9069ccbd475a1
subject: >-
chore: temporarily disable server-side diff on argo-argo
(ARGO-2528) (#2917)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T21:00:29Z'
sha: 1d6e2b279108f6430825c889f6d2248fe4388aba
subject: Promote 7f912 to `wave/4/west` (#2885)
proposed:
hydrated: {}
pullRequest: {}
proposed:
commitStatuses:
- key: promoter-previous-environment
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: Argo CD
body: "Fake commit message"
commitTime: '2025-09-24T14:14:09Z'
sha: 91d942a1da3799d6027a7c407eee98b1f9cdcde0
subject: 'f94c9d4: chore: fake new commit message'
- active:
commitStatuses:
- key: argocd-health
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: GitOps Promoter
commitTime: '2025-09-24T14:14:40Z'
sha: 7bee0fcb985ac4ecccddf70a8b2d02cc0c41f231
subject: >-
This is a no-op commit merging from wave/5/west-next into
wave/5/west
branch: wave/5/west
history:
- active:
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-22T19:29:53Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: 7f91282cc8146644d0721efde1b9069ccbd475a1
subject: >-
chore: temporarily disable server-side diff on argo-argo
(ARGO-2528) (#2917)
hydrated:
author: argo-cd[bot]
body: "Fake commit message"
commitTime: '2025-09-22T21:00:59Z'
sha: a9d26ef79a1cc786a2a90446f7e8bbef2e5be653
subject: Promote 7f912 to `wave/5/west` (#2888)
proposed:
hydrated: {}
pullRequest: {}
proposed:
commitStatuses:
- key: promoter-previous-environment
phase: pending
dry:
author: Fake Person <fake@example.com>
commitTime: '2025-09-24T14:13:27Z'
repoURL: https://git.example.com/fake-org/fake-repo
sha: f94c9d42993145d9f10bb2c404b74da14ee3f74e
subject: 'chore: fake new commit message'
hydrated:
author: Argo CD
body: "Fake commit message"
commitTime: '2025-09-24T14:14:24Z'
sha: 20c41602695addbf7760236eff562f5692e585aa
subject: 'f94c9d4: chore: fake new commit message'


@@ -0,0 +1,38 @@
apiVersion: promoter.argoproj.io/v1alpha1
kind: PromotionStrategy
metadata:
  name: strategy
  namespace: test
spec:
  activeCommitStatuses:
  - key: argocd-health
  environments:
  - autoMerge: true
    branch: environments/qal-usw2
  - autoMerge: true
    branch: environments/e2e-usw2
  gitRepositoryRef:
    name: repo
status:
  conditions:
  - lastTransitionTime: '2025-10-15T16:31:47Z'
    message: 'ChangeTransferPolicy "strategy-environments-qal-usw2-27894e05" is not Ready because "ReconciliationError": Reconciliation failed: failed to calculate ChangeTransferPolicy status: failed to get SHAs for proposed branch "environments/qal-usw2-next": exit status 128: fatal: ''origin/environments/qal-usw2-next'' is not a commit and a branch ''environments/qal-usw2-next'' cannot be created from it'
    observedGeneration: 1
    reason: ChangeTransferPolicyNotReady
    status: 'False'
    type: Ready
  environments:
  - active:
      dry: {}
      hydrated: {}
    branch: environments/qal-usw2
    proposed:
      dry: {}
      hydrated: {}
  - active:
      dry: {}
      hydrated: {}
    branch: environments/e2e-usw2
    proposed:
      dry: {}
      hydrated: {}


@@ -26,9 +26,16 @@ if obj.status.conditions then
hs.message = "Waiting for pull request spec update to be observed"
return hs
end
-    if condition.status == "False" and (condition.reason == "ReconcileError" or condition.reason == "Failed") then
+    if condition.status == "False" then
      hs.status = "Degraded"
-      hs.message = "Pull request reconciliation failed: " .. (condition.message or "Unknown error")
+      local msg = condition.message or "Unknown error"
+      local reason = condition.reason or "Unknown"
+      -- Don't include ReconciliationError in the message since it's redundant
+      if reason == "ReconciliationError" then
+        hs.message = "Pull request reconciliation failed: " .. msg
+      else
+        hs.message = "Pull request reconciliation failed (" .. reason .. "): " .. msg
+      end
return hs
end
end


@@ -14,7 +14,11 @@ tests:
- healthStatus:
    status: Degraded
    message: "Pull request reconciliation failed: Something went wrong"
-  inputPath: testdata/reconcile-error.yaml
+  inputPath: testdata/reconciliation-error.yaml
+- healthStatus:
+    status: Degraded
+    message: "Pull request reconciliation failed: Failed to create pull request: authentication failed"
+  inputPath: testdata/non-reconciliation-error.yaml
- healthStatus:
    status: Progressing
    message: Pull request is not ready yet


@@ -0,0 +1,22 @@
apiVersion: promoter.argoproj.io/v1alpha1
kind: PullRequest
metadata:
  name: test-pr
  namespace: test
  generation: 1
spec:
  sourceBranch: app-next
  targetBranch: app
  gitRepositoryRef:
    name: repo
  title: Test Pull Request
  state: open
status:
  conditions:
  - lastTransitionTime: '2025-10-15T16:00:00Z'
    message: 'Failed to create pull request: authentication failed'
    observedGeneration: 1
    reason: ReconciliationError
    status: 'False'
    type: Ready


@@ -7,7 +7,7 @@ status:
  conditions:
  - type: Ready
    status: "False"
-    reason: ReconcileError
+    reason: ReconciliationError
    message: Something went wrong
    observedGeneration: 2


@@ -774,9 +774,8 @@ func (s *Server) Get(ctx context.Context, q *application.ApplicationQuery) (*v1a
return nil, err
}
+	s.inferResourcesStatusHealth(a)
	if q.Refresh == nil {
-		s.inferResourcesStatusHealth(a)
return a, nil
}
@@ -855,7 +854,9 @@ func (s *Server) Get(ctx context.Context, q *application.ApplicationQuery) (*v1a
annotations = make(map[string]string)
}
if _, ok := annotations[v1alpha1.AnnotationKeyRefresh]; !ok {
-			return event.Application.DeepCopy(), nil
+			refreshedApp := event.Application.DeepCopy()
+			s.inferResourcesStatusHealth(refreshedApp)
+			return refreshedApp, nil
}
}
}
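The net effect of this server change is that resource health is inferred from the cached application tree on every `Get` path, including the watch-event path used during refresh, rather than only when no refresh was requested. The inference step can be sketched as a key-based merge (hypothetical Python, for illustration; the real implementation is the Go method `inferResourcesStatusHealth`):

```python
def infer_resources_status_health(resources: list[dict], tree_nodes: list[dict]) -> None:
    """Copy each cached tree node's health onto the matching app resource.

    Illustrative sketch only; keys mirror Argo CD's group/kind/namespace/name
    resource identity.
    """
    health_by_key = {
        (n["group"], n["kind"], n["namespace"], n["name"]): n.get("health")
        for n in tree_nodes
    }
    for res in resources:
        key = (res["group"], res["kind"], res["namespace"], res["name"])
        if health_by_key.get(key) is not None:
            res["health"] = health_by_key[key]
```

The function mutates the resource list in place, which matches the in-place update the Go code performs on the Application object before returning it.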


@@ -2711,6 +2711,99 @@ func TestGetAppRefresh_HardRefresh(t *testing.T) {
}
}
+func TestGetApp_HealthStatusPropagation(t *testing.T) {
+	newServerWithTree := func(t *testing.T) (*Server, *v1alpha1.Application) {
+		t.Helper()
+		cacheClient := cache.NewCache(cache.NewInMemoryCache(1 * time.Hour))
+		testApp := newTestApp()
+		testApp.Status.ResourceHealthSource = v1alpha1.ResourceHealthLocationAppTree
+		testApp.Status.Resources = []v1alpha1.ResourceStatus{
+			{
+				Group:     "apps",
+				Kind:      "Deployment",
+				Name:      "guestbook",
+				Namespace: "default",
+			},
+		}
+		appServer := newTestAppServer(t, testApp)
+		appStateCache := appstate.NewCache(cacheClient, time.Minute)
+		appInstanceName := testApp.InstanceName(appServer.appNamespaceOrDefault(testApp.Namespace))
+		err := appStateCache.SetAppResourcesTree(appInstanceName, &v1alpha1.ApplicationTree{
+			Nodes: []v1alpha1.ResourceNode{{
+				ResourceRef: v1alpha1.ResourceRef{
+					Group:     "apps",
+					Kind:      "Deployment",
+					Name:      "guestbook",
+					Namespace: "default",
+				},
+				Health: &v1alpha1.HealthStatus{Status: health.HealthStatusDegraded},
+			}},
+		})
+		require.NoError(t, err)
+		appServer.cache = servercache.NewCache(appStateCache, time.Minute, time.Minute)
+		return appServer, testApp
+	}
+
+	t.Run("propagated health status on get with no refresh", func(t *testing.T) {
+		appServer, testApp := newServerWithTree(t)
+		fetchedApp, err := appServer.Get(t.Context(), &application.ApplicationQuery{
+			Name: &testApp.Name,
+		})
+		require.NoError(t, err)
+		assert.Equal(t, health.HealthStatusDegraded, fetchedApp.Status.Resources[0].Health.Status)
+	})
+
+	t.Run("propagated health status on normal refresh", func(t *testing.T) {
+		appServer, testApp := newServerWithTree(t)
+		var patched int32
+		ch := make(chan string, 1)
+		ctx, cancel := context.WithCancel(t.Context())
+		defer cancel()
+		go refreshAnnotationRemover(t, ctx, &patched, appServer, testApp.Name, ch)
+		fetchedApp, err := appServer.Get(t.Context(), &application.ApplicationQuery{
+			Name:    &testApp.Name,
+			Refresh: ptr.To(string(v1alpha1.RefreshTypeNormal)),
+		})
+		require.NoError(t, err)
+		select {
+		case <-ch:
+			assert.Equal(t, int32(1), atomic.LoadInt32(&patched))
+		case <-time.After(10 * time.Second):
+			assert.Fail(t, "Out of time ( 10 seconds )")
+		}
+		assert.Equal(t, health.HealthStatusDegraded, fetchedApp.Status.Resources[0].Health.Status)
+	})
+
+	t.Run("propagated health status on hard refresh", func(t *testing.T) {
+		appServer, testApp := newServerWithTree(t)
+		var patched int32
+		ch := make(chan string, 1)
+		ctx, cancel := context.WithCancel(t.Context())
+		defer cancel()
+		go refreshAnnotationRemover(t, ctx, &patched, appServer, testApp.Name, ch)
+		fetchedApp, err := appServer.Get(t.Context(), &application.ApplicationQuery{
+			Name:    &testApp.Name,
+			Refresh: ptr.To(string(v1alpha1.RefreshTypeHard)),
+		})
+		require.NoError(t, err)
+		select {
+		case <-ch:
+			assert.Equal(t, int32(1), atomic.LoadInt32(&patched))
+		case <-time.After(10 * time.Second):
+			assert.Fail(t, "Out of time ( 10 seconds )")
+		}
+		assert.Equal(t, health.HealthStatusDegraded, fetchedApp.Status.Resources[0].Health.Status)
+	})
+}
func TestInferResourcesStatusHealth(t *testing.T) {
cacheClient := cache.NewCache(cache.NewInMemoryCache(1 * time.Hour))


@@ -1257,7 +1257,7 @@ func (server *ArgoCDServer) newHTTPServer(ctx context.Context, port int, grpcWeb
// Webhook handler for git events (Note: cache timeouts are hardcoded because API server does not write to cache and not really using them)
argoDB := db.NewDB(server.Namespace, server.settingsMgr, server.KubeClientset)
-	acdWebhookHandler := webhook.NewHandler(server.Namespace, server.ApplicationNamespaces, server.WebhookParallelism, server.AppClientset, server.settings, server.settingsMgr, server.RepoServerCache, server.Cache, argoDB, server.settingsMgr.GetMaxWebhookPayloadSize())
+	acdWebhookHandler := webhook.NewHandler(server.Namespace, server.ApplicationNamespaces, server.WebhookParallelism, server.AppClientset, server.appLister, server.settings, server.settingsMgr, server.RepoServerCache, server.Cache, argoDB, server.settingsMgr.GetMaxWebhookPayloadSize())
mux.HandleFunc("/api/webhook", acdWebhookHandler.Handler)


@@ -58,16 +58,12 @@ const sectionHeader = (info: SectionInfo, onClick?: () => any) => {
);
};
-const hasRollingSyncEnabled = (application: models.Application): boolean => {
-    return application.metadata.ownerReferences?.some(ref => ref.kind === 'ApplicationSet') || false;
+const getApplicationSetOwnerRef = (application: models.Application) => {
+    return application.metadata.ownerReferences?.find(ref => ref.kind === 'ApplicationSet');
};

const ProgressiveSyncStatus = ({application}: {application: models.Application}) => {
-    if (!hasRollingSyncEnabled(application)) {
-        return null;
-    }
-    const appSetRef = application.metadata.ownerReferences.find(ref => ref.kind === 'ApplicationSet');
+    const appSetRef = getApplicationSetOwnerRef(application);
if (!appSetRef) {
return null;
}
@@ -75,12 +71,39 @@ const ProgressiveSyncStatus = ({application}: {application: models.Application})
return (
<DataLoader
input={application}
+            errorRenderer={() => {
+                // For any errors, show a minimal error state
+                return (
+                    <div className='application-status-panel__item'>
+                        {sectionHeader({
+                            title: 'PROGRESSIVE SYNC',
+                            helpContent: 'Shows the current status of progressive sync for applications managed by an ApplicationSet.'
+                        })}
+                        <div className='application-status-panel__item-value'>
+                            <i className='fa fa-exclamation-triangle' style={{color: COLORS.sync.unknown}} /> Error
+                        </div>
+                        <div className='application-status-panel__item-name'>Unable to load Progressive Sync status</div>
+                    </div>
+                );
+            }}
            load={async () => {
-                const appSet = await services.applications.getApplicationSet(appSetRef.name, application.metadata.namespace);
-                return appSet?.spec?.strategy?.type === 'RollingSync' ? appSet : null;
+                // Find ApplicationSet by searching all namespaces dynamically
+                const appSetList = await services.applications.listApplicationSets();
+                const appSet = appSetList.items?.find(item => item.metadata.name === appSetRef.name);
+                return {appSet};
            }}>
-            {(appSet: models.ApplicationSet) => {
-                if (!appSet) {
+            {({appSet}: {appSet: models.ApplicationSet}) => {
+                // Hide panel if: Progressive Sync disabled, no permission, or not RollingSync strategy
+                if (!appSet || !appSet.status?.applicationStatus || appSet?.spec?.strategy?.type !== 'RollingSync') {
                    return null;
                }
+                // Get the current application's status from the ApplicationSet applicationStatus
+                const appResource = appSet.status?.applicationStatus?.find(status => status.application === application.metadata.name);
+                // If no application status is found, show a default status
+                if (!appResource) {
return (
<div className='application-status-panel__item'>
{sectionHeader({
@@ -88,14 +111,15 @@ const ProgressiveSyncStatus = ({application}: {application: models.Application})
helpContent: 'Shows the current status of progressive sync for applications managed by an ApplicationSet with RollingSync strategy.'
})}
<div className='application-status-panel__item-value'>
-                            <i className='fa fa-question-circle' style={{color: COLORS.sync.unknown}} /> Unknown
+                            <i className='fa fa-clock' style={{color: COLORS.sync.out_of_sync}} /> Waiting
                        </div>
+                        <div className='application-status-panel__item-name'>Application status not yet available from ApplicationSet</div>
</div>
);
}
-                // Get the current application's status from the ApplicationSet resources
-                const appResource = appSet.status?.applicationStatus?.find(status => status.application === application.metadata.name);
+                // Get last transition time from application status
+                const lastTransitionTime = appResource?.lastTransitionTime;
return (
<div className='application-status-panel__item'>
@@ -106,12 +130,14 @@ const ProgressiveSyncStatus = ({application}: {application: models.Application})
<div className='application-status-panel__item-value' style={{color: getProgressiveSyncStatusColor(appResource.status)}}>
{getProgressiveSyncStatusIcon({status: appResource.status})}&nbsp;{appResource.status}
</div>
-                    <div className='application-status-panel__item-value'>Wave: {appResource.step}</div>
-                    <div className='application-status-panel__item-name' style={{marginBottom: '0.5em'}}>
-                        Last Transition: <br />
-                        <Timestamp date={appResource.lastTransitionTime} />
-                    </div>
-                    {appResource.message && <div className='application-status-panel__item-name'>{appResource.message}</div>}
+                    {appResource?.step && <div className='application-status-panel__item-value'>Wave: {appResource.step}</div>}
+                    {lastTransitionTime && (
+                        <div className='application-status-panel__item-name' style={{marginBottom: '0.5em'}}>
+                            Last Transition: <br />
+                            <Timestamp date={lastTransitionTime} />
+                        </div>
+                    )}
+                    {appResource?.message && <div className='application-status-panel__item-name'>{appResource.message}</div>}
</div>
);
}}
@@ -123,7 +149,9 @@ export const ApplicationStatusPanel = ({application, showDiff, showOperation, sh
const [showProgressiveSync, setShowProgressiveSync] = React.useState(false);
React.useEffect(() => {
-        setShowProgressiveSync(hasRollingSyncEnabled(application));
+        // Only show Progressive Sync if the application has an ApplicationSet parent
+        // The actual strategy validation will be done inside ProgressiveSyncStatus component
+        setShowProgressiveSync(!!getApplicationSetOwnerRef(application));
}, [application]);
const today = new Date();
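The refactor above replaces the boolean `hasRollingSyncEnabled` with a helper that returns the owning `ApplicationSet` reference, so one lookup drives both panel visibility and data loading. The owner-reference lookup can be sketched in Python (an illustrative mirror of the TSX helper, not the shipped code):

```python
def get_applicationset_owner_ref(owner_references):
    """Return the first owner reference of kind ApplicationSet, or None.

    Illustrative mirror of getApplicationSetOwnerRef in the TSX above.
    """
    return next(
        (ref for ref in owner_references or [] if ref.get("kind") == "ApplicationSet"),
        None,
    )
```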


@@ -1716,6 +1716,10 @@ export const getProgressiveSyncStatusIcon = ({status, isButton}: {status: string
return {icon: 'fa-clock', color: COLORS.sync.out_of_sync};
case 'Error':
return {icon: 'fa-times-circle', color: COLORS.health.degraded};
+        case 'Synced':
+            return {icon: 'fa-check-circle', color: COLORS.sync.synced};
+        case 'OutOfSync':
+            return {icon: 'fa-exclamation-triangle', color: COLORS.sync.out_of_sync};
default:
return {icon: 'fa-question-circle', color: COLORS.sync.unknown};
}
@@ -1738,6 +1742,10 @@ export const getProgressiveSyncStatusColor = (status: string): string => {
return COLORS.health.healthy;
case 'Error':
return COLORS.health.degraded;
+        case 'Synced':
+            return COLORS.sync.synced;
+        case 'OutOfSync':
+            return COLORS.sync.out_of_sync;
default:
return COLORS.sync.unknown;
}


@@ -1134,3 +1134,8 @@ export interface ApplicationSet {
resources?: ApplicationSetResource[];
};
}
+export interface ApplicationSetList {
+    metadata: models.ListMeta;
+    items: ApplicationSet[];
+}


@@ -550,4 +550,8 @@ export class ApplicationsService {
.query({appsetNamespace: namespace})
.then(res => res.body as models.ApplicationSet);
}
+    public async listApplicationSets(): Promise<models.ApplicationSetList> {
+        return requests.get(`/applicationsets`).then(res => res.body as models.ApplicationSetList);
+    }
}


@@ -34,9 +34,9 @@ func (s *secretsRepositoryBackend) CreateRepository(ctx context.Context, reposit
},
}
-	s.repositoryToSecret(repository, repositorySecret)
+	updatedSecret := s.repositoryToSecret(repository, repositorySecret)
-	_, err := s.db.createSecret(ctx, repositorySecret)
+	_, err := s.db.createSecret(ctx, updatedSecret)
if err != nil {
if apierrors.IsAlreadyExists(err) {
hasLabel, err := s.hasRepoTypeLabel(secName)
@@ -142,9 +142,9 @@ func (s *secretsRepositoryBackend) UpdateRepository(ctx context.Context, reposit
return nil, err
}
-	s.repositoryToSecret(repository, repositorySecret)
+	updatedSecret := s.repositoryToSecret(repository, repositorySecret)
-	_, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, repositorySecret, metav1.UpdateOptions{})
+	_, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, updatedSecret, metav1.UpdateOptions{})
if err != nil {
return nil, err
}
@@ -187,9 +187,9 @@ func (s *secretsRepositoryBackend) CreateRepoCreds(ctx context.Context, repoCred
},
}
-	s.repoCredsToSecret(repoCreds, repoCredsSecret)
+	updatedSecret := s.repoCredsToSecret(repoCreds, repoCredsSecret)
-	_, err := s.db.createSecret(ctx, repoCredsSecret)
+	_, err := s.db.createSecret(ctx, updatedSecret)
if err != nil {
if apierrors.IsAlreadyExists(err) {
return nil, status.Errorf(codes.AlreadyExists, "repository credentials %q already exists", repoCreds.URL)
@@ -237,9 +237,9 @@ func (s *secretsRepositoryBackend) UpdateRepoCreds(ctx context.Context, repoCred
return nil, err
}
-	s.repoCredsToSecret(repoCreds, repoCredsSecret)
+	updatedSecret := s.repoCredsToSecret(repoCreds, repoCredsSecret)
-	repoCredsSecret, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, repoCredsSecret, metav1.UpdateOptions{})
+	repoCredsSecret, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, updatedSecret, metav1.UpdateOptions{})
if err != nil {
return nil, err
}
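The pattern in these hunks is to have `repositoryToSecret` and `repoCredsToSecret` return an updated deep copy instead of mutating the caller's secret in place, which keeps informer-cached objects untouched. A minimal Python sketch of that copy-and-return style (the field mapping here is hypothetical, for illustration only):

```python
import copy

def repository_to_secret(repository: dict, secret: dict) -> dict:
    """Return an updated deep copy of the secret; never mutate the input.

    Illustrative mirror of the Go refactor above; the single "url" field
    stands in for the full field mapping.
    """
    updated = copy.deepcopy(secret)
    updated.setdefault("data", {})
    updated["data"]["url"] = repository.get("repo", "")
    return updated
```

The caller then writes the returned copy, so a failed update cannot leave a half-mutated cached object behind.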
@@ -323,73 +323,75 @@ func (s *secretsRepositoryBackend) GetAllOCIRepoCreds(_ context.Context) ([]*app
}
func secretToRepository(secret *corev1.Secret) (*appsv1.Repository, error) {
+	secretCopy := secret.DeepCopy()
repository := &appsv1.Repository{
-		Name:                       string(secret.Data["name"]),
-		Repo:                       string(secret.Data["url"]),
-		Username:                   string(secret.Data["username"]),
-		Password:                   string(secret.Data["password"]),
-		BearerToken:                string(secret.Data["bearerToken"]),
-		SSHPrivateKey:              string(secret.Data["sshPrivateKey"]),
-		TLSClientCertData:          string(secret.Data["tlsClientCertData"]),
-		TLSClientCertKey:           string(secret.Data["tlsClientCertKey"]),
-		Type:                       string(secret.Data["type"]),
-		GithubAppPrivateKey:        string(secret.Data["githubAppPrivateKey"]),
-		GitHubAppEnterpriseBaseURL: string(secret.Data["githubAppEnterpriseBaseUrl"]),
-		Proxy:                      string(secret.Data["proxy"]),
-		NoProxy:                    string(secret.Data["noProxy"]),
-		Project:                    string(secret.Data["project"]),
-		GCPServiceAccountKey:       string(secret.Data["gcpServiceAccountKey"]),
+		Name:                       string(secretCopy.Data["name"]),
+		Repo:                       string(secretCopy.Data["url"]),
+		Username:                   string(secretCopy.Data["username"]),
+		Password:                   string(secretCopy.Data["password"]),
+		BearerToken:                string(secretCopy.Data["bearerToken"]),
+		SSHPrivateKey:              string(secretCopy.Data["sshPrivateKey"]),
+		TLSClientCertData:          string(secretCopy.Data["tlsClientCertData"]),
+		TLSClientCertKey:           string(secretCopy.Data["tlsClientCertKey"]),
+		Type:                       string(secretCopy.Data["type"]),
+		GithubAppPrivateKey:        string(secretCopy.Data["githubAppPrivateKey"]),
+		GitHubAppEnterpriseBaseURL: string(secretCopy.Data["githubAppEnterpriseBaseUrl"]),
+		Proxy:                      string(secretCopy.Data["proxy"]),
+		NoProxy:                    string(secretCopy.Data["noProxy"]),
+		Project:                    string(secretCopy.Data["project"]),
+		GCPServiceAccountKey:       string(secretCopy.Data["gcpServiceAccountKey"]),
}
-	insecureIgnoreHostKey, err := boolOrFalse(secret, "insecureIgnoreHostKey")
+	insecureIgnoreHostKey, err := boolOrFalse(secretCopy, "insecureIgnoreHostKey")
if err != nil {
return repository, err
}
repository.InsecureIgnoreHostKey = insecureIgnoreHostKey
-	insecure, err := boolOrFalse(secret, "insecure")
+	insecure, err := boolOrFalse(secretCopy, "insecure")
if err != nil {
return repository, err
}
repository.Insecure = insecure
enableLfs, err := boolOrFalse(secret, "enableLfs")
enableLfs, err := boolOrFalse(secretCopy, "enableLfs")
if err != nil {
return repository, err
}
repository.EnableLFS = enableLfs
enableOCI, err := boolOrFalse(secret, "enableOCI")
enableOCI, err := boolOrFalse(secretCopy, "enableOCI")
if err != nil {
return repository, err
}
repository.EnableOCI = enableOCI
insecureOCIForceHTTP, err := boolOrFalse(secret, "insecureOCIForceHttp")
insecureOCIForceHTTP, err := boolOrFalse(secretCopy, "insecureOCIForceHttp")
if err != nil {
return repository, err
}
repository.InsecureOCIForceHttp = insecureOCIForceHTTP
githubAppID, err := intOrZero(secret, "githubAppID")
githubAppID, err := intOrZero(secretCopy, "githubAppID")
if err != nil {
return repository, err
}
repository.GithubAppId = githubAppID
githubAppInstallationID, err := intOrZero(secret, "githubAppInstallationID")
githubAppInstallationID, err := intOrZero(secretCopy, "githubAppInstallationID")
if err != nil {
return repository, err
}
repository.GithubAppInstallationId = githubAppInstallationID
forceBasicAuth, err := boolOrFalse(secret, "forceHttpBasicAuth")
forceBasicAuth, err := boolOrFalse(secretCopy, "forceHttpBasicAuth")
if err != nil {
return repository, err
}
repository.ForceHttpBasicAuth = forceBasicAuth
useAzureWorkloadIdentity, err := boolOrFalse(secret, "useAzureWorkloadIdentity")
useAzureWorkloadIdentity, err := boolOrFalse(secretCopy, "useAzureWorkloadIdentity")
if err != nil {
return repository, err
}
@@ -398,86 +400,92 @@ func secretToRepository(secret *corev1.Secret) (*appsv1.Repository, error) {
return repository, nil
}
- func (s *secretsRepositoryBackend) repositoryToSecret(repository *appsv1.Repository, secret *corev1.Secret) {
- if secret.Data == nil {
- secret.Data = make(map[string][]byte)
+ func (s *secretsRepositoryBackend) repositoryToSecret(repository *appsv1.Repository, secret *corev1.Secret) *corev1.Secret {
+ secretCopy := secret.DeepCopy()
+ if secretCopy.Data == nil {
+ secretCopy.Data = make(map[string][]byte)
}
- updateSecretString(secret, "name", repository.Name)
- updateSecretString(secret, "project", repository.Project)
- updateSecretString(secret, "url", repository.Repo)
- updateSecretString(secret, "username", repository.Username)
- updateSecretString(secret, "password", repository.Password)
- updateSecretString(secret, "bearerToken", repository.BearerToken)
- updateSecretString(secret, "sshPrivateKey", repository.SSHPrivateKey)
- updateSecretBool(secret, "enableOCI", repository.EnableOCI)
- updateSecretBool(secret, "insecureOCIForceHttp", repository.InsecureOCIForceHttp)
- updateSecretString(secret, "tlsClientCertData", repository.TLSClientCertData)
- updateSecretString(secret, "tlsClientCertKey", repository.TLSClientCertKey)
- updateSecretString(secret, "type", repository.Type)
- updateSecretString(secret, "githubAppPrivateKey", repository.GithubAppPrivateKey)
- updateSecretInt(secret, "githubAppID", repository.GithubAppId)
- updateSecretInt(secret, "githubAppInstallationID", repository.GithubAppInstallationId)
- updateSecretString(secret, "githubAppEnterpriseBaseUrl", repository.GitHubAppEnterpriseBaseURL)
- updateSecretBool(secret, "insecureIgnoreHostKey", repository.InsecureIgnoreHostKey)
- updateSecretBool(secret, "insecure", repository.Insecure)
- updateSecretBool(secret, "enableLfs", repository.EnableLFS)
- updateSecretString(secret, "proxy", repository.Proxy)
- updateSecretString(secret, "noProxy", repository.NoProxy)
- updateSecretString(secret, "gcpServiceAccountKey", repository.GCPServiceAccountKey)
- updateSecretBool(secret, "forceHttpBasicAuth", repository.ForceHttpBasicAuth)
- updateSecretBool(secret, "useAzureWorkloadIdentity", repository.UseAzureWorkloadIdentity)
- addSecretMetadata(secret, s.getSecretType())
+ updateSecretString(secretCopy, "name", repository.Name)
+ updateSecretString(secretCopy, "project", repository.Project)
+ updateSecretString(secretCopy, "url", repository.Repo)
+ updateSecretString(secretCopy, "username", repository.Username)
+ updateSecretString(secretCopy, "password", repository.Password)
+ updateSecretString(secretCopy, "bearerToken", repository.BearerToken)
+ updateSecretString(secretCopy, "sshPrivateKey", repository.SSHPrivateKey)
+ updateSecretBool(secretCopy, "enableOCI", repository.EnableOCI)
+ updateSecretBool(secretCopy, "insecureOCIForceHttp", repository.InsecureOCIForceHttp)
+ updateSecretString(secretCopy, "tlsClientCertData", repository.TLSClientCertData)
+ updateSecretString(secretCopy, "tlsClientCertKey", repository.TLSClientCertKey)
+ updateSecretString(secretCopy, "type", repository.Type)
+ updateSecretString(secretCopy, "githubAppPrivateKey", repository.GithubAppPrivateKey)
+ updateSecretInt(secretCopy, "githubAppID", repository.GithubAppId)
+ updateSecretInt(secretCopy, "githubAppInstallationID", repository.GithubAppInstallationId)
+ updateSecretString(secretCopy, "githubAppEnterpriseBaseUrl", repository.GitHubAppEnterpriseBaseURL)
+ updateSecretBool(secretCopy, "insecureIgnoreHostKey", repository.InsecureIgnoreHostKey)
+ updateSecretBool(secretCopy, "insecure", repository.Insecure)
+ updateSecretBool(secretCopy, "enableLfs", repository.EnableLFS)
+ updateSecretString(secretCopy, "proxy", repository.Proxy)
+ updateSecretString(secretCopy, "noProxy", repository.NoProxy)
+ updateSecretString(secretCopy, "gcpServiceAccountKey", repository.GCPServiceAccountKey)
+ updateSecretBool(secretCopy, "forceHttpBasicAuth", repository.ForceHttpBasicAuth)
+ updateSecretBool(secretCopy, "useAzureWorkloadIdentity", repository.UseAzureWorkloadIdentity)
+ addSecretMetadata(secretCopy, s.getSecretType())
+ return secretCopy
}
func (s *secretsRepositoryBackend) secretToRepoCred(secret *corev1.Secret) (*appsv1.RepoCreds, error) {
+ secretCopy := secret.DeepCopy()
repository := &appsv1.RepoCreds{
- URL: string(secret.Data["url"]),
- Username: string(secret.Data["username"]),
- Password: string(secret.Data["password"]),
- BearerToken: string(secret.Data["bearerToken"]),
- SSHPrivateKey: string(secret.Data["sshPrivateKey"]),
- TLSClientCertData: string(secret.Data["tlsClientCertData"]),
- TLSClientCertKey: string(secret.Data["tlsClientCertKey"]),
- Type: string(secret.Data["type"]),
- GithubAppPrivateKey: string(secret.Data["githubAppPrivateKey"]),
- GitHubAppEnterpriseBaseURL: string(secret.Data["githubAppEnterpriseBaseUrl"]),
- GCPServiceAccountKey: string(secret.Data["gcpServiceAccountKey"]),
- Proxy: string(secret.Data["proxy"]),
- NoProxy: string(secret.Data["noProxy"]),
+ URL: string(secretCopy.Data["url"]),
+ Username: string(secretCopy.Data["username"]),
+ Password: string(secretCopy.Data["password"]),
+ BearerToken: string(secretCopy.Data["bearerToken"]),
+ SSHPrivateKey: string(secretCopy.Data["sshPrivateKey"]),
+ TLSClientCertData: string(secretCopy.Data["tlsClientCertData"]),
+ TLSClientCertKey: string(secretCopy.Data["tlsClientCertKey"]),
+ Type: string(secretCopy.Data["type"]),
+ GithubAppPrivateKey: string(secretCopy.Data["githubAppPrivateKey"]),
+ GitHubAppEnterpriseBaseURL: string(secretCopy.Data["githubAppEnterpriseBaseUrl"]),
+ GCPServiceAccountKey: string(secretCopy.Data["gcpServiceAccountKey"]),
+ Proxy: string(secretCopy.Data["proxy"]),
+ NoProxy: string(secretCopy.Data["noProxy"]),
}
- enableOCI, err := boolOrFalse(secret, "enableOCI")
+ enableOCI, err := boolOrFalse(secretCopy, "enableOCI")
if err != nil {
return repository, err
}
repository.EnableOCI = enableOCI
- insecureOCIForceHTTP, err := boolOrFalse(secret, "insecureOCIForceHttp")
+ insecureOCIForceHTTP, err := boolOrFalse(secretCopy, "insecureOCIForceHttp")
if err != nil {
return repository, err
}
repository.InsecureOCIForceHttp = insecureOCIForceHTTP
- githubAppID, err := intOrZero(secret, "githubAppID")
+ githubAppID, err := intOrZero(secretCopy, "githubAppID")
if err != nil {
return repository, err
}
repository.GithubAppId = githubAppID
- githubAppInstallationID, err := intOrZero(secret, "githubAppInstallationID")
+ githubAppInstallationID, err := intOrZero(secretCopy, "githubAppInstallationID")
if err != nil {
return repository, err
}
repository.GithubAppInstallationId = githubAppInstallationID
- forceBasicAuth, err := boolOrFalse(secret, "forceHttpBasicAuth")
+ forceBasicAuth, err := boolOrFalse(secretCopy, "forceHttpBasicAuth")
if err != nil {
return repository, err
}
repository.ForceHttpBasicAuth = forceBasicAuth
- useAzureWorkloadIdentity, err := boolOrFalse(secret, "useAzureWorkloadIdentity")
+ useAzureWorkloadIdentity, err := boolOrFalse(secretCopy, "useAzureWorkloadIdentity")
if err != nil {
return repository, err
}
@@ -486,31 +494,35 @@ func (s *secretsRepositoryBackend) secretToRepoCred(secret *corev1.Secret) (*app
return repository, nil
}
- func (s *secretsRepositoryBackend) repoCredsToSecret(repoCreds *appsv1.RepoCreds, secret *corev1.Secret) {
- if secret.Data == nil {
- secret.Data = make(map[string][]byte)
+ func (s *secretsRepositoryBackend) repoCredsToSecret(repoCreds *appsv1.RepoCreds, secret *corev1.Secret) *corev1.Secret {
+ secretCopy := secret.DeepCopy()
+ if secretCopy.Data == nil {
+ secretCopy.Data = make(map[string][]byte)
}
- updateSecretString(secret, "url", repoCreds.URL)
- updateSecretString(secret, "username", repoCreds.Username)
- updateSecretString(secret, "password", repoCreds.Password)
- updateSecretString(secret, "bearerToken", repoCreds.BearerToken)
- updateSecretString(secret, "sshPrivateKey", repoCreds.SSHPrivateKey)
- updateSecretBool(secret, "enableOCI", repoCreds.EnableOCI)
- updateSecretBool(secret, "insecureOCIForceHttp", repoCreds.InsecureOCIForceHttp)
- updateSecretString(secret, "tlsClientCertData", repoCreds.TLSClientCertData)
- updateSecretString(secret, "tlsClientCertKey", repoCreds.TLSClientCertKey)
- updateSecretString(secret, "type", repoCreds.Type)
- updateSecretString(secret, "githubAppPrivateKey", repoCreds.GithubAppPrivateKey)
- updateSecretInt(secret, "githubAppID", repoCreds.GithubAppId)
- updateSecretInt(secret, "githubAppInstallationID", repoCreds.GithubAppInstallationId)
- updateSecretString(secret, "githubAppEnterpriseBaseUrl", repoCreds.GitHubAppEnterpriseBaseURL)
- updateSecretString(secret, "gcpServiceAccountKey", repoCreds.GCPServiceAccountKey)
- updateSecretString(secret, "proxy", repoCreds.Proxy)
- updateSecretString(secret, "noProxy", repoCreds.NoProxy)
- updateSecretBool(secret, "forceHttpBasicAuth", repoCreds.ForceHttpBasicAuth)
- updateSecretBool(secret, "useAzureWorkloadIdentity", repoCreds.UseAzureWorkloadIdentity)
- addSecretMetadata(secret, s.getRepoCredSecretType())
+ updateSecretString(secretCopy, "url", repoCreds.URL)
+ updateSecretString(secretCopy, "username", repoCreds.Username)
+ updateSecretString(secretCopy, "password", repoCreds.Password)
+ updateSecretString(secretCopy, "bearerToken", repoCreds.BearerToken)
+ updateSecretString(secretCopy, "sshPrivateKey", repoCreds.SSHPrivateKey)
+ updateSecretBool(secretCopy, "enableOCI", repoCreds.EnableOCI)
+ updateSecretBool(secretCopy, "insecureOCIForceHttp", repoCreds.InsecureOCIForceHttp)
+ updateSecretString(secretCopy, "tlsClientCertData", repoCreds.TLSClientCertData)
+ updateSecretString(secretCopy, "tlsClientCertKey", repoCreds.TLSClientCertKey)
+ updateSecretString(secretCopy, "type", repoCreds.Type)
+ updateSecretString(secretCopy, "githubAppPrivateKey", repoCreds.GithubAppPrivateKey)
+ updateSecretInt(secretCopy, "githubAppID", repoCreds.GithubAppId)
+ updateSecretInt(secretCopy, "githubAppInstallationID", repoCreds.GithubAppInstallationId)
+ updateSecretString(secretCopy, "githubAppEnterpriseBaseUrl", repoCreds.GitHubAppEnterpriseBaseURL)
+ updateSecretString(secretCopy, "gcpServiceAccountKey", repoCreds.GCPServiceAccountKey)
+ updateSecretString(secretCopy, "proxy", repoCreds.Proxy)
+ updateSecretString(secretCopy, "noProxy", repoCreds.NoProxy)
+ updateSecretBool(secretCopy, "forceHttpBasicAuth", repoCreds.ForceHttpBasicAuth)
+ updateSecretBool(secretCopy, "useAzureWorkloadIdentity", repoCreds.UseAzureWorkloadIdentity)
+ addSecretMetadata(secretCopy, s.getRepoCredSecretType())
+ return secretCopy
}
func (s *secretsRepositoryBackend) getRepositorySecret(repoURL, project string, allowFallback bool) (*corev1.Secret, error) {
@@ -1,7 +1,9 @@
package db
import (
"fmt"
"strconv"
"sync"
"testing"
"github.com/stretchr/testify/assert"
@@ -85,9 +87,9 @@ func TestSecretsRepositoryBackend_CreateRepository(t *testing.T) {
t.Parallel()
secret := &corev1.Secret{}
s := secretsRepositoryBackend{}
- s.repositoryToSecret(repo, secret)
- delete(secret.Labels, common.LabelKeySecretType)
- f := setupWithK8sObjects(secret)
+ updatedSecret := s.repositoryToSecret(repo, secret)
+ delete(updatedSecret.Labels, common.LabelKeySecretType)
+ f := setupWithK8sObjects(updatedSecret)
f.clientSet.ReactionChain = nil
f.clientSet.AddReactor("create", "secrets", func(_ k8stesting.Action) (handled bool, ret runtime.Object, err error) {
gr := schema.GroupResource{
@@ -122,8 +124,8 @@ func TestSecretsRepositoryBackend_CreateRepository(t *testing.T) {
},
}
s := secretsRepositoryBackend{}
- s.repositoryToSecret(repo, secret)
- f := setupWithK8sObjects(secret)
+ updatedSecret := s.repositoryToSecret(repo, secret)
+ f := setupWithK8sObjects(updatedSecret)
f.clientSet.ReactionChain = nil
f.clientSet.WatchReactionChain = nil
f.clientSet.AddReactor("create", "secrets", func(_ k8stesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -134,7 +136,7 @@ func TestSecretsRepositoryBackend_CreateRepository(t *testing.T) {
return true, nil, apierrors.NewAlreadyExists(gr, "already exists")
})
watcher := watch.NewFakeWithChanSize(1, true)
- watcher.Add(secret)
+ watcher.Add(updatedSecret)
f.clientSet.AddWatchReactor("secrets", func(_ k8stesting.Action) (handled bool, ret watch.Interface, err error) {
return true, watcher, nil
})
@@ -952,7 +954,9 @@ func TestRepoCredsToSecret(t *testing.T) {
GithubAppInstallationId: 456,
GitHubAppEnterpriseBaseURL: "GitHubAppEnterpriseBaseURL",
}
- testee.repoCredsToSecret(creds, s)
+ s = testee.repoCredsToSecret(creds, s)
assert.Equal(t, []byte(creds.URL), s.Data["url"])
assert.Equal(t, []byte(creds.Username), s.Data["username"])
assert.Equal(t, []byte(creds.Password), s.Data["password"])
@@ -994,7 +998,7 @@ func TestRepoWriteCredsToSecret(t *testing.T) {
GithubAppInstallationId: 456,
GitHubAppEnterpriseBaseURL: "GitHubAppEnterpriseBaseURL",
}
- testee.repoCredsToSecret(creds, s)
+ s = testee.repoCredsToSecret(creds, s)
assert.Equal(t, []byte(creds.URL), s.Data["url"])
assert.Equal(t, []byte(creds.Username), s.Data["username"])
assert.Equal(t, []byte(creds.Password), s.Data["password"])
@@ -1010,3 +1014,169 @@ func TestRepoWriteCredsToSecret(t *testing.T) {
assert.Equal(t, map[string]string{common.AnnotationKeyManagedBy: common.AnnotationValueManagedByArgoCD}, s.Annotations)
assert.Equal(t, map[string]string{common.LabelKeySecretType: common.LabelValueSecretTypeRepoCredsWrite}, s.Labels)
}
func TestRaceConditionInRepoCredsOperations(t *testing.T) {
// Create a single shared secret that will be accessed concurrently
sharedSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: RepoURLToSecretName(repoSecretPrefix, "git@github.com:argoproj/argo-cd.git", ""),
Namespace: testNamespace,
Labels: map[string]string{
common.LabelKeySecretType: common.LabelValueSecretTypeRepoCreds,
},
},
Data: map[string][]byte{
"url": []byte("git@github.com:argoproj/argo-cd.git"),
"username": []byte("test-user"),
"password": []byte("test-pass"),
},
}
// Create test credentials that we'll use for conversion
repoCreds := &appsv1.RepoCreds{
URL: "git@github.com:argoproj/argo-cd.git",
Username: "test-user",
Password: "test-pass",
}
backend := &secretsRepositoryBackend{}
var wg sync.WaitGroup
concurrentOps := 50
errChan := make(chan error, concurrentOps*2) // Channel to collect errors
// Launch goroutines that perform concurrent operations
for i := 0; i < concurrentOps; i++ {
wg.Add(2)
// One goroutine converts from RepoCreds to Secret
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in repoCredsToSecret: %v", r)
}
}()
_ = backend.repoCredsToSecret(repoCreds, sharedSecret)
}()
// Another goroutine converts from Secret to RepoCreds
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in secretToRepoCred: %v", r)
}
}()
creds, err := backend.secretToRepoCred(sharedSecret)
if err != nil {
errChan <- fmt.Errorf("error in secretToRepoCred: %w", err)
return
}
// Verify data integrity
if creds.URL != repoCreds.URL || creds.Username != repoCreds.Username || creds.Password != repoCreds.Password {
errChan <- fmt.Errorf("data mismatch in conversion: expected %v, got %v", repoCreds, creds)
}
}()
}
wg.Wait()
close(errChan)
// Check for any errors that occurred during concurrent operations
for err := range errChan {
t.Errorf("concurrent operation error: %v", err)
}
// Verify final state
finalCreds, err := backend.secretToRepoCred(sharedSecret)
require.NoError(t, err)
assert.Equal(t, repoCreds.URL, finalCreds.URL)
assert.Equal(t, repoCreds.Username, finalCreds.Username)
assert.Equal(t, repoCreds.Password, finalCreds.Password)
}
func TestRaceConditionInRepositoryOperations(t *testing.T) {
// Create a single shared secret that will be accessed concurrently
sharedSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: RepoURLToSecretName(repoSecretPrefix, "git@github.com:argoproj/argo-cd.git", ""),
Namespace: testNamespace,
Labels: map[string]string{
common.LabelKeySecretType: common.LabelValueSecretTypeRepository,
},
},
Data: map[string][]byte{
"url": []byte("git@github.com:argoproj/argo-cd.git"),
"name": []byte("test-repo"),
"username": []byte("test-user"),
"password": []byte("test-pass"),
},
}
// Create test repository that we'll use for conversion
repo := &appsv1.Repository{
Name: "test-repo",
Repo: "git@github.com:argoproj/argo-cd.git",
Username: "test-user",
Password: "test-pass",
}
backend := &secretsRepositoryBackend{}
var wg sync.WaitGroup
concurrentOps := 50
errChan := make(chan error, concurrentOps*2) // Channel to collect errors
// Launch goroutines that perform concurrent operations
for i := 0; i < concurrentOps; i++ {
wg.Add(2)
// One goroutine converts from Repository to Secret
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in repositoryToSecret: %v", r)
}
}()
_ = backend.repositoryToSecret(repo, sharedSecret)
}()
// Another goroutine converts from Secret to Repository
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in secretToRepository: %v", r)
}
}()
repository, err := secretToRepository(sharedSecret)
if err != nil {
errChan <- fmt.Errorf("error in secretToRepository: %w", err)
return
}
// Verify data integrity
if repository.Name != repo.Name || repository.Repo != repo.Repo ||
repository.Username != repo.Username || repository.Password != repo.Password {
errChan <- fmt.Errorf("data mismatch in conversion: expected %v, got %v", repo, repository)
}
}()
}
wg.Wait()
close(errChan)
// Check for any errors that occurred during concurrent operations
for err := range errChan {
t.Errorf("concurrent operation error: %v", err)
}
// Verify final state
finalRepo, err := secretToRepository(sharedSecret)
require.NoError(t, err)
assert.Equal(t, repo.Name, finalRepo.Name)
assert.Equal(t, repo.Repo, finalRepo.Repo)
assert.Equal(t, repo.Username, finalRepo.Username)
assert.Equal(t, repo.Password, finalRepo.Password)
}

util/guard/guard.go Normal file

@@ -0,0 +1,20 @@
package guard
import (
"runtime/debug"
)
// Minimal logger contract; avoids depending on any specific logging package.
type Logger interface{ Errorf(string, ...any) }
// RecoverAndLog executes fn and recovers from any panic, logging a component-specific message.
func RecoverAndLog(fn func(), log Logger, msg string) {
defer func() {
if r := recover(); r != nil {
if log != nil {
log.Errorf("%s: %v %s", msg, r, debug.Stack())
}
}
}()
fn()
}

util/guard/guard_test.go Normal file

@@ -0,0 +1,92 @@
package guard
import (
"fmt"
"strings"
"sync"
"testing"
)
type nop struct{}
func (nop) Errorf(string, ...any) {}
// recorder is a thread-safe logger that captures formatted messages.
type recorder struct {
mu sync.Mutex
calls int
msgs []string
}
func (r *recorder) Errorf(format string, args ...any) {
r.mu.Lock()
defer r.mu.Unlock()
r.calls++
r.msgs = append(r.msgs, fmt.Sprintf(format, args...))
}
func TestRun_Recovers(_ *testing.T) {
RecoverAndLog(func() { panic("boom") }, nop{}, "msg") // fails if panic escapes
}
func TestRun_AllowsNextCall(t *testing.T) {
ran := false
RecoverAndLog(func() { panic("boom") }, nop{}, "msg")
RecoverAndLog(func() { ran = true }, nop{}, "msg")
if !ran {
t.Fatal("expected second callback to run")
}
}
func TestRun_LogsMessageAndStack(t *testing.T) {
r := &recorder{}
RecoverAndLog(func() { panic("boom") }, r, "msg")
if r.calls != 1 {
t.Fatalf("expected 1 log call, got %d", r.calls)
}
got := strings.Join(r.msgs, "\n")
if !strings.Contains(got, "msg") {
t.Errorf("expected log to contain message %q; got %q", "msg", got)
}
if !strings.Contains(got, "boom") {
t.Errorf("expected log to contain panic value %q; got %q", "boom", got)
}
// Heuristic check that a stack trace was included.
if !strings.Contains(got, "guard.go") && !strings.Contains(got, "runtime/panic.go") && !strings.Contains(got, "goroutine") {
t.Errorf("expected log to contain a stack trace; got %q", got)
}
}
func TestRun_NilLoggerDoesNotPanic(_ *testing.T) {
var l Logger // nil
RecoverAndLog(func() { panic("boom") }, l, "ignored")
}
func TestRun_NoPanicDoesNotLog(t *testing.T) {
r := &recorder{}
ran := false
RecoverAndLog(func() { ran = true }, r, "msg")
if !ran {
t.Fatal("expected fn to run")
}
if r.calls != 0 {
t.Fatalf("expected 0 log calls, got %d", r.calls)
}
}
func TestRun_ConcurrentPanicsLogged(t *testing.T) {
r := &recorder{}
const n = 10
var wg sync.WaitGroup
wg.Add(n)
for i := 0; i < n; i++ {
go func(i int) {
defer wg.Done()
RecoverAndLog(func() { panic(fmt.Sprintf("boom-%d", i)) }, r, "msg")
}(i)
}
wg.Wait()
if r.calls != n {
t.Fatalf("expected %d log calls, got %d", n, r.calls)
}
}

@@ -28,11 +28,12 @@ func GetFactorySettings(argocdService service.Service, secretName, configMapName
}
// GetFactorySettingsForCLI allows the initialization of argocdService to be deferred until it is used, when InitGetVars is called.
- func GetFactorySettingsForCLI(argocdService service.Service, secretName, configMapName string, selfServiceNotificationEnabled bool) api.Settings {
+ func GetFactorySettingsForCLI(serviceGetter func() service.Service, secretName, configMapName string, selfServiceNotificationEnabled bool) api.Settings {
return api.Settings{
SecretName: secretName,
ConfigMapName: configMapName,
InitGetVars: func(cfg *api.Config, configMap *corev1.ConfigMap, secret *corev1.Secret) (api.GetVars, error) {
argocdService := serviceGetter()
if argocdService == nil {
return nil, errors.New("argocdService is not initialized")
}

@@ -232,12 +232,28 @@ func (c *nativeOCIClient) Extract(ctx context.Context, digest string) (string, u
return "", nil, err
}
- if len(ociManifest.Layers) != 1 {
- return "", nil, fmt.Errorf("expected only a single oci layer, got %d", len(ociManifest.Layers))
- }
- if !slices.Contains(c.allowedMediaTypes, ociManifest.Layers[0].MediaType) {
- return "", nil, fmt.Errorf("oci layer media type %s is not in the list of allowed media types", ociManifest.Layers[0].MediaType)
- }
+ // Add a guard to defend against a ridiculous amount of layers. No idea what a good amount is, but normally we
+ // shouldn't expect more than 2-3 in most real world use cases.
+ if len(ociManifest.Layers) > 10 {
+ return "", nil, fmt.Errorf("expected no more than 10 oci layers, got %d", len(ociManifest.Layers))
+ }
+ contentLayers := 0
+ // Strictly speaking we only allow for a single content layer. There are images which contains extra layers, such
+ // as provenance/attestation layers. Pending a better story to do this natively, we will skip such layers for now.
+ for _, layer := range ociManifest.Layers {
+ if isContentLayer(layer.MediaType) {
+ if !slices.Contains(c.allowedMediaTypes, layer.MediaType) {
+ return "", nil, fmt.Errorf("oci layer media type %s is not in the list of allowed media types", layer.MediaType)
+ }
+ contentLayers++
+ }
+ }
+ if contentLayers != 1 {
+ return "", nil, fmt.Errorf("expected only a single oci content layer, got %d", contentLayers)
+ }
err = saveCompressedImageToPath(ctx, digest, c.repo, cachedPath)
@@ -405,7 +421,15 @@ func fileExists(filePath string) (bool, error) {
return true, nil
}
// TODO: A content layer could in theory be something that is not a compressed file, e.g a single yaml file or like.
// While IMO the utility in the context of Argo CD is limited, I'd at least like to make it known here and add an extensibility
// point for it in case we decide to loosen the current requirements.
func isContentLayer(mediaType string) bool {
return isCompressedLayer(mediaType)
}
func isCompressedLayer(mediaType string) bool {
// TODO: Is zstd something which is used in the wild? For now let's stick to these suffixes
return strings.HasSuffix(mediaType, "tar+gzip") || strings.HasSuffix(mediaType, "tar")
}
@@ -500,7 +524,7 @@ func isHelmOCI(mediaType string) bool {
// Push looks in all the layers of an OCI image. Once it finds a layer that is compressed, it extracts the layer to a tempDir
// and then renames the temp dir to the directory where the repo-server expects to find k8s manifests.
func (s *compressedLayerExtracterStore) Push(ctx context.Context, desc imagev1.Descriptor, content io.Reader) error {
- if isCompressedLayer(desc.MediaType) {
+ if isContentLayer(desc.MediaType) {
srcDir, err := files.CreateTempDir(os.TempDir())
if err != nil {
return err

@@ -122,7 +122,7 @@ func Test_nativeOCIClient_Extract(t *testing.T) {
expectedError: errors.New("cannot extract contents of oci image with revision sha256:1b6dfd71e2b35c2f35dffc39007c2276f3c0e235cbae4c39cba74bd406174e22: failed to perform \"Push\" on destination: could not decompress layer: error while iterating on tar reader: unexpected EOF"),
},
{
- name: "extraction fails due to multiple layers",
+ name: "extraction fails due to multiple content layers",
fields: fields{
allowedMediaTypes: []string{imagev1.MediaTypeImageLayerGzip},
},
@@ -135,7 +135,21 @@ func Test_nativeOCIClient_Extract(t *testing.T) {
manifestMaxExtractedSize: 1000,
disableManifestMaxExtractedSize: false,
},
- expectedError: errors.New("expected only a single oci layer, got 2"),
+ expectedError: errors.New("expected only a single oci content layer, got 2"),
},
{
name: "extraction with multiple layers, but just a single content layer",
fields: fields{
allowedMediaTypes: []string{imagev1.MediaTypeImageLayerGzip},
},
args: args{
digestFunc: func(store *memory.Store) string {
layerBlob := createGzippedTarWithContent(t, "some-path", "some content")
return generateManifest(t, store, layerConf{content.NewDescriptorFromBytes(imagev1.MediaTypeImageLayerGzip, layerBlob), layerBlob}, layerConf{content.NewDescriptorFromBytes("application/vnd.cncf.helm.chart.provenance.v1.prov", []byte{}), []byte{}})
},
manifestMaxExtractedSize: 1000,
disableManifestMaxExtractedSize: false,
},
},
{
name: "extraction fails due to invalid media type",

@@ -13,6 +13,9 @@ import (
"time"
bb "github.com/ktrysmt/go-bitbucket"
"k8s.io/apimachinery/pkg/labels"
alpha1 "github.com/argoproj/argo-cd/v3/pkg/client/listers/application/v1alpha1"
"github.com/Masterminds/semver/v3"
"github.com/go-playground/webhooks/v6/azuredevops"
@@ -23,7 +26,6 @@ import (
"github.com/go-playground/webhooks/v6/gogs"
gogsclient "github.com/gogits/go-gogs-client"
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
@@ -35,6 +37,7 @@ import (
"github.com/argoproj/argo-cd/v3/util/db"
"github.com/argoproj/argo-cd/v3/util/git"
"github.com/argoproj/argo-cd/v3/util/glob"
"github.com/argoproj/argo-cd/v3/util/guard"
"github.com/argoproj/argo-cd/v3/util/settings"
)
@@ -50,6 +53,8 @@ const usernameRegex = `[\w\.][\w\.-]{0,30}[\w\.\$-]?`
const payloadQueueSize = 50000
const panicMsgServer = "panic while processing api-server webhook event"
var _ settingsSource = &settings.SettingsManager{}
type ArgoCDWebhookHandler struct {
@@ -60,6 +65,7 @@ type ArgoCDWebhookHandler struct {
ns string
appNs []string
appClientset appclientset.Interface
appsLister alpha1.ApplicationLister
github *github.Webhook
gitlab *gitlab.Webhook
bitbucket *bitbucket.Webhook
@@ -72,7 +78,7 @@ type ArgoCDWebhookHandler struct {
maxWebhookPayloadSizeB int64
}
- func NewHandler(namespace string, applicationNamespaces []string, webhookParallelism int, appClientset appclientset.Interface, set *settings.ArgoCDSettings, settingsSrc settingsSource, repoCache *cache.Cache, serverCache *servercache.Cache, argoDB db.ArgoDB, maxWebhookPayloadSizeB int64) *ArgoCDWebhookHandler {
+ func NewHandler(namespace string, applicationNamespaces []string, webhookParallelism int, appClientset appclientset.Interface, appsLister alpha1.ApplicationLister, set *settings.ArgoCDSettings, settingsSrc settingsSource, repoCache *cache.Cache, serverCache *servercache.Cache, argoDB db.ArgoDB, maxWebhookPayloadSizeB int64) *ArgoCDWebhookHandler {
githubWebhook, err := github.New(github.Options.Secret(set.GetWebhookGitHubSecret()))
if err != nil {
log.Warnf("Unable to init the GitHub webhook")
@@ -115,6 +121,7 @@ func NewHandler(namespace string, applicationNamespaces []string, webhookParalle
db: argoDB,
queue: make(chan any, payloadQueueSize),
maxWebhookPayloadSizeB: maxWebhookPayloadSizeB,
appsLister: appsLister,
}
acdWebhook.startWorkerPool(webhookParallelism)
@@ -123,6 +130,7 @@ func NewHandler(namespace string, applicationNamespaces []string, webhookParalle
}
func (a *ArgoCDWebhookHandler) startWorkerPool(webhookParallelism int) {
compLog := log.WithField("component", "api-server-webhook")
for i := 0; i < webhookParallelism; i++ {
a.Add(1)
go func() {
@@ -132,7 +140,7 @@ func (a *ArgoCDWebhookHandler) startWorkerPool(webhookParallelism int) {
if !ok {
return
}
- a.HandleEvent(payload)
+ guard.RecoverAndLog(func() { a.HandleEvent(payload) }, compLog, panicMsgServer)
}
}()
}
@@ -150,10 +158,12 @@ func (a *ArgoCDWebhookHandler) affectedRevisionInfo(payloadIf any) (webURLs []st
case azuredevops.GitPushEvent:
// See: https://learn.microsoft.com/en-us/azure/devops/service-hooks/events?view=azure-devops#git.push
webURLs = append(webURLs, payload.Resource.Repository.RemoteURL)
- revision = ParseRevision(payload.Resource.RefUpdates[0].Name)
- change.shaAfter = ParseRevision(payload.Resource.RefUpdates[0].NewObjectID)
- change.shaBefore = ParseRevision(payload.Resource.RefUpdates[0].OldObjectID)
- touchedHead = payload.Resource.RefUpdates[0].Name == payload.Resource.Repository.DefaultBranch
+ if len(payload.Resource.RefUpdates) > 0 {
+ revision = ParseRevision(payload.Resource.RefUpdates[0].Name)
+ change.shaAfter = ParseRevision(payload.Resource.RefUpdates[0].NewObjectID)
+ change.shaBefore = ParseRevision(payload.Resource.RefUpdates[0].OldObjectID)
+ touchedHead = payload.Resource.RefUpdates[0].Name == payload.Resource.Repository.DefaultBranch
+ }
// unfortunately, Azure DevOps doesn't provide a list of changed files
case github.PushPayload:
// See: https://developer.github.com/v3/activity/events/types/#pushevent
@@ -251,13 +261,15 @@ func (a *ArgoCDWebhookHandler) affectedRevisionInfo(payloadIf any) (webURLs []st
// Webhook module does not parse the inner links
if payload.Repository.Links != nil {
- for _, l := range payload.Repository.Links["clone"].([]any) {
- link := l.(map[string]any)
- if link["name"] == "http" {
- webURLs = append(webURLs, link["href"].(string))
- }
- if link["name"] == "ssh" {
- webURLs = append(webURLs, link["href"].(string))
- }
- }
+ clone, ok := payload.Repository.Links["clone"].([]any)
+ if ok {
+ for _, l := range clone {
+ link := l.(map[string]any)
+ if link["name"] == "http" || link["name"] == "ssh" {
+ if href, ok := link["href"].(string); ok {
+ webURLs = append(webURLs, href)
+ }
+ }
+ }
+ }
}
@@ -276,11 +288,13 @@ func (a *ArgoCDWebhookHandler) affectedRevisionInfo(payloadIf any) (webURLs []st
 		// so we cannot update changedFiles for this type of payload
 	case gogsclient.PushPayload:
-		webURLs = append(webURLs, payload.Repo.HTMLURL)
 		revision = ParseRevision(payload.Ref)
 		change.shaAfter = ParseRevision(payload.After)
 		change.shaBefore = ParseRevision(payload.Before)
-		touchedHead = bool(payload.Repo.DefaultBranch == revision)
+		if payload.Repo != nil {
+			webURLs = append(webURLs, payload.Repo.HTMLURL)
+			touchedHead = payload.Repo.DefaultBranch == revision
+		}
 		for _, commit := range payload.Commits {
 			changedFiles = append(changedFiles, commit.Added...)
 			changedFiles = append(changedFiles, commit.Modified...)
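The Gogs case applies the same defensive idea (GHSA-wp4p-9pxh-cgx2), this time for a nil pointer: `Repo` is a pointer field that a crafted payload can leave unset, so every dereference now sits behind a nil check. A sketch with hypothetical stand-in types:

```go
package main

import "fmt"

// Stand-ins for the gogs push payload, where Repo is a pointer and may be
// nil in a crafted payload (GHSA-wp4p-9pxh-cgx2).
type repo struct{ HTMLURL, DefaultBranch string }

type pushPayload struct {
	Ref  string
	Repo *repo
}

// repoInfo derives the revision from Ref unconditionally, but reads the
// fields living behind the optional pointer only after a nil check.
func repoInfo(p pushPayload) (url string, touchedHead bool) {
	revision := p.Ref
	if p.Repo != nil {
		url = p.Repo.HTMLURL
		touchedHead = p.Repo.DefaultBranch == revision
	}
	return url, touchedHead
}

func main() {
	u, head := repoInfo(pushPayload{Ref: "main", Repo: nil})
	fmt.Println(u == "", head) // survives the nil repo: true false
}
```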
@@ -313,8 +327,8 @@ func (a *ArgoCDWebhookHandler) HandleEvent(payload any) {
 		nsFilter = ""
 	}
-	appIf := a.appClientset.ArgoprojV1alpha1().Applications(nsFilter)
-	apps, err := appIf.List(context.Background(), metav1.ListOptions{})
+	appIf := a.appsLister.Applications(nsFilter)
+	apps, err := appIf.List(labels.Everything())
 	if err != nil {
 		log.Warnf("Failed to list applications: %v", err)
 		return
@@ -339,9 +353,9 @@ func (a *ArgoCDWebhookHandler) HandleEvent(payload any) {
 	// Skip any application that is neither in the control plane's namespace
 	// nor in the list of enabled namespaces.
 	var filteredApps []v1alpha1.Application
-	for _, app := range apps.Items {
+	for _, app := range apps {
 		if app.Namespace == a.ns || glob.MatchStringInList(a.appNs, app.Namespace, glob.REGEXP) {
-			filteredApps = append(filteredApps, app)
+			filteredApps = append(filteredApps, *app)
 		}
 	}
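The two hunks above swap a clientset `List` (one Kubernetes API round-trip per webhook delivery) for an informer-backed lister, which serves the same query from a local cache and returns pointers into it. A stdlib-only sketch of that shape; the types are hypothetical stand-ins, since the real lister is generated client-go code:

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical stand-ins: in the real handler the cache is kept fresh by
// an informer, and List takes a labels.Selector rather than a namespace.
type app struct{ Name, Namespace string }

type appsLister struct{ cache []app }

// List filters the cached apps by namespace; "" means all namespaces,
// mirroring the nsFilter handling in HandleEvent. Returning pointers into
// the cache is why the caller dereferences with *app when copying.
func (l *appsLister) List(nsFilter string) []*app {
	var out []*app
	for i := range l.cache {
		if nsFilter == "" || l.cache[i].Namespace == nsFilter {
			out = append(out, &l.cache[i])
		}
	}
	return out
}

func main() {
	l := &appsLister{cache: []app{{"a", "argocd"}, {"b", "team-1"}}}
	var names []string
	for _, a := range l.List("argocd") {
		names = append(names, a.Name)
	}
	fmt.Println(strings.Join(names, ",")) // a
}
```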


@@ -15,8 +15,11 @@ import (
"text/template"
"time"
"github.com/go-playground/webhooks/v6/azuredevops"
bb "github.com/ktrysmt/go-bitbucket"
"github.com/stretchr/testify/mock"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"github.com/go-playground/webhooks/v6/bitbucket"
@@ -28,6 +31,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
kubetesting "k8s.io/client-go/testing"
+	argov1 "github.com/argoproj/argo-cd/v3/pkg/client/listers/application/v1alpha1"
servercache "github.com/argoproj/argo-cd/v3/server/cache"
"github.com/argoproj/argo-cd/v3/util/cache/appstate"
"github.com/argoproj/argo-cd/v3/util/db"
@@ -98,6 +102,31 @@ func NewMockHandlerForBitbucketCallback(reactor *reactorDef, applicationNamespac
return newMockHandler(reactor, applicationNamespaces, defaultMaxPayloadSize, &mockDB, &argoSettings, objects...)
}
+
+type fakeAppsLister struct {
+	argov1.ApplicationLister
+	argov1.ApplicationNamespaceLister
+	namespace string
+	clientset *appclientset.Clientset
+}
+
+func (f *fakeAppsLister) Applications(namespace string) argov1.ApplicationNamespaceLister {
+	return &fakeAppsLister{namespace: namespace, clientset: f.clientset}
+}
+
+func (f *fakeAppsLister) List(selector labels.Selector) ([]*v1alpha1.Application, error) {
+	res, err := f.clientset.ArgoprojV1alpha1().Applications(f.namespace).List(context.Background(), metav1.ListOptions{
+		LabelSelector: selector.String(),
+	})
+	if err != nil {
+		return nil, err
+	}
+	var apps []*v1alpha1.Application
+	for i := range res.Items {
+		apps = append(apps, &res.Items[i])
+	}
+	return apps, nil
+}
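`fakeAppsLister` satisfies the two generated lister interfaces by embedding them and overriding only the methods the tests exercise; any un-overridden method would panic on the nil embedded value, which is acceptable in a test fake that never calls it. The trick in miniature, with a hypothetical `Lister` interface:

```go
package main

import "fmt"

// A broad interface, standing in for something like ApplicationLister.
type Lister interface {
	List() ([]string, error)
	Get(name string) (string, error)
}

// fakeLister embeds the interface: it satisfies Lister without writing
// every method. Only List is overridden; calling the un-overridden Get
// would panic on the nil embedded value, fine in tests that never use it.
type fakeLister struct {
	Lister
	items []string
}

func (f *fakeLister) List() ([]string, error) { return f.items, nil }

func main() {
	var l Lister = &fakeLister{items: []string{"app-a", "app-b"}}
	got, _ := l.List() // dispatches to the override, not the nil embed
	fmt.Println(got)   // [app-a app-b]
}
```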
func newMockHandler(reactor *reactorDef, applicationNamespaces []string, maxPayloadSize int64, argoDB db.ArgoDB, argoSettings *settings.ArgoCDSettings, objects ...runtime.Object) *ArgoCDWebhookHandler {
appClientset := appclientset.NewSimpleClientset(objects...)
if reactor != nil {
@@ -109,8 +138,7 @@ func newMockHandler(reactor *reactorDef, applicationNamespaces []string, maxPayl
 		appClientset.AddReactor(reactor.verb, reactor.resource, reactor.reaction)
 	}
 	cacheClient := cacheutil.NewCache(cacheutil.NewInMemoryCache(1 * time.Hour))
-	return NewHandler("argocd", applicationNamespaces, 10, appClientset, argoSettings, &fakeSettingsSrc{}, cache.NewCache(
+	return NewHandler("argocd", applicationNamespaces, 10, appClientset, &fakeAppsLister{clientset: appClientset}, argoSettings, &fakeSettingsSrc{}, cache.NewCache(
 		cacheClient,
 		1*time.Minute,
 		1*time.Minute,
@@ -702,6 +730,26 @@ func Test_affectedRevisionInfo_appRevisionHasChanged(t *testing.T) {
{true, "refs/tags/no-slashes", bitbucketPushPayload("no-slashes"), "bitbucket push branch or tag name without slashes, targetRevision tag prefixed"},
{true, "refs/tags/no-slashes", bitbucketRefChangedPayload("no-slashes"), "bitbucket ref changed branch or tag name without slashes, targetRevision tag prefixed"},
{true, "refs/tags/no-slashes", gogsPushPayload("no-slashes"), "gogs push branch or tag name without slashes, targetRevision tag prefixed"},
+		// Tests fix for https://github.com/argoproj/argo-cd/security/advisories/GHSA-wp4p-9pxh-cgx2
+		{true, "test", gogsclient.PushPayload{Ref: "test", Repo: nil}, "gogs push branch with nil repo in payload"},
+		// Testing fix for https://github.com/argoproj/argo-cd/security/advisories/GHSA-gpx4-37g2-c8pv
+		{false, "test", azuredevops.GitPushEvent{Resource: azuredevops.Resource{RefUpdates: []azuredevops.RefUpdate{}}}, "Azure DevOps malformed push event with no ref updates"},
+		{true, "some-ref", bitbucketserver.RepositoryReferenceChangedPayload{
+			Changes: []bitbucketserver.RepositoryChange{
+				{Reference: bitbucketserver.RepositoryReference{ID: "refs/heads/some-ref"}},
+			},
+			Repository: bitbucketserver.Repository{Links: map[string]any{"clone": "boom"}}, // The string "boom" here is what previously caused a panic.
+		}, "bitbucket push branch or tag name, malformed link"}, // https://github.com/argoproj/argo-cd/security/advisories/GHSA-f9gq-prrc-hrhc
+		{true, "some-ref", bitbucketserver.RepositoryReferenceChangedPayload{
+			Changes: []bitbucketserver.RepositoryChange{
+				{Reference: bitbucketserver.RepositoryReference{ID: "refs/heads/some-ref"}},
+			},
+			Repository: bitbucketserver.Repository{Links: map[string]any{"clone": []any{map[string]any{"name": "http", "href": []string{}}}}}, // The href as an empty array is what previously caused a panic.
+		}, "bitbucket push branch or tag name, malformed href"},
}
for _, testCase := range tests {
testCopy := testCase