Compare commits

...

46 Commits

Author | SHA1 | Message | Date
github-actions[bot]
7cedae735c Bump version to 3.0.23 on release-3.0 branch (#26118)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2026-01-22 14:33:36 -05:00
Codey Jenkins
5f9dc64088 fix: cherry pick #25516 to release-3.0 (#26107)
Signed-off-by: pbhatnagar-oss <pbhatifiwork@gmail.com>
Signed-off-by: Codey Jenkins <FourFifthsCode@users.noreply.github.com>
Co-authored-by: pbhatnagar-oss <pbhatifiwork@gmail.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-01-22 12:09:14 -05:00
dudinea
9a495627ce fix: invalid error message on health check failure (#26040) (cherry pick #26039 for 3.0) (#26073)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
2026-01-20 18:51:40 +02:00
argo-cd-cherry-pick-bot[bot]
6c9b1cb046 fix(hydrator): empty links for failed operation (#25025) (cherry-pick #26014 for 3.0) (#26015)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-01-15 16:26:10 -05:00
github-actions[bot]
1eaf112dbb Bump version to 3.0.22 on release-3.0 branch (#25965)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-01-13 20:38:15 +02:00
Atif Ali
f9212b0961 chore(cherry-pick-3.0): bump expr version from v1.16.9 to v1.17.7 (#25908)
Signed-off-by: Atif Ali <atali@redhat.com>
2026-01-09 11:08:09 -05:00
argo-cd-cherry-pick-bot[bot]
e5b85eff35 fix: Only show please update resource specification message when spec… (cherry-pick #25066 for 3.0) (#25893)
Signed-off-by: Josh Soref <jsoref@gmail.com>
Co-authored-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2026-01-07 10:10:52 -05:00
github-actions[bot]
14cd1c1412 Bump version to 3.0.21 on release-3.0 branch (#25834)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-01-01 11:15:34 +02:00
Anand Francis Joseph
73b21ff4f2 chore(deps): bump golang.org/x/crypto from 0.38.0 to 0.46.0 (#25821)
Signed-off-by: anandf <anjoseph@redhat.com>
2026-01-01 10:06:06 +02:00
argo-cd-cherry-pick-bot[bot]
8910d47425 fix(hydrator): appset should preserve annotation when hydration is requested (cherry-pick #25644 for 3.0) (#25652)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-12-15 00:59:54 -10:00
Anand Francis Joseph
38b108e255 chore(deps): bump gitops-engine to include fix for #24242 (#25548)
Signed-off-by: anandf <anjoseph@redhat.com>
2025-12-08 10:14:18 -05:00
argo-cd-cherry-pick-bot[bot]
a24b8ec7d2 docs: Document usage of ?. in notifications triggers and fix examples (#25352) (cherry-pick #25418 for 3.0) (#25423)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
Co-authored-by: dudinea <eugene.doudine@octopus.com>
2025-11-26 11:33:04 +02:00
Regina Voloshin
97dc75ee80 docs: Improve switch to annotation tracking docs, clarifying that a new Git commit may be needed to avoid orphan resources - (cherry-pick #25309 for 3.0) (#25336)
Signed-off-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-11-19 11:46:55 +01:00
argo-cd-cherry-pick-bot[bot]
a9a7868dc4 fix: Allow the ISVC to be healthy when the Stopped Condition is False (cherry-pick #25312 for 3.0) (#25316)
Signed-off-by: Hannah DeFazio <h2defazio@gmail.com>
Co-authored-by: Hannah DeFazio <h2defazio@gmail.com>
2025-11-17 23:21:27 -10:00
argo-cd-cherry-pick-bot[bot]
b53d2a2443 fix: handle annotated git tags correctly in repo server cache (cherry-pick #21771 for 3.0) (#25241)
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Atif Ali <56743004+aali309@users.noreply.github.com>
2025-11-10 03:51:55 -10:00
jwinters01
ba78f8cdeb fix:(ui) don't render ApplicationSelector unless the panel is showing (Cherry Pick release-3.0) (#25217)
Signed-off-by: Jonathan Winters <wintersjonathan0@gmail.com>
2025-11-07 13:38:00 -05:00
argo-cd-cherry-pick-bot[bot]
ad1eacbe93 fix(ui): add null-safe handling for assignedWindows in status panel (cherry-pick #25128 for 3.0) (#25182)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
Co-authored-by: Jaewoo Choi <jaewoo45@gmail.com>
2025-11-05 01:13:03 -10:00
github-actions[bot]
653f7adb97 Bump version to 3.0.20 on release-3.0 branch (#24999)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: alexmt <426437+alexmt@users.noreply.github.com>
2025-10-17 14:32:53 -07:00
argo-cd-cherry-pick-bot[bot]
bdab094f78 fix: make webhook payload handlers recover from panics (cherry-pick #24862 for 3.0) (#24913)
Signed-off-by: Jakub Ciolek <jakub@ciolek.dev>
Co-authored-by: Jakub Ciolek <66125090+jake-ciolek@users.noreply.github.com>
2025-10-14 14:15:05 -04:00
Rianov Viacheslav
3e09844ca5 fix(server): ensure resource health status is inferred on application retrieval (#24832) (cherry-pick #24851 for 3.0) (#24945)
Signed-off-by: Viacheslav Rianov <rianovviacheslav@gmail.com>
2025-10-14 12:29:48 -04:00
Carlos R.F.
a7a88fd43d chore(deps): bump redis from 7.2.7 to 7.2.11 to address vuln (release-3.0) (#24890)
Signed-off-by: Carlos Rodriguez-Fernandez <carlosrodrifernandez@gmail.com>
2025-10-09 17:11:59 -04:00
github-actions[bot]
132be88e67 Bump version to 3.0.19 on release-3.0 branch (#24795)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 11:31:43 -04:00
Ville Vesilehto
2aaace870d Merge commit from fork
Fixed a race condition in repository credentials handling by
implementing deep copying of secrets before modification.
This prevents concurrent map read/write panics when multiple
goroutines access the same secret.

The fix ensures thread-safe operations by always operating on
copies rather than shared objects.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-09-30 10:45:59 -04:00
Michael Crenshaw
f60a9441a7 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:45:32 -04:00
Michael Crenshaw
93ab7e4519 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:07:24 -04:00
Michael Crenshaw
3070736476 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 09:45:21 -04:00
argo-cd-cherry-pick-bot[bot]
a731ea68ff fix: allow for backwards compatibility of durations defined in days (cherry-pick #24769 for 3.0) (#24770)
Signed-off-by: lplazas <felipe.plazas10@gmail.com>
Co-authored-by: lplazas <luis.plazas@workato.com>
2025-09-29 15:23:19 +05:30
github-actions[bot]
7a7cf076c2 Bump version to 3.0.18 on release-3.0 branch (#24703)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: alexmt <426437+alexmt@users.noreply.github.com>
2025-09-22 15:18:02 -07:00
Alexander Matyushentsev
36ce380906 fix: limit number of resources in appset status (#24690) (#24695)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-22 14:57:49 -07:00
Alexandre Gaudreault
531d96edef ci(release): only set latest release in github when latest (#24525) (#24687)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-22 11:47:55 -04:00
argo-cd-cherry-pick-bot[bot]
dcfb4db550 fix(server): validate new project on update (#23970) (cherry-pick #23973 for 3.0) (#24663)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-19 11:20:56 -04:00
github-actions[bot]
d1dbf20c99 Bump version to 3.0.17 on release-3.0 branch (#24636)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: blakepettersson <1227954+blakepettersson@users.noreply.github.com>
2025-09-18 04:19:24 -10:00
Alexander Matyushentsev
97a87308ab fix: use informer in webhook handler to reduce memory usage (#24622) (#24627)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-18 08:13:12 +02:00
argo-cd-cherry-pick-bot[bot]
a85fa0947b fix: correct post-delete finalizer removal when cluster not found (cherry-pick #24415 for 3.0) (#24589)
Signed-off-by: Pavel Aborilov <aborilov@gmail.com>
Co-authored-by: Pavel <aborilov@gmail.com>
2025-09-16 16:14:27 -07:00
Fox Piacenti
b729cff932 docs: Update URL for HA manifests to stable. (#24455)
Signed-off-by: Fox Danger Piacenti <fox@opencraft.com>
2025-09-09 12:37:11 +03:00
Nitish Kumar
2a0282d668 fix(3.0): change the appset namespace to server namespace when generating appset (#24479)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2025-09-09 10:45:53 +03:00
OpenGuidou
0af18331eb fix(cherry-pick-3.0): Do not block project update when a cluster referenced in an App doesn't exist (#24449)
Signed-off-by: OpenGuidou <guillaume.doussin@gmail.com>
2025-09-08 11:38:07 -04:00
github-actions[bot]
2798b54c96 Bump version to 3.0.16 on release-3.0 branch (#24426)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: agaudreault <47184027+agaudreault@users.noreply.github.com>
2025-09-05 16:30:36 -04:00
Alexandre Gaudreault
998260452c chore(deps): bump gitops-engine (#24419)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-05 16:23:48 -04:00
Alexandre Gaudreault
50befe995c fix(test): race condition in kubectl metrics (#23382) (#23383) (#24422)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-05 15:28:15 -04:00
github-actions[bot]
1a55610f80 Bump version to 3.0.15 on release-3.0 branch (#24402)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-04 13:31:09 -04:00
github-actions[bot]
5a4ef23d96 Bump version to 3.0.14 on release-3.0 branch (#24396)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-04 11:48:39 -04:00
Michael Crenshaw
5ebdd714d0 fix(security): repository.GetDetailedProject exposes repo secrets (#24390)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-04 11:32:52 -04:00
Adrian Berger
ef5b8ca167 fix(cherry-pick-3.0): custom resource health for flux helm repository of type oci (#24340)
Signed-off-by: Adrian Berger <adrian.berger@bedag.ch>
2025-09-02 15:19:13 -04:00
Nitish Kumar
775edda033 chore(cherry-pick-3.0): replace bitnami images (#24101) (#24287)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2025-08-27 14:03:15 +02:00
Anand Francis Joseph
f4d409cf9b fix(appset): prevent idle connection buildup by cloning http.DefaultTransport in Bitbucket SCM/PR generator (#24266)
Signed-off-by: portly-halicore-76 <170707699+portly-halicore-76@users.noreply.github.com>
Signed-off-by: anandf <anjoseph@redhat.com>
Co-authored-by: portly-halicore-76 <170707699+portly-halicore-76@users.noreply.github.com>
2025-08-26 09:53:34 -04:00
97 changed files with 2769 additions and 862 deletions

View File

@@ -487,7 +487,7 @@ jobs:
run: |
docker pull ghcr.io/dexidp/dex:v2.41.1
docker pull argoproj/argo-cd-ci-builder:v1.0.0
docker pull redis:7.2.7-alpine
docker pull redis:7.2.11-alpine
- name: Create target directory for binaries in the build-process
run: |
mkdir -p dist

View File

@@ -32,6 +32,42 @@ jobs:
quay_username: ${{ secrets.RELEASE_QUAY_USERNAME }}
quay_password: ${{ secrets.RELEASE_QUAY_TOKEN }}
setup-variables:
name: Setup Release Variables
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
outputs:
is_pre_release: ${{ steps.var.outputs.is_pre_release }}
is_latest_release: ${{ steps.var.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup variables
id: var
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_RELEASE_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | grep -v '-' | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo ${{ github.ref_name }} | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
IS_LATEST=false
# Ensure latest release tag matches github.ref_name
if [[ $LATEST_RELEASE_TAG == ${{ github.ref_name }} ]];then
IS_LATEST=true
fi
echo "is_pre_release=$PRE_RELEASE" >> $GITHUB_OUTPUT
echo "is_latest_release=$IS_LATEST" >> $GITHUB_OUTPUT
argocd-image-provenance:
needs: [argocd-image]
permissions:
@@ -50,15 +86,17 @@ jobs:
goreleaser:
needs:
- setup-variables
- argocd-image
- argocd-image-provenance
permissions:
contents: write # used for uploading assets
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
GORELEASER_MAKE_LATEST: ${{ needs.setup-variables.outputs.is_latest_release }}
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -141,7 +179,7 @@ jobs:
permissions:
contents: write # Needed for release uploads
outputs:
hashes: ${{ steps.sbom-hash.outputs.hashes}}
hashes: ${{ steps.sbom-hash.outputs.hashes }}
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
steps:
@@ -219,6 +257,7 @@ jobs:
post-release:
needs:
- setup-variables
- argocd-image
- goreleaser
- generate-sbom
@@ -227,6 +266,8 @@ jobs:
pull-requests: write # Needed to create PR for VERSION update.
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -240,27 +281,6 @@ jobs:
git config --global user.email 'ci@argoproj.com'
git config --global user.name 'CI'
- name: Check if tag is the latest version and not a pre-release
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo $LATEST_TAG | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
# Ensure latest tag matches github.ref_name & not a pre-release
if [[ $LATEST_TAG == ${{ github.ref_name }} ]] && [[ $PRE_RELEASE != 'true' ]];then
echo "TAG_STABLE=true" >> $GITHUB_ENV
else
echo "TAG_STABLE=false" >> $GITHUB_ENV
fi
- name: Update stable tag to latest version
run: |
git tag -f stable ${{ github.ref_name }}

View File

@@ -46,16 +46,17 @@ builds:
archives:
- id: argocd-archive
builds:
- argocd-cli
- argocd-cli
name_template: |-
{{ .ProjectName }}-{{ .Os }}-{{ .Arch }}
format: binary
formats: [binary]
checksum:
name_template: 'cli_checksums.txt'
algorithm: sha256
release:
make_latest: '{{ .Env.GORELEASER_MAKE_LATEST }}'
prerelease: auto
draft: false
header: |
@@ -80,23 +81,21 @@ release:
All Argo CD container images are signed by cosign. A Provenance is generated for container images and CLI binaries which meet the SLSA Level 3 specifications. See the [documentation](https://argo-cd.readthedocs.io/en/stable/operator-manual/signed-release-assets) on how to verify.
## Release Notes Blog Post
For a detailed breakdown of the key changes and improvements in this release, check out the [official blog post](https://blog.argoproj.io/argo-cd-v2-14-release-candidate-57a664791e2a)
For a detailed breakdown of the key changes and improvements in this release, check out the [official blog post](https://blog.argoproj.io/argo-cd-v2-14-release-candidate-57a664791e2a)
## Upgrading
If upgrading from a different minor version, be sure to read the [upgrading](https://argo-cd.readthedocs.io/en/stable/operator-manual/upgrading/overview/) documentation.
footer: |
**Full Changelog**: https://github.com/argoproj/argo-cd/compare/{{ .PreviousTag }}...{{ .Tag }}
<a href="https://argoproj.github.io/cd/"><img src="https://raw.githubusercontent.com/argoproj/argo-site/master/content/pages/cd/gitops-cd.png" width="25%" ></a>
snapshot: #### To be removed for PR
name_template: "2.6.0"
name_template: '2.6.0'
changelog:
use:
github
use: github
sort: asc
abbrev: 0
groups: # Regex use RE2 syntax as defined here: https://github.com/google/re2/wiki/Syntax.
@@ -119,7 +118,4 @@ changelog:
- '^test:'
- '^.*?Bump(\([[:word:]]+\))?.+$'
- '^.*?\[Bot\](\([[:word:]]+\))?.+$'
# yaml-language-server: $schema=https://goreleaser.com/static/schema.json

View File

@@ -1 +1 @@
3.0.13
3.0.23

View File

@@ -71,6 +71,7 @@ const (
var defaultPreservedAnnotations = []string{
NotifiedAnnotationKey,
argov1alpha1.AnnotationKeyRefresh,
argov1alpha1.AnnotationKeyHydrate,
}
// ApplicationSetReconciler reconciles a ApplicationSet object
@@ -91,6 +92,7 @@ type ApplicationSetReconciler struct {
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -1303,6 +1305,11 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
sort.Slice(statuses, func(i, j int) bool {
return statuses[i].Name < statuses[j].Name
})
if r.MaxResourcesStatusCount > 0 && len(statuses) > r.MaxResourcesStatusCount {
logCtx.Warnf("Truncating ApplicationSet %s resource status from %d to max allowed %d entries", appset.Name, len(statuses), r.MaxResourcesStatusCount)
statuses = statuses[:r.MaxResourcesStatusCount]
}
appset.Status.Resources = statuses
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {

View File

@@ -588,6 +588,72 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
},
},
},
{
name: "Ensure that hydrate annotation is preserved from an existing app",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSetSpec{
Template: v1alpha1.ApplicationSetTemplate{
Spec: v1alpha1.ApplicationSpec{
Project: "project",
},
},
},
},
existingApps: []v1alpha1.Application{
{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "2",
Annotations: map[string]string{
"annot-key": "annot-value",
v1alpha1.AnnotationKeyHydrate: string(v1alpha1.RefreshTypeNormal),
},
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
},
},
},
desiredApps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
},
},
},
expected: []v1alpha1.Application{
{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "3",
Annotations: map[string]string{
v1alpha1.AnnotationKeyHydrate: string(v1alpha1.RefreshTypeNormal),
},
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
},
},
},
},
{
name: "Ensure that configured preserved annotations are preserved from an existing app",
appSet: v1alpha1.ApplicationSet{
@@ -6116,10 +6182,11 @@ func TestUpdateResourceStatus(t *testing.T) {
require.NoError(t, err)
for _, cc := range []struct {
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
maxResourcesStatusCount int
}{
{
name: "handles an empty application list",
@@ -6290,6 +6357,73 @@ func TestUpdateResourceStatus(t *testing.T) {
apps: []v1alpha1.Application{},
expectedResources: nil,
},
{
name: "truncates resources status list to",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "argocd",
},
Status: v1alpha1.ApplicationSetStatus{
Resources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
{
Name: "app2",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
},
},
},
apps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.HealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Name: "app2",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.HealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
},
expectedResources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeSynced,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
maxResourcesStatusCount: 1,
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
@@ -6300,13 +6434,14 @@ func TestUpdateResourceStatus(t *testing.T) {
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: argodb,
KubeClientset: kubeclientset,
Metrics: metrics,
MaxResourcesStatusCount: cc.maxResourcesStatusCount,
}
err := r.updateResourcesStatus(t.Context(), log.NewEntry(log.StandardLogger()), &cc.appSet, cc.apps)

View File

@@ -28,10 +28,11 @@ type GitGenerator struct {
namespace string
}
func NewGitGenerator(repos services.Repos, namespace string) Generator {
// NewGitGenerator creates a new instance of Git Generator
func NewGitGenerator(repos services.Repos, controllerNamespace string) Generator {
g := &GitGenerator{
repos: repos,
namespace: namespace,
namespace: controllerNamespace,
}
return g
@@ -70,11 +71,11 @@ func (g *GitGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.Applic
if !strings.Contains(appSet.Spec.Template.Spec.Project, "{{") {
project := appSet.Spec.Template.Spec.Project
appProject := &argoprojiov1alpha1.AppProject{}
namespace := g.namespace
if namespace == "" {
namespace = appSet.Namespace
controllerNamespace := g.namespace
if controllerNamespace == "" {
controllerNamespace = appSet.Namespace
}
if err := client.Get(context.TODO(), types.NamespacedName{Name: project, Namespace: namespace}, appProject); err != nil {
if err := client.Get(context.TODO(), types.NamespacedName{Name: project, Namespace: controllerNamespace}, appProject); err != nil {
return nil, fmt.Errorf("error getting project %s: %w", project, err)
}
// we need to verify the signature on the Git revision if GPG is enabled

View File

@@ -10,15 +10,15 @@ import (
"github.com/argoproj/argo-cd/v3/applicationset/services"
)
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, namespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, controllerNamespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
terminalGenerators := map[string]Generator{
"List": NewListGenerator(),
"Clusters": NewClusterGenerator(ctx, c, k8sClient, namespace),
"Git": NewGitGenerator(argoCDService, namespace),
"Clusters": NewClusterGenerator(ctx, c, k8sClient, controllerNamespace),
"Git": NewGitGenerator(argoCDService, controllerNamespace),
"SCMProvider": NewSCMProviderGenerator(c, scmConfig),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, namespace),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, controllerNamespace),
"PullRequest": NewPullRequestGenerator(c, scmConfig),
"Plugin": NewPluginGenerator(ctx, c, k8sClient, namespace),
"Plugin": NewPluginGenerator(ctx, c, k8sClient, controllerNamespace),
}
nestedGenerators := map[string]Generator{

View File

@@ -58,8 +58,7 @@ func NewApplicationsetMetrics(appsetLister applisters.ApplicationSetLister, apps
metrics.Registry.MustRegister(reconcileHistogram)
metrics.Registry.MustRegister(appsetCollector)
kubectlMetricsServer := kubectl.NewKubectlMetrics()
kubectlMetricsServer.RegisterWithClientGo()
kubectl.RegisterWithClientGo()
kubectl.RegisterWithPrometheus(metrics.Registry)
return ApplicationsetMetrics{

View File

@@ -3,12 +3,11 @@ package pull_request
import (
"context"
"fmt"
"net/http"
bitbucketv1 "github.com/gfleury/go-bitbucket-v1"
log "github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/v3/applicationset/utils"
"github.com/argoproj/argo-cd/v3/applicationset/services"
)
type BitbucketService struct {
@@ -49,15 +48,10 @@ func NewBitbucketServiceNoAuth(ctx context.Context, url, projectKey, repositoryS
}
func newBitbucketService(ctx context.Context, bitbucketConfig *bitbucketv1.Configuration, projectKey, repositorySlug string, scmRootCAPath string, insecure bool, caCerts []byte) (PullRequestService, error) {
bitbucketConfig.BasePath = utils.NormalizeBitbucketBasePath(bitbucketConfig.BasePath)
tlsConfig := utils.GetTlsConfig(scmRootCAPath, insecure, caCerts)
bitbucketConfig.HTTPClient = &http.Client{Transport: &http.Transport{
TLSClientConfig: tlsConfig,
}}
bitbucketClient := bitbucketv1.NewAPIClient(ctx, bitbucketConfig)
bbClient := services.SetupBitbucketClient(ctx, bitbucketConfig, scmRootCAPath, insecure, caCerts)
return &BitbucketService{
client: bitbucketClient,
client: bbClient,
projectKey: projectKey,
repositorySlug: repositorySlug,
}, nil

View File

@@ -10,7 +10,7 @@ import (
bitbucketv1 "github.com/gfleury/go-bitbucket-v1"
log "github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/v3/applicationset/utils"
"github.com/argoproj/argo-cd/v3/applicationset/services"
)
type BitbucketServerProvider struct {
@@ -49,15 +49,10 @@ func NewBitbucketServerProviderNoAuth(ctx context.Context, url, projectKey strin
}
func newBitbucketServerProvider(ctx context.Context, bitbucketConfig *bitbucketv1.Configuration, projectKey string, allBranches bool, scmRootCAPath string, insecure bool, caCerts []byte) (*BitbucketServerProvider, error) {
bitbucketConfig.BasePath = utils.NormalizeBitbucketBasePath(bitbucketConfig.BasePath)
tlsConfig := utils.GetTlsConfig(scmRootCAPath, insecure, caCerts)
bitbucketConfig.HTTPClient = &http.Client{Transport: &http.Transport{
TLSClientConfig: tlsConfig,
}}
bitbucketClient := bitbucketv1.NewAPIClient(ctx, bitbucketConfig)
bbClient := services.SetupBitbucketClient(ctx, bitbucketConfig, scmRootCAPath, insecure, caCerts)
return &BitbucketServerProvider{
client: bitbucketClient,
client: bbClient,
projectKey: projectKey,
allBranches: allBranches,
}, nil

View File

@@ -0,0 +1,22 @@
package services
import (
"context"
"net/http"
bitbucketv1 "github.com/gfleury/go-bitbucket-v1"
"github.com/argoproj/argo-cd/v3/applicationset/utils"
)
// SetupBitbucketClient configures and creates a Bitbucket API client with TLS settings
func SetupBitbucketClient(ctx context.Context, config *bitbucketv1.Configuration, scmRootCAPath string, insecure bool, caCerts []byte) *bitbucketv1.APIClient {
config.BasePath = utils.NormalizeBitbucketBasePath(config.BasePath)
tlsConfig := utils.GetTlsConfig(scmRootCAPath, insecure, caCerts)
transport := http.DefaultTransport.(*http.Transport).Clone()
transport.TLSClientConfig = tlsConfig
config.HTTPClient = &http.Client{Transport: transport}
return bitbucketv1.NewAPIClient(ctx, config)
}

View File

@@ -0,0 +1,36 @@
package services
import (
"crypto/tls"
"net/http"
"testing"
"time"
bitbucketv1 "github.com/gfleury/go-bitbucket-v1"
"github.com/stretchr/testify/require"
)
func TestSetupBitbucketClient(t *testing.T) {
ctx := t.Context()
cfg := &bitbucketv1.Configuration{}
// Act
client := SetupBitbucketClient(ctx, cfg, "", false, nil)
// Assert
require.NotNil(t, client, "expected client to be created")
require.NotNil(t, cfg.HTTPClient, "expected HTTPClient to be set")
// The transport should be a clone of DefaultTransport
tr, ok := cfg.HTTPClient.Transport.(*http.Transport)
require.True(t, ok, "expected HTTPClient.Transport to be *http.Transport")
require.NotSame(t, http.DefaultTransport, tr, "transport should be a clone, not the global DefaultTransport")
// Ensure TLSClientConfig is set
require.IsType(t, &tls.Config{}, tr.TLSClientConfig)
// Defaults from http.DefaultTransport.Clone() should be preserved
require.Greater(t, tr.IdleConnTimeout, time.Duration(0), "IdleConnTimeout should be non-zero")
require.Positive(t, tr.MaxIdleConns, "MaxIdleConns should be non-zero")
require.Greater(t, tr.TLSHandshakeTimeout, time.Duration(0), "TLSHandshakeTimeout should be non-zero")
}

View File

@@ -25,10 +25,14 @@ import (
"github.com/go-playground/webhooks/v6/github"
"github.com/go-playground/webhooks/v6/gitlab"
log "github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/v3/util/guard"
)
const payloadQueueSize = 50000
const panicMsgAppSet = "panic while processing applicationset-controller webhook event"
type WebhookHandler struct {
sync.WaitGroup // for testing
namespace string
@@ -103,6 +107,7 @@ func NewWebhookHandler(namespace string, webhookParallelism int, argocdSettingsM
}
func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
compLog := log.WithField("component", "applicationset-webhook")
for i := 0; i < webhookParallelism; i++ {
h.Add(1)
go func() {
@@ -112,7 +117,7 @@ func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
if !ok {
return
}
h.HandleEvent(payload)
guard.RecoverAndLog(func() { h.HandleEvent(payload) }, compLog, panicMsgAppSet)
}
}()
}

View File

@@ -74,6 +74,7 @@ func NewCommand() *cobra.Command {
enableScmProviders bool
webhookParallelism int
tokenRefStrictMode bool
maxResourcesStatusCount int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@@ -225,6 +226,7 @@ func NewCommand() *cobra.Command {
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -268,6 +270,7 @@ func NewCommand() *cobra.Command {
command.Flags().StringSliceVar(&globalPreservedLabels, "preserved-labels", env.StringsFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_LABELS", []string{}, ","), "Sets global preserved field values for labels")
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&metricsAplicationsetLabels, "metrics-applicationset-labels", []string{}, "List of Application labels that will be added to the argocd_applicationset_labels metric")
command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 0, 0, math.MaxInt), "Max number of resources stored in appset status.")
return &command
}


@@ -1201,7 +1201,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
if err != nil {
logCtx.Warnf("Unable to get destination cluster: %v", err)
app.UnSetCascadedDeletion()
app.UnSetPostDeleteFinalizer()
app.UnSetPostDeleteFinalizerAll()
if err := ctrl.updateFinalizers(app); err != nil {
return err
}


@@ -199,8 +199,7 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, appFil
registry.MustRegister(resourceEventsProcessingHistogram)
registry.MustRegister(resourceEventsNumberGauge)
kubectlMetricsServer := kubectl.NewKubectlMetrics()
kubectlMetricsServer.RegisterWithClientGo()
kubectl.RegisterWithClientGo()
kubectl.RegisterWithPrometheus(registry)
metricsServer := &MetricsServer{


@@ -247,7 +247,7 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
Revision: revision,
SyncedRevision: syncedRevision,
NoRevisionCache: noRevisionCache,
Paths: path.GetAppRefreshPaths(app),
Paths: path.GetSourceRefreshPaths(app, source),
AppLabelKey: appLabelKey,
AppName: app.InstanceName(m.namespace),
Namespace: appNamespace,


@@ -284,6 +284,8 @@ data:
applicationsetcontroller.global.preserved.annotations: "acme.com/annotation1,acme.com/annotation2"
# Comma delimited list of labels to preserve in generated applications
applicationsetcontroller.global.preserved.labels: "acme.com/label1,acme.com/label2"
# The maximum number of resources stored in the status of an ApplicationSet. This is a safeguard to prevent the status from growing too large.
applicationsetcontroller.status.max.resources.count: "5000"
## Argo CD Notifications Controller Properties
# Set the logging level. One of: debug|info|warn|error (default "info")


@@ -2,7 +2,7 @@
Argo CD is largely stateless. All data is persisted as Kubernetes objects, which in turn are stored in Kubernetes' etcd. Redis is only used as a throw-away cache and can be lost. When lost, it will be rebuilt without loss of service.
A set of [HA manifests](https://github.com/argoproj/argo-cd/tree/master/manifests/ha) are provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.
A set of [HA manifests](https://github.com/argoproj/argo-cd/tree/stable/manifests/ha) are provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.
> **NOTE:** The HA installation will require at least three different nodes due to pod anti-affinity rules in the
> specs. Additionally, IPv6 only clusters are not supported.


@@ -35,14 +35,26 @@ metadata:
name: argocd-notifications-cm
data:
trigger.sync-operation-change: |
- when: app.status.operationState.phase in ['Succeeded']
- when: app.status?.operationState.phase in ['Succeeded']
send: [github-commit-status]
- when: app.status.operationState.phase in ['Running']
- when: app.status?.operationState.phase in ['Running']
send: [github-commit-status]
- when: app.status.operationState.phase in ['Error', 'Failed']
- when: app.status?.operationState.phase in ['Error', 'Failed']
send: [app-sync-failed, github-commit-status]
```
## Accessing Optional Manifest Sections and Fields
Note that in the trigger example above, the `?.` (optional chaining) operator is used to access the Application's
`status.operationState` section. This section is optional; it is not present when an operation has been initiated but has not yet
been started by the Application Controller.
If the `?.` operator were not used, `status.operationState` would resolve to `nil` and the evaluation of the
`app.status.operationState.phase` expression would fail. The `app.status?.operationState.phase` expression is equivalent to
`app.status.operationState != nil ? app.status.operationState.phase : nil`.
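The nil-safe access that `?.` provides can be approximated in plain Go with explicit nil checks. This is a sketch of the semantics only, not the expr library itself; the types and the `phase` helper are hypothetical:

```go
package main

import "fmt"

// Hypothetical, simplified shapes of the Application status fields.
type OperationState struct{ Phase string }
type Status struct{ OperationState *OperationState }
type App struct{ Status *Status }

// phase mimics the app.status?.operationState.phase expression: it returns ""
// (the nil analogue) instead of panicking when an optional section is absent.
func phase(app App) string {
	if app.Status == nil || app.Status.OperationState == nil {
		return ""
	}
	return app.Status.OperationState.Phase
}

func main() {
	fmt.Println(phase(App{}) == "") // true: absent section, no panic
	fmt.Println(phase(App{Status: &Status{OperationState: &OperationState{Phase: "Succeeded"}}})) // Succeeded
}
```

The guard-then-dereference shape is exactly what the `a != nil ? a.b : nil` expansion in the paragraph above describes; `?.` just spells it more compactly.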
## Avoid Sending Same Notification Too Often
In some cases, the trigger condition might be "flapping". The example below illustrates the problem.
@@ -60,14 +72,14 @@ data:
# Optional 'oncePer' property ensures that a notification is sent only once per specified field value
# E.g. the following is triggered once per sync revision
trigger.on-deployed: |
when: app.status.operationState.phase in ['Succeeded'] and app.status.health.status == 'Healthy'
when: app.status?.operationState.phase in ['Succeeded'] and app.status.health.status == 'Healthy'
oncePer: app.status.sync.revision
send: [app-sync-succeeded]
```
**Mono Repo Usage**
When one repo is used to sync multiple applications, the `oncePer: app.status.sync.revision` field will trigger a notification for each commit. For mono repos, the better approach is to use the `oncePer: app.status.operationState.syncResult.revision` statement. This way a notification will be sent only for a particular Application's revision.
When one repo is used to sync multiple applications, the `oncePer: app.status.sync.revision` field will trigger a notification for each commit. For mono repos, the better approach is to use the `oncePer: app.status?.operationState.syncResult.revision` statement. This way a notification will be sent only for a particular Application's revision.
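The `oncePer` dedup described above can be sketched as a small send-once-per-key guard. This is an illustrative sketch, not the notifications-engine implementation; `onceNotifier` and its methods are hypothetical names:

```go
package main

import "fmt"

// onceNotifier sends a notification at most once per oncePer key
// (e.g. a sync revision), mirroring the dedup behavior described above.
type onceNotifier struct{ seen map[string]bool }

func newOnceNotifier() *onceNotifier { return &onceNotifier{seen: map[string]bool{}} }

// notify returns true when the message was sent, false when suppressed
// because this key was already notified.
func (n *onceNotifier) notify(key, message string) bool {
	if n.seen[key] {
		return false
	}
	n.seen[key] = true
	fmt.Println("sending:", message)
	return true
}

func main() {
	n := newOnceNotifier()
	fmt.Println(n.notify("rev-abc123", "app-sync-succeeded")) // true: first time for this revision
	fmt.Println(n.notify("rev-abc123", "app-sync-succeeded")) // false: deduplicated
	fmt.Println(n.notify("rev-def456", "app-sync-succeeded")) // true: new revision
}
```

Keying on `app.status.operationState.syncResult.revision` instead of `app.status.sync.revision` is what narrows the dedup key from "any commit in the repo" to "this Application's synced revision" in the mono-repo case.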
### oncePer
@@ -122,7 +134,7 @@ Triggers have access to the set of built-in functions.
Example:
```yaml
when: time.Now().Sub(time.Parse(app.status.operationState.startedAt)).Minutes() >= 5
when: time.Now().Sub(time.Parse(app.status?.operationState.startedAt)).Minutes() >= 5
```
{!docs/operator-manual/notifications/functions.md!}


@@ -11,4 +11,12 @@ Eg, `https://github.com/argoproj/argo-cd/manifests/ha/cluster-install?ref=v2.14.
## Upgraded Helm Version
Helm was upgraded to 3.16.2 and the skipSchemaValidation flag was added to
the [CLI and Application CR](https://argo-cd.readthedocs.io/en/latest/user-guide/helm/#helm-skip-schema-validation).
## Breaking Changes
## Sanitized project API response
For security reasons ([GHSA-786q-9hcg-v9ff](https://github.com/argoproj/argo-cd/security/advisories/GHSA-786q-9hcg-v9ff)),
the project API response was sanitized to remove sensitive information. This includes
credentials of project-scoped repositories and clusters.


@@ -287,6 +287,9 @@ resources.
resource being deleted, Argo CD will fail to recognize that the resource is managed by the Application and will not
delete it. To avoid this edge case, it is recommended to perform a sync operation on your Applications, even if
they are not out of sync, so that orphan resource detection will work as expected on the next sync.
After upgrading to version 3.0, the Argo CD tracking annotation will only appear on an Application's resources when
either a new Git commit is made or the Application is explicitly synced.
##### Users who rely on label-based tracking for resources that are not managed by Argo CD
Some users rely on label-based tracking to track resources that are not managed by Argo CD. They may set annotations
@@ -491,4 +494,9 @@ resource.customizations.ignoreDifferences.apiextensions.k8s.io_CustomResourceDef
- /spec/preserveUnknownFields
```
More details on ignored resource updates are available in the [Diffing customization](../../user-guide/diffing.md) documentation.
### Sanitized project API response
For security reasons ([GHSA-786q-9hcg-v9ff](https://github.com/argoproj/argo-cd/security/advisories/GHSA-786q-9hcg-v9ff)),
the project API response was sanitized to remove sensitive information. This includes
credentials of project-scoped repositories and clusters.

go.mod

@@ -12,7 +12,7 @@ require (
github.com/Masterminds/sprig/v3 v3.3.0
github.com/TomOnTime/utfutil v1.0.0
github.com/alicebob/miniredis/v2 v2.34.0
github.com/argoproj/gitops-engine v0.7.1-0.20250520182409-89c110b5952e
github.com/argoproj/gitops-engine v0.7.1-0.20250905171100-0882c168faa3
github.com/argoproj/notifications-engine v0.4.1-0.20250309174002-87bf0576a872
github.com/argoproj/pkg v0.13.7-0.20250305113207-cbc37dc61de5
github.com/aws/aws-sdk-go v1.55.6
@@ -28,7 +28,7 @@ require (
github.com/dlclark/regexp2 v1.11.5
github.com/dustin/go-humanize v1.0.1
github.com/evanphx/json-patch v5.9.11+incompatible
github.com/expr-lang/expr v1.16.9
github.com/expr-lang/expr v1.17.7
github.com/felixge/httpsnoop v1.0.4
github.com/fsnotify/fsnotify v1.8.0
github.com/gfleury/go-bitbucket-v1 v0.0.0-20240917142304-df385efaac68
@@ -88,12 +88,12 @@ require (
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.32.0
go.opentelemetry.io/otel/sdk v1.34.0
go.uber.org/automaxprocs v1.6.0
golang.org/x/crypto v0.38.0
golang.org/x/crypto v0.46.0
golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f
golang.org/x/net v0.40.0
golang.org/x/net v0.47.0
golang.org/x/oauth2 v0.28.0
golang.org/x/sync v0.14.0
golang.org/x/term v0.32.0
golang.org/x/sync v0.19.0
golang.org/x/term v0.38.0
golang.org/x/time v0.11.0
google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422
google.golang.org/grpc v1.71.0
@@ -263,10 +263,11 @@ require (
go.opentelemetry.io/otel/metric v1.35.0 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
golang.org/x/mod v0.22.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/text v0.25.0 // indirect
golang.org/x/tools v0.27.0 // indirect
golang.org/x/mod v0.30.0 // indirect
golang.org/x/sys v0.39.0 // indirect
golang.org/x/text v0.32.0 // indirect
golang.org/x/tools v0.39.0 // indirect
golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated // indirect
gomodules.xyz/envconfig v1.3.1-0.20190308184047-426f31af0d45 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
gomodules.xyz/notify v0.1.1 // indirect

go.sum

@@ -114,8 +114,8 @@ github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFI
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/appscode/go v0.0.0-20191119085241-0887d8ec2ecc/go.mod h1:OawnOmAL4ZX3YaPdN+8HTNwBveT1jMsqP74moa9XUbE=
github.com/argoproj/gitops-engine v0.7.1-0.20250520182409-89c110b5952e h1:65x5+7Vz3HPjFoj7+mFyCckgHrAhPwy4rnDp/AveD18=
github.com/argoproj/gitops-engine v0.7.1-0.20250520182409-89c110b5952e/go.mod h1:duVhxDW7M7M7+19IBCVth2REOS11gmqzTWwj4u8N7aQ=
github.com/argoproj/gitops-engine v0.7.1-0.20250905171100-0882c168faa3 h1:Nw5ZqatjlxUgzWMLZ3Josj7csW2TQSYLIP6D9IGz+kY=
github.com/argoproj/gitops-engine v0.7.1-0.20250905171100-0882c168faa3/go.mod h1:duVhxDW7M7M7+19IBCVth2REOS11gmqzTWwj4u8N7aQ=
github.com/argoproj/notifications-engine v0.4.1-0.20250309174002-87bf0576a872 h1:ADGAdyN9ty0+RmTT/yn+xV9vwkqvLn9O1ccqeP0Zeas=
github.com/argoproj/notifications-engine v0.4.1-0.20250309174002-87bf0576a872/go.mod h1:d1RazGXWvKRFv9//rg4MRRR7rbvbE7XLgTSMT5fITTE=
github.com/argoproj/pkg v0.13.7-0.20250305113207-cbc37dc61de5 h1:YBoLSjpoaJXaXAldVvBRKJuOPvIXz9UOv6S96gMJM/Q=
@@ -243,8 +243,8 @@ github.com/evanphx/json-patch/v5 v5.9.11 h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjT
github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM=
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f h1:Wl78ApPPB2Wvf/TIe2xdyJxTlb6obmF18d8QdkxNDu4=
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f/go.mod h1:OSYXu++VVOHnXeitef/D8n/6y4QV8uLHSFXX4NeXMGc=
github.com/expr-lang/expr v1.16.9 h1:WUAzmR0JNI9JCiF0/ewwHB1gmcGw5wW7nWt8gc6PpCI=
github.com/expr-lang/expr v1.16.9/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
github.com/expr-lang/expr v1.17.7 h1:Q0xY/e/2aCIp8g9s/LGvMDCC5PxYlvHgDZRQ4y16JX8=
github.com/expr-lang/expr v1.17.7/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
github.com/facebookgo/ensure v0.0.0-20160127193407-b4ab57deab51/go.mod h1:Yg+htXGokKKdzcwhuNDwVvN+uBxDGXJ7G/VN1d8fa64=
github.com/facebookgo/stack v0.0.0-20160209184415-751773369052/go.mod h1:UbMTZqLaRiH3MsBH8va0n7s1pQYcu3uTb8G4tygF4Zg=
github.com/facebookgo/subset v0.0.0-20150612182917-8dac2c3c4870/go.mod h1:5tD+neXqOorC30/tWg0LCSkrqj/AR6gu8yY8/fpw1q0=
@@ -881,8 +881,8 @@ golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.38.0 h1:jt+WWG8IZlBnVbomuhg2Mdq0+BBQaHbtqHEFEigjUV8=
golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw=
golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -921,8 +921,8 @@ golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91
golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI=
golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4=
golang.org/x/mod v0.22.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -975,8 +975,8 @@ golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -999,8 +999,8 @@ golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ=
golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1069,8 +1069,8 @@ golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -1082,8 +1082,8 @@ golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.32.0 h1:DR4lr0TjUs3epypdhTOkMmuF5CDFJ/8pOnbzMZPQ7bg=
golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ=
golang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q=
golang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1098,8 +1098,8 @@ golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=
golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -1156,8 +1156,12 @@ golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc
golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA=
golang.org/x/tools v0.4.0/go.mod h1:UE5sM2OK9E/d67R0ANs2xJizIymRP5gJU295PvKXxjQ=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.27.0 h1:qEKojBykQkQ4EynWy4S8Weg69NumxKdn40Fce3uc/8o=
golang.org/x/tools v0.27.0/go.mod h1:sUi0ZgbwW9ZPAq26Ekut+weQPR5eIM6GQLQ1Yjm1H0Q=
golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
golang.org/x/tools/go/expect v0.1.0-deprecated h1:jY2C5HGYR5lqex3gEniOQL0r7Dq5+VGVgY1nudX5lXY=
golang.org/x/tools/go/expect v0.1.0-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY=
golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM=
golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=


@@ -181,6 +181,12 @@ spec:
name: argocd-cmd-params-cm
key: applicationsetcontroller.requeue.after
optional: true
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
name: argocd-cmd-params-cm
key: applicationsetcontroller.status.max.resources.count
optional: true
volumeMounts:
- mountPath: /app/config/ssh
name: ssh-known-hosts


@@ -12,4 +12,4 @@ resources:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: v3.0.13
newTag: v3.0.23


@@ -5,7 +5,7 @@ kind: Kustomization
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: v3.0.13
newTag: v3.0.23
resources:
- ./application-controller
- ./dex


@@ -40,7 +40,7 @@ spec:
serviceAccountName: argocd-redis
containers:
- name: redis
image: redis:7.2.7-alpine
image: redis:7.2.11-alpine
imagePullPolicy: Always
args:
- "--save"


@@ -24609,7 +24609,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -24735,7 +24741,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -24781,7 +24787,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -24869,7 +24875,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: redis:7.2.7-alpine
image: redis:7.2.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -24885,7 +24891,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25158,7 +25164,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25210,7 +25216,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -25552,7 +25558,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -24577,7 +24577,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -24681,7 +24687,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: redis:7.2.7-alpine
image: redis:7.2.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -24697,7 +24703,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -24970,7 +24976,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25022,7 +25028,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -25364,7 +25370,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -12,4 +12,4 @@ resources:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: v3.0.13
newTag: v3.0.23


@@ -12,7 +12,7 @@ patches:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: v3.0.13
newTag: v3.0.23
resources:
- ../../base/application-controller
- ../../base/applicationset-controller


@@ -1250,7 +1250,7 @@ spec:
automountServiceAccountToken: false
initContainers:
- name: config-init
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
resources:
{}
@@ -1290,7 +1290,7 @@ spec:
containers:
- name: redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
command:
- redis-server
@@ -1364,7 +1364,7 @@ spec:
- /bin/sh
- /readonly-config/trigger-failover-if-master.sh
- name: sentinel
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
command:
- redis-sentinel
@@ -1437,7 +1437,7 @@ spec:
- sleep 30; redis-cli -p 26379 sentinel reset argocd
- name: split-brain-fix
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
command:
- sh


@@ -27,7 +27,7 @@ redis-ha:
serviceAccount:
automountToken: true
image:
tag: 7.2.7-alpine
tag: 7.2.11-alpine
sentinel:
bind: '0.0.0.0'
lifecycle:


@@ -25975,7 +25975,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -26101,7 +26107,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26147,7 +26153,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -26274,7 +26280,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -26370,7 +26376,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -26494,7 +26500,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -26793,7 +26799,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26845,7 +26851,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -27219,7 +27225,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -27597,7 +27603,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -27695,7 +27701,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -27766,7 +27772,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -27841,7 +27847,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -27876,7 +27882,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -25945,7 +25945,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -26088,7 +26094,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -26184,7 +26190,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -26308,7 +26314,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -26607,7 +26613,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26659,7 +26665,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -27033,7 +27039,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -27411,7 +27417,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -27509,7 +27515,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -27580,7 +27586,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -27655,7 +27661,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -27690,7 +27696,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -1862,7 +1862,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1988,7 +1994,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2034,7 +2040,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2161,7 +2167,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -2257,7 +2263,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -2381,7 +2387,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -2680,7 +2686,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2732,7 +2738,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -3106,7 +3112,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -3484,7 +3490,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -3582,7 +3588,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -3653,7 +3659,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -3728,7 +3734,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -3763,7 +3769,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -1832,7 +1832,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1975,7 +1981,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -2071,7 +2077,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -2195,7 +2201,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -2494,7 +2500,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2546,7 +2552,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2920,7 +2926,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -3298,7 +3304,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -3396,7 +3402,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -3467,7 +3473,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -3542,7 +3548,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -3577,7 +3583,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: public.ecr.aws/docker/library/redis:7.2.7-alpine
image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -25069,7 +25069,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -25195,7 +25201,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25241,7 +25247,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -25368,7 +25374,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -25464,7 +25470,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -25550,7 +25556,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: redis:7.2.7-alpine
image: redis:7.2.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -25566,7 +25572,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25839,7 +25845,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25891,7 +25897,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -26263,7 +26269,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -26641,7 +26647,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-application-controller
ports:

manifests/install.yaml generated

@@ -25037,7 +25037,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -25180,7 +25186,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -25276,7 +25282,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -25362,7 +25368,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: redis:7.2.7-alpine
image: redis:7.2.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -25378,7 +25384,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25651,7 +25657,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25703,7 +25709,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -26075,7 +26081,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -26453,7 +26459,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -956,7 +956,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1082,7 +1088,7 @@ spec:
key: log.format.timestamp
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1128,7 +1134,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -1255,7 +1261,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1351,7 +1357,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1437,7 +1443,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: redis:7.2.7-alpine
image: redis:7.2.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -1453,7 +1459,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -1726,7 +1732,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1778,7 +1784,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2150,7 +2156,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2528,7 +2534,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -924,7 +924,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1067,7 +1073,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1163,7 +1169,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1249,7 +1255,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: redis:7.2.7-alpine
image: redis:7.2.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -1265,7 +1271,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -1538,7 +1544,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1590,7 +1596,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -1962,7 +1968,7 @@ spec:
key: server.sync.replace.allowed
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2340,7 +2346,7 @@ spec:
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v3.0.13
image: quay.io/argoproj/argocd:v3.0.23
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -321,7 +321,6 @@ func (repo *Repository) Sanitized() *Repository {
Repo: repo.Repo,
Type: repo.Type,
Name: repo.Name,
Username: repo.Username,
Insecure: repo.IsInsecure(),
EnableLFS: repo.EnableLFS,
EnableOCI: repo.EnableOCI,


@@ -2152,6 +2152,32 @@ type Cluster struct {
Annotations map[string]string `json:"annotations,omitempty" protobuf:"bytes,13,opt,name=annotations"`
}
func (c *Cluster) Sanitized() *Cluster {
return &Cluster{
ID: c.ID,
Server: c.Server,
Name: c.Name,
Project: c.Project,
Namespaces: c.Namespaces,
Shard: c.Shard,
Labels: c.Labels,
Annotations: c.Annotations,
ClusterResources: c.ClusterResources,
ConnectionState: c.ConnectionState,
ServerVersion: c.ServerVersion,
Info: c.Info,
RefreshRequestedAt: c.RefreshRequestedAt,
Config: ClusterConfig{
AWSAuthConfig: c.Config.AWSAuthConfig,
ProxyUrl: c.Config.ProxyUrl,
DisableCompression: c.Config.DisableCompression,
TLSClientConfig: TLSClientConfig{
Insecure: c.Config.Insecure,
},
},
}
}
// Equals returns true if two cluster objects are considered to be equal
func (c *Cluster) Equals(other *Cluster) bool {
if c.Server != other.Server {
@@ -3152,6 +3178,14 @@ func (app *Application) SetPostDeleteFinalizer(stage ...string) {
setFinalizer(&app.ObjectMeta, strings.Join(append([]string{PostDeleteFinalizerName}, stage...), "/"), true)
}
func (app *Application) UnSetPostDeleteFinalizerAll() {
for _, finalizer := range app.Finalizers {
if strings.HasPrefix(finalizer, PostDeleteFinalizerName) {
setFinalizer(&app.ObjectMeta, finalizer, false)
}
}
}
func (app *Application) UnSetPostDeleteFinalizer(stage ...string) {
setFinalizer(&app.ObjectMeta, strings.Join(append([]string{PostDeleteFinalizerName}, stage...), "/"), false)
}

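Both `Repository.Sanitized()` (which drops `Username` above) and the new `Cluster.Sanitized()` follow the same allowlist-copy pattern: build a fresh struct and copy only the fields known to be safe, so secrets are never present in the returned value. A self-contained sketch with simplified stand-in types (field names follow the diff, but this is not the real `v1alpha1` package):

```go
package main

import "fmt"

// Simplified stand-ins for illustration only.
type ClusterConfig struct {
	Username    string
	Password    string
	BearerToken string
	Insecure    bool
}

type Cluster struct {
	Name   string
	Server string
	Config ClusterConfig
}

// Sanitized returns a copy carrying only non-sensitive fields. Because the
// copy is built field-by-field from an allowlist, credentials are stripped
// by construction rather than by remembering to zero each secret field.
func (c *Cluster) Sanitized() *Cluster {
	return &Cluster{
		Name:   c.Name,
		Server: c.Server,
		Config: ClusterConfig{Insecure: c.Config.Insecure},
	}
}

func main() {
	c := &Cluster{
		Name:   "example",
		Server: "https://example.com",
		Config: ClusterConfig{Username: "admin", Password: "hunter2", Insecure: true},
	}
	s := c.Sanitized()
	fmt.Println(s.Config.Password == "") // credentials stripped
	fmt.Println(s.Config.Insecure)       // non-sensitive flag preserved
}
```

The trade-off of the allowlist approach is visible in the diff's companion test: any newly added safe field must be added to `Sanitized()` explicitly, which is why the test asserts the full expected struct.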

@@ -4507,3 +4507,58 @@ func TestCluster_ParseProxyUrl(t *testing.T) {
}
}
}
func TestSanitized(t *testing.T) {
now := metav1.Now()
cluster := &Cluster{
ID: "123",
Server: "https://example.com",
Name: "example",
ServerVersion: "v1.0.0",
Namespaces: []string{"default", "kube-system"},
Project: "default",
Labels: map[string]string{
"env": "production",
},
Annotations: map[string]string{
"annotation-key": "annotation-value",
},
ConnectionState: ConnectionState{
Status: ConnectionStatusSuccessful,
Message: "Connection successful",
ModifiedAt: &now,
},
Config: ClusterConfig{
Username: "admin",
Password: "password123",
BearerToken: "abc",
TLSClientConfig: TLSClientConfig{
Insecure: true,
},
ExecProviderConfig: &ExecProviderConfig{
Command: "test",
},
},
}
assert.Equal(t, &Cluster{
ID: "123",
Server: "https://example.com",
Name: "example",
ServerVersion: "v1.0.0",
Namespaces: []string{"default", "kube-system"},
Project: "default",
Labels: map[string]string{"env": "production"},
Annotations: map[string]string{"annotation-key": "annotation-value"},
ConnectionState: ConnectionState{
Status: ConnectionStatusSuccessful,
Message: "Connection successful",
ModifiedAt: &now,
},
Config: ClusterConfig{
TLSClientConfig: TLSClientConfig{
Insecure: true,
},
},
}, cluster.Sanitized())
}

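The `UnSetPostDeleteFinalizerAll` helper added earlier in this compare loops over `app.Finalizers` and removes every finalizer that shares the post-delete prefix. The core pattern can be sketched independently of the Application type (a sketch; the real code mutates `ObjectMeta` via `setFinalizer` rather than returning a slice):

```go
package main

import (
	"fmt"
	"strings"
)

// removeFinalizersWithPrefix drops every finalizer starting with prefix,
// keeping the rest in order. A new backing array is used so the input
// slice is never aliased or mutated.
func removeFinalizersWithPrefix(finalizers []string, prefix string) []string {
	kept := make([]string, 0, len(finalizers))
	for _, f := range finalizers {
		if !strings.HasPrefix(f, prefix) {
			kept = append(kept, f)
		}
	}
	return kept
}

func main() {
	fs := []string{
		"post-delete-finalizer.argocd.argoproj.io",
		"post-delete-finalizer.argocd.argoproj.io/cleanup",
		"resources-finalizer.argocd.argoproj.io",
	}
	// Removes both the bare finalizer and its stage-qualified variants.
	fmt.Println(removeFinalizersWithPrefix(fs, "post-delete-finalizer.argocd.argoproj.io"))
}
```

Prefix matching is what lets one call clear stage-qualified finalizers (`<name>/<stage>`) alongside the bare one, which per-stage `UnSetPostDeleteFinalizer` cannot do.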

@@ -24,14 +24,23 @@ if obj.status ~= nil then
if obj.status.conditions ~= nil then
for i, condition in pairs(obj.status.conditions) do
-- Check if the InferenceService is Stopped
if condition.type == "Stopped" and condition.status == "True" then
health_status.status = "Suspended"
health_status.message = "InferenceService is Stopped"
return health_status
end
-- Check for unhealthy statuses
-- Note: The Stopped condition's healthy status is False
if condition.status == "Unknown" then
status_unknown = status_unknown + 1
elseif condition.status == "False" then
elseif condition.status == "False" and condition.type ~= "Stopped" then
status_false = status_false + 1
end
if condition.status ~= "True" then
-- Add the error messages if the status is unhealthy
if condition.status ~= "True" and condition.type ~= "Stopped" then
msg = msg .. " | " .. i .. ": " .. condition.type .. " | " .. condition.status
if condition.reason ~= nil and condition.reason ~= "" then
msg = msg .. " | " .. condition.reason

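The Lua change above makes a `Stopped`/`True` condition short-circuit to `Suspended`, and excludes the `Stopped` condition from the unhealthy counts since its healthy state is `False`. The aggregation can be sketched in Go for illustration (a sketch: the condition index and the full Degraded/Progressing mapping of the real script are simplified here):

```go
package main

import "fmt"

// Condition mirrors the fields the Lua script reads from status.conditions.
type Condition struct {
	Type, Status, Reason string
}

// inferenceServiceHealth aggregates conditions the way the diff above does:
// Stopped=True wins immediately, and Stopped is otherwise ignored because
// its healthy status is "False".
func inferenceServiceHealth(conds []Condition) (status, msg string) {
	unknown, falses := 0, 0
	for _, c := range conds {
		if c.Type == "Stopped" && c.Status == "True" {
			return "Suspended", "InferenceService is Stopped"
		}
		if c.Status == "Unknown" {
			unknown++
		} else if c.Status == "False" && c.Type != "Stopped" {
			falses++
		}
		// Collect messages only for genuinely unhealthy conditions.
		if c.Status != "True" && c.Type != "Stopped" {
			msg += " | " + c.Type + " | " + c.Status
			if c.Reason != "" {
				msg += " | " + c.Reason
			}
		}
	}
	switch {
	case falses > 0:
		return "Degraded", msg
	case unknown > 0:
		return "Progressing", msg
	default:
		return "Healthy", "InferenceService is healthy."
	}
}

func main() {
	fmt.Println(inferenceServiceHealth([]Condition{
		{Type: "Ready", Status: "True"},
		{Type: "Stopped", Status: "True"},
	}))
}
```

Without the `~= "Stopped"` guards, a running service (where `Stopped` is `False`) would be counted as having a failed condition — exactly the regression the `stopped_with_ready.yaml` testdata above guards against.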

@@ -23,6 +23,10 @@ tests:
status: Degraded
message: "0: transitionStatus | BlockedByFailedLoad"
inputPath: testdata/degraded_modelmesh.yaml
- healthStatus:
status: Suspended
message: InferenceService is Stopped
inputPath: testdata/stopped.yaml
- healthStatus:
status: Healthy
message: InferenceService is healthy.


@@ -23,3 +23,7 @@ status:
- lastTransitionTime: "2023-06-20T22:44:51Z"
status: "True"
type: Ready
- lastTransitionTime: "2023-06-20T22:44:51Z"
severity: Info
status: 'False'
type: Stopped


@@ -31,5 +31,9 @@ status:
severity: Info
status: 'True'
type: RoutesReady
- lastTransitionTime: '2024-05-30T22:14:31Z'
severity: Info
status: 'False'
type: Stopped
modelStatus:
transitionStatus: UpToDate


@@ -17,3 +17,7 @@ status:
- lastTransitionTime: '2024-05-16T18:48:56Z'
status: 'True'
type: Ready
- lastTransitionTime: '2024-05-16T18:48:56Z'
severity: Info
status: 'False'
type: Stopped


@@ -0,0 +1,23 @@
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
name: helloworld
namespace: default
annotations:
serving.kserve.io/deploymentMode: RawDeployment
serving.kserve.io/stop: 'true'
spec: {}
status:
conditions:
- lastTransitionTime: '2024-05-16T18:48:56Z'
reason: Stopped
status: 'False'
type: PredictorReady
- lastTransitionTime: '2024-05-16T18:48:56Z'
reason: Stopped
status: 'False'
type: Ready
- lastTransitionTime: '2024-05-16T18:48:56Z'
severity: Info
status: 'True'
type: Stopped


@@ -4,6 +4,13 @@ if obj.spec.suspend ~= nil and obj.spec.suspend == true then
hs.status = "Suspended"
return hs
end
-- Helm repositories of type "oci" do not contain any information in the status
-- https://fluxcd.io/flux/components/source/helmrepositories/#helmrepository-status
if obj.spec.type ~= nil and obj.spec.type == "oci" then
hs.message = "Helm repositories of type 'oci' do not contain any information in the status."
hs.status = "Healthy"
return hs
end
if obj.status ~= nil then
if obj.status.conditions ~= nil then
local numProgressing = 0


@@ -11,3 +11,7 @@ tests:
status: Healthy
message: Succeeded
inputPath: testdata/healthy.yaml
- healthStatus:
status: Healthy
message: "Helm repositories of type 'oci' do not contain any information in the status."
inputPath: testdata/oci.yaml


@@ -0,0 +1,10 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: podinfo
namespace: default
spec:
type: "oci"
interval: 5m0s
url: oci://ghcr.io/stefanprodan/charts
status: {}


@@ -744,9 +744,8 @@ func (s *Server) Get(ctx context.Context, q *application.ApplicationQuery) (*v1a
return nil, err
}
s.inferResourcesStatusHealth(a)
if q.Refresh == nil {
s.inferResourcesStatusHealth(a)
return a, nil
}
@@ -827,7 +826,9 @@ func (s *Server) Get(ctx context.Context, q *application.ApplicationQuery) (*v1a
annotations = make(map[string]string)
}
if _, ok := annotations[v1alpha1.AnnotationKeyRefresh]; !ok {
return event.Application.DeepCopy(), nil
refreshedApp := event.Application.DeepCopy()
s.inferResourcesStatusHealth(refreshedApp)
return refreshedApp, nil
}
}
}
@@ -1288,6 +1289,17 @@ func (s *Server) validateAndNormalizeApp(ctx context.Context, app *v1alpha1.Appl
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionUpdate, currApp.RBACName(s.ns)); err != nil {
return err
}
// Validate that the new project exists and the application is allowed to use it
newProj, err := s.getAppProject(ctx, app, log.WithFields(log.Fields{
"application": app.Name,
"app-namespace": app.Namespace,
"app-qualified-name": app.QualifiedName(),
"project": app.Spec.Project,
}))
if err != nil {
return err
}
proj = newProj
}
if _, err := argo.GetDestinationCluster(ctx, app.Spec.Destination, s.db); err != nil {


@@ -1511,14 +1511,130 @@ func TestCreateAppWithOperation(t *testing.T) {
}
func TestUpdateApp(t *testing.T) {
testApp := newTestApp()
appServer := newTestAppServer(t, testApp)
testApp.Spec.Project = ""
app, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: testApp,
t.Parallel()
t.Run("Same spec", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
appServer := newTestAppServer(t, testApp)
testApp.Spec.Project = ""
app, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: testApp,
})
require.NoError(t, err)
assert.Equal(t, "default", app.Spec.Project)
})
t.Run("Invalid existing app can be updated", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
testApp.Spec.Destination.Server = "https://invalid-cluster"
appServer := newTestAppServer(t, testApp)
updateApp := newTestAppWithDestName()
updateApp.TypeMeta = testApp.TypeMeta
updateApp.Spec.Source.Name = "updated"
app, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: updateApp,
})
require.NoError(t, err)
require.NotNil(t, app)
assert.Equal(t, "updated", app.Spec.Source.Name)
})
t.Run("Can update application project from invalid", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
restrictedProj := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "restricted-proj", Namespace: "default"},
Spec: v1alpha1.AppProjectSpec{
SourceRepos: []string{"not-your-repo"},
Destinations: []v1alpha1.ApplicationDestination{{Server: "*", Namespace: "not-your-namespace"}},
},
}
testApp.Spec.Project = restrictedProj.Name
appServer := newTestAppServer(t, testApp, restrictedProj)
updateApp := newTestAppWithDestName()
updateApp.TypeMeta = testApp.TypeMeta
updateApp.Spec.Project = "my-proj"
app, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: updateApp,
})
require.NoError(t, err)
require.NotNil(t, app)
assert.Equal(t, "my-proj", app.Spec.Project)
})
t.Run("Cannot update application project to invalid", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
restrictedProj := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "restricted-proj", Namespace: "default"},
Spec: v1alpha1.AppProjectSpec{
SourceRepos: []string{"not-your-repo"},
Destinations: []v1alpha1.ApplicationDestination{{Server: "*", Namespace: "not-your-namespace"}},
},
}
appServer := newTestAppServer(t, testApp, restrictedProj)
updateApp := newTestAppWithDestName()
updateApp.TypeMeta = testApp.TypeMeta
updateApp.Spec.Project = restrictedProj.Name
_, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: updateApp,
})
require.Error(t, err)
require.ErrorContains(t, err, "application repo https://github.com/argoproj/argocd-example-apps.git is not permitted in project 'restricted-proj'")
require.ErrorContains(t, err, "application destination server 'fake-cluster' and namespace 'fake-dest-ns' do not match any of the allowed destinations in project 'restricted-proj'")
})
t.Run("Cannot update application project to inexisting", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
appServer := newTestAppServer(t, testApp)
updateApp := newTestAppWithDestName()
updateApp.TypeMeta = testApp.TypeMeta
updateApp.Spec.Project = "i-do-not-exist"
_, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: updateApp,
})
require.Error(t, err)
require.ErrorContains(t, err, "app is not allowed in project \"i-do-not-exist\", or the project does not exist")
})
t.Run("Can update application project with project argument", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
appServer := newTestAppServer(t, testApp)
updateApp := newTestAppWithDestName()
updateApp.TypeMeta = testApp.TypeMeta
updateApp.Spec.Project = "my-proj"
app, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: updateApp,
Project: ptr.To("default"),
})
require.NoError(t, err)
require.NotNil(t, app)
assert.Equal(t, "my-proj", app.Spec.Project)
})
t.Run("Existing label and annotations are replaced", func(t *testing.T) {
t.Parallel()
testApp := newTestApp()
testApp.Annotations = map[string]string{"test": "test-value", "update": "old"}
testApp.Labels = map[string]string{"test": "test-value", "update": "old"}
appServer := newTestAppServer(t, testApp)
updateApp := newTestAppWithDestName()
updateApp.TypeMeta = testApp.TypeMeta
updateApp.Annotations = map[string]string{"update": "new"}
updateApp.Labels = map[string]string{"update": "new"}
app, err := appServer.Update(t.Context(), &application.ApplicationUpdateRequest{
Application: updateApp,
})
require.NoError(t, err)
require.NotNil(t, app)
assert.Len(t, app.Annotations, 1)
assert.Equal(t, "new", app.GetAnnotations()["update"])
assert.Len(t, app.Labels, 1)
assert.Equal(t, "new", app.GetLabels()["update"])
})
require.NoError(t, err)
assert.Equal(t, "default", app.Spec.Project)
}
func TestUpdateAppSpec(t *testing.T) {
@@ -2411,6 +2527,99 @@ func TestGetAppRefresh_HardRefresh(t *testing.T) {
}
}
func TestGetApp_HealthStatusPropagation(t *testing.T) {
newServerWithTree := func(t *testing.T) (*Server, *v1alpha1.Application) {
t.Helper()
cacheClient := cache.NewCache(cache.NewInMemoryCache(1 * time.Hour))
testApp := newTestApp()
testApp.Status.ResourceHealthSource = v1alpha1.ResourceHealthLocationAppTree
testApp.Status.Resources = []v1alpha1.ResourceStatus{
{
Group: "apps",
Kind: "Deployment",
Name: "guestbook",
Namespace: "default",
},
}
appServer := newTestAppServer(t, testApp)
appStateCache := appstate.NewCache(cacheClient, time.Minute)
appInstanceName := testApp.InstanceName(appServer.appNamespaceOrDefault(testApp.Namespace))
err := appStateCache.SetAppResourcesTree(appInstanceName, &v1alpha1.ApplicationTree{
Nodes: []v1alpha1.ResourceNode{{
ResourceRef: v1alpha1.ResourceRef{
Group: "apps",
Kind: "Deployment",
Name: "guestbook",
Namespace: "default",
},
Health: &v1alpha1.HealthStatus{Status: health.HealthStatusDegraded},
}},
})
require.NoError(t, err)
appServer.cache = servercache.NewCache(appStateCache, time.Minute, time.Minute, time.Minute)
return appServer, testApp
}
t.Run("propagated health status on get with no refresh", func(t *testing.T) {
appServer, testApp := newServerWithTree(t)
fetchedApp, err := appServer.Get(t.Context(), &application.ApplicationQuery{
Name: &testApp.Name,
})
require.NoError(t, err)
assert.Equal(t, health.HealthStatusDegraded, fetchedApp.Status.Resources[0].Health.Status)
})
t.Run("propagated health status on normal refresh", func(t *testing.T) {
appServer, testApp := newServerWithTree(t)
var patched int32
ch := make(chan string, 1)
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
go refreshAnnotationRemover(t, ctx, &patched, appServer, testApp.Name, ch)
fetchedApp, err := appServer.Get(t.Context(), &application.ApplicationQuery{
Name: &testApp.Name,
Refresh: ptr.To(string(v1alpha1.RefreshTypeNormal)),
})
require.NoError(t, err)
select {
case <-ch:
assert.Equal(t, int32(1), atomic.LoadInt32(&patched))
case <-time.After(10 * time.Second):
assert.Fail(t, "timed out after 10 seconds")
}
assert.Equal(t, health.HealthStatusDegraded, fetchedApp.Status.Resources[0].Health.Status)
})
t.Run("propagated health status on hard refresh", func(t *testing.T) {
appServer, testApp := newServerWithTree(t)
var patched int32
ch := make(chan string, 1)
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
go refreshAnnotationRemover(t, ctx, &patched, appServer, testApp.Name, ch)
fetchedApp, err := appServer.Get(t.Context(), &application.ApplicationQuery{
Name: &testApp.Name,
Refresh: ptr.To(string(v1alpha1.RefreshTypeHard)),
})
require.NoError(t, err)
select {
case <-ch:
assert.Equal(t, int32(1), atomic.LoadInt32(&patched))
case <-time.After(10 * time.Second):
assert.Fail(t, "timed out after 10 seconds")
}
assert.Equal(t, health.HealthStatusDegraded, fetchedApp.Status.Resources[0].Health.Status)
})
}
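The tests above verify that per-resource health recorded in the cached app tree ends up on the matching entries of the application's status resources. The core of that propagation can be sketched as below — a minimal, self-contained version using simplified stand-in types (`resourceKey`, `resourceStatus`, `resourceNode` are hypothetical; the real structs live in `pkg/apis/application/v1alpha1`):

```go
package main

import "fmt"

// resourceKey identifies a resource the same way the tests do:
// by group, kind, name and namespace.
type resourceKey struct{ Group, Kind, Name, Namespace string }

// resourceStatus stands in for a status.resources entry; Health is
// empty until propagated from the tree.
type resourceStatus struct {
	resourceKey
	Health string
}

// resourceNode stands in for a cached app-tree node carrying health.
type resourceNode struct {
	resourceKey
	Health string
}

// propagateHealth copies per-node health from the cached tree onto the
// matching status entries, keyed by group/kind/name/namespace.
func propagateHealth(statuses []resourceStatus, tree []resourceNode) {
	byKey := make(map[resourceKey]string, len(tree))
	for _, n := range tree {
		byKey[n.resourceKey] = n.Health
	}
	for i := range statuses {
		if h, ok := byKey[statuses[i].resourceKey]; ok {
			statuses[i].Health = h
		}
	}
}

func main() {
	statuses := []resourceStatus{{resourceKey: resourceKey{"apps", "Deployment", "guestbook", "default"}}}
	tree := []resourceNode{{resourceKey{"apps", "Deployment", "guestbook", "default"}, "Degraded"}}
	propagateHealth(statuses, tree)
	fmt.Println(statuses[0].Health) // prints: Degraded
}
```

This mirrors why each subtest asserts `HealthStatusDegraded` on `Status.Resources[0]` after seeding the tree cache with a degraded Deployment node.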
func TestInferResourcesStatusHealth(t *testing.T) {
cacheClient := cache.NewCache(cache.NewInMemoryCache(1 * time.Hour))

View File

@@ -203,7 +203,7 @@ func (s *Server) Create(ctx context.Context, q *applicationset.ApplicationSetCre
}
if q.GetDryRun() {
apps, err := s.generateApplicationSetApps(ctx, log.WithField("applicationset", appset.Name), *appset, namespace)
apps, err := s.generateApplicationSetApps(ctx, log.WithField("applicationset", appset.Name), *appset)
if err != nil {
return nil, fmt.Errorf("unable to generate Applications of ApplicationSet: %w", err)
}
@@ -262,12 +262,12 @@ func (s *Server) Create(ctx context.Context, q *applicationset.ApplicationSetCre
return updated, nil
}
func (s *Server) generateApplicationSetApps(ctx context.Context, logEntry *log.Entry, appset v1alpha1.ApplicationSet, namespace string) ([]v1alpha1.Application, error) {
func (s *Server) generateApplicationSetApps(ctx context.Context, logEntry *log.Entry, appset v1alpha1.ApplicationSet) ([]v1alpha1.Application, error) {
argoCDDB := s.db
scmConfig := generators.NewSCMConfig(s.ScmRootCAPath, s.AllowedScmProviders, s.EnableScmProviders, github_app.NewAuthCredentials(argoCDDB.(db.RepoCredsDB)), true)
argoCDService := services.NewArgoCDService(s.db, s.GitSubmoduleEnabled, s.repoClientSet, s.EnableNewGitFileGlobbing)
appSetGenerators := generators.GetGenerators(ctx, s.client, s.k8sClient, namespace, argoCDService, s.dynamicClient, scmConfig)
appSetGenerators := generators.GetGenerators(ctx, s.client, s.k8sClient, s.ns, argoCDService, s.dynamicClient, scmConfig)
apps, _, err := appsettemplate.GenerateApplications(logEntry, appset, appSetGenerators, &appsetutils.Render{}, s.client)
if err != nil {
@@ -364,11 +364,15 @@ func (s *Server) Generate(ctx context.Context, q *applicationset.ApplicationSetG
if appset == nil {
return nil, errors.New("error creating ApplicationSets: ApplicationSets is nil in request")
}
namespace := s.appsetNamespaceOrDefault(appset.Namespace)
// The RBAC check needs to be performed against the appset namespace
// However, when trying to generate params, the server namespace needs
// to be passed.
namespace := s.appsetNamespaceOrDefault(appset.Namespace)
if !s.isNamespaceEnabled(namespace) {
return nil, security.NamespaceNotPermittedError(namespace)
}
projectName, err := s.validateAppSet(appset)
if err != nil {
return nil, fmt.Errorf("error validating ApplicationSets: %w", err)
@@ -381,7 +385,16 @@ func (s *Server) Generate(ctx context.Context, q *applicationset.ApplicationSetG
logger := log.New()
logger.SetOutput(logs)
apps, err := s.generateApplicationSetApps(ctx, logger.WithField("applicationset", appset.Name), *appset, namespace)
// The server namespace is passed to the function, since that is the
// exact namespace used to generate parameters (especially for the git
// generator).
//
// In the case of the Git generator, if the namespace were set to the
// appset namespace, we would look for a project in the appset
// namespace, which would lead to an error when generating params for
// the "appset in any namespace" feature.
// See https://github.com/argoproj/argo-cd/issues/22942
apps, err := s.generateApplicationSetApps(ctx, logger.WithField("applicationset", appset.Name), *appset)
if err != nil {
return nil, fmt.Errorf("unable to generate Applications of ApplicationSet: %w\n%s", err, logs.String())
}

View File

@@ -4,6 +4,9 @@ import (
"sort"
"testing"
"sigs.k8s.io/controller-runtime/pkg/client"
cr_fake "sigs.k8s.io/controller-runtime/pkg/client/fake"
"github.com/argoproj/gitops-engine/pkg/health"
"github.com/argoproj/pkg/sync"
"github.com/stretchr/testify/assert"
@@ -50,7 +53,7 @@ func fakeCluster() *appsv1.Cluster {
}
// return an ApplicationServiceServer which returns fake data
func newTestAppSetServer(t *testing.T, objects ...runtime.Object) *Server {
func newTestAppSetServer(t *testing.T, objects ...client.Object) *Server {
t.Helper()
f := func(enf *rbac.Enforcer) {
_ = enf.SetBuiltinPolicy(assets.BuiltinPolicyCSV)
@@ -61,7 +64,7 @@ func newTestAppSetServer(t *testing.T, objects ...runtime.Object) *Server {
}
// return an ApplicationServiceServer which returns fake data
func newTestNamespacedAppSetServer(t *testing.T, objects ...runtime.Object) *Server {
func newTestNamespacedAppSetServer(t *testing.T, objects ...client.Object) *Server {
t.Helper()
f := func(enf *rbac.Enforcer) {
_ = enf.SetBuiltinPolicy(assets.BuiltinPolicyCSV)
@@ -71,7 +74,7 @@ func newTestNamespacedAppSetServer(t *testing.T, objects ...runtime.Object) *Ser
return newTestAppSetServerWithEnforcerConfigure(t, f, scopedNamespaces, objects...)
}
func newTestAppSetServerWithEnforcerConfigure(t *testing.T, f func(*rbac.Enforcer), namespace string, objects ...runtime.Object) *Server {
func newTestAppSetServerWithEnforcerConfigure(t *testing.T, f func(*rbac.Enforcer), namespace string, objects ...client.Object) *Server {
t.Helper()
kubeclientset := fake.NewClientset(&corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
@@ -115,7 +118,11 @@ func newTestAppSetServerWithEnforcerConfigure(t *testing.T, f func(*rbac.Enforce
objects = append(objects, defaultProj, myProj)
fakeAppsClientset := apps.NewSimpleClientset(objects...)
runtimeObjects := make([]runtime.Object, len(objects))
for i := range objects {
runtimeObjects[i] = objects[i]
}
fakeAppsClientset := apps.NewSimpleClientset(runtimeObjects...)
factory := appinformer.NewSharedInformerFactoryWithOptions(fakeAppsClientset, 0, appinformer.WithNamespace(namespace), appinformer.WithTweakListOptions(func(_ *metav1.ListOptions) {}))
fakeProjLister := factory.Argoproj().V1alpha1().AppProjects().Lister().AppProjects(testNamespace)
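The hunk above rebuilds the slice element-by-element before calling `apps.NewSimpleClientset`, because Go does not implicitly convert between slices of different interface types (`[]client.Object` is not assignable to `[]runtime.Object`, even though every `client.Object` is a `runtime.Object`). A minimal sketch of the same pattern with stand-in interfaces (`A`, `B`, `thing` are hypothetical names for illustration):

```go
package main

import "fmt"

// A is the narrower interface (stands in for runtime.Object).
type A interface{ Name() string }

// B is the wider interface (stands in for client.Object).
type B interface {
	Name() string
	Extra() string
}

// thing satisfies both interfaces.
type thing struct{ n string }

func (t thing) Name() string  { return t.n }
func (t thing) Extra() string { return "x" }

func main() {
	bs := []B{thing{"one"}, thing{"two"}}
	// var as []A = bs // compile error: no covariance between interface slices
	as := make([]A, len(bs))
	for i := range bs {
		as[i] = bs[i] // each element converts fine; the slice must be rebuilt
	}
	fmt.Println(len(as), as[0].Name()) // prints: 2 one
}
```

The explicit loop is the idiomatic fix, which is exactly what the test helper does with `runtimeObjects`.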
@@ -140,6 +147,13 @@ func newTestAppSetServerWithEnforcerConfigure(t *testing.T, f func(*rbac.Enforce
panic("Timed out waiting for caches to sync")
}
scheme := runtime.NewScheme()
err = appsv1.AddToScheme(scheme)
require.NoError(t, err)
err = corev1.AddToScheme(scheme)
require.NoError(t, err)
crClient := cr_fake.NewClientBuilder().WithScheme(scheme).WithObjects(objects...).Build()
projInformer := factory.Argoproj().V1alpha1().AppProjects().Informer()
go projInformer.Run(ctx.Done())
if !k8scache.WaitForCacheSync(ctx.Done(), projInformer.HasSynced) {
@@ -150,7 +164,7 @@ func newTestAppSetServerWithEnforcerConfigure(t *testing.T, f func(*rbac.Enforce
db,
kubeclientset,
nil,
nil,
crClient,
enforcer,
nil,
fakeAppsClientset,
@@ -640,3 +654,54 @@ func TestResourceTree(t *testing.T) {
assert.EqualError(t, err, "namespace 'NOT-ALLOWED' is not permitted")
})
}
func TestAppSet_Generate_Cluster(t *testing.T) {
appSet1 := newTestAppSet(func(appset *appsv1.ApplicationSet) {
appset.Name = "AppSet1"
appset.Spec.Template.Name = "{{name}}"
appset.Spec.Generators = []appsv1.ApplicationSetGenerator{
{
Clusters: &appsv1.ClusterGenerator{},
},
}
})
t.Run("Generate in default namespace", func(t *testing.T) {
appSetServer := newTestAppSetServer(t, appSet1)
appsetQuery := applicationset.ApplicationSetGenerateRequest{
ApplicationSet: appSet1,
}
res, err := appSetServer.Generate(t.Context(), &appsetQuery)
require.NoError(t, err)
require.Len(t, res.Applications, 2)
assert.Equal(t, "fake-cluster", res.Applications[0].Name)
assert.Equal(t, "in-cluster", res.Applications[1].Name)
})
t.Run("Generate in different namespace", func(t *testing.T) {
appSetServer := newTestAppSetServer(t, appSet1)
appSet1Ns := appSet1.DeepCopy()
appSet1Ns.Namespace = "external-namespace"
appsetQuery := applicationset.ApplicationSetGenerateRequest{ApplicationSet: appSet1Ns}
res, err := appSetServer.Generate(t.Context(), &appsetQuery)
require.NoError(t, err)
require.Len(t, res.Applications, 2)
assert.Equal(t, "fake-cluster", res.Applications[0].Name)
assert.Equal(t, "in-cluster", res.Applications[1].Name)
})
t.Run("Generate in not allowed namespace", func(t *testing.T) {
appSetServer := newTestAppSetServer(t, appSet1)
appSet1Ns := appSet1.DeepCopy()
appSet1Ns.Namespace = "NOT-ALLOWED"
appsetQuery := applicationset.ApplicationSetGenerateRequest{ApplicationSet: appSet1Ns}
_, err := appSetServer.Generate(t.Context(), &appsetQuery)
assert.EqualError(t, err, "namespace 'NOT-ALLOWED' is not permitted")
})
}

View File

@@ -471,19 +471,8 @@ func (s *Server) RotateAuth(ctx context.Context, q *cluster.ClusterQuery) (*clus
}
func (s *Server) toAPIResponse(clust *appv1.Cluster) *appv1.Cluster {
clust = clust.Sanitized()
_ = s.cache.GetClusterInfo(clust.Server, &clust.Info)
clust.Config.Password = ""
clust.Config.BearerToken = ""
clust.Config.TLSClientConfig.KeyData = nil
if clust.Config.ExecProviderConfig != nil {
// We can't know what the user has put into args or
// env vars on the exec provider that might be sensitive
// (e.g. --private-key=XXX, PASSWORD=XXX)
// Implicitly assumes the command executable name is non-sensitive
clust.Config.ExecProviderConfig.Env = make(map[string]string)
clust.Config.ExecProviderConfig.Args = nil
}
// populate deprecated fields for backward compatibility
//nolint:staticcheck
clust.ServerVersion = clust.Info.ServerVersion

View File

@@ -82,8 +82,7 @@ func NewMetricsServer(host string, port int) *MetricsServer {
registry.MustRegister(extensionRequestDuration)
registry.MustRegister(argoVersion)
kubectlMetricsServer := kubectl.NewKubectlMetrics()
kubectlMetricsServer.RegisterWithClientGo()
kubectl.RegisterWithClientGo()
kubectl.RegisterWithPrometheus(registry)
return &MetricsServer{

View File

@@ -310,12 +310,20 @@ func (s *Server) GetDetailedProject(ctx context.Context, q *project.ProjectQuery
}
proj.NormalizeJWTTokens()
globalProjects := argo.GetGlobalProjects(proj, listersv1alpha1.NewAppProjectLister(s.projInformer.GetIndexer()), s.settingsMgr)
var apiRepos []*v1alpha1.Repository
for _, repo := range repositories {
apiRepos = append(apiRepos, repo.Normalize().Sanitized())
}
var apiClusters []*v1alpha1.Cluster
for _, cluster := range clusters {
apiClusters = append(apiClusters, cluster.Sanitized())
}
return &project.DetailedProjectsResponse{
GlobalProjects: globalProjects,
Project: proj,
Repositories: repositories,
Clusters: clusters,
Repositories: apiRepos,
Clusters: apiClusters,
}, err
}
@@ -412,7 +420,8 @@ func (s *Server) Update(ctx context.Context, q *project.ProjectUpdateRequest) (*
destCluster, err := argo.GetDestinationCluster(ctx, a.Spec.Destination, s.db)
if err != nil {
if err.Error() != argo.ErrDestinationMissing {
return nil, err
// If cluster is not found, we should discard this app, as it's most likely already in error
continue
}
invalidDstCount++
}

View File

@@ -743,6 +743,35 @@ p, role:admin, projects, update, *, allow`)
_, err := projectServer.GetSyncWindowsState(ctx, &project.SyncWindowsQuery{Name: projectWithSyncWindows.Name})
assert.EqualError(t, err, "rpc error: code = PermissionDenied desc = permission denied: projects, get, test")
})
t.Run("TestAddSyncWindowWhenAnAppReferencesAClusterThatDoesNotExist", func(t *testing.T) {
_ = enforcer.SetBuiltinPolicy(`p, role:admin, projects, get, *, allow
p, role:admin, projects, update, *, allow`)
sessionMgr := session.NewSessionManager(settingsMgr, test.NewFakeProjLister(), "", nil, session.NewUserStateStorage(nil))
projectWithAppWithInvalidCluster := existingProj.DeepCopy()
argoDB := db.NewDB("default", settingsMgr, kubeclientset)
invalidApp := v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{Name: "test-invalid", Namespace: "default"},
Spec: v1alpha1.ApplicationSpec{Source: &v1alpha1.ApplicationSource{}, Project: "test", Destination: v1alpha1.ApplicationDestination{Namespace: "ns3", Server: "https://server4"}},
}
projectServer := NewServer("default", fake.NewSimpleClientset(), apps.NewSimpleClientset(projectWithAppWithInvalidCluster, &invalidApp), enforcer, sync.NewKeyLock(), sessionMgr, nil, projInformer, settingsMgr, argoDB, testEnableEventList)
// Add sync window
syncWindow := v1alpha1.SyncWindow{
Kind: "deny",
Schedule: "* * * * *",
Duration: "1h",
Applications: []string{"*"},
Clusters: []string{"*"},
}
projectWithAppWithInvalidCluster.Spec.SyncWindows = append(projectWithAppWithInvalidCluster.Spec.SyncWindows, &syncWindow)
res, err := projectServer.Update(ctx, &project.ProjectUpdateRequest{
Project: projectWithAppWithInvalidCluster,
})
require.NoError(t, err)
assert.Len(t, res.Spec.SyncWindows, 1)
})
}
func newEnforcer(kubeclientset *fake.Clientset) *rbac.Enforcer {

View File

@@ -313,7 +313,7 @@ func TestRepositoryServer(t *testing.T) {
testRepo := &appsv1.Repository{
Repo: url,
Type: "git",
Username: "foo",
Username: "",
InheritedCreds: true,
}
db.On("ListRepositories", t.Context()).Return([]*appsv1.Repository{testRepo}, nil)

View File

@@ -1225,7 +1225,7 @@ func (server *ArgoCDServer) newHTTPServer(ctx context.Context, port int, grpcWeb
// Webhook handler for git events (Note: cache timeouts are hardcoded because the API server does not write to the cache and does not really use them)
argoDB := db.NewDB(server.Namespace, server.settingsMgr, server.KubeClientset)
acdWebhookHandler := webhook.NewHandler(server.Namespace, server.ArgoCDServerOpts.ApplicationNamespaces, server.ArgoCDServerOpts.WebhookParallelism, server.AppClientset, server.settings, server.settingsMgr, server.RepoServerCache, server.Cache, argoDB, server.settingsMgr.GetMaxWebhookPayloadSize())
acdWebhookHandler := webhook.NewHandler(server.Namespace, server.ArgoCDServerOpts.ApplicationNamespaces, server.ArgoCDServerOpts.WebhookParallelism, server.AppClientset, server.appLister, server.settings, server.settingsMgr, server.RepoServerCache, server.Cache, argoDB, server.settingsMgr.GetMaxWebhookPayloadSize())
mux.HandleFunc("/api/webhook", acdWebhookHandler.Handler)

View File

@@ -12,7 +12,7 @@ FROM docker.io/library/golang:1.24.6@sha256:2c89c41fb9efc3807029b59af69645867cfe
FROM docker.io/library/registry:2.8@sha256:543dade69668e02e5768d7ea2b0aa4fae6aa7384c9a5a8dbecc2be5136079ddb AS registry
FROM docker.io/bitnami/kubectl:1.32@sha256:493d1b871556d48d6b25d471f192c2427571cd6f78523eebcaf4d263353c7487 AS kubectl
FROM docker.io/bitnamilegacy/kubectl:1.32@sha256:493d1b871556d48d6b25d471f192c2427571cd6f78523eebcaf4d263353c7487 AS kubectl
FROM docker.io/library/ubuntu:24.04@sha256:80dd3c3b9c6cecb9f1667e9290b3bc61b78c2678c02cbdae5f0fea92cc6734ab

View File

@@ -6,7 +6,6 @@ import (
"os"
"slices"
"strconv"
"time"
rbacv1 "k8s.io/api/rbac/v1"
@@ -83,6 +82,18 @@ func (a *Actions) AddTag(name string) *Actions {
return a
}
func (a *Actions) AddAnnotatedTag(name string, message string) *Actions {
a.context.t.Helper()
fixture.AddAnnotatedTag(a.context.t, name, message)
return a
}
func (a *Actions) AddTagWithForce(name string) *Actions {
a.context.t.Helper()
fixture.AddTagWithForce(a.context.t, name)
return a
}
func (a *Actions) RemoveSubmodule() *Actions {
a.context.t.Helper()
fixture.RemoveSubmodule(a.context.t)
@@ -493,7 +504,6 @@ func (a *Actions) And(block func()) *Actions {
func (a *Actions) Then() *Consequences {
a.context.t.Helper()
time.Sleep(fixture.WhenThenSleepInterval)
return &Consequences{a.context, a, 15}
}

View File

@@ -1145,6 +1145,25 @@ func AddTag(t *testing.T, name string) {
}
}
func AddTagWithForce(t *testing.T, name string) {
t.Helper()
prevGnuPGHome := os.Getenv("GNUPGHOME")
t.Setenv("GNUPGHOME", TmpDir+"/gpg")
defer t.Setenv("GNUPGHOME", prevGnuPGHome)
errors.NewHandler(t).FailOnErr(Run(repoDirectory(), "git", "tag", "-f", name))
if IsRemote() {
errors.NewHandler(t).FailOnErr(Run(repoDirectory(), "git", "push", "--tags", "-f", "origin", "master"))
}
}
func AddAnnotatedTag(t *testing.T, name string, message string) {
t.Helper()
errors.NewHandler(t).FailOnErr(Run(repoDirectory(), "git", "tag", "-f", "-a", name, "-m", message))
if IsRemote() {
errors.NewHandler(t).FailOnErr(Run(repoDirectory(), "git", "push", "--tags", "-f", "origin", "master"))
}
}
// create the resource by creating using "kubectl apply", with bonus templating
func Declarative(t *testing.T, filename string, values any) (string, error) {
t.Helper()

View File

@@ -1,14 +1,22 @@
package e2e
import (
"context"
"strings"
"testing"
"time"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
"github.com/stretchr/testify/require"
. "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/test/e2e/fixture"
. "github.com/argoproj/argo-cd/v3/test/e2e/fixture/app"
"github.com/argoproj/argo-cd/v3/util/errors"
)
func TestGitSemverResolutionNotUsingConstraint(t *testing.T) {
@@ -90,3 +98,145 @@ func TestGitSemverResolutionUsingConstraintWithLeadingZero(t *testing.T) {
Expect(SyncStatusIs(SyncStatusCodeSynced)).
Expect(Pod(func(p corev1.Pod) bool { return strings.HasPrefix(p.Name, "new-app") }))
}
func TestAnnotatedTagInStatusSyncRevision(t *testing.T) {
Given(t).
Path(guestbookPath).
When().
// Create an annotated tag named 'annotated-tag'
AddAnnotatedTag("annotated-tag", "my-generic-tag-message").
// Create Application targeting annotated-tag, with automatedSync: true
CreateFromFile(func(app *Application) {
app.Spec.Source.TargetRevision = "annotated-tag"
app.Spec.SyncPolicy = &SyncPolicy{Automated: &SyncPolicyAutomated{Prune: true, SelfHeal: false}}
}).
Then().
Expect(SyncStatusIs(SyncStatusCodeSynced)).
And(func(app *Application) {
annotatedTagIDOutput, err := fixture.Run(fixture.TmpDir+"/testdata.git", "git", "show-ref", "annotated-tag")
require.NoError(t, err)
require.NotEmpty(t, annotatedTagIDOutput)
// example command output:
// "569798c430515ffe170bdb23e3aafaf8ae24b9ff refs/tags/annotated-tag"
annotatedTagIDFields := strings.Fields(string(annotatedTagIDOutput))
require.Len(t, annotatedTagIDFields, 2)
targetCommitID, err := fixture.Run(fixture.TmpDir+"/testdata.git", "git", "rev-parse", "--verify", "annotated-tag^{commit}")
// example command output:
// "bcd35965e494273355265b9f0bf85075b6bc5163"
require.NoError(t, err)
require.NotEmpty(t, targetCommitID)
require.NotEmpty(t, app.Status.Sync.Revision, "revision in sync status should be set by sync operation")
require.NotEqual(t, app.Status.Sync.Revision, annotatedTagIDFields[0], "revision should not match the annotated tag id")
require.Equal(t, app.Status.Sync.Revision, strings.TrimSpace(string(targetCommitID)), "revision SHOULD match the target commit SHA")
})
}
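The assertions above hinge on a git subtlety: for an annotated tag, `git show-ref` prints the SHA of the tag *object*, while `git rev-parse <tag>^{commit}` dereferences it to the commit SHA — and the sync status should record the latter. A small sketch of the output parsing the test does with `strings.Fields` (`parseShowRef` is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"strings"
)

// parseShowRef splits one line of `git show-ref` output into the object
// SHA and the ref name, e.g.
// "569798c430515ffe170bdb23e3aafaf8ae24b9ff refs/tags/annotated-tag".
func parseShowRef(line string) (sha, ref string, ok bool) {
	fields := strings.Fields(strings.TrimSpace(line))
	if len(fields) != 2 {
		return "", "", false
	}
	return fields[0], fields[1], true
}

func main() {
	sha, ref, ok := parseShowRef("569798c430515ffe170bdb23e3aafaf8ae24b9ff refs/tags/annotated-tag\n")
	fmt.Println(ok, sha, ref)
}
```

For a lightweight tag the two SHAs coincide, which is why the test explicitly requires the sync revision to *not* equal the tag object ID.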
// Test updates to K8s resources should not trigger a self-heal when self-heal is false.
func TestAutomatedSelfHealingAgainstAnnotatedTag(t *testing.T) {
Given(t).
Path(guestbookPath).
When().
AddAnnotatedTag("annotated-tag", "my-generic-tag-message").
// App should be auto-synced once created
CreateFromFile(func(app *Application) {
app.Spec.Source.TargetRevision = "annotated-tag"
app.Spec.SyncPolicy = &SyncPolicy{Automated: &SyncPolicyAutomated{Prune: true, SelfHeal: false}}
}).
Then().
Expect(SyncStatusIs(SyncStatusCodeSynced)).
ExpectConsistently(SyncStatusIs(SyncStatusCodeSynced), WaitDuration, time.Second*10).
When().
// Update the annotated tag to point to a new git commit that has a new revisionHistoryLimit.
PatchFile("guestbook-ui-deployment.yaml", `[{"op": "replace", "path": "/spec/revisionHistoryLimit", "value": 10}]`).
AddAnnotatedTag("annotated-tag", "my-generic-tag-message").
Refresh(RefreshTypeHard).
// The Application should update to the new annotated tag value within 10 seconds.
And(func() {
// Deployment revisionHistoryLimit should switch to 10
timeoutErr := wait.PollUntilContextTimeout(t.Context(), 1*time.Second, 10*time.Second, true, func(context.Context) (done bool, err error) {
deployment, err := fixture.KubeClientset.AppsV1().Deployments(fixture.DeploymentNamespace()).Get(t.Context(), "guestbook-ui", metav1.GetOptions{})
if err != nil {
return false, nil
}
revisionHistoryLimit := deployment.Spec.RevisionHistoryLimit
return revisionHistoryLimit != nil && *revisionHistoryLimit == 10, nil
})
require.NoError(t, timeoutErr)
}).
// Update the Deployment to a different revisionHistoryLimit
And(func() {
errors.NewHandler(t).FailOnErr(fixture.KubeClientset.AppsV1().Deployments(fixture.DeploymentNamespace()).Patch(t.Context(),
"guestbook-ui", types.MergePatchType, []byte(`{"spec": {"revisionHistoryLimit": 9}}`), metav1.PatchOptions{}))
}).
// The revisionHistoryLimit should NOT be self-healed, because selfHealing: false. It should remain at 9.
And(func() {
// Wait up to 10 seconds to ensure that the deployment revisionHistoryLimit does NOT switch back to 10; it should remain at 9.
waitErr := wait.PollUntilContextTimeout(t.Context(), 1*time.Second, 10*time.Second, true, func(context.Context) (done bool, err error) {
deployment, err := fixture.KubeClientset.AppsV1().Deployments(fixture.DeploymentNamespace()).Get(t.Context(), "guestbook-ui", metav1.GetOptions{})
if err != nil {
return false, nil
}
revisionHistoryLimit := deployment.Spec.RevisionHistoryLimit
return revisionHistoryLimit != nil && *revisionHistoryLimit != 9, nil
})
require.Error(t, waitErr, "A timeout error should occur, indicating that revisionHistoryLimit never changed from 9")
})
}
func TestAutomatedSelfHealingAgainstLightweightTag(t *testing.T) {
Given(t).
Path(guestbookPath).
When().
AddTag("annotated-tag").
// App should be auto-synced once created
CreateFromFile(func(app *Application) {
app.Spec.Source.TargetRevision = "annotated-tag"
app.Spec.SyncPolicy = &SyncPolicy{Automated: &SyncPolicyAutomated{Prune: true, SelfHeal: false}}
}).
Then().
Expect(SyncStatusIs(SyncStatusCodeSynced)).
ExpectConsistently(SyncStatusIs(SyncStatusCodeSynced), WaitDuration, time.Second*10).
When().
// Update the tag to point to a new git commit that has a new revisionHistoryLimit.
PatchFile("guestbook-ui-deployment.yaml", `[{"op": "replace", "path": "/spec/revisionHistoryLimit", "value": 10}]`).
AddTagWithForce("annotated-tag").
Refresh(RefreshTypeHard).
// The Application should update to the new annotated tag value within 10 seconds.
And(func() {
// Deployment revisionHistoryLimit should switch to 10
timeoutErr := wait.PollUntilContextTimeout(t.Context(), 1*time.Second, 10*time.Second, true, func(context.Context) (done bool, err error) {
deployment, err := fixture.KubeClientset.AppsV1().Deployments(fixture.DeploymentNamespace()).Get(t.Context(), "guestbook-ui", metav1.GetOptions{})
if err != nil {
return false, nil
}
revisionHistoryLimit := deployment.Spec.RevisionHistoryLimit
return revisionHistoryLimit != nil && *revisionHistoryLimit == 10, nil
})
require.NoError(t, timeoutErr)
}).
// Update the Deployment to a different revisionHistoryLimit
And(func() {
errors.NewHandler(t).FailOnErr(fixture.KubeClientset.AppsV1().Deployments(fixture.DeploymentNamespace()).Patch(t.Context(),
"guestbook-ui", types.MergePatchType, []byte(`{"spec": {"revisionHistoryLimit": 9}}`), metav1.PatchOptions{}))
}).
// The revisionHistoryLimit should NOT be self-healed, because selfHealing: false
And(func() {
// Wait up to 10 seconds to ensure that the deployment revisionHistoryLimit does NOT switch back to 10; it should remain at 9.
waitErr := wait.PollUntilContextTimeout(t.Context(), 1*time.Second, 10*time.Second, true, func(context.Context) (done bool, err error) {
deployment, err := fixture.KubeClientset.AppsV1().Deployments(fixture.DeploymentNamespace()).Get(t.Context(), "guestbook-ui", metav1.GetOptions{})
if err != nil {
return false, nil
}
revisionHistoryLimit := deployment.Spec.RevisionHistoryLimit
return revisionHistoryLimit != nil && *revisionHistoryLimit != 9, nil
})
require.Error(t, waitErr, "A timeout error should occur, indicating that revisionHistoryLimit never changed from 9")
})
}

View File

@@ -37,15 +37,17 @@ export const ApplicationHydrateOperationState: React.FunctionComponent<Props> =
if (hydrateOperationState.finishedAt && hydrateOperationState.phase !== 'Hydrating') {
operationAttributes.push({title: 'FINISHED AT', value: <Timestamp date={hydrateOperationState.finishedAt} />});
}
operationAttributes.push({
title: 'DRY REVISION',
value: (
<div>
<Revision repoUrl={hydrateOperationState.sourceHydrator.drySource.repoURL} revision={hydrateOperationState.drySHA} />
</div>
)
});
if (hydrateOperationState.finishedAt) {
if (hydrateOperationState.drySHA) {
operationAttributes.push({
title: 'DRY REVISION',
value: (
<div>
<Revision repoUrl={hydrateOperationState.sourceHydrator.drySource.repoURL} revision={hydrateOperationState.drySHA} />
</div>
)
});
}
if (hydrateOperationState.finishedAt && hydrateOperationState.hydratedSHA) {
operationAttributes.push({
title: 'HYDRATED REVISION',
value: (

View File

@@ -275,13 +275,15 @@ export const ApplicationNodeInfo = (props: {
Resource not found in cluster:{' '}
{`${props?.controlled?.state?.targetState?.apiVersion}/${props?.controlled?.state?.targetState?.kind}:${props.node.name}`}
<br />
{props?.controlled?.state?.normalizedLiveState?.apiVersion && (
<span>
Please update your resource specification to use the latest Kubernetes API resources supported by the target cluster. The
recommended syntax is{' '}
{`${props.controlled.state.normalizedLiveState.apiVersion}/${props?.controlled.state.normalizedLiveState?.kind}:${props.node.name}`}
</span>
)}
{props?.controlled?.state?.normalizedLiveState?.apiVersion &&
`${props?.controlled?.state?.targetState?.apiVersion}/${props?.controlled?.state?.targetState?.kind}:${props.node.name}` !==
`${props.controlled.state.normalizedLiveState.apiVersion}/${props?.controlled.state.normalizedLiveState?.kind}:${props.node.name}` && (
<span>
Please update your resource specification to use the latest Kubernetes API resources supported by the target cluster. The
recommended syntax is{' '}
{`${props.controlled.state.normalizedLiveState.apiVersion}/${props?.controlled.state.normalizedLiveState?.kind}:${props.node.name}`}
</span>
)}
</div>
)}
</React.Fragment>

View File

@@ -243,7 +243,7 @@ export const ApplicationStatusPanel = ({application, showDiff, showOperation, sh
}}>
{(data: models.ApplicationSyncWindowState) => (
<React.Fragment>
{data.assignedWindows && (
{data?.assignedWindows && (
<div className='application-status-panel__item' style={{position: 'relative'}}>
{sectionLabel({
title: 'SYNC WINDOWS',

View File

@@ -95,7 +95,7 @@ export const ApplicationsRefreshPanel = ({show, apps, hide}: {show: boolean; app
))}
</div>
</div>
<ApplicationSelector apps={apps} formApi={formApi} />
{show && <ApplicationSelector apps={apps} formApi={formApi} />}
</div>
</React.Fragment>
)}

View File

@@ -147,7 +147,7 @@ export const ApplicationsSyncPanel = ({show, apps, hide}: {show: boolean; apps:
<ApplicationRetryOptions id='applications-sync-panel' formApi={formApi} />
<ApplicationSelector apps={apps} formApi={formApi} />
{show && <ApplicationSelector apps={apps} formApi={formApi} />}
</div>
</React.Fragment>
)}

View File

@@ -1400,13 +1400,13 @@ export const SyncWindowStatusIcon = ({state, window}: {state: appModels.SyncWind
);
};
export const ApplicationSyncWindowStatusIcon = ({project, state}: {project: string; state: appModels.ApplicationSyncWindowState}) => {
export const ApplicationSyncWindowStatusIcon = ({project, state}: {project: string; state?: appModels.ApplicationSyncWindowState}) => {
let className = '';
let color = '';
let deny = false;
let allow = false;
let inactiveAllow = false;
if (state.assignedWindows !== undefined && state.assignedWindows.length > 0) {
if (state?.assignedWindows !== undefined && state?.assignedWindows.length > 0) {
if (state.activeWindows !== undefined && state.activeWindows.length > 0) {
for (const w of state.activeWindows) {
if (w.kind === 'deny') {

View File

@@ -481,7 +481,9 @@ export interface HydrateOperation {
finishedAt?: models.Time;
phase: HydrateOperationPhase;
message: string;
// drySHA is the sha of the DRY commit being hydrated. This will be empty if the operation is not successful.
drySHA: string;
// hydratedSHA is the sha of the hydrated commit. This will be empty if the operation is not successful.
hydratedSHA: string;
sourceHydrator: SourceHydrator;
}

View File

@@ -97,23 +97,41 @@ func CheckOutOfBoundsSymlinks(basePath string) error {
})
}
// GetAppRefreshPaths returns the list of paths that should trigger a refresh for an application
func GetAppRefreshPaths(app *v1alpha1.Application) []string {
// GetSourceRefreshPaths returns the list of paths that should trigger a refresh for an application.
// The source parameter influences the returned refresh paths:
// - if source hydrator configured AND source is syncSource: use sync source path (ignores annotation)
// - if source hydrator configured AND source is drySource WITH annotation: use annotation paths with drySource base
// - if source hydrator not configured: use annotation paths with source base, or empty if no annotation
func GetSourceRefreshPaths(app *v1alpha1.Application, source v1alpha1.ApplicationSource) []string {
annotationPaths, hasAnnotation := app.Annotations[v1alpha1.AnnotationKeyManifestGeneratePaths]
if app.Spec.SourceHydrator != nil {
syncSource := app.Spec.SourceHydrator.GetSyncSource()
// if source is syncSource use the source path
if (source).Equals(&syncSource) {
return []string{source.Path}
}
}
var paths []string
if val, ok := app.Annotations[v1alpha1.AnnotationKeyManifestGeneratePaths]; ok && val != "" {
for _, item := range strings.Split(val, ";") {
if hasAnnotation && annotationPaths != "" {
for _, item := range strings.Split(annotationPaths, ";") {
// skip empty paths
if item == "" {
continue
}
// if absolute path, add as is
if filepath.IsAbs(item) {
paths = append(paths, item[1:])
} else {
for _, source := range app.Spec.GetSources() {
paths = append(paths, filepath.Clean(filepath.Join(source.Path, item)))
}
continue
}
// add the path relative to the source path
paths = append(paths, filepath.Clean(filepath.Join(source.Path, item)))
}
}
return paths
}
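The annotation-handling branch of `GetSourceRefreshPaths` can be condensed into a small runnable sketch: the manifest-generate-paths annotation is a ";"-separated list where an absolute entry is used as-is (minus the leading slash) and a relative entry is joined onto the source path. This is a simplified stand-in (`refreshPaths` is a hypothetical name; it omits the source-hydrator and multi-source branches):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// refreshPaths resolves a ";"-separated manifest-generate-paths
// annotation against a single source path.
func refreshPaths(annotation, sourcePath string) []string {
	var paths []string
	for _, item := range strings.Split(annotation, ";") {
		// skip empty segments such as the middle of "a;;b"
		if item == "" {
			continue
		}
		// absolute paths are repo-rooted: strip the leading slash
		if filepath.IsAbs(item) {
			paths = append(paths, item[1:])
			continue
		}
		// relative paths are resolved against the source path
		paths = append(paths, filepath.Clean(filepath.Join(sourcePath, item)))
	}
	return paths
}

func main() {
	fmt.Println(refreshPaths(".;/shared;manifests", "apps/guestbook"))
	// prints: [apps/guestbook shared apps/guestbook/manifests]
}
```

The diff's key behavioral change sits before this logic: when a source hydrator is configured and the source being checked is the sync source, the annotation is ignored and the sync source path alone triggers refreshes.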

View File

@@ -9,6 +9,7 @@ import (
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/ptr"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
fileutil "github.com/argoproj/argo-cd/v3/test/fixture/path"
@@ -100,117 +101,102 @@ func TestAbsSymlink(t *testing.T) {
assert.Equal(t, "abslink", oobError.File)
}
func getApp(annotation *string, sourcePath *string) *v1alpha1.Application {
app := &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "test-app",
},
}
if annotation != nil {
app.Annotations = make(map[string]string)
app.Annotations[v1alpha1.AnnotationKeyManifestGeneratePaths] = *annotation
}
if sourcePath != nil {
app.Spec.Source = &v1alpha1.ApplicationSource{
Path: *sourcePath,
}
}
return app
}
func getSourceHydratorApp(annotation *string, drySourcePath string, syncSourcePath string) *v1alpha1.Application {
app := getApp(annotation, nil)
app.Spec.SourceHydrator = &v1alpha1.SourceHydrator{
DrySource: v1alpha1.DrySource{
Path: drySourcePath,
},
SyncSource: v1alpha1.SyncSource{
Path: syncSourcePath,
},
}
return app
}
func Test_GetAppRefreshPaths(t *testing.T) {
tests := []struct {
name string
app *v1alpha1.Application
source v1alpha1.ApplicationSource
expectedPaths []string
}{
{
name: "single source without annotation",
app: getApp(nil, ptr.To("source/path")),
source: v1alpha1.ApplicationSource{Path: "source/path"},
expectedPaths: []string{},
},
{
name: "single source with annotation",
app: getApp(ptr.To(".;dev/deploy;other/path"), ptr.To("source/path")),
source: v1alpha1.ApplicationSource{Path: "source/path"},
expectedPaths: []string{"source/path", "source/path/dev/deploy", "source/path/other/path"},
},
{
name: "single source with empty annotation",
app: getApp(ptr.To(".;;"), ptr.To("source/path")),
source: v1alpha1.ApplicationSource{Path: "source/path"},
expectedPaths: []string{"source/path"},
},
{
name: "single source with absolute path annotation",
app: getApp(ptr.To("/fullpath/deploy;other/path"), ptr.To("source/path")),
source: v1alpha1.ApplicationSource{Path: "source/path"},
expectedPaths: []string{"fullpath/deploy", "source/path/other/path"},
},
{
name: "source hydrator sync source without annotation",
app: getSourceHydratorApp(nil, "dry/path", "sync/path"),
source: v1alpha1.ApplicationSource{Path: "sync/path"},
expectedPaths: []string{"sync/path"},
},
{
name: "source hydrator dry source without annotation",
app: getSourceHydratorApp(nil, "dry/path", "sync/path"),
source: v1alpha1.ApplicationSource{Path: "dry/path"},
expectedPaths: []string{},
},
{
name: "source hydrator sync source with annotation",
app: getSourceHydratorApp(ptr.To("deploy"), "dry/path", "sync/path"),
source: v1alpha1.ApplicationSource{Path: "sync/path"},
expectedPaths: []string{"sync/path"},
},
{
name: "source hydrator dry source with annotation",
app: getSourceHydratorApp(ptr.To("deploy"), "dry/path", "sync/path"),
source: v1alpha1.ApplicationSource{Path: "dry/path"},
expectedPaths: []string{"dry/path/deploy"},
},
}
for _, tt := range tests {
ttc := tt
t.Run(ttc.name, func(t *testing.T) {
t.Parallel()
assert.ElementsMatch(t, ttc.expectedPaths, GetSourceRefreshPaths(ttc.app, ttc.source), "GetSourceRefreshPaths()")
})
}
}


@@ -564,7 +564,7 @@ func ValidatePermissions(ctx context.Context, spec *argoappv1.ApplicationSpec, p
if !proj.IsSourcePermitted(spec.SourceHydrator.GetDrySource()) {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("application repo %s is not permitted in project '%s'", spec.SourceHydrator.GetDrySource().RepoURL, proj.Name),
})
}
case spec.HasMultipleSources():
@@ -578,7 +578,7 @@ func ValidatePermissions(ctx context.Context, spec *argoappv1.ApplicationSpec, p
if !proj.IsSourcePermitted(source) {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("application repo %s is not permitted in project '%s'", source.RepoURL, proj.Name),
})
}
}
@@ -591,7 +591,7 @@ func ValidatePermissions(ctx context.Context, spec *argoappv1.ApplicationSpec, p
if !proj.IsSourcePermitted(spec.GetSource()) {
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("application repo %s is not permitted in project '%s'", spec.GetSource().RepoURL, proj.Name),
})
}
}
@@ -604,22 +604,21 @@ func ValidatePermissions(ctx context.Context, spec *argoappv1.ApplicationSpec, p
})
return conditions, nil
}
permitted, err := proj.IsDestinationPermitted(destCluster, spec.Destination.Namespace, func(project string) ([]*argoappv1.Cluster, error) {
return db.GetProjectClusters(ctx, project)
})
if err != nil {
return nil, err
}
if !permitted {
server := destCluster.Server
if spec.Destination.Name != "" {
server = destCluster.Name
}
conditions = append(conditions, argoappv1.ApplicationCondition{
Type: argoappv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("application destination server '%s' and namespace '%s' do not match any of the allowed destinations in project '%s'", server, spec.Destination.Namespace, proj.Name),
})
}
return conditions, nil
}
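The `server` selection in the hunk above (echo the cluster name in the error when the destination was addressed by name, rather than the resolved server URL) can be sketched in isolation. `destinationLabel` and the struct types here are simplified stand-ins, not Argo CD's API:

```go
package main

import "fmt"

// Simplified stand-ins for the destination spec and the resolved cluster.
type destination struct{ Server, Name string }
type cluster struct{ Server, Name string }

// destinationLabel mirrors the fix above: if the user referenced the
// destination by name, report the cluster name; otherwise report the
// server URL they actually specified.
func destinationLabel(dest destination, c cluster) string {
	if dest.Name != "" {
		return c.Name
	}
	return c.Server
}

func main() {
	c := cluster{Server: "https://kubernetes.default.svc", Name: "in-cluster"}
	fmt.Println(destinationLabel(destination{Server: c.Server}, c)) // addressed by URL
	fmt.Println(destinationLabel(destination{Name: "in-cluster"}, c)) // addressed by name
}
```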


@@ -34,9 +34,9 @@ func (s *secretsRepositoryBackend) CreateRepository(ctx context.Context, reposit
},
}
updatedSecret := s.repositoryToSecret(repository, repositorySecret)
_, err := s.db.createSecret(ctx, updatedSecret)
if err != nil {
if apierrors.IsAlreadyExists(err) {
hasLabel, err := s.hasRepoTypeLabel(secName)
@@ -142,9 +142,9 @@ func (s *secretsRepositoryBackend) UpdateRepository(ctx context.Context, reposit
return nil, err
}
updatedSecret := s.repositoryToSecret(repository, repositorySecret)
_, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, updatedSecret, metav1.UpdateOptions{})
if err != nil {
return nil, err
}
@@ -187,9 +187,9 @@ func (s *secretsRepositoryBackend) CreateRepoCreds(ctx context.Context, repoCred
},
}
updatedSecret := repoCredsToSecret(repoCreds, repoCredsSecret)
_, err := s.db.createSecret(ctx, updatedSecret)
if err != nil {
if apierrors.IsAlreadyExists(err) {
return nil, status.Errorf(codes.AlreadyExists, "repository credentials %q already exists", repoCreds.URL)
@@ -237,9 +237,9 @@ func (s *secretsRepositoryBackend) UpdateRepoCreds(ctx context.Context, repoCred
return nil, err
}
updatedSecret := repoCredsToSecret(repoCreds, repoCredsSecret)
repoCredsSecret, err = s.db.kubeclientset.CoreV1().Secrets(s.db.ns).Update(ctx, updatedSecret, metav1.UpdateOptions{})
if err != nil {
return nil, err
}
@@ -301,67 +301,69 @@ func (s *secretsRepositoryBackend) GetAllHelmRepoCreds(_ context.Context) ([]*ap
}
func secretToRepository(secret *corev1.Secret) (*appsv1.Repository, error) {
secretCopy := secret.DeepCopy()
repository := &appsv1.Repository{
Name: string(secretCopy.Data["name"]),
Repo: string(secretCopy.Data["url"]),
Username: string(secretCopy.Data["username"]),
Password: string(secretCopy.Data["password"]),
BearerToken: string(secretCopy.Data["bearerToken"]),
SSHPrivateKey: string(secretCopy.Data["sshPrivateKey"]),
TLSClientCertData: string(secretCopy.Data["tlsClientCertData"]),
TLSClientCertKey: string(secretCopy.Data["tlsClientCertKey"]),
Type: string(secretCopy.Data["type"]),
GithubAppPrivateKey: string(secretCopy.Data["githubAppPrivateKey"]),
GitHubAppEnterpriseBaseURL: string(secretCopy.Data["githubAppEnterpriseBaseUrl"]),
Proxy: string(secretCopy.Data["proxy"]),
NoProxy: string(secretCopy.Data["noProxy"]),
Project: string(secretCopy.Data["project"]),
GCPServiceAccountKey: string(secretCopy.Data["gcpServiceAccountKey"]),
}
insecureIgnoreHostKey, err := boolOrFalse(secretCopy, "insecureIgnoreHostKey")
if err != nil {
return repository, err
}
repository.InsecureIgnoreHostKey = insecureIgnoreHostKey
insecure, err := boolOrFalse(secretCopy, "insecure")
if err != nil {
return repository, err
}
repository.Insecure = insecure
enableLfs, err := boolOrFalse(secretCopy, "enableLfs")
if err != nil {
return repository, err
}
repository.EnableLFS = enableLfs
enableOCI, err := boolOrFalse(secretCopy, "enableOCI")
if err != nil {
return repository, err
}
repository.EnableOCI = enableOCI
githubAppID, err := intOrZero(secretCopy, "githubAppID")
if err != nil {
return repository, err
}
repository.GithubAppId = githubAppID
githubAppInstallationID, err := intOrZero(secretCopy, "githubAppInstallationID")
if err != nil {
return repository, err
}
repository.GithubAppInstallationId = githubAppInstallationID
forceBasicAuth, err := boolOrFalse(secretCopy, "forceHttpBasicAuth")
if err != nil {
return repository, err
}
repository.ForceHttpBasicAuth = forceBasicAuth
useAzureWorkloadIdentity, err := boolOrFalse(secretCopy, "useAzureWorkloadIdentity")
if err != nil {
return repository, err
}
@@ -370,79 +372,85 @@ func secretToRepository(secret *corev1.Secret) (*appsv1.Repository, error) {
return repository, nil
}
func (s *secretsRepositoryBackend) repositoryToSecret(repository *appsv1.Repository, secret *corev1.Secret) *corev1.Secret {
secretCopy := secret.DeepCopy()
if secretCopy.Data == nil {
secretCopy.Data = make(map[string][]byte)
}
updateSecretString(secretCopy, "name", repository.Name)
updateSecretString(secretCopy, "project", repository.Project)
updateSecretString(secretCopy, "url", repository.Repo)
updateSecretString(secretCopy, "username", repository.Username)
updateSecretString(secretCopy, "password", repository.Password)
updateSecretString(secretCopy, "bearerToken", repository.BearerToken)
updateSecretString(secretCopy, "sshPrivateKey", repository.SSHPrivateKey)
updateSecretBool(secretCopy, "enableOCI", repository.EnableOCI)
updateSecretString(secretCopy, "tlsClientCertData", repository.TLSClientCertData)
updateSecretString(secretCopy, "tlsClientCertKey", repository.TLSClientCertKey)
updateSecretString(secretCopy, "type", repository.Type)
updateSecretString(secretCopy, "githubAppPrivateKey", repository.GithubAppPrivateKey)
updateSecretInt(secretCopy, "githubAppID", repository.GithubAppId)
updateSecretInt(secretCopy, "githubAppInstallationID", repository.GithubAppInstallationId)
updateSecretString(secretCopy, "githubAppEnterpriseBaseUrl", repository.GitHubAppEnterpriseBaseURL)
updateSecretBool(secretCopy, "insecureIgnoreHostKey", repository.InsecureIgnoreHostKey)
updateSecretBool(secretCopy, "insecure", repository.Insecure)
updateSecretBool(secretCopy, "enableLfs", repository.EnableLFS)
updateSecretString(secretCopy, "proxy", repository.Proxy)
updateSecretString(secretCopy, "noProxy", repository.NoProxy)
updateSecretString(secretCopy, "gcpServiceAccountKey", repository.GCPServiceAccountKey)
updateSecretBool(secretCopy, "forceHttpBasicAuth", repository.ForceHttpBasicAuth)
updateSecretBool(secretCopy, "useAzureWorkloadIdentity", repository.UseAzureWorkloadIdentity)
addSecretMetadata(secretCopy, s.getSecretType())
return secretCopy
}
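The change to `repositoryToSecret` above follows a copy-on-write pattern: mutate a deep copy and return it, so goroutines reading the shared original never race with a concurrent writer. A minimal sketch of that pattern, with simplified types that are not Argo CD's:

```go
package main

import (
	"fmt"
	"sync"
)

// record is a simplified stand-in for a Secret-like object.
type record struct{ data map[string][]byte }

// deepCopy copies the map and each value slice, so the copy shares no
// mutable state with the original.
func (r *record) deepCopy() *record {
	cp := &record{data: make(map[string][]byte, len(r.data))}
	for k, v := range r.data {
		cp.data[k] = append([]byte(nil), v...)
	}
	return cp
}

// updateRecord never touches the shared input, only the returned copy.
func updateRecord(shared *record, key, val string) *record {
	cp := shared.deepCopy()
	cp.data[key] = []byte(val)
	return cp
}

func main() {
	shared := &record{data: map[string][]byte{"url": []byte("https://example.com/repo.git")}}
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(2)
		go func() { defer wg.Done(); _ = updateRecord(shared, "username", "alice") }() // writer mutates only its copy
		go func() { defer wg.Done(); _ = shared.data["url"] }()                        // reader sees an unchanging map
	}
	wg.Wait()
	fmt.Println(string(shared.data["url"])) // original is untouched
}
```

Run under `go run -race` this stays clean, whereas mutating `shared.data` in place from the writer goroutines would trip the race detector, which is exactly what the concurrency tests later in this diff exercise.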
func (s *secretsRepositoryBackend) secretToRepoCred(secret *corev1.Secret) (*appsv1.RepoCreds, error) {
secretCopy := secret.DeepCopy()
repository := &appsv1.RepoCreds{
URL: string(secretCopy.Data["url"]),
Username: string(secretCopy.Data["username"]),
Password: string(secretCopy.Data["password"]),
BearerToken: string(secretCopy.Data["bearerToken"]),
SSHPrivateKey: string(secretCopy.Data["sshPrivateKey"]),
TLSClientCertData: string(secretCopy.Data["tlsClientCertData"]),
TLSClientCertKey: string(secretCopy.Data["tlsClientCertKey"]),
Type: string(secretCopy.Data["type"]),
GithubAppPrivateKey: string(secretCopy.Data["githubAppPrivateKey"]),
GitHubAppEnterpriseBaseURL: string(secretCopy.Data["githubAppEnterpriseBaseUrl"]),
GCPServiceAccountKey: string(secretCopy.Data["gcpServiceAccountKey"]),
Proxy: string(secretCopy.Data["proxy"]),
NoProxy: string(secretCopy.Data["noProxy"]),
}
enableOCI, err := boolOrFalse(secretCopy, "enableOCI")
if err != nil {
return repository, err
}
repository.EnableOCI = enableOCI
githubAppID, err := intOrZero(secretCopy, "githubAppID")
if err != nil {
return repository, err
}
repository.GithubAppId = githubAppID
githubAppInstallationID, err := intOrZero(secretCopy, "githubAppInstallationID")
if err != nil {
return repository, err
}
repository.GithubAppInstallationId = githubAppInstallationID
forceBasicAuth, err := boolOrFalse(secretCopy, "forceHttpBasicAuth")
if err != nil {
return repository, err
}
repository.ForceHttpBasicAuth = forceBasicAuth
useAzureWorkloadIdentity, err := boolOrFalse(secretCopy, "useAzureWorkloadIdentity")
if err != nil {
return repository, err
}
@@ -451,30 +459,34 @@ func (s *secretsRepositoryBackend) secretToRepoCred(secret *corev1.Secret) (*app
return repository, nil
}
func repoCredsToSecret(repoCreds *appsv1.RepoCreds, secret *corev1.Secret) *corev1.Secret {
secretCopy := secret.DeepCopy()
if secretCopy.Data == nil {
secretCopy.Data = make(map[string][]byte)
}
updateSecretString(secretCopy, "url", repoCreds.URL)
updateSecretString(secretCopy, "username", repoCreds.Username)
updateSecretString(secretCopy, "password", repoCreds.Password)
updateSecretString(secretCopy, "bearerToken", repoCreds.BearerToken)
updateSecretString(secretCopy, "sshPrivateKey", repoCreds.SSHPrivateKey)
updateSecretBool(secretCopy, "enableOCI", repoCreds.EnableOCI)
updateSecretString(secretCopy, "tlsClientCertData", repoCreds.TLSClientCertData)
updateSecretString(secretCopy, "tlsClientCertKey", repoCreds.TLSClientCertKey)
updateSecretString(secretCopy, "type", repoCreds.Type)
updateSecretString(secretCopy, "githubAppPrivateKey", repoCreds.GithubAppPrivateKey)
updateSecretInt(secretCopy, "githubAppID", repoCreds.GithubAppId)
updateSecretInt(secretCopy, "githubAppInstallationID", repoCreds.GithubAppInstallationId)
updateSecretString(secretCopy, "githubAppEnterpriseBaseUrl", repoCreds.GitHubAppEnterpriseBaseURL)
updateSecretString(secretCopy, "gcpServiceAccountKey", repoCreds.GCPServiceAccountKey)
updateSecretString(secretCopy, "proxy", repoCreds.Proxy)
updateSecretString(secretCopy, "noProxy", repoCreds.NoProxy)
updateSecretBool(secretCopy, "forceHttpBasicAuth", repoCreds.ForceHttpBasicAuth)
updateSecretBool(secretCopy, "useAzureWorkloadIdentity", repoCreds.UseAzureWorkloadIdentity)
addSecretMetadata(secretCopy, common.LabelValueSecretTypeRepoCreds)
return secretCopy
}
func (s *secretsRepositoryBackend) getRepositorySecret(repoURL, project string, allowFallback bool) (*corev1.Secret, error) {


@@ -1,7 +1,9 @@
package db
import (
"fmt"
"strconv"
"sync"
"testing"
"github.com/stretchr/testify/assert"
@@ -83,9 +85,9 @@ func TestSecretsRepositoryBackend_CreateRepository(t *testing.T) {
t.Parallel()
secret := &corev1.Secret{}
s := secretsRepositoryBackend{}
updatedSecret := s.repositoryToSecret(repo, secret)
delete(updatedSecret.Labels, common.LabelKeySecretType)
f := setupWithK8sObjects(updatedSecret)
f.clientSet.ReactionChain = nil
f.clientSet.AddReactor("create", "secrets", func(_ k8stesting.Action) (handled bool, ret runtime.Object, err error) {
gr := schema.GroupResource{
@@ -120,8 +122,8 @@ func TestSecretsRepositoryBackend_CreateRepository(t *testing.T) {
},
}
s := secretsRepositoryBackend{}
updatedSecret := s.repositoryToSecret(repo, secret)
f := setupWithK8sObjects(updatedSecret)
f.clientSet.ReactionChain = nil
f.clientSet.WatchReactionChain = nil
f.clientSet.AddReactor("create", "secrets", func(_ k8stesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -132,7 +134,7 @@ func TestSecretsRepositoryBackend_CreateRepository(t *testing.T) {
return true, nil, apierrors.NewAlreadyExists(gr, "already exists")
})
watcher := watch.NewFakeWithChanSize(1, true)
watcher.Add(updatedSecret)
f.clientSet.AddWatchReactor("secrets", func(_ k8stesting.Action) (handled bool, ret watch.Interface, err error) {
return true, watcher, nil
})
@@ -944,7 +946,7 @@ func TestRepoCredsToSecret(t *testing.T) {
GithubAppInstallationId: 456,
GitHubAppEnterpriseBaseURL: "GitHubAppEnterpriseBaseURL",
}
s = repoCredsToSecret(creds, s)
assert.Equal(t, []byte(creds.URL), s.Data["url"])
assert.Equal(t, []byte(creds.Username), s.Data["username"])
assert.Equal(t, []byte(creds.Password), s.Data["password"])
@@ -960,3 +962,169 @@ func TestRepoCredsToSecret(t *testing.T) {
assert.Equal(t, map[string]string{common.AnnotationKeyManagedBy: common.AnnotationValueManagedByArgoCD}, s.Annotations)
assert.Equal(t, map[string]string{common.LabelKeySecretType: common.LabelValueSecretTypeRepoCreds}, s.Labels)
}
func TestRaceConditionInRepoCredsOperations(t *testing.T) {
// Create a single shared secret that will be accessed concurrently
sharedSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: RepoURLToSecretName(repoSecretPrefix, "git@github.com:argoproj/argo-cd.git", ""),
Namespace: testNamespace,
Labels: map[string]string{
common.LabelKeySecretType: common.LabelValueSecretTypeRepoCreds,
},
},
Data: map[string][]byte{
"url": []byte("git@github.com:argoproj/argo-cd.git"),
"username": []byte("test-user"),
"password": []byte("test-pass"),
},
}
// Create test credentials that we'll use for conversion
repoCreds := &appsv1.RepoCreds{
URL: "git@github.com:argoproj/argo-cd.git",
Username: "test-user",
Password: "test-pass",
}
backend := &secretsRepositoryBackend{}
var wg sync.WaitGroup
concurrentOps := 50
errChan := make(chan error, concurrentOps*2) // Channel to collect errors
// Launch goroutines that perform concurrent operations
for i := 0; i < concurrentOps; i++ {
wg.Add(2)
// One goroutine converts from RepoCreds to Secret
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in repoCredsToSecret: %v", r)
}
}()
_ = repoCredsToSecret(repoCreds, sharedSecret)
}()
// Another goroutine converts from Secret to RepoCreds
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in secretToRepoCred: %v", r)
}
}()
creds, err := backend.secretToRepoCred(sharedSecret)
if err != nil {
errChan <- fmt.Errorf("error in secretToRepoCred: %w", err)
return
}
// Verify data integrity
if creds.URL != repoCreds.URL || creds.Username != repoCreds.Username || creds.Password != repoCreds.Password {
errChan <- fmt.Errorf("data mismatch in conversion: expected %v, got %v", repoCreds, creds)
}
}()
}
wg.Wait()
close(errChan)
// Check for any errors that occurred during concurrent operations
for err := range errChan {
t.Errorf("concurrent operation error: %v", err)
}
// Verify final state
finalCreds, err := backend.secretToRepoCred(sharedSecret)
require.NoError(t, err)
assert.Equal(t, repoCreds.URL, finalCreds.URL)
assert.Equal(t, repoCreds.Username, finalCreds.Username)
assert.Equal(t, repoCreds.Password, finalCreds.Password)
}
func TestRaceConditionInRepositoryOperations(t *testing.T) {
// Create a single shared secret that will be accessed concurrently
sharedSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: RepoURLToSecretName(repoSecretPrefix, "git@github.com:argoproj/argo-cd.git", ""),
Namespace: testNamespace,
Labels: map[string]string{
common.LabelKeySecretType: common.LabelValueSecretTypeRepository,
},
},
Data: map[string][]byte{
"url": []byte("git@github.com:argoproj/argo-cd.git"),
"name": []byte("test-repo"),
"username": []byte("test-user"),
"password": []byte("test-pass"),
},
}
// Create test repository that we'll use for conversion
repo := &appsv1.Repository{
Name: "test-repo",
Repo: "git@github.com:argoproj/argo-cd.git",
Username: "test-user",
Password: "test-pass",
}
backend := &secretsRepositoryBackend{}
var wg sync.WaitGroup
concurrentOps := 50
errChan := make(chan error, concurrentOps*2) // Channel to collect errors
// Launch goroutines that perform concurrent operations
for i := 0; i < concurrentOps; i++ {
wg.Add(2)
// One goroutine converts from Repository to Secret
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in repositoryToSecret: %v", r)
}
}()
_ = backend.repositoryToSecret(repo, sharedSecret)
}()
// Another goroutine converts from Secret to Repository
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in secretToRepository: %v", r)
}
}()
repository, err := secretToRepository(sharedSecret)
if err != nil {
errChan <- fmt.Errorf("error in secretToRepository: %w", err)
return
}
// Verify data integrity
if repository.Name != repo.Name || repository.Repo != repo.Repo ||
repository.Username != repo.Username || repository.Password != repo.Password {
errChan <- fmt.Errorf("data mismatch in conversion: expected %v, got %v", repo, repository)
}
}()
}
wg.Wait()
close(errChan)
// Check for any errors that occurred during concurrent operations
for err := range errChan {
t.Errorf("concurrent operation error: %v", err)
}
// Verify final state
finalRepo, err := secretToRepository(sharedSecret)
require.NoError(t, err)
assert.Equal(t, repo.Name, finalRepo.Name)
assert.Equal(t, repo.Repo, finalRepo.Repo)
assert.Equal(t, repo.Username, finalRepo.Username)
assert.Equal(t, repo.Password, finalRepo.Password)
}

util/env/env.go vendored

@@ -7,6 +7,8 @@ import (
"strings"
"time"
timeutil "github.com/argoproj/pkg/time"
log "github.com/sirupsen/logrus"
)
@@ -125,8 +127,13 @@ func ParseDurationFromEnv(env string, defaultValue, min, max time.Duration) time
}
dur, err := time.ParseDuration(str)
if err != nil {
-log.Warnf("Could not parse '%s' as a duration string from environment %s", str, env)
-return defaultValue
+// provides backwards compatibility for durations defined in days, see: https://github.com/argoproj/argo-cd/issues/24740
+durPtr, err2 := timeutil.ParseDuration(str)
+if err2 != nil {
+log.Warnf("Could not parse '%s' as a duration from environment %s", str, env)
+return defaultValue
+}
+dur = *durPtr
}
if dur < min {

util/env/env_test.go vendored

@@ -142,6 +142,90 @@ func TestParseDurationFromEnv(t *testing.T) {
}
}
func TestParseDurationFromEnvEdgeCases(t *testing.T) {
envKey := "SOME_ENV_KEY"
def := 3 * time.Minute
minimum := 1 * time.Second
maximum := 2160 * time.Hour // 3 months
testCases := []struct {
name string
env string
expected time.Duration
}{{
name: "EnvNotSet",
expected: def,
}, {
name: "Durations defined as days are valid",
env: "12d",
expected: time.Hour * 24 * 12,
}, {
name: "Negative durations should fail parsing and use the default value",
env: "-1h",
expected: def,
}, {
name: "Negative day durations should fail parsing and use the default value",
env: "-12d",
expected: def,
}, {
name: "Scientific notation should fail parsing and use the default value",
env: "1e3s",
expected: def,
}, {
name: "Durations with a leading zero are considered valid and parsed as decimal notation",
env: "0755s",
expected: time.Second * 755,
}, {
name: "Durations with many leading zeroes are considered valid and parsed as decimal notation",
env: "000083m",
expected: time.Minute * 83,
}, {
name: "Decimal Durations should not fail parsing",
env: "30.5m",
expected: time.Minute*30 + time.Second*30,
}, {
name: "Decimal Day Durations should fail parsing and use the default value",
env: "30.5d",
expected: def,
}, {
name: "Fraction Durations should fail parsing and use the default value",
env: "1/2h",
expected: def,
}, {
name: "Durations without a time unit should fail parsing and use the default value",
env: "15",
expected: def,
}, {
name: "Durations with a trailing symbol should fail parsing and use the default value",
env: "+12d",
expected: def,
}, {
name:     "Leading space Duration should fail parsing and use the default value",
env: " 2h",
expected: def,
}, {
name:     "Trailing space Duration should fail parsing and use the default value",
env: "6m ",
expected: def,
}, {
name:     "Empty Duration should fail parsing and use the default value",
env: "",
expected: def,
}, {
name: "Whitespace Duration should fail parsing and use the default value",
env: " ",
expected: def,
}}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Setenv(envKey, tc.env)
val := ParseDurationFromEnv(envKey, def, minimum, maximum)
assert.Equal(t, tc.expected, val)
})
}
}
func Test_ParseBoolFromEnv(t *testing.T) {
envKey := "SOMEKEY"


@@ -13,6 +13,7 @@ import (
"testing"
"time"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/transport"
githttp "github.com/go-git/go-git/v5/plumbing/transport/http"
"github.com/stretchr/testify/assert"
@@ -135,6 +136,19 @@ func Test_IsAnnotatedTag(t *testing.T) {
assert.False(t, atag)
}
func Test_resolveTagReference(t *testing.T) {
// Setup
commitHash := plumbing.NewHash("0123456789abcdef0123456789abcdef01234567")
tagRef := plumbing.NewReferenceFromStrings("refs/tags/v1.0.0", "sometaghash")
// Resolve the tag reference to the underlying commit hash
resolvedRef := plumbing.NewHashReference(tagRef.Name(), commitHash)
// Verify
assert.Equal(t, commitHash, resolvedRef.Hash())
assert.Equal(t, tagRef.Name(), resolvedRef.Name())
}
func Test_ChangedFiles(t *testing.T) {
tempDir := t.TempDir()


@@ -501,3 +501,27 @@ func TestLsFiles(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, nilResult, lsResult)
}
func TestAnnotatedTagHandling(t *testing.T) {
dir := t.TempDir()
client, err := NewClientExt("https://github.com/argoproj/argo-cd.git", dir, NopCreds{}, false, false, "", "")
require.NoError(t, err)
err = client.Init()
require.NoError(t, err)
// Test annotated tag resolution
commitSHA, err := client.LsRemote("v1.0.0") // Known annotated tag
require.NoError(t, err)
// Verify we get commit SHA, not tag SHA
assert.True(t, IsCommitSHA(commitSHA))
// Test tag reference handling
refs, err := client.LsRefs()
require.NoError(t, err)
// Verify tag exists in the list and points to a valid commit SHA
assert.Contains(t, refs.Tags, "v1.0.0", "Tag v1.0.0 should exist in refs")
}


@@ -85,6 +85,12 @@ func listRemote(r *git.Remote, o *git.ListOptions, insecure bool, creds Creds, p
var resultRefs []*plumbing.Reference
_ = refs.ForEach(func(ref *plumbing.Reference) error {
if ref.Name().IsTag() {
if peeled, ok := ar.Peeled[ref.Name().String()]; ok {
resultRefs = append(resultRefs, plumbing.NewHashReference(ref.Name(), peeled))
return nil
}
}
resultRefs = append(resultRefs, ref)
return nil
})
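The hunk above prefers the peeled commit hash for annotated tags: the advertised ref carries the tag object's hash, while the peeled map carries the commit the tag points at. A minimal sketch of that selection logic, using plain maps in place of go-git's reference types (names and shapes here are illustrative, not go-git's API):

```go
package main

import "fmt"

// peelRefs returns the refs with annotated tags resolved to their peeled
// commit hashes. Entries absent from the peeled map pass through unchanged.
func peelRefs(refs map[string]string, peeled map[string]string) map[string]string {
	out := make(map[string]string, len(refs))
	for name, hash := range refs {
		if commit, ok := peeled[name]; ok {
			// Annotated tag: prefer the commit it points at over the tag object.
			out[name] = commit
			continue
		}
		out[name] = hash
	}
	return out
}

func main() {
	refs := map[string]string{
		"refs/tags/v1.0.0":  "tagobjecthash",
		"refs/heads/master": "branchcommithash",
	}
	peeled := map[string]string{"refs/tags/v1.0.0": "tagcommithash"}
	fmt.Println(peelRefs(refs, peeled)["refs/tags/v1.0.0"]) // tagcommithash
}
```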

util/guard/guard.go Normal file

@@ -0,0 +1,20 @@
package guard
import (
"runtime/debug"
)
// Minimal logger contract; avoids depending on any specific logging package.
type Logger interface{ Errorf(string, ...any) }
// RecoverAndLog executes fn and recovers any panic, logging a component-specific message.
func RecoverAndLog(fn func(), log Logger, msg string) {
defer func() {
if r := recover(); r != nil {
if log != nil {
log.Errorf("%s: %v %s", msg, r, debug.Stack())
}
}
}()
fn()
}
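A typical use for a helper like this is wrapping goroutine bodies so that a panicking callback cannot take the process down. The following self-contained sketch duplicates the helper so it runs standalone; `stdoutLogger` is a made-up logger for illustration:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

type Logger interface{ Errorf(string, ...any) }

// RecoverAndLog runs fn, recovers any panic, and reports it via log.
func RecoverAndLog(fn func(), log Logger, msg string) {
	defer func() {
		if r := recover(); r != nil {
			if log != nil {
				log.Errorf("%s: %v %s", msg, r, debug.Stack())
			}
		}
	}()
	fn()
}

// stdoutLogger is a trivial Logger for the example.
type stdoutLogger struct{}

func (stdoutLogger) Errorf(format string, args ...any) {
	fmt.Printf(format+"\n", args...)
}

func main() {
	done := make(chan struct{})
	go func() {
		defer close(done)
		// The panic is contained inside RecoverAndLog.
		RecoverAndLog(func() { panic("boom") }, stdoutLogger{}, "worker crashed")
	}()
	<-done
	fmt.Println("process still running")
}
```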

util/guard/guard_test.go Normal file

@@ -0,0 +1,92 @@
package guard
import (
"fmt"
"strings"
"sync"
"testing"
)
type nop struct{}
func (nop) Errorf(string, ...any) {}
// recorder is a thread-safe logger that captures formatted messages.
type recorder struct {
mu sync.Mutex
calls int
msgs []string
}
func (r *recorder) Errorf(format string, args ...any) {
r.mu.Lock()
defer r.mu.Unlock()
r.calls++
r.msgs = append(r.msgs, fmt.Sprintf(format, args...))
}
func TestRun_Recovers(_ *testing.T) {
RecoverAndLog(func() { panic("boom") }, nop{}, "msg") // fails if panic escapes
}
func TestRun_AllowsNextCall(t *testing.T) {
ran := false
RecoverAndLog(func() { panic("boom") }, nop{}, "msg")
RecoverAndLog(func() { ran = true }, nop{}, "msg")
if !ran {
t.Fatal("expected second callback to run")
}
}
func TestRun_LogsMessageAndStack(t *testing.T) {
r := &recorder{}
RecoverAndLog(func() { panic("boom") }, r, "msg")
if r.calls != 1 {
t.Fatalf("expected 1 log call, got %d", r.calls)
}
got := strings.Join(r.msgs, "\n")
if !strings.Contains(got, "msg") {
t.Errorf("expected log to contain message %q; got %q", "msg", got)
}
if !strings.Contains(got, "boom") {
t.Errorf("expected log to contain panic value %q; got %q", "boom", got)
}
// Heuristic check that a stack trace was included.
if !strings.Contains(got, "guard.go") && !strings.Contains(got, "runtime/panic.go") && !strings.Contains(got, "goroutine") {
t.Errorf("expected log to contain a stack trace; got %q", got)
}
}
func TestRun_NilLoggerDoesNotPanic(_ *testing.T) {
var l Logger // nil
RecoverAndLog(func() { panic("boom") }, l, "ignored")
}
func TestRun_NoPanicDoesNotLog(t *testing.T) {
r := &recorder{}
ran := false
RecoverAndLog(func() { ran = true }, r, "msg")
if !ran {
t.Fatal("expected fn to run")
}
if r.calls != 0 {
t.Fatalf("expected 0 log calls, got %d", r.calls)
}
}
func TestRun_ConcurrentPanicsLogged(t *testing.T) {
r := &recorder{}
const n = 10
var wg sync.WaitGroup
wg.Add(n)
for i := 0; i < n; i++ {
go func(i int) {
defer wg.Done()
RecoverAndLog(func() { panic(fmt.Sprintf("boom-%d", i)) }, r, "msg")
}(i)
}
wg.Wait()
if r.calls != n {
t.Fatalf("expected %d log calls, got %d", n, r.calls)
}
}


@@ -3,6 +3,7 @@ package healthz
import (
"fmt"
"net/http"
"time"
log "github.com/sirupsen/logrus"
)
@@ -11,9 +12,13 @@ import (
// ServeHealthCheck relies on the provided function to return an error if unhealthy and nil otherwise.
func ServeHealthCheck(mux *http.ServeMux, f func(r *http.Request) error) {
mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
startTs := time.Now()
if err := f(r); err != nil {
w.WriteHeader(http.StatusServiceUnavailable)
-log.Errorln(w, err)
+log.WithFields(log.Fields{
+"duration": time.Since(startTs),
+"component": "healthcheck",
+}).WithError(err).Error("Error serving health check request")
} else {
fmt.Fprintln(w, "ok")
}


@@ -5,16 +5,22 @@ import (
"net"
"net/http"
"testing"
"time"
log "github.com/sirupsen/logrus"
"github.com/sirupsen/logrus/hooks/test"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestHealthCheck(t *testing.T) {
sentinel := false
lc := &net.ListenConfig{}
ctx := t.Context()
svcErrMsg := "This is a dummy error"
serve := func(c chan<- string) {
// listen on first available dynamic (unprivileged) port
-listener, err := net.Listen("tcp", ":0")
+listener, err := lc.Listen(ctx, "tcp", ":0")
if err != nil {
panic(err)
}
@@ -25,7 +31,7 @@ func TestHealthCheck(t *testing.T) {
mux := http.NewServeMux()
ServeHealthCheck(mux, func(_ *http.Request) error {
if sentinel {
-return errors.New("This is a dummy error")
+return errors.New(svcErrMsg)
}
return nil
})
@@ -47,7 +53,23 @@ func TestHealthCheck(t *testing.T) {
require.Equalf(t, http.StatusOK, resp.StatusCode, "Was expecting status code 200 from health check, but got %d instead", resp.StatusCode)
sentinel = true
hook := test.NewGlobal()
resp, _ = http.Get(server + "/healthz")
require.Equalf(t, http.StatusServiceUnavailable, resp.StatusCode, "Was expecting status code 503 from health check, but got %d instead", resp.StatusCode)
assert.NotEmpty(t, hook.Entries, "Was expecting at least one log entry from health check, but got none")
expectedMsg := "Error serving health check request"
var foundEntry log.Entry
for _, entry := range hook.Entries {
if entry.Level == log.ErrorLevel &&
entry.Message == expectedMsg {
foundEntry = entry
break
}
}
require.NotEmpty(t, foundEntry, "Expected an error message '%s', but it wasn't found", expectedMsg)
actualErr, ok := foundEntry.Data["error"].(error)
require.True(t, ok, "Expected 'error' field to contain an error, but it doesn't")
assert.Equal(t, svcErrMsg, actualErr.Error(), "expected original error message '"+svcErrMsg+"', but got '"+actualErr.Error()+"'")
assert.Greater(t, foundEntry.Data["duration"].(time.Duration), time.Duration(0))
}


@@ -49,101 +49,101 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the Redis chart and their default values.
| Parameter | Description | Default |
|--------------------------------------------|----------------------------------------------------------------------------------------------------------------|---------------------------------------------|
| `image.registry` | Redis Image registry | `docker.io` |
| `image.repository` | Redis Image name | `bitnamilegacy/redis` |
| `image.tag` | Redis Image tag | `{VERSION}` |
| `image.pullPolicy` | Image pull policy | `Always` |
| `image.pullSecrets` | Specify docker-registry secret names as an array | `nil` |
| `cluster.enabled` | Use master-slave topology | `true` |
| `cluster.slaveCount` | Number of slaves | 1 |
| `existingSecret` | Name of existing secret object (for password authentication) | `nil` |
| `usePassword` | Use password | `true` |
| `password` | Redis password (ignored if existingSecret set) | Randomly generated |
| `networkPolicy.enabled` | Enable NetworkPolicy | `false` |
| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `false` |
| `serviceAccount.name` | The name of the ServiceAccount to create | Generated using the fullname template |
| `rbac.create` | Specifies whether RBAC resources should be created | `false` |
| `rbac.role.rules` | Rules to create | `[]` |
| `metrics.enabled` | Start a side-car prometheus exporter | `false` |
| `metrics.image.registry` | Redis exporter image registry | `docker.io` |
| `metrics.image.repository` | Redis exporter image name | `bitnamilegacy/redis` |
| `metrics.image.tag` | Redis exporter image tag | `v0.20.2` |
| `metrics.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `nil` |
| `metrics.podLabels` | Additional labels for Metrics exporter pod | {} |
| `metrics.podAnnotations` | Additional annotations for Metrics exporter pod | {} |
| `master.service.type` | Kubernetes Service type (redis metrics) | `LoadBalancer` |
| `metrics.service.annotations` | Annotations for the services to monitor (redis master and redis slave service) | {} |
| `metrics.service.loadBalancerIP` | loadBalancerIP if redis metrics service type is `LoadBalancer` | `nil` |
| `metrics.resources` | Exporter resource requests/limit | Memory: `256Mi`, CPU: `100m` |
| `persistence.existingClaim` | Provide an existing PersistentVolumeClaim | `nil` |
| `master.persistence.enabled` | Use a PVC to persist data (master node) | `true` |
| `master.persistence.path` | Path to mount the volume at, to use other images | `/bitnami` |
| `master.persistence.subPath` | Subdirectory of the volume to mount at | `""` |
| `master.persistence.storageClass` | Storage class of backing PVC | `generic` |
| `master.persistence.accessModes` | Persistent Volume Access Modes | `[ReadWriteOnce]` |
| `master.persistence.size` | Size of data volume | `8Gi` |
| `master.statefulset.updateStrategy` | Update strategy for StatefulSet | onDelete |
| `master.statefulset.rollingUpdatePartition`| Partition update strategy | `nil` |
| `master.podLabels` | Additional labels for Redis master pod | {} |
| `master.podAnnotations` | Additional annotations for Redis master pod | {} |
| `master.port` | Redis master port | 6379 |
| `master.args` | Redis master command-line args | [] |
| `master.disableCommands` | Comma-separated list of Redis commands to disable (master) | `FLUSHDB,FLUSHALL` |
| `master.extraFlags` | Redis master additional command line flags | [] |
| `master.nodeSelector` | Redis master Node labels for pod assignment | {"kubernetes.io/arch": "amd64"} |
| `master.tolerations` | Toleration labels for Redis master pod assignment | [] |
| `master.affinity ` | Affinity settings for Redis master pod assignment | [] |
| `master.schedulerName` | Name of an alternate scheduler | `nil` |
| `master.service.type` | Kubernetes Service type (redis master) | `ClusterIP` |
| `master.service.annotations` | annotations for redis master service | {} |
| `master.service.loadBalancerIP` | loadBalancerIP if redis master service type is `LoadBalancer` | `nil` |
| `master.securityContext.enabled` | Enable security context (redis master pod) | `true` |
| `master.securityContext.fsGroup` | Group ID for the container (redis master pod) | `1001` |
| `master.securityContext.runAsUser` | User ID for the container (redis master pod) | `1001` |
| `master.resources` | Redis master CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `100m` |
| `master.livenessProbe.enabled` | Turn on and off liveness probe (redis master pod) | `true` |
| `master.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated (redis master pod) | `30` |
| `master.livenessProbe.periodSeconds` | How often to perform the probe (redis master pod) | `30` |
| `master.livenessProbe.timeoutSeconds` | When the probe times out (redis master pod) | `5` |
| `master.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (redis master pod) | `1` |
| `master.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `5` |
| `master.readinessProbe.enabled` | Turn on and off readiness probe (redis master pod) | `true` |
| `master.readinessProbe.initialDelaySeconds`| Delay before readiness probe is initiated (redis master pod) | `5` |
| `master.readinessProbe.periodSeconds` | How often to perform the probe (redis master pod) | `10` |
| `master.readinessProbe.timeoutSeconds` | When the probe times out (redis master pod) | `1` |
| `master.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (redis master pod) | `1` |
| `master.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `5` |
| `slave.serviceType` | Kubernetes Service type (redis slave) | `LoadBalancer` |
| `slave.service.annotations` | annotations for redis slave service | {} |
| `slave.service.loadBalancerIP` | LoadBalancerIP if Redis slave service type is `LoadBalancer` | `nil` |
| `slave.port` | Redis slave port | `master.port` |
| `slave.args` | Redis slave command-line args | `master.args` |
| `slave.disableCommands` | Comma-separated list of Redis commands to disable (slave) | `master.disableCommands` |
| `slave.extraFlags` | Redis slave additional command line flags | `master.extraFlags` |
| `slave.livenessProbe.enabled` | Turn on and off liveness probe (redis slave pod) | `master.livenessProbe.enabled` |
| `slave.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated (redis slave pod) | `master.livenessProbe.initialDelaySeconds` |
| `slave.livenessProbe.periodSeconds` | How often to perform the probe (redis slave pod) | `master.livenessProbe.periodSeconds` |
| `slave.livenessProbe.timeoutSeconds` | When the probe times out (redis slave pod) | `master.livenessProbe.timeoutSeconds` |
| `slave.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (redis slave pod) | `master.livenessProbe.successThreshold` |
| `slave.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `master.livenessProbe.failureThreshold` |
| `slave.readinessProbe.enabled` | Turn on and off slave.readiness probe (redis slave pod) | `master.readinessProbe.enabled` |
| `slave.readinessProbe.initialDelaySeconds` | Delay before slave.readiness probe is initiated (redis slave pod) | `master.readinessProbe.initialDelaySeconds` |
| `slave.readinessProbe.periodSeconds` | How often to perform the probe (redis slave pod) | `master.readinessProbe.periodSeconds` |
| `slave.readinessProbe.timeoutSeconds` | When the probe times out (redis slave pod) | `master.readinessProbe.timeoutSeconds` |
| `slave.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (redis slave pod) | `master.readinessProbe.successThreshold` |
| `slave.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. (redis slave pod) | `master.readinessProbe.failureThreshold` |
| `slave.podLabels` | Additional labels for Redis slave pod | `master.podLabels` |
| `slave.podAnnotations` | Additional annotations for Redis slave pod | `master.podAnnotations` |
| `slave.schedulerName` | Name of an alternate scheduler | `nil` |
| `slave.securityContext.enabled` | Enable security context (redis slave pod) | `master.securityContext.enabled` |
| `slave.securityContext.fsGroup` | Group ID for the container (redis slave pod) | `master.securityContext.fsGroup` |
| `slave.securityContext.runAsUser` | User ID for the container (redis slave pod) | `master.securityContext.runAsUser` |
| `slave.resources` | Redis slave CPU/Memory resource requests/limits | `master.resources` |
| `slave.affinity` | Enable node/pod affinity for slaves | `{}` |
The above parameters map to the environment variables defined in [bitnami/redis](https://github.com/bitnami/bitnami-docker-redis). For more information, please refer to the [bitnami/redis](https://github.com/bitnami/bitnami-docker-redis) image documentation.
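As a sketch of how these parameters fit together, a values override for the slave probes might look like the following (parameter names are taken from the table above; the timing values are illustrative, not chart defaults):

```yaml
slave:
  livenessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 5
  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 10
```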

View File

@@ -3,7 +3,7 @@
 ##
 image:
   registry: docker.io
-  repository: bitnami/redis
+  repository: bitnamilegacy/redis
   tag: 4.0.10-debian-9
   pullPolicy: IfNotPresent
   ## Optionally specify an array of imagePullSecrets.

View File

@@ -3,7 +3,7 @@
 ##
 image:
   registry: docker.io
-  repository: bitnami/redis
+  repository: bitnamilegacy/redis
   tag: 4.0.10-debian-9
   ## Specify a imagePullPolicy
   ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'

View File

@@ -4,6 +4,7 @@ import (
 	"context"
 	"net/url"
 	"strconv"
+	"sync"
 	"time"

 	"github.com/prometheus/client_golang/prometheus"
@@ -163,69 +164,30 @@ func ResetAll() {
transportCreateCallsCounter.Reset()
}
type KubectlMetrics struct {
clientCertRotationAgeMetric kubectlClientCertRotationAgeMetric
requestLatencyMetric kubectlRequestLatencyMetric
resolverLatencyMetric kubectlResolverLatencyMetric
requestSizeMetric kubectlRequestSizeMetric
responseSizeMetric kubectlResponseSizeMetric
rateLimiterLatencyMetric kubectlRateLimiterLatencyMetric
requestResultMetric kubectlRequestResultMetric
execPluginCallsMetric kubectlExecPluginCallsMetric
requestRetryMetric kubectlRequestRetryMetric
transportCacheEntriesMetric kubectlTransportCacheEntriesMetric
transportCreateCallsMetric kubectlTransportCreateCallsMetric
}
// NewKubectlMetrics returns a new KubectlMetrics instance with the given initiator, which should be the name of the
// Argo CD component that is producing the metrics.
//
// After initializing the KubectlMetrics instance, you must call RegisterWithClientGo to register the metrics with the
// client-go metrics library.
//
// You must also call RegisterWithPrometheus to register the metrics with the metrics server's prometheus registry.
//
// So these three lines should be enough to set up kubectl metrics in your metrics server:
//
// kubectlMetricsServer := metricsutil.NewKubectlMetrics("your-component-name")
// kubectlMetricsServer.RegisterWithClientGo()
// metricsutil.RegisterWithPrometheus(registry)
//
// Once those functions have been called, everything else should happen automatically. client-go will send observations
// to the handlers in this struct, and your metrics server will collect and expose the metrics.
func NewKubectlMetrics() *KubectlMetrics {
return &KubectlMetrics{
clientCertRotationAgeMetric: kubectlClientCertRotationAgeMetric{},
requestLatencyMetric: kubectlRequestLatencyMetric{},
resolverLatencyMetric: kubectlResolverLatencyMetric{},
requestSizeMetric: kubectlRequestSizeMetric{},
responseSizeMetric: kubectlResponseSizeMetric{},
rateLimiterLatencyMetric: kubectlRateLimiterLatencyMetric{},
requestResultMetric: kubectlRequestResultMetric{},
execPluginCallsMetric: kubectlExecPluginCallsMetric{},
requestRetryMetric: kubectlRequestRetryMetric{},
transportCacheEntriesMetric: kubectlTransportCacheEntriesMetric{},
transportCreateCallsMetric: kubectlTransportCreateCallsMetric{},
}
}
var newKubectlMetricsOnce sync.Once
// RegisterWithClientGo sets the metrics handlers for the client-go library. We do not use the metrics library's `RegisterWithClientGo` method,
// because it is protected by a sync.Once. controller-runtime registers a single handler, which blocks our registration
// of our own handlers. So we must rudely set them all directly.
//
// Since the metrics are global, this function should only be called once for a given Argo CD component.
func (k *KubectlMetrics) RegisterWithClientGo() {
metrics.ClientCertRotationAge = &k.clientCertRotationAgeMetric
metrics.RequestLatency = &k.requestLatencyMetric
metrics.ResolverLatency = &k.resolverLatencyMetric
metrics.RequestSize = &k.requestSizeMetric
metrics.ResponseSize = &k.responseSizeMetric
metrics.RateLimiterLatency = &k.rateLimiterLatencyMetric
metrics.RequestResult = &k.requestResultMetric
metrics.ExecPluginCalls = &k.execPluginCallsMetric
metrics.RequestRetry = &k.requestRetryMetric
metrics.TransportCacheEntries = &k.transportCacheEntriesMetric
metrics.TransportCreateCalls = &k.transportCreateCallsMetric
// Since the metrics are global, this function only needs to be called once for a given Argo CD component.
//
// You must also call RegisterWithPrometheus to register the metrics with the metrics server's prometheus registry.
func RegisterWithClientGo() {
// Do once to avoid races in unit tests that call this function.
newKubectlMetricsOnce.Do(func() {
metrics.ClientCertRotationAge = &kubectlClientCertRotationAgeMetric{}
metrics.RequestLatency = &kubectlRequestLatencyMetric{}
metrics.ResolverLatency = &kubectlResolverLatencyMetric{}
metrics.RequestSize = &kubectlRequestSizeMetric{}
metrics.ResponseSize = &kubectlResponseSizeMetric{}
metrics.RateLimiterLatency = &kubectlRateLimiterLatencyMetric{}
metrics.RequestResult = &kubectlRequestResultMetric{}
metrics.ExecPluginCalls = &kubectlExecPluginCallsMetric{}
metrics.RequestRetry = &kubectlRequestRetryMetric{}
metrics.TransportCacheEntries = &kubectlTransportCacheEntriesMetric{}
metrics.TransportCreateCalls = &kubectlTransportCreateCallsMetric{}
})
}
type kubectlClientCertRotationAgeMetric struct{}

View File

@@ -0,0 +1,11 @@
package kubectl
import (
"testing"
)
func Test_RegisterWithClientGo_race(_ *testing.T) {
// This test ensures that the RegisterWithClientGo function can be called concurrently without causing a data race.
go RegisterWithClientGo()
go RegisterWithClientGo()
}

View File

@@ -11,6 +11,8 @@ import (
"strings"
"sync"
alpha1 "github.com/argoproj/argo-cd/v3/pkg/client/listers/application/v1alpha1"
"github.com/Masterminds/semver/v3"
"github.com/go-playground/webhooks/v6/azuredevops"
"github.com/go-playground/webhooks/v6/bitbucket"
@@ -20,7 +22,7 @@ import (
"github.com/go-playground/webhooks/v6/gogs"
gogsclient "github.com/gogits/go-gogs-client"
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
@@ -31,6 +33,7 @@ import (
"github.com/argoproj/argo-cd/v3/util/argo"
"github.com/argoproj/argo-cd/v3/util/db"
"github.com/argoproj/argo-cd/v3/util/glob"
"github.com/argoproj/argo-cd/v3/util/guard"
"github.com/argoproj/argo-cd/v3/util/settings"
)
@@ -46,6 +49,8 @@ const usernameRegex = `[\w\.][\w\.-]{0,30}[\w\.\$-]?`
const payloadQueueSize = 50000
const panicMsgServer = "panic while processing api-server webhook event"
var _ settingsSource = &settings.SettingsManager{}
type ArgoCDWebhookHandler struct {
@@ -56,6 +61,7 @@ type ArgoCDWebhookHandler struct {
ns string
appNs []string
appClientset appclientset.Interface
appsLister alpha1.ApplicationLister
github *github.Webhook
gitlab *gitlab.Webhook
bitbucket *bitbucket.Webhook
@@ -67,7 +73,7 @@ type ArgoCDWebhookHandler struct {
maxWebhookPayloadSizeB int64
}
func NewHandler(namespace string, applicationNamespaces []string, webhookParallelism int, appClientset appclientset.Interface, set *settings.ArgoCDSettings, settingsSrc settingsSource, repoCache *cache.Cache, serverCache *servercache.Cache, argoDB db.ArgoDB, maxWebhookPayloadSizeB int64) *ArgoCDWebhookHandler {
func NewHandler(namespace string, applicationNamespaces []string, webhookParallelism int, appClientset appclientset.Interface, appsLister alpha1.ApplicationLister, set *settings.ArgoCDSettings, settingsSrc settingsSource, repoCache *cache.Cache, serverCache *servercache.Cache, argoDB db.ArgoDB, maxWebhookPayloadSizeB int64) *ArgoCDWebhookHandler {
githubWebhook, err := github.New(github.Options.Secret(set.WebhookGitHubSecret))
if err != nil {
log.Warnf("Unable to init the GitHub webhook")
@@ -109,6 +115,7 @@ func NewHandler(namespace string, applicationNamespaces []string, webhookParalle
db: argoDB,
queue: make(chan any, payloadQueueSize),
maxWebhookPayloadSizeB: maxWebhookPayloadSizeB,
appsLister: appsLister,
}
acdWebhook.startWorkerPool(webhookParallelism)
@@ -117,6 +124,7 @@ func NewHandler(namespace string, applicationNamespaces []string, webhookParalle
}
func (a *ArgoCDWebhookHandler) startWorkerPool(webhookParallelism int) {
compLog := log.WithField("component", "api-server-webhook")
for i := 0; i < webhookParallelism; i++ {
a.Add(1)
go func() {
@@ -126,7 +134,7 @@ func (a *ArgoCDWebhookHandler) startWorkerPool(webhookParallelism int) {
if !ok {
return
}
a.HandleEvent(payload)
guard.RecoverAndLog(func() { a.HandleEvent(payload) }, compLog, panicMsgServer)
}
}()
}
@@ -144,10 +152,12 @@ func affectedRevisionInfo(payloadIf any) (webURLs []string, revision string, cha
case azuredevops.GitPushEvent:
// See: https://learn.microsoft.com/en-us/azure/devops/service-hooks/events?view=azure-devops#git.push
webURLs = append(webURLs, payload.Resource.Repository.RemoteURL)
revision = ParseRevision(payload.Resource.RefUpdates[0].Name)
change.shaAfter = ParseRevision(payload.Resource.RefUpdates[0].NewObjectID)
change.shaBefore = ParseRevision(payload.Resource.RefUpdates[0].OldObjectID)
touchedHead = payload.Resource.RefUpdates[0].Name == payload.Resource.Repository.DefaultBranch
if len(payload.Resource.RefUpdates) > 0 {
revision = ParseRevision(payload.Resource.RefUpdates[0].Name)
change.shaAfter = ParseRevision(payload.Resource.RefUpdates[0].NewObjectID)
change.shaBefore = ParseRevision(payload.Resource.RefUpdates[0].OldObjectID)
touchedHead = payload.Resource.RefUpdates[0].Name == payload.Resource.Repository.DefaultBranch
}
// unfortunately, Azure DevOps doesn't provide a list of changed files
case github.PushPayload:
// See: https://developer.github.com/v3/activity/events/types/#pushevent
@@ -206,13 +216,15 @@ func affectedRevisionInfo(payloadIf any) (webURLs []string, revision string, cha
// Webhook module does not parse the inner links
if payload.Repository.Links != nil {
for _, l := range payload.Repository.Links["clone"].([]any) {
link := l.(map[string]any)
if link["name"] == "http" {
webURLs = append(webURLs, link["href"].(string))
}
if link["name"] == "ssh" {
webURLs = append(webURLs, link["href"].(string))
clone, ok := payload.Repository.Links["clone"].([]any)
if ok {
for _, l := range clone {
link := l.(map[string]any)
if link["name"] == "http" || link["name"] == "ssh" {
if href, ok := link["href"].(string); ok {
webURLs = append(webURLs, href)
}
}
}
}
}
@@ -231,11 +243,13 @@ func affectedRevisionInfo(payloadIf any) (webURLs []string, revision string, cha
// so we cannot update changedFiles for this type of payload
case gogsclient.PushPayload:
webURLs = append(webURLs, payload.Repo.HTMLURL)
revision = ParseRevision(payload.Ref)
change.shaAfter = ParseRevision(payload.After)
change.shaBefore = ParseRevision(payload.Before)
touchedHead = bool(payload.Repo.DefaultBranch == revision)
if payload.Repo != nil {
webURLs = append(webURLs, payload.Repo.HTMLURL)
touchedHead = payload.Repo.DefaultBranch == revision
}
for _, commit := range payload.Commits {
changedFiles = append(changedFiles, commit.Added...)
changedFiles = append(changedFiles, commit.Modified...)
@@ -268,76 +282,82 @@ func (a *ArgoCDWebhookHandler) HandleEvent(payload any) {
nsFilter = ""
}
appIf := a.appClientset.ArgoprojV1alpha1().Applications(nsFilter)
apps, err := appIf.List(context.Background(), metav1.ListOptions{})
appIf := a.appsLister.Applications(nsFilter)
apps, err := appIf.List(labels.Everything())
if err != nil {
log.Warnf("Failed to list applications: %v", err)
log.Errorf("Failed to list applications: %v", err)
return
}
installationID, err := a.settingsSrc.GetInstallationID()
if err != nil {
log.Warnf("Failed to get installation ID: %v", err)
log.Errorf("Failed to get installation ID: %v", err)
return
}
trackingMethod, err := a.settingsSrc.GetTrackingMethod()
if err != nil {
log.Warnf("Failed to get trackingMethod: %v", err)
log.Errorf("Failed to get trackingMethod: %v", err)
return
}
appInstanceLabelKey, err := a.settingsSrc.GetAppInstanceLabelKey()
if err != nil {
log.Warnf("Failed to get appInstanceLabelKey: %v", err)
log.Errorf("Failed to get appInstanceLabelKey: %v", err)
return
}
// Skip any application that is neither in the control plane's namespace
// nor in the list of enabled namespaces.
var filteredApps []v1alpha1.Application
for _, app := range apps.Items {
for _, app := range apps {
if app.Namespace == a.ns || glob.MatchStringInList(a.appNs, app.Namespace, glob.REGEXP) {
filteredApps = append(filteredApps, app)
filteredApps = append(filteredApps, *app)
}
}
for _, webURL := range webURLs {
repoRegexp, err := GetWebURLRegex(webURL)
if err != nil {
log.Warnf("Failed to get repoRegexp: %s", err)
log.Errorf("Failed to get repoRegexp: %s", err)
continue
}
// iterate over apps and check if any files specified in their sources have changed
for _, app := range filteredApps {
// get all sources, including sync source and dry source if source hydrator is configured
sources := app.Spec.GetSources()
if app.Spec.SourceHydrator != nil {
drySource := app.Spec.SourceHydrator.GetDrySource()
if sourceRevisionHasChanged(drySource, revision, touchedHead) && sourceUsesURL(drySource, webURL, repoRegexp) {
refreshPaths := path.GetAppRefreshPaths(&app)
if path.AppFilesHaveChanged(refreshPaths, changedFiles) {
namespacedAppInterface := a.appClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace)
log.Infof("webhook trigger refresh app to hydrate '%s'", app.ObjectMeta.Name)
_, err = argo.RefreshApp(namespacedAppInterface, app.ObjectMeta.Name, v1alpha1.RefreshTypeNormal, true)
if err != nil {
log.Warnf("Failed to hydrate app '%s' for controller reprocessing: %v", app.ObjectMeta.Name, err)
continue
}
}
}
// we already have sync source, so add dry source if source hydrator is configured
sources = append(sources, app.Spec.SourceHydrator.GetDrySource())
}
for _, source := range app.Spec.GetSources() {
// iterate over all sources and check if any files specified in refresh paths have changed
for _, source := range sources {
if sourceRevisionHasChanged(source, revision, touchedHead) && sourceUsesURL(source, webURL, repoRegexp) {
refreshPaths := path.GetAppRefreshPaths(&app)
refreshPaths := path.GetSourceRefreshPaths(&app, source)
if path.AppFilesHaveChanged(refreshPaths, changedFiles) {
namespacedAppInterface := a.appClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace)
_, err = argo.RefreshApp(namespacedAppInterface, app.ObjectMeta.Name, v1alpha1.RefreshTypeNormal, true)
if err != nil {
log.Warnf("Failed to refresh app '%s' for controller reprocessing: %v", app.ObjectMeta.Name, err)
continue
hydrate := false
if app.Spec.SourceHydrator != nil {
drySource := app.Spec.SourceHydrator.GetDrySource()
if (&source).Equals(&drySource) {
hydrate = true
}
}
// No need to refresh multiple times if multiple sources match.
break
// refresh paths have changed, so we need to refresh the app
log.Infof("refreshing app '%s' from webhook", app.Name)
if hydrate {
// log if we need to hydrate the app
log.Infof("webhook trigger refresh app to hydrate '%s'", app.Name)
}
namespacedAppInterface := a.appClientset.ArgoprojV1alpha1().Applications(app.Namespace)
if _, err := argo.RefreshApp(namespacedAppInterface, app.Name, v1alpha1.RefreshTypeNormal, hydrate); err != nil {
log.Errorf("Failed to refresh app '%s': %v", app.Name, err)
}
break // we don't need to check other sources
} else if change.shaBefore != "" && change.shaAfter != "" {
if err := a.storePreviouslyCachedManifests(&app, change, trackingMethod, appInstanceLabelKey, installationID); err != nil {
log.Warnf("Failed to store cached manifests of previous revision for app '%s': %v", app.Name, err)
// update the cached manifests with the new revision cache key
if err := a.storePreviouslyCachedManifests(&app, change, trackingMethod, appInstanceLabelKey, installationID, source); err != nil {
log.Errorf("Failed to store cached manifests of previous revision for app '%s': %v", app.Name, err)
}
}
}
@@ -389,7 +409,7 @@ func getURLRegex(originalURL string, regexpFormat string) (*regexp.Regexp, error
return repoRegexp, nil
}
func (a *ArgoCDWebhookHandler) storePreviouslyCachedManifests(app *v1alpha1.Application, change changeInfo, trackingMethod string, appInstanceLabelKey string, installationID string) error {
func (a *ArgoCDWebhookHandler) storePreviouslyCachedManifests(app *v1alpha1.Application, change changeInfo, trackingMethod string, appInstanceLabelKey string, installationID string, source v1alpha1.ApplicationSource) error {
destCluster, err := argo.GetDestinationCluster(context.Background(), app.Spec.Destination, a.db)
if err != nil {
return fmt.Errorf("error validating destination: %w", err)
@@ -412,7 +432,7 @@ func (a *ArgoCDWebhookHandler) storePreviouslyCachedManifests(app *v1alpha1.Appl
if err != nil {
return fmt.Errorf("error getting ref sources: %w", err)
}
source := app.Spec.GetSource()
cache.LogDebugManifestCacheKeyFields("moving manifests cache", "webhook app revision changed", change.shaBefore, &source, refSources, &clusterInfo, app.Spec.Destination.Namespace, trackingMethod, appInstanceLabelKey, app.Name, nil)
if err := a.repoCache.SetNewRevisionManifests(change.shaAfter, change.shaBefore, &source, refSources, &clusterInfo, app.Spec.Destination.Namespace, trackingMethod, appInstanceLabelKey, app.Name, nil, installationID); err != nil {

View File

@@ -2,6 +2,7 @@ package webhook
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
@@ -11,14 +12,18 @@ import (
"testing"
"time"
"k8s.io/apimachinery/pkg/types"
argov1 "github.com/argoproj/argo-cd/v3/pkg/client/listers/application/v1alpha1"
"github.com/go-playground/webhooks/v6/azuredevops"
"github.com/go-playground/webhooks/v6/bitbucket"
bitbucketserver "github.com/go-playground/webhooks/v6/bitbucket-server"
"github.com/go-playground/webhooks/v6/github"
"github.com/go-playground/webhooks/v6/gitlab"
gogsclient "github.com/gogits/go-gogs-client"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
kubetesting "k8s.io/client-go/testing"
"github.com/argoproj/argo-cd/v3/util/cache/appstate"
@@ -29,11 +34,13 @@ import (
"github.com/sirupsen/logrus/hooks/test"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/v3/pkg/client/clientset/versioned/fake"
"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
"github.com/argoproj/argo-cd/v3/reposerver/cache"
cacheutil "github.com/argoproj/argo-cd/v3/util/cache"
"github.com/argoproj/argo-cd/v3/util/settings"
@@ -64,6 +71,31 @@ func NewMockHandler(reactor *reactorDef, applicationNamespaces []string, objects
return NewMockHandlerWithPayloadLimit(reactor, applicationNamespaces, defaultMaxPayloadSize, objects...)
}
type fakeAppsLister struct {
argov1.ApplicationLister
argov1.ApplicationNamespaceLister
namespace string
clientset *appclientset.Clientset
}
func (f *fakeAppsLister) Applications(namespace string) argov1.ApplicationNamespaceLister {
return &fakeAppsLister{namespace: namespace, clientset: f.clientset}
}
func (f *fakeAppsLister) List(selector labels.Selector) ([]*v1alpha1.Application, error) {
res, err := f.clientset.ArgoprojV1alpha1().Applications(f.namespace).List(context.Background(), metav1.ListOptions{
LabelSelector: selector.String(),
})
if err != nil {
return nil, err
}
var apps []*v1alpha1.Application
for i := range res.Items {
apps = append(apps, &res.Items[i])
}
return apps, nil
}
func NewMockHandlerWithPayloadLimit(reactor *reactorDef, applicationNamespaces []string, maxPayloadSize int64, objects ...runtime.Object) *ArgoCDWebhookHandler {
appClientset := appclientset.NewSimpleClientset(objects...)
if reactor != nil {
@@ -76,7 +108,7 @@ func NewMockHandlerWithPayloadLimit(reactor *reactorDef, applicationNamespaces [
}
cacheClient := cacheutil.NewCache(cacheutil.NewInMemoryCache(1 * time.Hour))
return NewHandler("argocd", applicationNamespaces, 10, appClientset, &settings.ArgoCDSettings{}, &fakeSettingsSrc{}, cache.NewCache(
return NewHandler("argocd", applicationNamespaces, 10, appClientset, &fakeAppsLister{clientset: appClientset}, &settings.ArgoCDSettings{}, &fakeSettingsSrc{}, cache.NewCache(
cacheClient,
1*time.Minute,
1*time.Minute,
@@ -120,64 +152,6 @@ func TestAzureDevOpsCommitEvent(t *testing.T) {
hook.Reset()
}
// TestGitHubCommitEvent_MultiSource_Refresh makes sure that a webhook will refresh a multi-source app when at least
// one source matches.
func TestGitHubCommitEvent_MultiSource_Refresh(t *testing.T) {
hook := test.NewGlobal()
var patched bool
reaction := func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
patchAction := action.(kubetesting.PatchAction)
assert.Equal(t, "app-to-refresh", patchAction.GetName())
patched = true
return true, nil, nil
}
h := NewMockHandler(&reactorDef{"patch", "applications", reaction}, []string{}, &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "app-to-refresh",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSpec{
Sources: v1alpha1.ApplicationSources{
{
RepoURL: "https://github.com/some/unrelated-repo",
Path: ".",
},
{
RepoURL: "https://github.com/jessesuen/test-repo",
Path: ".",
},
},
},
}, &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "app-to-ignore",
},
Spec: v1alpha1.ApplicationSpec{
Sources: v1alpha1.ApplicationSources{
{
RepoURL: "https://github.com/some/unrelated-repo",
Path: ".",
},
},
},
},
)
req := httptest.NewRequest(http.MethodPost, "/api/webhook", nil)
req.Header.Set("X-GitHub-Event", "push")
eventJSON, err := os.ReadFile("testdata/github-commit-event.json")
require.NoError(t, err)
req.Body = io.NopCloser(bytes.NewReader(eventJSON))
w := httptest.NewRecorder()
h.Handler(w, req)
close(h.queue)
h.Wait()
assert.Equal(t, http.StatusOK, w.Code)
expectedLogResult := "Requested app 'app-to-refresh' refresh"
assert.Equal(t, expectedLogResult, hook.LastEntry().Message)
assert.True(t, patched)
hook.Reset()
}
// TestGitHubCommitEvent_AppsInOtherNamespaces makes sure that webhooks properly find apps in the configured set of
// allowed namespaces when Apps are allowed in any namespace
func TestGitHubCommitEvent_AppsInOtherNamespaces(t *testing.T) {
@@ -276,72 +250,6 @@ func TestGitHubCommitEvent_AppsInOtherNamespaces(t *testing.T) {
hook.Reset()
}
// TestGitHubCommitEvent_Hydrate makes sure that a webhook will hydrate an app when dry source changed.
func TestGitHubCommitEvent_Hydrate(t *testing.T) {
hook := test.NewGlobal()
var patched bool
reaction := func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
patchAction := action.(kubetesting.PatchAction)
assert.Equal(t, "app-to-hydrate", patchAction.GetName())
patched = true
return true, nil, nil
}
h := NewMockHandler(&reactorDef{"patch", "applications", reaction}, []string{}, &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "app-to-hydrate",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSpec{
SourceHydrator: &v1alpha1.SourceHydrator{
DrySource: v1alpha1.DrySource{
RepoURL: "https://github.com/jessesuen/test-repo",
TargetRevision: "HEAD",
Path: ".",
},
SyncSource: v1alpha1.SyncSource{
TargetBranch: "environments/dev",
Path: ".",
},
HydrateTo: nil,
},
},
}, &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "app-to-ignore",
},
Spec: v1alpha1.ApplicationSpec{
Sources: v1alpha1.ApplicationSources{
{
RepoURL: "https://github.com/some/unrelated-repo",
Path: ".",
},
},
},
},
)
req := httptest.NewRequest(http.MethodPost, "/api/webhook", nil)
req.Header.Set("X-GitHub-Event", "push")
eventJSON, err := os.ReadFile("testdata/github-commit-event.json")
require.NoError(t, err)
req.Body = io.NopCloser(bytes.NewReader(eventJSON))
w := httptest.NewRecorder()
h.Handler(w, req)
close(h.queue)
h.Wait()
assert.Equal(t, http.StatusOK, w.Code)
assert.True(t, patched)
logMessages := make([]string, 0, len(hook.Entries))
for _, entry := range hook.Entries {
logMessages = append(logMessages, entry.Message)
}
assert.Contains(t, logMessages, "webhook trigger refresh app to hydrate 'app-to-hydrate'")
assert.NotContains(t, logMessages, "webhook trigger refresh app to hydrate 'app-to-ignore'")
hook.Reset()
}
func TestGitHubTagEvent(t *testing.T) {
hook := test.NewGlobal()
h := NewMockHandler(nil, []string{})
@@ -580,7 +488,8 @@ func Test_affectedRevisionInfo_appRevisionHasChanged(t *testing.T) {
// The payload's "push.changes[0].new.name" member seems to only have the branch name (based on the example payload).
// https://support.atlassian.com/bitbucket-cloud/docs/event-payloads/#EventPayloads-Push
var pl bitbucket.RepoPushPayload
_ = json.Unmarshal([]byte(fmt.Sprintf(`{"push":{"changes":[{"new":{"name":"%s"}}]}}`, branchName)), &pl)
err := json.Unmarshal([]byte(fmt.Sprintf(`{"push":{"changes":[{"new":{"name":%q}}]}}`, branchName)), &pl)
require.NoError(t, err)
return pl
}
@@ -664,6 +573,26 @@ func Test_affectedRevisionInfo_appRevisionHasChanged(t *testing.T) {
{true, "refs/tags/no-slashes", bitbucketPushPayload("no-slashes"), "bitbucket push branch or tag name without slashes, targetRevision tag prefixed"},
{true, "refs/tags/no-slashes", bitbucketRefChangedPayload("no-slashes"), "bitbucket ref changed branch or tag name without slashes, targetRevision tag prefixed"},
{true, "refs/tags/no-slashes", gogsPushPayload("no-slashes"), "gogs push branch or tag name without slashes, targetRevision tag prefixed"},
// Tests fix for https://github.com/argoproj/argo-cd/security/advisories/GHSA-wp4p-9pxh-cgx2
{true, "test", gogsclient.PushPayload{Ref: "test", Repo: nil}, "gogs push branch with nil repo in payload"},
// Testing fix for https://github.com/argoproj/argo-cd/security/advisories/GHSA-gpx4-37g2-c8pv
{false, "test", azuredevops.GitPushEvent{Resource: azuredevops.Resource{RefUpdates: []azuredevops.RefUpdate{}}}, "Azure DevOps malformed push event with no ref updates"},
{true, "some-ref", bitbucketserver.RepositoryReferenceChangedPayload{
Changes: []bitbucketserver.RepositoryChange{
{Reference: bitbucketserver.RepositoryReference{ID: "refs/heads/some-ref"}},
},
Repository: bitbucketserver.Repository{Links: map[string]any{"clone": "boom"}}, // The string "boom" here is what previously caused a panic.
}, "bitbucket push branch or tag name, malformed link"}, // https://github.com/argoproj/argo-cd/security/advisories/GHSA-f9gq-prrc-hrhc
{true, "some-ref", bitbucketserver.RepositoryReferenceChangedPayload{
Changes: []bitbucketserver.RepositoryChange{
{Reference: bitbucketserver.RepositoryReference{ID: "refs/heads/some-ref"}},
},
Repository: bitbucketserver.Repository{Links: map[string]any{"clone": []any{map[string]any{"name": "http", "href": []string{}}}}}, // The href as an empty array is what previously caused a panic.
}, "bitbucket push branch or tag name, malformed href"},
}
for _, testCase := range tests {
testCopy := testCase
@@ -779,3 +708,551 @@ func TestGitHubCommitEventMaxPayloadSize(t *testing.T) {
assert.Equal(t, expectedLogResult, hook.LastEntry().Message)
hook.Reset()
}
func TestHandleEvent(t *testing.T) {
t.Parallel()
tests := []struct {
name string
app *v1alpha1.Application
changedFile string // file that was changed in the webhook payload
hasRefresh bool // application has refresh annotation applied
hasHydrate bool // application has hydrate annotation applied
updateCache bool // cache should be updated with the new revision
}{
{
name: "single source without annotation - always refreshes",
app: &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "test-app",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSpec{
Sources: v1alpha1.ApplicationSources{
{
RepoURL: "https://github.com/jessesuen/test-repo",
Path: "source/path",
TargetRevision: "HEAD",
},
},
},
},
changedFile: "source/path/app.yaml",
hasRefresh: true,
hasHydrate: false,
updateCache: false,
},
{
name: "single source with annotation - matching file triggers refresh",
app: &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "test-app",
Namespace: "argocd",
Annotations: map[string]string{
"argocd.argoproj.io/manifest-generate-paths": "deploy",
},
},
Spec: v1alpha1.ApplicationSpec{
Sources: v1alpha1.ApplicationSources{
{
RepoURL: "https://github.com/jessesuen/test-repo",
Path: "source/path",
TargetRevision: "HEAD",
},
},
},
},
changedFile: "source/path/deploy/app.yaml",
hasRefresh: true,
hasHydrate: false,
updateCache: false,
},
{
name: "single source with annotation - non-matching file updates cache",
app: &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "test-app",
Namespace: "argocd",
Annotations: map[string]string{
"argocd.argoproj.io/manifest-generate-paths": "manifests",
},
},
Spec: v1alpha1.ApplicationSpec{
Sources: v1alpha1.ApplicationSources{
{
RepoURL: "https://github.com/jessesuen/test-repo",
Path: "source/path",
TargetRevision: "HEAD",
},
},
},
},
changedFile: "source/path/other/app.yaml",
hasRefresh: false,
hasHydrate: false,
updateCache: true,
},
{
name: "single source with multiple paths annotation - matching subpath triggers refresh",
app: &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "test-app",
Namespace: "argocd",
Annotations: map[string]string{
"argocd.argoproj.io/manifest-generate-paths": "manifests;dev/deploy;other/path",
},
},
Spec: v1alpha1.ApplicationSpec{
Sources: v1alpha1.ApplicationSources{
{
RepoURL: "https://github.com/jessesuen/test-repo",
Path: "source/path",
TargetRevision: "HEAD",
},
},
},
},
changedFile: "source/path/dev/deploy/app.yaml",
hasRefresh: true,
hasHydrate: false,
updateCache: false,
},
{
name: "multi-source without annotation - always refreshes",
app: &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "test-app",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSpec{
Sources: v1alpha1.ApplicationSources{
{
RepoURL: "https://github.com/jessesuen/test-repo",
Path: "helm-charts",
TargetRevision: "HEAD",
},
{
RepoURL: "https://github.com/jessesuen/test-repo",
Path: "ksapps",
TargetRevision: "HEAD",
},
},
},
},
changedFile: "ksapps/app.yaml",
hasRefresh: true,
hasHydrate: false,
updateCache: false,
},
{
name: "multi-source with annotation - matching file triggers refresh",
app: &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "test-app",
Namespace: "argocd",
Annotations: map[string]string{
"argocd.argoproj.io/manifest-generate-paths": "components",
},
},
Spec: v1alpha1.ApplicationSpec{
Sources: v1alpha1.ApplicationSources{
{
RepoURL: "https://github.com/jessesuen/test-repo",
Path: "helm-charts",
TargetRevision: "HEAD",
},
{
RepoURL: "https://github.com/jessesuen/test-repo",
Path: "ksapps",
TargetRevision: "HEAD",
},
},
},
},
changedFile: "ksapps/components/app.yaml",
hasRefresh: true,
hasHydrate: false,
updateCache: false,
},
{
name: "source hydrator sync source without annotation - refreshes when sync path matches",
app: &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "test-app",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSpec{
SourceHydrator: &v1alpha1.SourceHydrator{
DrySource: v1alpha1.DrySource{
RepoURL: "https://github.com/jessesuen/test-repo",
TargetRevision: "HEAD",
Path: "dry/path",
},
SyncSource: v1alpha1.SyncSource{
TargetBranch: "master",
Path: "sync/path",
},
},
},
},
changedFile: "sync/path/app.yaml",
hasRefresh: true,
hasHydrate: false,
updateCache: false,
},
{
name: "source hydrator dry source without annotation - always refreshes and hydrates",
app: &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "test-app",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSpec{
SourceHydrator: &v1alpha1.SourceHydrator{
DrySource: v1alpha1.DrySource{
RepoURL: "https://github.com/jessesuen/test-repo",
TargetRevision: "HEAD",
Path: "dry/path",
},
SyncSource: v1alpha1.SyncSource{
TargetBranch: "master",
Path: "sync/path",
},
},
},
},
changedFile: "other/path/app.yaml",
hasRefresh: true,
hasHydrate: true,
updateCache: false,
},
{
name: "source hydrator sync source with annotation - refresh only",
app: &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "test-app",
Namespace: "argocd",
Annotations: map[string]string{
"argocd.argoproj.io/manifest-generate-paths": "deploy",
},
},
Spec: v1alpha1.ApplicationSpec{
SourceHydrator: &v1alpha1.SourceHydrator{
DrySource: v1alpha1.DrySource{
RepoURL: "https://github.com/jessesuen/test-repo",
TargetRevision: "HEAD",
Path: "dry/path",
},
SyncSource: v1alpha1.SyncSource{
TargetBranch: "master",
Path: "sync/path",
},
},
},
},
changedFile: "sync/path/deploy/app.yaml",
hasRefresh: true,
hasHydrate: false,
updateCache: false,
},
{
name: "source hydrator dry source with annotation - refresh and hydrate",
app: &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "test-app",
Namespace: "argocd",
Annotations: map[string]string{
"argocd.argoproj.io/manifest-generate-paths": "deploy",
},
},
Spec: v1alpha1.ApplicationSpec{
SourceHydrator: &v1alpha1.SourceHydrator{
DrySource: v1alpha1.DrySource{
RepoURL: "https://github.com/jessesuen/test-repo",
TargetRevision: "HEAD",
Path: "dry/path",
},
SyncSource: v1alpha1.SyncSource{
TargetBranch: "master",
Path: "sync/path",
},
},
},
},
changedFile: "dry/path/deploy/app.yaml",
hasRefresh: true,
hasHydrate: true,
updateCache: false,
},
{
name: "source hydrator dry source with annotation - non-matching file updates cache",
app: &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "test-app",
Namespace: "argocd",
Annotations: map[string]string{
"argocd.argoproj.io/manifest-generate-paths": "deploy",
},
},
Spec: v1alpha1.ApplicationSpec{
SourceHydrator: &v1alpha1.SourceHydrator{
DrySource: v1alpha1.DrySource{
RepoURL: "https://github.com/jessesuen/test-repo",
TargetRevision: "HEAD",
Path: "dry/path",
},
SyncSource: v1alpha1.SyncSource{
TargetBranch: "master",
Path: "sync/path",
},
},
},
},
changedFile: "dry/path/other/app.yaml",
hasRefresh: false,
hasHydrate: false,
updateCache: true,
},
}
for _, tt := range tests {
ttc := tt // capture range variable so parallel subtests don't share it (pre-Go 1.22 loop semantics)
t.Run(ttc.name, func(t *testing.T) {
t.Parallel()
var patchData []byte
var patched bool
reaction := func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
if action.GetVerb() == "patch" {
patchAction := action.(kubetesting.PatchAction)
patchData = patchAction.GetPatch()
patched = true
}
return true, nil, nil
}
// Setup cache
inMemoryCache := cacheutil.NewInMemoryCache(1 * time.Hour)
cacheClient := cacheutil.NewCache(inMemoryCache)
repoCache := cache.NewCache(
cacheClient,
1*time.Minute,
1*time.Minute,
10*time.Second,
)
// Pre-populate cache with beforeSHA if we're testing cache updates
if ttc.updateCache {
var source *v1alpha1.ApplicationSource
if ttc.app.Spec.SourceHydrator != nil {
drySource := ttc.app.Spec.SourceHydrator.GetDrySource()
source = &drySource
} else if len(ttc.app.Spec.Sources) > 0 {
source = &ttc.app.Spec.Sources[0]
}
if source != nil {
setupTestCache(t, repoCache, ttc.app.Name, source, []string{"test-manifest"})
}
}
// Setup server cache with cluster info
serverCache := servercache.NewCache(appstate.NewCache(cacheClient, time.Minute), time.Minute, time.Minute, time.Minute)
mockDB := &mocks.ArgoDB{}
// Set destination if not present (required for cache updates)
if ttc.app.Spec.Destination.Server == "" {
ttc.app.Spec.Destination.Server = testClusterURL
}
mockDB.On("GetCluster", mock.Anything, testClusterURL).Return(&v1alpha1.Cluster{
Server: testClusterURL,
Info: v1alpha1.ClusterInfo{
ServerVersion: "1.28.0",
ConnectionState: v1alpha1.ConnectionState{Status: v1alpha1.ConnectionStatusSuccessful},
APIVersions: []string{},
},
}, nil).Maybe()
err := serverCache.SetClusterInfo(testClusterURL, &v1alpha1.ClusterInfo{
ServerVersion: "1.28.0",
ConnectionState: v1alpha1.ConnectionState{Status: v1alpha1.ConnectionStatusSuccessful},
APIVersions: []string{},
})
require.NoError(t, err)
// Create handler with reaction
appClientset := appclientset.NewSimpleClientset(ttc.app)
defaultReactor := appClientset.ReactionChain[0]
appClientset.ReactionChain = nil
appClientset.AddReactor("list", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
return defaultReactor.React(action)
})
appClientset.AddReactor("patch", "applications", reaction)
h := NewHandler(
"argocd",
[]string{},
10,
appClientset,
&fakeAppsLister{clientset: appClientset},
&settings.ArgoCDSettings{},
&fakeSettingsSrc{},
repoCache,
serverCache,
mockDB,
int64(50)*1024*1024,
)
// Create payload with the changed file
payload := createTestPayload(ttc.changedFile)
req := httptest.NewRequest(http.MethodPost, "/api/webhook", http.NoBody)
req.Header.Set("X-GitHub-Event", "push")
req.Body = io.NopCloser(bytes.NewReader(payload))
w := httptest.NewRecorder()
h.Handler(w, req)
close(h.queue)
h.Wait()
assert.Equal(t, http.StatusOK, w.Code)
// Verify refresh behavior
assert.Equal(t, ttc.hasRefresh, patched, "patch status mismatch for test: %s", ttc.name)
if patched && patchData != nil {
verifyAnnotations(t, patchData, ttc.hasRefresh, ttc.hasHydrate)
}
// Verify cache update behavior
if ttc.updateCache {
var source *v1alpha1.ApplicationSource
if ttc.app.Spec.SourceHydrator != nil {
drySource := ttc.app.Spec.SourceHydrator.GetDrySource()
source = &drySource
} else if len(ttc.app.Spec.Sources) > 0 {
source = &ttc.app.Spec.Sources[0]
}
if source != nil {
// Verify cache was updated with afterSHA
clusterInfo := &mockClusterInfo{}
var afterManifests cache.CachedManifestResponse
err := repoCache.GetManifests(testAfterSHA, source, nil, clusterInfo, "", "", testAppLabelKey, ttc.app.Name, &afterManifests, nil, "")
require.NoError(t, err, "cache should be updated with afterSHA")
assert.Equal(t, testAfterSHA, afterManifests.ManifestResponse.Revision, "cached revision should match afterSHA")
}
}
})
}
}
// createTestPayload creates a GitHub push event payload with the specified changed file
func createTestPayload(changedFile string) []byte {
payload := fmt.Sprintf(`{
"ref": "refs/heads/master",
"before": "%s",
"after": "%s",
"repository": {
"html_url": "https://github.com/jessesuen/test-repo",
"default_branch": "master"
},
"commits": [
{
"added": [],
"modified": ["%s"],
"removed": []
}
]
}`, testBeforeSHA, testAfterSHA, changedFile)
return []byte(payload)
}
func Test_affectedRevisionInfo_bitbucket_changed_files(t *testing.T) {
t.Skip("TODO: implement bitbucket client and helper functions")
}
func TestLookupRepository(t *testing.T) {
t.Skip("TODO: implement lookupRepository method")
}
func TestCreateBitbucketClient(t *testing.T) {
t.Skip("TODO: implement newBitbucketClient function")
}
func TestFetchDiffStatBitbucketClient(t *testing.T) {
t.Skip("TODO: implement fetchDiffStatFromBitbucket function and bitbucket client")
}
func TestIsHeadTouched(t *testing.T) {
t.Skip("TODO: implement isHeadTouched function and bitbucket client")
}
// getRepositoryResponderFn and getDiffstatResponderFn removed - TODO: implement when bitbucket client is added
// mockClusterInfo implements cache.ClusterRuntimeInfo for testing
type mockClusterInfo struct{}
func (m *mockClusterInfo) GetApiVersions() []string { return []string{} } //nolint:revive // interface method name
func (m *mockClusterInfo) GetKubeVersion() string { return "1.28.0" }
// Common test constants
const (
testBeforeSHA = "d5c1ffa8e294bc18c639bfb4e0df499251034414"
testAfterSHA = "63738bb582c8b540af7bcfc18f87c575c3ed66e0"
testClusterURL = "https://kubernetes.default.svc"
testAppLabelKey = "mycompany.com/appname"
)
// verifyAnnotations is a helper that checks if the expected annotations are present in patch data
func verifyAnnotations(t *testing.T, patchData []byte, expectRefresh bool, expectHydrate bool) {
t.Helper()
if patchData == nil {
if expectRefresh {
t.Error("expected app to be patched but patchData is nil")
}
return
}
var patchMap map[string]any
err := json.Unmarshal(patchData, &patchMap)
require.NoError(t, err)
metadata, hasMetadata := patchMap["metadata"].(map[string]any)
require.True(t, hasMetadata, "patch should have metadata")
annotations, hasAnnotations := metadata["annotations"].(map[string]any)
require.True(t, hasAnnotations, "patch should have annotations")
// Check refresh annotation
refreshValue, hasRefresh := annotations["argocd.argoproj.io/refresh"]
if expectRefresh {
assert.True(t, hasRefresh, "should have refresh annotation")
assert.Equal(t, "normal", refreshValue, "refresh annotation should be 'normal'")
} else {
assert.False(t, hasRefresh, "should not have refresh annotation")
}
// Check hydrate annotation
hydrateValue, hasHydrate := annotations["argocd.argoproj.io/hydrate"]
if expectHydrate {
assert.True(t, hasHydrate, "should have hydrate annotation")
assert.Equal(t, "normal", hydrateValue, "hydrate annotation should be 'normal'")
} else {
assert.False(t, hasHydrate, "should not have hydrate annotation")
}
}
// setupTestCache is a helper that creates and populates a test cache
func setupTestCache(t *testing.T, repoCache *cache.Cache, appName string, source *v1alpha1.ApplicationSource, manifests []string) {
t.Helper()
clusterInfo := &mockClusterInfo{}
dummyManifests := &cache.CachedManifestResponse{
ManifestResponse: &apiclient.ManifestResponse{
Revision: testBeforeSHA,
Manifests: manifests,
Namespace: "",
Server: testClusterURL,
},
}
err := repoCache.SetManifests(testBeforeSHA, source, nil, clusterInfo, "", "", testAppLabelKey, appName, dummyManifests, nil, "")
require.NoError(t, err)
}