Compare commits

...

586 Commits

Author SHA1 Message Date
dependabot[bot]
500126019a chore(deps): bump node from 20 to 25
Bumps node from 20 to 25.

---
updated-dependencies:
- dependency-name: node
  dependency-version: '25'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-05 16:22:11 +00:00
argoproj-renovate[bot]
b4c7467cf3 chore(deps): update docker.io/library/golang:1.25.3 docker digest to 6d4e5e7 (#25187)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-11-05 11:21:05 -05:00
Michael Crenshaw
e6152b827b docs: more thorough release instructions (#25173)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-11-05 11:20:38 -05:00
jwinters01
1ae13b2896 feat(ui): conditionally render app view extensions (#25132)
Signed-off-by: Jonathan Winters <wintersjonathan0@gmail.com>
2025-11-05 09:34:29 -05:00
argoproj-renovate[bot]
8d0e5b9408 chore(deps): update docker.io/library/golang:1.25.3 docker digest to b2663ef (#25172)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-11-05 09:21:11 -05:00
Jaewoo Choi
0b40e3bc78 fix(ui): refactor tooltip, align action btns in app tile view (#25098)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2025-11-05 09:08:35 -05:00
dependabot[bot]
1389f0c032 chore(deps): bump github.com/casbin/casbin/v2 from 2.131.0 to 2.132.0 (#25177)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-05 11:46:29 +00:00
dependabot[bot]
59b6b0e2b8 chore(deps): bump github.com/grpc-ecosystem/go-grpc-middleware/v2 from 2.3.2 to 2.3.3 (#25176)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-05 11:43:48 +00:00
Jaewoo Choi
27a503aa59 fix(ui): add null-safe handling for assignedWindows in status panel (#25128)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2025-11-05 00:51:07 -10:00
Julian
943936a909 docs: clarify default hook deletion policy (#25170)
Signed-off-by: Globulard <julian.amoedo13@gmail.com>
2025-11-05 11:33:28 +02:00
Atif Ali
8d40fa3b5c docs: update user content for deleting applications (#25124)
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Dan Garfield <dan@codefresh.io>
2025-11-04 15:19:13 -07:00
argoproj-renovate[bot]
2d71941dd0 chore(deps): update docker.io/library/golang:1.25.3 docker digest to 0afe9b5 (#25168)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-11-04 13:52:07 -05:00
Codey Jenkins
49f5c03622 chore(tilt): add deps for build and ui packages (#25165)
Signed-off-by: Codey Jenkins <FourFifthsCode@users.noreply.github.com>
2025-11-04 13:07:42 -05:00
Nitish Kumar
ebca0521ad docs: add git concurrency issue in upgrade instruction (#25167)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2025-11-04 22:37:58 +05:30
argoproj-renovate[bot]
4c57962cf4 chore(deps): update docker.io/library/golang:1.25.3 docker digest to 7e3cbcd (#25158)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-11-04 11:20:46 -05:00
dependabot[bot]
b7691b2167 chore(deps): bump renovatebot/github-action from 43.0.19 to 43.0.20 (#25156)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-04 02:54:20 -10:00
Peter Jiang
21ae489589 fix: return empty instead of error if cache unavailable (#25072)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
Signed-off-by: Peter Jiang <35584807+pjiang-dev@users.noreply.github.com>
Co-authored-by: Ishita Sequeira <46771830+ishitasequeira@users.noreply.github.com>
2025-11-04 17:56:19 +05:30
Rick Brouwer
e58bdf2f87 feat: implement KEDA scaledJob health-checks (#25106)
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
Co-authored-by: Dan Garfield <dan@codefresh.io>
2025-11-03 19:13:30 -05:00
Alexander Matyushentsev
7a09f69ad6 fix: avoid calling UpdateRevisionForPaths unnecessarily (#25151)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-11-03 21:48:14 +00:00
Alexander Matyushentsev
1b08fd1004 feat: add ability to use shallow clone for repositories (#24931)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-11-03 13:00:59 -08:00
Revital Barletz
0a93e5701f docs: Update Kustomize documentation for forceCommonLabels and forceCommonAnnotations (#25138)
Signed-off-by: Revital Barletz <Revital.barletz@octopus.com>
2025-11-03 15:14:01 -05:00
argoproj-renovate[bot]
5072fb7136 chore(deps): update dependency markdown to v3.10 (#25152)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-11-03 15:13:04 -05:00
dependabot[bot]
b91c191a34 chore(deps): bump github.com/casbin/casbin/v2 from 2.128.0 to 2.131.0 (#25142)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-03 10:05:05 -05:00
github-actions[bot]
fe727c8fc9 [Bot] docs: Update Snyk report (#25135)
Signed-off-by: CI <ci@argoproj.com>
Co-authored-by: CI <ci@argoproj.com>
2025-11-03 14:29:08 +00:00
dudinea
fe4ab01cba fix: capture stderr in executil RunWithExecRunOpts (#25139)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
2025-11-02 17:01:38 +02:00
Matthieu MOREL
4ea276860c chore: refactor test functions to pass context from testing.T to fixtures (#25134)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-11-02 13:39:24 +01:00
Matthieu MOREL
f26533ab37 chore: use Expecter Structs from mockery (#25133)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-11-01 13:07:08 +00:00
Atif Ali
c2f611f8cd fix(ui): Improve Delete Dialog Behaviour when deleting child apps in the app-of-app pattern (#24802)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-10-30 14:42:16 -04:00
dependabot[bot]
ef48aa9c37 chore(deps): bump library/busybox from 2f590fc to e3652a0 in /test/e2e/multiarch-container (#25116)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-30 10:50:23 -04:00
dependabot[bot]
1b4fde1986 chore(deps): bump gitlab.com/gitlab-org/api/client-go from 0.157.0 to 0.157.1 (#25108)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-30 02:52:52 +00:00
Simon Dahlbacka
ca84c31a46 docs: clarify resource.exclusions 25009 (#25107)
Signed-off-by: Simon Dahlbacka <simon.dahlbacka@fellowmind.fi>
2025-10-29 20:56:33 -04:00
Afzal Ansari
f5eaae73a2 docs: add app password step to retrieve using gmail support (#22706)
Signed-off-by: Afzal Ansari <afzal442@gmail.com>
2025-10-29 16:03:34 +00:00
jwinters01
a2f57be3d5 docs: application view ui extension docs (#25050)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Signed-off-by: Jonathan Winters <wintersjonathan0@gmail.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-10-28 14:05:58 -04:00
Josh Soref
0d0cec6b66 fix: Only show apiVersion/kind when targetState is defined (#25068)
Signed-off-by: Josh Soref <jsoref@gmail.com>
Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
Co-authored-by: Yusuke Abe <chansuke0@gmail.com>
2025-10-28 09:42:40 +01:00
Afzal Ansari
7ba62f9838 docs: add overrides in multi-source applications (#25089)
Signed-off-by: Afzal Ansari <afzal442@gmail.com>
2025-10-28 13:33:52 +05:30
dependabot[bot]
e80395bacf chore(deps): bump renovatebot/github-action from 43.0.18 to 43.0.19 (#25099)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-28 08:30:38 +01:00
argoproj-renovate[bot]
d8c72c2c8e chore(deps): update docker.io/library/golang:1.25.3 docker digest to 6bac879 (#25091)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-10-27 12:15:52 -04:00
dependabot[bot]
8e9d24a60f chore(deps): bump actions/upload-artifact from 4.6.2 to 5.0.0 (#25085)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 10:40:30 -04:00
dependabot[bot]
3d2d32bc2c chore(deps): bump actions/download-artifact from 5.0.0 to 6.0.0 (#25086)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 10:40:00 -04:00
argoproj-renovate[bot]
5f66fe5751 chore(deps): update docker.io/library/golang:1.25.3 docker digest to dd08f76 (#25073)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-10-27 10:38:40 -04:00
Jaewoo Choi
ad96cb8f0e fix(ui): overlapping UI elements and add resource units to tooltips (#24717)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2025-10-24 08:29:41 -04:00
Shota Sugiura
541a1546cd chore: remove unnecessary lock value copy in test (#24939)
Signed-off-by: shota3506 <s.shota.710.3506@gmail.com>
2025-10-23 13:00:50 -04:00
rumstead
97d50a14a6 feat(appset): add pprof endpoints (#25044)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-10-23 14:42:01 +00:00
Nitish Kumar
9c4579b229 docs: clarify explanation of reconciliation interval and auto-sync behavior (#25024)
Signed-off-by: Nitish Kumar <justnitish06@gmail.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-10-23 14:52:54 +03:00
Frederic M
2f175fcd64 docs: fix duration known type (#25035)
Signed-off-by: Frederic Mereu <frederic.mereu@gaming1.com>
2025-10-23 07:10:46 +03:00
Alan Clucas
05ccc01e17 fix: fix cherry-pick bot again because I broke sign off (#25040)
Signed-off-by: Alan Clucas <alan@clucas.org>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-10-22 10:26:13 -10:00
renovate[bot]
0fa1f675c5 chore(deps): update docker.io/library/ubuntu:25.10 docker digest to 9b61739 (#25043)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-22 15:24:04 -04:00
renovate[bot]
c207a4f76c chore(deps): update docker.io/library/golang:1.25.3 docker digest to 8c945d3 (#25038)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-22 15:56:14 +00:00
argoproj-renovate[bot]
6d5678a351 chore(deps): update docker.io/library/golang:1.25.3 docker digest to 8c945d3 (#25041)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-10-22 14:41:16 +00:00
Jaewoo Choi
6d40847e17 fix(ui): fix minor UI issue in app operation state (#24845)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2025-10-22 17:11:31 +03:00
argoproj-renovate[bot]
54311b9613 chore(deps): update docker.io/library/golang:1.25.1 docker digest to ab1f5c4 (#24820)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-10-22 13:55:33 +00:00
renovate[bot]
8fc795f1ad chore(deps): update docker.io/library/ubuntu:25.04 docker digest to 27771fb (#25031)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-22 09:23:10 -04:00
Rob E
82a2b76da2 docs: Add documentation for name property on Application sources resource (#24937)
Signed-off-by: Rob E <robert.erez@gmail.com>
2025-10-22 09:05:27 +03:00
Jaewoo Choi
ed1fb04bcc fix(ui): inaccurate timestamp in tooltip for root node (#25014)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2025-10-22 09:01:02 +03:00
renovate[bot]
9db75f63b3 chore(deps): update docker.io/library/golang:1.25.3 docker digest to 0d8c14c (#25016)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-22 03:38:37 +00:00
Alexander Lindeskär
2daaf1e317 fix: Health status for HTTPRoute with multiple generations (#24958) (#24959)
Signed-off-by: Alexander Lindeskär <lindeskar@users.noreply.github.com>
2025-10-21 16:12:50 -04:00
dependabot[bot]
7aa522b816 chore(deps): bump sigstore/cosign-installer from 3.10.0 to 4.0.0 (#24984)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 16:10:01 -04:00
Dobromir Zahariev
51c9add05d feat: Add health checks for ServiceBinding and ServiceInstance (#25007)
Signed-off-by: Dobromir Zahariev <dzahariev@yahoo.com>
2025-10-21 16:09:28 -04:00
Alan Clucas
3f3e54f7c3 fix: cherry-picking with quotes in the title (#25017)
Signed-off-by: Alan Clucas <alan@clucas.org>
2025-10-21 15:52:29 +00:00
renovate[bot]
aadd977816 chore(deps): update docker.io/library/golang:1.25.1 docker digest to d709837 (#24948)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-21 14:25:35 +00:00
dependabot[bot]
0e10276154 chore(deps): bump github.com/Azure/kubelogin from 0.2.11 to 0.2.12 (#24983)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 14:07:53 +00:00
dependabot[bot]
cb6608346f chore(deps): bump renovatebot/github-action from 43.0.17 to 43.0.18 (#25011)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2025-10-21 09:56:27 -04:00
dependabot[bot]
f0bb1192a8 chore(deps): bump golang.org/x/net from 0.44.0 to 0.46.0 (#24904)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 06:59:08 -04:00
Quentin Ågren
76c4996e2e docs: Mention default secret used to find Dex client secrets (#24105)
Signed-off-by: Quentin Ågren <quentin.agren@gmail.com>
2025-10-20 22:52:32 -04:00
dependabot[bot]
b8f5299f21 chore(deps): bump gitlab.com/gitlab-org/api/client-go from 0.148.1 to 0.157.0 (#24950)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-20 21:15:02 +03:00
github-actions[bot]
daa9a18953 [Bot] docs: Update Snyk report (#25004)
Signed-off-by: CI <ci@argoproj.com>
Co-authored-by: CI <ci@argoproj.com>
2025-10-20 11:06:12 -04:00
Alexander Matyushentsev
ca7dcbc594 fix: don't show error about missing appset (#24995)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-10-17 11:54:57 -07:00
dependabot[bot]
e161522dc0 chore(deps): bump library/golang from 2e3aca2 to 6ea52a0 in /test/remote (#24975)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-16 07:21:55 -04:00
Julian-Chu
576c0c2887 docs: Update telepresence command in the debugging-remote-environment.md (#24949)
Signed-off-by: Yu-Lang Chu <yulang.chu@gmail.com>
2025-10-16 07:21:03 -04:00
dependabot[bot]
b7b4ab938a chore(deps): bump library/golang from 7d73c4c to 2e3aca2 in /test/remote (#24963)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-15 16:58:08 -04:00
dependabot[bot]
b7e09e0dba chore(deps): bump library/busybox from d82f458 to 2f590fc in /test/e2e/multiarch-container (#24962)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2025-10-15 16:57:54 -04:00
Michael Crenshaw
28ec26a6ca fix(health): use promotion resource Ready condition regardless of reason (#24971)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-15 15:29:14 -04:00
jwinters01
9198b79e31 fix: update searchbar props on url parameter change (#24930)
Signed-off-by: Jonathan Winters <wintersjonathan0@gmail.com>
2025-10-15 12:29:59 -04:00
Nitish Kumar
440f1d25a7 test: add unit tests to filter apps and appsets (#24965)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2025-10-15 07:18:58 +00:00
Atif Ali
4a71661dbe chore(ui): fix Incorrect links to applications managed by other argo instances (#23266)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-10-14 21:38:14 -04:00
dependabot[bot]
b74cf4563b chore(deps): bump github.com/Azure/kubelogin from 0.2.10 to 0.2.11 (#24943)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-14 18:33:54 +00:00
dependabot[bot]
d1ece2d57b chore(deps): bump actions/setup-node from 5.0.0 to 6.0.0 (#24952)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-14 09:56:08 -04:00
dependabot[bot]
d4476dcce4 chore(deps): bump actions/checkout from 4.0.0 to 5.0.0 (#24953)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-14 09:55:48 -04:00
dependabot[bot]
99691565df chore(deps): bump renovatebot/github-action from 43.0.16 to 43.0.17 (#24951)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-14 13:43:13 +00:00
dependabot[bot]
5611814243 chore(deps): bump library/registry from 1e96c37 to cd92709 in /test/container (#24955)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-14 13:42:48 +00:00
dependabot[bot]
33e817549e chore(deps): bump library/golang from 1.25.1 to 1.25.3 in /test/remote (#24954)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-14 09:12:07 -04:00
renovate[bot]
7ba800af75 chore(deps): update actions/checkout digest to 1e31de5 (#24612)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-13 14:01:03 +00:00
github-actions[bot]
a1a1dc13fe [Bot] docs: Update Snyk report (#24936)
Signed-off-by: CI <ci@argoproj.com>
Co-authored-by: CI <ci@argoproj.com>
2025-10-13 13:35:55 +00:00
downfa11
fdd27369f9 chore: fix Dockerfile COPY to correctly handle gitops-engine (#24863)
Signed-off-by: downfa11 <downfa11@naver.com>
2025-10-13 09:35:30 -04:00
dependabot[bot]
8e778f12ac chore(deps): bump github.com/casbin/casbin/v2 from 2.127.0 to 2.128.0 (#24918)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-13 09:32:39 -04:00
dependabot[bot]
48c6671593 chore(deps): bump google.golang.org/protobuf from 1.36.9 to 1.36.10 (#24829)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-13 09:32:11 -04:00
dependabot[bot]
c3302545cc chore(deps): bump library/registry from 3725021 to 1e96c37 in /test/container (#24909)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-13 09:31:03 -04:00
Jaewoo Choi
8dcaa2fa47 fix(ui): show delete/details action dropdown for orphaned resource (#24766)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2025-10-13 09:00:13 -04:00
dependabot[bot]
492f712e70 chore(deps): bump softprops/action-gh-release from 2.4.0 to 2.4.1 (#24944)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-13 09:01:53 +00:00
dependabot[bot]
95b191d349 chore(deps): bump golang.org/x/time from 0.13.0 to 0.14.0 (#24906)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-09 18:03:30 -04:00
dependabot[bot]
ce183f02fe chore(deps): bump golang.org/x/oauth2 from 0.31.0 to 0.32.0 (#24903)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-09 18:03:12 -04:00
Jakub Ciolek
bf0661ea81 fix: make webhook payload handlers recover from panics (#24862)
Signed-off-by: Jakub Ciolek <jakub@ciolek.dev>
2025-10-09 17:26:22 -04:00
dependabot[bot]
25ee9cc36d chore(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity from 1.12.0 to 1.13.0 (#24894)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-09 17:25:01 -04:00
dependabot[bot]
4d33ca981b chore(deps): bump golang.org/x/crypto from 0.42.0 to 0.43.0 (#24908)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-09 17:06:10 -04:00
Robin Lieb
0aaffce556 docs: remove gitops-engine in dependencies developer guide (#24888)
Signed-off-by: Robin Lieb <34332703+robinlieb@users.noreply.github.com>
2025-10-08 10:17:31 -04:00
Robin Lieb
290aab9fff docs: update link to health checks in gitops-engine (#24887)
Signed-off-by: Robin Lieb <34332703+robinlieb@users.noreply.github.com>
2025-10-08 09:59:20 -04:00
dependabot[bot]
b260143db1 chore(deps): bump renovatebot/github-action from 43.0.14 to 43.0.16 (#24896)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-08 09:55:50 -04:00
dependabot[bot]
8273c3a8ba chore(deps): bump google.golang.org/grpc from 1.75.1 to 1.76.0 (#24870)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 05:36:42 -10:00
Yusuke Abe
de901da62a fix(ui): convert ProjectDetails components to functional components (#23797)
Signed-off-by: chansuke <moonset20@gmail.com>
2025-10-07 11:17:53 -04:00
jiwlee
b4e022c7d9 fix(ui): convert class component to functional component in project-sync-windows-edit (#23837)
Signed-off-by: jiwlee <ddazi9576@gmail.com>
2025-10-07 11:17:14 -04:00
Linghao Su
d1523a07b6 chore(ui): convert user-info-overview to function component (#23786)
Signed-off-by: linghaoSu <linghao.su@daocloud.io>
2025-10-07 11:16:52 -04:00
Linghao Su
65cbbca068 fix(ui): convert EditablePanel, EditbleSection and Query to function component (#22776)
Signed-off-by: linghaoSu <linghao.su@daocloud.io>
2025-10-07 11:14:32 -04:00
Andreas Schramm
026d10e3f2 feat: syncing to a different revision requires override privilege (#22858)
Signed-off-by: Andreas Schramm <andreas.jabs@gmail.com>
2025-10-07 11:01:25 -04:00
dependabot[bot]
00eb906211 chore(deps): bump softprops/action-gh-release from 2.3.4 to 2.4.0 (#24873)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 09:31:45 -04:00
Rianov Viacheslav
7430650ff5 fix(server): ensure resource health status is inferred on application retrieval (#24832) (#24851)
Signed-off-by: Viacheslav Rianov <rianovviacheslav@gmail.com>
2025-10-06 15:11:51 -04:00
Alexander Matyushentsev
079240b9ba chore: use go mod override instead of go.work (#24841)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-10-06 16:40:30 +00:00
dependabot[bot]
33f6889efd chore(deps): bump library/golang from ab1f5c4 to d709837 in /test/remote (#24857)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:37:19 -04:00
Charles Coupal-Jetté
8c890d4285 feat: Add impersonation support for App finalizer deletion (#24524)
Signed-off-by: Charles Coupal-Jetté <charles.coupaljette@goto.com>
Signed-off-by: Charles Coupal-Jetté <83649150+ccjette-logmein@users.noreply.github.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-10-06 10:30:44 -04:00
J3m3
1db5ee809c docs: update link to point to resource health documentation (#24652)
Signed-off-by: Jesung Yang <y.j3ms.n@gmail.com>
2025-10-06 10:07:48 -04:00
github-actions[bot]
d9699bdcef [Bot] docs: Update Snyk report (#24848)
Signed-off-by: CI <ci@argoproj.com>
Co-authored-by: CI <ci@argoproj.com>
2025-10-06 10:06:51 -04:00
dependabot[bot]
ab11e959b4 chore(deps): bump github.com/go-jose/go-jose/v4 from 4.1.2 to 4.1.3 (#24855)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 04:04:46 -10:00
dependabot[bot]
ea3925e34f chore(deps): bump softprops/action-gh-release from 2.3.3 to 2.3.4 (#24856)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 10:00:51 -04:00
Omar Nasser
ed537d5f3f feat: Add path flag to ArgoCD CLI app list (#24834)
Signed-off-by: Omar Nasser <omarnasserjr@gmail.com>
2025-10-06 09:57:30 -04:00
jan-mrm
afaf16b808 feat(ui): hide sync option 'replace' if sync with replace is disabled in the server (issue no. #22625) (#22647)
Signed-off-by: jan-mrm <67435696+jan-mrm@users.noreply.github.com>
2025-10-06 09:48:06 -04:00
Adam Danko
30abebda3d fix: GCP config connector healthchecks do not make use of existing observedGeneration #24458 (#24459)
Signed-off-by: danko <adamd@elementor.com>
Signed-off-by: Hapshanko <adamgaming391100@gmail.com>
Signed-off-by: Grégoire Salamin <gregoire.salamin@gmail.com>
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Navaneethan <59121948+FalseDev@users.noreply.github.com>
Signed-off-by: Fox Danger Piacenti <fox@opencraft.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Signed-off-by: CI <ci@argoproj.com>
Signed-off-by: Pavel Aborilov <aborilov@gmail.com>
Signed-off-by: Adam Danko <112761282+Hapshanko@users.noreply.github.com>
Co-authored-by: danko <adamd@elementor.com>
Co-authored-by: gsalamin <31199936+gsalamin@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Navaneethan <59121948+FalseDev@users.noreply.github.com>
Co-authored-by: Fox Piacenti <fox@vulpinity.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: CI <ci@argoproj.com>
Co-authored-by: Pavel <aborilov@gmail.com>
2025-10-06 09:37:17 -04:00
Nathanael Liechti
5efb184c79 fix(oidc): check userinfo endpoint in AuthMiddleware (#23586)
Signed-off-by: Nathanael Liechti <technat@technat.ch>
2025-10-06 09:22:47 -04:00
Peter Jiang
96038ba2a1 fix: structured merge diff fix for null metadata field (#24844)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-10-06 09:20:33 -04:00
renovate[bot]
b553cc5ea4 chore(deps): update docker.io/library/ubuntu:25.04 docker digest to 103c747 (#24625)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-03 12:44:06 -04:00
Michael Crenshaw
64421a7acc feat(ci): add run failure link to cherry pick comment (#24838)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-03 18:37:41 +02:00
Michael Crenshaw
ef5b77811d fix(health): incorrect reason in PullRequest script (#24826)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-02 21:29:38 +00:00
gyu-young-park
ba41758147 docs: fix typo in hydrator commit message template documentation (#24822)
Signed-off-by: gyu-young-park <gyoue200125@gmail.com>
2025-10-02 16:59:39 -04:00
Michael Crenshaw
3ee16c0860 feat(actions): PullRequest merge action (#24823)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-02 13:31:13 -04:00
Peter Jiang
fb9d9747bd fix: server-side diff shows duplicate containerPorts (#24785)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-10-02 13:29:12 -04:00
Michael Crenshaw
90b3e856a6 feat(ui): support custom icons (#20864)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-02 11:42:35 -04:00
dependabot[bot]
1b973b81ef chore(deps): bump library/golang from 8305f5f to ab1f5c4 in /test/remote (#24817)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ishita Sequeira <46771830+ishitasequeira@users.noreply.github.com>
2025-10-02 07:58:54 -04:00
dependabot[bot]
7faf8c9818 chore(deps): bump axios from 1.8.2 to 1.12.2 in /ui-test (#24793)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 05:19:40 -04:00
argoproj-renovate[bot]
b1e05e3e07 chore(deps): update docker.io/library/golang:1.25.1 docker digest to 3c96199 (#24812)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
2025-10-02 05:18:16 -04:00
Michael Crenshaw
f308ebf197 docs(hydrator): document commit-server ServiceMonitor (#24800)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-01 18:34:25 -04:00
Michael Crenshaw
46deaabea9 chore: allow docs approvers to merge for docs/operator-manual (#24813)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-01 18:34:11 -04:00
argoproj-renovate[bot]
b8e8c1fccb chore(deps): update docker.io/library/golang:1.25.1 docker digest to 9b057a4 (#24810)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-10-01 12:12:35 -04:00
Selim Can Özdemir
2d2249d3d8 docs: Update USERS.md (#24806)
Signed-off-by: Selim Can Özdemir <116464051+zxselimcan@users.noreply.github.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-01 07:38:09 -06:00
dependabot[bot]
68ff7df659 chore(deps): bump ossf/scorecard-action from 2.4.2 to 2.4.3 (#24803)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-01 09:32:11 -04:00
Alexandre Gaudreault
7396c1a9d2 fix: hydration errors not set on applications (#24755)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 21:44:43 +00:00
Aditya Raj
24fbf285d2 fix: shell compatibility issues in Procfile (#24792)
Signed-off-by: Aditya Raj <adityaraj10600@gmail.com>
2025-09-30 23:44:19 +02:00
dependabot[bot]
9419eb95a8 chore(deps): bump github.com/bradleyfalzon/ghinstallation/v2 from 2.16.0 to 2.17.0 (#24787)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-30 11:35:17 -04:00
Ville Vesilehto
701bc50d01 Merge commit from fork
Fixed a race condition in repository credentials handling by
implementing deep copying of secrets before modification.
This prevents concurrent map read/write panics when multiple
goroutines access the same secret.

The fix ensures thread-safe operations by always operating on
copies rather than shared objects.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-09-30 10:45:58 -04:00
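The commit message above describes the fix only in prose. As a rough illustration of the pattern it names — operating on a deep copy of a shared secret instead of mutating the shared object — a minimal Go sketch might look like the following (the helper name and the specific mutation are hypothetical, not the actual Argo CD change):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotateCopy is a hypothetical helper: it never mutates the shared secret
// (e.g. one served from an informer cache) but works on a deep copy, so
// concurrent readers of the original cannot hit a concurrent map read/write.
func annotateCopy(shared *corev1.Secret) *corev1.Secret {
	s := shared.DeepCopy() // DeepCopy is generated for every Kubernetes API type
	if s.Annotations == nil {
		s.Annotations = map[string]string{}
	}
	s.Annotations["example.io/processed"] = "true"
	return s
}

func main() {
	shared := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{Name: "repo-creds"}}
	fmt.Println(annotateCopy(shared).Annotations) // shared.Annotations stays nil
}
```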
Michael Crenshaw
fa0d6a8eb6 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:45:32 -04:00
Michael Crenshaw
1988c704d5 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:07:24 -04:00
Michael Crenshaw
174fcfe01e Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 09:45:21 -04:00
dependabot[bot]
c9fb04a606 chore(deps): bump docker/login-action from 3.5.0 to 3.6.0 (#24789)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-30 09:42:53 -04:00
dependabot[bot]
5a60cc4ee6 chore(deps): bump github.com/go-openapi/runtime from 0.28.0 to 0.29.0 (#24775)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-30 09:41:54 -04:00
Jonasz Łasut-Balcerzak
0679215992 fix: #24781 update crossplane healthchecks to V2 version (#24782)
Signed-off-by: Jonasz Łasut-Balcerzak <jonasz.lasut@gmail.com>
2025-09-29 18:40:54 -04:00
downfa11
d5d7e8fad2 fix: allow flags to be set when auto-sync is disabled (#24328) (#24380)
Signed-off-by: downfa11 <downfa11@naver.com>
2025-09-29 20:33:23 +02:00
Matthieu MOREL
7357465ea6 chore: enable noctx linter (#24765)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-09-29 20:20:53 +02:00
Blake Pettersson
116707bed1 chore(cli): plugins always have an argocd prefix (#24768)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-09-29 20:56:26 +05:30
github-actions[bot]
cbea4e7ef3 [Bot] docs: Update Snyk report (#24763)
Signed-off-by: CI <ci@argoproj.com>
Co-authored-by: CI <ci@argoproj.com>
2025-09-29 13:15:17 +00:00
Leonardo Luz Almeida
a2b3f0a78e chore: gitops-engine post migration fixes (#24727)
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2025-09-29 09:05:41 -04:00
dependabot[bot]
8dd534ec38 chore(deps): bump renovatebot/github-action from 43.0.13 to 43.0.14 (#24777)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-29 15:24:01 +05:30
dependabot[bot]
eff6a63e10 chore(deps): bump gitlab.com/gitlab-org/api/client-go from 0.148.0 to 0.148.1 (#24774)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-29 15:23:35 +05:30
Artem Vdovin
d00c99d52e chore: copy gitops-engine deps to test container (#24764)
Signed-off-by: Artem Vdovin <arte.vdovin@gmail.com>
2025-09-28 23:48:44 +02:00
lplazas
bc811c5774 fix: allow for backwards compatibility of durations defined in days (#24769)
Signed-off-by: lplazas <felipe.plazas10@gmail.com>
2025-09-28 23:39:42 +02:00
Christian Hernandez
ac12ab91f3 feat(cli): Updated CLI to show Plugins during tab completion (#24758)
Signed-off-by: Christian Hernandez <christian@chernand.io>
Signed-off-by: Christian Hernandez <christianh814@users.noreply.github.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-09-28 19:52:54 +00:00
dependabot[bot]
31088be65b chore(deps): bump actions/cache from 4.2.4 to 4.3.0 (#24736)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-28 12:25:00 +00:00
Andrii Korotkov
af7ae18189 chore: Add appkey and error fields in appcontroller (#24668) (#24669)
Signed-off-by: Andrii Korotkov <myolymp@gmail.com>
2025-09-27 23:47:21 +02:00
Leonardo Luz Almeida
ac46a18b55 Merge pull request #24726 from crenshaw-dev/fix-health-typo
fix(health): typo in PromotionStrategy health.lua
2025-09-27 09:57:27 -04:00
dependabot[bot]
e75e37c06c chore(deps): bump gitlab.com/gitlab-org/api/client-go from 0.147.1 to 0.148.0 (#24718)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-27 15:28:34 +02:00
Guillaume Assier 🌤️
6b1654bea9 docs(OCI): Add private repository examples for private OCI repo (#24737)
Signed-off-by: GuillaumeAssier <sykursen@protonmail.com>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2025-09-27 12:54:02 +02:00
Artem Vdovin
d0cb9c4ea8 docs: high availability notes for CMP (#24743)
Signed-off-by: Artem Vdovin <arte.vdovin@gmail.com>
Signed-off-by: Artem Vdovin <34456690+fm1ck3y@users.noreply.github.com>
Co-authored-by: Afzal Ansari <afzal442@gmail.com>
2025-09-26 20:13:10 +05:30
Masato Gosui
42477a5fcd docs: remove duplicated period in plugins.md (#24746)
Signed-off-by: Masato Gosui <82705154+nekomachi-touge@users.noreply.github.com>
2025-09-26 09:32:32 -04:00
Leonardo Luz Almeida
74a3275df2 Merge pull request #24733 from ranakan19/logVerbosity
fix(gitops-engine): set default verbosity as 0 when dry run strategy is none
2025-09-24 16:04:53 -04:00
Kanika Rana
631d429e2c set default verbosity as 0 when dry run strategy is none
Signed-off-by: Kanika Rana <krana@redhat.com>
2025-09-24 15:20:53 -04:00
Michael Crenshaw
2849f53930 fix(health): typo in PromotionStrategy health.lua
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-24 10:34:45 -04:00
Leonardo Luz Almeida
6dee3d61d4 Merge pull request #24710 from leoluz/gitops-migration
chore: gitops-engine migration
2025-09-24 10:02:15 -04:00
Yoonji Heo
b4e626c9eb chore: add unit tests for printKeyTable (header, rows, uppercase subtype) (#24274) (#24706)
Signed-off-by: Yoonji Heo <myeunee@gmail.com>
2025-09-24 17:36:59 +05:30
Leonardo Luz Almeida
f20995b271 chore: fix codegen check
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2025-09-23 16:26:37 -04:00
Leonardo Luz Almeida
3fcb1a2dca chore: fix go.work.sum
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2025-09-23 15:29:33 -04:00
Atif Ali
d59276a397 fix: Clear ApplicationSet applicationStatus when ProgressiveSync is disabled (#24587)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-09-23 15:19:56 -04:00
Leonardo Luz Almeida
a36bd76d49 chore: update go.work.sum
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2025-09-23 15:05:48 -04:00
warjiang
983f47e3bb fix: typo error in comments (#24650)
Signed-off-by: warjiang <1096409085@qq.com>
2025-09-23 13:42:00 -04:00
JPBotelho
7dacb11b02 docs: fix typos (#24666)
Signed-off-by: jpbotelho <jpbotelho.costa@gmail.com>
2025-09-23 13:41:36 -04:00
AvivGuiser
c3079fc31a fix: update ExternalSecret discovery.lua to also include the refreshPolicy (#24707)
Signed-off-by: AvivGuiser <avivguiser@gmail.com>
2025-09-23 17:27:07 +00:00
dependabot[bot]
f43147f813 chore(deps): bump gitlab.com/gitlab-org/api/client-go from 0.142.6 to 0.147.1 (#24705)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-23 12:12:03 -04:00
Alexander Matyushentsev
f6f1a42492 feat: add status.resourcesCount field to appset and change limit default (#24698)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-23 12:11:36 -04:00
Leonardo Luz Almeida
ba9125230f chore: merge gitops-engine test
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2025-09-23 10:15:34 -04:00
Leonardo Luz Almeida
f5382d9ef1 Merge remote-tracking branch 'ge-migrated/prepare-migration' into gitops-migration
2025-09-23 10:11:54 -04:00
Marco Franssen
598dbcb61d fix: Update copyutil command 'cp -n' to resolve warning (#24708)
Signed-off-by: Marco Franssen <marco.franssen@gmail.com>
2025-09-23 10:08:39 -04:00
Leonardo Luz Almeida
f13be2a393 chore: gitops migration test PR
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2025-09-23 10:08:25 -04:00
Leonardo Luz Almeida
bcc0243f1e prepare repo for migration to ArgoCD repo
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2025-09-23 10:05:42 -04:00
Michael Crenshaw
b8ca641e1d docs(webhook): document Azure DevOps username/password (#24693)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-22 15:20:33 -07:00
Alexander Matyushentsev
452f6c68b8 fix: limit number of resources in appset status (#24690)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-22 12:45:29 -07:00
Dillen Padhiar
552ad1c75b feat: update unpause fast/gradual actions for Numaplane rollouts (#24545)
Signed-off-by: Dillen Padhiar <dillen_padhiar@intuit.com>
2025-09-22 14:07:54 -04:00
Eunji
68d10fe7cb fix(ui): migrate gpgkeys-list.tsx from class to functional (#23646) (#23821)
Signed-off-by: EunJiJung <bianbbc87@gmail.com>
2025-09-22 11:59:33 -04:00
Eunji
e7b51daf58 fix(ui): migrate certs-list.tsx from class to functional (#23646) (#23820)
Signed-off-by: EunJiJung <bianbbc87@gmail.com>
2025-09-22 11:59:17 -04:00
Eunji
44324c0791 fix(ui): migrate repos-list.tsx from class to functional (#23646) (#23818)
Signed-off-by: EunJiJung <bianbbc87@gmail.com>
2025-09-22 11:58:50 -04:00
jiwlee
59c9c60153 fix(ui): convert TagsInput component to functional component (#23795)
Signed-off-by: jiwlee <ddazi9576@gmail.com>
2025-09-22 11:58:20 -04:00
Yusuke Abe
7e1db4a2a8 fix(ui): convert PodView components to functional components (#23781)
Signed-off-by: chansuke <moonset20@gmail.com>
2025-09-22 11:56:36 -04:00
Yusuke Abe
712109be5e chore(ui): convert ApplicationDetails components to functional components (#23767)
Signed-off-by: chansuke <moonset20@gmail.com>
2025-09-22 11:56:15 -04:00
Zach Aller
e181fbb81d fix(cmp): fix plugins not having access to argocd cli for git ASKPASS (#24665)
Signed-off-by: Zach Aller <zach_aller@intuit.com>
2025-09-22 09:52:47 -04:00
DOHYEONG LEE
d1f8cddad0 fix: resolve argocdService initialization issue in notifications CLI (#24664)
Signed-off-by: puretension <rlrlfhtm5@gmail.com>
2025-09-22 14:26:52 +02:00
dependabot[bot]
52c70b84c8 chore(deps): bump github.com/casbin/casbin/v2 from 2.126.0 to 2.127.0 (#24675)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-22 09:14:02 +00:00
dependabot[bot]
2adf4568a1 chore(deps): bump github.com/olekukonko/tablewriter from 1.0.9 to 1.1.0 (#24676)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-22 09:13:48 +00:00
argoproj-renovate[bot]
12f332ee2e chore(deps): update module github.com/golangci/golangci-lint to v2.5.0 (#24673)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-22 04:05:06 +00:00
dependabot[bot]
b7d975e0b3 chore(deps): bump renovatebot/github-action from 43.0.12 to 43.0.13 (#24677)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-21 23:50:36 -04:00
argoproj-renovate[bot]
5aa6bcb387 chore(deps): update dependency golang to v1.25.1 (#24618)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-09-22 03:24:06 +00:00
Aditya Raj
ed3acd0f79 fix: correct typo (#24671)
Signed-off-by: Aditya Raj <adityaraj10600@gmail.com>
2025-09-21 20:21:25 -04:00
Michael Crenshaw
a864d7052f docs: use GitHub alerts instead of mkdocs admonitions (#24631)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-21 19:46:10 -04:00
Alexandre Gaudreault
f960274139 ci(release): only set latest release in github when latest (#24525)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-19 13:11:07 -04:00
David Sutor
1bce29fd79 docs: add missing closing bracket (#24659)
Signed-off-by: David Sutor <david@sutor.sk>
2025-09-19 17:14:06 +02:00
Koray Oksay
4e42f00b57 chore(ci): update github runners to oci gh arc runners (#24632)
Signed-off-by: Koray Oksay <koray.oksay@gmail.com>
2025-09-18 14:49:05 -04:00
Omar Nasser
51b93e7b3f docs: Remove old link in USERS.md (#24646)
Signed-off-by: Omar Nasser <omarnasserjr@gmail.com>
2025-09-18 11:43:43 -04:00
Blake Pettersson
ed983d8a69 fix(oci): loosen up layer restrictions (#24640)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-09-18 08:19:34 -07:00
dependabot[bot]
b76078863d chore(deps): bump library/golang from 1.25.0 to 1.25.1 in /test/container (#24447)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-18 15:04:30 +00:00
dependabot[bot]
b9916c3313 chore(deps): bump library/golang from 1.25.0 to 1.25.1 in /test/remote (#24446)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-18 15:02:27 +00:00
github-actions[bot]
273ece0c67 chore: Bump version in master (#24583)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-18 14:44:43 +00:00
renovate[bot]
77f313c2ec chore(deps): update docker.io/library/busybox docker digest to d82f458 (#24616)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-18 10:15:45 -04:00
renovate[bot]
f209e7aa7b chore(deps): update docker.io/library/golang:1.25.0 docker digest to 5502b0e (#24617)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-18 10:15:22 -04:00
Atif Ali
fbcaf35ab2 fix: Progress Sync Unknown in UI (#24202)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-09-18 09:25:17 -04:00
dependabot[bot]
1d4d5dbbdc chore(deps): bump github.com/casbin/casbin/v2 from 2.123.0 to 2.126.0 (#24630)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-18 03:39:33 +00:00
Alexander Matyushentsev
a880feeefa fix: use informer in webhook handler to reduce memory usage (#24622)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-17 13:58:42 -07:00
argoproj-renovate[bot]
6ca71fec00 chore(deps): update module github.com/vektra/mockery/v3 to v3.5.5 (#24606)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-17 14:11:18 -04:00
dependabot[bot]
9ef837c326 chore(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity from 1.11.0 to 1.12.0 (#24593)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-17 13:44:21 -04:00
argoproj-renovate[bot]
c11d35a20f chore(deps): update dependency gotestyourself/gotestsum to v1.13.0 (#24610)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-09-17 13:43:28 -04:00
renovate[bot]
a7a07e2cd8 chore(deps): update dependency normalize-url to v4.5.1 [security] (#24607)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-17 13:40:30 -04:00
argoproj-renovate[bot]
9faa6098ed chore(deps): update dependency markdown to v3.9 (#24611)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-09-17 13:21:32 -04:00
argoproj-renovate[bot]
0fb6c51f9d chore(deps): update group golang to v1.25.1 (#24605)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2025-09-17 13:13:51 -04:00
Siva Sathwik Kommi
dbef22c843 fix: Fixed inconsistent alignment of titles and headings in status panel (#23160)
Signed-off-by: sivasath16 <sivasathwik.kommi@gmail.com>
Signed-off-by: Siva Sathwik Kommi <sivasathwik.kommi@gmail.com>
2025-09-17 21:33:02 +05:30
Michael Crenshaw
47142b89f4 chore(ci): enable Renovate (#24602)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-17 16:02:30 +00:00
José Maia
98a22612dd docs: Delete dangling word in Source Hydrator docs (#24601)
Signed-off-by: José Maia <josecbmaia@hotmail.com>
2025-09-17 11:34:22 -04:00
Blake Pettersson
6cce4b29b9 chore(ci): don't run renovate on forks (#24600)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-09-17 09:43:13 -04:00
Revital Barletz
9087ad7282 docs: fix inconsistency in application health example (#24585)
Signed-off-by: Revital Barletz <Revital.barletz@octopus.com>
Signed-off-by: Dan Garfield <dan@codefresh.io>
Co-authored-by: Dan Garfield <dan@codefresh.io>
2025-09-17 09:08:13 +00:00
Alexandre Gaudreault
c377101491 fix(appset): progressive sync loop when application has sync errors (#24507)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-16 17:56:55 +00:00
Papapetrou Patroklos
1d13ebc372 chore: bumps redis version to 8.2.1 (#24523)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
2025-09-16 09:46:25 -04:00
Papapetrou Patroklos
9068f90261 add readme notice about ongoing migration (#781)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
2025-09-10 21:57:48 -10:00
Peter Jiang
97ad5b59a6 chore: bump k8s v1.34 (#773)
* bump k8s v1.34

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* fix broken dependencies

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

---------

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-09-08 14:24:07 -04:00
Anand Francis Joseph
b3a2ec15e6 fix(sync): ApplyOutOfSyncOnly=true sync option is not honoured for cluster scoped resources (#765)
* Using live obj to get the resource key if not nil

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Fixed failing unit tests

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Added test case for validating cluster scoped resources with ApplyOutOfSyncOnly=true

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Fixed gofumpt formatting errors

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Corrected unit tests for cluster scoped resources

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Removed unwanted code comments

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Added comments for explaining the reason why ns is set from live object

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Added comments in the unit test

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

---------

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
2025-09-05 08:06:30 -04:00
Papapetrou Patroklos
e0a0d5ba17 chore: Adds a script to prepare repo for migration to main ArgoCD Repo (#766)
* Adds a script to prepare repo for migration to main ArgoCD Repo

Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>

* remove also .gitignore

Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>

* fix script

Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>

* address pr comments

Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>

---------

Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
2025-09-04 09:47:31 -04:00
Alexandre Gaudreault
dab4cc0b8a fix(hooks): always remove finalizers on create if hook exists (#770)
* fix(hooks): always remove finalizers

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* unit test

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

---------

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-02 15:51:53 -04:00
Alexandre Gaudreault
dc952c1a60 validate resource opts (#759)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-08-27 11:42:04 -04:00
rafal-jan
15973bc6b4 Do not use --force with --dry-run (#633)
Signed-off-by: rafal-jan <rafal7jan@gmail.com>
2025-08-18 12:59:21 -04:00
Regina Voloshin
90e5e3a40e added common disable sync option (#749)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
2025-07-28 00:31:59 +03:00
Peter Jiang
38f73a75c3 fix: update normalization for ignoreDifferences on server-side diff (#747)
* WIP fix: update normalization for ignoreDifferences on server-side diff

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* fix linter

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* refactor

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* skip pre-normalization for ssd

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* skip pre-normalization for ssd

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* docs

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* modify Normalize to skip ignoreDifferences conditionally

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* ignoreDifferences for SSD and test backwards compatible

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* rename flags for clarity

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* doc change

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

---------

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-07-24 09:53:28 -04:00
Andrii Korotkov
d590fe55be chore: Cleanup IterateHierarchy v1 (#748)
* chore: Cleanup IterateHierarchy v1

Part of https://github.com/argoproj/argo-cd/issues/23854

Remove the unused code in gitops-engine.

Signed-off-by: Andrii Korotkov <myolymp@gmail.com>

* Revert some deleted tests

Signed-off-by: Andrii Korotkov <myolymp@gmail.com>

---------

Signed-off-by: Andrii Korotkov <myolymp@gmail.com>
2025-07-22 13:50:43 -07:00
Peter Jiang
093aef0dad fix: server-side diff shows refresh/hydrate annotations (#737)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-06-17 13:49:52 -04:00
Alexandre Gaudreault
8007df5f6c fix(sync): create namespace before dry-run (#731)
2025-06-16 17:23:58 -04:00
Peter Jiang
f8f1b61ba3 chore: upgrade k8s to 1.33.1 (#735)
* upgrade k8s to 1.33.1

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* fix linter issues

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

---------

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-06-13 11:47:45 -04:00
Michael Crenshaw
8697b44eea Revert "Add option to skip the dryrun from the sync context (#708)" (#730)
* Revert "Add option to skip the dryrun from the sync context (#708)"

This reverts commit 717b8bfd69.

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* format

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-06-12 14:52:06 -04:00
Peter Jiang
69dfa708a6 feat: auto migrate kubectl-client-side-apply fields for SSA (#727)
* feat: auto migrate kubectl-client-side-apply fields for SSA

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* fix master version

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* run gofumpt

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Propagate sync error instead of logging

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* allow enable/disable of CSA migration using annotation

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* fix linting

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Refactor to allow for multiple managers and disable option

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* remove comment

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* refactor

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* fix test

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Add docs for client side apply migration

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Edit comment

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

---------

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-06-12 10:03:52 -04:00
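
The client-side-apply migration above works by re-attributing field ownership in managedFields so that a later server-side apply does not conflict with fields previously owned by the kubectl client-side-apply manager. A minimal Go sketch of that idea (illustrative only; the real migration also merges field sets, is toggled by annotation, and the manager names here are assumptions):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// migrateCSAToSSA is a simplified illustration: any managedFields entry owned
// by one of csaManagers (e.g. "kubectl-client-side-apply") is re-attributed to
// ssaManager with an Apply operation, so a subsequent server-side apply by
// ssaManager can take over those fields instead of conflicting with them.
func migrateCSAToSSA(obj *unstructured.Unstructured, csaManagers map[string]bool, ssaManager string) {
	fields := obj.GetManagedFields()
	for i := range fields {
		if csaManagers[fields[i].Manager] && fields[i].Operation == metav1.ManagedFieldsOperationUpdate {
			fields[i].Manager = ssaManager
			fields[i].Operation = metav1.ManagedFieldsOperationApply
		}
	}
	obj.SetManagedFields(fields)
}

func main() {
	obj := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "ConfigMap",
		"metadata":   map[string]interface{}{"name": "demo"},
	}}
	obj.SetManagedFields([]metav1.ManagedFieldsEntry{
		{Manager: "kubectl-client-side-apply", Operation: metav1.ManagedFieldsOperationUpdate},
	})
	migrateCSAToSSA(obj, map[string]bool{"kubectl-client-side-apply": true}, "argocd-controller")
	fmt.Printf("%+v\n", obj.GetManagedFields())
}
```
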
Michael Crenshaw
cebed7e704 chore: wrap errors (#732)
* chore: wrap errors

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* report list result along with error

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fixes

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-06-06 10:49:09 -04:00
Peter Jiang
89c110b595 fix: Server-Side diff removed fields missing in diff (#722)
* fix: Server-Side diff removed fields missing in diff

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* add unit test to cover deleted field

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

---------

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-05-20 14:24:09 -04:00
Michael Crenshaw
90b69e9ae5 chore(deps): bump golangci-lint (#719)
* chore(deps): bump golangci-lint

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fix lint

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fix tests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-05-14 17:14:57 -04:00
dependabot[bot]
60c6378a12 chore(deps): bump codecov/codecov-action from a2f73fb6db51fcd2e0aa085dfb36dea90c5e3689 to 5c47607acb93fed5485fdbf7232e8a31425f672a (#649)
* chore(deps): bump codecov/codecov-action

Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from a2f73fb6db51fcd2e0aa085dfb36dea90c5e3689 to 5c47607acb93fed5485fdbf7232e8a31425f672a.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](a2f73fb6db...5c47607acb)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update ci.yaml

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-05-14 17:14:49 -04:00
dependabot[bot]
c7f25189f0 chore(deps): bump actions/setup-go (#720)
Bumps the dependencies group with 1 update in the / directory: [actions/setup-go](https://github.com/actions/setup-go).


Updates `actions/setup-go` from 5.1.0 to 5.5.0
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](41dfa10bad...d35c59abb0)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: 5.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-14 16:13:50 -04:00
dependabot[bot]
9169c08c91 chore(deps): bump golang.org/x/net from 0.36.0 to 0.38.0 (#713)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.36.0 to 0.38.0.
- [Commits](https://github.com/golang/net/compare/v0.36.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-14 16:04:28 -04:00
Pasha Kostohrys
d65e9d9227 feat: Enable SkipDryRunOnMissingResource sync option on Application level (#712)
* fix: go mod tidy is not working due to k8s.io/externaljwt dependency

Signed-off-by: pasha <pasha.k@fyxt.com>

* feat: Enable SkipDryRunOnMissingResource sync option on Application level

Signed-off-by: pasha <pasha.k@fyxt.com>

* feat: Enable SkipDryRunOnMissingResource sync option on Application level

* feat: add support for skipping dry run on missing resources in sync context

- Introduced a new option to skip dry run verification for missing resources at the application level.
- Updated the sync context to include a flag for this feature.
- Enhanced tests to cover scenarios where the skip dry run annotation is applied to all resources.

---------

Signed-off-by: pasha <pasha.k@fyxt.com>
Co-authored-by: pasha <pasha.k@fyxt.com>
2025-04-20 09:41:38 +03:00
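
A minimal sketch of the behavior described in the entry above, assuming the Argo CD `argocd.argoproj.io/sync-options` annotation and a hypothetical application-level flag; this is not the actual gitops-engine API:

```go
package main

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

const syncOptionsAnnotation = "argocd.argoproj.io/sync-options"

// shouldSkipDryRun is an illustrative sketch: dry-run is skipped for a resource
// whose type is not yet known to the cluster (e.g. a CR whose CRD is created in
// the same sync) when either the application-level flag is enabled or the
// resource itself carries the SkipDryRunOnMissingResource sync option.
func shouldSkipDryRun(obj *unstructured.Unstructured, resourceKnown, appLevelSkip bool) bool {
	if resourceKnown {
		return false // the API server knows the type, dry-run can proceed
	}
	if appLevelSkip {
		return true
	}
	for _, opt := range strings.Split(obj.GetAnnotations()[syncOptionsAnnotation], ",") {
		if strings.TrimSpace(opt) == "SkipDryRunOnMissingResource=true" {
			return true
		}
	}
	return false
}

func main() {
	cr := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "example.com/v1",
		"kind":       "Widget",
		"metadata": map[string]interface{}{
			"name":        "demo",
			"annotations": map[string]interface{}{syncOptionsAnnotation: "SkipDryRunOnMissingResource=true"},
		},
	}}
	fmt.Println(shouldSkipDryRun(cr, false, false)) // true: the annotation opts in
	fmt.Println(shouldSkipDryRun(cr, true, false))  // false: the type is known, dry-run runs
}
```
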
Pasha Kostohrys
5f90e7b481 fix: go mod tidy is not working due to k8s.io/externaljwt dependency (#710)
Signed-off-by: pasha <pasha.k@fyxt.com>
Co-authored-by: pasha <pasha.k@fyxt.com>
2025-04-12 22:28:44 +03:00
Nick Heijmink
717b8bfd69 Add option to skip the dryrun from the sync context (#708)
* Add option to skip the dryrun from the sync context

Signed-off-by: Nick Heijmink <nick.heijmink@alliander.com>

* Fix test by mocking the discovery

Signed-off-by: Nick Heijmink <nick.heijmink@alliander.com>

* Fix linting errors

Signed-off-by: Nick Heijmink <nick.heijmink@alliander.com>

* Fix skip dryrun const

---------

Signed-off-by: Nick Heijmink <nick.heijmink@alliander.com>
2025-04-10 14:12:36 -07:00
Aaron Hoffman
c61756277b feat: return images from resources when sync occurs (#642)
* Add GetResourceImages

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Use require instead of assert

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Add test for empty images case

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Rename test function to match regex

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Add support for cronjobs, refactor images implementation, and add test for cronjobs

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Move missing images tests to single function

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Refactor test

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Add benchmark for sync

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Update comment on images

Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

---------

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-27 09:45:14 -04:00
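
A rough sketch of what a GetResourceImages-style helper can look like for unstructured workloads; the field paths per kind are the standard Kubernetes ones, while the function name and shape are assumptions rather than the actual implementation:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// resourceImages walks the pod template of common workload kinds and collects
// container images. Field paths differ per kind (CronJob nests its pod
// template under jobTemplate).
func resourceImages(obj *unstructured.Unstructured) []string {
	var path []string
	switch obj.GetKind() {
	case "Deployment", "StatefulSet", "DaemonSet", "ReplicaSet", "Job":
		path = []string{"spec", "template", "spec", "containers"}
	case "CronJob":
		path = []string{"spec", "jobTemplate", "spec", "template", "spec", "containers"}
	case "Pod":
		path = []string{"spec", "containers"}
	default:
		return nil
	}
	containers, found, err := unstructured.NestedSlice(obj.Object, path...)
	if !found || err != nil {
		return nil
	}
	var images []string
	for _, c := range containers {
		if m, ok := c.(map[string]interface{}); ok {
			if image, ok := m["image"].(string); ok {
				images = append(images, image)
			}
		}
	}
	return images
}

func main() {
	deploy := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "apps/v1",
		"kind":       "Deployment",
		"spec": map[string]interface{}{
			"template": map[string]interface{}{
				"spec": map[string]interface{}{
					"containers": []interface{}{
						map[string]interface{}{"name": "web", "image": "nginx:1.27"},
					},
				},
			},
		},
	}}
	fmt.Println(resourceImages(deploy)) // [nginx:1.27]
}
```
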
Andrii Korotkov
370078d070 chore: Switch dry run applies to log with debug level (#705)
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
2025-03-18 09:33:28 -04:00
Peter Jiang
7258614f50 chore: add unit test for ssa with dryRun (#703)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-03-14 12:43:14 -04:00
dependabot[bot]
e5ef2e16d8 chore(deps): bump golang.org/x/net from 0.33.0 to 0.36.0 (#700)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.33.0 to 0.36.0.
- [Commits](https://github.com/golang/net/compare/v0.33.0...v0.36.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-13 11:41:39 -04:00
Peter Jiang
762f9b70f3 fix: Fix checking dryRun when using Server Side Apply (#699)
* fix: properly check dryRun flag for server side apply

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Remove debug logging

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

---------

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-03-13 09:11:58 -04:00
Andrii Korotkov
1265e8382e chore(deps): Update some dependencies - another run (#22228) (#696)
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
2025-03-10 06:26:31 -07:00
Andrii Korotkov
acb47d5407 chore(deps): Update some package versions (#690)
* chore(deps): Update some package versions

Helps with https://github.com/argoproj/argo-cd/issues/22104

Update some versions trying to avoid legacy dependencies. Bump go to 1.23.5.

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Fix versions

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

---------

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
2025-03-05 10:26:49 -05:00
Michael Crenshaw
e4cacd37c4 chore(ci): run tests on cherry-pick PRs (#694)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-04 14:06:13 -05:00
sivchari
a16fb84a8c bump k8s v1.32 (#665)
Signed-off-by: sivchari <shibuuuu5@gmail.com>
2025-03-02 12:29:52 -05:00
Peter Jiang
4fd18478f5 fix: Server-side diff shows incorrect diffs for list related changes (#688)
* fix: Server-side diff shows incorrect diffs for list related changes

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Update docs for removeWebHookMutation

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Update docs

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Update docs

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

---------

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-02-27 16:20:03 -05:00
Dejan Zele Pejchev
65db274b8d fix: stuck hook issue when a Job resource has a ttlSecondsAfterFinished field set (#646)
Signed-off-by: Dejan Zele Pejchev <pejcev.dejan@gmail.com>
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-02-07 17:04:47 -05:00
Matthieu MOREL
11a5e25708 chore(deps): bump github.com/evanphx/json-patch to v5.9.11 (#682)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 15:37:51 -05:00
Matthieu MOREL
04266647b1 chore: enable require-error from testifylint (#681)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 15:01:48 -05:00
Matthieu MOREL
cc13a7d417 chore: enable gocritic (#680)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 15:00:49 -05:00
Matthieu MOREL
ad846ac0fd chore: enable increment-decrement and redundant-import-alias from revive (#679)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 14:56:44 -05:00
Matthieu MOREL
c323d36706 chore: enable dot-imports, duplicated-imports from revive (#678)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 13:10:59 -05:00
Matthieu MOREL
782fb85b94 chore: enable unparam linter (#677)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 12:52:34 -05:00
Matthieu MOREL
f5aa9e4d10 chore: enable perfsprint linter (#676)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 12:44:40 -05:00
Matthieu MOREL
367311bd6f chore: enable thelper linter (#675)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 12:43:51 -05:00
Matthieu MOREL
70bee6a3a5 chore: enable early-return, indent-error-flow and unnecessary-stmt from revive (#674)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 12:43:30 -05:00
Matthieu MOREL
b111e50082 chore: enable errorlint (#673)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 11:25:43 -05:00
Andrii Korotkov
3ef5ab187e fix: New kube applier for server side diff dry run with refactoring (#662)
* fix: New kube applier for server side diff dry run with refactoring

Part of a fix for: https://github.com/argoproj/argo-cd/issues/21488

Separate logic to handle server side diff dry run applies.

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Break backwards compatibility for better code

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Don't put applier constructor in the interface

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Address comments

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Address more comments

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* chore: enable testifylint linter (#657)

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* chore: enable gofumpt, gosimple and whitespace linters (#666)

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* chore: enable use-any from revive (#667)

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* chore: bump golangci-lint to v1.63.4 and list argo-cd linters (#670)

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* chore: enable unused-parameter and var-declaration from revive (#668)

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* chore: remove actions/cache duplicated behavior with actions/setup-go (#658)

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Add Leo's code

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Make linter happy

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

---------

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Co-authored-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 11:13:38 -05:00
dependabot[bot]
30f4accb42 chore(deps): bump golang.org/x/net from 0.26.0 to 0.33.0 (#671)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.26.0 to 0.33.0.
- [Commits](https://github.com/golang/net/compare/v0.26.0...v0.33.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-07 11:06:57 -05:00
Matthieu MOREL
00472077d3 chore: enable goimports linter (#669)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 11:06:31 -05:00
Matthieu MOREL
bfdad63e27 chore: enable misspell linter (#672)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 11:05:44 -05:00
Matthieu MOREL
a093a7627f chore: remove actions/cache duplicated behavior with actions/setup-go (#658)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 09:56:59 -05:00
Matthieu MOREL
ccee58366a chore: enable unused-parameter and var-declaration from revive (#668)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 09:55:39 -05:00
Matthieu MOREL
edb9faabbf chore: bump golangci-lint to v1.63.4 and list argo-cd linters (#670)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 09:52:05 -05:00
Matthieu MOREL
7ac688a30f chore: enable use-any from revive (#667)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-06 18:20:29 -05:00
Matthieu MOREL
f948991e78 chore: enable gofumpt, gosimple and whitespace linters (#666)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-06 17:07:19 -05:00
Matthieu MOREL
382663864e chore: enable testifylint linter (#657)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-06 10:27:22 -05:00
Siddhesh Ghadi
7e21b91e9d Merge commit from fork
map[] in error output exposes secret data in last-applied-annotation
& patch error

Invalid secrets with stringData expose the secret values in the diff. Attempt a
normalization to prevent it.

Refactor stringData to data conversion to eliminate code duplication

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
2025-01-29 10:51:13 -05:00
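
A minimal sketch of the normalization idea from the entry above: fold `stringData` into base64-encoded `data` (as the API server would) so an invalid Secret never surfaces its plain-text values in diffs or patch errors. Illustrative only, not the actual fix:

```go
package main

import (
	"encoding/base64"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// normalizeSecretStringData folds stringData into data, base64-encoding the
// values the same way the API server would on admission.
func normalizeSecretStringData(secret *unstructured.Unstructured) error {
	stringData, found, err := unstructured.NestedStringMap(secret.Object, "stringData")
	if err != nil || !found {
		return err
	}
	data, _, err := unstructured.NestedMap(secret.Object, "data")
	if err != nil {
		return err
	}
	if data == nil {
		data = map[string]interface{}{}
	}
	for k, v := range stringData {
		// stringData wins over data, mirroring API server behavior
		data[k] = base64.StdEncoding.EncodeToString([]byte(v))
	}
	if err := unstructured.SetNestedMap(secret.Object, data, "data"); err != nil {
		return err
	}
	unstructured.RemoveNestedField(secret.Object, "stringData")
	return nil
}

func main() {
	s := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Secret",
		"metadata":   map[string]interface{}{"name": "demo"},
		"stringData": map[string]interface{}{"password": "hunter2"},
	}}
	_ = normalizeSecretStringData(s)
	fmt.Println(s.Object["data"], s.Object["stringData"]) // map[password:aHVudGVyMg==] <nil>
}
```
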
Michael Crenshaw
d78929e7f6 fix(cluster): reduce lock contention on cluster initialization (#660)
* fix: move expensive function outside lock

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* add benchmark

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-01-24 16:18:12 -05:00
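
The fix above is the classic pattern of doing the expensive work before entering the critical section. A small self-contained Go illustration (the types are invented for the example):

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// cache holds shared state guarded by a mutex. Holding the lock across the
// expensive call would serialize every concurrent initializer; doing the work
// first keeps the critical section down to a cheap map write.
type cache struct {
	mu    sync.Mutex
	state map[string]string
}

func expensiveCompute(name string) string {
	return strings.ToUpper(name) // stand-in for costly work (discovery, parsing, ...)
}

func (c *cache) initCluster(name string) {
	value := expensiveCompute(name) // outside the lock

	c.mu.Lock()
	defer c.mu.Unlock()
	c.state[name] = value // inside the lock: cheap map write only
}

func main() {
	c := &cache{state: map[string]string{}}
	var wg sync.WaitGroup
	for _, n := range []string{"alpha", "beta", "gamma"} {
		wg.Add(1)
		go func(n string) { defer wg.Done(); c.initCluster(n) }(n)
	}
	wg.Wait()
	fmt.Println(len(c.state)) // 3
}
```
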
Mykola Pelekh
54992bf424 fix: avoid resources lock contention utilizing channel (#629)
* fix: avoid resources lock contention utilizing channel

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

* feat: process events in batch when the mode is enabled (default is `false`)

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

* test: update unit tests to verify batch events processing flag

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

* feat: make eventProcessingInterval option configurable (default is 0.1s)

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

* fixup! feat: make eventProcessingInterval option configurable (default is 0.1s)

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

---------

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>
2024-12-16 10:52:26 -05:00
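
A compact Go sketch of the batching approach described above: watch events go onto a channel and a single goroutine applies them in batches on an interval (0.1s by default per the entry), so the resources lock is taken once per batch instead of once per event. Types are illustrative, not the cluster cache code:

```go
package main

import (
	"fmt"
	"time"
)

type event struct{ key, action string }

// processBatches drains events from the channel and applies them in batches,
// either when the interval ticks or when the channel is closed.
func processBatches(events <-chan event, interval time.Duration, apply func([]event)) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	var batch []event
	for {
		select {
		case ev, ok := <-events:
			if !ok {
				if len(batch) > 0 {
					apply(batch)
				}
				return
			}
			batch = append(batch, ev)
		case <-ticker.C:
			if len(batch) > 0 {
				apply(batch) // one lock acquisition per batch in the real cache
				batch = nil
			}
		}
	}
}

func main() {
	events := make(chan event)
	done := make(chan struct{})
	go func() {
		processBatches(events, 100*time.Millisecond, func(b []event) {
			fmt.Printf("applied %d events\n", len(b))
		})
		close(done)
	}()
	for i := 0; i < 5; i++ {
		events <- event{key: fmt.Sprintf("pod-%d", i), action: "update"}
	}
	close(events)
	<-done
}
```
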
JM (Jason Meridth)
8d65e80ecb chore: update README get involved links (#647)
Change links:

- [x] from kubernetes slack to cncf slack
- [x] from k8s gitop channel to cncf argo-cd-contributors channel

Signed-off-by: jmeridth <jmeridth@gmail.com>
2024-12-12 16:23:57 -05:00
JM (Jason Meridth)
363a7155a5 chore: add CODEOWNERS (#641)
* chore: add CODEOWNERS and EMERITUS.md

Setting up a CODEOWNERS file so people are automatically notified about new PRs. We can
eventually set up a ruleset that requires at least one review before merging.

I based the current list in CODEOWNERS on people who have recently merged PRs. I'm wondering
whether reviewers and approvers should be a team/group instead of a list of folks.

Set up EMERITUS.md that contains the list from OWNERS. Feedback on this PR will be
incorporated into this PR.

Signed-off-by: jmeridth <jmeridth@gmail.com>

* chore: match this repo's CODEOWNERS to argoproj/argo-cd CODEOWNERS

Signed-off-by: jmeridth <jmeridth@gmail.com>

---------

Signed-off-by: jmeridth <jmeridth@gmail.com>
2024-12-12 16:22:31 -05:00
JM (Jason Meridth)
73452f8a58 fix: run go mod tidy in ci workflow (#652)
Fixes an issue that showed up in https://github.com/argoproj/gitops-engine/pull/650

[Error](https://github.com/argoproj/gitops-engine/actions/runs/12300709584/job/34329534904?pr=650#step:5:96)

Signed-off-by: jmeridth <jmeridth@gmail.com>
2024-12-12 11:56:51 -05:00
JM (Jason Meridth)
d948e6b41c fix: github actions versions and warnings (#639)
* fix: github actions versions and warnings

- [x] upgrade github/actions/cache GitHub Action to latest
 - this fixes the following warnings (example list [here](https://github.com/argoproj/gitops-engine/actions/runs/11885468091)):
   - Your workflow is using a version of actions/cache that is scheduled for deprecation, actions/cache@v2.1.6. Please update your workflow to use the latest version of actions/cache to avoid interruptions. Learn more: https://github.blog/changelog/2024-09-16-notice-of-upcoming-deprecations-and-changes-in-github-actions-services/
   - The `save-state` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
   - The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
- [x] update dependabot config
  - prefix PRs with `chore(deps)`
  - group non major version updates into 1 PR

Signed-off-by: jmeridth <jmeridth@gmail.com>

* fix: switch to SHA from tags

more secure, as tags are mutable

Signed-off-by: jmeridth <jmeridth@gmail.com>

---------

Signed-off-by: jmeridth <jmeridth@gmail.com>
2024-12-12 11:31:38 -05:00
Andrii Korotkov
8849c3f30c fix: Server side diff now works correctly with fields removal (#640)
* fix: Server side diff now works correctly with some fields removal

Helps with https://github.com/argoproj/argo-cd/issues/20792

Removed and modified sets may only contain the fields that changed, not including key fields like "name". This can cause merge to fail, since it expects those fields to be present if they are present in the predicted live.
Fortunately, we can inspect the set and derive the key fields necessary. Then they can be added to the set and used during a merge.
Also added a new test which fails before the fix but passes now.

Failure of the new test before the fix
```
            	Error:      	Received unexpected error:
            	            	error removing non config mutations for resource Deployment/nginx-deployment: error reverting webhook removed fields in predicted live resource: .spec.template.spec.containers: element 0: associative list with keys has an element that omits key field "name" (and doesn't have default value)
            	Test:       	TestServerSideDiff/will_test_removing_some_field_with_undoing_changes_done_by_webhook
```

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Use new version of structured merge diff with a new option

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Add DCO

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Try to fix sonar exclusions config

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

---------

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
2024-12-11 15:28:47 -05:00
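
A simplified illustration of the idea in the entry above: when an extracted associative-list item omits its merge-key fields (like a container entry without `name`), the key values recovered from the field path are re-added before merging. This sketch operates on plain maps for clarity; the actual fix works on structured-merge-diff field sets:

```go
package main

import "fmt"

// ensureListItemKeys re-adds the merge-key fields to a partial list item so a
// later merge can match it against the corresponding element in the live list.
func ensureListItemKeys(item map[string]interface{}, keys map[string]interface{}) map[string]interface{} {
	for k, v := range keys {
		if _, ok := item[k]; !ok {
			item[k] = v
		}
	}
	return item
}

func main() {
	// A partial container entry that only records the changed field...
	partial := map[string]interface{}{"imagePullPolicy": "IfNotPresent"}
	// ...re-keyed with the value recovered from its field path (name=nginx).
	fmt.Println(ensureListItemKeys(partial, map[string]interface{}{"name": "nginx"}))
	// map[imagePullPolicy:IfNotPresent name:nginx]
}
```
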
JM (Jason Meridth)
0371401803 chore(deps): upgrade go version in dockerfile (#638)
- [x] fix warnings about case of `as` to `AS` in Dockerfile
  - `FromAsCasing: 'as' and 'FROM' keywords' casing do not match (line 1)`
- [x] shorten go version in go.mod
- [x] update Dockerfile Go version from 1.17 to 1.22 to match go.mod
- [x] upgrade alpine/git image version to latest; the current one was 4 years old
  - from alpine/git:v2.24.3 (4 years old) to alpine/git:v2.45.2
- [x] fix warning with linting
  - `WARN [config_reader] The configuration option 'run.skip-files' is deprecated, please use 'issues.exclude-files'`
- [x] add .tool-versions (asdf) to .gitignore

Signed-off-by: jmeridth <jmeridth@gmail.com>
2024-11-26 14:45:57 -05:00
Andrii Korotkov
88c35a9acf chore(deps): Upgrade structured-merge-diff from v4.4.1 to v4.4.3 (#637)
Adding updates from the last year; let's see if that fixed some bugs

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
2024-11-17 21:19:19 -05:00
Pasha Kostohrys
847cfc9f8b fix: Ability to disable Server Side Apply on individual resource level (#634)
* fix: Ability to disable Server Side Apply on individual resource level

Signed-off-by: pashakostohrys <pavel@codefresh.io>

* fix: Ability to disable Server Side Apply on individual resource level

Signed-off-by: pashakostohrys <pavel@codefresh.io>

---------

Signed-off-by: pashakostohrys <pavel@codefresh.io>
2024-11-07 16:58:28 +02:00
Siddhesh Ghadi
9ab0b2ecae feat: Add ability to hide certain annotations on secret resources (#577)
* Add option to hide annotations on secrets

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Handle err

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Move hide logic to a generic func

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Remove test code

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Address review comments

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Handle lastAppliedConfig special case

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Fix if logic and remove comments

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

---------

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
2024-10-29 12:29:52 +02:00
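
A minimal sketch of the hiding logic described above: the values of selected annotations on Secret resources are replaced with a placeholder before the object is shown in a diff. The placeholder string and function shape are assumptions, not the gitops-engine API:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

const redacted = "++++++++"

// hideSecretAnnotations masks the values of the given annotation keys on
// Secret resources so sensitive values stored in annotations are not exposed.
func hideSecretAnnotations(obj *unstructured.Unstructured, keys map[string]bool) {
	if obj.GetKind() != "Secret" {
		return
	}
	annotations := obj.GetAnnotations()
	for k := range annotations {
		if keys[k] {
			annotations[k] = redacted
		}
	}
	obj.SetAnnotations(annotations)
}

func main() {
	s := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Secret",
		"metadata": map[string]interface{}{
			"name":        "demo",
			"annotations": map[string]interface{}{"vault-token": "s.abc123"},
		},
	}}
	hideSecretAnnotations(s, map[string]bool{"vault-token": true})
	fmt.Println(s.GetAnnotations()) // map[vault-token:++++++++]
}
```
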
Alexander Matyushentsev
09e5225f84 feat: application resource deletion protection (#630)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2024-10-23 06:44:23 -07:00
Michael Crenshaw
72bcdda3f0 chore: avoid unnecessary json marshal (#626)
* chore: avoid unnecessary json marshal

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* more tests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* refactor test to satisfy sonarcloud

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-09-17 13:19:20 -04:00
Michael Crenshaw
df9b446fd7 chore: avoid unnecessary json unmarshal (#627)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-09-16 16:42:18 -04:00
Michael Crenshaw
3d9aab3cdc chore: speed up resolveResourceReferences (#625)
* chore: speed up resolveResourceReferences

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* revert unnecessary changes

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-09-16 16:41:37 -04:00
Anand Francis Joseph
bd7681ae3f Added support for impersonation in the kubectl (#534)
Signed-off-by: anandf <anjoseph@redhat.com>
2024-09-04 21:08:10 -04:00
Michael Crenshaw
95e00254f8 chore: bump k8s libraries to 1.31 (#619)
* bump latest kubernetes version

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* upgrade Go version to resolve dependencies

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* fix: ci

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* use Go1.22.3

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* update from 1.29.2 to 1.30.1

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* update go.sum

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* upgrade golangci-lint

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* bump to 0.30.2

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove unnecessary replace

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* latest patch

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* chore: bump k8s libraries from 1.30 to 1.31

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: sivchari <shibuuuu5@gmail.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: sivchari <shibuuuu5@gmail.com>
2024-08-23 17:30:48 -04:00
sivchari
099cba69bd chore: bump kubernetes version to 0.30.x (#579)
* bump latest kubernetes version

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* upgrade Go version to resolve dependencies

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* fix: ci

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* use Go1.22.3

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* update from 1.29.2 to 1.30.1

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* update go.sum

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* upgrade golangci-lint

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* bump to 0.30.2

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove unnecessary replace

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* latest patch

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove unnecessary toolchain line

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: sivchari <shibuuuu5@gmail.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-23 11:57:20 -04:00
Andrii Korotkov
6b2984ebc4 feat: More optimal IterateHierarchyV2 and iterateChildrenV2 [#600] (#601)
* chore: More optimal IterateHierarchyV2 and iterateChildrenV2 [#600]

Closes #600

The existing (effectively v1) implementations are suboptimal since they don't construct a graph before the iteration. They search for children by looking at all namespace resources and checking `isParentOf`, which can give `O(tree_size * namespace_resources_count)` time complexity. The v2 algorithms construct the graph and have `O(namespace_resources_count)` time complexity. See more details in the linked issues.

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* improvements to graph building

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* use old name

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* chore: More optimal IterateHierarchyV2 and iterateChildrenV2 [#600]

Closes #600

The existing (effectively v1) implementations are suboptimal since they don't construct a graph before the iteration. They search for children by looking at all namespace resources and checking `isParentOf`, which can give `O(tree_size * namespace_resources_count)` time complexity. The v2 algorithms construct the graph and have `O(namespace_resources_count)` time complexity. See more details in the linked issues.

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* finish merge

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* chore: More optimal IterateHierarchyV2 and iterateChildrenV2 [#600]

Closes #600

The existing (effectively v1) implementations are suboptimal since they don't construct a graph before the iteration. They search for children by looking at all namespace resources and checking `isParentOf`, which can give `O(tree_size * namespace_resources_count)` time complexity. The v2 algorithms construct the graph and have `O(namespace_resources_count)` time complexity. See more details in the linked issues.

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* discard unneeded copies of child resources as we go

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove unnecessary comment

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* make childrenByUID sparse

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* eliminate duplicate map

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fix comment

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* add useful comment back

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* use nsNodes instead of dupe map

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove unused struct

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* skip invalid APIVersion

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-18 13:53:51 -04:00
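
A small Go sketch of the v2 approach described in the entry above: index children by owner UID once, then walk the hierarchy, instead of rescanning all namespace resources per node. Types are illustrative; cycle protection and the real ResourceKey/ownerRef handling are omitted:

```go
package main

import "fmt"

type resource struct {
	uid       string
	name      string
	ownerUIDs []string
}

// iterateHierarchy builds a children-by-owner-UID index once (O(resources)),
// then walks the tree from the root, rather than checking isParentOf against
// every namespace resource for every node.
func iterateHierarchy(root resource, all []resource, visit func(resource)) {
	childrenByOwner := make(map[string][]resource, len(all))
	for _, r := range all {
		for _, owner := range r.ownerUIDs {
			childrenByOwner[owner] = append(childrenByOwner[owner], r)
		}
	}
	var walk func(resource)
	walk = func(r resource) {
		visit(r)
		for _, child := range childrenByOwner[r.uid] {
			walk(child)
		}
	}
	walk(root)
}

func main() {
	deploy := resource{uid: "1", name: "Deployment/web"}
	rs := resource{uid: "2", name: "ReplicaSet/web-abc", ownerUIDs: []string{"1"}}
	pod := resource{uid: "3", name: "Pod/web-abc-xyz", ownerUIDs: []string{"2"}}
	iterateHierarchy(deploy, []resource{deploy, rs, pod}, func(r resource) { fmt.Println(r.name) })
}
```
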
Michael Crenshaw
7d150d0b6b chore: more docstrings (#606)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-17 23:20:55 -04:00
Andy Goldstein
adb68bcaab fix(clusterCache): don't miss finding live obj if obj is cluster-scoped and namespacedResources is in transition (#597)
* sync.Reconcile: guard against incomplete discovery

When Reconcile performs its logic to compare the desired state (target
objects) against the actual state (live objects), it looks up each live
object based on a key comprised of data from the target object: API
group, API kind, namespace, and name. While group, kind, and name will
always be accurate, there is a chance that the value for namespace is
not. If a cluster-scoped target object has a namespace (because it
incorrectly has a namespace from its source) or the namespace parameter
passed into the Reconcile method has a non-empty value (indicating a
default value to use on namespace-scoped objects that don't have it set
in the source), AND the resInfo ResourceInfoProvider has incomplete or
missing API discovery data, the call to IsNamespacedOrUnknown will
return true when the information is unknown. This leads to the key being
incorrect - it will have a value for namespace when it shouldn't. As a
result, indexing into liveObjByKey will fail. This failure results in
the reconciliation containing incorrect data: there will be a nil entry
appended to targetObjs when there shouldn't be.

Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>

* Address code review comments

Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>

---------

Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
2024-07-14 11:31:47 -04:00
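
A simplified sketch of the key construction described above, showing how an unknown resource scope has to be handled explicitly instead of being treated as namespaced. Names and types are illustrative, not the gitops-engine code:

```go
package main

import "fmt"

type scope int

const (
	clusterScoped scope = iota
	namespaceScoped
	unknownScope
)

type resourceKey struct{ group, kind, namespace, name string }

// liveKey builds the lookup key for a live object. When discovery is
// incomplete and the scope is unknown, guessing "namespaced" would produce a
// key with a namespace the live object doesn't have, so the caller is told to
// guard instead.
func liveKey(group, kind, name, targetNs, defaultNs string, s scope) (resourceKey, bool) {
	key := resourceKey{group: group, kind: kind, name: name}
	switch s {
	case namespaceScoped:
		key.namespace = targetNs
		if key.namespace == "" {
			key.namespace = defaultNs
		}
	case clusterScoped:
		// never attach a namespace, even if the manifest mistakenly set one
	case unknownScope:
		return resourceKey{}, false // incomplete discovery: don't guess
	}
	return key, true
}

func main() {
	key, ok := liveKey("rbac.authorization.k8s.io", "ClusterRole", "admin", "default", "default", clusterScoped)
	fmt.Println(key, ok)
	_, ok = liveKey("example.com", "Widget", "w1", "", "default", unknownScope)
	fmt.Println(ok) // false
}
```
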
Alexandre Gaudreault
a0c23b4210 fix: deadlock on start missing watches (#604)
* fix: deadlock on start missing watches

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* revert error

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* add unit test to validate some deadlock scenarios

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* test name

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* clarify comment

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

---------

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2024-07-12 13:36:05 -04:00
Michael Crenshaw
a22b34675f fix: deduplicate OpenAPI definitions for GVKParser (#587) (#590)
* fix: deduplicate OpenAPI definitions for GVKParser

* do the thing that was the whole point

* more logs

* don't uniquify models

* schema for both

* more logs

* fix logic

* better tainted gvk handling, better docs, update mocks

* add a test

* improvements from comments

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-01 08:36:22 -04:00
Michael Crenshaw
fa0e8d60a3 chore: update static scheme (#588)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-28 11:55:02 -04:00
Michael Crenshaw
f38075deb3 fix: deduplicate OpenAPI definitions for GVKParser (#587)
* fix: deduplicate OpenAPI definitions for GVKParser

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* do the thing that was the whole point

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* more logs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* don't uniquify models

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* schema for both

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* more logs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fix logic

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* better tainted gvk handling, better docs, update mocks

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* add a test

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* improvements from comments

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-28 10:55:06 -04:00
Paul Gier
4386ff4b8d chore: remove duplicate scheme import (#580)
Signed-off-by: Paul Gier <paul.gier@datastax.com>
2024-06-25 14:54:38 -04:00
Paul Gier
0be58f261a fix: printing gvkparser error message (#585)
The log.Info function doesn't understand format directives, so use key/value pairs to print the error message.

Signed-off-by: Paul Gier <paul.gier@datastax.com>
2024-06-25 14:53:55 -04:00
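
For context, logr-style loggers (which the Kubernetes libraries use) take a constant message plus key/value pairs rather than printf directives. A tiny example of the broken versus fixed call shape (the message text here is made up):

```go
package main

import (
	"errors"

	"k8s.io/klog/v2"
)

func main() {
	log := klog.Background() // a logr.Logger
	err := errors.New("duplicate GVK definition")

	// Broken: logr's Info does not understand format directives, so this would
	// log the literal string "gvk parser error: %s" with a dangling argument:
	//   log.Info("gvk parser error: %s", err)

	// Fixed: pass the error as a key/value pair instead.
	log.Info("gvk parser error", "error", err)
}
```
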
Michael Crenshaw
83ce6ca8ce chore(deps): bump k8s libs from 0.29.2 to 0.29.6 (#586)
* chore(deps): bump k8s libs from 0.29.2 to 0.29.6

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* oops

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-15 14:59:36 -04:00
Alexander Matyushentsev
1f371a01cf fix: replace k8s.io/endpointslice to v0.29.2 (#583)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2024-06-10 02:05:20 -07:00
Alexander Matyushentsev
a9fd001c11 fix: replace k8s.io/endpointslice version (#582)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2024-06-09 23:18:18 -07:00
hanzala1234
8a3ce6d85c Update condition to select right pvc as child for statefulset (#550)
* Update if condition to select right pvc as child for statefulset

Signed-off-by: hanzala <muhammad.hanzala@waltlabs.io>

* fix indentation

Signed-off-by: hanzala <muhammad.hanzala@waltlabs.io>

* test(cache): Add tests for isStatefulSetChild function

* test(pkg/cache): Replace JSON unmarshalling with structured approach in tests

---------

Signed-off-by: hanzala <muhammad.hanzala@waltlabs.io>
Co-authored-by: hanzala <muhammad.hanzala@waltlabs.io>
Co-authored-by: Obinna Odirionye <odirionye@gmail.com>
2024-05-14 22:01:00 +03:00
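
A sketch of the naming rule behind the fix above: a StatefulSet's PVCs are named `<claim-template>-<statefulset>-<ordinal>`, so the child check needs to match that pattern rather than a loose prefix. The function name and inputs are illustrative:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// isStatefulSetChildPVC reports whether the PVC name matches
// "<template-name>-<sts-name>-<ordinal>" for one of the StatefulSet's
// volumeClaimTemplates; a plain prefix check could wrongly claim PVCs that
// belong to a similarly named StatefulSet.
func isStatefulSetChildPVC(pvcName, stsName string, claimTemplateNames []string) bool {
	for _, tmpl := range claimTemplateNames {
		prefix := tmpl + "-" + stsName + "-"
		if !strings.HasPrefix(pvcName, prefix) {
			continue
		}
		// the remainder must be the replica ordinal
		if _, err := strconv.Atoi(strings.TrimPrefix(pvcName, prefix)); err == nil {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isStatefulSetChildPVC("data-web-0", "web", []string{"data"}))       // true
	fmt.Println(isStatefulSetChildPVC("data-web-extra-0", "web", []string{"data"})) // false
}
```
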
Leonardo Luz Almeida
0aecd43903 fix: handle nil ParseableType from GVKParser (#574)
* fix: handle nil ParseableType from GVKParser

Signed-off-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>

* address review comments

Signed-off-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>

---------

Signed-off-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>
2024-05-09 13:07:15 -04:00
sivchari
86a368824c chore: Bump Kubernetes clients to 1.29.2 (#566)
* update k8s libs

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* fix: golangci-lint

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* fix nil map

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* add deletion field

Signed-off-by: sivchari <shibuuuu5@gmail.com>

---------

Signed-off-by: sivchari <shibuuuu5@gmail.com>
2024-05-07 16:25:58 -04:00
Kota Kimura
fbecbb86e4 feat: sync-options annotation with Force=true (#414) (#560)
Signed-off-by: kkk777-7 <kota.kimura0725@gmail.com>
2024-04-16 17:26:47 +03:00
Jonathan West
1ade3a1998 fix: fix temporary files written to '/dev/shm' not cleaned up (#568) (#569)
Signed-off-by: Jonathan West <jonwest@redhat.com>
2024-04-11 08:23:34 -04:00
Michael Crenshaw
3de313666b chore: more logging for CRD updates (#554)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-07 21:10:09 -04:00
Siddhesh Ghadi
5fd9f449e7 feat: Prune resources in reverse of sync wave order (#538)
* Prune resources in reverse of sync wave order

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Use waveOverride var instead of directly patching live obj

Directly patching live objs results in incorrect wave ordering,
as the new wave value from the live obj is used to perform reordering during the next sync.

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

---------

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
2024-01-24 00:27:10 -05:00
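
A small sketch of the ordering change described above: prune order is computed from an override wave kept next to the task, rather than by patching the wave annotation on the live object. Types are illustrative:

```go
package main

import (
	"fmt"
	"sort"
)

type pruneTask struct {
	name         string
	wave         int  // wave from the live object's annotation
	waveOverride *int // override used only for prune ordering, never written back
}

func (t pruneTask) effectiveWave() int {
	if t.waveOverride != nil {
		return *t.waveOverride
	}
	return t.wave
}

// sortForPrune orders tasks by descending wave, i.e. the reverse of the order
// in which the resources were applied.
func sortForPrune(tasks []pruneTask) {
	sort.SliceStable(tasks, func(i, j int) bool {
		return tasks[i].effectiveWave() > tasks[j].effectiveWave()
	})
}

func main() {
	override := 5
	tasks := []pruneTask{
		{name: "db", wave: 0},
		{name: "app", wave: 1},
		{name: "job", wave: 2, waveOverride: &override},
	}
	sortForPrune(tasks)
	for _, t := range tasks {
		fmt.Println(t.name, t.effectiveWave())
	}
	// job 5, app 1, db 0 — pruned in reverse of the order they were created
}
```
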
Anand Francis Joseph
792124280f fix(server): Dry run always in client mode just for yaml manifest validation even with server side apply (#564)
* Revert "feat: retry with client side dry run if server one was failed (#548)"

This reverts commit c0c2dd1f6f.

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Revert "fix(server): use server side dry run in case if it is server side apply (#546)"

This reverts commit 4a5648ee41.

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Fixed the logic to disable server side apply if it is a dry run

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Added more values in the log message for better debugging

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Fixed compilation error

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Wrote an inline fn to get the string value of the dry-run strategy

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Added comment as requested with reference to the issue number

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

---------

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
Co-authored-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>
2024-01-22 16:30:38 -05:00
Leonardo Luz Almeida
c1e23597e7 fix: address kubectl auth reconcile during server-side diff (#562)
* fix: address kubectl auth reconcile during server-side diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* server-side diff force conflict

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* do not ssa when ssd rbac

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* debug

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* better logs

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* remove debug

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* add comments

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* better comments

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* refactoring on rbacReconcile

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

---------

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2024-01-22 09:58:03 -05:00
Leonardo Luz Almeida
aba38192fb feat: Implement Server-Side Diffs (#522)
* feat: Implement Server-Side Diffs

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* trigger build

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* chore: remove unused function

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* make HasAnnotationOption more generic

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* add server-side-diff printer option

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* remove managedFields during server-side-diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* add ignore mutation webhook logic

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix configSet

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix comparison

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* merge typedconfig in typedpredictedlive

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* handle webhook diff conflicts

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix webhook normalization logic

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* address review comments 1/2

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* address review comments 2/2

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix lint

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* remove kubectl getter from cluster-cache

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix query param verifier instantiation

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* Add server-side-diff unit tests

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

---------

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2023-12-18 14:45:13 -05:00
pasha-codefresh
c0c2dd1f6f feat: retry with client side dry run if server one was failed (#548)
* feat: retry with client side dry run if server one was failed

Signed-off-by: pashakostohrys <pavel@codefresh.io>

* feat: retry with client side dry run if server one was failed

Signed-off-by: pashakostohrys <pavel@codefresh.io>

* feat: retry with client side dry run if server one was failed

Signed-off-by: pashakostohrys <pavel@codefresh.io>

---------

Signed-off-by: pashakostohrys <pavel@codefresh.io>
2023-11-02 11:40:24 -04:00
pasha-codefresh
4a5648ee41 fix(server): use server side dry run in case if it is server side apply (#546)
* fix: use server side dry run in case if it is server side apply

Signed-off-by: pashakostohrys <pavel@codefresh.io>

* fix: use server side dry run in case if it is server side apply

Signed-off-by: pashakostohrys <pavel@codefresh.io>

---------

Signed-off-by: pashakostohrys <pavel@codefresh.io>
2023-10-31 10:22:05 -04:00
fsl
f15cf615b8 chore(deps): upgrade k8s version and client-go (#530)
Signed-off-by: fengshunli <1171313930@qq.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2023-10-13 14:38:58 -04:00
Jesse Suen
9a03edb8e7 fix: remove lock acquisition in ClusterCache.GetAPIResources() (#543)
Signed-off-by: Jesse Suen <jesse@akuity.io>
2023-10-12 09:58:44 -07:00
Michael Crenshaw
a00ce82f1c chore: log cluster sync error (#541)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2023-09-29 16:35:05 -04:00
gdsoumya
b0fffe419a fix: resolve deadlock (#539)
Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>
2023-09-06 08:24:14 -07:00
gdsoumya
187312fe86 feat: auto respect rbac for discovery/sync (#532)
* feat: respect rbac for resource exclusions

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>

* feat: use list call to check for permissions

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>

* feat: updated implementation to handle different levels of rbac check

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>

* feat: fixed linter error

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>

* feat: resolve review comments

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>

---------

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>
2023-08-11 13:20:59 -07:00
pasha-codefresh
ed7c77a929 feat: Apply out of sync option only (#533)
Signed-off-by: pashakostohrys <pavel@codefresh.io>
2023-08-09 09:45:34 -04:00
fsl
425d65e076 fix: update golangci-lint ci (#529)
Signed-off-by: fengshunli <1171313930@qq.com>
2023-06-07 12:30:28 -04:00
fsl
b58645a27c fix: remove deprecated ioutil (#528)
Signed-off-by: fengshunli <1171313930@qq.com>
2023-06-07 12:20:24 -04:00
ls0f
c0ffe8428a manage clusters via proxy (#466)
Signed-off-by: ls0f <lovedboy.tk@qq.com>
2023-05-31 13:15:21 -07:00
reggie-k
e56739ceba feat: add CreateResource to kubectl (#12174 and #4116) (#516)
* separating kubectl and resource ops mocks

Signed-off-by: reggie <reginakagan@gmail.com>

* separating kubectl and resource ops mocks

Signed-off-by: reggie <reginakagan@gmail.com>

* separating kubectl and resource ops mocks

Signed-off-by: reggie <reginakagan@gmail.com>

* server dry-run for MockKubectlCmd

Signed-off-by: reggie <reginakagan@gmail.com>

* server dry-run for MockKubectlCmd

Signed-off-by: reggie <reginakagan@gmail.com>

* server dry-run for MockKubectlCmd

Signed-off-by: reggie <reginakagan@gmail.com>

* mock create noop

Signed-off-by: reggie <reginakagan@gmail.com>

* ctl create resource with createOptions

Signed-off-by: reggie <reginakagan@gmail.com>

---------

Signed-off-by: reggie <reginakagan@gmail.com>
2023-05-27 13:48:09 -04:00
Blake Pettersson
ad9a694fe4 fix: do not replace namespaces (#524)
When doing `kubectl replace`, namespaces should not be affected. Fixes
argoproj/argo-cd#12810 and argoproj/argo-cd#12539.

Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2023-05-26 16:32:14 -07:00
Alexander Matyushentsev
b4dd8b8c39 fix: avoid acquiring lock on mutex and semaphore at the same time to prevent deadlock (#521)
* fix: avoid acquiring lock on mutex and semaphore at the same time to prevent deadlock

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

* apply reviewer notes

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

---------

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2023-05-11 19:08:22 -07:00
Soumya Ghosh Dastidar
ed70eac8b7 feat: add sync delete option (#507)
Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>
2023-02-14 08:53:51 -08:00
asingh
917f5a0f16 fix: add suspended condition (#484)
Signed-off-by: ashutosh16 <11219262+ashutosh16@users.noreply.github.com>

fix: add suspended condition

Signed-off-by: ashutosh16 <11219262+ashutosh16@users.noreply.github.com>

Signed-off-by: ashutosh16 <11219262+ashutosh16@users.noreply.github.com>
2022-12-08 15:06:15 -08:00
Leonardo Luz Almeida
e284fd71cb fix: managed namespaces should not mutate the live state (#479)
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-11-08 16:05:51 -05:00
Blake Pettersson
b371e3bfc5 Update namespace v2 (#465)
* Revert "Revert "feat: Ability to create custom labels for namespaces created … (#455)"

This reverts commit ce2fb703a6.

Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>

* feat: enable namespace to be updated

Rename `WithNamespaceCreation` to `WithNamespaceModifier`, since this
method is also used for modifying existing namespaces. This method
takes a single argument for the actual updating, and unless this method
gets invoked by its caller no updating will take place (fulfilling what
the `createNamespace` argument used to do).

Within `autoCreateNamespace`, everywhere where we previously added tasks
we'll now need to check whether the namespace should be created (or
modified), which is now delegated to the `appendNsTask` and
`appendFailedNsTask` methods.

Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>

Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2022-11-03 15:29:13 -04:00
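
A sketch of the functional-option shape described in the entry above (names and signatures are illustrative, not the exact gitops-engine API): the modifier callback both mutates the namespace and decides whether it should be applied, replacing the old boolean `createNamespace` behavior:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

type syncOpts struct {
	// namespaceModifier receives the managed and live namespace objects and
	// returns whether the (possibly modified) namespace should be applied.
	namespaceModifier func(managed, live *unstructured.Unstructured) (bool, error)
}

type syncOpt func(*syncOpts)

func withNamespaceModifier(modifier func(managed, live *unstructured.Unstructured) (bool, error)) syncOpt {
	return func(o *syncOpts) { o.namespaceModifier = modifier }
}

func main() {
	var opts syncOpts
	withNamespaceModifier(func(managed, live *unstructured.Unstructured) (bool, error) {
		labels := managed.GetLabels()
		if labels == nil {
			labels = map[string]string{}
		}
		labels["team"] = "platform"
		managed.SetLabels(labels)
		return true, nil // true: apply the modified namespace
	})(&opts)

	ns := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Namespace",
		"metadata":   map[string]interface{}{"name": "demo"},
	}}
	ok, _ := opts.namespaceModifier(ns, nil)
	fmt.Println(ok, ns.GetLabels()) // true map[team:platform]
}
```
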
Jingchao Di
9664cf8123 feat: add profile feature for agent, and fix logr's panic (#444)
Signed-off-by: jingchao.djc <jingchao.djc@antfin.com>

Signed-off-by: jingchao.djc <jingchao.djc@antfin.com>
2022-10-06 08:31:10 -07:00
Leonardo Luz Almeida
98ccd3d43f fix: calculate SSA diffs with smd.merge.Updater (#467)
* fix: refactor ssa diff logic

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix: calculate ssa diff with smd.merge.Updater

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* chore: Add golangci config file

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix: remove wrong param passed to golanci-ghaction

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* doc: Add doc to the wrapper file

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* doc: Add instructions about how to extract the openapiv2 document from k8s

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* better wording

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* better code comments

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-10-04 09:23:20 -04:00
Leonardo Luz Almeida
3951079de1 fix: remove last-applied-configuration before diff in ssa (#460)
* fix: remove last-applied-configuration before diff in ssa

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix: add tests to validate expected behaviour

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-09-16 10:22:00 -04:00
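
A minimal sketch of the normalization above: strip the `kubectl.kubernetes.io/last-applied-configuration` annotation before computing the SSA diff. The helper name is illustrative:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

const lastAppliedAnnotation = "kubectl.kubernetes.io/last-applied-configuration"

// removeLastApplied drops the kubectl last-applied-configuration annotation,
// which embeds a full copy of the object and is pure noise in an SSA diff.
func removeLastApplied(obj *unstructured.Unstructured) {
	annotations := obj.GetAnnotations()
	if annotations == nil {
		return
	}
	delete(annotations, lastAppliedAnnotation)
	if len(annotations) == 0 {
		obj.SetAnnotations(nil) // avoid leaving an empty annotations map in the diff
		return
	}
	obj.SetAnnotations(annotations)
}

func main() {
	live := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "ConfigMap",
		"metadata": map[string]interface{}{
			"name":        "demo",
			"annotations": map[string]interface{}{lastAppliedAnnotation: "{...}"},
		},
	}}
	removeLastApplied(live)
	fmt.Println(live.GetAnnotations()) // map[]
}
```
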
Yujun Zhang
517c1fff4e chore: fix typo in log message (#445)
Signed-off-by: Yujun Zhang <yujunz@nvidia.com>

Signed-off-by: Yujun Zhang <yujunz@nvidia.com>
2022-09-01 20:40:44 +02:00
Leonardo Luz Almeida
c036d3f6b0 fix: sort fields to correctly calculate diff in server-side apply (#456)
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-08-29 14:50:54 +02:00
pasha-codefresh
ce2fb703a6 Revert "feat: Ability to create custom labels for namespaces created … (#455)
* Revert "feat: Ability to create custom labels for namespaces created with syncOptions CreateNamespace (#443)"

This reverts commit a56a803031.

* remove import

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* fix test

Signed-off-by: pashavictorovich <pavel@codefresh.io>

Signed-off-by: pashavictorovich <pavel@codefresh.io>
2022-08-23 14:00:03 -04:00
Leonardo Luz Almeida
9970faba81 Cherry-Pick Retry commit in master (#452)
* fix: retry on unauthorized error when retrieving resources by gvk (#449)

* fix: retry on unauthorized when retrieving resources by gvk

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* add test case to validate retry is only invoked if the error is Unauthorized

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix merge conflict

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix lint

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix lint

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-08-19 15:58:16 -04:00
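
A compact sketch of the retry behavior described above, using client-go's `retry.OnError` with `apierrors.IsUnauthorized` as the predicate so only Unauthorized failures are retried; the fetch function stands in for the real GVK resource lookup:

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/util/retry"
)

// getResourcesWithRetry retries the lookup only when the failure is an
// Unauthorized error (e.g. a momentarily stale bearer token); every other
// error is returned immediately.
func getResourcesWithRetry(fetch func() error) error {
	return retry.OnError(retry.DefaultBackoff, apierrors.IsUnauthorized, fetch)
}

func main() {
	attempts := 0
	err := getResourcesWithRetry(func() error {
		attempts++
		if attempts < 2 {
			return apierrors.NewUnauthorized("token expired")
		}
		return nil
	})
	fmt.Printf("succeeded after %d attempts, err=%v\n", attempts, err)
}
```
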
pasha-codefresh
a56a803031 feat: Ability to create custom labels for namespaces created with syncOptions CreateNamespace (#443)
* namespace labels hook

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* add tests

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* fix test

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* rename import

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* remove deep copy

Signed-off-by: pashavictorovich <pavel@codefresh.io>

Signed-off-by: pashavictorovich <pavel@codefresh.io>
2022-08-18 18:48:07 +02:00
jannfis
2bc3fef13e fix: Fix argument order in resource filter (#436)
Signed-off-by: jannfis <jann@mistrust.net>
2022-08-04 21:09:09 +02:00
dependabot[bot]
e03364f7dd chore(deps): bump actions/setup-go from 3.2.0 to 3.2.1 (#428)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3.2.0 to 3.2.1.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v3.2.0...v3.2.1)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-04 14:52:40 +02:00
dependabot[bot]
51a33e63f1 chore(deps): bump k8s.io/klog/v2 from 2.30.0 to 2.70.1 (#426)
Bumps [k8s.io/klog/v2](https://github.com/kubernetes/klog) from 2.30.0 to 2.70.1.
- [Release notes](https://github.com/kubernetes/klog/releases)
- [Changelog](https://github.com/kubernetes/klog/blob/main/RELEASE.md)
- [Commits](https://github.com/kubernetes/klog/compare/v2.30.0...v2.70.1)

---
updated-dependencies:
- dependency-name: k8s.io/klog/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-04 14:52:11 +02:00
dependabot[bot]
e5e3a1cf5c chore(deps): bump github.com/spf13/cobra from 1.2.1 to 1.5.0 (#420)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.2.1 to 1.5.0.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.2.1...v1.5.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-04 14:51:35 +02:00
dependabot[bot]
da6623b2e7 chore(deps): bump golangci/golangci-lint-action from 2 to 3.2.0 (#409)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 2 to 3.2.0.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v2...v3.2.0)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-04 14:50:28 +02:00
Rick
112657a1f9 chore(docs): typo fixing in agent README file (#351)
Signed-off-by: rick <1450685+LinuxSuRen@users.noreply.github.com>
2022-08-04 14:29:48 +02:00
dependabot[bot]
ab8fdc7dbd chore(deps): bump sigs.k8s.io/yaml from 1.2.0 to 1.3.0 (#339)
Bumps [sigs.k8s.io/yaml](https://github.com/kubernetes-sigs/yaml) from 1.2.0 to 1.3.0.
- [Release notes](https://github.com/kubernetes-sigs/yaml/releases)
- [Changelog](https://github.com/kubernetes-sigs/yaml/blob/master/RELEASE.md)
- [Commits](https://github.com/kubernetes-sigs/yaml/compare/v1.2.0...v1.3.0)

---
updated-dependencies:
- dependency-name: sigs.k8s.io/yaml
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-04 14:23:51 +02:00
dependabot[bot]
2d495813b7 chore(deps): bump codecov/codecov-action from 1.5.0 to 3.1.0 (#405)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1.5.0 to 3.1.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.5.0...v3.1.0)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-04 14:22:38 +02:00
Josh Soref
d8c17c206f spelling: less than (#434)
Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2022-08-04 14:21:42 +02:00
Leonardo Luz Almeida
6cde7989d5 fix: structured-merge diff apply default values in live resource (#435)
* fix: structured-merge diff apply default values in live resource

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* address review comments

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-08-03 10:57:58 -04:00
Leonardo Luz Almeida
1c4ef33687 feat: Add server-side apply manager config (#418)
* feat: Add server-side apply manager config

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Force conflicts when SSA

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Implement strategic-merge patch in diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Implement structured merge diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Implement structured merge in diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix parseable type conversion

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Handle structured merge diff for create/delete operations

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Use NormalizeUnionsApply instead of Merge for structured-merge diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* NormalizeUnions

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* merge first, then normalize union

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* calculate diff with fieldsets

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* extract managed fields

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* remove managed fields then merge

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Just remove fields if manager is found

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* remove config fieldset instead of using managed fields

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Structured merge diff with defaults

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* tests

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Normalize union at the end

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* test

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* test

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Remove fields after merging

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* test

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* test

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* apply defaults when building diff result

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix default func call

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix diff default

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix merged object

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* keep diff order

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* apply default with patch

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* handle ssa diffs with resource annotations

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* use managed fields to calculate diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Implement unit tests

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix bad merge

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* add test to validate service with multiple ports

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* resolveFromStaticParser optimization

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* try without reordering while patching default values

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-08-02 14:48:09 -04:00
Daniel Urgell
da6681916f fix: Change wrong log level in cluster.go openAPISchema, gvkParser (#430)
* Update cluster.go

Fixes argoproj/gitops-engine#423

Signed-off-by: Daniel Urgell <urgell.d@gmail.com>

* Use c.log.Info instead of c.log.Warning

Signed-off-by: Daniel Urgell <urgell.d@gmail.com>

* Changed c.log.Info format to fix type string in argument

Signed-off-by: Daniel Urgell <urgell.d@gmail.com>

Co-authored-by: Daniel Urgell <daniel@bluelabs.eu>
2022-08-02 13:10:19 -04:00
Alexander Matyushentsev
67ddccd3cc chore: upgrade k8s client to v0.24.2 (#427)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2022-07-12 16:42:57 -07:00
jannfis
ed31317b27 fix: Only consider resources which support the appropriate verb for any given operation (#423)
* fix: Only consider resources which support the appropriate verb for any given operation

Signed-off-by: jannfis <jann@mistrust.net>

* Fix unit tests

Signed-off-by: jannfis <jann@mistrust.net>

* Return MethodNotSupported and add some tests

Signed-off-by: jannfis <jann@mistrust.net>
2022-07-06 20:25:44 +02:00
Leonardo Luz Almeida
f9456de217 feat: add GvkParser in cluster cache (#404)
* feat: add GvkParser in cluster cache

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-05-30 14:59:22 -07:00
Florent Monbillard
7493226dda feat: Add support for HPA v2 (autoscaling/v2) (#411)
Signed-off-by: Florent Monbillard <f.monbillard@gmail.com>
2022-05-30 14:58:55 -07:00
dependabot[bot]
4f069a220a chore(deps): bump actions/setup-go from 2.1.4 to 3.2.0 (#412)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 2.1.4 to 3.2.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v2.1.4...v3.2.0)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-30 14:57:54 -07:00
Michael Crenshaw
b855894da0 chore: test resource conversion (#403)
* chore: test resource conversion

Signed-off-by: Michael Crenshaw <michael_crenshaw@intuit.com>

* chore: add const to avoid code smell

Signed-off-by: Michael Crenshaw <michael_crenshaw@intuit.com>

Co-authored-by: Michael Crenshaw <michael_crenshaw@intuit.com>
2022-04-14 12:59:55 -04:00
Josh Soref
55bb49480a chore: Spelling (#215)
* spelling: account

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: aggregation

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: annotations

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: argocd

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: does-not

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: donot

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: do-not

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: implementers

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: individual

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: openshift

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: relate

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: requeuing

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: settings

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: worse

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: youtube

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

Co-authored-by: Josh Soref <jsoref@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <michael@crenshaw.dev>
2022-04-12 13:35:40 -04:00
Mathieu Parent
d8b1a12ce6 feat: add basic support for server-side apply (#363)
See https://github.com/argoproj/argo-cd/issues/2267

Signed-off-by: Mathieu Parent <math.parent@gmail.com>
2022-04-11 11:09:16 -04:00
Leonardo Luz Almeida
a586397dc3 chore: Fix go version during ci lint (#401)
* chore: Fix go version during ci lint

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix go version minor version only

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-04-11 10:23:49 -04:00
jannfis
73bcea9c8c fix: Set QPS and burst rate for resource ops client (#395)
Signed-off-by: jannfis <jann@mistrust.net>
2022-03-28 21:05:56 +02:00
Alexander Matyushentsev
723667dff7 feat: support exiting early from IterateHierarchy method (#388)
* feat: support exiting early from IterateHierarchy method

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

* reviewer notes: comment action callback return value; add missing return value check

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2022-03-15 17:06:47 -07:00
Yuan Tang
531c0dbb68 chore: Remove support for deprecated extensions APIs (#381)
* chore: Remove support for deprecated extensions APIs

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* Add back extensions Ingress

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2022-03-10 16:59:35 -08:00
Yuan Tang
61c0cc745e fix: Add missing IngressClass in kind order when syncing tasks (#380)
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2022-02-15 16:51:16 -08:00
Yuan Tang
553ae80972 chore: Bump Go to 1.17 (#379)
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2022-02-15 16:18:05 -08:00
Yuan Tang
c517b47f2f fix: ensureCRDReady check did not work for v1 CRDs (#378)
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2022-02-15 12:20:24 -08:00
jannfis
e360551b19 feat: Support for retries when building up cluster cache (#374)
* feat: Support for retries when building up cluster cache

Signed-off-by: jannfis <jann@mistrust.net>

* Oops.

Signed-off-by: jannfis <jann@mistrust.net>

* RetryLimit must be at least 1

Signed-off-by: jannfis <jann@mistrust.net>

* RetryLimit must be at least 1

Signed-off-by: jannfis <jann@mistrust.net>
2022-02-08 22:03:06 +01:00
Ben Ye
8aefb18433 feat: expose cluster sync retry timeout (#373)
Signed-off-by: Ben Ye <ben.ye@bytedance.com>
2022-02-08 13:01:49 -08:00
Rо́man
b0c5e00ccf fix: add default protocol to subset of ports if it is empty (#347)
Signed-off-by: Roman Rudenko <3kmnazapad@gmail.com>
2022-01-26 10:45:17 -08:00
Alexander Matyushentsev
36e77462ae fix: health check for HPA doesn't catch all good states (#369)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2022-01-24 10:01:50 -08:00
dependabot[bot]
5bbbdfbc69 chore(deps): bump github.com/go-logr/logr from 1.2.0 to 1.2.2 (#368)
Bumps [github.com/go-logr/logr](https://github.com/go-logr/logr) from 1.2.0 to 1.2.2.
- [Release notes](https://github.com/go-logr/logr/releases)
- [Changelog](https://github.com/go-logr/logr/blob/master/CHANGELOG.md)
- [Commits](https://github.com/go-logr/logr/compare/v1.2.0...v1.2.2)

---
updated-dependencies:
- dependency-name: github.com/go-logr/logr
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-01-24 09:54:04 -08:00
Ramon Rüttimann
b560016286 chore(deps): bump k8s.io/kubernetes from 1.22.2 to 1.23.1 (#365)
This commit bumps the k8s.io/kubernetes dependencies and all other
Kubernetes deps to 1.23.1. There were two breakages from the upgrade,
plus a Go version bump:
- The `apply.NewApplyOptions` in the kubectl libraries has been
  removed / refactored (see commit 8165f83007) and split up into
  `ApplyOptions` and `ApplyFlags`. This commit populates the
  `ApplyOptions` directly, as going through the `ApplyFlags` and then
  calling `ToOptions()` is almost impossible due to its arguments.
- The `github.com/go-logr/logr` package has had some breaking changes
  between the previous alpha release `v0.4.0` and its stable release
  `v1.0.0`. The generated mock has been updated to use `logr.LogSink`
  instead (`logr.Logger` is not an interface anymore), and the test code
  has been updated accordingly.
- Go has been updated to 1.17.6, as `sigs.k8s.io/json` depends on it.

Signed-off-by: Ramon Rüttimann <ramon@nine.ch>

apply reviewer notes: bump golang version; add missing ApplyOptions parameters

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2022-01-19 13:11:47 -08:00
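For context on the logr breakage mentioned in the commit above: since logr v1.0.0, `logr.Logger` is a concrete struct, and custom loggers implement the `logr.LogSink` interface and are wrapped with `logr.New`. The sketch below is purely illustrative (a toy sink, not the generated mock this commit actually updates):

```go
package main

import (
	"fmt"

	"github.com/go-logr/logr"
)

// printSink is a toy logr.LogSink implementation used only to show the v1 API shape.
type printSink struct {
	name string
}

func (s printSink) Init(info logr.RuntimeInfo) {}
func (s printSink) Enabled(level int) bool     { return true }
func (s printSink) Info(level int, msg string, kv ...interface{}) {
	fmt.Println("INFO", s.name, msg, kv)
}
func (s printSink) Error(err error, msg string, kv ...interface{}) {
	fmt.Println("ERROR", s.name, msg, err, kv)
}
func (s printSink) WithValues(kv ...interface{}) logr.LogSink { return s }
func (s printSink) WithName(name string) logr.LogSink {
	return printSink{name: s.name + "/" + name}
}

func main() {
	// Pre-v1 code could implement the old Logger interface directly;
	// with v1 a LogSink is wrapped into the Logger struct instead.
	log := logr.New(printSink{name: "root"})
	log.Info("cache synced", "resources", 42)
}
```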
Leonardo Luz Almeida
f6495020a3 fix: removeNamespaceAnnotation should not panic if annotation has unexpected value (#361)
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2021-12-22 11:16:05 -08:00
Jesse Suen
ae94ad9510 feat: configurable watch resync timeout. ability to disable cluster resync (#353)
Signed-off-by: Jesse Suen <jesse@akuity.io>
2021-11-22 20:49:13 -08:00
pasha-codefresh
c7bab2eeca feat: add support split yaml that return actual yamls (#346)
* add support split yaml that return actual yamls

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* change description

Signed-off-by: pashavictorovich <pavel@codefresh.io>
2021-11-03 15:01:10 -07:00
Leonardo Luz Almeida
c0b63afb74 fix: Address issue during diff when secret data is nil (#345)
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2021-11-02 12:00:06 -07:00
harikrongali
2565df31d1 fix: Add ScaleDownLimit as health state for HPA (#343)
Signed-off-by: hari rongali <hari_rongali@intuit.com>
2021-11-02 08:31:54 -07:00
dependabot[bot]
c8139b3f94 chore(deps): bump actions/setup-go from 2.1.3 to 2.1.4 (#319)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 2.1.3 to 2.1.4.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v2.1.3...v2.1.4)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-02 08:31:26 -07:00
Alexander Matyushentsev
27374da031 refactor: upgrade k8s client to v0.22.2 (#338)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-10-22 15:32:49 -07:00
Alexander Matyushentsev
762cb1bc26 feat: expose all kubernetes resources in cluster info (#337)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-10-22 10:32:54 -07:00
pasha-codefresh
bc9ce5764f feat: better error message for sync operations (#336)
Signed-off-by: pashavictorovich <pavel@codefresh.io>
2021-10-21 12:20:28 -07:00
Jesse Suen
e8d9803a2b fix: stop relying on imported types for ingress health check (#335) 2021-10-20 00:21:24 -07:00
Matteo Ruina
23f41cb849 fix: Add ScalingDisabled healthy state to HPA (#323)
Signed-off-by: Matteo Ruina <matteo.ruina@gmail.com>
2021-09-27 17:05:18 -07:00
Alexander Matyushentsev
33f542da00 fix: SyncOption Replace=True is broken (#321)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-09-01 16:54:33 -07:00
Alexander Matyushentsev
57ea690344 refactor: update resources install order according to helm implementation (#309)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-03 16:23:43 -07:00
Alexander Matyushentsev
a4c77d5c70 feat: support managing cluster resources in a namespaced mode (#297)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-07-08 17:49:06 -07:00
Alexander Matyushentsev
2c97a96cab fix: 'ResourceOperations.CreateResource' should use 'kubectl' package to properly execute create operation (#298)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-06-30 12:22:00 -07:00
Jesse Suen
7495c633c3 fix: workflows without status.phase should be considered Progressing (#296)
Signed-off-by: Jesse Suen <jessesuen@gmail.com>
2021-06-23 14:32:58 -07:00
Alexander Matyushentsev
b067bd7463 refactor: regenerate cluster cache mock (#291)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-06-17 10:59:36 -07:00
Alexander Matyushentsev
579ea1d764 refactor: using open api schema in cluster live state cache (#289)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-06-17 08:57:33 -07:00
Alexander Matyushentsev
5da9c7eea0 refactor: stop caching OpenAPISchema since it is cached by cmdutil.Factory (#287)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-06-15 11:55:29 -07:00
Alexander Matyushentsev
6884d330a0 refactor: add an API that returns built-in health assessment function (#285)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-06-09 17:02:33 -07:00
Alexander Matyushentsev
ddc92c9bdb feat: expose APIGroups in GetClusterInfo (#283)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-06-08 13:05:35 -07:00
dependabot[bot]
6911e599ae chore(deps): bump actions/cache from 2.1.5 to 2.1.6 (#282)
Bumps [actions/cache](https://github.com/actions/cache) from 2.1.5 to 2.1.6.
- [Release notes](https://github.com/actions/cache/releases)
- [Commits](https://github.com/actions/cache/compare/v2.1.5...v2.1.6)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-28 07:28:56 -07:00
Alexander Matyushentsev
411c8d0f1c refactor: cache cmdutil.Factory to improve synchronization performance (#281)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-05-27 09:42:22 -07:00
dependabot[bot]
f0c9d7e75e chore(deps): bump codecov/codecov-action from 1.4.1 to 1.5.0 (#274)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1.4.1 to 1.5.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.4.1...v1.5.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-26 20:32:37 -07:00
dependabot[bot]
515974410e chore(deps): bump actions/cache from 2.1.4 to 2.1.5 (#275)
Bumps [actions/cache](https://github.com/actions/cache) from 2.1.4 to 2.1.5.
- [Release notes](https://github.com/actions/cache/releases)
- [Commits](https://github.com/actions/cache/compare/v2.1.4...v2.1.5)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-25 17:49:23 -07:00
Mikhail Mazurskiy
3f38eee773 fix: deadlock in listener (#271)
Signed-off-by: Mikhail Mazurskiy <mmazurskiy@gitlab.com>
2021-05-02 20:16:07 -07:00
Mikhail Mazurskiy
46d1496140 refactor: Kubernetes 1.21 libraries (#266)
Signed-off-by: Mikhail Mazurskiy <mmazurskiy@gitlab.com>
2021-04-21 14:34:57 -07:00
dependabot[bot]
73f3e7f01a chore(deps): bump k8s.io/klog/v2 from 2.4.0 to 2.8.0 (#239)
Bumps [k8s.io/klog/v2](https://github.com/kubernetes/klog) from 2.4.0 to 2.8.0.
- [Release notes](https://github.com/kubernetes/klog/releases)
- [Changelog](https://github.com/kubernetes/klog/blob/master/RELEASE.md)
- [Commits](https://github.com/kubernetes/klog/compare/v2.4.0...v2.8.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-20 14:29:57 -07:00
dependabot[bot]
0f8652d4e7 chore(deps): bump github.com/golang/mock from 1.4.4 to 1.5.0 (#241)
Bumps [github.com/golang/mock](https://github.com/golang/mock) from 1.4.4 to 1.5.0.
- [Release notes](https://github.com/golang/mock/releases)
- [Changelog](https://github.com/golang/mock/blob/master/.goreleaser.yml)
- [Commits](https://github.com/golang/mock/compare/v1.4.4...v1.5.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-20 11:28:52 -07:00
dependabot[bot]
ba03b48543 chore(deps): bump actions/cache from v2 to v2.1.4 (#216)
Bumps [actions/cache](https://github.com/actions/cache) from v2 to v2.1.4.
- [Release notes](https://github.com/actions/cache/releases)
- [Commits](https://github.com/actions/cache/compare/v2...26968a09c0ea4f3e233fdddbafd1166051a095f6)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-20 11:03:38 -07:00
dependabot[bot]
09186f3d4f chore(deps): bump github.com/stretchr/testify from 1.6.1 to 1.7.0 (#208)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.6.1 to 1.7.0.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.6.1...v1.7.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-20 11:03:19 -07:00
Mikhail Mazurskiy
b3254f88f4 fix: missing resource version in tests (#256)
FakeDynamicClient returns typed lists (not unstructured lists) since Kubernetes 1.20. The type cast now handles that.

Signed-off-by: Mikhail Mazurskiy <mmazurskiy@gitlab.com>
2021-04-20 10:55:25 -07:00
dependabot[bot]
9e414998c8 chore(deps): bump codecov/codecov-action from v1.2.1 to v1.4.1 (#260)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1.2.1 to v1.4.1.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.2.1...967e2b38a85a62bd61be5529ada27ebc109948c2)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-20 10:54:49 -07:00
Mikhail Mazurskiy
11e322186b refactor: reduce usage of k8s.io/kubernetes packages (#258)
Signed-off-by: Mikhail Mazurskiy <mmazurskiy@gitlab.com>
2021-04-20 10:54:02 -07:00
Alexander Matyushentsev
8e19104276 chore: switch CI to golangci/golangci-lint-action@v2 (#259)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-04-20 10:45:22 -07:00
Kshama Jain
2a9c1448b2 fix: applyoutofsync with dry-run (#253)
Signed-off-by: kshamajain99 <kshamajain99@gmail.com>
2021-04-05 15:15:16 -07:00
Alexander Matyushentsev
1ce2acc845 feat: support replace strategy for CRD (#252)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-03-25 14:51:06 -07:00
Kshama Jain
3c778a5431 fix: register new CRD to apigroups (#247)
Signed-off-by: kshamajain99 <kshamajain99@gmail.com>
2021-03-25 14:41:24 -07:00
Kshama Jain
46073c1cd6 fix: applyOutOfSyncOnly should work with sync waves as well (#251)
Signed-off-by: kshamajain99 <kshamajain99@gmail.com>
2021-03-25 14:03:18 -07:00
Shoubhik Bose
c1332abf89 docs: remove out-of-date sections (#248)
Signed-off-by: Shoubhik Bose <shbose@redhat.com>
2021-03-23 12:05:54 -07:00
Alexander Matyushentsev
89ddd0dffb feat: support 'Replace=true' sync option (#246)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-03-17 21:35:18 -07:00
kshamajain99
a9f11fade3 fix: sync option applyOutOfSync (#245)
Signed-off-by: kshamajain99 <kshamajain99@gmail.com>
2021-03-17 14:49:00 -07:00
May Zhang
478f8cb207 fix: file extension comparisons are case sensitive (#243)
Signed-off-by: May Zhang <may_zhang@intuit.com>
2021-03-16 17:02:46 -07:00
Ed Lee
38db8bb691 Update OWNERS (#237) 2021-03-15 15:39:03 -07:00
Mike Bryant
5d680d6b80 fix: Add additional healthy states for HPA (#234)
If an HPA has a minimum or maximum replica count set and the metrics indicate a need to scale beyond those limits, this is still an expected and healthy state.

Signed-off-by: Mike Bryant <mikebryant@bulb.co.uk>
2021-03-15 14:52:47 -07:00
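A rough sketch of the idea behind the HPA fix above, assuming a typed autoscaling/v1 object for brevity; the real health check in gitops-engine operates on unstructured objects and more status conditions, so treat this only as an approximation of the reasoning:

```go
package main

import (
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
)

// hpaCappedButHealthy reports whether an HPA that wants to scale beyond its
// configured bounds should still be treated as healthy: hitting
// minReplicas/maxReplicas is an expected, configured state.
func hpaCappedButHealthy(hpa *autoscalingv1.HorizontalPodAutoscaler) bool {
	desired := hpa.Status.DesiredReplicas
	current := hpa.Status.CurrentReplicas

	atMax := desired >= hpa.Spec.MaxReplicas
	atMin := hpa.Spec.MinReplicas != nil && desired <= *hpa.Spec.MinReplicas

	// If the controller has converged on the capped replica count, the HPA is
	// doing exactly what it was configured to do.
	return (atMax || atMin) && current == desired
}

func main() {
	min := int32(2)
	hpa := &autoscalingv1.HorizontalPodAutoscaler{}
	hpa.Spec.MinReplicas = &min
	hpa.Spec.MaxReplicas = 5
	hpa.Status.DesiredReplicas = 5
	hpa.Status.CurrentReplicas = 5
	fmt.Println(hpaCappedButHealthy(hpa)) // true: capped at maxReplicas but healthy
}
```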
Shoubhik Bose
928245881d chore: Use v0.20.4 kube dependencies (#238)
Signed-off-by: Shoubhik Bose <shbose@redhat.com>
2021-03-15 14:50:58 -07:00
May Zhang
380f7be5bf fix: Dry run stuck on pre sync hook (#236)
* fix: Dry run stuck on pre sync hook

Signed-off-by: May Zhang <may_zhang@intuit.com>

* fix: Dry run stuck on pre sync hook

Signed-off-by: May Zhang <may_zhang@intuit.com>
2021-03-12 11:14:01 -08:00
Alexander Matyushentsev
89cb483bbb feat: support resource prune propagation policy (#235)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-03-12 08:36:15 +01:00
Ishita Sequeira
aae8ded161 feat: added cascade option to DeleteResource - argo-cd #5368 (#220)
* feat: added cascade option to DeleteResource - argo-cd #5368

Signed-off-by: ishitasequeira <isequeir@redhat.com>

* added a comment re-trigger the unit test

Signed-off-by: ishitasequeira <isequeir@redhat.com>

* feat: updated delete option logic in DeleteResource

Signed-off-by: ishitasequeira <isequeir@redhat.com>
2021-02-22 08:29:27 +01:00
Alexander Matyushentsev
354817a103 fix: sync should apply Namespaces and CRDs before resources that depend on them (#225)
* fix: sync should apply Namespaces and CRDs before resources that depend on them

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-02-18 15:30:04 -08:00
kshamajain99
c5b7114c50 selective sync (#213)
Signed-off-by: kshamajain99 <kshamajain99@gmail.com>
2021-01-29 10:37:11 -08:00
Jonathan West
814d79df49 fix: Data race between gitops-engine's pkg/cache/cluster.go and itself, on Argo CD startup (#4627) (#168)
Signed-off-by: Jonathan West <jonwest@redhat.com>
2021-01-12 12:43:06 -08:00
Alexander Matyushentsev
0b4199b001 feat: add FindResources method that allows to find any resource in cache (#204)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-01-07 16:00:20 -08:00
dependabot[bot]
bb076f0a89 chore(deps): bump codecov/codecov-action from v1.2.0 to v1.2.1 (#202)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1.2.0 to v1.2.1.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.2.0...e156083f13aff6830c92fc5faa23505779fbf649)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-01-06 09:52:46 -08:00
May Zhang
82f0935363 feat: prune last (#203)
* fix: issue 5080 to allow resource pruning at the final wave of the sync phase

Signed-off-by: May Zhang <may_zhang@intuit.com>

* fix: Setting pruneLast tasks to lastWave + 1

Signed-off-by: May Zhang <may_zhang@intuit.com>
2021-01-06 09:45:10 -08:00
dependabot[bot]
ee1772e1dc chore(deps): bump codecov/codecov-action from v1.1.1 to v1.2.0 (#200)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1.1.1 to v1.2.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.1.1...a92c414703a4bba586f6df7fcc885c9d0bdff772)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-01-05 20:18:01 -08:00
Mikhail Mazurskiy
32c6afc4a7 refactor: Kubernetes v1.20.1 (#195)
Signed-off-by: Mikhail Mazurskiy <mmazurskiy@gitlab.com>
2021-01-05 19:35:02 -08:00
dependabot[bot]
dac837751e chore(deps): bump codecov/codecov-action from v1.1.0 to v1.1.1 (#199)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1.1.0 to v1.1.1.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.1.0...1fc7722ded4708880a5aea49f2bfafb9336f0c8d)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-12-22 11:29:15 -08:00
dependabot[bot]
209882714e chore(deps): bump codecov/codecov-action from v1.0.13 to v1.1.0 (#194)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1.0.13 to v1.1.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.0.13...7de43a7373de21874ae196a78f8eb633fcf7f0c4)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-12-18 09:16:01 -08:00
May Zhang
53cbe5f6be fix: HPA health check is making incorrect assumption on annotations (#190)
* fix: HPA health check is making incorrect assumption on annotations

Signed-off-by: May Zhang <may_zhang@intuit.com>

* fix: added handle of v1, v2beta1 and v2beta2

Signed-off-by: May Zhang <may_zhang@intuit.com>

* fix: clean up test data file

Signed-off-by: May Zhang <may_zhang@intuit.com>
2020-12-14 14:05:17 -08:00
May Zhang
0bc2f8c395 fix: panic: interface conversion: interface {} is string (#189)
* panic: interface conversion: interface {} is string

Signed-off-by: May Zhang <may_zhang@intuit.com>
2020-11-23 11:38:46 -08:00
Jesse Suen
069a5e64fb fix: WithInitialState should require start time to support generateName hooks properly (#183)
Signed-off-by: Jesse Suen <jessesuen@gmail.com>
2020-11-13 00:46:16 -08:00
yutachaos
760fcb68be fix: Current time is not set in startedAt in NewSyncContext (#180)
Signed-off-by: yutachaos <18604471+yutachaos@users.noreply.github.com>
2020-11-06 14:44:53 -08:00
Mikhail Mazurskiy
9a6cf9d611 docs: add myself as reviewer (#179)
Signed-off-by: Mikhail Mazurskiy <mmazurskiy@gitlab.com>
2020-11-04 20:41:19 -08:00
Jesse Suen
cfdefa46b2 feat: introduce SyncWaveHook callbacks invoked after applying each sync wave (#177)
Signed-off-by: Jesse Suen <jesse_suen@intuit.com>
2020-10-30 12:46:27 -07:00
dependabot[bot]
eb76c93f0a chore(deps): bump github.com/go-logr/logr from 0.2.0 to 0.2.1 (#173)
Bumps [github.com/go-logr/logr](https://github.com/go-logr/logr) from 0.2.0 to 0.2.1.
- [Release notes](https://github.com/go-logr/logr/releases)
- [Commits](https://github.com/go-logr/logr/compare/v0.2.0...v0.2.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-10-27 15:41:48 -07:00
Jesse Suen
e344629d42 refactor: allow PatchResource to accept subresource (#175) 2020-10-27 15:41:26 -07:00
Mikhail Mazurskiy
31311943a5 refactor: use github.com/go-logr/logr for logging (#162) 2020-10-26 17:14:56 -07:00
Alexander Matyushentsev
4eb3ca3fee feat: Namespace/CRD creation should happen before PreSync phase (#159) 2020-10-14 11:37:19 -07:00
Jonathan West
872c470033 fix: Detect unknown fields in invalid specs as OutOfSync (#154) 2020-10-14 11:34:19 -07:00
William Tam
a1dc4c598b fix: sort endpoint IP addresses before diffing (#160) 2020-10-13 16:53:40 -07:00
Gordon Honda
8d99997db0 fix: Set TLSServerName in NewKubeConfig (#156) 2020-10-13 13:04:59 -07:00
dependabot[bot]
646ff039d2 chore(deps): bump actions/setup-go from v2.1.2 to v2.1.3 (#152)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from v2.1.2 to v2.1.3.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v2.1.2...37335c7bb261b353407cff977110895fa0b4f7d8)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-10-13 12:51:50 -07:00
Mikhail Mazurskiy
cc707ccc94 chore: Update to Kubernetes v1.19.2 (#139) 2020-10-13 12:17:22 -07:00
Mikhail Mazurskiy
ef7702fe86 refactor: cleanups (#142) 2020-10-13 12:14:22 -07:00
May Zhang
54bbebf593 fix: Hook Deletion Policies HookSucceeded should be run after whole H… (#144)
* fix: Hook Deletion Policies HookSucceeded should be run after the whole Hook succeeds and not only the Resource succeeds

* fix: handle HookFailed.

* fix: fixing lint error.
2020-10-09 13:41:49 -07:00
Keith Chong
3a3f6a33d7 fix: Child apps should not affect parent app's health by default (#153)
Signed-off-by: Keith Chong <kykchong@redhat.com>
2020-10-02 20:19:53 -07:00
Alexander Matyushentsev
d25b8fd69f fix: avoid memory and events spike after forcefully refreshing api cache (#145) 2020-09-25 14:59:03 -07:00
John Pitman
828dc7574b fix: check resource namespaces are managed (#143) 2020-09-25 08:57:27 -07:00
May Zhang
7171d62f8c fix: Support transition from a git managed namespace to auto-create n… (#141)
* fix: Support transition from a git managed namespace to auto-create namespace

* fix: Support transition from a git managed namespace to auto-create namespace

* fix: use sync task to remove label

* fix: withNamespaceCreation

* fix: fix failed test

* fix: remove obsolete comment.
2020-09-22 15:59:33 -07:00
Mikhail Mazurskiy
8d05efd2df refactor: Return error from context (#140) 2020-09-18 10:50:06 -07:00
Alexander Matyushentsev
c04f859da9 fix: correctly infer ownership references from PVC to StatefulSet (#138) 2020-09-04 09:44:17 -07:00
May Zhang
21b78bd366 fix: health status for daemonset with onDelete updateStrategy (#137)
* fix: health status is set to healthy for statefulset with updateStrategy: OnDelete

* fix: updated message

* fix: added test

* fix: health status for daemon set with OnDelete updateStrategy
2020-09-03 10:13:21 -07:00
May Zhang
c9bb0095d3 fix: health status is set to healthy for statefulset with updateStrat… (#136)
* fix: health status is set to healthy for statefulset with updateStrategy: OnDelete

* fix: updated message

* fix: added test
2020-09-01 14:49:03 -07:00
dependabot[bot]
4d6f2988b2 chore(deps): bump codecov/codecov-action from v1.0.12 to v1.0.13 (#120)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1.0.12 to v1.0.13.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.0.12...6004246f47ab62d32be025ce173b241cd84ac58e)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-08-28 23:17:29 -07:00
Alexander Matyushentsev
8472746916 refactor: ensure list semaphore is released after response is fully processed (#135) 2020-08-28 20:48:24 -07:00
Mikhail Mazurskiy
e024377862 refactor: cleanup printing and logging (#124)
- print using test logger
- print using fmt - println use is discouraged
2020-08-28 13:30:33 -07:00
dependabot[bot]
7026950e0d chore(deps): bump actions/setup-go from v2.1.1 to v2.1.2 (#114)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from v2.1.1 to v2.1.2.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v2.1.1...8a3a76c2171de8c3be20bec507b6d829ccae48ba)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-08-28 13:28:21 -07:00
Alexander Matyushentsev
8617aa1a7e fix: leverage RetryWatcher to watch cluster events and introduce periodic K8S API state resynchronization (#133)
* fix: leverage RetryWatcher to watch cluster events and introduce periodic K8S API state resynchronization

* Apply reviewer notes

* enable race detection in tests
2020-08-27 17:33:06 -07:00
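A hedged illustration of the RetryWatcher pattern adopted in the commit above, using `k8s.io/client-go/tools/watch`; the ConfigMap resource, the "default" namespace, and the function name are placeholders for illustration, not what gitops-engine actually watches:

```go
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
	watchtools "k8s.io/client-go/tools/watch"
)

// watchConfigMaps watches ConfigMaps starting from resourceVersion.
// RetryWatcher re-establishes the watch from the last seen resourceVersion
// on transient failures instead of silently stopping.
func watchConfigMaps(ctx context.Context, config *rest.Config, resourceVersion string) error {
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return err
	}
	rw, err := watchtools.NewRetryWatcher(resourceVersion, &cache.ListWatch{
		WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
			return clientset.CoreV1().ConfigMaps("default").Watch(ctx, options)
		},
	})
	if err != nil {
		return err
	}
	defer rw.Stop()

	for event := range rw.ResultChan() {
		fmt.Println("event:", event.Type)
	}
	return nil
}
```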
dependabot[bot]
51a45c0835 chore(deps): bump github.com/evanphx/json-patch (#125)
Bumps [github.com/evanphx/json-patch](https://github.com/evanphx/json-patch) from 4.2.0+incompatible to 4.9.0+incompatible.
- [Release notes](https://github.com/evanphx/json-patch/releases)
- [Commits](https://github.com/evanphx/json-patch/compare/v4.2.0...v4.9.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-08-27 17:32:37 -07:00
Mikhail Mazurskiy
2545f6c175 fix: data race fixes and cleanups (#122)
* Fix data race on err variable

==================
WARNING: DATA RACE
Write at 0x00c000621c30 by goroutine 29:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /gitops-engine/pkg/cache/cluster.go:548 +0x3a8
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /gitops-engine/pkg/cache/cluster.go:479 +0x2cc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/mikhail/go/pkg/mod/golang.org/x/sync@v0.0.0-20190911185100-cd5d95a43a6e/errgroup/errgroup.go:57 +0x85

Previous write at 0x00c000621c30 by goroutine 28:
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1.1()
      /gitops-engine/pkg/cache/cluster.go:548 +0x3a8
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).processApi()
      /gitops-engine/pkg/cache/cluster.go:479 +0x2cc
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync.func1()
      /gitops-engine/pkg/cache/cluster.go:536 +0x430
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync.func1()
      /gitops-engine/pkg/utils/kube/ctl.go:510 +0x45
  golang.org/x/sync/errgroup.(*Group).Go.func1()
      /Users/mikhail/go/pkg/mod/golang.org/x/sync@v0.0.0-20190911185100-cd5d95a43a6e/errgroup/errgroup.go:57 +0x85

Goroutine 29 (running) created at:
  golang.org/x/sync/errgroup.(*Group).Go()
      /Users/mikhail/go/pkg/mod/golang.org/x/sync@v0.0.0-20190911185100-cd5d95a43a6e/errgroup/errgroup.go:54 +0x73
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync()
      /gitops-engine/pkg/utils/kube/ctl.go:509 +0x1a1
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync()
      /gitops-engine/pkg/cache/cluster.go:526 +0x98b
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).EnsureSynced()
      /gitops-engine/pkg/cache/cluster.go:592 +0xe9
  github.com/argoproj/gitops-engine/pkg/cache.TestEnsureSynced()
      /gitops-engine/pkg/cache/cluster_test.go:140 +0xe2
  testing.tRunner()
      /usr/local/Cellar/go/1.14.3/libexec/src/testing/testing.go:991 +0x1eb

Goroutine 28 (finished) created at:
  golang.org/x/sync/errgroup.(*Group).Go()
      /Users/mikhail/go/pkg/mod/golang.org/x/sync@v0.0.0-20190911185100-cd5d95a43a6e/errgroup/errgroup.go:54 +0x73
  github.com/argoproj/gitops-engine/pkg/utils/kube.RunAllAsync()
      /gitops-engine/pkg/utils/kube/ctl.go:509 +0x1a1
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).sync()
      /gitops-engine/pkg/cache/cluster.go:526 +0x98b
  github.com/argoproj/gitops-engine/pkg/cache.(*clusterCache).EnsureSynced()
      /gitops-engine/pkg/cache/cluster.go:592 +0xe9
  github.com/argoproj/gitops-engine/pkg/cache.TestEnsureSynced()
      /gitops-engine/pkg/cache/cluster_test.go:140 +0xe2
  testing.tRunner()
      /usr/local/Cellar/go/1.14.3/libexec/src/testing/testing.go:991 +0x1eb
==================

* More type safety

Make runState a new type rather
than a type alias

* Fix data race on runState variable

==================
WARNING: DATA RACE

Write at 0x00c0000d0b40 by goroutine 83:
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks.func5.1()
      /gitops-engine/pkg/sync/sync_context.go:786 +0x68c

Previous write at 0x00c0000d0b40 by goroutine 84:
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks.func5.1()
      /gitops-engine/pkg/sync/sync_context.go:786 +0x68c

Goroutine 83 (running) created at:
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks.func5()
      /gitops-engine/pkg/sync/sync_context.go:778 +0x165
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks()
      /gitops-engine/pkg/sync/sync_context.go:807 +0xb32
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).Sync()
      /gitops-engine/pkg/sync/sync_context.go:265 +0x1b3d
  github.com/argoproj/gitops-engine/pkg/sync.TestSyncFailureHookWithFailedSync()
      /gitops-engine/pkg/sync/sync_context_test.go:532 +0x4e5
  testing.tRunner()
      /usr/local/Cellar/go/1.14.3/libexec/src/testing/testing.go:991 +0x1eb

Goroutine 84 (running) created at:
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks.func5()
      /gitops-engine/pkg/sync/sync_context.go:778 +0x165
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).runTasks()
      /gitops-engine/pkg/sync/sync_context.go:807 +0xb32
  github.com/argoproj/gitops-engine/pkg/sync.(*syncContext).Sync()
      /gitops-engine/pkg/sync/sync_context.go:265 +0x1b3d
  github.com/argoproj/gitops-engine/pkg/sync.TestSyncFailureHookWithFailedSync()
      /gitops-engine/pkg/sync/sync_context_test.go:532 +0x4e5
  testing.tRunner()
      /usr/local/Cellar/go/1.14.3/libexec/src/testing/testing.go:991 +0x1eb
==================

* Simplify
2020-08-27 12:33:25 -07:00
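The two races fixed in the commit above share the same shape: multiple goroutines writing an unsynchronized shared variable (`err`, `runState`). Below is a minimal sketch of the general fix with errgroup, where each goroutine returns its own error instead of assigning to a shared one; the function and variable names are illustrative, not the engine's:

```go
package example

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// racy shows the problematic pattern: every goroutine assigns to the same
// shared err variable, which the race detector flags as a data race.
func racy(ctx context.Context, jobs []func(context.Context) error) error {
	var err error
	done := make(chan struct{})
	for _, job := range jobs {
		go func(job func(context.Context) error) {
			defer func() { done <- struct{}{} }()
			if e := job(ctx); e != nil {
				err = e // DATA RACE: concurrent unsynchronized writes
			}
		}(job)
	}
	for range jobs {
		<-done
	}
	return err
}

// safe uses errgroup: each goroutine returns its error and the group
// collects them, so no shared variable is mutated concurrently.
func safe(ctx context.Context, jobs []func(context.Context) error) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, job := range jobs {
		job := job // capture loop variable
		g.Go(func() error {
			return job(ctx)
		})
	}
	return g.Wait()
}
```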
Alexander Matyushentsev
2cf3a72c65 fix: exclude creationTimestamp from diffing (#128) 2020-08-25 23:29:57 -07:00
Daniel Holbach
26efb17b22 docs: Update notes on Argo+Flux collaboration (#126)
* remove Flux mention on main README

* FAQ: explain backstory of Argo+Flux collaboration

* remove sentence about 'natural evolution'
2020-08-25 09:10:24 -07:00
Alexander Matyushentsev
0eede8069a fix: volumeClaimTemplates are out of sync incorrectly (#127) 2020-08-24 16:26:41 -07:00
Mikhail Mazurskiy
bf8e17f73f Update and trim dependencies (#123)
* chore: bump kubernetes deps

* refactor: remove dependency on pkg/errors

* refactor: remove indirect dependency version

* refactor: remove dependency on github.com/google/shlex
as it was only used in tests
2020-08-21 18:49:00 -07:00
Mikhail Mazurskiy
90979fe432 feat: use Kubernetes v1.18.6 libraries (#102)
* feat: use Kubernetes v1.18.6 libraries

* support switching between server and client side dry run mode

Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2020-08-04 19:13:02 -07:00
Alexander Matyushentsev
92a3433562 feat: detect PVC StatefulSet ownership (#112)
* feat: detect PVC StatefulSet ownership
2020-08-04 10:25:02 -07:00
Alexander Matyushentsev
afc2e64c70 refactor: remove dependency on github.com/argoproj/pkg package (#111) 2020-08-03 14:39:53 -07:00
May Zhang
f420bb91a1 fix: panic for data type conversion to *unstructured.Unstructured. (#109)
* fix: panic for data type conversion to *unstructured.Unstructured.

* fix: panic for data type conversion to *unstructured.Unstructured.
2020-08-03 11:17:06 -07:00
Mikhail Mazurskiy
605958d429 feat: improve memory consumption limiting (#100)
1. Instead of a global semaphore, use a per-cache semaphore. This removes
thread-safety issues and allows fine-grained control over limiting when
multiple caches are used in a program.

2. Use the semaphore to guard whole sections that use expensive list
operations, not just the list API call. This ensures that memory usage
is capped, not just the number of list operations.

3. Allow controlling the list pager. Reduce the default prefetch limit
from 10 pages to 1.

Co-authored-by: Alexander Matyushentsev <Alexander_Matyushentsev@intuit.com>
2020-07-30 11:08:15 -07:00
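A hedged sketch of the per-cache semaphore idea described in the commit above, using `golang.org/x/sync/semaphore` to guard the entire list-and-process section rather than only the list call; the type, field, and method names are made up for illustration:

```go
package example

import (
	"context"

	"golang.org/x/sync/semaphore"
)

// clusterCache is a toy stand-in for the real cache type; each cache owns its
// own semaphore, so limits no longer need a shared global.
type clusterCache struct {
	listSemaphore *semaphore.Weighted
}

func newClusterCache(maxConcurrentLists int64) *clusterCache {
	return &clusterCache{listSemaphore: semaphore.NewWeighted(maxConcurrentLists)}
}

// listAndProcess holds the semaphore for the entire expensive section, so the
// memory used by fetched pages is what gets capped, not just the API call.
func (c *clusterCache) listAndProcess(ctx context.Context, list func() ([]string, error), process func(string)) error {
	if err := c.listSemaphore.Acquire(ctx, 1); err != nil {
		return err
	}
	defer c.listSemaphore.Release(1)

	items, err := list()
	if err != nil {
		return err
	}
	for _, item := range items {
		process(item)
	}
	return nil
}
```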
Alexander Matyushentsev
11d47a6215 feat: support configuring cluster cache re-sync timeout (#107) 2020-07-27 14:47:32 -07:00
dependabot[bot]
607629ec6a chore(deps): bump actions/setup-go from v2.1.0 to v2.1.1 (#104)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from v2.1.0 to v2.1.1.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v2.1.0...d0c5defdf364f1d1fb07530c000084836192af9c)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-07-27 14:22:06 -07:00
dependabot[bot]
618250fc48 chore(deps): bump codecov/codecov-action from v1.0.10 to v1.0.12 (#106)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1.0.10 to v1.0.12.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.0.10...07127fde53bc3ccd346d47ab2f14c390161ad108)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-07-27 14:21:20 -07:00
May Zhang
dcb86f7d6b fix: Namespace auto-creation (#105)
* feat: added prehook for creating ns

* feat: added prehook for creating ns
Initial Draft

* feat: added prehook for creating ns

* feat: added prehook for creating ns

* feat: added prehook for creating ns checking the health of namespace created.

* feat: added prehook for creating ns. checking if ns existed already.

* feat: getSyncTasks returns the same list each time. added checking if resources contains ns already.

* feat: move const variable to right location; added additional checking if namespace is included in resources.

* feat: fixing compile issue.

* feat: moved code closer together.

* feat: adding test cases.

* feat: auto create only for sc.namespace

* feat: fix failed test

* feat: update livObj

* feat: added error handling

* feat: added error handling

* feat: move into its own function

* feat: fixing compile error

* fix: auto namespace creation

* fix: simplify sort method

* fix: remove sorting of namespace
2020-07-27 10:36:50 -07:00
Mikhail Mazurskiy
3e620057f4 Handlers cleanups (#101)
* refactor: remove global variable

handlerKey does not need to be global; a field is just fine.
Atomic access is no longer needed because the field is protected
by a mutex.

* fix: use the correct mutex

* refactor: pre-allocate slice
2020-07-27 09:32:02 -07:00
Mikhail Mazurskiy
cc0fb5531c refactor: improve func signature (#103) 2020-07-21 20:12:20 -07:00
Mikhail Mazurskiy
b576959416 fix: data race on ctxCompleted (#86)
ctxCompleted is read and written from different
goroutines, which is a data race.

This change also makes the sleep interruptible.
2020-07-21 08:50:03 -07:00
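A hedged sketch of the "interruptible sleep" part of this fix: instead of a plain `time.Sleep` plus a separately tracked completion flag, select on the context so the wait ends as soon as the context is cancelled (the function name is illustrative):

```go
package example

import (
	"context"
	"time"
)

// sleepOrDone waits for the given duration but returns early (with the
// context's error) if the context is cancelled, avoiding both the blocking
// sleep and the need for a separately tracked "completed" flag.
func sleepOrDone(ctx context.Context, d time.Duration) error {
	timer := time.NewTimer(d)
	defer timer.Stop()
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-timer.C:
		return nil
	}
}
```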
May Zhang
3c545080c9 feat: create namespace (#94)
* feat: added prehook for creating ns

* feat: added prehook for creating ns
Initial Draft

* feat: added prehook for creating ns

* feat: added prehook for creating ns

* feat: added prehook for creating ns checking the health of namespace created.

* feat: added prehook for creating ns. checking if ns existed already.

* feat: getSyncTasks returns the same list each time. added checking if resources contains ns already.

* feat: move const variable to right location; added additional checking if namespace is included in resources.

* feat: fixing compile issue.

* feat: moved code closer together.

* feat: adding test cases.

* feat: auto create only for sc.namespace

* feat: fix failed test

* feat: update livObj

* feat: added error handling

* feat: added error handling

* feat: move into its own function

* feat: fixing compile error
2020-07-20 09:45:07 -07:00
Mikhail Mazurskiy
59a09cd918 fix: improve manifest parsing (#97) 2020-07-20 09:32:44 -07:00
dependabot[bot]
ee5c440fc4 chore(deps): bump github.com/stretchr/testify from 1.4.0 to 1.6.1 (#77)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.4.0 to 1.6.1.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.4.0...v1.6.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-07-15 22:05:00 -07:00
Mikhail Mazurskiy
42e1b44849 fix: use streaming YAML decoder from Kubernetes (#85)
apimachinery provides library code to parse multi-document YAML;
there is no need to resort to regexes.
2020-07-15 15:56:56 -07:00
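A hedged example of the apimachinery-based approach referenced above: the streaming YAML reader splits multi-document manifests natively, so no regex splitting is needed. Decoding each document into `unstructured.Unstructured` is just one convenient target; the function name is illustrative:

```go
package example

import (
	"bufio"
	"bytes"
	"errors"
	"io"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	kubeyaml "k8s.io/apimachinery/pkg/util/yaml"
	"sigs.k8s.io/yaml"
)

// splitYAML reads a multi-document YAML manifest with the streaming reader
// from apimachinery and unmarshals each non-empty document separately.
func splitYAML(manifest []byte) ([]*unstructured.Unstructured, error) {
	reader := kubeyaml.NewYAMLReader(bufio.NewReader(bytes.NewReader(manifest)))
	var objs []*unstructured.Unstructured
	for {
		doc, err := reader.Read()
		if errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			return nil, err
		}
		if len(bytes.TrimSpace(doc)) == 0 {
			continue // skip empty documents such as a trailing "---"
		}
		obj := &unstructured.Unstructured{}
		if err := yaml.Unmarshal(doc, obj); err != nil {
			return nil, err
		}
		objs = append(objs, obj)
	}
	return objs, nil
}
```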
dependabot[bot]
933f7cb71e chore(deps): bump actions/cache from v1 to v2 (#71)
Bumps [actions/cache](https://github.com/actions/cache) from v1 to v2.
- [Release notes](https://github.com/actions/cache/releases)
- [Commits](https://github.com/actions/cache/compare/v1...b8204782bbb5f872091ecc5eb9cb7d004e35b1fa)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-07-15 15:32:19 -07:00
dependabot[bot]
d12976c298 chore(deps): bump codecov/codecov-action from v1 to v1.0.10 (#72)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1 to v1.0.10.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Commits](https://github.com/codecov/codecov-action/compare/v1...f3570723ef743f6942b6a480461ed0cd6c0f9baa)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-07-15 15:31:48 -07:00
dependabot[bot]
817d0e48c3 chore(deps): bump actions/setup-go from v1 to v2.1.0 (#73)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from v1 to v2.1.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v1...1616116e1b39417f86ba049745f1a8946d4d00e7)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-07-15 14:50:36 -07:00
dependabot[bot]
db76222068 chore(deps): bump github.com/spf13/cobra from 0.0.5 to 0.0.7 (#78)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 0.0.5 to 0.0.7.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Changelog](https://github.com/spf13/cobra/blob/master/CHANGELOG.md)
- [Commits](https://github.com/spf13/cobra/compare/0.0.5...0.0.7)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-07-15 14:50:01 -07:00
Alexander Matyushentsev
c5bf3a2484 fix: sync hooks should be deleted after sync phase/wave completion (#92) 2020-07-15 13:13:24 -07:00
Alexander Matyushentsev
ee1db0902c fix: add missing scheme install imports (#89) 2020-07-14 09:32:49 -07:00
Alexander Matyushentsev
eb210cf6d1 fix: diffing should not fail if known kubernetes resource has invalid fields (#88) 2020-07-13 10:21:25 -07:00
Hiroki Sakamoto
bfac3f53c9 feat: add creation timestamp (#87) 2020-07-12 18:49:22 +02:00
Alexander Matyushentsev
344b1bcfc2 feat: improve sync operation messages (#84) 2020-07-09 15:01:52 -07:00
Alexander Matyushentsev
6fe0c0050b fix: diff should perform server side like apply during resource diffing (#82) 2020-07-09 13:44:23 -07:00
May Zhang
b18f378548 fix for nil while sync (#83) 2020-07-09 09:19:12 -07:00
sullis
97d4a75189 chore: enable Dependabot v2 (#67)
https://github.blog/2020-06-01-keep-all-your-packages-up-to-date-with-dependabot/
2020-07-06 16:43:13 -07:00
Darshan Chaudhary
a45189fc91 chore: bump k8s.io dependencies to 1.17 (#70)
Signed-off-by: darshanime <deathbullet@gmail.com>
2020-07-06 16:18:52 -07:00
Alexander Matyushentsev
56e31f3668 fix: change failed conversion log level to debug (#69) 2020-07-02 10:26:12 -07:00
Alexander Matyushentsev
c23d4d77d8 fix: don't remove defaulted fields and rely only on three way diff merge during diffing (#68)
* fix: don't remove defaulted fields and rely only on three way diff merge during diffing
2020-06-30 14:33:07 -07:00
Alexander Matyushentsev
ce9616ad10 refactor: IterateHierarchy method should use read lock (#65) 2020-06-24 11:48:52 -07:00
Mikhail Mazurskiy
7d3da9f16e Replace ghodss/yaml with sigs.k8s.io/yaml (#62)
The Kubernetes community uses a permanent fork:
https://github.com/kubernetes-sigs/yaml
2020-06-23 10:27:53 -07:00
Mikhail Mazurskiy
c820482a8b Drop github.com/grpc-ecosystem/grpc-gateway dep (#63)
gitops-engine does not use the JSONMarshaler
type and pulling in a dependency for it
is not great. Let's remove it.
2020-06-23 09:26:38 -07:00
Darshan Chaudhary
6657adfcfd fix: Check for err == nil before Fatal (#61) 2020-06-20 13:25:36 +02:00
Hidehito Yabuuchi
fb2ec13845 Fix markdown links in README.md (#59) 2020-06-15 14:49:04 -07:00
Mikhail Mazurskiy
16598d5148 Use UpdateSettingsFunc type (#58) 2020-06-10 21:32:03 -07:00
Mikhail Mazurskiy
5622a64808 Reduce k8s.io/kubernetes usage (#57) 2020-06-10 09:09:27 -07:00
Darshan Chaudhary
16575f1834 refactor: add methods in errors package to quit with exit codes (#49)
* add methods in errors package to quit with exit codes

Signed-off-by: darshanime <deathbullet@gmail.com>

* use CheckErrorWithCode to exit with appropriate error code

Signed-off-by: darshanime <deathbullet@gmail.com>
2020-06-09 17:34:14 +02:00
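A hedged sketch of what helpers like the `CheckErrorWithCode` mentioned above might look like; the exact signatures and behavior of the gitops-engine errors package may differ:

```go
package errors

import (
	"fmt"
	"os"
)

// CheckErrorWithCode prints the error and exits with the given code when err
// is non-nil. Illustrative only; the real package may differ in details.
func CheckErrorWithCode(err error, exitCode int) {
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(exitCode)
	}
}

// CheckError is the common convenience wrapper exiting with code 1.
func CheckError(err error) {
	CheckErrorWithCode(err, 1)
}
```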
Alexander Matyushentsev
e8cfb83132 feat: support limiting number of concurrent k8s list queries (#55)
* feat: support limiting number of concurrent k8s list queries
2020-06-08 12:58:05 -07:00
Alexander Matyushentsev
1723191dde refactor: deprecates obsolete Diff field (#54) 2020-06-07 20:22:37 -07:00
Shunsuke Suzuki
93cf3c532b fix: Lock to write MockKubectlCmd.LastValidate to fix the race condition (#52)
* fix: lock to write MockKubectlCmd.LastValidate to fix the race condition
2020-06-05 08:58:42 -07:00
Alexander Matyushentsev
c3b39b3a7e docs: clarify project description (#53)
* docs: clarify project description
2020-06-03 09:15:31 -07:00
Mikhail Mazurskiy
4bd4f29670 Fix data race on err variable (#46)
Named return variable was potentially
mutated from multiple goroutines. Use
the library to do this safely.
2020-06-01 10:11:18 -07:00
Alexander Matyushentsev
58cd2c3897 fix: stop using deleted method 'SetPopulateResourceInfoHandler'; regenerate cluster cache mock; (#47) 2020-06-01 18:48:52 +02:00
Alexander Matyushentsev
ff6e9f8532 feat: support resource pruning in gitops-agent (#45) 2020-06-01 18:10:59 +02:00
Alexander Matyushentsev
df17961bbd docs: document 'top level' packages (#44)
* docs: document cache package

* docs: document diff package

* docs: document health package

* docs: document sync package

* docs: document engine package

* docs: document sync options
2020-06-01 17:54:33 +02:00
Alexander Matyushentsev
99bd42d9a3 fix: update packages structure (#42) 2020-05-28 17:18:31 -07:00
Alexander Matyushentsev
cd2e16da1a feat: implement gitops-agent (#37)
* feat: implement gitops-agent
2020-05-27 09:53:21 -07:00
Alexander Matyushentsev
7500c4faa4 chore: add OWNERS file (#41) 2020-05-24 17:06:28 -07:00
Alexander Matyushentsev
a702089057 chore: regenerate cluster cache mocks (#40) 2020-05-20 10:27:19 -07:00
Alexander Matyushentsev
625651c65d docs: add releasing.md file (#35) 2020-05-20 10:15:44 -07:00
jannfis
6f7cd4fca5 chore: Specify correct module dependency for kube-aggregator (#39) 2020-05-19 11:56:22 -07:00
Alexander Matyushentsev
f11f15b3d5 fix: remove optional parameter from NewClusterCache function (#36) 2020-05-18 17:24:02 -07:00
Alexander Matyushentsev
70ab73ca32 fix: fix nil pointer dereference in cluster caching (#34) 2020-05-18 16:21:48 -07:00
Alexander Matyushentsev
916375861e feat: cluster cache should expose synchronization error (#32) 2020-05-18 10:27:19 -07:00
Alexander Matyushentsev
8430dc06d7 Move argocd core to gitops engine repo 2020-05-15 13:01:24 -07:00
Alexander Matyushentsev
2257d233ab Apply reviewer notes 2020-01-16 13:27:01 -08:00
Alexander Matyushentsev
aedd5f533e Add docs that describes requirements for a repo auto-update and docker registry scanning components 2020-01-16 13:27:01 -08:00
Daniel Holbach
21a4cdc1c5 no current events planned 2019-12-09 08:58:48 -08:00
Jay Pipes
be3abd2a12 Merge pull request #23 from argoproj/add-faq
seed FAQ document with content from the Slack AMA session
2019-11-29 14:03:32 -05:00
Daniel Holbach
d3f2ce3ad7 addressed review feedback from Jay 2019-11-28 16:32:01 +01:00
Daniel Holbach
5946820661 seed FAQ document with content from the Slack AMA session
closes: #21
2019-11-28 14:16:20 +01:00
Jay Pipes
051b7c6d9d Merge pull request #22 from argoproj/update-events
Update events section
2019-11-25 13:12:02 -05:00
Daniel Holbach
4815faf272 Update events section
- Slack AMA is over
- Two WOUG sessions coming up
- Small link fix
2019-11-25 19:04:21 +01:00
Daniel Holbach
7220635f97 Merge pull request #20 from dholbach/add-logo
Add "argo + flux" logo
2019-11-20 12:58:06 +01:00
Daniel Holbach
f39140cf0a Add "argo + flux" logo
For the announcement we used the "argo + flux" logo and while
the project does not have its own logo yet, I think it's only
fitting to add it, to add some colour to our GitHub and make it
instantly clear which two communities came together here.

relates to: #17
2019-11-20 10:05:02 +01:00
Daniel Holbach
a8b4283616 Merge pull request #19 from argoproj/fix-slack-invite
Fix Slack invite: https://slack.k8s.io/
2019-11-20 09:24:09 +01:00
Daniel Holbach
c08fd189d2 fix second slack-invite link 2019-11-18 12:06:29 +01:00
Daniel Holbach
278472de09 Fix Slack invite: https://slack.k8s.io/
closes: #18
2019-11-16 15:08:09 +01:00
Alexander Matyushentsev
777c0ff629 Initial commit 2019-11-14 11:50:35 -08:00
Jay Pipes
7d29c700ca Merge pull request #15 from argoproj/add-meeting-times
document meeting times
2019-11-14 08:44:43 -05:00
Daniel Holbach
b8a7d7bfad document youtube playlist too 2019-11-14 10:54:55 +01:00
Daniel Holbach
9d32a603b8 document meeting times
closes: #5
2019-11-14 10:50:29 +01:00
Daniel Holbach
3c2b7ebbde Merge pull request #14 from argoproj/add-poc-links
Clarify where we want contributions
2019-11-13 12:18:49 +01:00
Daniel Holbach
06a52d3d71 Clarify where we want contributions
link to PoC builds of Argo and Flux so people can
take a look and try them out
2019-11-12 14:16:34 +01:00
Alexander Matyushentsev
53ff80ff60 Move bottom-up design from google doc to markdown 2019-11-11 08:15:37 -08:00
Alexander Matyushentsev
921c307c0c Rename black-box, white-box designs to bottom-up/top-down 2019-11-11 08:15:37 -08:00
Alexander Matyushentsev
142bfe40f5 Merge pull request #12 from alexmt/blackbox-api
Add engine API to the black-box design
2019-11-07 12:05:20 -08:00
Alexander Matyushentsev
6f8e242890 Replace SyncUnit with Sync<Term> 2019-11-07 12:03:26 -08:00
Alexander Matyushentsev
9de8221284 Limit width to 100 characters per line 2019-11-07 11:12:38 -08:00
Alexander Matyushentsev
fd50001c44 Use scheme.GroupKind as key for resource customization settings 2019-11-06 12:01:42 -08:00
Alexander Matyushentsev
148710afc0 Replace app with SyncUnit 2019-11-06 11:59:04 -08:00
Alexander Matyushentsev
40c5701ac3 Update specs/design-black-box.md 2019-11-04 13:48:10 -08:00
Alexander Matyushentsev
3a53391324 Update specs/design-black-box.md
Co-Authored-By: Alfonso Acosta <fons@weave.works>
2019-11-04 11:48:56 -08:00
Alexander Matyushentsev
37e7d2b470 Update specs/design-black-box.md
Co-Authored-By: Alfonso Acosta <fons@weave.works>
2019-11-04 11:48:10 -08:00
Alexander Matyushentsev
2bb958819a Update specs/design-black-box.md
Co-Authored-By: Alfonso Acosta <fons@weave.works>
2019-11-04 11:47:52 -08:00
Alexander Matyushentsev
ab1888eef0 Update specs/design-black-box.md
Co-Authored-By: Alfonso Acosta <fons@weave.works>
2019-11-04 11:45:32 -08:00
Alexander Matyushentsev
8635459748 Update specs/design-black-box.md
Co-Authored-By: Alfonso Acosta <fons@weave.works>
2019-11-04 11:45:22 -08:00
Alexander Matyushentsev
374c236196 Update specs/design-black-box.md
Co-Authored-By: Alfonso Acosta <fons@weave.works>
2019-11-04 11:45:13 -08:00
Alexander Matyushentsev
6a28738865 Update specs/design-black-box.md
Co-Authored-By: Alfonso Acosta <fons@weave.works>
2019-11-04 11:37:37 -08:00
Alexander Matyushentsev
ffe694f117 Add engine API to the black-box design 2019-11-01 14:54:23 -07:00
Jay Pipes
3d22b0b8be Merge pull request #11 from dholbach/add-slack-ama
Add Ask-Us-Anything
2019-11-01 11:09:08 -04:00
Daniel Holbach
d79c06d1f0 Add Ask-Us-Anything
closes: #8
2019-11-01 12:07:50 +01:00
Jay Pipes
ae40d12575 Merge pull request #10 from dholbach/add-preliminary-slack
Point to #gitops on Kubernetes Slack for now
2019-10-30 09:34:53 -04:00
Daniel Holbach
fb4d62b938 Point to #gitops on Kubernetes Slack for now
closes: #7
2019-10-30 11:14:43 +01:00
Jay Pipes
1bd4f7af94 Merge pull request #9 from dholbach/add-coc
describe license and code-of-conduct
2019-10-23 12:56:22 -04:00
Daniel Holbach
58443c658c describe license and code-of-conduct 2019-10-23 17:20:12 +02:00
Daniel Holbach
78edcf4310 Merge pull request #4 from dholbach/more-info
add some explanation about what this project tries to do
2019-10-23 15:17:51 +02:00
Daniel Holbach
ef4dcd6c12 add some explanation about what this project tries to do 2019-10-23 15:11:14 +02:00
Alexander Matyushentsev
57db7848cf Merge pull request #2 from 2opremio/master
Add black box hypothesis and acceptance criteria
2019-10-18 07:56:35 -07:00
Alfonso Acosta
6f11a04a64 Update specs/design-black-box.md
Co-Authored-By: Michael Bridgen <mikeb@squaremobius.net>
2019-10-18 11:26:28 +02:00
Alfonso Acosta
fec5ec399b Add black box hypothesis and acceptance criteria 2019-10-18 11:25:36 +02:00
Alfonso Acosta
dd78b0b09e Update specs/design-black-box.md
Co-Authored-By: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2019-10-17 17:46:59 +02:00
Alfonso Acosta
c90d36e813 Add black box hypothesis and acceptance criteria 2019-10-17 17:32:53 +02:00
Alexander Matyushentsev
8b533e5763 Merge pull request #1 from alexmt/design
Add black box design proposal
2019-10-16 10:00:26 -07:00
Alexander Matyushentsev
d5b318d430 Rephrase a sentence that describes a requirement to contribute Flux specific features into GitOps engine 2019-10-16 09:57:31 -07:00
Alexander Matyushentsev
a1b21057fd Update specs/design-black-box.md
Co-Authored-By: Hidde Beydals <hiddeco@users.noreply.github.com>
2019-10-16 09:51:54 -07:00
Alexander Matyushentsev
a9c4d400c5 Add description of a risk into 'Risks and Mitigations' section 2019-10-16 07:28:52 -07:00
Alexander Matyushentsev
8049fbc818 Update specs/design-black-box.md
Co-Authored-By: Alfonso Acosta <fons@weave.works>
2019-10-16 07:03:21 -07:00
Alexander Matyushentsev
47e2511dc1 Update specs/design-black-box.md
Co-Authored-By: Alfonso Acosta <fons@weave.works>
2019-10-16 07:03:06 -07:00
Alexander Matyushentsev
3fdca00e92 Update specs/design-black-box.md
Co-Authored-By: Alfonso Acosta <fons@weave.works>
2019-10-16 07:02:55 -07:00
Alexander Matyushentsev
0cb4f8a5fb Add black box design proposal 2019-10-15 13:16:14 -07:00
Jay Pipes
c00a93dd2a Initial commit 2019-10-03 09:33:38 -04:00
828 changed files with 156541 additions and 15045 deletions

View File

@@ -9,18 +9,76 @@ assignees: ''
Target RC1 date: ___. __, ____
Target GA date: ___. __, ____
- [ ] 1wk before feature freeze post in #argo-contributors that PRs must be merged by DD-MM-YYYY to be included in the release - ask approvers to drop items from milestone they cant merge
## RC1 Release Checklist
- [ ] 1wk before feature freeze post in #argo-contributors that PRs must be merged by DD-MM-YYYY to be included in the release - ask approvers to drop items from milestone they can't merge
- [ ] At least two days before RC1 date, draft RC blog post and submit it for review (or delegate this task)
- [ ] Cut RC1 (or delegate this task to an Approver and coordinate timing)
- [ ] Create new release branch
- [ ] Create new release branch (or delegate this task to an Approver)
- [ ] Add the release branch to ReadTheDocs
- [ ] Confirm that tweet and blog post are ready
- [ ] Trigger the release
- [ ] After the release is finished, publish tweet and blog post
- [ ] Post in #argo-cd and #argo-announcements with lots of emojis announcing the release and requesting help testing
- [ ] Monitor support channels for issues, cherry-picking bugfixes and docs fixes as appropriate (or delegate this task to an Approver and coordinate timing)
- [ ] At release date, evaluate if any bugs justify delaying the release. If not, cut the release (or delegate this task to an Approver and coordinate timing)
- [ ] If unreleased changes are on the release branch for {current minor version minus 3}, cut a final patch release for that series (or delegate this task to an Approver and coordinate timing)
- [ ] After the release, post in #argo-cd that the {current minor version minus 3} has reached EOL (example: https://cloud-native.slack.com/archives/C01TSERG0KZ/p1667336234059729)
- [ ] Cut RC1 (or delegate this task to an Approver and coordinate timing)
- [ ] Run the [Init ArgoCD Release workflow](https://github.com/argoproj/argo-cd/actions/workflows/init-release.yaml) from the release branch
- [ ] Review and merge the generated version bump PR
- [ ] Run `./hack/trigger-release.sh` to push the release tag
- [ ] Monitor the [Publish ArgoCD Release workflow](https://github.com/argoproj/argo-cd/actions/workflows/release.yaml)
- [ ] Verify the release on [GitHub releases](https://github.com/argoproj/argo-cd/releases)
- [ ] Verify the container image on [Quay.io](https://quay.io/repository/argoproj/argocd?tab=tags)
- [ ] Confirm the new version appears in [Read the Docs](https://argo-cd.readthedocs.io/)
- [ ] Announce RC1 release
- [ ] Confirm that tweet and blog post are ready
- [ ] Publish tweet and blog post
- [ ] Post in #argo-cd and #argo-announcements requesting help testing:
```
:mega: Argo CD v{MAJOR}.{MINOR}.{PATCH}-rc{RC_NUMBER} is OUT NOW! :argocd::tada:
Please go through the following resources to know more about the release:
Release notes: https://github.com/argoproj/argo-cd/releases/tag/v{VERSION}
Blog: {BLOG_POST_URL}
We'd love your help testing this release candidate! Please try it out in your environments and report any issues you find. This helps us ensure a stable GA release.
Thanks to all the folks who spent their time contributing to this release in any way possible!
```
- [ ] Monitor support channels for issues, cherry-picking bugfixes and docs fixes as appropriate during the RC period (or delegate this task to an Approver and coordinate timing)
## GA Release Checklist
- [ ] At GA release date, evaluate if any bugs justify delaying the release
- [ ] Prepare for EOL version (version that is 3 releases old)
- [ ] If unreleased changes are on the release branch for {current minor version minus 3}, cut a final patch release for that series (or delegate this task to an Approver and coordinate timing)
- [ ] Edit the final patch release on GitHub and add the following notice at the top:
```markdown
> [!IMPORTANT]
> **END OF LIFE NOTICE**
>
> This is the final release of the {EOL_SERIES} release series. As of {GA_DATE}, this version has reached end of life and will no longer receive bug fixes or security updates.
>
> **Action Required**: Please upgrade to a [supported version](https://argo-cd.readthedocs.io/en/stable/operator-manual/upgrading/overview/) (v{SUPPORTED_VERSION_1}, v{SUPPORTED_VERSION_2}, or v{NEW_VERSION}).
```
- [ ] Cut GA release (or delegate this task to an Approver and coordinate timing)
- [ ] Run the [Init ArgoCD Release workflow](https://github.com/argoproj/argo-cd/actions/workflows/init-release.yaml) from the release branch
- [ ] Review and merge the generated version bump PR
- [ ] Run `./hack/trigger-release.sh` to push the release tag
- [ ] Monitor the [Publish ArgoCD Release workflow](https://github.com/argoproj/argo-cd/actions/workflows/release.yaml)
- [ ] Verify the release on [GitHub releases](https://github.com/argoproj/argo-cd/releases)
- [ ] Verify the container image on [Quay.io](https://quay.io/repository/argoproj/argocd?tab=tags)
- [ ] Verify the `stable` tag has been updated
- [ ] Confirm the new version appears in [Read the Docs](https://argo-cd.readthedocs.io/)
- [ ] Announce GA release with EOL notice
- [ ] Confirm that tweet and blog post are ready
- [ ] Publish tweet and blog post
- [ ] Post in #argo-cd and #argo-announcements announcing the release and EOL:
```
:mega: Argo CD v{MAJOR}.{MINOR} is OUT NOW! :argocd::tada:
Please go through the following resources to know more about the release:
Upgrade instructions: https://argo-cd.readthedocs.io/en/latest/operator-manual/upgrading/{PREV_MINOR}-{MAJOR}.{MINOR}/
Blog: {BLOG_POST_URL}
:warning: IMPORTANT: With the release of Argo CD v{MAJOR}.{MINOR}, support for Argo CD v{EOL_VERSION} has officially reached End of Life (EOL).
Thanks to all the folks who spent their time contributing to this release in any way possible!
```
- [ ] (For the next release champion) Review the [items scheduled for the next release](https://github.com/orgs/argoproj/projects/25). If any item does not have an assignee who can commit to finish the feature, move it to the next release.
- [ ] (For the next release champion) Schedule a time mid-way through the release cycle to review items again.
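As a quick illustration (not part of the official checklist), a release champion could sanity-check a freshly cut release from the command line roughly like this; the tag `v3.3.0` is a placeholder:
```
# Hypothetical post-release verification sketch; v3.3.0 is a placeholder tag.

# Confirm the GitHub release and its assets exist.
gh release view v3.3.0 --repo argoproj/argo-cd

# Confirm the container image was pushed to Quay.
docker pull quay.io/argoproj/argocd:v3.3.0

# For GA releases, confirm the `stable` tag now points at the new version.
git ls-remote --tags https://github.com/argoproj/argo-cd stable
```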

View File

@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -38,7 +38,7 @@ jobs:
private-key: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}
- name: Checkout repository
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
token: ${{ steps.generate-token.outputs.token }}
@@ -91,11 +91,17 @@ jobs:
- name: Create Pull Request
run: |
# Create cherry-pick PR
gh pr create \
--title "${{ inputs.pr_title }} (cherry-pick #${{ inputs.pr_number }} for ${{ inputs.version_number }})" \
--body "Cherry-picked ${{ inputs.pr_title }} (#${{ inputs.pr_number }})
TITLE="${PR_TITLE} (cherry-pick #${{ inputs.pr_number }} for ${{ inputs.version_number }})"
BODY=$(cat <<EOF
Cherry-picked ${PR_TITLE} (#${{ inputs.pr_number }})
${{ steps.cherry-pick.outputs.signoff }}" \
${{ steps.cherry-pick.outputs.signoff }}
EOF
)
gh pr create \
--title "$TITLE" \
--body "$BODY" \
--base "${{ steps.cherry-pick.outputs.target_branch }}" \
--head "${{ steps.cherry-pick.outputs.branch_name }}"
@@ -103,12 +109,13 @@ jobs:
gh pr comment ${{ inputs.pr_number }} \
--body "🍒 Cherry-pick PR created for ${{ inputs.version_number }}: #$(gh pr list --head ${{ steps.cherry-pick.outputs.branch_name }} --json number --jq '.[0].number')"
env:
PR_TITLE: ${{ inputs.pr_title }}
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
- name: Comment on failure
if: failure()
run: |
gh pr comment ${{ inputs.pr_number }} \
--body "❌ Cherry-pick failed for ${{ inputs.version_number }}. Please check the workflow logs for details."
--body "❌ Cherry-pick failed for ${{ inputs.version_number }}. Please check the [workflow logs](https://github.com/argoproj/argo-cd/actions/runs/${{ github.run_id }}) for details."
env:
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
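The cherry-pick change above stops interpolating the PR title directly into the shell script and instead passes it through the `PR_TITLE` environment variable, building the body with a heredoc. A standalone sketch of the same pattern (all variable names here are illustrative stand-ins for the workflow inputs and step outputs):
```
#!/usr/bin/env bash
set -euo pipefail

# PR_TITLE, PR_NUMBER, VERSION_NUMBER, SIGNOFF, TARGET_BRANCH and BRANCH_NAME
# are assumed to be injected via the step's `env:` block, so characters like
# quotes or backticks in a PR title cannot break out of the command line.
TITLE="${PR_TITLE} (cherry-pick #${PR_NUMBER} for ${VERSION_NUMBER})"

BODY=$(cat <<EOF
Cherry-picked ${PR_TITLE} (#${PR_NUMBER})

${SIGNOFF}
EOF
)

gh pr create \
  --title "$TITLE" \
  --body "$BODY" \
  --base "$TARGET_BRANCH" \
  --head "$BRANCH_NAME"
```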

View File

@@ -14,7 +14,7 @@ on:
env:
# Golang version to use across CI steps
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.25.0'
GOLANG_VERSION: '1.25.3'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -31,7 +31,7 @@ jobs:
frontend: ${{ steps.filter.outputs.frontend_any_changed }}
docs: ${{ steps.filter.outputs.docs_any_changed }}
steps:
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
id: filter
with:
@@ -55,7 +55,7 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
@@ -67,7 +67,6 @@ jobs:
run: |
go mod tidy
git diff --exit-code -- .
build-go:
name: Build & cache Go code
if: ${{ needs.changes.outputs.backend == 'true' }}
@@ -76,13 +75,13 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Restore go build cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -103,7 +102,7 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
@@ -112,7 +111,7 @@ jobs:
uses: golangci/golangci-lint-action@4afd733a84b1f43292c63897423277bb7f4313a9 # v8.0.0
with:
# renovate: datasource=go packageName=github.com/golangci/golangci-lint versioning=regex:^v(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?$
version: v2.4.0
version: v2.5.0
args: --verbose
test-go:
@@ -129,7 +128,7 @@ jobs:
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
@@ -153,7 +152,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -174,7 +173,7 @@ jobs:
- name: Run all unit tests
run: make test-local
- name: Generate test results artifacts
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: test-results
path: test-results
@@ -193,7 +192,7 @@ jobs:
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
@@ -217,7 +216,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -238,7 +237,7 @@ jobs:
- name: Run all unit tests
run: make test-race-local
- name: Generate test results artifacts
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: race-results
path: test-results/
@@ -251,7 +250,7 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
@@ -303,15 +302,15 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup NodeJS
uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
# renovate: datasource=node-version packageName=node versioning=node
node-version: '22.9.0'
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -336,7 +335,7 @@ jobs:
shellcheck:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- run: |
sudo apt-get install shellcheck
shellcheck -e SC2059 -e SC2154 -e SC2034 -e SC2016 -e SC1091 $(find . -type f -name '*.sh' | grep -v './ui/node_modules') | tee sc.log
@@ -355,12 +354,12 @@ jobs:
sonar_secret: ${{ secrets.SONAR_TOKEN }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -368,12 +367,12 @@ jobs:
run: |
rm -rf ui/node_modules/argo-ui/node_modules
- name: Get e2e code coverage
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: e2e-code-coverage
path: e2e-code-coverage
- name: Get unit test code coverage
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: test-results
path: test-results
@@ -407,7 +406,7 @@ jobs:
test-e2e:
name: Run end-to-end tests
if: ${{ needs.changes.outputs.backend == 'true' }}
runs-on: ubuntu-latest-16-cores
runs-on: oracle-vm-16cpu-64gb-x86-64
strategy:
fail-fast: false
matrix:
@@ -426,7 +425,7 @@ jobs:
- build-go
- changes
env:
GOPATH: /home/runner/go
GOPATH: /home/ubuntu/go
ARGOCD_FAKE_IN_CLUSTER: 'true'
ARGOCD_SSH_DATA_PATH: '/tmp/argo-e2e/app/config/ssh'
ARGOCD_TLS_DATA_PATH: '/tmp/argo-e2e/app/config/tls'
@@ -447,7 +446,7 @@ jobs:
swap-storage: false
tool-cache: false
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
@@ -462,19 +461,19 @@ jobs:
set -x
curl -sfL https://get.k3s.io | sh -
sudo chmod -R a+rw /etc/rancher/k3s
sudo mkdir -p $HOME/.kube && sudo chown -R runner $HOME/.kube
sudo mkdir -p $HOME/.kube && sudo chown -R ubuntu $HOME/.kube
sudo k3s kubectl config view --raw > $HOME/.kube/config
sudo chown runner $HOME/.kube/config
sudo chown ubuntu $HOME/.kube/config
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Add ~/go/bin to PATH
run: |
echo "/home/runner/go/bin" >> $GITHUB_PATH
echo "/home/ubuntu/go/bin" >> $GITHUB_PATH
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
@@ -496,11 +495,11 @@ jobs:
run: |
docker pull ghcr.io/dexidp/dex:v2.43.0
docker pull argoproj/argo-cd-ci-builder:v1.0.0
docker pull redis:7.2.7-alpine
docker pull redis:8.2.1-alpine
- name: Create target directory for binaries in the build-process
run: |
mkdir -p dist
chown runner dist
chown ubuntu dist
- name: Run E2E server and wait for it being available
timeout-minutes: 30
run: |
@@ -526,13 +525,13 @@ jobs:
goreman run stop-all || echo "goreman trouble"
sleep 30
- name: Upload e2e coverage report
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: e2e-code-coverage
path: /tmp/coverage
if: ${{ matrix.k3s.latest }}
- name: Upload e2e-server logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: e2e-server-k8s${{ matrix.k3s.version }}.log
path: /tmp/e2e-server.log

View File

@@ -29,7 +29,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout repository
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Use correct go version. https://github.com/github/codeql-action/issues/1842#issuecomment-1704398087
- name: Setup Golang

View File

@@ -56,14 +56,14 @@ jobs:
image-digest: ${{ steps.image.outputs.digest }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
if: ${{ github.ref_type == 'tag'}}
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
if: ${{ github.ref_type != 'tag'}}
- name: Setup Golang
@@ -73,7 +73,7 @@ jobs:
cache: false
- name: Install cosign
uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
- uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
- uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
@@ -103,7 +103,7 @@ jobs:
echo 'EOF' >> $GITHUB_ENV
- name: Login to Quay.io
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: quay.io
username: ${{ secrets.quay_username }}
@@ -111,7 +111,7 @@ jobs:
if: ${{ inputs.quay_image_name && inputs.push }}
- name: Login to GitHub Container Registry
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ secrets.ghcr_username }}
@@ -119,7 +119,7 @@ jobs:
if: ${{ inputs.ghcr_image_name && inputs.push }}
- name: Login to dockerhub Container Registry
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ secrets.docker_username }}
password: ${{ secrets.docker_password }}

View File

@@ -25,7 +25,7 @@ jobs:
image-tag: ${{ steps.image.outputs.tag}}
platforms: ${{ steps.platforms.outputs.platforms }}
steps:
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set image tag for ghcr
run: echo "tag=$(cat ./VERSION)-${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
@@ -53,7 +53,7 @@ jobs:
with:
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.0
go-version: 1.25.3
platforms: ${{ needs.set-vars.outputs.platforms }}
push: false
@@ -70,7 +70,7 @@ jobs:
ghcr_image_name: ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.0
go-version: 1.25.3
platforms: ${{ needs.set-vars.outputs.platforms }}
push: true
secrets:
@@ -106,7 +106,7 @@ jobs:
if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name == 'push' }}
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- run: git clone "https://$TOKEN@github.com/argoproj/argoproj-deployments"
env:
TOKEN: ${{ secrets.TOKEN }}

View File

@@ -23,7 +23,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -11,7 +11,7 @@ permissions: {}
env:
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.25.0' # Note: go-version must also be set in job argocd-image.with.go-version
GOLANG_VERSION: '1.25.3' # Note: go-version must also be set in job argocd-image.with.go-version
jobs:
argocd-image:
@@ -25,13 +25,49 @@ jobs:
quay_image_name: quay.io/argoproj/argocd:${{ github.ref_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.0
go-version: 1.25.3
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
secrets:
quay_username: ${{ secrets.RELEASE_QUAY_USERNAME }}
quay_password: ${{ secrets.RELEASE_QUAY_TOKEN }}
setup-variables:
name: Setup Release Variables
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
outputs:
is_pre_release: ${{ steps.var.outputs.is_pre_release }}
is_latest_release: ${{ steps.var.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup variables
id: var
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_RELEASE_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | grep -v '-' | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo ${{ github.ref_name }} | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
IS_LATEST=false
# Ensure latest release tag matches github.ref_name
if [[ $LATEST_RELEASE_TAG == ${{ github.ref_name }} ]];then
IS_LATEST=true
fi
echo "is_pre_release=$PRE_RELEASE" >> $GITHUB_OUTPUT
echo "is_latest_release=$IS_LATEST" >> $GITHUB_OUTPUT
argocd-image-provenance:
needs: [argocd-image]
permissions:
@@ -50,18 +86,20 @@ jobs:
goreleaser:
needs:
- setup-variables
- argocd-image
- argocd-image-provenance
permissions:
contents: write # used for uploading assets
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
GORELEASER_MAKE_LATEST: ${{ needs.setup-variables.outputs.is_latest_release }}
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -142,12 +180,12 @@ jobs:
permissions:
contents: write # Needed for release uploads
outputs:
hashes: ${{ steps.sbom-hash.outputs.hashes}}
hashes: ${{ steps.sbom-hash.outputs.hashes }}
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -198,7 +236,7 @@ jobs:
echo "hashes=$(sha256sum /tmp/sbom.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"
- name: Upload SBOM
uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836 # v2.3.3
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090 # v2.4.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
@@ -221,6 +259,7 @@ jobs:
post-release:
needs:
- setup-variables
- argocd-image
- goreleaser
- generate-sbom
@@ -229,9 +268,11 @@ jobs:
pull-requests: write # Needed to create PR for VERSION update.
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -242,27 +283,6 @@ jobs:
git config --global user.email 'ci@argoproj.com'
git config --global user.name 'CI'
- name: Check if tag is the latest version and not a pre-release
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo $LATEST_TAG | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
# Ensure latest tag matches github.ref_name & not a pre-release
if [[ $LATEST_TAG == ${{ github.ref_name }} ]] && [[ $PRE_RELEASE != 'true' ]];then
echo "TAG_STABLE=true" >> $GITHUB_ENV
else
echo "TAG_STABLE=false" >> $GITHUB_ENV
fi
- name: Update stable tag to latest version
run: |
git tag -f stable ${{ github.ref_name }}
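Condensed into plain shell, the tag classification done by the new `setup-variables` job (replacing the inline check removed from `post-release`) looks roughly like the following; `TAG` stands in for `github.ref_name`, and the two outputs feed `GORELEASER_MAKE_LATEST` and `TAG_STABLE`:
```
#!/usr/bin/env bash
set -euo pipefail

TAG="$1"  # stands in for github.ref_name, e.g. v3.3.0 or v3.3.0-rc1

# Make sure all tags are available for the version sort.
git fetch --prune --tags --force

# Highest non-rc tag, treating "-rc" as a pre-release suffix during sorting.
LATEST_RELEASE_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | grep -v '-' | tail -n1)

IS_PRE_RELEASE=false
if echo "$TAG" | grep -Eq -- '-rc[0-9]+$'; then
  IS_PRE_RELEASE=true
fi

IS_LATEST=false
if [[ "$LATEST_RELEASE_TAG" == "$TAG" ]]; then
  IS_LATEST=true
fi

# In the workflow these become the is_pre_release / is_latest_release outputs.
echo "is_pre_release=$IS_PRE_RELEASE"
echo "is_latest_release=$IS_LATEST"
```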

View File

@@ -10,6 +10,7 @@ permissions:
jobs:
renovate:
runs-on: ubuntu-latest
if: github.repository == 'argoproj/argo-cd'
steps:
- name: Get token
id: get_token
@@ -19,10 +20,17 @@ jobs:
private-key: ${{ secrets.RENOVATE_APP_PRIVATE_KEY }}
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # 4.2.2
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # 5.0.0
# Some codegen commands require Go to be setup
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
- name: Self-hosted Renovate
uses: renovatebot/github-action@f8af9272cd94a4637c29f60dea8731afd3134473 #43.0.12
uses: renovatebot/github-action@ea850436a5fe75c0925d583c7a02c60a5865461d #43.0.20
with:
configurationFile: .github/configs/renovate-config.js
token: '${{ steps.get_token.outputs.token }}'

View File

@@ -30,12 +30,12 @@ jobs:
steps:
- name: "Checkout code"
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@05b42c624433fc40578a4040d5cf5e36ddca8cde # v2.4.2
uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a # v2.4.3
with:
results_file: results.sarif
results_format: sarif
@@ -54,7 +54,7 @@ jobs:
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: SARIF file
path: results.sarif

View File

@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
- name: Build reports

.gitignore
View File

@@ -20,6 +20,7 @@ node_modules/
.kube/
./test/cmp/*.sock
.envrc.remote
.mirrord/
.*.swp
rerunreport.txt

View File

@@ -22,6 +22,7 @@ linters:
- govet
- importas
- misspell
- noctx
- perfsprint
- revive
- staticcheck

View File

@@ -49,13 +49,14 @@ archives:
- argocd-cli
name_template: |-
{{ .ProjectName }}-{{ .Os }}-{{ .Arch }}
formats: [ binary ]
formats: [binary]
checksum:
name_template: 'cli_checksums.txt'
algorithm: sha256
release:
make_latest: '{{ .Env.GORELEASER_MAKE_LATEST }}'
prerelease: auto
draft: false
header: |

View File

@@ -1,11 +1,5 @@
dir: '{{.InterfaceDir}}/mocks'
structname: '{{.InterfaceName}}'
filename: '{{.InterfaceName}}.go'
pkgname: mocks
template-data:
unroll-variadic: true
packages:
github.com/argoproj/argo-cd/v3/applicationset/generators:
interfaces:
@@ -14,17 +8,15 @@ packages:
interfaces:
Repos: {}
github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider:
config:
dir: applicationset/services/scm_provider/aws_codecommit/mocks
interfaces:
AWSCodeCommitClient: {}
AWSTaggingClient: {}
AzureDevOpsClientFactory: {}
github.com/argoproj/argo-cd/v3/applicationset/utils:
interfaces:
Renderer: {}
github.com/argoproj/argo-cd/v3/commitserver/apiclient:
interfaces:
Clientset: {}
CommitServiceClient: {}
github.com/argoproj/argo-cd/v3/commitserver/commit:
interfaces:
@@ -35,6 +27,7 @@ packages:
github.com/argoproj/argo-cd/v3/controller/hydrator:
interfaces:
Dependencies: {}
RepoGetter: {}
github.com/argoproj/argo-cd/v3/pkg/apiclient/cluster:
interfaces:
ClusterServiceServer: {}
@@ -47,8 +40,8 @@ packages:
AppProjectInterface: {}
github.com/argoproj/argo-cd/v3/reposerver/apiclient:
interfaces:
RepoServerService_GenerateManifestWithFilesClient: {}
RepoServerServiceClient: {}
RepoServerService_GenerateManifestWithFilesClient: {}
github.com/argoproj/argo-cd/v3/server/application:
interfaces:
Broadcaster: {}
@@ -63,26 +56,37 @@ packages:
github.com/argoproj/argo-cd/v3/util/db:
interfaces:
ArgoDB: {}
RepoCredsDB: {}
github.com/argoproj/argo-cd/v3/util/git:
interfaces:
Client: {}
github.com/argoproj/argo-cd/v3/util/helm:
interfaces:
Client: {}
github.com/argoproj/argo-cd/v3/util/oci:
interfaces:
Client: {}
github.com/argoproj/argo-cd/v3/util/io:
interfaces:
TempPaths: {}
github.com/argoproj/argo-cd/v3/util/notification/argocd:
interfaces:
Service: {}
github.com/argoproj/argo-cd/v3/util/oci:
interfaces:
Client: {}
github.com/argoproj/argo-cd/v3/util/workloadidentity:
interfaces:
TokenProvider: {}
github.com/argoproj/gitops-engine/pkg/cache:
interfaces:
ClusterCache: {}
github.com/argoproj/gitops-engine/pkg/diff:
interfaces:
ServerSideDryRunner: {}
github.com/microsoft/azure-devops-go-api/azuredevops/v7/git:
config:
dir: applicationset/services/scm_provider/azure_devops/git/mocks
interfaces:
Client: {}
pkgname: mocks
structname: '{{.InterfaceName}}'
template-data:
unroll-variadic: true

View File

@@ -16,4 +16,5 @@
# CLI
/cmd/argocd/** @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
/cmd/main.go @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
/docs/operator-manual/ @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
# Also include @argoproj/argocd-approvers-docs to avoid requiring CLI approvers for docs-only PRs.
/docs/operator-manual/ @argoproj/argocd-approvers @argoproj/argocd-approvers-docs @argoproj/argocd-approvers-cli

View File

@@ -1,10 +1,10 @@
ARG BASE_IMAGE=docker.io/library/ubuntu:25.04@sha256:10bb10bb062de665d4dc3e0ea36715270ead632cfcb74d08ca2273712a0dfb42
ARG BASE_IMAGE=docker.io/library/ubuntu:25.04@sha256:27771fb7b40a58237c98e8d3e6b9ecdd9289cec69a857fccfb85ff36294dac20
####################################################################################################
# Builder image
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.25.0@sha256:9e56f0d0f043a68bb8c47c819e47dc29f6e8f5129b8885bed9d43f058f7f3ed6 AS builder
FROM docker.io/library/golang:1.25.3@sha256:6d4e5e74f47db00f7f24da5f53c1b4198ae46862a47395e30477365458347bf2 AS builder
WORKDIR /tmp
@@ -103,11 +103,13 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.0@sha256:9e56f0d0f043a68bb8c47c819e47dc29f6e8f5129b8885bed9d43f058f7f3ed6 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.3@sha256:6d4e5e74f47db00f7f24da5f53c1b4198ae46862a47395e30477365458347bf2 AS argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd
COPY go.* ./
RUN mkdir -p gitops-engine
COPY gitops-engine/go.* ./gitops-engine
RUN go mod download
# Perform the build

View File

@@ -1,4 +1,4 @@
FROM docker.io/library/golang:1.25.0@sha256:9e56f0d0f043a68bb8c47c819e47dc29f6e8f5129b8885bed9d43f058f7f3ed6
FROM docker.io/library/golang:1.25.3@sha256:6d4e5e74f47db00f7f24da5f53c1b4198ae46862a47395e30477365458347bf2
ENV DEBIAN_FRONTEND=noninteractive

View File

@@ -1,4 +1,4 @@
FROM node:20
FROM node:25
WORKDIR /app/ui

View File

@@ -261,8 +261,12 @@ clidocsgen:
actionsdocsgen:
hack/generate-actions-list.sh
.PHONY: resourceiconsgen
resourceiconsgen:
hack/generate-icons-typescript.sh
.PHONY: codegen-local
codegen-local: mod-vendor-local mockgen gogen protogen clientgen openapigen clidocsgen actionsdocsgen manifests-local notification-docs notification-catalog
codegen-local: mod-vendor-local mockgen gogen protogen clientgen openapigen clidocsgen actionsdocsgen resourceiconsgen manifests-local notification-docs notification-catalog
rm -rf vendor/
.PHONY: codegen-local-fast
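For local development, the new target can be invoked directly; a hypothetical usage, assuming a normal checkout where the generator script lives in `hack/`:
```
# Regenerate only the resource icon TypeScript definitions.
make resourceiconsgen

# Or run the full local codegen, which now includes the icons step.
make codegen-local
```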

View File

@@ -9,6 +9,6 @@ ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
git-server: test/fixture/testrepos/start-git.sh
helm-registry: test/fixture/testrepos/start-helm-registry.sh
oci-registry: test/fixture/testrepos/start-authenticated-helm-registry.sh
dev-mounter: [[ "$ARGOCD_E2E_TEST" != "true" ]] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
dev-mounter: [ "$ARGOCD_E2E_TEST" != "true" ] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
applicationset-controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/applicationset-controller} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
notification: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/notification} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_BINARY_NAME=argocd-notifications $COMMAND --loglevel debug --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --self-service-notification-enabled=${ARGOCD_NOTIFICATION_CONTROLLER_SELF_SERVICE_NOTIFICATION_ENABLED:-'false'}"

View File

@@ -3,9 +3,9 @@ header:
expiration-date: '2024-10-31T00:00:00.000Z' # One year from initial release.
last-updated: '2023-10-27'
last-reviewed: '2023-10-27'
commit-hash: 320f46f06beaf75f9c406e3a47e2e09d36e2047a
commit-hash: 06ef059f9fc7cf9da2dfaef2a505ee1e3c693485
project-url: https://github.com/argoproj/argo-cd
project-release: v3.2.0
project-release: v3.3.0
changelog: https://github.com/argoproj/argo-cd/releases
license: https://github.com/argoproj/argo-cd/blob/master/LICENSE
project-lifecycle:

View File

@@ -123,6 +123,7 @@ k8s_resource(
'9345:2345',
'8083:8083'
],
resource_deps=['build']
)
# track crds
@@ -148,6 +149,7 @@ k8s_resource(
'9346:2345',
'8084:8084'
],
resource_deps=['build']
)
# track argocd-redis resources and port forward
@@ -162,6 +164,7 @@ k8s_resource(
port_forwards=[
'6379:6379',
],
resource_deps=['build']
)
# track argocd-applicationset-controller resources
@@ -180,6 +183,7 @@ k8s_resource(
'8085:8080',
'7000:7000'
],
resource_deps=['build']
)
# track argocd-application-controller resources
@@ -197,6 +201,7 @@ k8s_resource(
'9348:2345',
'8086:8082',
],
resource_deps=['build']
)
# track argocd-notifications-controller resources
@@ -214,6 +219,7 @@ k8s_resource(
'9349:2345',
'8087:9001',
],
resource_deps=['build']
)
# track argocd-dex-server resources
@@ -225,6 +231,7 @@ k8s_resource(
'argocd-dex-server:role',
'argocd-dex-server:rolebinding',
],
resource_deps=['build']
)
# track argocd-commit-server resources
@@ -239,6 +246,19 @@ k8s_resource(
'8088:8087',
'8089:8086',
],
resource_deps=['build']
)
# ui dependencies
local_resource(
'node-modules',
'yarn',
dir='ui',
deps = [
'ui/package.json',
'ui/yarn.lock',
],
allow_parallel=True,
)
# docker for ui
@@ -260,6 +280,7 @@ k8s_resource(
port_forwards=[
'4000:4000',
],
resource_deps=['node-modules'],
)
# linting
@@ -278,6 +299,7 @@ local_resource(
'ui',
],
allow_parallel=True,
resource_deps=['node-modules'],
)
local_resource(
@@ -287,5 +309,6 @@ local_resource(
'go.mod',
'go.sum',
],
allow_parallel=True,
)

View File

@@ -63,6 +63,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Camptocamp](https://camptocamp.com)
1. [Candis](https://www.candis.io)
1. [Capital One](https://www.capitalone.com)
1. [Capptain LTD](https://capptain.co/)
1. [CARFAX Europe](https://www.carfax.eu)
1. [CARFAX](https://www.carfax.com)
1. [Carrefour Group](https://www.carrefour.com)
@@ -309,7 +310,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Relex Solutions](https://www.relexsolutions.com/)
1. [RightRev](https://rightrev.com/)
1. [Rijkswaterstaat](https://www.rijkswaterstaat.nl/en)
1. [Rise](https://www.risecard.eu/)
1. Rise
1. [Riskified](https://www.riskified.com/)
1. [Robotinfra](https://www.robotinfra.com)
1. [Rocket.Chat](https://rocket.chat)

View File

@@ -1 +1 @@
3.2.0
3.3.0

View File

@@ -37,6 +37,7 @@ import (
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/retry"
"k8s.io/utils/ptr"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -46,6 +47,8 @@ import (
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"github.com/argoproj/gitops-engine/pkg/health"
"github.com/argoproj/argo-cd/v3/applicationset/controllers/template"
"github.com/argoproj/argo-cd/v3/applicationset/generators"
"github.com/argoproj/argo-cd/v3/applicationset/metrics"
@@ -100,6 +103,7 @@ type ApplicationSetReconciler struct {
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -226,8 +230,6 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, fmt.Errorf("failed to get update resources status for application set: %w", err)
}
// appMap is a name->app collection of Applications in this ApplicationSet.
appMap := map[string]argov1alpha1.Application{}
// appSyncMap tracks which apps will be synced during this reconciliation.
appSyncMap := map[string]bool{}
@@ -241,22 +243,20 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, fmt.Errorf("failed to clear previous AppSet application statuses for %v: %w", applicationSetInfo.Name, err)
}
} else if isRollingSyncStrategy(&applicationSetInfo) {
// The appset uses progressive sync with `RollingSync` strategy
for _, app := range currentApplications {
appMap[app.Name] = app
}
appSyncMap, err = r.performProgressiveSyncs(ctx, logCtx, applicationSetInfo, currentApplications, generatedApplications, appMap)
appSyncMap, err = r.performProgressiveSyncs(ctx, logCtx, applicationSetInfo, currentApplications, generatedApplications)
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to perform progressive sync reconciliation for application set: %w", err)
}
}
}
} else {
// Progressive Sync is disabled, clear any existing applicationStatus to prevent stale data
if len(applicationSetInfo.Status.ApplicationStatus) > 0 {
logCtx.Infof("Progressive Sync disabled, removing %v AppStatus entries from ApplicationSet %v", len(applicationSetInfo.Status.ApplicationStatus), applicationSetInfo.Name)
var validApps []argov1alpha1.Application
for i := range generatedApplications {
if validateErrors[generatedApplications[i].QualifiedName()] == nil {
validApps = append(validApps, generatedApplications[i])
err := r.setAppSetApplicationStatus(ctx, logCtx, &applicationSetInfo, []argov1alpha1.ApplicationSetApplicationStatus{})
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to clear AppSet application statuses when Progressive Sync is disabled for %v: %w", applicationSetInfo.Name, err)
}
}
}
@@ -287,13 +287,25 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
)
}
var validApps []argov1alpha1.Application
for i := range generatedApplications {
if validateErrors[generatedApplications[i].QualifiedName()] == nil {
validApps = append(validApps, generatedApplications[i])
}
}
if r.EnableProgressiveSyncs {
// trigger appropriate application syncs if RollingSync strategy is enabled
if progressiveSyncsRollingSyncStrategyEnabled(&applicationSetInfo) {
validApps = r.syncValidApplications(logCtx, &applicationSetInfo, appSyncMap, appMap, validApps)
validApps = r.syncDesiredApplications(logCtx, &applicationSetInfo, appSyncMap, validApps)
}
}
// Sort apps by name so they are updated/created in the same order, and condition errors are the same
sort.Slice(validApps, func(i, j int) bool {
return validApps[i].Name < validApps[j].Name
})
if utils.DefaultPolicy(applicationSetInfo.Spec.SyncPolicy, r.Policy, r.EnablePolicyOverride).AllowUpdate() {
err = r.createOrUpdateInCluster(ctx, logCtx, applicationSetInfo, validApps)
if err != nil {
@@ -325,6 +337,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
}
if utils.DefaultPolicy(applicationSetInfo.Spec.SyncPolicy, r.Policy, r.EnablePolicyOverride).AllowDelete() {
// Delete the generatedApplications instead of the validApps because we want to be able to delete applications in error/invalid state
err = r.deleteInCluster(ctx, logCtx, applicationSetInfo, generatedApplications)
if err != nil {
_ = r.setApplicationSetStatusCondition(ctx,
@@ -931,7 +944,7 @@ func (r *ApplicationSetReconciler) removeOwnerReferencesOnDeleteAppSet(ctx conte
return nil
}
func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context, logCtx *log.Entry, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application, appMap map[string]argov1alpha1.Application) (map[string]bool, error) {
func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context, logCtx *log.Entry, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application) (map[string]bool, error) {
appDependencyList, appStepMap := r.buildAppDependencyList(logCtx, appset, desiredApplications)
_, err := r.updateApplicationSetApplicationStatus(ctx, logCtx, &appset, applications, appStepMap)
@@ -940,21 +953,21 @@ func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context,
}
logCtx.Infof("ApplicationSet %v step list:", appset.Name)
for i, step := range appDependencyList {
logCtx.Infof("step %v: %+v", i+1, step)
for stepIndex, applicationNames := range appDependencyList {
logCtx.Infof("step %v: %+v", stepIndex+1, applicationNames)
}
appSyncMap := r.buildAppSyncMap(appset, appDependencyList, appMap)
logCtx.Infof("Application allowed to sync before maxUpdate?: %+v", appSyncMap)
appsToSync := r.getAppsToSync(appset, appDependencyList, applications)
logCtx.Infof("Application allowed to sync before maxUpdate?: %+v", appsToSync)
_, err = r.updateApplicationSetApplicationStatusProgress(ctx, logCtx, &appset, appSyncMap, appStepMap)
_, err = r.updateApplicationSetApplicationStatusProgress(ctx, logCtx, &appset, appsToSync, appStepMap)
if err != nil {
return nil, fmt.Errorf("failed to update applicationset application status progress: %w", err)
}
_ = r.updateApplicationSetApplicationStatusConditions(ctx, &appset)
return appSyncMap, nil
return appsToSync, nil
}
// this list tracks which Applications belong to each RollingUpdate step
@@ -1028,55 +1041,53 @@ func labelMatchedExpression(logCtx *log.Entry, val string, matchExpression argov
return valueMatched
}
// this map is used to determine which stage of Applications are ready to be updated in the reconciler loop
func (r *ApplicationSetReconciler) buildAppSyncMap(applicationSet argov1alpha1.ApplicationSet, appDependencyList [][]string, appMap map[string]argov1alpha1.Application) map[string]bool {
// getAppsToSync returns a Map of Applications that should be synced in this progressive sync wave
func (r *ApplicationSetReconciler) getAppsToSync(applicationSet argov1alpha1.ApplicationSet, appDependencyList [][]string, currentApplications []argov1alpha1.Application) map[string]bool {
appSyncMap := map[string]bool{}
syncEnabled := true
currentAppsMap := map[string]bool{}
// healthy stages and the first non-healthy stage should have sync enabled
// every stage after should have sync disabled
for _, app := range currentApplications {
currentAppsMap[app.Name] = true
}
for i := range appDependencyList {
for stepIndex := range appDependencyList {
// set the syncEnabled boolean for every Application in the current step
for _, appName := range appDependencyList[i] {
appSyncMap[appName] = syncEnabled
for _, appName := range appDependencyList[stepIndex] {
appSyncMap[appName] = true
}
// detect if we need to halt before progressing to the next step
for _, appName := range appDependencyList[i] {
// evaluate if we need to sync next waves
syncNextWave := true
for _, appName := range appDependencyList[stepIndex] {
// Check if application is created and managed by this AppSet, if it is not created yet, we cannot progress
if _, ok := currentAppsMap[appName]; !ok {
syncNextWave = false
break
}
idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, appName)
if idx == -1 {
// no Application status found, likely because the Application is being newly created
syncEnabled = false
// No Application status found, likely because the Application is being newly created
// This mean this wave is not yet completed
syncNextWave = false
break
}
appStatus := applicationSet.Status.ApplicationStatus[idx]
app, ok := appMap[appName]
if !ok {
// application name not found in the list of applications managed by this ApplicationSet, maybe because it's being deleted
syncEnabled = false
break
}
syncEnabled = appSyncEnabledForNextStep(&applicationSet, app, appStatus)
if !syncEnabled {
if appStatus.Status != argov1alpha1.ProgressiveSyncHealthy {
// At least one application in this wave is not yet healthy. We cannot proceed to the next wave
syncNextWave = false
break
}
}
if !syncNextWave {
break
}
}
return appSyncMap
}
func appSyncEnabledForNextStep(appset *argov1alpha1.ApplicationSet, app argov1alpha1.Application, appStatus argov1alpha1.ApplicationSetApplicationStatus) bool {
if progressiveSyncsRollingSyncStrategyEnabled(appset) {
// we still need to complete the current step if the Application is not yet Healthy or there are still pending Application changes
return isApplicationHealthy(app) && appStatus.Status == "Healthy"
}
return true
}
func isRollingSyncStrategy(appset *argov1alpha1.ApplicationSet) bool {
// It's only RollingSync if the type specifically sets it
return appset.Spec.Strategy != nil && appset.Spec.Strategy.Type == "RollingSync" && appset.Spec.Strategy.RollingSync != nil
@@ -1087,29 +1098,21 @@ func progressiveSyncsRollingSyncStrategyEnabled(appset *argov1alpha1.Application
return isRollingSyncStrategy(appset) && len(appset.Spec.Strategy.RollingSync.Steps) > 0
}
func isProgressiveSyncDeletionOrderReversed(appset *argov1alpha1.ApplicationSet) bool {
// When progressive sync is enabled + deletionOrder is set to Reverse (case-insensitive)
return progressiveSyncsRollingSyncStrategyEnabled(appset) && strings.EqualFold(appset.Spec.Strategy.DeletionOrder, ReverseDeletionOrder)
}
func isApplicationHealthy(app argov1alpha1.Application) bool {
healthStatusString, syncStatusString, operationPhaseString := statusStrings(app)
if healthStatusString == "Healthy" && syncStatusString != "OutOfSync" && (operationPhaseString == "Succeeded" || operationPhaseString == "") {
return true
func isApplicationWithError(app argov1alpha1.Application) bool {
for _, condition := range app.Status.Conditions {
if condition.Type == argov1alpha1.ApplicationConditionInvalidSpecError {
return true
}
if condition.Type == argov1alpha1.ApplicationConditionUnknownError {
return true
}
}
return false
}
func statusStrings(app argov1alpha1.Application) (string, string, string) {
healthStatusString := string(app.Status.Health.Status)
syncStatusString := string(app.Status.Sync.Status)
operationPhaseString := ""
if app.Status.OperationState != nil {
operationPhaseString = string(app.Status.OperationState.Phase)
}
return healthStatusString, syncStatusString, operationPhaseString
func isProgressiveSyncDeletionOrderReversed(appset *argov1alpha1.ApplicationSet) bool {
// When progressive sync is enabled + deletionOrder is set to Reverse (case-insensitive)
return progressiveSyncsRollingSyncStrategyEnabled(appset) && strings.EqualFold(appset.Spec.Strategy.DeletionOrder, ReverseDeletionOrder)
}
func getAppStep(appName string, appStepMap map[string]int) int {
@@ -1128,81 +1131,112 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applications))
for _, app := range applications {
healthStatusString, syncStatusString, operationPhaseString := statusStrings(app)
idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, app.Name)
appHealthStatus := app.Status.Health.Status
appSyncStatus := app.Status.Sync.Status
currentAppStatus := argov1alpha1.ApplicationSetApplicationStatus{}
idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, app.Name)
if idx == -1 {
// AppStatus not found, set default status of "Waiting"
currentAppStatus = argov1alpha1.ApplicationSetApplicationStatus{
Application: app.Name,
TargetRevisions: app.Status.GetRevisions(),
LastTransitionTime: &now,
Message: "No Application status found, defaulting status to Waiting.",
Status: "Waiting",
Message: "No Application status found, defaulting status to Waiting",
Status: argov1alpha1.ProgressiveSyncWaiting,
Step: strconv.Itoa(getAppStep(app.Name, appStepMap)),
}
} else {
// we have an existing AppStatus
currentAppStatus = applicationSet.Status.ApplicationStatus[idx]
if !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions()) {
currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
}
}
statusLogCtx := logCtx.WithFields(log.Fields{
"app.name": currentAppStatus.Application,
"app.health": appHealthStatus,
"app.sync": appSyncStatus,
"status.status": currentAppStatus.Status,
"status.message": currentAppStatus.Message,
"status.step": currentAppStatus.Step,
"status.targetRevisions": strings.Join(currentAppStatus.TargetRevisions, ","),
})
newAppStatus := currentAppStatus.DeepCopy()
newAppStatus.Step = strconv.Itoa(getAppStep(newAppStatus.Application, appStepMap))
if !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions()) {
currentAppStatus.TargetRevisions = app.Status.GetRevisions()
currentAppStatus.Status = "Waiting"
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
// A new version is available in the application and we need to re-sync the application
newAppStatus.TargetRevisions = app.Status.GetRevisions()
newAppStatus.Message = "Application has pending changes, setting status to Waiting"
newAppStatus.Status = argov1alpha1.ProgressiveSyncWaiting
newAppStatus.LastTransitionTime = &now
}
appOutdated := false
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
appOutdated = syncStatusString == "OutOfSync"
}
if newAppStatus.Status == argov1alpha1.ProgressiveSyncWaiting {
// App has changed to Waiting because the TargetRevisions changed or it is a newly selected app
// This does not mean we should always sync the app. The app may not be OutOfSync
// and may not require a sync if it does not have differences.
if appSyncStatus == argov1alpha1.SyncStatusCodeSynced {
if app.Status.Health.Status == health.HealthStatusHealthy {
newAppStatus.LastTransitionTime = &now
newAppStatus.Status = argov1alpha1.ProgressiveSyncHealthy
newAppStatus.Message = "Application resource has synced, updating status to Healthy"
} else {
newAppStatus.LastTransitionTime = &now
newAppStatus.Status = argov1alpha1.ProgressiveSyncProgressing
newAppStatus.Message = "Application resource has synced, updating status to Progressing"
}
}
} else {
// The target revision is the same, so we need to evaluate the current revision progress
if currentAppStatus.Status == argov1alpha1.ProgressiveSyncPending {
// No need to evaluate status health further if the application did not change since our last transition
if app.Status.ReconciledAt == nil || (newAppStatus.LastTransitionTime != nil && app.Status.ReconciledAt.After(newAppStatus.LastTransitionTime.Time)) {
// Validate that at least one sync was triggered after the pending transition time
if app.Status.OperationState != nil && app.Status.OperationState.StartedAt.After(currentAppStatus.LastTransitionTime.Time) {
statusLogCtx = statusLogCtx.WithField("app.operation", app.Status.OperationState.Phase)
newAppStatus.LastTransitionTime = &now
newAppStatus.Status = argov1alpha1.ProgressiveSyncProgressing
if appOutdated && currentAppStatus.Status != "Waiting" && currentAppStatus.Status != "Pending" {
logCtx.Infof("Application %v is outdated, updating its ApplicationSet status to Waiting", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Waiting"
currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
}
switch {
case app.Status.OperationState.Phase.Successful():
newAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing"
case app.Status.OperationState.Phase.Completed():
newAppStatus.Message = "Application resource completed a sync, updating status from Pending to Progressing"
default:
// If a sync fails or has errors, the Application should be configured with retry. It is not the appset's job to retry failed syncs
newAppStatus.Message = "Application resource became Progressing, updating status from Pending to Progressing"
}
} else if isApplicationWithError(app) {
// Validate whether the application has errors preventing it from being reconciled and performing syncs
// If it does, we move it to Progressing.
newAppStatus.LastTransitionTime = &now
newAppStatus.Status = argov1alpha1.ProgressiveSyncProgressing
newAppStatus.Message = "Application resource has error and cannot sync, updating status to Progressing"
}
}
}
if currentAppStatus.Status == "Pending" {
if !appOutdated && operationPhaseString == "Succeeded" {
logCtx.Infof("Application %v has completed a sync successfully, updating its ApplicationSet status to Progressing", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
} else if operationPhaseString == "Running" || healthStatusString == "Progressing" {
logCtx.Infof("Application %v has entered Progressing status, updating its ApplicationSet status to Progressing", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource became Progressing, updating status from Pending to Progressing."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
if currentAppStatus.Status == argov1alpha1.ProgressiveSyncProgressing {
// If the status has reached Progressing, we know a sync has been triggered. No matter the result of that operation,
// we want the app to reach the Healthy state for the current revision.
if appHealthStatus == health.HealthStatusHealthy && appSyncStatus == argov1alpha1.SyncStatusCodeSynced {
newAppStatus.LastTransitionTime = &now
newAppStatus.Status = argov1alpha1.ProgressiveSyncHealthy
newAppStatus.Message = "Application resource became Healthy, updating status from Progressing to Healthy"
}
}
}
if currentAppStatus.Status == "Waiting" && isApplicationHealthy(app) {
logCtx.Infof("Application %v is already synced and healthy, updating its ApplicationSet status to Healthy", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = healthStatusString
currentAppStatus.Message = "Application resource is already Healthy, updating status from Waiting to Healthy."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
if newAppStatus.LastTransitionTime == &now {
statusLogCtx.WithFields(log.Fields{
"new_status.status": newAppStatus.Status,
"new_status.message": newAppStatus.Message,
"new_status.step": newAppStatus.Step,
"new_status.targetRevisions": strings.Join(newAppStatus.TargetRevisions, ","),
}).Info("Progressive sync application changed status")
}
if currentAppStatus.Status == "Progressing" && isApplicationHealthy(app) {
logCtx.Infof("Application %v has completed Progressing status, updating its ApplicationSet status to Healthy", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = healthStatusString
currentAppStatus.Message = "Application resource became Healthy, updating status from Progressing to Healthy."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
}
appStatuses = append(appStatuses, currentAppStatus)
appStatuses = append(appStatuses, *newAppStatus)
}
err := r.setAppSetApplicationStatus(ctx, logCtx, applicationSet, appStatuses)
@@ -1214,7 +1248,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
}
// check Applications that are in Waiting status and promote them to Pending if needed
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appsToSync map[string]bool, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
now := metav1.Now()
appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applicationSet.Status.ApplicationStatus))
@@ -1230,12 +1264,20 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
for _, appStatus := range applicationSet.Status.ApplicationStatus {
totalCountMap[appStepMap[appStatus.Application]]++
if appStatus.Status == "Pending" || appStatus.Status == "Progressing" {
if appStatus.Status == argov1alpha1.ProgressiveSyncPending || appStatus.Status == argov1alpha1.ProgressiveSyncProgressing {
updateCountMap[appStepMap[appStatus.Application]]++
}
}
for _, appStatus := range applicationSet.Status.ApplicationStatus {
statusLogCtx := logCtx.WithFields(log.Fields{
"app.name": appStatus.Application,
"status.status": appStatus.Status,
"status.message": appStatus.Message,
"status.step": appStatus.Step,
"status.targetRevisions": strings.Join(appStatus.TargetRevisions, ","),
})
maxUpdateAllowed := true
maxUpdate := &intstr.IntOrString{}
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
@@ -1246,7 +1288,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
if maxUpdate != nil {
maxUpdateVal, err := intstr.GetScaledValueFromIntOrPercent(maxUpdate, totalCountMap[appStepMap[appStatus.Application]], false)
if err != nil {
logCtx.Warnf("AppSet '%v' has a invalid maxUpdate value '%+v', ignoring maxUpdate logic for this step: %v", applicationSet.Name, maxUpdate, err)
statusLogCtx.Warnf("AppSet has a invalid maxUpdate value '%+v', ignoring maxUpdate logic for this step: %v", maxUpdate, err)
}
// ensure that percentage values greater than 0% always result in at least 1 Application being selected
@@ -1256,16 +1298,21 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
if updateCountMap[appStepMap[appStatus.Application]] >= maxUpdateVal {
maxUpdateAllowed = false
logCtx.Infof("Application %v is not allowed to update yet, %v/%v Applications already updating in step %v in AppSet %v", appStatus.Application, updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, getAppStep(appStatus.Application, appStepMap), applicationSet.Name)
statusLogCtx.Infof("Application is not allowed to update yet, %v/%v Applications already updating in step %v", updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, getAppStep(appStatus.Application, appStepMap))
}
}
if appStatus.Status == "Waiting" && appSyncMap[appStatus.Application] && maxUpdateAllowed {
logCtx.Infof("Application %v moved to Pending status, watching for the Application to start Progressing", appStatus.Application)
if appStatus.Status == argov1alpha1.ProgressiveSyncWaiting && appsToSync[appStatus.Application] && maxUpdateAllowed {
appStatus.LastTransitionTime = &now
appStatus.Status = "Pending"
appStatus.Message = "Application moved to Pending status, watching for the Application resource to start Progressing."
appStatus.Step = strconv.Itoa(getAppStep(appStatus.Application, appStepMap))
appStatus.Status = argov1alpha1.ProgressiveSyncPending
appStatus.Message = "Application moved to Pending status, watching for the Application resource to start Progressing"
statusLogCtx.WithFields(log.Fields{
"new_status.status": appStatus.Status,
"new_status.message": appStatus.Message,
"new_status.step": appStatus.Step,
"new_status.targetRevisions": strings.Join(appStatus.TargetRevisions, ","),
}).Info("Progressive sync application changed status")
updateCountMap[appStepMap[appStatus.Application]]++
}
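For the maxUpdate arithmetic above, a small sketch of how intstr.GetScaledValueFromIntOrPercent resolves a percentage against the number of Applications in a step may help; the step size and percentages are made-up values:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	stepSize := 5 // Applications in the current step

	pct := intstr.FromString("40%")
	val, err := intstr.GetScaledValueFromIntOrPercent(&pct, stepSize, false) // rounds down
	fmt.Println(val, err) // 2 <nil>

	small := intstr.FromString("10%")
	val, err = intstr.GetScaledValueFromIntOrPercent(&small, stepSize, false)
	// 10% of 5 rounds down to 0, which is why the controller bumps any value
	// derived from a non-zero percentage up to at least 1
	fmt.Println(val, err) // 0 <nil>
}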
@@ -1290,9 +1337,9 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusConditio
completedWaves := map[string]bool{}
for _, appStatus := range applicationSet.Status.ApplicationStatus {
if v, ok := completedWaves[appStatus.Step]; !ok {
completedWaves[appStatus.Step] = appStatus.Status == "Healthy"
completedWaves[appStatus.Step] = appStatus.Status == argov1alpha1.ProgressiveSyncHealthy
} else {
completedWaves[appStatus.Step] = v && appStatus.Status == "Healthy"
completedWaves[appStatus.Step] = v && appStatus.Status == argov1alpha1.ProgressiveSyncHealthy
}
}
@@ -1398,7 +1445,13 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
sort.Slice(statuses, func(i, j int) bool {
return statuses[i].Name < statuses[j].Name
})
resourcesCount := int64(len(statuses))
if r.MaxResourcesStatusCount > 0 && len(statuses) > r.MaxResourcesStatusCount {
logCtx.Warnf("Truncating ApplicationSet %s resource status from %d to max allowed %d entries", appset.Name, len(statuses), r.MaxResourcesStatusCount)
statuses = statuses[:r.MaxResourcesStatusCount]
}
appset.Status.Resources = statuses
appset.Status.ResourcesCount = resourcesCount
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
@@ -1411,6 +1464,7 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
}
updatedAppset.Status.Resources = appset.Status.Resources
updatedAppset.Status.ResourcesCount = resourcesCount
// Update the newly fetched object with new status resources
err := r.Client.Status().Update(ctx, updatedAppset)
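The status update above relies on retry.RetryOnConflict with client-go's DefaultRetry backoff. A minimal, self-contained sketch of that retry behaviour, using a simulated conflict error rather than a real API server:

package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/util/retry"
)

func main() {
	attempts := 0
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		attempts++
		if attempts < 3 {
			// simulate an optimistic-concurrency failure on the first two attempts
			return apierrors.NewConflict(schema.GroupResource{Resource: "applicationsets"}, "demo", fmt.Errorf("object was modified"))
		}
		return nil // the third attempt succeeds
	})
	fmt.Println(attempts, err) // 3 <nil>
}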
@@ -1507,30 +1561,31 @@ func (r *ApplicationSetReconciler) setAppSetApplicationStatus(ctx context.Contex
return nil
}
func (r *ApplicationSetReconciler) syncValidApplications(logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appMap map[string]argov1alpha1.Application, validApps []argov1alpha1.Application) []argov1alpha1.Application {
func (r *ApplicationSetReconciler) syncDesiredApplications(logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appsToSync map[string]bool, desiredApplications []argov1alpha1.Application) []argov1alpha1.Application {
rolloutApps := []argov1alpha1.Application{}
for i := range validApps {
for i := range desiredApplications {
pruneEnabled := false
// ensure that Applications generated with RollingSync do not have an automated sync policy, since the AppSet controller will handle triggering the sync operation instead
if validApps[i].Spec.SyncPolicy != nil && validApps[i].Spec.SyncPolicy.IsAutomatedSyncEnabled() {
pruneEnabled = validApps[i].Spec.SyncPolicy.Automated.Prune
validApps[i].Spec.SyncPolicy.Automated = nil
if desiredApplications[i].Spec.SyncPolicy != nil && desiredApplications[i].Spec.SyncPolicy.IsAutomatedSyncEnabled() {
pruneEnabled = desiredApplications[i].Spec.SyncPolicy.Automated.Prune
desiredApplications[i].Spec.SyncPolicy.Automated.Enabled = ptr.To(false)
}
appSetStatusPending := false
idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, validApps[i].Name)
if idx > -1 && applicationSet.Status.ApplicationStatus[idx].Status == "Pending" {
idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, desiredApplications[i].Name)
if idx > -1 && applicationSet.Status.ApplicationStatus[idx].Status == argov1alpha1.ProgressiveSyncPending {
// only trigger a sync for Applications that are in Pending status, since this is governed by maxUpdate
appSetStatusPending = true
}
// check appSyncMap to determine which Applications are ready to be updated and which should be skipped
if appSyncMap[validApps[i].Name] && appMap[validApps[i].Name].Status.Sync.Status == "OutOfSync" && appSetStatusPending {
logCtx.Infof("triggering sync for application: %v, prune enabled: %v", validApps[i].Name, pruneEnabled)
validApps[i] = syncApplication(validApps[i], pruneEnabled)
// check appsToSync to determine which Applications are ready to be updated and which should be skipped
if appsToSync[desiredApplications[i].Name] && appSetStatusPending {
logCtx.Infof("triggering sync for application: %v, prune enabled: %v", desiredApplications[i].Name, pruneEnabled)
desiredApplications[i] = syncApplication(desiredApplications[i], pruneEnabled)
}
rolloutApps = append(rolloutApps, validApps[i])
rolloutApps = append(rolloutApps, desiredApplications[i])
}
return rolloutApps
}
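The change above keeps the automated sync policy block but switches it off via Enabled instead of nulling it out. A tiny sketch of the ptr.To pattern used for that; the Automated struct here is an illustrative stand-in for the real sync policy type:

package main

import (
	"fmt"

	"k8s.io/utils/ptr"
)

// Automated is a stand-in for an automated sync policy with an optional
// Enabled flag; disabling it in place preserves the rest of the policy.
type Automated struct {
	Prune   bool
	Enabled *bool
}

func main() {
	policy := &Automated{Prune: true}
	policy.Enabled = ptr.To(false) // Enabled is *bool, so we need a pointer to false
	fmt.Println(policy.Prune, *policy.Enabled) // true false
}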

File diff suppressed because it is too large

View File

@@ -86,28 +86,28 @@ func TestGenerateApplications(t *testing.T) {
}
t.Run(cc.name, func(t *testing.T) {
generatorMock := genmock.Generator{}
generatorMock := &genmock.Generator{}
generator := v1alpha1.ApplicationSetGenerator{
List: &v1alpha1.ListGenerator{},
}
generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
Return(cc.params, cc.generateParamsError)
generatorMock.On("GetTemplate", &generator).
generatorMock.EXPECT().GetTemplate(&generator).
Return(&v1alpha1.ApplicationSetTemplate{})
rendererMock := rendmock.Renderer{}
rendererMock := &rendmock.Renderer{}
var expectedApps []v1alpha1.Application
if cc.generateParamsError == nil {
for _, p := range cc.params {
if cc.rendererError != nil {
rendererMock.On("RenderTemplateParams", GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
rendererMock.EXPECT().RenderTemplateParams(GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
Return(nil, cc.rendererError)
} else {
rendererMock.On("RenderTemplateParams", GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
rendererMock.EXPECT().RenderTemplateParams(GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
Return(&app, nil)
expectedApps = append(expectedApps, app)
}
@@ -115,9 +115,9 @@ func TestGenerateApplications(t *testing.T) {
}
generators := map[string]generators.Generator{
"List": &generatorMock,
"List": generatorMock,
}
renderer := &rendererMock
renderer := rendererMock
got, reason, err := GenerateApplications(log.NewEntry(log.StandardLogger()), v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -200,26 +200,26 @@ func TestMergeTemplateApplications(t *testing.T) {
cc := c
t.Run(cc.name, func(t *testing.T) {
generatorMock := genmock.Generator{}
generatorMock := &genmock.Generator{}
generator := v1alpha1.ApplicationSetGenerator{
List: &v1alpha1.ListGenerator{},
}
generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
Return(cc.params, nil)
generatorMock.On("GetTemplate", &generator).
generatorMock.EXPECT().GetTemplate(&generator).
Return(&cc.overrideTemplate)
rendererMock := rendmock.Renderer{}
rendererMock := &rendmock.Renderer{}
rendererMock.On("RenderTemplateParams", GetTempApplication(cc.expectedMerged), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), cc.params[0], false, []string(nil)).
rendererMock.EXPECT().RenderTemplateParams(GetTempApplication(cc.expectedMerged), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), cc.params[0], false, []string(nil)).
Return(&cc.expectedApps[0], nil)
generators := map[string]generators.Generator{
"List": &generatorMock,
"List": generatorMock,
}
renderer := &rendererMock
renderer := rendererMock
got, _, _ := GenerateApplications(log.NewEntry(log.StandardLogger()), v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -312,19 +312,19 @@ func TestGenerateAppsUsingPullRequestGenerator(t *testing.T) {
},
} {
t.Run(cases.name, func(t *testing.T) {
generatorMock := genmock.Generator{}
generatorMock := &genmock.Generator{}
generator := v1alpha1.ApplicationSetGenerator{
PullRequest: &v1alpha1.PullRequestGenerator{},
}
generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
Return(cases.params, nil)
generatorMock.On("GetTemplate", &generator).
Return(&cases.template, nil)
generatorMock.EXPECT().GetTemplate(&generator).
Return(&cases.template)
generators := map[string]generators.Generator{
"PullRequest": &generatorMock,
"PullRequest": generatorMock,
}
renderer := &utils.Render{}

View File

@@ -223,7 +223,7 @@ func TestTransForm(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
testGenerators := map[string]Generator{
"Clusters": getMockClusterGenerator(),
"Clusters": getMockClusterGenerator(t.Context()),
}
applicationSetInfo := argov1alpha1.ApplicationSet{
@@ -260,7 +260,7 @@ func emptyTemplate() argov1alpha1.ApplicationSetTemplate {
}
}
func getMockClusterGenerator() Generator {
func getMockClusterGenerator(ctx context.Context) Generator {
clusters := []crtclient.Object{
&corev1.Secret{
TypeMeta: metav1.TypeMeta{
@@ -342,19 +342,19 @@ func getMockClusterGenerator() Generator {
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
fakeClient := fake.NewClientBuilder().WithObjects(clusters...).Build()
return NewClusterGenerator(context.Background(), fakeClient, appClientset, "namespace")
return NewClusterGenerator(ctx, fakeClient, appClientset, "namespace")
}
func getMockGitGenerator() Generator {
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "namespace")
argoCDServiceMock := &mocks.Repos{}
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
gitGenerator := NewGitGenerator(argoCDServiceMock, "namespace")
return gitGenerator
}
func TestGetRelevantGenerators(t *testing.T) {
testGenerators := map[string]Generator{
"Clusters": getMockClusterGenerator(),
"Clusters": getMockClusterGenerator(t.Context()),
"Git": getMockGitGenerator(),
}

View File

@@ -320,11 +320,11 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -357,8 +357,6 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -623,11 +621,11 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -660,8 +658,6 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -1000,11 +996,11 @@ cluster:
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1036,8 +1032,6 @@ cluster:
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -1331,7 +1325,7 @@ env: testing
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock := mocks.NewRepos(t)
// IMPORTANT: we try to get the files from the repo server that match the patterns
// If we find that those files also satisfy the exclude pattern, we remove them from the map
@@ -1339,18 +1333,16 @@ env: testing
// With the below mock setup, we make sure that if the GetFiles() function gets called
// for an include or exclude pattern, it should always return the includeFiles or excludeFiles.
for _, pattern := range testCaseCopy.excludePattern {
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.excludeFiles, testCaseCopy.repoPathsError)
}
for _, pattern := range testCaseCopy.includePattern {
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.includeFiles, testCaseCopy.repoPathsError)
}
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1382,8 +1374,6 @@ env: testing
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -1672,7 +1662,7 @@ env: testing
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock := mocks.NewRepos(t)
// IMPORTANT: we try to get the files from the repo server that match the patterns
// If we find that those files also satisfy the exclude pattern, we remove them from the map
@@ -1680,18 +1670,16 @@ env: testing
// With the below mock setup, we make sure that if the GetFiles() function gets called
// for an include or exclude pattern, it should always return the includeFiles or excludeFiles.
for _, pattern := range testCaseCopy.excludePattern {
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.excludeFiles, testCaseCopy.repoPathsError)
}
for _, pattern := range testCaseCopy.includePattern {
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.includeFiles, testCaseCopy.repoPathsError)
}
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1723,8 +1711,6 @@ env: testing
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -1908,25 +1894,23 @@ func TestGitGeneratorParamsFromFilesWithExcludeOptionGoTemplate(t *testing.T) {
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock := mocks.NewRepos(t)
// IMPORTANT: we try to get the files from the repo server that match the patterns
// If we find that those files also satisfy the exclude pattern, we remove them from the map
// This is generally done by the g.repos.GetFiles() function.
// With the below mock setup, we make sure that if the GetFiles() function gets called
// for an include or exclude pattern, it should always return the includeFiles or excludeFiles.
for _, pattern := range testCaseCopy.excludePattern {
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.excludeFiles, testCaseCopy.repoPathsError)
}
for _, pattern := range testCaseCopy.includePattern {
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.includeFiles, testCaseCopy.repoPathsError)
}
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1958,8 +1942,6 @@ func TestGitGeneratorParamsFromFilesWithExcludeOptionGoTemplate(t *testing.T) {
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -2279,11 +2261,11 @@ cluster:
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -2315,8 +2297,6 @@ cluster:
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -2482,7 +2462,7 @@ func TestGitGenerator_GenerateParams(t *testing.T) {
},
}
for _, testCase := range cases {
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock := mocks.NewRepos(t)
if testCase.callGetDirectories {
var project any
@@ -2492,9 +2472,9 @@ func TestGitGenerator_GenerateParams(t *testing.T) {
project = mock.Anything
}
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, project, mock.Anything, mock.Anything).Return(testCase.repoApps, testCase.repoPathsError)
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, project, mock.Anything, mock.Anything).Return(testCase.repoApps, testCase.repoPathsError)
}
gitGenerator := NewGitGenerator(&argoCDServiceMock, "argocd")
gitGenerator := NewGitGenerator(argoCDServiceMock, "argocd")
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
@@ -2510,7 +2490,5 @@ func TestGitGenerator_GenerateParams(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, testCase.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
}
}

View File

@@ -12,12 +12,12 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"github.com/argoproj/argo-cd/v3/applicationset/services/mocks"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
generatorsMock "github.com/argoproj/argo-cd/v3/applicationset/generators/mocks"
servicesMocks "github.com/argoproj/argo-cd/v3/applicationset/services/mocks"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
@@ -136,7 +136,7 @@ func TestMatrixGenerate(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorMock{}
genMock := &generatorsMock.Generator{}
appSet := &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -149,7 +149,7 @@ func TestMatrixGenerate(t *testing.T) {
Git: g.Git,
List: g.List,
}
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
{
"path": "app1",
"path.basename": "app1",
@@ -162,7 +162,7 @@ func TestMatrixGenerate(t *testing.T) {
},
}, nil)
genMock.On("GetTemplate", &gitGeneratorSpec).
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
@@ -343,7 +343,7 @@ func TestMatrixGenerateGoTemplate(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorMock{}
genMock := &generatorsMock.Generator{}
appSet := &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -358,7 +358,7 @@ func TestMatrixGenerateGoTemplate(t *testing.T) {
Git: g.Git,
List: g.List,
}
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
{
"path": map[string]string{
"path": "app1",
@@ -375,7 +375,7 @@ func TestMatrixGenerateGoTemplate(t *testing.T) {
},
}, nil)
genMock.On("GetTemplate", &gitGeneratorSpec).
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
@@ -507,7 +507,7 @@ func TestMatrixGetRequeueAfter(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
mock := &generatorMock{}
mock := &generatorsMock.Generator{}
for _, g := range testCaseCopy.baseGenerators {
gitGeneratorSpec := v1alpha1.ApplicationSetGenerator{
@@ -517,7 +517,7 @@ func TestMatrixGetRequeueAfter(t *testing.T) {
SCMProvider: g.SCMProvider,
ClusterDecisionResource: g.ClusterDecisionResource,
}
mock.On("GetRequeueAfter", &gitGeneratorSpec).Return(testCaseCopy.gitGetRequeueAfter, nil)
mock.EXPECT().GetRequeueAfter(&gitGeneratorSpec).Return(testCaseCopy.gitGetRequeueAfter)
}
matrixGenerator := NewMatrixGenerator(
@@ -634,7 +634,7 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorMock{}
genMock := &generatorsMock.Generator{}
appSet := &v1alpha1.ApplicationSet{}
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
@@ -650,7 +650,7 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
Git: g.Git,
Clusters: g.Clusters,
}
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
{
"path": "examples/git-generator-files-discovery/cluster-config/dev/config.json",
"path.basename": "dev",
@@ -662,7 +662,7 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
"path.basenameNormalized": "prod",
},
}, nil)
genMock.On("GetTemplate", &gitGeneratorSpec).
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
matrixGenerator := NewMatrixGenerator(
@@ -813,7 +813,7 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorMock{}
genMock := &generatorsMock.Generator{}
appSet := &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
GoTemplate: true,
@@ -833,7 +833,7 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
Git: g.Git,
Clusters: g.Clusters,
}
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
{
"path": map[string]string{
"path": "examples/git-generator-files-discovery/cluster-config/dev/config.json",
@@ -849,7 +849,7 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
},
},
}, nil)
genMock.On("GetTemplate", &gitGeneratorSpec).
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
matrixGenerator := NewMatrixGenerator(
@@ -969,7 +969,7 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorMock{}
genMock := &generatorsMock.Generator{}
appSet := &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -984,7 +984,7 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
Git: g.Git,
List: g.List,
}
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{{
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{{
"foo": map[string]any{
"bar": []any{
map[string]any{
@@ -1009,7 +1009,7 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
},
},
}}, nil)
genMock.On("GetTemplate", &gitGeneratorSpec).
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
@@ -1037,28 +1037,6 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
}
}
type generatorMock struct {
mock.Mock
}
func (g *generatorMock) GetTemplate(appSetGenerator *v1alpha1.ApplicationSetGenerator) *v1alpha1.ApplicationSetTemplate {
args := g.Called(appSetGenerator)
return args.Get(0).(*v1alpha1.ApplicationSetTemplate)
}
func (g *generatorMock) GenerateParams(appSetGenerator *v1alpha1.ApplicationSetGenerator, appSet *v1alpha1.ApplicationSet, _ client.Client) ([]map[string]any, error) {
args := g.Called(appSetGenerator, appSet)
return args.Get(0).([]map[string]any), args.Error(1)
}
func (g *generatorMock) GetRequeueAfter(appSetGenerator *v1alpha1.ApplicationSetGenerator) time.Duration {
args := g.Called(appSetGenerator)
return args.Get(0).(time.Duration)
}
func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
// Given a matrix generator over a list generator and a git files generator, the nested git files generator should
// be treated as a files generator, and it should produce parameters.
@@ -1072,11 +1050,11 @@ func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
// Now instead of checking for nil, we check whether the field is a non-empty slice. This test prevents a regression
// of that bug.
listGeneratorMock := &generatorMock{}
listGeneratorMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).Return([]map[string]any{
listGeneratorMock := &generatorsMock.Generator{}
listGeneratorMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).Return([]map[string]any{
{"some": "value"},
}, nil)
listGeneratorMock.On("GetTemplate", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator")).Return(&v1alpha1.ApplicationSetTemplate{})
listGeneratorMock.EXPECT().GetTemplate(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator")).Return(&v1alpha1.ApplicationSetTemplate{})
gitGeneratorSpec := &v1alpha1.GitGenerator{
RepoURL: "https://git.example.com",
@@ -1085,10 +1063,10 @@ func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
},
}
repoServiceMock := &mocks.Repos{}
repoServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(map[string][]byte{
repoServiceMock := &servicesMocks.Repos{}
repoServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(map[string][]byte{
"some/path.json": []byte("test: content"),
}, nil)
}, nil).Maybe()
gitGenerator := NewGitGenerator(repoServiceMock, "")
matrixGenerator := NewMatrixGenerator(map[string]Generator{

View File

@@ -174,7 +174,7 @@ func TestApplicationsetCollector(t *testing.T) {
appsetCollector := newAppsetCollector(utils.NewAppsetLister(client), collectedLabels, filter)
metrics.Registry.MustRegister(appsetCollector)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
handler := promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{})
@@ -216,7 +216,7 @@ func TestObserveReconcile(t *testing.T) {
appsetMetrics := NewApplicationsetMetrics(utils.NewAppsetLister(client), collectedLabels, filter)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
handler := promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{})
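The metrics tests above move from http.NewRequest and http.Get to context-aware requests tied to the test's lifetime. A minimal sketch of that pattern; fetchStatus and its caller are illustrative, and t.Context() requires Go 1.24 or newer:

package sketch

import (
	"net/http"
	"testing"
)

// fetchStatus builds the request with the test's context so the call is
// cancelled automatically when the test (and its cleanup) finishes.
func fetchStatus(t *testing.T, url string) int {
	t.Helper()
	req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, url, http.NoBody)
	if err != nil {
		t.Fatalf("failed to create request: %v", err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()
	return resp.StatusCode
}

Typical usage would be fetchStatus(t, server.URL) against an httptest server, mirroring the scrape calls in the tests above.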

View File

@@ -97,7 +97,9 @@ func TestGitHubMetrics_CollectorApproach_Success(t *testing.T) {
),
}
req, _ := http.NewRequest(http.MethodGet, ts.URL+URL, http.NoBody)
ctx := t.Context()
req, _ := http.NewRequestWithContext(ctx, http.MethodGet, ts.URL+URL, http.NoBody)
resp, err := client.Do(req)
if err != nil {
t.Fatalf("unexpected error: %v", err)
@@ -109,7 +111,11 @@ func TestGitHubMetrics_CollectorApproach_Success(t *testing.T) {
server := httptest.NewServer(handler)
defer server.Close()
resp, err = http.Get(server.URL)
req, err = http.NewRequestWithContext(ctx, http.MethodGet, server.URL, http.NoBody)
if err != nil {
t.Fatalf("failed to create request: %v", err)
}
resp, err = http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("failed to scrape metrics: %v", err)
}
@@ -151,15 +157,23 @@ func TestGitHubMetrics_CollectorApproach_NoRateLimitMetricsOnNilResponse(t *test
metrics: metrics,
},
}
ctx := t.Context()
req, _ := http.NewRequest(http.MethodGet, URL, http.NoBody)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, URL, http.NoBody)
if err != nil {
t.Fatalf("failed to create request: %v", err)
}
_, _ = client.Do(req)
handler := promhttp.HandlerFor(reg, promhttp.HandlerOpts{})
server := httptest.NewServer(handler)
defer server.Close()
resp, err := http.Get(server.URL)
req, err = http.NewRequestWithContext(ctx, http.MethodGet, server.URL, http.NoBody)
if err != nil {
t.Fatalf("failed to create request: %v", err)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("failed to scrape metrics: %v", err)
}

View File

@@ -1,7 +1,6 @@
package pull_request
import (
"context"
"errors"
"testing"
@@ -13,6 +12,7 @@ import (
"github.com/stretchr/testify/require"
azureMock "github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/azure_devops/git/mocks"
"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
)
func createBoolPtr(x bool) *bool {
@@ -35,29 +35,6 @@ func createUniqueNamePtr(x string) *string {
return &x
}
type AzureClientFactoryMock struct {
mock *mock.Mock
}
func (m *AzureClientFactoryMock) GetClient(ctx context.Context) (git.Client, error) {
args := m.mock.Called(ctx)
var client git.Client
c := args.Get(0)
if c != nil {
client = c.(git.Client)
}
var err error
if len(args) > 1 {
if e, ok := args.Get(1).(error); ok {
err = e
}
}
return client, err
}
func TestListPullRequest(t *testing.T) {
teamProject := "myorg_project"
repoName := "myorg_project_repo"
@@ -91,10 +68,10 @@ func TestListPullRequest(t *testing.T) {
SearchCriteria: &git.GitPullRequestSearchCriteria{},
}
gitClientMock := azureMock.Client{}
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
gitClientMock.On("GetPullRequestsByProject", ctx, args).Return(&pullRequestMock, nil)
gitClientMock := &azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
gitClientMock.EXPECT().GetPullRequestsByProject(mock.Anything, args).Return(&pullRequestMock, nil)
provider := AzureDevOpsService{
clientFactory: clientFactoryMock,
@@ -245,12 +222,12 @@ func TestAzureDevOpsListReturnsRepositoryNotFoundError(t *testing.T) {
pullRequestMock := []git.GitPullRequest{}
gitClientMock := azureMock.Client{}
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
gitClientMock := &azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
// Mock the GetPullRequestsByProject to return an error containing "404"
gitClientMock.On("GetPullRequestsByProject", t.Context(), args).Return(&pullRequestMock,
gitClientMock.EXPECT().GetPullRequestsByProject(mock.Anything, args).Return(&pullRequestMock,
errors.New("The following project does not exist:"))
provider := AzureDevOpsService{

View File

@@ -12,7 +12,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/aws_codecommit/mocks"
"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
@@ -177,9 +177,8 @@ func TestAWSCodeCommitListRepos(t *testing.T) {
if repo.getRepositoryNilMetadata {
repoMetadata = nil
}
codeCommitClient.
On("GetRepositoryWithContext", ctx, &codecommit.GetRepositoryInput{RepositoryName: aws.String(repo.name)}).
Return(&codecommit.GetRepositoryOutput{RepositoryMetadata: repoMetadata}, repo.getRepositoryError)
codeCommitClient.EXPECT().GetRepositoryWithContext(mock.Anything, &codecommit.GetRepositoryInput{RepositoryName: aws.String(repo.name)}).
Return(&codecommit.GetRepositoryOutput{RepositoryMetadata: repoMetadata}, repo.getRepositoryError).Maybe()
codecommitRepoNameIdPairs = append(codecommitRepoNameIdPairs, &codecommit.RepositoryNameIdPair{
RepositoryId: aws.String(repo.id),
RepositoryName: aws.String(repo.name),
@@ -193,20 +192,18 @@ func TestAWSCodeCommitListRepos(t *testing.T) {
}
if testCase.expectListAtCodeCommit {
codeCommitClient.
On("ListRepositoriesWithContext", ctx, &codecommit.ListRepositoriesInput{}).
codeCommitClient.EXPECT().ListRepositoriesWithContext(mock.Anything, &codecommit.ListRepositoriesInput{}).
Return(&codecommit.ListRepositoriesOutput{
Repositories: codecommitRepoNameIdPairs,
}, testCase.listRepositoryError)
}, testCase.listRepositoryError).Maybe()
} else {
taggingClient.
On("GetResourcesWithContext", ctx, mock.MatchedBy(equalIgnoringTagFilterOrder(&resourcegroupstaggingapi.GetResourcesInput{
TagFilters: testCase.expectTagFilters,
ResourceTypeFilters: aws.StringSlice([]string{resourceTypeCodeCommitRepository}),
}))).
taggingClient.EXPECT().GetResourcesWithContext(mock.Anything, mock.MatchedBy(equalIgnoringTagFilterOrder(&resourcegroupstaggingapi.GetResourcesInput{
TagFilters: testCase.expectTagFilters,
ResourceTypeFilters: aws.StringSlice([]string{resourceTypeCodeCommitRepository}),
}))).
Return(&resourcegroupstaggingapi.GetResourcesOutput{
ResourceTagMappingList: resourceTaggings,
}, testCase.listRepositoryError)
}, testCase.listRepositoryError).Maybe()
}
provider := &AWSCodeCommitProvider{
@@ -350,13 +347,12 @@ func TestAWSCodeCommitRepoHasPath(t *testing.T) {
taggingClient := mocks.NewAWSTaggingClient(t)
ctx := t.Context()
if testCase.expectedGetFolderPath != "" {
codeCommitClient.
On("GetFolderWithContext", ctx, &codecommit.GetFolderInput{
CommitSpecifier: aws.String(branch),
FolderPath: aws.String(testCase.expectedGetFolderPath),
RepositoryName: aws.String(repoName),
}).
Return(testCase.getFolderOutput, testCase.getFolderError)
codeCommitClient.EXPECT().GetFolderWithContext(mock.Anything, &codecommit.GetFolderInput{
CommitSpecifier: aws.String(branch),
FolderPath: aws.String(testCase.expectedGetFolderPath),
RepositoryName: aws.String(repoName),
}).
Return(testCase.getFolderOutput, testCase.getFolderError).Maybe()
}
provider := &AWSCodeCommitProvider{
codeCommitClient: codeCommitClient,
@@ -423,18 +419,16 @@ func TestAWSCodeCommitGetBranches(t *testing.T) {
taggingClient := mocks.NewAWSTaggingClient(t)
ctx := t.Context()
if testCase.allBranches {
codeCommitClient.
On("ListBranchesWithContext", ctx, &codecommit.ListBranchesInput{
RepositoryName: aws.String(name),
}).
Return(&codecommit.ListBranchesOutput{Branches: aws.StringSlice(testCase.branches)}, testCase.apiError)
codeCommitClient.EXPECT().ListBranchesWithContext(mock.Anything, &codecommit.ListBranchesInput{
RepositoryName: aws.String(name),
}).
Return(&codecommit.ListBranchesOutput{Branches: aws.StringSlice(testCase.branches)}, testCase.apiError).Maybe()
} else {
codeCommitClient.
On("GetRepositoryWithContext", ctx, &codecommit.GetRepositoryInput{RepositoryName: aws.String(name)}).
codeCommitClient.EXPECT().GetRepositoryWithContext(mock.Anything, &codecommit.GetRepositoryInput{RepositoryName: aws.String(name)}).
Return(&codecommit.GetRepositoryOutput{RepositoryMetadata: &codecommit.RepositoryMetadata{
AccountId: aws.String(organization),
DefaultBranch: aws.String(defaultBranch),
}}, testCase.apiError)
}}, testCase.apiError).Maybe()
}
provider := &AWSCodeCommitProvider{
codeCommitClient: codeCommitClient,

View File

@@ -1,7 +1,6 @@
package scm_provider
import (
"context"
"errors"
"fmt"
"testing"
@@ -16,6 +15,7 @@ import (
azureGit "github.com/microsoft/azure-devops-go-api/azuredevops/v7/git"
azureMock "github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/azure_devops/git/mocks"
"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
)
func s(input string) *string {
@@ -78,13 +78,13 @@ func TestAzureDevopsRepoHasPath(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
gitClientMock := azureMock.Client{}
gitClientMock := &azureMock.Client{}
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, testCase.clientError)
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, testCase.clientError)
repoId := &uuid
gitClientMock.On("GetItem", ctx, azureGit.GetItemArgs{Project: &teamProject, Path: &path, VersionDescriptor: &azureGit.GitVersionDescriptor{Version: &branchName}, RepositoryId: repoId}).Return(nil, testCase.azureDevopsError)
gitClientMock.EXPECT().GetItem(mock.Anything, azureGit.GetItemArgs{Project: &teamProject, Path: &path, VersionDescriptor: &azureGit.GitVersionDescriptor{Version: &branchName}, RepositoryId: repoId}).Return(nil, testCase.azureDevopsError)
provider := AzureDevOpsProvider{organization: organization, teamProject: teamProject, clientFactory: clientFactoryMock}
@@ -143,12 +143,12 @@ func TestGetDefaultBranchOnDisabledRepo(t *testing.T) {
t.Run(testCase.name, func(t *testing.T) {
uuid := uuid.New().String()
gitClientMock := azureMock.Client{}
gitClientMock := azureMock.NewClient(t)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
gitClientMock.On("GetBranch", ctx, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(nil, testCase.azureDevOpsError)
gitClientMock.EXPECT().GetBranch(mock.Anything, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(nil, testCase.azureDevOpsError)
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}
@@ -162,8 +162,6 @@ func TestGetDefaultBranchOnDisabledRepo(t *testing.T) {
}
assert.Empty(t, branches)
gitClientMock.AssertExpectations(t)
})
}
}
@@ -202,12 +200,12 @@ func TestGetAllBranchesOnDisabledRepo(t *testing.T) {
t.Run(testCase.name, func(t *testing.T) {
uuid := uuid.New().String()
gitClientMock := azureMock.Client{}
gitClientMock := azureMock.NewClient(t)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
gitClientMock.On("GetBranches", ctx, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(nil, testCase.azureDevOpsError)
gitClientMock.EXPECT().GetBranches(mock.Anything, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(nil, testCase.azureDevOpsError)
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}
@@ -221,8 +219,6 @@ func TestGetAllBranchesOnDisabledRepo(t *testing.T) {
}
assert.Empty(t, branches)
gitClientMock.AssertExpectations(t)
})
}
}
@@ -241,12 +237,12 @@ func TestAzureDevOpsGetDefaultBranchStripsRefsName(t *testing.T) {
branchReturn := &azureGit.GitBranchStats{Name: &strippedBranchName, Commit: &azureGit.GitCommitRef{CommitId: s("abc123233223")}}
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}
gitClientMock := azureMock.Client{}
gitClientMock := &azureMock.Client{}
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
gitClientMock.On("GetBranch", ctx, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &strippedBranchName}).Return(branchReturn, nil)
gitClientMock.EXPECT().GetBranch(mock.Anything, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &strippedBranchName}).Return(branchReturn, nil).Maybe()
provider := AzureDevOpsProvider{organization: organization, teamProject: teamProject, clientFactory: clientFactoryMock, allBranches: false}
branches, err := provider.GetBranches(ctx, repo)
@@ -295,12 +291,12 @@ func TestAzureDevOpsGetBranchesDefultBranchOnly(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
gitClientMock := azureMock.Client{}
gitClientMock := &azureMock.Client{}
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, testCase.clientError)
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, testCase.clientError)
gitClientMock.On("GetBranch", ctx, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(testCase.expectedBranch, testCase.getBranchesAPIError)
gitClientMock.EXPECT().GetBranch(mock.Anything, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(testCase.expectedBranch, testCase.getBranchesAPIError)
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}
@@ -379,12 +375,12 @@ func TestAzureDevopsGetBranches(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
gitClientMock := azureMock.Client{}
gitClientMock := &azureMock.Client{}
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, testCase.clientError)
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, testCase.clientError)
gitClientMock.On("GetBranches", ctx, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(testCase.expectedBranches, testCase.getBranchesAPIError)
gitClientMock.EXPECT().GetBranches(mock.Anything, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(testCase.expectedBranches, testCase.getBranchesAPIError)
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid}
@@ -427,7 +423,6 @@ func TestGetAzureDevopsRepositories(t *testing.T) {
teamProject := "myorg_project"
uuid := uuid.New()
ctx := t.Context()
repoId := &uuid
@@ -477,15 +472,15 @@ func TestGetAzureDevopsRepositories(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
gitClientMock := azureMock.Client{}
gitClientMock.On("GetRepositories", ctx, azureGit.GetRepositoriesArgs{Project: s(teamProject)}).Return(&testCase.repositories, testCase.getRepositoriesError)
gitClientMock := azureMock.NewClient(t)
gitClientMock.EXPECT().GetRepositories(mock.Anything, azureGit.GetRepositoriesArgs{Project: s(teamProject)}).Return(&testCase.repositories, testCase.getRepositoriesError)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock)
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
provider := AzureDevOpsProvider{organization: organization, teamProject: teamProject, clientFactory: clientFactoryMock}
repositories, err := provider.ListRepos(ctx, "https")
repositories, err := provider.ListRepos(t.Context(), "https")
if testCase.getRepositoriesError != nil {
require.Error(t, err, "Expected an error from test case %v", testCase.name)
@@ -497,31 +492,6 @@ func TestGetAzureDevopsRepositories(t *testing.T) {
assert.NotEmpty(t, repositories)
assert.Len(t, repositories, testCase.expectedNumberOfRepos)
}
gitClientMock.AssertExpectations(t)
})
}
}
type AzureClientFactoryMock struct {
mock *mock.Mock
}
func (m *AzureClientFactoryMock) GetClient(ctx context.Context) (azureGit.Client, error) {
args := m.mock.Called(ctx)
var client azureGit.Client
c := args.Get(0)
if c != nil {
client = c.(azureGit.Client)
}
var err error
if len(args) > 1 {
if e, ok := args.Get(1).(error); ok {
err = e
}
}
return client, err
}

View File

@@ -30,7 +30,7 @@ func (c *ExtendedClient) GetContents(repo *Repository, path string) (bool, error
urlStr += fmt.Sprintf("/repositories/%s/%s/src/%s/%s?format=meta", c.owner, repo.Repository, repo.SHA, path)
body := strings.NewReader("")
req, err := http.NewRequest(http.MethodGet, urlStr, body)
req, err := http.NewRequestWithContext(context.Background(), http.MethodGet, urlStr, body)
if err != nil {
return false, err
}

View File

@@ -0,0 +1,101 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
package mocks
import (
"context"
"github.com/microsoft/azure-devops-go-api/azuredevops/v7/git"
mock "github.com/stretchr/testify/mock"
)
// NewAzureDevOpsClientFactory creates a new instance of AzureDevOpsClientFactory. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewAzureDevOpsClientFactory(t interface {
mock.TestingT
Cleanup(func())
}) *AzureDevOpsClientFactory {
mock := &AzureDevOpsClientFactory{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
// AzureDevOpsClientFactory is an autogenerated mock type for the AzureDevOpsClientFactory type
type AzureDevOpsClientFactory struct {
mock.Mock
}
type AzureDevOpsClientFactory_Expecter struct {
mock *mock.Mock
}
func (_m *AzureDevOpsClientFactory) EXPECT() *AzureDevOpsClientFactory_Expecter {
return &AzureDevOpsClientFactory_Expecter{mock: &_m.Mock}
}
// GetClient provides a mock function for the type AzureDevOpsClientFactory
func (_mock *AzureDevOpsClientFactory) GetClient(ctx context.Context) (git.Client, error) {
ret := _mock.Called(ctx)
if len(ret) == 0 {
panic("no return value specified for GetClient")
}
var r0 git.Client
var r1 error
if returnFunc, ok := ret.Get(0).(func(context.Context) (git.Client, error)); ok {
return returnFunc(ctx)
}
if returnFunc, ok := ret.Get(0).(func(context.Context) git.Client); ok {
r0 = returnFunc(ctx)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(git.Client)
}
}
if returnFunc, ok := ret.Get(1).(func(context.Context) error); ok {
r1 = returnFunc(ctx)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// AzureDevOpsClientFactory_GetClient_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetClient'
type AzureDevOpsClientFactory_GetClient_Call struct {
*mock.Call
}
// GetClient is a helper method to define mock.On call
// - ctx context.Context
func (_e *AzureDevOpsClientFactory_Expecter) GetClient(ctx interface{}) *AzureDevOpsClientFactory_GetClient_Call {
return &AzureDevOpsClientFactory_GetClient_Call{Call: _e.mock.On("GetClient", ctx)}
}
func (_c *AzureDevOpsClientFactory_GetClient_Call) Run(run func(ctx context.Context)) *AzureDevOpsClientFactory_GetClient_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
run(
arg0,
)
})
return _c
}
func (_c *AzureDevOpsClientFactory_GetClient_Call) Return(client git.Client, err error) *AzureDevOpsClientFactory_GetClient_Call {
_c.Call.Return(client, err)
return _c
}
func (_c *AzureDevOpsClientFactory_GetClient_Call) RunAndReturn(run func(ctx context.Context) (git.Client, error)) *AzureDevOpsClientFactory_GetClient_Call {
_c.Call.Return(run)
return _c
}
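
A minimal sketch, not part of this compare, showing how a test could consume the generated factory mock above; the test name and assertion style are illustrative only, and the real tests in this diff wire the mock through AzureDevOpsProvider instead:

func TestGeneratedFactoryMock(t *testing.T) {
	// NewAzureDevOpsClientFactory registers AssertExpectations via t.Cleanup,
	// so no explicit AssertExpectations(t) call is needed at the end of the test.
	gitClientMock := azureMock.NewClient(t)
	factory := mocks.NewAzureDevOpsClientFactory(t)
	factory.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)

	client, err := factory.GetClient(t.Context())
	require.NoError(t, err)
	assert.Same(t, gitClientMock, client)
}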

View File

@@ -1,7 +1,6 @@
package services
import (
"context"
"crypto/tls"
"net/http"
"testing"
@@ -12,7 +11,7 @@ import (
)
func TestSetupBitbucketClient(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
cfg := &bitbucketv1.Configuration{}
// Act

View File

@@ -26,10 +26,14 @@ import (
"github.com/go-playground/webhooks/v6/github"
"github.com/go-playground/webhooks/v6/gitlab"
log "github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/v3/util/guard"
)
const payloadQueueSize = 50000
const panicMsgAppSet = "panic while processing applicationset-controller webhook event"
type WebhookHandler struct {
sync.WaitGroup // for testing
github *github.Webhook
@@ -102,6 +106,7 @@ func NewWebhookHandler(webhookParallelism int, argocdSettingsMgr *argosettings.S
}
func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
compLog := log.WithField("component", "applicationset-webhook")
for i := 0; i < webhookParallelism; i++ {
h.Add(1)
go func() {
@@ -111,7 +116,7 @@ func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
if !ok {
return
}
h.HandleEvent(payload)
guard.RecoverAndLog(func() { h.HandleEvent(payload) }, compLog, panicMsgAppSet)
}
}()
}
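
The hunk above routes each webhook payload through guard.RecoverAndLog so a panicking handler no longer kills the worker goroutine. The helper lives in util/guard and is not shown in this compare; a minimal sketch of what such a helper could look like, assuming the three-argument signature used above (the real implementation may differ):

package guard

import (
	"runtime/debug"

	log "github.com/sirupsen/logrus"
)

// RecoverAndLog runs fn and converts a panic into an error log entry instead
// of letting it propagate and crash the calling goroutine.
func RecoverAndLog(fn func(), logCtx *log.Entry, panicMsg string) {
	defer func() {
		if r := recover(); r != nil {
			logCtx.WithField("stack", string(debug.Stack())).Errorf("%s: %v", panicMsg, r)
		}
	}()
	fn()
}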

assets/swagger.json (generated, 25 changed lines)
View File

@@ -5619,6 +5619,9 @@
"statusBadgeRootUrl": {
"type": "string"
},
"syncWithReplaceAllowed": {
"type": "boolean"
},
"trackingMethod": {
"type": "string"
},
@@ -7077,7 +7080,7 @@
},
"status": {
"type": "string",
"title": "Status contains the AppSet's perceived status of the managed Application resource: (Waiting, Pending, Progressing, Healthy)"
"title": "Status contains the AppSet's perceived status of the managed Application resource"
},
"step": {
"type": "string",
@@ -7322,6 +7325,11 @@
"items": {
"$ref": "#/definitions/applicationv1alpha1ResourceStatus"
}
},
"resourcesCount": {
"description": "ResourcesCount is the total number of resources managed by this application set. The count may be higher than actual number of items in the Resources field when\nthe number of managed resources exceeds the limit imposed by the controller (to avoid making the status field too large).",
"type": "integer",
"format": "int64"
}
}
},
@@ -8249,7 +8257,7 @@
}
},
"v1alpha1ConfigMapKeyRef": {
"description": "Utility struct for a reference to a configmap key.",
"description": "ConfigMapKeyRef struct for a reference to a configmap key.",
"type": "object",
"properties": {
"configMapName": {
@@ -9331,7 +9339,7 @@
}
},
"v1alpha1PullRequestGeneratorGithub": {
"description": "PullRequestGenerator defines connection info specific to GitHub.",
"description": "PullRequestGeneratorGithub defines connection info specific to GitHub.",
"type": "object",
"properties": {
"api": {
@@ -9472,6 +9480,11 @@
"connectionState": {
"$ref": "#/definitions/v1alpha1ConnectionState"
},
"depth": {
"description": "Depth specifies the depth for shallow clones. A value of 0 or omitting the field indicates a full clone.",
"type": "integer",
"format": "int64"
},
"enableLfs": {
"description": "EnableLFS specifies whether git-lfs support should be enabled for this repo. Only valid for Git repositories.",
"type": "boolean"
@@ -10335,7 +10348,7 @@
}
},
"v1alpha1SecretRef": {
"description": "Utility struct for a reference to a secret key.",
"description": "SecretRef struct for a reference to a secret key.",
"type": "object",
"properties": {
"key": {
@@ -10572,8 +10585,8 @@
"type": "string"
},
"targetBranch": {
"type": "string",
"title": "TargetBranch is the branch to which hydrated manifests should be committed"
"description": "TargetBranch is the branch from which hydrated manifests will be synced.\nIf HydrateTo is not set, this is also the branch to which hydrated manifests are committed.",
"type": "string"
}
}
},

View File

@@ -14,6 +14,7 @@ import (
"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
logutils "github.com/argoproj/argo-cd/v3/util/log"
"github.com/argoproj/argo-cd/v3/util/profile"
"github.com/argoproj/argo-cd/v3/util/tls"
"github.com/argoproj/argo-cd/v3/applicationset/controllers"
@@ -79,6 +80,7 @@ func NewCommand() *cobra.Command {
enableScmProviders bool
webhookParallelism int
tokenRefStrictMode bool
maxResourcesStatusCount int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@@ -169,6 +171,15 @@ func NewCommand() *cobra.Command {
log.Error(err, "unable to start manager")
os.Exit(1)
}
pprofMux := http.NewServeMux()
profile.RegisterProfiler(pprofMux)
// This looks a little strange. Eg, not using ctrl.Options PprofBindAddress and then adding the pprof mux
// to the metrics server. However, it allows for the controller to dynamically expose the pprof endpoints
// and use the existing metrics server, the same pattern that the application controller and api-server follow.
if err = mgr.AddMetricsServerExtraHandler("/debug/pprof/", pprofMux); err != nil {
log.Error(err, "failed to register pprof handlers")
}
dynamicClient, err := dynamic.NewForConfig(mgr.GetConfig())
errors.CheckError(err)
k8sClient, err := kubernetes.NewForConfig(mgr.GetConfig())
@@ -231,6 +242,7 @@ func NewCommand() *cobra.Command {
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -275,6 +287,7 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&metricsAplicationsetLabels, "metrics-applicationset-labels", []string{}, "List of Application labels that will be added to the argocd_applicationset_labels metric")
command.Flags().BoolVar(&enableGitHubAPIMetrics, "enable-github-api-metrics", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_GITHUB_API_METRICS", false), "Enable GitHub API metrics for generators that use the GitHub API")
command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 0, 0, math.MaxInt), "Max number of resources stored in appset status.")
return &command
}
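
The pprof change above reuses the controller-runtime metrics listener instead of opening a separate PprofBindAddress. A minimal sketch of that pattern using the standard net/http/pprof handlers; the actual profile.RegisterProfiler helper may register a different set of endpoints:

import (
	"net/http"
	"net/http/pprof"

	ctrl "sigs.k8s.io/controller-runtime"
)

// registerPprof is a hypothetical helper mirroring the hunk above: it builds a
// pprof mux and serves it from the manager's existing metrics endpoint.
func registerPprof(mgr ctrl.Manager) error {
	mux := http.NewServeMux()
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
	return mgr.AddMetricsServerExtraHandler("/debug/pprof/", mux)
}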

View File

@@ -38,7 +38,7 @@ func NewCommand() *cobra.Command {
Use: "argocd-commit-server",
Short: "Run Argo CD Commit Server",
Long: "Argo CD Commit Server is an internal service which commits and pushes hydrated manifests to git. This command runs Commit Server in the foreground.",
RunE: func(_ *cobra.Command, _ []string) error {
RunE: func(cmd *cobra.Command, _ []string) error {
vers := common.GetVersion()
vers.LogStartupInfo(
"Argo CD Commit Server",
@@ -59,8 +59,10 @@ func NewCommand() *cobra.Command {
server := commitserver.NewServer(askPassServer, metricsServer)
grpc := server.CreateGRPC()
ctx := cmd.Context()
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
lc := &net.ListenConfig{}
listener, err := lc.Listen(ctx, "tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
errors.CheckError(err)
healthz.ServeHealthCheck(http.DefaultServeMux, func(r *http.Request) error {

View File

@@ -115,7 +115,7 @@ func NewRunDexCommand() *cobra.Command {
err = os.WriteFile("/tmp/dex.yaml", dexCfgBytes, 0o644)
errors.CheckError(err)
log.Debug(redactor(string(dexCfgBytes)))
cmd = exec.Command("dex", "serve", "/tmp/dex.yaml")
cmd = exec.CommandContext(ctx, "dex", "serve", "/tmp/dex.yaml")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
err = cmd.Start()

View File

@@ -169,7 +169,8 @@ func NewCommand() *cobra.Command {
}
grpc := server.CreateGRPC()
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
lc := &net.ListenConfig{}
listener, err := lc.Listen(ctx, "tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
errors.CheckError(err)
healthz.ServeHealthCheck(http.DefaultServeMux, func(r *http.Request) error {

View File

@@ -105,26 +105,26 @@ func TestGetReconcileResults_Refresh(t *testing.T) {
appClientset := appfake.NewSimpleClientset(app, proj)
deployment := test.NewDeployment()
kubeClientset := kubefake.NewClientset(deployment, argoCM, argoCDSecret)
clusterCache := clustermocks.ClusterCache{}
clusterCache.On("IsNamespaced", mock.Anything).Return(true, nil)
clusterCache.On("GetGVKParser", mock.Anything).Return(nil)
repoServerClient := mocks.RepoServerServiceClient{}
repoServerClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(&argocdclient.ManifestResponse{
clusterCache := &clustermocks.ClusterCache{}
clusterCache.EXPECT().IsNamespaced(mock.Anything).Return(true, nil)
clusterCache.EXPECT().GetGVKParser().Return(nil)
repoServerClient := &mocks.RepoServerServiceClient{}
repoServerClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(&argocdclient.ManifestResponse{
Manifests: []string{test.DeploymentManifest},
}, nil)
repoServerClientset := mocks.Clientset{RepoServerServiceClient: &repoServerClient}
liveStateCache := cachemocks.LiveStateCache{}
liveStateCache.On("GetManagedLiveObjs", mock.Anything, mock.Anything, mock.Anything).Return(map[kube.ResourceKey]*unstructured.Unstructured{
repoServerClientset := &mocks.Clientset{RepoServerServiceClient: repoServerClient}
liveStateCache := &cachemocks.LiveStateCache{}
liveStateCache.EXPECT().GetManagedLiveObjs(mock.Anything, mock.Anything, mock.Anything).Return(map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(deployment): deployment,
}, nil)
liveStateCache.On("GetVersionsInfo", mock.Anything).Return("v1.2.3", nil, nil)
liveStateCache.On("Init").Return(nil, nil)
liveStateCache.On("GetClusterCache", mock.Anything).Return(&clusterCache, nil)
liveStateCache.On("IsNamespaced", mock.Anything, mock.Anything).Return(true, nil)
liveStateCache.EXPECT().GetVersionsInfo(mock.Anything).Return("v1.2.3", nil, nil)
liveStateCache.EXPECT().Init().Return(nil)
liveStateCache.EXPECT().GetClusterCache(mock.Anything).Return(clusterCache, nil)
liveStateCache.EXPECT().IsNamespaced(mock.Anything, mock.Anything).Return(true, nil)
result, err := reconcileApplications(ctx, kubeClientset, appClientset, "default", &repoServerClientset, "",
result, err := reconcileApplications(ctx, kubeClientset, appClientset, "default", repoServerClientset, "",
func(_ db.ArgoDB, _ cache.SharedIndexInformer, _ *settings.SettingsManager, _ *metrics.MetricsServer) statecache.LiveStateCache {
return &liveStateCache
return liveStateCache
},
false,
normalizers.IgnoreNormalizerOpts{},

View File

@@ -30,11 +30,12 @@ func NewNotificationsCommand() *cobra.Command {
)
var argocdService service.Service
toolsCommand := cmd.NewToolsCommand(
"notifications",
"argocd admin notifications",
applications,
settings.GetFactorySettingsForCLI(argocdService, "argocd-notifications-secret", "argocd-notifications-cm", false),
settings.GetFactorySettingsForCLI(func() service.Service { return argocdService }, "argocd-notifications-secret", "argocd-notifications-cm", false),
func(clientConfig clientcmd.ClientConfig) {
k8sCfg, err := clientConfig.ClientConfig()
if err != nil {

View File

@@ -40,9 +40,7 @@ func captureStdout(callback func()) (string, error) {
return string(data), err
}
func newSettingsManager(data map[string]string) *settings.SettingsManager {
ctx := context.Background()
func newSettingsManager(ctx context.Context, data map[string]string) *settings.SettingsManager {
clientset := fake.NewClientset(&corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
@@ -69,8 +67,8 @@ type fakeCmdContext struct {
mgr *settings.SettingsManager
}
func newCmdContext(data map[string]string) *fakeCmdContext {
return &fakeCmdContext{mgr: newSettingsManager(data)}
func newCmdContext(ctx context.Context, data map[string]string) *fakeCmdContext {
return &fakeCmdContext{mgr: newSettingsManager(ctx, data)}
}
func (ctx *fakeCmdContext) createSettingsManager(context.Context) (*settings.SettingsManager, error) {
@@ -182,7 +180,7 @@ admissionregistration.k8s.io/MutatingWebhookConfiguration:
if !assert.True(t, ok) {
return
}
summary, err := validator(newSettingsManager(tc.data))
summary, err := validator(newSettingsManager(t.Context(), tc.data))
if tc.containsSummary != "" {
require.NoError(t, err)
assert.Contains(t, summary, tc.containsSummary)
@@ -249,7 +247,7 @@ func tempFile(content string) (string, io.Closer, error) {
}
func TestValidateSettingsCommand_NoErrors(t *testing.T) {
cmd := NewValidateSettingsCommand(newCmdContext(map[string]string{}))
cmd := NewValidateSettingsCommand(newCmdContext(t.Context(), map[string]string{}))
out, err := captureStdout(func() {
err := cmd.Execute()
require.NoError(t, err)
@@ -267,7 +265,7 @@ func TestResourceOverrideIgnoreDifferences(t *testing.T) {
defer utilio.Close(closer)
t.Run("NoOverridesConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{}))
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{}))
out, err := captureStdout(func() {
cmd.SetArgs([]string{"ignore-differences", f})
err := cmd.Execute()
@@ -278,7 +276,7 @@ func TestResourceOverrideIgnoreDifferences(t *testing.T) {
})
t.Run("DataIgnored", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `apps/Deployment:
ignoreDifferences: |
jsonPointers:
@@ -300,7 +298,7 @@ func TestResourceOverrideHealth(t *testing.T) {
defer utilio.Close(closer)
t.Run("NoHealthAssessment", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `example.com/ExampleResource: {}`,
}))
out, err := captureStdout(func() {
@@ -313,7 +311,7 @@ func TestResourceOverrideHealth(t *testing.T) {
})
t.Run("HealthAssessmentConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `example.com/ExampleResource:
health.lua: |
return { status = "Progressing" }
@@ -329,7 +327,7 @@ func TestResourceOverrideHealth(t *testing.T) {
})
t.Run("HealthAssessmentConfiguredWildcard", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `example.com/*:
health.lua: |
return { status = "Progressing" }
@@ -355,7 +353,7 @@ func TestResourceOverrideAction(t *testing.T) {
defer utilio.Close(closer)
t.Run("NoActions", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `apps/Deployment: {}`,
}))
out, err := captureStdout(func() {
@@ -368,7 +366,7 @@ func TestResourceOverrideAction(t *testing.T) {
})
t.Run("OldStyleActionConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `apps/Deployment:
actions: |
discovery.lua: |
@@ -404,7 +402,7 @@ resume false
})
t.Run("NewStyleActionConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
"resource.customizations": `batch/CronJob:
actions: |
discovery.lua: |

View File

@@ -1794,6 +1794,7 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
repo string
appNamespace string
cluster string
path string
)
command := &cobra.Command{
Use: "list",
@@ -1829,6 +1830,9 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
if cluster != "" {
appList = argo.FilterByCluster(appList, cluster)
}
if path != "" {
appList = argo.FilterByPath(appList, path)
}
switch output {
case "yaml", "json":
@@ -1849,6 +1853,7 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().StringVarP(&repo, "repo", "r", "", "List apps by source repo URL")
command.Flags().StringVarP(&appNamespace, "app-namespace", "N", "", "Only list applications in namespace")
command.Flags().StringVarP(&cluster, "cluster", "c", "", "List apps by cluster name or url")
command.Flags().StringVarP(&path, "path", "P", "", "List apps by path")
return command
}

View File

@@ -57,7 +57,7 @@ func NewApplicationGetResourceCommand(clientOpts *argocdclient.ClientOptions) *c
# Get a specific resource with managed fields, Pod my-app-pod, in 'my-app' by name in wide format
argocd app get-resource my-app --kind Pod --resource-name my-app-pod --show-managed-fields
# Get the the details of a specific field in a resource in 'my-app' in the wide format
# Get the details of a specific field in a resource in 'my-app' in the wide format
argocd app get-resource my-app --kind Pod --filter-fields status.podIP
# Get the details of multiple specific fields in a specific resource in 'my-app' in the wide format

View File

@@ -0,0 +1,103 @@
package commands
import (
"regexp"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
appsv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
// splitColumns splits a line produced by tabwriter using runs of 2 or more spaces
// as delimiters to obtain logical columns regardless of alignment padding.
func splitColumns(line string) []string {
re := regexp.MustCompile(`\s{2,}`)
return re.Split(strings.TrimSpace(line), -1)
}
func Test_printKeyTable_Empty(t *testing.T) {
out, err := captureOutput(func() error {
printKeyTable([]appsv1.GnuPGPublicKey{})
return nil
})
require.NoError(t, err)
lines := strings.Split(strings.TrimRight(out, "\n"), "\n")
require.Len(t, lines, 1)
headerCols := splitColumns(lines[0])
assert.Equal(t, []string{"KEYID", "TYPE", "IDENTITY"}, headerCols)
}
func Test_printKeyTable_Single(t *testing.T) {
keys := []appsv1.GnuPGPublicKey{
{
KeyID: "ABCDEF1234567890",
SubType: "rsa4096",
Owner: "Alice <alice@example.com>",
},
}
out, err := captureOutput(func() error {
printKeyTable(keys)
return nil
})
require.NoError(t, err)
lines := strings.Split(strings.TrimRight(out, "\n"), "\n")
require.Len(t, lines, 2)
// Header
assert.Equal(t, []string{"KEYID", "TYPE", "IDENTITY"}, splitColumns(lines[0]))
// Row
row := splitColumns(lines[1])
require.Len(t, row, 3)
assert.Equal(t, "ABCDEF1234567890", row[0])
assert.Equal(t, "RSA4096", row[1]) // subtype upper-cased
assert.Equal(t, "Alice <alice@example.com>", row[2])
}
func Test_printKeyTable_Multiple(t *testing.T) {
keys := []appsv1.GnuPGPublicKey{
{
KeyID: "ABCD",
SubType: "ed25519",
Owner: "User One <one@example.com>",
},
{
KeyID: "0123456789ABCDEF",
SubType: "rsa2048",
Owner: "Second User <second@example.com>",
},
}
out, err := captureOutput(func() error {
printKeyTable(keys)
return nil
})
require.NoError(t, err)
lines := strings.Split(strings.TrimRight(out, "\n"), "\n")
require.Len(t, lines, 3)
// Header
assert.Equal(t, []string{"KEYID", "TYPE", "IDENTITY"}, splitColumns(lines[0]))
// First row
row1 := splitColumns(lines[1])
require.Len(t, row1, 3)
assert.Equal(t, "ABCD", row1[0])
assert.Equal(t, "ED25519", row1[1])
assert.Equal(t, "User One <one@example.com>", row1[2])
// Second row
row2 := splitColumns(lines[2])
require.Len(t, row2, 3)
assert.Equal(t, "0123456789ABCDEF", row2[0])
assert.Equal(t, "RSA2048", row2[1])
assert.Equal(t, "Second User <second@example.com>", row2[2])
}

View File

@@ -213,7 +213,8 @@ func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOpti
}
if port == nil || *port == 0 {
addr := *address + ":0"
ln, err := net.Listen("tcp", addr)
lc := &net.ListenConfig{}
ln, err := lc.Listen(ctx, "tcp", addr)
if err != nil {
return nil, fmt.Errorf("failed to listen on %q: %w", addr, err)
}
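
Several hunks in this compare (commit server, repo server, CMP server, and this local-server helper) replace net.Listen with net.ListenConfig.Listen so listener setup honors the caller's context. A minimal, self-contained sketch of the pattern; the host and port parameters are illustrative:

import (
	"context"
	"fmt"
	"net"
)

func listenTCP(ctx context.Context, host string, port int) (net.Listener, error) {
	// ListenConfig.Listen is cancellable via ctx, unlike the package-level net.Listen.
	lc := &net.ListenConfig{}
	return lc.Listen(ctx, "tcp", fmt.Sprintf("%s:%d", host, port))
}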

View File

@@ -3,9 +3,11 @@ package commands
import (
"errors"
"fmt"
"maps"
"os"
"os/exec"
"path/filepath"
"slices"
"strings"
"github.com/argoproj/argo-cd/v3/util/cli"
@@ -13,18 +15,17 @@ import (
log "github.com/sirupsen/logrus"
)
// DefaultPluginHandler implements the PluginHandler interface
const prefix = "argocd"
type DefaultPluginHandler struct {
ValidPrefixes []string
lookPath func(file string) (string, error)
run func(cmd *exec.Cmd) error
lookPath func(file string) (string, error)
run func(cmd *exec.Cmd) error
}
// NewDefaultPluginHandler instantiates the DefaultPluginHandler
func NewDefaultPluginHandler(validPrefixes []string) *DefaultPluginHandler {
// NewDefaultPluginHandler instantiates a DefaultPluginHandler
func NewDefaultPluginHandler() *DefaultPluginHandler {
return &DefaultPluginHandler{
ValidPrefixes: validPrefixes,
lookPath: exec.LookPath,
lookPath: exec.LookPath,
run: func(cmd *exec.Cmd) error {
return cmd.Run()
},
@@ -32,8 +33,8 @@ func NewDefaultPluginHandler(validPrefixes []string) *DefaultPluginHandler {
}
// HandleCommandExecutionError processes the error returned from executing the command.
// It handles both standard Argo CD commands and plugin commands. We don't require to return
// error but we are doing it to cover various test scenarios.
// It handles both standard Argo CD commands and plugin commands. We don't require returning
// an error, but we are doing it to cover various test scenarios.
func (h *DefaultPluginHandler) HandleCommandExecutionError(err error, isArgocdCLI bool, args []string) error {
// the log level needs to be setup manually here since the initConfig()
// set by the cobra.OnInitialize() was never executed because cmd.Execute()
@@ -85,27 +86,24 @@ func (h *DefaultPluginHandler) handlePluginCommand(cmdArgs []string) (string, er
// lookForPlugin looks for a plugin in the PATH that starts with argocd prefix
func (h *DefaultPluginHandler) lookForPlugin(filename string) (string, bool) {
for _, prefix := range h.ValidPrefixes {
pluginName := fmt.Sprintf("%s-%s", prefix, filename)
path, err := h.lookPath(pluginName)
if err != nil {
// error if a plugin is found in a relative path
if errors.Is(err, exec.ErrDot) {
log.Errorf("Plugin '%s' found in relative path: %v", pluginName, err)
} else {
log.Warnf("error looking for plugin '%s': %v", pluginName, err)
}
continue
pluginName := fmt.Sprintf("%s-%s", prefix, filename)
path, err := h.lookPath(pluginName)
if err != nil {
// error if a plugin is found in a relative path
if errors.Is(err, exec.ErrDot) {
log.Errorf("Plugin '%s' found in relative path: %v", pluginName, err)
} else {
log.Warnf("error looking for plugin '%s': %v", pluginName, err)
}
if path == "" {
return "", false
}
return path, true
return "", false
}
return "", false
if path == "" {
return "", false
}
return path, true
}
// executePlugin implements PluginHandler and executes a plugin found
@@ -141,3 +139,56 @@ func (h *DefaultPluginHandler) command(name string, arg ...string) *exec.Cmd {
}
return cmd
}
// ListAvailablePlugins returns a list of plugin names that are available in the user's PATH
// for tab completion. It searches for executables matching the ValidPrefixes pattern.
func (h *DefaultPluginHandler) ListAvailablePlugins() []string {
// Track seen plugin names to avoid duplicates
seenPlugins := make(map[string]bool)
// Search through each directory in PATH
for _, dir := range filepath.SplitList(os.Getenv("PATH")) {
// Skip empty directories
if dir == "" {
continue
}
// Read directory contents
entries, err := os.ReadDir(dir)
if err != nil {
continue
}
// Check each file in the directory
for _, entry := range entries {
// Skip directories and non-executable files
if entry.IsDir() {
continue
}
name := entry.Name()
// Check if the file is a valid argocd plugin
pluginPrefix := prefix + "-"
if strings.HasPrefix(name, pluginPrefix) {
// Extract the plugin command name (everything after the prefix)
pluginName := strings.TrimPrefix(name, pluginPrefix)
// Skip empty plugin names or names with path separators
if pluginName == "" || strings.Contains(pluginName, "/") || strings.Contains(pluginName, "\\") {
continue
}
// Check if the file is executable
if info, err := entry.Info(); err == nil {
// On Unix-like systems, check executable bit
if info.Mode()&0o111 != 0 {
seenPlugins[pluginName] = true
}
}
}
}
}
return slices.Sorted(maps.Keys(seenPlugins))
}

View File

@@ -28,7 +28,7 @@ func setupPluginPath(t *testing.T) {
func TestNormalCommandWithPlugin(t *testing.T) {
setupPluginPath(t)
_ = NewDefaultPluginHandler([]string{"argocd"})
_ = NewDefaultPluginHandler()
args := []string{"argocd", "version", "--short", "--client"}
buf := new(bytes.Buffer)
cmd := NewVersionCmd(&argocdclient.ClientOptions{}, nil)
@@ -47,7 +47,7 @@ func TestNormalCommandWithPlugin(t *testing.T) {
func TestPluginExecution(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
pluginHandler := NewDefaultPluginHandler()
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -101,7 +101,7 @@ func TestPluginExecution(t *testing.T) {
func TestNormalCommandError(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
pluginHandler := NewDefaultPluginHandler()
args := []string{"argocd", "version", "--non-existent-flag"}
cmd := NewVersionCmd(&argocdclient.ClientOptions{}, nil)
cmd.SetArgs(args[1:])
@@ -118,7 +118,7 @@ func TestNormalCommandError(t *testing.T) {
// TestUnknownCommandNoPlugin tests the scenario when the command is neither a normal ArgoCD command
// nor exists as a plugin
func TestUnknownCommandNoPlugin(t *testing.T) {
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
pluginHandler := NewDefaultPluginHandler()
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -137,7 +137,7 @@ func TestUnknownCommandNoPlugin(t *testing.T) {
func TestPluginNoExecutePermission(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
pluginHandler := NewDefaultPluginHandler()
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -156,7 +156,7 @@ func TestPluginNoExecutePermission(t *testing.T) {
func TestPluginExecutionError(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
pluginHandler := NewDefaultPluginHandler()
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -187,7 +187,7 @@ func TestPluginInRelativePathIgnored(t *testing.T) {
t.Setenv("PATH", os.Getenv("PATH")+string(os.PathListSeparator)+relativePath)
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
pluginHandler := NewDefaultPluginHandler()
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -206,7 +206,7 @@ func TestPluginInRelativePathIgnored(t *testing.T) {
func TestPluginFlagParsing(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
pluginHandler := NewDefaultPluginHandler()
tests := []struct {
name string
@@ -255,7 +255,7 @@ func TestPluginFlagParsing(t *testing.T) {
func TestPluginStatusCode(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
pluginHandler := NewDefaultPluginHandler()
tests := []struct {
name string
@@ -309,3 +309,76 @@ func TestPluginStatusCode(t *testing.T) {
})
}
}
// TestListAvailablePlugins tests the plugin discovery functionality for tab completion
func TestListAvailablePlugins(t *testing.T) {
setupPluginPath(t)
tests := []struct {
name string
validPrefix []string
expected []string
}{
{
name: "Standard argocd prefix finds plugins",
expected: []string{"demo_plugin", "error", "foo", "status-code-plugin", "test-plugin", "version"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
pluginHandler := NewDefaultPluginHandler()
plugins := pluginHandler.ListAvailablePlugins()
assert.Equal(t, tt.expected, plugins)
})
}
}
// TestListAvailablePluginsEmptyPath tests plugin discovery when PATH is empty
func TestListAvailablePluginsEmptyPath(t *testing.T) {
// Set empty PATH
t.Setenv("PATH", "")
pluginHandler := NewDefaultPluginHandler()
plugins := pluginHandler.ListAvailablePlugins()
assert.Empty(t, plugins, "Should return empty list when PATH is empty")
}
// TestListAvailablePluginsNonExecutableFiles tests that non-executable files are ignored
func TestListAvailablePluginsNonExecutableFiles(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
plugins := pluginHandler.ListAvailablePlugins()
// Should not include 'no-permission' since it's not executable
assert.NotContains(t, plugins, "no-permission")
}
// TestListAvailablePluginsDeduplication tests that duplicate plugins from different PATH dirs are handled
func TestListAvailablePluginsDeduplication(t *testing.T) {
// Create two temporary directories with the same plugin
dir1 := t.TempDir()
dir2 := t.TempDir()
// Create the same plugin in both directories
plugin1 := filepath.Join(dir1, "argocd-duplicate")
plugin2 := filepath.Join(dir2, "argocd-duplicate")
err := os.WriteFile(plugin1, []byte("#!/bin/bash\necho 'plugin1'\n"), 0o755)
require.NoError(t, err)
err = os.WriteFile(plugin2, []byte("#!/bin/bash\necho 'plugin2'\n"), 0o755)
require.NoError(t, err)
// Set PATH to include both directories
testPath := dir1 + string(os.PathListSeparator) + dir2
t.Setenv("PATH", testPath)
pluginHandler := NewDefaultPluginHandler()
plugins := pluginHandler.ListAvailablePlugins()
assert.Equal(t, []string{"duplicate"}, plugins)
}

View File

@@ -191,6 +191,7 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
repoOpts.Repo.NoProxy = repoOpts.NoProxy
repoOpts.Repo.ForceHttpBasicAuth = repoOpts.ForceHttpBasicAuth
repoOpts.Repo.UseAzureWorkloadIdentity = repoOpts.UseAzureWorkloadIdentity
repoOpts.Repo.Depth = repoOpts.Depth
if repoOpts.Repo.Type == "helm" && repoOpts.Repo.Name == "" {
errors.Fatal(errors.ErrorGeneric, "Must specify --name for repos of type 'helm'")

View File

@@ -44,6 +44,11 @@ func NewCommand() *cobra.Command {
},
DisableAutoGenTag: true,
SilenceUsage: true,
ValidArgsFunction: func(_ *cobra.Command, _ []string, _ string) ([]string, cobra.ShellCompDirective) {
// Return available plugin commands for tab completion
plugins := NewDefaultPluginHandler().ListAvailablePlugins()
return plugins, cobra.ShellCompDirectiveNoFileComp
},
}
command.AddCommand(NewCompletionCommand())

View File

@@ -24,7 +24,7 @@ func extractHealthStatusAndReason(node v1alpha1.ResourceNode) (healthStatus heal
healthStatus = node.Health.Status
reason = node.Health.Message
}
return
return healthStatus, reason
}
func treeViewAppGet(prefix string, uidToNodeMap map[string]v1alpha1.ResourceNode, parentToChildMap map[string][]string, parent v1alpha1.ResourceNode, mapNodeNameToResourceState map[string]*resourceState, w *tabwriter.Writer) {

View File

@@ -83,12 +83,11 @@ func main() {
}
err := command.Execute()
// if the err is non-nil, try to look for various scenarios
// if an error is present, try to look for various scenarios
// such as if the error is from the execution of a normal argocd command,
// unknown command error or any other.
if err != nil {
pluginHandler := cli.NewDefaultPluginHandler([]string{"argocd"})
pluginErr := pluginHandler.HandleCommandExecutionError(err, isArgocdCLI, os.Args)
pluginErr := cli.NewDefaultPluginHandler().HandleCommandExecutionError(err, isArgocdCLI, os.Args)
if pluginErr != nil {
var exitErr *exec.ExitError
if errors.As(pluginErr, &exitErr) {

View File

@@ -136,9 +136,9 @@ func AddAppFlags(command *cobra.Command, opts *AppOptions) {
command.Flags().StringVar(&opts.project, "project", "", "Application project name")
command.Flags().StringVar(&opts.syncPolicy, "sync-policy", "", "Set the sync policy (one of: manual (aliases of manual: none), automated (aliases of automated: auto, automatic))")
command.Flags().StringArrayVar(&opts.syncOptions, "sync-option", []string{}, "Add or remove a sync option, e.g add `Prune=false`. Remove using `!` prefix, e.g. `!Prune=false`")
command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning when sync is automated")
command.Flags().BoolVar(&opts.selfHeal, "self-heal", false, "Set self healing when sync is automated")
command.Flags().BoolVar(&opts.allowEmpty, "allow-empty", false, "Set allow zero live resources when sync is automated")
command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning for automated sync policy")
command.Flags().BoolVar(&opts.selfHeal, "self-heal", false, "Set self healing for automated sync policy")
command.Flags().BoolVar(&opts.allowEmpty, "allow-empty", false, "Set allow zero live resources for automated sync policy")
command.Flags().StringVar(&opts.namePrefix, "nameprefix", "", "Kustomize nameprefix")
command.Flags().StringVar(&opts.nameSuffix, "namesuffix", "", "Kustomize namesuffix")
command.Flags().StringVar(&opts.kustomizeVersion, "kustomize-version", "", "Kustomize version")
@@ -284,25 +284,26 @@ func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
spec.SyncPolicy.Retry.Refresh = appOpts.retryRefresh
}
})
if flags.Changed("auto-prune") {
if spec.SyncPolicy == nil || !spec.SyncPolicy.IsAutomatedSyncEnabled() {
log.Fatal("Cannot set --auto-prune: application not configured with automatic sync")
}
spec.SyncPolicy.Automated.Prune = appOpts.autoPrune
}
if flags.Changed("self-heal") {
if spec.SyncPolicy == nil || !spec.SyncPolicy.IsAutomatedSyncEnabled() {
log.Fatal("Cannot set --self-heal: application not configured with automatic sync")
}
spec.SyncPolicy.Automated.SelfHeal = appOpts.selfHeal
}
if flags.Changed("allow-empty") {
if spec.SyncPolicy == nil || !spec.SyncPolicy.IsAutomatedSyncEnabled() {
log.Fatal("Cannot set --allow-empty: application not configured with automatic sync")
}
spec.SyncPolicy.Automated.AllowEmpty = appOpts.allowEmpty
}
if flags.Changed("auto-prune") || flags.Changed("self-heal") || flags.Changed("allow-empty") {
if spec.SyncPolicy == nil {
spec.SyncPolicy = &argoappv1.SyncPolicy{}
}
if spec.SyncPolicy.Automated == nil {
disabled := false
spec.SyncPolicy.Automated = &argoappv1.SyncPolicyAutomated{Enabled: &disabled}
}
if flags.Changed("auto-prune") {
spec.SyncPolicy.Automated.Prune = appOpts.autoPrune
}
if flags.Changed("self-heal") {
spec.SyncPolicy.Automated.SelfHeal = appOpts.selfHeal
}
if flags.Changed("allow-empty") {
spec.SyncPolicy.Automated.AllowEmpty = appOpts.allowEmpty
}
}
return visited
}
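
With the refactor above, setting --auto-prune, --self-heal, or --allow-empty on an application that has no automated sync policy no longer aborts with a fatal error; the flags now materialize an Automated block whose Enabled pointer is explicitly false. A minimal sketch of the spec produced by --auto-prune=true under that reading (mirroring the tests in the next file):

disabled := false
spec.SyncPolicy = &argoappv1.SyncPolicy{
	Automated: &argoappv1.SyncPolicyAutomated{
		Enabled: &disabled, // automation stays off until explicitly enabled
		Prune:   true,      // from --auto-prune=true
	},
}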

View File

@@ -267,6 +267,47 @@ func Test_setAppSpecOptions(t *testing.T) {
require.NoError(t, f.SetFlag("sync-option", "!a=1"))
assert.Nil(t, f.spec.SyncPolicy)
})
t.Run("AutoPruneFlag", func(t *testing.T) {
f := newAppOptionsFixture()
// syncPolicy is nil (automated.enabled = false)
require.NoError(t, f.SetFlag("auto-prune", "true"))
require.NotNil(t, f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.True(t, f.spec.SyncPolicy.Automated.Prune)
// automated.enabled = true
*f.spec.SyncPolicy.Automated.Enabled = true
require.NoError(t, f.SetFlag("auto-prune", "false"))
assert.True(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, f.spec.SyncPolicy.Automated.Prune)
})
t.Run("SelfHealFlag", func(t *testing.T) {
f := newAppOptionsFixture()
require.NoError(t, f.SetFlag("self-heal", "true"))
require.NotNil(t, f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.True(t, f.spec.SyncPolicy.Automated.SelfHeal)
*f.spec.SyncPolicy.Automated.Enabled = true
require.NoError(t, f.SetFlag("self-heal", "false"))
assert.True(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, f.spec.SyncPolicy.Automated.SelfHeal)
})
t.Run("AllowEmptyFlag", func(t *testing.T) {
f := newAppOptionsFixture()
require.NoError(t, f.SetFlag("allow-empty", "true"))
require.NotNil(t, f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.True(t, f.spec.SyncPolicy.Automated.AllowEmpty)
*f.spec.SyncPolicy.Automated.Enabled = true
require.NoError(t, f.SetFlag("allow-empty", "false"))
assert.True(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, f.spec.SyncPolicy.Automated.AllowEmpty)
})
t.Run("RetryLimit", func(t *testing.T) {
require.NoError(t, f.SetFlag("sync-retry-limit", "5"))
assert.Equal(t, int64(5), f.spec.SyncPolicy.Retry.Limit)

View File

@@ -27,6 +27,7 @@ type RepoOptions struct {
GCPServiceAccountKeyPath string
ForceHttpBasicAuth bool //nolint:revive //FIXME(var-naming)
UseAzureWorkloadIdentity bool
Depth int64
}
func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
@@ -53,4 +54,5 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
command.Flags().BoolVar(&opts.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force use of basic auth when connecting repository via HTTP")
command.Flags().BoolVar(&opts.UseAzureWorkloadIdentity, "use-azure-workload-identity", false, "whether to use azure workload identity for authentication")
command.Flags().BoolVar(&opts.InsecureOCIForceHTTP, "insecure-oci-force-http", false, "Use http when accessing an OCI repository")
command.Flags().Int64Var(&opts.Depth, "depth", 0, "Specify a custom depth for git clone operations. Unless specified, a full clone is performed using the depth of 0")
}

View File

@@ -1,6 +1,7 @@
package cmpserver
import (
"context"
"fmt"
"net"
"os"
@@ -85,7 +86,8 @@ func (a *ArgoCDCMPServer) Run() {
// Listen on the socket address
_ = os.Remove(config.Address())
listener, err := net.Listen("unix", config.Address())
lc := &net.ListenConfig{}
listener, err := lc.Listen(context.Background(), "unix", config.Address())
errors.CheckError(err)
log.Infof("argocd-cmp-server %s serving on %s", common.GetVersion(), listener.Addr())

View File

@@ -1,101 +1,14 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
package mocks
import (
"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
"github.com/argoproj/argo-cd/v3/util/io"
mock "github.com/stretchr/testify/mock"
utilio "github.com/argoproj/argo-cd/v3/util/io"
)
// NewClientset creates a new instance of Clientset. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewClientset(t interface {
mock.TestingT
Cleanup(func())
}) *Clientset {
mock := &Clientset{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
// Clientset is an autogenerated mock type for the Clientset type
type Clientset struct {
mock.Mock
CommitServiceClient apiclient.CommitServiceClient
}
type Clientset_Expecter struct {
mock *mock.Mock
}
func (_m *Clientset) EXPECT() *Clientset_Expecter {
return &Clientset_Expecter{mock: &_m.Mock}
}
// NewCommitServerClient provides a mock function for the type Clientset
func (_mock *Clientset) NewCommitServerClient() (io.Closer, apiclient.CommitServiceClient, error) {
ret := _mock.Called()
if len(ret) == 0 {
panic("no return value specified for NewCommitServerClient")
}
var r0 io.Closer
var r1 apiclient.CommitServiceClient
var r2 error
if returnFunc, ok := ret.Get(0).(func() (io.Closer, apiclient.CommitServiceClient, error)); ok {
return returnFunc()
}
if returnFunc, ok := ret.Get(0).(func() io.Closer); ok {
r0 = returnFunc()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(io.Closer)
}
}
if returnFunc, ok := ret.Get(1).(func() apiclient.CommitServiceClient); ok {
r1 = returnFunc()
} else {
if ret.Get(1) != nil {
r1 = ret.Get(1).(apiclient.CommitServiceClient)
}
}
if returnFunc, ok := ret.Get(2).(func() error); ok {
r2 = returnFunc()
} else {
r2 = ret.Error(2)
}
return r0, r1, r2
}
// Clientset_NewCommitServerClient_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'NewCommitServerClient'
type Clientset_NewCommitServerClient_Call struct {
*mock.Call
}
// NewCommitServerClient is a helper method to define mock.On call
func (_e *Clientset_Expecter) NewCommitServerClient() *Clientset_NewCommitServerClient_Call {
return &Clientset_NewCommitServerClient_Call{Call: _e.mock.On("NewCommitServerClient")}
}
func (_c *Clientset_NewCommitServerClient_Call) Run(run func()) *Clientset_NewCommitServerClient_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *Clientset_NewCommitServerClient_Call) Return(closer io.Closer, commitServiceClient apiclient.CommitServiceClient, err error) *Clientset_NewCommitServerClient_Call {
_c.Call.Return(closer, commitServiceClient, err)
return _c
}
func (_c *Clientset_NewCommitServerClient_Call) RunAndReturn(run func() (io.Closer, apiclient.CommitServiceClient, error)) *Clientset_NewCommitServerClient_Call {
_c.Call.Return(run)
return _c
func (c *Clientset) NewCommitServerClient() (utilio.Closer, apiclient.CommitServiceClient, error) {
return utilio.NopCloser, c.CommitServiceClient, nil
}

View File

@@ -229,7 +229,7 @@ func (s *Service) initGitClient(logCtx *log.Entry, r *apiclient.CommitHydratedMa
}
logCtx.Debugf("Fetching repo %s", r.Repo.Repo)
err = gitClient.Fetch("")
err = gitClient.Fetch("", 0)
if err != nil {
cleanupOrLog()
return nil, "", nil, fmt.Errorf("failed to clone repo: %w", err)

View File

@@ -82,7 +82,7 @@ func Test_CommitHydratedManifests(t *testing.T) {
t.Parallel()
service, mockRepoClientFactory := newServiceWithMocks(t)
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(nil, assert.AnError).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(nil, assert.AnError).Once()
_, err := service.CommitHydratedManifests(t.Context(), validRequest)
require.Error(t, err)
@@ -94,14 +94,14 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("it-worked!", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("it-worked!", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
resp, err := service.CommitHydratedManifests(t.Context(), validRequest)
require.NoError(t, err)
@@ -114,14 +114,14 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("root-and-blank-sha", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("root-and-blank-sha", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithRootAndBlank := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
@@ -161,15 +161,15 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("RemoveContents", []string{"apps/staging"}).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("subdir-path-sha", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().RemoveContents([]string{"apps/staging"}).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("subdir-path-sha", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithSubdirPath := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
@@ -201,15 +201,15 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("RemoveContents", []string{"apps/production", "apps/staging"}).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("mixed-paths-sha", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().RemoveContents([]string{"apps/production", "apps/staging"}).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("mixed-paths-sha", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithMixedPaths := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
@@ -257,14 +257,14 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("it-worked!", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("it-worked!", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithEmptyPaths := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{

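The test changes above replace stretchr/testify's string-based mock.On(...) setup with the typed expecter API that mockery can generate. A minimal sketch of the difference, assuming a mockery-generated mock with the expecter enabled (the NewMockClient and Fetch names are illustrative, not the actual argo-cd types):

// String-based style: the method name and argument list are untyped,
// so a typo or a wrong argument count only fails at runtime.
m := NewMockClient(t)
m.On("Fetch", mock.Anything).Return(nil).Once()

// Expecter style: EXPECT() returns a generated, typed helper, so the
// compiler checks the method name and arguments.
m.EXPECT().Fetch(mock.Anything).Return(nil).Once()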

@@ -229,6 +229,7 @@ const (
// AnnotationKeyAppSkipReconcile tells the Application to skip the Application controller reconcile.
// Skip reconcile when the value is "true" or any other string values that can be strconv.ParseBool() to be true.
AnnotationKeyAppSkipReconcile = "argocd.argoproj.io/skip-reconcile"
// LabelKeyComponentRepoServer is the label key to identify the component as repo-server
LabelKeyComponentRepoServer = "app.kubernetes.io/component"
// LabelValueComponentRepoServer is the label value for the repo-server component

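The comment above notes that reconciliation is skipped whenever the annotation value parses to true via strconv.ParseBool. A small sketch of that check, with everything except the annotation key and the ParseBool behaviour treated as illustrative:

// shouldSkipReconcile reports whether the annotations carry
// argocd.argoproj.io/skip-reconcile with a value that strconv.ParseBool
// accepts as true ("true", "True", "TRUE", "1", "t", "T").
func shouldSkipReconcile(annotations map[string]string) bool {
	v, ok := annotations["argocd.argoproj.io/skip-reconcile"]
	if !ok {
		return false
	}
	skip, err := strconv.ParseBool(v)
	return err == nil && skip
}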

@@ -39,6 +39,7 @@ import (
"k8s.io/client-go/informers"
informerv1 "k8s.io/client-go/informers/apps/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"k8s.io/utils/ptr"
@@ -466,7 +467,7 @@ func (ctrl *ApplicationController) handleObjectUpdated(managedByApp map[string]b
// Enforce application's permission for the source namespace
_, err = ctrl.getAppProj(app)
if err != nil {
logCtx.Errorf("Unable to determine project for app '%s': %v", app.QualifiedName(), err)
logCtx.WithError(err).Errorf("Unable to determine project for app")
continue
}
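The logging changes throughout this file follow a single pattern: the error and the identifying key move out of the format string and into structured logrus fields, so the message text stays constant and the values remain filterable. A minimal before/after sketch using only the logrus API already imported here:

// Before: the key and error are interpolated into the message.
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)

// After: the key and error become structured fields on the entry.
log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")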
@@ -902,12 +903,12 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
clusters, err := ctrl.db.ListClusters(ctx)
if err != nil {
log.Warnf("Cannot init sharding. Error while querying clusters list from database: %v", err)
log.WithError(err).Warn("Cannot init sharding. Error while querying clusters list from database")
} else {
appItems, err := ctrl.getAppList(metav1.ListOptions{})
if err != nil {
log.Warnf("Cannot init sharding. Error while querying application list from database: %v", err)
log.WithError(err).Warn("Cannot init sharding. Error while querying application list from database")
} else {
ctrl.clusterSharding.Init(clusters, appItems)
}
@@ -1000,29 +1001,29 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
appKey, shutdown := ctrl.appOperationQueue.Get()
if shutdown {
processNext = false
return
return processNext
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.WithField("appkey", appKey).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appOperationQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
return processNext
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
return processNext
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
}
app := origApp.DeepCopy()
logCtx := log.WithFields(applog.GetAppLogFields(app))
@@ -1041,8 +1042,8 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
// We cannot rely on informer since applications might be updated by both application controller and api server.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
if err != nil {
logCtx.Errorf("Failed to retrieve latest application state: %v", err)
return
logCtx.WithError(err).Error("Failed to retrieve latest application state")
return processNext
}
app = freshApp
}
@@ -1064,7 +1065,7 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
}
ts.AddCheckpoint("finalize_application_deletion_ms")
}
return
return processNext
}
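The other recurring change is returning the named result explicitly (return processNext) instead of a bare return. Both forms return the same value in Go; the explicit form simply makes the returned value visible at the return site. A small self-contained illustration (the function is invented for the example):

func processItem(queue []string) (processNext bool) {
	if len(queue) == 0 {
		processNext = false
		return processNext // same effect as a bare "return", but explicit
	}
	processNext = true
	return processNext
}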
func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processNext bool) {
@@ -1073,26 +1074,26 @@ func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processN
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.WithField("appkey", key).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appComparisonTypeRefreshQueue.Done(key)
}()
if shutdown {
processNext = false
return
return processNext
}
if parts := strings.Split(key, "/"); len(parts) != 3 {
log.Warnf("Unexpected key format in appComparisonTypeRefreshTypeQueue. Key should consists of namespace/name/comparisonType but got: %s", key)
log.WithField("appkey", key).Warn("Unexpected key format in appComparisonTypeRefreshTypeQueue. Key should consist of namespace/name/comparisonType")
} else {
compareWith, err := strconv.Atoi(parts[2])
if err != nil {
log.Warnf("Unable to parse comparison type: %v", err)
return
log.WithField("appkey", key).WithError(err).Warn("Unable to parse comparison type")
return processNext
}
ctrl.requestAppRefresh(ctrl.toAppQualifiedName(parts[1], parts[0]), CompareWith(compareWith).Pointer(), nil)
}
return
return processNext
}
func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool) {
@@ -1101,35 +1102,35 @@ func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool)
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.WithField("key", key).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.projectRefreshQueue.Done(key)
}()
if shutdown {
processNext = false
return
return processNext
}
obj, exists, err := ctrl.projInformer.GetIndexer().GetByKey(key)
if err != nil {
log.Errorf("Failed to get project '%s' from informer index: %+v", key, err)
return
log.WithField("key", key).WithError(err).Error("Failed to get project from informer index")
return processNext
}
if !exists {
// This happens after appproj was deleted, but the work queue still had an entry for it.
return
return processNext
}
origProj, ok := obj.(*appv1.AppProject)
if !ok {
log.Warnf("Key '%s' in index is not an appproject", key)
return
log.WithField("key", key).Warnf("Key in index is not an appproject")
return processNext
}
if origProj.DeletionTimestamp != nil && origProj.HasFinalizer() {
if err := ctrl.finalizeProjectDeletion(origProj.DeepCopy()); err != nil {
log.Warnf("Failed to finalize project deletion: %v", err)
log.WithError(err).Warn("Failed to finalize project deletion")
}
}
return
return processNext
}
func (ctrl *ApplicationController) finalizeProjectDeletion(proj *appv1.AppProject) error {
@@ -1194,7 +1195,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
if err != nil {
if !apierrors.IsNotFound(err) {
logCtx.Errorf("Unable to get refreshed application info prior deleting resources: %v", err)
logCtx.WithError(err).Error("Unable to get refreshed application info prior deleting resources")
}
return nil
}
@@ -1204,7 +1205,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
}
destCluster, err := argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
if err != nil {
logCtx.Warnf("Unable to get destination cluster: %v", err)
logCtx.WithError(err).Warn("Unable to get destination cluster")
app.UnSetCascadedDeletion()
app.UnSetPostDeleteFinalizerAll()
if err := ctrl.updateFinalizers(app); err != nil {
@@ -1219,6 +1220,11 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
}
config := metrics.AddMetricsTransportWrapper(ctrl.metricsServer, app, clusterRESTConfig)
// Apply impersonation config if necessary
if err := ctrl.applyImpersonationConfig(config, proj, app, destCluster); err != nil {
return fmt.Errorf("cannot apply impersonation: %w", err)
}
if app.CascadedDeletion() {
deletionApproved := app.IsDeletionConfirmed(app.DeletionTimestamp.Time)
@@ -1367,7 +1373,7 @@ func (ctrl *ApplicationController) setAppCondition(app *appv1.Application, condi
_, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Patch(context.Background(), app.Name, types.MergePatchType, patch, metav1.PatchOptions{})
}
if err != nil {
logCtx.Errorf("Unable to set application condition: %v", err)
logCtx.WithError(err).Error("Unable to set application condition")
}
}
@@ -1511,7 +1517,7 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
// force app refresh with using CompareWithLatest comparison type and trigger app reconciliation loop
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatestForceResolve.Pointer(), nil)
} else {
logCtx.Warnf("Fails to requeue application: %v", err)
logCtx.WithError(err).Warn("Fails to requeue application")
}
}
ts.AddCheckpoint("request_app_refresh_ms")
@@ -1543,13 +1549,13 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
}
patchJSON, err := json.Marshal(patch)
if err != nil {
logCtx.Errorf("error marshaling json: %v", err)
logCtx.WithError(err).Error("error marshaling json")
return
}
if app.Status.OperationState != nil && app.Status.OperationState.FinishedAt != nil && state.FinishedAt == nil {
patchJSON, err = jsonpatch.MergeMergePatches(patchJSON, []byte(`{"status": {"operationState": {"finishedAt": null}}}`))
if err != nil {
logCtx.Errorf("error merging operation state patch: %v", err)
logCtx.WithError(err).Error("error merging operation state patch")
return
}
}
@@ -1563,7 +1569,7 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
}
// kube.RetryUntilSucceed logs failed attempts at "debug" level, but we want to know if this fails. Log a
// warning.
logCtx.Warnf("error patching application with operation state: %v", err)
logCtx.WithError(err).Warn("error patching application with operation state")
return fmt.Errorf("error patching application with operation state: %w", err)
}
return nil
@@ -1592,7 +1598,7 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
destCluster, err := argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
if err != nil {
logCtx.Warnf("Unable to get destination cluster, setting dest_server label to empty string in sync metric: %v", err)
logCtx.WithError(err).Warn("Unable to get destination cluster, setting dest_server label to empty string in sync metric")
}
destServer := ""
if destCluster != nil {
@@ -1609,7 +1615,7 @@ func (ctrl *ApplicationController) writeBackToInformer(app *appv1.Application) {
logCtx := log.WithFields(applog.GetAppLogFields(app)).WithField("informer-writeBack", true)
err := ctrl.appInformer.GetStore().Update(app)
if err != nil {
logCtx.Errorf("failed to update informer store: %v", err)
logCtx.WithError(err).Error("failed to update informer store")
return
}
}
@@ -1630,12 +1636,12 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
appKey, shutdown := ctrl.appRefreshQueue.Get()
if shutdown {
processNext = false
return
return processNext
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.WithField("appkey", appKey).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
// We want to have app operation update happen after the sync, so there's no race condition
// and app updates not proceeding. See https://github.com/argoproj/argo-cd/issues/18500.
@@ -1644,23 +1650,23 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
return processNext
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
return processNext
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
}
origApp = origApp.DeepCopy()
needRefresh, refreshType, comparisonLevel := ctrl.needRefreshAppStatus(origApp, ctrl.statusRefreshTimeout, ctrl.statusHardRefreshTimeout)
if !needRefresh {
return
return processNext
}
app := origApp.DeepCopy()
logCtx := log.WithFields(applog.GetAppLogFields(app)).WithFields(log.Fields{
@@ -1702,13 +1708,13 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
if tree, err = ctrl.getResourceTree(destCluster, app, managedResources); err == nil {
app.Status.Summary = tree.GetSummary(app)
if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), tree); err != nil {
logCtx.Errorf("Failed to cache resources tree: %v", err)
return
logCtx.WithError(err).Error("Failed to cache resources tree")
return processNext
}
}
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
return
return processNext
}
logCtx.Warnf("Failed to get cached managed resources for tree reconciliation, fall back to full reconciliation")
}
@@ -1724,20 +1730,20 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), &appv1.ApplicationTree{}); err != nil {
logCtx.Warnf("failed to set app resource tree: %v", err)
logCtx.WithError(err).Warn("failed to set app resource tree")
}
if err := ctrl.cache.SetAppManagedResources(app.InstanceName(ctrl.namespace), nil); err != nil {
logCtx.Warnf("failed to set app managed resources tree: %v", err)
logCtx.WithError(err).Warn("failed to set app managed resources tree")
}
ts.AddCheckpoint("process_refresh_app_conditions_errors_ms")
return
return processNext
}
destCluster, err = argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
if err != nil {
logCtx.Errorf("Failed to get destination cluster: %v", err)
logCtx.WithError(err).Error("Failed to get destination cluster")
// exit the reconciliation. ctrl.refreshAppConditions should have caught the error
return
return processNext
}
var localManifests []string
@@ -1777,8 +1783,8 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
ts.AddCheckpoint("compare_app_state_ms")
if stderrors.Is(err, ErrCompareStateRepo) {
logCtx.Warnf("Ignoring temporary failed attempt to compare app state against repo: %v", err)
return // short circuit if git error is encountered
logCtx.WithError(err).Warn("Ignoring temporary failed attempt to compare app state against repo")
return processNext // short circuit if git error is encountered
}
for k, v := range compareResult.timings {
@@ -1791,7 +1797,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
tree, err := ctrl.setAppManagedResources(destCluster, app, compareResult)
ts.AddCheckpoint("set_app_managed_resources_ms")
if err != nil {
logCtx.Errorf("Failed to cache app resources: %v", err)
logCtx.WithError(err).Error("Failed to cache app resources")
} else {
app.Status.Summary = tree.GetSummary(app)
}
@@ -1843,60 +1849,54 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}
if err := ctrl.updateFinalizers(app); err != nil {
logCtx.Errorf("Failed to update finalizers: %v", err)
logCtx.WithError(err).Error("Failed to update finalizers")
}
}
ts.AddCheckpoint("process_finalizers_ms")
return
return processNext
}
func (ctrl *ApplicationController) processAppHydrateQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appHydrateQueue.Get()
if shutdown {
processNext = false
return
return processNext
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.WithField("appkey", appKey).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appHydrateQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
return processNext
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
return processNext
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
}
ctrl.hydrator.ProcessAppHydrateQueueItem(origApp)
ctrl.hydrator.ProcessAppHydrateQueueItem(origApp.DeepCopy())
log.WithFields(applog.GetAppLogFields(origApp)).Debug("Successfully processed app hydrate queue item")
return
return processNext
}
func (ctrl *ApplicationController) processHydrationQueueItem() (processNext bool) {
hydrationKey, shutdown := ctrl.hydrationQueue.Get()
if shutdown {
processNext = false
return
return processNext
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.hydrationQueue.Done(hydrationKey)
}()
logCtx := log.WithFields(log.Fields{
"sourceRepoURL": hydrationKey.SourceRepoURL,
@@ -1904,12 +1904,19 @@ func (ctrl *ApplicationController) processHydrationQueueItem() (processNext bool
"destinationBranch": hydrationKey.DestinationBranch,
})
defer func() {
if r := recover(); r != nil {
logCtx.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.hydrationQueue.Done(hydrationKey)
}()
logCtx.Debug("Processing hydration queue item")
ctrl.hydrator.ProcessHydrationQueueItem(hydrationKey)
logCtx.Debug("Successfully processed hydration queue item")
return
return processNext
}
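The hydration-queue change above moves the deferred recover/Done block below the log-context setup so a panic is reported together with the hydration key fields. The queue consumers in this file all share the same shape; a simplified sketch (not the argo-cd implementation, with the queue reduced to a minimal interface):

// itemQueue captures the Get/Done semantics of client-go's workqueue:
// Get blocks for the next item, and Done must be called once processing ends.
type itemQueue interface {
	Get() (key string, shutdown bool)
	Done(key string)
}

func processQueueItem(queue itemQueue, handle func(string)) (processNext bool) {
	key, shutdown := queue.Get()
	if shutdown {
		return false
	}
	processNext = true
	logCtx := log.WithField("key", key)
	defer func() {
		// Recover with the key attached, then release the item so the queue
		// does not consider it still in flight.
		if r := recover(); r != nil {
			logCtx.Errorf("Recovered from panic: %+v", r)
		}
		queue.Done(key)
	}()
	handle(key)
	return processNext
}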
func resourceStatusKey(res appv1.ResourceStatus) string {
@@ -2012,11 +2019,11 @@ func (ctrl *ApplicationController) normalizeApplication(orig, app *appv1.Applica
patch, modified, err := diff.CreateTwoWayMergePatch(orig, app, appv1.Application{})
if err != nil {
logCtx.Errorf("error constructing app spec patch: %v", err)
logCtx.WithError(err).Error("error constructing app spec patch")
} else if modified {
_, err := ctrl.PatchAppWithWriteBack(context.Background(), app.Name, app.Namespace, types.MergePatchType, patch, metav1.PatchOptions{})
if err != nil {
logCtx.Errorf("Error persisting normalized application spec: %v", err)
logCtx.WithError(err).Error("Error persisting normalized application spec")
} else {
logCtx.Infof("Normalized app spec: %s", string(patch))
}
@@ -2071,12 +2078,12 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: orig.GetAnnotations()}, Status: orig.Status},
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: newAnnotations}, Status: *newStatus})
if err != nil {
logCtx.Errorf("Error constructing app status patch: %v", err)
return
logCtx.WithError(err).Error("Error constructing app status patch")
return patchDuration
}
if !modified {
logCtx.Infof("No status changes. Skipping patch")
return
return patchDuration
}
// calculate time for path call
start := time.Now()
@@ -2085,7 +2092,7 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
}()
_, err = ctrl.PatchAppWithWriteBack(context.Background(), orig.Name, orig.Namespace, types.MergePatchType, patch, metav1.PatchOptions{})
if err != nil {
logCtx.Warnf("Error updating application: %v", err)
logCtx.WithError(err).Warn("Error updating application")
} else {
logCtx.Infof("Update successful")
}
@@ -2230,11 +2237,11 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
if stderrors.Is(err, argo.ErrAnotherOperationInProgress) {
// skipping auto-sync because another operation is in progress and was not noticed due to stale data in informer
// it is safe to skip auto-sync because it is already running
logCtx.Warnf("Failed to initiate auto-sync to %s: %v", desiredRevisions, err)
logCtx.WithError(err).Warnf("Failed to initiate auto-sync to %s", desiredRevisions)
return nil, 0
}
logCtx.Errorf("Failed to initiate auto-sync to %s: %v", desiredRevisions, err)
logCtx.WithError(err).Errorf("Failed to initiate auto-sync to %s", desiredRevisions)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: err.Error()}, setOpTime
}
ctrl.writeBackToInformer(updatedApp)
@@ -2360,7 +2367,7 @@ func (ctrl *ApplicationController) canProcessApp(obj any) bool {
return false
}
} else {
logCtx.Debugf("Unable to determine if Application should skip reconcile based on annotation %s: %v", common.AnnotationKeyAppSkipReconcile, err)
logCtx.WithError(err).Debugf("Unable to determine if Application should skip reconcile based on annotation %s", common.AnnotationKeyAppSkipReconcile)
}
}
}
@@ -2615,4 +2622,22 @@ func (ctrl *ApplicationController) logAppEvent(ctx context.Context, a *appv1.App
ctrl.auditLogger.LogAppEvent(a, eventInfo, message, "", eventLabels)
}
func (ctrl *ApplicationController) applyImpersonationConfig(config *rest.Config, proj *appv1.AppProject, app *appv1.Application, destCluster *appv1.Cluster) error {
impersonationEnabled, err := ctrl.settingsMgr.IsImpersonationEnabled()
if err != nil {
return fmt.Errorf("error getting impersonation setting: %w", err)
}
if !impersonationEnabled {
return nil
}
user, err := deriveServiceAccountToImpersonate(proj, app, destCluster)
if err != nil {
return fmt.Errorf("error deriving service account to impersonate: %w", err)
}
config.Impersonate = rest.ImpersonationConfig{
UserName: user,
}
return nil
}
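applyImpersonationConfig above only sets rest.ImpersonationConfig.UserName, and only when the impersonation setting is enabled. A short sketch of how such a config is typically consumed, with the username passed in as a parameter (the helper name and username format are illustrative; in argo-cd the value comes from deriveServiceAccountToImpersonate and the project's DestinationServiceAccounts):

// newImpersonatingClient builds a client that still authenticates as the
// controller but sends an Impersonate-User header, so RBAC on the destination
// cluster is evaluated against the impersonated identity instead.
func newImpersonatingClient(baseConfig *rest.Config, user string) (kubernetes.Interface, error) {
	config := rest.CopyConfig(baseConfig)
	config.Impersonate = rest.ImpersonationConfig{UserName: user}
	return kubernetes.NewForConfig(config)
}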
type ClusterFilterFunction func(c *appv1.Cluster, distributionFunction sharding.DistributionFunction) bool


@@ -5,6 +5,7 @@ import (
"encoding/json"
"errors"
"fmt"
"strconv"
"testing"
"time"
@@ -94,11 +95,11 @@ func (m *MockKubectl) DeleteResource(ctx context.Context, config *rest.Config, g
return m.Kubectl.DeleteResource(ctx, config, gvk, name, namespace, deleteOptions)
}
func newFakeController(data *fakeData, repoErr error) *ApplicationController {
return newFakeControllerWithResync(data, time.Minute, repoErr, nil)
func newFakeController(ctx context.Context, data *fakeData, repoErr error) *ApplicationController {
return newFakeControllerWithResync(ctx, data, time.Minute, repoErr, nil)
}
func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration, repoErr, revisionPathsErr error) *ApplicationController {
func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncPeriod time.Duration, repoErr, revisionPathsErr error) *ApplicationController {
var clust corev1.Secret
err := yaml.Unmarshal([]byte(fakeCluster), &clust)
if err != nil {
@@ -106,33 +107,33 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
}
// Mock out call to GenerateManifest
mockRepoClient := mockrepoclient.RepoServerServiceClient{}
mockRepoClient := &mockrepoclient.RepoServerServiceClient{}
if len(data.manifestResponses) > 0 {
for _, response := range data.manifestResponses {
if repoErr != nil {
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(response, repoErr).Once()
mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(response, repoErr).Once()
} else {
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(response, nil).Once()
mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(response, nil).Once()
}
}
} else {
if repoErr != nil {
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(data.manifestResponse, repoErr).Once()
mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(data.manifestResponse, repoErr).Once()
} else {
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(data.manifestResponse, nil).Once()
mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(data.manifestResponse, nil).Once()
}
}
if revisionPathsErr != nil {
mockRepoClient.On("UpdateRevisionForPaths", mock.Anything, mock.Anything).Return(nil, revisionPathsErr)
mockRepoClient.EXPECT().UpdateRevisionForPaths(mock.Anything, mock.Anything).Return(nil, revisionPathsErr)
} else {
mockRepoClient.On("UpdateRevisionForPaths", mock.Anything, mock.Anything).Return(data.updateRevisionForPathsResponse, nil)
mockRepoClient.EXPECT().UpdateRevisionForPaths(mock.Anything, mock.Anything).Return(data.updateRevisionForPathsResponse, nil)
}
mockRepoClientset := mockrepoclient.Clientset{RepoServerServiceClient: &mockRepoClient}
mockRepoClientset := &mockrepoclient.Clientset{RepoServerServiceClient: mockRepoClient}
mockCommitClientset := mockcommitclient.Clientset{}
mockCommitClientset := &mockcommitclient.Clientset{}
secret := corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
@@ -157,15 +158,15 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
runtimeObjs := []runtime.Object{&clust, &secret, &cm}
runtimeObjs = append(runtimeObjs, data.additionalObjs...)
kubeClient := fake.NewClientset(runtimeObjs...)
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClient, test.FakeArgoCDNamespace)
settingsMgr := settings.NewSettingsManager(ctx, kubeClient, test.FakeArgoCDNamespace)
kubectl := &MockKubectl{Kubectl: &kubetest.MockKubectlCmd{}}
ctrl, err := NewApplicationController(
test.FakeArgoCDNamespace,
settingsMgr,
kubeClient,
appclientset.NewSimpleClientset(data.apps...),
&mockRepoClientset,
&mockCommitClientset,
mockRepoClientset,
mockCommitClientset,
appstatecache.NewCache(
cacheutil.NewCache(cacheutil.NewInMemoryCache(1*time.Minute)),
1*time.Minute,
@@ -196,7 +197,7 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
false,
)
db := &dbmocks.ArgoDB{}
db.On("GetApplicationControllerReplicas").Return(1)
db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
// Setting a default sharding algorithm for the tests where we cannot set it.
ctrl.clusterSharding = sharding.NewClusterSharding(db, 0, 1, common.DefaultShardingAlgorithm)
if err != nil {
@@ -206,27 +207,25 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
defer cancelProj()
cancelApp := test.StartInformer(ctrl.appInformer)
defer cancelApp()
clusterCacheMock := mocks.ClusterCache{}
clusterCacheMock.On("IsNamespaced", mock.Anything).Return(true, nil)
clusterCacheMock.On("GetOpenAPISchema").Return(nil, nil)
clusterCacheMock.On("GetGVKParser").Return(nil)
clusterCacheMock := &mocks.ClusterCache{}
clusterCacheMock.EXPECT().IsNamespaced(mock.Anything).Return(true, nil)
clusterCacheMock.EXPECT().GetOpenAPISchema().Return(nil)
clusterCacheMock.EXPECT().GetGVKParser().Return(nil)
mockStateCache := mockstatecache.LiveStateCache{}
ctrl.appStateManager.(*appStateManager).liveStateCache = &mockStateCache
ctrl.stateCache = &mockStateCache
mockStateCache.On("IsNamespaced", mock.Anything, mock.Anything).Return(true, nil)
mockStateCache.On("GetManagedLiveObjs", mock.Anything, mock.Anything, mock.Anything).Return(data.managedLiveObjs, nil)
mockStateCache.On("GetVersionsInfo", mock.Anything).Return("v1.2.3", nil, nil)
mockStateCache := &mockstatecache.LiveStateCache{}
ctrl.appStateManager.(*appStateManager).liveStateCache = mockStateCache
ctrl.stateCache = mockStateCache
mockStateCache.EXPECT().IsNamespaced(mock.Anything, mock.Anything).Return(true, nil)
mockStateCache.EXPECT().GetManagedLiveObjs(mock.Anything, mock.Anything, mock.Anything).Return(data.managedLiveObjs, nil)
mockStateCache.EXPECT().GetVersionsInfo(mock.Anything).Return("v1.2.3", nil, nil)
response := make(map[kube.ResourceKey]v1alpha1.ResourceNode)
for k, v := range data.namespacedResources {
response[k] = v.ResourceNode
}
mockStateCache.On("GetNamespaceTopLevelResources", mock.Anything, mock.Anything).Return(response, nil)
mockStateCache.On("IterateResources", mock.Anything, mock.Anything).Return(nil)
mockStateCache.On("GetClusterCache", mock.Anything).Return(&clusterCacheMock, nil)
mockStateCache.On("IterateHierarchyV2", mock.Anything, mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
keys := args[1].([]kube.ResourceKey)
action := args[2].(func(child v1alpha1.ResourceNode, appName string) bool)
mockStateCache.EXPECT().GetNamespaceTopLevelResources(mock.Anything, mock.Anything).Return(response, nil)
mockStateCache.EXPECT().IterateResources(mock.Anything, mock.Anything).Return(nil)
mockStateCache.EXPECT().GetClusterCache(mock.Anything).Return(clusterCacheMock, nil)
mockStateCache.EXPECT().IterateHierarchyV2(mock.Anything, mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Cluster, keys []kube.ResourceKey, action func(_ v1alpha1.ResourceNode, _ string) bool) {
for _, key := range keys {
appName := ""
if res, ok := data.namespacedResources[key]; ok {
@@ -596,7 +595,7 @@ func newFakeServiceAccount() map[string]any {
func TestAutoSync(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -614,7 +613,7 @@ func TestAutoSyncEnabledSetToTrue(t *testing.T) {
app := newFakeApp()
enable := true
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: &enable}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -635,7 +634,7 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.SyncPolicy.Automated.SelfHeal = false
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"z", "x", "v"},
@@ -650,7 +649,7 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.SyncPolicy.Automated.SelfHeal = false
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"a", "b", "c"},
@@ -666,7 +665,7 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
func TestAutoSyncNotAllowEmpty(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated.Prune = true
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -679,7 +678,7 @@ func TestAutoSyncAllowEmpty(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated.Prune = true
app.Spec.SyncPolicy.Automated.AllowEmpty = true
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -693,7 +692,7 @@ func TestSkipAutoSync(t *testing.T) {
// Set current to 'aaaaa', desired to 'aaaa' and mark system OutOfSync
t.Run("PreviouslySyncedToRevision", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
@@ -708,7 +707,7 @@ func TestSkipAutoSync(t *testing.T) {
// Verify we skip when we are already Synced (even if revision is different)
t.Run("AlreadyInSyncedState", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -724,7 +723,7 @@ func TestSkipAutoSync(t *testing.T) {
t.Run("AutoSyncIsDisabled", func(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy = nil
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -741,7 +740,7 @@ func TestSkipAutoSync(t *testing.T) {
app := newFakeApp()
enable := false
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: &enable}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -758,7 +757,7 @@ func TestSkipAutoSync(t *testing.T) {
app := newFakeApp()
now := metav1.Now()
app.DeletionTimestamp = &now
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -784,7 +783,7 @@ func TestSkipAutoSync(t *testing.T) {
Source: *app.Spec.Source.DeepCopy(),
},
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -808,7 +807,7 @@ func TestSkipAutoSync(t *testing.T) {
Source: *app.Spec.Source.DeepCopy(),
},
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -822,7 +821,7 @@ func TestSkipAutoSync(t *testing.T) {
t.Run("NeedsToPruneResourcesOnlyButAutomatedPruneDisabled", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -848,7 +847,7 @@ func TestAutoSyncIndicateError(t *testing.T) {
},
},
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
@@ -908,7 +907,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(t.Context(), "my-app", metav1.GetOptions{})
@@ -926,7 +925,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
},
},
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
app.Status.OperationState.SyncResult.Sources[0].Helm = &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
@@ -973,7 +972,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
app.SetCascadedDeletion(v1alpha1.ResourcesFinalizerName)
app.DeletionTimestamp = &now
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
patched := false
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
defaultReactor := fakeAppCs.ReactionChain[0]
@@ -1018,7 +1017,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
appObj := kube.MustToUnstructured(&app)
cm := newFakeCM()
strayObj := kube.MustToUnstructured(&cm)
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj, &restrictedProj},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(appObj): appObj,
@@ -1059,7 +1058,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
app := newFakeAppWithDestName()
app.SetCascadedDeletion(v1alpha1.ResourcesFinalizerName)
app.DeletionTimestamp = &now
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
patched := false
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
defaultReactor := fakeAppCs.ReactionChain[0]
@@ -1085,7 +1084,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
testShouldDelete := func(app *v1alpha1.Application) {
appObj := kube.MustToUnstructured(&app)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(appObj): appObj,
}}, nil)
@@ -1119,7 +1118,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
app := newFakeApp()
app.SetPostDeleteFinalizer()
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{fakePostDeleteHook},
}},
@@ -1161,7 +1160,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
},
}
require.NoError(t, unstructured.SetNestedField(liveHook.Object, conditions, "status", "conditions"))
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{fakePostDeleteHook},
}},
@@ -1205,7 +1204,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
},
}
require.NoError(t, unstructured.SetNestedField(liveHook.Object, conditions, "status", "conditions"))
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{fakeRoleBinding, fakeRole, fakeServiceAccount, fakePostDeleteHook},
}},
@@ -1246,6 +1245,117 @@ func TestFinalizeAppDeletion(t *testing.T) {
})
}
func TestFinalizeAppDeletionWithImpersonation(t *testing.T) {
type fixture struct {
application *v1alpha1.Application
controller *ApplicationController
}
setup := func(destinationNamespace, serviceAccountName string) *fixture {
app := newFakeApp()
app.Status.OperationState = nil
app.Status.History = nil
now := metav1.Now()
app.DeletionTimestamp = &now
project := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Namespace: test.FakeArgoCDNamespace,
Name: "default",
},
Spec: v1alpha1.AppProjectSpec{
SourceRepos: []string{"*"},
Destinations: []v1alpha1.ApplicationDestination{
{
Server: "*",
Namespace: "*",
},
},
DestinationServiceAccounts: []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://localhost:6443",
Namespace: destinationNamespace,
DefaultServiceAccount: serviceAccountName,
},
},
},
}
additionalObjs := []runtime.Object{}
if serviceAccountName != "" {
syncServiceAccount := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: serviceAccountName,
Namespace: test.FakeDestNamespace,
},
}
additionalObjs = append(additionalObjs, syncServiceAccount)
}
data := fakeData{
apps: []runtime.Object{app, project},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: "https://localhost:6443",
Revision: "abc123",
},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{},
configMapData: map[string]string{
"application.sync.impersonation.enabled": strconv.FormatBool(true),
},
additionalObjs: additionalObjs,
}
ctrl := newFakeController(t.Context(), &data, nil)
return &fixture{
application: app,
controller: ctrl,
}
}
t.Run("no matching impersonation service account is configured", func(t *testing.T) {
// given impersonation is enabled but no matching service account exists
f := setup(test.FakeDestNamespace, "")
// when
err := f.controller.finalizeApplicationDeletion(f.application, func(_ string) ([]*v1alpha1.Cluster, error) {
return []*v1alpha1.Cluster{}, nil
})
// then deletion should fail due to impersonation error
require.Error(t, err)
assert.Contains(t, err.Error(), "error deriving service account to impersonate")
})
t.Run("valid impersonation service account is configured", func(t *testing.T) {
// given impersonation is enabled with valid service account
f := setup(test.FakeDestNamespace, "test-sa")
// when
err := f.controller.finalizeApplicationDeletion(f.application, func(_ string) ([]*v1alpha1.Cluster, error) {
return []*v1alpha1.Cluster{}, nil
})
// then deletion should succeed
require.NoError(t, err)
})
t.Run("invalid application destination cluster", func(t *testing.T) {
// given impersonation is enabled but destination cluster does not exist
f := setup(test.FakeDestNamespace, "test-sa")
f.application.Spec.Destination.Server = "https://invalid-cluster:6443"
f.application.Spec.Destination.Name = "invalid"
// when
err := f.controller.finalizeApplicationDeletion(f.application, func(_ string) ([]*v1alpha1.Cluster, error) {
return []*v1alpha1.Cluster{}, nil
})
// then deletion should still succeed by removing finalizers
require.NoError(t, err)
})
}
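The test-helper changes in this file thread a context through newFakeController and pass t.Context() at the call sites instead of context.Background(), so informers and other machinery started by the fake controller are tied to the test's lifetime. t.Context() is the testing package's per-test context (available in recent Go releases); a minimal, self-contained sketch of the behaviour it provides:

func TestWorkerStopsWithTest(t *testing.T) {
	// t.Context() is canceled automatically when the test finishes, so the
	// goroutine below cannot outlive the test.
	ctx := t.Context()
	done := make(chan struct{})
	go func() {
		defer close(done)
		<-ctx.Done()
	}()
	t.Cleanup(func() { <-done }) // the context is canceled before cleanups run
}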
// TestNormalizeApplication verifies we normalize an application during reconciliation
func TestNormalizeApplication(t *testing.T) {
defaultProj := v1alpha1.AppProject{
@@ -1279,7 +1389,7 @@ func TestNormalizeApplication(t *testing.T) {
{
// Verify we normalize the app because project is missing
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
key, _ := cache.MetaNamespaceKeyFunc(app)
ctrl.appRefreshQueue.AddRateLimited(key)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
@@ -1301,7 +1411,7 @@ func TestNormalizeApplication(t *testing.T) {
// Verify we don't unnecessarily normalize app when project is set
app.Spec.Project = "default"
data.apps[0] = app
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
key, _ := cache.MetaNamespaceKeyFunc(app)
ctrl.appRefreshQueue.AddRateLimited(key)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
@@ -1326,7 +1436,7 @@ func TestHandleAppUpdated(t *testing.T) {
app.Spec.Destination.Server = v1alpha1.KubernetesInternalAPIServerAddr
proj := defaultProj.DeepCopy()
proj.Spec.SourceNamespaces = []string{test.FakeArgoCDNamespace}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, proj}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, proj}}, nil)
ctrl.handleObjectUpdated(map[string]bool{app.InstanceName(ctrl.namespace): true}, kube.GetObjectRef(kube.MustToUnstructured(app)))
isRequested, level := ctrl.isRefreshRequested(app.QualifiedName())
@@ -1353,7 +1463,7 @@ func TestHandleOrphanedResourceUpdated(t *testing.T) {
proj := defaultProj.DeepCopy()
proj.Spec.OrphanedResources = &v1alpha1.OrphanedResourcesMonitorSettings{}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app1, app2, proj}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app1, app2, proj}}, nil)
ctrl.handleObjectUpdated(map[string]bool{}, corev1.ObjectReference{UID: "test", Kind: kube.DeploymentKind, Name: "test", Namespace: test.FakeArgoCDNamespace})
@@ -1384,7 +1494,7 @@ func TestGetResourceTree_HasOrphanedResources(t *testing.T) {
ResourceRef: v1alpha1.ResourceRef{Group: "apps", Kind: "Deployment", Namespace: "default", Name: "deploy2"},
}
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", "Deployment", "default", "nginx-deployment"): {ResourceNode: managedDeploy},
@@ -1407,7 +1517,7 @@ func TestGetResourceTree_HasOrphanedResources(t *testing.T) {
}
func TestSetOperationStateOnDeletedApp(t *testing.T) {
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
patched := false
@@ -1425,7 +1535,7 @@ func TestSetOperationStateLogRetries(t *testing.T) {
t.Cleanup(func() {
logrus.StandardLogger().ReplaceHooks(logrus.LevelHooks{})
})
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
patched := false
@@ -1438,7 +1548,12 @@ func TestSetOperationStateLogRetries(t *testing.T) {
})
ctrl.setOperationState(newFakeApp(), &v1alpha1.OperationState{Phase: synccommon.OperationSucceeded})
assert.True(t, patched)
assert.Contains(t, hook.Entries[0].Message, "fake error")
require.GreaterOrEqual(t, len(hook.Entries), 1)
entry := hook.Entries[0]
require.Contains(t, entry.Data, "error")
errorVal, ok := entry.Data["error"].(error)
require.True(t, ok, "error field should be of type error")
assert.Contains(t, errorVal.Error(), "fake error")
}
func TestNeedRefreshAppStatus(t *testing.T) {
@@ -1476,7 +1591,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
app.Status.Sync.ComparedTo.Source = app.Spec.GetSource()
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
t.Run("no need to refresh just reconciled application", func(t *testing.T) {
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
@@ -1488,7 +1603,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
assert.False(t, needRefresh)
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
// refresh app using the 'deepest' requested comparison level
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
@@ -1505,7 +1620,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
assert.False(t, needRefresh)
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
// refresh app with a non-nil delay
// use zero-second delay to test the add later logic without waiting in the test
@@ -1535,7 +1650,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
app := app.DeepCopy()
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.False(t, needRefresh)
@@ -1565,7 +1680,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
}
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.False(t, needRefresh)
@@ -1645,7 +1760,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
}
func TestUpdatedManagedNamespaceMetadata(t *testing.T) {
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
app := newFakeApp()
app.Spec.SyncPolicy.ManagedNamespaceMetadata = &v1alpha1.ManagedNamespaceMetadata{
Labels: map[string]string{
@@ -1669,7 +1784,7 @@ func TestUpdatedManagedNamespaceMetadata(t *testing.T) {
}
func TestUnchangedManagedNamespaceMetadata(t *testing.T) {
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
app := newFakeApp()
app.Spec.SyncPolicy.ManagedNamespaceMetadata = &v1alpha1.ManagedNamespaceMetadata{
Labels: map[string]string{
@@ -1712,7 +1827,7 @@ func TestRefreshAppConditions(t *testing.T) {
t.Run("NoErrorConditions", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
_, hasErrors := ctrl.refreshAppConditions(app)
assert.False(t, hasErrors)
@@ -1723,7 +1838,7 @@ func TestRefreshAppConditions(t *testing.T) {
app := newFakeApp()
app.Status.SetConditions([]v1alpha1.ApplicationCondition{{Type: v1alpha1.ApplicationConditionExcludedResourceWarning}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
_, hasErrors := ctrl.refreshAppConditions(app)
assert.False(t, hasErrors)
@@ -1736,7 +1851,7 @@ func TestRefreshAppConditions(t *testing.T) {
app.Spec.Project = "wrong project"
app.Status.SetConditions([]v1alpha1.ApplicationCondition{{Type: v1alpha1.ApplicationConditionInvalidSpecError, Message: "old message"}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
_, hasErrors := ctrl.refreshAppConditions(app)
assert.True(t, hasErrors)
@@ -1751,7 +1866,7 @@ func TestUpdateReconciledAt(t *testing.T) {
reconciledAt := metav1.NewTime(time.Now().Add(-1 * time.Second))
app.Status = v1alpha1.ApplicationStatus{ReconciledAt: &reconciledAt}
app.Status.Sync = v1alpha1.SyncStatus{ComparedTo: v1alpha1.ComparedTo{Source: app.Spec.GetSource(), Destination: app.Spec.Destination, IgnoreDifferences: app.Spec.IgnoreDifferences}}
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -1882,7 +1997,7 @@ apps/Deployment:
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{tc.app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -1946,7 +2061,7 @@ apps/Deployment:
return hs`,
}
ctrl := newFakeControllerWithResync(&fakeData{
ctrl := newFakeControllerWithResync(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -2013,7 +2128,7 @@ apps/Deployment:
func TestProjectErrorToCondition(t *testing.T) {
app := newFakeApp()
app.Spec.Project = "wrong project"
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -2041,7 +2156,7 @@ func TestProjectErrorToCondition(t *testing.T) {
func TestFinalizeProjectDeletion_HasApplications(t *testing.T) {
app := newFakeApp()
proj := &v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace}}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, proj}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, proj}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
patched := false
@@ -2057,7 +2172,7 @@ func TestFinalizeProjectDeletion_HasApplications(t *testing.T) {
func TestFinalizeProjectDeletion_DoesNotHaveApplications(t *testing.T) {
proj := &v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace}}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{&defaultProj}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{&defaultProj}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
@@ -2083,7 +2198,7 @@ func TestProcessRequestedAppOperation_FailedNoRetries(t *testing.T) {
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2110,7 +2225,7 @@ func TestProcessRequestedAppOperation_InvalidDestination(t *testing.T) {
proj := defaultProj
proj.Name = "test-project"
proj.Spec.SourceNamespaces = []string{test.FakeArgoCDNamespace}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &proj}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &proj}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
func() {
@@ -2139,7 +2254,7 @@ func TestProcessRequestedAppOperation_FailedHasRetries(t *testing.T) {
Sync: &v1alpha1.SyncOperation{},
Retry: v1alpha1.RetryStrategy{Limit: 1},
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2186,7 +2301,7 @@ func TestProcessRequestedAppOperation_RunningPreviouslyFailed(t *testing.T) {
Revision: "abc123",
},
}
ctrl := newFakeController(data, nil)
ctrl := newFakeController(t.Context(), data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2243,7 +2358,7 @@ func TestProcessRequestedAppOperation_RunningPreviouslyFailedBackoff(t *testing.
Revision: "abc123",
},
}
ctrl := newFakeController(data, nil)
ctrl := newFakeController(t.Context(), data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.PrependReactor("patch", "*", func(_ kubetesting.Action) (handled bool, ret runtime.Object, err error) {
require.FailNow(t, "A patch should not have been called if the backoff has not passed")
@@ -2271,7 +2386,7 @@ func TestProcessRequestedAppOperation_HasRetriesTerminated(t *testing.T) {
Revision: "abc123",
},
}
ctrl := newFakeController(data, nil)
ctrl := newFakeController(t.Context(), data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2295,7 +2410,7 @@ func TestProcessRequestedAppOperation_Successful(t *testing.T) {
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
}
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
@@ -2370,7 +2485,7 @@ func TestProcessRequestedAppOperation_SyncTimeout(t *testing.T) {
Revision: "HEAD",
},
}
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
@@ -2412,9 +2527,9 @@ func TestGetAppHosts(t *testing.T) {
"application.allowedNodeLabels": "label1,label2",
},
}
ctrl := newFakeController(data, nil)
ctrl := newFakeController(t.Context(), data, nil)
mockStateCache := &mockstatecache.LiveStateCache{}
mockStateCache.On("IterateResources", mock.Anything, mock.MatchedBy(func(callback func(res *clustercache.Resource, info *statecache.ResourceInfo)) bool {
mockStateCache.EXPECT().IterateResources(mock.Anything, mock.MatchedBy(func(callback func(res *clustercache.Resource, info *statecache.ResourceInfo)) bool {
// node resource
callback(&clustercache.Resource{
Ref: corev1.ObjectReference{Name: "minikube", Kind: "Node", APIVersion: "v1"},
@@ -2440,7 +2555,7 @@ func TestGetAppHosts(t *testing.T) {
ResourceRequests: map[corev1.ResourceName]resource.Quantity{corev1.ResourceCPU: resource.MustParse("2")},
}})
return true
})).Return(nil)
})).Return(nil).Maybe()
ctrl.stateCache = mockStateCache
hosts, err := ctrl.getAppHosts(&v1alpha1.Cluster{Server: "test", Name: "test"}, app, []v1alpha1.ResourceNode{{
@@ -2467,15 +2582,15 @@ func TestGetAppHosts(t *testing.T) {
func TestMetricsExpiration(t *testing.T) {
app := newFakeApp()
// Check expiration is disabled by default
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
assert.False(t, ctrl.metricsServer.HasExpiration())
// Check expiration is enabled if set
ctrl = newFakeController(&fakeData{apps: []runtime.Object{app}, metricsCacheExpiration: 10 * time.Second}, nil)
ctrl = newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}, metricsCacheExpiration: 10 * time.Second}, nil)
assert.True(t, ctrl.metricsServer.HasExpiration())
}
func TestToAppKey(t *testing.T) {
ctrl := newFakeController(&fakeData{}, nil)
ctrl := newFakeController(t.Context(), &fakeData{}, nil)
tests := []struct {
name string
input string
@@ -2495,7 +2610,7 @@ func TestToAppKey(t *testing.T) {
func Test_canProcessApp(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl.applicationNamespaces = []string{"good"}
t.Run("without cluster filter, good namespace", func(t *testing.T) {
app.Namespace = "good"
@@ -2526,7 +2641,7 @@ func Test_canProcessAppSkipReconcileAnnotation(t *testing.T) {
appSkipReconcileFalse.Annotations = map[string]string{common.AnnotationKeyAppSkipReconcile: "false"}
appSkipReconcileTrue := newFakeApp()
appSkipReconcileTrue.Annotations = map[string]string{common.AnnotationKeyAppSkipReconcile: "true"}
ctrl := newFakeController(&fakeData{}, nil)
ctrl := newFakeController(t.Context(), &fakeData{}, nil)
tests := []struct {
name string
input any
@@ -2547,7 +2662,7 @@ func Test_canProcessAppSkipReconcileAnnotation(t *testing.T) {
func Test_syncDeleteOption(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
cm := newFakeCM()
t.Run("without delete option object is deleted", func(t *testing.T) {
cmObj := kube.MustToUnstructured(&cm)
@@ -2568,7 +2683,7 @@ func Test_syncDeleteOption(t *testing.T) {
func TestAddControllerNamespace(t *testing.T) {
t.Run("set controllerNamespace when the app is in the controller namespace", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{},
}, nil)
@@ -2586,7 +2701,7 @@ func TestAddControllerNamespace(t *testing.T) {
app.Namespace = appNamespace
proj := defaultProj
proj.Spec.SourceNamespaces = []string{appNamespace}
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &proj},
manifestResponse: &apiclient.ManifestResponse{},
applicationNamespaces: []string{appNamespace},
@@ -2843,7 +2958,7 @@ func assertDurationAround(t *testing.T, expected time.Duration, actual time.Dura
}
func TestSelfHealRemainingBackoff(t *testing.T) {
ctrl := newFakeController(&fakeData{}, nil)
ctrl := newFakeController(t.Context(), &fakeData{}, nil)
ctrl.selfHealBackoff = &wait.Backoff{
Factor: 3,
Duration: 2 * time.Second,
@@ -2925,7 +3040,7 @@ func TestSelfHealRemainingBackoff(t *testing.T) {
func TestSelfHealBackoffCooldownElapsed(t *testing.T) {
cooldown := time.Second * 30
ctrl := newFakeController(&fakeData{}, nil)
ctrl := newFakeController(t.Context(), &fakeData{}, nil)
ctrl.selfHealBackoffCooldown = cooldown
app := &v1alpha1.Application{

View File

@@ -40,7 +40,7 @@ func (n netError) Error() string { return string(n) }
func (n netError) Timeout() bool { return false }
func (n netError) Temporary() bool { return false }
func fixtures(data map[string]string, opts ...func(secret *corev1.Secret)) (*fake.Clientset, *argosettings.SettingsManager) {
func fixtures(ctx context.Context, data map[string]string, opts ...func(secret *corev1.Secret)) (*fake.Clientset, *argosettings.SettingsManager) {
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDConfigMapName,
@@ -65,17 +65,17 @@ func fixtures(data map[string]string, opts ...func(secret *corev1.Secret)) (*fak
opts[i](secret)
}
kubeClient := fake.NewClientset(cm, secret)
settingsManager := argosettings.NewSettingsManager(context.Background(), kubeClient, "default")
settingsManager := argosettings.NewSettingsManager(ctx, kubeClient, "default")
return kubeClient, settingsManager
}
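The mock setups below move from testify's string-based On(...) calls to mockery's typed EXPECT() expecters, and calls that may legitimately never happen are marked .Maybe(). A minimal sketch of the .Maybe() semantics, using testify/mock directly rather than a generated expecter:

    package example_test

    import (
        "testing"

        "github.com/stretchr/testify/mock"
    )

    type fakeDB struct{ mock.Mock }

    func (f *fakeDB) GetApplicationControllerReplicas() int {
        return f.Called().Int(0)
    }

    func TestMaybeExpectation(t *testing.T) {
        db := &fakeDB{}
        db.Test(t)
        t.Cleanup(func() { db.AssertExpectations(t) })
        // Without .Maybe(), AssertExpectations fails when the method is never
        // called; with it, the stub is optional but still returns 1 if used.
        db.On("GetApplicationControllerReplicas").Return(1).Maybe()
    }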
func TestHandleModEvent_HasChanges(_ *testing.T) {
clusterCache := &mocks.ClusterCache{}
clusterCache.On("Invalidate", mock.Anything, mock.Anything).Return(nil).Once()
clusterCache.On("EnsureSynced").Return(nil).Once()
clusterCache.EXPECT().Invalidate(mock.Anything, mock.Anything).Return().Once()
clusterCache.EXPECT().EnsureSynced().Return(nil).Once()
db := &dbmocks.ArgoDB{}
db.On("GetApplicationControllerReplicas").Return(1)
db.EXPECT().GetApplicationControllerReplicas().Return(1)
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{
"https://mycluster": clusterCache,
@@ -95,10 +95,10 @@ func TestHandleModEvent_HasChanges(_ *testing.T) {
func TestHandleModEvent_ClusterExcluded(t *testing.T) {
clusterCache := &mocks.ClusterCache{}
clusterCache.On("Invalidate", mock.Anything, mock.Anything).Return(nil).Once()
clusterCache.On("EnsureSynced").Return(nil).Once()
clusterCache.EXPECT().Invalidate(mock.Anything, mock.Anything).Return().Once()
clusterCache.EXPECT().EnsureSynced().Return(nil).Once()
db := &dbmocks.ArgoDB{}
db.On("GetApplicationControllerReplicas").Return(1)
db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
clustersCache := liveStateCache{
db: nil,
appInformer: nil,
@@ -128,10 +128,10 @@ func TestHandleModEvent_ClusterExcluded(t *testing.T) {
func TestHandleModEvent_NoChanges(_ *testing.T) {
clusterCache := &mocks.ClusterCache{}
clusterCache.On("Invalidate", mock.Anything).Panic("should not invalidate")
clusterCache.On("EnsureSynced").Return(nil).Panic("should not re-sync")
clusterCache.EXPECT().Invalidate(mock.Anything).Panic("should not invalidate").Maybe()
clusterCache.EXPECT().EnsureSynced().Return(nil).Panic("should not re-sync").Maybe()
db := &dbmocks.ArgoDB{}
db.On("GetApplicationControllerReplicas").Return(1)
db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{
"https://mycluster": clusterCache,
@@ -150,7 +150,7 @@ func TestHandleModEvent_NoChanges(_ *testing.T) {
func TestHandleAddEvent_ClusterExcluded(t *testing.T) {
db := &dbmocks.ArgoDB{}
db.On("GetApplicationControllerReplicas").Return(1)
db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{},
clusterSharding: sharding.NewClusterSharding(db, 0, 2, common.DefaultShardingAlgorithm),
@@ -169,10 +169,9 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
Config: appv1.ClusterConfig{Username: "bar"},
}
db := &dbmocks.ArgoDB{}
db.On("GetApplicationControllerReplicas").Return(1)
db.EXPECT().GetApplicationControllerReplicas().Return(1)
fakeClient := fake.NewClientset()
settingsMgr := argosettings.NewSettingsManager(t.Context(), fakeClient, "argocd")
liveStateCacheLock := sync.RWMutex{}
gitopsEngineClusterCache := &mocks.ClusterCache{}
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{
@@ -180,9 +179,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
},
clusterSharding: sharding.NewClusterSharding(db, 0, 1, common.DefaultShardingAlgorithm),
settingsMgr: settingsMgr,
// Set the lock here so we can reference it later
//nolint:govet // We need to overwrite here to have access to the lock
lock: liveStateCacheLock,
lock: sync.RWMutex{},
}
channel := make(chan string)
// Mocked lock held by the gitops-engine cluster cache
@@ -203,7 +200,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
handleDeleteWasCalled.Lock()
engineHoldsEngineLock.Lock()
gitopsEngineClusterCache.On("EnsureSynced").Run(func(_ mock.Arguments) {
gitopsEngineClusterCache.EXPECT().EnsureSynced().Run(func() {
gitopsEngineClusterCacheLock.Lock()
t.Log("EnsureSynced: Engine has engine lock")
engineHoldsEngineLock.Unlock()
@@ -217,7 +214,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
ensureSyncedCompleted.Unlock()
}).Return(nil).Once()
gitopsEngineClusterCache.On("Invalidate").Run(func(_ mock.Arguments) {
gitopsEngineClusterCache.EXPECT().Invalidate().Run(func(_ ...cache.UpdateSettingsFunc) {
// Allow EnsureSynced to continue now that we're in the deadlock condition
handleDeleteWasCalled.Unlock()
// Wait until gitops engine holds the gitops lock
@@ -230,7 +227,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
t.Log("Invalidate: Invalidate has engine lock")
gitopsEngineClusterCacheLock.Unlock()
invalidateCompleted.Unlock()
}).Return()
}).Return().Maybe()
go func() {
// Start the gitops-engine lock holds
go func() {
@@ -778,7 +775,7 @@ func Test_GetVersionsInfo_error_redacted(t *testing.T) {
}
func TestLoadCacheSettings(t *testing.T) {
_, settingsManager := fixtures(map[string]string{
_, settingsManager := fixtures(t.Context(), map[string]string{
"application.instanceLabelKey": "testLabel",
"application.resourceTrackingMethod": string(appv1.TrackingMethodLabel),
"installationID": "123456789",

View File

@@ -68,6 +68,10 @@ func populateNodeInfo(un *unstructured.Unstructured, res *ResourceInfo, customLa
case "ServiceEntry":
populateIstioServiceEntryInfo(un, res)
}
case "argoproj.io":
if gvk.Kind == "Application" {
populateApplicationInfo(un, res)
}
}
}
@@ -488,6 +492,13 @@ func populateHostNodeInfo(un *unstructured.Unstructured, res *ResourceInfo) {
}
}
func populateApplicationInfo(un *unstructured.Unstructured, res *ResourceInfo) {
// Add managed-by-url annotation to info if present
if managedByURL, ok := un.GetAnnotations()[v1alpha1.AnnotationKeyManagedByURL]; ok {
res.Info = append(res.Info, v1alpha1.InfoItem{Name: "managed-by-url", Value: managedByURL})
}
}
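A self-contained sketch of the annotation lookup added above; infoItem and the annotation key string are stand-ins for v1alpha1.InfoItem and v1alpha1.AnnotationKeyManagedByURL (the key value is assumed for illustration):

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    )

    type infoItem struct{ Name, Value string }

    // assumed key value; the real constant is v1alpha1.AnnotationKeyManagedByURL
    const annotationKeyManagedByURL = "argocd.argoproj.io/managed-by-url"

    func applicationInfo(un *unstructured.Unstructured) []infoItem {
        var info []infoItem
        if url, ok := un.GetAnnotations()[annotationKeyManagedByURL]; ok {
            info = append(info, infoItem{Name: "managed-by-url", Value: url})
        }
        return info
    }

    func main() {
        un := &unstructured.Unstructured{Object: map[string]any{
            "apiVersion": "argoproj.io/v1alpha1",
            "kind":       "Application",
            "metadata": map[string]any{
                "name":        "guestbook",
                "annotations": map[string]any{annotationKeyManagedByURL: "https://argocd.example.com"},
            },
        }}
        fmt.Println(applicationInfo(un)) // [{managed-by-url https://argocd.example.com}]
    }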
func generateManifestHash(un *unstructured.Unstructured, ignores []v1alpha1.ResourceIgnoreDifferences, overrides map[string]v1alpha1.ResourceOverride, opts normalizers.IgnoreNormalizerOpts) (string, error) {
normalizer, err := normalizers.NewIgnoreNormalizer(ignores, overrides, opts)
if err != nil {

View File

@@ -4,7 +4,9 @@ import (
"context"
"encoding/json"
"fmt"
"maps"
"path/filepath"
"slices"
"sync"
"time"
@@ -99,47 +101,41 @@ func NewHydrator(dependencies Dependencies, statusRefreshTimeout time.Duration,
// It's likely that multiple applications will trigger hydration at the same time. The hydration queue key is meant to
// dedupe these requests.
func (h *Hydrator) ProcessAppHydrateQueueItem(origApp *appv1.Application) {
origApp = origApp.DeepCopy()
app := origApp.DeepCopy()
if app.Spec.SourceHydrator == nil {
return
}
logCtx := log.WithFields(applog.GetAppLogFields(app))
logCtx.Debug("Processing app hydrate queue item")
// TODO: don't reuse statusRefreshTimeout. Create a new timeout for hydration.
needsHydration, reason := appNeedsHydration(origApp, h.statusRefreshTimeout)
if !needsHydration {
return
needsHydration, reason := appNeedsHydration(app)
if needsHydration {
app.Status.SourceHydrator.CurrentOperation = &appv1.HydrateOperation{
StartedAt: metav1.Now(),
FinishedAt: nil,
Phase: appv1.HydrateOperationPhaseHydrating,
SourceHydrator: *app.Spec.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
}
logCtx.WithField("reason", reason).Info("Hydrating app")
app.Status.SourceHydrator.CurrentOperation = &appv1.HydrateOperation{
StartedAt: metav1.Now(),
FinishedAt: nil,
Phase: appv1.HydrateOperationPhaseHydrating,
SourceHydrator: *app.Spec.SourceHydrator,
needsRefresh := app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.StartedAt.Time) > h.statusRefreshTimeout
if needsHydration || needsRefresh {
logCtx.WithField("reason", reason).Info("Hydrating app")
h.dependencies.AddHydrationQueueItem(getHydrationQueueKey(app))
} else {
logCtx.WithField("reason", reason).Debug("Skipping hydration")
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
origApp.Status.SourceHydrator = app.Status.SourceHydrator
h.dependencies.AddHydrationQueueItem(getHydrationQueueKey(app))
logCtx.Debug("Successfully processed app hydrate queue item")
}
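A minimal sketch of the enqueue decision above: hydration starts a new operation when needed, and an operation stuck in the Hydrating phase longer than the status refresh timeout is re-enqueued rather than left hanging; the types are simplified stand-ins for the appv1 ones:

    package main

    import (
        "fmt"
        "time"
    )

    type hydrateOp struct {
        phase     string
        startedAt time.Time
    }

    func shouldEnqueue(needsHydration bool, op *hydrateOp, refreshTimeout time.Duration) bool {
        needsRefresh := op != nil && op.phase == "Hydrating" &&
            time.Since(op.startedAt) > refreshTimeout
        return needsHydration || needsRefresh
    }

    func main() {
        stuck := &hydrateOp{phase: "Hydrating", startedAt: time.Now().Add(-10 * time.Minute)}
        fmt.Println(shouldEnqueue(false, stuck, 3*time.Minute)) // true: stale operation is retried
        fresh := &hydrateOp{phase: "Hydrating", startedAt: time.Now()}
        fmt.Println(shouldEnqueue(false, fresh, 3*time.Minute)) // false: still within the timeout
    }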
func getHydrationQueueKey(app *appv1.Application) types.HydrationQueueKey {
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
key := types.HydrationQueueKey{
SourceRepoURL: git.NormalizeGitURLAllowInvalid(app.Spec.SourceHydrator.DrySource.RepoURL),
SourceTargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
DestinationBranch: destinationBranch,
DestinationBranch: app.Spec.GetHydrateToSource().TargetRevision,
}
return key
}
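A minimal sketch of the dedupe idea behind the queue key: apps hydrating from the same dry source and writing to the same destination branch collapse to one work item. The struct and the URL normalization are simplified stand-ins for types.HydrationQueueKey and git.NormalizeGitURLAllowInvalid:

    package main

    import (
        "fmt"
        "strings"
    )

    type hydrationQueueKey struct {
        SourceRepoURL        string
        SourceTargetRevision string
        DestinationBranch    string
    }

    // crude normalization stand-in so equivalent repo URLs compare equal
    func normalizeRepoURL(u string) string {
        return strings.ToLower(strings.TrimSuffix(strings.TrimSuffix(u, "/"), ".git"))
    }

    func keyFor(repoURL, dryRevision, destBranch string) hydrationQueueKey {
        return hydrationQueueKey{
            SourceRepoURL:        normalizeRepoURL(repoURL),
            SourceTargetRevision: dryRevision,
            DestinationBranch:    destBranch,
        }
    }

    func main() {
        a := keyFor("https://example.com/org/deploy.git", "main", "env/prod")
        b := keyFor("https://example.com/org/deploy", "main", "env/prod")
        fmt.Println(a == b) // true: both apps land on the same hydration queue item
    }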
@@ -148,43 +144,92 @@ func getHydrationQueueKey(app *appv1.Application) types.HydrationQueueKey {
// hydration key, hydrates their latest commit, and updates their status accordingly. If the hydration fails, it marks
// the operation as failed and logs the error. If successful, it updates the operation to indicate that hydration was
// successful and requests a refresh of the applications to pick up the new hydrated commit.
func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKey) (processNext bool) {
func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKey) {
logCtx := log.WithFields(log.Fields{
"sourceRepoURL": hydrationKey.SourceRepoURL,
"sourceTargetRevision": hydrationKey.SourceTargetRevision,
"destinationBranch": hydrationKey.DestinationBranch,
})
relevantApps, drySHA, hydratedSHA, err := h.hydrateAppsLatestCommit(logCtx, hydrationKey)
if len(relevantApps) == 0 {
// return early if there are no relevant apps found to hydrate
// otherwise you'll be stuck in hydrating
logCtx.Info("Skipping hydration since there are no relevant apps found to hydrate")
// Get all applications sharing the same hydration key
apps, err := h.getAppsForHydrationKey(hydrationKey)
if err != nil {
// If we get an error here, we cannot proceed with hydration and we do not know
// which apps to update with the failure. The best we can do is log an error in
// the controller and wait for statusRefreshTimeout to retry
logCtx.WithError(err).Error("failed to get apps for hydration")
return
}
logCtx.WithField("appCount", len(apps))
// FIXME: we might end up in a race condition here where an HydrationQueueItem is processed
// before all applications had their CurrentOperation set by ProcessAppHydrateQueueItem.
// This would cause this method to update "old" CurrentOperation.
// It should only start hydration if all apps are in the HydrateOperationPhaseHydrating phase.
raceDetected := false
for _, app := range apps {
if app.Status.SourceHydrator.CurrentOperation == nil || app.Status.SourceHydrator.CurrentOperation.Phase != appv1.HydrateOperationPhaseHydrating {
raceDetected = true
break
}
}
if raceDetected {
logCtx.Warn("race condition detected: not all apps are in HydrateOperationPhaseHydrating phase")
}
// validate all the applications to make sure they are all correctly configured.
// All applications sharing the same hydration key must succeed for the hydration to be processed.
projects, validationErrors := h.validateApplications(apps)
if len(validationErrors) > 0 {
// For the applications that have an error, set the specific error in their status.
// Applications without error will still fail with a generic error since the hydration cannot be partial
genericError := genericHydrationError(validationErrors)
for _, app := range apps {
if err, ok := validationErrors[app.QualifiedName()]; ok {
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
logCtx.Errorf("failed to validate hydration app: %v", err)
h.setAppHydratorError(app, err)
} else {
h.setAppHydratorError(app, genericError)
}
}
return
}
// Hydrate all the apps
drySHA, hydratedSHA, appErrors, err := h.hydrate(logCtx, apps, projects)
if err != nil {
// If there is a single error, it affects every application
for i := range apps {
appErrors[apps[i].QualifiedName()] = err
}
}
if drySHA != "" {
logCtx = logCtx.WithField("drySHA", drySHA)
}
if err != nil {
logCtx.WithField("appCount", len(relevantApps)).WithError(err).Error("Failed to hydrate apps")
for _, app := range relevantApps {
origApp := app.DeepCopy()
app.Status.SourceHydrator.CurrentOperation.Phase = appv1.HydrateOperationPhaseFailed
failedAt := metav1.Now()
app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate revision %q: %v", drySHA, err.Error())
// We may or may not have gotten far enough in the hydration process to get a non-empty SHA, but set it just
// in case we did.
app.Status.SourceHydrator.CurrentOperation.DrySHA = drySHA
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
logCtx.Errorf("Failed to hydrate app: %v", err)
if len(appErrors) > 0 {
// For the applications that have an error, set the specific error in their status.
// Applications without error will still fail with a generic error since the hydration cannot be partial
genericError := genericHydrationError(appErrors)
for _, app := range apps {
if drySHA != "" {
// If we have a drySHA, we can set it on the app status
app.Status.SourceHydrator.CurrentOperation.DrySHA = drySHA
}
if err, ok := appErrors[app.QualifiedName()]; ok {
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
logCtx.Errorf("failed to hydrate app: %v", err)
h.setAppHydratorError(app, err)
} else {
h.setAppHydratorError(app, genericError)
}
}
return
}
logCtx.WithField("appCount", len(relevantApps)).Debug("Successfully hydrated apps")
logCtx.Debug("Successfully hydrated apps")
finishedAt := metav1.Now()
for _, app := range relevantApps {
for _, app := range apps {
origApp := app.DeepCopy()
operation := &appv1.HydrateOperation{
StartedAt: app.Status.SourceHydrator.CurrentOperation.StartedAt,
@@ -202,118 +247,123 @@ func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKe
SourceHydrator: app.Status.SourceHydrator.CurrentOperation.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
// Request a refresh since we pushed a new commit.
err := h.dependencies.RequestAppRefresh(app.Name, app.Namespace)
if err != nil {
logCtx.WithField("app", app.QualifiedName()).WithError(err).Error("Failed to request app refresh after hydration")
logCtx.WithFields(applog.GetAppLogFields(app)).WithError(err).Error("Failed to request app refresh after hydration")
}
}
return
}
func (h *Hydrator) hydrateAppsLatestCommit(logCtx *log.Entry, hydrationKey types.HydrationQueueKey) ([]*appv1.Application, string, string, error) {
relevantApps, projects, err := h.getRelevantAppsAndProjectsForHydration(logCtx, hydrationKey)
if err != nil {
return nil, "", "", fmt.Errorf("failed to get relevant apps for hydration: %w", err)
// setAppHydratorError updates the CurrentOperation with the error information.
func (h *Hydrator) setAppHydratorError(app *appv1.Application, err error) {
// if the operation is not in progress, we do not update the status
if app.Status.SourceHydrator.CurrentOperation.Phase != appv1.HydrateOperationPhaseHydrating {
return
}
dryRevision, hydratedRevision, err := h.hydrate(logCtx, relevantApps, projects)
if err != nil {
return relevantApps, dryRevision, "", fmt.Errorf("failed to hydrate apps: %w", err)
}
return relevantApps, dryRevision, hydratedRevision, nil
origApp := app.DeepCopy()
app.Status.SourceHydrator.CurrentOperation.Phase = appv1.HydrateOperationPhaseFailed
failedAt := metav1.Now()
app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate: %v", err.Error())
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
}
func (h *Hydrator) getRelevantAppsAndProjectsForHydration(logCtx *log.Entry, hydrationKey types.HydrationQueueKey) ([]*appv1.Application, map[string]*appv1.AppProject, error) {
// getAppsForHydrationKey returns the applications matching the hydration key.
func (h *Hydrator) getAppsForHydrationKey(hydrationKey types.HydrationQueueKey) ([]*appv1.Application, error) {
// Get all apps
apps, err := h.dependencies.GetProcessableApps()
if err != nil {
return nil, nil, fmt.Errorf("failed to list apps: %w", err)
return nil, fmt.Errorf("failed to list apps: %w", err)
}
var relevantApps []*appv1.Application
projects := make(map[string]*appv1.AppProject)
uniquePaths := make(map[string]bool, len(apps.Items))
for _, app := range apps.Items {
if app.Spec.SourceHydrator == nil {
continue
}
if !git.SameURL(app.Spec.SourceHydrator.DrySource.RepoURL, hydrationKey.SourceRepoURL) ||
app.Spec.SourceHydrator.DrySource.TargetRevision != hydrationKey.SourceTargetRevision {
continue
}
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
if destinationBranch != hydrationKey.DestinationBranch {
appKey := getHydrationQueueKey(&app)
if appKey != hydrationKey {
continue
}
relevantApps = append(relevantApps, &app)
}
return relevantApps, nil
}
path := app.Spec.SourceHydrator.SyncSource.Path
// ensure that the path is always set to a path that doesn't resolve to the root of the repo
if IsRootPath(path) {
return nil, nil, fmt.Errorf("app %q has path %q which resolves to repository root", app.QualifiedName(), path)
}
// validateApplications checks that all applications are valid for hydration.
func (h *Hydrator) validateApplications(apps []*appv1.Application) (map[string]*appv1.AppProject, map[string]error) {
projects := make(map[string]*appv1.AppProject)
errors := make(map[string]error)
uniquePaths := make(map[string]string, len(apps))
var proj *appv1.AppProject
for _, app := range apps {
// Get the project for the app and validate if the app is allowed to use the source.
// We can't short-circuit this even if we have seen this project before, because we need to verify that this
// particular app is allowed to use this project. That logic is in GetProcessableAppProj.
proj, err = h.dependencies.GetProcessableAppProj(&app)
// particular app is allowed to use this project.
proj, err := h.dependencies.GetProcessableAppProj(app)
if err != nil {
return nil, nil, fmt.Errorf("failed to get project %q for app %q: %w", app.Spec.Project, app.QualifiedName(), err)
errors[app.QualifiedName()] = fmt.Errorf("failed to get project %q: %w", app.Spec.Project, err)
continue
}
permitted := proj.IsSourcePermitted(app.Spec.GetSource())
if !permitted {
// Log and skip. We don't want to fail the entire operation because of one app.
logCtx.Warnf("App %q is not permitted to use source %q", app.QualifiedName(), app.Spec.Source.String())
errors[app.QualifiedName()] = fmt.Errorf("application repo %s is not permitted in project '%s'", app.Spec.GetSource().RepoURL, proj.Name)
continue
}
projects[app.Spec.Project] = proj
// Disallow hydrating to the repository root.
// Hydrating to root would overwrite or delete files at the top level of the repo,
// which can break other applications or shared configuration.
// Every hydrated app must write into a subdirectory instead.
destPath := app.Spec.SourceHydrator.SyncSource.Path
if IsRootPath(destPath) {
errors[app.QualifiedName()] = fmt.Errorf("app is configured to hydrate to the repository root (branch %q, path %q) which is not allowed", app.Spec.GetHydrateToSource().TargetRevision, destPath)
continue
}
// TODO: test the dupe detection
// TODO: normalize the path to avoid "path/.." from being treated as different from "."
if _, ok := uniquePaths[path]; ok {
return nil, nil, fmt.Errorf("multiple app hydrators use the same destination: %v", app.Spec.SourceHydrator.SyncSource.Path)
if appName, ok := uniquePaths[destPath]; ok {
errors[app.QualifiedName()] = fmt.Errorf("app %s hydrator uses the same destination: %v", appName, app.Spec.SourceHydrator.SyncSource.Path)
errors[appName] = fmt.Errorf("app %s hydrator uses the same destination: %v", app.QualifiedName(), app.Spec.SourceHydrator.SyncSource.Path)
continue
}
uniquePaths[path] = true
relevantApps = append(relevantApps, &app)
uniquePaths[destPath] = app.QualifiedName()
}
return relevantApps, projects, nil
// If there are any errors, return nil for projects to avoid possible partial processing.
if len(errors) > 0 {
projects = nil
}
return projects, errors
}
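A minimal sketch of the repository-root guard used in the validation above, assuming the same filepath.Clean normalization as IsRootPath; the exact condition is an assumption, since the function body is truncated in this diff:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // isRepoRoot is a hypothetical stand-in for IsRootPath: paths that clean down
    // to the repository root are rejected so hydration never overwrites top-level files.
    func isRepoRoot(path string) bool {
        clean := filepath.Clean(path)
        return clean == "." || clean == "/" || clean == ""
    }

    func main() {
        fmt.Println(isRepoRoot("manifests/prod")) // false: hydrates into a subdirectory
        fmt.Println(isRepoRoot("env/.."))         // true: resolves back to the root
        fmt.Println(isRepoRoot("."))              // true
    }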
func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, projects map[string]*appv1.AppProject) (string, string, error) {
func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, projects map[string]*appv1.AppProject) (string, string, map[string]error, error) {
errors := make(map[string]error)
if len(apps) == 0 {
return "", "", nil
return "", "", nil, nil
}
// These values are the same for all apps being hydrated together, so just get them from the first app.
repoURL := apps[0].Spec.SourceHydrator.DrySource.RepoURL
syncBranch := apps[0].Spec.SourceHydrator.SyncSource.TargetBranch
repoURL := apps[0].Spec.GetHydrateToSource().RepoURL
targetBranch := apps[0].Spec.GetHydrateToSource().TargetRevision
// Disallow hydrating to the repository root.
// Hydrating to root would overwrite or delete files at the top level of the repo,
// which can break other applications or shared configuration.
// Every hydrated app must write into a subdirectory instead.
for _, app := range apps {
destPath := app.Spec.SourceHydrator.SyncSource.Path
if IsRootPath(destPath) {
return "", "", fmt.Errorf(
"app %q is configured to hydrate to the repository root (branch %q, path %q) which is not allowed",
app.QualifiedName(), targetBranch, destPath,
)
}
}
// FIXME: As a convenience, the commit server will create the syncBranch if it does not exist. If the
// targetBranch does not exist, it will create it based on the syncBranch. On the next line, we take
// the `syncBranch` from the first app and assume that they're all configured the same. Instead, if any
// app has a different syncBranch, we should send the commit server an empty string and allow it to
// create the targetBranch as an orphan since we can't reliably determine a reasonable base.
syncBranch := apps[0].Spec.SourceHydrator.SyncSource.TargetBranch
// Get a static SHA revision from the first app so that all apps are hydrated from the same revision.
targetRevision, pathDetails, err := h.getManifests(context.Background(), apps[0], "", projects[apps[0].Spec.Project])
if err != nil {
return "", "", fmt.Errorf("failed to get manifests for app %q: %w", apps[0].QualifiedName(), err)
errors[apps[0].QualifiedName()] = fmt.Errorf("failed to get manifests: %w", err)
return "", "", errors, nil
}
paths := []*commitclient.PathDetails{pathDetails}
@@ -324,18 +374,18 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
app := app
eg.Go(func() error {
_, pathDetails, err = h.getManifests(ctx, app, targetRevision, projects[app.Spec.Project])
if err != nil {
return fmt.Errorf("failed to get manifests for app %q: %w", app.QualifiedName(), err)
}
mu.Lock()
defer mu.Unlock()
if err != nil {
errors[app.QualifiedName()] = fmt.Errorf("failed to get manifests: %w", err)
return errors[app.QualifiedName()]
}
paths = append(paths, pathDetails)
mu.Unlock()
return nil
})
}
err = eg.Wait()
if err != nil {
return "", "", fmt.Errorf("failed to get manifests for apps: %w", err)
if err := eg.Wait(); err != nil {
return targetRevision, "", errors, nil
}
// If all the apps are under the same project, use that project. Otherwise, use an empty string to indicate that we
@@ -344,18 +394,19 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
if len(projects) == 1 {
for p := range projects {
project = p
break
}
}
// Get the commit metadata for the target revision.
revisionMetadata, err := h.getRevisionMetadata(context.Background(), repoURL, project, targetRevision)
if err != nil {
return "", "", fmt.Errorf("failed to get revision metadata for %q: %w", targetRevision, err)
return targetRevision, "", errors, fmt.Errorf("failed to get revision metadata for %q: %w", targetRevision, err)
}
repo, err := h.dependencies.GetWriteCredentials(context.Background(), repoURL, project)
if err != nil {
return "", "", fmt.Errorf("failed to get hydrator credentials: %w", err)
return targetRevision, "", errors, fmt.Errorf("failed to get hydrator credentials: %w", err)
}
if repo == nil {
// Try without credentials.
@@ -367,11 +418,11 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
// get the commit message template
commitMessageTemplate, err := h.dependencies.GetHydratorCommitMessageTemplate()
if err != nil {
return "", "", fmt.Errorf("failed to get hydrated commit message template: %w", err)
return targetRevision, "", errors, fmt.Errorf("failed to get hydrated commit message template: %w", err)
}
commitMessage, errMsg := getTemplatedCommitMessage(repoURL, targetRevision, commitMessageTemplate, revisionMetadata)
if errMsg != nil {
return "", "", fmt.Errorf("failed to get hydrator commit templated message: %w", errMsg)
return targetRevision, "", errors, fmt.Errorf("failed to get hydrator commit templated message: %w", errMsg)
}
manifestsRequest := commitclient.CommitHydratedManifestsRequest{
@@ -386,14 +437,14 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
closer, commitService, err := h.commitClientset.NewCommitServerClient()
if err != nil {
return targetRevision, "", fmt.Errorf("failed to create commit service: %w", err)
return targetRevision, "", errors, fmt.Errorf("failed to create commit service: %w", err)
}
defer utilio.Close(closer)
resp, err := commitService.CommitHydratedManifests(context.Background(), &manifestsRequest)
if err != nil {
return targetRevision, "", fmt.Errorf("failed to commit hydrated manifests: %w", err)
return targetRevision, "", errors, fmt.Errorf("failed to commit hydrated manifests: %w", err)
}
return targetRevision, resp.HydratedSha, nil
return targetRevision, resp.HydratedSha, errors, nil
}
// getManifests gets the manifests for the given application and target revision. It returns the resolved revision
@@ -456,34 +507,27 @@ func (h *Hydrator) getRevisionMetadata(ctx context.Context, repoURL, project, re
}
// appNeedsHydration answers if application needs manifests hydrated.
func appNeedsHydration(app *appv1.Application, statusHydrateTimeout time.Duration) (needsHydration bool, reason string) {
if app.Spec.SourceHydrator == nil {
return false, "source hydrator not configured"
}
var hydratedAt *metav1.Time
if app.Status.SourceHydrator.CurrentOperation != nil {
hydratedAt = &app.Status.SourceHydrator.CurrentOperation.StartedAt
}
func appNeedsHydration(app *appv1.Application) (needsHydration bool, reason string) {
switch {
case app.IsHydrateRequested():
return true, "hydrate requested"
case app.Spec.SourceHydrator == nil:
return false, "source hydrator not configured"
case app.Status.SourceHydrator.CurrentOperation == nil:
return true, "no previous hydrate operation"
case app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating:
return false, "hydration operation already in progress"
case app.IsHydrateRequested():
return true, "hydrate requested"
case !app.Spec.SourceHydrator.DeepEquals(app.Status.SourceHydrator.CurrentOperation.SourceHydrator):
return true, "spec.sourceHydrator differs"
case app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseFailed && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.FinishedAt.Time) > 2*time.Minute:
return true, "previous hydrate operation failed more than 2 minutes ago"
case hydratedAt == nil || hydratedAt.Add(statusHydrateTimeout).Before(time.Now().UTC()):
return true, "hydration expired"
}
return false, ""
return false, "hydration not needed"
}
// Gets the multi-line commit message based on the template defined in the configmap. It is a two step process:
// 1. Get the metadata template engine would use to render the template
// getTemplatedCommitMessage gets the multi-line commit message based on the template defined in the configmap. It is a two step process:
// 1. Get the metadata template engine would use to render the template
// 2. Pass the output of Step 1 and Step 2 to template Render
func getTemplatedCommitMessage(repoURL, revision, commitMessageTemplate string, dryCommitMetadata *appv1.RevisionMetadata) (string, error) {
hydratorCommitMetadata, err := hydrator.GetCommitMetadata(repoURL, revision, dryCommitMetadata)
@@ -497,6 +541,20 @@ func getTemplatedCommitMessage(repoURL, revision, commitMessageTemplate string,
return templatedCommitMsg, nil
}
// genericHydrationError returns an error that summarizes the hydration errors for all applications.
func genericHydrationError(validationErrors map[string]error) error {
if len(validationErrors) == 0 {
return nil
}
keys := slices.Sorted(maps.Keys(validationErrors))
remainder := "has an error"
if len(keys) > 1 {
remainder = fmt.Sprintf("and %d more have errors", len(keys)-1)
}
return fmt.Errorf("cannot hydrate because application %s %s", keys[0], remainder)
}
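A usage sketch for genericHydrationError as defined above (package name and app names assumed for illustration): the keys are sorted, so the first application is named and the rest are counted:

    // fragment assumed to live in the same package as genericHydrationError above
    package hydrator

    import "errors"

    func exampleSummary() string {
        errs := map[string]error{
            "argocd/app-a": errors.New("repo not permitted in project"),
            "argocd/app-b": errors.New("destination path is the repository root"),
            "argocd/app-c": errors.New("failed to get project"),
        }
        // keys sort alphabetically, so app-a is named and the other two are counted:
        //   "cannot hydrate because application argocd/app-a and 2 more have errors"
        // with a single entry the message would end "... argocd/app-a has an error"
        return genericHydrationError(errs).Error()
    }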
// IsRootPath returns whether the path references a root path
func IsRootPath(path string) bool {
clean := filepath.Clean(path)

File diff suppressed because it is too large

controller/hydrator/mocks/RepoGetter.go (generated, new file, 113 lines)
View File

@@ -0,0 +1,113 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
package mocks
import (
"context"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
mock "github.com/stretchr/testify/mock"
)
// NewRepoGetter creates a new instance of RepoGetter. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewRepoGetter(t interface {
mock.TestingT
Cleanup(func())
}) *RepoGetter {
mock := &RepoGetter{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
// RepoGetter is an autogenerated mock type for the RepoGetter type
type RepoGetter struct {
mock.Mock
}
type RepoGetter_Expecter struct {
mock *mock.Mock
}
func (_m *RepoGetter) EXPECT() *RepoGetter_Expecter {
return &RepoGetter_Expecter{mock: &_m.Mock}
}
// GetRepository provides a mock function for the type RepoGetter
func (_mock *RepoGetter) GetRepository(ctx context.Context, repoURL string, project string) (*v1alpha1.Repository, error) {
ret := _mock.Called(ctx, repoURL, project)
if len(ret) == 0 {
panic("no return value specified for GetRepository")
}
var r0 *v1alpha1.Repository
var r1 error
if returnFunc, ok := ret.Get(0).(func(context.Context, string, string) (*v1alpha1.Repository, error)); ok {
return returnFunc(ctx, repoURL, project)
}
if returnFunc, ok := ret.Get(0).(func(context.Context, string, string) *v1alpha1.Repository); ok {
r0 = returnFunc(ctx, repoURL, project)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*v1alpha1.Repository)
}
}
if returnFunc, ok := ret.Get(1).(func(context.Context, string, string) error); ok {
r1 = returnFunc(ctx, repoURL, project)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// RepoGetter_GetRepository_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetRepository'
type RepoGetter_GetRepository_Call struct {
*mock.Call
}
// GetRepository is a helper method to define mock.On call
// - ctx context.Context
// - repoURL string
// - project string
func (_e *RepoGetter_Expecter) GetRepository(ctx interface{}, repoURL interface{}, project interface{}) *RepoGetter_GetRepository_Call {
return &RepoGetter_GetRepository_Call{Call: _e.mock.On("GetRepository", ctx, repoURL, project)}
}
func (_c *RepoGetter_GetRepository_Call) Run(run func(ctx context.Context, repoURL string, project string)) *RepoGetter_GetRepository_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 string
if args[1] != nil {
arg1 = args[1].(string)
}
var arg2 string
if args[2] != nil {
arg2 = args[2].(string)
}
run(
arg0,
arg1,
arg2,
)
})
return _c
}
func (_c *RepoGetter_GetRepository_Call) Return(repository *v1alpha1.Repository, err error) *RepoGetter_GetRepository_Call {
_c.Call.Return(repository, err)
return _c
}
func (_c *RepoGetter_GetRepository_Call) RunAndReturn(run func(ctx context.Context, repoURL string, project string) (*v1alpha1.Repository, error)) *RepoGetter_GetRepository_Call {
_c.Call.Return(run)
return _c
}
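A usage sketch for the generated mock, as a fragment from inside a test; it assumes the mocks package shown above, the testify mock and require packages, and that v1alpha1.Repository exposes a Repo field (repo-internal details are assumptions):

    // fragment from inside a test function
    repoGetter := mocks.NewRepoGetter(t) // expectations auto-asserted via t.Cleanup
    repoGetter.EXPECT().
        GetRepository(mock.Anything, "https://example.com/org/deploy.git", "default").
        Return(&v1alpha1.Repository{Repo: "https://example.com/org/deploy.git"}, nil)

    repo, err := repoGetter.GetRepository(t.Context(), "https://example.com/org/deploy.git", "default")
    require.NoError(t, err)
    require.Equal(t, "https://example.com/org/deploy.git", repo.Repo)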

View File

@@ -43,7 +43,7 @@ func TestGetRepoObjs(t *testing.T) {
},
}
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
source := app.Spec.GetSource()
source.RepoURL = "oci://example.com/argo/argo-cd"
@@ -92,7 +92,7 @@ func TestGetHydratorCommitMessageTemplate_WhenTemplateisNotDefined_FallbackToDef
},
}
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
tmpl, err := ctrl.GetHydratorCommitMessageTemplate()
require.NoError(t, err)
@@ -115,7 +115,7 @@ func TestGetHydratorCommitMessageTemplate(t *testing.T) {
configMapData: cm.Data,
}
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
tmpl, err := ctrl.GetHydratorCommitMessageTemplate()
require.NoError(t, err)

View File

@@ -13,12 +13,12 @@ import (
)
func TestMetricClusterConnectivity(t *testing.T) {
db := dbmocks.ArgoDB{}
db := &dbmocks.ArgoDB{}
cluster1 := v1alpha1.Cluster{Name: "cluster1", Server: "server1", Labels: map[string]string{"env": "dev", "team": "team1"}}
cluster2 := v1alpha1.Cluster{Name: "cluster2", Server: "server2", Labels: map[string]string{"env": "staging", "team": "team2"}}
cluster3 := v1alpha1.Cluster{Name: "cluster3", Server: "server3", Labels: map[string]string{"env": "production", "team": "team3"}}
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{cluster1, cluster2, cluster3}}
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
type testCases struct {
testCombination

View File

@@ -263,8 +263,8 @@ func newFakeApp(fakeAppYAML string) *argoappv1.Application {
return &app
}
func newFakeLister(fakeAppYAMLs ...string) (context.CancelFunc, applister.ApplicationLister) {
ctx, cancel := context.WithCancel(context.Background())
func newFakeLister(ctx context.Context, fakeAppYAMLs ...string) (context.CancelFunc, applister.ApplicationLister) {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
var fakeApps []runtime.Object
for _, appYAML := range fakeAppYAMLs {
@@ -319,11 +319,11 @@ func testMetricServer(t *testing.T, fakeAppYAMLs []string, expectedResponse stri
func runTest(t *testing.T, cfg TestMetricServerConfig) {
t.Helper()
cancel, appLister := newFakeLister(cfg.FakeAppYAMLs...)
cancel, appLister := newFakeLister(t.Context(), cfg.FakeAppYAMLs...)
defer cancel()
mockDB := mocks.NewArgoDB(t)
mockDB.On("GetClusterServersByName", mock.Anything, "cluster1").Return([]string{"https://localhost:6443"}, nil)
mockDB.On("GetCluster", mock.Anything, "https://localhost:6443").Return(&argoappv1.Cluster{Name: "cluster1", Server: "https://localhost:6443"}, nil)
mockDB.EXPECT().GetClusterServersByName(mock.Anything, "cluster1").Return([]string{"https://localhost:6443"}, nil).Maybe()
mockDB.EXPECT().GetCluster(mock.Anything, "https://localhost:6443").Return(&argoappv1.Cluster{Name: "cluster1", Server: "https://localhost:6443"}, nil).Maybe()
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, cfg.AppLabels, cfg.AppConditions, mockDB)
require.NoError(t, err)
@@ -333,7 +333,7 @@ func runTest(t *testing.T, cfg TestMetricServerConfig) {
metricsServ.registry.MustRegister(collector)
}
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
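These metrics tests switch from http.NewRequest to http.NewRequestWithContext with t.Context(), so each request is scoped to the test; a minimal self-contained sketch of the same pattern against a stub handler:

    package example_test

    import (
        "net/http"
        "net/http/httptest"
        "testing"

        "github.com/stretchr/testify/require"
    )

    func TestMetricsEndpointPattern(t *testing.T) {
        // t.Context() scopes the request to the test, so it is canceled (and any
        // downstream work aborted) as soon as the test finishes.
        req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
        require.NoError(t, err)

        handler := http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
            w.WriteHeader(http.StatusOK)
        })
        rr := httptest.NewRecorder()
        handler.ServeHTTP(rr, req)
        require.Equal(t, http.StatusOK, rr.Code)
    }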
@@ -472,7 +472,7 @@ argocd_app_condition{condition="ExcludedResourceWarning",name="my-app-4",namespa
}
func TestMetricsSyncCounter(t *testing.T) {
cancel, appLister := newFakeLister()
cancel, appLister := newFakeLister(t.Context())
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -493,7 +493,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
metricsServ.IncSync(fakeApp, "https://localhost:6443", &argoappv1.OperationState{Phase: common.OperationSucceeded})
metricsServ.IncSync(fakeApp, "https://localhost:6443", &argoappv1.OperationState{Phase: common.OperationSucceeded})
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -526,7 +526,7 @@ func assertMetricsNotPrinted(t *testing.T, expectedLines, body string) {
}
func TestMetricsSyncDuration(t *testing.T) {
cancel, appLister := newFakeLister()
cancel, appLister := newFakeLister(t.Context())
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -536,7 +536,7 @@ func TestMetricsSyncDuration(t *testing.T) {
fakeAppOperationRunning := newFakeApp(fakeAppOperationRunning)
metricsServ.IncAppSyncDuration(fakeAppOperationRunning, "https://localhost:6443", fakeAppOperationRunning.Status.OperationState)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -550,7 +550,7 @@ func TestMetricsSyncDuration(t *testing.T) {
fakeAppOperationFinished := newFakeApp(fakeAppOperationFinished)
metricsServ.IncAppSyncDuration(fakeAppOperationFinished, "https://localhost:6443", fakeAppOperationFinished.Status.OperationState)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -567,7 +567,7 @@ argocd_app_sync_duration_seconds_total{dest_server="https://localhost:6443",name
}
func TestReconcileMetrics(t *testing.T) {
cancel, appLister := newFakeLister()
cancel, appLister := newFakeLister(t.Context())
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -590,7 +590,7 @@ argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argoc
fakeApp := newFakeApp(fakeApp)
metricsServ.IncReconcile(fakeApp, "https://localhost:6443", 5*time.Second)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -601,7 +601,7 @@ argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argoc
}
func TestOrphanedResourcesMetric(t *testing.T) {
cancel, appLister := newFakeLister()
cancel, appLister := newFakeLister(t.Context())
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -616,7 +616,7 @@ argocd_app_orphaned_resources_count{name="my-app-4",namespace="argocd",project="
numOrphanedResources := 1
metricsServ.SetOrphanedResourcesMetric(app, numOrphanedResources)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -627,7 +627,7 @@ argocd_app_orphaned_resources_count{name="my-app-4",namespace="argocd",project="
}
func TestMetricsReset(t *testing.T) {
cancel, appLister := newFakeLister()
cancel, appLister := newFakeLister(t.Context())
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -641,7 +641,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name="my-app",namespace="argocd",phase="Succeeded",project="important-project"} 2
`
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -652,7 +652,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
err = metricsServ.SetExpiration(time.Second)
require.NoError(t, err)
time.Sleep(2 * time.Second)
req, err = http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
req, err = http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr = httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -665,7 +665,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
}
func TestWorkqueueMetrics(t *testing.T) {
cancel, appLister := newFakeLister()
cancel, appLister := newFakeLister(t.Context())
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -685,7 +685,7 @@ workqueue_unfinished_work_seconds{controller="test",name="test"}
`
workqueue.NewNamed("test")
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -696,7 +696,7 @@ workqueue_unfinished_work_seconds{controller="test",name="test"}
}
func TestGoMetrics(t *testing.T) {
cancel, appLister := newFakeLister()
cancel, appLister := newFakeLister(t.Context())
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -718,7 +718,7 @@ go_memstats_sys_bytes
go_threads
`
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)

View File

@@ -28,7 +28,7 @@ import (
func TestGetShardByID_NotEmptyID(t *testing.T) {
db := &dbmocks.ArgoDB{}
replicasCount := 1
db.On("GetApplicationControllerReplicas").Return(replicasCount)
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
assert.Equal(t, 0, LegacyDistributionFunction(replicasCount)(&v1alpha1.Cluster{ID: "1"}))
assert.Equal(t, 0, LegacyDistributionFunction(replicasCount)(&v1alpha1.Cluster{ID: "2"}))
assert.Equal(t, 0, LegacyDistributionFunction(replicasCount)(&v1alpha1.Cluster{ID: "3"}))
@@ -38,7 +38,7 @@ func TestGetShardByID_NotEmptyID(t *testing.T) {
func TestGetShardByID_EmptyID(t *testing.T) {
db := &dbmocks.ArgoDB{}
replicasCount := 1
db.On("GetApplicationControllerReplicas").Return(replicasCount)
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
distributionFunction := LegacyDistributionFunction
shard := distributionFunction(replicasCount)(&v1alpha1.Cluster{})
assert.Equal(t, 0, shard)
@@ -46,7 +46,7 @@ func TestGetShardByID_EmptyID(t *testing.T) {
func TestGetShardByID_NoReplicas(t *testing.T) {
db := &dbmocks.ArgoDB{}
db.On("GetApplicationControllerReplicas").Return(0)
db.EXPECT().GetApplicationControllerReplicas().Return(0).Maybe()
distributionFunction := LegacyDistributionFunction
shard := distributionFunction(0)(&v1alpha1.Cluster{})
assert.Equal(t, -1, shard)
@@ -54,7 +54,7 @@ func TestGetShardByID_NoReplicas(t *testing.T) {
func TestGetShardByID_NoReplicasUsingHashDistributionFunction(t *testing.T) {
db := &dbmocks.ArgoDB{}
db.On("GetApplicationControllerReplicas").Return(0)
db.EXPECT().GetApplicationControllerReplicas().Return(0).Maybe()
distributionFunction := LegacyDistributionFunction
shard := distributionFunction(0)(&v1alpha1.Cluster{})
assert.Equal(t, -1, shard)
@@ -63,7 +63,7 @@ func TestGetShardByID_NoReplicasUsingHashDistributionFunction(t *testing.T) {
func TestGetShardByID_NoReplicasUsingHashDistributionFunctionWithClusters(t *testing.T) {
clusters, db, cluster1, cluster2, cluster3, cluster4, cluster5 := createTestClusters()
// Test with replicas set to 0
db.On("GetApplicationControllerReplicas").Return(0)
db.EXPECT().GetApplicationControllerReplicas().Return(0).Maybe()
t.Setenv(common.EnvControllerShardingAlgorithm, common.RoundRobinShardingAlgorithm)
distributionFunction := RoundRobinDistributionFunction(clusters, 0)
assert.Equal(t, -1, distributionFunction(nil))
@@ -91,7 +91,7 @@ func TestGetClusterFilterLegacy(t *testing.T) {
// shardIndex := 1 // ensuring that a shard with index 1 will process all the clusters with an "even" id (2,4,6,...)
clusterAccessor, db, cluster1, cluster2, cluster3, cluster4, _ := createTestClusters()
replicasCount := 2
db.On("GetApplicationControllerReplicas").Return(replicasCount)
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
t.Setenv(common.EnvControllerShardingAlgorithm, common.LegacyShardingAlgorithm)
distributionFunction := RoundRobinDistributionFunction(clusterAccessor, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
@@ -109,7 +109,7 @@ func TestGetClusterFilterUnknown(t *testing.T) {
os.Unsetenv(common.EnvControllerShardingAlgorithm)
t.Setenv(common.EnvControllerShardingAlgorithm, "unknown")
replicasCount := 2
db.On("GetApplicationControllerReplicas").Return(replicasCount)
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
distributionFunction := GetDistributionFunction(clusterAccessor, appAccessor, "unknown", replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -124,7 +124,7 @@ func TestLegacyGetClusterFilterWithFixedShard(t *testing.T) {
clusterAccessor, db, cluster1, cluster2, cluster3, cluster4, _ := createTestClusters()
appAccessor, _, _, _, _, _ := createTestApps()
replicasCount := 5
db.On("GetApplicationControllerReplicas").Return(replicasCount)
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
filter := GetDistributionFunction(clusterAccessor, appAccessor, common.DefaultShardingAlgorithm, replicasCount)
assert.Equal(t, 0, filter(nil))
assert.Equal(t, 4, filter(&cluster1))
@@ -151,7 +151,7 @@ func TestRoundRobinGetClusterFilterWithFixedShard(t *testing.T) {
clusterAccessor, db, cluster1, cluster2, cluster3, cluster4, _ := createTestClusters()
appAccessor, _, _, _, _, _ := createTestApps()
replicasCount := 4
db.On("GetApplicationControllerReplicas").Return(replicasCount)
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
filter := GetDistributionFunction(clusterAccessor, appAccessor, common.RoundRobinShardingAlgorithm, replicasCount)
assert.Equal(t, 0, filter(nil))
@@ -182,7 +182,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunction2(t *testing.T) {
t.Run("replicas set to 1", func(t *testing.T) {
replicasCount := 1
db.On("GetApplicationControllerReplicas").Return(replicasCount).Once()
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Once()
distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -194,7 +194,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunction2(t *testing.T) {
t.Run("replicas set to 2", func(t *testing.T) {
replicasCount := 2
db.On("GetApplicationControllerReplicas").Return(replicasCount).Once()
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Once()
distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -206,7 +206,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunction2(t *testing.T) {
t.Run("replicas set to 3", func(t *testing.T) {
replicasCount := 3
db.On("GetApplicationControllerReplicas").Return(replicasCount).Once()
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Once()
distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -232,7 +232,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterNumber
t.Setenv(common.EnvControllerReplicas, strconv.Itoa(replicasCount))
_, db, _, _, _, _, _ := createTestClusters()
clusterAccessor := func() []*v1alpha1.Cluster { return clusterPointers }
db.On("GetApplicationControllerReplicas").Return(replicasCount)
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
distributionFunction := RoundRobinDistributionFunction(clusterAccessor, replicasCount)
for i, c := range clusterPointers {
assert.Equal(t, i%2, distributionFunction(c))
@@ -240,7 +240,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterNumber
}
func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterIsAddedAndRemoved(t *testing.T) {
db := dbmocks.ArgoDB{}
db := &dbmocks.ArgoDB{}
cluster1 := createCluster("cluster1", "1")
cluster2 := createCluster("cluster2", "2")
cluster3 := createCluster("cluster3", "3")
@@ -252,10 +252,10 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterIsAdde
clusterAccessor := getClusterAccessor(clusters)
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{cluster1, cluster2, cluster3, cluster4, cluster5}}
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
// Test with replicas set to 2
replicasCount := 2
db.On("GetApplicationControllerReplicas").Return(replicasCount)
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
distributionFunction := RoundRobinDistributionFunction(clusterAccessor, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -277,7 +277,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterIsAdde
}
func TestConsistentHashingWhenClusterIsAddedAndRemoved(t *testing.T) {
db := dbmocks.ArgoDB{}
db := &dbmocks.ArgoDB{}
clusterCount := 133
prefix := "cluster"
@@ -290,10 +290,10 @@ func TestConsistentHashingWhenClusterIsAddedAndRemoved(t *testing.T) {
clusterAccessor := getClusterAccessor(clusters)
appAccessor, _, _, _, _, _ := createTestApps()
clusterList := &v1alpha1.ClusterList{Items: clusters}
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
// Test with replicas set to 3
replicasCount := 3
db.On("GetApplicationControllerReplicas").Return(replicasCount)
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
distributionFunction := ConsistentHashingWithBoundedLoadsDistributionFunction(clusterAccessor, appAccessor, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
distributionMap := map[int]int{}
@@ -347,32 +347,32 @@ func TestConsistentHashingWhenClusterIsAddedAndRemoved(t *testing.T) {
}
func TestConsistentHashingWhenClusterWithZeroReplicas(t *testing.T) {
db := dbmocks.ArgoDB{}
db := &dbmocks.ArgoDB{}
clusters := []v1alpha1.Cluster{createCluster("cluster-01", "01")}
clusterAccessor := getClusterAccessor(clusters)
clusterList := &v1alpha1.ClusterList{Items: clusters}
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
appAccessor, _, _, _, _, _ := createTestApps()
// Test with replicas set to 0
replicasCount := 0
db.On("GetApplicationControllerReplicas").Return(replicasCount)
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
distributionFunction := ConsistentHashingWithBoundedLoadsDistributionFunction(clusterAccessor, appAccessor, replicasCount)
assert.Equal(t, -1, distributionFunction(nil))
}
func TestConsistentHashingWhenClusterWithFixedShard(t *testing.T) {
db := dbmocks.ArgoDB{}
db := &dbmocks.ArgoDB{}
var fixedShard int64 = 1
cluster := &v1alpha1.Cluster{ID: "1", Shard: &fixedShard}
clusters := []v1alpha1.Cluster{*cluster}
clusterAccessor := getClusterAccessor(clusters)
clusterList := &v1alpha1.ClusterList{Items: clusters}
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
// Test with replicas set to 5
replicasCount := 5
db.On("GetApplicationControllerReplicas").Return(replicasCount)
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
appAccessor, _, _, _, _, _ := createTestApps()
distributionFunction := ConsistentHashingWithBoundedLoadsDistributionFunction(clusterAccessor, appAccessor, replicasCount)
assert.Equal(t, fixedShard, int64(distributionFunction(cluster)))
@@ -381,7 +381,7 @@ func TestConsistentHashingWhenClusterWithFixedShard(t *testing.T) {
func TestGetShardByIndexModuloReplicasCountDistributionFunction(t *testing.T) {
clusters, db, cluster1, cluster2, _, _, _ := createTestClusters()
replicasCount := 2
db.On("GetApplicationControllerReplicas").Return(replicasCount)
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
// Test that the function returns the correct shard for cluster1 and cluster2
@@ -419,7 +419,7 @@ func TestInferShard(t *testing.T) {
}
func createTestClusters() (clusterAccessor, *dbmocks.ArgoDB, v1alpha1.Cluster, v1alpha1.Cluster, v1alpha1.Cluster, v1alpha1.Cluster, v1alpha1.Cluster) {
db := dbmocks.ArgoDB{}
db := &dbmocks.ArgoDB{}
cluster1 := createCluster("cluster1", "1")
cluster2 := createCluster("cluster2", "2")
cluster3 := createCluster("cluster3", "3")
@@ -428,10 +428,10 @@ func createTestClusters() (clusterAccessor, *dbmocks.ArgoDB, v1alpha1.Cluster, v
clusters := []v1alpha1.Cluster{cluster1, cluster2, cluster3, cluster4, cluster5}
db.On("ListClusters", mock.Anything).Return(&v1alpha1.ClusterList{Items: []v1alpha1.Cluster{
db.EXPECT().ListClusters(mock.Anything).Return(&v1alpha1.ClusterList{Items: []v1alpha1.Cluster{
cluster1, cluster2, cluster3, cluster4, cluster5,
}}, nil)
return getClusterAccessor(clusters), &db, cluster1, cluster2, cluster3, cluster4, cluster5
return getClusterAccessor(clusters), db, cluster1, cluster2, cluster3, cluster4, cluster5
}
func getClusterAccessor(clusters []v1alpha1.Cluster) clusterAccessor {

View File

@@ -16,14 +16,14 @@ import (
func TestLargeShuffle(t *testing.T) {
t.Skip()
db := dbmocks.ArgoDB{}
db := &dbmocks.ArgoDB{}
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{}}
for i := 0; i < math.MaxInt/4096; i += 256 {
// fmt.Fprintf(os.Stdout, "%d", i)
cluster := createCluster(fmt.Sprintf("cluster-%d", i), strconv.Itoa(i))
clusterList.Items = append(clusterList.Items, cluster)
}
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
clusterAccessor := getClusterAccessor(clusterList.Items)
// Test with replicas set to 256
replicasCount := 256
@@ -36,7 +36,7 @@ func TestLargeShuffle(t *testing.T) {
func TestShuffle(t *testing.T) {
t.Skip()
db := dbmocks.ArgoDB{}
db := &dbmocks.ArgoDB{}
cluster1 := createCluster("cluster1", "10")
cluster2 := createCluster("cluster2", "20")
cluster3 := createCluster("cluster3", "30")
@@ -46,7 +46,7 @@ func TestShuffle(t *testing.T) {
cluster25 := createCluster("cluster6", "25")
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{cluster1, cluster2, cluster3, cluster4, cluster5, cluster6}}
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
clusterAccessor := getClusterAccessor(clusterList.Items)
// Test with replicas set to 3
t.Setenv(common.EnvControllerReplicas, "3")

View File

@@ -41,13 +41,18 @@ import (
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
appstatecache "github.com/argoproj/argo-cd/v3/util/cache/appstate"
"github.com/argoproj/argo-cd/v3/util/db"
"github.com/argoproj/argo-cd/v3/util/env"
"github.com/argoproj/argo-cd/v3/util/gpg"
utilio "github.com/argoproj/argo-cd/v3/util/io"
"github.com/argoproj/argo-cd/v3/util/settings"
"github.com/argoproj/argo-cd/v3/util/stats"
)
var ErrCompareStateRepo = errors.New("failed to get repo objects")
var (
ErrCompareStateRepo = errors.New("failed to get repo objects")
processManifestGeneratePathsEnabled = env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_PROCESS_MANIFEST_GENERATE_PATHS", true)
)
type resourceInfoProviderStub struct{}
@@ -70,7 +75,7 @@ type managedResource struct {
// AppStateManager defines methods which allow to compare application spec and actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, state *v1alpha1.OperationState)
GetRepoObjs(ctx context.Context, app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
}
@@ -253,7 +258,14 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
appNamespace = ""
}
if !source.IsHelm() && !source.IsOCI() && syncedRevision != "" && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
updateRevisions := processManifestGeneratePathsEnabled &&
// updating revisions result is not required if automated sync is not enabled
app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.Automated != nil &&
// using updating revisions gains performance only if manifest generation is required.
// just reading pre-generated manifests is comparable to updating revisions time-wise
app.Status.SourceType != v1alpha1.ApplicationSourceTypeDirectory
if updateRevisions && repo.Depth == 0 && !source.IsHelm() && !source.IsOCI() && syncedRevision != "" && syncedRevision != revision && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
// Validate the manifest-generate-path annotation to avoid generating manifests if it has not changed.
updateRevisionResult, err := repoClient.UpdateRevisionForPaths(ctx, &apiclient.UpdateRevisionForPathsRequest{
Repo: repo,
@@ -350,7 +362,7 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
}
// ResolveGitRevision will resolve the given revision to a full commit SHA. Only works for git.
func (m *appStateManager) ResolveGitRevision(repoURL string, revision string) (string, error) {
func (m *appStateManager) ResolveGitRevision(repoURL, revision string) (string, error) {
conn, repoClient, err := m.repoClientset.NewRepoServerClient()
if err != nil {
return "", fmt.Errorf("failed to connect to repo server: %w", err)
@@ -529,7 +541,7 @@ func isManagedNamespace(ns *unstructured.Unstructured, app *v1alpha1.Application
// CompareAppState compares application git state to the live app state, using the specified
// revision and supplied source. If revision or overrides are empty, then compares against
// revision and overrides in the app spec.
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
ts := stats.NewTimingStats()
logCtx := log.WithFields(applog.GetAppLogFields(app))

View File

@@ -48,7 +48,7 @@ func TestCompareAppStateEmpty(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -66,7 +66,7 @@ func TestCompareAppStateEmpty(t *testing.T) {
// TestCompareAppStateRepoError tests the case when CompareAppState notices a repo error
func TestCompareAppStateRepoError(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{manifestResponses: make([]*apiclient.ManifestResponse, 3)}, errors.New("test repo error"))
ctrl := newFakeController(t.Context(), &fakeData{manifestResponses: make([]*apiclient.ManifestResponse, 3)}, errors.New("test repo error"))
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -112,7 +112,7 @@ func TestCompareAppStateNamespaceMetadataDiffers(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -161,7 +161,7 @@ func TestCompareAppStateNamespaceMetadataDiffersToManifest(t *testing.T) {
kube.GetResourceKey(ns): ns,
},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -219,7 +219,7 @@ func TestCompareAppStateNamespaceMetadata(t *testing.T) {
kube.GetResourceKey(ns): ns,
},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -278,7 +278,7 @@ func TestCompareAppStateNamespaceMetadataIsTheSame(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -306,7 +306,7 @@ func TestCompareAppStateMissing(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -338,7 +338,7 @@ func TestCompareAppStateExtra(t *testing.T) {
key: pod,
},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -369,7 +369,7 @@ func TestCompareAppStateHook(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -401,7 +401,7 @@ func TestCompareAppStateSkipHook(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -431,7 +431,7 @@ func TestCompareAppStateCompareOptionIgnoreExtraneous(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
@@ -465,7 +465,7 @@ func TestCompareAppStateExtraHook(t *testing.T) {
key: pod,
},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -494,7 +494,7 @@ func TestAppRevisionsSingleSource(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
app := newFakeApp()
revisions := make([]string, 0)
@@ -534,7 +534,7 @@ func TestAppRevisionsMultiSource(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
app := newFakeMultiSourceApp()
revisions := make([]string, 0)
@@ -583,7 +583,7 @@ func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
kube.GetResourceKey(obj3): obj3,
},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -624,7 +624,7 @@ func TestCompareAppStateManagedNamespaceMetadataWithLiveNsDoesNotGetPruned(t *te
kube.GetResourceKey(ns): ns,
},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, []string{}, app.Spec.Sources, false, false, nil, false)
require.NoError(t, err)
@@ -676,7 +676,7 @@ func TestCompareAppStateWithManifestGeneratePath(t *testing.T) {
updateRevisionForPathsResponse: &apiclient.UpdateRevisionForPathsResponse{},
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
revisions := make([]string, 0)
revisions = append(revisions, "abc123")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, app.Spec.GetSources(), false, false, nil, false)
@@ -698,7 +698,7 @@ func TestSetHealth(t *testing.T) {
Namespace: "default",
},
})
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -734,7 +734,7 @@ func TestPreserveStatusTimestamp(t *testing.T) {
Namespace: "default",
},
})
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -770,7 +770,7 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
Namespace: "default",
},
})
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -799,7 +799,7 @@ func TestSetManagedResourcesWithOrphanedResources(t *testing.T) {
proj.Spec.OrphanedResources = &v1alpha1.OrphanedResourcesMonitorSettings{}
app := newFakeApp()
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app.Namespace, "guestbook"): {
@@ -828,7 +828,7 @@ func TestSetManagedResourcesWithResourcesOfAnotherApp(t *testing.T) {
app2 := newFakeApp()
app2.Name = "app2"
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app1, app2, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app2.Namespace, "guestbook"): {
@@ -852,7 +852,7 @@ func TestReturnUnknownComparisonStateOnSettingLoadError(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, proj},
configMapData: map[string]string{
"resource.customizations": "invalid setting",
@@ -878,7 +878,7 @@ func TestSetManagedResourcesKnownOrphanedResourceExceptions(t *testing.T) {
app := newFakeApp()
app.Namespace = "default"
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app.Namespace, "guestbook"): {
@@ -902,7 +902,7 @@ func TestSetManagedResourcesKnownOrphanedResourceExceptions(t *testing.T) {
func Test_appStateManager_persistRevisionHistory(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app},
}, nil)
manager := ctrl.appStateManager.(*appStateManager)
@@ -1007,7 +1007,7 @@ func TestSignedResponseNoSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1034,7 +1034,7 @@ func TestSignedResponseNoSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1066,7 +1066,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1093,7 +1093,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1120,7 +1120,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1147,7 +1147,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1175,7 +1175,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
testProj := signedProj
testProj.Spec.SignatureKeys[0].KeyID = "4AEE18F83AFDEB24"
sources := make([]v1alpha1.ApplicationSource, 0)
@@ -1207,7 +1207,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
}
// it doesn't matter for our test whether local manifests are valid
localManifests := []string{"foobar"}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1237,7 +1237,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1267,7 +1267,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
}
// it doesn't matter for our test whether local manifests are valid
localManifests := []string{""}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1395,7 +1395,7 @@ func TestIsLiveResourceManaged(t *testing.T) {
},
},
})
ctrl := newFakeController(&fakeData{
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -1765,7 +1765,7 @@ func TestCompareAppStateDefaultRevisionUpdated(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1788,7 +1788,7 @@ func TestCompareAppStateRevisionUpdatedWithHelmSource(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1807,10 +1807,10 @@ func Test_normalizeClusterScopeTracking(t *testing.T) {
Namespace: "test",
},
})
c := cachemocks.ClusterCache{}
c.On("IsNamespaced", mock.Anything).Return(false, nil)
c := &cachemocks.ClusterCache{}
c.EXPECT().IsNamespaced(mock.Anything).Return(false, nil)
var called bool
err := normalizeClusterScopeTracking([]*unstructured.Unstructured{obj}, &c, func(u *unstructured.Unstructured) error {
err := normalizeClusterScopeTracking([]*unstructured.Unstructured{obj}, c, func(u *unstructured.Unstructured) error {
// We expect that the normalization function will call this callback with an obj that has had the namespace set
// to empty.
called = true
@@ -1838,7 +1838,7 @@ func TestCompareAppState_DoesNotCallUpdateRevisionForPaths_ForOCI(t *testing.T)
Revision: "abc123",
},
}
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
source := app.Spec.GetSource()
source.RepoURL = "oci://example.com/argo/argo-cd"

View File

@@ -75,7 +75,7 @@ func TestPersistRevisionHistory(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -121,7 +121,7 @@ func TestPersistManagedNamespaceMetadataState(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -152,7 +152,7 @@ func TestPersistRevisionHistoryRollback(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
// Sync with source specified
source := v1alpha1.ApplicationSource{
@@ -206,7 +206,7 @@ func TestSyncComparisonError(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -263,7 +263,7 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
},
managedLiveObjs: liveObjects,
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
return &fixture{
application: app,
@@ -350,7 +350,7 @@ func TestSyncWindowDeniesSync(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
return &fixture{
application: app,
@@ -1620,7 +1620,7 @@ func TestSyncWithImpersonate(t *testing.T) {
},
additionalObjs: additionalObjs,
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
return &fixture{
application: app,
project: project,
@@ -1780,7 +1780,7 @@ func TestClientSideApplyMigration(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
ctrl := newFakeController(t.Context(), &data, nil)
return &fixture{
application: app,

Binary file not shown (image added, 228 KiB).

View File

@@ -68,9 +68,9 @@ make builder-image IMAGE_NAMESPACE=argoproj IMAGE_TAG=v1.0.0
Every commit to master is built and published to `ghcr.io/argoproj/argo-cd/argocd:<version>-<short-sha>`. The list of images is available at
[https://github.com/argoproj/argo-cd/packages](https://github.com/argoproj/argo-cd/packages).
!!! note
GitHub docker registry [requires](https://github.community/t5/GitHub-Actions/docker-pull-from-public-GitHub-Package-Registry-fail-with-quot/m-p/32888#M1294) authentication to read
even publicly available packages. Follow the steps from Kubernetes [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry)
to configure image pull secret if you want to use `ghcr.io/argoproj/argo-cd/argocd` image.
> [!NOTE]
> GitHub docker registry [requires](https://github.community/t5/GitHub-Actions/docker-pull-from-public-GitHub-Package-Registry-fail-with-quot/m-p/32888#M1294) authentication to read
> even publicly available packages. Follow the steps from Kubernetes [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry)
> to configure an image pull secret if you want to use the `ghcr.io/argoproj/argo-cd/argocd` image.
The image is automatically deployed to the dev Argo CD instance: [https://cd.apps.argoproj.io/](https://cd.apps.argoproj.io/)

View File

@@ -0,0 +1,18 @@
The Argo CD UI displays icons for various Kubernetes resource types to help users quickly identify them. Argo CD
includes a set of built-in icons for common resource types.
You can contribute additional icons for custom resource types by following these steps:
1. Ensure the license is compatible with Apache 2.0.
2. Add the icon file to the `ui/src/assets/images/resources/<group>/icon.svg` path in the Argo CD repository.
3. Modify the SVG to use the correct color, `#8fa4b1`.
4. Run `make resourceiconsgen` to update the generated TypeScript file that lists all available icons.
5. Create a pull request to the Argo CD repository with your changes.
`<group>` is the API group of the custom resource. For example, if you are adding an icon for a custom resource with the
API group `example.com`, you would place the icon at `ui/src/assets/images/resources/example.com/icon.svg`.
If you want the same icon to apply to resources in multiple API groups with the same suffix, you can create a directory
prefixed with an underscore. The underscore will be interpreted as a wildcard. For example, to apply the same icon to
resources in the `example.com` and `another.example.com` API groups, you would place the icon at
`ui/src/assets/images/resources/_.example.com/icon.svg`.
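To make the wildcard matching concrete, here is a minimal TypeScript sketch of how an icon directory might be resolved for a given API group. The `resolveIconDir` function, the `iconDirs` input, and the resolution order (exact match first, then underscore-prefixed suffix match) are illustrative assumptions, not the actual UI implementation.
```typescript
// Illustrative sketch only: pick the icon directory that applies to an API group.
// Assumption: an exact directory match wins over an "_"-prefixed wildcard match.
function resolveIconDir(group: string, iconDirs: string[]): string | undefined {
  // Prefer an exact directory match, e.g. "example.com".
  const exact = iconDirs.find((dir) => dir === group);
  if (exact) {
    return exact;
  }
  // Fall back to wildcard directories such as "_.example.com", which match any
  // group ending in ".example.com" as well as "example.com" itself.
  return iconDirs.find((dir) => {
    if (!dir.startsWith("_")) {
      return false;
    }
    const suffix = dir.slice(1); // ".example.com"
    return group.endsWith(suffix) || group === suffix.slice(1);
  });
}

// Both groups resolve to the same wildcard directory:
console.log(resolveIconDir("example.com", ["_.example.com"])); // "_.example.com"
console.log(resolveIconDir("another.example.com", ["_.example.com"])); // "_.example.com"
```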

View File

@@ -26,8 +26,8 @@ api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run
```
This configuration example will be used as the basis for the next steps.
!!! note
The Procfile for a component may change with time. Please go through the Procfile and make sure you use the latest configuration for debugging.
> [!NOTE]
> The Procfile for a component may change with time. Please go through the Procfile and make sure you use the latest configuration for debugging.
### Configure component env variables
The component that you will run in your IDE for debugging (`api-server` in our case) will need env variables. Copy the env variables from `Procfile`, located in the `argo-cd` root folder of your development branch. The env variables are located before the `$COMMAND` section in the `sh -c` section of the component run command.
@@ -112,8 +112,8 @@ Example for an `api-server` launch configuration snippet, based on our above exa
</component>
```
!!! note
As an alternative to importing the above file to Goland, you can create a Run/Debug Configuration using the official [Goland docs](https://www.jetbrains.com/help/go/go-build.html) and just copy the `parameters`, `directory` and `PATH` sections from the example above (specifying `Run kind` as `Directory` in the Run/Debug Configurations wizard)
> [!NOTE]
> As an alternative to importing the above file to Goland, you can create a Run/Debug Configuration using the official [Goland docs](https://www.jetbrains.com/help/go/go-build.html) and just copy the `parameters`, `directory` and `PATH` sections from the example above (specifying `Run kind` as `Directory` in the Run/Debug Configurations wizard)
## Run Argo CD without the debugged component
Next, we need to run all Argo CD components, except for the debugged component (because we will run this component separately in the IDE).
@@ -143,4 +143,4 @@ To debug the `api-server`, run:
Finally, run the component you wish to debug from your IDE and make sure it does not have any errors.
## Important
When running Argo CD components separately, ensure components aren't creating conflicts - each component needs to be up exactly once, be it running locally with the local toolchain or running from your IDE. Otherwise you may get errors about ports not available or even debugging a process that does not contain your code changes.
When running Argo CD components separately, ensure components aren't creating conflicts - each component needs to be up exactly once, be it running locally with the local toolchain or running from your IDE. Otherwise you may get errors about ports not available or even debugging a process that does not contain your code changes.

View File

@@ -21,7 +21,7 @@ curl -sSfL https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/i
Connect to one of the services, for example, to debug the main ArgoCD server run:
```shell
kubectl config set-context --current --namespace argocd
telepresence helm install --set agent.securityContext={} # Installs telepresence into your cluster
telepresence helm install --set-json agent.securityContext={} # Installs telepresence into your cluster
telepresence connect # Starts the connection to your cluster (bound to the current namespace)
telepresence intercept argocd-server --port 8080:http --env-file .envrc.remote # Starts the interception
```
@@ -37,7 +37,7 @@ telepresence status
Stop the intercept using:
```shell
telepresence leave argocd-server-argocd
telepresence leave argocd-server
```
And uninstall telepresence from your cluster:

View File

@@ -1,38 +1,5 @@
# Managing Dependencies
## GitOps Engine (`github.com/argoproj/gitops-engine`)
### Repository
https://github.com/argoproj/gitops-engine
### Pulling changes from `gitops-engine`
After your GitOps Engine PR has been merged, ArgoCD needs to be updated to pull in the version of the GitOps engine that contains your change. Here are the steps:
- Retrieve the SHA hash for your commit. You will use this in the next step.
- From the `argo-cd` folder, run the following command
`go get github.com/argoproj/gitops-engine@<git-commit-sha>`
If you get an error message `invalid version: unknown revision` then you got the wrong SHA hash
- Run:
`go mod tidy`
- The following files are changed:
- `go.mod`
- `go.sum`
- Create an ArgoCD PR with a `refactor:` type in its title for the two file changes.
### Tips:
- See https://github.com/argoproj/argo-cd/pull/4434 as an example
- The PR might require additional, dependent changes in ArgoCD that are directly impacted by the changes made in the engine.
## Notifications Engine (`github.com/argoproj/notifications-engine`)
### Repository

View File

@@ -7,7 +7,7 @@
## Preface
When you have developed and possibly manually tested the code you want to contribute, you should ensure that everything builds correctly. Commit your changes locally and perform the following steps; for each step, the commands for both the local and the virtualized toolchain are listed.
### Docker priviliges for virtualized toolchain users
### Docker privileges for virtualized toolchain users
[These instructions](toolchain-guide.md#docker-privileges) are relevant for most of the steps below
### Using Podman for virtualized toolchain users
@@ -29,11 +29,6 @@ As build dependencies change over time, you have to synchronize your development
* `make dep-ui` or `make dep-ui-local`
Argo CD recently migrated to Go modules. Usually, dependencies will be downloaded at build time, but the Makefile provides two targets to download and vendor all dependencies:
* `make mod-download` or `make mod-download-local` will download all required Go modules and
* `make mod-vendor` or `make mod-vendor-local` will vendor those dependencies into the Argo CD source tree
### Generate API glue code and other assets
Argo CD relies on Google's [Protocol Buffers](https://developers.google.com/protocol-buffers) for its API, and this makes heavy use of auto-generated glue code and stubs. Whenever you touch parts of the API code, you must re-generate the auto-generated code.
@@ -42,8 +37,8 @@ Argo CD relies on Google's [Protocol Buffers](https://developers.google.com/prot
* Check if something has changed by running `git status` or `git diff`
* Commit any possible changes to your local Git branch; an appropriate commit message would be `Changes from codegen`, for example.
!!!note
There are a few non-obvious assets that are auto-generated. You should not change the autogenerated assets, as they will be overwritten by a subsequent run of `make codegen`. Instead, change their source files. Prominent examples of non-obvious auto-generated code are `swagger.json` or the installation manifest YAMLs.
> [!NOTE]
> There are a few non-obvious assets that are auto-generated. You should not change the autogenerated assets, as they will be overwritten by a subsequent run of `make codegen`. Instead, change their source files. Prominent examples of non-obvious auto-generated code are `swagger.json` or the installation manifest YAMLs.
### Build your code and run unit tests

View File

@@ -38,7 +38,7 @@ If you want to build and test the site directly on your local machine without th
## Analytics
!!! tip
Don't forget to disable your ad-blocker when testing.
> [!TIP]
> Don't forget to disable your ad-blocker when testing.
We collect analytics via [Google Analytics](https://analytics.google.com/analytics/web/#/report-home/a105170809w198079555p192782995).

View File

@@ -1,9 +1,10 @@
# Proxy Extensions
!!! warning "Beta Feature (Since 2.7.0)"
This feature is in the [Beta](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#beta) stage.
It is generally considered stable, but there may be unhandled edge cases.
> [!WARNING]
> **Beta Feature (Since 2.7.0)**
>
> This feature is in the [Beta](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#beta) stage.
> It is generally considered stable, but there may be unhandled edge cases.
## Overview

View File

@@ -129,33 +129,30 @@ It is also possible to add an optional flyout widget to your extension. It can b
Below is an example of an extension using the flyout widget:
```javascript
((window) => {
const component = (props: {
openFlyout: () => any
}) => {
const component = (props: { openFlyout: () => any }) => {
return React.createElement(
"div",
{
style: { padding: "10px" },
onClick: () => props.openFlyout()
},
"Hello World"
"div",
{
style: { padding: "10px" },
onClick: () => props.openFlyout(),
},
"Hello World"
);
};
const flyout = () => {
return React.createElement(
"div",
{ style: { padding: "10px" } },
"This is a flyout"
"div",
{ style: { padding: "10px" } },
"This is a flyout"
);
};
window.extensionsAPI.registerStatusPanelExtension(
component,
"My Extension",
"my_extension",
flyout
component,
"My Extension",
"my_extension",
flyout
);
})(window);
```
@@ -183,7 +180,9 @@ The callback function `shouldDisplay` should return true if the extension should
```typescript
const shouldDisplay = (app: Application) => {
return application.metadata?.labels?.['application.environmentLabelKey'] === "prd";
return (
app.metadata?.labels?.["application.environmentLabelKey"] === "prd"
);
};
```
@@ -196,28 +195,68 @@ Below is an example of a simple extension with a flyout widget:
};
const flyout = () => {
return React.createElement(
"div",
{ style: { padding: "10px" } },
"This is a flyout"
"div",
{ style: { padding: "10px" } },
"This is a flyout"
);
};
const component = () => {
return React.createElement(
"div",
{
onClick: () => flyout()
},
"Toolbar Extension Test"
"div",
{
onClick: () => flyout(),
},
"Toolbar Extension Test"
);
};
window.extensionsAPI.registerTopBarActionMenuExt(
component,
"Toolbar Extension Test",
"Toolbar_Extension_Test",
flyout,
shouldDisplay,
'',
true
component,
"Toolbar Extension Test",
"Toolbar_Extension_Test",
flyout,
shouldDisplay,
"",
true
);
})(window);
```
```
## App View Extensions
App View extensions allow you to create a new Application Details View for an application. This view is selectable alongside the other views, such as the Node Tree, Pod, and Network views. When the extension's icon is clicked, the extension's component is rendered as the main content of the application view.
Register this extension through the `extensionsAPI.registerAppViewExtension` method.
```typescript
registerAppViewExtension(
component: ExtensionComponent, // the component to be rendered
title: string, // the title of the page once the component is rendered
icon: string, // the favicon classname for the icon tab
shouldDisplay?: (app: Application) => boolean // returns true if the view should be available
)
```
Below is an example of a simple extension:
```javascript
((window) => {
const component = () => {
return React.createElement(
"div",
{ style: { padding: "10px" } },
"Hello World"
);
};
window.extensionsAPI.registerAppViewExtension(
component,
"My Extension",
"fa-question-circle",
(app) =>
app.metadata?.labels?.["application.environmentLabelKey"] ===
"prd"
);
})(window);
```
Example rendered extension:
![destination](../../assets/application-view-extension.png)

View File

@@ -6,8 +6,8 @@
Sure thing! You can either open an Enhancement Proposal in our GitHub issue tracker or you can [join us on Slack](https://argoproj.github.io/community/join-slack) in channel #argo-contributors to discuss your ideas and get guidance for submitting a PR.
!!! note
Regular [contributor meetings](https://argo-cd.readthedocs.io/en/latest/developer-guide/code-contributions/#regular-contributor-meeting) are held weekly. Please follow the link for more details.
> [!NOTE]
> Regular [contributor meetings](https://argo-cd.readthedocs.io/en/latest/developer-guide/code-contributions/#regular-contributor-meeting) are held weekly. Please follow the link for more details.
### No one has looked at my PR yet. Why?

View File

@@ -1,10 +1,12 @@
# Overview
!!! warning "As an Argo CD user, you probably don't want to be reading this section of the docs."
This part of the manual is aimed at helping people contribute to Argo CD, documentation, or to develop third-party applications that interact with Argo CD, e.g.
* A chat bot
* A Slack integration
> [!WARNING]
> **As an Argo CD user, you probably don't want to be reading this section of the docs.**
>
> This part of the manual is aimed at helping people contribute to Argo CD, documentation, or to develop third-party applications that interact with Argo CD, e.g.
>
> * A chat bot
> * A Slack integration
## Preface
#### Understand the [Code Contribution Guide](code-contributions.md)
@@ -26,7 +28,7 @@ For backend and frontend contributions, that require a full building-testing-run
## Contributing to Argo CD Notifications documentation
This guide will help you get started quickly with contributing documentation changes, performing the minimum setup you'll need.
The notificaions docs are located in [notifications-engine](https://github.com/argoproj/notifications-engine) Git repository and require 2 pull requests: one for the `notifications-engine` repo and one for the `argo-cd` repo.
The notifications docs are located in [notifications-engine](https://github.com/argoproj/notifications-engine) Git repository and require 2 pull requests: one for the `notifications-engine` repo and one for the `argo-cd` repo.
For backend and frontend contributions that require a full building-testing-running-locally cycle, please refer to [Contributing to Argo CD backend and frontend](index.md#contributing-to-argo-cd-backend-and-frontend)
### Fork and clone Argo CD repository
@@ -100,4 +102,4 @@ Need help? Start with the [Contributors FAQ](faq/)
* [Config Management Plugins](../operator-manual/config-management-plugins/)
## Contributing to Argo Website
The Argo website is maintained in the [argo-site](https://github.com/argoproj/argo-site) repository.
The Argo website is maintained in the [argo-site](https://github.com/argoproj/argo-site) repository.

View File

@@ -71,17 +71,17 @@ Example:
./hack/trigger-release.sh v2.7.2 upstream
```
!!! tip
The tag must be in one of the following formats to trigger the GH workflow:<br>
* GA: `v<MAJOR>.<MINOR>.<PATCH>`<br>
* Pre-release: `v<MAJOR>.<MINOR>.<PATCH>-rc<RC#>`
> [!TIP]
> The tag must be in one of the following formats to trigger the GH workflow:<br>
> * GA: `v<MAJOR>.<MINOR>.<PATCH>`<br>
> * Pre-release: `v<MAJOR>.<MINOR>.<PATCH>-rc<RC#>`
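As a quick sanity check of the two accepted formats, here is a small TypeScript sketch that validates a tag before you run the trigger script. The regex and the `isValidReleaseTag` helper are illustrative assumptions; they are not part of the release tooling.
```typescript
// Illustrative check for the two accepted tag formats:
//   GA:          v<MAJOR>.<MINOR>.<PATCH>          e.g. v2.7.2
//   Pre-release: v<MAJOR>.<MINOR>.<PATCH>-rc<RC#>  e.g. v2.8.0-rc1
const releaseTagPattern = /^v\d+\.\d+\.\d+(-rc\d+)?$/;

function isValidReleaseTag(tag: string): boolean {
  return releaseTagPattern.test(tag);
}

console.log(isValidReleaseTag("v2.7.2")); // true
console.log(isValidReleaseTag("v2.8.0-rc1")); // true
console.log(isValidReleaseTag("2.7.2")); // false, missing the "v" prefix
```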
Once the script is executed successfully, a GitHub workflow will start
execution. You can follow its progress under the [Actions](https://github.com/argoproj/argo-cd/actions/workflows/release.yaml) tab; the name of the action is `Publish ArgoCD Release`.
!!! warning
You cannot perform more than one release on the same release branch at the
same time.
> [!WARNING]
> You cannot perform more than one release on the same release branch at the
> same time.
### Verifying automated release

View File

@@ -230,8 +230,8 @@ make manifests-local
(depending on your toolchain) to build a new set of installation manifests which include your specific image reference.
!!!note
Do not commit these manifests to your repository. If you want to revert the changes, the easiest way is to unset `IMAGE_NAMESPACE` and `IMAGE_TAG` from your environment and run `make manifests` again. This will re-create the default manifests.
> [!NOTE]
> Do not commit these manifests to your repository. If you want to revert the changes, the easiest way is to unset `IMAGE_NAMESPACE` and `IMAGE_TAG` from your environment and run `make manifests` again. This will re-create the default manifests.
#### Configure your cluster with custom manifests

View File

@@ -7,11 +7,13 @@
## Preface
!!!note "Before you start"
The Argo CD project continuously grows, both in terms of features and community size. It gets adopted by more and more organizations which entrust Argo CD to handle their critical production workloads. Thus, we need to take great care with any changes that affect compatibility, performance, scalability, stability and security of Argo CD. For this reason, every new feature or larger enhancement must be properly designed and discussed before it gets accepted into the code base.
We do welcome and encourage everyone to participate in the Argo CD project, but please understand that we can't accept each and every contribution from the community, for various reasons. If you want to submit code for a great new feature or enhancement, we kindly ask you to take a look at the
[code contribution guide](code-contributions.md#) before you start to write code or submit a PR.
> [!NOTE]
> **Before you start**
>
> The Argo CD project continuously grows, both in terms of features and community size. It gets adopted by more and more organizations which entrust Argo CD to handle their critical production workloads. Thus, we need to take great care with any changes that affect compatibility, performance, scalability, stability and security of Argo CD. For this reason, every new feature or larger enhancement must be properly designed and discussed before it gets accepted into the code base.
>
> We do welcome and encourage everyone to participate in the Argo CD project, but please understand that we can't accept each and every contribution from the community, for various reasons. If you want to submit code for a great new feature or enhancement, we kindly ask you to take a look at the
> [code contribution guide](code-contributions.md#) before you start to write code or submit a PR.
If you want to submit a PR, please read this document carefully, as it contains important information guiding you through our PR quality gates.
@@ -34,8 +36,8 @@ make pre-commit-local
When you submit a PR against Argo CD's GitHub repository, a couple of CI checks will be run automatically to ensure your changes will build fine and meet certain quality standards. Your contribution needs to pass those checks in order to be merged into the repository.
!!!note
Please make sure that you always create PRs from a branch that is up-to-date with the latest changes from Argo CD's master branch. Depending on how long it takes for the maintainers to review and merge your PR, it might be necessary to pull in latest changes into your branch again.
> [!NOTE]
> Please make sure that you always create PRs from a branch that is up-to-date with the latest changes from Argo CD's master branch. Depending on how long it takes for the maintainers to review and merge your PR, it might be necessary to pull the latest changes into your branch again.
Please understand that we, as an Open Source project, have limited capacities for reviewing and merging PRs to Argo CD. We will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.

View File

@@ -6,16 +6,18 @@ namespace `argocd-e2e***` is created prior to the execution of the tests. The th
The [/test/e2e/testdata](https://github.com/argoproj/argo-cd/tree/master/test/e2e/testdata) directory contains various Argo CD applications. Before test execution, the directory is copied into a `/tmp/argo-e2e***` temp directory and used in tests as a
Git repository via the file URL `file:///tmp/argo-e2e***`.
!!! note "Rancher Desktop Volume Sharing"
The e2e git server runs in a container. If you are using Rancher Desktop, you will need to enable volume sharing for
the e2e container to access the testdata directory. To do this, add the following to
`~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml` and restart Rancher Desktop:
```yaml
mounts:
- location: /private/tmp
writable: true
```
> [!NOTE]
> **Rancher Desktop Volume Sharing**
>
> The e2e git server runs in a container. If you are using Rancher Desktop, you will need to enable volume sharing for
> the e2e container to access the testdata directory. To do this, add the following to
> `~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml` and restart Rancher Desktop:
>
> ```yaml
> mounts:
> - location: /private/tmp
> writable: true
> ```
## Running Tests Locally

Some files were not shown because too many files have changed in this diff.