Compare commits

...

19 Commits

Author SHA1 Message Date
github-actions[bot]
c592219140 Bump version to 2.7.0 (#13404)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: crenshaw-dev <crenshaw-dev@users.noreply.github.com>
2023-05-01 20:01:14 -04:00
gcp-cherry-pick-bot[bot]
155b6a9c10 chore: upgrade redis to 7.0.11 to avoid CVE-2023-0464 (#13389) (#13402)
Signed-off-by: Justin Marquis <34fathombelow@protonmail.com>
Co-authored-by: Justin Marquis <34fathombelow@protonmail.com>
2023-05-01 17:48:32 -04:00
gcp-cherry-pick-bot[bot]
29c485778a chore: upgrade haproxy to 2.6.12 to avoid CVE-2023-0464 (#13388) (#13401)
Signed-off-by: Justin Marquis <34fathombelow@protonmail.com>
Co-authored-by: Justin Marquis <34fathombelow@protonmail.com>
2023-05-01 16:43:43 -04:00
gcp-cherry-pick-bot[bot]
a707ab6b0e docs: Application Info field documentation (#10814) (#13351) (#13377)
* add Application info field documentation

* Extra Application info docs

* Added info field documentation

* Add space to comment

* docs: Add extra_info.md to table of contents

---------

Signed-off-by: Hapshanko <112761282+Hapshanko@users.noreply.github.com>
Co-authored-by: Hapshanko <112761282+Hapshanko@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2023-05-01 15:42:22 -04:00
gcp-cherry-pick-bot[bot]
d6e5768417 fix: Disable scrollbars on pod logs viewer. Fixes #13266 (#13294) (#13397)
Signed-off-by: Alex Collins <alex_collins@intuit.com>
Co-authored-by: Alex Collins <alexec@users.noreply.github.com>
2023-05-01 15:09:39 -04:00
Alexander Matyushentsev
428d47ba8a feat: support 'helm.sh/resource-policy: keep' helm annotation (#13157)
* feat: support 'helm.sh/resource-policy: keep' helm annotation

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

* document annotation

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

---------

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2023-04-24 16:00:54 -07:00
gcp-cherry-pick-bot[bot]
1adbebf888 fix(ui): use name instead of title for CMP parameters (#13250) (#13337)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2023-04-24 15:13:02 -04:00
gcp-cherry-pick-bot[bot]
6ec093dcb6 fix: remove false positive for no-discovery cmp; log string, not bytes (#13251) (#13336)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2023-04-24 15:12:52 -04:00
gcp-cherry-pick-bot[bot]
daa9d4e13e fix: Update .goreleaser.yaml (#13260) (#13263)
Signed-off-by: Kiruthikameena <meenasuja16@gmail.com>
Co-authored-by: Kiruthikameena <meenasuja16@gmail.com>
2023-04-17 15:31:31 +02:00
gcp-cherry-pick-bot[bot]
bd9ef3fbde docs: s/No supported/Not supported (#13189) (#13253)
Signed-off-by: Vincent Verleye <124772102+smals-vinve@users.noreply.github.com>
Co-authored-by: Vincent Verleye <124772102+smals-vinve@users.noreply.github.com>
2023-04-16 01:34:08 -04:00
gcp-cherry-pick-bot[bot]
a29a2b13d1 docs: Fix wrong link to non existing page for applicationset reference (#13207) (#13247)
Signed-off-by: TheDatabaseMe <philip.haberkern@googlemail.com>
Co-authored-by: Philip Haberkern <59010269+thedatabaseme@users.noreply.github.com>
2023-04-15 14:33:27 -04:00
github-actions[bot]
483d26b113 Bump version to 2.7.0-rc2 (#13192)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: crenshaw-dev <crenshaw-dev@users.noreply.github.com>
2023-04-11 11:36:59 -04:00
Alexander Matyushentsev
21e2400b83 fix: --file usage is broken for 'argocd proj create' command (#13130)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2023-04-07 09:51:23 -07:00
gcp-cherry-pick-bot[bot]
52de54a799 fix(cli): add redis-compress flag to argocd admin dashboard command (#13055) (#13056) (#13114)
* add `redis-compress` flag to `argocd admin dashboard` command

Previously, gzip compression was disabled and not configurable,
which made it impossible to work with a gzipped Redis cache.
This commit adds support for gzip compression to the Argo CD admin dashboard.

* update dashboard docs for --redis-compress flag

* add support for REDIS_COMPRESSION env in cli admin dashboard

* update flag description

* update dashboard docs

---------

Signed-off-by: Pavel Aborilov <aborilov@gmail.com>
Signed-off-by: Pavel <aborilov@gmail.com>
Co-authored-by: Pavel <aborilov@gmail.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2023-04-06 16:09:16 -07:00
gcp-cherry-pick-bot[bot]
f35c127e2c docs: fix broken version selector (#13102) (#13105)
Signed-off-by: Harold Cheng <niuchangcun@gmail.com>
Co-authored-by: cjc7373 <niuchangcun@gmail.com>
2023-04-04 16:21:48 -04:00
gcp-cherry-pick-bot[bot]
0edc7c5ef1 fix: Add more context to the sync failed message when resource kind doesn't exist (#12980) (#13090)
* fix: add more context to k8s message

* Update util/argo/argo.go

* improvements, maybe

* remove unnecessary end quote

* avoid conflicts with other tests

---------

Signed-off-by: ashutosh16 <11219262+ashutosh16@users.noreply.github.com>
Signed-off-by: asingh <11219262+ashutosh16@users.noreply.github.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: asingh <11219262+ashutosh16@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2023-04-03 11:51:55 -04:00
gcp-cherry-pick-bot[bot]
d232635ebe fix(perf): filtering process in application-list api (#12985) (#12999) (#13057)
* perf: fix filtering process in application-list api (fixes: #12985)

* fix function for filtering by name

* add nil check in filtering by name

* add benchmark test for application list func

* add err check for benchmark

* fix test func for source soundness

---------

Signed-off-by: tken2039 <tken2039@gmail.com>
Signed-off-by: tken2039 <ken.takahashi@linecorp.com>
Co-authored-by: tken2039 <57531594+tken2039@users.noreply.github.com>
2023-03-30 10:47:45 -04:00
gcp-cherry-pick-bot[bot]
5f1fc31ed0 fix: applicationset reduce redundant reconciles (#12457) (#12480) (#13029)
* fix: applicationset reduce redundant reconciles

* adding tests

* every line counts

* deep copy applications from event object

* update from code review

* check progressive sync fields

* selective checks for progressive syncs

* plural

---------

Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
2023-03-28 12:06:39 -04:00
github-actions[bot]
0d0d2a97bb Bump version to 2.7.0-rc1 (#13020)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: pasha-codefresh <pasha-codefresh@users.noreply.github.com>
2023-03-27 16:37:42 +03:00
52 changed files with 1198 additions and 121 deletions

View File

@@ -427,7 +427,7 @@ jobs:
run: |
docker pull ghcr.io/dexidp/dex:v2.36.0
docker pull argoproj/argo-cd-ci-builder:v1.0.0
docker pull redis:7.0.9-alpine
docker pull redis:7.0.11-alpine
- name: Create target directory for binaries in the build-process
run: |
mkdir -p dist

View File

@@ -32,7 +32,7 @@ builds:
ignore:
- goos: darwin
goarch: s390x
- goos: darmwin
- goos: darwin
goarch: ppc64le
- goos: windows
goarch: s390x

View File

@@ -1 +1 @@
2.6.0
2.7.0

View File

@@ -17,6 +17,7 @@ package controllers
import (
"context"
"fmt"
"reflect"
"time"
log "github.com/sirupsen/logrus"
@@ -29,9 +30,12 @@ import (
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/source"
"github.com/argoproj/argo-cd/v2/applicationset/generators"
@@ -514,7 +518,7 @@ func (r *ApplicationSetReconciler) generateApplications(applicationSetInfo argov
return res, applicationSetReason, firstError
}
func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager) error {
func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProgressiveSyncs bool) error {
if err := mgr.GetFieldIndexer().IndexField(context.TODO(), &argov1alpha1.Application{}, ".metadata.controller", func(rawObj client.Object) []string {
// grab the job object, extract the owner...
app := rawObj.(*argov1alpha1.Application)
@@ -533,9 +537,11 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager) error {
return fmt.Errorf("error setting up with manager: %w", err)
}
ownsHandler := getOwnsHandlerPredicates(enableProgressiveSyncs)
return ctrl.NewControllerManagedBy(mgr).
For(&argov1alpha1.ApplicationSet{}).
Owns(&argov1alpha1.Application{}).
Owns(&argov1alpha1.Application{}, builder.WithPredicates(ownsHandler)).
Watches(
&source.Kind{Type: &corev1.Secret{}},
&clusterSecretEventHandler{
@@ -1320,4 +1326,73 @@ func syncApplication(application argov1alpha1.Application, prune bool) (argov1al
return application, nil
}
func getOwnsHandlerPredicates(enableProgressiveSyncs bool) predicate.Funcs {
return predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
// if we are the owner and there is a create event, we most likely created it and do not need to
// re-reconcile
log.Debugln("received create event from owning an application")
return false
},
DeleteFunc: func(e event.DeleteEvent) bool {
log.Debugln("received delete event from owning an application")
return true
},
UpdateFunc: func(e event.UpdateEvent) bool {
log.Debugln("received update event from owning an application")
appOld, isApp := e.ObjectOld.(*argov1alpha1.Application)
if !isApp {
return false
}
appNew, isApp := e.ObjectNew.(*argov1alpha1.Application)
if !isApp {
return false
}
requeue := shouldRequeueApplicationSet(appOld, appNew, enableProgressiveSyncs)
log.Debugf("requeue: %t caused by application %s\n", requeue, appNew.Name)
return requeue
},
GenericFunc: func(e event.GenericEvent) bool {
log.Debugln("received generic event from owning an application")
return true
},
}
}
// shouldRequeueApplicationSet determines when we want to requeue an ApplicationSet for reconciling based on an owned
// application change
// The applicationset controller owns a subset of the Application CR.
// We do not need to re-reconcile if parts of the application change outside the applicationset's control.
// An example being, Application.ApplicationStatus.ReconciledAt which gets updated by the application controller.
// Additionally, Application.ObjectMeta.ResourceVersion and Application.ObjectMeta.Generation which are set by K8s.
func shouldRequeueApplicationSet(appOld *argov1alpha1.Application, appNew *argov1alpha1.Application, enableProgressiveSyncs bool) bool {
if appOld == nil || appNew == nil {
return false
}
// the applicationset controller owns the application spec, labels, annotations, and finalizers on the applications
if !reflect.DeepEqual(appOld.Spec, appNew.Spec) ||
!reflect.DeepEqual(appOld.ObjectMeta.GetAnnotations(), appNew.ObjectMeta.GetAnnotations()) ||
!reflect.DeepEqual(appOld.ObjectMeta.GetLabels(), appNew.ObjectMeta.GetLabels()) ||
!reflect.DeepEqual(appOld.ObjectMeta.GetFinalizers(), appNew.ObjectMeta.GetFinalizers()) {
return true
}
// progressive syncs use the application status for updates. if they differ, requeue to trigger the next progression
if enableProgressiveSyncs {
if appOld.Status.Health.Status != appNew.Status.Health.Status || appOld.Status.Sync.Status != appNew.Status.Sync.Status {
return true
}
if appOld.Status.OperationState != nil && appNew.Status.OperationState != nil {
if appOld.Status.OperationState.Phase != appNew.Status.OperationState.Phase ||
appOld.Status.OperationState.StartedAt != appNew.Status.OperationState.StartedAt {
return true
}
}
}
return false
}
var _ handler.EventHandler = &clusterSecretEventHandler{}

View File

@@ -24,6 +24,7 @@ import (
crtclient "sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"github.com/argoproj/argo-cd/v2/applicationset/generators"
"github.com/argoproj/argo-cd/v2/applicationset/utils"
@@ -4906,3 +4907,133 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
})
}
}
func TestOwnsHandler(t *testing.T) {
// progressive syncs do not affect create, delete, or generic
ownsHandler := getOwnsHandlerPredicates(true)
assert.False(t, ownsHandler.CreateFunc(event.CreateEvent{}))
assert.True(t, ownsHandler.DeleteFunc(event.DeleteEvent{}))
assert.True(t, ownsHandler.GenericFunc(event.GenericEvent{}))
ownsHandler = getOwnsHandlerPredicates(false)
assert.False(t, ownsHandler.CreateFunc(event.CreateEvent{}))
assert.True(t, ownsHandler.DeleteFunc(event.DeleteEvent{}))
assert.True(t, ownsHandler.GenericFunc(event.GenericEvent{}))
now := metav1.Now()
type args struct {
e event.UpdateEvent
enableProgressiveSyncs bool
}
tests := []struct {
name string
args args
want bool
}{
{name: "SameApplicationReconciledAtDiff", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{ReconciledAt: &now}},
ObjectNew: &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{ReconciledAt: &now}},
}}, want: false},
{name: "SameApplicationResourceVersionDiff", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{
ResourceVersion: "foo",
}},
ObjectNew: &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{
ResourceVersion: "bar",
}},
}}, want: false},
{name: "ApplicationHealthStatusDiff", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{
Health: v1alpha1.HealthStatus{
Status: "Unknown",
},
}},
ObjectNew: &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{
Health: v1alpha1.HealthStatus{
Status: "Healthy",
},
}},
},
enableProgressiveSyncs: true,
}, want: true},
{name: "ApplicationSyncStatusDiff", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: "OutOfSync",
},
}},
ObjectNew: &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: "Synced",
},
}},
},
enableProgressiveSyncs: true,
}, want: true},
{name: "ApplicationOperationStateDiff", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{
OperationState: &v1alpha1.OperationState{
Phase: "foo",
},
}},
ObjectNew: &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{
OperationState: &v1alpha1.OperationState{
Phase: "bar",
},
}},
},
enableProgressiveSyncs: true,
}, want: true},
{name: "ApplicationOperationStartedAtDiff", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{
OperationState: &v1alpha1.OperationState{
StartedAt: now,
},
}},
ObjectNew: &v1alpha1.Application{Status: v1alpha1.ApplicationStatus{
OperationState: &v1alpha1.OperationState{
StartedAt: metav1.NewTime(now.Add(time.Minute * 1)),
},
}},
},
enableProgressiveSyncs: true,
}, want: true},
{name: "SameApplicationGeneration", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{
Generation: 1,
}},
ObjectNew: &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{
Generation: 2,
}},
}}, want: false},
{name: "DifferentApplicationSpec", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.Application{Spec: v1alpha1.ApplicationSpec{Project: "default"}},
ObjectNew: &v1alpha1.Application{Spec: v1alpha1.ApplicationSpec{Project: "not-default"}},
}}, want: true},
{name: "DifferentApplicationLabels", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"foo": "bar"}}},
ObjectNew: &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"bar": "foo"}}},
}}, want: true},
{name: "DifferentApplicationAnnotations", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{"foo": "bar"}}},
ObjectNew: &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{"bar": "foo"}}},
}}, want: true},
{name: "DifferentApplicationFinalizers", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{Finalizers: []string{"argo"}}},
ObjectNew: &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{Finalizers: []string{"none"}}},
}}, want: true},
{name: "NotAnAppOld", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.AppProject{},
ObjectNew: &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"bar": "foo"}}},
}}, want: false},
{name: "NotAnAppNew", args: args{e: event.UpdateEvent{
ObjectOld: &v1alpha1.Application{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"foo": "bar"}}},
ObjectNew: &v1alpha1.AppProject{},
}}, want: false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ownsHandler = getOwnsHandlerPredicates(tt.args.enableProgressiveSyncs)
assert.Equalf(t, tt.want, ownsHandler.UpdateFunc(tt.args.e), "UpdateFunc(%v)", tt.args.e)
})
}
}

View File

@@ -177,7 +177,7 @@ func NewCommand() *cobra.Command {
KubeClientset: k8sClient,
ArgoDB: argoCDDB,
EnableProgressiveSyncs: enableProgressiveSyncs,
}).SetupWithManager(mgr); err != nil {
}).SetupWithManager(mgr, enableProgressiveSyncs); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
}

View File

@@ -9,13 +9,16 @@ import (
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/initialize"
"github.com/argoproj/argo-cd/v2/common"
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
"github.com/argoproj/argo-cd/v2/util/cache"
"github.com/argoproj/argo-cd/v2/util/env"
"github.com/argoproj/argo-cd/v2/util/errors"
)
func NewDashboardCommand() *cobra.Command {
var (
port int
address string
port int
address string
compressionStr string
)
cmd := &cobra.Command{
Use: "dashboard",
@@ -23,7 +26,9 @@ func NewDashboardCommand() *cobra.Command {
Run: func(cmd *cobra.Command, args []string) {
ctx := cmd.Context()
errors.CheckError(headless.StartLocalServer(ctx, &argocdclient.ClientOptions{Core: true}, initialize.RetrieveContextIfChanged(cmd.Flag("context")), &port, &address))
compression, err := cache.CompressionTypeFromString(compressionStr)
errors.CheckError(err)
errors.CheckError(headless.StartLocalServer(ctx, &argocdclient.ClientOptions{Core: true}, initialize.RetrieveContextIfChanged(cmd.Flag("context")), &port, &address, compression))
println(fmt.Sprintf("Argo CD UI is available at http://%s:%d", address, port))
<-ctx.Done()
},
@@ -31,5 +36,6 @@ func NewDashboardCommand() *cobra.Command {
initialize.InitCommand(cmd)
cmd.Flags().IntVar(&port, "port", common.DefaultPortAPIServer, "Listen on given port")
cmd.Flags().StringVar(&address, "address", common.DefaultAddressAPIServer, "Listen on given address")
cmd.Flags().StringVar(&compressionStr, "redis-compress", env.StringFromEnv("REDIS_COMPRESSION", string(cache.RedisCompressionNone)), "Enable this if the application controller is configured with redis compression enabled. (possible values: none, gzip)")
return cmd
}

View File

@@ -38,11 +38,12 @@ import (
)
type forwardCacheClient struct {
namespace string
context string
init sync.Once
client cache.CacheClient
err error
namespace string
context string
init sync.Once
client cache.CacheClient
compression cache.RedisCompressionType
err error
}
func (c *forwardCacheClient) doLazy(action func(client cache.CacheClient) error) error {
@@ -58,7 +59,7 @@ func (c *forwardCacheClient) doLazy(action func(client cache.CacheClient) error)
}
redisClient := redis.NewClient(&redis.Options{Addr: fmt.Sprintf("localhost:%d", redisPort)})
c.client = cache.NewRedisCache(redisClient, time.Hour, cache.RedisCompressionNone)
c.client = cache.NewRedisCache(redisClient, time.Hour, c.compression)
})
if c.err != nil {
return c.err
@@ -139,7 +140,7 @@ func testAPI(ctx context.Context, clientOpts *apiclient.ClientOptions) error {
// StartLocalServer allows executing command in a headless mode: on the fly starts Argo CD API server and
// changes provided client options to use started API server port
func StartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions, ctxStr string, port *int, address *string) error {
func StartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions, ctxStr string, port *int, address *string, compression cache.RedisCompressionType) error {
flags := pflag.NewFlagSet("tmp", pflag.ContinueOnError)
clientConfig := cli.AddKubectlFlagsToSet(flags)
startInProcessAPI := clientOpts.Core
@@ -200,7 +201,7 @@ func StartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions,
if err != nil {
return err
}
appstateCache := appstatecache.NewCache(cache.NewCache(&forwardCacheClient{namespace: namespace, context: ctxStr}), time.Hour)
appstateCache := appstatecache.NewCache(cache.NewCache(&forwardCacheClient{namespace: namespace, context: ctxStr, compression: compression}), time.Hour)
srv := server.NewServer(ctx, server.ArgoCDServerOpts{
EnableGZip: false,
Namespace: namespace,
@@ -243,7 +244,7 @@ func NewClientOrDie(opts *apiclient.ClientOptions, c *cobra.Command) apiclient.C
ctx := c.Context()
ctxStr := initialize.RetrieveContextIfChanged(c.Flag("context"))
err := StartLocalServer(ctx, opts, ctxStr, nil, nil)
err := StartLocalServer(ctx, opts, ctxStr, nil, nil, cache.RedisCompressionNone)
if err != nil {
log.Fatal(err)
}

View File

@@ -138,7 +138,10 @@ func readProjFromURI(fileURL string, proj *v1alpha1.AppProject) error {
} else {
err = config.UnmarshalRemoteFile(fileURL, &proj)
}
return fmt.Errorf("error reading proj from uri: %w", err)
if err != nil {
return fmt.Errorf("error reading proj from uri: %w", err)
}
return nil
}
func SetProjSpecOptions(flags *pflag.FlagSet, spec *v1alpha1.AppProjectSpec, projOpts *ProjectOpts) int {

View File

@@ -37,7 +37,7 @@ type Discover struct {
}
func (d Discover) IsDefined() bool {
return d.FileName != "" || d.Find.Glob == "" || len(d.Find.Command.Command) > 0
return d.FileName != "" || d.Find.Glob != "" || len(d.Find.Command.Command) > 0
}
// Command holds binary path and arguments list

View File

@@ -113,7 +113,7 @@ func runCommand(ctx context.Context, command Command, path string, env []string)
}
if len(output) == 0 {
log.WithFields(log.Fields{
"stderr": stderr,
"stderr": stderr.String(),
"command": command,
}).Warn("Plugin command returned zero output")
}

View File

@@ -53,6 +53,7 @@ import (
"github.com/argoproj/argo-cd/v2/util/db"
"github.com/argoproj/argo-cd/v2/util/errors"
"github.com/argoproj/argo-cd/v2/util/glob"
"github.com/argoproj/argo-cd/v2/util/helm"
logutils "github.com/argoproj/argo-cd/v2/util/log"
settings_util "github.com/argoproj/argo-cd/v2/util/settings"
)
@@ -943,7 +944,9 @@ func (ctrl *ApplicationController) removeProjectFinalizer(proj *appv1.AppProject
// shouldBeDeleted returns whether a given resource obj should be deleted on cascade delete of application app
func (ctrl *ApplicationController) shouldBeDeleted(app *appv1.Application, obj *unstructured.Unstructured) bool {
return !kube.IsCRD(obj) && !isSelfReferencedApp(app, kube.GetObjectRef(obj)) && !resourceutil.HasAnnotationOption(obj, synccommon.AnnotationSyncOptions, synccommon.SyncOptionDisableDeletion)
return !kube.IsCRD(obj) && !isSelfReferencedApp(app, kube.GetObjectRef(obj)) &&
!resourceutil.HasAnnotationOption(obj, synccommon.AnnotationSyncOptions, synccommon.SyncOptionDisableDeletion) &&
!resourceutil.HasAnnotationOption(obj, helm.ResourcePolicyAnnotation, helm.ResourcePolicyKeep)
}
func (ctrl *ApplicationController) getPermittedAppLiveObjects(app *appv1.Application, proj *appv1.AppProject, projectClusters func(project string) ([]*appv1.Cluster, error)) (map[kube.ResourceKey]*unstructured.Unstructured, error) {

View File

@@ -1528,4 +1528,10 @@ func Test_syncDeleteOption(t *testing.T) {
delete := ctrl.shouldBeDeleted(app, cmObj)
assert.False(t, delete)
})
t.Run("with delete set to false object is retained", func(t *testing.T) {
cmObj := kube.MustToUnstructured(&cm)
cmObj.SetAnnotations(map[string]string{"helm.sh/resource-policy": "keep"})
delete := ctrl.shouldBeDeleted(app, cmObj)
assert.False(t, delete)
})
}

View File

@@ -322,7 +322,25 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
var resState []common.ResourceSyncResult
state.Phase, state.Message, resState = syncCtx.GetState()
state.SyncResult.Resources = nil
var apiVersion []kube.APIResourceInfo
for _, res := range resState {
augmentedMsg, err := argo.AugmentSyncMsg(res, func() ([]kube.APIResourceInfo, error) {
if apiVersion == nil {
_, apiVersion, err = m.liveStateCache.GetVersionsInfo(app.Spec.Destination.Server)
if err != nil {
return nil, fmt.Errorf("failed to get version info from the target cluster %q", app.Spec.Destination.Server)
}
}
return apiVersion, nil
})
if err != nil {
log.Errorf("using the original message since: %v", err)
} else {
res.Message = augmentedMsg
}
state.SyncResult.Resources = append(state.SyncResult.Resources, &v1alpha1.ResourceResult{
HookType: res.HookType,
Group: res.ResourceKey.Group,

BIN
docs/assets/extra_info-1.png Normal file (binary image, 32 KiB)

BIN
docs/assets/extra_info-2.png Normal file (binary image, 8.3 KiB)

BIN
docs/assets/extra_info.png Normal file (binary image, 132 KiB)

View File

@@ -9,16 +9,6 @@ setTimeout(function() {
caret.innerHTML = "<i class='fa fa-caret-down dropdown-caret'></i>"
caret.classList.add('dropdown-caret')
div.querySelector('.rst-current-version').appendChild(caret);
div.querySelector('.rst-current-version').addEventListener('click', function() {
const classes = container.className.split(' ');
const index = classes.indexOf('shift-up');
if (index === -1) {
classes.push('shift-up');
} else {
classes.splice(index, 1);
}
container.className = classes.join(' ');
});
}
var CSSLink = document.createElement('link');

View File

@@ -152,7 +152,12 @@ spec:
# name: in-cluster
# The namespace will only be set for namespace-scoped resources that have not set a value for .metadata.namespace
namespace: guestbook
# Extra information to show in the Argo CD Application details tab
info:
- name: 'Example:'
value: 'https://example.com'
# Sync policy
syncPolicy:
automated: # automated sync by default retries failed attempts 5 times with following delays between attempts ( 5s, 10s, 20s, 40s, 80s ); retry controlled using `retry` field.

View File

@@ -66,7 +66,7 @@ See [Web-based Terminal](web_based_terminal.md) for more info.
#### The `applicationsets` resource
[ApplicationSets](applicationset) provide a declarative way to automatically create/update/delete Applications.
[ApplicationSets](applicationset/index.md) provide a declarative way to automatically create/update/delete Applications.
Granting `applicationsets, create` effectively grants the ability to create Applications. While it doesn't allow the
user to create Applications directly, they can create Applications via an ApplicationSet.
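
A policy grant of this shape (a minimal sketch of `argocd-rbac-cm`; the role name is illustrative) is what the passage describes:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    # role name is illustrative; this grant lets the subject create
    # ApplicationSets, and therefore (indirectly) Applications
    p, role:appset-creator, applicationsets, create, */*, allow
```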

View File

@@ -58,7 +58,7 @@ The manifests are now using [`tini` as entrypoint][3], instead of `entrypoint.sh
## Deep Links template updates
Deep Links now allow you to access other values like `cluster`, `project`, `application` and `resource` in the url and condition templates for specific categories of links.
Deep Links now allow you to access other values like `cluster`, `project`, `application` and `resource` in the url and condition templates for specific categories of links.
The templating syntax has also been updated to be prefixed with the type of resource you want to access. For example, previously if you had a `resource.links` config like:
```yaml
resource.links: |
@@ -75,3 +75,9 @@ This would become :
```
Read the full [documentation](../deep_links.md) to see all possible combinations of values accessible for each category of links.
## Support of `helm.sh/resource-policy` annotation
Argo CD now supports the `helm.sh/resource-policy` annotation to control the deletion of resources. The behavior is the same as the behavior of
`argocd.argoproj.io/sync-options: Delete=false` annotation: if the annotation is present and set to `keep`, the resource will not be deleted
when the application is deleted.
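
For example, annotating a resource as below (a minimal sketch; the ConfigMap and its name are illustrative) prevents Argo CD from deleting it when the Application is deleted:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: precious-config   # illustrative name
  annotations:
    # behaves like argocd.argoproj.io/sync-options: Delete=false
    helm.sh/resource-policy: keep
```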

View File

@@ -25,6 +25,7 @@ argocd admin dashboard [flags]
--password string Password for basic authentication to the API server
--port int Listen on given port (default 8080)
--proxy-url string If provided, this URL will be used to connect via proxy
--redis-compress string Enable this if the application controller is configured with redis compression enabled. (possible values: none, gzip) (default "none")
--request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
--tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
--token string Bearer token for authentication to the API server
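
For example, if the application controller is configured with Redis compression enabled, the dashboard can be pointed at the gzipped cache with (illustrative invocation; setting `REDIS_COMPRESSION=gzip` in the environment is equivalent, per the flag's default):

argocd admin dashboard --redis-compress gzip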

View File

@@ -0,0 +1,28 @@
# Add extra Application info
You can add additional information to an Application on your Argo CD dashboard.
If you wish to add clickable links, see [Add external URL](https://argo-cd.readthedocs.io/en/stable/user-guide/external-url/).
This is done by providing key-value pairs in the `info` field of your Application manifest.
Example:
```yaml
project: argo-demo
source:
repoURL: 'https://demo'
path: argo-demo
destination:
server: https://demo
namespace: argo-demo
info:
- name: Example:
value: >-
https://example.com
```
![External link](../assets/extra_info-1.png)
The additional information will be visible on the Argo CD Application details page.
![External link](../assets/extra_info.png)
![External link](../assets/extra_info-2.png)

View File

@@ -117,8 +117,9 @@ Argo CD supports many (most?) Helm hooks by mapping the Helm annotations onto Ar
| `helm.sh/hook: test-success` | Not supported. No equivalent in Argo CD. |
| `helm.sh/hook: test-failure` | Not supported. No equivalent in Argo CD. |
| `helm.sh/hook-delete-policy` | Supported. See also `argocd.argoproj.io/hook-delete-policy`. |
| `helm.sh/hook-delete-timeout` | No supported. Never used in Helm stable |
| `helm.sh/hook-delete-timeout` | Not supported. Never used in Helm stable |
| `helm.sh/hook-weight` | Supported as equivalent to `argocd.argoproj.io/sync-wave`. |
| `helm.sh/resource-policy: keep` | Supported as equivalent to `argocd.argoproj.io/sync-options: Delete=false`. |
Unsupported hooks are ignored. In Argo CD, hooks are created by using `kubectl apply`, rather than `kubectl create`. This means that if the hook is named and already exists, it will not change unless you have annotated it with `before-hook-creation`.
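
As a sketch of how the mapping works in practice (the Job and all its values are illustrative; `helm.sh/hook: pre-install` runs as an Argo CD PreSync hook):
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                       # illustrative hook Job
  annotations:
    helm.sh/hook: pre-install            # runs as an Argo CD PreSync hook
    helm.sh/hook-weight: "5"             # equivalent to argocd.argoproj.io/sync-wave: "5"
    helm.sh/hook-delete-policy: hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example/migrator:latest # illustrative image
```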

View File

@@ -5,7 +5,7 @@ kind: Kustomization
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: latest
newTag: v2.7.0
resources:
- ./application-controller
- ./dex

View File

@@ -23,7 +23,7 @@ spec:
serviceAccountName: argocd-redis
containers:
- name: redis
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: Always
args:
- "--save"

View File

@@ -16700,7 +16700,7 @@ spec:
key: applicationsetcontroller.enable.progressive.syncs
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -16785,7 +16785,7 @@ spec:
env:
- name: ARGOCD_REDIS_SERVICE
value: argocd-redis
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -16967,7 +16967,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -17019,7 +17019,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -17232,7 +17232,7 @@ spec:
key: controller.kubectl.parallelism.limit
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -12,4 +12,4 @@ resources:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: latest
newTag: v2.7.0

View File

@@ -11,7 +11,7 @@ patches:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
newTag: latest
newTag: v2.7.0
resources:
- ../../base/application-controller
- ../../base/applicationset-controller

View File

@@ -1071,7 +1071,7 @@ spec:
topologyKey: kubernetes.io/hostname
initContainers:
- name: config-init
image: haproxy:2.6.9-alpine
image: haproxy:2.6.12-alpine
imagePullPolicy: IfNotPresent
resources:
{}
@@ -1089,7 +1089,7 @@ spec:
mountPath: /data
containers:
- name: haproxy
image: haproxy:2.6.9-alpine
image: haproxy:2.6.12-alpine
imagePullPolicy: IfNotPresent
securityContext:
null
@@ -1179,7 +1179,7 @@ spec:
automountServiceAccountToken: false
initContainers:
- name: config-init
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: IfNotPresent
resources:
{}
@@ -1206,7 +1206,7 @@ spec:
containers:
- name: redis
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: IfNotPresent
command:
- redis-server
@@ -1256,7 +1256,7 @@ spec:
- /bin/sh
- /readonly-config/trigger-failover-if-master.sh
- name: sentinel
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: IfNotPresent
command:
- redis-sentinel
@@ -1300,7 +1300,7 @@ spec:
{}
- name: split-brain-fix
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: IfNotPresent
command:
- sh

View File

@@ -11,14 +11,14 @@ redis-ha:
IPv6:
enabled: false
image:
tag: 2.6.9-alpine
tag: 2.6.12-alpine
containerSecurityContext: null
timeout:
server: 6m
client: 6m
checkInterval: 3s
image:
tag: 7.0.9-alpine
tag: 7.0.11-alpine
containerSecurityContext: null
sentinel:
bind: "0.0.0.0"

View File

@@ -17919,7 +17919,7 @@ spec:
key: applicationsetcontroller.enable.progressive.syncs
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -18029,7 +18029,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -18086,7 +18086,7 @@ spec:
containers:
- args:
- /usr/local/bin/argocd-notifications
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -18157,7 +18157,7 @@ spec:
app.kubernetes.io/name: argocd-redis-ha-haproxy
topologyKey: kubernetes.io/hostname
containers:
- image: haproxy:2.6.9-alpine
- image: haproxy:2.6.12-alpine
imagePullPolicy: IfNotPresent
lifecycle: {}
livenessProbe:
@@ -18193,7 +18193,7 @@ spec:
- /readonly/haproxy_init.sh
command:
- sh
image: haproxy:2.6.9-alpine
image: haproxy:2.6.12-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:
@@ -18388,7 +18388,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -18440,7 +18440,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -18719,7 +18719,7 @@ spec:
key: server.enable.proxy.extension
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -18961,7 +18961,7 @@ spec:
key: controller.kubectl.parallelism.limit
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -19038,7 +19038,7 @@ spec:
- /data/conf/redis.conf
command:
- redis-server
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -19091,7 +19091,7 @@ spec:
- /data/conf/sentinel.conf
command:
- redis-sentinel
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: IfNotPresent
lifecycle: {}
livenessProbe:
@@ -19143,7 +19143,7 @@ spec:
value: 40000915ab58c3fa8fd888fb8b24711944e6cbb4
- name: SENTINEL_ID_2
value: 2bbec7894d954a8af3bb54d13eaec53cb024e2ca
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -19172,7 +19172,7 @@ spec:
value: 40000915ab58c3fa8fd888fb8b24711944e6cbb4
- name: SENTINEL_ID_2
value: 2bbec7894d954a8af3bb54d13eaec53cb024e2ca
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:

View File

@@ -1579,7 +1579,7 @@ spec:
key: applicationsetcontroller.enable.progressive.syncs
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1689,7 +1689,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1746,7 +1746,7 @@ spec:
containers:
- args:
- /usr/local/bin/argocd-notifications
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1817,7 +1817,7 @@ spec:
app.kubernetes.io/name: argocd-redis-ha-haproxy
topologyKey: kubernetes.io/hostname
containers:
- image: haproxy:2.6.9-alpine
- image: haproxy:2.6.12-alpine
imagePullPolicy: IfNotPresent
lifecycle: {}
livenessProbe:
@@ -1853,7 +1853,7 @@ spec:
- /readonly/haproxy_init.sh
command:
- sh
image: haproxy:2.6.9-alpine
image: haproxy:2.6.12-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:
@@ -2048,7 +2048,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2100,7 +2100,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2379,7 +2379,7 @@ spec:
key: server.enable.proxy.extension
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2621,7 +2621,7 @@ spec:
key: controller.kubectl.parallelism.limit
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -2698,7 +2698,7 @@ spec:
- /data/conf/redis.conf
command:
- redis-server
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -2751,7 +2751,7 @@ spec:
- /data/conf/sentinel.conf
command:
- redis-sentinel
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: IfNotPresent
lifecycle: {}
livenessProbe:
@@ -2803,7 +2803,7 @@ spec:
value: 40000915ab58c3fa8fd888fb8b24711944e6cbb4
- name: SENTINEL_ID_2
value: 2bbec7894d954a8af3bb54d13eaec53cb024e2ca
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -2832,7 +2832,7 @@ spec:
value: 40000915ab58c3fa8fd888fb8b24711944e6cbb4
- name: SENTINEL_ID_2
value: 2bbec7894d954a8af3bb54d13eaec53cb024e2ca
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:

View File

@@ -17038,7 +17038,7 @@ spec:
key: applicationsetcontroller.enable.progressive.syncs
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -17148,7 +17148,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -17205,7 +17205,7 @@ spec:
containers:
- args:
- /usr/local/bin/argocd-notifications
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -17285,7 +17285,7 @@ spec:
env:
- name: ARGOCD_REDIS_SERVICE
value: argocd-redis
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -17467,7 +17467,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -17519,7 +17519,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -17794,7 +17794,7 @@ spec:
key: server.enable.proxy.extension
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -18034,7 +18034,7 @@ spec:
key: controller.kubectl.parallelism.limit
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -698,7 +698,7 @@ spec:
key: applicationsetcontroller.enable.progressive.syncs
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -808,7 +808,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -865,7 +865,7 @@ spec:
containers:
- args:
- /usr/local/bin/argocd-notifications
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -945,7 +945,7 @@ spec:
env:
- name: ARGOCD_REDIS_SERVICE
value: argocd-redis
image: redis:7.0.9-alpine
image: redis:7.0.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -1127,7 +1127,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1179,7 +1179,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -1454,7 +1454,7 @@ spec:
key: server.enable.proxy.extension
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -1694,7 +1694,7 @@ spec:
key: controller.kubectl.parallelism.limit
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.7.0
imagePullPolicy: Always
name: argocd-application-controller
ports:

View File

@@ -164,6 +164,7 @@ nav:
- user-guide/best_practices.md
- user-guide/status-badge.md
- user-guide/external-url.md
- user-guide/extra_info.md
- Notification subscriptions: user-guide/subscriptions.md
- Command Reference: user-guide/commands/argocd.md
- Developer Guide:

View File

@@ -207,8 +207,21 @@ func (s *Server) List(ctx context.Context, q *application.ApplicationQuery) (*ap
if err != nil {
return nil, fmt.Errorf("error listing apps with selectors: %w", err)
}
filteredApps := apps
// Filter applications by name
if q.Name != nil {
filteredApps = argoutil.FilterByNameP(filteredApps, *q.Name)
}
// Filter applications by projects
filteredApps = argoutil.FilterByProjectsP(filteredApps, getProjectsFromApplicationQuery(*q))
// Filter applications by source repo URL
filteredApps = argoutil.FilterByRepoP(filteredApps, q.GetRepo())
newItems := make([]appv1.Application, 0)
for _, a := range apps {
for _, a := range filteredApps {
// Skip any application that is neither in the control plane's namespace
// nor in the list of enabled namespaces.
if a.Namespace != s.ns && !glob.MatchStringInList(s.enabledNamespaces, a.Namespace, false) {
@@ -219,19 +232,6 @@ func (s *Server) List(ctx context.Context, q *application.ApplicationQuery) (*ap
}
}
if q.Name != nil {
newItems, err = argoutil.FilterByName(newItems, *q.Name)
if err != nil {
return nil, fmt.Errorf("error filtering applications by name: %w", err)
}
}
// Filter applications by projects
newItems = argoutil.FilterByProjects(newItems, getProjectsFromApplicationQuery(*q))
// Filter applications by source repo URL
newItems = argoutil.FilterByRepo(newItems, q.GetRepo())
// Sort found applications by name
sort.Slice(newItems, func(i, j int) bool {
return newItems[i].Name < newItems[j].Name

View File

@@ -302,6 +302,186 @@ func newTestAppServerWithEnforcerConfigure(f func(*rbac.Enforcer), t *testing.T,
return server.(*Server)
}
// return an ApplicationServiceServer which returns fake data
func newTestAppServerWithBenchmark(b *testing.B, objects ...runtime.Object) *Server {
f := func(enf *rbac.Enforcer) {
_ = enf.SetBuiltinPolicy(assets.BuiltinPolicyCSV)
enf.SetDefaultRole("role:admin")
}
return newTestAppServerWithEnforcerConfigureWithBenchmark(f, b, objects...)
}
func newTestAppServerWithEnforcerConfigureWithBenchmark(f func(*rbac.Enforcer), b *testing.B, objects ...runtime.Object) *Server {
kubeclientset := fake.NewSimpleClientset(&v1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: testNamespace,
Name: "argocd-cm",
Labels: map[string]string{
"app.kubernetes.io/part-of": "argocd",
},
},
}, &v1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "argocd-secret",
Namespace: testNamespace,
},
Data: map[string][]byte{
"admin.password": []byte("test"),
"server.secretkey": []byte("test"),
},
})
ctx := context.Background()
db := db.NewDB(testNamespace, settings.NewSettingsManager(ctx, kubeclientset, testNamespace), kubeclientset)
_, err := db.CreateRepository(ctx, fakeRepo())
require.NoError(b, err)
_, err = db.CreateCluster(ctx, fakeCluster())
require.NoError(b, err)
mockRepoClient := &mocks.Clientset{RepoServerServiceClient: fakeRepoServerClient(false)}
defaultProj := &appsv1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: "default"},
Spec: appsv1.AppProjectSpec{
SourceRepos: []string{"*"},
Destinations: []appsv1.ApplicationDestination{{Server: "*", Namespace: "*"}},
},
}
myProj := &appsv1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "my-proj", Namespace: "default"},
Spec: appsv1.AppProjectSpec{
SourceRepos: []string{"*"},
Destinations: []appsv1.ApplicationDestination{{Server: "*", Namespace: "*"}},
},
}
projWithSyncWindows := &appsv1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "proj-maint", Namespace: "default"},
Spec: appsv1.AppProjectSpec{
SourceRepos: []string{"*"},
Destinations: []appsv1.ApplicationDestination{{Server: "*", Namespace: "*"}},
SyncWindows: appsv1.SyncWindows{},
},
}
matchingWindow := &appsv1.SyncWindow{
Kind: "allow",
Schedule: "* * * * *",
Duration: "1h",
Applications: []string{"test-app"},
}
projWithSyncWindows.Spec.SyncWindows = append(projWithSyncWindows.Spec.SyncWindows, matchingWindow)
objects = append(objects, defaultProj, myProj, projWithSyncWindows)
fakeAppsClientset := apps.NewSimpleClientset(objects...)
factory := appinformer.NewSharedInformerFactoryWithOptions(fakeAppsClientset, 0, appinformer.WithNamespace(""), appinformer.WithTweakListOptions(func(options *metav1.ListOptions) {}))
fakeProjLister := factory.Argoproj().V1alpha1().AppProjects().Lister().AppProjects(testNamespace)
enforcer := rbac.NewEnforcer(kubeclientset, testNamespace, common.ArgoCDRBACConfigMapName, nil)
f(enforcer)
enforcer.SetClaimsEnforcerFunc(rbacpolicy.NewRBACPolicyEnforcer(enforcer, fakeProjLister).EnforceClaims)
settingsMgr := settings.NewSettingsManager(ctx, kubeclientset, testNamespace)
// populate the app informer with the fake objects
appInformer := factory.Argoproj().V1alpha1().Applications().Informer()
go appInformer.Run(ctx.Done())
if !k8scache.WaitForCacheSync(ctx.Done(), appInformer.HasSynced) {
panic("Timed out waiting for caches to sync")
}
projInformer := factory.Argoproj().V1alpha1().AppProjects().Informer()
go projInformer.Run(ctx.Done())
if !k8scache.WaitForCacheSync(ctx.Done(), projInformer.HasSynced) {
panic("Timed out waiting for caches to sync")
}
broadcaster := new(appmocks.Broadcaster)
broadcaster.On("Subscribe", mock.Anything, mock.Anything).Return(func() {}).Run(func(args mock.Arguments) {
// Simulate the broadcaster notifying the subscriber of an application update.
// The second parameter to Subscribe is filters. For the purposes of tests, we ignore the filters. Future tests
// might require implementing those.
go func() {
events := args.Get(0).(chan *appsv1.ApplicationWatchEvent)
for _, obj := range objects {
app, ok := obj.(*appsv1.Application)
if ok {
oldVersion, err := strconv.Atoi(app.ResourceVersion)
if err != nil {
oldVersion = 0
}
clonedApp := app.DeepCopy()
clonedApp.ResourceVersion = fmt.Sprintf("%d", oldVersion+1)
events <- &appsv1.ApplicationWatchEvent{Type: watch.Added, Application: *clonedApp}
}
}
}()
})
broadcaster.On("OnAdd", mock.Anything).Return()
broadcaster.On("OnUpdate", mock.Anything, mock.Anything).Return()
broadcaster.On("OnDelete", mock.Anything).Return()
appStateCache := appstate.NewCache(cache.NewCache(cache.NewInMemoryCache(time.Hour)), time.Hour)
// pre-populate the app cache
for _, obj := range objects {
app, ok := obj.(*appsv1.Application)
if ok {
err := appStateCache.SetAppManagedResources(app.Name, []*appsv1.ResourceDiff{})
require.NoError(b, err)
// Pre-populate the resource tree based on the app's resources.
nodes := make([]appsv1.ResourceNode, len(app.Status.Resources))
for i, res := range app.Status.Resources {
nodes[i] = appsv1.ResourceNode{
ResourceRef: appsv1.ResourceRef{
Group: res.Group,
Kind: res.Kind,
Version: res.Version,
Name: res.Name,
Namespace: res.Namespace,
UID: "fake",
},
}
}
err = appStateCache.SetAppResourcesTree(app.Name, &appsv1.ApplicationTree{
Nodes: nodes,
})
require.NoError(b, err)
}
}
appCache := servercache.NewCache(appStateCache, time.Hour, time.Hour, time.Hour)
kubectl := &kubetest.MockKubectlCmd{}
kubectl = kubectl.WithGetResourceFunc(func(_ context.Context, _ *rest.Config, gvk schema.GroupVersionKind, name string, namespace string) (*unstructured.Unstructured, error) {
for _, obj := range objects {
if obj.GetObjectKind().GroupVersionKind().GroupKind() == gvk.GroupKind() {
if obj, ok := obj.(*unstructured.Unstructured); ok && obj.GetName() == name && obj.GetNamespace() == namespace {
return obj, nil
}
}
}
return nil, nil
})
server, _ := NewServer(
testNamespace,
kubeclientset,
fakeAppsClientset,
factory.Argoproj().V1alpha1().Applications().Lister(),
appInformer,
broadcaster,
mockRepoClient,
appCache,
kubectl,
db,
enforcer,
sync.NewKeyLock(),
settingsMgr,
projInformer,
[]string{},
)
return server.(*Server)
}
const fakeApp = `
apiVersion: argoproj.io/v1alpha1
kind: Application
@@ -1009,6 +1189,135 @@ g, group-49, role:test3
assert.Equal(t, 300, len(names))
}
func generateTestApp(num int) []*appsv1.Application {
apps := []*appsv1.Application{}
for i := 0; i < num; i++ {
apps = append(apps, newTestApp(func(app *appsv1.Application) {
app.Name = fmt.Sprintf("test-app%.6d", i)
}))
}
return apps
}
func BenchmarkListMuchApps(b *testing.B) {
// 10000 apps
apps := generateTestApp(10000)
obj := make([]runtime.Object, len(apps))
for i, v := range apps {
obj[i] = v
}
appServer := newTestAppServerWithBenchmark(b, obj...)
b.ResetTimer()
for n := 0; n < b.N; n++ {
_, err := appServer.List(context.Background(), &application.ApplicationQuery{})
if err != nil {
break
}
}
}
func BenchmarkListSomeApps(b *testing.B) {
// 500 apps
apps := generateTestApp(500)
obj := make([]runtime.Object, len(apps))
for i, v := range apps {
obj[i] = v
}
appServer := newTestAppServerWithBenchmark(b, obj...)
b.ResetTimer()
for n := 0; n < b.N; n++ {
_, err := appServer.List(context.Background(), &application.ApplicationQuery{})
if err != nil {
break
}
}
}
func BenchmarkListFewApps(b *testing.B) {
// 10 apps
apps := generateTestApp(10)
obj := make([]runtime.Object, len(apps))
for i, v := range apps {
obj[i] = v
}
appServer := newTestAppServerWithBenchmark(b, obj...)
b.ResetTimer()
for n := 0; n < b.N; n++ {
_, err := appServer.List(context.Background(), &application.ApplicationQuery{})
if err != nil {
break
}
}
}
func strToPtr(v string) *string {
return &v
}
func BenchmarkListMuchAppsWithName(b *testing.B) {
// 10000 apps
appsMuch := generateTestApp(10000)
obj := make([]runtime.Object, len(appsMuch))
for i, v := range appsMuch {
obj[i] = v
}
appServer := newTestAppServerWithBenchmark(b, obj...)
b.ResetTimer()
for n := 0; n < b.N; n++ {
app := &application.ApplicationQuery{Name: strToPtr("test-app000099")}
_, err := appServer.List(context.Background(), app)
if err != nil {
break
}
}
}
func BenchmarkListMuchAppsWithProjects(b *testing.B) {
// 10000 apps
appsMuch := generateTestApp(10000)
appsMuch[999].Spec.Project = "test-project1"
appsMuch[1999].Spec.Project = "test-project2"
obj := make([]runtime.Object, len(appsMuch))
for i, v := range appsMuch {
obj[i] = v
}
appServer := newTestAppServerWithBenchmark(b, obj...)
b.ResetTimer()
for n := 0; n < b.N; n++ {
app := &application.ApplicationQuery{Project: []string{"test-project1", "test-project2"}}
_, err := appServer.List(context.Background(), app)
if err != nil {
break
}
}
}
func BenchmarkListMuchAppsWithRepo(b *testing.B) {
// 10000 apps
appsMuch := generateTestApp(10000)
appsMuch[999].Spec.Source.RepoURL = "https://some-fake-source"
obj := make([]runtime.Object, len(appsMuch))
for i, v := range appsMuch {
obj[i] = v
}
appServer := newTestAppServerWithBenchmark(b, obj...)
b.ResetTimer()
for n := 0; n < b.N; n++ {
app := &application.ApplicationQuery{Repo: strToPtr("https://some-fake-source")}
_, err := appServer.List(context.Background(), app)
if err != nil {
break
}
}
}
func TestCreateApp(t *testing.T) {
testApp := newTestApp()
appServer := newTestAppServer(t)

View File

@@ -1,4 +1,4 @@
FROM docker.io/library/redis:7.0.9@sha256:e50c7e23f79ae81351beacb20e004720d4bed657415e68c2b1a2b5557c075ce0 as redis
FROM docker.io/library/redis:7.0.11@sha256:f50031a49f41e493087fb95f96fdb3523bb25dcf6a3f0b07c588ad3cdbe1d0aa as redis
# There are libraries we will want to copy from here in the final stage of the
# build, but the COPY directive does not have a way to determine system

View File

@@ -0,0 +1,57 @@
package e2e
import (
"testing"
. "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
. "github.com/argoproj/argo-cd/v2/test/e2e/fixture/app"
)
func TestAppSyncWrongVersion(t *testing.T) {
// Make sure the error messages are good when there are group or version mismatches between CRDs and resources.
ctx := Given(t)
ctx.
Path("crd-version-differences").
When().
CreateApp().
// Install CRD and one instance of it on v1alpha1
AppSet("--directory-include", "crd-v1alpha1.yaml").
Sync().
Then().
Expect(SyncStatusIs(SyncStatusCodeSynced)).
When().
AppSet("--directory-include", "crd-v1alpha2-instance.yaml").
IgnoreErrors(). // Ignore errors because we are testing the error message.
Sync().
Then().
Expect(SyncStatusIs(SyncStatusCodeOutOfSync)).
When().
DoNotIgnoreErrors().
Get().
Then().
// Technically it's a "success" because we're just doing a "get," but the get output contains the error message.
Expect(SuccessRegex(`The Kubernetes API could not find version "v1alpha2" of argoproj\.io/Fake for requested resource [a-z0-9-]+/fake-crd-instance\. Version "v1alpha1" of argoproj\.io/Fake is installed on the destination cluster\.`)).
When().
AppSet("--directory-include", "crd-wronggroup-instance.yaml", "--directory-exclude", "crd-v1alpha2-instance.yaml").
IgnoreErrors(). // Ignore errors because we are testing the error message.
Sync().
Then().
Expect(SyncStatusIs(SyncStatusCodeOutOfSync)).
When().
DoNotIgnoreErrors().
Get().
Then().
Expect(SuccessRegex(`The Kubernetes API could not find version "v1alpha1" of wrong\.group/Fake for requested resource [a-z0-9-]+/fake-crd-instance-wronggroup\. Version "v1alpha1" of argoproj\.io/Fake is installed on the destination cluster\.`)).
When().
AppSet("--directory-include", "crd-does-not-exist-instance.yaml", "--directory-exclude", "crd-wronggroup-instance.yaml").
IgnoreErrors(). // Ignore errors because we are testing the error message.
Sync().
Then().
Expect(SyncStatusIs(SyncStatusCodeOutOfSync)).
When().
DoNotIgnoreErrors().
Get().
Then().
// Not the best error message, but good enough.
Expect(Success(`DoesNotExist.argoproj.io "" not found`))
}

View File

@@ -353,6 +353,12 @@ func (a *Actions) Refresh(refreshType RefreshType) *Actions {
return a
}
func (a *Actions) Get() *Actions {
a.context.t.Helper()
a.runCli("app", "get", a.context.AppQualifiedName())
return a
}
func (a *Actions) Delete(cascade bool) *Actions {
a.context.t.Helper()
a.runCli("app", "delete", a.context.AppQualifiedName(), fmt.Sprintf("--cascade=%v", cascade), "--yes")

View File

@@ -299,13 +299,24 @@ func NamespacedEvent(namespace string, reason string, message string) Expectatio
return event(namespace, reason, message)
}
-// asserts that the last command was successful
-func Success(message string) Expectation {
+// Success asserts that the last command was successful and that the output contains the given message.
+func Success(message string, matchers ...func(string, string) bool) Expectation {
if len(matchers) == 0 {
matchers = append(matchers, strings.Contains)
}
match := func(actual, expected string) bool {
for i := range matchers {
if !matchers[i](actual, expected) {
return false
}
}
return true
}
return func(c *Consequences) (state, string) {
if c.actions.lastError != nil {
return failed, "error"
}
-if !strings.Contains(c.actions.lastOutput, message) {
+if !match(c.actions.lastOutput, message) {
return failed, fmt.Sprintf("output did not contain '%s'", message)
}
return succeeded, fmt.Sprintf("no error and output contained '%s'", message)
@@ -345,3 +356,10 @@ func ErrorRegex(messagePattern, err string) Expectation {
return regexp.MustCompile(expected).MatchString(actual)
})
}
// SuccessRegex asserts that the last command was successful and that the output matches the given regular expression
func SuccessRegex(messagePattern string) Expectation {
return Success(messagePattern, func(actual, expected string) bool {
return regexp.MustCompile(expected).MatchString(actual)
})
}
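A usage sketch for the API above, reusing assertions from the e2e test earlier in this diff: with no matchers, Success falls back to plain substring matching, while SuccessRegex swaps in a regexp matcher.

// Default matcher: substring containment.
Expect(Success(`DoesNotExist.argoproj.io "" not found`))
// Regexp matcher supplied via SuccessRegex.
Expect(SuccessRegex(`could not find version "v1alpha2" of argoproj\.io/Fake`))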

View File

@@ -0,0 +1,7 @@
apiVersion: argoproj.io/v1alpha1
kind: DoesNotExist
metadata:
name: dummy-crd-instance-doesnotexist
spec:
cpu: 2000m
memory: 32Mi

View File

@@ -0,0 +1,35 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: fakes.argoproj.io
spec:
conversion:
strategy: None
group: argoproj.io
names:
kind: Fake
listKind: FakeList
plural: fakes
singular: fake
scope: Namespaced
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
properties:
cpu:
type: string
memory:
type: string
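Note that this CRD serves and stores only v1alpha1, which is what makes the v1alpha2 instance in the next fixture go out of sync with the version-mismatch message asserted by TestAppSyncWrongVersion above.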

View File

@@ -0,0 +1,7 @@
apiVersion: argoproj.io/v1alpha2
kind: Fake
metadata:
name: fake-crd-instance
spec:
cpu: 2000m
memory: 32Mi

View File

@@ -0,0 +1,7 @@
apiVersion: wrong.group/v1alpha1
kind: Fake
metadata:
name: fake-crd-instance-wronggroup
spec:
cpu: 2000m
memory: 32Mi

View File

@@ -319,7 +319,7 @@ export const ApplicationParameters = (props: {
const liveParam = app.spec.source.plugin?.parameters?.find(param => param.name === name);
const pluginIcon =
announcement && liveParam ? 'This parameter has been provided by plugin, but is overridden in application manifest.' : 'This parameter is provided by the plugin.';
-const isPluginPar = announcement ? true : false;
+const isPluginPar = !!announcement;
if ((announcement?.collectionType === undefined && liveParam?.map) || announcement?.collectionType === 'map') {
let liveParamMap;
if (liveParam) {
@@ -351,7 +351,7 @@ export const ApplicationParameters = (props: {
<FormField
field='spec.source.plugin.parameters'
componentProps={{
-name: announcement?.title ?? announcement?.name ?? name,
+name: announcement?.name ?? name,
defaultVal: announcement?.map,
isPluginPar,
setAppParamsDeletedState
@@ -388,7 +388,7 @@ export const ApplicationParameters = (props: {
<FormField
field='spec.source.plugin.parameters'
componentProps={{
-name: announcement?.title ?? announcement?.name ?? name,
+name: announcement?.name ?? name,
defaultVal: announcement?.array,
isPluginPar,
setAppParamsDeletedState
@@ -429,7 +429,7 @@ export const ApplicationParameters = (props: {
<FormField
field='spec.source.plugin.parameters'
componentProps={{
-name: announcement?.title ?? announcement?.name ?? name,
+name: announcement?.name ?? name,
defaultVal: announcement?.string,
isPluginPar,
setAppParamsDeletedState

View File

@@ -225,3 +225,14 @@ code {
font-weight: inherit;
color: inherit;
}
/* Hide scrollbar for Chrome, Safari and Opera */
.noscroll::-webkit-scrollbar {
display: none;
}
/* Hide scrollbar for IE, Edge and Firefox */
.noscroll {
-ms-overflow-style: none; /* IE and Edge */
scrollbar-width: none; /* Firefox */
}

View File

@@ -5,6 +5,7 @@ import {useEffect, useRef, useState} from 'react';
import {bufferTime, delay, filter as rxfilter, map, retryWhen, scan} from 'rxjs/operators';
import * as models from '../../../shared/models';
+import {LogEntry} from '../../../shared/models';
import {services, ViewPreferences} from '../../../shared/services';
import AutoSizer from 'react-virtualized/dist/commonjs/AutoSizer';
@@ -26,7 +27,6 @@ import {SinceSecondsSelector} from './since-seconds-selector';
import {TailSelector} from './tail-selector';
import {PodNamesToggleButton} from './pod-names-toggle-button';
import Ansi from 'ansi-to-react';
-import {LogEntry} from '../../../shared/models';
export interface PodLogsProps {
namespace: string;
@@ -210,11 +210,19 @@ export const PodsLogsViewer = (props: PodLogsProps) => {
? (lineNum === 0 || logs[lineNum - 1].timeStamp !== log.timeStamp ? log.timeStampStr : ' '.repeat(log.timeStampStr.length)) + ' '
: '') +
// show the log content, highlight the filter text
-log.content.replace(highlight, (substring: string) => whiteOnYellow + substring + reset);
+log.content?.replace(highlight, (substring: string) => whiteOnYellow + substring + reset);
// logs are in 14px wide fixed width font
const width =
14 *
logs
.map(renderLog)
.map(v => v.length)
.reduce((a, b) => Math.max(a, b));
const rowRenderer = ({index, key, style}: {index: number; key: string; style: React.CSSProperties}) => {
return (
-<pre key={key} style={style}>
+<pre key={key} style={style} className='noscroll'>
<Ansi>{renderLog(logs[index], index)}</Ansi>
</pre>
);
@@ -229,7 +237,7 @@ export const PodsLogsViewer = (props: PodLogsProps) => {
<>
<AutoSizer>
{({height}: {width: number; height: number}) => (
-<List ref={list} rowCount={logs.length} rowRenderer={rowRenderer} width={4096} height={height - 20} rowHeight={20} />
+<List ref={list} rowCount={logs.length} rowRenderer={rowRenderer} width={width} height={height - 20} rowHeight={20} />
)}
</AutoSizer>
</>

View File

@@ -10,6 +10,7 @@ import (
"time"
"github.com/argoproj/gitops-engine/pkg/cache"
"github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/r3labs/diff"
log "github.com/sirupsen/logrus"
@@ -34,6 +35,50 @@ const (
errDestinationMissing = "Destination server missing from app spec"
)
// AugmentSyncMsg enriches the K8s message with user-relevant information
func AugmentSyncMsg(res common.ResourceSyncResult, apiResourceInfoGetter func() ([]kube.APIResourceInfo, error)) (string, error) {
switch res.Message {
case "the server could not find the requested resource":
resource, err := getAPIResourceInfo(res.ResourceKey.Group, res.ResourceKey.Kind, apiResourceInfoGetter)
if err != nil {
return "", fmt.Errorf("failed to get API resource info for group %q and kind %q: %w", res.ResourceKey.Group, res.ResourceKey.Kind, err)
}
if resource == nil {
res.Message = fmt.Sprintf("The Kubernetes API could not find %s/%s for requested resource %s/%s. Make sure the %q CRD is installed on the destination cluster.", res.ResourceKey.Group, res.ResourceKey.Kind, res.ResourceKey.Namespace, res.ResourceKey.Name, res.ResourceKey.Kind)
} else {
res.Message = fmt.Sprintf("The Kubernetes API could not find version %q of %s/%s for requested resource %s/%s. Version %q of %s/%s is installed on the destination cluster.", res.Version, res.ResourceKey.Group, res.ResourceKey.Kind, res.ResourceKey.Namespace, res.ResourceKey.Name, resource.GroupVersionResource.Version, resource.GroupKind.Group, resource.GroupKind.Kind)
}
}
return res.Message, nil
}
// getAPIResourceInfo gets Kubernetes API resource info for the given group and kind. If there's a matching resource
// group _and_ kind, it will return the resource info. If there's a matching kind but no matching group, it will
// return the first resource info that matches the kind. If there's no matching kind, it will return nil.
func getAPIResourceInfo(group, kind string, getApiResourceInfo func() ([]kube.APIResourceInfo, error)) (*kube.APIResourceInfo, error) {
apiResources, err := getApiResourceInfo()
if err != nil {
return nil, fmt.Errorf("failed to get API resource info: %w", err)
}
for _, r := range apiResources {
if r.GroupKind.Group == group && r.GroupKind.Kind == kind {
return &r, nil
}
}
for _, r := range apiResources {
if r.GroupKind.Kind == kind {
return &r, nil
}
}
return nil, nil
}
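A minimal sketch of calling AugmentSyncMsg with a stubbed discovery callback, mirroring the unit tests later in this diff (the resource name and versions here are illustrative):

res := common.ResourceSyncResult{
	ResourceKey: kube.ResourceKey{Group: "apps", Kind: "Deployment", Namespace: "test-namespace", Name: "web"},
	Version:     "v1beta1",
	Message:     "the server could not find the requested resource",
}
// err is nil for this input; the stub reports apps/v1 as the installed version.
msg, _ := AugmentSyncMsg(res, func() ([]kube.APIResourceInfo, error) {
	return []kube.APIResourceInfo{{
		GroupKind:            schema.GroupKind{Group: "apps", Kind: "Deployment"},
		GroupVersionResource: schema.GroupVersionResource{Group: "apps", Version: "v1"},
	}}, nil
})
// msg: The Kubernetes API could not find version "v1beta1" of apps/Deployment
// for requested resource test-namespace/web. Version "v1" of apps/Deployment
// is installed on the destination cluster.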
// FormatAppConditions returns a string representation of the given app condition list
func FormatAppConditions(conditions []argoappv1.ApplicationCondition) string {
formattedConditions := make([]string, 0)
@@ -63,6 +108,26 @@ func FilterByProjects(apps []argoappv1.Application, projects []string) []argoapp
}
// FilterByProjectsP returns pointers to the applications that belong to the specified projects
func FilterByProjectsP(apps []*argoappv1.Application, projects []string) []*argoappv1.Application {
if len(projects) == 0 {
return apps
}
projectsMap := make(map[string]bool)
for i := range projects {
projectsMap[projects[i]] = true
}
items := make([]*argoappv1.Application, 0)
for i := 0; i < len(apps); i++ {
a := apps[i]
if _, ok := projectsMap[a.Spec.GetProject()]; ok {
items = append(items, a)
}
}
return items
}
// FilterAppSetsByProjects returns the ApplicationSets that belong to the specified projects
func FilterAppSetsByProjects(appsets []argoappv1.ApplicationSet, projects []string) []argoappv1.ApplicationSet {
if len(projects) == 0 {
@@ -96,6 +161,20 @@ func FilterByRepo(apps []argoappv1.Application, repo string) []argoappv1.Applica
return items
}
// FilterByRepoP returns pointers to the applications whose source repo URL matches the given repo
func FilterByRepoP(apps []*argoappv1.Application, repo string) []*argoappv1.Application {
if repo == "" {
return apps
}
items := make([]*argoappv1.Application, 0)
for i := 0; i < len(apps); i++ {
if apps[i].Spec.GetSource().RepoURL == repo {
items = append(items, apps[i])
}
}
return items
}
// FilterByCluster returns the applications deployed to the given cluster
func FilterByCluster(apps []argoappv1.Application, cluster string) []argoappv1.Application {
if cluster == "" {
@@ -125,6 +204,22 @@ func FilterByName(apps []argoappv1.Application, name string) ([]argoappv1.Applic
return items, status.Errorf(codes.NotFound, "application '%s' not found", name)
}
// FilterByNameP returns pointers to the applications matching the given name.
// This pointer-based variant was added for the changes in #12985.
func FilterByNameP(apps []*argoappv1.Application, name string) []*argoappv1.Application {
if name == "" {
return apps
}
items := make([]*argoappv1.Application, 0)
for i := 0; i < len(apps); i++ {
if apps[i].Name == name {
items = append(items, apps[i])
return items
}
}
return items
}
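The three pointer-based filters chain over a single shared slice without copying Application values, which is presumably the motivation behind #12985. A hedged sketch, with apps, projects, repo, and name as assumed inputs:

// Each call re-slices apps without copying the underlying Applications.
filtered := FilterByProjectsP(apps, projects)
filtered = FilterByRepoP(filtered, repo)
filtered = FilterByNameP(filtered, name)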
// RefreshApp updates the refresh annotation of an application to coerce the controller to process it
func RefreshApp(appIf v1alpha1.ApplicationInterface, name string, refreshType argoappv1.RefreshType) (*argoappv1.Application, error) {
metadata := map[string]interface{}{

View File

@@ -11,6 +11,7 @@ import (
"github.com/argoproj/gitops-engine/pkg/utils/kube/kubetest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
corev1 "k8s.io/api/core/v1"
@@ -20,6 +21,8 @@ import (
"k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/tools/cache"
"github.com/argoproj/gitops-engine/pkg/sync/common"
argoappv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned/fake"
"github.com/argoproj/argo-cd/v2/pkg/client/informers/externalversions/application/v1alpha1"
@@ -487,6 +490,41 @@ func TestFilterByProjects(t *testing.T) {
})
}
func TestFilterByProjectsP(t *testing.T) {
apps := []*argoappv1.Application{
{
Spec: argoappv1.ApplicationSpec{
Project: "fooproj",
},
},
{
Spec: argoappv1.ApplicationSpec{
Project: "barproj",
},
},
}
t.Run("No apps in single project", func(t *testing.T) {
res := FilterByProjectsP(apps, []string{"foobarproj"})
assert.Empty(t, res)
})
t.Run("Single app in single project", func(t *testing.T) {
res := FilterByProjectsP(apps, []string{"fooproj"})
assert.Len(t, res, 1)
})
t.Run("Single app in multiple project", func(t *testing.T) {
res := FilterByProjectsP(apps, []string{"fooproj", "foobarproj"})
assert.Len(t, res, 1)
})
t.Run("Multiple apps in multiple project", func(t *testing.T) {
res := FilterByProjectsP(apps, []string{"fooproj", "barproj"})
assert.Len(t, res, 2)
})
}
func TestFilterByRepo(t *testing.T) {
apps := []argoappv1.Application{
{
@@ -521,6 +559,40 @@ func TestFilterByRepo(t *testing.T) {
})
}
func TestFilterByRepoP(t *testing.T) {
apps := []*argoappv1.Application{
{
Spec: argoappv1.ApplicationSpec{
Source: &argoappv1.ApplicationSource{
RepoURL: "git@github.com:owner/repo.git",
},
},
},
{
Spec: argoappv1.ApplicationSpec{
Source: &argoappv1.ApplicationSource{
RepoURL: "git@github.com:owner/otherrepo.git",
},
},
},
}
t.Run("Empty filter", func(t *testing.T) {
res := FilterByRepoP(apps, "")
assert.Len(t, res, 2)
})
t.Run("Match", func(t *testing.T) {
res := FilterByRepoP(apps, "git@github.com:owner/repo.git")
assert.Len(t, res, 1)
})
t.Run("No match", func(t *testing.T) {
res := FilterByRepoP(apps, "git@github.com:owner/willnotmatch.git")
assert.Len(t, res, 0)
})
}
func TestValidatePermissions(t *testing.T) {
t.Run("Empty Repo URL result in condition", func(t *testing.T) {
spec := argoappv1.ApplicationSpec{
@@ -938,6 +1010,42 @@ func TestFilterByName(t *testing.T) {
})
}
func TestFilterByNameP(t *testing.T) {
apps := []*argoappv1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "foo",
},
Spec: argoappv1.ApplicationSpec{
Project: "fooproj",
},
},
{
ObjectMeta: metav1.ObjectMeta{
Name: "bar",
},
Spec: argoappv1.ApplicationSpec{
Project: "barproj",
},
},
}
t.Run("Name is empty string", func(t *testing.T) {
res := FilterByNameP(apps, "")
assert.Len(t, res, 2)
})
t.Run("Single app by name", func(t *testing.T) {
res := FilterByNameP(apps, "foo")
assert.Len(t, res, 1)
})
t.Run("No such app", func(t *testing.T) {
res := FilterByNameP(apps, "foobar")
assert.Len(t, res, 0)
})
}
func TestGetGlobalProjects(t *testing.T) {
t.Run("Multiple global projects", func(t *testing.T) {
namespace := "default"
@@ -1328,3 +1436,132 @@ func TestValidatePermissionsMultipleSources(t *testing.T) {
assert.Len(t, conditions, 0)
})
}
func TestAugmentSyncMsg(t *testing.T) {
mockAPIResourcesFn := func() ([]kube.APIResourceInfo, error) {
return []kube.APIResourceInfo{
{
GroupKind: schema.GroupKind{
Group: "apps",
Kind: "Deployment",
},
GroupVersionResource: schema.GroupVersionResource{
Group: "apps",
Version: "v1",
},
},
{
GroupKind: schema.GroupKind{
Group: "networking.k8s.io",
Kind: "Ingress",
},
GroupVersionResource: schema.GroupVersionResource{
Group: "networking.k8s.io",
Version: "v1",
},
},
}, nil
}
testcases := []struct {
name string
msg string
expectedMessage string
res common.ResourceSyncResult
mockFn func() ([]kube.APIResourceInfo, error)
errMsg string
}{
{
name: "match specific k8s error",
msg: "the server could not find the requested resource",
res: common.ResourceSyncResult{
ResourceKey: kube.ResourceKey{
Name: "deployment-resource",
Namespace: "test-namespace",
Kind: "Deployment",
Group: "apps",
},
Version: "v1beta1",
},
expectedMessage: "The Kubernetes API could not find version \"v1beta1\" of apps/Deployment for requested resource test-namespace/deployment-resource. Version \"v1\" of apps/Deployment is installed on the destination cluster.",
mockFn: mockAPIResourcesFn,
},
{
name: "any random k8s msg",
msg: "random message from k8s",
res: common.ResourceSyncResult{
ResourceKey: kube.ResourceKey{
Name: "deployment-resource",
Namespace: "test-namespace",
Kind: "Deployment",
Group: "apps",
},
Version: "v1beta1",
},
expectedMessage: "random message from k8s",
mockFn: mockAPIResourcesFn,
},
{
name: "resource doesn't exist in the target cluster",
res: common.ResourceSyncResult{
ResourceKey: kube.ResourceKey{
Name: "persistent-volume-resource",
Namespace: "test-namespace",
Kind: "PersistentVolume",
Group: "",
},
Version: "v1",
},
msg: "the server could not find the requested resource",
expectedMessage: "The Kubernetes API could not find /PersistentVolume for requested resource test-namespace/persistent-volume-resource. Make sure the \"PersistentVolume\" CRD is installed on the destination cluster.",
mockFn: mockAPIResourcesFn,
},
{
name: "API Resource returns error",
res: common.ResourceSyncResult{
ResourceKey: kube.ResourceKey{
Name: "persistent-volume-resource",
Namespace: "test-namespace",
Kind: "PersistentVolume",
Group: "",
},
Version: "v1",
},
msg: "the server could not find the requested resource",
expectedMessage: "the server could not find the requested resource",
mockFn: func() ([]kube.APIResourceInfo, error) {
return nil, errors.New("failed to fetch resource of given kind %s from the target cluster")
},
errMsg: "failed to get API resource info for group \"\" and kind \"PersistentVolume\": failed to get API resource info: failed to fetch resource of given kind %s from the target cluster",
},
{
name: "old Ingress type returns error suggesting new Ingress type",
res: common.ResourceSyncResult{
ResourceKey: kube.ResourceKey{
Name: "ingress-resource",
Namespace: "test-namespace",
Kind: "Ingress",
Group: "extensions",
},
Version: "v1beta1",
},
msg: "the server could not find the requested resource",
expectedMessage: "The Kubernetes API could not find version \"v1beta1\" of extensions/Ingress for requested resource test-namespace/ingress-resource. Version \"v1\" of networking.k8s.io/Ingress is installed on the destination cluster.",
mockFn: mockAPIResourcesFn,
},
}
for _, tt := range testcases {
t.Run(tt.name, func(t *testing.T) {
tt.res.Message = tt.msg
msg, err := AugmentSyncMsg(tt.res, tt.mockFn)
if tt.errMsg != "" {
require.Error(t, err)
assert.Equal(t, tt.errMsg, err.Error())
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expectedMessage, msg)
}
})
}
}

View File

@@ -16,6 +16,11 @@ import (
pathutil "github.com/argoproj/argo-cd/v2/util/io/path"
)
const (
ResourcePolicyAnnotation = "helm.sh/resource-policy"
ResourcePolicyKeep = "keep"
)
type HelmRepository struct {
Creds
Name string
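A hedged sketch of how prune logic might consult these constants (the actual wiring is not part of this hunk; unstructured is k8s.io/apimachinery/pkg/apis/meta/v1/unstructured):

// shouldPrune is illustrative only: resources annotated with
// "helm.sh/resource-policy: keep" are exempted from pruning.
func shouldPrune(obj *unstructured.Unstructured) bool {
	return obj.GetAnnotations()[ResourcePolicyAnnotation] != ResourcePolicyKeep
}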