Posted to commits@camel.apache.org by nf...@apache.org on 2022/01/12 08:15:00 UTC

[camel-k] branch main updated (27ea823 -> 590b23c)

This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git.


    from 27ea823  Updated CHANGELOG.md
     new b170a71  Fix #1107: keda scaffolding
     new b848aba  Fix #1107: add support for nested trait configuration
     new 8ba04e7  Fix #1107: initial trait
     new 5a7e49c  Fix #1107: generalize server side apply code and reuse
     new 2fbfef6  Fix #1107: adding optional keda fields
     new c0cc560  Fix #1107: adding first support for Kamelets
     new 7c9596c  Fix #1107: refactoring annotations and secret generation
     new 371150f  Fix #1107: disable camel case conversion by default
     new a064422  Fix #1107: add documentation
     new 1239743  Fix #1107: add optional authentication secret
     new 1fde2b5  Fix #1107: added tests
     new e5354e5  Fix #1107: added roles and regen
     new 8a4f660  Fix #1107: update helm roles
     new 29c883f  Fix #1107: add tests for kamelet binding and replicas
     new b788952  Fix #1107: fix deepcopy gen
     new cdd75b2  Fix #1107: fix linter
     new 567c2de  Fix #1107: remove limit from doc
     new 9d89922  Fix #1107: fix findings
     new 0198461  Fix #1107: add missing operator role
     new 5520b63  Fix #1107: simplify applier code
     new 885d2bd  Fix #1107: disable applier code to detect real CI errors
     new 590b23c  Fix #1107: fix expected roles in tests

The 22 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../duck/v1beta2 => keda/duck/v1alpha1}/doc.go     |   6 +-
 addons/keda/duck/v1alpha1/duck_types.go            | 118 +++++
 .../keda/duck/v1alpha1/duck_types_support.go       |  23 +-
 .../v1beta2 => keda/duck/v1alpha1}/register.go     |  17 +-
 addons/keda/duck/v1alpha1/zz_generated.deepcopy.go | 260 ++++++++++
 addons/keda/keda.go                                | 542 +++++++++++++++++++++
 addons/keda/keda_test.go                           | 513 +++++++++++++++++++
 addons/{register_master.go => register_keda.go}    |   4 +-
 .../bases/camel.apache.org_kameletbindings.yaml    |  15 +-
 config/crd/bases/camel.apache.org_kamelets.yaml    |   8 +-
 .../bases/camel-k.clusterserviceversion.yaml       |   8 +-
 config/rbac/kustomization.yaml                     |   2 +
 ...inding.yaml => operator-role-binding-keda.yaml} |   4 +-
 ...le-podmonitors.yaml => operator-role-keda.yaml} |   7 +-
 config/rbac/operator-role.yaml                     |   2 +
 docs/modules/ROOT/nav.adoc                         |   1 +
 docs/modules/ROOT/pages/kamelets/kamelets-dev.adoc | 119 +++++
 .../modules/ROOT/pages/kamelets/kamelets-user.adoc |  39 ++
 docs/modules/ROOT/partials/apis/crds-html.adoc     |   2 +-
 docs/modules/traits/pages/keda.adoc                |  74 +++
 e2e/common/kustomize/common.go                     |   5 +-
 e2e/common/scale_binding_test.go                   |  11 +-
 e2e/common/scale_integration_test.go               |  11 +-
 go.sum                                             |   1 +
 helm/camel-k/crds/crd-kamelet-binding.yaml         |  15 +-
 helm/camel-k/crds/crd-kamelet.yaml                 |   8 +-
 helm/camel-k/templates/operator-role.yaml          | 207 +++++---
 pkg/apis/camel/v1alpha1/jsonschema_types.go        |   2 +-
 pkg/client/apply.go                                | 124 +++++
 pkg/client/client.go                               |   3 +
 .../kubernetes/discovery.go => client/scale.go}    |  28 +-
 pkg/cmd/kit_create.go                              |   2 +-
 pkg/cmd/run.go                                     |  43 +-
 pkg/cmd/run_test.go                                |  49 ++
 pkg/controller/kameletbinding/common.go            |   2 +-
 pkg/controller/kameletbinding/initialize.go        |   2 +-
 pkg/controller/kameletbinding/monitor.go           |   2 +-
 pkg/install/operator.go                            |  14 +
 pkg/resources/resources.go                         |  36 +-
 pkg/trait/dependencies_test.go                     |   2 +-
 pkg/trait/init.go                                  |   2 +-
 pkg/trait/trait_catalog.go                         |   6 +
 pkg/trait/trait_configure.go                       |  26 +-
 pkg/trait/trait_register.go                        |   2 +-
 pkg/util/property/property.go                      |  11 +
 pkg/util/test/client.go                            |  51 +-
 pkg/util/uri/uri.go                                |  15 +-
 pkg/util/uri/uri_test.go                           |  67 +++
 pkg/util/util.go                                   |  76 +++
 resources/traits.yaml                              |  51 ++
 script/Makefile                                    |   7 +-
 script/gen_doc.sh                                  |   2 +-
 52 files changed, 2462 insertions(+), 185 deletions(-)
 copy addons/{strimzi/duck/v1beta2 => keda/duck/v1alpha1}/doc.go (87%)
 create mode 100644 addons/keda/duck/v1alpha1/duck_types.go
 copy pkg/client/camel/clientset/versioned/typed/camel/v1alpha1/generated_expansion.go => addons/keda/duck/v1alpha1/duck_types_support.go (66%)
 copy addons/{strimzi/duck/v1beta2 => keda/duck/v1alpha1}/register.go (86%)
 create mode 100644 addons/keda/duck/v1alpha1/zz_generated.deepcopy.go
 create mode 100644 addons/keda/keda.go
 create mode 100644 addons/keda/keda_test.go
 copy addons/{register_master.go => register_keda.go} (90%)
 copy config/rbac/{operator-role-binding.yaml => operator-role-binding-keda.yaml} (95%)
 copy config/rbac/{operator-role-podmonitors.yaml => operator-role-keda.yaml} (92%)
 create mode 100644 docs/modules/traits/pages/keda.adoc
 create mode 100644 pkg/client/apply.go
 copy pkg/{util/kubernetes/discovery.go => client/scale.go} (61%)

[camel-k] 10/22: Fix #1107: add optional authentication secret

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit 1239743ea126e3898d0a77e74f89f41022b7bfe9
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Mon Dec 20 12:01:07 2021 +0100

    Fix #1107: add optional authentication secret
---
 addons/keda/keda.go | 51 ++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 46 insertions(+), 5 deletions(-)

diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index 4911a76..3637153 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -93,14 +93,16 @@ type kedaTrait struct {
 	MaxReplicaCount *int32 `property:"max-replica-count" json:"maxReplicaCount,omitempty"`
 	// Definition of triggers according to the KEDA format. Each trigger must contain `type` field corresponding
 	// to the name of a KEDA autoscaler and a key/value map named `metadata` containing specific trigger options.
+	// An optional `authentication-secret` can be declared per trigger and the operator will link each entry of
+	// the secret to a KEDA authentication parameter.
 	Triggers []kedaTrigger `property:"triggers" json:"triggers,omitempty"`
 }
 
 type kedaTrigger struct {
-	Type     string            `property:"type" json:"type,omitempty"`
-	Metadata map[string]string `property:"metadata" json:"metadata,omitempty"`
-
-	authentication map[string]string
+	Type                 string            `property:"type" json:"type,omitempty"`
+	Metadata             map[string]string `property:"metadata" json:"metadata,omitempty"`
+	AuthenticationSecret string            `property:"authentication-secret" json:"authenticationSecret,omitempty"`
+	authentication       map[string]string
 }
 
 // NewKedaTrait --.
@@ -177,9 +179,12 @@ func (t *kedaTrait) addScalingResources(e *trait.Environment) error {
 			meta[kk] = v
 		}
 		var authenticationRef *kedav1alpha1.ScaledObjectAuthRef
+		if len(trigger.authentication) > 0 && trigger.AuthenticationSecret != "" {
+			return errors.New("an authentication secret cannot be provided for auto-configured triggers")
+		}
+		extConfigName := fmt.Sprintf("%s-keda-%d", e.Integration.Name, idx)
 		if len(trigger.authentication) > 0 {
 			// Save all authentication config in a secret
-			extConfigName := fmt.Sprintf("%s-keda-%d", e.Integration.Name, idx)
 			secret := v1.Secret{
 				TypeMeta: metav1.TypeMeta{
 					Kind:       "Secret",
@@ -215,6 +220,42 @@ func (t *kedaTrait) addScalingResources(e *trait.Environment) error {
 			authenticationRef = &kedav1alpha1.ScaledObjectAuthRef{
 				Name: extConfigName,
 			}
+		} else if trigger.AuthenticationSecret != "" {
+			s := v1.Secret{}
+			key := client.ObjectKey{
+				Namespace: e.Integration.Namespace,
+				Name:      trigger.AuthenticationSecret,
+			}
+			if err := e.Client.Get(e.Ctx, key, &s); err != nil {
+				return errors.Wrapf(err, "could not load secret named %q in namespace %q", trigger.AuthenticationSecret, e.Integration.Namespace)
+			}
+			// Fill a TriggerAuthentication from the secret
+			triggerAuth := kedav1alpha1.TriggerAuthentication{
+				TypeMeta: metav1.TypeMeta{
+					Kind:       "TriggerAuthentication",
+					APIVersion: kedav1alpha1.SchemeGroupVersion.String(),
+				},
+				ObjectMeta: metav1.ObjectMeta{
+					Namespace: e.Integration.Namespace,
+					Name:      extConfigName,
+				},
+			}
+			sortedKeys := make([]string, 0, len(s.Data))
+			for k := range s.Data {
+				sortedKeys = append(sortedKeys, k)
+			}
+			sort.Strings(sortedKeys)
+			for _, k := range sortedKeys {
+				triggerAuth.Spec.SecretTargetRef = append(triggerAuth.Spec.SecretTargetRef, kedav1alpha1.AuthSecretTargetRef{
+					Parameter: k,
+					Name:      s.Name,
+					Key:       k,
+				})
+			}
+			e.Resources.Add(&triggerAuth)
+			authenticationRef = &kedav1alpha1.ScaledObjectAuthRef{
+				Name: extConfigName,
+			}
 		}
 
 		st := kedav1alpha1.ScaleTriggers{
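
As an illustration of the rule described in the trait documentation above, the sketch below (not part of the commit; the secret name and keys are hypothetical, and it assumes compilation inside the camel-k module) builds the same TriggerAuthentication shape by hand: every key of the referenced Secret becomes both the Parameter and the Key of an AuthSecretTargetRef pointing back at that Secret, so KEDA resolves each authentication parameter from the entry of the same name.

    package main

    import (
        "fmt"
        "sort"

        kedav1alpha1 "github.com/apache/camel-k/addons/keda/duck/v1alpha1"
    )

    func main() {
        // Hypothetical Secret referenced via the per-trigger "authentication-secret" property.
        secretName := "my-auth-secret"
        data := map[string][]byte{"accessKey": nil, "secretKey": nil}

        // Sort the keys so the generated TriggerAuthentication is deterministic,
        // mirroring what the trait code above does.
        keys := make([]string, 0, len(data))
        for k := range data {
            keys = append(keys, k)
        }
        sort.Strings(keys)

        auth := kedav1alpha1.TriggerAuthentication{}
        for _, k := range keys {
            auth.Spec.SecretTargetRef = append(auth.Spec.SecretTargetRef, kedav1alpha1.AuthSecretTargetRef{
                Parameter: k,
                Name:      secretName,
                Key:       k,
            })
        }
        fmt.Printf("%+v\n", auth.Spec.SecretTargetRef)
    }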

[camel-k] 06/22: Fix #1107: adding first support for Kamelets

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit c0cc56005e0da9c27aece3c0a7a6428f6eba5948
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Thu Dec 16 14:59:46 2021 +0100

    Fix #1107: adding first support for Kamelets
---
 addons/keda/keda.go                         | 192 +++++++++++++++++++++++++++-
 pkg/apis/camel/v1alpha1/jsonschema_types.go |   4 +-
 pkg/client/serverside.go                    |   6 +-
 pkg/util/property/property.go               |  11 ++
 pkg/util/uri/uri.go                         |  15 ++-
 pkg/util/uri/uri_test.go                    |  67 ++++++++++
 6 files changed, 288 insertions(+), 7 deletions(-)

diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index 834cea3..8396742 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -18,18 +18,42 @@ limitations under the License.
 package keda
 
 import (
+	"fmt"
+	"sort"
 	"strings"
 
 	kedav1alpha1 "github.com/apache/camel-k/addons/keda/duck/v1alpha1"
 	camelv1 "github.com/apache/camel-k/pkg/apis/camel/v1"
 	"github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
 	camelv1alpha1 "github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
+	"github.com/apache/camel-k/pkg/kamelet/repository"
+	"github.com/apache/camel-k/pkg/metadata"
+	"github.com/apache/camel-k/pkg/platform"
 	"github.com/apache/camel-k/pkg/trait"
+	"github.com/apache/camel-k/pkg/util"
+	"github.com/apache/camel-k/pkg/util/kubernetes"
+	"github.com/apache/camel-k/pkg/util/property"
+	"github.com/apache/camel-k/pkg/util/source"
+	"github.com/apache/camel-k/pkg/util/uri"
+	"github.com/pkg/errors"
 	scase "github.com/stoewer/go-strcase"
 	v1 "k8s.io/api/core/v1"
 	"sigs.k8s.io/controller-runtime/pkg/client"
 )
 
+const (
+	// kameletURNTypePrefix indicates the scaler type associated with a Kamelet
+	kameletURNTypePrefix = "urn:keda:type:"
+	// kameletURNMetadataPrefix allows binding Kamelet properties to Keda metadata
+	kameletURNMetadataPrefix = "urn:keda:metadata:"
+	// kameletURNRequiredTag is used to mark properties required by Keda
+	kameletURNRequiredTag = "urn:keda:required"
+
+	// kameletAnnotationType is an alternative to kameletURNTypePrefix.
+	// To be removed when the `spec -> definition -> x-descriptors` field becomes stable.
+	kameletAnnotationType = "camel.apache.org/keda.type"
+)
+
 // The Keda trait can be used for automatic integration with Keda autoscalers.
 //
 // The Keda trait is disabled by default.
@@ -79,7 +103,14 @@ func (t *kedaTrait) Configure(e *trait.Environment) (bool, error) {
 		return false, nil
 	}
 
-	return true, nil
+	if t.Auto == nil || *t.Auto {
+		if err := t.populateTriggersFromKamelets(e); err != nil {
+			// TODO: set condition
+			return false, err
+		}
+	}
+
+	return len(t.Triggers) > 0, nil
 }
 
 func (t *kedaTrait) Apply(e *trait.Environment) error {
@@ -142,7 +173,6 @@ func (t *kedaTrait) getScaledObject(e *trait.Environment) (*kedav1alpha1.ScaledO
 
 func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
 	ctrlRef := t.getTopControllerReference(e)
-	applier := e.Client.ServerOrClientSideApplier()
 	if ctrlRef.Kind == camelv1alpha1.KameletBindingKind {
 		// Update the KameletBinding directly (do not add it to env resources, it's the integration parent)
 		key := client.ObjectKey{
@@ -156,7 +186,7 @@ func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
 		if klb.Spec.Replicas == nil {
 			one := int32(1)
 			klb.Spec.Replicas = &one
-			if err := applier.Apply(e.Ctx, &klb); err != nil {
+			if err := e.Client.Update(e.Ctx, &klb); err != nil {
 				return err
 			}
 		}
@@ -164,7 +194,7 @@ func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
 		if e.Integration.Spec.Replicas == nil {
 			one := int32(1)
 			e.Integration.Spec.Replicas = &one
-			if err := applier.Apply(e.Ctx, e.Integration); err != nil {
+			if err := e.Client.Update(e.Ctx, e.Integration); err != nil {
 				return err
 			}
 		}
@@ -188,3 +218,157 @@ func (t *kedaTrait) getTopControllerReference(e *trait.Environment) *v1.ObjectRe
 		Name:       e.Integration.Name,
 	}
 }
+
+func (t *kedaTrait) populateTriggersFromKamelets(e *trait.Environment) error {
+	sources, err := kubernetes.ResolveIntegrationSources(e.Ctx, e.Client, e.Integration, e.Resources)
+	if err != nil {
+		return err
+	}
+	kameletURIs := make(map[string][]string)
+	metadata.Each(e.CamelCatalog, sources, func(_ int, meta metadata.IntegrationMetadata) bool {
+		for _, uri := range meta.FromURIs {
+			if kameletStr := source.ExtractKamelet(uri); kameletStr != "" && camelv1alpha1.ValidKameletName(kameletStr) {
+				kamelet := kameletStr
+				if strings.Contains(kamelet, "/") {
+					kamelet = kamelet[0:strings.Index(kamelet, "/")]
+				}
+				uriList := kameletURIs[kamelet]
+				util.StringSliceUniqueAdd(&uriList, uri)
+				sort.Strings(uriList)
+				kameletURIs[kamelet] = uriList
+			}
+		}
+		return true
+	})
+
+	if len(kameletURIs) == 0 {
+		return nil
+	}
+
+	repo, err := repository.NewForPlatform(e.Ctx, e.Client, e.Platform, e.Integration.Namespace, platform.GetOperatorNamespace())
+	if err != nil {
+		return err
+	}
+
+	sortedKamelets := make([]string, 0, len(kameletURIs))
+	for kamelet := range kameletURIs {
+		sortedKamelets = append(sortedKamelets, kamelet)
+	}
+	sort.Strings(sortedKamelets)
+	for _, kamelet := range sortedKamelets {
+		uris := kameletURIs[kamelet]
+		if err := t.populateTriggersFromKamelet(e, repo, kamelet, uris); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func (t *kedaTrait) populateTriggersFromKamelet(e *trait.Environment, repo repository.KameletRepository, kameletName string, uris []string) error {
+	kamelet, err := repo.Get(e.Ctx, kameletName)
+	if err != nil {
+		return err
+	} else if kamelet == nil {
+		return fmt.Errorf("kamelet %q not found", kameletName)
+	}
+	if kamelet.Spec.Definition == nil {
+		return nil
+	}
+	triggerType := t.getKedaType(kamelet)
+	if triggerType == "" {
+		return nil
+	}
+
+	metadataToProperty := make(map[string]string)
+	requiredMetadata := make(map[string]bool)
+	for k, def := range kamelet.Spec.Definition.Properties {
+		if metadataName := t.getXDescriptorValue(def.XDescriptors, kameletURNMetadataPrefix); metadataName != "" {
+			metadataToProperty[metadataName] = k
+			if req := t.isXDescriptorPresent(def.XDescriptors, kameletURNRequiredTag); req {
+				requiredMetadata[metadataName] = true
+			}
+		}
+	}
+	for _, uri := range uris {
+		if err := t.populateTriggersFromKameletURI(e, kameletName, triggerType, metadataToProperty, requiredMetadata, uri); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+func (t *kedaTrait) populateTriggersFromKameletURI(e *trait.Environment, kameletName string, triggerType string, metadataToProperty map[string]string, requiredMetadata map[string]bool, kameletURI string) error {
+	metaValues := make(map[string]string, len(metadataToProperty))
+	for metaParam, prop := range metadataToProperty {
+		// From lowest priority to top
+		if v := e.ApplicationProperties[fmt.Sprintf("camel.kamelet.%s.%s", kameletName, prop)]; v != "" {
+			metaValues[metaParam] = v
+		}
+		if kameletID := uri.GetPathSegment(kameletURI, 0); kameletID != "" {
+			kameletSpecificKey := fmt.Sprintf("camel.kamelet.%s.%s.%s", kameletName, kameletID, prop)
+			if v := e.ApplicationProperties[kameletSpecificKey]; v != "" {
+				metaValues[metaParam] = v
+			}
+			for _, c := range e.Integration.Spec.Configuration {
+				if c.Type == "property" && strings.HasPrefix(c.Value, kameletSpecificKey) {
+					v, err := property.DecodePropertyFileValue(c.Value, kameletSpecificKey)
+					if err != nil {
+						return errors.Wrapf(err, "could not decode property %q", kameletSpecificKey)
+					}
+					metaValues[metaParam] = v
+				}
+			}
+		}
+		if v := uri.GetQueryParameter(kameletURI, prop); v != "" {
+			metaValues[metaParam] = v
+		}
+	}
+
+	for req := range requiredMetadata {
+		if _, ok := metaValues[req]; !ok {
+			return fmt.Errorf("metadata parameter %q is missing in configuration: it is required by Keda", req)
+		}
+	}
+
+	kebabMetaValues := make(map[string]string, len(metaValues))
+	for k, v := range metaValues {
+		kebabMetaValues[scase.KebabCase(k)] = v
+	}
+
+	// Add the trigger in config
+	trigger := kedaTrigger{
+		Type:     triggerType,
+		Metadata: kebabMetaValues,
+	}
+	t.Triggers = append(t.Triggers, trigger)
+	return nil
+}
+
+func (t *kedaTrait) getKedaType(kamelet *camelv1alpha1.Kamelet) string {
+	if kamelet.Spec.Definition != nil {
+		triggerType := t.getXDescriptorValue(kamelet.Spec.Definition.XDescriptors, kameletURNTypePrefix)
+		if triggerType != "" {
+			return triggerType
+		}
+	}
+	return kamelet.Annotations[kameletAnnotationType]
+}
+
+func (t *kedaTrait) getXDescriptorValue(descriptors []string, prefix string) string {
+	for _, d := range descriptors {
+		if strings.HasPrefix(d, prefix) {
+			return d[len(prefix):]
+		}
+	}
+	return ""
+}
+
+func (t *kedaTrait) isXDescriptorPresent(descriptors []string, desc string) bool {
+	for _, d := range descriptors {
+		if d == desc {
+			return true
+		}
+	}
+	return false
+}
diff --git a/pkg/apis/camel/v1alpha1/jsonschema_types.go b/pkg/apis/camel/v1alpha1/jsonschema_types.go
index 5e90f4f..87e178b 100644
--- a/pkg/apis/camel/v1alpha1/jsonschema_types.go
+++ b/pkg/apis/camel/v1alpha1/jsonschema_types.go
@@ -74,7 +74,7 @@ type JSONSchemaProp struct {
 	Enum             []JSON       `json:"enum,omitempty"`
 	Example          *JSON        `json:"example,omitempty"`
 	Nullable         bool         `json:"nullable,omitempty"`
-	// The list of descriptors that determine which UI components to use on different views
+	// XDescriptors is a list of extended properties that trigger a custom behavior in external systems
 	XDescriptors []string `json:"x-descriptors,omitempty"`
 }
 
@@ -89,6 +89,8 @@ type JSONSchemaProps struct {
 	ExternalDocs *ExternalDocumentation    `json:"externalDocs,omitempty"`
 	Schema       JSONSchemaURL             `json:"$schema,omitempty"`
 	Type         string                    `json:"type,omitempty"`
+	// XDescriptors is a list of extended properties that trigger a custom behavior in external systems
+	XDescriptors []string `json:"x-descriptors,omitempty"`
 }
 
 // RawMessage is a raw encoded JSON value.
diff --git a/pkg/client/serverside.go b/pkg/client/serverside.go
index 6efd758..bca029d 100644
--- a/pkg/client/serverside.go
+++ b/pkg/client/serverside.go
@@ -49,6 +49,7 @@ func (c *defaultClient) ServerOrClientSideApplier() ServerOrClientSideApplier {
 func (a *ServerOrClientSideApplier) Apply(ctx context.Context, object ctrl.Object) error {
 	once := false
 	var err error
+	needsRetry := false
 	a.tryServerSideApply.Do(func() {
 		once = true
 		if err = a.serverSideApply(ctx, object); err != nil {
@@ -57,12 +58,15 @@ func (a *ServerOrClientSideApplier) Apply(ctx context.Context, object ctrl.Objec
 				a.hasServerSideApply.Store(false)
 				err = nil
 			} else {
-				a.tryServerSideApply = sync.Once{}
+				needsRetry = true
 			}
 		} else {
 			a.hasServerSideApply.Store(true)
 		}
 	})
+	if needsRetry {
+		a.tryServerSideApply = sync.Once{}
+	}
 	if err != nil {
 		return err
 	}
diff --git a/pkg/util/property/property.go b/pkg/util/property/property.go
index 7f02b7a..87fcc1a 100644
--- a/pkg/util/property/property.go
+++ b/pkg/util/property/property.go
@@ -65,3 +65,14 @@ func SplitPropertyFileEntry(entry string) (string, string) {
 	}
 	return k, v
 }
+
+// DecodePropertyFileValue returns the decoded value corresponding to the given key in the entry.
+func DecodePropertyFileValue(entry, key string) (string, error) {
+	p := properties.NewProperties()
+	p.DisableExpansion = true
+	if err := p.Load([]byte(entry), properties.UTF8); err != nil {
+		return "", err
+	}
+	val, _ := p.Get(key)
+	return val, nil
+}
diff --git a/pkg/util/uri/uri.go b/pkg/util/uri/uri.go
index 4e722c6..210f169 100644
--- a/pkg/util/uri/uri.go
+++ b/pkg/util/uri/uri.go
@@ -28,7 +28,7 @@ import (
 )
 
 var uriRegexp = regexp.MustCompile(`^[a-z0-9+][a-zA-Z0-9-+]*:.*$`)
-
+var pathExtractorRegexp = regexp.MustCompile(`^[a-z0-9+][a-zA-Z0-9-+]*:(?://){0,1}[^/?]+/([^?]+)(?:[?].*){0,1}$`)
 var queryExtractorRegexp = `^[^?]+\?(?:|.*[&])%s=([^&]+)(?:[&].*|$)`
 
 // HasCamelURIFormat tells if a given string may belong to a Camel URI, without checking any catalog.
@@ -57,6 +57,19 @@ func GetQueryParameter(uri string, param string) string {
 	return res
 }
 
+// GetPathSegment returns the path segment of the URI corresponding to the given position (0-based), if present.
+func GetPathSegment(uri string, pos int) string {
+	match := pathExtractorRegexp.FindStringSubmatch(uri)
+	if len(match) > 1 {
+		fullPath := match[1]
+		parts := strings.Split(fullPath, "/")
+		if pos >= 0 && pos < len(parts) {
+			return parts[pos]
+		}
+	}
+	return ""
+}
+
 func matchOrEmpty(reg *regexp.Regexp, str string) string {
 	match := reg.FindStringSubmatch(str)
 	if len(match) > 1 {
diff --git a/pkg/util/uri/uri_test.go b/pkg/util/uri/uri_test.go
index 4000bf3..49ecb0c 100644
--- a/pkg/util/uri/uri_test.go
+++ b/pkg/util/uri/uri_test.go
@@ -180,3 +180,70 @@ func TestCamelURIFormat(t *testing.T) {
 		})
 	}
 }
+
+func TestPathSegment(t *testing.T) {
+	tests := []struct {
+		uri      string
+		pos      int
+		expected string
+	}{
+		{
+			uri: "direct:endpoint",
+			pos: 0,
+		},
+		{
+			uri: "direct:endpoint",
+			pos: 12,
+		},
+		{
+			uri: "kamelet:endpoint/",
+			pos: 0,
+		},
+		{
+			uri:      "kamelet:endpoint/s",
+			pos:      0,
+			expected: "s",
+		},
+		{
+			uri: "kamelet:endpoint/s",
+			pos: 1,
+		},
+		{
+			uri:      "kamelet://endpoint/s",
+			pos:      0,
+			expected: "s",
+		},
+		{
+			uri:      "kamelet://endpoint/s/p",
+			pos:      0,
+			expected: "s",
+		},
+		{
+			uri:      "kamelet://endpoint/s/p",
+			pos:      1,
+			expected: "p",
+		},
+		{
+			uri:      "kamelet://endpoint/s/p?param=n",
+			pos:      1,
+			expected: "p",
+		},
+		{
+			uri:      "kamelet://endpoint/s/p?param=n&p2=n2",
+			pos:      1,
+			expected: "p",
+		},
+		{
+			uri: "kamelet://endpoint/s/p?param=n&p2=n2",
+			pos: 2,
+		},
+	}
+
+	for _, test := range tests {
+		thetest := test
+		t.Run(thetest.uri, func(t *testing.T) {
+			param := GetPathSegment(thetest.uri, thetest.pos)
+			assert.Equal(t, thetest.expected, param)
+		})
+	}
+}
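
To make the new URN descriptors concrete, the sketch below shows a hypothetical Kamelet (the names are invented; the snippet assumes compilation inside the camel-k module) that the trait can auto-detect: the scaler type comes from the camel.apache.org/keda.type annotation (or a urn:keda:type: x-descriptor on the definition), and the queueName property is mapped to KEDA trigger metadata and marked as required, with its value resolved from application properties, the integration configuration, or the Kamelet URI query parameters.

    package main

    import (
        "fmt"

        camelv1alpha1 "github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        kamelet := camelv1alpha1.Kamelet{
            ObjectMeta: metav1.ObjectMeta{
                Name: "my-source",
                // Alternative to a urn:keda:type: x-descriptor on the definition itself.
                Annotations: map[string]string{"camel.apache.org/keda.type": "my-scaler"},
            },
            Spec: camelv1alpha1.KameletSpec{
                Definition: &camelv1alpha1.JSONSchemaProps{
                    Properties: map[string]camelv1alpha1.JSONSchemaProp{
                        "queueName": {
                            XDescriptors: []string{
                                "urn:keda:metadata:queueName", // maps the Kamelet property to a KEDA trigger metadata entry
                                "urn:keda:required",           // the trait reports an error if no value can be resolved
                            },
                        },
                    },
                },
            },
        }
        fmt.Println(kamelet.Name)
    }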

[camel-k] 14/22: Fix #1107: add tests for kamelet binding and replicas

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit 29c883f97142293d6f50eb62100b4d3607ed398c
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Mon Dec 20 18:18:37 2021 +0100

    Fix #1107: add tests for kamelet binding and replicas
---
 addons/keda/keda_test.go                           | 191 ++++++++++++++++++++-
 .../bases/camel-k.clusterserviceversion.yaml       |   8 +-
 pkg/controller/kameletbinding/common.go            |   2 +-
 pkg/controller/kameletbinding/initialize.go        |   2 +-
 pkg/controller/kameletbinding/monitor.go           |   2 +-
 pkg/resources/resources.go                         |   4 +-
 pkg/trait/dependencies_test.go                     |   2 +-
 pkg/trait/init.go                                  |   2 +-
 pkg/trait/trait_register.go                        |   2 +-
 9 files changed, 196 insertions(+), 19 deletions(-)

diff --git a/addons/keda/keda_test.go b/addons/keda/keda_test.go
index 083a231..ae49b4a 100644
--- a/addons/keda/keda_test.go
+++ b/addons/keda/keda_test.go
@@ -19,11 +19,13 @@ package keda
 
 import (
 	"context"
+	"encoding/json"
 	"testing"
 
 	"github.com/apache/camel-k/addons/keda/duck/v1alpha1"
 	camelv1 "github.com/apache/camel-k/pkg/apis/camel/v1"
 	camelv1alpha1 "github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
+	"github.com/apache/camel-k/pkg/controller/kameletbinding"
 	"github.com/apache/camel-k/pkg/trait"
 	"github.com/apache/camel-k/pkg/util/camel"
 	"github.com/apache/camel-k/pkg/util/kubernetes"
@@ -33,6 +35,7 @@ import (
 	corev1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
+	"sigs.k8s.io/controller-runtime/pkg/client"
 )
 
 var (
@@ -205,6 +208,156 @@ func TestKameletAutoDetection(t *testing.T) {
 	assert.Contains(t, secret.StringData, "cc")
 }
 
+func TestKameletBindingAutoDetection(t *testing.T) {
+	keda, _ := NewKedaTrait().(*kedaTrait)
+	keda.Enabled = &testingTrue
+	logEndpoint := "log:info"
+	klb := camelv1alpha1.KameletBinding{
+		ObjectMeta: metav1.ObjectMeta{
+			Namespace: "test",
+			Name:      "my-binding",
+		},
+		Spec: camelv1alpha1.KameletBindingSpec{
+			Source: camelv1alpha1.Endpoint{
+				Ref: &corev1.ObjectReference{
+					Kind:       "Kamelet",
+					APIVersion: camelv1alpha1.SchemeGroupVersion.String(),
+					Name:       "my-kamelet",
+				},
+				Properties: asEndpointProperties(map[string]string{
+					"a": "v1",
+					"b": "v2",
+					"c": "v3",
+				}),
+			},
+			Sink: camelv1alpha1.Endpoint{
+				URI: &logEndpoint,
+			},
+		},
+	}
+
+	env := createBasicTestEnvironment(
+		&camelv1alpha1.Kamelet{
+			ObjectMeta: metav1.ObjectMeta{
+				Namespace: "test",
+				Name:      "my-kamelet",
+				Annotations: map[string]string{
+					"camel.apache.org/keda.type": "my-scaler",
+				},
+			},
+			Spec: camelv1alpha1.KameletSpec{
+				Definition: &camelv1alpha1.JSONSchemaProps{
+					Properties: map[string]camelv1alpha1.JSONSchemaProp{
+						"a": camelv1alpha1.JSONSchemaProp{
+							XDescriptors: []string{
+								"urn:keda:metadata:a",
+							},
+						},
+						"b": camelv1alpha1.JSONSchemaProp{
+							XDescriptors: []string{
+								"urn:keda:metadata:bb",
+							},
+						},
+						"c": camelv1alpha1.JSONSchemaProp{
+							XDescriptors: []string{
+								"urn:keda:authentication:cc",
+							},
+						},
+					},
+				},
+			},
+		},
+		&klb,
+		&camelv1.IntegrationPlatform{
+			ObjectMeta: metav1.ObjectMeta{
+				Namespace: "test",
+				Name:      "camel-k",
+			},
+			Spec: camelv1.IntegrationPlatformSpec{
+				Cluster: camelv1.IntegrationPlatformClusterKubernetes,
+				Profile: camelv1.TraitProfileKubernetes,
+			},
+			Status: camelv1.IntegrationPlatformStatus{
+				Phase: camelv1.IntegrationPlatformPhaseReady,
+			},
+		})
+
+	it, err := kameletbinding.CreateIntegrationFor(env.Ctx, env.Client, &klb)
+	assert.NoError(t, err)
+	assert.NotNil(t, it)
+	env.Integration = it
+
+	it.Status.Phase = camelv1.IntegrationPhaseInitialization
+	init := trait.NewInitTrait()
+	ok, err := init.Configure(env)
+	assert.NoError(t, err)
+	assert.True(t, ok)
+	assert.NoError(t, init.Apply(env))
+
+	it.Status.Phase = camelv1.IntegrationPhaseDeploying
+	res, err := keda.Configure(env)
+	assert.NoError(t, err)
+	assert.True(t, res)
+	assert.NoError(t, keda.Apply(env))
+	so := getScaledObject(env)
+	assert.NotNil(t, so)
+	assert.Len(t, so.Spec.Triggers, 1)
+	assert.Equal(t, "my-scaler", so.Spec.Triggers[0].Type)
+	assert.Equal(t, map[string]string{
+		"a":  "v1",
+		"bb": "v2",
+	}, so.Spec.Triggers[0].Metadata)
+	triggerAuth := getTriggerAuthentication(env)
+	assert.NotNil(t, triggerAuth)
+	assert.Equal(t, so.Spec.Triggers[0].AuthenticationRef.Name, triggerAuth.Name)
+	assert.Len(t, triggerAuth.Spec.SecretTargetRef, 1)
+	assert.Equal(t, "cc", triggerAuth.Spec.SecretTargetRef[0].Key)
+	assert.Equal(t, "cc", triggerAuth.Spec.SecretTargetRef[0].Parameter)
+	secretName := triggerAuth.Spec.SecretTargetRef[0].Name
+	secret := getSecret(env)
+	assert.NotNil(t, secret)
+	assert.Equal(t, secretName, secret.Name)
+	assert.Len(t, secret.StringData, 1)
+	assert.Contains(t, secret.StringData, "cc")
+}
+
+func TestHackReplicas(t *testing.T) {
+	keda, _ := NewKedaTrait().(*kedaTrait)
+	keda.Enabled = &testingTrue
+	keda.Auto = &testingFalse
+	keda.Triggers = append(keda.Triggers, kedaTrigger{
+		Type: "custom",
+		Metadata: map[string]string{
+			"a": "b",
+		},
+	})
+	keda.HackControllerReplicas = &testingTrue
+	env := createBasicTestEnvironment(
+		&camelv1.Integration{
+			ObjectMeta: metav1.ObjectMeta{
+				Namespace: "test",
+				Name:      "my-it",
+			},
+			Status: camelv1.IntegrationStatus{
+				Phase: camelv1.IntegrationPhaseInitialization,
+			},
+		},
+	)
+
+	res, err := keda.Configure(env)
+	assert.NoError(t, err)
+	assert.True(t, res)
+	assert.NoError(t, keda.Apply(env))
+	it := camelv1.Integration{}
+	key := client.ObjectKey{
+		Namespace: "test",
+		Name:      "my-it",
+	}
+	assert.NoError(t, env.Client.Get(env.Ctx, key, &it))
+	assert.NotNil(t, it.Spec.Replicas)
+	assert.Equal(t, int32(1), *it.Spec.Replicas)
+}
+
 func getScaledObject(e *trait.Environment) *v1alpha1.ScaledObject {
 	var res *v1alpha1.ScaledObject
 	for _, o := range e.Resources.Items() {
@@ -268,6 +421,25 @@ func createBasicTestEnvironment(resources ...runtime.Object) *trait.Environment
 		}
 	}
 
+	var pl *camelv1.IntegrationPlatform
+	for _, res := range resources {
+		if platform, ok := res.(*camelv1.IntegrationPlatform); ok {
+			pl = platform
+		}
+	}
+	if pl == nil {
+		pl = &camelv1.IntegrationPlatform{
+			ObjectMeta: metav1.ObjectMeta{
+				Namespace: "test",
+				Name:      "camel-k",
+			},
+			Spec: camelv1.IntegrationPlatformSpec{
+				Cluster: camelv1.IntegrationPlatformClusterKubernetes,
+				Profile: camelv1.TraitProfileKubernetes,
+			},
+		}
+	}
+
 	return &trait.Environment{
 		Catalog:     trait.NewCatalog(nil),
 		Ctx:         context.Background(),
@@ -281,15 +453,18 @@ func createBasicTestEnvironment(resources ...runtime.Object) *trait.Environment
 				},
 			},
 		},
-		Platform: &camelv1.IntegrationPlatform{
-			ObjectMeta: metav1.ObjectMeta{
-				Namespace: "test",
-			},
-			Spec: camelv1.IntegrationPlatformSpec{
-				Cluster: camelv1.IntegrationPlatformClusterKubernetes,
-			},
-		},
+		Platform:              pl,
 		Resources:             kubernetes.NewCollection(),
 		ApplicationProperties: make(map[string]string),
 	}
 }
+
+func asEndpointProperties(props map[string]string) *camelv1alpha1.EndpointProperties {
+	serialized, err := json.Marshal(props)
+	if err != nil {
+		panic(err)
+	}
+	return &camelv1alpha1.EndpointProperties{
+		RawMessage: serialized,
+	}
+}
diff --git a/config/manifests/bases/camel-k.clusterserviceversion.yaml b/config/manifests/bases/camel-k.clusterserviceversion.yaml
index 03b7414..ed06b13 100644
--- a/config/manifests/bases/camel-k.clusterserviceversion.yaml
+++ b/config/manifests/bases/camel-k.clusterserviceversion.yaml
@@ -23,8 +23,9 @@ metadata:
     categories: Integration & Delivery
     certified: "false"
     containerImage: docker.io/apache/camel-k:1.8.0-SNAPSHOT
-    createdAt: 2021-05-03T07:48:00Z
-    description: Apache Camel K is a lightweight integration platform, born on Kubernetes, with serverless superpowers.
+    createdAt: 2021-12-20T16:11:27Z
+    description: Apache Camel K is a lightweight integration platform, born on Kubernetes,
+      with serverless superpowers.
     operators.operatorframework.io/builder: operator-sdk-v1.3.0
     operators.operatorframework.io/internal-objects: '["builds.camel.apache.org","integrationkits.camel.apache.org","camelcatalogs.camel.apache.org"]'
     operators.operatorframework.io/project_layout: go.kubebuilder.io/v2
@@ -51,7 +52,8 @@ spec:
       kind: IntegrationKit
       name: integrationkits.camel.apache.org
       version: v1
-    - description: IntegrationPlatform is the Schema for the integrationplatforms API
+    - description: IntegrationPlatform is the Schema for the integrationplatforms
+        API
       displayName: Integration Platform
       kind: IntegrationPlatform
       name: integrationplatforms.camel.apache.org
diff --git a/pkg/controller/kameletbinding/common.go b/pkg/controller/kameletbinding/common.go
index ea7e3d0..cf9ec2b 100644
--- a/pkg/controller/kameletbinding/common.go
+++ b/pkg/controller/kameletbinding/common.go
@@ -41,7 +41,7 @@ var (
 	endpointTypeSinkContext   = bindings.EndpointContext{Type: v1alpha1.EndpointTypeSink}
 )
 
-func createIntegrationFor(ctx context.Context, c client.Client, kameletbinding *v1alpha1.KameletBinding) (*v1.Integration, error) {
+func CreateIntegrationFor(ctx context.Context, c client.Client, kameletbinding *v1alpha1.KameletBinding) (*v1.Integration, error) {
 	controller := true
 	blockOwnerDeletion := true
 	annotations := util.CopyMap(kameletbinding.Annotations)
diff --git a/pkg/controller/kameletbinding/initialize.go b/pkg/controller/kameletbinding/initialize.go
index c257825..d65656a 100644
--- a/pkg/controller/kameletbinding/initialize.go
+++ b/pkg/controller/kameletbinding/initialize.go
@@ -50,7 +50,7 @@ func (action *initializeAction) CanHandle(kameletbinding *v1alpha1.KameletBindin
 }
 
 func (action *initializeAction) Handle(ctx context.Context, kameletbinding *v1alpha1.KameletBinding) (*v1alpha1.KameletBinding, error) {
-	it, err := createIntegrationFor(ctx, action.client, kameletbinding)
+	it, err := CreateIntegrationFor(ctx, action.client, kameletbinding)
 	if err != nil {
 		return nil, err
 	}
diff --git a/pkg/controller/kameletbinding/monitor.go b/pkg/controller/kameletbinding/monitor.go
index dcba3b2..e92c35a 100644
--- a/pkg/controller/kameletbinding/monitor.go
+++ b/pkg/controller/kameletbinding/monitor.go
@@ -70,7 +70,7 @@ func (action *monitorAction) Handle(ctx context.Context, kameletbinding *v1alpha
 	}
 
 	// Check if the integration needs to be changed
-	expected, err := createIntegrationFor(ctx, action.client, kameletbinding)
+	expected, err := CreateIntegrationFor(ctx, action.client, kameletbinding)
 	if err != nil {
 		return nil, err
 	}
diff --git a/pkg/resources/resources.go b/pkg/resources/resources.go
index c753ccb..414e4d8 100644
--- a/pkg/resources/resources.go
+++ b/pkg/resources/resources.go
@@ -555,9 +555,9 @@ var assets = func() http.FileSystem {
 		"/traits.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "traits.yaml",
 			modTime:          time.Time{},
-			uncompressedSize: 49398,
+			uncompressedSize: 49560,
 
-			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x7d\xfd\x73\x5b\xb9\x91\xe0\xef\xf3\x57\xa0\xb4\x57\x65\x49\x45\x52\x9e\xc9\x26\x3b\xa7\xbb\xd9\x94\xc6\x76\x12\xcd\xf8\x43\x67\x3b\xb3\x97\x9a\x9b\x0a\xc1\xf7\x9a\x24\xcc\x47\xe0\x05\xc0\x93\xcc\xdc\xde\xff\x7e\x85\xee\xc6\xc7\x7b\x24\x25\xca\xb6\x66\xa3\xad\xdd\x54\xed\x58\xd2\x03\xd0\x68\x34\xfa\xbb\x1b\xde\x4a\xe5\xdd\xf9\x57\x63\xa1\xe5\x1a\xce\x85\x9c\xcf\x95\x56\x7e\xf3\x95\x10\x6d\x23\xfd\xdc\xd8\xf5\xb9\x [...]
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x7d\xfd\x73\x5b\xb9\x91\xe0\xef\xf3\x57\xa0\xb4\x57\x65\x49\x45\x52\x9e\xc9\x26\x3b\xa7\xbb\xd9\x94\xc6\x76\x12\xcd\xf8\x43\x67\x3b\xb3\x97\x9a\x9b\x0a\xc1\xf7\x9a\x24\xcc\x47\xe0\x05\xc0\x93\xcc\xdc\xde\xff\x7e\x85\xee\xc6\xc7\x7b\x24\x25\xca\xb6\x66\xa3\xad\xdd\x54\xed\x58\xd2\x03\xd0\x68\x34\xfa\xbb\x1b\xde\x4a\xe5\xdd\xf9\x57\x63\xa1\xe5\x1a\xce\x85\x9c\xcf\x95\x56\x7e\xf3\x95\x10\x6d\x23\xfd\xdc\xd8\xf5\xb9\x [...]
 		},
 	}
 	fs["/"].(*vfsgen۰DirInfo).entries = []os.FileInfo{
diff --git a/pkg/trait/dependencies_test.go b/pkg/trait/dependencies_test.go
index d0e8eda..d5cb612 100644
--- a/pkg/trait/dependencies_test.go
+++ b/pkg/trait/dependencies_test.go
@@ -175,7 +175,7 @@ func TestIntegrationAutoGeneratedDeps(t *testing.T) {
 		},
 	}
 
-	for _, trait := range []Trait{newInitTrait(), newDependenciesTrait()} {
+	for _, trait := range []Trait{NewInitTrait(), newDependenciesTrait()} {
 		enabled, err := trait.Configure(e)
 		assert.Nil(t, err)
 		assert.True(t, enabled)
diff --git a/pkg/trait/init.go b/pkg/trait/init.go
index 3cf8472..14f0ca6 100644
--- a/pkg/trait/init.go
+++ b/pkg/trait/init.go
@@ -33,7 +33,7 @@ type initTrait struct {
 	BaseTrait `property:",squash"`
 }
 
-func newInitTrait() Trait {
+func NewInitTrait() Trait {
 	return &initTrait{
 		BaseTrait: NewBaseTrait("init", 1),
 	}
diff --git a/pkg/trait/trait_register.go b/pkg/trait/trait_register.go
index 4f2a6d1..df8245c 100644
--- a/pkg/trait/trait_register.go
+++ b/pkg/trait/trait_register.go
@@ -32,7 +32,7 @@ func init() {
 	AddToTraits(newErrorHandlerTrait)
 	AddToTraits(newGarbageCollectorTrait)
 	AddToTraits(newHealthTrait)
-	AddToTraits(newInitTrait)
+	AddToTraits(NewInitTrait)
 	AddToTraits(newIngressTrait)
 	AddToTraits(newIstioTrait)
 	AddToTraits(newJolokiaTrait)

[camel-k] 02/22: Fix #1107: add support for nested trait configuration

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit b848abaab2832e5a86429befae578a1bccd3365f
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Mon Dec 13 16:40:08 2021 +0100

    Fix #1107: add support for nested trait configuration
---
 pkg/cmd/run.go               | 30 +++++++++++++----
 pkg/cmd/run_test.go          | 49 ++++++++++++++++++++++++++++
 pkg/trait/trait_catalog.go   |  6 ++++
 pkg/trait/trait_configure.go | 26 ++++++++++++---
 pkg/util/util.go             | 77 ++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 176 insertions(+), 12 deletions(-)

diff --git a/pkg/cmd/run.go b/pkg/cmd/run.go
index 732e1da..5dfca09 100644
--- a/pkg/cmd/run.go
+++ b/pkg/cmd/run.go
@@ -54,7 +54,7 @@ import (
 	"github.com/apache/camel-k/pkg/util/watch"
 )
 
-var traitConfigRegexp = regexp.MustCompile(`^([a-z0-9-]+)((?:\.[a-z0-9-]+)+)=(.*)$`)
+var traitConfigRegexp = regexp.MustCompile(`^([a-z0-9-]+)((?:\[[0-9]+\]|\.[a-z0-9-]+)+)=(.*)$`)
 
 func newCmdRun(rootCmdOptions *RootCmdOptions) (*cobra.Command, *runCmdOptions) {
 	options := runCmdOptions{
@@ -794,7 +794,7 @@ func resolvePodTemplate(ctx context.Context, templateSrc string, spec *v1.Integr
 	return err
 }
 
-func configureTraits(options []string, catalog *trait.Catalog) (map[string]v1.TraitSpec, error) {
+func configureTraits(options []string, catalog trait.Finder) (map[string]v1.TraitSpec, error) {
 	traits := make(map[string]map[string]interface{})
 
 	for _, option := range options {
@@ -803,23 +803,39 @@ func configureTraits(options []string, catalog *trait.Catalog) (map[string]v1.Tr
 			return nil, errors.New("unrecognized config format (expected \"<trait>.<prop>=<value>\"): " + option)
 		}
 		id := parts[1]
-		prop := parts[2][1:]
+		fullProp := parts[2][1:]
 		value := parts[3]
 		if _, ok := traits[id]; !ok {
 			traits[id] = make(map[string]interface{})
 		}
-		switch v := traits[id][prop].(type) {
+
+		propParts := util.ConfigTreePropertySplit(fullProp)
+		var current = traits[id]
+		if len(propParts) > 1 {
+			c, err := util.NavigateConfigTree(current, propParts[0:len(propParts)-1])
+			if err != nil {
+				return nil, err
+			}
+			if cc, ok := c.(map[string]interface{}); ok {
+				current = cc
+			} else {
+				return nil, errors.New("trait configuration cannot end with a slice")
+			}
+		}
+
+		prop := propParts[len(propParts)-1]
+		switch v := current[prop].(type) {
 		case []string:
-			traits[id][prop] = append(v, value)
+			current[prop] = append(v, value)
 		case string:
 			// Aggregate multiple occurrences of the same option into a string array, to emulate POSIX conventions.
 			// This enables executing:
 			// $ kamel run -t <trait>.<property>=<value_1> ... -t <trait>.<property>=<value_N>
 			// Or:
 			// $ kamel run --trait <trait>.<property>=<value_1>,...,<trait>.<property>=<value_N>
-			traits[id][prop] = []string{v, value}
+			current[prop] = []string{v, value}
 		case nil:
-			traits[id][prop] = value
+			current[prop] = value
 		}
 	}
 
diff --git a/pkg/cmd/run_test.go b/pkg/cmd/run_test.go
index aa25816..dc35ee4 100644
--- a/pkg/cmd/run_test.go
+++ b/pkg/cmd/run_test.go
@@ -387,6 +387,55 @@ func TestConfigureTraits(t *testing.T) {
 	assertTraitConfiguration(t, traits, "prometheus", `{"podMonitor":false}`)
 }
 
+type customTrait struct {
+	trait.BaseTrait `property:",squash"`
+	// SimpleMap
+	SimpleMap  map[string]string            `property:"simple-map" json:"simpleMap,omitempty"`
+	DoubleMap  map[string]map[string]string `property:"double-map" json:"doubleMap,omitempty"`
+	SliceOfMap []map[string]string          `property:"slice-of-map" json:"sliceOfMap,omitempty"`
+}
+
+func (c customTrait) Configure(environment *trait.Environment) (bool, error) {
+	panic("implement me")
+}
+func (c customTrait) Apply(environment *trait.Environment) error {
+	panic("implement me")
+}
+
+var _ trait.Trait = &customTrait{}
+
+type customTraitFinder struct {
+}
+
+func (finder customTraitFinder) GetTrait(id string) trait.Trait {
+	if id == "custom" {
+		return &customTrait{}
+	}
+	return nil
+}
+
+func TestTraitsNestedConfig(t *testing.T) {
+	runCmdOptions, rootCmd, _ := initializeRunCmdOptions(t)
+	_, err := test.ExecuteCommand(rootCmd, "run",
+		"--trait", "custom.simple-map.a=b",
+		"--trait", "custom.simple-map.y=z",
+		"--trait", "custom.double-map.m.n=q",
+		"--trait", "custom.double-map.m.o=w",
+		"--trait", "custom.slice-of-map[0].f=g",
+		"--trait", "custom.slice-of-map[3].f=h",
+		"--trait", "custom.slice-of-map[2].f=i",
+		"example.js")
+	if err != nil {
+		t.Error(err)
+	}
+	catalog := &customTraitFinder{}
+	traits, err := configureTraits(runCmdOptions.Traits, catalog)
+
+	assert.Nil(t, err)
+	assert.Len(t, traits, 1)
+	assertTraitConfiguration(t, traits, "custom", `{"simpleMap":{"a":"b","y":"z"},"doubleMap":{"m":{"n":"q","o":"w"}},"sliceOfMap":[{"f":"g"},null,{"f":"i"},{"f":"h"}]}`)
+}
+
 func assertTraitConfiguration(t *testing.T, traits map[string]v1.TraitSpec, trait string, expected string) {
 	t.Helper()
 
diff --git a/pkg/trait/trait_catalog.go b/pkg/trait/trait_catalog.go
index d681592..7831a4f 100644
--- a/pkg/trait/trait_catalog.go
+++ b/pkg/trait/trait_catalog.go
@@ -182,3 +182,9 @@ func (c *Catalog) processFields(fields []*structs.Field, processor func(string))
 		}
 	}
 }
+
+type Finder interface {
+	GetTrait(id string) Trait
+}
+
+var _ Finder = &Catalog{}
diff --git a/pkg/trait/trait_configure.go b/pkg/trait/trait_configure.go
index c782797..1bba95b 100644
--- a/pkg/trait/trait_configure.go
+++ b/pkg/trait/trait_configure.go
@@ -23,6 +23,7 @@ import (
 	"reflect"
 	"strings"
 
+	"github.com/apache/camel-k/pkg/util"
 	"github.com/mitchellh/mapstructure"
 	"github.com/pkg/errors"
 
@@ -88,7 +89,7 @@ func decodeTraitSpec(in *v1.TraitSpec, target interface{}) error {
 }
 
 func (c *Catalog) configureTraitsFromAnnotations(annotations map[string]string) error {
-	options := make(map[string]map[string]string, len(annotations))
+	options := make(map[string]map[string]interface{}, len(annotations))
 	for k, v := range annotations {
 		if strings.HasPrefix(k, v1.TraitAnnotationPrefix) {
 			configKey := strings.TrimPrefix(k, v1.TraitAnnotationPrefix)
@@ -97,9 +98,24 @@ func (c *Catalog) configureTraitsFromAnnotations(annotations map[string]string)
 				id := parts[0]
 				prop := parts[1]
 				if _, ok := options[id]; !ok {
-					options[id] = make(map[string]string)
+					options[id] = make(map[string]interface{})
 				}
-				options[id][prop] = v
+
+				propParts := util.ConfigTreePropertySplit(prop)
+				var current = options[id]
+				if len(propParts) > 1 {
+					c, err := util.NavigateConfigTree(current, propParts[0:len(propParts)-1])
+					if err != nil {
+						return err
+					}
+					if cc, ok := c.(map[string]interface{}); ok {
+						current = cc
+					} else {
+						return errors.New(`invalid array specification: to set an array value use the ["v1", "v2"] format`)
+					}
+				}
+				current[propParts[len(propParts)-1]] = v
+
 			} else {
 				return fmt.Errorf("wrong format for trait annotation %q: missing trait ID", k)
 			}
@@ -108,7 +124,7 @@ func (c *Catalog) configureTraitsFromAnnotations(annotations map[string]string)
 	return c.configureFromOptions(options)
 }
 
-func (c *Catalog) configureFromOptions(traits map[string]map[string]string) error {
+func (c *Catalog) configureFromOptions(traits map[string]map[string]interface{}) error {
 	for id, config := range traits {
 		t := c.GetTrait(id)
 		if t != nil {
@@ -121,7 +137,7 @@ func (c *Catalog) configureFromOptions(traits map[string]map[string]string) erro
 	return nil
 }
 
-func configureTrait(id string, config map[string]string, trait interface{}) error {
+func configureTrait(id string, config map[string]interface{}, trait interface{}) error {
 	md := mapstructure.Metadata{}
 
 	var valueConverter mapstructure.DecodeHookFuncKind = func(sourceKind reflect.Kind, targetKind reflect.Kind, data interface{}) (interface{}, error) {
diff --git a/pkg/util/util.go b/pkg/util/util.go
index 2d2f73e..69fa4cb 100644
--- a/pkg/util/util.go
+++ b/pkg/util/util.go
@@ -29,6 +29,7 @@ import (
 	"path/filepath"
 	"regexp"
 	"sort"
+	"strconv"
 	"strings"
 
 	"go.uber.org/multierr"
@@ -855,3 +856,79 @@ func WithTempDir(pattern string, consumer func(string) error) error {
 
 	return multierr.Append(consumerErr, removeErr)
 }
+
+// ConfigTreePropertySplit parses a property spec and returns its parts.
+func ConfigTreePropertySplit(property string) []string {
+	var res = make([]string, 0)
+	initialParts := strings.Split(property, ".")
+	for _, p := range initialParts {
+		cur := p
+		var tmp []string
+		for strings.Contains(cur[1:], "[") && strings.HasSuffix(cur, "]") {
+			pos := strings.LastIndex(cur, "[")
+			tmp = append(tmp, cur[pos:])
+			cur = cur[0:pos]
+		}
+		if len(cur) > 0 {
+			tmp = append(tmp, cur)
+		}
+		for i := len(tmp) - 1; i >= 0; i = i - 1 {
+			res = append(res, tmp[i])
+		}
+	}
+	return res
+}
+
+// NavigateConfigTree switches to the element in the tree represented by the "nodes" spec and creates intermediary
+// nodes if missing. Node specs starting with "[" and ending in "]" are treated as slice indexes.
+func NavigateConfigTree(current interface{}, nodes []string) (interface{}, error) {
+	if len(nodes) == 0 {
+		return current, nil
+	}
+	isSlice := func(idx int) bool {
+		if idx >= len(nodes) {
+			return false
+		}
+		return strings.HasPrefix(nodes[idx], "[") && strings.HasSuffix(nodes[idx], "]")
+	}
+	makeNext := func() interface{} {
+		if isSlice(1) {
+			slice := make([]interface{}, 0)
+			return &slice
+		} else {
+			return make(map[string]interface{})
+		}
+	}
+	switch c := current.(type) {
+	case map[string]interface{}:
+		var next interface{}
+		if n, ok := c[nodes[0]]; ok {
+			next = n
+		} else {
+			next = makeNext()
+			c[nodes[0]] = next
+		}
+		return NavigateConfigTree(next, nodes[1:])
+	case *[]interface{}:
+		if !isSlice(0) {
+			return nil, fmt.Errorf("attempting to set map value %q into a slice", nodes[0])
+		}
+		pos, err := strconv.Atoi(nodes[0][1 : len(nodes[0])-1])
+		if err != nil {
+			return nil, errors.Wrapf(err, "value %q inside brackets is not numeric", nodes[0])
+		}
+		var next interface{}
+		if len(*c) > pos && (*c)[pos] != nil {
+			next = (*c)[pos]
+		} else {
+			next = makeNext()
+			for len(*c) <= pos {
+				*c = append(*c, nil)
+			}
+			(*c)[pos] = next
+		}
+		return NavigateConfigTree(next, nodes[1:])
+	default:
+		return nil, errors.New("invalid node type in configuration")
+	}
+}
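
A minimal usage sketch of the two helpers above (assuming compilation inside the camel-k module): it mirrors what configureTraits does for a single nested option such as custom.slice-of-map[0].f=g, splitting the property path and materializing the intermediate map and slice nodes before assigning the leaf value.

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/apache/camel-k/pkg/util"
    )

    func main() {
        root := make(map[string]interface{})

        // "slice-of-map[0].f" -> ["slice-of-map", "[0]", "f"]
        parts := util.ConfigTreePropertySplit("slice-of-map[0].f")

        // Walk (and create) everything up to the parent of the leaf node.
        node, err := util.NavigateConfigTree(root, parts[:len(parts)-1])
        if err != nil {
            panic(err)
        }
        if m, ok := node.(map[string]interface{}); ok {
            m[parts[len(parts)-1]] = "g"
        }

        out, _ := json.Marshal(root)
        fmt.Println(string(out)) // {"slice-of-map":[{"f":"g"}]}
    }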

[camel-k] 01/22: Fix #1107: keda scaffolding

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit b170a7154c3b0cb3b52b4be959470b0cfcccee4d
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Fri Dec 10 11:43:06 2021 +0100

    Fix #1107: keda scaffolding
---
 addons/keda/keda.go     | 51 +++++++++++++++++++++++++++++++++++++++++++++++++
 addons/register_keda.go | 27 ++++++++++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/addons/keda/keda.go b/addons/keda/keda.go
new file mode 100644
index 0000000..c794249
--- /dev/null
+++ b/addons/keda/keda.go
@@ -0,0 +1,51 @@
+/*
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package keda
+
+import (
+	"github.com/apache/camel-k/pkg/trait"
+)
+
+// The Keda trait can be used for automatic integration with Keda autoscalers.
+//
+// The Keda trait is disabled by default.
+//
+// +camel-k:trait=keda.
+type kedaTrait struct {
+	trait.BaseTrait `property:",squash"`
+	// Enables automatic configuration of the trait.
+	Auto *bool `property:"auto" json:"auto,omitempty"`
+	// Metadata
+	Metadata map[string]string `property:"metadata" json:"metadata,omitempty"`
+}
+
+// NewKedaTrait --.
+func NewKedaTrait() trait.Trait {
+	return &kedaTrait{
+		BaseTrait: trait.NewBaseTrait("keda", trait.TraitOrderPostProcessResources),
+	}
+}
+
+func (t *kedaTrait) Configure(e *trait.Environment) (bool, error) {
+
+	return false, nil
+}
+
+func (t *kedaTrait) Apply(e *trait.Environment) error {
+	return nil
+}
diff --git a/addons/register_keda.go b/addons/register_keda.go
new file mode 100644
index 0000000..a8699cc
--- /dev/null
+++ b/addons/register_keda.go
@@ -0,0 +1,27 @@
+/*
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package addons
+
+import (
+	"github.com/apache/camel-k/addons/keda"
+	"github.com/apache/camel-k/pkg/trait"
+)
+
+func init() {
+	trait.AddToTraits(keda.NewKedaTrait)
+}
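
What the scaffolding buys is that the trait becomes resolvable by its "keda" id once the addons package is linked in; the sketch below (a hypothetical main package, assuming compilation inside the camel-k module) shows the lookup through the trait catalog.

    package main

    import (
        "fmt"

        _ "github.com/apache/camel-k/addons" // register_keda.go init() calls trait.AddToTraits(keda.NewKedaTrait)
        "github.com/apache/camel-k/pkg/trait"
    )

    func main() {
        catalog := trait.NewCatalog(nil)
        if t := catalog.GetTrait("keda"); t != nil {
            fmt.Println("keda trait is registered")
        }
    }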

[camel-k] 18/22: Fix #1107: fix findings

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit 9d89922951da6dcfc258d7fad097e7e47b78a4dc
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Wed Dec 22 00:54:16 2021 +0100

    Fix #1107: fix findings
---
 addons/keda/keda.go                    |  43 ++++++-------
 addons/keda/keda_test.go               |  61 ++++++++++++++++---
 docs/modules/traits/pages/keda.adoc    |   4 --
 e2e/common/scale_binding_test.go       |  11 +---
 e2e/common/scale_integration_test.go   |  11 +---
 pkg/client/{serverside.go => apply.go} |   0
 pkg/client/client.go                   |   2 +
 pkg/client/scale.go                    |  35 +++++++++++
 pkg/cmd/run.go                         |   2 +-
 pkg/resources/resources.go             |   4 +-
 pkg/trait/deployer.go                  | 108 +--------------------------------
 pkg/util/test/client.go                |  45 +++++++++++++-
 resources/traits.yaml                  |   4 --
 13 files changed, 157 insertions(+), 173 deletions(-)

diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index ad9f71d..e6e1d5e 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -38,7 +38,7 @@ import (
 	"github.com/apache/camel-k/pkg/util/source"
 	"github.com/apache/camel-k/pkg/util/uri"
 	"github.com/pkg/errors"
-	scase "github.com/stoewer/go-strcase"
+	autoscalingv1 "k8s.io/api/autoscaling/v1"
 	v1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	ctrl "sigs.k8s.io/controller-runtime/pkg/client"
@@ -76,8 +76,6 @@ type kedaTrait struct {
 	trait.BaseTrait `property:",squash"`
 	// Enables automatic configuration of the trait. Allows the trait to infer KEDA triggers from the Kamelets.
 	Auto *bool `property:"auto" json:"auto,omitempty"`
-	// Convert metadata properties to camelCase (needed because Camel K trait properties use kebab-case from command line). Disabled by default.
-	CamelCaseConversion *bool `property:"camel-case-conversion" json:"camelCaseConversion,omitempty"`
 	// Set the spec->replicas field on the top level controller to an explicit value if missing, to allow KEDA to recognize it as a scalable resource.
 	HackControllerReplicas *bool `property:"hack-controller-replicas" json:"hackControllerReplicas,omitempty"`
 	// Interval (seconds) to check each trigger on.
@@ -170,11 +168,7 @@ func (t *kedaTrait) addScalingResources(e *trait.Environment) error {
 	for idx, trigger := range t.Triggers {
 		meta := make(map[string]string)
 		for k, v := range trigger.Metadata {
-			kk := k
-			if t.CamelCaseConversion != nil && *t.CamelCaseConversion {
-				kk = scase.LowerCamelCase(k)
-			}
-			meta[kk] = v
+			meta[k] = v
 		}
 		var authenticationRef *kedav1alpha1.ScaledObjectAuthRef
 		if len(trigger.authentication) > 0 && trigger.AuthenticationSecret != "" {
@@ -269,28 +263,25 @@ func (t *kedaTrait) addScalingResources(e *trait.Environment) error {
 
 func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
 	ctrlRef := t.getTopControllerReference(e)
+	scale := autoscalingv1.Scale{
+		Spec: autoscalingv1.ScaleSpec{
+			Replicas: int32(1),
+		},
+	}
+	scalesClient, err := e.Client.ScalesClient()
+	if err != nil {
+		return err
+	}
 	if ctrlRef.Kind == camelv1alpha1.KameletBindingKind {
-		// Update the KameletBinding directly (do not add it to env resources, it's the integration parent)
-		key := ctrl.ObjectKey{
-			Namespace: e.Integration.Namespace,
-			Name:      ctrlRef.Name,
-		}
-		klb := camelv1alpha1.KameletBinding{}
-		if err := e.Client.Get(e.Ctx, key, &klb); err != nil {
+		scale.ObjectMeta.Name = ctrlRef.Name
+		_, err = scalesClient.Scales(e.Integration.Namespace).Update(e.Ctx, camelv1alpha1.SchemeGroupVersion.WithResource("kameletbindings").GroupResource(), &scale, metav1.UpdateOptions{})
+		if err != nil {
 			return err
 		}
-		if klb.Spec.Replicas == nil {
-			one := int32(1)
-			klb.Spec.Replicas = &one
-			if err := e.Client.Update(e.Ctx, &klb); err != nil {
-				return err
-			}
-		}
 	} else if e.Integration.Spec.Replicas == nil {
-		one := int32(1)
-		e.Integration.Spec.Replicas = &one
-		// Update the Integration directly as the spec section is not merged by default
-		if err := e.Client.Update(e.Ctx, e.Integration); err != nil {
+		scale.ObjectMeta.Name = e.Integration.Name
+		_, err = scalesClient.Scales(e.Integration.Namespace).Update(e.Ctx, camelv1.SchemeGroupVersion.WithResource("integrations").GroupResource(), &scale, metav1.UpdateOptions{})
+		if err != nil {
 			return err
 		}
 	}
diff --git a/addons/keda/keda_test.go b/addons/keda/keda_test.go
index 08a627e..4783653 100644
--- a/addons/keda/keda_test.go
+++ b/addons/keda/keda_test.go
@@ -35,7 +35,6 @@ import (
 	corev1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
-	"sigs.k8s.io/controller-runtime/pkg/client"
 )
 
 var (
@@ -348,14 +347,58 @@ func TestHackReplicas(t *testing.T) {
 	assert.NoError(t, err)
 	assert.True(t, res)
 	assert.NoError(t, keda.Apply(env))
-	it := camelv1.Integration{}
-	key := client.ObjectKey{
-		Namespace: "test",
-		Name:      "my-it",
-	}
-	assert.NoError(t, env.Client.Get(env.Ctx, key, &it))
-	assert.NotNil(t, it.Spec.Replicas)
-	assert.Equal(t, int32(1), *it.Spec.Replicas)
+	scalesClient, err := env.Client.ScalesClient()
+	assert.NoError(t, err)
+	sc, err := scalesClient.Scales("test").Get(env.Ctx, camelv1.SchemeGroupVersion.WithResource("integrations").GroupResource(), "my-it", metav1.GetOptions{})
+	assert.NoError(t, err)
+	assert.Equal(t, int32(1), sc.Spec.Replicas)
+}
+
+func TestHackKLBReplicas(t *testing.T) {
+	keda, _ := NewKedaTrait().(*kedaTrait)
+	keda.Enabled = &testingTrue
+	keda.Auto = &testingFalse
+	keda.Triggers = append(keda.Triggers, kedaTrigger{
+		Type: "custom",
+		Metadata: map[string]string{
+			"a": "b",
+		},
+	})
+	keda.HackControllerReplicas = &testingTrue
+	env := createBasicTestEnvironment(
+		&camelv1alpha1.KameletBinding{
+			ObjectMeta: metav1.ObjectMeta{
+				Namespace: "test",
+				Name:      "my-klb",
+			},
+		},
+		&camelv1.Integration{
+			ObjectMeta: metav1.ObjectMeta{
+				Namespace: "test",
+				Name:      "my-it",
+				OwnerReferences: []metav1.OwnerReference{
+					{
+						APIVersion: camelv1alpha1.SchemeGroupVersion.String(),
+						Kind:       "KameletBinding",
+						Name:       "my-klb",
+					},
+				},
+			},
+			Status: camelv1.IntegrationStatus{
+				Phase: camelv1.IntegrationPhaseInitialization,
+			},
+		},
+	)
+
+	res, err := keda.Configure(env)
+	assert.NoError(t, err)
+	assert.True(t, res)
+	assert.NoError(t, keda.Apply(env))
+	scalesClient, err := env.Client.ScalesClient()
+	assert.NoError(t, err)
+	sc, err := scalesClient.Scales("test").Get(env.Ctx, camelv1alpha1.SchemeGroupVersion.WithResource("kameletbindings").GroupResource(), "my-klb", metav1.GetOptions{})
+	assert.NoError(t, err)
+	assert.Equal(t, int32(1), sc.Spec.Replicas)
 }
 
 func getScaledObject(e *trait.Environment) *v1alpha1.ScaledObject {
diff --git a/docs/modules/traits/pages/keda.adoc b/docs/modules/traits/pages/keda.adoc
index df6c8d9..b1a5827 100644
--- a/docs/modules/traits/pages/keda.adoc
+++ b/docs/modules/traits/pages/keda.adoc
@@ -38,10 +38,6 @@ The following configuration options are available:
 | bool
 | Enables automatic configuration of the trait. Allows the trait to infer KEDA triggers from the Kamelets.
 
-| keda.camel-case-conversion
-| bool
-| Convert metadata properties to camelCase (needed because Camel K trait properties use kebab-case from command line). Disabled by default.
-
 | keda.hack-controller-replicas
 | bool
 | Set the spec->replicas field on the top level controller to an explicit value if missing, to allow KEDA to recognize it as a scalable resource.
diff --git a/e2e/common/scale_binding_test.go b/e2e/common/scale_binding_test.go
index 49d9668..3bbc2ee 100644
--- a/e2e/common/scale_binding_test.go
+++ b/e2e/common/scale_binding_test.go
@@ -33,10 +33,6 @@ import (
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/types"
 
-	"k8s.io/client-go/dynamic"
-	"k8s.io/client-go/restmapper"
-	"k8s.io/client-go/scale"
-
 	. "github.com/apache/camel-k/e2e/support"
 	v1 "github.com/apache/camel-k/pkg/apis/camel/v1"
 	"github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
@@ -80,12 +76,7 @@ func TestKameletBindingScale(t *testing.T) {
 
 		t.Run("Scale kamelet binding with polymorphic client", func(t *testing.T) {
 			RegisterTestingT(t)
-			// Polymorphic scale client
-			groupResources, err := restmapper.GetAPIGroupResources(TestClient().Discovery())
-			Expect(err).To(BeNil())
-			mapper := restmapper.NewDiscoveryRESTMapper(groupResources)
-			resolver := scale.NewDiscoveryScaleKindResolver(TestClient().Discovery())
-			scaleClient, err := scale.NewForConfig(TestClient().GetConfig(), mapper, dynamic.LegacyAPIPathResolverFunc, resolver)
+			scaleClient, err := TestClient().ScalesClient()
 			Expect(err).To(BeNil())
 
 			// Patch the integration scale subresource
diff --git a/e2e/common/scale_integration_test.go b/e2e/common/scale_integration_test.go
index 5c70901..1c01d1c 100644
--- a/e2e/common/scale_integration_test.go
+++ b/e2e/common/scale_integration_test.go
@@ -33,10 +33,6 @@ import (
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/types"
 
-	"k8s.io/client-go/dynamic"
-	"k8s.io/client-go/restmapper"
-	"k8s.io/client-go/scale"
-
 	. "github.com/apache/camel-k/e2e/support"
 	v1 "github.com/apache/camel-k/pkg/apis/camel/v1"
 	"github.com/apache/camel-k/pkg/client/camel/clientset/versioned"
@@ -67,12 +63,7 @@ func TestIntegrationScale(t *testing.T) {
 
 		t.Run("Scale integration with polymorphic client", func(t *testing.T) {
 			RegisterTestingT(t)
-			// Polymorphic scale client
-			groupResources, err := restmapper.GetAPIGroupResources(TestClient().Discovery())
-			Expect(err).To(BeNil())
-			mapper := restmapper.NewDiscoveryRESTMapper(groupResources)
-			resolver := scale.NewDiscoveryScaleKindResolver(TestClient().Discovery())
-			scaleClient, err := scale.NewForConfig(TestClient().GetConfig(), mapper, dynamic.LegacyAPIPathResolverFunc, resolver)
+			scaleClient, err := TestClient().ScalesClient()
 			Expect(err).To(BeNil())
 
 			// Patch the integration scale subresource
diff --git a/pkg/client/serverside.go b/pkg/client/apply.go
similarity index 100%
rename from pkg/client/serverside.go
rename to pkg/client/apply.go
diff --git a/pkg/client/client.go b/pkg/client/client.go
index 2cf73c2..967a5fc 100644
--- a/pkg/client/client.go
+++ b/pkg/client/client.go
@@ -26,6 +26,7 @@ import (
 	user "github.com/mitchellh/go-homedir"
 	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
+	"k8s.io/client-go/scale"
 
 	"k8s.io/apimachinery/pkg/api/meta"
 	"k8s.io/apimachinery/pkg/runtime"
@@ -64,6 +65,7 @@ type Client interface {
 	GetConfig() *rest.Config
 	GetCurrentNamespace(kubeConfig string) (string, error)
 	ServerOrClientSideApplier() ServerOrClientSideApplier
+	ScalesClient() (scale.ScalesGetter, error)
 }
 
 // Injectable identifies objects that can receive a Client.
diff --git a/pkg/client/scale.go b/pkg/client/scale.go
new file mode 100644
index 0000000..7bcf7d7
--- /dev/null
+++ b/pkg/client/scale.go
@@ -0,0 +1,35 @@
+/*
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package client
+
+import (
+	"k8s.io/client-go/dynamic"
+	"k8s.io/client-go/restmapper"
+	"k8s.io/client-go/scale"
+)
+
+func (c *defaultClient) ScalesClient() (scale.ScalesGetter, error) {
+	// Polymorphic scale client
+	groupResources, err := restmapper.GetAPIGroupResources(c.Discovery())
+	if err != nil {
+		return nil, err
+	}
+	mapper := restmapper.NewDiscoveryRESTMapper(groupResources)
+	resolver := scale.NewDiscoveryScaleKindResolver(c.Discovery())
+	return scale.NewForConfig(c.GetConfig(), mapper, dynamic.LegacyAPIPathResolverFunc, resolver)
+}
diff --git a/pkg/cmd/run.go b/pkg/cmd/run.go
index d2d260f..4721fef 100644
--- a/pkg/cmd/run.go
+++ b/pkg/cmd/run.go
@@ -54,7 +54,7 @@ import (
 	"github.com/apache/camel-k/pkg/util/watch"
 )
 
-var traitConfigRegexp = regexp.MustCompile(`^([a-z0-9-]+)((?:\[[0-9]+\]|\.[a-z0-9-]+)+)=(.*)$`)
+var traitConfigRegexp = regexp.MustCompile(`^([a-z0-9-]+)((?:\.[a-z0-9-]+)(?:\[[0-9]+\]|\.[A-Za-z0-9-_]+)*)=(.*)$`)
 
 func newCmdRun(rootCmdOptions *RootCmdOptions) (*cobra.Command, *runCmdOptions) {
 	options := runCmdOptions{
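
The updated traitConfigRegexp above now accepts dotted and indexed property paths (with mixed-case segments after the first), not only flat trait properties. A minimal sketch of what it matches; the property names and values below are hypothetical illustrations, not values taken from the commit:

package main

import (
	"fmt"
	"regexp"
)

// Same expression as in the diff above: group 1 is the trait ID, group 2 the
// property path (dotted segments, optionally indexed), group 3 the value.
var traitConfigRegexp = regexp.MustCompile(`^([a-z0-9-]+)((?:\.[a-z0-9-]+)(?:\[[0-9]+\]|\.[A-Za-z0-9-_]+)*)=(.*)$`)

func main() {
	// Hypothetical trait property strings, for illustration only.
	for _, s := range []string{
		"keda.enabled=true",
		"keda.triggers[0].type=cron",
		"keda.triggers[0].metadata.queueLength=5",
	} {
		m := traitConfigRegexp.FindStringSubmatch(s)
		fmt.Printf("trait=%s path=%s value=%s\n", m[1], m[2], m[3])
	}
}
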
diff --git a/pkg/resources/resources.go b/pkg/resources/resources.go
index 80a8c6d..5d8301d 100644
--- a/pkg/resources/resources.go
+++ b/pkg/resources/resources.go
@@ -555,9 +555,9 @@ var assets = func() http.FileSystem {
 		"/traits.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "traits.yaml",
 			modTime:          time.Time{},
-			uncompressedSize: 49570,
+			uncompressedSize: 49341,
 
-			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x7d\xfd\x73\x5b\xb9\x91\xe0\xef\xf3\x57\xa0\xb4\x57\x65\x49\x45\x52\x9e\xc9\x26\x3b\xa7\xbb\xd9\x94\xc6\x76\x12\xcd\xf8\x43\x67\x3b\xb3\x97\x9a\x9b\x0a\xc1\xf7\x9a\x24\xcc\x47\xe0\x05\xc0\x93\xcc\xdc\xde\xff\x7e\x85\xee\xc6\xc7\x7b\x24\x25\xca\xb6\x66\xa3\xad\xdd\x54\xed\x58\xd2\x03\xd0\x68\x34\xfa\xbb\x1b\xde\x4a\xe5\xdd\xf9\x57\x63\xa1\xe5\x1a\xce\x85\x9c\xcf\x95\x56\x7e\xf3\x95\x10\x6d\x23\xfd\xdc\xd8\xf5\xb9\x [...]
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x7d\xfd\x73\x5b\xb9\x91\xe0\xef\xf3\x57\xa0\xb4\x57\x65\x49\x45\x52\x9e\xc9\x26\x3b\xa7\xbb\xd9\x94\xc6\x76\x12\xcd\xf8\x43\x67\x3b\xb3\x97\x9a\x9b\x0a\xc1\xf7\x9a\x24\xcc\x47\xe0\x05\xc0\x93\xcc\xdc\xde\xff\x7e\x85\xee\xc6\xc7\x7b\x24\x25\xca\xb6\x66\xa3\xad\xdd\x54\xed\x58\xd2\x03\xd0\x68\x34\xfa\xbb\x1b\xde\x4a\xe5\xdd\xf9\x57\x63\xa1\xe5\x1a\xce\x85\x9c\xcf\x95\x56\x7e\xf3\x95\x10\x6d\x23\xfd\xdc\xd8\xf5\xb9\x [...]
 		},
 	}
 	fs["/"].(*vfsgen۰DirInfo).entries = []os.FileInfo{
diff --git a/pkg/trait/deployer.go b/pkg/trait/deployer.go
index 67cdb79..7735a37 100644
--- a/pkg/trait/deployer.go
+++ b/pkg/trait/deployer.go
@@ -17,22 +17,6 @@ limitations under the License.
 
 package trait
 
-import (
-	"encoding/json"
-	"errors"
-	"fmt"
-	"net/http"
-	"strings"
-
-	k8serrors "k8s.io/apimachinery/pkg/api/errors"
-	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
-	"k8s.io/apimachinery/pkg/types"
-
-	ctrl "sigs.k8s.io/controller-runtime/pkg/client"
-
-	"github.com/apache/camel-k/pkg/util/patch"
-)
-
 // The deployer trait is responsible for deploying the resources owned by the integration, and can be used
 // to explicitly select the underlying controller that will manage the integration pods.
 //
@@ -45,8 +29,6 @@ type deployerTrait struct {
 
 var _ ControllerStrategySelector = &deployerTrait{}
 
-var hasServerSideApply = true
-
 func newDeployerTrait() Trait {
 	return &deployerTrait{
 		BaseTrait: NewBaseTrait("deployer", 900),
@@ -60,28 +42,9 @@ func (t *deployerTrait) Configure(e *Environment) (bool, error) {
 func (t *deployerTrait) Apply(e *Environment) error {
 	// Register a post action that patches the resources generated by the traits
 	e.PostActions = append(e.PostActions, func(env *Environment) error {
+		applier := e.Client.ServerOrClientSideApplier()
 		for _, resource := range env.Resources.Items() {
-			// We assume that server-side apply is enabled by default.
-			// It is currently convoluted to check pro-actively whether server-side apply
-			// is enabled. This is possible to fetch the OpenAPI endpoint, which returns
-			// the entire server API document, then lookup the resource PATCH endpoint, and
-			// check its list of accepted MIME types.
-			// As a simpler solution, we fall back to client-side apply at the first
-			// 415 error, and assume server-side apply is not available globally.
-			if hasServerSideApply {
-				err := t.serverSideApply(env, resource)
-				switch {
-				case err == nil:
-					continue
-				case isIncompatibleServerError(err):
-					t.L.Info("Fallback to client-side apply to patch resources")
-					hasServerSideApply = false
-				default:
-					// Keep server-side apply unless server is incompatible with it
-					return err
-				}
-			}
-			if err := t.clientSideApply(env, resource); err != nil {
+			if err := applier.Apply(e.Ctx, resource); err != nil {
 				return err
 			}
 		}
@@ -91,73 +54,6 @@ func (t *deployerTrait) Apply(e *Environment) error {
 	return nil
 }
 
-func (t *deployerTrait) serverSideApply(env *Environment, resource ctrl.Object) error {
-	target, err := patch.PositiveApplyPatch(resource)
-	if err != nil {
-		return err
-	}
-	err = env.Client.Patch(env.Ctx, target, ctrl.Apply, ctrl.ForceOwnership, ctrl.FieldOwner("camel-k-operator"))
-	if err != nil {
-		return fmt.Errorf("error during apply resource: %s/%s: %w", resource.GetNamespace(), resource.GetName(), err)
-	}
-	// Update the resource with the response returned from the API server
-	return t.unstructuredToRuntimeObject(target, resource)
-}
-
-func (t *deployerTrait) clientSideApply(env *Environment, resource ctrl.Object) error {
-	err := env.Client.Create(env.Ctx, resource)
-	if err == nil {
-		return nil
-	} else if !k8serrors.IsAlreadyExists(err) {
-		return fmt.Errorf("error during create resource: %s/%s: %w", resource.GetNamespace(), resource.GetName(), err)
-	}
-	object := &unstructured.Unstructured{}
-	object.SetNamespace(resource.GetNamespace())
-	object.SetName(resource.GetName())
-	object.SetGroupVersionKind(resource.GetObjectKind().GroupVersionKind())
-	err = env.Client.Get(env.Ctx, ctrl.ObjectKeyFromObject(object), object)
-	if err != nil {
-		return err
-	}
-	p, err := patch.PositiveMergePatch(object, resource)
-	if err != nil {
-		return err
-	} else if len(p) == 0 {
-		// Update the resource with the object returned from the API server
-		return t.unstructuredToRuntimeObject(object, resource)
-	}
-	err = env.Client.Patch(env.Ctx, resource, ctrl.RawPatch(types.MergePatchType, p))
-	if err != nil {
-		return fmt.Errorf("error during patch %s/%s: %w", resource.GetNamespace(), resource.GetName(), err)
-	}
-	return nil
-}
-
-func (t *deployerTrait) unstructuredToRuntimeObject(u *unstructured.Unstructured, obj ctrl.Object) error {
-	data, err := json.Marshal(u)
-	if err != nil {
-		return err
-	}
-	return json.Unmarshal(data, obj)
-}
-
-func isIncompatibleServerError(err error) bool {
-	// First simpler check for older servers (i.e. OpenShift 3.11)
-	if strings.Contains(err.Error(), "415: Unsupported Media Type") {
-		return true
-	}
-
-	// 415: Unsupported media type means we're talking to a server which doesn't
-	// support server-side apply.
-	var serr *k8serrors.StatusError
-	if errors.As(err, &serr) {
-		return serr.Status().Code == http.StatusUnsupportedMediaType
-	}
-
-	// Non-StatusError means the error isn't because the server is incompatible.
-	return false
-}
-
 func (t *deployerTrait) SelectControllerStrategy(e *Environment) (*ControllerStrategy, error) {
 	if IsFalse(t.Enabled) {
 		return nil, nil
diff --git a/pkg/util/test/client.go b/pkg/util/test/client.go
index b4f6db4..7086e05 100644
--- a/pkg/util/test/client.go
+++ b/pkg/util/test/client.go
@@ -19,6 +19,7 @@ package test
 
 import (
 	"context"
+	"fmt"
 	"strings"
 
 	"github.com/apache/camel-k/pkg/apis"
@@ -27,6 +28,7 @@ import (
 	fakecamelclientset "github.com/apache/camel-k/pkg/client/camel/clientset/versioned/fake"
 	camelv1 "github.com/apache/camel-k/pkg/client/camel/clientset/versioned/typed/camel/v1"
 	camelv1alpha1 "github.com/apache/camel-k/pkg/client/camel/clientset/versioned/typed/camel/v1alpha1"
+	autoscalingv1 "k8s.io/api/autoscaling/v1"
 	k8serrors "k8s.io/apimachinery/pkg/api/errors"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
@@ -36,6 +38,9 @@ import (
 	fakeclientset "k8s.io/client-go/kubernetes/fake"
 	clientscheme "k8s.io/client-go/kubernetes/scheme"
 	"k8s.io/client-go/rest"
+	"k8s.io/client-go/scale"
+	fakescale "k8s.io/client-go/scale/fake"
+	"k8s.io/client-go/testing"
 	controller "sigs.k8s.io/controller-runtime/pkg/client"
 	"sigs.k8s.io/controller-runtime/pkg/client/fake"
 )
@@ -57,11 +62,44 @@ func NewFakeClient(initObjs ...runtime.Object) (client.Client, error) {
 	clientset := fakeclientset.NewSimpleClientset(filterObjects(scheme, initObjs, func(gvk schema.GroupVersionKind) bool {
 		return !strings.Contains(gvk.Group, "camel") && !strings.Contains(gvk.Group, "knative")
 	})...)
+	replicasCount := make(map[string]int32)
+	fakescaleclient := fakescale.FakeScaleClient{}
+	fakescaleclient.AddReactor("update", "*", func(rawAction testing.Action) (handled bool, ret runtime.Object, err error) {
+		action := rawAction.(testing.UpdateAction)       // nolint: forcetypeassert
+		obj := action.GetObject().(*autoscalingv1.Scale) // nolint: forcetypeassert
+		replicas := obj.Spec.Replicas
+		key := fmt.Sprintf("%s:%s:%s/%s", action.GetResource().Group, action.GetResource().Resource, action.GetNamespace(), obj.GetName())
+		replicasCount[key] = replicas
+		return true, &autoscalingv1.Scale{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      obj.Name,
+				Namespace: action.GetNamespace(),
+			},
+			Spec: autoscalingv1.ScaleSpec{
+				Replicas: replicas,
+			},
+		}, nil
+	})
+	fakescaleclient.AddReactor("get", "*", func(rawAction testing.Action) (handled bool, ret runtime.Object, err error) {
+		action := rawAction.(testing.GetAction) // nolint: forcetypeassert
+		key := fmt.Sprintf("%s:%s:%s/%s", action.GetResource().Group, action.GetResource().Resource, action.GetNamespace(), action.GetName())
+		obj := &autoscalingv1.Scale{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      action.GetName(),
+				Namespace: action.GetNamespace(),
+			},
+			Spec: autoscalingv1.ScaleSpec{
+				Replicas: replicasCount[key],
+			},
+		}
+		return true, obj, nil
+	})
 
 	return &FakeClient{
 		Client:    c,
 		Interface: clientset,
 		camel:     camelClientset,
+		scales:    &fakescaleclient,
 	}, nil
 }
 
@@ -82,7 +120,8 @@ func filterObjects(scheme *runtime.Scheme, input []runtime.Object, filter func(g
 type FakeClient struct {
 	controller.Client
 	kubernetes.Interface
-	camel camel.Interface
+	camel  camel.Interface
+	scales *fakescale.FakeScaleClient
 }
 
 func (c *FakeClient) CamelV1() camelv1.CamelV1Interface {
@@ -123,6 +162,10 @@ func (c *FakeClient) ServerOrClientSideApplier() client.ServerOrClientSideApplie
 	}
 }
 
+func (c *FakeClient) ScalesClient() (scale.ScalesGetter, error) {
+	return c.scales, nil
+}
+
 type FakeDiscovery struct {
 	discovery.DiscoveryInterface
 }
diff --git a/resources/traits.yaml b/resources/traits.yaml
index 638418d..fd24ebe 100755
--- a/resources/traits.yaml
+++ b/resources/traits.yaml
@@ -593,10 +593,6 @@ traits:
     type: bool
     description: Enables automatic configuration of the trait. Allows the trait to
       infer KEDA triggers from the Kamelets.
-  - name: camel-case-conversion
-    type: bool
-    description: Convert metadata properties to camelCase (needed because Camel K
-      trait properties use kebab-case from command line). Disabled by default.
   - name: hack-controller-replicas
     type: bool
     description: Set the spec->replicas field on the top level controller to an explicit

[camel-k] 04/22: Fix #1107: generalize server side apply code and reuse

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit 5a7e49cf585aa987549f76f9301c92b40323acba
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Wed Dec 15 11:16:46 2021 +0100

    Fix #1107: generalize server side apply code and reuse
---
 addons/keda/keda.go      |   6 +--
 pkg/client/client.go     |   1 +
 pkg/client/serverside.go | 124 +++++++++++++++++++++++++++++++++++++++++++++++
 pkg/install/kamelets.go  |  86 +-------------------------------
 pkg/util/test/client.go  |   6 +++
 5 files changed, 136 insertions(+), 87 deletions(-)

diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index 65a8bd4..f59edd9 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -117,7 +117,7 @@ func (t *kedaTrait) getScaledObject(e *trait.Environment) (*kedav1alpha1.ScaledO
 
 func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
 	ctrlRef := t.getTopControllerReference(e)
-
+	applier := e.Client.ServerOrClientSideApplier()
 	if ctrlRef.Kind == camelv1alpha1.KameletBindingKind {
 		// Update the KameletBinding directly (do not add it to env resources, it's the integration parent)
 		key := client.ObjectKey{
@@ -131,7 +131,7 @@ func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
 		if klb.Spec.Replicas == nil {
 			one := int32(1)
 			klb.Spec.Replicas = &one
-			if err := e.Client.Update(e.Ctx, &klb); err != nil {
+			if err := applier.Apply(e.Ctx, &klb); err != nil {
 				return err
 			}
 		}
@@ -139,7 +139,7 @@ func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
 		if e.Integration.Spec.Replicas == nil {
 			one := int32(1)
 			e.Integration.Spec.Replicas = &one
-			if err := e.Client.Update(e.Ctx, e.Integration); err != nil {
+			if err := applier.Apply(e.Ctx, e.Integration); err != nil {
 				return err
 			}
 		}
diff --git a/pkg/client/client.go b/pkg/client/client.go
index 3334e70..2cf73c2 100644
--- a/pkg/client/client.go
+++ b/pkg/client/client.go
@@ -63,6 +63,7 @@ type Client interface {
 	GetScheme() *runtime.Scheme
 	GetConfig() *rest.Config
 	GetCurrentNamespace(kubeConfig string) (string, error)
+	ServerOrClientSideApplier() ServerOrClientSideApplier
 }
 
 // Injectable identifies objects that can receive a Client.
diff --git a/pkg/client/serverside.go b/pkg/client/serverside.go
new file mode 100644
index 0000000..6efd758
--- /dev/null
+++ b/pkg/client/serverside.go
@@ -0,0 +1,124 @@
+/*
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package client
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+	"strings"
+	"sync"
+	"sync/atomic"
+
+	"github.com/apache/camel-k/pkg/util/log"
+	"github.com/apache/camel-k/pkg/util/patch"
+	"github.com/pkg/errors"
+	k8serrors "k8s.io/apimachinery/pkg/api/errors"
+	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+	"k8s.io/apimachinery/pkg/types"
+	ctrl "sigs.k8s.io/controller-runtime/pkg/client"
+)
+
+type ServerOrClientSideApplier struct {
+	Client             ctrl.Client
+	hasServerSideApply atomic.Value
+	tryServerSideApply sync.Once
+}
+
+func (c *defaultClient) ServerOrClientSideApplier() ServerOrClientSideApplier {
+	return ServerOrClientSideApplier{
+		Client: c,
+	}
+}
+
+func (a *ServerOrClientSideApplier) Apply(ctx context.Context, object ctrl.Object) error {
+	once := false
+	var err error
+	a.tryServerSideApply.Do(func() {
+		once = true
+		if err = a.serverSideApply(ctx, object); err != nil {
+			if isIncompatibleServerError(err) {
+				log.Info("Fallback to client-side apply for installing resources")
+				a.hasServerSideApply.Store(false)
+				err = nil
+			} else {
+				a.tryServerSideApply = sync.Once{}
+			}
+		} else {
+			a.hasServerSideApply.Store(true)
+		}
+	})
+	if err != nil {
+		return err
+	}
+	if v := a.hasServerSideApply.Load(); v.(bool) {
+		if !once {
+			return a.serverSideApply(ctx, object)
+		}
+	} else {
+		return a.clientSideApply(ctx, object)
+	}
+	return nil
+}
+
+func (a *ServerOrClientSideApplier) serverSideApply(ctx context.Context, resource ctrl.Object) error {
+	target, err := patch.PositiveApplyPatch(resource)
+	if err != nil {
+		return err
+	}
+	return a.Client.Patch(ctx, target, ctrl.Apply, ctrl.ForceOwnership, ctrl.FieldOwner("camel-k-operator"))
+}
+
+func (a *ServerOrClientSideApplier) clientSideApply(ctx context.Context, resource ctrl.Object) error {
+	err := a.Client.Create(ctx, resource)
+	if err == nil {
+		return nil
+	} else if !k8serrors.IsAlreadyExists(err) {
+		return fmt.Errorf("error during create resource: %s/%s: %w", resource.GetNamespace(), resource.GetName(), err)
+	}
+	object := &unstructured.Unstructured{}
+	object.SetNamespace(resource.GetNamespace())
+	object.SetName(resource.GetName())
+	object.SetGroupVersionKind(resource.GetObjectKind().GroupVersionKind())
+	err = a.Client.Get(ctx, ctrl.ObjectKeyFromObject(object), object)
+	if err != nil {
+		return err
+	}
+	p, err := patch.PositiveMergePatch(object, resource)
+	if err != nil {
+		return err
+	} else if len(p) == 0 {
+		return nil
+	}
+	return a.Client.Patch(ctx, resource, ctrl.RawPatch(types.MergePatchType, p))
+}
+
+func isIncompatibleServerError(err error) bool {
+	// First simpler check for older servers (i.e. OpenShift 3.11)
+	if strings.Contains(err.Error(), "415: Unsupported Media Type") {
+		return true
+	}
+	// 415: Unsupported media type means we're talking to a server which doesn't
+	// support server-side apply.
+	var serr *k8serrors.StatusError
+	if errors.As(err, &serr) {
+		return serr.Status().Code == http.StatusUnsupportedMediaType
+	}
+	// Non-StatusError means the error isn't because the server is incompatible.
+	return false
+}
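
For orientation, a minimal usage sketch of the applier introduced above (the helper name is hypothetical): the applier is obtained once from the client and reused for every resource, so the server-side-apply probe guarded by sync.Once runs at most once and the fallback to client-side apply on 415 errors is remembered.

package example

import (
	"context"

	"github.com/apache/camel-k/pkg/client"
	ctrl "sigs.k8s.io/controller-runtime/pkg/client"
)

// applyAll is a hypothetical helper: it reuses a single applier for all
// resources instead of re-probing the API server for each one.
func applyAll(ctx context.Context, c client.Client, resources []ctrl.Object) error {
	applier := c.ServerOrClientSideApplier()
	for _, resource := range resources {
		if err := applier.Apply(ctx, resource); err != nil {
			return err
		}
	}
	return nil
}
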
diff --git a/pkg/install/kamelets.go b/pkg/install/kamelets.go
index 82a818b..fc64e25 100644
--- a/pkg/install/kamelets.go
+++ b/pkg/install/kamelets.go
@@ -19,25 +19,16 @@ package install
 
 import (
 	"context"
-	"errors"
 	"fmt"
 	"io/fs"
-	"net/http"
 	"os"
 	"path"
 	"path/filepath"
 	"strings"
-	"sync"
-	"sync/atomic"
 
 	"golang.org/x/sync/errgroup"
 
-	k8serrors "k8s.io/apimachinery/pkg/api/errors"
-	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
 	"k8s.io/apimachinery/pkg/runtime"
-	"k8s.io/apimachinery/pkg/types"
-
-	ctrl "sigs.k8s.io/controller-runtime/pkg/client"
 	logf "sigs.k8s.io/controller-runtime/pkg/log"
 
 	"github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
@@ -45,7 +36,6 @@ import (
 	"github.com/apache/camel-k/pkg/util"
 	"github.com/apache/camel-k/pkg/util/defaults"
 	"github.com/apache/camel-k/pkg/util/kubernetes"
-	"github.com/apache/camel-k/pkg/util/patch"
 )
 
 const (
@@ -55,9 +45,6 @@ const (
 
 var (
 	log = logf.Log
-
-	hasServerSideApply atomic.Value
-	tryServerSideApply sync.Once
 )
 
 // KameletCatalog installs the bundled Kamelets into the specified namespace.
@@ -77,7 +64,7 @@ func KameletCatalog(ctx context.Context, c client.Client, namespace string) erro
 	}
 
 	g, gCtx := errgroup.WithContext(ctx)
-
+	applier := c.ServerOrClientSideApplier()
 	err = filepath.WalkDir(kameletDir, func(p string, f fs.DirEntry, err error) error {
 		if err != nil {
 			return err
@@ -94,31 +81,9 @@ func KameletCatalog(ctx context.Context, c client.Client, namespace string) erro
 			if err != nil {
 				return err
 			}
-			once := false
-			tryServerSideApply.Do(func() {
-				once = true
-				if err = serverSideApply(gCtx, c, kamelet); err != nil {
-					if isIncompatibleServerError(err) {
-						log.Info("Fallback to client-side apply for installing bundled Kamelets")
-						hasServerSideApply.Store(false)
-						err = nil
-					} else {
-						tryServerSideApply = sync.Once{}
-					}
-				} else {
-					hasServerSideApply.Store(true)
-				}
-			})
-			if err != nil {
+			if err := applier.Apply(gCtx, kamelet); err != nil {
 				return err
 			}
-			if v := hasServerSideApply.Load(); v.(bool) {
-				if !once {
-					return serverSideApply(gCtx, c, kamelet)
-				}
-			} else {
-				return clientSideApply(gCtx, c, kamelet)
-			}
 			return nil
 		})
 		return nil
@@ -130,53 +95,6 @@ func KameletCatalog(ctx context.Context, c client.Client, namespace string) erro
 	return g.Wait()
 }
 
-func serverSideApply(ctx context.Context, c client.Client, resource runtime.Object) error {
-	target, err := patch.PositiveApplyPatch(resource)
-	if err != nil {
-		return err
-	}
-	return c.Patch(ctx, target, ctrl.Apply, ctrl.ForceOwnership, ctrl.FieldOwner("camel-k-operator"))
-}
-
-func clientSideApply(ctx context.Context, c client.Client, resource ctrl.Object) error {
-	err := c.Create(ctx, resource)
-	if err == nil {
-		return nil
-	} else if !k8serrors.IsAlreadyExists(err) {
-		return fmt.Errorf("error during create resource: %s/%s: %w", resource.GetNamespace(), resource.GetName(), err)
-	}
-	object := &unstructured.Unstructured{}
-	object.SetNamespace(resource.GetNamespace())
-	object.SetName(resource.GetName())
-	object.SetGroupVersionKind(resource.GetObjectKind().GroupVersionKind())
-	err = c.Get(ctx, ctrl.ObjectKeyFromObject(object), object)
-	if err != nil {
-		return err
-	}
-	p, err := patch.PositiveMergePatch(object, resource)
-	if err != nil {
-		return err
-	} else if len(p) == 0 {
-		return nil
-	}
-	return c.Patch(ctx, resource, ctrl.RawPatch(types.MergePatchType, p))
-}
-
-func isIncompatibleServerError(err error) bool {
-	// First simpler check for older servers (i.e. OpenShift 3.11)
-	if strings.Contains(err.Error(), "415: Unsupported Media Type") {
-		return true
-	}
-	// 415: Unsupported media type means we're talking to a server which doesn't
-	// support server-side apply.
-	var serr *k8serrors.StatusError
-	if errors.As(err, &serr) {
-		return serr.Status().Code == http.StatusUnsupportedMediaType
-	}
-	// Non-StatusError means the error isn't because the server is incompatible.
-	return false
-}
-
 func loadKamelet(path string, namespace string, scheme *runtime.Scheme) (*v1alpha1.Kamelet, error) {
 	content, err := util.ReadFile(path)
 	if err != nil {
diff --git a/pkg/util/test/client.go b/pkg/util/test/client.go
index 50d32fb..b4f6db4 100644
--- a/pkg/util/test/client.go
+++ b/pkg/util/test/client.go
@@ -117,6 +117,12 @@ func (c *FakeClient) Discovery() discovery.DiscoveryInterface {
 	}
 }
 
+func (c *FakeClient) ServerOrClientSideApplier() client.ServerOrClientSideApplier {
+	return client.ServerOrClientSideApplier{
+		Client: c,
+	}
+}
+
 type FakeDiscovery struct {
 	discovery.DiscoveryInterface
 }

[camel-k] 03/22: Fix #1107: initial trait

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit 8ba04e79a203d8fe382731ab3d1e5d2db9ef45da
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Wed Dec 15 10:36:40 2021 +0100

    Fix #1107: initial trait
---
 addons/keda/duck/v1alpha1/doc.go                   |  21 ++
 addons/keda/duck/v1alpha1/duck_types.go            | 108 ++++++++++
 .../v1alpha1/duck_types_support.go}                |  42 ++--
 addons/keda/duck/v1alpha1/register.go              |  57 +++++
 addons/keda/duck/v1alpha1/zz_generated.deepcopy.go | 235 +++++++++++++++++++++
 addons/keda/keda.go                                | 118 ++++++++++-
 docs/modules/ROOT/nav.adoc                         |   1 +
 docs/modules/traits/pages/keda.adoc                |  44 ++++
 go.sum                                             |   1 +
 pkg/cmd/run.go                                     |   7 +-
 pkg/resources/resources.go                         |   4 +-
 resources/traits.yaml                              |  23 ++
 script/Makefile                                    |   8 +-
 script/{gen_doc.sh => gen_client_keda.sh}          |  16 +-
 script/gen_doc.sh                                  |   2 +-
 15 files changed, 645 insertions(+), 42 deletions(-)

diff --git a/addons/keda/duck/v1alpha1/doc.go b/addons/keda/duck/v1alpha1/doc.go
new file mode 100644
index 0000000..56d897a
--- /dev/null
+++ b/addons/keda/duck/v1alpha1/doc.go
@@ -0,0 +1,21 @@
+/*
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Package duck contains a partial schema of the Keda APIs
+// +kubebuilder:object:generate=true
+// +groupName=keda.sh
+package v1alpha1
diff --git a/addons/keda/duck/v1alpha1/duck_types.go b/addons/keda/duck/v1alpha1/duck_types.go
new file mode 100644
index 0000000..8504b6c
--- /dev/null
+++ b/addons/keda/duck/v1alpha1/duck_types.go
@@ -0,0 +1,108 @@
+/*
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v1alpha1
+
+import (
+	v1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+// +genclient
+// +genclient:onlyVerbs=get,list,watch
+// +genclient:noStatus
+// +kubebuilder:object:root=true
+
+// ScaledObject is a specification for a ScaledObject resource
+type ScaledObject struct {
+	metav1.TypeMeta   `json:",inline"`
+	metav1.ObjectMeta `json:"metadata,omitempty"`
+
+	Spec ScaledObjectSpec `json:"spec"`
+}
+
+// ScaledObjectSpec is the spec for a ScaledObject resource
+type ScaledObjectSpec struct {
+	ScaleTargetRef *v1.ObjectReference `json:"scaleTargetRef"`
+
+	Triggers []ScaleTriggers `json:"triggers"`
+}
+
+// ScaleTriggers reference the scaler that will be used
+type ScaleTriggers struct {
+	Type string `json:"type"`
+	// +optional
+	Name     string            `json:"name,omitempty"`
+	Metadata map[string]string `json:"metadata"`
+	// +optional
+	AuthenticationRef *ScaledObjectAuthRef `json:"authenticationRef,omitempty"`
+	// +optional
+	FallbackReplicas *int32 `json:"fallback,omitempty"`
+}
+
+// ScaledObjectAuthRef points to the TriggerAuthentication or ClusterTriggerAuthentication object that
+// is used to authenticate the scaler with the environment
+type ScaledObjectAuthRef struct {
+	Name string `json:"name"`
+	// Kind of the resource being referred to. Defaults to TriggerAuthentication.
+	// +optional
+	Kind string `json:"kind,omitempty"`
+}
+
+// +kubebuilder:object:root=true
+
+// ScaledObjectList contains a list of ScaledObject.
+type ScaledObjectList struct {
+	metav1.TypeMeta `json:",inline"`
+	metav1.ListMeta `json:"metadata,omitempty"`
+	Items           []ScaledObject `json:"items"`
+}
+
+// +genclient
+// +genclient:onlyVerbs=get,list,watch
+// +genclient:noStatus
+// +kubebuilder:object:root=true
+
+// TriggerAuthentication defines how a trigger can authenticate
+type TriggerAuthentication struct {
+	metav1.TypeMeta   `json:",inline"`
+	metav1.ObjectMeta `json:"metadata,omitempty"`
+
+	Spec TriggerAuthenticationSpec `json:"spec"`
+}
+
+// TriggerAuthenticationSpec defines the various ways to authenticate
+type TriggerAuthenticationSpec struct {
+	// +optional
+	SecretTargetRef []AuthSecretTargetRef `json:"secretTargetRef,omitempty"`
+}
+
+// AuthSecretTargetRef is used to authenticate using a reference to a secret
+type AuthSecretTargetRef struct {
+	Parameter string `json:"parameter"`
+	Name      string `json:"name"`
+	Key       string `json:"key"`
+}
+
+// +kubebuilder:object:root=true
+
+// TriggerAuthenticationList contains a list of TriggerAuthentication
+type TriggerAuthenticationList struct {
+	metav1.TypeMeta `json:",inline"`
+	metav1.ListMeta `json:"metadata"`
+	Items           []TriggerAuthentication `json:"items"`
+}
diff --git a/addons/keda/keda.go b/addons/keda/duck/v1alpha1/duck_types_support.go
similarity index 50%
copy from addons/keda/keda.go
copy to addons/keda/duck/v1alpha1/duck_types_support.go
index c794249..c8c2f23 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/duck/v1alpha1/duck_types_support.go
@@ -15,37 +15,25 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */
 
-package keda
+package v1alpha1
 
 import (
-	"github.com/apache/camel-k/pkg/trait"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 )
 
-// The Keda trait can be used for automatic integration with Keda autoscalers.
-//
-// The Keda trait is disabled by default.
-//
-// +camel-k:trait=keda.
-type kedaTrait struct {
-	trait.BaseTrait `property:",squash"`
-	// Enables automatic configuration of the trait.
-	Auto *bool `property:"auto" json:"auto,omitempty"`
-	// Metadata
-	Metadata map[string]string `property:"metadata" json:"metadata,omitempty"`
-}
+const (
+	ScaledObjectKind = "ScaledObject"
+)
 
-// NewKedaTrait --.
-func NewKedaTrait() trait.Trait {
-	return &kedaTrait{
-		BaseTrait: trait.NewBaseTrait("keda", trait.TraitOrderPostProcessResources),
+func NewScaledObject(namespace string, name string) ScaledObject {
+	return ScaledObject{
+		TypeMeta: metav1.TypeMeta{
+			APIVersion: SchemeGroupVersion.String(),
+			Kind:       ScaledObjectKind,
+		},
+		ObjectMeta: metav1.ObjectMeta{
+			Namespace: namespace,
+			Name:      name,
+		},
 	}
 }
-
-func (t *kedaTrait) Configure(e *trait.Environment) (bool, error) {
-
-	return false, nil
-}
-
-func (t *kedaTrait) Apply(e *trait.Environment) error {
-	return nil
-}
diff --git a/addons/keda/duck/v1alpha1/register.go b/addons/keda/duck/v1alpha1/register.go
new file mode 100644
index 0000000..a3814da
--- /dev/null
+++ b/addons/keda/duck/v1alpha1/register.go
@@ -0,0 +1,57 @@
+/*
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v1alpha1
+
+import (
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/runtime/schema"
+)
+
+const (
+	KedaGroup   = "keda.sh"
+	KedaVersion = "v1alpha1"
+)
+
+var (
+	// SchemeGroupVersion is group version used to register these objects.
+	SchemeGroupVersion = schema.GroupVersion{Group: KedaGroup, Version: KedaVersion}
+
+	// SchemeBuilder is used to add go types to the GroupVersionKind scheme.
+	SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
+
+	// AddToScheme is a shortcut to SchemeBuilder.AddToScheme.
+	AddToScheme = SchemeBuilder.AddToScheme
+)
+
+// Resource takes an unqualified resource and returns a Group qualified GroupResource.
+func Resource(resource string) schema.GroupResource {
+	return SchemeGroupVersion.WithResource(resource).GroupResource()
+}
+
+// Adds the list of known types to Scheme.
+func addKnownTypes(scheme *runtime.Scheme) error {
+	scheme.AddKnownTypes(SchemeGroupVersion,
+		&ScaledObject{},
+		&ScaledObjectList{},
+		&TriggerAuthentication{},
+		&TriggerAuthenticationList{},
+	)
+	metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
+	return nil
+}
diff --git a/addons/keda/duck/v1alpha1/zz_generated.deepcopy.go b/addons/keda/duck/v1alpha1/zz_generated.deepcopy.go
new file mode 100644
index 0000000..b551c7f
--- /dev/null
+++ b/addons/keda/duck/v1alpha1/zz_generated.deepcopy.go
@@ -0,0 +1,235 @@
+// +build !ignore_autogenerated
+
+// Code generated by controller-gen. DO NOT EDIT.
+
+package v1alpha1
+
+import (
+	"k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/runtime"
+)
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *AuthSecretTargetRef) DeepCopyInto(out *AuthSecretTargetRef) {
+	*out = *in
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AuthSecretTargetRef.
+func (in *AuthSecretTargetRef) DeepCopy() *AuthSecretTargetRef {
+	if in == nil {
+		return nil
+	}
+	out := new(AuthSecretTargetRef)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *ScaleTriggers) DeepCopyInto(out *ScaleTriggers) {
+	*out = *in
+	if in.Metadata != nil {
+		in, out := &in.Metadata, &out.Metadata
+		*out = make(map[string]string, len(*in))
+		for key, val := range *in {
+			(*out)[key] = val
+		}
+	}
+	if in.AuthenticationRef != nil {
+		in, out := &in.AuthenticationRef, &out.AuthenticationRef
+		*out = new(ScaledObjectAuthRef)
+		**out = **in
+	}
+	if in.FallbackReplicas != nil {
+		in, out := &in.FallbackReplicas, &out.FallbackReplicas
+		*out = new(int32)
+		**out = **in
+	}
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScaleTriggers.
+func (in *ScaleTriggers) DeepCopy() *ScaleTriggers {
+	if in == nil {
+		return nil
+	}
+	out := new(ScaleTriggers)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *ScaledObject) DeepCopyInto(out *ScaledObject) {
+	*out = *in
+	out.TypeMeta = in.TypeMeta
+	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
+	in.Spec.DeepCopyInto(&out.Spec)
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScaledObject.
+func (in *ScaledObject) DeepCopy() *ScaledObject {
+	if in == nil {
+		return nil
+	}
+	out := new(ScaledObject)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *ScaledObject) DeepCopyObject() runtime.Object {
+	if c := in.DeepCopy(); c != nil {
+		return c
+	}
+	return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *ScaledObjectAuthRef) DeepCopyInto(out *ScaledObjectAuthRef) {
+	*out = *in
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScaledObjectAuthRef.
+func (in *ScaledObjectAuthRef) DeepCopy() *ScaledObjectAuthRef {
+	if in == nil {
+		return nil
+	}
+	out := new(ScaledObjectAuthRef)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *ScaledObjectList) DeepCopyInto(out *ScaledObjectList) {
+	*out = *in
+	out.TypeMeta = in.TypeMeta
+	in.ListMeta.DeepCopyInto(&out.ListMeta)
+	if in.Items != nil {
+		in, out := &in.Items, &out.Items
+		*out = make([]ScaledObject, len(*in))
+		for i := range *in {
+			(*in)[i].DeepCopyInto(&(*out)[i])
+		}
+	}
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScaledObjectList.
+func (in *ScaledObjectList) DeepCopy() *ScaledObjectList {
+	if in == nil {
+		return nil
+	}
+	out := new(ScaledObjectList)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *ScaledObjectList) DeepCopyObject() runtime.Object {
+	if c := in.DeepCopy(); c != nil {
+		return c
+	}
+	return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *ScaledObjectSpec) DeepCopyInto(out *ScaledObjectSpec) {
+	*out = *in
+	if in.ScaleTargetRef != nil {
+		in, out := &in.ScaleTargetRef, &out.ScaleTargetRef
+		*out = new(v1.ObjectReference)
+		**out = **in
+	}
+	if in.Triggers != nil {
+		in, out := &in.Triggers, &out.Triggers
+		*out = make([]ScaleTriggers, len(*in))
+		for i := range *in {
+			(*in)[i].DeepCopyInto(&(*out)[i])
+		}
+	}
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScaledObjectSpec.
+func (in *ScaledObjectSpec) DeepCopy() *ScaledObjectSpec {
+	if in == nil {
+		return nil
+	}
+	out := new(ScaledObjectSpec)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *TriggerAuthentication) DeepCopyInto(out *TriggerAuthentication) {
+	*out = *in
+	out.TypeMeta = in.TypeMeta
+	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
+	in.Spec.DeepCopyInto(&out.Spec)
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TriggerAuthentication.
+func (in *TriggerAuthentication) DeepCopy() *TriggerAuthentication {
+	if in == nil {
+		return nil
+	}
+	out := new(TriggerAuthentication)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *TriggerAuthentication) DeepCopyObject() runtime.Object {
+	if c := in.DeepCopy(); c != nil {
+		return c
+	}
+	return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *TriggerAuthenticationList) DeepCopyInto(out *TriggerAuthenticationList) {
+	*out = *in
+	out.TypeMeta = in.TypeMeta
+	in.ListMeta.DeepCopyInto(&out.ListMeta)
+	if in.Items != nil {
+		in, out := &in.Items, &out.Items
+		*out = make([]TriggerAuthentication, len(*in))
+		for i := range *in {
+			(*in)[i].DeepCopyInto(&(*out)[i])
+		}
+	}
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TriggerAuthenticationList.
+func (in *TriggerAuthenticationList) DeepCopy() *TriggerAuthenticationList {
+	if in == nil {
+		return nil
+	}
+	out := new(TriggerAuthenticationList)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *TriggerAuthenticationList) DeepCopyObject() runtime.Object {
+	if c := in.DeepCopy(); c != nil {
+		return c
+	}
+	return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *TriggerAuthenticationSpec) DeepCopyInto(out *TriggerAuthenticationSpec) {
+	*out = *in
+	if in.SecretTargetRef != nil {
+		in, out := &in.SecretTargetRef, &out.SecretTargetRef
+		*out = make([]AuthSecretTargetRef, len(*in))
+		copy(*out, *in)
+	}
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TriggerAuthenticationSpec.
+func (in *TriggerAuthenticationSpec) DeepCopy() *TriggerAuthenticationSpec {
+	if in == nil {
+		return nil
+	}
+	out := new(TriggerAuthenticationSpec)
+	in.DeepCopyInto(out)
+	return out
+}
diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index c794249..65a8bd4 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -18,7 +18,16 @@ limitations under the License.
 package keda
 
 import (
+	"strings"
+
+	kedav1alpha1 "github.com/apache/camel-k/addons/keda/duck/v1alpha1"
+	camelv1 "github.com/apache/camel-k/pkg/apis/camel/v1"
+	"github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
+	camelv1alpha1 "github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
 	"github.com/apache/camel-k/pkg/trait"
+	scase "github.com/stoewer/go-strcase"
+	v1 "k8s.io/api/core/v1"
+	"sigs.k8s.io/controller-runtime/pkg/client"
 )
 
 // The Keda trait can be used for automatic integration with Keda autoscalers.
@@ -30,7 +39,16 @@ type kedaTrait struct {
 	trait.BaseTrait `property:",squash"`
 	// Enables automatic configuration of the trait.
 	Auto *bool `property:"auto" json:"auto,omitempty"`
-	// Metadata
+	// Convert metadata properties to camelCase (needed because trait properties use kebab-case). Enabled by default.
+	CamelCaseConversion *bool `property:"camel-case-conversion" json:"camelCaseConversion,omitempty"`
+	// Set the spec->replicas field on the top level controller to an explicit value if missing, to allow Keda to recognize it as a scalable resource
+	HackControllerReplicas *bool `property:"hack-controller-replicas" json:"hackControllerReplicas,omitempty"`
+	// Triggers
+	Triggers []kedaTrigger `property:"triggers" json:"triggers,omitempty"`
+}
+
+type kedaTrigger struct {
+	Type     string            `property:"type" json:"type,omitempty"`
 	Metadata map[string]string `property:"metadata" json:"metadata,omitempty"`
 }
 
@@ -42,10 +60,106 @@ func NewKedaTrait() trait.Trait {
 }
 
 func (t *kedaTrait) Configure(e *trait.Environment) (bool, error) {
+	if t.Enabled == nil || !*t.Enabled {
+		return false, nil
+	}
+
+	if !e.IntegrationInPhase(camelv1.IntegrationPhaseInitialization) && !e.IntegrationInRunningPhases() {
+		return false, nil
+	}
 
-	return false, nil
+	return true, nil
 }
 
 func (t *kedaTrait) Apply(e *trait.Environment) error {
+	if e.IntegrationInPhase(camelv1.IntegrationPhaseInitialization) {
+		if t.HackControllerReplicas == nil || *t.HackControllerReplicas {
+			if err := t.hackControllerReplicas(e); err != nil {
+				return err
+			}
+		}
+	} else if e.IntegrationInRunningPhases() {
+		if so, err := t.getScaledObject(e); err != nil {
+			return err
+		} else if so != nil {
+			e.Resources.Add(so)
+		}
+	}
+
 	return nil
 }
+
+func (t *kedaTrait) getScaledObject(e *trait.Environment) (*kedav1alpha1.ScaledObject, error) {
+	if len(t.Triggers) == 0 {
+		return nil, nil
+	}
+	obj := kedav1alpha1.NewScaledObject(e.Integration.Namespace, e.Integration.Name)
+	obj.Spec.ScaleTargetRef = t.getTopControllerReference(e)
+
+	for _, trigger := range t.Triggers {
+		meta := make(map[string]string)
+		for k, v := range trigger.Metadata {
+			kk := k
+			if t.CamelCaseConversion == nil || *t.CamelCaseConversion {
+				kk = scase.LowerCamelCase(k)
+			}
+			meta[kk] = v
+		}
+		st := kedav1alpha1.ScaleTriggers{
+			Type:     trigger.Type,
+			Metadata: meta,
+		}
+		obj.Spec.Triggers = append(obj.Spec.Triggers, st)
+	}
+
+	return &obj, nil
+}
+
+func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
+	ctrlRef := t.getTopControllerReference(e)
+
+	if ctrlRef.Kind == camelv1alpha1.KameletBindingKind {
+		// Update the KameletBinding directly (do not add it to env resources, it's the integration parent)
+		key := client.ObjectKey{
+			Namespace: e.Integration.Namespace,
+			Name:      ctrlRef.Name,
+		}
+		klb := camelv1alpha1.KameletBinding{}
+		if err := e.Client.Get(e.Ctx, key, &klb); err != nil {
+			return err
+		}
+		if klb.Spec.Replicas == nil {
+			one := int32(1)
+			klb.Spec.Replicas = &one
+			if err := e.Client.Update(e.Ctx, &klb); err != nil {
+				return err
+			}
+		}
+	} else {
+		if e.Integration.Spec.Replicas == nil {
+			one := int32(1)
+			e.Integration.Spec.Replicas = &one
+			if err := e.Client.Update(e.Ctx, e.Integration); err != nil {
+				return err
+			}
+		}
+	}
+	return nil
+}
+
+func (t *kedaTrait) getTopControllerReference(e *trait.Environment) *v1.ObjectReference {
+	for _, o := range e.Integration.OwnerReferences {
+		if o.Kind == v1alpha1.KameletBindingKind && strings.HasPrefix(o.APIVersion, v1alpha1.SchemeGroupVersion.Group) {
+			return &v1.ObjectReference{
+				APIVersion: o.APIVersion,
+				Kind:       o.Kind,
+				Name:       o.Name,
+			}
+		}
+	}
+	return &v1.ObjectReference{
+		APIVersion: e.Integration.APIVersion,
+		Kind:       e.Integration.Kind,
+		Name:       e.Integration.Name,
+	}
+}
diff --git a/docs/modules/ROOT/nav.adoc b/docs/modules/ROOT/nav.adoc
index 593ab84..890e733 100644
--- a/docs/modules/ROOT/nav.adoc
+++ b/docs/modules/ROOT/nav.adoc
@@ -63,6 +63,7 @@
 ** xref:traits:jolokia.adoc[Jolokia]
 ** xref:traits:jvm.adoc[Jvm]
 ** xref:traits:kamelets.adoc[Kamelets]
+** xref:traits:keda.adoc[Keda]
 ** xref:traits:knative-service.adoc[Knative Service]
 ** xref:traits:knative.adoc[Knative]
 ** xref:traits:logging.adoc[Logging]
diff --git a/docs/modules/traits/pages/keda.adoc b/docs/modules/traits/pages/keda.adoc
new file mode 100644
index 0000000..a73dabd
--- /dev/null
+++ b/docs/modules/traits/pages/keda.adoc
@@ -0,0 +1,44 @@
+= Keda Trait
+
+// Start of autogenerated code - DO NOT EDIT! (description)
+The Keda trait can be used for automatic integration with Keda autoscalers.
+
+The Keda trait is disabled by default.
+
+
+This trait is available in the following profiles: **Kubernetes, Knative, OpenShift**.
+
+// End of autogenerated code - DO NOT EDIT! (description)
+// Start of autogenerated code - DO NOT EDIT! (configuration)
+== Configuration
+
+Trait properties can be specified when running any integration with the CLI:
+[source,console]
+----
+$ kamel run --trait keda.[key]=[value] --trait keda.[key2]=[value2] integration.groovy
+----
+The following configuration options are available:
+
+[cols="2m,1m,5a"]
+|===
+|Property | Type | Description
+
+| keda.enabled
+| bool
+| Can be used to enable or disable a trait. All traits share this common property.
+
+| keda.auto
+| bool
+| Enables automatic configuration of the trait.
+
+| keda.camel-case-conversion
+| bool
+| Convert metadata properties to camelCase (needed because trait properties use kebab-case). Enabled by default.
+
+| keda.triggers
+| []github.com/apache/camel-k/addons/keda.kedaTrigger
+| Triggers
+
+|===
+
+// End of autogenerated code - DO NOT EDIT! (configuration)
diff --git a/go.sum b/go.sum
index a7e63c9..9aaf745 100644
--- a/go.sum
+++ b/go.sum
@@ -1803,6 +1803,7 @@ k8s.io/code-generator v0.18.2/go.mod h1:+UHX5rSbxmR8kzS+FAv7um6dtYrZokQvjHpDSYRV
 k8s.io/code-generator v0.18.6/go.mod h1:TgNEVx9hCyPGpdtCWA34olQYLkh3ok9ar7XfSsr8b6c=
 k8s.io/code-generator v0.19.2/go.mod h1:moqLn7w0t9cMs4+5CQyxnfA/HV8MF6aAVENF+WZZhgk=
 k8s.io/code-generator v0.21.1/go.mod h1:hUlps5+9QaTrKx+jiM4rmq7YmH8wPOIko64uZCHDh6Q=
+k8s.io/code-generator v0.21.4 h1:vO8jVuEGV4UF+/2s/88Qg05MokE/1QUFi/Q2YDgz++A=
 k8s.io/code-generator v0.21.4/go.mod h1:K3y0Bv9Cz2cOW2vXUrNZlFbflhuPvuadW6JdnN6gGKo=
 k8s.io/component-base v0.18.2/go.mod h1:kqLlMuhJNHQ9lz8Z7V5bxUUtjFZnrypArGl58gmDfUM=
 k8s.io/component-base v0.18.6/go.mod h1:knSVsibPR5K6EW2XOjEHik6sdU5nCvKMrzMt2D4In14=
diff --git a/pkg/cmd/run.go b/pkg/cmd/run.go
index 5dfca09..1819405 100644
--- a/pkg/cmd/run.go
+++ b/pkg/cmd/run.go
@@ -292,8 +292,11 @@ func (o *runCmdOptions) run(cmd *cobra.Command, args []string) error {
 	tp := catalog.ComputeTraitsProperties()
 	for _, t := range o.Traits {
 		kv := strings.SplitN(t, "=", 2)
-
-		if !util.StringSliceExists(tp, kv[0]) {
+		prefix := kv[0]
+		if strings.Contains(prefix, "[") {
+			prefix = prefix[0:strings.Index(prefix, "[")]
+		}
+		if !util.StringSliceExists(tp, prefix) {
 			fmt.Printf("Error: %s is not a valid trait property\n", t)
 			return nil
 		}
diff --git a/pkg/resources/resources.go b/pkg/resources/resources.go
index b798a6a..a979664 100644
--- a/pkg/resources/resources.go
+++ b/pkg/resources/resources.go
@@ -541,9 +541,9 @@ var assets = func() http.FileSystem {
 		"/traits.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "traits.yaml",
 			modTime:          time.Time{},
-			uncompressedSize: 46896,
+			uncompressedSize: 47743,
 
-			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x7d\xfd\x73\x1c\xb7\xb1\xe0\xef\xfe\x2b\x50\x7c\x57\x25\x92\xb5\xbb\x94\x9d\x97\x3c\x1f\xef\x74\x29\x5a\x92\x13\xda\xfa\xe0\x49\xb2\x73\x29\x9f\x2b\x8b\x9d\xe9\xdd\x85\x38\x0b\x4c\x00\x0c\xa9\xcd\xbd\xfb\xdf\xaf\xd0\xdd\xf8\x98\xd9\x5d\x72\x29\x91\x7e\xe1\xd5\x4b\x7e\xb0\x48\x0e\x80\x46\xa3\xd1\xdf\xdd\xf0\x56\x2a\xef\x4e\xbf\x1a\x0b\x2d\x57\x70\x2a\xe4\x7c\xae\xb4\xf2\xeb\xaf\x84\x68\x1b\xe9\xe7\xc6\xae\x4e\xc5\x [...]
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x7d\xfd\x73\x1c\xb7\xb1\xe0\xef\xfe\x2b\x50\x7c\x57\x25\x92\xb5\xbb\x94\x9d\x97\xc4\xc7\x3b\x5d\x8a\x96\xe4\x98\xb6\x3e\x78\x92\xec\x5c\x4a\xe7\xca\x62\x67\x7a\x77\x21\x62\x80\x09\x80\x21\xb5\xb9\x77\xff\xfb\x2b\x74\xe3\x6b\x66\x77\xc9\xa1\x24\xfa\x85\x55\x79\xa9\x7a\x16\xc9\x01\xd0\xdd\x68\x34\xfa\x1b\xce\x70\xe1\xec\xe9\x57\x53\xa6\x78\x03\xa7\x8c\x2f\x97\x42\x09\xb7\xf9\x8a\xb1\x56\x72\xb7\xd4\xa6\x39\x65\x4b\x [...]
 		},
 	}
 	fs["/"].(*vfsgen۰DirInfo).entries = []os.FileInfo{
diff --git a/resources/traits.yaml b/resources/traits.yaml
index 2b66ec4..7bd54bf 100755
--- a/resources/traits.yaml
+++ b/resources/traits.yaml
@@ -570,6 +570,29 @@ traits:
   - name: list
     type: string
     description: Comma separated list of Kamelet names to load into the current integration
+- name: keda
+  platform: false
+  profiles:
+  - Kubernetes
+  - Knative
+  - OpenShift
+  description: The Keda trait can be used for automatic integration with Keda autoscalers.
+    The Keda trait is disabled by default.
+  properties:
+  - name: enabled
+    type: bool
+    description: Can be used to enable or disable a trait. All traits share this common
+      property.
+  - name: auto
+    type: bool
+    description: Enables automatic configuration of the trait.
+  - name: camel-case-conversion
+    type: bool
+    description: Convert metadata properties to camelCase (needed because trait properties
+      use kebab-case). Enabled by default.
+  - name: triggers
+    type: '[]github.com/apache/camel-k/addons/keda.kedaTrigger'
+    description: Triggers
 - name: knative-service
   platform: false
   profiles:
diff --git a/script/Makefile b/script/Makefile
index be57f3b..af03077 100644
--- a/script/Makefile
+++ b/script/Makefile
@@ -155,7 +155,7 @@ codegen:
 
 	gofmt -w pkg/util/defaults/defaults.go
 
-generate: generate-deepcopy generate-crd generate-client generate-doc generate-json-schema generate-strimzi
+generate: generate-deepcopy generate-crd generate-client generate-doc generate-json-schema generate-keda generate-strimzi
 
 generate-client:
 	./script/gen_client.sh
@@ -173,6 +173,10 @@ generate-json-schema:
 	# Skip since the YAML DSL schema has been moved to apache/camel
 	#./script/gen_json_schema.sh $(RUNTIME_VERSION) $(STAGING_RUNTIME_REPO)
 
+generate-keda:
+	cd addons/keda/duck && $(CONTROLLER_GEN) paths="./..." object
+	./script/gen_client_keda.sh
+
 generate-strimzi:
 	cd addons/strimzi/duck && $(CONTROLLER_GEN) paths="./..." object
 	./script/gen_client_strimzi.sh
@@ -359,7 +363,7 @@ install-minikube:
 get-staging-repo:
 	@echo $(or ${STAGING_RUNTIME_REPO},https://repository.apache.org/content/repositories/snapshots@id=apache-snapshots@snapshots)
 
-.PHONY: build build-kamel build-resources dep codegen images images-dev images-push images-push-staging test check test-integration clean release cross-compile package-examples set-version git-tag release-notes check-licenses generate-deepcopy generate-client generate-doc build-resources release-helm release-staging release-nightly get-staging-repo get-version build-submodules set-module-version bundle-kamelets generate-strimzi
+.PHONY: build build-kamel build-resources dep codegen images images-dev images-push images-push-staging test check test-integration clean release cross-compile package-examples set-version git-tag release-notes check-licenses generate-deepcopy generate-client generate-doc build-resources release-helm release-staging release-nightly get-staging-repo get-version build-submodules set-module-version bundle-kamelets generate-keda generate-strimzi
 
 # find or download controller-gen if necessary
 controller-gen:
diff --git a/script/gen_doc.sh b/script/gen_client_keda.sh
similarity index 70%
copy from script/gen_doc.sh
copy to script/gen_client_keda.sh
index d4d6aab..e5dd2ca 100755
--- a/script/gen_doc.sh
+++ b/script/gen_client_keda.sh
@@ -15,14 +15,18 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+set -e
+
 location=$(dirname $0)
 rootdir=$location/..
 
-echo "Generating API documentation..."
-$location/gen_crd/gen_crd_api.sh
-echo "Generating API documentation... done!"
+unset GOPATH
+GO111MODULE=on
+
+echo "Generating boilerplate code for Keda addon..."
 
-echo "Generating traits documentation..."
 cd $rootdir
-go run ./cmd/util/doc-gen --input-dirs ./pkg/trait --input-dirs ./addons/master --input-dirs ./addons/threescale --input-dirs ./addons/tracing
-echo "Generating traits documentation... done!"
+
+go run k8s.io/code-generator/cmd/deepcopy-gen \
+  -h ./script/headers/default.txt \
+  --input-dirs=github.com/apache/camel-k/addons/keda
diff --git a/script/gen_doc.sh b/script/gen_doc.sh
index d4d6aab..028ec44 100755
--- a/script/gen_doc.sh
+++ b/script/gen_doc.sh
@@ -24,5 +24,5 @@ echo "Generating API documentation... done!"
 
 echo "Generating traits documentation..."
 cd $rootdir
-go run ./cmd/util/doc-gen --input-dirs ./pkg/trait --input-dirs ./addons/master --input-dirs ./addons/threescale --input-dirs ./addons/tracing
+go run ./cmd/util/doc-gen --input-dirs github.com/apache/camel-k/pkg/trait --input-dirs github.com/apache/camel-k/addons/keda --input-dirs github.com/apache/camel-k/addons/master --input-dirs github.com/apache/camel-k/addons/threescale --input-dirs github.com/apache/camel-k/addons/tracing
 echo "Generating traits documentation... done!"

[camel-k] 12/22: Fix #1107: added roles and regen

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit e5354e53e48b853718848daf382f1da898e94b60
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Mon Dec 20 17:09:50 2021 +0100

    Fix #1107: added roles and regen
---
 addons/keda/keda.go                                |  1 -
 config/rbac/kustomization.yaml                     |  2 +
 ...zation.yaml => operator-role-binding-keda.yaml} | 36 +++++++-----------
 ...{kustomization.yaml => operator-role-keda.yaml} | 44 +++++++++++-----------
 docs/modules/traits/pages/keda.adoc                |  2 +
 pkg/install/operator.go                            | 14 +++++++
 pkg/resources/resources.go                         | 28 +++++++++++---
 resources/traits.yaml                              |  4 +-
 8 files changed, 77 insertions(+), 54 deletions(-)

diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index 3a54896..c446ea3 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -123,7 +123,6 @@ func (t *kedaTrait) Configure(e *trait.Environment) (bool, error) {
 
 	if t.Auto == nil || *t.Auto {
 		if err := t.populateTriggersFromKamelets(e); err != nil {
-			// TODO: set condition
 			return false, err
 		}
 	}
diff --git a/config/rbac/kustomization.yaml b/config/rbac/kustomization.yaml
index 40d4d39..7f03ac1 100644
--- a/config/rbac/kustomization.yaml
+++ b/config/rbac/kustomization.yaml
@@ -26,10 +26,12 @@ resources:
 - operator-role-events.yaml
 - operator-role-knative.yaml
 - operator-role.yaml
+- operator-role-keda.yaml
 - operator-role-leases.yaml
 - operator-role-podmonitors.yaml
 - operator-role-strimzi.yaml
 - operator-role-binding-events.yaml
+- operator-role-binding-keda.yaml
 - operator-role-binding-knative.yaml
 - operator-role-binding-leases.yaml
 - operator-role-binding-podmonitors.yaml
diff --git a/config/rbac/kustomization.yaml b/config/rbac/operator-role-binding-keda.yaml
similarity index 58%
copy from config/rbac/kustomization.yaml
copy to config/rbac/operator-role-binding-keda.yaml
index 40d4d39..fd8c602 100644
--- a/config/rbac/kustomization.yaml
+++ b/config/rbac/operator-role-binding-keda.yaml
@@ -15,26 +15,16 @@
 # limitations under the License.
 # ---------------------------------------------------------------------------
 
-#
-# rbac resources applicable for all kubernetes platforms
-#
-apiVersion: kustomize.config.k8s.io/v1beta1
-kind: Kustomization
-
-resources:
-- user-cluster-role.yaml
-- operator-role-events.yaml
-- operator-role-knative.yaml
-- operator-role.yaml
-- operator-role-leases.yaml
-- operator-role-podmonitors.yaml
-- operator-role-strimzi.yaml
-- operator-role-binding-events.yaml
-- operator-role-binding-knative.yaml
-- operator-role-binding-leases.yaml
-- operator-role-binding-podmonitors.yaml
-- operator-role-binding-strimzi.yaml
-- operator-role-binding.yaml
-- operator-cluster-role-custom-resource-definitions.yaml
-- operator-cluster-role-binding-custom-resource-definitions.yaml
-
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: camel-k-operator-keda
+  labels:
+    app: "camel-k"
+subjects:
+- kind: ServiceAccount
+  name: camel-k-operator
+roleRef:
+  kind: Role
+  name: camel-k-operator-keda
+  apiGroup: rbac.authorization.k8s.io
diff --git a/config/rbac/kustomization.yaml b/config/rbac/operator-role-keda.yaml
similarity index 60%
copy from config/rbac/kustomization.yaml
copy to config/rbac/operator-role-keda.yaml
index 40d4d39..22c026c 100644
--- a/config/rbac/kustomization.yaml
+++ b/config/rbac/operator-role-keda.yaml
@@ -15,26 +15,24 @@
 # limitations under the License.
 # ---------------------------------------------------------------------------
 
-#
-# rbac resources applicable for all kubernetes platforms
-#
-apiVersion: kustomize.config.k8s.io/v1beta1
-kind: Kustomization
-
-resources:
-- user-cluster-role.yaml
-- operator-role-events.yaml
-- operator-role-knative.yaml
-- operator-role.yaml
-- operator-role-leases.yaml
-- operator-role-podmonitors.yaml
-- operator-role-strimzi.yaml
-- operator-role-binding-events.yaml
-- operator-role-binding-knative.yaml
-- operator-role-binding-leases.yaml
-- operator-role-binding-podmonitors.yaml
-- operator-role-binding-strimzi.yaml
-- operator-role-binding.yaml
-- operator-cluster-role-custom-resource-definitions.yaml
-- operator-cluster-role-binding-custom-resource-definitions.yaml
-
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: camel-k-operator-keda
+  labels:
+    app: "camel-k"
+rules:
+- apiGroups:
+  - "keda.sh"
+  resources:
+  - scaledobjects
+  - triggerauthentications
+  verbs:
+  - create
+  - delete
+  - deletecollection
+  - get
+  - list
+  - patch
+  - update
+  - watch
diff --git a/docs/modules/traits/pages/keda.adoc b/docs/modules/traits/pages/keda.adoc
index 6f0fcac..1d5bbcb 100644
--- a/docs/modules/traits/pages/keda.adoc
+++ b/docs/modules/traits/pages/keda.adoc
@@ -70,6 +70,8 @@ The following configuration options are available:
 | []github.com/apache/camel-k/addons/keda.kedaTrigger
 | Definition of triggers according to the KEDA format. Each trigger must contain `type` field corresponding
 to the name of a KEDA autoscaler and a key/value map named `metadata` containing specific trigger options.
+An optional `authentication-secret` can be declared per trigger and the operator will link each entry of
+the secret to a KEDA authentication parameter.
 
 |===
 
diff --git a/pkg/install/operator.go b/pkg/install/operator.go
index 9492ab1..33b8371 100644
--- a/pkg/install/operator.go
+++ b/pkg/install/operator.go
@@ -256,6 +256,13 @@ func OperatorOrCollect(ctx context.Context, c client.Client, cfg OperatorConfigu
 		fmt.Println("Warning: the operator will not be able to publish Kubernetes events. Try installing as cluster-admin to allow it to generate events.")
 	}
 
+	if errmtr := installKedaBindings(ctx, c, cfg.Namespace, customizer, collection, force); errmtr != nil {
+		if k8serrors.IsAlreadyExists(errmtr) {
+			return errmtr
+		}
+		fmt.Println("Warning: the operator will not be able to create KEDA resources. Try installing as cluster-admin.")
+	}
+
 	if errmtr := installPodMonitors(ctx, c, cfg.Namespace, customizer, collection, force); errmtr != nil {
 		if k8serrors.IsAlreadyExists(errmtr) {
 			return errmtr
@@ -393,6 +400,13 @@ func installOperator(ctx context.Context, c client.Client, namespace string, cus
 	)
 }
 
+func installKedaBindings(ctx context.Context, c client.Client, namespace string, customizer ResourceCustomizer, collection *kubernetes.Collection, force bool) error {
+	return ResourcesOrCollect(ctx, c, namespace, collection, force, customizer,
+		"/rbac/operator-role-keda.yaml",
+		"/rbac/operator-role-binding-keda.yaml",
+	)
+}
+
 func installKnative(ctx context.Context, c client.Client, namespace string, customizer ResourceCustomizer, collection *kubernetes.Collection, force bool) error {
 	return ResourcesOrCollect(ctx, c, namespace, collection, force, customizer,
 		"/rbac/operator-role-knative.yaml",
diff --git a/pkg/resources/resources.go b/pkg/resources/resources.go
index a979664..c753ccb 100644
--- a/pkg/resources/resources.go
+++ b/pkg/resources/resources.go
@@ -152,16 +152,16 @@ var assets = func() http.FileSystem {
 		"/crd/bases/camel.apache.org_kameletbindings.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "camel.apache.org_kameletbindings.yaml",
 			modTime:          time.Time{},
-			uncompressedSize: 431973,
+			uncompressedSize: 432125,
 
-			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\xbd\xfb\x73\x1b\x37\x96\x30\xfa\x7b\xfe\x0a\x94\x9c\xfa\x24\x6d\x44\xca\xce\xcc\xce\xdd\xf1\x9d\xfa\x52\x1a\x59\xce\xe8\xc6\x96\x59\x96\xe2\x7c\x29\x27\x9b\x05\xbb\x41\x12\xab\x6e\xa0\x17\x40\x53\xe2\x5e\xdf\xff\xfd\x16\x0e\x80\x7e\xf0\x25\x9c\xa6\xa8\x28\x3b\x8d\xa9\x9a\x98\x22\xfb\x34\x5e\xe7\xfd\x7a\x41\x06\x8f\x37\xbe\x7a\x41\xde\xf1\x84\x09\xcd\x52\x62\x24\x31\x33\x46\xce\x0a\x9a\xcc\x18\xb9\x96\x13\x73\x47\x [...]
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\xbd\xfb\x73\x1b\x37\x96\x30\xfa\x7b\xfe\x0a\x94\x9c\xfa\x24\x6d\x44\xca\xce\xcc\xce\xdd\xf1\x9d\xfa\x52\x1a\x59\xce\xe8\xc6\x96\x59\x96\xe2\x7c\x29\x27\x9b\x05\xbb\x41\x12\xab\x6e\xa0\x17\x40\x53\xe2\x5e\xdf\xff\xfd\x16\x0e\x80\x7e\xf0\x25\x9c\xa6\xa8\x28\x3b\x8d\xa9\x9a\x98\x22\xfb\x34\x5e\xe7\xfd\x7a\x41\x06\x8f\x37\xbe\x7a\x41\xde\xf1\x84\x09\xcd\x52\x62\x24\x31\x33\x46\xce\x0a\x9a\xcc\x18\xb9\x96\x13\x73\x47\x [...]
 		},
 		"/crd/bases/camel.apache.org_kamelets.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "camel.apache.org_kamelets.yaml",
 			modTime:          time.Time{},
-			uncompressedSize: 24256,
+			uncompressedSize: 24280,
 
-			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x5c\x7d\x4f\xe3\x3a\xba\xff\xbf\x9f\xe2\x11\x1c\x69\x18\x89\x94\x96\xc2\x9c\x99\xde\x3f\x10\x07\x86\xbd\xbd\x87\x03\x88\xc2\xae\xce\x85\x59\xc9\x4d\x9e\xb6\x5e\x12\x3b\x6b\x3b\x14\xf6\xc0\x77\xbf\xb2\x9d\xa4\xe9\x4b\x12\xb7\x14\xf6\xe8\x6a\x2d\x8d\xa6\x49\xec\x9f\x9f\x37\x3f\x7e\xc9\x8f\x6c\x83\xb7\xb9\xd2\xd8\x86\x73\xea\x23\x93\x18\x80\xe2\xa0\xc6\x08\xc7\x31\xf1\xc7\x08\x7d\x3e\x54\x13\x22\x10\xce\x78\xc2\x02\x [...]
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x5c\x7d\x53\xe3\x38\x9a\xff\x3f\x9f\xe2\x29\x98\xaa\xa6\xab\x70\x48\x08\x30\xdd\xb9\x3f\x28\x06\x9a\xbd\xdc\xd0\x40\x11\xd8\xbd\x39\xe8\xad\x52\xec\x27\x89\x16\x5b\xf2\x4a\x32\x2f\x3b\xf0\xdd\xaf\x24\xd9\x8e\xf3\x62\x5b\x09\x81\xed\xba\x3a\x55\x4d\x0d\x76\xa4\x9f\x9e\x37\x3d\x7a\xfb\xb5\x37\xc1\x5b\x5f\x69\x6c\xc2\x19\xf5\x91\x49\x0c\x40\x71\x50\x63\x84\xa3\x98\xf8\x63\x84\x3e\x1f\xaa\x47\x22\x10\x4e\x79\xc2\x02\x [...]
 		},
 		"/manager": &vfsgen۰DirInfo{
 			name:    "manager",
@@ -298,6 +298,13 @@ var assets = func() http.FileSystem {
 
 			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xac\x93\x41\x6f\xfa\x46\x10\xc5\xef\xfb\x29\x9e\xf0\xe5\x1f\x09\x4c\xdb\x53\x45\x4f\x4e\x02\xad\xd5\x08\x24\x4c\x1a\xe5\xb8\xac\x07\x7b\x8a\xbd\xe3\xee\xae\x71\xe8\xa7\xaf\xd6\x40\x93\xa8\x6a\xd5\x43\xf6\x86\x18\xbf\xf9\xbd\x7d\x6f\x13\xcc\xbe\xee\xa8\x04\x4f\x6c\xc8\x7a\x2a\x11\x04\xa1\x26\x64\x9d\x36\x35\xa1\x90\x43\x18\xb4\x23\xac\xa4\xb7\xa5\x0e\x2c\x16\xdf\xb2\x62\x75\x87\xde\x96\xe4\x20\x96\x20\x0e\xad\x38\x52\x [...]
 		},
+		"/rbac/operator-role-binding-keda.yaml": &vfsgen۰CompressedFileInfo{
+			name:             "operator-role-binding-keda.yaml",
+			modTime:          time.Time{},
+			uncompressedSize: 1215,
+
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xac\x93\x41\x8f\xdb\x36\x10\x85\xef\xfc\x15\x0f\xd6\x25\x01\xd6\x72\xdb\x53\xe1\x9e\x94\xcd\xba\x15\x1a\xd8\x80\xe5\x34\xc8\x71\x4c\x8d\xa5\xa9\x25\x8e\x4a\x52\xab\xb8\xbf\xbe\xa0\x6c\x77\x37\x28\xda\x5e\xc2\x9b\xa0\xd1\x9b\xef\xf1\x3d\x65\x58\x7e\xbb\x63\x32\x7c\x10\xcb\x2e\x70\x8d\xa8\x88\x2d\xa3\x18\xc8\xb6\x8c\x4a\x4f\x71\x22\xcf\xd8\xe8\xe8\x6a\x8a\xa2\x0e\x6f\x8a\x6a\xf3\x16\xa3\xab\xd9\x43\x1d\x43\x3d\x7a\xf5\x [...]
+		},
 		"/rbac/operator-role-binding-knative.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "operator-role-binding-knative.yaml",
 			modTime:          time.Time{},
@@ -340,6 +347,13 @@ var assets = func() http.FileSystem {
 
 			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xac\x53\xc1\x8e\xdb\x36\x10\xbd\xf3\x2b\x1e\xac\x4b\x02\xac\xe5\xb6\xa7\xc2\x3d\xb9\x9b\xdd\x56\x68\x60\x03\x2b\xa7\x41\x8e\x63\x69\x2c\x0d\x56\xe2\xa8\x43\x6a\x15\xf7\xeb\x0b\xca\x72\xb2\x41\xaf\xcb\x8b\x69\xf2\xe9\xcd\x7b\xf3\x86\x19\xd6\x6f\xb7\x5c\x86\x8f\x52\xb1\x0f\x5c\x23\x2a\x62\xcb\xd8\x0d\x54\xb5\x8c\x52\xcf\x71\x22\x63\x3c\xea\xe8\x6b\x8a\xa2\x1e\xef\x76\xe5\xe3\x7b\x8c\xbe\x66\x83\x7a\x86\x1a\x7a\x35\x76\x [...]
 		},
+		"/rbac/operator-role-keda.yaml": &vfsgen۰CompressedFileInfo{
+			name:             "operator-role-keda.yaml",
+			modTime:          time.Time{},
+			uncompressedSize: 1252,
+
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xac\x53\xc1\x8e\xdb\x36\x10\xbd\xf3\x2b\x1e\xac\x4b\x02\xac\xe5\xb6\xa7\xc2\x3d\xb9\x9b\xdd\xd6\x68\x60\x03\x2b\xa7\x41\x8e\x63\x6a\x2c\x4d\x4d\x91\xea\x90\x5a\x65\xfb\xf5\x05\x69\xbb\xd9\x45\xaf\xe1\xc5\x63\x72\xe6\xcd\x7b\xf3\x46\x15\x96\xdf\xef\x98\x0a\x1f\xc5\xb2\x8f\xdc\x22\x05\xa4\x9e\xb1\x19\xc9\xf6\x8c\x26\x9c\xd2\x4c\xca\x78\x0c\x93\x6f\x29\x49\xf0\x78\xb7\x69\x1e\xdf\x63\xf2\x2d\x2b\x82\x67\x04\xc5\x10\x94\x [...]
+		},
 		"/rbac/operator-role-knative.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "operator-role-knative.yaml",
 			modTime:          time.Time{},
@@ -541,9 +555,9 @@ var assets = func() http.FileSystem {
 		"/traits.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "traits.yaml",
 			modTime:          time.Time{},
-			uncompressedSize: 47743,
+			uncompressedSize: 49398,
 
-			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x7d\xfd\x73\x1c\xb7\xb1\xe0\xef\xfe\x2b\x50\x7c\x57\x25\x92\xb5\xbb\x94\x9d\x97\xc4\xc7\x3b\x5d\x8a\x96\xe4\x98\xb6\x3e\x78\x92\xec\x5c\x4a\xe7\xca\x62\x67\x7a\x77\x21\x62\x80\x09\x80\x21\xb5\xb9\x77\xff\xfb\x2b\x74\xe3\x6b\x66\x77\xc9\xa1\x24\xfa\x85\x55\x79\xa9\x7a\x16\xc9\x01\xd0\xdd\x68\x34\xfa\x1b\xce\x70\xe1\xec\xe9\x57\x53\xa6\x78\x03\xa7\x8c\x2f\x97\x42\x09\xb7\xf9\x8a\xb1\x56\x72\xb7\xd4\xa6\x39\x65\x4b\x [...]
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x7d\xfd\x73\x5b\xb9\x91\xe0\xef\xf3\x57\xa0\xb4\x57\x65\x49\x45\x52\x9e\xc9\x26\x3b\xa7\xbb\xd9\x94\xc6\x76\x12\xcd\xf8\x43\x67\x3b\xb3\x97\x9a\x9b\x0a\xc1\xf7\x9a\x24\xcc\x47\xe0\x05\xc0\x93\xcc\xdc\xde\xff\x7e\x85\xee\xc6\xc7\x7b\x24\x25\xca\xb6\x66\xa3\xad\xdd\x54\xed\x58\xd2\x03\xd0\x68\x34\xfa\xbb\x1b\xde\x4a\xe5\xdd\xf9\x57\x63\xa1\xe5\x1a\xce\x85\x9c\xcf\x95\x56\x7e\xf3\x95\x10\x6d\x23\xfd\xdc\xd8\xf5\xb9\x [...]
 		},
 	}
 	fs["/"].(*vfsgen۰DirInfo).entries = []os.FileInfo{
@@ -605,12 +619,14 @@ var assets = func() http.FileSystem {
 		fs["/rbac/operator-cluster-role-binding-custom-resource-definitions.yaml"].(os.FileInfo),
 		fs["/rbac/operator-cluster-role-custom-resource-definitions.yaml"].(os.FileInfo),
 		fs["/rbac/operator-role-binding-events.yaml"].(os.FileInfo),
+		fs["/rbac/operator-role-binding-keda.yaml"].(os.FileInfo),
 		fs["/rbac/operator-role-binding-knative.yaml"].(os.FileInfo),
 		fs["/rbac/operator-role-binding-leases.yaml"].(os.FileInfo),
 		fs["/rbac/operator-role-binding-podmonitors.yaml"].(os.FileInfo),
 		fs["/rbac/operator-role-binding-strimzi.yaml"].(os.FileInfo),
 		fs["/rbac/operator-role-binding.yaml"].(os.FileInfo),
 		fs["/rbac/operator-role-events.yaml"].(os.FileInfo),
+		fs["/rbac/operator-role-keda.yaml"].(os.FileInfo),
 		fs["/rbac/operator-role-knative.yaml"].(os.FileInfo),
 		fs["/rbac/operator-role-leases.yaml"].(os.FileInfo),
 		fs["/rbac/operator-role-podmonitors.yaml"].(os.FileInfo),
diff --git a/resources/traits.yaml b/resources/traits.yaml
index a6c05b8..8eac6f4 100755
--- a/resources/traits.yaml
+++ b/resources/traits.yaml
@@ -622,7 +622,9 @@ traits:
     type: '[]github.com/apache/camel-k/addons/keda.kedaTrigger'
     description: Definition of triggers according to the KEDA format. Each trigger
       must contain `type` field corresponding to the name of a KEDA autoscaler and
-      a key/value map named `metadata` containing specific trigger options.
+      a key/value map named `metadata` containing specific trigger options. An optional
+      `authentication-secret` can be declared per trigger and the operator will link
+      each entry of the secret to a KEDA authentication parameter.
 - name: knative-service
   platform: false
   profiles:
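
The `authentication-secret` behaviour documented above can be pictured with a short sketch: every entry of the secret referenced by a trigger is exposed as a KEDA authentication parameter, with keys sorted so that the generated resource is deterministic (the same ordering asserted by the addon's unit tests added later in this series). The types below are local stand-ins, not the real KEDA duck API.

[source,go]
----
package main

import (
	"fmt"
	"sort"
)

// secretTargetRef is a local stand-in for the reference that a KEDA
// TriggerAuthentication uses to read one secret key as one parameter.
type secretTargetRef struct {
	Name      string // name of the Kubernetes Secret
	Key       string // key inside the Secret
	Parameter string // KEDA authentication parameter exposed to the scaler
}

// linkSecret maps each entry of the trigger's authentication secret to a
// KEDA authentication parameter, sorting keys for deterministic output.
func linkSecret(secretName string, data map[string][]byte) []secretTargetRef {
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	refs := make([]secretTargetRef, 0, len(keys))
	for _, k := range keys {
		refs = append(refs, secretTargetRef{Name: secretName, Key: k, Parameter: k})
	}
	return refs
}

func main() {
	for _, r := range linkSecret("my-secret", map[string][]byte{
		"bbb": []byte("val1"),
		"aaa": []byte("val2"),
	}) {
		fmt.Printf("secret=%s key=%s -> parameter=%s\n", r.Name, r.Key, r.Parameter)
	}
}
----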

[camel-k] 09/22: Fix #1107: add documentation

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit a0644223291e463c646b466f691a75c29eef43f4
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Mon Dec 20 11:50:39 2021 +0100

    Fix #1107: add documentation
---
 addons/keda/duck/v1alpha1/zz_generated.deepcopy.go |  25 +++++
 addons/keda/keda.go                                |  24 +++--
 .../bases/camel.apache.org_kameletbindings.yaml    |  15 +--
 config/crd/bases/camel.apache.org_kamelets.yaml    |   8 +-
 docs/modules/ROOT/nav.adoc                         |   2 +-
 docs/modules/ROOT/pages/kamelets/kamelets-dev.adoc | 119 +++++++++++++++++++++
 .../modules/ROOT/pages/kamelets/kamelets-user.adoc |  39 +++++++
 docs/modules/ROOT/partials/apis/crds-html.adoc     |   2 +-
 docs/modules/traits/pages/keda.adoc                |  42 +++++++-
 helm/camel-k/crds/crd-kamelet-binding.yaml         |  15 +--
 helm/camel-k/crds/crd-kamelet.yaml                 |   8 +-
 resources/traits.yaml                              |  42 ++++++--
 12 files changed, 300 insertions(+), 41 deletions(-)

diff --git a/addons/keda/duck/v1alpha1/zz_generated.deepcopy.go b/addons/keda/duck/v1alpha1/zz_generated.deepcopy.go
index b551c7f..9762e39 100644
--- a/addons/keda/duck/v1alpha1/zz_generated.deepcopy.go
+++ b/addons/keda/duck/v1alpha1/zz_generated.deepcopy.go
@@ -137,6 +137,31 @@ func (in *ScaledObjectSpec) DeepCopyInto(out *ScaledObjectSpec) {
 		*out = new(v1.ObjectReference)
 		**out = **in
 	}
+	if in.PollingInterval != nil {
+		in, out := &in.PollingInterval, &out.PollingInterval
+		*out = new(int32)
+		**out = **in
+	}
+	if in.CooldownPeriod != nil {
+		in, out := &in.CooldownPeriod, &out.CooldownPeriod
+		*out = new(int32)
+		**out = **in
+	}
+	if in.IdleReplicaCount != nil {
+		in, out := &in.IdleReplicaCount, &out.IdleReplicaCount
+		*out = new(int32)
+		**out = **in
+	}
+	if in.MinReplicaCount != nil {
+		in, out := &in.MinReplicaCount, &out.MinReplicaCount
+		*out = new(int32)
+		**out = **in
+	}
+	if in.MaxReplicaCount != nil {
+		in, out := &in.MaxReplicaCount, &out.MaxReplicaCount
+		*out = new(int32)
+		**out = **in
+	}
 	if in.Triggers != nil {
 		in, out := &in.Triggers, &out.Triggers
 		*out = make([]ScaleTriggers, len(*in))
diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index 90641e3..4911a76 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -62,27 +62,34 @@ const (
 )
 
 // The KEDA trait can be used for automatic integration with KEDA autoscalers.
+// The trait can be either manually configured using the `triggers` option or automatically configured
+// via markers in the Kamelets.
+//
+// For information on how to use KEDA enabled Kamelets with the KEDA trait, refer to
+// xref:kamelets/kamelets-user.adoc#kamelet-keda-user[the KEDA section in the Kamelets user guide].
+// If you want to create Kamelets that contain KEDA metadata, refer to
+// xref:kamelets/kamelets-dev.adoc#kamelet-keda-dev[the KEDA section in the Kamelets development guide].
 //
 // The KEDA trait is disabled by default.
 //
 // +camel-k:trait=keda.
 type kedaTrait struct {
 	trait.BaseTrait `property:",squash"`
-	// Enables automatic configuration of the trait.
+	// Enables automatic configuration of the trait. Allows the trait to infer KEDA triggers from the Kamelets.
 	Auto *bool `property:"auto" json:"auto,omitempty"`
-	// Convert metadata properties to camelCase (needed because trait properties use kebab-case). Disabled by default.
+	// Convert metadata properties to camelCase (needed because Camel K trait properties use kebab-case from command line). Disabled by default.
 	CamelCaseConversion *bool `property:"camel-case-conversion" json:"camelCaseConversion,omitempty"`
-	// Set the spec->replicas field on the top level controller to an explicit value if missing, to allow KEDA to recognize it as a scalable resource
+	// Set the spec->replicas field on the top level controller to an explicit value if missing, to allow KEDA to recognize it as a scalable resource.
 	HackControllerReplicas *bool `property:"hack-controller-replicas" json:"hackControllerReplicas,omitempty"`
-	// Interval (seconds) to check each trigger on (minimum 10 seconds)
+	// Interval (seconds) to check each trigger on (minimum 10 seconds).
 	PollingInterval *int32 `property:"polling-interval" json:"pollingInterval,omitempty"`
-	// The wait period between the last active trigger reported and scaling the resource back to 0
+	// The wait period between the last active trigger reported and scaling the resource back to 0.
 	CooldownPeriod *int32 `property:"cooldown-period" json:"cooldownPeriod,omitempty"`
-	// Enabling this property allows KEDA to scale the resource down to the specified number of replicas
+	// Enabling this property allows KEDA to scale the resource down to the specified number of replicas.
 	IdleReplicaCount *int32 `property:"idle-replica-count" json:"idleReplicaCount,omitempty"`
-	// Minimum number of replicas
+	// Minimum number of replicas.
 	MinReplicaCount *int32 `property:"min-replica-count" json:"minReplicaCount,omitempty"`
-	// Maximum number of replicas
+	// Maximum number of replicas.
 	MaxReplicaCount *int32 `property:"max-replica-count" json:"maxReplicaCount,omitempty"`
 	// Definition of triggers according to the KEDA format. Each trigger must contain `type` field corresponding
 	// to the name of a KEDA autoscaler and a key/value map named `metadata` containing specific trigger options.
@@ -244,6 +251,7 @@ func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
 		if e.Integration.Spec.Replicas == nil {
 			one := int32(1)
 			e.Integration.Spec.Replicas = &one
+			// Update the Integration directly as the spec section is not merged by default
 			if err := e.Client.Update(e.Ctx, e.Integration); err != nil {
 				return err
 			}
diff --git a/config/crd/bases/camel.apache.org_kameletbindings.yaml b/config/crd/bases/camel.apache.org_kameletbindings.yaml
index 6ad5d2e..0891d1f 100644
--- a/config/crd/bases/camel.apache.org_kameletbindings.yaml
+++ b/config/crd/bases/camel.apache.org_kameletbindings.yaml
@@ -5848,8 +5848,9 @@ spec:
                                   uniqueItems:
                                     type: boolean
                                   x-descriptors:
-                                    description: The list of descriptors that determine
-                                      which UI components to use on different views
+                                    description: XDescriptors is a list of extended
+                                      properties that trigger a custom behavior in
+                                      external systems
                                     items:
                                       type: string
                                     type: array
@@ -6062,8 +6063,9 @@ spec:
                                   uniqueItems:
                                     type: boolean
                                   x-descriptors:
-                                    description: The list of descriptors that determine
-                                      which UI components to use on different views
+                                    description: XDescriptors is a list of extended
+                                      properties that trigger a custom behavior in
+                                      external systems
                                     items:
                                       type: string
                                     type: array
@@ -6281,8 +6283,9 @@ spec:
                                     uniqueItems:
                                       type: boolean
                                     x-descriptors:
-                                      description: The list of descriptors that determine
-                                        which UI components to use on different views
+                                      description: XDescriptors is a list of extended
+                                        properties that trigger a custom behavior
+                                        in external systems
                                       items:
                                         type: string
                                       type: array
diff --git a/config/crd/bases/camel.apache.org_kamelets.yaml b/config/crd/bases/camel.apache.org_kamelets.yaml
index 8dd01d6..dada3b0 100644
--- a/config/crd/bases/camel.apache.org_kamelets.yaml
+++ b/config/crd/bases/camel.apache.org_kamelets.yaml
@@ -193,8 +193,8 @@ spec:
                         uniqueItems:
                           type: boolean
                         x-descriptors:
-                          description: The list of descriptors that determine which
-                            UI components to use on different views
+                          description: XDescriptors is a list of extended properties
+                            that trigger a custom behavior in external systems
                           items:
                             type: string
                           type: array
@@ -405,8 +405,8 @@ spec:
                               uniqueItems:
                                 type: boolean
                               x-descriptors:
-                                description: The list of descriptors that determine
-                                  which UI components to use on different views
+                                description: XDescriptors is a list of extended properties
+                                  that trigger a custom behavior in external systems
                                 items:
                                   type: string
                                 type: array
diff --git a/docs/modules/ROOT/nav.adoc b/docs/modules/ROOT/nav.adoc
index 5ece5cd..890e733 100644
--- a/docs/modules/ROOT/nav.adoc
+++ b/docs/modules/ROOT/nav.adoc
@@ -63,7 +63,7 @@
 ** xref:traits:jolokia.adoc[Jolokia]
 ** xref:traits:jvm.adoc[Jvm]
 ** xref:traits:kamelets.adoc[Kamelets]
-** xref:traits:keda.adoc[KEDA]
+** xref:traits:keda.adoc[Keda]
 ** xref:traits:knative-service.adoc[Knative Service]
 ** xref:traits:knative.adoc[Knative]
 ** xref:traits:logging.adoc[Logging]
diff --git a/docs/modules/ROOT/pages/kamelets/kamelets-dev.adoc b/docs/modules/ROOT/pages/kamelets/kamelets-dev.adoc
index 688c66c..2904ec0 100644
--- a/docs/modules/ROOT/pages/kamelets/kamelets-dev.adoc
+++ b/docs/modules/ROOT/pages/kamelets/kamelets-dev.adoc
@@ -1427,3 +1427,122 @@ If everything goes well, you should receive a message during the test execution.
 For a more specific test that also checks the content sent to Telegram, you should add additional Gherkin steps
 to get and verify the actual message via other Telegram APIs. We're not going into much detail for this example,
 but the Gherkin file highlighted above is a good approximation of the backbone you'll find in tests for Kamelets of type "sink".
+
+== KEDA Integration
+
+Kamelets of type `source` can be augmented with https://keda.sh/[KEDA] metadata to automatically configure autoscalers.
+
+The additional KEDA metadata is needed for the following purposes:
+
+- Map Kamelet properties to corresponding KEDA parameters
+- Distinguish which KEDA parameters are needed for authentication (and need to be placed in a `Secret`)
+- Mark KEDA parameters as required, so that missing values are reported as an error during reconciliation
+
+[[kamelet-keda-dev]]
+=== Basic properties to KEDA parameter mapping
+
+Any Kamelet property can be mapped to a KEDA parameter by simply declaring the mapping in the `x-descriptors` list.
+For example:
+
+.aws-sqs-source.kamelet.yaml
+[source,yaml]
+----
+apiVersion: camel.apache.org/v1alpha1
+kind: Kamelet
+metadata:
+  name: aws-sqs-source
+  labels:
+    camel.apache.org/kamelet.type: "source"
+spec:
+  definition:
+    # ...
+    properties:
+      queueNameOrArn:
+        title: Queue Name
+        description: The SQS Queue Name or ARN
+        type: string
+        x-descriptors:
+        - urn:keda:metadata:queueURL # <1>
+        - urn:keda:required # <2>
+# ...
+----
+<1> The Kamelet property `queueNameOrArn` corresponds to a KEDA metadata parameter named `queueURL`
+<2> The `queueURL` parameter is required by KEDA
+
+In the example above, the `queueNameOrArn` Kamelet property is declared to correspond to a KEDA *metadata* parameter named `queueURL`, using the `urn:keda:metadata:` prefix.
+The `queueURL` parameter is documented in the https://keda.sh/docs/2.5/scalers/aws-sqs/[KEDA AWS SQS Queue scaler] page, together with the other options
+required by KEDA to configure the autoscaler (it can be a full queue URL or a simple queue name).
+By using the marker descriptor `urn:keda:required`, it is also marked as required by KEDA.
+
+The `queueURL` is a *metadata* parameter for the autoscaler. In order to configure *authentication* parameters, the syntax is slightly different:
+
+.aws-sqs-source.kamelet.yaml
+[source,yaml]
+----
+apiVersion: camel.apache.org/v1alpha1
+kind: Kamelet
+metadata:
+  name: aws-sqs-source
+  labels:
+    camel.apache.org/kamelet.type: "source"
+spec:
+  definition:
+    # ...
+    properties:
+      # ...
+      accessKey:
+        title: Access Key
+        description: The access key obtained from AWS
+        type: string
+        format: password
+        x-descriptors:
+        - urn:alm:descriptor:com.tectonic.ui:password
+        - urn:camel:group:credentials
+        - urn:keda:authentication:awsAccessKeyID <1>
+        - urn:keda:required
+# ...
+----
+<1> The Kamelet property `accessKey` corresponds to a KEDA authentication parameter named `awsAccessKeyID`
+
+This time the property mapping uses the `urn:keda:authentication:` prefix, declaring it as a KEDA authentication parameter.
+The difference between the two approaches is that authentication parameters will be injected into a secret by the Camel K
+operator and linked to the KEDA ScaledObject using a TriggerAuthentication (refer to the https://keda.sh/[KEDA documentation] for more info).
+
+=== Advanced KEDA property mapping
+
+There are cases where KEDA requires static values to be set in a ScaledObject, or values computed from multiple Kamelet properties.
+To deal with these cases it's possible to use annotations on the Kamelet prefixed with `camel.apache.org/keda.metadata.` (for metadata parameters)
+or `camel.apache.org/keda.authentication.` (for authentication parameters). Those annotations can contain plain fixed values or *templates* (using the Go template syntax).
+
+For example:
+
+.my-source.kamelet.yaml
+[source,yaml]
+----
+apiVersion: camel.apache.org/v1alpha1
+kind: Kamelet
+metadata:
+  name: my-source
+  labels:
+    camel.apache.org/kamelet.type: "source"
+  annotations:
+    camel.apache.org/keda.authentication.sasl: "plaintext" # <1>
+    camel.apache.org/keda.metadata.queueLength: "5" # <2>
+    camel.apache.org/keda.metadata.queueAddress: "https://myhost.com/queues/{{.queueName}}" # <3>
+spec:
+  definition:
+    # ...
+    properties:
+      queueName:
+        title: Queue Name
+        description: The Queue Name
+        type: string
+# ...
+----
+<1> An authentication parameter with a fixed value
+<2> A metadata parameter with a fixed value
+<3> A metadata parameter with a value computed from a template
+
+When using the template syntax, all Kamelet properties are available as fields. Default values are used for any properties missing from the user configuration.
+
+For information on how to use Kamelets with KEDA, see the xref:kamelets/kamelets-user.adoc#kamelet-keda-user[KEDA section in the user guide].
diff --git a/docs/modules/ROOT/pages/kamelets/kamelets-user.adoc b/docs/modules/ROOT/pages/kamelets/kamelets-user.adoc
index c8b5689..7896031 100644
--- a/docs/modules/ROOT/pages/kamelets/kamelets-user.adoc
+++ b/docs/modules/ROOT/pages/kamelets/kamelets-user.adoc
@@ -615,3 +615,42 @@ Kamelets, however, can also contain additional sources in the `spec` -> `sources
 (not necessarily route templates) and will be added once to all the integrations where the Kamelet is used.
 They main role is to do advanced configuration of the integration context where the Kamelet is used, such as registering
 beans in the registry or adding customizers.
+
+[[kamelet-keda-user]]
+== KEDA enabled Kamelets
+
+Some Kamelets are enhanced with KEDA metadata to allow users to automatically configure autoscalers on them.
+Kamelets with KEDA features can be distinguished by the presence of the annotation `camel.apache.org/keda.type`,
+which is set to the name of a specific KEDA autoscaler.
+
+A KEDA enabled Kamelet can be used in the same way as any other Kamelet, in a binding or in an integration.
+KEDA autoscalers are not enabled by default: they need to be manually enabled by the user via the `keda` trait.
+
+In a KameletBinding, the KEDA trait can be enabled using annotations:
+
+.my-keda-binding.yaml
+[source,yaml]
+----
+apiVersion: camel.apache.org/v1alpha1
+kind: KameletBinding
+metadata:
+  name: my-keda-binding
+  annotations:
+    trait.camel.apache.org/keda.enabled: "true"
+spec:
+  source:
+  # ...
+  sink:
+  # ...
+----
+
+In an integration, it can be enabled using `kamel run` args, for example:
+
+[source,shell]
+----
+kamel run my-keda-integration.yaml -t keda.enabled=true
+----
+
+NOTE: Make sure that `my-keda-integration` uses at least one KEDA enabled Kamelet, otherwise enabling KEDA (without other options) will have no effect.
+
+For information on how to create KEDA enabled Kamelets, see the xref:kamelets/kamelets-dev.adoc#kamelet-keda-dev[KEDA section in the development guide].
diff --git a/docs/modules/ROOT/partials/apis/crds-html.adoc b/docs/modules/ROOT/partials/apis/crds-html.adoc
index ed34383..131c061 100644
--- a/docs/modules/ROOT/partials/apis/crds-html.adoc
+++ b/docs/modules/ROOT/partials/apis/crds-html.adoc
@@ -6007,7 +6007,7 @@ bool
 </em>
 </td>
 <td>
-<p>The list of descriptors that determine which UI components to use on different views</p>
+<p>XDescriptors is a list of extended properties that trigger a custom behavior in external systems</p>
 </td>
 </tr>
 </tbody>
diff --git a/docs/modules/traits/pages/keda.adoc b/docs/modules/traits/pages/keda.adoc
index a73dabd..6f0fcac 100644
--- a/docs/modules/traits/pages/keda.adoc
+++ b/docs/modules/traits/pages/keda.adoc
@@ -1,9 +1,16 @@
 = Keda Trait
 
 // Start of autogenerated code - DO NOT EDIT! (description)
-The Keda trait can be used for automatic integration with Keda autoscalers.
+The KEDA trait can be used for automatic integration with KEDA autoscalers.
+The trait can be either manually configured using the `triggers` option or automatically configured
+via markers in the Kamelets.
 
-The Keda trait is disabled by default.
+For information on how to use KEDA enabled Kamelets with the KEDA trait, refer to
+xref:kamelets/kamelets-user.adoc#kamelet-keda-user[the KEDA section in the Kamelets user guide].
+If you want to create Kamelets that contain KEDA metadata, refer to
+xref:kamelets/kamelets-dev.adoc#kamelet-keda-dev[the KEDA section in the Kamelets development guide].
+
+The KEDA trait is disabled by default.
 
 
 This trait is available in the following profiles: **Kubernetes, Knative, OpenShift**.
@@ -29,15 +36,40 @@ The following configuration options are available:
 
 | keda.auto
 | bool
-| Enables automatic configuration of the trait.
+| Enables automatic configuration of the trait. Allows the trait to infer KEDA triggers from the Kamelets.
 
 | keda.camel-case-conversion
 | bool
-| Convert metadata properties to camelCase (needed because trait properties use kebab-case). Enabled by default.
+| Convert metadata properties to camelCase (needed because Camel K trait properties use kebab-case from command line). Disabled by default.
+
+| keda.hack-controller-replicas
+| bool
+| Set the spec->replicas field on the top level controller to an explicit value if missing, to allow KEDA to recognize it as a scalable resource.
+
+| keda.polling-interval
+| int32
+| Interval (seconds) to check each trigger on (minimum 10 seconds).
+
+| keda.cooldown-period
+| int32
+| The wait period between the last active trigger reported and scaling the resource back to 0.
+
+| keda.idle-replica-count
+| int32
+| Enabling this property allows KEDA to scale the resource down to the specified number of replicas.
+
+| keda.min-replica-count
+| int32
+| Minimum number of replicas.
+
+| keda.max-replica-count
+| int32
+| Maximum number of replicas.
 
 | keda.triggers
 | []github.com/apache/camel-k/addons/keda.kedaTrigger
-| Triggers
+| Definition of triggers according to the KEDA format. Each trigger must contain `type` field corresponding
+to the name of a KEDA autoscaler and a key/value map named `metadata` containing specific trigger options.
 
 |===
 
diff --git a/helm/camel-k/crds/crd-kamelet-binding.yaml b/helm/camel-k/crds/crd-kamelet-binding.yaml
index 6ad5d2e..0891d1f 100644
--- a/helm/camel-k/crds/crd-kamelet-binding.yaml
+++ b/helm/camel-k/crds/crd-kamelet-binding.yaml
@@ -5848,8 +5848,9 @@ spec:
                                   uniqueItems:
                                     type: boolean
                                   x-descriptors:
-                                    description: The list of descriptors that determine
-                                      which UI components to use on different views
+                                    description: XDescriptors is a list of extended
+                                      properties that trigger a custom behavior in
+                                      external systems
                                     items:
                                       type: string
                                     type: array
@@ -6062,8 +6063,9 @@ spec:
                                   uniqueItems:
                                     type: boolean
                                   x-descriptors:
-                                    description: The list of descriptors that determine
-                                      which UI components to use on different views
+                                    description: XDescriptors is a list of extended
+                                      properties that trigger a custom behavior in
+                                      external systems
                                     items:
                                       type: string
                                     type: array
@@ -6281,8 +6283,9 @@ spec:
                                     uniqueItems:
                                       type: boolean
                                     x-descriptors:
-                                      description: The list of descriptors that determine
-                                        which UI components to use on different views
+                                      description: XDescriptors is a list of extended
+                                        properties that trigger a custom behavior
+                                        in external systems
                                       items:
                                         type: string
                                       type: array
diff --git a/helm/camel-k/crds/crd-kamelet.yaml b/helm/camel-k/crds/crd-kamelet.yaml
index 8dd01d6..dada3b0 100644
--- a/helm/camel-k/crds/crd-kamelet.yaml
+++ b/helm/camel-k/crds/crd-kamelet.yaml
@@ -193,8 +193,8 @@ spec:
                         uniqueItems:
                           type: boolean
                         x-descriptors:
-                          description: The list of descriptors that determine which
-                            UI components to use on different views
+                          description: XDescriptors is a list of extended properties
+                            that trigger a custom behavior in external systems
                           items:
                             type: string
                           type: array
@@ -405,8 +405,8 @@ spec:
                               uniqueItems:
                                 type: boolean
                               x-descriptors:
-                                description: The list of descriptors that determine
-                                  which UI components to use on different views
+                                description: XDescriptors is a list of extended properties
+                                  that trigger a custom behavior in external systems
                                 items:
                                   type: string
                                 type: array
diff --git a/resources/traits.yaml b/resources/traits.yaml
index 7bd54bf..a6c05b8 100755
--- a/resources/traits.yaml
+++ b/resources/traits.yaml
@@ -576,8 +576,14 @@ traits:
   - Kubernetes
   - Knative
   - OpenShift
-  description: The Keda trait can be used for automatic integration with Keda autoscalers.
-    The Keda trait is disabled by default.
+  description: The KEDA trait can be used for automatic integration with KEDA autoscalers.
+    The trait can be either manually configured using the `triggers` option or automatically
+    configured via markers in the Kamelets. For information on how to use KEDA enabled
+    Kamelets with the KEDA trait, refer to xref:kamelets/kamelets-user.adoc#kamelet-keda-user[the
+    KEDA section in the Kamelets user guide]. If you want to create Kamelets that
+    contain KEDA metadata, refer to xref:kamelets/kamelets-dev.adoc#kamelet-keda-dev[the
+    KEDA section in the Kamelets development guide]. The KEDA trait is disabled by
+    default.
   properties:
   - name: enabled
     type: bool
@@ -585,14 +591,38 @@ traits:
       property.
   - name: auto
     type: bool
-    description: Enables automatic configuration of the trait.
+    description: Enables automatic configuration of the trait. Allows the trait to
+      infer KEDA triggers from the Kamelets.
   - name: camel-case-conversion
     type: bool
-    description: Convert metadata properties to camelCase (needed because trait properties
-      use kebab-case). Enabled by default.
+    description: Convert metadata properties to camelCase (needed because Camel K
+      trait properties use kebab-case from command line). Disabled by default.
+  - name: hack-controller-replicas
+    type: bool
+    description: Set the spec->replicas field on the top level controller to an explicit
+      value if missing, to allow KEDA to recognize it as a scalable resource.
+  - name: polling-interval
+    type: int32
+    description: Interval (seconds) to check each trigger on (minimum 10 seconds).
+  - name: cooldown-period
+    type: int32
+    description: The wait period between the last active trigger reported and scaling
+      the resource back to 0.
+  - name: idle-replica-count
+    type: int32
+    description: Enabling this property allows KEDA to scale the resource down to
+      the specified number of replicas.
+  - name: min-replica-count
+    type: int32
+    description: Minimum number of replicas.
+  - name: max-replica-count
+    type: int32
+    description: Maximum number of replicas.
   - name: triggers
     type: '[]github.com/apache/camel-k/addons/keda.kedaTrigger'
-    description: Triggers
+    description: Definition of triggers according to the KEDA format. Each trigger
+      must contain `type` field corresponding to the name of a KEDA autoscaler and
+      a key/value map named `metadata` containing specific trigger options.
 - name: knative-service
   platform: false
   profiles:
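
The `camel.apache.org/keda.metadata.*` annotations described in the new Kamelets development guide may contain Go templates that are rendered against the Kamelet properties. The snippet below only demonstrates the `text/template` mechanics such a value relies on; the annotation string and the property map come from the `my-source` example in the guide, and this is not the operator's actual code path.

[source,go]
----
package main

import (
	"os"
	"text/template"
)

func main() {
	// Annotation value from the my-source example in the development guide.
	const annotation = "https://myhost.com/queues/{{.queueName}}"

	// Kamelet properties as provided by the user (or their declared defaults).
	props := map[string]string{"queueName": "my-queue"}

	// Render the template against the property map.
	tmpl := template.Must(template.New("queueAddress").Parse(annotation))
	if err := tmpl.Execute(os.Stdout, props); err != nil {
		panic(err)
	}
	// Output: https://myhost.com/queues/my-queue
}
----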

[camel-k] 05/22: Fix #1107: adding optional keda fields

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit 2fbfef6662e04330bbd181330fa3df29b2574dcb
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Wed Dec 15 11:44:39 2021 +0100

    Fix #1107: adding optional keda fields
---
 addons/keda/duck/v1alpha1/duck_types.go | 10 ++++++++++
 addons/keda/keda.go                     | 29 +++++++++++++++++++++++++++--
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/addons/keda/duck/v1alpha1/duck_types.go b/addons/keda/duck/v1alpha1/duck_types.go
index 8504b6c..90a20bf 100644
--- a/addons/keda/duck/v1alpha1/duck_types.go
+++ b/addons/keda/duck/v1alpha1/duck_types.go
@@ -38,6 +38,16 @@ type ScaledObject struct {
 // ScaledObjectSpec is the spec for a ScaledObject resource
 type ScaledObjectSpec struct {
 	ScaleTargetRef *v1.ObjectReference `json:"scaleTargetRef"`
+	// +optional
+	PollingInterval *int32 `json:"pollingInterval,omitempty"`
+	// +optional
+	CooldownPeriod *int32 `json:"cooldownPeriod,omitempty"`
+	// +optional
+	IdleReplicaCount *int32 `json:"idleReplicaCount,omitempty"`
+	// +optional
+	MinReplicaCount *int32 `json:"minReplicaCount,omitempty"`
+	// +optional
+	MaxReplicaCount *int32 `json:"maxReplicaCount,omitempty"`
 
 	Triggers []ScaleTriggers `json:"triggers"`
 }
diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index f59edd9..834cea3 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -43,7 +43,18 @@ type kedaTrait struct {
 	CamelCaseConversion *bool `property:"camel-case-conversion" json:"camelCaseConversion,omitempty"`
 	// Set the spec->replicas field on the top level controller to an explicit value if missing, to allow Keda to recognize it as a scalable resource
 	HackControllerReplicas *bool `property:"hack-controller-replicas" json:"hackControllerReplicas,omitempty"`
-	// Triggers
+	// Interval (seconds) to check each trigger on (minimum 10 seconds)
+	PollingInterval *int32 `property:"polling-interval" json:"pollingInterval,omitempty"`
+	// The wait period between the last active trigger reported and scaling the resource back to 0
+	CooldownPeriod *int32 `property:"cooldown-period" json:"cooldownPeriod,omitempty"`
+	// Enabling this property allows KEDA to scale the resource down to the specified number of replicas
+	IdleReplicaCount *int32 `property:"idle-replica-count" json:"idleReplicaCount,omitempty"`
+	// Minimum number of replicas
+	MinReplicaCount *int32 `property:"min-replica-count" json:"minReplicaCount,omitempty"`
+	// Maximum number of replicas
+	MaxReplicaCount *int32 `property:"max-replica-count" json:"maxReplicaCount,omitempty"`
+	// Definition of triggers according to the Keda format. Each trigger must contain `type` field corresponding
+	// to the name of a Keda autoscaler and a key/value map named `metadata` containing specific trigger options.
 	Triggers []kedaTrigger `property:"triggers" json:"triggers,omitempty"`
 }
 
@@ -95,7 +106,21 @@ func (t *kedaTrait) getScaledObject(e *trait.Environment) (*kedav1alpha1.ScaledO
 	}
 	obj := kedav1alpha1.NewScaledObject(e.Integration.Namespace, e.Integration.Name)
 	obj.Spec.ScaleTargetRef = t.getTopControllerReference(e)
-
+	if t.PollingInterval != nil {
+		obj.Spec.PollingInterval = t.PollingInterval
+	}
+	if t.CooldownPeriod != nil {
+		obj.Spec.CooldownPeriod = t.CooldownPeriod
+	}
+	if t.IdleReplicaCount != nil {
+		obj.Spec.IdleReplicaCount = t.IdleReplicaCount
+	}
+	if t.MinReplicaCount != nil {
+		obj.Spec.MinReplicaCount = t.MinReplicaCount
+	}
+	if t.MaxReplicaCount != nil {
+		obj.Spec.MaxReplicaCount = t.MaxReplicaCount
+	}
 	for _, trigger := range t.Triggers {
 		meta := make(map[string]string)
 		for k, v := range trigger.Metadata {
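
All the new spec fields above are optional pointers with `omitempty` JSON tags, so only the options a user actually sets on the trait end up in the generated ScaledObject, and KEDA falls back to its own defaults for the rest. A small self-contained illustration (the struct is a trimmed-down local copy, not the real duck type):

[source,go]
----
package main

import (
	"encoding/json"
	"fmt"
)

// scaledObjectSpec is a trimmed-down local copy of the optional fields added
// above; only the JSON tags matter for this illustration.
type scaledObjectSpec struct {
	PollingInterval *int32 `json:"pollingInterval,omitempty"`
	CooldownPeriod  *int32 `json:"cooldownPeriod,omitempty"`
	MinReplicaCount *int32 `json:"minReplicaCount,omitempty"`
	MaxReplicaCount *int32 `json:"maxReplicaCount,omitempty"`
}

func int32Ptr(v int32) *int32 { return &v }

func main() {
	// Only the fields that are set (non-nil) are serialized; unset pointers
	// are dropped by omitempty.
	spec := scaledObjectSpec{
		PollingInterval: int32Ptr(30),
		MaxReplicaCount: int32Ptr(10),
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
----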

[camel-k] 22/22: Fix #1107: fix expected roles in tests

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit 590b23c15b2023dfc65f5500077e22d7981c7ffc
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Mon Jan 10 22:52:43 2022 +0100

    Fix #1107: fix expected roles in tests
---
 e2e/common/kustomize/common.go | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/e2e/common/kustomize/common.go b/e2e/common/kustomize/common.go
index c5c483e..adbb2ff 100644
--- a/e2e/common/kustomize/common.go
+++ b/e2e/common/kustomize/common.go
@@ -40,8 +40,9 @@ const (
 
 	// camel-k-operator, 			 camel-k-operator-events,
 	// camel-k-operator-knative, 	 camel-k-operator-leases,
-	// camel-k-operator-podmonitors, camel-k-operator-strimzi
-	ExpKubePromoteRoles = 6
+	// camel-k-operator-podmonitors, camel-k-operator-strimzi,
+	// camel-k-operator-keda
+	ExpKubePromoteRoles = 7
 
 	// camel-k-edit
 	// camel-k-operator-custom-resource-definitions

[camel-k] 11/22: Fix #1107: added tests

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit 1fde2b53666f30414d0a79521d0206dc023b68fc
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Mon Dec 20 16:35:39 2021 +0100

    Fix #1107: added tests
---
 addons/keda/keda.go      |   6 +-
 addons/keda/keda_test.go | 295 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 298 insertions(+), 3 deletions(-)

diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index 3637153..3a54896 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -42,7 +42,7 @@ import (
 	scase "github.com/stoewer/go-strcase"
 	v1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-	"sigs.k8s.io/controller-runtime/pkg/client"
+	ctrl "sigs.k8s.io/controller-runtime/pkg/client"
 )
 
 const (
@@ -222,7 +222,7 @@ func (t *kedaTrait) addScalingResources(e *trait.Environment) error {
 			}
 		} else if trigger.AuthenticationSecret != "" {
 			s := v1.Secret{}
-			key := client.ObjectKey{
+			key := ctrl.ObjectKey{
 				Namespace: e.Integration.Namespace,
 				Name:      trigger.AuthenticationSecret,
 			}
@@ -273,7 +273,7 @@ func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
 	ctrlRef := t.getTopControllerReference(e)
 	if ctrlRef.Kind == camelv1alpha1.KameletBindingKind {
 		// Update the KameletBinding directly (do not add it to env resources, it's the integration parent)
-		key := client.ObjectKey{
+		key := ctrl.ObjectKey{
 			Namespace: e.Integration.Namespace,
 			Name:      ctrlRef.Name,
 		}
diff --git a/addons/keda/keda_test.go b/addons/keda/keda_test.go
new file mode 100644
index 0000000..083a231
--- /dev/null
+++ b/addons/keda/keda_test.go
@@ -0,0 +1,295 @@
+/*
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package keda
+
+import (
+	"context"
+	"testing"
+
+	"github.com/apache/camel-k/addons/keda/duck/v1alpha1"
+	camelv1 "github.com/apache/camel-k/pkg/apis/camel/v1"
+	camelv1alpha1 "github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
+	"github.com/apache/camel-k/pkg/trait"
+	"github.com/apache/camel-k/pkg/util/camel"
+	"github.com/apache/camel-k/pkg/util/kubernetes"
+	"github.com/apache/camel-k/pkg/util/test"
+	"github.com/pkg/errors"
+	"github.com/stretchr/testify/assert"
+	corev1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/runtime"
+)
+
+var (
+	testingTrue  = true
+	testingFalse = false
+)
+
+func TestManualConfig(t *testing.T) {
+	keda, _ := NewKedaTrait().(*kedaTrait)
+	keda.Enabled = &testingTrue
+	keda.Auto = &testingFalse
+	meta := map[string]string{
+		"prop":      "val",
+		"camelCase": "VAL",
+	}
+	keda.Triggers = append(keda.Triggers, kedaTrigger{
+		Type:     "mytype",
+		Metadata: meta,
+	})
+	env := createBasicTestEnvironment()
+
+	res, err := keda.Configure(env)
+	assert.NoError(t, err)
+	assert.True(t, res)
+	assert.NoError(t, keda.Apply(env))
+	so := getScaledObject(env)
+	assert.NotNil(t, so)
+	assert.Len(t, so.Spec.Triggers, 1)
+	assert.Equal(t, "mytype", so.Spec.Triggers[0].Type)
+	assert.Equal(t, meta, so.Spec.Triggers[0].Metadata)
+	assert.Nil(t, so.Spec.Triggers[0].AuthenticationRef)
+	assert.Nil(t, getTriggerAuthentication(env))
+	assert.Nil(t, getSecret(env))
+}
+
+func TestConfigFromSecret(t *testing.T) {
+	keda, _ := NewKedaTrait().(*kedaTrait)
+	keda.Enabled = &testingTrue
+	keda.Auto = &testingFalse
+	meta := map[string]string{
+		"prop":      "val",
+		"camelCase": "VAL",
+	}
+	keda.Triggers = append(keda.Triggers, kedaTrigger{
+		Type:                 "mytype",
+		Metadata:             meta,
+		AuthenticationSecret: "my-secret",
+	})
+	env := createBasicTestEnvironment(&corev1.Secret{
+		ObjectMeta: metav1.ObjectMeta{
+			Namespace: "test",
+			Name:      "my-secret",
+		},
+		Data: map[string][]byte{
+			"bbb": []byte("val1"),
+			"aaa": []byte("val2"),
+		},
+	})
+
+	res, err := keda.Configure(env)
+	assert.NoError(t, err)
+	assert.True(t, res)
+	assert.NoError(t, keda.Apply(env))
+	so := getScaledObject(env)
+	assert.NotNil(t, so)
+	assert.Len(t, so.Spec.Triggers, 1)
+	assert.Equal(t, "mytype", so.Spec.Triggers[0].Type)
+	assert.Equal(t, meta, so.Spec.Triggers[0].Metadata)
+	triggerAuth := getTriggerAuthentication(env)
+	assert.NotNil(t, triggerAuth)
+	assert.Equal(t, so.Spec.Triggers[0].AuthenticationRef.Name, triggerAuth.Name)
+	assert.NotEqual(t, "my-secret", triggerAuth.Name)
+	assert.Len(t, triggerAuth.Spec.SecretTargetRef, 2)
+	assert.Equal(t, "aaa", triggerAuth.Spec.SecretTargetRef[0].Key)
+	assert.Equal(t, "aaa", triggerAuth.Spec.SecretTargetRef[0].Parameter)
+	assert.Equal(t, "my-secret", triggerAuth.Spec.SecretTargetRef[0].Name)
+	assert.Equal(t, "bbb", triggerAuth.Spec.SecretTargetRef[1].Key)
+	assert.Equal(t, "bbb", triggerAuth.Spec.SecretTargetRef[1].Parameter)
+	assert.Equal(t, "my-secret", triggerAuth.Spec.SecretTargetRef[1].Name)
+	assert.Nil(t, getSecret(env)) // Secret is already present, not generated
+}
+
+func TestKameletAutoDetection(t *testing.T) {
+	keda, _ := NewKedaTrait().(*kedaTrait)
+	keda.Enabled = &testingTrue
+	env := createBasicTestEnvironment(
+		&camelv1alpha1.Kamelet{
+			ObjectMeta: metav1.ObjectMeta{
+				Namespace: "test",
+				Name:      "my-kamelet",
+				Annotations: map[string]string{
+					"camel.apache.org/keda.type": "my-scaler",
+				},
+			},
+			Spec: camelv1alpha1.KameletSpec{
+				Definition: &camelv1alpha1.JSONSchemaProps{
+					Properties: map[string]camelv1alpha1.JSONSchemaProp{
+						"a": camelv1alpha1.JSONSchemaProp{
+							XDescriptors: []string{
+								"urn:keda:metadata:a",
+							},
+						},
+						"b": camelv1alpha1.JSONSchemaProp{
+							XDescriptors: []string{
+								"urn:keda:metadata:bb",
+							},
+						},
+						"c": camelv1alpha1.JSONSchemaProp{
+							XDescriptors: []string{
+								"urn:keda:authentication:cc",
+							},
+						},
+					},
+				},
+			},
+		},
+		&camelv1.Integration{
+			ObjectMeta: metav1.ObjectMeta{
+				Namespace: "test",
+				Name:      "my-it",
+			},
+			Spec: camelv1.IntegrationSpec{
+				Sources: []camelv1.SourceSpec{
+					{
+						DataSpec: camelv1.DataSpec{
+							Name: "my-it.yaml",
+							Content: "" +
+								"- route:\n" +
+								"    from:\n" +
+								"      uri: kamelet:my-kamelet\n" +
+								"      parameters:\n" +
+								"        a: v1\n" +
+								"        b: v2\n" +
+								"        c: v3\n" +
+								"    steps:\n" +
+								"    - to: log:sink\n",
+						},
+						Language: camelv1.LanguageYaml,
+					},
+				},
+			},
+			Status: camelv1.IntegrationStatus{
+				Phase: camelv1.IntegrationPhaseDeploying,
+			},
+		})
+
+	res, err := keda.Configure(env)
+	assert.NoError(t, err)
+	assert.True(t, res)
+	assert.NoError(t, keda.Apply(env))
+	so := getScaledObject(env)
+	assert.NotNil(t, so)
+	assert.Len(t, so.Spec.Triggers, 1)
+	assert.Equal(t, "my-scaler", so.Spec.Triggers[0].Type)
+	assert.Equal(t, map[string]string{
+		"a":  "v1",
+		"bb": "v2",
+	}, so.Spec.Triggers[0].Metadata)
+	triggerAuth := getTriggerAuthentication(env)
+	assert.NotNil(t, triggerAuth)
+	assert.Equal(t, so.Spec.Triggers[0].AuthenticationRef.Name, triggerAuth.Name)
+	assert.Len(t, triggerAuth.Spec.SecretTargetRef, 1)
+	assert.Equal(t, "cc", triggerAuth.Spec.SecretTargetRef[0].Key)
+	assert.Equal(t, "cc", triggerAuth.Spec.SecretTargetRef[0].Parameter)
+	secretName := triggerAuth.Spec.SecretTargetRef[0].Name
+	secret := getSecret(env)
+	assert.NotNil(t, secret)
+	assert.Equal(t, secretName, secret.Name)
+	assert.Len(t, secret.StringData, 1)
+	assert.Contains(t, secret.StringData, "cc")
+}
+
+func getScaledObject(e *trait.Environment) *v1alpha1.ScaledObject {
+	var res *v1alpha1.ScaledObject
+	for _, o := range e.Resources.Items() {
+		if so, ok := o.(*v1alpha1.ScaledObject); ok {
+			if res != nil {
+				panic("multiple ScaledObjects found in env")
+			}
+			res = so
+		}
+	}
+	return res
+}
+
+func getTriggerAuthentication(e *trait.Environment) *v1alpha1.TriggerAuthentication {
+	var res *v1alpha1.TriggerAuthentication
+	for _, o := range e.Resources.Items() {
+		if so, ok := o.(*v1alpha1.TriggerAuthentication); ok {
+			if res != nil {
+				panic("multiple TriggerAuthentication found in env")
+			}
+			res = so
+		}
+	}
+	return res
+}
+
+func getSecret(e *trait.Environment) *corev1.Secret {
+	var res *corev1.Secret
+	for _, o := range e.Resources.Items() {
+		if so, ok := o.(*corev1.Secret); ok {
+			if res != nil {
+				panic("multiple Secret found in env")
+			}
+			res = so
+		}
+	}
+	return res
+}
+
+func createBasicTestEnvironment(resources ...runtime.Object) *trait.Environment {
+	fakeClient, err := test.NewFakeClient(resources...)
+	if err != nil {
+		panic(errors.Wrap(err, "could not create fake client"))
+	}
+
+	var it *camelv1.Integration
+	for _, res := range resources {
+		if integration, ok := res.(*camelv1.Integration); ok {
+			it = integration
+		}
+	}
+	if it == nil {
+		it = &camelv1.Integration{
+			ObjectMeta: metav1.ObjectMeta{
+				Namespace: "test",
+				Name:      "integration-name",
+			},
+			Status: camelv1.IntegrationStatus{
+				Phase: camelv1.IntegrationPhaseDeploying,
+			},
+		}
+	}
+
+	return &trait.Environment{
+		Catalog:     trait.NewCatalog(nil),
+		Ctx:         context.Background(),
+		Client:      fakeClient,
+		Integration: it,
+		CamelCatalog: &camel.RuntimeCatalog{
+			CamelCatalogSpec: camelv1.CamelCatalogSpec{
+				Runtime: camelv1.RuntimeSpec{
+					Version:  "0.0.1",
+					Provider: camelv1.RuntimeProviderQuarkus,
+				},
+			},
+		},
+		Platform: &camelv1.IntegrationPlatform{
+			ObjectMeta: metav1.ObjectMeta{
+				Namespace: "test",
+			},
+			Spec: camelv1.IntegrationPlatformSpec{
+				Cluster: camelv1.IntegrationPlatformClusterKubernetes,
+			},
+		},
+		Resources:             kubernetes.NewCollection(),
+		ApplicationProperties: make(map[string]string),
+	}
+}
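A side note on the three lookup helpers above: getScaledObject, getTriggerAuthentication and getSecret are structurally identical, and on Go 1.18+ they could be collapsed into one generic helper. A self-contained sketch (not part of this patch) with a stand-in type:

package main

import "fmt"

// findOne scans a slice of objects and returns the single *T it contains,
// panicking on duplicates exactly like the hand-written helpers do.
func findOne[T any](items []interface{}) *T {
	var res *T
	for _, o := range items {
		if v, ok := o.(*T); ok {
			if res != nil {
				panic(fmt.Sprintf("multiple %T found in env", res))
			}
			res = v
		}
	}
	return res
}

type scaledObject struct{ Name string } // stand-in for the duck-typed ScaledObject

func main() {
	items := []interface{}{&scaledObject{Name: "my-it"}}
	if so := findOne[scaledObject](items); so != nil {
		fmt.Println("found ScaledObject:", so.Name)
	}
}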

[camel-k] 21/22: Fix #1107: disable applier code to detect real CI errors

commit 885d2bd99ca5ce46bad901df5082d6e424a1500b
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Mon Jan 10 14:43:43 2022 +0100

    Fix #1107: disable applier code to detect real CI errors
---
 pkg/install/kamelets.go    |  94 +++++++++++++++++++++++++++++++++++++--
 pkg/resources/resources.go |  12 ++---
 pkg/trait/deployer.go      | 108 ++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 203 insertions(+), 11 deletions(-)

diff --git a/pkg/install/kamelets.go b/pkg/install/kamelets.go
index 4ff4572..82a818b 100644
--- a/pkg/install/kamelets.go
+++ b/pkg/install/kamelets.go
@@ -19,21 +19,33 @@ package install
 
 import (
 	"context"
+	"errors"
 	"fmt"
 	"io/fs"
+	"net/http"
 	"os"
 	"path"
 	"path/filepath"
 	"strings"
+	"sync"
+	"sync/atomic"
 
 	"golang.org/x/sync/errgroup"
 
+	k8serrors "k8s.io/apimachinery/pkg/api/errors"
+	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/types"
+
+	ctrl "sigs.k8s.io/controller-runtime/pkg/client"
+	logf "sigs.k8s.io/controller-runtime/pkg/log"
+
 	"github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
 	"github.com/apache/camel-k/pkg/client"
 	"github.com/apache/camel-k/pkg/util"
 	"github.com/apache/camel-k/pkg/util/defaults"
 	"github.com/apache/camel-k/pkg/util/kubernetes"
-	"k8s.io/apimachinery/pkg/runtime"
+	"github.com/apache/camel-k/pkg/util/patch"
 )
 
 const (
@@ -41,6 +53,13 @@ const (
 	defaultKameletDir = "/kamelets/"
 )
 
+var (
+	log = logf.Log
+
+	hasServerSideApply atomic.Value
+	tryServerSideApply sync.Once
+)
+
 // KameletCatalog installs the bundled Kamelets into the specified namespace.
 func KameletCatalog(ctx context.Context, c client.Client, namespace string) error {
 	kameletDir := os.Getenv(kameletDirEnv)
@@ -58,7 +77,7 @@ func KameletCatalog(ctx context.Context, c client.Client, namespace string) erro
 	}
 
 	g, gCtx := errgroup.WithContext(ctx)
-	applier := c.ServerOrClientSideApplier()
+
 	err = filepath.WalkDir(kameletDir, func(p string, f fs.DirEntry, err error) error {
 		if err != nil {
 			return err
@@ -75,9 +94,31 @@ func KameletCatalog(ctx context.Context, c client.Client, namespace string) erro
 			if err != nil {
 				return err
 			}
-			if err := applier.Apply(gCtx, kamelet); err != nil {
+			once := false
+			tryServerSideApply.Do(func() {
+				once = true
+				if err = serverSideApply(gCtx, c, kamelet); err != nil {
+					if isIncompatibleServerError(err) {
+						log.Info("Fallback to client-side apply for installing bundled Kamelets")
+						hasServerSideApply.Store(false)
+						err = nil
+					} else {
+						tryServerSideApply = sync.Once{}
+					}
+				} else {
+					hasServerSideApply.Store(true)
+				}
+			})
+			if err != nil {
 				return err
 			}
+			if v := hasServerSideApply.Load(); v.(bool) {
+				if !once {
+					return serverSideApply(gCtx, c, kamelet)
+				}
+			} else {
+				return clientSideApply(gCtx, c, kamelet)
+			}
 			return nil
 		})
 		return nil
@@ -89,6 +130,53 @@ func KameletCatalog(ctx context.Context, c client.Client, namespace string) erro
 	return g.Wait()
 }
 
+func serverSideApply(ctx context.Context, c client.Client, resource runtime.Object) error {
+	target, err := patch.PositiveApplyPatch(resource)
+	if err != nil {
+		return err
+	}
+	return c.Patch(ctx, target, ctrl.Apply, ctrl.ForceOwnership, ctrl.FieldOwner("camel-k-operator"))
+}
+
+func clientSideApply(ctx context.Context, c client.Client, resource ctrl.Object) error {
+	err := c.Create(ctx, resource)
+	if err == nil {
+		return nil
+	} else if !k8serrors.IsAlreadyExists(err) {
+		return fmt.Errorf("error during create resource: %s/%s: %w", resource.GetNamespace(), resource.GetName(), err)
+	}
+	object := &unstructured.Unstructured{}
+	object.SetNamespace(resource.GetNamespace())
+	object.SetName(resource.GetName())
+	object.SetGroupVersionKind(resource.GetObjectKind().GroupVersionKind())
+	err = c.Get(ctx, ctrl.ObjectKeyFromObject(object), object)
+	if err != nil {
+		return err
+	}
+	p, err := patch.PositiveMergePatch(object, resource)
+	if err != nil {
+		return err
+	} else if len(p) == 0 {
+		return nil
+	}
+	return c.Patch(ctx, resource, ctrl.RawPatch(types.MergePatchType, p))
+}
+
+func isIncompatibleServerError(err error) bool {
+	// A first, simpler check for older servers (e.g. OpenShift 3.11)
+	if strings.Contains(err.Error(), "415: Unsupported Media Type") {
+		return true
+	}
+	// 415: Unsupported media type means we're talking to a server which doesn't
+	// support server-side apply.
+	var serr *k8serrors.StatusError
+	if errors.As(err, &serr) {
+		return serr.Status().Code == http.StatusUnsupportedMediaType
+	}
+	// Non-StatusError means the error isn't because the server is incompatible.
+	return false
+}
+
 func loadKamelet(path string, namespace string, scheme *runtime.Scheme) (*v1alpha1.Kamelet, error) {
 	content, err := util.ReadFile(path)
 	if err != nil {
diff --git a/pkg/resources/resources.go b/pkg/resources/resources.go
index e64bea2..bcc7095 100644
--- a/pkg/resources/resources.go
+++ b/pkg/resources/resources.go
@@ -145,16 +145,16 @@ var assets = func() http.FileSystem {
 		"/crd/bases/camel.apache.org_integrations.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "camel.apache.org_integrations.yaml",
 			modTime:          time.Time{},
-			uncompressedSize: 366985,
+			uncompressedSize: 367530,
 
-			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\xbd\xfb\x73\x1b\x37\x96\x30\xfa\x7b\xfe\x8a\x53\x4e\xea\x93\xb4\x11\x29\x3b\x99\x9d\xbb\xe3\x3b\xf5\xa5\x34\x92\x9c\xd5\x8d\x2d\xab\x2c\x25\xf9\x52\x4e\x36\x0b\x76\x83\x24\x56\xdd\x40\x2f\x80\xa6\xcc\xbd\xbe\xff\xfb\x2d\x1c\x00\xfd\xe0\xab\x81\x16\xe9\x38\x53\x8d\xa9\x9a\x98\x14\xfb\x34\x1e\xe7\x7d\x0e\xce\xf9\x12\x46\xfb\x1b\x5f\x7c\x09\xaf\x59\x42\xb9\xa2\x29\x68\x01\x7a\x4e\xe1\xbc\x20\xc9\x9c\xc2\x9d\x98\xea\x [...]
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\xbd\xfb\x73\x1b\x37\x96\x30\xfa\x7b\xfe\x8a\x53\x4e\xea\x93\xb4\x11\x29\x3b\x99\x9d\xbb\xe3\x3b\xf5\xa5\x34\x92\x9c\xd5\x8d\x2d\xab\x2c\x25\xf9\x52\x4e\x36\x0b\x76\x83\x24\x56\xdd\x40\x2f\x80\xa6\xcc\xbd\xbe\xff\xfb\x2d\x1c\x00\xfd\xe0\xab\x81\x16\xe9\x38\x53\x8d\xa9\x9a\x98\x14\xfb\x34\x1e\xe7\x7d\x0e\xce\xf9\x12\x46\xfb\x1b\x5f\x7c\x09\xaf\x59\x42\xb9\xa2\x29\x68\x01\x7a\x4e\xe1\xbc\x20\xc9\x9c\xc2\x9d\x98\xea\x [...]
 		},
 		"/crd/bases/camel.apache.org_kameletbindings.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "camel.apache.org_kameletbindings.yaml",
 			modTime:          time.Time{},
-			uncompressedSize: 432125,
+			uncompressedSize: 432720,
 
-			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\xbd\xfb\x73\x1b\x37\x96\x30\xfa\x7b\xfe\x0a\x94\x9c\xfa\x24\x6d\x44\xca\xce\xcc\xce\xdd\xf1\x9d\xfa\x52\x1a\x59\xce\xe8\xc6\x96\x59\x96\xe2\x7c\x29\x27\x9b\x05\xbb\x41\x12\xab\x6e\xa0\x17\x40\x53\xe2\x5e\xdf\xff\xfd\x16\x0e\x80\x7e\xf0\x25\x9c\xa6\xa8\x28\x3b\x8d\xa9\x9a\x98\x22\xfb\x34\x5e\xe7\xfd\x7a\x41\x06\x8f\x37\xbe\x7a\x41\xde\xf1\x84\x09\xcd\x52\x62\x24\x31\x33\x46\xce\x0a\x9a\xcc\x18\xb9\x96\x13\x73\x47\x [...]
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\xbd\xfb\x73\x1b\x37\x96\x30\xfa\x7b\xfe\x0a\x94\x9c\xfa\x24\x6d\x44\xca\xce\xcc\xce\xdd\xf1\x9d\xfa\x52\x1a\x59\xce\xe8\xc6\x96\x59\x96\xe2\x7c\x29\x27\x9b\x05\xbb\x41\x12\xab\x6e\xa0\x17\x40\x53\xe2\x5e\xdf\xff\xfd\x16\x0e\x80\x7e\xf0\x25\x9c\xa6\xa8\x28\x3b\x8d\xa9\x9a\x98\x22\xfb\x34\x5e\xe7\xfd\x7a\x41\x06\x8f\x37\xbe\x7a\x41\xde\xf1\x84\x09\xcd\x52\x62\x24\x31\x33\x46\xce\x0a\x9a\xcc\x18\xb9\x96\x13\x73\x47\x [...]
 		},
 		"/crd/bases/camel.apache.org_kamelets.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "camel.apache.org_kamelets.yaml",
@@ -555,9 +555,9 @@ var assets = func() http.FileSystem {
 		"/traits.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "traits.yaml",
 			modTime:          time.Time{},
-			uncompressedSize: 49341,
+			uncompressedSize: 50652,
 
-			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x7d\xfd\x73\x5b\xb9\x91\xe0\xef\xf3\x57\xa0\xb4\x57\x65\x49\x45\x52\x9e\xc9\x26\x3b\xa7\xbb\xd9\x94\xc6\x76\x12\xcd\xf8\x43\x67\x3b\xb3\x97\x9a\x9b\x0a\xc1\xf7\x9a\x24\xcc\x47\xe0\x05\xc0\x93\xcc\xdc\xde\xff\x7e\x85\xee\xc6\xc7\x7b\x24\x25\xca\xb6\x66\xa3\xad\xdd\x54\xed\x58\xd2\x03\xd0\x68\x34\xfa\xbb\x1b\xde\x4a\xe5\xdd\xf9\x57\x63\xa1\xe5\x1a\xce\x85\x9c\xcf\x95\x56\x7e\xf3\x95\x10\x6d\x23\xfd\xdc\xd8\xf5\xb9\x [...]
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\xbd\x7d\x73\x1c\xb9\x91\x27\xfc\xff\x7c\x0a\x04\xfd\x44\x88\x64\x74\x37\x35\xe3\xb5\x3d\x0f\xef\xb4\x3e\x8e\x24\xdb\x9c\xd1\x0b\x4f\x92\xc7\xe7\xd0\x29\xdc\xe8\xaa\xec\x6e\xa8\xab\x81\x32\x80\x22\xd5\x3e\xdf\x77\xbf\x40\x66\xe2\xa5\xaa\x9b\x64\x53\x12\x67\xcd\x8d\x5d\x47\xec\x88\x64\x01\x48\x24\x12\x89\x44\xe6\x2f\x13\xde\x4a\xe5\xdd\xe9\x37\x63\xa1\xe5\x1a\x4e\x85\x9c\xcf\x95\x56\x7e\xf3\x8d\x10\x6d\x23\xfd\xdc\x [...]
 		},
 	}
 	fs["/"].(*vfsgen۰DirInfo).entries = []os.FileInfo{
diff --git a/pkg/trait/deployer.go b/pkg/trait/deployer.go
index 7735a37..67cdb79 100644
--- a/pkg/trait/deployer.go
+++ b/pkg/trait/deployer.go
@@ -17,6 +17,22 @@ limitations under the License.
 
 package trait
 
+import (
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"strings"
+
+	k8serrors "k8s.io/apimachinery/pkg/api/errors"
+	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+	"k8s.io/apimachinery/pkg/types"
+
+	ctrl "sigs.k8s.io/controller-runtime/pkg/client"
+
+	"github.com/apache/camel-k/pkg/util/patch"
+)
+
 // The deployer trait is responsible for deploying the resources owned by the integration, and can be used
 // to explicitly select the underlying controller that will manage the integration pods.
 //
@@ -29,6 +45,8 @@ type deployerTrait struct {
 
 var _ ControllerStrategySelector = &deployerTrait{}
 
+var hasServerSideApply = true
+
 func newDeployerTrait() Trait {
 	return &deployerTrait{
 		BaseTrait: NewBaseTrait("deployer", 900),
@@ -42,9 +60,28 @@ func (t *deployerTrait) Configure(e *Environment) (bool, error) {
 func (t *deployerTrait) Apply(e *Environment) error {
 	// Register a post action that patches the resources generated by the traits
 	e.PostActions = append(e.PostActions, func(env *Environment) error {
-		applier := e.Client.ServerOrClientSideApplier()
 		for _, resource := range env.Resources.Items() {
-			if err := applier.Apply(e.Ctx, resource); err != nil {
+			// We assume that server-side apply is enabled by default.
+			// It is currently convoluted to check proactively whether server-side apply
+			// is enabled: one would have to fetch the OpenAPI endpoint, which returns
+			// the entire server API document, then look up the resource PATCH endpoint and
+			// check its list of accepted MIME types.
+			// As a simpler solution, we fall back to client-side apply at the first
+			// 415 error, and assume server-side apply is not available globally.
+			if hasServerSideApply {
+				err := t.serverSideApply(env, resource)
+				switch {
+				case err == nil:
+					continue
+				case isIncompatibleServerError(err):
+					t.L.Info("Fallback to client-side apply to patch resources")
+					hasServerSideApply = false
+				default:
+					// Keep server-side apply unless server is incompatible with it
+					return err
+				}
+			}
+			if err := t.clientSideApply(env, resource); err != nil {
 				return err
 			}
 		}
@@ -54,6 +91,73 @@ func (t *deployerTrait) Apply(e *Environment) error {
 	return nil
 }
 
+func (t *deployerTrait) serverSideApply(env *Environment, resource ctrl.Object) error {
+	target, err := patch.PositiveApplyPatch(resource)
+	if err != nil {
+		return err
+	}
+	err = env.Client.Patch(env.Ctx, target, ctrl.Apply, ctrl.ForceOwnership, ctrl.FieldOwner("camel-k-operator"))
+	if err != nil {
+		return fmt.Errorf("error during apply resource: %s/%s: %w", resource.GetNamespace(), resource.GetName(), err)
+	}
+	// Update the resource with the response returned from the API server
+	return t.unstructuredToRuntimeObject(target, resource)
+}
+
+func (t *deployerTrait) clientSideApply(env *Environment, resource ctrl.Object) error {
+	err := env.Client.Create(env.Ctx, resource)
+	if err == nil {
+		return nil
+	} else if !k8serrors.IsAlreadyExists(err) {
+		return fmt.Errorf("error during create resource: %s/%s: %w", resource.GetNamespace(), resource.GetName(), err)
+	}
+	object := &unstructured.Unstructured{}
+	object.SetNamespace(resource.GetNamespace())
+	object.SetName(resource.GetName())
+	object.SetGroupVersionKind(resource.GetObjectKind().GroupVersionKind())
+	err = env.Client.Get(env.Ctx, ctrl.ObjectKeyFromObject(object), object)
+	if err != nil {
+		return err
+	}
+	p, err := patch.PositiveMergePatch(object, resource)
+	if err != nil {
+		return err
+	} else if len(p) == 0 {
+		// Update the resource with the object returned from the API server
+		return t.unstructuredToRuntimeObject(object, resource)
+	}
+	err = env.Client.Patch(env.Ctx, resource, ctrl.RawPatch(types.MergePatchType, p))
+	if err != nil {
+		return fmt.Errorf("error during patch %s/%s: %w", resource.GetNamespace(), resource.GetName(), err)
+	}
+	return nil
+}
+
+func (t *deployerTrait) unstructuredToRuntimeObject(u *unstructured.Unstructured, obj ctrl.Object) error {
+	data, err := json.Marshal(u)
+	if err != nil {
+		return err
+	}
+	return json.Unmarshal(data, obj)
+}
+
+func isIncompatibleServerError(err error) bool {
+	// A first, simpler check for older servers (e.g. OpenShift 3.11)
+	if strings.Contains(err.Error(), "415: Unsupported Media Type") {
+		return true
+	}
+
+	// 415: Unsupported media type means we're talking to a server which doesn't
+	// support server-side apply.
+	var serr *k8serrors.StatusError
+	if errors.As(err, &serr) {
+		return serr.Status().Code == http.StatusUnsupportedMediaType
+	}
+
+	// Non-StatusError means the error isn't because the server is incompatible.
+	return false
+}
+
 func (t *deployerTrait) SelectControllerStrategy(e *Environment) (*ControllerStrategy, error) {
 	if IsFalse(t.Enabled) {
 		return nil, nil
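The pattern introduced here (probe server-side apply once, fall back to client-side apply on HTTP 415) can be condensed as follows. This is a sketch against a plain controller-runtime client, not the trait code itself: the ConfigMap, the field-owner string and the simplified fallback are assumptions, and the actual commit additionally merge-patches resources that already exist.

package apply

import (
	"context"
	"errors"
	"net/http"

	corev1 "k8s.io/api/core/v1"
	k8serrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime/pkg/client"
)

// hasServerSideApply is flipped to false after the first 415 response,
// mirroring the package-level flag used in deployer.go above.
var hasServerSideApply = true

func applyConfigMap(ctx context.Context, c ctrl.Client, cm *corev1.ConfigMap) error {
	// Server-side apply needs apiVersion/kind in the request body.
	cm.TypeMeta = metav1.TypeMeta{Kind: "ConfigMap", APIVersion: "v1"}
	if hasServerSideApply {
		err := c.Patch(ctx, cm, ctrl.Apply, ctrl.ForceOwnership, ctrl.FieldOwner("example-owner"))
		if err == nil {
			return nil
		}
		var serr *k8serrors.StatusError
		if !errors.As(err, &serr) || serr.Status().Code != http.StatusUnsupportedMediaType {
			return err // a real failure, not an incompatible server
		}
		hasServerSideApply = false // old API server: use client-side apply from now on
	}
	// Simplified client-side fallback: create and tolerate "already exists".
	if err := c.Create(ctx, cm); err != nil && !k8serrors.IsAlreadyExists(err) {
		return err
	}
	return nil
}
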

[camel-k] 13/22: Fix #1107: update helm roles

commit 8a4f660781e3206f7d9fd4108dea17a70e356cb0
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Mon Dec 20 17:19:24 2021 +0100

    Fix #1107: update helm roles
---
 helm/camel-k/templates/operator-role.yaml | 205 +++++++++++++++++++++---------
 1 file changed, 143 insertions(+), 62 deletions(-)

diff --git a/helm/camel-k/templates/operator-role.yaml b/helm/camel-k/templates/operator-role.yaml
index 3afbe47..3f207fe 100644
--- a/helm/camel-k/templates/operator-role.yaml
+++ b/helm/camel-k/templates/operator-role.yaml
@@ -26,9 +26,40 @@ rules:
 - apiGroups:
   - camel.apache.org
   resources:
-  - "*"
+  - builds
+  - camelcatalogs
+  - integrationkits
+  - integrationplatforms
+  - integrations
+  - kameletbindings
+  - kamelets
+  verbs:
+  - create
+  - get
+  - list
+  - patch
+  - update
+  - watch
+- apiGroups:
+  - camel.apache.org
+  resources:
+  - builds
+  verbs:
+  - delete
+- apiGroups:
+  - camel.apache.org
+  resources:
+  - builds/status
+  - camelcatalogs/status
+  - integrationkits/status
+  - integrationplatforms/status
+  - integrations/status
+  - kameletbindings/status
+  - kamelets/status
   verbs:
-  - "*"
+  - get
+  - patch
+  - update
 - apiGroups:
   - ""
   resources:
@@ -87,21 +118,22 @@ rules:
   - update
   - watch
 - apiGroups:
-  - ""
+  - apps
   resources:
-  - events
+  - deployments
   verbs:
   - create
-  - patch
+  - delete
+  - deletecollection
   - get
   - list
+  - patch
+  - update
   - watch
 - apiGroups:
-  - apps
+  - batch
   resources:
-  - deployments
-  - replicasets
-  - statefulsets
+  - cronjobs
   verbs:
   - create
   - delete
@@ -114,7 +146,15 @@ rules:
 - apiGroups:
   - batch
   resources:
-  - cronjobs
+  - jobs
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - networking.k8s.io
+  resources:
+  - ingresses
   verbs:
   - create
   - delete
@@ -125,17 +165,20 @@ rules:
   - update
   - watch
 - apiGroups:
-  - apps
+  - ""
   resources:
-  - daemonsets
+  - events
   verbs:
+  - create
+  - patch
   - get
   - list
   - watch
 - apiGroups:
-  - extensions
+  - keda.sh
   resources:
-  - networking.k8s.io
+  - scaledobjects
+  - triggerauthentications
   verbs:
   - create
   - delete
@@ -146,53 +189,67 @@ rules:
   - update
   - watch
 - apiGroups:
-  - ""
-  - "build.openshift.io"
+  - serving.knative.dev
   resources:
-  - buildconfigs
-  - buildconfigs/webhooks
-  - builds
+  - services
   verbs:
   - create
   - delete
-  - deletecollection
   - get
   - list
   - patch
   - update
   - watch
 - apiGroups:
-  - ""
-  - "image.openshift.io"
+  - eventing.knative.dev
   resources:
-  - imagestreamimages
-  - imagestreammappings
-  - imagestreams
-  - imagestreams/secrets
-  - imagestreamtags
+  - triggers
   verbs:
   - create
   - delete
-  - deletecollection
   - get
   - list
   - patch
   - update
-  - watch
 - apiGroups:
-  - ""
-  - build.openshift.io
+  - messaging.knative.dev
   resources:
-  - buildconfigs/instantiate
-  - buildconfigs/instantiatebinary
-  - builds/clone
+  - subscriptions
   verbs:
   - create
+  - delete
+  - get
+  - list
+  - patch
+  - update
 - apiGroups:
-  - ""
-  - "route.openshift.io"
+  - sources.knative.dev
   resources:
-  - routes
+  - sinkbindings
+  verbs:
+  - create
+  - delete
+  - get
+  - list
+  - patch
+  - update
+- apiGroups:
+  - eventing.knative.dev
+  resources:
+  - brokers
+  verbs:
+  - get
+- apiGroups:
+  - messaging.knative.dev
+  resources:
+  - channels
+  - inmemorychannels
+  verbs:
+  - get
+- apiGroups:
+  - coordination.k8s.io
+  resources:
+  - leases
   verbs:
   - create
   - delete
@@ -203,16 +260,22 @@ rules:
   - update
   - watch
 - apiGroups:
-  - ""
-  - route.openshift.io
+  - camel.apache.org
   resources:
-  - routes/custom-host
+  - builds/finalizers
+  - integrationkits/finalizers
+  - integrationplatforms/finalizers
+  - integrations/finalizers
+  - kameletbindings/finalizers
   verbs:
-  - create
+  - update
 - apiGroups:
-  - serving.knative.dev
+  - ""
+  - build.openshift.io
   resources:
-  - services
+  - buildconfigs
+  - buildconfigs/webhooks
+  - builds
   verbs:
   - create
   - delete
@@ -223,11 +286,14 @@ rules:
   - update
   - watch
 - apiGroups:
-  - eventing.knative.dev
-  - messaging.knative.dev
-  - sources.knative.dev
+  - ""
+  - image.openshift.io
   resources:
-  - "*"
+  - imagestreamimages
+  - imagestreammappings
+  - imagestreams
+  - imagestreams/secrets
+  - imagestreamtags
   verbs:
   - create
   - delete
@@ -238,17 +304,19 @@ rules:
   - update
   - watch
 - apiGroups:
-  - rbac.authorization.k8s.io
+  - ""
+  - build.openshift.io
   resources:
-  - clusterroles
+  - buildconfigs/instantiate
+  - buildconfigs/instantiatebinary
+  - builds/clone
   verbs:
-  - bind
-  resourceNames:
-  - system:image-builder
+  - create
 - apiGroups:
-  - monitoring.coreos.com
+  - ""
+  - route.openshift.io
   resources:
-  - podmonitors
+  - routes
   verbs:
   - create
   - delete
@@ -259,18 +327,16 @@ rules:
   - update
   - watch
 - apiGroups:
-  - "kafka.strimzi.io"
+  - ""
+  - route.openshift.io
   resources:
-  - kafkatopics
-  - kafkas
+  - routes/custom-host
   verbs:
-  - get
-  - list
-  - watch
+  - create
 - apiGroups:
-  - "coordination.k8s.io"
+  - monitoring.coreos.com
   resources:
-  - leases
+  - podmonitors
   verbs:
   - create
   - delete
@@ -281,8 +347,23 @@ rules:
   - update
   - watch
 - apiGroups:
+  - kafka.strimzi.io
+  resources:
+  - kafkatopics
+  - kafkas
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
   - "apiextensions.k8s.io"
   resources:
   - customresourcedefinitions
   verbs:
   - get
+- apiGroups:
+  - rbac.authorization.k8s.io
+  resources:
+  - clusterroles
+  verbs:
+  - bind

[camel-k] 19/22: Fix #1107: add missing operator role

commit 01984610f86ab07631bb4f573be7ebbb95d19031
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Wed Dec 22 12:39:47 2021 +0100

    Fix #1107: add missing operator role
---
 config/rbac/operator-role.yaml            | 2 ++
 helm/camel-k/templates/operator-role.yaml | 2 ++
 pkg/resources/resources.go                | 4 ++--
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/config/rbac/operator-role.yaml b/config/rbac/operator-role.yaml
index 613ac5e..0941d6e 100644
--- a/config/rbac/operator-role.yaml
+++ b/config/rbac/operator-role.yaml
@@ -52,8 +52,10 @@ rules:
   - camelcatalogs/status
   - integrationkits/status
   - integrationplatforms/status
+  - integrations/scale
   - integrations/status
   - kameletbindings/status
+  - kameletbindings/scale
   - kamelets/status
   verbs:
   - get
diff --git a/helm/camel-k/templates/operator-role.yaml b/helm/camel-k/templates/operator-role.yaml
index 3f207fe..d30c8eb 100644
--- a/helm/camel-k/templates/operator-role.yaml
+++ b/helm/camel-k/templates/operator-role.yaml
@@ -53,8 +53,10 @@ rules:
   - camelcatalogs/status
   - integrationkits/status
   - integrationplatforms/status
+  - integrations/scale
   - integrations/status
   - kameletbindings/status
+  - kameletbindings/scale
   - kamelets/status
   verbs:
   - get
diff --git a/pkg/resources/resources.go b/pkg/resources/resources.go
index 5d8301d..e64bea2 100644
--- a/pkg/resources/resources.go
+++ b/pkg/resources/resources.go
@@ -385,9 +385,9 @@ var assets = func() http.FileSystem {
 		"/rbac/operator-role.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "operator-role.yaml",
 			modTime:          time.Time{},
-			uncompressedSize: 2879,
+			uncompressedSize: 2928,
 
-			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xcc\x56\xc1\x6e\xdb\x46\x10\xbd\xf3\x2b\x06\xe2\x25\x01\x6c\xa9\xed\xa9\x50\x4f\x6a\x62\xb7\x42\x03\x09\x30\x95\x06\x39\x0e\x97\x23\x6a\xaa\xe5\xce\x76\x76\x69\x59\xfd\xfa\x62\x29\x2a\xa6\x4d\x2b\x28\x9a\xa0\x29\x2f\x5e\xee\x8e\xdf\xbc\xf7\xe6\xad\xcd\x1c\xae\xbf\xde\x93\xe5\xf0\x8e\x0d\xb9\x40\x15\x44\x81\xb8\x23\x58\x78\x34\x3b\x82\x42\xb6\xf1\x80\x4a\x70\x2b\xad\xab\x30\xb2\x38\x78\xb5\x28\x6e\x5f\x43\xeb\x2a\x52\x [...]
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xcc\x56\xc1\x6e\xe3\x36\x10\xbd\xeb\x2b\x06\xd6\x65\x17\x88\xed\xb6\xa7\xc2\x3d\xb9\xbb\x49\x6b\x74\x61\x03\x91\xb7\x8b\x3d\x8e\xc8\xb1\x3c\x35\xc5\x61\x49\x2a\x8e\xfb\xf5\x05\x65\x29\x76\x22\x3b\x28\xba\x8b\x6e\x7d\x09\x45\x4e\xde\xbc\x79\xef\x51\x50\x0e\xe3\xaf\xf7\xcb\x72\xf8\xc0\x8a\x6c\x20\x0d\x51\x20\x6e\x09\xe6\x0e\xd5\x96\xa0\x90\x4d\xdc\xa3\x27\xb8\x93\xc6\x6a\x8c\x2c\x16\xde\xcc\x8b\xbb\xb7\xd0\x58\x4d\x1e\x [...]
 		},
 		"/rbac/patch-role-to-clusterrole.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "patch-role-to-clusterrole.yaml",

[camel-k] 17/22: Fix #1107: remove limit from doc

commit 567c2de448d9a90f299791d979dd6cf07c9f8797
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Tue Dec 21 22:04:33 2021 +0100

    Fix #1107: remove limit from doc
---
 addons/keda/keda.go                 | 2 +-
 docs/modules/traits/pages/keda.adoc | 2 +-
 resources/traits.yaml               | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index 0c972ab..ad9f71d 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -80,7 +80,7 @@ type kedaTrait struct {
 	CamelCaseConversion *bool `property:"camel-case-conversion" json:"camelCaseConversion,omitempty"`
 	// Set the spec->replicas field on the top level controller to an explicit value if missing, to allow KEDA to recognize it as a scalable resource.
 	HackControllerReplicas *bool `property:"hack-controller-replicas" json:"hackControllerReplicas,omitempty"`
-	// Interval (seconds) to check each trigger on (minimum 10 seconds).
+	// Interval (seconds) to check each trigger on.
 	PollingInterval *int32 `property:"polling-interval" json:"pollingInterval,omitempty"`
 	// The wait period between the last active trigger reported and scaling the resource back to 0.
 	CooldownPeriod *int32 `property:"cooldown-period" json:"cooldownPeriod,omitempty"`
diff --git a/docs/modules/traits/pages/keda.adoc b/docs/modules/traits/pages/keda.adoc
index 340e150..df6c8d9 100644
--- a/docs/modules/traits/pages/keda.adoc
+++ b/docs/modules/traits/pages/keda.adoc
@@ -48,7 +48,7 @@ The following configuration options are available:
 
 | keda.polling-interval
 | int32
-| Interval (seconds) to check each trigger on (minimum 10 seconds).
+| Interval (seconds) to check each trigger on.
 
 | keda.cooldown-period
 | int32
diff --git a/resources/traits.yaml b/resources/traits.yaml
index f1a813f..638418d 100755
--- a/resources/traits.yaml
+++ b/resources/traits.yaml
@@ -603,7 +603,7 @@ traits:
       value if missing, to allow KEDA to recognize it as a scalable resource.
   - name: polling-interval
     type: int32
-    description: Interval (seconds) to check each trigger on (minimum 10 seconds).
+    description: Interval (seconds) to check each trigger on.
   - name: cooldown-period
     type: int32
     description: The wait period between the last active trigger reported and scaling

[camel-k] 07/22: Fix #1107: refactoring annotations and secret generation

commit 7c9596cdc44026723e0ef4498186a473c5de10c6
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Fri Dec 17 15:09:15 2021 +0100

    Fix #1107: refactoring annotations and secret generation
---
 addons/keda/duck/v1alpha1/doc.go            |   2 +-
 addons/keda/duck/v1alpha1/register.go       |   6 +-
 addons/keda/keda.go                         | 260 +++++++++++++++++++++-------
 docs/modules/ROOT/nav.adoc                  |   2 +-
 pkg/apis/camel/v1alpha1/jsonschema_types.go |   2 -
 5 files changed, 201 insertions(+), 71 deletions(-)

diff --git a/addons/keda/duck/v1alpha1/doc.go b/addons/keda/duck/v1alpha1/doc.go
index 56d897a..0ce22d9 100644
--- a/addons/keda/duck/v1alpha1/doc.go
+++ b/addons/keda/duck/v1alpha1/doc.go
@@ -15,7 +15,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */
 
-// Package duck contains a partial schema of the Keda APIs
+// Package duck contains a partial schema of the KEDA APIs
 // +kubebuilder:object:generate=true
 // +groupName=keda.sh
 package v1alpha1
diff --git a/addons/keda/duck/v1alpha1/register.go b/addons/keda/duck/v1alpha1/register.go
index a3814da..8ed0791 100644
--- a/addons/keda/duck/v1alpha1/register.go
+++ b/addons/keda/duck/v1alpha1/register.go
@@ -24,13 +24,13 @@ import (
 )
 
 const (
-	KedaGroup   = "keda.sh"
-	KedaVersion = "v1alpha1"
+	KEDAGroup   = "keda.sh"
+	KEDAVersion = "v1alpha1"
 )
 
 var (
 	// SchemeGroupVersion is group version used to register these objects.
-	SchemeGroupVersion = schema.GroupVersion{Group: KedaGroup, Version: KedaVersion}
+	SchemeGroupVersion = schema.GroupVersion{Group: KEDAGroup, Version: KEDAVersion}
 
 	// SchemeBuilder is used to add go types to the GroupVersionKind scheme.
 	SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index 8396742..ffe637c 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -18,9 +18,12 @@ limitations under the License.
 package keda
 
 import (
+	"bytes"
+	"encoding/json"
 	"fmt"
 	"sort"
 	"strings"
+	"text/template"
 
 	kedav1alpha1 "github.com/apache/camel-k/addons/keda/duck/v1alpha1"
 	camelv1 "github.com/apache/camel-k/pkg/apis/camel/v1"
@@ -38,25 +41,29 @@ import (
 	"github.com/pkg/errors"
 	scase "github.com/stoewer/go-strcase"
 	v1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"sigs.k8s.io/controller-runtime/pkg/client"
 )
 
 const (
-	// kameletURNTypePrefix indicates the scaler type associated to a Kamelet
-	kameletURNTypePrefix = "urn:keda:type:"
-	// kameletURNMetadataPrefix allows binding Kamelet properties to Keda metadata
+	// kameletURNMetadataPrefix allows binding Kamelet properties to KEDA metadata
 	kameletURNMetadataPrefix = "urn:keda:metadata:"
-	// kameletURNRequiredTag is used to mark properties required by Keda
+	// kameletURNAuthenticationPrefix allows binding Kamelet properties to KEDA authentication options
+	kameletURNAuthenticationPrefix = "urn:keda:authentication:"
+	// kameletURNRequiredTag is used to mark properties required by KEDA
 	kameletURNRequiredTag = "urn:keda:required"
 
-	// kameletAnnotationType is an alternative to kameletURNTypePrefix.
-	// To be removed when the `spec -> definition -> x-descriptors` field becomes stable.
+	// kameletAnnotationType indicates the scaler type associated to a Kamelet
 	kameletAnnotationType = "camel.apache.org/keda.type"
+	// kameletAnnotationMetadataPrefix is used to define virtual metadata fields computed from Kamelet properties
+	kameletAnnotationMetadataPrefix = "camel.apache.org/keda.metadata."
+	// kameletAnnotationAuthenticationPrefix is used to define virtual authentication fields computed from Kamelet properties
+	kameletAnnotationAuthenticationPrefix = "camel.apache.org/keda.authentication."
 )
 
-// The Keda trait can be used for automatic integration with Keda autoscalers.
+// The KEDA trait can be used for automatic integration with KEDA autoscalers.
 //
-// The Keda trait is disabled by default.
+// The KEDA trait is disabled by default.
 //
 // +camel-k:trait=keda.
 type kedaTrait struct {
@@ -65,7 +72,7 @@ type kedaTrait struct {
 	Auto *bool `property:"auto" json:"auto,omitempty"`
 	// Convert metadata properties to camelCase (needed because trait properties use kebab-case). Enabled by default.
 	CamelCaseConversion *bool `property:"camel-case-conversion" json:"camelCaseConversion,omitempty"`
-	// Set the spec->replicas field on the top level controller to an explicit value if missing, to allow Keda to recognize it as a scalable resource
+	// Set the spec->replicas field on the top level controller to an explicit value if missing, to allow KEDA to recognize it as a scalable resource
 	HackControllerReplicas *bool `property:"hack-controller-replicas" json:"hackControllerReplicas,omitempty"`
 	// Interval (seconds) to check each trigger on (minimum 10 seconds)
 	PollingInterval *int32 `property:"polling-interval" json:"pollingInterval,omitempty"`
@@ -77,14 +84,16 @@ type kedaTrait struct {
 	MinReplicaCount *int32 `property:"min-replica-count" json:"minReplicaCount,omitempty"`
 	// Maximum number of replicas
 	MaxReplicaCount *int32 `property:"max-replica-count" json:"maxReplicaCount,omitempty"`
-	// Definition of triggers according to the Keda format. Each trigger must contain `type` field corresponding
-	// to the name of a Keda autoscaler and a key/value map named `metadata` containing specific trigger options.
+	// Definition of triggers according to the KEDA format. Each trigger must contain `type` field corresponding
+	// to the name of a KEDA autoscaler and a key/value map named `metadata` containing specific trigger options.
 	Triggers []kedaTrigger `property:"triggers" json:"triggers,omitempty"`
 }
 
 type kedaTrigger struct {
 	Type     string            `property:"type" json:"type,omitempty"`
 	Metadata map[string]string `property:"metadata" json:"metadata,omitempty"`
+
+	authentication map[string]string
 }
 
 // NewKedaTrait --.
@@ -121,20 +130,19 @@ func (t *kedaTrait) Apply(e *trait.Environment) error {
 			}
 		}
 	} else if e.IntegrationInRunningPhases() {
-		if so, err := t.getScaledObject(e); err != nil {
+		if err := t.addScalingResources(e); err != nil {
 			return err
-		} else if so != nil {
-			e.Resources.Add(so)
 		}
 	}
 
 	return nil
 }
 
-func (t *kedaTrait) getScaledObject(e *trait.Environment) (*kedav1alpha1.ScaledObject, error) {
+func (t *kedaTrait) addScalingResources(e *trait.Environment) error {
 	if len(t.Triggers) == 0 {
-		return nil, nil
+		return nil
 	}
+
 	obj := kedav1alpha1.NewScaledObject(e.Integration.Namespace, e.Integration.Name)
 	obj.Spec.ScaleTargetRef = t.getTopControllerReference(e)
 	if t.PollingInterval != nil {
@@ -152,7 +160,7 @@ func (t *kedaTrait) getScaledObject(e *trait.Environment) (*kedav1alpha1.ScaledO
 	if t.MaxReplicaCount != nil {
 		obj.Spec.MaxReplicaCount = t.MaxReplicaCount
 	}
-	for _, trigger := range t.Triggers {
+	for idx, trigger := range t.Triggers {
 		meta := make(map[string]string)
 		for k, v := range trigger.Metadata {
 			kk := k
@@ -161,14 +169,56 @@ func (t *kedaTrait) getScaledObject(e *trait.Environment) (*kedav1alpha1.ScaledO
 			}
 			meta[kk] = v
 		}
+		var authenticationRef *kedav1alpha1.ScaledObjectAuthRef
+		if len(trigger.authentication) > 0 {
+			// Save all authentication config in a secret
+			extConfigName := fmt.Sprintf("%s-keda-%d", e.Integration.Name, idx)
+			secret := v1.Secret{
+				TypeMeta: metav1.TypeMeta{
+					Kind:       "Secret",
+					APIVersion: v1.SchemeGroupVersion.String(),
+				},
+				ObjectMeta: metav1.ObjectMeta{
+					Namespace: e.Integration.Namespace,
+					Name:      extConfigName,
+				},
+				StringData: trigger.authentication,
+			}
+			e.Resources.Add(&secret)
+
+			// Link the secret using a TriggerAuthentication
+			triggerAuth := kedav1alpha1.TriggerAuthentication{
+				TypeMeta: metav1.TypeMeta{
+					Kind:       "TriggerAuthentication",
+					APIVersion: kedav1alpha1.SchemeGroupVersion.String(),
+				},
+				ObjectMeta: metav1.ObjectMeta{
+					Namespace: e.Integration.Namespace,
+					Name:      extConfigName,
+				},
+			}
+			for _, k := range util.SortedStringMapKeys(trigger.authentication) {
+				triggerAuth.Spec.SecretTargetRef = append(triggerAuth.Spec.SecretTargetRef, kedav1alpha1.AuthSecretTargetRef{
+					Parameter: k,
+					Name:      extConfigName,
+					Key:       k,
+				})
+			}
+			e.Resources.Add(&triggerAuth)
+			authenticationRef = &kedav1alpha1.ScaledObjectAuthRef{
+				Name: extConfigName,
+			}
+		}
+
 		st := kedav1alpha1.ScaleTriggers{
-			Type:     trigger.Type,
-			Metadata: meta,
+			Type:              trigger.Type,
+			Metadata:          meta,
+			AuthenticationRef: authenticationRef,
 		}
 		obj.Spec.Triggers = append(obj.Spec.Triggers, st)
 	}
-
-	return &obj, nil
+	e.Resources.Add(&obj)
+	return nil
 }
 
 func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
@@ -226,14 +276,14 @@ func (t *kedaTrait) populateTriggersFromKamelets(e *trait.Environment) error {
 	}
 	kameletURIs := make(map[string][]string)
 	metadata.Each(e.CamelCatalog, sources, func(_ int, meta metadata.IntegrationMetadata) bool {
-		for _, uri := range meta.FromURIs {
-			if kameletStr := source.ExtractKamelet(uri); kameletStr != "" && camelv1alpha1.ValidKameletName(kameletStr) {
+		for _, kameletURI := range meta.FromURIs {
+			if kameletStr := source.ExtractKamelet(kameletURI); kameletStr != "" && camelv1alpha1.ValidKameletName(kameletStr) {
 				kamelet := kameletStr
 				if strings.Contains(kamelet, "/") {
 					kamelet = kamelet[0:strings.Index(kamelet, "/")]
 				}
 				uriList := kameletURIs[kamelet]
-				util.StringSliceUniqueAdd(&uriList, uri)
+				util.StringSliceUniqueAdd(&uriList, kameletURI)
 				sort.Strings(uriList)
 				kameletURIs[kamelet] = uriList
 			}
@@ -275,84 +325,166 @@ func (t *kedaTrait) populateTriggersFromKamelet(e *trait.Environment, repo repos
 	if kamelet.Spec.Definition == nil {
 		return nil
 	}
-	triggerType := t.getKedaType(kamelet)
+	triggerType := kamelet.Annotations[kameletAnnotationType]
 	if triggerType == "" {
 		return nil
 	}
 
-	metadataToProperty := make(map[string]string)
-	requiredMetadata := make(map[string]bool)
+	kedaParamToProperty := make(map[string]string)
+	requiredKEDAParam := make(map[string]bool)
+	kedaAuthenticationParam := make(map[string]bool)
 	for k, def := range kamelet.Spec.Definition.Properties {
 		if metadataName := t.getXDescriptorValue(def.XDescriptors, kameletURNMetadataPrefix); metadataName != "" {
-			metadataToProperty[metadataName] = k
+			kedaParamToProperty[metadataName] = k
 			if req := t.isXDescriptorPresent(def.XDescriptors, kameletURNRequiredTag); req {
-				requiredMetadata[metadataName] = true
+				requiredKEDAParam[metadataName] = true
 			}
 		}
+		if authenticationName := t.getXDescriptorValue(def.XDescriptors, kameletURNAuthenticationPrefix); authenticationName != "" {
+			kedaParamToProperty[authenticationName] = k
+			if req := t.isXDescriptorPresent(def.XDescriptors, kameletURNRequiredTag); req {
+				requiredKEDAParam[authenticationName] = true
+			}
+			kedaAuthenticationParam[authenticationName] = true
+		}
 	}
-	for _, uri := range uris {
-		if err := t.populateTriggersFromKameletURI(e, kameletName, triggerType, metadataToProperty, requiredMetadata, uri); err != nil {
+	for _, kameletURI := range uris {
+		if err := t.populateTriggersFromKameletURI(e, kamelet, triggerType, kedaParamToProperty, requiredKEDAParam, kedaAuthenticationParam, kameletURI); err != nil {
 			return err
 		}
 	}
 	return nil
 }
 
-func (t *kedaTrait) populateTriggersFromKameletURI(e *trait.Environment, kameletName string, triggerType string, metadataToProperty map[string]string, requiredMetadata map[string]bool, kameletURI string) error {
-	metaValues := make(map[string]string, len(metadataToProperty))
-	for metaParam, prop := range metadataToProperty {
-		// From lowest priority to top
-		if v := e.ApplicationProperties[fmt.Sprintf("camel.kamelet.%s.%s", kameletName, prop)]; v != "" {
-			metaValues[metaParam] = v
-		}
-		if kameletID := uri.GetPathSegment(kameletURI, 0); kameletID != "" {
-			kameletSpecificKey := fmt.Sprintf("camel.kamelet.%s.%s.%s", kameletName, kameletID, prop)
-			if v := e.ApplicationProperties[kameletSpecificKey]; v != "" {
-				metaValues[metaParam] = v
-			}
-			for _, c := range e.Integration.Spec.Configuration {
-				if c.Type == "property" && strings.HasPrefix(c.Value, kameletSpecificKey) {
-					v, err := property.DecodePropertyFileValue(c.Value, kameletSpecificKey)
-					if err != nil {
-						return errors.Wrapf(err, "could not decode property %q", kameletSpecificKey)
-					}
-					metaValues[metaParam] = v
-				}
-			}
+func (t *kedaTrait) populateTriggersFromKameletURI(e *trait.Environment, kamelet *camelv1alpha1.Kamelet, triggerType string, kedaParamToProperty map[string]string, requiredKEDAParam map[string]bool, authenticationParams map[string]bool, kameletURI string) error {
+	metaValues := make(map[string]string, len(kedaParamToProperty))
+	for metaParam, prop := range kedaParamToProperty {
+		v, err := t.getKameletPropertyValue(e, kamelet, kameletURI, prop)
+		if err != nil {
+			return err
 		}
-		if v := uri.GetQueryParameter(kameletURI, prop); v != "" {
+		if v != "" {
 			metaValues[metaParam] = v
 		}
 	}
 
-	for req := range requiredMetadata {
+	metaTemplates, templateAuthParams, err := t.evaluateTemplateParameters(e, kamelet, kameletURI)
+	if err != nil {
+		return err
+	}
+	for k, v := range metaTemplates {
+		metaValues[k] = v
+	}
+	for k, v := range templateAuthParams {
+		authenticationParams[k] = v
+	}
+
+	for req := range requiredKEDAParam {
 		if _, ok := metaValues[req]; !ok {
-			return fmt.Errorf("metadata parameter %q is missing in configuration: it is required by Keda", req)
+			return fmt.Errorf("metadata parameter %q is missing in configuration: it is required by KEDA", req)
 		}
 	}
 
-	kebabMetaValues := make(map[string]string, len(metaValues))
+	onlyMetaValues := make(map[string]string, len(metaValues)-len(authenticationParams))
+	onlyAuthValues := make(map[string]string, len(authenticationParams))
 	for k, v := range metaValues {
-		kebabMetaValues[scase.KebabCase(k)] = v
+		if authenticationParams[k] {
+			onlyAuthValues[k] = v
+		} else {
+			onlyMetaValues[k] = v
+		}
 	}
 
 	// Add the trigger in config
 	trigger := kedaTrigger{
-		Type:     triggerType,
-		Metadata: kebabMetaValues,
+		Type:           triggerType,
+		Metadata:       onlyMetaValues,
+		authentication: onlyAuthValues,
 	}
 	t.Triggers = append(t.Triggers, trigger)
 	return nil
 }
 
-func (t *kedaTrait) getKedaType(kamelet *camelv1alpha1.Kamelet) string {
+func (t *kedaTrait) evaluateTemplateParameters(e *trait.Environment, kamelet *camelv1alpha1.Kamelet, kameletURI string) (map[string]string, map[string]bool, error) {
+	paramTemplates := make(map[string]string)
+	authenticationParam := make(map[string]bool)
+	for annotation, expr := range kamelet.Annotations {
+		if strings.HasPrefix(annotation, kameletAnnotationMetadataPrefix) {
+			paramName := annotation[len(kameletAnnotationMetadataPrefix):]
+			paramTemplates[paramName] = expr
+		} else if strings.HasPrefix(annotation, kameletAnnotationAuthenticationPrefix) {
+			paramName := annotation[len(kameletAnnotationAuthenticationPrefix):]
+			paramTemplates[paramName] = expr
+			authenticationParam[paramName] = true
+		}
+	}
+
+	kameletPropValues := make(map[string]string)
 	if kamelet.Spec.Definition != nil {
-		triggerType := t.getXDescriptorValue(kamelet.Spec.Definition.XDescriptors, kameletURNTypePrefix)
-		if triggerType != "" {
-			return triggerType
+		for prop := range kamelet.Spec.Definition.Properties {
+			val, err := t.getKameletPropertyValue(e, kamelet, kameletURI, prop)
+			if err != nil {
+				return nil, nil, err
+			}
+			if val != "" {
+				kameletPropValues[prop] = val
+			}
+		}
+	}
+
+	paramValues := make(map[string]string, len(paramTemplates))
+	for param, expr := range paramTemplates {
+		tmpl, err := template.New(fmt.Sprintf("kamelet-param-%s", param)).Parse(expr)
+		if err != nil {
+			return nil, nil, errors.Wrapf(err, "invalid template for KEDA parameter %q: %q", param, expr)
+		}
+		var buf bytes.Buffer
+		if err := tmpl.Execute(&buf, kameletPropValues); err != nil {
+			return nil, nil, errors.Wrapf(err, "unable to process template for KEDA parameter %q: %q", param, expr)
+		}
+		paramValues[param] = buf.String()
+	}
+	return paramValues, authenticationParam, nil
+}
+
+func (t *kedaTrait) getKameletPropertyValue(e *trait.Environment, kamelet *v1alpha1.Kamelet, kameletURI, prop string) (string, error) {
+	// From top priority to lowest
+	if v := uri.GetQueryParameter(kameletURI, prop); v != "" {
+		return v, nil
+	}
+	if kameletID := uri.GetPathSegment(kameletURI, 0); kameletID != "" {
+		kameletSpecificKey := fmt.Sprintf("camel.kamelet.%s.%s.%s", kamelet.Name, kameletID, prop)
+		for _, c := range e.Integration.Spec.Configuration {
+			if c.Type == "property" && strings.HasPrefix(c.Value, kameletSpecificKey) {
+				v, err := property.DecodePropertyFileValue(c.Value, kameletSpecificKey)
+				if err != nil {
+					return "", errors.Wrapf(err, "could not decode property %q", kameletSpecificKey)
+				}
+				return v, nil
+			}
+		}
+
+		if v := e.ApplicationProperties[kameletSpecificKey]; v != "" {
+			return v, nil
+		}
+
+	}
+	if v := e.ApplicationProperties[fmt.Sprintf("camel.kamelet.%s.%s", kamelet.Name, prop)]; v != "" {
+		return v, nil
+	}
+	if kamelet.Spec.Definition != nil {
+		if schema, ok := kamelet.Spec.Definition.Properties[prop]; ok && schema.Default != nil {
+			var val interface{}
+			d := json.NewDecoder(bytes.NewReader(schema.Default.RawMessage))
+			d.UseNumber()
+			if err := d.Decode(&val); err != nil {
+				return "", errors.Wrapf(err, "cannot decode default value for property %q", prop)
+			}
+			v := fmt.Sprintf("%v", val)
+			return v, nil
 		}
 	}
-	return kamelet.Annotations[kameletAnnotationType]
+	return "", nil
 }
 
 func (t *kedaTrait) getXDescriptorValue(descriptors []string, prefix string) string {
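To make the Secret/TriggerAuthentication wiring above concrete, here is a small sketch (not part of the commit) that builds the same shape of object with the duck types: one AuthSecretTargetRef per authentication parameter, all pointing at the generated Secret. The integration and parameter names are invented; the "<integration>-keda-<trigger index>" naming comes from the code above.

package main

import (
	"fmt"

	kedav1alpha1 "github.com/apache/camel-k/addons/keda/duck/v1alpha1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	extConfigName := "my-it-keda-0" // "<integration>-keda-<trigger index>"
	auth := kedav1alpha1.TriggerAuthentication{
		ObjectMeta: metav1.ObjectMeta{Namespace: "test", Name: extConfigName},
	}
	// One entry per authentication parameter, keys sorted as in the trait code.
	for _, param := range []string{"password", "user"} {
		auth.Spec.SecretTargetRef = append(auth.Spec.SecretTargetRef, kedav1alpha1.AuthSecretTargetRef{
			Parameter: param,
			Name:      extConfigName, // the generated Secret holding the values
			Key:       param,
		})
	}
	fmt.Printf("%s links %d secret keys\n", auth.Name, len(auth.Spec.SecretTargetRef))
}
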
diff --git a/docs/modules/ROOT/nav.adoc b/docs/modules/ROOT/nav.adoc
index 890e733..5ece5cd 100644
--- a/docs/modules/ROOT/nav.adoc
+++ b/docs/modules/ROOT/nav.adoc
@@ -63,7 +63,7 @@
 ** xref:traits:jolokia.adoc[Jolokia]
 ** xref:traits:jvm.adoc[Jvm]
 ** xref:traits:kamelets.adoc[Kamelets]
-** xref:traits:keda.adoc[Keda]
+** xref:traits:keda.adoc[KEDA]
 ** xref:traits:knative-service.adoc[Knative Service]
 ** xref:traits:knative.adoc[Knative]
 ** xref:traits:logging.adoc[Logging]
diff --git a/pkg/apis/camel/v1alpha1/jsonschema_types.go b/pkg/apis/camel/v1alpha1/jsonschema_types.go
index 87e178b..a93e557 100644
--- a/pkg/apis/camel/v1alpha1/jsonschema_types.go
+++ b/pkg/apis/camel/v1alpha1/jsonschema_types.go
@@ -89,8 +89,6 @@ type JSONSchemaProps struct {
 	ExternalDocs *ExternalDocumentation    `json:"externalDocs,omitempty"`
 	Schema       JSONSchemaURL             `json:"$schema,omitempty"`
 	Type         string                    `json:"type,omitempty"`
-	// XDescriptors is a list of extended properties that trigger a custom behavior in external systems
-	XDescriptors []string `json:"x-descriptors,omitempty"`
 }
 
 // RawMessage is a raw encoded JSON value.
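The annotation-driven parameters introduced in this commit work by treating the value of an annotation such as camel.apache.org/keda.metadata.<param> as a Go template that is evaluated against the resolved Kamelet property values. A minimal, self-contained sketch of that evaluation (the property names, values and parameter name are invented):

package main

import (
	"bytes"
	"fmt"
	"text/template"
)

func main() {
	// Resolved Kamelet properties (URI parameters, application properties, defaults).
	props := map[string]string{
		"brokers": "my-cluster-kafka-bootstrap:9092",
		"topic":   "orders",
	}
	// Value of a hypothetical camel.apache.org/keda.metadata.bootstrapServers annotation.
	expr := "{{.brokers}}"

	tmpl, err := template.New("kamelet-param-bootstrapServers").Parse(expr)
	if err != nil {
		panic(err)
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, props); err != nil {
		panic(err)
	}
	fmt.Println("bootstrapServers =", buf.String()) // my-cluster-kafka-bootstrap:9092
}
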

[camel-k] 16/22: Fix #1107: fix linter

commit cdd75b2e53a337af2cbfc454676464ab3716a98b
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Tue Dec 21 11:02:20 2021 +0100

    Fix #1107: fix linter
---
 addons/keda/duck/v1alpha1/duck_types.go | 16 +++++++-------
 addons/keda/keda.go                     | 37 +++++++++++++++------------------
 addons/keda/keda_test.go                | 12 +++++------
 docs/modules/traits/pages/keda.adoc     |  4 ++--
 pkg/client/serverside.go                |  4 +++-
 pkg/cmd/kit_create.go                   |  2 +-
 pkg/cmd/run.go                          |  6 +++---
 pkg/install/kamelets.go                 |  8 +------
 pkg/resources/resources.go              |  4 ++--
 pkg/util/uri/uri.go                     |  2 +-
 pkg/util/util.go                        |  5 ++---
 resources/traits.yaml                   |  4 ++--
 12 files changed, 48 insertions(+), 56 deletions(-)

diff --git a/addons/keda/duck/v1alpha1/duck_types.go b/addons/keda/duck/v1alpha1/duck_types.go
index 90a20bf..a278ead 100644
--- a/addons/keda/duck/v1alpha1/duck_types.go
+++ b/addons/keda/duck/v1alpha1/duck_types.go
@@ -27,7 +27,7 @@ import (
 // +genclient:noStatus
 // +kubebuilder:object:root=true
 
-// ScaledObject is a specification for a ScaledObject resource
+// ScaledObject is a specification for a ScaledObject resource.
 type ScaledObject struct {
 	metav1.TypeMeta   `json:",inline"`
 	metav1.ObjectMeta `json:"metadata,omitempty"`
@@ -35,7 +35,7 @@ type ScaledObject struct {
 	Spec ScaledObjectSpec `json:"spec"`
 }
 
-// ScaledObjectSpec is the spec for a ScaledObject resource
+// ScaledObjectSpec is the spec for a ScaledObject resource.
 type ScaledObjectSpec struct {
 	ScaleTargetRef *v1.ObjectReference `json:"scaleTargetRef"`
 	// +optional
@@ -52,7 +52,7 @@ type ScaledObjectSpec struct {
 	Triggers []ScaleTriggers `json:"triggers"`
 }
 
-// ScaleTriggers reference the scaler that will be used
+// ScaleTriggers reference the scaler that will be used.
 type ScaleTriggers struct {
 	Type string `json:"type"`
 	// +optional
@@ -65,7 +65,7 @@ type ScaleTriggers struct {
 }
 
 // ScaledObjectAuthRef points to the TriggerAuthentication or ClusterTriggerAuthentication object that
-// is used to authenticate the scaler with the environment
+// is used to authenticate the scaler with the environment.
 type ScaledObjectAuthRef struct {
 	Name string `json:"name"`
 	// Kind of the resource being referred to. Defaults to TriggerAuthentication.
@@ -87,7 +87,7 @@ type ScaledObjectList struct {
 // +genclient:noStatus
 // +kubebuilder:object:root=true
 
-// TriggerAuthentication defines how a trigger can authenticate
+// TriggerAuthentication defines how a trigger can authenticate.
 type TriggerAuthentication struct {
 	metav1.TypeMeta   `json:",inline"`
 	metav1.ObjectMeta `json:"metadata,omitempty"`
@@ -95,13 +95,13 @@ type TriggerAuthentication struct {
 	Spec TriggerAuthenticationSpec `json:"spec"`
 }
 
-// TriggerAuthenticationSpec defines the various ways to authenticate
+// TriggerAuthenticationSpec defines the various ways to authenticate.
 type TriggerAuthenticationSpec struct {
 	// +optional
 	SecretTargetRef []AuthSecretTargetRef `json:"secretTargetRef,omitempty"`
 }
 
-// AuthSecretTargetRef is used to authenticate using a reference to a secret
+// AuthSecretTargetRef is used to authenticate using a reference to a secret.
 type AuthSecretTargetRef struct {
 	Parameter string `json:"parameter"`
 	Name      string `json:"name"`
@@ -110,7 +110,7 @@ type AuthSecretTargetRef struct {
 
 // +kubebuilder:object:root=true
 
-// TriggerAuthenticationList contains a list of TriggerAuthentication
+// TriggerAuthenticationList contains a list of TriggerAuthentication.
 type TriggerAuthenticationList struct {
 	metav1.TypeMeta `json:",inline"`
 	metav1.ListMeta `json:"metadata"`
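
As a reader aid, the duck-typed API above is what the KEDA trait ultimately emits for an
Integration. The following is a minimal, hypothetical sketch (not taken from the repository)
of building a ScaledObject with these types; the target name, namespace and trigger type are
invented, trigger fields not shown in this diff (for example the scaler metadata) are left
out, and it assumes the `v1` import in the duck types is k8s.io/api/core/v1, as the
ObjectReference fields suggest.

    package main

    import (
        "fmt"

        kedav1alpha1 "github.com/apache/camel-k/addons/keda/duck/v1alpha1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Hypothetical ScaledObject pointing KEDA at an Integration named "my-it".
        so := kedav1alpha1.ScaledObject{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "my-it",   // invented name
                Namespace: "default", // invented namespace
            },
            Spec: kedav1alpha1.ScaledObjectSpec{
                ScaleTargetRef: &corev1.ObjectReference{
                    APIVersion: "camel.apache.org/v1",
                    Kind:       "Integration",
                    Name:       "my-it",
                },
                Triggers: []kedav1alpha1.ScaleTriggers{
                    {Type: "cron"}, // scaler type only; other trigger fields omitted
                },
            },
        }
        fmt.Println(so.Spec.ScaleTargetRef.Name, so.Spec.Triggers[0].Type)
    }
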
diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index c446ea3..0c972ab 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -27,7 +27,6 @@ import (
 
 	kedav1alpha1 "github.com/apache/camel-k/addons/keda/duck/v1alpha1"
 	camelv1 "github.com/apache/camel-k/pkg/apis/camel/v1"
-	"github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
 	camelv1alpha1 "github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
 	"github.com/apache/camel-k/pkg/kamelet/repository"
 	"github.com/apache/camel-k/pkg/metadata"
@@ -46,18 +45,18 @@ import (
 )
 
 const (
-	// kameletURNMetadataPrefix allows binding Kamelet properties to KEDA metadata
+	// kameletURNMetadataPrefix allows binding Kamelet properties to KEDA metadata.
 	kameletURNMetadataPrefix = "urn:keda:metadata:"
-	// kameletURNAuthenticationPrefix allows binding Kamelet properties to KEDA authentication options
+	// kameletURNAuthenticationPrefix allows binding Kamelet properties to KEDA authentication options.
 	kameletURNAuthenticationPrefix = "urn:keda:authentication:"
-	// kameletURNRequiredTag is used to mark properties required by KEDA
+	// kameletURNRequiredTag is used to mark properties required by KEDA.
 	kameletURNRequiredTag = "urn:keda:required"
 
-	// kameletAnnotationType indicates the scaler type associated to a Kamelet
+	// kameletAnnotationType indicates the scaler type associated to a Kamelet.
 	kameletAnnotationType = "camel.apache.org/keda.type"
-	// kameletAnnotationMetadataPrefix is used to define virtual metadata fields computed from Kamelet properties
+	// kameletAnnotationMetadataPrefix is used to define virtual metadata fields computed from Kamelet properties.
 	kameletAnnotationMetadataPrefix = "camel.apache.org/keda.metadata."
-	// kameletAnnotationAuthenticationPrefix is used to define virtual authentication fields computed from Kamelet properties
+	// kameletAnnotationAuthenticationPrefix is used to define virtual authentication fields computed from Kamelet properties.
 	kameletAnnotationAuthenticationPrefix = "camel.apache.org/keda.authentication."
 )
 
@@ -66,9 +65,9 @@ const (
 // via markers in the Kamelets.
 //
 // For information on how to use KEDA enabled Kamelets with the KEDA trait, refer to
-// xref:kamelets/kamelets-user.adoc#kamelet-keda-user[the KEDA section in the Kamelets user guide].
+// xref:ROOT:kamelets/kamelets-user.adoc#kamelet-keda-user[the KEDA section in the Kamelets user guide].
 // If you want to create Kamelets that contain KEDA metadata, refer to
-// xref:kamelets/kamelets-dev.adoc#kamelet-keda-dev[the KEDA section in the Kamelets development guide].
+// xref:ROOT:kamelets/kamelets-dev.adoc#kamelet-keda-dev[the KEDA section in the Kamelets development guide].
 //
 // The KEDA trait is disabled by default.
 //
@@ -287,14 +286,12 @@ func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
 				return err
 			}
 		}
-	} else {
-		if e.Integration.Spec.Replicas == nil {
-			one := int32(1)
-			e.Integration.Spec.Replicas = &one
-			// Update the Integration directly as the spec section is not merged by default
-			if err := e.Client.Update(e.Ctx, e.Integration); err != nil {
-				return err
-			}
+	} else if e.Integration.Spec.Replicas == nil {
+		one := int32(1)
+		e.Integration.Spec.Replicas = &one
+		// Update the Integration directly as the spec section is not merged by default
+		if err := e.Client.Update(e.Ctx, e.Integration); err != nil {
+			return err
 		}
 	}
 	return nil
@@ -302,7 +299,7 @@ func (t *kedaTrait) hackControllerReplicas(e *trait.Environment) error {
 
 func (t *kedaTrait) getTopControllerReference(e *trait.Environment) *v1.ObjectReference {
 	for _, o := range e.Integration.OwnerReferences {
-		if o.Kind == v1alpha1.KameletBindingKind && strings.HasPrefix(o.APIVersion, v1alpha1.SchemeGroupVersion.Group) {
+		if o.Kind == camelv1alpha1.KameletBindingKind && strings.HasPrefix(o.APIVersion, camelv1alpha1.SchemeGroupVersion.Group) {
 			return &v1.ObjectReference{
 				APIVersion: o.APIVersion,
 				Kind:       o.Kind,
@@ -349,7 +346,7 @@ func (t *kedaTrait) populateTriggersFromKamelets(e *trait.Environment) error {
 	}
 
 	sortedKamelets := make([]string, 0, len(kameletURIs))
-	for kamelet, _ := range kameletURIs {
+	for kamelet := range kameletURIs {
 		sortedKamelets = append(sortedKamelets, kamelet)
 	}
 	sort.Strings(sortedKamelets)
@@ -495,7 +492,7 @@ func (t *kedaTrait) evaluateTemplateParameters(e *trait.Environment, kamelet *ca
 	return paramValues, authenticationParam, nil
 }
 
-func (t *kedaTrait) getKameletPropertyValue(e *trait.Environment, kamelet *v1alpha1.Kamelet, kameletURI, prop string) (string, error) {
+func (t *kedaTrait) getKameletPropertyValue(e *trait.Environment, kamelet *camelv1alpha1.Kamelet, kameletURI, prop string) (string, error) {
 	// From top priority to lowest
 	if v := uri.GetQueryParameter(kameletURI, prop); v != "" {
 		return v, nil
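
To make the constants above more concrete: a KEDA-aware Kamelet declares its scaler type with
the camel.apache.org/keda.type annotation and maps its properties to KEDA metadata or
authentication parameters through x-descriptors. The sketch below uses the same Go types that
the tests further down exercise; the Kamelet name, property names and scaler type are invented
for illustration and are not taken from the repository.

    package main

    import (
        "fmt"

        camelv1alpha1 "github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Hypothetical KEDA-aware Kamelet definition.
        kamelet := camelv1alpha1.Kamelet{
            ObjectMeta: metav1.ObjectMeta{
                Name: "my-sqs-source", // invented name
                Annotations: map[string]string{
                    // kameletAnnotationType: which KEDA scaler this Kamelet corresponds to
                    "camel.apache.org/keda.type": "aws-sqs-queue",
                },
            },
            Spec: camelv1alpha1.KameletSpec{
                Definition: &camelv1alpha1.JSONSchemaProps{
                    Properties: map[string]camelv1alpha1.JSONSchemaProp{
                        "queueName": {
                            // kameletURNMetadataPrefix + KEDA metadata field, plus the required tag
                            XDescriptors: []string{"urn:keda:metadata:queueURL", "urn:keda:required"},
                        },
                        "accessKey": {
                            // kameletURNAuthenticationPrefix + KEDA authentication parameter
                            XDescriptors: []string{"urn:keda:authentication:awsAccessKeyID"},
                        },
                    },
                },
            },
        }
        fmt.Println(kamelet.Annotations["camel.apache.org/keda.type"])
    }
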
diff --git a/addons/keda/keda_test.go b/addons/keda/keda_test.go
index ae49b4a..08a627e 100644
--- a/addons/keda/keda_test.go
+++ b/addons/keda/keda_test.go
@@ -133,17 +133,17 @@ func TestKameletAutoDetection(t *testing.T) {
 			Spec: camelv1alpha1.KameletSpec{
 				Definition: &camelv1alpha1.JSONSchemaProps{
 					Properties: map[string]camelv1alpha1.JSONSchemaProp{
-						"a": camelv1alpha1.JSONSchemaProp{
+						"a": {
 							XDescriptors: []string{
 								"urn:keda:metadata:a",
 							},
 						},
-						"b": camelv1alpha1.JSONSchemaProp{
+						"b": {
 							XDescriptors: []string{
 								"urn:keda:metadata:bb",
 							},
 						},
-						"c": camelv1alpha1.JSONSchemaProp{
+						"c": {
 							XDescriptors: []string{
 								"urn:keda:authentication:cc",
 							},
@@ -248,17 +248,17 @@ func TestKameletBindingAutoDetection(t *testing.T) {
 			Spec: camelv1alpha1.KameletSpec{
 				Definition: &camelv1alpha1.JSONSchemaProps{
 					Properties: map[string]camelv1alpha1.JSONSchemaProp{
-						"a": camelv1alpha1.JSONSchemaProp{
+						"a": {
 							XDescriptors: []string{
 								"urn:keda:metadata:a",
 							},
 						},
-						"b": camelv1alpha1.JSONSchemaProp{
+						"b": {
 							XDescriptors: []string{
 								"urn:keda:metadata:bb",
 							},
 						},
-						"c": camelv1alpha1.JSONSchemaProp{
+						"c": {
 							XDescriptors: []string{
 								"urn:keda:authentication:cc",
 							},
diff --git a/docs/modules/traits/pages/keda.adoc b/docs/modules/traits/pages/keda.adoc
index 1d5bbcb..340e150 100644
--- a/docs/modules/traits/pages/keda.adoc
+++ b/docs/modules/traits/pages/keda.adoc
@@ -6,9 +6,9 @@ The trait can be either manually configured using the `triggers` option or autom
 via markers in the Kamelets.
 
 For information on how to use KEDA enabled Kamelets with the KEDA trait, refer to
-xref:kamelets/kamelets-user.adoc#kamelet-keda-user[the KEDA section in the Kamelets user guide].
+xref:ROOT:kamelets/kamelets-user.adoc#kamelet-keda-user[the KEDA section in the Kamelets user guide].
 If you want to create Kamelets that contain KEDA metadata, refer to
-xref:kamelets/kamelets-dev.adoc#kamelet-keda-dev[the KEDA section in the Kamelets development guide].
+xref:ROOT:kamelets/kamelets-dev.adoc#kamelet-keda-dev[the KEDA section in the Kamelets development guide].
 
 The KEDA trait is disabled by default.
 
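The documentation above describes what the trait does but not how a user switches it on.
Purely as an illustration, and assuming the generic trait.camel.apache.org/<trait>.<property>
annotation mechanism also covers addon traits such as this one, enabling it could look like
the sketch below; nothing in it is taken from the repository.

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Hypothetical metadata for an Integration or KameletBinding that enables the KEDA
        // trait and lets it auto-configure triggers from Kamelet markers (the trait's "auto"
        // property). The applicability of trait annotations to addon traits is an assumption.
        meta := metav1.ObjectMeta{
            Name: "my-binding", // invented name
            Annotations: map[string]string{
                "trait.camel.apache.org/keda.enabled": "true",
                "trait.camel.apache.org/keda.auto":    "true",
            },
        }
        fmt.Println(meta.Annotations)
    }
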
diff --git a/pkg/client/serverside.go b/pkg/client/serverside.go
index bca029d..50be4a7 100644
--- a/pkg/client/serverside.go
+++ b/pkg/client/serverside.go
@@ -30,6 +30,7 @@ import (
 	"github.com/pkg/errors"
 	k8serrors "k8s.io/apimachinery/pkg/api/errors"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+	"k8s.io/apimachinery/pkg/runtime"
 	"k8s.io/apimachinery/pkg/types"
 	ctrl "sigs.k8s.io/controller-runtime/pkg/client"
 )
@@ -49,6 +50,7 @@ func (c *defaultClient) ServerOrClientSideApplier() ServerOrClientSideApplier {
 func (a *ServerOrClientSideApplier) Apply(ctx context.Context, object ctrl.Object) error {
 	once := false
 	var err error
+	// nolint: ifshort
 	needsRetry := false
 	a.tryServerSideApply.Do(func() {
 		once = true
@@ -80,7 +82,7 @@ func (a *ServerOrClientSideApplier) Apply(ctx context.Context, object ctrl.Objec
 	return nil
 }
 
-func (a *ServerOrClientSideApplier) serverSideApply(ctx context.Context, resource ctrl.Object) error {
+func (a *ServerOrClientSideApplier) serverSideApply(ctx context.Context, resource runtime.Object) error {
 	target, err := patch.PositiveApplyPatch(resource)
 	if err != nil {
 		return err
diff --git a/pkg/cmd/kit_create.go b/pkg/cmd/kit_create.go
index 4e3e7c2..4c16aba 100644
--- a/pkg/cmd/kit_create.go
+++ b/pkg/cmd/kit_create.go
@@ -196,7 +196,7 @@ func (command *kitCreateCommandOptions) run(_ *cobra.Command, args []string) err
 	return nil
 }
 
-func (*kitCreateCommandOptions) configureTraits(kit *v1.IntegrationKit, options []string, catalog *trait.Catalog) error {
+func (*kitCreateCommandOptions) configureTraits(kit *v1.IntegrationKit, options []string, catalog trait.Finder) error {
 	traits, err := configureTraits(options, catalog)
 	if err != nil {
 		return err
diff --git a/pkg/cmd/run.go b/pkg/cmd/run.go
index 1819405..d2d260f 100644
--- a/pkg/cmd/run.go
+++ b/pkg/cmd/run.go
@@ -417,7 +417,7 @@ func (o *runCmdOptions) waitForIntegrationReady(cmd *cobra.Command, c client.Cli
 	return watch.HandleIntegrationStateChanges(o.Context, c, integration, handler)
 }
 
-func (o *runCmdOptions) syncIntegration(cmd *cobra.Command, c client.Client, sources []string, catalog *trait.Catalog) error {
+func (o *runCmdOptions) syncIntegration(cmd *cobra.Command, c client.Client, sources []string, catalog trait.Finder) error {
 	// Let's watch all relevant files when in dev mode
 	var files []string
 	files = append(files, sources...)
@@ -480,7 +480,7 @@ func (o *runCmdOptions) syncIntegration(cmd *cobra.Command, c client.Client, sou
 }
 
 // nolint: gocyclo
-func (o *runCmdOptions) createOrUpdateIntegration(cmd *cobra.Command, c client.Client, sources []string, catalog *trait.Catalog) (*v1.Integration, error) {
+func (o *runCmdOptions) createOrUpdateIntegration(cmd *cobra.Command, c client.Client, sources []string, catalog trait.Finder) (*v1.Integration, error) {
 	namespace := o.Namespace
 	name := o.GetIntegrationName(sources)
 
@@ -738,7 +738,7 @@ func (o *runCmdOptions) GetIntegrationName(sources []string) string {
 	return name
 }
 
-func (o *runCmdOptions) configureTraits(integration *v1.Integration, options []string, catalog *trait.Catalog) error {
+func (o *runCmdOptions) configureTraits(integration *v1.Integration, options []string, catalog trait.Finder) error {
 	// configure ServiceBinding trait
 	for _, sb := range o.Connects {
 		bindings := fmt.Sprintf("service-binding.services=%s", sb)
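
The signature changes in this commit, from the concrete *trait.Catalog to the trait.Finder
interface, follow the common Go practice of accepting the narrowest interface a function
actually needs, which also makes the command code easier to test. The sketch below only
illustrates that pattern with invented types; it does not reproduce the real trait.Finder
definition, which is not shown in this diff.

    package main

    import "fmt"

    // Trait is a stand-in for a configurable trait; invented for this sketch.
    type Trait interface {
        ID() string
    }

    // Finder is the narrow capability the command code needs: look a trait up by ID.
    type Finder interface {
        GetTrait(id string) Trait
    }

    // catalog is a concrete implementation that may carry many other responsibilities.
    type catalog struct {
        traits map[string]Trait
    }

    func (c *catalog) GetTrait(id string) Trait { return c.traits[id] }

    // configure depends only on Finder, so tests can pass a lightweight fake
    // instead of building a full catalog.
    func configure(f Finder, id string) error {
        t := f.GetTrait(id)
        if t == nil {
            return fmt.Errorf("trait %q not found", id)
        }
        fmt.Println("configuring trait", t.ID())
        return nil
    }

    type namedTrait string

    func (n namedTrait) ID() string { return string(n) }

    func main() {
        c := &catalog{traits: map[string]Trait{"keda": namedTrait("keda")}}
        _ = configure(c, "keda")
    }
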
diff --git a/pkg/install/kamelets.go b/pkg/install/kamelets.go
index fc64e25..4ff4572 100644
--- a/pkg/install/kamelets.go
+++ b/pkg/install/kamelets.go
@@ -28,14 +28,12 @@ import (
 
 	"golang.org/x/sync/errgroup"
 
-	"k8s.io/apimachinery/pkg/runtime"
-	logf "sigs.k8s.io/controller-runtime/pkg/log"
-
 	"github.com/apache/camel-k/pkg/apis/camel/v1alpha1"
 	"github.com/apache/camel-k/pkg/client"
 	"github.com/apache/camel-k/pkg/util"
 	"github.com/apache/camel-k/pkg/util/defaults"
 	"github.com/apache/camel-k/pkg/util/kubernetes"
+	"k8s.io/apimachinery/pkg/runtime"
 )
 
 const (
@@ -43,10 +41,6 @@ const (
 	defaultKameletDir = "/kamelets/"
 )
 
-var (
-	log = logf.Log
-)
-
 // KameletCatalog installs the bundled Kamelets into the specified namespace.
 func KameletCatalog(ctx context.Context, c client.Client, namespace string) error {
 	kameletDir := os.Getenv(kameletDirEnv)
diff --git a/pkg/resources/resources.go b/pkg/resources/resources.go
index 414e4d8..80a8c6d 100644
--- a/pkg/resources/resources.go
+++ b/pkg/resources/resources.go
@@ -555,9 +555,9 @@ var assets = func() http.FileSystem {
 		"/traits.yaml": &vfsgen۰CompressedFileInfo{
 			name:             "traits.yaml",
 			modTime:          time.Time{},
-			uncompressedSize: 49560,
+			uncompressedSize: 49570,
 
-			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x7d\xfd\x73\x5b\xb9\x91\xe0\xef\xf3\x57\xa0\xb4\x57\x65\x49\x45\x52\x9e\xc9\x26\x3b\xa7\xbb\xd9\x94\xc6\x76\x12\xcd\xf8\x43\x67\x3b\xb3\x97\x9a\x9b\x0a\xc1\xf7\x9a\x24\xcc\x47\xe0\x05\xc0\x93\xcc\xdc\xde\xff\x7e\x85\xee\xc6\xc7\x7b\x24\x25\xca\xb6\x66\xa3\xad\xdd\x54\xed\x58\xd2\x03\xd0\x68\x34\xfa\xbb\x1b\xde\x4a\xe5\xdd\xf9\x57\x63\xa1\xe5\x1a\xce\x85\x9c\xcf\x95\x56\x7e\xf3\x95\x10\x6d\x23\xfd\xdc\xd8\xf5\xb9\x [...]
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x7d\xfd\x73\x5b\xb9\x91\xe0\xef\xf3\x57\xa0\xb4\x57\x65\x49\x45\x52\x9e\xc9\x26\x3b\xa7\xbb\xd9\x94\xc6\x76\x12\xcd\xf8\x43\x67\x3b\xb3\x97\x9a\x9b\x0a\xc1\xf7\x9a\x24\xcc\x47\xe0\x05\xc0\x93\xcc\xdc\xde\xff\x7e\x85\xee\xc6\xc7\x7b\x24\x25\xca\xb6\x66\xa3\xad\xdd\x54\xed\x58\xd2\x03\xd0\x68\x34\xfa\xbb\x1b\xde\x4a\xe5\xdd\xf9\x57\x63\xa1\xe5\x1a\xce\x85\x9c\xcf\x95\x56\x7e\xf3\x95\x10\x6d\x23\xfd\xdc\xd8\xf5\xb9\x [...]
 		},
 	}
 	fs["/"].(*vfsgen۰DirInfo).entries = []os.FileInfo{
diff --git a/pkg/util/uri/uri.go b/pkg/util/uri/uri.go
index 210f169..79a9f86 100644
--- a/pkg/util/uri/uri.go
+++ b/pkg/util/uri/uri.go
@@ -57,7 +57,7 @@ func GetQueryParameter(uri string, param string) string {
 	return res
 }
 
-// GetPathSegment returns the path segment of the URI corresponding to the given position (0 based), if present
+// GetPathSegment returns the path segment of the URI corresponding to the given position (0 based), if present.
 func GetPathSegment(uri string, pos int) string {
 	match := pathExtractorRegexp.FindStringSubmatch(uri)
 	if len(match) > 1 {
diff --git a/pkg/util/util.go b/pkg/util/util.go
index 69fa4cb..05cdb80 100644
--- a/pkg/util/util.go
+++ b/pkg/util/util.go
@@ -872,7 +872,7 @@ func ConfigTreePropertySplit(property string) []string {
 		if len(cur) > 0 {
 			tmp = append(tmp, cur)
 		}
-		for i := len(tmp) - 1; i >= 0; i = i - 1 {
+		for i := len(tmp) - 1; i >= 0; i-- {
 			res = append(res, tmp[i])
 		}
 	}
@@ -895,9 +895,8 @@ func NavigateConfigTree(current interface{}, nodes []string) (interface{}, error
 		if isSlice(1) {
 			slice := make([]interface{}, 0)
 			return &slice
-		} else {
-			return make(map[string]interface{})
 		}
+		return make(map[string]interface{})
 	}
 	switch c := current.(type) {
 	case map[string]interface{}:
diff --git a/resources/traits.yaml b/resources/traits.yaml
index 8eac6f4..f1a813f 100755
--- a/resources/traits.yaml
+++ b/resources/traits.yaml
@@ -579,9 +579,9 @@ traits:
   description: The KEDA trait can be used for automatic integration with KEDA autoscalers.
     The trait can be either manually configured using the `triggers` option or automatically
     configured via markers in the Kamelets. For information on how to use KEDA enabled
-    Kamelets with the KEDA trait, refer to xref:kamelets/kamelets-user.adoc#kamelet-keda-user[the
+    Kamelets with the KEDA trait, refer to xref:ROOT:kamelets/kamelets-user.adoc#kamelet-keda-user[the
     KEDA section in the Kamelets user guide]. If you want to create Kamelets that
-    contain KEDA metadata, refer to xref:kamelets/kamelets-dev.adoc#kamelet-keda-dev[the
+    contain KEDA metadata, refer to xref:ROOT:kamelets/kamelets-dev.adoc#kamelet-keda-dev[the
     KEDA section in the Kamelets development guide]. The KEDA trait is disabled by
     default.
   properties:

[camel-k] 20/22: Fix #1107: simplify applier code

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit 5520b63a06413751a48692515bce3220d29488ad
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Tue Jan 4 00:17:22 2022 +0100

    Fix #1107: simplify applier code
---
 pkg/client/apply.go | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/pkg/client/apply.go b/pkg/client/apply.go
index 50be4a7..cfcc2c6 100644
--- a/pkg/client/apply.go
+++ b/pkg/client/apply.go
@@ -50,8 +50,6 @@ func (c *defaultClient) ServerOrClientSideApplier() ServerOrClientSideApplier {
 func (a *ServerOrClientSideApplier) Apply(ctx context.Context, object ctrl.Object) error {
 	once := false
 	var err error
-	// nolint: ifshort
-	needsRetry := false
 	a.tryServerSideApply.Do(func() {
 		once = true
 		if err = a.serverSideApply(ctx, object); err != nil {
@@ -59,17 +57,13 @@ func (a *ServerOrClientSideApplier) Apply(ctx context.Context, object ctrl.Objec
 				log.Info("Fallback to client-side apply for installing resources")
 				a.hasServerSideApply.Store(false)
 				err = nil
-			} else {
-				needsRetry = true
 			}
 		} else {
 			a.hasServerSideApply.Store(true)
 		}
 	})
-	if needsRetry {
-		a.tryServerSideApply = sync.Once{}
-	}
 	if err != nil {
+		a.tryServerSideApply = sync.Once{}
 		return err
 	}
 	if v := a.hasServerSideApply.Load(); v.(bool) {
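
Taken together, the applier changes in commits 16/22 and 20/22 settle on a small
capability-detection pattern: probe server-side apply once, remember whether the cluster
supports it, and reset the probe only when it failed for a reason other than missing support.
The following is a generic, self-contained sketch of that pattern with stubbed apply functions
and an invented error check; it does not reuse the camel-k client API beyond what the diff
itself shows.

    package main

    import (
        "context"
        "errors"
        "fmt"
        "sync"
        "sync/atomic"
    )

    // errNoServerSideApply stands in for "the cluster does not support server-side apply".
    var errNoServerSideApply = errors.New("server-side apply not supported")

    type applier struct {
        tryServerSideApply sync.Once
        hasServerSideApply atomic.Value
        serverSideApply    func(ctx context.Context) error // stub for the real call
        clientSideApply    func(ctx context.Context) error // stub for the fallback
    }

    func (a *applier) Apply(ctx context.Context) error {
        once := false
        var err error
        a.tryServerSideApply.Do(func() {
            once = true
            if err = a.serverSideApply(ctx); err != nil {
                if errors.Is(err, errNoServerSideApply) {
                    // Unsupported cluster: remember the fallback and continue client-side.
                    a.hasServerSideApply.Store(false)
                    err = nil
                }
            } else {
                a.hasServerSideApply.Store(true)
            }
        })
        if err != nil {
            // Transient failure during detection: let the next call probe again.
            a.tryServerSideApply = sync.Once{}
            return err
        }
        if v, ok := a.hasServerSideApply.Load().(bool); ok && v {
            if once {
                return nil // the probe above already applied this object server-side
            }
            return a.serverSideApply(ctx)
        }
        return a.clientSideApply(ctx)
    }

    func main() {
        a := &applier{
            serverSideApply: func(context.Context) error { return errNoServerSideApply },
            clientSideApply: func(context.Context) error { fmt.Println("client-side apply"); return nil },
        }
        _ = a.Apply(context.Background()) // probes, detects no support, applies client-side
        _ = a.Apply(context.Background()) // goes straight to client-side apply
    }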

[camel-k] 15/22: Fix #1107: fix deepcopy gen

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit b7889523191cedf1c2cc14162c807795f7a0879d
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Mon Dec 20 18:56:21 2021 +0100

    Fix #1107: fix deepcopy gen
---
 script/Makefile           |  1 -
 script/gen_client_keda.sh | 32 --------------------------------
 2 files changed, 33 deletions(-)

diff --git a/script/Makefile b/script/Makefile
index af03077..16c0198 100644
--- a/script/Makefile
+++ b/script/Makefile
@@ -175,7 +175,6 @@ generate-json-schema:
 
 generate-keda:
 	cd addons/keda/duck && $(CONTROLLER_GEN) paths="./..." object
-	./script/gen_client_keda.sh
 
 generate-strimzi:
 	cd addons/strimzi/duck && $(CONTROLLER_GEN) paths="./..." object
diff --git a/script/gen_client_keda.sh b/script/gen_client_keda.sh
deleted file mode 100755
index e5dd2ca..0000000
--- a/script/gen_client_keda.sh
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/sh
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-set -e
-
-location=$(dirname $0)
-rootdir=$location/..
-
-unset GOPATH
-GO111MODULE=on
-
-echo "Generating boilerplate code for Keda addon..."
-
-cd $rootdir
-
-go run k8s.io/code-generator/cmd/deepcopy-gen \
-  -h ./script/headers/default.txt \
-  --input-dirs=github.com/apache/camel-k/addons/keda

[camel-k] 08/22: Fix #1107: disable camel case conversion by default

Posted by nf...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit 371150fa8c931aa7e85e8fd38b744f19d3abbbc5
Author: nicolaferraro <ni...@gmail.com>
AuthorDate: Mon Dec 20 10:08:01 2021 +0100

    Fix #1107: disable camel case conversion by default
---
 addons/keda/keda.go | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/addons/keda/keda.go b/addons/keda/keda.go
index ffe637c..90641e3 100644
--- a/addons/keda/keda.go
+++ b/addons/keda/keda.go
@@ -70,7 +70,7 @@ type kedaTrait struct {
 	trait.BaseTrait `property:",squash"`
 	// Enables automatic configuration of the trait.
 	Auto *bool `property:"auto" json:"auto,omitempty"`
-	// Convert metadata properties to camelCase (needed because trait properties use kebab-case). Enabled by default.
+	// Convert metadata properties to camelCase (needed because trait properties use kebab-case). Disabled by default.
 	CamelCaseConversion *bool `property:"camel-case-conversion" json:"camelCaseConversion,omitempty"`
 	// Set the spec->replicas field on the top level controller to an explicit value if missing, to allow KEDA to recognize it as a scalable resource
 	HackControllerReplicas *bool `property:"hack-controller-replicas" json:"hackControllerReplicas,omitempty"`
@@ -164,7 +164,7 @@ func (t *kedaTrait) addScalingResources(e *trait.Environment) error {
 		meta := make(map[string]string)
 		for k, v := range trigger.Metadata {
 			kk := k
-			if t.CamelCaseConversion == nil || *t.CamelCaseConversion {
+			if t.CamelCaseConversion != nil && *t.CamelCaseConversion {
 				kk = scase.LowerCamelCase(k)
 			}
 			meta[kk] = v