Posted to commits@servicecomb.apache.org by GitBox <gi...@apache.org> on 2021/10/20 13:59:52 UTC

[GitHub] [servicecomb-service-center] sxcooler opened a new issue #1162: the latest version boot cause panic

sxcooler opened a new issue #1162:
URL: https://github.com/apache/servicecomb-service-center/issues/1162


   **Describe the bug**
   The Docker image tagged "latest" fails at startup when run against an independent (external) etcd.
   Here are some of the logs:
   ```
   2021-10-20T13:16:49.306Z        WARN    etcd/etcd.go:88 data source enable etcd mode
   2021-10-20T13:16:49.306Z        WARN    embedded/embedded_etcd.go:542   enable embedded registry mode
   2021-10-20T13:16:49.306Z        DEBUG   embedded/embedded_etcd.go:589   --initial-cluster sc-0=http://127.0.0.1:2380 --initial-advertise-peer-urls http://127.0.0.1:2380 --listen-peer-urls http://127.0.0.1:2380
   2021-10-20 13:16:49.306602 I | embed: listening for peers on http://127.0.0.1:2380
   2021-10-20 13:16:49.309251 I | etcdserver: name = sc-0
   2021-10-20 13:16:49.309266 I | etcdserver: data dir = data
   2021-10-20 13:16:49.309271 I | etcdserver: member dir = data/member
   2021-10-20 13:16:49.309275 I | etcdserver: heartbeat = 100ms
   2021-10-20 13:16:49.309278 I | etcdserver: election = 1000ms
   2021-10-20 13:16:49.309282 I | etcdserver: snapshot count = 100000
   2021-10-20 13:16:49.309289 I | etcdserver: advertise client URLs =
   2021-10-20 13:16:49.309294 I | etcdserver: initial advertise peer URLs = http://127.0.0.1:2380
   2021-10-20 13:16:49.309304 I | etcdserver: initial cluster = sc-0=http://127.0.0.1:2380
   2021-10-20 13:16:49.312221 I | etcdserver: starting member b71f75320dc06a6c in cluster 1c45a069f3a1d796
   2021-10-20 13:16:49.312259 I | raft: b71f75320dc06a6c became follower at term 0
   2021-10-20 13:16:49.312270 I | raft: newRaft b71f75320dc06a6c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
   2021-10-20 13:16:49.312275 I | raft: b71f75320dc06a6c became follower at term 1
   2021-10-20 13:16:49.314513 W | auth: simple token is not cryptographically signed
   2021-10-20 13:16:49.319706 I | etcdserver: starting server... [version: 3.3.25, cluster version: to_be_decided]
   2021-10-20 13:16:49.320344 I | etcdserver: b71f75320dc06a6c as single-node; fast-forwarding 9 ticks (election ticks 10)
   2021-10-20 13:16:49.320763 I | etcdserver/membership: added member b71f75320dc06a6c [http://127.0.0.1:2380] to cluster 1c45a069f3a1d796
   2021-10-20 13:16:50.112653 I | raft: b71f75320dc06a6c is starting a new election at term 1
   2021-10-20 13:16:50.112689 I | raft: b71f75320dc06a6c became candidate at term 2
   2021-10-20 13:16:50.112704 I | raft: b71f75320dc06a6c received MsgVoteResp from b71f75320dc06a6c at term 2
   2021-10-20 13:16:50.112728 I | raft: b71f75320dc06a6c became leader at term 2
   2021-10-20 13:16:50.112736 I | raft: raft.node: b71f75320dc06a6c elected leader b71f75320dc06a6c at term 2
   2021-10-20 13:16:50.112972 I | etcdserver: setting up the initial cluster version to 3.3
   2021-10-20 13:16:50.113596 N | etcdserver/membership: set the initial cluster version to 3.3
   2021-10-20 13:16:50.113670 I | etcdserver/api: enabled capabilities for version 3.3
   2021-10-20 13:16:50.113717 I | etcdserver: published {Name:sc-0 ClientURLs:[]} to cluster 1c45a069f3a1d796
   2021-10-20T13:16:50.113Z        INFO    client/manager.go:61    client plugin [embedded_etcd] enabled
   2021-10-20T13:16:50.113Z        INFO    sd/manager.go:46        cache plugin [etcd] enabled
   2021-10-20T13:16:50.113Z        INFO    sd/event_proxy.go:73    register event handler[SERVICE] etcd/event.ServiceEventHandler
   2021-10-20T13:16:50.113Z        INFO    sd/event_proxy.go:73    register event handler[INSTANCE] etcd/event.InstanceEventHandler
   2021-10-20T13:16:50.113Z        INFO    sd/event_proxy.go:73    register event handler[RULE] etcd/event.RuleEventHandler
   2021-10-20T13:16:50.113Z        INFO    sd/event_proxy.go:73    register event handler[SERVICE_TAG] etcd/event.TagEventHandler
   2021-10-20T13:16:50.113Z        INFO    sd/event_proxy.go:73    register event handler[DEPENDENCY_QUEUE] etcd/event.DependencyEventHandler
   2021-10-20T13:16:50.114Z        INFO    sd/event_proxy.go:73    register event handler[DEPENDENCY_RULE] etcd/event.DependencyRuleEventHandler
   2021-10-20T13:16:50.114Z        INFO    kv/kv.go:109    start auto clear cache in 5m0s
   2021-10-20T13:16:50.114Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/ms/files/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.115Z        DEBUG   etcd/cacher_kv.go:107   [480.873µs]finish to cache key /cse-sr/ms/files/, 0 items, rev: 1
   2021-10-20T13:16:50.116Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/inst/files/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.116Z        DEBUG   etcd/cacher_kv.go:107   [378.881µs]finish to cache key /cse-sr/inst/files/, 0 items, rev: 1
   2021-10-20T13:16:50.116Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/domains/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.117Z        DEBUG   etcd/cacher_kv.go:107   [416.28µs]finish to cache key /cse-sr/domains/, 0 items, rev: 1
   2021-10-20T13:16:50.117Z        INFO    etcd/adaptor.go:61      core will not cache 'SCHEMA' and ignore all events of it, cache enabled: true, init size: 0
   2021-10-20T13:16:50.117Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/ms/schema-sum/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.117Z        DEBUG   etcd/cacher_kv.go:107   [305.878µs]finish to cache key /cse-sr/ms/schema-sum/, 0 items, rev: 1
   2021-10-20T13:16:50.117Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/ms/rules/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.118Z        DEBUG   etcd/cacher_kv.go:107   [217.857µs]finish to cache key /cse-sr/ms/rules/, 0 items, rev: 1
   2021-10-20T13:16:50.118Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/inst/leases/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.118Z        DEBUG   etcd/cacher_kv.go:107   [332.276µs]finish to cache key /cse-sr/inst/leases/, 0 items, rev: 1
   2021-10-20T13:16:50.119Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/ms/indexes/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.119Z        DEBUG   etcd/cacher_kv.go:107   [251.122µs]finish to cache key /cse-sr/ms/indexes/, 0 items, rev: 1
   2021-10-20T13:16:50.119Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/ms/alias/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.119Z        DEBUG   etcd/cacher_kv.go:107   [177.797µs]finish to cache key /cse-sr/ms/alias/, 0 items, rev: 1
   2021-10-20T13:16:50.119Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/ms/tags/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.120Z        DEBUG   etcd/cacher_kv.go:107   [181.011µs]finish to cache key /cse-sr/ms/tags/, 0 items, rev: 1
   2021-10-20T13:16:50.120Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/ms/rule-indexes/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.120Z        DEBUG   etcd/cacher_kv.go:107   [150.903µs]finish to cache key /cse-sr/ms/rule-indexes/, 0 items, rev: 1
   2021-10-20T13:16:50.120Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/ms/dep-rules/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.121Z        DEBUG   etcd/cacher_kv.go:107   [125.662µs]finish to cache key /cse-sr/ms/dep-rules/, 0 items, rev: 1
   2021-10-20T13:16:50.121Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/ms/dep-queue/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.121Z        DEBUG   etcd/cacher_kv.go:107   [96.239µs]finish to cache key /cse-sr/ms/dep-queue/, 0 items, rev: 1
   2021-10-20T13:16:50.121Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-sr/projects/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.121Z        DEBUG   etcd/cacher_kv.go:107   [195.395µs]finish to cache key /cse-sr/projects/, 0 items, rev: 1
   2021-10-20T13:16:50.121Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-pact/participant/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.122Z        DEBUG   etcd/cacher_kv.go:107   [268.313µs]finish to cache key /cse-pact/participant/, 0 items, rev: 1
   2021-10-20T13:16:50.122Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-pact/version/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.122Z        DEBUG   etcd/cacher_kv.go:107   [195.884µs]finish to cache key /cse-pact/version/, 0 items, rev: 1
   2021-10-20T13:16:50.122Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-pact/pact/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.122Z        DEBUG   etcd/cacher_kv.go:107   [216.296µs]finish to cache key /cse-pact/pact/, 0 items, rev: 1
   2021-10-20T13:16:50.123Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-pact/pact-version/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.123Z        DEBUG   etcd/cacher_kv.go:107   [171.291µs]finish to cache key /cse-pact/pact-version/, 0 items, rev: 1
   2021-10-20T13:16:50.123Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-pact/pact-tag/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.123Z        DEBUG   etcd/cacher_kv.go:107   [184.579µs]finish to cache key /cse-pact/pact-tag/, 0 items, rev: 1
   2021-10-20T13:16:50.123Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-pact/verification/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.123Z        DEBUG   etcd/cacher_kv.go:107   [163.401µs]finish to cache key /cse-pact/verification/, 0 items, rev: 1
   2021-10-20T13:16:50.124Z        DEBUG   etcd/cacher_kv.go:209   start to list and watch {key: /cse-pact/latest/, timeout: 30s, period: 1s}
   2021-10-20T13:16:50.124Z        DEBUG   etcd/cacher_kv.go:107   [202.27µs]finish to cache key /cse-pact/latest/, 0 items, rev: 1
   2021-10-20T13:16:50.124Z        DEBUG   kv/kv.go:100    all adaptors are ready
   2021-10-20T13:16:50.124Z        INFO    datasource/manager.go:65        datasource plugin [embedded_etcd] enabled
   2021-10-20T13:16:50.124Z        INFO    etcd/etcd.go:160        enabled the automatic compact mechanism, compact once every 12h0m0s, reserve 100
   2021-10-20T13:16:50.124Z        DEBUG   event/bus_service.go:75 notify service is started
   2021-10-20T13:16:50.124Z        ERROR   plugin/loader.go:53     no any plugin has been loaded   {"error": "open ./plugins: no such file or directory"}
   2021-10-20T13:16:50.124Z        INFO    plugin/plugin.go:159    call static 'uuid' plugin uuid/buildin.New(), new a 'buildin' instance
   2021-10-20T13:16:50.124Z        INFO    plugin/plugin.go:159    call static 'tracing' plugin tracing/pzipkin.New(), new a 'buildin' instance
   2021-10-20T13:16:50.124Z        INFO    plugin/plugin.go:159    call static 'cipher' plugin cipher/buildin.New(), new a 'buildin' instance
   2021-10-20T13:16:50.124Z        INFO    plugin/plugin.go:159    call static 'auth' plugin auth/buildin.New(), new a 'buildin' instance
   2021-10-20T13:16:50.124Z        INFO    plugin/plugin.go:159    call static 'quota' plugin quota/buildin.New(), new a 'buildin' instance
   2021-10-20T13:16:50.124Z        INFO    buildin/buildin.go:34   quota init, service: 50000, instance: 150000, schema: 100/service, tag: 100/service, rule: 100/service, account: 1000, role: 100
   2021-10-20T13:16:50.124Z        INFO    plugin/plugin.go:159    call static 'ssl' plugin tlsconf/buildin.New(), new a 'buildin' instance
   2021-10-20T13:16:50.124Z        INFO    rbac/rbac.go:54 rbac is disabled
   2021-10-20T13:16:50.124Z        INFO    etcdsync/mutex.go:103   Trying to create a lock: key=/cse/etcdsync/cse-sr/lock/global, id=service-center-7c954b6df8-82nnl-7-20211020-13:16:50.124967906
   2021-10-20T13:16:50.125Z        DEBUG   embedded/embedded_etcd.go:270   response /cse/etcdsync/cse-sr/lock/global true 2
   2021-10-20T13:16:50.125Z        INFO    etcdsync/mutex.go:116   Create Lock OK, key=/cse/etcdsync/cse-sr/lock/global, id=service-center-7c954b6df8-82nnl-7-20211020-13:16:50.124967906
   2021-10-20T13:16:50.126Z        INFO    etcdsync/mutex.go:175   Delete lock OK, key=/cse/etcdsync/cse-sr/lock/global, id=service-center-7c954b6df8-82nnl-7-20211020-13:16:50.124967906
   2021-10-20T13:16:50.133Z        INFO    server/api.go:130       listen address: rest://172.20.0.183:30100
   2021-10-20T13:16:50.133Z        DEBUG   etcd/indexer_etcd.go:51 search '/cse-sr/ms/indexes/default/default/development/default/SERVICECENTER/2.0.0' match special options, request etcd server, opts: action=GET&mode=MODE_NO_CACHE&key=/cse-sr/ms/indexes/default/default/development/default/SERVICECENTER/2.0.0&len=0&limit=4096
   2021-10-20T13:16:50.133Z        DEBUG   util/microservice_util.go:133   could not search microservice[development/default/SERVICECENTER/2.0.0] id by 'serviceName', now try 'alias'
   2021-10-20T13:16:50.134Z        DEBUG   etcd/indexer_etcd.go:51 search '/cse-sr/ms/alias/default/default/development/default/SERVICECENTER/2.0.0' match special options, request etcd server, opts: action=GET&mode=MODE_NO_CACHE&key=/cse-sr/ms/alias/default/default/development/default/SERVICECENTER/2.0.0&len=0&limit=4096
   2021-10-20T13:16:50.134Z        INFO    etcd/ms.go:462  micro-service[development/default/SERVICECENTER/2.0.0] exist failed, service does not exist
   2021-10-20T13:16:50.134Z        DEBUG   disco/microservice.go:363       skip quota check
   2021-10-20T13:16:50.134Z        INFO    etcd/ms.go:173  create micro-service[fc26c99a31a711ecaf2e666bc5417188][development/default/SERVICECENTER/2.0.0] successfully, operator:
   2021-10-20T13:16:50.134Z        INFO    etcd/engine.go:98       register service center service[fc26c99a31a711ecaf2e666bc5417188]
   2021-10-20T13:16:50.134Z        DEBUG   etcd/indexer_etcd.go:51 search '/cse-sr/ms/files/default/default/fc26c99a31a711ecaf2e666bc5417188' match special options, request etcd server, opts: action=GET&mode=MODE_NO_CACHE&key=/cse-sr/ms/files/default/default/fc26c99a31a711ecaf2e666bc5417188&len=0&limit=4096
   2021-10-20T13:16:50.135Z        DEBUG   etcd/cacher_kv.go:194   [76.932µs]finish to handle 1 events, prefix: /cse-sr/ms/indexes/, rev: 5
   2021-10-20T13:16:50.135Z        DEBUG   etcd/cacher_kv.go:194   [183.092µs]finish to handle 1 events, prefix: /cse-sr/ms/alias/, rev: 5
   2021-10-20T13:16:50.135Z        DEBUG   embedded/embedded_etcd.go:270   response /cse-sr/domains/default true 6
   2021-10-20T13:16:50.135Z        INFO    util/domain_util.go:114 new domain(default)
   2021-10-20T13:16:50.136Z        DEBUG   etcd/cacher_kv.go:194   [523.961µs]finish to handle 1 events, prefix: /cse-sr/domains/, rev: 6
   2021-10-20T13:16:50.140Z        INFO    etcd/ms.go:659  register instance ttl 120s, endpoints [rest://172.20.0.183:30100/], host 'service-center-7c954b6df8-82nnl', serviceID fc26c99a31a711ecaf2e666bc5417188, instanceID fc26e4c131a711ecaf2e666bc5417188, operator
   2021-10-20T13:16:50.140Z        INFO    etcd/engine.go:116      register service center instance[fc26c99a31a711ecaf2e666bc5417188/fc26e4c131a711ecaf2e666bc5417188], endpoints is [rest://172.20.0.183:30100/]
   2021-10-20T13:16:50.140Z        DEBUG   etcd/cacher_kv.go:194   [101.853µs]finish to handle 1 events, prefix: /cse-sr/projects/, rev: 7
   2021-10-20T13:16:50.140Z        INFO    runtime/panic.go:969    api server is ready
   panic: runtime error: invalid memory address or nil pointer dereference
   [signal SIGSEGV: segmentation violation code=0x1 addr=0x48 pc=0xbe25e8]
   
   goroutine 1 [running]:
   github.com/go-chassis/go-chassis/v2/pkg/metrics.GaugeSet(...)
           /go/src/github.com/apache/servicecomb-service-center/vendor/github.com/go-chassis/go-chassis/v2/pkg/metrics/metrics.go:57
   github.com/apache/servicecomb-service-center/server/metrics.ReportScInstance()
           /go/src/github.com/apache/servicecomb-service-center/server/metrics/meta.go:131 +0x108
   github.com/apache/servicecomb-service-center/server.(*APIServer).selfRegister(0xc0003485a0)
           /go/src/github.com/apache/servicecomb-service-center/server/api.go:200 +0xa9
   github.com/apache/servicecomb-service-center/server.(*APIServer).Start(0xc0003485a0)
           /go/src/github.com/apache/servicecomb-service-center/server/api.go:169 +0x142
   github.com/apache/servicecomb-service-center/server.(*ServiceCenterServer).startAPIService(0x3b93dc0)
           /go/src/github.com/apache/servicecomb-service-center/server/server.go:212 +0xd9
   github.com/apache/servicecomb-service-center/server.(*ServiceCenterServer).startServices(0x3b93dc0)
           /go/src/github.com/apache/servicecomb-service-center/server/server.go:205 +0x105
   github.com/apache/servicecomb-service-center/server.(*ServiceCenterServer).Run(0x3b93dc0)
           /go/src/github.com/apache/servicecomb-service-center/server/server.go:74 +0x39
   github.com/apache/servicecomb-service-center/server.Run()
           /go/src/github.com/apache/servicecomb-service-center/server/server.go:54 +0x68
   main.main()
           /go/src/github.com/apache/servicecomb-service-center/cmd/scserver/main.go:29 +0x20
   ```
   When I switched back to tag 2.0.0, the problem disappeared.
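   For context, the trace points at `metrics.GaugeSet` being invoked from `ReportScInstance` and dereferencing a nil pointer. Below is a minimal Go sketch of that failure mode and a defensive guard; all type and function names here are illustrative assumptions, not the actual service-center/go-chassis code.
   ```go
   package main

   import "fmt"

   // Illustrative sketch only: the trace suggests ReportScInstance calls
   // metrics.GaugeSet before the metrics registry is initialized, so a nil
   // gauge is dereferenced, producing the SIGSEGV in the report.

   type gaugeVec struct {
   	values map[string]float64
   }

   // Set dereferences the receiver, so calling it on a nil *gaugeVec panics
   // with "invalid memory address or nil pointer dereference".
   func (g *gaugeVec) Set(label string, v float64) {
   	g.values[label] = v
   }

   // registry stays nil if metrics initialization is skipped or fails.
   var registry *gaugeVec

   // gaugeSet mirrors the crashing call site, with a defensive nil check
   // that degrades to an error instead of a process-killing panic.
   func gaugeSet(label string, v float64) error {
   	if registry == nil {
   		return fmt.Errorf("metrics not initialized; dropping gauge %q", label)
   	}
   	registry.Set(label, v)
   	return nil
   }

   func main() {
   	// Before initialization: the guard reports an error instead of crashing.
   	if err := gaugeSet("sc_instance", 1); err != nil {
   		fmt.Println(err)
   	}
   	// After initialization the gauge is recorded normally.
   	registry = &gaugeVec{values: map[string]float64{}}
   	_ = gaugeSet("sc_instance", 1)
   }
   ```
   A guard like this would only mask the symptom; the underlying question is why the "latest" image reaches this call site without the metrics registry being set up.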
   
   **To Reproduce**
   Steps to reproduce the behavior:
   1. have a Kubernetes environment
   2. get the Helm chart from examples/infrastructures/k8s/service-center/
   3. fix the problems in the chart (it is too old to be compatible with current Kubernetes versions)
   my values.yaml looks like this:
   ```
   nameOverride: service-center
   frontend:
     deployment: true
     replicaCount: 1
     image:
       repository: servicecomb/scfrontend
       tag: latest
       pullPolicy: IfNotPresent
     service:
       name: scfrontend
       type: ClusterIP
       externalPort: 30103
       internalPort: 30103
     resources: {}
     ingress:
       enabled: true
       # Used to create an Ingress record.
       hosts:
         - sc.xxxxx.com
       annotations:
         kubernetes.io/ingress.class: nginx
         nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - xxxx'
         nginx.ingress.kubernetes.io/auth-secret: xxxxxx
         nginx.ingress.kubernetes.io/auth-type: basic
   
       tls:
   
         # Secrets must be manually created in the namespace.
         # - secretName: chart-example-tls
         #   hosts:
         #     - chart-example.local
   sc:
     deployment: true
     replicaCount: 1
     discovery:
       # support servicecenter, etcd, and aggregate discovery mode
       type: etcd
       # the cluster urls list, can only support discovery type is "servicecenter" or "aggregate"
       # e.g. clusters: "sc-0=http://service-center-1:30100,sc-1=http://service-center-2:30100"
       clusters: "sc-0=http://etcd-foo:2379"
       # setting up the configuration of aggregator, only enabled when discovery type is "aggregate"
       # e.g. aggregate: "k8s,servicecenter"
       aggregate: "k8s,etcd"
     registry:
       enabled: true
       # support embeded_etcd, etcd, and buildin registry mode
       type: "etcd"
       name: "sc-0"
       addr: "http://etcd-foo:2379"
     image:
       repository: servicecomb/service-center
       tag: latest
       pullPolicy: IfNotPresent
     service:
       name: service-center
       type: ClusterIP
       externalPort: 30100
       internalPort: 30100
   
     ingress:
       enabled: false
       # Used to create an Ingress record.
       hosts: []
         #- blahblah
       annotations:
         kubernetes.io/ingress.class: scfrontend
         # kubernetes.io/tls-acme: "true"
       tls:
         # Secrets must be manually created in the namespace.
         # - secretName: chart-example-tls
         #   hosts:
         #     - chart-example.local
     resources: {}
       # We usually recommend not to specify default resources and to leave this as a conscious
       # choice for the user. This also increases chances charts run on environments with little
       # resources, such as Minikube. If you do want to specify resources, uncomment the following
       # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
       # limits:
       #  cpu: 100m
       #  memory: 128Mi
       # requests:
       #  cpu: 100m
       #  memory: 128Mi
   
   ```
   4. See error
   
   **Expected behavior**
   Service Center starts properly.
   
   **Platform And Runtime (please complete the following information):**
   
   Platform
    - OS: aliyun k8s, alpine linux
    - Browser n/a
    - Version 1.18 aliyun
   
   Runtime
 - Version: unknown; using the released binary.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@servicecomb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [servicecomb-service-center] sxcooler commented on issue #1162: the latest version boot cause panic

sxcooler commented on issue #1162:
URL: https://github.com/apache/servicecomb-service-center/issues/1162#issuecomment-949397097


   > Please show me the deployment.yaml file in the chart; it has more info that would help reproduce this problem.
   
   ```
   ## ---------------------------------------------------------------------------
   ## Licensed to the Apache Software Foundation (ASF) under one or more
   ## contributor license agreements.  See the NOTICE file distributed with
   ## this work for additional information regarding copyright ownership.
   ## The ASF licenses this file to You under the Apache License, Version 2.0
   ## (the "License"); you may not use this file except in compliance with
   ## the License.  You may obtain a copy of the License at
   ##
   ##      http://www.apache.org/licenses/LICENSE-2.0
   ##
   ## Unless required by applicable law or agreed to in writing, software
   ## distributed under the License is distributed on an "AS IS" BASIS,
   ## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   ## See the License for the specific language governing permissions and
   ## limitations under the License.
   ## ---------------------------------------------------------------------------
   
   {{- $serviceName := include "service-center.fullname" . -}}
   {{- $servicePort := .Values.sc.service.externalPort -}}
   {{- if .Values.sc.deployment -}}
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: {{ template "service-center.fullname" . }}
     namespace: {{ .Release.Namespace }}
     labels:
       app: {{ template "service-center.name" . }}
       chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
       release: {{ .Release.Name }}
       heritage: {{ .Release.Service }}
   spec:
     replicas: {{ .Values.sc.replicaCount }}
     selector:
       matchLabels:
         app: {{ template "service-center.name" . }}
     template:
       metadata:
         labels:
           app: {{ template "service-center.name" . }}
           release: {{ .Release.Name }}
       spec:
         serviceAccountName: {{ template "service-center.fullname" . }}
         volumes:
           - name: config
             configMap:
               name: {{ template "service-center.fullname" . }}
               items:
               - key: app-config
                 path: app.conf
         containers:
           - name: {{ .Chart.Name }}
             image: "{{ .Values.sc.image.repository }}:{{ .Values.sc.image.tag }}"
             imagePullPolicy: {{ .Values.sc.image.pullPolicy }}
             ports:
               - containerPort: {{ .Values.sc.service.internalPort }}
             volumeMounts:
             - name: config
               mountPath: /opt/service-center/conf
               readOnly: false
             resources:
   {{ toYaml .Values.sc.resources | indent 12 }}
       {{- if .Values.sc.nodeSelector }}
         nodeSelector:
   {{ toYaml .Values.sc.nodeSelector | indent 8 }}
       {{- end }}
   {{- end }}
   
   ```
   It's similar to your example; I just fixed the apiVersion and added a selector to fit newer Kubernetes.





[GitHub] [servicecomb-service-center] sxcooler commented on issue #1162: the latest version boot cause panic

sxcooler commented on issue #1162:
URL: https://github.com/apache/servicecomb-service-center/issues/1162#issuecomment-947698074


   I also have another issue: https://github.com/apache/servicecomb-java-chassis/issues/1983
   @liubao68 says it belongs here.
   By the way, for both sc and scfrontend, I strongly suggest publishing new versions only after full testing.





[GitHub] [servicecomb-service-center] sxcooler commented on issue #1162: the latest version boot cause panic

sxcooler commented on issue #1162:
URL: https://github.com/apache/servicecomb-service-center/issues/1162#issuecomment-949400178


   I think it is very unlikely to be a problem in my templates, because when I switch the Docker image back to 2.0.0, it works.





[GitHub] [servicecomb-service-center] little-cui commented on issue #1162: the latest version boot cause panic

little-cui commented on issue #1162:
URL: https://github.com/apache/servicecomb-service-center/issues/1162#issuecomment-948224964


   Please show me the deployment.yaml file in the chart; it has more info that would help reproduce this problem.





[GitHub] [servicecomb-service-center] little-cui closed issue #1162: the latest version boot cause panic

little-cui closed issue #1162:
URL: https://github.com/apache/servicecomb-service-center/issues/1162


   

