Posted to notifications@apisix.apache.org by GitBox <gi...@apache.org> on 2021/05/30 03:47:41 UTC

[GitHub] [apisix] xyz2b opened a new issue #4337: request help: apisix can't get up

xyz2b opened a new issue #4337:
URL: https://github.com/apache/apisix/issues/4337


   ### Issue description
   APISIX fails to start: `./bin/apisix start` reports no fatal error, but afterwards no nginx or APISIX process is running.
   
   apisix start log
   ```shell
   [app@VM_97_180_centos apisix]$ ./bin/apisix start --config ./conf/apisix.yaml                                                                                                  
   /data/app/openresty/luajit/bin/luajit ./apisix/cli/apisix.lua start --config ./conf/apisix.yaml
   mv: ‘/data/app/apisix/conf/config.yaml’ and ‘/data/app/apisix/conf/config.yaml.bak’ are the same file
   ln: failed to create hard link ‘/data/app/apisix/conf/config.yaml’: File exists
   Use customized yaml:    ./conf/apisix.yaml
   nginx: [warn] could not build optimal variables_hash, you should increase either variables_hash_max_size: 1024 or variables_hash_bucket_size: 64; ignoring variables_hash_bucket_size
   [app@VM_97_180_centos apisix]$ ps -ef|grep nginx
   app      22906 12164  0 11:45 pts/0    00:00:00 grep --color=auto nginx
   [app@VM_97_180_centos apisix]$ ps -ef|grep apisix
   app      22942 12164  0 11:45 pts/0    00:00:00 grep --color=auto apisix
   ```
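   The `mv: ... are the same file` and `ln: ... File exists` messages in the log above occur when source and destination already refer to the same inode. A hypothetical standalone reproduction (temporary paths, not the real `/data/app/apisix/conf` files):

   ```shell
   # Hypothetical reproduction of the CLI's mv/ln failure: when two names are
   # hard links to the same inode, `mv` refuses with "are the same file" and
   # `ln` fails with "File exists".
   tmp=$(mktemp -d)
   printf 'apisix: {}\n' > "$tmp/config.yaml"
   ln "$tmp/config.yaml" "$tmp/config.yaml.bak"           # hard link: one inode, two names

   mv "$tmp/config.yaml" "$tmp/config.yaml.bak" || true   # mv: ... are the same file
   ln "$tmp/config.yaml" "$tmp/config.yaml.bak" || true   # ln: ... File exists

   # both names still share one inode number
   stat -c %i "$tmp/config.yaml" "$tmp/config.yaml.bak"
   rm -r "$tmp"
   ```

   As the thread later concludes, these two errors are cosmetic here, since the default `config.yaml` path is in use anyway.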
   
   apisix config
    ```yaml
   #
   # Licensed to the Apache Software Foundation (ASF) under one or more
   # contributor license agreements.  See the NOTICE file distributed with
   # this work for additional information regarding copyright ownership.
   # The ASF licenses this file to You under the Apache License, Version 2.0
   # (the "License"); you may not use this file except in compliance with
   # the License.  You may obtain a copy of the License at
   #
   #     http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing, software
   # distributed under the License is distributed on an "AS IS" BASIS,
   # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   # See the License for the specific language governing permissions and
   # limitations under the License.
   #
   # PLEASE DO NOT UPDATE THIS FILE!
   # If you want to set the specified configuration value, you can set the new
   # value in the conf/config.yaml file.
   #
   
   apisix:
     node_listen: 9080                # APISIX listening port
     enable_admin: true
      enable_admin_cors: true          # Admin API supports CORS response headers.
     enable_debug: false
     enable_dev_mode: false           # Sets nginx worker_processes to 1 if set to true
     enable_reuseport: true           # Enable nginx SO_REUSEPORT switch if set to true.
     enable_ipv6: true
     config_center: etcd              # etcd: use etcd to store the config value
                                      # yaml: fetch the config value from local yaml file `/your_path/conf/apisix.yaml`
   
     #proxy_protocol:                 # Proxy Protocol configuration
     #listen_http_port: 9181          # The port with proxy protocol for http, it differs from node_listen and port_admin.
                                      # This port can only receive http request with proxy protocol, but node_listen & port_admin
                                      # can only receive http request. If you enable proxy protocol, you must use this port to
                                      # receive http request with proxy protocol
     #listen_https_port: 9182         # The port with proxy protocol for https
     #enable_tcp_pp: true             # Enable the proxy protocol for tcp proxy, it works for stream_proxy.tcp option
     #enable_tcp_pp_to_upstream: true # Enables the proxy protocol to the upstream server
     enable_server_tokens: true       # Whether the APISIX version number should be shown in Server header.
                                      # It's enabled by default.
   
     # configurations to load third party code and/or override the builtin one.
     extra_lua_path: ""               # extend lua_package_path to load third party code
     extra_lua_cpath: ""              # extend lua_package_cpath to load third party code
   
     proxy_cache:                     # Proxy Caching configuration
       cache_ttl: 10s                 # The default caching time if the upstream does not specify the cache time
       zones:                         # The parameters of a cache
          - name: disk_cache_one       # The name of the cache; the administrator can specify
                                       # which cache to use by name in the Admin API
           memory_size: 50m           # The size of shared memory, it's used to store the cache index
           disk_size: 1G              # The size of disk, it's used to store the cache data
           disk_path: "/tmp/disk_cache_one"  # The path to store the cache data
           cache_levels: "1:2"        # The hierarchy levels of a cache
         #- name: disk_cache_two
         #  memory_size: 50m
         #  disk_size: 1G
         #  disk_path: "/tmp/disk_cache_two"
         #  cache_levels: "1:2"
   
     allow_admin:                  # http://nginx.org/en/docs/http/ngx_http_access_module.html#allow
       - 127.0.0.0/24              # If we don't set any IP list, then any IP access is allowed by default.
       #- "::/64"
     #port_admin: 9180             # use a separate port
     #https_admin: true            # enable HTTPS when use a separate port for Admin API.
                                   # Admin API will use conf/apisix_admin_api.crt and conf/apisix_admin_api.key as certificate.
     admin_api_mtls:               # Depends on `port_admin` and `https_admin`.
       admin_ssl_cert: ""          # Path of your self-signed server side cert.
       admin_ssl_cert_key: ""      # Path of your self-signed server side key.
        admin_ssl_ca_cert: ""       # Path of your self-signed CA cert. The CA is used to sign all Admin API callers' certificates.
   
     # Default token when use API to call for Admin API.
     # *NOTE*: Highly recommended to modify this value to protect APISIX's Admin API.
     # Disabling this configuration item means that the Admin API does not
     # require any authentication.
     admin_key:
       -
         name: "admin"
         key: edd1c9f034335f136f87ad84b625c8f1
         role: admin                 # admin: manage all configuration data
                                     # viewer: only can view configuration data
       -
         name: "viewer"
         key: 4054f7cf07e344346cd3f287985e76a2
         role: viewer
   
     delete_uri_tail_slash: false    # delete the '/' at the end of the URI
      global_rule_skip_internal_api: true    # do not run global rules on internal APIs
                                             # (APIs whose path starts with "/apisix" are considered internal)
     router:
        http: 'radixtree_uri'         # radixtree_uri: match route by URI (based on radixtree)
                                      # radixtree_host_uri: match route by host + URI (based on radixtree)
                                      # radixtree_uri_with_parameter: like radixtree_uri but matches URIs with parameters,
                                      #   see https://github.com/api7/lua-resty-radixtree/#parameters-in-path for
                                      #   more details.
        ssl: 'radixtree_sni'          # radixtree_sni: match route by SNI (based on radixtree)
     #stream_proxy:                  # TCP/UDP proxy
     #  tcp:                         # TCP proxy port list
     #    - 9100
     #    - "127.0.0.1:9101"
     #  udp:                         # UDP proxy port list
     #    - 9200
     #    - "127.0.0.1:9201"
     #dns_resolver:                  # If not set, read from `/etc/resolv.conf`
     #  - 1.1.1.1
     #  - 8.8.8.8
     #dns_resolver_valid: 30         # if given, override the TTL of the valid records. The unit is second.
     resolver_timeout: 5             # resolver timeout
     enable_resolv_search_opt: true  # enable search option in resolv.conf
     ssl:
       enable: true
       enable_http2: true
       listen_port: 9443
       ssl_trusted_certificate: /data/app/apisix/ssl/ca.pem  # Specifies a file path with trusted CA certificates in the PEM format
                                                   # used to verify the certificate when APISIX needs to do SSL/TLS handshaking
                                                   # with external services (e.g. etcd)
       ssl_protocols: "TLSv1.2 TLSv1.3"
       ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
        ssl_session_tickets: false              #  disabled by default because ssl_session_tickets would undermine Perfect Forward Secrecy.
                                                #  ref: https://github.com/mozilla/server-side-tls/issues/135
        key_encrypt_salt: "edd1c9f0985e76a2"    #  If not set, the original SSL key is saved to etcd unencrypted.
                                                #  If set, it must be a string of exactly 16 characters; the SSL key is then encrypted with AES-128-CBC.
                                                #  !!! Do not change it after saving your SSL objects: already-saved keys cannot be decrypted after a change !!!
     enable_control: true
     #control:
     #  ip: "127.0.0.1"
     #  port: 9090
     disable_sync_configuration_during_start: false  # safe exit. Remove this once the feature is stable
   
   nginx_config:                     # config for render the template to generate nginx.conf
     error_log: "logs/error.log"
     error_log_level: "debug"         # warn,error
      worker_processes: auto          # "auto" usually gives the best performance, but note it only works well on physical machines;
                                      # use no more than 8 workers, otherwise contention between workers wastes resources.
                                      # To use multiple cores in a container, inject the CPU count via the environment variable "APISIX_WORKER_PROCESSES".
      enable_cpu_affinity: true       # enable CPU affinity; this also only works well on physical machines
     worker_rlimit_nofile: 20480     # the number of files a worker process can open, should be larger than worker_connections
     worker_shutdown_timeout: 240s   # timeout for a graceful shutdown of worker processes
     event:
       worker_connections: 10620
     #envs:                          # allow to get a list of environment variables
     #  - TEST_ENV
   
     # As user can add arbitrary configurations in the snippet,
     # it is user's responsibility to check the configurations
     # don't conflict with APISIX.
     main_configuration_snippet: |
       # Add custom Nginx main configuration to nginx.conf.
       # The configuration should be well indented!
     http_configuration_snippet: |
       # Add custom Nginx http configuration to nginx.conf.
       # The configuration should be well indented!
     http_server_configuration_snippet: |
       # Add custom Nginx http server configuration to nginx.conf.
       # The configuration should be well indented!
     http_admin_configuration_snippet: |
       # Add custom Nginx admin server configuration to nginx.conf.
       # The configuration should be well indented!
     http_end_configuration_snippet: |
       # Add custom Nginx http end configuration to nginx.conf.
       # The configuration should be well indented!
     stream_configuration_snippet: |
       # Add custom Nginx stream configuration to nginx.conf.
       # The configuration should be well indented!
   
     http:
       enable_access_log: true        # enable access log or not, default true
       access_log: "logs/access.log"
       access_log_format: "$remote_addr - $remote_user [$time_local] $http_host \"$request\" $status $body_bytes_sent $request_time \"$http_referer\" \"$http_user_agent\" $upstream_addr $upstream_status $upstream_response_time \"$upstream_scheme://$upstream_host$upstream_uri\""
       access_log_format_escape: default       # allows setting json or default characters escaping in variables
       keepalive_timeout: 60s         # timeout during which a keep-alive client connection will stay open on the server side.
       client_header_timeout: 60s     # timeout for reading client request header, then 408 (Request Time-out) error is returned to the client
       client_body_timeout: 60s       # timeout for reading client request body, then 408 (Request Time-out) error is returned to the client
       client_max_body_size: 0        # The maximum allowed size of the client request body.
                                      # If exceeded, the 413 (Request Entity Too Large) error is returned to the client.
                                      # Note that unlike Nginx, we don't limit the body size by default.
   
        send_timeout: 10s              # timeout for transmitting a response to the client; afterwards the connection is closed
       underscores_in_headers: "on"   # default enables the use of underscores in client request header fields
       real_ip_header: "X-Real-IP"    # http://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header
       real_ip_from:                  # http://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from
         - 127.0.0.1
         - 'unix:'
       #lua_shared_dicts:             # add custom shared cache to nginx.conf
       #  ipc_shared_dict: 100m       # custom shared cache, format: `cache-key: cache-size`
   
       # Enables or disables passing of the server name through TLS Server Name Indication extension (SNI, RFC 6066)
       # when establishing a connection with the proxied HTTPS server.
       proxy_ssl_server_name: true
       upstream:
         keepalive: 320               # Sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process.
                                      # When this number is exceeded, the least recently used connections are closed.
         keepalive_requests: 1000     # Sets the maximum number of requests that can be served through one keepalive connection.
                                      # After the maximum number of requests is made, the connection is closed.
         keepalive_timeout: 60s       # Sets a timeout during which an idle keepalive connection to an upstream server will stay open.
   
   etcd:
     host:                           # it's possible to define multiple etcd hosts addresses of the same etcd cluster.
       - "https://etcd01.apisix.webank.com:2379"
       - "https://etcd02.apisix.webank.com:2379"     # multiple etcd address, if your etcd cluster enables TLS, please use https scheme,
       - "https://etcd03.apisix.webank.com:2379"
                                     # e.g. "https://127.0.0.1:2379".
     prefix: "/apisix"               # apisix configurations prefix
     timeout: 30                     # 30 seconds
     #resync_delay: 5                # when sync failed and a rest is needed, resync after the configured seconds plus 50% random jitter
     #user: root                     # root username for etcd
     #password: 5tHkHhYkjr6cQY       # root password for etcd
     tls:
       # To enable etcd client certificate you need to build APISIX-Openresty, see
       # http://apisix.apache.org/docs/apisix/how-to-build#6-build-openresty-for-apisix
       cert: /data/app/apisix/ssl/etcd.pem          # path of certificate used by the etcd client
       key: /data/app/apisix/ssl/etcd-key.pem            # path of key used by the etcd client
   
        verify: true                  # whether to verify the etcd endpoint certificate when setting up a TLS connection to etcd;
                                      # the default value is true, i.e. the certificate is verified strictly.
   
   #discovery:                       # service discovery center
   #  dns:
   #    resolver:
   #      - "127.0.0.1:8600"         # use the real address of your dns server
   #  eureka:
   #    host:                        # it's possible to define multiple eureka hosts addresses of the same eureka cluster.
   #      - "http://127.0.0.1:8761"
   #    prefix: "/eureka/"
   #    fetch_interval: 30           # default 30s
   #    weight: 100                  # default weight for node
   #    timeout:
   #      connect: 2000              # default 2000ms
   #      send: 2000                 # default 2000ms
   #      read: 5000                 # default 5000ms
   
   graphql:
     max_size: 1048576               # the maximum size limitation of graphql in bytes, default 1MiB
   
   #ext-plugin:
     #cmd: ["ls", "-l"]
   
   plugins:                          # plugin list (sorted in alphabetical order)
     - api-breaker
     - authz-keycloak
     - basic-auth
     - batch-requests
     - consumer-restriction
     - cors
     #- dubbo-proxy
     - echo
     #- error-log-logger
     #- example-plugin
     - ext-plugin-pre-req
     - ext-plugin-post-req
     - fault-injection
     - grpc-transcode
     - hmac-auth
     - http-logger
     - ip-restriction
     - jwt-auth
     - kafka-logger
     - key-auth
     - limit-conn
     - limit-count
     - limit-req
     #- log-rotate
     #- node-status
     - openid-connect
     - prometheus
     - proxy-cache
     - proxy-mirror
     - proxy-rewrite
     - redirect
     - referer-restriction
     - request-id
     - request-validation
     - response-rewrite
     - serverless-post-function
     - serverless-pre-function
     #- skywalking
     - sls-logger
     - syslog
     - tcp-logger
     - udp-logger
     - uri-blocker
     - wolf-rbac
     - zipkin
     - server-info
     - traffic-split
   
   stream_plugins:
     - mqtt-proxy
   
   plugin_attr:
     log-rotate:
       interval: 3600    # rotate interval (unit: second)
       max_kept: 168     # max number of log files will be kept
     skywalking:
       service_name: APISIX
       service_instance_name: "APISIX Instance Name"
       endpoint_addr: http://127.0.0.1:12800
     prometheus:
       export_uri: /apisix/prometheus/metrics
       enable_export_server: true
       export_addr:
         ip: "127.0.0.1"
         port: 9091
     server-info:
       report_interval: 60  # server info report interval (unit: second)
       report_ttl: 3600     # live time for server info in etcd (unit: second)
     dubbo-proxy:
       upstream_multiplex_count: 32
   ```
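   Since the `key_encrypt_salt` comment in the config above requires exactly 16 characters (the value is used as an AES-128-CBC key), a minimal pre-flight check is possible. This is a hypothetical helper, not part of APISIX's CLI:

   ```shell
   # Hypothetical pre-flight check: key_encrypt_salt must be exactly 16
   # characters, since APISIX uses it as an AES-128-CBC key.
   salt="edd1c9f0985e76a2"        # value from the config above
   if [ "${#salt}" -eq 16 ]; then
       echo "key_encrypt_salt length OK"
   else
       echo "key_encrypt_salt must be exactly 16 characters" >&2
       exit 1
   fi
   ```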
   
   etcd config
    ```yaml
   name: 'etcd01'
   data-dir: /data/app/etcd/data
   enable-grpc-gateway: true
   listen-peer-urls: https://10.107.97.24:2380
   listen-client-urls: https://10.107.97.24:2379
   initial-advertise-peer-urls: https://etcd01.apisix.xxxx.com:2380
   advertise-client-urls: https://etcd01.apisix.xxxx.com:2379
   initial-cluster: 'etcd01=https://etcd01.apisix.xxxx.com:2380,etcd02=https://etcd02.apisix.xxxx.com:2380,etcd03=https://etcd03.apisix.xxxx.com:2380'
   initial-cluster-token: 'apisix-etcd-cluster'
   initial-cluster-state: 'new'
   client-transport-security:
     cert-file: /data/app/etcd/ssl/etcd.pem
     key-file: /data/app/etcd/ssl/etcd-key.pem
     trusted-ca-file: /data/app/etcd/ssl/ca.pem
   peer-transport-security:
     cert-file: /data/app/etcd/ssl/etcd.pem
     key-file: /data/app/etcd/ssl/etcd-key.pem
     trusted-ca-file: /data/app/etcd/ssl/ca.pem
   ```
   
   nginx log
   ```shell
   2021/05/30 11:32:42 [warn] 18691#18691: could not build optimal variables_hash, you should increase either variables_hash_max_size: 1024 or variables_hash_bucket_size: 64; ignoring variables_hash_bucket_size
   2021/05/30 11:32:42 [info] 18691#18691: [lua] core.lua:26: use config_center: etcd
   2021/05/30 11:32:42 [info] 18691#18691: [lua] resolver.lua:28: init_resolver(): dns resolver ["183.60.83.19","183.60.82.98"]
   2021/05/30 11:32:42 [debug] 18691#18691: posix_memalign: 00000000010A3490:512 @16
   2021/05/30 11:32:42 [debug] 18691#18691: malloc: 00000000010A36A0:476
   2021/05/30 11:32:42 [debug] 18691#18691: posix_memalign: 00000000010A3890:512 @16
   2021/05/30 11:32:42 [debug] 18691#18691: malloc: 00000000010A3AA0:4096
   2021/05/30 11:32:42 [debug] 18691#18691: malloc: 00000000010A4AB0:4096
   2021/05/30 11:32:42 [debug] 18691#18691: malloc: 00000000010A5AC0:4096
   2021/05/30 11:32:42 [debug] 18691#18691: free: 00000000010A36A0
   2021/05/30 11:32:42 [debug] 18691#18691: free: 00000000010A3AA0
   2021/05/30 11:32:42 [debug] 18691#18691: free: 00000000010A5AC0
   2021/05/30 11:32:42 [debug] 18691#18691: free: 00000000010A4AB0
   2021/05/30 11:32:42 [debug] 18691#18691: pcre JIT compiling result: 1
   2021/05/30 11:32:42 [debug] 18691#18691: posix_memalign: 00000000010A3AA0:512 @16
   2021/05/30 11:32:42 [debug] 18691#18691: malloc: 00000000010A3CB0:4096
   2021/05/30 11:32:42 [debug] 18691#18691: malloc: 00000000010A4CC0:4096
   2021/05/30 11:32:42 [debug] 18691#18691: free: 00000000010A3CB0
   2021/05/30 11:32:42 [debug] 18691#18691: free: 00000000010A4CC0
   2021/05/30 11:32:42 [debug] 18691#18691: posix_memalign: 00000000010A3CB0:512 @16
   2021/05/30 11:32:42 [debug] 18691#18691: pcre JIT compiling result: 1
   2021/05/30 11:32:42 [debug] 18691#18691: malloc: 00007F0226D33010:589704
   2021/05/30 11:32:42 [debug] 18691#18691: new block, alloc semaphore: 00007F0226D33028 block: 00007F0226D33010
   2021/05/30 11:32:42 [debug] 18691#18691: http lua semaphore new: 00007F0226D33028, resources: 0
   2021/05/30 11:32:42 [info] 18691#18691: [lua] v3.lua:35: _request_uri(): v3 request uri: https://etcd02.apisix.webank.com:2379/v3/kv/range, timeout: 30
   2021/05/30 11:32:42 [debug] 18691#18691: posix_memalign: 00000000010A3EC0:512 @16
   2021/05/30 11:32:42 [debug] 18691#18691: malloc: 00000000010A40D0:568
   2021/05/30 11:32:42 [debug] 18691#18691: posix_memalign: 00000000010A4310:512 @16
   2021/05/30 11:32:42 [debug] 18691#18691: malloc: 00000000010A4520:4096
   2021/05/30 11:32:42 [debug] 18691#18691: malloc: 00000000010A5530:4096
   2021/05/30 11:32:42 [debug] 18691#18691: malloc: 00000000010A6540:4096
   2021/05/30 11:32:42 [debug] 18691#18691: free: 00000000010A40D0
   2021/05/30 11:32:42 [debug] 18691#18691: free: 00000000010A4520
   2021/05/30 11:32:42 [debug] 18691#18691: free: 00000000010A6540
   2021/05/30 11:32:42 [debug] 18691#18691: free: 00000000010A5530
   2021/05/30 11:32:42 [debug] 18691#18691: pcre JIT compiling result: 1
   ```
   ### Environment
   
   Request help without environment information will be ignored or closed.
   
   * apisix version (cmd: `apisix version`):
   ```shell
   [app@VM_97_180_centos apisix]$ ./bin/apisix version
   /data/app/openresty/luajit/bin/luajit ./apisix/cli/apisix.lua version
   2.6
   ```
   * OS (cmd: `uname -a`):
   ```shell
   [app@VM_97_180_centos apisix]$ uname -a
   Linux VM_97_180_centos 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
   ```
   * OpenResty / Nginx version (cmd: `nginx -V` or `openresty -V`):
   ```shell
   [app@VM_97_180_centos apisix]$ openresty -V                                                               
   nginx version: openresty/1.19.3.1
   built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) 
   built with OpenSSL 1.1.1k  25 Mar 2021
   TLS SNI support enabled
    configure arguments: --prefix=/data/app/openresty/nginx --with-debug --with-cc-opt='-DNGX_LUA_USE_ASSERT -DNGX_LUA_ABORT_AT_PANIC -O2' --add-module=../ngx_devel_kit-0.3.1 --add-module=../echo-nginx-module-0.62 --add-module=../xss-nginx-module-0.06 --add-module=../ngx_coolkit-0.2 --add-module=../set-misc-nginx-module-0.32 --add-module=../form-input-nginx-module-0.12 --add-module=../encrypted-session-nginx-module-0.08 --add-module=../srcache-nginx-module-0.32 --add-module=../ngx_lua-0.10.19 --add-module=../ngx_lua_upstream-0.07 --add-module=../headers-more-nginx-module-0.33 --add-module=../array-var-nginx-module-0.05 --add-module=../memc-nginx-module-0.19 --add-module=../redis2-nginx-module-0.15 --add-module=../redis-nginx-module-0.3.7 --add-module=../ngx_stream_lua-0.0.9 --with-ld-opt=-Wl,-rpath,/data/app/openresty/luajit/lib --user=app --group=apps --add-module=/data/backup/openresty-1.19.3.1/../mod_dubbo --add-module=/data/backup/openresty-1.19.3.1/../ngx_multi_upstream_module --add-module=/data/backup/openresty-1.19.3.1/../apisix-nginx-module --with-poll_module --with-pcre-jit --with-stream --with-stream_ssl_module --with-stream_ssl_preread_module --with-http_v2_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-http_stub_status_module --with-http_realip_module --with-http_addition_module --with-http_auth_request_module --with-http_secure_link_module --with-http_random_index_module --with-http_gzip_static_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-threads --with-compat --with-openssl=/data/backup/openresty-1.19.3.1/../openssl-OpenSSL_1_1_1k --with-openssl-opt=-g --with-stream --with-http_ssl_module
   ```
    * etcd version, if any (cmd: run `curl http://127.0.0.1:9090/v1/server_info` to get the info from the server-info API):
   ```shell
   [app@VM_97_180_centos apisix]$ curl --cert ./ssl/etcd.pem --key ./ssl/etcd-key.pem --cacert ./ssl/ca.pem -i https://etcd01.apisix.webank.com:2379/version
   
   {"etcdserver":"3.4.16","etcdcluster":"3.4.0"}
   ```
    * apisix-dashboard version, if any:
   * luarocks version, if the issue is about installation (cmd: `luarocks --version`):
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [apisix] spacewander commented on issue #4337: request help: apisix can't get up

spacewander commented on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-850937470


   Looks like the customized yaml doesn't take effect if `mv` or `ln` fails.
   
   @Yiyiyimu 
   Can you take a look when you have free time? Thanks!





[GitHub] [apisix] spacewander commented on issue #4337: request help: apisix can't get up

spacewander commented on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-850936400


   Try disabling the daemon mode via:
   ```
     main_configuration_snippet: |
       daemon off;
   ```
   
   And see what happens.
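   Running in the foreground makes fatal startup errors visible on the terminal instead of being lost when the daemonized master exits. The liveness check behind the suggestion can be sketched generically (with `sleep` standing in for APISIX's nginx master; after a real `apisix start` you would look for the nginx master process instead):

   ```shell
   # Sketch: confirm a background-started process is still alive shortly after
   # launch; if it died during startup, the logs are the place to look.
   sleep 30 &
   pid=$!
   sleep 1
   if kill -0 "$pid" 2>/dev/null; then
       echo "process alive"
   else
       echo "process exited during startup; check logs/error.log"
   fi
   kill "$pid" 2>/dev/null
   ```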





[GitHub] [apisix] xyz2b commented on issue #4337: request help: apisix can't get up

xyz2b commented on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-852951809


   I reinstalled openresty and apisix and the problem disappeared.





[GitHub] [apisix] Yiyiyimu commented on issue #4337: request help: apisix can't get up

Yiyiyimu commented on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-851121413


   ```
   [app@VM_97_180_centos apisix]$ ./bin/apisix start --config ./conf/apisix.yaml
   /data/app/openresty/luajit/bin/luajit ./apisix/cli/apisix.lua start --config ./conf/apisix.yaml
   ```
   Hi @xyz2b, since you are using the default path of `config.yaml`, you can omit the `--config` argument and run `./bin/apisix start` directly.
   
   ---
   
   Hi @spacewander, since @xyz2b actually uses the default path of `config.yaml`, the error indeed has no effect on APISIX (it still uses `config.yaml` to set up and run).





[GitHub] [apisix] xyz2b commented on issue #4337: request help: apisix can't get up

xyz2b commented on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-850962075


    > > I don't think that's the reason: I can see the snippet configuration is effective in the generated nginx.conf file.
    > > It works fine with the stock OpenResty build, but not with the APISIX OpenResty build.
   > > nginx.conf
   > > ```shell
   > > # Configuration File - Nginx Server Configs
   > > # This is a read-only file, do not try to modify it.
   > > 
   > > master_process on;
   > > 
   > > worker_processes auto;
   > > worker_cpu_affinity auto;
   > > 
   > > # main configuration snippet starts
   > > daemon off;
   > > 
   > > # main configuration snippet ends
   > > 
   > > error_log logs/error.log debug;
   > > pid logs/nginx.pid;
   > > 
   > > worker_rlimit_nofile 20480;
   > > 
   > > events {
   > >     accept_mutex off;
   > >     worker_connections 10620;
   > > }
   > > 
   > > worker_rlimit_core  16G;
   > > 
   > > worker_shutdown_timeout 240s;
   > > 
   > > env APISIX_PROFILE;
   > > env PATH; # for searching external plugin runner's binary
   > > 
   > > 
   > > 
   > > http {
   > >     # put extra_lua_path in front of the builtin path
   > >     # so user can override the source code
   > >     lua_package_path  "$prefix/deps/share/lua/5.1/?.lua;$prefix/deps/share/lua/5.1/?/init.lua;/data/app/apisix/?.lua;/data/app/apisix/?/init.lua;;./?.lua;/data/app/openresty/luajit/share/luajit-2.1.0-beta3/?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua;/data/app/openresty/luajit/share/lua/5.1/?.lua;/data/app/openresty/luajit/share/lua/5.1/?/init.lua;";
   > >     lua_package_cpath "$prefix/deps/lib64/lua/5.1/?.so;$prefix/deps/lib/lua/5.1/?.so;;./?.so;/usr/local/lib/lua/5.1/?.so;/data/app/openresty/luajit/lib/lua/5.1/?.so;/usr/local/lib/lua/5.1/loadall.so;";
   > > 
   > >     lua_shared_dict internal_status      10m;
   > >     lua_shared_dict plugin-limit-req     10m;
   > >     lua_shared_dict plugin-limit-count   10m;
   > >     lua_shared_dict prometheus-metrics   10m;
   > >     lua_shared_dict plugin-limit-conn    10m;
   > >     lua_shared_dict upstream-healthcheck 10m;
   > >     lua_shared_dict worker-events        10m;
   > >     lua_shared_dict lrucache-lock        10m;
   > >     lua_shared_dict balancer_ewma        10m;
   > >     lua_shared_dict balancer_ewma_locks  10m;
   > >     lua_shared_dict balancer_ewma_last_touched_at 10m;
   > >     lua_shared_dict plugin-limit-count-redis-cluster-slot-lock 1m;
   > >     lua_shared_dict tracing_buffer       10m; # plugin: skywalking
   > >     lua_shared_dict plugin-api-breaker   10m;
   > > 
   > >     # for openid-connect and authz-keycloak plugin
   > >     lua_shared_dict discovery             1m; # cache for discovery metadata documents
   > > 
   > >     # for openid-connect plugin
   > >     lua_shared_dict jwks                  1m; # cache for JWKs
   > >     lua_shared_dict introspection        10m; # cache for JWT verification results
   > > 
   > >     # for authz-keycloak
   > >     lua_shared_dict access_tokens         1m; # cache for service account access tokens
   > > 
   > >     # for custom shared dict
   > > 
   > >     # for proxy cache
   > >     proxy_cache_path /tmp/disk_cache_one levels=1:2 keys_zone=disk_cache_one:50m inactive=1d max_size=1G use_temp_path=off;
   > > 
   > >     # for proxy cache
   > >     map $upstream_cache_zone $upstream_cache_zone_info {
   > >         disk_cache_one /tmp/disk_cache_one,1:2;
   > >     }
   > > 
   > > 
   > >     lua_ssl_verify_depth 5;
   > >     ssl_session_timeout 86400;
   > > 
   > >     underscores_in_headers on;
   > > 
   > >     lua_socket_log_errors off;
   > > 
   > >     resolver 183.60.83.19 183.60.82.98;
   > >     resolver_timeout 5;
   > > 
   > >     lua_http10_buffering off;
   > > 
   > >     lua_regex_match_limit 100000;
   > >     lua_regex_cache_max_entries 8192;
   > > 
   > >     log_format main escape=default '$remote_addr - $remote_user [$time_local] $http_host "$request" $status $body_bytes_sent $request_time "$http_referer" "$http_user_agent" $upstream_addr $upstream_status $upstream_response_time "$upstream_scheme://$upstream_host$upstream_uri"';
   > >     uninitialized_variable_warn off;
   > > 
   > >     access_log logs/access.log main buffer=16384 flush=3;
   > >     open_file_cache  max=1000 inactive=60;
   > >     client_max_body_size 0;
   > >     keepalive_timeout 60s;
   > >     client_header_timeout 60s;
   > >     client_body_timeout 60s;
   > >     send_timeout 10s;
   > > 
   > >     server_tokens off;
   > > 
   > >     include mime.types;
   > >     charset utf-8;
   > > 
   > >     # error_page
   > >     error_page 500 @50x.html;
   > > 
   > >     real_ip_header X-Real-IP;
   > > 
   > >     set_real_ip_from 127.0.0.1;
   > >     set_real_ip_from unix:;
   > > 
   > >     # http configuration snippet starts
   > >     
   > > 
   > >     # http configuration snippet ends
   > > 
   > >     upstream apisix_backend {
   > >         server 0.0.0.1;
   > >         balancer_by_lua_block {
   > >             apisix.http_balancer_phase()
   > >         }
   > > 
   > >         keepalive 320;
   > >         keepalive_requests 1000;
   > >         keepalive_timeout 60s;
   > >     }
   > > 
   > > 
   > >     init_by_lua_block {
   > >         require "resty.core"
   > >         apisix = require("apisix")
   > > 
   > >         local dns_resolver = { "183.60.83.19", "183.60.82.98", }
   > >         local args = {
   > >             dns_resolver = dns_resolver,
   > >         }
   > >         apisix.http_init(args)
   > >     }
   > > 
   > >     init_worker_by_lua_block {
   > >         apisix.http_init_worker()
   > >     }
   > > 
   > >     server {
   > >         listen 127.0.0.1:9090;
   > > 
   > >         access_log off;
   > > 
   > >         location / {
   > >             content_by_lua_block {
   > >                 apisix.http_control()
   > >             }
   > >         }
   > > 
   > >         location @50x.html {
   > >             set $from_error_page 'true';
   > >             try_files /50x.html $uri;
   > >         }
   > >     }
   > > 
   > >     server {
   > >         listen 127.0.0.1:9091;
   > > 
   > >         access_log off;
   > > 
   > >         location / {
   > >             content_by_lua_block {
   > >                 local prometheus = require("apisix.plugins.prometheus")
   > >                 prometheus.export_metrics()
   > >             }
   > >         }
   > > 
   > >         location = /apisix/nginx_status {
   > >             allow 127.0.0.0/24;
   > >             deny all;
   > >             stub_status;
   > >         }
   > >     }
   > > 
   > > 
   > >     server {
   > >         listen 9080 default_server reuseport;
   > >         listen 9443 ssl default_server http2 reuseport;
   > > 
   > >         listen [::]:9080 default_server reuseport;
   > >         listen [::]:9443 ssl default_server http2 reuseport;
   > > 
   > >         server_name _;
   > > 
   > >         lua_ssl_trusted_certificate /data/app/apisix/ssl/ca.pem;
   > > 
   > >         ssl_certificate      cert/ssl_PLACE_HOLDER.crt;
   > >         ssl_certificate_key  cert/ssl_PLACE_HOLDER.key;
   > >         ssl_session_cache    shared:SSL:20m;
   > >         ssl_session_timeout 10m;
   > > 
   > >         ssl_protocols TLSv1.2 TLSv1.3;
   > >         ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
   > >         ssl_prefer_server_ciphers on;
   > >         ssl_session_tickets off;
   > > 
   > >         # http server configuration snippet starts
   > >         
   > > 
   > >         # http server configuration snippet ends
   > > 
   > >         location = /apisix/nginx_status {
   > >             allow 127.0.0.0/24;
   > >             deny all;
   > >             access_log off;
   > >             stub_status;
   > >         }
   > > 
   > >         location /apisix/admin {
   > >             set $upstream_scheme             'http';
   > >             set $upstream_host               $http_host;
   > >             set $upstream_uri                '';
   > > 
   > >                 allow 127.0.0.0/24;
   > >                 deny all;
   > > 
   > >             content_by_lua_block {
   > >                 apisix.http_admin()
   > >             }
   > >         }
   > > 
   > >         ssl_certificate_by_lua_block {
   > >             apisix.http_ssl_phase()
   > >         }
   > > 
   > >         proxy_ssl_name $upstream_host;
   > >         proxy_ssl_server_name on;
   > > 
   > >         location / {
   > >             set $upstream_mirror_host        '';
   > >             set $upstream_upgrade            '';
   > >             set $upstream_connection         '';
   > > 
   > >             set $upstream_scheme             'http';
   > >             set $upstream_host               $http_host;
   > >             set $upstream_uri                '';
   > >             set $ctx_ref                     '';
   > >             set $from_error_page             '';
   > > 
   > > 
   > >             access_by_lua_block {
   > >                 apisix.http_access_phase()
   > >             }
   > > 
   > >             proxy_http_version 1.1;
   > >             proxy_set_header   Host              $upstream_host;
   > >             proxy_set_header   Upgrade           $upstream_upgrade;
   > >             proxy_set_header   Connection        $upstream_connection;
   > >             proxy_set_header   X-Real-IP         $remote_addr;
   > >             proxy_pass_header  Date;
   > > 
   > >             ### the following x-forwarded-* headers is to send to upstream server
   > > 
   > >             set $var_x_forwarded_for        $remote_addr;
   > >             set $var_x_forwarded_proto      $scheme;
   > >             set $var_x_forwarded_host       $host;
   > >             set $var_x_forwarded_port       $server_port;
   > > 
   > >             if ($http_x_forwarded_for != "") {
   > >                 set $var_x_forwarded_for "${http_x_forwarded_for}, ${realip_remote_addr}";
   > >             }
   > >             if ($http_x_forwarded_host != "") {
   > >                 set $var_x_forwarded_host $http_x_forwarded_host;
   > >             }
   > >             if ($http_x_forwarded_port != "") {
   > >                 set $var_x_forwarded_port $http_x_forwarded_port;
   > >             }
   > > 
   > >             proxy_set_header   X-Forwarded-For      $var_x_forwarded_for;
   > >             proxy_set_header   X-Forwarded-Proto    $var_x_forwarded_proto;
   > >             proxy_set_header   X-Forwarded-Host     $var_x_forwarded_host;
   > >             proxy_set_header   X-Forwarded-Port     $var_x_forwarded_port;
   > > 
   > >             ###  the following configuration is to cache response content from upstream server
   > > 
   > >             set $upstream_cache_zone            off;
   > >             set $upstream_cache_key             '';
   > >             set $upstream_cache_bypass          '';
   > >             set $upstream_no_cache              '';
   > > 
   > >             proxy_cache                         $upstream_cache_zone;
   > >             proxy_cache_valid                   any 10s;
   > >             proxy_cache_min_uses                1;
   > >             proxy_cache_methods                 GET HEAD;
   > >             proxy_cache_lock_timeout            5s;
   > >             proxy_cache_use_stale               off;
   > >             proxy_cache_key                     $upstream_cache_key;
   > >             proxy_no_cache                      $upstream_no_cache;
   > >             proxy_cache_bypass                  $upstream_cache_bypass;
   > > 
   > > 
   > >             proxy_pass      $upstream_scheme://apisix_backend$upstream_uri;
   > > 
   > >             mirror          /proxy_mirror;
   > > 
   > >             header_filter_by_lua_block {
   > >                 apisix.http_header_filter_phase()
   > >             }
   > > 
   > >             body_filter_by_lua_block {
   > >                 apisix.http_body_filter_phase()
   > >             }
   > > 
   > >             log_by_lua_block {
   > >                 apisix.http_log_phase()
   > >             }
   > >         }
   > > 
   > >         location @grpc_pass {
   > > 
   > >             access_by_lua_block {
   > >                 apisix.grpc_access_phase()
   > >             }
   > > 
   > >             grpc_set_header   Content-Type application/grpc;
   > >             grpc_socket_keepalive on;
   > >             grpc_pass         $upstream_scheme://apisix_backend;
   > > 
   > >             header_filter_by_lua_block {
   > >                 apisix.http_header_filter_phase()
   > >             }
   > > 
   > >             body_filter_by_lua_block {
   > >                 apisix.http_body_filter_phase()
   > >             }
   > > 
   > >             log_by_lua_block {
   > >                 apisix.http_log_phase()
   > >             }
   > >         }
   > > 
   > > 
   > >         location = /proxy_mirror {
   > >             internal;
   > > 
   > >             if ($upstream_mirror_host = "") {
   > >                 return 200;
   > >             }
   > > 
   > >             proxy_http_version 1.1;
   > >             proxy_set_header Host $upstream_host;
   > >             proxy_pass $upstream_mirror_host$request_uri;
   > >         }
   > > 
   > >         location @50x.html {
   > >             set $from_error_page 'true';
   > >             try_files /50x.html $uri;
   > >             header_filter_by_lua_block {
   > >                 apisix.http_header_filter_phase()
   > >             }
   > > 
   > >             log_by_lua_block {
   > >                 apisix.http_log_phase()
   > >             }
   > >         }
   > >     }
   > >     # http end configuration snippet starts
   > >     
   > > 
   > >     # http end configuration snippet ends
   > > }
   > > ```
   > 
   > What @spacewander said addresses the problem reported by the shell (the failure of the `mv` command).
   
   I see. But the failure of nginx to start is not caused by the `mv` command error.
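   For reference, those two shell messages are benign and easy to reproduce: they appear whenever `config.yaml` is already a hard link to `config.yaml.bak`. A minimal sketch using a throwaway directory (not APISIX's real paths):
   
   ```shell
   # Reproduce the start-log messages: mv refuses to move one hard link of
   # a file onto another name for the same inode, and ln then fails because
   # the link name already exists.
   cd "$(mktemp -d)"
   echo 'conf' > config.yaml.bak
   ln config.yaml.bak config.yaml                # both names share one inode
   mv config.yaml config.yaml.bak 2>&1 || true   # "... are the same file"
   ln config.yaml.bak config.yaml 2>&1 || true   # "... File exists"
   ```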


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [apisix] xyz2b commented on issue #4337: request help: apisix can't get up

Posted by GitBox <gi...@apache.org>.
xyz2b commented on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-851133012


   > After adding debug logging, I found that execution reaches the `return sock:tlshandshake(opts)` statement in the `resty.http` function `tls_handshake` and never returns.
   > 
   > `sock:tlshandshake` is a new feature of APISIX's OpenResty.
   > 
   > resty.http
   > 
   > ```lua
   > function _M.tls_handshake(self, opts)
   >     local sock = self.sock
   >     if not sock then
   >         return nil, "not initialized"
   >     end
   > 
   >     self.ssl = true
   > 
   >     -- Execution stops here and never returns.
   >     return sock:tlshandshake(opts)
   > end
   > ```
   
   
   Hi @spacewander. Could the problem be here? I added `ngx.log` statements in `openresty/lualib/resty/core/socket/tcp.lua` of APISIX's OpenResty, but nothing was printed, and I don't know why.
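   One hedged guess about why the `ngx.log` edits seem to have no effect: the `lua_package_path` entries (and OpenResty's bundled `lualib` tree) are searched left to right, so if another copy of `resty/core/socket/tcp.lua` sits earlier in the search order, the edited file is never loaded. A self-contained sketch of that first-hit-wins lookup (directory names are stand-ins, not the real install paths):
   
   ```shell
   root=$(mktemp -d)
   mkdir -p "$root/deps/resty/core/socket" "$root/site/resty/core/socket"
   echo 'print("bundled copy")' > "$root/deps/resty/core/socket/tcp.lua"
   echo 'print("edited copy")'  > "$root/site/resty/core/socket/tcp.lua"
   
   # Resolve resty/core/socket/tcp.lua the way a path list is searched:
   # the first matching entry wins, later trees are shadowed.
   for dir in "$root/deps" "$root/site"; do
       f="$dir/resty/core/socket/tcp.lua"
       if [ -f "$f" ]; then hit="$f"; break; fi
   done
   cat "$hit"   # the "bundled copy" line: the edit under site/ never loads
   ```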





[GitHub] [apisix] spacewander commented on issue #4337: request help: apisix can't get up

Posted by GitBox <gi...@apache.org>.
spacewander commented on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-852835992


   We patch it to make it work in the init phase:
   https://github.com/apache/apisix/blob/2f34e1af268de17ff39649e9c87f53d2dbba9e66/apisix/patch.lua#L48
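   For readers following along, the linked `patch.lua` uses the ordinary Lua wrap-and-replace idiom: keep a reference to the original function and install a replacement with extra behavior for phases where the original is unavailable. A rough, hypothetical sketch of the idiom (not the actual APISIX patch):
   
   ```lua
   -- Generic monkey-patch idiom (illustrative only, not APISIX's code):
   local orig_tcp = ngx.socket.tcp
   
   ngx.socket.tcp = function (...)
       if ngx.get_phase() == "init" then
           -- cosockets are unavailable in the init phase; fall back to
           -- a blocking implementation such as LuaSocket
           return require("socket").tcp()
       end
       return orig_tcp(...)
   end
   ```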





[GitHub] [apisix] xyz2b edited a comment on issue #4337: request help: apisix can't get up

Posted by GitBox <gi...@apache.org>.
xyz2b edited a comment on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-850937086


   Same as before. 
   
   start apisix
   ```shell
   [app@VM_97_180_centos apisix]$ ./bin/apisix start --config ./conf/apisix.yaml
   /data/app/openresty/luajit/bin/luajit ./apisix/cli/apisix.lua start --config ./conf/apisix.yaml
   mv: ‘/data/app/apisix/conf/config.yaml’ and ‘/data/app/apisix/conf/config.yaml.bak’ are the same file
   ln: failed to create hard link ‘/data/app/apisix/conf/config.yaml’: File exists
   Use customized yaml:    ./conf/apisix.yaml
   nginx: [warn] could not build optimal variables_hash, you should increase either variables_hash_max_size: 1024 or variables_hash_bucket_size: 64; ignoring variables_hash_bucket_size
   ```
   
   nginx log
   ```shell
   2021/05/30 12:12:09 [warn] 31499#31499: could not build optimal variables_hash, you should increase either variables_hash_max_size: 1024 or variables_hash_bucket_size: 64; ignoring variables_hash_bucket_size
   2021/05/30 12:12:09 [info] 31499#31499: [lua] core.lua:26: use config_center: etcd
   2021/05/30 12:12:09 [info] 31499#31499: [lua] resolver.lua:28: init_resolver(): dns resolver ["183.60.83.19","183.60.82.98"]
   2021/05/30 12:12:09 [debug] 31499#31499: posix_memalign: 000000000161B4A0:512 @16
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161B6B0:476
   2021/05/30 12:12:09 [debug] 31499#31499: posix_memalign: 000000000161B8A0:512 @16
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161BAB0:4096
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161CAC0:4096
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161DAD0:4096
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161B6B0
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161BAB0
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161DAD0
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161CAC0
   2021/05/30 12:12:09 [debug] 31499#31499: pcre JIT compiling result: 1
   2021/05/30 12:12:09 [debug] 31499#31499: posix_memalign: 000000000161BAB0:512 @16
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161BCC0:4096
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161CCD0:4096
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161BCC0
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161CCD0
   2021/05/30 12:12:09 [debug] 31499#31499: posix_memalign: 000000000161BCC0:512 @16
   2021/05/30 12:12:09 [debug] 31499#31499: pcre JIT compiling result: 1
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 00007F33207D5010:589704
   2021/05/30 12:12:09 [debug] 31499#31499: new block, alloc semaphore: 00007F33207D5028 block: 00007F33207D5010
   2021/05/30 12:12:09 [debug] 31499#31499: http lua semaphore new: 00007F33207D5028, resources: 0
   2021/05/30 12:12:09 [info] 31499#31499: [lua] v3.lua:35: _request_uri(): v3 request uri: https://etcd02.apisix.webank.com:2379/v3/kv/range, timeout: 30
   2021/05/30 12:12:09 [debug] 31499#31499: posix_memalign: 000000000161BED0:512 @16
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161C0E0:568
   2021/05/30 12:12:09 [debug] 31499#31499: posix_memalign: 000000000161C320:512 @16
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161C530:4096
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161D540:4096
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161E550:4096
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161C0E0
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161C530
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161E550
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161D540
   2021/05/30 12:12:09 [debug] 31499#31499: pcre JIT compiling result: 1
   ```





[GitHub] [apisix] xyz2b commented on issue #4337: request help: apisix can't get up

Posted by GitBox <gi...@apache.org>.
xyz2b commented on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-850943810


   After adding debug logging, I found that execution reaches the `return sock:tlshandshake(opts)` statement in the `resty.http` function `tls_handshake` and never returns.
   
   `sock:tlshandshake` is a new feature of APISIX's OpenResty.
   
   resty.http
   ```lua
   function _M.tls_handshake(self, opts)
       local sock = self.sock
       if not sock then
           return nil, "not initialized"
       end
   
       self.ssl = true
   
       -- Execution stops here and never returns.
       return sock:tlshandshake(opts)
   end
   ```
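   To make the hang visible instead of letting it silently block init, one option is to put a timeout on the cosocket before the handshake. This is hypothetical instrumentation, not APISIX's or lua-resty-http's actual code:
   
   ```lua
   -- Hypothetical instrumentation: bound the handshake so a hang surfaces
   -- as a "timeout" error in the log instead of blocking forever.
   function _M.tls_handshake(self, opts)
       local sock = self.sock
       if not sock then
           return nil, "not initialized"
       end
   
       self.ssl = true
   
       sock:settimeout(5000)  -- 5s cap on the handshake operations
       local ok, err = sock:tlshandshake(opts)
       if not ok then
           ngx.log(ngx.ERR, "tls handshake failed: ", err)
       end
       return ok, err
   end
   ```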





[GitHub] [apisix] tokers commented on issue #4337: request help: apisix can't get up

Posted by GitBox <gi...@apache.org>.
tokers commented on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-850953401


   > I don't think that's the reason; I can see that the configuration in nginx.conf has taken effect.
   > Everything works for me with the original version of OpenResty, but not with APISIX's OpenResty.
   > 
   > nginx.conf
   > 
   > ```shell
   > # Configuration File - Nginx Server Configs
   > # This is a read-only file, do not try to modify it.
   > 
   > master_process on;
   > 
   > worker_processes auto;
   > worker_cpu_affinity auto;
   > 
   > # main configuration snippet starts
   > daemon off;
   > 
   > # main configuration snippet ends
   > 
   > error_log logs/error.log debug;
   > pid logs/nginx.pid;
   > 
   > ```
   
   What @spacewander said addresses the problem reported by the shell (the failure of the `mv` command).
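   The `mv` failure means `config.yaml` and `config.yaml.bak` have become hard links to the same inode, so `mv` refuses to overwrite one name with the other. A minimal reproduction and recovery in a throwaway directory (paths here are illustrative, not the real APISIX conf dir):

   ```shell
   dir=$(mktemp -d)
   echo 'data' > "$dir/config.yaml"
   ln "$dir/config.yaml" "$dir/config.yaml.bak"          # both names now share one inode
   mv "$dir/config.yaml" "$dir/config.yaml.bak" || true  # fails: "are the same file"
   rm "$dir/config.yaml"                                 # drop one link...
   mv "$dir/config.yaml.bak" "$dir/config.yaml"          # ...and restore the expected name
   result=$(cat "$dir/config.yaml")                      # expect: data
   echo "$result"
   rm -rf "$dir"
   ```

   On the actual machine, running `ls -li` on the two files from the error message first would confirm whether they really share an inode before removing anything.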


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [apisix] xyz2b commented on issue #4337: request help: apisix can't get up

Posted by GitBox <gi...@apache.org>.
xyz2b commented on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-850937086


   Same as before.
   Looking at the last line of the error log, it looks more like a PCRE JIT compiling problem.
   
   Starting APISIX:
   ```shell
   [app@VM_97_180_centos apisix]$ ./bin/apisix start --config ./conf/apisix.yaml
   /data/app/openresty/luajit/bin/luajit ./apisix/cli/apisix.lua start --config ./conf/apisix.yaml
   mv: ‘/data/app/apisix/conf/config.yaml’ and ‘/data/app/apisix/conf/config.yaml.bak’ are the same file
   ln: failed to create hard link ‘/data/app/apisix/conf/config.yaml’: File exists
   Use customized yaml:    ./conf/apisix.yaml
   nginx: [warn] could not build optimal variables_hash, you should increase either variables_hash_max_size: 1024 or variables_hash_bucket_size: 64; ignoring variables_hash_bucket_size
   ```
   
   nginx log
   ```shell
   2021/05/30 12:12:09 [warn] 31499#31499: could not build optimal variables_hash, you should increase either variables_hash_max_size: 1024 or variables_hash_bucket_size: 64; ignoring variables_hash_bucket_size
   2021/05/30 12:12:09 [info] 31499#31499: [lua] core.lua:26: use config_center: etcd
   2021/05/30 12:12:09 [info] 31499#31499: [lua] resolver.lua:28: init_resolver(): dns resolver ["183.60.83.19","183.60.82.98"]
   2021/05/30 12:12:09 [debug] 31499#31499: posix_memalign: 000000000161B4A0:512 @16
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161B6B0:476
   2021/05/30 12:12:09 [debug] 31499#31499: posix_memalign: 000000000161B8A0:512 @16
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161BAB0:4096
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161CAC0:4096
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161DAD0:4096
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161B6B0
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161BAB0
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161DAD0
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161CAC0
   2021/05/30 12:12:09 [debug] 31499#31499: pcre JIT compiling result: 1
   2021/05/30 12:12:09 [debug] 31499#31499: posix_memalign: 000000000161BAB0:512 @16
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161BCC0:4096
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161CCD0:4096
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161BCC0
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161CCD0
   2021/05/30 12:12:09 [debug] 31499#31499: posix_memalign: 000000000161BCC0:512 @16
   2021/05/30 12:12:09 [debug] 31499#31499: pcre JIT compiling result: 1
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 00007F33207D5010:589704
   2021/05/30 12:12:09 [debug] 31499#31499: new block, alloc semaphore: 00007F33207D5028 block: 00007F33207D5010
   2021/05/30 12:12:09 [debug] 31499#31499: http lua semaphore new: 00007F33207D5028, resources: 0
   2021/05/30 12:12:09 [info] 31499#31499: [lua] v3.lua:35: _request_uri(): v3 request uri: https://etcd02.apisix.webank.com:2379/v3/kv/range, timeout: 30
   2021/05/30 12:12:09 [debug] 31499#31499: posix_memalign: 000000000161BED0:512 @16
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161C0E0:568
   2021/05/30 12:12:09 [debug] 31499#31499: posix_memalign: 000000000161C320:512 @16
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161C530:4096
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161D540:4096
   2021/05/30 12:12:09 [debug] 31499#31499: malloc: 000000000161E550:4096
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161C0E0
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161C530
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161E550
   2021/05/30 12:12:09 [debug] 31499#31499: free: 000000000161D540
   2021/05/30 12:12:09 [debug] 31499#31499: pcre JIT compiling result: 1
   ```





[GitHub] [apisix] xyz2b edited a comment on issue #4337: request help: apisix can't get up

Posted by GitBox <gi...@apache.org>.
xyz2b edited a comment on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-850943810


   After adding debug logging, I found that execution reaches the `return sock:tlshandshake(opts)` statement in the `resty.http` `tls_handshake` function and never returns.
   
   `sock:tlshandshake` is a new feature of APISIX OpenResty.
   
   resty.http
   ```lua
   function _M.tls_handshake(self, opts)
       local sock = self.sock
       if not sock then
           return nil, "not initialized"
       end
   
       self.ssl = true
   
       -- Execution stops here; the call never returns.
       return sock:tlshandshake(opts)
   end
   ```
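   To narrow down whether the handshake hangs or fails silently, one option is to add a socket timeout and logging around the call. This is a hypothetical debugging sketch, not part of `resty.http`; the timeout value and log messages are illustrative:

   ```lua
   function _M.tls_handshake(self, opts)
       local sock = self.sock
       if not sock then
           return nil, "not initialized"
       end

       self.ssl = true

       -- Fail after 5s instead of blocking forever, so a hang surfaces
       -- as an explicit "timeout" error in the error log.
       sock:settimeout(5000)

       ngx.log(ngx.WARN, "starting TLS handshake")
       local session, err = sock:tlshandshake(opts)
       ngx.log(ngx.WARN, "TLS handshake returned, err: ", tostring(err))
       return session, err
   end
   ```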





[GitHub] [apisix] spacewander closed issue #4337: request help: apisix can't get up

Posted by GitBox <gi...@apache.org>.
spacewander closed issue #4337:
URL: https://github.com/apache/apisix/issues/4337


   








[GitHub] [apisix] xyz2b commented on issue #4337: request help: apisix can't get up

Posted by GitBox <gi...@apache.org>.
xyz2b commented on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-850938912


   I don't think that's the reason; I can see that the configuration in the nginx.conf file has taken effect.
   Everything works with the original version of OpenResty, but not with APISIX OpenResty.
   
   nginx.conf
   ```shell
   # Configuration File - Nginx Server Configs
   # This is a read-only file, do not try to modify it.
   
   master_process on;
   
   worker_processes auto;
   worker_cpu_affinity auto;
   
   # main configuration snippet starts
   daemon off;
   
   # main configuration snippet ends
   
   error_log logs/error.log debug;
   pid logs/nginx.pid;
   
   worker_rlimit_nofile 20480;
   
   events {
       accept_mutex off;
       worker_connections 10620;
   }
   
   worker_rlimit_core  16G;
   
   worker_shutdown_timeout 240s;
   
   env APISIX_PROFILE;
   env PATH; # for searching external plugin runner's binary
   
   
   
   http {
       # put extra_lua_path in front of the builtin path
       # so user can override the source code
       lua_package_path  "$prefix/deps/share/lua/5.1/?.lua;$prefix/deps/share/lua/5.1/?/init.lua;/data/app/apisix/?.lua;/data/app/apisix/?/init.lua;;./?.lua;/data/app/openresty/luajit/share/luajit-2.1.0-beta3/?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua;/data/app/openresty/luajit/share/lua/5.1/?.lua;/data/app/openresty/luajit/share/lua/5.1/?/init.lua;";
       lua_package_cpath "$prefix/deps/lib64/lua/5.1/?.so;$prefix/deps/lib/lua/5.1/?.so;;./?.so;/usr/local/lib/lua/5.1/?.so;/data/app/openresty/luajit/lib/lua/5.1/?.so;/usr/local/lib/lua/5.1/loadall.so;";
   
       lua_shared_dict internal_status      10m;
       lua_shared_dict plugin-limit-req     10m;
       lua_shared_dict plugin-limit-count   10m;
       lua_shared_dict prometheus-metrics   10m;
       lua_shared_dict plugin-limit-conn    10m;
       lua_shared_dict upstream-healthcheck 10m;
       lua_shared_dict worker-events        10m;
       lua_shared_dict lrucache-lock        10m;
       lua_shared_dict balancer_ewma        10m;
       lua_shared_dict balancer_ewma_locks  10m;
       lua_shared_dict balancer_ewma_last_touched_at 10m;
       lua_shared_dict plugin-limit-count-redis-cluster-slot-lock 1m;
       lua_shared_dict tracing_buffer       10m; # plugin: skywalking
       lua_shared_dict plugin-api-breaker   10m;
   
       # for openid-connect and authz-keycloak plugin
       lua_shared_dict discovery             1m; # cache for discovery metadata documents
   
       # for openid-connect plugin
       lua_shared_dict jwks                  1m; # cache for JWKs
       lua_shared_dict introspection        10m; # cache for JWT verification results
   
       # for authz-keycloak
       lua_shared_dict access_tokens         1m; # cache for service account access tokens
   
       # for custom shared dict
   
       # for proxy cache
       proxy_cache_path /tmp/disk_cache_one levels=1:2 keys_zone=disk_cache_one:50m inactive=1d max_size=1G use_temp_path=off;
   
       # for proxy cache
       map $upstream_cache_zone $upstream_cache_zone_info {
           disk_cache_one /tmp/disk_cache_one,1:2;
       }
   
   
       lua_ssl_verify_depth 5;
       ssl_session_timeout 86400;
   
       underscores_in_headers on;
   
       lua_socket_log_errors off;
   
       resolver 183.60.83.19 183.60.82.98;
       resolver_timeout 5;
   
       lua_http10_buffering off;
   
       lua_regex_match_limit 100000;
       lua_regex_cache_max_entries 8192;
   
       log_format main escape=default '$remote_addr - $remote_user [$time_local] $http_host "$request" $status $body_bytes_sent $request_time "$http_referer" "$http_user_agent" $upstream_addr $upstream_status $upstream_response_time "$upstream_scheme://$upstream_host$upstream_uri"';
       uninitialized_variable_warn off;
   
       access_log logs/access.log main buffer=16384 flush=3;
       open_file_cache  max=1000 inactive=60;
       client_max_body_size 0;
       keepalive_timeout 60s;
       client_header_timeout 60s;
       client_body_timeout 60s;
       send_timeout 10s;
   
       server_tokens off;
   
       include mime.types;
       charset utf-8;
   
       # error_page
       error_page 500 @50x.html;
   
       real_ip_header X-Real-IP;
   
       set_real_ip_from 127.0.0.1;
       set_real_ip_from unix:;
   
       # http configuration snippet starts
       
   
       # http configuration snippet ends
   
       upstream apisix_backend {
           server 0.0.0.1;
           balancer_by_lua_block {
               apisix.http_balancer_phase()
           }
   
           keepalive 320;
           keepalive_requests 1000;
           keepalive_timeout 60s;
       }
   
   
       init_by_lua_block {
           require "resty.core"
           apisix = require("apisix")
   
           local dns_resolver = { "183.60.83.19", "183.60.82.98", }
           local args = {
               dns_resolver = dns_resolver,
           }
           apisix.http_init(args)
       }
   
       init_worker_by_lua_block {
           apisix.http_init_worker()
       }
   
       server {
           listen 127.0.0.1:9090;
   
           access_log off;
   
           location / {
               content_by_lua_block {
                   apisix.http_control()
               }
           }
   
           location @50x.html {
               set $from_error_page 'true';
               try_files /50x.html $uri;
           }
       }
   
       server {
           listen 127.0.0.1:9091;
   
           access_log off;
   
           location / {
               content_by_lua_block {
                   local prometheus = require("apisix.plugins.prometheus")
                   prometheus.export_metrics()
               }
           }
   
           location = /apisix/nginx_status {
               allow 127.0.0.0/24;
               deny all;
               stub_status;
           }
       }
   
   
       server {
           listen 9080 default_server reuseport;
           listen 9443 ssl default_server http2 reuseport;
   
           listen [::]:9080 default_server reuseport;
           listen [::]:9443 ssl default_server http2 reuseport;
   
           server_name _;
   
           lua_ssl_trusted_certificate /data/app/apisix/ssl/ca.pem;
   
           ssl_certificate      cert/ssl_PLACE_HOLDER.crt;
           ssl_certificate_key  cert/ssl_PLACE_HOLDER.key;
           ssl_session_cache    shared:SSL:20m;
           ssl_session_timeout 10m;
   
           ssl_protocols TLSv1.2 TLSv1.3;
           ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
           ssl_prefer_server_ciphers on;
           ssl_session_tickets off;
   
           # http server configuration snippet starts
           
   
           # http server configuration snippet ends
   
           location = /apisix/nginx_status {
               allow 127.0.0.0/24;
               deny all;
               access_log off;
               stub_status;
           }
   
           location /apisix/admin {
               set $upstream_scheme             'http';
               set $upstream_host               $http_host;
               set $upstream_uri                '';
   
                   allow 127.0.0.0/24;
                   deny all;
   
               content_by_lua_block {
                   apisix.http_admin()
               }
           }
   
           ssl_certificate_by_lua_block {
               apisix.http_ssl_phase()
           }
   
           proxy_ssl_name $upstream_host;
           proxy_ssl_server_name on;
   
           location / {
               set $upstream_mirror_host        '';
               set $upstream_upgrade            '';
               set $upstream_connection         '';
   
               set $upstream_scheme             'http';
               set $upstream_host               $http_host;
               set $upstream_uri                '';
               set $ctx_ref                     '';
               set $from_error_page             '';
   
   
               access_by_lua_block {
                   apisix.http_access_phase()
               }
   
               proxy_http_version 1.1;
               proxy_set_header   Host              $upstream_host;
               proxy_set_header   Upgrade           $upstream_upgrade;
               proxy_set_header   Connection        $upstream_connection;
               proxy_set_header   X-Real-IP         $remote_addr;
               proxy_pass_header  Date;
   
               ### the following x-forwarded-* headers is to send to upstream server
   
               set $var_x_forwarded_for        $remote_addr;
               set $var_x_forwarded_proto      $scheme;
               set $var_x_forwarded_host       $host;
               set $var_x_forwarded_port       $server_port;
   
               if ($http_x_forwarded_for != "") {
                   set $var_x_forwarded_for "${http_x_forwarded_for}, ${realip_remote_addr}";
               }
               if ($http_x_forwarded_host != "") {
                   set $var_x_forwarded_host $http_x_forwarded_host;
               }
               if ($http_x_forwarded_port != "") {
                   set $var_x_forwarded_port $http_x_forwarded_port;
               }
   
               proxy_set_header   X-Forwarded-For      $var_x_forwarded_for;
               proxy_set_header   X-Forwarded-Proto    $var_x_forwarded_proto;
               proxy_set_header   X-Forwarded-Host     $var_x_forwarded_host;
               proxy_set_header   X-Forwarded-Port     $var_x_forwarded_port;
   
               ###  the following configuration is to cache response content from upstream server
   
               set $upstream_cache_zone            off;
               set $upstream_cache_key             '';
               set $upstream_cache_bypass          '';
               set $upstream_no_cache              '';
   
               proxy_cache                         $upstream_cache_zone;
               proxy_cache_valid                   any 10s;
               proxy_cache_min_uses                1;
               proxy_cache_methods                 GET HEAD;
               proxy_cache_lock_timeout            5s;
               proxy_cache_use_stale               off;
               proxy_cache_key                     $upstream_cache_key;
               proxy_no_cache                      $upstream_no_cache;
               proxy_cache_bypass                  $upstream_cache_bypass;
   
   
               proxy_pass      $upstream_scheme://apisix_backend$upstream_uri;
   
               mirror          /proxy_mirror;
   
               header_filter_by_lua_block {
                   apisix.http_header_filter_phase()
               }
   
               body_filter_by_lua_block {
                   apisix.http_body_filter_phase()
               }
   
               log_by_lua_block {
                   apisix.http_log_phase()
               }
           }
   
           location @grpc_pass {
   
               access_by_lua_block {
                   apisix.grpc_access_phase()
               }
   
               grpc_set_header   Content-Type application/grpc;
               grpc_socket_keepalive on;
               grpc_pass         $upstream_scheme://apisix_backend;
   
               header_filter_by_lua_block {
                   apisix.http_header_filter_phase()
               }
   
               body_filter_by_lua_block {
                   apisix.http_body_filter_phase()
               }
   
               log_by_lua_block {
                   apisix.http_log_phase()
               }
           }
   
   
           location = /proxy_mirror {
               internal;
   
               if ($upstream_mirror_host = "") {
                   return 200;
               }
   
               proxy_http_version 1.1;
               proxy_set_header Host $upstream_host;
               proxy_pass $upstream_mirror_host$request_uri;
           }
   
           location @50x.html {
               set $from_error_page 'true';
               try_files /50x.html $uri;
               header_filter_by_lua_block {
                   apisix.http_header_filter_phase()
               }
   
               log_by_lua_block {
                   apisix.http_log_phase()
               }
           }
       }
       # http end configuration snippet starts
       
   
       # http end configuration snippet ends
   }
   ```





[GitHub] [apisix] xyz2b commented on issue #4337: request help: apisix can't get up

Posted by GitBox <gi...@apache.org>.
xyz2b commented on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-852808831


   I modified the code to directly call the `ngx.socket.tcp` function under `resty/core/socket/tcp.lua`; the error is `no request found`.
   `ngx.socket.tcp` cannot be executed in the init phase. How does APISIX make `ngx.socket.tcp` usable in the init phase?
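   In stock OpenResty, the cosocket API is indeed disabled in the init and init_worker phases; the usual workaround is to defer the socket work to a 0-delay `ngx.timer.at` callback started from `init_worker_by_lua`, since cosockets are allowed in timer context. A sketch assuming a plain OpenResty build (host and port are illustrative):

   ```lua
   init_worker_by_lua_block {
       local ok, err = ngx.timer.at(0, function(premature)
           if premature then
               return
           end
           -- Timer callbacks run outside the init phases, so cosockets work here.
           local sock = ngx.socket.tcp()
           sock:settimeout(3000)
           local ok, err = sock:connect("127.0.0.1", 2379)
           if not ok then
               ngx.log(ngx.ERR, "connect failed: ", err)
               return
           end
           sock:close()
       end)
       if not ok then
           ngx.log(ngx.ERR, "failed to create timer: ", err)
       end
   }
   ```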





[GitHub] [apisix] xyz2b edited a comment on issue #4337: request help: apisix can't get up

Posted by GitBox <gi...@apache.org>.
xyz2b edited a comment on issue #4337:
URL: https://github.com/apache/apisix/issues/4337#issuecomment-851133012


   > After adding debug logging, I found that execution reaches the `return sock:tlshandshake(opts)` statement in the `resty.http` `tls_handshake` function and never returns.
   > 
   > `sock:tlshandshake` is a new feature of APISIX OpenResty.
   > 
   > resty.http
   > 
   > ```lua
   > function _M.tls_handshake(self, opts)
   >     local sock = self.sock
   >     if not sock then
   >         return nil, "not initialized"
   >     end
   > 
   >     self.ssl = true
   > 
   >     -- Execution stops here; the call never returns.
   >     return sock:tlshandshake(opts)
   > end
   > ```
   
   
   Hi @spacewander. Could the problem be here? I used `ngx.log` to print logs in the `tlshandshake` function of `openresty/lualib/resty/core/socket/tcp.lua` in APISIX OpenResty, but nothing was printed. I don't know why.

