Posted to commits@dolphinscheduler.apache.org by GitBox <gi...@apache.org> on 2022/08/10 14:31:45 UTC

[GitHub] [dolphinscheduler] github-actions[bot] commented on issue #11404: [Bug] [UI] Storage not enabled

github-actions[bot] commented on issue #11404:
URL: https://github.com/apache/dolphinscheduler/issues/11404#issuecomment-1210761148

   ### Search before asking
   
   - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
   
   
   ### What happened
   
   I set the following in `values.yaml`:
   ```yaml
   sharedStoragePersistence:
     enabled: true
     mountPath: "/opt/soft"
     accessModes:
       - "ReadWriteMany"
     ## storageClassName must support the access mode: ReadWriteMany
     storageClassName: "ds-shared-nfs"
     storage: "2Gi"
   ```
   ```yaml
   fsFileResourcePersistence:
     enabled: true
     accessModes:
       - "ReadWriteMany"
     ## storageClassName must support the access mode: ReadWriteMany
     storageClassName: "ds-file-nfs"
     storage: "2Gi"
   ```
   
   ```yaml
   DATA_BASEDIR_PATH: "/tmp/dolphinscheduler"
   RESOURCE_STORAGE_TYPE: "HDFS"
   RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
   FS_DEFAULT_FS: "file:///"
   ```
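   One way to check whether these overrides actually reached the container is to compare the pod's environment with the rendered config file. This is only a diagnostic sketch (pod name taken from the console session in this report; the default namespace is assumed and will differ per deployment):

   ```shell
   # Hypothetical diagnostic: confirm the Helm env overrides reached the api pod
   # (substitute your own pod name / namespace).
   kubectl exec dolphinscheduler-api-dbcf78666-72wff -- \
     env | grep -E 'RESOURCE_STORAGE_TYPE|FS_DEFAULT_FS'

   # Compare with the value that ends up in the rendered config file:
   kubectl exec dolphinscheduler-api-dbcf78666-72wff -- \
     grep '^resource.storage.type' /opt/dolphinscheduler/conf/common.properties
   ```

   If the env var shows `HDFS` but the file still says `NONE`, the override is not being applied to `common.properties`.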
   
   ### What you expected to happen
   
   It shows "存储未启用" ("storage not enabled") when I upload spark-examples_2.11-2.4.7.jar.
   
   ### How to reproduce
   
   Both storage classes are bound and backed by NFS:
   - storageClassName: "ds-shared-nfs"
   - storageClassName: "ds-file-nfs"
   
   ### Anything else
   
   ```
   root@dolphinscheduler-api-dbcf78666-72wff:/opt/dolphinscheduler/conf# pwd
   /opt/dolphinscheduler/conf
   root@dolphinscheduler-api-dbcf78666-72wff:/opt/dolphinscheduler/conf# cat common.properties
   data.basedir.path=/tmp/dolphinscheduler

   # resource storage type: HDFS, S3, NONE
   resource.storage.type=NONE

   # resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
   resource.upload.path=/dolphinscheduler

   # whether to startup kerberos
   hadoop.security.authentication.startup.state=false

   # java.security.krb5.conf path
   java.security.krb5.conf.path=/opt/krb5.conf

   # login user from keytab username
   login.user.keytab.username=hdfs-mycluster@ESZ.COM

   # login user from keytab path
   login.user.keytab.path=/opt/hdfs.headless.keytab

   # kerberos expire time, the unit is hour
   kerberos.expire.time=2
   # resource view suffixs
   #resource.view.suffixs=txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js
   # if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
   hdfs.root.user=hdfs
   # if resource.storage.type=S3, the value like: s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
   fs.defaultFS=file:///
   ```
   Why does the rendered `common.properties` still contain `resource.storage.type=NONE` when `RESOURCE_STORAGE_TYPE: "HDFS"` is set in `values.yaml`?
   <img width="928" alt="image" src="https://user-images.githubusercontent.com/8449870/183924289-13152b13-0b8e-4b19-8bb4-9456cbb9c934.png">
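   That `NONE` value is consistent with the error: the server refuses uploads whenever `resource.storage.type` is `NONE`. A minimal sketch of that check (hypothetical function name; the real check lives inside DolphinScheduler's resource service, not in shell):

   ```shell
   # Sketch: mirror the server-side condition behind "存储未启用" (storage not
   # enabled). Storage counts as enabled only when resource.storage.type in the
   # properties file is set and is not NONE.
   check_storage_enabled() {
     # $1: path to a common.properties file
     local type
     type=$(grep -E '^resource\.storage\.type=' "$1" | cut -d= -f2)
     [ -n "$type" ] && [ "$type" != "NONE" ]
   }

   # Demo against a copy of the rendered config shown above:
   cat > /tmp/common.properties <<'EOF'
   data.basedir.path=/tmp/dolphinscheduler
   resource.storage.type=NONE
   resource.upload.path=/dolphinscheduler
   EOF

   if check_storage_enabled /tmp/common.properties; then
     echo "storage enabled"
   else
     echo "storage not enabled"   # this branch fires: type is NONE
   fi
   ```

   So the question reduces to why the Helm override never made it into the rendered file.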
   
   
   ### Version
   
   3.0.0
   
   ### Are you willing to submit PR?
   
   - [X] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

