Posted to notifications@dolphinscheduler.apache.org by GitBox <gi...@apache.org> on 2019/11/18 11:55:34 UTC

[GitHub] [incubator-dolphinscheduler] lfyee edited a comment on issue #1217: why I cannot push files to hdfs

URL: https://github.com/apache/incubator-dolphinscheduler/issues/1217#issuecomment-553356457
 
 
   If DolphinScheduler is already running and you want to enable the HDFS feature in the resource center, do the following:
   
   ## 1. Modify the configuration files ##
   ### conf/common/common.properties ### 
   ```
   # Users who have permission to create directories under the HDFS root path
   hdfs.root.user=hdfs
   # base data dir; resource files are stored under this HDFS path. Configure it yourself, make sure the directory exists on HDFS and is readable and writable. "/escheduler" is recommended
   data.store2hdfs.basepath=/escheduler
   # resource upload startup type : HDFS,S3,NONE
   res.upload.startup.type=HDFS
   # whether kerberos authentication is enabled
   hadoop.security.authentication.startup.state=false
   # java.security.krb5.conf path
   java.security.krb5.conf.path=/opt/krb5.conf
   # loginUserFromKeytab user
   login.user.keytab.username=hdfs-mycluster@ESZ.COM
   # loginUserFromKeytab path
   login.user.keytab.path=/opt/hdfs.headless.keytab
   ```
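   If the cluster has Kerberos enabled, the same file carries the switches shown above; a minimal sketch of the kerberos-on variant, reusing the default principal and keytab paths from the block above (adjust both to your own realm):
   ```
   # enable kerberos authentication against HDFS
   hadoop.security.authentication.startup.state=true
   # krb5.conf available on the DolphinScheduler hosts
   java.security.krb5.conf.path=/opt/krb5.conf
   # principal and keytab passed to loginUserFromKeytab
   login.user.keytab.username=hdfs-mycluster@ESZ.COM
   login.user.keytab.path=/opt/hdfs.headless.keytab
   ```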
   Modify these parameters to match your own cluster environment. Both the api-server and worker-server services need the change: update the file on each host and restart both services (a restart sketch follows step 2 below). Because the system is already running, it will not create the root and tenant directories for us, so we have to create them manually.
   For example, with:
   deploy user: dolphinscheduler
   data.store2hdfs.basepath: /escheduler
   tenant user: tim
   ```
   # create the root directory
   hadoop fs -mkdir /escheduler
   # create the tenant directories
   hadoop fs -mkdir -p /escheduler/tim/{resources,udfs}
   # change ownership to the deploy user
   hadoop fs -chown -R dolphinscheduler:dolphinscheduler /escheduler
   ```
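   A quick check that the directories landed with the right owner (same paths and deploy user as above):
   ```
   hadoop fs -ls -R /escheduler
   # each entry should list dolphinscheduler:dolphinscheduler as owner:group
   ```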
   ### conf/common/hadoop/hadoop.properties ###
   ```
   # HA or single namenode. For namenode HA, copy core-site.xml and hdfs-site.xml
   # into the conf directory. S3 is also supported, for example: s3a://dolphinscheduler
   fs.defaultFS=hdfs://mycluster:8020
   ```
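   As the comment notes, S3 is also supported. A hedged sketch of the S3 variant — the bucket name and endpoint are illustrative, and the fs.s3a.* key names should be verified against the hadoop.properties shipped with your release:
   ```
   # point defaultFS at the bucket instead of a namenode
   fs.defaultFS=s3a://dolphinscheduler
   # endpoint and credentials for the bucket (placeholders)
   fs.s3a.endpoint=http://s3.example.com
   fs.s3a.access.key=<your-access-key>
   fs.s3a.secret.key=<your-secret-key>
   ```
   For this variant, also set res.upload.startup.type=S3 in common.properties.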
   
   ## 2. Copy files ##
   Copy core-site.xml and hdfs-site.xml from the Hadoop cluster into the conf directory. Again, both the api-server and worker-server services need this: copy the files on each host and restart both services.
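   A sketch of the restart, assuming the daemon script bundled with the release (the script name varies: older 1.1.x builds ship escheduler-daemon.sh, 1.2+ ships dolphinscheduler-daemon.sh):
   ```
   # run on every host where the respective service is deployed
   sh bin/dolphinscheduler-daemon.sh stop api-server
   sh bin/dolphinscheduler-daemon.sh start api-server
   sh bin/dolphinscheduler-daemon.sh stop worker-server
   sh bin/dolphinscheduler-daemon.sh start worker-server
   ```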
   
   
   
   After these modifications, the HDFS feature in the resource center is ready to use.
