Posted to dev@linkis.apache.org by GitBox <gi...@apache.org> on 2022/03/17 10:10:28 UTC

[GitHub] [incubator-linkis] MaoMiMao opened a new issue #1743: [Bug] java.naming.provider.url property does not contain a URL

MaoMiMao opened a new issue #1743:
URL: https://github.com/apache/incubator-linkis/issues/1743


   ### Search before asking
   
   - [X] I searched the [issues](https://github.com/apache/incubator-linkis/issues) and found no similar issues.
   
   
   ### Linkis Component
   
   linkis-service-gateway
   
   ### What happened + What you expected to happen
   
After completing installation by following the 1.0.2 quick-deployment guide, connecting via the SDK reports a wrong-username-or-password error:
   wrong user name or password(用户名或密码错误)! javax.naming.ConfigurationException: java.naming.provider.url property does not contain a URL
           at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:78) ~[?:1.8.0_181]
           at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684) ~[?:1.8.0_181]
           at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313) ~[?:1.8.0_181]
           at javax.naming.InitialContext.init(InitialContext.java:244) ~[?:1.8.0_181]
           at javax.naming.ldap.InitialLdapContext.<init>(InitialLdapContext.java:154) ~[?:1.8.0_181]
           at com.webank.wedatasphere.linkis.common.utils.LDAPUtils$.login(LDAPUtils.scala:45) ~[linkis-common-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.LDAPUserRestful$$anonfun$login$1.apply(LDAPUserRestful.scala:26) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.LDAPUserRestful$$anonfun$login$1.apply(LDAPUserRestful.scala:25) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.common.utils.Utils$.tryCatch(Utils.scala:39) ~[linkis-common-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.LDAPUserRestful.login(LDAPUserRestful.scala:28) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.UserPwdAbstractUserRestful.tryLogin(UserRestful.scala:263) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.AbstractUserRestful.login(UserRestful.scala:132) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.AbstractUserRestful$$anonfun$3.apply(UserRestful.scala:104) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.AbstractUserRestful$$anonfun$3.apply(UserRestful.scala:104) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.common.utils.Utils$.tryCatch(Utils.scala:52) ~[linkis-common-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.AbstractUserRestful.doUserRequest(UserRestful.scala:104) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.SecurityFilter$$anonfun$doFilter$1.apply$mcV$sp(SecurityFilter.scala:89) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.SecurityFilter$$anonfun$doFilter$1.apply(SecurityFilter.scala:89) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.SecurityFilter$$anonfun$doFilter$1.apply(SecurityFilter.scala:89) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.common.utils.Utils$.tryCatch(Utils.scala:39) ~[linkis-common-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.SecurityFilter$.doFilter(SecurityFilter.scala:89) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.security.SecurityFilter.doFilter(SecurityFilter.scala) ~[linkis-gateway-core-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.springcloud.http.GatewayAuthorizationFilter.gatewayDeal(GatewayAuthorizationFilter.java:144) ~[linkis-spring-cloud-gateway-1.0.2.jar:?]
           at com.webank.wedatasphere.linkis.gateway.springcloud.http.GatewayAuthorizationFilter.lambda$filter$4(GatewayAuthorizationFilter.java:231) ~[linkis-spring-cloud-gateway-1.0.2.jar:?]
           at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:118) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
           at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
           at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:121) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
           at reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:96) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
           at reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:287) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
           at reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:330) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
           at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1755) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
           at reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:152) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
           at reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
           at reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:252) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
           at reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
           at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:366) [reactor-netty-0.9.7.RELEASE.jar:0.9.7.RELEASE]
           at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:367) [reactor-netty-0.9.7.RELEASE.jar:0.9.7.RELEASE]
           at reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:489) [reactor-netty-0.9.7.RELEASE.jar:0.9.7.RELEASE]
           at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:96) [reactor-netty-0.9.7.RELEASE.jar:0.9.7.RELEASE]
           at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at reactor.netty.http.server.HttpTrafficHandler.channelRead(HttpTrafficHandler.java:214) [reactor-netty-0.9.7.RELEASE.jar:0.9.7.RELEASE]
           at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:792) [netty-transport-native-epoll-4.1.49.Final-linux-x86_64.jar:4.1.49.Final]
           at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:475) [netty-transport-native-epoll-4.1.49.Final-linux-x86_64.jar:4.1.49.Final]
           at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) [netty-transport-native-epoll-4.1.49.Final-linux-x86_64.jar:4.1.49.Final]
           at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
           at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.49.Final.jar:4.1.49.Final]
           at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
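
    The exception above comes straight from JDK JNDI: `LdapCtxFactory` throws a `ConfigurationException` whenever `java.naming.provider.url` is present but empty, and per the stack trace this is what Linkis' `LDAPUtils.login` hits when `wds.linkis.ldap.proxy.url` is blank. A minimal standalone sketch (standard JDK only, nothing Linkis-specific assumed) that reproduces the same error:

    ```java
    import javax.naming.Context;
    import javax.naming.ldap.InitialLdapContext;
    import java.util.Hashtable;

    public class EmptyLdapUrlRepro {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            // An empty provider URL, like the blank wds.linkis.ldap.proxy.url below,
            // fails with: javax.naming.ConfigurationException:
            //   java.naming.provider.url property does not contain a URL
            env.put(Context.PROVIDER_URL, "");
            new InitialLdapContext(env, null);
        }
    }
    ```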
   
linkis-mg-gateway.properties is as follows:
   ```
   #wds.linkis.server.restful.uri=/
   wds.linkis.server.web.session.timeout=1h
   wds.linkis.gateway.conf.enable.proxy.user=false
   wds.linkis.gateway.conf.url.pass.auth=/dss/
   wds.linkis.gateway.conf.enable.token.auth=true
   wds.linkis.is.gateway=true
   wds.linkis.server.mybatis.mapperLocations=classpath*:com/webank/wedatasphere/linkis/instance/label/dao/impl/*.xml
   wds.linkis.server.mybatis.typeAliasesPackage=com.webank.wedatasphere.linkis.instance.label.entity
   wds.linkis.server.mybatis.BasePackage=com.webank.wedatasphere.linkis.instance.label.dao
   wds.linkis.label.entity.packages=com.webank.wedatasphere.linkis.gateway.ujes.route.label
   wds.linkis.login_encrypt.enable=false
   wds.linkis.test.mode=true
   wds.linkis.test.user=hadoop
   ##LDAP
   wds.linkis.ldap.proxy.url=
   wds.linkis.ldap.proxy.baseDN=
   wds.linkis.admin.user=hadoop
   wds.linkis.test.mode=true
   wds.linkis.test.user=hadoop
   
   ##Spring
   spring.server.port=9001
   ```
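
    Two details in this config are worth flagging: `wds.linkis.test.mode` and `wds.linkis.test.user` are each set twice, and both LDAP properties are blank, so any login that test mode does not cover falls through to LDAP with an empty provider URL (the exception above). A hedged sketch of the two usual ways to make login succeed (the hostname and baseDN below are placeholders, not values from this report):

    ```
    # Option 1: keep test mode and log in as the configured test user
    wds.linkis.test.mode=true
    wds.linkis.test.user=hadoop

    # Option 2: point the gateway at a real LDAP server (placeholder values)
    wds.linkis.ldap.proxy.url=ldap://ldap.example.com:1389/
    wds.linkis.ldap.proxy.baseDN=dc=example,dc=com
    ```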
   
### Relevant platform
   
CentOS
   
   ### Reproduction script
   
   --
   
   ### Anything else
   
   _No response_
   
   ### Are you willing to submit a PR?
   
   - [ ] Yes I am willing to submit a PR!


[GitHub] [incubator-linkis] MaoMiMao commented on issue #1743: [Bug] java.naming.provider.url property does not contain a URL

Posted by GitBox <gi...@apache.org>.
MaoMiMao commented on issue #1743:
URL: https://github.com/apache/incubator-linkis/issues/1743#issuecomment-1073393036


   ```java
    // Imports assumed for the Linkis 1.0.2 SDK (com.webank.wedatasphere
    // namespace, matching the stack trace above); the original post omitted them.
    import com.webank.wedatasphere.linkis.common.utils.Utils;
    import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
    import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
    import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
    import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant;
    import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
    import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
    import com.webank.wedatasphere.linkis.ujes.client.request.JobSubmitAction;
    import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
    import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
    import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
    import com.webank.wedatasphere.linkis.ujes.client.response.JobLogResult;
    import com.webank.wedatasphere.linkis.ujes.client.response.JobProgressResult;
    import org.apache.commons.io.IOUtils;

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.TimeUnit;

    public class linkisDemo {
        private static DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
                .addServerUrl("http://192.168.9.109:9001")   // linkis-mg-gateway url: http://{ip}:{port}
                .connectionTimeout(30000)   // connection timeout in ms
                .discoveryEnabled(false)    // disable discovery
                .discoveryFrequency(1, TimeUnit.MINUTES)  // discovery frequency
                .loadbalancerEnabled(true)  // enable load balancing
                .maxConnectionSize(5)       // max connection pool size
                .retryEnabled(false)        // disable retry
                .readTimeout(30000)         // read timeout in ms
                .setAuthenticationStrategy(new StaticAuthenticationStrategy())   // Linkis supports static (username/password) and token authentication
                .setAuthTokenKey("root")    // login user
                .setAuthTokenValue("root")))  // password, or a token such as "BML-AUTH"
                .setDWSVersion("v1")        // linkis rest api version v1
                .build();
       private static UJESClient client = new UJESClientImpl(clientConfig);
   
       public static void main(String[] args){
   
           String user = "hadoop"; // execute user
            String executeCode = "ls"; // code to execute; supported types include sql/hql/py/scala (shell here)
           try {
   
               System.out.println("user : " + user + ", code : [" + executeCode + "]");
               // 3. build job and execute
               JobExecuteResult jobExecuteResult = toSubmit(user, executeCode);
               //0.x:JobExecuteResult jobExecuteResult = toExecute(user, executeCode);
               System.out.println("execId: " + jobExecuteResult.getExecID() + ", taskId: " + jobExecuteResult.taskID());
                // 4. get job info
               JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
               int sleepTimeMills = 1000;
               int logFromLen = 0;
               int logSize = 100;
               while(!jobInfoResult.isCompleted()) {
                   // 5. get progress and log
                   JobProgressResult progress = client.progress(jobExecuteResult);
                   System.out.println("progress: " + progress.getProgress());
                   JobLogResult logRes = client.log(jobExecuteResult, logFromLen, logSize);
                   logFromLen = logRes.fromLine();
                   // 0: info 1: warn 2: error 3: all
                   System.out.println(logRes.log().get(3));
                   Utils.sleepQuietly(sleepTimeMills);
                   jobInfoResult = client.getJobInfo(jobExecuteResult);
               }
   
               JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
               // 6. Get the result set list (if the user submits multiple SQLs at a time,
               // multiple result sets will be generated)
               String resultSet = jobInfo.getResultSetList(client)[0];
               // 7. get resultContent
               Object fileContents = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
               System.out.println("res: " + fileContents);
           } catch (Exception e) {
               e.printStackTrace();
               IOUtils.closeQuietly(client);
           }
           IOUtils.closeQuietly(client);
       }
       private static JobExecuteResult toSubmit(String user, String code) {
           // 1. build  params
            // set label map: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
           Map<String, Object> labels = new HashMap<String, Object>();
           labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "shell-1"); // required engineType Label
           labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
           labels.put(LabelKeyConstant.CODE_TYPE_KEY, "shell"); // required codeType
           // set start up map :engineConn start params
           Map<String, Object> startupMap = new HashMap<String, Object>(16);
            // Engine-native parameters can be set here, e.g. Spark/Hive engine parameters
           startupMap.put("spark.executor.instances", 2);
           // setting linkis params
           startupMap.put("wds.linkis.rm.yarnqueue", "dws");
   
           // 2. build jobSubmitAction
           JobSubmitAction jobSubmitAction = JobSubmitAction.builder()
                   .addExecuteCode(code)
                   .setStartupParams(startupMap)
                   .setUser("root") //submit user
                   .addExecuteUser("root")  // execute user
                   .setLabels(labels)
                   .build();
           // 3. to execute
           return client.submit(jobSubmitAction);
       }
   }
   ```
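
    Note the mismatch with the gateway config quoted above: this client authenticates as root/root, but the gateway runs in test mode with wds.linkis.test.user=hadoop and has no LDAP URL configured, so a root login apparently falls through to the LDAP path and triggers the empty-URL exception from the report.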




[GitHub] [incubator-linkis] peacewong commented on issue #1743: [Bug] java.naming.provider.url property does not contain a URL

Posted by GitBox <gi...@apache.org>.
peacewong commented on issue #1743:
URL: https://github.com/apache/incubator-linkis/issues/1743#issuecomment-1072007692


   Please provide the code you are using to call the SDK.



[GitHub] [incubator-linkis] MaoMiMao commented on issue #1743: [Bug] java.naming.provider.url property does not contain a URL

Posted by GitBox <gi...@apache.org>.
MaoMiMao commented on issue #1743:
URL: https://github.com/apache/incubator-linkis/issues/1743#issuecomment-1073393349


   ```java
    // Imports assumed for the Linkis 1.0.2 SDK (com.webank.wedatasphere
    // namespace, matching the stack trace above); the original post omitted them.
    import com.webank.wedatasphere.linkis.common.utils.Utils;
    import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
    import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
    import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
    import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant;
    import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
    import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
    import com.webank.wedatasphere.linkis.ujes.client.request.JobSubmitAction;
    import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
    import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
    import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
    import com.webank.wedatasphere.linkis.ujes.client.response.JobLogResult;
    import com.webank.wedatasphere.linkis.ujes.client.response.JobProgressResult;
    import org.apache.commons.io.IOUtils;

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.TimeUnit;

    public class linkisDemo {
        private static DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
                .addServerUrl("http://192.168.9.109:9001")   // linkis-mg-gateway url: http://{ip}:{port}
                .connectionTimeout(30000)   // connection timeout in ms
                .discoveryEnabled(false)    // disable discovery
                .discoveryFrequency(1, TimeUnit.MINUTES)  // discovery frequency
                .loadbalancerEnabled(true)  // enable load balancing
                .maxConnectionSize(5)       // max connection pool size
                .retryEnabled(false)        // disable retry
                .readTimeout(30000)         // read timeout in ms
                .setAuthenticationStrategy(new StaticAuthenticationStrategy())   // Linkis supports static (username/password) and token authentication
                .setAuthTokenKey("hadoop")    // login user
                .setAuthTokenValue("hadoop")))  // password, or a token such as "BML-AUTH"
                .setDWSVersion("v1")        // linkis rest api version v1
                .build();
       private static UJESClient client = new UJESClientImpl(clientConfig);
   
       public static void main(String[] args){
   
           String user = "hadoop"; // execute user
            String executeCode = "ls"; // code to execute; supported types include sql/hql/py/scala (shell here)
           try {
   
               System.out.println("user : " + user + ", code : [" + executeCode + "]");
               // 3. build job and execute
               JobExecuteResult jobExecuteResult = toSubmit(user, executeCode);
               //0.x:JobExecuteResult jobExecuteResult = toExecute(user, executeCode);
               System.out.println("execId: " + jobExecuteResult.getExecID() + ", taskId: " + jobExecuteResult.taskID());
                // 4. get job info
               JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
               int sleepTimeMills = 1000;
               int logFromLen = 0;
               int logSize = 100;
               while(!jobInfoResult.isCompleted()) {
                   // 5. get progress and log
                   JobProgressResult progress = client.progress(jobExecuteResult);
                   System.out.println("progress: " + progress.getProgress());
                   JobLogResult logRes = client.log(jobExecuteResult, logFromLen, logSize);
                   logFromLen = logRes.fromLine();
                   // 0: info 1: warn 2: error 3: all
                   System.out.println(logRes.log().get(3));
                   Utils.sleepQuietly(sleepTimeMills);
                   jobInfoResult = client.getJobInfo(jobExecuteResult);
               }
   
               JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
               // 6. Get the result set list (if the user submits multiple SQLs at a time,
               // multiple result sets will be generated)
               String resultSet = jobInfo.getResultSetList(client)[0];
               // 7. get resultContent
               Object fileContents = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
               System.out.println("res: " + fileContents);
           } catch (Exception e) {
               e.printStackTrace();
               IOUtils.closeQuietly(client);
           }
           IOUtils.closeQuietly(client);
       }
       private static JobExecuteResult toSubmit(String user, String code) {
           // 1. build  params
            // set label map: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
           Map<String, Object> labels = new HashMap<String, Object>();
           labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "shell-1"); // required engineType Label
           labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
           labels.put(LabelKeyConstant.CODE_TYPE_KEY, "shell"); // required codeType
           // set start up map :engineConn start params
           Map<String, Object> startupMap = new HashMap<String, Object>(16);
            // Engine-native parameters can be set here, e.g. Spark/Hive engine parameters
           startupMap.put("spark.executor.instances", 2);
           // setting linkis params
           startupMap.put("wds.linkis.rm.yarnqueue", "dws");
   
           // 2. build jobSubmitAction
           JobSubmitAction jobSubmitAction = JobSubmitAction.builder()
                   .addExecuteCode(code)
                   .setStartupParams(startupMap)
                   .setUser("root") //submit user
                   .addExecuteUser("root")  // execute user
                   .setLabels(labels)
                   .build();
           // 3. to execute
           return client.submit(jobSubmitAction);
       }
   }
   ```
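
    Even in this hadoop/hadoop variant, toSubmit still ignores its user parameter and hardcodes root as both the submit and execute user, which no longer matches the authenticated login. A minimal corrected sketch of that method (same JobSubmitAction API as in the code above; it assumes the surrounding class's client and imports):

    ```java
    private static JobExecuteResult toSubmit(String user, String code) {
        Map<String, Object> labels = new HashMap<String, Object>();
        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "shell-1");
        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-IDE"); // creator label follows the real user
        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "shell");
        Map<String, Object> startupMap = new HashMap<String, Object>(16);
        startupMap.put("wds.linkis.rm.yarnqueue", "dws");
        return client.submit(JobSubmitAction.builder()
                .addExecuteCode(code)
                .setStartupParams(startupMap)
                .setUser(user)        // submit user: pass the parameter through instead of hardcoding "root"
                .addExecuteUser(user) // execute user
                .setLabels(labels)
                .build());
    }
    ```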


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@linkis.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@linkis.apache.org
For additional commands, e-mail: dev-help@linkis.apache.org

