Posted to dev@sqoop.apache.org by "Bipin Roshan Nag (JIRA)" <ji...@apache.org> on 2015/04/09 11:48:12 UTC

[jira] [Updated] (SQOOP-2292) JDBC connector error

     [ https://issues.apache.org/jira/browse/SQOOP-2292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bipin Roshan Nag updated SQOOP-2292:
------------------------------------
    Environment: 
Ubuntu 14.04, Sqoop-2.0.0 Snapshot


  was:
Ubuntu, Sqoop-2.0.0 Snapshot



> JDBC connector error
> --------------------
>
>                 Key: SQOOP-2292
>                 URL: https://issues.apache.org/jira/browse/SQOOP-2292
>             Project: Sqoop
>          Issue Type: Bug
>          Components: connectors, connectors/generic
>    Affects Versions: 2.0.0
>         Environment: Ubuntu 14.04, Sqoop-2.0.0 Snapshot
>            Reporter: Bipin Roshan Nag
>             Fix For: 2.0.0
>
>
> I get the following error when importing data from MSSQL Server:
> Stack trace: java.lang.ArithmeticException: / by zero
>     at org.apache.sqoop.connector.jdbc.GenericJdbcPartitioner.partitionIntegerColumn(GenericJdbcPartitioner.java:317)
>     at org.apache.sqoop.connector.jdbc.GenericJdbcPartitioner.getPartitions(GenericJdbcPartitioner.java:86)
>     at org.apache.sqoop.connector.jdbc.GenericJdbcPartitioner.getPartitions(GenericJdbcPartitioner.java:38)
>     at org.apache.sqoop.job.mr.SqoopInputFormat.getSplits(SqoopInputFormat.java:74)
>     at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1107)
>     at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1124)
>     at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:178)
>     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:1023)
>     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:976)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
>     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:976)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:582)
>     at org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submitToCluster(MapreduceSubmissionEngine.java:274)
>     at org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submit(MapreduceSubmissionEngine.java:255)
>     at org.apache.sqoop.driver.JobManager.start(JobManager.java:288)
>     at org.apache.sqoop.handler.JobRequestHandler.startJob(JobRequestHandler.java:380)
>     at org.apache.sqoop.handler.JobRequestHandler.handleEvent(JobRequestHandler.java:116)
>     at org.apache.sqoop.server.v1.JobServlet.handlePutRequest(JobServlet.java:96)
>     at org.apache.sqoop.server.SqoopProtocolServlet.doPut(SqoopProtocolServlet.java:79)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
>     at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
>     at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>     at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
>     at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:277)
>     at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:555)
>     at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>     at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>     at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>     at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>     at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>     at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
>     at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>     at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
>     at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
>     at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
>     at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>     at java.lang.Thread.run(Thread.java:745)
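The failing frame is GenericJdbcPartitioner.partitionIntegerColumn. A minimal sketch of one plausible way an integer-range partitioner can hit "/ by zero" (the class, method, and variable names below are hypothetical, not the actual Sqoop source): if the boundary query returns equal minimum and maximum values for the partition column, the computed interval collapses to 0, and any later division or modulus by it throws.

```java
// Hypothetical sketch (NOT the actual Sqoop source) of how splitting an
// integer partition column into ranges can throw "/ by zero": when the
// boundary query returns equal min and max bounds for the column, the
// interval collapses to 0 and a later modulus by it fails.
public class PartitionSketch {
    static long slice(long min, long max, int numPartitions) {
        long interval = (max - min) / numPartitions; // 0 when min == max
        return max % interval; // ArithmeticException: / by zero when interval == 0
    }

    public static void main(String[] args) {
        try {
            slice(42, 42, 1); // equal bounds, as a degenerate boundary query might yield
        } catch (ArithmeticException e) {
            System.out.println(e.getMessage()); // "/ by zero"
        }
    }
}
```

A guard such as checking the bounds (or clamping the interval to at least 1) before dividing would avoid the exception in this sketch.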
> Here are my job details:
> Job with id 1 and name Customer import (Enabled: true, Created by bipin at 8/4/15 12:03 PM, Updated by bipin at 8/4/15 12:54 PM)
> Using link id 3 and Connector id 4
>   From database configuration
>     Schema name:
>     Table name:
>     Table SQL statement: SELECT * FROM MasterData.Customer WITH (NOLOCK) WHERE CreationDate < '2012-01-01' AND ${CONDITIONS}
>     Table column names:
>     Partition column name: CustomerID
>     Null value allowed for the partition column: true
>     Boundary query:
>   Incremental read
>     Check column:
>     Last value:
>   Throttling resources
>     Extractors: 1
>     Loaders: 1
>   To Kite Dataset Configuration
>     Dataset URI: dataset:file:/home/bipin/data/Customer
>     File format: PARQUET
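For context on the Table SQL statement above: Sqoop substitutes a per-partition range predicate on the partition column for the ${CONDITIONS} placeholder before each split runs its query. A hedged sketch of that substitution (the bounds 1 and 1000 are made-up illustration values, not taken from this job):

```java
// Sketch of how a ${CONDITIONS} placeholder is expanded into one
// partition's query. The range predicate would normally come from the
// partitioner's boundary query; the values here are illustrative only.
public class ConditionsSketch {
    public static String expand(String template, String rangePredicate) {
        // Replace the literal ${CONDITIONS} token with the partition predicate.
        return template.replace("${CONDITIONS}", rangePredicate);
    }

    public static void main(String[] args) {
        String template = "SELECT * FROM MasterData.Customer WITH (NOLOCK) "
                + "WHERE CreationDate < '2012-01-01' AND ${CONDITIONS}";
        System.out.println(expand(template, "CustomerID >= 1 AND CustomerID < 1000"));
    }
}
```

With only one extractor configured, the whole range becomes a single partition, so the split bounds come directly from the min/max of CustomerID.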



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)