Posted to user@spark.apache.org by Chetan Khatri <ch...@gmail.com> on 2019/09/02 11:11:17 UTC

Re: Control Sqoop job from Spark job

Hi Chris, thanks for the email. You're right, but the Sqoop job gets
launched based on DataFrame values inside the Spark job. Certainly it could be
isolated and broken out.

On Sat, Aug 31, 2019 at 8:07 AM Chris Teoh <ch...@gmail.com> wrote:

> I'd say this is an uncommon approach; could you use a workflow/scheduling
> system to call Sqoop outside of Spark? Spark is usually multi-process and
> distributed, so putting this Sqoop job in the Spark code seems to imply
> you want to run Sqoop first, then Spark. If you're really insistent on
> this, call it from the driver using the Sqoop Java APIs.
>
> On Fri, 30 Aug 2019 at 06:02, Chetan Khatri <ch...@gmail.com>
> wrote:
>
>> Sorry,
>> I call the Sqoop job from the function above. Can you help me resolve this?
>>
>> Thanks
>>
>> On Fri, Aug 30, 2019 at 1:31 AM Chetan Khatri <
>> chetan.opensource@gmail.com> wrote:
>>
>>> Hi Users,
>>> I am launching a Sqoop job from a Spark job and would like to FAIL the
>>> Spark job if the Sqoop job fails.
>>>
>>> def executeSqoopOriginal(serverName: String, schemaName: String, username: String, password: String,
>>>                          query: String, splitBy: String, fetchSize: Int, numMappers: Int,
>>>                          targetDir: String, jobName: String, dateColumns: String) = {
>>>
>>>   val connectionString = "jdbc:sqlserver://" + serverName + ";" + "databaseName=" + schemaName
>>>
>>>   var parameters = Array(
>>>     "import",
>>>     "-Dmapreduce.job.user.classpath.first=true",
>>>     "--connect", connectionString,
>>>     "--mapreduce-job-name", jobName,
>>>     "--username", username,
>>>     "--password", password,
>>>     "--hadoop-mapred-home", "/usr/hdp/2.6.5.0-292/hadoop-mapreduce/",
>>>     "--hadoop-home", "/usr/hdp/2.6.5.0-292/hadoop/",
>>>     "--query", query,
>>>     "--split-by", splitBy,
>>>     "--fetch-size", fetchSize.toString,
>>>     "--num-mappers", numMappers.toString)
>>>
>>>   if (dateColumns.length() > 0) {
>>>     parameters = parameters :+ "--map-column-java" :+ dateColumns
>>>   }
>>>
>>>   parameters = parameters ++ Array("--target-dir", targetDir, "--delete-target-dir", "--as-avrodatafile")
>>>
>>> }
>>>
>>>
>
> --
> Chris
>
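
[Editor's note] Following Chris's suggestion to drive Sqoop from the Spark
driver via the Sqoop Java API, here is a minimal sketch (not Chetan's actual
code) of how the parameter array built by executeSqoopOriginal could be
executed so that a failed import also fails the Spark job. It assumes Sqoop 1.x
is on the driver classpath; Sqoop.runTool blocks until the underlying
MapReduce job finishes and returns its exit code.

    import org.apache.sqoop.Sqoop

    def runSqoopImport(parameters: Array[String]): Unit = {
      // runTool launches the Sqoop tool named in parameters(0) ("import" here)
      // and returns 0 on success, non-zero on failure.
      val exitCode = Sqoop.runTool(parameters)
      if (exitCode != 0) {
        // Throwing on the driver aborts the Spark application, i.e. the
        // "fail the Spark job if the Sqoop job fails" behaviour asked about above.
        throw new RuntimeException(s"Sqoop import failed with exit code $exitCode")
      }
    }

Calling runSqoopImport(parameters) at the end of executeSqoopOriginal would
wire the two together; alternatively the exit code could be returned and
handled by the caller.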

Re: Control Sqoop job from Spark job

Posted by Chris Teoh <ch...@gmail.com>.
Hey Chetan,

How many database connections are you anticipating in this job? Is this for
every row in the dataframe?

Kind regards
Chris
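
[Editor's note] If the imports really are driven by values in a DataFrame,
one way to keep the connection count bounded, as Chris's question hints, is to
collect the distinct driving values to the driver and run the imports
sequentially there rather than per row. A rough sketch under assumed names
(df, the "region" column, and buildSqoopArgs are placeholders for whatever the
real job uses):

    // df is the driving DataFrame and "region" the column whose distinct values
    // parameterise each import; buildSqoopArgs stands for the argument-building
    // logic shown earlier in the thread (both are assumed names).
    val values = df.select("region").distinct().collect().map(_.getString(0))

    values.foreach { value =>
      val exitCode = org.apache.sqoop.Sqoop.runTool(buildSqoopArgs(value))
      if (exitCode != 0)
        sys.error(s"Sqoop import for '$value' failed with exit code $exitCode")
    }

Because the loop runs sequentially on the driver, each import opens roughly
--num-mappers database connections at a time, rather than one connection
attempt per DataFrame row.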

