Posted to user@spark.apache.org by Mich Talebzadeh <mi...@gmail.com> on 2021/01/15 23:52:30 UTC

PySpark, setting spark conf values in a function and catching errors

Hi,


I have multiple routines that are using Spark for Google BigQuery that set
these configuration values. I have decided to put them in a PySpark
function as below with spark as an input.


import sys

def setSparkConfSet(spark):
    # Apply the BigQuery/GCS connector settings (taken from the config
    # dictionary loaded elsewhere) to the running SparkSession.
    try:
        spark.conf.set("GcpJsonKeyFile", config['GCPVariables']['jsonKeyFile'])
        spark.conf.set("BigQueryProjectId", config['GCPVariables']['projectId'])
        spark.conf.set("BigQueryDatasetLocation", config['GCPVariables']['datasetLocation'])
        spark.conf.set("google.cloud.auth.service.account.enable", "true")
        spark.conf.set("fs.gs.project.id", config['GCPVariables']['projectId'])
        spark.conf.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
        spark.conf.set("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
        spark.conf.set("temporaryGcsBucket", config['GCPVariables']['tmp_bucket'])
    except Exception as e:
        print(f"Could not set spark config variables: {e}, quitting")
        sys.exit(1)


Two questions:

1. Is it necessary to catch errors inside this function at all?

2. Or should I simply call the function from the main routine and do the
error handling in the module that is calling this function?
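
To make the second option concrete, here is a rough sketch of what I mean
by doing the error handling in the calling module. The appName and the
main() wrapper are just placeholders, and config is assumed to be loaded
elsewhere, as in the snippet above:

import sys
from pyspark.sql import SparkSession

def setSparkConfSet(spark):
    # No try/except here; any failure propagates to the caller
    spark.conf.set("GcpJsonKeyFile", config['GCPVariables']['jsonKeyFile'])
    spark.conf.set("temporaryGcsBucket", config['GCPVariables']['tmp_bucket'])
    # ... remaining spark.conf.set calls as above ...

def main():
    spark = SparkSession.builder.appName("bigquery-job").getOrCreate()
    try:
        setSparkConfSet(spark)
    except Exception as e:
        print(f"Could not set spark config variables: {e}, quitting")
        sys.exit(1)

if __name__ == "__main__":
    main()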


Thanks



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw





Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.