Posted to user@spark.apache.org by Kumba Janga <ky...@gmail.com> on 2022/08/01 23:56:32 UTC

[pyspark delta] [delta][Spark SQL]: Getting an Analysis Exception. The associated location (path) is not empty

   - Component: Spark Delta, Spark SQL
   - Level: Beginner
   - Scenario: Debug, How-to

*Python in Jupyter:*

import pyspark
import pyspark.sql.functions

from pyspark.sql import SparkSession
spark = (
    SparkSession
        .builder
        .appName("programming")
        .master("local")
        .config("spark.jars.packages", "io.delta:delta-core_2.12:0.7.0")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .config('spark.ui.port', '4050')
        .getOrCreate()
)
from delta import *

string_20210609 = '''worked_date,worker_id,delete_flag,hours_worked
2021-06-09,1001,Y,7
2021-06-09,1002,Y,3.75
2021-06-09,1003,Y,7.5
2021-06-09,1004,Y,6.25'''

rdd_20210609 = spark.sparkContext.parallelize(string_20210609.split('\n'))

# FILES WILL SHOW UP ON THE LEFT UNDER THE FOLDER ICON IF YOU WANT TO BROWSE THEM
OUTPUT_DELTA_PATH = './output/delta/'

spark.sql('CREATE DATABASE IF NOT EXISTS EXERCISE')

spark.sql('''
    CREATE TABLE IF NOT EXISTS EXERCISE.WORKED_HOURS(
        worked_date date
        , worker_id int
        , delete_flag string
        , hours_worked double
    ) USING DELTA
    PARTITIONED BY (worked_date)
    LOCATION "{0}"
    '''.format(OUTPUT_DELTA_PATH)
)

*Error Message:*

AnalysisException                         Traceback (most recent call last)
<ipython-input-13-e0469b5852dd> in <module>
      4 spark.sql('CREATE DATABASE IF NOT EXISTS EXERCISE')
      5
----> 6 spark.sql('''
      7     CREATE TABLE IF NOT EXISTS EXERCISE.WORKED_HOURS(
      8         worked_date date

/Users/kyjan/spark-3.0.3-bin-hadoop2.7\python\pyspark\sql\session.py in sql(self, sqlQuery)
    647         [Row(f1=1, f2=u'row1'), Row(f1=2, f2=u'row2'), Row(f1=3, f2=u'row3')]
    648         """
--> 649         return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
    650
    651     @since(2.0)

\Users\kyjan\spark-3.0.3-bin-hadoop2.7\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py in __call__(self, *args)
   1302
   1303         answer = self.gateway_client.send_command(command)
-> 1304         return_value = get_return_value(
   1305             answer, self.gateway_client, self.target_id, self.name)
   1306

/Users/kyjan/spark-3.0.3-bin-hadoop2.7\python\pyspark\sql\utils.py in deco(*a, **kw)
    132                 # Hide where the exception came from that shows a non-Pythonic
    133                 # JVM exception message.
--> 134                 raise_from(converted)
    135             else:
    136                 raise

/Users/kyjan/spark-3.0.3-bin-hadoop2.7\python\pyspark\sql\utils.py in raise_from(e)

AnalysisException: Cannot create table ('`EXERCISE`.`WORKED_HOURS`'). The associated location ('output/delta') is not empty.;


-- 
Best Wishes,
Kumba Janga

"The only way of finding the limits of the possible is by going beyond them
into the impossible"
-Arthur C. Clarke

Re: [pyspark delta] [delta][Spark SQL]: Getting an Analysis Exception. The associated location (path) is not empty

Posted by ayan guha <gu...@gmail.com>.
Hi

I strongly suggest printing the prepared SQL and trying it in raw form. The
error you posted points to a syntax error.
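
For example, a minimal sketch of that debugging step, reusing the DDL and
the OUTPUT_DELTA_PATH variable from the original post:

# Build the statement as a plain string first, print it, then run that exact string.
ddl = '''
    CREATE TABLE IF NOT EXISTS EXERCISE.WORKED_HOURS(
        worked_date date,
        worker_id int,
        delete_flag string,
        hours_worked double
    ) USING DELTA
    PARTITIONED BY (worked_date)
    LOCATION "{0}"
'''.format(OUTPUT_DELTA_PATH)
print(ddl)      # paste the printed text into a spark-sql shell to test it in raw form
spark.sql(ddl)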

On Tue, 2 Aug 2022 at 3:56 pm, Kumba Janga <ky...@gmail.com> wrote:

> Thanks Sean! That was a simple fix. I changed it to "Create or Replace
> Table" but now I am getting the following error. I am still researching
> solutions but so far no luck.
>
> ParseException:
> mismatched input '<EOF>' expecting {...} (line 1, pos 23)
>
> == SQL ==
> CREATE OR REPLACE TABLE
>
>
> On Mon, Aug 1, 2022 at 8:32 PM Sean Owen <sr...@gmail.com> wrote:
>
>> Pretty much what it says? You are creating a table over a path that
>> already has data in it. You can't do that without mode=overwrite at least,
>> if that's what you intend.
>>
>> On Mon, Aug 1, 2022 at 7:29 PM Kumba Janga <ky...@gmail.com> wrote:
>>
>>> [...]
-- 
Best Regards,
Ayan Guha

Re: [pyspark delta] [delta][Spark SQL]: Getting an Analysis Exception. The associated location (path) is not empty

Posted by Sean Owen <sr...@gmail.com>.
That isn't the issue - the table does not exist anyway, but the storage
path does.
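
A quick way to see that state (a sketch, assuming the spark session and
OUTPUT_DELTA_PATH from the original post):

import os
spark.sql('SHOW TABLES IN EXERCISE').show()   # WORKED_HOURS is not listed
print(os.listdir(OUTPUT_DELTA_PATH))          # yet the directory already holds files, e.g. _delta_log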

On Tue, Aug 2, 2022 at 6:48 AM Stelios Philippou <st...@gmail.com> wrote:

> Hi Kumba,
>
> The SQL structure is a bit different: CREATE OR REPLACE TABLE is not
> supported here. You can only do the following:
>
> CREATE TABLE IF NOT EXISTS
>
> https://spark.apache.org/docs/3.3.0/sql-ref-syntax-ddl-create-table-datasource.html
>
> On Tue, 2 Aug 2022 at 14:38, Sean Owen <sr...@gmail.com> wrote:
>
>> [...]

Re: [pyspark delta] [delta][Spark SQL]: Getting an Analysis Exception. The associated location (path) is not empty

Posted by Stelios Philippou <st...@gmail.com>.
Hi Kumba,

The SQL structure is a bit different: CREATE OR REPLACE TABLE is not
supported here. You can only do the following:

CREATE TABLE IF NOT EXISTS

https://spark.apache.org/docs/3.3.0/sql-ref-syntax-ddl-create-table-datasource.html
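
For reference, a minimal sketch of the documented datasource form applied
to the table from the original post (the location path here is illustrative):

spark.sql('''
    CREATE TABLE IF NOT EXISTS EXERCISE.WORKED_HOURS(
        worked_date date,
        worker_id int,
        delete_flag string,
        hours_worked double
    ) USING DELTA
    PARTITIONED BY (worked_date)
    LOCATION '/tmp/output/delta'
''')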

On Tue, 2 Aug 2022 at 14:38, Sean Owen <sr...@gmail.com> wrote:

> I don't think "CREATE OR REPLACE TABLE" exists (in SQL?); this isn't a
> VIEW.
> Delete the path first; that's simplest.
>
> On Tue, Aug 2, 2022 at 12:55 AM Kumba Janga <ky...@gmail.com> wrote:
>
>> [...]

Re: [pyspark delta] [delta][Spark SQL]: Getting an Analysis Exception. The associated location (path) is not empty

Posted by Sean Owen <sr...@gmail.com>.
I don't think "CREATE OR REPLACE TABLE" exists (in SQL?); this isn't a VIEW.
Delete the path first; that's simplest.
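
If the data at that path is disposable, a minimal sketch of that (local
filesystem only; use the matching filesystem API for HDFS or S3):

import shutil
shutil.rmtree(OUTPUT_DELTA_PATH, ignore_errors=True)   # remove the leftover Delta files
# then re-run the CREATE TABLE ... USING DELTA statement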

On Tue, Aug 2, 2022 at 12:55 AM Kumba Janga <ky...@gmail.com> wrote:

> Thanks Sean! That was a simple fix. I changed it to "Create or Replace
> Table" but now I am getting the following error. I am still researching
> solutions but so far no luck.
>
> ParseException:
> mismatched input '<EOF>' expecting {...} (line 1, pos 23)
>
> == SQL ==
> CREATE OR REPLACE TABLE
>
>
> On Mon, Aug 1, 2022 at 8:32 PM Sean Owen <sr...@gmail.com> wrote:
>
>> [...]

Re: [pyspark delta] [delta][Spark SQL]: Getting an Analysis Exception. The associated location (path) is not empty

Posted by Kumba Janga <ky...@gmail.com>.
Thanks Sean! That was a simple fix. I changed it to "Create or Replace
Table" but now I am getting the following error. I am still researching
solutions but so far no luck.

ParseException:
mismatched input '<EOF>' expecting {'ADD', 'AFTER', 'ALL', 'ALTER',
'ANALYZE', 'AND', 'ANTI', 'ANY', 'ARCHIVE', 'ARRAY', 'AS', 'ASC',
'AT', 'AUTHORIZATION', 'BETWEEN', 'BOTH', 'BUCKET', 'BUCKETS', 'BY',
'CACHE', 'CASCADE', 'CASE', 'CAST', 'CHANGE', 'CHECK', 'CLEAR',
'CLUSTER', 'CLUSTERED', 'CODEGEN', 'COLLATE', 'COLLECTION', 'COLUMN',
'COLUMNS', 'COMMENT', 'COMMIT', 'COMPACT', 'COMPACTIONS', 'COMPUTE',
'CONCATENATE', 'CONSTRAINT', 'COST', 'CREATE', 'CROSS', 'CUBE',
'CURRENT', 'CURRENT_DATE', 'CURRENT_TIME', 'CURRENT_TIMESTAMP',
'CURRENT_USER', 'DATA', 'DATABASE', DATABASES, 'DBPROPERTIES',
'DEFINED', 'DELETE', 'DELIMITED', 'DESC', 'DESCRIBE', 'DFS',
'DIRECTORIES', 'DIRECTORY', 'DISTINCT', 'DISTRIBUTE', 'DIV', 'DROP',
'ELSE', 'END', 'ESCAPE', 'ESCAPED', 'EXCEPT', 'EXCHANGE', 'EXISTS',
'EXPLAIN', 'EXPORT', 'EXTENDED', 'EXTERNAL', 'EXTRACT', 'FALSE',
'FETCH', 'FIELDS', 'FILTER', 'FILEFORMAT', 'FIRST', 'FOLLOWING',
'FOR', 'FOREIGN', 'FORMAT', 'FORMATTED', 'FROM', 'FULL', 'FUNCTION',
'FUNCTIONS', 'GLOBAL', 'GRANT', 'GROUP', 'GROUPING', 'HAVING', 'IF',
'IGNORE', 'IMPORT', 'IN', 'INDEX', 'INDEXES', 'INNER', 'INPATH',
'INPUTFORMAT', 'INSERT', 'INTERSECT', 'INTERVAL', 'INTO', 'IS',
'ITEMS', 'JOIN', 'KEYS', 'LAST', 'LATERAL', 'LAZY', 'LEADING', 'LEFT',
'LIKE', 'LIMIT', 'LINES', 'LIST', 'LOAD', 'LOCAL', 'LOCATION', 'LOCK',
'LOCKS', 'LOGICAL', 'MACRO', 'MAP', 'MATCHED', 'MERGE', 'MSCK',
'NAMESPACE', 'NAMESPACES', 'NATURAL', 'NO', NOT, 'NULL', 'NULLS',
'OF', 'ON', 'ONLY', 'OPTION', 'OPTIONS', 'OR', 'ORDER', 'OUT',
'OUTER', 'OUTPUTFORMAT', 'OVER', 'OVERLAPS', 'OVERLAY', 'OVERWRITE',
'PARTITION', 'PARTITIONED', 'PARTITIONS', 'PERCENT', 'PIVOT',
'PLACING', 'POSITION', 'PRECEDING', 'PRIMARY', 'PRINCIPALS',
'PROPERTIES', 'PURGE', 'QUERY', 'RANGE', 'RECORDREADER',
'RECORDWRITER', 'RECOVER', 'REDUCE', 'REFERENCES', 'REFRESH',
'RENAME', 'REPAIR', 'REPLACE', 'RESET', 'RESTRICT', 'REVOKE', 'RIGHT',
RLIKE, 'ROLE', 'ROLES', 'ROLLBACK', 'ROLLUP', 'ROW', 'ROWS', 'SCHEMA',
'SELECT', 'SEMI', 'SEPARATED', 'SERDE', 'SERDEPROPERTIES',
'SESSION_USER', 'SET', 'MINUS', 'SETS', 'SHOW', 'SKEWED', 'SOME',
'SORT', 'SORTED', 'START', 'STATISTICS', 'STORED', 'STRATIFY',
'STRUCT', 'SUBSTR', 'SUBSTRING', 'TABLE', 'TABLES', 'TABLESAMPLE',
'TBLPROPERTIES', TEMPORARY, 'TERMINATED', 'THEN', 'TO', 'TOUCH',
'TRAILING', 'TRANSACTION', 'TRANSACTIONS', 'TRANSFORM', 'TRIM',
'TRUE', 'TRUNCATE', 'TYPE', 'UNARCHIVE', 'UNBOUNDED', 'UNCACHE',
'UNION', 'UNIQUE', 'UNKNOWN', 'UNLOCK', 'UNSET', 'UPDATE', 'USE',
'USER', 'USING', 'VALUES', 'VIEW', 'VIEWS', 'WHEN', 'WHERE', 'WINDOW',
'WITH', IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 23)

== SQL ==
CREATE OR REPLACE TABLE


On Mon, Aug 1, 2022 at 8:32 PM Sean Owen <sr...@gmail.com> wrote:

> Pretty much what it says? You are creating a table over a path that
> already has data in it. You can't do that without mode=overwrite at least,
> if that's what you intend.
>
> On Mon, Aug 1, 2022 at 7:29 PM Kumba Janga <ky...@gmail.com> wrote:
>
>> [...]

-- 
Best Wishes,
Kumba Janga

"The only way of finding the limits of the possible is by going beyond them
into the impossible"
-Arthur C. Clarke

Re: [pyspark delta] [delta][Spark SQL]: Getting an Analysis Exception. The associated location (path) is not empty

Posted by Sean Owen <sr...@gmail.com>.
Pretty much what it says? You are creating a table over a path that already
has data in it. You can't do that without mode=overwrite at least, if
that's what you intend.
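
One way to express that intent is through the DataFrame writer rather than
DDL (a sketch; the single hand-built row is only illustrative):

from pyspark.sql import Row

df = spark.createDataFrame([
    Row(worked_date='2021-06-09', worker_id=1001, delete_flag='Y', hours_worked=7.0),
])
(df.write
    .format('delta')
    .mode('overwrite')           # replaces whatever already sits at the path
    .partitionBy('worked_date')
    .save(OUTPUT_DELTA_PATH))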

On Mon, Aug 1, 2022 at 7:29 PM Kumba Janga <ky...@gmail.com> wrote:

> [...]