Posted to issues@spark.apache.org by "Sai Varun Reddy Daram (JIRA)" <ji...@apache.org> on 2018/11/19 07:19:00 UTC

[jira] [Commented] (SPARK-26113) TypeError: object of type 'NoneType' has no len() in authenticate_and_accum_updates of pyspark/accumulators.py

    [ https://issues.apache.org/jira/browse/SPARK-26113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691330#comment-16691330 ] 

Sai Varun Reddy Daram commented on SPARK-26113:
-----------------------------------------------

Something to help here: https://issues.apache.org/jira/browse/SPARK-26113 ?

> TypeError: object of type 'NoneType' has no len() in authenticate_and_accum_updates of pyspark/accumulators.py
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-26113
>                 URL: https://issues.apache.org/jira/browse/SPARK-26113
>             Project: Spark
>          Issue Type: Bug
>          Components: Kubernetes, PySpark
>    Affects Versions: 2.4.0
>            Reporter: Sai Varun Reddy Daram
>            Priority: Blocker
>
> Machine OS: Ubuntu 16.04.
> Kubernetes: Minikube 
> Kubernetes Version: 1.10.0
> Spark Kubernetes Image: pyspark (at Docker Hub: saivarunr/spark-py:2.4), built using the standard Spark Docker build.sh script.
> Steps to replicate:
> 1) Create a Spark session:
> {code:python}
> from pyspark.sql import SparkSession
>
> spark_session = (SparkSession.builder
>     .master('k8s://https://192.168.99.100:8443')
>     .config('spark.executor.instances', '1')
>     .config('spark.kubernetes.container.image', 'saivarunr/spark-py:2.4')
>     .getOrCreate())
> {code}
> 2) Create a sample DataFrame
> {code:python}
> df = spark_session.createDataFrame([{'a': 1}])
> {code}
> 3) Do some operation on this DataFrame
> {code:python}
> df.count()
> {code}
> I get this output.
> {code}
> Exception happened during processing of request from ('127.0.0.1', 38690)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/socketserver.py", line 317, in _handle_request_noblock
>     self.process_request(request, client_address)
>   File "/usr/lib/python3.6/socketserver.py", line 348, in process_request
>     self.finish_request(request, client_address)
>   File "/usr/lib/python3.6/socketserver.py", line 361, in finish_request
>     self.RequestHandlerClass(request, client_address, self)
>   File "/usr/lib/python3.6/socketserver.py", line 721, in __init__
>     self.handle()
>   File "/spark-2.4.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/accumulators.py", line 266, in handle
>     poll(authenticate_and_accum_updates)
>   File "/spark-2.4.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/accumulators.py", line 241, in poll
>     if func():
>   File "/spark-2.4.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/accumulators.py", line 254, in authenticate_and_accum_updates
>     received_token = self.rfile.read(len(auth_token))
> TypeError: object of type 'NoneType' has no len()
> {code}
> 4) Repeat the above step; the error does not appear again.
>  
> But if I close the session, kill the Python terminal or process, and try again, the same error appears on the first action.
>  
> Something related to https://issues.apache.org/jira/browse/SPARK-26019  ?
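For reference, the traceback above points at `authenticate_and_accum_updates` in pyspark/accumulators.py calling `self.rfile.read(len(auth_token))` while `auth_token` is None, so `len(None)` raises the TypeError. Below is a minimal, simplified sketch of that failure mode (not the actual Spark source; the guard shown is illustrative, not the official upstream fix):

```python
import io

def read_token(rfile, auth_token):
    # Mirrors the failing expression from the traceback:
    # len(None) raises TypeError before any bytes are read.
    return rfile.read(len(auth_token))

def read_token_guarded(rfile, auth_token):
    # Hypothetical defensive variant for illustration only:
    # skip authentication when no token is configured.
    if auth_token is None:
        return None
    return rfile.read(len(auth_token))

try:
    read_token(io.BytesIO(b"secret"), None)
except TypeError as e:
    print(e)  # object of type 'NoneType' has no len()

print(read_token_guarded(io.BytesIO(b"secret"), None))      # None
print(read_token_guarded(io.BytesIO(b"secret"), "secret"))  # b'secret'
```

This suggests the error surfaces only when the accumulator server's auth token was never set for the handler, which would fit the observation that it happens on the first action of a fresh process.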



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org