Posted to dev@spark.apache.org by Justin Uang <ju...@gmail.com> on 2015/04/19 23:56:49 UTC

Infinite recursion when using SQLContext#createDataFrame(JavaRDD[Row], java.util.List[String])

Hi,

I have a question regarding SQLContext#createDataFrame(JavaRDD[Row],
java.util.List[String]). It looks like calling it results in infinite
recursion that overflows the stack. I filed it here:
https://issues.apache.org/jira/browse/SPARK-6999.

What is the best way to fix this? Is the intention that it should call a
Scala implementation that infers the schema from the datatypes of the Rows
and uses the provided column names?
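The intended behavior described here can be sketched without Spark at all: infer each column's type from the values in the first Row, and pair those types with the caller-supplied column names. `Field`, `inferSchema`, and the type-name strings below are illustrative stand-ins, not Spark's actual StructField/DataType API.

```scala
// Spark-free sketch: derive a schema from the first row's value types
// plus the provided column names. All names here are hypothetical.
case class Field(name: String, dataType: String)

def inferSchema(firstRow: Seq[Any], columnNames: Seq[String]): Seq[Field] =
  columnNames.zip(firstRow).map { case (name, value) =>
    val dt = value match {
      case _: Int    => "int"
      case _: Long   => "long"
      case _: Double => "double"
      case _: String => "string"
      case _         => "binary"   // fallback for unrecognized types
    }
    Field(name, dt)
  }
```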

Thanks!

Justin

Re: Infinite recursion when using SQLContext#createDataFrame(JavaRDD[Row], java.util.List[String])

Posted by Reynold Xin <rx...@databricks.com>.
Definitely a bug. I just checked and it looks like we don't actually have a
function that takes a Scala RDD and Seq[String].

cc Davies who added this code a while back.
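A minimal, Spark-free model of how a missing overload can turn into silent self-recursion (the names below are illustrative, not Spark's actual internals): the java.util.List overload means to delegate to a Seq[String] overload that was never written, and an in-scope implicit Seq-to-java.util.List conversion resolves the delegating call back to the same method, recursing until the stack overflows.

```scala
import scala.collection.JavaConverters._
import scala.language.implicitConversions

object Demo {
  // Implicit conversion of the kind that enables the pitfall.
  implicit def seqAsJavaStringList(s: Seq[String]): java.util.List[String] =
    s.asJava

  def createDataFrame(columns: java.util.List[String]): Int =
    // Intended target: createDataFrame(columns: Seq[String]) -- missing.
    // The implicit conversion makes this resolve to the method itself,
    // so the call recurses until a StackOverflowError.
    createDataFrame(columns.asScala.toSeq)
}
```

The compiler accepts this without warning, which is why the bug only surfaces at runtime as a stack overflow.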

