Posted to user@spark.apache.org by Dariusz Kobylarz <da...@gmail.com> on 2014/11/08 00:33:11 UTC
MatrixFactorizationModel serialization
I am trying to persist a MatrixFactorizationModel (from the Collaborative
Filtering example) and use it in another script to evaluate/apply it.
This is the exception I get when I try to use the deserialized model instance:
Exception in thread "main" java.lang.NullPointerException
at org.apache.spark.rdd.CoGroupedRDD$$anonfun$getPartitions$1.apply$mcVI$sp(CoGroupedRDD.scala:103)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.rdd.CoGroupedRDD.getPartitions(CoGroupedRDD.scala:101)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.rdd.MappedValuesRDD.getPartitions(MappedValuesRDD.scala:26)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.rdd.FlatMappedValuesRDD.getPartitions(FlatMappedValuesRDD.scala:26)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.rdd.FlatMappedRDD.getPartitions(FlatMappedRDD.scala:30)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.Partitioner$$anonfun$2.apply(Partitioner.scala:58)
at org.apache.spark.Partitioner$$anonfun$2.apply(Partitioner.scala:58)
at scala.math.Ordering$$anon$5.compare(Ordering.scala:122)
at java.util.TimSort.countRunAndMakeAscending(TimSort.java:324)
at java.util.TimSort.sort(TimSort.java:189)
at java.util.TimSort.sort(TimSort.java:173)
at java.util.Arrays.sort(Arrays.java:659)
at scala.collection.SeqLike$class.sorted(SeqLike.scala:615)
at scala.collection.AbstractSeq.sorted(Seq.scala:40)
at scala.collection.SeqLike$class.sortBy(SeqLike.scala:594)
at scala.collection.AbstractSeq.sortBy(Seq.scala:40)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:58)
at org.apache.spark.rdd.PairRDDFunctions.join(PairRDDFunctions.scala:536)
at org.apache.spark.mllib.recommendation.MatrixFactorizationModel.predict(MatrixFactorizationModel.scala:57)
...
Is this model serializable at all? I noticed it has two RDDs inside
(user & product features).
Thanks,
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org
Re: MatrixFactorizationModel serialization
Posted by Sean Owen <so...@cloudera.com>.
Serializable like a Java object? no, it's an RDD. A factored matrix
model is huge, unlike most models, and is not a local object. You can
of course persist the RDDs to storage manually and read them back.
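A sketch of the manual persistence described here (the HDFS paths are placeholders, and `model` is assumed to be an already-trained MatrixFactorizationModel in scope):

```scala
// The model is essentially a rank plus two factor RDDs, so persist
// the RDDs themselves rather than serializing the model object.
model.userFeatures.saveAsObjectFile("hdfs:///models/als/userFeatures")
model.productFeatures.saveAsObjectFile("hdfs:///models/als/productFeatures")
```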
On Fri, Nov 7, 2014 at 11:33 PM, Dariusz Kobylarz
<da...@gmail.com> wrote:
> I am trying to persist MatrixFactorizationModel (Collaborative Filtering
> example) and use it in another script to evaluate/apply it.
> [...]
RE: MatrixFactorizationModel serialization
Posted by "Ganelin, Ilya" <Il...@capitalone.com>.
Try loading the features as:
val userFeatures = sc.objectFile(path1)
val productFeatures = sc.objectFile(path2)
and then call the constructor of MatrixFactorizationModel with those.
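Expanded into a fuller sketch (the element type, paths, and `rank` value are assumptions based on the MLlib 1.x API; note that in some releases the MatrixFactorizationModel constructor is private[mllib], so this code may need to live in a matching package of your own):

```scala
import org.apache.spark.mllib.recommendation.MatrixFactorizationModel

// Read the factor RDDs back with their element type spelled out --
// objectFile needs the type parameter to deserialize correctly --
// then rebuild the model around them. rank must match training.
val userFeatures = sc.objectFile[(Int, Array[Double])]("hdfs:///models/als/userFeatures")
val productFeatures = sc.objectFile[(Int, Array[Double])]("hdfs:///models/als/productFeatures")
val restored = new MatrixFactorizationModel(rank, userFeatures, productFeatures)
```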
-----Original Message-----
From: wanbo [gewanbo@163.com]
Sent: Wednesday, January 07, 2015 10:54 PM Eastern Standard Time
To: user@spark.apache.org
Subject: Re: MatrixFactorizationModel serialization
I save and reload model like this:
val bestModel = ALS.train(training, rank, numIter, lambda)
bestModel.get.userFeatures.saveAsObjectFile("hdfs://***:9000/spark/results/userfeatures")
bestModel.get.productFeatures.saveAsObjectFile("hdfs://***:9000/spark/results/productfeatures")
val bestModel = obj.asInstanceOf[MatrixFactorizationModel]
bestModel.userFeatures.sparkContext.objectFile("hdfs://***:9000/spark/results/userfeatures")
bestModel.productFeatures.sparkContext.objectFile("hdfs://***:9000/spark/results/productfeatures")
But I get the same exception:
Exception in thread "Driver" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:162)
Caused by: java.lang.NullPointerException
at com.ft.jobs.test.ModelDeserialization$.main(ModelDeserialization.scala:138)
at com.ft.jobs.test.ModelDeserialization.main(ModelDeserialization.scala)
... 5 more
Has this issue been fixed?
--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/MatrixFactorizationModel-serialization-tp18389p21024.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.