Posted to user@spark.apache.org by mharwida <ma...@yahoo.com> on 2014/01/25 20:44:07 UTC

Spark connecting to wrong Filesystem.uri

Hi,

I've had Spark/Shark running successfully on my Hadoop cluster. For various
reasons I recently had to change the IP addresses of my six Hadoop nodes, and
since then I have been unable to create a cached table in memory using Shark.

While 10.14.xx.xx in the first log line below is the new address, Shark/Spark
is still trying to connect to the old IP address 10.xx.xx.xx, as is evident at
the bottom of the trace. Shark appears to get stuck at that stage, and no
cached tables are created.

Could someone please shed some light on this? I'm unable to use Shark/Spark at
all in its current state.
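
For reference, this is how I'm checking which default filesystem URI the
Hadoop configuration actually resolves to. A minimal sketch to paste into
spark-shell, assuming the cluster's core-site.xml is on the classpath
(fs.default.name is the pre-Hadoop-2 key for the default filesystem setting):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.FileSystem

    // Picks up core-site.xml and friends from the classpath
    val conf = new Configuration()

    // What the configuration file itself says the default filesystem is
    println("fs.default.name = " + conf.get("fs.default.name"))

    // What the FileSystem layer actually resolves to; a mismatch with the
    // line above would point at a stale or overriding setting elsewhere
    println("resolved URI    = " + FileSystem.get(conf).getUri)

If both print the new address, then the core Hadoop configuration itself is
presumably not where the old URI is coming from.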


This is the output from running shark-withinfo:

INFO cfs.CassandraFileSystem: CassandraFileSystem.uri : cfs://10.14.xx.xx/
INFO cfs.CassandraFileSystem: Default block size: 67108864
INFO cfs.CassandraFileSystemThriftStore: Consistency level for reads from
cfs: LOCAL_QUORUM
INFO cfs.CassandraFileSystemThriftStore: Consistency level for writes into
cfs: LOCAL_QUORUM
INFO cfs.CassandraFileSystemRules: Successfully loaded path rules for: cfs
INFO parse.SharkSemanticAnalyzer: Completed getting MetaData in Semantic
Analysis
INFO ppd.OpProcFactory: Processing for FS(2)
INFO ppd.OpProcFactory: Processing for SEL(1)
INFO ppd.OpProcFactory: Processing for TS(0)
INFO metastore.HiveMetaStore: 0: get_database: ly
INFO metastore.HiveMetaStore: 0: get_database: ly
INFO cfs.CassandraFileSystem: CassandraFileSystem.uri : cfs://10.xx.xx.xx/
INFO cfs.CassandraFileSystem: Default block size: 67108864
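
My suspicion (unverified) is that the second URI above comes from the Hive
metastore rather than from the Hadoop configuration: Hive records a
fully-qualified LOCATION URI for every table and partition at creation time,
so locations written before the re-addressing would still carry the old IP.
Here is a sketch of how one might list those stored locations, assuming a
MySQL-backed metastore with the standard SDS table (the host, database name,
and credentials below are hypothetical placeholders):

    import java.sql.DriverManager

    // Hypothetical metastore connection details -- substitute your own
    // backend, with the matching JDBC driver on the classpath
    val url  = "jdbc:mysql://metastore-host:3306/hive_metastore"
    val conn = DriverManager.getConnection(url, "hiveuser", "hivepass")

    // SDS.LOCATION is where the Hive metastore keeps each table's storage URI
    val rs = conn.createStatement().executeQuery(
      "SELECT LOCATION FROM SDS WHERE LOCATION LIKE 'cfs://%'")
    while (rs.next()) println(rs.getString(1))
    conn.close()

If the old 10.xx.xx.xx address shows up there, recent Hive releases also ship
a metatool (hive --service metatool -updateLocation <new-uri> <old-uri>, if
your version includes it) that can rewrite these stored locations in bulk.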