Posted to issues@ignite.apache.org by "Luca Rea (JIRA)" <ji...@apache.org> on 2016/03/01 09:20:18 UTC

[jira] [Created] (IGNITE-2736) custom Ignite Configuration (not xml file) is not used by spark executors

Luca Rea created IGNITE-2736:
--------------------------------

             Summary: custom Ignite Configuration (not xml file) is not used by spark executors
                 Key: IGNITE-2736
                 URL: https://issues.apache.org/jira/browse/IGNITE-2736
             Project: Ignite
          Issue Type: Bug
            Reporter: Luca Rea


Hi,
I have launched an Ignite cluster inside YARN and I use spark-shell from a client machine to attach to the existing cluster in client mode. The network between the client and the cluster does not support multicast, so I tried to use a custom programmatic configuration like the one below:

{code}
import java.util.Arrays

import org.apache.ignite._
import org.apache.ignite.cache._
import org.apache.ignite.configuration._
import org.apache.ignite.spark._
import org.apache.ignite.spi.discovery.tcp._
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm._

val ipFinder = new TcpDiscoveryVmIpFinder()
ipFinder.setAddresses(Arrays.asList(
  "172.16.24.48:47500", "172.16.24.49:47500", "172.16.24.50:47500",
  "172.16.24.51:47500", "172.16.24.52:47500", "172.16.24.53:47500"))

val spi = new TcpDiscoverySpi()
spi.setIpFinder(ipFinder)

val cfg = new IgniteConfiguration() with Serializable
cfg.setGridName("ignite-cluster")
cfg.setDiscoverySpi(spi)

val cacheCfg = new CacheConfiguration("myCache")
cacheCfg.setCacheMode(CacheMode.PARTITIONED)
cacheCfg.setBackups(1)
cfg.setCacheConfiguration(cacheCfg)

val ic = new IgniteContext[Integer, Integer](sc, () => cfg)

val sharedRdd = ic.fromCache("example")
val x = sqlContext.sparkContext.parallelize(1 to 10000, 10).map(i => (new Integer(i), new Integer(i)))
sharedRdd.savePairs(x)
{code}

When I run the last command it freezes while waiting to connect to the cluster. It seems that in this way the Spark executors neither use the above configuration nor load the file default-config.xml, but instead fall back to some hardcoded configuration with only multicast discovery enabled.

The workaround is to use a custom XML configuration file, copy it into the Ignite config path on all Spark nodes, and then run:

{code}
import org.apache.ignite.spark._
val ic = new IgniteContext[Integer, Integer](sc, "config/custom-config.xml")
{code}

custom-config.xml:
{code}
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!--
        Alter configuration below as needed.
    -->

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="clientMode" value="true"/>
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <property name="ipFinder">
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
          <property name="addresses">
            <list>
              <value>172.16.24.48:47500</value>
              <value>172.16.24.49:47500</value>
              <value>172.16.24.50:47500</value>
              <value>172.16.24.51:47500</value>
              <value>172.16.24.52:47500</value>
              <value>172.16.24.53:47500</value>
            </list>
          </property>
        </bean>
      </property>
    </bean>
  </property>
</bean>

</beans>
{code}
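For comparison, a closure-based variant that might avoid shipping an XML file (a hedged sketch, not verified against this cluster): build the whole IgniteConfiguration inside the closure passed to IgniteContext, so each executor constructs its own configuration locally instead of depending on the driver-side cfg object surviving serialization. The addresses and cache name are the ones from this report.

{code}
import java.util.Arrays

import org.apache.ignite.cache.CacheMode
import org.apache.ignite.configuration.{CacheConfiguration, IgniteConfiguration}
import org.apache.ignite.spark.IgniteContext
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder

// Everything is created inside the closure: only the closure itself is
// serialized, and every executor builds a fresh IgniteConfiguration locally.
val ic = new IgniteContext[Integer, Integer](sc, () => {
  val ipFinder = new TcpDiscoveryVmIpFinder()
  ipFinder.setAddresses(Arrays.asList(
    "172.16.24.48:47500", "172.16.24.49:47500", "172.16.24.50:47500",
    "172.16.24.51:47500", "172.16.24.52:47500", "172.16.24.53:47500"))

  val spi = new TcpDiscoverySpi()
  spi.setIpFinder(ipFinder)

  val cacheCfg = new CacheConfiguration[Integer, Integer]("myCache")
  cacheCfg.setCacheMode(CacheMode.PARTITIONED)
  cacheCfg.setBackups(1)

  val cfg = new IgniteConfiguration()
  cfg.setGridName("ignite-cluster")
  cfg.setDiscoverySpi(spi)
  cfg.setCacheConfiguration(cacheCfg)
  cfg
})
{code}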



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)