Posted to issues@ignite.apache.org by "Aleksey Zinoviev (Jira)" <ji...@apache.org> on 2019/09/19 14:18:00 UTC

[jira] [Assigned] (IGNITE-11724) IgniteSpark integration forgets to close the IgniteContext and stop the client node if an error occurs during PairFunction logic

     [ https://issues.apache.org/jira/browse/IGNITE-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksey Zinoviev reassigned IGNITE-11724:
-----------------------------------------

    Assignee: Aleksey Zinoviev

> IgniteSpark integration forgets to close the IgniteContext and stop the client node if an error occurs during PairFunction logic
> --------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: IGNITE-11724
>                 URL: https://issues.apache.org/jira/browse/IGNITE-11724
>             Project: Ignite
>          Issue Type: Bug
>          Components: spark
>    Affects Versions: 2.7
>            Reporter: Andrey Aleksandrov
>            Assignee: Aleksey Zinoviev
>            Priority: Major
>             Fix For: 2.8
>
>
> The following code can hang if the PairFunction logic throws an exception:
> JavaPairRDD<Key, Value> rdd_records = records.mapToPair(new MapFunction());
> JavaIgniteContext<Key, Value> igniteContext = new JavaIgniteContext<>(sparkCtx, configUrl);
> JavaIgniteRDD<Key, Value> igniteRdd = igniteContext.<Key, Value>fromCache(cacheName);
> igniteRdd.savePairs(rdd_records);
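Until the fix lands, a caller can work around the hang by always closing the IgniteContext in a finally block. The sketch below only illustrates that pattern: FakeIgniteContext and FakeIgniteRdd are minimal stand-ins for the real Ignite/Spark types named above, not the actual API.

```java
// Caller-side workaround sketch: the stand-in classes below merely mimic
// the shape of JavaIgniteContext / JavaIgniteRDD from the report.
public class SavePairsWorkaround {
    static class FakeIgniteContext {
        boolean closed = false;
        // Stands in for igniteContext.close(true), which stops the client node.
        void close(boolean shutdownIgniteOnWorkers) { closed = true; }
    }

    static class FakeIgniteRdd {
        // Simulates savePairs() failing because the PairFunction threw.
        void savePairs() { throw new RuntimeException("PairFunction failed"); }
    }

    public static void main(String[] args) {
        FakeIgniteContext igniteContext = new FakeIgniteContext();
        FakeIgniteRdd igniteRdd = new FakeIgniteRdd();
        try {
            igniteRdd.savePairs();
        } catch (RuntimeException e) {
            // Real code would log or rethrow after cleanup.
        } finally {
            igniteContext.close(true); // always stop the client node
        }
        System.out.println(igniteContext.closed);
    }
}
```

With the close in finally, the client node is stopped even when the save fails, so the job cannot hang waiting on it.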
> It looks like the following internal code (the saveValues method) should also close the IgniteContext in case of an unexpected exception, not only the data streamer:
>  try {
>      it.foreach(value => {
>          val key = affinityKeyFunc(value, node.orNull)
>          streamer.addData(key, value)
>      })
>  }
>  finally {
>      streamer.close()
>  }
>  })
> }
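The suggested fix can be sketched as follows. This is only an illustration of the control flow the report asks for, written in Java with hypothetical stand-in classes; the real saveValues is Scala inside the ignite-spark module, and the real types are IgniteDataStreamer and IgniteContext.

```java
import java.util.List;

// Sketch of the proposed saveValues shape: keep the existing streamer
// cleanup, and additionally close the context when an error escapes.
public class SaveValuesFixSketch {
    static class FakeStreamer {
        boolean closed = false;
        void addData(Object v) { throw new IllegalStateException("addData failed"); }
        void close() { closed = true; }
    }

    static class FakeContext {
        boolean closed = false;
        void close(boolean shutdownIgniteOnWorkers) { closed = true; }
    }

    static void saveValues(FakeStreamer streamer, FakeContext ctx, List<Object> values) {
        try {
            for (Object v : values)
                streamer.addData(v);
        } catch (RuntimeException e) {
            ctx.close(true); // proposed addition: stop the client node on error
            throw e;         // still surface the original failure
        } finally {
            streamer.close(); // existing cleanup from the snippet above
        }
    }

    public static void main(String[] args) {
        FakeStreamer streamer = new FakeStreamer();
        FakeContext ctx = new FakeContext();
        try {
            saveValues(streamer, ctx, List.of(new Object()));
        } catch (RuntimeException expected) {
            // The original exception still propagates to the caller.
        }
        System.out.println(streamer.closed + " " + ctx.closed);
    }
}
```

Closing the context in the catch (rather than in finally) matches the ticket's wording that it should be closed "in case of an unexpected exception", while the streamer is still closed on every path.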



--
This message was sent by Atlassian Jira
(v8.3.4#803005)