Posted to issues@carbondata.apache.org by "xubo245 (JIRA)" <ji...@apache.org> on 2018/04/23 12:41:00 UTC

[jira] [Commented] (CARBONDATA-2382) OutOfMemoryError when using search mode in ConcurrentQueryBenchmark

    [ https://issues.apache.org/jira/browse/CARBONDATA-2382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16448069#comment-16448069 ] 

xubo245 commented on CARBONDATA-2382:
-------------------------------------

I analyzed this issue with Jacky. It happens because the default value of org.apache.carbondata.core.constants.CarbonCommonConstants#CARBON_SEARCH_MODE_SCAN_THREAD is -1, which means there is no limit on the number of scan threads per worker. Each concurrent scan task allocates its own heap buffer (note the ByteBuffer.allocate frames in the stack trace below), so heap usage grows with the number of concurrent tasks until the worker runs out of memory.
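
As a rough illustration of the failure mode (plain JDK code, not CarbonData internals; the chunk size and task count are made-up numbers):

{code:java}
// Illustrative only: an unbounded pool mirrors the -1 default. Each scan task
// holds a heap buffer while it runs, so peak heap usage is proportional to the
// number of concurrently running tasks rather than being bounded.
import java.nio.ByteBuffer;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class UnboundedScanDemo {
  public static void main(String[] args) {
    ExecutorService pool = Executors.newCachedThreadPool(); // no thread cap
    CountDownLatch release = new CountDownLatch(1);
    for (int i = 0; i < 1000; i++) {
      pool.submit(() -> {
        // Hypothetical 64 MB column chunk, kept live while the task runs.
        ByteBuffer chunk = ByteBuffer.allocate(64 * 1024 * 1024);
        release.await();
        return chunk.capacity();
      });
    }
    // ~64 GB of live buffers before any task finishes: OutOfMemoryError on
    // any normal heap. A fixed-size pool would bound this.
    release.countDown();
    pool.shutdown();
  }
}
{code}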

Fixed this issue in: https://github.com/apache/carbondata/pull/2205
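
For reference, a minimal sketch of capping the scan thread count through CarbonProperties instead of leaving the -1 default (this assumes the standard CarbonProperties API; the chosen thread count is only an example, and the actual fix is in the PR above):

{code:java}
// A minimal sketch, assuming CarbonProperties.addProperty(String, String):
// bound the per-worker search mode scan threads before running the benchmark.
import org.apache.carbondata.core.constants.CarbonCommonConstants;
import org.apache.carbondata.core.util.CarbonProperties;

public class BoundSearchModeScanThreads {
  public static void main(String[] args) {
    // Example value: one scan thread per available core, instead of unlimited.
    int scanThreads = Runtime.getRuntime().availableProcessors();
    CarbonProperties.getInstance().addProperty(
        CarbonCommonConstants.CARBON_SEARCH_MODE_SCAN_THREAD,
        String.valueOf(scanThreads));
    // ... then start the CarbonSession in search mode and run the benchmark.
  }
}
{code}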

> OutOfMemoryError when using search mode in ConcurrentQueryBenchmark
> -------------------------------------------------------------------
>
>                 Key: CARBONDATA-2382
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2382
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: xubo245
>            Assignee: xubo245
>            Priority: Major
>
> {code:java}
> 18/04/23 16:24:49 WARN Utils: Your hostname, ecs-909c resolves to a loopback address: 127.0.0.1; using 192.168.0.206 instead (on interface eth0)
> 18/04/23 16:24:49 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
> 18/04/23 16:24:50 INFO SparkContext: Submitted application: totalNum: 10000000	threadNum: 16	taskNum: 100	resultIsEmpty: true	file path: /tmp/carbondata	runInLocal: true	generateFile: true	deleteFile: false
> 18/04/23 16:24:50 INFO SecurityManager: Changing view acls to: root
> 18/04/23 16:24:50 INFO SecurityManager: Changing modify acls to: root
> 18/04/23 16:24:50 INFO SecurityManager: Changing view acls groups to: 
> 18/04/23 16:24:50 INFO SecurityManager: Changing modify acls groups to: 
> 18/04/23 16:24:50 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
> 18/04/23 16:24:50 INFO Utils: Successfully started service 'sparkDriver' on port 35444.
> 18/04/23 16:24:50 INFO SparkEnv: Registering MapOutputTracker
> 18/04/23 16:24:50 INFO SparkEnv: Registering BlockManagerMaster
> 18/04/23 16:24:50 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
> 18/04/23 16:24:50 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
> 18/04/23 16:24:50 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-186ff215-cd17-4b23-9fe0-c55e169f5d87
> 18/04/23 16:24:50 INFO MemoryStore: MemoryStore started with capacity 1957.8 MB
> 18/04/23 16:24:50 INFO SparkEnv: Registering OutputCommitCoordinator
> 18/04/23 16:24:50 INFO Utils: Successfully started service 'SparkUI' on port 4040.
> 18/04/23 16:24:50 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.0.206:4040
> 18/04/23 16:24:50 INFO Executor: Starting executor ID driver on host localhost
> 18/04/23 16:24:50 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46504.
> 18/04/23 16:24:50 INFO NettyBlockTransferService: Server created on 192.168.0.206:46504
> 18/04/23 16:24:50 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
> 18/04/23 16:24:50 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.0.206, 46504, None)
> 18/04/23 16:24:50 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.0.206:46504 with 1957.8 MB RAM, BlockManagerId(driver, 192.168.0.206, 46504, None)
> 18/04/23 16:24:50 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.0.206, 46504, None)
> 18/04/23 16:24:50 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.0.206, 46504, None)
> 18/04/23 16:24:51 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/huawei/xubo/git/carbondata1/spark-warehouse').
> 18/04/23 16:24:51 INFO SharedState: Warehouse path is 'file:/huawei/xubo/git/carbondata1/spark-warehouse'.
> 18/04/23 16:24:51 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
> 18/04/23 16:24:52 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 18/04/23 16:24:52 INFO ObjectStore: ObjectStore, initialize called
> 18/04/23 16:24:52 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
> 18/04/23 16:24:52 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
> 18/04/23 16:24:53 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
> 18/04/23 16:24:54 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
> 18/04/23 16:24:54 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
> 18/04/23 16:24:55 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
> 18/04/23 16:24:55 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
> 18/04/23 16:24:55 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
> 18/04/23 16:24:55 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
> 18/04/23 16:24:55 INFO ObjectStore: Initialized ObjectStore
> 18/04/23 16:24:55 INFO HiveMetaStore: Added admin role in metastore
> 18/04/23 16:24:55 INFO HiveMetaStore: Added public role in metastore
> 18/04/23 16:24:55 INFO HiveMetaStore: No user is added in admin role, since config is empty
> 18/04/23 16:24:55 INFO HiveMetaStore: 0: get_all_databases
> 18/04/23 16:24:55 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_all_databases	
> 18/04/23 16:24:55 INFO HiveMetaStore: 0: get_functions: db=default pat=*
> 18/04/23 16:24:55 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_functions: db=default pat=*	
> 18/04/23 16:24:55 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
> 18/04/23 16:24:55 INFO SessionState: Created local directory: /tmp/a2a6b2d9-89b8-49ed-a1e4-510d33e89963_resources
> 18/04/23 16:24:55 INFO SessionState: Created HDFS directory: /tmp/hive/root/a2a6b2d9-89b8-49ed-a1e4-510d33e89963
> 18/04/23 16:24:55 INFO SessionState: Created local directory: /tmp/root/a2a6b2d9-89b8-49ed-a1e4-510d33e89963
> 18/04/23 16:24:55 INFO SessionState: Created HDFS directory: /tmp/hive/root/a2a6b2d9-89b8-49ed-a1e4-510d33e89963/_tmp_space.db
> 18/04/23 16:24:55 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/huawei/xubo/git/carbondata1/spark-warehouse
> 18/04/23 16:24:55 INFO HiveMetaStore: 0: get_database: default
> 18/04/23 16:24:55 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_database: default	
> 18/04/23 16:24:55 INFO HiveMetaStore: 0: get_database: global_temp
> 18/04/23 16:24:55 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_database: global_temp	
> 18/04/23 16:24:55 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
> 18/04/23 16:24:56 INFO SessionState: Created local directory: /tmp/fe452d60-a43e-4edf-8945-600c341bb02c_resources
> 18/04/23 16:24:56 INFO SessionState: Created HDFS directory: /tmp/hive/root/fe452d60-a43e-4edf-8945-600c341bb02c
> 18/04/23 16:24:56 INFO SessionState: Created local directory: /tmp/root/fe452d60-a43e-4edf-8945-600c341bb02c
> 18/04/23 16:24:56 INFO SessionState: Created HDFS directory: /tmp/hive/root/fe452d60-a43e-4edf-8945-600c341bb02c/_tmp_space.db
> 18/04/23 16:24:56 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/huawei/xubo/git/carbondata1/spark-warehouse
> 18/04/23 16:24:56 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
> Environment information:
> spark.master:	local[8]	
> spark.driver.cores:	default value	
> spark.driver.memory:	default value	
> spark.executor.cores:	default value	
> spark.executor.memory:	default value	
> spark.executor.instances:	default value	
> SPARK_VERSION:2.2.1	
> CARBONDATA_VERSION:1.4.0-SNAPSHOT	
> Parameters information:
> totalNum: 10000000	threadNum: 16	taskNum: 100	resultIsEmpty: true	file path: /tmp/carbondata	runInLocal: true	generateFile: true	deleteFile: false
> Start generate 10000000 records, schema: StructType(StructField(id,StringType,false), StructField(city,StringType,false), StructField(country,StringType,false), StructField(planet,StringType,false), StructField(m1,ShortType,false), StructField(m2,IntegerType,false), StructField(m3,LongType,false), StructField(m4,DoubleType,false), StructField(m5,DecimalType(30,10),false))
> Num10000000_comparetest_parquet completed, time: 33.757 sec
> 18/04/23 16:25:37 AUDIT CarbonCreateTableCommand: [ecs-909c][root][Thread-1]Creating Table with Database name [default] and Table name [num10000000_comparetest_carbonv3]
> 18/04/23 16:25:38 AUDIT CarbonCreateTableCommand: [ecs-909c][root][Thread-1]Table created with Database name [default] and Table name [num10000000_comparetest_carbonv3]
> 18/04/23 16:25:38 AUDIT CarbonDataRDDFactory$: [ecs-909c][root][Thread-1]Data load request has been received for table default.num10000000_comparetest_carbonv3
> 18/04/23 16:26:36 AUDIT CarbonDataRDDFactory$: [ecs-909c][root][Thread-1]Data load is successful for default.num10000000_comparetest_carbonv3
> Num10000000_comparetest_carbonV3 completed, time: 59.577 sec
> Start running queries for Num10000000_comparetest_carbonV3...
> Min: min time	Max: max time	90%: 90% time	99%: 99% time	Avg: average time	Count: number of result	Query X: running different query sql	Result: show it when ResultIsEmpty is false	Total execute time: total runtime
> 18/04/23 16:27:12 ERROR Inbox: Ignoring error
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
> 	at org.apache.carbondata.core.scan.result.iterator.AbstractSearchModeResultIterator.hasNext(AbstractSearchModeResultIterator.java:75)
> 	at org.apache.carbondata.core.scan.result.iterator.ChunkRowIterator.<init>(ChunkRowIterator.java:40)
> 	at org.apache.carbondata.hadoop.CarbonRecordReader.initialize(CarbonRecordReader.java:84)
> 	at org.apache.carbondata.store.worker.SearchRequestHandler.handleRequest(SearchRequestHandler.java:95)
> 	at org.apache.carbondata.store.worker.SearchRequestHandler.handleSearch(SearchRequestHandler.java:62)
> 	at org.apache.spark.search.Searcher$$anonfun$receiveAndReply$1.applyOrElse(Searcher.scala:41)
> 	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105)
> 	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
> 	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
> 	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:216)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
> 	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> 	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> 	at org.apache.carbondata.core.scan.result.iterator.AbstractSearchModeResultIterator.hasNext(AbstractSearchModeResultIterator.java:71)
> 	... 12 more
> Caused by: java.lang.OutOfMemoryError: Java heap space
> 	at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> 	at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> 	at org.apache.carbondata.core.datastore.impl.FileReaderImpl.readByteBuffer(FileReaderImpl.java:200)
> 	at org.apache.carbondata.core.datastore.chunk.reader.measure.v3.CompressedMeasureChunkFileBasedReaderV3.readRawMeasureChunksInGroup(CompressedMeasureChunkFileBasedReaderV3.java:157)
> 	at org.apache.carbondata.core.datastore.chunk.reader.measure.AbstractMeasureChunkReaderV2V3Format.readRawMeasureChunks(AbstractMeasureChunkReaderV2V3Format.java:77)
> 	at org.apache.carbondata.core.indexstore.blockletindex.BlockletDataRefNode.readMeasureChunks(BlockletDataRefNode.java:147)
> 	at org.apache.carbondata.core.scan.scanner.impl.BlockletFilterScanner.executeFilter(BlockletFilterScanner.java:278)
> 	at org.apache.carbondata.core.scan.scanner.impl.BlockletFilterScanner.scanBlocklet(BlockletFilterScanner.java:101)
> 	at org.apache.carbondata.core.scan.processor.BlockScan.scan(BlockScan.java:69)
> 	at org.apache.carbondata.core.scan.result.iterator.AbstractSearchModeResultIterator$1.call(AbstractSearchModeResultIterator.java:59)
> 	at org.apache.carbondata.core.scan.result.iterator.AbstractSearchModeResultIterator$1.call(AbstractSearchModeResultIterator.java:53)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	... 3 more
> 18/04/23 16:27:12 ERROR Inbox: Ignoring error
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
> 	at org.apache.carbondata.core.scan.result.iterator.AbstractSearchModeResultIterator.hasNext(AbstractSearchModeResultIterator.java:75)
> 	at org.apache.carbondata.core.scan.result.iterator.ChunkRowIterator.<init>(ChunkRowIterator.java:40)
> 	at org.apache.carbondata.hadoop.CarbonRecordReader.initialize(CarbonRecordReader.java:84)
> 	at org.apache.carbondata.store.worker.SearchRequestHandler.handleRequest(SearchRequestHandler.java:95)
> 	at org.apache.carbondata.store.worker.SearchRequestHandler.handleSearch(SearchRequestHandler.java:62)
> 	at org.apache.spark.search.Searcher$$anonfun$receiveAndReply$1.applyOrElse(Searcher.scala:41)
> 	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105)
> 	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
> 	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
> 	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:216)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
> 	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> 	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> 	at org.apache.carbondata.core.scan.result.iterator.AbstractSearchModeResultIterator.hasNext(AbstractSearchModeResultIterator.java:71)
> 	... 12 more
> Caused by: java.lang.OutOfMemoryError: Java heap space
> 18/04/23 16:27:27 ERROR Inbox: Ignoring error
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
> 	at org.apache.carbondata.core.scan.result.iterator.AbstractSearchModeResultIterator.hasNext(AbstractSearchModeResultIterator.java:75)
> 	at org.apache.carbondata.core.scan.result.iterator.ChunkRowIterator.<init>(ChunkRowIterator.java:40)
> 	at org.apache.carbondata.hadoop.CarbonRecordReader.initialize(CarbonRecordReader.java:84)
> 	at org.apache.carbondata.store.worker.SearchRequestHandler.handleRequest(SearchRequestHandler.java:95)
> 	at org.apache.carbondata.store.worker.SearchRequestHandler.handleSearch(SearchRequestHandler.java:62)
> 	at org.apache.spark.search.Searcher$$anonfun$receiveAndReply$1.applyOrElse(Searcher.scala:41)
> 	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105)
> 	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
> 	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
> 	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:216)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
> 	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> 	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> 	at org.apache.carbondata.core.scan.result.iterator.AbstractSearchModeResultIterator.hasNext(AbstractSearchModeResultIterator.java:71)
> 	... 12 more
> Caused by: java.lang.OutOfMemoryError: Java heap space
> 18/04/23 16:27:27 ERROR CarbonSession: Exception when executing search mode: Exception thrown in awaitResult: , fallback to SparkSQL
> 18/04/23 16:27:27 ERROR CarbonSession: Exception when executing search mode: Exception thrown in awaitResult: , fallback to SparkSQL
> 18/04/23 16:27:28 ERROR CarbonSession: Exception when executing search mode: Exception thrown in awaitResult: , fallback to SparkSQL
> {code}


