Posted to dev@carbondata.apache.org by Liang Chen <ch...@gmail.com> on 2017/07/06 04:28:59 UTC

Re: Why is slower that build ChunkRowIterator object in presto plugin of carbondata?

Hi

In Spark-shell, you can use the script below:

import org.apache.carbondata.core.util.CarbonProperties
import org.apache.carbondata.core.constants.CarbonCommonConstants

CarbonProperties.getInstance().addProperty(CarbonCommonConstants.ENABLE_VECTOR_READER, "true")
CarbonProperties.getInstance().addProperty(CarbonCommonConstants.ENABLE_OFFHEAP_SORT, "true")
CarbonProperties.getInstance().addProperty(CarbonCommonConstants.ENABLE_UNSAFE_SORT, "true")
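If you want these settings to survive beyond a single session instead of re-running the script each time, they can also be placed in the carbon.properties file that CarbonProperties reads at startup. A sketch, assuming these are the property keys the constants above resolve to (worth verifying against your CarbonData version):

carbon.enable.vector.reader=true
enable.offheap.sort=true
enable.unsafe.sort=true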

Regards
Liang

suizhe007 wrote
> Hi,
>    Data size is 1024 MB (the default). I load data into the Carbon table
> through Spark-shell. The Presto plugin of Carbon supports only the
> "carbondata-store" property. How do I add the properties you listed?





--
View this message in context: http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/Why-is-slower-that-build-ChunkRowIterator-object-in-presto-plugin-of-carbondata-tp17356p17427.html
Sent from the Apache CarbonData Dev Mailing List archive at Nabble.com.

Re: Why is slower that build ChunkRowIterator object in presto plugin of carbondata?

Posted by Bhavya Aggarwal <bh...@knoldus.com>.
Which configuration properties are you using for Presto, and which
version are you using for testing? We are using the configuration below for
Presto.

*Master*
*config.properties*
coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8086
query.max-memory=5GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=http://<ip-address>:8086

*jvm.config*
-server
-Xmx16G
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+UseGCOverheadLimit
-XX:+ExplicitGCInvokesConcurrent
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
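A note on how these memory limits interact, assuming standard Presto semantics (worth checking against the docs for your Presto version): query.max-memory is the total distributed memory a single query may use across the whole cluster, while query.max-memory-per-node caps that query's share on each node. With the coordinator also scheduling work on itself (node-scheduler.include-coordinator=true) and two slaves:

3 nodes x 1GB (query.max-memory-per-node) = 3GB effective per-query cap
query.max-memory = 5GB cluster-wide cap (not reachable with 3 nodes)

So in this setup the per-node limit is the binding one.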

*Slave 1*
*config.properties*
coordinator=false
http-server.http.port=8086
query.max-memory=5GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=http://<ip-address>:8086

*jvm.config*
-server
-Xmx16G
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+UseGCOverheadLimit
-XX:+ExplicitGCInvokesConcurrent
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p

*Slave 2*
*config.properties*
coordinator=false
http-server.http.port=8086
query.max-memory=5GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=http://<ip-address>:8086

*jvm.config*
-server
-Xmx16G
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+UseGCOverheadLimit
-XX:+ExplicitGCInvokesConcurrent
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
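For completeness, the connector side is wired up through a catalog file on each node. Since the thread mentions that the plugin takes only the "carbondata-store" property, a minimal etc/catalog/carbondata.properties might look like the sketch below (the store path is an assumption, adjust it to your HDFS layout):

connector.name=carbondata
carbondata-store=hdfs://<namenode-host>:<port>/user/hive/carbon.store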



Regards
Bhavya

On Thu, Jul 6, 2017 at 9:58 AM, Liang Chen <ch...@gmail.com> wrote:

> Hi
>
> In Spark-shell, you can use the script below:
>
> import org.apache.carbondata.core.util.CarbonProperties
> import org.apache.carbondata.core.constants.CarbonCommonConstants
>
> CarbonProperties.getInstance().addProperty(CarbonCommonConstants.ENABLE_VECTOR_READER, "true")
> CarbonProperties.getInstance().addProperty(CarbonCommonConstants.ENABLE_OFFHEAP_SORT, "true")
> CarbonProperties.getInstance().addProperty(CarbonCommonConstants.ENABLE_UNSAFE_SORT, "true")
>
> Regards
> Liang
>
> suizhe007 wrote
> > Hi,
> > >    Data size is 1024 MB (the default). I load data into the Carbon table
> > > through Spark-shell. The Presto plugin of Carbon supports only the
> > > "carbondata-store" property. How do I add the properties you listed?
>
>
>
>
>
> --
> View this message in context: http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/Why-is-slower-that-build-ChunkRowIterator-object-in-presto-plugin-of-carbondata-tp17356p17427.html
> Sent from the Apache CarbonData Dev Mailing List archive at Nabble.com.
>