Posted to user@hive.apache.org by Mich Talebzadeh <mi...@gmail.com> on 2016/10/08 19:48:31 UTC

Hive in-memory offerings in forthcoming releases

Hi,

Is there any documentation on proposed new Apache Hive releases that are
going to offer an in-memory database (IMDB), either in the form of LLAP or
built on LLAP?

I would love to see something like the SAP ASE IMDB or Oracle 12c in-memory
offerings with Hive as well.

Regards,

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.

Re: Hive in-memory offerings in forthcoming releases

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks.

It will certainly be the icing on the cake if Hive gets this add-on.

Now, I know Oracle 12c, so perhaps the following will help. The Oracle
Database In-Memory option is not a separate database. It allows the user to
store a *copy* of selected tables, or partitions, in *columnar* format
in memory within the Oracle Database memory space. All tables are still
present in row format, and all copies on storage are in row format. For
tables/partitions that the user elects to store 'in memory', an *additional*
copy is kept, in columnar format, purely in memory. *These columnar copies
are not logged, nor are they ever persisted to disk.*
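
For reference, enabling the column store in Oracle 12c is a per-table (or
per-partition) declaration. A minimal sketch; the table and partition names
here are hypothetical, and the memory size is purely illustrative:

```sql
-- Reserve memory for the In-Memory column store (takes effect after restart).
ALTER SYSTEM SET inmemory_size = 2G SCOPE=SPFILE;

-- Ask Oracle to keep an additional columnar copy of this table in memory.
ALTER TABLE sales INMEMORY;

-- Or only selected partitions, with a population priority:
ALTER TABLE sales MODIFY PARTITION sales_2016 INMEMORY PRIORITY HIGH;

-- Opt a table back out; the row-format master copy on disk is untouched.
ALTER TABLE sales NO INMEMORY;
```

Queries themselves need no changes; the optimizer decides per query whether
to read the columnar copy or the row store.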

The columnar copies may be loaded into memory either at database startup or
on first access. There are mechanisms (I believe similar to Materialized View
snapshots) that keep the in-memory copy in sync with the underlying
row-format 'master' copy. The optimizer is aware of the presence and currency
of the in-memory copies and transparently uses them for any analytical-style
queries that can benefit from the vastly faster processing speed. This is all
completely transparent to applications (which is what Hive should be doing as
well).



This part is interesting. The primary use case for this capability is to
accelerate the analytics part of mixed OLTP and analytical workloads by
eliminating the need *for most of the analytics indexes*. This speeds up
analytical queries by a huge amount.
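
On the Hive side, the closest thing today is LLAP itself. A hedged sketch of
the session settings one would use on a Hive 2.x cluster with LLAP daemons
running (property names as in the Hive configuration docs; the table is
hypothetical):

```sql
-- Route query fragments to the LLAP daemons instead of plain Tez containers.
SET hive.execution.engine=tez;
SET hive.execution.mode=llap;
-- 'all' tries to run every fragment inside LLAP where possible.
SET hive.llap.execution.mode=all;

-- An analytical scan; repeated runs benefit from LLAP's in-memory
-- columnar data cache rather than re-reading from HDFS or blob storage.
SELECT region, SUM(amount) FROM sales GROUP BY region;
```

Unlike the Oracle option, LLAP is a cache plus execution layer rather than a
declared in-memory copy, but the effect for analytical queries is similar.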



HTH




Dr Mich Talebzadeh



On 10 October 2016 at 17:00, Alan Gates <al...@gmail.com> wrote:

> Hive doesn’t usually publish long term roadmaps.
>
> I am not familiar with either SAP ASE or Oracle 12c so I can’t say whether
> Hive is headed in that direction or not.
>
> We see LLAP as very important for speeding up Hive processing, especially
> in the cloud where fetches from blob storage are very expensive.  As an
> example, see how HDInsight on Microsoft’s Azure cloud is already using
> LLAP.  At the moment LLAP is read only, so an obvious next step here is
> adding write capabilities (see https://issues.apache.org/
> jira/browse/HIVE-14535 for some thoughts on how this might work).
>
> I don’t know if this answers your question or not.
>
> Alan.
>

Re: Hive in-memory offerings in forthcoming releases

Posted by Alan Gates <al...@gmail.com>.
Hive doesn’t usually publish long term roadmaps.  

I am not familiar with either SAP ASE or Oracle 12c so I can’t say whether Hive is headed in that direction or not.

We see LLAP as very important for speeding up Hive processing, especially in the cloud where fetches from blob storage are very expensive.  As an example, see how HDInsight on Microsoft’s Azure cloud is already using LLAP.  At the moment LLAP is read only, so an obvious next step here is adding write capabilities (see https://issues.apache.org/jira/browse/HIVE-14535 for some thoughts on how this might work).

I don’t know if this answers your question or not.

Alan.
