Posted to commits@kylin.apache.org by li...@apache.org on 2019/04/15 14:22:31 UTC

svn commit: r1857583 [8/8] - in /kylin/site: ./ about/ blog/ blog/2015/01/25/introduce-data-model/ blog/2015/06/10/release-v0.7.1-incubating/ blog/2015/08/13/kylin-dictionary/ blog/2015/08/15/fast-cubing/ blog/2015/09/06/release-v1.0-incubating/ blog/2...

Modified: kylin/site/feed.xml
URL: http://svn.apache.org/viewvc/kylin/site/feed.xml?rev=1857583&r1=1857582&r2=1857583&view=diff
==============================================================================
--- kylin/site/feed.xml (original)
+++ kylin/site/feed.xml Mon Apr 15 14:22:27 2019
@@ -19,82 +19,156 @@
     <description>Apache Kylin Home</description>
     <link>http://kylin.apache.org/</link>
     <atom:link href="http://kylin.apache.org/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Sun, 14 Apr 2019 06:59:26 -0700</pubDate>
-    <lastBuildDate>Sun, 14 Apr 2019 06:59:26 -0700</lastBuildDate>
+    <pubDate>Mon, 15 Apr 2019 07:02:01 -0700</pubDate>
+    <lastBuildDate>Mon, 15 Apr 2019 07:02:01 -0700</lastBuildDate>
     <generator>Jekyll v2.5.3</generator>
     
       <item>
-        <title>Apache Kylin v2.6.0 Release Announcement</title>
-        <description>&lt;p&gt;The Apache Kylin community is pleased to announce the release of Apache Kylin v2.6.0.&lt;/p&gt;
+        <title>Real-time Streaming Design in Apache Kylin</title>
+        <description>&lt;h2 id=&quot;why-build-real-time-streaming-in-kylin&quot;&gt;Why Build Real-time Streaming in Kylin&lt;/h2&gt;
+&lt;p&gt;The real-time streaming feature in Kylin 3.0 was contributed by the eBay big data team. We built real-time streaming for the following reasons:&lt;/p&gt;
 
-&lt;p&gt;Apache Kylin is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Big Data supporting extremely large datasets.&lt;/p&gt;
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;Millisecond Data Preparation Delay  &lt;br /&gt;
+Kylin provides sub-second query latency on extremely large datasets; the underlying magic is the precalculated cube. But cube building often takes a long time (usually hours for large datasets), and in some cases analysts need real-time data for their analysis. So we want to provide real-time OLAP, meaning data can be queried immediately after it is produced to the system.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Support Lambda Architecture  &lt;br /&gt;
+Real-time data is often unreliable, for many reasons: the upstream processing system may have a bug, the data may need to be corrected after some time, and so on. So we need to support a lambda architecture, which means the cube can be built from the streaming source (like Kafka) and the historical cube data can be refreshed from a batch source (like Hive).&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Fewer MR Jobs and HBase Tables   &lt;br /&gt;
+Since Kylin 1.6, the community has provided a streaming solution that uses MapReduce to consume Kafka data and then does batch cube building. It can provide minute-level data preparation latency, but to guarantee that latency you need to schedule the MR jobs very frequently (every 5 minutes or even less), which creates too many Hadoop jobs and small HBase tables and dramatically increases the Hadoop cluster’s load.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-&lt;p&gt;This is a major release after 2.5.0, including many enhancements. All of the changes can be found in the &lt;a href=&quot;https://kylin.apache.org/docs/release_notes.html&quot;&gt;release notes&lt;/a&gt;. Here just highlight the major ones:&lt;/p&gt;
+&lt;h2 id=&quot;architecture&quot;&gt;Architecture&lt;/h2&gt;
 
-&lt;h3 id=&quot;sdk-for-jdbc-sources&quot;&gt;SDK for JDBC sources&lt;/h3&gt;
-&lt;p&gt;Apache Kylin has already supported several data sources like Amazon Redshift, SQL Server through JDBC. &lt;br /&gt;
-To help developers handle SQL dialect differences and easily implement a new data source through JDBC, Kylin provides a new data source SDK with APIs for:&lt;br /&gt;
-* Synchronize metadata and data from JDBC source&lt;br /&gt;
-* Build cube from JDBC source&lt;br /&gt;
-* Query pushdown to JDBC source engine when cube is unmatched&lt;/p&gt;
+&lt;p&gt;&lt;img src=&quot;/images/blog/rt_stream_architecture.png&quot; alt=&quot;Kylin RT Streaming Architecture&quot; /&gt;&lt;/p&gt;
 
-&lt;p&gt;Check KYLIN-3552 for more.&lt;/p&gt;
+&lt;p&gt;The blue rectangles are the streaming components added to Kylin’s current architecture; they are responsible for ingesting data from the streaming source and serving queries over real-time data.&lt;/p&gt;
 
-&lt;h3 id=&quot;memcached-as-distributed-cache&quot;&gt;Memcached as distributed cache&lt;/h3&gt;
-&lt;p&gt;In the past, query caches are not efficiently used in Kylin due to two aspects: aggressive cache expiration strategy and local cache. &lt;br /&gt;
-Because of the aggressive cache expiration strategy, useful caches are often cleaned up unnecessarily. &lt;br /&gt;
-Because query caches are stored in local servers, they cannot be shared between servers. &lt;br /&gt;
-And because of the size limitation of local cache, not all useful query results can be cached.&lt;/p&gt;
+&lt;p&gt;We divide the unbounded incoming streaming data into 3 stages; data in every stage is immediately queryable.&lt;/p&gt;
 
-&lt;p&gt;To deal with these shortcomings, we change the query cache expiration strategy by signature checking and introduce the memcached as Kylin’s distributed cache so that Kylin servers are able to share cache between servers. &lt;br /&gt;
-And it’s easy to add memcached servers to scale out distributed cache. With enough memcached servers, we can cached things as much as possible. &lt;br /&gt;
-Then we also introduce segment level query cache which can not only speed up query but also reduce the rpcs to HBase. &lt;br /&gt;
-The related tasks are KYLIN-2895, KYLIN-2894, KYLIN-2896, KYLIN-2897, KYLIN-2898, KYLIN-2899.&lt;/p&gt;
+&lt;p&gt;&lt;img src=&quot;/images/blog/rt_stream_stages.png&quot; alt=&quot;Kylin RT Streaming stages&quot; /&gt;&lt;/p&gt;
 
-&lt;h3 id=&quot;forkjoinpool-for-fast-cubing&quot;&gt;ForkJoinPool for fast cubing&lt;/h3&gt;
-&lt;p&gt;In the past, fast cubing uses split threads, task threads and main thread to do the cube building, there is complex join and error handling logic.&lt;/p&gt;
+&lt;h3 id=&quot;components&quot;&gt;Components&lt;/h3&gt;
 
-&lt;p&gt;The new implement leverages the ForkJoinPool from JDK, the event split logic is handled in&lt;br /&gt;
-main thread. Cuboid task and sub-tasks are handled in fork join pool, cube results are collected&lt;br /&gt;
-async and can be write to output earlier. Check KYLIN-2932 for more.&lt;/p&gt;
+&lt;p&gt;&lt;img src=&quot;/images/blog/rt_stream_components.png&quot; alt=&quot;Kylin RT Streaming Components&quot; /&gt;&lt;/p&gt;
 
-&lt;h3 id=&quot;improve-hllcounter-performance&quot;&gt;Improve HLLCounter performance&lt;/h3&gt;
-&lt;p&gt;In the past, the way to create HLLCounter and to compute harmonic mean are not efficient.&lt;/p&gt;
+&lt;p&gt;Streaming Receiver: responsible for ingesting data from the streaming data source and serving queries over real-time data.&lt;/p&gt;
 
-&lt;p&gt;The new implement improve the HLLCounter creation by copy register from another HLLCounter instead of merge. To compute harmonic mean in the HLLCSnapshot, it does the enhancement by &lt;br /&gt;
-* using table to cache all 1/2^r  without computing on the fly&lt;br /&gt;
-* remove floating addition by using integer addition in the bigger loop&lt;br /&gt;
-* remove branch, e.g. needn’t checking whether registers[i] is zero or not, although this is minor improvement.&lt;/p&gt;
+&lt;p&gt;Streaming Coordinator: responsible for coordination work; for example, when a new streaming cube is onboarded, the coordinator decides which streaming receivers it should be assigned to.&lt;/p&gt;
 
-&lt;p&gt;Check KYLIN-3656 for more.&lt;/p&gt;
+&lt;p&gt;Metadata Store: stores streaming-related metadata, such as cube assignment information and cube build state information.&lt;/p&gt;
 
-&lt;h3 id=&quot;improve-cuboid-recommendation-algorithm&quot;&gt;Improve Cuboid Recommendation Algorithm&lt;/h3&gt;
-&lt;p&gt;In the past, to add cuboids which are not prebuilt, the cube planner turns to mandatory cuboids which are selected if its rollup row count is above some threshold. &lt;br /&gt;
-There are two shortcomings:&lt;br /&gt;
-* The way to estimate the rollup row count is not good&lt;br /&gt;
-* It’s hard to determine the threshold of rollup row count for recommending mandatory cuboids&lt;/p&gt;
+&lt;p&gt;Query Engine: extends the existing query engine to support querying real-time data from the streaming receivers.&lt;/p&gt;
 
-&lt;p&gt;The new implement improves the way to estimate the row count of un-prebuilt cuboids by rollup ratio rather than exact rollup row count. &lt;br /&gt;
-With better estimated row counts for un-prebuilt cuboids, the cost-based cube planner algorithm will decide which cuboid to be built or not and the threshold for previous mandatory cuboids is not needed. &lt;br /&gt;
-By this improvement, we don’t need the threshold for mandatory cuboids recommendation, and mandatory cuboids can only be manually set and will not be recommended. Check KYLIN-3540 for more.&lt;/p&gt;
+&lt;p&gt;Build Engine: extends the existing build engine to support building the full cube from real-time data.&lt;/p&gt;
 
-&lt;p&gt;&lt;strong&gt;Download&lt;/strong&gt;&lt;/p&gt;
+&lt;h3 id=&quot;how-streaming-cube-engine-works&quot;&gt;How Streaming Cube Engine Works&lt;/h3&gt;
 
-&lt;p&gt;To download Apache Kylin v2.6.0 source code or binary package, visit the &lt;a href=&quot;http://kylin.apache.org/download&quot;&gt;download&lt;/a&gt; page.&lt;/p&gt;
+&lt;p&gt;&lt;img src=&quot;/images/blog/rt_stream_how_build_work.png&quot; alt=&quot;Kylin RT Streaming How Build Works&quot; /&gt;&lt;/p&gt;
 
-&lt;p&gt;&lt;strong&gt;Upgrade&lt;/strong&gt;&lt;/p&gt;
+&lt;ol&gt;
+  &lt;li&gt;The Coordinator asks the streaming source for all partitions of the cube.&lt;/li&gt;
+  &lt;li&gt;The Coordinator decides which streaming receivers are assigned to consume the streaming data, and asks those receivers to start consuming.&lt;/li&gt;
+  &lt;li&gt;The streaming receivers start to consume and index streaming events.&lt;/li&gt;
+  &lt;li&gt;After some time, the streaming receivers copy the immutable segments from local files to remote HDFS.&lt;/li&gt;
+  &lt;li&gt;A streaming receiver notifies the Coordinator that a segment has been persisted to HDFS.&lt;/li&gt;
+  &lt;li&gt;After all receivers have submitted their segments, the Coordinator submits a cube build job to the Build Engine to trigger a full cube build.&lt;/li&gt;
+  &lt;li&gt;The Build Engine builds all cuboids from the streaming HDFS files.&lt;/li&gt;
+  &lt;li&gt;The Build Engine stores the cuboid data in HBase, and the Coordinator then asks the streaming receivers to remove the related local real-time data.&lt;/li&gt;
+&lt;/ol&gt;
 
-&lt;p&gt;Follow the &lt;a href=&quot;/docs/howto/howto_upgrade.html&quot;&gt;upgrade guide&lt;/a&gt;.&lt;/p&gt;
+&lt;h3 id=&quot;how-streaming-query-engine-works&quot;&gt;How Streaming Query Engine Works&lt;/h3&gt;
 
-&lt;p&gt;&lt;strong&gt;Feedback&lt;/strong&gt;&lt;/p&gt;
+&lt;p&gt;&lt;img src=&quot;/images/blog/rt_stream_how_query_work.png&quot; alt=&quot;Kylin RT Streaming How Query Works&quot; /&gt;&lt;/p&gt;
 
-&lt;p&gt;If you face issue or question, please send mail to Apache Kylin dev or user mailing list: dev@kylin.apache.org , user@kylin.apache.org; Before sending, please make sure you have subscribed the mailing list by dropping an email to dev-subscribe@kylin.apache.org or user-subscribe@kylin.apache.org.&lt;/p&gt;
+&lt;ol&gt;
+  &lt;li&gt;If a query hits a streaming cube, the Query Engine asks the Streaming Coordinator which streaming receivers are assigned to that cube.&lt;/li&gt;
+  &lt;li&gt;The Query Engine sends query requests to the related streaming receivers to query the real-time segments.&lt;/li&gt;
+  &lt;li&gt;The Query Engine sends a query request to HBase to query the historical segments.&lt;/li&gt;
+  &lt;li&gt;The Query Engine aggregates the query results and sends the response back to the client.&lt;/li&gt;
+&lt;/ol&gt;
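+
+&lt;p&gt;As a rough illustration of steps 1 to 4, the scatter/gather could look like the sketch below. This is a simplified sketch only, not Kylin’s actual code; all class and field names here (coordinatorClient, receiverClient, hbaseScanner, ResultAggregator and so on) are hypothetical stand-ins.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Simplified scatter/gather sketch for a query that hits a streaming cube.
+// coordinatorClient, receiverClient, hbaseScanner and the result classes are
+// hypothetical placeholders, not Kylin's real APIs.
+public QueryResult queryStreamingCube(CubeQuery query) {
+    ResultAggregator aggregator = new ResultAggregator(query);
+
+    // 1. Ask the coordinator which receivers currently serve this cube.
+    ReceiverEndpoint[] receivers = coordinatorClient.getAssignedReceivers(query.getCubeName());
+
+    // 2. Scatter the query to every assigned receiver (real-time segments).
+    for (ReceiverEndpoint receiver : receivers) {
+        aggregator.add(receiverClient.query(receiver, query));
+    }
+
+    // 3. Query HBase for the historical (already fully built) segments.
+    aggregator.add(hbaseScanner.query(query));
+
+    // 4. Gather: merge and re-aggregate the partial results, then return them.
+    return aggregator.mergeAndFinalize();
+}
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;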
 
-&lt;p&gt;&lt;em&gt;Great thanks to everyone who contributed!&lt;/em&gt;&lt;/p&gt;
+&lt;h2 id=&quot;detail-design&quot;&gt;Detailed Design&lt;/h2&gt;
+
+&lt;h3 id=&quot;real-time-segment-store&quot;&gt;Real-time Segment Store&lt;/h3&gt;
+&lt;p&gt;Real-time segments are divided by event time. When a new event arrives, the segment it belongs to is calculated from its event time; if that segment does not exist yet, a new one is created.&lt;/p&gt;
+
+&lt;p&gt;A newly created segment is in the ‘Active’ state first. If no further events arrive in the segment within a preconfigured period, its state changes to ‘Immutable’ and it is written to remote HDFS.&lt;/p&gt;
+
+&lt;p&gt;&lt;img src=&quot;/images/blog/rt_stream_rt_segment_state.png&quot; alt=&quot;Kylin RT Streaming Segment State&quot; /&gt;&lt;/p&gt;
+
+&lt;p&gt;Each real-time segment has a memory store. New events first go into the memory store, where they are aggregated; when the memory store size reaches the configured threshold, it is flushed to local disk as a fragment file.&lt;/p&gt;
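+
+&lt;p&gt;The ingest path described above can be summarized by the sketch below. It is an illustration only, with hypothetical names (StreamingEvent, StreamingSegment, MemoryStore, segmentWindowMs, flushThresholdBytes); the real receiver code is more involved.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Simplified ingest sketch (hypothetical names, not the actual receiver code).
+// Assumes a segments map keyed by segment start time, a segment window length
+// and a flush threshold configured elsewhere.
+void onEvent(StreamingEvent event) {
+    // Locate the real-time segment by truncating the event time to the segment window.
+    long segmentStart = event.getEventTime() - (event.getEventTime() % segmentWindowMs);
+    StreamingSegment segment = segments.computeIfAbsent(segmentStart, StreamingSegment::new);
+
+    // Aggregate the event into the segment's in-memory store.
+    MemoryStore store = segment.getMemoryStore();
+    store.aggregate(event);
+
+    // Flush to a local fragment file once the store reaches the configured size.
+    if (store.sizeInBytes() &gt;= flushThresholdBytes) {
+        segment.flushToFragmentFile(store);
+    }
+}
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;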
+
+&lt;p&gt;Not all cuboids are built on the receiver side; only the basic cuboid and some explicitly specified cuboids are built there.&lt;/p&gt;
+
+&lt;p&gt;The data is stored on disk in a columnar format, and when there are too many fragments on disk, the fragment files are merged automatically by a background thread.&lt;/p&gt;
+
+&lt;p&gt;The directory structure on the receiver side looks like this:&lt;/p&gt;
+
+&lt;p&gt;&lt;img src=&quot;/images/blog/rt_stream_dir_structure.png&quot; alt=&quot;Kylin RT Streaming Segment Directory&quot; /&gt;&lt;/p&gt;
+
+&lt;p&gt;To improve query performance, the data is stored in a columnar format, which looks like this:&lt;/p&gt;
+
+&lt;p&gt;&lt;img src=&quot;/images/blog/rt_stream_columnar_format.png&quot; alt=&quot;Kylin RT Streaming Columnar Format&quot; /&gt;&lt;/p&gt;
+
+&lt;p&gt;Each cuboid’s data is stored together, and within each cuboid the data is stored column by column; the metadata is stored in JSON format.&lt;/p&gt;
+
+&lt;p&gt;The dimension data is divided into 3 parts:&lt;/p&gt;
+
+&lt;p&gt;The first part is the dictionary. It exists when the dimension encoding is set to ‘Dict’ in the cube design; by default we use a &lt;a href=&quot;https://kylin.apache.org/blog/2015/08/13/kylin-dictionary/&quot;&gt;trie dictionary&lt;/a&gt; to minimize the memory footprint and preserve the original value order.&lt;/p&gt;
+
+&lt;p&gt;The second part is the dictionary-encoded values. An additional compression mechanism can be applied to these values; since values in the same column are usually similar, the compression ratio is very good.&lt;/p&gt;
+
+&lt;p&gt;The third part is the inverted-index data, stored as Roaring Bitmaps. The following picture shows how the inverted-index data is laid out. There are two formats: the first is the index format for dictionary-encoded dimensions, the second is the index format for the other fixed-length-encoded dimensions.&lt;/p&gt;
+
+&lt;p&gt;&lt;img src=&quot;/images/blog/rt_stream_invertindex_format.png&quot; alt=&quot;Kylin RT Streaming InvertIndex Format&quot; /&gt;&lt;/p&gt;
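+
+&lt;p&gt;For intuition, building such an inverted index with Roaring Bitmaps looks roughly like the sketch below. Only RoaringBitmap comes from the real org.roaringbitmap library; the surrounding class and method names are hypothetical.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import org.roaringbitmap.RoaringBitmap;
+
+// Simplified inverted-index sketch: one bitmap of row ids per dictionary id.
+// Only RoaringBitmap is the real library class; everything else is hypothetical.
+class ColumnInvertedIndex {
+    private final RoaringBitmap[] bitmaps;   // indexed by dictionary id (dense, starting at 0)
+
+    ColumnInvertedIndex(int dictionarySize) {
+        bitmaps = new RoaringBitmap[dictionarySize];
+        for (int i = 0; i != dictionarySize; i++) {
+            bitmaps[i] = new RoaringBitmap();
+        }
+    }
+
+    // Called once per row while a fragment file is written.
+    void addRow(int rowId, int dictId) {
+        bitmaps[dictId].add(rowId);
+    }
+
+    // Rows whose value equals the given dictionary id, e.g. to evaluate a filter.
+    RoaringBitmap rowsForValue(int dictId) {
+        return bitmaps[dictId];
+    }
+}
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;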
+
+&lt;p&gt;Real-time data is stored in compressed form; two compression types are currently supported: Run-Length Encoding (RLE) and LZ4.&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;Use RLE compression for the time-related dimensions and the first dimension&lt;/li&gt;
+  &lt;li&gt;Use LZ4 for the other dimensions by default&lt;/li&gt;
+  &lt;li&gt;Use LZ4 compression for simple-type measures (long, double)&lt;/li&gt;
+  &lt;li&gt;No compression for complex measures (count distinct, TopN, etc.)&lt;/li&gt;
+&lt;/ul&gt;
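+
+&lt;p&gt;As a reminder of why RLE suits sorted, low-cardinality columns such as the time dimension, a minimal run-length encoder over dictionary ids could look like this (an illustrative sketch, not Kylin’s actual encoder):&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Minimal run-length encoding sketch over dictionary ids (illustration only).
+// The output is a flat array of (value, runLength) pairs; for example
+// {5, 5, 5, 7} encodes to {5, 3, 7, 1}.
+static int[] runLengthEncode(int[] dictIds) {
+    int[] pairs = new int[dictIds.length * 2];   // worst case: no repeats at all
+    int out = 0;
+    int i = 0;
+    while (i != dictIds.length) {
+        int value = dictIds[i];
+        int run = 0;
+        while (i != dictIds.length) {
+            if (dictIds[i] != value) {
+                break;
+            }
+            run++;
+            i++;
+        }
+        pairs[out++] = value;
+        pairs[out++] = run;
+    }
+    return java.util.Arrays.copyOf(pairs, out);
+}
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;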
+
+&lt;h3 id=&quot;high-availability&quot;&gt;High Availability&lt;/h3&gt;
+
+&lt;p&gt;Streaming receivers are grouped into replica sets; all receivers in the same replica set share the same assignments, so when one receiver goes down, querying and event consumption are not impacted.&lt;/p&gt;
+
+&lt;p&gt;In each replica set there is a leader responsible for uploading real-time segments to HDFS, and ZooKeeper is used for leader election.&lt;/p&gt;
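+
+&lt;p&gt;A common way to implement such an election with the plain ZooKeeper client is the ephemeral-node pattern sketched below. This is a simplified sketch: the znode path and surrounding names are hypothetical, and production code would also handle retries, parent-node creation and session expiry.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.ZooDefs;
+import org.apache.zookeeper.ZooKeeper;
+
+// Simplified ephemeral-node leader election (the znode path is made up).
+// Whoever creates the ephemeral node first is the leader; when that receiver's
+// session dies, ZooKeeper deletes the node and another receiver can take over.
+boolean tryBecomeLeader(ZooKeeper zk, String replicaSetId, String receiverId)
+        throws KeeperException, InterruptedException {
+    String leaderPath = &quot;/kylin/stream/replica_sets/&quot; + replicaSetId + &quot;/leader&quot;;
+    try {
+        zk.create(leaderPath, receiverId.getBytes(),
+                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+        return true;                   // this receiver is now the leader
+    } catch (KeeperException.NodeExistsException e) {
+        zk.exists(leaderPath, true);   // watch the node so we can retry when the leader dies
+        return false;
+    }
+}
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;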
+
+&lt;h3 id=&quot;failure-recovery&quot;&gt;Failure Recovery&lt;/h3&gt;
+
+&lt;p&gt;We checkpoint periodically on the receiver side, so that when a receiver is restarted its data can be restored correctly.&lt;/p&gt;
+
+&lt;p&gt;The checkpoint has two parts: the first is the streaming source consumption info, which for Kafka is a set of {partition:offset} pairs; the second is the disk state, a set of {segment:fragmentID} pairs recording the maximum fragmentID of each segment at the time of the checkpoint.&lt;/p&gt;
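+
+&lt;p&gt;Conceptually, a local checkpoint therefore looks something like this (an illustrative sketch of the structure just described; the field names and segment names are made up, not the exact on-disk format):&lt;br /&gt;
+&lt;code class=&quot;highlighter-rouge&quot;&gt;
+    {&quot;source_checkpoint&quot;: {&quot;0&quot;: 8946898241, &quot;1&quot;: 8193859535},
+     &quot;disk_checkpoint&quot;: {&quot;20190412080000_20190412090000&quot;: 11, &quot;20190412090000_20190412100000&quot;: 3}}
+&lt;/code&gt;&lt;/p&gt;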
+
+&lt;p&gt;When a receiver is restarted, it reads the latest checkpoint, sets the Kafka consumer to start consuming from the recorded partition offsets, and removes any fragment files on disk whose fragmentID is larger than the checkpointed fragmentID.&lt;/p&gt;
+
+&lt;p&gt;Besides the local checkpoint, we also keep a remote checkpoint so that the state can be restored when a disk crashes. The remote checkpoint is saved into the cube segment metadata after the HBase segment build, for example:&lt;br /&gt;
+&lt;code class=&quot;highlighter-rouge&quot;&gt;
+    &quot;segments&quot;:[{…,
+    	 &quot;stream_source_checkpoint&quot;: {&quot;0&quot;:8946898241, &quot;1&quot;: 8193859535, ...}
+                 },
+	]
+&lt;/code&gt;&lt;br /&gt;
+The checkpoint info is the smallest partition offset on the streaming receivers at the time the real-time segment is sent for the full build.&lt;/p&gt;
+
+&lt;h2 id=&quot;future&quot;&gt;Future&lt;/h2&gt;
+&lt;ul&gt;
+  &lt;li&gt;Star Schema Support&lt;/li&gt;
+  &lt;li&gt;Streaming Receiver on Kubernetes/YARN&lt;/li&gt;
+&lt;/ul&gt;
 </description>
-        <pubDate>Fri, 18 Jan 2019 12:00:00 -0800</pubDate>
-        <link>http://kylin.apache.org/blog/2019/01/18/release-v2.6.0/</link>
-        <guid isPermaLink="true">http://kylin.apache.org/blog/2019/01/18/release-v2.6.0/</guid>
+        <pubDate>Fri, 12 Apr 2019 09:30:00 -0700</pubDate>
+        <link>http://kylin.apache.org/blog/2019/04/12/rt-streaming-design/</link>
+        <guid isPermaLink="true">http://kylin.apache.org/blog/2019/04/12/rt-streaming-design/</guid>
         
         
         <category>blog</category>
@@ -181,6 +255,84 @@ By this improvement, we don’t need
       </item>
     
       <item>
+        <title>Apache Kylin v2.6.0 Release Announcement</title>
+        <description>&lt;p&gt;The Apache Kylin community is pleased to announce the release of Apache Kylin v2.6.0.&lt;/p&gt;
+
+&lt;p&gt;Apache Kylin is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Big Data supporting extremely large datasets.&lt;/p&gt;
+
+&lt;p&gt;This is a major release after 2.5.0, with many enhancements. All of the changes can be found in the &lt;a href=&quot;https://kylin.apache.org/docs/release_notes.html&quot;&gt;release notes&lt;/a&gt;. Here we highlight just the major ones:&lt;/p&gt;
+
+&lt;h3 id=&quot;sdk-for-jdbc-sources&quot;&gt;SDK for JDBC sources&lt;/h3&gt;
+&lt;p&gt;Apache Kylin already supports several data sources, such as Amazon Redshift and SQL Server, through JDBC. &lt;br /&gt;
+To help developers handle SQL dialect differences and easily implement a new JDBC data source, Kylin now provides a data source SDK with APIs to:&lt;br /&gt;
+* Synchronize metadata and data from a JDBC source&lt;br /&gt;
+* Build a cube from a JDBC source&lt;br /&gt;
+* Push queries down to the JDBC source engine when no cube matches&lt;/p&gt;
+
+&lt;p&gt;Check KYLIN-3552 for more.&lt;/p&gt;
+
+&lt;h3 id=&quot;memcached-as-distributed-cache&quot;&gt;Memcached as distributed cache&lt;/h3&gt;
+&lt;p&gt;In the past, query caches were not used efficiently in Kylin for two reasons: an aggressive cache expiration strategy and local-only caching. &lt;br /&gt;
+Because of the aggressive expiration strategy, useful cache entries were often evicted unnecessarily. &lt;br /&gt;
+Because query caches were stored on each local server, they could not be shared between servers. &lt;br /&gt;
+And because of the size limit of the local cache, not all useful query results could be cached.&lt;/p&gt;
+
+&lt;p&gt;To address these shortcomings, we changed the query cache expiration strategy to use signature checking and introduced Memcached as Kylin’s distributed cache, so that Kylin servers can share cached results. &lt;br /&gt;
+It is easy to add Memcached servers to scale out the distributed cache; with enough Memcached servers, we can cache as much as possible. &lt;br /&gt;
+We also introduced a segment-level query cache, which not only speeds up queries but also reduces the RPCs to HBase. &lt;br /&gt;
+The related tasks are KYLIN-2895, KYLIN-2894, KYLIN-2896, KYLIN-2897, KYLIN-2898 and KYLIN-2899.&lt;/p&gt;
+
+&lt;h3 id=&quot;forkjoinpool-for-fast-cubing&quot;&gt;ForkJoinPool for fast cubing&lt;/h3&gt;
+&lt;p&gt;In the past, fast cubing used split threads, task threads and a main thread to build the cube, with complex join and error-handling logic.&lt;/p&gt;
+
+&lt;p&gt;The new implementation leverages the JDK’s ForkJoinPool: the event split logic is handled in the&lt;br /&gt;
+main thread, cuboid tasks and sub-tasks are handled in the fork-join pool, and cube results are collected&lt;br /&gt;
+asynchronously and can be written to the output earlier. Check KYLIN-2932 for more.&lt;/p&gt;
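+
+&lt;p&gt;For readers unfamiliar with the pattern, a fork-join cuboid task looks roughly like the sketch below. It is a simplified illustration with hypothetical names (CuboidTask, Cuboid, buildCuboid); it is not the actual KYLIN-2932 code.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.RecursiveAction;
+
+// Simplified fork/join sketch: a cuboid task builds its own cuboid, then forks one
+// task per child cuboid and joins them; the pool's work stealing keeps cores busy.
+// CuboidTask, Cuboid and buildCuboid() are hypothetical names.
+class CuboidTask extends RecursiveAction {
+    private final Cuboid cuboid;
+
+    CuboidTask(Cuboid cuboid) {
+        this.cuboid = cuboid;
+    }
+
+    @Override
+    protected void compute() {
+        buildCuboid(cuboid);                    // aggregate this cuboid and emit its rows
+        CuboidTask[] children = forkChildren(); // fork one task per child cuboid
+        for (CuboidTask child : children) {
+            child.join();                       // wait for the whole sub-tree to finish
+        }
+    }
+
+    private CuboidTask[] forkChildren() {
+        Cuboid[] childCuboids = cuboid.getChildren();
+        CuboidTask[] tasks = new CuboidTask[childCuboids.length];
+        for (int i = 0; i != childCuboids.length; i++) {
+            tasks[i] = new CuboidTask(childCuboids[i]);
+            tasks[i].fork();
+        }
+        return tasks;
+    }
+}
+
+// Submitted from the main thread, which keeps handling the event split logic:
+// new ForkJoinPool().invoke(new CuboidTask(baseCuboid));
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;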
+
+&lt;h3 id=&quot;improve-hllcounter-performance&quot;&gt;Improve HLLCounter performance&lt;/h3&gt;
+&lt;p&gt;In the past, the way the HLLCounter was created and the way the harmonic mean was computed were not efficient.&lt;/p&gt;
+
+&lt;p&gt;The new implementation speeds up HLLCounter creation by copying the registers from another HLLCounter instead of merging. To compute the harmonic mean in HLLCSnapshot, it makes the following enhancements: &lt;br /&gt;
+* use a lookup table to cache all 1/2^r values instead of computing them on the fly&lt;br /&gt;
+* replace floating-point additions with integer additions in the big loop&lt;br /&gt;
+* remove a branch, e.g. there is no need to check whether registers[i] is zero, although this is a minor improvement.&lt;/p&gt;
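+
+&lt;p&gt;The first two points can be illustrated with a small sketch: count how many registers hold each value using only integer additions, then apply the cached 1/2^r weights once per distinct register value. This is an illustration of the idea only, not Kylin’s HLLCSnapshot code.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Illustration of the 1/2^r lookup table and the integer-addition trick
+// (a sketch of the idea only, not the actual HLLCSnapshot code).
+static double harmonicMeanDenominator(byte[] registers) {
+    // Cache every possible 1/2^r once; HLL register values live in a small range.
+    double[] inversePow2 = new double[64];
+    for (int r = 0; r != 64; r++) {
+        inversePow2[r] = 1.0 / Math.pow(2.0, r);
+    }
+
+    // The big loop uses only integer additions: count how many registers hold each value.
+    int[] countsByValue = new int[64];
+    for (int i = 0; i != registers.length; i++) {
+        countsByValue[registers[i]]++;       // no zero-check branch needed, since 1/2^0 = 1
+    }
+
+    // One floating-point multiply-add per distinct register value, not per register.
+    double denominator = 0.0;
+    for (int r = 0; r != 64; r++) {
+        denominator += countsByValue[r] * inversePow2[r];
+    }
+    return denominator;                      // HLL estimate is roughly alpha * m^2 / denominator
+}
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;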
+
+&lt;p&gt;Check KYLIN-3656 for more.&lt;/p&gt;
+
+&lt;h3 id=&quot;improve-cuboid-recommendation-algorithm&quot;&gt;Improve Cuboid Recommendation Algorithm&lt;/h3&gt;
+&lt;p&gt;In the past, to add cuboids that were not prebuilt, the cube planner turned to mandatory cuboids, which were selected when their rollup row count was above some threshold. &lt;br /&gt;
+This had two shortcomings:&lt;br /&gt;
+* the estimate of the rollup row count was not good&lt;br /&gt;
+* it was hard to determine the rollup row count threshold for recommending mandatory cuboids&lt;/p&gt;
+
+&lt;p&gt;The new implementation estimates the row count of non-prebuilt cuboids using a rollup ratio rather than an exact rollup row count. &lt;br /&gt;
+With better estimated row counts for non-prebuilt cuboids, the cost-based cube planner algorithm decides which cuboids to build, and the previous threshold for mandatory cuboids is no longer needed. &lt;br /&gt;
+With this improvement, mandatory cuboids can only be set manually and are no longer recommended automatically. Check KYLIN-3540 for more.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;Download&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;To download Apache Kylin v2.6.0 source code or binary package, visit the &lt;a href=&quot;http://kylin.apache.org/download&quot;&gt;download&lt;/a&gt; page.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;Upgrade&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;Follow the &lt;a href=&quot;/docs/howto/howto_upgrade.html&quot;&gt;upgrade guide&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;Feedback&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;If you have any issue or question, please send an email to the Apache Kylin dev or user mailing list: dev@kylin.apache.org, user@kylin.apache.org. Before sending, please make sure you have subscribed to the mailing list by dropping an email to dev-subscribe@kylin.apache.org or user-subscribe@kylin.apache.org.&lt;/p&gt;
+
+&lt;p&gt;&lt;em&gt;Great thanks to everyone who contributed!&lt;/em&gt;&lt;/p&gt;
+</description>
+        <pubDate>Fri, 18 Jan 2019 12:00:00 -0800</pubDate>
+        <link>http://kylin.apache.org/blog/2019/01/18/release-v2.6.0/</link>
+        <guid isPermaLink="true">http://kylin.apache.org/blog/2019/01/18/release-v2.6.0/</guid>
+        
+        
+        <category>blog</category>
+        
+      </item>
+    
+      <item>
         <title>How Cisco&#39;s Big Data Team Improved the High Concurrent Throughput of Apache Kylin by 5x</title>
         <description>&lt;h2 id=&quot;background&quot;&gt;Background&lt;/h2&gt;
 
@@ -702,6 +854,70 @@ Graphic 10 Process of Querying Cube&lt;/
       </item>
     
       <item>
+        <title>Apache Kylin v2.5.0 Officially Released</title>
+        <description>&lt;p&gt;The Apache Kylin community is pleased to announce that Apache Kylin v2.5.0 has been officially released.&lt;/p&gt;
+
+&lt;p&gt;Apache Kylin is an open source distributed analytics engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on extremely large datasets.&lt;/p&gt;
+
+&lt;p&gt;This is a new feature release after 2.4.0. It introduces many valuable improvements; for the complete list of changes please see the &lt;a href=&quot;https://kylin.apache.org/docs/release_notes.html&quot;&gt;release notes&lt;/a&gt;. Here we highlight some of the major improvements:&lt;/p&gt;
+
+&lt;h3 id=&quot;all-in-spark--cubing-&quot;&gt;All-in-Spark Cubing Engine&lt;/h3&gt;
+&lt;p&gt;Kylin’s Spark engine now uses Spark to run all the distributed jobs in cube computation, including fetching the distinct values of each dimension, converting cuboid files into HBase HFiles, merging segments, merging dictionaries, and so on. The default Spark configuration has also been tuned so that users get an out-of-the-box experience. The related development tasks are KYLIN-3427, KYLIN-3441, KYLIN-3442.&lt;/p&gt;
+
+&lt;p&gt;Spark job management has also been improved: once a Spark job starts running, you can get the job link on the web console; if you discard the job, Kylin terminates the Spark job immediately to release resources in time; if Kylin is restarted, it can resume from the previous job instead of submitting a new one.&lt;/p&gt;
+
+&lt;h3 id=&quot;mysql--kylin-&quot;&gt;MySQL as Kylin Metadata Storage&lt;/h3&gt;
+&lt;p&gt;In the past, HBase was the only choice for Kylin’s metadata store. In some scenarios HBase is not suitable, for example when multiple HBase clusters are used to give Kylin cross-region high availability; the replicated HBase cluster is read-only and therefore cannot serve as the metadata store. We have now introduced a MySQL metastore to meet this need. This feature is currently in beta. See KYLIN-3488 for more.&lt;/p&gt;
+
+&lt;h3 id=&quot;hybrid-model-&quot;&gt;Hybrid Model Graphical Interface&lt;/h3&gt;
+&lt;p&gt;Hybrid is an advanced model used to assemble multiple cubes. It can be used when a cube’s schema needs to change. This feature had no graphical interface in the past, so only a small number of users knew about it. It is now enabled in the web UI so that more users can try it.&lt;/p&gt;
+
+&lt;h3 id=&quot;cube-planner&quot;&gt;Cube Planner Enabled by Default&lt;/h3&gt;
+&lt;p&gt;The cube planner can greatly optimize the cube structure and reduce the number of cuboids built, saving compute/storage resources and improving query performance. It was introduced in v2.3 but was not enabled by default. To let more users see and try it, it is enabled by default in v2.5. The algorithm automatically optimizes the cuboid set based on data statistics when the first segment is built.&lt;/p&gt;
+
+&lt;h3 id=&quot;segment-&quot;&gt;Improved Segment Pruning&lt;/h3&gt;
+&lt;p&gt;Segment (partition) pruning can effectively reduce disk and network I/O and therefore greatly improves query performance. In the past, Kylin pruned segments only by the value of the partition date column. If a query did not use the partition column as a filter condition, pruning did not take effect and all segments were scanned.&lt;br /&gt;
+Starting from v2.5, Kylin records the min/max value of every dimension at the segment level. Before scanning a segment, the query’s filter conditions are compared against the min/max index; if they do not match, the segment is skipped. Check KYLIN-3370 for more information.&lt;/p&gt;
+
+&lt;h3 id=&quot;yarn-&quot;&gt;Merge Dictionaries on YARN&lt;/h3&gt;
+&lt;p&gt;When segments are merged, their dictionaries also need to be merged. In the past, dictionary merging happened inside Kylin’s JVM, which required a lot of local memory and CPU resources. In extreme cases (with several concurrent jobs), it could cause the Kylin process to crash. As a result, some users had to allocate more memory to the Kylin job node or run multiple job nodes to balance the workload.&lt;br /&gt;
+Starting from v2.5, Kylin hands this task over to Hadoop MapReduce or Spark, which resolves this bottleneck. Check KYLIN-3471 for more information.&lt;/p&gt;
+
+&lt;h3 id=&quot;cube-&quot;&gt;Improved Cube Build Performance with the Global Dictionary&lt;/h3&gt;
+&lt;p&gt;The global dictionary (GD) is required for bitmap-based precise count-distinct. If the count-distinct column has very high cardinality, the GD can be very large. During the cube build, Kylin needs to translate non-integer values into integers through the GD. Although the GD is split into multiple slices that can be loaded into memory separately, the values of the count-distinct column arrive in random order, so Kylin has to repeatedly swap slices in and out, which makes the build job very slow.&lt;br /&gt;
+This enhancement introduces a new step that builds a shrunken dictionary from the global dictionary for each data block. Each task then only needs to load the shrunken dictionary, avoiding frequent swapping in and out. Performance can be 3 times faster than before. Check KYLIN-3491 for more information.&lt;/p&gt;
+
+&lt;h3 id=&quot;topn-count-distinct--cube-&quot;&gt;Improved Cube Size Estimation for TOPN and COUNT DISTINCT&lt;/h3&gt;
+&lt;p&gt;The cube size is estimated in advance at build time and is used by several subsequent steps, such as deciding the number of partitions for the MR/Spark job and calculating the HBase region splits, so its accuracy has a big impact on build performance. When there are COUNT DISTINCT or TOPN measures, their sizes are flexible, so the estimate can deviate greatly from the real size. In the past, users had to tune several parameters to bring the size estimate closer to the actual size, which was somewhat difficult for ordinary users.&lt;br /&gt;
+Now Kylin automatically adjusts the size estimation based on the collected statistics, which makes the estimate closer to the actual size. Check KYLIN-3453 for more information.&lt;/p&gt;
+
+&lt;h3 id=&quot;hadoop-30hbase-20&quot;&gt;Support Hadoop 3.0/HBase 2.0&lt;/h3&gt;
+&lt;p&gt;Hadoop 3 and HBase 2 are being adopted by more and more users. Kylin now provides new binary packages compiled with the new Hadoop and HBase APIs. We have tested them on Hortonworks HDP 3.0 and Cloudera CDH 6.0.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;Download&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;To download the Apache Kylin v2.5.0 source code or binary package, please visit the &lt;a href=&quot;http://kylin.apache.org/download&quot;&gt;download page&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;Upgrade&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;Refer to the &lt;a href=&quot;/docs/howto/howto_upgrade.html&quot;&gt;upgrade guide&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;Feedback&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;If you have any issue or question, please send an email to the Apache Kylin dev or user mailing list: dev@kylin.apache.org, user@kylin.apache.org; before sending, please make sure you have subscribed to the mailing list by dropping an email to dev-subscribe@kylin.apache.org or user-subscribe@kylin.apache.org.&lt;/p&gt;
+
+&lt;p&gt;&lt;em&gt;Many thanks to everyone who has contributed to Apache Kylin!&lt;/em&gt;&lt;/p&gt;
+</description>
+        <pubDate>Thu, 20 Sep 2018 13:00:00 -0700</pubDate>
+        <link>http://kylin.apache.org/cn/blog/2018/09/20/release-v2.5.0/</link>
+        <guid isPermaLink="true">http://kylin.apache.org/cn/blog/2018/09/20/release-v2.5.0/</guid>
+        
+        
+        <category>blog</category>
+        
+      </item>
+    
+      <item>
         <title>Apache Kylin v2.5.0 Release Announcement</title>
         <description>&lt;p&gt;The Apache Kylin community is pleased to announce the release of Apache Kylin v2.5.0.&lt;/p&gt;
 
@@ -774,70 +990,6 @@ Graphic 10 Process of Querying Cube&lt;/
       </item>
     
       <item>
-        <title>Apache Kylin v2.5.0 正式发布</title>
-        <description>&lt;p&gt;近日Apache Kylin 社区很高兴地宣布,Apache Kylin 2.5.0 正式发布。&lt;/p&gt;
-
-&lt;p&gt;Apache Kylin 是一个开源的分布式分析引擎,旨在为极大数据集提供 SQL 接口和多维分析(OLAP)的能力。&lt;/p&gt;
-
-&lt;p&gt;这是继2.4.0 后的一个新功能版本。该版本引入了很多有价值的改进,完整的改动列表请参见&lt;a href=&quot;https://kylin.apache.org/docs/release_notes.html&quot;&gt;release notes&lt;/a&gt;;这里挑一些主要改进做说明:&lt;/p&gt;
-
-&lt;h3 id=&quot;all-in-spark--cubing-&quot;&gt;All-in-Spark 的 Cubing 引擎&lt;/h3&gt;
-&lt;p&gt;Kylin 的 Spark 引擎将使用 Spark 运行 cube 计算中的所有分布式作业,包括获取各个维度的不同值,将 cuboid 文件转换为 HBase HFile,合并 segment,合并词典等。默认的 Spark 配置也经过优化,使得用户可以获得开箱即用的体验。相关开发任务是 KYLIN-3427, KYLIN-3441, KYLIN-3442.&lt;/p&gt;
-
-&lt;p&gt;Spark 任务管理也有所改进:一旦 Spark 任务开始运行,您就可以在Web控制台上获得作业链接;如果您丢弃该作业,Kylin 将立刻终止 Spark 作业以及时释放资源;如果重新启动 Kylin,它可以从上一个作业恢复,而不是重新提交新作业.&lt;/p&gt;
-
-&lt;h3 id=&quot;mysql--kylin-&quot;&gt;MySQL 做 Kylin 元数据的存储&lt;/h3&gt;
-&lt;p&gt;在过去,HBase 是 Kylin 元数据存储的唯一选择。 在某些情况下 HBase不适用,例如使用多个 HBase 集群来为 Kylin 提供跨区域的高可用,这里复制的 HBase 集群是只读的,所以不能做元数据存储。现在我们引入了 MySQL Metastore 以满足这种需求。此功能现在处于测试阶段。更多内容参见 KYLIN-3488。&lt;/p&gt;
-
-&lt;h3 id=&quot;hybrid-model-&quot;&gt;Hybrid model 图形界面&lt;/h3&gt;
-&lt;p&gt;Hybrid 是一种用于组装多个 cube 的高级模型。 它可用于满足 cube 的 schema 要发生改变的情况。这个功能过去没有图形界面,因此只有一小部分用户知道它。现在我们在 Web 界面上开启了它,以便更多用户可以尝试。&lt;/p&gt;
-
-&lt;h3 id=&quot;cube-planner&quot;&gt;默认开启 Cube planner&lt;/h3&gt;
-&lt;p&gt;Cube planner 可以极大地优化 cube 结构,减少构建的 cuboid 数量,从而节省计算/存储资源并提高查询性能。它是在v2.3中引入的,但默认情况下没有开启。为了让更多用户看到并尝试它,我们默认在v2.5中启用它。 算法将在第一次构建 segment 的时候,根据数据统计自动优化 cuboid 集合.&lt;/p&gt;
-
-&lt;h3 id=&quot;segment-&quot;&gt;改进的 Segment 剪枝&lt;/h3&gt;
-&lt;p&gt;Segment(分区)修剪可以有效地减少磁盘和网络I / O,因此大大提高了查询性能。 过去,Kylin 只按分区列 (partition date column) 的值进行 segment 的修剪。 如果查询中没有将分区列作为过滤条件,那么修剪将不起作用,会扫描所有segment。.&lt;br /&gt;
-现在从v2.5开始,Kylin 将在 segment 级别记录每个维度的最小/最大值。 在扫描 segment 之前,会将查询的条件与最小/最大索引进行比较。 如果不匹配,将跳过该 segment。 检查KYLIN-3370了解更多信息。&lt;/p&gt;
-
-&lt;h3 id=&quot;yarn-&quot;&gt;在 YARN 上合并字典&lt;/h3&gt;
-&lt;p&gt;当 segment 合并时,它们的词典也需要合并。在过去,字典合并发生在 Kylin 的 JVM 中,这需要使用大量的本地内存和 CPU 资源。 在极端情况下(如果有几个并发作业),可能会导致 Kylin 进程崩溃。 因此,一些用户不得不为 Kylin 任务节点分配更多内存,或运行多个任务节点以平衡工作负载。&lt;br /&gt;
-现在从v2.5开始,Kylin 将把这项任务提交给 Hadoop MapReduce 和 Spark,这样就可以解决这个瓶颈问题。 查看KYLIN-3471了解更多信息.&lt;/p&gt;
-
-&lt;h3 id=&quot;cube-&quot;&gt;改进使用全局字典的 cube 构建性能&lt;/h3&gt;
-&lt;p&gt;全局字典 (Global Dictionary) 是 bitmap 精确去重计数的必要条件。如果去重列具有非常高的基数,则 GD 可能非常大。在 cube 构建阶段,Kylin 需要通过 GD 将非整数值转换为整数。尽管 GD 已被分成多个切片,可以分开加载到内存,但是由于去重列的值是乱序的。Kylin 需要反复载入和载出(swap in/out)切片,这会导致构建任务非常缓慢。&lt;br /&gt;
-该增强功能引入了一个新步骤,为每个数据块从全局字典中构建一个缩小的字典。 随后每个任务只需要加载缩小的字典,从而避免频繁的载入和载出。性能可以比以前快3倍。查看 KYLIN-3491 了解更多信息.&lt;/p&gt;
-
-&lt;h3 id=&quot;topn-count-distinct--cube-&quot;&gt;改进含 TOPN, COUNT DISTINCT 的 cube 大小的估计&lt;/h3&gt;
-&lt;p&gt;Cube 的大小在构建时是预先估计的,并被后续几个步骤使用,例如决定 MR / Spark 作业的分区数,计算 HBase region 切割等。它的准确与否会对构建性能产生很大影响。 当存在 COUNT DISTINCT,TOPN 的度量时候,因为它们的大小是灵活的,因此估计值可能跟真实值有很大偏差。 在过去,用户需要调整若干个参数以使尺寸估计更接近实际尺寸,这对普通用户有点困难。&lt;br /&gt;
-现在,Kylin 将根据收集的统计信息自动调整大小估计。这可以使估计值与实际大小更接近。查看 KYLIN-3453 了解更多信息。&lt;/p&gt;
-
-&lt;h3 id=&quot;hadoop-30hbase-20&quot;&gt;支持Hadoop 3.0/HBase 2.0&lt;/h3&gt;
-&lt;p&gt;Hadoop 3和 HBase 2开始被许多用户采用。现在 Kylin 提供使用新的 Hadoop 和 HBase API 编译的新二进制包。我们已经在 Hortonworks HDP 3.0 和 Cloudera CDH 6.0 上进行了测试&lt;/p&gt;
-
-&lt;p&gt;&lt;strong&gt;下载&lt;/strong&gt;&lt;/p&gt;
-
-&lt;p&gt;要下载Apache Kylin v2.5.0源代码或二进制包,请访问&lt;a href=&quot;http://kylin.apache.org/download&quot;&gt;下载页面&lt;/a&gt; .&lt;/p&gt;
-
-&lt;p&gt;&lt;strong&gt;升级&lt;/strong&gt;&lt;/p&gt;
-
-&lt;p&gt;参考&lt;a href=&quot;/docs/howto/howto_upgrade.html&quot;&gt;升级指南&lt;/a&gt;.&lt;/p&gt;
-
-&lt;p&gt;&lt;strong&gt;反馈&lt;/strong&gt;&lt;/p&gt;
-
-&lt;p&gt;如果您遇到问题或疑问,请发送邮件至 Apache Kylin dev 或 user 邮件列表:dev@kylin.apache.org,user@kylin.apache.org; 在发送之前,请确保您已通过发送电子邮件至 dev-subscribe@kylin.apache.org 或 user-subscribe@kylin.apache.org订阅了邮件列表。&lt;/p&gt;
-
-&lt;p&gt;&lt;em&gt;非常感谢所有贡献Apache Kylin的朋友!&lt;/em&gt;&lt;/p&gt;
-</description>
-        <pubDate>Thu, 20 Sep 2018 13:00:00 -0700</pubDate>
-        <link>http://kylin.apache.org/cn/blog/2018/09/20/release-v2.5.0/</link>
-        <guid isPermaLink="true">http://kylin.apache.org/cn/blog/2018/09/20/release-v2.5.0/</guid>
-        
-        
-        <category>blog</category>
-        
-      </item>
-    
-      <item>
         <title>Use Star Schema Benchmark for Apache Kylin</title>
         <description>&lt;h2 id=&quot;background&quot;&gt;Background&lt;/h2&gt;
 
@@ -1122,52 +1274,6 @@ The query result of Scale=10 is as follo
         
         
         <category>blog</category>
-        
-      </item>
-    
-      <item>
-        <title>Apache Kylin v2.3.0 Release Announcement</title>
-        <description>&lt;p&gt;The Apache Kylin community is pleased to announce the release of Apache Kylin v2.3.0.&lt;/p&gt;
-
-&lt;p&gt;Apache Kylin is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Big Data supporting extremely large datasets.&lt;/p&gt;
-
-&lt;p&gt;This is a major release after 2.2.0. The new features include supporting SparkSQL in building “Intermediate Flat Hive Table”, new dropwizard-based metrics framework and the fantastic cube planner which could select the cost-effective cuboids to build based on cost-based algorithm.&lt;/p&gt;
-
-&lt;p&gt;Apache Kylin 2.3.0 resolved 250+ issues including bug fixes, improvements, and new features. All of the changes can be found in the &lt;a href=&quot;https://kylin.apache.org/docs23/release_notes.html&quot;&gt;release notes&lt;/a&gt;.&lt;/p&gt;
-
-&lt;h2 id=&quot;change-highlights&quot;&gt;Change Highlights&lt;/h2&gt;
-
-&lt;ul&gt;
-  &lt;li&gt;Support SparkSql in Cube building step “Create Intermediate Flat Hive Table” &lt;a href=&quot;https://issues.apache.org/jira/browse/KYLIN-3125&quot;&gt;KYLIN-3125&lt;/a&gt;&lt;/li&gt;
-  &lt;li&gt;Support SQL Server as data source &lt;a href=&quot;https://issues.apache.org/jira/browse/KYLIN-3044&quot;&gt;KYLIN-3044&lt;/a&gt;&lt;/li&gt;
-  &lt;li&gt;Support user/group and role authentication for LDAP &lt;a href=&quot;https://issues.apache.org/jira/browse/KYLIN-2960&quot;&gt;KYLIN-2960&lt;/a&gt;&lt;/li&gt;
-  &lt;li&gt;New metric framework based on dropwizard &lt;a href=&quot;https://issues.apache.org/jira/browse/KYLIN-2776&quot;&gt;KYLIN-2776&lt;/a&gt;&lt;/li&gt;
-  &lt;li&gt;Introduce cube planner able to select cost-effective cuboids to be built by cost-based algorithms &lt;a href=&quot;https://issues.apache.org/jira/browse/KYLIN-2727&quot;&gt;KYLIN-2727&lt;/a&gt; &lt;a href=&quot;http://kylin.apache.org/docs23/howto/howto_use_cube_planner.html&quot;&gt;Document&lt;/a&gt;&lt;/li&gt;
-  &lt;li&gt;Introduce a dashboard for showing kylin service related metrics, like query count, query latency, job count, etc &lt;a href=&quot;https://issues.apache.org/jira/browse/KYLIN-2726&quot;&gt;KYLIN-2726&lt;/a&gt; &lt;a href=&quot;http://kylin.apache.org/docs23/howto/howto_use_dashboard.html&quot;&gt;Document&lt;/a&gt;&lt;/li&gt;
-  &lt;li&gt;Support volatile range for segments auto merge &lt;a href=&quot;https://issues.apache.org/jira/browse/KYLIN-1892&quot;&gt;KYLIN-1892&lt;/a&gt;&lt;/li&gt;
-&lt;/ul&gt;
-
-&lt;p&gt;To download Apache Kylin v2.3.0 source code or binary package, visit the &lt;a href=&quot;http://kylin.apache.org/download&quot;&gt;download&lt;/a&gt; page.&lt;/p&gt;
-
-&lt;p&gt;&lt;strong&gt;Upgrade&lt;/strong&gt;&lt;/p&gt;
-
-&lt;p&gt;Follow the &lt;a href=&quot;/docs23/howto/howto_upgrade.html&quot;&gt;upgrade guide&lt;/a&gt;.&lt;/p&gt;
-
-&lt;p&gt;&lt;strong&gt;Support&lt;/strong&gt;&lt;/p&gt;
-
-&lt;p&gt;Any issue or question,&lt;br /&gt;
-open JIRA to Apache Kylin project: &lt;a href=&quot;https://issues.apache.org/jira/browse/KYLIN/&quot;&gt;https://issues.apache.org/jira/browse/KYLIN/&lt;/a&gt;&lt;br /&gt;
-or&lt;br /&gt;
-send mail to Apache Kylin dev mailing list: &lt;a href=&quot;&amp;#109;&amp;#097;&amp;#105;&amp;#108;&amp;#116;&amp;#111;:&amp;#100;&amp;#101;&amp;#118;&amp;#064;&amp;#107;&amp;#121;&amp;#108;&amp;#105;&amp;#110;&amp;#046;&amp;#097;&amp;#112;&amp;#097;&amp;#099;&amp;#104;&amp;#101;&amp;#046;&amp;#111;&amp;#114;&amp;#103;&quot;&gt;&amp;#100;&amp;#101;&amp;#118;&amp;#064;&amp;#107;&amp;#121;&amp;#108;&amp;#105;&amp;#110;&amp;#046;&amp;#097;&amp;#112;&amp;#097;&amp;#099;&amp;#104;&amp;#101;&amp;#046;&amp;#111;&amp;#114;&amp;#103;&lt;/a&gt;&lt;/p&gt;
-
-&lt;p&gt;&lt;em&gt;Great thanks to everyone who contributed!&lt;/em&gt;&lt;/p&gt;
-</description>
-        <pubDate>Sun, 04 Mar 2018 12:00:00 -0800</pubDate>
-        <link>http://kylin.apache.org/blog/2018/03/04/release-v2.3.0/</link>
-        <guid isPermaLink="true">http://kylin.apache.org/blog/2018/03/04/release-v2.3.0/</guid>
-        
-        
-        <category>blog</category>
         
       </item>
     

Added: kylin/site/images/blog/rt_stream_architecture.png
URL: http://svn.apache.org/viewvc/kylin/site/images/blog/rt_stream_architecture.png?rev=1857583&view=auto
==============================================================================
Binary file - no diff available.

Propchange: kylin/site/images/blog/rt_stream_architecture.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: kylin/site/images/blog/rt_stream_columnar_format.png
URL: http://svn.apache.org/viewvc/kylin/site/images/blog/rt_stream_columnar_format.png?rev=1857583&view=auto
==============================================================================
Binary file - no diff available.

Propchange: kylin/site/images/blog/rt_stream_columnar_format.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: kylin/site/images/blog/rt_stream_components.png
URL: http://svn.apache.org/viewvc/kylin/site/images/blog/rt_stream_components.png?rev=1857583&view=auto
==============================================================================
Binary file - no diff available.

Propchange: kylin/site/images/blog/rt_stream_components.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: kylin/site/images/blog/rt_stream_dir_structure.png
URL: http://svn.apache.org/viewvc/kylin/site/images/blog/rt_stream_dir_structure.png?rev=1857583&view=auto
==============================================================================
Binary file - no diff available.

Propchange: kylin/site/images/blog/rt_stream_dir_structure.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: kylin/site/images/blog/rt_stream_how_build_work.png
URL: http://svn.apache.org/viewvc/kylin/site/images/blog/rt_stream_how_build_work.png?rev=1857583&view=auto
==============================================================================
Binary file - no diff available.

Propchange: kylin/site/images/blog/rt_stream_how_build_work.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: kylin/site/images/blog/rt_stream_how_query_work.png
URL: http://svn.apache.org/viewvc/kylin/site/images/blog/rt_stream_how_query_work.png?rev=1857583&view=auto
==============================================================================
Binary file - no diff available.

Propchange: kylin/site/images/blog/rt_stream_how_query_work.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: kylin/site/images/blog/rt_stream_invertindex_format.png
URL: http://svn.apache.org/viewvc/kylin/site/images/blog/rt_stream_invertindex_format.png?rev=1857583&view=auto
==============================================================================
Binary file - no diff available.

Propchange: kylin/site/images/blog/rt_stream_invertindex_format.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: kylin/site/images/blog/rt_stream_rt_segment_state.png
URL: http://svn.apache.org/viewvc/kylin/site/images/blog/rt_stream_rt_segment_state.png?rev=1857583&view=auto
==============================================================================
Binary file - no diff available.

Propchange: kylin/site/images/blog/rt_stream_rt_segment_state.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: kylin/site/images/blog/rt_stream_stages.png
URL: http://svn.apache.org/viewvc/kylin/site/images/blog/rt_stream_stages.png?rev=1857583&view=auto
==============================================================================
Binary file - no diff available.

Propchange: kylin/site/images/blog/rt_stream_stages.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Modified: kylin/site/index.html
URL: http://svn.apache.org/viewvc/kylin/site/index.html?rev=1857583&r1=1857582&r2=1857583&view=diff
==============================================================================
--- kylin/site/index.html (original)
+++ kylin/site/index.html Mon Apr 15 14:22:27 2019
@@ -334,10 +334,121 @@ var _hmt = _hmt || [];
     
   
     
-      <li class="navlist">
-        <a href="/docs/release_notes.html" class="list-group-item-lay pjaxlink">Release Notes</a>
-      </li>      
-      
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+  
+    
+