Posted to commits@parquet.apache.org by ga...@apache.org on 2020/01/06 10:33:46 UTC

[parquet-mr] branch encryption updated (3f2d0e7 -> b6067ac)

This is an automated email from the ASF dual-hosted git repository.

gabor pushed a change to branch encryption
in repository https://gitbox.apache.org/repos/asf/parquet-mr.git.


    omit 3f2d0e7  PARQUET-1228: Format Structures encryption (#613)
     add 0d9bad5  PARQUET-1637: Builds are failing because default jdk changed to openjdk11 on Travis (#665)
     add 340d157  PARQUET-1530: Remove Dependency on commons-codec (#618)
     add 14c1e81  PARQUET-1445: Remove Files.java (#584)
     add be00486  PARQUET-1607: Remove duplicate maven-enforcer-plugin (#658)
     add e0ab7af  PARQUET-1654: Remove unnecessary options when building thrift (#676)
     add e10a645  PARQUET-1649: Bump Jackson Databind to 2.9.9.3 (#674)
     add e9d8716  PARQUET-1601: Add zstd support to parquet-cli to-avro (#653)
     add 56f164f  PARQUET-1661: Upgrade to Avro 1.9.1 (#682)
     add 76c40e7  PARQUET-1542: Merge multiple I/O to one time I/O in method readFooter (#624)
     add 0cb5ead  PARQUET-1662: Upgrade Jackson to version 2.9.10 (#683)
     add 6e072cf  PARQUET-1665: Upgrade zstd-jni to 1.4.0-1 (#684)
     add 7c4d1ec  PARQUET-1644: Clean up some benchmark code and docs. (#672)
     add 600ffba  PARQUET-1669: Disable compiling all libraries when building thrift (#688)
     add 6db7287  PARQUET-1671: Upgrade Yetus to 0.11.0 (#689)
     add 7772644  PARQUET-1673: Upgrade parquet-mr format version to 2.7.0 (#690)
     add 59ae034  PARQUET-1578: Introduce Lambdas (#641)
     add 10f57a3  PARQUET-1596: PARQUET-1375 broke parquet-cli's to-avro command (#648)
     add 52a502e  PARQUET-0000: Fix typo (#666)
     add 57bd243  PARQUET-0000: Improved formatting (#673)
     add 0c6a650  PARQUET-1650: Implement unit test to validate column/offset indexes (#675)
     add 2117abc  PARQUET-1682: Maintain forward compatibility for TIME/TIMESTAMP (#694)
     add 2122a8a  PARQUET-1683: Remove unnecessary string conversions (#695)
     add 4648b06  PARQUET-XXXX: Minor Javadoc improvements (#667)
     add 10b926f  PARQUET-1444: Prefer ArrayList over LinkedList (#583)
     add ca7d0e2  PARQUET-1496: Update Scala to 2.12 (#693)
     add 19b10ac  PARQUET-1499: Add Java 11 to Travis (#596)
     add d1190ab  PARQUET-1691: Build fails due to missing hadoop-lzo (#698)
     add e60f5f1  PARQUET-1687: Update release process (#697)
     add 76f9010  PARQUET-1685: Truncate Min/Max for Statistics (#696)
     add 7d474c7  Update CHANGES.md for 1.11.0rc7
     add 18519eb  [maven-release-plugin] prepare release apache-parquet-1.11.0-rc7
     add 7bd38d1  [maven-release-plugin] prepare for next development iteration
     add 475b446  Prepare for next development iteration
     add 4ca29c7  [PARQUET-1717] Convert i16 thrift to INT16 logical type instead (#706)
     add 2c9ccf9  PARQUET-1696: Remove unused hadoop-1 profile (#701)
     add 3b4ecf2  PARQUET-1723: Read From Maps without using .contains(...) (#711)
     add b9f16e5  PARQUET-1724: Use ConcurrentHashMap for Cache in DictionaryPageReader (#712)
     add 1e15f60  PARQUET-1726: Use Java 8 Multi Exception Handling (#714)
     add 3d8ce06  PARQUET-1727: Do Not Swallow InterruptedException in ParquetLoader (#715)
     add cce6fdb  PARQUET-1732: Call toArray With Empty Array (#720)
     add a7447f6  PARQUET-1731: Use JDK 8 Facilities to Simplify FilteringRecordMaterializer (#719)
     add c697d80  PARQUET-1730: Use switch Statement in AvroIndexedRecordConverter for Enums (#718)
     new b6067ac  PARQUET-1228: Format Structures encryption (#613)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (3f2d0e7)
            \
             N -- N -- N   refs/heads/encryption (b6067ac)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .travis.yml                                        |    4 +
 CHANGES.md                                         |   64 +-
 README.md                                          |    2 +-
 .../run_checksums.sh => dev/finalize-release       |   31 +-
 dev/prepare-release.sh                             |   15 +-
 dev/source-release.sh                              |    4 +-
 dev/travis-before_install.sh                       |    8 +-
 .../org/apache/parquet/avro/AvroConverters.java    |    9 +-
 .../parquet/avro/AvroIndexedRecordConverter.java   |   49 +-
 .../apache/parquet/avro/AvroRecordConverter.java   |    4 +-
 .../parquet/avro/TestArrayCompatibility.java       | 1048 ++++++++++----------
 .../parquet/avro/TestAvroSchemaConverter.java      |   35 +-
 parquet-benchmarks/README.md                       |   36 +-
 parquet-benchmarks/run.sh                          |   96 +-
 .../apache/parquet/benchmarks/BenchmarkFiles.java  |    2 +
 .../apache/parquet/benchmarks/DataGenerator.java   |    9 +-
 .../benchmarks/PageChecksumDataGenerator.java      |   23 +-
 .../benchmarks/PageChecksumReadBenchmarks.java     |   63 +-
 .../benchmarks/PageChecksumWriteBenchmarks.java    |   56 +-
 .../apache/parquet/benchmarks/ReadBenchmarks.java  |   25 +
 .../apache/parquet/benchmarks/WriteBenchmarks.java |   11 +-
 .../src/main}/resources/log4j.properties           |    6 +-
 parquet-cascading3/pom.xml                         |   24 +-
 parquet-cli/pom.xml                                |   13 +-
 .../java/org/apache/parquet/cli/BaseCommand.java   |    4 +-
 .../src/main/java/org/apache/parquet/cli/Util.java |   17 +-
 .../apache/parquet/cli/commands/ToAvroCommand.java |   22 +-
 .../org/apache/parquet/cli/csv/RecordBuilder.java  |    2 +-
 .../java/org/apache/parquet/cli/json/AvroJson.java |   12 +-
 .../java/org/apache/parquet/cli/util/Codecs.java   |    2 +
 .../apache/parquet/cli/util/GetClassLoader.java    |    2 +-
 parquet-cli/src/main/resources/META-INF/LICENSE    |    5 +-
 .../apache/parquet/cli/commands/AvroFileTest.java  |    5 +
 .../parquet/cli/commands/ToAvroCommandTest.java    |   63 +-
 parquet-column/pom.xml                             |    6 -
 .../org/apache/parquet/CorruptDeltaByteArrays.java |    5 +-
 .../java/org/apache/parquet/CorruptStatistics.java |   12 +-
 .../org/apache/parquet/column/EncodingStats.java   |   14 +-
 .../apache/parquet/column/ParquetProperties.java   |   20 +-
 .../parquet/column/impl/ColumnReadStoreImpl.java   |    4 +-
 .../column/statistics/BinaryStatistics.java        |   11 +
 .../values/plain/BinaryPlainValuesReader.java      |    8 +-
 .../parquet/filter2/predicate/Operators.java       |    4 +-
 .../recordlevel/FilteringGroupConverter.java       |    7 +-
 .../recordlevel/FilteringRecordMaterializer.java   |   34 +-
 .../columnindex/BinaryColumnIndexBuilder.java      |    4 +-
 .../column/columnindex/BinaryTruncator.java        |   16 +-
 .../org/apache/parquet/io/PrimitiveColumnIO.java   |    2 +-
 .../parquet/io/RecordReaderImplementation.java     |    2 +-
 .../java/org/apache/parquet/io/api/Binary.java     |    7 +-
 .../java/org/apache/parquet/schema/GroupType.java  |    5 +-
 .../parquet/schema/LogicalTypeAnnotation.java      |    6 -
 .../main/java/org/apache/parquet/schema/Types.java |    6 +-
 .../apache/parquet/schema/TestTypeBuilders.java    |  245 ++---
 .../org/apache/parquet/schema/TestTypeUtil.java    |   39 +-
 parquet-common/pom.xml                             |    2 +-
 .../src/main/java/org/apache/parquet/Files.java    |    5 +-
 .../java/org/apache/parquet/VersionParser.java     |    5 +-
 .../org/apache/parquet/util/DynConstructors.java   |   22 +-
 .../java/org/apache/parquet/util/DynMethods.java   |    6 +-
 .../parquet/bytes/TestByteBufferInputStreams.java  |   50 +-
 .../io/TestDelegatingSeekableInputStream.java      |   43 +-
 .../apache/parquet/util/TestDynConstructors.java   |  102 +-
 .../org/apache/parquet/util/TestDynMethods.java    |  137 +--
 parquet-encoding/pom.xml                           |    6 -
 .../parquet/column/values/bitpacking/Packer.java   |   11 +-
 parquet-format-structures/pom.xml                  |   10 +-
 .../org/apache/parquet/format/event/Consumers.java |    4 +-
 parquet-hadoop/pom.xml                             |    2 +-
 .../org/apache/parquet/ParquetReadOptions.java     |    9 +-
 .../format/converter/ParquetMetadataConverter.java |   72 +-
 .../parquet/hadoop/ColumnChunkPageReadStore.java   |   17 +-
 .../parquet/hadoop/ColumnIndexValidator.java       |  613 ++++++++++++
 .../parquet/hadoop/DictionaryPageReader.java       |   67 +-
 .../apache/parquet/hadoop/DirectCodecFactory.java  |    8 +-
 .../org/apache/parquet/hadoop/MemoryManager.java   |    4 +-
 .../apache/parquet/hadoop/ParquetFileReader.java   |   67 +-
 .../apache/parquet/hadoop/ParquetFileWriter.java   |   21 +-
 .../apache/parquet/hadoop/ParquetInputFormat.java  |   11 +-
 .../apache/parquet/hadoop/ParquetOutputFormat.java |   21 +-
 .../org/apache/parquet/hadoop/ParquetWriter.java   |    3 +-
 .../org/apache/parquet/hadoop/PrintFooter.java     |   14 +-
 .../org/apache/parquet/hadoop/codec/CleanUtil.java |    1 -
 .../parquet/hadoop/metadata/ParquetMetadata.java   |    8 -
 .../apache/parquet/hadoop/util/ContextUtil.java    |   32 +-
 .../apache/parquet/hadoop/util/HadoopStreams.java  |   13 +-
 .../parquet/hadoop/util/SerializationUtil.java     |   15 +-
 .../converter/TestParquetMetadataConverter.java    |   74 +-
 .../hadoop/TestInputOutputFormatWithPadding.java   |    9 +-
 .../apache/parquet/hadoop/TestMemoryManager.java   |    7 +-
 .../apache/parquet/hadoop/TestParquetWriter.java   |   17 +-
 .../hadoop/TestParquetWriterAppendBlocks.java      |   22 +-
 .../hadoop/example/TestInputOutputFormat.java      |   15 +-
 .../hadoop/util/TestHadoop2ByteBufferReads.java    |   18 +-
 .../apache/parquet/statistics/RandomValues.java    |   46 +-
 .../parquet/statistics/TestColumnIndexes.java      |  300 ++++++
 parquet-jackson/pom.xml                            |    6 +-
 parquet-pig/pom.xml                                |    2 +-
 .../java/org/apache/parquet/pig/ParquetLoader.java |    5 +-
 .../org/apache/parquet/pig/TupleWriteSupport.java  |    4 +-
 .../apache/parquet/pig/convert/TupleConverter.java |    7 +-
 .../apache/parquet/pig/summary/SummaryData.java    |    8 -
 parquet-protobuf/pom.xml                           |    7 +
 .../apache/parquet/proto/ProtoSchemaConverter.java |    2 +-
 parquet-scrooge/pom.xml                            |   22 +-
 parquet-scrooge/src/test/thrift/test.thrift        |   14 +-
 parquet-thrift/pom.xml                             |    9 +-
 .../hadoop/thrift/ThriftBytesWriteSupport.java     |    4 +-
 .../parquet/hadoop/thrift/ThriftReadSupport.java   |    8 +-
 .../thrift/BufferedProtocolReadToWrite.java        |    4 +-
 .../apache/parquet/thrift/ParquetReadProtocol.java |    4 +-
 .../parquet/thrift/ThriftSchemaConvertVisitor.java |    2 +-
 .../parquet/thrift/TestThriftRecordConverter.java  |    8 +-
 .../parquet/thrift/TestThriftSchemaConverter.java  |   13 +
 parquet-thrift/src/test/thrift/test.thrift         |    6 +-
 parquet-tools/README.md                            |    4 +-
 .../parquet/tools/command/MetadataUtils.java       |    2 +-
 .../apache/parquet/tools/util/MetadataUtils.java   |    2 +-
 parquet-tools/src/main/resources/META-INF/LICENSE  |    9 -
 pom.xml                                            |   52 +-
 120 files changed, 2584 insertions(+), 1757 deletions(-)
 rename parquet-benchmarks/run_checksums.sh => dev/finalize-release (53%)
 mode change 100644 => 100755 dev/source-release.sh
 mode change 100644 => 100755 dev/travis-before_install.sh
 copy {parquet-hadoop/src/test => parquet-benchmarks/src/main}/resources/log4j.properties (88%)
 create mode 100644 parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ColumnIndexValidator.java
 create mode 100644 parquet-hadoop/src/test/java/org/apache/parquet/statistics/TestColumnIndexes.java


[parquet-mr] 01/01: PARQUET-1228: Format Structures encryption (#613)

Posted by ga...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

gabor pushed a commit to branch encryption
in repository https://gitbox.apache.org/repos/asf/parquet-mr.git

commit b6067ac5bade82fab0e4f72a714deda2a2cebce7
Author: ggershinsky <gg...@users.noreply.github.com>
AuthorDate: Tue Aug 27 12:07:10 2019 +0200

    PARQUET-1228: Format Structures encryption (#613)
---
 .travis.yml                                        |   1 +
 dev/travis-before_install-encryption.sh            |  29 +++
 .../org/apache/parquet/format/BlockCipher.java     |  69 +++++++
 .../main/java/org/apache/parquet/format/Util.java  | 222 +++++++++++++++++----
 pom.xml                                            |   2 +-
 5 files changed, 278 insertions(+), 45 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index 77b16d9..f7d7041 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,4 +1,5 @@
 language: java
+jdk: openjdk8
 before_install:
   - bash dev/travis-before_install.sh
 
diff --git a/dev/travis-before_install-encryption.sh b/dev/travis-before_install-encryption.sh
new file mode 100755
index 0000000..0e3a3f6
--- /dev/null
+++ b/dev/travis-before_install-encryption.sh
@@ -0,0 +1,29 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+################################################################################
+# This is a branch-specific script that gets invoked at the end of
+# travis-before_install.sh. It is run for the encryption branch only.
+################################################################################
+
+cd ..
+git clone https://github.com/apache/parquet-format.git
+cd parquet-format
+mvn install -DskipTests --batch-mode
+cd $TRAVIS_BUILD_DIR
+
+
diff --git a/parquet-format-structures/src/main/java/org/apache/parquet/format/BlockCipher.java b/parquet-format-structures/src/main/java/org/apache/parquet/format/BlockCipher.java
new file mode 100755
index 0000000..48c0bf2
--- /dev/null
+++ b/parquet-format-structures/src/main/java/org/apache/parquet/format/BlockCipher.java
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.parquet.format;
+
+import java.io.IOException;
+import java.io.InputStream;
+
+public interface BlockCipher{
+
+
+  public interface Encryptor{
+    /**
+     * Encrypts the plaintext.
+     * 
+     * @param plaintext - starts at offset 0 of the input, and fills up the entire byte array.
+     * @param AAD - Additional Authenticated Data for the encryption (ignored in case of CTR cipher)
+     * @return lengthAndCiphertext - the first 4 bytes of the returned value are the ciphertext length (little-endian int).
+     * The ciphertext starts at offset 4 and fills up the rest of the returned byte array.
+     * The ciphertext includes the nonce and (in case of GCM cipher) the tag, as detailed in the 
+     * Parquet Modular Encryption specification.
+     * @throws IOException thrown upon any crypto problem encountered during encryption
+     */
+    public byte[] encrypt(byte[] plaintext, byte[] AAD) throws IOException;
+  }
+
+
+  public interface Decryptor{  
+    /**
+     * Decrypts the ciphertext. 
+     * 
+     * @param lengthAndCiphertext - the first 4 bytes of the input are the ciphertext length (little-endian int).
+     * The ciphertext starts at offset 4 and fills up the rest of the input byte array.
+     * The ciphertext includes the nonce and (in case of GCM cipher) the tag, as detailed in the 
+     * Parquet Modular Encryption specification.
+     * @param AAD - Additional Authenticated Data for the decryption (ignored in case of CTR cipher)
+     * @return plaintext - starts at offset 0 of the output value, and fills up the entire byte array.
+     * @throws IOException thrown upon any crypto problem encountered during decryption
+     */
+    public byte[] decrypt(byte[] lengthAndCiphertext, byte[] AAD) throws IOException;
+
+    /**
+     * Convenience decryption method that reads the length and ciphertext from the input stream.
+     * 
+     * @param from Input stream with length and ciphertext.
+     * @param AAD - Additional Authenticated Data for the decryption (ignored in case of CTR cipher)
+     * @return plaintext - starts at offset 0 of the output, and fills up the entire byte array.
+     * @throws IOException thrown upon any crypto or IO problem encountered during decryption
+     */
+    public byte[] decrypt(InputStream from, byte[] AAD) throws IOException;
+  }
+}
+
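For readers of this patch, the length-and-ciphertext framing that the Javadoc
above describes can be sketched as follows. This is a minimal illustration,
not part of the patch: the class and method names are hypothetical, and only
the 4-byte little-endian length prefix and the offset-4 ciphertext layout
come from the interface contract.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Minimal sketch (hypothetical helper, not part of this patch) of the
// lengthAndCiphertext layout used by BlockCipher.Encryptor/Decryptor.
public class LengthAndCiphertextSketch {

  // Prepend the 4-byte little-endian length, matching what encrypt() returns.
  static byte[] frame(byte[] ciphertext) {
    return ByteBuffer.allocate(4 + ciphertext.length)
        .order(ByteOrder.LITTLE_ENDIAN)
        .putInt(ciphertext.length)
        .put(ciphertext)
        .array();
  }

  // Recover the ciphertext from a lengthAndCiphertext array, matching what
  // decrypt(byte[], byte[]) expects as input.
  static byte[] unframe(byte[] lengthAndCiphertext) {
    ByteBuffer buf = ByteBuffer.wrap(lengthAndCiphertext).order(ByteOrder.LITTLE_ENDIAN);
    int length = buf.getInt();            // bytes 0-3: ciphertext length
    byte[] ciphertext = new byte[length]; // ciphertext starts at offset 4
    buf.get(ciphertext);
    return ciphertext;
  }
}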
diff --git a/parquet-format-structures/src/main/java/org/apache/parquet/format/Util.java b/parquet-format-structures/src/main/java/org/apache/parquet/format/Util.java
index d09d007..9242290 100644
--- a/parquet-format-structures/src/main/java/org/apache/parquet/format/Util.java
+++ b/parquet-format-structures/src/main/java/org/apache/parquet/format/Util.java
@@ -20,6 +20,8 @@
 package org.apache.parquet.format;
 
 import static org.apache.parquet.format.FileMetaData._Fields.CREATED_BY;
+import static org.apache.parquet.format.FileMetaData._Fields.ENCRYPTION_ALGORITHM;
+import static org.apache.parquet.format.FileMetaData._Fields.FOOTER_SIGNING_KEY_METADATA;
 import static org.apache.parquet.format.FileMetaData._Fields.KEY_VALUE_METADATA;
 import static org.apache.parquet.format.FileMetaData._Fields.NUM_ROWS;
 import static org.apache.parquet.format.FileMetaData._Fields.ROW_GROUPS;
@@ -30,9 +32,11 @@ import static org.apache.parquet.format.event.Consumers.listElementsOf;
 import static org.apache.parquet.format.event.Consumers.listOf;
 import static org.apache.parquet.format.event.Consumers.struct;
 
+import java.io.ByteArrayInputStream;
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
+import java.nio.charset.StandardCharsets;
 import java.util.List;
 
 import org.apache.thrift.TBase;
@@ -40,7 +44,7 @@ import org.apache.thrift.TException;
 import org.apache.thrift.protocol.TCompactProtocol;
 import org.apache.thrift.protocol.TProtocol;
 import org.apache.thrift.transport.TIOStreamTransport;
-
+import org.apache.thrift.transport.TMemoryBuffer;
 import org.apache.parquet.format.event.Consumers.Consumer;
 import org.apache.parquet.format.event.Consumers.DelegatingFieldConsumer;
 import org.apache.parquet.format.event.EventBasedThriftReader;
@@ -54,37 +58,91 @@ import org.apache.parquet.format.event.TypedConsumer.StringConsumer;
  */
 public class Util {
 
+  private final static int INIT_MEM_ALLOC_ENCR_BUFFER = 100;
+
   public static void writeColumnIndex(ColumnIndex columnIndex, OutputStream to) throws IOException {
-    write(columnIndex, to);
+    writeColumnIndex(columnIndex, to, null, null);
+  }
+
+  public static void writeColumnIndex(ColumnIndex columnIndex, OutputStream to, 
+      BlockCipher.Encryptor encryptor, byte[] AAD) throws IOException {
+    write(columnIndex, to, encryptor, AAD);
   }
 
   public static ColumnIndex readColumnIndex(InputStream from) throws IOException {
-    return read(from, new ColumnIndex());
+    return readColumnIndex(from, null, null);
+  }
+
+  public static ColumnIndex readColumnIndex(InputStream from, 
+      BlockCipher.Decryptor decryptor, byte[] AAD) throws IOException {
+    return read(from, new ColumnIndex(), decryptor, AAD);
   }
 
   public static void writeOffsetIndex(OffsetIndex offsetIndex, OutputStream to) throws IOException {
-    write(offsetIndex, to);
+    writeOffsetIndex(offsetIndex, to, null, null);
+  }
+
+  public static void writeOffsetIndex(OffsetIndex offsetIndex, OutputStream to, 
+      BlockCipher.Encryptor encryptor, byte[] AAD) throws IOException {
+    write(offsetIndex, to, encryptor, AAD);
   }
 
   public static OffsetIndex readOffsetIndex(InputStream from) throws IOException {
-    return read(from, new OffsetIndex());
+    return readOffsetIndex(from, null, null);
+  }
+
+  public static OffsetIndex readOffsetIndex(InputStream from, 
+      BlockCipher.Decryptor decryptor, byte[] AAD) throws IOException {
+    return read(from, new OffsetIndex(), decryptor, AAD);
   }
 
   public static void writePageHeader(PageHeader pageHeader, OutputStream to) throws IOException {
-    write(pageHeader, to);
+    writePageHeader(pageHeader, to, null, null);
+  }
+
+  public static void writePageHeader(PageHeader pageHeader, OutputStream to, 
+      BlockCipher.Encryptor encryptor, byte[] AAD) throws IOException {
+    write(pageHeader, to, encryptor, AAD);
   }
 
   public static PageHeader readPageHeader(InputStream from) throws IOException {
-    return read(from, new PageHeader());
+    return readPageHeader(from, null, null);
+  }
+
+  public static PageHeader readPageHeader(InputStream from, 
+      BlockCipher.Decryptor decryptor, byte[] AAD) throws IOException {
+    return read(from, new PageHeader(), decryptor, AAD);
+  }
+
+  public static void writeFileMetaData(org.apache.parquet.format.FileMetaData fileMetadata, 
+      OutputStream to) throws IOException {
+    writeFileMetaData(fileMetadata, to, null, null);
   }
 
-  public static void writeFileMetaData(org.apache.parquet.format.FileMetaData fileMetadata, OutputStream to) throws IOException {
-    write(fileMetadata, to);
+  public static void writeFileMetaData(org.apache.parquet.format.FileMetaData fileMetadata, 
+      OutputStream to, BlockCipher.Encryptor encryptor, byte[] AAD) throws IOException {
+    write(fileMetadata, to, encryptor, AAD);
   }
 
   public static FileMetaData readFileMetaData(InputStream from) throws IOException {
-    return read(from, new FileMetaData());
+    return readFileMetaData(from, null, null);
+  }
+
+  public static FileMetaData readFileMetaData(InputStream from, 
+      BlockCipher.Decryptor decryptor, byte[] AAD) throws IOException {
+    return read(from, new FileMetaData(), decryptor, AAD);
+  }
+
+  public static void writeColumnMetaData(ColumnMetaData columnMetaData, OutputStream to, 
+      BlockCipher.Encryptor encryptor, byte[] AAD) throws IOException {
+    write(columnMetaData, to, encryptor, AAD);
   }
+
+  public static ColumnMetaData readColumnMetaData(InputStream from, 
+      BlockCipher.Decryptor decryptor, byte[] AAD) throws IOException {
+    return read(from, new ColumnMetaData(), decryptor, AAD);
+  }
+
   /**
    * reads the meta data from the stream
    * @param from the stream to read the metadata from
@@ -93,15 +151,28 @@ public class Util {
    * @throws IOException if any I/O error occurs during the reading
    */
   public static FileMetaData readFileMetaData(InputStream from, boolean skipRowGroups) throws IOException {
+    return readFileMetaData(from, skipRowGroups, (BlockCipher.Decryptor) null, (byte[]) null);
+  }
+
+  public static FileMetaData readFileMetaData(InputStream from, boolean skipRowGroups, 
+      BlockCipher.Decryptor decryptor, byte[] AAD) throws IOException {
     FileMetaData md = new FileMetaData();
     if (skipRowGroups) {
-      readFileMetaData(from, new DefaultFileMetaDataConsumer(md), skipRowGroups);
+      readFileMetaData(from, new DefaultFileMetaDataConsumer(md), skipRowGroups, decryptor, AAD);
     } else {
-      read(from, md);
+      read(from, md, decryptor, AAD);
     }
     return md;
   }
 
+  public static void writeFileCryptoMetaData(org.apache.parquet.format.FileCryptoMetaData cryptoMetadata, OutputStream to) throws IOException { 
+    write(cryptoMetadata, to, null, null);
+  }
+
+  public static FileCryptoMetaData readFileCryptoMetaData(InputStream from) throws IOException {
+    return read(from, new FileCryptoMetaData(), null, null);
+  }
+
   /**
    * To read metadata in a streaming fashion.
    *
@@ -113,6 +184,8 @@ public class Util {
     abstract public void addRowGroup(RowGroup rowGroup);
     abstract public void addKeyValueMetaData(KeyValue kv);
     abstract public void setCreatedBy(String createdBy);
+    abstract public void setEncryptionAlgorithm(EncryptionAlgorithm encryptionAlgorithm);
+    abstract public void setFooterSigningKeyMetadata(byte[] footerSigningKeyMetadata);
   }
 
   /**
@@ -155,41 +228,73 @@ public class Util {
     public void addKeyValueMetaData(KeyValue kv) {
       md.addToKey_value_metadata(kv);
     }
+
+    @Override
+    public void setEncryptionAlgorithm(EncryptionAlgorithm encryptionAlgorithm) {
+      md.setEncryption_algorithm(encryptionAlgorithm);
+    }
+
+    @Override
+    public void setFooterSigningKeyMetadata(byte[] footerSigningKeyMetadata) {
+      md.setFooter_signing_key_metadata(footerSigningKeyMetadata);
+    }
   }
 
   public static void readFileMetaData(InputStream from, FileMetaDataConsumer consumer) throws IOException {
-    readFileMetaData(from, consumer, false);
+    readFileMetaData(from, consumer, null, null);
+  }
+
+  public static void readFileMetaData(InputStream from, FileMetaDataConsumer consumer, 
+      BlockCipher.Decryptor decryptor, byte[] AAD) throws IOException {
+    readFileMetaData(from, consumer, false, decryptor, AAD);
   }
 
   public static void readFileMetaData(InputStream from, final FileMetaDataConsumer consumer, boolean skipRowGroups) throws IOException {
+    readFileMetaData(from, consumer, skipRowGroups, null, null);
+  }
+
+  public static void readFileMetaData(final InputStream input, final FileMetaDataConsumer consumer, 
+      boolean skipRowGroups, BlockCipher.Decryptor decryptor, byte[] AAD) throws IOException {
     try {
       DelegatingFieldConsumer eventConsumer = fieldConsumer()
-      .onField(VERSION, new I32Consumer() {
-        @Override
-        public void consume(int value) {
-          consumer.setVersion(value);
-        }
-      }).onField(SCHEMA, listOf(SchemaElement.class, new Consumer<List<SchemaElement>>() {
-        @Override
-        public void consume(List<SchemaElement> schema) {
-          consumer.setSchema(schema);
-        }
-      })).onField(NUM_ROWS, new I64Consumer() {
-        @Override
-        public void consume(long value) {
-          consumer.setNumRows(value);
-        }
-      }).onField(KEY_VALUE_METADATA, listElementsOf(struct(KeyValue.class, new Consumer<KeyValue>() {
-        @Override
-        public void consume(KeyValue kv) {
-          consumer.addKeyValueMetaData(kv);
-        }
-      }))).onField(CREATED_BY, new StringConsumer() {
-        @Override
-        public void consume(String value) {
-          consumer.setCreatedBy(value);
-        }
-      });
+          .onField(VERSION, new I32Consumer() {
+            @Override
+            public void consume(int value) {
+              consumer.setVersion(value);
+            }
+          }).onField(SCHEMA, listOf(SchemaElement.class, new Consumer<List<SchemaElement>>() {
+            @Override
+            public void consume(List<SchemaElement> schema) {
+              consumer.setSchema(schema);
+            }
+          })).onField(NUM_ROWS, new I64Consumer() {
+            @Override
+            public void consume(long value) {
+              consumer.setNumRows(value);
+            }
+          }).onField(KEY_VALUE_METADATA, listElementsOf(struct(KeyValue.class, new Consumer<KeyValue>() {
+            @Override
+            public void consume(KeyValue kv) {
+              consumer.addKeyValueMetaData(kv);
+            }
+          }))).onField(CREATED_BY, new StringConsumer() {
+            @Override
+            public void consume(String value) {
+              consumer.setCreatedBy(value);
+            }
+          }).onField(ENCRYPTION_ALGORITHM, struct(EncryptionAlgorithm.class, new Consumer<EncryptionAlgorithm>() {
+            @Override
+            public void consume(EncryptionAlgorithm encryptionAlgorithm) {
+              consumer.setEncryptionAlgorithm(encryptionAlgorithm);
+            }
+          })).onField(FOOTER_SIGNING_KEY_METADATA, new StringConsumer() {
+            @Override
+            public void consume(String value) {
+              byte[] keyMetadata = value.getBytes(StandardCharsets.UTF_8);
+              consumer.setFooterSigningKeyMetadata(keyMetadata);
+            }
+          });
+
       if (!skipRowGroups) {
         eventConsumer = eventConsumer.onField(ROW_GROUPS, listElementsOf(struct(RowGroup.class, new Consumer<RowGroup>() {
           @Override
@@ -198,8 +303,16 @@ public class Util {
           }
         })));
       }
-      new EventBasedThriftReader(protocol(from)).readStruct(eventConsumer);
 
+      final InputStream from;
+      if (null == decryptor) {
+        from = input;
+      }
+      else {
+        byte[] plainText =  decryptor.decrypt(input, AAD);
+        from = new ByteArrayInputStream(plainText);
+      }
+      new EventBasedThriftReader(protocol(from)).readStruct(eventConsumer);
     } catch (TException e) {
       throw new IOException("can not read FileMetaData: " + e.getMessage(), e);
     }
@@ -217,7 +330,16 @@ public class Util {
     return new InterningProtocol(new TCompactProtocol(t));
   }
 
-  private static <T extends TBase<?,?>> T read(InputStream from, T tbase) throws IOException {
+
+  private static <T extends TBase<?,?>> T read(final InputStream input, T tbase, BlockCipher.Decryptor decryptor, byte[] AAD) throws IOException {
+    final InputStream from;
+    if (null == decryptor) {
+      from = input;
+    } else {
+      byte[] plainText = decryptor.decrypt(input, AAD);
+      from = new ByteArrayInputStream(plainText);
+    }
+
     try {
       tbase.read(protocol(from));
       return tbase;
@@ -226,11 +348,23 @@ public class Util {
     }
   }
 
-  private static void write(TBase<?, ?> tbase, OutputStream to) throws IOException {
-    try {
-      tbase.write(protocol(to));
+  private static void write(TBase<?, ?> tbase, OutputStream to, BlockCipher.Encryptor encryptor, byte[] AAD) throws IOException {
+    if (null == encryptor) { 
+      try {
+        tbase.write(protocol(to));
+        return;
+      } catch (TException e) {
+        throw new IOException("can not write " + tbase, e);
+      }
+    }
+    // Serialize and encrypt the structure
+    try (TMemoryBuffer thriftMemoryBuffer = new TMemoryBuffer(INIT_MEM_ALLOC_ENCR_BUFFER)) {
+      tbase.write(new InterningProtocol(new TCompactProtocol(thriftMemoryBuffer)));
+      byte[] encryptedBuffer = encryptor.encrypt(thriftMemoryBuffer.getArray(), AAD);
+      to.write(encryptedBuffer);
     } catch (TException e) {
       throw new IOException("can not write " + tbase, e);
     }
   }
 }
+
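Taken together, the new Util overloads keep the unencrypted code path
unchanged (null encryptor/decryptor) while letting callers thread a cipher
and AAD through every read and write. Below is a minimal round-trip sketch
using the signatures added above; the encryptor, decryptor and AAD values
are assumed to be supplied by the (separate) encryption layer and are
placeholders here.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.parquet.format.BlockCipher;
import org.apache.parquet.format.PageHeader;
import org.apache.parquet.format.Util;

// Sketch of the call pattern enabled by the new Util overloads.
public class PageHeaderRoundTripSketch {

  static PageHeader roundTrip(PageHeader header, BlockCipher.Encryptor encryptor,
      BlockCipher.Decryptor decryptor, byte[] aad) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();

    // Non-null encryptor: the struct is serialized to a memory buffer,
    // encrypted, and written as length-and-ciphertext. Passing null here
    // falls back to the original plaintext path.
    Util.writePageHeader(header, out, encryptor, aad);

    // Non-null decryptor: the stream is decrypted first, then the
    // plaintext Thrift bytes are parsed.
    return Util.readPageHeader(new ByteArrayInputStream(out.toByteArray()), decryptor, aad);
  }
}

The same null-means-plaintext convention applies to the ColumnIndex,
OffsetIndex, FileMetaData and ColumnMetaData variants added in this commit.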
diff --git a/pom.xml b/pom.xml
index e837d8a..eb41ca2 100644
--- a/pom.xml
+++ b/pom.xml
@@ -81,7 +81,7 @@
     <hadoop.version>2.7.3</hadoop.version>
     <cascading.version>2.7.1</cascading.version>
     <cascading3.version>3.1.2</cascading3.version>
-    <parquet.format.version>2.7.0</parquet.format.version>
+    <parquet.format.version>2.7.0-SNAPSHOT</parquet.format.version>
     <previous.version>1.7.0</previous.version>
     <thrift.executable>thrift</thrift.executable>
     <format.thrift.executable>thrift</format.thrift.executable>