Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2020/03/28 02:59:30 UTC

[GitHub] [beam] reuvenlax commented on a change in pull request #10767: Document Beam Schemas

reuvenlax commented on a change in pull request #10767: Document Beam Schemas
URL: https://github.com/apache/beam/pull/10767#discussion_r399610830
 
 

 ##########
 File path: website/src/documentation/programming-guide.md
 ##########
 @@ -1970,7 +1976,1076 @@ records.apply("WriteToText",
 See the [Beam-provided I/O Transforms]({{site.baseurl }}/documentation/io/built-in/)
 page for a list of the currently available I/O transforms.
 
-## 6. Data encoding and type safety {#data-encoding-and-type-safety}
+## 6. Schemas {#schemas}
+Often, the types of the records being processed have an obvious structure. Common Beam sources produce
+JSON, Avro, Protocol Buffer, or database row objects; all of these types have well-defined structures,
+structures that can often be determined by examining the type. Even within a pipeline, simple Java POJOs
+(or equivalent structures in other languages) are often used as intermediate types, and these also have a
+clear structure that can be inferred by inspecting the class. By understanding the structure of a pipeline's
+records, we can provide much more concise APIs for data processing.
+ 
+### 6.1. What is a schema {#what-is-a-schema}
+Most structured records share some common characteristics: 
+* They can be subdivided into separate named fields. Fields usually have string names, but sometimes - as in the case of indexed
+ tuples - have numerical indices instead.
+* There is a limited set of primitive types that a field can have. These often match primitive types in most programming
+ languages: int, long, string, etc.
+* Often a field type can be marked as optional (sometimes referred to as nullable) or required.
+
+In addition, records often have a nested structure. A nested structure occurs when a field itself has subfields, so the
+type of the field itself has a schema. Fields with array or map types are also a common feature of these structured
+records.
+
+For example, consider the following schema, representing actions in a fictitious e-commerce company:
+
+**Purchase**
+<table>
+  <thead>
+    <tr class="header">
+      <th><b>Field Name</b></th>
+      <th><b>Field Type</b></th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>userId</td>
+      <td>STRING</td>      
+    </tr>
+    <tr>
+      <td>itemId</td>
+      <td>INT64</td>      
+    </tr>
+    <tr>
+      <td>shippingAddress</td>
+      <td>ROW(ShippingAddress)</td>      
+    </tr>
+    <tr>
+      <td>cost</td>
+      <td>INT64</td>      
+    </tr>
+    <tr>
+      <td>transactions</td>
+      <td>ARRAY[ROW(Transaction)]</td>      
+    </tr>                  
+  </tbody>
+</table>
+<br/>
+
+**ShippingAddress**
+<table>
+  <thead>
+    <tr class="header">
+      <th><b>Field Name</b></th>
+      <th><b>Field Type</b></th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>streetAddress</td>
+      <td>STRING</td>      
+    </tr>
+    <tr>
+      <td>city</td>
+      <td>STRING</td>      
+    </tr>
+    <tr>
+      <td>state</td>
+      <td>nullable STRING</td>      
+    </tr>
+    <tr>
+      <td>country</td>
+      <td>STRING</td>      
+    </tr>
+    <tr>
+      <td>postCode</td>
+      <td>STRING</td>      
+    </tr>                  
+  </tbody>
+</table> 
+<br/>
+
+**Transaction**
+<table>
+  <thead>
+    <tr class="header">
+      <th><b>Field Name</b></th>
+      <th><b>Field Type</b></th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>bank</td>
+      <td>STRING</td>      
+    </tr>
+    <tr>
+      <td>purchaseAmount</td>
+      <td>DOUBLE</td>      
+    </tr>                  
+  </tbody>
+</table>
+<br/>
+
+Purchase event records are represented by the above purchase schema. Each purchase event contains a shipping address, which
+is a nested row containing its own schema. Each purchase also contains a list of credit-card transactions 
+(a list, because a purchase might be split across multiple credit cards); each item in the transaction list is a row 
+with its own schema.
+
+This provides an abstract description of the types involved, one that is independent of any specific programming
+language.
+
+Schemas provide us with a type system for Beam records that is independent of any specific programming-language type. There
+might be multiple Java classes that all have the same schema (for example a Protocol-Buffer class or a POJO class),
+and Beam will allow us to seamlessly convert between these types. Schemas also provide a simple way to reason about 
+types across different programming-language APIs.
+
+A `PCollection` with a schema does not need to have a `Coder` specified, as Beam knows how to encode and decode 
+Schema rows.
+
+### 6.2. Schemas for programming language types {#schemas-for-pl-types}
+While schemas themselves are language independent, they are designed to embed naturally into the programming languages
+of the Beam SDK being used. This allows Beam users to continue using native types while reaping the advantage of 
+having Beam understand their element schemas.
+
+{:.language-java}
+In Java you could use the following set of classes to represent the purchase schema. Beam will automatically
+infer the correct schema based on the members of the class.
+
+```java
+@DefaultSchema(JavaBeanSchema.class)
+public class Purchase {
+  public String getUserId();  // Returns the id of the user who made the purchase.
+  public long getItemId();  // Returns the identifier of the item that was purchased.
+  public ShippingAddress getShippingAddress();  // Returns the shipping address, a nested type.
+  public long getCostCents();  // Returns the cost of the item.
+  public List<Transaction> getTransactions();  // Returns the transactions that paid for this purchase (returns a list, since the purchase might be spread out over multiple credit cards).
+  
+  @SchemaCreate
+  public Purchase(String userId, long itemId, ShippingAddress shippingAddress, long costCents, 
+                  List<Transaction> transactions) {
+      ...
+  }
+}
+
+@DefaultSchema(JavaBeanSchema.class)
+public class ShippingAddress {
+  public String getStreetAddress();
+  public String getCity();
+  @Nullable public String getState();
+  public String getCountry();
+  public String getPostCode();
+  
+  @SchemaCreate
+  public ShippingAddress(String streetAddress, String city, @Nullable String state, String country,
+                         String postCode) {
+     ...
+  }
+}
+
+@DefaultSchema(JavaBeanSchema.class)
+public class Transaction {
+  public String getBank();
+  public double getPurchaseAmount();
+ 
+  @SchemaCreate
+  public Transaction(String bank, double purchaseAmount) {
+     ...
+  }
+}
+```
+
+Using JavaBean classes as above is one way to map a schema to Java classes. However, multiple Java classes might have
+the same schema, in which case the different Java types can often be used interchangeably. For example, the above
+`Transaction` class has the same schema as the following class:
+
+```java
+@DefaultSchema(JavaFieldSchema.class)
+public class TransactionPojo {
+  public String bank;
+  public double purchaseAmount;
+}
+```
+
+So if we had two `PCollection`s as follows:
+
+```java
+PCollection<Transaction> transactionBeans = readTransactionsAsJavaBean();
+PCollection<TransactionPojo> transactionPojos = readTransactionsAsPojo();
+```
+
+Then these two `PCollection`s would have the same schema, even though their Java types would be different. This means
+for example that the following two code snippets are valid:
+
+```java
+transactionBeans.apply(ParDo.of(new DoFn<...>() {
+   @ProcessElement public void process(@Element TransactionPojo pojo) {
+      ...
+   }
+}));
+```
+
+and
+```java
+transactionPojos.apply(ParDo.of(new DoFn<...>() {
+   @ProcessElement public void process(@Element Transaction row) {
+      ...
+   }
+}));
+```
+
+Even though in both cases the `@Element` parameter differs from the `PCollection`'s Java type, since the
+schemas are the same Beam will automatically make the conversion. The built-in `Convert` transform can also be used
+to translate between Java types of equivalent schemas, as detailed below.
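+
+For instance, a minimal sketch of such a conversion (hedged; it reuses the `transactionBeans` collection and the `TransactionPojo` class from above):
+
+```java
+// Convert between two Java types that share an equivalent schema. Beam checks
+// schema equivalence when the pipeline graph is constructed.
+PCollection<TransactionPojo> convertedPojos =
+    transactionBeans.apply(Convert.to(TransactionPojo.class));
+```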
+
+### 6.3. Schema definition {#schema-definition}
+The schema for a `PCollection` defines elements of that `PCollection` as an ordered list of named fields. Each field
+has a name, a type, and possibly a set of user options. The type of a field can be primitive or composite. The following
+are the primitive types currently supported by Beam:
+
+<table>
+  <thead>
+    <tr class="header">
+      <th>Type</th>
+      <th>Description</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>BYTE</td>
+      <td>An 8-bit signed value</td>
+    </tr>
+     <tr>
+       <td>INT16</td>
+       <td>A 16-bit signed value</td>
+     </tr>
+     <tr>
+       <td>INT32</td>
+       <td>A 32-bit signed value</td>
+     </tr>
+    <tr>
+      <td>INT64</td>
+       <td>A 64-bit signed value</td>
+     </tr>
+     <tr>
+       <td>DECIMAL</td>
+       <td>An arbitrary-precision decimal type</td>
+     </tr>
+     <tr>
+       <td>FLOAT</td>
+       <td>A 32-bit IEEE 754 floating point number</td>
+     </tr>
+     <tr>
+       <td>DOUBLE</td>
+       <td>A 64-bit IEEE 754 floating point number</td>
+     </tr>
+     <tr>
+       <td>STRING</td>
+       <td>A string</td>
+     </tr>
+     <tr>
+       <td>DATETIME</td>
+       <td>A timestamp represented as milliseconds since the epoch</td>
+     </tr>  
+     <tr>
+       <td>BOOLEAN</td>
+       <td>A boolean value</td>
+     </tr>
+     <tr>
+       <td>BYTES</td>
+       <td>A raw byte array</td>
+     </tr>             
+  </tbody>
+</table>
+<br/>
+
+A field can also reference a nested schema. In this case, the field will have type ROW, and the nested schema will 
+be an attribute of this field type.
+
+Three collection types are supported as field types: ARRAY, ITERABLE, and MAP:
+* **ARRAY** This represents a repeated value type, where the repeated elements can have any supported type. Arrays of
+nested rows are supported, as are arrays of arrays.
+* **ITERABLE** This is very similar to the array type in that it represents a repeated value, but one in which the full list of
+items is not known until iterated over. This is intended for the case where an iterable might be larger than the
+available memory and backed by external storage (for example, this can happen with the iterable returned by a
+`GroupByKey`). The repeated elements can have any supported type.
+* **MAP** This represents an associative map from keys to values. All schema types are supported for both keys and values.
+Values that contain map types cannot be used as keys in any grouping operation.
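+
+For illustration, here is one way the Purchase schema from earlier might be constructed programmatically (a sketch; it assumes `Schema` and `Schema.FieldType` from the `org.apache.beam.sdk.schemas` package are imported):
+
+```java
+// Build the nested schemas first, then reference them as ROW and ARRAY fields.
+Schema transactionSchema = Schema.builder()
+    .addStringField("bank")
+    .addDoubleField("purchaseAmount")
+    .build();
+
+Schema shippingAddressSchema = Schema.builder()
+    .addStringField("streetAddress")
+    .addStringField("city")
+    .addNullableField("state", FieldType.STRING)
+    .addStringField("country")
+    .addStringField("postCode")
+    .build();
+
+Schema purchaseSchema = Schema.builder()
+    .addStringField("userId")
+    .addInt64Field("itemId")
+    .addRowField("shippingAddress", shippingAddressSchema)
+    .addInt64Field("cost")
+    .addArrayField("transactions", FieldType.row(transactionSchema))
+    .build();
+```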
+
+### 6.4. Logical types {#logical-types}
+Users can extend the schema type system to add custom logical types that can be used as a field. A logical type is
+identified by a unique identifier and an argument. A logical type also specifies an underlying schema type to be used
+for storage, along with conversions to and from that type. As an example, a logical union can always be represented as
+a row with nullable fields, where the user ensures that only one of those fields is ever set at a time. However, this can
+be tedious and complex to manage. The OneOf logical type provides a value class that makes it easier to manage the type
+as a union, while still using a row with nullable fields as its underlying storage. Because each logical type has a
+unique identifier, it can be interpreted by other languages as well. More examples of logical types are listed
+below.
+
+#### 6.4.1. Defining a logical type {#defining-a-logical-type}
+To define a logical type you must specify a schema type to be used to represent the underlying type as well as a unique
+identifier for that type. A logical type imposes additional semantics on top of a schema type. For example, a logical
+type to represent nanosecond timestamps is represented as a schema containing an INT64 and an INT32 field. This schema
+alone does not say anything about how to interpret this type; however, the logical type tells you that this represents
+a nanosecond timestamp, with the INT64 field representing seconds and the INT32 field representing nanoseconds.
+
+Logical types are also specified by an argument, which allows creating a class of related types. For example, a
+limited-precision decimal type would have an integer argument indicating how many digits of precision are represented.
+The argument is represented by a schema type, so it can itself be a complex type.
+
+{:.language-java}
+In Java, a logical type is specified as a subclass of the `LogicalType` class. A custom Java class can be specified to
+represent the logical type, and conversion functions must be supplied to convert back and forth between this Java class
+and the underlying schema type representation. For example, the logical type representing nanosecond timestamps might
+be implemented as follows:
+
+```java
+// A logical type using java.time.Instant to represent nanosecond timestamps.
+public class TimestampNanos implements LogicalType<Instant, Row> {
+  // The underlying schema used to represent rows.
+  private final Schema schema =
+      Schema.builder().addInt64Field("seconds").addInt32Field("nanos").build();
+  @Override public String getIdentifier() { return "timestampNanos"; }
+  @Override public FieldType getBaseType() { return FieldType.row(schema); }
+
+  // Convert the representation type to the underlying Row type. Called by Beam when necessary.
+  @Override public Row toBaseType(Instant instant) {
+    return Row.withSchema(schema).addValues(instant.getEpochSecond(), instant.getNano()).build();
+  }
+
+  // Convert the underlying Row type to an Instant. Called by Beam when necessary.
+  @Override public Instant toInputType(Row base) {
+    return Instant.ofEpochSecond(base.getInt64("seconds"), base.getInt32("nanos"));
+  }
+
+     ...
+}
+```
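+
+Once defined, the logical type can be used as a field type like any other. A brief usage sketch (assuming the `TimestampNanos` class above):
+
+```java
+// The field is stored as the ROW{seconds: INT64, nanos: INT32} base type, but
+// is exposed to the user as a java.time.Instant.
+Schema schema = Schema.builder()
+    .addLogicalTypeField("transactionTime", new TimestampNanos())
+    .build();
+```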
+
+#### 6.4.2. Useful logical types {#built-in-logical-types}
+##### **EnumerationType**
+EnumerationType allows creating an enumeration type consisting of a set of named constants.
+
+```java
+Schema schema = Schema.builder()
+     ...
+     .addLogicalTypeField("color", EnumerationType.create("RED", "GREEN", "BLUE"))
+     .build();
+```
+
+The value of this field is stored in the row as an INT32 type; however, the logical type defines a value type that lets
+you access the enumeration either as a string or a value. For example:
+
+```java
+EnumerationType.Value enumValue = enumType.valueOf("RED");
+enumValue.getValue();  // Returns 0, the integer value of the constant.
+enumValue.toString();  // Returns "RED", the string value of the constant.
+```
+
+Given a row object with an enumeration field, you can also extract the field as the enumeration value.
+
+```java
+EnumerationType.Value enumValue = row.getLogicalTypeValue("color", EnumerationType.Value.class);
+```
+
+Automatic schema inference from Java POJOs and JavaBeans automatically converts Java enums to EnumerationType logical 
+types.
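+
+As a brief illustration of that inference (the `Color` enum and `Widget` class here are hypothetical examples, not part of the Beam API):
+
+```java
+public enum Color { RED, GREEN, BLUE }
+
+@DefaultSchema(JavaFieldSchema.class)
+public class Widget {
+  public String name;
+  // Inferred as an EnumerationType logical type rather than a plain INT32 field.
+  public Color color;
+}
+```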
+
+##### **OneOfType**
+OneOfType allows creating a disjoint union type over a set of schema fields. For example:
+
+```java
+Schema schema = Schema.builder()
+     ...
+     .addLogicalTypeField("oneOfField",
+        OneOfType.create(Field.of("intField", FieldType.INT32),
+                         Field.of("stringField", FieldType.STRING),
+                         Field.of("bytesField", FieldType.BYTES)))
+      .build();
+```
+
+The value of this field is stored in the row as another Row type, where all the fields are marked as nullable. The
+logical type, however, defines a Value object that contains an enumeration value indicating which field was set and allows
+getting just that field:
+
+```java
+// Returns an enumeration indicating all possible case values for the enum.
+// For the above example, this will be
+// EnumerationType.create("intField", "stringField", "bytesField");
+EnumerationType oneOfEnum = oneOfType.getCaseEnumType();
+
+// Creates an instance of the union with the string field set.
+OneOfType.Value oneOfValue = oneOfType.createValue("stringField", "foobar");
+
+// Handle the oneof
+switch (oneOfValue.getCaseType().toString()) {
+  case "intField":
+    return processInt(oneOfValue.getValue(Integer.class));
+  case "stringField":
+    return processString(oneOfValue.getValue(String.class));
+  case "bytesField":
+    return processBytes(oneOfValue.getValue(byte[].class));
+}
+```
+
+In the above example we used the field names in the switch statement for clarity; however, the enum integer values could
+also be used.
+
+### 6.5. Creating Schemas {#creating-schemas}
+
+In order to take advantage of schemas, your `PCollection`s must have schemas attached to them. Often, the source
+itself will attach a schema to the `PCollection`. For example, when using `AvroIO` to read Avro files, the source can
+automatically infer a Beam schema from the Avro schema and attach that to the Beam `PCollection`. However, not all sources
+produce schemas. In addition, Beam pipelines often have intermediate stages and types, and those can also benefit from
+the expressiveness of schemas.
+ 
+#### 6.5.1. Inferring schemas {#inferring-schemas}
+{:.language-java}
+Beam is able to infer schemas from a variety of common Java types. The `@DefaultSchema` annotation can be used to tell
+Beam to infer schemas from a specific type. The annotation takes a `SchemaProvider` as an argument, and `SchemaProvider` 
+classes are already built in for common Java types. The `SchemaRegistry` can also be invoked programmatically for cases 
+where it is not practical to annotate the Java type itself.
+
+##### **Java POJOs**
+A POJO (Plain Old Java Object) is a Java object that is not bound by any restriction other than the Java Language
+Specification. A POJO can contain member variables that are primitives, that are other POJOs, or that are collections, maps, or
+arrays thereof. POJOs do not have to extend prespecified classes or implement any specific interfaces.
+
+If a POJO class is annotated with `@DefaultSchema(JavaFieldSchema.class)`, Beam will automatically infer a schema for
+this class. Nested classes are supported, as are classes with `List`, array, and `Map` fields.
+
+For example, annotating the following class tells Beam to infer a schema from this POJO class and apply it to any 
+`PCollection<TransactionPojo>`.
+
+```java
+@DefaultSchema(JavaFieldSchema.class)
+public class TransactionPojo {
+  public final String bank;
+  public final double purchaseAmount;
+
+  @SchemaCreate
+  public TransactionPojo(String bank, double purchaseAmount) {
+    this.bank = bank;
+    this.purchaseAmount = purchaseAmount;
+  }
+}
+// Beam will automatically infer the correct schema for this PCollection. No coder is needed as a result.
+PCollection<TransactionPojo> pojos = readPojos();
+```
+
+The `@SchemaCreate` annotation tells Beam that this constructor can be used to create instances of `TransactionPojo`,
+assuming that the constructor parameters have the same names as the field names. `@SchemaCreate` can also be used to annotate
+static factory methods on the class, allowing the constructor to remain private (as sketched below). If there is no `@SchemaCreate`
+annotation then all the fields must be non-final and the class must have a zero-argument constructor.
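+
+A minimal sketch of the static-factory variant (hypothetical, adapting the `TransactionPojo` class above):
+
+```java
+@DefaultSchema(JavaFieldSchema.class)
+public class TransactionPojo {
+  public final String bank;
+  public final double purchaseAmount;
+
+  // The constructor can stay private; Beam uses the annotated factory instead.
+  private TransactionPojo(String bank, double purchaseAmount) {
+    this.bank = bank;
+    this.purchaseAmount = purchaseAmount;
+  }
+
+  @SchemaCreate
+  public static TransactionPojo create(String bank, double purchaseAmount) {
+    return new TransactionPojo(bank, purchaseAmount);
+  }
+}
+```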
+
+There are a couple of other useful annotations that affect how Beam infers schemas. By default the inferred schema field names
+will match those of the class field names. However, `@SchemaFieldName` can be used to specify a different name to
+be used for the schema field. `@SchemaIgnore` can be used to mark specific class fields as excluded from the inferred
+schema. For example, it's common to have ephemeral fields in a class that should not be included in a schema
+(e.g. caching the hash value to prevent expensive recomputation of the hash), and `@SchemaIgnore` can be used to
+exclude these fields.
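+
+A brief sketch of how these annotations might be applied (the renamed field and cached hash are illustrative):
+
+```java
+@DefaultSchema(JavaFieldSchema.class)
+public class TransactionPojo {
+  // Exposed in the schema as "bankName" rather than "bank".
+  @SchemaFieldName("bankName")
+  public String bank;
+  public double purchaseAmount;
+
+  // A cached hash value; excluded from the inferred schema.
+  @SchemaIgnore
+  public int cachedHash;
+}
+```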
+
+In some cases it is not convenient to annotate the POJO class, for example if the POJO is in a different package that is
+not owned by the Beam pipeline author. In these cases the schema inference can be triggered programmatically in the
+pipeline's main function as follows:
+
+```java
+pipeline.getSchemaRegistry().registerPOJO(TransactionPojo.class);
+```
+
+##### **Java Beans**
+Java Beans are a de-facto standard for creating reusable property classes in Java. While the full
+standard has many characteristics, the key ones are that all properties are accessed via getter and setter methods, and
+the name format for these getters and setters is standardized. A Java Bean class can be annotated with
+`@DefaultSchema(JavaBeanSchema.class)` and Beam will automatically infer a schema for this class. For example:
+
+```java
+@DefaultSchema(JavaBeanSchema.class)
+public class TransactionBean {
+  public TransactionBean() { ... }
+  public String getBank() { ... }
+  public void setBank(String bank) { ... }
+  public double getPurchaseAmount() { ... }
+  public void setPurchaseAmount(double purchaseAmount) { ... }
+}
+// Beam will automatically infer the correct schema for this PCollection. No coder is needed as a result.
+PCollection<TransactionBean> beans = readBeans();
+```
+
+The `@SchemaCreate` annotation can be used to specify a constructor or a static factory method, in which case the 
+setters and zero-argument constructor can be omitted.
+
+```java
+@DefaultSchema(JavaBeanSchema.class)
+public class TransactionBean {
+  @SchemaCreate
+  public TransactionBean(String bank, double purchaseAmount) { ... }
+  public String getBank() { ... }
+  public double getPurchaseAmount() { ... }
+}
+```
+
+`@SchemaFieldName` and `@SchemaIgnore` can be used to alter the schema inferred, just like with POJO classes.
+
+##### **AutoValue**
+Java value classes are notoriously difficult to generate correctly. There is a lot of boilerplate you must create in
+order to properly implement a value class. AutoValue is a popular framework for easily generating such classes by
+implementing a simple abstract base class.
+
+Beam can infer a schema from an AutoValue class. For example:
+
+```java
+@DefaultSchema(AutoValueSchema.class)
+@AutoValue
+public abstract class TransactionValue {
+  public abstract String getBank(); 
+  public abstract double getPurchaseAmount();
+}
+```
+
+This is all that’s needed to generate a simple AutoValue class, and the above `@DefaultSchema` annotation tells Beam to
+infer a schema from it. This also allows AutoValue elements to be used inside of `PCollection`s.
+
+`@SchemaFieldName` and `@SchemaIgnore` can be used to alter the schema inferred.
+
+### 6.6. Using Schemas {#using-schemas}
+A schema on a `PCollection` enables a rich variety of relational transforms. The fact that each record is composed of
+named fields allows for simple and readable aggregations that reference fields by name, similar to the aggregations in
+a SQL expression. 
+
+#### 6.6.1. Field selection syntax {#field-selection-syntax}
+The advantage of schemas is that they allow referencing of element fields by name. Beam provides a selection syntax for
+referencing fields, including nested and repeated fields. This syntax is used by all of the schema transforms when 
+referencing the fields they operate on. The syntax can also be used inside of a DoFn to specify which schema fields to
+process.
+
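+For instance, a hedged sketch of processing a single field inside a `DoFn` using `@FieldAccess` (applied to a `PCollection` of the Purchase objects described earlier):
+
+```java
+purchases.apply(ParDo.of(new DoFn<Purchase, String>() {
+  // Beam extracts just the userId schema field and passes it to the parameter.
+  @ProcessElement
+  public void process(@FieldAccess("userId") String userId, OutputReceiver<String> out) {
+    out.output(userId);
+  }
+}));
+```
+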
+Addressing fields by name still retains type safety as Beam will check that schemas match at the time the pipeline graph
+is constructed. If a field is specified that does not exist in the schema, the pipeline will fail to launch. In addition,
+if a field is specified with a type that does not match the type of that field in the schema, the pipeline will fail to
+launch.
+
+##### **Top-level fields**
+In order to select a field at the top level of a schema, the name of the field is specified. For example, to select just
+the user ids from a `PCollection` of purchases one would write (using the `Select` transform)
+
+```java
+purchases.apply(Select.fieldNames("userId"));
+```
+
+##### **Nested fields**
+Individual nested fields can be specified using the dot operator. For example, to select just the postal code from the
+shipping address one would write
+
+```java
+purchases.apply(Select.fieldNames("shippingAddress.postCode"));
+```
+
+##### **Wildcards**
+The * operator can be specified at any nesting level to represent all fields at that level. For example, to select all
+shipping-address fields one would write
+
+```java
+purchases.apply(Select.fieldNames("shippingAddress.*"));
+```
+
+##### **Arrays**
+An array field, where the array element type is a row, can also have subfields of the element type addressed. When 
+selected, the result is an array of the selected subfield type. For example
+
+```java
+purchases.apply(Select.fieldNames("transactions[].bank"));
+```
+
+This will result in a row containing an array field with element type STRING, containing the list of banks for each
+transaction.
+
+While the use of [] brackets in the selector is recommended to make it clear that array elements are being selected,
+they can be omitted for brevity. In the future, array slicing will be supported, allowing selection of portions of the
+array.
+
+##### **Maps**
+A map field, where the value type is a row, can also have subfields of the value type addressed. When selected, the
+result is a map where the keys are the same as in the original map but the value is the specified type. Similar to
+arrays, the use of {} curly brackets in the selector is recommended to make it clear that map value elements are being
+selected; they can be omitted for brevity. In the future, map key selectors will be supported, allowing selection of
+specific keys from the map.
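+
+For example, a hedged sketch (it assumes a hypothetical `purchasesByType` collection whose elements contain a map field named `purchases` from a string key to a Purchase row):
+
+```java
+// Selects the userId subfield of each map value; the result is a map from the
+// original keys to STRING values.
+purchasesByType.apply(Select.fieldNames("purchases{}.userId"));
+```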
 
 Review comment:
   Added

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services