Posted to commits@iotdb.apache.org by ja...@apache.org on 2020/06/05 02:40:15 UTC
[incubator-iotdb] 01/01: add upsert alias
This is an automated email from the ASF dual-hosted git repository.
jackietien pushed a commit to branch UpsertAlias
in repository https://gitbox.apache.org/repos/asf/incubator-iotdb.git
commit 449a76776477817801d69fee8f159431d0e1d816
Author: JackieTien97 <Ja...@foxmail.com>
AuthorDate: Fri Jun 5 10:39:42 2020 +0800
add upsert alias
---
docs/SystemDesign/SchemaManager/SchemaManager.md | 18 ++-
.../DDL Data Definition Language.md | 6 +-
.../zh/SystemDesign/SchemaManager/SchemaManager.md | 13 ++-
.../DDL Data Definition Language.md | 6 +-
.../org/apache/iotdb/db/qp/strategy/SqlBase.g4 | 10 +-
.../org/apache/iotdb/db/metadata/MLogWriter.java | 6 +
.../org/apache/iotdb/db/metadata/MManager.java | 128 +++++++++++++++------
.../iotdb/db/metadata/MetadataOperationType.java | 1 +
.../apache/iotdb/db/qp/executor/PlanExecutor.java | 24 ++--
.../db/qp/logical/sys/AlterTimeSeriesOperator.java | 9 ++
.../db/qp/physical/sys/AlterTimeSeriesPlan.java | 14 ++-
.../iotdb/db/qp/strategy/LogicalGenerator.java | 11 +-
.../iotdb/db/qp/strategy/PhysicalGenerator.java | 1 +
.../apache/iotdb/db/integration/IoTDBAliasIT.java | 92 ++++++++++++---
14 files changed, 253 insertions(+), 86 deletions(-)
diff --git a/docs/SystemDesign/SchemaManager/SchemaManager.md b/docs/SystemDesign/SchemaManager/SchemaManager.md
index c8b6616..0734672 100644
--- a/docs/SystemDesign/SchemaManager/SchemaManager.md
+++ b/docs/SystemDesign/SchemaManager/SchemaManager.md
@@ -34,7 +34,7 @@ Metadata of IoTDB is managed by MManager, including:
> tag key -> tag value -> timeseries LeafMNode
-In the process of initializing, MManager will replay the mlog to load the metadata into memory. There are six types of operation log:
+In the process of initializing, MManager will replay the mlog to load the metadata into memory. There are seven types of operation log:
> At the beginning of each operation, it will try to obtain the write lock of MManager, and release it after the operation.
* Create Timeseries
@@ -86,8 +86,11 @@ In the process of initializing, MManager will replay the mlog to load the metada
* Change the offset of Timeseries
* modify the offset of the timeseries's LeafMNode
+* Change the alias of Timeseries
+ * modify the alias of the timeseries's LeafMNode and update the aliasMap in its parent node.
-In addition to these six operation that are needed to be logged, there are another six alter operation to tag/attribute info of timeseries.
+
+In addition to these seven operations that need to be logged, there are another six alter operations on the tag/attribute info of timeseries.
> Same as above, at the beginning of each operation, it will try to obtain the write lock of MManager, and release it after the operation.
@@ -127,8 +130,10 @@ In addition to these six operation that are needed to be logged, there are anoth
* iterate over the attributes to be added; if one already exists, throw an exception, otherwise add it
* persist the new attribute information into tlog
-* upsert tags/attributes
+* upsert alias/tags/attributes
* obtain the LeafMNode of that timeseries
+ * change the alias of the timeseries's LeafMNode and update the aliasMap in its parent node if it exists
+ * persist the updated alias into mlog
* read tag information through the offset in LeafMNode
* iterate over the tags and attributes to be upserted; if one already exists, use the new value to update it, otherwise add it
* persist the updated tags and attributes information into tlog
@@ -219,6 +224,13 @@ sql examples and the corresponding mlog record:
> format: 10,path,[change offset]
+* alter timeseries root.turbine.d1.s1 UPSERT ALIAS=newAlias
+
+ > mlog: 13,root.turbine.d1.s1,newAlias
+
+ > format: 13,path,[new alias]
+
+
## TLog
* org.apache.iotdb.db.metadata.TagLogFile
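The replay of the new mlog record type described above can be sketched as a toy parser. This is illustrative only, not the actual MManager replay code; the field order follows the `format: 13,path,[new alias]` line in the doc, and the class and method names here are ours:

```java
public class MlogReplaySketch {

  // splits a mlog record into [opCode, path, alias]; format: 13,path,[new alias]
  static String[] parse(String line) {
    return line.split(",");
  }

  public static void main(String[] args) {
    String[] fields = parse("13,root.turbine.d1.s1,newAlias");
    // "13" is the CHANGE_ALIAS operation code introduced by this commit
    if ("13".equals(fields[0])) {
      System.out.println("changeAlias(" + fields[1] + ", " + fields[2] + ")");
    }
  }
}
```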
diff --git a/docs/UserGuide/Operation Manual/DDL Data Definition Language.md b/docs/UserGuide/Operation Manual/DDL Data Definition Language.md
index 739865d..3c12276 100644
--- a/docs/UserGuide/Operation Manual/DDL Data Definition Language.md
+++ b/docs/UserGuide/Operation Manual/DDL Data Definition Language.md
@@ -111,10 +111,10 @@ ALTER timeseries root.turbine.d1.s1 ADD TAGS tag3=v3, tag4=v4
```
ALTER timeseries root.turbine.d1.s1 ADD ATTRIBUTES attr3=v3, attr4=v4
```
-* upsert tags and attributes
-> add new key-value if the key doesn't exist, otherwise, update the old one with new value.
+* upsert alias, tags and attributes
+> add the alias or a new key-value pair if the alias or key doesn't exist; otherwise, update the old one with the new value.
```
-ALTER timeseries root.turbine.d1.s1 UPSERT TAGS(tag3=v3, tag4=v4) ATTRIBUTES(attr3=v3, attr4=v4)
+ALTER timeseries root.turbine.d1.s1 UPSERT ALIAS=newAlias TAGS(tag3=v3, tag4=v4) ATTRIBUTES(attr3=v3, attr4=v4)
```
## Show Timeseries
diff --git a/docs/zh/SystemDesign/SchemaManager/SchemaManager.md b/docs/zh/SystemDesign/SchemaManager/SchemaManager.md
index afda6c1..493b8d4 100644
--- a/docs/zh/SystemDesign/SchemaManager/SchemaManager.md
+++ b/docs/zh/SystemDesign/SchemaManager/SchemaManager.md
@@ -84,9 +84,12 @@ Metadata of IoTDB is managed by MManager, including the following parts:
* Change the tag info offset of Timeseries
* modify the offset in the timeseries's LeafMNode
+
+* Change the alias of Timeseries
+ * update the alias field in the LeafMNode and the aliasMap in its parent node
-Besides these six operations that need to be logged, there are another six operations that update the tag/attribute info of timeseries. Likewise, each operation first acquires the write lock of the whole metadata (held in MManager) and releases it afterwards:
+Besides these seven operations that need to be logged, there are another six operations that update the tag/attribute info of timeseries. Likewise, each operation first acquires the write lock of the whole metadata (held in MManager) and releases it afterwards:
* rename tag or attribute
* obtain the LeafMNode of that timeseries
@@ -126,6 +129,8 @@ Metadata of IoTDB is managed by MManager, including the following parts:
* upsert tags and attributes
* obtain the LeafMNode of that timeseries
+ * update the alias field in the LeafMNode and the aliasMap in its parent node
+ * persist the updated alias into mlog
* read the tag and attribute information through the offset in the LeafMNode
* iterate over the tags and attributes to be upserted; if one already exists, use the new value to update it, otherwise add it
* persist the updated tag and attribute information into tlog
@@ -216,6 +221,12 @@ IoTDB's metadata is organized as a directory tree; the second-to-last level is the device level
> format: 10,path,[change offset]
+* alter timeseries root.turbine.d1.s1 UPSERT ALIAS=newAlias
+
+ > mlog: 13,root.turbine.d1.s1,newAlias
+
+ > format: 13,path,[new alias]
+
## Tag File
* org.apache.iotdb.db.metadata.TagLogFile
diff --git a/docs/zh/UserGuide/Operation Manual/DDL Data Definition Language.md b/docs/zh/UserGuide/Operation Manual/DDL Data Definition Language.md
index 8e47ce3..891119a 100644
--- a/docs/zh/UserGuide/Operation Manual/DDL Data Definition Language.md
+++ b/docs/zh/UserGuide/Operation Manual/DDL Data Definition Language.md
@@ -109,10 +109,10 @@ ALTER timeseries root.turbine.d1.s1 ADD TAGS tag3=v3, tag4=v4
```
ALTER timeseries root.turbine.d1.s1 ADD ATTRIBUTES attr3=v3, attr4=v4
```
-* upsert tags and attributes
-> insert the tag or attribute if it does not exist; otherwise, update the old value with the new one
+* upsert alias, tags and attributes
+> insert the alias, tag or attribute if it does not exist; otherwise, update the old value with the new one
```
-ALTER timeseries root.turbine.d1.s1 UPSERT TAGS(tag2=newV2, tag3=v3) ATTRIBUTES(attr3=v3, attr4=v4)
+ALTER timeseries root.turbine.d1.s1 UPSERT ALIAS=newAlias TAGS(tag2=newV2, tag3=v3) ATTRIBUTES(attr3=v3, attr4=v4)
```
## Show Timeseries
diff --git a/server/src/main/antlr4/org/apache/iotdb/db/qp/strategy/SqlBase.g4 b/server/src/main/antlr4/org/apache/iotdb/db/qp/strategy/SqlBase.g4
index 733e721..08e9721 100644
--- a/server/src/main/antlr4/org/apache/iotdb/db/qp/strategy/SqlBase.g4
+++ b/server/src/main/antlr4/org/apache/iotdb/db/qp/strategy/SqlBase.g4
@@ -130,7 +130,11 @@ alterClause
| DROP ID (COMMA ID)*
| ADD TAGS property (COMMA property)*
| ADD ATTRIBUTES property (COMMA property)*
- | UPSERT tagClause attributeClause
+ | UPSERT aliasClause tagClause attributeClause
+ ;
+
+aliasClause
+ : (ALIAS OPERATOR_EQ ID)?
;
attributeClauses
@@ -629,6 +633,10 @@ UPSERT
: U P S E R T
;
+ALIAS
+ : A L I A S
+ ;
+
VALUES
: V A L U E S
;
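Because `aliasClause` expands to `(ALIAS OPERATOR_EQ ID)?`, the pre-existing `UPSERT TAGS(...) ATTRIBUTES(...)` syntax still parses unchanged. The two accepted shapes can be approximated with a toy regex check (illustrative only, not the generated ANTLR parser; the class name is ours):

```java
import java.util.regex.Pattern;

public class UpsertClauseShapeDemo {
  // rough approximation of: UPSERT aliasClause tagClause attributeClause
  static final Pattern UPSERT = Pattern.compile(
      "UPSERT(\\s+ALIAS=\\w+)?(\\s+TAGS\\([^)]*\\))?(\\s+ATTRIBUTES\\([^)]*\\))?");

  static boolean accepts(String clause) {
    return UPSERT.matcher(clause).matches();
  }

  public static void main(String[] args) {
    // new form with alias, and old form without it, both accepted
    System.out.println(accepts("UPSERT ALIAS=newAlias TAGS(tag3=v3) ATTRIBUTES(attr3=v3)"));
    System.out.println(accepts("UPSERT TAGS(tag3=v3) ATTRIBUTES(attr3=v3)"));
  }
}
```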
diff --git a/server/src/main/java/org/apache/iotdb/db/metadata/MLogWriter.java b/server/src/main/java/org/apache/iotdb/db/metadata/MLogWriter.java
index c61a1bf..72ee54b 100644
--- a/server/src/main/java/org/apache/iotdb/db/metadata/MLogWriter.java
+++ b/server/src/main/java/org/apache/iotdb/db/metadata/MLogWriter.java
@@ -120,6 +120,12 @@ public class MLogWriter {
writer.flush();
}
+ public void changeAlias(String path, String alias) throws IOException {
+ writer.write(String.format("%s,%s,%s", MetadataOperationType.CHANGE_ALIAS, path, alias));
+ writer.newLine();
+ writer.flush();
+ }
+
public static void upgradeMLog(String schemaDir, String logFileName) throws IOException {
File logFile = SystemFileFactory.INSTANCE.getFile(schemaDir + File.separator + logFileName);
File tmpLogFile = SystemFileFactory.INSTANCE.getFile(logFile.getAbsolutePath() + ".tmp");
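The new `changeAlias` method appends one CSV-style record per alias change. A minimal sketch of the record it produces follows; the op code `"13"` matches `MetadataOperationType.CHANGE_ALIAS` added in this commit, while the helper name `formatChangeAlias` is ours, not part of the patch:

```java
public class ChangeAliasRecordDemo {
  // CHANGE_ALIAS op code introduced by this commit
  static final String CHANGE_ALIAS = "13";

  // mirrors the String.format call in MLogWriter.changeAlias
  static String formatChangeAlias(String path, String alias) {
    return String.format("%s,%s,%s", CHANGE_ALIAS, path, alias);
  }

  public static void main(String[] args) {
    // matches the "mlog: 13,root.turbine.d1.s1,newAlias" example in the docs
    System.out.println(formatChangeAlias("root.turbine.d1.s1", "newAlias"));
  }
}
```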
diff --git a/server/src/main/java/org/apache/iotdb/db/metadata/MManager.java b/server/src/main/java/org/apache/iotdb/db/metadata/MManager.java
index 63f814a..ba1105c 100644
--- a/server/src/main/java/org/apache/iotdb/db/metadata/MManager.java
+++ b/server/src/main/java/org/apache/iotdb/db/metadata/MManager.java
@@ -18,15 +18,39 @@
*/
package org.apache.iotdb.db.metadata;
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileReader;
+import java.io.IOException;
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Deque;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeSet;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.iotdb.db.conf.IoTDBConfig;
-import org.apache.iotdb.db.conf.IoTDBConstant;
import org.apache.iotdb.db.conf.IoTDBDescriptor;
import org.apache.iotdb.db.conf.adapter.ActiveTimeSeriesCounter;
import org.apache.iotdb.db.conf.adapter.IoTDBConfigDynamicAdapter;
import org.apache.iotdb.db.engine.StorageEngine;
import org.apache.iotdb.db.engine.fileSystem.SystemFileFactory;
import org.apache.iotdb.db.exception.ConfigAdjusterException;
-import org.apache.iotdb.db.exception.metadata.*;
+import org.apache.iotdb.db.exception.metadata.DeleteFailedException;
+import org.apache.iotdb.db.exception.metadata.IllegalPathException;
+import org.apache.iotdb.db.exception.metadata.MetadataException;
+import org.apache.iotdb.db.exception.metadata.PathNotExistException;
+import org.apache.iotdb.db.exception.metadata.StorageGroupAlreadySetException;
+import org.apache.iotdb.db.exception.metadata.StorageGroupNotSetException;
import org.apache.iotdb.db.metadata.mnode.InternalMNode;
import org.apache.iotdb.db.metadata.mnode.LeafMNode;
import org.apache.iotdb.db.metadata.mnode.MNode;
@@ -47,13 +71,6 @@ import org.apache.iotdb.tsfile.utils.Pair;
import org.apache.iotdb.tsfile.write.schema.MeasurementSchema;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
-import java.io.BufferedReader;
-import java.io.File;
-import java.io.FileReader;
-import java.io.IOException;
-import java.util.*;
-import java.util.Map.Entry;
-import java.util.concurrent.locks.ReentrantReadWriteLock;
/**
* This class takes the responsibility of serialization of all the metadata info and persistent it
@@ -261,6 +278,9 @@ public class MManager {
case MetadataOperationType.CHANGE_OFFSET:
changeOffset(args[1], Long.parseLong(args[2]));
break;
+ case MetadataOperationType.CHANGE_ALIAS:
+ changeAlias(args[1], args[2]);
+ break;
default:
logger.error("Unrecognizable command {}", cmd);
}
@@ -354,7 +374,7 @@ public class MManager {
* Delete all timeseries under the given path, may cross different storage group
*
* @param prefixPath path to be deleted, could be root or a prefix path or a full path
- * @return The String is the deletion failed Timeseries
+ * @return The String is the deletion failed Timeseries
*/
public String deleteTimeseries(String prefixPath) throws MetadataException {
lock.writeLock().lock();
@@ -669,7 +689,8 @@ public class MManager {
}
/**
- * Similar to method getAllTimeseriesName(), but return Path instead of String in order to include alias.
+ * Similar to method getAllTimeseriesName(), but return Path instead of String in order to include
+ * alias.
*/
public List<Path> getAllTimeseriesPath(String prefixPath) throws MetadataException {
lock.readLock().lock();
@@ -696,7 +717,7 @@ public class MManager {
* To calculate the count of nodes in the given level for given prefix path.
*
* @param prefixPath a prefix path or a full path, can not contain '*'
- * @param level the level can not be smaller than the level of the prefixPath
+ * @param level the level can not be smaller than the level of the prefixPath
*/
public int getNodesCountInGivenLevel(String prefixPath, int level) throws MetadataException {
lock.readLock().lock();
@@ -1010,17 +1031,29 @@ public class MManager {
}
}
+ public void changeAlias(String path, String alias) throws MetadataException {
+ lock.writeLock().lock();
+ try {
+ LeafMNode leafMNode = (LeafMNode) mtree.getNodeByPath(path);
+ leafMNode.getParent().deleteAliasChild(leafMNode.getAlias());
+ leafMNode.getParent().addAlias(alias, leafMNode);
+ leafMNode.setAlias(alias);
+ } finally {
+ lock.writeLock().unlock();
+ }
+ }
+
/**
* upsert tags and attributes key-value for the timeseries if the key has existed, just use the
* new value to update it.
*
+ * @param alias newly added alias
* @param tagsMap newly added tags map
* @param attributesMap newly added attributes map
* @param fullPath timeseries
*/
- public void upsertTagsAndAttributes(
- Map<String, String> tagsMap, Map<String, String> attributesMap, String fullPath)
- throws MetadataException, IOException {
+ public void upsertTagsAndAttributes(String alias, Map<String, String> tagsMap,
+ Map<String, String> attributesMap, String fullPath) throws MetadataException, IOException {
lock.writeLock().lock();
try {
MNode mNode = mtree.getNodeByPath(fullPath);
@@ -1028,15 +1061,34 @@ public class MManager {
throw new PathNotExistException(fullPath);
}
LeafMNode leafMNode = (LeafMNode) mNode;
+ // upsert alias
+ if (alias != null) {
+ if (leafMNode.getParent().hasChild(alias)) {
+ throw new MetadataException("The alias already exists.");
+ }
+ if (leafMNode.getAlias() != null) {
+ leafMNode.getParent().deleteAliasChild(leafMNode.getAlias());
+ }
+ leafMNode.getParent().addAlias(alias, leafMNode);
+ leafMNode.setAlias(alias);
+ // persist the alias change into mlog
+ logWriter.changeAlias(fullPath, alias);
+ }
+ // nothing more to do if neither tags nor attributes are given
+ if (tagsMap == null && attributesMap == null) {
+ return;
+ }
// no tag or attribute, we need to add a new record in log
if (leafMNode.getOffset() < 0) {
long offset = tagLogFile.write(tagsMap, attributesMap);
logWriter.changeOffset(fullPath, offset);
leafMNode.setOffset(offset);
// update inverted Index map
- for (Entry<String, String> entry : tagsMap.entrySet()) {
- tagIndex.computeIfAbsent(entry.getKey(), k -> new HashMap<>())
- .computeIfAbsent(entry.getValue(), v -> new HashSet<>()).add(leafMNode);
+ if (tagsMap != null) {
+ for (Entry<String, String> entry : tagsMap.entrySet()) {
+ tagIndex.computeIfAbsent(entry.getKey(), k -> new HashMap<>())
+ .computeIfAbsent(entry.getValue(), v -> new HashSet<>()).add(leafMNode);
+ }
}
return;
}
@@ -1044,28 +1096,32 @@ public class MManager {
Pair<Map<String, String>, Map<String, String>> pair =
tagLogFile.read(config.getTagAttributeTotalSize(), leafMNode.getOffset());
- for (Entry<String, String> entry : tagsMap.entrySet()) {
- String key = entry.getKey();
- String value = entry.getValue();
- String beforeValue = pair.left.get(key);
- pair.left.put(key, value);
- // if the key has existed and the value is not equal to the new one
- // we should remove before key-value from inverted index map
- if (beforeValue != null && !beforeValue.equals(value)) {
- tagIndex.get(key).get(beforeValue).remove(leafMNode);
- if (tagIndex.get(key).get(beforeValue).isEmpty()) {
- tagIndex.get(key).remove(beforeValue);
+ if (tagsMap != null) {
+ for (Entry<String, String> entry : tagsMap.entrySet()) {
+ String key = entry.getKey();
+ String value = entry.getValue();
+ String beforeValue = pair.left.get(key);
+ pair.left.put(key, value);
+ // if the key has existed and the value is not equal to the new one
+ // we should remove before key-value from inverted index map
+ if (beforeValue != null && !beforeValue.equals(value)) {
+ tagIndex.get(key).get(beforeValue).remove(leafMNode);
+ if (tagIndex.get(key).get(beforeValue).isEmpty()) {
+ tagIndex.get(key).remove(beforeValue);
+ }
}
- }
- // if the key doesn't exist or the value is not equal to the new one
- // we should add a new key-value to inverted index map
- if (beforeValue == null || !beforeValue.equals(value)) {
- tagIndex.computeIfAbsent(key, k -> new HashMap<>())
- .computeIfAbsent(value, v -> new HashSet<>()).add(leafMNode);
+ // if the key doesn't exist or the value is not equal to the new one
+ // we should add a new key-value to inverted index map
+ if (beforeValue == null || !beforeValue.equals(value)) {
+ tagIndex.computeIfAbsent(key, k -> new HashMap<>())
+ .computeIfAbsent(value, v -> new HashSet<>()).add(leafMNode);
+ }
}
}
- pair.left.putAll(tagsMap);
+ if (tagsMap != null) {
+ pair.left.putAll(tagsMap);
+ }
- pair.right.putAll(attributesMap);
+ if (attributesMap != null) {
+ pair.right.putAll(attributesMap);
+ }
// persist the change to disk
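The alias-upsert portion of the MManager change boils down to three map operations on the parent node. A standalone sketch with plain HashMaps follows; the `children`/`aliasMap` fields are our simplification of the real InternalMNode state, not the actual class:

```java
import java.util.HashMap;
import java.util.Map;

public class AliasUpsertSketch {
  static Map<String, String> children = new HashMap<>();  // measurement name -> node id
  static Map<String, String> aliasMap = new HashMap<>();  // alias -> measurement name
  static String currentAlias = null;                      // the leaf's own alias field

  static void upsertAlias(String measurement, String alias) {
    // an alias may not collide with an existing child name
    if (children.containsKey(alias)) {
      throw new IllegalArgumentException("The alias already exists.");
    }
    if (currentAlias != null) {
      aliasMap.remove(currentAlias);    // drop the old alias mapping, if any
    }
    aliasMap.put(alias, measurement);   // register the new alias in the parent
    currentAlias = alias;               // update the leaf's alias field
  }

  public static void main(String[] args) {
    children.put("s3", "leaf-s3");
    upsertAlias("s3", "power");
    upsertAlias("s3", "powerNew");      // a second upsert replaces the first alias
    System.out.println(aliasMap);       // only powerNew remains mapped
  }
}
```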
diff --git a/server/src/main/java/org/apache/iotdb/db/metadata/MetadataOperationType.java b/server/src/main/java/org/apache/iotdb/db/metadata/MetadataOperationType.java
index 0ffdcb7..3700972 100644
--- a/server/src/main/java/org/apache/iotdb/db/metadata/MetadataOperationType.java
+++ b/server/src/main/java/org/apache/iotdb/db/metadata/MetadataOperationType.java
@@ -30,4 +30,5 @@ public class MetadataOperationType {
public static final String SET_TTL = "10";
public static final String DELETE_STORAGE_GROUP = "11";
public static final String CHANGE_OFFSET = "12";
+ public static final String CHANGE_ALIAS = "13";
}
diff --git a/server/src/main/java/org/apache/iotdb/db/qp/executor/PlanExecutor.java b/server/src/main/java/org/apache/iotdb/db/qp/executor/PlanExecutor.java
index 08ef238..5c1123c 100644
--- a/server/src/main/java/org/apache/iotdb/db/qp/executor/PlanExecutor.java
+++ b/server/src/main/java/org/apache/iotdb/db/qp/executor/PlanExecutor.java
@@ -918,7 +918,8 @@ public class PlanExecutor implements IPlanExecutor {
insertPlan.setSchemasAndTransferType(schemas);
StorageEngine.getInstance().insert(insertPlan);
if (insertPlan.getFailedMeasurements() != null) {
- throw new StorageEngineException("failed to insert points " + insertPlan.getFailedMeasurements());
+ throw new StorageEngineException(
+ "failed to insert points " + insertPlan.getFailedMeasurements());
}
} catch (StorageEngineException | MetadataException e) {
throw new QueryProcessException(e);
@@ -967,8 +968,7 @@ public class PlanExecutor implements IPlanExecutor {
// need to do nothing
break;
}
- }
- catch (ClassCastException e){
+ } catch (ClassCastException e) {
logger.error("inconsistent type between client and server");
}
}
@@ -987,7 +987,7 @@ public class PlanExecutor implements IPlanExecutor {
} catch (PathAlreadyExistException e) {
if (logger.isDebugEnabled()) {
logger.debug("Ignore PathAlreadyExistException when Concurrent inserting"
- + " a non-exist time series {}", path);
+ + " a non-exist time series {}", path);
}
}
}
@@ -1046,9 +1046,9 @@ public class PlanExecutor implements IPlanExecutor {
// check data type
if (measurementNode.getSchema().getType() != insertTabletPlan.getDataTypes()[i]) {
throw new QueryProcessException(String.format(
- "Datatype mismatch, Insert measurement %s type %s, metadata tree type %s",
- measurement, insertTabletPlan.getDataTypes()[i],
- measurementNode.getSchema().getType()));
+ "Datatype mismatch, Insert measurement %s type %s, metadata tree type %s",
+ measurement, insertTabletPlan.getDataTypes()[i],
+ measurementNode.getSchema().getType()));
}
schemas[i] = measurementNode.getSchema();
// reset measurement to common name instead of alias
@@ -1192,18 +1192,16 @@ public class PlanExecutor implements IPlanExecutor {
mManager.addAttributes(alterMap, path.getFullPath());
break;
case UPSERT:
- mManager.upsertTagsAndAttributes(
- alterTimeSeriesPlan.getTagsMap(),
- alterTimeSeriesPlan.getAttributesMap(),
+ mManager.upsertTagsAndAttributes(alterTimeSeriesPlan.getAlias(),
+ alterTimeSeriesPlan.getTagsMap(), alterTimeSeriesPlan.getAttributesMap(),
path.getFullPath());
break;
}
} catch (MetadataException e) {
throw new QueryProcessException(e);
} catch (IOException e) {
- throw new QueryProcessException(
- String.format(
- "Something went wrong while read/write the [%s]'s tag/attribute info.",
+ throw new QueryProcessException(String
+ .format("Something went wrong while read/write the [%s]'s tag/attribute info.",
path.getFullPath()));
}
return true;
diff --git a/server/src/main/java/org/apache/iotdb/db/qp/logical/sys/AlterTimeSeriesOperator.java b/server/src/main/java/org/apache/iotdb/db/qp/logical/sys/AlterTimeSeriesOperator.java
index a72c76a..1c14588 100644
--- a/server/src/main/java/org/apache/iotdb/db/qp/logical/sys/AlterTimeSeriesOperator.java
+++ b/server/src/main/java/org/apache/iotdb/db/qp/logical/sys/AlterTimeSeriesOperator.java
@@ -38,6 +38,7 @@ public class AlterTimeSeriesOperator extends RootOperator {
private Map<String, String> alterMap;
// used when the alterType is UPSERT
+ private String alias;
private Map<String, String> tagsMap;
private Map<String, String> attributesMap;
@@ -86,6 +87,14 @@ public class AlterTimeSeriesOperator extends RootOperator {
this.attributesMap = attributesMap;
}
+ public String getAlias() {
+ return alias;
+ }
+
+ public void setAlias(String alias) {
+ this.alias = alias;
+ }
+
public enum AlterType {
RENAME,
SET,
diff --git a/server/src/main/java/org/apache/iotdb/db/qp/physical/sys/AlterTimeSeriesPlan.java b/server/src/main/java/org/apache/iotdb/db/qp/physical/sys/AlterTimeSeriesPlan.java
index 34763e3..e8020e3 100644
--- a/server/src/main/java/org/apache/iotdb/db/qp/physical/sys/AlterTimeSeriesPlan.java
+++ b/server/src/main/java/org/apache/iotdb/db/qp/physical/sys/AlterTimeSeriesPlan.java
@@ -42,19 +42,17 @@ public class AlterTimeSeriesPlan extends PhysicalPlan {
private final Map<String, String> alterMap;
// used when the alterType is UPSERT
+ private final String alias;
private final Map<String, String> tagsMap;
private final Map<String, String> attributesMap;
- public AlterTimeSeriesPlan(
- Path path,
- AlterType alterType,
- Map<String, String> alterMap,
- Map<String, String> tagsMap,
- Map<String, String> attributesMap) {
+ public AlterTimeSeriesPlan(Path path, AlterType alterType, Map<String, String> alterMap,
+ String alias, Map<String, String> tagsMap, Map<String, String> attributesMap) {
super(false, Operator.OperatorType.ALTER_TIMESERIES);
this.path = path;
this.alterType = alterType;
this.alterMap = alterMap;
+ this.alias = alias;
this.tagsMap = tagsMap;
this.attributesMap = attributesMap;
}
@@ -71,6 +69,10 @@ public class AlterTimeSeriesPlan extends PhysicalPlan {
return alterMap;
}
+ public String getAlias() {
+ return alias;
+ }
+
public Map<String, String> getTagsMap() {
return tagsMap;
}
diff --git a/server/src/main/java/org/apache/iotdb/db/qp/strategy/LogicalGenerator.java b/server/src/main/java/org/apache/iotdb/db/qp/strategy/LogicalGenerator.java
index d71784c..4ad25a6 100644
--- a/server/src/main/java/org/apache/iotdb/db/qp/strategy/LogicalGenerator.java
+++ b/server/src/main/java/org/apache/iotdb/db/qp/strategy/LogicalGenerator.java
@@ -19,7 +19,6 @@
package org.apache.iotdb.db.qp.strategy;
import java.io.File;
-import java.time.Instant;
import java.time.ZoneId;
import java.util.ArrayList;
import java.util.EnumMap;
@@ -163,7 +162,6 @@ import org.apache.iotdb.tsfile.file.metadata.enums.CompressionType;
import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
import org.apache.iotdb.tsfile.read.common.Path;
-import org.apache.iotdb.tsfile.read.filter.operator.In;
import org.apache.iotdb.tsfile.utils.StringContainer;
/**
@@ -1109,6 +1107,15 @@ public class LogicalGenerator extends SqlBaseBaseListener {
}
@Override
+ public void enterAliasClause(SqlBaseParser.AliasClauseContext ctx) {
+ super.enterAliasClause(ctx);
+ if (alterTimeSeriesOperator != null && ctx.ID() != null) {
+ alterTimeSeriesOperator.setAlias(ctx.ID().getText());
+ }
+ }
+
+ @Override
public void enterAttributeClause(AttributeClauseContext ctx) {
super.enterAttributeClause(ctx);
Map<String, String> attributes = extractMap(ctx.property(), ctx.property(0));
diff --git a/server/src/main/java/org/apache/iotdb/db/qp/strategy/PhysicalGenerator.java b/server/src/main/java/org/apache/iotdb/db/qp/strategy/PhysicalGenerator.java
index 5627616..8a1d8e3 100644
--- a/server/src/main/java/org/apache/iotdb/db/qp/strategy/PhysicalGenerator.java
+++ b/server/src/main/java/org/apache/iotdb/db/qp/strategy/PhysicalGenerator.java
@@ -165,6 +165,7 @@ public class PhysicalGenerator {
alterTimeSeriesOperator.getPath(),
alterTimeSeriesOperator.getAlterType(),
alterTimeSeriesOperator.getAlterMap(),
+ alterTimeSeriesOperator.getAlias(),
alterTimeSeriesOperator.getTagsMap(),
alterTimeSeriesOperator.getAttributesMap());
case DELETE:
diff --git a/server/src/test/java/org/apache/iotdb/db/integration/IoTDBAliasIT.java b/server/src/test/java/org/apache/iotdb/db/integration/IoTDBAliasIT.java
index 1c7e120..3f1649e 100644
--- a/server/src/test/java/org/apache/iotdb/db/integration/IoTDBAliasIT.java
+++ b/server/src/test/java/org/apache/iotdb/db/integration/IoTDBAliasIT.java
@@ -18,6 +18,8 @@
*/
package org.apache.iotdb.db.integration;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
import java.sql.Connection;
@@ -41,15 +43,17 @@ public class IoTDBAliasIT {
"CREATE TIMESERIES root.sg.d2.s1(speed) WITH DATATYPE=FLOAT, ENCODING=RLE",
"CREATE TIMESERIES root.sg.d2.s2(temperature) WITH DATATYPE=FLOAT, ENCODING=RLE",
+ "CREATE TIMESERIES root.sg.d2.s3(power) WITH DATATYPE=FLOAT, ENCODING=RLE",
+
"INSERT INTO root.sg.d1(timestamp,speed,temperature) values(100, 10.1, 20.7)",
"INSERT INTO root.sg.d1(timestamp,speed,temperature) values(200, 15.2, 22.9)",
"INSERT INTO root.sg.d1(timestamp,speed,temperature) values(300, 30.3, 25.1)",
"INSERT INTO root.sg.d1(timestamp,speed,temperature) values(400, 50.4, 28.3)",
- "INSERT INTO root.sg.d2(timestamp,speed,temperature) values(100, 11.1, 20.2)",
- "INSERT INTO root.sg.d2(timestamp,speed,temperature) values(200, 20.2, 21.8)",
- "INSERT INTO root.sg.d2(timestamp,speed,temperature) values(300, 45.3, 23.4)",
- "INSERT INTO root.sg.d2(timestamp,speed,temperature) values(400, 73.4, 26.3)"
+ "INSERT INTO root.sg.d2(timestamp,speed,temperature,power) values(100, 11.1, 20.2, 80.0)",
+ "INSERT INTO root.sg.d2(timestamp,speed,temperature,power) values(200, 20.2, 21.8, 81.0)",
+ "INSERT INTO root.sg.d2(timestamp,speed,temperature,power) values(300, 45.3, 23.4, 82.0)",
+ "INSERT INTO root.sg.d2(timestamp,speed,temperature,power) values(400, 73.4, 26.3, 83.0)"
};
private static final String TIMESTAMP_STR = "Time";
@@ -106,7 +110,7 @@ public class IoTDBAliasIT {
for (int i = 1; i <= resultSetMetaData.getColumnCount(); i++) {
header.append(resultSetMetaData.getColumnName(i)).append(",");
}
- Assert.assertEquals("Time,root.sg.d1.speed,root.sg.d1.temperature,", header.toString());
+ assertEquals("Time,root.sg.d1.speed,root.sg.d1.temperature,", header.toString());
int cnt = 0;
while (resultSet.next()) {
@@ -114,10 +118,10 @@ public class IoTDBAliasIT {
for (int i = 1; i <= resultSetMetaData.getColumnCount(); i++) {
builder.append(resultSet.getString(i)).append(",");
}
- Assert.assertEquals(retArray[cnt], builder.toString());
+ assertEquals(retArray[cnt], builder.toString());
cnt++;
}
- Assert.assertEquals(retArray.length, cnt);
+ assertEquals(retArray.length, cnt);
}
} catch (Exception e) {
e.printStackTrace();
@@ -145,10 +149,10 @@ public class IoTDBAliasIT {
String ans = resultSet.getString(TIMESTAMP_STR) + ","
+ resultSet.getString(TIMESEIRES_STR) + ","
+ resultSet.getString(VALUE_STR);
- Assert.assertEquals(retArray[cnt], ans);
+ assertEquals(retArray[cnt], ans);
cnt++;
}
- Assert.assertEquals(retArray.length, cnt);
+ assertEquals(retArray.length, cnt);
}
} catch (Exception e) {
e.printStackTrace();
@@ -178,7 +182,7 @@ public class IoTDBAliasIT {
for (int i = 1; i <= resultSetMetaData.getColumnCount(); i++) {
header.append(resultSetMetaData.getColumnName(i)).append(",");
}
- Assert.assertEquals("Time,root.sg.d1.speed,root.sg.d1.speed,root.sg.d1.s2,",
+ assertEquals("Time,root.sg.d1.speed,root.sg.d1.speed,root.sg.d1.s2,",
header.toString());
int cnt = 0;
@@ -187,10 +191,10 @@ public class IoTDBAliasIT {
for (int i = 1; i <= resultSetMetaData.getColumnCount(); i++) {
builder.append(resultSet.getString(i)).append(",");
}
- Assert.assertEquals(retArray[cnt], builder.toString());
+ assertEquals(retArray[cnt], builder.toString());
cnt++;
}
- Assert.assertEquals(4, cnt);
+ assertEquals(4, cnt);
}
} catch (Exception e) {
e.printStackTrace();
@@ -219,10 +223,10 @@ public class IoTDBAliasIT {
String ans = resultSet.getString(TIMESTAMP_STR) + ","
+ resultSet.getString(TIMESEIRES_STR) + ","
+ resultSet.getString(VALUE_STR);
- Assert.assertEquals(retArray[cnt], ans);
+ assertEquals(retArray[cnt], ans);
cnt++;
}
- Assert.assertEquals(retArray.length, cnt);
+ assertEquals(retArray.length, cnt);
}
} catch (Exception e) {
e.printStackTrace();
@@ -250,8 +254,9 @@ public class IoTDBAliasIT {
for (int i = 1; i <= resultSetMetaData.getColumnCount(); i++) {
header.append(resultSetMetaData.getColumnName(i)).append(",");
}
- Assert.assertEquals("count(root.sg.d1.speed),count(root.sg.d2.speed),"
- + "max_value(root.sg.d1.temperature),max_value(root.sg.d2.temperature),", header.toString());
+ assertEquals("count(root.sg.d1.speed),count(root.sg.d2.speed),"
+ + "max_value(root.sg.d1.temperature),max_value(root.sg.d2.temperature),",
+ header.toString());
int cnt = 0;
while (resultSet.next()) {
@@ -259,10 +264,10 @@ public class IoTDBAliasIT {
for (int i = 1; i <= resultSetMetaData.getColumnCount(); i++) {
builder.append(resultSet.getString(i)).append(",");
}
- Assert.assertEquals(retArray[cnt], builder.toString());
+ assertEquals(retArray[cnt], builder.toString());
cnt++;
}
- Assert.assertEquals(retArray.length, cnt);
+ assertEquals(retArray.length, cnt);
}
} catch (Exception e) {
e.printStackTrace();
@@ -270,4 +275,55 @@ public class IoTDBAliasIT {
}
}
+ @Test
+ public void alterAliasTest() throws ClassNotFoundException {
+ String ret = "root.sg.d2.s3,powerNew,root.sg,FLOAT,RLE,SNAPPY";
+
+ String[] retArray = {"100,80.0,", "200,81.0,", "300,82.0,", "400,83.0,"};
+
+ Class.forName(Config.JDBC_DRIVER_NAME);
+ try (Connection connection = DriverManager
+ .getConnection(Config.IOTDB_URL_PREFIX + "127.0.0.1:6667/", "root", "root");
+ Statement statement = connection.createStatement()) {
+
+ statement.execute("ALTER timeseries root.sg.d2.s3 UPSERT ALIAS=powerNew");
+ boolean hasResult = statement.execute("show timeseries root.sg.d2.s3");
+ assertTrue(hasResult);
+ ResultSet resultSet = statement.getResultSet();
+ while (resultSet.next()) {
+ String ans = resultSet.getString("timeseries")
+ + "," + resultSet.getString("alias")
+ + "," + resultSet.getString("storage group")
+ + "," + resultSet.getString("dataType")
+ + "," + resultSet.getString("encoding")
+ + "," + resultSet.getString("compression");
+ assertEquals(ret, ans);
+ }
+
+ hasResult = statement.execute("select powerNew from root.sg.d2");
+ assertTrue(hasResult);
+ resultSet = statement.getResultSet();
+ ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
+ StringBuilder header = new StringBuilder();
+ for (int i = 1; i <= resultSetMetaData.getColumnCount(); i++) {
+ header.append(resultSetMetaData.getColumnName(i)).append(",");
+ }
+ assertEquals("Time,root.sg.d2.powerNew,", header.toString());
+
+ int cnt = 0;
+ while (resultSet.next()) {
+ StringBuilder builder = new StringBuilder();
+ for (int i = 1; i <= resultSetMetaData.getColumnCount(); i++) {
+ builder.append(resultSet.getString(i)).append(",");
+ }
+ assertEquals(retArray[cnt], builder.toString());
+ cnt++;
+ }
+ assertEquals(retArray.length, cnt);
+ } catch (Exception e) {
+ e.printStackTrace();
+ fail(e.getMessage());
+ }
+ }
+
}