digraph G {
0 [labelType="html" label="<br><b>Union</b><br><br>"];
subgraph cluster1 {
isCluster="true";
label="WholeStageCodegen (1)\n \nduration: 1.2 s";
2 [labelType="html" label="<br><b>Project</b><br><br>"];
3 [labelType="html" label="<b>Filter</b><br><br>number of output rows: 2"];
4 [labelType="html" label="<b>ColumnarToRow</b><br><br>number of output rows: 13<br>number of input batches: 1"];
}
5 [labelType="html" label="<b>Scan parquet </b><br><br>number of files read: 1<br>scan time: 1.2 s<br>dynamic partition pruning time: 0 ms<br>metadata time: 1 ms<br>size of files read: 18.1 KiB<br>number of output rows: 13<br>number of partitions read: 1"];
subgraph cluster6 {
isCluster="true";
label="WholeStageCodegen (2)\n \nduration: total (min, med, max (stageId: taskId))\n3.9 s (774 ms, 774 ms, 800 ms (stage 1.0: task 5))";
7 [labelType="html" label="<br><b>Project</b><br><br>"];
8 [labelType="html" label="<b>Filter</b><br><br>number of output rows: 0"];
}
9 [labelType="html" label="<b>Scan json </b><br><br>number of files read: 5<br>dynamic partition pruning time: 0 ms<br>metadata time: 0 ms<br>size of files read: 21.2 KiB<br>number of output rows: 15<br>number of partitions read: 5"];
2->0;
3->2;
4->3;
5->4;
7->0;
8->7;
9->8;
}
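The DOT source above follows the format the Spark SQL web UI generates for its plan graph: labelType="html" and the HTML tags inside the labels are meant for the UI's client-side dagre-d3 renderer, not for standard Graphviz. Plain Graphviz renders those tags as literal text, but it is still enough to preview the operator DAG and the two WholeStageCodegen clusters. A minimal sketch, assuming the DOT text has been saved as plan.dot and that the Python graphviz package plus a local Graphviz installation are available:

import graphviz  # third-party package wrapping the local Graphviz binaries

# Read the DOT text (assumed saved as plan.dot) and render the graph shape to SVG.
# The non-standard labelType attribute is simply carried along; the <br>/<b> tags
# show up as literal text in the node labels.
with open("plan.dot") as f:
    dot_text = f.read()

graphviz.Source(dot_text).render("plan", format="svg", cleanup=True)  # writes plan.svg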
Union
Project [protocol#12, metaData#11, commitInfo#15.inCommitTimestamp AS inCommitTimestamp#63L, 60 AS version#40L]
Filter (isnotnull(protocol#12.minReaderVersion) OR isnotnull(metaData#11.id))
ColumnarToRow
WholeStageCodegen (1)
FileScan parquet [metaData#11,protocol#12,commitInfo#15,version#17L] Batched: true, DataFilters: [(isnotnull(protocol#12.minReaderVersion) OR isnotnull(metaData#11.id))], Format: Parquet, Location: DeltaLogFileIndex(1 paths)[hdlfs://2e93940d-4be8-4f12-830d-f0b8d392c03a.files.hdl.prod-eu20.hanac..., PartitionFilters: [], PushedFilters: [Or(IsNotNull(protocol.minReaderVersion),IsNotNull(metaData.id))], ReadSchema: struct<metaData:struct<id:string,name:string,description:string,format:struct<provider:string,opt...
Project [protocol#50, metaData#49, commitInfo#51.inCommitTimestamp AS inCommitTimestamp#91L, version#52L]
Filter ((isnotnull(protocol#50.minReaderVersion) OR isnotnull(metaData#49.id)) OR (isnotnull(commitInfo#51.inCommitTimestamp) AND (version#52L = 65)))
WholeStageCodegen (2)
FileScan json [metaData#49,protocol#50,commitInfo#51,version#52L] Batched: false, DataFilters: [((isnotnull(protocol#50.minReaderVersion) OR isnotnull(metaData#49.id)) OR isnotnull(commitInfo#..., Format: JSON, Location: DeltaLogFileIndex(5 paths)[hdlfs://2e93940d-4be8-4f12-830d-f0b8d392c03a.files.hdl.prod-eu20.hanac..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<metaData:struct<id:string,name:string,description:string,format:struct<provider:string,opt...
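What follows is the same plan in Spark's formatted explain layout: a numbered operator tree, then one detail block per operator. Assuming the query were held in a DataFrame (snapshot_df is a hypothetical name), this is the kind of output printed by:

snapshot_df.explain(mode="formatted")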
== Physical Plan ==
Union (8)
:- * Project (4)
:  +- * Filter (3)
:     +- * ColumnarToRow (2)
:        +- Scan parquet (1)
+- * Project (7)
   +- * Filter (6)
      +- Scan json (5)

(1) Scan parquet
Output [4]: [metaData#11, protocol#12, commitInfo#15, version#17L]
Batched: true
Location: DeltaLogFileIndex [hdlfs://2e93940d-4be8-4f12-830d-f0b8d392c03a.files.hdl.prod-eu20.hanacloud.ondemand.com:443/crp-dl-stream-service/cornerstone/sap-cic-product-productplant/_delta_log/00000000000000000060.checkpoint.parquet]
PushedFilters: [Or(IsNotNull(protocol.minReaderVersion),IsNotNull(metaData.id))]
ReadSchema: struct<metaData:struct<id:string,name:string,description:string,format:struct<provider:string,options:map<string,string>>,schemaString:string,partitionColumns:array<string>,configuration:map<string,string>,createdTime:bigint>,protocol:struct<minReaderVersion:int,minWriterVersion:int,readerFeatures:array<string>,writerFeatures:array<string>>,commitInfo:struct<inCommitTimestamp:bigint>>

(2) ColumnarToRow [codegen id : 1]
Input [4]: [metaData#11, protocol#12, commitInfo#15, version#17L]

(3) Filter [codegen id : 1]
Input [4]: [metaData#11, protocol#12, commitInfo#15, version#17L]
Condition : (isnotnull(protocol#12.minReaderVersion) OR isnotnull(metaData#11.id))

(4) Project [codegen id : 1]
Output [4]: [protocol#12, metaData#11, commitInfo#15.inCommitTimestamp AS inCommitTimestamp#63L, 60 AS version#40L]
Input [4]: [metaData#11, protocol#12, commitInfo#15, version#17L]

(5) Scan json
Output [4]: [metaData#49, protocol#50, commitInfo#51, version#52L]
Batched: false
Location: DeltaLogFileIndex [hdlfs://2e93940d-4be8-4f12-830d-f0b8d392c03a.files.hdl.prod-eu20.hanacloud.ondemand.com:443/crp-dl-stream-service/cornerstone/sap-cic-product-productplant/_delta_log/00000000000000000061.json, ... 4 entries]
ReadSchema: struct<metaData:struct<id:string,name:string,description:string,format:struct<provider:string,options:map<string,string>>,schemaString:string,partitionColumns:array<string>,configuration:map<string,string>,createdTime:bigint>,protocol:struct<minReaderVersion:int,minWriterVersion:int,readerFeatures:array<string>,writerFeatures:array<string>>,commitInfo:struct<version:bigint,inCommitTimestamp:bigint,timestamp:timestamp,userId:string,userName:string,operation:string,operationParameters:map<string,string>,job:struct<jobId:string,jobName:string,jobRunId:string,runId:string,jobOwnerId:string,triggerType:string>,notebook:struct<notebookId:string>,clusterId:string,readVersion:bigint,isolationLevel:string,isBlindAppend:boolean,operationMetrics:map<string,string>,userMetadata:string,tags:map<string,string>,engineInfo:string,txnId:string>>

(6) Filter [codegen id : 2]
Input [4]: [metaData#49, protocol#50, commitInfo#51, version#52L]
Condition : ((isnotnull(protocol#50.minReaderVersion) OR isnotnull(metaData#49.id)) OR (isnotnull(commitInfo#51.inCommitTimestamp) AND (version#52L = 65)))

(7) Project [codegen id : 2]
Output [4]: [protocol#50, metaData#49, commitInfo#51.inCommitTimestamp AS inCommitTimestamp#91L, version#52L]
Input [4]: [metaData#49, protocol#50, commitInfo#51, version#52L]

(8) Union
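Reading the plan back, it is a Union of two branches over the Delta table's _delta_log: a vectorized scan of the single version-60 checkpoint parquet file (Batched: true, hence the ColumnarToRow node inside WholeStageCodegen (1), with the literal 60 projected as version), and a row-based scan of the five subsequent JSON commit files (Batched: false), each branch filtered down to protocol/metaData actions, plus commitInfo.inCommitTimestamp where version equals 65 on the JSON side. The rough PySpark sketch below produces a plan of the same shape; the path is a placeholder for the truncated hdlfs location above, and version is treated as an ordinary column here, whereas in the real plan the JSON scan's version column is supplied by DeltaLogFileIndex rather than by the JSON payload.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # assumes an existing Spark 3.x environment

# Placeholder for the truncated hdlfs://... _delta_log directory shown in the plan above.
log_dir = "/path/to/_delta_log"

# Checkpoint branch: one parquet file, read vectorized (Batched: true), which is why
# WholeStageCodegen (1) starts with a ColumnarToRow node. Version 60 is projected as a
# literal, matching "60 AS version#40L".
checkpoint = (
    spark.read.parquet(f"{log_dir}/00000000000000000060.checkpoint.parquet")
    .where(F.col("protocol.minReaderVersion").isNotNull() | F.col("metaData.id").isNotNull())
    .select(
        "protocol",
        "metaData",
        F.col("commitInfo.inCommitTimestamp").alias("inCommitTimestamp"),
        F.lit(60).cast("long").alias("version"),
    )
)

# Commit branch: five JSON commit files, read row by row (Batched: false). The extra OR
# branch keeps commitInfo.inCommitTimestamp for version 65. In this sketch 'version' is
# assumed to be a column of the JSON DataFrame; in the actual plan it comes from
# DeltaLogFileIndex, not from the JSON payload.
commits = (
    spark.read.json(f"{log_dir}/*.json")
    .where(
        F.col("protocol.minReaderVersion").isNotNull()
        | F.col("metaData.id").isNotNull()
        | (F.col("commitInfo.inCommitTimestamp").isNotNull() & (F.col("version") == 65))
    )
    .select(
        "protocol",
        "metaData",
        F.col("commitInfo.inCommitTimestamp").alias("inCommitTimestamp"),
        "version",
    )
)

snapshot_actions = checkpoint.unionByName(commits)
snapshot_actions.explain(mode="formatted")  # prints a plan shaped like the one above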