digraph G {
0 [labelType="html" label="<br><b>Union</b><br><br>"];
subgraph cluster1 {
isCluster="true";
label="WholeStageCodegen (1)\n \nduration: 131 ms";
2 [labelType="html" label="<br><b>Project</b><br><br>"];
3 [labelType="html" label="<b>Filter</b><br><br>number of output rows: 2"];
4 [labelType="html" label="<b>ColumnarToRow</b><br><br>number of output rows: 15<br>number of input batches: 1"];
}
5 [labelType="html" label="<b>Scan parquet </b><br><br>number of files read: 1<br>scan time: 129 ms<br>dynamic partition pruning time: 0 ms<br>metadata time: 0 ms<br>size of files read: 18.6 KiB<br>number of output rows: 15<br>number of partitions read: 1"];
subgraph cluster6 {
isCluster="true";
label="WholeStageCodegen (2)\n \nduration: total (min, med, max (stageId: taskId))\n157 ms (40 ms, 43 ms, 74 ms (stage 4.0: task 11))";
7 [labelType="html" label="<br><b>Project</b><br><br>"];
8 [labelType="html" label="<b>Filter</b><br><br>number of output rows: 0"];
}
9 [labelType="html" label="<b>Scan json </b><br><br>number of files read: 3<br>dynamic partition pruning time: 0 ms<br>metadata time: 0 ms<br>size of files read: 15.0 KiB<br>number of output rows: 9<br>number of partitions read: 3"];
2->0;
3->2;
4->3;
5->4;
7->0;
8->7;
9->8;
}
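Per-operator metrics (output rows, scan time, input batches) are embedded in the HTML `label` attribute of each DOT node above. A minimal sketch, assuming the label format shown in the graph, that extracts each node's operator name and "number of output rows" with Python's `re` module (the two sample node lines are copied from the graph):

```python
import re

# Two node lines copied verbatim from the DOT graph above.
dot = '''
3 [labelType="html" label="<b>Filter</b><br><br>number of output rows: 2"];
4 [labelType="html" label="<b>ColumnarToRow</b><br><br>number of output rows: 15<br>number of input batches: 1"];
'''

# Match: node id, then the full HTML label between the quotes.
node_re = re.compile(r'(\d+) \[labelType="html" label="(.*?)"\];')
rows_re = re.compile(r'number of output rows: (\d+)')

output_rows = {}
for node_id, label in node_re.findall(dot):
    # Operator name sits in the first <b>...</b>; strip() handles
    # labels with trailing spaces such as "<b>Scan parquet </b>".
    op = re.search(r'<b>(.*?)</b>', label).group(1).strip()
    m = rows_re.search(label)
    if m:
        output_rows[(node_id, op)] = int(m.group(1))

print(output_rows)  # {('3', 'Filter'): 2, ('4', 'ColumnarToRow'): 15}
```

Nodes whose labels carry no row metric (such as the Union node) are simply skipped, so the same loop works over the whole graph.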
Union
:- WholeStageCodegen (1)
:  +- Project [protocol#148, metaData#147, commitInfo#149.inCommitTimestamp AS inCommitTimestamp#205L, 140 AS version#166L]
:     +- Filter (isnotnull(protocol#148.minReaderVersion) OR isnotnull(metaData#147.id))
:        +- ColumnarToRow
:           +- FileScan parquet [metaData#147,protocol#148,commitInfo#149,version#150L] Batched: true, DataFilters: [(isnotnull(protocol#148.minReaderVersion) OR isnotnull(metaData#147.id))], Format: Parquet, Location: DeltaLogFileIndex(1 paths)[hdlfs://2e93940d-4be8-4f12-830d-f0b8d392c03a.files.hdl.prod-eu20.hanac..., PartitionFilters: [], PushedFilters: [Or(IsNotNull(protocol.minReaderVersion),IsNotNull(metaData.id))], ReadSchema: struct<metaData:struct<id:string,name:string,description:string,format:struct<provider:string,opt...
+- WholeStageCodegen (2)
   +- Project [protocol#185, metaData#184, commitInfo#186.inCommitTimestamp AS inCommitTimestamp#222L, version#187L]
      +- Filter ((isnotnull(protocol#185.minReaderVersion) OR isnotnull(metaData#184.id)) OR (isnotnull(commitInfo#186.inCommitTimestamp) AND (version#187L = 143)))
         +- FileScan json [metaData#184,protocol#185,commitInfo#186,version#187L] Batched: false, DataFilters: [((isnotnull(protocol#185.minReaderVersion) OR isnotnull(metaData#184.id)) OR isnotnull(commitInf..., Format: JSON, Location: DeltaLogFileIndex(3 paths)[hdlfs://2e93940d-4be8-4f12-830d-f0b8d392c03a.files.hdl.prod-eu20.hanac..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<metaData:struct<id:string,name:string,description:string,format:struct<provider:string,opt...
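The two Filter conditions read as plain predicates over Delta log actions: keep rows carrying a `protocol` or `metaData`, and on the JSON side additionally the `commitInfo` row for version 143. A minimal plain-Python model of the JSON-side condition (operator (6) below), using hypothetical dict-shaped rows — the field names follow the plan, the sample data is invented:

```python
# Hypothetical action rows shaped like the columns in the plan above.
actions = [
    {"protocol": {"minReaderVersion": 1}, "metaData": None, "commitInfo": None, "version": 141},
    {"protocol": None, "metaData": None, "commitInfo": {"inCommitTimestamp": 1700000000000}, "version": 143},
    {"protocol": None, "metaData": None, "commitInfo": None, "version": 142},
]

def is_not_null(row, col, field):
    # Mirrors isnotnull(col.field): null struct or null field both fail.
    v = row.get(col)
    return v is not None and v.get(field) is not None

def keep(row):
    # Condition from the JSON-side Filter: (isnotnull(protocol.minReaderVersion)
    # OR isnotnull(metaData.id)) OR (isnotnull(commitInfo.inCommitTimestamp)
    # AND version = 143)
    return (is_not_null(row, "protocol", "minReaderVersion")
            or is_not_null(row, "metaData", "id")
            or (is_not_null(row, "commitInfo", "inCommitTimestamp")
                and row["version"] == 143))

kept = [r["version"] for r in actions if keep(r)]
print(kept)  # [141, 143]
```

The parquet-side Filter is the same predicate without the commitInfo/version clause, which is why its plan line shows only the two isnotnull terms.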
== Physical Plan ==
Union (8)
:- * Project (4)
: +- * Filter (3)
: +- * ColumnarToRow (2)
: +- Scan parquet (1)
+- * Project (7)
+- * Filter (6)
+- Scan json (5)
(1) Scan parquet
Output [4]: [metaData#147, protocol#148, commitInfo#149, version#150L]
Batched: true
Location: DeltaLogFileIndex [hdlfs://2e93940d-4be8-4f12-830d-f0b8d392c03a.files.hdl.prod-eu20.hanacloud.ondemand.com:443/crp-dl-stream-service/cornerstone/sap-cic-sourceofsupply-sourceofsupply/_delta_log/00000000000000000140.checkpoint.parquet]
PushedFilters: [Or(IsNotNull(protocol.minReaderVersion),IsNotNull(metaData.id))]
ReadSchema: struct<metaData:struct<id:string,name:string,description:string,format:struct<provider:string,options:map<string,string>>,schemaString:string,partitionColumns:array<string>,configuration:map<string,string>,createdTime:bigint>,protocol:struct<minReaderVersion:int,minWriterVersion:int,readerFeatures:array<string>,writerFeatures:array<string>>,commitInfo:struct<inCommitTimestamp:bigint>>

(2) ColumnarToRow [codegen id : 1]
Input [4]: [metaData#147, protocol#148, commitInfo#149, version#150L]

(3) Filter [codegen id : 1]
Input [4]: [metaData#147, protocol#148, commitInfo#149, version#150L]
Condition : (isnotnull(protocol#148.minReaderVersion) OR isnotnull(metaData#147.id))

(4) Project [codegen id : 1]
Output [4]: [protocol#148, metaData#147, commitInfo#149.inCommitTimestamp AS inCommitTimestamp#205L, 140 AS version#166L]
Input [4]: [metaData#147, protocol#148, commitInfo#149, version#150L]

(5) Scan json
Output [4]: [metaData#184, protocol#185, commitInfo#186, version#187L]
Batched: false
Location: DeltaLogFileIndex [hdlfs://2e93940d-4be8-4f12-830d-f0b8d392c03a.files.hdl.prod-eu20.hanacloud.ondemand.com:443/crp-dl-stream-service/cornerstone/sap-cic-sourceofsupply-sourceofsupply/_delta_log/00000000000000000141.json, ... 2 entries]
ReadSchema: struct<metaData:struct<id:string,name:string,description:string,format:struct<provider:string,options:map<string,string>>,schemaString:string,partitionColumns:array<string>,configuration:map<string,string>,createdTime:bigint>,protocol:struct<minReaderVersion:int,minWriterVersion:int,readerFeatures:array<string>,writerFeatures:array<string>>,commitInfo:struct<version:bigint,inCommitTimestamp:bigint,timestamp:timestamp,userId:string,userName:string,operation:string,operationParameters:map<string,string>,job:struct<jobId:string,jobName:string,jobRunId:string,runId:string,jobOwnerId:string,triggerType:string>,notebook:struct<notebookId:string>,clusterId:string,readVersion:bigint,isolationLevel:string,isBlindAppend:boolean,operationMetrics:map<string,string>,userMetadata:string,tags:map<string,string>,engineInfo:string,txnId:string>>

(6) Filter [codegen id : 2]
Input [4]: [metaData#184, protocol#185, commitInfo#186, version#187L]
Condition : ((isnotnull(protocol#185.minReaderVersion) OR isnotnull(metaData#184.id)) OR (isnotnull(commitInfo#186.inCommitTimestamp) AND (version#187L = 143)))

(7) Project [codegen id : 2]
Output [4]: [protocol#185, metaData#184, commitInfo#186.inCommitTimestamp AS inCommitTimestamp#222L, version#187L]
Input [4]: [metaData#184, protocol#185, commitInfo#186, version#187L]

(8) Union
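The tree header of a formatted plan (the `:- / +-` lines) is regular enough to parse mechanically: a leading `*` marks an operator that runs inside whole-stage codegen, and the trailing `(n)` is the operator id referenced by the detail sections. A small sketch that recovers (id, name, codegen?) tuples from the tree above:

```python
import re

# The tree section of the formatted plan, copied from above.
plan = """Union (8)
:- * Project (4)
:  +- * Filter (3)
:     +- * ColumnarToRow (2)
:        +- Scan parquet (1)
+- * Project (7)
   +- * Filter (6)
      +- Scan json (5)"""

# Prefix chars (:, +, -, spaces), optional codegen star, name, (id).
line_re = re.compile(r'^[-+:\s]*(\*)?\s*(.+?) \((\d+)\)$')

ops = []
for line in plan.splitlines():
    star, name, op_id = line_re.match(line).groups()
    ops.append((int(op_id), name, star == "*"))

print(ops)
```

Running this over the tree shows, for example, that both scans sit outside codegen (`(1, 'Scan parquet', False)` and `(5, 'Scan json', False)`), matching the two WholeStageCodegen clusters in the DOT graph.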