Hello Experts,
I am unsure about choosing the right architecture in Fabric.
We currently have a traditional BI platform. We get data from various sources into an Azure SQL DB. Our main data source is D365 (CE and FinOps), and the D365 data is exported to ADLS Gen2 through Dataverse.
I use ADF to read the data from the data lake via a Synapse serverless SQL pool and load it into the Azure SQL DB (our BI database). We then do a lot of transformation by applying business logic and store the results in denormalized BI tables, which are then used in Power BI for reporting.
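For reference, the serverless read behind that copy step looks roughly like this; the storage URL, container, columns and ordinals are placeholders rather than our real names:

```sql
-- Rough sketch: reading a Dataverse CSV export from ADLS Gen2 with the Synapse
-- serverless SQL pool before landing it in Azure SQL DB. All names are placeholders.
DECLARE @LastImportDate datetime2 = '2024-01-01';  -- in the real pipeline this comes from a parameter

SELECT acc.accountid,
       acc.name,
       acc.modifiedon
FROM OPENROWSET(
         BULK 'https://mydatalake.dfs.core.windows.net/dataverse-myenv/account/*.csv',
         FORMAT = 'CSV',
         PARSER_VERSION = '2.0'
     )
     WITH (
         accountid  varchar(36)   1,
         name       nvarchar(400) 2,
         modifiedon datetime2     3
     ) AS acc
WHERE acc.modifiedon > @LastImportDate;  -- watermark filter used on incremental runs
```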
Our ETL platform is ADF. The pipelines are designed to run in either full-refresh or incremental-refresh mode based on a pipeline parameter.
Now we are exploring Fabric. We already have a Power BI P1 capacity.
As mentioned, we import large amounts of data from D365 and other sources, so my assumption is that a Lakehouse is not the best option to read, transform and use for reporting. Am I correct?
My understanding is that a Lakehouse stores data as files and that writing stored procedures and running UPDATE statements against it is not possible.
So my BI database would stay in Azure SQL DB. Can I use Fabric pipelines to read data through the Synapse serverless pool and store it in Azure SQL DB?
My pipeline flow is as below (a rough sketch of the stored procedure steps follows the list):
- Parameter: Incremental/Full refresh
- If incremental, read the last import date from Azure SQL DB for the current table
- Pass this date to the source data query and read from D365
- Copy the results to a staging table in Azure SQL DB
- Run the insert stored procedure to insert new rows into the target table
- Run the update stored procedure to update modified rows
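For reference, the stored-procedure part looks roughly like this today; the schema, table and column names below are placeholders rather than our real objects:

```sql
-- Rough sketch of the incremental steps in Azure SQL DB. All names are placeholders.

-- Step: look up the last import date for the current table (incremental runs only)
SELECT MAX(LastImportDate)
FROM   dbo.ETL_Watermark
WHERE  TableName = 'Account';
GO

-- Step: insert rows from staging that do not exist in the target yet
CREATE OR ALTER PROCEDURE dbo.usp_Account_Insert
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.Account (AccountId, AccountName, ModifiedOn)
    SELECT s.AccountId, s.AccountName, s.ModifiedOn
    FROM   stg.Account AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.Account AS t
                       WHERE t.AccountId = s.AccountId);
END;
GO

-- Step: update target rows that were modified in the source
CREATE OR ALTER PROCEDURE dbo.usp_Account_Update
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE t
    SET    t.AccountName = s.AccountName,
           t.ModifiedOn  = s.ModifiedOn
    FROM   dbo.Account AS t
    INNER JOIN stg.Account AS s
            ON s.AccountId = t.AccountId
    WHERE  s.ModifiedOn > t.ModifiedOn;
END;
GO
```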
Can I do this in Fabric pipelines?
Regards
Krish