In Fabric, I'm using a Data Pipeline to copy data from the source (Azure SQL Managed Instance) and load it into a Lakehouse as a Delta table, overwriting the table on every run.
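For context, the pipeline's overwrite is roughly equivalent to the notebook write below (the DataFrame name is illustrative); as far as I understand, each overwrite writes a new set of parquet files and marks the previous ones as removed in the _delta_log, while the old files stay in OneLake until a VACUUM.

```python
# Rough PySpark equivalent of the pipeline's overwrite (illustrative only).
# "spark" is the SparkSession that Fabric notebooks provide out of the box.
df = spark.read.table("staging_areas")  # hypothetical DataFrame holding the rows copied from the source

(df.write
   .format("delta")
   .mode("overwrite")          # new parquet files are written; the old ones are
   .saveAsTable("dbo.Areas"))  # logically removed in _delta_log but remain on OneLake
```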
I am facing an issue after this process:
1. I manually deleted the table in the Lakehouse, named "Areas" in the "dbo" schema.
2. After that, I invoked a stored procedure that compares the staging (Lakehouse) tables with the Bronze layer (Fabric Warehouse) and inserts the new data into Bronze.
3. I got the following error:
```
ERROR:
```
My Thoughts:
I am on Fabric F8 capacity. My theory: after I deleted the "Areas" table manually and triggered the pipeline immediately afterwards, a new table with the same name "Areas" was created and populated; when the stored procedure then ran, it picked up the old parquet files from the previous run of "Areas" because of caching.
Please correct me if I'm wrong. I believe that, because of the low compute, there is a lag in updating the table metadata, which is why the procedure fetched parquet files belonging to the previous "Areas" table, since the names are identical.
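If it helps, this is the kind of check I can run in a Fabric notebook to see which parquet files the current "Areas" table actually references, and whether an older version might still be getting served (the table name is from my setup):

```python
# Which parquet files back the latest version of the table?
df = spark.read.table("dbo.Areas")
for f in df.inputFiles():
    print(f)

# The Delta history lists every overwrite, which shows whether the
# stored procedure could be reading an older table version.
spark.sql("DESCRIBE HISTORY dbo.Areas").show(truncate=False)
```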
I also checked the _delta_log files, but I can't find this file name (or ID) anywhere:
"e887affb-d4cb-4ff8-afdb-d6ec7a6a4c.parquet"
Would you please share your thoughts on this?