Hello!
Data ingestion and transformation from file sources such as CSV and JSON can be implemented in Fabric with Spark, Pandas, or standard Python. What are the use cases for each of these options?
Moreover, it appears to me that Pandas DataFrames cannot be written directly as Lakehouse Delta tables yet, so even if we use Pandas, the Pandas DataFrame needs to be converted to a Spark DataFrame before it can be saved as a Delta table. Is this correct? A sketch of the workflow I have in mind is below.
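For illustration, here is a minimal sketch of that conversion step, assuming a Fabric notebook where `spark` is the built-in SparkSession and the default Lakehouse is attached and mounted at `/lakehouse/default`. The file name `sales.csv` and table name `sales` are hypothetical placeholders:

```python
import pandas as pd

# Hypothetical CSV in the default Lakehouse's Files area.
pdf = pd.read_csv("/lakehouse/default/Files/sales.csv")

# ... Pandas-side transformations would go here ...

# Convert to a Spark DataFrame, since (as I understand it) a Pandas
# DataFrame cannot be written to a Delta table directly.
sdf = spark.createDataFrame(pdf)

# Save as a managed Delta table in the Lakehouse ("sales" is a placeholder).
sdf.write.format("delta").mode("overwrite").saveAsTable("sales")
```

Is this conversion step required, or is there a supported way to write the Pandas DataFrame to a Delta table directly?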