Hi everyone,

We’re running a long-term evaluation of Microsoft Fabric in an organisation that has been an IBM shop for 20+ years (DataStage, Cognos, on-prem Oracle, etc.). As you’d expect, opinions differ internally on the best direction forward: some of our newer managers come from AWS or Snowflake environments, while others prefer to stay close to our IBM lineage.

My question to the community is about the transformation layer inside Fabric: what transformation tools are you actually using in production (or in serious pilots) with Fabric, and why?

Fabric gives us several options (T-SQL in the Warehouse/Lakehouse, PySpark notebooks, Dataflow Gen2, and potentially dbt), but compared to something like IBM DataStage, Fabric’s GUI-driven transformation story is still evolving. Before we commit to a direction, I’m keen to understand from real-world users:

- Are you doing most of your transformation work inside Fabric itself (e.g., Data Pipelines + Dataflow Gen2 + PySpark + T-SQL)?
- Or are you keeping/adopting external transformation engines such as dbt Cloud, Databricks, Fivetran/Matillion/ADF, or even continuing with legacy ETL tools?
- How have you balanced capability vs. cost? Adding external tools clearly introduces new spend, but Fabric alone may not yet match the maturity of platforms like DataStage.
- If you transitioned from GUI-based ETL tools (DataStage, Informatica, SSIS), what does your transformation architecture look like now?
- Anything you wish you had known before choosing your path?

Any insights, lessons learned, or architectural examples would be hugely appreciated. Thanks in advance!
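
For context, this is roughly what one of our pilot notebook transformations looks like today. It’s only a minimal sketch with placeholder table and column names, and it assumes the built-in `spark` session that a Fabric notebook provides; the point is that logic we would previously have built as DataStage stages (casting, deduplication, derived columns) currently lives in PySpark against Lakehouse Delta tables:

```python
from pyspark.sql import functions as F

# Read a raw (bronze) table that a Data Pipeline has already landed in the Lakehouse.
# "bronze_orders" / "silver_orders" and the column names are placeholders for this sketch.
orders_raw = spark.read.table("bronze_orders")

# Transformation logic: type casting, a simple derived column, and de-duplication
orders_clean = (
    orders_raw
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("net_amount", F.col("gross_amount") - F.col("tax_amount"))
    .dropDuplicates(["order_id"])
)

# Write the curated (silver) table back to the Lakehouse as Delta
orders_clean.write.mode("overwrite").format("delta").saveAsTable("silver_orders")
```

Part of what I’m asking is whether patterns like this scale for teams used to GUI-driven ETL, or whether people end up layering something like dbt on top for testing and lineage.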