Activity
[Contribution activity heatmap: Mon–Sun by Jan–Dec, shaded from Less to More]

Memberships

Data Alchemy

38.1k members • Free

Learn Microsoft Fabric

14.2k members • Free

8 contributions to Learn Microsoft Fabric
SCD best practice
Hi, I have an SCD table named targets with the following columns: id, product, target, version, startdate and enddate. Currently, I look up my sales data by doing an inner join to the targets table, linked like this: sales.sales_date between target.startdate and target.enddate. I feel that this is not optimal due to the size of the SCD (around 150k rows per version, changing 3 to 5 times per week). What is the best approach to achieve optimal performance?
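For reference, a minimal PySpark sketch of the range-join lookup described above, assuming sales and targets are Lakehouse tables and that the join also matches on product (that key is an assumption, since the question only states the date condition):

# Hypothetical table names taken from the question.
sales = spark.table("sales")
targets = spark.table("targets")

# Inner join: each sale picks up the target version whose validity window covers the sale date.
joined = sales.join(
    targets,
    (sales["product"] == targets["product"])          # assumed business key
    & (sales["sales_date"] >= targets["startdate"])
    & (sales["sales_date"] <= targets["enddate"]),
    "inner",
)
display(joined)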
Mirroring for on-prem SQL Server
New blog post: Mirroring SQL Server Database to Fabric. I know quite a few of you have on-prem SQL databases that you want to mirror. This approach seems to be a bit of a workaround (replicating to Azure SQL) whilst they build the full feature, but might be interesting for some people. Thoughts on this? Is this something you're going to try?
0 likes • Aug '24
This can eliminate some of the SSIS packages and lift all ETL jobs to the cloud.
PowerBI Report Sharing
Hi there. Is there a way to share my Power BI report with users in my company who do not have a Power BI Pro license, so that they can view the report I'm sharing? If so, how? How can Power Automate or SSIS help?
2 likes • Aug '24
You can use Power Automate to send a Power BI report in PDF format, especially when you need to apply different filters per PDF and send each one to different recipients.
0 likes • Aug '24
@Tumelo Tsikana You can do that by designing a page in your Power BI report that is in landscape mode, then extracting that specific page in Power Automate using its page ID.
Best practice in incremental loading from ADLS2
Hi, we are planning to load CSV files from Azure Data Lake Gen2 to our Fabric Lakehouse. Every hour, a new file is expected to be added. Using a Fabric Lakehouse, how can we ensure that only new files are added to our lakehouse instead of doing a full load every hour? Ant
0 likes • Aug '24
@Will Needham Yeah, a shortcut is the simplest solution
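One possible way to pick up only the new hourly files from such a shortcut is Spark structured streaming with a checkpoint, where the checkpoint is what remembers which files were already loaded. A sketch under assumed names (shortcut folder, schema, table and checkpoint paths are all placeholders):

from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

# Hypothetical CSV layout; adjust to the real files.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("order_ts", TimestampType()),
])

# readStream only processes files the checkpoint has not seen yet.
new_files = (
    spark.readStream
        .format("csv")
        .option("header", "true")
        .schema(schema)
        .load("Files/adls_shortcut/incoming/")   # OneLake shortcut to the ADLS Gen2 folder
)

# Append new rows to a Lakehouse Delta table; availableNow processes the backlog and stops,
# so the notebook can be scheduled hourly.
query = (
    new_files.writeStream
        .format("delta")
        .option("checkpointLocation", "Files/checkpoints/incoming_csv")
        .trigger(availableNow=True)
        .toTable("staging_orders")
)
query.awaitTermination()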
0 likes • Aug '24
@Jerry Lee Thanks
Need Help with Executing Warehouse SQL Query Transformations in Fabric Spark Notebook
Hi everyone, I'm working on a project that involves multiple query transformations. Here's an example of one such transformation:

QUERY:
SELECT col1 AS name, col2 AS price FROM warehouse_name.schema_name.table_name1
UNION
SELECT col3 AS name, col8 AS price FROM warehouse_name.schema_name.table_name2

I need to execute these queries from a Fabric Spark notebook. This step is crucial for my validation criteria. Currently, I have 140 transformation queries that I need to validate by checking counts and performing manual difference calculations. This process is very time-consuming. If I can query these transformations and store the results as a DataFrame, it will significantly simplify the validation process. Could anyone guide me on how to achieve this in a Fabric Spark notebook? Any suggestions or best practices would be greatly appreciated. Thanks in advance for your help!
1 like • Aug '24
@Avinash Sekar You can use the abfss path of your table to load it in a notebook, like the script below:

df = spark.read.format("delta").load("abfss://xxxxxxx@onelake.dfs.fabric.microsoft.com/14958552-8ccc-40bb-953d-40653e33f37e/Tables/Sales/Dim_Customer")
display(df)
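Building on that reply, a hedged sketch of reproducing the UNION example from the question as a DataFrame for count validation (the base path, workspace and warehouse IDs below are placeholders, not real values):

# Placeholder abfss path; substitute your workspace name and warehouse item ID.
base = "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<warehouse_id>/Tables/schema_name"

t1 = spark.read.format("delta").load(f"{base}/table_name1")
t2 = spark.read.format("delta").load(f"{base}/table_name2")

# Equivalent of: SELECT col1 AS name, col2 AS price ... UNION SELECT col3 AS name, col8 AS price ...
# SQL UNION de-duplicates, hence distinct() after union().
result = (
    t1.selectExpr("col1 AS name", "col2 AS price")
      .union(t2.selectExpr("col3 AS name", "col8 AS price"))
      .distinct()
)

print(result.count())  # row count for validation
display(result)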
Anthony Estrada
2
14 points to level up
@anthony-estrada-6973
Data Engineer transitioning to become an AI Engineer

Active 440d ago
Joined Jun 25, 2024