
Memberships

Learn Microsoft Fabric

14.1k members • Free

8 contributions to Learn Microsoft Fabric
Understanding Direct Lake v2.0
Hey everyone, at FabCon 2025, Microsoft announced some updates and extensions to the Direct Lake semantic model storage mode. Some people are calling this Direct Lake 2.0 (though I don't think that's the official terminology). What's the difference? The original Direct Lake connects to the SQL Endpoint, while the new Direct Lake 2.0 connects directly to the Delta tables. This has a number of benefits:
- it allows you to create Direct Lake semantic models with more than one source
- it bypasses the SQL Endpoint (which can cause problems, with syncing issues etc.)
- I believe it was also an essential step for security/access reasons ahead of the rollout of OneLake Security
- it will allow you to propagate OneLake security definitions into the semantic model layer (without the need to redefine RLS/CLS permissions etc. in the semantic model)
You can learn more in this video from Data Mozart: https://www.youtube.com/watch?v=Z0tgA_pYK1s
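To make the contrast concrete, here's a minimal sketch (in Python) of the same distinction from the client side: querying a Lakehouse table through the SQL endpoint versus reading its Delta table directly from OneLake. This only illustrates the architectural difference, it is not how the Direct Lake engine itself works; the server, workspace, and table names are placeholders.

```python
# Illustrative only: the two access paths the post contrasts.
# Everything in <angle brackets> is a placeholder.

# Path 1 - through the SQL endpoint (how the original Direct Lake is framed):
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<sql-endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=<lakehouse>;"
    "Authentication=ActiveDirectoryInteractive;"
)
rows = conn.execute("SELECT TOP 5 * FROM dbo.sales").fetchall()

# Path 2 - straight against the Delta table in OneLake (the "Direct Lake 2.0" idea):
from deltalake import DeltaTable

dt = DeltaTable(
    "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/"
    "<lakehouse>.Lakehouse/Tables/sales"
    # auth omitted; outside Fabric you'd pass storage_options with a token
)
df = dt.to_pandas()  # reads the Delta/Parquet files, no SQL endpoint in the middle
```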
4 likes • Apr 28
"- it bypasses the SQL Endpoint (which can cause problems, with syncing issues etc.)" Crucial update.
0 likes • Apr 29
@Will Needham Hey Will, I'm trying to change the connection from the SQL endpoint to OneLake files, but for a semantic model in the Fabric web experience. I watched the video, but how do I set this up while creating a new model? Can you help me? Sorry if the question sounds ignorant.
UPDATE, 2nd Apr: DP-800 certification retired
UPDATE, 2nd Apr: Sorry folks, the DP-800 is not a real thing (at least not yet!). Just an April Fools' joke, enjoy your day 😀 #microsoftfabric #dp800 #april1st
0 likes • Apr 1
Man, I honestly got caught... I started browsing for the news, my mind had already started framing it in the Fabric "picture"... Then I couldn't find anything and I read the post again in detail... with the hashtags...
Ideas to declutter the dev workspace (CI/CD)
Hey, I wanted to ask you for ideas on decluttering a dev workspace. Long story short, I have a lot of POCs, half-baked notebooks and such in there. Before Git integration was introduced (and made available in my company), I created testing copies of items and worked on those. Now all this stuff really gets in the way when branching (folders are not supported, so everything lands in the root folder) and even when deploying to test (I haven't found an option to filter only "different" items when comparing in the pipeline; I have to go through all supported items every time).

Nevertheless, I don't want to get rid of the clutter completely. This is my idea: I create a deployment pipeline where my current dev workspace sits as the second stage, then I deploy all the items I want to keep back to the first workspace (which I called "early dev"). Dataflows can be ignored for now, as they are invisible to CI/CD, but I believe that if I really want a thorough clean-up I can export their templates in dev and import them in early dev. Eventually I want to delete all the clutter items in the dev workspace, but I'm afraid I'm missing something in my calculations and will lose stuff. So I decided to ask :-) Thank you in advance for any feedback!
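For what it's worth, here is a hedged sketch of the "deploy the keepers back to early dev" step, driven from a Fabric notebook via the deployment pipelines REST API. Assumptions to verify against the current docs: the deploy endpoint and request body shape, and the restrictions on backward deployment (as far as I know the target stage must be empty, which a fresh "early dev" workspace would be). All IDs are placeholders.

```python
# Sketch of a selective backward deployment with the Fabric
# deployment pipelines REST API. Endpoint and body shape are my
# recollection of the docs - verify before relying on this.
import sempy.fabric as fabric

client = fabric.FabricRestClient()  # handles auth inside a Fabric notebook

PIPELINE_ID = "<deployment-pipeline-id>"

payload = {
    "sourceStageId": "<dev-stage-id>",        # current (cluttered) dev workspace
    "targetStageId": "<early-dev-stage-id>",  # backward deployment target
    # Selective deploy: list only the items worth keeping.
    "items": [
        {"sourceItemId": "<notebook-item-id>", "itemType": "Notebook"},
    ],
    "note": "Move keepers back to early dev before cleaning up dev",
}

# Deployment is a long-running operation; a 202 response carries an
# operation URL you can poll for completion.
response = client.post(f"v1/deploymentPipelines/{PIPELINE_ID}/deploy", json=payload)
print(response.status_code)
```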
Introducing OneLake events (preview) 👀
Today, Microsoft announced a new feature that definitely caught my eye and will be really useful: OneLake events. A new event fires every time a OneLake file or folder changes, and it can be used to trigger a Data Pipeline! Previously, folder-based triggers could only be set up on Azure Blob Storage folders; now you also get them on OneLake files and folders. This will bring lots of interesting new automation workflows. How are you going to use this? Read more here: https://blog.fabric.microsoft.com/en-us/blog/unlocking-the-power-of-real-time-data-with-onelake-events?ft=All
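As a thought experiment, here's a tiny Python sketch of the kind of filtering you might do before kicking off a pipeline. The payload shape and event type name below are assumptions modeled on Azure Blob Storage events (OneLake events surface through the same eventing infrastructure); check the blog post for the real schema.

```python
# Hypothetical sketch: decide whether an incoming OneLake event should
# trigger a Data Pipeline. Field names and the event type string are
# assumptions, not the documented schema.

def should_trigger_pipeline(event: dict) -> bool:
    """Trigger only for new files landing in a specific folder."""
    event_type = event.get("type", "")   # e.g. a *.FileCreated type (assumed)
    subject = event.get("subject", "")   # path of the changed file (assumed)
    return event_type.endswith("FileCreated") and "/Files/landing/" in subject

sample_event = {  # fabricated example payload
    "type": "Microsoft.Fabric.OneLake.FileCreated",
    "subject": "/myLakehouse.Lakehouse/Files/landing/orders_2024-12-01.csv",
}

if should_trigger_pipeline(sample_event):
    print("Would trigger the Data Pipeline here")
```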
2 likes • Dec '24
Very useful, as Blob Storage event triggers were cumbersome to maintain and introduced additional capacity usage - just for the trigger itself.
Passed DP-600
I have successfully passed the certification exam (score ~850/1000). It was "appropriately hard", I'd say. Thank you to Will and the community, you have been an indispensable help! Some notes:
- Using MS Learn saved me a few extra points.
- Using MS Learn crashed the OnVue application three times (twice during transitions between the exam parts and once when I tried to close the Learn window). What a hassle.
- Lakehouse & PySpark is still a thing, though there were no overly complex questions.
1 like • Dec '24
Congrats! Had the same issues with MS Learn. I saw one more person having similar issues, so they definitely have something going on.
Jakub Znamirowski
Level 3 • 25 points to level up
@jak-zna-9151
I'm a data engineer working with Fabric on a daily basis. I focus on ETL using PySpark notebooks and data modelling.

Active 95d ago
Joined May 3, 2024
Poland