9 contributions to Learn Microsoft Fabric
❓ How are you using Variable Libraries? (If at all!)
Hey community! Are you using the Variable Library in Fabric?! I'm doing some research for a future YouTube video, and it would be great to open up a bit of a discussion around how you're using them so far, considering they are now Generally Available! What use cases are you using them for? Any tips to share about how you set them up? Or perhaps you're using the REST API endpoints for variable libraries for more advanced use cases? Any experience, big or small, would be useful to know about - so please share below! 👇👇👇
Poll • 68 members have voted
0 likes • Nov 11
Our client data cannot be co-mingled, so each client gets its own transactional and reporting database environment. These databases are copies of our "core" databases, which lets us stand up new clients quickly and reduces the amount of non-standard coding. We also maintain a core project in Fabric and use the variable library when we deploy and set up each new client project in Fabric.
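For anyone exploring the REST API angle mentioned above, here is a minimal Python sketch of listing Variable Library items in a workspace via the Fabric REST API. The workspace ID and token are placeholders, and the "VariableLibrary" item type name is an assumption on my part, so check the docs before relying on it:

import requests

WORKSPACE_ID = "<workspace-guid>"  # hypothetical placeholder
TOKEN = "<aad-access-token>"       # e.g. acquired via azure.identity

# List workspace items, filtered to Variable Libraries
# (the "VariableLibrary" type name is an assumption).
resp = requests.get(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/items",
    params={"type": "VariableLibrary"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for item in resp.json().get("value", []):
    print(item["id"], item["displayName"])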
No VNET data gateways in Fabric Trial
We are currently trying to implement a Fabric POC. The Trial version apparently does not support VNET data gateways (per our Cloud Infrastructure Team after talking to Microsoft Support). Our data is contained in a High Trust environment and is not publicly accessible. If VNET data gateways are not an option, how can we connect to our Azure SQL Managed Instances so that we can use pipelines and dataflows to bring data into our Lakehouses? Thanks in advance for any recommendations.
0 likes • Jun 10
@Johan Andolf It says right in the link you provided that you need a VNET data gateway, so that won't work.
0 likes • Jun 11
@Johan Andolf Thanks. That was the conclusion I had drawn, but I was hoping for something else.
Incremental Refresh for Dataflow Gen2 (Public Preview)
Hey everyone, the conference updates are now starting 👀 First up: incremental refresh added to Dataflow Gen2. You can read more here: https://blog.fabric.microsoft.com/en-us/blog/announcing-public-preview-incremental-refresh-in-dataflows-gen2/
1 like • Sep '24
I've been waiting for this option, as we've had to come up with some complicated workarounds for moving data between our Bronze and Silver layers. I don't see the option yet, but hopefully it shows up later today!
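For context, the kind of workaround I mean is a hand-rolled watermark pattern in a notebook. A minimal PySpark sketch, with made-up table and column names, of an incremental append from a Bronze to a Silver Lakehouse table (a real pipeline would also need MERGE logic for updates):

from pyspark.sql import functions as F

bronze = "bronze_lakehouse.sales"  # hypothetical source table
silver = "silver_lakehouse.sales"  # hypothetical destination table

# High-water mark: the latest modified timestamp already landed in Silver.
# `spark` is the session a Fabric notebook provides by default.
watermark = spark.table(silver).agg(F.max("modified_at")).collect()[0][0]

incoming = spark.table(bronze)
if watermark is not None:  # None on the very first load
    incoming = incoming.filter(F.col("modified_at") > F.lit(watermark))

incoming.write.mode("append").saveAsTable(silver)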
📊 Fabric Known Issues Report (link)
Microsoft publishes a Known Issues Report, which is a good thing to be aware of 👍
0 likes • Jul '24
@Will Needham I've written dynamic content in the table name on the destination tab. You can see it in the JSON, but it no longer appears in the table name field, or when I open the editor. Outside of a ForEach, everything works as it should. I failed to mention that this happens after I close the pipeline editor and re-open it.
0 likes • Jul '24
@Will Needham Thanks for looking. I tried recreating it, and that didn't help. This has happened to every single Copy data activity inside a ForEach that I have built, across multiple workspaces.
PySpark Notebook SQL commands not being reflected in Lakehouse
I am using a PySpark Notebook to help develop an incremental load into our Lakehouse. Some changes I make with Spark SQL, such as deletes or added table columns, I can see reflected in the Lakehouse. However, when I run an insert statement or drop a column, I can see the change when I run a query in the Notebook, but if I run that same query against the Lakehouse SQL endpoint, it's as if the column didn't drop or the insert didn't happen. A Notebook query says my max value on a column is x, and the Lakehouse query says that same max value is less than x. I tried refreshing the Lakehouse, but the values still do not match after I run the INSERT statement in the Notebook. Am I missing some small setting or command? I'm trying to figure out why the Lakehouse and the Notebook are not returning the same thing.
0 likes • Jul '24
Thanks. I am aware the Lakehouse SQL endpoint is read-only, so I was using the Notebook to add a column, update, delete, and insert. When I run queries in the Notebook, it looks like everything ran. When I go back and run the same query against the Lakehouse SQL endpoint, the results do not match: it's as if the insert from the Notebook never occurred, even though the delete statement did. It's been nearly 2 hours, the changes are still not reflected in the Lakehouse, and the insert was only 10 rows. Appreciate the help
1 like • Jul '24
Think I figured out the issue. I was adding and dropping a column that I was using to flag matching records that I wanted to delete from the source table. In order to do that, I had to use the code below, which seems to cause an issue with syncing table data between the Notebook and the SQL endpoint. Once I came up with a new approach that doesn't involve the column I was adding and dropping, the Notebook and the SQL endpoint returned the same values.

ALTER TABLE <table_name> SET TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5'
)
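One way to avoid the add-and-drop flag column entirely (not necessarily what I ended up with, just a sketch with hypothetical table and key names) is a Delta MERGE that deletes the matching records directly:

from delta.tables import DeltaTable

target = DeltaTable.forName(spark, "lakehouse.source_table")  # hypothetical names
matches = spark.table("lakehouse.staging_table")

# Delete target rows whose key matches the staging data, with no
# temporary flag column and therefore no ALTER TABLE needed.
(target.alias("t")
    .merge(matches.alias("s"), "t.id = s.id")
    .whenMatchedDelete()
    .execute())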
Derek Darrow
@derek-darrow-2440
Data Engineer located in Saint Louis, MO (USA)

Active 33d ago
Joined Apr 24, 2024