42 contributions to Learn Microsoft Fabric
What are your biggest pain points currently with Fabric?
Hey everyone, happy Monday! I'm currently planning out future content for the YouTube channel, and I always want to produce the content that is most relevant/helpful to you! So, some questions:

- What are your biggest pain points currently with Fabric?
- Anything you're struggling to understand?
- Things you think are important, but don't quite grasp yet?

It's this kind of engagement that led to the Power BI -> Fabric series, and then the DP-600 series, so I thought I'd open it up to you again! Looking forward to hearing from you - thank you!

Here are some potential topics:
- Delta file format
- integrating AI / ML / Azure AI / OpenAI
- Copilot
- Git integration / deployment pipelines / CI/CD
- data modelling / implementing SCDs
- medallion implementation
- more advanced PySpark stuff
- data pipelines
- metadata-driven workflows
- dataflows (and optimising dataflows)
- lakehouse architectures
- real-time
- data science projects
- semantic link
- migrating semantic models
- using Python to manage semantic models
- administration / automation
- Fabric API
- other...?
7
38
New comment 2h ago
0 likes • 6h
@Chinmay Phadke @Will Needham Interesting to start the debate about workarounds vs. the right way to do all of these points. A new thread, @Will Needham?
0 likes • 5h
@Chinmay Phadke my thoughts / comments:

1) I am reading incremental data from a Lakehouse into the Warehouse:
1.1) From SQL Server on-prem, I flag the incremental data. You can use CDC, or your own method.
1.2) Using a Copy activity, I call a stored procedure in the on-prem database; this stored proc brings only the incremental data into a Lakehouse table.
1.3) Using a stored proc in the Warehouse, I insert/update the incremental data into the large table.
2) You can use Dataflow Gen2 the same way as in point 1).
3) Using abfss with the right security, as you mention, is the workaround. You can read files from many sources, and lakehouses are one of them.
4) As I use the Warehouse for the Silver + Gold layers (Lakehouse for Bronze), I call a stored procedure in the Warehouse, and there you have the update options (I load the data from on-prem into a lakehouse table using a Copy activity, and then I call a stored proc in the Warehouse to clean and upsert the data).
5) Workaround: use more lakehouses. I agree that schema support is needed in the lakehouse for better organization.
6) I agree, this is a pain point. I suppose Microsoft will fix this ASAP. If you create the table in a specific schema first, then you can see that table in Dataflow Gen2 (it takes additional effort to create the table first).
7) You can use a Copy activity to load the data from on-prem into the lakehouse (to a file or a table), and use PySpark to process this information (you can orchestrate this using pipelines) - a PySpark sketch follows below.
9) Views will make the Direct Lake connection fall back to DirectQuery. They are only available via the SQL endpoint.
10) RLS will make the Direct Lake connection fall back to DirectQuery. Something to be improved by Microsoft.
11) A lot of benefits with Direct Lake. You can pre-warm the data and the performance will be similar to import mode.
12) I didn't know that. Something to be improved by Microsoft.
13 to 18) I didn't know about these restrictions... thanks for sharing.
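On point 7, a minimal sketch (not the commenter's exact code) of processing the landed incremental data with PySpark in a Fabric notebook: a Delta MERGE that upserts the new rows into the large target table. Table and column names (orders_incremental, orders, OrderID) are hypothetical.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# A `spark` session already exists in a Fabric notebook; created here for completeness.
spark = SparkSession.builder.getOrCreate()

# Hypothetical names: the table landed by the Copy activity, and the large target table.
incremental = spark.read.table("orders_incremental")
target = DeltaTable.forName(spark, "orders")

# Upsert: update rows whose key already exists in the target, insert the rest.
(target.alias("t")
    .merge(incremental.alias("s"), "t.OrderID = s.OrderID")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```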
Fine-grained access control for Lakehouse
Watch the Guy in a Cube video here: https://www.youtube.com/watch?v=oamf3oztUAw

Problem: previously, access to things in the Lakehouse was all-or-nothing.

Solution: now, Microsoft have introduced more fine-grained controls for the Lakehouse, so you can share individual tables (or files/folders) with people or groups. It opens up some pretty interesting access control patterns which would be good to explore in more detail. Thoughts?
11
4
New comment 1d ago
0 likes • 1d
In some cases this option will replace the use of ADLS, and with Fabric P1/F64 we have up to 1 terabyte included in the price for all usage (including the files in the lakehouse).
Task Flows...
I believe Task Flows has been released to Preview, but I think it's currently only available in the USA. I tried it (from the UK), and it's not showing yet... Can anyone see it? https://www.youtube.com/shorts/Q8aYa6pS2zw
9
17
New comment 2d ago
0 likes • 6d
I don't have the option yet
BIG NEWS: Updates to CI/CD, Git Integration, Deployment Pipelines 👀👀
Some updates to the CI/CD functionality in Fabric announced today... >> READ ANNOUNCEMENT HERE <<

New items supported in CI/CD:
- Data pipelines – available in Git integration and deployment pipelines.
- Warehouse – available in Git integration and deployment pipelines.
- Spark Environment – available in Git integration and deployment pipelines.
- Spark Job Definition – available in Git integration.

PLUS, Deployment Pipelines APIs - read more here (a sketch of calling the deploy API follows below).
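Not from the announcement, just a hedged sketch of what the Deployment Pipelines APIs make possible: triggering a deploy-all from Python against the Power BI REST endpoint that deployment pipelines currently use. The pipeline ID and token are placeholders, and the request fields follow the documented DeployAll shape as I understand it - verify against the docs linked above.

```python
import requests

PIPELINE_ID = "<pipeline-guid>"   # placeholder: your deployment pipeline's ID
TOKEN = "<aad-access-token>"      # placeholder: acquire via MSAL / Azure CLI

# Deploy everything from the first stage (0 = Development) to the next stage.
resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "sourceStageOrder": 0,
        "options": {
            "allowCreateArtifact": True,     # create items missing in the target stage
            "allowOverwriteArtifact": True,  # overwrite items that already exist
        },
    },
)
resp.raise_for_status()
# The call is asynchronous: expect 202 Accepted and poll the returned operation.
print(resp.status_code)
```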
17
7
New comment 6d ago
0 likes • 6d
Amazing news... Now we can use Azure DevOps to source-control all the objects!!!
How to update column data type in data warehouse
Hi, I have a table in my data warehouse and need to change the "DateStamp" column from datetime2 to date. I tried running an ALTER TABLE query, but it isn't supported in Fabric. Is there a better way to do this without dropping and recreating the whole table?
0
7
New comment 7d ago
1 like • 7d
As far as I know, we need to drop and recreate... I think you can use a stored proc to dump the data into a temporary table, drop and recreate the table with the new types, and after that move the data back (see the sketch below).
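A minimal sketch of that drop-and-recreate approach, driven from Python over the warehouse's SQL endpoint via pyodbc (you could equally wrap the same T-SQL in a stored proc, as the comment suggests). The connection details, table, and column names are placeholders/hypothetical.

```python
import pyodbc

# Placeholder connection to the warehouse's SQL endpoint; auth options vary by setup.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<warehouse-sql-endpoint>;"
    "Database=<warehouse-name>;"
    "Authentication=ActiveDirectoryInteractive;"
)
conn.autocommit = True
cur = conn.cursor()

# 1) Dump the data into a temporary table via CTAS, casting DateStamp to date.
#    (List the real columns instead of the hypothetical Col1, Col2.)
cur.execute("""
    CREATE TABLE dbo.MyTable_tmp AS
    SELECT Col1, Col2, CAST(DateStamp AS date) AS DateStamp
    FROM dbo.MyTable
""")

# 2) Drop the original table, recreate it from the temporary copy, then clean up.
cur.execute("DROP TABLE dbo.MyTable")
cur.execute("CREATE TABLE dbo.MyTable AS SELECT * FROM dbo.MyTable_tmp")
cur.execute("DROP TABLE dbo.MyTable_tmp")
```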
Julio Ochoa
@julio-ochoa-1058
Computer Science. Data and Analytics Manager. DP-600 (Fabric) certified.

Active 32m ago
Joined Mar 23, 2024
Canada