Activity

Owned by Pieter

unicornclub.io

1 member • Free

UnicornClub.io helps companies build Data and AI capabilities fast by providing top specialists, while giving those specialists access to great remote jobs.

Memberships

Founders Accelerator (Free)™

10k members • Free

Skoolers

180.5k members • Free

Startup System Network

104 members • Free

Selling Online / Prime Mover

33.1k members • Free

AI Automation Agency Hub

273.9k members • Free

Learn Microsoft Fabric

14.2k members • Free

6 contributions to Learn Microsoft Fabric
Databricks vs MS Fabric or Hybrid
Hi all, I have a client that is currently using Databricks and Power BI. They have multiple Databricks clusters running and want to consolidate into one analytics platform. I've suggested Fabric. What would be the reasons for migrating to Fabric instead of staying on Databricks, or would a hybrid approach using both Fabric and Databricks be better? Would Fabric be cheaper to run as a single platform? Please let me know your thoughts.
Get discounted access to new Microsoft Fabric Udemy course
Hey everyone, a friend of mine Malvik Vaghadia has just released a brand new Udemy course on Microsoft Fabric. I've reviewed the course, and it's really comprehensive, and Malvik is a great instructor with lots of experience teaching on Udemy and on his YouTube channel. Try this link for FREE access (first 100 only, first come first served): 👉🔗 EDIT: *Link Removed Because All Coupons Have Been Redeemed* You can then use this link for a heavy discount (68% discount, I believe): 👉🔗 https://www.udemy.com/course/microsoft-fabric-the-ultimate-guide/?couponCode=LEARNFABRIC
What You'll Learn:
✅ Data Warehousing: Understand the principles of data warehousing, master SQL for querying and managing data, and learn how to design and implement data warehouses within Microsoft Fabric.
✅ Power BI: Create powerful, interactive reports and dashboards.
✅ Data Engineering: Develop robust data engineering pipelines using Spark (PySpark), manage large-scale data transformations, and automate workflows efficiently.
✅ Data Factory: Master the use of Data Factory for orchestrating and automating data movement and transformation.
✅ Real-Time Intelligence: Leverage real-time data processing capabilities to gain instant insights and act swiftly in dynamic business environments.
Get discounted access to new Microsoft Fabric Udemy course
1 like • Aug '24
Thanks for sharing
How to change power bi lakehouse data source after deployment?
We have DEV, QA and PROD workspaces, and a Power BI report connecting to a lakehouse in the DEV workspace. When we deploy to the QA workspace, the Power BI report still points to the lakehouse in DEV. How do we change the report to point to the lakehouse in the QA workspace?
0 likes • Aug '24
@Will Needham Yes, TestDeployment is a Power BI report pointing to a lakehouse in Direct Lake mode. It uses the default semantic model generated by Fabric for the lakehouse. I've logged this with Microsoft; they say it is not supported but haven't given me a workaround.
0 likes • Aug '24
Thanks for your help @Will Needham, I'll try it...
Where's the best place to store your metadata?
Happy Monday everyone! For those of you exploring metadata-driven architectures (which I think is quite a lot of you!)... here are some ideas for you.

As a quick recap: the metadata-driven data pipeline is a technique commonly used in data engineering. Rather than explicitly declaring the source and the destination for a Copy Data Activity (for example), we instead design our pipelines so that the Source and Destination can be passed in dynamically. This means we can store details of the Source/Destination connections in another location, which is read at execution time. This adds a lot of benefits: scalability, maintainability, and many more.

However, the point of this post is to start a discussion about how and where you can store such metadata. The two most common ways you see metadata stored (in a Microsoft environment) are:
1. In structured tables (like the Data Warehouse)
2. In a JSON file (perhaps in your Lakehouse Files area)

I'd like to throw in a third option for discussion: storing your metadata in a Notebook and passing it into your pipeline using mssparkutils.notebook.exit().
Pros of this approach:
- makes your configuration trackable by version control (which is not possible with the previous two methods)
Cons:
- maybe more difficult to read, if you have quite a few Key/Value pairs

Thoughts? Where are you storing your metadata at the moment?
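A minimal sketch of the Notebook option described in the post, assuming a Fabric/Synapse notebook where mssparkutils is available. The pipeline_metadata entries, table names and the 'Get Metadata' activity name are made-up examples for illustration, not anything from the thread.

import json
from notebookutils import mssparkutils  # pre-loaded in Fabric/Synapse notebooks; import shown for clarity

# Illustrative pipeline config: one entry per Copy Data run (all values are hypothetical).
pipeline_metadata = [
    {
        "source_schema": "dbo",
        "source_table": "Customers",              # hypothetical source table
        "destination_table": "bronze_customers",  # hypothetical Lakehouse table
        "load_type": "full",
    },
    {
        "source_schema": "sales",
        "source_table": "Orders",
        "destination_table": "bronze_orders",
        "load_type": "incremental",
        "watermark_column": "ModifiedDate",
    },
]

# notebook.exit() takes a string, so serialise the config to JSON for the pipeline to parse.
mssparkutils.notebook.exit(json.dumps(pipeline_metadata))

On the pipeline side, the Notebook activity exposes this exit value in its output (in ADF/Fabric-style expressions, something like @activity('Get Metadata').output.result.exitValue, though the exact path is an assumption here), which can be parsed with json() and fed into a ForEach that wraps the Copy Data activity. Because the notebook is a workspace item, the configuration also rides along with Git integration, which is the version-control benefit the post mentions.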
Where's the best place to store your metadata?
2 likes • Jul '24
What about a delta table in a lakehouse?
4 likes • Jul '24
Here is my metadata pipeline design, any thoughts to make it better?
How do you do delta loads in pipelines from SQL Server?
Is there a native way in Fabric to do incremental loads from SQL Server to a Lakehouse? I can see native settings for this in Azure Data Factory, but not in Fabric pipelines. Do you have to manually enable CDC on SQL Server and then add the SQL code in the source tab?
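No single answer here, but one common pattern is a high-watermark incremental load run from a notebook rather than CDC. The sketch below assumes a Fabric notebook (so spark is already defined), a reliable ModifiedDate column on the source, and made-up names for the JDBC URL, dbo.Orders, OrderID and bronze_orders.

from delta.tables import DeltaTable
from pyspark.sql import functions as F

jdbc_url = "jdbc:sqlserver://<server>.database.windows.net;databaseName=<db>"  # placeholder connection
target_table = "bronze_orders"   # assumed Lakehouse Delta table
watermark_col = "ModifiedDate"   # assumed change-tracking column on the source

# 1. Find the highest watermark already loaded; None means this is the first (full) load.
if spark.catalog.tableExists(target_table):
    last_wm = spark.table(target_table).agg(F.max(watermark_col).alias("wm")).collect()[0]["wm"]
else:
    last_wm = None

# 2. Pull only the rows changed since that watermark (parameterise this properly in production).
base_query = "SELECT * FROM dbo.Orders"
query = base_query if last_wm is None else f"{base_query} WHERE {watermark_col} > '{last_wm}'"
changes = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("query", query)
    .option("user", "<user>")          # better: resolve credentials from a secret store
    .option("password", "<password>")
    .load()
)

# 3. Upsert the changes into the Delta table on the business key.
if spark.catalog.tableExists(target_table):
    (
        DeltaTable.forName(spark, target_table)
        .alias("t")
        .merge(changes.alias("s"), "t.OrderID = s.OrderID")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
else:
    changes.write.format("delta").saveAsTable(target_table)

Design note: a watermark load like this only needs a "last modified" column and will not pick up deletes, whereas CDC on SQL Server captures inserts, updates and deletes, so the right choice depends on the source tables and how much setup you can do on the SQL side.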
Pieter Human
Level 3
44 points to level up
@pieter-human-1671
Help companies build Data & AI capabilities fast, with the right people and the right playbooks.

Active 1d ago
Joined Jun 25, 2024