

11 contributions to Learn Microsoft Fabric
Built My Very First Fabric Solution
Hi Community, I had previously developed a Power BI report on the Premier League: Premier League Report. This report ingests data from two separate web sources:
- 24/25 Match Results
- 24/25 Fixtures
I passed DP-600 back in June, but when I attended a presentation delivered by Will to the London Fabric User Group, I was inspired to start building in Fabric. In the attached slides I have tried to outline how using Microsoft Fabric has improved my analytical solution:
- Increased resilience & robustness
- Built-in validation & quality checks
- Leveraging Git integration for enhanced collaboration
- Enhanced analytical capabilities
Please let me know your thoughts and any suggestions or improvements you might have.
0 likes • Oct '24
Hi Krishan - Good project to start with, and nice documentation of your work and goals! I'm thinking about documenting one of my projects as well. However, I'm afraid I would have to black out too much, because the only complete projects I have were done for clients.
Connecting MS powerapps to fabric
Hi All,

I need a CRUD editor for end users for a table in a Fabric warehouse. The first thing that came to mind (and Copilot actually agreed :-D) was Power Apps. However, I cannot for the life of me get Power Apps to connect to and read a warehouse table. MS suggests using Dataverse, but when I connect that way I get "oops something went wrong try reloading". I know there could still be another issue with the fact that a warehouse doesn't normally work with primary keys, but I have found a potential solution for that.

Does anybody have any idea how to do this? Or is Power Apps not the right tool, and do I need to use something else?

PS: I tried to go directly to the SQL endpoint by creating a SQL connector, but that only shows me a bunch of system stored procedures and, again, not my table.

Thanks, Hans
2 likes • Sep '24
Just my 2 cents: is this the right use case for Fabric? Although it has a database engine to access your data, it is not an operational database system. Fabric is an analytics platform that can ingest and process data, analyse that data, train ML models on it, and report on it. I'd go with a standalone Power App and use its data store as a source for a Fabric workflow. My opinion is not final, and I'm open to better suggestions. I even have a similar requirement in my backlog, so I'm happy to learn from this thread 😆.
Future of Power Query vs Python for ETL in Fabric
Power Query (Dataflows Gen2) has a user-friendly GUI to generate ETL code, and hand coding is only occasionally required to handle tricky cases. In contrast, implementing ETL in Python mostly involves hand-writing code. What is the future direction in MS Fabric in terms of Power Query vs Python? Is the Power Query engine being improved so that Python will not be required? Or is Python going to be the de facto ETL language in Fabric? Instead of investing time and effort to master both of these languages, is it worthwhile to focus on one and master it?
4 likes • Sep '24
In my experience it depends on the source and the connection options to that source. If you're just ingesting from structured data sources, Power Query might be a perfect solution. PySpark comes in when you have more requirements, e.g.:
- moving ingested source data files to a specific location
- checking and recording the SHA-256 hashes of source files
- complex data sanitizing
- data validation
- etc.
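The second point above (recording SHA-256 hashes of source files) can be sketched in plain Python; this is a generic illustration, not the commenter's actual pipeline code, and the chunked read simply keeps memory use flat for large files:

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hash of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

In a Fabric notebook you would typically call this while moving ingested files into the lakehouse and write the resulting hashes to a tracking table, so that re-deliveries of identical source files can be detected and skipped.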
What are your biggest pain points currently with Fabric?
Hey everyone, happy Monday! I'm currently planning out future content for the YouTube channel, and I always want to produce the content that is most relevant/helpful to you! So, some questions:
- What are your biggest pain points currently with Fabric?
- Anything you're struggling to understand?
- Things you think are important, but don't quite grasp yet?
It's this kind of engagement that led to the Power BI -> Fabric series, and then the DP-600 series, so I thought I'd open it up to you again! Look forward to hearing from you - thank you! Here are some potential topics:
- Delta file format
- integrating AI / ML / Azure AI / OpenAI
- Copilot
- Git integration / deployment pipelines / CI/CD
- data modelling / implementing SCDs
- medallion implementation
- more advanced PySpark stuff
- data pipelines
- metadata-driven workflows
- dataflows (and optimising dataflows)
- lakehouse architectures
- real-time
- data science projects
- semantic link
- migrating semantic models
- using Python to manage semantic models
- administration / automation
- Fabric API
- other...?
0 likes • Aug '24
My biggest pain point is the lack of support for certain items and scenarios in the deployment pipeline. Second comes the random fallout of functionality, e.g. failure to create a Spark session for a notebook execution, where posting in the Fabric Community yields answers like "come back tomorrow and try again".

Besides that, I miss a simple scheduling mechanism to turn a capacity on and off. I'm working for a small company with externally managed Azure resources; to turn the capacity on or off, I have to raise a support ticket. So we stick to an "always on" scenario. To reduce costs, we picked a small capacity (F4). That's doable for a single-user, single-use-case situation, but too small for an environment with more users. For example, F4 implies 4 concurrent Spark nodes, and the smallest setting on a Spark environment is 1-3 nodes. Effectively that means only one Spark session can run at any time. Since every notebook uses its own Spark session, we have to shut down every session after a run; otherwise you're blocking a co-worker (or yourself when switching to another notebook).

If we could easily restrict capacities to office hours only (say 60 hours per week), we could go for an F16 at almost the same monthly cost. This may seem like a minor issue, but for us it's big.
Deployment of Fabric items across workspaces
@Will Needham Not sure if you have any content (video or other documentation) on deployment of Fabric items, so I thought I'd raise it here! We know that some items are supported in Git, but there seem to be issues deploying some items across workspaces, because each item in each workspace has its own IDs (workspace ID, warehouse ID, lakehouse ID), and because of how different items reference each other. For example, referencing a lakehouse in a notebook or pipeline is fine, but when that gets deployed, the notebook or pipeline will error because it's referencing a lakehouse in another workspace. I see that some people are finding workarounds, which may not be ideal, but a video on deploying items across workspaces would be great.
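The kind of workaround hinted at above often boils down to a deployment-time find-and-replace over exported item definitions, swapping source-workspace GUIDs for their target-workspace equivalents. A minimal, hypothetical sketch (the function name and ID map are illustrative, not a Fabric API):

```python
def remap_ids(item_definition: str, id_map: dict) -> str:
    """Replace source-workspace GUIDs in an item definition (e.g. a
    notebook's JSON) with the corresponding target-workspace GUIDs."""
    for source_id, target_id in id_map.items():
        item_definition = item_definition.replace(source_id, target_id)
    return item_definition
```

A real deployment script would load the Git-exported definition files, apply a map like `{dev_lakehouse_id: prod_lakehouse_id}`, and push the result to the target workspace, which is exactly the fragile part that built-in cross-workspace deployment support would remove.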
1 like • Apr '24
You can get the lakehouse ID from the URL of the lakehouse: just open the target lakehouse and copy the last GUID in the URL.
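Pulling "the last GUID in the URL" is easy to automate with a regular expression; the example URL below is made up for illustration:

```python
import re
from typing import Optional

# Standard 8-4-4-4-12 hex GUID pattern.
GUID_RE = re.compile(r"[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}")


def lakehouse_id_from_url(url: str) -> Optional[str]:
    """Return the last GUID found in a lakehouse URL, or None if absent."""
    guids = GUID_RE.findall(url)
    return guids[-1] if guids else None
```

For a URL of the shape `.../groups/<workspace-guid>/lakehouses/<lakehouse-guid>`, the last GUID is the lakehouse ID, which is what a notebook or pipeline needs when pointing at a lakehouse in another workspace.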
Arjaan den Ouden
@arjaan-den-ouden-3162
Professional Power BI user, taking first steps in Fabric. Expert knowledge on Dutch Pensions datasets.

Active 318d ago
Joined Mar 23, 2024
Netherlands