8 contributions to Learn Microsoft Fabric
Ingesting from an API: pipeline vs notebook
I am pulling in data from a paginated API, 500 records per pull. I need to pull in 86,000 entries, several times, to collect the full set of data I need. I will need to do this on a regular basis, so performance is important. I have it working in a notebook and it takes 5-6 minutes to run. I am wondering if a pipeline would be faster? Any thoughts? Thanks!
0 likes • Feb 23
@Damien O'Connor Thanks for the response. I am using the Emma API (email platform): https://api.myemma.com/api/external/mailings.html. The thing that seems different is that the API does not pass back any paging information in the body of the response. Instead, you get a max count from an API call using a count query parameter, and then construct your own paging in 500-entry chunks using start and end query parameters. The copy activity paging rules seemed to support this fine, by putting in a parameter and selecting the range and offset. However, it never seemed to recognize the parameters properly. I tried both with and without curly braces, but neither worked. Any thoughts would be greatly appreciated.

I agree, the for loop is probably not an accurate test of running in parallel. I do control the parallelism in the notebook to prevent throttling, but I am still interested in seeing if I can get the pipeline to work with the pagination rules, as I think it could be much faster. Here is a screenshot of my last attempt. Not sure if I should be using the absolute URL instead, but I was not sure what that would look like.
0 likes • Apr 13
Hey - thought I would give an update here. I actually got the paging in a pipeline to work if I just set one parameter with a range and then calculated the end parameter. Somehow, setting two parameters, start and end, each with a range, seemed to create the fatal issue. However, I don't think this does parallel calls very well, and in the end it was much slower than using an async/semaphore architecture in a notebook.
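For anyone curious what the async/semaphore approach looks like, here is a minimal sketch. The 500-entry start/end chunking matches what was described above; `fake_get` is a stand-in for a real HTTP call (e.g. an aiohttp `session.get` with `start`/`end` query parameters), and the concurrency limit of 10 is an illustrative value, not a recommendation from Emma's docs.

```python
import asyncio

PAGE_SIZE = 500  # Emma-style pagination: start/end query parameters in 500-entry chunks

def page_ranges(total_count, page_size=PAGE_SIZE):
    """Build (start, end) pairs covering total_count records."""
    return [(start, min(start + page_size, total_count))
            for start in range(0, total_count, page_size)]

async def fetch_page(get, start, end, sem):
    # The semaphore caps in-flight requests so the API doesn't throttle us.
    async with sem:
        return await get(start, end)

async def fetch_all(get, total_count, max_concurrency=10):
    sem = asyncio.Semaphore(max_concurrency)
    tasks = [fetch_page(get, s, e, sem) for s, e in page_ranges(total_count)]
    return await asyncio.gather(*tasks)

# Stubbed transport for demonstration; swap in a real HTTP client.
async def fake_get(start, end):
    await asyncio.sleep(0)          # simulate I/O
    return list(range(start, end))  # pretend these are records

pages = asyncio.run(fetch_all(fake_get, 1_200))
flat = [record for page in pages for record in page]
print(len(flat))  # 1200
```

Because all page boundaries are known up front from the count call, every request can be issued concurrently up to the semaphore limit, which is why this tends to beat a sequential copy-activity loop.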
Fabric reports running in apps alongside legacy Power BI Pro reports
Hi all, We are trying to get a report from Fabric integrated into an app in a non-Fabric Premium workspace, to keep it alongside our legacy reports. We have published a report using Power BI Desktop and the semantic model within a Fabric workspace. The dev team can see the app, but I'm struggling to get permissions working so that anyone without Fabric workspace access can view the report. I have given users access to the semantic model, but apparently this is not enough.
1 like • Apr 13
@René Tachon Are you using a default semantic model created within Fabric, or a semantic model created using Power BI Desktop? I have found they can behave differently, and there were times when only a semantic model created from Power BI Desktop had all the functionality I needed.
Power BI Licenses for Fabric
Hi, I'm Sam from London and I work as a Power BI Developer/BI Specialist for a Financial Services Company. We currently use Power BI extensively across our organisation, mainly pulling data from our Data Warehouse using SQL (Azure). I am really interested in a lot of the Fabric capabilities within Power BI, specifically Copilot and Data Activator, as I think they could bring a number of tangible benefits to our company. However, we are currently on Pro Licenses. I would like to understand what licenses people would recommend upgrading to so we can get the benefits of Fabric?
1 like • Jan 20
@Antony Catella interesting. I have not tried to customize SharePoint in a while. It was tricky back then. Thanks. Will be checking that out.
1 like • Mar 11
@Todd Magnusen You need to create a Service Principal and then write the code to get the token for authentication, as that solution kind of walks you through. I am not trying to filter based on login, though, which is a little tricky, because at least in my solution I am using a single account to do the login. However, you can also do embedding where you pass through the individual's account, but then you are back to a different licensing model. @Diana Geyer was also trying to do filtering based on customer logins. Diana - did you get anywhere with that?
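The "get the token" step for a Service Principal is the standard Microsoft Entra ID client-credentials flow. A minimal sketch of the request body, assuming a registered app with Power BI API permissions granted (the tenant/client/secret values are placeholders for your own):

```python
from urllib.parse import urlencode

tenant_id = "<your-tenant-id>"
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

body = {
    "grant_type": "client_credentials",
    "client_id": "<your-app-client-id>",
    "client_secret": "<your-app-secret>",
    # The .default scope resolves to whatever Power BI permissions
    # were granted to the app registration.
    "scope": "https://analysis.windows.net/powerbi/api/.default",
}
# POST this as application/x-www-form-urlencoded with any HTTP client;
# the JSON response contains the access_token to use as a Bearer token.
encoded = urlencode(body)
```

The access token that comes back is then used in the `Authorization: Bearer ...` header on Power BI REST API calls.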
Data Pipeline
I would like to schedule a pipeline to run every hour between 8am and 8pm daily. What's the best way to approach this?
0 likes • Feb 24
Yes, I have a pipeline that just orchestrates the running of everything else, so that I can have any flavor I need. I also put in error handling so that I get an email if anything fails.
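For the hourly 8am-8pm window itself, Fabric's pipeline scheduler handles this in the UI (repeat hourly, with a daily start/end time), so no code is required there. Purely as an illustration of what that window produces, e.g. for sanity-checking an orchestrator's expected runs:

```python
from datetime import time

# An hourly schedule from 08:00 through 20:00 inclusive
run_times = [time(hour=h) for h in range(8, 21)]
print(len(run_times))  # 13 runs per day
```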
[FAQ] Which Fabric capacity(s) do I need?! SKUs, how many, pricing, licensing
If you have any further questions on this topic not covered in the article: #Which Fabric capacity(s) will I need? ASK BELOW 👇👇
1 like • Jan 19
We are a small organization using F8 for Fabric and Power BI. One way we were able to afford this was by no longer needing PPUs for everyone who needed to view reports. Instead, we are using the embed token - embed for your customers (https://learn.microsoft.com/en-us/javascript/api/overview/powerbi/embedding-solutions) - and displaying the reports within our CRM system. It seemed to be the best way at the time, but I am curious if that is still the best option.
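For context on what that flow involves: "embed for your customers" exchanges an app token for a report-scoped embed token via the Power BI REST API's GenerateToken endpoint. A rough sketch of the request (workspace/report GUIDs and the bearer token are placeholders for your own):

```python
import json

workspace_id = "<workspace-guid>"
report_id = "<report-guid>"

# Reports - Generate Token In Group endpoint of the Power BI REST API
generate_token_url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"groups/{workspace_id}/reports/{report_id}/GenerateToken"
)
payload = json.dumps({"accessLevel": "View"})  # read-only embed token
headers = {
    "Authorization": "Bearer <aad-token-from-service-principal>",
    "Content-Type": "application/json",
}
# POST with any HTTP client; the response's "token" field is the embed
# token handed to the powerbi-client JavaScript library in the CRM page.
```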
1 like • Jan 26
@Mahmo Ud We are using F8 - and have had capacity issues running semantic models that are too resource-intensive. I have noticed issues when running a notebook, and have had to go to the monitor and stop other notebooks that have not timed out yet in order to get that one to run. Have you checked the monitor to see if something else is still running?
Arlette Slachmuylder
@arlette-slachmuylder-2089
Data Engineer at Portland State University Foundation
Joined Apr 14, 2024