Memberships

Learn Microsoft Fabric

14.3k members • Free

3 contributions to Learn Microsoft Fabric
Purview
Question about data storage related to MS Purview. We recently ran into a security snag with a new data catalogue application. They store our query details on their server, and this has been deemed an unacceptable security risk due to sensitive data included in ad hoc queries. We asked if there is an option to store that data on our own Azure cloud, but there is not. The result is that we are not able to leverage the data lineage and "popularity" metrics that should be the biggest value from the tool. Does anyone know if Purview overcomes this issue? I see that it creates data lineage, and I assume it is using some type of query miner for that, similar to our current catalogue. If we can keep those details on our company's cloud rather than, for example, sending them to Microsoft, I think there would be a clear value proposition. Just curious whether anyone here has seen this. Thanks and best regards!
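For context, here is a minimal sketch of reading lineage metadata from a Purview account through its Atlas-compatible REST endpoint, which runs in your own Azure tenant. The account name, asset GUID, and exact data-plane path are placeholders/assumptions for illustration, not a verified recipe:

```python
# Hedged sketch: fetch lineage for a catalogued asset from a Microsoft Purview
# account via its Apache Atlas-compatible REST endpoint. Account name, asset
# GUID, and the API path are assumptions for illustration only.
import requests
from azure.identity import DefaultAzureCredential

ACCOUNT = "my-purview-account"                        # hypothetical account name
ASSET_GUID = "00000000-0000-0000-0000-000000000000"   # hypothetical asset GUID

# Acquire a token for the Purview data plane (stays within your own tenant).
credential = DefaultAzureCredential()
token = credential.get_token("https://purview.azure.net/.default").token

# Atlas v2 lineage endpoint exposed by the Purview catalog data plane.
url = f"https://{ACCOUNT}.purview.azure.com/catalog/api/atlas/v2/lineage/{ASSET_GUID}"
resp = requests.get(
    url,
    params={"depth": 3, "direction": "BOTH"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

# Print the upstream/downstream entities recorded for the asset.
for guid, entity in resp.json().get("guidEntityMap", {}).items():
    print(guid, entity.get("typeName"), entity.get("displayText"))
```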
1 like • Jun '24
@Will Needham Sorry I missed this reply earlier. This is a good suggestion. I assume my employer has a CSM with MS. I will ask about the best way to raise the question. Thank you!
1 like • Jun '24
Hi Mohammad, I don't have enough insight to answer your questions yet. I will see what I can find out and loop back to share it here. Thanks!
Fabric CAT Tools becomes Semantic Link Labs
Fabric CAT Tools was a GitHub repo created and managed by Michael Kovalsky (a member of the Fabric CAT). It contains a number of useful methods for automating things in Fabric and Power BI semantic models. It has now been renamed and moved to the official Microsoft GitHub repo: https://github.com/microsoft/semantic-link-labs/. It is also published to PyPI (under the name semantic-link-labs).
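As a quick illustration, a minimal sketch of installing the PyPI package and calling it from a Fabric notebook. The dataset and workspace names are placeholders, and the specific helpers shown are examples of functions the library exposes; names may differ by version:

```python
# Hedged sketch: using the semantic-link-labs package (import name sempy_labs)
# from a Fabric notebook. Dataset/workspace names are hypothetical placeholders,
# and the helper functions shown may vary between releases.
# %pip install semantic-link-labs

import sempy_labs as labs

DATASET = "Sales Model"      # hypothetical semantic model name
WORKSPACE = "Finance Dev"    # hypothetical workspace name

# Run Best Practice Analyzer rules against a semantic model.
labs.run_model_bpa(dataset=DATASET, workspace=WORKSPACE)

# Summarize table/column sizes in the model (VertiPaq-style statistics).
labs.vertipaq_analyzer(dataset=DATASET, workspace=WORKSPACE)
```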
0 likes • Jun '24
What does CAT stand for? šŸ™ƒ
High-Level Fabric Roadmap
Hi there! New joiner here. I watched the "Transition to Fabric" YouTube video series over the holiday weekend. The videos are exceptionally good; thank you for that great content!

I am working on a team re-engineering our data governance to improve data quality, align accountability, increase transparency, and reduce duplication of measures, datasets, etc. We recently transitioned to Azure Synapse for our EDW, and we use Power BI as one of our main analytical apps. If I understand correctly, leveraging Fabric should help us stop copying datasets and measures into local/private workspaces and desktops, and instead centralize them so multiple workspaces can use tables curated by subject matter experts.

What I am not clear on is whether we need to copy those "gold" tables into OneLake or create a "gold" schema in Synapse and shortcut to it. Are both approaches acceptable? Does one have clear advantages? I am concerned about getting pushback on creating new tables in OneLake, though it seems like they should offer better responsiveness. A lot of the materials focus on AI and machine learning, and I am just trying to get grounded on basic BI with an existing EDW. Any thoughts on whether I am on the right track would be appreciated. Thanks!
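For what it's worth, a minimal sketch of the shortcut option: creating a OneLake shortcut in a Fabric lakehouse that points at the ADLS Gen2 path behind a Synapse "gold" schema, via the Fabric REST API. All IDs, paths, and the token source are placeholders/assumptions, and the payload shape should be checked against the current API docs:

```python
# Hedged sketch: create a OneLake shortcut in a Fabric lakehouse that points at
# a "gold" folder in ADLS Gen2 (the storage behind a Synapse workspace), so the
# data stays where it is and Fabric items reference it. All IDs and paths below
# are hypothetical placeholders.
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
WORKSPACE_ID = "<fabric-workspace-guid>"        # hypothetical
LAKEHOUSE_ID = "<lakehouse-item-guid>"          # hypothetical
CONNECTION_ID = "<adls-gen2-connection-guid>"   # hypothetical cloud connection
TOKEN = "<bearer-token-for-fabric-api>"         # e.g. obtained via azure-identity/MSAL

payload = {
    "path": "Tables",        # where the shortcut appears inside the lakehouse
    "name": "gold_sales",    # hypothetical shortcut name
    "target": {
        "adlsGen2": {
            "location": "https://mydatalake.dfs.core.windows.net",  # placeholder
            "subpath": "/gold/sales",                               # placeholder
            "connectionId": CONNECTION_ID,
        }
    },
}

resp = requests.post(
    f"{FABRIC_API}/workspaces/{WORKSPACE_ID}/items/{LAKEHOUSE_ID}/shortcuts",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.status_code, resp.json())
```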
Maureen Brennan
Level 1 • 3 points to level up
@maureen-brennan-1474
Data governance manager helping lead tech and business teams to collaborate using leading cloud technologies and organizational data concepts.

Active 326d ago
Joined May 28, 2024