Hi everyone, I watched Will's video on Great Expectations (GE) a few days ago and have been working on implementing a test in Fabric. I've got it working, but it has prompted some questions about scaling this up for use across many tables in a pipeline.
My understanding is that, instead of writing 10 scripts to validate 10 tables (for example), the better approach would be to write one script that loops through the tables and, based on a naming convention, picks the corresponding expectation suite to validate each one (assuming the data is different enough that 10 separate suites are needed). Is this correct? Something like the sketch below is what I have in mind.
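Rough sketch of the loop I'm picturing, using the older pandas-style GE API (`ge.from_pandas`, which takes a suite filepath in `validate`); the table names, the `load_table` helper, and the `suites/<table>_suite.json` layout are all hypothetical placeholders, not a working Fabric setup:

```python
import great_expectations as ge

TABLES = ["customers", "orders", "products"]  # hypothetical table names

def load_table(name):
    """Hypothetical helper: read a lakehouse table into a pandas DataFrame."""
    raise NotImplementedError

results = {}
for table in TABLES:
    # Wrap the DataFrame so it gains the expect_* / validate methods.
    df = ge.from_pandas(load_table(table))
    # Naming convention: one suite JSON per table, resolved from its name.
    results[table] = df.validate(expectation_suite=f"suites/{table}_suite.json")

for table, result in results.items():
    print(table, "passed" if result.success else "FAILED")
```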
That leads to my next question, about creating the expectation suites in the first place. Is it standard/best practice to generate them programmatically from table metadata, or to write them by hand? For the programmatic route, I'm imagining something like the sketch below.
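This uses the same older pandas-style API as above; the `METADATA` dict is a made-up stand-in for whatever metadata source you'd actually have (e.g. an information schema or a table registry), and the inline DataFrame just stands in for a real table load:

```python
import great_expectations as ge
import pandas as pd

# Hypothetical metadata: column name -> (expected dtype, nullable?)
METADATA = {
    "order_id": ("int64", False),
    "amount": ("float64", False),
    "coupon": ("object", True),
}

# Stand-in for a real table load; replace with your lakehouse read.
orders = pd.DataFrame(
    {"order_id": [1, 2], "amount": [9.99, 5.00], "coupon": [None, "SAVE5"]}
)
df = ge.from_pandas(orders)

# Derive one expectation per piece of metadata; each call runs immediately
# and records the expectation on the wrapped dataset.
for column, (dtype, nullable) in METADATA.items():
    df.expect_column_to_exist(column)
    df.expect_column_values_to_be_of_type(column, dtype)
    if not nullable:
        df.expect_column_values_to_not_be_null(column)

# Persist the generated suite for reuse in the validation loop above.
df.save_expectation_suite("suites/orders_suite.json")
```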
Happy to hear any alternate or better approaches!
Thanks,
Matt