We are using Fabric PySpark notebooks to load data into our lakehouse from a Synapse dedicated SQL pool. We have run into a snag: at some level the query string is scanned for reserved words ("OPEN", "INSERT", "COLLATE", etc.) and clipped at the first match, presumably to prevent SQL injection attacks. Is there a way to turn this filtering off? We do auto loans, so several of our tables have the word "collateral" in their names, and "COLLATE" is a substring of those names. The code we use is below.
# imports for the Azure Synapse Dedicated SQL Pool Spark connector,
# which exposes the Constants class and the synapsesql() reader
import com.microsoft.spark.sqlanalytics
from com.microsoft.spark.sqlanalytics.Constants import Constants

# build the query; table_from is one of our table names, e.g. one containing "collateral"
query = f"select * from {table_from} where 1=1"
# print(query)
df = (spark.read
      .option(Constants.SERVER, db_server)
      .option(Constants.USER, db_user)
      .option(Constants.PASSWORD, db_password)
      .option(Constants.DATABASE, db_database)
      .option(Constants.QUERY, query)
      .synapsesql()
)
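
For completeness, here is a minimal sketch of the one workaround we can think of: bracket-quoting the identifier, which is standard T-SQL quoting. The table name dbo.loan_collateral is made up for illustration, and we don't know whether the filter is applied before or after identifier quoting is parsed, so this may well not get past it:

# hypothetical table name containing "COLLATE" as a substring
table_from = "dbo.loan_collateral"

# the plain form that gets clipped for us
query_plain = f"select * from {table_from} where 1=1"

# bracket-quoted form (standard T-SQL identifier quoting); untested
# against the filter -- we don't know if it scans inside the brackets
schema, table = table_from.split(".")
query_quoted = f"select * from [{schema}].[{table}] where 1=1"
print(query_quoted)  # select * from [dbo].[loan_collateral] where 1=1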