Unable to read from or write to the lakehouse
I was able to run the following code without issues.
df = spark.read.format("csv").option("header","true").load("Files/data/orders_data/2019.csv")
But when I try to create a file using plain Python, it fails and throws:
FileNotFoundError: [Errno 2] No such file or directory: '/Files/xyz/2024-11-03_xyz.json'
import json

start_date = "2024-11-03"
data = {'k1': 'xyz'}
file_path = f'/Files/xyz/{start_date}_xyz.json'

# Open the file in write mode ('w') and save the JSON data;
# json.dump serializes `data` as JSON and indent=4 makes the output human-readable.
with open(file_path, 'w') as file:
    json.dump(data, file, indent=4)

print(f"Data successfully saved to {file_path}")
Isn't it possible to access and interact with the lakehouse using plain Python code?
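For reference, here is the variant I would try next, assuming the default lakehouse attached to the notebook is mounted into the notebook's local file system under /lakehouse/default/, so plain Python file I/O needs that prefix rather than the Spark-relative 'Files/...' path. The mount path and the os.makedirs step are assumptions on my part, not something I have confirmed:

import json
import os

start_date = "2024-11-03"
data = {'k1': 'xyz'}

# Assumed local mount of the default lakehouse's Files folder.
file_path = f'/lakehouse/default/Files/xyz/{start_date}_xyz.json'

# open() will not create missing directories, so create the folder first.
os.makedirs(os.path.dirname(file_path), exist_ok=True)

with open(file_path, 'w') as file:
    json.dump(data, file, indent=4)

print(f"Data successfully saved to {file_path}")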