Hello Kedro community 👋
We had a question in our team:
We are currently using the databricks.ManagedTableDataset dataset (which works with Databricks tables using Delta).
We thought that dataset.save would use the Delta merge functions (e.g. whenNotMatched or whenMatched), but that is not the case (in the source code, we saw that the merge is hardcoded as a SQL query).
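For reference, what we had in mind is the Delta merge builder from delta-spark. A minimal sketch of the idea (the table name, join condition and `updates_df` below are made up for illustration):

```python
from delta.tables import DeltaTable

# Hypothetical target table and upsert DataFrame, purely for illustration.
target = DeltaTable.forName(spark, "my_catalog.my_schema.my_table")

(
    target.alias("t")
    .merge(updates_df.alias("s"), "t.id = s.id")  # join condition is an assumption
    .whenMatchedUpdateAll()       # update rows that already exist in the target
    .whenNotMatchedInsertAll()    # insert rows that are not in the target yet
    .execute()
)
```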
Is there a reason for that? Is this a PR we should propose?
Thank you and have a good weekend 🙂
Hello,
I have a question regarding the kedro cli tool `kedro catalog resolve`.
In our way of working with Kedro, we generate a specific conf for each pipeline (independently from the general conf).
This means we have the base and local folders in the conf folder, but we also generate a conf folder per pipeline. When we run a pipeline, we do it like `kedro run --conf-source={pipeline_conf}`.
However, with this way of working, I am not able to use `kedro catalog resolve`, since it is not possible to specify `--conf-source` in that CLI command.
Would you have any idea how I could do that?
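The only workaround I can think of so far is to dump the catalog programmatically from a KedroSession created on the pipeline-specific conf source. A rough sketch of what I mean, assuming `conf_source` is supported by `KedroSession.create` in our Kedro version (the project and conf paths are placeholders):

```python
from pathlib import Path

import yaml
from kedro.framework.session import KedroSession
from kedro.framework.startup import bootstrap_project

project_path = Path.cwd()  # assumption: executed from the project root
bootstrap_project(project_path)

# Point conf_source at the same pipeline-specific conf folder passed to `kedro run`
with KedroSession.create(
    project_path=project_path,
    conf_source=str(project_path / "conf_my_pipeline"),  # placeholder path
) as session:
    context = session.load_context()
    # Print the merged catalog configuration as the config loader sees it
    print(yaml.safe_dump(dict(context.config_loader["catalog"])))
```

But this only dumps the merged catalog config and does not expand dataset factory patterns the way `kedro catalog resolve` does, so a CLI-level solution would be nicer if one exists.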
Hello Kedro community,
I currently have a problem with how my databricks.ManagedTableDataset
is created (a type problem: the precision of DecimalType).
To avoid that, I want to define the schema in my YAML file, in order to specify the schema the ManagedTableDataset should have in Databricks.
Would you have a YAML example of how to define this schema? (with DecimalType if possible 🙂). I did not find any example, and IntegerType (a Spark type) did not match anything, for example.
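Here is the kind of entry I am guessing at (catalog, database, table and column names are made up), assuming the `schema` key follows the Spark StructType JSON layout with lowercase type names ("integer", "decimal(p,s)") rather than class names like IntegerType. Is this the right direction?

```yaml
my_dataset:
  type: databricks.ManagedTableDataset
  catalog: my_catalog        # placeholder
  database: my_database      # placeholder
  table: my_table            # placeholder
  write_mode: overwrite
  schema:
    type: struct
    fields:
      - name: id
        type: integer
        nullable: false
        metadata: {}
      - name: amount
        type: decimal(18,4)   # DecimalType with precision 18 and scale 4
        nullable: true
        metadata: {}
```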
Thanks and have a good day!