Hello team,
I wonder if there is a proper Kedro way to do the following.
"{namespace}.{variant}.anomaly_scores": type: polars.CSVDataset filepath: data/08_reporting/{namespace}/anomaly_scores/{variant}.anomaly_scores.csvI use this catalog entry to save data from a pipeline with different namespaces. Then, I take all these CSVs at the same time, from another pipeline, with this entry:
anomaly_scores: type: partitions.PartitionedDataset path: data/08_reporting/train_evaluation/anomaly_scores dataset: type: polars.CSVDataset filename_suffix: ".csv"It works but since it is not the same entry, if I execute the two pipelines as part of a bigger one, the pipeline that takes the data, which has to come after the other, some times comes before. I thought of using a dummy entry/output variable to force the order. Is there another better way?
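For reference, the dummy-output trick mentioned above can be sketched like this: the last node of the producing pipeline returns a small marker value, and the first node of the consuming pipeline declares that marker as an extra input, so Kedro's dependency resolver is forced to schedule them in order. The function names below (`write_anomaly_scores`, `collect_anomaly_scores`) are hypothetical placeholders, not from the original project:

```python
def write_anomaly_scores(scores):
    """Last node of the producing pipeline.

    The actual CSV is saved via the catalog entry; in addition, the node
    returns a marker value (wired to a dummy catalog/memory dataset) so
    that a downstream node can depend on it.
    """
    return "anomaly_scores_written"  # dummy marker output


def collect_anomaly_scores(partitions, _marker):
    """First node of the consuming pipeline.

    `partitions` is what a PartitionedDataset loads: a dict mapping
    partition ids to load functions. The extra `_marker` input is unused
    but forces this node to run after write_anomaly_scores.
    """
    return {name: load() for name, load in partitions.items()}
```

In the pipeline definitions, the marker would appear as an output of the writer node and an input of the collector node, e.g. `node(write_anomaly_scores, "scores", "scores_done")` and `node(collect_anomaly_scores, ["anomaly_scores", "scores_done"], "combined")`.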
Hey Ruben,
Your question is similar to this one. You might find the solution discussed there helpful. Can you take a look at that discussion and let me know if using dummy variables to enforce the correct execution order works for your case?