Hi, does Kedro support the Google Cloud Logging library out of the box in its logging setup? It's not clear to me from the documentation how far adding custom handlers can go, or whether I have to wire them up manually. Also, when would it be better to initialize the handler: before or after Kedro loads?
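For context on the mechanism: Kedro's logging configuration is consumed by the standard library's `logging.config.dictConfig`, so any handler class reachable at runtime can be declared in the logging config alongside Kedro's own handlers. Below is a stdlib-only sketch of that mechanism; the `ListHandler` is a stand-in I made up so the example runs without GCP credentials, and in a real config you would point the handler entry at the google-cloud-logging handler class your installed client version provides (an assumption to verify against that library's docs).

```python
# Sketch: declaring a custom handler the way a dictConfig-based logging setup
# (such as Kedro's) resolves it. ListHandler is a hypothetical stand-in for a
# cloud handler; it just appends formatted records to a list.
import logging
import logging.config

captured = []

class ListHandler(logging.Handler):
    """Stand-in for a cloud logging handler: collects formatted records."""
    def emit(self, record):
        captured.append(self.format(record))

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {"simple": {"format": "%(name)s - %(levelname)s - %(message)s"}},
    "handlers": {
        # "()" accepts a callable (or dotted path string) used as the factory;
        # a real config would reference the cloud handler class here instead.
        "cloud": {"()": ListHandler, "formatter": "simple", "level": "INFO"},
    },
    "loggers": {"kedro": {"handlers": ["cloud"], "level": "INFO"}},
})

logging.getLogger("kedro").info("hello")
print(captured[0])  # kedro - INFO - hello
```

Because the config is resolved lazily by `dictConfig`, the handler class only needs to be importable at the moment the logging config is applied, which is one way to think about the "before or after Kedro loads" question.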
Hi, I'm testing after upgrading to 0.19.9 and found what seems like a bug: after running the pipeline a second time with a runner (e.g. during test cases), the output is no longer saved to the catalog or returned as a value from the pipeline. That wasn't the case in 0.19.8.
Hi, I have a weird bug to report, but I'm not sure how to investigate or debug it. After some benign changes to the pipeline registry, kedro-viz (9.2.0) started to randomly crash with a JavaScript TypeError on some pipelines (the whole screen goes white-blank and needs a refresh). Before the commit everything works fine; after it, a subset of pipelines consistently crashes with an uncaught exception. Which pipelines crash seems somewhat random, though the lower a pipeline sits in the selection list, the higher the chance it crashes. This is the whole diff of the commit:
```diff
@@ -79,7 +79,16 @@ def register_pipelines() -> dict[str, Pipeline]:
             group_class, input_schema, pipelines[f"training_{group_class}"], pipelines[f"inference_{group_class}"]
         )
-    pipelines = {k: pipelines[k] for k in sorted(pipelines.keys())}  # sorting for kedro-viz selection box
-    pipelines["__default__"] = sum(pipelines[f"training_and_inference_{gc}"] for gc in GROUP_CLASSES)  # type: ignore
+    # pipelines = {k: pipelines[k] for k in sorted(pipelines.keys())}  # sorting for kedro-viz selection box
+    order = ["input", "data", "feature", "fgu", "layer", "pricing", "inference", "training"]
+    pipelines = {
+        k: pipelines[k]
+        for k in sorted(
+            pipelines.keys(),
+            key=lambda x: str(order.index(x.split("_")[0]) if x.split("_")[0] in order else len(order)) + x,
+        )
+    }  # reordering pipelines in kedro-viz based on order list
+    pipelines["__default__"] = pipelines["input_na"]
```
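Side note on the sort key in the diff above, unrelated to the crash: concatenating `str(order.index(...)) + x` happens to work here because all ranks are single digits, but with ten or more prefixes `"10..."` would sort before `"2..."` as a string. A `(rank, name)` tuple key expresses the same ordering without that edge case. A minimal sketch, with made-up sample pipeline names:

```python
# Same ordering intent as the diff, using a tuple key instead of string
# concatenation: known prefixes keep their position in `order`, unknown
# prefixes sort last, ties break alphabetically by full name.
order = ["input", "data", "feature", "fgu", "layer", "pricing", "inference", "training"]

def sort_key(name: str) -> tuple[int, str]:
    prefix = name.split("_")[0]
    rank = order.index(prefix) if prefix in order else len(order)
    return (rank, name)

names = ["training_a", "zzz_x", "input_na", "feature_b"]
print(sorted(names, key=sort_key))  # ['input_na', 'feature_b', 'training_a', 'zzz_x']
```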