Alexis Drakopoulos (joined October 4, 2024)

In Kedro pipeline tests, what's the best way to mock the underlying nodes? We use pytest.

1 comment
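One way to approach this, sketched below (this is not an answer from the thread, just an illustration; the node functions, dataset names, and data are made up): build the pipeline under test directly from stub functions (or unittest.mock.MagicMock objects with a return_value) standing in for the real node functions, run it with SequentialRunner against an in-memory catalog, and assert on the returned free outputs or the Kedro log.

import logging

from kedro.io import DataCatalog, MemoryDataset  # spelled MemoryDataSet in Kedro < 0.19
from kedro.pipeline import node, pipeline
from kedro.runner import SequentialRunner


def test_pipeline_with_stubbed_nodes(caplog):
    # Stubs standing in for the real node functions (names are illustrative).
    def fake_preprocess(raw):
        return {"rows": len(raw)}

    def fake_train(features):
        return sum(features.values())

    test_pipeline = pipeline(
        [
            node(fake_preprocess, inputs="raw_data", outputs="features"),
            node(fake_train, inputs="features", outputs="model"),
        ]
    )

    catalog = DataCatalog({"raw_data": MemoryDataset([1, 2, 3])})

    caplog.set_level(logging.DEBUG, logger="kedro")
    outputs = SequentialRunner().run(test_pipeline, catalog)

    # Terminal outputs not registered in the catalog are returned by the runner.
    assert outputs["model"] == 3
    assert "Pipeline execution completed successfully." in caplog.text

If you prefer to patch the real functions with pytest-mock instead, do the patching before create_pipeline() is called: node objects keep a reference to the function they were built with, so patching the module attribute afterwards has no effect on an already constructed pipeline.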

I am writing my first Kedro pipeline tests and I am a little confused.

I am testing a pipeline with two nodes; however, the first node outputs a Spark object, which needs copy mode "assign" as a memory dataset. How can I specify that in Python rather than YAML?

import logging
from kedro.io import DataCatalog
from kedro.runner import SequentialRunner

# caplog is the pytest fixture; `pipeline` is the pipeline under test
catalog = DataCatalog()
caplog.set_level(logging.DEBUG, logger="kedro")
successful_run_msg = "Pipeline execution completed successfully."
SequentialRunner().run(pipeline, catalog)
assert successful_run_msg in caplog.text

Do I do that using add_feed_dict? If so, how?

19 comments
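For the copy-mode question above, here is a sketch of one way to do it in Python rather than YAML (the dataset names and the pipeline/spark_input fixtures are hypothetical, and in Kedro < 0.19 the class is spelled MemoryDataSet): register the intermediate dataset explicitly when constructing the DataCatalog, as a MemoryDataset with copy_mode="assign", instead of letting it be created implicitly.

import logging

from kedro.io import DataCatalog, MemoryDataset
from kedro.runner import SequentialRunner


def test_two_node_pipeline(caplog, pipeline, spark_input):  # hypothetical fixtures
    catalog = DataCatalog(
        {
            # Input fed in directly as an in-memory dataset.
            "raw_data": MemoryDataset(spark_input),
            # Intermediate Spark output of the first node: "assign" hands the
            # object to the next node by reference instead of deep-copying it.
            "spark_features": MemoryDataset(copy_mode="assign"),
        }
    )

    caplog.set_level(logging.DEBUG, logger="kedro")
    SequentialRunner().run(pipeline, catalog)

    assert "Pipeline execution completed successfully." in caplog.text

add_feed_dict is mainly a shortcut for feeding raw values (it wraps them in MemoryDatasets with the default copy mode), but as far as I recall it also accepts ready-made dataset instances, so catalog.add_feed_dict({"spark_features": MemoryDataset(copy_mode="assign")}) should work as well.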