-
Help
Hello,

Apologies, I have not been able to find a solution online. I have a pipeline with two (file output) steps: A1 => A2 and B1 => B2 (two samples, same functions). Job A2 is currently at 'dispatched' status, but I'm fairly sure it has failed. I'd like to cancel/forget it and start again. I assume that for some reason it finished without generating an error, and the pipeline assumes it is quietly running on a cluster somewhere?

For information, I'm launching tar_make() as an RStudio Workbench job in the background (using RStudio Workbench Pro), with the default tar_make() (no use of crew etc.). Typically the stdout of these jobs is accessible via the Workbench interface, but for whatever reason the history of completed jobs has disappeared.

For future reference: is there a way to stop, forget, or force-ignore a 'dispatched' target so I can try again? Or perhaps I'm looking at this from the wrong direction?

Thanks,
-
I would recommend stopping and restarting tar_make() in that situation. Except for pre-specified debugging, pipelines are completely automated, encapsulated, and hidden from the user by design (to support reproducibility).
-
Thanks @wlandau, OK, that makes sense from a workflow perspective. When you say 'stopping and restarting tar_make()', am I right to think that means I'd have to find and kill the process that tar_make() is running in?

Thanks,
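For anyone landing here later, a minimal sketch of the stop-and-restart workflow, assuming the stuck target is named `A2` (a hypothetical name taken from the question; substitute your own target):

```r
library(targets)

# After interrupting (killing) the stuck tar_make() process, inspect what
# the pipeline recorded for each target. A target can remain "dispatched"
# in the metadata if tar_make() was terminated before the target finished.
tar_progress()

# Invalidate the suspect target so the next run rebuilds it from scratch.
# "A2" is the hypothetical target name from the question above.
tar_invalidate(A2)

# Restart the pipeline; A2 (and anything downstream of it) reruns,
# while up-to-date targets are skipped.
tar_make()
```

`tar_invalidate()` only erases the metadata entry, so the next `tar_make()` treats the target as never built; it does not kill any running process, which is why the `tar_make()` job itself has to be stopped first.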