Thank you for writing and maintaining this excellent workflow. I suggest the following improvements to make profiling even more friendly.
1. The grouping of input datasets into batches is not guaranteed to be the same across workflow runs, which prevents retrieval of cached results when resuming a run. Could you sort the datasets prior to batching?
2. The task identifier is just `(db1)`, as in `name: NFCORE_TAXPROFILER:TAXPROFILER:PROFILING:KRAKENUNIQ_PRELOADEDKRAKENUNIQ (db1)`. It would be nice to have a more specific identifier, including the batch number (provided the issue above is resolved) or at least the sample identifier of one dataset in the batch.
3. Might it be better to split the input datasets into batches based on dataset size or read count? This may help balance run times across batches.
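To make suggestion 1 concrete, here is a minimal sketch in Python (rather than the pipeline's Nextflow DSL) of sorting before chunking, so that batch membership is reproducible across runs and cached results can be reused. The function name `make_batches` and its arguments are illustrative, not part of taxprofiler:

```python
def make_batches(samples, batch_size):
    """Split sample IDs into fixed-size batches after sorting,
    so the grouping is identical across runs (cache-friendly)."""
    # Sorting first makes batch membership deterministic,
    # regardless of the order in which samples arrive.
    ordered = sorted(samples)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]
```

The same effect in Nextflow would presumably come from ordering the channel (e.g. collecting and sorting) before grouping into batches.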
For 1 and 2 that should be possible I think, right @Midnighter ?
However, it would risk slowing down the pipeline: forcing a deterministic order means waiting for all prior steps to complete.
3 will be more difficult, and I'm not sure it's worth the effort (read count/dataset size doesn't necessarily correspond linearly to run time, since it depends on how many hits are actually valid versus reads that don't match anything).
Agreed, 1 & 2 are definitely possible. I think 2 is achievable with minimal changes and could be a good first-time contributor feature 😉
As you say @jfy133, 1 requires a global ordering, meaning all samples will have to have finished their upstream processing steps. I don't see that as a huge issue; it simply means that krakenuniq will be one of the last profilers to complete. I'd say that's worth it for caching. Also, this change should be achievable quickly if you are up for submitting a PR 😉
In my experience with kraken2, every input read is tested, so there is an almost perfect correlation between the number of reads and processing time. I assume krakenuniq behaves similarly. However, this is a larger change and I'm not sure it's worth it. Rather than trying to distribute your samples so that each node receives exactly one batch and all batches complete at a similar time, use the batch size option so that each node receives multiple batches while you still observe speed-ups compared to the overhead involved. Does that make sense?
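For reference, what suggestion 3 would entail is roughly the classic longest-processing-time heuristic. A Python sketch, under the (debated) assumption that read count predicts run time; the function name and the `(sample_id, read_count)` input shape are hypothetical, not taxprofiler code:

```python
import heapq

def balance_by_read_count(samples, n_batches):
    """Greedy longest-processing-time heuristic: assign each sample
    (largest read count first) to the currently lightest batch."""
    # samples: iterable of (sample_id, read_count) pairs -- illustrative shape.
    heap = [(0, i, []) for i in range(n_batches)]  # (total_reads, batch_idx, members)
    heapq.heapify(heap)
    for sample_id, reads in sorted(samples, key=lambda s: -s[1]):
        total, idx, members = heapq.heappop(heap)
        members.append(sample_id)
        heapq.heappush(heap, (total + reads, idx, members))
    return [members for _, _, members in sorted(heap)]
```

Even with a perfect reads-to-runtime correlation, this adds complexity compared to simply choosing a smaller batch size, which amortises imbalance across many batches per node, as described above.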