`--vllm-batched` is passed in the LiT5 and FirstMistral examples in the README. But later on we say:

> vLLM, SGLang, and TensorRT-LLM backends are only supported for RankZephyr and RankVicuna models.

I think we should have a clear table of which flags are supported by which rerankers. For example, I assume `--use_logits` and `--use_alpha` only make sense with the listwise rerankers (or are they only supported with FirstMistral?).
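Something like the skeleton below, for instance (the cells are placeholders; I have not verified any of them against the code):

| Flag | RankZephyr / RankVicuna | LiT5 | FirstMistral |
| --- | --- | --- | --- |
| `--vllm_batched` | ? | ? | ? |
| `--use_logits` | ? | ? | ? |
| `--use_alpha` | ? | ? | ? |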
`--vllm_batched` is used for LiT5 because, when LiT5 was integrated, vLLM was the only batching method, so I reused this parameter as a simple "batched" flag (i.e., if it is not enabled, the model reranks the documents one by one). If we want to be more precise, we could add a separate `--batched` arg.
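Roughly, a backward-compatible version could look like the sketch below (the flag names follow this thread; the real parser lives elsewhere in rank_llm and may differ):

```python
# Sketch only: decouple "run in batches" from the vLLM backend.
# --batched is the hypothetical new flag; --vllm_batched is kept
# as an alias so existing commands keep working.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--batched",
    action="store_true",
    help="Rerank documents in batches instead of one by one.",
)
parser.add_argument(
    "--vllm_batched",
    action="store_true",
    help="Deprecated alias for --batched (historically vLLM was the only batched path).",
)
args = parser.parse_args()

# The old flag implies the new behavior for backward compatibility.
batched = args.batched or args.vllm_batched
```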