(xtuner) root@autodl-container-3f764e8be4-3dc6a130:~/autodl-tmp/xtuner# xtuner convert pth_to_hf /root/autodl-tmp/xtuner/qwen1_5_1_8b_chat_qlora_alpaca_e3.py /root/autodl-tmp/xtuner/work_dirs/qwen1_5_1_8b_chat_qlora_alpaca_e3/iter_2000.pth /root/autodl-tmp/xtuner/work_dirs/qwen1_5_1_8b_chat_qlora_alpaca_e3/iter_2000_hf
/root/miniconda3/envs/xtuner/lib/python3.10/site-packages/mmengine/optim/optimizer/zero_optimizer.py:11: DeprecationWarning: `TorchScript` support for functional optimizers is deprecated and will be removed in a future PyTorch release. Consider using the `torch.compile` optimizer instead.
  from torch.distributed.optim import \
06/11 22:42:10 - mmengine - WARNING - WARNING: command error: 'Failed to import transformers.models.bloom.modeling_bloom because of the following error (look up to see its traceback):
operator torchvision::nms does not exist'!
06/11 22:42:10 - mmengine - WARNING - 
    Arguments received: ['xtuner', 'convert', 'pth_to_hf', '/root/autodl-tmp/xtuner/qwen1_5_1_8b_chat_qlora_alpaca_e3.py', '/root/autodl-tmp/xtuner/work_dirs/qwen1_5_1_8b_chat_qlora_alpaca_e3/iter_2000.pth', '/root/autodl-tmp/xtuner/work_dirs/qwen1_5_1_8b_chat_qlora_alpaca_e3/iter_2000_hf']. xtuner commands use the following syntax:
        xtuner MODE MODE_ARGS ARGS
        Where   MODE (required) is one of ('list-cfg', 'copy-cfg', 'log-dataset', 'check-custom-dataset', 'train', 'test', 'chat', 'convert', 'preprocess', 'mmbench', 'eval_refcoco')
                MODE_ARG (optional) is the argument for specific mode
                ARGS (optional) are the arguments for specific command
    Some usages for xtuner commands: (See more by using -h for specific command!)
        1. List all predefined configs:
            xtuner list-cfg
        2. Copy a predefined config to a given path:
            xtuner copy-cfg $CONFIG $SAVE_FILE
        3-1. Fine-tune LLMs by a single GPU:
            xtuner train $CONFIG
        3-2. Fine-tune LLMs by multiple GPUs:
            NPROC_PER_NODE=$NGPUS NNODES=$NNODES NODE_RANK=$NODE_RANK PORT=$PORT ADDR=$ADDR xtuner dist_train $CONFIG $GPUS
        4-1. Convert the pth model to HuggingFace's model:
            xtuner convert pth_to_hf $CONFIG $PATH_TO_PTH_MODEL $SAVE_PATH_TO_HF_MODEL
        4-2. Merge the HuggingFace's adapter to the pretrained base model:
            xtuner convert merge $LLM $ADAPTER $SAVE_PATH
            xtuner convert merge $CLIP $ADAPTER $SAVE_PATH --is-clip
        4-3. Split HuggingFace's LLM to the smallest sharded one:
            xtuner convert split $LLM $SAVE_PATH
        5-1. Chat with LLMs with HuggingFace's model and adapter:
            xtuner chat $LLM --adapter $ADAPTER --prompt-template $PROMPT_TEMPLATE --system-template $SYSTEM_TEMPLATE
        5-2. Chat with VLMs with HuggingFace's model and LLaVA:
            xtuner chat $LLM --llava $LLAVA --visual-encoder $VISUAL_ENCODER --image $IMAGE --prompt-template $PROMPT_TEMPLATE --system-template $SYSTEM_TEMPLATE
        6-1. Preprocess arxiv dataset:
            xtuner preprocess arxiv $SRC_FILE $DST_FILE --start-date $START_DATE --categories $CATEGORIES
        6-2. Preprocess refcoco dataset:
            xtuner preprocess refcoco --ann-path $RefCOCO_ANN_PATH --image-path $COCO_IMAGE_PATH --save-path $SAVE_PATH
        7-1. Log processed dataset:
            xtuner log-dataset $CONFIG
        7-2. Verify the correctness of the config file for the custom dataset:
            xtuner check-custom-dataset $CONFIG
        8. MMBench evaluation:
            xtuner mmbench $LLM --llava $LLAVA --visual-encoder $VISUAL_ENCODER --prompt-template $PROMPT_TEMPLATE --data-path $MMBENCH_DATA_PATH
        9. Refcoco evaluation:
            xtuner eval_refcoco $LLM --llava $LLAVA --visual-encoder $VISUAL_ENCODER --prompt-template $PROMPT_TEMPLATE --data-path $REFCOCO_DATA_PATH
        10. List all dataset formats which are supported in XTuner
    Run special commands:
        xtuner help
        xtuner version
    GitHub: https://github.com/InternLM/xtuner
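
The conversion never actually runs: xtuner aborts while importing transformers, and the underlying failure is "operator torchvision::nms does not exist". This error typically means the installed torchvision build does not match the installed torch build (for example, torchvision was built against a different PyTorch/CUDA version). A minimal check-and-repair sketch, assuming a version mismatch is indeed the cause in this environment:

    # Print the installed torch / torchvision versions; they must be a matching pair
    # (e.g. torch 2.1.x pairs with torchvision 0.16.x, torch 2.0.x with 0.15.x)
    python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"

    # If they do not match, reinstall a torchvision build that corresponds to the installed torch
    # (replace the placeholder with the version matching your torch; this is environment-specific)
    pip install --force-reinstall "torchvision==<version matching your torch>"

    # Then re-run the conversion
    xtuner convert pth_to_hf /root/autodl-tmp/xtuner/qwen1_5_1_8b_chat_qlora_alpaca_e3.py \
        /root/autodl-tmp/xtuner/work_dirs/qwen1_5_1_8b_chat_qlora_alpaca_e3/iter_2000.pth \
        /root/autodl-tmp/xtuner/work_dirs/qwen1_5_1_8b_chat_qlora_alpaca_e3/iter_2000_hf

Once the import error is resolved, the same pth_to_hf command should produce the HuggingFace-format adapter in the iter_2000_hf directory.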