The code example reproduced here is as follows:
Code: Select all
from datasets import load_dataset  # imported in the original example but not used below
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the Llama-2 chat base model; device_map="auto" lets accelerate
# spread the weights across GPU, CPU and disk as needed
base_model = AutoModelForCausalLM.from_pretrained(
    'meta-llama/Llama-2-7b-chat-hf',
    trust_remote_code=True,
    device_map="auto",
    torch_dtype=torch.float16,  # optional if you have enough VRAM
)
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-chat-hf')

# Attach the FinGPT forecaster LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, 'FinGPT/fingpt-forecaster_dow30_llama2-7b_lora')
model = model.eval()
The adapter-loading call, PeftModel.from_pretrained, fails with the following traceback:
Code: Select all
model = PeftModel.from_pretrained(base_model, 'FinGPT/fingpt-forecaster_dow30_llama2-7b_lora')
File ~\AppData\Roaming\Python\Python310\site-packages\peft\peft_model.py:430 in from_pretrained
model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs)
File ~\AppData\Roaming\Python\Python310\site-packages\peft\peft_model.py:1022 in load_adapter
self._update_offload(offload_index, adapters_weights)
File ~\AppData\Roaming\Python\Python310\site-packages\peft\peft_model.py:908 in _update_offload
safe_module = dict(self.named_modules())[extended_prefix]
KeyError: 'base_model.model.model.model.embed_tokens'
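Judging by the traceback, the KeyError is raised inside peft's _update_offload, a code path that only runs when device_map="auto" has offloaded part of the base model to disk. A possible workaround is to keep the whole model on a single device so no offload index is built; below is a minimal sketch under that assumption (the device_map={"": 0} placement is my suggestion, not from the original post):
Code: Select all
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Keep the entire base model on one device (GPU 0) so that accelerate never
# offloads submodules to disk; peft's _update_offload branch, where the
# KeyError above originates, is then never entered.
base_model = AutoModelForCausalLM.from_pretrained(
    'meta-llama/Llama-2-7b-chat-hf',
    trust_remote_code=True,
    device_map={"": 0},  # use "cpu" instead if GPU memory is insufficient
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base_model, 'FinGPT/fingpt-forecaster_dow30_llama2-7b_lora')
model = model.eval()
If the full fp16 model does not fit in VRAM, loading it entirely on the CPU (device_map="cpu") also avoids the offload path, at the cost of much slower inference.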
More details here: https://stackoverflow.com/questions/787 ... n-using-pe