Code: Select all
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"

# Meta-Llama-3-8B is a gated model, so authenticate with the Hub before downloading
!huggingface-cli login --token $HF_TOKEN

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Move the model to the GPU (large: an 8B model is ~32 GB of weights in fp32)
model.to('cuda')
The following error appears:
Code: Select all
Your token has been saved to /root/.cache/huggingface/token
Login successful
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_token.py:89: UserWarning:
The secret `HF_TOKEN` does not exist in your Colab secrets.
To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.
You will be able to reuse this secret in all of your notebooks.
Please note that authentication is recommended but still optional to access public models or datasets.
warnings.warn(
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards:  25% 1/4 [00:22
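A likely cause (an assumption, not confirmed in the post) is that the free Colab runtime runs out of CPU RAM while the fp32 shards are materialized: 8B parameters in float32 need roughly 32 GB, far more than the ~12 GB a free instance provides. Below is a minimal sketch of a lower-memory load, assuming a Colab GPU runtime with the accelerate package installed:

Code: Select all
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the weights in half precision and let accelerate place them directly
# on the GPU instead of building full fp32 copies in CPU RAM first
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,   # halves the weight footprint to ~16 GB
    device_map="auto",           # requires the accelerate package
    low_cpu_mem_usage=True,
)

With device_map="auto" the model is already placed on the GPU, so the explicit model.to('cuda') call is no longer needed.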
More details here: [url]https://stackoverflow.com/questions/78748213/huggingface-loading-checkpoint-shards-in-collab-for-llama-3-8b-stops-at-25[/url]