Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python)


Post by Anonymous »

I tried to install llama-cpp-python via pip, but I'm getting an error during installation.
The command I ran:

Code: Select all

CMAKE_ARGS="-DLLAMA_METAL_EMBED_LIBRARY=ON -DLLAMA_METAL=on" pip install llama-cpp-python --no-cache-dir

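Note: the build log below warns that LLAMA_METAL and LLAMA_METAL_EMBED_LIBRARY are deprecated; per those warnings, the same build under the current GGML_* flag names would be (a sketch — same options, newer spelling):

Code: Select all

CMAKE_ARGS="-DGGML_METAL_EMBED_LIBRARY=ON -DGGML_METAL=on" pip install llama-cpp-python --no-cache-dir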
Here is what I get:

Code: Select all

  Collecting llama-cpp-python
Downloading llama_cpp_python-0.3.16.tar.gz (50.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.7/50.7 MB 32.0 MB/s  0:00:01
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in ./venv/lib/python3.12/site-packages (from llama-cpp-python) (4.13.2)
Requirement already satisfied: numpy>=1.20.0 in ./venv/lib/python3.12/site-packages (from llama-cpp-python) (1.26.4)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Requirement already satisfied: jinja2>=2.11.3 in ./venv/lib/python3.12/site-packages (from llama-cpp-python) (3.1.6)
Requirement already satisfied: MarkupSafe>=2.0 in ./venv/lib/python3.12/site-packages (from jinja2>=2.11.3->llama-cpp-python) (3.0.2)
Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [186 lines of output]
*** scikit-build-core 0.11.6 using CMake 4.1.0 (wheel)
*** Configuring CMake...
fatal error: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/lipo: can't open input file: ninja (No such file or directory)
loading initial cache file /var/folders/rk/5ph66sp945n6ch5mz1zs_y040000gn/T/tmpy7d3bhcr/build/CMakeInit.txt
-- The C compiler identification is AppleClang 17.0.0.17000013
-- The CXX compiler identification is AppleClang 17.0.0.17000013
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /opt/homebrew/bin/ccache - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /opt/homebrew/bin/ccache - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Host architecture: arm64
-- Target architecture: arm64
CMAKE_BUILD_TYPE=Release
-- Found Git: /opt/homebrew/bin/git (found version "2.50.0")
CMake Warning at vendor/llama.cpp/CMakeLists.txt:118 (message):
LLAMA_METAL is deprecated and will be removed in the future.

Use GGML_METAL instead

Call Stack (most recent call first):
vendor/llama.cpp/CMakeLists.txt:125 (llama_option_depr)

CMake Warning at vendor/llama.cpp/CMakeLists.txt:118 (message):
LLAMA_METAL_EMBED_LIBRARY is deprecated and will be removed in the future.

Use GGML_METAL_EMBED_LIBRARY instead

Call Stack (most recent call first):
vendor/llama.cpp/CMakeLists.txt:126 (llama_option_depr)

-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- ccache found, compilation results will be cached.  Disable with GGML_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: arm64
-- GGML_SYSTEM_ARCH: ARM
-- Including CPU backend
-- Accelerate framework found
-- Could NOT find OpenMP_C (missing: OpenMP_C_FLAGS OpenMP_C_LIB_NAMES)
-- Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES)
-- Could NOT find OpenMP (missing: OpenMP_C_FOUND OpenMP_CXX_FOUND)
CMake Warning at vendor/llama.cpp/ggml/src/ggml-cpu/CMakeLists.txt:79 (message):
OpenMP not found
Call Stack (most recent call first):
vendor/llama.cpp/ggml/src/CMakeLists.txt:372 (ggml_add_cpu_backend_variant_impl)

-- ARM detected
-- Performing Test GGML_COMPILER_SUPPORTS_FP16_FORMAT_I3E
-- Performing Test GGML_COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
-- ARM -mcpu not found, -mcpu=native will be used
-- Performing Test GGML_MACHINE_SUPPORTS_dotprod
-- Performing Test GGML_MACHINE_SUPPORTS_dotprod - Success
-- Performing Test GGML_MACHINE_SUPPORTS_i8mm
-- Performing Test GGML_MACHINE_SUPPORTS_i8mm - Success
-- Performing Test GGML_MACHINE_SUPPORTS_sve
-- Performing Test GGML_MACHINE_SUPPORTS_sve - Failed
-- Performing Test GGML_MACHINE_SUPPORTS_nosve
-- Performing Test GGML_MACHINE_SUPPORTS_nosve - Success
-- Performing Test GGML_MACHINE_SUPPORTS_sme
-- Performing Test GGML_MACHINE_SUPPORTS_sme - Failed
-- Performing Test GGML_MACHINE_SUPPORTS_nosme
-- Performing Test GGML_MACHINE_SUPPORTS_nosme - Success
ccache: invalid option -- m
CMake Warning at vendor/llama.cpp/ggml/src/ggml-cpu/CMakeLists.txt:222 (message):
Failed to get ARM features
Call Stack (most recent call first):
vendor/llama.cpp/ggml/src/CMakeLists.txt:372 (ggml_add_cpu_backend_variant_impl)

-- Adding CPU backend variant ggml-cpu: -mcpu=native+dotprod+i8mm+nosve+nosme
-- Looking for dgemm_
-- Looking for dgemm_ - found
-- Found BLAS: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework
-- BLAS found, Libraries: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework
-- BLAS found, Includes:
-- Including BLAS backend
-- Metal framework found
-- The ASM compiler identification is unknown
-- Found assembler: /opt/homebrew/bin/ccache
-- Including METAL backend
-- ggml version: 0.0.1
-- ggml commit:  4227c9b
CMake Warning (dev) at CMakeLists.txt:13 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:108 (llama_cpp_python_install_target)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at CMakeLists.txt:21 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:108 (llama_cpp_python_install_target)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at CMakeLists.txt:13 (install):
Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:109 (llama_cpp_python_install_target)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at CMakeLists.txt:21 (install):
Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:109 (llama_cpp_python_install_target)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at CMakeLists.txt:13 (install):
Target mtmd has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:162 (llama_cpp_python_install_target)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at CMakeLists.txt:21 (install):
Target mtmd has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:162 (llama_cpp_python_install_target)
This warning is for project developers.   Use -Wno-dev to suppress it.

-- Configuring done (3.4s)
-- Generating done (0.0s)
-- Build files have been written to: /var/folders/rk/5ph66sp945n6ch5mz1zs_y040000gn/T/tmpy7d3bhcr/build
*** Building project with Ninja...
Change Dir: '/var/folders/rk/5ph66sp945n6ch5mz1zs_y040000gn/T/tmpy7d3bhcr/build'

Run Build Command(s): ninja -v
[1/90]
...
[90/90]
ninja: build stopped: subcommand failed.

*** CMake build failed
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
error: failed-wheel-build-for-install

× Failed to build installable wheels for some pyproject.toml based projects
╰─> llama-cpp-python
Python 3.12.10
pip 25.2
macOS 15.6 (24G84)
MacBook Air M2
What I did to try to resolve this issue:
  • Installed Xcode from the App Store
  • Installed the Xcode command-line tools via xcode-select --install
  • Switched the command-line tools to the Xcode app path (

    Code: Select all

    sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer
    )
  • Installed cmake and libomp via Homebrew
  • Changed the flags to CMAKE_ARGS="-DLLAMA_METAL=on"
  • Tried installing llama-cpp-python again with --no-cache-dir (see the retry sketch after this list)
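Two things stand out in the log above: CMake is using /opt/homebrew/bin/ccache as the C/C++ compiler, and that wrapper fails the ARM feature probe with "ccache: invalid option -- m". A diagnostic retry that takes ccache out of the build, a sketch assuming ccache got in via CC/CXX in the environment rather than anything llama.cpp-specific:

Code: Select all

# Point CMake at the stock compilers, disable the ccache launcher the log
# mentions (GGML_CCACHE=OFF), and run pip verbosely so the elided
# [1/90]...[90/90] build output shows the actual failing command.
CC=/usr/bin/cc CXX=/usr/bin/c++ CMAKE_ARGS="-DGGML_METAL_EMBED_LIBRARY=ON -DGGML_METAL=on -DGGML_CCACHE=OFF" pip install llama-cpp-python --no-cache-dir --verbose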
P.S.: I have Conda installed.

More details here: https://stackoverflow.com/questions/797 ... ects-llama
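For reference, the first fatal line in the log (lipo: can't open input file: ninja) suggests the Xcode toolchain could not resolve a ninja binary. A minimal toolchain sanity check before rebuilding (a sketch, assuming Homebrew as the package source):

Code: Select all

xcode-select -p      # expected: /Applications/Xcode.app/Contents/Developer
cmake --version      # the log shows CMake 4.1.0 being picked up
ninja --version      # errors out if no ninja binary is on PATH
brew install ninja   # one way to provide ninja if the previous check fails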