Get the PyTorch source

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama$ sudo apt install -y python3-pip cmake git ninja-build
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama$ pip install --upgrade setuptools wheel
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama$ git clone --recursive https://github.com/pytorch/pytorch
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama$ cd pytorch
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ git checkout v2.10.0
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ git submodule update --init --recursive
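
As an optional sanity check, the following confirms which commit is actually checked out and that the submodules are initialized; the wheel built below (torch-2.10.0a0+git449b176) indicates a pre-release commit rather than a final release tag, so the output here may not match the tag name exactly:

git describe --tags
git submodule status | head -n 5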

Set build environment variables for the HX 390 (gfx1150)

Set the ROCm path

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ export ROCM_PATH=/opt/rocm
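
If you want to verify that ROCM_PATH points at a working ROCm install (assuming the standard /opt/rocm layout), the HIP compiler and rocminfo should both respond:

"$ROCM_PATH/bin/hipcc" --version
"$ROCM_PATH/bin/rocminfo" | head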

Specify the HX 390's Radeon 890M architecture (gfx1150)

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ export PYTORCH_ROCM_ARCH=gfx1150
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ export USE_ROCM=1
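
To double-check that gfx1150 really is the target the driver reports (output varies by system), ROCm ships a couple of query tools:

rocminfo | grep -i gfx
rocm_agent_enumerator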

Disable the CUDA backend (ROCm-only build)

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ export USE_CUDA=0

Optimize build speed (the HX 390 has plenty of cores, so use them all)

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ export MAX_JOBS=$(nproc)
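
Each compile job can use several GB of RAM during the PyTorch build, so if the machine starts swapping it may be better to cap the job count instead of using every core; the value 8 below is only an example, not a measured recommendation:

export MAX_JOBS=8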

# (Optional) Enable MKL-DNN (oneDNN) for CPU operators; this is a separate setting from whether MKL or OpenBLAS is used as the BLAS library

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ export USE_MKLDNN=1
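
If you do want OpenBLAS rather than MKL as the BLAS backend, PyTorch's build also reads a BLAS environment variable; this is shown as an optional example and should be adjusted to whatever BLAS library is actually installed:

export BLAS=OpenBLAS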

Install dependencies

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ pip install -r requirements.txt
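
Before starting a multi-hour compile it can be worth confirming the build tooling installed earlier is on the PATH (versions will differ per system):

cmake --version
ninja --version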

Manually run "Hipify" on the source code

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ python tools/amd_build/build_amd.py
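
build_amd.py rewrites the CUDA sources in the tree into their HIP equivalents, so after it runs the working tree contains many modified and generated files; a quick, purely informational way to see the effect:

git status --porcelain | head -n 20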

Build

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ python setup.py bdist_wheel
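
The build can take hours, so keeping a log and detaching from the terminal can help if the SSH session drops; a variant of the same command using standard nohup/tee-style logging (run in the same shell so the exported variables are inherited):

nohup python setup.py bdist_wheel > build.log 2>&1 &
tail -f build.log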

Install

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ ls dist/torch-*.whl
dist/torch-2.10.0a0+git449b176-cp312-cp312-linux_x86_64.whl
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ pip install dist/torch-*.whl
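
To confirm the freshly installed wheel actually sees the gfx1150 GPU through ROCm, a short check like the following should print a HIP version and the device name (PyTorch exposes ROCm devices through the torch.cuda API):

python -c "import torch; print(torch.__version__, torch.version.hip, torch.cuda.is_available())"
python -c "import torch; print(torch.cuda.get_device_name(0))"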