Creating a virtual environment
bluesanta@bluesanta-A520M-ITX-ac:~$ mkdir Application
bluesanta@bluesanta-A520M-ITX-ac:~$ cd Application/
bluesanta@bluesanta-A520M-ITX-ac:~/Application$ mkdir stable_diffusion
bluesanta@bluesanta-A520M-ITX-ac:~/Application$ cd stable_diffusion
bluesanta@bluesanta-A520M-ITX-ac:~/Application/stable_diffusion$ sudo apt install python3-venv
bluesanta@bluesanta-A520M-ITX-ac:~/Application/stable_diffusion$ python3 -m venv .venv
bluesanta@bluesanta-A520M-ITX-ac:~/Application/stable_diffusion$ source .venv/bin/activate
(.venv) bluesanta@bluesanta-A520M-ITX-ac:~/Application/stable_diffusion$
Installing PyTorch (CUDA 12.x builds)
The current official PyTorch builds support CUDA 12.4 through 12.6.
(.venv) bluesanta@bluesanta-A520M-ITX-ac:~/Application/stable_diffusion$ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
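To quickly confirm that the cu124 wheel was actually picked up, the interpreter can be queried directly (the fuller check script further below covers this in more detail):
(.venv) bluesanta@bluesanta-A520M-ITX-ac:~/Application/stable_diffusion$ python -c "import torch; print(torch.__version__)"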
Installing Transformers and optimization libraries
(.venv) bluesanta@bluesanta-A520M-ITX-ac:~/Application/stable_diffusion$ pip install transformers datasets accelerate
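As a quick smoke test, the transformers + accelerate stack can place a model on the GPU with device_map="auto". A minimal sketch (the checkpoint name is only an example, not something used in this post):
# Minimal sketch: run a text-generation pipeline on the RTX 4060.
# "gpt2" is an example checkpoint; substitute the model you actually use.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gpt2",
    device_map="auto",   # let accelerate decide GPU/CPU placement
)
print(generator("Hello, RTX 4060!", max_new_tokens=20)[0]["generated_text"])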
For 8-bit/4-bit quantization
(.venv) bluesanta@bluesanta-A520M-ITX-ac:~/Application/stable_diffusion$ pip install bitsandbytes
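A minimal 4-bit loading sketch with bitsandbytes via transformers (the checkpoint and settings are illustrative, not from this post):
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize weights to 4 bits and run compute in fp16 on the GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",              # example checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)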
For efficient fine-tuning methods such as LoRA
(.venv) bluesanta@bluesanta-A520M-ITX-ac:~/Application/stable_diffusion$ pip install peft
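A minimal LoRA sketch with peft (the checkpoint, hyperparameters, and target modules are illustrative):
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # example checkpoint

lora_config = LoraConfig(
    r=8,                                   # rank of the LoRA update matrices
    lora_alpha=16,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()         # only a small fraction of weights are trainable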
For hardware-acceleration backends such as ONNX and OpenVINO
(.venv) bluesanta@bluesanta-A520M-ITX-ac:~/Application/stable_diffusion$ pip install optimum
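A minimal ONNX export sketch with optimum. This assumes the ONNX Runtime extra is also installed (e.g. pip install optimum[onnxruntime]); the checkpoint is just an example:
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"   # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

inputs = tokenizer("ONNX export works!", return_tensors="pt")
print(ort_model(**inputs).logits)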
Installing Flash Attention
Since the RTX 4060 is based on the Ada Lovelace architecture, install Flash Attention, which dramatically speeds up Transformer attention computation.
(.venv) bluesanta@bluesanta-A520M-ITX-ac:~/Application/stable_diffusion$ pip install flash-attn --no-build-isolation
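Once flash-attn is installed, transformers can use it when loading a model. A minimal sketch (the checkpoint is an example; Flash Attention 2 requires fp16 or bf16 weights):
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b",                      # example checkpoint
    torch_dtype=torch.float16,                # FA2 only works with fp16/bf16
    attn_implementation="flash_attention_2",  # use the flash-attn kernels
    device_map="auto",
)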
Checking the CUDA 12.4 installation
cuda_version_check.py
import torch
import torch.backends.cudnn as cudnn

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"GPU Name: {torch.cuda.get_device_name(0)}")
print(f"cuDNN version: {cudnn.version()}")

# Number of GPU devices currently visible
print(f"Device Count: {torch.cuda.device_count()}")

# Simple tensor operation test on the GPU
x = torch.randn(3, 3).cuda()
y = x @ x  # run a matrix multiplication on the GPU
print("Tensor operation success!")

if cudnn.is_acceptable(torch.randn(1, device='cuda')):
    print(f"cuDNN version: {cudnn.version()}")
    print("cuDNN is working perfectly!")
Run the script
(.venv) bluesanta@bluesanta-A520M-ITX-ac:~/Application/stable_diffusion$ python cuda_version_check.py
PyTorch version: 2.6.0+cu124
CUDA available: True
GPU Name: NVIDIA GeForce RTX 4060
cuDNN version: 90100
Device Count: 1
Tensor operation success!
cuDNN version: 90100
cuDNN is working perfectly!