Install the build packages

(.venv) [bluesanta@localhost pytorch]$ sudo dnf install -y cmake
(.venv) [bluesanta@localhost pytorch]$ sudo dnf --enablerepo=devel install -y ninja-build
(.venv) [bluesanta@localhost pytorch]$ python -m pip install mkl-include mkl-static ninja scikit-build
(.venv) [bluesanta@localhost pytorch]$ python -m pip install -r requirements.txt
(.venv) [bluesanta@localhost pytorch]$ sudo dnf install -y libomp-devel

Install OpenMPI

mkdir /tmp/openmpi \
	&& cd /tmp/openmpi \
	&& wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.6.tar.gz \
	&& tar zxf openmpi-4.1.6.tar.gz \
	&& cd openmpi-4.1.6 \
	&& ./configure --prefix=/usr  --enable-orterun-prefix-by-default  --with-cuda=$CUDA_HOME --with-cuda-libdir=$CUDA_HOME/lib64/stubs --with-slurm  > /dev/null \
	&& make -j $(nproc) all \
	&& sudo make -s install \
	&& sudo ldconfig \
	&& cd ~/ \
	&& rm -rf /tmp/openmpi \
	&& ompi_info | grep "MPI extensions"
(.venv) [bluesanta@localhost ~]$ mkdir /tmp/openmpi \
> && cd /tmp/openmpi \
> && wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.6.tar.gz \
> && tar zxf openmpi-4.1.6.tar.gz \
> && cd openmpi-4.1.6 \
> && ./configure --prefix=/usr  --enable-orterun-prefix-by-default  --with-cuda=$CUDA_HOME --with-cuda-libdir=$CUDA_HOME/lib64/stubs --with-slurm  > /dev/null \
> && make -j $(nproc) all \
> && sudo make -s install \
> && sudo ldconfig \
> && cd ~/ \
> && rm -rf /tmp/openmpi \
> && ompi_info | grep "MPI extensions"
 
Making install in tools/mpisync
Making install in test
Making install in support
Making install in asm
Making install in class
Making install in threads
Making install in datatype
Making install in util
Making install in dss
Making install in mpool
Making install in monitoring
Making install in spc
          MPI extensions: affinity, cuda, pcollreq
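
Once the install finishes, OpenMPI can be sanity-checked without any project code; the commands below only assume `mpirun` and `ompi_info` are on `PATH`:

```shell
# Confirm the installed Open MPI version string
mpirun --version
ompi_info --parsable | grep version:full   # e.g. ompi:version:full:4.1.6
# Confirm process launch works: should print the hostname twice
mpirun -np 2 hostname
```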

Download the PyTorch source

(.venv) [bluesanta@localhost stable-diffusion]$ git clone https://github.com/pytorch/pytorch
(.venv) [bluesanta@localhost stable-diffusion]$ cd pytorch
(.venv) [bluesanta@localhost pytorch]$ git submodule sync
(.venv) [bluesanta@localhost pytorch]$ git submodule update --init --recursive
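
Building `main` produces a moving-target version such as `2.8.0a0+git…`. To build a fixed release instead, a tag can be checked out before syncing submodules (the tag name below is illustrative; `git tag -l` shows the available releases):

```shell
# Optional: build a released tag instead of main (tag name illustrative)
git checkout v2.7.0
git submodule sync
git submodule update --init --recursive
```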

Install the PyTorch build requirements

(.venv) [bluesanta@localhost pytorch]$ pip install -r requirements.txt

Install the NCCL packages

(.venv) [bluesanta@localhost ~]$ sudo dnf config-manager --add-repo http://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo
(.venv) [bluesanta@localhost ~]$ sudo dnf install -y libnccl libnccl-devel libnccl-static
(.venv) [bluesanta@localhost ~]$ sudo dnf update -y
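
Since `NCCL_INCLUDE_DIR` is passed to the build below, it may help to confirm where dnf placed the NCCL headers and libraries first:

```shell
# NCCL version as seen by the compiler
grep -E "#define NCCL_(MAJOR|MINOR|PATCH)" /usr/include/nccl.h
# Shared libraries registered with the dynamic linker
ldconfig -p | grep nccl
```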

Build PyTorch

export CMAKE_PREFIX_PATH="/usr/include/openmpi;/usr/lib/openmpi;/usr/lib"

USE_CUDA=1 \
USE_CUDNN=1 \
USE_MPI=1 \
USE_SYSTEM_NCCL=1 \
USE_ROCM=0 \
NCCL_INCLUDE_DIR=/usr/include \
python setup.py develop
[bluesanta@localhost ~]$ cd Applications/
[bluesanta@localhost Applications]$ cd stable-diffusion/
[bluesanta@localhost stable-diffusion]$ source .venv/bin/activate
(.venv) [bluesanta@localhost stable-diffusion]$ cd pytorch/
(.venv) [bluesanta@localhost pytorch]$ export CMAKE_PREFIX_PATH="/usr/include/openmpi;/usr/lib/openmpi;/usr/lib"
(.venv) [bluesanta@localhost pytorch]$ 
(.venv) [bluesanta@localhost pytorch]$ USE_CUDA=1 \
> USE_CUDNN=1 \
> USE_MPI=1 \
> USE_SYSTEM_NCCL=1 \
> USE_ROCM=0 \
> NCCL_INCLUDE_DIR=/usr/include \
> python setup.py develop
 
copying build/lib.linux-x86_64-cpython-311/torch/_C.cpython-311-x86_64-linux-gnu.so -> torch
copying build/lib.linux-x86_64-cpython-311/functorch/_C.cpython-311-x86_64-linux-gnu.so -> functorch
Creating /home/bluesanta/Applications/stable-diffusion/.venv/lib/python3.11/site-packages/torch.egg-link (link to .)
Adding torch 2.8.0a0+git663bcb6 to easy-install.pth file
Installing torchfrtrace script to /home/bluesanta/Applications/stable-diffusion/.venv/bin
Installing torchrun script to /home/bluesanta/Applications/stable-diffusion/.venv/bin
 
Installed /home/bluesanta/Applications/stable-diffusion/pytorch
Processing dependencies for torch==2.8.0a0+git663bcb6
Searching for fsspec==2025.3.2
Best match: fsspec 2025.3.2
Adding fsspec 2025.3.2 to easy-install.pth file
 
Using /home/bluesanta/Applications/stable-diffusion/.venv/lib/python3.11/site-packages
Searching for jinja2==3.1.6
Best match: jinja2 3.1.6
Adding jinja2 3.1.6 to easy-install.pth file
 
Using /home/bluesanta/Applications/stable-diffusion/.venv/lib/python3.11/site-packages
Searching for networkx==3.4.2
Best match: networkx 3.4.2
Adding networkx 3.4.2 to easy-install.pth file
 
Using /home/bluesanta/Applications/stable-diffusion/.venv/lib/python3.11/site-packages
Searching for sympy==1.14.0
Best match: sympy 1.14.0
Adding sympy 1.14.0 to easy-install.pth file
Installing isympy script to /home/bluesanta/Applications/stable-diffusion/.venv/bin
 
Using /home/bluesanta/Applications/stable-diffusion/.venv/lib/python3.11/site-packages
Searching for typing-extensions==4.13.2
Best match: typing-extensions 4.13.2
Adding typing-extensions 4.13.2 to easy-install.pth file
 
Using /home/bluesanta/Applications/stable-diffusion/.venv/lib/python3.11/site-packages
Searching for filelock==3.18.0
Best match: filelock 3.18.0
Adding filelock 3.18.0 to easy-install.pth file
 
Using /home/bluesanta/Applications/stable-diffusion/.venv/lib/python3.11/site-packages
Searching for MarkupSafe==3.0.2
Best match: MarkupSafe 3.0.2
Adding MarkupSafe 3.0.2 to easy-install.pth file
 
Using /home/bluesanta/Applications/stable-diffusion/.venv/lib/python3.11/site-packages
Searching for mpmath==1.3.0
Best match: mpmath 1.3.0
Adding mpmath 1.3.0 to easy-install.pth file
 
Using /home/bluesanta/Applications/stable-diffusion/.venv/lib/python3.11/site-packages
Finished processing dependencies for torch==2.8.0a0+git663bcb6
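
After `setup.py develop` finishes, the build flags used above (`USE_MPI`, `USE_SYSTEM_NCCL`) can be verified from Python. A quick check, assuming the same `.venv` is still active:

```shell
python - <<'EOF'
import torch
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("MPI backend:", torch.distributed.is_mpi_available())
print("NCCL backend:", torch.distributed.is_nccl_available())
EOF
```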

Install TensorFlow with GPU support

(.venv) [bluesanta@localhost stable-diffusion]$ pip install tensorflow[and-cuda]==2.19.0

Check the CUDA version

cuda_version_check.py source

import torch

import tensorflow as tf
from tensorflow.python.client import device_lib

x = torch.rand(5, 3)
print(x)

print("----------------------------------------")
print("torch.cuda.is_available() =", torch.cuda.is_available())
print("----------------------------------------")
print("torch.cuda.current_device() =", torch.cuda.current_device())
print("----------------------------------------")
print("torch.cuda.get_device_name(0) =", torch.cuda.get_device_name(0))
print("----------------------------------------")
print("torch.__version__ =", torch.__version__)

# Print the CUDA version that PyTorch is using
print(f"CUDA version: {torch.version.cuda}")

# Check if the GPU and CuDNN are recognized
print("----------------------------------------")
print(device_lib.list_local_devices())
print("----------------------------------------")
print(tf.test.is_built_with_cuda())
print("----------------------------------------")
print(tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None))
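
Note that `tf.test.is_gpu_available()` is deprecated (the run below prints a warning recommending `tf.config.list_physical_devices('GPU')`). An equivalent check with the current API would be:

```shell
python - <<'EOF'
import tensorflow as tf
# Current replacement for the deprecated tf.test.is_gpu_available()
gpus = tf.config.list_physical_devices('GPU')
print("GPUs:", gpus)
print("Built with CUDA:", tf.test.is_built_with_cuda())
EOF
```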

Run cuda_version_check.py

(.venv) [bluesanta@localhost stable-diffusion]$ python cuda_version_check.py 
2025-04-30 23:56:43.699453: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1746025003.710231  604585 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1746025003.713433  604585 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W0000 00:00:1746025003.722518  604585 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1746025003.722529  604585 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1746025003.722532  604585 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1746025003.722534  604585 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
2025-04-30 23:56:43.725295: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
tensor([[0.3775, 0.4729, 0.4964],
        [0.0783, 0.6957, 0.6801],
        [0.8738, 0.3818, 0.6314],
        [0.8639, 0.7027, 0.7775],
        [0.3149, 0.0923, 0.2672]])
----------------------------------------
torch.cuda.is_available() = True
----------------------------------------
torch.cuda.current_device() = 0
----------------------------------------
torch.cuda.get_device_name(0) = NVIDIA GeForce RTX 4090
----------------------------------------
torch.__version__ = 2.8.0a0+git663bcb6
CUDA version: 12.5
----------------------------------------
I0000 00:00:1746025005.014143  604585 gpu_device.cc:2019] Created device /device:GPU:0 with 21961 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:06:00.0, compute capability: 8.9
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 5174814723682739464
xla_global_id: -1
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 23028629504
locality {
  bus_id: 1
  links {
  }
}
incarnation: 18339436096741683477
physical_device_desc: "device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:06:00.0, compute capability: 8.9"
xla_global_id: 416903419
]
----------------------------------------
True
----------------------------------------
WARNING:tensorflow:From /home/bluesanta/Applications/stable-diffusion/pytorch_test1.py:27: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
I0000 00:00:1746025005.019230  604585 gpu_device.cc:2019] Created device /device:GPU:0 with 21961 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:06:00.0, compute capability: 8.9
True

Create a Python virtual environment

[bluesanta@localhost local]$ sudo ln -s /home/bluesanta/Applications/stable-diffusion /usr/local/stable-diffusion
[bluesanta@localhost ~]$ cd /usr/local/stable-diffusion/
[bluesanta@localhost stable-diffusion]$ python3.11 -m venv .venv

Activate the Python virtual environment

[bluesanta@localhost stable-diffusion]$ source .venv/bin/activate
(.venv) [bluesanta@localhost stable-diffusion]$

Upgrade pip

(.venv) [bluesanta@localhost stable-diffusion]$ python -m pip install --upgrade pip

Download ComfyUI

(.venv) [bluesanta@localhost stable-diffusion]$ git clone https://github.com/comfyanonymous/ComfyUI.git

Install the ComfyUI dependencies

(.venv) [bluesanta@localhost stable-diffusion]$ cd ComfyUI/
(.venv) [bluesanta@localhost ComfyUI]$ pip install -r requirements.txt

Run ComfyUI

(.venv) [bluesanta@localhost ComfyUI]$ python main.py
Checkpoint files will always be loaded safely.
Total VRAM 24090 MB, total RAM 128222 MB
pytorch version: 2.7.0+cu126
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
Using pytorch attention
Python version: 3.11.12 (main, Apr 22 2025, 23:29:55) [GCC 11.5.0 20240719 (Red Hat 11.5.0-5)]
ComfyUI version: 0.3.30
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
ComfyUI frontend version: 1.17.11
[Prompt Server] web root: /home/bluesanta/Applications/stable-diffusion/.venv/lib/python3.11/site-packages/comfyui_frontend_package/static

Import times for custom nodes:
   0.0 seconds: /home/bluesanta/Applications/stable-diffusion/ComfyUI/custom_nodes/websocket_image_save.py

Starting server

To see the GUI go to: http://127.0.0.1:8188

Install ComfyUI Manager

(.venv) [bluesanta@localhost ComfyUI]$ cd custom_nodes
(.venv) [bluesanta@localhost custom_nodes]$ git clone https://github.com/ltdrdata/ComfyUI-Manager.git

Share stable-diffusion-webui models with ComfyUI

(.venv) [bluesanta@localhost custom_nodes]$ cd /usr/local/stable-diffusion/ComfyUI/
(.venv) [bluesanta@localhost ComfyUI]$ cp extra_model_paths.yaml.example extra_model_paths.yaml
(.venv) [bluesanta@localhost ComfyUI]$ vi extra_model_paths.yaml
#    base_path: path/to/stable-diffusion-webui/
    base_path: /usr/local/stable-diffusion/stable-diffusion-webui/

Allow external access to ComfyUI

(.venv) [bluesanta@localhost ComfyUI]$ python main.py --listen 0.0.0.0

Open port 8188

[bluesanta@localhost ~]$ sudo firewall-cmd --permanent --zone=public --add-port=8188/tcp
success
[bluesanta@localhost ~]$ sudo firewall-cmd --reload
success
[bluesanta@localhost ~]$ sudo firewall-cmd --list-ports
3389/tcp 7860/tcp 8188/tcp

Download the Docker packages

user01@css:/usr/local$ sudo apt-get reinstall --download-only -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

Copy the Docker package files

user01@css:~$ cd ~/
user01@css:~$ mkdir docker
user01@css:~$ sudo mv /var/cache/apt/archives/*.deb ~/docker/

Install the Docker packages

user01@css:~$ cd docker/
user01@css:~/docker$ ls
bridge-utils_1.7.1-1ubuntu2_amd64.deb                                pigz_2.8-1_amd64.deb
containerd_1.7.24-0ubuntu1~24.04.2_amd64.deb                         python3-compose_1.29.2-6ubuntu1_all.deb
containerd.io_1.7.27-1_amd64.deb                                     python3-docker_5.0.3-1ubuntu1.1_all.deb
docker-buildx-plugin_0.23.0-1~ubuntu.24.04~noble_amd64.deb           python3-dockerpty_0.4.1-5_all.deb
docker-ce_5%3a28.1.1-1~ubuntu.24.04~noble_amd64.deb                  python3-docopt_0.6.2-6_all.deb
docker-ce-cli_5%3a28.1.1-1~ubuntu.24.04~noble_amd64.deb              python3-dotenv_1.0.1-1_all.deb
docker-ce-rootless-extras_5%3a28.1.1-1~ubuntu.24.04~noble_amd64.deb  python3-texttable_1.6.7-1_all.deb
docker-compose_1.29.2-6ubuntu1_all.deb                               python3-websocket_1.7.0-1_all.deb
docker-compose-plugin_2.35.1-1~ubuntu.24.04~noble_amd64.deb          runc_1.1.12-0ubuntu3.1_amd64.deb
docker.io_26.1.3-0ubuntu1~24.04.1_amd64.deb                          slirp4netns_1.2.1-1build2_amd64.deb
libltdl7_2.4.7-7build1_amd64.deb                                     ubuntu-fan_0.12.16_all.deb
libslirp0_4.7.0-1ubuntu3_amd64.deb
user01@css:~/docker$ sudo dpkg -i *.deb

Create the directories to share

user01@css:~$ sudo mkdir /usr/local/bluexmas_home
user01@css:~$ sudo mkdir -p /usr/local/bluexmas/resources

Change the owner of the shared directories

user01@css:~$ sudo chown user01:user01 -Rf /usr/local/bluexmas_home/
user01@css:~$ sudo chown user01:user01 -Rf /usr/local/bluexmas/

Restore the Docker image

user01@css:~$ sudo docker load -i bluexxmas-ubuntu_v8.1.tar
Loaded image: bluexxmas-ubuntu:v8.1

Create the container

sudo docker run --add-host=host.docker.internal:host-gateway -it \
-h 0.0.0.0 \
-p 80:80 -p 443:443 -p 8080:8080 \
--name bluexxmas-ubuntu \
--restart always \
-v /usr/local/bluexxmas_home:/usr/local/bluexxmas_home \
-v /usr/local/bluexxmas/resources:/usr/local/bluexxmas/resources \
bluexxmas-ubuntu:v8.1 \
/bin/startservice.sh
user01@css:~$ sudo docker run --add-host=host.docker.internal:host-gateway -it \
> -h 0.0.0.0 \
> -p 80:80 -p 443:443 -p 8080:8080 \
> --name bluexxmas-ubuntu \
> --restart always \
> -v /usr/local/bluexxmas_home:/usr/local/bluexxmas_home \
> -v /usr/local/bluexxmas/resources:/usr/local/bluexxmas/resources \
> bluexxmas-ubuntu:v8.1 \
> /bin/startservice.sh
 * Starting Nginx Server...                                                                                                                                             [ OK ] 
Starting Tomcat
Using CATALINA_BASE:   /usr/local/apache-tomcat-10.1.34
Using CATALINA_HOME:   /usr/local/apache-tomcat-10.1.34
Using CATALINA_TMPDIR: /usr/local/apache-tomcat-10.1.34/temp
Using JRE_HOME:        /usr/lib/jvm/java-17-openjdk-amd64
Using CLASSPATH:       /usr/local/apache-tomcat-10.1.34/bin/bootstrap.jar:/usr/local/apache-tomcat-10.1.34/bin/tomcat-juli.jar
Using CATALINA_OPTS:   
Tomcat started.
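
The long `docker run` invocation above can equivalently be kept as a Compose file, which is easier to version and re-run. A sketch mirroring the command (the file itself is not from the original post):

```yaml
# docker-compose.yml -- equivalent of the docker run command above
services:
  bluexxmas-ubuntu:
    image: bluexxmas-ubuntu:v8.1
    container_name: bluexxmas-ubuntu
    hostname: "0.0.0.0"          # as passed via -h in the original command
    restart: always
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /usr/local/bluexxmas_home:/usr/local/bluexxmas_home
      - /usr/local/bluexxmas/resources:/usr/local/bluexxmas/resources
    command: /bin/startservice.sh
    stdin_open: true             # -i
    tty: true                    # -t
```

With this file in place, `sudo docker compose up -d` starts the same container.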

Check the list of Docker containers

user01@css:~$ sudo docker ps -a
CONTAINER ID   IMAGE               COMMAND                  CREATED          STATUS         PORTS                                                                                                                       NAMES
ec829951584a   cnssm-ubuntu:v8.1   "/bin/startservice.sh"   11 minutes ago   Up 9 minutes   0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp, 0.0.0.0:8080->8080/tcp, [::]:8080->8080/tcp   cnssm-ubuntu

Open a shell in the Docker container

user01@css:~$ sudo docker exec -it ec829951584a /bin/bash

The container startup script

root@ 0:/# cat /bin/startservice.sh
#!/bin/sh
 
service nginx start
service tomcat start
/bin/bash
 
root@ 0:/# 

 


Source

Mounting a NAS on Linux :: 다인엔시스

Check the filesystems before mounting

[bluesanta@localhost ~]$ df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             4.0M     0  4.0M   0% /dev
tmpfs                 63G     0   63G   0% /dev/shm
tmpfs                 26G   58M   25G   1% /run
efivarfs             128K   38K   86K  31% /sys/firmware/efi/efivars
/dev/mapper/rl-root   70G   24G   47G  34% /
/dev/mapper/rl-home  7.3T   84G  7.2T   2% /home
/dev/nvme0n1p2       960M  603M  358M  63% /boot
/dev/nvme0n1p1       599M  7.1M  592M   2% /boot/efi
tmpfs                 13G   56K   13G   1% /run/user/42
tmpfs                 13G  132K   13G   1% /run/user/1000

Install the NFS and Samba client packages

[bluesanta@localhost ~]$ sudo dnf install -y nfs-utils
[bluesanta@localhost ~]$ sudo dnf install -y samba-client

Check the NAS share information

[bluesanta@localhost ~]$ smbclient -L //192.168.0.58 -U bluesanta -d 3
lp_load_ex: refreshing parameters
Initialising global parameters
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[global]"
added interface enp5s0 ip=192.168.0.202 bcast=192.168.0.255 netmask=255.255.255.0
Client started (version 4.20.2).
Connecting to 192.168.0.58 at port 445
Password for [SAMBA\bluesanta]:
GENSEC backend 'gssapi_spnego' registered
GENSEC backend 'gssapi_krb5' registered
GENSEC backend 'gssapi_krb5_sasl' registered
GENSEC backend 'spnego' registered
GENSEC backend 'schannel' registered
GENSEC backend 'ncalrpc_as_system' registered
GENSEC backend 'sasl-EXTERNAL' registered
GENSEC backend 'ntlmssp' registered
GENSEC backend 'ntlmssp_resume_ccache' registered
GENSEC backend 'http_basic' registered
GENSEC backend 'http_ntlm' registered
GENSEC backend 'http_negotiate' registered
Cannot do GSE to an IP address
Got challenge flags:
Got NTLMSSP neg_flags=0x628a8215
NTLMSSP: Set final flags:
Got NTLMSSP neg_flags=0x62088215
NTLMSSP Sign/Seal - Initialising with flags:
Got NTLMSSP neg_flags=0x62088215
NTLMSSP Sign/Seal - Initialising with flags:
Got NTLMSSP neg_flags=0x62088215
 
        Sharename       Type      Comment
        ---------       ----      -------
        Disk1           Disk      
        Disk3           Disk      
        IPC$            IPC       IPC Service ()
SMB1 disabled -- no workgroup available

Mount the NAS

[bluesanta@localhost ~]$ sudo mkdir /mnt/Disk1
[bluesanta@localhost ~]$ sudo mount -t cifs -o username=bluesanta,password=passwd //192.168.0.58/Disk1 /mnt/Disk1
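
Passing the password on the command line leaves it in shell history. A root-only credentials file avoids that, and an `/etc/fstab` entry makes the mount survive reboots (the file path below is illustrative):

```shell
# Keep the SMB credentials in a root-only file instead of the command line
sudo sh -c 'printf "username=bluesanta\npassword=passwd\n" > /root/.smbcredentials'
sudo chmod 600 /root/.smbcredentials
sudo mount -t cifs -o credentials=/root/.smbcredentials //192.168.0.58/Disk1 /mnt/Disk1

# To remount automatically at boot, add an /etc/fstab line like:
# //192.168.0.58/Disk1  /mnt/Disk1  cifs  credentials=/root/.smbcredentials,_netdev  0  0
```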

 


Install stable-diffusion-webui

[bluesanta@localhost ~]$ cd Applications
[bluesanta@localhost Applications]$ mkdir stable-diffusion
[bluesanta@localhost stable-diffusion]$ git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

Create a Python virtual environment

[bluesanta@localhost stable-diffusion]$ python3.11 -m venv .venv

Activate the Python virtual environment

[bluesanta@localhost stable-diffusion]$ source .venv/bin/activate
(.venv) [bluesanta@localhost stable-diffusion]$

Run stable-diffusion-webui

(.venv) [bluesanta@localhost stable-diffusion]$ cd stable-diffusion-webui/
(.venv) [bluesanta@localhost stable-diffusion-webui]$ ./webui.sh
 
Loading weights [6ce0161689] from /home/bluesanta/Applications/stable-diffusion/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Running on local URL:  http://127.0.0.1:7860

Allow external access to stable-diffusion-webui

(.venv) [bluesanta@localhost stable-diffusion-webui]$ vi webui-user.sh
export COMMANDLINE_ARGS="--xformers --xformers-flash-attention --share --listen --gradio-auth bluexmas:passwd"

Open port 7860

(.venv) [bluesanta@localhost stable-diffusion-webui]$ sudo firewall-cmd --add-port=7860/tcp --permanent
success
(.venv) [bluesanta@localhost stable-diffusion-webui]$ sudo firewall-cmd --reload
success
(.venv) [bluesanta@localhost stable-diffusion-webui]$ sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp5s0
  sources: 
  services: cockpit dhcpv6-client ssh
  ports: 3389/tcp 7860/tcp
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

Run stable-diffusion-webui with external access

(.venv) [bluesanta@localhost stable-diffusion-webui]$ ./webui.sh
   
Loading weights [6ce0161689] from /home/bluesanta/Applications/stable-diffusion/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Running on local URL:  http://0.0.0.0:7860

Download the MariaDB signing key

user01@css:~$ sudo apt-get install apt-transport-https curl
user01@css:~$ sudo mkdir -p /etc/apt/keyrings
user01@css:~$ sudo curl -o /etc/apt/keyrings/mariadb-keyring.pgp 'https://mariadb.org/mariadb_release_signing_key.pgp'

Add the MariaDB repository

user01@css:~$ sudo vi /etc/apt/sources.list.d/mariadb.sources
# MariaDB 11.8 repository list - created 2025-04-14 00:00 UTC
# https://mariadb.org/download/
X-Repolib-Name: MariaDB
Types: deb
# deb.mariadb.org is a dynamic mirror if your preferred mirror goes offline. See https://mariadb.org/mirrorbits/ for details.
# URIs: https://deb.mariadb.org/11.rc/ubuntu
URIs: https://tw1.mirror.blendbyte.net/mariadb/repo/11.8/ubuntu
Suites: noble
Components: main main/debug
Signed-By: /etc/apt/keyrings/mariadb-keyring.pgp

Install MariaDB

user01@css:~$ sudo apt-get update
user01@css:~$ sudo apt install mariadb-server

Confirm the MariaDB service is enabled

user01@css:~$ sudo systemctl is-enabled mariadb
enabled

Check the MariaDB service status

user01@css:~$ sudo systemctl status mysql
● mariadb.service - MariaDB 11.8.1 database server
     Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; preset: enabled)
    Drop-In: /etc/systemd/system/mariadb.service.d
             └─migrated-from-my.cnf-settings.conf
     Active: active (running) since Thu 2025-04-24 12:33:14 UTC; 53min ago
       Docs: man:mariadbd(8)
             https://mariadb.com/kb/en/library/systemd/
    Process: 5020 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCE>
    Process: 5022 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= ||   VAR=`/usr/bin/galera_recovery`>
    Process: 5106 ExecStartPost=/bin/rm -f /run/mysqld/wsrep-start-position (code=exited, status=0/SUCCESS)
    Process: 5108 ExecStartPost=/etc/mysql/debian-start (code=exited, status=0/SUCCESS)
   Main PID: 5050 (mariadbd)
     Status: "Taking your SQL requests now..."
      Tasks: 9 (limit: 29494)
     Memory: 174.1M (peak: 263.6M)
        CPU: 7.101s
     CGroup: /system.slice/mariadb.service
             └─5050 /usr/sbin/mariadbd
 
Apr 24 12:33:12 css mariadbd[5050]: 2025-04-24 12:33:12 0 [Note] InnoDB: log sequence number 47629; transaction id 14
Apr 24 12:33:12 css mariadbd[5050]: 2025-04-24 12:33:12 0 [Note] Plugin 'FEEDBACK' is disabled.
Apr 24 12:33:12 css mariadbd[5050]: 2025-04-24 12:33:12 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_>
Apr 24 12:33:12 css mariadbd[5050]: 2025-04-24 12:33:12 0 [Note] Plugin 'wsrep-provider' is disabled.
Apr 24 12:33:12 css mariadbd[5050]: 2025-04-24 12:33:12 0 [Note] InnoDB: Buffer pool(s) load completed at 250424 12:33:>
Apr 24 12:33:14 css mariadbd[5050]: 2025-04-24 12:33:14 0 [Note] Server socket created on IP: '127.0.0.1'.
Apr 24 12:33:14 css mariadbd[5050]: 2025-04-24 12:33:14 0 [Note] mariadbd: Event Scheduler: Loaded 0 events
Apr 24 12:33:14 css mariadbd[5050]: 2025-04-24 12:33:14 0 [Note] /usr/sbin/mariadbd: ready for connections.
Apr 24 12:33:14 css mariadbd[5050]: Version: '11.8.1-MariaDB-ubu2404'  socket: '/run/mysqld/mysqld.sock'  port: 3306  m>
Apr 24 12:33:14 css systemd[1]: Started mariadb.service - MariaDB 11.8.1 database server.

Change the MariaDB root password

user01@css:~$ sudo /usr/bin/mysqladmin -u root password
/usr/bin/mysqladmin: Deprecated program name. It will be removed in a future release, use '/usr/bin/mariadb-admin' instead
New password: 
Confirm new password: 

Connect to MariaDB

user01@css:~$ mysql -u root -p
mysql: Deprecated program name. It will be removed in a future release, use '/usr/bin/mariadb' instead
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 34
Server version: 11.8.1-MariaDB-ubu2404 mariadb.org binary distribution
 
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.001 sec)
 
MariaDB [(none)]> 

Create a MariaDB database and user

user01@css:~$ mysql -u root -p 
mysql: Deprecated program name. It will be removed in a future release, use '/usr/bin/mariadb' instead
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 35
Server version: 11.8.1-MariaDB-ubu2404 mariadb.org binary distribution
 
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
MariaDB [(none)]> CREATE DATABASE bluexmas_db CHARACTER SET utf8mb4 collate utf8mb4_general_ci;
Query OK, 1 row affected (0.000 sec)
 
MariaDB [(none)]> create user 'user01'@'localhost' identified by 'passwd';
Query OK, 0 rows affected (0.003 sec)
 
MariaDB [(none)]> grant all privileges on *.* to 'user01'@'localhost' with grant option;
Query OK, 0 rows affected (0.008 sec)
 
MariaDB [(none)]> create user 'user01'@'%' identified by 'passwd';
Query OK, 0 rows affected (0.003 sec)
 
MariaDB [(none)]> grant all privileges on bluexmas_db.* to 'user01'@'%' with grant option;
Query OK, 0 rows affected (0.003 sec)
 
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.001 sec)
 
MariaDB [(none)]> exit
Bye
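
Before opening the firewall, the grants can be double-checked; `SHOW GRANTS` prints exactly what was stored for each account:

```shell
# Inspect what was actually granted (run on the server)
mysql -u root -p -e "SHOW GRANTS FOR 'user01'@'localhost'; SHOW GRANTS FOR 'user01'@'%';"

# From a remote client, once the port is open and bind-address is changed
# (the server IP is illustrative):
# mysql -h 192.168.0.50 -u user01 -p bluexmas_db
```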

Open the MariaDB port

user01@css:~$ sudo ufw status verbose
[sudo] password for user01: 
Status: inactive
user01@css:~$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
user01@css:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
user01@css:~$ sudo ufw allow 22
Rule added
Rule added (v6)
user01@css:~$ sudo ufw allow 3306
Rule added
Rule added (v6)
user01@css:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
 
To                         Action      From
--                         ------      ----
22                         ALLOW IN    Anywhere                  
3306                       ALLOW IN    Anywhere                  
22 (v6)                    ALLOW IN    Anywhere (v6)             
3306 (v6)                  ALLOW IN    Anywhere (v6)             
 
user01@css:~$ 

Allow external access to MariaDB

Check the MariaDB port

Checking the port shows MariaDB listening on 127.0.0.1:3306, i.e., only local connections are accepted.

user01@css:~$ netstat -nao | grep 3306
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      off (0.00/0/0)
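
On recent distributions `netstat` belongs to the deprecated net-tools package; `ss` from iproute2 reports the same listener:

```shell
# Same check with iproute2's ss instead of netstat
ss -tln | grep 3306
```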

Edit the MariaDB configuration

user01@css:~$ sudo vi /etc/mysql/mariadb.conf.d/50-server.cnf 

Comment out bind-address

[mariadbd]

# bind-address            = 127.0.0.1
# skip-ssl        # workaround for ERROR 2026 (HY000)

Restart the MariaDB service

user01@css:~$ sudo systemctl restart mysql

Check the MariaDB port again

user01@css:~$ netstat -nao | grep 3306
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp6       0      0 :::3306                 :::*                    LISTEN      off (0.00/0/0)
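
A quick way to confirm the port is actually reachable, using only the Python standard library (the host and port are illustrative; run it from the client machine):

```shell
python3 - <<'EOF'
import socket
# Try a TCP connect to the MariaDB port; prints True when something is listening
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(2)
reachable = s.connect_ex(("127.0.0.1", 3306)) == 0
s.close()
print("3306 reachable:", reachable)
EOF
```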

Install xrdp

[bluesanta@localhost ~]$ sudo dnf -y install epel-release.noarch
[bluesanta@localhost ~]$ sudo dnf -y install xrdp

Enable and start the xrdp service

[bluesanta@localhost ~]$ sudo systemctl enable xrdp
Created symlink /etc/systemd/system/multi-user.target.wants/xrdp.service → /usr/lib/systemd/system/xrdp.service.
[bluesanta@localhost ~]$ sudo systemctl restart xrdp

Open the firewall for xrdp

[bluesanta@localhost ~]$ sudo firewall-cmd --add-port=3389/tcp --permanent 
success
[bluesanta@localhost ~]$ sudo firewall-cmd --reload
success
[bluesanta@localhost ~]$

Install Node.js

[bluesanta@localhost ~]$ cd ~
[bluesanta@localhost ~]$ curl -sL https://rpm.nodesource.com/setup_18.x -o nodesource_setup.sh
[bluesanta@localhost ~]$ sudo bash nodesource_setup.sh
   
2025-04-21 16:22:50 - Repository is configured and updated.
2025-04-21 16:22:50 - You can use N|solid Runtime as a node.js alternative
2025-04-21 16:22:50 - To install N|solid Runtime, run: dnf install nsolid -y
2025-04-21 16:22:50 - Run 'dnf install nodejs -y' to complete the installation.
[bluesanta@localhost ~]$ sudo dnf install nodejs openssl -y
[bluesanta@localhost ~]$ node --version
v18.20.8
[bluesanta@localhost ~]$ sudo npm install -g configurable-http-proxy

Install JupyterHub

[bluesanta@localhost ~]$ su - 
[root@localhost ~]# pip3.11 install sudospawner

Create a JupyterHub admin account

[bluesanta@localhost ~]$ sudo groupadd jupyterhub
[bluesanta@localhost ~]$ sudo useradd -g jupyterhub -s /bin/bash -m jupyterhubapp

Log in as the JupyterHub admin account

[bluesanta@localhost ~]$ su - jupyterhubapp
Password:
[jupyterhubapp@localhost ~]$ 

Run JupyterHub

[jupyterhubapp@localhost ~]$ jupyterhub
[I 2025-04-21 16:59:44.638 JupyterHub app:3354] Running JupyterHub version 5.3.0
[I 2025-04-21 16:59:44.638 JupyterHub app:3384] Using Authenticator: jupyterhub.auth.PAMAuthenticator-5.3.0
[I 2025-04-21 16:59:44.638 JupyterHub app:3384] Using Spawner: jupyterhub.spawner.LocalProcessSpawner-5.3.0
[I 2025-04-21 16:59:44.638 JupyterHub app:3384] Using Proxy: jupyterhub.proxy.ConfigurableHTTPProxy-5.3.0
   
[I 2025-04-21 16:59:44.987 JupyterHub proxy:477] Adding route for Hub: / => http://127.0.0.1:8081
16:59:44.988 [ConfigProxy] info: Adding route / -> http://127.0.0.1:8081
16:59:44.988 [ConfigProxy] info: Route added / -> http://127.0.0.1:8081
16:59:44.989 [ConfigProxy] info: 201 POST /api/routes/ 
[I 2025-04-21 16:59:44.989 JupyterHub app:3778] JupyterHub is now running at http://:8000

Generate jupyterhub_config.py

[jupyterhubapp@localhost ~]$ cd /etc/jupyterhub/
[jupyterhubapp@localhost jupyterhub]$ jupyterhub --generate-config
Writing default config to: jupyterhub_config.py
[jupyterhubapp@localhost jupyterhub]$ ls
jupyterhub_config.py

Create a JupyterHub user account

[bluesanta@localhost ~]$ sudo useradd -g jupyterhub -s /bin/bash -m user01
[bluesanta@localhost ~]$ sudo passwd user01
Changing password for user user01.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Edit jupyterhub_config.py

[bluesanta@localhost ~]$ sudo vi /etc/jupyterhub/jupyterhub_config.py
c.JupyterHub.hub_connect_ip = '0.0.0.0'
c.JupyterHub.port = 8000

c.Authenticator.allowed_users = {'user01'}  # 'whitelist' was renamed to 'allowed_users' in JupyterHub 1.2
c.Authenticator.allow_all = True

c.PAMAuthenticator.admin_users = set({'jupyterhubapp'})
# c.PAMAuthenticator.admin_groups = set({'jupyterhub'})
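
`sudospawner` was installed earlier, but the config above still uses the default LocalProcessSpawner. Wiring it in is a two-line change (plus sudoers rules for the `jupyterhub` group, not shown here) — a sketch under those assumptions, not taken from the original config:

```python
# In jupyterhub_config.py -- only if spawning via the sudospawner helper
c.JupyterHub.spawner_class = 'sudospawner.SudoSpawner'
c.SudoSpawner.sudospawner_path = '/usr/local/bin/sudospawner'  # path illustrative
```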

Register JupyterHub as a service

Create the jupyterhub.service file

[bluesanta@localhost ~]$ sudo vi /etc/systemd/system/jupyterhub.service
[Unit]
Description=JupyterHub
After=syslog.target network.target

[Service]
Type=simple
PIDFile=/run/jupyter.pid
User=root
WorkingDirectory=/etc/jupyterhub
ExecStart=/usr/local/bin/jupyterhub -f /etc/jupyterhub/jupyterhub_config.py

[Install]
WantedBy=multi-user.target

Enable the JupyterHub service

[bluesanta@localhost ~]$ sudo systemctl enable jupyterhub.service 
Created symlink /etc/systemd/system/multi-user.target.wants/jupyterhub.service → /etc/systemd/system/jupyterhub.service.

Reload the systemd daemon

[bluesanta@localhost ~]$ sudo systemctl daemon-reload

Open port 8000

[bluesanta@localhost ~]$ sudo firewall-cmd --permanent --zone=public --add-port=8000/tcp
success
[bluesanta@localhost ~]$ sudo firewall-cmd --reload
success