

Checking the JetPack version

bluesanta@ubuntu:~$ sudo apt show nvidia-jetpack
Package: nvidia-jetpack
Version: 6.2.2+b24
Priority: standard
Section: metapackages
Source: nvidia-jetpack (6.2.2)
Maintainer: NVIDIA Corporation
Installed-Size: 199 kB
Depends: nvidia-jetpack-runtime (= 6.2.2+b24), nvidia-jetpack-dev (= 6.2.2+b24)
Homepage: http://developer.nvidia.com/jetson
Download-Size: 29.3 kB
APT-Sources: https://repo.download.nvidia.com/jetson/common r36.5/main arm64 Packages
Description: NVIDIA Jetpack Meta Package

Checking the CUDA version

bluesanta@ubuntu:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Wed_Aug_14_10:14:07_PDT_2024
Cuda compilation tools, release 12.6, V12.6.68
Build cuda_12.6.r12.6/compiler.34714021_0
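When later steps need the CUDA version programmatically (for example to pick the matching wheel index), the release number can be parsed out of the nvcc banner. A small sketch; it falls back to the sample line above so it also runs where nvcc is not installed:

```shell
# Grab the "release X.Y" field from the nvcc banner; the saved sample line above
# is used as a fallback when nvcc is absent.
line=$(nvcc --version 2>/dev/null | grep release || echo 'Cuda compilation tools, release 12.6, V12.6.68')
cuda_release=$(printf '%s\n' "$line" | sed -n 's/.*release \([0-9.]*\),.*/\1/p')
echo "$cuda_release"
```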

Creating a virtual environment

bluesanta@bluesanta-desktop:~$ cd llm
bluesanta@bluesanta-desktop:~/llm$ python -m venv .venv
bluesanta@bluesanta-desktop:~/llm$ source .venv/bin/activate
(.venv) bluesanta@bluesanta-desktop:~/llm$ 

Downloading the models (Hugging Face)

(.venv) bluesanta@ubuntu:~/llm$ pip install -U "huggingface_hub[cli]"
(.venv) bluesanta@ubuntu:~/llm$ hf download google/gemma-4-26B-A4B-it --local-dir ~/llm/models/gemma-4-original
(.venv) bluesanta@ubuntu:~/llm$ hf download RedHatAI/gemma-4-26B-A4B-it-NVFP4 --local-dir ~/llm/models/gemma-4-26B-A4B-it-NVFP4
(.venv) bluesanta@ubuntu:~/llm$ hf download nvidia/Gemma-4-31B-IT-NVFP4 --local-dir ~/llm/models/gemma-4-31b-it-nvfp4
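These checkpoints total tens of gigabytes, so a rough free-space guard before downloading can save a failed transfer. A sketch; the 60 GB threshold is an assumption, not a figure from the model cards:

```shell
# Rough free-space check under $HOME before pulling multi-GB checkpoints.
required_gb=60
avail_gb=$(df -P "$HOME" | awk 'NR==2 {print int($4/1048576)}')
avail_gb=${avail_gb:-0}
if [ "$avail_gb" -lt "$required_gb" ]; then
  echo "only ${avail_gb} GB free under HOME; need about ${required_gb} GB" >&2
fi
```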

Installing PyTorch

Removing any existing PyTorch

bluesanta@ubuntu:~/llm$ pip uninstall -y torch torchvision torchaudio

Downloading PyTorch

https://pypi.jetson-ai-lab.io/jp6/cu126

 


Installing the PyTorch wheel

(.venv) bluesanta@bluesanta-desktop:~/llm/download$ pip install torch-2.11.0-cp310-cp310-linux_aarch64.whl
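The wheel above is tagged cp310 (CPython 3.10); pip refuses a wheel whose tag does not match the environment's interpreter. A quick way to print the current interpreter's tag before installing:

```shell
# Print the CPython ABI-style tag (e.g. cp310) of the interpreter on PATH.
py_tag="cp$(python3 -c 'import sys; print("%d%d" % sys.version_info[:2])')"
echo "$py_tag"
```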

Verifying PyTorch

(.venv) bluesanta@bluesanta-desktop:~/llm/download$ python -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
True
Orin

Installing CMake

bluesanta@ubuntu:~/llm/vllm$ wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | sudo tee /usr/share/keyrings/kitware-archive-keyring.gpg >/dev/null
bluesanta@ubuntu:~/llm/vllm$ echo 'deb [signed-by=/usr/share/keyrings/kitware-archive-keyring.gpg] https://apt.kitware.com/ubuntu/ jammy main' | sudo tee /etc/apt/sources.list.d/kitware.list >/dev/null
bluesanta@ubuntu:~/llm/vllm$ sudo apt update
bluesanta@ubuntu:~/llm/vllm$ sudo apt install -y cmake
bluesanta@ubuntu:~/llm/vllm$ cmake --version
cmake version 4.3.1
 
CMake suite maintained and supported by Kitware (kitware.com/cmake).

Installing vLLM

(.venv) bluesanta@bluesanta-desktop:~/llm/vllm$ pip uninstall -y vllm
(.venv) bluesanta@bluesanta-desktop:~/llm/vllm$ git clone https://github.com/vllm-project/vllm.git
(.venv) bluesanta@bluesanta-desktop:~/llm/vllm$ cd vllm
(.venv) bluesanta@bluesanta-desktop:~/llm/vllm$ pip install setuptools_scm
(.venv) bluesanta@bluesanta-desktop:~/llm/vllm$ pip install --upgrade pip setuptools setuptools-scm wheel
(.venv) bluesanta@bluesanta-desktop:~/llm/vllm$ sudo apt install -y ninja-build 
(.venv) bluesanta@bluesanta-desktop:~/llm/vllm$ MAX_JOBS=$(nproc) pip install -e .
 
Installing vllm script to /home/bluesanta/llm/.venv/bin

Running gemma-4-31b-it-nvfp4 (the --gpu-memory-utilization 0.8 option is needed because of limited memory)

Fails to run with an error: (EngineCore pid=2446) ERROR 04-19 16:35:23 [core.py:1132] ValueError("type fp8e4nv not supported in this architecture. The supported fp8 dtypes are ('fp8e4b15', 'fp8e5')")

fp8e4nv is only supported on newer GPU architectures such as the H100 (Hopper); Orin's GPU is not among them.

bluesanta@ubuntu:~/llm$ vllm serve ~/llm/models/gemma-4-31b-it-nvfp4 --quantization modelopt --tensor-parallel-size 1 --gpu-memory-utilization 0.8
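When a model does start serving successfully, vllm serve exposes an OpenAI-compatible HTTP API (port 8000 by default). A guarded example request; the model name mirrors the directory above and is illustrative:

```shell
# Build the request body; the model field must match the name vllm serve registered.
payload='{"model":"gemma-4-31b-it-nvfp4","messages":[{"role":"user","content":"Hello"}]}'
# Attempt the call only if curl exists, and fail soft when the server is down.
if command -v curl >/dev/null; then
  curl -s --max-time 5 http://localhost:8000/v1/chat/completions \
    -H 'Content-Type: application/json' -d "$payload" || echo "vLLM server not reachable"
fi
```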


Installing torchvision, torchaudio, and numpy

(.venv) bluesanta@ubuntu:~/llm/download$ pip install torchvision-0.26.0-cp310-cp310-linux_aarch64.whl
(.venv) bluesanta@ubuntu:~/llm/download$ pip install torchaudio-2.10.0-cp310-cp310-linux_aarch64.whl
(.venv) bluesanta@ubuntu:~/llm/download$ pip install "numpy<2.0"


Checking the CUDA version (after installing cuda-toolkit)

bluesanta@ubuntu:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Wed_Aug_14_10:14:07_PDT_2024
Cuda compilation tools, release 12.6, V12.6.68
Build cuda_12.6.r12.6/compiler.34714021_0

Installing libcudss0 (if needed after running torch examples)

bluesanta@ubuntu:~$ wget https://developer.download.nvidia.com/compute/cudss/0.6.0/local_installers/cudss-local-tegra-repo-ubuntu2204-0.6.0_0.6.0-1_arm64.deb
bluesanta@ubuntu:~$ sudo dpkg -i cudss-local-tegra-repo-ubuntu2204-0.6.0_0.6.0-1_arm64.deb
bluesanta@ubuntu:~$ sudo cp /var/cudss-local-tegra-repo-ubuntu2204-0.6.0/cudss-*-keyring.gpg /usr/share/keyrings/
bluesanta@ubuntu:~$ sudo apt update
bluesanta@ubuntu:~$ sudo apt install -y libcudss0-cuda-12 libcudss0-dev-cuda-12
bluesanta@ubuntu:~$ find /usr -name "libcudss.so.0" 2>/dev/null
/usr/lib/aarch64-linux-gnu/libcudss/12/libcudss.so.0
bluesanta@ubuntu:~$ export LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu/libcudss/12:/usr/local/cuda-12.6/lib64:$LD_LIBRARY_PATH
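The export above only lasts for the current shell. To make it survive new sessions, the same line can be appended to ~/.bashrc; a sketch that avoids adding it twice:

```shell
# Idempotent append: single quotes keep $LD_LIBRARY_PATH literal in the file,
# so it expands when ~/.bashrc is sourced, not now.
line='export LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu/libcudss/12:/usr/local/cuda-12.6/lib64:$LD_LIBRARY_PATH'
touch ~/.bashrc
grep -qF 'libcudss/12' ~/.bashrc || echo "$line" >> ~/.bashrc
```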

Creating a virtual environment

bluesanta@ubuntu:~$ mkdir llm
bluesanta@ubuntu:~$ cd llm
bluesanta@ubuntu:~/llm$ python -m venv .venv
bluesanta@ubuntu:~/llm$ source .venv/bin/activate
(.venv) bluesanta@ubuntu:~/llm$ 

Installing the latest CMake

(.venv) bluesanta@ubuntu:~/llm$ wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | sudo tee /usr/share/keyrings/kitware-archive-keyring.gpg >/dev/null
(.venv) bluesanta@ubuntu:~/llm$ echo 'deb [signed-by=/usr/share/keyrings/kitware-archive-keyring.gpg] https://apt.kitware.com/ubuntu/ jammy main' | sudo tee /etc/apt/sources.list.d/kitware.list >/dev/null
(.venv) bluesanta@ubuntu:~/llm$ sudo apt update
(.venv) bluesanta@ubuntu:~/llm$ sudo apt install -y cmake ninja-build 
(.venv) bluesanta@ubuntu:~/llm$ cmake --version
cmake version 4.3.1
 
CMake suite maintained and supported by Kitware (kitware.com/cmake).

Installing cuda-toolkit

(.venv) bluesanta@ubuntu:~/llm/download$ sudo apt install cuda-toolkit-12-6

Adding environment variables (append to ~/.bashrc)

# 1. Set the CUDA home directory
export CUDA_HOME=/usr/local/cuda-12.6

# 2. Add include paths so the compiler can find the CUDA headers (most important)
export CPATH=$CUDA_HOME/targets/aarch64-linux/include:$CPATH
export TRITON_PTXAS_PATH=/usr/local/cuda-12.6/bin/ptxas

# 3. Add library paths
export LD_LIBRARY_PATH=$CUDA_HOME/targets/aarch64-linux/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$CUDA_HOME/targets/aarch64-linux/lib:$LIBRARY_PATH

# 4. Add the CUDA binaries to PATH
export PATH=$CUDA_HOME/bin:$PATH
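A minimal illustration of what the PATH export above does: prepending puts the CUDA tools first in lookup order, so they win over any other copy on the system.

```shell
# Prepend the CUDA bin directory and show that it is now the first PATH entry.
CUDA_HOME=/usr/local/cuda-12.6
PATH=$CUDA_HOME/bin:$PATH
echo "${PATH%%:*}"
```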

Installing PyTorch

Removing any existing PyTorch

bluesanta@ubuntu:~/llm$ pip uninstall -y torch torchvision torchaudio

Downloading PyTorch

https://pypi.jetson-ai-lab.io/jp6/cu126

 


Installing the PyTorch wheel

(.venv) bluesanta@ubuntu:~/llm/download$ pip install torch-2.11.0-cp310-cp310-linux_aarch64.whl
(.venv) bluesanta@ubuntu:~/llm/download$ pip install "numpy<2.0"

Verifying PyTorch

(.venv) bluesanta@ubuntu:~/llm/download$ python -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
True
Orin

Installing vLLM

(.venv) bluesanta@ubuntu:~/llm$ pip uninstall -y vllm
(.venv) bluesanta@ubuntu:~/llm$ git clone https://github.com/vllm-project/vllm.git
(.venv) bluesanta@ubuntu:~/llm$ cd vllm
(.venv) bluesanta@ubuntu:~/llm/vllm$ pip install setuptools_scm
(.venv) bluesanta@ubuntu:~/llm/vllm$ pip install --upgrade pip setuptools setuptools-scm wheel
(.venv) bluesanta@ubuntu:~/llm/vllm$ sudo apt install -y ninja-build
(.venv) bluesanta@ubuntu:~/llm/vllm$ MAX_JOBS=$(nproc) pip install -e .
 
Installing vllm script to /home/bluesanta/llm/.venv/bin

Verifying the vLLM installation

(.venv) bluesanta@ubuntu:~/llm/download$ vllm --version
0.19.2rc1.dev7+g38907e439.cu126


Installing the Ollama server

bluesanta@bluesanta-desktop:~$ curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
>>> Downloading ollama-linux-arm64.tar.zst
######################################################################## 100.0%
>>> Downloading ollama-linux-arm64-jetpack6.tar.zst
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA JetPack ready.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.

Running Ollama

bluesanta@bluesanta-desktop:~$ ollama
Ollama 0.20.6
 
▸ Chat with a model
    Start an interactive chat with a model
 
  Launch OpenClaw (install)
    Personal AI with 100+ skills
 
  Launch Claude Code (not installed)
    Anthropic's coding tool with subagents
 
  Launch OpenCode (not installed)
    Anomaly's open-source coding agent
 
  More...
    Show additional integrations
 
 
↑/↓ navigate • enter launch • → configure • esc quit

Running gemma4

bluesanta@bluesanta-desktop:~$ ollama run gemma4:31b-it-q8_0
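Besides the interactive CLI, the install log above shows the Ollama API listening on 127.0.0.1:11434; a guarded one-shot generation request against it (the model name matches the run command above):

```shell
# Non-streaming generation via Ollama's local HTTP API.
payload='{"model":"gemma4:31b-it-q8_0","prompt":"Hello","stream":false}'
# Only attempt the call if curl exists, and fail soft when the server is down.
if command -v curl >/dev/null; then
  curl -s --max-time 5 http://127.0.0.1:11434/api/generate -d "$payload" || echo "ollama not reachable"
fi
```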

Performance tuning

Enabling the maximum-performance (MAXN) mode

bluesanta@bluesanta-desktop:~$ sudo nvpmodel -m 0
NVPM WARN: Golden image context is already created
NVPM WARN: Reboot required for changing to this power mode: 0
NVPM WARN: DO YOU WANT TO REBOOT NOW? enter YES/yes to confirm:
yes

Favoring physical memory over the swapfile

bluesanta@bluesanta-desktop:~$ sudo sysctl -w vm.swappiness=10
vm.swappiness = 10
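sysctl -w only lasts until the next reboot. To make the setting persistent, a drop-in file can be used (the path follows the usual sysctl.d convention):

```
# /etc/sysctl.d/99-swappiness.conf
vm.swappiness=10
```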

Ollama tuning environment variables

bluesanta@bluesanta-desktop:~$ vi ~/.bashrc

Append to ~/.bashrc

export OLLAMA_NUM_PARALLEL=1
export OLLAMA_MAX_LOADED_MODELS=1
export OLLAMA_NUM_THREADS=12
export OLLAMA_KEEP_ALIVE=5m
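Note that the install log above created Ollama as a systemd service, so variables exported in ~/.bashrc only affect instances launched from a shell, not the background server. To apply them to the service, a drop-in override can be used (a sketch; create it via sudo systemctl edit ollama, then sudo systemctl restart ollama):

```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_NUM_PARALLEL=1"
Environment="OLLAMA_MAX_LOADED_MODELS=1"
Environment="OLLAMA_NUM_THREADS=12"
Environment="OLLAMA_KEEP_ALIVE=5m"
```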

Downloading Maven

bluesanta@bluesanta-AI-Series:~$ wget https://dlcdn.apache.org/maven/maven-3/3.9.12/binaries/apache-maven-3.9.12-bin.tar.gz
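Apache publishes a .sha512 file next to each tarball, and sha512sum -c can verify the download before extraction. The pattern is demonstrated on a stand-in file here to avoid a second download:

```shell
# Create a stand-in "tarball", record its SHA-512, and verify it --
# the same three-step pattern applies to the real apache-maven tarball.
printf 'demo' > demo.tar.gz
sha512sum demo.tar.gz > demo.tar.gz.sha512
sha512sum -c demo.tar.gz.sha512
```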

Extracting and moving it into place

bluesanta@bluesanta-AI-Series:~$ tar xvf apache-maven-3.9.12-bin.tar.gz
bluesanta@bluesanta-AI-Series:~$ sudo mv apache-maven-3.9.12 /opt

Setting environment variables

Edit ~/.bashrc

bluesanta@bluesanta-AI-Series:/opt/apache-maven-3.9.12$ vi ~/.bashrc

Append the following

export MAVEN_HOME=/opt/apache-maven-3.9.12
export PATH=$PATH:$MAVEN_HOME/bin




Checking the disk and partition layout

radxa@radxa-dragon-q6a:~$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
mmcblk1     179:0    0  29.1G  0 disk 
├─mmcblk1p1 179:1    0    16M  0 part /config
├─mmcblk1p2 179:2    0     1G  0 part /boot/efi
└─mmcblk1p3 179:3    0  28.1G  0 part /
zram0       252:0    0   5.6G  0 disk [SWAP]
nvme0n1     259:0    0 465.8G  0 disk 

Downloading the OS image

radxa@radxa-dragon-q6a:~$ wget https://github.com/radxa-build/radxa-dragon-q6a/releases/download/rsdk-r2/radxa-dragon-q6a_noble_gnome_r2.output_512.img.xz

Decompressing the image

radxa@radxa-dragon-q6a:~$ unxz radxa-dragon-q6a_noble_gnome_r2.output_512.img.xz
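unxz validates the archive's built-in integrity checks while decompressing; xz -t does the same check without extracting, which is handy before committing to a long dd. Demonstrated on a stand-in file (substitute the downloaded .img.xz in practice):

```shell
# Make a stand-in .xz archive, then test it without extracting.
printf 'image-bytes' > demo.img
xz -kf demo.img            # compress, keeping the original
xz -t demo.img.xz && echo "archive OK"
```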

Installing the OS (writing the image)

radxa@radxa-dragon-q6a:~$ sudo dd if=radxa-dragon-q6a_noble_gnome_r2.output_512.img of=/dev/nvme0n1 bs=4M status=progress
5268045824 bytes (5.3 GB, 4.9 GiB) copied, 3 s, 1.8 GB/s
1431+1 records in
1431+1 records out
6003352576 bytes (6.0 GB, 5.6 GiB) copied, 5.56137 s, 1.1 GB/s

Checking the disk and partition layout after installation

radxa@radxa-dragon-q6a:~$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
mmcblk1     179:0    0  29.1G  0 disk 
├─mmcblk1p1 179:1    0    16M  0 part /config
├─mmcblk1p2 179:2    0     1G  0 part /boot/efi
└─mmcblk1p3 179:3    0  28.1G  0 part /
zram0       252:0    0   5.6G  0 disk [SWAP]
nvme0n1     259:0    0 465.8G  0 disk 
├─nvme0n1p1 259:1    0    16M  0 part 
├─nvme0n1p2 259:2    0     1G  0 part 
└─nvme0n1p3 259:3    0   4.6G  0 part 

Remove the SD card, then reboot

Default login: radxa / radxa


Installing the CIFS utilities

bluesanta@bluesanta-desktop:~$ sudo apt update
bluesanta@bluesanta-desktop:~$ sudo apt install cifs-utils


One-off manual mount

Creating a mount point

bluesanta@bluesanta-desktop:~$ sudo mkdir -p /mnt/samba

Mounting the share

bluesanta@bluesanta-desktop:~$ sudo mount -t cifs //192.168.0.223/samba_root /mnt/samba -o username=bluesanta,password=<password>,iocharset=utf8
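To remount automatically at boot instead of by hand, the same options can go into /etc/fstab, with the password kept out of the world-readable file via a credentials file. A sketch; the paths and uid/gid values are assumptions to adjust:

```
# ~/.smbcredentials (chmod 600)
username=bluesanta
password=<password>

# /etc/fstab entry (values mirror the manual mount above)
//192.168.0.223/samba_root /mnt/samba cifs credentials=/home/bluesanta/.smbcredentials,iocharset=utf8 0 0
```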



Installing the xrdp remote desktop

bluesanta@bluesanta-AI-Series:~$ sudo apt install xrdp

Making the top panel and dock visible

bluesanta@bluesanta-AI-Series:~/xdna-driver/build$ vi ~/.xsessionrc

Append the following

export GNOME_SHELL_SESSION_MODE=ubuntu
export XDG_CURRENT_DESKTOP=ubuntu:GNOME
export XDG_CONFIG_DIRS=/etc/xdg/xdg-ubuntu:/etc/xdg

Specifying the GNOME session

bluesanta@bluesanta-AI-Series:~$ echo "gnome-session" > ~/.xsession

Setting up the default GNOME top bar (top-right menu)

bluesanta@bluesanta-AI-Series:~$ sudo apt install gnome-shell-extension-manager

Enabling GPU (Radeon 890M) acceleration inside xrdp

bluesanta@bluesanta-AI-Series:~$ sudo adduser xrdp video
info: Adding user `xrdp' to group `video' ...
bluesanta@bluesanta-AI-Series:~$ sudo adduser xrdp render
info: Adding user `xrdp' to group `render' ...
bluesanta@bluesanta-AI-Series:~$ sudo systemctl restart xrdp

Installing TurboVNC and VirtualGL

bluesanta@bluesanta-AI-Series:~$ wget https://github.com/TurboVNC/turbovnc/releases/download/3.3/turbovnc_3.3_amd64.deb
bluesanta@bluesanta-AI-Series:~$ sudo dpkg -i turbovnc_3.3_amd64.deb
bluesanta@bluesanta-AI-Series:~$ wget https://github.com/VirtualGL/virtualgl/releases/download/3.1.4/VirtualGL-3.1.4.x86_64.rpm
bluesanta@bluesanta-AI-Series:~$ sudo alien -k VirtualGL-3.1.4.x86_64.rpm
bluesanta@bluesanta-AI-Series:~$ sudo dpkg -i virtualgl_3.1.4-20251007_amd64.deb 
bluesanta@bluesanta-AI-Series:~$ /opt/VirtualGL/bin/vglrun --version
VirtualGL v3.1.4 (Build 20251007)
bluesanta@bluesanta-AI-Series:~$ sudo reboot

Configuring VirtualGL

bluesanta@bluesanta-AI-Series:~$ sudo /opt/VirtualGL/bin/vglserver_config
 
1) Configure server for use with VirtualGL (GLX + EGL back ends)
2) Unconfigure server for use with VirtualGL (GLX + EGL back ends)
3) Configure server for use with VirtualGL (EGL back end only)
4) Unconfigure server for use with VirtualGL (EGL back end only)
X) Exit
 
Choose:
1
 
Restrict 3D X server access to vglusers group (recommended)?
[Y/n]
n
 
Restrict framebuffer device access to vglusers group (recommended)?
[Y/n]
y
 
Disable XTEST extension (recommended)?
[Y/n]
y
... Creating /etc/modprobe.d/virtualgl.conf to set requested permissions for
    /dev/nvidia* ...
... Granting write permission to /dev/dri/card0 for vglusers group ...
chown: invalid group: `root:vglusers'
... Granting write permission to /dev/dri/renderD128 for vglusers group ...
chown: invalid group: `root:vglusers'
... Modifying /etc/X11/xorg.conf.d/99-virtualgl-dri.conf to enable DRI
    permissions for vglusers group ...
... /etc/gdm3/Init/Default has been saved as /etc/gdm3/Init/Default.orig.vgl ...
... Adding xhost +LOCAL: to /etc/gdm3/Init/Default script ...
... Creating /usr/share/gdm/greeter/autostart/virtualgl.desktop ...

Done. You must restart the display manager for the changes to take effect.
 
 
1) Configure server for use with VirtualGL (GLX + EGL back ends)
2) Unconfigure server for use with VirtualGL (GLX + EGL back ends)
3) Configure server for use with VirtualGL (EGL back end only)
4) Unconfigure server for use with VirtualGL (EGL back end only)
X) Exit
 
Choose:
x
bluesanta@bluesanta-AI-Series:~$ sudo reboot
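The chown "invalid group" messages in the transcript above appear because the vglusers group was never created. Creating it and adding the login user before rerunning vglserver_config should let the device permissions apply; a sketch, to run as root (or with sudo):

```shell
# Create the vglusers group if missing, then add the current user to it.
# Group membership takes effect at the next login.
getent group vglusers >/dev/null || groupadd vglusers
usermod -aG vglusers "$(id -un)"
```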

Starting the TurboVNC server

bluesanta@bluesanta-AI-Series:~$ /opt/TurboVNC/bin/vncserver :1
 
You will require a password to access your desktops.
 
Password: 
Warning: password truncated to the length of 8.
Verify:   
Would you like to enter a view-only password (y/n)? y
Password: 
Warning: password truncated to the length of 8.
Verify:   
 
Desktop 'TurboVNC: bluesanta-AI-Series:1 (bluesanta)' started on display bluesanta-AI-Series:1
 
Starting applications specified in /opt/TurboVNC/bin/xstartup.turbovnc
Log file is /home/bluesanta/.vnc/bluesanta-AI-Series:1.log
 
bluesanta@bluesanta-AI-Series:~$ echo 'export PATH=/opt/TurboVNC/bin:$PATH' >> ~/.bashrc
bluesanta@bluesanta-AI-Series:~$ source ~/.bashrc
bluesanta@bluesanta-AI-Series:~$ vncserver :1
A VNC server is already running as :1
