
Install OpenJDK 17

bluesanta@bluesanta-AI-Series:~/Application/Android$ sudo apt install openjdk-17-jdk

Set JAVA_HOME

Check the JDK installation path

bluesanta@bluesanta-AI-Series:~/Application/Android$ readlink -f $(which java)
/usr/lib/jvm/java-17-openjdk-amd64/bin/java
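The logic of this step can be sketched in Python (illustrative only): JAVA_HOME is simply the resolved `java` binary path with the trailing `/bin/java` stripped.

```python
from pathlib import PurePosixPath

def java_home_from_binary(resolved_java: str) -> str:
    # Given the fully resolved path of the `java` binary (the output of
    # `readlink -f $(which java)`), strip /bin/java to get JAVA_HOME.
    return str(PurePosixPath(resolved_java).parent.parent)

print(java_home_from_binary("/usr/lib/jvm/java-17-openjdk-amd64/bin/java"))
# -> /usr/lib/jvm/java-17-openjdk-amd64
```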

Edit the environment variable file

bluesanta@bluesanta-AI-Series:~/Application/Android$ vi ~/.bashrc

Add the following at the end of the file

export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
export PATH=$PATH:$JAVA_HOME/bin

Install 32-bit libraries

On 64-bit Ubuntu, some Android SDK tools (notably the emulator and older build tools) may require 32-bit compatibility libraries, so install them in advance.

Enable the i386 architecture

bluesanta@bluesanta-AI-Series:~/Application/Android$ sudo dpkg --add-architecture i386

Install the 32-bit libraries

bluesanta@bluesanta-AI-Series:~/Application/Android$ sudo apt update
bluesanta@bluesanta-AI-Series:~/Application/Android$ sudo apt-get install libc6:i386 libncurses6:i386 libstdc++6:i386 lib32z1 libbz2-1.0:i386

Accept licenses and install required packages

Accept the licenses

bluesanta@bluesanta-AI-Series:~/Application/Android/sdk-tools-windows.old/cmdline-tools/latest/bin$ ./sdkmanager --licenses

Install required packages

bluesanta@bluesanta-AI-Series:~/Application/Android/sdk-tools-windows.old/cmdline-tools/latest/bin$ ./sdkmanager "platform-tools" "platforms;android-35" "build-tools;35.0.0" --sdk_root=/home/bluesanta/Application/Android/sdk-tools-windows

Fetch the PyTorch source

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama$ sudo apt install -y python3-pip cmake git ninja-build
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama$ pip install --upgrade setuptools wheel
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama$ git clone --recursive https://github.com/pytorch/pytorch
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama$ cd pytorch
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ git checkout v2.10.0
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ git submodule update --init --recursive

Set build environment variables for the HX 390 (gfx1150)

Set the ROCm path

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ export ROCM_PATH=/opt/rocm

Specify the HX 390's Radeon 890M architecture (gfx1150) and enable ROCm

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ export PYTORCH_ROCM_ARCH=gfx1150
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ export USE_ROCM=1

Disable the CUDA backend (ROCm-only build)

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ export USE_CUDA=0

Optimize build speed (the HX 390 has many cores, so use them all)

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ export MAX_JOBS=$(nproc)

# (Optional) when using OpenBLAS or similar instead of MKL

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ export USE_MKLDNN=1

Install dependencies

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ pip install -r requirements.txt

Manually run the "Hipify" conversion on the source code

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ python tools/amd_build/build_amd.py

Build

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ python setup.py bdist_wheel

Install

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ ls dist/torch-*.whl
dist/torch-2.10.0a0+git449b176-cp312-cp312-linux_x86_64.whl
(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama/pytorch$ pip install dist/torch-*.whl

Remove the existing CMake

bluesanta@bluesanta-desktop:~$ sudo apt remove --purge cmake

Install build-related packages

bluesanta@bluesanta-desktop:~$ sudo apt install build-essential libssl-dev

Download and install CMake 3.26.4

bluesanta@bluesanta-desktop:~$ wget https://github.com/Kitware/CMake/releases/download/v3.26.4/cmake-3.26.4-linux-x86_64.sh
bluesanta@bluesanta-desktop:~$ sudo sh cmake-3.26.4-linux-x86_64.sh --prefix=/usr/local --exclude-subdir

Verify the installation

bluesanta@bluesanta-desktop:~$ cmake --version
cmake version 3.26.4
 
CMake suite maintained and supported by Kitware (kitware.com/cmake).

ApiApplication

package com.bx.api;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class})
public class ApiApplication {

	public static void main(String[] args) {
		SpringApplication.run(ApiApplication.class, args);
	}
}

application.properties

# RabbitMQ (RK3588 IP)
spring.rabbitmq.host=192.168.0.24
spring.rabbitmq.port=5672
spring.rabbitmq.username=admin
spring.rabbitmq.password=password

The key to Topic exchanges: wildcards

The defining feature of the Topic exchange is its dot-separated routing keys and two special wildcard characters:

  • * (asterisk): matches exactly one word. (e.g. *.orange.*)
  • # (hash): matches zero or more words. (e.g. lazy.#)
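These matching rules can be modeled with a short Python sketch (an illustrative re-implementation for understanding the semantics, not RabbitMQ's actual matching code):

```python
def topic_matches(pattern: str, routing_key: str) -> bool:
    """Model AMQP topic matching: '*' = exactly one word, '#' = zero or more words."""
    p, k = pattern.split("."), routing_key.split(".")

    def rec(i: int, j: int) -> bool:
        if i == len(p):                      # pattern consumed: key must be too
            return j == len(k)
        if p[i] == "#":                      # '#' absorbs zero or more words
            return any(rec(i + 1, j2) for j2 in range(j, len(k) + 1))
        if j == len(k):                      # key exhausted but pattern is not
            return False
        if p[i] == "*" or p[i] == k[j]:      # '*' or literal word match
            return rec(i + 1, j + 1)
        return False

    return rec(0, 0)

print(topic_matches("#.error", "seoul.error"))          # True
print(topic_matches("#.error", "seoul.info"))           # False
print(topic_matches("seoul.#", "seoul"))                # True ('#' matches zero words)
print(topic_matches("*.orange.*", "quick.orange.fox"))  # True
```

Note that "#.error" matches the bare key "error" as well, since "#" can match zero words before the dot.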

RabbitTopicConfig

To use the Topic pattern in Spring Boot, configure a TopicExchange and Bindings.

package com.bx.api.config;

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.amqp.support.converter.Jackson2JsonMessageConverter;
import org.springframework.amqp.support.converter.MessageConverter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitTopicConfig {

	// 1. Create a Topic-type Exchange
	@Bean
	public TopicExchange topicExchange() {
		return new TopicExchange("log.exchange");
	}

	// 2. Create the queue
	@Bean
	public Queue errorQueue() {
		return new Queue("error.queue");
	}

	// 3. Bind (route only error logs to this queue)
	@Bean
	public Binding bindingError(Queue errorQueue, TopicExchange topicExchange) {
		// Only messages matching the "#.error" pattern are routed to errorQueue
		return BindingBuilder.bind(errorQueue).to(topicExchange).with("#.error");
	}

	@Bean
	public MessageConverter jsonMessageConverter() {
		// This converter is required for object <-> JSON conversion.
		return new Jackson2JsonMessageConverter();
	}
}

TopicConfig

The Topic setup defines an Exchange, Queues, and Bindings.

package com.bx.api.config;

import org.springframework.amqp.core.*;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TopicConfig {

	public static final String EXCHANGE_NAME = "log.topic.exchange";
	public static final String ALL_LOG_QUEUE = "all.log.queue";
	public static final String ERROR_LOG_QUEUE = "error.log.queue";

	// 1. Declare the Topic Exchange
	@Bean
	public TopicExchange logExchange() {
		return new TopicExchange(EXCHANGE_NAME);
	}

	// 2. Declare the queues (all logs / errors only)
	@Bean
	public Queue allLogQueue() {
		return new Queue(ALL_LOG_QUEUE);
	}

	@Bean
	public Queue errorLogQueue() {
		return new Queue(ERROR_LOG_QUEUE);
	}

	// 3. Bindings (using wildcards)
	@Bean
	public Binding bindAll(Queue allLogQueue, TopicExchange logExchange) {
		// "seoul.#" -> collects every log produced in Seoul (info, warn, error, ...)
		return BindingBuilder.bind(allLogQueue).to(logExchange).with("seoul.#");
	}

	@Bean
	public Binding bindError(Queue errorLogQueue, TopicExchange logExchange) {
		// "#.error" -> collects only error logs, regardless of region
		return BindingBuilder.bind(errorLogQueue).to(logExchange).with("#.error");
	}
}

Sending messages (Producer): LogProducer

package com.bx.api.service;

import java.util.HashMap;
import java.util.Map;

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

import com.bx.api.config.TopicConfig;

import lombok.RequiredArgsConstructor;

@Service
@RequiredArgsConstructor
public class LogProducer {

	private final RabbitTemplate rabbitTemplate;

	public void sendLog(String location, String level, String message) {
		// Example routing keys: "seoul.info" or "busan.error"
		String routingKey = location + "." + level;

		Map<String, String> logData = new HashMap<>();
		logData.put("location", location);
		logData.put("level", level);
		logData.put("message", message);

		rabbitTemplate.convertAndSend(TopicConfig.EXCHANGE_NAME, routingKey, logData);
		System.out.println("Sent Log with Key [" + routingKey + "]: " + message);
	}
}

Receiving messages (Consumer): LogConsumer

package com.bx.api.service;

import java.util.Map;

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Service;

import com.bx.api.config.TopicConfig;

@Service
public class LogConsumer {

	// Handles all logs from the Seoul region
	@RabbitListener(queues = TopicConfig.ALL_LOG_QUEUE)
	public void consumeAllSeoulLog(Map<String, String> message) {
		System.out.println("[서울 통합 관제센터] 수신: " + message);
	}

	// Handles only error logs, from every region
	@RabbitListener(queues = TopicConfig.ERROR_LOG_QUEUE)
	public void consumeErrorLog(Map<String, String> message) {
		System.out.println("[긴급 에러 알람] 수신: " + message);
	}
}

Implement a Controller for sending log messages

package com.bx.api.controller;

import java.util.HashMap;
import java.util.Map;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import com.bx.api.service.LogProducer;

import lombok.RequiredArgsConstructor;

@RestController
@RequiredArgsConstructor
public class RabbitController {

	private final LogProducer logProducer;

	/**
	 * Receives an HTTP GET call and publishes a message to RabbitMQ.
	 * Example URL: http://localhost:8080/send?loc=seoul&lvl=info&msg=test_message
	 */
	@GetMapping("/send")
	public Map<String, Object> sendMessage(@RequestParam(value = "loc") String location,
			@RequestParam(value = "lvl") String level, @RequestParam(value = "msg") String message) {

		// Call the Producer method created earlier
		logProducer.sendLog(location, level, message);

		Map<String, Object> result = new HashMap<>();
		result.put("status", "success");
		result.put("routingKey", location + "." + level);
		result.put("payload", message);

		return result;
	}
}

Run

Sending a message

Example request URL

http://192.168.0.4:8080/send?loc=seoul&lvl=info&msg=test_message

Server log

Sent Log with Key [seoul.info]: test_message
[서울 통합 관제센터] 수신: {level=info, location=seoul, message=test_message}

Check the Linux environment

bluesanta@bluesanta-AI-Series:~/Application/Llama$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 24.04.3 LTS
Release:        24.04
Codename:       noble
bluesanta@bluesanta-AI-Series:~/Application/Llama$ dkms status
amdgpu/6.16.6-2255209.24.04, 6.14.0-37-generic, x86_64: installed
xrt-amdxdna/2.21.0, 6.14.0-37-generic, x86_64: installed

Create a virtual environment

bluesanta@bluesanta-AI-Series:~/Application/Llama$ python3 -m venv .venv
bluesanta@bluesanta-AI-Series:~/Application/Llama$ source .venv/bin/activate

Install PyTorch (for the HX 390 / gfx1150 architecture)

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama$ pip install torch torchvision torchaudio --index-url https://repo.amd.com/rocm/whl/gfx1150/

Verify the installation

gpu_check.py

import sys
import torch

# Print system and PyTorch information
print(f"Python Version: {sys.version.split()[0]}")
print(f"PyTorch Version: {torch.__version__}")
print(f"GPU 인식 성공: {torch.cuda.is_available()}")

if torch.cuda.is_available():
    print(f"사용 가능 기기: {torch.cuda.get_device_name(0)}")
else:
    print("현재 GPU를 사용할 수 없습니다. 드라이버나 ROCm 설정을 확인하세요.")
 
 
# Run an actual tensor operation on the GPU
if torch.cuda.is_available():
    x = torch.randn(1, 3).to("cuda")
    print("GPU 텐서 연산 테스트 완료:", x)

Run

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama$ python gpu_check.py
Python Version: 3.12.3
PyTorch Version: 2.9.1+rocm7.10.0
GPU 인식 성공: True
사용 가능 기기: AMD Radeon 890M Graphics
/home/bluesanta/Application/Llama/gpu_check.py:17: UserWarning: expandable_segments not supported on this platform (Triggered internally at /__w/rockrel/rockrel/external-builds/pytorch/pytorch/c10/hip/HIPAllocatorConfig.h:36.)
  x = torch.randn(1, 3).to("cuda")
GPU 텐서 연산 테스트 완료: tensor([[-0.5998, -1.3418, -0.5339]], device='cuda:0')
[W121 10:44:08.608317046 AllocatorConfig.cpp:28] Warning: PYTORCH_CUDA_ALLOC_CONF is deprecated, use PYTORCH_ALLOC_CONF instead (function operator())

Add the PYTORCH_ALLOC_CONF environment variable

Edit ~/.bashrc

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama$ vi ~/.bashrc

Contents

export PYTORCH_ALLOC_CONF="expandable_segments:True"
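The value is a comma-separated key:value string; the hypothetical parser below only illustrates that structure (PyTorch's own parser handles many more allocator options):

```python
def parse_alloc_conf(conf: str) -> dict:
    # Split a PYTORCH_ALLOC_CONF-style string such as
    # "expandable_segments:True,max_split_size_mb:128" into a dict.
    out = {}
    for item in conf.split(","):
        if item:
            key, _, value = item.partition(":")
            out[key.strip()] = value.strip()
    return out

print(parse_alloc_conf("expandable_segments:True"))
# -> {'expandable_segments': 'True'}
```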

Run

(.venv) bluesanta@bluesanta-AI-Series:~/Application/Llama$ python gpu_check.py 
Python Version: 3.12.3
PyTorch Version: 2.9.1+rocm7.10.0
GPU 인식 성공: True
사용 가능 기기: AMD Radeon 890M Graphics
GPU 텐서 연산 테스트 완료: tensor([[-1.0138, -0.4262,  1.2874]], device='cuda:0')

Check the Linux version

bluesanta@bluesanta-AI-Series:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 24.04.3 LTS
Release:        24.04
Codename:       noble
bluesanta@bluesanta-AI-Series:~$ lsmod | grep amdgpu
amdgpu              19836928  5
amddrm_ttm_helper      12288  1 amdgpu
amdttm                131072  2 amdgpu,amddrm_ttm_helper
amddrm_buddy           24576  1 amdgpu
amdxcp                 16384  1 amdgpu
amddrm_exec            12288  1 amdgpu
drm_suballoc_helper    20480  1 amdgpu
amd_sched              61440  1 amdgpu
amdkcl                 36864  4 amd_sched,amdttm,amddrm_exec,amdgpu
drm_panel_backlight_quirks    12288  1 amdgpu
drm_display_helper    278528  1 amdgpu
cec                    94208  2 drm_display_helper,amdgpu
i2c_algo_bit           16384  1 amdgpu
drm_ttm_helper         16384  1 amdgpu
video                  77824  1 amdgpu
bluesanta@bluesanta-AI-Series:~$ dkms status
amdgpu/6.16.6-2255209.24.04, 6.14.0-37-generic, x86_64: installed

Install ComfyUI

Activate the virtual environment

bluesanta@bluesanta-AI-Series:~$ cd Application/stable_diffusion/
bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion$ source .venv/bin/activate
(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion$ 

Download ComfyUI

(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion$ git clone https://github.com/comfyanonymous/ComfyUI.git

Install ComfyUI's required packages

(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion$ cd ComfyUI/
(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion/ComfyUI$ pip install -r requirements.txt

Run ComfyUI

(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion/ComfyUI$ python main.py
Checkpoint files will always be loaded safely.
Total VRAM 14844 MB, total RAM 29688 MB
pytorch version: 2.9.1+rocm7.10.0
Set: torch.backends.cudnn.enabled = False for better AMD performance.
AMD arch: gfx1150
ROCm version: (7, 2)
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon 890M Graphics : native
Using async weight offloading with 2 streams
Enabled pinned memory 28203.0
Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
Python version: 3.12.3 (main, Nov  6 2025, 13:44:16) [GCC 13.3.0]
ComfyUI version: 0.7.0
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
ComfyUI frontend version: 1.35.9
[Prompt Server] web root: /home/bluesanta/Application/stable_diffusion/.venv/lib/python3.12/site-packages/comfyui_frontend_package/static
Total VRAM 14844 MB, total RAM 29688 MB
pytorch version: 2.9.1+rocm7.10.0
Set: torch.backends.cudnn.enabled = False for better AMD performance.
AMD arch: gfx1150
ROCm version: (7, 2)
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon 890M Graphics : native
Using async weight offloading with 2 streams
Enabled pinned memory 28203.0
 
Import times for custom nodes:
   0.0 seconds: /home/bluesanta/Application/stable_diffusion/ComfyUI/custom_nodes/websocket_image_save.py
 
Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server
 
To see the GUI go to: http://127.0.0.1:8188

Install ComfyUI Manager

(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion/ComfyUI$ cd custom_nodes/
(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion/ComfyUI/custom_nodes$ git clone https://github.com/Comfy-Org/ComfyUI-Manager.git
(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion/ComfyUI/custom_nodes$ pip install -r ComfyUI-Manager/requirements.txt

Check the Linux version

bluesanta@bluesanta-AI-Series:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 24.04.3 LTS
Release:        24.04
Codename:       noble

Install the AMD GPU driver

Install the AMD GPU installer package

bluesanta@bluesanta-AI-Series:~$ wget https://repo.radeon.com/amdgpu-install/30.20.1/ubuntu/noble/amdgpu-install_7.1.1.70101-1_all.deb
bluesanta@bluesanta-AI-Series:~$ sudo dpkg -i amdgpu-install_7.1.1.70101-1_all.deb 
[sudo] password for bluesanta: 
Selecting previously unselected package amdgpu-install.
(Reading database ... 215673 files and directories currently installed.)
Preparing to unpack amdgpu-install_7.1.1.70101-1_all.deb ...
Unpacking amdgpu-install (30.20.1.0.30200100-2255209.24.04) ...
Setting up amdgpu-install (30.20.1.0.30200100-2255209.24.04) ...

Install the ROCm packages

bluesanta@bluesanta-AI-Series:~$ sudo amdgpu-install --usecase=rocm,hiplibsdk --no-dkms

Verify the installation

(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion$ lsmod | grep amdgpu
amdgpu              19836928  6
amddrm_ttm_helper      12288  1 amdgpu
amdttm                131072  2 amdgpu,amddrm_ttm_helper
amddrm_buddy           24576  1 amdgpu
amdxcp                 16384  1 amdgpu
amddrm_exec            12288  1 amdgpu
drm_suballoc_helper    20480  1 amdgpu
amd_sched              61440  1 amdgpu
amdkcl                 36864  4 amd_sched,amdttm,amddrm_exec,amdgpu
drm_panel_backlight_quirks    12288  1 amdgpu
drm_display_helper    278528  1 amdgpu
cec                    94208  2 drm_display_helper,amdgpu
i2c_algo_bit           16384  1 amdgpu
drm_ttm_helper         16384  1 amdgpu
video                  77824  1 amdgpu
(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion$ dkms status
amdgpu/6.16.6-2255209.24.04, 6.14.0-37-generic, x86_64: installed

Force architecture recognition

Since the HX 370 (gfx1150) is a very recent architecture, declare an environment variable so that PyTorch can recognize it. Enter the following in a terminal, or add it to .bashrc.

bluesanta@bluesanta-AI-Series:~$ vi ~/.bashrc

Set HSA_OVERRIDE_GFX_VERSION in the .bashrc file

export HSA_OVERRIDE_GFX_VERSION=11.5.0
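Before relying on the override, you can confirm the variable is actually visible to the Python process (a hypothetical helper, not part of any post above; note the variable must be set before torch is imported):

```python
import os

def check_gfx_override(expected: str = "11.5.0") -> str:
    # Verify HSA_OVERRIDE_GFX_VERSION is set as .bashrc intends.
    value = os.environ.get("HSA_OVERRIDE_GFX_VERSION")
    if value is None:
        return "not set: run `source ~/.bashrc` or restart the shell"
    if value != expected:
        return f"unexpected value: {value}"
    return "ok"

os.environ["HSA_OVERRIDE_GFX_VERSION"] = "11.5.0"  # simulate the .bashrc export
print(check_gfx_override())  # -> ok
```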

Install PyTorch

Create a virtual environment

bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion$ sudo apt install python3-venv
bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion$ python3 -m venv .venv
bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion$ source .venv/bin/activate
(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion$ 

Install the ROCm-enabled PyTorch build

(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion$ python -m pip install --index-url https://repo.amd.com/rocm/whl/gfx1150/ torch torchvision torchaudio

Verify that it works

(.venv) bluesanta@bluesanta-AI-Series:~/Application/stable_diffusion$ python -c "import torch; print(torch.cuda.is_available())"
True

Source

MicroPython Tools Plugin for JetBrains IDEs | JetBrains Marketplace

Install the plugin

MicroPython Tools configuration 1

MicroPython Tools configuration 2

Open the MicroPython Tools tool window

Source file

from machine import Pin
import time

# Define the GPIO pin connected to the built-in LED
# (This may vary depending on your specific ESP32-C3 board)
led_pin = Pin(8, Pin.OUT) # Example: Assuming GPIO8 for the LED

while True:
    led_pin.on()  # Turn the LED on
    print("LED ON")
    time.sleep(1) # Wait for 1 second

    led_pin.off() # Turn the LED off
    print("LED OFF")
    time.sleep(1) # Wait for 1 second

Run the Python file

Select the source file, then run the Execute File REPL action from the right-click menu

Check the run console
