Source: Testing speech recognition with a SPEECH TO TEXT (STT) library and Processing
Using Node.js on the Raspberry Pi [1. Installing Node.js and other modules]
Installing the latest Node.js on the Raspberry Pi :: 생각 정리소
[라즈하이파이] Building an audio system with RuneAudio | 산딸기마을
Building TensorFlow for Raspberry Pi: a Step-By-Step Guide
Automatic Smart Mirror installation
Source: Smart Mirror Installation
[user@localhost]$ curl -sL https://raw.githubusercontent.com/evancohen/smart-mirror/master/scripts/pi-install.sh | bash
________ _____ ______ ________ ________ _________
|\ ____\|\ _ \ _ \|\ __ \|\ __ \|\___ ___\
\ \ \___|\ \ \\\__\ \ \ \ \|\ \ \ \|\ \|___ \ \_|
\ \_____ \ \ \\|__| \ \ \ __ \ \ _ _\ \ \ \
\|____|\ \ \ \ \ \ \ \ \ \ \ \ \\ \| \ \ \
____\_\ \ \__\ \ \__\ \__\ \__\ \__\\ _\ \ \__\
|\_________\|__| \|__|\|__|\|__|\|__|\|__| \|__|
\|_________|
_____ ______ ___ ________ ________ ________ ________
|\ _ \ _ \|\ \|\ __ \|\ __ \|\ __ \|\ __ \
\ \ \\\__\ \ \ \ \ \ \|\ \ \ \|\ \ \ \|\ \ \ \|\ \
\ \ \\|__| \ \ \ \ \ _ _\ \ _ _\ \ \\\ \ \ _ _\
\ \ \ \ \ \ \ \ \ \\ \\ \ \\ \\ \ \\\ \ \ \\ \
\ \__\ \ \__\ \__\ \__\\ _\\ \__\\ _\\ \_______\ \__\\ _\
\|__| \|__|\|__|\|__|\|__|\|__|\|__|\|_______|\|__|\|__|
This script will install the smart-mirror and it's dependencies.
Please do not exit this script until it is complete.
Installing native dependencies
Manual Smart Mirror installation
Source: Smart Mirror Install Raspbian
Install the required libraries before installing Smart Mirror
Source: Install Smart Mirror dependencies
[user@localhost]$ sudo apt-get install sox libatlas-base-dev
Installing Smart Mirror
Source: Making a Smart Mirror - 4: Downloading and running the Smart Mirror GitHub repository
[user@localhost]$ git clone https://github.com/evancohen/smart-mirror.git
Cloning into 'smart-mirror'...
remote: Counting objects: 3760, done.
remote: Compressing objects: 100% (107/107), done.
remote: Total 3760 (delta 56), reused 0 (delta 0), pack-reused 3653
Receiving objects: 100% (3760/3760), 10.85 MiB | 2.09 MiB/s, done.
Resolving deltas: 100% (2072/2072), done.
Checking connectivity... done.
[user@localhost]$ cd smart-mirror
[user@localhost]$ cp config.default.json config.json
[user@localhost]$ npm install
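Before starting the mirror, config.json needs at least the display language and a Dark Sky forecast key. A minimal fragment to edit by hand; the nesting below is inferred from the fields the controller and weather code later in this post actually read (config.general.language, config.forecast.key, config.forecast.units, config.forecast.refreshInterval), so the rest of config.default.json should be left as shipped:
{
  "general": {
    "language": "ko"
  },
  "forecast": {
    "key": "YOUR_DARKSKY_API_KEY",
    "units": "si",
    "refreshInterval": 120
  }
}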
Running Smart Mirror
[user@localhost]$ npm start
Smart Mirror startup screen

Korean speech output test - adding a say() function

<!-- responsivevoice.js -->
<script src="http://code.responsivevoice.org/responsivevoice.js"></script>
<script>
  function say() {
    if (responsiveVoice.voiceSupport()) {
      console.log('Responsive Voice supported');
      responsiveVoice.speak("예쁜 꽃 그리는 법", "Korean Female");
    }
  }
</script>
Adding a button to run the say() function

<button onclick="javascript:say();">Speak</button>
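To speak arbitrary strings (for example, a recognized speech result) rather than the fixed test phrase, the function can be generalized. This is a small sketch and not part of the smart-mirror source; it only relies on responsiveVoice.voiceSupport() and responsiveVoice.speak() used above, and the helper name sayText is hypothetical:
<script>
  // Hypothetical helper: speak any text, defaulting to the Korean female voice
  function sayText(text, voice) {
    if (window.responsiveVoice && responsiveVoice.voiceSupport()) {
      responsiveVoice.speak(text, voice || "Korean Female");
    }
  }
</script>
<!-- Example usage: <button onclick="sayText('예쁜 꽃 그리는 법')">Speak</button> -->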
Run

[user@localhost]$ cat .asoundrc
pcm.!default {
    type asym
    playback.pcm {
        type hw
        card 0
    }
    capture.pcm {
        type plug
        slave.pcm "hw:1,0"
    }
}
ctl.!default {
    type hw
    card 0
}
Audio configuration
pi@raspberrypi:~/smart-mirror$ scripts/conf-audio.sh
List of Capture Devices
0) card 1: Device [USB PnP Sound Device], device 0: USB Audio [USB Audio]
Enter the number of the Capture device you would like to use:
0
List of Playback Devices
0) card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
1) card 0: ALSA [bcm2835 ALSA], device 1: bcm2835 ALSA [bcm2835 IEC958/HDMI]
Enter the number of the Playback device you would like to use:
0
pcm.!default {
    type asym
    playback.pcm {
        type plug
        # card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA][choice]
        slave.pcm "hw:0,0"
    }
    capture.pcm {
        type plug
        # card 1: Device [USB PnP Sound Device], device 0: USB Audio [USB Audio][choice]
        slave.pcm "hw:1,0"
    }
}
pi@raspberrypi:~/smart-mirror$ cat ~/.asoundrc
pcm.!default {
    type asym
    playback.pcm {
        type plug
        # card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA][choice]
        slave.pcm "hw:0,0"
    }
    capture.pcm {
        type plug
        # card 1: Device [USB PnP Sound Device], device 0: USB Audio [USB Audio][choice]
        slave.pcm "hw:1,0"
    }
}
pi@raspberrypi:~/smart-mirror$
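To confirm that the capture device configured in ~/.asoundrc actually delivers audio, a short Node script can record a few seconds through node-record-lpcm16, the same module the bundled sonus uses. This is a quick test sketch, not part of smart-mirror, and it assumes the older record.start()/record.stop() API that the sonus code below relies on:
// mic-test.js - record roughly five seconds from the default ALSA capture device
const fs = require('fs')
const record = require('node-record-lpcm16')

// write the raw 16 kHz LPCM stream to a file; a non-empty file means the mic works
const out = fs.createWriteStream('mic-test.raw')
record.start({ sampleRate: 16000, verbose: true }).pipe(out)

// stop recording after five seconds
setTimeout(() => record.stop(), 5000)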
Verifying the hotword
[user@localhost]$ vi plugins/speech/service.js
ipcRenderer.on('hotword', () => {
  console.log('hotword')
  callbacks.listening(true)
})
Starting Smart Mirror at boot
Source: Setting up Smart-Mirror to Run on Boot · Smart Mirror Documentation
Copy the startup script into the working directory
[user@localhost]$ cd ~
[user@localhost]$ cp ./smart-mirror/scripts/bash-start.sh smart-start.sh
[user@localhost]$ chown pi:pi /home/pi/smart-start.sh
[user@localhost]$ chmod +x /home/pi/smart-start.sh
Register the startup script so that it runs when the pi account's X session starts
[user@localhost]$ vi /home/pi/.config/lxsession/LXDE-pi/autostart
/home/pi/smart-start.sh &
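The resulting autostart file then looks roughly like the following. The first three lines are the stock Raspbian LXDE entries and may differ on your image; only the last line is the one added here:
@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
@xscreensaver -no-splash
/home/pi/smart-start.sh &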
Installing snowboy (optional)
Source: https://github.com/kitt-ai/snowboy
Handmade Alexa Festival! An invitation to Serverless and IoT + Voice: Raspberry Pi + Alexa Voice Service (Python)
[user@localhost]$ sudo apt-get install swig3.0 python-pyaudio python3-pyaudio sox python-dev
[user@localhost]$ pip install pyaudio
[user@localhost]$ sudo apt-get install libmagic-dev libatlas-base-dev
[user@localhost]$ sudo npm install -g node-pre-gyp
[user@localhost]$ git clone https://github.com/Kitt-AI/snowboy.git
[user@localhost]$ cd snowboy
[user@localhost]$ npm install
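To check that snowboy works on its own before wiring it into the mirror, a small detector test can be run from the snowboy directory. This is a sketch that assumes the resources/ files shipped with the repository and mirrors the Detector/Models usage that appears in the sonus source listed later in this post:
// hotword-test.js - log a line whenever the default "snowboy" hotword is heard
const record = require('node-record-lpcm16')
const { Detector, Models } = require('snowboy')

const models = new Models()
models.add({
  file: 'resources/snowboy.umdl',   // universal model bundled with the repository
  sensitivity: '0.5',
  hotwords: 'snowboy'
})

const detector = new Detector({
  resource: 'resources/common.res',
  models: models,
  audioGain: 2.0
})

detector.on('hotword', (index, hotword) => console.log('hotword detected:', hotword))

// pipe the microphone into the detector, the same way Sonus.start() does
record.start({ threshold: 0, verbose: false }).pipe(detector)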
Weather not displayed
Source: How to fix the Smart Mirror forecast error (GeolocationService issue) - 징징이 : Naver blog
The weather failed to display. There are two causes. First, the location lookup is unreliable: the function that fetches the position from Google through the browser only works intermittently, which can be worked around by setting the location manually. Second, the weather service does not handle the Korean language setting, so the language has to be fixed to English (en) before the forecast query returns data.
Setting the location manually
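One way to pin the location in plugins/weather/controller.js is to skip the GeolocationService lookup and hand the weather code a fixed geoposition object with the shape it already expects (coords.latitude / coords.longitude). A sketch with placeholder coordinates, reusing the refreshWeatherData/$interval calls shown further below:
// Hard-coded location (placeholder values - replace with your own coordinates)
var geoposition = {
  coords: {
    latitude: 37.5665,
    longitude: 126.9780
  }
};
refreshWeatherData(geoposition);
$interval(refreshWeatherData, config.forecast.refreshInterval * 60000 || 7200000);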

Modify plugins/weather/controller.js
Fixing the language to English in the source, as shown below, makes the forecast query return data.
weather.get = function () {
  var weather_url = 'https://api.darksky.net/forecast/' + config.forecast.key + '/' +
    geoposition.coords.latitude + ',' + geoposition.coords.longitude + '?units=' +
    //config.forecast.units + "&lang=" + language + "&callback=JSON_CALLBACK";
    config.forecast.units + "&lang=en" + "&callback=JSON_CALLBACK";
  console.log(weather_url);
  return $http.jsonp(weather_url).then(function (response) {
    return weather.forecast = response;
  });
};
Location lookup service call (for reference)
When the location lookup succeeds, the weather data is refreshed periodically via the $interval call.
GeolocationService.getLocation({ enableHighAccuracy: true }).then(function (geopo) {
  geoposition = geopo;
  refreshWeatherData(geoposition);
  $interval(refreshWeatherData, config.forecast.refreshInterval * 60000 || 7200000)
});
Modules
node-record-lpcm16
smart-mirror/plugins/speech/service.js
ipcRenderer.on('hotword', () => {
  console.log('hotword')
  callbacks.listening(true)
})
smart-mirror/node_modules/sonus/index.js
'use strict'
const record = require('node-record-lpcm16')
const stream = require('stream')
const {Detector, Models} = require('snowboy')

const ERROR = {
  NOT_STARTED: "NOT_STARTED",
  INVALID_INDEX: "INVALID_INDEX"
}

const CloudSpeechRecognizer = {}
CloudSpeechRecognizer.init = recognizer => {
  const csr = new stream.Writable()
  csr.listening = false
  csr.recognizer = recognizer
  return csr
}

CloudSpeechRecognizer.startStreaming = (options, audioStream, cloudSpeechRecognizer) => {
  if (cloudSpeechRecognizer.listening) {
    return
  }

  cloudSpeechRecognizer.listening = true

  const recognizer = cloudSpeechRecognizer.recognizer
  const recognitionStream = recognizer.createRecognizeStream({
    config: {
      encoding: 'LINEAR16',
      sampleRate: 16000,
      languageCode: options.language
    },
    singleUtterance: true,
    interimResults: true,
    verbose: true
  })

  recognitionStream.on('error', err => cloudSpeechRecognizer.emit('error', err))

  recognitionStream.on('data', data => {
    if (data) {
      cloudSpeechRecognizer.emit('data', data)
      if (data.endpointerType === 'END_OF_UTTERANCE') {
        cloudSpeechRecognizer.listening = false
        audioStream.unpipe(recognitionStream)
      }
    }
  })

  audioStream.pipe(recognitionStream)
}

const Sonus = {}
Sonus.annyang = require('./lib/annyang-core.js')

Sonus.init = (options, recognizer) => {
  // don't mutate options
  const opts = Object.assign({}, options),
    models = new Models(),
    sonus = new stream.Writable(),
    csr = CloudSpeechRecognizer.init(recognizer)
  sonus.mic = {}
  sonus.recordProgram = opts.recordProgram
  sonus.started = false

  // If we don't have any hotwords passed in, add the default global model
  opts.hotwords = opts.hotwords || [1]
  opts.hotwords.forEach(model => {
    models.add({
      file: model.file || 'node_modules/snowboy/resources/snowboy.umdl',
      sensitivity: model.sensitivity || '0.5',
      hotwords: model.hotword || 'default'
    })
  })

  // defaults
  opts.models = models
  opts.resource = opts.resource || 'node_modules/snowboy/resources/common.res'
  opts.audioGain = opts.audioGain || 2.0
  opts.language = opts.language || 'en-US' //https://cloud.google.com/speech/docs/languages

  const detector = sonus.detector = new Detector(opts)

  detector.on('silence', () => sonus.emit('silence'))
  detector.on('sound', () => sonus.emit('sound'))

  // When a hotword is detected pipe the audio stream to speech detection
  detector.on('hotword', (index, hotword) => {
    sonus.trigger(index, hotword)
  })

  csr.on('error', error => sonus.emit('error', { streamingError: error }))

  let transcriptEmpty = true
  csr.on('data', data => {
    const result = data.results[0]
    if (result) {
      transcriptEmpty = false
      if (result.isFinal) {
        sonus.emit('final-result', result.transcript)
        Sonus.annyang.trigger(result.transcript)
        transcriptEmpty = true //reset transcript
      } else {
        sonus.emit('partial-result', result.transcript)
      }
    } else if (data.endpointerType === 'END_OF_UTTERANCE' && transcriptEmpty) {
      sonus.emit('final-result', "")
    }
  })

  sonus.trigger = (index, hotword) => {
    if (sonus.started) {
      try {
        let triggerHotword = (index == 0) ? hotword : models.lookup(index)
        sonus.emit('hotword', index, triggerHotword)
        CloudSpeechRecognizer.startStreaming(opts, sonus.mic, csr)
      } catch (e) {
        throw ERROR.INVALID_INDEX
      }
    } else {
      throw ERROR.NOT_STARTED
    }
  }

  return sonus
}

Sonus.start = sonus => {
  sonus.mic = record.start({
    threshold: 0,
    recordProgram: sonus.recordProgram || "rec",
    verbose: false
  })

  sonus.mic.pipe(sonus.detector)
  sonus.started = true
}

Sonus.trigger = (sonus, index, hotword) => sonus.trigger(index, hotword)

Sonus.pause = sonus => sonus.mic.pause()

Sonus.resume = sonus => sonus.mic.resume()

Sonus.stop = () => record.stop()

module.exports = Sonus
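For reference, driving this module from a standalone script looks roughly like the following. The Google Cloud project id, key file and hotword model path are assumptions to fill in; the init options and the 'hotword', 'partial-result' and 'final-result' events all come from the code above, and the @google-cloud/speech call uses the older v0.x client style that matches createRecognizeStream:
const Sonus = require('sonus')
// old @google-cloud/speech (v0.x) client, the API generation this sonus build expects
const speech = require('@google-cloud/speech')({
  projectId: 'your-project-id',     // assumption: your GCP project
  keyFilename: './keyfile.json'     // assumption: service account key file
})

const sonus = Sonus.init({
  hotwords: [{ file: 'resources/smart_mirror.umdl', hotword: 'smart mirror' }],
  language: 'en-US'
}, speech)

sonus.on('hotword', (index, keyword) => console.log('hotword:', keyword))
sonus.on('partial-result', result => console.log('partial:', result))
sonus.on('final-result', result => console.log('final:', result))

Sonus.start(sonus)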
smart-mirror/app/js/controller.js
(function (angular) {
  'use strict';

  function MirrorCtrl(
    Focus,
    SpeechService,
    AutoSleepService,
    LightService,
    $rootScope, $scope, $timeout, $interval, tmhDynamicLocale, $translate) {

    // Local Scope Vars
    var _this = this;
    $scope.listening = false;
    $scope.debug = false;
    $scope.commands = [];
    $scope.partialResult = $translate.instant('home.commands');
    $scope.layoutName = 'main';
    $scope.config = config;

    // Set up our Focus
    $rootScope.$on('focus', function(targetScope, newFocus){
      $scope.focus = newFocus;
    })
    Focus.change("default");

    //set lang
    if (config.general.language.substr(0, 2) == 'en') {
      moment.locale(config.general.language,
        {
          calendar: {
            lastWeek: '[Last] dddd',
            lastDay: '[Yesterday]',
            sameDay: '[Today]',
            nextDay: '[Tomorrow]',
            nextWeek: 'dddd',
            sameElse: 'L'
          }
        }
      )
    } else {
      moment.locale(config.general.language)
    }

    //Initialize the speech service
    var resetCommandTimeout;
    SpeechService.init({
      listening: function (listening) {
        $scope.listening = listening;
        if (listening && !AutoSleepService.woke) {
          AutoSleepService.wake()
          $scope.focus = AutoSleepService.scope;
        }
      },
      partialResult: function (result) {
        $scope.partialResult = result;
        $timeout.cancel(resetCommandTimeout);
      },
      finalResult: function (result) {
        if (typeof result !== 'undefined') {
          $scope.partialResult = result;
          resetCommandTimeout = $timeout(restCommand, 5000);
        }
      },
      error: function (error) {
        console.log(error);
        if (error.error == "network") {
          $scope.speechError = "Google Speech Recognizer: Network Error (Speech quota exceeded?)";
        }
      }
    });

    //Update the time
    function updateTime() {
      $scope.date = new moment();
      // Auto wake at a specific time
      if (typeof config.autoTimer !== 'undefined' && typeof config.autoTimer.autoWake !== 'undefined' && config.autoTimer.autoWake == moment().format('HH:mm:ss')) {
        console.debug('Auto-wake', config.autoTimer.autoWake);
        AutoSleepService.wake()
        $scope.focus = AutoSleepService.scope;
        AutoSleepService.startAutoSleepTimer();
      }
    }

    // Reset the command text
    var restCommand = function () {
      $translate('home.commands').then(function (translation) {
        $scope.partialResult = translation;
      });
    };

    _this.init = function () {
      AutoSleepService.startAutoSleepTimer();
      $interval(updateTime, 1000);
      updateTime();
      restCommand();

      var defaultView = function () {
        console.debug("Ok, going to default view...");
        Focus.change("default");
      }

      // List commands
      SpeechService.addCommand('list', function () {
        console.debug("Here is a list of commands...");
        console.log(SpeechService.commands);
        $scope.commands = SpeechService.getCommands();
        Focus.change("commands");
      });

      // Go back to default view
      SpeechService.addCommand('home', defaultView);

      SpeechService.addCommand('debug', function () {
        console.debug("Boop Boop. Showing debug info...");
        $scope.debug = true;
      });

      // Check the time
      SpeechService.addCommand('time_show', function () {
        console.debug("It is", moment().format('h:mm:ss a'));
      });

      // Control light
      SpeechService.addCommand('light_action', function (state, action) {
        LightService.performUpdate(state + " " + action);
      });
    };

    _this.init();
  }

  angular.module('SmartMirror')
    .controller('MirrorCtrl', MirrorCtrl);

  function themeController($scope) {
    $scope.layoutName = (typeof config.layout !== 'undefined' && config.layout) ? config.layout : 'main';
  }

  angular.module('SmartMirror')
    .controller('Theme', themeController);

} (window.angular));
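Tying this back to the Korean TTS test earlier: an extra voice command could be registered inside _this.init() in the same style as the commands above. This is only a sketch; the 'speak_test' key is hypothetical and, like the existing commands, would still need a matching phrase entry in the keyword/translation files, and it assumes responsivevoice.js is loaded in the mirror page:
// Hypothetical extra command: speak a Korean test phrase via ResponsiveVoice
SpeechService.addCommand('speak_test', function () {
  console.debug("Speaking a Korean test phrase...");
  responsiveVoice.speak("예쁜 꽃 그리는 법", "Korean Female");
});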