After installing a remote desktop utility on Ubuntu, all you need to do is allow Remote Desktop connections on the Windows server you want to reach.
Installing the Ubuntu remote desktop package
Installing rdesktop
sudo apt-get install rdesktop
root@ubuntu:~/Desktop# sudo apt-get install rdesktop
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  pcscd
The following NEW packages will be installed:
  rdesktop
0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
Need to get 222 kB of archives.
After this operation, 712 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu focal/universe amd64 rdesktop amd64 1.9.0-2 [222 kB]
Fetched 222 kB in 1s (159 kB/s)
Selecting previously unselected package rdesktop.
(Reading database ... 277511 files and directories currently installed.)
Preparing to unpack .../rdesktop_1.9.0-2_amd64.deb ...
Unpacking rdesktop (1.9.0-2) ...
Setting up rdesktop (1.9.0-2) ...
Processing triggers for man-db (2.9.1-1) ...
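Once installed, a connection can be opened straight from the terminal. The host address and account below are placeholders; substitute your own Windows server details.

```shell
# Hypothetical host and account; -g sets the resolution, -u the Windows user,
# -f (optional) switches to full-screen.
rdesktop -u Administrator -g 1440x900 192.168.0.10
```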
Extract the mingw64 binary archive: x86_64-8.1.0-release-win32-seh-rt_v6-rev0.7z
Setting the Windows environment variable
System -> System Properties -> Environment Variables -> edit the Path variable -> add C:\mingw64\bin
Checking the installed GCC version
Press Win + R, then check the installed version in a cmd window.
C:\>gcc --version
gcc (x86_64-win32-seh-rev0, Built by MinGW-W64 project) 8.1.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
C:\>g++ --version
g++ (x86_64-win32-seh-rev0, Built by MinGW-W64 project) 8.1.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#include <stdio.h>
int main()
{
printf("Hello, world!\n");
printf("C Program Compile Test \n");
return 0;
}
Click the Run menu, then compile and run the test code.
Run Without Debugging (Ctrl+F5)
Test results
Compiling and running with input arguments
After compiling, you can also run the program with input arguments passed to it.
Slightly modify the code written earlier.
#include <stdio.h>
int main(int argc, char *argv[])
{
printf("1 번째 [%s]:\n",argv[1]);
printf("2 번째 [%s]:\n",argv[2]);
printf("Hello, world!\n");
printf("C Program Compile Test \n");
return 0;
}
Create a launch.json file under the .vscode directory of the test project.
The .vscode directory already contains the tasks.json and c_cpp_properties.json files from the earlier setup.
Create launch.json in the same directory and fill in the configuration.
Arguments are given in the form "args": ["1 args ","2 args "],
launch.json
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "gcc.exe - Build and debug active file",
"type": "cppdbg",
"request": "launch",
"program": "${fileDirname}\\build\\${fileBasenameNoExtension}.exe",
"args": ["1 args ","2 args "],
"stopAtEntry": false,
"cwd": "${fileDirname}",
"environment": [],
"externalConsole": false,
"MIMode": "gdb",
"miDebuggerPath": "C:\\mingw64\\bin\\gdb.exe",
"setupCommands": [
{
"description": "Enable pretty-printing for gdb",
"text": "-enable-pretty-printing",
"ignoreFailures": true
}
],
"preLaunchTask": "C/C++: gcc.exe build active file"
},
]
}
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el7_9.x86_64
export JRE_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el7_9.x86_64/jre
export PATH=$PATH:$JAVA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:$JRE_HOME/lib:$LD_LIBRARY_PATH
CATALINA_HOME=/home/pm5/tomcat/apache-tomcat-9.0.81
CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$CATALINA_HOME/lib/jsp-api.jar:$CATALINA_HOME/lib/servlet-api.jar
PATH=$PATH:$JAVA_HOME/bin:$CATALINA_HOME/bin
export JAVA_HOME CLASSPATH PATH CATALINA_HOME
alias tcdown="sh $CATALINA_HOME/bin/shutdown.sh -force"
alias tcup="sh $CATALINA_HOME/bin/startup.sh start"
alias tclog="tail -f $CATALINA_HOME/logs/catalina.out"
Verifying the Tomcat 9 connection
Start: sh $CATALINA_HOME/bin/startup.sh start
Stop: sh $CATALINA_HOME/bin/shutdown.sh -force
11-Oct-2023 15:44:00.073 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/docs] 11-Oct-2023 15:44:00.090 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/docs] has finished in [18] ms 11-Oct-2023 15:44:00.090 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/examples] 11-Oct-2023 15:44:00.251 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/examples] has finished in [161] ms 11-Oct-2023 15:44:00.251 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/host-manager] 11-Oct-2023 15:44:00.267 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/host-manager] has finished in [16] ms 11-Oct-2023 15:44:00.267 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/manager] 11-Oct-2023 15:44:00.280 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/manager] has finished in [13] ms 11-Oct-2023 15:44:00.283 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-9090"] 11-Oct-2023 15:44:00.297 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [430] milliseconds
Access the Tomcat URL: http://[ip]:9090/
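Reachability can also be checked from the shell; 9090 matches the http-nio-9090 connector in the startup log.

```shell
# -I asks for the response headers only; any HTTP response
# proves the connector is listening.
curl -I http://127.0.0.1:9090/
```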
Testing a web page (hellow.jsp)
Create the file under the tomcat/apache-tomcat-9.0.81/webapps/ROOT/ directory.
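A minimal hellow.jsp could look like the sketch below; the page body is an assumption, and the target directory falls back to ./webapps/ROOT when CATALINA_HOME is unset.

```shell
# Write a hypothetical test page into the ROOT webapp.
ROOT_DIR="${CATALINA_HOME:-.}/webapps/ROOT"
mkdir -p "$ROOT_DIR"
cat > "$ROOT_DIR/hellow.jsp" <<'EOF'
<%-- hypothetical test page --%>
<html><body>
<h3>Hello JSP</h3>
<%= "Server time: " + new java.util.Date() %>
</body></html>
EOF
```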
To verify that the installation completed successfully, run docker-compose logs -f [service name] and check that there are no installation errors.
docker-compose logs pgadmin
[docker_test@centos7:/data1/docker_test/compose/postgresql]$ docker-compose logs -f postgres pgadmin Attaching to pgadmin, postgres pgadmin | NOTE: Configuring authentication for SERVER mode. pgadmin | pgadmin | pgAdmin 4 - Application Initialisation pgadmin | ====================================== pgadmin | pgadmin | [2023-03-29 09:20:40 +0000] [1] [INFO] Starting gunicorn 20.1.0 pgadmin | [2023-03-29 09:20:40 +0000] [1] [INFO] Listening at: http://[::]:80 (1) pgadmin | [2023-03-29 09:20:40 +0000] [1] [INFO] Using worker: gthread pgadmin | [2023-03-29 09:20:40 +0000] [92] [INFO] Booting worker with pid: 92 ........................ ........... ...... postgres | 2023-03-29 18:26:44.695 KST [69] LOG: checkpoint starting: shutdown immediate postgres | 2023-03-29 18:26:44.696 KST [69] LOG: checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.002 s; sync files=0, longest=0.000 s, average=0.000 s; distance=0 kB, estimate=8463 kB postgres | 2023-03-29 18:26:44.700 KST [1] LOG: database system is shut down postgres | postgres | PostgreSQL Database directory appears to contain a database; Skipping initialization postgres | postgres | 2023-03-29 18:26:58.554 KST [1] LOG: starting PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit postgres | 2023-03-29 18:26:58.554 KST [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2023-03-29 18:26:58.554 KST [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2023-03-29 18:26:58.579 KST [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2023-03-29 18:26:58.581 KST [29] LOG: database system was shut down at 2023-03-29 18:26:44 KST postgres | 2023-03-29 18:26:58.584 KST [1] LOG: database system is ready to accept connections
If the container lacks permission to create files, the following error occurs.
pgadmin | ERROR : Failed to create the directory /var/lib/pgadmin/sessions:
[docker_test@centos7:/data1/docker_test/compose/postgresql]$ docker-compose up Creating network "postgresql_default" with the default driver Creating postgres ... done Creating pgadmin ... done Attaching to postgres, pgadmin postgres | postgres | PostgreSQL Database directory appears to contain a database; Skipping initialization postgres | postgres | 2023-03-29 18:18:48.407 KST [1] LOG: starting PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit postgres | 2023-03-29 18:18:48.408 KST [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2023-03-29 18:18:48.408 KST [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2023-03-29 18:18:48.497 KST [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2023-03-29 18:18:48.502 KST [29] LOG: database system was shut down at 2023-03-29 18:18:40 KST postgres | 2023-03-29 18:18:48.506 KST [1] LOG: database system is ready to accept connections pgadmin | ERROR : Failed to create the directory /var/lib/pgadmin/sessions: pgadmin | [Errno 13] Permission denied: '/var/lib/pgadmin/sessions' pgadmin | HINT : Create the directory /var/lib/pgadmin/sessions, ensure it is writeable by pgadmin | 'pgadmin', and try again, or, create a config_local.py file pgadmin | and override the SESSION_DB_PATH setting per pgadmin | https://www.pgadmin.org/docs/pgadmin4/6.21/config_py.html
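One way to clear the permission error, assuming the pgAdmin data volume is bind-mounted from a host directory named ./pgadmin (the real path comes from your docker-compose.yml); the official pgAdmin image runs as UID/GID 5050.

```shell
# Make the bind-mounted directory writable by the container user (UID 5050).
sudo mkdir -p ./pgadmin
sudo chown -R 5050:5050 ./pgadmin
docker-compose up -d
```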
Checking docker container status
Check the status with docker-compose ps -a (State -> Up, and verify the port mapping information)
pgadmin was installed on port 5555, and postgresql uses port 5432.
Port mapping format => [host port]:[container port]
[docker_test@centos7:/data1/docker_test/compose/postgresql]$ docker-compose ps -a
  Name                Command              State                     Ports
------------------------------------------------------------------------------------------------
pgadmin    /entrypoint.sh                  Up      443/tcp, 0.0.0.0:5555->80/tcp,:::5555->80/tcp
postgres   docker-entrypoint.sh postgres   Up      0.0.0.0:5432->5432/tcp,:::5432->5432/tcp
Connecting to PostgreSQL
If the installation completed successfully, you can connect to postgresql through pgadmin and inspect it.
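Besides pgAdmin, a quick connection test can be run inside the postgres container itself (the postgres user name is an assumption; use the POSTGRES_USER value from your docker-compose.yml).

```shell
# Run psql inside the postgres service container and print the server version.
docker-compose exec postgres psql -U postgres -c "SELECT version();"
```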
[tac@mysvr:/home/tac]$ . tac.profile
[tac@mysvr:/home/tac]$ tbcm -b
######################### WARNING #########################
#   You are trying to start the CM-fence function.        #
###########################################################
You are not 'root'. Proceed anyway without fence? (y/N)y
CM Guard daemon started up.
CM-fence enabled.
TBCM 7.1.1 (Build 258584)
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero cluster manager started up. Local node name is (cm1:61040).
[tac@mysvr:/home/tac]$ TB_SID=tas1 tbboot nomount
Listener port = 3000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. Tibero instance started up (NOMOUNT mode).
tas diskspace create
Connect to tas1 with tbsql and create the diskspace.
CREATE DISKSPACE ds0 FORCE EXTERNAL REDUNDANCY
FAILGROUP FG1 DISK
'/dev/mapper/vg1-vol_1G_01' NAME FG_DISK1,
'/dev/mapper/vg1-vol_1G_02' NAME FG_DISK2,
'/dev/mapper/vg1-vol_1G_03' NAME FG_DISK3,
'/dev/mapper/vg1-vol_1G_04' NAME FG_DISK4,
'/dev/mapper/vg2-vol_1G_01' NAME FG_DISK5,
'/dev/mapper/vg2-vol_1G_02' NAME FG_DISK6,
'/dev/mapper/vg2-vol_1G_03' NAME FG_DISK7,
'/dev/mapper/vg2-vol_1G_04' NAME FG_DISK8
/
[tac@mysvr:/home/tac]$ TB_SID=tas1 tbboot nomount Listener port = 3000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. Tibero instance started up (NOMOUNT mode). [tac@mysvr:/home/tac]$ tbsql sys/tibero@tas1
tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Connected to Tibero using tas1.
SQL> CREATE DISKSPACE ds0 FORCE EXTERNAL REDUNDANCY 2 FAILGROUP FG1 DISK 3 '/dev/mapper/vg1-vol_1G_01' NAME FG_DISK1, 4 '/dev/mapper/vg1-vol_1G_02' NAME FG_DISK2, 5 '/dev/mapper/vg1-vol_1G_03' NAME FG_DISK3, 6 '/dev/mapper/vg1-vol_1G_04' NAME FG_DISK4, 7 '/dev/mapper/vg2-vol_1G_01' NAME FG_DISK5, 8 '/dev/mapper/vg2-vol_1G_02' NAME FG_DISK6, 9 '/dev/mapper/vg2-vol_1G_03' NAME FG_DISK7, 10 '/dev/mapper/vg2-vol_1G_04' NAME FG_DISK8 11 /
Diskspace 'DS0' created.
SQL> q Disconnected.
Registering CM (Tibero Cluster Manager) membership on TAC node 1
cmrctl add network --nettype private --ipaddr 10.10.10.10 --portno 51210 --name net1
cmrctl add network --nettype public --ifname ens33 --name pub1
cmrctl add cluster --incnet net1 --pubnet pub1 --cfile "+/dev/mapper/vg1-vol_1G_01,/dev/mapper/vg1-vol_1G_02,/dev/mapper/vg1-vol_1G_03,/dev/mapper/vg1-vol_1G_04,/dev/mapper/vg2-vol_1G_01,/dev/mapper/vg2-vol_1G_02,/dev/mapper/vg2-vol_1G_03,/dev/mapper/vg2-vol_1G_04" --name cls1
cmrctl start cluster --name cls1
cmrctl add service --name tas --type as --cname cls1
cmrctl add as --name tas1 --svcname tas --dbhome $CM_HOME
cmrctl add service --name tibero --cname cls1
cmrctl add db --name tac1 --svcname tibero --dbhome $CM_HOME
cmrctl start as --name tas1
cmrctl show
[tac@mysvr:/home/tac]$ cmrctl start as --name tas1 Listener port = 3000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. Tibero instance started up (NORMAL mode). BOOT SUCCESS! (MODE : NORMAL) [tac@mysvr:/home/tac]$ cmrctl show Resource List of Node cm1 ===================================================================== CLUSTER TYPE NAME STATUS DETAIL ----------- -------- -------------- -------- ------------------------ COMMON network net1 UP (private) 10.10.10.10/51210 COMMON network pub1 UP (public) ens33 COMMON cluster cls1 UP inc: net1, pub: pub1 cls1 file cls1:0 UP +0 cls1 file cls1:1 UP +1 cls1 file cls1:2 UP +2 cls1 service tas UP Active Storage, Active Cluster (auto-restart: OFF) cls1 service tibero DOWN Database, Active Cluster (auto-restart: OFF) cls1 as tas1 UP(NRML) tas, /home/tac/tibero7, failed retry cnt: 0 cls1 db tac1 DOWN tibero, /home/tac/tibero7, failed retry cnt: 0 ===================================================================== [tac@mysvr:/home/tac]$
Activating the tas diskspace
tbsql sys/tibero@tas1
ALTER DISKSPACE ds0 ADD THREAD 1;
database create
cmrctl start db --name tac1 --option "-t nomount"
tbsql sys/tibero@tac1
CREATE DATABASE "tibero"
USER sys IDENTIFIED BY tibero
MAXINSTANCES 8
MAXDATAFILES 256
CHARACTER set MSWIN949
NATIONAL character set UTF16
LOGFILE GROUP 0 '+DS0/redo001.redo' SIZE 100M,
GROUP 1 '+DS0/redo011.redo' SIZE 100M,
GROUP 2 '+DS0/redo021.redo' SIZE 100M
MAXLOGFILES 100
MAXLOGMEMBERS 8
ARCHIVELOG
DATAFILE '+DS0/system001.dtf' SIZE 128M
AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED
DEFAULT TABLESPACE USR
DATAFILE '+DS0/usr001.dtf' SIZE 128M
AUTOEXTEND ON NEXT 16M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
DEFAULT TEMPORARY TABLESPACE TEMP
TEMPFILE '+DS0/temp001.dtf' SIZE 128M
AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
UNDO TABLESPACE UNDO0
DATAFILE '+DS0/undo001.dtf' SIZE 128M
AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
[tac@mysvr:/home/tac]$ cmrctl start db --name tac1 --option "-t nomount" Listener port = 21000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. Tibero instance started up (NOMOUNT mode). BOOT SUCCESS! (MODE : NOMOUNT) [tac@mysvr:/home/tac]$ tbsql sys/tibero@tac1
tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Connected to Tibero using tac1.
SQL> CREATE DATABASE "tibero" 2 USER sys IDENTIFIED BY tibero 3 MAXINSTANCES 8 4 MAXDATAFILES 256 5 CHARACTER set MSWIN949 6 NATIONAL character set UTF16 7 LOGFILE GROUP 0 '+DS0/redo001.redo' SIZE 100M, 8 GROUP 1 '+DS0/redo011.redo' SIZE 100M, 9 GROUP 2 '+DS0/redo021.redo' SIZE 100M 10 MAXLOGFILES 100 MAXLOGMEMBERS 8 11 12 ARCHIVELOG 13 DATAFILE '+DS0/system001.dtf' SIZE 128M 14 AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED 15 DEFAULT TABLESPACE USR 16 DATAFILE '+DS0/usr001.dtf' SIZE 128M 17 AUTOEXTEND ON NEXT 16M MAXSIZE UNLIMITED 18 EXTENT MANAGEMENT LOCAL AUTOALLOCATE 19 DEFAULT TEMPORARY TABLESPACE TEMP 20 TEMPFILE '+DS0/temp001.dtf' SIZE 128M 21 AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED 22 EXTENT MANAGEMENT LOCAL AUTOALLOCATE 23 UNDO TABLESPACE UNDO0 24 DATAFILE '+DS0/undo001.dtf' SIZE 128M 25 AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED 26 EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Database created.
SQL> q Disconnected.
Creating UNDO, REDO, etc. for TAC node 2
cmrctl start db --name tac1
tbsql sys/tibero@tac1
CREATE UNDO TABLESPACE UNDO1 DATAFILE '+DS0/undo011.dtf' SIZE 128M AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
create tablespace syssub datafile '+DS0/syssub001.dtf' SIZE 128M autoextend on next 8M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 '+DS0/redo031.redo' size 100M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 '+DS0/redo041.redo' size 100M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 '+DS0/redo051.redo' size 100M;
ALTER DATABASE ENABLE PUBLIC THREAD 1;
[tac@mysvr:/home/tac]$ cmrctl start db --name tac1 Listener port = 21000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. Tibero instance started up (NORMAL mode). BOOT SUCCESS! (MODE : NORMAL) [tac@mysvr:/home/tac]$ tbsql sys/tibero@tac1
tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Connected to Tibero using tac1.
SQL> CREATE UNDO TABLESPACE UNDO1 DATAFILE '+DS0/undo011.dtf' SIZE 128M AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Tablespace 'UNDO1' created.
SQL> create tablespace syssub datafile '+DS0/syssub001.dtf' SIZE 128M autoextend on next 8M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Tablespace 'SYSSUB' created.
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 '+DS0/redo031.redo' size 100M;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 '+DS0/redo041.redo' size 100M;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 '+DS0/redo051.redo' size 100M;
Database altered.
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 1;
Database altered.
Running the system.sh script
export TB_SID=tac1
$TB_HOME/scripts/system.sh -p1 tibero -p2 syscat -a1 y -a2 y -a3 y -a4 y
[tac@mysvr:/home/tac]$ export TB_SID=tac1 [tac@mysvr:/home/tac]$ $TB_HOME/scripts/system.sh -p1 tibero -p2 syscat -a1 y -a2 y -a3 y -a4 y Creating additional system index... Dropping agent table... Creating client policy table ... Creating text packages table ... Creating the role DBA... Creating system users & roles... ......................... .................중략
Start TPR Create tudi interface Running /home/tac/tibero7/scripts/odci.sql... Creating spatial meta tables and views ... Registering default spatial reference systems ... Registering unit of measure entries... Creating internal system jobs... Creating Japanese Lexer epa source ... Creating internal system notice queue ... Creating sql translator profiles ... Creating agent table... Creating additional static views using dpv... Done. For details, check /home/tac/tibero7/instance/tac1/log/system_init.log.
Tasks on TAC node 2
Registering CM (Tibero Cluster Manager) membership on TAC node 2
tbcm -b
cmrctl add network --nettype private --ipaddr 10.10.10.20 --portno 51210 --name net2
cmrctl add network --nettype public --ifname ens33 --name pub2
cmrctl add cluster --incnet net2 --pubnet pub2 --cfile "+/dev/mapper/vg1-vol_1G_01,/dev/mapper/vg1-vol_1G_02,/dev/mapper/vg1-vol_1G_03,/dev/mapper/vg1-vol_1G_04,/dev/mapper/vg2-vol_1G_01,/dev/mapper/vg2-vol_1G_02,/dev/mapper/vg2-vol_1G_03,/dev/mapper/vg2-vol_1G_04" --name cls1
cmrctl start cluster --name cls1
#cmrctl add service --name tas --type as --cname cls1
cmrctl add as --name tas2 --svcname tas --dbhome $CM_HOME
#cmrctl add service --name tibero --cname cls1
cmrctl add db --name tac2 --svcname tibero --dbhome $CM_HOME
[tac@mysvr:/home/tac]$ tbcm -b CM Guard daemon started up. CM-fence enabled.
TBCM 7.1.1 (Build 258584)
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero cluster manager started up. Local node name is (cm2:61040).
cmrctl start as --name tas2
cmrctl start db --name tac2
[tac@mysvr:/home/tac]$ cmrctl start as --name tas2 Listener port = 3000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. Tibero instance started up (NORMAL mode). BOOT SUCCESS! (MODE : NORMAL) [tac@mysvr:/home/tac]$ cmrctl start db --name tac2 Listener port = 21000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. Tibero instance started up (NORMAL mode). BOOT SUCCESS! (MODE : NORMAL) [tac@mysvr:/home/tac]$ tbsql sys/tibero@tac2
Connecting to TAC node 2 and verifying
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac2
tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Connected to Tibero using tac2.
SQL> ls
NAME                               SUBNAME                  TYPE
---------------------------------- ------------------------ --------------------
SPH_REPORT_DIR                                              DIRECTORY
TPR_REPORT_DIR                                              DIRECTORY
TPR_TIP_DIR                                                 DIRECTORY
NULL_VERIFY_FUNCTION                                        FUNCTION
TB_COMPLEXITY_CHECK                                         FUNCTION
TB_STRING_DISTANCE                                          FUNCTION
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------------------------------
DB_NAME
----------------------------------------
HOST_NAME                                                       PARALLEL
--------------------------------------------------------------- --------
   THREAD# VERSION
---------- --------
STARTUP_TIME
--------------------------------------------------------------------------------
STATUS           SHUTDOWN_PENDING
---------------- ----------------
TIP_FILE
--------------------------------------------------------------------------------
              1 tac2
tibero
mysvr                                                           YES
         1 7
2023/03/14
NORMAL           NO
/home/tac/tibero7/config/tac2.tip
1 row selected.
Checking the status of all TAC nodes
The cmrctl show all command shows the status of each TAC instance.
$ cmrctl show all
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network net1 UP (private) 10.10.10.10/51210
COMMON network pub1 UP (public) ens33
COMMON cluster cls1 UP inc: net1, pub: pub1
cls1 file cls1:0 UP +0
cls1 file cls1:1 UP +1
cls1 file cls1:2 UP +2
cls1 service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cls1 service tibero UP Database, Active Cluster (auto-restart: OFF)
cls1 as tas1 UP(NRML) tas, /home/tac/tibero7, failed retry cnt: 0
cls1 db tac1 UP(NRML) tibero, /home/tac/tibero7, failed retry cnt: 0
=====================================================================
Resource List of Node cm2
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network net2 UP (private) 10.10.10.20/51210
COMMON network pub2 UP (public) ens33
COMMON cluster cls1 UP inc: net2, pub: pub2
cls1 file cls1:0 UP +0
cls1 file cls1:1 UP +1
cls1 file cls1:2 UP +2
cls1 service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cls1 service tibero UP Database, Active Cluster (auto-restart: OFF)
cls1 as tas2 UP(NRML) tas, /home/tac/tibero7, failed retry cnt: 0
cls1 db tac2 UP(NRML) tibero, /home/tac/tibero7, failed retry cnt: 0
=====================================================================
To verify the disks are shared correctly, start TAC machines 1 and 2 and check the information.
If the configuration finished correctly, you can connect to each server and see the added disks with the ls command.
The added disks can also be checked with fdisk -l.
Additional network configuration (Host-only)
Add a private network for interconnect communication between the vmware machines.
Add a network adapter to each VM.
Network Adapter -> Add -> Host-only
Once the adapter is added, connect to each server and assign the IP addresses.
TAC1 :10.10.10.10
TAC2 :10.10.10.20
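On a NetworkManager-based CentOS 7 system, the addresses can be assigned roughly as follows; the interface name ens37 is an assumption, so check ip link for the adapter that was just added.

```shell
# On tac1 (use 10.10.10.20/24 on tac2):
nmcli connection add type ethernet ifname ens37 con-name ic1 ip4 10.10.10.10/24
nmcli connection up ic1
```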
TAC node 1
TAC node 2
Creating RAW devices
Turn the added HDD disks into raw devices, in the following order:
Create partitions
Create physical volumes
Create a volume group
Create logical volumes
Map logical volumes to raw devices
Creating partitions
Run fdisk /dev/sdb, then enter n -> p -> 1 -> Enter -> Enter -> w.
fdisk /dev/sdb
[root@mysvr:/root]$ fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x96b3f8bb.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-10485759, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759):
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@mysvr:/root]$
fdisk /dev/sdc
[root@mysvr:/root]$ fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x536414f4.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-10485759, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759):
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
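The interactive key sequence above can also be scripted when many disks need the same single-partition layout; a hedged sketch (destructive, so double-check the device names first):

```shell
# printf feeds the n -> p -> 1 -> Enter -> Enter -> w answers to fdisk;
# the two blank lines accept the default first/last sectors.
for dev in /dev/sdb /dev/sdc; do
  printf 'n\np\n1\n\n\nw\n' | fdisk "$dev"
done
```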
Creating physical volumes
After checking the disks, create the physical volumes with the pvcreate command.
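The exact pvcreate/vgcreate invocations are not shown; a plausible sketch that matches the vgdisplay output (one PV per volume group) would be:

```shell
pvcreate /dev/sdb1 /dev/sdc1   # register both partitions as physical volumes
vgcreate vg1 /dev/sdb1         # one volume group per disk
vgcreate vg2 /dev/sdc1
```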
[root@mysvr:/root]$ ls -l /dev/sd*
brw-rw----. 1 root disk 8,  0  3월  9 11:45 /dev/sda
brw-rw----. 1 root disk 8,  1  3월  9 11:45 /dev/sda1
brw-rw----. 1 root disk 8,  2  3월  9 11:45 /dev/sda2
brw-rw----. 1 root disk 8, 16  3월  9 12:41 /dev/sdb
brw-rw----. 1 root disk 8, 17  3월  9 12:41 /dev/sdb1
brw-rw----. 1 root disk 8, 32  3월  9 12:43 /dev/sdc
brw-rw----. 1 root disk 8, 33  3월  9 12:43 /dev/sdc1
[root@mysvr:/root]$ pvdisplay --- Physical volume --- PV Name /dev/sda2 VG Name centos PV Size <39.00 GiB / not usable 3.00 MiB Allocatable yes PE Size 4.00 MiB Total PE 9983 Free PE 1 Allocated PE 9982 PV UUID f6dArV-MiVJ-lCgG-CfbY-4PCD-f1d0-uwVkeY
"/dev/sdb1" is a new physical volume of "<5.00 GiB" --- NEW Physical volume --- PV Name /dev/sdb1 VG Name PV Size <5.00 GiB Allocatable NO PE Size 0 Total PE 0 Free PE 0 Allocated PE 0 PV UUID aly1AZ-7hzb-T4qM-JAdv-gCWS-Sj2f-5HVDDs
"/dev/sdc1" is a new physical volume of "<5.00 GiB" --- NEW Physical volume --- PV Name /dev/sdc1 VG Name PV Size <5.00 GiB Allocatable NO PE Size 0 Total PE 0 Free PE 0 Allocated PE 0 PV UUID Q8sQZR-SBvK-mWRk-8j8o-Mdnp-eWHc-IUFZ2x
[root@mysvr:/root]$ vgdisplay --- Volume group --- VG Name vg1 System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 1 VG Access read/write VG Status resizable MAX LV 0 Cur LV 0 Open LV 0 Max PV 0 Cur PV 1 Act PV 1 VG Size <5.00 GiB PE Size 4.00 MiB Total PE 1279 Alloc PE / Size 0 / 0 Free PE / Size 1279 / <5.00 GiB VG UUID 6IWGT6-XHgP-OESg-3Gdh-QOUW-p01L-Cx5INn
--- Volume group --- VG Name vg2 System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 1 VG Access read/write VG Status resizable MAX LV 0 Cur LV 0 Open LV 0 Max PV 0 Cur PV 1 Act PV 1 VG Size <5.00 GiB PE Size 4.00 MiB Total PE 1279 Alloc PE / Size 0 / 0 Free PE / Size 1279 / <5.00 GiB VG UUID NgC43A-4Cyq-gKHY-8ed7-8UKY-xh8n-p2YJfo
--- Volume group --- VG Name centos System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 4 VG Access read/write VG Status resizable MAX LV 0 Cur LV 3 Open LV 3 Max PV 0 Cur PV 1 Act PV 1 VG Size <39.00 GiB PE Size 4.00 MiB Total PE 9983 Alloc PE / Size 9982 / 38.99 GiB Free PE / Size 1 / 4.00 MiB VG UUID N8yCf2-fqDT-miVi-neoj-A1i8-mFWE-X7dMRv
Creating logical volumes
lvcreate -L 100M -n cm_01 vg1
lvcreate -L [size] -n [logical volume name] [volume group name]
[root@mysvr:/root]$ lvdisplay vg1 --- Logical volume --- LV Path /dev/vg1/contorl_01 LV Name contorl_01 VG Name vg1 LV UUID u8hnpe-DbAE-YgKj-qu5u-cPSI-HN3l-l0kYok LV Write Access read/write LV Creation host, time mysvr, 2023-03-09 16:34:02 +0900 LV Status available # open 0 LV Size 128.00 MiB Current LE 32 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 8192 Block device 253:4
--- Logical volume --- LV Path /dev/vg1/redo001 LV Name redo001 VG Name vg1 LV UUID dPbgfi-Uqtp-wObd-eW4e-QDtd-E7rj-ylgWpv LV Write Access read/write LV Creation host, time mysvr, 2023-03-09 16:34:02 +0900 LV Status available # open 0 LV Size 128.00 MiB Current LE 32 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 8192 Block device 253:5
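The raw bindings that udevadm reloads come from a udev rules file; a hypothetical example (the file name, DM_NAME values, and raw numbers must match your own LV layout):

```shell
# Write an example rules file binding two logical volumes to raw devices.
cat <<'EOF' > /etc/udev/rules.d/99-raw.rules
ACTION=="add", ENV{DM_NAME}=="vg1-cm_01", RUN+="/usr/bin/raw /dev/raw/raw1 /dev/%k"
ACTION=="add", ENV{DM_NAME}=="vg1-contorl_01", RUN+="/usr/bin/raw /dev/raw/raw2 /dev/%k"
EOF
```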
udevadm control --reload-rules
udevadm trigger --action=add
raw -qa
ls -al /dev/raw*
[root@mysvr:/root]$ udevadm trigger --action=add [root@mysvr:/root]$ raw -qa /dev/raw/raw1: bound to major 253, minor 3 /dev/raw/raw2: bound to major 253, minor 4 /dev/raw/raw3: bound to major 253, minor 5 /dev/raw/raw4: bound to major 253, minor 6 /dev/raw/raw5: bound to major 253, minor 7 /dev/raw/raw6: bound to major 253, minor 8 /dev/raw/raw7: bound to major 253, minor 9 /dev/raw/raw8: bound to major 253, minor 10 /dev/raw/raw9: bound to major 253, minor 11 /dev/raw/raw10: bound to major 253, minor 12 /dev/raw/raw11: bound to major 253, minor 13 /dev/raw/raw12: bound to major 253, minor 14 /dev/raw/raw13: bound to major 253, minor 15 /dev/raw/raw14: bound to major 253, minor 16 /dev/raw/raw15: bound to major 253, minor 17 /dev/raw/raw16: bound to major 253, minor 18 /dev/raw/raw17: bound to major 253, minor 19 /dev/raw/raw18: bound to major 253, minor 20 /dev/raw/raw19: bound to major 253, minor 25 /dev/raw/raw20: bound to major 253, minor 26 /dev/raw/raw21: bound to major 253, minor 27 /dev/raw/raw22: bound to major 253, minor 28 /dev/raw/raw23: bound to major 253, minor 29 /dev/raw/raw24: bound to major 253, minor 30 /dev/raw/raw25: bound to major 253, minor 31 /dev/raw/raw26: bound to major 253, minor 32 /dev/raw/raw27: bound to major 253, minor 33 /dev/raw/raw28: bound to major 253, minor 34 /dev/raw/raw29: bound to major 253, minor 35 /dev/raw/raw30: bound to major 253, minor 36 /dev/raw/raw31: bound to major 253, minor 37 /dev/raw/raw32: bound to major 253, minor 38 /dev/raw/raw33: bound to major 253, minor 39 /dev/raw/raw34: bound to major 253, minor 40
Rebooting the Linux VMs
Restart each server to verify that the changes were applied.
Boot the tac1 machine first, then the tac2 machine.
[tac@mysvr:/home/tac]$ tbcm -b
######################### WARNING #########################
#   You are trying to start the CM-fence function.        #
###########################################################
You are not 'root'. Proceed anyway without fence? (y/N)y
CM Guard daemon started up.
CM-fence enabled.
Tibero cluster manager (cm1) startup failed!
[tac@mysvr:/home/tac]$ cmrctl add network --nettype private --ipaddr 10.10.10.10 --portno 51210 --name net1 Resource add success! (network, net1) [tac@mysvr:/home/tac]$ cmrctl add network --nettype public --ifname ens33 --name pub1 Resource add success! (network, pub1) [tac@mysvr:/home/tac]$ cmrctl add cluster --incnet net1 --pubnet pub1 --cfile "/dev/raw/raw1" --name cls1 Resource add success! (cluster, cls1) [tac@mysvr:/home/tac]$ cmrctl start cluster --name cls1 MSG SENDING SUCCESS! [tac@mysvr:/home/tac]$ cmrctl show Resource List of Node cm1 ===================================================================== CLUSTER TYPE NAME STATUS DETAIL ----------- -------- -------------- -------- ------------------------ COMMON network net1 UP (private) 10.10.10.10/51210 COMMON network pub1 UP (public) ens33 COMMON cluster cls1 UP inc: net1, pub: pub1 cls1 file cls1:0 UP /dev/raw/raw1 =====================================================================
Registering the CM Tibero DB cluster service
#TAC Cluster
#service name = DB_NAME (tibero)
cmrctl add service --name tibero --cname cls1 --type db --mode AC
cmrctl show
cmrctl add db --name tac1 --svcname tibero --dbhome $CM_HOME
cmrctl show
[tac@mysvr:/home/tac]$ cmrctl add service --name tibero --cname cls1 --type db --mode AC Resource add success! (service, tibero) [tac@mysvr:/home/tac]$ cmrctl add db --name tac1 --svcname tibero --dbhome $CM_HOME Resource add success! (db, tac1) [tac@mysvr:/home/tac]$ cmrctl show Resource List of Node cm1 ===================================================================== CLUSTER TYPE NAME STATUS DETAIL ----------- -------- -------------- -------- ------------------------ COMMON network net1 UP (private) 10.10.10.10/51210 COMMON network pub1 UP (public) ens33 COMMON cluster cls1 UP inc: net1, pub: pub1 cls1 file cls1:0 UP /dev/raw/raw1 cls1 service tibero DOWN Database, Active Cluster (auto-restart: OFF) cls1 db tac1 DOWN tibero, /home/tac/tibero7, failed retry cnt: 0 =====================================================================
Creating the database
Boot the tac1 instance in nomount mode and check its status
cmrctl start db --name tac1 --option "-t nomount"
[tac@mysvr:/home/tac]$ cmrctl start db --name tac1 --option "-t nomount" BOOT SUCCESS! (MODE : NOMOUNT) [tac@mysvr:/home/tac]$ cmrctl show Resource List of Node cm1 ===================================================================== CLUSTER TYPE NAME STATUS DETAIL ----------- -------- -------------- -------- ------------------------ COMMON network net1 UP (private) 10.10.10.10/51210 COMMON network pub1 UP (public) ens33 COMMON cluster cls1 UP inc: net1, pub: pub1 cls1 file cls1:0 UP /dev/raw/raw1 cls1 service tibero UP Database, Active Cluster (auto-restart: OFF) cls1 db tac1 UP(NMNT) tibero, /home/tac/tibero7, failed retry cnt: 0 ===================================================================== [tac@mysvr:/home/tac]$
Connect to the tac1 instance and create the database
tbsql sys/tibero@tac1
CREATE DATABASE "tibero"
USER sys IDENTIFIED BY tibero
MAXINSTANCES 8
MAXDATAFILES 256
CHARACTER set MSWIN949
NATIONAL character set UTF16
LOGFILE GROUP 0 ('/dev/raw/raw3','/dev/raw/raw22') SIZE 127M,
GROUP 1 ('/dev/raw/raw4','/dev/raw/raw23') SIZE 127M,
GROUP 2 ('/dev/raw/raw5','/dev/raw/raw24') SIZE 127M
MAXLOGFILES 100
MAXLOGMEMBERS 8
ARCHIVELOG
DATAFILE '/dev/raw/raw10' SIZE 127M
AUTOEXTEND off
DEFAULT TABLESPACE USR
DATAFILE '/dev/raw/raw11' SIZE 127M
AUTOEXTEND off
DEFAULT TEMPORARY TABLESPACE TEMP
TEMPFILE '/dev/raw/raw12' SIZE 127M
AUTOEXTEND off
UNDO TABLESPACE UNDO0
DATAFILE '/dev/raw/raw9' SIZE 127M
AUTOEXTEND off ;
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac1
tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Connected to Tibero using tac1.
SQL> CREATE DATABASE "tibero" 2 USER sys IDENTIFIED BY tibero 3 MAXINSTANCES 8 4 MAXDATAFILES 256 5 CHARACTER set MSWIN949 6 NATIONAL character set UTF16 7 LOGFILE GROUP 0 ('/dev/raw/raw3','/dev/raw/raw22') SIZE 127M, 8 GROUP 1 ('/dev/raw/raw4','/dev/raw/raw23') SIZE 127M, 9 GROUP 2 ('/dev/raw/raw5','/dev/raw/raw24') SIZE 127M 10 MAXLOGFILES 100 11 MAXLOGMEMBERS 8 12 ARCHIVELOG 13 DATAFILE '/dev/raw/raw10' SIZE 127M 14 AUTOEXTEND off 15 DEFAULT TABLESPACE USR 16 DATAFILE '/dev/raw/raw11' SIZE 127M 17 AUTOEXTEND off 18 DEFAULT TEMPORARY TABLESPACE TEMP 19 TEMPFILE '/dev/raw/raw12' SIZE 127M 20 AUTOEXTEND off 21 UNDO TABLESPACE UNDO0 22 DATAFILE '/dev/raw/raw9' SIZE 127M 23 AUTOEXTEND off ;
Database created.
SQL> q
Creating redo and undo for TAC node 2 and enabling the thread
Reboot the tac1 node in NORMAL mode, create the undo and redo, then enable thread 1.
CREATE UNDO TABLESPACE UNDO1 DATAFILE '/dev/raw/raw28' SIZE 127M AUTOEXTEND off;
create tablespace syssub datafile '/dev/raw/raw12' SIZE 127M AUTOEXTEND off;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 ('/dev/raw/raw6','/dev/raw/raw25') size 127M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 ('/dev/raw/raw7','/dev/raw/raw26') size 127M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('/dev/raw/raw8','/dev/raw/raw27') size 127M;
ALTER DATABASE ENABLE PUBLIC THREAD 1;
[tac@mysvr:/home/tac]$ tbboot Listener port = 21000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. Tibero instance started up (NORMAL mode). [tac@mysvr:/home/tac]$ tbsql sys/tibero@tac1
tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.