To verify that the installation completed successfully, check that there are no installation-related errors with the docker-compose logs -f [service name] command.
docker-compose logs pgadmin
[docker_test@centos7:/data1/docker_test/compose/postgresql]$ docker-compose logs -f postgres pgadmin
Attaching to pgadmin, postgres
pgadmin  | NOTE: Configuring authentication for SERVER mode.
pgadmin  |
pgadmin  | pgAdmin 4 - Application Initialisation
pgadmin  | ======================================
pgadmin  |
pgadmin  | [2023-03-29 09:20:40 +0000] [1] [INFO] Starting gunicorn 20.1.0
pgadmin  | [2023-03-29 09:20:40 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
pgadmin  | [2023-03-29 09:20:40 +0000] [1] [INFO] Using worker: gthread
pgadmin  | [2023-03-29 09:20:40 +0000] [92] [INFO] Booting worker with pid: 92
........................
postgres | 2023-03-29 18:26:44.695 KST [69] LOG:  checkpoint starting: shutdown immediate
postgres | 2023-03-29 18:26:44.696 KST [69] LOG:  checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.002 s; sync files=0, longest=0.000 s, average=0.000 s; distance=0 kB, estimate=8463 kB
postgres | 2023-03-29 18:26:44.700 KST [1] LOG:  database system is shut down
postgres |
postgres | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres |
postgres | 2023-03-29 18:26:58.554 KST [1] LOG:  starting PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
postgres | 2023-03-29 18:26:58.554 KST [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
postgres | 2023-03-29 18:26:58.554 KST [1] LOG:  listening on IPv6 address "::", port 5432
postgres | 2023-03-29 18:26:58.579 KST [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2023-03-29 18:26:58.581 KST [29] LOG:  database system was shut down at 2023-03-29 18:26:44 KST
postgres | 2023-03-29 18:26:58.584 KST [1] LOG:  database system is ready to accept connections
If the pgadmin process does not have permission to create files, the following error occurs.
pgadmin | ERROR : Failed to create the directory /var/lib/pgadmin/sessions:
[docker_test@centos7:/data1/docker_test/compose/postgresql]$ docker-compose up
Creating network "postgresql_default" with the default driver
Creating postgres ... done
Creating pgadmin  ... done
Attaching to postgres, pgadmin
postgres |
postgres | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres |
postgres | 2023-03-29 18:18:48.407 KST [1] LOG:  starting PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
postgres | 2023-03-29 18:18:48.408 KST [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
postgres | 2023-03-29 18:18:48.408 KST [1] LOG:  listening on IPv6 address "::", port 5432
postgres | 2023-03-29 18:18:48.497 KST [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2023-03-29 18:18:48.502 KST [29] LOG:  database system was shut down at 2023-03-29 18:18:40 KST
postgres | 2023-03-29 18:18:48.506 KST [1] LOG:  database system is ready to accept connections
pgadmin  | ERROR  : Failed to create the directory /var/lib/pgadmin/sessions:
pgadmin  |           [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
pgadmin  | HINT   : Create the directory /var/lib/pgadmin/sessions, ensure it is writeable by
pgadmin  |          'pgadmin', and try again, or, create a config_local.py file
pgadmin  |          and override the SESSION_DB_PATH setting per
pgadmin  |          https://www.pgadmin.org/docs/pgadmin4/6.21/config_py.html
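A common fix is to hand the bind-mounted host directory over to the UID the official dpage/pgadmin4 image runs as (5050). A minimal sketch, assuming the compose file bind-mounts a local ./pgadmin directory to /var/lib/pgadmin (the host path is an assumption):
# ./pgadmin is a hypothetical host path for the /var/lib/pgadmin bind mount
sudo chown -R 5050:5050 ./pgadmin
docker-compose restart pgadmin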
Checking the Docker container status
Check the status with docker-compose ps -a (State should be Up; also check the port mapping information).
pgadmin was installed on port 5555, and postgresql uses port 5432.
Port mapping format => [host port] : [container port]
[docker_test@centos7:/data1/docker_test/compose/postgresql]$ docker-compose ps -a
  Name                Command              State                    Ports
------------------------------------------------------------------------------------------------
pgadmin    /entrypoint.sh                  Up      443/tcp, 0.0.0.0:5555->80/tcp,:::5555->80/tcp
postgres   docker-entrypoint.sh postgres   Up      0.0.0.0:5432->5432/tcp,:::5432->5432/tcp
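To confirm the mapped host ports actually answer, a quick check from the host (hostname and ports follow the mapping above; nc is only needed if installed):
curl -I http://localhost:5555          # pgAdmin web UI should return an HTTP status line
nc -z localhost 5432 && echo "postgres port open"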
Connecting to PostgreSQL
If the installation completed successfully, you can connect to postgresql through pgadmin and check its information.
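You can also connect from the command line with psql inside the container; a minimal sketch, assuming the container name postgres and a POSTGRES_USER of postgres from the compose environment:
docker exec -it postgres psql -U postgres -c "SELECT version();"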
[tac@mysvr:/home/tac]$ . tac.profile
[tac@mysvr:/home/tac]$ tbcm -b
######################### WARNING #########################
#   You are trying to start the CM-fence function.        #
###########################################################
You are not 'root'. Proceed anyway without fence? (y/N)y
CM Guard daemon started up.
CM-fence enabled.
TBCM 7.1.1 (Build 258584)
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero cluster manager started up. Local node name is (cm1:61040).
[tac@mysvr:/home/tac]$ TB_SID=tas1 tbboot nomount
Listener port = 3000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NOMOUNT mode).
TAS diskspace creation
Connect to tas1 with tbsql and create the diskspace.
CREATE DISKSPACE ds0 FORCE EXTERNAL REDUNDANCY
FAILGROUP FG1 DISK
'/dev/mapper/vg1-vol_1G_01' NAME FG_DISK1,
'/dev/mapper/vg1-vol_1G_02' NAME FG_DISK2,
'/dev/mapper/vg1-vol_1G_03' NAME FG_DISK3,
'/dev/mapper/vg1-vol_1G_04' NAME FG_DISK4,
'/dev/mapper/vg2-vol_1G_01' NAME FG_DISK5,
'/dev/mapper/vg2-vol_1G_02' NAME FG_DISK6,
'/dev/mapper/vg2-vol_1G_03' NAME FG_DISK7,
'/dev/mapper/vg2-vol_1G_04' NAME FG_DISK8
/
[tac@mysvr:/home/tac]$ TB_SID=tas1 tbboot nomount
Listener port = 3000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NOMOUNT mode).
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tas1
tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Connected to Tibero using tas1.
SQL> CREATE DISKSPACE ds0 FORCE EXTERNAL REDUNDANCY
   2 FAILGROUP FG1 DISK
   3 '/dev/mapper/vg1-vol_1G_01' NAME FG_DISK1,
   4 '/dev/mapper/vg1-vol_1G_02' NAME FG_DISK2,
   5 '/dev/mapper/vg1-vol_1G_03' NAME FG_DISK3,
   6 '/dev/mapper/vg1-vol_1G_04' NAME FG_DISK4,
   7 '/dev/mapper/vg2-vol_1G_01' NAME FG_DISK5,
   8 '/dev/mapper/vg2-vol_1G_02' NAME FG_DISK6,
   9 '/dev/mapper/vg2-vol_1G_03' NAME FG_DISK7,
  10 '/dev/mapper/vg2-vol_1G_04' NAME FG_DISK8
  11 /
Diskspace 'DS0' created.
SQL> q
Disconnected.
Registering the TAC node 1 CM (Tibero Cluster Manager) membership
cmrctl add network --nettype private --ipaddr 10.10.10.10 --portno 51210 --name net1
cmrctl add network --nettype public --ifname ens33 --name pub1
cmrctl add cluster --incnet net1 --pubnet pub1 --cfile "+/dev/mapper/vg1-vol_1G_01,/dev/mapper/vg1-vol_1G_02,/dev/mapper/vg1-vol_1G_03,/dev/mapper/vg1-vol_1G_04,/dev/mapper/vg2-vol_1G_01,/dev/mapper/vg2-vol_1G_02,/dev/mapper/vg2-vol_1G_03,/dev/mapper/vg2-vol_1G_04" --name cls1
cmrctl start cluster --name cls1
cmrctl add service --name tas --type as --cname cls1
cmrctl add as --name tas1 --svcname tas --dbhome $CM_HOME
cmrctl add service --name tibero --cname cls1
cmrctl add db --name tac1 --svcname tibero --dbhome $CM_HOME
cmrctl start as --name tas1
cmrctl show
[tac@mysvr:/home/tac]$ cmrctl start as --name tas1
Listener port = 3000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
[tac@mysvr:/home/tac]$ cmrctl show
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network net1 UP (private) 10.10.10.10/51210
COMMON network pub1 UP (public) ens33
COMMON cluster cls1 UP inc: net1, pub: pub1
cls1 file cls1:0 UP +0
cls1 file cls1:1 UP +1
cls1 file cls1:2 UP +2
cls1 service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cls1 service tibero DOWN Database, Active Cluster (auto-restart: OFF)
cls1 as tas1 UP(NRML) tas, /home/tac/tibero7, failed retry cnt: 0
cls1 db tac1 DOWN tibero, /home/tac/tibero7, failed retry cnt: 0
=====================================================================
[tac@mysvr:/home/tac]$
TAS diskspace activation
tbsql sys/tibero@tas1
ALTER DISKSPACE ds0 ADD THREAD 1;
database create
cmrctl start db --name tac1 --option "-t nomount"
tbsql sys/tibero@tac1
CREATE DATABASE "tibero"
USER sys IDENTIFIED BY tibero
MAXINSTANCES 8
MAXDATAFILES 256
CHARACTER set MSWIN949
NATIONAL character set UTF16
LOGFILE GROUP 0 '+DS0/redo001.redo' SIZE 100M,
GROUP 1 '+DS0/redo011.redo' SIZE 100M,
GROUP 2 '+DS0/redo021.redo' SIZE 100M
MAXLOGFILES 100
MAXLOGMEMBERS 8
ARCHIVELOG
DATAFILE '+DS0/system001.dtf' SIZE 128M
AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED
DEFAULT TABLESPACE USR
DATAFILE '+DS0/usr001.dtf' SIZE 128M
AUTOEXTEND ON NEXT 16M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
DEFAULT TEMPORARY TABLESPACE TEMP
TEMPFILE '+DS0/temp001.dtf' SIZE 128M
AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
UNDO TABLESPACE UNDO0
DATAFILE '+DS0/undo001.dtf' SIZE 128M
AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
[tac@mysvr:/home/tac]$ cmrctl start db --name tac1 --option "-t nomount"
Listener port = 21000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NOMOUNT mode).
BOOT SUCCESS! (MODE : NOMOUNT)
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac1
tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Connected to Tibero using tac1.
SQL> CREATE DATABASE "tibero"
   2 USER sys IDENTIFIED BY tibero
   3 MAXINSTANCES 8
   4 MAXDATAFILES 256
   5 CHARACTER set MSWIN949
   6 NATIONAL character set UTF16
   7 LOGFILE GROUP 0 '+DS0/redo001.redo' SIZE 100M,
   8 GROUP 1 '+DS0/redo011.redo' SIZE 100M,
   9 GROUP 2 '+DS0/redo021.redo' SIZE 100M
  10 MAXLOGFILES 100
  11 MAXLOGMEMBERS 8
  12 ARCHIVELOG
  13 DATAFILE '+DS0/system001.dtf' SIZE 128M
  14 AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED
  15 DEFAULT TABLESPACE USR
  16 DATAFILE '+DS0/usr001.dtf' SIZE 128M
  17 AUTOEXTEND ON NEXT 16M MAXSIZE UNLIMITED
  18 EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  19 DEFAULT TEMPORARY TABLESPACE TEMP
  20 TEMPFILE '+DS0/temp001.dtf' SIZE 128M
  21 AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED
  22 EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  23 UNDO TABLESPACE UNDO0
  24 DATAFILE '+DS0/undo001.dtf' SIZE 128M
  25 AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED
  26 EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Database created.
SQL> q
Disconnected.
Creating UNDO, REDO, etc. for TAC node 2
cmrctl start db --name tac1
tbsql sys/tibero@tac1
CREATE UNDO TABLESPACE UNDO1 DATAFILE '+DS0/undo011.dtf' SIZE 128M AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
create tablespace syssub datafile '+DS0/syssub001.dtf' SIZE 128M autoextend on next 8M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 '+DS0/redo031.redo' size 100M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 '+DS0/redo041.redo' size 100M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 '+DS0/redo051.redo' size 100M;
ALTER DATABASE ENABLE PUBLIC THREAD 1;
[tac@mysvr:/home/tac]$ cmrctl start db --name tac1
Listener port = 21000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac1
tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Connected to Tibero using tac1.
SQL> CREATE UNDO TABLESPACE UNDO1 DATAFILE '+DS0/undo011.dtf' SIZE 128M AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Tablespace 'UNDO1' created.
SQL> create tablespace syssub datafile '+DS0/syssub001.dtf' SIZE 128M autoextend on next 8M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Tablespace 'SYSSUB' created.
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 '+DS0/redo031.redo' size 100M;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 '+DS0/redo041.redo' size 100M;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 '+DS0/redo051.redo' size 100M;
Database altered.
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 1;
Database altered.
Running the system.sh shell script
export TB_SID=tac1
$TB_HOME/scripts/system.sh -p1 tibero -p2 syscat -a1 y -a2 y -a3 y -a4 y
[tac@mysvr:/home/tac]$ export TB_SID=tac1
[tac@mysvr:/home/tac]$ $TB_HOME/scripts/system.sh -p1 tibero -p2 syscat -a1 y -a2 y -a3 y -a4 y
Creating additional system index...
Dropping agent table...
Creating client policy table ...
Creating text packages table ...
Creating the role DBA...
Creating system users & roles...
......................... (output omitted)
Start TPR
Create tudi interface
Running /home/tac/tibero7/scripts/odci.sql...
Creating spatial meta tables and views ...
Registering default spatial reference systems ...
Registering unit of measure entries...
Creating internal system jobs...
Creating Japanese Lexer epa source ...
Creating internal system notice queue ...
Creating sql translator profiles ...
Creating agent table...
Creating additional static views using dpv...
Done.
For details, check /home/tac/tibero7/instance/tac1/log/system_init.log.
Performing the TAC node 2 tasks
Registering the TAC node 2 CM (Tibero Cluster Manager) membership
tbcm -b
cmrctl add network --nettype private --ipaddr 10.10.10.20 --portno 51210 --name net2
cmrctl add network --nettype public --ifname ens33 --name pub2
cmrctl add cluster --incnet net2 --pubnet pub2 --cfile "+/dev/mapper/vg1-vol_1G_01,/dev/mapper/vg1-vol_1G_02,/dev/mapper/vg1-vol_1G_03,/dev/mapper/vg1-vol_1G_04,/dev/mapper/vg2-vol_1G_01,/dev/mapper/vg2-vol_1G_02,/dev/mapper/vg2-vol_1G_03,/dev/mapper/vg2-vol_1G_04" --name cls1
cmrctl start cluster --name cls1
#cmrctl add service --name tas --type as --cname cls1
cmrctl add as --name tas2 --svcname tas --dbhome $CM_HOME
#cmrctl add service --name tibero --cname cls1
cmrctl add db --name tac2 --svcname tibero --dbhome $CM_HOME
[tac@mysvr:/home/tac]$ tbcm -b
CM Guard daemon started up.
CM-fence enabled.
TBCM 7.1.1 (Build 258584)
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero cluster manager started up. Local node name is (cm2:61040).
cmrctl start as --name tas2
cmrctl start db --name tac2
[tac@mysvr:/home/tac]$ cmrctl start as --name tas2
Listener port = 3000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
[tac@mysvr:/home/tac]$ cmrctl start db --name tac2
Listener port = 21000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac2
Connecting to node 2 (tac2) and checking
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac2
tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Connected to Tibero using tac2.
SQL> ls
NAME                               SUBNAME                  TYPE
---------------------------------- ------------------------ --------------------
SPH_REPORT_DIR                                              DIRECTORY
TPR_REPORT_DIR                                              DIRECTORY
TPR_TIP_DIR                                                 DIRECTORY
NULL_VERIFY_FUNCTION                                        FUNCTION
TB_COMPLEXITY_CHECK                                         FUNCTION
TB_STRING_DISTANCE                                          FUNCTION
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------------------------------
DB_NAME
----------------------------------------
HOST_NAME                                                       PARALLEL
--------------------------------------------------------------- --------
   THREAD# VERSION
---------- --------
STARTUP_TIME
--------------------------------------------------------------------------------
STATUS           SHUTDOWN_PENDING
---------------- ----------------
TIP_FILE
--------------------------------------------------------------------------------
              1 tac2
tibero
mysvr                                                           YES
         1 7
2023/03/14
NORMAL           NO
/home/tac/tibero7/config/tac2.tip
1 row selected.
Checking the status of all TAC nodes
You can check the status of each TAC instance with the cmrctl show all command.
$ cmrctl show all
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network net1 UP (private) 10.10.10.10/51210
COMMON network pub1 UP (public) ens33
COMMON cluster cls1 UP inc: net1, pub: pub1
cls1 file cls1:0 UP +0
cls1 file cls1:1 UP +1
cls1 file cls1:2 UP +2
cls1 service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cls1 service tibero UP Database, Active Cluster (auto-restart: OFF)
cls1 as tas1 UP(NRML) tas, /home/tac/tibero7, failed retry cnt: 0
cls1 db tac1 UP(NRML) tibero, /home/tac/tibero7, failed retry cnt: 0
=====================================================================
Resource List of Node cm2
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network net2 UP (private) 10.10.10.20/51210
COMMON network pub2 UP (public) ens33
COMMON cluster cls1 UP inc: net2, pub: pub2
cls1 file cls1:0 UP +0
cls1 file cls1:1 UP +1
cls1 file cls1:2 UP +2
cls1 service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cls1 service tibero UP Database, Active Cluster (auto-restart: OFF)
cls1 as tas2 UP(NRML) tas, /home/tac/tibero7, failed retry cnt: 0
cls1 db tac2 UP(NRML) tibero, /home/tac/tibero7, failed retry cnt: 0
=====================================================================
To verify that the disks are shared correctly, boot the tac1 and tac2 machines and check the information.
Once the configuration is complete, you can connect to each server and check the added disks with the ls command.
You can also check the added disk information with the fdisk -l command.
Additional network configuration (Host-only)
Add a private network for interconnect communication between the VMware machines.
Add the network on each VM machine.
Network Adapter -> Add -> Host-only
Once it has been added, connect to each server and assign the IP addresses (a command-line sketch follows the node screenshots below).
TAC1 :10.10.10.10
TAC2 :10.10.10.20
tac1 node
tac2 node
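For reference, the interconnect IP can also be assigned from the command line with nmcli; a sketch, assuming the new host-only adapter shows up as ens37 (a hypothetical name; check the actual one with ip link). Use 10.10.10.10/24 on tac1 and 10.10.10.20/24 on tac2:
nmcli connection add type ethernet ifname ens37 con-name ic0 ipv4.method manual ipv4.addresses 10.10.10.10/24
nmcli connection up ic0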
Creating RAW devices
Create raw devices from the added HDD disks.
Create partitions
Create physical volumes
Create volume groups
Create logical volumes
Map the logical volumes to raw devices
Creating partitions
Run fdisk /dev/sdb, then enter n -> p -> 1 -> Enter -> Enter -> w.
fdisk /dev/sdb
[root@mysvr:/root]$ fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x96b3f8bb.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-10485759, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759):
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@mysvr:/root]$
fdisk /dev/sdc
[root@mysvr:/root]$ fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x536414f4.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-10485759, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759):
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Creating physical volumes
After checking the disks, create the physical volumes with the pvcreate command (the commands themselves are sketched after the disk listing below).
[root@mysvr:/root]$ ls -l /dev/sd*
brw-rw----. 1 root disk 8,  0  3월  9 11:45 /dev/sda
brw-rw----. 1 root disk 8,  1  3월  9 11:45 /dev/sda1
brw-rw----. 1 root disk 8,  2  3월  9 11:45 /dev/sda2
brw-rw----. 1 root disk 8, 16  3월  9 12:41 /dev/sdb
brw-rw----. 1 root disk 8, 17  3월  9 12:41 /dev/sdb1
brw-rw----. 1 root disk 8, 32  3월  9 12:43 /dev/sdc
brw-rw----. 1 root disk 8, 33  3월  9 12:43 /dev/sdc1
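The pvcreate/vgcreate commands themselves were not captured; a minimal sketch, assuming the /dev/sdb1 and /dev/sdc1 partitions created above and the vg1/vg2 volume groups that appear in the vgdisplay output below:
pvcreate /dev/sdb1 /dev/sdc1    # register the new partitions as LVM physical volumes
vgcreate vg1 /dev/sdb1          # one volume group per disk
vgcreate vg2 /dev/sdc1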
[root@mysvr:/root]$ pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               centos
  PV Size               <39.00 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              9983
  Free PE               1
  Allocated PE          9982
  PV UUID               f6dArV-MiVJ-lCgG-CfbY-4PCD-f1d0-uwVkeY

  "/dev/sdb1" is a new physical volume of "<5.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               <5.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               aly1AZ-7hzb-T4qM-JAdv-gCWS-Sj2f-5HVDDs

  "/dev/sdc1" is a new physical volume of "<5.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc1
  VG Name
  PV Size               <5.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               Q8sQZR-SBvK-mWRk-8j8o-Mdnp-eWHc-IUFZ2x
[root@mysvr:/root]$ vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       0 / 0
  Free  PE / Size       1279 / <5.00 GiB
  VG UUID               6IWGT6-XHgP-OESg-3Gdh-QOUW-p01L-Cx5INn

  --- Volume group ---
  VG Name               vg2
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       0 / 0
  Free  PE / Size       1279 / <5.00 GiB
  VG UUID               NgC43A-4Cyq-gKHY-8ed7-8UKY-xh8n-p2YJfo

  --- Volume group ---
  VG Name               centos
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <39.00 GiB
  PE Size               4.00 MiB
  Total PE              9983
  Alloc PE / Size       9982 / 38.99 GiB
  Free  PE / Size       1 / 4.00 MiB
  VG UUID               N8yCf2-fqDT-miVi-neoj-A1i8-mFWE-X7dMRv
Creating logical volumes
lvcreate -L 100M -n cm_01 vg1
lvcreate -L [size] -n [logical volume name] [volume group name]
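For example, the volumes that appear later in this post could be created like this (names and sizes are taken from the lvdisplay output below and the diskspace section; adjust to your own layout):
lvcreate -L 128M -n contorl_01 vg1    # control file volume (name as shown by lvdisplay below)
lvcreate -L 128M -n redo001 vg1       # redo log volume
lvcreate -L 1G -n vol_1G_01 vg1       # TAS diskspace volume (vol_1G_01 .. vol_1G_04 per VG)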
[root@mysvr:/root]$ lvdisplay vg1
  --- Logical volume ---
  LV Path                /dev/vg1/contorl_01
  LV Name                contorl_01
  VG Name                vg1
  LV UUID                u8hnpe-DbAE-YgKj-qu5u-cPSI-HN3l-l0kYok
  LV Write Access        read/write
  LV Creation host, time mysvr, 2023-03-09 16:34:02 +0900
  LV Status              available
  # open                 0
  LV Size                128.00 MiB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4

  --- Logical volume ---
  LV Path                /dev/vg1/redo001
  LV Name                redo001
  VG Name                vg1
  LV UUID                dPbgfi-Uqtp-wObd-eW4e-QDtd-E7rj-ylgWpv
  LV Write Access        read/write
  LV Creation host, time mysvr, 2023-03-09 16:34:02 +0900
  LV Status              available
  # open                 0
  LV Size                128.00 MiB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:5
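The udev rules that actually bind the logical volumes to raw devices are not shown above. A minimal sketch of /etc/udev/rules.d/60-raw.rules for the first two volumes, assuming the RHEL/CentOS 7 style DM_VG_NAME/DM_LV_NAME match and a tac:dba owner (names, numbering, and ownership are assumptions):
# bind each LV to a raw device (add one line per logical volume)
ACTION=="add", ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="contorl_01", RUN+="/usr/bin/raw /dev/raw/raw1 %N"
ACTION=="add", ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="redo001", RUN+="/usr/bin/raw /dev/raw/raw2 %N"
# let the database owner access the bound raw devices
ACTION=="add", KERNEL=="raw*", OWNER="tac", GROUP="dba", MODE="0660"
After editing the rules file, the udevadm commands below reload and apply the rules.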
udevadm control --reload-rules
udevadm trigger --action=add
raw -qa
ls -al /dev/raw*
[root@mysvr:/root]$ udevadm trigger --action=add
[root@mysvr:/root]$ raw -qa
/dev/raw/raw1:  bound to major 253, minor 3
/dev/raw/raw2:  bound to major 253, minor 4
/dev/raw/raw3:  bound to major 253, minor 5
/dev/raw/raw4:  bound to major 253, minor 6
/dev/raw/raw5:  bound to major 253, minor 7
/dev/raw/raw6:  bound to major 253, minor 8
/dev/raw/raw7:  bound to major 253, minor 9
/dev/raw/raw8:  bound to major 253, minor 10
/dev/raw/raw9:  bound to major 253, minor 11
/dev/raw/raw10: bound to major 253, minor 12
/dev/raw/raw11: bound to major 253, minor 13
/dev/raw/raw12: bound to major 253, minor 14
/dev/raw/raw13: bound to major 253, minor 15
/dev/raw/raw14: bound to major 253, minor 16
/dev/raw/raw15: bound to major 253, minor 17
/dev/raw/raw16: bound to major 253, minor 18
/dev/raw/raw17: bound to major 253, minor 19
/dev/raw/raw18: bound to major 253, minor 20
/dev/raw/raw19: bound to major 253, minor 25
/dev/raw/raw20: bound to major 253, minor 26
/dev/raw/raw21: bound to major 253, minor 27
/dev/raw/raw22: bound to major 253, minor 28
/dev/raw/raw23: bound to major 253, minor 29
/dev/raw/raw24: bound to major 253, minor 30
/dev/raw/raw25: bound to major 253, minor 31
/dev/raw/raw26: bound to major 253, minor 32
/dev/raw/raw27: bound to major 253, minor 33
/dev/raw/raw28: bound to major 253, minor 34
/dev/raw/raw29: bound to major 253, minor 35
/dev/raw/raw30: bound to major 253, minor 36
/dev/raw/raw31: bound to major 253, minor 37
/dev/raw/raw32: bound to major 253, minor 38
/dev/raw/raw33: bound to major 253, minor 39
/dev/raw/raw34: bound to major 253, minor 40
Rebooting the Linux VMs
To verify the changes were applied correctly, restart each server.
Boot in order: the tac1 machine first, then the tac2 machine.
[tac@mysvr:/home/tac]$ tbcm -b
######################### WARNING #########################
#   You are trying to start the CM-fence function.        #
###########################################################
You are not 'root'. Proceed anyway without fence? (y/N)y
CM Guard daemon started up.
CM-fence enabled.
Tibero cluster manager (cm1) startup failed!
[tac@mysvr:/home/tac]$ cmrctl add network --nettype private --ipaddr 10.10.10.10 --portno 51210 --name net1
Resource add success! (network, net1)
[tac@mysvr:/home/tac]$ cmrctl add network --nettype public --ifname ens33 --name pub1
Resource add success! (network, pub1)
[tac@mysvr:/home/tac]$ cmrctl add cluster --incnet net1 --pubnet pub1 --cfile "/dev/raw/raw1" --name cls1
Resource add success! (cluster, cls1)
[tac@mysvr:/home/tac]$ cmrctl start cluster --name cls1
MSG SENDING SUCCESS!
[tac@mysvr:/home/tac]$ cmrctl show
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network net1 UP (private) 10.10.10.10/51210
COMMON network pub1 UP (public) ens33
COMMON cluster cls1 UP inc: net1, pub: pub1
cls1 file cls1:0 UP /dev/raw/raw1
=====================================================================
Registering the Tibero DB cluster service in CM
#TAC Cluster
#service name = DB_NAME (tibero)
cmrctl add service --name tibero --cname cls1 --type db --mode AC
cmrctl show
cmrctl add db --name tac1 --svcname tibero --dbhome $CM_HOME
cmrctl show
[tac@mysvr:/home/tac]$ cmrctl add service --name tibero --cname cls1 --type db --mode AC
Resource add success! (service, tibero)
[tac@mysvr:/home/tac]$ cmrctl add db --name tac1 --svcname tibero --dbhome $CM_HOME
Resource add success! (db, tac1)
[tac@mysvr:/home/tac]$ cmrctl show
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network net1 UP (private) 10.10.10.10/51210
COMMON network pub1 UP (public) ens33
COMMON cluster cls1 UP inc: net1, pub: pub1
cls1 file cls1:0 UP /dev/raw/raw1
cls1 service tibero DOWN Database, Active Cluster (auto-restart: OFF)
cls1 db tac1 DOWN tibero, /home/tac/tibero7, failed retry cnt: 0
=====================================================================
Database creation
Boot the tac1 instance in nomount mode and check its status.
cmrctl start db --name tac1 --option "-t nomount"
[tac@mysvr:/home/tac]$ cmrctl start db --name tac1 --option "-t nomount"
BOOT SUCCESS! (MODE : NOMOUNT)
[tac@mysvr:/home/tac]$ cmrctl show
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network net1 UP (private) 10.10.10.10/51210
COMMON network pub1 UP (public) ens33
COMMON cluster cls1 UP inc: net1, pub: pub1
cls1 file cls1:0 UP /dev/raw/raw1
cls1 service tibero UP Database, Active Cluster (auto-restart: OFF)
cls1 db tac1 UP(NMNT) tibero, /home/tac/tibero7, failed retry cnt: 0
=====================================================================
[tac@mysvr:/home/tac]$
Connect to the tac1 instance and create the database.
tbsql sys/tibero@tac1
CREATE DATABASE "tibero"
USER sys IDENTIFIED BY tibero
MAXINSTANCES 8
MAXDATAFILES 256
CHARACTER set MSWIN949
NATIONAL character set UTF16
LOGFILE GROUP 0 ('/dev/raw/raw3','/dev/raw/raw22') SIZE 127M,
GROUP 1 ('/dev/raw/raw4','/dev/raw/raw23') SIZE 127M,
GROUP 2 ('/dev/raw/raw5','/dev/raw/raw24') SIZE 127M
MAXLOGFILES 100
MAXLOGMEMBERS 8
ARCHIVELOG
DATAFILE '/dev/raw/raw10' SIZE 127M
AUTOEXTEND off
DEFAULT TABLESPACE USR
DATAFILE '/dev/raw/raw11' SIZE 127M
AUTOEXTEND off
DEFAULT TEMPORARY TABLESPACE TEMP
TEMPFILE '/dev/raw/raw12' SIZE 127M
AUTOEXTEND off
UNDO TABLESPACE UNDO0
DATAFILE '/dev/raw/raw9' SIZE 127M
AUTOEXTEND off ;
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac1
tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Connected to Tibero using tac1.
SQL> CREATE DATABASE "tibero"
   2 USER sys IDENTIFIED BY tibero
   3 MAXINSTANCES 8
   4 MAXDATAFILES 256
   5 CHARACTER set MSWIN949
   6 NATIONAL character set UTF16
   7 LOGFILE GROUP 0 ('/dev/raw/raw3','/dev/raw/raw22') SIZE 127M,
   8 GROUP 1 ('/dev/raw/raw4','/dev/raw/raw23') SIZE 127M,
   9 GROUP 2 ('/dev/raw/raw5','/dev/raw/raw24') SIZE 127M
  10 MAXLOGFILES 100
  11 MAXLOGMEMBERS 8
  12 ARCHIVELOG
  13 DATAFILE '/dev/raw/raw10' SIZE 127M
  14 AUTOEXTEND off
  15 DEFAULT TABLESPACE USR
  16 DATAFILE '/dev/raw/raw11' SIZE 127M
  17 AUTOEXTEND off
  18 DEFAULT TEMPORARY TABLESPACE TEMP
  19 TEMPFILE '/dev/raw/raw12' SIZE 127M
  20 AUTOEXTEND off
  21 UNDO TABLESPACE UNDO0
  22 DATAFILE '/dev/raw/raw9' SIZE 127M
  23 AUTOEXTEND off ;
Database created.
SQL> q
Creating redo and undo for TAC node 2 and enabling the thread
Reboot the tac1 node in NORMAL mode, then create the undo and redo and enable thread 1.
CREATE UNDO TABLESPACE UNDO1 DATAFILE '/dev/raw/raw28' SIZE 127M AUTOEXTEND off;
create tablespace syssub datafile '/dev/raw/raw12' SIZE 127M AUTOEXTEND off;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 ('/dev/raw/raw6','/dev/raw/raw25') size 127M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 ('/dev/raw/raw7','/dev/raw/raw26') size 127M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('/dev/raw/raw8','/dev/raw/raw27') size 127M;
ALTER DATABASE ENABLE PUBLIC THREAD 1;
[tac@mysvr:/home/tac]$ tbboot
Listener port = 21000
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NORMAL mode).
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac1
tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
This section shares how to check the SQL statements that are executed against MySQL.
First connect to the MySQL DB and check the logging configuration.
Use the SHOW VARIABLES command to check the current general log setting and the log file location in MySQL.
SHOW VARIABLES LIKE '%general%';
Execution result
mysql> SHOW VARIABLES LIKE '%general%';
+------------------+---------------------------------+
| Variable_name    | Value                           |
+------------------+---------------------------------+
| general_log      | ON                              |
| general_log_file | /var/lib/mysql/24a15adb0cb3.log |
+------------------+---------------------------------+
2 rows in set (0.45 sec)
Checking the log (tail -f general_log_file)
You can see the SQL statements that were executed against the MySQL DB.
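For example, using the log file path returned by SHOW VARIABLES above:
tail -f /var/lib/mysql/24a15adb0cb3.log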
2023-03-07T10:29:12.676978Z   103 Query      show tables
2023-03-07T10:29:28.580687Z   103 Query      SELECT DATABASE()
2023-03-07T10:29:37.020534Z   103 Query      show databases
2023-03-07T10:29:45.540610Z   103 Query      SELECT DATABASE()
2023-03-07T10:29:45.540744Z   103 Init DB    book_db
2023-03-07T10:29:45.541425Z   103 Query      show databases
2023-03-07T10:29:45.541874Z   103 Query      show tables
2023-03-07T10:29:45.542875Z   103 Field List Book_book
2023-03-07T10:29:45.749166Z   103 Field List Book_book_voter
2023-03-07T10:29:45.773772Z   103 Field List auth_group
2023-03-07T10:29:45.822584Z   103 Field List auth_group_permissions
2023-03-07T10:29:45.848494Z   103 Field List auth_permission
2023-03-07T10:29:45.889815Z   103 Field List auth_user
2023-03-07T10:29:45.977628Z   103 Field List auth_user_groups
2023-03-07T10:29:46.019426Z   103 Field List auth_user_user_permissions
2023-03-07T10:29:46.038964Z   103 Field List django_admin_log
2023-03-07T10:29:46.047558Z   103 Field List django_content_type
2023-03-07T10:29:46.187867Z   103 Field List django_migrations
2023-03-07T10:29:46.208062Z   103 Field List django_session
2023-03-07T10:29:51.093263Z   103 Query      show tables
2023-03-07T10:30:02.581078Z   103 Query      select * From auth_user
2023-03-07T10:30:08.757684Z   103 Query      select * From auth_user
2023-03-07T10:30:24.740727Z   103 Query      select * from Book_book
2023-03-07T10:30:57.684327Z   103 Query      SET GLOBAL general_log = 'OFF'
Changing the MySQL log settings
You can also turn the log on/off with the SET GLOBAL general_log command.
After running that command, run the FLUSH LOGS command.
SET GLOBAL general_log = 'OFF';
FLUSH LOGS ;
SET GLOBAL general_log = 'ON';
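SET GLOBAL only lasts until the server restarts; to keep the general log enabled across restarts, the same settings can go in the server configuration file. A sketch (the file path and log path here are assumptions):
# /etc/my.cnf (hypothetical path), [mysqld] section
[mysqld]
general_log      = 1
general_log_file = /var/lib/mysql/general.log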
R version 3.6.0 (2019-04-26) -- "Planting of a Tree"
Copyright (C) 2019 The R Foundation for Statistical Computing
Platform: x86_64-redhat-linux-gnu (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
Tibero RODBC integration test
This is sample code from an integration test with Tibero7.
The unixODBC connection setup must be done beforehand.
.odbc.ini configuration
[ODBC Data Sources]
Tibero7 = Tibero7 ODBC driver
[ODBC]
Trace = 1
TraceFile = /home/unixODBC/unixODBC-2.3.9/trace/odbc.trace
[Tibero7]
Driver = /data1/7FS02/tibero7/client/lib/libtbodbc.so
Description = Tibero ODBC driver for Tibero
server = 192.168.116.12
#server can be an IP address or a hostname
port = 35000
#port is the Tibero listener port
database = tibero
#database is the DB_NAME
User =tibero
Password = tmax
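Before moving on to R, the DSN can be verified with unixODBC's isql tool, using the DSN name and credentials defined above:
isql -v Tibero7 tibero tmax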
R integration test
library(RODBC)
db <- odbcConnect("Tibero7", uid="tibero", pwd="tmax")
query_result <- sqlQuery(db, "SELECT * FROM DUAL")
query_result
> library(RODBC)
> db <- odbcConnect("Tibero7", uid="tibero", pwd="tmax")
> query_result <- sqlQuery(db, "SELECT * FROM DUAL")
> query_result
  DUMMY
1     X
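When the test is finished, the connection can be closed with RODBC's odbcClose():
odbcClose(db)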