
This post describes how to install Tomcat 9 on CentOS 7.

 

Downloading the Tomcat binary

Go to the Tomcat site and download the Tomcat 9 binary.

https://tomcat.apache.org/download-90.cgi

 


Extracting the binary and configuring the environment

Extract the downloaded Tomcat binary on the Linux (CentOS 7) machine.

mkdir tomcat

mv apache-tomcat-9.0.81.tar.gz tomcat/

cd tomcat/

tar xvzf apache-tomcat-9.0.81.tar.gz

 

Changing the Tomcat port: edit the server.xml file.

  • Change it to the port you want to use.
  • Change 8080 -> 9090
    <Connector port="9090" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443"
               maxParameterCount="1000"
               />
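
If you prefer editing from the shell, a sed one-liner can switch the port (a minimal sketch, assuming the stock server.xml with a single HTTP connector on 8080):

# Back up server.xml and switch the HTTP connector port from 8080 to 9090
cd tomcat/apache-tomcat-9.0.81/conf
cp server.xml server.xml.bak
sed -i 's/port="8080"/port="9090"/' server.xml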

Environment variables and aliases

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el7_9.x86_64
export JRE_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el7_9.x86_64/jre
export PATH=$PATH:$JAVA_HOME/bin
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:$JRE_HOME/lib:$LD_LIBRARY_PATH

CATALINA_HOME=/home/pm5/tomcat/apache-tomcat-9.0.81
CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$CATALINA_HOME/lib/jsp-api.jar:$CATALINA_HOME/lib/servlet-api.jar
PATH=$PATH:$JAVA_HOME/bin:$CATALINA_HOME/bin
export JAVA_HOME CLASSPATH PATH CATALINA_HOME

alias tcdown="sh $CATALINA_HOME/bin/shutdown.sh -force"
alias tcup="sh $CATALINA_HOME/bin/startup.sh"
alias tclog="tail -f $CATALINA_HOME/logs/catalina.out"
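
These exports and aliases only apply to the current shell; to keep them across logins, you would typically append them to the login profile and re-source it (a sketch assuming a bash login shell and the paths used above):

# Persist the environment variables and aliases across logins
cat >> ~/.bash_profile <<'EOF'
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el7_9.x86_64
export CATALINA_HOME=/home/pm5/tomcat/apache-tomcat-9.0.81
export PATH=$PATH:$JAVA_HOME/bin:$CATALINA_HOME/bin
alias tcup="sh $CATALINA_HOME/bin/startup.sh"
alias tcdown="sh $CATALINA_HOME/bin/shutdown.sh -force"
alias tclog="tail -f $CATALINA_HOME/logs/catalina.out"
EOF
source ~/.bash_profile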

Verifying the Tomcat 9 connection

  • Start: sh $CATALINA_HOME/bin/startup.sh
  • Stop: sh $CATALINA_HOME/bin/shutdown.sh -force
11-Oct-2023 15:44:00.073 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/docs]
11-Oct-2023 15:44:00.090 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/docs] has finished in [18] ms
11-Oct-2023 15:44:00.090 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/examples]
11-Oct-2023 15:44:00.251 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/examples] has finished in [161] ms
11-Oct-2023 15:44:00.251 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/host-manager]
11-Oct-2023 15:44:00.267 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/host-manager] has finished in [16] ms
11-Oct-2023 15:44:00.267 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/manager]
11-Oct-2023 15:44:00.280 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/home/pm5/tomcat/apache-tomcat-9.0.81/webapps/manager] has finished in [13] ms
11-Oct-2023 15:44:00.283 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-9090"]
11-Oct-2023 15:44:00.297 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [430] milliseconds
  • Open the Tomcat URL: http://[ip]:9090/

  • Web page test (hello.jsp)
  • Create the file under the tomcat/apache-tomcat-9.0.81/webapps/ROOT/ directory.
<html lang="en">
    <body>
    <h2> Hello Tomcat 9 </h2>
    </body>
</html>
  • Hello Tomcat URL test: http://[ip]:9090/hello.jsp
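
You can also check from the server itself without a browser (a quick check with curl, assuming the instance is up on port 9090):

# Expect an HTTP 200 from the default page and the test JSP
curl -I http://localhost:9090/
curl http://localhost:9090/hello.jsp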


Downloading VSCode

This post describes how to convert .md documents stored in GitLab to PDF.

Converting online is also possible, but if the md document contains images, the resulting PDF may be generated without them.

In this post I'll explain how to convert MD to PDF, images included, using VSCode.

Go to https://code.visualstudio.com/ and download Visual Studio Code.

 

Installing the VSCode extension

In VSCode, install the Markdown PDF extension from the Extensions view; its export command is used in the next step.
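
If the code command-line launcher is on your PATH, the extension can also be installed from a terminal (a sketch; yzane.markdown-pdf is, to my knowledge, the marketplace ID of the Markdown PDF extension):

# Install the Markdown PDF extension via the VSCode CLI
code --install-extension yzane.markdown-pdf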

 

Converting MD to PDF

Open the md document you want to convert (README.md), right-click in the document, and click Markdown PDF: Export (pdf).

When the conversion finishes, a PDF file is created in the same directory.

 


This post covers installing PostgreSQL with docker-compose and a quick way to test it.

 

PostgreSQL compose file

version: '3.6'

services:
  postgres:
    container_name: postgres
    image: postgres:14
    restart: unless-stopped
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - TZ=Asia/Seoul
    volumes:
      - ./data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    restart: unless-stopped
    ports:
      - "5555:80"
    volumes:
      - ./pgadmin:/var/lib/pgadmin
    environment:
      - PGADMIN_DEFAULT_EMAIL=example@pgadmin.com
      - PGADMIN_DEFAULT_PASSWORD=pgadmin
      - TZ=Asia/Seoul
    depends_on:
      - postgres

 

Installing PostgreSQL

  • Create the volume directories and set permissions
    • Create the directories that will be mapped to the postgresql and pgadmin volumes.
  • docker-compose up
mkdir data pgadmin

chmod 777 data pgadmin

docker-compose up -d
[docker_test@centos7:/data1/docker_test/compose/postgresql]$ docker-compose up -d  
Pulling postgres (postgis/postgis:latest)...
latest: Pulling from postgis/postgis
f1f26f570256: Pull complete
1c04f8741265: Pull complete
dffc353b86eb: Pull complete
18c4a9e6c414: Pull complete
81f47e7b3852: Pull complete
5e26c947960d: Pull complete
a2c3dc85e8c3: Pull complete
17df73636f01: Pull complete
713535cdf17c: Pull complete
52278a39eea2: Pull complete
4ded87da67f6: Pull complete
05fae4678312: Pull complete
f06b0b681e09: Pull complete
d24b8ffa110e: Pull complete
6456362dbd08: Pull complete
be89676f5f99: Pull complete
Digest: sha256:b7a27a9fdeedc98fc28798d87d67fb17896eed5a4ff6dd300d420de5643455f2
Status: Downloaded newer image for postgis/postgis:latest
Pulling pgadmin (dpage/pgadmin4:)...
latest: Pulling from dpage/pgadmin4
63b65145d645: Pull complete
c2ae92bf3093: Pull complete
0336e7e12a6e: Pull complete
99274f1e74ce: Pull complete
c4dd61273ae2: Pull complete
d300bb5702cd: Pull complete
2d2d362d9413: Pull complete
52558f382e17: Pull complete
cb849b81cd54: Pull complete
ee15939ba2be: Pull complete
8c0ad2b8007b: Pull complete
3bbef2c63b80: Pull complete
e333b65329da: Pull complete
f4b3d9d65332: Pull complete
Digest: sha256:d914d35dc1a5cf86f65c29d44451c05b73e283529e19ffcc25f45f939c579bd9
Status: Downloaded newer image for dpage/pgadmin4:latest
Creating postgres ... done
Creating pgadmin  ... done

Verifying the installation

To confirm the installation completed without problems, run docker-compose logs -f [service name] and check for installation errors.

docker-compose logs pgadmin
[docker_test@centos7:/data1/docker_test/compose/postgresql]$ docker-compose logs -f postgres pgadmin
Attaching to pgadmin, postgres
pgadmin     | NOTE: Configuring authentication for SERVER mode.
pgadmin     | 
pgadmin     | pgAdmin 4 - Application Initialisation
pgadmin     | ======================================
pgadmin     | 
pgadmin     | [2023-03-29 09:20:40 +0000] [1] [INFO] Starting gunicorn 20.1.0
pgadmin     | [2023-03-29 09:20:40 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
pgadmin     | [2023-03-29 09:20:40 +0000] [1] [INFO] Using worker: gthread
pgadmin     | [2023-03-29 09:20:40 +0000] [92] [INFO] Booting worker with pid: 92
... (output truncated)
postgres    | 2023-03-29 18:26:44.695 KST [69] LOG:  checkpoint starting: shutdown immediate
postgres    | 2023-03-29 18:26:44.696 KST [69] LOG:  checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.002 s; sync files=0, longest=0.000 s, average=0.000 s; distance=0 kB, estimate=8463 kB
postgres    | 2023-03-29 18:26:44.700 KST [1] LOG:  database system is shut down
postgres    | 
postgres    | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres    | 
postgres    | 2023-03-29 18:26:58.554 KST [1] LOG:  starting PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
postgres    | 2023-03-29 18:26:58.554 KST [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
postgres    | 2023-03-29 18:26:58.554 KST [1] LOG:  listening on IPv6 address "::", port 5432
postgres    | 2023-03-29 18:26:58.579 KST [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres    | 2023-03-29 18:26:58.581 KST [29] LOG:  database system was shut down at 2023-03-29 18:26:44 KST
postgres    | 2023-03-29 18:26:58.584 KST [1] LOG:  database system is ready to accept connections

  • If the process cannot create files, an error like the following occurs.
    • pgadmin     | ERROR  : Failed to create the directory /var/lib/pgadmin/sessions:
[docker_test@centos7:/data1/docker_test/compose/postgresql]$ docker-compose up 
Creating network "postgresql_default" with the default driver
Creating postgres ... done
Creating pgadmin  ... done
Attaching to postgres, pgadmin
postgres    | 
postgres    | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres    | 
postgres    | 2023-03-29 18:18:48.407 KST [1] LOG:  starting PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
postgres    | 2023-03-29 18:18:48.408 KST [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
postgres    | 2023-03-29 18:18:48.408 KST [1] LOG:  listening on IPv6 address "::", port 5432
postgres    | 2023-03-29 18:18:48.497 KST [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres    | 2023-03-29 18:18:48.502 KST [29] LOG:  database system was shut down at 2023-03-29 18:18:40 KST
postgres    | 2023-03-29 18:18:48.506 KST [1] LOG:  database system is ready to accept connections
pgadmin     | ERROR  : Failed to create the directory /var/lib/pgadmin/sessions:
pgadmin     |            [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
pgadmin     | HINT   : Create the directory /var/lib/pgadmin/sessions, ensure it is writeable by
pgadmin     |          'pgadmin', and try again, or, create a config_local.py file
pgadmin     |          and override the SESSION_DB_PATH setting per
pgadmin     |          https://www.pgadmin.org/docs/pgadmin4/6.21/config_py.html
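
The chmod 777 used earlier avoids this; a tighter alternative is to hand the host directory to the container's pgadmin user (a sketch; the dpage/pgadmin4 image runs internally as UID/GID 5050):

# Give the pgadmin volume directory to the container user and restart
sudo chown -R 5050:5050 ./pgadmin
docker-compose restart pgadmin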

 

Checking the Docker container status

  • Check the status with docker-compose ps -a (State -> Up, and the port mapping info)
  • pgadmin was installed on port 5555, and postgresql uses port 5432.
  • Port mapping format => [host port] : [container port]
[docker_test@centos7:/data1/docker_test/compose/postgresql]$ docker-compose ps -a
  Name                Command              State                       Ports                    
------------------------------------------------------------------------------------------------
pgadmin    /entrypoint.sh                  Up      443/tcp, 0.0.0.0:5555->80/tcp,:::5555->80/tcp
postgres   docker-entrypoint.sh postgres   Up      0.0.0.0:5432->5432/tcp,:::5432->5432/tcp    
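
Besides pgAdmin, the database itself can be checked directly from the running container (a quick sanity test with psql, using the credentials from docker-compose.yml):

# Run psql inside the postgres container and print the server version
docker exec -it postgres psql -U postgres -c "SELECT version();"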

Connecting to PostgreSQL

  • If the installation completed normally, you can connect to postgresql through pgadmin and inspect it.

Verifying pgAdmin access

  • http://localhost:5555/browser/
  • The login account and password are the values defined in the docker-compose.yml file.
    •       - PGADMIN_DEFAULT_EMAIL=example@pgadmin.com
    •       - PGADMIN_DEFAULT_PASSWORD=pgadmin

  • Once logged in, register the postgresql server connection.
  • Right-click Servers -> click Register
  • The connection credentials are the account information defined in the docker-compose.yml file.
    •     environment:
            - POSTGRES_USER=postgres
            - POSTGRES_PASSWORD=postgres
  • Once connected, you can see the PostgreSQL database information as shown below.


What is a Markdown (MD) document?

 

Markdown is a kind of markup language; its files use the .md extension.

It is mainly used on GitHub or GitLab, as README.md files documenting a project or its source.

This post introduces the Markdown syntax commonly used on GitLab.

 

Creating a Markdown (MD) file on GitLab

For testing, create a Readme.md file in GitLab.

When a file is named Readme.md, GitLab automatically loads it for that directory and renders it on screen.

Even if test.md and readme.md exist in the same path, readme.md is the one shown.

 

  • Rendered output

  • Editing the document

On GitLab, an MD document can be edited in the browser through the Open in Web IDE menu.

Open in Web IDE -> click the Readme.md file

You can write the document in the Edit tab on the right, and the Preview Markdown tab lets you preview your edits as you go.

 

GitLab Markdown syntax

I'll write the examples below in the readme.md file to test and verify the Markdown syntax.

 

List items

You can turn items into lists using numbers and symbols.

Use numbers for ordered lists; when order doesn't matter, define list items with the *, +, or - symbols.

1. First ordered list item
1. Second List
2. Another item
   - Unordered sub-list.
3. Actual numbers don't matter, just that it's a number
   1. Ordered sub-list
   1. Next ordered sub-list item
- And another item.
  * Unordered sub-list.
    - Unordered sub-list.
     + Unordered sub-list.

 

Inserting XML text

```xml
... write your XML code here
```

Here is an XML insertion example.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<InvitationList>
  <family>
    <aunt>
      <name>Christine</name>
      <name>Stephanie</name>
    </aunt>
  </family>
</InvitationList>
```

Table example

Columns are separated with the | (pipe) character, and the header is separated from the body with |---|.

Column alignment is set with the : character.

  • Left align :---
  • Center align :---:
  • Right align ---:
| Column 1 | Column 2 | Column 3 |
|---|:---:|:---|
| Item 1 | Item 2 | Item 3 |
| Item 4 | Item 5 | Item 6 |

Heading sizes

You can adjust heading size with the # character.

The more # characters, the smaller the heading.

Headings down to H2 are rendered with an underline, it seems.

# Heading size H1
## Heading size H2
### Heading size H3
#### Heading size H4
##### Heading size H5
###### Heading size H6

Text emphasis

> Quoted emphasis

***Bold italic emphasis***

~~Strikethrough emphasis~~

Inserting images

You can insert an image with the ![image](image path) tag.
 
![image](./images/dog.png)

Code blocks

Besides the XML method above, you can insert code blocks with the <pre><code> ~~~ </code></pre> syntax.

  • <pre><code> ~~~ </code> </pre> 
  • ```  ~~~  ```
  • ``` [program language] ```  -> language syntax highlighting

The two usages are equivalent.

 

<pre>
<code>
public class JavaSampleCode {
    public static void main(String[] args) {
        System.out.println("hello Java world");
    }
}
</code>
</pre>

```
public class JavaSampleCode {
    public static void main(String[] args) {
        System.out.println("hello Java world");
    }
}
```

```java
public class JavaSampleCode {
    public static void main(String[] args) {
        System.out.println("hello Java world");
    }
}
```
 

Details and summary

  • Use this to write a collapsible summary with the detailed content folded inside.
<p>
<details>
<summary> This is the summary. </summary>

Write a <strong>brief description</strong> of the details body here.

<pre><code> Write the detailed content in this part </code></pre>

</details>
</p>

Creating the shared disks

 

Re-creating the logical volumes

  • Check the logical volumes with the lvdisplay [volume group] command.
lvdisplay vg1 | grep "LV Path"
lvdisplay vg2 | grep "LV Path"

 

[root@mysvr:/home/tac]$ lvdisplay  vg1 | grep "LV Path" 
  LV Path                /dev/vg1/redo001
  LV Path                /dev/vg1/redo011
  LV Path                /dev/vg1/redo021
  LV Path                /dev/vg1/redo031
  LV Path                /dev/vg1/redo041
  LV Path                /dev/vg1/redo051
  LV Path                /dev/vg1/vol_256m_01
  LV Path                /dev/vg1/vol_256m_02
  LV Path                /dev/vg1/vol_256m_03
  LV Path                /dev/vg1/vol_256m_04
  LV Path                /dev/vg1/vol_256m_05
  LV Path                /dev/vg1/vol_256m_06
  LV Path                /dev/vg1/vol_256m_07
  LV Path                /dev/vg1/vol_256m_08
  LV Path                /dev/vg1/vol_512m_01
  LV Path                /dev/vg1/vol_512m_02
  LV Path                /dev/vg1/vol_512m_03
  LV Path                /dev/vg1/cm_01
  LV Path                /dev/vg1/vol_512m_04
  LV Path                /dev/vg1/control_01
  • Delete the logical volumes with the lvremove [lv path] command.
lvremove /dev/vg1/redo001 -y
lvremove /dev/vg1/redo011 -y
lvremove /dev/vg1/redo021 -y
lvremove /dev/vg1/redo031 -y
lvremove /dev/vg1/redo041 -y
lvremove /dev/vg1/redo051 -y
lvremove /dev/vg1/vol_256m_01 -y
lvremove /dev/vg1/vol_256m_02 -y
lvremove /dev/vg1/vol_256m_03 -y
lvremove /dev/vg1/vol_256m_04 -y
lvremove /dev/vg1/vol_256m_05 -y
lvremove /dev/vg1/vol_256m_06 -y
lvremove /dev/vg1/vol_256m_07 -y
lvremove /dev/vg1/vol_256m_08 -y
lvremove /dev/vg1/vol_512m_01 -y
lvremove /dev/vg1/vol_512m_02 -y
lvremove /dev/vg1/vol_512m_03 -y
lvremove /dev/vg1/cm_01 -y
lvremove /dev/vg1/vol_512m_04 -y
lvremove /dev/vg1/control_01 -y
  • Re-create the logical volumes with lvcreate -L [size] -n [logical volume name] [volume group name].
  • Each volume group is 5G, so create four 1G volumes and one 1000MB volume in each (a loop version follows the commands below).
lvcreate -L 1G -n vol_1G_01 vg1 
lvcreate -L 1G -n vol_1G_02 vg1
lvcreate -L 1G -n vol_1G_03 vg1
lvcreate -L 1G -n vol_1G_04 vg1
lvcreate -L 1000 -n vol_1000m_05 vg1



lvcreate -L 1G -n vol_1G_01 vg2 
lvcreate -L 1G -n vol_1G_02 vg2
lvcreate -L 1G -n vol_1G_03 vg2
lvcreate -L 1G -n vol_1G_04 vg2
lvcreate -L 1000 -n vol_1000m_05 vg2
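
The same result can be produced with a short loop instead of listing every command (a sketch equivalent to the lvcreate calls above):

# Create four 1G volumes and one 1000MB volume in each volume group
for vg in vg1 vg2; do
    for i in 01 02 03 04; do
        lvcreate -L 1G -n vol_1G_${i} ${vg}
    done
    lvcreate -L 1000M -n vol_1000m_05 ${vg}
done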

 

  • Check the created logical volumes
[root@mysvr:/home/tac]$ lvdisplay vg1 |grep  "LV Path"
  LV Path                /dev/vg1/vol_1G_01
  LV Path                /dev/vg1/vol_1G_02
  LV Path                /dev/vg1/vol_1G_03
  LV Path                /dev/vg1/vol_1G_04
  LV Path                /dev/vg1/vol_1000m_05
[root@mysvr:/home/tac]$ lvdisplay vg2 |grep  "LV Path" 
  LV Path                /dev/vg2/vol_1G_01
  LV Path                /dev/vg2/vol_1G_02
  LV Path                /dev/vg2/vol_1G_03
  LV Path                /dev/vg2/vol_1G_04
  LV Path                /dev/vg2/vol_1000m_05
  • Set volume permissions and check the information on each node (TAC1, TAC2)
cd /dev/mapper
ls -l | grep vg

chown tac:tibero /dev/mapper/vg*

ls -l ../ |grep dm

Installing TAS and TAC

Tibero profile and configuration files (environment setup)

tac1 (server node 1) - 192.168.116.11, tac2 (server node 2) - 192.168.116.12

tac1.profile (node 1):
export TB_HOME=/home/tac/tibero7
export CM_HOME=/home/tac/tibero7
export TB_SID=tac1
export CM_SID=cm1

export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$CM_HOME/scripts:$PATH
export LD_LIBRARY_PATH=.:$TB_HOME/lib:$TB_HOME/client/lib:/lib:/usr/lib:/usr/local/lib:/usr/lib/threads
export LD_LIBRARY_PATH_64=$TB_HOME/lib:$TB_HOME/client/lib:/usr/lib64:/usr/lib/64:/usr/ucblib:/usr/local/lib
export LIBPATH=$LD_LIBRARY_PATH
export LANG=ko_KR.eucKR
export LC_ALL=ko_KR.eucKR
export LC_CTYPE=ko_KR.eucKR
export LC_NUMERIC=ko_KR.eucKR
export LC_TIME=ko_KR.eucKR
export LC_COLLATE=ko_KR.eucKR
export LC_MONETARY=ko_KR.eucKR
export LC_MESSAGES=ko_KR.eucKR
tac2.profile (node 2):

export TB_HOME=/home/tac/tibero7
export CM_HOME=/home/tac/tibero7
export TB_SID=tac2
export CM_SID=cm2

export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$CM_HOME/scripts:$PATH
export LD_LIBRARY_PATH=.:$TB_HOME/lib:$TB_HOME/client/lib:/lib:/usr/lib:/usr/local/lib:/usr/lib/threads
export LD_LIBRARY_PATH_64=$TB_HOME/lib:$TB_HOME/client/lib:/usr/lib64:/usr/lib/64:/usr/ucblib:/usr/local/lib
export LIBPATH=$LD_LIBRARY_PATH
export LANG=ko_KR.eucKR
export LC_ALL=ko_KR.eucKR
export LC_CTYPE=ko_KR.eucKR
export LC_NUMERIC=ko_KR.eucKR
export LC_TIME=ko_KR.eucKR
export LC_COLLATE=ko_KR.eucKR
export LC_MONETARY=ko_KR.eucKR
export LC_MESSAGES=ko_KR.eucKR
cm1.tip (node 1):

# Value used as the node ID when forming the cluster
CM_NAME=cm1
# Port used to connect to CM with the cmctl command
CM_UI_PORT=61040

CM_HEARTBEAT_EXPIRE=450
CM_WATCHDOG_EXPIRE=400
LOG_LVL_CM=5


CM_RESOURCE_FILE=/home/tac/cm_resource/cmfile
CM_RESOURCE_FILE_BACKUP=/home/tac/cm_resource/cmfile_backup
CM_RESOURCE_FILE_BACKUP_INTERVAL=1


CM_LOG_DEST=/home/tac/tibero7/instance/tac1/log/cm
CM_GUARD_LOG_DEST=/home/tac/tibero7/instance/tac1/log/cm_guard

CM_FENCE=Y
CM_ENABLE_FAST_NET_ERROR_DETECTION=Y
_CM_CHECK_RUNLEVEL=Y
cm2.tip (node 2):

# Value used as the node ID when forming the cluster
CM_NAME=cm2
# Port used to connect to CM with the cmctl command
CM_UI_PORT=61040

CM_HEARTBEAT_EXPIRE=450
CM_WATCHDOG_EXPIRE=400
LOG_LVL_CM=5



CM_RESOURCE_FILE=/home/tac/cm_resource/cmfile
CM_RESOURCE_FILE_BACKUP=/home/tac/cm_resource/cmfile_backup
CM_RESOURCE_FILE_BACKUP_INTERVAL=1


CM_LOG_DEST=/home/tac/tibero7/instance/tac2/log/cm
CM_GUARD_LOG_DEST=/home/tac/tibero7/instance/tac2/log/cm_guard

CM_FENCE=Y
CM_ENABLE_FAST_NET_ERROR_DETECTION=Y
_CM_CHECK_RUNLEVEL=Y
tas1.tip (node 1):
DB_NAME=tas
LISTENER_PORT=3000

MAX_SESSION_COUNT=10
TOTAL_SHM_SIZE=1G
MEMORY_TARGET=2G
BOOT_WITH_AUTO_DOWN_CLEAN=Y

THREAD=0
CLUSTER_DATABASE=Y

###TBCM###
CM_PORT=61040
LOCAL_CLUSTER_ADDR=10.10.10.10
LOCAL_CLUSTER_PORT=61060


###TAS###
INSTANCE_TYPE=AS
AS_DISKSTRING="/dev/mapper/vg*-vol*"
tas2.tip (node 2):

DB_NAME=tas
LISTENER_PORT=3000

MAX_SESSION_COUNT=10
TOTAL_SHM_SIZE=1G
MEMORY_TARGET=2G
BOOT_WITH_AUTO_DOWN_CLEAN=Y

THREAD=1
CLUSTER_DATABASE=Y

###TBCM###
CM_PORT=61040
LOCAL_CLUSTER_ADDR=10.10.10.20
LOCAL_CLUSTER_PORT=61060


###TAS###
INSTANCE_TYPE=AS
AS_DISKSTRING="/dev/mapper/vg*-vol*"
tac1.tip (node 1):
DB_NAME=tibero
LISTENER_PORT=21000
#CONTROL_FILES="/dev/raw/raw2"
#DB_CREATE_FILE_DEST=/home/tac/tbdata/
#LOG_ARCHIVE_DEST=/home/tac/tbdata/archive
CONTROL_FILES="+DS0/c1.ctl"
DB_CREATE_FILE_DEST="+DS0"
LOG_ARCHIVE_DEST="+DS0/archive"

DBWR_CNT=1

DBMS_LOG_TOTAL_SIZE_LIMIT=300M


MAX_SESSION_COUNT=30

TOTAL_SHM_SIZE=1G
MEMORY_TARGET=2G
DB_BLOCK_SIZE=8K
DB_CACHE_SIZE=300M


CLUSTER_DATABASE=Y
THREAD=0
UNDO_TABLESPACE=UNDO0

########### NEW_CM #######################
CM_PORT=61040
#_CM_LOCAL_ADDR=192.168.116.11
LOCAL_CLUSTER_ADDR=10.10.10.10
LOCAL_CLUSTER_PORT=61050
###########################################

############# TAS #######################
USE_ACTIVE_STORAGE=Y
AS_PORT=3000
############################################
tac2.tip (node 2):

DB_NAME=tibero
LISTENER_PORT=21000
#CONTROL_FILES="/dev/raw/raw2"
#DB_CREATE_FILE_DEST=/home/tac/tbdata/
#LOG_ARCHIVE_DEST=/home/tac/tbdata/archive
CONTROL_FILES="+DS0/c1.ctl"
DB_CREATE_FILE_DEST="+DS0"
LOG_ARCHIVE_DEST="+DS0/archive"

DBWR_CNT=1

DBMS_LOG_TOTAL_SIZE_LIMIT=300M
#TRACE_LOG_TOTAL_SIZE_LIMIT=30G

MAX_SESSION_COUNT=30

TOTAL_SHM_SIZE=1G
MEMORY_TARGET=2G
DB_BLOCK_SIZE=8K
DB_CACHE_SIZE=300M



CLUSTER_DATABASE=Y
THREAD=1
UNDO_TABLESPACE=UNDO1

########### NEW_CM #######################
CM_PORT=61040
#_CM_LOCAL_ADDR=192.168.116.12
LOCAL_CLUSTER_ADDR=10.10.10.20
LOCAL_CLUSTER_PORT=61050
###########################################

############# TAS #######################
USE_ACTIVE_STORAGE=Y
AS_PORT=3000
###########################################
tbdsn.tbr (identical on both nodes):

tibero=(
    (INSTANCE=(HOST=192.168.116.11)
                (PORT=21000)
                (DB_NAME=tibero)
    )
    (INSTANCE=(HOST=192.168.116.12)
                (PORT=21000)
                (DB_NAME=tibero)
    )
    (LOAD_BALANCE=Y)
    (USE_FAILOVER=Y)
)


tas1=(
    (INSTANCE=(HOST=192.168.116.11)
              (PORT=3000)
              (DB_NAME=tas)
    )
)

tas2=(
    (INSTANCE=(HOST=192.168.116.12)
              (PORT=3000)
              (DB_NAME=tas)
    )
)

tac1=(
    (INSTANCE=(HOST=192.168.116.11)
              (PORT=21000)
              (DB_NAME=tibero)
    )
)

tac2=(
    (INSTANCE=(HOST=192.168.116.12)
              (PORT=21000)
              (DB_NAME=tibero)
    )
)

Installing TAC

Copying the TAC configuration files to each server

  • Copy the TAC configuration files into the corresponding tibero7 directories.
# TAC node 1
cp tas1.tip $TB_HOME/config/tas1.tip
cp tac1.tip $TB_HOME/config/tac1.tip
cp cm1.tip $TB_HOME/config/cm1.tip
cp tbdsn.tbr $TB_HOME/client/config/tbdsn.tbr
cp license.xml $TB_HOME/license/license.xml
 
 
 
# TAC node 2
cp tas2.tip $TB_HOME/config/tas2.tip
cp tac2.tip $TB_HOME/config/tac2.tip
cp cm2.tip $TB_HOME/config/cm2.tip
cp tbdsn.tbr $TB_HOME/client/config/tbdsn.tbr
cp license.xml $TB_HOME/license/license.xml

 

Work on TAC node 1

  • Boot the CM
tbcm -b
TB_SID=tas1 tbboot nomount
[tac@mysvr:/home/tac]$ . tac.profile
[tac@mysvr:/home/tac]$ tbcm -b
######################### WARNING #########################
# You are trying to start the CM-fence function.          #
###########################################################
You are not 'root'. Proceed anyway without fence? (y/N)y
CM Guard daemon started up.
CM-fence enabled.

TBCM 7.1.1 (Build 258584)

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.

Tibero cluster manager started up.
Local node name is (cm1:61040).
[tac@mysvr:/home/tac]$ TB_SID=tas1 tbboot nomount
Listener port = 3000

Tibero 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NOMOUNT mode).
  • Create the TAS diskspace
  • Connect to tas1 with tbsql and create the diskspace.
CREATE DISKSPACE ds0 FORCE EXTERNAL REDUNDANCY
FAILGROUP FG1 DISK
'/dev/mapper/vg1-vol_1G_01' NAME FG_DISK1,
'/dev/mapper/vg1-vol_1G_02' NAME FG_DISK2,
'/dev/mapper/vg1-vol_1G_03' NAME FG_DISK3,
'/dev/mapper/vg1-vol_1G_04' NAME FG_DISK4,
'/dev/mapper/vg2-vol_1G_01' NAME FG_DISK5,
'/dev/mapper/vg2-vol_1G_02' NAME FG_DISK6,
'/dev/mapper/vg2-vol_1G_03' NAME FG_DISK7,
'/dev/mapper/vg2-vol_1G_04' NAME FG_DISK8
/
[tac@mysvr:/home/tac]$ TB_SID=tas1 tbboot nomount
Listener port = 3000

Tibero 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NOMOUNT mode).
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tas1

tbSQL 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.

Connected to Tibero using tas1.

SQL> CREATE DISKSPACE ds0 FORCE EXTERNAL REDUNDANCY
   2 FAILGROUP FG1 DISK
   3 '/dev/mapper/vg1-vol_1G_01' NAME FG_DISK1,
   4 '/dev/mapper/vg1-vol_1G_02' NAME FG_DISK2,
   5 '/dev/mapper/vg1-vol_1G_03' NAME FG_DISK3,
   6 '/dev/mapper/vg1-vol_1G_04' NAME FG_DISK4,
   7 '/dev/mapper/vg2-vol_1G_01' NAME FG_DISK5,
   8 '/dev/mapper/vg2-vol_1G_02' NAME FG_DISK6,
   9 '/dev/mapper/vg2-vol_1G_03' NAME FG_DISK7,
  10 '/dev/mapper/vg2-vol_1G_04' NAME FG_DISK8
  11 /

Diskspace 'DS0' created.

SQL> q
Disconnected.

Registering CM (Tibero Cluster Manager) membership on TAC node 1

cmrctl add network --nettype private --ipaddr 10.10.10.10 --portno 51210 --name net1
cmrctl add network --nettype public --ifname ens33 --name pub1
cmrctl add cluster --incnet net1 --pubnet pub1 --cfile "+/dev/mapper/vg1-vol_1G_01,/dev/mapper/vg1-vol_1G_02,/dev/mapper/vg1-vol_1G_03,/dev/mapper/vg1-vol_1G_04,/dev/mapper/vg2-vol_1G_01,/dev/mapper/vg2-vol_1G_02,/dev/mapper/vg2-vol_1G_03,/dev/mapper/vg2-vol_1G_04" --name cls1
cmrctl start cluster --name cls1
cmrctl add service --name tas --type as --cname cls1
cmrctl add as --name tas1 --svcname tas --dbhome $CM_HOME
cmrctl add service --name tibero --cname cls1
cmrctl add db --name tac1 --svcname tibero --dbhome $CM_HOME
cmrctl start as --name tas1
cmrctl show
[tac@mysvr:/home/tac]$ cmrctl start as --name tas1
Listener port = 3000

Tibero 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
[tac@mysvr:/home/tac]$ cmrctl show 
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           net1       UP (private) 10.10.10.10/51210
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: net1, pub: pub1
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service         tibero     DOWN Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas1 UP(NRML) tas, /home/tac/tibero7, failed retry cnt: 0
       cls1       db           tac1     DOWN tibero, /home/tac/tibero7, failed retry cnt: 0
=====================================================================
[tac@mysvr:/home/tac]$ 

Activating the TAS diskspace

tbsql sys/tibero@tas1  
ALTER DISKSPACE ds0 ADD THREAD 1;

Creating the database

cmrctl start db --name tac1 --option "-t nomount"

tbsql sys/tibero@tac1 

CREATE DATABASE "tibero"
                   USER sys IDENTIFIED BY tibero
          MAXINSTANCES 8
          MAXDATAFILES 256
CHARACTER set MSWIN949
NATIONAL character set UTF16
LOGFILE GROUP 0 '+DS0/redo001.redo' SIZE 100M,
           GROUP 1 '+DS0/redo011.redo' SIZE 100M,
           GROUP 2 '+DS0/redo021.redo' SIZE 100M
           MAXLOGFILES 100
          MAXLOGMEMBERS 8
          ARCHIVELOG
          DATAFILE '+DS0/system001.dtf' SIZE 128M
         AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED
         DEFAULT TABLESPACE USR
         DATAFILE '+DS0/usr001.dtf' SIZE 128M
        AUTOEXTEND ON NEXT 16M MAXSIZE UNLIMITED
        EXTENT MANAGEMENT LOCAL AUTOALLOCATE
        DEFAULT TEMPORARY TABLESPACE TEMP
        TEMPFILE '+DS0/temp001.dtf' SIZE 128M
        AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED
        EXTENT MANAGEMENT LOCAL AUTOALLOCATE
        UNDO TABLESPACE UNDO0
        DATAFILE '+DS0/undo001.dtf' SIZE 128M
        AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED
        EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
[tac@mysvr:/home/tac]$ cmrctl start db --name tac1 --option "-t nomount"
Listener port = 21000

Tibero 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NOMOUNT mode).
BOOT SUCCESS! (MODE : NOMOUNT)
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac1 

tbSQL 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.

Connected to Tibero using tac1.

SQL> CREATE DATABASE "tibero"
   2                    USER sys IDENTIFIED BY tibero
   3           MAXINSTANCES 8
   4           MAXDATAFILES 256
   5 CHARACTER set MSWIN949
   6 NATIONAL character set UTF16
   7 LOGFILE GROUP 0 '+DS0/redo001.redo' SIZE 100M,
   8            GROUP 1 '+DS0/redo011.redo' SIZE 100M,
   9            GROUP 2 '+DS0/redo021.redo' SIZE 100M
  10            MAXLOGFILES 100
          MAXLOGMEMBERS 8
  11   12           ARCHIVELOG
  13           DATAFILE '+DS0/system001.dtf' SIZE 128M
  14          AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED
  15          DEFAULT TABLESPACE USR
  16          DATAFILE '+DS0/usr001.dtf' SIZE 128M
  17         AUTOEXTEND ON NEXT 16M MAXSIZE UNLIMITED
  18         EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  19         DEFAULT TEMPORARY TABLESPACE TEMP
  20         TEMPFILE '+DS0/temp001.dtf' SIZE 128M
  21         AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED
  22         EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  23         UNDO TABLESPACE UNDO0
  24         DATAFILE '+DS0/undo001.dtf' SIZE 128M
  25         AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED
  26         EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Database created.

SQL> q
Disconnected.

Creating UNDO, REDO, etc. for TAC node 2

cmrctl start db --name tac1

tbsql sys/tibero@tac1

CREATE UNDO TABLESPACE UNDO1 DATAFILE '+DS0/undo011.dtf' SIZE 128M AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

create tablespace syssub datafile '+DS0/syssub001.dtf' SIZE 128M autoextend on next 8M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 '+DS0/redo031.redo' size 100M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 '+DS0/redo041.redo' size 100M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 '+DS0/redo051.redo' size 100M;
ALTER DATABASE ENABLE PUBLIC THREAD 1;
[tac@mysvr:/home/tac]$ cmrctl start db --name tac1
Listener port = 21000

Tibero 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac1

tbSQL 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.

Connected to Tibero using tac1.

SQL> CREATE UNDO TABLESPACE UNDO1 DATAFILE '+DS0/undo011.dtf' SIZE 128M AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Tablespace 'UNDO1' created.

SQL> create tablespace syssub datafile '+DS0/syssub001.dtf' SIZE 128M autoextend on next 8M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Tablespace 'SYSSUB' created.

SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 '+DS0/redo031.redo' size 100M;

Database altered.

SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 '+DS0/redo041.redo' size 100M;

Database altered.

SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 '+DS0/redo051.redo' size 100M;

Database altered.

SQL> ALTER DATABASE ENABLE PUBLIC THREAD 1;

Database altered.

Running the system.sh script

export TB_SID=tac1
$TB_HOME/scripts/system.sh -p1 tibero -p2 syscat -a1 y -a2 y -a3 y -a4 y
[tac@mysvr:/home/tac]$ export TB_SID=tac1
[tac@mysvr:/home/tac]$ $TB_HOME/scripts/system.sh -p1 tibero -p2 syscat -a1 y -a2 y -a3 y -a4 y
Creating additional system index...
Dropping agent table...
Creating client policy table ...
Creating text packages table ...
Creating the role DBA...
Creating system users & roles...
... (output truncated)

Start TPR
Create tudi interface
    Running /home/tac/tibero7/scripts/odci.sql...
Creating spatial meta tables and views ...
Registering default spatial reference systems ...
Registering unit of measure entries...
Creating internal system jobs...
Creating Japanese Lexer epa source ...
Creating internal system notice queue ...
Creating sql translator profiles ...
Creating agent table...
Creating additional static views using dpv...
Done.
For details, check /home/tac/tibero7/instance/tac1/log/system_init.log.

Work on TAC node 2

Registering CM (Tibero Cluster Manager) membership on TAC node 2

tbcm -b
cmrctl add network --nettype private --ipaddr 10.10.10.20 --portno 51210 --name net2
cmrctl add network --nettype public --ifname ens33 --name pub2
cmrctl add cluster --incnet net2 --pubnet pub2 --cfile "+/dev/mapper/vg1-vol_1G_01,/dev/mapper/vg1-vol_1G_02,/dev/mapper/vg1-vol_1G_03,/dev/mapper/vg1-vol_1G_04,/dev/mapper/vg2-vol_1G_01,/dev/mapper/vg2-vol_1G_02,/dev/mapper/vg2-vol_1G_03,/dev/mapper/vg2-vol_1G_04" --name cls1
cmrctl start cluster --name cls1
#cmrctl add service --name tas --type as --cname cls1
cmrctl add as --name tas2 --svcname tas --dbhome $CM_HOME
#cmrctl add service --name tibero --cname cls1
cmrctl add db --name tac2 --svcname tibero --dbhome $CM_HOME
[tac@mysvr:/home/tac]$ tbcm -b
CM Guard daemon started up.
CM-fence enabled.

TBCM 7.1.1 (Build 258584)

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.

Tibero cluster manager started up.
Local node name is (cm2:61040).

[tac@mysvr:/home/tac]$ cmrctl add network --nettype private --ipaddr 10.10.10.20 --portno 51210 --name net2
Resource add success! (network, net2)
[tac@mysvr:/home/tac]$ cmrctl add network --nettype public --ifname ens33 --name pub2
Resource add success! (network, pub2)
[tac@mysvr:/home/tac]$ cmrctl add cluster --incnet net2 --pubnet pub2 --cfile "+/dev/mapper/vg1-vol_1G_01,/dev/mapper/vg1-vol_1G_02,/dev/mapper/vg1-vol_1G_03,/
dev/mapper/vg1-vol_1G_04,/dev/mapper/vg2-vol_1G_01,/dev/mapper/vg2-vol_1G_02,/dev/mapper/vg2-vol_1G_03,/dev/mapper/vg2-vol_1G_04" --name cls1
Resource add success! (cluster, cls1)
[tac@mysvr:/home/tac]$ cmrctl start cluster --name cls1
MSG SENDING SUCCESS!
[tac@mysvr:/home/tac]$ cmrctl add as --name tas2 --svcname tas --dbhome $CM_HOME
ADD AS RESOURCE WITH ROOT PERMISSION!
Continue? (y/N) y
Resource add success! (as, tas2)
[tac@mysvr:/home/tac]$ cmrctl add db --name tac2 --svcname tibero --dbhome $CM_HOME
ADD DB RESOURCE WITH ROOT PERMISSION!
Continue? (y/N) y
Resource add success! (db, tac2)

Booting TAS2 and TAC2

cmrctl start as --name tas2
cmrctl start db --name tac2
[tac@mysvr:/home/tac]$ cmrctl start as --name tas2
Listener port = 3000

Tibero 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
[tac@mysvr:/home/tac]$ cmrctl start db --name tac2
Listener port = 21000

Tibero 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac2

Connecting to node 2 (tac2) and verifying

[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac2

tbSQL 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.

Connected to Tibero using tac2.

SQL> ls

NAME                               SUBNAME                  TYPE                
---------------------------------- ------------------------ --------------------
SPH_REPORT_DIR                                              DIRECTORY
TPR_REPORT_DIR                                              DIRECTORY
TPR_TIP_DIR                                                 DIRECTORY
NULL_VERIFY_FUNCTION                                        FUNCTION
TB_COMPLEXITY_CHECK                                         FUNCTION
TB_STRING_DISTANCE                                          FUNCTION



SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME                           
--------------- ----------------------------------------
DB_NAME                                 
----------------------------------------
HOST_NAME                                                       PARALLEL
--------------------------------------------------------------- --------
   THREAD# VERSION 
---------- --------
STARTUP_TIME
--------------------------------------------------------------------------------
STATUS           SHUTDOWN_PENDING
---------------- ----------------
TIP_FILE
--------------------------------------------------------------------------------
              1 tac2
tibero
mysvr                                                           YES
         1 7
2023/03/14
NORMAL           NO
/home/tac/tibero7/config/tac2.tip


1 row selected.

Checking the status of all TAC nodes

  • You can check the status of each TAC instance with the cmrctl show all command.
$ cmrctl show all
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           net1       UP (private) 10.10.10.10/51210
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: net1, pub: pub1
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service         tibero       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas1 UP(NRML) tas, /home/tac/tibero7, failed retry cnt: 0
       cls1       db           tac1 UP(NRML) tibero, /home/tac/tibero7, failed retry cnt: 0
=====================================================================
Resource List of Node cm2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           net2       UP (private) 10.10.10.20/51210
     COMMON  network           pub2       UP (public) ens33
     COMMON  cluster           cls1       UP inc: net2, pub: pub2
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service         tibero       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas2 UP(NRML) tas, /home/tac/tibero7, failed retry cnt: 0
       cls1       db           tac2 UP(NRML) tibero, /home/tac/tibero7, failed retry cnt: 0
=====================================================================
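
With every resource UP on both nodes, a quick final check is to connect through the load-balancing tibero alias defined in tbdsn.tbr instead of an individual instance:

# Connect via the load-balanced service alias rather than tac1/tac2 directly
tbsql sys/tibero@tibero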

Setting up the VMware environment

Before installing Tibero TAC, let's first look at setting up the Linux servers in VMware.

After installing CentOS 7.9, we'll clone that machine to set up the TAC1 and TAC2 nodes.

 

Installing CentOS 7.9 in VMware itself is omitted here.

 

Configuring the Linux VMs

Once the Linux machine for installing Tibero7 is ready, clone it and use the copies as the TAC1 and TAC2 nodes.

Then add separate HDDs to use as shared disks.

 

 

※ OS directory locations

  • tac1: D:\500.vmware\TAC\Node1
  • tac2: D:\500.vmware\TAC\Node2
  • Shared disk file directory: diskFile

 

 

VMware clone

As described above, clone the CentOS 7.9 installation image to create two new servers.

 

 

 

 

 

Adding HDDs in VMware

On the cloned tac1 machine, add hard disks (HDD).

We'll add two 5G HDDs and use them as shared disks.

 

  • D:\500.vmware\TAC\diskFile\sharedisk1.vmdk
  • Add a disk named sharedisk1.vmdk under the diskFile directory.

 

  • Create the sharedisk2.vmdk disk the same way.
 
 

 

 

  • Once the disk has been added, change the Virtual device node.

 

  • Add the hard disks on the tac2 machine as well so the disks added on tac1 can also be used on node 2.
  • tac2 machine -> Edit virtual machine -> Add -> Hard disk -> Next ->
  • SCSI -> Independent -> Next -> Use an existing virtual disk

 

 

 

  • Add the second disk the same way; you can then see on the tac2 machine the HDDs that were added on tac1.

 

Editing the shared disk settings

Sharing the added hard disks between the servers requires additional work.

Edit the VM configuration file for the added disks.

Editing tac1.vmx

#edit start 
disk.locking = "FALSE"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedbus = "VIRTUAL"
#edit end

Editing tac2.vmx

#EDIT START 
disk.locking = "FALSE"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedbus = "VIRTUAL"
#EDIT END

 

 

  • To verify sharing works, boot the tac1 and tac2 machines and check.
  • If everything was set up correctly, connect to each server and confirm the added disks with the ls command.

 

  • You can also check the added disks with the fdisk -l command.

 

 

 

Additional network setup (Host-only)

  • Let's add a private network for inter-connect communication between the VMware machines.
  • Add a network adapter to each VM.
    • Network Adapter -> Add -> Host-only
  • Once it has been added, connect to each server and assign the IPs (an nmcli sketch follows below).
    • TAC1: 10.10.10.10
    • TAC2: 10.10.10.20


(Screenshots: IP configuration on the tac1 and tac2 nodes)
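
If you prefer the command line to the desktop network tool, the static addresses can be assigned with nmcli (a sketch; the host-only adapter name ens37 is an assumption and may differ on your VMs):

# On tac1: assign 10.10.10.10/24 to the host-only adapter (assumed ens37)
nmcli connection add type ethernet ifname ens37 con-name hostonly \
    ipv4.method manual ipv4.addresses 10.10.10.10/24
nmcli connection up hostonly
# On tac2, repeat with ipv4.addresses 10.10.10.20/24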

Creating the RAW devices

Now we turn the added HDDs into raw devices.

  1. Create partitions
  2. Create physical volumes
  3. Create a volume group
  4. Create logical volumes
  5. Map the logical volumes to raw devices

Creating partitions

  • Run fdisk /dev/sdb, then enter n -> p -> 1 -> Enter -> Enter -> w (a scripted alternative follows the two fdisk sessions below)
fdisk /dev/sdb
[root@mysvr:/root]$ fdisk /dev/sdb 
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x96b3f8bb.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-10485759, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): 
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@mysvr:/root]$ 
fdisk /dev/sdc

 

[root@mysvr:/root]$ fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x536414f4.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-10485759, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): 
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
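
The interactive fdisk sessions above can also be scripted; parted creates the same single full-disk partition non-interactively (a sketch, assuming /dev/sdb and /dev/sdc are the new empty disks):

# Non-interactive alternative: one primary partition spanning each disk
parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%
parted -s /dev/sdc mklabel msdos mkpart primary 1MiB 100%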

 

Creating physical volumes

  • After checking the disks, create the physical volumes with the pvcreate command.
[root@mysvr:/root]$ ls -l /dev/sd*
brw-rw----. 1 root disk 8,  0  3월  9 11:45 /dev/sda
brw-rw----. 1 root disk 8,  1  3월  9 11:45 /dev/sda1
brw-rw----. 1 root disk 8,  2  3월  9 11:45 /dev/sda2
brw-rw----. 1 root disk 8, 16  3월  9 12:41 /dev/sdb
brw-rw----. 1 root disk 8, 17  3월  9 12:41 /dev/sdb1
brw-rw----. 1 root disk 8, 32  3월  9 12:43 /dev/sdc
brw-rw----. 1 root disk 8, 33  3월  9 12:43 /dev/sdc1

 

pvcreate /dev/sdb1

pvcreate /dev/sdc1

pvdisplay

pvs

 

[root@mysvr:/root]$ pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               centos
  PV Size               <39.00 GiB / not usable 3.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              9983
  Free PE               1
  Allocated PE          9982
  PV UUID               f6dArV-MiVJ-lCgG-CfbY-4PCD-f1d0-uwVkeY
   
  "/dev/sdb1" is a new physical volume of "<5.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name               
  PV Size               <5.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               aly1AZ-7hzb-T4qM-JAdv-gCWS-Sj2f-5HVDDs
   
  "/dev/sdc1" is a new physical volume of "<5.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc1
  VG Name               
  PV Size               <5.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               Q8sQZR-SBvK-mWRk-8j8o-Mdnp-eWHc-IUFZ2x

 

Creating volume groups

  • Create the volume groups with the vgcreate command.
  • vgremove [volume group name] is the command to delete a volume group.
vgcreate vg1 /dev/sdb1

vgcreate vg2 /dev/sdc1

vgdisplay

# Delete the volume groups
#vgremove vg1 vg2

 

[root@mysvr:/root]$ vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       0 / 0   
  Free  PE / Size       1279 / <5.00 GiB
  VG UUID               6IWGT6-XHgP-OESg-3Gdh-QOUW-p01L-Cx5INn
   
  --- Volume group ---
  VG Name               vg2
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       0 / 0   
  Free  PE / Size       1279 / <5.00 GiB
  VG UUID               NgC43A-4Cyq-gKHY-8ed7-8UKY-xh8n-p2YJfo
   
  --- Volume group ---
  VG Name               centos
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <39.00 GiB
  PE Size               4.00 MiB
  Total PE              9983
  Alloc PE / Size       9982 / 38.99 GiB
  Free  PE / Size       1 / 4.00 MiB
  VG UUID               N8yCf2-fqDT-miVi-neoj-A1i8-mFWE-X7dMRv

 

Creating logical volumes

  • lvcreate -L 100M -n cm_01 vg1
  • lvcreate -L [size] -n [logical volume name] [volume group name]
  • lvremove /dev/vg1/cm_01 -> delete command
# Volume group vg1
lvcreate -L 128M -n vol_128m_01 vg1
lvcreate -L 128M -n vol_128m_02 vg1
lvcreate -L 128M -n vol_128m_03 vg1
lvcreate -L 128M -n vol_128m_04 vg1
lvcreate -L 128M -n vol_128m_05 vg1
lvcreate -L 128M -n vol_128m_06 vg1
lvcreate -L 128M -n vol_128m_07 vg1
lvcreate -L 128M -n vol_128m_08 vg1
lvcreate -L 128M -n vol_128m_09 vg1
lvcreate -L 128M -n vol_128m_10 vg1
lvcreate -L 128M -n vol_128m_11 vg1
lvcreate -L 128M -n vol_128m_12 vg1
lvcreate -L 128M -n vol_128m_13 vg1
lvcreate -L 256M -n vol_256m_14 vg1
lvcreate -L 512M -n vol_512m_15 vg1
lvcreate -L 512M -n vol_512m_16 vg1
lvcreate -L 1024M -n vol_1024m_17 vg1
lvcreate -L 1024M -n vol_1024m_19 vg1

# Volume group vg2
lvcreate -L 128M -n  vol_128m_01 vg2
lvcreate -L 128M -n  vol_128m_02 vg2
lvcreate -L 128M -n  vol_128m_03 vg2
lvcreate -L 128M -n  vol_128m_04 vg2
lvcreate -L 128M -n  vol_128m_05 vg2
lvcreate -L 128M -n  vol_128m_06 vg2
lvcreate -L 128M -n  vol_128m_07 vg2
lvcreate -L 128M -n  vol_128m_08 vg2
lvcreate -L 128M -n  vol_128m_09 vg2
lvcreate -L 256M -n  vol_256m_10 vg2
lvcreate -L 512M -n  vol_512m_11 vg2
lvcreate -L 1024M -n  vol_1024m_12 vg2
lvcreate -L 2048M -n  vol_2048m_13 vg2
  • Check the logical volume information
lvdisplay vg1

lvdisplay vg2

lvs

 

[root@mysvr:/root]$ lvdisplay vg1
  --- Logical volume ---
  LV Path                /dev/vg1/contorl_01
  LV Name                contorl_01
  VG Name                vg1
  LV UUID                u8hnpe-DbAE-YgKj-qu5u-cPSI-HN3l-l0kYok
  LV Write Access        read/write
  LV Creation host, time mysvr, 2023-03-09 16:34:02 +0900
  LV Status              available
  # open                 0
  LV Size                128.00 MiB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
   
  --- Logical volume ---
  LV Path                /dev/vg1/redo001
  LV Name                redo001
  VG Name                vg1
  LV UUID                dPbgfi-Uqtp-wObd-eW4e-QDtd-E7rj-ylgWpv
  LV Write Access        read/write
  LV Creation host, time mysvr, 2023-03-09 16:34:02 +0900
  LV Status              available
  # open                 0
  LV Size                128.00 MiB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:5
   
ls -al /dev/vg1 |awk '{print $9}'

lvs

pvs

 

[root@mysvr:/root]$ lvs
  LV           VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home         centos -wi-ao----  25.00g                                                    
  root         centos -wi-ao----   5.99g                                                    
  swap         centos -wi-ao----   8.00g                                                    
  vol_1024m_17 vg1    -wi-a-----   1.00g                                                    
  vol_1024m_19 vg1    -wi-a-----   1.00g                                                    
  vol_128m_01  vg1    -wi-a----- 128.00m                                                    
  vol_128m_02  vg1    -wi-a----- 128.00m                                                    
  vol_128m_03  vg1    -wi-a----- 128.00m                                                    
  vol_128m_04  vg1    -wi-a----- 128.00m                                                    
  vol_128m_05  vg1    -wi-a----- 128.00m                                                    
  vol_128m_06  vg1    -wi-a----- 128.00m                                                    
  vol_128m_07  vg1    -wi-a----- 128.00m                                                    
  vol_128m_08  vg1    -wi-a----- 128.00m                                                    
  vol_128m_09  vg1    -wi-a----- 128.00m                                                    
  vol_128m_10  vg1    -wi-a----- 128.00m                                                    
  vol_128m_11  vg1    -wi-a----- 128.00m                                                    
  vol_128m_12  vg1    -wi-a----- 128.00m                                                    
  vol_128m_13  vg1    -wi-a----- 128.00m                                                    
  vol_256m_14  vg1    -wi-a----- 256.00m                                                    
  vol_512m_15  vg1    -wi-a----- 512.00m                                                    
  vol_512m_16  vg1    -wi-a----- 512.00m                                                    
  vol_1024m_12 vg2    -wi-a-----   1.00g                                                    
  vol_128m_01  vg2    -wi-a----- 128.00m                                                    
  vol_128m_02  vg2    -wi-a----- 128.00m                                                    
  vol_128m_03  vg2    -wi-a----- 128.00m                                                    
  vol_128m_04  vg2    -wi-a----- 128.00m                                                    
  vol_128m_05  vg2    -wi-a----- 128.00m                                                    
  vol_128m_06  vg2    -wi-a----- 128.00m                                                    
  vol_128m_07  vg2    -wi-a----- 128.00m                                                    
  vol_128m_08  vg2    -wi-a----- 128.00m                                                    
  vol_128m_09  vg2    -wi-a----- 128.00m                                                    
  vol_2048m_13 vg2    -wi-a-----   2.00g                                                    
  vol_256m_10  vg2    -wi-a----- 256.00m                                                    
  vol_512m_11  vg2    -wi-a----- 512.00m                                                    
[root@mysvr:/root]$ pvs
  PV         VG     Fmt  Attr PSize   PFree  
  /dev/sda2  centos lvm2 a--  <39.00g   4.00m
  /dev/sdb1  vg1    lvm2 a--   <5.00g 124.00m
  /dev/sdc1  vg2    lvm2 a--   <5.00g 124.00m      

Mapping the RAW devices

Once the logical volumes have been created, the last step is to map them to raw devices.

You can create the raw devices by editing the /etc/udev/rules.d/70-persistent-ipoib.rules file.

  • /etc/udev/rules.d/70-persistent-ipoib.rules 
ACTION!="add|change" , GOTO="raw_end"

#ENV{DM_VG_NAME}=="[volume group name]", ENV{DM_LV_NAME}=="[logical volume name]",RUN+="/usr/bin/raw /dev/raw/raw1 %N"

ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_01",RUN+="/usr/bin/raw /dev/raw/raw1 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_02",RUN+="/usr/bin/raw /dev/raw/raw2 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_03",RUN+="/usr/bin/raw /dev/raw/raw3 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_04",RUN+="/usr/bin/raw /dev/raw/raw4 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_05",RUN+="/usr/bin/raw /dev/raw/raw5 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_06",RUN+="/usr/bin/raw /dev/raw/raw6 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_07",RUN+="/usr/bin/raw /dev/raw/raw7 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_08",RUN+="/usr/bin/raw /dev/raw/raw8 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_09",RUN+="/usr/bin/raw /dev/raw/raw9 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_10",RUN+="/usr/bin/raw /dev/raw/raw10 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_11",RUN+="/usr/bin/raw /dev/raw/raw11 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_12",RUN+="/usr/bin/raw /dev/raw/raw12 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_128m_13",RUN+="/usr/bin/raw /dev/raw/raw13 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_256m_14",RUN+="/usr/bin/raw /dev/raw/raw14 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_512m_15",RUN+="/usr/bin/raw /dev/raw/raw15 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_512m_16",RUN+="/usr/bin/raw /dev/raw/raw16 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_1024m_17",RUN+="/usr/bin/raw /dev/raw/raw17 %N"
ENV{DM_VG_NAME}=="vg1", ENV{DM_LV_NAME}=="vol_1024m_19",RUN+="/usr/bin/raw /dev/raw/raw19 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_128m_01",RUN+="/usr/bin/raw /dev/raw/raw20 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_128m_02",RUN+="/usr/bin/raw /dev/raw/raw21 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_128m_03",RUN+="/usr/bin/raw /dev/raw/raw22 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_128m_04",RUN+="/usr/bin/raw /dev/raw/raw23 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_128m_05",RUN+="/usr/bin/raw /dev/raw/raw24 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_128m_06",RUN+="/usr/bin/raw /dev/raw/raw25 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_128m_07",RUN+="/usr/bin/raw /dev/raw/raw26 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_128m_08",RUN+="/usr/bin/raw /dev/raw/raw27 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_128m_09",RUN+="/usr/bin/raw /dev/raw/raw28 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_256m_10",RUN+="/usr/bin/raw /dev/raw/raw29 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_512m_11",RUN+="/usr/bin/raw /dev/raw/raw30 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_1024m_12",RUN+="/usr/bin/raw /dev/raw/raw31 %N"
ENV{DM_VG_NAME}=="vg2", ENV{DM_LV_NAME}=="vol_2048m_13",RUN+="/usr/bin/raw /dev/raw/raw32 %N"

KERNEL=="raw*",OWNER="tac",GROUP="tibero",MODE="0660"
LABEL="raw_end"

 

Registering RAW Devices and Checking Their Status

The raw device information can be registered and refreshed with the udevadm command.

  • Reload the changed rules: udevadm control --reload-rules
  • Register (add): udevadm trigger --action=add
  • Check status: raw -qa or ls -al /dev/raw*
udevadm control --reload-rules 
udevadm trigger --action=add

raw -qa

ls -al /dev/raw*
[root@mysvr:/root]$ udevadm trigger --action=add                      
[root@mysvr:/root]$ raw -qa                         
/dev/raw/raw1:  bound to major 253, minor 3
/dev/raw/raw2:  bound to major 253, minor 4
/dev/raw/raw3:  bound to major 253, minor 5
/dev/raw/raw4:  bound to major 253, minor 6
/dev/raw/raw5:  bound to major 253, minor 7
/dev/raw/raw6:  bound to major 253, minor 8
/dev/raw/raw7:  bound to major 253, minor 9
/dev/raw/raw8:  bound to major 253, minor 10
/dev/raw/raw9:  bound to major 253, minor 11
/dev/raw/raw10:  bound to major 253, minor 12
/dev/raw/raw11:  bound to major 253, minor 13
/dev/raw/raw12:  bound to major 253, minor 14
/dev/raw/raw13:  bound to major 253, minor 15
/dev/raw/raw14:  bound to major 253, minor 16
/dev/raw/raw15:  bound to major 253, minor 17
/dev/raw/raw16:  bound to major 253, minor 18
/dev/raw/raw17:  bound to major 253, minor 19
/dev/raw/raw18:  bound to major 253, minor 20
/dev/raw/raw19:  bound to major 253, minor 25
/dev/raw/raw20:  bound to major 253, minor 26
/dev/raw/raw21:  bound to major 253, minor 27
/dev/raw/raw22:  bound to major 253, minor 28
/dev/raw/raw23:  bound to major 253, minor 29
/dev/raw/raw24:  bound to major 253, minor 30
/dev/raw/raw25:  bound to major 253, minor 31
/dev/raw/raw26:  bound to major 253, minor 32
/dev/raw/raw27:  bound to major 253, minor 33
/dev/raw/raw28:  bound to major 253, minor 34
/dev/raw/raw29:  bound to major 253, minor 35
/dev/raw/raw30:  bound to major 253, minor 36
/dev/raw/raw31:  bound to major 253, minor 37
/dev/raw/raw32:  bound to major 253, minor 38
/dev/raw/raw33:  bound to major 253, minor 39
/dev/raw/raw34:  bound to major 253, minor 40

 

Rebooting the Linux VMs

  • Reboot each server to confirm that the settings were applied correctly.
  • Boot the tac1 machine first, then the tac2 machine.
  • Booting both at the same time may prevent Linux from booting properly.

 

 

 

Installing Tibero TAC

Now that the Linux server configuration is complete, let's proceed with the Tibero installation.

Disabling the Firewall

systemctl stop firewalld
systemctl disable firewalld

 

[root@mysvr:/home/tac]$ systemctl stop firewalld
[root@mysvr:/home/tac]$ systemctl disable  firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
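If disabling the firewall entirely is not an option in your environment, you could instead keep firewalld and open only the ports this guide uses (a sketch; the port numbers are taken from the tip files later in this post):

firewall-cmd --permanent --add-port=21000/tcp   # LISTENER_PORT
firewall-cmd --permanent --add-port=61040/tcp   # CM_PORT / CM_UI_PORT
firewall-cmd --permanent --add-port=61050/tcp   # LOCAL_CLUSTER_PORT
firewall-cmd --reload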

Creating the Tibero Account

groupadd tibero
adduser -d /home/tac -g tibero tac
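You can confirm the account was created with the right group:

id tac   # expect the gid to show the tibero group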

 

Modifying and Applying Kernel Parameters

Add the parameters below to /etc/sysctl.conf, then apply them with sysctl -p.

vi /etc/sysctl.conf
sysctl -p
kernel.shmmax = 4092966912
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem =  10000 32000 10000 10000
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 9000 65535
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
vm.overcommit_memory = 0
fs.file-max = 6815744
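After running sysctl -p, a few of the values can be spot-checked (the names come from the file above):

sysctl kernel.shmmax kernel.sem net.ipv4.ip_local_port_range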

Preparing the Binaries (Tibero7 & license.xml)

  • The binary preparation steps are omitted here.

Tibero Profile and Configuration Files

  tac1 (server node 1) - 192.168.116.11 / tac2 (server node 2) - 192.168.116.12

tac1.profile:
export TB_HOME=/home/tac/tibero7
export CM_HOME=/home/tac/tibero7
export TB_SID=tac1
export CM_SID=cm1

export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$CM_HOME/scripts:$PATH
export LD_LIBRARY_PATH=.:$TB_HOME/lib:$TB_HOME/client/lib:/lib:/usr/lib:/usr/local/lib:/usr/lib/threads
export LD_LIBRARY_PATH_64=$TB_HOME/lib:$TB_HOME/client/lib:/usr/lib64:/usr/lib/64:/usr/ucblib:/usr/local/lib
export LIBPATH=$LD_LIBRARY_PATH
export LANG=ko_KR.eucKR
export LC_ALL=ko_KR.eucKR
export LC_CTYPE=ko_KR.eucKR
export LC_NUMERIC=ko_KR.eucKR
export LC_TIME=ko_KR.eucKR
export LC_COLLATE=ko_KR.eucKR
export LC_MONETARY=ko_KR.eucKR
export LC_MESSAGES=ko_KR.eucKR
tac2.profile:
export TB_HOME=/home/tac/tibero7
export CM_HOME=/home/tac/tibero7
export TB_SID=tac2
export CM_SID=cm2

export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$CM_HOME/scripts:$PATH
export LD_LIBRARY_PATH=.:$TB_HOME/lib:$TB_HOME/client/lib:/lib:/usr/lib:/usr/local/lib:/usr/lib/threads
export LD_LIBRARY_PATH_64=$TB_HOME/lib:$TB_HOME/client/lib:/usr/lib64:/usr/lib/64:/usr/ucblib:/usr/local/lib
export LIBPATH=$LD_LIBRARY_PATH
export LANG=ko_KR.eucKR
export LC_ALL=ko_KR.eucKR
export LC_CTYPE=ko_KR.eucKR
export LC_NUMERIC=ko_KR.eucKR
export LC_TIME=ko_KR.eucKR
export LC_COLLATE=ko_KR.eucKR
export LC_MONETARY=ko_KR.eucKR
export LC_MESSAGES=ko_KR.eucKR
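To apply a profile, source it in the tac account's shell on each node (the file name and location are assumptions; adjust to wherever you saved it):

# on node 1
su - tac
. ~/tac1.profile
echo $TB_SID   # should print tac1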
tac1.tip:
DB_NAME=tibero
LISTENER_PORT=21000
CONTROL_FILES="/dev/raw/raw2","/dev/raw/raw21"
DB_CREATE_FILE_DEST=/home/tac/tbdata/
LOG_ARCHIVE_DEST=/home/tac/tbdata/archive

DBWR_CNT=1

DBMS_LOG_TOTAL_SIZE_LIMIT=300M


MAX_SESSION_COUNT=30

TOTAL_SHM_SIZE=1G
MEMORY_TARGET=2G
DB_BLOCK_SIZE=8K
DB_CACHE_SIZE=300M


CLUSTER_DATABASE=Y
THREAD=0
UNDO_TABLESPACE=UNDO0

############### CM #######################
CM_PORT=61040
_CM_LOCAL_ADDR=192.168.116.11
LOCAL_CLUSTER_ADDR=10.10.10.10
LOCAL_CLUSTER_PORT=61050
###########################################
tac2.tip:
DB_NAME=tibero
LISTENER_PORT=21000
CONTROL_FILES="/dev/raw/raw2","/dev/raw/raw21"
DB_CREATE_FILE_DEST=/home/tac/tbdata/
LOG_ARCHIVE_DEST=/home/tac/tbdata/archive

DBWR_CNT=1

DBMS_LOG_TOTAL_SIZE_LIMIT=300M

MAX_SESSION_COUNT=30

TOTAL_SHM_SIZE=2G
MEMORY_TARGET=3G
DB_BLOCK_SIZE=8K
DB_CACHE_SIZE=300M



CLUSTER_DATABASE=Y
THREAD=1
UNDO_TABLESPACE=UNDO1

################ CM #######################
CM_PORT=61040
_CM_LOCAL_ADDR=192.168.116.12
LOCAL_CLUSTER_ADDR=10.10.10.20
LOCAL_CLUSTER_PORT=61050
###########################################
cm1.tip:
# Value used as the node ID when forming the cluster
CM_NAME=cm1
# Port used by the cmctl command to connect to CM
CM_UI_PORT=61040

CM_HEARTBEAT_EXPIRE=450
CM_WATCHDOG_EXPIRE=400
LOG_LVL_CM=5


CM_RESOURCE_FILE=/home/tac/cm_resource/cmfile
CM_RESOURCE_FILE_BACKUP=/home/tac/cm_resource/cmfile_backup
CM_RESOURCE_FILE_BACKUP_INTERVAL=1


CM_LOG_DEST=/home/tac/tibero7/instance/tac1/log/cm
CM_GUARD_LOG_DEST=/home/tac/tibero7/instance/tac1/log/cm_guard

CM_FENCE=Y
CM_ENABLE_FAST_NET_ERROR_DETECTION=Y
_CM_CHECK_RUNLEVEL=Y
cm2.tip:
# Value used as the node ID when forming the cluster
CM_NAME=cm2
# Port used by the cmctl command to connect to CM
CM_UI_PORT=61040

CM_HEARTBEAT_EXPIRE=450
CM_WATCHDOG_EXPIRE=400
LOG_LVL_CM=5



CM_RESOURCE_FILE=/home/tac/cm_resource/cmfile
CM_RESOURCE_FILE_BACKUP=/home/tac/cm_resource/cmfile_backup
CM_RESOURCE_FILE_BACKUP_INTERVAL=1


CM_LOG_DEST=/home/tac/tibero7/instance/tac2/log/cm
CM_GUARD_LOG_DEST=/home/tac/tibero7/instance/tac2/log/cm_guard

CM_FENCE=Y
CM_ENABLE_FAST_NET_ERROR_DETECTION=Y
_CM_CHECK_RUNLEVEL=Y
tbdsn.tbr (the same file is used on both nodes):
tibero=(
    (INSTANCE=(HOST=192.168.116.11)
                (PORT=21000)
                (DB_NAME=tibero)
    )
    (INSTANCE=(HOST=192.168.116.12)
                (PORT=21000)
                (DB_NAME=tibero)
    )
    (LOAD_BALANCE=Y)
    (USE_FAILOVER=Y)
)


tac1=(
    (INSTANCE=(HOST=192.168.116.11)
              (PORT=21000)
              (DB_NAME=tibero)
    )
)
tac2=(
    (INSTANCE=(HOST=192.168.116.12)
              (PORT=21000)
              (DB_NAME=tibero)
    )
)
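Once the instances are up later in this guide, the aliases defined here can be exercised with tbsql:

tbsql sys/tibero@tac1     # direct connection to node 1
tbsql sys/tibero@tibero   # load-balanced entry with failover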

 

Installing TAC

  • Copy the TAC configuration files into the tibero7 directories on each node.
#TAC node 1
cp tac1.tip $TB_HOME/config/tac1.tip
cp cm1.tip $TB_HOME/config/cm1.tip
cp tbdsn.tbr $TB_HOME/client/config/tbdsn.tbr
cp license.xml $TB_HOME/license/license.xml



#TAC node 2
cp tac2.tip $TB_HOME/config/tac2.tip
cp cm2.tip $TB_HOME/config/cm2.tip
cp tbdsn.tbr $TB_HOME/client/config/tbdsn.tbr
cp license.xml $TB_HOME/license/license.xml

 

Tasks on TAC Node 1

Registering CM (Tibero Cluster Manager) Network Resources

#TAC1
tbcm -b 
cmrctl add network --nettype private --ipaddr 10.10.10.10 --portno 51210 --name net1
cmrctl show
cmrctl add network --nettype public --ifname ens33 --name pub1
cmrctl show
cmrctl add cluster --incnet net1 --pubnet pub1 --cfile "/dev/raw/raw1" --name cls1
cmrctl start cluster --name cls1
cmrctl show
[tac@mysvr:/home/tac]$ tbcm -b
######################### WARNING #########################
# You are trying to start the CM-fence function.          #
###########################################################
You are not 'root'. Proceed anyway without fence? (y/N)y
CM Guard daemon started up.
CM-fence enabled.
Tibero cluster manager (cm1) startup failed!

[tac@mysvr:/home/tac]$ cmrctl add network --nettype private --ipaddr 10.10.10.10 --portno 51210 --name net1
Resource add success! (network, net1)
[tac@mysvr:/home/tac]$ cmrctl add network --nettype public --ifname ens33 --name pub1
Resource add success! (network, pub1)
[tac@mysvr:/home/tac]$ cmrctl add cluster --incnet net1 --pubnet pub1 --cfile "/dev/raw/raw1" --name cls1
Resource add success! (cluster, cls1)
[tac@mysvr:/home/tac]$ cmrctl start cluster --name cls1
MSG SENDING SUCCESS!
[tac@mysvr:/home/tac]$ cmrctl show
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           net1       UP (private) 10.10.10.10/51210
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: net1, pub: pub1
       cls1     file         cls1:0       UP /dev/raw/raw1
=====================================================================

 

Registering the Tibero DB Cluster Service with CM

#TAC Cluster
#service name = DB_NAME (tibero)
cmrctl add service --name tibero --cname cls1 --type db --mode AC
cmrctl show 
cmrctl add db --name tac1 --svcname tibero --dbhome $CM_HOME
cmrctl show

 

[tac@mysvr:/home/tac]$ cmrctl add service --name tibero --cname cls1 --type db --mode AC
Resource add success! (service, tibero)
[tac@mysvr:/home/tac]$ cmrctl add db --name tac1 --svcname tibero --dbhome $CM_HOME
Resource add success! (db, tac1)
[tac@mysvr:/home/tac]$ cmrctl show
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           net1       UP (private) 10.10.10.10/51210
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: net1, pub: pub1
       cls1     file         cls1:0       UP /dev/raw/raw1
       cls1  service         tibero     DOWN Database, Active Cluster (auto-restart: OFF)
       cls1       db           tac1     DOWN tibero, /home/tac/tibero7, failed retry cnt: 0
=====================================================================

Creating the Database

  • Boot the tac1 instance in NOMOUNT mode and check its status
cmrctl start db --name tac1 --option "-t nomount"

 

[tac@mysvr:/home/tac]$ cmrctl start db --name tac1 --option "-t nomount"
BOOT SUCCESS! (MODE : NOMOUNT)
[tac@mysvr:/home/tac]$ cmrctl show 
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           net1       UP (private) 10.10.10.10/51210
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: net1, pub: pub1
       cls1     file         cls1:0       UP /dev/raw/raw1
       cls1  service         tibero       UP Database, Active Cluster (auto-restart: OFF)
       cls1       db           tac1 UP(NMNT) tibero, /home/tac/tibero7, failed retry cnt: 0
=====================================================================
[tac@mysvr:/home/tac]$ 
  • Connect to the tac1 instance and create the database
  • tbsql sys/tibero@tac1
CREATE DATABASE "tibero"
    USER sys IDENTIFIED BY tibero
    MAXINSTANCES 8
    MAXDATAFILES 256
    CHARACTER SET MSWIN949
    NATIONAL CHARACTER SET UTF16
    LOGFILE GROUP 0 ('/dev/raw/raw3','/dev/raw/raw22') SIZE 127M,
            GROUP 1 ('/dev/raw/raw4','/dev/raw/raw23') SIZE 127M,
            GROUP 2 ('/dev/raw/raw5','/dev/raw/raw24') SIZE 127M
    MAXLOGFILES 100
    MAXLOGMEMBERS 8
    ARCHIVELOG
    DATAFILE '/dev/raw/raw10' SIZE 127M AUTOEXTEND off
    DEFAULT TABLESPACE USR
        DATAFILE '/dev/raw/raw11' SIZE 127M AUTOEXTEND off
    DEFAULT TEMPORARY TABLESPACE TEMP
        TEMPFILE '/dev/raw/raw12' SIZE 127M AUTOEXTEND off
    UNDO TABLESPACE UNDO0
        DATAFILE '/dev/raw/raw9' SIZE 127M AUTOEXTEND off;

 

[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac1

tbSQL 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.

Connected to Tibero using tac1.

SQL> CREATE DATABASE "tibero"
   2                    USER sys IDENTIFIED BY tibero
   3           MAXINSTANCES 8
   4           MAXDATAFILES 256
   5 CHARACTER set MSWIN949
   6 NATIONAL character set UTF16
   7 LOGFILE GROUP 0 ('/dev/raw/raw3','/dev/raw/raw22') SIZE 127M,
   8         GROUP 1 ('/dev/raw/raw4','/dev/raw/raw23') SIZE 127M,
   9         GROUP 2 ('/dev/raw/raw5','/dev/raw/raw24') SIZE 127M
  10         MAXLOGFILES 100
  11           MAXLOGMEMBERS 8
  12           ARCHIVELOG
  13          DATAFILE '/dev/raw/raw10' SIZE 127M
  14          AUTOEXTEND off          
  15          DEFAULT TABLESPACE USR
  16          DATAFILE '/dev/raw/raw11' SIZE 127M
  17         AUTOEXTEND off
  18         DEFAULT TEMPORARY TABLESPACE TEMP
  19         TEMPFILE '/dev/raw/raw12' SIZE 127M
  20         AUTOEXTEND off
  21         UNDO TABLESPACE UNDO0
  22         DATAFILE '/dev/raw/raw9' SIZE 127M
  23         AUTOEXTEND off ;

Database created.

SQL> q

Creating TAC Node 2's Redo and Undo and Enabling Its Thread

  • Reboot the tac1 node in NORMAL mode, then create the undo and redo and enable thread 1
CREATE UNDO TABLESPACE UNDO1 DATAFILE '/dev/raw/raw28' SIZE 127M AUTOEXTEND off;
create tablespace syssub datafile '/dev/raw/raw13' SIZE 127M autoextend on next 8M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 ('/dev/raw/raw6','/dev/raw/raw25') size 127M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 ('/dev/raw/raw7','/dev/raw/raw26') size 127M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('/dev/raw/raw8','/dev/raw/raw27') size 127M;
ALTER DATABASE ENABLE PUBLIC THREAD 1;
[tac@mysvr:/home/tac]$ tbboot 
Listener port = 21000

Tibero 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Tibero instance started up (NORMAL mode).
[tac@mysvr:/home/tac]$ tbsql sys/tibero@tac1    

tbSQL 7  

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.

Connected to Tibero using tac1.

SQL> CREATE UNDO TABLESPACE UNDO1 DATAFILE '/dev/raw/raw28' SIZE 127M AUTOEXTEND off;

Tablespace 'UNDO1' created.

SQL> create tablespace syssub datafile '/dev/raw/raw13' SIZE 127M autoextend on next 8M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Tablespace 'SYSSUB' created.

SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 ('/dev/raw/raw6','/dev/raw/raw25') size 127M;

Database altered.

SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 ('/dev/raw/raw7','/dev/raw/raw26') size 127M;

Database altered.

SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('/dev/raw/raw8','/dev/raw/raw27') size 127M;

Database altered.

SQL> ALTER DATABASE ENABLE PUBLIC THREAD 1;

Database altered.

SQL> 

Running system.sh

$TB_HOME/scripts/system.sh -p1 tibero -p2 syscat -a1 y -a2 y -a3 y -a4 y
[tac@mysvr:/home/tac]$ $TB_HOME/scripts/system.sh -p1 tibero -p2 syscat -a1 y -a2 y -a3 y -a4 y
Creating additional system index...
Dropping agent table...
Creating client policy table ...
Creating text packages table ...
Creating the role DBA...
Creating system users & roles...
Creating example users...
..........................
Creating sql translator profiles ...
Creating agent table...
Creating additional static views using dpv...
Done.
For details, check /home/tac/tibero7/instance/tac1/log/system_init.log.

 

Tasks on TAC Node 2

  • Connect to the second node's VM and perform the following steps.

Registering CM (Tibero Cluster Manager) Membership

tbcm -b 
cmrctl add network --nettype private --ipaddr 10.10.10.20 --portno 51210 --name net2
cmrctl show 
cmrctl add network --nettype public --ifname ens33 --name pub2
cmrctl show 
cmrctl add cluster --incnet net2 --pubnet pub2 --cfile "/dev/raw/raw1" --name cls1
cmrctl show 
cmrctl start cluster --name cls1
cmrctl show

Registering the TAC Node 2 Instance with the DB Cluster Service

cmrctl add db --name tac2 --svcname tibero --dbhome $CM_HOME
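The tac2 instance is then presumably booted the same way node 1 was (mirroring the node 1 commands above):

cmrctl start db --name tac2
cmrctl show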

Connecting to the TAC Node 2 Instance and Checking Status

  • cmrctl show all


Checking MySQL Logging Configuration

Here is how to view the SQL statements executed against a MySQL server.

First connect to the MySQL database and check the logging configuration.

 

  • Use the SHOW VARIABLES command to check the currently applied log settings and the log file location.
SHOW VARIABLES LIKE '%general%';
  • Execution result
mysql> SHOW VARIABLES LIKE '%general%';
+------------------+---------------------------------+
| Variable_name    | Value                           |
+------------------+---------------------------------+
| general_log      | ON                              |
| general_log_file | /var/lib/mysql/24a15adb0cb3.log |
+------------------+---------------------------------+
2 rows in set (0.45 sec)
  • Follow the log (tail -f <general_log_file>)
  • The SQL statements executed against MySQL can then be inspected:
2023-03-07T10:29:12.676978Z       103 Query     show tables
2023-03-07T10:29:28.580687Z       103 Query     SELECT DATABASE()
2023-03-07T10:29:37.020534Z       103 Query     show databases
2023-03-07T10:29:45.540610Z       103 Query     SELECT DATABASE()
2023-03-07T10:29:45.540744Z       103 Init DB   book_db
2023-03-07T10:29:45.541425Z       103 Query     show databases
2023-03-07T10:29:45.541874Z       103 Query     show tables
2023-03-07T10:29:45.542875Z       103 Field List        Book_book 
2023-03-07T10:29:45.749166Z       103 Field List        Book_book_voter 
2023-03-07T10:29:45.773772Z       103 Field List        auth_group 
2023-03-07T10:29:45.822584Z       103 Field List        auth_group_permissions 
2023-03-07T10:29:45.848494Z       103 Field List        auth_permission 
2023-03-07T10:29:45.889815Z       103 Field List        auth_user 
2023-03-07T10:29:45.977628Z       103 Field List        auth_user_groups 
2023-03-07T10:29:46.019426Z       103 Field List        auth_user_user_permissions 
2023-03-07T10:29:46.038964Z       103 Field List        django_admin_log 
2023-03-07T10:29:46.047558Z       103 Field List        django_content_type 
2023-03-07T10:29:46.187867Z       103 Field List        django_migrations 
2023-03-07T10:29:46.208062Z       103 Field List        django_session 
2023-03-07T10:29:51.093263Z       103 Query     show tables
2023-03-07T10:30:02.581078Z       103 Query     select * From auth_user
2023-03-07T10:30:08.757684Z       103 Query     select * From auth_user
2023-03-07T10:30:24.740727Z       103 Query     select * from Book_book
2023-03-07T10:30:57.684327Z       103 Query     SET GLOBAL general_log = 'OFF'

Changing the MySQL Log Settings

  • The general log can be turned on and off with the SET GLOBAL general_log command.
  • After changing the setting, run FLUSH LOGS.
 SET GLOBAL general_log = 'OFF';
 FLUSH LOGS ;

 SET GLOBAL general_log = 'ON';
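Note that SET GLOBAL does not survive a server restart. To make the setting persistent, it can also be placed in the MySQL configuration file (a sketch; the file path and log file name are examples):

# /etc/my.cnf
[mysqld]
general_log = 1
general_log_file = /var/lib/mysql/general.log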

 


Installing and Running R

yum install epel-release

yum install R

Once R is installed successfully, launch R.

$ R

R version 3.6.0 (2019-04-26) -- "Planting of a Tree"
Copyright (C) 2019 The R Foundation for Statistical Computing
Platform: x86_64-redhat-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

Tibero RODBC Connectivity Test

Below is sample code used to test connectivity with Tibero7.

The unixODBC connection settings must be configured beforehand.

.odbc.ini Configuration

[ODBC Data Sources]
Tibero7 = Tibero7 ODBC driver

[ODBC]
Trace = 1
TraceFile = /home/unixODBC/unixODBC-2.3.9/trace/odbc.trace

[Tibero7]
Driver = /data1/7FS02/tibero7/client/lib/libtbodbc.so
Description = Tibero ODBC driver for Tibero
# server accepts either an IP address or a hostname
server = 192.168.116.12
# port is the Tibero listener port
port = 35000
# database is the DB_NAME
database = tibero
User = tibero
Password = tmax
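Before moving on to R, the DSN itself can be verified with unixODBC's isql tool, using the credentials from the ini file above:

isql -v Tibero7 tibero tmax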

R Connectivity Test

library(RODBC)
db <- odbcConnect("Tibero7", uid="tibero", pwd="tmax")
query_result <- sqlQuery(db, "SELECT * FROM DUAL")
query_result
> library(RODBC)
> db <- odbcConnect("Tibero7", uid="tibero", pwd="tmax")
> query_result <- sqlQuery(db, "SELECT * FROM DUAL")
> query_result
  DUMMY
1     X

 


Installing eGovFrame (the Korean e-Government Standard Framework)

 

Installing the JDK (OpenJDK 11)

  • Configure the Java environment
  • Right-click the Windows button -> System -> System info -> Advanced system settings

  • Register the JAVA_HOME environment variable

 

  • Add it to Path

  • Check the Java version in a cmd window

 

Running eGovFrame

 

Launching the eGovFrame Eclipse

Running the downloaded file creates an Eclipse directory under the eGovFrame directory.

Select the desired workspace and launch Eclipse.

Creating the eGovFrame Simple Homepage Project

Reviewing the simple homepage project structure

 

Configuring the DB Connection

Editing the connection settings

  • Edit the globals.properties file
  • C:\eGovFrame\eGovFrameDev-4.0.0-64bit\workspace\egovFrame-simpleHomepage\src\main\resources\egovframework\egovProps\globals.properties

The database used here is Tibero.

# Server OS type (WINDOWS, UNIX)
Globals.OsType = WINDOWS

# IP for the G4C connection (localhost)
Globals.LocalIp = 127.0.0.1

# DB server type (mysql, oracle, altibase, tibero) - used to select the datasource and sqlMap files
Globals.DbType = tibero
Globals.UserName=tibero
Globals.Password=tmax


#Tibero
Globals.DriverClassName=com.tmax.tibero.jdbc.TbDriver
Globals.Url=jdbc:tibero:thin:@127.0.0.1:17000:tibero
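Before wiring up the datasource, you can check that the listener named in Globals.Url is reachable (host and port from the file above; this assumes a Unix-like shell with nc available):

nc -z -v 127.0.0.1 17000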

 

  • Edit the context-datasource.xml file
  • C:\eGovFrame\eGovFrameDev-4.0.0-64bit\workspace\egovFrame-simpleHomepage\src\main\resources\egovframework\spring\com\context-datasource.xml
    <!-- Tibero -->
    <bean id="dataSource-tibero" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="${Globals.DriverClassName}"/>
        <property name="url" value="${Globals.Url}" />
        <property name="username" value="${Globals.UserName}"/>
        <property name="password" value="${Globals.Password}"/>
    </bean>

Creating Tables and Data

Connect to Tibero and run the SQL scripts included in the web project. (The TBR-7071 errors in the output below appear to come from the scripts dropping objects that do not exist yet, so they can be ignored.)

  • all_sht_ddl_tibero.sql (creates the tables)
  • all_sht_data_tibero.sql (inserts the data)

SQL> @all_sht_ddl_tibero
TBR-7071: Schema object 'TIBERO.IDS' was not found or is invalid.


Table 'IDS' created.

TBR-7071: Schema object 'TIBERO.LETTCCMMNCLCODE' was not found or is invalid.


Table 'LETTCCMMNCLCODE' created.

TBR-7071: Schema object 'TIBERO.LETTCCMMNCODE' was not found or is invalid.


Table 'LETTCCMMNCODE' created.

 

SQL> @all_sht_data_tibero

1 row inserted.


1 row inserted.


1 row inserted.

 

Resource Compile

  • Run As -> Maven clean (deletes the built resources)
  • Run As -> Maven install (recompiles and rebuilds the resources) -> the war file is rebuilt.

 

 

Installing the Tomcat Server

  • File -> New -> Other -> Server -> Server -> click Apache, then select the Tomcat version to install

  • Copy the Tibero JDBC library into the Tomcat lib folder

  • Create the installation directory (Tomcat), accept the license, and install

  • Once installed successfully, a Servers tab appears at the bottom.
  • Right-click the Tomcat server -> click the Add and Remove menu

  • Then click the Add button and select the web project to deploy.

 

 

Connection Test

 

  • http://localhost:8080/sht_webapp/cmm/main/mainPage.do
  • ID/PASSWD : admin/1

 


Tablespace Usage Query

set linesize 400;
select substr(a.tablespace_name, 1, 30) as tablespace_name
     , round(sum(a.total1)/1024/1024, 1) "Total(MB)"
     , round(sum(a.total1)/1024/1024/1024, 1) "Total(GB)"
     , round(sum(a.total1)/1024/1024, 1) - round(sum(a.sum1)/1024/1024, 1) "Used(MB)"
     , round(sum(a.sum1)/1024/1024, 1) "Free(MB)"
     , round((round(sum(a.total1)/1024/1024, 1) - round(sum(a.sum1)/1024/1024, 1)) / round(sum(a.total1)/1024/1024, 1) * 100, 2) "Used(%)"
  from ( select tablespace_name
              , 0 total1
              , sum(bytes) sum1
              , max(bytes) maxb
              , count(bytes) cnt
           from dba_free_space
          group by tablespace_name
          union
         select tablespace_name
              , sum(bytes) total1
              , 0
              , 0
              , 0
           from dba_data_files
          group by tablespace_name ) a
 group by a.tablespace_name
 order by tablespace_name;

Result

Comparing Data File Sizes

$ du -sh * | sort -k 2
201M    274752.tdf
32G     TS_PART_TBL_PB_SYNC.tdf
201M    TS_TBL_PB_SYNC.tdf
9.2G    TS_TBL_PB_SYNC_LOB.tdf
41M     TS_TEST_1.dtf
41M     TS_TEST_2.dtf
41M     TS_TEST_3.dtf
41M     TS_TEST_4.dtf
11M     TS_TEST_5.dtf
141M    TS_TEST_lob.dtf
4.0K    hc_ORCL19C.dat
4.0K    hc_ORCLCDB.dat
4.0K    hc_orcl19c.dat
1.1G    iconn.tdf
4.0K    init.ora
4.0K    initORCL19C.ora
4.0K    lkORCL19C
4.0K    lkORCLCDB
4.0K    orapwORCLCDB
4.0K    orapworcl19c
4.0K    spfileORCLCDB.ora
4.0K    spfileorcl19c.ora
4.0K    test_pfile.ora
201M    ts1.tdf
201M    ts2.tdf
201M    ts274259_1.tdf
201M    ts274259_2.tdf
201M    ts274259_3.tdf
201M    ts274259_4.tdf
201M    ts3.tdf
201M    ts4.tdf
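The listing above is sorted by file name (sort -k 2). To order it by size instead, GNU sort can compare human-readable sizes directly:

du -sh * | sort -h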