2015/09/28

Speed up Cacti device polling with Spine

----
Speed up Cacti device polling with Spine
----
http://www.cacti.net/spine_info.php
Spine, formerly Cactid, is a poller for Cacti that primarily strives to be as fast as possible. For this reason it is written in native C, makes use of POSIX threads, and is linked directly against the net-snmp library for minimal SNMP polling overhead. Spine is a replacement for the default cmd.php poller so you must decide if using Spine makes sense for your installation.

Spine is the poller program for Cacti; it is written in native C with POSIX threads to speed up host polling. By default Cacti uses cmd.php as its poller; if your polling runs are taking too long, consider switching to Spine.

You can tell from the log whether you need spine:
Console -> Utilities -> View Cacti Log File

In the example below, cmd.php took about 190 seconds, which is getting close to the 300-second limit.
09/28/2015 08:25:12 PM - SYSTEM STATS: Time:190.7159 Method:cmd.php Processes:100 Threads:N/A Hosts:68 HostsPerProcess:1 DataSources:3003 RRDsProcessed:1433
This case clearly needs improvement.

----
Installing spine
----
http://www.cacti.net/spine_install_unix.php
Spine needs the libraries below; install any that are missing.

net-snmp utilities and development libraries
mysql utilities, server and development libraries
openssl development libraries

For example, the MySQL libraries:
yum install mysql++-devel -y
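
On CentOS 7 the whole build tool chain plus those development libraries can be pulled in roughly as follows (a sketch; the package names are my assumption and vary by distribution, and on CentOS 7 the MySQL development files come from the MariaDB packages):

# hypothetical prerequisite install for building spine on CentOS 7
yum install -y gcc make autoconf automake libtool \
    net-snmp-devel net-snmp-utils mariadb-devel openssl-devel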

There is no prebuilt spine binary, so compile it yourself:
----
wget http://www.cacti.net/downloads/spine/cacti-spine-0.8.8f.tar.gz
tar zxvf cacti-spine-0.8.8f.tar.gz 
cd cacti-spine-0.8.8f
./configure 
make
sudo make install

# Find where spine was installed
[root@mail ~]# whereis spine
spine: /usr/local/spine

# Create the spine config file; spine talks to the mysql DB directly, so use the same settings as your cacti install
[root@mail spine]# cat /usr/local/spine/etc/spine.conf 
DB_Host         localhost
DB_Database     cacti
DB_User         cactiuser
DB_Pass         cactiuser
DB_Port         3306

# The path to the spine binary
[root@mail spine]# /usr/local/spine/bin/spine --help
SPINE 0.8.8f  Copyright 2002-2015 by The Cacti Group
Usage: spine [options] [[firstid lastid] || [-H/--hostlist='hostid1,hostid2,...,hostidn']]
... skip ...
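
Before pointing Cacti at it, you can run spine once by hand to confirm it reaches the database. This is only a sketch and assumes this build accepts the --conf and --verbosity options together with a host ID range:

# hypothetical manual run: use the config above, be verbose, poll only host ID 1
/usr/local/spine/bin/spine --conf=/usr/local/spine/etc/spine.conf --verbosity=3 1 1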


----
Configuring spine inside cacti
----

Enter the path to the spine binary so cacti can find it.

Change the poller type to spine and set the number of processes and threads.


Watch the log to see how the poller time changes.

In this case, performance improved dramatically after the switch: almost 10 times faster.
09/28/2015 09:12:16 PM - SYSTEM STATS: Time:15.2687 Method:spine Processes:100 Threads:10 Hosts:68 HostsPerProcess:1 DataSources:3003 RRDsProcessed:1433
09/28/2015 09:07:21 PM - SYSTEM STATS: Time:20.1201 Method:spine Processes:100 Threads:10 Hosts:68 HostsPerProcess:1 DataSources:3003 RRDsProcessed:1433
09/28/2015 09:02:25 PM - SYSTEM STATS: Time:23.3930 Method:spine Processes:100 Threads:10 Hosts:68 HostsPerProcess:1 DataSources:3003 RRDsProcessed:1433
09/28/2015 08:57:17 PM - SYSTEM STATS: Time:15.7469 Method:spine Processes:100 Threads:60 Hosts:68 HostsPerProcess:1 DataSources:3003 RRDsProcessed:1433
09/28/2015 08:52:20 PM - SYSTEM STATS: Time:18.7691 Method:spine Processes:100 Threads:60 Hosts:68 HostsPerProcess:1 DataSources:3003 RRDsProcessed:1433
09/28/2015 08:50:16 PM - SYSTEM STATS: Time:194.3420 Method:cmd.php Processes:100 Threads:N/A Hosts:68 HostsPerProcess:1 DataSources:3003 RRDsProcessed:1877
09/28/2015 08:43:43 PM - SYSTEM STATS: Time:100.7800 Method:cmd.php Processes:100 Threads:N/A Hosts:68 HostsPerProcess:1 DataSources:3003 RRDsProcessed:1433
09/28/2015 08:38:34 PM - SYSTEM STATS: Time:92.6313 Method:cmd.php Processes:100 Threads:N/A Hosts:68 HostsPerProcess:1 DataSources:3003 RRDsProcessed:1433
09/28/2015 08:34:33 PM - SYSTEM STATS: Time:151.5439 Method:cmd.php Processes:100 Threads:N/A Hosts:68 HostsPerProcess:1 DataSources:3003 RRDsProcessed:1433



REF:
http://blog.jangmt.com/2015/09/centos-7-cacti.html  Installing CACTI on Centos 7

Installing Apache HADOOP in multi-host (cluster) mode on CentOS Linux 7

For a multi-host (cluster) HADOOP installation,
you should have at least four machines for it to be worthwhile.

(0) Install the OS
This example uses CentOS Linux 7.
The namenode uses the 'server with GUI' installation option.
The datanodes use the @base @core @development package groups.
Give /home its own partition or disk on both the datanodes and the namenode.

(1) Set up the DNS name and hostname of every host
[hadoop@namenode ~]$ host namenode
namenode.jangmt.com has address 192.168.1.100
[hadoop@namenode ~]$ host datanode1
datanode1.jangmt.com has address 192.168.1.1
[hadoop@namenode ~]$ host datanode2
datanode2.jangmt.com has address 192.168.1.2
[hadoop@namenode ~]$ host datanode3
datanode3.jangmt.com has address 192.168.1.3

To make the hosts easy to identify, rename each one to match its DNS name.
This can be done with the hostnamectl command; see
http://blog.jangmt.com/2015/06/centos7-rhel7-runlevel.html
(on CentOS 7 you can use hostnamectl to change the system hostname)
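
For example, on each machine (the FQDN below is just this cluster's naming):

# run as root on datanode1
hostnamectl set-hostname datanode1.jangmt.com
hostnamectl status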

(2) Set up ssh key authentication between all hosts
See:
http://blog.jangmt.com/2015/09/ssh-key-linux.html
Basically, create a hadoop account; this user owns the Hadoop software and is also the HDFS administrator.
Then let hadoop log in as root and root log in as hadoop, both without a password, which makes exchanging data much easier.

[hadoop@namenode ~]$ ssh root@datanode1 ifconfig enp2s0
enp2s0: flags=4163  mtu 1500
        inet 192.168.1.1  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::922b:34ff:fe24:e0c6  prefixlen 64  scopeid 0x20
        ether 90:2b:34:24:e0:c6  txqueuelen 1000  (Ethernet)
        RX packets 897188  bytes 1331862037 (1.2 GiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 191894  bytes 14979844 (14.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 1  collisions 0

[hadoop@namenode ~]$ ssh hadoop@datanode1 /sbin/ifconfig enp2s0
enp2s0: flags=4163  mtu 1500
        inet 192.168.1.1  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::922b:34ff:fe24:e0c6  prefixlen 64  scopeid 0x20
        ether 90:2b:34:24:e0:c6  txqueuelen 1000  (Ethernet)
        RX packets 897238  bytes 1331869640 (1.2 GiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 191949  bytes 14988692 (14.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 1  collisions 0

[hadoop@namenode ~]$ ssh root@datanode1
Last login: Mon Sep 28 00:58:31 2015 from namenode.jangmt.com
[root@datanode1 ~]# ssh hadoop@localhost
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is 15:e4:e7:62:cc:59:71:7d:2d:54:7d:6a:ba:9a:6f:10.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Last login: Mon Sep 28 00:58:21 2015 from namenode.jangmt.com

It is also recommended to log in to every host once beforehand so that known_hosts holds the keys of all machines.
When this is done, the .ssh directory of both the hadoop and root accounts on every host looks roughly like this:
[hadoop@namenode ~]$ ls .ssh/ -l
total 16
-rw-r--r--. 1 hadoop hadoop  391 Sep 24 17:20 authorized_keys
-rw-------. 1 hadoop hadoop 1675 Sep 24 17:20 id_rsa
-rw-r--r--. 1 hadoop hadoop  391 Sep 24 17:20 id_rsa.pub
-rw-r--r--. 1 hadoop hadoop 1943 Sep 28 00:06 known_hosts
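
One way to seed known_hosts for every host in a single pass (a sketch, assuming the four host names used in this post; StrictHostKeyChecking=no blindly accepts host keys, so only use it on a trusted LAN):

# run as hadoop on the namenode; repeat as root if root also needs passwordless access
for H in namenode datanode1 datanode2 datanode3
do
  ssh -o StrictHostKeyChecking=no hadoop@${H} hostname
done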


(3) Install Oracle Java first
See this post:
http://blog.jangmt.com/2015/09/oracle-java-centos-linux-7.html 
Keep only the Oracle version of java and remove the openjdk packages.

[hadoop@namenode ~]$ rpm -qa | grep openjdk
[hadoop@namenode ~]$ java -version
java version "1.8.0_51"
Java(TM) SE Runtime Environment (build 1.8.0_51-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.51-b03, mixed mode)

The environment variables are set up together in a later step.
[hadoop@namenode ~]$ env | grep JAVA
JAVA_HOME=/usr/java/latest

(4) Download the hadoop tarball; reference documentation
Official hadoop 2.7.1 documentation:
http://hadoop.apache.org/docs/r2.7.1/
Single-node setup guide:
http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/SingleCluster.html
Cluster setup guide (this walkthrough is adapted from it):
http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/ClusterSetup.html
Commands manual:
http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/CommandsManual.html
hdfs shell:
http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/FileSystemShell.html

Downloads:
http://hadoop.apache.org/releases.html


(5) Edit the configuration files

(5.1) namenode (the main master machine)
# HDFS settings (the XML bodies of the files below are not reproduced here; see the sketch after them)
[hadoop@namenode ~]$ cat /home/hadoop/hadoop/etc/hadoop/hdfs-site.xml 


# core-site: the address the hosts use to exchange data
[hadoop@namenode ~]$ cat /home/hadoop/hadoop/etc/hadoop/core-site.xml
# MapReduce is managed by yarn
[hadoop@namenode ~]$ cat /home/hadoop/hadoop/etc/hadoop/mapred-site.xml
# yarn-site: resource manager settings
[hadoop@namenode ~]$ cat /home/hadoop/hadoop/etc/hadoop/yarn-site.xml
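
A minimal sketch of what these four files typically contain for a layout like this one; the fs.defaultFS port, directory paths and replication factor are assumptions based on the rest of this post, not the author's exact values:

# core-site.xml
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://namenode:9000</value></property>
</configuration>

# hdfs-site.xml
<configuration>
  <property><name>dfs.namenode.name.dir</name><value>file:///home/hadoop/namenode</value></property>
  <property><name>dfs.datanode.data.dir</name><value>file:///home/hadoop/datanode</value></property>
  <property><name>dfs.replication</name><value>3</value></property>
</configuration>

# mapred-site.xml
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
</configuration>

# yarn-site.xml
<configuration>
  <property><name>yarn.resourcemanager.hostname</name><value>namenode</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
</configuration>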

# List the slave hosts; at startup the scripts call out to these machines
[hadoop@namenode ~]$ cat /home/hadoop/hadoop/etc/hadoop/slaves
datanode1
datanode2
datanode3

# hadoop's default env file; the line export JAVA_HOME=/usr/java/latest must be set explicitly or the variable is not found.
[hadoop@namenode ~]$ cat /home/hadoop/hadoop/etc/hadoop/hadoop-env.sh
# The java implementation to use.
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/java/latest



(5.2) datanode data hosts; every one of them must be configured.
# Same issue: the JAVA_HOME variable is not found unless set explicitly
[hadoop@datanode1 ~]$ cat /home/hadoop/hadoop/etc/hadoop/hadoop-env.sh
# The java implementation to use.
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/java/latest

# yarn-site settings
[hadoop@datanode1 ~]$ cat /home/hadoop/hadoop/etc/hadoop/yarn-site.xml
# mapred is managed by yarn
[hadoop@datanode1 ~]$ cat /home/hadoop/hadoop/etc/hadoop/mapred-site.xml
# HDFS settings
[hadoop@datanode1 ~]$ cat /home/hadoop/hadoop/etc/hadoop/hdfs-site.xml
# core-site: points at the master host
[hadoop@datanode1 ~]$ cat /home/hadoop/hadoop/etc/hadoop/core-site.xml
# slaves: local setting on this datanode
[hadoop@datanode1 ~]$ cat /home/hadoop/hadoop/etc/hadoop/slaves
localhost




(5.3) Set up the environment variables

# Edit the user's .bash_profile file
[hadoop@namenode ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH

# HADOOP
export HADOOP_PREFIX=/home/hadoop/hadoop
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_NAMENODE_OPTS="-XX:+UseParallelGC -Xmx1g"

# mapreduce app
export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar

# R-HADOOP
export HADOOP_CMD=$HADOOP_PREFIX/bin/hadoop
export HADOOP_STREAMING="$HADOOP_PREFIX/share/hadoop/tools/lib/hadoop-streaming-2.7.1.jar"

# HIVE
export HIVE_HOME=/home/hadoop/apache-hive

# ANT
export ANT_LIB=/usr/java/ant/lib
export ANT_HOME=/usr/java/ant/

# Maven
export MAVEN_HOME=/usr/java/maven/
export MAVEN_LIB=/usr/java/maven/lib

#JAVA
export JAVA_HOME=/usr/java/latest
export JRE_HOME=/usr/java/latest/jre

# PIG
export PIG_HOME=/home/hadoop/apache-pig
export PATH=$PATH:$PIG_HOME/bin:$HIVE_HOME/bin:$HADOOP_PREFIX/sbin:$HADOOP_PREFIX/bin:$JAVA_HOME/bin:$JRE_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin:$HADOOP_PREFIX/bin:$JAVA_HOME/bin:$JRE_HOME/bin

[hadoop@namenode ~]$ source .bash_profile
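
A quick sanity check that the new variables are picked up (hadoop must already be unpacked under /home/hadoop/hadoop):

# should print the paths set above and the Hadoop version banner
echo $HADOOP_HOME
which hadoop
hadoop version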


(6) Format the hdfs filesystem
# on the namenode machine
rm -rf /home/hadoop/namenode
mkdir /home/hadoop/namenode

# on the remote datanode 1,2,3 machines
rm -rf /home/hadoop/datanode
mkdir /home/hadoop/datanode

# format the namenode
hdfs namenode -format
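
If you prefer not to log in to every datanode by hand, the same directory preparation can be pushed out over ssh; a sketch using the host names from this post:

# run as hadoop on the namenode; wipes and recreates the datanode directory on each node
for N in 1 2 3
do
  ssh hadoop@datanode${N} 'rm -rf /home/hadoop/datanode && mkdir /home/hadoop/datanode'
done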


(7) Start HDFS and YARN
# start HDFS and YARN
[hadoop@namenode happ]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [namenode]
namenode: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-namenode.jangmt.com.out
datanode1: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-datanode1.out
datanode2: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-datanode2.out
datanode3: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-datanode3.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-namenode.jangmt.com.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-resourcemanager-namenode.jangmt.com.out
datanode2: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-datanode2.out
datanode3: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-datanode3.out
datanode1: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-datanode1.out

# they can also be shut down
[hadoop@namenode happ]$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [namenode]
namenode: stopping namenode
datanode1: stopping datanode
datanode2: stopping datanode
datanode3: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
datanode2: no nodemanager to stop
datanode1: no nodemanager to stop
datanode3: no nodemanager to stop
no proxyserver to stop

(8) Verify HDFS
[hadoop@namenode happ]$ hdfs dfs -mkdir /home

[hadoop@namenode happ]$ hdfs dfs -ls /
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2015-09-28 00:18 /home
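
Writing a small file in and reading it back is a quick way to confirm the datanodes really accept data (the file name below is arbitrary):

hdfs dfs -put /etc/hosts /home/hosts.txt
hdfs dfs -cat /home/hosts.txt
hdfs dfs -rm /home/hosts.txt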



(9) View a report with dfsadmin
[hadoop@namenode happ]$ hdfs dfsadmin -report
Configured Capacity: 932499226624 (868.46 GB)
Present Capacity: 923198201856 (859.80 GB)
DFS Remaining: 923198177280 (859.80 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):

Name: 192.168.1.1:50010 (datanode1.jangmt.com)
Hostname: datanode1.jangmt.com
Decommission Status : Normal
Configured Capacity: 227524214784 (211.90 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 3100733440 (2.89 GB)
DFS Remaining: 224423473152 (209.01 GB)
DFS Used%: 0.00%
DFS Remaining%: 98.64%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Sep 28 00:45:55 CST 2015


Name: 192.168.1.2:50010 (datanode2.jangmt.com)
Hostname: datanode2.jangmt.com
Decommission Status : Normal
Configured Capacity: 227524214784 (211.90 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 3100037120 (2.89 GB)
DFS Remaining: 224424169472 (209.01 GB)
DFS Used%: 0.00%
DFS Remaining%: 98.64%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Sep 28 00:45:55 CST 2015


Name: 192.168.1.3:50010 (datanode3.jangmt.com)
Hostname: datanode3.jangmt.com
Decommission Status : Normal
Configured Capacity: 477450797056 (444.66 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 3100254208 (2.89 GB)
DFS Remaining: 474350534656 (441.77 GB)
DFS Used%: 0.00%
DFS Remaining%: 99.35%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Sep 28 00:45:55 CST 2015

(10) Use jps to see which services are running
[hadoop@namenode happ]$ pdsh_cmd jps
datanode2: 325 Jps
datanode1: 7063 Jps
datanode3: 15576 Jps
datanode2: 32555 DataNode
datanode1: 6825 DataNode
datanode2: 32667 NodeManager
datanode3: 15338 DataNode
datanode1: 6937 NodeManager
datanode3: 15451 NodeManager
namenode2: 10490 Jps
[hadoop@namenode happ]$ jps
19301 SecondaryNameNode
19959 Jps
19500 ResourceManager
19055 NameNode

(11) HDFS web interface
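
The screenshots are not reproduced here. With the stock Hadoop 2.x ports (an assumption; 50070 is the same NameNode port used elsewhere in this blog), the web interfaces should be reachable at:

http://namenode:50070/   (HDFS NameNode UI, including the Datanodes tab)
http://namenode:8088/    (YARN ResourceManager UI)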



(12) Verify that yarn is working

[hadoop@hnamenode ~]$ yarn node -list
15/09/29 12:52:55 INFO client.RMProxy: Connecting to ResourceManager at hnamenode/192.168.1.100:8050
Total Nodes:16
         Node-Id     Node-State Node-Http-Address Number-of-Running-Containers
hdatanode5.cm.nsysu.edu.tw:36921        RUNNING hdatanode5.cm.nsysu.edu.tw:8042                           0
hdatanode8.cm.nsysu.edu.tw:56617        RUNNING hdatanode8.cm.nsysu.edu.tw:8042                           0
hdatanode15.cm.nsysu.edu.tw:45558        RUNNING hdatanode15.cm.nsysu.edu.tw:8042                           0
hdatanode6.cm.nsysu.edu.tw:45475        RUNNING hdatanode6.cm.nsysu.edu.tw:8042                           0
hdatanode1.cm.nsysu.edu.tw:56762        RUNNING hdatanode1.cm.nsysu.edu.tw:8042                           0
hdatanode14.cm.nsysu.edu.tw:38758        RUNNING hdatanode14.cm.nsysu.edu.tw:8042                           0
hdatanode9.cm.nsysu.edu.tw:57619        RUNNING hdatanode9.cm.nsysu.edu.tw:8042                           0
hdatanode3.cm.nsysu.edu.tw:32826        RUNNING hdatanode3.cm.nsysu.edu.tw:8042                           0
hdatanode10.cm.nsysu.edu.tw:51220        RUNNING hdatanode10.cm.nsysu.edu.tw:8042                           0
hdatanode11.cm.nsysu.edu.tw:32837        RUNNING hdatanode11.cm.nsysu.edu.tw:8042                           0
hdatanode4.cm.nsysu.edu.tw:43196        RUNNING hdatanode4.cm.nsysu.edu.tw:8042                           0
hdatanode13.cm.nsysu.edu.tw:47358        RUNNING hdatanode13.cm.nsysu.edu.tw:8042                           0
hdatanode2.cm.nsysu.edu.tw:48404        RUNNING hdatanode2.cm.nsysu.edu.tw:8042                           0
hdatanode12.cm.nsysu.edu.tw:37563        RUNNING hdatanode12.cm.nsysu.edu.tw:8042                           0
hdatanode16.cm.nsysu.edu.tw:41908        RUNNING hdatanode16.cm.nsysu.edu.tw:8042                           0
hdatanode7.cm.nsysu.edu.tw:54982        RUNNING hdatanode7.cm.nsysu.edu.tw:8042                           0




The configuration files are published at https://github.com/mtchang/hadoop ; if anything is wrong, come back and fix it!!
gist-it.appspot.com is really handy: Embed files from a github repository like a gist
That covers the basic setup process.

Further reading:
http://chaalpritam.blogspot.tw/2015/05/hadoop-270-multi-node-cluster-setup-on.html
http://chaalpritam.blogspot.tw/2015/05/hadoop-270-single-node-cluster-setup-on.html

2015/09/27

Linux Shell -- for loops


A for loop in the Linux shell feeds a fixed list of values, one at a time, into the loop variable.

The basic form is:

for VARIABLE in value1 value2 value3
do
  echo $VARIABLE
done

Running it echoes each value in turn:
value1  value2  value3
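
The list does not have to be typed out by hand; brace expansion or seq can generate it, for example:

# both loops print the numbers 1 to 3, one per line
for N in {1..3}; do echo $N; done
for N in $(seq 1 3); do echo $N; done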

# Example: copy the dl directory to every host; ssh key authentication between the hosts is already set up
[hadoop@namenode happ]$ cat sync_dl2_all.sh 
#!/bin/bash
ACTION=$1
if [ "$ACTION" == "all" ]; then
  for N in 1 2 3
  do
   echo "send to datanode${N}"
   RUN="rsync -av --delete-after /home/hadoop/happ/dl hadoop@datanode${N}:/home/hadoop/"
   echo $RUN
   #echo $RUN | sh
  done
  exit 0
else
  echo "usage:$0 all"
  exit 1
fi

[hadoop@namenode happ]$ ./sync_dl2_all.sh
usage:./sync_dl2_all.sh all

[hadoop@namenode happ]$ ./sync_dl2_all.sh all
send to datanode1
rsync -av --delete-after /home/hadoop/happ/dl hadoop@datanode1:/home/hadoop/
send to datanode2
rsync -av --delete-after /home/hadoop/happ/dl hadoop@datanode2:/home/hadoop/
send to datanode3
rsync -av --delete-after /home/hadoop/happ/dl hadoop@datanode3:/home/hadoop/


#echo $RUN | sh 
Remove the leading # from that line for the script to actually execute the commands.

Installing the cacti traffic graphing system on CentOS 7

CACTI is a traffic graphing and management tool that combines snmp and rrdtool; it can monitor traffic graphs for a large number of devices.

Install and configure the following first:
1. MariaDB
http://blog.jangmt.com/2015/09/mariadb-root-in-centos-linux-7.html

2. httpd and phpmyadmin
http://blog.jangmt.com/2015/09/centos-7-phpmyadmin.html 

3. The snmpd service on the monitored hosts
http://blog.jangmt.com/2015/09/centos-7-snmpd-centos-7-snmp-install.html

----
CACTI install
----
# Install directly with yum
[root@hnamenode2 snmp]# yum install cacti -y
[root@hnamenode2 cacti]# pwd
/etc/cacti

# After the install, check the values in db.php and create a matching cacti account
[root@hnamenode2 cacti]# cat /etc/cacti/db.php
/* make sure these values refect your actual database/host/user/password */
$database_type = "mysql";
$database_default = "cacti";
$database_hostname = "localhost";
$database_username = "cacti";
$database_password = "cactiuser";
$database_port = "3306";
$database_ssl = false;

/*
   Edit this to point to the default URL of your Cacti install
   ex: if your cacti install as at http://serverip/cacti/ this
   would be set to /cacti/
*/
//$url_path = "/cacti/";

/* Default session name - Session name must contain alpha characters */
//$cacti_session_name = "Cacti";
?>

# Using the information from /etc/cacti/db.php above, create a cacti account through phpmyadmin; do not reuse the root account.
Create the account with phpmyadmin.
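
If you prefer the command line to phpmyadmin, a sketch of the equivalent statements (database name, user and password taken from db.php above):

# as the MariaDB root user
mysql -u root -p
CREATE DATABASE cacti;
GRANT ALL PRIVILEGES ON cacti.* TO 'cacti'@'localhost' IDENTIFIED BY 'cactiuser';
FLUSH PRIVILEGES;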


# cacti ships a default sql schema
[root@hnamenode2 cacti]# cat /usr/share/doc/cacti-0.8.8b/cacti.sql

# Import it into the cacti DB
[root@hnamenode2 cacti]# mysql -u cacti -p cacti < /usr/share/doc/cacti-0.8.8b/cacti.sql
Enter password:

# Edit the apache config for cacti
[root@hnamenode2 conf.d]# cat /etc/httpd/conf.d/cacti.conf 

# CentOS 7 ships httpd 2.4 or later, so configure access here.
Require host localhost
Require ip 111.22. 192.168. (add your own IP ranges; by default only the local host may reach the system)


# Log in to verify; if the installer screens appear, everything is already set up, so just click next through them.
http://localhost/cacti/
The screen after logging in.


# Default cacti credentials
Account: admin
Password: admin
Change them as soon as you have logged in.

# The general workflow for creating traffic graphs in cacti:
1. Create a host device with its snmp information.
2. Build graphs from that snmp data; the graphs refresh every 5 minutes.
3. Create a tree view, organised by host and by heading, to arrange where things are displayed.
4. The tree view is where you look at the final result.

# To create a host, choose DEVICES

Fill in the host's snmp details and related information.

Check that the Data Queries section below returns data (ooo Items).



Devices that return data can have graphs created via Create Graphs for this Host.


Tick every Data Query snmp entry to create its graph.



# Create a host tree view (choose Graph Trees)
Create a title for the Graph Tree.


Under the title you created, add a Host item.



The resulting report; no graph yet because 5 minutes have not elapsed.

# By default cacti polls all devices from a cron job, which has to be enabled by hand.
# Enable the cacti cron entry
[root@hnamenode2 cron.d]# cat /etc/cron.d/cacti 
*/5 * * * * cacti /usr/bin/php /usr/share/cacti/poller.php > /dev/null 2>&1


# Once it is enabled, the graphs appear.

After the cron job is enabled, the cacti graphs are generated.


Q: Cacti is installed and the graphs are generated, but no traffic shows up. What could be wrong?

First check the time: sync the server clock with ntp and confirm the timezone.
[root@hnamenode2 ~]# date
一  9月 28 10:33:02 CST 2015
[root@hnamenode2 ~]# ntpdate clock.stdtime.gov.tw
28 Sep 10:33:33 ntpdate[5013]: adjust time server 211.22.103.158 offset -0.003425 sec

Finally check php.ini and make sure it uses the same timezone; if not, change it and restart apache.

[root@hnamenode2 ~]# cat /etc/php.ini  | grep timezone
; Defines the default timezone used by the date functions
; http://php.net/date.timezone
date.timezone = Asia/Taipei

[root@hnamenode2 ~]# systemctl  restart  httpd.service








Installing the snmpd service on CentOS 7 (CentOS 7 SNMP install)


----
Installing the snmpd service on CentOS 7 (CentOS 7 SNMP install)
----
Install the snmpd service on CentOS 7 so that host information can be queried over the network.
The environment is CentOS 7 x86_64 with the firewall disabled and SELINUX set to permissive.

[root@hnamenode2 snmp]# getenforce
Permissive

# Install the snmpd packages
[root@hnamenode2 ~]# yum -y install net-snmp net-snmp-utils
# Edit the config file and replace it with the following
[root@hnamenode2 snmp]# cat /etc/snmp/snmpd.conf
# snmp access control
com2sec local           localhost       public
com2sec localnet         192.168.0.0/16  public  (change public to some other community string)

group   MyRWGroup v1            local
group   MyROGroup v1            localnet

group   MyROSystem v1           local
group   MyROSystem v2c          local
group   MyROSystem usm          local

group   MyROGroup v1            localnet
group   MyROGroup v2c           localnet
group   MyROGroup usm           localnet

group   MyRWGroup v1            local
group   MyRWGroup v2c           local
group   MyRWGroup usm           local

view    systemview    included   .1.3.6.1.2.1.1
view    systemview    included   .1.3.6.1.2.1.25.1.1
view    all    included   .1 80

access  MyROGroup ""      any       noauth    prefix  all none none
access  MyRWGroup ""      any       noauth    prefix  all all  all

# Location and device information
sysName hadoop_hnamenode2  (device name)
syslocation CM1022_RACK1  (location)
syscontact mtchang@hadoop.jangmt.com  (contact person)
pass .1.3.6.1.4.1.4413.4.1 /usr/bin/ucd5820stat


# Restart the service
[root@hnamenode2 snmp]# systemctl restart snmpd.service

# Enable it at boot
[root@hnamenode2 snmp]# systemctl enable snmpd.service
ln -s '/usr/lib/systemd/system/snmpd.service' '/etc/systemd/system/multi-user.target.wants/snmpd.service'

# Check the service status
[root@hnamenode2 snmp]# systemctl status snmpd.service
snmpd.service - Simple Network Management Protocol (SNMP) Daemon.
   Loaded: loaded (/usr/lib/systemd/system/snmpd.service; disabled)
   Active: active (running) since 日 2015-09-27 11:17:53 CST; 1min 55s ago
 Main PID: 23472 (snmpd)
   CGroup: /system.slice/snmpd.service
           └─23472 /usr/sbin/snmpd -LS0-6d -f

# Check that the ports are open
[root@hnamenode2 snmp]# netstat -auntp | grep snmp
tcp        0      0 127.0.0.1:199           0.0.0.0:*               LISTEN      23472/snmpd      
udp        0      0 0.0.0.0:161             0.0.0.0:*                           23472/snmpd   

# Test the snmpd service
[root@hnamenode2 snmp]# snmpwalk -c public -v 2c 192.168.1.250 
SNMPv2-MIB::sysDescr.0 = STRING: Linux hnamenode2 3.10.0-229.11.1.el7.x86_64 #1 SMP Thu Aug 6 01:06:18 UTC 20
15 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (7188) 0:01:11.88
SNMPv2-MIB::sysContact.0 = STRING: cccm@cm.nsysu.edu.tw
SNMPv2-MIB::sysName.0 = STRING: hadoop_hnamenode2
SNMPv2-MIB::sysLocation.0 = STRING: CM1022_RACK1
SNMPv2-MIB::sysORLastChange.0 = Timeticks: (4) 0:00:00.04
SNMPv2-MIB::sysORID.1 = OID: SNMP-MPD-MIB::snmpMPDCompliance
SNMPv2-MIB::sysORID.2 = OID: SNMP-USER-BASED-SM-MIB::usmMIBCompliance
SNMPv2-MIB::sysORID.3 = OID: SNMP-FRAMEWORK-MIB::snmpFrameworkMIBCompliance
SNMPv2-MIB::sysORID.4 = OID: SNMPv2-MIB::snmpMIB
SNMPv2-MIB::sysORID.5 = OID: TCP-MIB::tcpMIB
SNMPv2-MIB::sysORID.6 = OID: IP-MIB::ip
SNMPv2-MIB::sysORID.7 = OID: UDP-MIB::udpMIB
SNMPv2-MIB::sysORID.8 = OID: SNMP-VIEW-BASED-ACM-MIB::vacmBasicGroup
SNMPv2-MIB::sysORID.9 = OID: SNMP-NOTIFICATION-MIB::snmpNotifyFullCompliance
SNMPv2-MIB::sysORID.10 = OID: NOTIFICATION-LOG-MIB::notificationLogMIB
SNMPv2-MIB::sysORDescr.1 = STRING: The MIB for Message Processing and Dispatching.
SNMPv2-MIB::sysORDescr.2 = STRING: The management information definitions for t
... skip ...

ref:
https://www.haproxy.com/doc/hapee/1.5/configuration/snmp_redhat_above55.html
http://www.liquidweb.com/kb/how-to-install-and-configure-snmp-on-centos/

Installing and configuring phpMyAdmin on CentOS 7

----
Installing and configuring phpMyAdmin on CentOS 7
----

# Install httpd and php
[root@hnamenode2 ~]# yum -y install httpd httpd-devel httpd-manual php-mysql php-mbstring \
php php-soap php-xml php-mcrypt php-pear php-cli php-devel php-gd 

[root@hnamenode2 conf.d]# systemctl restart httpd.service 
[root@hnamenode2 conf.d]# systemctl status httpd.service 
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled)
   Active: active (running) since 六 2015-09-26 22:54:14 CST; 7s ago
  Process: 11956 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
 Main PID: 11961 (httpd)
   Status: "Processing requests..."
   CGroup: /system.slice/httpd.service
           ├─11961 /usr/sbin/httpd -DFOREGROUND
           ├─11963 /usr/sbin/httpd -DFOREGROUND
           ├─11964 /usr/sbin/httpd -DFOREGROUND
           ├─11965 /usr/sbin/httpd -DFOREGROUND
           ├─11966 /usr/sbin/httpd -DFOREGROUND
           └─11967 /usr/sbin/httpd -DFOREGROUND

 9月 26 22:54:14 hnamenode2 systemd[1]: Started The Apache HTTP Server.

# Install the EPEL repository
[root@hnamenode2 ~]# yum install epel-release

# Install phpmyadmin on centos7
[root@hnamenode2 ~]# yum install phpmyadmin

# Set the allowed login IP ranges; by default only 127.0.0.1 may log in
[root@hnamenode2 conf.d]# cat /etc/httpd/conf.d/phpMyAdmin.conf
... skip ...
   <IfModule mod_authz_core.c>
     # Apache 2.4
     <RequireAny>
       Require ip 110.111.0.0/16
       Require ip 127.0.0.1
       Require ip ::1
     </RequireAny>
   </IfModule>
... skip ...

# Reload the configuration
[root@hnamenode2 conf.d]# systemctl restart httpd.service 

# Log in at http://ooxx/phpMyAdmin/ to verify


Install mariadb and change the root password on CentOS Linux 7

----
Install mariadb and change the root password on CentOS Linux 7
----
# Install the mariadb server
[root@hnamenode2 ~]# yum -y install mariadb-server mariadb-bench mariadb-libs

# Start mariadb now
[root@hnamenode2 ~]# systemctl start mariadb
# Enable it at boot
[root@hnamenode2 ~]# systemctl enable mariadb
ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
# Status
[root@hnamenode2 ~]# systemctl status mariadb
mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled)
   Active: active (running) since 六 2015-09-26 21:21:58 CST; 7s ago
  Process: 10071 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
  Process: 9990 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
 Main PID: 10069 (mysqld_safe)
   CGroup: /system.slice/mariadb.service
           ├─10069 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
           └─10226 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mys...

 9月 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: The latest information about MariaDB is avail.../.
 9月 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: You can find additional information about the...t:
 9月 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: http://dev.mysql.com
 9月 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: Support MariaDB development by buying support...DB
 9月 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: Corporation Ab. You can contact us about this...m.
 9月 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: Alternatively consider joining our community ...t:
 9月 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: http://mariadb.com/kb/en/contributing-to-the-...t/
 9月 26 21:21:56 hnamenode2 mysqld_safe[10069]: 150926 21:21:56 mysqld_safe Logging to '/var/log/maria...g'.
 9月 26 21:21:56 hnamenode2 mysqld_safe[10069]: 150926 21:21:56 mysqld_safe Starting mysqld daemon wit...sql
 9月 26 21:21:58 hnamenode2 systemd[1]: Started MariaDB database server.
Hint: Some lines were ellipsized, use -l to show in full.

# Log in to mysql as root and change the root password.
# The default mysql root password is empty
[root@hnamenode2 html]# mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 5.5.44-MariaDB MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

# Switch to the mysql DB
MariaDB [(none)]> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed

# Update the root user's password
MariaDB [mysql]> update user set password=PASSWORD("your_new_password") where User='root';
Query OK, 4 rows affected (0.00 sec)
Rows matched: 4  Changed: 4  Warnings: 0

# Reload the privilege tables
MariaDB [mysql]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

# Quit mysql
MariaDB [mysql]> exit
Bye
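
Alternatively, the mysql_secure_installation script shipped with mariadb-server sets the root password interactively and also removes the anonymous users and the test database:

[root@hnamenode2 ~]# mysql_secure_installation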

CentOS Linux 7 xRDP (Linux RDP remote desktop)

----
CentOS Linux 7 xRDP (Linux RDP remote desktop)
----
# Install the nux-dextop and epel repositories
[root@hnamenode2 ~]# yum install epel-release -y
[root@hnamenode2 ~]# rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm

# Install xrdp
[root@hnamenode2 ~]# yum -y install xrdp tigervnc-server

# Start xrdp
[root@hnamenode2 ~]#  systemctl restart xrdp.service

# Check that it is listening
[root@hnamenode2 ~]# netstat -antup | grep xrdp
tcp        0      0 127.0.0.1:3350          0.0.0.0:*               LISTEN      31816/xrdp-sesman  
tcp        0      0 0.0.0.0:3389            0.0.0.0:*               LISTEN      31817/xrdp

# Enable xrdp at boot
[root@hnamenode2 ~]#  systemctl enable xrdp.service
ln -s '/usr/lib/systemd/system/xrdp.service' '/etc/systemd/system/multi-user.target.wants/xrdp.service'

# Grant the selinux permissions
[root@hnamenode2 ~]# chcon --type=bin_t /usr/sbin/xrdp
[root@hnamenode2 ~]# chcon --type=bin_t /usr/sbin/xrdp-sesman




ref:
https://wiki.centos.org/TipsAndTricks/MultimediaOnCentOS7

2015/09/26

Disable the centos7 firewalld and switch back to classic iptables


----
Disable the centos7 firewalld and switch back to classic iptables
----
ref: http://www.liquidweb.com/kb/how-to-stop-and-disable-firewalld-on-centos-7/
# Stop the firewall (Firewall on RHEL / CentOS / RedHat Linux 7)


# Do not start it at boot
[root@hnamenode2 ~]# systemctl disable firewalld
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'

# Stop it immediately
[root@hnamenode2 ~]# systemctl stop firewalld

# Check the status
[root@hnamenode2 ~]# systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
   Active: inactive (dead)

 9月 14 18:48:27 localhost.localdomain systemd[1]: Starting firewalld - dynamic firewall daemon...
 9月 14 18:48:33 localhost.localdomain systemd[1]: Started firewalld - dynamic firewall daemon.
 9月 26 21:00:53 hnamenode2 systemd[1]: Stopping firewalld - dynamic firewall daemon...
 9月 26 21:00:54 hnamenode2 systemd[1]: Stopped firewalld - dynamic firewall daemon.

# Check iptables
[root@hnamenode2 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination        

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination        

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

----
# Switch to the classic iptables-services package; install it
----
[root@hnamenode2 ~]# yum install iptables-utils iptables-services

# iptables can now be restarted
[root@hnamenode2 ~]# systemctl restart  iptables.service
# Look at the rules currently loaded
[root@hnamenode2 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination        
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0          
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0          
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination        
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination        

# Stop the iptables service and the rules it brought in
[root@hnamenode2 ~]# systemctl stop  iptables.service
# The rules are then flushed
[root@hnamenode2 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination        

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination        

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination        

# Save the flushed rule set; it is stored in /etc/sysconfig/iptables by default
[root@hnamenode2 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]

# You can check the result
[root@hnamenode2 ~]# cat /etc/sysconfig/iptables
# Generated by iptables-save v1.4.21 on Sat Sep 26 21:05:46 2015
*filter
:INPUT ACCEPT [13:832]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [9:1420]
COMMIT
# Completed on Sat Sep 26 21:05:46 2015

# Even after a restart, the rule set stays empty.
[root@hnamenode2 ~]# systemctl restart  iptables.service
[root@hnamenode2 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination        

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination        

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
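
From here rules can be added back one at a time and made persistent with the save command shown earlier; for example, to allow inbound ssh and http (ports chosen only as an illustration):

[root@hnamenode2 ~]# iptables -I INPUT -p tcp --dport 22 -j ACCEPT
[root@hnamenode2 ~]# iptables -I INPUT -p tcp --dport 80 -j ACCEPT
[root@hnamenode2 ~]# service iptables save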


Changing the block size of the hadoop HDFS filesystem

Hadoop 2.7 sets the default HDFS block size to 128MB, which wastes a lot of space on a small teaching cluster, so here it is changed to 64MB.

----
First look at the state of the hdfs datanodes
----
# You can use the web UI
http://localhost:50070/dfshealth.html#tab-datanode 

# or the command line
[hadoop@hnamenode ~]$ hdfs dfsadmin -report
Configured Capacity: 64280172384256 (58.46 TB)
Present Capacity: 64247015936000 (58.43 TB)
DFS Remaining: 64238110507008 (58.42 TB)
DFS Used: 8905428992 (8.29 GB)
DFS Used%: 0.01%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (17):

Name: 192.168.1.11:50010 (hdatanode11.cm.nsysu.edu.tw)
Hostname: hdatanode11.cm.nsysu.edu.tw
Decommission Status : Normal
Configured Capacity: 3998832504832 (3.64 TB)
DFS Used: 407236608 (388.37 MB)
Non DFS Used: 745283584 (710.76 MB)
DFS Remaining: 3997679984640 (3.64 TB)
DFS Used%: 0.01%
DFS Remaining%: 99.97%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Sep 26 10:37:29 CST 2015

.... skip ...

Name: 192.168.1.100:50010 (hnamenode.cm.nsysu.edu.tw)
Hostname: hnamenode.cm.nsysu.edu.tw
Decommission Status : Normal
Configured Capacity: 298852306944 (278.33 GB)
DFS Used: 2968330240 (2.76 GB)
Non DFS Used: 21231382528 (19.77 GB)
DFS Remaining: 274652594176 (255.79 GB)
DFS Used%: 0.99%
DFS Remaining%: 91.90%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Sep 26 10:37:30 CST 2015


----
Changing the hadoop hdfs block size (dfs.blocksize)
----
The default dfs.blocksize value can be seen at http://localhost:19888/conf
<property>
<name>dfs.blocksize</name>
<value>134217728</value>  <!-- 134217728 (128MB), to be changed to 67108864 (64MB) -->
<source>hdfs-default.xml</source>
</property>

# The default lives in hdfs-default.xml; the system defaults are documented at
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

# It can be checked from the command line
[hadoop@hnamenode ~]$ hdfs getconf -confKey dfs.blocksize
134217728

# Check the block size of a single file
[hadoop@hnamenode ~]$ hdfs dfs  -stat %o /home/hadoop/test_map.R
134217728

# Use fsck to see how a file's blocks are distributed
[hadoop@hnamenode ~]$ hdfs fsck /home/hadoop/test_map.R -blocks
Connecting to namenode via http://hnamenode:50070/fsck?ugi=hadoop&blocks=1&path=%2Fhome%2Fhadoop%2Ftest_map.R
FSCK started by hadoop (auth:SIMPLE) from /192.168.1.100 for path /home/hadoop/test_map.R at Sat Sep 26 14:59:54 CST 2015
.Status: HEALTHY
 Total size: 81 B
 Total dirs: 0
 Total files: 1
 Total symlinks: 0
 Total blocks (validated): 1 (avg. block size 81 B)
 Minimally replicated blocks: 1 (100.0 %)
 Over-replicated blocks: 0 (0.0 %)
 Under-replicated blocks: 0 (0.0 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor: 3
 Average block replication: 3.0
 Corrupt blocks: 0
 Missing replicas: 0 (0.0 %)
 Number of data-nodes: 17
 Number of racks: 1
FSCK ended at Sat Sep 26 14:59:54 CST 2015 in 0 milliseconds


The filesystem under path '/home/hadoop/test_map.R' is HEALTHY

# First stop hdfs and yarn
# stop-all.sh

# Edit hdfs-site.xml and add the block below to change dfs.blocksize to 64MB; the system default is 128MB.
<!-- by mtchang -->
<property>
<name>dfs.blocksize</name>
<value>67108864</value>
</property>
<!-- change block size to 64MB -->

# Restart after the change
# start-all.sh

# Check the new blocksize
[hadoop@hnamenode hadoop]$ hdfs getconf -confKey dfs.blocksize
67108864

# Push a large file, about 242MB, into hdfs and see what happens.
[hadoop@hnamenode data]$ hdfs dfs -put big_number_1G.RData /home/hadoop/

# It now uses a 64mb block size
[hadoop@hnamenode data]$ hdfs dfs  -stat %o /home/hadoop/big_number_1G.RData 
67108864

# Files that already existed keep their old block size
[hadoop@hnamenode data]$ hdfs dfs  -stat %o /public/data/big_num_400t1t.RData
134217728
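
Existing files are only rewritten with the new size when they are copied in again. One way to do that for a single file is the generic -D option; this is a sketch, reusing the example file above and an arbitrary new name:

# re-upload with an explicit 64 MB block size, then check it
hdfs dfs -D dfs.blocksize=67108864 -put big_number_1G.RData /home/hadoop/big_number_1G.64m.RData
hdfs dfs -stat %o /home/hadoop/big_number_1G.64m.RData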

# Check the block layout of the file
[hadoop@hnamenode data]$ hdfs fsck /home/hadoop/big_number_1G.RData -blocks
Connecting to namenode via http://hnamenode:50070/fsck?ugi=hadoop&blocks=1&path=%2Fhome%2Fhadoop%2Fbig_number_1G.RData
FSCK started by hadoop (auth:SIMPLE) from /192.168.1.100 for path /home/hadoop/big_number_1G.RData at Sat Sep 26 15:14:07 CST 2015
.Status: HEALTHY
 Total size: 307474871 B
 Total dirs: 0
 Total files: 1
 Total symlinks: 0
 Total blocks (validated): 5 (avg. block size 61494974 B)
 Minimally replicated blocks: 5 (100.0 %)
 Over-replicated blocks: 0 (0.0 %)
 Under-replicated blocks: 0 (0.0 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor: 3
 Average block replication: 3.0
 Corrupt blocks: 0
 Missing replicas: 0 (0.0 %)
 Number of data-nodes: 16
 Number of racks: 1
FSCK ended at Sat Sep 26 15:14:07 CST 2015 in 1 milliseconds

The filesystem under path '/home/hadoop/big_number_1G.RData' is HEALTHY

# Check the block size of the newly uploaded file
[hadoop@hnamenode data]$ hdfs dfs  -stat %o /home/hadoop/big_number_1G.RData
67108864

Commands:
http://hadoop.apache.org/docs/r1.2.1/commands_manual.html#Generic+Options
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#balancer

Explanation:
http://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html#Rebalancer
https://www.quora.com/How-do-I-check-HDFS-blocksize-default-custom

Installing Oracle Java on many CentOS Linux 7 machines at once

----
Installing JAVA on a single machine
----
First download it from the official site and copy it to the linux machine.
http://www.oracle.com/technetwork/java/javase/downloads/index.html
This example uses the rpm for centos7 x86_64.



# Install oracle java
[hadoop@hnamenode2 dl]$ sudo yum install jdk-8u51-linux-x64.rpm

# Choose which java is the default
[hadoop@hnamenode2 dl]$ sudo alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85-2.6.1.2.el7_1.x86_64/jre/bin/java
   2           /usr/java/jdk1.8.0_51/jre/bin/java

Enter to keep the current selection[+], or type selection number: 2

# It is best to remove openjdk: hadoop only needs the oracle jdk, and this avoids later confusion with the environment variables.
# After removing it, alternatives --config java above lists only one choice.
[hadoop@hnamenode2 dl]$ rpm -qa | grep openjdk
java-1.7.0-openjdk-1.7.0.85-2.6.1.2.el7_1.x86_64
java-1.7.0-openjdk-headless-1.7.0.85-2.6.1.2.el7_1.x86_64
[hadoop@hnamenode2 dl]$ sudo yum remove java-1.7.0-openjdk java-1.7.0-openjdk-headless -y
[hadoop@hnamenode2 dl]$ java -version
java version "1.8.0_51"
Java(TM) SE Runtime Environment (build 1.8.0_51-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.51-b03, mixed mode)

----
Installing JAVA on many machines
----
Put the rpm on an httpd server and install it over the network from there.
example: http://192.168.1.250/dl/app/jdk-8u51-linux-x64.rpm

# Fetch the file and install it remotely over ssh; for the ssh key authentication, see the ssh key post
ssh root@hdatanode15  yum install -y http://192.168.1.250/dl/app/jdk-8u51-linux-x64.rpm
ssh root@hdatanode15  alternatives --set java /usr/java/jdk1.8.0_51/jre/bin/java
ssh root@hdatanode15  java -version

ssh root@hdatanode16 yum install -y http://192.168.1.250/dl/app/jdk-8u51-linux-x64.rpm
ssh root@hdatanode16 alternatives --set java /usr/java/jdk1.8.0_51/jre/bin/java
ssh root@hdatanode16 java -version

You can use pdsh to install on every host at once; see the pdsh write-up.

Setting up ssh key authentication to log in to a linux system


The goal is to log in from hnamenode to hnamenode2 without a password.

# ssh key authentication: this is what .ssh looks like before a key is generated
[hadoop@hnamenode ~]$ ls .ssh/
known_hosts

# Generate an ssh key pair
[hadoop@hnamenode ~]$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): (this is the key's passphrase)
Enter same passphrase again: (the key's passphrase again, as a check)
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
cf:d3:df:a5:cb:ae:75:b9:09:59:51:e8:c2:ff:45:33 hadoop@hnamenode2
The key's randomart image is:
+--[ RSA 2048]----+
|               ..|
|              . .|
|           . . . |
|            o .Eo|
|        S    o oo|
|         o .  + o|
|          + .o.o+|
|           . +.+=|
|            .o*+.|
+-----------------+

# The key pair is ready: id_rsa is the private key, id_rsa.pub the public key
[hadoop@hnamenode ~]$ ls .ssh/
-rw-------.  1 hadoop hadoop 1679  9月 26 12:38 id_rsa
-rw-r--r--.  1 hadoop hadoop  399  9月 26 12:38 id_rsa.pub
-rw-r--r--.  1 hadoop hadoop  530  9月 26 09:57 known_hosts

# Copy the hnamenode key into the authorized_keys file under .ssh/ on the remote hnamenode2
[hadoop@hnamenode Documents]$ ssh-copy-id hadoop@192.168.1.250
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@192.168.1.250's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop@192.168.1.250'"
and check to make sure that only the key(s) you wanted were added.

# Verify: remote commands now run without a password
[hadoop@hnamenode Documents]$ ssh hadoop@192.168.1.250 whoami
hadoop

# On hnamenode2 there is now an authorized_keys file holding the contents of hnamenode's id_rsa.pub
[hadoop@hnamenode2 ~]$ ls -la .ssh/
-rw-------.  1 hadoop hadoop  391  9月 26 12:40 authorized_keys
-rw-r--r--.  1 hadoop hadoop  530  9月 26 09:57 known_hosts
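
If key-based login still prompts for a password, the usual culprit is permissions; sshd expects roughly the following on the remote account:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys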

2015/09/10

Changing the centos 7 timezone from the command line

Find your country
[root@hnamenode ~]# timedatectl list-timezones | grep 'Taipei'
Asia/Taipei

Set the timezone
[root@hnamenode ~]# timedatectl set-timezone Asia/Taipei

Verify; CST is the Taipei (China Standard Time) zone
[root@hnamenode ~]# date
Thu Sep 10 23:34:18 CST 2015

2015/09/06

PDSH (Parallel Distributed SHell): a shell tool for sending commands to many remote machines at once


Pdsh (Parallel Distributed Shell) is a tool that sends commands to many remote shells in parallel and collects the output; its goal was to replace IBM's DSH on clusters at LLNL. It becomes important once you have many identical machines to manage.
Its official site, https://code.google.com/p/pdsh/, was last updated in 2013, and prebuilt RPM packages are available online, so just use those.
It also ships
pdcp (copy from local host to a group of remote hosts in parallel)
dshbak (formatting and demultiplexing pdsh output)
as helper tools.



RPM downloads for RHEL 7 or CentOS 7:
http://blackjack.grid.org.ua/pub/linux/rhel/7x/base/x86_64/

To install, download these packages:
[hadoop@hadoop dl]$ rpm -qa | grep pdsh
pdsh-2.29-2.el7.centos.x86_64
pdsh-rcmd-exec-2.29-2.el7.centos.x86_64
pdsh-mod-dshgroup-2.29-2.el7.centos.x86_64
pdsh-rcmd-ssh-2.29-2.el7.centos.x86_64
pdsh-mod-machines-2.29-2.el7.centos.x86_64
pdsh-debuginfo-2.29-2.el7.centos.x86_64

Then install them with rpm; because of dependencies, install the following two first and the rest afterwards:
rpm -ivh pdsh-rcmd-exec-2.29-2.el7.centos.x86_64
rpm -ivh pdsh-2.29-2.el7.centos.x86_64
.... other

# Set up ssh key authentication before using pdsh; you can follow this article: http://jangmt.com/wiki/index.php/Sa1-unit15#.E4.BD.BF.E7.94.A8_SSH_Key-Based_.E8.AA.8D.E8.AD.89

# Verify the ssh key authentication; pdsh is just a management wrapper built on top of it, which saves you writing your own scripts.
[hadoop@hadoop ~]$ ssh hadoop@192.168.1.1 uptime
 10:10:27 up 20:47,  0 users,  load average: 0.00, 0.01, 0.05

# Once it is installed, look at the help!
[hadoop@hadoop dl]$ pdsh -help
Usage: pdsh [-options] command ...
-S                return largest of remote command return values
-h                output usage menu and quit
-V                output version information and quit
-q                list the option settings and quit
-b                disable ^C status feature (batch mode)
-d                enable extra debug information from ^C status
-l user           execute remote commands as user (choose the login user)
-t seconds        set connect timeout (default is 10 sec)
-u seconds        set command timeout (no default)
-f n              use fanout of n nodes
-w host,host,...  set target node list on command line (select hosts)
-x host,host,...  set node exclusion list on command line (exclude hosts)
-R name           set rcmd module to name
-M name,...       select one or more misc modules to initialize first
-N                disable hostname: labels on output lines
-L                list info on all loaded modules and exit
-g groupname      target hosts in dsh group "groupname" (select a group)
-X groupname      exclude hosts in dsh group "groupname"
-a                target all nodes
available rcmd modules: ssh,exec (default: ssh)

# First test one machine, as the hadoop user, with the uptime command
[hadoop@hadoop ~]$ pdsh -w 192.168.1.1 -l hadoop uptime
192.168.1.1:  09:51:08 up 20:28,  0 users,  load average: 0.00, 0.01, 0.05

# Test two machines
[hadoop@hadoop ~]$ pdsh -w 192.168.1.1,192.168.1.2 -l hadoop uptime
192.168.1.2:  09:51:19 up 20:28,  0 users,  load average: 0.00, 0.01, 0.05
192.168.1.1:  09:51:19 up 20:28,  0 users,  load average: 0.00, 0.01, 0.05

# Test three machines, excluding 192.168.1.3
[hadoop@hadoop ~]$ pdsh -w 192.168.1.[1-3] -x 192.168.1.3 -l hadoop uptime
192.168.1.1:  09:52:36 up 20:30,  0 users,  load average: 0.11, 0.04, 0.05
192.168.1.2:  09:52:36 up 20:30,  0 users,  load average: 0.22, 0.10, 0.07

# Test 16 machines, excluding 192.168.1.9 ~ 192.168.1.16
[hadoop@hadoop ~]$ pdsh -w 192.168.1.[1-16] -x 192.168.1.[9-16] -l hadoop uptime
192.168.1.5:  09:52:52 up 20:30,  0 users,  load average: 0.02, 0.02, 0.05
192.168.1.2:  09:52:52 up 20:30,  0 users,  load average: 0.17, 0.09, 0.07
192.168.1.7:  09:52:52 up 20:30,  0 users,  load average: 0.00, 0.01, 0.05
192.168.1.8:  09:52:52 up 20:30,  0 users,  load average: 0.00, 0.01, 0.05
192.168.1.6:  09:52:52 up 20:30,  0 users,  load average: 0.00, 0.01, 0.05
192.168.1.4:  09:52:52 up 20:30,  0 users,  load average: 0.04, 0.04, 0.05
192.168.1.1:  09:52:52 up 20:30,  0 users,  load average: 0.08, 0.04, 0.05
192.168.1.3:  09:52:52 up 20:30,  0 users,  load average: 0.00, 0.01, 0.05

# Groups: create the directory ~/.dsh/group/ in the user's home; every file inside it lists host names and becomes a group.
[hadoop@hadoop group]$ pwd
/home/hadoop/.dsh/group

# Create the new group
[hadoop@hadoop group]$ cat new
192.168.1.13
192.168.1.14
192.168.1.15
192.168.1.16

# Create the hadoop group
[hadoop@hadoop ~]$ cat .dsh/group/hadoop
hdatanode1
hdatanode2
hdatanode3
hdatanode4
hdatanode5
hdatanode6
hdatanode7
hdatanode8
hdatanode9
hdatanode10
hdatanode11
hdatanode12
hdatanode13
hdatanode14
hdatanode15
hdatanode16

# Run a command against the new group
[hadoop@hadoop group]$ pdsh -g new -l hadoop 'uptime'
192.168.1.15:  09:50:03 up 20:27,  0 users,  load average: 0.00, 0.01, 0.05
192.168.1.13:  09:50:03 up 20:27,  0 users,  load average: 0.02, 0.02, 0.05
192.168.1.16:  09:50:03 up 20:27,  0 users,  load average: 0.00, 0.01, 0.05
192.168.1.14:  09:50:03 up 20:27,  0 users,  load average: 0.00, 0.01, 0.05

# yum upgrade every host in the new group at the same time
[hadoop@hadoop ~]$ pdsh -g new -l root 'yum upgrade -y'

# Use dshbak to consolidate the output so it stays readable when there is a lot of it.
[hadoop@hadoop ~]$ pdsh -g hadoop -l hadoop 'uptime' | dshbak -c
----------------
hdatanode[1-16]
----------------
 10:32:50 up 21:10,  0 users,  load average: 0.00, 0.01, 0.05


# Example of installing java on every host at once:
# Put the file on an internal web server and install the rpm straight from it
[root@hmaster ~]# pdsh -g hadoop -l root 'rpm -ivh http://192.168.1.250/work/jdk-8u51-linux-x64.rpm'
# Switch java to this oracle java
[root@hmaster ~]# pdsh -g hadoop -l root 'alternatives --set java /usr/java/jdk1.8.0_51/jre/bin/java'
# Check that the versions match
[root@hmaster ~]# pdsh -g hadoop -l root 'java -version' 
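
pdcp, installed alongside pdsh, takes the same -w/-g/-l host selection options and copies a local file to every node in parallel; a sketch (the file path is only an example):

# copy /etc/hosts to the same path on every host in the hadoop group
[root@hmaster ~]# pdcp -g hadoop -l root /etc/hosts /etc/hosts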


REF:
http://kumu-linux.github.io/blog/2013/06/19/pdsh/
http://blackjack.grid.org.ua/pub/linux/rhel/7x/base/x86_64/
https://code.google.com/p/pdsh/