---- Using Spine to speed up Cacti's device polling
----
http://www.cacti.net/spine_info.php
Spine, formerly Cactid, is a poller for Cacti that primarily strives to be as fast as possible. For this reason it is written in native C, makes use of POSIX threads, and is linked directly against the net-snmp library for minimal SNMP polling overhead. Spine is a replacement for the default cmd.php poller, so you must decide whether using Spine makes sense for your installation.
Build prerequisites:
net-snmp utilities and development libraries
mysql utilities, server, and development libraries
openssl development libraries
For example, to install the MySQL libraries: yum install mysql++-devel -y
Spine has no prebuilt binary available, so you have to compile it yourself:
----
wget http://www.cacti.net/downloads/spine/cacti-spine-0.8.8f.tar.gz
tar zxvf cacti-spine-0.8.8f.tar.gz
cd cacti-spine-0.8.8f
./configure
make
sudo make install
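make install places the spine binary and a sample spine.conf under the configure prefix (commonly /usr/local/spine); Spine reads its Cacti database credentials from that file. A minimal sketch, with placeholder credentials that must match your Cacti database:

DB_Host       localhost
DB_Database   cacti
DB_User       cactiuser
DB_Pass       cactiuser
DB_Port       3306

Once spine.conf is in place, point Cacti at the spine binary and switch the Poller Type from cmd.php to spine in Cacti's console settings.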
(7) Starting HDFS and YARN
# Start HDFS and YARN
[hadoop@namenode happ]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [namenode]
namenode: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-namenode.jangmt.com.out
datanode1: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-datanode1.out
datanode2: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-datanode2.out
datanode3: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-datanode3.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-namenode.jangmt.com.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-resourcemanager-namenode.jangmt.com.out
datanode2: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-datanode2.out
datanode3: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-datanode3.out
datanode1: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-datanode1.out
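A quick way to confirm the daemons actually came up is jps, which ships with the JDK; on the namenode host one would expect something like the following (the PIDs are illustrative):

[hadoop@namenode happ]$ jps
2481 NameNode
2750 SecondaryNameNode
2912 ResourceManager
3210 Jps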
# They can also be shut down
[hadoop@namenode happ]$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [namenode]
namenode: stopping namenode
datanode1: stopping datanode
datanode2: stopping datanode
datanode3: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
datanode2: no nodemanager to stop
datanode1: no nodemanager to stop
datanode3: no nodemanager to stop
no proxyserver to stop
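As the deprecation notice above says, the combined scripts just wrap the per-service ones; the equivalent two-step form is:

# start
start-dfs.sh
start-yarn.sh
# stop (reverse order)
stop-yarn.sh
stop-dfs.sh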
/*
Edit this to point to the default URL of your Cacti install
ex: if your cacti install is at http://serverip/cacti/ this
would be set to /cacti/
*/
//$url_path = "/cacti/";
/* Default session name - Session name must contain alpha characters */
//$cacti_session_name = "Cacti";
?>
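Following the comment above: if the Cacti install is served at http://serverip/cacti/, the live (uncommented) settings would read:

$url_path = "/cacti/";
$cacti_session_name = "Cacti";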
Check the clock: sync the server's time first with NTP, and confirm the time zone.
[root@hnamenode2 ~]# date
Mon Sep 28 10:33:02 CST 2015
[root@hnamenode2 ~]# ntpdate clock.stdtime.gov.tw
28 Sep 10:33:33 ntpdate[5013]: adjust time server 211.22.103.158 offset -0.003425 sec
Finally, check the php.ini settings to confirm the same time zone is set there; if not, change it and restart Apache.
[root@hnamenode2 ~]# cat /etc/php.ini | grep timezone
; Defines the default timezone used by the date functions
; http://php.net/date.timezone
date.timezone = Asia/Taipei
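After changing date.timezone, restart Apache so PHP picks up the new value (a systemd host is assumed, matching the CentOS 7 machine used in these notes):

[root@hnamenode2 ~]# systemctl restart httpd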
# Journal output after starting MariaDB (lines ellipsized by systemctl):
Sep 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: The latest information about MariaDB is avail.../.
Sep 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: You can find additional information about the...t:
Sep 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: http://dev.mysql.com
Sep 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: Support MariaDB development by buying support...DB
Sep 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: Corporation Ab. You can contact us about this...m.
Sep 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: Alternatively consider joining our community ...t:
Sep 26 21:21:56 hnamenode2 mariadb-prepare-db-dir[9990]: http://mariadb.com/kb/en/contributing-to-the-...t/
Sep 26 21:21:56 hnamenode2 mysqld_safe[10069]: 150926 21:21:56 mysqld_safe Logging to '/var/log/maria...g'.
Sep 26 21:21:56 hnamenode2 mysqld_safe[10069]: 150926 21:21:56 mysqld_safe Starting mysqld daemon wit...sql
Sep 26 21:21:58 hnamenode2 systemd[1]: Started MariaDB database server.
Hint: Some lines were ellipsized, use -l to show in full.
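Since the service was only started manually, it is also worth enabling it at boot (systemd assumed):

[root@hnamenode2 ~]# systemctl enable mariadb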
# Log in to MySQL as root and change the root password.
# The default MySQL root password is empty.
[root@hnamenode2 html]# mysql -u root
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 5.5.44-MariaDB MariaDB Server
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
# Switch to the mysql database
MariaDB [(none)]> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
# Update the root user's password (replace your_password with the actual password)
MariaDB [mysql]> update user set password=PASSWORD("your_password") where User='root';
Query OK, 4 rows affected (0.00 sec)
Rows matched: 4 Changed: 4 Warnings: 0
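MariaDB caches the grant tables, so a direct UPDATE against mysql.user does not take effect until they are reloaded (or the server is restarted):

MariaDB [mysql]> flush privileges;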
# Use fsck to inspect how a file's blocks are distributed
[hadoop@hnamenode ~]$ hdfs fsck /home/hadoop/test_map.R -blocks
Connecting to namenode via http://hnamenode:50070/fsck?ugi=hadoop&blocks=1&path=%2Fhome%2Fhadoop%2Ftest_map.R
FSCK started by hadoop (auth:SIMPLE) from /192.168.1.100 for path /home/hadoop/test_map.R at Sat Sep 26 14:59:54 CST 2015
.Status: HEALTHY
Total size: 81 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 1 (avg. block size 81 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 17
Number of racks: 1
FSCK ended at Sat Sep 26 14:59:54 CST 2015 in 0 milliseconds
The filesystem under path '/home/hadoop/test_map.R' is HEALTHY
# Now check the blocks of a larger file
[hadoop@hnamenode data]$ hdfs fsck /home/hadoop/big_number_1G.RData -blocks
Connecting to namenode via http://hnamenode:50070/fsck?ugi=hadoop&blocks=1&path=%2Fhome%2Fhadoop%2Fbig_number_1G.RData
FSCK started by hadoop (auth:SIMPLE) from /192.168.1.100 for path /home/hadoop/big_number_1G.RData at Sat Sep 26 15:14:07 CST 2015
.Status: HEALTHY
Total size: 307474871 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 5 (avg. block size 61494974 B)
Minimally replicated blocks: 5 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 16
Number of racks: 1
FSCK ended at Sat Sep 26 15:14:07 CST 2015 in 1 milliseconds
The filesystem under path '/home/hadoop/big_number_1G.RData' is HEALTHY
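fsck can also report exactly which datanodes hold each replica: add -files and -locations next to -blocks (a sketch reusing the same file; each block is then listed with the addresses of its replica holders):

[hadoop@hnamenode data]$ hdfs fsck /home/hadoop/big_number_1G.RData -files -blocks -locations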
# Generate an ssh key pair
[hadoop@hnamenode ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): (this is the key's passphrase)
Enter same passphrase again: (the key's passphrase again, to confirm)
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
cf:d3:df:a5:cb:ae:75:b9:09:59:51:e8:c2:ff:45:33 hadoop@hnamenode2
The key's randomart image is:
+--[ RSA 2048]----+
| ..|
| . .|
| . . . |
| o .Eo|
| S o oo|
| o . + o|
| + .o.o+|
| . +.+=|
| .o*+.|
+-----------------+
# Copy the hnamenode key into the authorized_keys file under .ssh/ on the remote hnamenode2
[hadoop@hnamenode Documents]$ ssh-copy-id hadoop@192.168.1.250
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@192.168.1.250's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'hadoop@192.168.1.250'"
and check to make sure that only the key(s) you wanted were added.
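As the output suggests, verify that key-based login now works; a quick check (if a passphrase was set on the key, ssh will still prompt for that passphrase unless ssh-agent is running):

[hadoop@hnamenode Documents]$ ssh hadoop@192.168.1.250 hostname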
Pdsh (Parallel Distributed Shell) is a tool for running commands on many remote shells in parallel and collecting the returned output; its stated goal was to replace IBM's DSH on clusters at LLNL. It becomes important once you have a large number of identical hosts to manage.
Its official site, https://code.google.com/p/pdsh/, was last updated in 2013; prebuilt RPM packages are already available on the net, so you can simply use one of those.
It also ships with the following helper tools:
pdcp (copy from local host to a group of remote hosts in parallel)
dshbak (formatting and demultiplexing pdsh output)
# Once it is installed, have a look at its help!
[hadoop@hadoop dl]$ pdsh -help
Usage: pdsh [-options] command ...
-S return largest of remote command return values
-h output usage menu and quit
-V output version information and quit
-q list the option settings and quit
-b disable ^C status feature (batch mode)
-d enable extra debug information from ^C status
-l user execute remote commands as user (specify the login user)
-t seconds set connect timeout (default is 10 sec)
-u seconds set command timeout (no default)
-f n use fanout of n nodes
-w host,host,... set target node list on command line (specify the target hosts)
-x host,host,... set node exclusion list on command line (exclude the listed hosts)
-R name set rcmd module to name
-M name,... select one or more misc modules to initialize first
-N disable hostname: labels on output lines
-L list info on all loaded modules and exit
-g groupname target hosts in dsh group "groupname" (specify a dsh group)
-X groupname exclude hosts in dsh group "groupname"
-a target all nodes
available rcmd modules: ssh,exec (default: ssh)
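A minimal usage sketch built from the options above (the datanode host names are assumed from the Hadoop cluster earlier in these notes):

# run a command on three hosts in parallel over ssh (the default rcmd module)
[hadoop@hadoop dl]$ pdsh -w datanode1,datanode2,datanode3 uptime
# pipe through dshbak to group the output per host
[hadoop@hadoop dl]$ pdsh -w datanode1,datanode2,datanode3 uptime | dshbak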