Linux LVS Two-Node Setup with GFS, ISCSI and TOMCAT

Date: 2016-02-03 / Source: the web

LVS is a project that was started by a Chinese developer, which came as a real surprise to me! See http://www.douzhe.com/linuxtips/1665.html

I started with plain HA (high availability). Other people's examples use VMware, which is fine for experiments but not for real deployment, and I had no fibre-channel card for shared storage, so I chose ISCSI. After getting that working I found that ISCSI with ext3 cannot be used for LVS, and only at the end did I discover that GFS works. I finally managed to build an LVS configuration that can actually be put into production. On and off the whole thing took four months and many wrong turns; I then spent three days writing this article and hope it is useful to others.

Thanks to linuxfans.org, linuxsir.com, chinaunix.com, chinastor.com and many other sites; much of the material was found on their forums. Reference documents and download locations:

a.http://www.gyrate.org/misc/gfs.txt

b.http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/cluster-suite/index.html

http://www.redhat.com/docs/manuals/csgfs/admin-guide/index.html

c.ftp://ftp.redhat.com/pub/redhat/linux/updates/enterprise/3ES/en/RHGFS/SRPMS

d.http://distro.ibiblio.org/pub/linux/distributions/caoslinux/centos/3.1/contrib/i386/RPMS/ 

                 LVS topology:
              eth0=10.3.1.101
            eth0:1=10.3.1.254
                Load Balance
                   Router
              eth1=192.168.1.71
            eth1:1=192.168.1.1
                  |         |
                  |         |
                Real1      Real2
       eth0=192.168.1.68  eth0=192.168.1.67
            (eth0 gateway=192.168.1.1)
       eth1=192.168.0.1---eth1=192.168.0.2
                   (crossover link between the two nodes)
                       |
                       |
                      GFS
                     ISCSI
                Share storage
              eth0=192.168.1.124

1.Setup ISCSI Server

Server: PIII 1.4 GHz, 512 MB RAM, Dell 1650, Red Hat 9, IP=192.168.1.124

Download the ISCSI target source code from http://iscsitarget.sourceforge.net/

(http://sourceforge.net/project/showfiles.php?group_id=108475&package_id=117141)

I chose iscsitarget-0.3.8.tar.gz, which requires kernel 2.4.29.

Download kernel 2.4.29 from kernel.org, unpack and build it, reboot into it, then build and install iscsitarget-0.3.8:

#make KERNELSRC=/usr/src/linux-2.4.29

#make KERNELSRC=/usr/src/linux-2.4.29 install

#cp ietd.conf /etc

#vi /etc/ietd.conf

# Example iscsi target configuration
#
# Everything until the first target definition belongs
# to the global configuration.
# Right now this is only the user configuration used
# during discovery sessions:
# Users, who can access this target
# (no users means anyone can access the target)
User iscsiuser 1234567890abc
Target iqn.2005-04.com.my:storage.disk2.sys1.iraw1
        User iscsiuser 1234567890abc
        Lun 0 /dev/sda5 fileio
        Alias iraw1
Target iqn.2005-04.com.my:storage.disk2.sys1.iraw2
        User iscsiuser 1234567890abc
        Lun 1 /dev/sda6 fileio
        Alias iraw2
Target iqn.2005-04.com.my:storage.disk2.sys2.idisk
        User iscsiuser 1234567890abc
        Lun 2 /dev/sda3 fileio
        Alias idisk
Target iqn.2005-04.com.my:storage.disk2.sys2.icca
        User iscsiuser 1234567890abc
        Lun 3 /dev/sda7 fileio
        Alias icca

Note: the password must be at least 12 characters long. Alias is an alias for the target; for some reason it never shows up on the client side.

Partitioning: I only have one SCSI disk, so:

/dev/sda3: shared storage, the bigger the better
/dev/sda5: raw1, raw device needed for building the cluster, 900 MB
/dev/sda6: raw2, raw device needed for building the cluster, 900 MB
/dev/sda7: cca, needed for GFS, 64 MB
(/dev/sda4 is the extended partition containing sda5, 6 and 7)

Reboot, then start the ISCSI server with service iscsi-target start (I find this better than the suggested method, because you can check its state with service iscsi-target status).
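
To have the target come back automatically after a reboot, the init script can also be registered with chkconfig (a sketch, assuming the build installed its script under /etc/init.d with the iscsi-target name used above):

#chkconfig --add iscsi-target

#chkconfig iscsi-target on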

2.Setup ISCSI Client (on the two real servers)

Server: PIII 1.4 GHz, 512 MB RAM, Dell 1650, Red Hat AS3U4 (AS3U5 is even better), kernel 2.4.21-27.EL

#vi /etc/iscsi.conf

DiscoveryAddress=192.168.1.124
OutgoingUsername=iscsiuser
OutgoingPassword=1234567890abc
Username=iscsiuser
Password=1234567890abc
LoginTimeout=15
IncomingUsername=iscsiuser
IncomingPassword=1234567890abc
SendAsyncTest=yes

#service iscsi restart

#iscsi-ls -l

..., trimmed down it shows:

/dev/sdb:iraw2

/dev/sdc:iraw1

/dev/sdd:idisk

/dev/sde:icca

Note: the ordering of the ISCSI devices on the real servers matters, and it must be identical on both real servers. If it differs, adjust the settings on the ISCSI server and try again until it matches.
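
A quick way to check that both real servers see the devices in the same order is to dump the mapping on each node and diff it (a sketch; 192.168.0.2 is the other node's heartbeat address from the diagram above):

#iscsi-ls -l > /tmp/iscsi-map.txt

#scp 192.168.0.2:/tmp/iscsi-map.txt /tmp/iscsi-map.peer.txt

#diff /tmp/iscsi-map.txt /tmp/iscsi-map.peer.txt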

3.Install Redhat Cluster suite

First download the Cluster Suite ISO; for AS3 I found a download location via ChinaUnix.net, then install clumanager and redhat-config-cluster. If you do not have the Cluster Suite ISO, that is fine too: download clumanager-1.2.xx.src.rpm and redhat-config-cluster-1.0.x.src.rpm from ftp://ftp.redhat.com/pub/redhat/linux/updates/enterprise/3ES/en/RHCS/SRPMS/, then build and install them, which should be even better:

#rpm -Uvh clumanager-1.2.26.1-1.src.rpm

#rpmbuild -bs /usr/src/redhat/SPECS/clumanager.spec

#rpmbuild --rebuild --target i686 /usr/src/redhat/SRPMS/clumanager-1.2.26.1-1.src.rpm
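
The rebuilt binary package ends up under /usr/src/redhat/RPMS and still has to be installed, for example (the exact file name depends on the build):

#rpm -Uvh /usr/src/redhat/RPMS/i686/clumanager-1.2.26.1-1.i686.rpm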

Do the same for redhat-config-cluster-1.0.x.src.rpm and install it as well.

4.Setup Cluster as HA module

I will not write out the detailed steps; there are plenty of articles on the web, and that is how I learned it myself. The difference is that those articles use VMware, while I use real machines plus ISCSI. The raw devices are /dev/sdb and /dev/sdc; then format /dev/sdd with mkfs.ext3, mount it on /u01, and so on.
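
For reference, the raw-device bindings used by clumanager live in /etc/sysconfig/rawdevices on AS3 and are activated with service rawdevices restart (a sketch, assuming the quorum partitions appear as /dev/sdb and /dev/sdc as above):

#vi /etc/sysconfig/rawdevices

/dev/raw/raw1 /dev/sdb
/dev/raw/raw2 /dev/sdc

#service rawdevices restart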

Once that is set up you will notice the problem with plain ISCSI: only one client can be connected and writing at a time. If two clients connect to the ISCSI shared storage simultaneously and one of them writes, the other cannot see the change, and by then the filesystem is already corrupted: when a client reconnects to the ISCSI target it finds the files damaged, and fsck cannot repair them.

So is ISCSI really useless, then?

No! From Google I finally learned that ISCSI can only be used for true shared storage together with a cluster file system, and GFS, which Red Hat bought, is exactly that!

5.Setup GFS on ISCSI

GFS ships only with Fedora Core 4, yet GFS depends on the /etc/cluster.xml file produced by the Cluster Suite, and I have not seen a Cluster Suite for FC4, so I have no idea why Red Hat bundles GFS with FC4 at all. Just to tease us?

Enough chatter. Download GFS-6.0.2.20-2.src.rpm from location c above and build and install it following the gfs.txt at location a. That text says nothing about how to set up cluster.ccs, fence.ccs and nodes.ccs; with the documentation at location b I eventually worked it out. I keep the three files under /root/cluster (any other directory works too). I cannot guarantee they are error-free, since I have no fibre-channel card and the documentation gives no ISCSI example, but GFS does start.

#cat cluster.ccs

cluster {
        name = "Cluster_1"
        lock_gulm {
            servers = ["cluster1", "cluster2"]
            heartbeat_rate = 0.9
            allowed_misses = 10
        }
}

Note: name is the cluster name configured in the Cluster Suite, and servers are the hostnames of the cluster members; don't forget to add them to /etc/hosts. I originally set allowed_misses to 1 and GFS would die after about two days of running; since changing it to 10 it has never died.
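
For reference, a sketch of the matching /etc/hosts entries on both nodes, assuming the node names resolve to the heartbeat addresses used in nodes.ccs below:

192.168.0.1   cluster1
192.168.0.2   cluster2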

#cat fence.ccs

fence_devices{
        admin {
              agent = "fence_manual"
        }
}

#cat nodes.ccs

nodes {
   cluster1 {
      ip_interfaces {
         hsi0 = "192.168.0.1"
      }
      fence {
         human {
            admin {
               ipaddr = "192.168.0.1"
            }
         }
      }
   }
   cluster2 {
      ip_interfaces {
         hsi0 = "192.168.0.2"
      }
      fence {
         human {
            admin {
               ipaddr = "192.168.0.2"
            }
         }
      }
   }
}

Note: the IPs are the addresses on the heartbeat link.

Create these three files under /root/cluster, then build the Cluster Configuration System archive first:

a.#vi /etc/gfs/pool0.cfg

poolname pool0

minor 1 subpools 1

subpool 0 8 1 gfs_data

pooldevice 0 0 /dev/sde1

b.#pool_assemble -a pool0

c.#ccs_tool create /root/cluster /dev/pool/pool0

d.#vi /etc/sysconfig/gfs

CCS_ARCHIVE="/dev/pool/pool0"

Next create the pool volume that will be our shared disk:

a.#vi /etc/gfs/pool1.cfg

poolname pool1

minor 2 subpools 1

subpool 0 128 1 gfs_data

pooldevice 0 0 /dev/sdd1

b.#pool_assemble -a pool1

c.#gfs_mkfs -p lock_gulm -t Cluster_1:gfs1 -j 8 /dev/pool/pool1

d.#mount -t gfs -o noatime /dev/pool/pool1 /u01
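
At this point the mount can be sanity-checked, for example with gfs_tool from the GFS userland (assuming the /u01 mount point used above):

#df -h /u01

#gfs_tool df /u01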

Below is a GFS start-up script. Note that real1 and real2 must start the lock_gulmd process at the same time: the first lock_gulmd becomes the server and waits for the client's lock_gulmd, and if there is no response after a few tens of seconds it fails and the GFS start-up fails with it. Red Hat recommends not putting the GFS disk into /etc/fstab.

#cat gfstart.sh

#!/bin/sh
# Load the pool, lock_gulm and gfs kernel modules
depmod -a
modprobe pool
modprobe lock_gulm
modprobe gfs
sleep 5
# Bring up the ISCSI initiator and give the devices time to appear
service iscsi start
sleep 20
# Re-bind the raw devices used by the Cluster Suite
service rawdevices restart
# Assemble the CCS pool and the data pool
pool_assemble -a pool0
pool_assemble -a pool1
# Start the cluster configuration daemon and the lock manager
service ccsd start
service lock_gulmd start
# Mount the shared GFS volume (the same mount point /u01 as in step 5.d)
mount -t gfs /dev/pool/pool1 /u01 -o noatime
service gfs status

6. Setup Linux LVS

LVS is an excellent clustering solution initiated and led by Dr. Wensong Zhang; many commercial cluster products, such as Red Hat's Piranha and Turbolinux's Turbo Cluster, are based on the LVS core code.

My system is Red Hat AS3U4, so I use Piranha. Install piranha-0.7.10-2.i386.rpm and ipvsadm-1.21-9.ipvs108.i386.rpm from rhel-3-u5-rhcs-i386.iso (also available at http://distro.ibiblio.org/pub/linux/distributions/caoslinux/centos/3.1/contrib/i386/RPMS/). After installing, run service httpd start and service piranha-gui start; you can then manage and configure everything from http://xx.xx.xx.xx:3636, or simply edit /etc/sysconfig/ha/lvs.cf by hand, which works just as well.
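
To bring the director services back after a reboot they can be enabled with chkconfig (pulse is the service that actually drives LVS; piranha-gui is only needed for the web interface):

#chkconfig pulse on

#chkconfig piranha-gui on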

#cat /etc/sysconfig/ha/lvs.cf

serial_no = 80
primary = 10.3.1.101
service = lvs
rsh_command = ssh
backup_active = 0
backup = 0.0.0.0
heartbeat = 1
heartbeat_port = 1050
keepalive = 6
deadtime = 18
network = nat
nat_router = 192.168.1.1 eth1:1
nat_nmask = 255.255.255.0
reservation_conflict_action = preempt
debug_level = NONE
virtual lvs1 {
     active = 1
     address = 10.3.1.254 eth0:1
     vip_nmask = 255.255.255.0
     fwmark = 100
     port = 80
     persistent = 60
     pmask = 255.255.255.255
     send = "GET / HTTP/1.0rnrn"
     expect = "HTTP"
     load_monitor = ruptime
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 1
     server Real1 {
         address = 192.168.1.68
         active = 1
         weight = 1
     }
     server Real2 {
         address = 192.168.1.67
         active = 1
         weight = 1
     }
}
virtual lvs2 {
     active = 1
     address = 10.3.1.254 eth0:1
     vip_nmask = 255.255.255.0
     port = 21
     send = "n"
     use_regex = 0
     load_monitor = ruptime
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server ftp1 {
         address = 192.168.1.68
         active = 1
         weight = 1
     }
     server ftp2 {
         address = 192.168.1.67
         active = 1
         weight = 1
     }
}

After the configuration is done, run service pulse start, and don't forget to add the hosts involved to /etc/hosts.

#iptables -t mangle -A PREROUTING -p tcp -d 10.3.1.254/32 --dport 80 -j MARK --set-mark 100

#iptables -t mangle -A PREROUTING -p tcp -d 10.3.1.254/32 --dport 443 -j MARK --set-mark 100

#iptables -A POSTROUTING -t nat -p tcp -s 10.3.1.0/24 --sport 20 -j MASQUERADE

Run the three commands above and also add them to /etc/rc.d/rc.local, then check the state with ipvsadm:

#ipvsadm

IP Virtual Server version 1.0.8 (size=65536)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.3.1.254:ftp wlc
  -> cluster2:ftp                  Masq    1      0          0
  -> cluster1:ftp                  Masq    1      0          0
FWM  100 wlc persistent 60
  -> cluster1:0                    Masq    1      0          0
  -> cluster2:0                    Masq    1      0          0

Notes: a. The firewall mark is optional; I added it anyway. The documentation says to add one if you also serve https; I chose the value 100.

b. Do not put the virtual IP into /etc/hosts. I fell into that trap: port 80 kept coming and going.

c. eth0:1 and eth1:1 are created by Piranha; do not configure them by hand. I made that superfluous mistake myself. Some posts on the web are unclear about this, and only Red Hat's documentation finally made it clear to me.

d. "The LVS router can monitor the load on the various real servers by using either rup or ruptime. If you select rup from the drop-down menu, each real server must run the rstatd service. If you select ruptime, each real server must run the rwhod service." That is Red Hat's own wording: with the rup monitoring mode every real server must run rstatd, and with ruptime every real server must run rwhod.
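
Since the lvs.cf above sets load_monitor = ruptime, each real server therefore needs rwhod running, for example:

#chkconfig rwhod on

#service rwhod start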

e. The gateway of the real-server NIC that connects to the router must be the VIP of the router's NIC on that side. In this example the router's eth1 connects to the two real servers' eth0, so with the VIP eth1:1=192.168.1.1 the real servers' eth0 gateway must be 192.168.1.1.

Also enable packet forwarding on the router:

echo "1" > /proc/sys/net/ipv4/ip_forward

7.Setup TOMCAT 5.5.9 and JDK 1.5 (using Red Hat's bundled Apache)

a.#tar xzvf jakarta-tomcat-5.5.9.tar.gz

#mv jakarta-tomcat-5.5.9 /usr/local

#ln -s /usr/local/jakarta-tomcat-5.5.9 /usr/local/tomcat

b.#chmod +x jdk-1_5_0_04-linux-i586.bin

#./jdk-1_5_0_04-linux-i586.bin

#mkdir -p /usr/java

#mv jdk1.5.0_04 /usr/java

#ln -s /usr/java/jdk1.5.0_04 /usr/java/jdk

c.#vi /etc/profile.d/tomcat.sh

export CATALINA_HOME=/usr/local/tomcat

export TOMCAT_HOME=/usr/local/tomcat

d.#vi /etc/profile.d/jdk.sh

if ! echo ${PATH} | grep "/usr/java/jdk/bin" ; then
  JAVA_HOME=/usr/java/jdk
  export JAVA_HOME
  export PATH=/usr/java/jdk/bin:${PATH}
  export CLASSPATH=$JAVA_HOME/lib
fi

e.#chmod 755 /etc/profile.d/*.sh

f. Log in again as root so that tomcat.sh and jdk.sh take effect, then:

#tar xzvf jakarta-tomcat-connectors-jk2-src-current.tar.gz

#cd jakarta-tomcat-connectors-jk2-2.0.4-src/jk/native2/

#./configure --with-apxs2=/usr/sbin/apxs --with-jni --with-apr-lib=/usr/lib

#make

#libtool --finish /usr/lib/httpd/modules

#cp ../build/jk2/apache2/mod_jk2.so ../build/jk2/apache2/libjkjni.so /usr/lib/httpd/modules/

g.#vi /usr/local/tomcat/bin/catalina.sh

After the line "# Only set CATALINA_HOME if not already set", add the following two lines:

serverRoot=/etc/httpd

export serverRoot

h.#vi /usr/local/tomcat/conf/jk2.properties

serverRoot=/etc/httpd

apr.NativeSo=/usr/lib/httpd/modules/libjkjni.so

apr.jniModeSo=/usr/lib/httpd/modules/mod_jk2.so

i.#vi /usr/local/tomcat/conf/server.xml

Add a few lines inside the Host element that define two virtual paths, myjsp and local, one pointing at the shared storage and one at the real server's local directory (a sketch follows).
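
A minimal sketch of those two Context entries; the docBase values are assumptions based on the directories used later in this article:

<Context path="/myjsp" docBase="/u01/www/myjsp" debug="0" reloadable="true"/>
<Context path="/local" docBase="/var/www/html" debug="0" reloadable="true"/>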

j.#vi /etc/httpd/conf/workers2.properties

#[logger.apache2]
#level=DEBUG
[shm]
  file=/var/log/httpd/shm.file
  size=1048576
[channel.socket:localhost:8009]
  tomcatId=localhost:8009
  keepalive=1
  info=Ajp13 forwarding over socket
[ajp13:localhost:8009]
  channel=channel.socket:localhost:8009
[status:status]
  info=Status worker, displays runtime informations
[uri:/*.jsp]
  worker=ajp13:localhost:8009
  context=/

k.#vi /etc/httpd/conf/httpd.conf

Change: DocumentRoot "/u01/www"

After the last LoadModule line, add:

LoadModule jk2_module modules/mod_jk2.so

JkSet config.file /etc/httpd/conf/workers2.properties

Also add the following access-control lines (one way to place them is sketched below):

Order allow,deny

Deny from all
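
One way to place those two lines is to wrap them in a LocationMatch section so that clients cannot fetch anything under WEB-INF directly (an assumption about the intent; adjust as needed):

<LocationMatch "/WEB-INF/">
    Order allow,deny
    Deny from all
</LocationMatch>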

l:#mkdir /u01/ftproot

#mkdir /u01/www

#mkdir /u01/www/myjsp

m: Create an index.jsp on each real server

#vi /var/www/html/index.jsp

<%@ page import="java.util.*,java.sql.*,java.text.*" contentType="text/html"

%>

<%

out.println("test page on real server 1");

%>

On real server 2 the text is "test page on real server 2" instead.

n: Download the JDBC driver:

http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/jdbc9201.html

Unfortunately only a JDK 1.4 build is available. Then, on both real servers:

#cp -R /usr/local/tomcat/webapps/webdav/WEB-INF /u01/www/myjsp

#cp ojdbc14.jar ojdbc14_g.jar ocrs12.zip /u01/www/myjsp/WEB-INF/lib

o: Suppose there is an Oracle server with ip=10.3.1.211, sid=MYID, username=my, password=1234, and read access to Oracle's sample EMPLOYEES table (or simply copy that table over; mine comes from Oracle 9i).

#vi /u01/www/myjsp/testoracle.jsp

 

<%@ page contentType="text/html" %>
<%@ page import="java.sql.*" %>
<?xml version="1.0"?>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html" />
<title>Test ORACLE Employees</title>
</head>
<body>
<%
       String OracleDBDriver="oracle.jdbc.driver.OracleDriver";
       String DBUrl="jdbc:oracle:thin:@10.3.1.211:1521:MYID";
       String UserID="my";
       String UserPWD="1234";

       Connection conn=null;
       Statement  stmt=null;
       ResultSet  rs=null;

       // Load the Oracle thin JDBC driver
       try
       {
           Class.forName(OracleDBDriver);
       }
       catch(ClassNotFoundException ex)
       {
           System.out.println("Class.forName:" + ex);
       }

       conn=DriverManager.getConnection(DBUrl,UserID,UserPWD);
       stmt=conn.createStatement();
       String sql="select * from EMPLOYEES";
       rs = stmt.executeQuery(sql);

       // Table header
       out.print("<table border>");
       out.print("<tr>");
       out.print("<th width=100>" + "EMPLOYEE_ID");
       out.print("<th width=50>" + "FIRST_NAME");
       out.print("<th width=50>" + "LAST_NAME");
       out.print("<th width=50>" + "EMAIL");
       out.print("<th width=50>" + "PHONE_NUMBER");
       out.print("<th width=50>" + "HIRE_DATE");
       out.print("<th width=50>" + "JOB_ID");
       out.print("</tr>");

       // One table row per employee record
       try
       {
           while(rs.next())
           {
               out.print("<tr>");
               int n=rs.getInt(1);
               out.print("<td>" + n + "</td>");
               String e=rs.getString(2);
               out.print("<td>" + e + "</td>");
               out.print("<td>" + rs.getString(3) + "</td>");
               out.print("<td>" + rs.getString(4) + "</td>");
               out.print("<td>" + rs.getString(5) + "</td>");
               out.print("<td>" + rs.getString(6) + "</td>");
               out.print("<td>" + rs.getString(7) + "</td>");
               out.print("</tr>");
           }
       }
       catch(SQLException ex)
       {
           System.err.println("ConnDB.Main:" + ex.getMessage());
       }

       out.print("</table>");
       rs.close();
       stmt.close();
       conn.close();
%>
</body>
</html>

p:#vi /u01/www/index.html

<HTML>
<HEAD>
<META HTTP-EQUIV="Refresh" CONTENT="10; URL=http://10.3.1.254/myjsp/testoracle.jsp">
</HEAD>
<BODY>
<a href="http://10.3.1.254/local/index.jsp">WEB Local</a>
<p>
<a href="http://10.3.1.254/myjsp/testoracle.jsp">Test Oracle WEB</a>
</BODY>
</HTML>

q: On both real servers:

#vi /usr/local/tomcat/conf/tomcat-users.xml

Add the line below to allow management through the web interface (a sketch follows).
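
A minimal sketch of that entry, matching the manager/tomcat login used in step s below:

<user username="manager" password="tomcat" roles="manager"/>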

r: On both real servers:

#service httpd restart

#/usr/local/tomcat/bin/startup.sh

s: Open http://192.168.1.68:8080 and http://192.168.1.67:8080, choose Tomcat Manager and log in as manager/tomcat; the virtual paths /myjsp and /local should show as started.

Open http://10.3.1.254 from two client machines and choose WEB Local: one should show "test page on real server 1" and the other "test page on real server 2". At the same time, ipvsadm on the router shows the connection count for each real server.

8. Setup FTP Service

#vi /etc/vsftpd/vsftpd.conf and add the following lines on both real servers:

anon_root=/u01/ftproot
local_root=/u01/ftproot
setproctitle_enable=YES

#service vsftpd start
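
To start it at boot on both real servers as well:

#chkconfig vsftpd on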

LVS, GFS, ISCSI and TOMCAT are now all set up. We can use Apache JMeter to test the performance of the LVS setup: run jmeter on two machines, both pointed at 10.3.1.254/myjsp/testoracle.jsp, with 200 threads each running concurrently, and monitor with ipvsadm on the router. The Oracle server has to be powerful enough, otherwise large numbers of http processes hang on the real servers and ipvsadm even reports a real server as lost. During the test the real servers' CPU idle drops to about 70%, while the router's CPU idle hardly moves.

Go To Top 回顶部