Testing whether a Hadoop cluster was installed successfully (using the jps command and an example)

Date: 2016-01-26 / Source: web

The previous post covered installing and configuring a 3-node Hadoop cluster; this post shows how to verify that the installation succeeded. My cluster consists of three virtual machines: one master and two slaves, slave1 and slave2. After starting the cluster with start-all.sh, you can verify the installation and configuration with the jps command and a sample job.

1. The jps command

(1) The master node

Start the cluster:

cy@master:~$ start-all.sh

starting namenode, logging to /home/cy/Hadoop/hadoop-1.2.1/libexec/../logs/hadoop-cy-namenode-master.out

slave2: starting datanode, logging to /home/cy/Hadoop/hadoop-1.2.1/libexec/../logs/hadoop-cy-datanode-slave2.out

slave1: starting datanode, logging to /home/cy/Hadoop/hadoop-1.2.1/libexec/../logs/hadoop-cy-datanode-slave1.out

master: starting secondarynamenode, logging to /home/cy/Hadoop/hadoop-1.2.1/libexec/../logs/hadoop-cy-secondarynamenode-master.out

starting jobtracker, logging to /home/cy/Hadoop/hadoop-1.2.1/libexec/../logs/hadoop-cy-jobtracker-master.out

slave1: starting tasktracker, logging to /home/cy/Hadoop/hadoop-1.2.1/libexec/../logs/hadoop-cy-tasktracker-slave1.out

slave2: starting tasktracker, logging to /home/cy/Hadoop/hadoop-1.2.1/libexec/../logs/hadoop-cy-tasktracker-slave2.out

Use the jps command to list the Java processes:

cy@master:~$ jps

6670 NameNode

7141 Jps

7057 JobTracker

(2) The slave1 node

Use the jps command to list the Java processes:

cy@slave1:~$ jps

3218 Jps

2805 DataNode

2995 TaskTracker

(3) The slave2 node

Use the jps command to list the Java processes:

cy@slave2:~$ jps

2913 TaskTracker

2731 DataNode

3147 Jps

If jps on all three virtual machines shows output like the above, Hadoop was installed and configured successfully. (Note: since start-all.sh also launches a SecondaryNameNode on master, you would normally expect jps on master to list it as well; if it is missing, check the secondarynamenode log under the logs directory.)
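The per-node check can also be scripted. The helper below is a hypothetical sketch, not part of Hadoop: `check_daemons` simply greps a captured jps listing for the daemon names expected on that node (using the Hadoop 1.2.1 daemon names shown above).

```shell
# Hypothetical helper (not part of Hadoop): grep a saved jps listing for the
# daemon names expected on a node, and report any that are missing.
check_daemons() {
  jps_output="$1"; shift
  missing=""
  for d in "$@"; do
    # -w matches the daemon name as a whole word in the listing
    echo "$jps_output" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then echo "OK"; else echo "MISSING:$missing"; fi
}

# Example with the slave1 listing shown above:
check_daemons "$(printf '3218 Jps\n2805 DataNode\n2995 TaskTracker\n')" DataNode TaskTracker
# prints: OK
```

On each node you would feed it the live output, e.g. `check_daemons "$(jps)" DataNode TaskTracker`.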


2. Testing the Hadoop cluster with the wordcount program bundled in hadoop-examples-1.2.1.jar, which counts the number of occurrences of each word.

(1) Create a new file test.txt on the desktop containing 10 lines, each reading "hello world".
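One way to generate this file from a terminal (the loop is just one option; any editor works, and the output path should be adjusted to your desktop, e.g. ~/Desktop/test.txt):

```shell
# Generate the 10-line input file: "yes" repeats the line, head truncates it.
# Writes to the current directory; adjust the path as needed.
yes "hello world" | head -n 10 > test.txt
grep -c "hello world" test.txt
# prints: 10
```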

(2) Create an input directory in HDFS with one of the following commands:

          hadoop fs -mkdir input

         or  hadoop fs -mkdir /user/your-username/input

(3) Upload the test.txt you created to the input directory in HDFS with one of the following commands:

          hadoop fs -put /home/your-username/Desktop/test.txt  input

         or hadoop fs -put /home/your-username/Desktop/test.txt   /user/your-username/input

(4) Check that test.txt is now under the input directory in HDFS:

          hadoop fs -ls input

          If the output looks like the following, the upload succeeded:

         Found 1 items

         -rw-r--r--   3 cy supergroup        120 2015-05-08 20:26 /user/cy/input/test.txt

(5) Run the wordcount program bundled in hadoop-examples-1.2.1.jar, as shown below. (Note: before running this command, cd into /home/cy/Hadoop/hadoop-1.2.1 in your terminal.)

         hadoop jar hadoop-examples-1.2.1.jar wordcount /user/your-username/input/test.txt  /user/your-username/output

        If the output looks like the following, the job ran successfully:

15/05/08 20:31:29 INFO input.FileInputFormat: Total input paths to process : 1

15/05/08 20:31:29 INFO util.NativeCodeLoader: Loaded the native-hadoop library

15/05/08 20:31:29 WARN snappy.LoadSnappy: Snappy native library not loaded

15/05/08 20:31:30 INFO mapred.JobClient: Running job: job_201505082010_0001

15/05/08 20:31:31 INFO mapred.JobClient:  map 0% reduce 0%

15/05/08 20:31:35 INFO mapred.JobClient:  map 100% reduce 0%

15/05/08 20:31:42 INFO mapred.JobClient:  map 100% reduce 33%

15/05/08 20:31:43 INFO mapred.JobClient:  map 100% reduce 100%

15/05/08 20:31:43 INFO mapred.JobClient: Job complete: job_201505082010_0001

15/05/08 20:31:43 INFO mapred.JobClient: Counters: 29

15/05/08 20:31:43 INFO mapred.JobClient:   Job Counters 

15/05/08 20:31:43 INFO mapred.JobClient:     Launched reduce tasks=1

15/05/08 20:31:43 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=3117

15/05/08 20:31:43 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

15/05/08 20:31:43 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

15/05/08 20:31:43 INFO mapred.JobClient:     Launched map tasks=1

15/05/08 20:31:43 INFO mapred.JobClient:     Data-local map tasks=1

15/05/08 20:31:43 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=8014

15/05/08 20:31:43 INFO mapred.JobClient:   File Output Format Counters 

15/05/08 20:31:43 INFO mapred.JobClient:     Bytes Written=18

15/05/08 20:31:43 INFO mapred.JobClient:   FileSystemCounters

15/05/08 20:31:43 INFO mapred.JobClient:     FILE_BYTES_READ=30

15/05/08 20:31:43 INFO mapred.JobClient:     HDFS_BYTES_READ=226

15/05/08 20:31:43 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=116774

15/05/08 20:31:43 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=18

15/05/08 20:31:43 INFO mapred.JobClient:   File Input Format Counters 

15/05/08 20:31:43 INFO mapred.JobClient:     Bytes Read=120

15/05/08 20:31:43 INFO mapred.JobClient:   Map-Reduce Framework

15/05/08 20:31:43 INFO mapred.JobClient:     Map output materialized bytes=30

15/05/08 20:31:43 INFO mapred.JobClient:     Map input records=10

15/05/08 20:31:43 INFO mapred.JobClient:     Reduce shuffle bytes=30

15/05/08 20:31:43 INFO mapred.JobClient:     Spilled Records=4

15/05/08 20:31:43 INFO mapred.JobClient:     Map output bytes=200

15/05/08 20:31:43 INFO mapred.JobClient:     CPU time spent (ms)=610

15/05/08 20:31:43 INFO mapred.JobClient:     Total committed heap usage (bytes)=176427008

15/05/08 20:31:43 INFO mapred.JobClient:     Combine input records=20

15/05/08 20:31:43 INFO mapred.JobClient:     SPLIT_RAW_BYTES=106

15/05/08 20:31:43 INFO mapred.JobClient:     Reduce input records=2

15/05/08 20:31:43 INFO mapred.JobClient:     Reduce input groups=2

15/05/08 20:31:43 INFO mapred.JobClient:     Combine output records=2

15/05/08 20:31:43 INFO mapred.JobClient:     Physical memory (bytes) snapshot=182902784

15/05/08 20:31:43 INFO mapred.JobClient:     Reduce output records=2

15/05/08 20:31:43 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=756301824

15/05/08 20:31:43 INFO mapred.JobClient:     Map output records=20

(6) View the job's result with the following commands:

         hadoop fs -ls output

         hadoop fs -text /user/your-username/output/part-r-00000

If you see the output below, all three Hadoop nodes were installed and configured successfully and the test passed; you can now move on to deeper use and study of Hadoop:

hello 10

world 10
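As an offline cross-check of these counts, the same tally can be reproduced locally with awk. This runs against a local copy of the input file, not HDFS, and involves no Hadoop at all; it is only a sanity check that the expected answer is indeed hello 10, world 10:

```shell
# Rebuild the input locally and tally words with awk as a cross-check of the
# wordcount result (local filesystem only, no Hadoop involved):
yes "hello world" | head -n 10 > test_local.txt
awk '{ for (i = 1; i <= NF; i++) c[$i]++ } END { for (w in c) print w, c[w] }' test_local.txt | sort
# prints:
# hello 10
# world 10
```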

Everything above is from my own experience installing and configuring Hadoop; I hope it serves as a useful reference. I will continue writing about my experiences learning Hadoop.


Author: admin




