Create a new Maven project

  1. Create New Project -> Maven -> Next

  2. Fill in the GroupId and ArtifactId -> Next -> Finish

Write the wordcount project

  1. Create the project directory structure: right-click java -> New -> Package, enter the package path (com.hadoop.wdcount in this example) to create the package.

  2. In the same way, create three classes under the new package: WordcountMain, WordcountMapper, and WordcountReducer.

  3. Write the pom.xml configuration (pulling in the Hadoop jars the project needs):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>sa.hadoop</groupId>
    <artifactId>wordcount</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <!-- we are using the 2.7.7 release of Hadoop -->
            <version>2.7.7</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.7.7</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-common</artifactId>
            <version>2.7.7</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.7.7</version>
        </dependency>
    </dependencies>
</project>

Write the project code

Fill in the logic of the three classes created above.
(1) WordcountMapper.java

package com.hadoop.wdcount;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class WordcountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // One input line per call: split on spaces and emit (word, 1) for each token.
        String line = value.toString();
        String[] words = line.split(" ");
        for (String word : words) {
            context.write(new Text(word), new IntWritable(1));
        }
    }
}
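The mapper's split-and-emit logic can be sanity-checked without a cluster. The sketch below mimics it in plain Java with no Hadoop types (MapLogicDemo and its `map` helper are made-up names for illustration), using the sample line this tutorial uploads later:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.AbstractMap.SimpleEntry;
import java.util.Map.Entry;

public class MapLogicDemo {
    // Same tokenization as WordcountMapper: split on single spaces, emit (word, 1).
    static List<Entry<String, Integer>> map(String line) {
        List<Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.split(" ")) {
            pairs.add(new SimpleEntry<>(word, 1));
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<Entry<String, Integer>> pairs = map("I believe that I will succeed!");
        System.out.println(pairs.size());  // 6 tokens
        System.out.println(pairs.get(0));  // I=1
    }
}
```

Note that splitting on a single space keeps the trailing "!" attached to "succeed!"; a production mapper would usually split on `\\s+` and strip punctuation, but this matches the code above.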

(2) WordcountReducer.java

package com.hadoop.wdcount;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class WordcountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // All counts for one word arrive together: sum them and emit (word, total).
        int counts = 0;
        for (IntWritable value : values) {
            counts += value.get();
        }
        context.write(key, new IntWritable(counts));
    }
}
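What the reducer does for a single key can likewise be sketched in plain Java (ReduceLogicDemo is a hypothetical name; the framework, not our code, is what groups the values by key before calling reduce):

```java
import java.util.Arrays;

public class ReduceLogicDemo {
    // What WordcountReducer does for one key: sum the 1s collected for that word.
    static int reduce(Iterable<Integer> values) {
        int counts = 0;
        for (int v : values) {
            counts += v;
        }
        return counts;
    }

    public static void main(String[] args) {
        // The word "I" appears twice in the sample line, so its value list is [1, 1].
        System.out.println(reduce(Arrays.asList(1, 1)));  // 2
    }
}
```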

(3)WordcountMain.java

package com.hadoop.wdcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordcountMain {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "wordcount");
        job.setJarByClass(WordcountMain.class);
        job.setMapperClass(WordcountMapper.class);
        job.setReducerClass(WordcountReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        // Final (reducer) output types, matching WordcountReducer's signature.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // args[0] is the HDFS input path, args[1] the (not yet existing) output path.
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        boolean flag = job.waitForCompletion(true);
        if (!flag) {
            System.out.println("wordcount failed!");
        }
    }
}

Package the project into a jar

  1. Right-click the project name -> Open Module Settings

  2. Artifacts -> + -> JAR -> From modules with dependencies…

  3. Fill in the Main Class (click … and choose WordcountMain), select "extract to the target JAR", then click OK.

  4. Check "Include in project build". Output directory is the final output location, and the Output Layout below lists the jars that will be produced; click OK.

  5. In the menu, choose Build -> Build Artifacts…

  6. Choose Build; the result appears in the output directory from step 4, or in the project's out directory.

Run and verify

(This example uses Hadoop 2.7.6 on Windows; WSL has not been verified yet. Note that the pom above declares 2.7.7; it is safer to keep the client and cluster versions aligned.)

  1. In the directory where the jar was created (C:\Users\USTC\Documents\maxyi\Java\wordcount\out\artifacts\wordcount_jar), create a file input1.txt containing "I believe that I will succeed!" and save it. This file will be uploaded to Hadoop shortly.

  2. Start all Hadoop daemons:
    cd hadoop-2.7.6/sbin
    start-all.cmd

  3. Once the daemons are running, go to the jar directory created earlier and upload the txt file to Hadoop:
    cd /
    cd C:\Users\USTC\Documents\maxyi\Java\wordcount\out\artifacts\wordcount_jar
    hadoop fs -put ./input1.txt /input1
    The following command shows whether the upload succeeded:
    hadoop fs -ls /

  4. Delete META-INF/LICENSE inside wordcount.jar; otherwise Hadoop fails at run time because it cannot create a directory named license alongside that file (the Windows filesystem is case-insensitive) and reports an error.

  5. Run wordcount:
    hadoop jar wordcount.jar com.hadoop.wdcount.WordcountMain /input1 /output2
    The jar command takes four arguments here:
    the first, wordcount.jar, is the packaged jar;
    the second, com.hadoop.wdcount.WordcountMain, is the main class of the Java project, including its package path;
    the third, /input1, is the input just uploaded;
    the fourth, /output2, is the wordcount output (it must be a new path; an existing one cannot be reused).

  6. Download the output file to check whether the result is correct:
    hadoop fs -get /output2
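To know what "correct" looks like: the downloaded part-r-00000 file holds one word<TAB>count line per word, sorted by key. For the single sample line in input1.txt, the whole pipeline can be previewed with a plain-Java simulation (ExpectedOutputDemo is a made-up name; TreeMap's String ordering matches Hadoop's byte ordering for these ASCII words):

```java
import java.util.Map;
import java.util.TreeMap;

public class ExpectedOutputDemo {
    // Simulate map (tokenize, emit 1s) and reduce (group and sum) in one pass.
    static Map<String, Integer> wordCount(String line) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String word : line.split(" ")) {
            counts.merge(word, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Same text as input1.txt created in step 1; prints the expected
        // part-r-00000 lines, e.g. "I" with count 2, "believe" with count 1, ...
        wordCount("I believe that I will succeed!")
                .forEach((word, count) -> System.out.println(word + "\t" + count));
    }
}
```

Capital "I" sorts before the lowercase words because uppercase ASCII letters compare smaller, which is also why it appears first in the real output file.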