Error compiling and running the bundled WordCount.java program from the console with Hadoop on Windows...

Hadoop in Action: WordCount run fails with java.io.IOException: Job failed!
Source: CSDN
[I've just started learning Hadoop. I deployed a Hadoop 2.4.1 cluster, compiled the Eclipse plugin, and am now running the WordCount program from "Hadoop in Action". The jar has been built and the input files uploaded, but running it fails with java.io.IOException.
The program reports the following error (log continued in the next post):]
14/08/26 01:14:52 INFO mapreduce.Job: Job job_local01 running in uber mode : false
14/08/26 01:14:52 INFO mapreduce.Job:  map 100% reduce 0%
14/08/26 01:14:52 INFO mapreduce.Job: Job job_local01 failed with state FAILED due to: NA
14/08/26 01:14:52 INFO mapreduce.Job: Counters: 38
File System Counters
FILE: Number of bytes read=3661
FILE: Number of bytes written=222411
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=22
HDFS: Number of bytes written=0
HDFS: Number of read operations=5
HDFS: Number of large read operations=0
HDFS: Number of write operations=1
Map-Reduce Framework
Map input records=1
Map output records=4
Map output bytes=38
Map output materialized bytes=52
Input split bytes=85
Combine input records=0
Combine output records=0
Reduce input groups=0
Reduce shuffle bytes=52
Reduce input records=0
Reduce output records=0
Spilled Records=4
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=0
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Total committed heap usage (bytes)=
Shuffle Errors
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=22
File Output Format Counters
Bytes Written=0
java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
at WordCount.main(WordCount.java:91)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
[root@master ~]#
Could this be a permissions problem?
Could you paste your code?
Also try clearing out the tmp directory.
The code is from the second edition of "Hadoop in Action", unchanged:
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount {

    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            try {
                while (tokenizer.hasMoreTokens()) {
                    word.set(tokenizer.nextToken());
                    output.collect(word, one);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            try {
                output.collect(key, new IntWritable(sum));
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        // Note: as posted, this registers the org.apache.hadoop.mapred.Reducer interface itself
        // rather than the Reduce class defined above, so the framework cannot instantiate a
        // reducer at runtime; this alone is a likely cause of the "Job failed!" error.
        conf.setReducerClass(Reducer.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        try {
            JobClient.runJob(conf);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
I haven't configured any permissions. Which directories and files would I need to set permissions on, exactly?
Problem solved: it was the program code. The original code apparently targets the old API; I found a newer version and it works now:
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context) {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            try {
                while (tokenizer.hasMoreTokens()) {
                    word.set(tokenizer.nextToken());
                    context.write(word, one);
                }
            } catch (IOException | InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            try {
                context.write(key, result);
            } catch (IOException | InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        Job job = null;
        try {
            job = Job.getInstance(conf);
        } catch (IOException e) {
            e.printStackTrace();
        }
        job.setJarByClass(WordCount.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        try {
            FileInputFormat.addInputPath(job, new Path("hdfs://master:9000/input"));
            FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/output"));
        } catch (IllegalArgumentException | IOException e) {
            e.printStackTrace();
        }
        try {
            job.submit();
        } catch (ClassNotFoundException | IOException | InterruptedException e) {
            e.printStackTrace();
        }
    }
}
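For reference, a common variant of this driver blocks on completion instead of only calling submit(), so progress is printed and the process exit code reflects whether the job succeeded. A minimal sketch, reusing the Map and Reduce classes and the same illustrative HDFS paths as above:

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "wordcount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("hdfs://master:9000/input"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/output"));
        // waitForCompletion(true) submits the job, prints progress, and returns true on success
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }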
So it really was the code.
To be honest it wasn't a precisely targeted fix; I just tried the places that seemed likely to be the problem.
Did you type the code in yourself, or does the book come with source code?
warmtiffany:
I'm hitting the same problem as you. How exactly do I change the code? The WordCount extracted from the jar is a .class file, and opening it with Notepad just shows gibberish, so I have no idea how to modify it...
A few points are worth spelling out.
1. If the WordCount program has no package hierarchy, i.e. no package declaration,
then use the following command:
hadoop jar wordcount.jar WordCount2 /home/hadoop/input/20418.txt /home/hadoop/output/wordcount2-6
Roughly, this command means: run a Hadoop program that lives in wordcount.jar. That wordcount.jar contains the following class files. Three come from compiling WordCount.java:
WordCount.class, WordCount$Map.class, WordCount$Reduce.class
and four come from compiling WordCount2.java:
WordCount2.class, WordCount2$IntSumReducer.class, WordCount2$IntWritableDecreasingComparator.class, WordCount2$TokenizerMapper.class
All of these .class files sit at the root of the jar. To package the seven class files into a jar, assuming they are all in the same WordCount directory, cd into that directory and run:
jar cvf wordcount.jar *.class
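As a quick sanity check (an extra step, not part of the original write-up), the jar's contents can be listed to confirm that the class files really sit at the jar root:
jar tf wordcount.jar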
2. If the WordCount program does have a package hierarchy,
then the following command (wrong):
$ hadoop jar wordcount.jar WordCount2 /home/hadoop/input/20418.txt /home/hadoop/output/wordcount2-7
fails with the error below (the underlying reason is discussed in the referenced blog post):
Exception in thread "main" java.lang.ClassNotFoundException: WordCount2
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
The correct command is:
$ hadoop jar WordCount.jar org.apache.hadoop.examples.WordCount2 /home/hadoop/input/20418.txt /home/hadoop/output/wordcount2-7
The only difference is the jar: the previous command used wordcount.jar, whereas this one uses WordCount.jar. WordCount.jar was produced by packaging the entire org/apache/hadoop/examples directory.
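For the packaged case, a sketch of how such a jar could be built (the local layout is an assumption for illustration: the compiled classes are taken to sit under org/apache/hadoop/examples/, e.g. because javac was run with -d .):
cd /home/hadoop/WordCount
jar cvf WordCount.jar org/
Listing it with jar tf WordCount.jar should then show entries starting with org/apache/hadoop/examples/.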
3. Compiling the WordCount.java program
Reference 1 compiles WordCount.java with a command along these lines:
javac -classpath /home/hadoop/program/hadoop-0.20.1/hadoop-0.20.1-core.jar WordCount.java -d /home/hadoop/WordCount/
This command sets the classpath to /home/hadoop/program/hadoop-0.20.1/hadoop-0.20.1-core.jar. We can instead modify an environment variable so this does not have to be typed every time.
Specifically:
sudo gedit /etc/profile
and change the line
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH
to:
export HADOOP_HOME=/home/hadoop/program/hadoop-0.20.1
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH:$HADOOP_HOME/hadoop-0.20.1-core.jar
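Since /etc/profile is only read at login, the edit has to be loaded into the current shell before it takes effect (a standard step the original post leaves implicit):
source /etc/profile
echo $CLASSPATH
The echoed value should now end with hadoop-0.20.1-core.jar.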
With that in place, WordCount.java can be compiled directly with:
javac WordCount.java -d /home/hadoop/WordCount/
Note: this WordCount.java may or may not declare a package. Without a package declaration, the compiled .class files land directly in /home/hadoop/WordCount/. With one, the package's directory structure is also created under /home/hadoop/WordCount/.
4. When the WordCount program fails to compile
Using the same style of compile command,
javac WordCount2.java -d /home/hadoop/WordCount/
produces the following error:
WordCount2.java:93: cannot access org.apache.commons.cli.Options
class file for org.apache.commons.cli.Options not found
String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
1 error
After some digging, this error occurs because WordCount2.java uses classes that are not on the classpath. In Eclipse, the build path contains many jars besides hadoop-0.20.1-core.jar, and spelling all of them out on the command line is tedious, so Ant is the recommended way to build this (how to use Ant may come up in a later post); alternatively, simply let Eclipse do the compiling. See the referenced blog post.
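If you would rather stay on the command line, one workaround (a sketch; the exact jar file name under $HADOOP_HOME/lib varies by release) is to append the missing library jars to the classpath before compiling:
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib/commons-cli-1.2.jar
javac WordCount2.java -d /home/hadoop/WordCount/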
Running Hadoop's WordCount program (ITeye blog)
Source code:
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    public static class MapClass extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            String line = value.toString();
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                output.collect(word, one);
            }
        }
    }

    /**
     * A reducer class that just emits the sum of the input values.
     */
    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    static int printUsage() {
        System.out.println("wordcount [-m <maps>] [-r <reduces>] <input> <output>");
        ToolRunner.printGenericCommandUsage(System.out);
        return -1;
    }

    /**
     * The main driver for the word count map/reduce program. Invoke this method to
     * submit the map/reduce job.
     *
     * @throws IOException When there are communication problems with the job tracker.
     */
    public int run(String[] args) throws Exception {
        JobConf conf = new JobConf(getConf(), WordCount.class);
        conf.setJobName("wordcount");

        // the keys are words (strings)
        conf.setOutputKeyClass(Text.class);
        // the values are counts (ints)
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(MapClass.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        List<String> other_args = new ArrayList<String>();
        for (int i = 0; i < args.length; ++i) {
            try {
                if ("-m".equals(args[i])) {
                    conf.setNumMapTasks(Integer.parseInt(args[++i]));
                } else if ("-r".equals(args[i])) {
                    conf.setNumReduceTasks(Integer.parseInt(args[++i]));
                } else {
                    other_args.add(args[i]);
                }
            } catch (NumberFormatException except) {
                System.out.println("ERROR: Integer expected instead of "
                        + args[i]);
                return printUsage();
            } catch (ArrayIndexOutOfBoundsException except) {
                System.out.println("ERROR: Required parameter missing from "
                        + args[i - 1]);
                return printUsage();
            }
        }
        // Make sure there are exactly 2 parameters left.
        if (other_args.size() != 2) {
            System.out.println("ERROR: Wrong number of parameters: "
                    + other_args.size() + " instead of 2.");
            return printUsage();
        }
        FileInputFormat.setInputPaths(conf, other_args.get(0));
        FileOutputFormat.setOutputPath(conf, new Path(other_args.get(1)));
        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new WordCount(), args);
        System.exit(res);
    }
}
Hadoop is configured in pseudo-distributed mode. Create a directory on the local machine, e.g. ~/code/hadoop/WordCount, and compile WordCount.java there:
javac -classpath /usr/local/hadoop-0.20.2/hadoop-0.20.2-core.jar WordCount.java
Compilation produces three class files: WordCount.class, WordCount$MapClass.class, and WordCount$Reduce.class.
Package them into a jar file: jar -cvf WordCount.jar *.class
Create input1.txt and input2.txt and put some words in them.
Create a directory on HDFS and upload the input files:
hadoop fs -mkdir /tmp/input
hadoop fs -put input1.txt /tmp/input/
hadoop fs -put input2.txt /tmp/input/
Run the program:
hadoop jar WordCount.jar WordCount /tmp/input /tmp/output
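After the job finishes, the counts can be read straight out of HDFS, for example (the part file name assumes the old mapred API's single default reducer):
hadoop fs -ls /tmp/output
hadoop fs -cat /tmp/output/part-00000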
Thanks! But the hadoop fs -mkdir /tmp/output step shouldn't be needed, right? I added it and the final run then failed with a "/tmp/output is already exist" error; after I deleted the pre-created /tmp/output and reran the last step it worked.
Yes, the job errors out if the output directory already exists. Thanks for the correction.
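When rerunning with the same output path, the old output directory has to be removed first; with the 0.20-era shell that is roughly:
hadoop fs -rmr /tmp/output
(newer releases spell it hadoop fs -rm -r /tmp/output).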
Hi, I've just started using IDEA; switching to the JRE doesn't help either, it just crashes?
Running the Hadoop WordCount program under remote debugging from Eclipse throws the following error (Zhihu)
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>Hadoop</groupId>
    <artifactId>demo</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <hadoop.version>2.7.1</hadoop.version>
    </properties>

    <dependencies>
        <!-- hadoop -->
        <!-- /artifact/commons-io/commons-io -->
        <dependency>
            <groupId>commons-io</groupId>
            <artifactId>commons-io</artifactId>
            <version>2.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-yarn-common</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-yarn-api</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-yarn-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
    </dependencies>
</project>
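With a POM like the one above, the job jar is typically produced through the standard Maven lifecycle and then handed to Hadoop, for example (the jar name follows the artifactId and version declared above; the class name and paths are illustrative):
mvn clean package
hadoop jar target/demo-1.0-SNAPSHOT.jar WordCount /tmp/input /tmp/output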