Chinaunix
Title:
Integrating SequoiaDB with MapReduce
Author:
chenjiefeng1985
Posted:
2014-08-07 17:26
Title:
Integrating SequoiaDB with MapReduce
To connect SequoiaDB with MapReduce you need hadoop-connector.jar and sequoiadb.jar; both can be found in the hadoop directory under the SequoiaDB installation directory.
Because the classpath differs between Hadoop versions, first run hadoop classpath to inspect it, pick one directory from the output, copy hadoop-connector.jar and sequoiadb.jar into it, and then restart the Hadoop cluster.
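Picking a directory out of the classpath can be scripted. A minimal sketch, using a made-up sample of what `hadoop classpath` might print (your paths will differ):

```shell
# Sample output of `hadoop classpath` (illustrative; one colon-separated string):
CP='/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*'
# Take the first entry and strip a trailing /* to get a concrete directory:
FIRST="${CP%%:*}"
DIR="${FIRST%/\*}"
echo "$DIR"
# Then copy both jars there and restart the cluster, e.g.:
# cp /opt/sequoiadb/hadoop/hadoop-connector.jar /opt/sequoiadb/hadoop/sequoiadb.jar "$DIR"
```

Any directory on the classpath works; the jars just have to be visible to every node, which is why the cluster restart is required.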
The important classes in hadoop-connector.jar:
SequoiadbInputFormat: reads data from SequoiaDB.
SequoiadbOutputFormat: writes data into SequoiaDB.
BSONWritable: a wrapper around BSONObject that implements the WritableComparable interface; used to serialize BSONObject instances.
Configuring SequoiaDB and MapReduce:
sequoiadb-hadoop.xml is the configuration file; place it in the source root of your MapReduce project.
sequoiadb.input.url: the URL of the SequoiaDB instance used as input, in the form hostname1:port1,hostname2:port2
sequoiadb.in.collectionspace: the input collection space.
sequoiadb.in.collect: the input collection.
sequoiadb.output.url: the URL of the SequoiaDB instance used as output.
sequoiadb.out.collectionspace: the output collection space.
sequoiadb.out.collect: the output collection.
sequoiadb.out.bulknum: the number of records written to SequoiaDB per batch; used to tune write performance.
Author:
datakai
Posted:
2014-08-08 14:22
Could you elaborate a bit more?
Author:
chenjiefeng1985
Posted:
2014-08-19 15:32
Here are two examples.
1. Read an HDFS file, process it, and write the results into SequoiaDB:
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.bson.BSONObject;
import org.bson.BasicBSONObject;
// BSONWritable and SequoiadbOutputFormat ship in hadoop-connector.jar;
// the import package may differ by connector version.
import com.sequoiadb.hadoop.io.BSONWritable;
import com.sequoiadb.hadoop.mapreduce.SequoiadbOutputFormat;

public class HdfsSequoiadbMR {
    static class MobileMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // the fourth comma-separated field is a mobile number; emit its 3-digit prefix
            String mobilePrefix = value.toString().split(",")[3].substring(0, 3);
            context.write(new Text(mobilePrefix), ONE);
        }
    }

    static class MobileReducer extends Reducer<Text, IntWritable, NullWritable, BSONWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            Iterator<IntWritable> iterator = values.iterator();
            while (iterator.hasNext()) {
                sum += iterator.next().get();
            }
            // pack the aggregate into a BSON record for SequoiaDB
            BSONObject bson = new BasicBSONObject();
            bson.put("prefix", key.toString());
            bson.put("count", sum);
            context.write(NullWritable.get(), new BSONWritable(bson));
        }
    }

    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        if (args.length < 1) {
            System.out.println("please set input path");
            System.exit(1);
        }
        Configuration conf = new Configuration();
        conf.addResource("sequoiadb-hadoop.xml"); // load the connector configuration
        Job job = Job.getInstance(conf);
        job.setJarByClass(HdfsSequoiadbMR.class);
        job.setJobName("HdfsSequoiadbMR");
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(SequoiadbOutputFormat.class); // reduce output goes to SequoiaDB
        TextInputFormat.setInputPaths(job, new Path(args[0]));
        job.setMapperClass(MobileMapper.class);
        job.setReducerClass(MobileReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(BSONWritable.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
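The mapper assumes comma-separated input whose fourth field is a mobile number. The record layout below is a made-up sample just to show the prefix extraction the mapper performs:

```java
public class PrefixDemo {
    public static void main(String[] args) {
        // hypothetical record layout: id,name,city,mobile
        String line = "1001,alice,guangzhou,13912345678";
        // same expression the mapper uses: fourth field, first three digits
        String prefix = line.split(",")[3].substring(0, 3);
        System.out.println(prefix); // prints 139
    }
}
```

Note that a record with fewer than four fields would throw ArrayIndexOutOfBoundsException in the mapper; real input should be validated first.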
2. Read data from SequoiaDB, process it, and write the results to HDFS:
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.bson.BSONObject;
// SequoiadbInputFormat ships in hadoop-connector.jar;
// the import package may differ by connector version.
import com.sequoiadb.hadoop.mapreduce.SequoiadbInputFormat;

/**
 * Read the data and count people per province.
 * @author gaoshengjie
 */
public class SequoiadbHdfsMR {
    static class ProvinceMapper extends Mapper<Object, BSONObject, IntWritable, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(Object key, BSONObject value, Context context)
                throws IOException, InterruptedException {
            // each SequoiaDB record carries an integer province_code field
            int province = (Integer) value.get("province_code");
            context.write(new IntWritable(province), ONE);
        }
    }

    static class ProvinceReducer extends Reducer<IntWritable, IntWritable, IntWritable, LongWritable> {
        @Override
        protected void reduce(IntWritable key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            Iterator<IntWritable> iterator = values.iterator();
            while (iterator.hasNext()) {
                sum += iterator.next().get();
            }
            context.write(key, new LongWritable(sum));
        }
    }

    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        if (args.length < 1) {
            System.out.println("please set output path");
            System.exit(1);
        }
        Configuration conf = new Configuration();
        conf.addResource("sequoiadb-hadoop.xml"); // load the connector configuration
        Job job = Job.getInstance(conf);
        job.setJarByClass(SequoiadbHdfsMR.class);
        job.setJobName("SequoiadbHdfsMR");
        job.setInputFormatClass(SequoiadbInputFormat.class); // map input comes from SequoiaDB
        job.setOutputFormatClass(TextOutputFormat.class);
        FileOutputFormat.setOutputPath(job, new Path(args[0] + "/result"));
        job.setMapperClass(ProvinceMapper.class);
        job.setReducerClass(ProvinceReducer.class);
        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(LongWritable.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
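The reduce step is a plain sum of the ones emitted per province code. The same aggregation outside MapReduce, on made-up sample codes:

```java
import java.util.HashMap;
import java.util.Map;

public class ProvinceCountDemo {
    public static void main(String[] args) {
        // province_code values as the mapper would emit them (sample data)
        int[] codes = {44, 11, 44, 31, 44};
        Map<Integer, Long> counts = new HashMap<>();
        for (int c : codes) {
            counts.merge(c, 1L, Long::sum); // same as summing the IntWritable ones
        }
        System.out.println(counts.get(44)); // prints 3
    }
}
```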
The configuration file (sequoiadb-hadoop.xml):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<property>
<name>sequoiadb.input.url</name>
<value>localhost:11810</value>
</property>
<property>
<name>sequoiadb.output.url</name>
<value>localhost:11810</value>
</property>
<property>
<name>sequoiadb.in.collectionspace</name>
<value>default</value>
</property>
<property>
<name>sequoiadb.in.collect</name>
<value>student</value>
</property>
<property>
<name>sequoiadb.out.collectionspace</name>
<value>default</value>
</property>
<property>
<name>sequoiadb.out.collect</name>
<value>result</value>
</property>
<property>
<name>sequoiadb.out.bulknum</name>
<value>10</value>
</property>
</configuration>
Welcome to Chinaunix (http://72891.cn/)
Powered by Discuz! X3.2