出发一路向北 posted on 2012-12-30 16:21:47

Key points for getting Hive to process LZO files in parallel

1. Make sure the LZO index is created:
$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/lib/hadoop-lzo-0.4.10.jar com.hadoop.compression.lzo.DistributedLzoIndexer /user/hive/warehouse/flog
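To check that the indexer actually ran, you can list the same warehouse directory; each .lzo data file should now have a matching .lzo.index file beside it (the file name in the comment is only an illustration, not from the original post):

$HADOOP_HOME/bin/hadoop fs -ls /user/hive/warehouse/flog
# expect pairs such as a hypothetical part-00000.lzo together with part-00000.lzo.index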
2. If you are creating a new external table in Hive, the statement is:
CREATE EXTERNAL TABLE foo (
    columnA string,
    columnB string
)
PARTITIONED BY (date string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY "\t"
STORED AS INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
          OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
LOCATION '/path/to/hive/tables/foo';
3. For a table that already exists, the statement to change it is:
ALTER TABLE foo
    SET FILEFORMAT
        INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
        OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat";
4. After the ALTER TABLE, data that was already loaded into the table must be re-loaded and re-indexed; otherwise it still will not be split into blocks.
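As an illustration of step 4, a re-load might look like the sketch below; the staging path and the partition value are hypothetical, not from the original post:

LOAD DATA INPATH '/path/to/staging/part-00000.lzo'
    OVERWRITE INTO TABLE foo PARTITION (date='2012-12-30');

Then rebuild the index over that partition's directory, the same way as in step 1:

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/lib/hadoop-lzo-0.4.10.jar com.hadoop.compression.lzo.DistributedLzoIndexer /path/to/hive/tables/foo/date=2012-12-30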
5. To run a MapReduce job with Hadoop Streaming, the command is:
hadoop jar /opt/mapr/hadoop/hadoop-0.20.2/contrib/streaming/hadoop-0.20.2-dev-streaming.jar \
    -file /home/pyshell/map.py -file /home/pyshell/red.py \
    -mapper /home/pyshell/map.py -reducer /home/pyshell/red.py \
    -input /aojianlog/20120304/gold/gold_38_3.csv.lzo -output /aojianresult/gold38 \
    -inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat \
    -jobconf mapred.output.compress=true \
    -jobconf mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec
Note: if the -inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat option is omitted, the map stage will not split the input either.
If the -jobconf mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec option is omitted and only -jobconf mapred.output.compress=true is set, the reduce output files will come out in .lzo_deflate format.
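The post does not include map.py and red.py themselves. A minimal sketch of the kind of Python scripts the streaming command above expects (a simple count keyed on the first CSV column, purely illustrative, not the original scripts):

#!/usr/bin/env python
# map.py -- illustrative streaming mapper: emit the first CSV field as key, 1 as count
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split(",")
    if fields and fields[0]:
        sys.stdout.write(fields[0] + "\t1\n")

#!/usr/bin/env python
# red.py -- illustrative streaming reducer: sum counts per key (streaming sorts input by key)
import sys

current_key, total = None, 0
for line in sys.stdin:
    key, _, value = line.rstrip("\n").partition("\t")
    if current_key is not None and key != current_key:
        sys.stdout.write(current_key + "\t" + str(total) + "\n")
        total = 0
    current_key = key
    total += int(value or 0)
if current_key is not None:
    sys.stdout.write(current_key + "\t" + str(total) + "\n")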