samuschen posted on 2013-1-31 12:22:50

Configuration Parameters: What can you just ignore?

http://www.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore/
 
Configuring a Hadoop cluster is something akin to voodoo. There are a large number of variables in hadoop-default.xml that you can override in hadoop-site.xml. Some specify file paths on your system, but others adjust levers and knobs deep inside Hadoop’s guts. Unfortunately, there’s little or no documentation on how to set them well. Is there a single optimal configuration? Are there some settings that can just be “set to 11?”
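To make the override mechanism concrete: any property that appears in hadoop-default.xml can be redefined in hadoop-site.xml, and the site-specific value wins. A minimal sketch, using dfs.namenode.handler.count as the example property (the value 20 is purely illustrative, not a recommendation):

<property>
  <name>dfs.namenode.handler.count</name>
  <value>20</value>
</property>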
At Cloudera, we’re working hard to make Hadoop easier to use and to make configuration less painful. Our Hadoop Configuration Tool gives you a web-based guide to help set up your cluster. Once it’s running, though, you might want to look under the hood and tune things a bit.
The rest of this post discusses why it’s a bad idea to just set all the limits as high as they’ll go, and gives you some pointers to get started on finding a happy medium.
Why can’t you just set all the limits to 1,000,000?

Increasing most settings has a direct impact on memory consumption. Increasing DataNode and TaskTracker settings, therefore, has an adverse impact on the RAM available to individual MapReduce tasks. On large hardware, they can be set generously high. In general, though, unless you have several dozen or more nodes working together, dialing settings up very high wastes system resources like RAM that could be better applied to running your mapper and reducer code.
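A rough per-node memory budget shows why. The numbers below are illustrative assumptions, not recommendations:

  8 GB   total RAM on the node
  ~1 GB  DataNode heap
  ~1 GB  TaskTracker heap
  ~1 GB  OS, page cache, other daemons
  ~5 GB  left over for child task JVMs

With, say, 4 map slots and 2 reduce slots (mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum), that works out to roughly 5 GB / 6 slots, or about 850 MB per child JVM (mapred.child.java.opts). Every extra megabyte handed to daemon-side buffers and threads comes straight out of that budget.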
That having been said, here’s a list of some things that can be cranked up higher than the defaults by a fair margin:
File descriptor limits

A busy Hadoop daemon might need to open a lot of files. The open fd ulimit in Linux defaults to 1024, which might be too low. You can set it to something more generous, say 16384. Setting it an order of magnitude higher (e.g., 128K) is probably not a good idea: no individual Hadoop daemon should need hundreds of thousands of fds, and if one is consuming that many, there’s probably an fd leak or other bug that needs fixing. Raising the limit that far would just mask the true problem until errors started showing up somewhere else.
You can view your ulimits in bash by running:
$ ulimit -a
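To raise the open-file limit persistently for the daemon user, the usual route on Linux is /etc/security/limits.conf. A minimal sketch, assuming the Hadoop daemons run as a user named "hadoop" (the username and value are assumptions; adjust for your cluster):

hadoop  soft  nofile  16384
hadoop  hard  nofile  16384

The new limit typically applies to new login sessions, so restart the daemons from a fresh shell after making the change.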