
Hadoop streaming

In this recipe, we will look at how to execute jobs on a Hadoop cluster using scripts written in Bash or Python. It is not mandatory to use Java for writing MapReduce code; any language that reads from standard input and writes to standard output can be used by invoking the Hadoop streaming utility. Do not confuse this with real-time stream processing, which is a different topic from the one discussed here.

Getting ready

To step through the recipes in this chapter, make sure you have a running cluster with HDFS and YARN set up correctly, as discussed in the previous chapters. This can be a single-node or a multi-node cluster, as long as it is configured correctly.

It is not necessary to know Java to run MapReduce programs on Hadoop. Users can carry forward their existing scripting knowledge and use Bash or Python to write and run jobs on Hadoop.

How to do it...

  1. Connect to an edge node in the cluster and switch to user hadoop.
  2. The streaming JAR ships with Hadoop and is located under the installation directory at /opt/cluster/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.2.jar.
  3. The map script of the wordcount example, mapper.py, is written in Python; a minimal sketch of it is shown after this list.
  4. The reduce script, reducer.py, is shown next:
    #!/usr/bin/env python
    
    from operator import itemgetter
    import sys
    
    current_word = None
    current_count = 0
    word = None
    
    # input comes from STDIN
    for line in sys.stdin:
        # remove leading and trailing whitespace
        line = line.strip()
    
        # parse the input we got from mapper.py
        word, count = line.split('\t', 1)
    
        # convert count (currently a string) to int
        try:
            count = int(count)
        except ValueError:
            # count was not a number, so silently
            # ignore/discard this line
            continue
    
        # this IF-switch only works because Hadoop sorts map output
        # by key (here: word) before it is passed to the reducer
        if current_word == word:
            current_count += count
        else:
            if current_word:
                # write result to STDOUT
                print('%s\t%s' % (current_word, current_count))
            current_count = count
            current_word = word
    
    # do not forget to output the last word if needed!
    if current_word == word:
        print('%s\t%s' % (current_word, current_count))
  5. The user can then run the scripts as a streaming job on the cluster; a sketch of the submission command is shown after this list.
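
The mapper referenced in step 3 is a short Python script. The following is a minimal sketch of a wordcount mapper.py, not a definitive listing: it reads lines from STDIN and emits each word with a count of 1, separated by a tab, which is the format reducer.py expects:

    #!/usr/bin/env python
    # mapper.py - minimal wordcount mapper sketch for Hadoop streaming

    import sys

    # input comes from STDIN
    for line in sys.stdin:
        # remove leading and trailing whitespace
        line = line.strip()

        # emit each word with a count of 1, separated by a tab,
        # so that reducer.py can parse it with split('\t', 1)
        for word in line.split():
            print('%s\t%s' % (word, 1))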

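The scripts can then be submitted to the cluster through the streaming JAR, as described in step 5. The commands below are a sketch: the JAR path is the one from step 2, while the HDFS input and output paths are placeholder examples that must be adjusted for your cluster (the output directory must not already exist):

$ chmod +x mapper.py reducer.py
# the HDFS paths below are examples; change them to suit your cluster
$ hdfs dfs -mkdir -p /user/hadoop/input
$ hdfs dfs -put file /user/hadoop/input/
$ hadoop jar /opt/cluster/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.2.jar \
    -files mapper.py,reducer.py \
    -mapper mapper.py \
    -reducer reducer.py \
    -input /user/hadoop/input \
    -output /user/hadoop/output
$ hdfs dfs -cat /user/hadoop/output/part-*
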
How it works...

In this recipe, mapper.py and reducer.py are simple Python scripts that can be tested directly on the command line, without the need for Hadoop, as shown next:

$ cat file | ./mapper.py | sort -k1,1 | ./reducer.py

Here, file is a simple text file, and the sort command stands in for the shuffle-and-sort phase that Hadoop performs between the map and reduce stages. Pay close attention to Python's indentation when troubleshooting these scripts.
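
For example, assuming the mapper sketch shown earlier and both scripts made executable, a quick check on a single line of input produces per-word totals:

$ echo "the quick brown fox jumps over the lazy dog the" | ./mapper.py | sort -k1,1 | ./reducer.py
brown	1
dog	1
fox	1
jumps	1
lazy	1
over	1
quick	1
the	3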

If you find it difficult to write these scripts or configurations, they are all available on GitHub at https://github.com/netxillon/hadoop/tree/master/map_scripts.