Hadoop Use

Here are the basic steps to follow to run a map (and optionally reduce) job.

I'll refer to the Hadoop installation directory as $HADOOP_HOME (/usr/local/hadoop on paleo).
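For convenience you can set this variable once in your shell before following the steps (the path below is the one used on paleo; adjust it to your own installation):

# on paleo; change to match your installation
export HADOOP_HOME=/usr/local/hadoop

The steps are: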
  • Format your Hadoop distributed filesystem (HDFS):
$HADOOP_HOME/bin/hadoop namenode -format
  • Start up all the daemons (things will be a bit trickier with a full working cluster...):
$HADOOP_HOME/bin/start-all.sh
  • Copy your local input file to HDFS as hdfsFile:

(Remember that your home directory in HDFS is /user/YOURNAME, so the full HDFS path of the copied file will be /user/YOURNAME/hdfsFile.)
$HADOOP_HOME/bin/hadoop dfs -copyFromLocal inputFile hdfsFile
  • You can list the contents of your HDFS home directory and check that the file you copied exists:
$HADOOP_HOME/bin/hadoop dfs -ls
  • Run your mapper on hdfsFile (the file copied above) and put the results in outputDirName;

this is a map-only job: we don't need a reduce task because we don't want the data to be sorted (a minimal example mapper is sketched right after the command):

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/contrib/hadoop-0.15.3-streaming.jar \
   -file yourScriptPath -mapper "yourScriptName" \
   -jobconf mapred.reduce.tasks=0 \
   -input hdfsFile -output outputDirName
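With Hadoop streaming the mapper can be any executable that reads input records from stdin and writes output records to stdout. A minimal sketch of such a script (a hypothetical stand-in for yourScriptName, not the actual allTools.sh used in the sample session below):

#!/bin/bash
# Hypothetical streaming mapper: Hadoop feeds input lines on stdin
# and collects whatever is printed on stdout as the map output.
# This one just emits "<line length><TAB><line>" for every input line;
# replace the loop body with your own per-line processing.
while IFS= read -r line; do
  printf '%s\t%s\n' "${#line}" "$line"
done

Remember to make the script executable (chmod +x) before submitting the job; the -file option ships it to the task nodes.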

If everything worked properly, your output is placed in outputDirName, split into several files (one for each map task performed by Hadoop).

The output of dfs -ls outputDirName looks like:

Found 4 items
/user/YOURNAME/outputDirName/part-00000   <r 1>   SIZE CREATION_TIME
/user/YOURNAME/outputDirName/part-00001   <r 1>   ...  ...
/user/YOURNAME/outputDirName/part-00002   <r 1>   ...  ...
/user/YOURNAME/outputDirName/part-00003   <r 1>   ...  ...

To read and recombine these files you can use the dfs -cat option (see /project/piqasso/workspaces/tamberi/catter.sh).
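A merge script in that spirit could look like the sketch below (a hypothetical reimplementation, not the contents of the real catter.sh):

#!/bin/bash
# Hypothetical merge script in the spirit of catter.sh:
# concatenate every part-* file of an HDFS output directory into one local file.
# Usage: ./merge-parts.sh <hdfsOutputDir> [localFile]
HDFS_DIR="$1"
LOCAL_FILE="${2:-$HOME/hadoop-out.txt}"
# hadoop dfs -cat expands the part-* glob against HDFS and streams
# every part file to stdout, which we redirect to a local file
$HADOOP_HOME/bin/hadoop dfs -cat "$HDFS_DIR/part-*" > "$LOCAL_FILE"
echo "file(s) merged into $LOCAL_FILE"

(The default output name here just mimics the sample session below; the real catter.sh may behave differently.)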

To copy a single file from HDFS to the local filesystem do:

$HADOOP_HOME/bin/hadoop dfs -copyToLocal pathToHdfsFile pathToLocalFile 

To stop the Hadoop cluster:

$HADOOP_HOME/bin/stop-all.sh

Sample Session

In the output below, we use DIR to stand for /project/piqasso/workspaces/tamberi.

Formatting:

$ $HADOOP_HOME/bin/hadoop namenode -format
08/02/24 19:03:17 INFO dfs.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = paleo.di.unipi.it/131.114.3.155
STARTUP_MSG:   args = [-format]
************************************************************/
08/02/24 19:03:18 INFO dfs.Storage: 
  Storage directory /tmp/hadoop-tamberi/dfs/name has been
  successfully formatted.
08/02/24 19:03:18 INFO dfs.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at paleo.di.unipi.it/131.114.3.155
************************************************************/

Starting Hadoop:

$ $HADOOP_HOME/bin/start-all.sh
starting namenode, logging to
  ~tamberi/hadoop/bin/../logs/hadoop-tamberi-namenode-paleo.di.unipi.it.out
localhost: starting datanode, logging to
   ~tamberi/hadoop/bin/../logs/hadoop-tamberi-datanode-paleo.di.unipi.it.out
localhost: starting secondarynamenode, logging to
  ~tamberi/hadoop/bin/../logs/hadoop-tamberi-secondarynamenode-paleo.di.unipi.it.out
starting jobtracker, logging to
  ~tamberi/hadoop/bin/../logs/hadoop-tamberi-jobtracker-paleo.di.unipi.it.out
localhost: starting tasktracker, logging to
  ~tamberi/hadoop/bin/../logs/hadoop-tamberi-tasktracker-paleo.di.unipi.it.out

Copying local file to HDFS:

$ $HADOOP_HOME/bin/hadoop dfs -copyFromLocal $HOME/hadoopTest/wiki.xml wiki.xml

Listing HDFS content:

$ $HADOOP_HOME/bin/hadoop dfs -ls
Found 1 items
/user/tamberi/wiki.xml  <r 1>   1000000 2008-02-24 19:05

Launching the job (DIR/allTools.sh is a simple mapper script):

$ $HADOOP_HOME/bin/hadoop jar \
   $HADOOP_HOME/contrib/hadoop-0.15.3-streaming.jar \
   -file DIR/allTools.sh \
   -mapper "allTools.sh" -jobconf mapred.reduce.tasks=0 \
   -input /user/tamberi/wiki.xml -output wiki-output
additionalConfSpec_:null
null=@@@userJobConfProps_.get(stream.shipped.hadoopstreaming
packageJobJar: [/project/piqasso/workspaces/tamberi/allTools.sh,
   /tmp/hadoop-tamberi/hadoop-unjar45697/] []
   /tmp/streamjob45698.jar tmpDir=null
08/02/24 19:09:27 INFO mapred.FileInputFormat: Total input paths to process : 1
08/02/24 19:09:28 INFO streaming.StreamJob: getLocalDirs():
   [/tmp/hadoop-tamberi/mapred/local]
08/02/24 19:09:28 INFO streaming.StreamJob: Running job: job_200802241904_0001
08/02/24 19:09:28 INFO streaming.StreamJob: To kill this job, run:
08/02/24 19:09:28 INFO streaming.StreamJob: ~tamberi/hadoop/bin/../bin/hadoop job
    -Dmapred.job.tracker=localhost:55556 -kill job_200802241904_0001
08/02/24 19:09:28 INFO streaming.StreamJob: Tracking URL:
    http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_200802241904_0001
08/02/24 19:09:29 INFO streaming.StreamJob:  map 0%  reduce 0%
08/02/24 19:09:38 INFO streaming.StreamJob:  map 25%  reduce 0%
08/02/24 19:09:42 INFO streaming.StreamJob:  map 50%  reduce 0%
08/02/24 19:09:45 INFO streaming.StreamJob:  map 75%  reduce 0%
08/02/24 19:09:55 INFO streaming.StreamJob:  map 100%  reduce 0%
08/02/24 19:09:58 INFO streaming.StreamJob:  map 100%  reduce 100%
08/02/24 19:09:58 INFO streaming.StreamJob: Job complete: job_200802241904_0001
08/02/24 19:09:58 INFO streaming.StreamJob: Output: wiki-output

Listing output directory content:

$ $HADOOP_HOME/bin/hadoop dfs -ls wiki-output
Found 4 items
/user/tamberi/wiki-output/part-00000    <r 1>   118330  2008-02-24 19:09
/user/tamberi/wiki-output/part-00001    <r 1>   12056   2008-02-24 19:09
/user/tamberi/wiki-output/part-00002    <r 1>   26952   2008-02-24 19:09
/user/tamberi/wiki-output/part-00003    <r 1>   137785  2008-02-24 19:09

Merging the output files into a single one:

$ ../workspace/catter.sh wiki-output
file(s) merged into /home/medialab/tamberi/hadoop-out.txt

Stopping Hadoop:

$ $HADOOP_HOME/bin/stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode

Problems

There's an annoying bug related to HDFS: copying a file can throw this exception:

copyFromLocal: java.io.IOException: File filePath could only
be replicated to x nodes, instead of y

The only way to get around that seems to be this:

$HADOOP_HOME/bin/stop-all.sh
rm -Rf /tmp/hadoop-$USER*
$HADOOP_HOME/bin/hadoop namenode -format
$HADOOP_HOME/bin/start-all.sh

This erases the HDFS data from disk and then recreates an empty filesystem. Obviously you'll lose everything stored in HDFS, so be careful!
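Since the operation is destructive, you may want to wrap it in a small helper script that asks for confirmation first (a hypothetical convenience, not something that ships with Hadoop):

#!/bin/bash
# Hypothetical helper: wipe and re-create the local single-node HDFS.
# WARNING: this destroys everything currently stored in HDFS.
read -p "Really erase all HDFS data for $USER? [y/N] " answer
if [ "$answer" = "y" ]; then
    $HADOOP_HOME/bin/stop-all.sh
    rm -Rf /tmp/hadoop-$USER*
    $HADOOP_HOME/bin/hadoop namenode -format
    $HADOOP_HOME/bin/start-all.sh
else
    echo "Aborted."
fi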