
Hadoop on Mac OS X

''From http://www.infosci.cornell.edu/hadoop/mac.html''

This guide is written to help Cornell students using Mac OS X 10.5 set up a development environment for working with [http://www.hadoop.org/ Hadoop] and run Hadoop jobs on the Cornell Center for Advanced Computing (CAC) Hadoop cluster. It walks you through compiling and running a simple example Hadoop job. More information is available in the official [http://hadoop.apache.org/core/docs/current/mapred_tutorial.html#Example%3A+WordCount+v1.0 Hadoop Map-Reduce Tutorial].

The overall process of developing a Hadoop job is as follows:

# Install Hadoop on your development machine (personal or lab computer)
# Compile the Hadoop job and create a JAR file
# Run the Hadoop job JAR file on your development machine, for testing and debugging
# Run the Hadoop job JAR file on the CAC Hadoop cluster, for production

==Installing Hadoop==

This section shows you how to download Hadoop and prepare it for use on a Mac. Note: for Hadoop versions up to and including 0.17.2, you must use Java 1.5; the job will fail under Java 1.6. The instructions below take this into account.

1. Obtain the latest stable Hadoop release. The file is named hadoop-version.tar.gz and can be obtained [http://apache.freelamp.com/hadoop/core/stable/ here]. Unzip the downloaded file and place the resulting folder on your Desktop (or another location).

2. To make Hadoop run on a Mac, you will need to edit two files. In your favorite text editor, open the file conf/hadoop-env.sh within the Hadoop folder you just unzipped. Find the following line in the file:

<source lang="bash"># export JAVA_HOME=/usr/lib/j2sdk1.5-sun</source>

and change it to:

<source lang="bash">export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/</source>

Save the file. Second, open the file bin/hadoop within the Hadoop folder in your favorite text editor. Search the file for the following line:

<source lang="bash">JAVA=$JAVA_HOME/bin/java</source>

and change it to:

<source lang="bash">JAVA=$JAVA_HOME/Commands/java</source>

Save the file and exit the editor. You have now set up Hadoop for development purposes on your computer.

==Compiling a Hadoop job into a JAR file==

This section guides you through compiling the [http://hadoop.apache.org/core/docs/current/mapred_tutorial.html#Example%3A+WordCount+v1.0 WordCount] example from the official Hadoop Map-Reduce Tutorial. This section assumes you are using the [http://www.eclipse.org/ Eclipse] IDE; if not, you should be able to adapt these instructions for your IDE.

'''1. Create a new Java Project'''<br />
Launch Eclipse, and from the File menu select New, then use the wizard to create a new Java Project. Enter a project name, in this example WordCount. Make sure that the selected JRE is version 1.5.0. Click Finish.

'''2. Add the Hadoop library to the project'''<br />
In Eclipse, right-click (control-click) on your project, go to Build Path, then Add External Archives. Browse to the Hadoop folder on your desktop, select the file hadoop-version-core.jar, and click Open.

'''3. Add source code file'''<br />
From the File menu, select New, then File. Select the parent folder WordCount/src (make sure this is right, or you will encounter trouble when exporting the JAR file below) and name the new file WordCount.java, then click Finish. Copy [[Hadoop_on_Mac_OS_X_WordCount_java|this code]], paste it into the new file, and save it; Eclipse will compile the file as soon as you save it.
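
The linked page is the authoritative version. For orientation, here is a lightly commented sketch of that classic WordCount, written against the old org.apache.hadoop.mapred API that Hadoop releases of this era (0.17.x) use; if any detail differs from the linked code, prefer the linked code.

<source lang="java">
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCount {

  // Mapper: emits (word, 1) for every whitespace-separated token of each input line.
  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {
      StringTokenizer tokenizer = new StringTokenizer(value.toString());
      while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        output.collect(word, one);
      }
    }
  }

  // Reducer: sums the counts collected for each word.
  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output,
                       Reporter reporter) throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);  // the reducer doubles as a local combiner
    conf.setReducerClass(Reduce.class);

    // Input and output paths come from the command-line arguments
    // you pass when running the JAR.
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}
</source>

Note how main() wires the mapper, combiner, and reducer into a JobConf: this is also where job parameters such as the number of reduce tasks are set, which becomes relevant when running on the cluster later in this guide.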
'''4. Export JAR file'''<br />
From the File menu, select Export. Under Java, select JAR file and click Next. Select all resources to be exported; in this case, select the entire WordCount project. Make sure the checkbox for exporting generated class files is checked. Select an export destination for your JAR file; you can use your Desktop or some other directory. For simplicity, name the file WordCount.jar and export it to your Desktop.

==Running a Hadoop job on your development machine==

This section shows you how to run your job on your own machine, for testing purposes. Hadoop will run in "standalone mode", which means that it runs within a single process and does not take advantage of any parallel processing. This will be much slower than running on the cluster, so you may want to use a reduced data set for testing.

'''1. Create or obtain test data'''<br />
For this example, the input data will be this web page. Copy this entire web page and, using your favorite text editor, save it as a plain text file named testing.txt. Place this file within a folder called input on your Desktop.

'''2. Run the job'''<br />
First, go to the command line. (To access the command line, go to the Finder, then "Applications", then "Utilities", and launch "Terminal".) If you are not familiar with the UNIX command line, here is [http://www.cgi101.com/help/unixhelp.html a basic guide]. Change into your Hadoop directory, ~/Desktop/hadoop-0.17.2 or similar, and execute the following command:

<source lang="bash">./bin/hadoop jar ~/Desktop/WordCount.jar WordCount ~/Desktop/input ~/Desktop/output</source>

You may need to alter the paths if any of the files were saved to different places.

'''3. Retrieve the results'''<br />
The results have been written to a new folder called output on your Desktop. There should be one file, named part-00000, which lists all the words on this web page along with their occurrence counts (one word and its count per line, separated by a tab). Note that before running Hadoop again, you will need to delete the entire output folder, since Hadoop will not do this for you.

==Running a Hadoop job on the CAC cluster==

This section shows you how to take the JAR file you created above, along with the test data, and run the job on the CAC cluster.

'''1. Obtain a CAC account'''<br />
If you are taking a course which requires the use of the cluster, the instructor should arrange the CAC account for you. If you are using the cluster for research, the Principal Investigator will add you to their CAC project. In either case, you will receive an email at your Cornell email address with your username and password for the CAC. You will then need to log in and change your password to something secure and easy to remember. The easiest way to do this is via [http://www.tc.cornell.edu/Services/Support/getting_started.access.how_to%27s.htm#remote Remote Desktop Connection]; you can also use SSH.

'''2. Copy JAR and input files to CAC'''<br />
Go to the Finder, then select Connect to Server from the Go menu. Enter the following path:

<source lang="bash">smb://cacfs01.cac.cornell.edu/netid</source>

replacing netid with your CAC user name. Enter your CAC username and password when prompted. You should then see a new Finder window showing the contents of your CAC home directory. Please note that this directory is only accessible from within the Cornell firewall; if you wish to access it from off-campus, you will first need to [http://www.cit.cornell.edu/services/vpn/ VPN into Cornell].

Drag and drop your WordCount.jar file and input folder from your Desktop into your CAC home directory.
'''3. Use SSH to connect to the job tracker node'''<br />
Run the following from the command line:

<source lang="bash">ssh [email protected]</source>

where netid is replaced by your CAC username. Note: the address starts with doubleu-el-zero-one, NOT doubleu-zero-one-zero. Enter your CAC password when prompted. Once you are SSH'd in, you will be placed in your CAC home directory, the same directory that you previously copied files into. You can run "ls" to list the files and ensure your files were copied over successfully.

'''4. Copy input files into HDFS'''<br />
Make a directory in the Hadoop Distributed File System (dfs) for your input files. You can see the list of commands available for working on the dfs by executing the following:

<source lang="bash">/usr/local/hadoop/bin/hadoop dfs</source>

More information about the commands is available [http://hadoop.apache.org/core/docs/r0.17.2/hdfs_shell.html here]. Note that to execute any hadoop dfs command, you must type <source lang="bash">/usr/local/hadoop/bin/hadoop dfs -command</source> where command is the dfs command to run.

To copy input data files into dfs from your home directory, do the following:
<source lang="bash">/usr/local/hadoop/bin/hadoop dfs -copyFromLocal input .</source>

'''5. Run your job'''<br />
Perform the following:

<source lang="bash">/usr/local/hadoop/bin/hadoop jar WordCount.jar WordCount input output</source>

This will place the result files in a directory called "output" in the dfs. You can then copy these files back to your CAC home directory by executing the following:

<source lang="bash">/usr/local/hadoop/bin/hadoop dfs -copyToLocal output output</source>

Now you can retrieve the output files in the same fashion that you copied the input files to your home directory. Note that one output file is produced for each reduce task. The WordCount example uses the cluster's configured default number of reduce tasks, so do not be surprised to see 10-20 output files (the exact number depends on the number of cluster nodes running and their configuration). You can control this programmatically via the setNumReduceTasks() method of the JobConf class in the Hadoop API; a one-line sketch follows. Refer to the official [http://hadoop.apache.org/core/docs/current/mapred_tutorial.html Map-Reduce Tutorial] for more details on running map-reduce jobs.
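
As a sketch (assuming the WordCount code shown earlier in this guide), forcing a single output file takes one line in main(), after the JobConf is constructed:

<source lang="java">
// In WordCount.main(), after "JobConf conf = new JobConf(WordCount.class);":
conf.setNumReduceTasks(1);  // run one reduce task, producing a single part-00000 file
</source>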
When you are finished with the output files, you should delete the output directory. Hadoop will not automatically do this for you, and it will throw an error if you run it while there is an old output directory. To do this, execute:
<source lang="bash">/usr/local/hadoop/bin/hadoop dfs -rmr output</source>
