hadoop - Why is my map reduce job running sequentially? -


I have a 4-node cluster with 96 GB of memory in total.

My input is split across 100 files, and I have set the job for 100 maps. But from the logs, the mappers appear to be running one after another:

    [2014/10/08 15:22:36] INFO: Total input paths to process: 100
    [2014/10/08 15:22:36] INFO: number of splits: 100
    [2014/10/08 15:22:36] INFO: Starting task: attempt_local1244628585_0001_m_000000_0
    [2014/10/08 15:22:36] INFO: Submitting tokens for job: job_local1244628585_0001
    [2014/10/08 15:22:36] INFO: Processing split: hdfs://.../input/in10:0+2
    [2014/10/08 15:22:38] INFO: Task attempt_local1244628585_0001_m_000000_0 is done. And is in the process of committing
    [2014/10/08 15:22:38] INFO: Task attempt_local1244628585_0001_m_000000_0 is allowed to commit now
    [2014/10/08 15:22:38] INFO: Task attempt_local1244628585_0001_m_000000_0 saved output to hdfs://.../output/_temporary/0/task_local1244628585_0001_m_000000
    [2014/10/08 15:22:38] INFO: hdfs://.../input/in10:0+2
    [2014/10/08 15:22:38] INFO: Task 'attempt_local1244628585_0001_m_000000_0' done.
    [2014/10/08 15:22:38] INFO: Finishing task: attempt_local1244628585_0001_m_000000_0
    [2014/10/08 15:22:38] INFO: Starting task: attempt_local1244628585_0001_m_000001_0

....

And so on. Basically, it completes one task before starting the next.

You are running in local mode:

    [2014/10/08 15:22:36] INFO: Starting task: attempt_**local**1244628585_0001_m_000000_0

Depending on your Hadoop version, you need to configure either the JobTracker address (MRv1) or the ResourceManager address (YARN). Without one of these, the job falls back to the LocalJobRunner, which executes all tasks sequentially in a single JVM, which is exactly what your logs show.
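For a YARN (Hadoop 2.x) cluster, the usual cause is that `mapreduce.framework.name` was never set, so it defaults to `local`. A minimal sketch of the relevant settings, where the hostname `rmhost` is a placeholder for your actual ResourceManager node:

    <!-- mapred-site.xml: submit jobs to YARN instead of the LocalJobRunner -->
    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>

    <!-- yarn-site.xml: where the ResourceManager runs ("rmhost" is a placeholder) -->
    <configuration>
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>rmhost</value>
      </property>
    </configuration>

On MRv1, the equivalent is `mapred.job.tracker` in mapred-site.xml (e.g. a `host:port` pair); if it is left at its default value `local`, you get the sequential behaviour shown in your logs.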
