Ambari: DataNode fails to start when running Action > Start All

> DataNode log
/var/log/hadoop/hdfs/hadoop-hdfs-datanode-hdfs.sunshiny.log

2015-09-11 09:27:47,579 FATAL datanode.DataNode (BPServiceActor.java:run(807)) - Initialization failed for Block pool <registering> (Datanode Uuid 371c62c1-78b5-4341-8e10-ccf261b80d5f) service to hdfs.sunshiny/192.168.1.210:8020. Exiting.
java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:477)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1383)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1348)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:221)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:795)
        at java.lang.Thread.run(Thread.java:745)
2015-09-11 09:27:47,580 WARN  datanode.DataNode (BPServiceActor.java:run(828)) - Ending block pool service for: Block pool <registering> (Datanode Uuid 371c62c1-78b5-4341-8e10-ccf261b80d5f) service to hdfs.sunshiny/192.168.1.210:8020
2015-09-11 09:27:47,681 INFO  datanode.DataNode (BlockPoolManager.java:remove(103)) - Removed Block pool <registering> (Datanode Uuid 371c62c1-78b5-4341-8e10-ccf261b80d5f)
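The "All specified directories are failed to load" error above is most often caused by a clusterID mismatch: the NameNode was re-formatted and got a new clusterID, but the DataNode's storage directories still carry the old one. A quick way to compare the two is to read the `VERSION` files; the paths below are an assumption based on a typical HDP layout, so check `dfs.namenode.name.dir` and `dfs.datanode.data.dir` in hdfs-site.xml for the actual directories.

```shell
# Assumed paths from a typical HDP layout; adjust to your hdfs-site.xml settings.
NN_VERSION=/data/hadoop/hdfs/namenode/current/VERSION
DN_VERSION=/data/hadoop/hdfs/data/current/VERSION

# Extract the clusterID from each VERSION file.
nn_id=$(grep '^clusterID=' "$NN_VERSION" | cut -d= -f2)
dn_id=$(grep '^clusterID=' "$DN_VERSION" | cut -d= -f2)

if [ "$nn_id" = "$dn_id" ]; then
  echo "clusterIDs match: $nn_id"
else
  echo "clusterID mismatch: namenode=$nn_id datanode=$dn_id"
fi
```

If the two IDs differ, the DataNode will refuse to join that block pool, which matches the FATAL entry in the log above.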


> Ambari-agent log
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 153, in <module>
    DataNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 218, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 47, in start
    datanode(action="start")
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 58, in datanode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 271, in service
    environment=hadoop_env_exports
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 157, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 258, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-hdfs.sunshiny.out
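Note that the agent failure above only reports the exit code of `hadoop-daemon.sh` and points at the `.out` file; the actual cause (the FATAL entry shown at the top) is written to the DataNode's `.log` file. To see the most recent cause directly:

```shell
# Path taken from the log section above; the "hdfs.sunshiny" part of the
# filename is this node's hostname and will differ on other nodes.
tail -n 50 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-hdfs.sunshiny.log
```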



# Format the HDFS NameNode and restart the services
1) Ambari > Action > Stop All

2) Delete the existing HDFS storage directories (adjust to the data directories configured on your cluster; this destroys all HDFS data)
[hdfs@hdfs ~]$ rm -rf /app/hadoop/*
[hdfs@hdfs ~]$ rm -rf /data/hadoop/*

3) hdfs namenode -format


4) Ambari > Action > Start All
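If wiping the data directories is not acceptable (it deletes all HDFS blocks), the clusterID mismatch can sometimes be fixed in place instead, by copying the NameNode's clusterID into each DataNode's `VERSION` file. This is a hedged sketch, not the procedure from this post; the paths are assumptions based on a typical HDP layout.

```shell
# Assumed paths from a typical HDP layout; check hdfs-site.xml for the real ones.
NN_VERSION=/data/hadoop/hdfs/namenode/current/VERSION
DN_VERSION=/data/hadoop/hdfs/data/current/VERSION

# Read the NameNode's current clusterID ...
nn_id=$(grep '^clusterID=' "$NN_VERSION" | cut -d= -f2)

# ... and write it into the DataNode's VERSION file in place.
sed -i "s/^clusterID=.*/clusterID=$nn_id/" "$DN_VERSION"
```

Run this on every DataNode host, then restart the DataNodes from Ambari.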

% If the same error repeats when starting the NameNode, a system reboot is required due to the ulimit limit.

※ The above is my own summary, drawing on various references and personal experience.
   If you spot incorrect information or anything that needs supplementing, a comment or e-mail would be a great help.
Sep 11, 2015 09:50

