Installing HUE on HDP (Hortonworks Data Platform)

Reference URL:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.6/bk_installing_manually_book/content/ch_installing_hue_chapter.html

 

# HDP Environment Configuration

Note: if the cluster was built automatically with Ambari, make these changes in the Ambari Web interface (editing the files directly in a terminal may conflict with Ambari-managed configuration).

 

1. HDFS menu

1-1) hdfs-site.xml settings

    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>

 

> Ambari Web
HDFS > Configs > Advanced > General > WebHDFS enabled
Confirm the checkbox is checked (checked = enabled); it is checked by default.
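
To confirm WebHDFS actually responds, a quick REST call against the NameNode helps. A minimal sketch, assuming the NameNode web UI runs at hdfs.sunshiny:50070 (the same host/port used in hue.ini below); adjust to your cluster:

# List the HDFS root directory over WebHDFS as the hue user.
# A JSON "FileStatuses" response means WebHDFS is enabled and reachable.
curl -s "http://hdfs.sunshiny:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hue"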

1-2) core-site.xml settings

    <property>
        <name>hadoop.proxyuser.hue.hosts</name>
        <value>*</value>
    </property>

    <property>
        <name>hadoop.proxyuser.hue.groups</name>
        <value>*</value>
    </property>

    <property>
        <name>hadoop.proxyuser.hcat.groups</name>
        <value>*</value>
    </property>

    <property>
        <name>hadoop.proxyuser.hcat.hosts</name>
        <value>*</value>
    </property>

 

> Ambari Web
HDFS > Configs > Advanced > Custom core-site
Add the options above as custom properties.
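
Once the proxyuser options are in place and HDFS has been restarted, impersonation can be spot-checked with the WebHDFS doas parameter. A minimal sketch; the end user name admin below is only an example:

# Request a file status as user "admin", proxied through the hue user.
# A 403 "Unauthorized connection for super-user" response means the
# hadoop.proxyuser.hue.* settings have not taken effect yet.
curl -s "http://hdfs.sunshiny:50070/webhdfs/v1/user/admin?op=GETFILESTATUS&user.name=hue&doas=admin"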

2. Hive menu

2-1) webhcat-site.xml settings

    <property>
        <name>webhcat.proxyuser.hue.hosts</name>
        <value>*</value>
    </property>

    <property>
        <name>webhcat.proxyuser.hue.groups</name>
        <value>*</value>
    </property>

 

> Ambari Web
Hive > Configs > Advanced > Custom webhcat-site
Add the options above as custom properties.
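
Before wiring Hue to WebHCat, its status endpoint gives a quick health check. A minimal sketch, assuming WebHCat runs at hdfs.sunshiny:50111 as in the [hcatalog] section of hue.ini below:

# WebHCat (Templeton) health check; a healthy server answers
# {"status":"ok","version":"v1"}.
curl -s "http://hdfs.sunshiny:50111/templeton/v1/status"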



2-2) hive-site.xml settings

    <property>
        <name>hive.server2.enable.impersonation</name>
        <value>true</value>
    </property>

 

> Ambari Web
Hive > Configs > Advanced > Custom hiveserver2-site
Add the option above (note: Custom hiveserver2-site, not Custom hive-site).
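
With impersonation enabled, HiveServer2 runs queries as the connecting end user instead of the hive service account. One way to exercise this from the command line is the JDBC proxy-user parameter; a sketch, where the user name alice is hypothetical:

# Connect as hue but ask HiveServer2 to impersonate "alice".
# If proxying is misconfigured, the connection attempt is rejected.
beeline -u "jdbc:hive2://hdfs.sunshiny:10000/default;hive.server2.proxy.user=alice" -n hue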


3. Oozie menu

3-1) oozie-site.xml settings

    <property>
        <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
        <value>*</value>
    </property>

    <property>
        <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
        <value>*</value>
    </property>

 

> Ambari Web
Oozie > Configs > Custom oozie-site
Add the options above as custom properties.
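
After restarting Oozie, its REST admin endpoint confirms the server is healthy before Hue tries to talk to it. A minimal sketch, assuming the oozie_url used in the [liboozie] section of hue.ini below:

# A healthy Oozie server answers {"systemMode":"NORMAL"}.
curl -s "http://hdfs.sunshiny:11000/oozie/v1/admin/status"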

4. Install HUE

> Install command

RHEL/CentOS/Oracle Linux

yum install hue
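
The hue package is served from the HDP repository that Ambari configured, so it can help to confirm the package is visible before and after installing. A minimal sketch:

# Confirm the hue packages are visible in the configured yum repositories.
yum list available "hue*"

# After installation, verify which hue packages were installed.
rpm -qa | grep '^hue'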

 

 

5. HUE configuration (hue.ini)

root@hdfs:/etc/hue/conf# pwd
/etc/hue/conf
root@hdfs:/etc/hue/conf# vi hue.ini

# Only the key settings are summarized below.

# Webserver listens on this address and port
  http_host=0.0.0.0
  http_port=8000

  # Time zone name
  time_zone=Asia/Seoul

###########################################################################
# Settings to configure your Hadoop cluster.
###########################################################################

# Note: if the services are installed across multiple cluster nodes,
# each host name below must be set to the node running that service.
# Here, hdfs, mapreduce, yarn, hive, oozie, etc. were all installed on
# a single host, so only one host is specified.

[hadoop]

  # Configuration for HDFS NameNode
  # ------------------------------------------------------------------------
  [[hdfs_clusters]]

    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://hdfs.sunshiny:8020

      # Use WebHdfs/HttpFs as the communication mechanism. To fallback to
      # using the Thrift plugin (used in Hue 1.x), this must be uncommented
      # and explicitly set to the empty value.
      webhdfs_url=http://hdfs.sunshiny:50070/webhdfs/v1/

      ## security_enabled=true

      # Default umask for file and directory creation, specified in an octal value.
      ## umask=022

  [[yarn_clusters]]

    [[[default]]]
      # Whether to submit jobs to this cluster
      submit_to=true

      ## security_enabled=false

      # Resource Manager logical name (required for HA)
      ## logical_name=

      # URL of the ResourceManager webapp address (yarn.resourcemanager.webapp.address)
      resourcemanager_api_url=http://hdfs.sunshiny:8088

      # URL of Yarn RPC address (yarn.resourcemanager.address)
      resourcemanager_rpc_url=http://hdfs.sunshiny:8050

      # URL of the ProxyServer API
      proxy_api_url=http://hdfs.sunshiny:8088

      # URL of the HistoryServer API
      history_server_api_url=http://hdfs.sunshiny:19888

      # URL of the AppTimelineServer API
      app_timeline_server_api_url=http://hdfs.sunshiny:8188

      # URL of the NodeManager API
      node_manager_api_url=http://hdfs.sunshiny:8042

      # HA support by specifying multiple clusters
      # e.g.
      # [[[ha]]]
        # Enter the host on which you are running the failover Resource Manager
        # resourcemanager_api_url=http://failover-host:8088
        # logical_name=failover
        # submit_to=True

###########################################################################
# Settings to configure liboozie
###########################################################################

[liboozie]
  # The URL where the Oozie service runs on. This is required in order for
  # users to submit jobs.
  oozie_url=http://hdfs.sunshiny:11000/oozie

  ## security_enabled=true

  # Location on HDFS where the workflows/coordinator are deployed when submitted.
  remote_deployement_dir=/user/hue/oozie/deployments

###########################################################################
# Settings to configure the Oozie app
###########################################################################

[oozie]
  # Location on local FS where the examples are stored.
  ## local_data_dir=..../examples

  # Location on local FS where the data for the examples is stored.
  ## sample_data_dir=...thirdparty/sample_data

  # Location on HDFS where the oozie examples and workflows are stored.
  remote_data_dir=/user/hue/oozie/workspaces

  # Share workflows and coordinators information with all users. If set to false,
  # they will be visible only to the owner and administrators.
  share_jobs=true

  # Maximum of Oozie workflows or coordinators to retrieve in one API call.
  ## oozie_jobs_count=100

  # Comma separated list of parameters which should be obfuscated in Oozie job configuration.
  ## oozie_obfuscate_params=password,pwd

  # Maximum count of actions of Oozie coordinators to be shown on the one page.
  ## oozie_job_actions_count=50

###########################################################################
# Settings to configure Beeswax
###########################################################################

[beeswax]

  # Host where Hive server Thrift daemon is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  hive_server_host=hdfs.sunshiny

  # Port where HiveServer2 Thrift server runs on.
  hive_server_port=10000

  # Hive configuration directory, where hive-site.xml is located
  hive_conf_dir=/etc/hive/conf

  # > Note: on an HDP distributed cluster, if the hive conf directory is not
  #   refreshed properly on updates, change /etc/hive/conf/ to
  #   /etc/hive/conf.server/ (check that the conf.server directory exists
  #   before changing).
  # > If /etc/hive/conf/ is not refreshed properly, the MySQL metastore you
  #   configured and a local Derby metastore end up being used at the same
  #   time, and errors occur when querying table information (e.g. describe).

  # Timeout in seconds for thrift calls to Hive service
  ## server_conn_timeout=120

  # Set a LIMIT clause when browsing a partitioned table.
  # A positive value will be set as the LIMIT. If 0 or negative, do not set any limit.
  ## browse_partitioned_table_limit=250

  # A limit to the number of rows that can be downloaded from a query.
  # A value of -1 means there will be no limit.
  # A maximum of 65,000 is applied to XLS downloads.
  ## download_row_limit=1000000

  # Hue will try to close the Hive query when the user leaves the editor page.
  # This will free all the query resources in HiveServer2, but also make its results inaccessible.
  ## close_queries=false

  # Option to show execution engine choice.
  ## show_execution_engine=False

  # "Go to column pop up on query result page. Set to false to disable"
  ## go_to_column=true

###########################################################################
# Settings to configure Job Browser
###########################################################################

[jobbrowser]
  # Share submitted jobs information with all users. If set to false,
  # submitted jobs are visible only to the owner and administrators.
  share_jobs=true

###########################################################################
# Settings for the User Admin application
###########################################################################

[useradmin]
  # The name of the default user group that users will be a member of
  default_user_group=hadoop
  default_username=hue
  default_user_password=1111

[hcatalog]
  templeton_url=http://hdfs.sunshiny:50111/templeton/v1/
  security_enabled=false

[about]
  tutorials_installed=false

[pig]
  udf_path="/tmp/udfs"

6. Start HUE

/etc/init.d/hue start

 

 

#. Default HUE account: hue / hue
http://[hue install host]:8000/
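
Before opening a browser, a quick check that the web server came up can save a round trip; log file names and locations may vary by Hue version and package, so treat the paths below as a sketch:

# The Hue front page should answer with HTTP 200, or a redirect (302)
# to the login page.
curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:8000/"

# If it does not respond, check the Hue logs (assumed location).
tail -n 50 /var/log/hue/supervisor.log /var/log/hue/runcpserver.log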

※ The content above was compiled from various references together with my own subjective notes.
   If you find incorrect information or anything that needs improvement, please let me know in a comment or by email; it would be a great help.

"BigData / HUE" 분류의 다른 글

HUE - HDP 환경 또는 일반 HCatalog 조회 오류 (0)2015/09/10
09 10, 2015 19:20 09 10, 2015 19:20


Trackback URL : http://develop.sunshiny.co.kr/trackback/1032

  1. Going Here

    Tracked from Going Here 03 30, 2020 14:19 Delete

    Get moving now with with cabo san lucas mexico beach that are currently available and currently available for today only!

Leave a comment
[로그인][오픈아이디란?]
오픈아이디로만 댓글을 남길 수 있습니다


Recent Posts

  1. HDFS - Python Encoding 오류 처리
  2. HP - Vertica ROS Container 관련 오류...
  3. HDFS - Hive 실행시 System Time 오류
  4. HP - Vertica 사용자 쿼리 이력 테이블...
  5. Client에서 HDFS 환경의 데이터 처리시...

Recent Comments

  1. 안녕하세요^^ 배그핵
  2. 안녕하세요^^ 도움이 되셨다니, 저... sunshiny
  3. 정말 큰 도움이 되었습니다.. 감사합... 사랑은
  4. 네, 안녕하세요. 댓글 남겨 주셔서... sunshiny
  5. 감사합니다 많은 도움 되었습니다!ㅎㅎ 프리시퀸스

Recent Trackbacks

  1. tenant improvement contractor tenant improvement contractor 30 03
  2. construction management experts construction management experts 30 03
  3. Going Here Going Here 30 03
  4. cabo san lucas vacation rentals cabo san lucas vacation rentals 30 03
  5. los cabos los cabos 30 03

Calendar

«   03 2020   »
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30 31        

Bookmarks

  1. 위키피디아
  2. MysqlKorea
  3. 오라클 클럽
  4. API - Java
  5. Apache Hadoop API
  6. Apache Software Foundation
  7. HDFS 생태계 솔루션
  8. DNSBL - Spam Database Lookup
  9. Ready System
  10. Solaris Freeware
  11. Linux-Site
  12. 윈디하나의 솔라나라

Site Stats

TOTAL 2897138 HIT
TODAY 166 HIT
YESTERDAY 1376 HIT