Hadoop 2.6.0 Single Node Cluster Setup on Ubuntu 14.10
Original link

I made a few small corrections of my own below.
(The installation is on Ubuntu Server running inside VirtualBox.)
The Ubuntu version is 14.04.
The hostname is master.
The username is hadoop.
I also disabled IPv6.
Disabling IPv6
Hadoop does not support IPv6: it has been developed and tested only on IPv4 stacks, so Hadoop nodes can communicate only over an IPv4 network. (Once you have disabled IPv6, you must reboot the machine for the change to take effect; from the command line, use sudo reboot.)

To disable IPv6 on your Linux machine, append the following lines to the end of /etc/sysctl.conf:

# disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
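To apply the settings without a full reboot and confirm they took effect, you can reload sysctl and read the flag back (a standard Linux check, my addition rather than a step from the original); a value of 1 means IPv6 is disabled:

$ sudo sysctl -p
$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1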

********************************************************************************************************
$ sudo apt-get update

$ sudo apt-get install default-jdk

$ java -version

$ sudo apt-get install ssh

$ sudo apt-get install rsync

$ cd ~/.ssh                     # if this directory doesn't exist, run ssh localhost once first
$ ssh-keygen -t rsa             # just press Enter at every prompt
$ cp id_rsa.pub authorized_keys

After this, ssh localhost logs you in directly, with no password prompt.
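If ssh still asks for a password, the usual culprit is file permissions: OpenSSH refuses keys whose files are too open. Tightening them (my addition, a standard OpenSSH requirement rather than a step from the original) looks like:

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys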


$ wget -c http://mirror.olnevhost.net/pub/apache/hadoop/common/current/hadoop-2.6.0.tar.gz

$ sudo tar -zxvf hadoop-2.6.0.tar.gz

$ sudo mv hadoop-2.6.0 /usr/local/hadoop

$ update-alternatives --config java
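update-alternatives lists the installed JDKs; the directory portion of the chosen path (minus the trailing /jre/bin/java) is what JAVA_HOME will be set to below. An equivalent one-liner, my addition assuming the usual OpenJDK layout:

$ readlink -f /usr/bin/java | sed "s:/jre/bin/java::"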

$ vi ~/.bashrc                  # your own file, so no sudo needed (and gedit is unavailable on Ubuntu Server)

          # Hadoop Variables
          export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
          export HADOOP_HOME=/usr/local/hadoop
          export PATH=$PATH:$HADOOP_HOME/bin
          export PATH=$PATH:$HADOOP_HOME/sbin
          export HADOOP_MAPRED_HOME=$HADOOP_HOME
          export HADOOP_COMMON_HOME=$HADOOP_HOME
          export HADOOP_HDFS_HOME=$HADOOP_HOME
          export YARN_HOME=$HADOOP_HOME
          export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
          export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

$ source ~/.bashrc
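To confirm the new variables are active, the hadoop command should now resolve on the PATH and report version 2.6.0 (an optional check of my own):

$ hadoop version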

$ cd /usr/local/hadoop/etc/hadoop

$ sudo vi hadoop-env.sh

          # The java implementation to use.
          export JAVA_HOME="/usr/lib/jvm/java-7-openjdk-amd64"

$ sudo vi core-site.xml

          <configuration>
                  <property>
                      <name>fs.defaultFS</name>
                      <value>hdfs://localhost:9000</value>
                  </property>
          </configuration>
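Hadoop can read any configuration key back, which makes a handy sanity check that this file is being picked up (an optional check of my own, using the standard hdfs getconf command):

$ hdfs getconf -confKey fs.defaultFS
hdfs://localhost:9000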

$ sudo vi yarn-site.xml

          <configuration>
                  <property>
                      <name>yarn.nodemanager.aux-services</name>
                      <value>mapreduce_shuffle</value>
                  </property>
                  <property>
                      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
                      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
                  </property>
          </configuration>

$ sudo cp mapred-site.xml.template mapred-site.xml

$ sudo vi mapred-site.xml

          <configuration>
                  <property>
                      <name>mapreduce.framework.name</name>
                      <value>yarn</value>
                  </property>
          </configuration>

$ sudo vi hdfs-site.xml

          <configuration>
                  <property>
                      <name>dfs.replication</name>
                      <value>1</value>
                  </property>
                  <property>
                      <name>dfs.namenode.name.dir</name>
                      <value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
                  </property>
                  <property>
                      <name>dfs.datanode.data.dir</name>
                      <value>file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
                  </property>
          </configuration>

$ cd

$ sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/namenode

$ sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode

$ sudo chown -R hadoop /usr/local/hadoop

$ hdfs namenode -format   
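One pitfall worth noting (my addition, not from the original): if you ever re-run the format command later, the NameNode gets a fresh clusterID and the existing DataNode will refuse to start because its stored clusterID no longer matches. Clearing the DataNode directory before restarting resolves it:

$ rm -rf /usr/local/hadoop/hadoop_data/hdfs/datanode/*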

$ start-all.sh    (deprecated: use start-dfs.sh and start-yarn.sh instead)

For convenience, add the following aliases to .bashrc:

alias hstart="/usr/local/hadoop/sbin/start-dfs.sh;/usr/local/hadoop/sbin/start-yarn.sh"
alias hstop="/usr/local/hadoop/sbin/stop-yarn.sh;/usr/local/hadoop/sbin/stop-dfs.sh"
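After sourcing .bashrc again, hstart brings the daemons up (HDFS first, then YARN) and hstop stops them in reverse order:

$ source ~/.bashrc
$ hstart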

$ jps
The output should list the five Hadoop daemons plus Jps itself, similar to this (PIDs will differ):
hadoop@master:/usr/local/hadoop$ jps
5253 NodeManager
6084 NameNode
5118 ResourceManager
4791 DataNode
4972 SecondaryNameNode
6713 Jps


http://localhost:8088/     (ResourceManager web UI)
http://localhost:50070/    (NameNode web UI)
http://localhost:50090/    (SecondaryNameNode web UI)
http://localhost:50075/    (DataNode web UI)
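As a final smoke test (my addition, not part of the original walkthrough), create a home directory in HDFS and run the bundled pi example job, which exercises both HDFS and YARN:

$ hdfs dfs -mkdir -p /user/hadoop
$ hdfs dfs -ls /
$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 5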

