[Spark] 大數(shù)據(jù)平臺(tái)搭建(hadoop+spark) [復(fù)制鏈接]

發(fā)表于 2017-08-22 10:32 |只看該作者 |倒序?yàn)g覽
I. Basic Information


1. 服務(wù)器基本信息


主機(jī)名        ip地址        安裝服務(wù)
spark-master        172.16.200.81        jdk、hadoop、spark、scala
spark-slave01        172.16.200.82        jdk、hadoop、spark
spark-slave02        172.16.200.83        jdk、hadoop、spark
spark-slave03        172.16.200.84        jdk、hadoop、spark


2. Software information


Software          Version           Install path
Oracle JDK        1.8.0_111         /usr/local/jdk1.8.0_111
Hadoop            2.7.3             /usr/local/hadoop-2.7.3
Spark             2.0.2             /usr/local/spark-2.0.2
Scala             2.12.1            /usr/local/scala-2.12.1



3.環(huán)境變量匯總
  1. ############# java ############
  2. export JAVA_HOME=/usr/local/jdk1.8.0_111
  3. export PATH=$JAVA_HOME/bin:$PATH
  4. export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar


  5. ########### hadoop ##########
  6. export HADOOP_HOME=/usr/local/hadoop-2.7.3
  7. export PATH=$JAVA_HOme/bin:$HADOOP_HOME/bin:$PATH
  8. export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
  9. export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin



  10. ######### spark ############
  11. export SPARK_HOME=/usr/local/spark-2.0.2
  12. export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

  13. ######### scala ##########
  14. export SCALA_HOME=/usr/local/scala-2.12.1
  15. export PATH=$PATH:$SCALA_HOME/bin
復(fù)制代碼
4. 基本環(huán)境配置(master、slave相同操作)

4.1 Install the JDK
  cd /usr/local/src/
  tar -C /usr/local/ -xzf /usr/local/src/jdk-8u111-linux-x64.tar.gz
4.2 Configure the Java environment variables
  vim /etc/profile
Add the following:
  ######### jdk ############
  export JAVA_HOME=/usr/local/jdk1.8.0_111
  export PATH=$JAVA_HOME/bin:$PATH
  export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
4.3 Reload the profile:
  source /etc/profile
4.4 Configure /etc/hosts
  vim /etc/hosts

  172.16.200.81   spark-master
  172.16.200.82   spark-slave01
  172.16.200.83   spark-slave02
  172.16.200.84   spark-slave03
4.5 Set up passwordless SSH

Generate a key pair:
  ssh-keygen
If you do not want a passphrase on the key, just press Enter at each prompt.

First enable passwordless login to the local machine:
  cd /root/.ssh
  cat id_rsa.pub > authorized_keys
  chmod 600 authorized_keys
再將其它主機(jī)id_rsa.pub 內(nèi)容追加到 authorized_keys中,三臺(tái)配置完成后即可實(shí)現(xiàn)免密碼登錄

二.大數(shù)據(jù)平臺(tái)搭建

1. Set up Hadoop (same steps on master and slaves)

1.1 Install Hadoop
  cd /usr/local/src/
  tar -C /usr/local/ -xzf hadoop-2.7.3.tar.gz
1.2 Configure the Hadoop environment variables
  vim /etc/profile
Add the following:
  ######### hadoop ############
  export HADOOP_HOME=/usr/local/hadoop-2.7.3
  export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
  export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
  export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
1.3 Reload the profile:
  source /etc/profile
1.4 Edit the Hadoop configuration files
  cd /usr/local/hadoop-2.7.3/etc/hadoop
Listing:
  [root@spark-master hadoop]# ll
  total 152
  -rw-r--r--. 1 root root  4436 Aug 18 09:49 capacity-scheduler.xml
  -rw-r--r--. 1 root root  1335 Aug 18 09:49 configuration.xsl
  -rw-r--r--. 1 root root   318 Aug 18 09:49 container-executor.cfg
  -rw-r--r--. 1 root root  1037 Dec 21 14:58 core-site.xml
  -rw-r--r--. 1 root root  3589 Aug 18 09:49 hadoop-env.cmd
  -rw-r--r--. 1 root root  4235 Dec 21 11:17 hadoop-env.sh
  -rw-r--r--. 1 root root  2598 Aug 18 09:49 hadoop-metrics2.properties
  -rw-r--r--. 1 root root  2490 Aug 18 09:49 hadoop-metrics.properties
  -rw-r--r--. 1 root root  9683 Aug 18 09:49 hadoop-policy.xml
  -rw-r--r--. 1 root root  1826 Dec 21 14:11 hdfs-site.xml
  -rw-r--r--. 1 root root  1449 Aug 18 09:49 httpfs-env.sh
  -rw-r--r--. 1 root root  1657 Aug 18 09:49 httpfs-log4j.properties
  -rw-r--r--. 1 root root    21 Aug 18 09:49 httpfs-signature.secret
  -rw-r--r--. 1 root root   620 Aug 18 09:49 httpfs-site.xml
  -rw-r--r--. 1 root root  3518 Aug 18 09:49 kms-acls.xml
  -rw-r--r--. 1 root root  1527 Aug 18 09:49 kms-env.sh
  -rw-r--r--. 1 root root  1631 Aug 18 09:49 kms-log4j.properties
  -rw-r--r--. 1 root root  5511 Aug 18 09:49 kms-site.xml
  -rw-r--r--. 1 root root 11237 Aug 18 09:49 log4j.properties
  -rw-r--r--. 1 root root   931 Aug 18 09:49 mapred-env.cmd
  -rw-r--r--. 1 root root  1383 Aug 18 09:49 mapred-env.sh
  -rw-r--r--. 1 root root  4113 Aug 18 09:49 mapred-queues.xml.template
  -rw-r--r--. 1 root root  1612 Dec 21 12:03 mapred-site.xml
  -rw-r--r--. 1 root root    56 Dec 21 16:30 slaves
  -rw-r--r--. 1 root root  2316 Aug 18 09:49 ssl-client.xml.example
  -rw-r--r--. 1 root root  2268 Aug 18 09:49 ssl-server.xml.example
  -rw-r--r--. 1 root root  2191 Aug 18 09:49 yarn-env.cmd
  -rw-r--r--. 1 root root  4564 Dec 21 11:19 yarn-env.sh
  -rw-r--r--. 1 root root  1195 Dec 21 14:24 yarn-site.xml
1.4.1 Edit the Hadoop core configuration
  vim core-site.xml
  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

  <!-- Put site-specific property overrides in this file. -->

  <configuration>
  <!-- NameNode address -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://172.16.200.81:9000</value>
    </property>
  <!-- Directory for files Hadoop generates at runtime -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>file:///data/hadoop/data/tmp</value>
    </property>
  </configuration>
1.4.2 Point Hadoop at the JDK
  vim hadoop-env.sh
  # Licensed to the Apache Software Foundation (ASF) under one
  # or more contributor license agreements.  See the NOTICE file
  # distributed with this work for additional information
  # regarding copyright ownership.  The ASF licenses this file
  # to you under the Apache License, Version 2.0 (the
  # "License"); you may not use this file except in compliance
  # with the License.  You may obtain a copy of the License at
  #
  #     http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.

  # Set Hadoop-specific environment variables here.

  # The only required environment variable is JAVA_HOME.  All others are
  # optional.  When running a distributed configuration it is best to
  # set JAVA_HOME in this file, so that it is correctly defined on
  # remote nodes.

  # The java implementation to use.
  # JDK location
  export JAVA_HOME=/usr/local/jdk1.8.0_111

  # The jsvc implementation to use. Jsvc is required to run secure datanodes
  # that bind to privileged ports to provide authentication of data transfer
  # protocol.  Jsvc is not required if SASL is configured for authentication of
  # data transfer protocol using non-privileged ports.
  #export JSVC_HOME=${JSVC_HOME}

  export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

  # Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
  for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
    if [ "$HADOOP_CLASSPATH" ]; then
      export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
    else
      export HADOOP_CLASSPATH=$f
    fi
  done

  # The maximum amount of heap to use, in MB. Default is 1000.
  #export HADOOP_HEAPSIZE=
  #export HADOOP_NAMENODE_INIT_HEAPSIZE=""

  # Extra Java runtime options.  Empty by default.
  export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

  # Command specific options appended to HADOOP_OPTS when specified
  export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
  export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

  export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"

  export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
  export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

  # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
  export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
  #HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

  # On secure datanodes, user to run the datanode as after dropping privileges.
  # This **MUST** be uncommented to enable secure HDFS if using privileged ports
  # to provide authentication of data transfer protocol.  This **MUST NOT** be
  # defined if SASL is configured for authentication of data transfer protocol
  # using non-privileged ports.
  export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

  # Where log files are stored.  $HADOOP_HOME/logs by default.
  #export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

  # Where log files are stored in the secure data environment.
  export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}

  ###
  # HDFS Mover specific parameters
  ###
  # Specify the JVM options to be used when starting the HDFS Mover.
  # These options will be appended to the options specified as HADOOP_OPTS
  # and therefore may override any similar flags set in HADOOP_OPTS
  #
  # export HADOOP_MOVER_OPTS=""

  ###
  # Advanced Users Only!
  ###

  # The directory where pid files are stored. /tmp by default.
  # NOTE: this should be set to a directory that can only be written to by
  #       the user that will run the hadoop daemons.  Otherwise there is the
  #       potential for a symlink attack.
  export HADOOP_PID_DIR=${HADOOP_PID_DIR}
  export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

  # A string representing this instance of hadoop. $USER by default.
  export HADOOP_IDENT_STRING=$USER
1.4.3 Configure HDFS
  vim hdfs-site.xml
  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
  -->

  <!-- Put site-specific property overrides in this file. -->

  <configuration>
  <!-- HDFS replication factor -->
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>
  <!-- Disable HDFS permission checking -->
    <property>
      <name>dfs.permissions</name>
      <value>false</value>
    </property>
  <!-- Secondary NameNode web UI address -->
    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>172.16.200.81:50090</value>
    </property>
  <!-- NameNode web UI address -->
    <property>
      <name>dfs.namenode.http-address</name>
      <value>172.16.200.81:50070</value>
    </property>
  <!-- Where DataNodes store block data -->
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:///data/hadoop/data/dfs/dn</value>
    </property>
  <!-- Where the NameNode stores its metadata -->
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///data/hadoop/data/dfs/nn/name</value>
    </property>
  <!-- Where the NameNode stores edit logs -->
    <property>
      <name>dfs.namenode.edits.dir</name>
      <value>file:///data/hadoop/data/dfs/nn/edits</value>
    </property>
  <!-- Where the Secondary NameNode stores checkpoint images -->
    <property>
      <name>dfs.namenode.checkpoint.dir</name>
      <value>file:///data/hadoop/data/dfs/snn/name</value>
    </property>
  <!-- Where the Secondary NameNode stores checkpoint edit logs -->
    <property>
      <name>dfs.namenode.checkpoint.edits.dir</name>
      <value>file:///data/hadoop/data/dfs/snn/edits</value>
    </property>

  </configuration>
1.4.4 Configure MapReduce
  vim mapred-site.xml
  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
  -->

  <!-- Put site-specific property overrides in this file. -->

  <configuration>
  <!-- Run MapReduce on YARN -->
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
  <!-- JobHistory Server web UI address -->
    <property>
      <name>mapreduce.jobhistory.webapp.address</name>
      <value>172.16.200.81:19888</value>
    </property>
  <!-- JobHistory Server RPC address -->
    <property>
      <name>mapreduce.jobhistory.address</name>
      <value>172.16.200.81:10020</value>
    </property>
  <!-- Uber (single-JVM) task mode -->
    <property>
      <name>mapreduce.job.ubertask.enable</name>
      <value>false</value>
    </property>
  <!-- Where logs produced by running MapReduce jobs are staged -->
    <property>
      <name>mapreduce.jobhistory.intermediate-done-dir</name>
      <value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value>
    </property>
  <!-- Where the MR JobHistory Server keeps logs of finished jobs -->
    <property>
      <name>mapreduce.jobhistory.done-dir</name>
      <value>${yarn.app.mapreduce.am.staging-dir}/history/done</value>
    </property>
  <!-- Temporary staging directory used while jobs run -->
    <property>
      <name>yarn.app.mapreduce.am.staging-dir</name>
      <value>/data/hadoop/hadoop-yarn/staging</value>
    </property>
  </configuration>
1.4.5 Configure slaves
  vim slaves

  172.16.200.81
  172.16.200.82
  172.16.200.83
  172.16.200.84
1.4.6 Configure YARN
  vim yarn-site.xml
  <?xml version="1.0"?>
  <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
  -->
  <configuration>
  <!-- Auxiliary service the NodeManagers run for the MapReduce shuffle -->
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
  <!-- Host that runs the ResourceManager -->
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>172.16.200.81</value>
    </property>
  <!-- ResourceManager web UI address -->
    <property>
      <name>yarn.resourcemanager.webapp.address</name>
      <value>172.16.200.81:8088</value>
    </property>
  <!-- Enable log aggregation -->
    <property>
      <name>yarn.log-aggregation-enable</name>
      <value>true</value>
    </property>
  <!-- How long aggregated logs are kept on HDFS, in seconds -->
    <property>
      <name>yarn.log-aggregation.retain-seconds</name>
      <value>86400</value>
    </property>
  </configuration>
2. Set up Spark (same steps on master and slaves)

2.1 Install Spark
  cd /usr/local/src/
  tar zxvf spark-2.0.2-bin-hadoop2.7.tgz
  mv spark-2.0.2-bin-hadoop2.7 /usr/local/spark-2.0.2
2.2 Configure the Spark environment variables
  vim /etc/profile
Add the following:
  ######### spark ############
  export SPARK_HOME=/usr/local/spark-2.0.2
  export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
2.3 Reload the profile:
  source /etc/profile
2.4 Edit the Spark configuration files
  cd /usr/local/spark-2.0.2/conf
  mv spark-env.sh.template spark-env.sh
  [root@spark-master conf]# ll
  total 36
  -rw-r--r--. 1  500  500  987 Nov  8 09:58 docker.properties.template
  -rw-r--r--. 1  500  500 1105 Nov  8 09:58 fairscheduler.xml.template
  -rw-r--r--. 1  500  500 2025 Nov  8 09:58 log4j.properties.template
  -rw-r--r--. 1  500  500 7239 Nov  8 09:58 metrics.properties.template
  -rw-r--r--. 1  500  500  912 Dec 21 16:55 slaves
  -rw-r--r--. 1  500  500 1292 Nov  8 09:58 spark-defaults.conf.template
  -rwxr-xr-x. 1 root root 3969 Dec 21 15:50 spark-env.sh
  -rwxr-xr-x. 1  500  500 3861 Nov  8 09:58 spark-env.sh.template
2.4.1 Point Spark at the JDK
  vim spark-env.sh
  #!/usr/bin/env bash

  #
  # Licensed to the Apache Software Foundation (ASF) under one or more
  # contributor license agreements.  See the NOTICE file distributed with
  # this work for additional information regarding copyright ownership.
  # The ASF licenses this file to You under the Apache License, Version 2.0
  # (the "License"); you may not use this file except in compliance with
  # the License.  You may obtain a copy of the License at
  #
  #    http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  #

  # This file is sourced when running various Spark programs.
  # Copy it as spark-env.sh and edit that to configure Spark for your site.

  # Options read when launching programs locally with
  # ./bin/run-example or ./bin/spark-submit
  # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
  # - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
  # - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
  # - SPARK_CLASSPATH, default classpath entries to append

  # Options read by executors and drivers running inside the cluster
  # - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
  # - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
  # - SPARK_CLASSPATH, default classpath entries to append
  # - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
  # - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos

  # Options read in YARN client mode
  # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
  # - SPARK_EXECUTOR_INSTANCES, Number of executors to start (Default: 2)
  # - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
  # - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
  # - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)

  # Options for the daemons used in the standalone deploy mode
  # - SPARK_MASTER_HOST, to bind the master to a different IP address or hostname
  # - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
  # - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
  # - SPARK_WORKER_CORES, to set the number of cores to use on this machine
  # - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
  # - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
  # - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
  # - SPARK_WORKER_DIR, to set the working directory of worker processes
  # - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
  # - SPARK_DAEMON_MEMORY, to allocate to the master, worker and history server themselves (default: 1g).
  # - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
  # - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service (e.g. "-Dx=y")
  # - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
  # - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers

  # Generic options for the daemons used in the standalone deploy mode
  # - SPARK_CONF_DIR      Alternate conf dir. (Default: ${SPARK_HOME}/conf)
  # - SPARK_LOG_DIR       Where log files are stored.  (Default: ${SPARK_HOME}/logs)
  # - SPARK_PID_DIR       Where the pid file is stored. (Default: /tmp)
  # - SPARK_IDENT_STRING  A string representing this instance of spark. (Default: $USER)
  # - SPARK_NICENESS      The scheduling priority for daemons. (Default: 0)
  # Java environment
  export JAVA_HOME=/usr/local/jdk1.8.0_111
  # IP of the Spark master node
  export SPARK_MASTER_IP=172.16.200.81
  # Port of the Spark master node
  export SPARK_MASTER_PORT=7077
2.4.2 Configure slaves
  vim slaves
  #
  # Licensed to the Apache Software Foundation (ASF) under one or more
  # contributor license agreements.  See the NOTICE file distributed with
  # this work for additional information regarding copyright ownership.
  # The ASF licenses this file to You under the Apache License, Version 2.0
  # (the "License"); you may not use this file except in compliance with
  # the License.  You may obtain a copy of the License at
  #
  #    http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  #

  # A Spark Worker will be started on each of the machines listed below.
  172.16.200.81
  172.16.200.82
  172.16.200.83
  172.16.200.84
3. Install Scala (master only)
  cd /usr/local/src/
  tar zxvf scala-2.12.1.tgz
  mv scala-2.12.1 /usr/local
3.1 Configure the Scala environment variables (master only)
  vim /etc/profile
Add the following:
  ######### scala ##########
  export SCALA_HOME=/usr/local/scala-2.12.1
  export PATH=$PATH:$SCALA_HOME/bin
3.2 Reload the profile:
  source /etc/profile
4. 啟動(dòng)程序

4.1 啟動(dòng)hadoop

4.1.1 Format the NameNode (run once, on the master; the newer equivalent is hdfs namenode -format)
  hadoop namenode -format
4.1.2 Start Hadoop from the master
  cd /usr/local/hadoop-2.7.3/sbin
  ./start-all.sh
Note (start-all.sh is deprecated in Hadoop 2.x in favour of start-dfs.sh plus start-yarn.sh, but it still works):
  start-all.sh                    // starts the daemons on the master and all slaves
  stop-all.sh                     // stops the daemons on the master and all slaves
查看進(jìn)程 (master)
  1. [root@spark-master sbin]# jps
  2. 8961 NodeManager
  3. 8327 DataNode
  4. 8503 SecondaryNameNode
  5. 8187 NameNode
  6. 8670 ResourceManager
  7. 9102 Jps
  8. [root@spark-master sbin]#
復(fù)制代碼
查看進(jìn)程 (slave)
  1. [root@spark-slave01 ~]# jps
  2. 4289 NodeManager
  3. 4439 Jps
  4. 4175 DataNode
  5. [root@spark-slave01 ~]#
復(fù)制代碼
slave01、slve02、slave03顯示相同
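With all the daemons up, a quick smoke test confirms that HDFS and YARN actually accept work; a hedged sketch using the examples jar that ships inside the Hadoop 2.7.3 distribution:
  # HDFS round trip
  hdfs dfs -mkdir -p /tmp/smoke
  hdfs dfs -put /etc/hosts /tmp/smoke/
  hdfs dfs -ls /tmp/smoke
  # Run the bundled pi estimator as a YARN job
  hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 4 100
The NameNode web UI (http://172.16.200.81:50070) and the ResourceManager web UI (http://172.16.200.81:8088) configured earlier should also be reachable at this point.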

4.2 啟動(dòng)spark

4.2.1 Start Spark from the master
  cd /usr/local/spark-2.0.2/sbin
  ./start-all.sh
Note:
  start-all.sh                    // starts the Spark master and all workers
  stop-all.sh                     // stops the Spark master and all workers


Reply #2, posted 2018-01-18 15:39:
How has it been after these months of studying big data? I'd like to learn it too.

Reply #3, posted 2018-04-04 18:11:
Been studying big data for a few months now.

4 [報(bào)告]
發(fā)表于 2018-07-23 16:14 |只看該作者
重回论坛

Reply #5, posted 2019-10-08 13:40:
Thanks for sharing, OP.