HDFS Single-Node Installation

I. Prepare the machine

No.   Address        Ports
1     10.211.55.8    9000, 50070, 8088

II. Installation

1. Install the Java environment

Append the following to /etc/profile:

export JAVA_HOME=/data/program/software/java8
export JRE_HOME=/data/program/software/java8/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Run source /etc/profile to make the settings take effect.
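A quick sanity check (a sketch, assuming the paths above) confirms the Java environment after sourcing /etc/profile:

echo $JAVA_HOME    # should print /data/program/software/java8
java -version      # should report a Java 8 runtime
which java         # should resolve inside $JAVA_HOME/bin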
2. Set the hostname mapping
Edit /etc/hosts and add the line: 10.211.55.8 bigdata2

3. Disable the firewall
Stop the firewall: service iptables stop
Permanently disable the firewall: chkconfig iptables off
Check the firewall status: service iptables status

4. Add the hadoop user and group
Create the group: groupadd hadoop
Create the hadoop user and add it to the hadoop group: useradd -g hadoop hadoop
Set its password: passwd hadoop
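An optional verification sketch for steps 2-4 (command availability and output format vary by distribution):

getent hosts bigdata2      # should resolve to 10.211.55.8
service iptables status    # should report that the firewall is not running
id hadoop                  # should show the hadoop user in the hadoop group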
5. Download and install Hadoop
cd /data/program/software
Download the package: wget http:/
Unpack it: tar -zxf hadoop-2.8.1.tar.gz
Give the hadoop user ownership of hadoop-2.8.1: chown -R hadoop:hadoop hadoop-2.8.1

6. Create the data directories
mkdir -p /data/dfs/name
mkdir -p /data/dfs/data
mkdir -p /data/tmp
Give the hadoop user ownership of /data: chown -R hadoop:hadoop /data
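Optionally, confirm that the data directories exist and are owned by the hadoop user (a sketch, assuming the paths above):

ls -ld /data/dfs/name /data/dfs/data /data/tmp
# owner and group on each line should both be hadoop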
7. Configure etc/hadoop/core-site.xml
cd /data/program/software/hadoop-2.8.1

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://bigdata2:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/data/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>
</configuration>
8. Configure etc/hadoop/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/data/dfs/name</value>
    <description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.</description>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/data/dfs/data</value>
    <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.</description>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

9. Configure etc/hadoop/mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
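Note: the Hadoop 2.8.1 distribution normally ships only etc/hadoop/mapred-site.xml.template, so if mapred-site.xml does not exist yet it is usually created from the template before editing (a sketch, run from the Hadoop home directory):

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml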
10. Configure etc/hadoop/yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

11. Configure etc/hadoop/slaves
bigdata2

12. Set the Hadoop environment variables
vi /etc/profile

HADOOP_HOME=/data/program/software/hadoop-2.8.1
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
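After editing /etc/profile, the new variables can be checked with something like (a sketch, assuming the settings above):

source /etc/profile
echo $HADOOP_HOME   # should print /data/program/software/hadoop-2.8.1
hadoop version      # should report Hadoop 2.8.1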
13. Set up passwordless SSH
Switch to the hadoop user: su hadoop
Typing cd on its own moves to the /home/hadoop home directory: cd
Create the .ssh directory: mkdir .ssh
Generate a key pair (press Enter at every prompt): ssh-keygen -t rsa
Enter the .ssh directory: cd .ssh
Copy the public key into authorized_keys: cp id_rsa.pub authorized_keys
Go back to the home directory: cd ..
Give .ssh permission 700: chmod 700 .ssh
Give the files inside .ssh permission 600: chmod 600 .ssh/*
Test the login: ssh bigdata2
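To confirm that the key-based login really works without a password prompt, a non-interactive check can be used (a sketch; BatchMode makes ssh fail rather than ask for a password):

ssh -o BatchMode=yes bigdata2 hostname   # should print bigdata2 without prompting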
14. Run Hadoop
Format the namenode first: bin/hadoop namenode -format
To see everything at once, start all of the services: sbin/start-all.sh
Check the running services: jps
Open the HDFS web UI: http://10.211.55.8:50070
Check the running Hadoop tasks: http://10.211.55.8:8088/cluster/nodes
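For reference, on a healthy single-node setup started with start-all.sh, jps should list roughly the following daemons (the process IDs that jps prints before each name are omitted here):

jps
# NameNode
# DataNode
# SecondaryNameNode
# ResourceManager
# NodeManager
# Jps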
15. Test
Create a directory: bin/hadoop fs -mkdir /test
Create a txt file and put it under /test: bin/hadoop fs -put /home/hadoop/first.txt /test
List the files in the directory: bin/hadoop fs -ls /test

If the following error appears during startup, change JAVA_HOME in /data/program/software/hadoop-2.8.1/etc/hadoop/hadoop-env.sh to an absolute path.

[hadoop@bigdata2 hadoop-2.8.1]$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
17/07/25 13:52:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not
17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not
17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not
17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not
Starting namenodes on [bigdata2]
bigdata2: Error: JAVA_HOME is not set and could not be found.
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 24:e2:40:a1:fd:ac:68:46:fb:6b:6b:ac:94:ac:05:e3.
Are you sure you want to continue connecting (yes/no)?
bigdata2: Error: JAVA_HOME is not set and could not be found.
localhost: Host key verification failed.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Error: JAVA_HOME is not set and could not be found.
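A minimal sketch of the fix described above, assuming the Java path from step 1:

vi /data/program/software/hadoop-2.8.1/etc/hadoop/hadoop-env.sh
# replace the existing JAVA_HOME line with the absolute path:
export JAVA_HOME=/data/program/software/java8

Then re-run sbin/start-all.sh; the "JAVA_HOME is not set" errors should no longer appear.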