# Linux System Preparation

### *Cluster nodes: hadoop01, hadoop02, hadoop03*

### *Linux version: CentOS 7*

### *Linux account: persagy*

1. Install the virtual machines.

2. Configure a static IP on each node.

3. Set each node's hostname.

4. Map hostnames to IPs in /etc/hosts on every node:

       192.168.xxx.xxx hadoop01
       192.168.xxx.xxx hadoop02
       192.168.xxx.xxx hadoop03

5. Create the working user:

       useradd persagy
       passwd persagy    (enter a password of your choice)

   Add the following line to /etc/sudoers:

       persagy ALL=(ALL) NOPASSWD:ALL

6. Stop the firewall service and disable it at boot (systemctl stop firewalld && systemctl disable firewalld).

7. Configure passwordless SSH for both root and persagy:

   1. On every node run: ssh-keygen -t rsa
   2. Copy the public key to each machine that should be reachable without a password:

          ssh-copy-id hadoop01
          ssh-copy-id hadoop02
          ssh-copy-id hadoop03

8. Synchronize time across the cluster (e.g. with ntpd or chrony).

9. Create working directories under /opt (*the location can be customized*):

   1. mkdir /opt/software /opt/module
   2. chown persagy:persagy /opt/software /opt/module

10. Install the JDK (*this document uses jdk1.8.0_121; any JDK version compatible with Hadoop can be used*):

    1. tar -zxvf jdk-8u121-linux-x64.tar.gz -C /opt/module
    2. Add the environment variables to /etc/profile:

       ```shell
       #JAVA_HOME
       export JAVA_HOME=/opt/module/jdk1.8.0_121
       export PATH=$PATH:$JAVA_HOME/bin
       ```

# Cluster Deployment Plan and Software Versions

## Cluster Deployment Plan

|                          | hadoop01             | hadoop02                       | hadoop03                      |
| ------------------------ | -------------------- | ------------------------------ | ----------------------------- |
| Hadoop-hdfs              | nameNode<br>dataNode | dataNode                       | dataNode<br>secondaryNameNode |
| Hadoop-yarn              | nodeManager          | nodeManager<br>resourceManager | nodeManager                   |
| zookeeper                | ✅                   | ✅                             | ✅                            |
| flume                    | collection           | collection                     | transport                     |
| kafka                    | ✅                   | ✅                             | ✅                            |
| hbase                    | ✅                   | ✅                             | ✅                            |
| spark                    | ✅                   | ✅                             | ✅                            |
| hive                     | ✅                   |                                |                               |
| MySQL (Hive metastore)   | ✅                   |                                |                               |

## Software Versions

| Software  | Version                                                      | Notes                                                        |
| --------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| Java      | jdk-8u121-linux-x64.tar.gz                                    |                                                              |
| Hadoop    | hadoop-2.7.2.tar.gz                                           |                                                              |
| Hive      | apache-hive-1.2.1-bin.tar.gz                                  | Can be queried from DataGrip or DBeaver; the hiveserver2 service must be running |
| flume     | apache-flume-1.7.0-bin.tar.gz                                 |                                                              |
| kafka     | kafka_2.11-0.11.0.0.tgz                                       |                                                              |
| hbase     | hbase-1.3.1-bin.tar.gz                                        |                                                              |
| zookeeper | 3.5.7                                                         | The paths in the sections below reference zookeeper-3.4.10; adjust them to the version actually installed |
| spark     |                                                               |                                                              |
| mysql     | mysql-community-client-5.7.32-1.el7.x86_64.rpm<br>mysql-community-server-5.7.32-1.el7.x86_64.rpm<br>mysql-connector-java-5.1.49 (place the jar in Hive's lib directory) |                                                              |
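Most of the components below are configured once and then copied verbatim to hadoop02 and hadoop03. A minimal distribution helper is sketched here; it assumes the passwordless SSH and hostnames configured above, that rsync is installed on all nodes, and the script name `xsync.sh` is only an illustrative choice, not part of the original setup.

```shell
#!/bin/bash
# xsync.sh -- copy a file or directory to the same path on the other nodes.
# Usage: ./xsync.sh /opt/module/hadoop-2.7.2/etc/hadoop
# Assumes passwordless SSH (step 7 above) and the hostnames from /etc/hosts.
if [ $# -lt 1 ]; then
  echo "Usage: $0 <path> [<path> ...]"
  exit 1
fi
for host in hadoop02 hadoop03; do
  for path in "$@"; do
    dir=$(dirname "$path")
    # -a preserves permissions/ownership, -v prints what is transferred
    rsync -av "$path" "${host}:${dir}/"
  done
done
```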
# ZooKeeper Installation

#### *Performed as the persagy user*

1. Extract the archive.

2. Create the /opt/module/zookeeper-3.4.10/zkData and /opt/module/zookeeper-3.4.10/logs directories.

3. Edit the configuration files:

   1. Create a myid file under zkData (every node in the cluster must use a different myid):

      ```
      1    (this number corresponds to a server.<id> entry in zoo.cfg; the values themselves can be chosen freely)
      ```

   2. Create zoo.cfg under the ZooKeeper conf directory:

      ```shell
      cp zoo_sample.cfg zoo.cfg
      ```

      Add the following to zoo.cfg:

      ```properties
      dataDir=/opt/module/zookeeper-3.4.10/zkData
      ZOO_LOG_DIR=/opt/module/zookeeper-3.4.10/logs
      # server.<id> entries: <id> is the serverId, i.e. the value written to each node's myid file
      server.1=hadoop01:2888:3888
      server.2=hadoop02:2888:3888
      server.3=hadoop03:2888:3888
      ```

# Hadoop Cluster Installation

#### *Performed as the persagy user*

1. Extract the archive:

   ```shell
   tar -zxvf hadoop-2.7.2.tar.gz -C /opt/module/
   ```

2. Add the environment variables to /etc/profile:

   ```shell
   #HADOOP_HOME
   export HADOOP_HOME=/opt/module/hadoop-2.7.2
   export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
   ```

3. Edit the Hadoop configuration files (directory: /opt/module/hadoop-2.7.2/etc/hadoop); the configuration is identical on every node.

   1. core-site.xml (the LZO codecs additionally require the hadoop-lzo jar on Hadoop's classpath):

      ```xml
      <property>
          <name>fs.defaultFS</name>
          <value>hdfs://hadoop01:9000</value>
      </property>
      <property>
          <name>hadoop.tmp.dir</name>
          <value>/opt/module/hadoop-2.7.2/data/tmp</value>
      </property>
      <property>
          <name>io.compression.codecs</name>
          <value>
              org.apache.hadoop.io.compress.GzipCodec,
              org.apache.hadoop.io.compress.DefaultCodec,
              org.apache.hadoop.io.compress.BZip2Codec,
              org.apache.hadoop.io.compress.SnappyCodec,
              com.hadoop.compression.lzo.LzoCodec,
              com.hadoop.compression.lzo.LzopCodec
          </value>
      </property>
      <property>
          <name>io.compression.codec.lzo.class</name>
          <value>com.hadoop.compression.lzo.LzoCodec</value>
      </property>
      ```

   2. hdfs-site.xml:

      ```xml
      <property>
          <name>dfs.replication</name>
          <value>3</value>
      </property>
      <!-- SecondaryNameNode runs on hadoop03 -->
      <property>
          <name>dfs.namenode.secondary.http-address</name>
          <value>hadoop03:50090</value>
      </property>
      ```

   3. yarn-site.xml:

      ```xml
      <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
      </property>
      <!-- ResourceManager runs on hadoop02 -->
      <property>
          <name>yarn.resourcemanager.hostname</name>
          <value>hadoop02</value>
      </property>
      <!-- Enable log aggregation and keep logs for 7 days -->
      <property>
          <name>yarn.log-aggregation-enable</name>
          <value>true</value>
      </property>
      <property>
          <name>yarn.log-aggregation.retain-seconds</name>
          <value>604800</value>
      </property>
      ```

   4. mapred-site.xml (*first run: cp mapred-site.xml.template mapred-site.xml*):

      ```xml
      <property>
          <name>mapreduce.framework.name</name>
          <value>yarn</value>
      </property>
      <!-- JobHistory server runs on hadoop03 -->
      <property>
          <name>mapreduce.jobhistory.address</name>
          <value>hadoop03:10020</value>
      </property>
      <property>
          <name>mapreduce.jobhistory.webapp.address</name>
          <value>hadoop03:19888</value>
      </property>
      ```

   5. hadoop-env.sh:

      ```shell
      # The java implementation to use.
      export JAVA_HOME=/opt/module/jdk1.8.0_121
      ```

   6. yarn-env.sh:

      ```shell
      # some Java parameters
      export JAVA_HOME=/opt/module/jdk1.8.0_121
      ```

   7. mapred-env.sh:

      ```shell
      # limitations under the License.
      export JAVA_HOME=/opt/module/jdk1.8.0_121
      ```

   8. slaves:

      ```
      hadoop01
      hadoop02
      hadoop03
      ```

4. ***(Important) The NameNode is configured on hadoop01; before the very first start it must be formatted: hdfs namenode -format***

5. Start Hadoop:

   1. On hadoop01: /opt/module/hadoop-2.7.2/sbin/start-dfs.sh
   2. On hadoop02: /opt/module/hadoop-2.7.2/sbin/start-yarn.sh

6. Use jps on all three nodes to check which daemons are running.
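To confirm that HDFS and YARN came up according to the deployment plan, a quick smoke test can be run from any node; the /tmp/smoke path below is an arbitrary example, not part of the original setup.

```shell
# Expected daemons per the deployment plan:
#   hadoop01: NameNode, DataNode, NodeManager
#   hadoop02: DataNode, NodeManager, ResourceManager
#   hadoop03: DataNode, NodeManager, SecondaryNameNode
jps

# Write and read back a small file through HDFS
hadoop fs -mkdir -p /tmp/smoke
hadoop fs -put $HADOOP_HOME/etc/hadoop/core-site.xml /tmp/smoke/
hadoop fs -cat /tmp/smoke/core-site.xml | head

# Web UIs (Hadoop 2.7.x defaults):
#   NameNode:        http://hadoop01:50070
#   ResourceManager: http://hadoop02:8088
```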
# Hive Installation

## Installing Hive

1. Extract the archive and rename the directory (later paths assume /opt/module/hive):

   ```shell
   tar -zxvf apache-hive-1.2.1-bin.tar.gz -C /opt/module/
   mv /opt/module/apache-hive-1.2.1-bin /opt/module/hive
   ```

2. Configure hive-env.sh:

   ```shell
   mv hive-env.sh.template hive-env.sh
   ```

   ```shell
   export HADOOP_HOME=/opt/module/hadoop-2.7.2
   export HIVE_CONF_DIR=/opt/module/hive/conf
   ```

3. Prepare the Hadoop cluster:

   1. *Start HDFS and YARN.*
   2. Create the /tmp and /user/hive/warehouse directories on HDFS and make them group-writable:

      ```shell
      [persagy@$hostname hadoop-2.7.2]$ bin/hadoop fs -mkdir /tmp
      [persagy@$hostname hadoop-2.7.2]$ bin/hadoop fs -mkdir -p /user/hive/warehouse
      [persagy@$hostname hadoop-2.7.2]$ bin/hadoop fs -chmod g+w /tmp
      [persagy@$hostname hadoop-2.7.2]$ bin/hadoop fs -chmod g+w /user/hive/warehouse
      ```

## Installing MySQL

#### Stores the Hive metadata; *performed as root*

1. Check for an existing MySQL installation:

   1. rpm -qa | grep mysql
   2. Remove any installed packages: rpm -e --nodeps <package-name>

2. Install the server (if rpm reports missing dependencies, the matching mysql-community-common and mysql-community-libs packages need to be installed first):

   ```shell
   rpm -ivh mysql-community-server-5.7.32-1.el7.x86_64.rpm
   systemctl start mysqld
   # Initial root password:
   grep 'temporary password' /var/log/mysqld.log
   ```

3. Install the client:

   ```shell
   rpm -ivh mysql-community-client-5.7.32-1.el7.x86_64.rpm
   ```

4. Update the user table so that connections are accepted from any host:

   ```mysql
   use mysql;
   update user set host='%' where host='localhost';
   flush privileges;
   ```

5. Point the Hive metastore at MySQL:

   1. Copy the driver jar mysql-connector-java-5.1.49-bin.jar into /opt/module/hive/lib/.
   2. Edit hive-site.xml (replace `user` and `passwd` with the actual MySQL account):

      ```xml
      <property>
          <name>javax.jdo.option.ConnectionURL</name>
          <value>jdbc:mysql://hadoop01:3306/metastore?createDatabaseIfNotExist=true</value>
          <description>JDBC connect string for a JDBC metastore</description>
      </property>
      <property>
          <name>javax.jdo.option.ConnectionDriverName</name>
          <value>com.mysql.jdbc.Driver</value>
          <description>Driver class name for a JDBC metastore</description>
      </property>
      <property>
          <name>javax.jdo.option.ConnectionUserName</name>
          <value>user</value>
          <description>username to use against metastore database</description>
      </property>
      <property>
          <name>javax.jdo.option.ConnectionPassword</name>
          <value>passwd</value>
          <description>password to use against metastore database</description>
      </property>
      ```

6. Access Hive over JDBC:

   1. Start the hiveserver2 service:

      ```shell
      nohup bin/hiveserver2 > hiveserver2.out 2> hiveserver2.err &
      ```

   2. Using DataGrip as an example:

      1. Add the driver jar hive-jdbc-uber-2.6.5.0-292.jar (choose the build matching your Hive version).
      2. Configure the connection; with this setup the JDBC URL is typically jdbc:hive2://hadoop01:10000 (configuration screenshot omitted).

# Kafka Installation

1. Extract the archive:

   ```shell
   tar -zxvf kafka_2.11-0.11.0.0.tgz -C /opt/module/
   ```

2. Create a logs directory under the Kafka installation directory (the configuration below assumes the installation lives at /opt/module/kafka; adjust log.dirs if you kept the versioned directory name).

3. Edit config/server.properties:

   ```properties
   # Globally unique broker id; must differ per node (hadoop02 uses broker.id=2, hadoop03 uses broker.id=3)
   broker.id=1
   # Allow topics to be deleted
   delete.topic.enable=true
   # Number of threads handling network requests
   num.network.threads=3
   # Number of threads handling disk I/O
   num.io.threads=8
   # Send buffer size of the socket server
   socket.send.buffer.bytes=102400
   # Receive buffer size of the socket server
   socket.receive.buffer.bytes=102400
   # Maximum size of a request the socket server will accept
   socket.request.max.bytes=104857600
   # Directory where Kafka stores its data/log segments
   log.dirs=/opt/module/kafka/logs
   # Default number of partitions per topic on this broker
   num.partitions=1
   # Threads per data dir used for recovery and cleanup at startup
   num.recovery.threads.per.data.dir=1
   # Maximum time a log segment is retained before deletion
   log.retention.hours=168
   # ZooKeeper connection string
   zookeeper.connect=hadoop01:2181,hadoop02:2181,hadoop03:2181
   ```

# HBase Installation

## Installing HBase

1. Extract the archive:

   ```shell
   tar -zxvf hbase-1.3.1-bin.tar.gz -C /opt/module
   ```

2. Edit the configuration files:

   1. hbase-env.sh:

      ```shell
      export JAVA_HOME=/opt/module/jdk1.8.0_121
      # ZooKeeper is managed externally, not by HBase
      export HBASE_MANAGES_ZK=false
      ```

   2. hbase-site.xml:

      ```xml
      <!-- Must point at the NameNode address from fs.defaultFS -->
      <property>
          <name>hbase.rootdir</name>
          <value>hdfs://hadoop01:9000/HBase</value>
      </property>
      <property>
          <name>hbase.cluster.distributed</name>
          <value>true</value>
      </property>
      <property>
          <name>hbase.master.port</name>
          <value>16000</value>
      </property>
      <property>
          <name>hbase.zookeeper.quorum</name>
          <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
      </property>
      <property>
          <name>hbase.zookeeper.property.dataDir</name>
          <value>/opt/module/zookeeper-3.4.10/zkData</value>
      </property>
      ```

   3. regionservers:

      ```
      hadoop01
      hadoop02
      hadoop03
      ```

   4. Symlink the Hadoop configuration files into HBase:

      ```shell
      ln -s /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml /opt/module/hbase-1.3.1/conf/core-site.xml
      ln -s /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml /opt/module/hbase-1.3.1/conf/hdfs-site.xml
      ```

   5. Distribute the hbase directory to the other nodes.

## Mapping Hive to HBase

1. Copy hive-hbase-handler-1.2.2.jar (choose the version that fits your project) into Hive's lib directory.

2. Create symlinks to the HBase jars:

   ```shell
   export HBASE_HOME=/opt/module/hbase-1.3.1
   export HIVE_HOME=/opt/module/hive
   ln -s $HBASE_HOME/lib/hbase-common-1.3.1.jar $HIVE_HOME/lib/hbase-common-1.3.1.jar
   ln -s $HBASE_HOME/lib/hbase-server-1.3.1.jar $HIVE_HOME/lib/hbase-server-1.3.1.jar
   ln -s $HBASE_HOME/lib/hbase-client-1.3.1.jar $HIVE_HOME/lib/hbase-client-1.3.1.jar
   ln -s $HBASE_HOME/lib/hbase-protocol-1.3.1.jar $HIVE_HOME/lib/hbase-protocol-1.3.1.jar
   ln -s $HBASE_HOME/lib/hbase-it-1.3.1.jar $HIVE_HOME/lib/hbase-it-1.3.1.jar
   ln -s $HBASE_HOME/lib/htrace-core-3.1.0-incubating.jar $HIVE_HOME/lib/htrace-core-3.1.0-incubating.jar
   ln -s $HBASE_HOME/lib/hbase-hadoop2-compat-1.3.1.jar $HIVE_HOME/lib/hbase-hadoop2-compat-1.3.1.jar
   ln -s $HBASE_HOME/lib/hbase-hadoop-compat-1.3.1.jar $HIVE_HOME/lib/hbase-hadoop-compat-1.3.1.jar
   ```

3. Edit hive-site.xml:

   ```xml
   <property>
       <name>hive.zookeeper.quorum</name>
       <value>hadoop01,hadoop02,hadoop03</value>
       <description>The list of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
   </property>
   <property>
       <name>hive.zookeeper.client.port</name>
       <value>2181</value>
       <description>The port of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
   </property>
   ```

# Flume Installation

1. Extract the archive.

2. Edit the configuration: copy flume-env.sh.template under flume/conf to flume-env.sh and set:

   ```shell
   export JAVA_HOME=/opt/module/jdk1.8.0_121
   ```
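To verify the Flume installation before wiring up the real collection/transport topology, a throwaway agent can be run from the flume directory. The agent name `a1`, the file name `netcat-logger.conf`, and port 44444 below are illustrative assumptions, not part of the original setup.

```shell
# Write a minimal smoke-test agent config: netcat source -> memory channel -> logger sink
cat > conf/netcat-logger.conf <<'EOF'
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

a1.channels.c1.type = memory

a1.sinks.k1.type = logger

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
EOF

# Start the agent in the foreground; received events are printed to the console
bin/flume-ng agent --conf conf --conf-file conf/netcat-logger.conf --name a1 -Dflume.root.logger=INFO,console

# From another shell, send a test event:
#   echo "hello flume" | nc localhost 44444
```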