Install the virtual machines
Configure static IP addresses
Set the hostnames
Edit /etc/hosts to map hostnames to IP addresses
192.168.xxx.xxx hadoop01
192.168.xxx.xxx hadoop02
192.168.xxx.xxx hadoop03
useradd persagy
passwd persagy (choose your own password)
Add persagy ALL=(ALL) NOPASSWD:ALL to the /etc/sudoers file
Stop the firewall service and disable it at boot
Configure passwordless SSH login for both root and persagy
Synchronize time across the cluster (a sketch of the firewall, SSH, and time-sync steps follows the JDK setup below)
Create a working directory under /opt, e.g. /opt/module (the location can be customized)
Install the JDK (this document uses jdk1.8.0_121; any JDK version compatible with Hadoop may be used)
tar -zxvf jdk-8u121-linux-x64.tar.gz -C /opt/module
Configure the environment variables in /etc/profile:
#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_121
export PATH=$PATH:$JAVA_HOME/bin
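A minimal sketch of the preparation steps above on CentOS 7 (the NTP server name is only an example):

```bash
# Stop the firewall and disable it at boot
sudo systemctl stop firewalld && sudo systemctl disable firewalld
# Passwordless SSH: run as both root and persagy, and repeat on every host
ssh-keygen -t rsa
for host in hadoop01 hadoop02 hadoop03; do ssh-copy-id "$host"; done
# One-shot time sync against a public NTP server (or run ntpd/chronyd instead)
sudo ntpdate ntp.aliyun.com
# Verify the JDK after reloading /etc/profile
source /etc/profile && java -version
```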
| Component | hadoop01 | hadoop02 | hadoop03 |
|---|---|---|---|
| Hadoop-hdfs | nameNode, dataNode | dataNode | dataNode, secondaryNameNode |
| Hadoop-yarn | nodeManager | nodeManager, resourceManager | nodeManager |
| zookeeper | ✅ | ✅ | ✅ |
| flume | collection | collection | transfer |
| kafka | ✅ | ✅ | ✅ |
| hbase | ✅ | ✅ | ✅ |
| spark | ✅ | ✅ | ✅ |
| hive | ✅ | | |
| MySQL (stores Hive metadata) | ✅ | | |
| Software | Version | Notes |
|---|---|---|
| Java | jdk-8u121-linux-x64.tar.gz | |
| Hadoop | hadoop-2.7.2.tar.gz | |
| Hive | apache-hive-1.2.1-bin.tar.gz | Can be connected to with DataGrip or DBeaver; the hiveserver2 service must be running |
| flume | apache-flume-1.7.0-bin.tar.gz | |
| kafka | kafka_2.11-0.11.0.0.tgz | |
| hbase | | |
| zookeeper | 3.5.7 | |
| spark | | |
| mysql | mysql-community-client-5.7.32-1.el7.x86_64.rpm, mysql-community-server-5.7.32-1.el7.x86_64.rpm | mysql-connector-java-5.1.49 (placed into Hive's lib directory) |
Extract the archive
Create the /opt/module/zookeeper-3.4.10/zkData and /opt/module/zookeeper-3.4.10/logs directories
Modify the configuration files
In zkData, create a myid file whose content is 1 (this number must match the server id in zoo.cfg; the actual value can be customized)
cp zoo_sample.cfg zoo.cfg
Add the following configuration to zoo.cfg:
dataDir=/opt/module/zookeeper-3.4.10/zkData
# Note: ZOO_LOG_DIR is normally taken from the environment (e.g. exported before start-up or set in zookeeper-env.sh) rather than from zoo.cfg
ZOO_LOG_DIR=/opt/module/zookeeper-3.4.10/logs
## Each host's myid file contains that host's server id (matching the myid created under zkData); in the lines below, 1, 2, and 3 are the server ids
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
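A minimal per-node finish-and-start sketch, assuming ZooKeeper is installed at the same path on all three hosts:

```bash
# Write the server id that matches zoo.cfg (use 2 on hadoop02, 3 on hadoop03)
echo 1 > /opt/module/zookeeper-3.4.10/zkData/myid
# Start ZooKeeper on every node, then check which node became the leader
/opt/module/zookeeper-3.4.10/bin/zkServer.sh start
/opt/module/zookeeper-3.4.10/bin/zkServer.sh status
```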
tar -zxvf hadoop-2.7.2.tar.gz -C /opt/module/
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Modify the Hadoop configuration files (directory: /opt/module/hadoop-2.7.2/etc/hadoop); every node uses the same configuration. The blocks below are, in order, core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml, followed by the JAVA_HOME setting for the env scripts and the slaves file.
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Address of the HDFS NameNode -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop01:9000</value>
</property>
<!-- Storage directory for files Hadoop generates at runtime -->
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/module/hadoop-2.7.2/data/tmp</value>
</property>
<!-- LZO compression config: start -->
<property>
<name>io.compression.codecs</name>
<value>
org.apache.hadoop.io.compress.GzipCodec,
org.apache.hadoop.io.compress.DefaultCodec,
org.apache.hadoop.io.compress.BZip2Codec,
org.apache.hadoop.io.compress.SnappyCodec,
com.hadoop.compression.lzo.LzoCodec,
com.hadoop.compression.lzo.LzopCodec
</value>
</property>
<property>
<name>io.compression.codec.lzo.class</name>
<value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
<!-- LZO compression config: end -->
</configuration>
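Note: the com.hadoop.compression.lzo codecs above are not bundled with Hadoop; a hadoop-lzo jar must be present on every node or these codecs will fail to load. A sketch (the jar version and target directory are assumptions):

```bash
# hadoop-lzo is built or downloaded separately; the version here is only an example
cp hadoop-lzo-0.4.20.jar /opt/module/hadoop-2.7.2/share/hadoop/common/
# Distribute the jar to hadoop02 and hadoop03 as well
```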
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- HDFS replication factor -->
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<!-- Host of the secondary NameNode -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop03:50090</value>
</property>
</configuration>
<?xml version="1.0"?>
<configuration>
<!-- How reducers fetch data -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Address of the YARN ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop02</value>
</property>
<!-- Enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Retain aggregated logs for 7 days -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
</configuration>
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Run MapReduce on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- Job history server address -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop03:10020</value>
</property>
<!-- Job history server web UI address -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop03:19888</value>
</property>
</configuration>
Set JAVA_HOME (the Java implementation to use) in hadoop-env.sh, yarn-env.sh, and mapred-env.sh:
export JAVA_HOME=/opt/module/jdk1.8.0_121
slaves file (one hostname per line):
hadoop01
hadoop02
hadoop03
(Important) The NameNode runs on hadoop01; before the first start, format it there: hdfs namenode -format
Start Hadoop
Check the processes on all three nodes with jps; a start-up sketch follows below
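A start-up sketch matching the layout above (NameNode on hadoop01, ResourceManager on hadoop02, JobHistoryServer on hadoop03), run from /opt/module/hadoop-2.7.2 after the NameNode has been formatted:

```bash
# On hadoop01: start HDFS (NameNode, DataNodes, SecondaryNameNode)
sbin/start-dfs.sh
# On hadoop02: start YARN (ResourceManager, NodeManagers)
sbin/start-yarn.sh
# On hadoop03: start the MapReduce job history server
sbin/mr-jobhistory-daemon.sh start historyserver
# On every node: verify the running Java processes
jps
```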
tar -zxvf apache-hive-1.2.1-bin.tar.gz -C /opt/module/
mv /opt/module/apache-hive-1.2.1-bin /opt/module/hive
mv hive-env.sh.template hive-env.sh
export HADOOP_HOME=/opt/module/hadoop-2.7.2
export HIVE_CONF_DIR=/opt/module/hive/conf
Hadoop cluster configuration
Start HDFS and YARN
Create the /tmp and /user/hive/warehouse directories on HDFS and make them group-writable:
[persagy@$hostname hadoop-2.7.2]$ bin/hadoop fs -mkdir /tmp
[persagy@$hostname hadoop-2.7.2]$ bin/hadoop fs -mkdir -p /user/hive/warehouse
[persagy@$hostname hadoop-2.7.2]$ bin/hadoop fs -chmod g+w /tmp
[persagy@$hostname hadoop-2.7.2]$ bin/hadoop fs -chmod g+w /user/hive/warehouse
Check whether MySQL is already installed (see the sketch below)
Install the server
1. rpm -ivh mysql-community-server-5.7.32-1.el7.x86_64.rpm
2. Start the service: systemctl start mysqld
3. Get the initial password: grep 'temporary password' /var/log/mysqld.log
Install the client
rpm -ivh mysql-community-client-5.7.32-1.el7.x86_64.rpm
Allow remote root connections (run in the mysql client against the mysql database, after the temporary password has been changed):
use mysql;
update user set host='%' where host='localhost';
flush privileges;
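A minimal sketch of the two steps not spelled out above, on CentOS 7 (the new password is only an example): check for an existing MySQL/MariaDB installation before installing the RPMs, and change the temporary password on first login:

```bash
# Anything found here should be removed first, e.g. sudo rpm -e --nodeps mariadb-libs
rpm -qa | grep -i -e mysql -e mariadb
# Log in with the temporary password, then change it before running other statements
mysql -uroot -p
#   mysql> set password = password('YourNewPassw0rd!');
```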
Configure the Hive metastore in MySQL
Copy the driver jar mysql-connector-java-5.1.49-bin.jar into /opt/module/hive/lib/
Modify hive-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://hadoop01:3306/metastore?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>user</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>passwd</value>
<description>password to use against metastore database</description>
</property>
</configuration>
Hive JDBC access
nohup bin/hiveserver2 > hiveserver2.out 2> hiveserver2.err &
Using DataGrip as an example (see the connection sketch below)
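A minimal connectivity check, assuming hiveserver2 was started as above on hadoop01 and listens on the default port 10000 (the user name is an example):

```bash
# Command-line check with beeline
bin/beeline -u jdbc:hive2://hadoop01:10000 -n persagy
# In DataGrip or DBeaver, use the same JDBC URL: jdbc:hive2://hadoop01:10000
```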
tar -zxvf kafka_2.11-0.11.0.0.tgz -C /opt/module/
Create a logs directory under the Kafka installation directory
Modify the configuration file config/server.properties:
#The broker's globally unique id; it must not repeat (use broker.id=2 on hadoop02 and broker.id=3 on hadoop03)
broker.id=1
#Enable topic deletion
delete.topic.enable=true
#Number of threads handling network requests
num.network.threads=3
#Number of threads handling disk I/O
num.io.threads=8
#Send buffer size of the socket
socket.send.buffer.bytes=102400
#Receive buffer size of the socket
socket.receive.buffer.bytes=102400
#Maximum size of a socket request
socket.request.max.bytes=104857600
#Path where Kafka stores its log (message) data
log.dirs=/opt/module/kafka/logs
#Default number of partitions per topic on this broker
num.partitions=1
#Number of threads per data directory used for log recovery and cleanup
num.recovery.threads.per.data.dir=1
#Maximum time a segment file is retained before deletion
log.retention.hours=168
#ZooKeeper cluster connection string
zookeeper.connect=hadoop01:2181,hadoop02:2181,hadoop03:2181
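A start-and-verify sketch, assuming Kafka is installed at /opt/module/kafka on all three brokers and ZooKeeper is already running:

```bash
cd /opt/module/kafka
# Run on each broker
bin/kafka-server-start.sh -daemon config/server.properties
# Create a test topic replicated across the three brokers, then list topics
bin/kafka-topics.sh --zookeeper hadoop01:2181 --create --topic test --partitions 1 --replication-factor 3
bin/kafka-topics.sh --zookeeper hadoop01:2181 --list
```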
tar -zxvf hbase-1.3.1-bin.tar.gz -C /opt/module
Modify the configuration files (hbase-env.sh, hbase-site.xml, and regionservers, shown below in that order)
export JAVA_HOME=/opt/module/jdk1.8.0_121
export HBASE_MANAGES_ZK=false
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop01:9000/HBase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<!-- Changed after 0.98; earlier versions had no .port property and the default port was 60000 -->
<property>
<name>hbase.master.port</name>
<value>16000</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/opt/module/zookeeper-3.4.10/zkData</value>
</property>
regionservers file (one hostname per line):
hadoop01
hadoop02
hadoop03
ln -s /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml /opt/module/hbase-1.3.1/conf/core-site.xml
ln -s /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml /opt/module/hbase-1.3.1/conf/hdfs-site.xml
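A start-and-verify sketch (run after ZooKeeper and HDFS are up; the install path matches the links above):

```bash
cd /opt/module/hbase-1.3.1
# Starts the HMaster on this node plus the HRegionServers listed in regionservers
bin/start-hbase.sh
# In the shell, `status` should report the live servers
bin/hbase shell
# HMaster web UI (HBase 1.x default port): http://<hmaster-host>:16010
```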
Copy hive-hbase-handler-1.2.2.jar (choose the version that fits your project) into Hive's lib directory
Create symbolic links
export HBASE_HOME=/opt/module/hbase-1.3.1
export HIVE_HOME=/opt/module/hive
ln -s $HBASE_HOME/lib/hbase-common-1.3.1.jar $HIVE_HOME/lib/hbase-common-1.3.1.jar
ln -s $HBASE_HOME/lib/hbase-server-1.3.1.jar $HIVE_HOME/lib/hbase-server-1.3.1.jar
ln -s $HBASE_HOME/lib/hbase-client-1.3.1.jar $HIVE_HOME/lib/hbase-client-1.3.1.jar
ln -s $HBASE_HOME/lib/hbase-protocol-1.3.1.jar $HIVE_HOME/lib/hbase-protocol-1.3.1.jar
ln -s $HBASE_HOME/lib/hbase-it-1.3.1.jar $HIVE_HOME/lib/hbase-it-1.3.1.jar
ln -s $HBASE_HOME/lib/htrace-core-3.1.0-incubating.jar $HIVE_HOME/lib/htrace-core-3.1.0-incubating.jar
ln -s $HBASE_HOME/lib/hbase-hadoop2-compat-1.3.1.jar $HIVE_HOME/lib/hbase-hadoop2-compat-1.3.1.jar
ln -s $HBASE_HOME/lib/hbase-hadoop-compat-1.3.1.jar $HIVE_HOME/lib/hbase-hadoop-compat-1.3.1.jar
Add the following to hive-site.xml:
<property>
<name>hive.zookeeper.quorum</name>
<value>hadoop01,hadoop02,hadoop03</value>
<description>The list of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
</property>
<property>
<name>hive.zookeeper.client.port</name>
<value>2181</value>
<description>The port of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
</property>
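As a quick check of the integration, a sketch that creates a Hive table backed by HBase through the HBase storage handler (the table name and column mapping are illustrative, not from the original setup):

```bash
bin/hive -e "CREATE TABLE hbase_demo(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf1:val')
TBLPROPERTIES ('hbase.table.name' = 'hbase_demo');"
# The backing table should then show up under `list` in the HBase shell
```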
Extract the archive
Modify the configuration: copy flume-env.sh.template under flume/conf to flume-env.sh and set JAVA_HOME:
export JAVA_HOME=/opt/module/jdk1.8.0_121
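A smoke-test sketch for the Flume install (the agent name, install path, file names, and port are examples, not part of the original document): a netcat source feeding a memory channel and a logger sink:

```bash
# Assumes Flume is installed at /opt/module/flume
mkdir -p /opt/module/flume/job
cat > /opt/module/flume/job/netcat-logger.conf <<'EOF'
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.channels.c1.type = memory
a1.sinks.k1.type = logger
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
EOF
cd /opt/module/flume
bin/flume-ng agent --conf conf/ --name a1 --conf-file job/netcat-logger.conf -Dflume.root.logger=INFO,console
```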