ZooKeeper cluster install
# Extract the archive, rename the directory, and set the environment variables

# Create the data directory under the ZooKeeper root
mkdir data

# Set this ZooKeeper server's id
vim data/myid
----------
1
----------

# Edit the ZooKeeper configuration file
mv conf/zoo_sample.cfg conf/zoo.cfg
vim conf/zoo.cfg
----------------------------------
dataDir=/opt/software/zookeeper-3.6.3/data
clientPort=2181
server.1=zzy:2888:3888
server.2=zyh:2888:3888
server.3=why:2888:3888
----------------------------------

# Distribute to the other machines in the cluster
bash /root/kb18/shell/rcopy.sh /opt/software/apache-zookeeper-3.6.3-bin /opt/software
bash /root/kb18/shell/rcopy.sh /etc/profile.d/kb18.sh /etc/profile.d

# On the other machines, change the corresponding myid to 2 and 3

# Start the ZooKeeper service on every node
bash /root/kb18/shell/rcall.sh "zkServer.sh start" all

# Check the ZooKeeper processes
bash /root/kb18/shell/rcall.sh jps all

# Check each node's role status (leader/follower)
bash /root/kb18/shell/rcall.sh "zkServer.sh status" all
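The rcopy.sh and rcall.sh helpers used above are local scripts under /root/kb18/shell/ and are not listed in this post. A minimal sketch of what they might look like follows, assuming the three hosts are zzy, zyh, and why (taken from the server.N entries in zoo.cfg) and that passwordless SSH between the nodes is already configured; the real scripts may differ.

#!/bin/bash
# rcopy.sh (sketch): copy a file or directory into the same destination directory on the other nodes
# Usage: bash rcopy.sh <source> <destination-directory>
src=$1
dest=$2
for host in zzy zyh why; do                   # host list assumed from zoo.cfg
  [ "$host" = "$(hostname)" ] && continue     # skip the local machine
  scp -r "$src" "$host":"$dest"
done

#!/bin/bash
# rcall.sh (sketch): run a command on every node; the trailing "all" argument is accepted but not used here
# Usage: bash rcall.sh "<command>" all
cmd=$1
for host in zzy zyh why; do
  echo "===== $host ====="
  ssh "$host" "source /etc/profile; $cmd"      # load the environment so zkServer.sh and jps are on PATH
done

With scripts along these lines in place, the rcall.sh invocations above run zkServer.sh start, jps, and zkServer.sh status on each of the three nodes in turn.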
Spark
Submit the analysis job with spark-submit, giving the main class via --class, the cluster manager via --master, and, if needed, a --deploy-mode. Note that --master takes the Spark master URL, while the HDFS input and output paths are passed as application arguments. The full command is shown below.
spark-submit \
  --class ProductAnalyzer \
  --master spark://master01:7077 \
  --name spark-sql-product \
  sparkSql-1.0-SNAPSHOT.jar \
  hdfs://master01:8020/spark/data/products.txt \
  hdfs://master01:8020/spark/result/product_result
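Once the job completes, the result can be read back from HDFS. The commands below are a quick check, assuming the application writes plain-text part files under the output path passed as the second argument:

# List the output directory written by the job
hdfs dfs -ls /spark/result/product_result

# Print the part files to the console
hdfs dfs -cat /spark/result/product_result/part-*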