Storm (0.9.3) Configuration
Published: 2019-06-22


1. Preparation

192.168.83.129(hadoop1) Nimbus
192.168.83.130(hadoop2) Supervisor1
192.168.83.132(hadoop3) Supervisor2
JDK 1.6 or later on every node.
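Since the zoo.cfg shown later refers to the nodes by hostname (hadoop1/2/3), it is assumed that every node can resolve those names and already has a JDK on its PATH. A minimal sketch of the entries and check on each node:
# vim /etc/hosts
192.168.83.129 hadoop1
192.168.83.130 hadoop2
192.168.83.132 hadoop3
# java -version /* must report 1.6 or later */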

2. ZooKeeper Configuration

The following steps are the same on every node:
# tar -zxvf zookeeper-3.3.6.tar.gz 
# cp -R zookeeper-3.3.6 /usr/local/
# vim /etc/profile
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.3.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin
# cd /usr/local/zookeeper-3.3.6/conf
# cp zoo_sample.cfg zoo.cfg
# vim zoo.cfg
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
dataDir=/var/zookeeper
# mkdir /var/zookeeper
# chmod 777 /var/zookeeper
In each node's dataDir, create a myid file containing that node's id (1, 2, or 3), as sketched below.
# bin/zkServer.sh start|stop|status
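A minimal sketch of creating the myid file and sanity-checking the ensemble (id 1 is for hadoop1; use 2 and 3 on the other nodes):
# echo 1 > /var/zookeeper/myid
# bin/zkServer.sh start
# bin/zkServer.sh status /* once all three nodes are up, one should report "leader" and the others "follower" */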

 

3. Install or Upgrade to Python 2.7

# tar -zxvf Python-2.7.8.tgz
# ./configure --prefix=/usr/local/python
# make /* run make and make install as two separate steps */
# make install
# mv /usr/bin/python /usr/bin/python_old
# ln -s /usr/local/python/bin/python /usr/bin
Change the first line of the /usr/bin/yum script to #!/usr/bin/python_old, so that yum keeps running on the original system Python.
# python -V /* should now report the new version, 2.7.8 */
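Because yum was pointed back at the old interpreter, it is worth a quick check that the swap did not break it (a sketch; python_old is the binary renamed above):
# head -1 /usr/bin/yum /* should read #!/usr/bin/python_old */
# yum --version >/dev/null && echo "yum still works"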

 

4. Configure Storm

# tar -zxvf apache-storm-0.9.3.tar.gz

# mv apache-storm-0.9.3 /usr/local/

# vim conf/storm.yaml
storm.zookeeper.servers:
    - "192.168.83.129"
    - "192.168.83.130"
    - "192.168.83.132"
nimbus.host: "192.168.83.129"
storm.local.dir: "/home/hadoop/storm/workdir"

# mkdir -p /home/hadoop/storm/workdir

# chmod 777 /home/hadoop/storm/workdir
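The same storm.yaml must be present on every node, so after editing it on hadoop1 one option is to copy it to the supervisors. The sketch below assumes Storm is unpacked at the same path on each node and that SSH access between the nodes is available:
# scp conf/storm.yaml hadoop2:/usr/local/apache-storm-0.9.3/conf/
# scp conf/storm.yaml hadoop3:/usr/local/apache-storm-0.9.3/conf/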

 

5. Start and Test

On the Nimbus node, start:
# bin/storm nimbus >/dev/null 2>&1 & /* start the Nimbus daemon in the background */
# bin/storm ui >/dev/null 2>&1 & /* start the UI daemon; browse to http://{nimbushost}:8080 */
# bin/storm logviewer >/dev/null 2>&1 & /* start the logviewer daemon in the background */
On each worker node, start:
# bin/storm supervisor >/dev/null 2>&1 & /* start the Supervisor daemon in the background */
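To verify the cluster end to end, one option is to submit the WordCount topology from the storm-starter examples and watch it in the UI; the jar path below is an assumption based on the usual 0.9.3 distribution layout:
# bin/storm jar examples/storm-starter/storm-starter-topologies-0.9.3.jar storm.starter.WordCountTopology wordcount /* submit a test topology */
# bin/storm list /* the topology should show up as ACTIVE */
# bin/storm kill wordcount /* remove it when done */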

 storm.yaml

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
    - "192.168.83.129"
    - "192.168.83.130"
    - "192.168.83.132"

nimbus.host: "192.168.83.129"
storm.local.dir: "/home/hadoop/storm/workdir"
#
#
# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
#     - org.mycompany.MyType
#     - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
#     - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
#     - "server1"
#     - "server2"

## Metrics Consumers
# topology.metrics.consumer.register:
#   - class: "backtype.storm.metric.LoggingMetricsConsumer"
#     parallelism.hint: 1
#   - class: "org.mycompany.MyMetricsConsumer"
#     parallelism.hint: 1
#     argument:
#       - endpoint: "metrics-collector.mycompany.org"

zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/var/zookeeper
# the port at which the clients will connect
clientPort=2181
dataLogDir=/usr/local/zookeeper-3.3.6/logs
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888

 

Reposted from: https://www.cnblogs.com/bobsoft/p/4421399.html
