Lessons from a DataNode startup failure when dynamically adding a datanode to Hadoop
Dynamically add a datanode node, hostname node14.cn:
shell>hadoop-daemon.sh start datanode
shell>jps    # check whether the DataNode process has started
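If the daemon exits right after starting, its log file usually explains why. A minimal check, assuming the default log directory $HADOOP_HOME/logs and the default hadoop-<user>-datanode-<hostname>.log naming:
shell>tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-node14.cn.log    # look for the last FATAL/ERROR entries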
The DataNode process disappeared immediately after starting. The log contained the following records:
2018-04-15 00:08:43,158 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-04-15 00:08:43,168 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2018-04-15 00:08:43,673 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-04-15 00:08:43,837 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-04-15 00:08:43,837 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2018-04-15 00:08:43,839 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is node11.cn:9000
2018-04-15 00:08:44,138 WARN org.apache.hadoop.fs.FileSystem: "node11.cn:9000" is a deprecated filesystem name. Use "hdfs://node11.cn:9000/" instead.
2018-04-15 00:08:44,196 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://node11.cn:9001
2018-04-15 00:08:44,266 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-04-15 00:08:44,273 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2018-04-15 00:08:44,293 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-04-15 00:08:44,374 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2018-04-15 00:08:44,377 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2018-04-15 00:08:44,411 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: node11.cn:9001
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:892)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:828)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:706)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:593)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1446)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:887)
    ... 8 more
2018-04-15 00:08:44,414 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2018-04-15 00:08:44,415 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2018-04-15 00:08:44,415 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2018-04-15 00:08:44,415 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: node11.cn:9001
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:892)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:828)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:706)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:593)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1446)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:887)
    ... 8 more
2018-04-15 00:08:44,423 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2018-04-15 00:08:44,426 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node14.cn/192.168.74.114
************************************************************/

Solution:
The dfs/ directory on node14.cn apparently still held state carried over from an existing node (note that the log reports fs.defaultFS as node11.cn and the process tries to come up as a NameNode), which kept the new DataNode from starting cleanly. Delete the contents of the dfs directory and re-run the following commands:
shell>rm -rf dfs/
shell>hadoop-daemon.sh start datanode
shell>yarn-daemon.sh start nodemanager
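To confirm that the daemons now stay up and that the NameNode sees the new node, the standard checks below can be used (the exact report layout varies between Hadoop versions):
shell>jps    # DataNode and NodeManager should remain running this time
shell>hdfs dfsadmin -report    # node14.cn should be listed among the live datanodes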
Refresh the namenode:
shell>hdfs dfsadmin -refreshNodes
shell>start-balancer.sh
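Note that -refreshNodes only makes the NameNode re-read its host lists, so the new host normally has to be listed there beforehand. A minimal sketch, assuming Hadoop 2.x defaults under $HADOOP_HOME/etc/hadoop; the dfs.include file name is an assumption and only matters if dfs.hosts points to such a file:
shell>echo "node14.cn" >> $HADOOP_HOME/etc/hadoop/slaves    # used by the cluster start/stop scripts
shell>echo "node14.cn" >> $HADOOP_HOME/etc/hadoop/dfs.include    # only if dfs.hosts is configured to use this file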
The new datanode has been added successfully.
Redistribute data onto the newly added datanode host:
shell>hadoop balancer -threshold 10    # threshold controls how far each node's disk usage may deviate from the cluster average, in percent; the smaller the value, the more evenly disk usage is balanced across nodes
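To verify the effect of balancing, per-node disk usage can be compared before and after; this simply filters the standard dfsadmin report:
shell>hdfs dfsadmin -report | grep "DFS Used%"    # one line per datanode plus the cluster summary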