Solving the Hadoop HDFS "the directory item limit is exceed: limit=1048576" error
Problem description:
1. Files can no longer be written to the Hadoop HDFS filesystem;
2. The Hadoop NameNode log records:
the directory item limit is exceed: limit=1048576
3. A single HDFS directory has exceeded 1,048,576 files; the default per-directory limit is 1,048,576 items, so the limit needs to be raised (a way to check a directory's entry count is sketched below).
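If you want to identify the offending directory first, one quick check is to count its direct children, since the limit applies to the immediate entries of a single directory. A sketch, with /data/logs standing in for whatever directory your writers target:

# Count the direct children of one directory; the first line of "-ls"
# output is a "Found N items" header, so skip it.
hdfs dfs -ls /data/logs | tail -n +2 | wc -l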
Solution:
Add the following parameter to the hdfs-site.xml configuration file, push the file to every node in the Hadoop cluster, and restart the Hadoop service:

<property>
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <value>3200000</value>
  <description>Defines the maximum number of items that a directory may contain. Cannot set the property to a value less than 1 or more than 6400000.</description>
</property>
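After the restart, you can confirm the new value was picked up with hdfs getconf (run it on a node that received the pushed configuration, since getconf reads the local config):

hdfs getconf -confKey dfs.namenode.fs-limits.max-directory-items
# expected output: 3200000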
Follow-up problem:
2017-11-17 13:03:31,795 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-11-17 13:03:31,797 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at name-01/10.0.0.101
************************************************************/
2017-11-17 13:09:45,016 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = name-01/10.0.0.101
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.0-cdh6.7.4
STARTUP_MSG:   classpath = /etc/hadoop/conf:/usr/lib/hadoop/lib/commons-io-2.4.jar:... [several hundred jar entries omitted]
STARTUP_MSG:   build = http://github.com/cloudera/hadoop -r 2390c11b3cb7a741189f62797de0d9862f48e211; compiled by 'jenkins' on 2016-09-20T23:02Z
STARTUP_MSG:   java = 1.7.0_75
************************************************************/
2017-11-17 13:09:45,026 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-11-17 13:09:45,030 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2017-11-17 13:09:45,653 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://bmh
[... metrics, web-server and FSNamesystem configuration entries omitted ...]
2017-11-17 13:09:46,508 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-11-17 13:09:46,628 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2017-11-17 13:09:46,628 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2017-11-17 13:09:46,630 INFO org.apache.hadoop.util.GSet: 2.0% max memory 958.5 MB = 19.2 MB
2017-11-17 13:09:46,630 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
[... BlockManager, INodeMap, safemode and retry-cache configuration entries omitted ...]
2017-11-17 13:09:46,887 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /hdname/in_use.lock acquired by nodename 53363@name-01
2017-11-17 13:09:47,773 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/hdname/current/fsimage_0000000000025720921, cpktTxId=0000000000025720921)
2017-11-17 13:09:49,818 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 3109720 INodes.
2017-11-17 13:10:01,010 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2984ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=3433ms
GC pool 'PS Scavenge' had collection(s): count=1 time=40ms
[... dozens of similar JvmPauseMonitor "Detected pause" entries over the next two minutes omitted ...]
2017-11-17 13:12:14,285 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2264ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2763ms
2017-11-17 13:12:14,338 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at org.apache.hadoop.hdfs.server.namenode.INodeMap.get(INodeMap.java:92)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getInode(FSDirectory.java:2357)
        at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectorySection(FSImageFormatPBINode.java:207)
        at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:262)
        at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:181)
        at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:946)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:930)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:749)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:680)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:292)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1096)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:778)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:609)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:670)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:838)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:817)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1538)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1606)
2017-11-17 13:12:14,343 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-11-17 13:12:14,344 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at name-01/10.0.0.101
************************************************************/
The Hadoop NameNode JVM ran out of memory and could not start successfully.
By default the daemon starts with -Xmx1000m (consistent with the "max memory 958.5 MB" GSet lines in the log above).
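A rough sanity check, using the often-quoted (and only approximate) rule of thumb of about 1 GB of NameNode heap per million namespace objects: the fsimage above holds 3,109,720 inodes, suggesting on the order of 3 GB of heap, while -Xmx1000m caps it below 1 GB, so the image load predictably dies with "GC overhead limit exceeded".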
Solution:
A CDH yum installation ships without a hadoop-env.sh file by default, so create hadoop-env.sh and add the heap environment variable:
export HADOOP_HEAPSIZE=32000
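HADOOP_HEAPSIZE is given in megabytes and becomes the -Xmx value for Hadoop daemons launched by the service scripts, so 32000 means a 32 GB heap. A minimal sketch of the file, assuming the usual /etc/hadoop/conf location of a CDH package install:

# /etc/hadoop/conf/hadoop-env.sh -- create it if the install ships none
# Daemon heap size in MB; 32 GB leaves the NameNode ample room to load
# the ~3.1-million-inode fsimage seen in the log above.
export HADOOP_HEAPSIZE=32000

After restarting, "ps -ef | grep NameNode" should show -Xmx32000m among the JVM arguments.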
Problem solved!