19/12/01 21:00:31 DEBUG ipc.Server: Successfully authorized userInfo {
effectiveUser: "sortedmap"
}
protocol: "org.apache.hadoop.hdfs.protocol.ClientProtocol"
19/12/01 21:00:31 DEBUG ipc.Server: got #5
19/12/01 21:00:31 DEBUG ipc.Server: IPC Server handler 5 on 8020: Call#5 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.abandonBlock from 172.17.0.1:55436 for RpcKind RPC_PROTOCOL_BUFFER
19/12/01 21:00:31 DEBUG security.UserGroupInformation: PrivilegedAction as:sortedmap (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
19/12/01 21:00:31 DEBUG hdfs.StateChange: BLOCK* NameSystem.abandonBlock: BP-1947034320-172.17.0.2-1510154331033:blk_1073741838_1014 of file /test/file.txt
19/12/01 21:00:31 DEBUG security.UserGroupInformation: Failed to get groups for user sortedmap by java.io.IOException: No groups found for user sortedmap
19/12/01 21:00:31 DEBUG hdfs.StateChange: DIR* FSDirectory.removeBlock: /test/file.txt with blk_1073741838_1014 block is removed from the file system
19/12/01 21:00:31 DEBUG hdfs.StateChange: persistBlocks: /test/file.txt with 0 blocks is persisted to the file system
19/12/01 21:00:31 DEBUG hdfs.StateChange: BLOCK* NameSystem.abandonBlock: BP-1947034320-172.17.0.2-1510154331033:blk_1073741838_1014 is removed from pendingCreates
19/12/01 21:00:31 DEBUG ipc.Server: Served: abandonBlock queueTime= 6 procesingTime= 7
19/12/01 21:00:31 DEBUG ipc.Server: IPC Server handler 5 on 8020: responding to Call#5 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.abandonBlock from 172.17.0.1:55436
19/12/01 21:00:31 DEBUG ipc.Server: IPC Server handler 5 on 8020: responding to Call#5 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.abandonBlock from 172.17.0.1:55436 Wrote 32 bytes.
19/12/01 21:00:31 DEBUG ipc.Server: got #7
19/12/01 21:00:31 DEBUG ipc.Server: IPC Server handler 6 on 8020: Call#7 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.renewLease from 172.17.0.1:55436 for RpcKind RPC_PROTOCOL_BUFFER
19/12/01 21:00:31 DEBUG security.UserGroupInformation: PrivilegedAction as:sortedmap (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
19/12/01 21:00:31 DEBUG ipc.Server: Served: renewLease queueTime= 1 procesingTime= 0
19/12/01 21:00:31 DEBUG ipc.Server: IPC Server handler 6 on 8020: responding to Call#7 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.renewLease from 172.17.0.1:55436
19/12/01 21:00:31 DEBUG ipc.Server: IPC Server handler 6 on 8020: responding to Call#7 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.renewLease from 172.17.0.1:55436 Wrote 32 bytes.
19/12/01 21:00:31 DEBUG ipc.Server: got #6
19/12/01 21:00:31 DEBUG mortbay.log: REQUEST /logs/hadoop-root-datanode-fe629b2f12e9.out on org.mortbay.jetty.HttpConnection@4d9006c1
19/12/01 21:00:31 DEBUG mortbay.log: sessionManager=org.mortbay.jetty.servlet.HashSessionManager@21e360a
19/12/01 21:00:31 DEBUG mortbay.log: session=null
19/12/01 21:00:31 DEBUG mortbay.log: servlet=org.apache.hadoop.http.AdminAuthorizedServlet-741730375
19/12/01 21:00:31 DEBUG mortbay.log: chain=safety->static_user_filter->org.apache.hadoop.http.AdminAuthorizedServlet-741730375
19/12/01 21:00:31 DEBUG mortbay.log: servlet holder=org.apache.hadoop.http.AdminAuthorizedServlet-741730375
19/12/01 21:00:31 DEBUG mortbay.log: call filter safety
19/12/01 21:00:31 DEBUG ipc.Server: IPC Server handler 4 on 8020: Call#6 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 172.17.0.1:55436 for RpcKind RPC_PROTOCOL_BUFFER
19/12/01 21:00:31 DEBUG mortbay.log: call filter static_user_filter
19/12/01 21:00:31 DEBUG mortbay.log: call servlet org.apache.hadoop.http.AdminAuthorizedServlet-741730375
19/12/01 21:00:31 DEBUG security.UserGroupInformation: PrivilegedAction as:sortedmap (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
19/12/01 21:00:31 DEBUG mortbay.log: RESOURCE=file:/hadoop-2.8.2/logs/hadoop-root-datanode-fe629b2f12e9.out.gz
19/12/01 21:00:31 DEBUG mortbay.log: RESOURCE=file:/hadoop-2.8.2/logs/hadoop-root-datanode-fe629b2f12e9.out
19/12/01 21:00:31 DEBUG mortbay.log: resource=file:/hadoop-2.8.2/logs/hadoop-root-datanode-fe629b2f12e9.out content
19/12/01 21:00:31 DEBUG hdfs.StateChange: BLOCK* getAdditionalBlock: /test/file.txt inodeId 16400 for DFSClient_NONMAPREDUCE_-57205618_1
19/12/01 21:00:31 DEBUG security.UserGroupInformation: Failed to get groups for user sortedmap by java.io.IOException: No groups found for user sortedmap
19/12/01 21:00:31 DEBUG mortbay.log: RESPONSE /logs/hadoop-root-datanode-fe629b2f12e9.out 200
19/12/01 21:00:31 DEBUG net.NetworkTopology: Choosing random from 0 available nodes on node /default-rack, scope=/default-rack, excludedScope=null, excludeNodes=[172.17.0.2:50010]
19/12/01 21:00:31 DEBUG net.NetworkTopology: chooseRandom returning null
19/12/01 21:00:31 DEBUG blockmanagement.BlockPlacementPolicy: Failed to choose from local rack (location = /default-rack); the second replica is not found, retry choosing ramdomly
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:768)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:689)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:596)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:556)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:459)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:390)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:266)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:119)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:135)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1724)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2515)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
19/12/01 21:00:31 DEBUG net.NetworkTopology: Choosing random from 0 available nodes on node /, scope=, excludedScope=null, excludeNodes=[172.17.0.2:50010]
19/12/01 21:00:31 DEBUG net.NetworkTopology: chooseRandom returning null
19/12/01 21:00:31 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true)
19/12/01 21:00:31 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
19/12/01 21:00:31 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
19/12/01 21:00:31 DEBUG ipc.Server: Served: addBlock queueTime= 1 procesingTime= 23 exception= IOException
19/12/01 21:00:31 INFO ipc.Server: IPC Server handler 4 on 8020, call Call#6 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 172.17.0.1:55436
java.io.IOException: File /test/file.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1728)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2515)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
19/12/01 21:00:31 DEBUG hdfs.StateChange: BLOCK* NameSystem.abandonBlock: BP-1947034320-172.17.0.2-1510154331033:blk_1073741838_1014 of file /test/file.txt
19/12/01 21:00:31 DEBUG net.NetworkTopology: chooseRandom returning null
19/12/01 21:00:31 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true)
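The IOException in the trace above is the root cause: the only running datanode (172.17.0.2:50010) is already in the exclude list, so block placement can never satisfy minReplication=1. When grepping large NameNode logs for this condition, the key counts can be pulled out of that one line. A minimal sketch (the helper name and regex are my own, not part of Hadoop):

```python
import re

# Matches the NameNode's "could only be replicated" message, e.g.:
#   File /test/file.txt could only be replicated to 0 nodes instead of
#   minReplication (=1). There are 1 datanode(s) running and 1 node(s)
#   are excluded in this operation.
REPLICATION_RE = re.compile(
    r"could only be replicated to (?P<placed>\d+) nodes? "
    r"instead of minReplication \(=(?P<min_repl>\d+)\)\. "
    r"There are (?P<running>\d+) datanode\(s\) running and "
    r"(?P<excluded>\d+) node\(s\) are excluded"
)

def parse_replication_failure(line):
    """Return the counts from an addBlock failure line, or None if absent."""
    m = REPLICATION_RE.search(line)
    if m is None:
        return None
    return {k: int(v) for k, v in m.groupdict().items()}

line = ("java.io.IOException: File /test/file.txt could only be replicated "
        "to 0 nodes instead of minReplication (=1). There are 1 datanode(s) "
        "running and 1 node(s) are excluded in this operation.")
info = parse_replication_failure(line)
# If running == excluded, every live datanode was ruled out for this write,
# which is exactly the single-datanode situation shown in this log.
print(info)
```

With the log line above this yields `placed=0, min_repl=1, running=1, excluded=1`; running equal to excluded confirms there was no eligible target left for the block.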