Sqoop: ERROR tool.ImportTool: Import failed: java.io.IOException: Filesystem closed

19/06/06 12:04:08 ERROR tool.ImportTool: Import failed: java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:808)
    at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2041)
    at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707)
    at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:703)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:251)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:200)
    at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:173)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:270)
    at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
    at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:127)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:520)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:252)

Cause:

Every caller that obtains a FileSystem via FileSystem.get() with an identical Configuration receives the same cached instance, because Hadoop keys its FileSystem cache on the URI scheme/authority and the current user. When one caller finishes and closes that shared instance, every other holder fails with the exception above on its next HDFS access; in the trace above it surfaces during job submission, inside DistributedFileSystem.delete().
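
The shared-instance behavior is easy to reproduce outside Sqoop. Here is a minimal sketch (the class name FsCacheDemo is ours; it assumes a Configuration pointing at a reachable HDFS):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsCacheDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Same Configuration (same scheme, authority, and user) means the
        // FileSystem cache hands the same instance to both callers.
        FileSystem fs1 = FileSystem.get(conf);
        FileSystem fs2 = FileSystem.get(conf);
        System.out.println(fs1 == fs2); // prints true

        // One caller finishes its work and closes "its" handle ...
        fs1.close();

        // ... and every other holder of the shared instance now fails with
        // java.io.IOException: Filesystem closed
        fs2.exists(new Path("/tmp"));
    }
}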

Solution:

Disable the FileSystem cache for HDFS by adding the following to core-site.xml:

<property>
    <name>fs.hdfs.impl.disable.cache</name>
    <value>true</value>
</property>
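
With the cache disabled, every FileSystem.get() call returns a fresh instance, so one caller closing its handle no longer breaks the others, at the cost of opening more connections to the NameNode.

If changing the cluster-wide core-site.xml is not an option, the same property can be passed per job through Hadoop's generic options, which Sqoop accepts before any tool-specific arguments (the connection details here are placeholders):

sqoop import -D fs.hdfs.impl.disable.cache=true --connect <jdbc-url> --table <table> ...

In your own code, FileSystem.newInstance(conf) achieves the same effect by bypassing the cache for that single call.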