HDFS plays a major role in the performance of batch and micro-batch jobs that read and write their data on HDFS. Any bottleneck in the application while reading files from or writing files to HDFS will degrade overall job performance.
DFSIO is a benchmark used to measure the read and write performance of HDFS through MapReduce jobs. It runs file-based read and write tasks in parallel, and the reduce tasks collect all the performance parameters and statistics. You can pass different parameters to test throughput, the total number of bytes processed, average I/O rate, and more. The key is to correlate these outputs with the number of cores, disks, and memory in your Hadoop cluster: understand your cluster's current limitations, mitigate them where possible, and then adjust your job scheduling or coordination to get the maximum out of your cluster resources. The following is the command to run DFSIO:
hadoop jar <HADOOP_CLIENT_INSTALLATION_PATH...
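As a concrete illustration, a typical TestDFSIO session first runs a write test, then a read test, and finally cleans up the benchmark files. This is a sketch only: the exact jar path and version depend on your distribution and `HADOOP_HOME`, and the file counts and sizes shown here are assumed values you should tune to your cluster.

```shell
# Assumed jar location; adjust the path and version to match your installation.
JAR="$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar"

# Write test: 10 files of 1 GB each, written in parallel by map tasks.
hadoop jar $JAR TestDFSIO -write -nrFiles 10 -fileSize 1GB \
  -resFile /tmp/dfsio-write.txt

# Read test: reads back the same files to measure read throughput.
hadoop jar $JAR TestDFSIO -read -nrFiles 10 -fileSize 1GB \
  -resFile /tmp/dfsio-read.txt

# Remove the benchmark data from HDFS when finished.
hadoop jar $JAR TestDFSIO -clean
```

The result files report throughput (MB/s), average I/O rate, and test execution time; comparing the write and read numbers against your disk and network capacity is what reveals where the bottleneck lies.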