
Top 20 HDFS Commands You Should Know About [2024]

By Rohit Sharma

Updated on Nov 21, 2022 | 7 min read


Hadoop is an open-source Apache framework that enables the distributed processing of large-scale data sets across clusters of commodity machines using simple programming models. It provides a distributed storage environment spanning numerous computers, with excellent scalability. Read more about HDFS and its architecture.

Goals of HDFS

1. It Provides a Large-Scale Distributed File System

A single cluster can scale to around 10k nodes, 100 million files, and 10 PB of storage

2. Optimized for Batch Processing

Provides very high aggregate bandwidth across the cluster

3. Assumes Commodity Hardware

It detects hardware failures and recovers from them automatically

Files remain accessible through replicas even if hardware fails

4. Smart Client Intelligence

The client can find the location of the blocks

The client can access the data directly from the data nodes

5. Data Consistency

The client can append to existing files

It follows the write-once-read-many access model

6. File Replication in Blocks

Files are broken into blocks (128 MB by default) that are distributed and replicated across multiple nodes (see the worked example after this list)

7. Meta-Data in Memory

The entire meta-data is stored in the Namenode's main memory

Meta-data consists of the list of files, the list of blocks, and the list of data-nodes

A transaction log records file creations and file deletions

8. Data Correctness

It uses checksums to validate the data

The client calculates a checksum for every 512 bytes; on reads, the client retrieves both the data and its checksum from the data nodes

If validation fails, the client can read from another replica

9. Data Pipelining

The client begins by writing to the first data node

The first data node forwards the data to the next data node in the pipeline

When all replicas are written, the client moves on to writing the next block of the file
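
As a rough worked example (assuming the default 128 MB block size and a replication factor of 3, both of which are configurable): a 1 GB file splits into 1024 MB / 128 MB = 8 blocks, and with 3 replicas per block the cluster stores 8 × 3 = 24 block replicas, or about 3 GB of raw storage.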

HDFS Architecture

The Hadoop Distributed File System (HDFS) stores files as blocks. Its architecture is described as master/slave, made up of a Namenode and Datanodes.

  1. Namenode: It functions as the master server, managing the file system namespace and regulating client access to files.
  • It tracks which data nodes hold the data blocks of each file. This block mapping is rebuilt from data node reports every time the system starts.
  • The Namenode executes file system namespace operations such as opening, closing, and renaming files and directories.
  2. Datanode: It is the second component in the HDFS cluster. Usually, one Datanode runs per node in the cluster.
  • DataNodes act as slaves: they run on each machine in the cluster and implement the actual storage, serving read and write requests from clients.

HDFS Top 20 Commands

Here is a list of all the HDFS commands:

1. To get the list of all the files in the HDFS root directory

  • Command: hdfs dfs [generic options] -ls [-C] [-h] [-q] [-R] [-t] [-S] [-u] [<path>…]
  • Note: Specify the path from the root, just as in a general Linux file system. The -h option prints sizes in a human-readable format, and -R recursively lists the contents of subdirectories.
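  • Example (with a hypothetical directory /user/data): hdfs dfs -ls -h -R /user/data recursively lists every file under /user/data with human-readable sizes.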

2. Help

  • Command: hdfs dfs -help
  • Note: It prints usage information for all the available file system commands
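  • Example: hdfs dfs -help ls prints detailed help for the -ls command alone.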

3. Concatenate all the files in a directory into a single file

  • Command: hdfs dfs [generic options] -getmerge [-nl] <src> <localdst>
  • Note: This generates a new file in a local directory that contains all the files from a source HDFS directory, concatenated together. The -nl option adds a newline between the files. With this command, you can combine a collection of small records into a single file for further processing.
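  • Example (illustrative paths): hdfs dfs -getmerge -nl /user/data/logs ./merged-logs.txt merges every file under /user/data/logs into one local file.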

4. Show Disk Usage in Megabytes for a Directory: /dir

  • Command: hdfs dfs [generic options] -du [-s] [-h] <path> …
  • Note: The -h option gives you a human-readable output of sizes (e.g., megabytes or gigabytes), and -s prints an aggregate summary instead of per-file usage.
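  • Example (illustrative path): hdfs dfs -du -s -h /dir prints the total size of /dir in human-readable units.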

5. Modifying the replication factor for a file

  • Command: hadoop fs -setrep -w 1 /root/journaldev_bigdata/derby.log
  • Note: The replication factor is the number of copies of a file maintained across the Hadoop cluster; the -w flag makes the command wait until replication completes.
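  • Example: hadoop fs -setrep -w 3 /root/journaldev_bigdata/derby.log raises the file's replication factor back to the usual default of 3.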

6. copyFromLocal

  • Command: hadoop fs -copyFromLocal derby.log /root/journaldev_bigdata
  • Note: This command copies a file from the local file system to HDFS

7. -rm -r

  • Command: hadoop fs -rm -r /root/journaldev_bigdata
  • Note: With the -rm -r command, we can remove an entire HDFS directory recursively

8. Expunge

  • Command: hadoop fs -expunge
  • Note: This command permanently empties the trash.

9. fs -du

  • Command:  hadoop fs -du /root/journaldev_bigdata/
  • Note: This command shows the disk usage of the files under the given HDFS directory.

10. mkdir

  • Command: hadoop fs -mkdir /root/journaldev_bigdata
  • Note: This command creates a new directory in HDFS at the given path.

11. text

  • Command: hadoop fs -text <src>
  • Note: This command takes a source file (for example, a zipped file) and outputs its contents in text format.
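  • Example (illustrative path): hadoop fs -text /user/data/sample.gz prints the decompressed contents of the gzipped file to the terminal.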

12. Stat 

  • Command: hadoop fs -stat [format] <path>
  • Note: The stat command prints statistics about the file or directory at the given path, in the specified format.
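  • Example (illustrative path): hadoop fs -stat "%n %r %o" /user/data/test prints the file's name, replication factor, and block size; other format specifiers include %b (size in bytes), %u (owner), and %y (modification time).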

13. chmod : (Hadoop chmod Command Usage)

  • Command: hadoop fs -chmod [-R] <mode> <path>
  • Note: This command changes the permissions of a file or directory; the -R option applies the change recursively.
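  • Example (illustrative path): hadoop fs -chmod -R 755 /user/data gives the owner full access and everyone else read and execute access, recursively.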

14. appendToFile

  • Command: hadoop fs -appendToFile <localsrc> <dest>
  • Note: This command appends one or more files from the local file system (for example, localfile1 and localfile2) to the file specified as the destination in HDFS.
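  • Example (illustrative names): hadoop fs -appendToFile localfile1 localfile2 /user/data/appendfile appends both local files, in order, to /user/data/appendfile.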

15. Checksum

  • Command: hadoop fs -checksum <src>
  • Note: This shell command returns the checksum information of a file.

16. Count

  • Command: hadoop fs -count [options] <path>
  • Note: This command counts the number of directories, files, and bytes under the given path.
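  • Example (illustrative path): hadoop fs -count -h /user/data prints the directory count, file count, and total size in human-readable units.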

17. Find

  • Command: hadoop fs -find <path> … <expression>
  • Note: This command finds all files that match the specified expression.
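  • Example (illustrative pattern): hadoop fs -find / -name "derby*" -print searches the entire file system for files whose names start with derby.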

18. getmerge

  • Command: hadoop fs -getmerge <src> <localdest>
  • Note: This command merges the given HDFS source files into a single file on the local file system (see command 3 above).

19. touchz

  • Command: hadoop fs -touchz /directory/filename
  • Note: This command creates a zero-byte file in HDFS at the given path.

20. fs -ls

  • Command: hadoop fs -ls
  • Note: This command lists the files and subdirectories in the user's home directory, which is the default when no path is given.

Hopefully, this article helped you understand HDFS commands for executing operations on the Hadoop file system. It has covered all the fundamental HDFS commands.

If you are interested in learning more about Big Data, check out our PG Diploma in Software Development Specialization in Big Data program. It is designed for working professionals and provides 7+ case studies & projects, covers 14 programming languages & tools, and includes practical hands-on workshops and more than 400 hours of rigorous learning with job placement assistance at top firms.

Learn software development online from the world's top universities. Earn Executive PG Programs, Advanced Certificate Programs, or Master's Programs to fast-track your career.

