Contribute a limited/specific amount of storage as a slave to a Hadoop cluster
Oct 24, 2020
Problem Statement:
In a Hadoop cluster, how can a slave (DataNode) contribute only a limited/specific amount of storage to the cluster, rather than its entire disk?
Prerequisite:
- A configured Hadoop cluster (NameNode running and Hadoop installed on the slave).
Step-1) Create and attach a volume
- First, create a new volume of the desired size (e.g. an EBS volume, since the device below appears as /dev/xvdf, the usual EC2 naming).
- Attach the volume to the slave instance.
- Then list the connected disks to confirm the new device is visible.
Command
fdisk -l
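If running `fdisk -l` as root is inconvenient, `lsblk` gives a quick unprivileged view of the attached block devices and their sizes; the newly attached volume should appear with the expected capacity:

```shell
# list block devices with name, size, type, and mount point
lsblk
```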
Step-2) Create a partition
- Now create a partition on the new device, /dev/xvdf:
fdisk /dev/xvdf
Step-3) Format the partition
- After partitioning, format the new partition (note that the partition /dev/xvdf1 is formatted, not the whole device):
mkfs.ext4 /dev/xvdf1
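Formatting can also be rehearsed on a file-backed image (fs.img is a hypothetical scratch file; the -F flag lets mkfs.ext4 operate on a regular file without prompting):

```shell
# create a 16 MB scratch image and put an ext4 filesystem on it
dd if=/dev/zero of=fs.img bs=1M count=16
mkfs.ext4 -q -F fs.img

# dumpe2fs -h prints the superblock header, confirming a valid ext4
# filesystem (magic number 0xEF53)
dumpe2fs -h fs.img
```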
Step-4) Mount the volume
- Now mount the formatted partition onto a directory.
- Create a directory (here /dn1 is used as an example name):
mkdir /dn1
- Mount it:
mount /dev/xvdf1 /dn1
Step-5) Add the directory to the slave's configuration file
- Point the DataNode at this directory in hdfs-site.xml, so that only the mounted volume's capacity is contributed to the cluster.
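Assuming the volume is mounted at /dn1 (a hypothetical directory name), the relevant hdfs-site.xml entry looks like this; the property is dfs.datanode.data.dir on Hadoop 2.x and later (dfs.data.dir on Hadoop 1.x):

```xml
<configuration>
  <property>
    <!-- directory where this DataNode stores HDFS blocks; only the
         capacity of the volume mounted here is contributed -->
    <name>dfs.datanode.data.dir</name>
    <value>/dn1</value>
  </property>
</configuration>
```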
Step-6) Start the DataNode
- Now start the DataNode daemon on the slave.
- Command
hadoop-daemon.sh start datanode
Output
- Finally, check the cluster report to confirm the slave is connected and contributes only the volume's capacity (see the Configured Capacity field in the report).
Command
hadoop dfsadmin -report
- On newer Hadoop versions this command is deprecated in favour of hdfs dfsadmin -report.