Implementing disk quota in Linux

Quota can be implemented in two ways:

On inode

On block

Here I am covering the quota implementation on blocks. A block is the smallest unit of space allocation on a disk; usually one block equals 1 KB.

Soft limit : This is the disk limit at which the user gets only a warning message saying that the disk quota is about to be exceeded. It is just a warning; no restriction on data creation occurs at this point.

Hard limit : This is the disk limit at which the user gets an error message stating that data can no longer be created.

Implementing QUOTA :

Step1 : Select/prepare the partition for quota. Most of the time disk quota is implemented to stop users from creating unwanted data on servers, so we will implement disk quota on the /home mount point.

#vi /etc/fstab

Edit the /home mount point as follows

Before editing

/dev/hda2 /home ext3 defaults 0 0

after editing

/dev/hda2 /home ext3 defaults,usrquota 0 0

Step2 : Remount the partition (this is done so that the kernel's mount table is updated). Otherwise you could reboot the system to update the mount table, which is not preferred for live servers.

#mount -o remount,rw /home

Here -o specifies options: we remount the /home partition with read and write options.
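To confirm that the remount actually picked up the usrquota option, you can check the kernel's mount table in /proc/mounts. A small sketch (the has_opt helper below is my own, not a standard command):

```shell
# has_opt FILE MOUNTPOINT OPTION: succeed if the mtab/fstab-style FILE
# lists OPTION among the mount options of MOUNTPOINT (helper name is made up)
has_opt() {
  awk -v mp="$2" -v opt="$3" '
    $2 == mp { n = split($4, a, ","); for (i = 1; i <= n; i++) if (a[i] == opt) ok = 1 }
    END { exit !ok }' "$1"
}

# On a live system you would point it at the kernel mount table, e.g.:
#   has_opt /proc/mounts /home usrquota && echo "usrquota active"
```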

Step3 : Creating quota database

#quotacheck -cu /home

The option -c is for creating the disk quota DB and -u is for users.

Check whether the user database is created: when you run ls /home you should see an aquota.user file in the /home directory, which contains the user database.

Step4 : Switching on quota

#quotaon /home

Now get the report of the default quota values for user surendra

#repquota -a | grep surendra

User (block limits: used, soft, hard; then file limits: used, soft, hard)

surendra_anne -- 4 0 0 1 0 0

surendra_a -- 4 0 0 1 0 0

surendra_test -- 16 0 0 4 0 0

Step5 : Now implementing disk quota for a user on /home mount point(/dev/hda2)

#setquota -u surendra_anne 100 110 0 0 /dev/hda2
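Note that setquota takes its soft and hard limits as block counts, so with 1 KB blocks the values above are kilobytes. A quick sketch of converting megabyte limits into block counts (the 1 KB block size is an assumption carried over from above):

```shell
# Convert MB limits into 1 KB blocks for setquota
soft_mb=100
hard_mb=110
soft_blocks=$((soft_mb * 1024))
hard_blocks=$((hard_mb * 1024))
echo "$soft_blocks $hard_blocks"   # 102400 112640

# Then, as root (device name as in the step above):
#   setquota -u surendra_anne "$soft_blocks" "$hard_blocks" 0 0 /dev/hda2
```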

Step6 : To check whether the quota is implemented or not, log in as user surendra_anne and execute this command

#repquota -a



Here, when the 100 MB soft limit is reached the user gets a warning message, and when he reaches the 110 MB hard limit he cannot create any more data. (Note: setquota takes its limits in 1 KB blocks, so the values 100 and 110 above are actually kilobytes; use 102400 and 112640 for 100 MB and 110 MB.)

Removing quota :

To do this, all the users should be logged out of the system, so it is better to do it in runlevel one.

Step8 : Stop the disk quota

#quotaoff /home

Step9 : Remove the quota database, which is located in /home

#rm /home/aquota.user

Step10 : Edit the fstab file and remove usrquota from the /home line

#vi /etc/fstab

Before editing

/dev/hda2 /home ext3 defaults,usrquota 0 0

After editing

/dev/hda2 /home ext3 defaults 0 0

Step11 : Remount the /home partition

#mount -o remount,rw /home

That’s it, you are done with disk quota implementation in Linux. Now test yourself by creating a Linux user disk quota on your own.

Linux Booting Process

When we power on, power is supplied to the SMPS (switched-mode power supply), which converts AC to DC.

The DC power is supplied to all the devices connected to the system.

Once the processor gets power, it executes the BIOS, a piece of code stored in flash memory on the motherboard. The BIOS determines which hardware needs to be loaded for booting.

The BIOS does two tasks:

Run the POST operation (Power On Self Test).

Select the first boot device.

POST is the process of checking hardware availability.

After POST, the BIOS selects the first boot device, as configured in the BIOS settings.

>>>>> MBR

The BIOS loads the MBR from the first boot device. The MBR is located in the first sector of the hard disk and its size is 512 bytes.

The MBR contains the following details:

>Primary boot loader code (446 bytes)

>Partition table information (64 bytes)

>Magic number (2 bytes): a validation check. The value 0x55AA marks the sector as a valid MBR; any other value means the MBR is corrupted.
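The magic number is easy to inspect. A sketch that builds a throwaway 512-byte image, stamps the 0x55AA signature at offset 510, and reads it back (on a real machine you would read the first sector of the boot disk as root instead of using a temp file):

```shell
# Create a fake 512-byte "MBR" and write the 0x55AA signature at offset 510
# (octal \125\252 == hex 55 AA)
dd if=/dev/zero of=/tmp/mbr.img bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=/tmp/mbr.img bs=1 seek=510 conv=notrunc 2>/dev/null

# Read back the last two bytes; a valid MBR shows 55aa
sig=$(od -An -tx1 -j510 -N2 /tmp/mbr.img | tr -d ' \n')
echo "$sig"   # 55aa
```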

MBR contains machine code instructions for booting the machine, called a boot loader, along with the partition table.


Once the MBR loads the boot loader into memory, it gives control to the boot loader. There are 3 stages in the boot loader: stage 1, stage 1.5 and stage 2.

>The first stage is a small machine-code program in the MBR; its role is to load the stage 1.5 or stage 2 boot loader by reading the first part of it into memory.

>GRUB stage 1.5 is located in the first 30 KB of the hard disk, immediately after the MBR and before the first partition. This space is utilised to store file system drivers and modules.

This enables stage 1.5 to load stage 2 from any known location on the file system, i.e. /boot/grub.

>Once the second stage boot loader is in memory, it presents the user with a graphical screen showing the different operating systems or kernels it has been configured to boot.

i.e. the splash image located at /boot/grub/splash.xpm.gz together with the list of available kernels, from which you can manually select a kernel. On this screen a user can use the arrow keys to choose which operating system or kernel to boot and press Enter.

The 2nd stage is responsible for loading the kernel according to /boot/grub/grub.conf (the GRUB configuration file), along with any other modules needed.

Once the second stage boot loader has determined which kernel to boot, it locates the corresponding kernel binary in the /boot/ directory. The kernel binary is named using the following format: /boot/vmlinuz-<kernel-version>.

The boot loader then places one or more appropriate initramfs images into memory. Next, the kernel decompresses these images from memory to /sysroot/, a RAM-based virtual file system, via cpio. The initramfs is used by the kernel to load drivers and modules necessary to boot the system.

Once the kernel and the initramfs image(s) are loaded into memory, the boot loader hands control of the boot process to the kernel.


As said earlier, the initrd is used by the kernel as a temporary root file system until the kernel is booted and the real root file system is mounted. The kernel loads all the drivers necessary for the booting process from it.

The kernel then unmounts the initrd image and frees up all the memory occupied by the disk image. It creates a root device, mounts the root partition read-only, and frees any unused memory. At this point, the kernel is loaded into memory and executes the /sbin/init program.

>>>>>INIT (/sbin/init)

When the init command starts, it becomes the parent or grandparent of all of the processes that start up automatically on the system. Since it is the first process, it has the pid 1. First, it runs the /etc/rc.d/rc.sysinit script, which sets the environment path, starts swap, checks the file systems, and executes all other steps required for system initialization.

The init command then reads the /etc/inittab file and determines which runlevel the system should run in.

Depending on the runlevel, it will execute the scripts in the corresponding directory under /etc/rc.d/.


These are the runlevels, specified in inittab.

0 – halt

1 – Single user mode

2 – Multiuser, without NFS

3 – Full multiuser mode

4 – unused

5 – X11

6 – reboot

When booting to runlevel 5, the init program looks in the /etc/rc.d/rc5.d/ directory to determine which processes to start and stop. Below are some of the files in the /etc/rc.d/rc5.d/ directory.

S97rhnsd -> ../init.d/rhnsd

K15httpd -> ../init.d/httpd

/etc/rc.d/rc0.d/ – Contains the start/kill scripts which should be run in runlevel 0

/etc/rc.d/rc5.d/ – Contains the start/kill scripts which should be run in runlevel 5

None of the scripts that actually start and stop the services are located in the /etc/rc.d/rc5.d/ directory. Rather, all of the files in /etc/rc.d/rc5.d/ are symbolic links pointing to scripts located in the /etc/rc.d/init.d/ directory. Symbolic links are used in each of the rc directories so that the runlevels can be reconfigured by creating, modifying, and deleting the symbolic links without affecting the actual scripts they reference.

Here K indicates Kill and S indicates start.

First the init command stops all of the K symbolic links, and then it starts the S symbolic links, in the order given by the priority number in each name.
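The ordering falls out of the names themselves: K sorts before S in ASCII and the two-digit numbers are zero-padded, so a plain lexical sort reproduces init's run order. A sketch with made-up link names:

```shell
# K links sort before S links, each group in numeric order (names are invented)
printf '%s\n' S97rhnsd K15httpd S10network K35smb | sort
# K15httpd
# K35smb
# S10network
# S97rhnsd
```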

Then init program executes /etc/rc.d/rc.local file. This file is useful for system customization.

The /etc/rc.d/rc.local script is executed by the init command at boot time or when changing runlevels.

Once everything is completed, init starts multiple instances of “getty” (/sbin/mingetty), which wait for console logins; each login spawns the user’s shell process and gives you a prompt to log in.


Active and passive mode in FTP

FTP uses two channels: a data channel and a command channel.

The data channel uses port 20 and the command channel uses port 21.

What is Active mode FTP ?

1. A user connects from a random port on a file transfer client to port 21 on the server.

It sends the PORT command, specifying what client-side port the server should connect to. This port will be used later on for the data channel and is different from the port used in this step for the command channel.

2. The server connects from port 20 to the client port designated for the data channel. Once the connection is established, file transfers are then made through these client and server ports.


What is passive mode FTP ?

1. The client connects from a random port to port 21 on the server and issues the PASV command. The server replies, indicating which (random) port it has opened for data transfer.

2. The client connects from another random port to the random port specified in the server’s response. Once the connection is established, data transfers are made through these client and server ports.
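The PASV reply encodes the data port as the last two numbers, p1 and p2, where port = p1*256 + p2. A sketch decoding a made-up reply string:

```shell
# Decode the data port from a PASV reply (the reply text here is invented)
reply='227 Entering Passive Mode (192,168,1,10,195,80)'
nums=${reply#*\(}          # strip up to "("
nums=${nums%\)*}           # strip from ")" -> 192,168,1,10,195,80
p1=$(echo "$nums" | cut -d, -f5)
p2=$(echo "$nums" | cut -d, -f6)
echo $((p1 * 256 + p2))    # 50000
```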


Sometimes users can’t connect to the FTP server because the client machine’s firewall is blocking the connection. We can use passive-mode connections to avoid these kinds of issues.

NFS: Network file system: /etc/exports


no_root_squash: Allows the root account on the client to access the exported share on the server as the root account.

all_squash: Forces all connecting users to the “nobody” account and its permissions.

anonuid: Forces all anonymous connections to a predefined UID on the server.

showmount -e : Shows the available shares on your local machine

exportfs -v : Displays a list of shared files and options on a server

exportfs -r : Refresh the server’s list after modifying /etc/exports
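Tying these options together, a minimal /etc/exports entry might look like this (the path and network below are hypothetical):

```
# /etc/exports -- share /srv/nfsshare read-write with one subnet,
# mapping every connecting user to "nobody" via all_squash
/srv/nfsshare 192.168.1.0/24(rw,sync,all_squash)
```

After editing the file, exportfs -r refreshes the server's export list.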

On the client machine, to see the shared folders of a server

showmount -e <server-ip>

For mounting on the client machine

mount -t nfs <server-ip>:/<share> /mnt/nfsshare

We can also add the entry to fstab on the client machine: <server-ip>:/<share> /mnt nfs defaults 0 0

Since NFS is an RPC (Remote Procedure Call) based service, we need to start the rpcbind service for NFS to function properly. NFS itself listens on port 2049; rpcbind listens on port 111 and tells clients which ports the various RPC services are using.

We can check by issuing below command

rpcinfo -p | grep nfs

Regarding fstab

fstab is a configuration file that contains information about all the partitions and storage devices in your computer. The file is located under /etc, so the full path to this file is /etc/fstab.

It contains 6 fields.

/dev/hdb2 /home ext2 defaults 1 2

<device name> <mountpoint> <filesystemtype> <options> <dump> <fsckorder>
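Since the fields are whitespace-separated, a shell can pull a line apart directly. A sketch using the example line above:

```shell
# Split an fstab line into its six fields (set -- assigns them to $1..$6)
line='/dev/hdb2 /home ext2 defaults 1 2'
set -- $line
echo "device=$1 mount=$2 type=$3 options=$4 dump=$5 fsck=$6"
```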

The 4th field, options, specifies which mount options should be used when mounting the filesystem.

Default = rw,suid,dev,exec,auto,nouser,async.

sync/async: Whether I/O operations are done synchronously or asynchronously.

suid/nosuid: Permit/block the operation of suid and sgid bits.

exec/noexec: Permit/prevent the execution of binaries from the filesystem.

dev/nodev: Interpret/do not interpret character or block special devices on the filesystem.

user: Any user can mount the filesystem.

nouser: Only the root user can mount the filesystem.

ro: Mount read-only.

5th field (0) is used by dump (a backup utility) to decide if a filesystem should be backed up. If zero then dump will ignore that filesystem.

The 6th field (0) is used by fsck (the filesystem check utility) to determine the order in which filesystems should be checked.

If zero then fsck won’t check the filesystem.