MCSE Distributed File System

A Distributed File System (DFS) is a file system that allows data files and resources to be shared over a network through a consistent storage interface. The earliest file servers were designed in the 1970s. Following its inception in 1985, Sun’s Network File System (NFS) eventually became the most commonly used distributed file system. Aside from NFS, other significant distributed file systems are the Common Internet File System (CIFS) and the Andrew File System (AFS).

The Microsoft Distributed File System (DFS) is a client/server solution that enables a large organization to manage numerous shared files distributed across a network. It delivers location transparency and redundancy, improving data availability during a failure or under extreme load, by allowing shares in a number of different locations to be logically arranged under a single folder, the DFS root.

It is a client/server-based service that lets users access and work with files located on the hosting server as if they were on their own computer. When a user accesses a file on the server, the server transmits a copy of the file, which is cached on the user’s computer while the data is being processed and is then returned to the server.

Whenever users attempt to access a share under the DFS root, they actually go through a DFS link, which allows the DFS server to automatically redirect them to the appropriate share and file server.

There are two ways to deploy DFS on a Windows Server:

A standalone DFS root resides only on the local computer and therefore does not use Active Directory. Its configuration is stored only on the server where it was created, so it offers no fault tolerance and cannot be linked to any other DFS.

Domain-based DFS roots are stored in Active Directory, which allows their configuration to be replicated to any number of domain controllers in the domain; this gives the DFS fault tolerance. Because the namespace data lives in Active Directory, a domain-based root can be hosted on a domain controller or on a member server, and links with identical targets receive their replicated data across the network. The root and file data is replicated by the Microsoft File Replication Service (FRS).
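As a rough sketch of how these two root types are created from the command line using the dfsutil utility (the server names and share paths below are illustrative, and the exact subcommand syntax varies by Windows Server version, so treat these commands as a sketch rather than an authoritative reference):

```shell
:: Create a standalone DFS root on the local server.
:: The shared folder \\FILESRV1\PublicRoot must already exist.
dfsutil root addstd \\FILESRV1\PublicRoot

:: Create a domain-based (v2) DFS root whose configuration
:: is stored in Active Directory.
dfsutil root adddom \\example.com\CorpRoot v2

:: Add a DFS link under the domain root that redirects clients
:: to a target share on another file server.
dfsutil link add \\example.com\CorpRoot\Projects \\FILESRV2\Projects
```

On newer servers, the equivalent PowerShell cmdlets (New-DfsnRoot, New-DfsnFolder) can be used instead.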

Advantages of DFS

1. Easy accessibility: users do not need to know the various locations where data is stored. By remembering a single location, they can access all of it.

2. Fault tolerance: the master DFS server’s shares can be duplicated to a target on another DFS server. If the master DFS server fails, users can continue accessing the data from the backup target, with no interruption in access.

3. Load balancing: when all of the DFS root servers and targets are healthy, load can be balanced across them, for example by directing different users to different target locations.

4. Security: access control is enforced through the underlying NTFS permissions.

How to Repair a Corrupt Superblock in the Ext2 File System

On Linux systems, ext2 (short for second extended file system) is used extensively. Ext2 is very efficient when dealing with very large disk partitions. In addition, when an ext2 file system is mounted, the information stored in its on-disk data structures is copied into the system’s RAM, which lets the Linux kernel avoid numerous disk read operations. However, as nothing is perfect, ext2 too is prone to corruption. In such cases, you should use the built-in tools to repair the corruption and remount the file system. If you are unable to fix the problem, you should use third-party Linux data recovery software to recover your data.

Let us take an example. Consider a scenario in which you have a Linux system with an ext2-based file system. After a power outage, you try to mount the file system but are unable to do so, and an error message similar to the following is displayed:

“mount: wrong fs type, bad option, bad superblock on /dev/sda2”


Such mounting problems can occur when the ext2 file system is corrupt, especially its superblock.


To recover from a corrupt superblock, perform the following steps:

1) Locate the backup superblocks for /dev/sda2.

2) Try to restore the file system using alternate superblock #xxx.

Here, xxx is the block number of the alternate superblock.

3) Now, try to mount the file system using the following command:

# mount /dev/sda2 /mnt

4) Once the file system is mounted, check the files to see whether they are intact.
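The steps above correspond to standard e2fsprogs commands. A minimal sketch, assuming the damaged file system is on /dev/sda2 and is currently unmounted (run as root; block 32768 is the usual first backup superblock on a 4 KB-block file system, 8193 on a 1 KB-block one):

```shell
# 1) List the locations of the backup superblocks
dumpe2fs /dev/sda2 | grep -i superblock

# 2) Check and repair the file system using an alternate
#    superblock, e.g. the backup at block 32768
e2fsck -b 32768 /dev/sda2

# 3) Mount the repaired file system
mount /dev/sda2 /mnt

# 4) Spot-check the files to verify they are intact
ls -lR /mnt
```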

These steps should repair the superblock and restore access to the file system. If they do not succeed, you should use third-party ext2 recovery software to recover the inaccessible data. Such Linux recovery tools have a rich user interface and do not overwrite existing files while scanning the storage media. Their fast, sophisticated scanning algorithms help keep the ext2 recovery safe and secure.

One such Linux recovery application is Stellar Phoenix Linux Data Recovery, which restores lost, deleted, or formatted data from inaccessible Linux systems. Designed for ext2, ext3, ext4, FAT32, FAT16, and FAT12, the software supports various Linux distributions such as Red Hat, SUSE, Debian, Sorcerer, TurboLinux, Caldera, Gentoo, Mandrake, and Slackware. Compatible with Windows 7, Vista, Server 2003, XP, and 2000, the utility recovers data from SCSI, SATA, EIDE, and IDE drives.

Distributed File System or Centralized File System?

Many professionals, especially engineers and architects, are working from home offices or collaborating in small teams that are no longer centralized in a single office but spread all over the country. How does an engineer in Philadelphia share large CAD files with the general contractor running the project in Tampa? The old approach was FTP, but there are two key problems with that method:

  • The files are large and take a long time to upload and download.
  • The files can have revision issues if two people decide to edit the same file at once.

So, IT professionals have to make decisions. Do they employ a solution like SharePoint for its “file locking” (technically a check-in/check-out system)? Do they invest tens of thousands of dollars at each site in WAN optimization? Are there other technologies they can use?

The most common solution to these problems is a distributed file system, which “distributes” the files to each user so that download time is minimal and changes are simply replicated out to the other users of the files. It works well when deployed properly: files open at local-network speed without the WAN optimization costs, and file locking is available when combined with the right third-party software.

If your organization has been trying to figure out how to share large files, your group should consider a distributed file system.