LOCAL AREA NETWORK

Local Area Network

A Local Area Network (LAN) is a system in which two or more computers are interconnected with one another. A LAN combines the flexibility and immediacy of a personal computer with the shared resources and common management of a centralized system.
LAN BENEFITS

  1. Direct, immediate Communications
  2. Simplified File Transfer
  3. Electronic Mail
  4. Concurrent Access (to databases)
  5. Centralized File System
  6. Shareable Application Programs
LAN HARDWARE REQUIREMENTS

  1. Network Interface Cards
  2. Cables
  3. Server
  4. Workstations/Nodes
  5. Off-line communications
NETWORK INTERFACE CARDS
  1. Determines cable type
  2. Governs transmission rate
  3. Determines Media Access Control (MAC) method
  4. Protocols: Ethernet, Token Ring, FDDI
LAN STANDARDS
  1. ANSI/IEEE 802.3 standard
  2. ANSI/IEEE 802.5 standard
  3. FDDI
  4. OSI 7 Layer Model (protocols)
IEEE 802.3 STANDARDS
  1. Better known as Ethernet
  2. Twisted Pair
  3. Co-axial Cable
  4. Fiber Optic
  5. Carrier Sense Multiple Access/Collision Detection (CSMA/CD)
  6. Signaling Speed: 10 Megabits per second
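
A station using CSMA/CD resolves repeated collisions with truncated binary exponential backoff: after the nth collision it waits a random number of slot times between 0 and 2^n - 1, with n capped at 10. A minimal Python sketch of that retry delay (slot counts only, not a full MAC simulation):

  import random

  def backoff_slots(attempt, max_exponent=10):
      # Truncated binary exponential backoff: after the n-th collision,
      # wait a random 0..(2^n - 1) slot times, with n capped at 10.
      k = min(attempt, max_exponent)
      return random.randint(0, 2**k - 1)

  # Example: delays chosen after the 1st, 3rd, and 12th collision.
  for attempt in (1, 3, 12):
      print(attempt, backoff_slots(attempt))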
IEEE 802.5 STANDARDS
  1. Better known as Token Ring
  2. Twisted Pair
  3. Fiber Optic
  4. Token-passing media access (a station transmits only while holding the token, so collisions are avoided by design)
  5. Signaling Speed: 4 to 16 Megabits per second
FIBER DISTRIBUTED DATA INTERFACE
  1. Better known as FDDI
  2. Fiber Optic
  3. Token-passing media access over dual counter-rotating rings
  4. Up to 100 Megabits per Second
CABLE TYPES
  1. Twisted Pair
       Unshielded
       Shielded
  2. Coaxial Cable
       Thinnet
       Thicknet
  3. Fiber Optic
       Single Mode
       Multimode
SERVER ISSUES
  1. Central Processor
  2. Memory
  3. Hard Disk
  4. Others:
       Tape Backup
       External Cache
       Expansion Slots
       Power Supply
CENTRAL PROCESSORS
  1. Processor type
  2. Bus architecture
  3. Operating environment
  4. Dimensions

MEMORY REQUIREMENTS
  1. Memory architecture
  2. Memory expansions
  3. Requirements (a worked sketch follows the workstation list below):
       Memory = (Hard disk capacity x 0.023) / Block size + Minimum NOS requirements

WORKSTATION REQUIREMENTS
  1. Central processing unit
  2. Memory
  3. Hard disk
  4. Others:
       Expansion slots
       Available drive bays
       Operating environment
       Power requirements
       Dimensions
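
The units in the memory-requirements formula above are garbled in the original notes; assuming disk capacity in megabytes, block size in kilobytes, and the minimum NOS requirement in megabytes, the rule of thumb can be sketched as:

  def server_memory_mb(disk_mb, block_kb, nos_min_mb):
      # Rule of thumb from the notes (units assumed, see above):
      # RAM = (disk capacity x 0.023) / block size + NOS minimum.
      return disk_mb * 0.023 / block_kb + nos_min_mb

  # Example: a 2000 MB disk with 4 KB blocks and an 8 MB NOS minimum
  # suggests roughly 19.5 MB of server RAM.
  print(round(server_memory_mb(2000, 4, 8), 1))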
HUBS / CONCENTRATORS

A hub is a wiring center. Hubs may be active or passive. Active hubs also act as repeaters; passive hubs merely pass signals on to the other stations.

LAN SOFTWARE REQUIREMENTS
  1. Network Operating System
  2. Utility Programs
  3. Application Programs:
       LAN-ignorant
       LAN-aware
       LAN-intrinsic

TOPOLOGIES

A topology is the logical and physical layout of the network. The three basic topologies are:
  1. Bus
  2. Ring (as in Token Ring)
  3. Star
COSTS OF THE LAN
  1. Installation costs
  2. Cost of NOS software
  3. Cost of LAN management
  4. Cost of establishing bridges/gateways
  5. Training costs
INSTALLATION COSTS
  1. Purchase of equipment:
       Machines
       UPS
       NICs
  2. Purchase of cables
  3. Physical layout
  4. Furniture/fixtures
SOFTWARE COSTS
  1. Network Operating System
  2. Application Programs
  3. Utilities

LAN MANAGEMENT COSTS
  1. Technical support
  2. Adding/removing workstations
  3. Maintenance and backups
  4. Security
  5. Upgrades

    FILE TRANSFER PROTOCOL (FTP)

    FTP has had a long evolution over the years. Appendix III is a chronological compilation of Request for Comments documents relating to FTP. These include the first proposed file transfer mechanisms in 1971, developed for implementation on hosts at M.I.T. (RFC 114), plus comments and discussion in RFC 141.

    RFC 172 provided a user-level oriented protocol for file transfer between host computers (including terminal IMPs). A revision of this, RFC 265, restated FTP for additional review, while RFC 281 suggested further changes. The use of a "Set Data Type" transaction was proposed in RFC 294 in January 1972.

    RFC 354 obsoleted RFCs 264 and 265. The File Transfer Protocol was now defined as a protocol for file transfer between HOSTs on the ARPANET, with the primary function of FTP defined as transferring files efficiently and reliably among hosts and allowing the convenient use of remote file storage capabilities. RFC 385 further commented on errors, emphasis points, and additions to the protocol, while RFC 414 provided a status report on the working server and user FTPs. RFC 430, issued in 1973 (among other RFCs too numerous to mention), presented further comments on FTP. Finally, an "official" FTP document was published as RFC 454.

    By July 1973, considerable changes from the last versions of FTP were made, but the general structure remained the same. RFC 542 was published as a new "official" specification to reflect these changes. However, many implementations based on the older specification were not updated.

    In 1974, RFCs 607 and 614 continued comments on FTP. RFC 624 proposed further design changes and minor modifications. In 1975, RFC 686, entitled "Leaving Well Enough Alone", discussed the differences between all of the early and later versions of FTP. RFC 691 presented a minor revision of RFC 686, regarding the subject of print files.

    Motivated by the transition from the NCP to the TCP as the underlying protocol, a phoenix was born out of all of the above efforts in RFC 765 as the specification of FTP for use on TCP.

    This current edition of the FTP specification is intended to correct some minor documentation errors, to improve the explanation of some protocol features, and to add some new optional commands.




    FTP TERMINOLOGY

    Control Connection

      The communication path between the USER-PI and SERVER-PI for the exchange of commands and replies. This connection follows the Telnet Protocol.

    Data Connection

      A full duplex connection over which data is transferred, in a specified mode and type. The data transferred may be a part of a file, an entire file or a number of files. The path may be between a server-DTP and a user-DTP, or between two server-DTPs.

    1. Data Port

      The passive data transfer process "listens" on the data port for a connection from the active transfer process in order to open the data connection.

    2. DTP

      The data transfer process establishes and manages the data connection. The DTP can be passive or active.

    3. End-of-Line

      The end-of-line sequence defines the separation of printing lines. The sequence is Carriage Return, followed by Line Feed.

    4. EOF

      The end-of-file condition that defines the end of a file being transferred.

    5. EOR

      The end-of-record condition that defines the end of a record being transferred.

    6. Error Recovery

      A procedure that allows a user to recover from certain errors such as failure of either host system or transfer process. In FTP, error recovery may involve restarting a file transfer at a given checkpoint.
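
    As a concrete illustration of the control/data split defined above, here is a minimal sketch using Python's standard ftplib; the host name, credentials, and file name are placeholders. The FTP object speaks over the control connection, while each transfer opens a separate data connection, here in passive mode so the server's DTP listens on the data port:

      from ftplib import FTP

      # The FTP object holds the control connection (the Telnet-style
      # command channel between user-PI and server-PI).
      ftp = FTP()
      ftp.connect("ftp.example.com", 21)          # placeholder host
      ftp.login("anonymous", "guest@example.com")
      ftp.set_pasv(True)   # server's DTP listens; client connects to it

      # Each retrieval opens a fresh data connection in the agreed
      # mode and type.
      ftp.retrlines("LIST")                        # ASCII-type listing
      with open("file.txt", "wb") as fh:
          ftp.retrbinary("RETR file.txt", fh.write)  # binary (image) type
      ftp.quit()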

    INTERNET MESSAGE ACCESS PROTOCOL (IMAP)

    IMAP stands for Internet Message Access Protocol. It is a method of accessing electronic mail or bulletin board messages that are kept on a (possibly shared) mail server. In other words, it permits a "client" email program to access remote message stores as if they were local. For example, email stored on an IMAP server can be manipulated from a desktop computer at home, a workstation at the office, and a notebook computer while traveling, without the need to transfer messages or files back and forth between these computers.

    IMAP's ability to access messages (both new and saved) from more than one computer has become extremely important as reliance on electronic messaging and use of multiple computers increase, but this functionality cannot be taken for granted: the widely used Post Office Protocol (POP) works best when one has only a single computer, since it was designed to support "offline" message access, wherein messages are downloaded and then deleted from the mail server. This mode of access is not compatible with access from multiple computers since it tends to sprinkle messages across all of the computers used for mail access. Thus, unless all of those machines share a common file system, the offline mode of access that POP was designed to support effectively ties the user to one computer for message storage and manipulation.

    KEY GOALS FOR IMAP INCLUDE:

    1. Be fully compatible with Internet messaging standards, e.g. MIME.
    2. Allow message access and management from more than one computer.
    3. Allow access without reliance on less efficient file access protocols.
    4. Provide support for "online", "offline", and "disconnected" access modes.
    5. Support concurrent access to shared mailboxes.
    6. Require no client knowledge of the server's file store format.

    The protocol includes operations for creating, deleting, and renaming mailboxes; checking for new messages; permanently removing messages; setting and clearing flags; server-based RFC-822 and MIME parsing (so clients don't need to), and searching; and selective fetching of message attributes, texts, and portions thereof for efficiency.
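
    A minimal sketch of these server-side operations using Python's standard imaplib; the host, credentials, and mailbox name are placeholders:

      import imaplib

      imap = imaplib.IMAP4_SSL("mail.example.com")   # placeholder host
      imap.login("user", "password")

      imap.create("Archive")              # mailboxes live on the server
      imap.select("INBOX")
      typ, data = imap.search(None, "UNSEEN")        # server-side search
      for num in data[0].split():
          # Fetch just one attribute (the Subject header) for efficiency.
          typ, msg = imap.fetch(num, "(BODY[HEADER.FIELDS (SUBJECT)])")
          imap.store(num, "+FLAGS", "\\Seen")        # set a flag remotely
      imap.logout()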

    IMAP was originally developed in 1986 at Stanford University. However, it did not command the attention of mainstream email vendors until a decade later, and it is still not as well-known as earlier and less-capable alternatives such as POP, though that is rapidly changing, as articles in the trade press and the implementation of IMAP in more and more software products show. (See IMAP Status and History for a chronological overview of significant IMAP developments.)

    There is a companion protocol to IMAP, developed at Carnegie Mellon University. It is called the "Application Configuration Access Protocol", or ACAP, and provides the same location-independent access to configuration files, address books, bookmark lists, etc., that IMAP offers for mailboxes.

    DYNAMIC RELAY AUTHORIZATION CONTROL (DRAC)

    Topics covered below:
    1. SMTP Encourages Spam
    2. The Roaming User E-Mail Problem
    3. Allowing Relaying for Roaming Users
    4. How DRAC Works
    5. Obtaining the Source
    6. Requirements
    7. Host Configuration

    Dynamic Relay Authorization Control (DRAC) provides on-the-fly relay authorization for sendmail. It allows legitimate users to relay mail through a Simple Mail Transfer Protocol (SMTP) server while preventing the use of that server as a spam relay. As of November 1998, MBnet has implemented this methodology.

    Users' IP addresses are added to the map immediately after they have authenticated to the POP or IMAP server. By default, map entries expire after 30 minutes, but can be renewed by additional authentication. Periodically checking mail on a POP server is sufficient to do this. The POP and SMTP servers can be on different hosts.

    SMTP Encourages Spam

    One of the reasons for the popularity of e-mail spam as a means of advertising is that the mail can be sent completely anonymously. This is possible because the standard Internet e-mail transfer protocol, SMTP, has no provisions for user authentication. The reason is that SMTP has its roots in a network environment where e-mail users had to log in to a multi-user host, supplying a user name and a password in the process, before they could send e-mail. Their e-mail address was unchangeable, and their mail program ensured that the mail they sent had that address in the headers. SMTP was used only to transfer e-mail between servers until it arrived at the destination. Essentially, there was no anonymity in sending e-mail.

    All this changed when Internet mail client programs were developed that allowed people with single-user hosts to send mail. These client programs used the Post Office Protocol (POP) (and in some cases IMAP) to receive mail, but used SMTP to send mail. Both POP and IMAP require authentication to the server, with a user name and password, but SMTP has no such thing. Users can put any e-mail address in the headers, and the SMTP server will accept it and pass it along. The SMTP server cannot distinguish between a connection from an SMTP client program and a connection from another mail server. Spammers are free to send e-mail with forged or bogus headers, and there is little that the owner of an SMTP server can do to prevent it. The only identifying characteristics left by the spammer are headers that show where the mail originated, but by the time these are examined the damage has been done: the spam has already been sent.
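
    To see this lack of authentication in action, the sketch below hands a message with a fabricated From: address to an SMTP server, which accepts it without question. It is pointed at a local debugging server (for example one started with "python -m aiosmtpd -n", which listens on localhost:8025) so nothing actually leaves the machine; all addresses are placeholders:

      import smtplib
      from email.message import EmailMessage

      # Classic SMTP performs no user authentication: the server takes
      # whatever envelope sender and From: header the client supplies.
      msg = EmailMessage()
      msg["From"] = "anyone@forged.example"   # nothing verifies this
      msg["To"] = "someone@example.com"
      msg["Subject"] = "demo"
      msg.set_content("The From address above was never checked.")

      with smtplib.SMTP("localhost", 8025) as smtp:
          smtp.send_message(msg)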

    The Roaming User E-Mail Problem

    Recent versions of sendmail are configured by default to allow only local users to send mail to all destinations, non-local as well as local. This is done to prevent spammers from using your mail system to relay mail. However, if some of your users connect from other ISPs, they will find that your SMTP server refuses to relay for them as well. They can read mail from your POP or IMAP server, but they can't send mail to non-local addresses with your SMTP server.

    The reason is that the only reliable information that sendmail sees is the client's hostname or IP address, and it cannot distinguish your user from a spammer with this information. If the roaming users have fixed IP addresses or predictable hostnames, you can configure sendmail to allow them to relay. If this is not the case, you would have to defeat sendmail's anti-relay provisions, or tell the users that they have to send through their ISP's SMTP server. At this time, individuals who are not direct-connect clients or dial-up users of MBnet cannot send through our SMTP server.

    Allowing Relaying for Roaming Users

    One interesting method to allow relaying for roaming users is often called POP-before-SMTP. Since the POP server knows the IP address of each POP client, these IP addresses can be collected and used to build a relay authorization map for sendmail. In some cases, the information is already available in the POP server's log files. Some POP servers need to be modified to produce the necessary log entries. A separate process, such as a perl script, periodically collects new information from the log and rebuilds the sendmail map, generally by executing 'makemap'. It's also responsible for removing old map entries after some expiry period, often 30 minutes.

    This means that once a user has successfully authenticated to the POP server with an e-mail client, that IP address is permitted to relay mail through the SMTP server for the next 30 minutes. In most cases, this will be completely transparent, since people generally check for new mail before sending mail. Automatic checking every five minutes will enable relaying indefinitely. A few people may be in the habit of sending mail before reading mail - they will find their mail is rejected until they authenticate to the POP server. Note that relaying is authorized by the client's IP address, so that in some cases where multiple users share the same IP address, more users than expected will be permitted to relay. It works best for single-user client machines with distinctive IP addresses. Fortunately, this is the most common situation now.
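
    The text above describes a Perl script; a rough Python equivalent of that collector is sketched below. The log line format, file paths, and expiry window are assumptions rather than the output of any particular POP server, and the map is rebuilt with sendmail's makemap as described:

      import re, subprocess, time

      WINDOW = 30 * 60   # seconds an authenticated IP may keep relaying
      seen = {}          # IP address -> last-authentication timestamp

      # Hypothetical log line format; real POP daemons vary.
      LOGIN = re.compile(r"pop3.*Login.*\[(\d+\.\d+\.\d+\.\d+)\]")

      def rebuild_map(logfile="/var/log/maillog",
                      mapfile="/etc/mail/popauth"):
          now = time.time()
          with open(logfile) as fh:        # a real script would only
              for line in fh:              # read lines added since the
                  m = LOGIN.search(line)   # previous pass
                  if m:
                      seen[m.group(1)] = now
          # Drop entries older than the expiry window, then rewrite the
          # text map and rebuild the hashed form that sendmail reads.
          live = {ip: t for ip, t in seen.items() if now - t < WINDOW}
          seen.clear(); seen.update(live)
          with open(mapfile + ".txt", "w") as out:
              for ip in sorted(live):
                  out.write(ip + "\tRELAY\n")
          with open(mapfile + ".txt") as src:
              subprocess.run(["makemap", "hash", mapfile],
                             stdin=src, check=True)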

    HOW DRAC WORKS

    DRAC is a robust implementation of POP-before-SMTP. It uses a single daemon process, written in C, to add and eventually delete entries from a relay authorization map for sendmail. POP or IMAP servers use RPC calls to request the daemon to add entries after they have authenticated the user. Source modifications are required to add the RPC calls. The DRAC daemon also expires entries after a suitable time has elapsed. Since RPC works across the network, the POP server and the SMTP server can reside on different hosts. In this case, a configuration file specifies which hosts can send requests to the DRAC daemon.
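
    The bookkeeping the daemon performs can be sketched as an expiring map. This illustrates only the add/renew/expire logic; DRAC itself is written in C and is fed by RPC calls from the POP/IMAP servers:

      import time

      EXPIRY = 30 * 60    # seconds; renewed by each re-authentication
      allowed = {}        # IP address -> time of last successful login

      def authenticated(ip):
          # Called when the POP/IMAP server reports a successful login
          # (in DRAC, via an RPC request); adds or renews the entry.
          allowed[ip] = time.time()

      def may_relay(ip):
          # Purge stale entries, then check the requesting address.
          cutoff = time.time() - EXPIRY
          for stale in [a for a, t in allowed.items() if t < cutoff]:
              del allowed[stale]
          return ip in allowed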

    LINUX

    Linux was originally developed as a hobby project by Linus Torvalds. It was inspired by Minix, a small UNIX system developed by Andy Tanenbaum. The first discussions about Linux took place on a Usenet newsgroup; they were concerned mostly with the development of a small, academic Unix system for Minix users who wanted more.

    The very early development of Linux mostly dealt with the task-switching features of the 80386 protected-mode interface, all written in assembly code. On October 5, 1991, Linus announced the first "official" version of Linux, version 0.02. At that point, Linus was able to run bash (the GNU Bourne Again Shell) and gcc (the GNU C Compiler), but not much else. Again, this was intended as a hacker's system. The primary focus was kernel development; user support, documentation, and distribution had not yet been addressed. Today, the Linux community still seems to treat these concerns as secondary to kernel development.

    After version 0.03, Linus bumped the version number up to 0.10, as more people started to work on the system. After several further revisions, Linus increased the version number to 0.95 in March 1992 to reflect his expectation that the system was ready for an "official" release soon. Almost a year and a half later, in late December of 1993, the Linux kernel was still at version 0.99.pl14, asymptotically approaching 1.0. At the time of this writing, the current stable kernel is version 2.0, patch level 33, and version 2.1 is under development.

    Most of the major free UNIX software packages have been ported to Linux, and commercial software is also available. More hardware is supported than in the original kernel versions. Many people have run benchmarks on 80486 Linux systems and found them comparable with mid-range workstations from Sun Microsystems and Digital Equipment Corporation.

    System Features

    Linux supports most features found in other implementations of UNIX, plus many that aren't found elsewhere. In this section, we'll take a nickel tour of the Linux kernel. Linux is a multitasking, multiuser operating system, as are all other versions of UNIX. This means that many users can log in and run programs on the same machine simultaneously.

    The Linux system is mostly compatible with several UNIX standards at the source level. Linux was developed with source code portability in mind, and it's easy to find commonly used features that are shared by more than one platform. Much of the free UNIX software available on the Internet and elsewhere compiles under Linux. In addition, all of the source code for the Linux system, including the kernel, device drivers, libraries, user programs, and development tools, is freely distributed.

    Differences Between Linux and Other Operating Systems

    It is important to know the differences between Linux and other operating systems, such as MS-DOS, OS/2, and other implementations of UNIX for the personal computer. First of all, Linux coexists happily with other operating systems on the same machine: you can run MS-DOS and OS/2 along with Linux on the same system without any problems.

    Why use Linux instead of a well-known, well-tested, and well-documented commercial operating system? One of the most important reasons is that Linux is an excellent choice for personal UNIX computing, especially if you are a UNIX developer, including for X Window System application development.

    UNIX


    History of the UNIX System

    Bell Laboratories is the research and development arm of AT&T. Established in 1925, it is one of the largest research groups in the world. Bell Laboratories serves several dynamic functions in the Bell System. As a basic research organization, Bell Laboratories investigates scientific fields relevant to communications, including mathematics and the physical sciences. At the forefront of applied research in communications technologies, it also designs and develops products and provides systems engineering. All of the Bell System's facilities, as well as many independent telephone companies, use the UNIX system internally.

    Origin of the UNIX System

    For a brief period in 1969, the Computing Science Research Department of Bell Laboratories used a large General Electric 645 mainframe computer with an operating system called Multics. Multics was an early interactive multiuser operating system and a forerunner of modern operating systems. Interactive refers to the computer's almost immediate response to a typed-in command.

    Previously, only batch-oriented operating systems were available. In a batch-oriented system, codes were punched onto rectangular cards. These codes were requests for information, data, or commands. Sets of cards were subsequently processed by the computer in large batches. It usually took several minutes to several hours to obtain the printed results confirming or fulfilling a request. This method was too slow for programmers who needed an immediate response from the system.

    Development of the UNIX System

    One such project was a program Ken Thompson developed called "Space Travel". It simulated the movement of the major celestial bodies in the solar system. Finding the cost of single-user interaction with a mainframe computer prohibitive, Thompson rewrote "Space Travel" for a lower-cost, less powerful minicomputer, a Digital Equipment Corporation (DEC) PDP-7.

    Minicomputers were the first computers inexpensive enough for a single university department or small company, and small enough for single-user interaction. However, the software available for the PDP-7 was limited, and it did not have the memory capacity to handle continuous development. While it was less expensive to run "Space Travel" on the minicomputer, any program changes had to be written on the GE mainframe before being executed by the PDP-7. Continual loading of paper tape was a slow and vulnerable process. Before Thompson could write programs on the minicomputer as well as run them, he had to develop the necessary software. Thompson wrote an operating system, a PDP-7 assembler, and several utility programs, all in the assembly language specific to the PDP-7. The operating system was christened the UNIX system, a pun on the earlier Multics system on which some of its concepts are based.