INTRODUCTION TO SUMUKHA TECHNOLOGIES
Problem Statement
Prior work in DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts because topology changes are unpredictable.
Prior research has proposed providing link failure feedback to TCP so that TCP can avoid responding to route failures as if congestion had occurred.
TCP performance degrades significantly in mobile ad hoc networks due to packet losses. Most of these losses result from route failures caused by node mobility.
TCP assumes such losses occur because of congestion and invokes congestion control mechanisms, such as decreasing the congestion window and raising the retransmission timeout, which greatly reduce TCP throughput.
SYSTEM ANALYSIS
Routing protocols for ad hoc networks can be classified into two major types: proactive and on-demand. Proactive protocols attempt to maintain up-to-date routing information for all nodes by periodically disseminating topology updates throughout the network. In contrast, on-demand protocols attempt to discover a route only when a route is needed. To reduce the overhead and the latency of initiating a route discovery for each packet, on-demand routing protocols use route caches. Due to mobility, however, cached routes easily become stale. Using stale routes causes packet losses and increases latency and overhead. In this project, we investigate how to make on-demand routing protocols adapt quickly to topology changes. This problem is important because such protocols use route caches to make routing decisions; it is challenging because topology changes are frequent.
To address the cache staleness issue in DSR (the Dynamic Source Routing protocol), prior work used adaptive timeout mechanisms. Such mechanisms use heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, a predetermined choice of ad hoc parameters that works for certain scenarios may not work well for others, and scenarios in the real world differ from those used in simulations. Moreover, heuristics cannot accurately estimate timeouts because topology changes are unpredictable. As a result, either valid routes are removed or stale routes are kept in caches.
In our project, we propose proactively disseminating the broken link information to the nodes that have that link in their caches. Proactive cache updating is key to making route caches adapt quickly to topology changes. It is also important to inform only the nodes that have cached a broken link to avoid unnecessary overhead. Thus, when a link failure is detected, our goal is to notify all reachable nodes that have cached the link about the link failure.
We define a new cache structure called a cache table to maintain the information necessary for cache updates. A cache table has no capacity limit; its size increases as new routes are discovered and decreases as stale routes are removed. Each node maintains in its cache table two types of information for each route. The first type of information is how well routing information is synchronized among nodes on a route: whether a link has been cached in only upstream nodes, or in both upstream and downstream nodes, or neither. The second type of information is which neighbor has learned which links through a ROUTE REPLY.
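As a rough illustration, the cache table described above might be modeled as follows in Java, the project's implementation language. The class and field names here are illustrative assumptions for this sketch, not the project's actual code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of one cache-table entry: the route itself plus the
// two types of per-route information described above.
class RouteEntry {
    List<String> route;          // node addresses from source to destination
    int dataPackets;             // 0, 1, or 2 data packets forwarded so far
    // which neighbor learned which links through a ROUTE REPLY
    Map<String, List<String>> replyRecord = new HashMap<>();

    RouteEntry(List<String> route) {
        this.route = route;
        this.dataPackets = 0;
    }
}

// A cache table with no capacity limit: it grows as routes are discovered
// and shrinks as stale routes are removed.
class CacheTable {
    private final List<RouteEntry> entries = new ArrayList<>();

    void addRoute(List<String> route) {
        entries.add(new RouteEntry(route));
    }

    // Remove every cached route containing the broken link (from, to).
    void removeBrokenLink(String from, String to) {
        entries.removeIf(e -> {
            for (int i = 0; i + 1 < e.route.size(); i++) {
                if (e.route.get(i).equals(from)
                        && e.route.get(i + 1).equals(to)) {
                    return true;
                }
            }
            return false;
        });
    }

    int size() {
        return entries.size();
    }
}
```

Note that removal scans consecutive pairs of addresses, since a link is directional in a source route.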
We design a distributed algorithm that uses the information kept by each node to achieve distributed cache updating. When a link failure is detected, the algorithm notifies selected neighborhood nodes about the broken link: the closest upstream and/or downstream nodes on each route containing the broken link, and the neighbors that learned the link through ROUTE REPLIES. When a node receives a notification, the algorithm notifies selected neighbors. Thus, the broken link information will be quickly propagated to all reachable nodes that need to be notified.
Our algorithm has the following desirable properties:
Distributed: The algorithm uses only local information and communicates with neighboring nodes; therefore, it is scalable with network size.
Adaptive: The algorithm notifies only the nodes that have cached a broken link to update their caches; therefore, cache update overhead is minimized.
Proactive on-demand: Proactive cache updating is triggered on-demand, without periodic behavior.
Without ad hoc mechanisms: The algorithm does not use any ad hoc parameters, thus making route caches fully adaptive to topology changes.
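The adaptive neighbor-selection step described above can be sketched as follows. The method name, the string encoding of links, and the data shapes are hypothetical choices for this sketch, not part of DSR itself:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: when link from->to breaks, notify the closest
// upstream node on each cached route containing the link, plus any
// neighbors that learned the link through a ROUTE REPLY.
class CacheUpdateNotifier {
    static Set<String> nodesToNotify(List<List<String>> cachedRoutes,
                                     Map<String, Set<String>> replyNeighbors,
                                     String from, String to) {
        Set<String> notify = new HashSet<>();
        for (List<String> route : cachedRoutes) {
            for (int i = 0; i + 1 < route.size(); i++) {
                if (route.get(i).equals(from)
                        && route.get(i + 1).equals(to) && i > 0) {
                    notify.add(route.get(i - 1)); // closest upstream node
                }
            }
        }
        // neighbors that learned the link through a ROUTE REPLY
        notify.addAll(replyNeighbors.getOrDefault(from + "->" + to,
                                                  Collections.emptySet()));
        return notify;
    }
}
```

Because only nodes that actually cached the broken link appear in the result, no notification is wasted on uninvolved nodes.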
Existing System
TCP performance degrades significantly in mobile ad hoc networks due to packet losses. Most of these losses result from route failures caused by node mobility.
TCP assumes such losses occur because of congestion and invokes congestion control mechanisms, such as decreasing the congestion window and raising the retransmission timeout, which greatly reduce TCP throughput.
Moreover, after a link failure is detected, several packets are dropped from the network interface queue; TCP times out because of these packet losses, as well as because of acknowledgement losses caused by route failures.
A node receives no notification of failed links from its neighboring nodes, so the source node cannot make routing decisions at the time of data transfer.
Limitation of Existing System
Stale routes cause packet losses if packets cannot be salvaged by intermediate nodes.
Stale routes increase packet delivery latency, since the MAC layer goes through multiple retransmissions before concluding that a link has failed.
Adaptive timeout mechanisms rely on ad hoc parameters that cannot be tuned for all scenarios.
If the cache size is set large, more stale routes stay in caches because FIFO replacement becomes less effective.
Proposed System
We propose proactively disseminating the broken link information to the nodes that have that link in their caches. We define a new cache structure called a cache table and present a distributed cache update algorithm. Each node maintains in its cache table the Information necessary for cache updates.
The source node has information about link failures at intermediate nodes on the path to the destination, which helps it avoid packet loss and reduces latency during data transfer across the network.
Advantages of Proposed System
Proactive cache updating also prevents stale routes from being propagated to other nodes.
We defined a new cache structure called a cache table to maintain the information necessary for cache updates. We presented a distributed cache update algorithm that uses the local information kept by each node to notify all reachable nodes that have cached a broken link. The algorithm enables DSR to adapt quickly to topology changes.
The algorithm quickly removes stale routes no matter how nodes move and which traffic model is used.
Description of Modules
Module 1: Route Request
When a source node wants to send packets to a destination to which it does not have a route, it initiates a Route Discovery by broadcasting a ROUTE REQUEST. The node receiving a ROUTE REQUEST checks whether it has a route to the destination in its cache. If it has, it sends a ROUTE REPLY to the source including a source route, which is the concatenation of the source route in the ROUTE REQUEST and the cached route. If the node does not have a cached route to the destination, it adds its address to the source route and rebroadcasts the ROUTE REQUEST. When the destination receives the ROUTE REQUEST, it sends a ROUTE REPLY containing the source route to the source. Each node forwarding a ROUTE REPLY stores the route starting from itself to the destination. When the source receives the ROUTE REPLY, it caches the source route.
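The ROUTE REQUEST handling described above reduces to a single decision at each node, sketched below in Java. The class and method names are illustrative assumptions for this sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of how a node processes a ROUTE REQUEST: reply from
// its cache if it can, otherwise append its address and rebroadcast.
class RouteRequestHandler {
    // Returns the source route for a ROUTE REPLY if this node has a cached
    // route to the destination, or the extended route to rebroadcast.
    static List<String> handle(List<String> accumulatedRoute, String self,
                               List<String> cachedRouteToDestination) {
        List<String> result = new ArrayList<>(accumulatedRoute);
        if (cachedRouteToDestination != null) {
            // reply: concatenate the accumulated source route with the
            // cached route toward the destination
            result.addAll(cachedRouteToDestination);
        } else {
            // no cached route: append our own address and rebroadcast
            result.add(self);
        }
        return result;
    }
}
```

For example, a node C holding cached route D, E would answer a request carrying route A, B with the reply route A, B, D, E; without a cached route it would rebroadcast A, B, C.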
Module 2: Message Transfer
In message transfer, the sender node sends a message to the destination node after a path has been selected and the destination node's status has been confirmed. Once the receiver node has received the complete message, it sends an acknowledgement back to the sender node through the router nodes along the path on which it received the message.
Module 3: Route Maintenance
In Route Maintenance, the node forwarding a packet is responsible for confirming that the packet has been successfully received by the next hop. If no acknowledgement is received after the maximum number of retransmissions, the forwarding node sends a ROUTE ERROR to the source, indicating the broken link. Each node forwarding the ROUTE ERROR removes from its cache all routes containing the broken link.
Module 4: Cache Updating
When a node detects a link failure, our goal is to notify all reachable nodes that have cached that link to update their caches. To achieve this goal, the node detecting a link failure needs to know which nodes have cached the broken link and needs to notify such nodes efficiently. Our solution is to keep track of topology propagation state in a distributed manner.
Feasibility Study
The development of a computer-based system is likely to be plagued by a scarcity of resources and difficult delivery dates. A feasibility study is not warranted for a system in which economic justification is obvious, technical risk is low, few legal problems are expected, and no reasonable alternative exists.
Three essential considerations are involved in the feasibility analysis:
Economic feasibility
Technical feasibility
Functional or behavior feasibility
Economic feasibility
Economic analysis is the most frequently used method for evaluating the effectiveness of a candidate system; it is more commonly known as cost/benefit analysis. The procedure is to determine the benefits and savings expected from a candidate system and compare them with the costs. If the benefits outweigh the costs, the system is implemented; otherwise, further justification of alternative systems is required. The investment needed to develop this system is minimal, so the system is economically feasible.
Technical Feasibility
Technical feasibility centers on the existing computer system and the extent to which it supports the proposed system. For example, if the current computer is operating at 80 percent capacity, running another application could overload the system or require additional hardware. This project is technically feasible without requiring any additional hardware or software.
Functional Feasibility
People are inherently resistant to change, and computers have been known to facilitate change. An estimate should therefore be made of how users are likely to react to the new system.
Since this system is ready for use in the organization, it is operationally feasible. As the system is technically, economically, and functionally feasible, it is judged feasible overall. In view of the collected information, recommendations, justifications, and conclusions are made about the developed system.
SYSTEM ENVIRONMENT
Hardware and Software Requirements
Software Requirements:
Front End Tool : Java 1.5 and Swing
Back End Tool : MsAccess
Operating System : Windows 98.
Hardware Requirements:
Processor : Intel Pentium III Processor
RAM : 128MB
Hard Disk : 20GB
Software Tools Description
Brief Introduction about Java
Java was conceived by James Gosling, Patrick Naughton, Chris Warth, Ed Frank, and Mike Sheridan at Sun Microsystems. It is a platform-independent programming language whose features extend across the network. The Java 2 version introduced a new component set called Swing: a set of classes that provides more powerful and flexible components than are possible with the AWT.
Swing is a lightweight package, as its components are not implemented by platform-specific code.
Related classes are contained in javax.swing and its sub packages, such as javax.swing.tree.
Networking Basics
Ken Thompson and Dennis Ritchie developed UNIX in concert with the C language at Bell Telephone Laboratories, Murray Hill, New Jersey, in 1969. In 1978, Bill Joy was leading a project at Cal Berkeley to add many new features to UNIX, such as virtual memory and full-screen display capabilities. By early 1984, just as Bill was leaving to found Sun Microsystems, he shipped 4.2BSD, commonly known as Berkeley UNIX. 4.2BSD came with a fast file system, reliable signals, interprocess communication, and, most important, networking. The networking support first found in 4.2BSD eventually became the de facto standard for the Internet. Berkeley's implementation of TCP/IP remains the primary standard for communications with the Internet. The socket paradigm for interprocess and network communication has also been widely adopted outside of Berkeley.
Socket Overview
A network socket is a lot like an electrical socket. Various plugs around the network have a standard way of delivering their payload. Anything that understands the standard protocol can “plug in” to the socket and communicate.
Internet protocol (IP) is a low-level routing protocol that breaks data into small packets and sends them to an address across a network, which does not guarantee to deliver said packets to the destination.
Transmission Control Protocol (TCP) is a higher-level protocol that manages to reliably transmit data. A third protocol, User Datagram Protocol (UDP), sits next to TCP and can be used directly to support fast, connectionless, unreliable transport of packets.
Client/Server
A server is anything that has some resource that can be shared. There are compute servers, which provide computing power; print servers, which manage a collection of printers; disk servers, which provide networked disk space; and web servers, which store web pages. A client is simply any other entity that wants to gain access to a particular server.
In Berkeley sockets, the notion of a socket allows a single computer to serve many different clients at once, as well as to serve many different types of information. This feat is managed by the introduction of a port, which is a numbered socket on a particular machine. A server process is said to "listen" to a port until a client connects to it. A server is allowed to accept multiple clients connected to the same port number, although each session is unique. To manage multiple client connections, a server process must be multithreaded or have some other means of multiplexing the simultaneous I/O.
Reserved Sockets
Once connected, a higher-level protocol ensues, which depends on which port you are using. TCP/IP reserves the lower 1,024 ports for specific protocols. Port number 21 is for FTP, 23 for Telnet, 25 for e-mail, 79 for finger, 80 for HTTP, 119 for netnews, and the list goes on. It is up to each protocol to determine how a client should interact with the port.
TCP/IP Client Sockets
TCP/IP sockets are used to implement reliable, bidirectional, persistent, point-to-point, stream-based connections between hosts on the Internet. A socket can be used to connect Java’s I/O system to other programs that may reside either on the local machine or on any other machine on the Internet.
There are two kinds of TCP sockets in Java: one for servers and one for clients. The ServerSocket class is designed to be a "listener," which waits for clients to connect before doing anything. The Socket class is designed to connect to server sockets and initiate protocol exchanges.
The creation of a Socket object implicitly establishes a connection between the client and server. There are no methods or constructors that explicitly expose the details of establishing that connection. One constructor used to create client sockets is Socket(String hostname, int port), which creates a socket connecting the local host to the named host and port; it can throw an UnknownHostException or an IOException.
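A minimal client built on this constructor might look like the following sketch; the host, port, and line-based protocol are placeholders, not part of this project:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Hypothetical sketch of a TCP client: connect to a server, send one line
// of text, and return the server's one-line reply.
public class LineClient {
    static String sendLine(String host, int port, String message)
            throws IOException {
        // the constructor may throw UnknownHostException or IOException
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                 new InputStreamReader(socket.getInputStream()))) {
            out.println(message);       // send the request line
            return in.readLine();       // block until the reply arrives
        }                               // try-with-resources closes the socket
    }
}
```

The try-with-resources form guarantees the socket and streams are closed even if an exception is thrown mid-exchange.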
SYSTEM DESIGN
On-demand Route Maintenance results in delayed awareness of mobility, because a node is not notified when a cached route breaks until it uses the route to send packets. We classify a cached route into three types:
pre-active, if a route has not been used;
active, if a route is being used;
post-active, if a route was used before but now is not.
It is not necessary to detect whether a route is active or post-active, but these terms help clarify the cache staleness issue. Stale pre-active and post-active routes will not be detected until they are used. Due to the use of responding to ROUTE REQUESTS with cached routes, stale routes may be quickly propagated to the caches of other nodes. Thus, pre-active and post-active routes are important sources of cache staleness.
When a node detects a link failure, our goal is to notify all reachable nodes that have cached that link to update their caches. To achieve this goal, the node detecting a link failure needs to know which nodes have cached the broken link and needs to notify such nodes efficiently. This goal is very challenging because of mobility and the fast propagation of routing information.
Our solution is to keep track of topology propagation state in a distributed manner. Topology propagation state means which node has cached which link. In a cache table, a node not only stores routes but also maintains two types of information for each route:
(1) How well routing information is synchronized among nodes on a route.
(2) Which neighbor has learned which links through a ROUTE REPLY. Each node gathers such information during route discoveries and data transmission.
The two types of information are sufficient because each node knows, for each cached link, which neighbors have that link in their caches. Each entry in the cache table contains a field called DataPackets, which records whether a node has forwarded 0, 1, or 2 data packets. A node learns how well routing information is synchronized through the first data packet.
When forwarding a ROUTE REPLY, a node caches only the downstream links; thus, its downstream nodes did not cache the first downstream link through this ROUTE REPLY. When receiving the first data packet, the node knows that upstream nodes have cached all downstream links. The node adds the upstream links to the route consisting of the downstream links. Thus, when a downstream link is broken, the node knows which upstream node needs to be notified.
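The synchronization step above can be sketched as follows; the method name and the list-based route representation are illustrative assumptions for this sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: on receiving the first data packet, a node that had
// cached only the downstream links prepends the upstream links taken from
// the packet's full source route, so it later knows which upstream node to
// notify when a downstream link breaks.
class RouteSynchronizer {
    // fullSourceRoute: the complete route carried in the data packet
    // cachedDownstream: the downstream links this node cached earlier
    static List<String> completeRoute(List<String> fullSourceRoute,
                                      List<String> cachedDownstream) {
        // locate where the cached downstream portion begins
        int start = fullSourceRoute.indexOf(cachedDownstream.get(0));
        // upstream portion, unknown until the first data packet arrives
        List<String> complete =
            new ArrayList<>(fullSourceRoute.subList(0, start));
        complete.addAll(cachedDownstream);
        return complete;
    }
}
```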
TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each type addresses a specific testing requirement.
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, done after the completion of each unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and exercise a specific business process, application, or system configuration. Unit tests ensure that each unique path of a business process performs accurately according to the documented specifications and contains clearly defined inputs and expected results.
Functional testing
Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.
System Testing
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
Performance Test
The performance test ensures that output is produced within the required time limits, and it measures the time taken by the system to compile, to respond to users, and to process requests sent to the system to retrieve results.
Integration Testing
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications (for example, components in a software system or, one level up, software applications at the company level) interact without error.
Integration testing for Server Synchronization:
Testing that the IP address can be used to communicate with other nodes
Checking the route status in the cache table after status information is received by a node
Verifying that messages are displayed through to the end of the application
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.
Acceptance testing for Data Synchronization:
Acknowledgements are received by the sender node after the packets are received by the destination node
The route add operation is performed only when a route request requires it
Node status information is updated automatically during the cache updating process
IMPLEMENTATION
Implementation is the stage in the project where the theoretical design is turned into a working system, giving users confidence that the new system will work effectively and efficiently. It involves careful planning, investigation of the current system and its constraints on implementation, design of methods to achieve the changeover, and evaluation of those changeover methods. Apart from planning, the major tasks in preparing for implementation are the education and training of users. The more complex the system being implemented, the more involved the system analysis and design effort required for implementation.
An implementation coordination committee, based on the policies of the individual organization, has been appointed. The implementation process begins with preparing a plan for implementing the system. According to this plan, the activities are carried out, discussions are held regarding equipment and resources, and any additional equipment needed to implement the new system is acquired.
Implementation is the final and most critical phase in achieving a successful new system and in giving users confidence that the new system will be effective. The system can be implemented only after thorough testing is done and it is found to work according to the specification.
Problem Statement
Prior work in DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts because topology changes are unpredictable.
Prior researches have proposed to provide link failure feedback to TCP so that TCP can avoid responding to route failures as if congestion had occurred.
TCP performance degrades significantly in Mobile Ad hoc Networks due to the packet losses.
Most of these packet losses result from the Route failures due to network mobility.
TCP assumes such losses occur because of congestion, thus invokes congestion control mechanisms such as decreasing congestion windows, raising timeout, etc, thus greatly reduce TCP throughput.
Prior work in DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts because topology changes are unpredictable.
SYSTEM ANALYSIS
Routing protocols for ad hoc networks can be classified into two major types: proactive and on-demand. Proactive protocols attempt to maintain up-to-date routing information to all nodes by periodically disseminating topology updates throughout the network. In contrast, on demand protocols attempt to discover a route only when a route is needed. To reduce the overhead and the latency of initiating a route discovery for each packet, on-demand routing protocols use route Caches. Due to mobility, cached routes easily become stale. Using stale routes causes packet losses, and increases latency and overhead. In this paper, we investigate how to make on-demand routing Protocols adapt quickly to topology changes. This problem is important because such protocols use route caches to make routing decisions; it is challenging because topology changes are frequent.
To address the cache staleness issue in DSR (the Dynamic Source Routing protocol) prior work used adaptive timeout mechanisms. Such mechanisms use heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, a predetermined choice of ad hoc parameters for certain scenarios may not work well for others, and scenarios in the real world are different from those used in simulations. Moreover, heuristics cannot accurately estimate timeouts because topology changes are unpredictable. As a result, either valid routes will be removed or stale routes will be kept in caches.
In our project, we propose proactively disseminating the broken link information to the nodes that have that link in their caches. Proactive cache updating is key to making route caches adapt quickly to topology changes. It is also important to inform only the nodes that have cached a broken link to avoid unnecessary overhead. Thus, when a link failure is detected, our goal is to notify all reachable nodes that have cached the link about the link failure.
We define a new cache structure called a cache table to maintain the information necessary for cache updates. A cache table has no capacity limit; its size increases as new routes are discovered and decreases as stale routes are removed. Each node maintains in its cache table two types of information for each route. The first type of information is how well routing information is synchronized among nodes on a route: whether a link has been cached in only upstream nodes, or in both upstream and downstream nodes, or neither. The second type of information is which neighbor has learned which links through a ROUTE REPLY.
We design a distributed algorithm that uses the information kept by each node to achieve distributed cache updating. When a link failure is detected, the algorithm notifies selected neighborhood nodes about the broken link: the closest upstream and/or downstream nodes on each route containing the broken link, and the neighbors that learned the link through ROUTE REPLIES. When a node receives a notification, the algorithm notifies selected neighbors. Thus, the broken link information will be quickly propagated to all reachable nodes that need to be notified.
Our algorithm has the following desirable properties:
Distributed: The algorithm uses only local information and communicates with neighborhood Nodes; therefore, it is scalable with network size.
Adaptive: The algorithm notifies only the nodes that have cached a broken link to update their Caches; therefore, cache update overhead is minimized.
Proactive on-demand: Proactive cache updating is triggered on-demand, without periodic behavior.
Without ad hoc mechanisms: The algorithm does not use any ad hoc parameters, thus making route caches fully adaptive to topology changes.
Existing System
TCP performance degrades significantly in Mobile Ad hoc Networks due to the packet losses. Most of these packet losses result from the Route failures due to network mobility.
TCP assumes such losses occur because of congestion, thus invokes congestion control mechanisms such as decreasing congestion windows, raising timeout, etc, thus greatly reduce TCP throughput.
However, after a link failure is detected, several packets will be dropped from the network interface queue; TCP will time out because of these packet losses, as well as for Acknowledgement losses caused by route failures.
There is no intimation information regarding about to the failure links to the Node from its neighboring Node’s. So that the Source Node cannot able to make the Route Decision’s at the time of data transfer.
Limitation of Existing System
The Stale routes causes packet losses if packets cannot be salvaged by intermediate nodes
The stale routes increases packet delivery latency, since the MAC layer goes through multiple retransmissions before concluding a link failure
Use Adaptive time out mechanisms
If the cache size is set large, more stale routes will stay in caches because FIFO replacement becomes less effective
Proposed System
Prior work in DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts because topology changes are unpredictable.
Prior researches have proposed to provide link failure feedback to TCP so that TCP can avoid responding to route failures as if congestion had occurred.
We propose proactively disseminating the broken link information to the nodes that have that link in their caches. We define a new cache structure called a cache table and present a distributed cache update algorithm. Each node maintains in its cache table the Information necessary for cache updates.
The Source Node has the information regarding about the Destination and the Intermediate Node links failure, So that it is useful from Packet loss and reduce the latency time while data transfer throughout the Network.
Advantages of Proposed System
Proactive cache updating also prevents stale routes from being propagated to other nodes
We defined a new cache structure called a cache table to maintain the information necessary for cache updates. We presented a distributed cache update algorithm that uses the local information kept by each node to notify all reachable nodes that have cached a broken link. The algorithm enables DSR to adapt quickly to topology changes.
The algorithm quickly removes stale routes no matter how nodes move and which traffic model is used.
Description of Modules
Module 1: Route Request
When a source node wants to send packets to a destination to which it does not have a route, it initiates a Route Discovery by broadcasting a ROUTE REQUEST. The node receiving a ROUTE REQUEST checks whether it has a route to the destination in its cache. If it has, it sends a ROUTE REPLY to the source including a source route, which is the concatenation of the source route in the ROUTE REQUEST and the cached route. If the node does not have a cached route to the destination, it adds its address to the source route and rebroadcasts the ROUTE REQUEST. When the destination receives the ROUTE REQUEST, it sends a ROUTE REPLY containing the source route to the source. Each node forwarding a ROUTE REPLY stores the route starting from itself to the destination. When the source receives the ROUTE REPLY, it caches the source route.
Module 2: Message Transfer
In message transfer, the sender node sends a message to the destination node after a path has been selected and the status of the destination node has been confirmed. Once the receiver node has received the complete message, it sends an acknowledgement back to the sender through the router nodes from which it received the message.
Module 3: Route Maintenance
In Route Maintenance, the node forwarding a packet is responsible for confirming that the packet has been successfully received by the next hop. If no acknowledgement is received after the maximum number of retransmissions, the forwarding node sends a ROUTE ERROR to the source, indicating the broken link. Each node forwarding the ROUTE ERROR removes from its cache the routes containing the broken link.
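The cache cleanup performed by each node forwarding a ROUTE ERROR might look like this minimal sketch, assuming a route is represented as a list of node addresses (an illustrative choice, not the project's actual data structure):

```java
import java.util.Iterator;
import java.util.List;

// Sketch of ROUTE ERROR handling: every cached route that contains the
// broken link (from -> to) is dropped from the cache.
public class RouteErrorHandler {

    // True if the route uses the directed link from -> to.
    public static boolean containsLink(List<String> route, String from, String to) {
        for (int i = 0; i + 1 < route.size(); i++) {
            if (route.get(i).equals(from) && route.get(i + 1).equals(to)) {
                return true;
            }
        }
        return false;
    }

    // Removes from 'cache' every route that uses the broken link.
    public static void handleRouteError(List<List<String>> cache,
                                        String from, String to) {
        Iterator<List<String>> it = cache.iterator();
        while (it.hasNext()) {
            if (containsLink(it.next(), from, to)) {
                it.remove();
            }
        }
    }
}
```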
Module 4: Cache Updating
When a node detects a link failure, our goal is to notify all reachable nodes that have cached that link to update their caches. To achieve this goal, the node detecting a link failure needs to know which nodes have cached the broken link and needs to notify such nodes efficiently. Our solution is to keep track of topology propagation state in a distributed manner.
Feasibility Study
The development of a computer-based system is more likely to be plagued by a scarcity of resources and difficult delivery dates. A feasibility study is not warranted for a system in which the economic justification is obvious, technical risk is low, few legal problems are expected, and no reasonable alternative exists.
Three essential considerations are involved in the feasibility analysis:
Economic feasibility
Technical feasibility
Functional or behavior feasibility
Economic feasibility
Economic analysis, more commonly known as cost/benefit analysis, is the most frequently used method for evaluating the effectiveness of a candidate system. The procedure is to determine the benefits and savings that are expected from the candidate system and compare them with the costs. If the benefits outweigh the costs, the system is implemented; otherwise, further justification or alternative systems must be proposed. The investment needed to develop this system is minimal, so the system is economically feasible.
Technical Feasibility
Technical feasibility centers on the existing computer system and the extent to which it can support the proposed system. For example, if the current computer is operating at 80 percent capacity, running another application could overload the system or require additional hardware. This project is technically feasible without requiring any additional hardware or software.
Functional Feasibility
People are inherently resistant to change, and computers are known to facilitate change. An estimate should therefore be made of the reaction the users are likely to have toward the new system.
Since this system is ready for use in the organization, it is operationally feasible. As the system is technically, economically, and functionally feasible, it is judged feasible overall. After reviewing the collected information, recommendations, justifications, and conclusions were made on the developed system.
SYSTEM ENVIRONMENT
Hardware and Software Requirements
Software Requirements:
Front End Tool : Java 1.5 and Swing
Back End Tool : MsAccess
Operating System : Windows 98.
Hardware Requirements:
Processor : Intel Pentium III Processor
RAM : 128MB
Hard Disk : 20GB
Software Tools Description
Brief Introduction about Java
Java was conceived by James Gosling, Patrick Naughton, Chris Warth, Ed Frank, and Mike Sheridan at Sun Microsystems. It is a platform-independent programming language whose features extend widely over the network. The Java 2 version introduces a new component set called "Swing", a set of classes that provides more powerful and flexible components than are possible with the AWT.
Swing is a lightweight package, as its components are not implemented with platform-specific code.
Related classes are contained in javax.swing and its sub packages, such as javax.swing.tree.
Networking Basics
Ken Thompson and Dennis Ritchie developed UNIX in concert with the C language at Bell Telephone Laboratories, Murray Hill, New Jersey, in 1969. In 1978, Bill Joy was leading a project at Cal Berkeley to add many new features to UNIX, such as virtual memory and full-screen display capabilities. By early 1984, just as Bill was leaving to found Sun Microsystems, he shipped 4.2BSD, commonly known as Berkeley UNIX. 4.2BSD came with a fast file system, reliable signals, interprocess communication, and, most important, networking. The networking support first found in 4.2 eventually became the de facto standard for the Internet. Berkeley's implementation of TCP/IP remains the primary standard for communications with the Internet. The socket paradigm for interprocess and network communication has also been widely adopted outside of Berkeley.
Socket Overview
A network socket is a lot like an electrical socket. Various plugs around the network have a standard way of delivering their payload. Anything that understands the standard protocol can “plug in” to the socket and communicate.
Internet Protocol (IP) is a low-level routing protocol that breaks data into small packets and sends them to an address across a network; it does not guarantee that those packets will be delivered to the destination.
Transmission Control Protocol (TCP) is a higher-level protocol that manages to reliably transmit data. A third protocol, User Datagram Protocol (UDP), sits next to TCP and can be used directly to support fast, connectionless, unreliable transport of packets.
Client/Server
A server is anything that has some resource that can be shared. There are compute servers, which provide computing power; print servers, which manage a collection of printers; disk servers, which provide networked disk space; and web servers, which store web pages. A client is simply any other entity that wants to gain access to a particular server.
In Berkeley sockets, the notion of a socket allows a single computer to serve many different clients at once, as well as to serve many different types of information. This feat is managed by the introduction of a port, which is a numbered socket on a particular machine. A server process is said to "listen" to a port until a client connects to it. A server is allowed to accept multiple clients connected to the same port number, although each session is unique. To manage multiple client connections, a server process must be multithreaded or have some other means of multiplexing the simultaneous I/O.
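A minimal sketch of such a multithreaded, port-based server follows. The EchoServer class is a made-up example that echoes one line back to each client; passing port 0 asks the OS for any free port, which is convenient for demonstration, whereas a real server would listen on a fixed, well-known port.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative server: one listening port, one thread per accepted client.
public class EchoServer {

    private final ServerSocket listener;

    public EchoServer() throws Exception {
        listener = new ServerSocket(0);   // port 0: OS picks a free port
    }

    public int getPort() {
        return listener.getLocalPort();
    }

    // Accepts clients forever; each connection gets a dedicated thread.
    public void serve() throws Exception {
        while (true) {
            final Socket client = listener.accept();
            new Thread(new Runnable() {
                public void run() {
                    try {
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(client.getInputStream()));
                        PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                        out.println(in.readLine());   // echo one line back
                        client.close();
                    } catch (Exception ignored) {
                    }
                }
            }).start();
        }
    }

    public static void main(String[] args) throws Exception {
        final EchoServer server = new EchoServer();
        Thread t = new Thread(new Runnable() {
            public void run() {
                try { server.serve(); } catch (Exception ignored) { }
            }
        });
        t.setDaemon(true);
        t.start();
        // A client session: connect, send one line, read the echo.
        Socket s = new Socket("localhost", server.getPort());
        new PrintWriter(s.getOutputStream(), true).println("hello");
        System.out.println(new BufferedReader(
                new InputStreamReader(s.getInputStream())).readLine());
        s.close();
    }
}
```

Each client session is unique even though all clients connect to the same port number; the per-connection thread is what keeps the sessions independent.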
Reserved Sockets
Once connected, a higher-level protocol ensues, which depends on which port you are using. TCP/IP reserves the lower 1,024 ports for specific protocols: port 21 is for FTP, 23 for Telnet, 25 for e-mail, 79 for finger, 80 for HTTP, 119 for netnews, and the list goes on. It is up to each protocol to determine how a client should interact with the port.
TCP/IP Client Sockets
TCP/IP sockets are used to implement reliable, bidirectional, persistent, point-to-point, stream-based connections between hosts on the Internet. A socket can be used to connect Java’s I/O system to other programs that may reside either on the local machine or on any other machine on the Internet.
There are two kinds of TCP sockets in Java: one for servers and one for clients. The ServerSocket class is designed to be a "listener," which waits for clients to connect before doing anything. The Socket class is designed to connect to server sockets and initiate protocol exchanges.
The creation of a Socket object implicitly establishes a connection between the client and server; there are no methods or constructors that explicitly expose the details of establishing that connection. One constructor used to create client sockets is Socket(String hostname, int port), which creates a socket connecting the local host to the named host and port; it can throw an UnknownHostException or an IOException.
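The implicit connection made by the Socket constructor can be demonstrated with the self-contained sketch below. The ClientSocketDemo class is an illustration, and a throwaway ServerSocket on the local host stands in for a remote server so the example can run anywhere.

```java
import java.net.ServerSocket;
import java.net.Socket;

// Demonstrates that constructing a Socket implicitly opens the connection.
public class ClientSocketDemo {

    public static String connectAndReport(String host, int port) throws Exception {
        // Socket(String hostname, int port) resolves the host and opens the
        // TCP connection; it throws UnknownHostException or IOException on failure.
        Socket socket = new Socket(host, port);
        String report = socket.getInetAddress().getHostAddress() + ":" + socket.getPort();
        socket.close();
        return report;
    }

    public static void main(String[] args) throws Exception {
        // Local stand-in for a remote server; port 0 picks any free port.
        ServerSocket server = new ServerSocket(0);
        System.out.println(connectAndReport("localhost", server.getLocalPort()));
        server.close();
    }
}
```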
SYSTEM DESIGN
On-demand Route Maintenance results in delayed awareness of mobility, because a node is not notified when a cached route breaks until it uses the route to send packets. We classify a cached route into three types:
pre-active, if a route has not been used;
active, if a route is being used;
post-active, if a route was used before but now is not.
It is not necessary to detect whether a route is active or post-active, but these terms help clarify the cache staleness issue. Stale pre-active and post-active routes will not be detected until they are used. Because nodes respond to ROUTE REQUESTs with cached routes, stale routes may be quickly propagated to the caches of other nodes. Thus, pre-active and post-active routes are important sources of cache staleness.
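The three route states can be encoded as a simple enum; the transition methods below are an illustrative reading of the definitions above, not code from the protocol.

```java
// Illustrative encoding of the three cached-route states.
public enum RouteState {
    PRE_ACTIVE,   // cached but never used
    ACTIVE,       // currently carrying packets
    POST_ACTIVE;  // used before, idle now

    // State after the route is used to send a packet.
    public RouteState onUse() {
        return ACTIVE;
    }

    // State after the flow using the route stops: only an active
    // route becomes post-active; a pre-active route stays pre-active.
    public RouteState onIdle() {
        return this == ACTIVE ? POST_ACTIVE : this;
    }
}
```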
When a node detects a link failure, our goal is to notify all reachable nodes that have cached that link to update their caches. To achieve this goal, the node detecting a link failure needs to know which nodes have cached the broken link and needs to notify such nodes efficiently. This goal is very challenging because of mobility and the fast propagation of routing information.
Our solution is to keep track of topology propagation state in a distributed manner, where topology propagation state means which node has cached which link. In a cache table, a node not only stores routes but also maintains two types of information for each route:
(1) how well routing information is synchronized among nodes on a route; and
(2) which neighbor has learned which links through a ROUTE REPLY.
Each node gathers this information during route discoveries and data transmission.
The two types of information are sufficient because each node knows, for each cached link, which neighbors have that link in their caches. Each entry in the cache table contains a field called DataPackets, which records whether a node has forwarded 0, 1, or 2 data packets. A node learns how well routing information is synchronized through the first data packet.
When forwarding a ROUTE REPLY, a node caches only the downstream links; thus, its downstream nodes did not cache the first downstream link through this ROUTE REPLY. When receiving the first data packet, the node knows that upstream nodes have cached all downstream links. The node adds the upstream links to the route consisting of the downstream links. Thus, when a downstream link is broken, the node knows which upstream node needs to be notified.
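One possible shape for a cache-table entry holding the DataPackets counter and the per-neighbor ROUTE REPLY information is sketched below; the class and field names are illustrative assumptions, not the paper's actual data structures.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a cache-table entry carrying the two kinds of information
// described above: a synchronization indicator (the DataPackets counter)
// and a record of which neighbor learned which link via a ROUTE REPLY.
public class CacheTableEntry {

    private final List<String> route;   // the cached source route
    private int dataPackets = 0;        // 0, 1, or 2 data packets forwarded

    // Each record is {neighbor, linkFrom, linkTo}.
    private final List<String[]> replyNeighbors = new ArrayList<String[]>();

    public CacheTableEntry(List<String> route) {
        this.route = route;
    }

    // Record a forwarded data packet; the counter saturates at 2 because
    // the entry only distinguishes 0, 1, or 2 packets.
    public void recordDataPacket() {
        if (dataPackets < 2) {
            dataPackets++;
        }
    }

    public int getDataPackets() {
        return dataPackets;
    }

    public void recordReplyNeighbor(String neighbor, String linkFrom, String linkTo) {
        replyNeighbors.add(new String[] { neighbor, linkFrom, linkTo });
    }

    public List<String[]> getReplyNeighbors() {
        return replyNeighbors;
    }

    public List<String> getRoute() {
        return route;
    }
}
```

The saturating counter mirrors the text: receipt of the first data packet is the event that tells a node its upstream neighbors have cached the downstream links, so values beyond 2 carry no extra information.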
TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each type addresses a specific testing requirement.
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
Functional testing
Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.
System Testing
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
Performance Test
The performance test ensures that output is produced within the required time limits, covering the time the system takes to compile, to respond to users, and to serve requests sent to the system to retrieve results.
Integration Testing
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications (e.g., components in a software system or, one step up, software applications at the company level) interact without error.
Integration testing for Server Synchronization:
Testing the IP address used to communicate with the other nodes
Checking the route status in the cache table after status information is received by a node
Verifying that messages are displayed throughout the run of the application
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.
Acceptance testing for Data Synchronization:
Acknowledgements are received by the sender node after the packets are received by the destination node
The route add operation is performed only when a route request requires it
Node status information is updated automatically by the cache update process
IMPLEMENTATION
Implementation is the stage in the project where the theoretical design is turned into a working system, giving users confidence that the new system will work efficiently and effectively. It involves careful planning, investigation of the current system and its constraints on implementation, design of methods to achieve the changeover, and evaluation of the changeover methods. Apart from planning, a major task of preparing for implementation is the education and training of users. The more complex the system being implemented, the more involved the system analysis and design effort required just for implementation.
An implementation coordination committee, based on the policies of the individual organization, has been appointed. The implementation process begins with preparing a plan for the implementation of the system. According to this plan, the activities are carried out, discussions are held regarding the equipment and resources, and any additional equipment needed to implement the new system is acquired.
Implementation is the final and most critical phase in achieving a successful new system and in giving the users confidence that the new system will be effective. The system can be implemented only after thorough testing is done and it is found to work according to the specification.