Tuesday, September 14, 2010

SOFTWARE REQUIREMENT SPECIFICATION



Table of Contents     
Revision History       
1.         Introduction   
1.1       Purpose
1.2       Document Conventions
1.3       Intended Audience and Reading Suggestions
1.4       Project Scope
1.5       References
2.         Overall Description  
2.1       Product Perspective
2.2       Product Features
2.3       User Classes and Characteristics
2.4       Operating Environment
2.5       Design and Implementation Constraints
2.6       User Documentation
2.7       Assumptions and Dependencies
3.         System Features
3.1       System Feature 1
3.2       System Feature 2 (and so on)
4.         External Interface Requirements
4.1       User Interfaces
4.2       Hardware Interfaces
4.3       Software Interfaces
4.4       Communications Interfaces
5.         Other Nonfunctional Requirements
5.1       Performance Requirements
5.2       Safety Requirements
5.3       Security Requirements
5.4       Software Quality Attributes 
6.         Other Requirements
Appendix A: Glossary
Appendix B: Analysis Models
Appendix C: Issues List

Basic Communication Modes of Operation

Simplex Operation
In simplex operation, a network cable or communications channel can only send information in one direction; it's a “one-way street”. This may seem counter-intuitive: what's the point of communications that only travel in one direction? In fact, there are at least two different places where simplex operation is encountered in modern networking. The first is when two distinct channels are used for communication: one transmits from A to B and the other from B to A. This is surprisingly common, even though not always obvious. Simplex operation is also used in special types of technologies, especially ones that are asymmetric. For example, one type of satellite Internet access sends data over the satellite only for downloads, while a regular dial-up modem is used for upload to the service provider. In this case, both the satellite link and the dial-up connection are operating in a simplex mode.


Half-Duplex Operation
Technologies that employ half-duplex operation are capable of sending information in both directions between two nodes, but only one direction or the other can be utilized at a time. This is a fairly common mode of operation when there is only a single network medium (cable, radio frequency and so forth) between devices.
While this term is often used to describe the behavior of a pair of devices, it can more generally refer to any number of connected devices that take turns transmitting. For example, in conventional Ethernet networks, any device can transmit, but only one may do so at a time. For this reason, regular (unswitched) Ethernet networks are often said to be “half-duplex”, even though it may seem strange to describe a LAN that way.


Full-Duplex Operation
In full-duplex operation, a connection between two devices is capable of sending data in both directions simultaneously. Full-duplex channels can be constructed either as a pair of simplex links (as described above) or using one channel designed to permit bidirectional simultaneous transmissions. A full-duplex link can only connect two devices, so many such links are required if multiple devices are to be connected together.
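
To make the distinction concrete, here is a minimal Python sketch (purely illustrative; the SimplexChannel and FullDuplexLink names are invented, not a real networking API) that models a full-duplex link as a pair of simplex channels, one per direction:

# Minimal sketch: a "full-duplex" link modeled as two simplex channels.
# Class names are illustrative only, not part of any real library.
import queue
import threading

class SimplexChannel:
    """One-way channel: one end may only send, the other may only receive."""
    def __init__(self):
        self._q = queue.Queue()
    def send(self, data):
        self._q.put(data)
    def receive(self):
        return self._q.get()

class FullDuplexLink:
    """Two simplex channels, one per direction, usable at the same time."""
    def __init__(self):
        self.a_to_b = SimplexChannel()
        self.b_to_a = SimplexChannel()

link = FullDuplexLink()

def node_a():
    link.a_to_b.send("hello from A")              # A transmits on its own channel...
    print("A received:", link.b_to_a.receive())   # ...while listening on the other.

def node_b():
    link.b_to_a.send("hello from B")
    print("B received:", link.a_to_b.receive())

ta, tb = threading.Thread(target=node_a), threading.Thread(target=node_b)
ta.start(); tb.start(); ta.join(); tb.join()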

Sunday, September 12, 2010

PROTOTYPE MODEL

A prototype is a working model that is functionally equivalent to a component of the product.

In many instances the client only has a general view of what is expected from the software product. In such a scenario where there is an absence of detailed information regarding the input to the system, the processing needs and the output requirements, the prototyping model may be employed.

This model reflects an attempt to increase the flexibility of the development process by allowing the client to interact and experiment with a working representation of the product. The developmental process only continues once the client is satisfied with the functioning of the prototype. At that stage the developer determines the specifications of the client’s real needs.

PROS AND CONS OF THE WATERFALL MODEL

Advantages

The advantage of waterfall development is that it allows for departmentalization and managerial control. A schedule can be set with deadlines for each stage of development and a product can proceed through the development process like a car in a carwash, and theoretically, be delivered on time. Development moves from concept, through design, implementation, testing, installation, troubleshooting, and ends up at operation and maintenance. Each phase of development proceeds in strict order, without any overlapping or iterative steps.
Disadvantages
The disadvantage of waterfall development is that it does not allow for much reflection or revision. Once an application is in the testing stage, it is very difficult to go back and change something that was not well thought out in the concept stage. Alternatives to the waterfall model include joint application development (JAD), rapid application development (RAD), synch-and-stabilize, build-and-fix, and the spiral model.

WATERFALL MODEL

The waterfall approach was the first process model to be introduced and widely followed in software engineering to ensure the success of a project. In the waterfall approach, the whole process of software development is divided into separate phases.


The phases in the Waterfall model are: Requirement Specification, Software Design, Implementation and Testing, and Maintenance. These phases cascade into one another: the second phase starts only when the defined set of goals for the first phase has been achieved and signed off, hence the name "Waterfall Model". All the methods and processes undertaken in the Waterfall Model are also more visible.
 
The stages of "The Waterfall Model" are:

Requirement Analysis & Definition: All possible requirements of the system to be developed are captured in this phase. Requirements are the set of functionalities and constraints that the end-user (who will be using the system) expects from the system. The requirements are gathered from the end-user by consultation; they are then analyzed for their validity, and the possibility of incorporating them into the system to be developed is also studied. Finally, a Requirement Specification document is created, which serves as a guideline for the next phase of the model.

System & Software Design: Before actual coding starts, it is highly important to understand what we are going to create and what it should look like. The requirement specifications from the first phase are studied in this phase and the system design is prepared. System design helps in specifying hardware and system requirements and also helps in defining the overall system architecture. The system design specifications serve as input for the next phase of the model.

Implementation & Unit Testing: On receiving the system design documents, the work is divided into modules/units and actual coding starts. The system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality; this is referred to as unit testing. Unit testing mainly verifies that the modules/units meet their specifications.
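
As a small illustration of unit testing, the sketch below exercises a made-up apply_discount function with Python's standard unittest module; the function and its expected values are invented purely for the example:

# Illustrative unit test using Python's standard unittest module.
# The unit under test (apply_discount) is a made-up example function.
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()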

Integration & System Testing: As specified above, the system is first divided into units which are developed and tested for their functionality. These units are integrated into a complete system during the integration phase and tested to check whether all modules/units coordinate with each other and the system as a whole behaves as per the specifications. After the software is successfully tested, it is delivered to the customer.

Operations & Maintenance: This phase of the Waterfall Model is a virtually never-ending phase (it can be very long). Generally, problems with the developed system (which were not found during the development life cycle) come up after its practical use starts, so issues related to the system are solved after deployment. Not all problems surface immediately; they arise from time to time and need to be solved, hence this process is referred to as maintenance.

Tuesday, September 07, 2010

Systems Development Life Cycle (SDLC)

The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems engineering, information systems and software engineering, is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems.


In software engineering the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system.

The Systems Development Life Cycle (SDLC) is a process used by a systems analyst to develop an information system, including requirements, validation, training, and user (stakeholder) ownership. Any SDLC should result in a high-quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, works effectively and efficiently in the current and planned Information Technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.
Requirements gathering and analysis


The goal of system analysis is to determine where the problem is in an attempt to fix the system. This step involves breaking the system down into different pieces to analyze the situation, analyzing project goals, breaking down what needs to be created, and attempting to engage users so that definite requirements can be defined. Requirements analysis sometimes requires individuals/teams from both the client and the service provider side to get detailed and accurate requirements; often there has to be a lot of communication back and forth to understand them. Requirement gathering is the most crucial aspect, as communication gaps often arise in this phase and lead to validation errors and bugs in the software.

 Design

In systems design, functions and operations are described in detail, including screen layouts, business rules, process diagrams and other documentation. The output of this stage will describe the new system as a collection of modules or subsystems.
The design stage takes as its initial input the requirements identified in the approved requirements document. For each requirement, a set of one or more design elements will be produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe the desired software features in detail, and generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudocode, and a complete entity-relationship diagram with a full data dictionary. These design elements are intended to describe the software in sufficient detail that skilled programmers may develop the software with minimal additional input.

Build or coding

Modular and subsystem program code is written during this stage. Unit testing and module testing are done in this stage by the developers. This stage is intermingled with the next, in that individual modules will need testing before being integrated into the main project.

Testing

The code is tested at various levels in software testing. Unit, system and user acceptance testing are often performed. This is a grey area, as many different opinions exist as to what the stages of testing are and how much, if any, iteration occurs. Iteration is not generally part of the waterfall model, but some usually occurs at this stage.

The following are common types of testing:

Data set testing

Unit testing

System testing

Integration testing

Black box testing

White box testing

Regression testing

Automation testing

User acceptance testing

Performance testing

Production

Definition: it is the process that ensures that the program performs the intended task.

Operations and maintenance

The deployment of the system includes changes and enhancements before the decommissioning or sunset of the system. Maintaining the system is an important aspect of SDLC. As key personnel change positions in the organization, new changes will be implemented, which will require system updates.

ISO OSI Model

The OSI Reference Model is based on a proposal developed by the International Organization for Standardization (ISO). The model is known as the ISO OSI (Open Systems Interconnection) Reference Model because it deals with connecting open systems – that is, systems that are open for communication with other systems.

The OSI Model is a set of protocols that attempt to identify and standardize data communication practices. The OSI Model has the support of most computer and network vendors, many large customers, and most governments, including the United States.


The OSI Model illustrates how data communications should take place. It segregates the process into seven groups, called layers. Into these layers are integrated the protocol standards developed by the ISO and other standards organizations, including the Institute of Electrical and Electronics Engineers (IEEE), the American National Standards Institute (ANSI), and the International Telecommunication Union (ITU), formerly known as the CCITT (Comité Consultatif International Télégraphique et Téléphonique). The OSI Model states which protocols and standards should be used at each layer. It is modular: each layer of the OSI Model functions with the one above and below it.

The mnemonic used to memorize the layer names of the OSI Model is "All People Seem To Need Data Processing". The lower two layers are normally implemented in hardware and software, while the remaining five layers are implemented only in software.

The layered approach to network communications offers the following advantages: reduced complexity, easier teaching and learning, modular engineering, accelerated evolution, interoperable technology, and standard interfaces.

The Seven Layers of the OSI Model

Layer Name

7 Application

6 Presentation

5 Session

4 Transport

3 Network

2 Data Link

1 Physical


The easiest way to remember the layers of the OSI model is to use the handy mnemonic "All People Seem To Need Data Processing":

Layer Name Mnemonic

7 Application All

6 Presentation People

5 Session Seem

4 Transport To

3 Network Need

2 Data Link Data

1 Physical Processing

The functions of the seven layers of the OSI model are:

Layer Seven of the OSI Model

The Application Layer of the OSI model is responsible for providing end-user services, such as file transfers, electronic messaging (e-mail), virtual terminal access, and network management. This is the layer with which the user interacts.

Layer Six of the OSI Model

The Presentation Layer of the OSI model is responsible for defining the syntax which two network hosts use to communicate. Encryption and compression should be Presentation Layer functions.

Layer Five of the OSI Model

The Session Layer of the OSI model is responsible for establishing process-to-process communications between networked hosts.

Layer Four of the OSI Model

The Transport Layer of the OSI model is responsible for delivering messages between networked hosts. The Transport Layer should be responsible for fragmentation and reassembly.
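
To illustrate what fragmentation and reassembly mean at this layer, here is a toy Python sketch (not real TCP; the segment size and message are arbitrary) that splits a message into numbered segments and reassembles them even when they arrive out of order:

# Toy illustration of transport-layer fragmentation and reassembly.
def fragment(message: bytes, segment_size: int):
    """Split a message into (sequence_number, chunk) pairs."""
    return [
        (seq, message[i:i + segment_size])
        for seq, i in enumerate(range(0, len(message), segment_size))
    ]

def reassemble(segments):
    """Rebuild the original message from segments in any arrival order."""
    return b"".join(chunk for _, chunk in sorted(segments))

msg = b"The Transport Layer delivers messages between networked hosts."
segments = fragment(msg, segment_size=16)
segments.reverse()                      # simulate out-of-order arrival
assert reassemble(segments) == msg
print(reassemble(segments).decode())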

Layer Three of the OSI Model

The Network Layer of the OSI model is responsible for establishing paths for data transfer through the network. Routers operate at the Network Layer.

Layer Two of the OSI Model

The Data Link Layer of the OSI model is responsible for communications between adjacent network nodes. Switches and bridges operate at the Data Link Layer, while simple hubs belong to the Physical Layer.

Layer One of the OSI Model

The Physical Layer of the OSI model is responsible for bit-level transmission between network nodes. The Physical Layer defines items such as: connector types, cable types, voltages, and pin-outs.

The OSI Model vs. The Real World

The biggest difficulty with the OSI model is that it does not map well to the real world!

The OSI model was created after many of today's protocols were already in production use. These existing protocols, such as TCP/IP, were designed and built around the needs of real users with real problems to solve. The OSI model was created by academics for academic purposes.

The OSI model is a very poor standard, but it's the only well-recognized standard we have which describes networked applications.

The easiest way to deal with the OSI model is to map the real-world protocols to the model, as well as they can be mapped.

Layer Name Common Protocols

7 Application SSH, Telnet, FTP, HTTP, SMTP

6 Presentation  SNMP

5 Session RPC, Named Pipes, NETBIOS

4 Transport TCP, UDP

3 Network IP

2 Data Link Ethernet

1 Physical Cat-5
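
For reference, the mapping in the table above can be written down as a simple data structure; the Python sketch below simply mirrors the table, which is itself only an approximate mapping:

# The OSI-layer-to-protocol mapping above, expressed as a dictionary.
# The assignments mirror the table and are approximate, as the text notes.
OSI_LAYERS = {
    7: ("Application",  ["SSH", "Telnet", "FTP", "HTTP", "SMTP"]),
    6: ("Presentation", ["SNMP"]),
    5: ("Session",      ["RPC", "Named Pipes", "NETBIOS"]),
    4: ("Transport",    ["TCP", "UDP"]),
    3: ("Network",      ["IP"]),
    2: ("Data Link",    ["Ethernet"]),
    1: ("Physical",     ["Cat-5"]),
}

for number in sorted(OSI_LAYERS, reverse=True):
    name, protocols = OSI_LAYERS[number]
    print(f"{number}  {name:<12} {', '.join(protocols)}")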

Software Project Planning

Project planning is an aspect of project management which comprises various processes. The aim of these processes is to ensure that the various project tasks are well coordinated and that they meet the project objectives, including timely completion of the project.


What is Project Planning?


Project Planning is an aspect of Project Management that focuses a lot on Project Integration. The project plan reflects the current status of all project activities and is used to monitor and control the project.

The Project Planning tasks ensure that various elements of the Project are coordinated and therefore guide the project execution.
Project Planning helps in

- Facilitating communication

- Monitoring/measuring the project progress, and

- Providing overall documentation of assumptions/planning decisions


The Project Planning Phases can be broadly classified as follows:

- Development of the Project Plan

- Execution of the Project Plan

- Change Control and Corrective Actions

Project Planning is an ongoing effort throughout the Project Lifecycle.

Why is it important?

“If you fail to plan, you plan to fail.”

Project planning is crucial to the success of the Project.

Careful planning right from the beginning of the project can help to avoid costly mistakes. It provides an assurance that the project execution will accomplish its goals on schedule and within budget.

What are the steps in Project Planning?

Project Planning spans the various aspects of the project. Generally, project planning is considered to be a process of estimating, scheduling and assigning the project's resources in order to deliver an end product of suitable quality. However, it is much more than that: it can assume a very strategic role that can determine the very success of the project. Developing the project plan itself is one of the crucial steps in project planning in general.

Typically Project Planning can include the following types of project Planning:

1) Project Scope Definition and Scope Planning

2) Project Activity Definition and Activity Sequencing

3) Time, Effort and Resource Estimation

4) Risk Factors Identification

5) Cost Estimation and Budgeting

6) Organizational and Resource Planning

7) Schedule Development

8) Quality Planning

9) Risk Management Planning

10) Project Plan Development and Execution

11) Performance Reporting

12) Planning Change Management

13) Project Rollout Planning


We now briefly examine each of the above steps:

1) Project Scope Definition and Scope Planning:

In this step we document the project work that would help us achieve the project goal. We document the assumptions, constraints, user expectations, Business Requirements, Technical requirements, project deliverables, project objectives and everything that defines the final product requirements. This is the foundation for a successful project completion.

2) Quality Planning:

The relevant quality standards are determined for the project. This is an important aspect of Project Planning. Based on the inputs captured in the previous steps such as the Project Scope, Requirements, deliverables, etc. various factors influencing the quality of the final product are determined. The processes required to deliver the Product as promised and as per the standards are defined.


3) Project Activity Definition and Activity Sequencing:

In this step we define all the specific activities that must be performed to deliver the product by producing the various product deliverables. The Project Activity sequencing identifies the interdependence of all the activities defined.


4) Time, Effort and Resource Estimation:

Once the scope, activities and activity interdependence are clearly defined and documented, the next crucial step is to determine the effort required to complete each of the activities. See the article on "Software Cost Estimation" for more details. The effort can be calculated using one of the many techniques available, such as function points, lines of code, complexity of code, benchmarks, etc.

This step clearly estimates and documents the time, effort and resource required for each activity.
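
As one hedged illustration of the kind of calculation involved, the sketch below applies the basic COCOMO formula, effort = a x KLOC^b with the classic organic-mode coefficients a = 2.4 and b = 1.05, to an assumed size of 32 KLOC; a real project would calibrate these constants or use function points instead:

# Rough effort estimate using the basic COCOMO "organic mode" formula.
# The 32 KLOC size is an assumed figure purely for illustration.
def cocomo_basic_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Return estimated effort in person-months for an organic-mode project."""
    return a * (kloc ** b)

size_kloc = 32.0                         # assumed estimated size in KLOC
effort_pm = cocomo_basic_effort(size_kloc)
print(f"Estimated effort: {effort_pm:.1f} person-months for {size_kloc} KLOC")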

5) Risk Factors Identification:

“Expecting the unexpected and facing it”

It is important to identify and document the risk factors associated with the project based on the assumptions, constraints, user expectations, specific circumstances, etc.

6) Schedule Development:

The time schedule for the project can be arrived at based on the activities, interdependence and effort required for each of them. The schedule may influence the cost estimates, the cost benefit analysis and so on.

Project scheduling is one of the most important tasks of project planning and also one of the most difficult. In very large projects it is possible that several teams work on developing the project. They may work on it in parallel; however, their work may be interdependent.
Various factors may impact successful scheduling of a project:

- Teams not directly under our control

- Resources with not enough experience

Popular tools such as Gantt charts can be used for creating and reporting schedules.
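
To make the idea of deriving a schedule from activities, durations and interdependence concrete, here is a small Python sketch; the task names, durations and dependencies are invented for illustration:

# Earliest-finish scheduling over a hypothetical set of interdependent tasks.
# Durations are in days; names, values and dependencies are made up.
from functools import lru_cache

tasks = {
    "requirements": (5,  []),
    "design":       (7,  ["requirements"]),
    "coding":       (15, ["design"]),
    "test_plan":    (4,  ["requirements"]),
    "testing":      (6,  ["coding", "test_plan"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name: str) -> int:
    """Earliest day the task can finish, given its dependencies."""
    duration, deps = tasks[name]
    return duration + max((earliest_finish(d) for d in deps), default=0)

for name in tasks:
    print(f"{name:<13} cannot finish before day {earliest_finish(name)}")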

7) Cost Estimation and Budgeting:

Based on the information collected in all the previous steps, it is possible to estimate the cost involved in executing and implementing the project. See the article on "Software Cost Estimation" for more details. A cost-benefit analysis can be arrived at for the project. Based on the cost estimates, budget allocation is done for the project.

8) Organizational and Resource Planning

Based on the activities identified, the schedule and the budget allocation, resource types and resources are identified. One of the primary goals of resource planning is to ensure that the project is run efficiently. This can only be achieved by keeping all the project resources as fully utilized as possible. Success depends on the accuracy in predicting the resource demands that will be placed on the project. Resource planning is an iterative process and is necessary to optimize the use of resources throughout the project life cycle, thus making the project execution more efficient. There are various types of resources – equipment, personnel, facilities, money, etc.

9) Risk Management Planning:

Risk Management is a process of identifying, analyzing and responding to a risk. Based on the Risk factors Identified a Risk resolution Plan is created. The plan analyses each of the risk factors and their impact on the project. The possible responses for each of them can be planned. Throughout the lifetime of the project these risk factors are monitored and acted upon as necessary.


10) Project Plan Development and Execution:

Project plan development uses the inputs gathered from all the other planning processes, such as scope definition, activity identification, activity sequencing, quality management planning, etc. A detailed work breakdown structure comprising all the activities identified is used. The tasks are scheduled based on the inputs captured in the steps previously described. The project plan documents all the assumptions, activities, schedules and timelines, and drives the project.

Each of the project tasks and activities is periodically monitored. The team and the stakeholders are informed of the progress. This serves as an excellent communication mechanism. Any delays are analyzed and the project plan may be adjusted accordingly.

11) Performance Reporting:

As described above, the progress of each of the tasks/activities described in the project plan is monitored. The progress is compared with the schedule and timelines documented in the project plan. Various techniques are used to measure and report project performance, such as EVM (Earned Value Management). A wide variety of tools can be used to report the performance of the project, such as PERT charts, Gantt charts, logical bar charts, histograms, pie charts, etc.
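
As an example of the kind of calculation EVM involves, the sketch below derives the standard cost and schedule indicators from assumed planned value, earned value and actual cost figures (all numbers are invented):

# Basic Earned Value Management indicators from assumed figures.
planned_value = 100_000.0   # PV: budgeted cost of work scheduled to date
earned_value  =  80_000.0   # EV: budgeted cost of work actually completed
actual_cost   =  90_000.0   # AC: actual cost of the work completed

cost_variance     = earned_value - actual_cost      # CV  < 0 means over budget
schedule_variance = earned_value - planned_value    # SV  < 0 means behind schedule
cpi = earned_value / actual_cost                    # CPI < 1 means over budget
spi = earned_value / planned_value                  # SPI < 1 means behind schedule

print(f"CV = {cost_variance:,.0f}, SV = {schedule_variance:,.0f}")
print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")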

12) Planning Change Management:

Analysis of project performance can necessitate that certain aspects of the project be changed. Requests for changes need to be analyzed carefully and their impact on the project should be studied. Considering all these aspects, the project plan may be modified to accommodate the request for change.

Friday, September 03, 2010

LATEST TRICKS FOR ORKUT USERS

ORKUT SHORTCUTS

SCRAPBOOK
alt+shft+S

HOME
alt+shft+H

FRIENDS
alt+shft+B

PROFILE
alt+shft+P

LOGOUT
alt+shft+L

STOP HELLO TONE (LATEST TRICKS)

To stop hello tunes:

AIRTEL
dial 543211808

AIRCEL
sms UNSUB to 5300003

VODAFONE
sms CAN CT to 144

IDEA
dial 56765

MESH/FULLY CONNECTED TOPOLOGY


Mesh networking is a type of networking wherein each node in the network may act as an independent router, regardless of whether it is connected to another network or not. It allows for continuous connections and reconfiguration around broken or blocked paths by "hopping" from node to node until the destination is reached. A mesh network whose nodes are all connected to each other is a fully connected network. Mesh networks differ from other networks in that the component parts can all connect to each other via multiple hops, and they generally are not mobile. Mesh networks can be seen as one type of ad hoc network. Fully connected topology: a network topology in which there is a direct path (branch) between any two nodes. Note: in a fully connected network with n nodes, there are n(n-1)/2 direct paths. In other words, each node is connected to every other node. Both these topologies are generally not implemented in computer networks.
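
The n(n-1)/2 figure is easy to verify with a short sketch; for example, fully connecting 6 nodes already requires 15 direct links:

# Number of direct links needed to fully connect n nodes: n(n-1)/2.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (3, 6, 10, 50):
    print(f"{n:>3} nodes -> {full_mesh_links(n):>5} direct links")
# 6 nodes need 15 links; 50 nodes already need 1225,
# which is why full meshes are rarely built for large networks.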

STAR TOPOLOGY


Star networks are one of the most common computer network topologies. In its simplest form, a star network consists of one central switch, hub or computer, which acts as a conduit to transmit messages. The star topology reduces the chance of network failure by connecting all of the systems to a central node. When applied to a bus-based network, this central hub rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. Data on a star network passes through the hub, switch, or concentrator before continuing to its destination. The hub, switch, or concentrator manages and controls all functions of the network. It also acts as a repeater for the data flow. This configuration is common with twisted pair cable. However, it can also be used with coaxial cable or optical fibre cable.

Advantages


  • Better performance: star topology prevents the passing of data packets through an excessive number of nodes. At most, 3 devices and 2 links are involved in any communication between any two devices. Although this topology places a huge overhead on the central hub, with adequate capacity, the hub can handle very high utilization by one device without affecting others.
  • Isolation of devices: Each device is inherently isolated by the link that connects it to the hub. This makes the isolation of individual devices straightforward and amounts to disconnecting each device from the others. This isolation also prevents any non-centralized failure from affecting the network.
  • Benefits from centralization: As the central hub is the bottleneck, increasing its capacity, or connecting additional devices to it, increases the size of the network very easily. Centralization also allows the inspection of traffic through the network. This facilitates analysis of the traffic and detection of suspicious behavior.
  • Simplicity: This topology is easy to understand, establish, and navigate. Its simplicity obviates the need for complex routing or message passing protocols. Also, as noted earlier, the isolation and centralization it allows simplify fault detection, as each link or device can be probed individually.
  • Easy to install and wire.  
  • Easy to detect faults and to remove parts.
  • No disruptions to the network when connecting or removing devices.
Disadvantages


The primary disadvantage of a star topology is the high dependence of the system on the functioning of the central hub. While the failure of an individual link only results in the isolation of a single node, the failure of the central hub renders the network inoperable, immediately isolating all nodes. The performance and scalability of the network also depend on the capabilities of the hub. Network size is limited by the number of connections that can be made to the hub, and performance for the entire network is capped by its throughput. While in theory traffic between the hub and a node is isolated from other nodes on the network, other nodes may see a performance drop if traffic to another node occupies a significant portion of the central node's processing capability or throughput. Furthermore, wiring up the system can be very complex and costly.

SOFTWARE: The Product

Computer software is the product that software professionals build. It encompasses programs that execute within a computer of any size and architecture, documents that encompass hardcopy and virtual forms, and data that encompasses numbers and text, but also includes representations of pictorial, video, and audio information.

Computers are fast becoming our way of life, and one cannot imagine life without computers in today's world. Whether you go to a railway station for a reservation, want to book a cinema ticket online, go to a library, or go to a bank, you will find computers in all these places. Since computers are used in every possible field today, it becomes an important issue to understand and build these computerized systems in an effective way.


Building such systems is not an easy process; it requires certain skills and capabilities to understand and follow a systematic procedure towards building any information system.

What is Software Engineering?


Software Engineering is the systematic approach to the development, operation and maintenance of software. Software Engineering is concerned with the development and maintenance of software products.

The primary goal of software engineering is to produce high-quality software at low cost. Software engineering involves project planning, project management, systematic analysis, design, validation and maintenance activities.
Software engineering (SE) is a profession dedicated to designing, implementing, and modifying software so that it is of higher quality, more affordable, maintainable, and faster to build.