Search This Blog

Saturday, October 30, 2010

Installing Remote Desktop Connection on non-XP systems

Non-Windows XP systems can also access Windows systems running Windows Remote Desktop. The local system used to access the remote computer must have the remote connectivity client software installed. To install the required Terminal Services components:
  1. Insert a Windows XP Professional CD in the local system’s CD or DVD drive.
  2. From the resulting Welcome To Microsoft Windows XP screen, click Perform Additional Tasks.
  3. Click Setup Remote Desktop Connection from the What Do You Want To Do Screen.
  4. The InstallShield Wizard will open; click Next on the Welcome To The InstallShield Wizard for Remote Desktop Connection.
  5. Read and accept the license agreement and click Next.
  6. Enter the customer name and organization, specify whether the desktop connection is to be available to all users or only the logged-in user, and click Next.
  7. Click Install.
  8. Click Finish.
The older Windows system can now open the Remote Desktop Connection menu by clicking Start | Programs | Accessories | Communications | Remote Desktop Connection or by opening a command prompt and typing mstsc.

The Remote Desktop Connection

The Remote Desktop Connection software is pre-installed with Windows XP. To run it, click Start, click All Programs, click Accessories, click Communications, and then click Remote Desktop Connection. This software package can also be found on the Windows XP Professional and Windows XP Home Edition product CDs and can be installed on any supported Windows platform. To install from the CD, insert the disc into the target machine's CD-ROM drive, select Perform Additional Tasks, and then click Install Remote Desktop Connection.

Windows Remote Desktop

Connecting to a remote desktop is fairly straightforward, but a few elements must be in place first:
  • The host desktop must have Internet access (preferably high-speed).
  • The local system (the PC that will connect to the remote desktop) must be running Windows XP Professional (or a Windows 2003-family server) or have the appropriate Terminal Services client tools installed.
  • Firewalls between the local system and the remote host must be configured to pass the appropriate traffic.
  • Remote Desktop must be installed and enabled on the target system.

Installing Remote Desktop

Remote Desktop is an optional Windows XP Professional service. To install it on a host system (to enable a computer to accept a remote connection request), Microsoft recommends you:
  1. Click Start.
  2. Click Control Panel.
  3. Select Add Or Remove Programs.
  4. Select Add/Remove Windows Components.
  5. Select Internet Information Services.
  6. Click the Details button.
  7. Select World Wide Web Service.
  8. Click the Details button.
  9. Check the Remote Desktop Web Connection checkbox.
  10. Click OK.
  11. Click Next.
  12. Click Finish to complete the wizard.
  13. Click Start.
  14. Select Run.
  15. Enter Net Stop w3svc and click the OK button or press Enter.
  16. Click Start.
  17. Select All Programs.
  18. Select Microsoft Update.
  19. Select Scan For Updates.
  20. Install all critical updates on the host system.
  21. Click Start.
  22. Select Run.
  23. Enter Net Start w3svc and click the OK button.

Wednesday, October 27, 2010

Risk Management in IT

Risk management is the method by which business managers control the overall operational and financial costs of the important business processes that ultimately yield profits.

An asset is any entity that requires protection. For example:

1. Information assets: e.g.

  • Databases: customer, personnel, production, sales, marketing, and financial data. These information assets are critical to the business; their confidentiality, integrity, and availability are of utmost importance.
  • Data files: transaction data giving up-to-date information about each event.
  • Operation and support procedures: these have been developed over the years and provide detailed instructions on how to perform various activities.
  • Archived information: old information that may be required by law to be retained.
  • Continuity plans: these would be developed to overcome any disaster and maintain the continuity of business. Their absence will lead to ad-hoc decisions in a crisis.

2. Software Assets:

  • Application software
  • System software

3. Physical Assets:

  • Computer equipment
  • Communication equipment
  • Storage media
  • Technical equipment
  • Furniture and fixtures

4. Services:

  • Computing services that the organization has outsourced.
  • Communication services such as voice communication, data communication, value-added services, and wide area networks.
  • Environmental conditioning services such as heating, lighting, air conditioning, and power.

Risk Assessment:

  • A step in the risk management process.
It is the determination of the quantitative or qualitative value of risk related to a concrete situation and a recognized threat.
A quantitative risk assessment requires calculating the two components of risk: the magnitude of the potential loss L and the probability p that the loss will occur; the risk is then R = p × L.
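The relationship can be sketched in a few lines of code; the threat scenarios and figures below are invented purely for illustration:

```python
# Quantitative risk: R = p * L, where L is the magnitude of the
# potential loss and p is the probability that the loss will occur.

def risk(probability, loss):
    """Expected loss (risk) for a single threat scenario."""
    return probability * loss

# Hypothetical scenarios: (threat, annual probability, loss in dollars).
scenarios = [
    ("Database server failure",  0.05, 200_000),
    ("Laptop theft",             0.20,   3_000),
    ("Archive media corruption", 0.01,  50_000),
]

for name, p, loss in scenarios:
    print(f"{name}: R = {risk(p, loss):,.0f}")
```

Ranking the scenarios by R gives a simple, defensible ordering of where risk-reduction spending matters most.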

Sunday, October 24, 2010

Bluetooth History

The name Bluetooth was derived from the Danish king Harald Blatand (Bluetooth), who is credited with uniting the people from Scandinavia during the tenth century. The Bluetooth SIG (Special Interest Group) was formed in February 1998. The original founding members of Bluetooth SIG consisted of Ericsson, IBM, Intel, Nokia, and Toshiba.
By June 2001, the member list of participating companies exceeded 2400. The role of the SIG is to develop the specifications for Bluetooth as per the requirements, promote and market the technology and brand name, handle legal and regulatory issues, and certify the Bluetooth products that meet the conformance and interoperability requirements. Any company, by signing a zero-cost agreement, has complete access to the SIG specifications and can qualify for a royalty-free license to build products based on the Bluetooth technology.
The Bluetooth SIG completed the initial specification work and released the first version of the official Bluetooth standard in July 1999.

Data in GSM Networks

The Global System for Mobile Communication (GSM) is a multiservice cellular network. It provides not only voice service, but a good set of data services as well. This chapter describes the data services offered by a GSM network. It describes the data services before the advent of GPRS and EDGE.
The GSM data services can be categorized in terms of traffic, signaling, and broadcast channel data services. The GSM standard specifies data services on the traffic channel (TCH), which can be utilized by data applications such as fax and Internet service provider (ISP) connections. This is also referred to as circuit-switched (CS) data service. The data service on a signaling channel is known as the point-to-point short message service (SMS). Using SMS, a subscriber sends or receives a short string of text (maximum 160 characters) using a signaling channel. There is another type of SMS service called SMS broadcast, which is the only broadcast channel data service. This service transports data on a specially defined broadcast channel to all the subscribers in a cell. Broadcast data applications, such as traffic reports and weather alerts, were anticipated to use this service, but it did not get much attention in deployment from cellular service providers.
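The character limit follows from the size of the signalling payload: an SMS carries at most 140 octets, and the default GSM alphabet packs each character into 7 bits. A quick check of the arithmetic:

```python
# An SMS payload is at most 140 octets; dividing the available bits
# by the bits needed per character gives the message length limit.
PAYLOAD_OCTETS = 140

def sms_char_limit(bits_per_char):
    return (PAYLOAD_OCTETS * 8) // bits_per_char

print(sms_char_limit(7))   # default GSM 7-bit alphabet -> 160 characters
print(sms_char_limit(8))   # 8-bit data -> 140
print(sms_char_limit(16))  # UCS-2, e.g. for non-Latin scripts -> 70
```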


Cyber forensics can be defined as the process of extracting information and data from computer storage media and guaranteeing its accuracy and reliability. The challenge of course is actually finding this data, collecting it, preserving it, and presenting it in a manner acceptable in a court of law.
Electronic evidence is fragile and can easily be modified. Additionally, cyber thieves, criminals, dishonest and even honest employees hide, wipe, disguise, cloak, encrypt and destroy evidence from storage media using a variety of freeware, shareware and commercially available utility programs.
A global dependency on technology combined with the expanding presence of the Internet as a key and strategic resource requires that corporate assets are well protected and safeguarded.
When those assets come under attack, or are misused, info security professionals must be able to gather electronic evidence of such misuse and utilize that evidence to bring to justice those who misuse the technology.
Cyber forensics, while firmly established as both an art and a science, is in its infancy. With technology evolving, mutating, and changing at such a rapid pace, the rules governing the application of cyber forensics to the fields of auditing, security, and law enforcement are changing as well. Almost daily, new techniques and procedures are designed to provide info security professionals with better means of finding electronic evidence, collecting it, preserving it, and presenting it to client management for potential use in the prosecution of cyber criminals.

Saturday, October 16, 2010

Internet Registries

Three regional Internet registries are responsible for the assignment of IP addresses and autonomous system numbers globally (other organizations are responsible for the assignment of domain names):
  • ARIN— American Registry for Internet Numbers
  • APNIC— Asia Pacific Network Information Centre
  • RIPE— Réseaux IP Européens
ARIN is a nonprofit organization established for the purpose of administration and registration of IP numbers for the following geographical areas: North America, South America, the Caribbean, and sub-Saharan Africa.
APNIC represents the Asia Pacific region, comprising 62 economies. It is a not-for-profit, membership-based organization whose members include Internet service providers, national Internet registries, and similar organizations.
RIPE is an open collaborative community of organizations and individuals operating wide area IP networks in Europe and beyond. The objective of the RIPE community is to ensure the administrative and technical coordination necessary to enable operation of a pan-European IP network. RIPE has no formal membership, and its activities are performed on a voluntary basis.

Satellite Communication Systems

The era of satellite systems began in 1957 with the launch of Sputnik by the Soviet Union.
However, the communication capabilities of Sputnik were very limited. The first real communication
satellite was the AT&T Telstar 1, which was launched by NASA in 1962. Telstar 1
was enhanced in 1963 by its successor, Telstar 2. From the Telstar era to today, satellite
communications [16] have enjoyed an enormous growth offering services such as data,
paging, voice, TV broadcasting, Internet access and a number of mobile services.
Satellite orbits belong to three different categories. In ascending order of height, these are
the circular Low Earth Orbit (LEO), Medium Earth Orbit (MEO) and Geosynchronous Earth
Orbit (GEO) categories at distances in the ranges of 100–1000 km, 5000–15 000 km and approximately 36 000 km, respectively. There also exist satellites that utilize elliptical orbits.
These try to combine the low propagation delay property of LEO systems and the stability of
GEO systems.
The trend nowadays is towards use of LEO orbits, which enable small propagation delays
and construction of simple and light ground mobile units. A number of LEO systems have
appeared, such as Globalstar and Iridium. They offer voice and data services at rates up to 10
kbps through a dense constellation of LEO satellites.
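The practical consequence of the orbit heights above is propagation delay. A short sketch of the one-way delay for a satellite directly overhead (ground-station geometry ignored, altitudes representative):

```python
# One-way propagation delay from the ground to a satellite directly
# overhead: distance divided by the speed of light.
C = 299_792_458  # speed of light in m/s

def one_way_delay_ms(altitude_km):
    return altitude_km * 1000 / C * 1000  # result in milliseconds

for orbit, altitude_km in [("LEO", 1_000), ("MEO", 10_000), ("GEO", 36_000)]:
    print(f"{orbit} at {altitude_km} km: {one_way_delay_ms(altitude_km):.1f} ms")
```

The roughly 120 ms one-way figure for GEO (about a quarter of a second round trip) is exactly why LEO constellations are preferred for interactive voice and data.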


Another advantage of GSM is its support for several extension technologies that achieve
higher rates for data applications. Two such technologies are High Speed Circuit Switched
Data (HSCSD) and General Packet Radio Service (GPRS). HSCSD is a very simple upgrade
to GSM. Contrary to GSM, it gives more than one time slot per frame to a user; hence the
increased data rates. HSCSD allows a phone to use two, three or four slots per frame to achieve
rates of 28.8, 43.2 and 57.6 kbps, respectively. Support for asymmetric links is also provided,
meaning that the downlink rate can be different than that of the uplink. A problem of HSCSD
is the fact that it decreases battery life, due to the fact that increased slot use makes terminals
spend more time in transmission and reception modes. However, due to the fact that reception
requires significantly less consumption than transmission, HSCSD can be efficient for web
browsing, which entails much more downloading than uploading.
GPRS operation is based on the same principle as that of HSCSD: allocation of more slots
within a frame. However, the difference is that GPRS is packet-switched, whereas GSM and
HSCSD are circuit-switched. This means that a GSM or HSCSD terminal that browses the
Internet at 14.4 kbps occupies a 14.4 kbps GSM/HSCSD circuit for the entire duration of the
connection, despite the fact that most of the time is spent reading (thus downloading) Web
pages rather than sending (thus uploading) information. Therefore, significant system capacity
is lost. GPRS uses bandwidth on demand (in the case of the above example, only when
the user downloads a new page). In GPRS, a single 14.4 kbps link can be shared by more than
one user, provided of course that users do not simultaneously try to use the link at this speed;
rather, each user is assigned a very low rate connection which can for short periods use
additional capacity to deliver web pages. GPRS terminals support a variety of rates, ranging
from 14.4 to 115.2 kbps, both in symmetric and asymmetric configurations.
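Both HSCSD and GPRS build their headline rates from multiples of the basic 14.4 kbps per-slot rate, so the figures quoted above can be reproduced with a one-line calculation:

```python
PER_SLOT_KBPS = 14.4  # basic GSM data rate of a single time slot

def aggregate_rate(slots):
    """Aggregate data rate when a terminal is granted `slots` slots per frame."""
    return PER_SLOT_KBPS * slots

# HSCSD: two to four circuit-switched slots per frame.
for slots in (2, 3, 4):
    print(f"HSCSD, {slots} slots: {aggregate_rate(slots):.1f} kbps")

# GPRS: packet-switched, up to all 8 slots of a frame, hence 115.2 kbps.
print(f"GPRS maximum: {aggregate_rate(8):.1f} kbps")
```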


Throughout Europe, a new part of the spectrum in the area around 900 MHz has been made
available for 2G systems. This allocation was followed later by allocation of frequencies at
the 1800 MHz band. 2G activities in Europe were initiated in 1982 with the formation of a
study group that aimed to specify a common pan-European standard. Its name was ‘Groupe
Spécial Mobile’ (later renamed Global System for Mobile Communications). GSM [3],
which comes from the initials of the group’s name, was the resulting standard. Nowadays,
it is the most popular 2G technology; by 1999 it had 1 million new subscribers every week.
This popularity is not only due to its performance, but also due to the fact that it is the only 2G
standard in Europe. This can be thought of as an advantage, since it simplifies roaming of
subscribers between different operators and countries.
The first commercial deployment of GSM was made in 1992 and used the 900 MHz band.
The system that uses the 1800 MHz band is known as DCS 1800 but it is essentially GSM.
GSM can also operate in the 1900 MHz band used in America for several digital networks and
in the 450 MHz band in order to provide a migration path from the 1G NMT standard that
uses this band to 2G systems.
As far as operation is concerned, GSM defines a number of frequency channels, which are
organized into frames and are in turn divided into time slots. The exact structure of GSM
channels is described later in the book; here we just mention that slots are used to construct
both channels for user traffic and control operations, such as handover control, registration,
call setup, etc. User traffic can be either voice or low rate data, around 14.4 kbps.

What Is PHP?

PHP is officially known as PHP: Hypertext Preprocessor. It is a server-side scripting language often written in an HTML context. Unlike an ordinary HTML page, a PHP script is not sent directly to a client by the server; instead, it is parsed by the PHP engine. HTML elements in the script are left alone, but PHP code is interpreted and executed. PHP code in a script can query databases, create images, read and write files, talk to remote servers—the possibilities are endless. The output from PHP code is combined with the HTML in the script, and the result is sent to the user.

PHP is also installed as a command-line application, making it an excellent tool for scripting on a server. Many system administrators now use PHP for the sort of automation that has been traditionally handled by Perl or shell scripting.

Why Choose PHP?

There are some compelling reasons to work with PHP. For many projects, you will find that the production process is significantly faster than you might expect if you are used to working with other scripting languages. At Corrosive we work with both PHP and Java. We choose PHP when we want to see results quickly without sacrificing stability. As an open-source product, PHP is well supported by a talented production team and a committed user community. Furthermore, PHP can be run on all the major operating systems and with most servers.

Speed of Development

Because PHP allows you to separate HTML code from scripted elements, you will notice a significant decrease in development time on many projects. In many instances, you will be able to separate the coding stage of a project from the design and build stages. Not only can this make life easier for you as a programmer, but it also can remove obstacles that stand in the way of effective and flexible design.

PHP Is Open Source

To many people, open source simply means free, which is, of course, a benefit in itself.

Well-maintained open-source projects offer users additional benefits, though. You benefit from an accessible and committed community that offers a wealth of experience in the subject. Chances are that any problem you encounter in your coding can be answered swiftly and easily with a little research. If that fails, a question sent to a mailing list can yield an intelligent, authoritative response.

You also can be sure that bugs will be addressed as they are found, and that new features will be made available as the need is defined. You will not have to wait for the next commercial release before taking advantage of improvements.

There is no vested interest in a particular server product or operating system. You are free to make choices that suit your needs or those of your clients, secure in the knowledge that your code will run whatever you decide.


Because of the powerful Zend engine, PHP shows solid performance compared with other server scripting languages, such as ASP, Perl, and Java servlets, in benchmark tests. To further improve performance, you can acquire a caching tool (Zend Accelerator); it stores compiled code in memory, eliminating the overhead of parsing and interpreting source files for every request.


PHP is designed to run on many operating systems and to cooperate with many servers and databases. You can build for a Unix environment and shift your work to NT without a problem. You can test a project with Personal Web Server and install it on a Unix system running PHP as an Apache module.



Sunday, October 10, 2010

TCP/IP Layers

TCP/IP Architecture and the TCP/IP Model

The OSI reference model consists of seven layers that represent a functional division of the tasks required to implement a network. It is a conceptual tool that I often use to show how various protocols and technologies fit together to implement networks. However, it's not the only networking model that attempts to divide tasks into layers and components. The TCP/IP protocol suite was in fact created before the OSI Reference Model; as such, its inventors didn't use the OSI model to explain TCP/IP architecture (even though the OSI model is often used in TCP/IP discussions today, as you will see in this Guide, believe me.)
The TCP/IP Model

The developers of the TCP/IP protocol suite created their own architectural model to help describe its components and functions. This model goes by different names, including the TCP/IP model, the DARPA model (after the agency that was largely responsible for developing TCP/IP) and the DOD model (after the United States Department of Defense, the “D” in “DARPA”). I just call it the TCP/IP model since this seems the simplest designation for modern times.

Regardless of the model you use to represent the function of a network—and regardless of what you call that model!—the functions that the model represents are pretty much the same. This means that the TCP/IP and the OSI models are really quite similar in nature even if they don't carve up the network functionality pie in precisely the same way. There is a fairly natural correspondence between the TCP/IP and OSI layers; it just isn't always a “one-to-one” relationship. Since the OSI model is used so widely, it is common to explain the TCP/IP architecture both in terms of the TCP/IP layers and the corresponding OSI layers, and that's what I will now do.


TCP/IP is the most widely used network protocol today. In this tutorial we will explain how it works in easy-to-follow language.

So, what is a network protocol anyway? A protocol is like a language used to make two computers talk to each other. As in the real world, if they are not speaking the same language, they cannot communicate.

Before going further, we recommend that you read our tutorial The OSI Reference Model for Network Protocols, which is a primer for understanding how network protocols work. Consider the present tutorial a sequel to our OSI Reference Model tutorial.

TCP/IP is not really a protocol, but a set of protocols – a protocol stack, as it is most commonly called. Its name, for example, already refers to two different protocols, TCP (Transmission Control Protocol) and IP (Internet Protocol). There are several other protocols related to TCP/IP, like FTP, HTTP, SMTP and UDP – just to name a few.

The Internet Protocol Suite is the set of communications protocols used for the Internet and other similar networks. It is commonly also known as TCP/IP, named from two of the most important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were the first two networking protocols defined in this standard. Modern IP networking represents a synthesis of several developments that began to evolve in the 1960s and 1970s, namely the Internet and local area networks, which emerged during the 1980s, together with the advent of the World Wide Web in the early 1990s.

The Internet Protocol Suite, like many protocol suites, is constructed as a set of layers. Each layer solves a set of problems involving the transmission of data. In particular, the layers define the operational scope of the protocols within.

Often a component of a layer provides a well-defined service to the upper layer protocols and may be using services from the lower layers. Upper layers are logically closer to the user and deal with more abstract data, relying on lower layer protocols to translate data into forms that can eventually be physically transmitted.
The TCP/IP model consists of four layers: Network Access (Link), Internet, Transport, and Application.
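A minimal sketch of the four TCP/IP layers, bottom to top, against the OSI layers they are usually drawn alongside (the grouping is the conventional correspondence, not a formal equivalence):

```python
# The four TCP/IP layers with their usual OSI counterparts and a
# sample protocol for each; ordering is bottom (closest to the wire) up.
TCP_IP_MODEL = [
    ("Network Access", ["Physical", "Data Link"], "Ethernet"),
    ("Internet", ["Network"], "IP"),
    ("Transport", ["Transport"], "TCP, UDP"),
    ("Application", ["Session", "Presentation", "Application"], "HTTP, SMTP, FTP"),
]

for layer, osi_layers, example in TCP_IP_MODEL:
    print(f"{layer:>14} <-> OSI {', '.join(osi_layers)} (e.g. {example})")

# Sanity check: the seven OSI layers are each accounted for exactly once.
assert sum(len(osi) for _, osi, _ in TCP_IP_MODEL) == 7
```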

Tuesday, October 05, 2010

Installing WINE on Ubuntu 9.10

How to Install WINE on Ubuntu 9.10

1. Open Firefox and go to the WineHQ download page.

2. Under "Direct links to the latest Wine packages", select the Ubuntu Jaunty (9.04) package (1.1.32 i386 or 1.1.32 amd64), depending on your platform.

3. Left-click your selection, then tick the option to (1) Open with GDebi Package Installer and (2) click OK to install.

If all goes well, you should now have WINE up and running on your Ubuntu 9.10.

Installation of Wine in Ubuntu

Here is a quick way to add the WineHQ repository so you don't need to wait for the Ubuntu community to package the latest Wine.

Open up a terminal Applications->Accessories->Terminal

Now copy/paste these commands:

Adding the gpg apt key:

wget -q -O- | sudo apt-key add -

Let's add the repository via wget:

sudo wget -O /etc/apt/sources.list.d/winehq.list

Now let's update our apt sources and install the latest Wine:

sudo apt-get update ; sudo apt-get install wine

You will now always have the latest Wine package installed!

Sunday, October 03, 2010

What should the SRS address?

The basic issues that the SRS writer(s) shall address are the following:
a) Functionality. What is the software supposed to do?
b) External interfaces. How does the software interact with people, the system’s hardware, other hardware, and other software?
c) Performance. What is the speed, availability, response time, recovery time of various software functions, etc.?
d) Attributes. What are the portability, correctness, maintainability, security, etc. considerations?
e) Design constraints imposed on an implementation. Are there any required standards in effect, implementation language, policies for database integrity, resource limits, operating environment(s) etc.?

What are the characteristics of a great SRS?

Again from the IEEE standard:
An SRS should be
a) Correct
b) Unambiguous
c) Complete
d) Consistent
e) Ranked for importance and/or stability
f) Verifiable
g) Modifiable
h) Traceable

Correct - This is like motherhood and apple pie. Of course you want the specification to be correct. No one writes a specification that they know is incorrect. We like to say - "Correct and Ever Correcting." The discipline is keeping the specification up to date when you find things that are not correct.
Unambiguous - An SRS is unambiguous if, and only if, every requirement stated therein has only one interpretation. Again, easier said than done. Spending time on this area prior to releasing the SRS can be a waste of time. But as you find ambiguities - fix them.
Complete - A simple test of this is that it should be all that is needed by the software designers to create the software.
Consistent - The SRS should be consistent within itself and consistent to its reference documents. If you call an input "Start and Stop" in one place, don't call it "Start/Stop" in another.
Ranked for Importance - Very often a new system has requirements that are really marketing wish lists. Some may not be achievable. It is useful to provide this information in the SRS.
Verifiable - Don't put in requirements like - "It should provide the user a fast response." Another of my favorites is - "The system should never crash." Instead, provide a quantitative requirement like: "Every keystroke should provide a user response within 100 milliseconds."
Modifiable - Having the same requirement in more than one place may not be wrong - but tends to make the document not maintainable.
Traceable - Often, this is not important in a non-politicized environment. However, in most organizations, it is sometimes useful to connect the requirements in the SRS to a higher level document. Why do we need this requirement?

What are the benefits of a Great SRS?

The IEEE 830 standard defines the benefits of a good SRS:
Establish the basis for agreement between the customers and the suppliers on what the software product is to do. The complete description of the functions to be performed by the software specified in the SRS will assist the potential users to determine if the software specified meets their needs or how the software must be modified to meet their needs. [NOTE: We use it as the basis of our contract with our clients all the time].
Reduce the development effort. The preparation of the SRS forces the various concerned groups in the customer’s organization to consider rigorously all of the requirements before design begins and reduces later redesign, recoding, and retesting. Careful review of the requirements in the SRS can reveal omissions, misunderstandings, and inconsistencies early in the development cycle when these problems are easier to correct.
Provide a basis for estimating costs and schedules. The description of the product to be developed as given in the SRS is a realistic basis for estimating project costs and can be used to obtain approval for bids or price estimates. [NOTE: Again, we use the SRS as the basis for our fixed price estimates]
Provide a baseline for validation and verification. Organizations can develop their validation and Verification plans much more productively from a good SRS. As a part of the development contract, the SRS provides a baseline against which compliance can be measured. [NOTE: We use the SRS to create the Test Plan].
Facilitate transfer.The SRS makes it easier to transfer the software product to new users or new machines. Customers thus find it easier to transfer the software to other parts of their organization, and suppliers find it easier to transfer it to new customers.
Serve as a basis for enhancement. Because the SRS discusses the product but not the project that developed it, the SRS serves as a basis for later enhancement of the finished product. The SRS may need to be altered, but it does provide a foundation for continued production evaluation. [NOTE: This is often a major pitfall – when the SRS is not continually updated with changes.]

What is an SRS?

SRS stands for Software Requirement Specification.
It establishes the basis for agreement between customers and contractors or suppliers on what the software product is expected to do, as well as what it is not expected to do.
Some of the features of SRS are -
• It permits a rigorous assessment of requirements before design can begin.
• It sets the basis for software design, test, deployment, training, etc. It is also a prerequisite for a good design, though it is not sufficient by itself.
• It sets the basis for software enhancement and maintenance.
• It sets the basis for project plans such as scheduling and estimation.

An SRS is a software requirement specification document, prepared by a test or requirements engineer, that contains the specifications for developing the project: project details, project modules, and the test cases used to improve the quality of the project, together with hardware specifications.

POP3 Mail Vs Web-based E-mail

1. POP3 mail - To access the mail account, the user needs a "mail client". A mail client is a simple application or program used entirely to receive and send mail. The Internet browsers Netscape and Internet Explorer have e-mail clients included with them, but there are also stand-alone e-mail clients, such as Eudora, which can be downloaded free from the Internet. Web-based mail - This type of e-mail account is offered by dozens of providers, such as Hotmail and Yahoo.

2. POP3 mail - The primary advantage of a POP3 e-mail account is that it usually does not restrict the size of files that can be sent or received. Web-based mail - The main disadvantage of this kind of account is the limited space in the mailbox and the limited size of files that can be sent and received.

3. POP3 mail - Most viruses are transmitted through this type of e-mail. Many viruses are written to be activated specifically by the e-mail client that ships with Internet Explorer, Outlook Express. Web-based mail - Received attachments are automatically prescanned for viruses.

4. POP3 mail - Filters are not easy to use here, and spam is a problem. Web-based mail - Filters are easy to use here to avoid spam.

5. POP3 mail - The account can be accessed only from a computer on which the e-mail client is installed. Web-based mail - The account can be accessed from any computer with a web browser.

6. POP3 mail - E-mail clients using POP connect to the server whenever the program starts and transfer all new messages to the local computer, thereby removing them from the server; server space is therefore conserved. Web-based mail - E-mails or messages remain on the server, so more server space is required.

7. POP3 mail - It makes the use of multiple computers very cumbersome and significantly reduces the security of the data. Web-based mail - If privacy and security are important, a web-based e-mail service should be used.



In computing, the Post Office Protocol (POP) is an application-layer Internet standard protocol used by local e-mail clients to retrieve e-mail from a remote server over a TCP/IP connection. POP and IMAP (Internet Message Access Protocol) are the two most prevalent Internet standard protocols for e-mail retrieval. Virtually all modern e-mail clients and servers support both. The POP protocol has been developed through several versions, with version 3 (POP3) being the current standard.
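The retrieve-and-delete behaviour described above maps directly onto Python's standard poplib module. A sketch, with placeholder host and credentials (substitute your own provider's values):

```python
import poplib

# Placeholder server and credentials -- not a real account.
HOST, USER, PASSWORD = "pop.example.com", "user", "secret"

def fetch_and_delete():
    """Download every message, then delete it from the server --
    the classic POP3 behaviour that conserves server space."""
    conn = poplib.POP3_SSL(HOST)       # POP3 over SSL, default port 995
    conn.user(USER)
    conn.pass_(PASSWORD)
    count, _total_bytes = conn.stat()  # message count and mailbox size
    messages = []
    for i in range(1, count + 1):
        _resp, lines, _octets = conn.retr(i)  # download message i
        messages.append(b"\r\n".join(lines))
        conn.dele(i)                   # mark message i for deletion
    conn.quit()                        # QUIT commits the deletions
    return messages
```

An IMAP client (Python's imaplib) would instead leave messages on the server, which is exactly the trade-off between the two retrieval protocols discussed here.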


The Internet Message Access Protocol (IMAP) is one of the two most prevalent Internet standard protocols for e-mail retrieval, the other being the Post Office Protocol (POP). Virtually all modern e-mail clients and mail servers support both protocols as a means of transferring e-mail messages from a server.
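By contrast, an IMAP client can read a mailbox while leaving the messages on the server. A minimal sketch using Python's standard imaplib module, again with placeholder credentials:

```python
import imaplib
import email

def list_inbox_subjects(host, user, password):
    """List the subject of every message in the INBOX over IMAP.
    Unlike POP, the messages remain on the server afterwards."""
    conn = imaplib.IMAP4_SSL(host)           # IMAP over SSL, port 993
    conn.login(user, password)
    conn.select("INBOX", readonly=True)      # read-only: nothing is altered
    _status, data = conn.search(None, "ALL") # message numbers, space-separated
    subjects = []
    for num in data[0].split():
        _status, parts = conn.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(parts[0][1])
        subjects.append(msg["Subject"])
    conn.logout()
    return subjects
```

Because the server keeps the messages, the same mailbox can be read consistently from several computers.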



HTML, which stands for HyperText Markup Language, is the predominant markup language for web pages. It provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, and lists, as well as for links, quotes, and other items. It allows images and objects to be embedded and can be used to create interactive forms. It is written in the form of HTML elements consisting of "tags" surrounded by angle brackets within the web page content. It can include or load scripts in languages such as JavaScript, which affect the behavior of HTML processors such as web browsers, and Cascading Style Sheets (CSS), which define the appearance and layout of text and other material. The W3C, maintainer of both the HTML and CSS standards, encourages the use of CSS over explicitly presentational HTML markup.
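The way an HTML processor walks the tag structure of a page can be demonstrated with Python's standard html.parser module; the tiny page below is a made-up example:

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Record every start tag seen, illustrating how an HTML
    processor recognises the elements marked by angle-bracket tags."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

page = "<html><body><h1>Heading</h1><p>A <a href='x.html'>link</a>.</p></body></html>"
collector = TagCollector()
collector.feed(page)
print(collector.tags)   # → ['html', 'body', 'h1', 'p', 'a']
```

Each tag denotes a structural element (document, body, heading, paragraph, link), which is exactly the "structured document" idea described above.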



In computing, a Uniform Resource Locator (URL) is a subset of the Uniform Resource Identifier (URI) that specifies where an identified resource is available and the mechanism for retrieving it. In popular usage, and in many technical documents and discussions, URL is often incorrectly used as a synonym for URI. The best-known example of a URL is the 'address' of a web page on the World Wide Web.


In computing, a Uniform Resource Identifier (URI) is a string of characters used to identify a name or a resource on the Internet. Such identification enables interaction with representations of the resource over a network (typically the World Wide Web) using specific protocols. Schemes specifying a concrete syntax and associated protocols define each URI.
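The parts of a URL — the retrieval scheme, the host where the resource is available, and the path to it — can be seen by splitting an example address with Python's standard urllib.parse module (the URL itself is a fictitious example):

```python
from urllib.parse import urlparse

url = "https://www.example.com:8080/docs/index.html?lang=en#intro"
parts = urlparse(url)

print(parts.scheme)    # 'https'            -- the retrieval mechanism
print(parts.hostname)  # 'www.example.com'  -- where the resource is available
print(parts.port)      # 8080
print(parts.path)      # '/docs/index.html' -- which resource on that host
print(parts.query)     # 'lang=en'
print(parts.fragment)  # 'intro'
```

The scheme names the protocol to use, which is what makes a URL a *locator* and not merely an identifier.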


The homepage (often written as home page) is the URL or local file that automatically loads when a web browser starts or when the browser's "home" button is pressed. One can turn this feature off and on, as well as specify a URL for the page to be loaded.
The term is also used to refer to the front page, web server directory index, or main web page of a website of a group, company, organization, or individual. In some countries, such as Germany, Japan, and South Korea, and formerly in the US, the term "homepage" commonly refers to a complete website (of a company or other organization) rather than to a single web page. By the late 1990s this usage had died out in the US, replaced by the more comprehensive term "web site".


Web browser

A web browser is a software application for retrieving, presenting, and traversing information resources on the World Wide Web. An information resource is identified by a Uniform Resource Identifier (URI) and may be a web page, image, video, or other piece of content. Hyperlinks present in resources enable users to easily navigate their browsers to related resources.
Although browsers are primarily intended to access the World Wide Web, they can also be used to access information provided by web servers in private networks or files in file systems. Some browsers can also be used to save information resources to the file system. Examples include Internet Explorer, Google Chrome, Opera, Mozilla Firefox, Safari, and Netscape Navigator.
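The hyperlinks that let a browser's user navigate to related resources are simply href attributes on anchor tags. As a sketch, Python's standard html.parser module can pull them out of a (made-up) fragment of page content:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag -- the hyperlinks a browser
    would present to the user for navigation."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                      # an anchor element
            for name, value in attrs:
                if name == "href":          # its link target
                    self.links.append(value)

page = ('<p>See the <a href="/about.html">about page</a> and '
        '<a href="https://example.com/">example</a>.</p>')
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)   # → ['/about.html', 'https://example.com/']
```

Each extracted target is itself a URI (relative or absolute), which the browser resolves and fetches when the user follows the link.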

Web server

A web server is a computer program that delivers (serves) content, such as web pages, using the Hypertext Transfer Protocol. The term web server can also refer to the computer or virtual machine running the program. In large commercial deployments, a server computer running a web server can be rack-mounted in a server rack or cabinet with other servers to operate a web farm.
The primary function of a web server is to deliver web pages to clients. This means delivery of HTML documents and any additional content referenced by a document, such as images, style sheets, and JavaScript files.
A client, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP, and the server responds with the content of that resource, or with an error message if it is unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the web server is implemented.
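This request/response exchange can be sketched end to end with Python's standard http.server module, running a tiny server and a client in one process. It is a demonstration only — a real deployment would listen on port 80 or 443 rather than an OS-chosen free port:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """Serve a tiny HTML page for '/', and a 404 error for any other path."""
    def do_GET(self):
        if self.path == "/":
            body = b"<html><body><h1>Hello</h1></body></html>"
            self.send_response(200)                        # HTTP status line
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)                         # the resource itself
        else:
            self.send_error(404)                           # resource not found

    def log_message(self, fmt, *args):                     # keep the demo quiet
        pass

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Act as the client: request the resource over HTTP, as a browser would.
with urllib.request.urlopen("http://127.0.0.1:%d/" % server.server_port) as resp:
    status = resp.status
    page = resp.read().decode()

server.shutdown()
print(status)            # → 200
print("Hello" in page)   # → True
```

Here the "resource" is generated in memory rather than read from a file, which illustrates the point above that the mapping from request to content depends on how the server is implemented.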

www (world wide web)

The World Wide Web, abbreviated as WWW and commonly known as the Web, is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia, and navigate between them by using hyperlinks. Using concepts from earlier hypertext systems, British engineer and computer scientist Sir Tim Berners-Lee, now the Director of the World Wide Web Consortium, wrote a proposal in March 1989 for what would eventually become the World Wide Web. He was later joined by Belgian computer scientist Robert Cailliau while both were working at CERN in Geneva, Switzerland. In 1990, they proposed using "HyperText [...] to link and access information of various kinds as a web of nodes in which the user can browse at will", and released that web in December.
"The World-Wide Web (W3) was developed to be a pool of human knowledge, which would allow collaborators in remote sites to share their ideas and all aspects of a common project." If two projects are created independently, their two bodies of information can grow into one cohesive piece of work without a central figure making the changes.