AirBinary

Hosting Reviews


Web Hosting

Web hosting control panel

By Erik


Image by/from Aapp

A web hosting control panel is a web-based interface provided by a web hosting service that allows users to manage their servers and hosted services.

Web hosting control panels usually include modules for managing the web server, DNS, mail, FTP accounts, databases, files, backups, logs and visitor statistics.

Some widely used web hosting control panels are cPanel, Plesk, DirectAdmin, ISPConfig and Webmin.

Filed Under: Web Hosting Tagged With: web hosting

Social network hosting service

By Erik

A social network hosting service is a web hosting service that specifically hosts user-created web-based social networking services, alongside related applications. Such services are also known as vertical social networks because the SNSes created with them cater to specific user interests and niches; like larger, interest-agnostic SNSes, these niche services may in turn support the creation of ever more narrowly focused groups of users.

Filed Under: Web Hosting Tagged With: web hosting

Web application

By Erik


Image by/from The Horde Project

A web application (or web app) is application software that runs on a web server, unlike software programs that are run locally on the operating system (OS) of the device. Web applications are accessed by the user through a web browser with an active internet connection. These applications are programmed using a client-server model: the user (“client”) is provided services through an off-site server hosted by a third party. Examples of commonly used web applications include webmail, online retail sales, online banking, and online auctions.

The general distinction between a dynamic web page of any kind and a “web app” is unclear. Web sites most likely to be referred to as “web applications” are those which have similar functionality to a desktop software application, or to a mobile app. HTML5 introduced explicit language support for making applications that are loaded as web pages, but can store data locally and continue to function while offline.

Single-page applications are more application-like because they reject the more typical web paradigm of moving between distinct pages with different URLs. Single-page frameworks might be used to speed development of such a web app for a mobile platform.

There are several ways of targeting mobile devices when making web applications: designing the application responsively so that one site adapts to different screen sizes, building a separate mobile version of the site, wrapping the web application in a native shell (a hybrid app), or shipping it as a progressive web app.

In earlier computing models like client-server, the processing load for the application was shared between code on the server and code installed on each client locally. In other words, an application had its own pre-compiled client program which served as its user interface and had to be separately installed on each user’s personal computer. An upgrade to the server-side code of the application would typically also require an upgrade to the client-side code installed on each user workstation, adding to the support cost and decreasing productivity. In addition, both the client and server components of the application were usually tightly bound to a particular computer architecture and operating system and porting them to others was often prohibitively expensive for all but the largest applications (Nowadays, native apps for mobile devices are also hobbled by some or all of the foregoing issues).

In contrast, web applications use web documents written in a standard format such as HTML and JavaScript, which are supported by a variety of web browsers. Web applications can be considered as a specific variant of client-server software where the client software is downloaded to the client machine when visiting the relevant web page, using standard procedures such as HTTP. Client web software updates may happen each time the web page is visited. During the session, the web browser interprets and displays the pages, and acts as the universal client for any web application.

In the early days of the Web, each individual web page was delivered to the client as a static document, but the sequence of pages could still provide an interactive experience, as user input was returned through web form elements embedded in the page markup. However, every significant change to the web page required a round trip back to the server to refresh the entire page.

In 1995, Netscape introduced a client-side scripting language called JavaScript allowing programmers to add some dynamic elements to the user interface that ran on the client side. So instead of sending data to the server in order to generate an entire web page, the embedded scripts of the downloaded page can perform various tasks such as input validation or showing/hiding parts of the page.
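For instance, a short script along the following lines (a minimal sketch; the element names are illustrative) can validate a form field or reveal part of the page entirely on the client side:

    // Check an e-mail field before the form is ever sent to the server.
    var form = document.getElementById('signup');
    form.addEventListener('submit', function (event) {
      var email = document.getElementById('email').value;
      if (email.indexOf('@') === -1) {
        event.preventDefault(); // cancel the round trip to the server
        document.getElementById('error').textContent = 'Please enter a valid e-mail address.';
      }
    });

    // Show or hide a section of the page without reloading it.
    document.getElementById('details-toggle').addEventListener('click', function () {
      var details = document.getElementById('details');
      details.style.display = (details.style.display === 'none') ? '' : 'none';
    });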

In 1996, Macromedia introduced Flash, a vector animation player that could be added to browsers as a plug-in to embed animations on the web pages. It allowed the use of a scripting language to program interactions on the client-side with no need to communicate with the server.

In 1999, the “web application” concept was introduced in the Java language in the Servlet Specification version 2.2. At that time both JavaScript and XML had already been developed, but the term Ajax had not yet been coined, and the XMLHttpRequest object had only recently been introduced in Internet Explorer 5 as an ActiveX object.

In 2005, the term Ajax was coined, and applications like Gmail started to make their client sides more and more interactive. A web page script is able to contact the server for storing/retrieving data without downloading an entire web page.
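A minimal sketch of that pattern using the XMLHttpRequest object (the endpoint and element names here are illustrative):

    // Ask the server for one small piece of data and update part of the page,
    // without reloading the page itself.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/inbox/unread-count');
    xhr.onload = function () {
      if (xhr.status === 200) {
        document.getElementById('unread').textContent = xhr.responseText;
      }
    };
    xhr.send();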

In 2007, Steve Jobs announced that web apps, developed in HTML5 using Ajax architecture, would be the standard format for iPhone apps. No software development kit (SDK) was required, and the apps would be fully integrated into the device through the Safari browser engine. This model was later replaced by the App Store, partly as a means of deterring jailbreakers and of appeasing frustrated developers.

In 2014, HTML5 was finalized, which provides graphic and multimedia capabilities without the need of client-side plug-ins. HTML5 also enriched the semantic content of documents. The APIs and document object model (DOM) are no longer afterthoughts, but are fundamental parts of the HTML5 specification. WebGL API paved the way for advanced 3D graphics based on HTML5 canvas and JavaScript language. These have significant importance in creating truly platform and browser independent rich web applications.

In 2016, during the annual Google I/O conference, Eric Bidelman (Senior Staff Developer Programs Engineer) introduced Progressive Web Apps (PWAs) as a new standard in web development. Jeff Burtoft, Principal Program Manager at Microsoft, said, “Google led the way with Progressive Web Apps, and after a long process, we decided that we needed to fully support it.” As such, Microsoft and Google both supported the PWA standard.

Through Java, JavaScript, DHTML, Flash, Silverlight and other technologies, application-specific methods such as drawing on the screen, playing audio, and access to the keyboard and mouse are all possible. Many services have worked to combine all of these into a more familiar interface that adopts the appearance of an operating system. General-purpose techniques such as drag and drop are also supported by these technologies. Web developers often use client-side scripting to add functionality, especially to create an interactive experience that does not require page reloading. Recently, technologies have been developed to coordinate client-side scripting with server-side technologies such as ASP.NET, J2EE, Perl/Plack and PHP.

Ajax, a web development technique using a combination of various technologies, is an example of technology that creates a more interactive experience.

Applications are usually broken into logical chunks called “tiers”, where every tier is assigned a role. Traditional applications consist only of one tier, which resides on the client machine, but web applications lend themselves to an n-tiered approach by nature. Though many variations are possible, the most common structure is the three-tiered application. In its most common form, the three tiers are called presentation, application and storage, in this order. A web browser is the first tier (presentation), an engine using some dynamic web content technology (such as ASP, CGI, ColdFusion, Dart, JSP/Java, Node.js, PHP, Python or Ruby on Rails) is the middle tier (application logic), and a database is the third tier (storage). The web browser sends requests to the middle tier, which services them by making queries and updates against the database and generating a user interface.

For more complex applications, a 3-tier solution may fall short, and it may be beneficial to use an n-tiered approach, where the greatest benefit is breaking the business logic, which resides on the application tier, into a more fine-grained model. Another benefit may be adding an integration tier that separates the data tier from the rest of the tiers by providing an easy-to-use interface to access the data. For example, the client data would be accessed by calling a “list_clients()” function instead of making an SQL query directly against the client table on the database. This allows the underlying database to be replaced without making any change to the other tiers.
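A minimal sketch of such an integration tier, written here as JavaScript for a Node.js middle tier (the function names are the illustrative ones above, and the db argument stands in for whichever database client is actually in use):

    // The rest of the application calls list_clients() and never sees the SQL,
    // so the underlying database can be replaced without touching the other tiers.
    function makeClientStore(db) {            // db: any object exposing query(sql, params)
      return {
        list_clients: function () {
          return db.query('SELECT id, name FROM client ORDER BY name', []);
        },
        add_client: function (name) {
          return db.query('INSERT INTO client (name) VALUES (?)', [name]);
        }
      };
    }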

There are some who view a web application as a two-tier architecture. This can be a “smart” client that performs all the work and queries a “dumb” server, or a “dumb” client that relies on a “smart” server. The client would handle the presentation tier, the server would have the database (storage tier), and the business logic (application tier) would be on one of them or on both. While this increases the scalability of the applications and separates the display and the database, it still doesn’t allow for true specialization of layers, so most applications will outgrow this model.

An emerging strategy for application software companies is to provide web access to software previously distributed as local applications. Depending on the type of application, it may require the development of an entirely different browser-based interface, or merely adapting an existing application to use different presentation technology. These programs allow the user to pay a monthly or yearly fee for use of a software application without having to install it on a local hard drive. A company which follows this strategy is known as an application service provider (ASP), and ASPs are currently receiving much attention in the software industry.

Security breaches on these kinds of applications are a major concern because it can involve both enterprise information and private customer data. Protecting these assets is an important part of any web application and there
are some key operational areas that must be included in the development process. This includes processes for authentication, authorization, asset handling, input, and logging and auditing. Building security into the applications from the beginning can be more effective and less disruptive in the long run.

In the cloud computing model, web applications are software as a service (SaaS). There are business applications provided as SaaS for enterprises, for a fixed or usage-dependent fee. Other web applications are offered free of charge, often generating income from advertisements shown in the web application interface.

Writing web applications is often simplified by the use of a web application framework. These frameworks facilitate rapid application development by allowing a development team to focus on the parts of their application which are unique to their goals without having to resolve common development issues such as user management. Many of the frameworks in use are open-source software.

The use of web application frameworks can often reduce the number of errors in a program, both by making the code simpler and by allowing one team to concentrate on the framework while another focuses on a specified use case. In applications which are exposed to constant hacking attempts on the Internet, security-related problems can be caused by errors in the program. Frameworks can also promote the use of best practices such as GET after POST (redirecting after a form submission so that refreshing the resulting page does not resubmit it).
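As a sketch of the “GET after POST” practice (plain Node.js, with illustrative paths and an in-memory array standing in for real storage): the POST handler records the submission and then redirects, so reloading the resulting page issues a harmless GET instead of a duplicate POST.

    const http = require('http');
    const items = []; // stand-in for persistent storage

    http.createServer((req, res) => {
      if (req.method === 'POST' && req.url === '/items') {
        let body = '';
        req.on('data', chunk => { body += chunk; });
        req.on('end', () => {
          items.push(body);
          res.writeHead(303, { Location: '/items' }); // the browser follows with a GET
          res.end();
        });
      } else if (req.method === 'GET' && req.url === '/items') {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end(items.join('\n'));
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(3000);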

In addition, there is potential for the development of applications on Internet operating systems, although currently there are not many viable platforms that fit this model.

Examples of browser applications are simple office software (word processors, online spreadsheets, and presentation tools), but can also include more advanced applications such as project management, computer-aided design, video editing, and point-of-sale.

Filed Under: Web Hosting Tagged With: web hosting

Web 2.0

By Erik


Image by/from HarryAlffa at en.wikipedia (Later version(s) were uploaded by Nigelj and Mono at en.wikipedia.)

Web 2.0 (also known as Participative (or Participatory) and Social Web) refers to websites that emphasize user-generated content, ease of use, participatory culture and interoperability (i.e., compatible with other products, systems, and devices) for end users.

The term was coined by Darcy DiNucci in 1999 and later popularized by Tim O’Reilly and Dale Dougherty at the O’Reilly Media Web 2.0 Conference in late 2004. The Web 2.0 framework specifies only the design and use of websites and does not place any technical demands or specifications on designers. The transition was gradual and, therefore, no precise date for the change has been given.

A Web 2.0 website allows users to interact and collaborate with each other through social media dialogue as creators of user-generated content in a virtual community. This contrasts with the first generation of Web 1.0-era websites, where people were limited to viewing content in a passive manner. Examples of Web 2.0 features include social networking sites or social media sites (e.g., Facebook), blogs, wikis, folksonomies (“tagging” keywords on websites and links), video sharing sites (e.g., YouTube), image sharing sites (e.g., Flickr), hosted services, Web applications (“apps”), collaborative consumption platforms, and mashup applications.

Whether Web 2.0 is substantially different from prior Web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who describes the term as jargon. His original vision of the Web was “a collaborative medium, a place where we [could] all meet and read and write.” On the other hand, the term Semantic Web (sometimes referred to as Web 3.0) was coined by Berners-Lee to refer to a web of content where the meaning can be processed by machines.

Web 1.0 is a retronym referring to the first stage of the World Wide Web’s evolution, from roughly 1991 to 2004. According to Cormode and Krishnamurthy, “content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content.” Personal web pages were common, consisting mainly of static pages hosted on ISP-run web servers, or on free web hosting services such as Tripod and the now-defunct GeoCities. With Web 2.0, it became common for average web users to have social-networking profiles (on sites such as Myspace and Facebook) and personal blogs (on sites like Blogger, Tumblr and LiveJournal) through either a low-cost web hosting service or a dedicated host. In general, content was generated dynamically, allowing readers to comment directly on pages in a way that was not common previously.

Some Web 2.0 capabilities were present in the days of Web 1.0, but were implemented differently. For example, a Web 1.0 site may have had a guestbook page for visitor comments, instead of a comment section at the end of each page (typical of Web 2.0). During Web 1.0, server performance and bandwidth had to be considered—lengthy comment threads on multiple pages could potentially slow down an entire site. Terry Flew, in his third edition of New Media, described the differences between Web 1.0 and Web 2.0 as a

“move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on “tagging” website content using keywords (folksonomy).”

Flew believed these factors formed the trends that resulted in the onset of the Web 2.0 “craze”.

Some common design elements of a Web 1.0 site include static pages rather than dynamically generated content, content served from the server’s filesystem, framesets, GIF buttons and graphics, guestbooks, and HTML forms sent via e-mail.

The term “Web 2.0” was coined by Darcy DiNucci, an information architecture consultant, in her January 1999 article “Fragmented Future”:

The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will […] appear on your computer screen, […] on your TV set […] your car dashboard […] your cell phone […] hand-held game machines […] maybe even your microwave oven.

Writing when Palm Inc. introduced its first web-capable personal digital assistant (supporting Web access with WAP), DiNucci saw the Web “fragmenting” into a future that extended beyond the browser/PC combination it was identified with. She focused on how the basic information structure and hyper-linking mechanism introduced by HTTP would be used by a variety of devices and platforms. As such, her “2.0” designation refers to the next version of the Web that does not directly relate to the term’s current use.

The term Web 2.0 did not resurface until 2002. Kinsley and Eric focus on the concepts currently associated with the term where, as Scott Dietzen puts it, “the Web becomes a universal, standards-based integration platform”. In 2004, the term began to gain popularity when O’Reilly Media and MediaLive hosted the first Web 2.0 conference. In their opening remarks, John Battelle and Tim O’Reilly outlined their definition of the “Web as Platform”, where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that “customers are building your business for you”. They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be “harnessed” to create value. O’Reilly and Battelle contrasted Web 2.0 with what they called “Web 1.0”. They associated this term with the business models of Netscape and the Encyclopædia Britannica Online. For example,

Netscape framed “the web as platform” in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the “horseless carriage” framed the automobile as an extension of the familiar, Netscape promoted a “webtop” to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.

In short, Netscape focused on creating software, releasing updates and bug fixes, and distributing it to the end users. O’Reilly contrasted this with Google, a company that did not, at the time, focus on producing end-user software, but instead on providing a service based on data, such as the links that Web page authors make between sites. Google exploits this user-generated content to offer Web searches based on reputation through its “PageRank” algorithm. Unlike software, which undergoes scheduled releases, such services are constantly updated, a process called “the perpetual beta”. A similar difference can be seen between the Encyclopædia Britannica Online and Wikipedia – while the Britannica relies upon experts to write articles and release them periodically in publications, Wikipedia relies on trust in (sometimes anonymous) community members to constantly write and edit content. Wikipedia editors are not required to have educational credentials, such as degrees, in the subjects in which they are editing. Wikipedia is not based on subject-matter expertise, but rather on an adaptation of the open source software adage “given enough eyeballs, all bugs are shallow”. This maxim states that if enough users are able to look at a software product’s code (or a website), then these users will be able to fix any “bugs” or other problems. The Wikipedia volunteer editor community produces, edits, and updates articles constantly. O’Reilly’s Web 2.0 conferences have been held every year since 2004, attracting entrepreneurs, representatives from large companies, tech experts and technology reporters.

The popularity of Web 2.0 was acknowledged when TIME magazine named “You” its 2006 Person of the Year. That is, TIME selected the masses of users who were participating in content creation on social networks, blogs, wikis, and media sharing sites.

In the cover story, Lev Grossman explains:

It’s a story about community and collaboration on a scale never seen before. It’s about the cosmic compendium of knowledge Wikipedia and the million-channel people’s network YouTube and the online metropolis MySpace. It’s about the many wresting power from the few and helping one another for nothing and how that will not only change the world but also change the way the world changes.

Instead of merely reading a Web 2.0 site, a user is invited to contribute to the site’s content by commenting on published articles, or creating a user account or profile on the site, which may enable increased participation. By increasing emphasis on these already-extant capabilities, Web 2.0 sites encourage users to rely more on their browser for user interface, application software (“apps”) and file storage facilities. This has been called “network as platform” computing. Major features of Web 2.0 include social networking websites, self-publishing platforms (e.g., WordPress’ easy-to-use blog and website creation tools), “tagging” (which enables users to label websites, videos or photos in some fashion), “like” buttons (which enable a user to indicate that they are pleased by online content), and social bookmarking.

Users can provide the data and exercise some control over what they share on a Web 2.0 site. These sites may have an “architecture of participation” that encourages users to add value to the application as they use it. Users can add value in many ways, such as uploading their own content on blogs, consumer-evaluation platforms (e.g. Amazon and eBay), news websites (e.g. responding in the comment section), social networking services, media-sharing websites (e.g. YouTube and Instagram) and collaborative-writing projects. Some scholars argue that cloud computing is an example of Web 2.0 because it is simply an implication of computing on the Internet.

Web 2.0 offers almost all users the same freedom to contribute. While this opens the possibility for serious debate and collaboration, it also increases the incidence of “spamming”, “trolling”, and can even create a venue for racist hate speech, cyberbullying, and defamation. The impossibility of excluding group members who do not contribute to the provision of goods (i.e., to the creation of a user-generated website) from sharing the benefits (of using the website) gives rise to the possibility that serious members will prefer to withhold their contribution of effort and “free ride” on the contributions of others. This requires what is sometimes called radical trust by the management of the Web site.

According to Best, the characteristics of Web 2.0 are rich user experience, user participation, dynamic content, metadata, Web standards, and scalability. Further characteristics, such as openness, freedom, and collective intelligence by way of user participation, can also be viewed as essential attributes of Web 2.0. Some websites require users to contribute user-generated content to have access to the website, to discourage “free riding”.

The key features of Web 2.0 include folksonomy (free classification of information through user “tagging”), a rich user experience, user participation in creating content, software delivered as a service, and mass participation.

The client-side (Web browser) technologies used in Web 2.0 development include Ajax and JavaScript frameworks. Ajax programming uses JavaScript and the Document Object Model (DOM) to update selected regions of the page area without undergoing a full page reload. To allow users to continue interacting with the page, communications such as data requests going to the server are separated from data coming back to the page (asynchronously).

Otherwise, the user would have to routinely wait for the data to come back before they can do anything else on that page, just as a user has to wait for a page to complete the reload. This also increases the overall performance of the site, as the sending of requests can complete quicker independent of blocking and queueing required to send data back to the client. The data fetched by an Ajax request is typically formatted in XML or JSON (JavaScript Object Notation) format, two widely used structured data formats. Since both of these formats are natively understood by JavaScript, a programmer can easily use them to transmit structured data in their Web application.

When this data is received via Ajax, the JavaScript program then uses the Document Object Model to dynamically update the Web page based on the new data, allowing for a rapid and interactive user experience. In short, using these techniques, web designers can make their pages function like desktop applications. For example, Google Docs uses this technique to create a Web-based word processor.
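A compact sketch of that flow, using the newer fetch API and a JSON response (the endpoint, element IDs and field names are illustrative):

    // Ask the server for structured data, then rewrite one region of the page in place.
    fetch('/api/documents/42')
      .then(response => response.json())
      .then(doc => {
        document.getElementById('title').textContent = doc.title;
        const list = document.getElementById('revisions');
        list.innerHTML = '';
        doc.revisions.forEach(rev => {
          const item = document.createElement('li');
          item.textContent = rev.author + ' (' + rev.date + ')';
          list.appendChild(item);
        });
      });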

As a widely available plug-in independent of W3C standards (the World Wide Web Consortium is the governing body of Web standards and protocols), Adobe Flash is capable of doing many things that were not possible pre-HTML5. Of Flash’s many capabilities, the most commonly used is its ability to integrate streaming multimedia into HTML pages. With the introduction of HTML5 in 2010 and the growing concerns with Flash’s security, the role of Flash is decreasing.

In addition to Flash and Ajax, JavaScript/Ajax frameworks have recently become a very popular means of creating Web 2.0 sites. At their core, these frameworks use the same technology as JavaScript, Ajax, and the DOM. However, frameworks smooth over inconsistencies between Web browsers and extend the functionality available to developers. Many of them also come with customizable, prefabricated ‘widgets’ that accomplish such common tasks as picking a date from a calendar, displaying a data chart, or making a tabbed panel.

On the server side, Web 2.0 uses many of the same technologies as Web 1.0. Languages such as Perl, PHP, Python and Ruby, as well as Enterprise Java (J2EE) and the Microsoft .NET Framework, are used by developers to output data dynamically using information from files and databases. This allows websites and web services to share machine-readable formats such as XML (Atom, RSS, etc.) and JSON. When data is available in one of these formats, another website can use it to integrate a portion of that site’s functionality.
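A minimal sketch of the server side of that arrangement, in Node.js (the feed path and the hard-coded data are illustrative stand-ins for a real application and database):

    const http = require('http');

    const posts = [
      { title: 'Hello Web 2.0', url: '/posts/1' },
      { title: 'Tagging and folksonomy', url: '/posts/2' }
    ];

    http.createServer((req, res) => {
      if (req.url === '/feed.json') {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(posts)); // any other site or client can consume this feed
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);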

Web 2.0 can be described in three parts: rich Internet applications (RIA), which bring a desktop-like experience to the browser; web-oriented architecture (WOA), which exposes an application’s functionality so that other applications can integrate with it; and the social Web, which emphasizes interaction with the end user.

As such, Web 2.0 draws together the capabilities of client- and server-side software, content syndication and the use of network protocols. Standards-oriented Web browsers may use plug-ins and software extensions to handle the content and user interactions. Web 2.0 sites provide users with information storage, creation, and dissemination capabilities that were not possible in the environment known as “Web 1.0”.

Web 2.0 sites include the following features and techniques, referred to by Andrew McAfee with the acronym SLATES: search, links, authoring, tags, extensions, and signals.

While SLATES forms the basic framework of Enterprise 2.0, it does not contradict all of the higher level Web 2.0 design patterns and business models. It includes discussions of self-service IT, the long tail of enterprise IT demand, and many other consequences of the Web 2.0 era in enterprise uses.

A third important part of Web 2.0 is the social web. The social Web consists of a number of online tools and platforms where people share their perspectives, opinions, thoughts and experiences. Web 2.0 applications tend to interact much more with the end user. As such, the end user is not only a user of the application but also a participant, for example by blogging, podcasting, tagging, contributing to wikis, social bookmarking, social networking, and voting on web content.

The popularity of the term Web 2.0, along with the increasing use of blogs, wikis, and social networking technologies, has led many in academia and business to append a flurry of 2.0’s to existing concepts and fields of study, including Library 2.0, Social Work 2.0,
Enterprise 2.0, PR 2.0, Classroom 2.0, Publishing 2.0, Medicine 2.0, Telco 2.0, Travel 2.0, Government 2.0, and even Porn 2.0. Many of these 2.0s refer to Web 2.0 technologies as the source of the new version in their respective disciplines and areas. For example, in the Talis white paper “Library 2.0: The Challenge of Disruptive Innovation”, Paul Miller argues

Blogs, wikis and RSS are often held up as exemplary manifestations of Web 2.0. A reader of a blog or a wiki is provided with tools to add a comment or even, in the case of the wiki, to edit the content. This is what we call the Read/Write web. Talis believes that Library 2.0 means harnessing this type of participation so that libraries can benefit from increasingly rich collaborative cataloging efforts, such as including contributions from partner libraries as well as adding rich enhancements, such as book jackets or movie files, to records from publishers and others.

Here, Miller links Web 2.0 technologies and the culture of participation that they engender to the field of library science, supporting his claim that there is now a “Library 2.0”. Many of the other proponents of new 2.0s mentioned here use similar methods. The meaning of Web 2.0 is role dependent. For example, some use Web 2.0 to establish and maintain relationships through social networks, while some marketing managers might use this promising technology to “end-run traditionally unresponsive I.T. department[s].”

There is a debate over the use of Web 2.0 technologies in mainstream education. Issues under consideration include the understanding of students’ different learning modes; the conflicts between ideas entrenched in informal online communities and educational establishments’ views on the production and authentication of ‘formal’ knowledge; and questions about privacy, plagiarism, shared authorship and the ownership of knowledge and information produced and/or published on line.

Web 2.0 is used by companies, non-profit organisations and governments for interactive marketing. A growing number of marketers are using Web 2.0 tools to collaborate with consumers on product development, customer service enhancement, product or service improvement and promotion. Companies can use Web 2.0 tools to improve collaboration with both their business partners and consumers. Among other things, company employees have created wikis (websites that allow users to add, delete, and edit content) to list answers to frequently asked questions about each product, and consumers have added significant contributions.

Another Web 2.0 marketing draw is making sure consumers can use the online community to network among themselves on topics of their own choosing. Mainstream media usage of Web 2.0 is increasing. Saturating media hubs such as The New York Times, PC Magazine and Business Week with links to popular new Web sites and services is critical to achieving the threshold for mass adoption of those services. User web content can be used to gauge consumer satisfaction. In a recent article for Bank Technology News, Shane Kite describes how Citigroup’s Global Transaction Services unit monitors social media outlets to address customer issues and improve products.

In the tourism industry, social media is an effective channel to attract travellers and promote tourism products and services by engaging with customers. The brand of a tourist destination can be built through marketing campaigns on social media and by engaging with customers. For example, the “Snow at First Sight” campaign launched by the State of Colorado aimed to bring brand awareness to Colorado as a winter destination. The campaign used social media platforms such as Facebook and Twitter to promote the competition, and asked participants to share experiences, pictures and videos on social media. As a result, Colorado enhanced its image as a winter destination and created a campaign worth about $2.9 million.

Tourism organisations can earn brand loyalty from interactive marketing campaigns on social media that use engaging, passive communication tactics. For example, the “Moms” advisors of Walt Disney World are responsible for offering suggestions and replying to questions about family trips at Walt Disney World. Because of their expertise on Disney, the “Moms” were chosen to represent the campaign. Social networking sites, such as Facebook, can be used as a platform for providing detailed information about a marketing campaign, as well as real-time online communication with customers. Korean Airline Tour created and maintained a relationship with customers by using Facebook for individual communication purposes.

Travel 2.0 refers to a model of Web 2.0 in the tourism industry that provides virtual travel communities. The Travel 2.0 model allows users to create their own content and exchange their views through globally interactive features on websites. Users can also contribute their experiences, images and suggestions regarding their trips through online travel communities. For example, TripAdvisor is an online travel community which enables users to rate hotels and tourist destinations and to share their reviews and feedback autonomously. Users with no prior association can interact socially and communicate through discussion forums on TripAdvisor.

Social media, especially Travel 2.0 websites, plays a crucial role in the decision-making behavior of travelers. User-generated content on social media tools has a significant impact on travelers’ choices and preferences. Travel 2.0 sparked a radical change in how travelers receive information, from business-to-customer marketing to peer-to-peer reviews. User-generated content became a vital tool for helping many travelers manage their international travels, especially first-time visitors. Travellers tend to trust and rely on peer-to-peer reviews and virtual communication on social media rather than the information provided by travel suppliers.

In addition, autonomous review features on social media can help travelers reduce risks and uncertainties before the purchasing stage. Social media is also a channel for customer complaints and negative feedback, which can damage the image and reputation of organisations and destinations. For example, a majority of UK travellers read customer reviews before booking hotels, and about half of customers would refrain from booking a hotel that receives negative reviews.

Therefore, organisations should develop strategic plans to handle and manage negative feedback on social media. Although the user-generated content and rating systems on social media are outside a business’s control, the business can monitor those conversations and participate in communities to enhance customer loyalty and maintain customer relationships.

Web 2.0 could allow for more collaborative education. For example, blogs give students a public space to interact with one another and the content of the class. Some studies suggest that Web 2.0 can increase the public’s understanding of science, which could improve government policy decisions. A 2012 study by researchers at the University of Wisconsin-Madison notes that “…the internet could be a crucial tool in increasing the general public’s level of science literacy. This increase could then lead to better communication between researchers and the public, more substantive discussion, and more informed policy decision.”

Ajax has prompted the development of websites that mimic desktop applications, such as word processors, spreadsheets, and slide-show presentations. WYSIWYG wiki and blogging sites replicate many features of PC authoring applications. Several browser-based services have emerged, including EyeOS and YouOS (no longer active). Although named operating systems, many of these services are application platforms. They mimic the user experience of desktop operating systems, offering features and applications similar to a PC environment, and are able to run within any modern browser. However, these so-called “operating systems” do not directly control the hardware on the client’s computer. Numerous web-based application services appeared during the dot-com bubble of 1997-2001 and then vanished, having failed to gain a critical mass of customers.

Many regard syndication of site content as a Web 2.0 feature. Syndication uses standardized protocols to permit end-users to make use of a site’s data in another context (such as another Web site, a browser plugin, or a separate desktop application). Protocols permitting syndication include RSS (really simple syndication, also known as Web syndication), RDF (as in RSS 1.1), and Atom, all of which are XML-based formats. Observers have started to refer to these technologies as Web feeds. Specialized protocols such as FOAF and XFN (both for social networking) extend the functionality of sites and permit end-users to interact without centralized Web sites.

Web 2.0 often uses machine-based interactions such as REST and SOAP. Servers often expose proprietary application programming interfaces (APIs), but standard APIs (for example, for posting to a blog or notifying a blog update) have also come into use. Most communications through APIs involve XML or JSON payloads. REST APIs, through their use of self-descriptive messages and hypermedia as the engine of application state, should be self-describing once an entry URI is known. Web Services Description Language (WSDL) is the standard way of publishing a SOAP API, and there are a range of Web service specifications.

In November 2004, CMP Media applied to the USPTO for a service mark on the use of the term “WEB 2.0” for live events. On the basis of this application, CMP Media sent a cease-and-desist demand to the Irish non-profit organisation IT@Cork on May 24, 2006, but retracted it two days later. The “WEB 2.0” service mark registration passed final PTO Examining Attorney review on May 10, 2006, and was registered on June 27, 2006. The European Union application (which would confer unambiguous status in Ireland) was declined on May 23, 2007.

Critics of the term claim that “Web 2.0” does not represent a new version of the World Wide Web at all, but merely continues to use so-called “Web 1.0” technologies and concepts. First, techniques such as Ajax do not replace underlying protocols like HTTP, but add a layer of abstraction on top of them. Second, many of the ideas of Web 2.0 were already featured in implementations on networked systems well before the term “Web 2.0” emerged. Amazon.com, for instance, has allowed users to write reviews and consumer guides since its launch in 1995, in a form of self-publishing. Amazon also opened its API to outside developers in 2002. Previous developments also came from research in computer-supported collaborative learning and computer supported cooperative work (CSCW) and from established products like Lotus Notes and Lotus Domino, all phenomena that preceded Web 2.0. Tim Berners-Lee, who developed the initial technologies of the Web, has been an outspoken critic of the term, while supporting many of the elements associated with it. In the environment where the Web originated, each workstation had a dedicated IP address and always-on connection to the Internet. Sharing a file or publishing a web page was as simple as moving the file into a shared folder.

Perhaps the most common criticism is that the term is unclear or simply a buzzword. For many people who work in software, version numbers like 2.0 and 3.0 are for software or hardware versioning only, and assigning 2.0 arbitrarily to many technologies with a variety of real version numbers has no meaning. The web does not have a version number. For example, in a 2006 interview with IBM developerWorks podcast editor Scott Laningham, Tim Berners-Lee described the term “Web 2.0” as jargon:

“Nobody really knows what it means… If Web 2.0 for you is blogs and wikis, then that is people to people. But that was what the Web was supposed to be all along… Web 2.0, for some people, it means moving some of the thinking [to the] client side, so making it more immediate, but the idea of the Web as interaction between people is really what the Web is. That was what it was designed to be… a collaborative space where people can interact.”

Other critics labeled Web 2.0 “a second bubble” (referring to the Dot-com bubble of 1997-2000), suggesting that too many Web 2.0 companies attempt to develop the same product with a lack of business models. For example, The Economist has dubbed the mid- to late-2000s focus on Web companies as “Bubble 2.0”.

In terms of Web 2.0’s social impact, critics such as Andrew Keen argue that Web 2.0 has created a cult of digital narcissism and amateurism, which undermines the notion of expertise by allowing anybody, anywhere to share and place undue value upon their own opinions about any subject and post any kind of content, regardless of their actual talent, knowledge, credentials, biases or possible hidden agendas. Keen’s 2007 book, Cult of the Amateur, argues that the core assumption of Web 2.0, that all opinions and user-generated content are equally valuable and relevant, is misguided. Additionally, Sunday Times reviewer John Flintoff has characterized Web 2.0 as “creating an endless digital forest of mediocrity: uninformed political commentary, unseemly home videos, embarrassingly amateurish music, unreadable poems, essays and novels… [and that Wikipedia is full of] mistakes, half-truths and misunderstandings”. In a 1994 Wired interview, Steve Jobs, forecasting the future development of the web for personal publishing, said “The Web is great because that person can’t foist anything on you; you have to go get it. They can make themselves available, but if nobody wants to look at their site, that’s fine. To be honest, most people who have something to say get published now.” Michael Gorman, former president of the American Library Association, has been vocal about his opposition to Web 2.0 due to the lack of expertise that it outwardly claims, though he believes that there is hope for the future.

“The task before us is to extend into the digital world the virtues of authenticity, expertise, and scholarly apparatus that have evolved over the 500 years of print, virtues often absent in the manuscript age that preceded print”.

There is also a growing body of critique of Web 2.0 from the perspective of political economy. Since, as Tim O’Reilly and John Battelle put it, Web 2.0 is based on the “customers… building your business for you,” critics have argued that sites such as Google, Facebook, YouTube, and Twitter are exploiting the “free labor” of user-created content. Web 2.0 sites use Terms of Service agreements to claim perpetual licenses to user-generated content, and they use that content to create profiles of users to sell to marketers. This is part of the increased surveillance of user activity taking place on Web 2.0 sites. Jonathan Zittrain of Harvard’s Berkman Center for Internet and Society argues that such data can be used by governments who want to monitor dissident citizens. The rise of Ajax-driven web sites where much of the content must be rendered on the client has meant that users of older hardware are given worse performance than on a site composed purely of HTML, where the processing takes place on the server. Accessibility for disabled or impaired users may also suffer on a Web 2.0 site.

Others have noted that Web 2.0 technologies are tied to particular political ideologies. “Web 2.0 discourse is a conduit for the materialization of neoliberal ideology.” The technologies of Web 2.0 may also “function as a disciplining technology within the framework of a neoliberal political economy.”

Looking at Web 2.0 from the perspective of cultural convergence can, according to Henry Jenkins, be problematic because consumers are doing more and more work in order to entertain themselves. For instance, Twitter offers online tools for users to create their own tweets, so in a sense the users do all the work when it comes to producing media content. At the heart of Web 2.0’s participatory culture is an inherent disregard for privacy, although this is not treated as much of an issue by giant platforms like Facebook and Google, because they want users to participate and create more content. More importantly, user participation creates fresh content and profile data that are useful to third parties such as advertising corporations and national security agencies. Therefore, suppression of privacy is built into the business model of Web 2.0, and one should not be too attached to the optimistic notion of Web 2.0 as the next evolutionary step for digital media.

Filed Under: Web Hosting Tagged With: web hosting

Shared web hosting service

By Erik


Image by/from Mathiasjok

A shared web hosting service is a web hosting service where many websites reside on one web server connected to the Internet. This is generally the most economical option for hosting, as the overall cost of server maintenance is spread over many customers. By choosing shared hosting, the website will share a physical server with one or more other websites.

The service must include system administration, since the server is shared by many users; this is a benefit for users who do not want to deal with it, but a hindrance to power users who want more control. In general, shared hosting will be inappropriate for users who require extensive software development outside what the hosting provider supports. Almost all applications intended to run on a standard web server work fine with a shared web hosting service. On the other hand, shared hosting is cheaper than other types of hosting, such as dedicated server hosting. Shared hosting usually has usage limits, and hosting providers should have extensive reliability features in place.

Shared hosting services typically offer basic web statistics, email and webmail services, automatic script installation, updated PHP and MySQL, and basic after-sale technical support, all included with a monthly subscription. Shared hosting also typically uses a web-based control panel system, and most of the large hosting companies use their own custom-developed control panel. Control panels and web interfaces can cause controversy, however, since web hosting companies sometimes sell the right to use their control panel system to others. Attempting to recreate the functionality of a specific control panel is common, which has led to many lawsuits over patent infringement.

In shared hosting, the provider is generally responsible for managing servers, installing server software, security updates, technical support, and other aspects of the service. Most servers are based on the Linux operating system and the LAMP software bundle, though some providers offer Microsoft Windows-based or FreeBSD-based solutions. Server-side facilities for either operating system (OS) have similar functionality (for example, MySQL and server-side programming languages such as the widely used PHP under Linux, or the proprietary SQL Server and ASP.NET under Microsoft Windows).

There are thousands of shared hosting providers in the world. They range from “mom-and-pop shops” and small design firms to multimillion-dollar providers with hundreds of thousands of customers. A large portion of the shared web hosting market is driven through pay per click (PPC) advertising or affiliate programs while some are purely non-profit.

Shared web hosting can also be done privately by sharing the cost of running a server in a colocation centre; this is called cooperative hosting.

Shared web hosting can be accomplished in two ways: name-based and Internet Protocol-based (IP-based), although some control panels allow a mix of name-based and IP-based hosting on the same server.

In IP-based virtual hosting, also called dedicated IP hosting, each virtual host has a different IP address. The web server is configured with multiple physical network interfaces or virtual network interfaces on the same physical interface. The web server software uses the IP address the client connects to in order to determine which website to show the user. The issue of IPv4 address exhaustion means that IP addresses are an increasingly scarce resource, so the primary justification for a site to use a dedicated IP is to be able to use its own SSL/TLS certificate rather than a shared certificate.

In name-based virtual hosting, also called shared IP hosting, the virtual hosts serve multiple hostnames on a single machine with a single IP address. This is possible because when a web browser requests a resource from a web server using HTTP/1.1 it includes the requested hostname as part of the request. The server uses this information to determine which website to show the user.
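For example, the only difference between two requests arriving at the same shared IP address may be the Host header (shown schematically below; the domain names are illustrative):

    GET /index.html HTTP/1.1
    Host: www.example-one.com

    GET /index.html HTTP/1.1
    Host: www.example-two.com

The server reads the Host value and serves the document root configured for that particular site.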

DNS stands for “Domain Name System”. The domain name system acts like a large telephone directory in that it is the master database associating a domain name such as www.wikipedia.org with the appropriate IP address. Consider the IP address something similar to a phone number: when someone “calls” www.wikipedia.org, the ISP looks at the DNS server and asks, “how do I contact www.wikipedia.org?” The DNS server responds, for example, “it can be found at 91.198.174.192”. As the Internet understands it, this can be considered the phone number for the server that houses the website. When the domain name is registered or purchased with a particular registrar, the DNS settings are kept on the registrar’s name server and, in most cases, point the domain to the name server of the hosting provider. This name server is where the IP address currently associated with the domain name resides.
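The same lookup can be performed programmatically; a minimal sketch in Node.js using the built-in dns module:

    const dns = require('dns');

    // Ask the system resolver for the address currently associated with the name.
    dns.lookup('www.wikipedia.org', (err, address) => {
      if (err) throw err;
      console.log('www.wikipedia.org resolves to ' + address);
    });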

Filed Under: Web Hosting Tagged With: web hosting

Wiki hosting service

By Erik


Image by/from Everaldo Coelho and YellowIcon;

A wiki hosting service or wiki farm is a server or an array of servers that offers users tools to simplify the creation and development of individual, independent wikis. Wiki farms are not to be confused with wiki “families”, a more generic term for any group of wikis located on the same server.

Prior to wiki farms, someone who wanted to operate a wiki had to install the software and manage the server(s) themselves. With a wiki farm, the farm’s administration installs the core wiki code once on its own servers, centrally maintains the servers, and establishes unique space on the servers for the content of each individual wiki with the shared core code executing the functions of each wiki.

Both commercial and non-commercial wiki farms are available for users and online communities. While most of the wiki farms allow anyone to open their own wiki, some impose restrictions. Many wiki farm companies generate revenue through the insertion of advertisements, but often allow payment of a monthly fee as an alternative to accepting ads.

Many of the most notable wiki farms and wiki-based sites got their start in the 2000s and early 2010s, including Wikipedia (2001), Fandom (2004), PBworks (2005), Wetpaint (2005), Wikidot (2006), and Gamepedia (2012).

Filed Under: Web Hosting Tagged With: web hosting

Web server benchmarking

By Erik

Web server benchmarking is the process of estimating a web server’s performance in order to determine whether it can serve a sufficiently high workload.

The performance is usually measured in terms of the number of requests served per second, latency (the response time for each request), throughput (bytes transferred per second), and the number of concurrent connections the server can sustain.

The measurements must be performed under a varying load of clients and requests per client.

Load testing (stress/performance testing) of a web server can be performed using automation and analysis tools such as ApacheBench (ab), Apache JMeter, Siege, httperf, wrk, Gatling and Locust.
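As an illustration, a typical ApacheBench run (the URL is a placeholder) issues a fixed number of requests at a chosen concurrency and reports the requests served per second along with latency percentiles:

    ab -n 1000 -c 10 http://www.example.com/

Here -n sets the total number of requests and -c the number of concurrent clients.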

Web application benchmarks measure the performance of application servers and database servers used to host web applications. TPC-W was a common benchmark emulating an online bookstore with synthetic workload generation.

Filed Under: Web Hosting Tagged With: web hosting

Internet hosting service

By Erik

An Internet hosting service is a service that runs Internet servers, allowing organizations and individuals to serve content to the Internet. There are various levels of service and various kinds of services offered.

A common kind of hosting is web hosting. Most hosting providers offer a combination of services, for example e-mail hosting, website hosting, and database hosting. DNS hosting service is usually bundled with domain name registration.

Generic kinds of Internet hosting, such as dedicated hosting, provide a server where clients can run anything they want (including web servers and other servers), together with an Internet connection offering good upstream bandwidth.

Another popular kind of hosting is shared hosting. This is where the hosting provider provisions hosting services for multiple clients on one physical server and shares the resources between the clients. Virtualization is key to making this work effectively.

Full-featured hosting services include dedicated hosting, virtual private servers, complex managed hosting, and colocation.

Limited or application-specific hosting services include web hosting, e-mail hosting, DNS hosting, game server hosting and wiki farms.

Internet hosting services include the required Internet connection; they may charge a flat rate per month or charge per bandwidth used. A common payment plan is to sell a predetermined amount of bandwidth and charge for any “overage” (usage above the predetermined limit) on a per-GB basis, with the overage charge agreed upon at the start of the contract.

Web hosting technology has been the subject of some controversy, as Web.com claims that it holds patent rights, through its 19 patents, to some common hosting technologies, including the use of a web-based control panel to manage the hosting service. Hostopia, a large wholesale hosting provider, purchased a license to use that technology from Web.com for 10% of Hostopia’s retail revenues. Web.com also sued GoDaddy for similar patent infringement.

Filed Under: Web Hosting Tagged With: web hosting

WebCL

By Erik

WebCL (Web Computing Language) is a JavaScript binding to OpenCL for heterogeneous parallel computing within any compatible web browser without the use of plug-ins, first announced in March 2011. It is developed on similar grounds to OpenCL and is considered a browser version of the latter. Primarily, WebCL allows web applications to tap the speed of multi-core CPUs and GPUs. With the growing popularity of applications that need parallel processing, such as image editing, augmented reality and sophisticated games, improving computational speed has become more important. Against this background, the non-profit Khronos Group designed and developed WebCL, a JavaScript binding to OpenCL with a portable kernel programming model, enabling parallel computing in web browsers across a wide range of devices. In short, WebCL consists of two parts: kernel programming, which runs on the processors (devices), and JavaScript, which binds the web application to OpenCL. The completed and ratified specification for WebCL 1.0 was released on March 19, 2014.

Currently, no browser natively supports WebCL; instead, non-native add-ons are used to implement it. For example, Nokia developed a WebCL extension. Mozilla does not plan to implement WebCL, favoring OpenGL ES 3.1 compute shaders instead.

The basic unit of a parallel program is the kernel. A kernel is any parallelizable task used to perform a specific job; often, ordinary functions can be realized as kernels. A program can be composed of one or more kernels. In order to realize a kernel, it is essential that the task is parallelizable; data dependencies and order of execution play a vital role in producing efficient parallelized algorithms. A simple example to think of is the loop unrolling performed by C compilers, where a statement like:
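(an element-by-element array addition, with illustrative variable names)

    for (i = 0; i < 3; i++) {
        c[i] = a[i] + b[i];
    }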

can be unrolled into:
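(continuing the same illustrative snippet)

    c[0] = a[0] + b[0];
    c[1] = a[1] + b[1];
    c[2] = a[2] + b[2];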

The above statements can be parallelized and made to run simultaneously. A kernel follows a similar approach, in which only a snapshot of the i-th iteration is captured inside the kernel.
Let’s rewrite the above code using a kernel:
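A sketch of such a kernel in OpenCL C, keeping the hypothetical vector-addition example; get_global_id(0) supplies the index that the loop counter provided before, and the kernel body is the body of a single iteration:

    __kernel void vector_add(__global const float* vector_a,
                             __global const float* vector_b,
                             __global float* vector_out)
    {
        int i = get_global_id(0);                   /* index of this work-item */
        vector_out[i] = vector_a[i] + vector_b[i];  /* one iteration of the original loop */
    }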

Running a WebCL application involves the following steps: initializing a WebCL context, compiling the kernel source and creating the kernel, setting up data buffers and transferring the input data to the device, enqueueing the kernel for execution, and reading the results back.
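A condensed JavaScript sketch of that flow, assuming the WebCL 1.0 API as published by Khronos (webcl.createContext, createProgram, createCommandQueue, enqueueNDRangeKernel, and related calls) and assuming the hypothetical vector_add kernel above is stored in the string kernelSource:

    var N = 1024;
    var bytes = N * Float32Array.BYTES_PER_ELEMENT;
    var a = new Float32Array(N), b = new Float32Array(N), out = new Float32Array(N);
    // ... fill a and b with input data ...

    // 1. Initialize: create a WebCL context on the default device.
    var ctx = webcl.createContext();

    // 2. Compile the kernel source and look up the kernel by name.
    var program = ctx.createProgram(kernelSource);
    program.build();
    var kernel = program.createKernel("vector_add");

    // 3. Allocate device buffers and copy the input data into them.
    var queue = ctx.createCommandQueue();
    var bufA = ctx.createBuffer(webcl.MEM_READ_ONLY, bytes);
    var bufB = ctx.createBuffer(webcl.MEM_READ_ONLY, bytes);
    var bufOut = ctx.createBuffer(webcl.MEM_WRITE_ONLY, bytes);
    queue.enqueueWriteBuffer(bufA, false, 0, bytes, a);
    queue.enqueueWriteBuffer(bufB, false, 0, bytes, b);

    // 4. Bind the buffers to the kernel arguments and launch N work-items.
    kernel.setArg(0, bufA);
    kernel.setArg(1, bufB);
    kernel.setArg(2, bufOut);
    queue.enqueueNDRangeKernel(kernel, 1, null, [N], null);

    // 5. Read the results back and wait for the queue to drain.
    queue.enqueueReadBuffer(bufOut, true, 0, bytes, out);
    queue.finish();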

Further details can be found in the WebCL specification.

WebCL, being a JavaScript-based implementation, doesn’t return an error code when errors occur. Instead, it throws an exception such as OUT_OF_RESOURCES, OUT_OF_HOST_MEMORY, or the WebCL-specific WEBCL_IMPLEMENTATION_FAILURE. The exception object carries a machine-readable name and a human-readable message describing the error. For example:
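A minimal sketch of catching such an exception, reusing the hypothetical context-creation call from the example above:

    try {
        var ctx = webcl.createContext();
    } catch (e) {
        // e.name holds the machine-readable error name, e.g. "WEBCL_IMPLEMENTATION_FAILURE";
        // e.message holds a human-readable description and may be null.
        console.error("WebCL error: " + e.name + " - " + e.message);
    }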

From the code above, it can be observed that the message field can be a NULL value.

The specification defines a number of other exceptions as well.

More information on exceptions can be found in the specs document.

Another exception is raised when a method is called on an object that has already been released. The release method does not delete the object permanently; it frees the resources associated with that object. To avoid this exception, the releaseAll method can be used, which not only frees the resources but also deletes all of the associated objects that were created.
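A brief sketch of the distinction, assuming the releaseAll method lives on the global webcl object (as in the WebCL 1.0 specification) and reusing the hypothetical buffer from the earlier example:

    bufA.release();      // frees the resources held by this buffer; using bufA afterwards throws an exception
    webcl.releaseAll();  // frees the resources of, and deletes, every WebCL object the application has created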

WebCL, being open-ended software developed for web applications, leaves considerable scope for vulnerabilities in both its design and its implementation. This forced the developers working on WebCL to give security the utmost importance; concerns such as out-of-bounds memory access and denial of service through long-running kernels were among those addressed.

Filed Under: Web Hosting Tagged With: web hosting

Internet service provider

By Erik


Image by/from Stealth Communications

An Internet service provider (ISP) is an organization that provides a wide range of services for accessing, using, or participating in the Internet. Internet service providers can be organized in various forms, such as commercial, community-owned, non-profit, or otherwise privately owned.

Internet services typically provided by ISPs can include Internet access, Internet transit, domain name registration, web hosting, Usenet service, and colocation.

An ISP typically serves as the access point, or gateway, that provides a user with access to everything available on the Internet.

The Internet (originally ARPAnet) was developed as a network between government research laboratories and participating departments of universities. Other companies and organizations joined by direct connection to the backbone, or by arrangements through other connected companies, sometimes using dialup tools such as UUCP. By the late 1980s, a process was set in place towards public, commercial use of the Internet. Some restrictions were removed by 1991, shortly after the introduction of the World Wide Web.

During the 1980s, online service providers such as CompuServe and America Online (AOL) began to offer limited capabilities to access the Internet, such as e-mail interchange, but full access to the Internet was not readily available to the general public.

In 1989, the first Internet service providers, companies offering the public direct access to the Internet for a monthly fee, were established in Australia and the United States. In Brookline, Massachusetts, The World became the first commercial ISP in the US. Its first customer was served in November 1989. These companies generally offered dial-up connections, using the public telephone network to provide last-mile connections to their customers. The barriers to entry for dial-up ISPs were low and many providers emerged.

However, cable television companies and the telephone carriers already had wired connections to their customers and could offer Internet connections at much higher speeds than dial-up using broadband technology such as cable modems and digital subscriber line (DSL). As a result, these companies often became the dominant ISPs in their service areas, and what was once a highly competitive ISP market became effectively a monopoly or duopoly in countries with a commercial telecommunications market, such as the United States.

In 1995, NSFNET was decommissioned, removing the last restrictions on the use of the Internet to carry commercial traffic, and network access points were created to allow peering arrangements between commercial ISPs.

On 23 April 2014, the U.S. Federal Communications Commission (FCC) was reported to be considering a new rule permitting ISPs to offer content providers a faster track to send content, thus reversing its earlier net neutrality position. A possible solution to net neutrality concerns may be municipal broadband, according to Professor Susan Crawford, a legal and technology expert at Harvard Law School. On 15 May 2014, the FCC decided to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising net neutrality; and second, reclassify broadband as a telecommunications service, thereby preserving net neutrality. On 10 November 2014, President Barack Obama recommended that the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. On 16 January 2015, Republicans presented legislation, in the form of a U.S. Congress H.R. discussion draft bill, that made concessions to net neutrality but prohibited the FCC from accomplishing that goal or enacting any further regulation affecting Internet service providers. On 31 January 2015, AP News reported that the FCC would present the notion of applying (“with some caveats”) Title II (common carrier) of the Communications Act of 1934 to the Internet in a vote expected on 26 February 2015. Adoption of this notion would reclassify Internet service from one of information to one of telecommunications and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. The FCC was expected to enforce net neutrality in its vote, according to The New York Times.

On 26 February 2015, the FCC ruled in favor of net neutrality by applying Title II (common carrier) of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the Internet. The FCC Chairman, Tom Wheeler, commented, “This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept.” On 12 March 2015, the FCC released the specific details of the net neutrality rules. On 13 April 2015, the FCC published the final rule on its new “Net Neutrality” regulations. These rules went into effect on 12 June 2015.

Upon becoming FCC chairman in April 2017, Ajit Pai proposed an end to net neutrality, awaiting votes from the commission. On 21 November 2017, Pai announced that a vote would be held by FCC members on 14 December 2017 on whether to repeal the policy. On 11 June 2018, the repeal of the FCC’s network neutrality rules took effect.

Access provider ISPs provide Internet access, employing a range of technologies to connect users to their network. Available technologies have ranged from computer modems with acoustic couplers connected to telephone lines, to television cable (CATV), Wi-Fi, and fiber optics.

For users and small businesses, traditional options include copper wires providing dial-up, DSL (typically asymmetric digital subscriber line, ADSL), cable modem, or Integrated Services Digital Network (ISDN, typically basic rate interface). Using fiber optics to reach end users is called fiber to the home or similar names.

Customers with more demanding requirements (such as medium-to-large businesses, or other ISPs) can use higher-speed DSL (such as single-pair high-speed digital subscriber line), Ethernet, metropolitan Ethernet, gigabit Ethernet, Frame Relay, ISDN Primary Rate Interface, ATM (Asynchronous Transfer Mode) and synchronous optical networking (SONET).

Wireless access is another option, including cellular and satellite Internet access.

A mailbox provider is an organization that provides services for hosting electronic mail domains with access to storage for mailboxes. It provides email servers to send, receive, accept, and store email for end users or other organizations.

Many mailbox providers are also access providers, while others are not (e.g., Gmail, Yahoo! Mail, Outlook.com, AOL Mail, Pobox). The definition given in RFC 6650 covers email hosting services, as well as the relevant departments of companies, universities, organizations, groups, and individuals that manage their mail servers themselves. The task is typically accomplished by implementing the Simple Mail Transfer Protocol (SMTP) and possibly providing access to messages through the Internet Message Access Protocol (IMAP), the Post Office Protocol (POP), webmail, or a proprietary protocol.

Internet hosting services provide email, web-hosting, or online storage services. Other services include virtual server, cloud services, or physical server operation.

Just as their customers pay them for Internet access, ISPs themselves pay upstream ISPs for Internet access. An upstream ISP usually has a larger network than the contracting ISP or is able to provide the contracting ISP with access to parts of the Internet the contracting ISP by itself has no access to.

In the simplest case, a single connection is established to an upstream ISP and is used to transmit data to or from areas of the Internet beyond the home network; this mode of interconnection is often cascaded multiple times until reaching a tier 1 carrier. In reality, the situation is often more complex. ISPs with more than one point of presence (PoP) may have separate connections to an upstream ISP at multiple PoPs, or they may be customers of multiple upstream ISPs and may have connections to each one of them at one or more points of presence. Transit ISPs provide large amounts of bandwidth for connecting hosting ISPs and access ISPs.

A virtual ISP (VISP) is an operation that purchases services from another ISP, sometimes called a wholesale ISP in this context, which allows the VISP’s customers to access the Internet using services and infrastructure owned and operated by the wholesale ISP. VISPs resemble mobile virtual network operators and competitive local exchange carriers for voice communications.

Free ISPs are Internet service providers that provide service free of charge. Many free ISPs display advertisements while the user is connected; like commercial television, in a sense they are selling the user’s attention to the advertiser. Other free ISPs, sometimes called freenets, are run on a nonprofit basis, usually with volunteer staff.

A wireless Internet service provider (WISP) is an Internet service provider with a network based on wireless networking. Technology may include commonplace Wi-Fi wireless mesh networking, or proprietary equipment designed to operate over open 900 MHz, 2.4 GHz, 4.9, 5.2, 5.4, 5.7, and 5.8 GHz bands or licensed frequencies such as 2.5 GHz (EBS/BRS), 3.65 GHz (NN) and in the UHF band (including the MMDS frequency band) and LMDS.

ISPs may engage in peering, where multiple ISPs interconnect at peering points or Internet exchange points (IXPs), allowing routing of data between each network, without charging one another for the data transmitted—data that would otherwise have passed through a third upstream ISP, incurring charges from the upstream ISP.

ISPs requiring no upstream and having only customers (end customers or peer ISPs) are called Tier 1 ISPs.

Network hardware, software and specifications, as well as the expertise of network management personnel are important in ensuring that data follows the most efficient route, and upstream connections work reliably. A tradeoff between cost and efficiency is possible.

Internet service providers in many countries are legally required (e.g., via the Communications Assistance for Law Enforcement Act (CALEA) in the U.S.) to allow law enforcement agencies to monitor some or all of the information transmitted by the ISP, or even to store users’ browsing history for government access if needed (e.g., via the Investigatory Powers Act 2016 in the United Kingdom). Furthermore, in some countries ISPs are subject to monitoring by intelligence agencies. In the U.S., a controversial National Security Agency program known as PRISM provides for broad monitoring of Internet users’ traffic and has raised concerns about potential violations of the privacy protections in the Fourth Amendment to the United States Constitution. Modern ISPs integrate a wide array of surveillance and packet-sniffing equipment into their networks, which then feeds the data to law-enforcement/intelligence networks (such as DCSNet in the United States, or SORM in Russia), allowing monitoring of Internet traffic in real time.

Filed Under: Web Hosting Tagged With: web hosting
