All application servers share some basic features, including connection management, state and session management, business-logic hosting, and database connectivity. Clearly the server must support protocols to let clients connect, and it must have a security mechanism to authenticate users. The primary jobs of the server process are to manage client connections and direct traffic for incoming client requests to various back-end resources.
To gain access, clients authenticate to the server, and the server in turn connects to the database on their behalf. Most servers provide a mechanism to add users and groups and control access to components. Increasingly, servers can also delegate authentication to an external security service such as the operating system or an LDAP directory. More advanced security can be provided through X.509 digital certificates on the user's machine. Once a user is authenticated, the server allows or disallows access to the components and data connections it manages.
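A minimal sketch of that gatekeeping, assuming a simple model in which users map to groups and each hosted component lists the groups allowed to run it (the class and method names are illustrative, not any particular server's API):

```java
import java.util.*;

// Illustrative sketch: users belong to groups, and each hosted
// component lists the groups allowed to invoke it.
public class AccessControl {
    private final Map<String, Set<String>> userGroups = new HashMap<>();
    private final Map<String, Set<String>> componentAcl = new HashMap<>();

    public void addUser(String user, String... groups) {
        userGroups.put(user, new HashSet<>(Arrays.asList(groups)));
    }

    public void grant(String component, String... groups) {
        componentAcl.put(component, new HashSet<>(Arrays.asList(groups)));
    }

    // An authenticated user may run a component only if one of
    // the user's groups appears in the component's access list.
    public boolean canAccess(String user, String component) {
        Set<String> groups = userGroups.getOrDefault(user, Set.of());
        Set<String> allowed = componentAcl.getOrDefault(component, Set.of());
        return groups.stream().anyMatch(allowed::contains);
    }
}
```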
For rich-client connections (Java, VB, PowerBuilder, and so on) the user's process has a persistent connection to the application server and data can be passed back and forth at any time to a set of objects that are persistent on the server. As components on the server become more stateful or more bound to a particular user, the total number of users that the server can handle decreases. So developers prefer to request data from the server, disconnect from a particular object, process the data locally at the client, then send changes to the server for validation and forwarding to the database. Session objects are less important when using servers this way, but they provide a way to pass data between objects on the server without a round trip to the client.
Application servers all provide some sort of load balancing and fail-over mechanism. Load balancing means that a group of application servers will be hosted as a cluster or farm. Requests coming in to the servers are handled by a dispatcher that sends the request to the least-busy server. From then on, clients communicate with that server. Load balancing provides scalability. As the user load increases, more machines can be added to the cluster.
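The dispatcher's least-busy routing might be sketched like this (a toy model; real dispatchers also weigh CPU load, health checks, and so on):

```java
import java.util.*;

// Illustrative dispatcher: routes each new client to the cluster
// member currently carrying the fewest connections.
public class Dispatcher {
    private final Map<String, Integer> load = new HashMap<>();

    public void addServer(String name) { load.put(name, 0); }

    // Pick the least-busy server and record the new connection on it.
    public String route() {
        String best = Collections.min(load.entrySet(),
                Map.Entry.comparingByValue()).getKey();
        load.merge(best, 1, Integer::sum);
        return best;   // the client talks to this server from now on
    }

    public void disconnect(String name) { load.merge(name, -1, Integer::sum); }
}
```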
Fail-over provides fault tolerance. If one machine in the cluster fails, new requests are rerouted to one of the other servers. Simple fail-over doesn't solve every problem, however. If the user is in the middle of a five-page sequence and a server fails before the user submits page three, the load-balancing mechanism will notice that the server is gone and reroute the user to another server. But the state and session data for that user won't be available at the new server. To address this, some servers also provide session-level fail-over: state and session data are copied to the other servers in the cluster or placed in persistent storage such as the database. This way a user's state is always available on each server.
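Session-level fail-over can be sketched as serializing session state to a shared store that every server in the cluster can read. The store below is an in-memory stand-in for a database or replicated cache:

```java
import java.io.*;
import java.util.HashMap;

// Illustrative sketch of session-level fail-over: session state is
// serialized to a shared store after each request, so a surviving
// server can restore it when the original server dies.
public class SessionStore {
    // Stands in for a database or replicated cache shared by the cluster.
    private final HashMap<String, byte[]> store = new HashMap<>();

    public void save(String sessionId, Serializable state) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(state);
            }
            store.put(sessionId, buf.toByteArray());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public Object restore(String sessionId) {
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(store.get(sessionId)))) {
            return in.readObject();   // another server picks up the session
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```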
At the core, an application server hosts business logic -- the code to access, manipulate, and validate data and perform processes. Application-specific business logic consists of the reusable components that everyone is trying to develop today. Once a component is created, it's deployed into the application server, assigned security settings, and made available to run. Most servers provide mechanisms for specifying component persistence, transaction handling, and threading requirements.
Rich clients can access server components directly. My Java applet can ask the server to instantiate a customer component. Then my applet will call methods. Usually the sequence is: connect, call a method and get a result set of data, disconnect, display the data to the user and allow manipulation, perform local validation of data, connect to the server, send updates to the server, perform validation of the data on the server, and finally store the data in the database. Having an intelligent client such as an applet makes programming against a middle tier fairly easy.
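That sequence can be sketched end to end. The customer component below is a stub standing in for a real server object; in practice the calls would go over the wire:

```java
import java.util.*;

// Illustrative walk through the rich-client cycle: fetch a result
// set, disconnect, edit locally, then send changes back for
// server-side validation. The server is stubbed with an in-memory map.
public class CustomerClient {
    // Stand-in for the application server's customer component.
    static class CustomerComponent {
        private final Map<Integer, String> db = new HashMap<>(Map.of(1, "Acme"));

        List<String> fetchAll() { return new ArrayList<>(db.values()); }

        // Server-side validation before the change reaches the database.
        boolean update(int id, String name) {
            if (name == null || name.isBlank()) return false;
            db.put(id, name);
            return true;
        }
    }

    public static boolean roundTrip() {
        CustomerComponent server = new CustomerComponent();  // connect
        List<String> rows = server.fetchAll();               // get result set
        String edited = rows.get(0) + " Corp";               // edit locally, offline
        return server.update(1, edited);                     // reconnect and submit
    }
}
```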
Pure thin-client applications can't talk to components on the application server directly. There's no way to exchange result sets with HTML or ECMAScript on the client browser. This means another component is needed on the server to generate and process HTML. The application server must decode a URL and determine which component should be executed. The component accesses the database or another component, gets the result set, wraps that result set in HTML (or XML), and passes it back to the browser. The HTML generation components on a server become clients to the business-logic components.
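A toy version of such an HTML-generation component, assuming a simple path-to-component routing table (the names are illustrative):

```java
import java.util.*;
import java.util.function.Supplier;

// Illustrative sketch: the server decodes a URL path, runs the
// matching business component, and wraps the component's result set
// in HTML for the browser.
public class HtmlFrontEnd {
    // Components the front end can dispatch to, keyed by URL path.
    private final Map<String, Supplier<List<String>>> routes = new HashMap<>();

    public void register(String path, Supplier<List<String>> component) {
        routes.put(path, component);
    }

    public String handle(String path) {
        List<String> rows = routes.get(path).get();   // call the business component
        StringBuilder html = new StringBuilder("<ul>");
        for (String row : rows) html.append("<li>").append(row).append("</li>");
        return html.append("</ul>").toString();       // what the browser receives
    }
}
```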
Once a page is sent back to the browser, we expect the user to make changes to the data and submit them back to the application server. Data is passed back to the server as parameters in a URL (in a GET), as form fields (in a POST), or in a cookie. The server pulls the data from the HTTP request and makes it available to whatever component is called on the server.
Application servers have facilities for adding and managing connections to relational systems such as Oracle or DB2. Once a data source is added, components can make calls to it and issue SQL requests. More and more, servers are also allowing access to nonrelational data sources -- such as CICS. While access to relational data (JDBC, ODBC, and so on) is fairly standardized, each server implements its own interface for accessing nonrelational data. Third-party vendors also provide interfaces to various servers for easy access to data sources such as SAP, PeopleSoft, Notes, and CICS/MQSeries. While these are servers in their own right, they are considered nonrelational sources of enterprise data, as are proprietary in-house systems. The key is to build some sort of object or component wrapper around these nonrelational sources to make their functionality available to other components. By encapsulating nonrelational data sources in objects, front-end developers can work with streaming data such as stock prices or plant-monitoring readings without knowing how the access is actually implemented.
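As a sketch, such a wrapper might look like this. The price source below is a stub; a real implementation would hide an MQSeries queue, a CICS transaction, or a vendor adapter behind the same interface:

```java
// Illustrative sketch: an object wrapper hides how a nonrelational
// source (here a fake stock-price feed) is reached, so front-end
// components just call a method.
public class StockFeed {
    // In a real server this interface might front MQSeries, CICS,
    // or a vendor adapter; callers never see the transport.
    interface PriceSource {
        double quote(String symbol);
    }

    private final PriceSource source;

    public StockFeed(PriceSource source) { this.source = source; }

    // Front-end components see only this method, not the plumbing.
    public String quoteLine(String symbol) {
        return symbol + ": " + source.quote(symbol);
    }
}
```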
Web and corporate IT developers are used to managing their own transactions against the database. Regardless of the programming language used, almost everyone uses SQL to access data and then commit or roll back changes. In an application server environment, only the server handles transaction boundaries. Developers write code that issues the SQL and notifies the server whether the logic was successful or not. If any component in a sequence of calls fails, the server issues a rollback; otherwise it commits. Servers provide different levels of transaction control. For example, you might want a particular component to have its own separate stand-alone transaction.
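The server's role as transaction boundary can be sketched as a coordinator that runs a sequence of component steps and commits only if every step signals success (a simplified model, not any particular server's API):

```java
import java.util.List;
import java.util.function.BooleanSupplier;

// Illustrative sketch of server-managed transaction boundaries:
// components only report success or failure, and the server decides
// whether the whole sequence commits or rolls back.
public class TransactionCoordinator {
    public enum Outcome { COMMITTED, ROLLED_BACK }

    // Each step stands in for one component issuing its SQL and
    // signaling whether its logic succeeded.
    public Outcome run(List<BooleanSupplier> steps) {
        for (BooleanSupplier step : steps) {
            if (!step.getAsBoolean()) {
                return Outcome.ROLLED_BACK;  // any failure aborts the whole unit
            }
        }
        return Outcome.COMMITTED;            // every component succeeded
    }
}
```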
When a page is submitted and data is updated, the server must connect to the database to get the job done. If a connect and disconnect have to be performed every time a user submits a page, performance and scalability suffer; a connect to the database is a fairly expensive operation in the life of one transaction. One solution is to connect when the user logs in and disconnect when he or she leaves. The page analogy is to connect when the first page is requested and keep the connection until the user either logs out or a time-out is reached. This is clearly not scalable: 5,000 logged-in users would require 5,000 connections to the database.
An application server doesn't hold a connection open for each individual user. Instead, it holds a pool of connections that are shared among all the server components as needed. If a user wants to update a set of data, the component asks the server for a connection from the pool. If one is available, it's assigned to the user's request. All updates are made and the component signals whether everything was successful. When the transaction ends, the connection is immediately released.
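A minimal pool along those lines, with stub connections standing in for real database handles:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative connection pool: a fixed set of connections is created
// once, then checked out per transaction and returned immediately
// afterward, so thousands of users share a handful of connections.
public class ConnectionPool {
    // Stand-in for a real database connection.
    public static class Connection { }

    private final BlockingQueue<Connection> idle;

    public ConnectionPool(int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) idle.add(new Connection());
    }

    public Connection acquire() {
        try {
            return idle.take();   // blocks if every connection is in use
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    public void release(Connection c) {
        idle.add(c);              // back to the pool when the transaction ends
    }

    public int available() { return idle.size(); }
}
```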
High-end servers also provide thread pooling and instance pooling. Because the most expensive operation for a thread or component instance is the creation or instantiation step, pooling provides better performance. Threads and component instances are available for immediate use by the server. If components aren't stateless or threads aren't released quickly, pooling offers no advantage. -- SB
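Instance pooling can be sketched the same way: pay the construction cost once, then recycle the instance (illustrative, not a real server's pooling API):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Illustrative instance pool: instantiation is the expensive step, so
// instances are recycled instead of created per request. Only
// stateless instances can safely be handed out this way.
public class InstancePool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;
    private int created = 0;

    public InstancePool(Supplier<T> factory) { this.factory = factory; }

    public T checkOut() {
        if (idle.isEmpty()) { created++; return factory.get(); }
        return idle.pop();   // reuse an existing instance, no construction cost
    }

    public void checkIn(T instance) { idle.push(instance); }

    public int instancesCreated() { return created; }
}
```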