In this series of articles I’m going to describe the life cycle of a web request from the
early stages of its life, when it is accepted by the web server, through its processing
in the ASP.NET pipeline, up to the generation of a response by the endpoints of the
pipeline.
Introduction
Microsoft Active Server Pages, also known as ASP, has provided web developers
with a rich and complex framework for building web applications since its first
release in late 1996. Over the years its infrastructure evolved and improved so much
that what is now known as ASP.NET no longer resembles its predecessor. ASP.NET is
a framework for building web applications, that is, applications that run over the
web, where the client-server paradigm is represented mostly by a browser
forwarding requests for resources of different kinds to a web server. Before the
advent of dynamic server-side resource generation techniques like CGI, PHP, JSP
and ASP, all web servers had to do was accept clients’ requests for static resources
and return them to the requestor. When dynamic technologies started to grow up,
web servers took on greater responsibility, since they had to find a way to generate
those dynamic resources on their side and return the result to the client, a task they
were not originally built for.
From a bird’s eye view, the interaction between client and server is very simple.
Communications over the web occur via HTTP (Hyper Text Transfer Protocol), an
application level protocol which relies on TCP and IP to transmit data between two
nodes connected to the heterogeneous network known as the World Wide Web.
Different servers choose different ways to generate and serve dynamic resources and
what we’re going to examine is how IIS does that, together with the path a request
follows once on the server and back to the client.
When installed, ASP.NET configures IIS to redirect requests for ASP.NET specific files
to a new ISAPI extension called aspnet_isapi.dll. What this extension does is
somewhat different from what the former asp.dll extension did, which was essentially
responsible just for parsing and executing the requested ASP page. The steps taken
by a generic ISAPI module to process a request are totally hidden from IIS,
therefore ISAPI extensions may follow different paradigms in order to process
requests.
Table 1: IIS Application Mappings for aspnet_isapi.dll
In addition to the file extensions listed in Table 1, the ASP.NET ISAPI extension
manages other file extensions which are usually not served to web browsers, such as
Visual Studio project files, source code files and configuration files.
So far we’ve seen that when a request for an ASP.NET file is picked up by IIS, it is
passed to the aspnet_isapi.dll, which is the main entry point for ASP.NET related
processing. Actually, what the ISAPI extension does depends significantly on the
version of IIS available on the system, and thus the process model, which is the
sequence of operations performed by the ASP.NET runtime to process the request
and generate a response, may vary quite a bit.
When running under IIS 5.X, all ASP.NET-related requests are dispatched by
the ISAPI extension to an external worker process called aspnet_wp.exe.
The ASP.NET ISAPI extension, hosted in the IIS process inetinfo.exe, passes the
control to aspnet_wp.exe, along with all the information concerning the incoming
request. The communication between the two is performed via named pipes, a well
known mechanism for IPC (Inter Process Communication). The ASP.NET worker
process performs a considerable number of tasks, together with the ISAPI extension.
Together, they are responsible for most of what happens under the hood of
an ASP.NET request. To introduce a topic which will be discussed later, take note of
the fact that each web application, corresponding to a different virtual directory
hosted on IIS, is executed in the context of the same process, the ASP.NET worker
process. To provide isolation and abstraction from the execution context
the ASP.NET model introduces the concept of Application Domains, in brief
AppDomains. They can be considered as lightweight processes. More on this later.
If running under IIS 6, on the other hand, the aspnet_wp.exe process is not used;
another process called w3wp.exe takes its place. Furthermore, inetinfo.exe is no
longer used to forward HTTP requests to ISAPI extensions, although it keeps running
to serve requests for other protocols. A lot of other details change compared to the
process model used by previous versions of IIS, although IIS 6 is capable of running
in compatibility mode and emulating the behavior of its predecessor. A big step
forward, compared to the process model used when running on top of IIS 5, is that
incoming requests are handled at a lower – kernel – level and then forwarded to the
correct ISAPI extension, thereby avoiding inter process communication, which can
be expensive from a performance and resource consumption point of view. We’ll
delve deeper into this topic in the following paragraphs.
This is the default process model available on Windows 2000 and XP machines. As
mentioned, it consists of the IIS inetinfo.exe process listening by default on TCP
port 80 for incoming HTTP requests and queuing them into a single queue, waiting
to be processed. If the request is specific to ASP.NET, the processing is delegated to
the ASP.NET ISAPI extension, aspnet_isapi.dll. This, in turn, communicates with the
ASP.NET worker process, aspnet_wp.exe, via named pipes, and finally it is the
worker process which takes care of delivering the request to the ASP.NET HTTP
runtime environment. This process is graphically represented in Figure 2.
One of the interesting points of this process model is that all the requests, once
handled by the ISAPI extension, are passed to the ASP.NET worker process. Only
one instance of this process is active at a time, with one exception, discussed later.
Therefore all ASP.NET web applications hosted on IIS are actually hosted inside the
worker process, too. However, this doesn’t mean that all the applications are run
under the same context and share all their data. As mentioned, ASP.NET introduces
the concept of AppDomain, which is essentially a sort of managed lightweight
process which provides isolation and security boundaries. Each IIS virtual directory
is executed in a single AppDomain, which is loaded automatically into the worker
process whenever a resource belonging to that application is requested for the first
time. Once the AppDomain is loaded – that is, all the assemblies required to satisfy
that request are loaded into the AppDomain – the control is actually passed to
the ASP.NET pipeline for the actual processing. Multiple AppDomains can thus run
under the same process, while requests for the same AppDomain can be served by
multiple threads. However, a thread isn’t permanently bound to an AppDomain and
can serve requests for different AppDomains over time, but at any given time a
thread executes within a single AppDomain.
For performance purposes the worker process can be recycled according to some
criteria which can be specified declaratively in the machine.config file placed in the
directory C:\windows\microsoft.net\Framework\[framework version]\CONFIG. These
criteria are the age of the process, number of requests served and queued, time
spent idle and consumed memory. Once the threshold value of one of these
parameters is reached, the ISAPI extension creates a new instance of the worker
process, which will be used from then on to serve the requests. This is the only time
when multiple copies of the process can be running concurrently. In fact, the old
instance of the process isn’t killed, but it is allowed to finish serving the pending
requests.
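For illustration, here is a hedged sketch of how those recycling criteria might be expressed in machine.config; the attribute values below are purely illustrative, not recommendations, and under the IIS 6 process model described next most of these settings are superseded by the application pool configuration.

<!-- machine.config, IIS 5.x process model – illustrative values only -->
<system.web>
  <!-- timeout: recycle after the process has been alive for 12 hours -->
  <!-- requestLimit: recycle after 50,000 requests have been served -->
  <!-- idleTimeout: recycle after 1 hour spent idle -->
  <!-- memoryLimit: recycle when the process uses 60% of physical memory -->
  <processModel enable="true"
                timeout="12:00:00"
                requestLimit="50000"
                idleTimeout="01:00:00"
                memoryLimit="60" />
</system.web>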
The IIS 6 process model is the default model on machines running Windows 2003
Server operating system. It introduces several changes and improvements over
the IIS 5 process model. One of the biggest changes is the concept of application
pools. On IIS 5.X all web applications, that is, all AppDomains, were hosted by
the ASP.NET worker process. To achieve a finer granularity over security boundaries
and personalization, the IIS 6 process model allows applications to run inside
different copies of a new worker process, w3wp.exe. Each application pool can
contain multiple AppDomains and is hosted in a single copy of the worker process. In
other words, the shift is from a single process hosting all applications to multiple
processes, each hosting an application pool. This model is also called the worker
process isolation mode.
Another big change from the previous model is the way IIS listens for incoming
requests. With the IIS 5 model, it was the IIS process, inetinfo.exe, that listened
on a specific TCP port for HTTP requests. In the IIS 6 architecture, incoming
requests are handled and queued at kernel level instead of user mode via a kernel
driver called http.sys; this approach has several advantages over the old model and
is called kernel-level request queuing.
It’s the worker process that is in charge of loading the ASP.NET ISAPI extension,
which, in turn, loads the CLR and delegates all the work to the HttpRuntime.
The w3wp.exe worker process, unlike the aspnet_wp.exe process used in the IIS 5
model, isn’t ASP.NET specific, and is used to handle any kind of request.
The specific worker process then decides which ISAPI modules to load according to
the type of resources it needs to serve.
A detail omitted from Figure 3 for simplicity is that incoming requests are forwarded
from the application pool queue to the right worker process via a module loaded in
IIS 6 called the Web Administration Service (WAS). This module is responsible for
reading worker process – web application bindings from the IIS metabase and
forwarding the request to the right worker process.
Introduction
This article inspects how ASP.NET (and IIS) handles requests. On the way, I will
discuss in detail what happens inside the ASP.NET architecture from the moment
a request leaves the browser until it goes all the way through
the ASP.NET Runtime.
Info: After the request finishes the ASP.NET Runtime, the ASP.NET page model
starts executing. This is beyond the scope of this article, but it will be the topic of my
next article.
The process begins once a user requests an ASP.NET resource via the browser.
For example, let us say that a user requested the following
URL: http://www.myserver.com/myapplication/mypage.aspx. The request will reach
“myserver”, which has Windows Server 2003 and IIS 6.0 installed.
Once the request reaches IIS, it is detected by the http.sys kernel mode driver.
Before going further, let us examine what the http.sys kernel mode driver is and
what it does.
Generally speaking, Windows provides two modes: User mode and Kernel mode.
User applications run in User mode, and Operating System code runs in Kernel
mode. If a user application needs to work directly with the hardware, that specific
action is done by a Kernel mode process. The obvious purpose of these modes is to
protect the Operating System components from being damaged by user applications.
So now that we know what User mode and Kernel mode are, what is the role of
the http.sys kernel mode driver?
When you create a new IIS website, IIS registers the site with http.sys, which then
receives all HTTP requests for any web application within the site.
http.sys functions as a forwarder, directing the HTTP requests to the User mode
process that runs the web application. In this case, the User mode process is the
worker process running the application pool which the web application runs under.
http.sys implements a queuing mechanism by creating as many queues as there
are application pools in IIS.
Following up with our example, once the request reaches “myserver”, http.sys picks
up the request. Let us say that “myapplication” is configured to run under the
application pool “myapplicationpool”; in this case, http.sys inspects the request
queue of “myapplicationpool” and forwards the request to the worker process under
which “myapplicationpool” is running.
OK, so now, the request is forwarded to the application pool as explained in the
previous section. Each application pool is managed by an instance of the worker
process “w3wp.exe”. The “w3wp.exe” runs, by default, under the “NetworkService”
account. This can be changed as follows: right click on the application pool hosting
your application--Properties--Identity tab. Recall that the application pool is run by
the worker – the “w3wp.exe”. So now, the worker process takes over.
The worker process “w3wp.exe” looks up the URL of the request in order to load the
correct ISAPI extension. The requested resource in the URL is “mypage.aspx”. So,
what happens next? A full discussion of ISAPI extensions (and filters) is beyond the
scope of this article, but in short, ISAPI extensions are the IIS way to handle
requests for different resources. Once ASP.NET is installed, it installs its own
ISAPI extension (aspnet_isapi.dll) and adds the mapping into IIS. IIS maps various
extensions to its ISAPI extensions. You can see the mappings in IIS as follows: right
click on the website-Properties-Home Directory tab-Configuration button-Mappings
tab. The figure below shows the mappings:
As you can see, the “.aspx” extension is mapped to the aspnet_isapi.dll extension.
So now, the worker process passes the request to the aspnet_isapi extension.
The aspnet_isapi extension in turn loads the HTTP Runtime and the processing of the
request starts.
Before inspecting what happens inside the HTTP Runtime, let us examine some
details about how the worker process loads the web application. The worker process
loads the web application assembly, allocating one application domain for the
application. When the worker process starts a web application (in its application
domain), the web application inherits the identity of the process (NetworkService, by
default) if impersonation is disabled. However, if impersonation is enabled, each web
application runs under the account that is authenticated by IIS, or the user account
that is configured in the web.config.
• <identity impersonate="true" />
o If only anonymous access is enabled by IIS, the identity that is passed
to the web application will be [machine]\IUSR_[machine].
o If only integrated Windows authentication is enabled in IIS, the
identity that is passed to the web application will be the authenticated
Windows user.
o If both integrated Windows authentication and anonymous access are
enabled, the identity that is passed to the web application will depend
on the one that was authenticated by IIS. IIS first attempts to use
anonymous access to grant a user access to a web application
resource. If this attempt fails, it then tries to use Windows
authentication.
• <identity impersonate="true" userName="username" password="password" />
o This allows the web application to run under a specific identity.
Info: There are differences between IIS 6.0 and IIS 5.0 in the way they handle
requests. First, the http.sys kernel mode is implemented only in IIS 6.0. It is not a
feature of IIS 5.0. In IIS 5.0, the request is caught directly by
the aspnet_isapi module, which in turn passes the request to the worker process.
The worker process and the ISAPI module communicate through named pipes, which
adds calling overhead. Moreover, a single instance of the worker process serves all web
applications; no application pools exist. As such, the model supplied by IIS 6.0 is
much improved over the IIS 5.0 model. Second, the worker process in IIS 5.0 is
“aspnet_wp.exe” as opposed to “w3wp.exe” in IIS 6.0. The worker process
“aspnet_wp.exe” runs under the default account “ASPNET” as opposed to
“NetworkService” in IIS 6.0. You can change this account by locating
the <processModel /> element in the “machine.config” file.
Info: IIS 7.0 presents two ways to handle ASP.NET requests. First, there is the
classic way which behaves the same as IIS 6.0; this is useful in compatibility
scenarios. Second, there is the new integrated way where ASP.NET and IIS are
part of the same request processing pipeline. In this second way, the .NET modules
and handlers plug directly into the generic request-processing pipeline, which is much
more efficient than the IIS 6.0 way.
So, let us summarize what happened so far: the request has passed from the
browser to http.sys, which in turn passed the request to the application pool. The
worker process which is running the application pool investigates the URL of the
request, and uses the IIS application extension mapping to load up
the ASP.NET ISAPI “aspnet_isapi.dll”. The ASP.NET ISAPI will now load the
HTTP Runtime, which is also called the ASP.NET Runtime.
OK, so now, we begin investigating what happens inside the HTTP Runtime. The
entry point of the Runtime is the HttpRuntime class.
The HttpRuntime.ProcessRequest method signals the start of the processing. In
the following subsections, we will examine what happens inside the Runtime after
the ProcessRequest method is called:
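As a side note, HttpRuntime.ProcessRequest is public, so the pipeline can even be driven outside of IIS. The following is only a minimal sketch (the physical path and page name are placeholders, and the hosting assembly must be reachable from the created AppDomain, for example via a bin folder) showing a single request being pushed through the Runtime:

using System;
using System.Web;
using System.Web.Hosting;

public class MiniHost : MarshalByRefObject
{
    // Pushes a single request for the given page through the ASP.NET pipeline
    // and writes the generated response to the console.
    public void ProcessPage(string page, string query)
    {
        SimpleWorkerRequest wr = new SimpleWorkerRequest(page, query, Console.Out);
        HttpRuntime.ProcessRequest(wr);
    }
}

public static class Program
{
    public static void Main()
    {
        // CreateApplicationHost creates a dedicated AppDomain rooted at the given
        // virtual and physical paths, much like the worker process does for a site.
        MiniHost host = (MiniHost)ApplicationHost.CreateApplicationHost(
            typeof(MiniHost), "/", @"C:\temp\mysite");

        host.ProcessPage("mypage.aspx", string.Empty);
    }
}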
The HttpContext lives during the lifetime of the request and is accessible via the
static HttpContext.Current property. The HttpContext object represents the
context of the currently active request, as it contains references to objects you can
access during the request lifetime, such
as Request, Response, Application, Server, and Cache. At any time during
request processing, HttpContext.Current gives you access to all of these objects.
Moreover, the HttpContext object contains an Items collection which you can use
to store request specific information.
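As a quick illustration (a minimal sketch – the "StartTime" key is just an example), any code running during the request, whether in a module, a handler or a page, can reach the same per-request state through HttpContext.Current:

using System;
using System.Web;

public static class RequestClock
{
    // Called early in the request, e.g. from a module's BeginRequest handler.
    public static void Stamp()
    {
        // Items is a per-request dictionary, so this value is visible to any
        // code that runs later during the same request.
        HttpContext.Current.Items["StartTime"] = DateTime.UtcNow;
    }

    // Called later in the request, e.g. from the page or an EndRequest handler.
    public static TimeSpan Elapsed()
    {
        DateTime start = (DateTime)HttpContext.Current.Items["StartTime"];
        return DateTime.UtcNow - start;
    }
}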
The HTTP Pipeline is just, as the name implies, a pipeline for the request to pass
through. It is called a pipeline because it contains a set of HttpModules that intercept
the request on its way to the HttpHandler. HttpModules are classes that have access
to the incoming request. These modules can inspect the incoming request and make
decisions that affect the internal flow of the request. After passing through the
specified HttpModules, the request reaches an HTTP Handler, whose job is to
generate the output that will be sent back to the requesting browser.
Developers can, of course, write their own modules and plug them into the
“machine.config” if they intend to apply the modules to all their applications, or into
the “web.config” of a certain application if they intend to apply the modules to that
specific application. Requests inside the HTTP Pipeline will pass through all modules
defined in the “machine.config” and “web.config”. As mentioned in the previous
section, these modules are maintained inside the HttpApplication and are loaded
dynamically at runtime.
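To make this concrete, here is a minimal sketch of a custom module (the module name, header name and registration entry are illustrative); it hooks two HttpApplication events and would be registered under <httpModules> in machine.config or web.config:

using System;
using System.Web;

// Registration (web.config):
//   <httpModules>
//     <add name="Timing" type="TimingModule" />
//   </httpModules>
public class TimingModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // BeginRequest fires before any handler runs; EndRequest fires after
        // the handler has produced its output.
        context.BeginRequest += new EventHandler(OnBeginRequest);
        context.EndRequest += new EventHandler(OnEndRequest);
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        ((HttpApplication)sender).Context.Items["ModuleStart"] = DateTime.UtcNow;
    }

    private void OnEndRequest(object sender, EventArgs e)
    {
        HttpContext ctx = ((HttpApplication)sender).Context;
        DateTime start = (DateTime)ctx.Items["ModuleStart"];
        // Stamp the outgoing response with the time spent in the pipeline.
        ctx.Response.AppendHeader("X-Elapsed-Ms",
            (DateTime.UtcNow - start).TotalMilliseconds.ToString("F0"));
    }

    public void Dispose() { }
}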
HTTP Handlers are the endpoints in the HTTP pipeline. The job of the HTTP Handler is
to generate the output for the requested resource. For ASPX pages, this means
rendering these pages into HTML and returning this HTML. HTTP Handlers can be
configured at both machine level (machine.config) and application level (web.config).
The figure below is taken from the “machine.config”, and it shows how HTTP
Handlers are set.
As you can see from the above figure, different resources are configured to use
different handlers. For ASP.NET ASPX pages, note that it is configured to use the
“PageHandlerFactory”. The job of the “PageHandlerFactory” is to provide an
instance of an HTTP Handler that can handle the request. The “PageHandlerFactory”
tries to find a compiled class that represents the requested page “mypage.aspx”. If it
succeeds, it returns this compiled class as the HTTP Handler. If there is no compiled
class to represent the requested page
because the request is the first one or because the page has been modified since the
last request, then the “PageHandlerFactory” compiles the requested page
“mypage.aspx” and returns the compiled class. Any subsequent requests will be
served by the same compiled class until the page is modified.
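For comparison, a handler does not have to be page-based at all. Below is a minimal sketch of a custom handler (the ".report" extension and type name are made up for illustration); it would be mapped under <httpHandlers>, and on IIS 6 the extension would also need a scriptmap to aspnet_isapi.dll so the request reaches ASP.NET in the first place:

using System.Web;

// Registration (web.config):
//   <httpHandlers>
//     <add verb="*" path="*.report" type="ReportHandler" />
//   </httpHandlers>
public class ReportHandler : IHttpHandler
{
    // The endpoint of the pipeline: generate the output sent back to the browser.
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Report generated for " + context.Request.Path);
    }

    // Returning true lets the runtime reuse a single instance across requests.
    public bool IsReusable
    {
        get { return true; }
    }
}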
Info: Page compilation is beyond the scope of this article, but it will be discussed in
my next article which discusses the ASP.NET Page model.
Finally, the request has been handed to the appropriate HTTP Handler as we saw in
the previous section. Next, the Runtime calls
the IHttpHandler.ProcessRequest method and the ASP.NET Page life cycle
starts. What happens next is outside the scope of this document, and will be
explained thoroughly in the next article.
Summary
This article discussed the hidden details of what happens whenever we request
an ASP.NET page via the browser. To quickly summarize the process:
Contents
• Introduction
• Introducing Viewstate Outside the ASP.NET Context
• Introducing Viewstate Inside the ASP.NET Context
• ASP.NET Page Life Cycle
• Role of Viewstate in the Page Life Cycle
• Viewstate Walkthroughs
Introduction
In this article, I will discuss in detail the ASP.NET page life cycle and
the ASP.NET Viewstate as an integrated unit.
It is strongly recommended that you read the complete article; however, if –and only
if – you consider yourself very familiar with the inner workings of Viewstate and the
page life cycle, and you want to go directly to the “cream” of the advanced
scenarios, then you may skip ahead to the “Viewstate Walkthroughs” section.
In its very basic form, Viewstate is nothing more than a property which has a
key/value pair indexer:
ViewState["mystringvariable"] = "myvalue";
ViewState["myintegervariable"] = 1;
As you can see, the indexer accepts a string as a key and an object as the value.
The ViewState property is defined in the “System.Web.UI.Control” class. Since
all ASP.NET pages and controls derive from this class, they all have access to
the ViewState property. The type of the ViewState property is
“System.Web.UI.StateBag”.
The very special thing about the StateBag class is the ability to “track changes”.
Tracking is off by default; it can be turned on by calling the “TrackViewState()”
method. Once tracking is on, it cannot be turned off. Only when tracking is on will a
change to a StateBag value cause that item to be marked as “Dirty”.
Info: In case you are wondering (and you should be) about the reason behind this
“tracking” behavior of the StateBag, do not worry; this will be explained in detail
along with examples about how tracking works, in the next sections.
When dealt with in the context of ASP.NET, the Viewstate can be defined as the
technique used by an ASP.NET web page to remember the change of state spanning
multiple requests. As you know, ASP.NET is a stateless technology; meaning that
two different requests (two postbacks) to the same web page are considered
completely unrelated. This raises the need for a mechanism to track the change of
state for this web page between the first and the second request.
The Viewstate of an ASP.NET page is created during the page life cycle, and saved
into the rendered HTML in the “__VIEWSTATE” hidden field. The value
stored in the “__VIEWSTATE” hidden field is the result of serialization of two types of
data:
1. Any programmatic change of state of the ASP.NET page and its controls.
(Tracking and “Dirty” items come into play here. This will be detailed later.)
2. Any data stored by the developer using the Viewstate property described
previously.
This value is loaded during postbacks, and is used to preserve the state of the web
page.
Info: The process of creating and loading the Viewstate during the page life cycle will
be explained in detail later. Moreover, what type of information is saved in the
Viewstate is also to be deeply discussed.
One final thing to be explained here is the fact that ASP.NET server controls use
Viewstate to store the values of their properties. What does this mean? Well, if you
take a look at the source code of the TextBox server control, for example, you will
find that the “Text” property is defined like this:
public string Text
{
get { return (string)ViewState["Text"]; }
set { ViewState["Text"] = value; }
}
You should apply this rule when developing your own server controls. Now, this does
not mean that the value of the “Text” property set at design time is saved in the
rendered serialized ViewState field. For example, if you add the
following TextBox to your ASP.NET page:
<asp:TextBox runat="server" ID="txt" Text="sometext" />
According to the fact that the “Text” property uses Viewstate to store its value, you
might think that the value is actually serialized in the __VIEWSTATE hidden field. This
is wrong because the data serialized in the hidden __VIEWSTATE consists only of
state changes done programmatically. Design time data is not serialized into
the __VIEWSTATE (more on this later). For now, just know that the above
declaration of the “Text” property means that it is utilizing Viewstate in its basic
form: an indexed property to store values much like the Hashtable collection does.
The first step (actually, step 0 as explained in the info block below) of the life cycle is
to generate the compiled class that represents the requested page. As you know, the
ASPX page consists of a combination of HTML and Server controls. A dynamic
compiled class is generated that represents this ASPX page. This compiled class is
then stored in the “Temporary ASP.NET Files” folder. As long as the ASPX page is
not modified (or the application itself is restarted, in which case the request will
actually be the first request and then the generation of the class will occur again),
then future requests to the ASPX page will be served by the same compiled class.
Info: If you have read my previous article about request processing, you will likely
recall that the last step of the request handling is to find a suitable HTTP Handler to
handle the request. Back then, I explained how the HTTP Handler Factory will either
locate a compiled class that represents the requested page or will compile the class in
the case of the first request. Well, do you see the link? This is the compiled class
discussed here. So, as you may have concluded, this step of generating the compiled
class actually occurs during the ASP.NET request handling architecture. You can
think of this step as the intersection between where the ASP.NET request handling
architecture stops and the ASP.NET page life cycle starts.
This compiled class will actually contain the programmatic definition of the controls
defined in the aspx page. In order to fully understand the process of dynamic
generation, let us consider the following example of a simple ASPX page for
collecting the username and the password:
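The original listing is not reproduced here, but a minimal sketch of such a page (the control IDs are illustrative) could look like this:

<%@ Page Language="C#" %>
<html>
<body>
    <form runat="server">
        Username: <asp:TextBox runat="server" ID="txtUsername" /><br />
        Password: <asp:TextBox runat="server" ID="txtPassword" TextMode="Password" /><br />
        <asp:Button runat="server" ID="btnLogin" Text="Log in" />
    </form>
</body>
</html>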
Now, upon the first request for the above page, the generated class is created and
stored in the following location:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\
<application name>\<folder identifier1>\<file identifier1>.cs
The compiled class “DLL” is also created in the same location with the same file
identifier, as follows:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\
<application name>\<folder identifier1>\<file identifier1>.dll
As you can see, the generated class and its corresponding DLL are created, and will
serve any new request for the ASPX page until one of the following conditions occurs:
• The ASPX page is modified
• The application is restarted (for example, when the worker process is recycled)
The following code is extracted from the compiled class. Note that the code is
reformatted and “cleaned up” for presentation purposes.
OK, so now is the really interesting part. Start examining the code above, and you
will be able to notice the following:
The entry point of the page life cycle is the pre-initialization phase called “PreInit”.
This is the only event where programmatic access to master pages and themes is
allowed. Note that this event is not recursive, meaning that it is accessible only for
the page itself and not for any of its child controls.
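As an illustration, this is the kind of code that is only legal at this stage (a minimal sketch; the theme and master page names are placeholders):

protected override void OnPreInit(EventArgs e)
{
    // Programmatic access to themes and master pages is only allowed in PreInit.
    Theme = "WinterTheme";
    MasterPageFile = "~/Site.master";

    base.OnPreInit(e);
}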
Init
Next is the initialization phase called “Init”. The “Init” event is fired recursively for
the page itself and for all the child controls in the hierarchy (which is created during
the creation of the compiled class, as explained earlier). Note that, contrary to many
developers’ beliefs, the event is fired in a bottom-up manner and not top-down
within the hierarchy. This means that, following up with our previous example, the
“Init” event is fired first for the bottom-most control in the hierarchy, and then up
the hierarchy until it is fired for the page itself. You can test this behavior yourself
by adding a custom or user control to the page. Override the “OnInit” event handler
in both the page and the custom (or user) control, and add break points to both
event handlers. Issue a request against the page, and you will notice that the event
handler for the control is fired before that of the page.
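A hedged sketch of that test (the control class and trace messages are illustrative): both the page and a trivial child control override OnInit and write a trace line, and running the page shows the control's line first.

using System;
using System.Diagnostics;
using System.Web.UI;
using System.Web.UI.WebControls;

// A trivial child control; declare an instance of it in the page markup so it
// becomes part of the page's control tree.
public class ProbeControl : Label
{
    protected override void OnInit(EventArgs e)
    {
        Debug.WriteLine("ProbeControl.OnInit"); // written first
        base.OnInit(e);
    }
}

public class ProbePage : Page
{
    protected override void OnInit(EventArgs e)
    {
        Debug.WriteLine("Page.OnInit"); // written after the child controls
        base.OnInit(e);
    }
}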
The initialization complete event called “InitComplete” signals the end of the
initialization phase. It is at the start of this event that tracking of
the ASP.NET Viewstate is turned on. Recall that the StateBag class (the type of
Viewstate) has a tracking ability which is off by default, and is turned on by calling
the “TrackViewState()” method. Also recall that, only when tracking is enabled
will any change to a Viewstate key mark the item as “Dirty”. Well, it is at the start
of “InitComplete” where “Page.TrackViewState()” is called, enabling tracking
for the page Viewstate.
Info: Later, I will give detailed samples of how the Viewstate works within each of
the events of the life cycle. These examples will hopefully aid in understanding the
inner workings of the Viewstate.
LoadViewState
This event happens only at postbacks. This is a recursive event, much like the
“Init” event. In this event, the Viewstate which has been saved in
the __VIEWSTATE during the previous page visit (via the SaveViewState event) is
loaded and then populated into the control hierarchy.
LoadPostbackdata
This event happens only at postbacks. This is a recursive event much like the “Init”
event. During this event, the posted form data is loaded into the appropriate
controls. For example, assume that, on your form, you had a TextBox server
control, and you entered some text inside the TextBox and posted the form. The
text you have entered is what is called postback data. This text is loaded during
the LoadPostbackdata event and handed to the TextBox. This is why when you
post a form, you find that the posted data is loaded again into the appropriate
controls. This behavior applies to most controls like the selected item of a drop down
list or the “checked” state of a check box, etc…
A very common conceptual error is the thought that Viewstate is responsible for
preserving posted data. This is absolutely false; Viewstate has nothing to do with it.
If you want a proof, disable the Viewstate on your TextBox control or even on the
entire page, and you will find that the posted data is still preserved. This is the virtue
of the LoadPostbackdata event.
Load
This event is recursive much like the “Init” event. The important thing to note
about this event is the fact that by now, the page has been restored to its previous
state in case of postbacks. That is because the LoadViewState and
the LoadPostbackdata events are fired, which means that the page Viewstate and
postback data are now handed to the page controls.
RaisePostbackEvent
This event is fired only at postbacks. What this event does is inspect all child controls
of the page and determine if they need to fire any postback events. If it finds such
controls, then these controls fire their events. For example, if you have a page with
a Button server control and you click this button causing a postback, then
the RaisePostbackEvent inspects the page and finds that the Button control has
actually raised a postback event – in this case, the Button's Click event.
The Button's Click event is fired at this stage.
SaveViewstate
This event is recursive, much like the “Init” event. During this event, the Viewstate
of the page is constructed and serialized into the __VIEWSTATE hidden field.
Info: Again, what exactly goes into the __VIEWSTATE field will be discussed later.
Control State is a new feature of ASP.NET 2.0. In ASP.NET 1.1, Viewstate was
used to store two kinds of state information for a control:
• Functionality state
• UI state
ASP.NET 2.0 solved this problem by partitioning this state into two: the Viewstate
and the Control State.
The Control State is used to store the UI state of a control such as the sorting and
paging of a DataGrid. In this case, you can safely disable the DataGrid Viewstate
(provided, of course, that you rebind the data source at each postback), and the
sorting and paging will still work simply because they are saved in the Control State.
The Control State is also serialized and stored in the same __VIEWSTATE hidden
field. The Control State cannot be turned off.
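To show the mechanics, here is a minimal sketch of a control opting into Control State (the pageIndex field is illustrative); unlike Viewstate, this state is persisted even when EnableViewState is set to false:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class PagedList : WebControl
{
    private int pageIndex;

    public int PageIndex
    {
        get { return pageIndex; }
        set { pageIndex = value; }
    }

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // Opt in: tell the page that this control needs Control State persisted.
        Page.RegisterRequiresControlState(this);
    }

    protected override object SaveControlState()
    {
        // Pair our own value with whatever the base class needs to save.
        return new Pair(base.SaveControlState(), pageIndex);
    }

    protected override void LoadControlState(object savedState)
    {
        Pair pair = (Pair)savedState;
        base.LoadControlState(pair.First);
        pageIndex = (int)pair.Second;
    }
}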
Render
This is a recursive event much like the “Init” event. During this event, the HTML
that is returned to the client requesting the page is generated.
Unload
This is a recursive event much like the “Init” event. This event unloads the page
from memory, and releases any used resources.
The important thing to note in the “Init” event is that Viewstate tracking is not yet
enabled. In order to demonstrate this, let us take the following example:
protected override void OnInit(EventArgs e)
{
    bool ret;
    ret = ViewState.IsItemDirty("item"); // returns false
    ViewState["item"] = "1";
    ret = ViewState.IsItemDirty("item"); // still returns false – tracking is not yet enabled
    base.OnInit(e);
}
Note that in the above example, the item is still not marked as “Dirty” even after it
is being set. This is due to the fact that at this stage, the “TrackViewState()”
method is not called yet. It will be called in the “InitComplete” event.
Info: You can force the tracking of Viewstate at any stage that you want. For
example, you can call the “Page.TrackViewState()” method at the start of the
“Init” event and thus enabling tracking. In this case, the “Init” event will behave
exactly like the “InitComplete” event, to be discussed next.
protected override void OnInitComplete(EventArgs e)
{
    bool ret;
    ret = ViewState.IsItemDirty("item"); // returns false
    ViewState["item"] = "1";
    ret = ViewState.IsItemDirty("item"); // returns true – tracking is now enabled
    base.OnInitComplete(e);
}
This time, note that first, the item is not marked as “Dirty”. However, once it is set,
then it is marked as “Dirty”. This is due to the fact that at the start of this event,
the “TrackViewState()” method is called.
To better understand the role of Viewstate in these events, let us take an example.
Say that you have a simple ASPX page that consists of the following:
• During the “Load” event, the “dynamictext” value will be assigned to the
“Text” property of the Label
• Later in the life cycle, the “SaveViewState” event is fired, and it stores the
new assigned value “dynamictext” in the __VIEWSTATE serialized field (in the
next sections, you will see how tracking came into play here)
Now, when you click the Button control, the following will happen:
See how, because of the Viewstate, the “dynamictext” value is persisted across the
page postback. Without Viewstate and with the “!IsPostback” condition inside the
“Load” event, the value “dynamictext” would have been lost after the postback.
Viewstate Walkthroughs
OK, so by now, you should have a solid understanding about the page life cycle, the
Viewstate, and the role of the Viewstate in the page life cycle. However, we are yet
to discuss the cases that test whether you are an advanced Viewstate user or not.
In the next walkthroughs, you shall tackle most of these cases. By the end of this
section, you can safely call yourself an advanced Viewstate user.
So, let us start by a very simple question. Will you have a value stored in
the __VIEWSTATE field if you have a blank ASP.NET page or a page with Viewstate
completely disabled? The answer is yes. As a test, create a blank ASP.NET page and
run it. Open the View Source page and note the __VIEWSTATE field. You will notice
that it contains a small serialized value. The reason is that the page itself saves 20 or
so bytes of information into the __VIEWSTATE field, which it uses to distribute
postback data and Viewstate values to the correct controls upon postback. So, even
for a blank page or a page with disabled Viewstate, you will see a few remaining
bytes in the __VIEWSTATE field.
As mentioned previously, the data stored in the __VIEWSTATE field consists of the
following:
While the former is pretty much clear, the latter is to be explained thoroughly.
OK, so let us assume that you have a web page with only a Label control on it.
Now, let us say that at design time, you set the “Text” property of the Label to
“statictext”. Now, load the page and open the View Source page. You will see
the __VIEWSTATE field. Do you think “statictext” is saved there? The answer is
certainly not. Recall from our discussion about generating the compiled class of the
page that static properties of controls are assigned in the generated class. So, when
the page is requested, the values stored in the generated class are displayed. As
such, static properties that are set at design time are never stored in
the __VIEWSTATE field. However, they are stored in the generated compiled class,
and thus they show up when rendering the page.
Now, let us say that you make some changes to your page so that it consists of the
following (a minimal code sketch follows the list):
• A Label control with the value “statictext” assigned to its “Text” property
• A Button control to postback the page
• Inside the “Page_Load” event handler of the page, add code to change the
value of the Label’s “Text” property to “dynamictext”. Wrap this code inside
a “!IsPostback” condition.
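A minimal sketch of the code-behind assumed in this walkthrough (“lbl” is the Label’s ID; the Button needs no handler, it only posts the page back):

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // Runs only on the first request; tracking is already on at this point,
        // so the change is marked dirty and ends up in the __VIEWSTATE field.
        lbl.Text = "dynamictext";
    }
}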
Load the page and open the View Source page. This time, you will notice that
the __VIEWSTATE field is larger. This is because at runtime, you changed the value
of the “Text” property. And, since you have done this inside the “Load” event,
where, at this time, tracking of Viewstate is enabled (remember, tracking is enabled
in the “InitComplete” event), the “Text” property is marked as “Dirty”, and thus
the “SaveViewstate” event stores its new value in the __VIEWSTATE field.
Interesting!!
As a proof that the “dynamictext” value is stored in the __VIEWSTATE field (apart
from the fact that it is larger), click the Button control causing a postback. You will
find that the “dynamictext” value is retained in the Label even though the code in
the “Load” event is wrapped inside the “!IsPostback” condition. This is due to the
fact that upon postback, the “LoadViewstate” event extracted the value
“dynamictext” which was saved by the “SaveViewState” event during the previous
page visit and then assigned it back to the Label’s “Text” property.
Let us now go one step further. Repeat the exact same scenario, with one exception:
instead of adding the code that will change the value of the “Text” property at the
“Page_Load” event handler, override the “OnPreInit” event handler and add the
code there. Now, first load the page, and you will see the new text displayed in
the Label. But, do you think this time it is stored in the __VIEWSTATE field? The
answer is no. The reason is that during the “PreInit” event, Viewstate tracking is
not yet enabled, so the change done to the “Text” property is not tracked and thus
not saved during the “SaveViewState” event. As a proof, click the Button causing
a postback. You will see that the value displayed inside the Label is the design time
value “statictext”.
Let us even go one step further. If you have been following this long article carefully,
you will recall that during the “Init” event, tracking of Viewstate is not yet enabled.
You have even seen an example in the section Viewstate in the “Init” event of how
an item remains not dirty even after it is set. Now, going back to our
current example, you will most probably be tempted to consider that if we add the
code that changes the value of the “Text” property inside the “OnInit” event
handler, you will get the same result as when you put it inside the “OnPreInit”
event handler; after all, tracking of Viewstate is not enabled yet during either event.
Well, if you think so, then you are mistaken. Reason: if you have been following this
article carefully, you will recall that the “InitComplete” event (where tracking of
Viewstate is enabled) is not a recursive event; it is called only for the page itself, but
not for its child controls. This means that, for the child controls, there is no
“InitComplete” event; so, where does the tracking of Viewstate start? The answer
is at the start of the “Init” event. So, this means that for the page itself, tracking of
Viewstate is enabled at the “InitComplete” event; that is why during the “OnInit”
event handler, the item was still marked as not being “Dirty” even after
modification. However, for child controls, there is no “InitComplete” event, so the
Viewstate tracking is enabled as early as the “Init” event. And, since – again as
was explained earlier – the “Init” event is called recursively for the page and its
controls in a bottom-up manner, by the time the “Init” event of the page itself is
fired, the “Init” event of the child controls has already been fired, and thus
Viewstate tracking for child controls is on by that time. As such, adding the code that
changes the value of the “Text” property inside the “OnInit” event handler will
have the same effect as putting it inside the “Page_Load” event handler.
In this walkthrough, we will go through each of the relevant page life cycle events
and see what role Viewstate has in each of these events. In this example, we have
an ASP.NET page with the following characteristics:
• It contains a single Label control (lbl) with its “Text” property set to
“statictext”
• It contains a Button control (btnA) with code in its event handler that sets
the “Text” property of “lbl” to “dynamictext”
• It contains a Button control (btnB) whose purpose is to cause a page
postback
Now, let us examine what will happen during the page life cycle.
Disabling Viewstate would obviously reduce Viewstate size; but, it surely kills the
functionality along the way. So, a little more planning is required…
Consider the case where you have a page with a drop down list that should display
the countries of the world. On the “Page_Load” event handler, you bind the drop
down list to a data source that contains the countries of the world; the code that
does the binding is wrapped inside a “!IsPostback” condition. Finally, on the page,
you have a button that when clicked should read the selected country from the drop
down list. With the setup described above, you will end up with a
large __VIEWSTATE field. This is due to the fact that data bound controls (like the
drop down list) store their state inside the Viewstate. You want to reduce Viewstate;
what options do you have?
• Option 1: disable Viewstate on the drop down list, keeping the binding code inside
the “!IsPostback” condition in “Page_Load”
• Option 2: disable Viewstate on the drop down list and remove the “!IsPostback”
condition, so that the list is rebound at every “Page_Load”
• Option 3: disable Viewstate on the drop down list and move the binding code
(without the “!IsPostback” condition) to the “Init” event
If you implement Option 1, you will reduce the Viewstate alright, but with it, you will
also lose the list of countries on the first postback of the page. When the page first
loads, the code in the “Page_Load” event handler is executed and the list of
countries is bound to the list. However, because Viewstate is disabled on the list, this
change of state is not saved during the “SaveViewState” event. When the button
on the page is clicked causing a postback, since the binding code is wrapped inside a
“!IsPostback” condition, the “LoadViewState” event has nothing saved from the
previous page visit and the drop down list is empty. If you implement Option 2, you
will reduce the Viewstate size and you will not lose the list of countries on postback.
However, another problem arises: because the binding code is now executed at each
“Page_Load”, the postback data is lost upon postback, and every time, the first item
of the list will be selected. This is true because in the page life cycle, the
“LoadPostbackdata” event occurs before the “Load” event. Option 3 is the correct
option. In this option, Viewstate stays disabled on the drop down list and the binding
code (without the “!IsPostback” condition) is moved to the “Init” event, as sketched
below. Since the “Init” event occurs before the “LoadPostbackdata” event in the
page life cycle, the postback data is preserved upon postbacks, and the selected item
from the list is correctly preserved.
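A minimal sketch of Option 3 (“ddlCountries” and the GetCountries() data-access call are placeholders for whatever the page actually uses); Viewstate stays disabled on the list in the markup, for example via EnableViewState="false":

protected void Page_Init(object sender, EventArgs e)
{
    // Rebind on every request, including postbacks. Because Init runs before
    // LoadPostbackdata, the posted selection is applied after the items exist,
    // so the selected country is preserved without any Viewstate cost.
    ddlCountries.DataSource = GetCountries(); // placeholder data-access call
    ddlCountries.DataTextField = "Name";
    ddlCountries.DataValueField = "Code";
    ddlCountries.DataBind();
}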
Now, remember that in Option 3, you have successfully reduced the Viewstate size
and kept the functionality working; but, this actually comes at the cost of rebinding
the drop down list at each postback. The performance hit of revisiting the data
source at each postback is nothing when compared with the performance boost
gained from saving a huge amount of bytes being rendered at the
client’s __VIEWSTATE field. This is especially true with the fact that most clients are
connected to the Internet via low speed dial up connections.
The first fact that you should know about dynamic controls is that they should be
added to the page at each and every page execution. Never wrap the code that
initializes and adds the dynamic control to the page inside a “!IsPostback”
condition. The reason is that since the control is dynamic, it is not included in the
compiled class generated for the page. So, the control should be added at every
page execution for it to be available in the final control tree. The second fact is that
dynamic controls play “catch-up” with the page life cycle once they are added. Say,
you did the following:
Label lbl = new Label();
Page.Controls.Add(lbl);
Once the control is added to the “Controls” collection, it plays “catch-up” with the
page life cycle, and all the events that it missed are fired. This leads to a very
important conclusion: you can add dynamic controls at any time during the page life
cycle until the “PreRender” event. Even when you add the dynamic control in the
“PreRender” event, once the control is added to the “Controls” collection, the
“Init”, “LoadViewState”, “LoadPostbackdata”, “Load”, and “SaveViewstate”
events are fired for this control. This is called “catch-up”. Note though that it is
recommended that you add your dynamic controls during the “PreInit” or “Init”
events. This is because it is best to add controls to the control tree before tracking of
the Viewstate of other controls is enabled…
Finally, what is the best practice for adding dynamic controls regarding Viewstate?
Let us say that you want to add a Label at runtime and assign a value to its “Text”
property. What is the best practice to do so? If you are thinking the below, then you
are mistaken:
Label lbl = new Label();
Page.Controls.Add(lbl);
lbl.Text = "bad idea";
You are mistaken because using the above technique you are actually storing the
value of the “Text” property in the Viewstate. Why? Because, recall that once a
dynamic control is added to the “Controls” collection, a “catch-up” happens and
the tracking for the Viewstate starts. So, setting the value of the “Text” property will
cause the value to be stored in the Viewstate.
However, if you do the below, then you are thinking the right way, because you are
setting the “Text” property before adding the control to the “Controls” collection.
In this case, the value of the “Text” property is added with the control to the control
tree and not persisted in Viewstate.
Label lbl = new Label();
lbl.Text = "good idea";
Page.Controls.Add(lbl);
ASP.NET is a powerful platform for building Web applications,
providing a tremendous amount of flexibility and power for building
just about any kind of Web application. Most people are familiar only with
the high level frameworks like WebForms and WebServices which sit at the
very top level of the ASP.NET hierarchy. In this article I’ll describe the lower
level aspects of ASP.NET and explain how requests move from Web Server
to the ASP.NET runtime and then through the ASP.NET Http Pipeline to
process requests.
Most people using ASP.NET are familiar with WebForms and WebServices. These high
level implementations are abstractions that make it easy to build Web based
application logic and ASP.NET is the driving engine that provides the underlying
interface to the Web Server and routing mechanics to provide the base for these high
level front end services typically used for your
applications. WebForms and WebServices are merely two very sophisticated
implementations of HTTP Handlers built on top of the core ASP.NET framework.
However, ASP.NET provides much more flexibility from a lower level. The HTTP
Runtime and the request pipeline provide all the same power that went into building
the WebForms and WebService implementations – these implementations were
actually built with .NET managed code. And all of that same functionality is available
to you, should you decide you need to build a custom platform that sits at a level a
little lower than WebForms.
WebForms are definitely the easiest way to build most Web interfaces, but if you’re
building custom content handlers, or have special needs for processing the incoming
or outgoing content, or you need to build a custom application server interface to
another application, using these lower level handlers or modules can provide better
performance and more control over the actual request process. With all the power
that the high level implementations of WebForms and WebServices provide they also
add quite a bit of overhead to requests that you can bypass by working at a lower
level.
What is ASP.NET
Let’s start with a simple definition: What is ASP.NET? I like to define ASP.NET as
follows:
The runtime provides a complex yet very elegant mechanism for routing requests
through this pipeline. There are a number of interrelated objects, most of which are
extensible either via subclassing or through event interfaces at almost every level of
the process, so the framework is highly extensible. Through this mechanism it’s
possible to hook into very low level interfaces such as the caching, authentication
and authorization. You can even filter content by pre or post processing requests or
simply route incoming requests that match a specific signature directly to your code
or another URL. There are a lot of different ways to accomplish the same thing, but
all of the approaches are straightforward to implement, yet provide flexibility in
finding the best match for performance and ease of development.
The entire ASP.NET engine was completely built in managed code and all of the
extensibility functionality is provided via managed code extensions. This is a
testament to the power of the .NET framework in its ability to build sophisticated and
very performance oriented architectures. Above all though, the most impressive part
of ASP.NET is the thoughtful design that makes the architecture easy to work with,
yet provides hooks into just about any part of the request processing.
With ASP.NET you can perform tasks that previously were the domain of ISAPI
extensions and filters on IIS – with some limitations, but it’s a lot closer than say
ASP was. ISAPI is a low level Win32 style API that had a very meager interface and
was very difficult to work with for sophisticated applications. Since ISAPI is very low level
it also is very fast, but fairly unmanageable for application level development. So,
ISAPI has been mainly relegated for some time to providing bridge interfaces to
other application or platforms. But ISAPI isn’t dead by any means. In fact, ASP.NET
on Microsoft platforms interfaces with IIS through an ISAPI extension that hosts
.NET and through it the ASP.NET runtime. ISAPI provides the core interface from the
Web Server and ASP.NET uses the unmanaged ISAPI code to retrieve input and send
output back to the client. The content that ISAPI provides is available via common
objects like HttpRequest and HttpResponse that expose the unmanaged data as
managed objects with a nice and accessible interface.
ISAPI is the first and highest performance entry point into IIS for custom
Web Request handling.
As a protocol ISAPI supports both ISAPI extensions and ISAPI Filters. Extensions are
a request handling interface and provide the logic to handle input and output with
the Web Server – it’s essentially a transaction interface. ASP and ASP.NET are
implemented as ISAPI extensions. ISAPI filters are hook interfaces that make it
possible to look at EVERY request that comes into IIS and to modify the content or
change the behavior of functionality like Authentication. Incidentally, ASP.NET maps
ISAPI-like functionality via two concepts: Http Handlers (extensions) and Http
Modules (filters). We’ll look at these later in more detail.
ISAPI is the initial code point that marks the beginning of an ASP.NET request.
ASP.NET maps various extensions to its ISAPI extension which lives in the .NET
Framework directory:
<.NET FrameworkDir>\aspnet_isapi.dll
You can interactively see these mappings in the IIS Service Manager as shown in
Figure 1. Look at the root of the Web Site and the Home Directory tab, then
Configuration | Mappings.
Figure 1: IIS maps various extensions like .ASPX to the ASP.NET ISAPI extension.
Through this mechanism requests are routed into ASP.NET's processing pipeline at
the Web Server level.
You shouldn’t set these extensions manually as .NET requires a number of them.
Instead use the aspnet_regiis.exe utility to make sure that all the
various scriptmaps get registered properly:
cd <.NetFrameworkDirectory>
aspnet_regiis -i
This will register the particular version of the ASP.NET runtime for the entire Web
site by registering the scriptmaps and setting up the client side scripting libraries
used by the various controls for uplevel browsers. Note that it registers the particular
version of the CLR that is installed in the above directory. Options
on aspnet_regiis let you configure virtual directories individually. Each version of
the .NET framework has its own version of aspnet_regiis and you need to run the
appropriate one to register a site or virtual directory for a specific version of the .NET
framework. Starting with ASP.NET 2.0, an IIS ASP.NET configuration page lets you
pick the .NET version interactively in the IIS management console.
Figure 2 – Request flow from IIS to the ASP.NET Runtime and through the request
processing pipeline from a high level. IIS 5 and IIS 6 interface with ASP.NET in
different ways but the overall process once it reaches the ASP.NET Pipeline is the
same.
IIS6, unlike previous servers, is fully optimized for ASP.NET
In addition, Application Pools are highly configurable. You can configure their
execution security environment by setting an execution impersonation level for the
pool which allows you to customize the rights given to a Web application in that
same granular fashion. One big improvement for ASP.NET is that the Application Pool
replaces most of the ProcessModel entry in machine.config. This entry was difficult to
manage in IIS 5, because the settings were global and could not be overridden in an
application specific web.config file. When running IIS 6, the ProcessModel setting is
mostly ignored and settings are instead read from the Application Pool. I say mostly
– some settings, like the size of the ThreadPool and IO threads still are configured
through this key since they have no equivalent in the Application Pool settings of the
server.
Because Application Pools are external executables, these executables can also be
easily monitored and managed. IIS 6 provides a number of health checking,
restarting and timeout options that can detect, and in many cases correct, problems
with an application. Finally, IIS 6’s Application Pools don’t rely on COM+ as IIS 5
isolation processes did, which has improved performance and stability, especially
for applications that need to use COM objects internally.
Although IIS 6 application pools are separate EXEs, they are highly optimized for
HTTP operations by directly communicating with a kernel mode HTTP.SYS driver.
Incoming requests are directly routed to the appropriate application
pool. InetInfo acts merely as an Administration and configuration service – most
interaction actually occurs directly between HTTP.SYS and the Application Pools, all
of which translates into a more stable and higher performance environment over IIS
5. This is especially true for static content and ASP.NET applications.
An IIS 6 application pool also has intrinsic knowledge of ASP.NET and ASP.NET can
communicate with new low level APIs that allow direct access to the HTTP Cache
APIs which can offload caching from the ASP.NET level directly into the Web Server’s
cache.
In IIS 6, ISAPI extensions run in the Application Pool worker process. The .NET
runtime also runs in this same process, so communication between the ISAPI
extension and the .NET runtime happens in-process, which is inherently more
efficient than the named pipe interface that IIS 5 must use. Although the IIS hosting
models are very different, the actual interfaces into managed code are very similar –
only the process of getting the request routed varies a bit.
The ISAPIRuntime.ProcessRequest() method is the first entry point into
ASP.NET
The worker processes ASPNET_WP.EXE (IIS 5) and W3WP.EXE (IIS 6) host the .NET
runtime, and the ISAPI DLL calls into a small set of unmanaged interfaces via low-level
COM that eventually forward calls to an instance of a subclass of the ISAPIRuntime
class. The first entry point to the runtime is the undocumented ISAPIRuntime class,
which exposes the IISAPIRuntime interface via COM to a caller. These COM interfaces
are low-level IUnknown based interfaces that are meant for internal calls from the
ISAPI extension into ASP.NET. Figure 3 shows the interface and call signatures for
the IISAPIRuntime interface as shown in Lutz Roeder's excellent .NET
Reflector tool (http://www.aisto.com/roeder/dotnet/). Reflector is an assembly viewer
and disassembler that makes it very easy to look at metadata and disassembled
code (in IL, C#, VB) as shown in Figure 3. It's a great way to explore the
bootstrapping process.
Figure 3 – If you want to dig into the low level interfaces, open up Reflector and
point it at the System.Web.Hosting namespace. The entry point to ASP.NET occurs
through a managed COM interface called from the ISAPI dll, which receives an
unmanaged pointer to the ISAPI ECB. The ECB provides access to the full ISAPI
interface to allow retrieving request data and sending results back to IIS.
The IISAPIRuntime interface acts as the interface point between the unmanaged
code coming from the ISAPI extension (directly in IIS 6, and indirectly via the Named
Pipe handler in IIS 5) and the managed ASP.NET runtime. If you take a look at this
class you'll find a ProcessRequest method with a signature like this:
[return: MarshalAs(UnmanagedType.I4)]
int ProcessRequest([In] IntPtr ecb,
                   [In, MarshalAs(UnmanagedType.I4)] int useProcessModel);
The ecb parameter is the ISAPI Extension Control Block (ECB) which is passed as an
unmanaged resource to ProcessRequest. The method then takes the ECB and uses it
as the base input and output interface used with the Request and Response objects.
An ISAPI ECB contains all low level request information including server variables, an
input stream for form variables as well as an output stream that is used to write data
back to the client. The single ecb reference basically provides access to all of the
functionality an ISAPI request has access to and ProcessRequest is the entry and exit
point where this resource initially makes contact with managed code.
The ISAPI extension runs requests asynchronously. In this mode the ISAPI extension
immediately returns on the calling worker process or IIS thread, but keeps the ECB
for the current request alive. The ECB then includes a mechanism for letting ISAPI
know when the request is complete (via ecb.ServerSupportFunction) which then
releases the ECB. This asynchronous processing releases the ISAPI worker thread
immediately, and offloads processing to a separate thread that is managed by
ASP.NET.
ASP.NET receives this ecb reference and uses it internally to retrieve information
about the current request such as server variables, POST data as well as returning
output back to the server. The ecb stays alive until the request finishes or times out
in IIS and ASP.NET continues to communicate with it until the request is done.
Output is written into the ISAPI output stream (ecb.WriteClient()) and when the
request is done, the ISAPI extension is notified of request completion to let it know
that the ECB can be freed. This implementation is very efficient as the .NET classes
essentially act as a fairly thin wrapper around the high performance, unmanaged
ISAPI ECB.
My best guess is that the worker process bootstraps the .NET runtime from within
the ISAPI extension on the first hit against an ASP.NET mapped extension. Once the
runtime exists, the unmanaged code can request an instance of an ISAPIRuntime
object for a given virtual path if one doesn’t exist yet. Each virtual directory gets its
own AppDomain and within that AppDomain the ISAPIRuntime exists from which the
bootstrapping process for an individual application starts. Instantiation appears to
occur over COM as the interface methods are exposed as COM callable methods.
Figure 4 – The transfer of the ISAPI request into the HTTP Pipeline of ASP.NET uses
a number of undocumented classes and interfaces and requires several factory
method calls. Each Web Application/Virtual directory runs in its own AppDomain, with
the caller holding a reference to an IISAPIRuntime interface that triggers the ASP.NET
request processing.
Back in the runtime
At this point we have an instance of ISAPIRuntime active and callable from the ISAPI
extension. Once the runtime is up and running the ISAPI code calls into
the ISAPIRuntime.ProcessRequest() method which is the real entry point into
the ASP.NET Pipeline. The flow from there is shown in Figure 4.
Listing 1: The ProcessRequest method receives an ISAPI ECB and passes it
on to the worker request
public int ProcessRequest(IntPtr ecb, int iWRType)
{
    HttpWorkerRequest request1 =
        ISAPIWorkerRequest.CreateWorkerRequest(ecb, iWRType);
    // ... (application path checks omitted) ...
    HttpRuntime.ProcessRequest(request1);
    return 0;
}
The actual code here is not important, and keep in mind that this is disassembled
internal framework code that you'll never deal with directly and that might change in
the future. It's meant to demonstrate what's happening behind the
scenes. ProcessRequest receives the unmanaged ECB reference and passes it on to
the ISAPIWorkerRequest object, which is in charge of creating the request context
for the current request.
In the case of IIS the abstraction is centered around an ISAPI ECB block. In our
request processing, ISAPIWorkerRequest hangs on to the ISAPI ECB and retrieves
data from it as needed – for example, the query string is retrieved through a thin
wrapper over the ECB (the pattern is sketched below, after the next paragraph).
ISAPIWorkerRequest implements high level wrapper methods that call into lower
level Core methods, which are responsible for performing the actual access to the
unmanaged APIs – the 'service level implementation'. The Core methods are
implemented in the specific ISAPIWorkerRequest subclasses and thus provide the
specific implementation for the environment the runtime is hosted in. This makes for
an easily pluggable environment where additional implementation classes can be
provided later as newer Web Server interfaces or other platforms are targeted by
ASP.NET. There's also a helper class, System.Web.UnsafeNativeMethods; many of
its methods operate on the ISAPI ECB structure, performing unmanaged calls into
the ISAPI extension.
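To make the wrapper/Core split a little more concrete, here is a self-contained sketch
of the pattern (the class and member names are illustrative, not the actual framework
names):
using System;
using System.Text;

// The abstract base exposes the high level wrapper method...
public abstract class WorkerRequestSketch
{
    public string GetQueryString()
    {
        StringBuilder buffer = new StringBuilder(256);
        // ...and delegates the real work to the environment-specific Core method
        GetQueryStringCore(buffer, buffer.Capacity);
        return buffer.ToString();
    }

    protected abstract int GetQueryStringCore(StringBuilder buffer, int size);
}

// A concrete subclass supplies the 'service level implementation'; in ASP.NET's
// case this is where the unmanaged ISAPI ECB is accessed via UnsafeNativeMethods
public class InProcWorkerRequestSketch : WorkerRequestSketch
{
    protected override int GetQueryStringCore(StringBuilder buffer, int size)
    {
        buffer.Append("id=42");   // stand-in for the unmanaged ECB call
        return buffer.Length;
    }
}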
The HttpContext object also contains a very useful Items collection that you can use
to store request-specific data. The context object gets created at the beginning
of the request cycle and released when the request finishes, so data stored in
the Items collection is specific only to the current request. A good example use is a
request logging mechanism where you want to track the start and end times of a
request by hooking the Application_BeginRequest and Application_EndRequest methods
in Global.asax, as shown in the abbreviated Listing 3 below. HttpContext is your
friend – you'll use it liberally if you need data in different parts of the request or
page processing.
Listing 3: Tracking request timing in Global.asax (abbreviated)
protected void Application_EndRequest(object sender, EventArgs e)
{
    // "WebLog_StartTime" is stored in Context.Items by Application_BeginRequest
    TimeSpan Span = DateTime.Now.Subtract(
        (DateTime)Context.Items["WebLog_StartTime"]);
    int MilliSecs = (int)Span.TotalMilliseconds;

    // do your logging
    WebRequestLog.Log(App.Configuration.ConnectionString, true, MilliSecs);
}
Once the Context has been set up, ASP.NET needs to route your incoming request to
the appropriate application/virtual directory by way of an HttpApplication object.
Every ASP.NET application must be set up as a Virtual (or Web Root) directory, and
each of these 'applications' is handled independently.
The pool of HttpApplication instances (more on this below) starts out with a smaller
number, usually one, and then grows as multiple simultaneous requests need to be
processed. The pool is monitored, so under load it may grow to its maximum number
of instances, and it is later scaled back to a smaller number as the load drops.
HttpApplication is the outer container for your specific Web application and it maps to
the class that is defined in Global.asax. It's the first entry point into the HTTP
runtime that you actually see on a regular basis in your applications. If you look
in Global.asax (or its code-behind class) you'll find that this class derives directly
from HttpApplication.
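For reference, a minimal Global.asax code-behind class looks like the following sketch
(the namespace and handler body are placeholders):
using System;
using System.Web;

namespace MyWebApp
{
    // The Global.asax class derives directly from HttpApplication
    public class Global : System.Web.HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // runs at the start of every request routed into this application
        }
    }
}
As a request travels through the pipeline, the HttpApplication object fires a
well-known sequence of events: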
• BeginRequest
• AuthenticateRequest
• AuthorizeRequest
• ResolveRequestCache
• AcquireRequestState
• PreRequestHandlerExecute
• …Handler Execution…
• PostRequestHandlerExecute
• ReleaseRequestState
• UpdateRequestCache
• EndRequest
Each of these events can also be handled in the Global.asax file via empty
methods that start with an Application_ prefix – for
example, Application_BeginRequest() and Application_AuthorizeRequest(). These
handlers are provided for convenience, since they are frequently used in applications,
and they save you from having to explicitly create the event handler delegates.
It's important to understand that each ASP.NET virtual application runs in its
own AppDomain, and that inside of that AppDomain multiple HttpApplication instances
run simultaneously, fed out of a pool that ASP.NET manages. This is so that multiple
requests can be processed at the same time without interfering with each other.
this.DomainId = AppDomain.CurrentDomain.FriendlyName;
This is part of a demo provided with the article's samples; the running form is shown
in Figure 5. To check this out, run two instances of a browser, hit this sample
page in both and watch the various Ids.
Figure 5 – You can easily check out how AppDomains, Application Pool instances,
and request threads interact with each other by running a couple of browser
instances simultaneously. When multiple requests fire you'll see the thread and
Application ids change, but the AppDomain id stays the same.
Threads are served from the .NET ThreadPool and by default are Multithreaded
Apartment (MTA) style threads. You can override this apartment state in ASP.NET
pages with the ASPCOMPAT="true" attribute in the @Page directive. ASPCOMPAT is
meant to provide COM components a safe environment to run in and ASPCOMPAT
uses special Single Threaded Apartment (STA) threads to service those requests.
STA threads are set aside and pooled separately as they require special handling.
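For reference, the directive usage is simply this (a minimal example):
<%@ Page Language="C#" AspCompat="true" %>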
The fact that these HttpApplication objects are all running in the same AppDomain is
very important. This is how ASP.NET can guarantee that changes to web.config or to
individual ASP.NET pages get recognized throughout the AppDomain. Making a
change to a value in web.config causes the AppDomain to be shut down and
restarted. This makes sure that all instances of HttpApplication see the changes,
because when the AppDomain reloads, the configuration is re-read at startup. Any
static references are also reloaded when the AppDomain restarts, so if the
application reads values from application configuration settings into static fields,
those values also get refreshed.
To see this in the sample, hit the ApplicationPoolsAndThreads.aspx page and note
the AppDomain Id. Then go in and make a change to web.config (add a space and
save), then reload the page. You'll find that a new AppDomain has been created.
In essence the Web Application/Virtual directory completely 'restarts' when this
happens. Any requests that are already in the pipeline will continue running through
the existing pipeline, while any new requests coming in are routed to the
new AppDomain. In order to deal with 'hung requests', ASP.NET forcefully shuts down
the old AppDomain after the request timeout period is up, even if requests are still
pending. So it's actually possible that two AppDomains exist for the
same HttpApplication at a given point in time, as the old one is shutting down and the
new one is ramping up. Both AppDomains continue to serve their clients until the old
one has run out its pending requests and shuts down, leaving just the
new AppDomain running.
Both HttpModules and HttpHandlers are loaded dynamically via entries in web.config
and attached to the event chain. HttpModules are actual event handlers that hook
specific HttpApplication events, while HttpHandlers are an end point that gets called
to handle 'application level request processing'.
Both modules and handlers are loaded and attached to the call chain as part of
the HttpApplication.Init() method call. Figure 6 shows the various events, when
they happen and which parts of the pipeline they affect.
Figure 6 – Events flowing through the ASP.NET HTTP Pipeline.
The HttpApplication object’s events drive requests through the pipeline. Http
Modules can intercept these events and override or enhance existing functionality.
Once the pipeline is started, HttpApplication starts firing events one by one as shown
in Figure 6. Each of the event handlers is fired and, if events are hooked up, those
handlers execute and perform their tasks. The main purpose of this process is to
eventually call the HttpHandler hooked up to a specific request. Handlers are the core
processing mechanism for ASP.NET requests and usually the place where any
application level code is executed. Remember that the ASP.NET Page and Web
Service frameworks are implemented as HttpHandlers, and that's where all the core
processing of the request is handled. Modules tend to be of a more core nature, used
to prepare or post-process the Context that is delivered to the handler. Typical
default modules in ASP.NET are Authentication and Caching for pre-processing and
various encoding mechanisms for post-processing.
HttpModules
As requests move through the pipeline a number of events fire on
the HttpApplication object. We've already seen that these events are published as
event methods in Global.asax. This approach is application specific though, which is
not always what you want. If you want to build generic HttpApplication event hooks
that can be plugged into any Web application you can use HttpModules, which are
reusable and don't require application-specific code except for an entry in web.config.
Modules are in essence filters – similar in functionality to ISAPI filters, but at the
ASP.NET request level. Modules allow hooking events for EVERY request that passes
through the ASP.NET HttpApplication object. These modules are implemented as
classes in external assemblies that are configured in web.config and loaded when the
application starts. By implementing specific interfaces and methods the module gets
hooked up to the HttpApplication event chain. Multiple HttpModules can hook the
same event, and event ordering is determined by the order they are declared in
web.config. Here's what a module definition looks like in web.config:
<configuration>
  <system.web>
    <httpModules>
      <add name="BasicAuthModule"
           type="HttpHandlers.BasicAuth,WebStore" />
    </httpModules>
  </system.web>
</configuration>
Note that you need to specify a full typename and an assembly name without the
DLL extension.
Modules allow you to look at each incoming Web request and perform an action based
on the events that fire. Modules are great for modifying request or response content,
for providing custom authentication, or for otherwise providing pre- or post-processing
of every request that occurs against ASP.NET in a particular application. Many
of ASP.NET's features, like the Authentication and Session engines, are implemented
as HttpModules.
While HttpModules feel similar to ISAPI filters in that they look at every request
that comes through an ASP.NET application, they are limited to looking at requests
mapped to a single specific ASP.NET application or virtual directory, and then only
at requests that are mapped to ASP.NET. Thus you can look at all ASPX pages
or any of the other custom extensions that are mapped to this application. You
cannot, however, look at standard .HTM or image files unless you explicitly map the
extension to the ASP.NET ISAPI dll by adding an extension as shown in Figure 1. A
common use for a module might be to filter content for JPG images in a special folder
and display a 'SAMPLE' overlay on top of every image by drawing on top of the
returned bitmap with GDI+.
Remember that your module has access to the HttpContext object and from there to all
the other intrinsic ASP.NET pipeline objects like Response and Request, so you can
retrieve input and so on. But keep in mind that certain things may not be available
until later in the chain.
You can hook multiple events in the Init() method, so a single module can manage
multiple, functionally different operations. However, it's probably cleaner to separate
differing logic into separate classes to make sure the module is modular. <g> In many
cases the functionality you implement may require that you hook multiple events –
for example, a logging filter might log the start time of a request in BeginRequest and
then write the request completion into the log in EndRequest, as sketched below.
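A minimal module along those lines might look like the following sketch (the class
name and the use of Trace as the log destination are illustrative; the module is wired
up with an httpModules entry like the one shown earlier):
using System;
using System.Diagnostics;
using System.Web;

// Illustrative request-timing module: hooks BeginRequest and EndRequest
public class RequestTimingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // attach to the HttpApplication events we care about
        app.BeginRequest += new EventHandler(OnBeginRequest);
        app.EndRequest += new EventHandler(OnEndRequest);
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        // stash the start time in the per-request Items collection
        context.Items["RequestStartTime"] = DateTime.Now;
    }

    private void OnEndRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        TimeSpan elapsed = DateTime.Now -
            (DateTime)context.Items["RequestStartTime"];
        Trace.WriteLine(context.Request.RawUrl + " took " +
            elapsed.TotalMilliseconds + " ms");
    }

    public void Dispose() { }
}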
HttpHandlers
Modules are fairly low level and fire against every inbound request to the ASP.NET
application. Http Handlers are more focused and operate on a specific request
mapping, usually a page extension that is mapped to the handler.
Http Handler implementations are very basic in their requirements, but through
access to the HttpContext object a lot of power is available. Http Handlers are
implemented through the very simple IHttpHandler interface (or its asynchronous
cousin, IHttpAsyncHandler), which consists of merely a single method
– ProcessRequest() – and a single property, IsReusable. The key
is ProcessRequest(), which gets passed an instance of the HttpContext object. This
single method is responsible for handling a Web request start to finish.
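For reference, the interface definition is just this:
public interface IHttpHandler
{
    // handles a Web request start to finish using the passed-in context
    void ProcessRequest(HttpContext context);

    // return true if the same handler instance can be reused for other requests
    bool IsReusable { get; }
}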
A single, simple method? Must be too simple, right? Well, it's a simple interface, but
not simplistic in what's possible! Remember that WebForms and WebServices are both
implemented as Http Handlers, so there's a lot of power wrapped up in this
seemingly simplistic interface. The key is the fact that by the time an Http Handler is
reached, all of ASP.NET's internal objects are set up and configured to start
processing the request. Central among them is the HttpContext object, which provides
all of the relevant request functionality to retrieve input and send output back to
the Web Server.
For an HTTP Handler all action occurs through this single call to ProcessRequest().
The implementation can range from something as simple as writing a hard-coded
string into the Response, all the way up to a full implementation like the WebForms
Page engine that can render complex forms from HTML templates. The point is that
it's up to you to decide what you want to do with this simple, but powerful interface!
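At the trivial end of that range, a complete handler needs only a few lines (a minimal
sketch; the class name is illustrative):
using System.Web;

public class HelloWorldHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // everything the handler needs is reachable through the context
        context.Response.Write("Hello World");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}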
Because the Context object is available to you, you get access to the Request,
Response, Session and Cache objects, so you have all the key features of an
ASP.NET request at your disposal to figure out what users submitted and return
content you generate back to the client. Remember the Context object – it’s your
friend throughout the lifetime of an ASP.NET request!
The key operation of the handler is to eventually write output into the Response
object, or more specifically into the Response object's OutputStream. This output is
what actually gets sent back to the client. Behind the scenes
the ISAPIWorkerRequest manages sending the OutputStream back into the
ISAPI ecb.WriteClient method, which actually performs the IIS output generation.
Figure 7 – The ASP.NET Request pipeline flows requests through a set of event
interfaces that provide much flexibility. The Application acts as the hosting container
that loads up the Web application and fires events as requests come in and pass
through the pipeline. Each request follows a common path through the Http Filters
and Modules configured. Filters can examine each request going through the pipeline
and Handlers allow implementation of application logic or application level interfaces
like Web Forms and Web Services. To provide Input and Output for the application
the Context object provides request specific information throughout the entire
process.
WebForms implements an Http Handler with a much more high level interface on top
of this very basic framework, but eventually a WebForm's Render() method simply
ends up using an HtmlTextWriter object to write its final output to
the context.Response.OutputStream. So while very fancy, ultimately even a high
level tool like Web Forms is just a high level abstraction on top of the Request and
Response objects.
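You can see this relationship by overriding Render() on a page; whatever you write to
the HtmlTextWriter ends up in the Response output (a trivial sketch, with an
illustrative class name):
using System.Web.UI;

public class HelloPage : System.Web.UI.Page
{
    protected override void Render(HtmlTextWriter writer)
    {
        // this text flows into Response.OutputStream and back out through IIS
        writer.Write("<html><body>Hello from Render()</body></html>");
    }
}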
You might wonder at this point whether you need to deal with Http Handlers at all.
After all, WebForms provides an easily accessible Http Handler implementation, so
why bother with something much lower level and give up all of that functionality?
WebForms are great for generating complex HTML pages and business level logic
that requires graphical layout tools and template backed pages. But
the WebForms engine performs a lot of tasks that are overhead intensive. If all you
want to do is read a file from the system and return it back through code it’s much
more efficient to bypass the Web Forms Page framework and directly feed the file
back. If you do things like image serving from a database there's no need to go into
the Page framework – you don't need templates and there surely is no Web UI that
requires you to capture events for an image being served.
There’s no reason to set up a page object and session and hook up Page level events
– all of that stuff requires execution of code that has nothing to do with your task at
hand.
So handlers are more efficient. Handlers can also do things that aren't possible
with WebForms, such as processing requests that don't have a physical file on
disk, which is known as a virtual Url. To do this, make sure you turn off the
'Check that file exists' checkbox in the Application Extension dialog shown in
Figure 1.
This is common for content providers such as dynamic image processing, XML
servers, URL redirectors providing vanity Urls, download managers and the like,
none of which would benefit from the WebForms engine.
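Wiring up such a handler is again done in web.config; for example (the extension and
type name are illustrative, and the extension must also be mapped to aspnet_isapi.dll
in IIS as shown in Figure 1):
<configuration>
  <system.web>
    <httpHandlers>
      <add verb="*" path="*.img" type="MyApp.ImageHandler, MyApp" />
    </httpHandlers>
  </system.web>
</configuration>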
Before I'm done, let's do a quick review of the event sequence discussed in this
article, from IIS to handler:
• IIS receives the request and maps the extension to aspnet_isapi.dll
• The ISAPI extension forwards the request to the worker process (aspnet_wp.exe on IIS 5, w3wp.exe on IIS 6) hosting the .NET runtime
• ISAPIRuntime.ProcessRequest() is called with the ISAPI ECB and wraps it in an ISAPIWorkerRequest
• The HTTP Runtime sets up the HttpContext and picks an HttpApplication instance from the pool for the target application's AppDomain
• HttpApplication fires the pipeline events, which HttpModules can hook
• The HttpHandler mapped to the request executes and writes its output into the Response
• The output flows back through the worker request and ecb.WriteClient() to IIS and on to the client
ASP.NET has proven to be a worthwhile investment of programmers' time for building
applications, providing a very rich Framework Class Library (FCL) along with performance
tuning, security and manageability features. It is often said and written that ASP.NET
follows a compiled execution model.
Developers sometimes wonder whether writing the code inside the code-behind file is
faster, or are simply unaware of how the compilation of the code happens. This article
takes a brief look at some of the internals of the ASP.NET compiled page rendering and
execution model, along with some related concepts, to clear up such doubts.
When the code (either C# or VB.NET) is embedded in the ASPX page itself, the ASP.NET
runtime automatically compiles the code into an assembly and loads it. If the code is
kept in a separate source file, either as a VB or C# file, it has to be compiled by the
programmer into an assembly, which is then used by the runtime for execution.
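A page with the code embedded directly in the ASPX file looks roughly like this minimal
sketch:
<%@ Page Language="C#" %>
<script runat="server">
    void Page_Load(object sender, EventArgs e)
    {
        // compiled automatically by the ASP.NET runtime on the first request
        Response.Write("Served at: " + DateTime.Now);
    }
</script>
<html>
  <body>
    <h1>Inline code page</h1>
  </body>
</html>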
When aspnet_wp.exe gets a request for an aspx page that is written without a
code-behind class file, it dynamically generates a class file for the corresponding
page and places it inside the Temporary ASP.NET Files folder under the .NET
installation path. It then compiles this into a DLL and deletes the generated class
file after successful compilation. Any subsequent requests are rendered from the
same binary DLL. If the runtime sees a change in the time stamp of the .aspx file, it
recognizes that something has changed and recompiles the page for further use.
So ultimately the compilation happens only once, and all subsequent requests are
served using the compiled code/DLL.
When ASP.NET is installed, the installation process creates an association between
.aspx files and aspnet_isapi.dll. When IIS receives a request from a client or web
browser for an aspx page, the web server hands the request over to aspnet_isapi.dll,
which in turn hands it to the aspnet_wp.exe worker process. aspnet_wp.exe finishes
any outstanding work, such as the run-time compilation explained above, and then
executes the ASP.NET application in its application domain. Finally the output page is
generated and returned to the web server, which in turn sends it to the client.
Conclusion:
The above is a very robust model for executing an application, supporting both
code-behind files and code embedded inside aspx pages. Ultimately, wherever the code
is written, be it inside a code-behind file or inside the aspx page itself, it executes as
compiled code, so there is no performance difference between the two approaches.
Writing the code in a separate class file, however, makes for a cleaner coding
approach, with a separation of the UI from the logic.