ASP.NET programmers have to keep certain rules in mind when developing high-performance ASP.NET applications or optimizing an existing ASP.NET website. A lot of information is available on this subject. In this post I’ll share some valuable articles about ASP.NET performance that I frequently pass on to customers so they can improve their ASP.NET web applications, and I’ll continue to update this post when I find something new.
ASP.NET performance: 9 tips for improving your .NET website performance – learn how to fix the performance killers in your application
One important aspect of performance in programming (not only in .NET…) is memory: memory allocation (memory addressing) and memory usage.
The Dangers of the Large Object Heap
Whenever I need to explain why a customer’s website uses a lot of memory, I find this one of the best information resources available.
Usually, .NET developers don’t need to think too much about how their objects are being laid out in physical memory: after all, .NET has a garbage collector and so is capable of automatically removing ‘dead’ objects and rearranging memory so that the survivors are packed in the most efficient manner possible. The garbage collector has limits to what it can do, however; and when these limits are reached, then the runtime can exhaust memory in a way that is surprising and confusing to any developer who is not aware of how .NET chooses to lay out objects in memory.
.NET manages memory in four different regions, known as heaps. You can think of each of these as continuous regions of memory, though in practice .NET can create several fragmented regions for each heap. Three of the heaps, called the generation 0, 1 and 2 heaps are reserved for small objects: In current versions of .NET ‘small’ means objects that are 85,000 bytes or less. Any object is assigned to one of these generations according to the number of garbage collections it has survived, the veterans ending in generation 2. .NET knows that younger objects are more likely to be short lived and can reduce the performance cost of garbage collections by initially only looking at the recently allocated objects on generation 0. Perhaps more importantly, it can also move the survivors of a collection around so that there are no gaps, ensuring that the free space available for new objects is always together in one large lump. This helps with performance – .NET never needs to search for a hole big enough for a new object, unlike unmanaged code: if there’s enough memory available it’s always in the same place. When it needs to compact a series of objects, .NET actually copies all of them to a new block of memory rather than moving them in place; this improves performance by simplifying how objects are allocated. In these small heaps this means that the free space is always at the end, so there is never any need to scan elsewhere for a ‘hole’ big enough to store a new object.
Another great article and explanation is Large Object Heap Uncovered (an old, recovered, MSDN article), which explains the LOH’s inner workings.
The .NET Garbage Collector divides objects up into small and large objects. When an object is large some attributes associated with it become more significant than if the object is small. For instance, compacting it, meaning copying the memory elsewhere on the heap, is expensive. In this article we are going to look at the large object heap in depth. We will talk about what qualifies an object as a large object, how these large objects are collected and what kind of performance implications large objects impose.
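The small-object threshold described above is easy to observe yourself. A minimal sketch (the array sizes are arbitrary; it relies on the documented behavior that LOH objects are reported as generation 2):

```csharp
using System;

class LohDemo
{
    static void Main()
    {
        // A small, freshly allocated object starts life in generation 0.
        var small = new byte[1000];
        Console.WriteLine(GC.GetGeneration(small));   // 0

        // Arrays of roughly 85,000 bytes or more go straight to the
        // Large Object Heap, which GC.GetGeneration reports as generation 2.
        var large = new byte[100_000];
        Console.WriteLine(GC.GetGeneration(large));   // 2
    }
}
```

This also explains why large objects are collected only during the (expensive) generation 2 collections the article describes.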
Back to Basics: Dynamic Image Generation, ASP.NET Controllers, Routing, IHttpHandlers, and runAllManagedModulesForAllRequests
Another important aspect of performance is what type of content you push through the .NET pipeline. Setting a wildcard ASP.NET scriptmapping (back in the good old IIS 6.0 days), or runAllManagedModulesForAllRequests on IIS 7+, pushes everything through the ASP.NET ISAPI, even generated images or documents. You can imagine this slows down the .NET process and increases memory usage. Understanding why requires a basic understanding of the .NET process and request pipeline.
Scott Hanselman, a Microsoft programmer and author of several ASP.NET books, wrote an excellent article on, essentially, how not to push your images and documents through the whole request pipeline.
Like he says, the article is long but full of info. Read it all.
Often folks want to dynamically generate stuff with ASP.NET. They want to dynamically generate PDFs, GIFs, PNGs, CSVs, and lots more. It’s easy to do this, but there are a few things to be aware of if you want to keep things as simple and scalable as possible.
You need to think about the whole pipeline as any HTTP request comes in. The goal is to have just the minimum number of things run to do the job effectively and securely, but you also need to think about “who sees the URL and when.”
Modules can see any request if they are plugged into the pipeline. There are native modules written in C++ and managed modules written in .NET. Managed modules are run anytime a URL ends up being processed by ASP.NET or if “RAMMFAR” is turned on.
RAMMFAR means “runAllManagedModulesForAllRequests” and refers to this optional setting in your web.config.
You want to avoid having this option turned on if your configuration and architecture can handle it. This does exactly what it says. All managed modules will run for all requests. That means *.* folks. PNGs, PDFs, everything including static files ends up getting seen by ASP.NET and the full pipeline. If you can let IIS handle a request before ASP.NET sees it, that’s better.
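To check this in your own application, look for the flag in web.config; a hypothetical fragment (the surrounding elements are the standard IIS 7+ configuration sections):

```xml
<!-- Hypothetical web.config fragment. With this flag set to "true",
     every request, including static files like PNGs and PDFs, runs
     through the full managed pipeline. Leave it "false" (or remove it)
     so IIS can serve static content before ASP.NET ever sees it. -->
<configuration>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="false" />
  </system.webServer>
</configuration>
```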
Read on at Scott Hanselman’s blog
.NET Baby Steps: Part VII – Caching
Caching is the art of saving information in-process (mostly in memory) for later use. The website can then reuse the cached information without performing the same operation again. This saves a lot of computing time and makes the information available faster.
On the other hand, one must think about what information should be cached and what shouldn’t. For instance, you don’t want information that was aggressively cached for logged-in users to become available to users who are not logged in, just because your caching logic or policy is wrong.
New in .NET 4.0 is the System.Runtime.Caching namespace and Robert MacLean has a nice article about it.
.NET has had one out of the box way to do caching in the past, System.Web.Caching. While a good system it suffered from two issues. Firstly it was not extensible, so if you wanted to cache to disk or SQL or anywhere other than memory you were out of luck and secondly it was part of ASP.NET and while you could use it in WinForms it took a bit of juggling.
The patterns & practices team saw these issues and provided a caching application block in their Enterprise Library, which has been used by everyone who did not want to re-invent the wheel. Thankfully, from .NET 4 there is a caching system included in the framework which solves those two issues. This is known as System.Runtime.Caching.
You can read on about how to use System.Runtime.Caching on http://www.sadev.co.za/content/net-baby-steps-part-vii-caching. Be sure to check out the other baby steps posts.
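A minimal sketch of the new API, using MemoryCache.Default with an absolute expiration policy (the key, value, and timeout are made up for illustration):

```csharp
using System;
using System.Runtime.Caching;

class CacheDemo
{
    static void Main()
    {
        // MemoryCache.Default is the in-memory cache shared by the process.
        ObjectCache cache = MemoryCache.Default;

        // Evict the entry five minutes after it was added.
        var policy = new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5)
        };
        cache.Set("greeting", "Hello from the cache", policy);

        // Later reads hit the cache instead of recomputing the value.
        var value = cache.Get("greeting") as string;
        Console.WriteLine(value);   // Hello from the cache
    }
}
```

Because ObjectCache is an abstract base class, the same code works against any custom cache provider, which addresses the extensibility problem of System.Web.Caching.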
A feature of the global.asax file is to fire an event upon the end of the session or application, for example when a visitor leaves your website or when the application pool is recycled. Unfortunately, you never know for sure whether the event fired. This might keep objects and variables alive, filling up important memory space. One example of this behaviour is:
Session_OnEnd or Session_End events in global.asax won’t fire if you store ASP.NET sessions out of proc (in State Server or SQL Server)
This is an overlooked behavior which may break your ASP.NET application if you are using the Session_OnEnd or Session_End events in Global.asax. Here is a snippet from the related MSDN article:
The Session_OnEnd event is only supported when the session-state HttpSessionState.Mode property value is InProc, which is the default. If the session-state Mode is set to StateServer or SQLServer, then the Session_OnEnd event in the Global.asax file is ignored. If the session state Mode property value is Custom, then support for the Session_OnEnd event is determined by the custom session-state store provider.
So, please pay attention to this change if you are planning to move your sessions from InProc to State Server or SQL Server. If you have code in Session_End or Session_OnEnd methods in Global.asax then you may need to find an alternative way to call them as these methods will be ignored after moving sessions to OutProc.
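In code, this means a cleanup handler like the following (a hypothetical Global.asax.cs fragment) silently stops running once sessions move out of process:

```csharp
// Hypothetical Global.asax.cs fragment. Session_End only fires when
// sessionState mode="InProc"; with StateServer or SQLServer it is ignored.
public class Global : System.Web.HttpApplication
{
    protected void Session_End(object sender, System.EventArgs e)
    {
        // Cleanup placed here will silently never run with out-of-proc
        // session state. If you move sessions out of process, do the
        // cleanup explicitly instead, e.g. in a scheduled job or when
        // the user signs out.
    }
}
```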
Other examples exist.
Use with care!
Fix the 3 silent performance killers for IIS / ASP.NET apps
Mike Volodarsky, lead developer for LeanSentry and a former Microsoft programmer for IIS 7.0 and ASP.NET products, writes about fixing the three silent performance killers for IIS and ASP.NET apps.
If you could double your IIS/ASP.NET application performance by making just a few small tweaks, would you do it?
Of course you would!
The three performance killers he outlines are:
- Handled exceptions & Response.Redirect
- LINQ to SQL & non-compiled queries
- Memory allocation & “% Time In GC”
Watching for and fixing these 3 low-hanging issues could make a big difference in the performance of your ASP.NET application, with a minimal amount of work.
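For the first of the three, the usual fix is to stop Response.Redirect from throwing an exception on every redirect. A sketch as it might appear inside a Web Forms page or handler (the URL is made up):

```csharp
// The default overload, Response.Redirect(url), calls Response.End(),
// which throws a ThreadAbortException for every redirect. Even handled
// exceptions are expensive when they happen on every request.
// Passing endResponse: false skips the exception; CompleteRequest()
// then tells the runtime to bypass the rest of the pipeline.
Response.Redirect("/login.aspx", endResponse: false);
Context.ApplicationInstance.CompleteRequest();
```

The trade-off is that code after the redirect continues to run, so you may need an explicit return afterwards.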
Tips to improve ASP.NET application performance
20 Tips to Improve ASP.net Application Performance
Red Gate’s 25 Secrets for Faster ASP.NET: the Eagle has landed! whitepaper
Red Gate, known for its ANTS .NET Performance Profiler, created a free whitepaper, 25 Secrets for Faster ASP.NET Applications. It’s the follow-up to the wildly successful 50 Ways to Avoid, Find and Fix ASP.NET Performance Issues, released back in January (you can download it from www.red-gate.com/50ways).
Once again, we collected tips from some of the smartest brains in the ASP.NET community, but this time around, we’ve covered the latest stuff in the .NET framework – async/await, Web API, and more.
You can grab it from http://www.red-gate.com/25secrets.
Understanding and troubleshooting unmanaged memory usage in .NET
Writing in C# every day, we forget that we are in a privileged world. Underneath the abstraction of the virtual machine lies a batch of C++ code that is handling memory in the old fashioned way. Blocks of memory are allocated by asking a heap manager for a chunk of memory – you get a pointer to it and you can do exactly what you want with that memory. There’s no associated type controlling your access to the memory and you’re free to do what you like with it. Unfortunately that also means that you can write outside its bounds, or over any header that the heap manager has associated with the block. You can free the block and continue to use it too. All of these problems can lead to spectacular crashes.
Over time, patterns have been developed to handle some of these issues. C++ programs for example often encapsulate memory allocation using the RAII pattern, where blocks of memory are allocated for a particular lexical scope within the program. When the scope is exited, the destructor on a stack allocated object can ensure that the memory is released, and the object’s API can ensure that the programmer does not get unrestrained access to the raw memory itself.
But that’s a different story.
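As a .NET developer you mostly meet that unmanaged world through interop. A minimal sketch showing why such allocations are invisible to the garbage collector (the block size is arbitrary):

```csharp
using System;
using System.Runtime.InteropServices;

class UnmanagedDemo
{
    static void Main()
    {
        // Allocate 1 MB outside the managed heap. The GC neither tracks
        // nor frees this block, so it never shows up in the gen 0/1/2 or
        // LOH sizes, only in the process's total memory usage.
        IntPtr block = Marshal.AllocHGlobal(1024 * 1024);
        try
        {
            // ... use the memory, e.g. via Marshal.Copy or unsafe code ...
        }
        finally
        {
            // Forgetting this line is a classic source of "invisible"
            // memory growth in .NET processes.
            Marshal.FreeHGlobal(block);
        }
    }
}
```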
ASP.NET Performance, Troubleshooting, and Debugging
Of course, Microsoft has documentation on ASP.NET performance, troubleshooting and debugging. Some of the subjects covered are:
- ASP.NET Performance Overview
- ASP.NET Tracing Overview
- ASP.NET Health Monitoring Overview
- ASP.NET Troubleshooting and Debugging
One major subject in performance is, as mentioned earlier in this article, caching.
You’ll find the documentation here:
ASP.NET Performance, Troubleshooting, and Debugging and specific for caching: ASP.NET Caching.