Common .NET Performance Myths

Myth #1: Natively compiled code runs much faster than JIT compiled code

Just-in-Time (JIT) compilation means that .NET compiles code from MSIL to machine code at runtime. The first time you call a function there is a small penalty while it compiles, but after that the function is already in machine code and executes as fast as natively compiled code. While compiling at runtime may sound wasteful, the JIT time is typically small. The advantage is that, because the code is compiled to machine code at runtime, the framework can further optimize it for the particular CPU the computer is using, so programmers do not have to write different versions of the same code for different CPUs. You can also avoid the initial penalty by using NGen, which pre-compiles all the functions in a program ahead of time.

Myth #2: C# is faster than VB.NET

This is a common myth, but it is not true. The only real difference between the languages is syntax: both C# and VB.NET compile to essentially identical MSIL, so at runtime they perform the same. However, VB.NET includes some legacy features to help developers transition from VB6 and earlier. Using these legacy methods for things like file I/O and error handling is slower than using the equivalent .NET methods. They are never required and you should avoid them; it is always possible to write the code the faster way in VB.NET, which produces MSIL equivalent to what the C# compiler emits.

Myth #3: Garbage Collection is slower than Manual Memory Management

The garbage collector does introduce some overhead, but the design of garbage collection in .NET makes memory allocation very fast, faster than in C++ in many cases; the only performance hit comes during the collections themselves. If you keep garbage collection in mind while writing the application, you can avoid expensive collections and create very fast applications. The garbage collector is expected to become even more advanced in future versions of .NET, and it could eventually outperform manual memory management.

Myth #4: If it is Easy, it is Fast

One of the .NET Framework’s biggest advantages is how easy it is to write code for, which leads to increased productivity. The libraries included with .NET provide a wealth of functionality that can easily be used in your applications. Unfortunately, many programmers new to .NET assume that if code is simple and easy to write, it will also be fast. They then tend to do things the easy way rather than investigating the performance of several alternative approaches. Just because it is easy to store large amounts of data in an XML document does not mean it will be fast.

Myth #5: Premature Optimization is the Root of All Evil

This is a popular saying in computer science, and it is not without merit. Compilers are very sophisticated and can perform many low-level optimizations; it is often a complete waste of time for a programmer to hand-tune a low-level function for a slight gain when the compiler will achieve the same result. Unfortunately, many programmers, not just .NET programmers, use this saying to justify not considering performance at all until their application is undesirably slow. You should think about performance from the beginning, when you are designing the application, and you should be taking measurements constantly. Do not wait until your code is slow before taking the steps to make it fast.

Performance Best Practices and Tips

Don’t Guess, Measure

You cannot judge how fast your code is unless you measure it. If you care about performance, you should be measuring your applications regularly. This is not something you do only when the program is nearly finished; it should be incorporated into prototypes and carried all the way through to the finished product. However, don't get so caught up in performance that you neglect functionality. Making sure that your product works is as important as making sure that it works quickly.
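As a simple illustration, here is a minimal C# sketch that times a code path with System.Diagnostics.Stopwatch. The ProcessOrders method and the iteration count are placeholders invented for this example, not part of any real API.

    using System;
    using System.Diagnostics;

    class TimingExample
    {
        static void Main()
        {
            // Warm-up call so JIT compilation does not skew the measurement.
            ProcessOrders();

            Stopwatch timer = Stopwatch.StartNew();
            for (int i = 0; i < 1000; i++)
            {
                ProcessOrders();
            }
            timer.Stop();

            Console.WriteLine("Average: {0} ms per call",
                              timer.Elapsed.TotalMilliseconds / 1000);
        }

        // Stand-in for whatever code you actually want to measure.
        static void ProcessOrders()
        {
            // ... real work goes here ...
        }
    }

Averaging over many iterations smooths out timer resolution and one-off costs such as the initial JIT compile.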

Algorithms, Not Bytes

In the past, it was often worthwhile for a programmer to recode an important loop or function in assembly language, so as to have maximum control over execution and memory usage. However, compiler technology and computer hardware have developed to the point that the time required to optimize a program by hand and the resulting improvement in performance are no longer worth the effort. The .NET compiler and the CLR's JIT compilation make that kind of nitty-gritty optimization unnecessary and potentially detrimental in 999,999 out of 1,000,000 cases. Compilers are smart enough to unroll loops, pack memory, and keep bits organized, and they can make and implement in seconds those optimizations that might take a human several hours.

What no compiler can do, however, is optimize an algorithm. This is where the most effective optimizations can be made by the programmer. Restructuring loops, analyzing object lifetime, and designing a more efficient way to solve a particular problem are much better uses of your programming time.

In short, optimizing and tuning your algorithm and associated data structures is where you should focus your time and effort when dealing with performance.
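For example, replacing a linear search with a hash-based lookup is exactly the kind of algorithmic change no compiler will make for you. A rough sketch (the collection size and the use of Dictionary as a set are arbitrary choices for illustration):

    using System;
    using System.Collections.Generic;

    class AlgorithmExample
    {
        static void Main()
        {
            List<int> ids = new List<int>();
            for (int i = 0; i < 100000; i++)
            {
                ids.Add(i);
            }

            // O(n) per lookup: List.Contains walks the list element by element.
            bool foundSlow = ids.Contains(99999);

            // O(1) per lookup on average: build a hash-based index once, then probe it.
            Dictionary<int, bool> index = new Dictionary<int, bool>();
            foreach (int id in ids)
            {
                index[id] = true;
            }
            bool foundFast = index.ContainsKey(99999);

            Console.WriteLine("{0} {1}", foundSlow, foundFast);
        }
    }

For a single lookup the difference is irrelevant, but repeated lookups against a large collection are where the algorithmic change pays off.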

Manage Object Lifetimes

Garbage collection does not entirely free you from managing the lifetimes of your objects. If an object is still referenced anywhere in your code, the garbage collector will assume it is in use and will not reclaim its memory. Once an object goes out of scope (such as when a function exits) or has its reference removed (for example, by setting it to Nothing in VB.NET or null in C#), it can be collected.

The memory management model of .NET is quite different from the manual memory management required in languages like C++, and it is best to set aside what you know about destructors and memory management when learning how the .NET garbage collector works.

Creating and destroying objects very quickly can cause heap fragmentation, and the resulting work of compacting memory and updating object references can be a fairly intensive operation. However, the garbage collector only runs when the heap runs low on memory or when the application manually initiates a collection (which you really shouldn't do anyway).

The garbage collector organizes objects into three generations, which roughly correspond to the length of time an object has been alive. Organizing the heap in this manner allows the garbage collector to compact short-lived objects and run collections on portions of the heap, reducing execution time. What this means is that you should keep objects alive only as long as they are needed, as promoting objects to older generations can make collections slower.
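A minimal sketch of the idea follows; the class, field, and method names are invented for illustration. A reference held in a long-lived field keeps an object alive (and lets it be promoted to older generations), while an object reachable only through a local variable becomes collectible as soon as the method returns or the reference is cleared.

    using System;

    class LifetimeExample
    {
        // A long-lived static field keeps whatever it references alive
        // for as long as the field itself holds the reference.
        static byte[] cachedBuffer;

        static void Main()
        {
            cachedBuffer = new byte[1024 * 1024];

            // The buffer created inside this call is reachable only through
            // a local variable, so it becomes eligible for collection as
            // soon as the method returns.
            UseTemporaryBuffer();

            // Clearing the field makes the cached buffer collectible too.
            cachedBuffer = null;

            Console.WriteLine("References released");
        }

        static void UseTemporaryBuffer()
        {
            byte[] temp = new byte[1024 * 1024];
            temp[0] = 1; // ... use the buffer ...
        }
    }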

Memory is Precious

Be careful about allocating huge numbers of objects. Remember that every object that is created requires new memory and construction, which may require any number of function calls to complete. When possible, combine smaller data structures into larger ones to lower the number of objects you have to create. Remember also that accessing reference types often requires a heap lookup, which can be more detrimental to performance than you might think.

This is less of an issue with value types (structs), as they are allocated on the stack instead of the heap, but you should still keep things manageable.
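One hedged example of cutting down allocations: a buffer can be allocated once and reused rather than re-created on every iteration. The sizes, the loop count, and the FillBuffer helper below are all made up for illustration.

    using System;

    class AllocationExample
    {
        static void Main()
        {
            // Wasteful: a new 64 KB array is created on every iteration,
            // leaving thousands of short-lived objects for the collector.
            for (int i = 0; i < 10000; i++)
            {
                byte[] buffer = new byte[64 * 1024];
                FillBuffer(buffer);
            }

            // Better: allocate once and reuse the same buffer.
            byte[] shared = new byte[64 * 1024];
            for (int i = 0; i < 10000; i++)
            {
                FillBuffer(shared);
            }

            Console.WriteLine("Done");
        }

        // Stand-in for whatever work actually fills the buffer.
        static void FillBuffer(byte[] buffer)
        {
            buffer[0] = 42;
        }
    }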

Turn On Option Strict and Option Explicit

If you can, turn on Option Strict and Option Explicit. These options make all variables strongly typed: you must declare every variable and perform type conversions explicitly, instead of relying on the runtime to convert types for you. This can give you a performance increase, since the CLR will not be doing those conversions automatically at runtime. The downside is that you will have to write a little more code, but the benefits are more than worth it. Another positive aspect of strongly typed code is that the extra type checking at compile time should lead to more stable and reliable applications.

(Note: Option Strict and Option Explicit apply only to VB.NET. C# is always strongly typed.)

Use a Release Build with Optimizations Turned On

Release builds have a number of optimizations that are not present in Debug builds. You should primarily test performance on a Release build that was created with appropriate compiler optimizations.

Exceptions Should Be Exceptional

Some programmers use exceptions to manage the flow of execution in an application. This is bad practice and can result in less readable, less stable, and lower performing code. Exceptions, as a rule, should only be used in the case of errors that cannot be handled in the proper context.
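For instance, using exceptions to detect bad input in a loop is far more expensive than using a test that cannot throw. A small C# sketch comparing the two approaches (the sample input array is invented):

    using System;

    class ExceptionExample
    {
        static void Main()
        {
            string[] inputs = { "10", "oops", "42" };

            foreach (string s in inputs)
            {
                // Avoid: wrapping int.Parse in try/catch and treating the
                // FormatException as ordinary flow control.

                // Prefer: a test that reports failure without throwing.
                int value;
                if (int.TryParse(s, out value))
                {
                    Console.WriteLine("Parsed {0}", value);
                }
                else
                {
                    Console.WriteLine("Skipping invalid input: {0}", s);
                }
            }
        }
    }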

Use the StringBuilder Class

When you are building a large string, it is common to use the & operator (in VB.NET) or the + operator (in C#) to concatenate pieces together. This is very easy, but it means the program allocates an additional string for each concatenation. The compiler can optimize many of these allocations away when the concatenations occur in a single expression, but repeated concatenation can still cause significant overhead. Instead of creating a new string for every concatenation, use a StringBuilder when doing a lot of string concatenation. This saves a great deal of unnecessary string allocation and copying.

This scenario is a perfect example of unintended excessive allocations that can go unnoticed if testing is not thorough enough.
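A minimal C# sketch of the difference (the loop count is arbitrary):

    using System;
    using System.Text;

    class StringBuilderExample
    {
        static void Main()
        {
            // Each += allocates a brand new string and copies the old contents.
            string slow = "";
            for (int i = 0; i < 10000; i++)
            {
                slow += i.ToString() + ",";
            }

            // StringBuilder appends into an internal buffer and allocates the
            // final string only once, when ToString() is called.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 10000; i++)
            {
                sb.Append(i).Append(',');
            }
            string fast = sb.ToString();

            Console.WriteLine("{0} {1}", slow.Length, fast.Length);
        }
    }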

Avoid Legacy VB Features

As mentioned earlier, several legacy VB features are still available in VB.NET. You should avoid them if you want to maximize performance.

On Error Goto is Visual Basic's old way of handling errors. Using it can make your code run significantly slower; in .NET, you should use a Try…Catch block to handle errors instead. Similarly, Visual Basic 6 and earlier used Get and Put for file I/O, but that approach is significantly slower than using the System.IO.FileStream class.

Use For Loops

While the code for a For…Each loop may look cleaner than a normal For…Next loop, For…Each can introduce overhead from the extra function calls the enumerator requires. In particular, you should avoid nesting For…Each loops. For…Each loops are still useful in some situations, however, so don't remove them from your toolbox entirely. As always, be sure to measure the performance.
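A small C# sketch of the two loop styles over the same list (the list size is arbitrary):

    using System;
    using System.Collections.Generic;

    class LoopExample
    {
        static void Main()
        {
            List<int> numbers = new List<int>();
            for (int i = 0; i < 1000; i++)
            {
                numbers.Add(i);
            }

            // foreach: concise, but goes through an enumerator.
            long sumForEach = 0;
            foreach (int n in numbers)
            {
                sumForEach += n;
            }

            // Plain for loop: indexes directly into the list.
            long sumFor = 0;
            for (int i = 0; i < numbers.Count; i++)
            {
                sumFor += numbers[i];
            }

            Console.WriteLine("{0} {1}", sumForEach, sumFor);
        }
    }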

Use Jagged Arrays

Rectangular (multidimensional) arrays are often not as fast as jagged arrays in the .NET Framework, because the framework is highly optimized for one-dimensional arrays. This means jagged arrays (arrays of arrays) will sometimes outperform rectangular arrays. If every cell in the array is used, a rectangular array is probably the right choice, but if only part of any dimension is used, jagged arrays save space and benefit from some special-case optimizations in the CLR.
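A quick C# sketch of the two declarations (the dimensions are arbitrary):

    using System;

    class ArrayExample
    {
        static void Main()
        {
            // Rectangular array: one block of memory, accessed as [row, col].
            int[,] rectangular = new int[100, 100];
            rectangular[10, 20] = 1;

            // Jagged array: an array of independent one-dimensional arrays.
            // Each row is a normal single-dimension array, and rows can be
            // sized individually, so unused space is never allocated.
            int[][] jagged = new int[100][];
            for (int row = 0; row < 100; row++)
            {
                jagged[row] = new int[row + 1]; // only as long as needed
            }
            jagged[10][5] = 1;

            Console.WriteLine("{0} {1}", rectangular[10, 20], jagged[10][5]);
        }
    }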

Use Structures

In many cases, value types (structs) are faster and less memory-intensive than reference types (classes). Value types are allocated on the stack instead of the heap, so they do not require the reference lookups that heap objects need and are not subject to garbage collection. Reference types also carry object overhead (around 12 bytes) and metadata that allow the runtime and garbage collector to manage them. However, when value types are used in situations where they must be treated as reference types (for example, when stored in non-generic collections or otherwise cast to Object), they must be boxed and unboxed, which are time- and memory-intensive operations.

Value types are most useful for passing chunks of related data (such as vectors or points) within managed code.
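A small C# sketch of the difference in copy semantics and allocation; PointStruct and PointClass are defined here purely for illustration.

    using System;

    // A value type: copied by value and, as a local variable, stored
    // on the stack rather than the garbage-collected heap.
    struct PointStruct
    {
        public int X;
        public int Y;
    }

    // A reference type: every instance is a separate heap allocation
    // that the garbage collector must eventually reclaim.
    class PointClass
    {
        public int X;
        public int Y;
    }

    class StructExample
    {
        static void Main()
        {
            PointStruct a = new PointStruct();
            a.X = 1;
            PointStruct b = a;  // copies the whole value
            b.X = 99;           // does not affect a

            PointClass c = new PointClass();
            c.X = 1;
            PointClass d = c;   // copies only the reference
            d.X = 99;           // also changes c.X

            Console.WriteLine("{0} {1} {2} {3}", a.X, b.X, c.X, d.X); // 1 99 99 99
        }
    }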

Use DirectCast

When casting an object from one type to another, using DirectCast instead of CType in VB.NET, or x as Type instead of (Type)x in C#, can be faster. However, you must know that the object's runtime type is compatible with the target type. Be aware that the "as" operator (and VB.NET's TryCast) fails silently, assigning a null reference instead of throwing an exception, so check the result before using it.

DirectCast or "as" cannot be used to cast value types (structs), as value types cannot be null. Using .NET 2.0 generic collections can save you a lot of work in casting, as they enforce type safety and can be more efficient than equivalent non-generic collections.
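A short C# sketch of the difference between a direct cast and the "as" operator (the variables are invented for illustration):

    using System;

    class CastExample
    {
        static void Main()
        {
            object boxedString = "hello";
            object notAString = 42;

            // Direct cast: throws InvalidCastException if the types don't match.
            string s1 = (string)boxedString;

            // 'as' cast: returns null instead of throwing, so check the result.
            string s2 = notAString as string;
            if (s2 == null)
            {
                Console.WriteLine("notAString is not a string");
            }

            Console.WriteLine(s1);
        }
    }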

What It All Means...

None of these rules apply in every case, and in some cases ignoring them completely can be to your benefit. However, most of them are true in a great number of cases, so you shouldn't dismiss them without testing. Thorough, comprehensive testing is the only way to really measure the stability and performance of your program.

Profiling Tools

Performance Monitor

The .NET Framework adds many performance counters to the system Performance Monitor that let you examine your application’s resource utilization. From processor, memory, and hard drive usage to garbage collections and JIT compilation, it offers a wealth of information about what your program is doing. Every byte counts, so keep a close eye on your memory usage. In addition, watch the garbage collector carefully and try to have as many of your objects as possible die in Generation 0.
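These counters can also be read from your own code via System.Diagnostics.PerformanceCounter. A hedged sketch, assuming the standard ".NET CLR Memory" category and counter names are available on the machine and using the current process name as the instance name:

    using System;
    using System.Diagnostics;

    class CounterExample
    {
        static void Main()
        {
            // The instance name is normally the process name; multiple
            // instances of the same executable may carry a suffix.
            string instance = Process.GetCurrentProcess().ProcessName;

            using (PerformanceCounter gen0 = new PerformanceCounter(
                       ".NET CLR Memory", "# Gen 0 Collections", instance, true))
            using (PerformanceCounter heapBytes = new PerformanceCounter(
                       ".NET CLR Memory", "# Bytes in all Heaps", instance, true))
            {
                Console.WriteLine("Gen 0 collections: {0}", gen0.NextValue());
                Console.WriteLine("Bytes in all heaps: {0}", heapBytes.NextValue());
            }
        }
    }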

CLR Profiler

You can download the CLR Profiler free from Microsoft. It is an extremely useful tool for analyzing your program. The CLR Profiler lets you graphically see object allocations and assembly loading. It also lets you see where objects are located in memory and a timeline of the memory usage of the application. This is very useful for finding memory leaks and analyzing your application’s memory use.

.NET Reflector

Reflector is not exactly a performance-testing tool. The tool lets you disassemble any .NET executable or DLL. You can then see the MSIL code of your program created by the compiler. It will show you what the compiler is optimizing for you so you can concentrate your efforts elsewhere. Keep in mind that there will be another optimization step when the function is JIT compiled, so this does not represent the final machine code that executes.