Archive for February, 2011

How to use CPU instructions in C# to gain performance

February 26, 2011 Leave a comment


Today, the .NET Framework and C# have become very common for developing even the most complex applications with ease. I remember that before getting our hands on C# in 2002 we were using all kinds of programming languages for different purposes, ranging from Assembly and C++ to PowerBuilder. But I also remember the power of using Assembly or C++ to squeeze every last drop out of your hardware resources. Every once in a while I get a project where regular framework functionality leaves my computer grilling for a couple of days to calculate something. In those cases I go back to the good old C++ or Assembly routines to use the full power of my computer. In this post I will show you the simplest way to take advantage of your hardware without introducing any code complexity.


I believe that samples are the best teacher; therefore, I’ll be using a sample CPU instruction from the Streaming SIMD Extensions (SSE). SSE is just one of many instruction set extensions to the x86 architecture. I’ll be using an instruction named PTEST from the SSE4 instruction set, which is available in almost all Intel and AMD CPUs today; you can visit the links above for supported CPUs. PTEST performs a bitwise comparison between two 128-bit parameters. I picked it because it is a good sample that also involves data structures. You can easily look up any other instruction set online for your project requirements.
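Before wiring anything up, it may help to see what PTEST actually computes. The sketch below is plain C++ with no intrinsics, just the bitwise definition: _mm_testc_si128(a, b) returns 1 exactly when every bit set in b is also set in a, i.e. when (~a & b) == 0. It is shown here on a single 64-bit lane; the real instruction applies the same test across all 128 bits at once.

```cpp
#include <cstdint>

// Semantic sketch of PTEST's result: returns 1 when every bit set in b
// is also set in a, which is the same condition as (~a & b) == 0.
int ptest_carry(uint64_t a, uint64_t b) {
    return (~a & b) == 0 ? 1 : 0;
}

// ptest_carry(0xAAAA55551111FFFF, 0xAAAA55551111FFFF) -> 1 (identical values)
// ptest_carry(0xAAAA55551011FFFF, 0xAAAA55551111FFFF) -> 0 (a is missing a bit of b)
```

These two cases are exactly the res1 and res2 results in the MSDN sample below.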

We will write unmanaged C++, wrap it with managed C++, and call it from C#. Don’t worry, it is easier than it sounds.
Thanks to MSDN we have all the necessary information in the Alphabetical Listing of Intrinsic Functions. The PTEST intrinsic, _mm_testc_si128, is documented there. If we were using plain C++, we would end up with the code from the MSDN sample:

#include <stdio.h>
#include <smmintrin.h>

int main()
{
    __m128i a, b;

    a.m128i_u64[0] = 0xAAAA55551111FFFF;
    b.m128i_u64[0] = 0xAAAA55551111FFFF;

    a.m128i_u64[1] = 0xFEDCBA9876543210;
    b.m128i_u64[1] = 0xFEDCBA9876543210;

    int res1 = _mm_testc_si128(a, b);

    a.m128i_u64[0] = 0xAAAA55551011FFFF;

    int res2 = _mm_testc_si128(a, b);

    printf_s("First result should be 1: %d\nSecond result should be 0: %d\n",
                res1, res2);

    return 0;
}

I would like to point out that there are many ways to develop software, and the one I’m providing here may not be the best solution for your requirements. I’m just showing one way to accomplish the task; it is up to you to fit it into your solution.

OK, let’s start with a fresh new solution. (I’m assuming you have Visual Studio 2010 with the C++ language installed.)

1 ) Add a new C# console application to your solution, for the testing purpose
2 ) Add a new Visual C++ > CLR > Class Library project to your solution, named TestCPUInt
3 ) In your console application, add a reference to TestCPUInt by selecting it from the Projects tab

Now we are ready to code.
4 ) Open the TestCPUInt.h file, if it is not already open
5 ) Insert the following code right on top of the document

#include <smmintrin.h>
#include <memory.h>

#pragma unmanaged

class SSE4_CPP
{
public:
	//Code here
};

#pragma managed

This is the infrastructure for our unmanaged C++ code, which will make the SSE4 call. As you can see, I placed our unmanaged code between #pragma unmanaged and #pragma managed. I think it is a great feature to be able to write unmanaged and managed code together. You may prefer to place the unmanaged code in another file, but I used one file for clarity. We include two headers, smmintrin.h and memory.h: the first one is for the SSE4 intrinsics and the other is for the memcpy function I used to copy memory.
6 ) Now paste the following code at the //Code here location:

	int PTEST(__int16* bufferA, __int16* bufferB)
	{
		__m128i a, b;

		//transfer the buffers to the __m128i data type, because we do not want to deal with it in managed code
		memcpy(a.m128i_i16, bufferA, sizeof(a.m128i_i16));
		memcpy(b.m128i_i16, bufferB, sizeof(b.m128i_i16));

		//Call the SSE4 PTEST instruction
		return _mm_testc_si128(a, b);
	}

This _mm_testc_si128 call will emit the SSE4 PTEST instruction. We have a little memory operation right before it to fill out the __m128i data structures in the C++ code. I used memcpy to transfer the data from the bufferA and bufferB arguments to the __m128i data structures before pushing them to PTEST. I preferred to do this here to keep the whole SSE4-specific implementation in one place. I could also have passed the __m128i values to the PTEST method, but that would be more complex.

As I mentioned before, in this example I used the PTEST sample with a data structure; you may run into other instructions that require only a pointer, in which case you don’t need the memcpy operation. There may be some challenges if you are not familiar with C++, especially since IntelliSense was removed for VC++ in Visual Studio 2010, but you can always search online for answers. For example, the __m128i data structure is defined in the emmintrin.h file, which is located under [Program Files]\Microsoft Visual Studio 10.0\VC\include. You can also check the list of fundamental data types if you are not sure what to use instead of __int16*.

7 ) Now paste the following code on top of your managed C++ code, which is at the bottom of your TestCPUInt.h file, in the namespace section.

namespace TestCPUInt {

	public ref class SSE4
	{
	public:
		int PTestWPointer(__int16* pBufferA, __int16* pBufferB)
		{
			SSE4_CPP* sse4_cpp = new SSE4_CPP();
			return sse4_cpp->PTEST(pBufferA, pBufferB);
		}
	};
}
What we do here is forward the pointers pBufferA and pBufferB, which we will receive from C#, into the unmanaged code. For those who are not familiar with pointers, the * sign defines a pointer: __int16* means a pointer to a 16-bit integer. In our case that is the address of the first element of an array.
There are also ways to call a native dynamic library without managed C++, such as P/Invoke, but as I mentioned before, I’m showing only the simplest way for a C# developer.
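For readers new to pointers, here is a minimal native C++ sketch (the names are illustrative, not part of the project) of what passing “the address of the first element of an array” means: the called function walks the elements through that pointer, exactly as PTEST does with bufferA and bufferB.

```cpp
#include <cstdint>
#include <cstddef>

// A function receiving a pointer to the first element of an array,
// just like PTEST receives bufferA and bufferB.
int sum16(const int16_t* buffer, size_t count) {
    int sum = 0;
    for (size_t i = 0; i < count; ++i)
        sum += buffer[i]; // buffer[i] is the same as *(buffer + i)
    return sum;
}
```

Calling sum16(myArray, 8) passes &myArray[0], because in C and C++ an array decays to a pointer to its first element.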

8 ) Let’s go to our C# code in the console application to use this functionality.
9 ) First we have to allow unsafe code in the application. Go to the project properties and check “Allow unsafe code” under the Build tab.
10 ) Add the following code to your Program.cs file:

static unsafe int TestCPUWithPointer(short[] bufferA, short[] bufferB)
{
	SSE4 sse4 = new SSE4();
	//fix the buffer variables in memory to prevent them from being moved by the garbage collector
	fixed (short* pBufferA = bufferA)
	fixed (short* pBufferB = bufferB)
	{
		return sse4.PTestWPointer(pBufferA, pBufferB);
	}
}

If you have never used unsafe code before, you can check out unsafe (C# Reference). Actually, it is fairly simple logic: PTestWPointer requires pointers to arrays, and the only way to get a pointer to an array is the fixed statement. The fixed statement pins the buffer array in memory in order to prevent the garbage collector from moving it around. But that comes with a cost: in one of my projects the system was slowing down because of too many fixed objects in memory. Anyway, you may have to experiment for your own project.
That’s it!

But we will not stop here, for comparison I did the same operation in C#, as seen below:

static int TestCLR(short[] bufferA, short[] bufferB)
{
	//We want to test if all bits set in bufferB are also set in bufferA
	for (int i = 0; i < bufferA.Length; i++)
	{
		if ((bufferA[i] & bufferB[i]) != bufferB[i])
			return 0;
	}
	return 1;
}

Here I simply check whether every bit set in bufferB is also set in bufferA, which is exactly what PTEST does.

In the rest of the application I compared the performance of these two methods. Below is the code that runs the comparison for the sake of testing:

static void Main(string[] args)
{
	int testCount = 10000000;
	short[] buffer1 = new short[8];
	short[] buffer2 = new short[8];

	for (int i = 0; i < 8; i++)
	{
		buffer1[i] = 32100;
		buffer2[i] = 32100;
	}

	Stopwatch sw = Stopwatch.StartNew();
	int testResult = 0;
	for (int i = 0; i < testCount; i++)
		testResult = TestCPUWithPointer(buffer1, buffer2);
	Console.WriteLine("SSE4 PTEST took {0:G} and returned {1}", sw.Elapsed, testResult);

	sw.Restart();
	for (int i = 0; i < testCount; i++)
		testResult = TestCLR(buffer1, buffer2);
	Console.WriteLine("C# Test took {0:G} and returned {1}", sw.Elapsed, testResult);
}


In my environment I gained about 20% in performance; in some of my projects I have gained up to 20-fold.
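If you want to reproduce this kind of measurement in native C++, std::chrono plays the role of Stopwatch. The sketch below (names assumed; it times only the scalar version, since building the intrinsic version needs SSE4-capable tooling) mirrors the timing loop above:

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>

// Scalar containment test, same logic as TestCLR above.
int test_scalar(const int16_t* a, const int16_t* b, int n) {
    for (int i = 0; i < n; ++i)
        if ((a[i] & b[i]) != b[i]) return 0;
    return 1;
}

// Runs the test `iterations` times and returns the elapsed milliseconds,
// printing a line similar to the Console.WriteLine output above.
double time_scalar_test(const int16_t* a, const int16_t* b, int n, int iterations) {
    volatile int result = 0; // volatile keeps the loop from being optimized away
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        result = test_scalar(a, b, n);
    auto elapsed = std::chrono::steady_clock::now() - start;
    double ms = std::chrono::duration<double, std::milli>(elapsed).count();
    std::printf("Scalar test took %.3f ms and returned %d\n", ms, (int)result);
    return ms;
}
```

As in the C# version, make sure the measured loop dominates the timing; one call of an eight-element test is far below the timer's resolution.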
The last thing I would like to show you is how to move the fixed usage from C# into managed C++. That makes your code a little cleaner, like the one below:

static int TestCPU(short[] bufferA, short[] bufferB)
{
	SSE4 sse4 = new SSE4();
	return sse4.PTest(bufferA, bufferB);
}

As you can see, it is only a method call. In order to do this we have to add the following code to the SSE4 class in the TestCPUInt.h file:

int PTest(array<__int16>^ bufferA, array<__int16>^ bufferB)
{
	pin_ptr<__int16> pinnedBufferA = &bufferA[0]; // pin pointer to first element in arr
	__int16* pBufferA = (__int16*)pinnedBufferA;  // pointer to the first element in arr
	pin_ptr<__int16> pinnedBufferB = &bufferB[0]; // pin pointer to first element in arr
	__int16* pBufferB = (__int16*)pinnedBufferB;  // pointer to the first element in arr

	SSE4_CPP* sse4_cpp = new SSE4_CPP();
	return sse4_cpp->PTEST(pBufferA, pBufferB);
}

This time our method takes a managed array object instead of an __int16 pointer and pins it in memory, just as we did with fixed in C#.


I believe that no matter how high-level the frameworks we use become, there will always be situations where we have to use our hardware resources more wisely. Sometimes these performance improvements save us a big amount of hardware investment.
Please go ahead and look into CPU instructions, parallel computing, GPGPU, etc., and get to know the hardware; knowing your tools better will make you a better software architect.


Careful with Optional Arguments in C#4.0

February 20, 2011 2 comments


Some time has passed since optional arguments were introduced with Visual C# 2010, and we have all gotten used to the convenience of not having to define method overloads for every different method signature. But recently I came across a limitation of using optional arguments in enterprise solutions, and now I use them with care.

The limitation is that if you use optional arguments across libraries, the compiler will hard-code the default value into the consumer and prevent you from re-deploying the provider library separately. It is a very common scenario in enterprise applications to have many libraries with different versions married to each other, and this limitation makes it impossible to re-deploy only one DLL without re-deploying all related libraries.

In this post I will explain the details of this limitation. As I mentioned in my previous post, Under the hood of anonymous methods in c#, it is very important to know the underlying architecture of the functionality you are using. This post is similar, explaining the mechanics of optional arguments.


I will not explain what optional arguments are, because the feature has been out for a while. But I will give a quick reference to the description for those who are new to C#:

“Optional arguments enable you to omit arguments for some parameters. … Each optional parameter has a default value as part of its definition. If no argument is sent for that parameter, the default value is used. Default values must be constants.” (Named and Optional Arguments (C# Programming Guide))

I ran into this limitation while I was using some functionality from a secondary library, injected through an IoC container. The secondary library was accessed through an interface where some methods had optional arguments. I had everything deployed and working until I had to make some changes to the secondary library and alter an optional argument. After re-deploying the secondary library I figured out that the changes did not take effect, and when I went into the IL code I saw that the main library had the constant hard-coded into it.

In my situation I had interfaces, injection and a lot of complexity; to picture the situation better, I will use the simplest form of the limitation, as in the following sample:

  • ProjectB has a method named Test with the following signature:

public void Test(string arg1 = "none")

  • ProjectA is referencing ProjectB and using the Test method with its default argument by making the following call:

static void Main(string[] args)
{
    Class1 class1 = new Class1();
    class1.Test();
}

This works very well, because ProjectA is just using the arg1 value as “none”. Now let’s look at what is happening behind the scenes:

If we compile ProjectA along with ProjectB and analyze the IL code, we will see that the optional argument is nothing more than a compiler feature. Because calling Test() is no different from calling Test(“none”), the compiler compiles our code as Test(“none”). That can be seen in the IL code and the disassembled C# code below; the string constant “none” is hard-coded into ProjectA.

.method private hidebysig static void Main(string[] args) cil managed
L_0008: ldstr "none"
L_000d: callvirt instance void [ProjectB]ProjectB.Class1::Test(string)

private static void Main(string[] args)
{
    new Class1().Test("none");
}

For tightly coupled libraries or in-library usage, it is good that the compiler helps us eliminate some code and makes our life easier. But this comes at a price:

Let’s say we had to modify the Test method in ProjectB to Test(string arg1 = “something”) and re-deploy it without re-deploying ProjectA. In this case ProjectA would still be calling the Test method with “none”.
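A safer alternative at library boundaries is the plain overload the feature was meant to spare us: with an overload, the default constant is compiled into the provider itself, so re-deploying the provider alone changes it for every caller. Here is the idea sketched in C++ with illustrative names (C++ default arguments are baked into the call site in the same way, so the same remedy applies there):

```cpp
#include <string>

// Provider side: the "none" default lives in the provider's own code,
// not in the caller's, so a re-deploy of the provider updates it.
std::string Test(const std::string& arg1) {
    return "called with " + arg1;
}

std::string Test() {        // parameterless overload supplies the default
    return Test("none");
}
```

Callers compile against Test() and never see the constant; only the provider has to be rebuilt when the default changes.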


Knowing this, it is wise to use optional arguments with caution across libraries when you have to support deployment scenarios where only part of the solution is re-deployed.

Overview: Using Claims-based Access on Windows Azure

February 6, 2011 2 comments


In this post I would like to talk about a technology that simplifies user access for developers by allowing them to build claims-aware applications: Windows Identity Foundation (WIF). The goal is to improve developer productivity, enhance application security, and enable interoperability. For most of us claims-aware authentication is nothing new; the new part I’m writing about is how to implement it in the Windows Azure environment. If you have already developed some solutions with WIF and know what claims-based identity means, you can save some reading and jump directly to the Windows Azure section.


Authorization and authentication are very important aspects of any software solution. But somehow many software solutions are not designed well enough to protect private or important data against attackers.

Many applications have their own user identity store built into their business layer, where it has to be maintained and supported along with the application by the developers. I’m not saying that every solution in the world is like this; there are many examples of centralized authorization and authentication solutions. But as we enter another era, moving our solutions to Windows Azure, it is becoming clear that such security solutions do not provide the required scalability and extensibility.

Claims-based Solution

Thanks to Windows Identity Foundation (WIF), we have the option to stop writing custom identity plumbing and user identity databases for every application using the .NET Framework. So, what is claims-based identity? I’ll try to explain it very briefly; the following diagram shows a simple, classic authentication implementation:

Diagram – 1

The client enters a user name and password (1) to get access to the secure area. The data is passed to the security layer (2), where the credentials are validated against a user identity store (3). The result is then, most probably, returned in the form of some user claims (4) and passed to the application (5). The term “claims” is used here to express that the data returned from the identity store contains more than just user properties. A claim can be a user’s email address, department, etc., but claims are a little more than user properties; they also include trust information about the provider. At the end the client may receive access to the secure data (6).
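To make that concrete, here is a minimal native C++ sketch (hypothetical types, not WIF’s actual API) of a claim that carries not only a type and a value but also its issuer, the trust information mentioned above:

```cpp
#include <string>
#include <vector>

// A claim is more than a user property: besides a type and a value,
// it records which identity provider asserted it.
struct Claim {
    std::string type;   // e.g. "email" or "department"
    std::string value;  // e.g. "alice@example.com"
    std::string issuer; // who we have to trust for this claim
};

// Find the value of the first claim of a given type, the kind of
// query an application runs over the claims it receives.
std::string FindClaim(const std::vector<Claim>& claims, const std::string& type) {
    for (const Claim& c : claims)
        if (c.type == type) return c.value;
    return "";
}
```

The issuer field is what lets the application decide whether the claim comes from a provider it trusts, rather than treating it as a bare user property.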

Windows Identity Foundation

There is nothing wrong with this picture until you want to implement different security, enhance your security, or add one more application to your system. The problem is that you have to modify your application code that deals with identity and security. Claims-based identity allows you to decouple this logic from the heart of your application and hand the responsibility to another entity. That entity is the identity provider, as seen below:

Diagram – 2

The client starts by requesting the token requirements from the application (1), receives them (2), and passes them to the identity provider’s Security Token Service (STS) along with the user name and password (3). The STS validates the user information against the user identity store (4) and receives the token answers in the form of claims (5). It then passes the token, along with the claims and a public key, back to the client (6). The client forwards the token to the application (7). The application uses (8) WIF to resolve (9) the token (canonicalization, signature checking, decryption, checking for expiration, checking for duplication, etc.) and receives the claims it requires out of the token (10) if the token is valid. At the end the client may receive access to the secure data (11).

At first glance it looks like we are introducing a lot of steps (6 through 11), but when you think about the simplicity of the code you have to write to authenticate a user in your application, using WIF saves a lot of trouble. For example, the only code I have to write to get a claim looks like this:

protected void Page_Load(object sender, EventArgs e)
{
    IClaimsIdentity claimsIdentity =
        ((IClaimsPrincipal)Thread.CurrentPrincipal).Identities[0];

    String userEmail =
        (from c in claimsIdentity.Claims
         where c.ClaimType == System.IdentityModel.Claims.ClaimTypes.Email
         select c.Value).FirstOrDefault();
}
In this code, we access the instance of our claims identity and query the claims for the email address, which is a claim from a token generated by the STS. The rest is only application configuration. For example, in a very simple implementation we could allow some pages to be seen only by a certain role and make the following configuration:

<location path="SecretPage.aspx">
  <system.web>
    <authorization>
      <allow roles="Manager"/>
      <deny users="*"/>
    </authorization>
  </system.web>
</location>

Please check out the SDK and Toolkit links below; you will really enjoy the practicality of this framework.

Practicality is not the only advantage; it is also powerful when we want to implement different authentication technologies. For example, switching to Active Directory will not require any code changes in your application, nor in the communication with your application, as seen below:

Diagram – 3

With that said, I would like to introduce you to the other parts of Microsoft’s Identity and Access Platform software family: Active Directory Federation Services (ADFS) 2.0 and Windows CardSpace 2.0. You can get more information about these technologies online. I’ll not get into the details of ADFS and CardSpace because they are outside our current scope, but they are nice technologies that you may use in your custom solution.


You can also develop your own custom STS with the help of the Windows Identity Foundation SDK. Creating a custom STS is as simple as right-clicking your project, selecting “Add STS reference…” and following the wizard to create a new STS project. For more information you can download WIF and the WIF SDK from these links:

And please do not forget the Toolkit, which provides all the code examples you could need. Just a note: please start by setting up your environment as described in Setup.docx in the Assets folder of the toolkit. I’m one of those who read the manual last; you can save yourself the time I lost up to that point. :)

Windows Azure

Now, how will all this help us on the Windows Azure platform?

There are many scenarios in which you can use the Windows Azure platform as a part of your solution. One of them is to place your application in the cloud and keep your STS on-premises. This way you can still take advantage of local identities for authenticating your users, while also taking advantage of Windows Azure features like load balancing, scalability, etc. The following diagram shows this example:

Diagram – 4

The logic is very similar to the previous ones; the client connects to the application (6), in this case an ASP.NET web application on an ASP.NET web role, only after sending the application a token (5) which was gathered from the identity provider (4) based on the requirements defined by the application (2). I did not draw the details of the identity provider and the application because they can vary, as I described in the claims-based identity scenarios. Please notice that WIF is still used in the application on Windows Azure. But WIF is not in the GAC that is visible to Windows Azure applications; therefore I wanted to point out that it must be referenced manually.

One of the important details to know is that the application hosted in Windows Azure can have a different URI for the development, testing and production environments. Therefore the application’s URI should be dynamically embedded into the token as the reply address, and as a result the STS also has to be modified to validate the reply URI.

Another detail is establishing a trust relationship between the ASP.NET web role and the STS. That is simply done with the wizard that shows up when you right-click your application project and select “Add STS reference…”.

Another key scenario is to delegate the identity-provider part to Windows Azure as well, by using the Windows Azure AppFabric Access Control service instead of our own STS.

The usage of AppFabric Access Control is shown in the following diagram:

Diagram – 5

As you can see, the claims-based workflow has not changed much; I only simplified the communication (1) for clarity. SWT stands for Simple Web Token.

You can get more details about this scenario from the WIF toolkit labs.


Using claims-based access simplifies a project in many aspects, like maintenance, security, extensibility, etc. Applying this technology to the cloud expands our endless solution space in another dimension. I have only scratched the surface of this nice technology; please go ahead and check out the resources I pointed to in this post. I hope you will enjoy it as much as I did.