Tag Archive for C#

Moq, Callbacks and Out parameters: a particularly tricky edge case

In my current project we’re using Moq as our mocking framework for unit testing. It’s a very nice package – very simple, very intuitive for someone who has his head wrapped around basic mocking concepts – but tonight I ran into an annoying limitation that, while rare, impeded my tests considerably.

Consider the following code:

public interface IOut
{
   void DoOut(out string outval);
}

[Test]
public void TestMethod()
{
  var mock = new Mock<IOut>();
  string outVal = "";
  mock.Setup(m => m.DoOut(out outVal))
      .Callback<string>(theValue => DoSomethingWithTheValue(theValue));
}

What this code SHOULD do is create a mock object for the IOut interface that, when called with an out parameter, both assigns a value to it, and does something with the value in the Callback.

Unfortunately, this does not work.

The reason it doesn’t work is that the Callback method has many overloads, but they all accept some variant of the Action<> delegate – from Action<T>, which receives one parameter, to Action<T1, T2, T3>, which receives three. None of these delegates has a signature with an out parameter. The result, at runtime, is an error to the effect of

“Invalid callback parameters <string> on object ISetup<string&>”

Note the highlighted bits – the Setup method referred to a string& (a ref/out parameter), while the Callback inferred an Action<string> delegate, which expects a regular string parameter.

So what CAN we do?

The first option is to submit a patch to the Moq project. It’s open-source, and a solution might be appreciated, especially since the project’s lead, Daniel Cazzulino, pretty much acknowledged that this is not supported right now. I might do that later, but right now it’s the middle of the night and I just want my tests to pass. So I managed to hack together this workaround, which does the trick:

public static class MoqExtension
{
   public delegate void OutAction<TOut>(out TOut outVal);

   public static IReturnsThrows<TMock, TReturn>
                        OutCallback<TMock, TReturn, TOut>
                                 (this ICallback<TMock, TReturn> mock,
                                  OutAction<TOut> action)
                                     where TMock : class
   {
      mock.GetType()
          .Assembly.GetType("Moq.MethodCall")
          .InvokeMember("SetCallbackWithArguments",
                        BindingFlags.InvokeMethod
                            | BindingFlags.NonPublic
                            | BindingFlags.Instance,
                        null, mock, new object[] { action });
      return mock as IReturnsThrows<TMock, TReturn>;
   }
}


We’ve created a new delegate, OutAction<TOut>, that has an out TOut parameter, and we’ve added a new extension method to Moq, OutCallback, which we will use instead of Callback. What this extension method does is use brute-force reflection to hack into Moq’s internals and call the SetCallbackWithArguments method, which registers the callback with Moq. Yes, it even works. I’m as surprised as you are.
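For illustration, usage would look roughly like this – a sketch only, reusing the IOut interface and DoSomethingWithTheValue method from the first snippet; depending on your Moq version, and on whether the mocked method returns a value (the extension above targets the two-parameter ICallback<TMock, TReturn>), you may need additional overloads of OutCallback:

```csharp
[Test]
public void TestMethodWithOutCallback()
{
    var mock = new Mock<IOut>();
    string outVal = "value assigned by the mock";

    // OutCallback replaces the regular Callback, and its OutAction<string>
    // delegate can actually receive the out parameter.
    mock.Setup(m => m.DoOut(out outVal))
        .OutCallback((out string theValue) =>
        {
            theValue = outVal;
            DoSomethingWithTheValue(theValue);
        });
}
```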

Note and Caveats:

1) This code works only for the very specific void Function(out T) signature. You will have to tailor it to the specific methods you want to mock. To make it generic, we’d have to create a bucket-load of OutAction overloads with different combinations of out and regular parameters.

2) This is very hacky and very fragile. If Moq’s inner implementation changes in a future version (the one I’m using is 4.0.10827.0), it will break.

3) I don’t like using out params, usually, but this is a 3rd party library, so I have to work with what I got.

Comments? Questions? Scathing criticisms on how ugly my code is? A different, simpler method of accomplishing this that I totally missed? Bring’em on.

Static Constructor throws the same exception again and again

Here’s a little gotcha I ran into today – if you have code in a class’s static constructor that throws an exception, you will get a TypeInitializationException with the original exception as its InnerException – so far, nothing new.

However, if we keep calling methods on that type, we’ll keep receiving TypeInitializationExceptions – if the type cannot be initialized, it cannot be used. Every time we try, we’ll receive the exact same exception, yet the static ctor will not be called again. What appears to happen is that the CLR caches the TypeInitializationException object itself, InnerException included, and rethrows it whenever the type is accessed.

What are the ramifications? Well, we received an OutOfMemoryException in our static ctor, but the outer exception was caught and the call retried. So we got an OutOfMemoryException again, even though the memory problem was behind us, which sent us down the wrong track of looking for persistent memory problems. Theoretically, it could also be a leak – the inner exception holding some sort of reference that is never released – but that’s an edge case.

Here’s some code to illustrate the problem. The output clearly shows that while we wait a second between calls, the inner exception contains the time of the original exception, not any subsequent calls. Debugging also shows that the static ctor is called only once.

class Program
{
    static void Main(string[] args)
    {
        for (int i = 0; i < 5; i++)
        {
            try
            {
                Thread.Sleep(1000);
                StaticClass.Method();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.InnerException.Message);
            }
        }
        Console.ReadLine();
    }
}

static class StaticClass
{
    static StaticClass()
    {
        throw new Exception("Exception thrown at " + DateTime.Now.ToString());
    }

    public static void Method()
    { }
}

Upgrading all projects to .NET 3.5

A simple macro to change the Target Framework for a project from .NET 2.0 to .NET 3.5, hacked together in a few minutes. This will fail for C++/CLI projects, and possibly VB.NET projects (haven’t checked). Works fine for regular C# projects, as well as web projects:


For Each proj As Project In DTE.Solution.Projects
    Try
        proj.Properties.Item("TargetFramework").Value = 196613
        Debug.Print("Upgraded {0} to 3.5", proj.Name)
    Catch ex As Exception
        Debug.Print("Failed to upgrade {0} to 3.5", proj.Name)
    End Try
Next proj

Why 196613, you ask? Well, when I see such a number my first instinct is to feed it into calc.exe and switch it to hexadecimal. And I was right: 196613 in decimal converts to 0x00030005 – You can see the framework version hiding in there. Major version in the high word, minor in the low word. The previous TargetFramework number was 131072 – 0x00020000, or 2.0.
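As a little sketch of my own (not part of the macro) to show the encoding:

```csharp
int major = 3, minor = 5;
int targetFramework = (major << 16) | minor;  // major version in the high word, minor in the low word

Console.WriteLine(targetFramework);               // prints 196613
Console.WriteLine(targetFramework.ToString("X")); // prints 30005
```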


(Nitpickers might point out that I could simply set it to &H30005 rather than messing with the obscure decimal number. They would be correct – but I got to this number through the debugger, so that’s how it will stay.)

System.Diagnostics.EventLogEntry and the non-equal Equals.

Just a quick heads-up in case you’re stumped with this problem, or just passing by:

The System.Diagnostics.EventLogEntry class implements the Equals method to check if two entries are identical, even if they’re not the same instance. However, contrary to best practices, it does NOT overload the operator==, so these two bits of code will behave differently:

EventLog myEventLog = new EventLog("Application");
EventLogEntry entry1 = myEventLog.Entries[0];
EventLogEntry entry2 = myEventLog.Entries[0];

bool correct = entry1.Equals(entry2);
bool incorrect = entry1 == entry2;

After running this code, correct will be true while incorrect will be false.

Good to know, if you’re reading event logs in your code.

Displaying live log file contents

I’m writing some benchmarking code, which involves a Console application calling a COM+ hosted process and measuring performance. I want to constantly display results on my active console, but since some of my code is running out-of-process, I can’t really write directly to the console from all parts of the system. Not to mention the fact that I want it logged to a file as well.

So I cobbled together a quick Log class that does two things – it writes to a shared log file, keeping no locks so several processes can access it (I do serialize access to the WriteLine method itself, though). I don’t mind the overhead of opening/closing the file every time, since this isn’t production code.

The second method is the interesting one – it monitors the log file and returns every new line that is appended to it. If no new line is available, it blocks until one is written. The fun part was using the yield return keyword, which I’ve been looking for an excuse to use for quite a while now.

Note that there are many places where this code can go wrong or should be improved – for one, there is no way to stop it other than stopping the application. I bring this as the basic idea; it can be cleaned up and improved later:

public static IEnumerable<string> ReadNextLineOrBlock()
{
    // Open the file without locking it.
    using (FileStream logFile = new FileStream(Filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    using (StreamReader reader = new StreamReader(logFile))
    {
        while (true)
        {
            // Read the next line.
            string line = reader.ReadLine();
            if (!string.IsNullOrEmpty(line))
            {
                yield return line;
            }
            else
            {
                Thread.Sleep(100);
            }
        }
    }
}

This method can now be called from a worker thread:


private static void StartListenerThread()
{
    Thread t = new Thread(delegate()
    {
        while (true)
        {
            foreach (string line in Log.ReadNextLineOrBlock())
            {
                // I added some formatting, too.
                if (line.Contains("Total"))
                    Console.ForegroundColor = ConsoleColor.Green;

                Console.WriteLine(line);

                Console.ForegroundColor = ConsoleColor.White;
            }
        }
    });

    t.Start();
}

Another minor C#/C++ difference – adding existing files

Another minor detail that bit me for a few minutes today. I’m posting this so I’ll see it and remember, and possibly lodge it in other people’s consciousness.

Using Visual Studio 2005, adding an existing .cs file to a C# project will cause a copy of that file to be created in the project directory. If I want to link to the original file, I need to explicitly choose Add As Link from the dropdown on the Add button.

C++ projects, however, have the opposite behavior. Adding an existing item in a different folder will add a link to the original file in the original location. To create a copy, manually copy it to the project dir.

That is all.

Dynamically creating a Sharepoint Web-Service proxy instance

In my current code, I find myself accessing many Sharepoint web services. In one run of code I can access 5-6 different web services on dozens of different sites, not necessarily even on the same server. As a result, I find myself writing a lot of boilerplate code looking like this:

using (SiteData siteDataWS = new SiteData())
{
   siteDataWS.Credentials = CredentialCache.DefaultNetworkCredentials;
   siteDataWS.Url = currentSite.ToString() + "/_vti_bin/SiteData.asmx";

   // Do something with the WS.
}
Seeing as this is quite wearisome, I’ve decided to go and genericize it. Given also that the name of the ASMX file for every Sharepoint service is the same as the name of the class to be instantiated, it gets even shorter:

1:  public static WSType CreateWebService<WSType> (Uri siteUrl) where WSType : SoapHttpClientProtocol, new()
2: {
3:    WSType webService = new WSType();
4:    webService.Credentials = CredentialCache.DefaultNetworkCredentials;

5:    string webServiceName = typeof(WSType).Name;
6:    webService.Url = string.Format("{0}/_vti_bin/{1}.asmx", siteUrl, webServiceName);

7:    return webService;
8: }

Line 1: Define a generic method that receives the type of Web Service and returns an instance of it. Note the generic constraints – the type must be a Web Service Proxy (SoapHttpClientProtocol) and must be instantiable. The last one is necessary for line 3 to compile.

Line 5: Get the name of the type – for our example above it would be “SiteData” – and build the URL based on the site URL and the web service name.

Now I can transform the above snippet to this:

using (SiteData siteDataWS = WebServiceHelper.CreateWebService<SiteData>(currentSite))
{
   // Do something with the WS.
}

Drastic change? No, not really. But it keeps things clean.

Wrong constructor called in COM+: sneaky, sneaky bug

Here’s another nasty one that tried to bite me today. Let’s say I have the following classes:

public class COMPlusClass : ServicedComponent
{
   public COMPlusClass()
   {
      // Default initialization.
   }

   public COMPlusClass(string data)
   {
      // Parameterized initialization.
   }
}

public class Client
{
    public Client()
    {
       COMPlusClass cpc = new COMPlusClass("I am a client");
    }
}

Which constructor do you think will be called?

Naturally, I wouldn’t ask if it was the second one. Much to my surprise, the first constructor was called when I instantiated the COM+ component.

Why is this? Because COM+, due to its COM heritage, requires all components to expose a default, public, parameterless constructor and always initializes through it.

Why didn’t I get an error, then? Because I’m writing .NET classes that don’t know in advance that they will be instantiated through COM+. If all I knew of COMPlusClass came from a COM type library, the second ctor probably wouldn’t be exposed. But since I see it as a .NET class, I can believe I am invoking the parameterized constructor. COM+ probably discards my precious string along the way.

Solution? Add some sort of Initialize method to pass the construction data.
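A minimal sketch of that workaround (the Initialize method name is my own invention, not a COM+ convention):

```csharp
public class COMPlusClass : ServicedComponent
{
    private string data;

    // COM+ always instantiates through the default constructor.
    public COMPlusClass() { }

    // Pass the construction data explicitly, after instantiation.
    public void Initialize(string data)
    {
        this.data = data;
    }
}

// Client side:
// COMPlusClass cpc = new COMPlusClass();
// cpc.Initialize("I am a client");
```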

More Tales from the Unmanaged Side – System.String -> char*

Today, I had a very tight deadline to achieve a very simple task: pass a managed .NET string to an API function that expects a null-terminated char*. Trivial, you would expect? Unfortunately it wasn’t.

My first thought was to do the pinning trick that I mentioned in my last post, but in this case I needed my resulting char* to be null-terminated.

Second thought was to go to the System.Runtime.InteropServices.Marshal class and see what it had for me. I found two contenders:

1) Marshal::StringToBSTR() – this creates a COM-compatible BSTR. I found various tutorials about BSTRs saying that they MIGHT be, under SOME circumstances, compatible with zero-terminated wide-character strings. That didn’t seem safe enough.

2) Marshal::StringToHGlobalAuto() – this allocates a block of memory, copies the characters in and even null-terminates it for me. This looked like a winner. Stage one was done – we managed to get an unmanaged string. But can we use it now?

The next problem was that StringToHGlobalAuto returns an IntPtr, and casting it to a char* led to a compilation error. The solution to that is either to cast the IntPtr to a (void*) before casting to (char*), or to do the same action by calling the IntPtr’s ToPointer() method. The second option seems neater to those of us who like as few direct casts as possible – I’d rather my conversion was done by a method than by a possibly unsafe casting operation. I’m sure those more concerned with method-call overheads will disagree.

The next problem is that the result of this operation was a single character string – the first character from the expected string. C++ programmers who’ve struggled with Unicode will quickly spot the problem.

char* strings are null terminated – the first byte containing 00 is the terminator. For Unicode strings as returned by StringToHGlobalAuto, each character takes 2 bytes. If it’s a character from the lower reaches of the Unicode spectrum, the second byte, being the high-order byte, will usually be 00, thus terminating the string. There are two options:

1. Instead of char*, use wchar_t* – wide, 2-byte character string, terminated by two null bytes.
2. Use StringToHGlobalAnsi, which converts the string to standard 8-bit ANSI encoding. This should be used only if we know we can’t receive any Unicode characters, or (as in my case) when the API we call only accepts char*. :(
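Here’s a rough sketch of the second option, written as plain C# unsafe code rather than C++/CLI. Note that C’s 8-bit char maps to sbyte* on the C# side, and that memory from StringToHGlobalAnsi must eventually be released with FreeHGlobal:

```csharp
using System;
using System.Runtime.InteropServices;

class MarshalSketch
{
    static unsafe void Main()
    {
        string managed = "hello";

        // Copy the managed string into unmanaged memory,
        // converted to 8-bit ANSI and null-terminated.
        IntPtr unmanaged = Marshal.StringToHGlobalAnsi(managed);
        try
        {
            // ToPointer() avoids a direct IntPtr-to-pointer cast.
            sbyte* buffer = (sbyte*)unmanaged.ToPointer();
            // ... pass buffer to the native API that expects char* ...
        }
        finally
        {
            // StringToHGlobalAnsi allocates; we must free.
            Marshal.FreeHGlobal(unmanaged);
        }
    }
}
```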


So that’s a bit more C++ that haunted and taunted me today. See you next time.

A C# coder in a C++/CLI world, part 1

Recently I’ve found myself stumbling around some C++/CLI code. C++ is a language which I learned years ago and never really worked with seriously, so I’ve been cursing and moaning as I worked. Strange for me to go back to a (partially) unmanaged environment now, with all sorts of assumptions that I have proven to be false. I’ll try to go over some pitfalls and insights I’m having during the visit. This is the first:

The Garbage Collector really spoiled me. I’m not talking about deleting what I instantiated, but all sorts of subtler bugs related to going out of scope. For instance, I had code like this that receives a managed, .NET byte[] and turns it into an unmanaged char*:

void DoSomething(array<Byte>^ data)
{
   // Pin the managed array to an unmanaged char*.
   pin_ptr<Byte> pinnedBuffer = &data[0];
   char* buffer = (char*)pinnedBuffer;

   // Now do something with the unmanaged buffer.
}
Simple enough, but then I found myself needing it in a different method too. My first instinct, of course, was to refactor those two lines into a new method to do the array conversion, especially since I had a couple of lines of parameter validation there too. But my new GetUnmanagedBuffer(array<Byte>^) method wasn’t as simple as I hoped. This is because the pinnedBuffer object that I created to prevent the GC from moving the managed array went out of scope when the method exited, and by the time I used the buffer in my unmanaged code, the data wasn’t guaranteed to be there anymore.

In the managed world, we’re used to one of two scenarios when returning values from a method: either it’s a value type that’s copied completely, or a reference type that passes a managed, GCed reference. In both cases, we know that returning an object from a method is a safe operation.

Additionally, we in managed land are used to refactoring being a safe operation, in most cases. If I don’t have to worry about scope, I don’t worry about extracting any block of code into its own method. In C++, however, I have this constant uncertainty. It will pass with experience, but for now I still feel much less safe than I do in a managed environment.


This was the first installment of Tales from the Unmanaged Side. I’ll see if I have anything else to say the more I work on this.