Tag Archive for WCF

Code Generation, SharedContracts and The Sneaky Bug

A short discussion ensued today on the topic of Code Generation tools like CodeSmith.

Like Unit Testing, code generation is a topic that some people swear by and some reject out of hand. I’m sure this is mostly a question of getting used to the concept. Only after I had used unit tests extensively in a real project could I appreciate the real value of having them – not just occasionally running some tests, or writing some cases beforehand. I’m talking a full suite of automated tests that could be run nightly or as part of a continuous integration setup. But I digress.

Code Generation was of course very useful in .NET 1.1 for generating strongly-typed collection classes and the like, but that aspect has been pretty much deprecated by Generics in the 2.0 framework. Code generators are still very useful for generating boilerplate code and translating metadata (XSD, WSDL and other contract-type information) into strongly typed classes.

WCF uses Code Generation to create a proxy wrapper from WSDL, generating a class with static code based on the contract information. This is called Shared Contract mode. The alternative is Shared Type mode, where the interface isn't defined as WSDL but as .NET metadata – i.e. an interface or a class – and the proxy is based on that interface.

In a previous article I expressed a preference for SharedType when we have a closed system where we control both client and server. Ralph Squillace claims that there should be no difference as far as the developer experience is concerned – whether the proxy is generated dynamically at runtime from a shared type, or statically at compile-time by a shared contract.

The reason I disagree with this statement is the same reason I am wary of Code Generation tools in general. It’s not because I don’t trust them – it’s true that code generation bugs can introduce subtle errors into the system, but I assume that a serious Code Generation tool will receive proper attention and QA. I had already filed one bug report on SVCUTIL’s proxy generation code and it was promptly fixed.

The reason isn’t that I don’t trust the tool or even that I don’t trust the programmer using the tool, it’s that I don’t trust any process. The more steps I have, the more things can go wrong. When these steps are manual, even more so.

In a Shared Type scenario, changing the contract involves three steps:

1. Update the interface.
2. Update the Service.
3. Update the client.

In a Shared Contract scenario, it’s slightly different:

1. Update the interface.
2. Update the service.
3. Regenerate the proxy.
4. Update the client.

(Note that #1 and #2 might be the same step, if the WSDL is generated directly from the service).

I’m leery of step #3. Not because it’s hard. Not because it’s long or exhausting or particularly annoying to perform – it’s not much more than a menu click in Visual Studio. I’m worried about it because it is a manual step, and all manual steps are bound to be forgotten occasionally. No matter how careful we are, sooner or later we’ll find ourselves with a mismatched contract between client and server.

If the mismatch is big, it will be noticed quickly. If my client tries to call an operation that doesn’t exist, I’ll receive an error immediately. If I change the types of my parameters, I’ll get an exception on the server.

But what if my changes are more subtle? What if I added an OperationBehavior on one end that wasn’t replicated on the other? What if I added a [KnownType] on one end and forgot to synchronize it on the other?
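
To make the kind of mismatch I mean concrete, here's a hypothetical sketch – all the type names are invented for illustration:

[DataContract]
public class Order
{
   [DataMember] public int Id;
}

[DataContract]
public class ExtendedOrder : Order
{
   [DataMember] public string Notes;
}

[ServiceContract]
public interface IOrderService
{
   // This ServiceKnownType was added on the server's copy of the contract only.
   // A proxy generated before the change still compiles and runs fine – right up
   // until the first ExtendedOrder crosses the wire, when deserialization fails.
   [OperationContract]
   [ServiceKnownType(typeof(ExtendedOrder))]
   void Submit(Order order);
}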

These are errors that are hard to catch, and they usually manifest much later than they are introduced – all because my synchronization process is manual, and therefore more likely to fail.

This is true for other Code Generation scenarios too. If my code generation template creates strongly typed classes based on my database schema, I need to make sure I rerun the generation after each change to my database. How many times have I changed a database table during development and started debugging only to have my Typed Dataset code crash on load because of incompatible schemas, just because I forgot to rerun the generation tool? What if the changes were more subtle (like changing a string length limit) and would only be apparent at some later time in a specific set of circumstances?


I’m not saying that SharedContract is bad. It’s a necessity, of course, with open systems and interoperability scenarios. I’m not saying these problems are inevitable when generating code. A bit of discipline and common sense will go a long way. I’m just saying that leaving these holes can come back and bite us. And if we can do without them, we should.

Creating CustomBindings programmatically

I had another revelation earlier, when going over Nicholas Allen's explanation of the GetProperty method and BindingContexts:

The Binding objects supplied by the WCF framework are only wrappers. They have no logic in them. They mean nothing. They do nothing except allow access to the internal properties of their BindingElements. The BindingElements are the ones that do the real work.

There. It's a bit harsh, but it had to be said. Took me a while to wrap my head around it. It's syntactic sugar, since it's easier to do this:

NetTcpBinding binding = new NetTcpBinding();
binding.ReaderQuotas.MaxArrayLength = 512000;

than:

CustomBinding binding = new CustomBinding(
   new TransactionFlowBindingElement(),
   new BinaryMessageEncodingBindingElement(),
   new WindowsStreamSecurityBindingElement(),
   new TcpTransportBindingElement());
new BindingContext(binding, new BindingParameterCollection())
   .GetInnerProperty<XmlDictionaryReaderQuotas>().MaxArrayLength = 512000;

The two, however, are equivalent. You can think of the NetTcpBinding class as a pre-selected 'kit' with several predefined selections, whereas the CustomBinding is a do-it-yourself set. The basic building blocks, though, are the same.

The problem with this is that if I want to deviate from the given settings, I have to start from scratch. Let's say I have the NetTcpBinding which by default uses the BinaryMessageEncodingBindingElement. If I want to replace that with a TextMessageEncodingBindingElement for some reason, or maybe the CompressionMessageEncodingBindingElement that is a part of the SDK samples for WCF, I can't do that with the NetTcpBinding.  I have to create my own custom binding based on the NetTcpBinding and modify it there.

Luckily, it's not that hard, and it's simpler than the ugly bit of code I had earlier. (Note: the CompressionMessageEncodingBindingElement wraps the encoder that actually encodes the message.)

// Create a custom binding based on NetTcp.
CustomBinding compressingTcpBinding = new CustomBinding(new NetTcpBinding());

// Find the current MessageEncoding element and its position in the BindingElement stack.
BinaryMessageEncodingBindingElement currentEncoder =
   compressingTcpBinding.Elements.Find<BinaryMessageEncodingBindingElement>();
int encoderIndex = compressingTcpBinding.Elements.IndexOf(currentEncoder);

// Create the new encoder, wrapping the existing one.
CompressionMessageEncodingBindingElement compressionEncoder =
   new CompressionMessageEncodingBindingElement(currentEncoder);

// Put it in the stack in place of the current encoder.
compressingTcpBinding.Elements[encoderIndex] = compressionEncoder;

There – a perfect little NetTcpBinding clone with the encoder neatly replaced, and all done in code, so we have a better idea of what actually happens there.
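
For completeness, plugging the result in looks the same as with any built-in binding. A quick sketch, with hypothetical service and contract names:

ServiceHost host = new ServiceHost(typeof(MessageService));
host.AddServiceEndpoint(typeof(IMessageService),
   compressingTcpBinding,
   "net.tcp://localhost:8000/Messages");
host.Open();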

Nicholas Allen has promised an upcoming article about the binding element stack – I'm waiting expectantly.

WCF: Navigating the Binding maze

A few days ago I ranted a bit about the Binding object model in WCF and how restrictive it feels when I want to imperatively describe my service bindings without being tied down to a specific transport. If I want to disable security or set the maximum allowed size per field in a message, I have to write identical-looking code for each different binding – something that just begs for refactoring which can't be done.

Luckily, the WCF team bloggers are ever alert, and a few days later I got a response from Nicholas Allen describing the reasons behind this behavior. His explanation consisted of two parts, and I'd like to address them separately:

1) Avoiding meaningless abstractions.

Take the example of security configuration on a transport. There are entirely incompatible security objects called NetTcpSecurity and NetNamedPipeSecurity. The only overlap in configuring these two transports happens to be when security is entirely disabled. That's not a very interesting abstraction to make.

Granted, one of my common failings is the urge for over-abstraction. I've read Spolsky and I've read Gunnerson, and I try to avoid excessive generalizations, but sometimes it gets the better of me. Having the security settings share a common base – even if only to allow setting No Security in a shared way – seems intuitive. I'll bow down to the WCF team's decision here – if the binding security objects really don't share a common base, there's no reason to abstract them together.

2) Aggregation, not inheritance.

We have a mechanism for dealing with this problem called GetProperty. For instance, if you want to set an XML reader quota in a generic fashion, you would use:

binding.GetProperty<XmlDictionaryReaderQuotas>(new BindingParameterCollection()).MaxArrayLength = 2;

Instead of relying on inheritance for shared properties, the object model views each Binding object as an aggregate of several binding elements and properties. Since the properties of a binding can differ between two instances of the same transport, the GetProperty mechanism allows us to query the binding for a specific capability – the XmlDictionaryReaderQuotas capability in this case – and access it generically. I would assume the above line would have to be wrapped in some sort of null check in case the binding doesn't support XmlDictionaryReaderQuotas.
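
Presumably something along these lines – my guess at the defensive form, not code from the documentation:

XmlDictionaryReaderQuotas quotas =
   binding.GetProperty<XmlDictionaryReaderQuotas>(new BindingParameterCollection());
if (quotas != null)
{
   quotas.MaxArrayLength = 2;
}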

The first thing that came to my mind about this is the similarity to the way C++ code worked with COM objects. You used the shared IUnknown interface to call QueryInterface and see what interfaces were supported by the object, and then got a handle to those interfaces. This is similar – the GetProperty is the shared interface which allows us to query for further functionality.

This, however, led to this question: why wasn't this functionality also implemented with interfaces? Why couldn't we have an ISupportsReaderQuotas interface which defines the XmlDictionaryReaderQuotas property, and instead of calling GetProperty() I can cast my binding to the interface and work with that? Each binding can implement the interfaces that make sense to it, and I can use the existing .NET mechanisms (the is and as operators, as well as reflection support) to query the object for the required operations.
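
In other words, something like this – an entirely hypothetical sketch, since WCF defines no such interface:

// Hypothetical – no such interface exists in WCF:
public interface ISupportsReaderQuotas
{
   XmlDictionaryReaderQuotas ReaderQuotas { get; }
}

// Each binding with reader quotas would implement it, and callers could write:
ISupportsReaderQuotas supportsQuotas = binding as ISupportsReaderQuotas;
if (supportsQuotas != null)
{
   supportsQuotas.ReaderQuotas.MaxArrayLength = 512000;
}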

This would also bypass other limitations of the model. As it stands, I have no idea how to access the MaxReceivedMessageSize property on the binding – it's an Int64 defined separately in each binding class. GetProperty's generic parameter is constrained to be a reference type, but even if I could query for Int64, I'd have no way of specifying which Int64 property I want. If this property were part of an IBindingMessageProvider interface, I could access it through there.

The advantage of the GetProperty approach is that it is more flexible. I don't have to know at compile-time what interfaces my binding supports. Glancing at the GetProperty code in Reflector, I could see it go deep into the BindingElements that make up the Binding, calling GetProperty on each of them. This means that a binding element added at runtime or through configuration (say, a message encoder) can be retrieved without being defined as part of the NetTcpBinding class.

Thinking it over (this is a stream-of-consciousness blog, can't you tell?), I can certainly see the logic behind the GetProperty mechanism, but it requires more extensive support in the binding object. This means the binding object should have as few “unattached” properties as possible, especially if they can be part of several different bindings. The MaxMessageSize/MaxPoolSize and related parameters could be extracted to a MessageSizeInformation class, while TransactionFlow/TransactionProtocol could be extracted to a TransactionInformation class – that way I could always query for the required “interface” at runtime without having to familiarize myself with the specific properties of specific bindings.

(This actually gives me an interesting idea about combining interfaces and aggregation, but more on that later)

So, assuming you made it to the end – how do you feel about the current implementation? Am I completely off-base here? Do I make a valid point? Were these issues discussed internally before this implementation was chosen? What were the pros and cons?

I'd love to hear more opinions, both from anyone on the WCF team who might be listening and from anyone else with an opinion.

WCF Serialization Part 2d: A Solution, a Conclusion and a Contribution

This is the last part of my continuing saga of serializing dictionaries over WCF and beyond.

Quick recap: While WCF allows me to serialize an IDictionary easily, trying to serialize that dictionary later for other uses fails – specifically, caching it to disk using the Enterprise Library. This is because the Enterprise Library relies on the BinaryFormatter, which in turn relies on the type implementing ISerializable. An alternative solution was to use the NameValueCollection, which implements ISerializable but is incompatible with WCF's serialization.

I felt trapped, having to juggle two incompatible serialization engines – one for communications, one for persistence. Frustrated. Annoyed. Helpless.

But then, as I was whining to my teammates, the solution came to me – there really isn't any reason to jump from one serialization method to the other. Since WCF gives me the most freedom and lets me use the IDictionary that I want, I can simply use WCF's serializer – the NetDataContractSerializer – for the Enterprise Library's cache serialization.

Going over EntLib's code proved very easy – the Caching project has a class called SerializationUtility that exposes two methods – ToBytes() and ToObject(). I'll reproduce the entire method, just to illustrate how simple it is:

public static byte[] ToBytes(object value)
{
   if (value == null)
   {
      return null;
   }

   byte[] inMemoryBytes;
   using (MemoryStream inMemoryData = new MemoryStream())
   {
      new BinaryFormatter().Serialize(inMemoryData, value);
      inMemoryBytes = inMemoryData.ToArray();
   }

   return inMemoryBytes;
}

Given this simple method (and its even simpler brother, ToObject()) it's not hard to see that all the work that needs to be done is adding a reference to System.Runtime.Serialization and replacing BinaryFormatter with NetDataContractSerializer – and that's it. Their methods have identical signatures, so there's no work there, either.
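
In other words, the only line in the method body that actually changes is the one constructing the serializer. A sketch of the edit:

// Before: new BinaryFormatter().Serialize(inMemoryData, value);
// After – NetDataContractSerializer implements IFormatter, so the call is identical:
new NetDataContractSerializer().Serialize(inMemoryData, value);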

The lovely thing about EntLib is that it comes with comprehensive unit tests. The only thing I did after making this change was let NUnit chew on the 200-odd test methods defined and give me a positive result – and I'm good to go.

I've attached the new SerializationUtility.cs for those too lazy to make the change themselves, and the compiled Microsoft.Practices.EnterpriseLibrary.Caching.dll for those who want to just drop it in. Enjoy.

WCF Serialization Part 2c: Hacking the NameValueCollection (unsuccessfully)

As we mentioned here and here, I've been struggling to get the NameValueCollection object to pass through WCF serialization. Please read those first two posts for some context.


In the previous episodes, we saw that we couldn't get WCF to serialize the NameValueCollection (NVC, from now on) because it incorrectly marked it for CollectionDataContract serialization, then choked because it had no Add(object) method.


So, taking the most direct approach, I subclassed the NameValueCollection and added my own Add(object) implementation to see if this Jedi mind trick would let WCF work with the NVC class:


public class NVC : NameValueCollection
{
   public void Add(object obj)
   {
   }
}


Once I did this, WCF stopped yelling at me that the contract was invalid, but (naturally) my data would vanish during transfer.


So next I tried putting a breakpoint inside my Add() method to see what object is passed to it – I was hoping for something along the lines of a KeyValuePair or even a DictionaryEntry, something I can use to manually recreate my NVC.

Turns out there's no such luck – the NVC uses an internal type called NameObjectEntry for storage, but it's not exposed externally. Even the GetEnumerator() method returns an enumerator that only goes over the keys, not the key/value pairs.
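
A quick demonstration of that enumerator behavior:

NameValueCollection nvc = new NameValueCollection();
nvc.Add("color", "blue");

// Prints "color" – the enumerator yields only the keys, never the values.
foreach (object entry in nvc)
{
   Console.WriteLine(entry);
}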

This means that when recreating my dictionary, the value passed to my Add() method is a single string with the key name only, and the value is lost in translation.


No luck there, either.

WCF Serialization Part 2b: WCF Collection Serialization

As we mentioned here, I'm currently struggling with getting WCF to serialize my NameValueCollection object on a service operation. The previous post goes over the general details, while here I'll dive a bit deeper. It's recommended to read the first part before continuing.


I don't know exactly what logic the WCF serialization engine uses to determine which data contract will be selected for serialization. I tried wading in with Reflector but got a bit lost. I managed to track down some interesting places, though, both in the System.Runtime.Serialization assembly:


  • System.Runtime.Serialization.DataContract.CreateDataContract() 

This is apparently called when the serializer needs to decide what datacontract applies to the object, if nothing more explicit is found. We can see in line 38 that the function tries to create an instance of the collection type, which leads us to:


  • System.Runtime.Serialization.CollectionDataContract.IsCollectionOrTryCreate() 

This method checks whether the type is a collection in various ways (interfaces implemented, etc.), and has an explicit check for an Add method that receives an object parameter (line 110):

itemType = type3.IsGenericType ? type3.GetGenericArguments()[0] : Globals.TypeOfObject;
CollectionDataContract.GetCollectionMethods(type, type3, new Type[] { itemType }, false, out info2, out info1);
if (info1 == null)
{             
   // Handle invalid collection
} 


I'm not entirely clear on the whole logic flow, but these two places seem to indicate that the WCF engine sees the NVC, flags it as a collection (maybe because it implements ICollection), then makes sure it stands up to its strict requirements (stricter than just implementing ICollection) and throws an exception when it doesn't.


The big question here is why the IsCollectionOrTryCreate() method, which is supposed to return false when the type is not a collection, instead chooses to throw an exception when it meets an incompatible collection, rather than returning false and letting the ISerializable handler take it from there.


In my next post: Some ugly hacks that also didn't work.

WCF Serialization Part 2a: NameValueCollection Denied!

As we all know, IDictionaries aren't serializable. This has been a cause of much concern and consternation throughout the years. Throughout these trying times, however, we had one shining beacon in the night – the NameValueCollection in the System.Collections.Specialized namespace was a completely serializable implementation of a string/string dictionary, used by the framework for everything from configuration settings to ASP query strings.

Now WCF comes along, and with it much serialization goodness, including the ability to easily serialize IDictionaries over the wire. While this is extremely convenient, we have to remember that IDictionaries are still not really serializable – they're just special-cased in WCF. This means that if we want to transfer a dictionary over the wire and then persist it to disk (using the Caching Application Block, for instance) we're still out of luck.

So I found myself coming back to the familiar old NameValueCollection (NVC from now on, for brevity) and passing that over the wire. Imagine my surprise, then, to realize that WCF fails to serialize an NVC in a service operation.

The error received is a System.Runtime.Serialization.InvalidDataContractException, saying that:
Type 'System.Collections.Specialized.NameObjectCollectionBase' is an invalid collection type since it does not have a valid Add method with parameter of type 'System.Object'

This is true – the NVC (and its base type, NameObjectCollectionBase) doesn't implement IList, only ICollection. Unlike IList or IDictionary, ICollection doesn't specify an Add() method – that is added separately by each of its children. This means the NVC is free to define its own Add method, and in our case it adds two overloads – one receiving a string/string pair, the other receiving a whole NVC to merge.

Going back to Sowmy Srinivasan's blog entry linked above, we see that the WCF serialization engine uses these precedence rules:

  1. CLR built-in types
  2. Byte array, DateTime, TimeSpan, GUID, Uri, XmlQualifiedName, XmlElement and XmlNode array
  3. Enums
  4. Types marked with DataContract or CollectionDataContract attribute
  5. Types that implement IXmlSerializable
  6. Arrays and Collection classes including List, Dictionary and Hashtable.
  7. Types marked with Serializable attribute including those that implement ISerializable.

We can see that if I have an ICollection that's also marked as ISerializable (like the NVC), the built-in support for collections will kick in first, despite the type being explicitly marked for serialization.

This seems to be a bug in WCF's handling – on one hand it treats the NVC as a collection class, but then it immediately dismisses it as unacceptable, without letting it fall back on the 7th serialization option. Unfortunately, we can't mark a class to explicitly NOT take part in the built-in collection serialization – not even by marking it [DataContract] instead of [CollectionDataContract].

Right now I have no solution for this problem. I'm using a Dictionary with an explicit implementation of IXmlSerializable, and manually serializing it before passing it on to my Cache manager.
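
For the curious, the workaround's general shape is below – a rough sketch rather than my production code, with invented element and attribute names. Because IXmlSerializable sits at #5 in the precedence list above, it wins over the collection handling at #6:

public class SerializableStringDictionary : Dictionary<string, string>, IXmlSerializable
{
   public XmlSchema GetSchema()
   {
      return null;
   }

   public void ReadXml(XmlReader reader)
   {
      bool isEmpty = reader.IsEmptyElement;
      reader.ReadStartElement();
      if (isEmpty) return;

      while (reader.NodeType != XmlNodeType.EndElement)
      {
         // Each pair is stored as <entry key="...">value</entry>.
         string key = reader.GetAttribute("key");
         this[key] = reader.ReadElementContentAsString("entry", "");
      }
      reader.ReadEndElement();
   }

   public void WriteXml(XmlWriter writer)
   {
      foreach (KeyValuePair<string, string> pair in this)
      {
         writer.WriteStartElement("entry");
         writer.WriteAttributeString("key", pair.Key);
         writer.WriteString(pair.Value);
         writer.WriteEndElement();
      }
   }
}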

In my next post I'll go over some of the deep digging I did to get to these conclusions.

WCF Serialization part 1: Interfaces, Base classes and the NetDataContractSerializer

One of WCF's goals is interoperability with standard protocol stacks, like WSE or other WS-* implementations. This led, I am told, to the design of WCF's default object serializer – the DataContractSerializer. This handy little engine serializes primitive types, common objects, enums, Collections and anything that implements IXmlSerializable or ISerializable, giving us a much nicer out-of-the-box-and-over-the-wire experience for our objects.

Because of the aforementioned design goal, however, it has one little problem – the serialized data it generates does not contain information about the .NET CLR type from which this data was serialized. This makes sense when you have interoperability in mind – the datatypes you pass should be described in your service metadata, such as WSDL, and not rely on an implementation detail like the CLR type itself.

When we're writing closed systems, however, where we control both ends of the pipe and use WCF code, it can get limiting. Consider this scenario:

[ServiceContract]
interface MyService
{
   [OperationContract]
   void SendMessage (IMessage message);
}

This is a pretty realistic simplification of my current system, and we often have interfaces or abstract base classes defined in our service contracts.

What happens now? Let's say I try to send a concrete IMessage implementation to this contract. When the service receives the message and tries to deserialize the message parameter, it's stuck – there's no way to create an abstract IMessage from the serialized data, and the data carries no information on how to deserialize it.
If my contract defined this:

[OperationContract]
void SendMessage (MessageBase message);

I would be in the same jam – it would try to deserialize a DerivedMessage object into a MessageBase – at best, there would be loss of data. At worst (and as it happens) it simply fails to work.

The first solution that WCF's object model offered is to explicitly mark the service with the concrete types it can receive. This is a stop-gap measure that works, but takes the whole point out of using interfaces and inheritance trees:

[OperationContract]
[ServiceKnownType(typeof(DerivedMessage))]
void SendMessage (MessageBase message);

This would require me to add a ServiceKnownType attribute for every single concrete type, past, present and future. Obviously, not a good solution.

An alternate method involved a daring bit of magic:

[OperationContract]
[ServiceKnownType("MyKnownTypeMethod")]
void SendMessage (MessageBase message);

In this case, some backstage voodoo looks for a static method of that given name, calls it during the type's static constructor and asks it to return a list of Known Types. Not only is this totally unintuitive, it also simply doesn't work, at least for me. Maybe it's the beta, maybe it's me.

The third solution came to me from the future. I was shaking my fist at the heavens, asking why WCF couldn't simply serialize the type names inside, as is done with Remoting – interop isn't my concern, and it's silly to lose such a powerful feature. My answer came from Aaron Skonnard's blog, and from his August 2006 MSDN Magazine article, not quite yet published:

It seems the standard DataContractSerializer has a shy little brother called the NetDataContractSerializer. The 'Net' here, as in other parts of WCF, means “.NET”. This is a serializer that can be used when both ends of the stream are running .NET, and it can thus embed the type information in the serialized blob. This makes all the fudging around with known types completely unnecessary, and gives us the flexibility we had with earlier methods.

Using it, unfortunately, is a closely held secret. Skonnard's blog is the only mention of the name outside the sketchy SDK documentation. It seems we have to write our own OperationBehavior attribute (which Skonnard graciously supplies) and put it on our OperationContracts. The code can be found on his page, and usage is as follows:

[ServiceContract]
interface MyService
{
   [OperationContract]
   [NetDataContractFormat]
   void SendMessage (IMessage message);
}

And that's it. The new attribute will add code that will replace the Formatter used during serialization, and from now on we're on easy street.
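
For reference, here's a minimal reconstruction of the idea – not Skonnard's actual code (see his blog for that), and written against the released .NET 3.0 API rather than the beta bits: an attribute that swaps the operation's DataContractSerializerOperationBehavior for one that hands back a NetDataContractSerializer.

public class NetDataContractSerializerOperationBehavior : DataContractSerializerOperationBehavior
{
   public NetDataContractSerializerOperationBehavior(OperationDescription operation)
      : base(operation) { }

   public override XmlObjectSerializer CreateSerializer(
      Type type, string name, string ns, IList<Type> knownTypes)
   {
      return new NetDataContractSerializer(name, ns);
   }

   public override XmlObjectSerializer CreateSerializer(
      Type type, XmlDictionaryString name, XmlDictionaryString ns, IList<Type> knownTypes)
   {
      return new NetDataContractSerializer(name, ns);
   }
}

public class NetDataContractFormatAttribute : Attribute, IOperationBehavior
{
   public void AddBindingParameters(OperationDescription description, BindingParameterCollection parameters) { }
   public void Validate(OperationDescription description) { }

   public void ApplyClientBehavior(OperationDescription description, ClientOperation proxy)
   {
      ReplaceSerializerBehavior(description);
   }

   public void ApplyDispatchBehavior(OperationDescription description, DispatchOperation dispatch)
   {
      ReplaceSerializerBehavior(description);
   }

   private static void ReplaceSerializerBehavior(OperationDescription description)
   {
      // Swap the default serializer behavior for our NetDataContractSerializer-based one.
      DataContractSerializerOperationBehavior inner =
         description.Behaviors.Find<DataContractSerializerOperationBehavior>();
      description.Behaviors[description.Behaviors.IndexOf(inner)] =
         new NetDataContractSerializerOperationBehavior(description);
   }
}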

Notice that we have to set this attribute on every operation that needs it. I was originally frustrated with not getting it to work because I instinctively put the attribute not on the SendMessage operation but on the IMessage definition – it stood to reason that it belongs on the serializee, not the serializer. Once I got my head wrapped around that, it turned out you can't put it on the ServiceContract either, but have to apply it to each operation separately.

It seems like a lot of work, but it really is much simpler than the alternatives above. Thank you, Aaron, for revealing this hidden gem!


WCF and Non-polymorphic bindings

I've been using WCF for the past few months and on the whole, I think the programming model works. I'm getting to really like the extensibility model and the whole deal seems to click rather well. One place where I feel it's a bit broken, though, is around the object model for Bindings.


The concept is simple – one of WCF's main selling points is having my communication channel be interchangeable, so I can replace a TCP/IP connection with HTTP or MSMQ almost seamlessly. This is translated into code by having each service specify a Binding which describes the connection properties, and all these specific Binding objects – NetTcpBinding, NetMsmqBinding, etc. – derive from a shared Binding class.

It seems WCF sees seven layers to each binding, according to the SDK docs – from top to bottom:

  • Transaction Flow – TransactionFlowBindingElement (not required)
  • Reliability – ReliableSessionBindingElement (not required)
  • Security – Symmetric, Asymmetric, Transport-Level (not required)
  • Shape Change – CompositeDuplexBindingElement (not required)
  • Transport Upgrades – SSL stream, Windows stream, Peer Resolver (not required)
  • Encoding – Text, Binary, MTOM, Custom (required)
  • Transport – TCP, Named Pipes, HTTP, HTTPS, flavors of MSMQ, Custom (required)

You would expect to have these as properties of the base Binding object – after all, if every binding has a required Encoding property, it makes sense, right?


Apparently this is not the case. The base Binding object contains only the most basic information – the naming information of the binding and the timeout values, as well as shared functionality for opening and closing connections. Encoding? Security? All of these are implemented individually in each binding class.

Not only that, but these settings are entirely incompatible – not only do I need to explicitly cast my Binding instance to a NetTcpBinding, the Security property of that object is of type NetTcpSecurity – a type that has nothing in common with, say, NetNamedPipeBinding's NetNamedPipeSecurity object. No inheritance, no shared interfaces, nothing.


Another example is the ReaderQuotas property, an XmlDictionaryReaderQuotas object that is defined separately but identically in every Binding implementation. This means I can't write code that sets the ReaderQuotas in a generic fashion, since there's no common interface or base class.
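
To illustrate, a generic "set the reader quotas" helper degenerates into per-type special cases. A sketch:

// Hypothetical helper – one branch per binding, all doing the same thing:
if (binding is NetTcpBinding)
   ((NetTcpBinding)binding).ReaderQuotas.MaxArrayLength = 512000;
else if (binding is NetNamedPipeBinding)
   ((NetNamedPipeBinding)binding).ReaderQuotas.MaxArrayLength = 512000;
else if (binding is WSHttpBinding)
   ((WSHttpBinding)binding).ReaderQuotas.MaxArrayLength = 512000;
// ...and so on, for every binding type we might meet.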


I find this very annoying. I'm trying to write a ConfigurationService component that supplies a client with the required binding information at runtime, but I find myself forced to hardcode several assumptions – or else write very ugly try/catch code to explicitly see what type my Binding is and set the appropriate properties.


Seeing as we're in RC1, it's probably too late for this API to change. I wonder why it was developed this way. I hope the next version of WCF sees this revised.


My Singleton is disappearing!

WCF’s Instance Context model allows me to specify my service’s instantiation behavior – I can get a new instance for every call, have WCF manage a session for me and keep an instance per session, or just get lazy, have my service instantiated as a singleton object, and use the same instance forever and ever. Juval Lowy covered the basics of these features here.


I had a Singleton service defined, exposing several Operations through two different endpoints. After some testing, I noticed that every time a call was received on one of the interfaces the constructor would be called again – my singleton was getting dumped and recreated.


It turns out the operation that was causing the problems was marked like this:


[OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]


Since the transaction behavior was the only lead I had, I started to look into it and checked out the other properties of the OperationBehavior attribute. I found the following property (warning: links point to a local MSDN installation):

ReleaseInstanceMode – Gets or sets a value that indicates when, in the course of an operation invocation, to recycle the service object.

This property allows me to declaratively tell WCF to drop my current instance before, after, or before AND after my call. This lets stateless service calls ensure that they don't carry over any state between operations. Unfortunately, the default value for ReleaseInstanceMode is None – so this wasn't the reason I lost my instance.


The MSDN documentation for ReleaseInstanceMode, however, mentioned the following tidbit of information:



In transaction scenarios, the ReleaseInstanceMode property is often used to ensure that old data associated with the service object is cleaned up prior to processing a method call. You can also ensure that service objects associated with transactions are recycled after the transaction successfully completes by setting the ReleaseServiceInstanceOnTransactionComplete property to true.


This led me to the ServiceBehavior's ReleaseServiceInstanceOnTransactionComplete property, a more general setting that automatically recycles the service instance whenever a transaction completes successfully. Unlike the ReleaseInstanceMode property, this one is set to true by default, meaning that my transactional operation always released my Singleton instance after each successful run.
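
So the fix is simply to turn that default off on the service. Sketched here against a hypothetical singleton service, contract omitted:

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ReleaseServiceInstanceOnTransactionComplete = false)]
public class MyService : IMyService
{
   [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
   public void DoWork()
   {
      // The singleton instance now survives each completed transaction.
   }
}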


The moral of the story? The usual. Always check default values, especially on WCF’s Behavior attributes that have lots and lots of properties. Good luck.