Loading a Type specified in web.config, for example a Ninject Module

Yesterday's article about having your own configuration section in web.config included the injectModule attribute:

<myApp injectModule="MyApp.MyAppTestNinjectModule">

The reason I am doing that is because I use Ninject 2 to do dependency injection in my ASP.net MVC 2 app. Basically, I have multiple database backends for my application, and the TestNinjectModule implements an in-memory List so that I can develop the application without caring about data persistence yet.

Ninject uses so-called Modules that specify the bindings. I have two modules, and my Test Module looks like this:

public class MyAppTestNinjectModule : NinjectModule
{
    public override void Load()
    {
        // As the Test Repositories usually use internal Lists, they need to be Singleton
        Bind<IProjectRepository>().To<TestProjectRepository>().InSingletonScope();
        Bind<INoteRepository>().To<TestNoteRepository>().InSingletonScope();
    }
}

So every time Ninject sees IProjectRepository, it knows that it should give me the class TestProjectRepository. (Bonus tip: when using List<T> as data storage, use InSingletonScope to make sure one instance is shared throughout the entire application.) My second Ninject module looks identical, except that I use MSSqlProjectRepository.

I wanted to have this configurable so that I can easily change between them or add more. Previously, creation of the Kernel (CreateKernel in global.asax) looked like this:

protected override IKernel CreateKernel()
{
    return new StandardKernel(new MyAppTestNinjectModule());
}

See that call to new MyAppTestNinjectModule()? If I want to change it, I need to recompile and redeploy the App.

The change itself is straightforward if you know a bit of reflection. Here is what we need to do:

  1. Get the name of the Class as a string
  2. Find the Type that has this name
  3. Instantiate it
  4. Pass the instance to the StandardKernel constructor

Here is the overly commented code to do that:

protected override IKernel CreateKernel()
{
    // MyAppSettings is the class that reads the setting from
    // web.config
    string moduleName = MyAppSettings.InjectModule;

    // Type.GetType takes a string and tries to find a Type with
    // the *fully qualified name* - which includes the Namespace
    // and possibly also the Assembly if it's in another assembly
    Type moduleType = Type.GetType(moduleName);

    // If Type.GetType can't find the type, it returns Null
    NinjectModule module;
    if (moduleType != null)
    {
        // Activator.CreateInstance calls the parameterless constructor
        // of the given Type to create an instance. As this returns object
        // you need to cast it to the desired type, NinjectModule
        module = Activator.CreateInstance(moduleType) as NinjectModule;
    }
    else
    {
        // If the Type was not found, you need to handle that. You could instead
        // initialize Module through some default type, for example
        // module = new MyAppDefaultNinjectModule();
        // or error out - whatever suits your needs
        throw new MyAppConfigException(
             string.Format("Could not find Type: '{0}'", moduleName),
             "injectModule");
    }

    // As module is an instance of a NinjectModule (or derived) class, we
    // can use it to create Ninject's StandardKernel
    return new StandardKernel(module);
}
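One pitfall worth noting: given a plain type name, Type.GetType only searches the calling assembly and the core library, so if your module lives in a separate assembly, the string in web.config must be assembly-qualified. A small self-contained sketch using only Framework types (the MyApp name in the comment is hypothetical):

```csharp
using System;
using System.Text;

class TypeLookupDemo
{
    static void Main()
    {
        // A bare namespace-qualified name works here because StringBuilder
        // lives in the core library, which Type.GetType always searches.
        Type found = Type.GetType("System.Text.StringBuilder");
        Console.WriteLine(found != null); // True

        // For a type in another assembly you would need the assembly-qualified
        // name, e.g. "MyApp.MyAppTestNinjectModule, MyApp.Modules" (hypothetical).

        // Activator.CreateInstance then gives you an instance to cast.
        object instance = Activator.CreateInstance(found);
        Console.WriteLine(instance is StringBuilder); // True
    }
}
```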

Having a nested Configuration Section in web.config

I’m currently working on an ASP.net application for myself, and I was thinking about configuration. I wanted to use web.config as this is the proper way, but I was unsure what the best approach is. I did not want to use appSettings, as I feel my own “block” is just more tidy. Here is how I want the actual configuration to look:

<myApp injectModule="MyApp.MyAppTestNinjectModule">
  <localeSettings longDateFormat="MM/dd/yyyy HH:mm:ss" />
</myApp>

So I have “myApp” as my main element, with an attribute “injectModule” and a child element “localeSettings”. Child elements allow me to group my settings further: I could in theory have dozens of settings, and having them all as attributes on the main myApp element would be messy.

Now, in order to use the myApp element, I need to tell .net about it in the <configSections> section of the web.config. There are two options: section and sectionGroup. I thought that sectionGroup was the correct one, but sectionGroups cannot have attributes, only child elements – they are just containers. Sections, on the other hand, can have both attributes and child elements. I don’t know exactly why one would use sectionGroup, but I haven’t investigated this too much as it was unsuitable here.
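For illustration, a sectionGroup would look roughly like this (element and type names are hypothetical) – the group itself carries no attributes and merely nests sections:

```xml
<configSections>
  <sectionGroup name="myAppGroup">
    <section name="general" type="MyApp.GeneralSection" />
    <section name="locale" type="MyApp.LocaleSection" />
  </sectionGroup>
</configSections>

<myAppGroup>
  <general injectModule="MyApp.MyAppTestNinjectModule" />
  <locale longDateFormat="MM/dd/yyyy HH:mm:ss" />
</myAppGroup>
```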

So we need to create a section by adding this to the configSections xml element:

<section name="myApp" type="MyApp.MyAppConfigurationSection"/>

What does this do? It tells ASP.net that we will have an xml element called myApp, and that it is handled through the MyApp.MyAppConfigurationSection class. (Depending on where that class lives, you may need to assembly-qualify the type, e.g. type="MyApp.MyAppConfigurationSection, MyApp".) What does this class look like?

public class MyAppConfigurationSection : ConfigurationSection
{
    private static ConfigurationPropertyCollection _properties;
    private static ConfigurationProperty _propInjectModule;

    static MyAppConfigurationSection()
    {
        _propInjectModule = new ConfigurationProperty("injectModule", typeof (string),
                                                      "MyApp.MyAppNinjectModule",
                                                      ConfigurationPropertyOptions.None);
        _properties = new ConfigurationPropertyCollection { _propInjectModule };
    }

    protected override ConfigurationPropertyCollection Properties
    {
        get
        {
            return _properties;
        }
    }
    
    public string InjectModule
    {
        get { return this[_propInjectModule] as string; }
        set { this[_propInjectModule] = value; }
    }
}

Whoa, scary… But let’s look at this piece by piece. First of all, your class inherits from ConfigurationSection. This is the base class in the .net Framework to represent a section.

We start by defining a ConfigurationProperty named _propInjectModule and initializing it in the static constructor. The four parameters to the new ConfigurationProperty constructor are:

  1. The name of the attribute/property as it appears in the web.config later
  2. The Type of the property
  3. The default value – this is important if the parameter is not set in the web.config
  4. Options for this attribute. I have not checked what IsKey and IsDefaultCollection do, but if IsRequired is set, the attribute must be present in the web.config

So in a nutshell, _propInjectModule is a string that defaults to MyApp.MyAppNinjectModule and is represented in the web.config as injectModule.
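As an aside, if you ever want the attribute to be mandatory rather than defaulted, a variant with IsRequired would look like the sketch below (hypothetical class name; with IsRequired set, reading a config file that omits the attribute throws a ConfigurationErrorsException, so no default value is supplied):

```csharp
using System.Configuration;

// Hypothetical variant of the section with a mandatory injectModule attribute.
public class StrictAppConfigurationSection : ConfigurationSection
{
    // Default value is null because the attribute must always be present.
    private static readonly ConfigurationProperty _propInjectModule =
        new ConfigurationProperty("injectModule", typeof(string), null,
                                  ConfigurationPropertyOptions.IsRequired);

    private static readonly ConfigurationPropertyCollection _properties =
        new ConfigurationPropertyCollection { _propInjectModule };

    protected override ConfigurationPropertyCollection Properties
    {
        get { return _properties; }
    }

    public string InjectModule
    {
        get { return this[_propInjectModule] as string; }
    }
}
```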

The ConfigurationPropertyCollection is needed by .net for some internal plumbing, I guess – I haven’t checked on its exact purpose.

Finally, we have the public string InjectModule property. This is used to access the actual value of the property. If it’s not set in the web.config, the default value is returned instead. As this goes through the indexer, I’m guessing that this is why _properties is important.

So, how do we use it? There are multiple options, but I found having a static function most useful:

public static MyAppConfigurationSection GetMyAppConfig()
{
    var result = WebConfigurationManager.GetWebApplicationSection("myApp") as MyAppConfigurationSection;
    return result ?? new MyAppConfigurationSection();
}

This will try to get the myApp section of the web.config. If this returns null (i.e. if there is none), it will just initialize a new instance of the class, so we get an object in any case. This works because we have default values for all properties (well, there is only one for now). If you want required properties, you will likely need stricter handling here, such as throwing a configuration exception.

Okay, so try accessing MyAppConfigurationSection.GetMyAppConfig().InjectModule and you should get either the Default Value of “MyApp.MyAppNinjectModule” or whatever you have specified in the <myApp injectModule="MyApp.MyAppTestNinjectModule"> element of your web.config.

So far, so good. But what about the nested localeSettings element? If you just add it to the web.config, you should get an error message explaining that it couldn’t parse this element. Let’s wire it up.

If you want child elements to a ConfigurationSection, you have to use a ConfigurationElement. I do not exactly know what the real difference between a Section and an Element is and why we need three different layers of nesting (sectionGroup – section – element), but I’m sure the Framework designers had something in mind here.

Let us first implement the .net Class for the ConfigurationElement:

public class MyAppLocaleSettingsElement : ConfigurationElement
{
    private readonly static ConfigurationPropertyCollection _properties;
    private readonly static ConfigurationProperty _propLongDateFormat;

    protected override ConfigurationPropertyCollection Properties
    {
        get
        {
            return _properties;
        }
    }


    static MyAppLocaleSettingsElement()
    {
        _propLongDateFormat = new ConfigurationProperty("longDateFormat", typeof (string),
                                                        "ddd, MMM dd, yyyy HH:mm",
                                                        ConfigurationPropertyOptions.None);
        _properties = new ConfigurationPropertyCollection { _propLongDateFormat };
    }

    public string LongDateFormat
    {
        get { return this[_propLongDateFormat] as string; }
        set { this[_propLongDateFormat] = value; }
    }
}

As you see, this works exactly the same as a ConfigurationSection: We have a string property with a Default Value and a public Property.

What we need to do now is to wire it into the MyAppConfigurationSection class. Here are the changes to the class:

public class MyAppConfigurationSection : ConfigurationSection
{
    private readonly static ConfigurationProperty _propLocaleSettings;
    static MyAppConfigurationSection()
    {
        _propLocaleSettings = new ConfigurationProperty("localeSettings", typeof(MyAppLocaleSettingsElement),
                                                        new MyAppLocaleSettingsElement(),
                                                        ConfigurationPropertyOptions.None);
        _properties = new ConfigurationPropertyCollection { _propInjectModule, _propLocaleSettings };
    }
    public MyAppLocaleSettingsElement LocaleSettings
    {
        get { return this[_propLocaleSettings] as MyAppLocaleSettingsElement; }
    }
}

We created a new property that is represented by the “localeSettings” element in the xml. For the default value, I pass in a new instance – again, this works because I specified default values. Finally, I have a new property, LocaleSettings, which exposes the child element through its getter.

To access this, we can do MyAppConfigurationSection.GetMyAppConfig().LocaleSettings.LongDateFormat and either retrieve the default value or the one specified in the web.config.

Here is a recommendation though: I would recommend adding a separate Settings class that your application uses. This class sits between the MyAppConfigurationSection class and your application (it serves as a Facade). Why? To make sure that you can change your configuration structure without causing too many breaking changes. Maybe I decide that “injectModule” shouldn’t be an attribute on the myApp element and should instead be a child element? Or maybe I don’t want to use web.config at all and instead use a SQL Server database or a web service? Or maybe all of a sudden I decide that injectModule is a required parameter, which means I now need to handle the case where the user hasn’t added a myApp element to their web.config?

Here is one example of Settings class:

public static class MyAppSettings
{
    private static readonly MyAppConfigurationSection Config;

    static MyAppSettings()
    {
        Config = MyAppConfigurationSection.GetMyAppConfig();
    }

    public static class LocaleSettings
    {
        public static string LongDateFormat
        {
            get
            {
                return Config.LocaleSettings.LongDateFormat;   
            }
        }
    }

    public static string InjectModule
    {
        get { return Config.InjectModule; }
    }
}

I’m using nested classes to group it, but I can now change the MyAppConfigurationSection class (or completely remove it) and only need to do changes in the MyAppSettings Facade-class without affecting the rest of my application.

If you really want to dig deep into this, check out these three links (thanks to marc_s):

An extension Method to Encode Strings

For a project I needed a method to Encode strings. I needed multiple modes, for example HtmlEncode but also some proprietary ones. To keep this as simple as possible, I wrote an extension method and an enum holding the possible modes. The function then decides which actual encoding function to pick (the signature needs to be string FunctionName(string input)) and invokes it.

This abridged example contains two functions: One for Html Encoding and one dummy function that just returns the string. You can easily extend this if needed.

public enum StringEncodeMode
{
    None,
    HtmlEncode
}

public static class StringExtensions
{
    public static string Encode(this string input, StringEncodeMode encodeMode)
    {
        Func<string, string> encodeFunc;
        switch (encodeMode)
        {
            case StringEncodeMode.HtmlEncode:
                encodeFunc = x => HttpUtility.HtmlEncode(x);
                break;
            case StringEncodeMode.None:
            default:
                encodeFunc = x => x;
                break;
        }
        return encodeFunc(input);
    }
}
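Usage is then a one-liner on any string (this builds on the enum and extension class above; HttpUtility comes from System.Web):

```csharp
string html = "if (a < b)".Encode(StringEncodeMode.HtmlEncode);
// html is now "if (a &lt; b)"

string raw = "if (a < b)".Encode(StringEncodeMode.None);
// raw is unchanged: "if (a < b)"
```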

Using a Converter to convert from a Model to a Business Class

In yesterday’s post about using RestSharp, I spent a paragraph at the end talking about Model vs. Business classes and about converting between them. To recap:

  • Data Model classes are very closely related to the underlying data, be it a database or a web service. They are very fragile as they more or less mirror the data 1:1 and therefore often change. They may have deep nesting and you don’t have much freedom designing them as they need to match the data.
  • Business classes are an abstraction of the data model and are used by most parts of the application. They are usually rather flat and mostly stable, and you have the freedom to model them to suit your application, rather than mirror the underlying data.

Now, in order to convert between Data Model and Business classes, you need some sort of converter function. This could be a secondary constructor in the Business class that takes a Model class (not recommended, as this creates tight coupling), or it could be a separate class/function. Generally, I always try to find something that’s already in the Framework, and conveniently there is a Converter<TInput, TOutput> delegate in the Framework.

What does a standardized delegate allow us to do? It allows us to have a standardized converter function, of course! Let me clarify that: In the example yesterday, we wanted to Convert from a PowerPlantsDTO to a List<PowerPlant>. To do so, we had to run RestClient.Execute<PowerPlantsDTO> to get the DTO, perform the conversion to the List<PowerPlant> and work with that. But what if I want to implement the searchLocation service in our example Web Service? I would want to convert from a LocationsDTO to a List<Location>, right?

I would write a bunch of converter functions and then have two service functions: one for SearchPlants that executes the request, converts it and returns the converted result, and one for SearchLocation that executes the request, converts it, and returns the converted result. As you see, these two functions do more or less the same thing; they just work with different classes and parameters to the RestRequest. So, following the DRY principle, we should centralize them as much as possible.

So we want a general “Execute and Convert” function. Our two search functions should only do what is specific to them (that is: Constructing a proper RestRequest), but the execute and convert part is shared (except for the classes) and should be centralized therefore. And that is why standardized delegates matter. Simplified, Delegates are references to functions (you could call them function pointers, although they are not the same as FP’s in C++ – read more about them here) and allow us to pass a function into another function.

Let’s create an extension method to the RestClient:

public static class RestClientExtensionMethods
{
    public static TBusiness Execute<TModel,TBusiness>(this RestClient client,
      RestRequest request, Converter<TModel,TBusiness> converter) where TModel : new()
    {
        var restResult = client.Execute<TModel>(request);
        return converter.Invoke(restResult);
    }
}

Whoa, bracket overflow here, so let’s examine that. We have two generic type parameters: TBusiness and TModel. TModel is the class you previously passed into Execute, so this is the Data Model class. The “where TModel : new()” constraint is taken from the existing Execute function – if you don’t know about generic type constraints, MSDN has an article. TBusiness is the desired return type.

What this function does: It calls the existing Execute function using the class specified as TModel. Then, it invokes the Converter function (keep in mind: We are passing a reference to the converter function, not the result of it) and returns the result.

We have a Converter as a parameter to this function, so we need to pass in a function that has a TInput as its single parameter and a TOutput as its return value – check the syntax of the Converter at MSDN:

public delegate TOutput Converter<TInput, TOutput>(TInput input)
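To make the shape concrete, here is a trivial, self-contained Converter instance (unrelated to the REST code, just an illustration of the delegate):

```csharp
using System;

class ConverterDemo
{
    static void Main()
    {
        // A Converter<TInput, TOutput> can wrap any method or lambda that
        // takes one TInput and returns a TOutput.
        Converter<int, string> toHex = i => i.ToString("X");
        Console.WriteLine(toHex(255)); // FF
    }
}
```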

We create a new Class that has the functions. I went with a static class, but a non-static works as well.

public static class Converters
{
    public static PowerPlant ItemToPowerPlant(RestTest.Model.item input)
    {
        return new PowerPlant
                   {
                       PlantName = input.name,
                       City = input.location.city.value,
                       Latitude = input.location.latitude,
                       Longitude = input.location.longitude,
                       ZipCode = input.location.zip
                   };
    }

    public static List<PowerPlant> PowerPlantsDTOToList(PowerPlantsDTO input)
    {
        var result = new List<PowerPlant>();
        foreach(var sourcePlant in input) result.Add(ItemToPowerPlant(sourcePlant));
        return result;
    }
}

In this class, we have two converters: A converter that takes an item and returns a PowerPlant, and a converter that takes a PowerPlantsDTO and returns a List of PowerPlants. To invoke this, we need to create a new delegate that holds the desired function and call our extension method:

var convDelegate = 
  new Converter<PowerPlantsDTO, List<PowerPlant>>(Converters.PowerPlantsDTOToList);
var plants = client.Execute(request, convDelegate);

You may ask yourself, “Why don’t we have to specify TModel and TBusiness on the call to Execute?” – that’s because the C# compiler is smart enough to infer the types. It knows that TModel is PowerPlantsDTO and that TBusiness is List<PowerPlant>. I don’t know exactly how, but I guess it can infer it from the signature of the delegate.

If you run this code now, plants will be a List<PowerPlant> – Nice!

But… that var convDelegate line is rather ugly, isn’t it? I haven’t really found a nice way around it, but you can make it a new method of the static Converters class:

public static Converter<PowerPlantsDTO,List<PowerPlant>>
  GetPowerPlantsDTOToListConverter()
{
    return PowerPlantsDTOToList;
}

You can then simplify the Execute call:

var plants = client.Execute(request,Converters.GetPowerPlantsDTOToListConverter());

The important thing to notice here: you return the function itself, so no brackets! The compiler is smart enough to convert the method group to a Converter, so there is no need to write “return new Converter<PowerPlantsDTO,List<PowerPlant>>(PowerPlantsDTOToList)” instead (even though that would also work).
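To show that method group conversion in isolation, here is a self-contained sketch with made-up names that mirrors the Execute call:

```csharp
using System;
using System.Collections.Generic;

class MethodGroupDemo
{
    // Stands in for PowerPlantsDTOToList: one input, one output.
    static List<string> Stringify(List<int> input)
    {
        return input.ConvertAll(i => i.ToString());
    }

    // Stands in for the Execute extension method.
    static TOut Apply<TIn, TOut>(TIn value, Converter<TIn, TOut> converter)
    {
        return converter(value);
    }

    static void Main()
    {
        // No parentheses after Stringify: the compiler converts the method
        // group to a Converter<List<int>, List<string>> and infers TIn/TOut.
        List<string> result = Apply(new List<int> { 1, 2, 3 }, Stringify);
        Console.WriteLine(string.Join(",", result.ToArray())); // 1,2,3
    }
}
```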

Now, this whole approach may seem incredibly complicated with all those extra functions. And for simple projects, it most likely is. But if you need this for a lot of different conversions, it has a big advantage of offering you a standardized interface throughout the entire application.

I’m undecided though whether RestClient should have that functionality, due to separation of concerns. One could argue that RestClient should only deal with its own problem domain – talking to the REST API and returning a model class – and that someone else should deal with the conversion. In the end, you have to decide what fits your own architecture best. I’ve just shown one way you can standardize and centralize this and hope you can gain something from it.

Using RestSharp to consume RESTful Web Services

Note: This was written a long time ago for the then-current version of RestSharp that had experimental Async support. John and his contributors have updated RestSharp tremendously since then, but by now these samples are outdated and only here for illustrative purposes.

Up until a few days ago, I mainly consumed web services using the “standard” model – SOAP. Now, while that is all great and useful, it’s also a bit painful to debug and work with. So when I discovered REST (Representational State Transfer), it looked like a great alternative. At first glance, REST just looks like a normal URL, and in many cases, when it comes to receiving data, it is. As a web developer, you may (should) know what GET and POST are and how they work, and in order to receive data, you often just do a simple GET request. So why overcomplicate things by wrapping it in a SOAP envelope?

Now, the one big difference is that for REST, you need to know both the address and the format of the request and response up front. To my knowledge, there is no discovery mechanism like there is in SOAP. But to be honest, I never ever used UDDI, and while I tried automatically creating classes using a WSDL-to-C# converter, at the end of the day I was manually implementing the whole thing anyway. SOAP and REST are – in my opinion – two very extreme opposites: SOAP has a ton of specifications and supplementary protocols (WS-* anyone?), which seems to make it great for enterprise applications and semi-closed systems. On the other hand, REST is low-friction, anarchistic and meant for “strangers” to quickly implement. I see REST growing a lot, but the spearheads are very public internet sites like Twitter, Flickr, Del.icio.us, Amazon or Google.

As said, REST has no defined Syntax, but most services nowadays offer XML and JSON, sometimes both via a different URL or parameter. Now, let’s look at one, shall we? Most public REST Services require you to create an account and get an “API-Key” or similar. That is to ensure that the vendor can control and track access easily and to rate-limit the service. I’ve decided to pick a service that does not require authentication to keep this example low friction, and I found CARMA’s list of Power Plants, who have a description of their REST Service here.

Let’s just consume it, shall we? Open up your browser and send it to http://carma.org/api/1.1/searchPlants?location=4338&limit=10&color=red&format=xml

Congratulations – you just consumed a REST Web Service! So what did you do? You sent a GET Request to carma.org, requesting /api/1.1/searchPlants. That’s like making a method call. In C# terms, imagine this method being implemented as

public XmlDocument searchPlants(int location, string color, int limit, string format)

(I’ve left out the other parameters for the sake of making a clear example).

If you are wondering what you just retrieved: Location 4338 is the Los Angeles County area (you could use the searchLocations service to search for locations), Color is the “dirtiness” of the power plants (descending from Red to Green), limit and format are self-explanatory. So this is a list of 10 “red”(=dirty) Power Plants in the Los Angeles area.

The result should look like this:

<items> 
<item> 
    <id>49046</id> 
    <name>WATSON COGEN</name> 
    <carbon> 
        <past>4503176.0000</past> 
        <present>4582168.0000</present> 
        <future>5401482.0000</future> 
    </carbon> 
    <energy> 
        <past>3827727.0000</past> 
        <present>3017826.0000</present> 
        <future>3506896.0000</future> 
    </energy> 
    <intensity> 
        <past>2352.9250</past> 
        <present>3036.7339</present> 
        <future>3080.4910</future> 
    </intensity> 
    <location> 
        <continent> 
            <id>5</id> 
            <value>North America</value> 
        </continent> 
        <country> 
            <id>202</id> 
            <value>United States</value> 
        </country> 
        <latitude>33.8219</latitude> 
        <longitude>-118.2633</longitude> 
        <state> 
            <id>644</id> 
            <value>California</value> 
        </state> 
        <city> 
            <id>60769</id> 
            <value>Carson</value> 
        </city> 
        <metroarea> 
            <id>3203</id> 
            <value>Los Angeles-Long Beach</value> 
        </metroarea> 
        <county> 
            <id>4338</id> 
            <value>Los Angeles</value> 
        </county> 
        <congdist> 
            <id>5298</id> 
            <value>Diane Watson</value> 
        </congdist> 
        <zip>90749</zip> 
    </location> 
</item> 
<item>
  <!-- cut for brevity -->
</item>
</items>

Now, let’s say you want to display this list in your .net application. What you would do now is write a function that opens an HTTP request to fetch the data, a Business class that holds the desired data, a class that converts the returned XML into a List of Business classes, and an abstraction on top of all that, so that this module of your application only exposes a “public YourBusinessClass searchPlants()” method. For simple GET requests, that may be feasible, but why re-invent the wheel? Let’s use an existing library, in this case RestSharp. What is RestSharp? It is a .net library for accessing RESTful APIs.

Why would you use it? Multiple reasons. At the minimum, it implements the HTTP client for you. Also, it automatically converts between the result of the web service (it supports either JSON or XML at the moment) and a custom .net class you create. It makes building requests very easy, and it supports the other HTTP verbs as well. To read any sort of data, you usually use GET requests. But many web services also allow you to write data, e.g. to send a Twitter status update. These requests are often made through POST, or (in “true” RESTful web services) through the less commonly used HTTP verbs PUT and DELETE. RestSharp abstracts all that away from you behind a nice interface.

At the moment, there is no officially released version, but it’s open source and hosted on GitHub. Just click on the green “download” button, open the .sln in Visual Studio 2008 and compile. I’m using commit f565e9d1f7d435ed075b45f686520da8bbe39bd7 for this example (the commit hash is Git’s identifier for a specific revision, similar to a revision number in Subversion). After compiling, you should have two DLLs, RestSharp.dll and Newtonsoft.Json.dll. The latter is only used when working with JSON web services, so in this example (where we consume the XML output), you can ignore it.

Create a new WinForms application and add a Reference to RestSharp.dll. As a first step, we need to create a few classes to hold our data. In this example, we want:

  • A List of Items
  • The Name of each item
  • The Name and ZipCode of the City
  • The Latitude and Longitude

If you look at the XML, you see we need 4 objects:

  1. A class that implements a List of Items
  2. A class that implements an item, which holds the Name and the Location
  3. A class that implements the Location, which holds the ZipCode, Latitude, Longitude and City Name
  4. A class that implements the City, which holds the Name

We are closely mirroring the web service result here. RestSharp is not intended to create Business classes – it’s a library that sits in the Model, and its task is to consume the web service and give you .net objects to work with. For now, create a new folder “Model” in your application and implement the classes.

using System.Collections.Generic;

namespace RestTest.Model
{
    public class CityDTO
    {
        public string value { get; set; }
    }

    public class LocationDTO
    {
        public CityDTO city { get; set; }
        public int zip { get; set; }
        public double latitude { get; set; }
        public double longitude { get; set; }
    }

    public class item
    {
        public string name { get; set; }
        public LocationDTO location { get; set; }
    }

    public class PowerPlantsDTO : List<item> { }
}

Three things to note here: First, the root class should be a subclass of List<T>, and T MUST be named after the element in the XML. That is because the Web Service returns a List of items, and RestSharp uses the Class Name when working with Lists to properly map it.

Then, the properties to be mapped need to be public properties with the same name as their XML Elements. You are of course free to add other elements and you have no obligation to implement properties you’re not interested in.

Also, you are free to pick your own Class Names with the exception of List<T>. I added the suffix DTO to them to indicate that they are merely Data Transfer Objects – their purpose is only to transfer Data from A to B. DTOs usually shouldn’t contain any logic (not even validation), except maybe for overriding ToString.

Let’s add a Button to our Form and add the following Click event code:

using RestSharp;
using RestTest.Model;

private void button1_Click(object sender, EventArgs e)
{
    var client = new RestClient();
    var request = new RestRequest();
    request.BaseUrl = "http://carma.org";
    request.Action = "api/1.1/searchPlants";
    request.AddParameter("location", 4338);
    request.AddParameter("limit", 10);
    request.AddParameter("color", "red");
    request.AddParameter("format", "xml");
    request.ResponseFormat = ResponseFormat.Xml;
    var plants = client.Execute<PowerPlantsDTO>(request);
    MessageBox.Show(plants.Count.ToString());
} 

Run it, click the button, and hopefully you will see a MessageBox with the number 10 in it. So, let me explain what we did here. First of all, we created a RestClient and a RestRequest. The client handles all the sending of requests and can be shared across the whole application if you want (don’t take my word for it, but it looks like it is thread-safe). The request is an individual request, which starts off by specifying the BaseUrl and Action separately (to allow for more flexibility, for example when having different servers for testing and production use). Then, you add a list of parameters. Setting the ResponseFormat is optional – it tells RestSharp what to expect back. Currently, it supports JSON, XML and AutoDetect, which uses the MIME type to find out what it is.

The next line is the magic line, the generic Execute<T> method. RestSharp has two methods: Execute and Execute<T>. The non-generic Execute just gives you back a RestResponse object, which contains some headers and a raw chunk of XML (or JSON). On the other hand, Execute<T> will give you back your Model-class populated with data. This is not magic, but a big time saver. To verify that it really worked, add a ListBox to the Form and replace the MessageBox.Show call with this:

listBox1.Items.Clear();
foreach(var plant in plants)
{
    listBox1.Items.Add(string.Format("{0} - {1} - {2}", plant.name,
                                     plant.location.city.value,
                                     plant.location.zip));
}

Click the Button again and you should see something like this:

RestSharpExample

Awesome! Or… is it?

You may be tempted to now use the PowerPlantsDTO and item classes throughout your application. Or you may be downright turned off by the perceived ugliness of having a lot of nested classes and violating the Law of Demeter with that ugly plant.location.city.value call. So let's clarify what we have here: we have a Data Model, not a Business Class. Those 4 classes are a 1:1 mapping to the web service. Their purpose is to take the Response and turn it into something we can work with, but that mapping is very fragile. Imagine the Web Service changes and moves the Zip Code from the Location into the City node – that would be a big breaking change.

So what we really want is a Business Class to work with – that is a class that you use throughout your application and that should almost never change, even if the underlying data model changes. Let’s implement this class then.

public class PowerPlant
{
    public string PlantName { get; set; }
    public string City { get; set; }
    public int ZipCode { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }

    public PowerPlant(){}

    public PowerPlant(RestTest.Model.item inputItem)
    {
        PlantName = inputItem.name;
        City = inputItem.location.city.value;
        ZipCode = inputItem.location.zip;
        Latitude = inputItem.location.latitude;
        Longitude = inputItem.location.longitude;
    }

    public override string ToString()
    {
        return string.Format("{0} ({1}, {2})", PlantName, City, ZipCode);
    }
}

Change the bottom of the Button’s Click event to this:

var plants = client.Execute<PowerPlantsDTO>(request);

var plantsList = new List<PowerPlant>();
foreach (var plant in plants) plantsList.Add(new PowerPlant(plant));

listBox1.Items.Clear();
foreach (var plant in plantsList)
{
    listBox1.Items.Add(plant);
}

Clicking the Button should give you a similar screen to the one above.

So what have we done? We created a Business Class which is a flat representation of a Power Plant – no more location.city.value, instead just a string called City. We also implemented a constructor that takes an item and converts it. As a bit of icing, we changed ToString to return it nicely formatted. In our Button Click event, we added two lines that convert a List of items to a List of PowerPlants.

Now, if the underlying data model changes, we only need to change the converter code in the constructor of PowerPlant and we’re done. This alone is a huge win already.

But is this everything? We could stop here and move on, but I want to say one or two things about the converter code.

Currently, it sits in a second constructor of the PowerPlant class. Some people argue that this is a violation of the Separation of Concerns principle: PowerPlant shouldn't know about the item class, and even less should it be the one converting it. You could instead create a new class PowerPlantConverter with a method PowerPlant PowerPlantFromItem(item inputItem). If you want to be really fancy, you could create a TypeConverter for it. There is also the awesome AutoMapper, which makes mapping one type to another mostly automatic and generic, although you still have to specify your logic. I might show an example of AutoMapper in a later posting.
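Such a converter class could look like the following sketch. PowerPlantConverter is just an illustrative name, not part of any library:

```csharp
using RestTest.Model;

// Hypothetical converter class: it centralizes the DTO-to-business-class
// mapping, so PowerPlant itself no longer needs to know about item.
public static class PowerPlantConverter
{
    public static PowerPlant PowerPlantFromItem(item inputItem)
    {
        return new PowerPlant
        {
            PlantName = inputItem.name,
            City = inputItem.location.city.value,
            ZipCode = inputItem.location.zip,
            Latitude = inputItem.location.latitude,
            Longitude = inputItem.location.longitude
        };
    }
}
```

The Button's Click event would then call PowerPlantConverter.PowerPlantFromItem(plant) instead of the second constructor, and PowerPlant goes back to being a plain class with no knowledge of the data model.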

But regardless of which path you choose, there are two rules I always recommend: 1. Never use a data model as a business class, due to its fragility, and 2. Centralize your type converters so that you only have to change them in one place when the data model changes.

A little Review of TekPub

Okay, first a little disclaimer: I won a subscription in a Twitter quiz and therefore have not paid for TekPub. So my view might be slightly biased, but I'll try to be neutral. If you want the TL;DR version: TekPub is great and I recommend it.

A few weeks ago, a new site called TekPub launched, created by Rob Conery and James Avery. If you don’t know them, here is a brief bio: Rob used to work at Microsoft on the ASP.net team and also created SubSonic, an open source ORM. James is a long-time developer and founded several advertising networks, including The Lounge. What they provide is Training Videos about several technologies.

Currently (December 2009) they have Git, NHibernate, Building a Blog Software and two newly launched series about jQuery and Linux. Also, there is a free series about Concepts and evolving from a Coder to a Developer. Those are pretty “hip” topics, but more importantly, those are topics that (in my opinion) are not very well covered. Sure, there are tons and tons of documentation, tutorials and samples about Git or NHibernate, but a comprehensive, A to Z, Start to Finish series is rare. That’s the reason I still buy books, and that’s the reason I immediately became excited when I first saw the TekPub announcement.

Let’s start with some technical details first. The videos are large: the WMV version of the Git series is 1000×748 pixels, and it shows. You can download the videos in WMV or MP4 format without DRM if you have a yearly subscription or buy a series; for monthly subscribers, there are no downloads. In all cases, you can stream them through their Silverlight player. There seems to be an issue with download links pointing to the wrong files, but that only happened when I opened multiple videos in tabs and tried to download them – downloading them one by one always gave me the right files. Overall, the picture quality is very good. The fonts are readable and the slides are usually concise.

The audio quality is also very good, and this is one of TekPub’s strongest sides. English is my second language, and while I believe I speak it quite well, I do have problems understanding people who mumble (I cannot understand one word of Marlon Brando’s in The Godfather…) or who speak too fast. On the other hand, people who speak too slowly and too deliberately bore me to death, and I have often watched webcasts wanting to shout out loud “Dude, get to the point already!”. TekPub so far has great speakers, regardless of whether it’s Rob, James, or Ayende on the NHibernate series. They have the right pace, clear pronunciation, and they usually get to the point fairly quickly. It has a very casual feel to it, which is exactly what I like. I hope they can keep it that way; I used to be a customer of another video training website (Lynda.com), and the speakers there were very hit-and-miss.

There are a few minor issues in the early episodes though. Every video starts with a short intro, playing some music and the logo. On some of them, the volume of the music is too loud compared to the narration, so I have to turn the volume down at the beginning and up as soon as the talking starts. Luckily, that is fixed in the newer episodes.

Now to the actual “meat” – the content and its presentation. I watched most of the Git and Build your own Blog videos and gave the NHibernate ones a quick look as well. Each episode in a series focuses on one or two points, and the episodes usually build upon each other. That makes it more a training than a quick reference. With the help of these videos, I managed to start understanding and using Git, and I consider them a very good introduction for people who already know source control and want to switch to Git or use it in parallel. But sometimes I felt like there was no flow. The Git and Build your own Blog series jump from Topic A to D to B to F to C. You may need to watch them a few times, as material explained later makes some of the earlier concepts easier to understand. Also, sometimes the speaker either does not fully know about something at the time of recording or makes a small mistake, and in those cases there is an overlay on the video explaining the mistake or giving more detail. That’s okay, because I know that re-dubbing or re-recording a section can be painful, but it can be confusing. These mistakes are very rare, though – I think I saw them twice.

You can clearly see them improving, and you can clearly see the different directions. Build your own Blog is more like a video blog of “Rob builds a blog software for himself”, whereas Git and NHibernate are straight training.

Would I pay $200 for a yearly subscription? Good question. So far, there is not that much content, and you are investing in a promise. On the other hand, they started two more series in the past 2 weeks. They are also clearly improving, as the later videos are better than the earlier ones. And these are not no-names, nor are they known for abandoning projects and letting people down. Finally, one could ask how much the NHibernate series alone would be worth if they didn’t sell it for $25.

The alternatives to the yearly subscription are a monthly subscription (no downloads, only streaming) or purchasing a series. These options are easier to recommend, I definitely think that Git and NHibernate are worth their purchase prices. I’m undecided on Build your Own Blog and it’s too early to judge the jQuery and Linux for Softies series.

But overall, I am very positively surprised. For me, the pace and clear pronunciation are a huge thing, and the content is certainly top notch. Not flawless, but you can see that they know what they are talking about and that they are not misleading anyone. They may need to improve the overall flow a little bit, but so far I haven’t found any video that wasn’t at least good. I definitely recommend keeping a close eye on TekPub if you’re interested in video training.

A modest proposal: Password storage disclosure for websites

Okay, so we have yet another security breach at a company: they got their entire database stolen, and once again it was discovered that they stored their passwords in clear text. This time it’s RockYou!, but it has happened multiple times in the past, with Reddit being one of the more famous offenders. I puke every time I sign up to some phpBB forum and get an e-mail with my password in clear text. Really, this doesn’t only happen to crappy one-man companies; it also happens to reputable ones (Telltale Games still does it, while Telerik at least changed it after I complained).

I’m starting to get fed up with this. Storing Passwords in Clear Text is an absolute no-no policy, with no excuse whatsoever. If this policy were a car, it would be an Edsel. If this policy were a computer game, it would be Big Rigs or Rapelay. If this policy were a crime, it would be bioterrorism. It’s not some “small oversight” or a “configuration mistake”. It’s a sign of complete and utter incompetence to run a web site. In my opinion, someone who stores passwords in clear text should be prohibited from using the Internet.

As I said, this happens to reputable companies as well, so it’s not a small issue that will eventually go away. Therefore, I would love to see the privacy laws of most countries changed to force websites to disclose how they store their passwords. We already have privacy laws in the EU and US that force companies to disclose how they use any information collected. Can’t we expand them to force companies to disclose how they store this information as well?

It’s bad enough that third-party websites ask for your login data, and sadly this too is done by reputable websites. Okay, one can question how reputable a website like LinkedIn is if it asks you for your e-mail credentials, but the reality is that a) it happens and b) millions and millions of users go along with it. I don’t think we can get that genie back in the bottle, and I don’t think we’ll get comprehensive coverage of technologies like OAuth to prevent abuse like that.

In an ideal world, I could go to a website, check its privacy policy and see something like

All passwords are salted and hashed with SHA-512. Passwords are not persisted in clear text.

I would even go so far as to ask for clear-text storage to be declared illegal and punished as a federal offense, unless a) it’s required for the implementation and b) that implementation is clearly stated.

All Facebook Passwords are persisted in clear text, as we couldn’t figure out how to use the Facebook API and instead rely on HTML scraping.

I know that such a disclosure means nothing to the average John Doe, but it allows tech-savvy people to avoid such incompetent companies and whistleblowers to warn other people about these scams.
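For reference, salting and hashing as described in the example policy above can be sketched like this – a minimal illustration, not a hardened implementation (for real systems, prefer a deliberately slow, iterated scheme such as PBKDF2 or bcrypt over a single SHA-512 pass):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Minimal sketch of salted SHA-512 password storage. You persist the salt
// and the hash per user; the clear-text password is never written anywhere.
public static class PasswordStorage
{
    public static string HashPassword(string password, out string salt)
    {
        // Generate a random 16-byte salt per user
        var saltBytes = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(saltBytes);
        }
        salt = Convert.ToBase64String(saltBytes);

        using (var sha = SHA512.Create())
        {
            var data = Encoding.UTF8.GetBytes(salt + password);
            return Convert.ToBase64String(sha.ComputeHash(data));
        }
    }

    public static bool Verify(string password, string salt, string storedHash)
    {
        // Re-hash the attempt with the stored salt and compare
        using (var sha = SHA512.Create())
        {
            var data = Encoding.UTF8.GetBytes(salt + password);
            return Convert.ToBase64String(sha.ComputeHash(data)) == storedHash;
        }
    }
}
```

The point is that with this scheme, a stolen database contains only salts and hashes – there is simply no clear-text password to leak or to mail back to the user.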

Remember: Reputation means nothing when it comes to data storage. Companies and Governments lose your private data every day and while you can’t really avoid it without missing out on a large part of what makes the Web so great, you should still think twice before giving any website the login details of any other website.