Wednesday, August 20, 2008

Taking the HttpContext out of the MVC Framework Controller

I really like the MVC Framework. I'm currently working on my third commercial project using it and I just love the flexibility it gives me, especially the opportunity to refactor the framework itself.

Here's an example: one of the things I dislike is the way the default Controller class is overly coupled to the HttpContext. HttpContext in itself is a problematic hangover from ASP.NET and to have it bound into the Controller to such an extent is a design mistake IMHO.

OK, so what do I mean? Take this action code from a controller:

public ActionResult Search()
{
    const int pageSize = 20;

    var criteria = new ArticleSearchCriteria();
    validatingBinder.UpdateFrom(criteria, Request.Form);

    var articles = articleRepository
        .GetAll()
        .ThatMatch(criteria)
        .ToPagedList(PageNumber, pageSize);

    return View("Search", View.Data
        .WithArticleSearchCriteria(criteria)
        .WithArticles(articles));
}

You see the reference to Request.Form there? HttpRequest (along with HttpResponse and HttpContext) is exposed as a property of the base Controller class. In order to test that code I have to mock the HttpContext, which is a real PITA.

Here's an alternative that I've started using. An IHttpContextService interface:

public interface IHttpContextService
{
    HttpContextBase Context { get; }
    HttpRequestBase Request { get; }
    HttpResponseBase Response { get; }
    NameValueCollection FormOrQuerystring { get; }
}

I can now just use dependency injection in my controller to get a reference to an implementation of IHttpContextService. Note that the controller doesn't care how it is implemented:

public class ArticleController : ControllerBase
{
    private readonly IRepository<Article> articleRepository;
    private readonly IHttpContextService httpContextService;
    private readonly IValidatingBinder validatingBinder;

    public ArticleController(
        IRepository<Article> articleRepository, 
        IHttpContextService httpContextService,
        IValidatingBinder validatingBinder)
    {
        this.articleRepository = articleRepository;
        this.httpContextService = httpContextService;
        this.validatingBinder = validatingBinder;
    }

    public ActionResult Search()
    {
        const int pageSize = 20;

        var criteria = new ArticleSearchCriteria();
        validatingBinder.UpdateFrom(criteria, httpContextService.FormOrQuerystring);

        var articles = articleRepository
            .GetAll()
            .ThatMatch(criteria)
            .ToPagedList(PageNumber, pageSize);

        return View("Search", GroupLibView.Data
            .WithArticleSearchCriteria(criteria)
            .WithArticles(articles));
    }
}

And I can set up my IoC container to give me any implementation of IHttpContextService I want. And because the IHttpContextService is provided by the IoC container, any dependencies that its implementation may require can also be provided by the container, which opens up all kinds of interesting opportunities.
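For example, with Castle Windsor (the container I use) the wiring can be a one-liner. This is just a sketch; the component key is arbitrary:

using Castle.Windsor;

var container = new WindsorContainer();

// register the default implementation; any dependencies of its own
// will also be resolved by the container
container.AddComponent(
    "httpContextService",
    typeof(IHttpContextService),
    typeof(HttpContextService));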

Here's my current HttpContextService:

using System.Collections.Specialized;
using System.Web;

namespace Suteki.Common.Services
{
    public class HttpContextService : IHttpContextService
    {
        public HttpContextBase Context
        {
            get
            {
                return new HttpContextWrapper2(HttpContext.Current);
            }
        }

        public HttpRequestBase Request
        {
            get
            {
                return Context.Request;
            }
        }

        public HttpResponseBase Response
        {
            get
            {
                return Context.Response;
            }
        }

        public NameValueCollection FormOrQuerystring
        {
            get
            {
                if(Request.RequestType == "POST")
                {
                    return Request.Form;
                }
                return Request.QueryString;
            }
        }
    }
}
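And here's the payoff: to test code that reads the form I no longer need an HttpContext at all, I just stub the interface. A minimal sketch using NUnit and Rhino.Mocks (the test and the form values are made up for illustration):

using System.Collections.Specialized;
using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class HttpContextServiceTests
{
    [Test]
    public void FormOrQuerystring_CanBeStubbedWithoutAnHttpContext()
    {
        // no HttpContext.Current, no HttpContextBase mocking gymnastics
        var form = new NameValueCollection { { "Title", "widgets" } };

        var httpContextService = MockRepository.GenerateStub<IHttpContextService>();
        httpContextService.Stub(h => h.FormOrQuerystring).Return(form);

        // anything that takes an IHttpContextService, like the controller above,
        // can now be constructed with this stub
        Assert.That(httpContextService.FormOrQuerystring["Title"], Is.EqualTo("widgets"));
    }
}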

More on MSTest

I recently had a long and considered comment from Woonboo on my post "MSTest is sapping my will to live". Woonboo is a happy user of MSTest. I started writing a reply in the comments, but it was getting so involved that I thought it would be nice to promote it to a new post.

Here's Woonboo's comment:

I know I'm late to the game, but mileage is everything.

I used to use NUnit, but after using MSTest for the past 2.5 years, I wouldn't go back, even with the bugs in VS2008 (which are minor and only hit if you are doing something specific with AppDomain).

Your config file thing is simple - you don't have to use attribute...in the 'settings' (right-click properties) add files to deploy.

It creates a separate test project because too many Morts out there will just put it in the same project otherwise and all your tests will be deployed with the code. I've worked with a number of these folks.

Having the built in ability to test private and internal methods/properties/fields is the biggest reason I love it. No extra code required. No need to loosen the scope (make things public) on what you're testing.

When a test fails, looking at the test results that gives me a hyper-link to every part of the stack trace where the test failed (or threw an exception) has saved me hundreds if not thousands of hours by now not having to navigate to the file, hit Ctrl+G and enter the line number; especially if I have to walk up the stack to see if there was a path taken that shouldn't have been causing the problem.

Having the ability to have tests run automatically when I do a build or check-in (easier than I was ever able to with NUnit - but that's a SCC).

Code coverage gets tied to the tests you write.

Pulling up archives into the GUI of who ran what tests when on what machines. Great from the 'team lead' perspective.

You said in another post that technologies change but methodologies don't - don't let NUnit suck you into the 'one tool' mentality (although I could say the same about myself with MSTest). Love your posts generally - but I think you need to give MSTest another chance and ask someone who's been successful with it for help. It was hard for me to switch from NUnit too - but it was like when I switched from VB to C# - I never looked back once I got past the frustration point.

Woonboo's points get to the nub of the problem for me: I don't think MSTest was designed for TDD, but for some other high-ceremony style of integration testing. I would submit that he is probably not working in a TDD style. Let me take his points one by one to explain what I mean.

"you don't have to use attribute...in the 'settings' (right-click properties) add files to deploy". Yes, but I shouldn't have to do even this. Why can't it just test what's in the target directory? Building a separate run directory for every test and forcing the user to deliberately choose files to deploy is just another symptom of the high-ceremony MSTest approach.

"It creates a separate test project because too many Morts out there will just put it in the same project otherwise and all your tests will be deployed with the code." I agree that it's best to have a separate test project. But any testing framework should be unintrusive. I shouldn't have to have a separate project type. The only reason it's needed by MSTest is because MSTest requires so much configuration to be useful. And you shouldn't rely on project types to force you to correctly organise your source, otherwise we'd have some crazy Microsoft scheme to have the "domain-project", the "data-access" project, the "service-project". Don't suggest that too near to someone from the TFS team!

Now I have to have a rant here about the "Morts won't understand it" argument. I hear this over and over and over again. It usually goes along the lines of "I understand it fine, but the people I work with, or the 'maintenance' people, won't". It's the lamest and most often deployed excuse for bad decisions. If the Morts can't understand something, whoever they are, get rid of them. What's the point in employing people who don't know how to do their job? But usually that's not the real reason; the real reason is lack of leadership and lack of trust. Most advances in development practices actually make things simpler, but they require that you, if you are the team lead, sit down and work with the people you are supposed to be leading.

"Having the built in ability to test private and internal methods/properties/fields is the biggest reason I love it." One of the core reasons for doing TDD is way it drives your design. If you have to access private members in your tests I would submit that your design is not correctly decoupled. I *want* my testing framework to force me to behave like any other client when I'm writing tests.

Now there might be a situation where you have to write tests for some monolithic legacy code base, but MSTest won't help you there; you need to go and talk to Roy Osherove :)

"When a test fails, looking at the test results that gives me a hyper-link to every part of the stack trace where the test failed." You get exactly the same thing with Testdriven.NET. I have "run-tests" mapped to F8 (I've never found a use for bookmarks). "Run-tests" is context specific so if the cursor is currently inside a test method, only that test gets run. So I just hit F8 to run the test(s), the results appear on the console and I can click on any part of the stack trace to go directly to that line of code. I actually dislike fancy test runners. I don't want to have to click here and there to get to stack traces, I'd much rather everything was just thrown onto the console.

Of course running tests on your CI server is essential. I've always found it simple to do that both with Cruise Control and TFS. I don't think MSTest really adds much here and it doesn't make much sense unless you're using TFS.

"Pulling up archives into the GUI of who ran what tests when on what machines. Great from the 'team lead' perspective." OK, I don't get this. Surely what you should be looking at is code coverage. I run tests every few mintues when I'm coding, you probably wouldn't want to look at or store all those test runs. I think this goes back to my first point. A tool that's designed for high ceremony infrequent testing like MSTest would see storing a test run archive as a useful thing, anyone who's done TDD would see it as a huge waste of resources.

I gave MSTest a chance. I've also recently tried to use MbUnit. Neither worked as well as TestDriven.NET + NUnit do for me. I must say that MbUnit came very close, and I wouldn't have too much of a problem with using it if I was in a team that had already settled on it. MSTest was just a nightmare from start to finish. As I said to our project manager, if it was an open source tool it would never have got any traction and would now be sitting unloved on SourceForge. Maybe it suits some people with a very non-agile, non-TDD methodology, but anyone doing TDD should stay well clear.

Woonboo, thanks very much for provoking me to write this. I love a good debate and would really like to hear your reply. Thanks again!

Sunday, August 17, 2008

Firefox wins here (just)

I use the excellent Google Analytics for tracking stats on my blog. One of the things it tells me is the browser that you, dear reader, are using:

[image: browser stats table]

[image: browser share pie chart]

I guess I shouldn't be too surprised that a technical readership should marginally prefer Firefox. But it's still very nice to see.

What else can I tell you about yourself? Well you're probably American.

[image: visitor countries map]

[image: country stats table]

I had an interesting discussion in the office last week. I'm British, but I suggested that since the majority of my readership is from the USA I should adopt US spelling. Even mentioning such a thing was like finding a raised toilet seat in a nunnery, so rather than being lynched I'll stick to 'through' rather than 'thru'. It's a marginally interesting factlet that English-speaking kids take almost twice as long to learn to read and write as kids in most other European countries, because of the awfulness of our spelling.

One last thing, and this really does surprise me: the vast majority of traffic to my blog is driven by Google searches, which is the result of random people typing in random search terms and finding their way here. The curious thing is how consistent the numbers are. Every week looks the same, with between 200 and 250 visits on weekdays and 50 to 70 at the weekends. I would have expected more variation, but I guess it's a good demonstration of the predictability of randomness.

[image: weekly search traffic graph]

Wednesday, August 13, 2008

What's an Auto Mocking Container?

An auto mocking container (AMC) sounds pretty scary, but it's a really neat tool if you're writing a lot of unit tests and find yourself forever constructing mock objects. In the same way that an IoC container knits together dependencies at runtime, an AMC can create all your mock objects automatically for your unit tests.

Say you are testing this Reporter class:

public class Reporter
{
  private readonly IReportBuilder reportBuilder;
  private readonly IReportSender reportSender;

  public Reporter(IReportBuilder reportBuilder, IReportSender reportSender)
  {
      this.reportBuilder = reportBuilder;
      this.reportSender = reportSender;
  }

  public void SendReports()
  {
      var reports = reportBuilder.GetReports();

      foreach (var report in reports)
      {
          reportSender.SendReport(report);
      }
  }
}

Using Rhino.Mocks you might do something like this.

[Test]
public void SendReports_ShouldCreateReportsAndSendThem()
{
  // create the mock services that the Reporter requires
  var reportBuilder = MockRepository.GenerateStub<IReportBuilder>();
  var reportSender = MockRepository.GenerateStub<IReportSender>();

  // create the reporter, injecting the mock services
  var reporter = new Reporter(reportBuilder, reportSender);

  // create some reports
  var report1 = new Report();
  var report2 = new Report();
  var reports = new[] {report1, report2};

  // when the reportBuilder mock's GetReports method is called, return the reports
  // we created above
  reportBuilder.Expect(rb => rb.GetReports()).Return(reports);

  // exercise the method under test
  reporter.SendReports();

  // verify that the reporter sent the expected reports
  reportSender.AssertWasCalled(rs => rs.SendReport(report1));
  reportSender.AssertWasCalled(rs => rs.SendReport(report2));
}

After the tenth time you've written this kind of code you get quite bored of typing the mock object creation. It's especially irritating when you introduce a new dependency into an existing class and have to add it both to the class under test and to the setup for the test.

Here's an example using the AutoMockingContainer from Rhino Tools.

[Test]
public void SendReports_ShouldCreateReportsAndSendThem_WithAMC()
{
  // create a new auto mocking container
  var mocks = new MockRepository();
  var container = new Rhino.Testing.AutoMocking.AutoMockingContainer(mocks);
  container.Initialize();

  // just ask for the reporter
  // the container will automatically create the correct mocks
  var reporter = container.Create<Reporter>();

  // create some reports
  var report1 = new Report();
  var report2 = new Report();
  var reports = new[] { report1, report2 };

  // when the reportBuilder mock's GetReports method is called, return the reports
  // we created above
  container.Get<IReportBuilder>().Expect(rb => rb.GetReports()).Return(reports);

  // expect that the reporter sent the expected reports
  container.Get<IReportSender>().Expect(rs => rs.SendReport(report1));
  container.Get<IReportSender>().Expect(rs => rs.SendReport(report2));

  // exercise the method under test
  reporter.SendReports();

  // assert that expectations were met
  mocks.VerifyAll();
}

As you can see, after we set up the AMC we simply ask it for an instance of the class we wish to test. We don't have to worry about supplying the dependencies because the AMC works out what mocks need to be created and does it for us.

When we set up our expectations we can ask the AMC for the mock objects it has created.

You can get the latest version of the source code for the Rhino Tools AMC by pointing TortoiseSVN here:

https://rhino-tools.svn.sourceforge.net/svnroot/rhino-tools/trunk

Or you can download the code and grab the assemblies I've built from the trunk here:

http://static.mikehadlow.com/Mike.AutoMockingContainer.zip

Monday, August 11, 2008

Microsoft.Sdc.Tasks

I just discovered this collection of MSBuild tasks. There are tasks for all kinds of things you might want to do during your build including...

  • Editing XML files using XPath
  • Setting up web sites including creating virtual directories
  • Editing registry settings
  • Setting version numbers
  • Lots of file and folder manipulation

... and lots more. I'm using its XmlFile task for changing my NHibernate-Windsor-integration IsWeb setting to false in the windsor.config file that's copied to my tests project:

<Target Name="AfterBuild">
<Copy
   SourceFiles="$(SolutionLocation)MyProject\Configuration\Windsor.config"
   DestinationFiles="$(TargetPath).Windsor.config"
   />
<Copy
   SourceFiles="$(SolutionLocation)MyProject\Web.config"
   DestinationFiles="$(TargetPath).config"
   />
<XmlFile.SetAttribute
           Path="$(TargetPath).Windsor.config"
           XPath="/configuration/facilities/facility[@id='nhibernate']"
           Name="isWeb"
           Value="false"
           IgnoreNoMatchFailure="false"
           Force="true"
   />
</Target>
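One gotcha: MSBuild needs to be told where the Sdc tasks live before it can resolve XmlFile.SetAttribute. Something like this near the top of the project file; the path is an assumption (wherever you've unzipped the tasks), the Microsoft.Sdc.Common.tasks file ships with the library:

<Import Project="$(SolutionLocation)tools\Microsoft.Sdc.Tasks\Microsoft.Sdc.Common.tasks" />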

Friday, August 08, 2008

The Queryable Domain Property Problem

LINQ has revolutionised the way we do data access. Being able to fluently describe queries in C# means that you never have to write a single line of SQL again. Of course LINQ isn't the only game in town. NHibernate has a rich API for describing queries as do most mature ORM tools. But to be a player in the .NET ORM game you simply have to provide a LINQ IQueryable API. It's been really nice to see the NHibernate-to-LINQ project take off and apparently LLBLGen Pro has an excellent LINQ implementation too.

Now that we can write our queries in C# it should mean that we can have completely DRY business logic. No more duplicate rules, one set in SQL, the other in the domain classes. But there's a problem: LINQ doesn't understand IL. If you write a query that includes a property or method, LINQ-to-SQL can't turn the logic encapsulated by it into a SQL statement.

To illustrate the problem take this simple schema for an order:

[image: the Order, OrderLine and Product schema]

Let's use the LINQ-to-SQL designer to create some classes:

[image: the generated LINQ-to-SQL classes]

Now let's create a 'Total' property for the order that calculates the total by summing the order lines' quantities times their product's price.

public decimal Total
{
    get
    {
        return OrderLines.Sum(line => line.Quantity * line.Product.Price);
    }
}

Here's a test to demonstrate that it works:

[Test]
public void Total_ShouldCalculateCorrectTotal()
{
    const decimal expectedTotal = 23.21m + 14.30m * 2 + 7.20m * 3;

    var widget = new Product { Price = 23.21m };
    var gadget = new Product { Price = 14.30m };
    var wotsit = new Product { Price = 7.20m };

    var order = new Order
    {
        OrderLines =
        {
            new OrderLine { Quantity = 1, Product = widget },
            new OrderLine { Quantity = 2, Product = gadget },
            new OrderLine { Quantity = 3, Product = wotsit }
        }
    };

    Assert.That(order.Total, Is.EqualTo(expectedTotal));
}

Now, what happens when we use the Total property in a LINQ query like this:

[Test]
public void Total_ShouldCalculateCorrectTotalOfItemsInDb()
{
    var total = dataContext.Orders.Select(order => order.Total).First();
    Assert.That(total, Is.EqualTo(expectedTotal));
}

The test passes, but when we look at the SQL that was generated by LINQ-to-SQL we get this:

SELECT TOP (1) [t0].[Id]
FROM [dbo].[Order] AS [t0]

SELECT [t0].[Id], [t0].[OrderId], [t0].[Quantity], [t0].[ProductId]
FROM [dbo].[OrderLine] AS [t0]
WHERE [t0].[OrderId] = @p0
-- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [1]

SELECT [t0].[Id], [t0].[Price]
FROM [dbo].[Product] AS [t0]
WHERE [t0].[Id] = @p0
-- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [1]

SELECT [t0].[Id], [t0].[Price]
FROM [dbo].[Product] AS [t0]
WHERE [t0].[Id] = @p0
-- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [2]

SELECT [t0].[Id], [t0].[Price]
FROM [dbo].[Product] AS [t0]
WHERE [t0].[Id] = @p0
-- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [3]

LINQ-to-SQL doesn't know anything about the Total property, so it does as much as it can. It loads the Order. When the Total property executes, OrderLines is evaluated which causes the order lines to be loaded with a single select statement. Next each Product property of each OrderLine is evaluated in turn causing each Product to be selected individually. So we've had five SQL statements executed and the entire Order object graph loaded into memory just to find out the order total. Yes of course we could add data load options to eagerly load the entire object graph with one query, but we would still end up with the entire object graph in memory. If all we wanted was the order total this is very inefficient.
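For reference, the eager loading version would look something like this (the standard LINQ-to-SQL DataLoadOptions API from System.Data.Linq; it has to be set before the context executes any queries):

// ask LINQ-to-SQL to fetch the graph up front rather than lazily
var loadOptions = new DataLoadOptions();
loadOptions.LoadWith<Order>(order => order.OrderLines);
loadOptions.LoadWith<OrderLine>(line => line.Product);
dataContext.LoadOptions = loadOptions;

// fewer round trips, but the entire object graph is still
// hydrated just to compute one number
var total = dataContext.Orders.Select(order => order.Total).First();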

Now, if we construct a query where we explicitly ask for the sum of order line quantities times product prices, like this:

[Test]
public void CalculateTotalWithQuery()
{
    var total = dataContext.OrderLines
        .Where(line => line.Order.Id == 1)
        .Sum(line => line.Quantity * line.Product.Price);

    Assert.That(total, Is.EqualTo(expectedTotal));
}

We get this SQL

SELECT SUM([t3].[value]) AS [value]
FROM (
SELECT (CONVERT(Decimal(29,4),[t0].[Quantity])) * [t2].[Price] AS [value], [t1].[Id]
FROM [dbo].[OrderLine] AS [t0]
INNER JOIN [dbo].[Order] AS [t1] ON [t1].[Id] = [t0].[OrderId]
INNER JOIN [dbo].[Product] AS [t2] ON [t2].[Id] = [t0].[ProductId]
) AS [t3]
WHERE [t3].[Id] = @p0
-- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [1]

One SQL statement has been created that returns a scalar value for the total. Much better. But now we've got duplicate business logic: we have one definition of the order total calculation in the Total property of Order and another in our query.

So what's the solution?

What we need is a way of creating our business logic in a single place that we can use in both our domain properties and in our queries. This brings me to two guys who have done some excellent work in trying to solve this problem: Fredrik Kalseth and Luke Marshall. I'm going to show you Luke's solution which is detailed in this series of blog posts.

It's based on the specification pattern. If you've not come across this before, Ian Cooper has a great description here. The idea with specifications is that you factor out your domain business logic into small composable classes. You can then test small bits of business logic in isolation and then compose them to create more complex rules; because we all know that rules rely on rules :)

The neat trick is to implement the specification as a lambda expression that can be executed against in-memory object graphs or inserted into an expression tree to be compiled into SQL.
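To make that concrete, here's a minimal sketch of what a QueryProperty<TEntity, TResult> base class could look like. This is my reconstruction to illustrate the idea, not Luke's actual code: the single lambda is held both as an expression tree, for the query provider to splice into queries, and as a compiled delegate, for in-memory evaluation:

using System;
using System.Linq.Expressions;

public abstract class QueryProperty<TEntity, TResult>
{
    private readonly Expression<Func<TEntity, TResult>> expression;
    private readonly Func<TEntity, TResult> compiled;

    protected QueryProperty(Expression<Func<TEntity, TResult>> expression)
    {
        this.expression = expression;
        compiled = expression.Compile(); // compile once, hence the static instance below
    }

    // used by the domain property getter against in-memory objects
    public TResult Value(TEntity entity)
    {
        return compiled(entity);
    }

    // used by the custom query provider when rewriting the expression tree
    public Expression<Func<TEntity, TResult>> Expression
    {
        get { return expression; }
    }
}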

Here's our Total property as a specification, or as Luke calls it, QueryProperty.

static readonly TotalProperty total = new TotalProperty();

[QueryProperty(typeof(TotalProperty))]
public decimal Total
{
    get
    {
        return total.Value(this);
    }
}

class TotalProperty : QueryProperty<Order, decimal>
{
    public TotalProperty()
        : base(order => order.OrderLines.Sum(line => line.Quantity * line.Product.Price))
    {
    }
}

We factored out the Total calculation into a specification called TotalProperty, which passes the rule into the constructor of the QueryProperty base class. We also have a static instance of the TotalProperty specification. This is simply for performance reasons and acts as a specification cache. Then in the Total property getter we ask the specification to calculate its value for the current instance.

Note that the Total property is decorated with a QueryPropertyAttribute. This is so that the custom query provider can recognise that this property also supplies a lambda expression via its specification, which is the type specified in the attribute constructor. This is the main weakness of this approach because there's an obvious error waiting to happen. The type passed in the QueryPropertyAttribute has to match the type of the specification. It's also very invasive since we have various bits of the framework (QueryProperty, QueryPropertyAttribute) surfacing in our domain code.

These days simply everyone has a generic repository and Luke is no different. His repository chains a custom query provider before the LINQ-to-SQL query provider that knows how to insert the specification expressions into the expression tree. We can use the repository like this:

[Test]
public void TotalQueryUsingRepository()
{
    var repository = new RepositoryDatabase<Order>(dataContext);

    var total = repository.AsQueryable().Select(order => order.Total).First();
    Assert.That(total, Is.EqualTo(expectedTotal));
}

Note how the LINQ expression is exactly the same as the one we ran above, which caused five select statements to be executed and the entire Order object graph to be loaded into memory. When we run this new test we get this SQL:

SELECT TOP (1) [t4].[value]
FROM [dbo].[Order] AS [t0]
OUTER APPLY (
SELECT SUM([t3].[value]) AS [value]
FROM (
    SELECT (CONVERT(Decimal(29,4),[t1].[Quantity])) * [t2].[Price] AS [value], [t1].[OrderId]
    FROM [dbo].[OrderLine] AS [t1]
    INNER JOIN [dbo].[Product] AS [t2] ON [t2].[Id] = [t1].[ProductId]
    ) AS [t3]
WHERE [t3].[OrderId] = [t0].[Id]
) AS [t4]

A single select statement that returns a scalar value for the total. It's very nice, and with the caveats above it's by far the nicest solution to this problem that I've seen yet.

Monday, August 04, 2008

On Google App Engine

I was just reading Ayende's post 'Thinking About Cloud Computing'. He talks about two very different approaches: Amazon's EC2 and GoGrid, where you have a VM that sits on Amazon's or GoGrid's servers, and Google App Engine, where Google provide an application hosting environment. Ayende's take is that the Google approach is the one with legs, and I'm inclined to agree with him.

I've been aware of EC2 for a while now because I'm a keen user of S3, but I'd not come across Google App Engine before. I really like the premise; you simply upload your application to the cloud and it just scales as required. At the moment it only supports Python, but this is something that will surely spread to other environments. It can only be a matter of time before someone supplies a Mono based .NET environment. Once that happens Mono will move from being an interesting .NET sideshow to being seriously mainstream.

Up to now, if you're thinking of building the next YouTube or Twitter you've had two choices. You can concentrate on getting a compelling new application out there and hope that you'll be able to deal with scaling it up if it gets popular, possibly facing a similar fate to Twitter, which has been having serious scaling issues. Or you can spend the money up front on infrastructure, probably wasting it on something that may never fly.

With Cloud computing you don't have this dilemma. You can concentrate on building a fantastic application knowing that you don't have to worry about the infrastructure.

What will Microsoft's response to this be? Their OS monopoly must surely be threatened by a world where anyone can get limitless scalability on a pay-as-you-go basis. Why would anyone ever buy a server operating system again? Will the beast soon start selling its own cloud services?

Sunday, August 03, 2008

How Did I Get Started In Software Development?

This is a new one for me, I've been tagged by Ben Hall to follow up on this 'meme' that's doing the rounds. Thanks Ben!

How old were you when you first started in programming?

I was probably 13 years old when I was given a book on BASIC programming. It was a spiral bound, hand written introduction. I wish I could find a reference to it on Google because I remember it as being an excellent tutorial for novice programmers. I spent months hand writing BASIC programs and executing them manually before my parents relented and bought me a TRS-80. I really really wanted an Apple II, but they were far too expensive at the time, three or four times as much as the 'trash 80'. The cassette tape based storage thingy never worked, which meant that I would write a program, execute it and then start all over again the next time I turned the machine on. My best effort was probably a space landing game which incorporated Newton's laws of motion. You had to use thrusters to land a lunar module on the only flat piece of ground on the 'moon'. I'd grown up with the Apollo moon shots and was totally obsessed with space at the time.

What was your first programming language?

BASIC, see above.

What was the first real program you wrote?

Ah ha, I see I've answered all the questions already!

What languages have you used since you started programming?

As a teenager it was pretty much TRS-80 BASIC, but I also studied COBOL for Computer Science 'A' level. I had a friend who could write Z-80 machine code straight from his head which I was very impressed with, but I never managed more than making the screen flicker myself.

Professionally I've been solidly Microsoft: first Visual Basic and TSQL, then ASP VBScript and Javascript, and now C#. I've had a play with Java, Ruby, F# and even read The Little Schemer, but I wouldn't say I'm much past 'Hello World' with any of them.

What was your first professional programming gig?

After being obsessed with programming in my early teens, I abandoned it for the electric guitar. Yes, I was seduced by Rock. I spent the next few years playing in several no-hope bands and backpacking around the world. It was only after doing a degree in Development Studies and working as an English teacher in Japan for two years that I rediscovered computers and found that I still got a huge kick out of programming. I went back to college and did an IT masters degree and then got my first professional programming gig with a small company called SD Partners. It was a great 'in at the deep end' experience and I got to write several VB/SQL Server client-server systems in the two years that I was with them.

If you knew then what you know now, would you have started programming?

Oh yes, without a doubt. In fact I wouldn't have stopped for ten years.

If there is one thing you learned along the way that you would tell new developers, what would it be?

There's an art to programming that goes beyond the tools. I didn't really discover this until five years into my professional programming career when I read Agile Software Development by Bob Martin. That book changed my life and it didn't mention a single Microsoft tool that I was currently using. Languages, Frameworks and APIs come and go, but patterns and principles of good software engineering stay around for a lot longer. Concentrate on those.

What's the most fun you've ever had programming?

It's when you discover a great abstraction, one that suddenly turns hundreds of lines of hackery into a beautiful extensible structure. That doesn't happen enough for me, but when it does I go home after work on cloud nine.

Recently I've really enjoyed creating Suteki Shop. As a hired-gun developer I rarely get to do things exactly as I want so it was really nice being able to build a show case of exactly how I think an application should be written. The problem is, 3 months later, I've totally changed my mind :P

Who am I calling out?

Preet Sangha and Ken Egozi

Friday, August 01, 2008

ADO.NET Data Services: Creating a custom Data Context #3: Updating

This follows part 1, where I created a custom data context, and part 2, where I created a client application to talk to it.

For your custom data context to allow for updates you have to implement IUpdatable.

[image: the IUpdatable interface]

This interface has a number of cryptic methods and it wasn't at all clear at first how to write an implementation for it. I'm sure there must be some documentation somewhere, but I couldn't find it, so I resorted to putting trace writes in the empty methods and firing inserts and updates at my web service. You can then use Sysinternals' DebugView to watch what happens.

First of all, let's try an insert:

public void AddANewTeacher()
{
  Console.WriteLine("\r\nAddANewTeacher");

  var frankyChicken = new Teacher
                        {
                            ID = 3,
                            Name = "Franky Chicken"
                        };
  service.AddObject("Teachers", frankyChicken);
  var response = service.SaveChanges();
}

We get this result:

[image: DebugView trace of the insert]

So you can see that first all the existing Teachers are returned, then a new Teacher instance is created, its properties are set, SaveChanges is called, and then ResolveResource. For my simple in-memory implementation I just added the new Teacher to my static list of teachers:

public object CreateResource(string containerName, string fullTypeName)
{
  Trace.WriteLine(string.Format("CreateResource('{0}', '{1}')", containerName, fullTypeName));

  var type = Type.GetType(fullTypeName);
  var resource = Activator.CreateInstance(type);

  switch (containerName)
  {
      case "Teachers":
          Root.Teachers.Add((Teacher)resource);
          break;
      case "Courses":
          Root.Courses.Add((Course)resource);
          break;
      default:
          throw new ApplicationException("Unknown containerName");
  }

  return resource;
}

public void SetValue(object targetResource, string propertyName, object propertyValue)
{
  Trace.WriteLine(string.Format("SetValue('{0}', '{1}', '{2})", targetResource, propertyName, propertyValue));

  Type type = targetResource.GetType();
  var property = type.GetProperty(propertyName);
  property.SetValue(targetResource, propertyValue, null);
}
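The remaining methods that the trace shows being called can be trivial for an in-memory store. Here's my sketch; since the static lists are the store, there is nothing to flush:

public void SaveChanges()
{
  Trace.WriteLine("SaveChanges()");
  // the static teacher and course lists are the store, so nothing to do here
}

public object ResolveResource(object resource)
{
  Trace.WriteLine(string.Format("ResolveResource('{0}')", resource));
  // in-memory objects are already fully resolved
  return resource;
}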

Next let's try an update:

public void UpdateATeacher()
{
  Console.WriteLine("\r\nUpdateATeacher");

  var fredJones = service.Teachers.Where(t => t.ID == 2).Single();

  fredJones.Name = "Fred B Jones";

  service.UpdateObject(fredJones);
  service.SaveChanges();
}

We get this result:

[image: DebugView trace of the update]

This time the Teacher to be updated is returned by GetResource, then SetValue is called to update the Name property. Finally SaveChanges and ResolveResource are called again.

The GetResource implementation is straight from Shawn Wildermuth's LINQ to SQL implementation.

public object GetResource(IQueryable query, string fullTypeName)
{
  Trace.WriteLine(string.Format("GetResource('query', '{0}')", fullTypeName));

  // Get the first result
  var results = (IEnumerable)query;
  object returnValue = null;
  foreach (object result in results)
  {
      if (returnValue != null) break;
      returnValue = result;
  }

  // Check the Typename if needed
  if (fullTypeName != null)
  {
      if (fullTypeName != returnValue.GetType().FullName)
      {
          throw new DataServiceException("Incorrect Type Returned");
      }
  }

  // Return the resource
  return returnValue;
}

Now all I have to do is work on creating relationships between entities; possibly more on this next week.

Code is here:

http://static.mikehadlow.com/Mike.DataServices.InMemory.zip

ADO.NET Data Services: Creating a custom Data Context #2: The Client

In part 1 I showed how to create a simple in-memory custom data context for ADO.NET Data Services. Creating a managed client is also very simple. First we need to provide a similar domain model to our server. In this case the classes are identical, except that Teacher now has a List<Course> rather than a simple array (Course[]) as its Courses property:

using System.Collections.Generic;

namespace Mike.DataServices.Client.Model
{
   public class Teacher
   {
       public int ID { get; set; }
       public string Name { get; set; }
       public List<Course> Courses { get; set; }
   }
}

Next I wrote a class to extend DataServiceContext with properties for Teachers and Courses that are both DataServiceQuery<T>. Both DataServiceContext and DataServiceQuery<T> live in the System.Data.Services.Client assembly. You don't have to create this class, but it makes the subsequent use of the DataServiceContext simpler. You can also use the 'Add Service Reference' menu item, but I don't like the very verbose code that this generates.

using System;
using System.Data.Services.Client;
using Mike.DataServices.Client.Model;

namespace Mike.DataServices.Client
{
   public class SchoolServiceProxy : DataServiceContext
   {
       private const string url = "http://localhost:4246/SchoolService.svc";

       public SchoolServiceProxy() : base(new Uri(url))
       {
       }

       public DataServiceQuery<Teacher> Teachers
       {
           get
           {
               return CreateQuery<Teacher>("Teachers");
           }
       }

       public DataServiceQuery<Course> Courses
       {
           get
           {
               return CreateQuery<Course>("Courses");
           }
       }
   }
}

Here's a simple console program that outputs teacher John Smith and his courses, and then the complete list of courses. The nice thing is that DataServiceQuery<T> implements IQueryable<T>, so we can write LINQ queries against our RESTful service.

using System;
using System.Linq;
using System.Data.Services.Client;
using Mike.DataServices.Client.Model;

namespace Mike.DataServices.Client
{
   class Program
   {
       readonly SchoolServiceProxy service = new SchoolServiceProxy();

       static void Main(string[] args)
       {
           var program = new Program();
           program.GetJohnSmith();
           program.GetAllCourses();
       }

       public void GetJohnSmith()
       {
           Console.WriteLine("\r\nGetJohnSmith");

           var teachers = service.Teachers.Where(c => c.Name == "John Smith");

           foreach (var teacher in teachers)
           {
               Console.WriteLine("Teacher: {0}", teacher.Name);

               // N+1 issue here
               service.LoadProperty(teacher, "Courses");

               foreach (var course in teacher.Courses)
               {
                   Console.WriteLine("\tCourse: {0}", course.Name);
               }
           }
       }

       public void GetAllCourses()
       {
           Console.WriteLine("\r\nGetAllCourses");

           var courses = service.Courses;

           foreach (var course in courses)
           {
               Console.WriteLine("Course: {0}", course.Name);
           }
       }
   }
}

We get this output:

[image: console output]

Code is here:

http://static.mikehadlow.com/Mike.DataServices.InMemory.zip

ADO.NET Data Services: Creating a custom Data Context #1

Yesterday I wrote a quick overview of ADO.NET Data Services. We saw how it exposes a RESTful API on top of any IQueryable<T> data source. The IQueryable<T> interface is of course at the core of any LINQ enabled data service. It's very easy to write your own custom Data Context if you already have a data source that supports IQueryable<T>. It's worth remembering that anything that provides IEnumerable<T> can be converted to IQueryable<T> by the AsQueryable() extension method, which means we can simply export an in-memory object graph in a RESTful fashion with ADO.NET Data Services. That's what I'm going to show you how to do today.

I got these techniques from an excellent MSDN Magazine article by Elisa Flasko and Mike Flasko, Expose And Consume Data in A Web Services World.

The first thing we need to do is provide a Domain Model to export. Here is an extremely simple example, two classes: Teacher and Course. Note that each entity must have an ID property that the Data Service can recognize as its primary key.

[image: the Teacher and Course model]

For a read-only service (I'll show insert and update in part 3) you simply need a data context that exports the entities of the domain model as IQueryable<T> properties:

using System.Linq;
using Mike.DataServices.Model;

namespace Mike.DataServices
{
   public class SchoolDataContext
   {
       private static Teacher[] teachers;
       private static Course[] courses;

       public SchoolDataContext()
       {
           var johnSmith = new Teacher
                                   {
                                       ID = 1,
                                       Name = "John Smith"
                                   };
           var fredJones = new Teacher
                                   {
                                       ID = 2,
                                       Name = "Fred Jones"
                                   };

           var programming101 = new Course
                                    {
                                        ID = 1,
                                        Name = "programming 101",
                                        Teacher = johnSmith
                                    };
           var howToMakeAnything = new Course
                                       {
                                           ID = 2,
                                           Name = "How to make anything",
                                           Teacher = johnSmith
                                       };
           johnSmith.Courses = new[] {programming101, howToMakeAnything};

           var yourInnerFish = new Course
                                   {
                                       ID = 3,
                                       Name = "Your inner fish",
                                       Teacher = fredJones
                                   };
           fredJones.Courses = new[] {yourInnerFish};

           teachers = new[] {johnSmith, fredJones};
           courses = new[] {programming101, howToMakeAnything, yourInnerFish};
       }

       public IQueryable<Teacher> Teachers
       {
           get { return teachers.AsQueryable(); }
       }

       public IQueryable<Course> Courses
       {
           get { return courses.AsQueryable(); }
       }
   }
}

Note that we're building our object graph in the constructor in this demo. In a realistic implementation you'd probably have your application create its model somewhere else.

Now we simply have to set the type parameter of the DataService to our data context (DataService<SchoolDataContext>):

using System.Data.Services;

namespace Mike.DataServices
{
   [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
   public class SchoolService : DataService<SchoolDataContext>
   {
       public static void InitializeService(IDataServiceConfiguration config)
       {
           config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
       }
   }
}

And we can query our model via our RESTful API:

[image: the service document in the browser]

And here's all the courses that the first teacher teaches:

[image: the first teacher's courses in the browser]
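In case the screenshots are hard to make out, the URIs follow the standard ADO.NET Data Services addressing conventions. Against the service above (the port number comes from my project, yours will differ) they look like this:

http://localhost:4246/SchoolService.svc/                      (the service document listing Teachers and Courses)
http://localhost:4246/SchoolService.svc/Teachers              (all teachers)
http://localhost:4246/SchoolService.svc/Teachers(1)/Courses   (the courses taught by the teacher with ID 1)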

Code is here:

http://static.mikehadlow.com/Mike.DataServices.InMemory.zip

Last night's ALT.NET evening

I really enjoyed my 15 minutes of fame at last night's ALT.NET meeting. It was great to meet everyone, and I'm only sorry that I couldn't hang around for the after show drinks. Such is the cost of provincialism.

All the talks were good, but I especially enjoyed Seb Lambla's OpenRasta presentation, a very interesting RESTful approach to ASP.NET. David De Florinier's talk on NServiceBus was also interesting, since it's a tool I haven't had a chance to look at.

My talk was on Castle Windsor. You can download the slides here:

Powerpoint 2007

http://static.mikehadlow.com/The Castle Windsor Inversion of Control Container.pptx

Powerpoint 2003

http://static.mikehadlow.com/The Castle Windsor Inversion of Control Container_2003.ppt