When you work on projects of a certain size, you will find that you begin to add custom fields to your TFS Work Items that you later want to search on. In my case it was an External Requirement ID field that we use to store the ID of requirements that our customers use to track requirements. I often use these when communicating with our customers, or even members of my team.

For example:

  “Have you checked in A1 yet?” is easier to ask and understand than “Have you checked in 177484?”.

The problem that arises with this approach is being able to find a work item by its External Requirement ID.

To solve this issue, I once again turned to Linqpad and came up with a script that lets you search your work items against a comma-separated list of entries. After a bit of digging I managed to find the correct API to be able to produce reliable TFS Web Access URLs in the results.


To use the script, just place your collection address and custom field name at the top. You can also add any other filtering into the WIQL; for example, you might only want to search a certain Area Path.

When you run the script you will be asked to “Enter External Requirement ID”; just enter an ID, e.g. A1, or a list of IDs, e.g. A1, A2, and press Enter.
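Under the hood, the idea is simply to turn that list into a WIQL query. Here is a rough sketch in the LinqPad style of the script (the helper name and the field reference name MyCompany.ExternalRequirementId are my own illustrative stand-ins, not the exact code from the download):

```csharp
using System;
using System.Linq;

// Turns a comma separated list like "A1, A2" into a WIQL query.
// "fieldReferenceName" would be your custom field's reference name.
string BuildWiql(string fieldReferenceName, string idList)
{
    var clauses = idList
        .Split(',')
        .Select(id => String.Format(
            "[{0}] = '{1}'",
            fieldReferenceName,
            id.Trim().Replace("'", "''")));

    return "SELECT [System.Id], [System.Title] FROM WorkItems WHERE "
        + String.Join(" OR ", clauses);
}

Console.WriteLine(BuildWiql("MyCompany.ExternalRequirementId", "A1, A2"));
```

The resulting string can then be handed to WorkItemStore.Query(), which is where the real script picks up the matching work items and builds the Web Access URLs.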

I keep mine “Pinned” to the Jump List of Linqpad on my taskbar for ease of access.

You can download the script from here:

The script is based on my Linqpad TFS Template, if you need it for any other version of Visual Studio, download the correct template and copy and paste the body of the script between them.

Visual Studio 2013 has a re-designed Team Explorer, and I really like it, but I sometimes find some options, such as “Find Shelvesets” and “New Query”, to be buried away.

For example, the normal path I have to take to find a shelveset is:

  • Pending Changes
  • Actions
  • Find Shelvesets

This is really “clicky” and something I wished I could do quicker.

Today I found out I could right-click on the tiles on the Team Explorer Home Page to get to some of those actions that are buried away:

Pending Changes

From Pending Changes you can get to Find Shelvesets


Work Items

From Work Items you can get to “New Query”


My Work

From My Work you can get to Request Review


Team Members

If you have the TFS 2013 Power Tools installed, then even the Team Members tile gets the right-click treatment, with access to Team Utilities.


I recently started working on a new project and decided to use xUnit.net as my unit testing framework. I made a simple test class with a test method, but when I build it, Visual Studio Code Analysis complained with a “CA1822: Mark members as static” warning against my test method.

Let’s assume I have a test for MyClass that looks like this:

public class MyClassTest
{
    [Fact]
    public void TestMyProperty()
    {
        var myClass = new MyClass(1);
        Assert.Equal(1, myClass.MyProperty);
    }
}

 

In this version, Code Analysis is complaining because TestMyProperty does not use any instance members from MyClassTest. This is all well and good in production code, but I assumed (incorrectly) that you must have instance test methods for the test framework to pick them up. As it turns out, xUnit.net is better than that and works on static methods as well as instance methods.

Since these were all the tests I needed, I could simply mark the method as static. If all your test methods are static, you should mark the class as static too – otherwise you will get a “CA1812: Avoid uninstantiated internal classes” warning from Code Analysis.

So the fixed version will look like:

public static class MyClassTest
{
    [Fact]
    public static void TestMyProperty()
    {
        var myClass = new MyClass(1);
        Assert.Equal(1, myClass.MyProperty);
    }
}

 

Now, those of you who have used MSTest and Code Analysis might notice two things here:

  1. xUnit.net is not bound by the same restrictions as MSTest. Anything public with a [Fact] attribute will be tested.
  2. Why does CA1822 not occur when using the [TestMethod] attribute from MSTest?

To answer #2, I turned to ILSpy, and pulled apart the Code Analysis rule assembly to find the following “hack” by Microsoft. If you want to take a look, the DLL is located in: %ProgramFiles(x86)%\Microsoft Visual Studio 12.0\Team Tools\Static Analysis Tools\FxCop\Rules\PerformanceRules.dll

There is a class for each rule, the one I was interested in was called MarkMembersAsStatic with the hack located in this method:

private static bool IsVSUnitTestMethod(Method method)
{
    if (method.get_IsStatic() || 
        method.get_Parameters().get_Count() > 0 || 
        method.get_ReturnType() != FrameworkTypes.get_Void())
    {
        return false;
    }
    for (int i = 0; i < method.get_Attributes().get_Count(); i++)
    {
        AttributeNode attributeNode = method.get_Attributes().get_Item(i);
        if (attributeNode.get_Type().get_Name().get_Name() == 
            "TestInitializeAttribute" || 
            attributeNode.get_Type().get_Name().get_Name() == 
            "TestMethodAttribute" || 
            attributeNode.get_Type().get_Name().get_Name() == 
            "TestCleanupAttribute")
        {
            return true;
        }
    }
    return false;
}

Remember, this is disassembled code and has been re-formatted by me.

That’s right, there is an explicit suppression of the CA1822 rule against any method with an attribute called [TestInitialize], [TestMethod] or [TestCleanup]. Well, that explains that little mystery.
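Stripped of the disassembler’s get_ accessor noise, the check reads roughly like this in plain C# (my paraphrase of the logic, not the actual FxCop source):

```csharp
using System;
using System.Linq;

var suppressedAttributes = new[]
{
    "TestInitializeAttribute",
    "TestMethodAttribute",
    "TestCleanupAttribute",
};

// Instance, parameterless, void methods carrying one of the MSTest
// attributes are treated as test methods and exempted from CA1822.
bool IsVsUnitTestMethod(bool isStatic, int parameterCount, bool returnsVoid, string[] attributeNames)
{
    if (isStatic || parameterCount > 0 || !returnsVoid)
        return false;

    return attributeNames.Any(suppressedAttributes.Contains);
}

Console.WriteLine(IsVsUnitTestMethod(false, 0, true, new[] { "TestMethodAttribute" })); // True
```

Note that the match is purely on the attribute type name, so xUnit.net’s [Fact] never qualifies for the exemption.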

 

So far, I’ve not had any problems with xUnit.net now that I’ve figured this out, and at some point in the future I may post more about my journey through TDD with it.

Scott Hanselman has just posted on his blog about how easy it is to get a Windows 8/8.1 Live Tile for your own site. At the bottom he posted a link to the Custom Windows Pinned Tiles plugin for WordPress by Nick Hasley.

I thought I’d give it a go, and after a few minutes of clicking, and a little hacking in Paint.net to make an icon, I now have a live tile that displays updates from my RSS feed:


This really was easy, a big thanks to Nick for this awesome plugin.

In today’s post I’m going to go through the process for setting up the TFS Hands on Labs (HOLs) in Windows Azure.

The TFS Hands on Labs are great resources for learning the latest features in TFS. The latest TFS 2013 labs can be found here on Brian Keller’s blog.

I chose to use Windows Azure so that I could share the labs with my colleagues. I have an MSDN Subscription through my employment that gives me $150 (£130) of Azure credit a month, so I don’t have to worry too much about paying for all this.

Overview

My initial attempts to upload the VHD into Azure fell flat when I downloaded the 13GB RAR archive, unpacked it to a whopping 55GB VHD and then tried to upload it to Windows Azure. My poor 2MB cable internet just wouldn’t cut it – the estimate was at least a week to upload, and on top of that I was being severely throttled by my ISP.

So I came up with a cunning workaround to save my bandwidth:

  • Set up a “Downloader” VM on Windows Azure.
  • Download, unpack and re-upload the VHD on the “Downloader” VM in Windows Azure.
  • Create a VM for the Hands on Lab and mount the VHD.

Prerequisites

To follow this guide you will need a Windows Azure account and a basic understanding of Consoles (Command Prompt / PowerShell) and know a little bit about setting up Windows Servers.

1. Creating a “Downloader” VM

The first thing you need to do is create a “Downloader” Virtual Machine on Windows Azure to perform the download, unpack and re-upload process. Once you have completed this process and your Labs are up and running, you can delete this “Downloader” VM to save money.


  • “Quick Create” a new Medium spec. VM. Select a data centre in the region closest to you.


  • Windows Azure will have a think for a couple of minutes and your new VM should appear.
  • When the status turns to “Running”, select it from your VM list and click the “>< Connect” button at the bottom of the screen.
  • This will download a remote desktop profile for your VM and you can login using the username and password you supplied.

2. Downloading, unpacking and re-uploading the VHD

These next steps will be performed on the new VM over remote desktop.

  • The first thing you need to do is download and install the Free Download Manager (FDM) tool.
  • I found it easier to download this on my local PC and then browse to it through the mapped remote desktop drives on the VM and copy it up. Internet Explorer on servers is annoyingly secured.
  • Once you have FDM installed, run it and import the list of URLs from Brian’s blog post.
  • This will take some time to download, so now’s a good time to get a cup of tea.
  • Once the download is complete, run the “VisualStudio2013.Preview.part01.exe” file on the server and click through the options to start the extraction. I did this on a Small VM and it took even longer than the download (that’s why I am suggesting a Medium VM).
  • Whilst it is unpacking, now is a good time to get the “Downloader” VM and Windows Azure set up ready to upload the VHD.

2.1 Preparing to Upload a VHD to Azure.

  • First, grab a copy of Web Platform Installer and install it on the Server. Again, you may want to download it locally and copy it up to bypass Internet Explorer.
  • Open Web Platform Installer and install “Windows Azure PowerShell” along with any dependencies.
  • Open up PowerShell and run the following commands:
Set-ExecutionPolicy RemoteSigned
Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"
  • This will setup PowerShell with all the Azure CmdLets.
  • Get a copy of your Publish Settings file from Azure using this PowerShell command. This opens up a browser window to download a file (again I did this locally and uploaded it).
Get-AzurePublishSettingsFile
  • Now you have your Publish Settings file you need to import them and select your subscription using the following PowerShell commands:
Import-AzurePublishSettingsFile 'C:\SubscriptionName-8-27-2013-credentials.publishsettings'
Set-AzureSubscription 'SubscriptionName'
  • Replace SubscriptionName with the name of your subscription.
  • Finally, verify the settings with the following command. You should see your subscription details listed.
Get-AzureSubscription

2.2 Creating your Storage

Now that you have PowerShell ready, you need somewhere to upload the VHD to on Azure.

  • Head back to the Windows Azure Portal.
  • On the left hand side, select Storage and click “+ New”.


  • Click “Quick Create”. Give it a name and select the same region as the Downloader VM.
  • Once Azure has created the Storage, you need a Container.
  • Select the storage you have just created, and navigate to the “Containers” tab and click “Add” at the bottom.


  • Give the Container a name and leave the access as “Private”.
  • You should now have everything set up to upload the VHD.

2.3 Uploading the VHD

  • Back on the Downloader VM, wait for the installer to complete; this may take some time.
  • Now that the installation is complete you can start the upload process.
  • To upload the VHD you need 2 pieces of information, the source and destination.
  • The source will be something like:
    • C:\Downloads\Visual Studio 2013 Preview\WMIv2\Virtual Hard Disks\TD02WS12SFx64.vhd – where C:\Downloads is where you let FDM download the files to.
  • The destination you can get from the Azure Portal and will be something like:
    • http://storagename.blob.core.windows.net/containername/TD02WS12SFx64.vhd  – where storagename and containername are the names of your storage and container.
  • In PowerShell on the Downloader VM run the following PowerShell command:
Add-AzureVhd
  • And supply the Destination and LocalFilePath (Source) when prompted.
  • This upload will take a few hours, so keep an eye on the progress – I left mine overnight.
  • When this is complete, you can create your Hands on Lab VM.

3. Creating the Disk and Hands on Lab VM

Now you should have everything you need to create your Hands on Lab VM.

  • Head back to the Windows Azure Portal again.
  • Go to “Virtual Machines” and select the “Disks” tab at the top.
  • Click “Create” and give your Disk a name and browse to the VHD in your storage container.


  • Once your Disk is created go back and create another Virtual Machine.
  • This time, don’t use “Quick Create”; instead select “From Gallery”.


  • When you use “From Gallery” you will see an option on the left that says “My Disks”.
  • Select the Disk you just created and click “Next”.
  • Fill in the rest of the options for creating a Large VM (a medium also might work).
  • Keep filling in the form until the end. Your VM will be created and start provisioning.
  • When this VM is created and its status is “Running”, you are good to go.
  • Click on the “>< Connect” button for the new VM and you should be able to login as “Administrator”. The password is in the “Working with the Visual Studio 2013 ALM Virtual Machine” document that comes with the VM.

Final Steps

It is recommended that you secure the VM from the outside world – this Windows Azure VM is on the internet. So change the Administrator password and disable RDP access for any of the user accounts that don’t need it.

Conclusion

Setting up this VM was fun and a great excuse to learn the Windows Azure platform. Remember you can delete the Downloader VM and any associated artefacts (except the TFS HOL VHD) when you are done with it.

If you have any questions or comments, please contact me and I’ll do my best.

Associated Links / Credits

Here are a few links I used to pull all this together, if you’re stuck you might find something useful there.

Today’s topic is another TFS 2012 post-upgrade “fix”.

I upgraded our TFS 2010 instance to TFS 2012 four months ago and, slowly, I have been fixing up all the things that broke. I’ve been using the TFS Web Administration page as a guide for what jobs are still having problems. For those of you who don’t know what that is and have on-premise TFS 2012 (or greater), I heartily recommend you check out Grant Holliday’s blog post on it.

The problem we were seeing was that the Job Monitoring graph was reporting about 50% of all jobs as failures. And nearly all the failures were for one job type, the “LabManager VMM Server Background Synchronization” job. This runs every few minutes and tries to keep the LabManager server up to date. The problem is that we set up LabManager on our TFS 2010 instance, and then tore down the VMM Lab Management Server without allowing TFS to de-register it. The TFS Administrator console did not offer any options to remove the server.

I posted on the MSDN Forums but sadly, the suggestions from Microsoft didn’t help.

In the end I turned to ILSpy and started disassembling the TFS Server Side Assemblies and found references to a “Registry” that contained the settings for Lab Management. This Registry turned out to be stored in the TFS database.

Before we go any further, I just want to be 100% clear that messing around in the TFS database is not supported by me or by Microsoft. You do this of your own free will, and I will not support you if it goes wrong.

Within the Tfs_Configuration database there is a table called tbl_RegistryItems. This contains the configuration for Lab Management. Because we do not use Lab Management in any shape or form at the moment, I was happy to delete all our settings relating to it. If you do use Lab Management, then I don’t suggest you try this.

After backing up all the data that I was about to delete, I ran the following SQL Script to delete all our Lab Management configuration:

use Tfs_Configuration;

delete tbl_RegistryItems
where ParentPath = '#\Configuration\Application\LabManagementSettings\';

The data in this table was cached, so I needed to restart my TFS Services to pick up the change, but once that was done, my Lab Manager job no longer reports an error and my Job Monitoring pie chart is nearly 100% green now.

Since upgrading to TFS 2012.2 (from Update 1) I have been seeing the following error in a couple of places:

Server was unable to process request. ---> 
There was an error generating the XML document. ---> 
TF20507: The string argument contains a character that is not valid:'u8203'. 
Correct the argument, and then try the operation again.

It first appeared when loading VS2012.2 with TFS Sidekicks 4.5 installed. It also appeared when I called IIdentityManagementService.ReadIdentities() via the TFS 2012 Object Model. I guess this is what TFS Sidekicks is calling under the covers.

It turned out that this was caused by the Zero Width Space Unicode character (\u200b) being present in a Team’s Description field. Some of our users had copied and pasted text in there and brought it along from somewhere; where it came from I’ll probably never know.
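Once you know the culprit, spotting it (and scrubbing it out of) a description string is straightforward; a small helper like this would do it (my own illustrative snippet, not part of the original investigation):

```csharp
using System;

const char Zwsp = '\u200b'; // the Zero Width Space

// Detect whether the character is hiding anywhere in the text.
bool ContainsZwsp(string text) => text.IndexOf(Zwsp) >= 0;

// Remove every occurrence of it.
string StripZwsp(string text) => text.Replace(Zwsp.ToString(), String.Empty);

Console.WriteLine(ContainsZwsp("Our\u200bTeam")); // True
Console.WriteLine(StripZwsp("Our\u200bTeam"));    // OurTeam
```

Of course, at this point I didn’t yet know which Team’s Description to feed into it – that hunt comes next.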

Tracking down which Team had this problem was quite a trek. Firstly, this doesn’t happen when using the V10 (TFS 2010) Object Model, only when referencing the V11 Team Foundation client assemblies.

Using the following code in my TFS Template (2012 Version) as a base:

var managementService = 
    tfs.GetService<IIdentityManagementService>();
var members =
    managementService
    .ReadIdentity(
        GroupWellKnownDescriptors.EveryoneGroup, 
        MembershipQuery.Expanded,
        ReadIdentityOptions.None)
    .Members;

I then call this method to get the crash:

var nodeMembers = 
    managementService
    .ReadIdentities(
        members, 
        MembershipQuery.Expanded, 
        ReadIdentityOptions.ExtendedProperties);

The problem is, in my TFS Instance, there are 634 “members” and I had no idea which one might be causing the problem.

Replacing the above line with a simple loop and try…catch block narrowed that down for me:

foreach (var member in members)
{
    try
    {
        var nodeMembers = 
            managementService
            .ReadIdentities(
                new [] { member, }, 
                MembershipQuery.Expanded, 
                ReadIdentityOptions.ExtendedProperties);
    }
    catch (Exception e)
    {
        e.Dump(member.Identifier);
    }
}

I now had the Identity of the member with an issue; the problem I now faced was getting the name of that member.

A typical Identity looks like this:

S-1-9-1441374244-17626364-2447400142-3087036873-88942238-1-3433204373-3394714127-2914434643-4144131896

In the end I fired up SQL Profiler, pointed it at TFS and re-ran my code snippet. I then stopped the trace and searched for the first part of my Identity: 1441374244. This led me to the following SQL statement:

declare @p3 dbo.typ_KeyValuePairInt32StringVarcharTable;
insert into @p3 values(0,'S-1-9-1441374244-2649122007-3436464326-2922169763-974421344-0-0-0-0-3');

exec prc_ReadGroups 
    @partitionId=1,
    @scopeId='5c5c4fe4-eba6-4899-87f1-f2f8a1802a6e',
    @groupSids=@p3,
    @groupIds=default;

By copying this into SQL Management Studio and running it against the Tfs_Configuration database (in a begin tran…rollback tran block, of course) I was able to get details of the Identity, in my case it was a Team.

I viewed the results of this Query as “Text” from SQL Management Studio and pasted them into a blank UTF-8 document in Notepad++. Flicking to the trusty Hex Editor plugin I could soon see some characters that did not belong in the Description text. These were the u200b Zero Width Spaces.

To fix this I just opened up my Team Web Access, navigated to the Team’s page and deleted the Description, re-typed it and saved. Re-running my Linqpad sample proved the issue was solved.

I’ve logged this as a Connect Bug as I am sure it is not intentional and it only started happening since installing Update 2.


Updated 03-Jun-2013: Changed Dropbox to SkyDrive after Outdoor Navigation changed.

This post is going to cover a combination of two of the things I love, Linqpad and Geocaching.

If you’re not familiar with Geocaching, but have any love for the outdoors, then I heartily recommend you try it. It’s pretty much treasure hunting for grown-ups and using grown-up toys. It’s also great for the family; I never go without taking my son with me.

If you’re not familiar with Linqpad, go get acquainted, I use it for so many different roles, in this case, scripting C#.

As I am only an amateur Geocacher, my GPS device of choice is my Windows Phone (currently a Lumia 920 – which I also love) and GPS Tuner’s Outdoor Navigation. Outdoor Navigation is ace; it has many features making it a great companion when out and about. However, it does not have full integration with Geocaching.com, so you have to use SkyDrive to synchronise a “loc” file downloaded from Geocaching.com.

This brings us to the problem: each loc file is one Point of Interest in Outdoor Navigation, meaning that if I plan to stock up on (say) five caches before heading out I need to download five files, copy five files to SkyDrive, fire up Outdoor Navigation and import each file, one after another, from the list. Downloading the files is easy enough, my browser only requires one click. Uploading to SkyDrive is equally easy, just multi-select and copy. But the import into Outdoor Navigation is not multi-select or fluid to work through a long list. The import process was starting to get tedious after a while, so I decided to dig into the loc files to see if there was a way to speed things up.

Here’s a loc file for just one Cache (from today’s outing):

<?xml version="1.0" encoding="UTF-8"?>
<loc version="1.0" src="Groundspeak">
    <waypoint>
        <name id="GC37092">
            <![CDATA[A Longer Dog Walk #1 by The Briar Rose]]>
        </name>
        <coord lat="53.602717" lon="-1.79085"/>
        <type>Geocache</type>
        <link text="Cache Details">http://www.geocaching.com/seek/cache_details.aspx?wp=GC37092</link>
        <difficulty>1.5</difficulty>
        <terrain>1</terrain>
        <container>2</container>
    </waypoint>
</loc>

It’s an XML document with a “loc” root element and a “waypoint” for the Point of Interest. After a bit of experimentation I found I could put any number of “waypoint” elements into the document and that Outdoor Navigation would only have to import one file that contained any number of Points of Interest. Bingo!

Now I had a way to import just one file, but to do that I needed to combine all the individual loc files from Geocaching.com. The next issue was how to combine them; enter Linqpad, as always.

Using a bit of Linq-Xml I was able to load each file, grab the “waypoint” element(s) and then combine them into a new XML document that I could save to disk and even auto upload to SkyDrive – providing I used the SkyDrive Application for Windows.

First. Load all the loc files I have downloaded.

var locationElements = 
  Directory
  .EnumerateFiles(folderPath, "*.loc")
  .SelectMany(path => new LocFile(path).GetElements());

There’s a bit going on here. We start by grabbing any file with the “loc” extension in the folder stored in the folderPath variable – “Downloads\GC” in my case – creating a new instance of the LocFile class, calling GetElements() and combining all the results together. I’m using SelectMany instead of Select because I am assuming there could be more than one “waypoint” element in a loc file. What we are left with is an IEnumerable<XElement> containing all the “waypoint” elements from all the files.
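The Select versus SelectMany distinction is easy to see with plain collections – Select would keep the per-file nesting, while SelectMany flattens everything into one sequence:

```csharp
using System.Linq;

var groups = new[] { new[] { 1, 2 }, new[] { 3 } };

// Select keeps the nesting: a sequence of two arrays.
var nested = groups.Select(g => g).ToArray();   // nested.Length == 2

// SelectMany flattens: one sequence of all three elements.
var flat = groups.SelectMany(g => g).ToArray(); // { 1, 2, 3 }

System.Console.WriteLine(flat.Length); // 3
```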

Second. Creating the new XDocument:

var combinedLocFiles = 
  new XDocument(
    new XElement("loc",
        new XAttribute("version", "1.0"),
        new XAttribute("src", "Groundspeak"),
        locationElements))
  .Dump();

Here we are creating a new XDocument with a root element of “loc” with the attributes “version” and “src” set to “1.0” and “Groundspeak”, respectively (the first, second and last lines in the above XML), and then filling the root element with the contents of locationElements – the “waypoint” elements from the first snippet.

Finally. The LocFile class.

class LocFile
{
    readonly String _filePath;
    public LocFile(String filePath)
    {
        _filePath = filePath;
    }
    
    public ReadOnlyCollection<XElement> GetElements()
    {
        return
            XDocument
            .Load(_filePath)
            .Root
            .Elements()
            .ToList()
            .AsReadOnly();
    }
}

This is the class that represents a loc file and knows how to get the relevant Elements from it.

These three snippets are the meat of the script, everything else is just fluff to get the correct paths and save the output and copy it to SkyDrive.

It’s pretty easy to use, just fire up Linqpad, load the Script and then run it. Before I start I ensure that there are only the loc files I want to combine in my “folderPath”, but other than that, you’re away.

If you need to change the paths, that’s at the top of the script and is pretty straight forward too.

Download

Here’s the link to download the “linq” file from my SkyDrive.

Today I found myself in a situation where I needed to initialise a property in MSBuild to an empty string via the /property:<n>=<v> (short form /p) command line switch. The reason I had to do this was so that I could remove a property from my OutputPath when building on Team Foundation Server.

For example, in my C# project file I had the following line:

<OutputPath>$(MyProperty)\bin\</OutputPath>

In some scenarios I wanted $(MyProperty) to be “Build” and in other cases I wanted it to be removed.

So the scenarios went a little like this:

<!-- Scenario 1 -->
<OutputPath>Build\bin\</OutputPath>
<!-- Scenario 2 -->
<OutputPath>\bin\</OutputPath>

After a quick visit to Google and thumbing through Inside the Microsoft Build Engine 2nd Edition by Sayed Ibrahim Hashimi, I still could not find a canonical answer to my question. I read something that said an empty string should be two single quotes, but substituting those into my build caused it to error (it turns out that syntax is for checking whether a variable is empty). In the end I went to the command line and started experimenting. I thought I’d just pop my findings on here in case anyone else has a need for it.

The answer is to just use PropertyName= and nothing more.

For example:

MSBuild /property:MyProperty= MySolution.sln

This syntax works at the start, middle or end of your property command line switch. So these are all valid too:

MSBuild /property:Foo=One;MyProperty= MySolution.sln
MSBuild /property:Foo=One;MyProperty=;Bar=Two MySolution.sln
MSBuild /property:MyProperty=;Bar=Two MySolution.sln

You can check the MSBuild diagnostic (using the /verbosity:diagnostic switch) log to confirm that it worked:

MSBuildUserExtensionsPath = C:\Users\Dave\AppData\Local\Microsoft\MSBuild
MyProperty = 
NUMBER_OF_PROCESSORS = 4

I am a massive fan of Linqpad, especially as a code scratch pad, but it is also very useful for performing queries against the Team Foundation Server SDK.

I regularly find myself wanting to get information out of our TFS Collection via the API, whether it be Build Information, Work Item Queries, Version Information, etc. Occasionally, I also need to update Build Definitions’ Process XML en masse.

To make my life easier and to enable me to spin up these queries as quick as possible I came up with a “Template” Linqpad script that I can always use as a baseline.

The important code is as follows and the “linq” file has all the references and namespaces I could ever need:

void Main()
{
    const String CollectionAddress = "http://tfsserver:8080/tfs/MyCollection";
    
    using (var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(CollectionAddress)))
    {
        tfs.EnsureAuthenticated();
        var server = tfs.GetService<>(); // insert the service type you need
    }
}

Using

I have this in “My Linqpad Queries” and as soon as I open the file I press Ctrl+Shift+C to clone it to a new query so I don’t save changes to the “template”. Linqpad doesn’t yet support “Templates”; there is a UserVoice request for it, however.

Once I have the cloned copy I insert the name of the service I plan to call into GetService<>, and then go to work. The API is quite easy to use, and the MSDN documentation is pretty comprehensive. The common services I use are:

  • IBuildServer for Builds
  • VersionControlServer for Source Control
  • WorkItemStore for Work Items

In the downloaded file there are also some XNamespace declarations at the top, which are used when I have to update IBuildDetail.BuildDefinition.ProcessParameters using Linq to Xml. These are the common three I found myself having to declare each time, so I just made them part of the template.
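The reason the namespaces matter is that Linq to Xml only finds an element when the XNamespace is combined with the local name. A tiny sketch of the pattern (the element name and namespace URI here are illustrative stand-ins, not the real TFS build workflow namespaces from the template):

```csharp
using System.Xml.Linq;

// A made-up process parameters fragment for illustration only.
var xml = @"<Parameters xmlns:p='http://example/build'>
              <p:BuildNumberFormat>$(Date)</p:BuildNumberFormat>
            </Parameters>";

// Without combining the XNamespace with the local name, Element()
// would silently return null - hence declaring the namespaces up front.
XNamespace p = "http://example/build";
var format = XDocument.Parse(xml).Root.Element(p + "BuildNumberFormat").Value;

System.Console.WriteLine(format); // $(Date)
```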

Download

You can download the “linq” file for Linqpad from my SkyDrive: