Taeguk.co.uk
  • TFS 2015 does not support 2012 build agents

    This post is part PSA, part debugging story.

    The important bit:

    Team Foundation Server 2012 XAML Build Agents do not work with TFS 2015

    I discovered this fact the weekend just gone, whilst performing an upgrade to TFS 2015.3 from TFS 2012.4.

    The plan was to only upgrade the TFS Server and leave the build infrastructure running on TFS 2012. This seemed like a sound idea as I know Microsoft care about compatibility, and the upgrade was more complicated than your usual one. I figured it would just keep working and that I’d upgrade the build agents later. Boy, was I wrong.

    I may have even checked the documentation, which does not list 2012 agents as compatible, but the lack of support isn’t explicitly called out, so I could have glanced over it.

    TFS build compatibility table - look, no 2012

    The problems with TFS 2012 build agents against TFS 2015 manifested as two different errors when I queued a build without a drop location. Queuing a build with a drop location worked just fine.

    Error 1 - Build agents not using the FQDN

    The build infrastructure runs on a different domain to the Team Foundation Server.

    We have tfs-server.corp.com for TFS and build-server.corp-development.com for builds.

    The error manifested as:

    FQDN error message

    The error that appeared twice was not very helpful.

    An error occurred while copying diagnostic activity logs to the drop location. Details: An error occurred while sending the request.

    I eventually debugged this (details later) and found out that the last task on the build agent was trying to access tfs-server, without the .corp.com DNS suffix, to publish some logs. As a temporary workaround I bobbed an entry into the hosts file to make tfs-server point to the actual IP of the TFS server.
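
    For illustration, the temporary hosts entry looked something like this (the IP address here is made up):

    # C:\Windows\System32\drivers\etc\hosts
    10.0.0.42    tfs-server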

    Error 2 - the bad request

    With all the steps of the build now resolving the server name, I came across the second error.

    Bad request error message

    The error message was still no more useful than the last one:

    An error occurred while copying diagnostic activity logs to the drop location. Details: TF270002: An error occurred copying files from ‘C:\Users\tfsbuild\AppData\Local\Temp\BuildAgent\172\Logs\151436\LogsToCopy\ActivityLog.AgentScope.172.xml’ to ‘ActivityLog.AgentScope.172.xml’. Details: BadRequest: Bad Request

    An error occurred while copying diagnostic activity logs to the drop location. Details: An error occurred while sending the request.

    My debugging eventually showed that this was caused by TFS returning an HTTP 400 (Bad Request) for the exact same step as the first error.

    It was at this point I figured something was really wrong and started searching for compatibility problems. In my effort to find a KB or update I re-checked the documentation and noticed the lack of support, as well as finding an MSDN forum post from RicRak where they solved the problem by upgrading their agents off TFS 2012.

    Solution

    My solution was to upgrade our entire build infrastructure (some nine or ten servers) to TFS 2015, discovering along the way that you must also install VS2015 on the servers to get the Test Runner to work.

    It took one day of diagnosis and testing to get to the point of knowing that TFS 2015 build agents would solve the problem and still build our codebase. Another half-day was spent upgrading all the servers.

    Diagnostics

    How do you figure out what has gone wrong with something like this? TFS diagnostic logging did not provide any more information than minimum logging did. The error only appeared at the very end of a build; it wasn’t related to a step in the XAML workflow, nor to any variables in the build process.

    The solution (as always) came from Charlie Kilian on Stack Overflow.

    I stopped the Build Service, opened up TFSBuildServiceHost.exe.config, and added the following section:

    <system.diagnostics>
        <sources>
            <source name="System.Net" tracemode="includehex" maxdatasize="1024">
                <listeners>
                    <add name="System.Net"/>
                </listeners>
            </source>
        </sources>
        <switches>
            <add name="System.Net" value="Verbose"/>
        </switches>
        <sharedListeners>
            <add name="System.Net"
                type="System.Diagnostics.TextWriterTraceListener"
                initializeData="C:\Logs\network.log" />
        </sharedListeners>
        <trace autoflush="true"/>
    </system.diagnostics>
    

    Then I restarted the build service and ran the smallest build I could to produce minimal logs.

    The log folder looked something like this:

    Log files on disk

    The network.log file had a few errors, but nothing fatal-looking, so I looked in the other files for errors and finally found this line:

    System.Net Error: 0 : [4916] Exception in HttpWebRequest#13319471:: - The remote name could not be resolved: 'tfs-server'.
    

    That was preceded by:

    System.Net Verbose: 0 : [4928] HttpWebRequest#13319471::HttpWebRequest(http://tfs-server:8080/tfs/DefaultCollection/_apis/resources/containers/122598?itemPath=logs%2FActivityLog.AgentScope.172.xml#752534963)
    

    Here you can see the server name without the necessary DNS suffix in a request to _apis/resources/containers.

    This was the point I added the hosts file entry and then got the next error.

    For the second error I repeated the diagnostic logging steps and this time found the following errors (searching for Bad Request):

    System.Net Information: 0 : [16628] Connection#50276392 - Received status line: Version=1.1, StatusCode=400, StatusDescription=Bad Request.
    

    By tracing the ID (in this case 16628) back up the file I found it was a call to the same endpoint, but this time a PUT:

    System.Net Information: 0 : [16628] HttpWebRequest#9100089 - Request: PUT /tfs/DefaultCollection/_apis/resources/containers/122603?itemPath=logs%2FActivityLog.AgentScope.59.xml HTTP/1.1
    

    This was the point I gave up thinking this could be fixed by a configuration change.

    Conclusion

    I wish I had read something like this before I planned the weekend. I did do testing, but because testing TFS against live is risky I had most of the test instance network-isolated, and that required a lot of configuration changes; I assumed these errors were just configuration-based. Lesson well and truly learned.

    It would have been nice to see this called out more explicitly on MSDN. In my opinion these are two bugs that Microsoft decided not to fix in the TFS 2012 product life-cycle.

    On the plus side, I learned some really neat debugging skills I didn’t know before.

    Remember, if you’re upgrading from TFS 2012, plan to upgrade your build agents at the same time!

  • Deployment Pipeline with VSTS and Release Management

    Back in 2014 I wrote a UNC to URI Path Converter using ASP.NET MVC 4 and Visual Studio Team Services, with a XAML Build process template to continuously deploy the changes to an Azure Website. This was my first Azure Website and most of it was just using the default settings from the New Project dialog in Visual Studio, all very “point and click”.

    It worked well, averaging a few hundred page requests a week, and so far I’ve been happy with everything as it “just worked”. The other day I wanted to add a small feature and noticed, after pushing and deploying the change, that Azure was warning me XAML builds would soon be deprecated. So, whilst I was making some changes, I decided it would be a good opportunity to get up to date on a few new technologies that I have not used in anger.

    I planned to setup the following for the website:

    • Rewrite in .NET Core.
    • Custom VSTS Build vNext.
    • Deployment Pipeline using Microsoft Release Management.

    Rewrite in .NET Core

    The only .NET Core app I had written at this point was a console application, so I took this as an opportunity to get to grips with setting up a build and a suite of unit tests using xUnit.net. Getting this working in Visual Studio was straightforward following the xUnit.net documentation, but getting the build to run on VSTS was a bit hit and miss. I eventually settled on a mix-and-match combination of dotnet command line tools and the Visual Studio Test Runner.

    VSTS Build Steps

    Using the VS Test step solved the problem of dotnet test not being able to run the xUnit.net tests on the build server. I kept the individual dotnet restore, dotnet publish (site) and dotnet build (tests) steps as I wanted control over the publish, roughly the commands sketched below. I also have a suite of deployment tests based on the full .NET Framework, which I build using VS Build. These were the building blocks of my pipeline.
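
    As a rough sketch, the dotnet steps amounted to something like this (the paths here are illustrative, not the real project layout):

    dotnet restore
    dotnet publish src/Website --configuration Release --output artifacts/site
    dotnet build test/UnitTests --configuration Release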

    Custom VSTS Build vNext

    By keeping control over dotnet publish I could package the website ready to be pushed to Azure using Microsoft Release Management. I took the output of dotnet publish, zipped it up into an archive, and published this as a build artifact.

    The build process also took the output of the DeploymentTests build, zipped it into a separate archive, and published that too.

    I now had a website and a suite of “Deployment Tests” as artifacts from my build.

    Deployment Pipeline using Microsoft Release Management

    A deployment pipeline is where code goes through various stages and each stage provides increasing confidence, usually at the cost of extra time (Martin Fowler: DeploymentPipeline). My pipeline was quite simple:

    Build -> Fast Tests -> Deploy to Pre-Prod -> Test Via API -> Deploy to Live -> Test Via API
    

    This process meant that the build was fast and only ran isolated, fast unit tests against the code. Only then did it deploy onto a Pre-Production server (another free Azure Website) and run a set of integration tests against the website via the API; if those tests passed, the process was repeated against the Live website.

    Using Microsoft Release Management, I was able to orchestrate this with a single Release definition, defining two environments to deploy to.

    Release Management

    I considered using Deployment Slots on Azure to deploy and then swap the Slots after the tests passed, but Slots are only available on the Standard pricing tier and I wanted to keep this free, so I set up another free Website instance and ran the tests on there.

    I used a Variable against each Environment in Release Management to store the Azure Website Name.

    Environment's variables

    These variables had two uses. The first was to keep the steps for each environment the same; I only needed to set the variable to a different value.

    The second was very cool: because the variables in TFS Build and RM are actually environment variables, I could write the following property in the code of my deployment tests:

    public static String BaseUri => $"http://{Environment.GetEnvironmentVariable("AzureWebSiteName")}.azurewebsites.net/";
    

    And then run the API integration tests against the value of BaseUri.
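
    For example, a minimal deployment test built on that property might look like this (the class, test name and endpoint are illustrative, not the actual test suite):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Xunit;

    public class DeploymentTests
    {
        // Built from the AzureWebSiteName variable that Release Management
        // sets per environment (the property shown above).
        public static String BaseUri =>
            $"http://{Environment.GetEnvironmentVariable("AzureWebSiteName")}.azurewebsites.net/";

        [Fact]
        public async Task HomePage_Responds_Successfully()
        {
            using (var client = new HttpClient())
            {
                var response = await client.GetAsync(BaseUri);
                response.EnsureSuccessStatusCode(); // Fails the test on any non-2xx status.
            }
        }
    }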

    I planned to write some User Interface tests using either Coded-UI or Selenium, but because the Hosted Build agents don’t support Interactive Mode, which is needed to run User Interface tests, I made them conditional so they only run locally in Visual Studio. I do plan to get these running in the future.

    The whole process looks like this:

    Deployment Pipeline Flowchart

    Conclusion

    Whilst this is a massively over-engineered solution for such a simple website, it was fun to learn some new tricks and understand how to put a release pipeline together using the VSTS and Azure platforms. I also used it as an opportunity to tidy up my resources in Azure and consolidate all my related resources into an Azure RM Resource Group, including the Application Insights instance I use to monitor it.

  • Now using SSL

    Today I’ve changed over to using SSL by default.

    SSL in Chrome

    The main reason for moving is that SSL gives better SEO - and my old blog used SSL, so I’m sure there will be some HTTPS links scattered about the web. It also stops any silly public networks from injecting anything into my pages.

    I’m using CloudFlare to secure the communications from your browser to them. Thanks to Sheharyar Naseer for his excellent guide that got me up and running in no time, and to DNSimple for their excellent DNS service that made changing my nameservers a piece of cake.

  • Using SignalR in FSharp without Dynamic

    I’ve been building an FSharp dashboard by following along with this post from Louie Bacaj, which was part of last year’s FSharp Advent Calendar. I have to say it’s a great post and it got me up and running in no time.

    If you want to skip the story and get to the FSharp and SignalR part scroll down to Changing the Hub.

    One small problem I noticed was that I could not use any of the features of FSharp Core v4. For example, the new tryXXX functions such as Array.tryLast were not available.

    After a bit of digging I happened across the Project Properties which were stuck on 3.1.2.1.

    Project Properties

    Turns out that the FSharp.Interop.Dynamic package is dependent on FSharp.Core v3.1.2.1.

    So this turned into a challenge: how do I use SignalR without Dynamic? After a bit of googling I landed on this page that showed Strongly Typed Hubs, so I knew it was possible…

    Removing Dependencies

    The first step to fixing this was to remove the packages I no longer needed; these were:

    Uninstall-Package FSharp.Interop.Dynamic 
    Uninstall-Package Dynamitey
    Uninstall-Package FSharp.Core
    

    I then just browsed through the source and removed all the now-redundant open declarations.

    Re-adding FSharp Core

    Slight problem now: I no longer had an FSharp.Core reference, so I needed to add one back in. I’m not sure if this is the best way to solve it, but I just copied and pasted these lines from an empty FSharp project I’d just created:

    <Reference Include="mscorlib" />
    <!--Add this bit-->
    <Reference Include="FSharp.Core, Version=$(TargetFSharpCoreVersion), Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
        <Private>True</Private>
    </Reference>
    <!--End-->
    <Reference Include="Newtonsoft.Json">
    

    Changing the Hub

    Now all I had to do was update the code to use the statically typed hub.

    First step was to create an interface for the metricsHub:

    type IMetricsHub = 
        abstract member AddMessage: string -> unit
        abstract member BroadcastPerformance: PerfModel seq -> unit
    

    Then change our Hub to inherit from the generic Hub<T>:

    [<HubName("metricsHub")>]
    type metricsHub() = 
        inherit Hub<IMetricsHub>() // < Generic version of our interface.
    

    And changed all the calls from:

    Clients.All?message(message)
    

    to

    Clients.All.Message message
    

    Getting the Context

    With SignalR you cannot just new up an instance of a Hub; you have to use GlobalHost.ConnectionManager.GetHubContext<THub>. The problem is that this gives you an IHubContext, which only exposes the dynamic interface again. A bit more googling and I found that you need to pass the interface as a second generic parameter, and you will get an IHubContext<IMetricsHub>.

    So this:

    let context = GlobalHost.ConnectionManager.GetHubContext<metricsHub>()
    

    Becomes:

    let context = GlobalHost.ConnectionManager.GetHubContext<metricsHub, IMetricsHub>()
    

    Now you can call context.Clients.All.BroadcastPerformance and not worry about that pesky dynamic any more.

    Conclusion

    The documentation on SignalR isn’t very good; it was easy enough to find out about the statically typed version, but finding out how to get one out of the context was a right pain.

    I’ve published a fork of Louie’s GitHub repo with four commits that show the steps needed to move from dynamic to statically typed SignalR here, so you can see the changes I needed to make.

  • Adding Cloudapp DNS to Azure VM

    I’ve recently deployed a new Azure Linux VM to host a Discourse instance I run, and noticed that it didn’t have a DNS entry on cloudapp.net. Last time I deployed one it was instantly given an entry in the format server-name.cloudapp.net, but this time it wasn’t, and I had to set it up myself.

    I suspect it is something new for Resource Managed deployments.

    Here’s a list of the steps you need to follow if you ever need to do the same.

    Assuming you have just deployed a VM and it doesn’t have a DNS entry on cloudapp.net, you will see something like this:

    newly deployed vm

    Dissociate Public IP

    First you need to Dissociate the Public IP so you can make changes.

    Click the Public IP Address to open the settings:

    public ip settings

    Then click Dissociate and confirm when prompted.

    public ip settings dissociate

    You cannot change any settings whilst the Public IP is in use.

    Configuring the DNS

    From the Public IP page, click All Settings then Configuration to open up the settings:

    public ip settings configuration

    Then you can enter a new DNS prefix for datacentre.cloudapp.azure.com:

    public ip configuration new dns

    Reassociate the Public IP

    Now you need to reassociate the Public IP with the VM.

    From the VM Screen (First Image) click All Settings, then Network Interfaces:

    vm network interfaces

    Click on the Interface listed:

    all vm network interfaces

    Click on IP Addresses from the Settings blade:

    network interfaces ip addresses

    Click on Enable, then click on the IP Address Configure Required… and select the default (highlighted) Public IP Address from the list.

    select public ip.

    Then click Save.
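
    If you would rather script this than click through the portal, I believe something like the following achieves the same result with the AzureRM PowerShell module (a sketch only - the resource names are made up and I have not run this exact script):

    # Fetch the Public IP resource used by the VM.
    $pip = Get-AzureRmPublicIpAddress -Name "my-vm-ip" -ResourceGroupName "my-resource-group"

    # Create the DNS settings object if the IP has never had a label.
    if ($pip.DnsSettings -eq $null) {
        $pip.DnsSettings = New-Object Microsoft.Azure.Commands.Network.Models.PSPublicIpAddressDnsSettings
    }

    # Set the DNS prefix and push the change back to Azure.
    $pip.DnsSettings.DomainNameLabel = "taeguk-test-dns"
    Set-AzureRmPublicIpAddress -PublicIpAddress $pip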

    Validation and Testing

    Now if you close and re-open the VM blade you should see a new Public IP address appear.

    Click on the Public IP Address to open the blade and you will see your full DNS Entry and a Copy to clipboard button when you hover on it:

    vm with new dns

    To test, ping the VM and see if the DNS resolves:

    C:\> ping taeguk-test-dns.northeurope.cloudapp.azure.com
    
    Pinging taeguk-test-dns.northeurope.cloudapp.azure.com [40.127.129.7]
    

    The requests will time out because Azure has ICMP disabled, but so long as the DNS resolves, you’ve done it.

    Conclusion

    This seems to be a change to do with Resource Manager VMs rather than Classic VMs, though I can’t find a source for it. It used to just work on Classic VMs.

    Note: I have deleted the VM in this post now.

  • Shut the Box is Live

    Today I’ve just published my first App into the Windows and Windows Phone Store.

    Screenshot

    You can download using the image below, if you want to check it out. It’s 100% free and no ads.

    Windows Store Download

    It is a simple version of the pub game Shut the Box; I have a page here with more information about the game.

    This was my first attempt at a Windows Application and I’ve really enjoyed the experience of building it. I tried to use as many things that were new to me as possible, to learn as much as I could through the process. A quick list of new things I explored whilst working on this:

    • Git
    • Visual Studio Online Kanban for planning and tracking work (up until now I’ve only used TFS 2012.4).
    • TFS Build vNext.
    • Application Insights.
    • Custom MSBuild Project to encapsulate all restore/build/test workflows.
    • xUnit.net for Universal Apps (lots of betas to test).

    Working with the Windows Store was a bit hit and miss; for a while I could not get to the “Dashboard” part of the site “because of my Azure account”, or so I was told. This seemed to resolve itself eventually, but was very annoying at the time. I was not offered any explanation, only the suggestion that I should create a new Microsoft Account to publish apps through, which I was not prepared to do.

    It took three attempts to get the application through certification. It first failed because I had not run the Application Certification Kit and had a transparent Windows tile, which is not allowed. The second failure was because Russia, Brazil, Korea and China require certification of anything listed as a Game in the store. I decided not to publish it to those markets for the moment because I wanted it out there, and figuring out how to complete the certification seemed like too much work. I may look into it again later, but for now I am happy.

    This application has been a long time coming, mostly down to my lack of free time and/or willingness to work on it, but I’m glad it’s finally published. Now to try and release some updates and add some more nice features.

    If you enjoy the game, please feel free to leave me a good rating / comment in the Store.

  • Roslyn Based Attribute Remover

    Major Update 1-Aug-2015: Changed VisitAttributeList to VisitMethodDeclaration to fix some bugs with the help of Josh Varty.

    I’m a big fan of XUnit as a replacement for MSTest and use it extensively in my home projects, but I’m still struggling to find a way to integrate it into my work projects.

    This post looks at one of the obstacles I had to overcome, namely the use of [TestCategory("Atomic")] on all tests that are run on TFS as part of the build. The use of this attribute came about because the MSTest test runner did not support a concept of “run all tests without a category”, so we came up with an explicit category called “Atomic” - probably not the best decision in hindsight. The XUnit test runner does not support test categories, so I needed to find a way to remove the TestCategory attribute with the value of Atomic from any method. I’m sure I could have used regex to solve this, and I’m sure that would have caused more problems:

    To generate #1 albums, 'jay --help' recommends the -z flag.

    via xkcd

    Instead I created a Linqpad script and used the syntactic analyser from the Microsoft.CodeAnalysis package.

    PM> Install-Package Microsoft.CodeAnalysis
    

    I found that the syntactic analyser allowed me to input some C# source code, and by writing my own CSharpSyntaxRewriter, remove any attributes I didn’t want.

    I started by creating some C# that had the TestCategory attribute applied in as many different ways as possible:

    namespace P
    {
        class Program
        {
            public void NoAttributes() { }
    
            [TestMethod, TestCategory("Atomic")]
            public void OnOneLine() { }
    
            [TestMethod]
            [TestCategory("Atomic")]
            public void SeparateAttribute() { }
            
            //snip...
        //And so on, right down to...
                    
            [TestMethod, TestCategory("Atomic"), TestCategory("Atomic")]
            public void TwoAttributesOneLineAndOneThatDoesntMatch() { }
        }
    }

    You can see all the examples I tested against in the Gist.

    The CSharpSyntaxRewriter took a lot of messing around to get right, but I eventually figured out that by overriding the VisitMethodDeclaration method I could remove attributes from the syntax tree as they were visited.

    To get some C# code into a syntax tree, there is the obviously named CSharpSyntaxTree.ParseText(String) method. You can then get a CSharpSyntaxRewriter (in my case my own AttributeRemoverRewriter class) to visit everything by calling Visit(). Because this is all immutable, you need to grab the result, which can now be converted into a string and dumped out.

    var tree = CSharpSyntaxTree.ParseText(code);
    var rewriter = new AttributeRemoverRewriter(
        attributeName: "TestCategory", 
        attributeValue: "Atomic");
    
    var rewrittenRoot = rewriter.Visit(tree.GetRoot());
    
    rewrittenRoot.GetText().ToString().Dump();

    The interesting part of the AttributeRemoverRewriter class is the VisitMethodDeclaration method which finds and removes attribute nodes that are not needed:

    public override SyntaxNode VisitMethodDeclaration(MethodDeclarationSyntax node)
    {
        var newAttributes = new SyntaxList<AttributeListSyntax>();
    
        foreach (var attributeList in node.AttributeLists)
        {
            var nodesToRemove =
                attributeList
                .Attributes
                .Where(
                    attribute =>
                        AttributeNameMatches(attribute)
                        &&
                        HasMatchingAttributeValue(attribute))
                .ToArray();
    
            //If the lists are the same length, we are removing all attributes and can just avoid populating newAttributes.
            if (nodesToRemove.Length != attributeList.Attributes.Count)
            {
                var newAttribute =
                    (AttributeListSyntax)VisitAttributeList(
                        attributeList.RemoveNodes(nodesToRemove, SyntaxRemoveOptions.KeepNoTrivia));
    
                newAttributes = newAttributes.Add(newAttribute);
            }
        }
    
        //Get the leading trivia (the newlines and comments)
        var leadTriv = node.GetLeadingTrivia();
        node = node.WithAttributeLists(newAttributes);
    
        //Append the leading trivia to the method
        node = node.WithLeadingTrivia(leadTriv);
        return node;
    }

    The AttributeNameMatches method is implemented to find any attribute whose name starts with TestCategory. This is because attributes in .NET have Attribute at the end of their name, e.g. TestCategoryAttribute, but most people never type it. I figured in this case it was more likely to exist than to have another attribute starting with TestCategory. I don’t think there is an elegant way to avoid using StartsWith in the syntactic analyser; I would have had to switch to the semantic analyser, and that would have made this a much more complicated solution.

    The HasMatchingAttributeValue method pretty much does what it says: it looks for the value of the attribute being just Atomic and nothing else.
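
    The actual implementations are in the Gist, but as a rough sketch (my reconstruction - I’m assuming the rewriter keeps the attributeName and attributeValue constructor arguments in fields of the same name), they look something like this:

    private bool AttributeNameMatches(AttributeSyntax attribute)
    {
        // Matches both [TestCategory(...)] and the full form [TestCategoryAttribute(...)].
        return attribute.Name.ToString().StartsWith(attributeName);
    }

    private bool HasMatchingAttributeValue(AttributeSyntax attribute)
    {
        // There must be exactly one argument, and it must be the configured string literal.
        if (attribute.ArgumentList == null || attribute.ArgumentList.Arguments.Count != 1)
            return false;

        var literal = attribute.ArgumentList.Arguments[0].Expression as LiteralExpressionSyntax;
        return literal != null && literal.Token.ValueText == attributeValue;
    }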

    Once the matching nodes are found, it checks whether the number of attributes on a method is equal to the number it wants to remove; if so, the newAttributes list is left unpopulated and the method is updated to keep its trivia but lose all of its attributes. This shouldn’t happen in this specific scenario, because a TestCategory on its own doesn’t make sense.

    Remove just the matching attributes

    If there are some attributes that do not need removing, then just the matching one should be removed. For example:

    [TestMethod, TestCategory("Atomic")]
    public void OnOneLine() { }

    When the visitor reaches the attributes on this method, it will populate the newAttributes list with just the attributes we want to keep, and then update the method so that it has just the remaining attributes and its trivia.

    Conclusion

    Using Roslyn was a bit of a steep learning curve to start with, but once I figured out what I was doing, I knew I could rely on the Roslyn team to have dealt with all the different ways of declaring attributes in C#. That didn’t stop me from finding what appears to be a bug, causing me to re-write bits of the script and this post, and some more edge cases when I ran it across more than 500 test classes.

    However, if I were to try and use regex to find and remove some of the more complicated ones, and deal with the other edge cases, I’d have gone mad by now.

    You can get the full Gist here.

    If you paste this into a Linqpad “program” and then just install the NuGet Package you should be able to try it out. Note this was built against the 1.0.0 version of the package.

  • Automating the Deployment of TFS Global Lists

    The TFS Global List is a Team Project Collection wide entity and, to the best of my knowledge, requires someone to be a member of the Collection Administrators group to be able to update it – there is no explicit group or permission for “Upload Global List”. This can be quite a problem if there are a number of Lists within your Global List that are updated frequently by the users of your Collection.

    Your current options are either:

    1. Ask the Collection Administrators for every little change (and complain if they take too long, they have a holiday, etc.)
    2. Keep adding people/groups to the Collection Administrators group (and hand out way too much power to people who don’t need it).

    We went for option #1, then option #2, until neither was sustainable any longer.

    The solution I came up with is based on post Deploying Process Template Changes Using TFS 2010 Build by Ed Blankenship, but instead of deploying the whole process template, we just deploy the Global List. (N.B. our TFSBuild account is a Collection Administrator).

    Building the Template

    To build the template I started by copying the DefaultTemplate.11.1.xaml file that ships with TFS 2012, stripped out all of the activities and process parameters that were no longer required, then added a new activity to invoke the witadmin command-line tool to import the Global List.

    I won’t go into detail on how I changed the activities because there were quite a lot of steps, but it is all quite straightforward. A quick overview: remove anything to do with compiling code, running tests or gated check-ins, then add a new activity to invoke the witadmin command line. It will probably be easier understood by looking at the finished template - available to download at the end. I may write a follow-up post with the exact details.
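
    For reference, the import command the new activity ends up invoking looks something like this (the collection URL is illustrative):

    witadmin importgloballist /collection:http://tfs:8080/tfs/DefaultCollection /f:GlobalList.xml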

    Using the template

    • To use the template you need to have the Global List file checked into Version Control. You can follow the advice in the Wrox Professional Team Foundation Server 2013 book and create a Team Project for all your Process artefacts, or, if you just want to keep it simple:
      • Use witadmin to export the global list file:
      • witadmin exportgloballist /collection:http://tfs:8080/tfs/DefaultCollection /f:GlobalList.xml
      • Check that file into its own folder somewhere in source control; in this example we will use $/TFS/GlobalList/GlobalList.xml (having it in its own folder helps).
    • Once you have the template downloaded, you need to check it into Version Control, usually $/MyTeamProject/BuildProcessTemplates/.
    • Create a new build definition.
    • Fill in the General tab however you like.
    • In the Trigger tab select Continuous Integration.
    • In the Source Settings tab select the folder with your GlobalList.xml as Active ($/TFS/GlobalList/)
    • In the Build Defaults tab, select “This build does not copy output files to a drop folder”.
    • In the Process tab we need to do a few steps:
      • To install the template, click Show Details:
      • Show details
      • Click New… and browse to the template we checked in ($/MyTeamProject/BuildProcessTemplates).
      • Fill in the sections as follows:
      • Process Parameters
      • I didn’t know the best way to get the URI of the Team Project Collection, so I made it an argument you need to enter.
      • If you are not using VS2012 on your build server, you will need to find a way to get witadmin.exe on there and then update the path to the location.

    Once the above has been completed you should be able to queue a new build using the new definition and check the output to see if the global list has been successfully uploaded. Just open the build and check the summary; if everything went well you should see the following:

    Build Summary

    If there were any problems, check the “View Log” page; the build uses Detailed logging, which should include enough information to figure out what went wrong.

    Conclusion

    I’ve now stopped worrying about having to update the global list for everyone who needs something new added, and I’m no longer afraid of lots of people being Collection Administrators who really shouldn’t be. I can just grant check-in permissions on the folder that contains our global list and leave people to it.

    Download

    I’m keeping this on my GitHub:

    If you have any improvements (to the post / template), feel free to send me a PR.

  • Moving from WordPress

    This is my last post on WordPress and first post on Jekyll GitHub Pages.

    I’ve decided to abandon WordPress running on Azure Web Apps for a simpler static blog, using Jekyll to convert Markdown to static content hosted on GitHub Pages. I’ll go into the process I went through in a future post.

    This post is here as a marker of when I moved everything over. I’ve tried to get the permalinks in Jekyll to match the ones in WordPress - at the cost of breaking any that were from my brief stint on DasBlog. As far as I know everything should be just the same - including the RSS feed on /feed.

    Shoot me a mail if there is a problem.

    Everything from this point on will be on the new format.

    The actual migration will occur at some point next week.

  • Fixing a broken Kanban Board

    Update 05-Aug-2015: This fix also resolves a second issue.

    There are two problems I have identified in TFS 2012.4 where Kanban boards (the Backlog Board) stop functioning correctly. The fix in this post describes how to resolve both issues by deleting the board configuration from the database.

    Nothing but errors

    The first problem presented as a completely broken TFS Kanban board. All the team could see was a generic “there has been a problem” pink popup instead of their cards.

    When presented with this, the usual fix is to ensure the background job agent is running, which it was. So I took a look in the Windows event logs on the server for more detail and found this error (most of the details are removed for brevity; this is from the middle):

    Detailed Message: TF30065: An unhandled exception occurred.
     
    Exception Message: The given key was not present in the dictionary. (type KeyNotFoundException)
    Exception Stack Trace:    at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
       at Microsoft.TeamFoundation.Server.WebAccess.Agile.Models.WorkItemSource.<>c__DisplayClass18.<GetProposedInProgressWorkItemData>b__13(IDataRecord dataRecord)
       at Microsoft.TeamFoundation.Server.WebAccess.Agile.Utility.WorkItemServiceUtils.<GetWorkItems>d__c.MoveNext()
       at Microsoft.TeamFoundation.Server.WebAccess.Agile.Models.WorkItemSource.GetProposedInProgressWorkItemData(ICollection`1 rowData, ICollection`1 hierarchy, ISet`1 parentIds)
    

    With this little information, all I could assume was that the configuration had somehow become corrupted.

    Bouncing cards

    The second issue was that when dragging a card/work item from one column to another, it instantly bounced back to the original column. This was happening on the client, because we could repro the issue with the network cable unplugged. It wasn’t that the card couldn’t transition state - that presents by not allowing the card to be dragged at all. In my case the card could be dragged and dropped; it went into the column for less than a second and then bounced back, occasionally leaving a card drawn in an odd location in the browser.

    Further testing proved that the card could move between these columns when placed on another team’s board, just not for this team.

    As before, I guessed that somehow the configuration in the database was corrupt.

    The fix

    This fix is a little heavy-handed, but by deleting the board configuration from the database, you can set the board up again as before with no issues.

    Be sure to make a note of the column configuration before you start.

    NOTE: Neither I nor Microsoft support you making changes directly to your TFS database. You do so at your own risk, and it is probably best done with a backup. This SQL worked against our TFS 2012.4 database; I cannot guarantee other versions have the same schema.

    The first step is to find your TeamId in the Collection database. Team IDs can be found in the ADObjects table.

    select * from ADObjects
    where SamAccountName like '%MyTeamName%';
    

    The TeamFoundationId GUID in this table is the value we are interested in.

    You can find the Board and Columns in the tbl_Board and tbl_BoardColumn tables using the following SQL:

    select * from tbl_Board b
    join tbl_BoardColumn bc on b.Id = bc.BoardId
    where TeamId = 'YourTeamId';
    

    Once you are happy that you have found the rows for the team, you can then delete them from those two tables. You should probably copy the results into Excel first, just in case things go wrong.

    To delete you can use the following SQL Queries:

    delete bc
    from tbl_Board b
    join tbl_BoardColumn bc on b.Id = bc.BoardId
    where TeamId = 'YourTeamId';
    
    delete tbl_Board
    where TeamId = 'YourTeamId';
    

    Now if you refresh the board it should report that there is no configuration and needs to be setup again from scratch.

    I’ve no idea what caused these problems, or if it is fixed in a future update, but this got things working again for me.
