• Drag and Drop Batch Files

    This is a little trick that can make dealing with batch files a real breeze.

    You can make a batch file support drag and drop.

    Drag and drop

    Here I’ve created a simple batch file that takes a single argument, tells you which file it is listing, prints the contents using the TYPE command, and then PAUSEs.

    @echo off
    echo Listing the contents of %1
    echo.
    type %1
    echo.
    echo.
    pause
    

    This works because when you drop a file on an executable in Windows, the first argument passed to that program is the name of the file you dropped on it. So in the above script %1 is the full path to whatever file you drop on the batch file.

    I’ve used this in a few different ways:

    1. SDelete: I have a batch file that calls SDelete with 64 passes. I created a shortcut to the batch file with an icon (so it looks nice), which I use for deleting sensitive files at work.
    2. Restoring development databases: I have another batch file to restore development database backups: first it unzips the archive, then it runs the restore via SQLCMD.

    I’m sure there are a lot more uses for this. If you want to process multiple files, you can iterate through all the arguments, as in the sketch below.
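
    Here’s a minimal sketch of a multi-file version: it uses SHIFT to walk through the arguments, and %~1 to strip the quotes Windows puts around paths that contain spaces.

    @echo off
    :next
    if "%~1"=="" goto done
    echo Listing the contents of %~1
    type "%~1"
    shift
    goto next
    :done
    pause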

    Thanks to bepe from XDA Developers, who first showed me this technique in his ROM Kitchen videos many years ago.

  • Handling results from complex operations

    In this article I am going to look at a number of different approaches to modelling the results of a complex operation in C#. This is a technique I find useful when I have to perform some logic that can return different types of result depending on the outcome. I’ll start with some naive approaches, before looking at two options that look more promising.

    I’ll be using the following logic for my “complex operation”:

    if length(s) is even
        return length(s);
    else
        return "Length is odd";
    

    With the additional requirement that any exceptions in the “complex operation” will be handled in a suitable way.

    This is actually something I have had to implement at work in a number of cases: try an operation, then, depending on the outcome, handle it in the most appropriate way.

    In the examples I’ll be using Console.WriteLine for simplicity, but in the real world there could be database calls, UI updates, HTML rendering, service calls, whatever usually makes testing hard.

    The inputs to and outputs from every example will be the same.

    Inputs:

    • "Food" (returns 4)
    • "Foo" (returns Length is odd.)
    • null (returns NullReferenceException)

    All the code for the following is available in my GitHub repo ResultType-blog - these are Linqpad scripts, but can easily be modified to be C# by removing the first line.

    1. Just do it

    Here’s the most trivial approach to solve the issue:

    void Main()
    {
        ComplexOperation("Food");
        ComplexOperation("Foo");
        ComplexOperation(null);
    }
    
    void ComplexOperation(String input)
    {
        try
        {
            if (input.Length % 2 == 0)
                Console.WriteLine($"Even length: {input.Length}.");
            else
                Console.WriteLine("Length is odd.");
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
    }
    

    This meets our requirements, but let’s see if we can spot a few issues with it.

    Firstly, it isn’t possible to test the complex operation on its own; you have no way to mock out the dependencies. Secondly, you’re mixing business logic with side effects. Readers of Mark Seemann’s blog will know that this makes code harder to reason about.

    A common approach to solve this is to introduce a type to model the result of the complex operation. The remaining examples look at different approaches to do this.

    2. Implicit Results

    Let’s start with an Implicit Result type:

    class Result
    {
        public String FailureMessage { get; set; }
        public Int32? EvenLength { get; set; }
        public Exception Error { get; set; }
    }
    

    I call it Implicit because if I passed you an instance of it, you have no obvious way of knowing what the result is or how to figure out what happened during the complex operation. You could check that EvenLength is not null and assume success, but what’s to say I didn’t set it to 0 and populate FailureMessage instead? It is mostly guesswork and assumptions to know what this result contains.

    Here’s the new program and complex operation using this type:

    void Main()
    {
        var inputs = new[] { "Food", "Foo", null, };
    
        foreach (var result in inputs.Select(ComplexOperation))
        {
            if (result.Error != null)
                Console.WriteLine(result.Error);
            else if (result.FailureMessage != null)
                Console.WriteLine(result.FailureMessage);
            else
                Console.WriteLine($"Even length: {result.EvenLength}.");
        }
    }
    
    Result ComplexOperation(String input)
    {
        try
        {
            if (input.Length % 2 == 0)
                return new Result { EvenLength = input.Length, };
            else
                return new Result { FailureMessage = "Length is odd.", };
        }
        catch (Exception ex)
        {
            return new Result { Error = ex, };
        }
    }
    

    Now you’ve seen the implementation you know there is no funny business, but I made you read the complex operation to be sure.

    There are some other problems with this too:

    • The type is mutable, meaning something might change the result after the operation.
    • When the type is constructed, it is half-full: some members will have values, some won’t.
    • You have to know how to process a result; I’d find myself asking: can an error also have a message?
    • It violates the Open/closed principle because changes to Success, Failure or Error require a change to this one type.

    This does, however, separate the operation from the outputs, making the operation testable without needing any additional dependencies for the output. The operation is now Pure, which is why I can use Select on each input to return the result.

    3. Explicit Results

    I’ve harped on enough about how bad it is that the result is implied in the previous example. So let’s have a go at being more explicit.

    Here’s a result type with an enum to tell you what happened:

    class Result
    {
        public String FailureMessage { get; set; }
        public Int32? EvenLength { get; set; }
        public Exception Error { get; set; }
        public ResultType Type { get; set; }
    }
    
    enum ResultType
    {
        Success,
        Failure,
        Error,
    }
    

    Now when you get a result you can first check the Type and know whether it is a success, failure or error.

    Here’s the application code for it:

    void Main()
    {
        var inputs = new[] { "Food", "Foo", null, };
    
        foreach (var result in inputs.Select(ComplexOperation))
        {
            switch (result.Type)
            {
                case ResultType.Success:
                    Console.WriteLine($"Even length: {result.EvenLength}.");
                    break;
                case ResultType.Failure:
                    Console.WriteLine(result.FailureMessage);
                    break;
                case ResultType.Error:
                    Console.WriteLine(result.Error);
                    break;
            }
        }
    }
    
    Result ComplexOperation(String input)
    {
        try
        {
            if (input.Length % 2 == 0)
                return new Result { EvenLength = input.Length, Type = ResultType.Success, };
            else
                return new Result { FailureMessage = "Length is odd.", Type = ResultType.Failure, };
        }
        catch (Exception ex)
        {
            return new Result { Error = ex, Type = ResultType.Error, };
        }
    }
    

    This still has some of the problems of the previous version: mutability, the Open/closed principle, a mixed bag of properties, and not knowing what to do with them. It also has the problem that nothing is forcing you to check the Type; I might just say It’s OK, this won’t fail, just get me the EvenLength - famous last words…

    So, whilst it is a little better, it can still lead to unreasonable code.

    4. Explicit - with factory methods

    To solve the problem of people creating a “mixed bag” of mutable properties, a factory method could be created on the type to initialise the result in the correct state depending on the outcome of the operation.

    class Result
    {
        public String FailureMessage { get; private set; }
        public Int32? EvenLength { get; private set; }
        public Exception Error { get; private set; }
        public ResultType Type { get; private set; }
    
        public static Result CreateFailure(String message)
        {
            return new Result { FailureMessage = message, Type = ResultType.Failure, };
        }
    
        public static Result CreateSuccess(Int32 value)
        {
            return new Result { EvenLength = value, Type = ResultType.Success, };
        }
    
        public static Result CreateError(Exception ex)
        {
            return new Result { Error = ex, Type = ResultType.Error, };
        }
    }
    

    This changes the complex operation to look like this:

    Result ComplexOperation(String input)
    {
        try
        {
            if (input.Length % 2 == 0)
                return Result.CreateSuccess(input.Length);
            else
                return Result.CreateFailure("Length is odd.");
        }
        catch (Exception ex)
        {
            return Result.CreateError(ex);
        }
    }
    

    The rest of the program is unchanged from the previous version.

    We now have a way to know that a Result with the type Success will only have an EvenLength, however we still need to ignore the other properties that don’t relate to success. There’s still nothing forcing people to check the Type, and this requires an additional factory method for every state.

    I’ve seen a number of people stop at this level and call it “good enough” to avoid having to go to the next level. You still have unreasonable code, and still have to understand the internals of the operation.

    5. Exceptions for control flow

    This is another approach I have seen used. I do not like it, but I thought I would include it, as I almost used it years ago before settling on one of the approaches in the following sections.

    void Main()
    {
        var inputs = new[] { "Food", "Foo", null, };
    
        foreach (var input in inputs)
        {
            try
            {
                var result = ComplexOperation(input);
                Console.WriteLine($"Even length: {result}.");
            }
            catch (BusinessException be)
            {
                switch (be)
                {
                    case FailureException f:
                        Console.WriteLine(f.Message);
                        break;
                    case ErrorException e:
                        Console.WriteLine(e.InnerException);
                        break;
                    default:
                        throw;
                }
            }
        }
    }
    
    Int32 ComplexOperation(String input)
    {
        try
        {
            if (input.Length % 2 == 0)
                return input.Length;
            else
                throw new FailureException("Length is odd.");
        }
        catch (Exception ex) when (!(ex is BusinessException))
        {
            throw new ErrorException(ex);
        }
    }
    
    class FailureException : BusinessException
    {
        public FailureException(String message) : base(message) { }
    }
    
    class ErrorException : BusinessException
    {
        public ErrorException(Exception inner) : base(inner) { }
    }
    
    abstract class BusinessException : Exception
    {
        public BusinessException(String message) : base(message) { }
        public BusinessException(Exception inner) : base("Something bad happened", inner) { }
    }
    

    I hope by looking at this code you can see it isn’t an ideal approach.

    I’ve introduced the concept of a BusinessException that the program will handle in a try...catch block. All problems in the complex operation will throw some sort of exception derived from BusinessException, which the program will then type match on. I’ve used pattern matching here, but I’ve seen other approaches, such as a Dictionary<Type, Action<Exception>> that maps exception types to the delegate to call.

    Using exceptions like this is the equivalent of goto; many people have said it before, so I won’t go into detail on that aspect. I did notice when writing this how hard it is to not accidentally catch your own BusinessException, which is why I have an exception filter to not handle them twice: catch (Exception ex) when (!(ex is BusinessException)). I can imagine a case where one stray try...catch could cause a lot of problems.

    6. Type per Result

    I’ve now harped on enough about not knowing what to do with results. This example removes the ambiguity and uses a separate type for each result.

    class Success : Result
    {
        public Int32 EvenLength { get; }
        public Success(Int32 value) { EvenLength = value; }
    }

    class Failure : Result
    {
        public String FailureMessage { get; }
        public Failure(String message) { FailureMessage = message; }
    }
    
    class Error : Result
    {
        public Exception Exception { get; }
        public Error(Exception ex) { Exception = ex; }
    }
    
    abstract class Result
    {
    }
    

    Each result now has its own type. Each type only has properties relating to that type of result. The results are immutable.

    The base class in this case is empty, but it might capture the input, elapsed time, or anything else you need in every result.
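
    As a sketch (my assumption of what that might look like, not part of the original code), a base class capturing the input could be:

    abstract class Result
    {
        protected Result(String input)
        {
            Input = input;
        }

        // Captured for every result, whatever the outcome.
        public String Input { get; }
    }

    Each derived constructor would then pass the input through via : base(input).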

    The program and complex operation now are much easier to reason about, and it is a lot harder to mix things up:

    void Main()
    {
        var inputs = new[] { "Food", "Foo", null, };
    
        foreach (var result in inputs.Select(ComplexOperation))
        {
            switch (result)
            {
                case Success s:
                    Console.WriteLine($"Even length: {s.EvenLength}.");
                    break;
                case Failure f:
                    Console.WriteLine(f.FailureMessage);
                    break;
                case Error e:
                    Console.WriteLine(e.Exception);
                    break;
            }
        }
    }
    
    Result ComplexOperation(String input)
    {
        try
        {
            if (input.Length % 2 == 0)
                return new Success(input.Length);
            else
                return new Failure("Length is odd.");
        }
        catch (Exception ex)
        {
            return new Error(ex);
        }
    }
    

    I’m using C# 7’s Pattern Matching feature in the program to match on the type of each result. When I get a match it is already cast into the correct type, so s will be an instance of Success, and Success only has properties relating to a successful outcome.

    To me it is very clear what happened in the operation and what I can do next after it.

    It is reassuring to know that if I have a Success type I can only see properties relating to a successful operation. I can pass the result to another method that accepts an instance of Success, knowing it can’t be called with an Error by mistake - the type safety of the language is on your side.

    Consider these 2 methods:

    void DisplaySuccess(Result r) { }
    void DisplaySuccess(Success s) { }
    

    In all the previous examples you had to have the first version, which would either assume you called it correctly, or have to check that r is a success. The second method can only be called with an instance of Success; you cannot pass an Error to it, making it much harder to get wrong.

    There are a few negatives to this approach. In C#’s pattern matching the compiler doesn’t check that you have matched every case, so adding a new result type means finding every result handler and updating it. If you have only one handler, this isn’t so bad. One defensive option, sketched below, is a default case that fails loudly.
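
    This is my suggestion rather than something from the original samples - a default case that throws, so an unhandled result type blows up at runtime instead of being silently skipped:

    switch (result)
    {
        case Success s:
            Console.WriteLine($"Even length: {s.EvenLength}.");
            break;
        // ... Failure and Error cases as before ...
        default:
            throw new InvalidOperationException($"Unhandled result type: {result.GetType().Name}");
    }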

    Another consideration is that the result’s “next step” logic - what happens next - is separated from the type. Sometimes this could be desirable; other times you might want it contained in a single place. It depends on how your application is designed and what works best. The next example looks at keeping the behaviour with the result.

    7. Types with Behaviour

    I’ve fleshed the following code out a little more, to highlight one of the drawbacks of the approach. In all previous examples, I’ve left out how you might test the entire operation - passing in test doubles for Console.WriteLine into the program from the composition root would be trivial.

    However, in this case I wanted to show the extra effort needed to keep things testable.

    First, we’ll look at the base result type:

    abstract class Result
    {
        public Result(IProcessor processor)
        {
            Processor = processor;
        }
    
        protected IProcessor Processor { get; }
    
        public abstract void Process();
    }
    
    interface IProcessor
    {
        void WriteMessage(Object message);
    }
    
    class Processor : IProcessor
    {
        public void WriteMessage(Object message) => Console.WriteLine(message);
    }
    

    There’s now an additional Process member on every result, and every result needs access to an IProcessor, which facilitates the injection of the dependencies for the Process method.

    This is what the calling program will use to handle the result:

    void Main()
    {
        var inputs = new[] { "Food", "Foo", null, };
    
        foreach (var result in inputs.Select(ComplexOperation))
        {
            result.Process();
        }
    }
    

    This looks very neat: I get a result, I call Process.

    The problem is getting the dependencies managed in a nice way. When deriving instances of Result you need to write the code to pass the IProcessor through:

    class Success : Result
    {
        public Int32 EvenLength { get; }
        public Success(IProcessor p, Int32 value) : base(p) { EvenLength = value; }
        public override void Process() => Processor.WriteMessage($"Even length: {EvenLength}.");
    }
    
    class Failure : Result
    {
        public String FailureMessage { get; }
        public Failure(IProcessor p, String message) : base(p) { FailureMessage = message; }
        public override void Process() => Processor.WriteMessage(FailureMessage);
    }
    
    class Error : Result
    {
        public Exception Exception { get; }
        public Error(IProcessor p, Exception ex) : base(p) { Exception = ex; }
        public override void Process() => Processor.WriteMessage(Exception);
    }
    

    Each result now has an implementation of the logic to handle it. If you want to know what happens given a success, I can just look at the Success type.

    But when you create an instance, you also need to pass in an IProcessor, so the complex operation will have to do this:

    IProcessor processor = new Processor();
    
    Result ComplexOperation(String input)
    {
        try
        {
            if (input.Length % 2 == 0)
                return new Success(processor, input.Length);
            else
                return new Failure(processor, "Length is odd.");
        }
        catch (Exception ex)
        {
            return new Error(processor, ex);
        }
    }
    

    This is quite a lot of ceremony, and now the complex operation has knowledge of the IProcessor. An instance of an IProcessor would have to be injected so that it can be passed into each result. The complex operation doesn’t depend on IProcessor itself though, just the results do, making this a kind of transient dependency.

    This example isn’t perfect, but I have used it in a number of places where I wanted to keep the logic of what to do with a result with the result, and not separated out across the code base. Usually when there’s a lot of code related to handling the result.

    I also like that I am able to write code such as:

    var result = ComplexOperation(input);
    result.Process();
    

    If you need to add a new result type (e.g. Timeout) you can do so by just deriving a new type from Result and implementing all the logic there, as sketched below. The only other place that needs a change is the complex operation, to return new Timeout(processor); the program doesn’t have to change.
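
    A hypothetical Timeout result might look like this, following the same shape as the other results:

    class Timeout : Result
    {
        public Timeout(IProcessor p) : base(p) { }
        public override void Process() => Processor.WriteMessage("The operation timed out.");
    }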

    8. Bonus F# version

    I am a big fan of F# so I thought I would model the same problem in F#.

    I’ve deliberately kept it similar to the C# examples to avoid it getting too functional. This is quite close to example #6 above.

    type Result =
        | Success of int
        | Failure of string
        | Error of Exception
    
    (* Unchecked.defaultof<String> is used for null to make it crash - F# doesn't do null really. *)
    let inputs = [ "Food"; "Foo"; Unchecked.defaultof<String>; ]
    
    let complexOp (input: string) =
        try
            if input.Length % 2 = 0 then
                Success input.Length
            else
                Failure "Length is odd."
        with
        | ex -> Error ex
    
    let processResult r =
        match r with
        | Success s -> printfn "Even length: %d" s
        | Failure f -> printfn "%s" f
        | Error e -> printfn "%A" e
    
    let main () =
        let results =
            inputs
            |> Seq.map complexOp
    
        results |> Seq.iter processResult
    
    main ()
    

    I’ve modelled the result as a Discriminated Union with 3 cases, one for each outcome. The complex operation, like the C# version, returns 1 of these 3 cases. What is nice in F# is that in processResult, where I take in a single result and handle it, the pattern match must be complete. If I added another case to the Result type, the compiler would complain that it isn’t handled in the match, as sketched below.
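
    For example, adding a hypothetical Timeout case:

    type Result =
        | Success of int
        | Failure of string
        | Error of Exception
        | Timeout

    (* processResult's match is now incomplete, and the compiler warns:
       warning FS0025: Incomplete pattern matches on this expression.
       For example, the value 'Timeout' may indicate a case not covered by the pattern(s). *)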

    Conclusion

    This isn’t an exhaustive list of ways to handle results, but it does provide some different approaches to the problem that should help keep your code base a little cleaner. Options 6 and 7 are the ones I would use in C#; the rest create unreasonable code that I would not like to have to think about. A real complex operation is never going to be a few lines of code like in my scenario; it might be many classes working together to do many different operations, building one final result. I like it when I don’t have to know the implementation details of an operation to know what the behaviour is for a given outcome.

    Above, I have only used Success, Failure and Error as the outcomes of my operation, but I could have modelled different states for success too: for example, MatchFound and NoMatch results, as in the sketch below.
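
    These hypothetical types follow the same pattern as example 6 - the Match property is my invention for illustration:

    // Both are successful outcomes, but each carries only the data that makes sense for it.
    class MatchFound : Result
    {
        public String Match { get; }
        public MatchFound(String match) { Match = match; }
    }

    class NoMatch : Result
    {
    }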

  • Controlling VS2017 Developer Console Start Directory

    At work I use ConEmu for my console; it’s a great console to work with on Windows. To keep things tidy I have all my code on my X:\ partition. In ConEmu I have different “Tasks” set up for different configurations of Visual Studio and pass /Dir X:\ as one of the task parameters so that a new console’s current directory is X:\.

    ConEmu Settings

    When running “Developer Command Prompt for VS 2017” on my work computer I noticed that the directory it was opening in wasn’t the current directory that ConEmu was setting, but C:\Dave\Source.

    **********************************************************************
    ** Visual Studio 2017 Developer Command Prompt v15.0.26228.9
    ** Copyright (c) 2017 Microsoft Corporation
    **********************************************************************
    
    C:\Dave\Source>
    

    After a bit of digging through the batch files I found that the reason for this is this bit of code in:

    C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\Tools\vsdevcmd\core\vsdevcmd_end.bat

    ...
    @REM Set the current directory that users will be set after the script completes
    @REM in the following order:
    @REM 1. [VSCMD_START_DIR] will be used if specified in the user environment
    @REM 2. [USERPROFILE]\source if it exists
    @REM 3. current directory
    if "%VSCMD_START_DIR%" NEQ "" (
        cd /d "%VSCMD_START_DIR%"
    ) else (
        if EXIST "%USERPROFILE%\Source" (
            cd /d "%USERPROFILE%\Source"
        )
    )
    ...
    

    As you can see, it has two chances to pick a different directory before using your current one.

    In my case, I had a folder at %USERPROFILE%\Source, which was empty, so I deleted it.

    The other alternative is to set the VSCMD_START_DIR environment variable for your user account to your preferred directory, as shown below.
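
    For example, from a command prompt (setx persists the variable for your user account; it takes effect in newly opened prompts - I’m using my X:\ directory here):

    setx VSCMD_START_DIR X:\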

  • TFS 2015 does not support 2012 build agents

    This post is part PSA, part debugging story.

    The important bit:

    Team Foundation Server 2012 XAML Build Agents do not work with TFS 2015

    I discovered this fact the weekend just gone, whilst performing an upgrade to TFS 2015.3 from TFS 2012.4.

    The plan was to upgrade only the TFS Server and leave the build infrastructure running on TFS 2012. This seemed like a sound idea, as I know Microsoft care about compatibility, and the upgrade was more complicated than your usual one. I figured it would just keep working and that I’d upgrade the build agents later. Boy, was I wrong.

    I may have even checked the documentation, which does not show compatibility, but it isn’t explicitly called out, so I could have glanced over it.

    TFS build compatibility Look - No 2012

    The problems with TFS 2012 build agents against TFS 2015 manifested as two different errors when I queued a build without a Drop Location. Queuing a build with a drop location worked just fine.

    Error 1 - Build agents not using the FQDN

    The build infrastructure runs on a different domain to the Team Foundation Server.

    We have tfs-server.corp.com for TFS and build-server.corp-development.com for builds.

    The error manifested as:

    FQDN error message

    The error that appeared twice was not very helpful.

    An error occurred while copying diagnostic activity logs to the drop location. Details: An error occurred while sending the request.

    I eventually debugged this (details later) and found out that the last task on the build agent was trying to access tfs-server, without the .corp.com DNS suffix, to publish some logs. As a temporary workaround I bobbed an entry in the hosts file to make tfs-server point to the actual IP of the TFS server.
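
    The workaround was a single line in C:\Windows\System32\drivers\etc\hosts on the build server (the IP address here is made up):

    10.0.0.5    tfs-server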

    Error 2 - the bad request

    With all the steps of the build resolving the server name, I came across the second error.

    Bad request error message

    The error message was still no more use than the last one:

    An error occurred while copying diagnostic activity logs to the drop location. Details: TF270002: An error occurred copying files from ‘C:\Users\tfsbuild\AppData\Local\Temp\BuildAgent\172\Logs\151436\LogsToCopy\ActivityLog.AgentScope.172.xml’ to ‘ActivityLog.AgentScope.172.xml’. Details: BadRequest: Bad Request

    An error occurred while copying diagnostic activity logs to the drop location. Details: An error occurred while sending the request.

    My debugging would lead me to see that this was caused by TFS returning an HTTP 400 (Bad Request) for the exact same step as the first error.

    It was at this point I figured something was really wrong and started searching for compatibility problems. In my effort to find a KB or update I re-checked the documentation and noticed the lack of support as well as finding an MSDN forum post from RicRak where they solved the problem by upgrading their agents off of TFS 2012.

    Solution

    My solution was to upgrade our entire build infrastructure (some 9 or 10 servers) to TFS 2015, discovering along the way that you must install VS2015 on the servers too to get the Test Runner to work.

    It took one day of diagnosis and testing to get to the point of knowing that TFS 2015 build agents would solve the problem and still build our codebase. Another half-day was spent upgrading all the servers.

    Diagnostics

    How do you figure out what is happening when something like this goes wrong? TFS diagnostic logging did not provide any more information than minimum logging did. The error only appeared at the very end of a build; it wasn’t related to a step in the XAML workflow, nor to any variables in the build process.

    The solution (as always) came from Charlie Kilian on Stack Overflow.

    I stopped the Build Service and opened up TFSBuildServiceHost.exe.config and added the following section:

    <system.diagnostics>
        <sources>
            <source name="System.Net" tracemode="includehex" maxdatasize="1024">
                <listeners>
                    <add name="System.Net"/>
                </listeners>
            </source>
        </sources>
        <switches>
            <add name="System.Net" value="Verbose"/>
        </switches>
        <sharedListeners>
            <add name="System.Net"
                type="System.Diagnostics.TextWriterTraceListener"
                initializeData="C:\Logs\network.log" />
        </sharedListeners>
        <trace autoflush="true"/>
    </system.diagnostics>
    

    Then I restarted the build service and ran the smallest build I could, to produce minimal logs.

    The log folder looked something like this:

    Log files on disk

    The network.log file had a few errors, but nothing fatal looking, so I looked in the other files for errors and finally found this line:

    System.Net Error: 0 : [4916] Exception in HttpWebRequest#13319471:: - The remote name could not be resolved: 'tfs-server'.
    

    That was preceded by:

    System.Net Verbose: 0 : [4928] HttpWebRequest#13319471::HttpWebRequest(http://tfs-server:8080/tfs/DefaultCollection/_apis/resources/containers/122598?itemPath=logs%2FActivityLog.AgentScope.172.xml#752534963)
    

    Here you can see the server name without the necessary DNS suffix during an HTTP request to _apis/resources/containers.

    This was the point I added the hosts file entry and then got the next error.

    For the second error I repeated the diagnostic logging steps and this time found the following errors (searching for Bad Request):

    System.Net Information: 0 : [16628] Connection#50276392 - Received status line: Version=1.1, StatusCode=400, StatusDescription=Bad Request.
    

    By tracing the ID (in this case 16628) back up the file I found it was a call to the same endpoint, but this time a PUT:

    System.Net Information: 0 : [16628] HttpWebRequest#9100089 - Request: PUT /tfs/DefaultCollection/_apis/resources/containers/122603?itemPath=logs%2FActivityLog.AgentScope.59.xml HTTP/1.1
    

    This was the point I gave up thinking this could be fixed by a configuration change.

    Conclusion

    I wish I had read something like this before I planned the weekend. I did do testing, but because testing TFS in live is risky I had most of the test instance network isolated, and that required a lot of configuration; I just thought this error was configuration based. Lesson well and truly learned.

    It would have been nice to see this called out more explicitly on MSDN. In my opinion these are two bugs that Microsoft decided not to fix in the TFS 2012 product life-cycle.

    On the plus side, I learned some really neat debugging skills I didn’t know before.

    Remember, if you’re upgrading from TFS 2012, plan to upgrade your build agents at the same time!

  • Deployment Pipeline with VSTS and Release Management

    Back in 2014 I wrote a UNC to URI Path Converter using ASP.NET MVC 4 and Visual Studio Team Services with a XAML Build process template to continuously deploy the changes to an Azure Website. This was my first Azure Website and most of it was just using the default settings from the New Project dialog in Visual Studio, all very “point and click”.

    It worked well, averaging a few hundred page requests a week, and so far I’ve been happy with everything as it “just worked”. The other day I wanted to add a small feature, and after pushing and deploying the change I noticed Azure warning me that XAML builds would soon be deprecated. So, whilst I was making some changes, I decided it would be a good opportunity to get up to date on a few new technologies that I have not used in anger.

    I planned to setup the following for the website:

    • Rewrite in .NET Core.
    • Custom VSTS Build vNext.
    • Deployment Pipeline using Microsoft Release Management.

    Rewrite in .NET Core

    My only previous .NET Core app at this point was a console application, so I took this as an opportunity to get to grips with setting up a build and a suite of unit tests using xUnit.net. Getting this working in Visual Studio was straightforward following the xUnit.net documentation, but getting the build to run on VSTS was a bit hit and miss. I eventually settled on a mix-and-match combination of the dotnet command line tools and the Visual Studio Test Runner.

    VSTS Build Steps

    Using the VS Test step solved the problem of dotnet test not being able to run the xUnit.net tests on the build server. I kept the individual dotnet restore, dotnet publish (site) and dotnet build (tests) steps as I wanted control over the publish. I also have a suite of deployment tests based on the full .NET Framework, which I build using VS Build. These were the building blocks of my pipeline.

    Custom VSTS Build vNext

    By keeping control over dotnet publish I could pack the website ready to be pushed to Azure using Microsoft Release Management. I took the output of dotnet publish, zipped it up into an archive, and published this as a build artifact.

    The build process also took the output of the DeploymentTests build, zipped it into a separate archive, and published that too.

    I now had a website and a suite of “Deployment Tests” as artifacts from my build.

    Deployment Pipeline using Microsoft Release Management

    A deployment pipeline is where code goes through various stages and each stage provides increasing confidence, usually at the cost of extra time (Martin Fowler: DeploymentPipeline). My pipeline was quite simple:

    Build -> Fast Tests -> Deploy to Pre-Prod -> Test Via API -> Deploy to Live -> Test Via API
    

    This process meant that the build was fast and only ran isolated, fast unit tests against the code. Only then did it deploy onto a Pre-Production server (another free Azure Website) and run a set of integration tests against the website via the API; if those tests passed, the process was repeated onto the Live website.

    Using Microsoft Release Management, I was able to orchestrate this with a single Release definition, defining two environments to deploy to.

    Release Management

    I considered using Deployment Slots on Azure to deploy and then swap the Slots after the tests passed, but Slots are only available on the Standard pricing tier and I wanted to keep this free, so I set up another free Website instance and ran the tests on there.

    I used a Variable against each Environment in Release Management to store the Azure Website Name.

    Environment's variables

    These variables had two uses. The first was to keep the steps for each environment the same; I only needed to set the variable to a different value.

    The second was very cool, because the variables in TFS Build and RM are actually environment variables I could write the following method in the code of my deployment tests:

    public static String BaseUri => $"http://{Environment.GetEnvironmentVariable("AzureWebSiteName")}.azurewebsites.net/";
    

    And then run the API integration tests against the value of BaseUri.

    I planned to write some User Interface tests using either Coded UI or Selenium, but because the Hosted Build agents do not support Interactive Mode, which is needed to run User Interface tests, I made them conditional so they only run locally in Visual Studio. I do have a plan to get these running in the future.

    The whole process looks like this:

    Deployment Pipeline Flowchart

    Conclusion

    Whilst this is a massively over-engineered solution for such a simple website, it was fun to learn some new tricks and understand how to put a release pipeline together using the VSTS and Azure platforms. I also used it as an opportunity to tidy up my resources in Azure and consolidate all my related resources into an Azure RM Resource Group, including the Application Insights I use to monitor it.

  • Now using SSL

    Today I’ve changed over to using SSL by default.

    SSL in Chrome

    The main reason for moving is that SSL gives better SEO - and my old blog used SSL, so I’m sure there will be some SSL links scattered about the web. It also prevents any silly public networks injecting anything into any of my pages.

    I’m using CloudFlare to secure the communications from your browser to them. Thanks to Sheharyar Naseer for his excellent guide that got me up and running in no time, and to DNSimple for their excellent DNS Service that made changing my Nameservers a piece of cake.

  • Using SignalR in FSharp without Dynamic

    I’ve been building an FSharp Dashboard by following along with this post from Louie Bacaj, which was part of last year’s FSharp Advent calendar. I have to say it’s a great post and got me up and running in no time.

    If you want to skip the story and get to the FSharp and SignalR part scroll down to Changing the Hub.

    One small problem I noticed was that I could not use any of the features of FSharp Core v4. For example, the new tryXXX functions such as Array.tryLast were not available.

    After a bit of digging I happened across the Project Properties which were stuck on 3.1.2.1.

    Project Properties

    Turns out that the FSharp.Interop.Dynamic package is dependent on FSharp.Core v3.1.2.1.

    So this turned into a challenge: how do I use SignalR without dynamic? After a bit of googling I landed on this page that showed Strongly Typed Hubs. So I knew it was possible…

    Removing Dependencies

    The first step to fixing this was to remove the FSharp.Core-related dependencies I no longer needed:

    Uninstall-Package FSharp.Interop.Dynamic 
    Uninstall-Package Dynamitey
    Uninstall-Package FSharp.Core
    

    I then just browsed through the source and removed all the open declarations.

    Re-adding FSharp Core

    Slight problem now: I no longer had any FSharp.Core reference, so I needed to add one in. I’m not sure if this is the best way to solve this, but I just copied and pasted these lines from an empty FSharp project I had created:

    <Reference Include="mscorlib" />
    <!--Add this bit-->
    <Reference Include="FSharp.Core, Version=$(TargetFSharpCoreVersion), Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
        <Private>True</Private>
    </Reference>
    <!--End-->
    <Reference Include="Newtonsoft.Json">
    

    Changing the Hub

    Now all I had to do was update the code to use the statically typed hub.

    First step was to create an interface for the metricsHub:

    type IMetricsHub = 
        abstract member AddMessage: string -> unit
        abstract member BroadcastPerformance: PerfModel seq -> unit
    

    Then change our Hub to inherit from the generic Hub<T>:

    [<HubName("metricsHub")>]
    type metricsHub() = 
        inherit Hub<IMetricsHub>() // < Generic version of our interface.
    

    And changed all the calls from:

    Clients.All?message(message)
    

    to

    Clients.All.Message message
    

    Getting the Context

    With SignalR you cannot just new up an instance of a Hub; you have to use GlobalHost.ConnectionManager.GetHubContext<THub>. The problem is that this gives you an IHubContext, which only exposes the dynamic interface again. A bit more googling and I found that you need to pass your interface as a second generic parameter, and you will get an IHubContext<IMetricsHub>.

    So this:

    let context = GlobalHost.ConnectionManager.GetHubContext<metricsHub>()
    

    Becomes:

    let context = GlobalHost.ConnectionManager.GetHubContext<metricsHub, IMetricsHub>()
    

    Now you can call context.Clients.All.BroadcastPerformance and not worry about that pesky dynamic any more, as in the sketch below.
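
    Putting it together (a sketch - currentPerformance stands in for whatever PerfModel seq you have to hand):

    let context = GlobalHost.ConnectionManager.GetHubContext<metricsHub, IMetricsHub>()
    context.Clients.All.BroadcastPerformance currentPerformance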

    Conclusion

    The documentation on SignalR isn’t very good; it was easy enough to find out about the statically typed version, but finding out how to get one out of the context was a right pain.

    I’ve published a fork of Louie’s GitHub repo with four commits that show the steps needed to move from dynamic to statically typed SignalR here, so you can see the changes I needed to make.

  • Adding Cloudapp DNS to Azure VM

    I’ve recently deployed a new Azure Linux VM for hosting a Discourse instance I run and noticed that it didn’t have a DNS entry on cloudapp.net. Last time I deployed one it was instantly given one in the format server-name.cloudapp.net, but this time it wasn’t, and I had to set it up myself.

    I suspect it is something new for Resource Managed deployments.

    Here’s a list of the steps you need to follow if you ever need to do the same.

    Assuming you have just deployed a VM and it doesn’t have a DNS on cloudapp.net you will see something like this:

    newly deployed vm

    Dissociate Public IP

    First you need to Dissociate the Public IP so you can make changes.

    Click the Public IP Address to open the settings:

    public ip settings

    Then click Dissociate and confirm when prompted.

    public ip settings dissociate

    You cannot change any settings whilst the Public IP is in use.

    Configuring the DNS

    From the Public IP page, click All Settings then Configuration to open up the settings:

    public ip settings configuration

    Then you can enter a new DNS prefix for datacentre.cloudapp.azure.com:

    public ip configuration new dns

    Reassociate the Public IP

    Now you need to reassociate the Public IP with the VM.

    From the VM Screen (First Image) click All Settings, then Network Interfaces:

    vm network interfaces

    Click on the Interface listed:

    all vm network interfaces

    Click on IP Addresses from the Settings blade:

    network interfaces ip addresses

    Click on Enable then click on the IP Address Configure Required… and select the default (highlighted) Public IP Address from the list.

    select public ip.

    Then click Save.

    Validation and Testing

    Now if you close and re-open the VM blade you should see a new Public IP address appear.

    Click on the Public IP Address to open the blade and you will see your full DNS Entry and a Copy to clipboard button when you hover on it:

    vm with new dns

    To test, ping the VM and see if the DNS resolves:

    C:\> ping taeguk-test-dns.northeurope.cloudapp.azure.com
    
    Pinging taeguk-test-dns.northeurope.cloudapp.azure.com [40.127.129.7]
    

    The requests will time out because Azure has ICMP disabled, but so long as the DNS resolves, you’ve done it.
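
    If you’d rather avoid the ICMP timeouts entirely, resolving the name directly also does the job:

    C:\> nslookup taeguk-test-dns.northeurope.cloudapp.azure.com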

    Conclusion

    This seems to be a change to do with Resource Managed VMs instead of Classic VMs, although I can’t find a source for it. It used to work OK on Classic VMs.

    Note: I have deleted the VM in this post now.

  • Shut the Box is Live

    Today I’ve just published my first App into the Windows and Windows Phone Store.

    Screenshot

    You can download using the image below, if you want to check it out. It’s 100% free and no ads.

    Windows Store Download

    It is a simple version of the pub game Shut the Box; I have a page here with more information about the game.

    This was my first attempt at a Windows Application and I’ve really enjoyed the experience of building it. I tried to use as many things that were new to me as possible, to learn as much as I could through the process. A quick list of new things I explored whilst working on this:

    • Git
    • Visual Studio Online Kanban for planning and tracking work (up until now I’ve only used TFS 2012.4).
    • TFS Build vNext.
    • Application Insights.
    • Custom MSBuild Project to encapsulate all restore/build/test workflows.
    • xUnit.net for Universal Apps (lots of betas to test).

    Working with the Windows Store was a bit “hit and miss”; for a while I could not get to the “Dashboard” part of the site “because of my Azure account”, or so I was told. This seemed to resolve itself eventually, but was very annoying at the time. I was not offered any explanation, only the suggestion that I should create a new Microsoft Account to publish apps through, which I was not prepared to do.

    It took 3 attempts to get the application through certification. Firstly it failed because I had not run the Application Certification Kit and had a transparent Windows tile, which is not allowed. The second failure was because Russia, Brazil, Korea and China require certification of anything listed as a Game in the store. I decided not to publish it to those markets for the moment because I wanted it out there, and figuring out how to complete the certification seemed like too much work. I may look into it again later, but for now I am happy.

    This application has been a long time coming, mostly down to my lack of free time and/or willingness to work on it, but I’m glad it’s finally published. Now to try and release some updates and add some more nice features.

    If you enjoy the game, please feel free to leave me a good rating / comment in the Store.

  • Roslyn Based Attribute Remover

    Major Update 1-Aug-2015: Changed VisitAttributeList to VisitMethodDeclaration to fix some bugs with the help of Josh Varty.

    I’m a big fan of XUnit as a replacement for MSTest and use it extensively in my home projects, but I’m still struggling to find a way to integrate it into my work projects.

    This post looks at one of the obstacles I had to overcome, namely the use of [TestCategory("Atomic")] on all tests that are run on TFS as part of the build. The use of this attribute came about because the MSTest test runner did not support a concept of “run all tests without a category”, so we came up with an explicit category called “Atomic” - probably not the best decision in hindsight. The XUnit test runner does not support test categories, so I needed to find a way to remove the TestCategory attribute with the value of Atomic from any method. I’m sure I could have used regex to solve this, and I’m sure that would have caused more problems:

    To generate #1 albums, 'jay --help' recommends the -z flag.

    via xkcd

    Instead I created a Linqpad script and used the syntactic analyser from the Microsoft.CodeAnalysis package.

    PM> Install-Package Microsoft.CodeAnalysis
    

    I found that the syntactic analyser allowed me to input some C# source code, and by writing my own CSharpSyntaxRewriter, remove any attributes I didn’t want.

    I started by creating some C# that had the TestCategory attribute applied in as many different ways as possible:

    namespace P
    {
        class Program
        {
            public void NoAttributes() { }
    
            [TestMethod, TestCategory("Atomic")]
            public void OnOneLine() { }
    
            [TestMethod]
            [TestCategory("Atomic")]
            public void SeparateAttribute() { }
            
            //snip...
            //And so on down to, right down to...
                    
            [TestMethod, TestCategory("Atomic"), TestCategory("Atomic")]
            public void TwoAttributesOneLineAndOneThatDoesntMatch() { }
        }
    }
    

    You can see all the examples I tested against in the Gist.

    The CSharpSyntaxRewriter took a lot of messing around to get right, but I eventually figured out that by overriding the VisitMethodDeclaration method I could remove attributes from the syntax tree as they were visited.

    To get some C# code into a syntax tree, there is the obviously named CSharpSyntaxTree.ParseText(String) method. You can then get a CSharpSyntaxRewriter (in my case my own AttributeRemoverRewriter class) to visit everything by calling Visit(). Because this is all immutable, you need to grab the result, which can now be converted into a string and dumped out.

    var tree = CSharpSyntaxTree.ParseText(code);
    var rewriter = new AttributeRemoverRewriter(
        attributeName: "TestCategory", 
        attributeValue: "Atomic");
    
    var rewrittenRoot = rewriter.Visit(tree.GetRoot());
    
    rewrittenRoot.GetText().ToString().Dump();
    

    The interesting part of the AttributeRemoverRewriter class is the VisitMethodDeclaration method which finds and removes attribute nodes that are not needed:

    public override SyntaxNode VisitMethodDeclaration(MethodDeclarationSyntax node)
    {
        var newAttributes = new SyntaxList<AttributeListSyntax>();
    
        foreach (var attributeList in node.AttributeLists)
        {
            var nodesToRemove =
                attributeList
                .Attributes
                .Where(
                    attribute =>
                        AttributeNameMatches(attribute)
                        &&
                        HasMatchingAttributeValue(attribute))
                .ToArray();
    
            //If the lists are the same length, we are removing all attributes and can just avoid populating newAttributes.
            if (nodesToRemove.Length != attributeList.Attributes.Count)
            {
                var newAttribute =
                    (AttributeListSyntax)VisitAttributeList(
                        attributeList.RemoveNodes(nodesToRemove, SyntaxRemoveOptions.KeepNoTrivia));
    
                newAttributes = newAttributes.Add(newAttribute);
            }
        }
    
        //Get the leading trivia (the newlines and comments)
        var leadTriv = node.GetLeadingTrivia();
        node = node.WithAttributeLists(newAttributes);
    
        //Append the leading trivia to the method
        node = node.WithLeadingTrivia(leadTriv);
        return node;
    }
    

    The AttributeNameMatches method is implemented to find an attribute that starts with TestCategory. This is because attributes in .NET have Attribute at the end of their name, e.g. TestCategoryAttribute, but most people never type it. I figured in this case it was more likely for the full name to exist than for another attribute starting with TestCategory to appear. I don’t think there is an elegant way to avoid using StartsWith in the syntactic analyser; I would have had to switch to the semantic analyser, and that would have made this a much more complicated solution.

    The HasMatchingAttributeValue method pretty much does what it says: it looks for the value of the attribute being just Atomic and nothing else. A sketch of both helpers follows.
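
    Based on those descriptions, the two helpers might look something like this - a sketch of my interpretation (the Gist has the real versions), using the attributeName and attributeValue fields set in the constructor:

    private Boolean AttributeNameMatches(AttributeSyntax attribute)
    {
        // Matches both TestCategory and TestCategoryAttribute.
        return attribute.Name.ToString().StartsWith(attributeName);
    }

    private Boolean HasMatchingAttributeValue(AttributeSyntax attribute)
    {
        // True only when the attribute has a single argument whose literal value matches, e.g. "Atomic".
        if (attribute.ArgumentList == null || attribute.ArgumentList.Arguments.Count != 1)
            return false;

        var literal = attribute.ArgumentList.Arguments[0].Expression as LiteralExpressionSyntax;
        return literal != null && literal.Token.ValueText == attributeValue;
    }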

    Once the matching nodes are found, it checks if the number of attributes on the method is equal to the number it wants to remove; if so, the newAttributes list is not populated and the method is updated to keep its trivia but without any attributes. This shouldn’t happen in this specific scenario, because a TestCategory on its own doesn’t make sense.

    Remove just the matching attributes

    If there are some attributes that do not need removing, then just the matching one should be removed. For example:

    [TestMethod, TestCategory("Atomic")]
    public void OnOneLine() { }
    

    When the visitor reaches the attributes on this method, it will populate the newAttributes list with just the attributes we want to keep, and then update the method so that it has the remaining attributes and its trivia.

    Conclusion

    Using Roslyn was a bit of a steep learning curve to start with, but once I found my feet I knew I could rely on the Roslyn team to have dealt with all the different ways of declaring attributes in C#. That didn’t stop me from finding what appears to be a bug, causing me to re-write bits of the script and this post, and some more edge cases when I ran it across more than 500 test classes.

    However, if I had tried to use regex to find and remove some of the more complicated ones, and deal with the other edge cases, I’d have gone mad by now.

    • You can get the full Gist here.

    If you paste this into a Linqpad “program” and then just install the NuGet Package you should be able to try it out. Note this was built against the 1.0.0 version of the package.
