• Now using SSL

    Today I’ve changed over to using SSL by default.

    SSL in Chrome

    The main reason for moving is that SSL gives better SEO, and my old blog was served over SSL, so I’m sure there are SSL links scattered about the web. It also prevents silly public networks from injecting anything into any of my pages.

    I’m using CloudFlare to secure the communications from your browser to them. Thanks to Sheharyar Naseer for his excellent guide that got me up and running in no time, and to DNSimple for their excellent DNS service that made changing my nameservers a piece of cake.

  • Using SignalR in FSharp without Dynamic

    I’ve been building an FSharp dashboard by following along with this post from Louie Bacaj, which was part of last year’s FSharp Advent Calendar. I have to say it’s a great post and it got me up and running in no time.

    If you want to skip the story and get to the FSharp and SignalR part scroll down to Changing the Hub.

    One small problem I noticed was that I could not use any of the features of FSharp Core v4. For example, the new tryXXX functions such as Array.tryLast were not available.

    After a bit of digging I happened across the Project Properties, which were stuck on an old version of FSharp.Core:

    Project Properties

    It turns out that the FSharp.Interop.Dynamic package is dependent on FSharp.Core v3.1.2.1.

    So this turned into a challenge of how do I use SignalR without Dynamic. After a bit of googling I landed on this page that showed Strongly Typed Hubs. So I knew it was possible…

    Removing Dependencies

    The first step to fixing this was to remove the packages I no longer needed; these were:

    Uninstall-Package FSharp.Interop.Dynamic 
    Uninstall-Package Dynamitey
    Uninstall-Package FSharp.Core

    I then just browsed through the source and removed all the open declarations.

    Re-adding FSharp Core

    Slight problem now: I no longer had any FSharp Core references, so I needed to add one back in. I’m not sure if this is the best way to solve this, but I just copied and pasted these lines from an empty FSharp project I had just created:

    <Reference Include="mscorlib" />
    <!--Add this bit-->
    <Reference Include="FSharp.Core, Version=$(TargetFSharpCoreVersion), Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
    <Reference Include="Newtonsoft.Json" />

    Changing the Hub

    Now all I had to do was update the code to use the statically typed hub.

    First step was to create an interface for the metricsHub:

    type IMetricsHub = 
        abstract member AddMessage: string -> unit
        abstract member BroadcastPerformance: PerfModel seq -> unit

    Then change our Hub to inherit from the generic Hub<T>:

    type metricsHub() = 
        inherit Hub<IMetricsHub>() // < Generic version of our interface.

    And changed all the dynamic calls (which used the ? operator from FSharp.Interop.Dynamic) to the statically typed form:

    Clients.All.Message message

    Getting the Context

    With SignalR you cannot just new up an instance of a Hub, you have to use GlobalHost.ConnectionManager.GetHubContext<THub>. The problem is that this gives you an IHubContext, which only exposes the dynamic interface again. A bit more googling and I found that you need to pass the interface as a second generic parameter, and then you get an IHubContext<IMetricsHub>.

    So this:

    let context = GlobalHost.ConnectionManager.GetHubContext<metricsHub>()

    Becomes this:

    let context = GlobalHost.ConnectionManager.GetHubContext<metricsHub, IMetricsHub>()

    Now you can call context.Clients.All.BroadcastPerformance and not worry about that pesky dynamic any more.


    The documentation on SignalR isn’t very good; it was easy enough to find out about the statically typed version, but finding out how to get one out of the context was a right pain.

    I’ve published a fork of Louie’s GitHub repo with four commits that show the steps needed to move from dynamic to statically typed SignalR here, so you can see the changes I needed to make.

  • Adding Cloudapp DNS to Azure VM

    I’ve recently deployed a new Azure Linux VM for hosting a Discourse instance I run, and noticed that it didn’t have a DNS entry on cloudapp.net. Last time I deployed one, it was instantly given an entry in the format server-name.cloudapp.net, but this time it wasn’t, so I had to set one up myself.

    I suspect it is something new for Resource Managed deployments.

    Here’s a list of the steps you need to follow if you ever need to do the same.

    Assuming you have just deployed a VM and it doesn’t have a DNS on cloudapp.net you will see something like this:

    newly deployed vm

    Dissociate Public IP

    First you need to Dissociate the Public IP so you can make changes.

    Click the Public IP Address to open the settings:

    public ip settings

    Then click Dissociate and confirm when prompted.

    public ip settings dissociate

    You cannot change any settings whilst the Public IP is in use.

    Configuring the DNS

    From the Public IP page, click All Settings then Configuration to open up the settings:

    public ip settings configuration

    Then you can enter a new DNS prefix for datacentre.cloudapp.azure.com:

    public ip configuration new dns

    Reassociate the Public IP

    Now you need to reassociate the Public IP with the VM.

    From the VM Screen (First Image) click All Settings, then Network Interfaces:

    vm network interfaces

    Click on the Interface listed:

    all vm network interfaces

    Click on IP Addresses from the Settings blade:

    network interfaces ip addresses

    Click on Enable then click on the IP Address Configure Required… and select the default (highlighted) Public IP Address from the list.

    select public ip.

    Then click Save.

    Validation and Testing

    Now if you close and re-open the VM blade you should see a new Public IP address appear.

    Click on the Public IP Address to open the blade and you will see your full DNS Entry and a Copy to clipboard button when you hover on it:

    vm with new dns

    To test, ping the VM and see if the DNS resolves:

    C:\> ping taeguk-test-dns.northeurope.cloudapp.azure.com
    Pinging taeguk-test-dns.northeurope.cloudapp.azure.com []

    The requests will time out because Azure has ICMP disabled, but as long as the DNS resolves, you’ve done it.
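    If you prefer a scripted check, DNS resolution can be tested without ping at all. Here is a minimal Python sketch; substitute whatever DNS prefix you configured:

```python
import socket

# Check that a name resolves, which works even where ICMP/ping is blocked
# (as it is on Azure). Substitute your own *.cloudapp.azure.com name.
def resolves(name):
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

print(resolves("taeguk-test-dns.northeurope.cloudapp.azure.com"))
```

    Note that the VM from this post has since been deleted, so that particular name will no longer resolve.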


    This seems to be a change, for which I can’t find a source, to do with Resource Managed VMs instead of Classic VMs; it used to work fine on Classic VMs.

    Note: I have deleted the VM in this post now.

  • Shut the Box is Live

    Today I’ve just published my first App into the Windows and Windows Phone Store.


    You can download it using the image below if you want to check it out. It’s 100% free and has no ads.

    Windows Store Download

    It is a simple version of the pub game Shut the Box; I have a page here with more information about the game.

    This was my first attempt at a Windows application and I’ve really enjoyed the experience of building it. I tried to use as many things that were new to me as possible, to learn as much as I could through the process. A quick list of the new things I’ve explored whilst working on this:

    • Git
    • Visual Studio Online Kanban for planning and tracking work (up until now I’ve only used TFS 2012.4).
    • TFS Build vNext.
    • Application Insights.
    • Custom MSBuild Project to encapsulate all restore/build/test workflows.
    • xUnit.net for Universal Apps (lots of betas to test).

    Working with the Windows Store was a bit hit and miss; for a while I could not get to the “Dashboard” part of the site “because of my Azure account”, or so I was told. This seemed to resolve itself eventually, but it was very annoying at the time. I was not offered any explanation, only the suggestion that I should create a new Microsoft Account to publish apps through, which I was not prepared to do.

    It took 3 attempts to get the application through certification. First it failed because I had not run the Application Certification Kit and had a transparent Windows tile, which is not allowed. The second failure was because Russia, Brazil, Korea and China require certification of anything listed as a Game in the store. I decided not to publish to those markets for the moment because I wanted the app out there, and figuring out how to complete the certification seemed like too much work. I may look into it again later, but for now I am happy.

    This application has been a long time coming, mostly down to my lack of free time and/or willingness to work on it, but I’m glad it’s finally published. Now to try and release some updates and add some more nice features.

    If you enjoy the game, please feel free to leave me a good rating / comment in the Store.

  • Roslyn Based Attribute Remover

    Major Update 1-Aug-2015: Changed VisitAttributeList to VisitMethodDeclaration to fix some bugs with the help of Josh Varty.

    I’m a big fan of XUnit as a replacement for MSTest and use it extensively in my home projects, but I’m still struggling to find a way to integrate it into my work projects.

    This post looks at one of the obstacles I had to overcome, namely the use of [TestCategory("Atomic")] on all tests that are run on TFS as part of the build. The use of this attribute came about because the MSTest test runner did not support a concept of “run all tests without a category”, so we came up with an explicit category called “Atomic” - probably not the best decision in hindsight. The XUnit test runner does not support test categories, so I needed to find a way to remove the TestCategory attribute with the value of Atomic from any method. I’m sure I could have used regex to solve this, and I’m sure that would have caused more problems:

    To generate #1 albums, 'jay --help' recommends the -z flag.

    via xkcd

    Instead I created a Linqpad script and used the syntactic analyser from the Microsoft.CodeAnalysis package.

    PM> Install-Package Microsoft.CodeAnalysis

    I found that the syntactic analyser allowed me to input some C# source code, and by writing my own CSharpSyntaxRewriter, remove any attributes I didn’t want.
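    The same parse → rewrite → re-emit pattern exists in other languages; as an illustrative sketch only (this is Python’s ast module rather than Roslyn, with a made-up decorator name standing in for the attribute):

```python
import ast

# Parse source into a tree, transform it while visiting, re-emit source.
class DecoratorRemover(ast.NodeTransformer):
    def __init__(self, name):
        self.name = name

    def visit_FunctionDef(self, node):
        # Drop any decorator whose bare name matches; keep the rest.
        node.decorator_list = [
            d for d in node.decorator_list
            if not (isinstance(d, ast.Name) and d.id == self.name)
        ]
        return node

source = "@atomic\n@other\ndef test_one():\n    pass\n"
tree = DecoratorRemover("atomic").visit(ast.parse(source))
print(ast.unparse(tree))
```

    The shape is the same as the Roslyn approach below: a visitor subclass overrides the node type it cares about and returns the rewritten node.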

    I started by creating some C# that had the TestCategory attribute applied in as many different ways as possible:

    namespace P
    {
        class Program
        {
            public void NoAttributes() { }

            [TestMethod, TestCategory("Atomic")]
            public void OnOneLine() { }

            [TestMethod]
            [TestCategory("Atomic")]
            public void SeparateAttribute() { }

            //And so on, right down to...

            [TestMethod, TestCategory("Atomic"), TestCategory("Atomic")]
            public void TwoAttributesOneLineAndOneThatDoesntMatch() { }
        }
    }

    You can see all the examples I tested against in the Gist.

    The CSharpSyntaxRewriter took a lot of messing around with to get right, but I eventually figured that by overriding the VisitMethodDeclaration method I could remove attributes from the syntax tree as they were visited.

    To get some C# code into a syntax tree, there is the obviously named CSharpSyntaxTree.ParseText(String) method. You can then get a CSharpSyntaxRewriter (in my case my own AttributeRemoverRewriter class) to visit everything by calling Visit(). Because this is all immutable, you need to grab the result, which can now be converted into a string and dumped out.

    var tree = CSharpSyntaxTree.ParseText(code);
    var rewriter = new AttributeRemoverRewriter(
        attributeName: "TestCategory", 
        attributeValue: "Atomic");
    var rewrittenRoot = rewriter.Visit(tree.GetRoot());

    The interesting part of the AttributeRemoverRewriter class is the VisitMethodDeclaration method which finds and removes attribute nodes that are not needed:

    public override SyntaxNode VisitMethodDeclaration(MethodDeclarationSyntax node)
    {
        var newAttributes = new SyntaxList<AttributeListSyntax>();
        foreach (var attributeList in node.AttributeLists)
        {
            var nodesToRemove =
                attributeList.Attributes
                    .Where(
                        attribute =>
                            AttributeNameMatches(attribute)
                            && HasMatchingAttributeValue(attribute))
                    .ToArray();
            //If the lists are the same length, we are removing all attributes and can just avoid populating newAttributes.
            if (nodesToRemove.Length != attributeList.Attributes.Count)
            {
                var newAttribute =
                    attributeList.RemoveNodes(nodesToRemove, SyntaxRemoveOptions.KeepNoTrivia);
                newAttributes = newAttributes.Add(newAttribute);
            }
        }
        //Get the leading trivia (the newlines and comments)
        var leadTriv = node.GetLeadingTrivia();
        node = node.WithAttributeLists(newAttributes);
        //Append the leading trivia to the method
        node = node.WithLeadingTrivia(leadTriv);
        return node;
    }
    The AttributeNameMatches method is implemented to find any attribute whose name starts with TestCategory. This is because attributes in .NET have Attribute at the end of their name, e.g. TestCategoryAttribute, but most people never type it. I figured in this case it was more likely to be this attribute than another attribute whose name happens to start with TestCategory. I don’t think there is an elegant way to avoid using StartsWith in the syntactic analyser; I would have had to switch to the semantic analyser, and that would have made this a much more complicated solution.

    The HasMatchingAttributeValue method pretty much does what it says: it looks for the value of the attribute being just Atomic and nothing else.

    Once the matching nodes are found, it checks whether the number of attributes on the method is equal to the number it wants to remove; if so, the newAttributes list is not populated and the method is updated to keep its trivia but lose all of its attributes. This shouldn’t happen in this specific scenario, because a TestCategory on its own doesn’t make sense.

    Remove just the matching attributes

    If there are some attributes that do not need removing, then just the matching one should be removed. For example:

    [TestMethod, TestCategory("Atomic")]
    public void OnOneLine() { }

    When the visitor reaches the attributes on this method, it will populate the newAttributes list with just the attributes we want to keep, and then update the method so that it has the remaining attributes and its trivia.


    Using Roslyn was a bit of a steep learning curve to start with, but once I worked out what I was doing, I knew I could rely on the Roslyn team to have dealt with all the different ways of declaring attributes in C#. That didn’t stop me from finding what appears to be a bug, causing me to rewrite bits of the script and this post, and some more edge cases when I ran it across more than 500 test classes.

    However, if I were to try and use regex to find and remove some of the more complicated ones, and deal with the other edge cases, I’d have gone mad by now.

    • You can get the full Gist here.

    If you paste this into a Linqpad “program” and install the NuGet package, you should be able to try it out. Note this was built against version 1.0.0 of the package.

  • Automating the Deployment of TFS Global Lists

    The TFS Global List is a Team Project Collection wide entity and, to the best of my knowledge, requires someone to be a member of the Collection Administrators group to be able to update it – there is no explicit group or permission for “Upload Global List”. This can be quite a problem if there are a number of Lists within your Global List that are updated frequently by the users of your Collection.

    Your current options are either:

    1. Ask the Collection Administrators for every little change (and complain if they take too long, they have a holiday, etc.)
    2. Keep adding people/groups to the Collection Administrators group (and hand out way too much power to people who don’t need it).

    We went for option #1, then option #2, until neither became sustainable.

    The solution I came up with is based on the post Deploying Process Template Changes Using TFS 2010 Build by Ed Blankenship, but instead of deploying the whole process template, we just deploy the Global List. (N.B. our TFSBuild account is a Collection Administrator.)

    Building the Template

    To build the template I started by copying the DefaultTemplate.11.1.xaml file that ships with TFS 2012, stripped out all of the activities and process parameters that were no longer required, then added a new activity to invoke the witadmin command line tool to import the Global List.

    I won’t go into detail about how I changed the activities because there were quite a lot of steps, although it is quite straightforward. A quick overview: remove anything to do with compiling code, running tests or gated check-ins, then add a new activity to invoke the witadmin command line. It will probably be easier understood by looking at the finished template, available to download at the end. I may write a follow-up post with the exact details.
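    At its core, the new activity just assembles and runs a witadmin importgloballist command line. As a rough sketch of the string it needs to build (the paths and collection URL here are examples, not values from the template):

```python
# Hypothetical helper mirroring what the build activity assembles:
# witadmin_path and collection_url come from the build arguments,
# globallist_path is the checked-in GlobalList.xml in the build workspace.
def witadmin_import_command(witadmin_path, collection_url, globallist_path):
    return '"{0}" importgloballist /collection:{1} /f:"{2}"'.format(
        witadmin_path, collection_url, globallist_path)

print(witadmin_import_command(
    r"C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\witadmin.exe",
    "http://tfs:8080/tfs/DefaultCollection",
    r"C:\Builds\1\TFS\GlobalList\GlobalList.xml"))
```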

    Using the template

    • To use the template you need to have the Global List file checked into Version Control. You can follow the advice in the Wrox Professional Team Foundation Server 2013 book and create a Team Project for all your Process artefacts, or if you just want to keep it simple:
      • Use witadmin to export the global list file:
      • witadmin exportgloballist /collection:http://tfs:8080/tfs/DefaultCollection /f:GlobalList.xml
      • Check that file into its own folder somewhere in source control; in this example we will use $/TFS/GlobalList/GlobalList.xml (having it in its own folder helps).
    • Once you have the template downloaded, you need to check it into Version Control, usually $/MyTeamProject/BuildProcessTemplates/.
    • Create a new build definition.
    • Fill in the General tab however you like.
    • In the Trigger tab select Continuous Integration.
    • In the Source Settings tab select the folder with your GlobalList.xml as Active ($/TFS/GlobalList/)
    • In the Build Defaults tab, select “This build does not copy output files to a drop folder”.
    • In the Process tab we need to do a few steps:
      • To install the template, click Show Details:
      • Show details
      • Click New… and browse to the template we checked in ($/MyTeamProject/BuildProcessTemplates).
      • Fill in the sections as follows:
      • Process Parameters
      • I didn’t know the best way to get the URI of the Team Project Collection, so I made it an argument you need to enter.
      • If you are not using VS2012 on your build server, you will need to find a way to get witadmin.exe on there and then update the path to the location.

    Once the above has been completed, you should be able to queue a new build using the new definition and check the output to see if the global list was uploaded successfully. Just open the build and check the summary; if everything went well you should see the following:

    Build Summary

    If there were any problems, check the “View Log”; the build uses Detailed logging, which should include enough information to figure out what went wrong.


    I’ve now stopped worrying about having to update the global list for everyone who needs something new adding, and I’m no longer afraid of lots of people being Collection Administrators who really shouldn’t be. I can just grant check-in permissions to the folder that contains our global list and leave people to it.


    I’m keeping this on my GitHub:

    If you have any improvements (to the post / template), feel free to send me a PR.

  • Moving from WordPress

    This is my last post on WordPress and first post on Jekyll GitHub Pages.

    I’ve decided to abandon WordPress running on Azure Web Apps for a simpler static blog, using Jekyll to convert Markdown to static content hosted on GitHub Pages. I’ll go into the process I went through in a future post.

    This post is here as a marker of when I moved everything over. I’ve tried to get the permalinks in Jekyll to match the ones in WordPress, though this breaks any that were from my brief stint on DasBlog. As far as I know everything else should just be the same, including the RSS feed on /feed.

    Shoot me a mail if there is a problem.

    Everything from this point on will be on the new format.

    The actual migration will occur at some point next week.

  • Fixing a broken Kanban Board

    Update 05-Aug-2015: This fix also resolves a second issue.

    There are two problems that I have identified in TFS 2012.4 with Kanban boards (Backlog Board) not functioning correctly. The fix in this post describes how to resolve both issues by deleting the board configuration from the database.

    Nothing but errors

    The first problem presented itself as a completely broken TFS Kanban board. All the affected users could see was a generic “there has been a problem” pink popup instead of their cards.

    When presented with this, the usual fix is to ensure the TFS background job agent is running, which it was. So I took a look in the Windows event logs on the server for more detail and found this error (most of the details are removed for brevity; this is from the middle):

    Detailed Message: TF30065: An unhandled exception occurred.
    Exception Message: The given key was not present in the dictionary. (type KeyNotFoundException)
    Exception Stack Trace:    at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
       at Microsoft.TeamFoundation.Server.WebAccess.Agile.Models.WorkItemSource.<>c__DisplayClass18.<GetProposedInProgressWorkItemData>b__13(IDataRecord dataRecord)
       at Microsoft.TeamFoundation.Server.WebAccess.Agile.Utility.WorkItemServiceUtils.<GetWorkItems>d__c.MoveNext()
       at Microsoft.TeamFoundation.Server.WebAccess.Agile.Models.WorkItemSource.GetProposedInProgressWorkItemData(ICollection`1 rowData, ICollection`1 hierarchy, ISet`1 parentIds)

    With this little information, all I could assume was that the configuration had somehow become corrupted.

    Bouncing cards

    The second issue was that when dragging a card/work item from one column to another, it instantly bounced back to the original column. This was happening on the client, because we could reproduce the issue with the network cable unplugged. It wasn’t that the card couldn’t transition state; that presents by not allowing the card to be dragged at all. In my case the card could be dragged and dropped: it went into the column for less than a second and then bounced back, occasionally leaving a card drawn in an odd location in the browser.

    Further testing proved that the card could be moved between these columns when placed on another team’s board, just not on this team’s.

    As before, I guessed that somehow the configuration in the database was corrupt.

    The fix

    This fix is a little heavy handed, but by deleting the board configuration from the database, you can re-setup your board as before with no issues.

    Be sure to make a note of the column configuration before you start.

    NOTE: Neither I nor Microsoft support you making changes directly to your TFS database. You do so at your own risk, and preferably with a backup. This SQL worked against our TFS 2012.4 database; I cannot guarantee other versions have the same schema.

    First step is to find your TeamId from the Collection Database. Team Ids can be found in the ADObjects table.

    select * from ADObjects
    where SamAccountName like '%MyTeamName%';

    The TeamFoundationId GUID in this table is the value we are interested in.

    You can find the Board and Columns in the tbl_Board and tbl_BoardColumn tables using the following SQL:

    select * from tbl_Board b
    join tbl_BoardColumn bc on b.Id = bc.BoardId
    where TeamId = 'YourTeamId';

    Once you are happy that you have found the rows for the team, you can delete them from those two tables. You should probably copy the results into Excel, just in case things go wrong.

    To delete you can use the following SQL Queries:

    delete bc
    from tbl_Board b
    join tbl_BoardColumn bc on b.Id = bc.BoardId
    where TeamId = 'YourTeamId';

    delete tbl_Board
    where TeamId = 'YourTeamId';

    Now if you refresh the board, it should report that there is no configuration and needs to be set up again from scratch.

    I’ve no idea what caused these problems, or if it is fixed in a future update, but this got things working again for me.

  • Working in Visual Studio behind the Firewall

    • Updated 22-Mar-2016 Added VS Code
    • Updated 07-Oct-2016 Added NPM / Node JS

    Working in an “Enterprise” type environment means lots of fun obstacles getting in the way of your day to day work – the corporate proxy is one of my challenges.

    Since giving up on CNTLM Proxy due to instability and account lockouts, I haven’t been able to connect to nuget.org from the package manager, view the Visual Studio Extension Gallery or even get any extension/product updates from Visual Studio.

    This is a quick post with the changes I needed to get Visual Studio 2013 Update 4 (works on 2015 too), VS Code 0.10.11, NuGet 2.8 and Web Platform (Web PI) 5 to see past the corporate Squid proxy.


    NuGet

    Configuring NuGet is based on this Stack Overflow answer by arcain.

    Running nuget.exe with the following switches will allow NuGet to use and authenticate with the proxy:

    nuget.exe config -set http_proxy=http://proxy-server:3128
    nuget.exe config -set http_proxy.user=DOMAIN\Dave
    nuget.exe config -set http_proxy.password=MyPassword

    It will put the values into your nuget.config file (with the password encrypted):

    <config>
        <add key="http_proxy" value="http://proxy-server:3128" />
        <add key="http_proxy.user" value="DOMAIN\Dave" />
        <add key="http_proxy.password" value="base64encodedHopefullyEncryptedPassword" />
    </config>

    Once Visual Studio is restarted, it should be able to see through the proxy.

    As per the comments on the answer, some people might have success without the password; sadly, not in my case. Also, remember that if you have to change your password (as I have to every month or so) you will need to re-enter it here.

    Visual Studio Code

    Visual Studio Code is a tricky one to set up because it isn’t .NET; it’s all JavaScript based. Most of my information came from the GitHub issue.

    • Determine your proxy server and port. When you have a complicated proxy, this is a pain and it took me a while as I use an automatic configuration script. If it is a standard server/port combo, you’re on an easier path.
    • I usually configure IE with a script from a URL like this one: http://proxy-server/script.dat. This is a plain JS script which, after a bit of reading, I discovered points to proxy-cluster.fqdn.local:8881.
    • Now I have a server and port I need my authentication details.
    • Let’s assume my NTLM login is DOMAIN\User Name and my password is P@ssword!
    • The format for the credentials needs to be DOMAIN\User Name:P@ssword!, but you need to URL encode the user name and password.
    • A simple online URL encoder can translate your username and password to: DOMAIN%5CUser%20Name and P%40ssword!.
    • Piece all this info into a single string like so: http://DOMAIN%5CUser%20Name:P%40ssword!@proxy-cluster.fqdn.local:8881
    • Then add this into your User Settings in File, Preferences against the "http.proxy" value:
    // Place your settings in this file to overwrite the default settings
        "http.proxy": "http://DOMAIN%5CUser%20Name:P%40ssword!@proxy-cluster.fqdn.local:8881"
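    If you’d rather not paste your password into an online encoder, the encoding step can be reproduced locally. A quick Python sketch using the same placeholder credentials as above:

```python
from urllib.parse import quote

user = r"DOMAIN\User Name"
password = "P@ssword!"  # placeholder matching the encoded example above

# Percent-encode the userinfo part of the URL; '!' is legal there,
# so it is left alone to match the example.
proxy_url = "http://{0}:{1}@proxy-cluster.fqdn.local:8881".format(
    quote(user, safe=""), quote(password, safe="!"))
print(proxy_url)
```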

    There are a lot of ways to mess this up; I almost gave up on VS Code after weeks of messing about, and the removal of C# from the base product made it “make or break” time. If you are struggling, I suggest you re-read the GitHub issue. The main tip I found useful was to pop open the Developer Tools in VS Code (under Help) and, in the JavaScript Console, run require('url').parse('YOUR PROXY URL') and check the output.

    Big thanks to João Moreno for all his comments on the GitHub issue.

    NPM (Node JS)

    To use NPM there are 2 options:

    1. NPM Config Setting
    2. Command Line Switch

    Both require the NTLM authentication URI from the Visual Studio Code section above, so read that if you need to.

    NPM Config

    You can just run the following from the command line:

    npm config set proxy http://DOMAIN%5CUser%20Name:P%40ssword!@proxy-cluster.fqdn.local:8881

    The disadvantage of this is that it is always visible on your system, so you might want to remove it after installing packages:

    npm config rm proxy

    Command Line Switch

    When calling npm you can pass the NTLM authentication URI as a switch like so:

    npm install --proxy http://DOMAIN%5CUser%20Name:P%40ssword!@proxy-cluster.fqdn.local:8881 jslint

    This requires you to know your proxy URI in advance, but if you are storing it in VS Code, you can copy and paste from there.

    I’m currently wrapping all NPM operations in Powershell scripts that automate checking for packages on disk, prompting for authentication details if needed, and building the URI on the fly.

    Visual Studio

    Setting up Visual Studio is based on this blog post by Raffael Herrmann.

    • Open the devenv.exe.config file. I find it by right clicking the Visual Studio shortcut, selecting Properties and then “Open File Location”. If you have UAC enabled you will need to open it in a program running as Administrator.
    • Scroll to the end of the file and find the system.net section:

    <system.net>
        <settings>
            <!-- More -->
            <ipv6 enabled="true"/>
        </settings>
    </system.net>

    • Add the following below </settings>:

    <defaultProxy useDefaultCredentials="true" enabled="true">
        <proxy bypassonlocal="true" proxyaddress="http://proxy-server:3128" />
    </defaultProxy>

    • The final version will look something like this:

    <system.net>
        <settings>
            <!-- More -->
            <ipv6 enabled="true"/>
        </settings>
        <defaultProxy useDefaultCredentials="true" enabled="true">
            <proxy bypassonlocal="true" proxyaddress="http://proxy-server:3128" />
        </defaultProxy>
    </system.net>

    Web Platform Installer

    This was the same set of changes needed for Visual Studio, except with the WebPlatformInstaller.exe.config file, which I again obtained from the shortcut properties using “Open File Location”.


    Big thanks to Eric Cain and Raffael Herrmann for enabling me to connect to the internet again :).

  • UNC to URI Path Converter

    Yesterday, at work, I was trying to help someone enter a UNC path (\\server\share\file.txt) into a hyperlink control in our application, but it was rejected because it wasn’t a valid hyperlink. I discovered that you could enter a URI path (file://server/share/file.txt) instead and it worked fine. Problem solved? Not exactly, otherwise I wouldn’t have anything to write about.

    The issue was that the users are non-technical folk who can’t just fire up Linqpad and run the following code, like I did, to test whether it worked:

    new Uri(@"\\server\share\file.txt").AbsoluteUri

    The rules for converting UNC paths to URIs get a little tricky when you have paths with spaces and special characters in them, so I thought I would Google/Bing for an online UNC to URI path converter… it turns out I couldn’t find one, so I did what every software developer does: write one.
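    As a rough illustration of those rules (this is Python’s pathlib rather than the .NET Uri class the site uses, but the output matches for these simple cases):

```python
from pathlib import PureWindowsPath

# UNC path -> file URI; note how spaces become %20 in the result.
print(PureWindowsPath(r"\\server\share\file.txt").as_uri())
print(PureWindowsPath(r"\\server\my share\report 2015.txt").as_uri())
```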

    Path Converter

    This took a little over an evening, mostly due to me being unfamiliar with CSS, jQuery (I know, but it’s quicker than vanilla JS) and TypeScript (the first time I’ve ever used it).

    The entire website was set up in my VS Online account and linked to my Azure site as part of the new “one ASP.NET” setup template, and I even got to add Application Insights as part of the setup. A few clicks on the Azure Portal and I had a Continuous Delivery build definition set up in VS Online. All I had to do then was push changes from my local git repository to VS Online; TFS would build the code and, if it succeeded, the site was updated within a few minutes.

    The site works by making a quick AJAX HTTP GET, when you click the Convert button, to an ASP.NET MVC site that uses the .NET Uri class. That’s about it.

    Here’s the link for anyone who wants to use it: http://pathconverter.azurewebsites.net/
