It has been a long time since I’ve written anything just due to how busy I’ve been. I have a few articles planned, and hopefully can get back to writing at a regular cadence.
I’m going to take a look at runbooks. I’ll be covering: when to create them, how I structure them, the different types I use, some example runbooks, my best practices, and why I haven’t automated them.
I’ve been working for the last two years on transitioning a service from running somewhere deep inside a data centre, where it was managed by an Ops Team and updates were scheduled through a Release Management team, to running in AWS where the development team deploy, manage, monitor and update it.
During this time I’ve been looking into a lot of things in the DevOps and Site Reliability Engineering (SRE) space to ensure that the service I am responsible for is up and running at all times.
Whilst on a call someone mentioned “executing a runbook” to resolve a problem. I had previously only heard of runbooks in the context of Microsoft System Centre and was amazed that teams were using similar approaches on an AWS native service without any Microsoft stuff. Hoping to bring some of these into my service, I reached out for more information, expecting some code or configuration for AWS, but instead I was told none of them were automated; these were just documented processes that people followed.
I was a little crestfallen. “Anyone can write a manual process, it wasn’t hard”, I thought.
But, I’d not done it!
I had no processes documented for such situations!
I started to look at the problems I might have with my service and what steps I might take to resolve them. There were a few things I knew, but had never written down. “How would someone else deal with it if I was on holiday?”, “Would I remember what to do in 12 months’ time?”. These were all things I should put into a runbook.
Now, I needed somewhere to store them. We use Atlassian Confluence, but any shared team documentation would suffice: OneNote, ADO or GitHub Wikis, Google Docs, any place your team keeps their documentation and can easily collaborate.
I set up a “parent” page for “Runbooks” with a quick intro and a table of contents, and then created my first runbook.
Just because it is a manual process doesn’t mean there’s no automation involved. It may be as simple as updating a line in a JSON configuration file in your repository and performing a standard deployment. The point is to have a clear, documented process telling you when and how to do it.
I only create runbooks for processes that relate to production systems and things I don’t do every day.
Good candidates: Servers dying, certificates rotating, overnight jobs failing, etc.
Poor candidates: How to set up a development laptop, how to perform a release - these are documented, but they don’t meet the bar for creating a runbook - put them in another section.
If you don’t set the bar high, you have processes for anything and everything, and managing them becomes onerous. Keeping them focused means you have a small selection of procedures that cover the most important processes.
My runbooks have a simple structure. There is a Trigger and a Process, but I also have some metadata such as who owns it, when was it last updated, etc.
Sometimes I will maintain a log of when it was last run, for example, certificate rotation has a log of when it was rotated, and when it will next expire.
Triggers explain when to invoke a runbook. For example, it could be as simple as “If a server dies”. Or something a bit more involved “If X job fails, check the logs for A event, then follow process 1, otherwise follow process 2”. I will nest runbooks so top level ones cover a scenario and child runbooks cover different solutions to the same overall problem.
The process is a list of steps you need to follow. I’ve not needed to use flowcharts yet, just using bulleted lists is enough. I ensure each step is clear and has examples of things you expect to find.
e.g.
To resolve the issue with the Server follow these steps:

1. Open a command prompt, type `ipconfig` and press Enter
2. Find the `IPv4 Address. . . . . . . . . . . : 192.168.0.1` line - you want the one starting 192.168.0.
3. Enter that address into the box labelled “Server IP”

There are two main types of runbook I have created:
BAU runbooks cover any maintenance tasks that need to be performed on a semi-regular basis, for example, creating a new SSH key, adding a new admin user, etc.
The trigger for a BAU runbook is usually some business process or event. These things are expected to happen and the runbook is just a record of the steps needed.
I label the BAU runbooks with `[BAU]` in the title so I can tell which is which.
Problem runbooks are to be invoked when something goes wrong and it requires manual intervention to remediate. For example, a release goes live and errors increase, or an overnight process doesn’t run.
The trigger should be some alert from your monitoring solution. The process is a list of steps to identify what has gone wrong and what needs to be done to remedy the problem.
Above I’ve mentioned the structure of the pages and the structure of a runbook. I can’t reproduce my real runbooks here, but I’ll show some hypothetical examples.
An example of the structure of a team’s documentation site
| Name | Server Dies |
| --- | --- |
| Description | Process to follow when a server dies |
| Date | 15-Jan-2025 |
| Version | 2 |
| Owner | Dave |
Trigger
This runbook is to be executed when an email alert is received informing you a server has died, or if you notice a server isn’t responding.
NOTE: if this is because of maintenance, you don’t need to do anything as the engineer will restart it when they are done.
Process
1. Enter the name of the dead server into the box labelled “Server to restart”

References
An example to show what I might have in a runbook.
Here are my best practices for runbooks.
On a Friday afternoon, or in that boring meeting you can’t get out of, have a browse through them and make sure they still make sense. When you write things down you often do it from a position of understanding, and only in time do you realise you have missed a vital instruction. “Reboot the server” may be a valid instruction, but if you are SSH’d into a Linux server, do you know the exact command to trigger an immediate reboot?
If you have not actually performed the steps you cannot be sure your runbook is going to help you when you need it. If possible, test your process by following the steps, or better yet, have someone else follow it whilst you observe.
However, sometimes you cannot test them if they require outside coordination. In these cases it is still better to have them than not (see “Prepare for the Worst”).
I have a number of runbooks I have never run, for events that I hope never happen. These are for scenarios that are rare but would be a big problem if they triggered. By writing down the most likely steps needed to resolve the problem, I give myself a head start.
If you are doing a manual process for something in production and you realise “this is a bit complicated, I bet I won’t remember this”, then it is an opportunity to create a runbook.
Runbooks should be:

- Named
- Scoped
- Simple
- Kept together
By giving these processes a name, defining a scope, keeping them simple and putting them all together you have a powerful suite of processes for dealing with production issues.
Above, I said I was “crestfallen” when I found out these were manual processes, and not some amazing feat of automation, so why am I espousing the values of manual runbooks and not trying to just automate them all?
Simple. Perfect is the enemy of good.
If I waited until I could automate every process, I wouldn’t have any runbooks yet.
You have to balance the time it would take to automate these things with how much value it would provide. Some processes are very complex to engineer, and happen very rarely. Some would require you to build a whole new solution to perform a task that takes 10 minutes once a quarter. It isn’t always suitable to fully automate these processes.
By creating a manual runbook first, you can understand the process and measure the time spent performing it, and then make a business decision if automation is the right approach.
The lack of automation was a surprise at first, but once I got over myself, I realised how beneficial manual runbooks can be. It’s relatively simple to set them up using the tools you already have, and then if something goes wrong, you are prepared.
These sorts of things may be common in Ops-led services, but where the development team owns and operates them, this level of maturity is definitely still needed. DevOps must include the benefits of both Development and Operations.
This post is part of the F# Advent Calendar 2021. Many thanks to Sergey Tihon for organising these. Go check out the many other excellent posts.
This year, I’ve run out of Xmas themed topics. Instead, I’m just sharing a few tips from a recent project I’ve been working on…
I’m going to show…
You can see the full source code for this project on GitHub here
Dev Containers are a feature of VS Code I was introduced to earlier this year and have since taken to using in all my projects.
They allow you to have a self-contained development environment defined in a Dockerfile, including all the dependencies your application requires and extensions for Visual Studio Code.
If you have ever looked at the amount of things you have installed for various projects and wondered where it all came from and if you still need it - Dev Containers solves that problem. They also give you a very simple way to share things with your collaborators, no longer do I need a 10-step installation guide in a Readme file. Once you are setup for Dev Containers, getting going with a project that uses them is easy.
This blog is a GitHub Pages Site, and to develop and test it locally I had to install Ruby and a bunch of Gems, and installing those on Windows is tricky at best. VS Code comes with some pre-defined Dev Container templates, so I just used the Jekyll one, and now I don’t have to install anything on my PC.
To get started, you will need WSL2 and the Remote Development Tools pack VS Code extension installed.
Then it’s just a matter of launching VS Code from in my WSL2 instance:
cd ~/xmas-2021
code .
Now in the VS Code Command Palette I select Remote Containers: Add Development Container Configuration Files… A quick search for “F#” helps get the extensions I need installed. In this case I just picked the defaults.
Once the Dockerfile was created I changed the `FROM` to use the standard .NET format that Microsoft uses (the F# template may have changed by the time you read this) to pull in the latest .NET 6 Bullseye base image.
Before
FROM mcr.microsoft.com/vscode/devcontainers/dotnet:0-5.0-focal
After
# [Choice] .NET version: 6.0, 5.0, 3.1, 6.0-bullseye, 5.0-bullseye, 3.1-bullseye, 6.0-focal, 5.0-focal, 3.1-focal
ARG VARIANT=6.0-bullseye
FROM mcr.microsoft.com/vscode/devcontainers/dotnet:0-${VARIANT}
VS Code will then prompt to Reopen in Container; selecting this will relaunch VS Code and build the Dockerfile. Once complete, we’re good to go.
Now that I’m in VS Code, using the Dev Container, I can run `dotnet` commands in the terminal inside VS Code. This is what I’ll be using to create the skeleton of the website:
# install the template
dotnet new -i "giraffe-template::*"
# create the projects
dotnet new giraffe -o site
dotnet new xunit --language f# -o tests
# create the sln
dotnet new sln
dotnet sln add site/
dotnet sln add tests/
# add the reference from tests -> site
cd tests/
dotnet add reference ../site/
cd ..
I also updated the projects’ target frameworks to net6.0 as the templates defaulted to net5.0.
For the `site/` project I updated to the latest Giraffe 6 pre-release (alpha-2 as of now) and removed the reference to `Ply`, which is no longer needed.
That done I could run the site and the tests from inside the dev container:
dotnet run --project site/
dotnet test
Next, I’m going to rip out most of the code from the Giraffe template, just to give a simpler site to play with.
Excluding the `open`s, it is only a few lines:
let demo =
    text "hello world"

let webApp =
    choose [
        GET >=>
            choose [
                route "/" >=> demo
            ] ]
let configureApp (app : IApplicationBuilder) =
    app.UseGiraffe(webApp)

let configureServices (services : IServiceCollection) =
    services.AddGiraffe() |> ignore
[<EntryPoint>]
let main args =
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(
            fun webHostBuilder ->
                webHostBuilder
                    .Configure(configureApp)
                    .ConfigureServices(configureServices)
                |> ignore)
        .Build()
        .Run()
    0
I could have trimmed it further, but I’m going to use some of the constructs later.
When run, you can perform a `curl localhost:5000` against the site and get a “hello world” response.
I wanted to try out self-hosted tests against this API, so that I’m performing real HTTP calls and mocking as little as possible.
As Giraffe is based on ASP.NET you can follow the same process as you would for testing an ASP.NET application.
You will need to add the TestHost package to the tests project:
dotnet add package Microsoft.AspNetCore.TestHost
You can then create a basic XUnit test like so:
let createTestHost () =
    WebHostBuilder()
        .UseTestServer()
        .Configure(configureApp) // from the "Site" project
        .ConfigureServices(configureServices) // from the "Site" project

[<Fact>]
let ``First test`` () =
    task {
        use server = new TestServer(createTestHost())
        use msg = new HttpRequestMessage(HttpMethod.Get, "/")
        use client = server.CreateClient()
        use! response = client.SendAsync msg
        let! content = response.Content.ReadAsStringAsync()
        let expected = "hello test"
        Assert.Equal(expected, content)
    }
If you `dotnet test`, it should fail because the test expects “hello test” instead of “hello world”. However, you have now invoked your server from your tests.
With this approach you can configure the site’s dependencies how you like, but as an example I’m going to show two different types of dependencies:

- values loaded from configuration (appsettings.json)
- services resolved through dependency injection
Suppose your site relies on settings from the “appsettings.json” file, but you want to test with a different value.
Let’s add an app setting to the Site first, then we’ll update the tests…
{
  "MySite": {
    "MyValue": "100"
  }
}
I’ve removed everything else for the sake of brevity.
We need to make a few minor changes to the `demo` function and also create a new type to represent the settings:
[<CLIMutable>]
type Settings = { MyValue: int }

let demo =
    fun (next : HttpFunc) (ctx : HttpContext) ->
        let settings = ctx.GetService<IOptions<Settings>>()
        let greeting = sprintf "hello world %d" settings.Value.MyValue
        text greeting next ctx
And we need to update the `configureServices` function to load the settings:
let serviceProvider = services.BuildServiceProvider()
let settings = serviceProvider.GetService<IConfiguration>()
services.Configure<Settings>(settings.GetSection("MySite")) |> ignore
If you run the tests now, you get “hello world 0” returned.
However, if you `dotnet run` the site and use `curl`, you will see `hello world 100` returned.

This proves the configuration is loaded and read; however, it isn’t used by the tests, because the `appsettings.json` file isn’t part of the tests. You could copy the file into the tests and that would solve the problem, but if you wanted different values for the tests you could create your own `appsettings.tests.json` file for the tests:
{
  "MySite": {
    "MyValue": "3"
  }
}
To do that we need a function that will load the test configuration, and then add it into the pipeline for creating the TestHost:
let configureAppConfig (app: IConfigurationBuilder) =
    app.AddJsonFile("appsettings.tests.json") |> ignore
    ()

let createTestHost () =
    WebHostBuilder()
        .UseTestServer()
        .ConfigureAppConfiguration(configureAppConfig) // Use the test's config
        .Configure(configureApp) // from the "Site" project
        .ConfigureServices(configureServices) // from the "Site" project
Note: you will also need to tell the test project to include the `appsettings.tests.json` file.
<ItemGroup>
  <Content Include="appsettings.tests.json" CopyToOutputDirectory="always" />
</ItemGroup>
If you would like to use the same value from the config file in your tests you can access it via the test server:
let config = server.Services.GetService(typeof<IConfiguration>) :?> IConfiguration
let expectedNumber = config["MySite:MyValue"] |> int
let expected = sprintf "hello world %d" expectedNumber
In F# it’s nice to keep everything pure and functional, but sooner or later you will realise you need to interact with the outside world, and when testing from the outside like this, you may need to control those things.
Here I’m going to show you the same approach you would use for a C# ASP.NET site - using the built in dependency injection framework.
type IMyService =
    abstract member GetNumber : unit -> int

type RealMyService() =
    interface IMyService with
        member _.GetNumber() = 42

let demo =
    fun (next : HttpFunc) (ctx : HttpContext) ->
        let settings = ctx.GetService<IOptions<Settings>>()
        let myService = ctx.GetService<IMyService>()
        let configNo = settings.Value.MyValue
        let serviceNo = myService.GetNumber()
        let greeting = sprintf "hello world %d %d" configNo serviceNo
        text greeting next ctx
I’ve created an `IMyService` interface and a class to implement it, `RealMyService`. Then in `configureServices` I’ve added it as a singleton:
services.AddSingleton<IMyService>(new RealMyService()) |> ignore
Now the tests fail again because `42` is appended to the results. To make the tests pass, I want to pass in a mocked `IMyService` that returns a number that I want.
let luckyNumber = 8

type FakeMyService() =
    interface IMyService with
        member _.GetNumber() = luckyNumber

let configureTestServices (services: IServiceCollection) =
    services.AddSingleton<IMyService>(new FakeMyService()) |> ignore
    ()

let createTestHost () =
    WebHostBuilder()
        .UseTestServer()
        .ConfigureAppConfiguration(configureAppConfig) // Use the test's config
        .Configure(configureApp) // from the "Site" project
        .ConfigureServices(configureServices) // from the "Site" project
        .ConfigureServices(configureTestServices) // mock services after real ones
Then in the tests I can expect the `luckyNumber`:
let expected = sprintf "hello world %d %d" expectedNumber luckyNumber
And everything passes.
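Pulling those pieces together, the final test might look something like this sketch (the test name is my own; it assumes the `createTestHost`, `luckyNumber` and `appsettings.tests.json` shown above):

[<Fact>]
let ``Greeting uses config value and mocked service`` () =
    task {
        use server = new TestServer(createTestHost())
        use client = server.CreateClient()
        // Read the expected value back out of the test's own configuration
        let config = server.Services.GetService(typeof<IConfiguration>) :?> IConfiguration
        let expectedNumber = config["MySite:MyValue"] |> int
        let! content = client.GetStringAsync("/")
        Assert.Equal(sprintf "hello world %d %d" expectedNumber luckyNumber, content)
    }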
I hope this contains a few useful tips (if nothing else, I’ll probably be coming back to it in time to remember how to do some of these things) for getting going with Giraffe development in 2022.
You can see the full source code for this blog post here.
This post is inspired by and in response to Pendulum swing: internal by default by Mark Seemann.
Access modifiers in .NET can be used in a number of ways to achieve different things; in this post I’ll talk about how I use them and why.
Firstly I should point out, I am NOT a library author, if I were, I may do things differently.
In .NET the `public` and `internal` access modifiers control the visibility of a class from another assembly. Classes that are marked as public can be seen from another project/assembly, and those that are internal cannot.
I view public as saying, “here is some code for other people to use”. When I choose to make something public, I’m making a conscious decision that I want another component of the system to use this code. If they are dependent on me, then this is something I want them to consume.
For anything that is internal, I’m saying, this code is part of my component that only I should be using.
When writing code within a project, I can use my public and internal types interchangeably, there is no difference between them.
If in my project I had these 2 classes:
public class Formatter { public void Format() { } }
internal class NameFormatter { public void Format() { } }
and I was writing code elsewhere in my project, then I can choose to use either of them - there’s nothing stopping or guiding me using one or the other. There’s no encapsulation provided by the use of internal.
NOTE: When I say ‘I’, I actually mean, a team working on something of significant complexity, and that not everyone working on the code may know it inside out. The objective is to make it so that future developers working on the code “fall into the pit of success”.
If my intention was that `NameFormatter` must not be used directly, I may use a different approach to “hide” it. For example, a private nested class:
public class Formatter
{
    private class NameFormatter { }
}
or by using namespaces:
Project.Feature.Formatter
Project.Feature.Formatters.NameFormatter
These might not be the best approaches, just a few ideas on how to make them less “discoverable”. The point I’m hoping to make is that within your own project internal doesn’t help; if you want to encapsulate logic, you need to use private (or protected).
In larger systems where people are dependent on my project, everything is internal by default, and only made public to surface the specific features they need.
So where does this leave me with unit testing? I am quite comfortable using `InternalsVisibleTo` to allow my tests access to the types they need.
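Granting that access is a one-liner; a minimal sketch, assuming a test assembly named MyProject.Tests (the name here is hypothetical):

module AssemblyInfo

open System.Runtime.CompilerServices

// Grant the (hypothetical) test assembly access to this project's internal types
[<assembly: InternalsVisibleTo("MyProject.Tests")>]
do ()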
The system I work on can have a lot of functionality that is `internal` and only triggered by its own logic, such as a plugin that is loaded for a UI, or a message processor.
Testing everything through a “Receive Message” type function could be arduous. That said, I do like “outside-in” testing and I can test many things that way, but it is not reasonable to test everything that way.
In one of the systems I maintain, I do test a lot of it this way:

- Arrange: put the system into a known state
- Act: send an input into the system
- Assert: observe that the outputs are what is expected

Sending inputs and asserting on the outputs tells me how the system behaves.
However, some subcomponents of this system are rather complex on their own, such as the RFC4517 Postal Address parser I had to implement. When testing this behaviour it made much more sense to test this particular class in isolation with a more “traditional” unit test approach, such as Xunit.net’s Theory tests with a simple set of Inputs and Expected outputs.
I wouldn’t have wanted to make my parser public, it wasn’t part of my component my dependants should care about.
I hope to write more about my testing approaches in the future.
For reasons I won’t go into, in one of the systems I work on a single “module” is comprised of a number of assemblies/projects, and the system is comprised of many modules. For this we use “InternalsVisibleTo” so that only the projects in the same module can see each other - in addition to the unit testing use stated above.
This allows a single module to see everything it needs to, but dependant modules to only see what we choose to make visible. Keeping a small and focused API helps you know what others depend on and what the impact of your changes are.
When you use static analysis like the .NET Analysers, they make assumptions about what your code’s purpose is based on the access modifier. To the .NET Analysers, public code is library code, to be called by external consumers.
A few examples of rules that only apply to public classes:

- the full `IDisposable` implementation pattern

The options you have are to disable these rules, suppress them, or add the requisite code to support them.
When you are using Nullable Reference Types from C# 8.0 the compiler protects you from accidentally dereferencing null.
But `public` means that anyone can write code to call it, so the compiler errs on the side of caution and still warns you that arguments may be null and you should check them.
Given the limited value of `public` within a project, I always default to `internal` and will happily test against `internal` classes, only using `public` when I think something should be part of a public API for another person or part of the system.
Internal types are only used by trusted and known callers. Nullable Reference Type checking works well with them, as it knows they can only be instantiated from within known code, allowing a more complete analysis.
If you are writing code that is to be maintained for years to come by people other than yourself, using public or internal alone won’t help; you need to find other approaches to ensure that code is encapsulated and consumed appropriately.
This post is part of the F# Advent Calendar 2020. Many thanks to Sergey Tihon for organizing these. Go checkout the other many and excellent posts.
Back in July I got an email from KickStarter about a project for an RGB Snowman that works on Raspberry Pis and BBC micro:bits. My daughter loves building things on her micro:bit, and loves all things Christmassy, so I instantly backed it…
image from the KickStarter campaign
A few months later (and now in the proper season) my daughter has had her fun programming it for the micro:bit. Now it is my turn, and I thought it would make a good Christmas post if I could do it in F# and get it running on a Raspberry Pi with .NET Core / .NET.
Most of my Raspberry Pi programming so far has been with cobbled-together Python scripts with little attention to detail or correctness; I’ve never run anything .NET on a Raspberry Pi.
This is my journey to getting it working with F# 5 / .NET 5 and running on a Raspberry Pi.
After my initial idea, next came the question, “can I actually do it?”. I took a look at the Python demo application that was created for the SnowPi and saw it used `rpi_ws281x`; a quick google for “rpi_ws281x .net” and, yep, this looked possible.
However, that wasn’t to be. I first tried the popular ws281x.Net package from nuget, and despite following the instructions to set up the native dependencies, I managed to get from `Seg Fault!` to `WS2811_ERROR_HW_NOT_SUPPORTED`, which seemed to indicate that my RPi 4 wasn’t supported and that I needed to update the native libraries. I couldn’t figure this out and gave up.
I then tried rpi-ws281x-csharp which looked newer, and even with compiling everything from source, I still couldn’t get it working.
After some more digging I finally found Ken Sampson had a fork of rpi-ws281x-csharp which looked newer than the one I used before, and it had a nuget package.
This one worked!
I could finally interact with the SnowPi from F# running in .NET 5. But so far all I had was “turn on all the lights”.
The problem with developing on a desktop PC and testing on an RPi is that it takes a while to build, publish, copy and test the programs.
I needed a way to test these easier, so I decided to redesign my app to use Command Objects and decouple the instructions from the execution. Now I could provide an alternate executor for the Console and see how it worked (within reason) without deploying to the Raspberry Pi.
As with most F# projects, first, I needed some types.
The first one I created was the Position to describe in English where each LED was so I didn’t have to think too hard when I wanted to light one up.
type Position =
    | BottomLeft
    | MiddleLeft
    | TopLeft
    | BottomRight
    | MiddleRight
    | TopRight
    | Nose
    | LeftEye
    | RightEye
    | BottomMiddle
    | MiddleMiddle
    | TopMiddle

    static member All =
        Reflection.FSharpType.GetUnionCases(typeof<Position>)
        |> Seq.map (fun u -> Reflection.FSharpValue.MakeUnion(u, Array.empty) :?> Position)
        |> Seq.toList
The `All` member is useful when you need to access all positions at once.
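For example, a quick illustrative use of `All` to grab every position except the nose:

// Every LED position except the nose
let allButNose =
    Position.All
    |> List.filter (fun p -> p <> Nose)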
I then created a Pixel record to store the state of an LED (this name was from the Python API, to avoid conflicts with the `rpi_ws281x` type LED), and a Command union to hold each of the commands you can perform with the SnowPi:
type Pixel = {
    Position: Position
    Color : Color
}

type Command =
    | SetLed of Pixel
    | SetLeds of Pixel list
    | Display
    | SetAndDisplayLeds of Pixel list
    | Sleep of int
    | Clear
Some of the Commands (`SetLed` vs `SetLeds`, and `SetAndDisplayLeds` vs `SetLeds; Display`) are there for convenience when constructing commands.
With these types I could now model a basic program:
let redNose =
    { Position = Nose
      Color = Color.Red }

let greenEyeL =
    { Position = LeftEye
      Color = Color.LimeGreen }

// etc. Rest hidden for brevity

let simpleProgram = [
    SetLeds [ redNose; greenEyeL; greenEyeR ]
    Display
    Sleep 1000
    SetLeds [ redNose; greenEyeL; greenEyeR; topMiddle ]
    Display
    Sleep 1000
    SetLeds [ redNose; greenEyeL; greenEyeR; topMiddle; midMiddle; ]
    Display
    Sleep 1000
    SetLeds [ redNose; greenEyeL; greenEyeR; topMiddle; midMiddle; bottomMiddle; ]
    Display
    Sleep 1000
]
This is an F# List with 12 elements, each one corresponding to a Command to be run by something.
It is quite easy to read what will happen, and I’ve given each of the Pixel values a nice name for reuse.
At the moment nothing happens until the program is executed:
The `execute` function takes a list of commands, then examines the config to determine which interface to execute it on. Both Real and Mock versions of `execute` have the same signature, so I can create a list of each of those functions and iterate through each one, calling it with the `cmds` argument.
let execute config cmds name =
    [
        if config.UseSnowpi then
            Real.execute
        if config.UseMock then
            Mock.execute
    ] // (Command list -> Unit) list
    |> List.iter (fun f ->
        Colorful.Console.WriteLine((sprintf "Executing: %s" name), Color.White)
        f cmds)
The `config` argument is partially applied so you don’t have to pass it every time:
let config = createConfigFromArgs argv
let execute = execute config

// I would have used `nameof` but Ionide doesn't support it at time of writing.
execute simpleProgram "simpleProgram"
The “Mock” draws a Snowman on the console, then does a write to each of the “Pixels” (in this case the cursor is set to the correct X and Y position for each `[ ]`) in the correct colour, using the Colorful.Console library to help.
[<Literal>]
let Snowman = """
     ###############
      #############
       ###########
        #########
     #################
        /         \
       / [ ]   [ ] \
       |           |
        \   [ ]   /
         \       /
        /         \
       /    [ ]    \
      /  [ ]   [ ]  \
     /      [ ]      \
     |  [ ]     [ ]  |
      \     [ ]     /
       \[ ]     [ ]/
        \_________/
"""
The implementation is quite imperative, as I needed to match the behaviour of the Native library in “Real”.
The `SetLed` and `SetLeds` commands push a `Pixel` into a `ResizeArray<Pixel>` (`System.Collections.Generic.List<Pixel>`), and then a `Display` command triggers a render, which iterates over each item in the collection, draws the appropriate “X” on the Snowman in the desired colour, and then clears the list ready for the next render.
let private drawLed led =
    Console.SetCursorPosition (mapPosToConsole led.Position)
    Console.Write('X', led.Color)

let private render () =
    try
        Seq.iter drawLed toRender
    finally
        Console.SetCursorPosition originalPos
This is one of the things I really like about F#: it is a Functional First language, but I can drop into imperative code whenever I need to. I’ll come back to this point again later.
Using `dotnet watch run` I can now write and test a program really quickly.
Implementing the “real” SnowPi turned out to be trivial, albeit imperative.
Just following the examples from the GitHub repo of rpi-ws281x-csharp in C# and porting them to F# was enough to get me going with what I needed.
For example, the following snippet is nearly the full implementation:
open rpi_ws281x
open System.Drawing

let settings = Settings.CreateDefaultSettings();

let controller =
    settings.AddController(
        controllerType = ControllerType.PWM0,
        ledCount = NumberOfLeds,
        stripType = StripType.WS2811_STRIP_GRB,
        brightness = 255uy,
        invert = false)

let rpi = new WS281x(settings)

//Call once at the start
let setup() =
    controller.Reset();

//Call once at the end
let teardown() =
    rpi.Dispose()

let private setLeds pixels =
    let toLedTuple pixel =
        (posToLedNumber pixel.Position, pixel.Color)
    pixels
    |> List.map toLedTuple
    |> List.iter controller.SetLED

let private render() =
    rpi.Render()
The above snippet gives most of the functions you need to execute the commands against:
let rec private executeCmd cmd =
    match cmd with
    | SetLed p -> setLeds [p]
    | SetLeds ps -> setLeds ps
    | Display -> render ()
    | SetAndDisplayLeds ps ->
        executeCmd (SetLeds ps)
        executeCmd Display
    | Sleep ms -> System.Threading.Thread.Sleep(ms)
    | Clear -> clear ()
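Two helpers referenced above, `posToLedNumber` and `clear`, aren’t shown in these snippets. As a rough sketch of their shape - the LED indices below are illustrative only, the real ones depend on how the SnowPi is wired:

// Hypothetical Position -> LED index mapping; the actual order depends on the hardware
let posToLedNumber pos =
    match pos with
    | BottomLeft -> 0
    | MiddleLeft -> 1
    | TopLeft -> 2
    | LeftEye -> 3
    | Nose -> 4
    | RightEye -> 5
    | TopRight -> 6
    | MiddleRight -> 7
    | BottomRight -> 8
    | TopMiddle -> 9
    | MiddleMiddle -> 10
    | BottomMiddle -> 11

// Turn everything off by setting all positions to black and rendering
let private clear () =
    Position.All
    |> List.map (fun p -> { Position = p; Color = Color.Black })
    |> setLeds
    render ()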
Just to illustrate composing a few programs, I’ll post two more: one simple traffic light sequence I created, and one I copied from the demo app in the Python repository.
This displays the traditional British traffic light sequence. First, I create lists for each of the pixels and their associated colours (`createPixels` is a simple helper function).

By appending the red and amber lists together, I can combine both red and amber pixels into a new list that will display red and amber at the same time.
let red =
    [ LeftEye; RightEye; Nose]
    |> createPixels Color.Red

let amber =
    [ TopLeft; TopMiddle; TopRight; MiddleMiddle ]
    |> createPixels Color.Yellow

let green =
    [ MiddleLeft; BottomLeft; BottomMiddle; MiddleRight; BottomRight ]
    |> createPixels Color.LimeGreen

let redAmber =
    List.append red amber

let trafficLights = [
    Clear
    SetAndDisplayLeds green
    Sleep 3000
    Clear
    SetAndDisplayLeds amber
    Sleep 1000
    Clear
    SetAndDisplayLeds red
    Sleep 3000
    Clear
    SetAndDisplayLeds redAmber
    Sleep 1000
    Clear
    SetAndDisplayLeds green
    Sleep 1000
]
The overall program is just a set of commands to first clear then set the Leds and Display them at the same time, then sleep for a prescribed duration, before moving onto the next one.
This program is ported directly from the Python sample with a slight F# twist:
let colorWipe col =
    Position.All
    |> List.sortBy posToLedNumber
    |> List.collect (
        fun pos ->
            [ SetLed { Position = pos; Color = col }
              Display
              Sleep 50 ])

let colorWipeProgram = [
    for _ in [1..5] do
        for col in [ Color.Red; Color.Green; Color.Blue; ] do
            yield! colorWipe col
]
The `colorWipe` function sets each LED in turn to a specified colour, displays it, waits 50ms, and moves onto the next one. `List.collect` is used to flatten the list of lists of commands into just a list of commands.

The `colorWipeProgram` repeats this 5 times, but each time uses a different colour in the wipe. Whilst it may look imperative, it is using list comprehensions and is still just building commands to execute later.
The entire project is on GitHub here, if you want to have a look at the full source code and maybe even get a SnowPi and try it out.
The project started out fully imperative, and proved quite hard to implement correctly, especially as I wrote the mock first and then implemented the real SnowPi. The mock was written with different semantics to the real SnowPi interface, and had to be rewritten a few times.
Once I moved to using Commands and got the right set of commands, I didn’t have to worry about refactoring the programs as I tweaked implementation details.
The building of programs from commands is purely functional and referentially transparent. You can see what a program will do before you even run it. This allowed me to use functional principles building up the programs, despite both implementations being rather imperative and side-effect driven.
Going further, if I were to write tests for this, the important part would be the programs, which I could assert were formed correctly, without ever having to render them.
This post is part of the F# Advent Calendar 2019. Many thanks to Sergey Tihon for organizing these.
Last year I wrote an app for Santa to keep track of his list of presents to buy for the nice children of the world.
Sadly, the development team didn’t do proper research into Santa’s requirements; they couldn’t be bothered with a trek to the North Pole and just sat at home watching “The Santa Clause” and then reckoned they knew it all. Luckily no harm came to Christmas 2018.
Good news is, Santa’s been in touch and the additional requirements for this year are:
This year I’m going to walk through how you can solve Santa’s problem using something I’ve recently begun playing with - FParsec.
FParsec is a parser combinator library for F#. I’d describe it as: a library that lets you write a parser by combining functions.
This is only my second go at using it, my first was to solve Mike Hadlow’s “Journeys” coding challenge. So this might not be the most idiomatic way to write a parser.
We’ll assume that Santa has bought some off the shelf OCR software and has scanned in some Christmas lists into a text file.
Alice: Nice
- Bike
- Socks * 2
Bobby: Naughty
- Coal
Claire:Nice
-Hat
- Gloves * 2
- Book
Dave : Naughty
- Nothing
As you can see the OCR software hasn’t done too well with the whitespace. We need a parser that is able to parse this into some nice F# records and handle the lack of perfect structure.
When writing solutions in F# I like to model the domain first:
module Domain =
    type Behaviour = Naughty | Nice

    type Gift = {
        Gift: string
        Quantity: int
    }

    type Child = {
        Name: string
        Behaviour: Behaviour
        Gifts: Gift list
    }
First the `Behaviour` is modelled as a discriminated union: either `Naughty` or `Nice`. A record for the `Gift` holds the name of a gift and the quantity. The `Child` record models the name of the child, their behaviour and a list of gifts they are getting.

The overall output of successfully parsing the text will be a list of `Child` records.
Initially I thought it would be a clever idea to parse the text directly into the domain model. That didn’t work out so, instead I defined my own AST to parse into, then later map that into the domain model.
type Line =
    | Child of string * Domain.Behaviour
    | QuantifiedGift of string * int
    | SingleGift of string
A `Child` line represents a child and their `Behaviour` this year. A `QuantifiedGift` represents a gift that was specified with a quantity (e.g. “Bike * 2”) and a `SingleGift` represents a gift without a quantity.
Modelling this way avoids putting domain logic into your parser - for example, what is the quantity of a single gift? It might seem trivial, but the less the parser knows about your domain the easier it is to create.
Before we get into the actual parsing of the lines, there’s a helper I added called `wsAround`:
open FParsec

let wsAround c =
    spaces >>. skipChar c >>. spaces
This is a function that creates a parser based on a single character `c` and allows the character `c` to be surrounded by whitespace (the `spaces` function). The `skipChar` function says that I don’t care about parsing the value of `c`, just that `c` has to be there. I’ll go into the `>>.` later on, but it is one of FParsec’s custom operators for combining parsers.

So `wsAround ':'` lets me parse `:` with potential whitespace either side of it.
It can be used as part of parsing any of the following:

a : b
a:b
a: b
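A quick illustrative check in F# Interactive (result formatting approximate):

run (wsAround ':') " : "   // Success: ()
run (wsAround ':') ":"     // Success: ()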
And as the examples above show, there are a few places where we don’t care about whitespace either side of a separator:
- the `:` separating the name and behaviour
- the `-` that precedes either type of gift
- the `*` for quantified gifts

A child line is defined as “a name and behaviour separated by a `:`”.
For example: Dave : Nice
And as stated above, there can be any amount (or none) of whitespace around the `:`.
The `pName` function defines how to parse a name:
let pName =
    let endOfName = wsAround ':'
    many1CharsTill anyChar endOfName |>> string
`many1CharsTill` is a parser that runs two other parsers. The first argument is the parser it will look for “many chars” from, the second argument is the parser that tells it when to stop. Here it parses any character using `anyChar` until it reaches the `endOfName` parser, which is a function that looks for `:` with whitespace around it. The result of the parser is then converted into a string using the `|>>` operator.
The `pBehaviour` function parses naughty or nice into the discriminated union:
let pBehaviour =
    (pstringCI "nice" >>% Domain.Nice)
    <|>
    (pstringCI "naughty" >>% Domain.Naughty)
This defines 2 parsers, one for each case, and uses the `<|>` operator to choose between them. `pstringCI "nice"` is looking to parse the string `nice` case-insensitively, and then the `>>%` operator discards the parsed string and just returns `Domain.Nice`.
These 2 functions are combined to create the `pChild` function that can parse the full line of text into a `Child` line.
let pChild =
    let pName = //...
    let pBehaviour = //...
    pName .>>. pBehaviour |>> Child
`pName` and `pBehaviour` are combined with the `.>>.` operator to create a tuple of each parser’s result, then the result of that is passed to the `Child` line constructor by the `|>>` operator.
Both gifts make use of the `startOfGiftName` parser function:
let startOfGiftName = wsAround '-'
A single gift is parsed with:
let pSingleGift =
    let allTillEOL = manyChars (noneOf "\n")
    startOfGiftName >>. allTillEOL |>> SingleGift
The `allTillEOL` function was taken from this StackOverflow answer and parses everything up to the end of a line. This is combined with `startOfGiftName` using the `>>.` operator, which is similar to the `.>>.` operator, but in this case I only want the result from the right-hand side parser (the `allTillEOL`); this is then passed into the `SingleGift` union case constructor.
A quantified gift is parsed with:
let pQuantifiedGift =
    let endOfQty = wsAround '*'
    let pGiftName =
        startOfGiftName >>. manyCharsTill anyChar endOfQty
    pGiftName .>>. pint32 |>> QuantifiedGift
This uses `endOfQty` and `pGiftName` combined in a similar way to the `pName` in `pChild`: parsing all characters up until the `*` and only keeping the name part.
`pGiftName` is combined with `pint32` using the `.>>.` operator to get the result of both parsers in a tuple, which is fed into the `QuantifiedGift` union case.
The top level parser is `pLine`, which parses each line of the text into one of the cases from the `Line` discriminated union.
let pLine =
    attempt pQuantifiedGift
    <|>
    attempt pSingleGift
    <|>
    pChild
This uses the `<|>` operator that was used for the `Behaviour`, but it also requires the `attempt` function before the first two parsers. This is because these parsers consume some of the input stream as they execute. Without the `attempt` it would start on a quantified gift, then realise it is actually a single gift and have no way to go into the next choice. Using `attempt` allows the parser to “rewind” when it has a problem - like a quantified gift missing a `*`.
If you want to see how this works, you need to decorate your parser functions with the `<!>` operator that is defined here. This shows the steps the parser takes and allows you to see that it has “gone the wrong way”.
Finally a helper function called `parseInput` is used to parse the entire file:
let parseInput input =
    run (sepBy pLine newline) input
This calls the `run` function, passing in a `sepBy` parser for each `pLine` separated by a `newline`. This way each line is processed on its own.
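As a rough illustration (output shape approximate), running it over a couple of lines from the scanned file gives something like:

parseInput "Alice: Nice\n- Bike"
// Success: [Child ("Alice", Nice); SingleGift "Bike"]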
That is the end of the parser module.
The current output of `parseInput` is a `ParserResult<Line list, unit>`. Assuming success, there is now a list of `Line` union cases that need to be mapped into a list of `Child` from the domain.
These have separate structures:

- The `Child` record is hierarchical - it contains a list of `Gift`s.
- The `Line`s have a structure defined by the order of elements - `Gift`s follow the `Child` they relate to.

Initially I thought about using a `fold` to go through each line: if the line was a child, add a child to the head of the results; if the line was a gift, add it to the head of the list of gifts of the first child in the list. This was the code:
let folder (state: Child list) (line : Line) : Child list =
    let addGift nm qty =
        let head::tail = state
        let newHead = { head with Gifts = {Gift = nm; Quantity = qty; } :: head.Gifts; }
        newHead :: tail
    match line with
    | Child (name, behaviour) -> { Name = name; Behaviour = behaviour; Gifts = []; } :: state
    | SingleGift name -> addGift name 1
    | QuantifiedGift (name, quantity) -> addGift name quantity
This worked, but because F# lists are implemented as singly linked lists, you add to the head of the list instead of the tail. This had the annoying feature that the `Child` items were reversed in the list - not so bad, but then the list of gifts in each child was backwards too. I could have sorted both lists, but it would require recreating the results as the lists are immutable, and I wanted to keep to idiomatic F# as much as I could.
A `foldBack` on the other hand works backwards “up” the list, which meant I could get the results in the order I wanted, but there was a complication. When going forward, the first line was always a child, so I always had a child to add gifts to. Going backwards, there are just gifts until you get to a child, so you have to maintain a list of gifts until you reach a child line; then you can create a child, assign the gifts, and clear the list.
This is how I implemented it:
module Translation =
    open Domain
    open Parser

    let foldLine line state = //Line -> Child list * Gift list -> Child list * Gift list
        let cList, gList = state
        let addChild name behaviour =
            { Name = name; Behaviour = behaviour; Gifts = gList; } :: cList
        let addGift name quantity =
            { Gift = name; Quantity = quantity; } :: gList
        match line with
        | Child (name, behaviour) -> addChild name behaviour, []
        | SingleGift name -> cList, addGift name 1
        | QuantifiedGift (name, quantity) -> cList, addGift name quantity
The `state` is a tuple of lists, the first for the `Child list` (the result we want) and the second for keeping track of the gifts that are not yet assigned to children.

First this function deconstructs `state` into the child and gift lists - `cList` and `gList` respectively.
Next I’ve declared some helper functions for adding to either the `Child` or `Gift` list:

- `addChild` creates a new `Child` with the `Gifts` set to the accumulated list of Gifts (`gList`) and prepends it onto `cList`.
- `addGift` creates a new `Gift` and prepends it onto `gList`.

Then the correct function is called based on the type of Line:

- A `Child` line returns a new `Child list` with an empty `Gift list`.
- A gift line returns the existing `Child list`, with the current item added to the `Gift list`.

The overall result is a tuple of all the `Child` records correctly populated, and an empty list of `Gift` records, as the last item will be the first row and that will be a `Child`.
let mapLinesToDomain lines = //ParserResult<Line list, unit> -> Child list
    let initState = [],[]
    let mapped =
        match lines with
        | Success (lines, _, _) -> Seq.foldBack foldLine lines initState
        | Failure (err, _, _) -> failwith err
    fst mapped
Finally, the output of `parseInput` can be piped into `mapLinesToDomain` to get the `Child list` we need:
let childList =
    Parser.parseInput input //Input is just a string from File.ReadAllText
    |> Translation.mapLinesToDomain
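From there, printing the results is straightforward; a small sketch of my own to illustrate:

// Print each child, their behaviour, and their gifts
childList
|> List.iter (fun child ->
    printfn "%s (%A)" child.Name child.Behaviour
    child.Gifts |> List.iter (fun g -> printfn "  %d x %s" g.Quantity g.Gift))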
I really like how simple parsers can be once written, but it takes some time to get used to how they work and how you need to separate the parsing and domain logic.
My main pain points were:
- Not using `attempt`: I just assumed `<|>` worked like pattern matching; it turns out it doesn’t.

I made heavy use of the F# REPL and found it helped massively as I worked my way through writing each parser and then combining them together. For example, I first wrote the Behaviour parser and tested it worked correctly on just “Naughty” and “Nice”. Then I wrote a parser for the Child’s name and `:` and tested it on “Dave : Nice”, but only getting the name. Then I could write a function to combine the two together and check that the results were correct again. The whole development process was done this way: just add a bit more code, a bit more example, test in the REPL, and repeat.
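For a flavour of that workflow, the kind of session I’d run in F# Interactive looked something like this (the comments show roughly the results; exact output formatting differs):

run pBehaviour "Naughty";;   // Success: Naughty
run pName "Dave : Nice";;    // Success: "Dave"
run pChild "Dave : Nice";;   // Success: Child ("Dave", Nice)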
The whole code for this is on GitHub - it is only 115 lines long, including code to print the list of Children back out so I could see the results.