Concurrency in .NET
Modern patterns of concurrent and parallel programming
Riccardo Terrell
  • MEAP began December 2016
  • Publication in April 2018 (estimated)
  • ISBN 9781617292996
  • 500 pages (estimated)
  • printed in black & white

The author is clearly well educated in the topic and does a great job of explaining the material.

Jeremy Lange

The multicore processor revolution has begun. Parallel computation is powerful and increasingly accessible, and multicore computation is incorporated into all sorts of applications, including finance software, video games, web applications, machine learning, and market analysis. To get the best performance, your application has to partition its processing to take full advantage of multicore processors. Functional languages help developers support concurrency by encouraging immutable data structures that can be passed between threads without worrying about shared state, all while avoiding side effects.
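That last claim is the book's central premise. As a minimal sketch (the `Quote` type and its members are illustrative inventions, not code from the book), an immutable object can be handed to any number of tasks without locks, because no thread can ever observe it changing:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// An immutable value: all state is set once in the constructor,
// so instances can be shared freely between threads without locks.
public sealed class Quote
{
    public string Symbol { get; }
    public decimal Price { get; }

    public Quote(string symbol, decimal price)
    {
        Symbol = symbol;
        Price = price;
    }

    // "Mutation" returns a new instance instead of changing shared state.
    public Quote WithPrice(decimal price) => new Quote(Symbol, price);
}

public static class Program
{
    public static void Main()
    {
        var quote = new Quote("MSFT", 100m);

        // Several tasks read the same instance concurrently; because it
        // can never change, no synchronization is needed. Each task
        // produces its own new value rather than writing to shared state.
        var results = Task.WhenAll(
            Enumerable.Range(1, 4).Select(i =>
                Task.Run(() => quote.WithPrice(quote.Price + i)))).Result;

        foreach (var q in results)
            Console.WriteLine($"{q.Symbol}: {q.Price}");

        // The original is untouched no matter what the tasks did.
        Console.WriteLine($"original: {quote.Price}");
    }
}
```

The same idea scales up to the book's functional data structures and agents: threads exchange immutable values, so there is no shared mutable state to protect.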

Concurrency in .NET teaches you how to build concurrent and scalable programs in .NET using the functional paradigm. This intermediate-level guide is aimed at developers, architects, and passionate computer programmers who want to write faster, more effective code by adopting a declarative and pain-free programming style. You'll start by learning the foundations of concurrency and the key functional techniques and paradigms used in the rest of the book. Then you'll dive into concurrent and parallel programming designs, emphasizing the functional paradigm with both theory and practice, backed by plenty of code samples. The third part of the book walks through a real "cradle to grave" application, applying the techniques and skills learned throughout the book.

Table of Contents

Part 1: Functional Concurrent Programming Concepts

1. Functional Concurrency Foundations

1.1. What you will learn from this book

1.2. Let’s start with terminology

1.2.1. Sequential programming performs one task at a time

1.2.2. Parallel programming executes multiple tasks simultaneously

1.2.3. Multitasking performs multiple tasks concurrently over time

1.2.4. Multithreading for performance improvement

1.3. Why the need for concurrency?

1.3.1. Present and future of concurrent programming

1.4. The pitfalls of concurrent programming

1.4.1. Concurrency hazards

1.4.2. The sharing of state evolution

1.4.3. A simple real-world example: parallel quicksort

1.4.4. Benchmarking in F#

1.5. Why choose functional programming for concurrency

1.5.1. Benefits of functional programming

1.6. Embracing the functional paradigm

1.7. Why use F# and C# for functional concurrent programming

1.8. Summary

2. Functional Programming Techniques for Concurrency

2.1. Using function composition to solve complex problems with simple solutions

2.1.1. Function composition in C#

2.1.2. Function composition in F#

2.2. Closure to simplify functional thinking

2.2.1. Captured variables in closures with lambda expressions

2.2.2. Closure in a multithreading environment

2.2.3. Memoization-caching technique for program speedup

2.2.4. Memoize in action for a fast Web-Crawler

2.2.5. Lazy memoization for better performance

2.2.6. Gotchas for function memoization

2.3. Effective concurrent speculation to amortize the cost of expensive computations

2.3.1. Precomputation with natural functional support

2.3.2. Let the best computation win

2.4. Being lazy is a good thing

2.4.1. Strict programming languages for better understanding concurrent behaviors

2.4.2. Lazy caching technique and thread-safe singleton pattern

2.4.3. Lazy support in F#

2.4.4. Lazy and Task, a powerful combination

2.5. Summary

3. Functional Data Structures and Immutability

3.1. Real-world example—Hunt the thread-unsafe object

3.1.1. .NET immutable collections: a safe solution

3.1.2. .NET concurrent collections: a faster solution

3.1.3. The agent message-passing pattern: a faster and better solution

3.2. Functional data structures can be shared safely among threads

3.3. Immutability for a change

3.3.1. Functional data structure for data parallelism

3.3.2. Performance implications of using immutability

3.3.3. Immutability in C#

3.3.4. Immutability in F#

3.3.5. Functional lists: linking cells in a chain

3.3.6. Building a persistent data structure: an immutable binary tree

3.4. Recursive functions: a natural way to iterate

3.4.1. The tail of a correct recursive function: tail-call optimization

3.4.2. Continuation passing style (CPS) to optimize recursive functions

3.5. Summary

Part 2: How to approach the different parts of a concurrent program

4. The Basics of Processing Big Data: Data Parallelism Part 1

4.1. What is data parallelism

4.1.1. Data and task parallelism

4.1.2. The “embarrassingly parallel” concept

4.1.3. Data parallelism support in .NET

4.2. The Fork/Join pattern: Parallel Mandelbrot

4.2.1. When the garbage collector is the bottleneck: struct vs. class objects

4.2.2. The downside of parallel loops

4.3. How to measure performance speed

4.3.1. Amdahl’s Law defines the limit of performance improvement

4.3.2. Gustafson’s Law - a step further to measure performance improvement

4.3.3. The limitations of parallel loops: the sum of prime numbers

4.3.4. What can possibly go wrong with a simple loop?

4.3.5. The declarative parallel programming model

4.4. Summary

5. PLINQ and Map-Reduce: Data Parallelism Part 2

5.1. A short introduction to PLINQ

5.1.1. How is PLINQ more functional?

5.1.2. PLINQ and pure functions: the parallel words counter

5.1.3. Avoiding side effects with pure functions

5.1.4. Isolate and control side effects: refactoring the parallel words counter

5.2. Aggregating and reducing data in parallel

5.2.1. Deforesting: one of many advantages to folding

5.2.2. Fold in PLINQ: the Aggregate functions

5.2.3. Implementing a parallel Reduce function for PLINQ

5.2.4. Parallel list comprehension in F#: PSeq

5.2.5. Parallel array in F#

5.3. Parallel MapReduce pattern

5.3.1. The Map and Reduce functions

5.4. Summary

6. Real-Time Event Streams: Functional Reactive Programming

6.1. What is Reactive programming: Big Event processing

6.2. .NET tools for Reactive programming

6.2.1. Event combinators—a better solution

6.2.2. .NET interoperability with F# combinators

6.3. Reactive programming in .NET: Reactive Extensions (Rx)

6.3.1. From LINQ/PLINQ to Reactive Extensions

6.3.2. IObservable—the dual of IEnumerable

6.3.3. Reactive Extensions in action

6.3.4. Real-time streaming with Reactive Extensions

6.3.5. From events to F# observables

6.4. Taming the event stream—Twitter emotion analysis using Reactive Extensions

6.4.1. SelectMany—the monadic bind operator

6.5. An Rx publisher-subscriber

6.5.1. The Subject type for a powerful publisher-subscriber hub

6.5.2. Reactive Extensions in relation to concurrency

6.5.3. Implementing a reusable Rx publisher-subscriber

6.5.4. Analyzing tweet emotions using an Rx Pub-Sub class

6.5.5. Observers in action

6.5.6. The convenient F# object expression

6.6. Summary

7. Task-Based Functional Parallelism

7.1. A short introduction to task parallelism

7.1.1. Why task parallelism and functional programming?

7.1.2. Task parallelism support in .NET

7.2. The .NET Task Parallel Library (TPL)

7.2.1. Running operations in parallel with TPL Parallel.Invoke

7.3. The problem of void in C#

7.3.1. The solution of void in C# - the Unit type

7.4. Continuation-passing style – a functional control flow

7.4.1. Why exploit CPS?

7.4.2. Waiting for a task to complete—the continuation model

7.5. Strategies for composing task operations

7.5.1. Mathematical pattern for better composition

7.5.2. Guidelines for using tasks

7.6. The parallel functional pipeline pattern

7.7. Summary

8. Task Asynchronicity for the win

8.1. The Asynchronous Programming Model (APM)

8.1.1. The value of asynchronous programming

8.1.2. Scalability and Asynchronous Programming

8.1.3. CPU-bound and I/O-bound operations

8.2. Unbounded parallelism with asynchronous programming

8.3. Asynchronous support in .NET

8.3.1. Asynchronous programming breaks the code structure

8.3.2. Event-based asynchronous programming

8.4. The C# Task-based Asynchronous Programming model (async/await)

8.4.1. Anonymous asynchronous lambdas

8.4.2. Task<T> is a monadic container

8.5. TAP Asynchronous processing: a case study

8.5.1. Asynchronous cancellation

8.5.2. Task-based asynchronous composition with the monadic Bind operator

8.5.3. Deferring asynchronous computation enables composition

8.5.4. Retry if something goes wrong

8.5.5. Handling errors in asynchronous operations

8.5.6. Asynchronous parallel processing of historical stock market data

8.5.7. Asynchronous stock market parallel process as tasks complete

8.6. Summary

9. Asynchronous functional programming in F#

9.1. Asynchronous functional aspect

9.2. What is the F# Asynchronous-Workflow

9.2.1. The continuation passing style in computation expressions

9.2.2. The asynchronous workflow in action—Azure Blob storage parallel operations

9.3. Asynchronous Computation Expressions

9.3.1. Difference between computation expressions and monads

9.3.2. AsyncRetry—building your own computation expression

9.3.3. Extending the asynchronous workflow

9.3.4. Mapping asynchronous operation – The Functor

9.3.5. Parallelize asynchronous workflows—Async.Parallel

9.3.6. Asynchronous workflow cancellation support

9.3.7. Taming parallel asynchronous operations

9.4. Summary

10. Functional combinators for fluent concurrent programming

10.1. The execution flow is not always on the “happy path” — Error handling

10.1.1. The problem of error handling in imperative programming

10.1.2. Errors Combinators – Retry, Fallback, Otherwise, and Task.Catch in C#

10.2. Error handling in functional programming – exceptions for flow control

10.3. Handling errors with the Task<Option<T>> type in C#

10.4. The AsyncOption type in F# — Combining the Async and Option types

10.5. Idiomatic F# functional asynchronous error handling – Async.Catch

10.6. Preserving exception semantic with the Result type

10.7. The composite Task<Result<T>> type – taming exceptions in asynchronous operations

10.8. Modelling error handling in F# with the Async and Result types

10.9. Extending the F# AsyncResult type with monadic bind operators

10.9.1. Abstracting operations with functional combinators

10.9.2. Functional combinators in a nutshell

10.10. The Task Parallel Library built-in asynchronous combinators

10.11. Exploiting the Task.WhenAny combinator for redundancy and interleaving

10.12. Exploiting the Task.WhenAll combinator for asynchronous-ForEach

10.13. Mathematical pattern review – Brief analysis of what you have seen so far

10.14. Monoids for data parallelism

10.15. Functors to map elevated types

10.16. Monads to compose without side effects

10.16.1. The ultimate parallel composition – Applicative Functor

10.17. Extending the F# async workflow with applicative functor operators

10.18. Applicative functor semantic in F# with infix operators

10.19. Exploiting heterogeneous parallel computation with Applicative

10.20. Composing and executing heterogeneous parallel computations

10.21. Controlling the flow with conditional asynchronous combinators

10.22. Asynchronous combinators in action

10.23. Summary

11. Applying reactive programming everywhere with Agents

11.1. What is reactive programming, and how is it useful?

11.2. The asynchronous message-passing programming model

11.2.1. Relation with message passing and immutability

11.2.2. Natural isolation

11.3. What is an agent?

11.3.1. The components of an agent

11.3.2. What an agent can do

11.3.3. The share-nothing approach for lock-free concurrent programming

11.3.4. How is agent-based programming functional?

11.3.5. Agent is object-oriented

11.4. The F# agent – MailboxProcessor

11.4.1. The mailbox asynchronous recursive loop

11.5. Avoiding database bottlenecks with F# MailboxProcessor

11.5.1. The MailboxProcessor message type – discriminated unions

11.5.2. The MailboxProcessor two-way communication

11.5.3. Consuming the AgentSQL from C#

11.5.4. Parallelize the workflow with a group-coordination of agents

11.5.5. How to handle errors with F# MailboxProcessor

11.5.6. Stop, cancel, and dispose MailboxProcessor – CancellationToken

11.5.7. Distributing the work with MailboxProcessor

11.5.8. Caching operations with an agent

11.5.9. Reporting results from a MailboxProcessor

11.5.10. Using the ThreadPool to report events from MailboxProcessor

11.6. The F# MailboxProcessor – a light agent – 10,000 agents for a game of life

11.7. Summary

12. Parallel workflow and agents programming with TPL DataFlow

12.1. The power of TPL DataFlow

12.2. Designed to compose - TPL DataFlow blocks

12.2.1. Using the BufferBlock<TInput> as a FIFO buffer

12.2.2. Transforming your Data with the TransformBlock<TInput, TOutput>

12.2.3. Completing the work with ActionBlock<TInput>

12.2.4. Linking DataFlow blocks

12.3. Implementing a sophisticated producer/consumer with TDF

12.3.1. A multiple-producer/single-consumer pattern – TPL DataFlow

12.3.2. A single-producer/multiple-consumers pattern – TPL DataFlow

12.4. Enabling agent model in C# using TPL DataFlow

12.4.1. Agent fold-over state and messages—Aggregate

12.4.2. Agents interaction—A parallel word counter

12.5. A parallel workflow to compress and encrypt a large stream

12.5.1. Context: the problem of processing a large Stream of data

12.5.2. Ensuring the order integrity of a stream of messages

12.5.3. Linking, propagating, and completion

12.5.4. Rules for building a DataFlow workflow

12.5.5. Reactive Extensions (Rx) and TPL DataFlow meshes

12.6. Summary

Part 3: Building your toolbox for success

13. Recipes and design patterns for successful concurrent programming

13.1. Asynchronous Object-Pool for recycling objects to reduce memory consumption

13.1.1. Solution: asynchronously recycling a pool of objects

13.2. Implementing a custom parallel fork-join operator

13.2.1. Solution: leveraging the TPL DataFlow to compose a pipeline of independent and concurrent steps forming the fork/join pattern

13.3. Parallelizing Tasks with dependencies - Design your code to optimize performance

13.3.1. Solution: Implementing a dependencies graph of tasks using the F# MailboxProcessor and exposing the methods as standard Task to be consumed also by C#

13.4. Gate for coordinating concurrent I/O operations sharing resources: One write, multiple reads

13.4.1. Solution: Leveraging agent programming model to access and apply multiple read/write operations to shared thread-safe resources

13.5. Thread-Safe Random number generator

13.5.1. Solution: Using the Thread-Local object to guarantee a unique and isolated object instance for each thread accessor

13.6. Polymorphic Event Aggregator

13.6.1. Solution: Implementing a polymorphic publisher-subscriber pattern using the Reactive Extensions Subject type

13.7. Custom Rx Scheduler to control the degree of parallelism

13.7.1. Solution: Implementing a Reactive Extensions scheduler with multiple concurrent agents to control the degree of parallelism

13.8. Implementing a concurrent Reactive Scalable Server/Client

13.8.1. Solution: Combining Reactive Extensions and asynchronous programming for building a reactive and scalable Server/Client bidirectional communication

13.9. Implementing a reusable custom High Performant Parallel Filter-Map operator

13.9.1. Solution: Combining filter and map parallel operations to reduce memory pressure, eliminating unnecessary temporary data allocation

13.10. Implementing Communicating Sequential Processes (CSP) with agents for a non-blocking synchronous message-passing model

13.10.1. Solution: Coordinating and balancing the payload between read and write operations using the agent programming model

13.11. Taming and coordinating concurrent jobs using agent programming model

13.11.1. Solution: Implementing an agent that coordinates and concurrently runs a set of jobs with a configured degree of parallelism.

13.12. Composing monadic functions with the Kleisli operator

13.12.1. Solution: Effortlessly combining asynchronous operations using the Kleisli composition operator

13.13. Summary

14. How to build a scalable and responsive server/client (mobile) application using concurrent functional programming

14.1. Functional programming on the server in the real world

14.2. How to design a successful performant application

14.2.1. The secret sauce: ACD

14.2.2. A different asynchronous pattern—queuing work for later execution

14.3. Choosing the right concurrent programming model

14.3.1. Real-time communication with SignalR

14.4. Real-time trading – stock market high-level architecture

14.5. Essential elements for the stock market application

14.6. Let’s code the stock market trading application

14.7. Benchmark to measure the scalability of the Stock-Ticker application

14.8. Summary


Appendix A: Functional programming

A.1. What is Functional Programming?

A.1.1. The benefits of functional programming

A.1.2. The tenets of functional programming

A.1.3. The clash of program paradigms—from imperative to object-oriented to functional programming

A.1.4. Higher-order functions for raising the abstraction

A.1.5. Higher-order functions and lambda expressions for code reusability

A.1.6. Lambda expressions and anonymous functions

A.1.7. Currying

A.1.8. Partially applied functions

A.1.9. Partial application benefits

A.1.10. The power of partial function application and currying in C#

Appendix B: F# overview

B.1. The “let” Binding

B.2. Understanding function signatures in F#

B.3. Create mutable types – mutable and ref

B.4. Functions as first class types

B.5. Composition - Pipe and Composition operators

B.6. Delegates

B.7. Comments

B.8. Open statements

B.9. Basic data types

B.10. Special String definition

B.11. Tuple

B.12. Record-Types

B.13. Discriminated Unions

B.14. Pattern matching

B.15. Active patterns

B.16. Collections

B.17. Arrays

B.18. Sequences (seq)

B.19. Lists

B.20. Sets

B.21. Maps

B.22. Loops

B.23. Class and inheritance

B.24. Abstract classes and inheritance

B.25. Interfaces

B.26. Object expressions

B.27. Casting

B.28. Units of Measure

B.29. Event Module API reference

B.30. Resources

Appendix C: Interoperability between F# asynchronous workflow and .NET Task

What's inside

  • Code examples in both C# and F#
  • Building high-performance concurrent systems
  • Integrating concurrent programming abstractions
  • Concurrent patterns such as fork/join, producer-consumer, Map-Reduce and pipeline
  • Implementing a real-time event stream processing
  • Seamlessly accelerate sequential programs by using data-parallel collections
  • Creating a data-access layer to handle massive concurrent requests

About the reader

This book is for readers with solid knowledge of a mainstream programming language, preferably C# or F#.

About the author

Riccardo Terrell is a seasoned .NET software engineer, senior software architect, and Microsoft MVP who is passionate about functional programming. He is well known and actively involved in the functional programming community, including .NET meetups and conferences, and is the organizer of the Washington DC F# User Group.

Manning Early Access Program (MEAP) Read chapters as they are written, get the finished eBook as soon as it’s ready, and receive the pBook long before it's in bookstores.
MEAP combo $59.99 pBook + eBook + liveBook
MEAP eBook $47.99 pdf + ePub + kindle + liveBook

FREE domestic shipping on three or more pBooks

Interesting look at the options for achieving structured concurrency in the .NET ecosystem. Accessible to those with minimal previous exposure to FP and category theory.

Andy Kirsch