From 280282ca469e5e15da6e392f45d2789166d059b6 Mon Sep 17 00:00:00 2001
From: Heather Miller
Date: Sun, 8 Jan 2017 23:59:26 +0100
Subject: Fixing up futures semantics section

---
 chapter/2/futures.md | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/chapter/2/futures.md b/chapter/2/futures.md
index 84d7254..64b43bb 100644
--- a/chapter/2/futures.md
+++ b/chapter/2/futures.md
@@ -146,25 +146,21 @@ Perhaps most famous in recent memory is that of promises in JavaScript. In 2007,
 
 ## Semantics of Execution
 
-Over the years promises and futures have been implemented in different programming languages. Different languages chose to implement futures/promises in a different way. In this section, we try to introduce some different ways in which futures and promises actually get executed and resolved underneath their APIs.
+As architectures and runtimes have developed and changed over the years, so too have the techniques for implementing futures/promises such that the abstraction translates into efficient utilization of system resources. In this section, we'll cover the three primary execution models upon which futures/promises are built in popular languages and libraries. That is, we'll see the different ways in which futures and promises actually get executed and resolved underneath their APIs.
 
 ### Thread Pools
 
-Thread pools are a group of ready, idle threads which can be given work. They help with the overhead of worker creation, which can add up in a long running process. The actual implementation may vary everywhere, but what differentiates thread pools is the number of threads it uses. It can either be fixed, or dynamic. Advantage of having a fixed thread pool is that it degrades gracefully : the amount of load a system can handle is fixed, and using fixed thread pool, we can effectively limit the amount of load it is put under. Granularity of a thread pool is the number of threads it instantiates.
+A thread pool is an abstraction that gives users access to a group of ready, idle threads which can be given work. Thread pool implementations take care of worker creation, management, and scheduling, which can easily become tricky and costly if not handled carefully. Thread pools come in many different flavors, with many different techniques for scheduling and executing tasks, and with either a fixed number of threads or the ability of the pool to dynamically resize itself depending on load.
+A classic thread pool implementation is Java's `Executor`, an object that executes `Runnable` tasks. `Executor`s provide a way of abstracting out the details of how a task will actually run. These details, such as selecting a thread to run the task and deciding how the task is scheduled, are managed by the underlying implementation of the `Executor` interface.
 
-In Java executor is an object which executes the Runnable tasks. Executors provides a way of abstracting out how the details of how a task will actually run. These details, like selecting a thread to run the task, how the task is scheduled are managed by the object implementing the Executor interface. Threads are an example of a Runnable in java. Executors can be used instead of creating a thread explicitly.
+Similar to `Executor`, Scala includes an `ExecutionContext` as part of the `scala.concurrent` package. The basic intent behind Scala's `ExecutionContext` is the same as Java's `Executor`; it is responsible for efficiently executing computations concurrently without requiring the user of the pool to worry about things like scheduling. Importantly, `ExecutionContext` can be thought of as an interface; that is, it is possible to _swap in_ different underlying thread pool implementations and keep the same thread pool interface.
+While it's possible to use different thread pool implementations, Scala's default `ExecutionContext` implementation is backed by Java's `ForkJoinPool`, a thread pool implementation that features a work-stealing algorithm in which idle threads pick up tasks previously scheduled to other busy threads. The `ForkJoinPool` is a popular thread pool implementation due to its improved performance over `Executor`s, its ability to better avoid pool-induced deadlock, and its minimization of the amount of time spent switching between threads.
 
-Similar to Executor, there is an ExecutionContext as part of scala.concurrent. The basic intent behind it is same as an Executor : it is responsible for executing computations. How it does it can is opaque to the caller. It can create a new thread, use a pool of threads or run it on the same thread as the caller, although the last option is generally not recommended. Scala.concurrent package comes with an implementation of ExecutionContext by default, which is a global static thread pool.
+Scala's futures (and promises) are based on this `ExecutionContext`. While users typically use the underlying default `ExecutionContext`, which is backed by a `ForkJoinPool`, they may also elect to provide (or implement) their own `ExecutionContext` if they need a specific behavior, like blocking futures.
-
-ExecutionContext.global is an execution context backed by a ForkJoinPool. ForkJoin is a thread pool implementation designed to take advantage of a multiprocessor environment. What makes fork join unique is that it implements a type of work-stealing algorithm : idle threads pick up work from still busy threads. ForkJoinPool manages a small number of threads, usually limited to the number of processor cores available. It is possible to increase the number of threads, if all of the available threads are busy and wrapped inside a blocking call, although such situation would be highly undesirable for most of the systems. ForkJoin framework work to avoid pool-induced deadlock and minimize the amount of time spent switching between the threads.
-
-
-In Scala, Futures are generally a good framework to reason about concurrency as they can be executed in parallel, waited on, are composable, immutable once written and most importantly, are non blocking (although it is possible to have blocking futures, like Java 6). In Scala, futures (and promises) are based on ExecutionContext. Using ExecutionContext gives users flexibility to implement their own ExecutionContext if they need a specific behavior, like blocking futures. The default ForkJoin pool works well in most of the scenarios.
-
-Scala futures api expects an ExecutionContext to be passed along. This parameter is implicit, and usually ExecutionContext.global. An example :
+In Scala, every usage of a future or promise requires some kind of `ExecutionContext` to be passed along. This parameter is implicit, and is usually `ExecutionContext.global` (the default underlying `ForkJoinPool` `ExecutionContext`). For example, creating and running a basic future:
 
 ```scala
@@ -172,8 +168,9 @@ implicit val ec = ExecutionContext.global
 val f : Future[String] = Future { “hello world” }
 ```
 
-In this example, the global execution context is used to asynchronously run the created future. Taking another example,
+In this example, the global execution context is used to asynchronously run the created future. As mentioned earlier, the `ExecutionContext` parameter to the `Future` is _implicit_. That means that if the compiler finds an instance of an `ExecutionContext` in so-called _implicit scope_, it is automatically passed to the call to `Future` without the user having to explicitly pass it. In the example above, `ec` is put into implicit scope through the use of the `implicit` keyword when declaring `ec`.
+As mentioned earlier, futures and promises in Scala are _asynchronous_, which is achieved through the use of callbacks. For example:
 
 ```scala
 implicit val ec = ExecutionContext.global
 
 val f = Future {
 }
 
 f.onComplete {
@@ -191,7 +188,7 @@
 It is generally a good idea to use callbacks with Futures, as the value may not be available when you want to use it.
 
-So, how does it all work together ?
+So, how does it all work together?
 
 As we mentioned, Futures require an ExecutionContext, which is an implicit parameter to virtually all of the futures API. This ExecutionContext is used to execute the future. Scala is flexible enough to let users implement their own Execution Contexts, but let’s talk about the default ExecutionContext, which is a ForkJoinPool.
 
@@ -201,12 +198,10 @@ ForkJoinPool is ideal for many small computations that spawn off and then come b
 
 ### Event Loops
 
-Modern systems typically rely on many other systems to provide the functionality they do. There’s a file system underneath, a database system, and other web services to rely on for the information. Interaction with these components typically involves a period where we’re doing nothing but waiting for the response back. This is single largest waste of computing resources.
-
+Modern platforms and runtimes typically rely on many underlying system layers to operate. For example, there’s an underlying file system, a database system, and other web services that may be relied on by a given language implementation, library, or framework. Interaction with these components typically involves a period where we’re doing nothing but waiting for the response. This is the single largest waste of computing resources.
 
 Javascript is a single threaded asynchronous runtime. Now, conventionally async programming is generally associated with multi-threading, but we’re not allowed to create new threads in Javascript. Instead, asynchronicity in Javascript is achieved using an event-loop mechanism.
 
-
 Javascript has historically been used to interact with the DOM and user interactions in the browser, and thus an event-driven programming model was a natural fit for the language. This has scaled up surprisingly well in high throughput scenarios in NodeJS.

-- cgit v1.2.3
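To make the thread-pool discussion in the patch above concrete, here is a minimal, self-contained sketch of swapping in a user-provided `ExecutionContext`; it is not part of the chapter or of either patch, and the object name and the pool size of 2 are arbitrary choices for illustration:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

object CustomPoolSketch extends App {
  // Wrap a plain Java fixed-size pool in an ExecutionContext; the standard
  // library's ExecutionContext.fromExecutorService adapter does the bridging.
  val pool = Executors.newFixedThreadPool(2)
  implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(pool)

  // Because `ec` is in implicit scope, this future is scheduled onto the
  // custom pool rather than the default global ForkJoinPool.
  val f: Future[Int] = Future { 21 + 21 }

  f.foreach(result => println(s"computed: $result"))

  // Crude way to let the asynchronous task finish before shutting the pool
  // down; a real program would compose or await the future instead.
  Thread.sleep(1000)
  pool.shutdown()
}
```

The same code runs unchanged against the default pool if the two pool-related lines are replaced with `implicit val ec = ExecutionContext.global`, which is exactly the swappability the `ExecutionContext` interface is meant to provide.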
From 9c101595f738902f0742c094e0f39e91bc11551a Mon Sep 17 00:00:00 2001
From: Heather Miller
Date: Mon, 9 Jan 2017 00:14:13 +0100
Subject: More fixes

---
 chapter/2/futures.md | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/chapter/2/futures.md b/chapter/2/futures.md
index 64b43bb..9c738d6 100644
--- a/chapter/2/futures.md
+++ b/chapter/2/futures.md
@@ -158,7 +158,7 @@ Similar to `Executor`, Scala includes an `ExecutionContext` as part of the `scal
 While it's possible to use different thread pool implementations, Scala's default `ExecutionContext` implementation is backed by Java's `ForkJoinPool`, a thread pool implementation that features a work-stealing algorithm in which idle threads pick up tasks previously scheduled to other busy threads. The `ForkJoinPool` is a popular thread pool implementation due to its improved performance over `Executor`s, its ability to better avoid pool-induced deadlock, and its minimization of the amount of time spent switching between threads.
 
-Scala's futures (and promises) are based on this `ExecutionContext`. While users typically use the underlying default `ExecutionContext`, which is backed by a `ForkJoinPool`, they may also elect to provide (or implement) their own `ExecutionContext` if they need a specific behavior, like blocking futures.
+Scala's futures (and promises) are based on this `ExecutionContext` interface to an underlying thread pool. While users typically use the underlying default `ExecutionContext`, which is backed by a `ForkJoinPool`, they may also elect to provide (or implement) their own `ExecutionContext` if they need a specific behavior, like blocking futures.
 
 In Scala, every usage of a future or promise requires some kind of `ExecutionContext` to be passed along. This parameter is implicit, and is usually `ExecutionContext.global` (the default underlying `ForkJoinPool` `ExecutionContext`). For example, creating and running a basic future:
 
@@ -180,13 +180,12 @@ val f = Future {
 }
 
 f.onComplete {
-  case success(response) => println(response.body)
-  case Failure(t) => println(t)
+  case Success(response) => println(response)
+  case Failure(t) => println(t.getMessage())
 }
 ```
-
-It is generally a good idea to use callbacks with Futures, as the value may not be available when you want to use it.
+In this example, we first create a future `f`, and when it completes, we provide two possible expressions that can be invoked depending on whether the future was executed successfully or whether there was an error. If it was successful, we get the result of the computation, an HTTP string, and we print it. Otherwise, if an exception was thrown, we get the message string contained within the exception and print that.
 
 So, how does it all work together?

-- cgit v1.2.3
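For reference, the corrected callback from the second patch can be exercised end to end with a small self-contained sketch along the following lines; the body of the `Future` is a stand-in string, since the actual HTTP call used in the chapter's example is elided from the diff:

```scala
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Success}

object OnCompleteSketch extends App {
  implicit val ec: ExecutionContext = ExecutionContext.global

  // Stand-in for the HTTP request that the chapter's example performs;
  // throwing an exception here would exercise the Failure branch instead.
  val f: Future[String] = Future {
    "HTTP/1.1 200 OK"
  }

  // Success and Failure (capitalized) are the scala.util.Try cases that
  // onComplete matches on, which is what the second patch fixes.
  f.onComplete {
    case Success(response) => println(response)
    case Failure(t)        => println(t.getMessage)
  }

  // Keep the JVM alive long enough for the callback to run; real code
  // would typically compose futures rather than sleep.
  Thread.sleep(1000)
}
```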