From 9be42ea820c1b3f623ebeebaedfc3cd81107b4c9 Mon Sep 17 00:00:00 2001 From: kisalaya89 Date: Fri, 28 Oct 2016 09:56:46 -0400 Subject: Create temp.md --- chapter/2/temp.md | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) create mode 100644 chapter/2/temp.md (limited to 'chapter/2') diff --git a/chapter/2/temp.md b/chapter/2/temp.md new file mode 100644 index 0000000..fcefc09 --- /dev/null +++ b/chapter/2/temp.md @@ -0,0 +1,23 @@ +# What are promises ? + +- Future, promise, delay, or deferred. +- Definition + +# Historical Background + +- Algol thunk +- Incremental garbage collection of Processes - 1977 +- 1995 Joule channels +- 1997 Mark Miller - E + +# Current state of things + +- Lot of work done in Javascript +- Scala +- Finagle +- Java8 +- ? + +# Future Work + +- ? -- cgit v1.2.3 From cedc03d63afc7f837062e4b66a3bcbcc34516b56 Mon Sep 17 00:00:00 2001 From: kisalaya89 Date: Fri, 28 Oct 2016 11:25:57 -0400 Subject: Update temp.md --- chapter/2/temp.md | 1 + 1 file changed, 1 insertion(+) (limited to 'chapter/2') diff --git a/chapter/2/temp.md b/chapter/2/temp.md index fcefc09..0506ded 100644 --- a/chapter/2/temp.md +++ b/chapter/2/temp.md @@ -2,6 +2,7 @@ - Future, promise, delay, or deferred. - Definition +- States of promises # Historical Background -- cgit v1.2.3 From d29720b469b4d72434200ab15873c03c225dbd8b Mon Sep 17 00:00:00 2001 From: kisalaya89 Date: Fri, 9 Dec 2016 17:58:10 -0500 Subject: Update futures.md --- chapter/2/futures.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'chapter/2') diff --git a/chapter/2/futures.md b/chapter/2/futures.md index 5c56e92..1ddbc02 100644 --- a/chapter/2/futures.md +++ b/chapter/2/futures.md @@ -1,11 +1,11 @@ --- layout: page title: "Futures" -by: "Joe Schmoe and Mary Jane" +by: "Kisalaya Prasad and Avanti Patil" --- Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. {% cite Uniqueness --file futures %} ## References -{% bibliography --file futures %} \ No newline at end of file +{% bibliography --file futures %} -- cgit v1.2.3 From 734128b127c0322699feb036305e4896421783c2 Mon Sep 17 00:00:00 2001 From: kisalaya89 Date: Fri, 9 Dec 2016 21:12:57 -0500 Subject: Update futures.md --- chapter/2/futures.md | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) (limited to 'chapter/2') diff --git a/chapter/2/futures.md b/chapter/2/futures.md index 1ddbc02..02c2d5f 100644 --- a/chapter/2/futures.md +++ b/chapter/2/futures.md @@ -4,7 +4,20 @@ title: "Futures" by: "Kisalaya Prasad and Avanti Patil" --- -Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. {% cite Uniqueness --file futures %} +# Introduction + +As human beings, we have the ability to multitask, i.e. we can walk, talk, and eat at the same time, except when we sneeze. A sneeze is like a blocking activity: it forces you to stop what you’re doing for a brief moment, and then you resume where you left off. In computing terms, this kind of multitasking is called multithreading. In contrast to this behaviour, a computer processor is single threaded.
So when we say that a computer system provides a multi-threaded environment, it is actually an illusion created by the processor, whose time is shared between multiple processes. Sometimes the processor gets blocked when a task is hindered from normal execution by a blocking call. Such blocking calls can range from I/O operations like reading from or writing to disk, to sending or receiving packets over the network. Blocking calls can take a disproportionate amount of time compared to the processor’s ordinary work, e.g. iterating over a list. + + +The processor can handle blocking calls in one of two ways: - **Synchronous method**: The processor waits for the blocking call to complete the task and return its result, and only then resumes processing the next task. The problem with this method is that CPU time is not utilized in an ideal manner. - **Asynchronous method**: When you add asynchrony, the CPU’s time can be utilized for some other task using a preemptive time-sharing algorithm. When the asynchronous call returns its result, the processor can switch back to the previous process via preemption and resume it from the point where it left off. + + +In the world of asynchronous communications many terminologies were defined to help programmers reach the ideal level of resource utilization. In this article we will talk about the motivation behind the rise of promises and futures, explain the programming model associated with them, and discuss the evolution of this programming construct. Finally, we will end with how this construct helps us today in different general-purpose programming languages.
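To make the synchronous/asynchronous contrast concrete, here is a small sketch in JavaScript. The timer merely simulates a blocking I/O call, and the function and record names are made up for illustration:

```javascript
const log = [];

// fetchRecordAsync stands in for a blocking I/O call (e.g. a disk read),
// simulated with a timer so the main thread stays free in the meantime.
function fetchRecordAsync(id, callback) {
  setTimeout(() => callback(`record-${id}`), 50);
}

fetchRecordAsync(42, (record) => log.push(`got ${record}`));
log.push('doing other work while the I/O is pending');

// The synchronous statement above runs first; the callback fires later,
// so log ends up as:
// ['doing other work while the I/O is pending', 'got record-42']
```

The caller never waits: it registers what should happen when the result arrives and keeps working.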
+ + + +{% cite Uniqueness --file futures %} ## References -- cgit v1.2.3 From 016620da68e304dfa1032c4d90204d94d9a2de69 Mon Sep 17 00:00:00 2001 From: kisalaya89 Date: Fri, 9 Dec 2016 21:16:02 -0500 Subject: Update futures.md --- chapter/2/futures.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) (limited to 'chapter/2') diff --git a/chapter/2/futures.md b/chapter/2/futures.md index 02c2d5f..1584075 100644 --- a/chapter/2/futures.md +++ b/chapter/2/futures.md @@ -16,8 +16,10 @@ The processor can either handle blocking calls in two ways: In the world of asynchronous communications many terminologies were defined to help programmers reach the ideal level of resource utilization. As a part of this article we will talk about motivation behind rise of Promises and Futures, we will explain programming model associated with it and discuss evolution of this programming construct, finally we will end this discussion with how this construct helps us today in different general purpose programming languages. +
+ timeline +
-{% cite Uniqueness --file futures %} ## References -- cgit v1.2.3 From e2a8069e2e8cb013483920a7ab5222528bd885ea Mon Sep 17 00:00:00 2001 From: kisalaya89 Date: Fri, 9 Dec 2016 21:16:56 -0500 Subject: Add files via upload --- chapter/2/1.png | Bin 0 -> 14176 bytes chapter/2/10.png | Bin 0 -> 9834 bytes chapter/2/11.png | Bin 0 -> 12134 bytes chapter/2/12.png | Bin 0 -> 17071 bytes chapter/2/13.png | Bin 0 -> 21547 bytes chapter/2/14.png | Bin 0 -> 11405 bytes chapter/2/15.png | Bin 0 -> 15262 bytes chapter/2/2.png | Bin 0 -> 6152 bytes chapter/2/3.png | Bin 0 -> 13719 bytes chapter/2/4.png | Bin 0 -> 25404 bytes chapter/2/5.png | Bin 0 -> 20821 bytes chapter/2/6.png | Bin 0 -> 19123 bytes chapter/2/7.png | Bin 0 -> 30068 bytes chapter/2/8.png | Bin 0 -> 13899 bytes chapter/2/9.png | Bin 0 -> 6463 bytes 15 files changed, 0 insertions(+), 0 deletions(-) create mode 100644 chapter/2/1.png create mode 100644 chapter/2/10.png create mode 100644 chapter/2/11.png create mode 100644 chapter/2/12.png create mode 100644 chapter/2/13.png create mode 100644 chapter/2/14.png create mode 100644 chapter/2/15.png create mode 100644 chapter/2/2.png create mode 100644 chapter/2/3.png create mode 100644 chapter/2/4.png create mode 100644 chapter/2/5.png create mode 100644 chapter/2/6.png create mode 100644 chapter/2/7.png create mode 100644 chapter/2/8.png create mode 100644 chapter/2/9.png (limited to 'chapter/2') diff --git a/chapter/2/1.png b/chapter/2/1.png new file mode 100644 index 0000000..1d98f19 Binary files /dev/null and b/chapter/2/1.png differ diff --git a/chapter/2/10.png b/chapter/2/10.png new file mode 100644 index 0000000..f54711d Binary files /dev/null and b/chapter/2/10.png differ diff --git a/chapter/2/11.png b/chapter/2/11.png new file mode 100644 index 0000000..7673d90 Binary files /dev/null and b/chapter/2/11.png differ diff --git a/chapter/2/12.png b/chapter/2/12.png new file mode 100644 index 0000000..7b2e13f Binary files /dev/null and b/chapter/2/12.png 
differ diff --git a/chapter/2/13.png b/chapter/2/13.png new file mode 100644 index 0000000..a2b8457 Binary files /dev/null and b/chapter/2/13.png differ diff --git a/chapter/2/14.png b/chapter/2/14.png new file mode 100644 index 0000000..5027666 Binary files /dev/null and b/chapter/2/14.png differ diff --git a/chapter/2/15.png b/chapter/2/15.png new file mode 100644 index 0000000..4f2c188 Binary files /dev/null and b/chapter/2/15.png differ diff --git a/chapter/2/2.png b/chapter/2/2.png new file mode 100644 index 0000000..a75c08b Binary files /dev/null and b/chapter/2/2.png differ diff --git a/chapter/2/3.png b/chapter/2/3.png new file mode 100644 index 0000000..9cc66b5 Binary files /dev/null and b/chapter/2/3.png differ diff --git a/chapter/2/4.png b/chapter/2/4.png new file mode 100644 index 0000000..8cfec98 Binary files /dev/null and b/chapter/2/4.png differ diff --git a/chapter/2/5.png b/chapter/2/5.png new file mode 100644 index 0000000..b86de04 Binary files /dev/null and b/chapter/2/5.png differ diff --git a/chapter/2/6.png b/chapter/2/6.png new file mode 100644 index 0000000..aaafdbd Binary files /dev/null and b/chapter/2/6.png differ diff --git a/chapter/2/7.png b/chapter/2/7.png new file mode 100644 index 0000000..7183fb6 Binary files /dev/null and b/chapter/2/7.png differ diff --git a/chapter/2/8.png b/chapter/2/8.png new file mode 100644 index 0000000..d6d2e0e Binary files /dev/null and b/chapter/2/8.png differ diff --git a/chapter/2/9.png b/chapter/2/9.png new file mode 100644 index 0000000..1b67a45 Binary files /dev/null and b/chapter/2/9.png differ -- cgit v1.2.3 From aaea26ca23ddb0492e9d74623a2c62d844286e13 Mon Sep 17 00:00:00 2001 From: kisalaya89 Date: Fri, 9 Dec 2016 21:27:44 -0500 Subject: Update futures.md
--- chapter/2/futures.md | 179 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 179 insertions(+) (limited to 'chapter/2') diff --git a/chapter/2/futures.md b/chapter/2/futures.md index 1584075..2d7c51c 100644 --- a/chapter/2/futures.md +++ b/chapter/2/futures.md @@ -20,6 +20,185 @@ In the world of asynchronous communications many terminologies were defined to h timeline +# Motivation + + +A “Promise” object represents a value that may not be available yet. A promise represents a task with two possible outcomes, success or failure, and holds callbacks that fire when one outcome or the other has occurred. + + +The rise of promises and futures as a topic of relevance can be traced in parallel with the rise of asynchronous and distributed systems. This seems natural, since a future represents a value that will only become available later, which fits naturally with the latency inherent in these heterogeneous systems. The recent adoption of NodeJS and server-side Javascript has only made promises more relevant. But the idea of having a placeholder for a result appeared significantly earlier than the current notion of futures and promises. + + +Thunks can be thought of as a primitive notion of a future or promise. According to their inventor P. Z. Ingerman, thunks are "a piece of coding which provides an address". They were designed as a way of binding actual parameters to their formal definitions in Algol-60 procedure calls. If a procedure is called with an expression in the place of a formal parameter, the compiler generates a thunk which computes the expression and leaves the address of the result in some standard location. + + +The first mention of futures was by Baker and Hewitt in a paper on incremental garbage collection of processes.
They coined the term call-by-future to describe a calling convention in which each formal parameter to a method is bound to a process which evaluates the expression in the parameter in parallel with the other parameters. Before this paper, Algol 68 had also presented a way to make this kind of concurrent parameter evaluation possible, using collateral clauses and parallel clauses for parameter binding. + + +In their paper, Baker and Hewitt introduced a notion of futures as a 3-tuple representing an expression E, consisting of (1) a process which evaluates E, (2) a memory location where the result of E needs to be stored, and (3) a list of processes which are waiting on E. However, the major focus of their work was not the role futures play in asynchronous distributed computing; it was on garbage collecting the processes which evaluate expressions not needed by the function. + + +The Multilisp language, presented by Halstead in 1985, built upon this call-by-future with a future annotation. Binding a variable x to a future expression creates a process which evaluates that expression and binds x to a token which represents its (eventual) result. This design of futures influenced the design of promises in Argus by Liskov and Shrira in 1988. Building upon the initial design of futures in Multilisp, they extended the original idea by introducing strongly typed promises and integration with call streams. This made it easier to handle exception propagation from the callee to the caller, and also to handle problems typical of a multi-computer system, like network failures. Their paper also talked about stream composition, a notion similar to promise pipelining today. + + +E is an object-oriented programming language for secure distributed computing, created by Mark S. Miller, Dan Bornstein, and others at Electric Communities in 1997. One of the major contributions of E was the first non-blocking implementation of promises.
E traces its roots to Joule, which was a dataflow programming language; the notion of promise pipelining in E is inherited from Joule. + + +Among the modern languages, Python was perhaps the first to come up with something along the lines of E’s promises, with the Twisted library. Coming out in 2002, it had a concept of Deferred objects, which were used to receive the result of an operation not yet completed. They were just like normal objects and could be passed along, but they didn’t have a value yet. They supported callbacks which would get called once the result of the operation was available. + + +Promises and Javascript have an interesting history. In 2007, inspired by Python’s Twisted, Dojo came up with its own implementation, dojo.Deferred. This inspired Kris Zyp to come up with the CommonJS Promises/A spec in 2009. Ryan Dahl introduced the world to NodeJS in the same year. In its early versions, Node used promises for its non-blocking API. When NodeJS moved away from promises to its now familiar error-first callback API, it left a void for a promises API. Q.js was an implementation of the Promises/A spec by Kris Kowal around this time. The FuturesJS library by AJ ONeal was another library which aimed to solve flow-control problems without using promises in the strictest sense. In 2011, jQuery v1.5 first introduced promises to its wider and ever-growing audience, though jQuery’s API was subtly different from the Promises/A spec. With the rise of HTML5 and its many new APIs came a problem of different and messy interfaces; the Promises/A+ spec aimed to solve this problem. From this point on, following widespread adoption of the A+ spec, promises were finally made a part of the ECMAScript® 2015 Language Specification. Still, the lack of backward compatibility, and the additional features they provide, mean that libraries like Bluebird and Q.js still have a place in the Javascript ecosystem.
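As a quick illustration of the semantics this history converged on: an ES2015 promise starts out pending and settles exactly once, and any later settlement attempts are ignored. A minimal sketch:

```javascript
// A promise starts out pending and settles exactly once.
const p = new Promise((resolve, reject) => {
  resolve('first');              // settles the promise
  resolve('second');             // ignored: the promise is already settled
  reject(new Error('too late')); // also ignored
});

// Callbacks attached with .then always run on a later turn of the
// event loop, never synchronously.
p.then(value => console.log(value)); // logs "first"
```

Once settled, the value is immutable, which is what makes promises safe to hand around as placeholders.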
+ + +# Different Definitions + + +Future, promise, delay, and deferred generally refer to the same synchronisation mechanism, where an object acts as a proxy for a yet unknown result. When the result becomes available, code held by the promise gets executed. The definitions have changed a little over the years, but the idea has remained the same. + + +In some languages, however, there is a subtle difference between a future and a promise: +“A ‘Future’ is a read-only reference to a yet-to-be-computed value.” +“A ‘Promise’ is pretty much the same, except that you can write to it as well.” + + +In other words, you can read from both futures and promises, but you can only write to promises. You can get the future associated with a promise by calling the future method on it, but conversion in the other direction is not possible. Another way to look at it: if you promise something, you are responsible for keeping it, but if someone else makes a promise to you, you expect them to honor it in the future. + + +More technically, in Scala, “SIP-14 – Futures and Promises” defines them as follows: +A future is a placeholder object for a result that does not yet exist. +A promise is a writable, single-assignment container which completes a future. Promises can complete the future with a result to indicate success, or with an exception to indicate failure. + + +C# also makes the distinction between futures and promises. In C#, futures are implemented as Task<T>, and in fact in earlier versions of the Task Parallel Library futures were implemented with a class Future<T>, which later became Task<T>. The result of the future is available in the read-only property Task<T>.Result, which returns T. + + +In the Javascript world, jQuery introduces a notion of Deferred objects, which are used to represent a unit of work which is not yet finished. A Deferred object contains a promise object which represents the result of that unit of work.
Promises are values returned by a function, while the deferred object can be cancelled by its caller. + + +In Java 8, the Future interface has methods to check if the computation is complete, to wait for its completion, and to retrieve the result of the computation when it is complete. CompletableFutures can be thought of as promises, since their value can be set explicitly. But CompletableFuture also implements the Future interface, and therefore it can be used as a future too. In general, a promise can be thought of as a future with a public set method, which the caller (or anybody else) can use to set the value of the future. + +# Semantics of Execution + +Over the years, promises and futures have been implemented in many programming languages and have created a buzz in the parallel computing world. We will take a look at some of the programming languages that designed frameworks to enhance the performance of applications using promises and futures. + +## Fork-Join + +Doing things in parallel is usually an effective strategy in modern systems. Systems are getting more and more capable of running more than one thing at once, and the latency associated with doing things in a distributed environment is not going away anytime soon. Inside the JVM, threads are the basic unit of concurrency. Threads are independent, heap-sharing execution contexts. They are generally considered lightweight compared to a process, they can share both code and data, and the cost of context switching between them is low. But even if we claim that threads are lightweight, the cost of creating and destroying threads in a long-running application can add up to something significant. A practical way to address this problem is to manage a pool of worker threads. + + +In Java, an Executor is an object which executes Runnable tasks. Executors provide a way of abstracting out the details of how a task will actually run.
These details, like selecting a thread to run the task and how the task is scheduled, are managed by the object implementing the Executor interface. Thread is an example of a Runnable in Java, and Executors can be used instead of creating threads explicitly. + + +Similar to Executor, there is an ExecutionContext as part of scala.concurrent. The basic intent behind it is the same as an Executor: it is responsible for executing computations. How it does so is opaque to the caller. It can create a new thread, use a pool of threads, or run the computation on the same thread as the caller, although the last option is generally not recommended. The scala.concurrent package comes with a default implementation of ExecutionContext, which is a global static thread pool. + + +ExecutionContext.global is an execution context backed by a ForkJoinPool. ForkJoin is a thread pool implementation designed to take advantage of a multiprocessor environment. What makes fork-join unique is that it implements a type of work-stealing algorithm: idle threads pick up work from still-busy threads. A ForkJoinPool manages a small number of threads, usually limited to the number of processor cores available. It is possible to increase the number of threads if all of the available threads are busy inside blocking calls, although such a situation usually points to a flawed system design. The ForkJoin framework works to avoid pool-induced deadlock and to minimize the amount of time spent switching between threads. + + +Futures are generally a good way to reason about asynchronous code: a good way to call a web service, add a block of code to run when the response comes back, and move on without waiting for the response. They are also a good framework for reasoning about concurrency, as they can be executed in parallel, waited on, and composed; they are immutable once written and, most importantly, non-blocking. In Scala, futures (and promises) are based on ExecutionContext.
+ + +In Scala, futures are created using an ExecutionContext. This gives users the flexibility to implement their own ExecutionContext if they need a specific behavior, like blocking futures; the default ForkJoin pool works well in most scenarios. Futures in Scala are placeholders for a yet unknown value, and a promise can be thought of as a way to provide that value: a promise p completes the future returned by p.future. + + +The Scala futures API expects an ExecutionContext to be passed along. This parameter is implicit, and usually ExecutionContext.global. An example: + +
+ timeline +
+ +In this example, the global execution context is used to asynchronously run the created future. Taking another example, + +
+ timeline +
+ +It is generally a good idea to use callbacks with futures, as the value may not be available when you want to use it. + +So, how does it all work together? + +As we mentioned, futures require an ExecutionContext, which is an implicit parameter to virtually all of the futures API. This ExecutionContext is used to execute the future. Scala is flexible enough to let users implement their own execution contexts, but here let’s talk about the default ExecutionContext, which is a ForkJoinPool. + + +ForkJoinPool is ideal for many small computations that spawn off and then come back together. Scala’s ForkJoinPool requires the tasks submitted to it to be ForkJoinTasks, so tasks submitted to the global ExecutionContext are quietly wrapped inside a ForkJoinTask and then executed. ForkJoinPool also supports possibly blocking tasks: the ManagedBlock method creates a spare thread if required, to ensure that there is sufficient parallelism when the current thread is blocked. To summarize, ForkJoinPool is a really good general-purpose ExecutionContext, which works well in most scenarios. + + + +## Event Loops + +Modern systems typically rely on many other systems to provide their functionality. There’s a file system underneath, a database system, and other web services to rely on for information. Interaction with these components typically involves a period where we’re doing nothing but waiting for the response to come back. This is the single largest waste of computing resources. + + +Javascript is a single-threaded asynchronous runtime. Conventionally, async programming is associated with multi-threading, but we’re not allowed to create new threads in Javascript. Instead, asynchronicity in Javascript is achieved using an event-loop mechanism. + + +Javascript has historically been used to interact with the DOM and with user interactions in the browser, and thus an event-driven programming model was a natural fit for the language.
This has scaled up surprisingly well in high-throughput scenarios in NodeJS. + + +The general idea behind the event-driven programming model is that the logic flow control is determined by the order in which events are processed. This is underpinned by a mechanism which constantly listens for events and fires a callback when one is detected. This is Javascript’s event loop in a nutshell. + + +A typical Javascript engine has a few basic components: +- **Heap**: Used to allocate memory for objects. +- **Stack**: Function call frames go onto a stack, from where they’re picked up from the top to be executed. +- **Queue**: A message queue holds the messages to be processed. + + +Each message has a callback function which is fired when the message is processed. These messages can be generated by user actions like button clicks or scrolling, or by actions like HTTP requests, requests to a database to fetch records, or reading/writing a file. + + +Separating when a message is queued from when it is executed means the single thread doesn’t have to wait for an action to complete before moving on to another. We attach a callback to the action we want to perform, and when the time comes, the callback is run with the result of our action. Callbacks work well in isolation, but they force us into a continuation-passing style of execution, otherwise known as callback hell.
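A sketch of the contrast, with a made-up asynchronous `step` function standing in for any callback-based API: nested callbacks on one side, and the flat chain that promises allow on the other.

```javascript
// A toy asynchronous step, standing in for any callback-based API.
function step(n, callback) {
  setTimeout(() => callback(n + 1), 10);
}

// Continuation-passing style: each step nests inside the previous callback.
step(0, a => {
  step(a, b => {
    step(b, c => console.log(c)); // logs 3 after three nested levels
  });
});

// The promise-based equivalent of the same flow reads as a flat chain.
function stepP(n) {
  return new Promise(resolve => setTimeout(() => resolve(n + 1), 10));
}

const chained = stepP(0).then(stepP).then(stepP); // resolves with 3
```

Each additional asynchronous step costs one more level of indentation in the callback version, but only one more `.then` in the promise version.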
+ timeline +
+ +**Programs must be written for people to read, and only incidentally for machines to execute.** - *Harold Abelson and Gerald Jay Sussman* + +Promises are an abstraction which makes working with async operations in Javascript much more fun. Moving on from a continuation-passing style, where you specify what needs to be done once the action completes, the callee simply returns a Promise object. This inverts the chain of responsibility, as now the caller is responsible for handling the result of the promise when it is settled. + +The specification requires that “promises must not fire their resolution/rejection function on the same turn of the event loop that they are created on.” This is an important property because it ensures a deterministic order of execution. Also, once a promise is fulfilled or failed, the promise’s value must not change. This ensures that a promise cannot be resolved more than once. + +Let’s take an example to understand the promise resolution workflow as it happens inside the Javascript engine. + +Suppose we execute a function g(), which in turn calls a function f(). Function f returns a promise which, after counting down for 1000 ms, resolves with a single value, true. Once f’s promise is resolved, a value of true or false is alerted based on the value of the promise. + +
+ timeline +
+ +Now, Javascript’s runtime is single threaded. This statement is both true and not true. The thread which executes the user code is single threaded: it executes what is on top of the stack, runs it to completion, and then moves on to what is next on the stack. But there are also a number of helper threads which handle things like network and timer/setTimeout-type events. The timer thread handles the countdown for setTimeout. +
+ timeline +
+ +Once the timer expires, the timer thread puts a message on the message queue. The queued-up messages are then handled by the event loop. The event loop, as described above, is simply an infinite loop which checks if a message is ready to be processed, picks it up, and puts it on the stack for its callback to be executed. +
+ timeline +
+ +Here, since the promise is resolved with a value of true, we are alerted with the value true when the callback is picked up for execution. +
+ timeline +
+ +Some finer details: +- We’ve ignored the heap here, but all the functions, variables, and callbacks are stored on the heap. +- As we’ve seen, even though Javascript is said to be single threaded, there are a number of helper threads to help the main thread with things like timeouts, UI, network operations, and file operations. +- Run-to-completion helps us reason about the code in a nice way: whenever a function starts, it needs to finish before yielding the main thread, and the data it accesses cannot be modified by someone else. This also means every function needs to finish in a reasonable amount of time, otherwise the program seems hung. This makes Javascript well suited for I/O tasks, which are queued up and then picked up when finished, but not for data-processing-intensive tasks, which generally take a long time to finish. +- We haven’t talked about error handling, but it gets handled the same exact way, with the error callback being called with the error object the promise is rejected with. + + +Event loops have proven to be surprisingly performant. When network servers are designed around multithreading, as soon as you end up with a few hundred concurrent connections the CPU spends so much of its time task switching that you start to lose overall performance. Switching from one thread to another has overhead which can add up significantly at scale. Apache used to choke with as few as a few hundred concurrent users when using a thread per connection, while Node can scale up to 100,000 concurrent connections based on event loops and asynchronous IO.
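Returning to the f/g walkthrough above, the whole flow can be sketched in code. alert is replaced with console.log so the sketch runs outside a browser; the 1000 ms countdown is the one from the example:

```javascript
// f returns a promise that a timer will resolve with true after 1000 ms.
function f() {
  return new Promise(resolve => {
    setTimeout(() => resolve(true), 1000);
  });
}

// g calls f and reacts to the eventual value. The .then callback is queued
// as a message and only runs on a later turn of the event loop.
function g() {
  return f().then(done => {
    console.log(done ? 'true' : 'false'); // logs "true" after ~1000 ms
    return done;
  });
}

g();
```

While the timer counts down, the stack is empty and the main thread is free to process other messages.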
+ ## References -- cgit v1.2.3 From fbbddcd432af1391fedacb92441097aa90f78681 Mon Sep 17 00:00:00 2001 From: kisalaya89 Date: Fri, 9 Dec 2016 21:28:38 -0500 Subject: Update futures.md --- chapter/2/futures.md | 25 +++++++++++++++++++++++ 1 file changed, 25 insertions(+) (limited to 'chapter/2') diff --git a/chapter/2/futures.md b/chapter/2/futures.md index 2d7c51c..e7affd9 100644 --- a/chapter/2/futures.md +++ b/chapter/2/futures.md @@ -200,6 +200,31 @@ We haven’t talked about error handling, but it gets handled the same exact way Event loops have proven to be surprisingly performant. When network servers are designed around multithreading, as soon as you end up with a few hundred concurrent connections the CPU spends so much of its time task switching that you start to lose overall performance. Switching from one thread to another has overhead which can add up significantly at scale. Apache used to choke with as few as a few hundred concurrent users when using a thread per connection, while Node can scale up to 100,000 concurrent connections based on event loops and asynchronous IO. +## Thread Model + + +The Oz programming language introduced the idea of a dataflow concurrency model. In Oz, whenever the program comes across an unbound variable, it waits for the variable to be resolved. This dataflow property of variables lets us write threads in Oz that communicate through streams in a producer-consumer pattern. The major benefit of a dataflow-based concurrency model is that it is deterministic: the same operation called with the same parameters always produces the same result. This makes it a lot easier to reason about concurrent programs, provided the code is side-effect free. + + +Alice ML is a dialect of Standard ML with support for lazy evaluation and concurrent, distributed, and constraint programming. The early aim of the Alice project was to reconstruct the functionality of the Oz programming language on top of a typed programming language.
Building on the Standard ML dialect, Alice also provides concurrency features as part of the language through the use of a future type. Futures in Alice represent the undetermined result of a concurrent operation, and promises in Alice ML are explicit handles for futures. + + +Any expression in Alice can be evaluated in its own thread using the spawn keyword. Spawn always returns a future, which acts as a placeholder for the result of the operation. Futures in Alice ML can be thought of as functional threads, in the sense that threads in Alice always have a result. A thread is said to be touching a future if it performs an operation that requires the value the future is a placeholder for. All threads touching a future are blocked until the future is resolved. If a thread raises an exception, the future is failed, and this exception is re-raised in the threads touching it. Futures can also be passed along as values; this helps us achieve the dataflow model of concurrency in Alice. + + +Alice also allows for lazy evaluation of expressions. Expressions preceded by the lazy keyword are evaluated to a lazy future, which is evaluated when its value is needed. If the computation associated with a concurrent or lazy future ends with an exception, it results in a failed future. Requesting a failed future does not block; it simply raises the exception that was the cause of the failure. + +# Implicit vs. Explicit Promises + + +We define Implicit promises as ones where we don’t have to manually trigger the computation, as opposed to Explicit promises, where we have to trigger the resolution of the future manually, either by calling a start function or by requiring the value. This distinction can be understood in terms of what triggers the calculation: with implicit promises, the creation of a promise also triggers the computation, while with explicit futures, one needs to trigger the resolution of the promise.
This trigger can in turn be explicit, like calling a start method, or implicit, like lazy evaluation where the first use of a promise’s value triggers its evaluation.
+
+
+The idea of explicit futures was introduced in the Baker and Hewitt paper. They’re a little trickier to implement, require some support from the underlying language, and as such they aren’t that common.
The Baker and Hewitt paper talked about using futures as placeholders for arguments to a function, which get evaluated in parallel, but only when they’re needed. Also, lazy futures in Alice ML have a similar explicit invocation mechanism: the first thread touching a future triggers its evaluation.

-

Implicit futures were introduced originally by Friedman and Wise in a paper in 1978. The ideas presented in that paper inspired the design of promises in MultiLisp. Futures are also implicit in Scala and Javascript, where they’re supported as libraries on top of the core languages. Implicit futures can be implemented this way as they don’t require support from the language itself. Alice ML’s concurrent futures are also an example of implicit invocation.

+# Promise Pipelining
+One of the criticisms of traditional RPC systems is that they’re blocking. Imagine a scenario where you need to call an API ‘a’ and another API ‘b’, then aggregate the results of both calls and use that result as a parameter to another API ‘c’. The logical way to go about this would be to call a and b in parallel, then, once both finish, aggregate the results and call c. Unfortunately, in a blocking system, the only way is to call a, wait for it to finish, call b, wait, then aggregate and call c. This is a waste of time, but in the absence of asynchronicity the parallel version is impossible. Even with asynchronicity, it gets a little difficult to manage or scale up the system linearly. Fortunately, we have promises.
+
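To make this concrete, here is a minimal Javascript sketch of the pipelined version; `a`, `b`, and `c` are made-up stand-ins for the three remote APIs described above:

```javascript
// Hypothetical async APIs standing in for remote calls.
const a = () => Promise.resolve(2);
const b = () => Promise.resolve(3);
const c = (sum) => Promise.resolve(sum * 10);

// a and b run in parallel; once both resolve, their results are
// aggregated and fed into c. Nothing blocks while waiting.
const result = Promise.all([a(), b()])
  .then(([resA, resB]) => c(resA + resB));

result.then(value => console.log(value)); // logs 50
```

Written with blocking calls, all three steps would be serialized; here the only ordering constraint left is the data dependency itself.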
+
+
+Futures/Promises can be passed along, waited upon, or chained and joined together. These properties help make life easier for the programmers working with them, and also reduce the latency associated with distributed computing. Promises enable dataflow concurrency, which is deterministic and easier to reason about.
+
+The history of promise pipelining can be traced back to call-streams in Argus and channels in Joule. In Argus, call streams are a mechanism for communication between distributed components. The communicating entities, a sender and a receiver, are connected by a stream, and the sender can make calls to the receiver over it. Streams can be thought of as RPC, except that they allow callers to run in parallel with the receiver while it processes the call. When making a call in Argus, the caller receives a promise for the result. In the paper on Promises by Liskov and Shrira, they mention that having integrated futures into call streams, the next logical step would be stream composition, that is, arranging streams into pipelines where the output of one stream can be used as the input of the next. They talk about composing streams using fork and coenter.
+
+
+Modern promise specifications, like the one in Javascript, come with methods which make working with promise pipelining easier. In Javascript, a Promise.all method is provided, which takes in an iterable of Promises and returns a new Promise which is resolved when all the promises in the iterable are resolved. There’s also a race method, which returns a promise that is resolved as soon as the first promise in the iterable is resolved.
+
+
+In Scala, futures have an onSuccess method which acts as a callback for when the future completes. This callback itself can be used to sequentially chain futures together, but this results in bulkier code. Fortunately, the Scala API comes with combinators which allow for easier combination of results from futures.
Examples of such combinators are map, flatMap, filter, and withFilter.
+
+# Handling Errors
+
+In a synchronous programming model, the most logical way of handling errors is a try...catch block.
+
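For comparison, a small synchronous example; `getUser` here is a made-up function that throws on bad input:

```javascript
function getUser(id) {
  if (id < 0) throw new Error("invalid id");
  return { id: id, name: "user" + id };
}

let user;
try {
  user = getUser(-1);
} catch (err) {
  // Control jumps straight here when getUser throws.
  user = { id: 0, name: "guest" };
}
console.log(user.name); // "guest"
```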
+
+ + +Unfortunately, the same thing doesn’t directly translate to asynchronous code. + +
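A small sketch of why, assuming a callback scheduled with `setTimeout`: the try/catch block has already exited by the time the callback runs, so a throw inside the callback could never reach this handler.

```javascript
const log = [];
try {
  setTimeout(() => {
    // Runs on a later tick; a throw here would bypass the catch below,
    // because the try block finished long before this callback fired.
    log.push("callback ran");
  }, 0);
  log.push("try block finished");
} catch (err) {
  log.push("never reached");
}
console.log(log); // ["try block finished"], the callback hasn't run yet
```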
+
+
+In the Javascript world, some patterns emerged, most noticeably the error-first callback style also adopted by Node. Although this works, it is not very composable, and eventually takes us back to what is called callback hell. Fortunately, Promises come to the rescue.
+
+
+Although most of the earlier papers did not talk about error handling, the Promises paper by Liskov and Shrira did acknowledge the possibility of failure in a distributed environment. They talked about propagation of exceptions from the called procedure to the caller, and also about call streams and how broken streams could be handled. The E language also talked about broken promises and setting a promise to the exception of broken references.
+
+In modern languages, Promises generally come with two callbacks: one to handle the success case and the other to handle the failure.
+
+
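As a sketch, with a made-up `fetchConfig` call that either resolves or rejects, the two callbacks are passed to `then` side by side:

```javascript
function fetchConfig(shouldFail) {
  return shouldFail
    ? Promise.reject(new Error("network down"))
    : Promise.resolve({ retries: 3 });
}

fetchConfig(true).then(
  config => console.log("loaded, retries =", config.retries), // success callback
  err    => console.log("failed:", err.message)               // failure callback
);
// prints "failed: network down"
```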
+
+
+In Javascript, Promises also have a catch method, which helps deal with errors in a composition. Exceptions in promises behave the same way as they do in a synchronous block of code: they jump to the nearest exception handler.
+
+
+
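A sketch of that behavior: the error thrown in the first step skips the success-only step and lands in the nearest handler that accepts a rejection.

```javascript
const pipeline = Promise.resolve(1)
  .then(n => { throw new Error("boom"); })
  .then(n => n + 1) // skipped: this step has no rejection handler
  .then(
    n   => "value: " + n,
    err => "recovered: " + err.message // nearest handler catches it
  );

pipeline.then(msg => console.log(msg)); // "recovered: boom"
```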
+
+
+The same behavior can be written using a catch block.
+
+
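The equivalent chain with `catch` at the end, which handles a rejection raised anywhere above it:

```javascript
const chain = Promise.resolve(1)
  .then(n => { throw new Error("boom"); })
  .then(n => n + 1)                           // skipped on rejection
  .catch(err => "recovered: " + err.message); // plays the role of the catch block

chain.then(msg => console.log(msg)); // "recovered: boom"
```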
+
+

+
+#Futures and Promises in Action
+
+
+##Twitter Finagle
+
+
+Finagle is a protocol-agnostic, asynchronous RPC system for the JVM that makes it easy to build robust clients and servers in Java, Scala, or any JVM-hosted language. It uses the idea of Futures to encapsulate concurrent tasks; they are analogous to threads, but even more lightweight.
+
+
+##Correctables
+Correctables were introduced by Rachid Guerraoui, Matej Pavlovic, and Dragos-Adrian Seredinschi at OSDI ’16, in a paper titled Incremental Consistency Guarantees for Replicated Objects. As the title suggests, Correctables aim to solve the problems with consistency in replicated objects. They provide incremental consistency guarantees by capturing successive changes to the value of a replicated object. Applications can opt to receive a fast but possibly inconsistent result if eventual consistency is acceptable, or to wait for a strongly consistent result. The Correctables API draws inspiration from, and builds on, the API of Promises. Promises have a two-state model to represent an asynchronous task: it starts in a blocked state and proceeds to a ready state when the value is available. This cannot represent the incremental nature of correctables. Instead, a Correctable starts in an updating state, remains there during intermediate updates, and transitions to a final state when the final result is available. If an error occurs in between, it moves into an error state. Each state change triggers a callback.
+
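The state model can be illustrated with a toy object; this is only a conceptual sketch, not the actual Correctables API, and all names here are made up. It starts in an updating state, may deliver preliminary views, and settles into a final state, firing a callback on each transition.

```javascript
class ToyCorrectable {
  constructor() {
    this.state = "updating";
    this.callbacks = { update: [], final: [] };
  }
  on(event, cb) { this.callbacks[event].push(cb); return this; }
  deliverUpdate(view) {        // a fast, possibly inconsistent result
    this.callbacks.update.forEach(cb => cb(view));
  }
  deliverFinal(view) {         // the strongly consistent result
    this.state = "final";
    this.callbacks.final.forEach(cb => cb(view));
  }
}

const balance = new ToyCorrectable();
const seen = [];
balance.on("update", v => seen.push("preliminary: " + v))
       .on("final",  v => seen.push("final: " + v));

balance.deliverUpdate(90); // eventually-consistent view arrives first
balance.deliverFinal(100); // strong view settles the value
console.log(seen); // ["preliminary: 90", "final: 100"]
```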
+
+
+##Folly Futures
+Folly is a library by Facebook for asynchronous C++, inspired by the implementation of Futures by Twitter for Scala. It builds upon the Futures in the C++11 Standard. Like Scala’s futures, Folly futures also allow for implementing a custom executor which provides different ways of running a Future (thread pool, event loop, etc.).
+
+
+##NodeJS Fiber
+Fibers provide coroutine support for v8 and node. Applications can use Fibers to let users write code without a ton of callbacks, without sacrificing the performance benefits of asynchronous IO. Think of fibers as lightweight threads for nodejs where the scheduling is in the hands of the programmer. The node-fibers library recommends against mixing the raw API with ordinary code without any abstractions, and provides a Futures implementation which is ‘fiber-aware’.
+
 ## References
 {% bibliography --file futures %}
-- cgit v1.2.3 


From 485dcc5031aab9668acb93a0f37728f5e0bbd183 Mon Sep 17 00:00:00 2001
From: kisalaya89
Date: Fri, 9 Dec 2016 21:37:19 -0500
Subject: Delete temp.md

---
 chapter/2/temp.md | 24 ------------------------
 1 file changed, 24 deletions(-)
 delete mode 100644 chapter/2/temp.md

(limited to 'chapter/2')

diff --git a/chapter/2/temp.md b/chapter/2/temp.md
deleted file mode 100644
index 0506ded..0000000
--- a/chapter/2/temp.md
+++ /dev/null
@@ -1,24 +0,0 @@
-# What are promises ?
-
-- Future, promise, delay, or deferred.
-- Definition
-- States of promises
-
-# Historical Background
-
-- Algol thunk
-- Incremental garbage collection of Processes - 1977
-- 1995 Joule channels
-- 1997 Mark Miller - E
-
-# Current state of things
-
-- Lot of work done in Javascript
-- Scala
-- Finagle
-- Java8
-- ?
-
-# Future Work
-
-- ?
-- cgit v1.2.3 


From 147af3f9983cf4b485c1323870830f606268711e Mon Sep 17 00:00:00 2001
From: kisalaya89
Date: Tue, 13 Dec 2016 02:33:36 -0500
Subject: Update futures.md

---
 chapter/2/futures.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

(limited to 'chapter/2')

diff --git a/chapter/2/futures.md b/chapter/2/futures.md
index 45d2d0b..4e6a3ee 100644
--- a/chapter/2/futures.md
+++ b/chapter/2/futures.md
@@ -10,10 +10,10 @@ As human beings we have an ability to multitask ie. we can walk, talk and eat at
 
 The processor can either handle blocking calls in two ways:
 
-- **Synchronous method**: As a part of running task in synchronous method, processor continues to wait for the blocking call to complete the task and return the result. After this processor will resume processing next task. Problem with this kind of method is CPU time not utilized in an ideal manner.
-- **Asynchronous method**: When you add asynchrony, you can utilize the time of CPU to work on some other task using one of the preemptive time sharing algorithm. Now when the asynchronous call returns the result, processor can again switch back to the previous process using preemption and resume the process from the point where it’d left off.
+- **Synchronously**: The processor continues to wait for the blocking call to complete the task and return the result, and only after this resumes processing the next task. The problem with this method is that CPU time is not utilized in an ideal manner. Also, there is a possibility of deadlocks here, which can be tricky to recover from.
+- **Asynchronously**: When you add asynchrony, you can utilize the CPU’s time to work on some other task using a preemptive time-sharing algorithm.
-In the world of asynchronous communications many terminologies were defined to help programmers reach the ideal level of resource utilization. As a part of this article we will talk about motivation behind rise of Promises and Futures, we will explain programming model associated with it and discuss evolution of this programming construct, finally we will end this discussion with how this construct helps us today in different general purpose programming languages. +In the world of asynchronous communications many programming models were introduced to help programmers wrangle with dependencies between processes optimally. As a part of this article we will talk about motivation behind rise of Promises and Futures, we will explain programming model associated with it and discuss evolution of this programming construct, finally we will end this discussion with how this construct helps us today in different general purpose programming languages.
@@ -53,7 +53,7 @@ Promises and javascript have an interesting history. In 2007 inspired by Python
 
 #Different Definitions 
 
-Future, promise, Delay or Deferred generally refer to same synchronisation mechanism where an object acts as a proxy for a yet unknown result. When the result is discovered, promises hold some code which then gets executed. The definitions have changed a little over the years but the idea remained the same.
+Future, promise, delay or deferred generally refer to the same synchronisation mechanism, where an object acts as a proxy for a yet unknown result. When the result is discovered, promises hold some code which then gets executed. In some languages, however, there is a subtle difference between what is a Future and a Promise.
-- cgit v1.2.3 


From 33c1a42832803f24c70fa7966e5194e8884cca14 Mon Sep 17 00:00:00 2001
From: kisalaya89
Date: Wed, 14 Dec 2016 11:58:53 -0500
Subject: Update futures.md

---
 chapter/2/futures.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'chapter/2')

diff --git a/chapter/2/futures.md b/chapter/2/futures.md
index 4e6a3ee..d3e2d7e 100644
--- a/chapter/2/futures.md
+++ b/chapter/2/futures.md
@@ -222,7 +222,7 @@ We define Implicit promises as ones where we don’t have to manually trigger th
 
The idea of explicit futures was introduced in the Baker and Hewitt paper. They’re a little trickier to implement, require some support from the underlying language, and as such they aren’t that common. The Baker and Hewitt paper talked about using futures as placeholders for arguments to a function, which get evaluated in parallel, but only when they’re needed. Also, lazy futures in Alice ML have a similar explicit invocation mechanism: the first thread touching a future triggers its evaluation.
 
-Implicit futures were introduced originally by Friedman and Wise in a paper in 1978. The ideas presented in that paper inspired the design of promises in MultiLisp.
Futures are also implicit in Scala and Javascript, where they’re supported as libraries on top of the core languages. Implicit futures can be implemented this way as they don’t require support from language itself. Alice ML’s concurrent futures are also an example of implicit invocation.
+Implicit futures were introduced originally by Friedman and Wise in a paper in 1978. The ideas presented in that paper inspired the design of futures in MultiLisp. Futures are also implicit in Scala and Javascript, where they’re supported as libraries on top of the core languages. Implicit futures can be implemented this way as they don’t require support from the language itself. Alice ML’s concurrent futures are also an example of implicit invocation.
 
 # Promise Pipelining
 One of the criticisms of traditional RPC systems is that they’re blocking. Imagine a scenario where you need to call an API ‘a’ and another API ‘b’, then aggregate the results of both calls and use that result as a parameter to another API ‘c’. The logical way to go about this would be to call a and b in parallel, then, once both finish, aggregate the results and call c. Unfortunately, in a blocking system, the only way is to call a, wait for it to finish, call b, wait, then aggregate and call c. This is a waste of time, but in the absence of asynchronicity the parallel version is impossible. Even with asynchronicity, it gets a little difficult to manage or scale up the system linearly. Fortunately, we have promises.
-- cgit v1.2.3 


From c9aaf8c95d2f4aeeb7fe6e1eda873b4f082c27f2 Mon Sep 17 00:00:00 2001
From: kisalaya89
Date: Thu, 15 Dec 2016 12:13:38 -0500
Subject: Update futures.md

---
 chapter/2/futures.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

(limited to 'chapter/2')

diff --git a/chapter/2/futures.md b/chapter/2/futures.md
index d3e2d7e..61df8b6 100644
--- a/chapter/2/futures.md
+++ b/chapter/2/futures.md
@@ -83,7 +83,7 @@ Over the years promises and futures have been implemented in different programmi
 
 ## Fork-Join
 
-Doing things in parallel is usually an effective way of doing things in modern systems. The systems are getting more and more capable of running more than one things at once, and the latency associated with doing things in a distributed environment is not going away anytime soon. Inside the JVM, threads are a basic unit of concurrency. Threads are independent, heap-sharing execution contexts. Threads are generally considered to be lightweight when compared to a process, and can share both code and data. The cost of context switching between threads is cheap.
+Doing things in parallel is usually an effective strategy in modern systems. Systems are getting more and more capable of running more than one thing at once, and the latency associated with working in a distributed environment is not going away anytime soon. Inside the JVM, threads are a basic unit of concurrency. Threads are independent, heap-sharing execution contexts. They are generally considered to be lightweight compared to a process, and can share both code and data. The cost of context switching between threads is cheaper than it is between processes.
But, even if we claim that threads are lightweight, the cost of creating and destroying threads over a long run can add up to something significant. A practical way to address this problem is to manage a pool of worker threads.
 
In Java, an Executor is an object which executes Runnable tasks. Executors provide a way of abstracting out the details of how a task will actually run. These details, like selecting a thread to run the task and how the task is scheduled, are managed by the object implementing the Executor interface. Threads are an example of a Runnable in Java. Executors can be used instead of creating a thread explicitly.
 
@@ -95,7 +95,7 @@ Similar to Executor, there is an ExecutionContext as part of scala.concurrent. T
 
ExecutionContext.global is an execution context backed by a ForkJoinPool. ForkJoin is a thread pool implementation designed to take advantage of a multiprocessor environment. What makes fork-join unique is that it implements a type of work-stealing algorithm: idle threads pick up work from still-busy threads. ForkJoinPool manages a small number of threads, usually limited to the number of processor cores available. It is possible to increase the number of threads if all of the available threads are busy and wrapped inside a blocking call, although such a situation would typically indicate a bad system design. The ForkJoin framework works to avoid pool-induced deadlock and to minimize the amount of time spent switching between threads.
 
-Futures are generally a good way to reason about asynchronous code. A good way to call a webservice, add a block of code to do something when you get back the response, and move on without waiting for the response. They’re also a good framework to reason about concurrency as they can be executed in parallel, waited on, are composable, immutable once written and most importantly, are non blocking. in Scala, futures (and promises) are based on ExecutionContext.
+Futures are generally a good way to reason about asynchronous code: you call a webservice, add a block of code to do something when you get back the response, and move on without waiting for the response. They’re also a good framework to reason about concurrency, as they can be executed in parallel, waited on, composed, are immutable once written and, most importantly, are non-blocking. In Scala, futures are created using an ExecutionContext. This gives users the flexibility to implement their own ExecutionContext if they need a specific behavior, like blocking futures. The default ForkJoin pool works well in most scenarios. Futures in Scala are placeholders for a yet unknown value. A promise can then be thought of as a way to provide that value: a promise p completes the future returned by p.future.
-- cgit v1.2.3 


From 4a56dd765cc86ab891c3f76d9693c0bad40e86e6 Mon Sep 17 00:00:00 2001
From: kisalaya89
Date: Thu, 15 Dec 2016 13:02:48 -0500
Subject: Update futures.md

---
 chapter/2/futures.md | 164 ++++++++++++++++++++++++++++++++-------------------
 1 file changed, 103 insertions(+), 61 deletions(-)

(limited to 'chapter/2')

diff --git a/chapter/2/futures.md b/chapter/2/futures.md
index 61df8b6..6e019c7 100644
--- a/chapter/2/futures.md
+++ b/chapter/2/futures.md
@@ -6,14 +6,14 @@
 
 #Introduction 
 
-As human beings we have an ability to multitask ie. we can walk, talk and eat at the same time except when you sneeze. Sneeze is like a blocking activity from the normal course of action, because it forces you to stop what you’re doing for a brief moment and then you resume where you left off. Activities like multitasking are called multithreading in computer lingo. In contrast to this behaviour, computer processors are single threaded. So when we say that a computer system has multi-threaded environment, it is actually just an illusion created by processor where processor’s time is shared between multiple processes.
Sometimes processor gets blocked when some tasks are hindered from normal execution due to blocking calls. Such blocking calls can range from IO operations like read/write to disk or sending/receiving packets to/from network. Blocking calls can take disproportionate amount of time compared to the processor’s task execution i.e. iterating over a list.
+As human beings we have an ability to multitask, i.e. we can walk, talk and eat at the same time, except when you sneeze. A sneeze is like a blocking activity, because it forces you to stop what you’re doing for a brief moment before you resume where you left off. In computer lingo, this kind of multitasking is called multithreading. In contrast to this behaviour, computer processors are single threaded. So when we say that a computer system has a multi-threaded environment, it is actually just an illusion created by the processor, whose time is shared between multiple processes. Sometimes the processor gets blocked when some tasks are hindered from normal execution by blocking calls. Such blocking calls can range from IO operations, like read/write to disk, to sending/receiving packets over the network. Blocking calls can take a disproportionate amount of time compared to the processor’s own task execution, e.g. iterating over a list.
 
 The processor can handle blocking calls in one of two ways:
 
-- **Synchronously**: As a part of running task in synchronous method, processor continues to wait for the blocking call to complete the task and return the result. After this processor will resume processing next task. Problem with this kind of method is CPU time not utilized in an ideal manner. Also, there is a possiblity of deadlocks here, which can be tricky to recover from.
-- **Asynchronously**: When you add asynchrony, you can utilize the time of CPU to work on some other task using one of the preemptive time sharing algorithm.
Now when the asynchronous call returns the result, processor can again switch back to the previous process using preemption and resume the process from the point where it’d left off.
+- **Synchronous method**: The processor continues to wait for the blocking call to complete the task and return the result, and only after this resumes processing the next task. The problem with this method is that CPU time is not utilized in an ideal manner.
+- **Asynchronous method**: When you add asynchrony, you can utilize the CPU’s time to work on some other task using a preemptive time-sharing algorithm. When the asynchronous call returns the result, the processor can switch back to the previous process using preemption and resume it from the point where it’d left off.
 
-In the world of asynchronous communications many programming models were introduced to help programmers wrangle with dependencies between processes optimally. As a part of this article we will talk about motivation behind rise of Promises and Futures, we will explain programming model associated with it and discuss evolution of this programming construct, finally we will end this discussion with how this construct helps us today in different general purpose programming languages.
+In the world of asynchronous communication, many terminologies were defined to help programmers reach the ideal level of resource utilization. In this article we will talk about the motivation behind the rise of Promises and Futures, explain the programming model associated with them, discuss the evolution of this construct, and finally look at how it helps us today in different general-purpose programming languages.
@@ -23,22 +23,22 @@ In the world of asynchronous communications many programming models were introdu #Motivation -A “Promise” object represents a value that may not be available yet. A Promise is an object that represents a task with two possible outcomes, success or failure and holds callbacks that fire when one outcome or the other has occurred. +A “Promise” object represents a value that may not be available yet. A Promise is an object that represents a task with two possible outcomes, success or failure and holds callbacks that fire when one outcome or the other has occurred. -The rise of promises and futures as a topic of relevance can be traced parallel to the rise of asynchronous or distributed systems. This seems natural, since futures represent a value available in Future which fits in very naturally with the latency which is inherent to these heterogeneous systems. The recent adoption of NodeJS and server side Javascript has only made promises more relevant. But, the idea of having a placeholder for a result came in significantly before than the current notion of futures and promises. +The rise of promises and futures as a topic of relevance can be traced parallel to the rise of asynchronous or distributed systems. This seems natural, since futures represent a value available in Future which fits in very naturally with the latency which is inherent to these heterogeneous systems. The recent adoption of NodeJS and server side Javascript has only made promises more relevant. But, the idea of having a placeholder for a result came in significantly before than the current notion of futures and promises. -Thunks can be thought of as a primitive notion of a Future or Promise. According to its inventor P. Z. Ingerman, thunks are "A piece of coding which provides an address". They were designed as a way of binding actual parameters to their formal definitions in Algol-60 procedure calls. 
If a procedure is called with an expression in the place of a formal parameter, the compiler generates a thunk which computes the expression and leaves the address of the result in some standard location.
 
 
The first mention of Futures was by Baker and Hewitt in a paper on Incremental Garbage Collection of Processes. They coined the term call-by-futures to describe a calling convention in which each formal parameter to a method is bound to a process which evaluates the expression in the parameter in parallel with the other parameters. Before this paper, Algol 68 also presented a way to make this kind of concurrent parameter evaluation possible, using collateral clauses and parallel clauses for parameter binding.
 
In their paper, Baker and Hewitt introduced a notion of Futures as a 3-tuple representing an expression E, consisting of (1) a process which evaluates E, (2) a memory location where the result of E needs to be stored, and (3) a list of processes which are waiting on E.
But the major focus of their work was not the role futures play in asynchronous distributed computing; it was on garbage collecting the processes which evaluate expressions not needed by the function.
 
 
The Multilisp language, presented by Halstead in 1985, built upon this call-by-future with a future annotation. Binding a variable to a future expression creates a process which evaluates that expression and binds the variable to a token which represents its (eventual) result. This design of futures influenced the design of Promises in Argus by Liskov and Shrira in 1988. Building upon the initial design of futures in Multilisp, they extended the original idea by introducing strongly typed Promises and integration with call streams. This made it easier to handle exception propagation from callee to caller, and also to handle the typical problems in a multi-computer system, like network failures. This paper also talked about stream composition, a notion which is similar to promise pipelining today.
 
 
E is an object-oriented programming language for secure distributed computing, created by Mark S.
Miller, Dan Bornstein, and others at Electric Communities in 1997. One of the major contributions of E was the first non-blocking implementation of Promises. It traces its roots to Joule, which was a dataflow programming language. The notion of promise pipelining in E is inherited from Joule.
@@ -47,35 +47,35 @@ E is an object-oriented programming language for secure distributed computing, c
 Among the modern languages, Python was perhaps the first to come up with something on the lines of E’s promises with the Twisted library. Coming out in 2002, it had a concept of Deferred objects, which were used to receive the result of an operation not yet completed. They were just like normal objects and could be passed along, but they didn’t have a value. They supported a callback which would get called once the result of the operation was complete.
 
-Promises and javascript have an interesting history. In 2007 inspired by Python’s twisted, dojo came up with it’s own implementation of of dojo.Deferred. This inspired Kris Zyp to then come up with the CommonJS Promises/A spec in 2009. Ryan Dahl introduced the world to NodeJS in the same year. In it’s early versions, Node used promises for the non-blocking API. When NodeJS moved away from promises to its now familiar error-first callback API, it left a void for a promises API. Q.js was an implementation of Promises/A spec by Kris Kowal around this time. FuturesJS library by AJ ONeal was another library which aimed to solve flow-control problems without using Promises in the strictest of senses. In 2011, JQuery v1.5 first introduced Promises to its wider and ever-growing audience. The API for JQuery was subtly different than the Promises/A spec. With the rise of HTML5 and different APIs, there came a problem of different and messy interfaces. A+ promises aimed to solve this problem. From this point on, leading from widespread adoption of A+ spec, promises was finally made a part of ECMAScript® 2015 Language Specification.
Still, a lack of backward compatibility and additional features provided means that libraries like BlueBird and Q.js still have a place in the javascript ecosystem.

+Promises and javascript have an interesting history. In 2007, inspired by Python’s Twisted, Dojo came up with its own implementation, dojo.Deferred. This inspired Kris Zyp to come up with the CommonJS Promises/A spec in 2009. Ryan Dahl introduced the world to NodeJS in the same year. In its early versions, Node used promises for its non-blocking API. When NodeJS moved away from promises to its now familiar error-first callback API, it left a void for a promises API. Q.js was an implementation of the Promises/A spec by Kris Kowal around this time. The FuturesJS library by AJ ONeal was another attempt to solve flow-control problems without using Promises in the strictest of senses. In 2011, jQuery v1.5 first introduced Promises to its wider and ever-growing audience, though its API was subtly different from the Promises/A spec. With the rise of HTML5 and its many new APIs, there came a problem of different and messy interfaces; the Promises/A+ spec aimed to solve this problem. From this point on, with the widespread adoption of the A+ spec, promises were finally made a part of the ECMAScript® 2015 Language Specification. Still, backward compatibility and the additional features they provide mean that libraries like BlueBird and Q.js still have a place in the javascript ecosystem.
 
 #Different Definitions
 
-Future, promise, Delay or Deferred generally refer to same synchronisation mechanism where an object acts as a proxy for a yet unknown result. When the result is discovered, promises hold some code which then gets executed.
+Future, promise, delay or deferred generally refer to the same synchronisation mechanism, where an object acts as a proxy for an as yet unknown result. When the result is discovered, promises hold some code which then gets executed.
The definitions have changed a little over the years but the idea remained the same.
 
-In some languages however, there is a subtle difference between what is a Future and a Promise.
-“A ‘Future’ is a read-only reference to a yet-to-be-computed value”.
-“A ‘Promise’ is a pretty much the same except that you can write to it as well.”
+In some languages, however, there is a subtle difference between what is a Future and what is a Promise.
+“A ‘Future’ is a read-only reference to a yet-to-be-computed value.”
+“A ‘Promise’ is pretty much the same, except that you can write to it as well.”
 In other words, you can read from both Futures and Promises, but you can only write to Promises. You can get the Future associated with a Promise by calling the future method on it, but conversion in the other direction is not possible. Another way to look at it would be, if you Promise something, you are responsible for keeping it, but if someone else makes a Promise to you, you expect them to honor it in Future.
 
-More technically, in Scala, “SIP-14 – Futures and Promises” defines them as follows:
-A future is as a placeholder object for a result that does not yet exist.
+More technically, in Scala, “SIP-14 – Futures and Promises” defines them as follows:
+A future is a placeholder object for a result that does not yet exist.
 A promise is a writable, single-assignment container, which completes a future. Promises can complete the future with a result to indicate success, or with an exception to indicate failure.
 
 C# also makes the distinction between futures and promises. In C#, futures are implemented as Task<T>, and in fact in earlier versions of the Task Parallel Library futures were implemented with a class Future<T>, which later became Task<T>. The result of the future is available in the read-only property Task<T>.Result, which returns T.
 
-In Javascript world, Jquery introduces a notion of Deferred objects which are used to represent a unit of work which is not yet finished.
Deferred object contains a promise object which represent the result of that unit of work. Promises are values returned by a function, while the deferred object can be canceled by its caller.
+In the Javascript world, jQuery introduces a notion of Deferred objects, which are used to represent a unit of work which is not yet finished. The Deferred object contains a promise object which represents the result of that unit of work. Promises are values returned by a function, while the deferred object can be canceled by its caller.
 
-In Java 8, the Future interface has methods to check if the computation is complete, to wait for its completion, and to retrieve the result of the computation when it is complete. CompletableFutures can be thought of as Promises as their value can be set. But it also implements the Future interface and therefore it can be used as a Future too. Promises can be thought of as a future with a public set method which the caller (or anybody else) can use to set the value of the future.
+In Java 8, the Future interface has methods to check if the computation is complete, to wait for its completion, and to retrieve the result of the computation when it is complete. A CompletableFuture can be thought of as a Promise, as its value can be set. But it also implements the Future interface, and therefore it can be used as a Future too. In general, a promise can be thought of as a future with a public set method, which the caller (or anybody else) can use to set the value of the future.
 
 # Semantics of Execution
 
@@ -83,19 +83,19 @@ Over the years promises and futures have been implemented in different programmi
 
 ## Fork-Join
 
-Doing things in parallel is usually an effective way of doing things in modern systems. The systems are getting more and more capable of running more than one things at once, and the latency associated with doing things in a distributed environment is not going away anytime soon. Inside the JVM, threads are a basic unit of concurrency.
Threads are independent, heap-sharing execution contexts. Threads are generally considered to be lightweight when compared to a process, and can share both code and data. The cost of context switching between threads is cheaper than what it is between processes. But, even if we claim that threads are lightweight, the cost of creation and destruction of threads in a long running threads can add up to something significant. A practical way is address this problem is to manage a pool of worker threads.
+Doing things in parallel is usually an effective way of getting work done in modern systems. Systems are getting more and more capable of running more than one thing at once, and the latency associated with doing things in a distributed environment is not going away anytime soon. Inside the JVM, threads are the basic unit of concurrency. Threads are independent, heap-sharing execution contexts. They are generally considered lightweight compared to processes, and can share both code and data. The cost of context switching between threads is cheap. But even if we claim that threads are lightweight, the cost of creating and destroying threads in a long-running application can add up to something significant. A practical way to address this problem is to manage a pool of worker threads.
 
-In Java executor is an object which executes the Runnable tasks. Executors provides a way of abstracting out how the details of how a task will actually run. These details, like selecting a thread to run the task, how the task is scheduled are managed by the object implementing the Executor interface. Threads are an example of a Runnable in java. Executors can be used instead of creating a thread explicitly.
+In Java, an Executor is an object which executes Runnable tasks. Executors provide a way of abstracting out the details of how a task will actually run.
These details, like selecting a thread to run the task and how the task is scheduled, are managed by the object implementing the Executor interface. Threads are an example of a Runnable in Java. Executors can be used instead of creating threads explicitly.
 
-Similar to Executor, there is an ExecutionContext as part of scala.concurrent. The basic intent behind it is same as an Executor : it is responsible for executing computations. How it does it can is opaque to the caller. It can create a new thread, use a pool of threads or run it on the same thread as the caller, although the last option is generally not recommended. Scala.concurrent package comes with an implementation of ExecutionContext by default, which is a global static thread pool.
+Similar to Executor, there is an ExecutionContext as part of scala.concurrent. The basic intent behind it is the same as an Executor’s: it is responsible for executing computations. How it does so is opaque to the caller. It can create a new thread, use a pool of threads, or run the computation on the same thread as the caller, although the last option is generally not recommended. The scala.concurrent package comes with an implementation of ExecutionContext by default, which is a global static thread pool.
 
-ExecutionContext.global is an execution context backed by a ForkJoinPool. ForkJoin is a thread pool implementation designed to take advantage of a multiprocessor environment. What makes fork join unique is that it implements a type of work-stealing algorithm : idle threads pick up work from still busy threads. ForkJoinPool manages a small number of threads, usually limited to the number of processor cores available. It is possible to increase the number of threads, if all of the available threads are busy and wrapped inside a blocking call, although such situation would typically come with a bad system design. ForkJoin framework work to avoid pool-induced deadlock and minimize the amount of time spent switching between the threads.
+ExecutionContext.global is an execution context backed by a ForkJoinPool. ForkJoin is a thread pool implementation designed to take advantage of a multiprocessor environment. What makes ForkJoin unique is that it implements a type of work-stealing algorithm: idle threads pick up work from still-busy threads. ForkJoinPool manages a small number of threads, usually limited to the number of processor cores available. It is possible to increase the number of threads if all of the available threads are busy and wrapped inside a blocking call, although such a situation would typically indicate a bad system design. The ForkJoin framework works to avoid pool-induced deadlock and to minimize the amount of time spent switching between threads.
 
-Futures are generally a good way to reason about asynchronous code. A good way to call a webservice, add a block of code to do something when you get back the response, and move on without waiting for the response. They’re also a good framework to reason about concurrency as they can be executed in parallel, waited on, are composable, immutable once written and most importantly, are non blocking.
+Futures are generally a good way to reason about asynchronous code: a good way to call a webservice, add a block of code to do something when the response comes back, and move on without waiting for it. They’re also a good framework to reason about concurrency, as they can be executed in parallel, waited on, composed, are immutable once written and, most importantly, are non-blocking.
 
 In Scala, futures (and promises) are based on ExecutionContext.
 
Futures in Scala are created using an ExecutionContext. This gives the users flexibility to implement their own ExecutionContext if they need a specific behavior, like blocking futures. The default ForkJoin pool works well in most of the scenarios. Futures in Scala are placeholders for a yet unknown value. A promise then can be thought of as a way to provide that value.
A promise p completes the future returned by p.future.
@@ -104,15 +104,27 @@ In Scala, futures are created using an ExecutionContext. This gives the users fl
 
The Scala futures API expects an ExecutionContext to be passed along. This parameter is implicit, and is usually ExecutionContext.global. An example:
 
-
- timeline -
+```scala
+import scala.concurrent.{ExecutionContext, Future}
+
+implicit val ec = ExecutionContext.global
+val f: Future[String] = Future { "hello world" }
+```
 
In this example, the global execution context is used to asynchronously run the created future.

Taking another example,
 
-
- timeline -
+```scala
+import scala.concurrent.{ExecutionContext, Future}
+import scala.util.{Success, Failure}
+
+implicit val ec = ExecutionContext.global
+
+// Http comes from an HTTP client library (e.g. scalaj-http)
+val f = Future {
+  Http("http://api.fixed.io/latest?base=USD").asString
+}
+
+f.onComplete {
+  case Success(response) => println(response.body)
+  case Failure(t) => println(t)
+}
+```
+
 
It is generally a good idea to use callbacks with Futures, as the value may not be available when you want to use it.
 
@@ -121,16 +133,15 @@
 
So, how does it all work together ?
 
 As we mentioned, Futures require an ExecutionContext, which is an implicit parameter to virtually all of the futures API. This ExecutionContext is used to execute the future. Scala is flexible enough to let users implement their own Execution Contexts, but let’s talk about the default ExecutionContext, which is a ForkJoinPool.
 
-ForkJoinPool is ideal for many small computations that spawn off and then come back together. Scala’s ForkJoinPool requires the tasks submitted to it to be a ForkJoinTask. The tasks submitted to the global ExecutionContext is quietly wrapped inside a ForkJoinTask and then executed. ForkJoinPool also supports a possibly blocking task, using ManagedBlock method which creates a spare thread if required to ensure that there is sufficient parallelism if the current thread is blocked. To summarize, ForkJoinPool is an really good general purpose ExecutionContext, which works really well in most of the scenarios.
-
+ForkJoinPool is ideal for many small computations that spawn off and then come back together. Scala’s ForkJoinPool requires the tasks submitted to it to be ForkJoinTasks. The tasks submitted to the global ExecutionContext are quietly wrapped inside a ForkJoinTask and then executed. ForkJoinPool also supports possibly blocking tasks, using the managedBlock method, which creates a spare thread if required to ensure that there is sufficient parallelism when the current thread is blocked. To summarize, ForkJoinPool is a really good general-purpose ExecutionContext, which works well in most scenarios.
## Event Loops
 
-Modern systems typically rely on many other systems to provide the functionality they do. There’s a file system underneath, a database system, and other web services to rely on for the information. Interaction with these components typically involves a period where we’re doing nothing but waiting for the response back. This is single largest waste of computing resources.
+Modern systems typically rely on many other systems to provide the functionality they do. There’s a file system underneath, a database system, and other web services to rely on for information. Interaction with these components typically involves a period where we’re doing nothing but waiting for the response to come back. This is the single largest waste of computing resources.
 
-Javascript is a single threaded asynchronous runtime. Now, conventionally async programming is generally associated with multi-threading, but we’re not allowed to create new threads in Javascript. Instead, asynchronicity in Javascript is achieved using an event-loop mechanism.
+Javascript is a single-threaded asynchronous runtime. Conventionally, async programming is associated with multi-threading, but we’re not allowed to create new threads in Javascript. Instead, asynchronicity in Javascript is achieved using an event-loop mechanism.
 
 Javascript has historically been used to interact with the DOM and user interactions in the browser, and thus an event-driven programming model was a natural fit for the language. This has scaled up surprisingly well in high throughput scenarios in NodeJS.
 
@@ -143,12 +154,12 @@ A typical Javascript engine has a few basic components. They are :
 - **Heap**
Used to allocate memory for objects
 - **Stack**
-Function call frames go into a stack from where they’re picked up from top to be executed.
+Function call frames go onto a stack, from where they’re picked off the top to be executed.
 - **Queue**
- A message queue holds the messages to be processed.
+ A message queue holds the messages to be processed.
 
-Each message has a callback function which is fired when the message is processed. These messages can be generated by user actions like button clicks or scrolling, or by actions like HTTP requests, request to a database to fetch records or reading/writing to a file.
+Each message has a callback function which is fired when the message is processed. These messages can be generated by user actions like button clicks or scrolling, or by actions like HTTP requests, requests to a database to fetch records, or reading/writing to a file.
 
 Separating when a message is queued from when it is executed means the single thread doesn’t have to wait for an action to complete before moving on to another. We attach a callback to the action we want to do, and when the time comes, the callback is run with the result of our action. Callbacks work well in isolation, but they force us into a continuation passing style of execution, which is otherwise known as callback hell.
 
**Programs must be written for people to read, and only incidentally for machines to execute.** - *Harold Abelson and Gerald Jay Sussman*
 
-Promises are an abstraction which make working with async operations in javascript much more fun. Moving on from a continuation passing style, where you specify what needs to be done once the action is done, the callee simply returns a Promise object.
This inverts the chain of responsibility, as now the caller is responsible for handling the result of the promise when it is settled.
+Promises are an abstraction which makes working with async operations in javascript much more fun. Moving on from a continuation passing style, where you specify what needs to be done once the action is done, the callee simply returns a Promise object. This inverts the chain of responsibility, as now the caller is responsible for handling the result of the promise when it is settled.
 
-The ES2015 spec specifies that “promises must not fire their resolution/rejection function on the same turn of the event loop that they are created on.” This is an important property because it ensures deterministic order of execution. Also, once a promise is fulfilled or failed, the promise’s value MUST not be changed. This ensures that a promise cannot be resolved more than once.
+The ES2015 spec specifies that “promises must not fire their resolution/rejection function on the same turn of the event loop that they are created on.” This is an important property because it ensures a deterministic order of execution. Also, once a promise is fulfilled or failed, the promise’s value MUST not be changed. This ensures that a promise cannot be resolved more than once.
 
 Let’s take an example to understand the promise resolution workflow as it happens inside the Javascript Engine.
 
-Suppose we execute a function, here g() which in turn, calls function f(). Function f returns a promise, which, after counting down for 1000 ms, resolves the promise with a single value, true. Once f gets resolved, a value true or false is alerted based on the value of the promise.
+Suppose we execute a function, here g(), which in turn calls function f(). Function f returns a promise which, after counting down for 1000 ms, resolves with a single value, true. Once f’s promise is resolved, true or false is alerted based on the value of the promise.
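The scenario the timelines below walk through can also be sketched in code. This is a hedged sketch of the f and g functions described in the text, with console.log standing in for the alert:

```javascript
// f returns a promise that a 1000 ms timer resolves with the single value `true`.
function f() {
  return new Promise(function (resolve) {
    setTimeout(function () {
      resolve(true); // the timer's callback settles the promise
    }, 1000);
  });
}

// g calls f and attaches a callback; per the spec, the callback runs on a
// later turn of the event loop, after the currently running code finishes.
function g() {
  return f().then(function (value) {
    // console.log stands in for the alert mentioned in the text
    console.log(value ? "true" : "false");
    return value;
  });
}

g();
```

Note that the callback passed to then is queued as a message and only picked up once the currently executing code has run to completion.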
@@ -184,7 +195,7 @@ Once the timer expires, the timer thread puts a message on the message queue. Th timeline
-Here, since the future is resolved with a value of true, we are alerted with a value true when the callback is picked up for execution. +Here, since the future is resolved with a value of true, we are alerted with a value true when the callback is picked up for execution.
 timeline
 
@@ -192,18 +203,18 @@
 
Some finer details: We’ve ignored the heap here, but all the functions, variables and callbacks are stored on the heap.
-As we’ve seen here, even though Javascript is said to be single threaded, there are number of helper threads to help main thread do things like timeout, UI, network operations, file operations etc.
-Run-to-completion helps us reason about the code in a nice way. Whenever a function starts, it needs to finish before yielding the main thread. The data it accesses cannot be modified by someone else. This also means every function needs to finish in a reasonable amount of time, otherwise the program seems hung. This makes Javascript well suited for I/O tasks which are queued up and then picked up when finished, but not for data processing intensive tasks which generally take long time to finish.
+As we’ve seen here, even though Javascript is said to be single threaded, there are a number of helper threads to help the main thread with things like timers, UI, network operations, file operations etc.
+Run-to-completion helps us reason about the code in a nice way. Whenever a function starts, it needs to finish before yielding the main thread. The data it accesses cannot be modified by someone else. This also means every function needs to finish in a reasonable amount of time, otherwise the program seems hung. This makes Javascript well suited for I/O tasks which are queued up and then picked up when finished, but not for data-processing-intensive tasks which generally take a long time to finish.
 
 We haven’t talked about error handling, but it gets handled the same exact way, with the error callback being called with the error object the promise is rejected with.
 
-Event loops have proven to be surprisingly performant.
When network servers are designed around multithreading, as soon as you end up with a few hundred concurrent connections, the CPU spends so much of its time task switching that you start to lose overall performance. Switching from one thread to another has overhead which can add up significantly at scale. Apache used to choke even as low as a few hundred concurrent users when using a thread per connection while Node can scale up to a 100,000 concurrent connections based on event loops and asynchronous IO.
+Event loops have proven to be surprisingly performant. When network servers are designed around multithreading, as soon as you end up with a few hundred concurrent connections, the CPU spends so much of its time task switching that you start to lose overall performance. Switching from one thread to another has overhead which can add up significantly at scale. Apache used to choke at as low as a few hundred concurrent users when using a thread per connection, while Node can scale up to 100,000 concurrent connections based on event loops and asynchronous IO.
 
 
##Thread Model
 
-Oz programming language introduced an idea of dataflow concurrency model. In Oz, whenever the program comes across an unbound variable, it waits for it to be resolved. This dataflow property of variables helps us write threads in Oz that communicate through streams in a producer-consumer pattern.
The major benefit of dataflow based concurrency model is that it’s deterministic - same operation called with same parameters always produces the same result. It makes it a lot easier to reason about concurrent programs, if the code is side-effect free.
+The Oz programming language introduced the idea of a dataflow concurrency model. In Oz, whenever the program comes across an unbound variable, it waits for it to be resolved. This dataflow property of variables helps us write threads in Oz that communicate through streams in a producer-consumer pattern. The major benefit of a dataflow-based concurrency model is that it is deterministic: the same operation called with the same parameters always produces the same result. This makes it a lot easier to reason about concurrent programs, if the code is side-effect free.
 
 Alice ML is a dialect of Standard ML with support for lazy evaluation, concurrent, distributed, and constraint programming. The early aim of the Alice project was to reconstruct the functionalities of the Oz programming language on top of a typed programming language. Building on the Standard ML dialect, Alice also provides concurrency features as part of the language through the use of a future type. Futures in Alice represent an undetermined result of a concurrent operation. Promises in Alice ML are explicit handles for futures.
 
Alice also allows for lazy evaluation of expressions. Expressions preceded with
 
 #Implicit vs. Explicit Promises
 
-We define Implicit promises as ones where we don’t have to manually trigger the computation vs Explicit promises where we have to trigger the resolution of future manually, either by calling a start function or by requiring the value.
This distinction can be understood in terms of what triggers the calculation: with Implicit promises, the creation of a promise also triggers the computation, while with Explicit futures, one needs to trigger the resolution of a promise. This trigger can in turn be explicit, like calling a start method, or implicit, like lazy evaluation, where the first use of a promise’s value triggers its evaluation.
 
 The idea of explicit futures was introduced in the Baker and Hewitt paper. They’re a little trickier to implement, and require some support from the underlying language, and as such they aren’t that common. The Baker and Hewitt paper talked about using futures as placeholders for arguments to a function, which get evaluated in parallel when they’re needed. Also, lazy futures in Alice ML have a similar explicit invocation mechanism: the first thread touching a future triggers its evaluation.
 
-Implicit futures were introduced originally by Friedman and Wise in a paper in 1978. The ideas presented in that paper inspired the design of futures in MultiLisp. Futures are also implicit in Scala and Javascript, where they’re supported as libraries on top of the core languages. Implicit futures can be implemented this way as they don’t require support from language itself. Alice ML’s concurrent futures are also an example of implicit invocation.
+Implicit futures were introduced originally by Friedman and Wise in a paper in 1978. The ideas presented in that paper inspired the design of promises in MultiLisp. Futures are also implicit in Scala and Javascript, where they’re supported as libraries on top of the core languages. Implicit futures can be implemented this way as they don’t require support from the language itself. Alice ML’s concurrent futures are also an example of implicit invocation.
 
 # Promise Pipelining
 
One of the criticisms of traditional RPC systems is that they’re blocking.
Imagine a scenario where you need to call an API ‘a’ and another API ‘b’, then aggregate the results of both the calls and use that result as a parameter to another API ‘c’. Now, the logical way to go about doing this would be to call ‘a’ and ‘b’ in parallel, then once both finish, aggregate the results and call ‘c’. Unfortunately, in a blocking system, the way to go about it is to call ‘a’, wait for it to finish, call ‘b’, wait, then aggregate and call ‘c’. This seems like a waste of time, but in the absence of asynchronicity, there is no better option. Even with asynchronicity, it gets a little difficult to manage or scale up the system linearly. Fortunately, we have promises.
 
@@ -231,30 +242,49 @@
 
 timeline
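In Javascript terms, the a/b/c scenario above could be sketched as follows (callA, callB and callC are hypothetical stand-ins for the three APIs):

```javascript
// Hypothetical stand-ins for the APIs 'a', 'b' and 'c' described above.
function callA() { return Promise.resolve(2); }
function callB() { return Promise.resolve(3); }
function callC(aggregate) { return Promise.resolve(aggregate * 10); }

// 'a' and 'b' run in parallel; once both are resolved, their results
// are aggregated and passed on to 'c'. Nothing here ever blocks.
const result = Promise.all([callA(), callB()]).then(function (values) {
  return callC(values[0] + values[1]);
});
```

The caller gets back a single promise for the whole pipeline, which it can chain further or wait on.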
-Futures/Promises can be passed along, waited upon, or chained and joined together. These properties helps make life easier for the programmers working with them. This also reduces the latency associated with distributed computing. Promises enable dataflow concurrency, which is also deterministic, and easier to reason.
+Futures/Promises can be passed along, waited upon, or chained and joined together. These properties help make life easier for the programmers working with them. This also reduces the latency associated with distributed computing. Promises enable dataflow concurrency, which is also deterministic and easier to reason about.
 
 The history of promise pipelining can be traced back to the call-streams in Argus and channels in Joule. In Argus, call streams are a mechanism for communication between distributed components. The communicating entities, a sender and a receiver, are connected by a stream, and the sender can make calls to the receiver over it. Streams can be thought of as RPC, except that they allow callers to run in parallel with the receiver while processing the call. When making a call in Argus, the caller receives a promise for the result. In the paper on Promises by Liskov and Shrira, they mention that having integrated futures into call streams, the next logical step would be to talk about stream composition. This means arranging streams into pipelines where the output of one stream can be used as the input of the next stream. They talk about composing streams using fork and coenter.
 
-Modern promise specifications, like one in Javascript comes with methods which help working with promise pipelining easier. In javascript, a Promises.all method is provided, which takes in an iterable over Promises, and returns a new Promise which gets resolved when all the promises in the iterable get resolved. There’s also a race method, which returns a promise which is resolved when the first promise in the iterable gets resolved.
+Modern promise specifications, like the one in Javascript, come with methods that make working with promise pipelining easier. In Javascript, a Promise.all method is provided, which takes in an iterable of Promises and returns a new Promise that gets resolved when all the promises in the iterable get resolved. There’s also a race method, which returns a promise that is resolved as soon as the first promise in the iterable gets resolved. In Scala, futures have an onSuccess method which acts as a callback for when the future is complete. This callback itself can be used to sequentially chain futures together, but that results in bulkier code. Fortunately, the Scala API comes with combinators which allow for easier combination of results from futures. Examples of combinators are map, flatMap, filter and withFilter.

# Handling Errors

-In a synchronous programming model, the most logical way of handling errors is a try...catch block. 
+In a synchronous programming model, the most logical way of handling errors is a try...catch block.

-
- timeline -
+```javascript +try{ + do something1; + do something2; + do something3; + ... +} catch ( exception ){ + HandleException; +} -Unfortunately, the same thing doesn’t directly translate to asynchronous code. +``` + +Unfortunately, the same thing doesn’t directly translate to asynchronous code. + + +```javascript + +foo = doSomethingAsync(); + +try{ + foo(); + // This doesn’t work as the error might not have been thrown yet +} catch ( exception ){ + handleException; +} -
- timeline -
+
+```

In the Javascript world, some patterns emerged, most noticeably the error-first callback style, also adopted by Node. Although this works, it is not very composable, and it eventually takes us back to what is called callback hell. Fortunately, Promises come to the rescue. 
@@ -267,7 +297,7 @@ In modern languages, Promises generally come with two callbacks. One to handle 

	timeline

-In Javascript, Promises also have a catch method, which help deal with errors in a composition. Exceptions in promises behave the same way as they do in a synchronous block of code : they jump to the nearest exception handler. 
+In Javascript, Promises also have a catch method, which helps deal with errors in a composition. Exceptions in promises behave the same way as they do in a synchronous block of code: they jump to the nearest exception handler.
@@ -276,10 +306,22 @@ In Javascript, Promises also have a catch method, which help deal with errors in The same behavior can be written using catch block. -
- timeline -
+```scala + +work("") +.then(work) +.then(error) +.then(work) +.catch(handleError) +.then(check); + +function check(data) { + console.log(data == "1123"); + return Promise.resolve(); +} + +``` #Futures and Promises in Action @@ -291,7 +333,7 @@ Finagle is a protocol-agnostic, asynchronous RPC system for the JVM that makes i ##Correctables -Correctables were introduced by Rachid Guerraoui, Matej Pavlovic, and Dragos-Adrian Seredinschi at OSDI ‘16, in a paper titled Incremental Consistency Guarantees for Replicated Objects. As the title suggests, Correctables aim to solve the problems with consistency in replicated objects. They provide incremental consistency guarantees by capturing successive changes to the value of a replicated object. Applications can opt to receive a fast but possibly inconsistent result if eventual consistency is acceptable, or to wait for a strongly consistent result. Correctables API draws inspiration from, and builds on the API of Promises. Promises have a two state model to represent an asynchronous task, it starts in blocked state and proceeds to a ready state when the value is available. This cannot represent the incremental nature of correctables. Instead, Correctables have a updating state when it starts. From there on, it remains in updating state during intermediate updates, and when the final result is available, it transitions to final state. If an error occurs in between, it moves into an error state. Each state change triggers a callback. +Correctables were introduced by Rachid Guerraoui, Matej Pavlovic, and Dragos-Adrian Seredinschi at OSDI ‘16, in a paper titled Incremental Consistency Guarantees for Replicated Objects. As the title suggests, Correctables aim to solve the problems with consistency in replicated objects. They provide incremental consistency guarantees by capturing successive changes to the value of a replicated object. 
Applications can opt to receive a fast but possibly inconsistent result if eventual consistency is acceptable, or to wait for a strongly consistent result. The Correctables API draws inspiration from, and builds on, the API of Promises. Promises have a two-state model to represent an asynchronous task: a promise starts in a blocked state and proceeds to a ready state when the value is available. This cannot represent the incremental nature of correctables. Instead, a Correctable starts in an updating state. It remains in the updating state during intermediate updates, and when the final result is available, it transitions to a final state. If an error occurs in between, it moves into an error state. Each state change triggers a callback.
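The state model described here can be sketched in JavaScript. This is an illustrative toy only, not the actual Correctables API; the class and method names are invented for the example:

```javascript
// Toy sketch of the Correctable state model: updating -> final | error.
class Correctable {
  constructor() {
    this.state = 'updating';
    this.handlers = { update: [], final: [], error: [] };
  }
  on(event, callback) {           // register a callback per state change
    this.handlers[event].push(callback);
    return this;
  }
  _emit(event, value) {
    this.handlers[event].forEach(cb => cb(value));
  }
  update(value) {                 // intermediate, possibly inconsistent result
    if (this.state !== 'updating') return;
    this._emit('update', value);
  }
  finalize(value) {               // strongly consistent, final result
    if (this.state !== 'updating') return;
    this.state = 'final';
    this._emit('final', value);
  }
  fail(err) {                     // an error ends the sequence of updates
    if (this.state !== 'updating') return;
    this.state = 'error';
    this._emit('error', err);
  }
}

// A replica might deliver a fast, weakly consistent view first,
// followed by the strongly consistent one.
const view = new Correctable();
view.on('update', v => console.log('fast view:', v))
    .on('final', v => console.log('final view:', v));
view.update(41);
view.finalize(42);
```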
timeline -- cgit v1.2.3 From 372c6fac3d1264a84160ae957d136dadae80d3d1 Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Thu, 15 Dec 2016 13:40:50 -0500 Subject: reduced image boundry --- chapter/2/15.png | Bin 15262 -> 25242 bytes 1 file changed, 0 insertions(+), 0 deletions(-) (limited to 'chapter/2') diff --git a/chapter/2/15.png b/chapter/2/15.png index 4f2c188..f61e288 100644 Binary files a/chapter/2/15.png and b/chapter/2/15.png differ -- cgit v1.2.3 From d93e39c28c0112f23ba2e39632421596e547dce5 Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Thu, 15 Dec 2016 13:42:13 -0500 Subject: removed images --- chapter/2/10.png | Bin 9834 -> 0 bytes chapter/2/11.png | Bin 12134 -> 0 bytes chapter/2/12.png | Bin 17071 -> 0 bytes chapter/2/14.png | Bin 11405 -> 0 bytes chapter/2/2.png | Bin 6152 -> 0 bytes chapter/2/3.png | Bin 13719 -> 0 bytes chapter/2/futures.md | 63 +++++++++++++++++++++++++++++++++------------------ 7 files changed, 41 insertions(+), 22 deletions(-) delete mode 100644 chapter/2/10.png delete mode 100644 chapter/2/11.png delete mode 100644 chapter/2/12.png delete mode 100644 chapter/2/14.png delete mode 100644 chapter/2/2.png delete mode 100644 chapter/2/3.png (limited to 'chapter/2') diff --git a/chapter/2/10.png b/chapter/2/10.png deleted file mode 100644 index f54711d..0000000 Binary files a/chapter/2/10.png and /dev/null differ diff --git a/chapter/2/11.png b/chapter/2/11.png deleted file mode 100644 index 7673d90..0000000 Binary files a/chapter/2/11.png and /dev/null differ diff --git a/chapter/2/12.png b/chapter/2/12.png deleted file mode 100644 index 7b2e13f..0000000 Binary files a/chapter/2/12.png and /dev/null differ diff --git a/chapter/2/14.png b/chapter/2/14.png deleted file mode 100644 index 5027666..0000000 Binary files a/chapter/2/14.png and /dev/null differ diff --git a/chapter/2/2.png b/chapter/2/2.png deleted file mode 100644 index a75c08b..0000000 Binary files a/chapter/2/2.png and /dev/null differ diff --git a/chapter/2/3.png 
b/chapter/2/3.png deleted file mode 100644 index 9cc66b5..0000000 Binary files a/chapter/2/3.png and /dev/null differ diff --git a/chapter/2/futures.md b/chapter/2/futures.md index 6e019c7..c264dab 100644 --- a/chapter/2/futures.md +++ b/chapter/2/futures.md 
@@ -4,7 +4,7 @@ title: "Futures" by: "Kisalaya Prasad and Avanti Patil" ---

-#Introduction
+# Introduction

As human beings, we have the ability to multitask, i.e. we can walk, talk and eat at the same time, except when we sneeze. A sneeze is a blocking activity in the normal course of action, because it forces you to stop what you’re doing for a brief moment, after which you resume where you left off. In computer lingo, such multitasking is called multithreading. In contrast to this behaviour, computer processors are single threaded. So when we say that a computer system has a multi-threaded environment, it is actually just an illusion created by the processor, whose time is shared between multiple processes. Sometimes the processor gets blocked when tasks are hindered from normal execution by blocking calls. Such blocking calls can range from IO operations, like reads and writes to disk, to sending and receiving packets over the network. Blocking calls can take a disproportionate amount of time compared to ordinary processing, such as iterating over a list.

@@ -16,11 +16,11 @@ The processor can either handle blocking calls in two ways:

In the world of asynchronous communications, many terminologies were defined to help programmers reach the ideal level of resource utilization. As a part of this article, we will talk about the motivation behind the rise of Promises and Futures, explain the programming model associated with them, discuss the evolution of this programming construct, and finally end with how this construct helps us today in different general-purpose programming languages.

-
+
timeline

-#Motivation
+# Motivation

A “Promise” object represents a value that may not be available yet. A Promise is an object that represents a task with two possible outcomes, success or failure, and holds callbacks that fire when one outcome or the other has occurred.

@@ -81,7 +81,7 @@ Over the years promises and futures have been implemented in different programming languages and created a buzz in the parallel computing world. We will take a look at some of the programming languages that designed frameworks to enhance the performance of applications using Promises and Futures.

-## Fork-Join
+## Thread Pools

Doing things in parallel is usually an effective way of working in modern systems. Systems are getting more and more capable of running more than one thing at once, and the latency associated with distributed environments is not going away anytime soon. Inside the JVM, threads are a basic unit of concurrency. Threads are independent, heap-sharing execution contexts. Threads are generally considered to be lightweight compared to a process, and can share both code and data. The cost of context switching between threads is cheap. But even if we claim that threads are lightweight, the cost of creating and destroying threads in a long-running application can add up to something significant. A practical way to address this problem is to manage a pool of worker threads.

@@ -164,7 +164,7 @@ Each message has a callback function which is fired when the message is processe

Separating when a message is queued from when it is executed means the single thread doesn’t have to wait for an action to complete before moving on to another. We attach a callback to the action we want to do, and when the time comes, the callback is run with the result of our action.
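The queue-and-callback model described here can be caricatured in a few lines of JavaScript; this is a toy sketch for intuition, not how any real runtime is implemented:

```javascript
// A toy message queue: each message pairs a callback with its result.
const queue = [];

function postMessage(callback, result) {
  queue.push({ callback, result });
}

// The "event loop": pick up the oldest message and run its callback
// to completion before moving on to the next one.
function runEventLoop() {
  while (queue.length > 0) {
    const { callback, result } = queue.shift();
    callback(result);
  }
}

postMessage(r => console.log('first message:', r), 1);
postMessage(r => console.log('second message:', r), 2);
runEventLoop();
```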
Callbacks work well in isolation, but they force us into a continuation-passing style of execution, otherwise known as callback hell. 

-
+
timeline
@@ -179,25 +179,25 @@ Let’s take an example to understand the promise resolution workflow as it happ Suppose we execute a function, here g() which in turn, calls function f(). Function f returns a promise, which, after counting down for 1000 ms, resolves the promise with a single value, true. Once f gets resolved, a value true or false is alerted based on the value of the promise. -
+
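The functions from this walkthrough might look as follows; this is a sketch, with `console.log` standing in for the alert so that it runs outside a browser:

```javascript
function f() {
  // Count down 1000 ms, then resolve the promise with the single value true.
  return new Promise(resolve => {
    setTimeout(() => resolve(true), 1000);
  });
}

function g() {
  // Once f's promise resolves, report its value.
  return f().then(value => {
    console.log(value ? 'true' : 'false');
    return value;
  });
}

g();
```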
timeline
Now, javascript’s runtime is single threaded. This statement is true, and not true. The thread which executes the user code is single threaded. It executes what is on top of the stack, runs it to completion, and then moves onto what is next on the stack. But, there are also a number of helper threads which handle things like network or timer/settimeout type events. This timing thread handles the counter for setTimeout. -
+
timeline
Once the timer expires, the timer thread puts a message on the message queue. The queued up messages are then handled by the event loop. The event loop as described above, is simply an infinite loop which checks if a message is ready to be processed, picks it up and puts it on the stack for it’s callback to be executed. -
+
timeline
Here, since the future is resolved with a value of true, we are alerted with a value true when the callback is picked up for execution. -
+
timeline
@@ -211,7 +211,7 @@ We haven’t talked about error handling, but it gets handled the same exact way Event loops have proven to be surprisingly performant. When network servers are designed around multithreading, as soon as you end up with a few hundred concurrent connections, the CPU spends so much of its time task switching that you start to lose overall performance. Switching from one thread to another has overhead which can add up significantly at scale. Apache used to choke even as low as a few hundred concurrent users when using a thread per connection while Node can scale up to a 100,000 concurrent connections based on event loops and asynchronous IO. -##Thread Model +## Thread Model Oz programming language introduced an idea of dataflow concurrency model. In Oz, whenever the program comes across an unbound variable, it waits for it to be resolved. This dataflow property of variables helps us write threads in Oz that communicate through streams in a producer-consumer pattern. The major benefit of dataflow based concurrency model is that it’s deterministic - same operation called with same parameters always produces the same result. It makes it a lot easier to reason about concurrent programs, if the code is side-effect free. @@ -225,7 +225,7 @@ Any expression in Alice can be evaluated in it's own thread using spawn keyword. Alice also allows for lazy evaluation of expressions. Expressions preceded with the lazy keyword are evaluated to a lazy future. The lazy future is evaluated when it is needed. If the computation associated with a concurrent or lazy future ends with an exception, it results in a failed future. Requesting a failed future does not block, it simply raises the exception that was the cause of the failure. -#Implicit vs. Explicit Promises +# Implicit vs. 
Explicit Promises We define Implicit promises as ones where we don’t have to manually trigger the computation vs Explicit promises where we have to trigger the resolution of future manually, either by calling a start function or by requiring the value. This distinction can be understood in terms of what triggers the calculation : With Implicit promises, the creation of a promise also triggers the computation, while with Explicit futures, one needs to triggers the resolution of a promise. This trigger can in turn be explicit, like calling a start method, or implicit, like lazy evaluation where the first use of a promise’s value triggers its evaluation. @@ -238,7 +238,7 @@ Implicit futures were introduced originally by Friedman and Wise in a paper in 1 # Promise Pipelining One of the criticism of traditional RPC systems would be that they’re blocking. Imagine a scenario where you need to call an API ‘a’ and another API ‘b’, then aggregate the results of both the calls and use that result as a parameter to another API ‘c’. Now, the logical way to go about doing this would be to call A and B in parallel, then once both finish, aggregate the result and call C. Unfortunately, in a blocking system, the way to go about is call a, wait for it to finish, call b, wait, then aggregate and call c. This seems like a waste of time, but in absence of asynchronicity, it is impossible. Even with asynchronicity, it gets a little difficult to manage or scale up the system linearly. Fortunately, we have promises. -
+
timeline
@@ -264,8 +264,8 @@ try{ do something3; ... } catch ( exception ){ - HandleException; -} + HandleException; +} ``` @@ -281,7 +281,7 @@ try{ // This doesn’t work as the error might not have been thrown yet } catch ( exception ){ handleException; -} +} ``` @@ -293,14 +293,33 @@ Although most of the earlier papers did not talk about error handling, the Promi In modern languages, Promises generally come with two callbacks. One to handle the success case and other to handle the failure. -
- timeline -
-In Javascript, Promises also have a catch method, which help deal with errors in a composition. Exceptions in promises behave the same way as they do in a synchronous block of code : they jump to the nearest exception handler. +#### In Scala +```scala + +f onComplete { + case Success(data) => handleSuccess(data) + case Failure(e) => handleFailure(e) +} +``` + +#### In Javascript +```javascript + +promise.then(function (data) { + // success callback + console.log(data); +}, function (error) { + // failure callback + console.error(error); +}); +``` -
+In Javascript, Promises have a catch method, which help deal with errors in a composition. Exceptions in promises behave the same way as they do in a synchronous block of code : they jump to the nearest exception handler. + + +
timeline
@@ -315,7 +334,7 @@ work("") .then(work) .catch(handleError) .then(check); - + function check(data) { console.log(data == "1123"); return Promise.resolve(); @@ -335,7 +354,7 @@ Finagle is a protocol-agnostic, asynchronous RPC system for the JVM that makes i ##Correctables Correctables were introduced by Rachid Guerraoui, Matej Pavlovic, and Dragos-Adrian Seredinschi at OSDI ‘16, in a paper titled Incremental Consistency Guarantees for Replicated Objects. As the title suggests, Correctables aim to solve the problems with consistency in replicated objects. They provide incremental consistency guarantees by capturing successive changes to the value of a replicated object. Applications can opt to receive a fast but possibly inconsistent result if eventual consistency is acceptable, or to wait for a strongly consistent result. Correctables API draws inspiration from, and builds on the API of Promises. Promises have a two state model to represent an asynchronous task, it starts in blocked state and proceeds to a ready state when the value is available. This cannot represent the incremental nature of correctables. Instead, Correctables have a updating state when it starts. From there on, it remains in updating state during intermediate updates, and when the final result is available, it transitions to final state. If an error occurs in between, it moves into an error state. Each state change triggers a callback. -
+
timeline
-- cgit v1.2.3 From 8b9054bf97086ebd951b25bda63c4f401f8fda57 Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Thu, 15 Dec 2016 13:49:05 -0500 Subject: sharper image --- chapter/2/15.png | Bin 25242 -> 48459 bytes 1 file changed, 0 insertions(+), 0 deletions(-) (limited to 'chapter/2') diff --git a/chapter/2/15.png b/chapter/2/15.png index f61e288..15a2a81 100644 Binary files a/chapter/2/15.png and b/chapter/2/15.png differ -- cgit v1.2.3 From 9454c5b53bab5f0d8a4f5755af4e4829e9d200b7 Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Thu, 15 Dec 2016 17:29:49 -0500 Subject: moved images to new folder and added code --- chapter/2/1.png | Bin 14176 -> 0 bytes chapter/2/13.png | Bin 21547 -> 0 bytes chapter/2/15.png | Bin 48459 -> 0 bytes chapter/2/4.png | Bin 25404 -> 0 bytes chapter/2/5.png | Bin 20821 -> 0 bytes chapter/2/6.png | Bin 19123 -> 0 bytes chapter/2/7.png | Bin 30068 -> 0 bytes chapter/2/8.png | Bin 13899 -> 0 bytes chapter/2/9.png | Bin 6463 -> 0 bytes chapter/2/futures.md | 112 ++++++++++++++++++++++++++++++++++++++++----------- 10 files changed, 88 insertions(+), 24 deletions(-) delete mode 100644 chapter/2/1.png delete mode 100644 chapter/2/13.png delete mode 100644 chapter/2/15.png delete mode 100644 chapter/2/4.png delete mode 100644 chapter/2/5.png delete mode 100644 chapter/2/6.png delete mode 100644 chapter/2/7.png delete mode 100644 chapter/2/8.png delete mode 100644 chapter/2/9.png (limited to 'chapter/2') diff --git a/chapter/2/1.png b/chapter/2/1.png deleted file mode 100644 index 1d98f19..0000000 Binary files a/chapter/2/1.png and /dev/null differ diff --git a/chapter/2/13.png b/chapter/2/13.png deleted file mode 100644 index a2b8457..0000000 Binary files a/chapter/2/13.png and /dev/null differ diff --git a/chapter/2/15.png b/chapter/2/15.png deleted file mode 100644 index 15a2a81..0000000 Binary files a/chapter/2/15.png and /dev/null differ diff --git a/chapter/2/4.png b/chapter/2/4.png deleted file mode 100644 index 8cfec98..0000000 Binary files 
a/chapter/2/4.png and /dev/null differ diff --git a/chapter/2/5.png b/chapter/2/5.png deleted file mode 100644 index b86de04..0000000 Binary files a/chapter/2/5.png and /dev/null differ diff --git a/chapter/2/6.png b/chapter/2/6.png deleted file mode 100644 index aaafdbd..0000000 Binary files a/chapter/2/6.png and /dev/null differ diff --git a/chapter/2/7.png b/chapter/2/7.png deleted file mode 100644 index 7183fb6..0000000 Binary files a/chapter/2/7.png and /dev/null differ diff --git a/chapter/2/8.png b/chapter/2/8.png deleted file mode 100644 index d6d2e0e..0000000 Binary files a/chapter/2/8.png and /dev/null differ diff --git a/chapter/2/9.png b/chapter/2/9.png deleted file mode 100644 index 1b67a45..0000000 Binary files a/chapter/2/9.png and /dev/null differ diff --git a/chapter/2/futures.md b/chapter/2/futures.md index c264dab..5ab4c3e 100644 --- a/chapter/2/futures.md +++ b/chapter/2/futures.md @@ -17,7 +17,7 @@ In the world of asynchronous communications many terminologies were defined to h
- timeline + timeline
# Motivation @@ -50,11 +50,10 @@ Among the modern languages, Python was perhaps the first to come up with somethi Promises and javascript have an interesting history. In 2007 inspired by Python’s twisted, dojo came up with it’s own implementation of of dojo.Deferred. This inspired Kris Zyp to then come up with the CommonJS Promises/A spec in 2009. Ryan Dahl introduced the world to NodeJS in the same year. In it’s early versions, Node used promises for the non-blocking API. When NodeJS moved away from promises to its now familiar error-first callback API, it left a void for a promises API. Q.js was an implementation of Promises/A spec by Kris Kowal around this time. FuturesJS library by AJ ONeal was another library which aimed to solve flow-control problems without using Promises in the strictest of senses. In 2011, JQuery v1.5 first introduced Promises to its wider and ever-growing audience. The API for JQuery was subtly different than the Promises/A spec. With the rise of HTML5 and different APIs, there came a problem of different and messy interfaces. A+ promises aimed to solve this problem. From this point on, leading from widespread adoption of A+ spec, promises was finally made a part of ECMAScript® 2015 Language Specification. Still, a lack of backward compatibility and additional features provided means that libraries like BlueBird and Q.js still have a place in the javascript ecosystem. -#Different Definitions +# Different Definitions -Future, promise, Delay or Deferred generally refer to same synchronisation mechanism where an object acts as a proxy for a yet unknown result. When the result is discovered, promises hold some code which then gets executed. The definitions have changed a little over the years but the idea remained the same. - +Future, promise, Delay or Deferred generally refer to same synchronisation mechanism where an object acts as a proxy for a yet unknown result. 
When the result is discovered, promises hold some code which then gets executed. In some languages however, there is a subtle difference between what is a Future and a Promise. “A ‘Future’ is a read-only reference to a yet-to-be-computed value”. @@ -79,7 +78,7 @@ In Java 8, the Future interface has methods to check if the computation is co # Semantics of Execution -Over the years promises and futures have been implemented in different programming languages and created a buzz in parallel computing world. We will take a look at some of the programming languages who designed frameworks to enhance performance of applications using Promises and futures. +Over the years promises and futures have been implemented in different programming languages. Different languages chose to implement futures/promises in a different way. In this section, we try to introduce some different ways in which futures and promises actually get executed and resolved underneath their APIs. ## Thread Pools @@ -164,11 +163,52 @@ Each message has a callback function which is fired when the message is processe Separating when a message is queued from when it is executed means the single thread doesn’t have to wait for an action to complete before moving on to another. We attach a callback to the action we want to do, and when the time comes, the callback is run with the result of our action. Callbacks work good in isolation, but they force us into a continuation passing style of execution, what is otherwise known as Callback hell. -
- timeline -
-**Programs must be written for people to read, and only incidentally for machines to execute.** - *Harold Abelson and Gerald Jay Sussman* +```javascript + +getData = function(param, callback){ + $.get('http://example.com/get/'+param, + function(responseText){ + callback(responseText); + }); +} + +getData(0, function(a){ + getData(a, function(b){ + getData(b, function(c){ + getData(c, function(d){ + getData(d, function(e){ + + }); + }); + }); + }); +}); + +``` + +

VS

+ +```javascript + +getData = function(param, callback){ + return new Promise(function(resolve, reject) { + $.get('http://example.com/get/'+param, + function(responseText){ + resolve(responseText); + }); + }); +} + +getData(0).then(getData) + .then(getData). + then(getData). + then(getData); + + +``` + +> **Programs must be written for people to read, and only incidentally for machines to execute.** - *Harold Abelson and Gerald Jay Sussman* Promises are an abstraction which make working with async operations in javascript much more fun. Moving on from a continuation passing style, where you specify what needs to be done once the action is done, the callee simply returns a Promise object. This inverts the chain of responsibility, as now the caller is responsible for handling the result of the promise when it is settled. @@ -180,25 +220,25 @@ Suppose we execute a function, here g() which in turn, calls function f(). Funct
- timeline + timeline
Now, javascript’s runtime is single threaded. This statement is true, and not true. The thread which executes the user code is single threaded. It executes what is on top of the stack, runs it to completion, and then moves onto what is next on the stack. But, there are also a number of helper threads which handle things like network or timer/settimeout type events. This timing thread handles the counter for setTimeout.
- timeline + timeline
Once the timer expires, the timer thread puts a message on the message queue. The queued up messages are then handled by the event loop. The event loop as described above, is simply an infinite loop which checks if a message is ready to be processed, picks it up and puts it on the stack for it’s callback to be executed.
- timeline + timeline
Here, since the future is resolved with a value of true, we are alerted with a value true when the callback is picked up for execution.
- timeline + timeline
Some finer details : @@ -239,7 +279,7 @@ Implicit futures were introduced originally by Friedman and Wise in a paper in 1 One of the criticism of traditional RPC systems would be that they’re blocking. Imagine a scenario where you need to call an API ‘a’ and another API ‘b’, then aggregate the results of both the calls and use that result as a parameter to another API ‘c’. Now, the logical way to go about doing this would be to call A and B in parallel, then once both finish, aggregate the result and call C. Unfortunately, in a blocking system, the way to go about is call a, wait for it to finish, call b, wait, then aggregate and call c. This seems like a waste of time, but in absence of asynchronicity, it is impossible. Even with asynchronicity, it gets a little difficult to manage or scale up the system linearly. Fortunately, we have promises.
- timeline + timeline
Futures/Promises can be passed along, waited upon, or chained and joined together. These properties helps make life easier for the programmers working with them. This also reduces the latency associated with distributed computing. Promises enable dataflow concurrency, which is also deterministic, and easier to reason. @@ -319,14 +359,38 @@ promise.then(function (data) { In Javascript, Promises have a catch method, which help deal with errors in a composition. Exceptions in promises behave the same way as they do in a synchronous block of code : they jump to the nearest exception handler. -
- timeline -
+```javascript +function work(data) { + return Promise.resolve(data+"1"); +} + +function error(data) { + return Promise.reject(data+"2"); +} + +function handleError(error) { + return error +"3"; +} + + +work("") +.then(work) +.then(error) +.then(work) // this will be skipped +.then(work, handleError) +.then(check); + +function check(data) { + console.log(data == "1123"); + return Promise.resolve(); +} + +``` The same behavior can be written using catch block. -```scala +```javascript work("") .then(work) @@ -342,27 +406,27 @@ function check(data) { ``` -#Futures and Promises in Action +# Futures and Promises in Action -##Twitter Finagle +## Twitter Finagle Finagle is a protocol-agnostic, asynchronous RPC system for the JVM that makes it easy to build robust clients and servers in Java, Scala, or any JVM-hosted language. It uses idea of Futures to encapsulate concurrent tasks and are analogous to threads, but even more lightweight. -##Correctables +## Correctables Correctables were introduced by Rachid Guerraoui, Matej Pavlovic, and Dragos-Adrian Seredinschi at OSDI ‘16, in a paper titled Incremental Consistency Guarantees for Replicated Objects. As the title suggests, Correctables aim to solve the problems with consistency in replicated objects. They provide incremental consistency guarantees by capturing successive changes to the value of a replicated object. Applications can opt to receive a fast but possibly inconsistent result if eventual consistency is acceptable, or to wait for a strongly consistent result. Correctables API draws inspiration from, and builds on the API of Promises. Promises have a two state model to represent an asynchronous task, it starts in blocked state and proceeds to a ready state when the value is available. This cannot represent the incremental nature of correctables. Instead, Correctables have a updating state when it starts. 
From there on, it remains in updating state during intermediate updates, and when the final result is available, it transitions to final state. If an error occurs in between, it moves into an error state. Each state change triggers a callback.
- timeline + timeline
-##Folly Futures +## Folly Futures Folly is a library by Facebook for asynchronous C++ inspired by the implementation of Futures by Twitter for Scala. It builds upon the Futures in the C++11 Standard. Like Scala’s futures, they also allow for implementing a custom executor which provides different ways of running a Future (thread pool, event loop etc). -##NodeJS Fiber +## NodeJS Fiber Fibers provide coroutine support for v8 and node. Applications can use Fibers to allow users to write code without using a ton of callbacks, without sacrificing the performance benefits of asynchronous IO. Think of fibers as light-weight threads for nodejs where the scheduling is in the hands of the programmer. The node-fibers library doesn’t recommend using raw API and code together without any abstractions, and provides a Futures implementation which is ‘fiber-aware’. ## References -- cgit v1.2.3 From 94626547d5c756dc0f19f4d31f65ba5eb9df992f Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Thu, 15 Dec 2016 22:36:37 -0500 Subject: moved images and fixed comments --- chapter/2/futures.md | 25 +++++++++++++------------ chapter/2/images/1.png | Bin 0 -> 14176 bytes chapter/2/images/15.png | Bin 0 -> 48459 bytes chapter/2/images/5.png | Bin 0 -> 20821 bytes chapter/2/images/6.png | Bin 0 -> 19123 bytes chapter/2/images/7.png | Bin 0 -> 30068 bytes chapter/2/images/8.png | Bin 0 -> 13899 bytes chapter/2/images/9.png | Bin 0 -> 6463 bytes 8 files changed, 13 insertions(+), 12 deletions(-) create mode 100644 chapter/2/images/1.png create mode 100644 chapter/2/images/15.png create mode 100644 chapter/2/images/5.png create mode 100644 chapter/2/images/6.png create mode 100644 chapter/2/images/7.png create mode 100644 chapter/2/images/8.png create mode 100644 chapter/2/images/9.png (limited to 'chapter/2') diff --git a/chapter/2/futures.md b/chapter/2/futures.md index 5ab4c3e..612ed8e 100644 --- a/chapter/2/futures.md +++ b/chapter/2/futures.md @@ -25,7 +25,6 @@ In the world of asynchronous 
communications many terminologies were defined to h A “Promise” object represents a value that may not be available yet. A Promise is an object that represents a task with two possible outcomes, success or failure and holds callbacks that fire when one outcome or the other has occurred. - The rise of promises and futures as a topic of relevance can be traced parallel to the rise of asynchronous or distributed systems. This seems natural, since futures represent a value available in Future which fits in very naturally with the latency which is inherent to these heterogeneous systems. The recent adoption of NodeJS and server side Javascript has only made promises more relevant. But, the idea of having a placeholder for a result came in significantly before than the current notion of futures and promises. @@ -47,7 +46,7 @@ E is an object-oriented programming language for secure distributed computing, c Among the modern languages, Python was perhaps the first to come up with something on the lines of E’s promises with the Twisted library. Coming out in 2002, it had a concept of Deferred objects, which were used to receive the result of an operation not yet completed. They were just like normal objects and could be passed along, but they didn’t have a value. They supported a callback which would get called once the result of the operation was complete. -Promises and javascript have an interesting history. In 2007 inspired by Python’s twisted, dojo came up with it’s own implementation of of dojo.Deferred. This inspired Kris Zyp to then come up with the CommonJS Promises/A spec in 2009. Ryan Dahl introduced the world to NodeJS in the same year. In it’s early versions, Node used promises for the non-blocking API. When NodeJS moved away from promises to its now familiar error-first callback API, it left a void for a promises API. Q.js was an implementation of Promises/A spec by Kris Kowal around this time. 
FuturesJS library by AJ ONeal was another library which aimed to solve flow-control problems without using Promises in the strictest of senses. In 2011, JQuery v1.5 first introduced Promises to its wider and ever-growing audience. The API for JQuery was subtly different than the Promises/A spec. With the rise of HTML5 and different APIs, there came a problem of different and messy interfaces. A+ promises aimed to solve this problem. From this point on, leading from widespread adoption of A+ spec, promises was finally made a part of ECMAScript® 2015 Language Specification. Still, a lack of backward compatibility and additional features provided means that libraries like BlueBird and Q.js still have a place in the javascript ecosystem.

+Promises and JavaScript have an interesting history. In 2007, inspired by Python’s Twisted, Dojo came up with its own implementation, dojo.Deferred. This inspired Kris Zyp to then come up with the CommonJS Promises/A spec in 2009. Ryan Dahl introduced the world to NodeJS in the same year. In its early versions, Node used promises for the non-blocking API. When NodeJS moved away from promises to its now familiar error-first callback API (the first argument for the callback should be an error object), it left a void for a promises API. Q.js was an implementation of the Promises/A spec by Kris Kowal around this time. The FuturesJS library by AJ ONeal was another library which aimed to solve flow-control problems without using Promises in the strictest of senses. In 2011, jQuery v1.5 first introduced Promises to its wider and ever-growing audience. The API for jQuery was subtly different from the Promises/A spec. With the rise of HTML5 and different APIs, there came a problem of different and messy interfaces which added to the already infamous callback hell. A+ promises aimed to solve this problem. From this point on, following the widespread adoption of the A+ spec, promises were finally made a part of the ECMAScript® 2015 Language Specification.
Still, a lack of backward compatibility and additional features provided means that libraries like BlueBird and Q.js still have a place in the javascript ecosystem. # Different Definitions @@ -60,21 +59,23 @@ In some languages however, there is a subtle difference between what is a Future “A ‘Promise’ is a pretty much the same except that you can write to it as well.” -In other words, you can read from both Futures and Promises, but you can only write to Promises. You can get the Future associated with a Promise by calling the future method on it, but conversion in the other direction is not possible. Another way to look at it would be, if you Promise something, you are responsible for keeping it, but if someone else makes a Promise to you, you expect them to honor it in Future. +In other words, a future is a read-only window to a value written into a promise. You can get the Future associated with a Promise by calling the future method on it, but conversion in the other direction is not possible. Another way to look at it would be, if you Promise something, you are responsible for keeping it, but if someone else makes a Promise to you, you expect them to honor it in Future. More technically, in Scala, “SIP-14 – Futures and Promises” defines them as follows: A future is as a placeholder object for a result that does not yet exist. A promise is a writable, single-assignment container, which completes a future. Promises can complete the future with a result to indicate success, or with an exception to indicate failure. +An important difference between Scala and Java (6) futures is that Scala futures were asynchronous in nature. Java's future, at least till Java 6, were blocking. Java 7 introduced the Futures as the asynchronous construct which are more familiar in the distributed computing world. + -C# also makes the distinction between futures and promises. 
In C#, futures are implemented as Task and in fact in earlier versions of the Task Parallel Library futures were implemented with a class Future which later became Task. The result of the future is available in the readonly property Task.Result which returns T

+In Java 8, the Future interface has methods to check if the computation is complete, to wait for its completion, and to retrieve the result of the computation when it is complete. A CompletableFuture can be thought of as a Promise, as its value can be set. But it also implements the Future interface, and therefore it can be used as a Future too. Promises can be thought of as a future with a public set method which the caller (or anybody else) can use to set the value of the future.

In the JavaScript world, jQuery introduces a notion of Deferred objects which are used to represent a unit of work which is not yet finished. A Deferred object contains a promise object which represents the result of that unit of work. Promises are values returned by a function, while the deferred object can be canceled by its caller.

-In Java 8, the Future interface has methods to check if the computation is complete, to wait for its completion, and to retrieve the result of the computation when it is complete. CompletableFutures can be thought of as Promises as their value can be set. But it also implements the Future interface and therefore it can be used as a Future too. Promises can be thought of as a future with a public set method which the caller (or anybody else) can use to set the value of the future.

+C# also makes the distinction between futures and promises. In C#, futures are implemented as Task and, in fact, in earlier versions of the Task Parallel Library futures were implemented with a class Future which later became Task. The result of the future is available in the read-only property Task.Result, which returns T. Tasks are asynchronous in C#.
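This read/write split is easy to sketch with JavaScript's built-in Promise constructor. The `createDeferred` helper below is purely illustrative (modern JavaScript exposes no built-in Deferred; jQuery's is similar in spirit): the producer holds the deferred, which carries the write side, while consumers only ever see the read-only promise.

```javascript
// Sketch of the future/promise (read/write) split in plain JavaScript.
// `createDeferred` is an illustrative helper, not a standard API.
function createDeferred() {
  const deferred = {};
  deferred.promise = new Promise((resolve, reject) => {
    deferred.resolve = resolve; // write side: fulfil the promise
    deferred.reject = reject;   // write side: fail the promise
  });
  return deferred;
}

const d = createDeferred();
// Consumers get only d.promise: they can read (then/await) but not write.
d.promise.then(value => console.log('read side saw:', value));
// Only the holder of the deferred can supply the value.
d.resolve(42);
```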
# Semantics of Execution @@ -82,7 +83,7 @@ Over the years promises and futures have been implemented in different programmi ## Thread Pools -Doing things in parallel is usually an effective way of doing things in modern systems. The systems are getting more and more capable of running more than one things at once, and the latency associated with doing things in a distributed environment is not going away anytime soon. Inside the JVM, threads are a basic unit of concurrency. Threads are independent, heap-sharing execution contexts. Threads are generally considered to be lightweight when compared to a process, and can share both code and data. The cost of context switching between threads is cheap. But, even if we claim that threads are lightweight, the cost of creation and destruction of threads in a long running threads can add up to something significant. A practical way is address this problem is to manage a pool of worker threads. +Thread pools are a group of ready, idle threads which can be given work. They help with the overhead of worker creation, which can add up in a long running process. The actual implementation may vary everywhere, but what differentiates thread pools is the number of threads it uses. It can either be fixed, or dynamic. Advantage of having a fixed thread pool is that it degrades gracefully : the amount of load a system can handle is fixed, and using fixed thread pool, we can effectively limit the amount of load it is put under. Granularity of a thread pool is the number of threads it instantiates. In Java executor is an object which executes the Runnable tasks. Executors provides a way of abstracting out how the details of how a task will actually run. These details, like selecting a thread to run the task, how the task is scheduled are managed by the object implementing the Executor interface. Threads are an example of a Runnable in java. Executors can be used instead of creating a thread explicitly. 
@@ -94,11 +95,9 @@ Similar to Executor, there is an ExecutionContext as part of scala.concurrent. T ExecutionContext.global is an execution context backed by a ForkJoinPool. ForkJoin is a thread pool implementation designed to take advantage of a multiprocessor environment. What makes fork join unique is that it implements a type of work-stealing algorithm : idle threads pick up work from still busy threads. ForkJoinPool manages a small number of threads, usually limited to the number of processor cores available. It is possible to increase the number of threads, if all of the available threads are busy and wrapped inside a blocking call, although such situation would typically come with a bad system design. ForkJoin framework work to avoid pool-induced deadlock and minimize the amount of time spent switching between the threads. -Futures are generally a good way to reason about asynchronous code. A good way to call a webservice, add a block of code to do something when you get back the response, and move on without waiting for the response. They’re also a good framework to reason about concurrency as they can be executed in parallel, waited on, are composable, immutable once written and most importantly, are non blocking. in Scala, futures (and promises) are based on ExecutionContext. - - -In Scala, futures are created using an ExecutionContext. This gives the users flexibility to implement their own ExecutionContext if they need a specific behavior, like blocking futures. The default ForkJoin pool works well in most of the scenarios. Futures in scala are placeholders for a yet unknown value. A promise then can be thought of as a way to provide that value. A promise p completes the future returned by p.future. +Futures are generally a good way to reason about asynchronous code. A good way to call a web service, add a block of code to do something when you get back the response, and move on without waiting for the response. 
They’re also a good framework to reason about concurrency as they can be executed in parallel, waited on, are composable, immutable once written and most importantly, are non blocking. in Scala, futures (and promises) are based on ExecutionContext. +Using ExecutionContext gives users flexibility to implement their own ExecutionContext if they need a specific behavior, like blocking futures. The default ForkJoin pool works well in most of the scenarios. Scala futures api expects an ExecutionContext to be passed along. This parameter is implicit, and usually ExecutionContext.global. An example : @@ -210,7 +209,8 @@ getData(0).then(getData) > **Programs must be written for people to read, and only incidentally for machines to execute.** - *Harold Abelson and Gerald Jay Sussman* -Promises are an abstraction which make working with async operations in javascript much more fun. Moving on from a continuation passing style, where you specify what needs to be done once the action is done, the callee simply returns a Promise object. This inverts the chain of responsibility, as now the caller is responsible for handling the result of the promise when it is settled. + +Promises are an abstraction which make working with async operations in javascript much more fun. Callbacks lead to inversion of control, which is difficult to reason about at scale. Moving on from a continuation passing style, where you specify what needs to be done once the action is done, the callee simply returns a Promise object. This inverts the chain of responsibility, as now the caller is responsible for handling the result of the promise when it is settled. The ES2015 spec specifies that “promises must not fire their resolution/rejection function on the same turn of the event loop that they are created on.” This is an important property because it ensures deterministic order of execution. Also, once a promise is fulfilled or failed, the promise’s value MUST not be changed. 
This ensures that a promise cannot be resolved more than once. @@ -275,6 +275,7 @@ The idea for explicit futures were introduced in the Baker and Hewitt paper. The Implicit futures were introduced originally by Friedman and Wise in a paper in 1978. The ideas presented in that paper inspired the design of promises in MultiLisp. Futures are also implicit in Scala and Javascript, where they’re supported as libraries on top of the core languages. Implicit futures can be implemented this way as they don’t require support from language itself. Alice ML’s concurrent futures are also an example of implicit invocation. + # Promise Pipelining One of the criticism of traditional RPC systems would be that they’re blocking. Imagine a scenario where you need to call an API ‘a’ and another API ‘b’, then aggregate the results of both the calls and use that result as a parameter to another API ‘c’. Now, the logical way to go about doing this would be to call A and B in parallel, then once both finish, aggregate the result and call C. Unfortunately, in a blocking system, the way to go about is call a, wait for it to finish, call b, wait, then aggregate and call c. This seems like a waste of time, but in absence of asynchronicity, it is impossible. Even with asynchronicity, it gets a little difficult to manage or scale up the system linearly. Fortunately, we have promises. @@ -427,7 +428,7 @@ Folly is a library by Facebook for asynchronous C++ inspired by the implementati ## NodeJS Fiber -Fibers provide coroutine support for v8 and node. Applications can use Fibers to allow users to write code without using a ton of callbacks, without sacrificing the performance benefits of asynchronous IO. Think of fibers as light-weight threads for nodejs where the scheduling is in the hands of the programmer. The node-fibers library doesn’t recommend using raw API and code together without any abstractions, and provides a Futures implementation which is ‘fiber-aware’. 
+Fibers provide coroutine support for v8 and node. Applications can use Fibers to allow users to write code without using a ton of callbacks, without sacrificing the performance benefits of asynchronous IO. Think of fibers as light-weight threads for NodeJs where the scheduling is in the hands of the programmer. The node-fibers library doesn’t recommend using raw API and code together without any abstractions, and provides a Futures implementation which is ‘fiber-aware’. ## References diff --git a/chapter/2/images/1.png b/chapter/2/images/1.png new file mode 100644 index 0000000..1d98f19 Binary files /dev/null and b/chapter/2/images/1.png differ diff --git a/chapter/2/images/15.png b/chapter/2/images/15.png new file mode 100644 index 0000000..15a2a81 Binary files /dev/null and b/chapter/2/images/15.png differ diff --git a/chapter/2/images/5.png b/chapter/2/images/5.png new file mode 100644 index 0000000..b86de04 Binary files /dev/null and b/chapter/2/images/5.png differ diff --git a/chapter/2/images/6.png b/chapter/2/images/6.png new file mode 100644 index 0000000..aaafdbd Binary files /dev/null and b/chapter/2/images/6.png differ diff --git a/chapter/2/images/7.png b/chapter/2/images/7.png new file mode 100644 index 0000000..7183fb6 Binary files /dev/null and b/chapter/2/images/7.png differ diff --git a/chapter/2/images/8.png b/chapter/2/images/8.png new file mode 100644 index 0000000..d6d2e0e Binary files /dev/null and b/chapter/2/images/8.png differ diff --git a/chapter/2/images/9.png b/chapter/2/images/9.png new file mode 100644 index 0000000..1b67a45 Binary files /dev/null and b/chapter/2/images/9.png differ -- cgit v1.2.3 From 3dc8ca64299e1cfc53b194174d15f8449246b985 Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Fri, 16 Dec 2016 01:40:39 -0500 Subject: added bib --- chapter/2/futures.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) (limited to 'chapter/2') diff --git a/chapter/2/futures.md b/chapter/2/futures.md index 612ed8e..4e17472 100644 --- 
a/chapter/2/futures.md +++ b/chapter/2/futures.md @@ -275,6 +275,22 @@ The idea for explicit futures were introduced in the Baker and Hewitt paper. The Implicit futures were introduced originally by Friedman and Wise in a paper in 1978. The ideas presented in that paper inspired the design of promises in MultiLisp. Futures are also implicit in Scala and Javascript, where they’re supported as libraries on top of the core languages. Implicit futures can be implemented this way as they don’t require support from language itself. Alice ML’s concurrent futures are also an example of implicit invocation.

+In Scala, although the futures are implicit, Promises can be used to have an explicit-like behavior. This is useful in a scenario where we need to stack up some computations and then resolve the Promise.
+
+An example :
+
+```scala
+import scala.concurrent.Promise
+import scala.concurrent.ExecutionContext.Implicits.global
+
+class Foo
+
+val p = Promise[Foo]()
+
+// computations stacked on the yet-unresolved future
+p.future.map(_.toString).filter(_.nonEmpty) foreach println
+
+// completing the promise runs the stacked computations
+p.success(new Foo)
+```
+
+Here, we create a Promise, and complete it later. In between we stack up a set of computations which get executed once the promise is completed.
+
 # Promise Pipelining

 One of the criticisms of traditional RPC systems would be that they’re blocking. Imagine a scenario where you need to call an API ‘a’ and another API ‘b’, then aggregate the results of both the calls and use that result as a parameter to another API ‘c’. Now, the logical way to go about doing this would be to call ‘a’ and ‘b’ in parallel, then once both finish, aggregate the result and call ‘c’. Unfortunately, in a blocking system, the way to go about it is to call ‘a’, wait for it to finish, call ‘b’, wait, then aggregate and call ‘c’. This seems like a waste of time, but in the absence of asynchronicity, it is impossible. Even with asynchronicity, it gets a little difficult to manage or scale up the system linearly. Fortunately, we have promises.
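The same scenario can be sketched with JavaScript promises, where `a`, `b` and `c` are hypothetical stand-ins for the three APIs: `a` and `b` run concurrently, their results are aggregated, and the aggregate flows into `c`, all without blocking the caller.

```javascript
// a, b and c are hypothetical stand-ins for the three remote APIs.
const a = () => Promise.resolve(2);
const b = () => Promise.resolve(3);
const c = sum => Promise.resolve(sum * 10);

function pipeline() {
  return Promise.all([a(), b()])          // a and b run concurrently
    .then(([resA, resB]) => resA + resB)  // aggregate both results
    .then(c);                             // feed the aggregate to c
}

pipeline().then(result => console.log(result)); // logs 50
```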
-- cgit v1.2.3 From db54d6db890d4c8e99e138095af8cd8e20755acc Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Fri, 16 Dec 2016 13:34:04 -0500 Subject: fixed motivation adding more details --- chapter/2/futures.md | 28 ++++++++++++---------------- 1 file changed, 12 insertions(+), 16 deletions(-) (limited to 'chapter/2') diff --git a/chapter/2/futures.md b/chapter/2/futures.md index 4e17472..5f8bd74 100644 --- a/chapter/2/futures.md +++ b/chapter/2/futures.md @@ -11,9 +11,9 @@ As human beings we have an ability to multitask ie. we can walk, talk and eat at The processor can either handle blocking calls in two ways: - **Synchronous method**: As a part of running task in synchronous method, processor continues to wait for the blocking call to complete the task and return the result. After this processor will resume processing next task. Problem with this kind of method is CPU time not utilized in an ideal manner. -- **Asynchronous method**: When you add asynchrony, you can utilize the time of CPU to work on some other task using one of the preemptive time sharing algorithm. Now when the asynchronous call returns the result, processor can again switch back to the previous process using preemption and resume the process from the point where it’d left off. +- **Asynchronous method**: When you add asynchrony, you can utilize the time of CPU to work on some other task using one of the preemptive time sharing algorithm. This is not blocking the processor at any time and when the asynchronous call returns the result, processor can again switch back to the previous process using preemption and resume the process from the point where it’d left off. -In the world of asynchronous communications many terminologies were defined to help programmers reach the ideal level of resource utilization. 
As a part of this article we will talk about motivation behind rise of Promises and Futures, we will explain programming model associated with it and discuss evolution of this programming construct, finally we will end this discussion with how this construct helps us today in different general purpose programming languages. +In the world of asynchronous communications many terminologies were defined to help programmers reach the ideal level of resource utilization. As a part of this article we will talk about motivation behind rise of Promises and Futures, how the current notion we have of futures and promises have evolved over time, try to explain various execution models associated with it and finally we will end this discussion with how this construct helps us today in different general purpose programming languages.
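As a small sketch of the asynchronous method described above, the snippet below simulates the slow blocking call with `setTimeout` (an assumption standing in for real I/O): the caller issues the call, keeps working, and resumes via the callback once the result arrives.

```javascript
// The slow operation is simulated with setTimeout; nothing here
// blocks the event loop while the "call" is in flight.
const order = [];

function slowCall(callback) {
  setTimeout(() => callback('result'), 10); // stands in for slow I/O
}

order.push('call issued');
slowCall(result => {
  order.push('resumed with ' + result); // resume where we left off
  console.log(order.join(' -> '));
});
order.push('other work done'); // runs before the result arrives
// prints: call issued -> other work done -> resumed with result
```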
@@ -22,13 +22,9 @@ In the world of asynchronous communications many terminologies were defined to h # Motivation +The rise of promises and futures as a topic of relevance can be traced parallel to the rise of asynchronous or distributed systems. This seems natural, since futures represent a value available in Future which fits in very naturally with the latency which is inherent to these heterogeneous systems. The recent adoption of NodeJS and server side Javascript has only made promises more relevant. But, the idea of having a placeholder for a result came in significantly before than the current notion of futures and promises. As we will see in further sections, this idea of having a *"placeholder for a value that might not be available"* has changed meanings over time. -A “Promise” object represents a value that may not be available yet. A Promise is an object that represents a task with two possible outcomes, success or failure and holds callbacks that fire when one outcome or the other has occurred. - -The rise of promises and futures as a topic of relevance can be traced parallel to the rise of asynchronous or distributed systems. This seems natural, since futures represent a value available in Future which fits in very naturally with the latency which is inherent to these heterogeneous systems. The recent adoption of NodeJS and server side Javascript has only made promises more relevant. But, the idea of having a placeholder for a result came in significantly before than the current notion of futures and promises. - - -Thunks can be thought of as a primitive notion of a Future or Promise. According to its inventor P. Z. Ingerman, thunks are "A piece of coding which provides an address". They were designed as a way of binding actual parameters to their formal definitions in Algol-60 procedure calls. 
If a procedure is called with an expression in the place of a formal parameter, the compiler generates a thunk which computes the expression and leaves the address of the result in some standard location. +Thunks can be thought of as a primitive notion of a Future or Promise. According to its inventor P. Z. Ingerman, thunks are "A piece of coding which provides an address". {% cite 23 --file futures %} They were designed as a way of binding actual parameters to their formal definitions in Algol-60 procedure calls. If a procedure is called with an expression in the place of a formal parameter, the compiler generates a thunk which computes the expression and leaves the address of the result in some standard location. The first mention of Futures was by Baker and Hewitt in a paper on Incremental Garbage Collection of Processes. They coined the term - call-by-futures to describe a calling convention in which each formal parameter to a method is bound to a process which evaluates the expression in the parameter in parallel with other parameters. Before this paper, Algol 68 also presented a way to make this kind of concurrent parameter evaluation possible, using the collateral clauses and parallel clauses for parameter binding. @@ -37,10 +33,12 @@ The first mention of Futures was by Baker and Hewitt in a paper on Incremental G In their paper, Baker and Hewitt introduced a notion of Futures as a 3-tuple representing an expression E consisting of (1) A process which evaluates E, (2) A memory location where the result of E needs to be stored, (3) A list of processes which are waiting on E. But, the major focus of their work was not on role of futures and the role they play in Asynchronous distributed computing, and focused on garbage collecting the processes which evaluate expressions not needed by the function. -The Multilisp language, presented by Halestead in 1985 built upon this call-by-future with a Future annotation. 
Binding a variable to a future expression creates a process which evaluates that expression and binds x to a token which represents its (eventual) result. This design of futures influenced the paper of design of Promises in Argus by Liskov and Shrira in 1988. Building upon the initial design of Future in Multilisp, they extended the original idea by introducing strongly typed Promises and integration with call streams.This made it easier to handle exception propagation from callee to the caller and also to handle the typical problems in a multi-computer system like network failures. This paper also talked about stream composition, a notion which is similar to promise pipelining today.

+The Multilisp language, presented by Halstead in 1985, built upon this call-by-future with a Future annotation. Binding a variable to a future expression creates a process which evaluates that expression and binds x to a token which represents its (eventual) result. It allowed an operation to move past the actual computation without waiting for it to complete. If the value is never used, the current computation will not pause. MultiLisp also had a lazy future construct, called Delay, which only gets evaluated when the value is first required.
+
+ This design of futures influenced the design of Promises in Argus by Liskov and Shrira in 1988. Both futures in MultiLisp and Promises in Argus provisioned for the result of a call to be picked up later. Building upon the initial design of Future in MultiLisp, they extended the original idea by introducing strongly typed Promises and integration with call streams. Call streams are a language-independent communication mechanism connecting a sender and a receiver in a distributed programming environment. It is used to make calls from sender to receiver like normal RPC. In addition, the sender could also make stream-calls, where it chooses not to wait for the reply and can make further calls.
Stream calls seem like a good use-case for a placeholder to access the result of a call in the future : Promises. Call streams also had provisions for handling network failures. This made it easier to handle exception propagation from callee to the caller and also to handle the typical problems in a multi-computer system. This paper also talked about stream composition. The call-streams could be arranged in pipelines where output of one stream could be used as input on next stream. This notion is not much different to what is known as promise pipelining today, which will be introduced in more details later. -E is an object-oriented programming language for secure distributed computing, created by Mark S. Miller, Dan Bornstein, and others at Electric Communities in 1997. One of the major contribution of E was the first non-blocking implementation of Promises. It traces its routes to Joule which was a dataflow programming language. The notion of promise pipelining in E is inherited from Joule. +E is an object-oriented programming language for secure distributed computing, created by Mark S. Miller, Dan Bornstein, and others at Electric Communities in 1997. One of the major contribution of E was the first non-blocking implementation of Promises. It traces its routes to Joule which was a dataflow programming language. E had an eventually operator, * <- * . This created what is called an eventual send in E : the program doesn't wait for the operation to complete and moves to next sequential statement. Eventual-sends queue a pending delivery and complete immediately, returning a promise. A pending delivery includes a resolver for the promise. Further messages can also be eventually send to a promise before it is resolved. These messages are queued up and forwarded once the promise is resolved. The notion of promise pipelining in E is also inherited from Joule. 
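JavaScript promises keep a faint echo of E's eventual sends: handlers attached to a still-pending promise are queued, and are delivered in order only once the promise resolves. This is a loose analogy rather than E's actual message queuing, but the shape is similar:

```javascript
// Handlers registered on a pending promise are queued, loosely like
// messages eventually-sent to an unresolved E promise.
let resolveLater;
const pending = new Promise(resolve => { resolveLater = resolve; });

const log = [];
pending.then(v => log.push('first handler: ' + v));  // queued
pending.then(v => log.push('second handler: ' + v)); // queued

log.push('still pending');
// Resolution delivers the queued handlers, in registration order.
resolveLater('done');
```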
Among the modern languages, Python was perhaps the first to come up with something on the lines of E’s promises with the Twisted library. Coming out in 2002, it had a concept of Deferred objects, which were used to receive the result of an operation not yet completed. They were just like normal objects and could be passed along, but they didn’t have a value. They supported a callback which would get called once the result of the operation was complete. @@ -61,9 +59,8 @@ In some languages however, there is a subtle difference between what is a Future In other words, a future is a read-only window to a value written into a promise. You can get the Future associated with a Promise by calling the future method on it, but conversion in the other direction is not possible. Another way to look at it would be, if you Promise something, you are responsible for keeping it, but if someone else makes a Promise to you, you expect them to honor it in Future. - More technically, in Scala, “SIP-14 – Futures and Promises” defines them as follows: -A future is as a placeholder object for a result that does not yet exist. +A future is a placeholder object for a result that does not yet exist. A promise is a writable, single-assignment container, which completes a future. Promises can complete the future with a result to indicate success, or with an exception to indicate failure. An important difference between Scala and Java (6) futures is that Scala futures were asynchronous in nature. Java's future, at least till Java 6, were blocking. Java 7 introduced the Futures as the asynchronous construct which are more familiar in the distributed computing world. @@ -92,12 +89,10 @@ In Java executor is an object which executes the Runnable tasks. Executors provi Similar to Executor, there is an ExecutionContext as part of scala.concurrent. The basic intent behind it is same as an Executor : it is responsible for executing computations. How it does it can is opaque to the caller. 
It can create a new thread, use a pool of threads or run it on the same thread as the caller, although the last option is generally not recommended. Scala.concurrent package comes with an implementation of ExecutionContext by default, which is a global static thread pool. -ExecutionContext.global is an execution context backed by a ForkJoinPool. ForkJoin is a thread pool implementation designed to take advantage of a multiprocessor environment. What makes fork join unique is that it implements a type of work-stealing algorithm : idle threads pick up work from still busy threads. ForkJoinPool manages a small number of threads, usually limited to the number of processor cores available. It is possible to increase the number of threads, if all of the available threads are busy and wrapped inside a blocking call, although such situation would typically come with a bad system design. ForkJoin framework work to avoid pool-induced deadlock and minimize the amount of time spent switching between the threads. - +ExecutionContext.global is an execution context backed by a ForkJoinPool. ForkJoin is a thread pool implementation designed to take advantage of a multiprocessor environment. What makes fork join unique is that it implements a type of work-stealing algorithm : idle threads pick up work from still busy threads. ForkJoinPool manages a small number of threads, usually limited to the number of processor cores available. It is possible to increase the number of threads, if all of the available threads are busy and wrapped inside a blocking call, although such situation would be highly undesirable for most of the systems. ForkJoin framework work to avoid pool-induced deadlock and minimize the amount of time spent switching between the threads. -Futures are generally a good way to reason about asynchronous code. A good way to call a web service, add a block of code to do something when you get back the response, and move on without waiting for the response. 
They’re also a good framework to reason about concurrency as they can be executed in parallel, waited on, are composable, immutable once written and most importantly, are non blocking. in Scala, futures (and promises) are based on ExecutionContext. -Using ExecutionContext gives users flexibility to implement their own ExecutionContext if they need a specific behavior, like blocking futures. The default ForkJoin pool works well in most of the scenarios. +In Scala, Futures are generally a good framework to reason about concurrency as they can be executed in parallel, waited on, are composable, immutable once written and most importantly, are non blocking (although it is possible to have blocking futures, like Java 6). In Scala, futures (and promises) are based on ExecutionContext. Using ExecutionContext gives users flexibility to implement their own ExecutionContext if they need a specific behavior, like blocking futures. The default ForkJoin pool works well in most of the scenarios. Scala futures api expects an ExecutionContext to be passed along. This parameter is implicit, and usually ExecutionContext.global. An example : @@ -309,6 +304,7 @@ Modern promise specifications, like one in Javascript comes with methods which h In scala, futures have a onSuccess method which acts as a callback to when the future is complete. This callback itself can be used to sequentially chain futures together. But this results in bulkier code. Fortunately, Scala api comes with combinators which allow for easier combination of results from futures. Examples of combinators are map, flatmap, filter, withFilter. + # Handling Errors In a synchronous programming model, the most logical way of handling errors is a try...catch block. 
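To see why a plain try...catch falls short for asynchronous code, consider a small sketch (the `failLater` helper is invented for illustration): by the time the callback delivers the error, the try...catch block has long since exited.

```javascript
// Sketch: why try...catch cannot observe asynchronous failures.
// failLater is a hypothetical async operation that reports an
// error via a callback on a later tick.
function failLater(callback) {
  setTimeout(function () {
    callback(new Error("boom"));
  }, 10);
}

let caughtSynchronously = false;

try {
  failLater(function (err) {
    // Runs long after the try...catch below has exited.
    console.log("async error arrived:", err.message);
  });
} catch (e) {
  // Never reached: the error is delivered asynchronously.
  caughtSynchronously = true;
}
```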
-- cgit v1.2.3 From f0e4ab32d559e198cd439fc8b3fc80159f191019 Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Fri, 16 Dec 2016 19:59:01 -0500 Subject: added more details --- chapter/2/futures.md | 91 ++++++++++++++++++++++++++++++++++++++++++++---- chapter/2/images/p-1.svg | 4 +++ chapter/2/images/p-2.svg | 4 +++ 3 files changed, 93 insertions(+), 6 deletions(-) create mode 100644 chapter/2/images/p-1.svg create mode 100644 chapter/2/images/p-2.svg (limited to 'chapter/2') diff --git a/chapter/2/futures.md b/chapter/2/futures.md index 5f8bd74..842da29 100644 --- a/chapter/2/futures.md +++ b/chapter/2/futures.md @@ -290,13 +290,21 @@ Here, we create a Promise, and complete it later. In between we stack up a set o # Promise Pipelining One of the criticism of traditional RPC systems would be that they’re blocking. Imagine a scenario where you need to call an API ‘a’ and another API ‘b’, then aggregate the results of both the calls and use that result as a parameter to another API ‘c’. Now, the logical way to go about doing this would be to call A and B in parallel, then once both finish, aggregate the result and call C. Unfortunately, in a blocking system, the way to go about is call a, wait for it to finish, call b, wait, then aggregate and call c. This seems like a waste of time, but in absence of asynchronicity, it is impossible. Even with asynchronicity, it gets a little difficult to manage or scale up the system linearly. Fortunately, we have promises. + + +
+ timeline +
+
- timeline + timeline
Futures/Promises can be passed along, waited upon, or chained and joined together. These properties help make life easier for the programmers working with them, and they also reduce the latency associated with distributed computing. Promises enable dataflow concurrency, which is deterministic and easier to reason about.

-The history of promise pipelining can be traced back to the call-streams in Argus and channels in Joule. In Argus, Call streams are a mechanism for communication between distributed components. The communicating entities, a sender and a receiver are connected by a stream, and sender can make calls to receiver over it. Streams can be thought of as RPC, except that these allow callers to run in parallel with the receiver while processing the call. When making a call in Argus, the caller receives a promise for the result. In the paper on Promises by Liskov and Shrira, they mention that having integrated futures into call streams, next logical step would be to talk about stream composition. This means arranging streams into pipelines where output of one stream can be used as input of the next stream. They talk about composing streams using fork and coenter.
+The history of promise pipelining can be traced back to the call-streams in Argus. In Argus, call streams are a mechanism for communication between distributed components. The communicating entities, a sender and a receiver, are connected by a stream, and the sender can make calls to the receiver over it. Streams can be thought of as RPC, except that they allow callers to run in parallel with the receiver while processing the call. When making a call in Argus, the caller receives a promise for the result. In their paper on Promises, Liskov and Shrira mention that, having integrated futures into call streams, the next logical step is stream composition: arranging streams into pipelines where the output of one stream can be used as the input of the next stream.
They talk about composing streams using fork and coenter. + +Channels in Joule were a similar idea, providing a channel which connects an acceptor and a distributor. Joule was a direct ancestor to E language. Modern promise specifications, like one in Javascript comes with methods which help working with promise pipelining easier. In javascript, a Promises.all method is provided, which takes in an iterable over Promises, and returns a new Promise which gets resolved when all the promises in the iterable get resolved. There’s also a race method, which returns a promise which is resolved when the first promise in the iterable gets resolved. @@ -305,6 +313,8 @@ Modern promise specifications, like one in Javascript comes with methods which h In scala, futures have a onSuccess method which acts as a callback to when the future is complete. This callback itself can be used to sequentially chain futures together. But this results in bulkier code. Fortunately, Scala api comes with combinators which allow for easier combination of results from futures. Examples of combinators are map, flatmap, filter, withFilter. + + # Handling Errors In a synchronous programming model, the most logical way of handling errors is a try...catch block. @@ -339,8 +349,7 @@ try{ ``` -In javascript world, some patterns emerged, most noticeably the error-first callback style, also adopted by Node. Although this works, but it is not very composable, and eventually takes us back to what is called callback hell. Fortunately, Promises come to the rescue. - +In javascript world, some patterns emerged, most noticeably the error-first callback style ( which we've seen before, also adopted by Node). Although this works, but it is not very composable, and eventually takes us back to what is called callback hell. Fortunately, Promises come to the rescue. 
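As a rough sketch of the error-first callback convention (the `readConfig` operation is invented for illustration), the callback's first argument is reserved for an error, which is null on success:

```javascript
// Sketch of the Node-style error-first callback convention.
// readConfig is a hypothetical asynchronous operation.
function readConfig(name, callback) {
  setTimeout(function () {
    if (name === "") {
      callback(new Error("empty name"), null); // error comes first
    } else {
      callback(null, { name: name });          // null error on success
    }
  }, 0);
}

readConfig("app", function (err, config) {
  if (err) {
    console.error("failed:", err.message);
    return;
  }
  console.log("loaded:", config.name);
});
```

Every caller must repeat this `if (err)` dance, and nesting such calls is what produces callback hell.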
Although most of the earlier papers did not talk about error handling, the Promises paper by Liskov and Shrira did acknowledge the possibility of failure in a distributed environment. They talked about propagation of exceptions from the called procedure to the caller and also about call streams, and how broken streams could be handled. E language also talked about broken promises and setting a promise to the exception of broken references. @@ -356,6 +365,47 @@ f onComplete { } ``` +In Scala, the Try type represents a computation that may either result in an exception, or return a successfully computed value. For example, Try[Int] represents a computation which can either result in Int if it's successful, or return a Throwable if something is wrong. + +```scala + +val a: Int = 100 +val b: Int = 10 +def divide: Try[Int] = Try(a/b) + +divide match { + case Success(v) => + println(v) + case Failure(e) => + println(e) +} + +``` + +** This prints 10 , while ** + +```scala + +val a: Int = 100 +val b: Int = 0 +def divide: Try[Int] = Try(a/b) + +divide match { + case Success(v) => + println(v) + case Failure(e) => + println(e) +} + +``` + +** This prints java.lang.ArithmeticException: / by zero ** + +Try type can be pipelined, allowing for catching exceptions and recovering from them along the way. + + + + #### In Javascript ```javascript @@ -425,7 +475,34 @@ function check(data) { ## Twitter Finagle -Finagle is a protocol-agnostic, asynchronous RPC system for the JVM that makes it easy to build robust clients and servers in Java, Scala, or any JVM-hosted language. It uses idea of Futures to encapsulate concurrent tasks and are analogous to threads, but even more lightweight. +Finagle is a protocol-agnostic, asynchronous RPC system for the JVM that makes it easy to build robust clients and servers in Java, Scala, or any JVM-hosted language. It uses Futures to encapsulate concurrent tasks. 
Finagle +introduces two other abstractions built on top of Futures to reason about distributed software : + +- ** Services ** are asynchronous functions which represent system boundaries. + +- ** Filters ** are application-independent blocks of logic like handling timeouts and authentication. + +In Finagle, operations describe what needs to be done, while the actual execution is left to be handled by the runtime. The runtime comes with a robust implementation of connection pooling, failure detection and recovery and load balancers. + +Example of a Service: + + +```scala + +val service = new Service[HttpRequest, HttpResponse] { + def apply(request: HttpRequest) = + Future(new DefaultHttpResponse(HTTP_1_1, OK)) +} + +``` +A timeout filter can be implemented as : + +```scala + +def timeoutFilter(d: Duration) = + { (req, service) => service(req).within(d) } + +``` ## Correctables @@ -435,6 +512,7 @@ Correctables were introduced by Rachid Guerraoui, Matej Pavlovic, and Dragos-Adr timeline
+ ## Folly Futures Folly is a library by Facebook for asynchronous C++ inspired by the implementation of Futures by Twitter for Scala. It builds upon the Futures in the C++11 Standard. Like Scala’s futures, they also allow for implementing a custom executor which provides different ways of running a Future (thread pool, event loop etc). @@ -442,6 +520,7 @@ Folly is a library by Facebook for asynchronous C++ inspired by the implementati ## NodeJS Fiber Fibers provide coroutine support for v8 and node. Applications can use Fibers to allow users to write code without using a ton of callbacks, without sacrificing the performance benefits of asynchronous IO. Think of fibers as light-weight threads for NodeJs where the scheduling is in the hands of the programmer. The node-fibers library doesn’t recommend using raw API and code together without any abstractions, and provides a Futures implementation which is ‘fiber-aware’. -## References + +# References {% bibliography --file futures %} diff --git a/chapter/2/images/p-1.svg b/chapter/2/images/p-1.svg new file mode 100644 index 0000000..87e180b --- /dev/null +++ b/chapter/2/images/p-1.svg @@ -0,0 +1,4 @@ + + + + diff --git a/chapter/2/images/p-2.svg b/chapter/2/images/p-2.svg new file mode 100644 index 0000000..f5c6b05 --- /dev/null +++ b/chapter/2/images/p-2.svg @@ -0,0 +1,4 @@ + + + + -- cgit v1.2.3 From 9583f55e47e787bda753f0b310d0fc48e3cfab06 Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Fri, 16 Dec 2016 21:04:49 -0500 Subject: added more details --- chapter/2/futures.md | 28 ++++++++++++++++++++-------- 1 file changed, 20 insertions(+), 8 deletions(-) (limited to 'chapter/2') diff --git a/chapter/2/futures.md b/chapter/2/futures.md index 842da29..20eb75e 100644 --- a/chapter/2/futures.md +++ b/chapter/2/futures.md @@ -302,13 +302,29 @@ One of the criticism of traditional RPC systems would be that they’re blocking Futures/Promises can be passed along, waited upon, or chained and joined together. 
These properties help make life easier for the programmers working with them, and they also reduce the latency associated with distributed computing. Promises enable dataflow concurrency, which is deterministic and easier to reason about.

-The history of promise pipelining can be traced back to the call-streams in Argus. In Argus, Call streams are a mechanism for communication between distributed components. The communicating entities, a sender and a receiver are connected by a stream, and sender can make calls to receiver over it. Streams can be thought of as RPC, except that these allow callers to run in parallel with the receiver while processing the call. When making a call in Argus, the caller receives a promise for the result. In the paper on Promises by Liskov and Shrira, they mention that having integrated futures into call streams, next logical step would be to talk about stream composition. This means arranging streams into pipelines where output of one stream can be used as input of the next stream. They talk about composing streams using fork and coenter.
+The history of promise pipelining can be traced back to the call-streams in Argus. In Argus, call streams are a mechanism for communication between distributed components. The communicating entities, a sender and a receiver, are connected by a stream, and the sender can make calls to the receiver over it. Streams can be thought of as RPC, except that they allow callers to run in parallel with the receiver while processing the call. When making a call in Argus, the caller receives a promise for the result. In their paper on Promises, Liskov and Shrira mention that, having integrated Promises into call streams, the next logical step is stream composition: arranging streams into pipelines where the output of one stream can be used as the input of the next stream. They talk about composing streams using fork and coenter.
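The same composition idea can be sketched with modern promises: each stage's output feeds the next stage. The stage functions and the `pipeline` helper below are invented for illustration.

```javascript
// Sketch of pipeline composition with promises: the output of each
// stage becomes the input of the next. Stage names are invented.
const double = (x) => Promise.resolve(x * 2);
const increment = (x) => Promise.resolve(x + 1);

// Compose async stages left-to-right into one pipeline function.
const pipeline = (...stages) => (input) =>
  stages.reduce((acc, stage) => acc.then(stage), Promise.resolve(input));

const run = pipeline(double, increment);
run(20).then((result) => console.log(result)); // 41
```

Reordering the stages reorders the pipeline: `pipeline(increment, double)(20)` yields 42 instead.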
Channels in Joule were a similar idea, providing a channel which connects an acceptor and a distributor. Joule was a direct ancestor to the E language.

-Modern promise specifications, like one in Javascript comes with methods which help working with promise pipelining easier. In javascript, a Promises.all method is provided, which takes in an iterable over Promises, and returns a new Promise which gets resolved when all the promises in the iterable get resolved. There's also a race method, which returns a promise which is resolved when the first promise in the iterable gets resolved.
+Modern promise specifications, like the one in Javascript, come with methods which make working with promise pipelining easier. In Javascript, a Promise.all method is provided, which takes in an iterable and returns a new Promise which gets resolved when all the promises in the iterable have resolved. There's also a race method, which returns a promise which is resolved as soon as the first promise in the iterable resolves.

+```javascript
+
+var p1 = Promise.resolve(1);
+var p2 = new Promise(function (resolve, reject) {
+  setTimeout(resolve, 100, 2);
+});
+
+Promise.all([p1, p2]).then(values => {
+  console.log(values); // [1, 2]
+});
+
+Promise.race([p1, p2]).then(function(value) {
+  console.log(value); // 1
+});
+
+```

In scala, futures have an onSuccess method which acts as a callback for when the future completes. This callback can itself be used to chain futures together sequentially, but this results in bulkier code. Fortunately, the Scala API comes with combinators which allow for easier combination of results from futures. Examples of combinators are map, flatMap, filter, and withFilter.
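A rough JavaScript analogue of these combinators is the `then` method itself: returning a plain value from `then` behaves like map, while returning another promise behaves like flatMap, since it is flattened automatically. The `fetchUser` and `fetchTeam` stubs below are invented for illustration.

```javascript
// Sketch: .then plays the role of Scala's map/flatMap combinators.
// The fetch* functions are hypothetical stand-ins for real lookups.
const fetchUser = (id) => Promise.resolve({ id: id, teamId: 7 });
const fetchTeam = (teamId) => Promise.resolve({ id: teamId, name: "core" });

fetchUser(1)
  .then((user) => user.teamId)         // like map: plain value
  .then((teamId) => fetchTeam(teamId)) // like flatMap: promise is flattened
  .then((team) => console.log(team.name)); // "core"
```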
@@ -375,15 +391,13 @@ def divide: Try[Int] = Try(a/b) divide match { case Success(v) => - println(v) + println(v) // 10 case Failure(e) => println(e) } ``` -** This prints 10 , while ** - ```scala val a: Int = 100 @@ -394,13 +408,11 @@ divide match { case Success(v) => println(v) case Failure(e) => - println(e) + println(e) // java.lang.ArithmeticException: / by zero } ``` -** This prints java.lang.ArithmeticException: / by zero ** - Try type can be pipelined, allowing for catching exceptions and recovering from them along the way. -- cgit v1.2.3 From 92d4bc3799dc7b8ced25742485e232b6f14af3a1 Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Fri, 16 Dec 2016 22:55:58 -0500 Subject: added more details : final --- chapter/2/futures.md | 107 ++++++++++++++++++++++++++++++++++++--------------- 1 file changed, 75 insertions(+), 32 deletions(-) (limited to 'chapter/2') diff --git a/chapter/2/futures.md b/chapter/2/futures.md index 20eb75e..ff32d19 100644 --- a/chapter/2/futures.md +++ b/chapter/2/futures.md @@ -288,23 +288,38 @@ Here, we create a Promise, and complete it later. In between we stack up a set o # Promise Pipelining -One of the criticism of traditional RPC systems would be that they’re blocking. Imagine a scenario where you need to call an API ‘a’ and another API ‘b’, then aggregate the results of both the calls and use that result as a parameter to another API ‘c’. Now, the logical way to go about doing this would be to call A and B in parallel, then once both finish, aggregate the result and call C. Unfortunately, in a blocking system, the way to go about is call a, wait for it to finish, call b, wait, then aggregate and call c. This seems like a waste of time, but in absence of asynchronicity, it is impossible. Even with asynchronicity, it gets a little difficult to manage or scale up the system linearly. Fortunately, we have promises. +One of the criticism of traditional RPC systems would be that they’re blocking. 
Imagine a scenario where you need to call an API 'a' and another API 'b', then aggregate the results of both calls and use that result as a parameter to a third API 'c'. The logical way to go about this is to call 'a' and 'b' in parallel, then, once both finish, aggregate the results and call 'c'. Unfortunately, in a blocking system, the only option is to call 'a', wait for it to finish, call 'b', wait again, and only then aggregate and call 'c'. This is a waste of time: the calls to 'a' and 'b' are independent, but without asynchronicity they cannot overlap. Even with asynchronicity, managing such call graphs by hand is difficult, and the system becomes hard to scale linearly. Fortunately, we have promises.
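The scenario above can be sketched in JavaScript, where the `callA`, `callB` and `callC` stubs are hypothetical stand-ins for real API calls: 'a' and 'b' are started in parallel, and 'c' is called once both have finished.

```javascript
// Sketch of the scenario above: start a and b in parallel,
// aggregate both results, then call c. The call* stubs are invented.
const callA = () => Promise.resolve(2);
const callB = () => Promise.resolve(3);
const callC = (aggregate) => Promise.resolve(aggregate * 10);

Promise.all([callA(), callB()])           // a and b run concurrently
  .then(([a, b]) => callC(a + b))         // aggregate, then call c
  .then((result) => console.log(result)); // 50
```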
- timeline + timeline
- timeline + timeline
Futures/Promises can be passed along, waited upon, or chained and joined together. These properties help make life easier for the programmers working with them, and they also reduce the latency associated with distributed computing. Promises enable dataflow concurrency, which is deterministic and easier to reason about.

The history of promise pipelining can be traced back to the call-streams in Argus. In Argus, call streams are a mechanism for communication between distributed components. The communicating entities, a sender and a receiver, are connected by a stream, and the sender can make calls to the receiver over it. Streams can be thought of as RPC, except that they allow callers to run in parallel with the receiver while processing the call. When making a call in Argus, the caller receives a promise for the result. In their paper on Promises, Liskov and Shrira mention that, having integrated Promises into call streams, the next logical step is stream composition: arranging streams into pipelines where the output of one stream can be used as the input of the next stream. They talk about composing streams using fork and coenter.

-Channels in Joule were a similar idea, providing a channel which connects an acceptor and a distributor. Joule was a direct ancestor to E language.
+Channels in Joule were a similar idea, providing a channel which connects an acceptor and a distributor. Joule was a direct ancestor to the E language, which discussed this idea in more detail.
+
+```
+
+t3 := (x <- a()) <- c(y <- b())
+
+t1 := x <- a()
+t2 := y <- b()
+t3 := t1 <- c(t2)
+
+```
+
+Without pipelining in E, this call would require three round trips: first to send a() to x, then b() to y, and finally c to the result t1 with t2 as an argument. With pipelining, the later messages can be sent with the promises resulting from earlier messages as arguments. This allows all the messages to be sent together, thereby saving the costly round trips.
This is assuming x and y are on the same remote machine; otherwise, we can still evaluate t1 and t2 in parallel.
+
+Notice that this pipelining mechanism is different from asynchronous message passing: with asynchronous message passing, even if t1 and t2 are evaluated in parallel, resolving t3 still requires waiting for t1 and t2 to resolve and then sending another call to the remote machine.

Modern promise specifications, like the one in Javascript, come with methods which make working with promise pipelining easier. In Javascript, a Promise.all method is provided, which takes in an iterable and returns a new Promise which gets resolved when all the promises in the iterable have resolved. There's also a race method, which returns a promise which is resolved as soon as the first promise in the iterable resolves.

@@ -326,14 +341,12 @@ Promise.race([p1, p2]).then(function(value) {

```

-In scala, futures have a onSuccess method which acts as a callback to when the future is complete. This callback itself can be used to sequentially chain futures together. But this results in bulkier code. Fortunately, Scala api comes with combinators which allow for easier combination of results from futures. Examples of combinators are map, flatmap, filter, withFilter.
-
-
+In Scala, futures have an onSuccess method which acts as a callback for when the future completes. This callback can itself be used to chain futures together sequentially, but this results in bulkier code. Fortunately, the Scala API comes with combinators which allow for easier combination of results from futures. Examples of combinators are map, flatMap, filter, and withFilter.

# Handling Errors

-In a synchronous programming model, the most logical way of handling errors is a try...catch block.
+If the world ran without errors we would rejoice in unison, but that is not the case in the programming world either. When you run a program, you either receive the expected output or an error.
An error can be defined as a wrong output or an exception. In a synchronous programming model, the most logical way of handling errors is a try...catch block.

```javascript

@@ -365,14 +378,25 @@ try{

```

-In javascript world, some patterns emerged, most noticeably the error-first callback style ( which we've seen before, also adopted by Node). Although this works, but it is not very composable, and eventually takes us back to what is called callback hell. Fortunately, Promises come to the rescue.

-Although most of the earlier papers did not talk about error handling, the Promises paper by Liskov and Shrira did acknowledge the possibility of failure in a distributed environment. They talked about propagation of exceptions from the called procedure to the caller and also about call streams, and how broken streams could be handled. E language also talked about broken promises and setting a promise to the exception of broken references.

-In modern languages, Promises generally come with two callbacks. One to handle the success case and other to handle the failure.
+Although most of the earlier papers did not talk about error handling, the Promises paper by Liskov and Shrira did acknowledge the possibility of failure in a distributed environment. To put this in Argus's perspective, the 'claim' operation waits until the promise is ready. It then returns normally if the call terminated normally, and otherwise it signals the appropriate 'exception', e.g.,
+```
+y: real := pt$claim(x)
+   except when foo: ...
+	  when unavailable(s: string): ...
+	  when failure(s: string): ...
+	  end
+
+```
+Here x is a promise object of type pt; the form pt$claim illustrates the way Argus identifies an operation of a type by concatenating the type name with the operation name. When there are communication problems, RPCs in Argus terminate either with the 'unavailable' exception or the 'failure' exception.
+'Unavailable' - means that the problem is temporary, e.g., communication is impossible right now.
+'Failure' - means that the problem is permanent, e.g., the handler’s guardian does not exist. +Thus stream calls (and sends) whose replies are lost because of broken streams will terminate with one of these exceptions. Both exceptions have a string argument that explains the reason for the failure, e.g., future(“handler does not exist”), or unavailable(“cannot communicate”). Since any call can fail, every handler can raise the exceptions failure and unavailable. In this paper they also talked about propagation of exceptions from the called procedure to the caller. In paper about E language they talk about broken promises and setting a promise to the exception of broken references. + +In modern languages like Scala, Promises generally come with two callbacks. One to handle the success case and other to handle the failure. e.g. -#### In Scala ```scala f onComplete { @@ -385,21 +409,6 @@ In Scala, the Try type represents a computation that may either result in an exc ```scala -val a: Int = 100 -val b: Int = 10 -def divide: Try[Int] = Try(a/b) - -divide match { - case Success(v) => - println(v) // 10 - case Failure(e) => - println(e) -} - -``` - -```scala - val a: Int = 100 val b: Int = 0 def divide: Try[Int] = Try(a/b) @@ -415,9 +424,6 @@ divide match { Try type can be pipelined, allowing for catching exceptions and recovering from them along the way. - - - #### In Javascript ```javascript @@ -429,9 +435,46 @@ promise.then(function (data) { console.error(error); }); +``` +Scala futures exception handling: + +When asynchronous computations throw unhandled exceptions, futures associated with those computations fail. Failed futures store an instance of Throwable instead of the result value. Futures provide the onFailure callback method, which accepts a PartialFunction to be applied to a Throwable. 
TimeoutException, scala.runtime.NonLocalReturnControl[] and ExecutionException exceptions are treated differently.
+
+Scala promises exception handling:
+
+When failing a promise with an exception, three subtypes of Throwable are handled specially. If the Throwable used to break the promise is a scala.runtime.NonLocalReturnControl, then the promise is completed with the corresponding value. If the Throwable used to break the promise is an instance of Error, InterruptedException, or scala.util.control.ControlThrowable, the Throwable is wrapped as the cause of a new ExecutionException which, in turn, fails the promise.
+
+
+To handle errors with asynchronous methods and callbacks, the error-first callback style (which we've seen before, also adopted by Node) is the most common convention. Although this works, it is not very composable, and eventually takes us back to what is called callback hell. Fortunately, Promises allow asynchronous code to apply structured error handling. A Promise's then method takes in two callbacks: an onFulfilled, to handle when the promise is resolved successfully, and an onRejected, to handle when the promise is rejected.

+```javascript
+
+var p = new Promise(function(resolve, reject){
+  resolve(100);
+});
+
+p.then(function(data){
+  console.log(data); // 100
+},function(error){
+  console.error(error);
+});
+
+var q = new Promise(function(resolve, reject){
+  reject(new Error('Divide by zero'));
+});
+
+q.then(function(data){
+  console.log(data);
+},function(error){
+  console.error(error); // Error: Divide by zero
+});
+
+```

+Promises also have a catch method, which works the same way as the onRejected callback, but also helps deal with errors in a composition.
Exceptions in promises behave the same way as they do in a synchronous block of code : they jump to the nearest exception handler. ```javascript @@ -464,7 +507,6 @@ function check(data) { The same behavior can be written using catch block. - ```javascript work("") @@ -481,6 +523,7 @@ function check(data) { ``` + # Futures and Promises in Action -- cgit v1.2.3 From 0f59ea090aef37374634b7400a6ebd73ec9782a8 Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Fri, 16 Dec 2016 22:56:55 -0500 Subject: added more details: Image --- chapter/2/images/p-1.png | Bin 0 -> 39600 bytes chapter/2/images/p-2.png | Bin 0 -> 40084 bytes 2 files changed, 0 insertions(+), 0 deletions(-) create mode 100644 chapter/2/images/p-1.png create mode 100644 chapter/2/images/p-2.png (limited to 'chapter/2') diff --git a/chapter/2/images/p-1.png b/chapter/2/images/p-1.png new file mode 100644 index 0000000..7061fe3 Binary files /dev/null and b/chapter/2/images/p-1.png differ diff --git a/chapter/2/images/p-2.png b/chapter/2/images/p-2.png new file mode 100644 index 0000000..ccc5d09 Binary files /dev/null and b/chapter/2/images/p-2.png differ -- cgit v1.2.3 From 0868f49162590810be1876aae04c8d08e08db442 Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Fri, 16 Dec 2016 23:10:33 -0500 Subject: added more details: Image --- chapter/2/images/1.png | Bin 14176 -> 41235 bytes 1 file changed, 0 insertions(+), 0 deletions(-) (limited to 'chapter/2') diff --git a/chapter/2/images/1.png b/chapter/2/images/1.png index 1d98f19..569c326 100644 Binary files a/chapter/2/images/1.png and b/chapter/2/images/1.png differ -- cgit v1.2.3 From f8c4ee8046a830f62d27d1d591bf1410a71f0164 Mon Sep 17 00:00:00 2001 From: Kisalaya Date: Fri, 16 Dec 2016 23:25:24 -0500 Subject: added more details --- chapter/2/futures.md | 28 ++++++++++++++++++++++++++-- 1 file changed, 26 insertions(+), 2 deletions(-) (limited to 'chapter/2') diff --git a/chapter/2/futures.md b/chapter/2/futures.md index ff32d19..0075773 100644 --- 
a/chapter/2/futures.md
+++ b/chapter/2/futures.md
@@ -266,11 +266,35 @@ Alice also allows for lazy evaluation of expressions. Expressions preceded with

We define Implicit promises as ones where we don't have to manually trigger the computation, vs Explicit promises, where we have to trigger the resolution of the future manually, either by calling a start function or by requiring the value. This distinction can be understood in terms of what triggers the calculation: with Implicit promises, the creation of a promise also triggers the computation, while with Explicit futures, one needs to trigger the resolution of a promise. This trigger can in turn be explicit, like calling a start method, or implicit, like lazy evaluation, where the first use of a promise's value triggers its evaluation.

-The idea for explicit futures were introduced in the Baker and Hewitt paper. They're a little trickier to implement, and require some support from the underlying language, and as such they aren't that common. The Baker and Hewitt paper talked about using futures as placeholders for arguments to a function, which get evaluated in parallel, but when they're needed. Also, lazy futures in Alice ML have a similar explicit invocation mechanism, the first thread touching a future triggers its evaluation.
+The idea of explicit futures was introduced in the Baker and Hewitt paper. They're a little trickier to implement, and require some support from the underlying language, so they aren't that common. The Baker and Hewitt paper talked about using futures as placeholders for arguments to a function, which get evaluated in parallel, but only when they're needed. MultiLisp also had a mechanism to delay the evaluation of a future until the time its value is first used, using the defer construct. Lazy futures in Alice ML have a similar explicit invocation mechanism: the first thread touching a future triggers its evaluation.
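As a rough sketch of this kind of explicitly triggered evaluation, a lazy future can be modeled in JavaScript as a memoized thunk (the `lazy` helper below is invented for illustration): nothing runs until the value is first demanded.

```javascript
// Sketch: an explicit/lazy future as a memoized thunk. The
// computation is deferred until force() is first called.
function lazy(compute) {
  let started = false;
  let value;
  return {
    force: function () {
      if (!started) {     // the first "touch" triggers evaluation
        started = true;
        value = compute();
      }
      return value;       // later touches reuse the memoized value
    },
  };
}

let evaluated = false;
const answer = lazy(function () {
  evaluated = true;
  return 6 * 7;
});

console.log(evaluated);      // false: nothing computed yet
console.log(answer.force()); // 42: first use triggers the computation
```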
+
+An example of an explicit (lazy) future, from Alice ML:
+
+```
+fun enum n = lazy n :: enum (n+1)
+
+```
+
+This example generates an infinite stream of integers; if it started computing as soon as it was created, it would compete for the system's resources. Implicit futures were introduced originally by Friedman and Wise in a paper in 1978. The ideas presented in that paper inspired the design of promises in MultiLisp. Futures are also implicit in Scala and Javascript, where they're supported as libraries on top of the core languages. Implicit futures can be implemented this way as they don't require support from the language itself. Alice ML's concurrent futures are also an example of implicit invocation.

-In Scala, although the futures are implicit, Promises can be used to have an explicit-like behavior. This is useful in a scenario where we need to stack up some computations and then resolve the Promise.
+For example:
+
+```scala
+
+val f = Future {
+  Http("http://api.fixer.io/latest?base=USD").asString
+}
+
+f onComplete {
+  case Success(response) => println(response.body)
+  case Failure(t) => println(t)
+}
+
+```
+
+This sends the HTTP call as soon as the Future is created. In Scala, although the futures are implicit, Promises can be used to get an explicit-like behavior. This is useful in a scenario where we need to stack up some computations and then resolve the Promise.

An Example :
-- cgit v1.2.3